DJI Mavic, Air and Mini Drones

Is it supposed to look like this... (Artifact banding on 48MP on Mini 3 Pro)

A traditional bayer sensor. Not a Quad Bayer.

Maybe you didn't know there have always been 4 photosites for every pixel, with an RGB filter over each set of 4 compact sensors. Traditionally this filter is a square divided into four subsquares: two adjacent green, with the other two red and blue. That's a Bayer filter.

That's how it's always worked. Camera sensors with 10MP have 40M photosensitive sites.

Some clever engineer realized that with TWO separate sites collecting green intensity near each other, the difference can be used, with the red and blue intensities, to estimate, with error, the RGB values for each of the 4 photosites. By arranging the 4 filters with the two green diagonal instead of adjacent, the error is reduced.

Hence the introduction of the quad-bayer filter, and claimed higher resolution images. The semiconductor industry didn't have a 4x leap in density all of a sudden.

If you want the highest quality, sharp, color-correct images, do not shoot in a quad bayer mode.
I don’t think you have that right. The resulting image will have the same number of pixels as the sensor; a 2x2 block of different-colored pixels is not used to make one pixel of the resulting image, even on a traditional bayer.

[image attachment]
 
That's a quad bayer pattern.
No, that’s incorrect. That is a traditional bayer. A quad bayer has 4 pixels under each color filter. A traditional bayer has only one. It seems you may have the technologies mixed up.

For 12MP photos on a quad bayer, the four pixels under each color filter are combined to increase light sensitivity and noise handling for low light photos. In essence the 4 pixels work as one pixel and the debayer process is the same as a traditional bayer with larger pixels.
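The binning arithmetic described above can be sketched numerically. This is a toy model with made-up numbers, assuming simple additive noise on each photosite, just to show why averaging a 2x2 block helps in low light:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-color readout: in a quad bayer, each 2x2 block sits under
# one color filter, so its four values measure the same color.
signal = 100.0                      # true light level (arbitrary units)
noise_sd = 10.0                     # per-photosite noise (assumed)
sensor = signal + rng.normal(0.0, noise_sd, size=(400, 600))

# 12MP-style binning: average each 2x2 block into one output pixel.
binned = sensor.reshape(200, 2, 300, 2).mean(axis=(1, 3))

# Averaging 4 independent samples cuts the noise standard deviation
# by a factor of 2 (variance by 4x) at the cost of 1/4 the resolution.
print(sensor.std())   # ~10
print(binned.std())   # ~5
```

That halved noise standard deviation is the "4X noise reduction" (in variance terms) that makes the quarter-resolution mode attractive in low light.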

In 48MP mode it doesn’t combine the 4 pixels under each filter; it uses all 48M pixels to create a 48MP photo.

[image attachment]
Here’s what Sony Semiconductor says about quad bayer technology.


In short, the small pixels of a 48MP sensor at such a small sensor size aren’t sensitive enough to produce good low light photos at 48MP, so in low light use 12MP mode to get better results. However, if there is plenty of light, use 48MP mode to get higher resolution and more detailed photos.
 
In your pictures above, the photosite density is the same. The traditional bayer will produce a true 48MP image, each pixel composed of a 2x2 square of sites: two with green filters, one red, one blue. The data from these FOUR photosites creates the data for ONE pixel. With a small sensor overall, the images are going to be noisy with current technology.

Increasing resolution on a sensor without increasing the size means smaller photosites, which are noisier, killing low light performance, shadows, etc. To keep pushing resolution, some clever engineer(s) cooked up the quad bayer. This filter arrangement allows for 16 smaller photosites to represent a single pixel in a quarter resolution grouping of the sites. At this resolution you get a true image, good dynamic range, and low light performance due to the quad bayer filter arrangement.

The 48MP image that results from a quad bayer has intensity errors because of the distance between the actual photosite where a color was sensed, and the position in the pixel where it is used. Some math can reduce these errors by looking at surrounding intensities, but the old adage is still true: You can't create information that's not there. As your Array Convert diagram shows, surrounding data is used to interpolate the needed data, which contains errors. Depending on the situation in the image, the errors can be unrecoverable.

Compare a 48MP image from a true Bayer filter to that from a quad bayer. Zoom in. There's no contest.
 
In no way do 4 pixels on the sensor get combined to make one pixel of the photo, not on any kind of bayer filter sensor. This isn’t how it works. If you would like to learn how it works, look up articles on demosaicing of bayer filter data. Both bayer and quad bayer use complicated algorithms to recreate the missing channels at each pixel location, but never are four different-colored pixels combined to make one pixel, since doing so would reduce the final image to 1/4 the original resolution.

Quad bayer just uses a different filter pattern and different algorithms to recreate the image. There are different bayer filter patterns used and quad bayer is just one example. It has its benefits and drawbacks just like anything but both quad bayer and traditional bayer interpolate 2 out of 3 colors at every pixel location and they have the same number of each colored pixel.
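The point that both layouts contain the same overall proportion of each color can be checked by tiling the two repeating units; a toy sketch with arbitrary color codes and sensor size:

```python
import numpy as np

# One repeating tile of each color filter array (0=R, 1=G, 2=B).
bayer_tile = np.array([[0, 1],
                       [1, 2]])
quad_tile = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [1, 1, 2, 2],
                      [1, 1, 2, 2]])

# Tile both out to the same toy sensor size: 48x48 sites.
bayer = np.tile(bayer_tile, (24, 24))
quad = np.tile(quad_tile, (12, 12))

# Both layouts hold identical per-color counts: 25% R, 50% G, 25% B.
for cfa in (bayer, quad):
    print(np.bincount(cfa.ravel()))  # [ 576 1152  576]
```

So the two patterns differ in arrangement, not in how many samples of each color exist; the debate below is about how that arrangement affects reconstruction.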
Compare a 48MP image from a true Bayer filter to that from a quad bayer. Zoom in. There's no contest.
I don’t know how you could do that since, to my knowledge, they don’t make traditional bayer filter sensors this small. All the images I have seen from the Mini 3 Pro look great unless you zoom in to like 0.04%, as I demonstrated above. On a sensor this small that’s really remarkable.

For the record, I don’t have a Mini 3 and I don’t really intend to get one at this point, so I don’t have any skin in this game. I just call it like I see it.
 
So what you are saying is that more resolution doesn't necessarily always mean more resolution with correct color, right?
 
Okey dokey!

Forget everything I said about this stuff, guys. I clearly don't know what I'm talking about 🤣

As my parting remark, there is a reason the quad bayer is called "quad", and it has nothing to do with the fact that the two green filters are arranged differently.

I leave it to the rest of you much smarter people to get to the truth.
 
Good luck and happy flying!
 
Anyone know what debayering method the Mini 3 camera uses?

From some of the artifacts on sharp, high-contrast lines etc., it looks suspiciously like bilinear interpolation.
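For reference, bilinear interpolation in demosaicing just averages the nearest measured neighbors of each missing color. Below is a minimal sketch for the green channel of an RGGB mosaic; whether the Mini 3 actually uses this method is pure speculation, and `bilinear_green` is an illustrative name, not a real API:

```python
import numpy as np

def bilinear_green(mosaic):
    """Estimate the green channel everywhere from an RGGB mosaic.

    Non-green sites average their horizontally/vertically adjacent
    green samples; green sites keep their measured value.
    """
    h, w = mosaic.shape
    # Green sites in an RGGB tile are (0,1) and (1,0) of each 2x2 block.
    gmask = np.zeros((h, w), dtype=bool)
    gmask[0::2, 1::2] = True
    gmask[1::2, 0::2] = True

    g = np.where(gmask, mosaic, 0.0)
    pad = np.pad(g, 1)
    cnt = np.pad(gmask.astype(float), 1)
    # Sum of the 4-neighborhood and the number of green sites in it.
    nb_sum = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
    nb_cnt = cnt[:-2, 1:-1] + cnt[2:, 1:-1] + cnt[1:-1, :-2] + cnt[1:-1, 2:]
    est = nb_sum / np.maximum(nb_cnt, 1)
    return np.where(gmask, mosaic, est)

# A flat gray scene reconstructs perfectly...
flat = np.full((6, 6), 50.0)
print(bilinear_green(flat))  # every entry is 50.0
```

...but on a sharp edge the four-neighbor average blends samples from both sides of the edge, which produces exactly the kind of zipper/banding artifacts along high-contrast lines being described here.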
 
Quad Bayer sensors are very interesting - they are basically an oversized version of a Bayer sensor with each R, G, or B color section covered by 4 photodiodes (instead of one), each with its own microlens and each able to behave as an individual pixel. They are also very well suited to video applications.

They have 3 'tricks':

1) Operate at 1/4 the resolution to improve the signal-to-noise ratio (binning). In simpler terms this is basically a 4X noise reduction. In the case of the M3P, that would be its 12MP mode.

2) They can read out every second row of the sensor slightly earlier than the previous one to improve highlight information, and combine it with the rest of the data to improve DR. This is how the M3P does its baked-in HDR shooting. Again this only works at 1/4 the resolution, so up to 12MP for the M3P. This is not a problem because 4K is around 8MP. This is what makes them attractive for video applications.
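The dual-readout idea can be sketched generically. This is not DJI's actual pipeline, just the standard trick of combining a long and a short exposure with made-up numbers:

```python
import numpy as np

# Toy scene radiance, including a highlight that clips the long exposure.
scene = np.array([10.0, 100.0, 400.0, 1200.0])

full_well = 1000.0          # sensor saturation level (assumed)
long_exp = np.minimum(scene, full_well)          # 1200 clips to 1000
short_exp = np.minimum(scene / 4.0, full_well)   # 1/4 exposure, no clipping

# Use long-exposure data where valid; where it clipped, fall back to the
# scaled short exposure, recovering the highlight detail.
clipped = long_exp >= full_well
hdr = np.where(clipped, short_exp * 4.0, long_exp)
# hdr recovers [10, 100, 400, 1200]
print(hdr)
```

The staggered row readout on a quad bayer gives you those two exposures in a single capture, which is why this only works at the quarter-resolution output.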

3) All pixels are used, and the data is approximated (re-mosaiced) back into traditional Bayer data but with less precision than a traditional Bayer sensor. So you end up with a 48MP image with a noticeable quality increase from the 12MP version, but a noticeable quality decrease compared to a traditional 48MP Bayer sensor.

When you re-mosaic a regular Bayer sensor, the color data is closer together and you get a much more precise result when the image is reconstructed. When you re-mosaic Quad Bayer sensor data, the process is less precise (the pixel color data is twice as far apart) so that is why you do not get an image that actually has 4X the color resolution even though the final result is still 48MP.
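One way to make the precision loss concrete: overlay the two repeating patterns and count how many sites already carry the color a Bayer grid expects at that position. The real remosaic algorithms are proprietary; this toy sketch only shows how much data must be synthesized:

```python
import numpy as np

# Color codes: 0=R, 1=G, 2=B.
quad_tile = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [1, 1, 2, 2],
                      [1, 1, 2, 2]])
bayer_tile = np.tile(np.array([[0, 1],
                               [1, 2]]), (2, 2))   # RGGB repeated to 4x4

# Sites where the quad layout already holds the color the Bayer grid
# expects need no work; everywhere else the remosaic must synthesize a
# value for a color that was never measured at that site.
match = quad_tile == bayer_tile
print(match.sum(), "of", match.size)  # 6 of 16
```

So in every 4x4 block, 10 of 16 samples on the target Bayer grid are colors that were never measured at that location and have to be interpolated, which is the approximation being described above.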

It's not entirely unlike upscaling resolution, which is an example more people might be familiar with (I'm not saying it's the same thing). When you play 1080P content on your 4K TV, you are still seeing ~8 million pixels, but that image was built from 2MP of data instead of 8MP. It looks good, but not as good as native 4K, because the input is still only 2MP. A quad Bayer sensor is still working with much less color resolution because each color patch shares 4 subpixels, even though it has 4X the individual pixels.

Another way to look at it is with an extreme example. Let's say you had a 48MP sensor with only 4 color patches, each patch containing 12 million pixels. You still have 48 million individual pixels, but when you go to reconstruct the image, you would have color data with a precision level far too low to be useful. So the sensor in the M3P is technically 48MP, but the color information it has to work with to reconstruct the image is diluted, which reduces image quality compared to a traditional 48MP Bayer sensor.

If you want to look at some real world examples, look at a 12MP and 48MP M3P photo at 100%, and then go compare a 12MP and ~45MP photo from a full frame camera (there are no 48MP FF cameras but it's close enough). Both comparisons are apples to apples as long as you're comparing the same sensor sizes to one another. You will see a night and day difference between a 4X resolution increase via Quad Bayer and a 4X resolution increase with a traditional Bayer.

None of this means the sensor in the M3P is bad, quite the opposite as Quad Bayer has some unique advantages for video applications, but strictly from a stills photography standpoint, a 48MP image from a traditional Bayer sensor is objectively better than that of a Quad Bayer sensor because the re-mosaic is done with a greater degree of precision. This is also why you do not see Quad Bayer sensors in the very best DSLRs and Mirrorless cameras designed primarily for stills photography, however some of the very best video-centric cameras (such as the Sony A7SIII) use Quad Bayer sensors.
 
I would remind you that a quad bayer sensor has the same number of each color pixel as a bayer sensor of the same resolution, so it’s just a different pattern. Every 2x2 block of pixels has 2 green, 1 blue, and 1 red pixel, same as a bayer sensor. Every single pixel on the sensor has pixels of the other two colors touching it, i.e. the pixel color data isn’t further apart.

Keep in mind on the photo below that the pattern repeats thousands of times and the very outer rows and columns often aren’t used as image pixels. This is total vs effective pixels.

[image attachment]
The only drawback quad bayer has when it comes to color resolution would be with blue and red pixels on edges. However, the small size of these pixels makes that a moot point. The best lenses in the world can only resolve edges as small as 0.017mm wide (and only at the very center), but the pixels on the Mini 3 sensor are 10 times smaller than that. In other words, your lens is the limiting factor, not the detail that can be produced with these sensors.
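The pixel-size comparison can be sanity-checked with rough numbers. The sensor width and pixel count below are approximate assumptions for a 1/1.3-inch 48MP sensor, not official DJI specifications:

```python
# All figures are rough assumptions for illustration, not official specs.
sensor_width_mm = 9.6     # approximate active width of a 1/1.3" sensor
pixels_across = 8064      # 48MP at 4:3 is roughly 8064 x 6048

pitch_mm = sensor_width_mm / pixels_across       # per-pixel pitch
print(round(pitch_mm * 1000, 2), "microns")      # 1.19 microns

# Ratio of the quoted best-case lens edge width (0.017 mm) to the pitch:
print(round(0.017 / pitch_mm, 1))                # ~14x
```

With these assumed numbers the pixel pitch comes out roughly an order of magnitude below the quoted 0.017 mm lens limit, in line with the point made above.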
 

To me, it sounds like the part you're missing is the precision difference in the re-mosaicing of the image - that is the reason the color resolution is not the same. Nobody is arguing that it's a different pattern, and I see what you are trying to show, but the nature of the pattern and the way the camera actually does the re-mosaicing is what makes the difference in the final result. That process is not done the way you show above with the outer pixels ignored; it is done with the greater distance between each pixel color, hence the loss of precision. I can see why you believe it works that way, but I think this is where your misunderstanding lies. Quad Bayer sensors are designed primarily to be used in their "low resolution" implementation, which is another reason why you see the "high resolution" option locked out on most smartphones and video-centric cameras that use them.

I'm also not sure what kind of proof you would be satisfied with - every reputable source and write-up on the topic echoes what I said above, and you can download sample images yourself and take a look with your own eyes; the difference is clearly noticeable even to someone with no photography knowledge. Furthermore, you can see the decisions manufacturers are making with the very best cameras available, and not a single one of them designed primarily for stills photography uses a Quad Bayer despite them being available. Why do you think that might be? They are, however, used in some of the better hybrid cameras with a specific focus on video, and you might also be interested to know that the user is locked out of the 48MP ability even at the very high end (Sony A7SIII).

Samsung has this technology as well and they call it Tetracell. I have one in my phone, which is a Galaxy S22 Ultra. It takes this process one step further with groups of 9 pixels and allows you to take either 12MP or 108MP photos. Again, the 108MP photos are better if the conditions allow for it, however they have nowhere even close to 9 times the actual resolution for the same reason a Quad Bayer does not. You can look up comparisons that clearly show this as well.

If you prefer to hear it from a different source, here are a couple of examples from reputable sources, and all of them echo what I have said regarding final image quality:

This is from an article that was posted and you "liked" earlier in this thread:

"In high resolution scenes, an attempt is made to re-interpret the Quad Bayer data into something closer to Bayer data, to give a full resolution image. This won't be as detailed as an actual Bayer image with the same pixel count, but it still gives much more detail than the other two modes, and hence is more detailed than the 1/4 resolution sensor needed to match the low light mode's performance."


"In a Quad Bayer filter, the pixels of different color are further apart, so demosaicing is less effective (despite what makers claim). So, you’re definitely not getting 4x the detail in 48MP mode than you do in 12MP. In fact, since the HDR and other image processing modes are disabled at 48MP, the 12MP photos sometimes come out with better detail (and much smaller file size, win-win)."



Here is a visual comparison using a Sony 12MP/48MP Quad Bayer. You can see that despite the image being 4X the pixels, it has nowhere near 4X the resolving power. It's improved, as expected, however it doesn't look much different than results you could achieve with some careful sharpening of the 12MP version:

[image attachment: sony1.jpeg]


I'm not trying to be argumentative, but I don't know what else anyone can show you that would satisfy you at this point. Quad Bayer is awesome technology, especially for video-oriented devices such as drones and video-centric mirrorless cameras. It has drawbacks when it comes to stills photography that can be clearly seen, and you will not find a single camera using a Quad Bayer over a traditional Bayer sensor where still image quality is a priority. Every single high-end mirrorless or DSLR camera from Sony, Nikon, and Canon uses a traditional Bayer sensor in its stills-focused models, and all 3 companies have easy access to both technologies. Panasonic and Sony both make video-centric cameras that do use Quad Bayer technology specifically for access to certain video features, which is where they are most useful.
 
I’m not trying to be argumentative either. Apologies if that’s how it comes off.
 
I’m not trying to be argumentative either. Apologies if that’s how it comes off

No, not at all. I'm mostly worried about how I come off to others when having a long discussion, so I always try to make it clear I see discussions like this as just that - a discussion, not an argument. Sometimes when you're (I don't mean you specifically) trying to help or having a discussion with someone, it can be taken the wrong way, and I try to be sensitive to that. It's not always easy to tell via writing because it's hard to portray things like tone through a keyboard :D
 
To be honest, it looks more like chromatic aberration to me. Try hitting it in post-processing with chromatic aberration correction, set to correct extreme aberrations; that way you'll see whether it's a sensor fault or a relatively simple Lightroom/darktable fix. I've also found that with some drone cameras (**cough** Autel!), moiré patterns crop up in high dynamic range situations and display similar linear yellow/magenta stripe artifacts.
 
First, don't use the 48MP option. From a professional point of view, always use AEB mode (3 shots), then move the camera around, up and down, and crop yourself in post for the best possible quality. The truth is the 48MP mode isn't very good and I never use it.
You want to take about 3 shots from left to right and then one above, then combine them in PS or LR. I do this all the time and sell my work online.
 