
48mp is bad

Is there a native mode, just 12 MP or whatever the native resolution of the main camera is?

What is it doing to get 48 MP, making one giant photo from several shots, or binning/stacking several of them together, like phones do?

BTW, does this drone even output RAW images or do they reserve RAW for more pricey drones?
Native resolution is 48MP for both sensors; it's also 48MP on the Mini 3 Pro. 12MP is just the binned mode.
RAW images are available on both. Happy to upload some if you are curious - mini 3 pro
 
Not bad, but here's where you're going astray. This is another installment of me droning on and on, but if you want to really understand this stuff, it'll be worth it.

A non-quad sensor still has a Bayer filter over the photodiodes, and each captures the light intensity of a single color, R, G, or B.

Each of these locations becomes an RGB pixel after the missing two channels are reconstructed, via an interpolation algorithm, from neighboring pixels that did capture the missing channel. The simplest is nearest-neighbor averaging, which usually produces a decent result, but all manner of complications in the image can defeat simple averaging and produce artifacts.

This is called demosaicing or debayering, and is a part of capturing images from all digital cameras (with some esoteric exceptions we'll ignore, as they're irrelevant to this discussion).
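To make the neighbor-averaging idea concrete, here's a minimal sketch in Python/NumPy (my own illustration, not any camera's actual pipeline) that fills in the two missing channels at each photosite of a standard RGGB Bayer mosaic by averaging whichever neighbors in a 3x3 window did capture that channel:

```python
import numpy as np

def demosaic_nearest_avg(mosaic: np.ndarray) -> np.ndarray:
    """Reconstruct an RGB image from a single-channel RGGB Bayer mosaic by
    averaging the nearest neighbors that captured each missing channel.
    Purely illustrative; real pipelines use far more sophisticated methods."""
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3), dtype=np.float64)

    # Masks marking which photosite captured which channel (RGGB tiling).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        chan = np.where(mask, mosaic, 0.0)
        # Sum captured values and counts over a 3x3 neighborhood, then divide:
        # each missing location becomes the average of its captured neighbors.
        # (Edges wrap around via np.roll; good enough for a demo.)
        kernel_sum = np.zeros_like(chan)
        kernel_cnt = np.zeros_like(chan)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                kernel_sum += np.roll(np.roll(chan, dy, axis=0), dx, axis=1)
                kernel_cnt += np.roll(np.roll(mask.astype(float), dy, axis=0), dx, axis=1)
        rgb[..., c] = np.where(mask, mosaic, kernel_sum / np.maximum(kernel_cnt, 1))
    return rgb

# Tiny synthetic example: an 8x8 mosaic of random "raw" values.
raw = np.random.rand(8, 8)
print(demosaic_nearest_avg(raw).shape)  # (8, 8, 3)
```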

There are more sophisticated reconstruction algorithms that do a better job, some of the best being "content-aware", analyzing the content of the image and adjusting how it determines the 2 missing channels at each pixel.

I explain all this to make the following point: the reconstructed channels have an error range (error bars) that can be mathematically calculated. A non-quad image has errors at every pixel for the two reconstructed channels, just like a quad-Bayer capture does. The quad-Bayer's errors are simply larger than the simple Bayer filter's, and (this is critically important) they are larger for the same demosaicing algorithm.

So you can see where the problem is with the idea of a "true" 48MP image... what does that mean? Error-free isn't possible. Is there a particular error threshold you have in mind? I'm sure you see the problem.

The idea of a "true 48MP image" gets even more meaningless if you allow for different demosaicing algorithms. Suppose you apply a very sophisticated, compute-intensive demosaicing algorithm to the 48MP quad-bayer capture, and the simplest nearest-neighbor algorithm to a capture from a theoretical camera with the only difference being an ordinary Bayer filter instead of a quad.

The error bars for the missing channels can be smaller for the quad image than for the non-quad one, resulting in a higher-resolution, higher-color-fidelity result than the non-quad capture, if the quad capture is the one demosaiced with the far more sophisticated algorithm.

Which one is the "true" 48MP image?

When Sony introduced the quad-Bayer in 2018, demosaicing algorithms were all designed for a 2x2 Bayer pattern, so they didn't do the greatest job of minimizing errors. Hence the reputation quad-Bayer sensors acquired, and deservedly so.

Fast forward to 2023. A lot of R&D has improved quad-Bayer demosaicing – a lot. Computing power has advanced a great deal too, making it possible to do much more than simple averaging in real time on GPUs, and even to some extent on-chip with some higher-end sensors. As we see with the new Air 3, quad-Bayer captures are getting nearly as good as simple Bayer captures, even better if they can be demosaiced by a sophisticated algorithm.

If we define "true" to mean a pixel where all 3 color channels are captured directly, with no reconstruction, the closest would be a 48MP sensor with a regular Bayer filter capturing in 12MP, a 2x2 cluster of photodiodes representing a single pixel but with the RGGB Bayer filter pattern over them. Then, red, green, and blue get directly captured for the "pixel" and there is no demosaicing. 48MP captures would still require demosaicing, but with the more typical error size for the reconstructed channels.
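Here's a rough sketch of that binned capture (my own illustration, assuming a plain RGGB tiling of the raw data): every 2x2 cluster collapses into one output pixel whose R, G, and B come straight off its own photodiodes, with the two green sites averaged and nothing reconstructed.

```python
import numpy as np

def bin_rggb_to_rgb(mosaic: np.ndarray) -> np.ndarray:
    """Collapse a regular RGGB Bayer mosaic (2H x 2W raw values) into an
    H x W RGB image. Each output pixel takes R, G and B directly from the
    four photodiodes beneath it, so no channel has to be reconstructed."""
    r = mosaic[0::2, 0::2]                                # top-left photodiode
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0   # the two green sites, averaged
    b = mosaic[1::2, 1::2]                                # bottom-right photodiode
    return np.stack([r, g, b], axis=-1)

raw = np.random.rand(6, 8)                # a toy 6x8 raw capture, RGGB tiled
print(bin_rggb_to_rgb(raw).shape)         # (3, 4, 3): quarter the pixels, full RGB each
```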

Why use a quad-Bayer filter in the first place, then? Low-light performance. Sensor manufacturers have determined that's a bigger gain than eliminating channel-reconstruction errors, which can be addressed computationally far more easily than data that simply isn't there at all in low light, due to the limited sensitivity and dynamic range of the sensor.

The other reason, sadly, is resolution wars.

Such a great post, well done.

There is a lot of confusion about 12/48MP on these drones, and the same in the smartphone industry.

Native resolution is 48MP: it has 48 million light-capturing photosites; there is just a lot more work needed to unravel/process the file to get it to a usable quality within the processing unit.

A lot more interpolating/demosaicing is required to produce the file you see when you download it to your computer.

But it's not 'up-resing' or 'splitting pixels' as some people have said, which is funny.

Hence why a monochrome sensor with no colour array over the top looks so good; there isn't any guesswork required in the processing.
 
Yes, or at least another kind of interpolation (there is always some interpolation going on in an image sensor).
A 48MP quad-Bayer sensor does indeed have 48 million photodiodes, but the Bayer filter in front of them looks like the filter on a 12MP sensor.
The Bayer filter in front of a 48MP sensor would need one colour in front of each photodiode to be a "true" 48MP sensor. But a quad-Bayer filter has one colour covering 4 photodiodes, in a 2x2 pattern. This gives a good-quality 12MP photo, with 4 photodiodes capturing the same colour from the Bayer filter. A very good idea.
But to make a 48MP photo it has to do some interpolation, because one colour in the filter covers 4 photodiodes.
This is why we sometimes see colour artifacts or other strange colour patterns in 48MP photos from these sensors.
On a Bayer sensor you are still interpolating and demosaicing.
Quad-Bayer just means this process is increased... it isn't a new process.
The 48MP image is the native resolution of the sensor; it is only the colour that is impacted, not the 'resolution' or sharpness of the image. The colour interpolation is what affects the perceived image quality.
 
So the way the quad-Bayer arranges 4 photosensors (or photosites) under a larger color-filter canopy is the reason a 48MP image doesn't have 4x the detail of a 12MP image, and has more noise? Based on photos I've made, the 48MP image only has around 50% more detail.
 
It is.
In my opinion it is the combination of quad-Bayer and a photosite/pixel size that is a quarter of the area of the 12MP 'binned' photosite. A combination rather than one or the other.

Hence why a 100 or 200MP phone image, with even smaller pixels, is becoming more and more meaningless. It's mostly headline writing and marketing.

48mp is double the resolution of the 12mp image. But it is probably not double the detail. But it is so hard to compare. To equalise the size of an image (either increase or decrease) to be able to compare you are again introducing an algorithm...
 
Exactly. I have studied quite a lot of photos taken with this 12/48MP sensor in the drones. 48MP has the possibility to capture more detail under certain conditions, but under other conditions the images do not have more detail; they can even look worse because of colour artifacts, moiré and even some strange "blotches". This could be because of bad internal image processing, and hopefully the stacked sensor in the Air 3 can do a better job because of the faster readout of data. This allows the engineers to improve the processing algorithms.
 
I think I am going to buy the Air 3, so I will do a proper write-up and review of the photography capabilities if I do.
I think the Air 3 will be better, that is, if it actually does have stacked sensors... I can't find this in anything DJI have released, and only reviewers have mentioned it, but some are reliable reviewers.

What I plan on doing is using the 48MP mode, shooting panoramas and then reducing the file back to 50MP or so, hopefully leaving an artefact- and blotch-free image.

Also, software like Photoshop DeNoise or DxO DeepPRIME works a treat on the Mini 3 Pro files.
 
Here we go again, gotta drone on and on about resolution...

I wrote up that long post to address the very common chimerical understanding of what a "true" image is from an X resolution sensor (the MP size of X being irrelevant).

Follow-up comments bring up another term that's commonly used vaguely but actually isn't vague at all... what is "resolution"? What do you mean when you use the term?

When you capture a 48MP image on the Mini3P, 48M individual, distinct light levels are captured. What is the resolution? By what definition?

After demosaicing synthesizes and ADDs (important) information to the image data, does the number of discrete pixels change? Is the captured data for each pixel lost? So, after demosaicing, what is the resolution, and what definition are you using?

You can see the problem here. What's failing is vocabulary. We're discussing changes in clarity, not resolution. Demosaicing doesn't reduce resolution. All the data captured is still there, with 48MP resolution.

The problem is psychovisual. Demosaicing introduces color errors at each pixel that vary depending on image content. These errors mess up how we interpret the image. It's not that resolution is lost, it's that the image is wrong.

The bigger the demosaicing errors, the more wrong the image seems. When pixel-peeping we see these artifacts and interpret them as a loss of resolution when in fact it's a loss of clarity, and that can result in being unable to resolve a feature in the photo, while the resolution is still exactly what the sensor captured.

Consider the following, and try an experiment for yourself to better understand this. There are animals that can see down into the infrared and up toward UV. That lets them see detail not visible in the RGB wavelengths we capture with our image sensors. So what is the resolution of a Mini3P image if a three-toed sloth is looking at it?

You can try this yourself. Pull up an image in a decent photo editor. A really good image to do this with has people in it wearing brightly colored clothing.

Pick a color that's well represented, say someone wearing a royal blue t-shirt. Notice the variation in intensity, shadows, in the folds of the shirt. Now turn off the blue channel.

The detail is gone. The shirt looks black. The shadows are indistinguishable, or nearly so. What is the resolution? Unchanged.
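If you'd rather script that experiment than dig through editor menus, here's a minimal sketch with Pillow and NumPy; "shirt.jpg" is just a placeholder for any colorful photo of your own:

```python
import numpy as np
from PIL import Image

# Load any colorful photo; "shirt.jpg" is a placeholder filename.
img = np.asarray(Image.open("shirt.jpg").convert("RGB")).copy()

img[..., 2] = 0                                   # zero the blue channel everywhere
Image.fromarray(img).save("shirt_no_blue.jpg")

# The pixel grid is untouched: same width, same height, same pixel count.
# Only the per-pixel color information (the clarity) has been degraded.
print(img.shape)
```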

Clarity is about having complete, accurate visual information at each pixel, and depends on the viewer. An RGB image we think is very clear is not so to the sloth, as there's detail missing he can see when he looks at the subject with his own eyes. Our "high resolution, sharp" images look to him like what we see with the blue channel turned off.
 
You get it. Detail == clarity, the term I was using.

The most detailed image you can shoot (in theory) is a print of the color filter itself, quad-Bayer or regular, depending on the sensor. It would have to be perfectly aligned, which is near impossible, and not demosaiced, then processed to zero the two missing channels at each pixel, an operation I don't know of any software to perform.
 
Resolution is contrast, but it's also resolution in the strict sense. MTF50 is mostly contrast, and is what we'd call 'resolution'. MTF9, however, is mostly resolution but without contrast, so we don't see it as such. Unless you sharpen it! :)

Now you're right that colour is related to resolution but most measures of resolution measure the green resolution (as it's a decent analogue for the eye).

If you want more colour resolution from your Mini 3 Pro 48MP sensor, take a few photographs and combine them using super-resolution. Hopefully the drift in your drone will be more than 2 pixels, and software will be able to resynthesise some of that colour. This is what many modern sensors do: shift by a pixel (or so) and combine the results to get perfect RGB at each pixel.
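As a rough sketch of that pixel-shift idea (my own illustration, assuming four perfectly aligned one-photosite shifts, which drone drift won't give you exactly), four offset Bayer samplings land every channel on every pixel at least once, so full RGB can be assembled without any interpolation:

```python
import numpy as np

def bayer_channel_index(y, x, dy, dx):
    """Which channel (0=R, 1=G, 2=B) an RGGB filter shifted by (dy, dx)
    samples at photosite (y, x)."""
    yy, xx = (y + dy) % 2, (x + dx) % 2
    return 0 if (yy, xx) == (0, 0) else 2 if (yy, xx) == (1, 1) else 1

def pixel_shift_combine(scene: np.ndarray) -> np.ndarray:
    """Simulate four captures of the same scene with the Bayer pattern offset
    by one photosite each time, then assemble full RGB at every pixel with no
    interpolation. Illustrative only; real cameras shift the sensor, which is
    equivalent for this demo."""
    h, w, _ = scene.shape
    out = np.zeros_like(scene)
    hits = np.zeros((h, w, 3))
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        for y in range(h):
            for x in range(w):
                c = bayer_channel_index(y, x, dy, dx)
                out[y, x, c] += scene[y, x, c]   # the value this shot captured here
                hits[y, x, c] += 1
    return out / hits                            # every channel sampled at least once

scene = np.random.rand(4, 4, 3)
print(np.allclose(pixel_shift_combine(scene), scene))  # True: full RGB recovered
```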

We're now way off course! In short, a 48MP quad has good monochrome resolution (because most colour filters are thin enough to let in a broad range of colours, for better low-light performance, not just the colour the filter is supposed to represent), but not double, and half-decent colour resolution, maybe 30% extra (the numbers used are illustrative rather than exact).
 
Remember that 48MP files are very large, and the more megapixels a photo has, the more noise there will be when shot in poor light, compared to say a 12 or 24MP file. You can't have it both ways in photography, unfortunately.

Larger megapixel counts give more detail, yes, and will make better enlarged images compared to a 12MP image, but in low light, or when not set up correctly for exposure, a high-megapixel file will also give you more noise.
Completely agree with this. My own experience is that the M3Pro gives excellent 48MP results in bright conditions, and sometimes unpredictable results when in low light. I do see colour artefacts from time to time, too, but I'd rather have the 48MP mode available as a choice. Like most things in photography, it's a trade off, you just need to know what mode is more likely to give good results in a particular situation, and that only comes with practical experience.
 
On a Mini 3 Pro, the shots at 48MP on a bright day are more detailed than 12MP shots. One thing to note is that the 48MP sensor is quad-Bayer, with four photosites covered by the same color filter (R, G, or B) for better HDR and binning purposes, so the color in a 48MP shot will contain more "guesses" to estimate the real color and luminance at each pixel.

A simple trick to enhance cityscape detail and reduce noise is to stack several 48MP shots taken at different manual or EV exposures, or several identical exposures, using astronomy processing tools to bring out more detail and reduce any single photo's ISO noise.
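As a rough sketch of what those stacking tools are doing under the hood (not any specific package, and it assumes the frames are already aligned), averaging N exposures knocks random noise down by roughly the square root of N:

```python
import numpy as np

def stack_frames(frames: np.ndarray) -> np.ndarray:
    """Average a stack of pre-aligned exposures (N x H x W x 3).
    Random (ISO/read) noise averages down roughly as 1/sqrt(N);
    using a median instead also rejects outliers like hot pixels."""
    return frames.mean(axis=0)

# Simulate 8 noisy captures of the same scene and check the noise drop.
rng = np.random.default_rng(0)
scene = rng.random((100, 100, 3))
frames = scene + rng.normal(0, 0.05, size=(8, 100, 100, 3))
stacked = stack_frames(frames)
print(np.std(frames[0] - scene), np.std(stacked - scene))  # ~0.05 vs ~0.018
```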

Also, don’t leave your camera in the hot sun before a photo session, as that raises the thermal noise present on the sensor. If you wanted to be slightly crazy, you could put your Mavic in the fridge or an ice chest at 32°F (excluding the battery) to cool the camera sensor before a very important shot, but then quickly heat only the lens to prevent external condensation, or snap on a DJI external hot filter, literally. Again, tricks that astrophotographers use.

Best of luck.

Hello all, after a few test flights I have come to the conclusion that the 48MP mode gives really bad images. It's okay when you film something up close, but as soon as you shoot a real cityscape from above, the details are really bad, and that's what I bought my drone for. For example, if there are walls in a photo, the details are almost unrecognisable. Curious about your opinion on this quad-Bayer sensor.
 

What's the name of said tools?
 
@power64, excellent advice about putting the drone in the fridge. That is a great idea before heading out to catch Golden Hour pics, especially in the evening.
 
Just be careful of condensation on cold surfaces, especially if flying through salt water fog.

The sensor will not stay cold for very long, as it is tiny with lots of air flow. But higher ISO may look a little better. If I wasn’t so lazy, it would be pretty cool to see a comparison of hot vs ice cold DJI sensor at the same high ISO to really convince people.
 
Here are several traditional HDR stacking software packages:

And for Astro-specific:

Great example in that second link of a high ISO DSLR single shot vs stacked…

 