
Is the Mini 3’s photo quality as good as the Mavic 2?

And one more time, this is simply wrong; 48MP quad Bayer cameras have been used in phones for several years, and the Mini 3 Pro has one.
Older smartphones used traditional Bayer sensors, where each pixel (photosite) is covered by one coloured filter, arranged in a 2x2 grid (RGGB). Across the four pixels in this grid the sensitivity split is 50% green, 25% red and 25% blue: there are two green Bayer filters, one red and one blue.

Small-form cameras that claim high megapixel resolution use what they call a Quad Bayer sensor, which can MIMIC the presence of four pixels per photosite. In other words, in the DJI Mini 3 Pro's 12 megapixel camera, four adjacent 2.4 micron pixels in a 2x2 grid share one single-colour Bayer filter.

The pixel has remained the same size (2.4 microns) - there are the same number of photosites on the sensor (12 million) - but the size of each of the individual Bayer filter squares in the 2x2 grid has been effectively quadrupled... by software. This is the technique called pixel binning. It IMITATES the attributes of larger photosites. It does not mean that there are four discrete pixels beneath one single physical Bayer filter square. The "48MP" resolution is interpolated - it's clever computer guesswork.

Your Mini 3 Pro sensor still has a physical resolution of 12 megapixels. This is also why the claimed "48 megapixel" photographs happen to be precisely four times the actual resolution of the sensor (four squares in the Bayer array... 12 x 4 = 48).

To produce a TRUE 48 megapixel image, a sensor like the one fitted to the Mini 3 Pro would either have to be four times the area with the same 2.4 micron pixels, or it would have to have photosites with a quarter of the area (1.2 micron pitch) to retain its current size. DJI's specs show that the Mini 3 Pro sensor has pixels measuring 2.4 microns.

How a 12 MEGAPIXEL sensor produces 48 megapixel images through pixel binning... and why those 48 megapixel outputs are fuzzier and lacking in detail.
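For anyone who finds the two filter layouts hard to picture, here's a rough numpy sketch (my own illustration, not any manufacturer's documented layout) of how a standard Bayer array and a Quad Bayer array assign colour filters to photosites:

Code:
import numpy as np

def standard_bayer_cfa(h, w):
    # Classic RGGB Bayer pattern: every photosite gets its own colour filter.
    # 0 = red, 1 = green, 2 = blue.
    cfa = np.empty((h, w), dtype=int)
    cfa[0::2, 0::2] = 0  # R
    cfa[0::2, 1::2] = 1  # G
    cfa[1::2, 0::2] = 1  # G
    cfa[1::2, 1::2] = 2  # B
    return cfa

def quad_bayer_cfa(h, w):
    # Quad Bayer: each colour filter covers a 2x2 block of adjacent photosites.
    base = standard_bayer_cfa(h // 2, w // 2)
    return np.kron(base, np.ones((2, 2), dtype=int))

print(standard_bayer_cfa(4, 4))
print(quad_bayer_cfa(4, 4))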

 
What I've learned in over 20 years of digital imaging is that megapixels by themselves mean very little. I'm not hugely technical, but what I do believe is that an RYYB sensor produces better image files than RGGB, especially in low light. For that reason Autel is a huge disappointment, because they produce drones with superior image quality compared to DJI, but the flight characteristics are generally inferior. Image-wise, an Autel Nano Plus will top the Mini 3 hands down, but it doesn't have the same power. I haven't done much looking into the Evo Lite+, but that looks to be a pretty good drone. I just don't trust Autel for the type of customer service that a professional needs.
I've had hands-on experience with Autel products (the Nano+). The camera with its RYYB sensor is the bare bones of an absolute beast... in low light it is phenomenal. But, and it's a BIG but... the Autel drones I've had have all been mechanically unreliable, and on top of that, Autel really fumbles the ball when it comes to getting their 80%-finished products up to the release specs you can rely on with DJI drones. Their firmware improvements are woeful, and they still haven't tamed the RYYB sensor's proclivity for returning some pretty funky colour interpretation. Autel? Caveat emptor. That's why I gave Autel a fair crack of the whip and finally (after the second Nano+ on the trot died before take-off) got a refund and forked out for the drone I should have bought in the first place... the Mini 3 Pro.
 

I'm sorry, but you're misunderstanding what your video and my source both say: binning produces a 12MP image from a 48MP sensor by combining 4 pixels to make 1, not the other way around. If you don't use that binning mode on the Mini 3 Pro, you're getting 48MP images from 48 million (very small) photodiodes (or "photosites", as your video calls them). How Bayer filtering is used to estimate colors is really a completely different issue.
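If it helps to make the direction concrete, here's a rough numpy sketch of what that binning step does: four same-colour photosite values go in, one value comes out, so a 48M-photosite readout collapses to a roughly 12MP Bayer mosaic. This is only an illustration of the principle, not DJI's actual processing, and the image dimensions in the example are just typical 48MP numbers.

Code:
import numpy as np

def bin_quad_bayer(raw):
    # Average each 2x2 block of same-colour photosites (quad Bayer -> standard Bayer mosaic).
    # Input: (H, W) quad-Bayer readout. Output: (H/2, W/2) mosaic with a quarter of the pixels.
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# e.g. an 8064 x 6048 readout bins down to 4032 x 3024 (about 48.8 MP -> 12.2 MP)
raw = np.random.randint(0, 4096, size=(6048, 8064)).astype(np.float32)
print(raw.size / 1e6, "->", bin_quad_bayer(raw).size / 1e6, "MP")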
 
I've had hands-on experience with Autel products (the Nano+). The camera with its RYYB sensor is the bare bones of an absolute beast... in low light it is phenomenal. But, and it's a BIG but... the Autel drones I've had have all been mechanically unreliable, and on top of that, Autel really fumbles the ball when it comes to getting their 80%-finished products up to the release specs you can rely on with DJI drones. Their firmware improvements are woeful, and they still haven't tamed the RYYB sensor's proclivity for returning some pretty funky colour interpretation. Autel? Caveat emptor. That's why I gave Autel a fair crack of the whip and finally (after the second Nano+ on the trot died before take-off) got a refund and forked out for the drone I should have bought in the first place... the Mini 3 Pro.
All of that is why I've stayed with DJI, but under protest.
 
I think the M3P takes great JPG images out of the box. Better than the RAW files, IMHO. Maybe some post-processing improves the RAW images, but I still think the JPGs have the edge.
 
RAW images are designed to be post-processed. They are, and should be, poor without it. That's the whole point.

With a JPG you're limited to very little to no post-processing ability.

A RAW should look less sharp, with more noise, less saturation and less contrast than a JPG.
 
I'm sorry, but you're misunderstanding what your video and my source both say: binning produces a 12MP image from a 48MP sensor by combining 4 pixels to make 1, not the other way around. If you don't use that binning mode on the Mini 3 Pro, you're getting 48MP images from 48 million (very small) photodiodes (or "photosites", as your video calls them). How Bayer filtering is used to estimate colors is really a completely different issue.
Went through this before lol, you're still screwed up and have it backwards again.
It is a 12 MP sensor, period, and through binning it produces so-called 48 MP photos.
 
We can go through it as often as you like; you're simply wrong.

This is the sensor that DPReview thinks is used in the Mini 3 Pro, and it does seem to have identical specifications:
OMNIVISION’s OV48C is a 48 megapixel (MP) image sensor with a large 1.2 micron pixel size to enable high resolution and excellent low light performance for flagship smartphone cameras. The OV48C is the industry’s first image sensor for high resolution mobile cameras with on-chip dual conversion gain HDR, which eliminates motion artifacts and produces an excellent signal-to-noise ratio (SNR). This sensor also offers a staggered HDR option with on-chip combination, providing smartphone designers with the maximum flexibility to select the best HDR method for a given scene. The OV48C is the only flagship mobile image sensor in the industry to offer the combination of high 48MP resolution, a large 1.2 micron pixel, high speed, and on-chip high dynamic range, which provides superior SNR, unparalleled low light performance and high quality 4K video.


Built on OMNIVISION’s PureCel®Plus stacked die technology, this 1/1.3″ optical format sensor provides leading-edge still image capture and video performance for flagship smartphones. The OV48C also integrates an on-chip, 4-cell color filter array and hardware remosaic, which provides high quality, 48MP Bayer output, or 8K video, in real time. In low light conditions, this sensor can use near-pixel binning to output a 12MP image for 4K2K video with four times the sensitivity, yielding a 2.4 micron-equivalent performance. In either case, the OV48C can consistently capture the best quality images without motion blur, as well as enabling digital crop zoom with 12MP resolution and fast mode switch. Additionally, this sensor offers a wide range of features, including digital crop zoom and a CPHY interface, making it ideal for main, rear-facing cameras in multicamera configurations. The OV48C also uses 4C Half Shield phase detection for fast autofocus support.
These sensors have 48 million 1.2 micron photodiodes which can be binned in groups of four to produce the equivalent of 12 million 2.4 micron pixels. Please put aside your preconceived assumptions and read for comprehension.
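A quick back-of-the-envelope check, in case the numbers help anyone: 48 million photosites at a 1.2 micron pitch occupy the same silicon area as 12 million at a 2.4 micron pitch, which is all the "2.4 micron-equivalent performance" wording means (ignoring the gaps between photosites):

Code:
area_48mp = 48e6 * 1.2 ** 2  # 48 million photosites at 1.2 um pitch, area in um^2
area_12mp = 12e6 * 2.4 ** 2  # 12 million binned pixels at 2.4 um pitch, area in um^2
print(area_48mp / 1e6, area_12mp / 1e6)  # both ~69 mm^2, in the right ballpark for a 1/1.3" sensor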
 
Maybe someone else can get through to you; I know I am done.
 
I guess I'll leave it up to you to decide which drone's clips you like better: the Mini 3 or the Mavic 2 Pro.


UPDATE-REVEAL....

I got a PM today asking if I had ever revealed which drone was which. I thought it was either obvious or somehow implied... so here's which drone is which:

Drone 1 - MAVIC 2 PRO

Drone 2 - DJI MINI 3 PRO
 
DPReview thinks it is used, but does not know for sure. So you are following one site, which may not even be the right sensor. Oh boy.
Now I'm done.
That sensor has the same specs and features that DJI claims for the Mini 3 Pro, so I think it's a pretty good guess, but it's not all that important for the discussion at hand, i.e. how many physical pixels a 48MP quad Bayer sensor has (the correct answer is 48 million) and what binning means (the correct explanation is that 48MP input is reduced to 12MP by combining groups of 4 pixels).

Sony has made a chip using similar technology for years:

Their description of the technology is precisely the same as Omnivision's:
Sony has unveiled the new IMX586 imaging sensor for smartphones that packs an effective 48 MP with a pixel size of 0.8 microns. The CMOS sensor features a Quad Bayer color filter array that bins adjacent 2x2 pixels to yield sensitivity equivalent to a 1.6 micron 12 MP sensor.
I really do hope you are done this time -- this is getting tiresome.
 
Here is the deal guys...
ONE INGREDIENT ALONE DOES NOT A CAKE MAKE.

You have a lens element, and lord knows how similar or different they are from drone to drone. There is a processor, there are algorithms (I think) that contribute to processing, and there are other electronic components. Does DJI use the same quality components on, say, a $2,000 Mavic 3 as on an $800 Mini 3? I'm not sure how much that contributes to RAW files, which should be data capture only. Others here know more than I do on that front.

I've learned over the years not to depend on megapixels or even sensor specs. My original DSLRs were APS-C, and I was led to believe that full-frame sensors were better. So I got a full-frame camera with higher megapixels, and guess what? Great pictures, but no perceivable difference in detail and quality.

And once again, it all comes down to the cheesecloth that the original files are strained through, into viewable video, still images and prints ... to end up with the viewer.
 
Do you have access to a Mini 3? I suggest you shoot a JPG in the 48 MP mode, open the file in your favourite photo editor and inspect the pixel dimensions of the image. Then, report your findings here.
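If anyone wants to do that check without opening an editor, a couple of lines with the Pillow library will report the dimensions (the filename below is just a placeholder for your own 48 MP shot):

Code:
from PIL import Image

img = Image.open("DJI_48MP_test.JPG")  # placeholder filename - point this at your own file
w, h = img.size
print(w, "x", h, "=", round(w * h / 1e6, 1), "MP")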
His insistence about this is simply a case of word policing.

In his world, a 48MP image only counts as such if there is no interpolation of any of the RGB triplet in creating the pixel data (debayering).

The fact is, a pixel produced via debayering is no less valid a pixel than one produced by direct measurement via a photosite quad with an RGGB filter over it (which is how the 12MP images are produced on the M3P).

The debayered, higher-resolution pixel simply has a greater error range than the lower-resolution pixel. Yet here's the critical point: for the 12MP image, that error is not zero! The pixel quad introduces error because each of the RGB photodiodes is measuring a different part of the image. The center of the pixel is the point where the four corners of the photodiodes meet. Certainly very close, but still not a true representation of the intensity of the (singular) pixel.

In fact, "true" pixels (which, are actually an error-laden approximation) can reduce error by – NOOOOOOOOO!! AUGHHHHHHHHHH!! – interpolating the RGB values of the photodiodes in the pixel to estimate an HSV value for the pixel.

Similar to debayering mathematically.

So, it is incorrect to claim an image produced via a full-triple Bayer filter is any more "true" than a debayered 4x resolution image from the same sensor. The latter really will be higher resolution, and there's no "trickery" involved. As with its lower-resolution cousin, each color channel is obtained from a nearby, adjacent photodiode to the location of the pixel.

The most accurate and fair thing to say in comparing the two is that the 48MP image has larger error bars for each RGB value.

And that difference, with some more sophisticated debayering techniques, is pretty small, for most image data.
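For anyone curious what the simplest form of that interpolation looks like, here's a minimal bilinear debayer sketch in numpy/scipy. It's only an illustration of the principle - real camera pipelines (DJI's included) use far more sophisticated algorithms than this:

Code:
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic):
    # Plain bilinear demosaic of an RGGB Bayer mosaic: every missing colour value at a
    # pixel is interpolated from the nearest photosites that actually have that colour.
    h, w = mosaic.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask
    k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    r = convolve(mosaic * r_mask, k_rb, mode="mirror")
    g = convolve(mosaic * g_mask, k_green, mode="mirror")
    b = convolve(mosaic * b_mask, k_rb, mode="mirror")
    return np.dstack([r, g, b])  # full-resolution RGB, every channel partly interpolated

Averaging a quad (the 12MP path) and interpolating neighbours (the 48MP path) are both estimates; the difference is just how far the contributing photosites sit from the pixel you're estimating.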
 
I don't believe really really REALLY TRUE raw is available. I'm talking 48MP of the unaltered RGGB values in each quad.

Then we could debayer ourselves, try different algorithms, compare and contrast results.

I know a better result can be achieved at 48MP resolution than what the M3P processing pipeline produces... there are much better algorithms, but they're too compute-intensive for real-time use on a drone processor.
 
They aren't true raw, sadly, no. None of the DJI drones are. The earlier Mavics did a lot of processing on the supposed RAW, not all of it good. It's better now, but basically "RAW-ish".

I'm in the process of seeing what effect the new sharpness and NR options have on the DNG (if anything). Hopefully no effect at all, *but* on the Mavic 1 they did affect the DNG.
Just need it to not actually rain for a day to do some tests.
 
The only data you get from the pixels is the luminance (brightness), and as I said previously, I do believe you can get that from the DNG files. The color comes from simply knowing what color filter is over each pixel. So, I think it should theoretically be possible to do your own debayering, but I don't know of any software that will let you do that -- I think it would take some coding.
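Something along these lines should get you the raw mosaic to experiment on. It's only a sketch using the rawpy library (a LibRaw wrapper) - I'm assuming the Mini 3 Pro DNGs open cleanly in it and that they really do carry per-photosite values, which I haven't verified, and the filename is just a placeholder:

Code:
import numpy as np
import rawpy  # pip install rawpy

with rawpy.imread("DJI_0001.DNG") as raw:  # placeholder filename
    mosaic = raw.raw_image_visible.astype(np.float32)  # per-photosite brightness values
    pattern = raw.raw_pattern  # 2x2 map of which colour filter sits over which photosite
    print(mosaic.shape, pattern)
    reference = raw.postprocess()  # the library's own demosaic, for comparison
# From here you could run your own debayer on the mosaic (e.g. the bilinear sketch
# earlier in the thread) and compare the result against the library's output.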
 