I am struggling to see the connection between your reply to my post and the Lr tools the other gentleman was talking about, which I was referring to.
But since you already replied with open-ended questions, would you please explain to us poor mortals why the 50MP photos from the A3S are full of undesirable artifacts, well demonstrated by the samples posted in this long thread, while photos from, let's say, a Fuji GFX 50 with a 50MP Bayer sensor look so amazing in comparison, without any of those nasty artifacts seen in A3S 50MP photos? I know it's apples and oranges when it comes to sensor size. Still, the night-and-day difference in IQ is surely not caused by sensor size alone, but perhaps by the sensor type?
I figured you wouldn't answer the question.
For others:
Ordinary Bayer sensors are named after Bryce Bayer, the Kodak scientist who invented the pattern. It refers to a color filter array laid out as 4 pixels in a square: 2 with a green filter, 1 red, 1 blue. Underneath this filter is a pixel array, and each pixel senses only the red, green, or blue component of the light that falls on it.
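A minimal sketch of that layout, assuming the common RGGB corner (real sensors may start the 2x2 tile on a different corner); the function name is just for illustration:

```python
import numpy as np

def bayer_pattern(height, width):
    """Label which color each pixel of an RGGB Bayer sensor senses (toy illustration)."""
    tile = np.array([['R', 'G'],
                     ['G', 'B']])
    return np.tile(tile, (height // 2, width // 2))

print(bayer_pattern(4, 4))
# [['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']
#  ['R' 'G' 'R' 'G']
#  ['G' 'B' 'G' 'B']]
```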
When a picture is taken, a mosaic of red, green, and blue pixel values is created. Because of the Bayer filter, no pixel read out from the sensor carries any information about the two color channels other than the one it sensed.
To make a finished image, a mathematical process called "demosaicing" or "debayering" is applied to the data. This process estimates the values of the two missing channels at each pixel by taking neighboring pixels of each missing color and mathematically combining them. The simplest algorithm just averages the adjacent values.
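Here is a toy version of that averaging approach, purely to illustrate the idea described above, not how any real camera or raw converter does it; it assumes the RGGB layout from the earlier sketch and wraps around at the image edges for brevity:

```python
import numpy as np

RGGB = np.array([['R', 'G'],
                 ['G', 'B']])

def naive_demosaic(raw):
    """raw: 2-D array of sensor values under an (assumed) RGGB Bayer filter."""
    h, w = raw.shape
    cfa = np.tile(RGGB, (h // 2, w // 2))   # which color each pixel actually sensed
    rgb = np.zeros((h, w, 3))
    for ci, color in enumerate('RGB'):
        mask = (cfa == color)
        plane = np.where(mask, raw, 0.0)
        # Sum each pixel's 3x3 neighborhood, counting only pixels of this color.
        total = sum(np.roll(np.roll(plane, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        count = sum(np.roll(np.roll(mask.astype(float), dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        est = total / np.maximum(count, 1)
        # Keep the measured value where the pixel sensed this color, estimate the rest.
        rgb[..., ci] = np.where(mask, raw, est)
    return rgb
```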
Obviously, there is deviation (error) between what the actual light intensity was at that pixel and the average of the values around it (for the color being calculated). At each pixel the error can be vanishingly small or very large, depending on the content of the image. For example, with simple averaging, high-contrast sharp edges produce errors large enough to be visible without magnification on simple Bayer sensors.
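A quick worked example of that edge error, with made-up numbers: imagine a green-filtered pixel sitting right on a hard white/black vertical edge, so its nearest red neighbors are one bright pixel on the left and one dark pixel on the right.

```python
# Hypothetical values straddling a sharp edge: the true red level at this pixel
# is close to 255, but simple averaging of the two nearest red samples lands
# halfway between, which shows up as color fringing at the edge.
left_red, right_red = 255, 0
estimated_red = (left_red + right_red) / 2   # 127.5, far from either true value
```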
Of course, a lot has been learned in the 20 years of mainstream digital photography, and much better demosaicing algorithms have been developed, doing an amazing job of "guessing" at the missing two channels for each pixel.
Quad Bayer sensors are exactly the same as Bayer sensors with respect to everything discussed above, with one difference: because each color filter covers a 2x2 block of pixels, the values used to calculate the missing channels can come from 2 pixels away, not just adjacent pixels. This obviously increases the deviation (error) compared to using only adjacent pixels.
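For reference, this is roughly how the two layouts compare (same RGGB assumption as before); in the Quad Bayer case the nearest sample of a different color can be two pixels away:

```python
import numpy as np

bayer = np.array([['R', 'G'],
                  ['G', 'B']])

# Quad Bayer: each color filter is stretched over a 2x2 block of pixels.
quad_bayer = np.repeat(np.repeat(bayer, 2, axis=0), 2, axis=1)
print(quad_bayer)
# [['R' 'R' 'G' 'G']
#  ['R' 'R' 'G' 'G']
#  ['G' 'G' 'B' 'B']
#  ['G' 'G' 'B' 'B']]
```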
There is more discussion to be had about the QB vs. B filter patterns, why they differ, and the advantages of a QB sensor over a simple B sensor, but I will not address that in this post.
So... image quality. B sensor technology is not inherently "better" than QB technology. Rather, QB provides more functionality and options for the photographer. The clever QB pattern can be read out as a simple B pattern by binning 4 pixels of the same color, producing an effective pixel with roughly 4x the light sensitivity at 25% of the image resolution.
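A rough sketch of what that binned readout amounts to, assuming the Quad Bayer layout above and leaving out everything a real sensor does in hardware:

```python
import numpy as np

def bin_2x2(quad_raw):
    """Sum each 2x2 same-color block of a Quad Bayer capture.

    The result is an ordinary Bayer mosaic at 1/4 the pixel count, with
    roughly 4x the collected light behind each output pixel.
    """
    h, w = quad_raw.shape
    return quad_raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```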
B pattern captures have a lower error margin, in large part because research on reducing those errors and implementing better algorithms has been going on for decades. QB is very new as a widespread consumer technology, so improving its demosaicing is a current, hot area of research. AI has opened new options for minimizing color errors. QB images have improved a lot in the last 5 years, to the point that they are more than acceptable for most applications. That's why DJI started using QB sensors, and IMO why DJI decided the threshold had been crossed to offer only QB in the Air 3S rather than a larger simple Bayer main sensor.
QB images will continue to improve as computational techniques improve. An interesting side effect of this is that RAW QB images that are problematic after today's demosaicing may turn out to be superb, amazingly detailed shots in the future, once pushed through some HAL-grade super-AI demosaicing process that doesn't exist yet.
So take all your QB high-res shots in RAW. In post you can use a better demosaicing algorithm than the drone does and often get a good image with a bit more detail than the ¼-resolution image from the sensor in Bayer mode, and in the (likely near) future you can reprocess that same capture with a new and better demosaicing algorithm that brings even more detail out of it.