I don't think so. The only data you get from the pixels is the luminance (brightness), and as I said previously, I do believe you can get that from the DNG files. The color comes simply from knowing which color filter sits over each pixel. So I think it should theoretically be possible to do your own debayering, but I don't know of any software that lets you do that -- I think it would take some coding.
You get a 24-bit RGB triplet for each of 48 million pixels... that's 3x the data the sensor actually produces. In other words, the camera debayers in creating the "raw" data, synthesizing 2 of the 3 RGB values for each pixel.
A truly "raw" 48MP image from the sensor would be 48MB in size (one 8-bit sample per pixel), and would display in grayscale. To display it in color, the application would have to know the color filter pattern (Bayer, Quad Bayer, etc.), which of course it doesn't.
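The arithmetic is easy to check. A quick sketch in Python (assuming 8 bits per sample for round numbers -- a 10- or 12-bit sensor would scale the raw size proportionally, but the 3x ratio stays the same):

```python
# Back-of-envelope size comparison for a 48 MP sensor.
# Assumes 8 bits (1 byte) per sample; higher bit depths scale both sides equally.
PIXELS = 48_000_000

raw_bytes = PIXELS * 1    # true raw: one filtered sample per pixel
rgb_bytes = PIXELS * 3    # debayered: full 24-bit RGB triplet per pixel

print(f"true raw mosaic: {raw_bytes / 1e6:.0f} MB")    # 48 MB
print(f"debayered RGB:   {rgb_bytes / 1e6:.0f} MB")    # 144 MB
print(f"synthesized:     {(rgb_bytes - raw_bytes) / rgb_bytes:.0%}")  # 67%
```

Two of every three bytes in the "raw" file were interpolated, not measured.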
Now, if we know the filter pattern, and if the debayering algorithm passes each pixel's native color channel through unaltered, then it would be a pretty simple matter to extract the actual raw 48MB of "bayered" image data. Then we could really see the best that can be done with that camera.
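If both of those "ifs" hold, the extraction really is just an index lookup per pixel. A minimal numpy sketch -- the RGGB layout here is an assumption, and the real filter pattern for a given sensor would have to be confirmed first:

```python
import numpy as np

def extract_mosaic(rgb, pattern="RGGB"):
    """Pull each pixel's native color channel out of a debayered H x W x 3
    RGB array, reconstructing the grayscale "bayered" mosaic.
    Only valid if the debayer pass left native channels unaltered."""
    channel = {"R": 0, "G": 1, "B": 2}
    h, w, _ = rgb.shape
    # 2x2 tile of channel indices, e.g. "RGGB" -> [[0, 1], [1, 2]]
    tile = np.array([[channel[pattern[0]], channel[pattern[1]]],
                     [channel[pattern[2]], channel[pattern[3]]]])
    # Repeat the tile to cover the whole frame, then crop to H x W.
    idx = np.tile(tile, (h // 2 + 1, w // 2 + 1))[:h, :w]
    # Select each pixel's native channel.
    return np.take_along_axis(rgb, idx[..., None], axis=2)[..., 0]

# Tiny demo: 2x2 image where every sample holds a distinct value 0..11.
demo = np.arange(12).reshape(2, 2, 3)
print(extract_mosaic(demo))  # [[ 0  4] [ 7 11]] -> R, G / G, B picked out
```

Comparing that extracted mosaic against a synthetic re-debayer of itself would also be a decent sanity check of the "passed through unaltered" assumption.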
Control over the debayering algorithm is huge. Image content can affect that choice – for example, an image with lots of sharp edges vs. mostly soft texture. Even the direction of edges (horizontal, vertical, angled, random) can influence how best to debayer.
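For reference, the crudest option -- plain bilinear interpolation, averaging same-color neighbors -- fits in a few lines, and its blindness to edge direction is exactly why the fancier edge-directed methods exist. A sketch assuming an RGGB mosaic (pattern and neighbor-averaging scheme are the generic textbook version, not anything specific to this camera; edges wrap around via np.roll, which is good enough for a demo):

```python
import numpy as np

def bilinear_demosaic(mosaic):
    """Naive bilinear demosaic of an RGGB mosaic (H x W, even dimensions).
    Each missing sample is the average of same-color samples in the
    surrounding 3x3 window -- with no regard for edge direction."""
    h, w = mosaic.shape
    # Masks marking where each channel was actually sampled (RGGB tiling).
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interp(mask):
        vals = np.where(mask, mosaic, 0.0)   # sampled values, zeros elsewhere
        cnt = mask.astype(float)             # count of samples per position
        ksum = np.zeros((h, w)); kcnt = np.zeros((h, w))
        for dy in (-1, 0, 1):                # sum samples over the 3x3 window
            for dx in (-1, 0, 1):
                ksum += np.roll(np.roll(vals, dy, 0), dx, 1)
                kcnt += np.roll(np.roll(cnt, dy, 0), dx, 1)
        out = ksum / np.maximum(kcnt, 1)     # window average of sampled values
        return np.where(mask, mosaic, out)   # keep native samples untouched

    return np.dstack([interp(r_mask), interp(g_mask), interp(b_mask)])

flat = np.full((4, 4), 100.0)
print(bilinear_demosaic(flat).shape)  # (4, 4, 3)
```

On a flat patch this is fine; run it across a sharp diagonal edge and you get the classic color fringing that edge-aware algorithms are designed to suppress.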
Heck, even just being able to try different methods and pick the one that looks best would be immensely valuable, especially in professional applications. It can save a lot of cleanup in Photoshop.
Might hack it up, if someone hasn't done it yet.