
Exposure Discrepancy - DNG Still Images

Citizen Flier

I was snapping stills and video last night at sunset. I adjusted settings to be slightly underexposed for the landscape. The sun was setting over a ridge top in the images, so I was shooting into the sun, illuminating the subjects in the foreground with perfect rear/side lighting. The exposures looked excellent on the Smart Controller screen. I shoot only Raw (DNG). When I loaded the images in Adobe Bridge, they were almost black, except for the sky, which was way overexposed. I was able to adjust the images in Camera Raw, and the result was pretty spectacular. But what I can't decipher is how the images looked properly exposed on the Smart Controller, yet were so underexposed when I viewed them in Bridge. The videos were perfectly exposed and appeared the same in Bridge as they did on the SC. I shot all manual exposure, adjusting the settings as it got darker. I was able to apply one set of image corrections to most of the images for a proper exposure in Camera Raw. Thanks for your suggestions.
 
A raw photo doesn’t contain luminance levels for pixels; this is why it’s called “RAW”.

The image you see when you open it in a raw photo reader is just some arbitrary luminance setting the raw photo reader has applied, possibly the previous settings you had used. You can usually set the default levels to some preferred setting if all the photos are coming in too dark.

To be clear, this isn’t an over- or under-exposed photo if you were able to recover the details; it’s just a matter of the initial settings for photos processed by your raw photo reader.
 
By this logic the DNG doesn't contain any colour information either.

I think you might be off track somewhere here: any half-decent RAW converter will apply the camera profile when the DNG is opened. The relative brightness displayed will, particularly with the assistance of the histogram, provide a very good indication of whether the capture was under- or over-exposed.

To be clear, a capture can be underexposed to a significant degree and still allow shadow recovery and/or an increase in overall exposure level in post. It won't and can't ever produce as good a final result as a properly exposed capture. If we are interested in maximising the performance of any digital imaging system we should be exposing to the right: we want the brightest element in the scene to be taking the sensor close to clipping.
 
I'm pretty sure that a RAW file is a set of single luminance values for each pixel. Color is derived from a combination of those luminance values during render.

You can definitely blow out highlights (all white, no detail) or crush the blacks (all black, no detail, nothing but noise when you try to lift the shadow detail). RAW does not save you from bad exposure.

If you are able to recover details from underexposed images, then they weren't that underexposed, though they can appear very dark when the luminance values are very low (such as 20-80) but not zero. Zero is bad, but anything above that is an increasing amount of saved and recoverable detail.

On top of all those RAW bits is a data island of camera settings, including the exposure settings, white balance, etc. None of those header fields are named 'luminance'. (At least with any RAW file schema that I've seen.) RAW just means that an image has not yet been rendered by applying those header values to the stored luminance values, but if the bits are way over- or under-exposed, those header values are not going to save the image.
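
For anyone who wants to poke at the file directly, here's a minimal sketch of the idea in Python, assuming the rawpy package is installed (the file name is just a placeholder, not one of the OP's shots):

```python
# Minimal sketch: look at the un-rendered contents of a DNG.
# Requires: pip install rawpy. The file name is a placeholder.
import rawpy

with rawpy.imread("DJI_0042.DNG") as raw:
    mosaic = raw.raw_image_visible            # raw sensor values, one number per photosite
    print("mosaic shape:", mosaic.shape, "dtype:", mosaic.dtype)
    print("min/max raw value:", mosaic.min(), mosaic.max())
    print("black level(s):", raw.black_level_per_channel)
    print("white (clipping) level:", raw.white_level)
    print("camera white balance:", raw.camera_whitebalance)

    # Rendering only happens here: demosaic + white balance + tone curve.
    rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True)
    print("rendered image shape:", rgb.shape)  # now three channels per pixel
```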

I don't know for sure why your DNGs were black-foreground / blown-sky when the video was not in that condition. But I can say that this camera probably just isn't able to handle such a HUGE dynamic range well (bright sun in the sky, darkened foreground at sunset). The video is processed with a different curve (especially if you were in DLOG), so that could be why it survived better.

Chris
 
OK, let me be more clear. I'm a pro retoucher & nature photog for fun. I process HUNDREDS of RAW images every month. When I shoot with my DSLR, pocket cam, or M2P, the preview of the RAW image in Camera Raw is typically fairly close to what I saw on the device screen at capture. Raw images that I open in Bridge/Camera Raw are typically displayed "as shot". I actually need to manually select "previous conversion" if I want to begin processing with the same settings as the previous conversion; that's not set as the default on my system.

And I certainly understand the concept of a "Digital Negative" and a good deal about exposure. If shooting a scene that has too wide a range, I will bracket exposures and compose an HDR image in Camera Raw. The point is that these raw images were MANY STOPS off from what I had viewed on the SC screen. As mentioned above, there is not normally a big discrepancy between those previews and what I see in Camera Raw. It's possible that I have tweaked some display or preview setting, either in the drone gear or perhaps in Camera Raw. But I'm also wondering if perhaps the Smart Controller might be adjusting the exposure IN THE DISPLAY but not attaching a SIDECAR setting to the image. That is, the SC screen might be misrepresenting the luminance of the raw image.

I rely on the preview in the Smart Controller to give me a REASONABLY ACCURATE preview of the images. That's why we shoot digital raw images as opposed to film with a light meter. The preview might not be dead on with Camera Raw, as it isn't with my DSLR. But it IS reasonably close. I'm guessing that the image I'm seeing in Camera Raw is an accurate display of how these images were captured. I would like to have my SC display be closer in sync with what I see in Camera Raw. I hope I'm being clear enough. Thanks.
 
I agree.

And it's pretty much that way with me. The preview closely represents the exposure I'm getting, aside from the fact that the preview is based on basic JPG settings and not the actual RAW image bits. (Though I'm not using the SC, I don't think that is a factor -- it should be the same.)

So it makes no sense to me that it looked relatively fine on the SC but so dark in post-processing.

Personally, I would do some more testing to see if that was a fluke, or if there's something wrong in the system.

I mentioned that a sunset is pushing the limits of the camera's dynamic range, but the preview should have shown that if the image was going to turn out that way.

Chris
 
What I'm trying to grasp is WHAT exactly CAUSED the discrepancy: extreme dynamic range in the subject, a display setting out of adjustment, etc.?
 

I really can't say, since it hasn't happened to me. However, if you don't mind a bit of speculation, I can IMAGINE it going this way:
  • You are shooting video and stills, and you look down at your screen while in video mode.
    • Note: if you shoot video in DLOG, the DJI Go 4 app will enhance the screen to simulate what it might look like after post processing with a LUT and/or color/contrast adjustments. On my tablet, DJI Go tells me about this with an on-screen message every time I switch back to video. You also get a rainbow colored square indicating that you are in that mode.
  • So in my imagined scenario: you look at the screen while in video mode, with the above mentioned enhancement going on, but you mistakenly believe that you are in photo mode (I do this a lot -- too often).
    • Note: when you press the shutter button while in video mode, DJI Go will automatically switch to photo mode. It takes longer than it normally would to just take a picture, but you may not always notice it.
  • So you shoot thinking that you are exposed correctly, but what you had viewed was actually the enhanced video screen.
But I don't know. That's why I suggested above to do some field testing to see if it was a fluke (either my scenario or something completely different) and not the norm.

After all, we do have other things to put our mind to with these things, like flying and other stuff in our immediate surroundings.

Good luck!
Chris
 
On the question of where the luminance values come from: it is a long way from my area of expertise, however I recall reading years ago, when I first started shooting RAW, that the luminance is principally, if not exclusively, derived from the green photosite values. That would make sense given there are twice the number of green photosites (compared to red and blue), they are evenly distributed, and it is likely computationally efficient to process only one channel. I am making an assumption with respect to the why, however my recollection is clear on the how.
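
If it helps make that concrete, here's a rough sketch in Python of pulling out just the green photosites. It assumes an RGGB Bayer layout and the rawpy package, and the file name is a placeholder; real converters read the actual pattern from the file, so treat this as an illustration only:

```python
# Rough sketch: separate the green photosites from a Bayer mosaic.
# Assumes an RGGB layout; the file name is a placeholder.
import rawpy

with rawpy.imread("DJI_0042.DNG") as raw:
    mosaic = raw.raw_image_visible.astype(float)

    # In an RGGB 2x2 tile, green sits at (row 0, col 1) and (row 1, col 0).
    green = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0
    red = mosaic[0::2, 0::2]
    blue = mosaic[1::2, 1::2]

    # Half of all photosites are green, so green carries most of the
    # luminance (fine detail) information a converter works from.
    print("mean green:", green.mean())
    print("mean red:  ", red.mean())
    print("mean blue: ", blue.mean())
```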
 

@Citizen Flier I suspect your experience is the norm. I don’t own and haven’t flown with the SC.
 
On how luminance and colour come out of the photosite values, I found this good resource:


The RAW file format has yet to undergo demosaicing, and so it contains just one red, green, or blue value at each pixel location
...
“Our eyes perceive differences in lightness logarithmically, and so when light intensity quadruples we only perceive this as roughly a doubling in the amount of light. A digital camera, on the other hand, records differences in lightness linearly — twice the light intensity produces twice the response in the camera sensor. This is why the first and second images above look so much darker than the third. In order for the numbers recorded within a digital camera to be shown as we perceive them, tone curves need to be applied.”

It would have been more accurate to say that the raw image reader applies an arbitrary tone curve to derive the bit mapped image.
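
A tiny numeric example of that, in Python (a plain 2.2 gamma is used here as a stand-in for whatever curve a real converter actually applies):

```python
# Why untouched linear sensor data looks dark: no tone curve has been applied.
# A plain 2.2 gamma stands in for a real converter's tone curve.
linear_values = [0.02, 0.09, 0.18, 0.36, 0.72]   # fractions of full scale, as recorded

for lin in linear_values:
    displayed = lin ** (1 / 2.2)                  # gamma-encoded, roughly as shown on screen
    print(f"linear {lin:.2f} -> displayed {displayed:.2f}")

# An 18% grey card is 0.18 in linear terms but about 0.46 after the curve,
# which is why a straight linear rendering looks several stops too dark.
```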

I didn’t mean to say that you can’t under- or over-expose shots with raw. Obviously this is possible. What I said was that if the image started out nearly black and could be processed back to what should be the correct exposure, the issue is with the raw photo reader, not the image itself.

@Citizen Flier “as shot” in Lightroom and Camera Raw refers to white balance, not tone curve or exposure compensation. Ensure that you have your Camera Raw defaults set to all zero, including the tone curve, which some people tend to forget about. The live view from the drone is never going to be exactly what you end up with. The brightness of the screen could also be responsible for over- and under-exposed shots, if that is in fact the issue. Using the histogram is the only real way to determine the correct exposure in camera. However, it seems to me the issue is with the raw program you are using.
 
If the image starts out as nearly black, it is underexposed. If you aren’t exposing in camera so that the brightest element in the scene is at or close to the max value the sensor can record, you haven’t used the full available dynamic range and you will have more noise in the darker areas of the recorded scene.

It is a trivial exercise for a raw converter to display a depiction of the actual recorded exposure. If we have 8 bits per pixel, the highest value that might be depicted in the output of A/D conversion in the camera is 255. This will be pure white. No fancy luminance or tone curves are required to get to that point.
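
A rough way to check that from the raw file itself, sketched in Python with rawpy (the file name is a placeholder and the "headroom" figure is only a back-of-envelope number):

```python
# Rough sketch: how close an exposure came to the sensor's clipping point.
import numpy as np
import rawpy

with rawpy.imread("DJI_0042.DNG") as raw:
    mosaic = raw.raw_image_visible
    white = raw.white_level                     # maximum value the A/D converter can output
    black = max(raw.black_level_per_channel)    # values at or below this are just offset/noise
    peak = int(mosaic.max())

    clipped_pct = 100.0 * np.mean(mosaic >= white)
    headroom_stops = np.log2((white - black) / max(peak - black, 1))

    print(f"pixels at clipping: {clipped_pct:.2f}%")
    print(f"unused headroom:    {headroom_stops:.2f} stops")
    # Exposed to the right: headroom near zero with only a tiny clipped fraction.
```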
 
Maybe I’ve misunderstood the OP. I took it that he was saying the photo is not actually underexposed, and that it was just the default representation of the image when opened in Camera Raw that was too dark: an issue of the exposure setting and tone curve in the DNG or Camera Raw. Of course it is capable of correctly representing the photo, but it sounds like it’s not, and that’s the issue.

As to the live view, remember that it is only a software representation of the exposure. It can only simulate the effect of the amount of light that will be captured over the set shutter duration, and it may do this poorly, particularly with the longer shutter durations that may be needed at dusk or sunset. The image streaming to the live view, for instance, is 8 bit, while the actual RAW photo will be 12 bit, I believe. I agree that you should expose for the brightest part of the frame, but I disagree that the image coming to the live view is an appropriate tool to properly determine exposure. My point is that only the histogram can accurately provide exposure information in camera. However, we are getting off topic.
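
As a rough illustration of the 8-bit versus 12-bit point (the exact bit depths are my assumption, and the mapping is simplified to a straight linear squeeze):

```python
# Rough illustration: dark detail in 12-bit raw data vs an 8-bit preview.
# Bit depths are assumptions; the mapping is a simplified linear squeeze.
raw_values = [20, 40, 80, 160, 320]              # dark 12-bit values (0..4095)

for r in raw_values:
    preview = round(r / 4095 * 255)              # same value squeezed into 0..255
    print(f"12-bit {r:4d} -> 8-bit preview {preview:3d}")

# 20 and 40 are a full stop apart in the raw file but only one preview level
# apart (1 vs 2) - exactly the kind of shadow detail that survives in raw
# but is nearly invisible in a low-bit-depth preview.
```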

Like I said maybe I’ve misunderstood what the op is asking.
 
The OP seems to have a high level of proficiency in working with RAW files. The issue seems to be that the SC is rendering the preview differently from what the RAW file exposure is.

The principal issue I was addressing in my response to you was your claim that a raw photo doesn’t contain luminance levels for pixels, and that this is why it’s called “RAW”. This is demonstrably not true. The raw file contains a value assigned to each pixel by the A/D conversion of the recorded exposure. This value is a direct representation of its brightness; it is by definition luminance information. Knowing where the pixel is with respect to the Bayer filter allows colour information to be reconstructed.

Yes, the histogram is useful, especially when its limitations are understood. If I am really intent on getting the best image possible I will come back 1/3 stop from what seems to be hard right. Given that the vertical displacement represents the number of pixels at any particular luminance, we can get a few clipped pixels which often don't show on the histogram. Waterfalls are notoriously good subjects for catching us out here...
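
For what that 1/3 stop actually means in raw numbers, a quick back-of-envelope in Python (assuming a 12-bit sensor with a white level of 4095):

```python
# Back-of-envelope: raw values corresponding to "N stops below clipping".
# Assumes a 12-bit sensor with a white level of 4095.
white_level = 4095

for stops in (1 / 3, 1 / 2, 1.0):
    target = white_level / (2 ** stops)
    print(f"{stops:.2f} stops below clipping ~ raw value {target:.0f}")
```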
 
I read the OP again and I certainly misunderstood. That’s my bad.

On the issue of luminance, the camera records brightness on a linear scale but we perceive brightness on a logarithmic scale. Yes, the camera records differences in brightness, obviously, but not the same way our eyes perceive them. It’s up to the raw file reader to convert the data from the sensor to brightness levels as we perceive them, or as the artist wants the photo to be perceived.

If we just took the middle value from the camera sensor and used that as the gamma, it would be all messed up. For example, if the camera can record 5 brightness levels, 1 2 3 4 5, and we made 3 the gamma value, the photo would be all messed up because 2 would appear twice as bright as 1, but 3 would only be 50% brighter than 2, and 5 would only appear 25% brighter than 4. The raw file converter must apply a tone curve to make it so that the luminosity is evenly distributed throughout the photo. This tone curve or luminosity curve is not contained within the raw data and will vary between raw photo processors, and even between different profiles within the raw file reader.
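
A quick check of those ratios (plain Python, just the arithmetic from the example above):

```python
# The arithmetic from the five-level example: equal numeric steps in linear
# data are very unequal steps in relative brightness.
levels = [1, 2, 3, 4, 5]

for lo, hi in zip(levels, levels[1:]):
    print(f"{lo} -> {hi}: {100 * (hi - lo) / lo:.0f}% brighter")

# 1->2: 100%, 2->3: 50%, 3->4: 33%, 4->5: 25% - hence the need for a tone
# curve to redistribute the levels before the image looks right to the eye.
```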

Sure, DNGs can come with embedded profiles which contain a LUT to try to produce the same tone curve used by the camera, but that profile is separate from the raw data from the camera sensor, which is the “RAW” file proper.
 
Yes, there is a gamma correction that needs to be applied (and often fiddled with in post) to create a realistic or creative/artistic depiction. You have wandered off into a different, albeit interesting, topic. The fact remains: all RAW files contain luminance information, for every pixel.
 
Fair.
 
The only thing that makes sense is that for some reason Camera Raw did not apply any default settings for that DNG.
I don’t use Camera Raw, I use Capture One, and it seems to understand them well enough.
And even if Camera Raw normally recognizes DNGs from this camera, it’s possible that for some reason it did not in this case.
 
As far as I have been able to tell with my drones, the “dng” that comes out is not a real DNG but rather a TIFF embedded in the DNG wrapper. It has far less dynamic range than the original Bayer representation.
 