DJI Mavic, Air and Mini Drones
Friendly, Helpful & Knowledgeable Community

Mavic 2 Pro HDR and Multiple Exposure

CanaanBill
Member · Premium Pilot · Joined Apr 15, 2018
A month or so ago, someone wrote that new software/firmware allowed you to save the individual raw images when shooting HDR. I am running the latest everything, and HDR only saves one composite image as a JPEG. They mentioned a selection under camera settings that my software doesn't seem to have. Any suggestions?
 
I think I understand your question. Hope this helps. Taken from the DJI GO 4 manual.

[Attached screenshot from the DJI GO 4 manual]
 
You will get vastly better results if you bracket the shot yourself using AEB and process the RAWs in external software, if you are so inclined.
 
Only if you know what you're doing. The OP might be a beginner with photography, so an HDR JPEG would be a great starting point.
 
Fair enough, but I am assuming he has at least some knowledge, given that he invested in the Mavic 2 Pro and not a cheaper drone. Still, I am just guessing.

You don't really need to know what you're doing to produce a better result than DJI's in-drone conversion gives you - it is quite bad, and the JPEG it spits out has virtually no file malleability left. Importing 3-5 files and clicking a "generate HDR" button is pretty straightforward; even if you have to watch a 5-minute YouTube tutorial, I think most people can handle that.

Anyway, just a suggestion - if nothing else, I would encourage people not to be intimidated by software like that, and it can be fun to experiment. To get the most out of any drone (photo and video), and especially something like the M2P that can actually give you a decently malleable file, you need to be using external software.
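The merge step really is that straightforward. For the curious, here is a toy sketch of the idea behind those "generate HDR" buttons - a heavily simplified, pure-NumPy Mertens-style exposure fusion on synthetic data (the scene, bracket values, and weighting function are all illustrative assumptions; real merge engines add alignment, deghosting, and better per-pixel weighting):

```python
import numpy as np

# Toy exposure fusion - each frame is weighted per pixel by how
# well-exposed that pixel is, then the weights are normalized and
# the frames summed. This is the core idea behind Mertens fusion.

def fuse(frames, sigma=0.2):
    """frames: list of float arrays in [0, 1]. Returns the fused array."""
    stack = np.stack(frames)                        # (N, H, W)
    # Well-exposedness weight: Gaussian centered on mid-grey (0.5).
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)   # normalize per pixel
    return (weights * stack).sum(axis=0)

# Simulated 3-shot bracket of a high-contrast gradient "scene" at
# -1.3 / 0 / +1.3 EV, clipping at both ends like a real bracket would.
scene = np.tile(np.linspace(0.02, 2.0, 256), (4, 1))
bracket = [np.clip(scene * 2.0 ** ev, 0.0, 1.0) for ev in (-1.3, 0.0, 1.3)]

fused = fuse(bracket)  # stays in [0, 1], keeps detail at both ends
```

The fused result keeps gradient detail in regions where one frame is clipped, because the unclipped frames get nearly all the weight there.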
 

Agree with you 100% on all your comments! I bought my M2P back in September. I'm no pro photographer, but I bought the DJI model knowing that it's the best drone to buy; I would never buy one of those cheap things you see on shelves in the supermarket. This is my 3rd drone I have bought from DJI.

I started off with HDR, and still use it, but I spent time understanding, experimenting, and getting as much knowledge as I could before I started playing around with AEB/RAW files and stitching them together in post. I've had some really bad results doing this, but you learn from your mistakes.

It's a big learning curve for people just starting out with drone photography. Look at the photos posted on this forum: some are great in my eyes, and then you'll see another and think, wow, that's brilliant - that person has been in the game for years.

Not all people with the M2P/M2Z are experts, but people just assume they are when it comes to the art of taking nice photos. I'm definitely not in that class, but I'm learning.
 
You will get vastly better results if you bracket the shot yourself using AEB and process the RAWs in external software, if you are so inclined.

Honestly, you'll get vastly better results if you just expose to the right and apply local adjustments to the image as necessary... for the reasons you've stated elsewhere.
 
I think you are confused - the topic of this thread is HDR and multiple exposure. An HDR (High Dynamic Range) image is desired when the scene contains a greater range of contrast and tonality than is possible to capture with a single exposure. ETTR does not magically allow your sensor to capture more information than it is capable of in a single exposure. You will get much better results by using 3, 5, sometimes even 7+ images than by trying to force an HDR out of a single image using local adjustments or highlights/shadows sliders. If you are able to create the desired image with a single exposure and local adjustments (within the DR limits of the sensor and without introducing unacceptable amounts of noise), then you did not really need an HDR in the first place - simple as that.

There are tools, such as graduated ND filters, that can help you achieve a similar result in a single exposure but for many scenarios they either do not work or are simply not suitable.

The greater the dynamic range of the scene, the more exposures you will typically want to map to end up with the best possible result and the smoothest tonal gradients. This also depends on the dynamic range capabilities of the sensor itself. An image sensor has its greatest DR capability at base ISO, where it can reach full well capacity (FWC) - as you raise ISO, you reduce the usable number of photons per photosite. For most sensors, this is a very linear process.

As a side example, you will notice the best smartphone cameras on the market are currently using up to 10 exposures to build their HDR images rather than just automatically applying local adjustments to a single image based on a pixel-luminosity algorithm or something similar - this is because it would be literally impossible for them to achieve the same result with a single image, especially with those sensors. The benefits of multiple exposures do not end with HDR photography - they can also aid high-ISO noise performance. Working off the knowledge that noise occurs randomly in an image, you can stack multiple exposures to dramatically reduce the overall noise in an image - Google's Pixel phones actually do this to get far better high-ISO results than their tiny sensors would otherwise be capable of with a single exposure.
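As a quick sanity check on the stacking point: averaging N independent frames reduces random noise by roughly sqrt(N), because the signal adds coherently while the noise partly cancels. A small synthetic sketch (Gaussian noise standing in for sensor noise is an assumption - real shot/read noise is only approximately Gaussian):

```python
import numpy as np

# One noisy frame vs. the average of 16 noisy frames of the same scene.
rng = np.random.default_rng(42)
signal = np.full((200, 200), 0.5)   # flat grey "scene"

def noisy_frame():
    return signal + rng.normal(0.0, 0.1, signal.shape)

single = noisy_frame()
stacked = np.mean([noisy_frame() for _ in range(16)], axis=0)

noise_single = np.std(single - signal)    # ~0.1 (the per-frame noise)
noise_stacked = np.std(stacked - signal)  # ~0.1 / sqrt(16) = ~0.025
```

The 16-frame stack measures about a quarter of the single-frame noise, i.e. a two-stop improvement in noise terms.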
 
Yeah, I can only recommend taking pics in AEB and combining them yourself - much more freedom.
 
I think you are confused - the topic of this thread is HDR and Multiple Exposure. [...]

I'm not at all confused. I shoot plenty of HDR stuff - have for years. I'm suggesting that a well-processed, properly ETTR-exposed image (in RAW) would give you generally better results than DJI's internal HDR algorithms. I'm as bullish on computational photography as anybody (and, if I may be so bold, probably understand the underlying mathematics better than the average bear). My claim was intentionally limited in scope: using a single, well-ETTR'ed RAW would yield better results than DJI's internal HDR algorithms. That's it.

Also, FWIW, using the internal 5-bracket AEB is only going to give you 1.3 stops of additional range (if properly exposed) - useful, no doubt, but not enough to give results that are as drastically different as you suggest - certainly not like using a 5-or-7-shot 2 EV bracket on an SLR.
 
I'm not at all confused. I shoot plenty of HDR stuff. Have for years. [...]


I wonder how you would achieve better exposure in a single shot versus several photos combined at different exposures for this photo:

Taken on my m2p

If I exposed for the bright area, the dark area would be lost, and vice versa.

For scenes like this, no properly ETTR'd single photo would be superior to combining several photos.

I am willing to stand corrected.

[Attached photo]
 
I wonder how you would achieve better exposure in a single shot versus several photos combined at different exposures for this photo. [...]

So there are a few issues here that are worth untangling, possibly.

First, let's separate what is *theoretically* the case from what is *practically* the case. In theory, yes - perfectly combining the information from shots at different exposures can give you a wider tonal range. However, this assumes a few things: 1) the tonal range of the scene exceeds the tonal range of the sensor, 2) whatever algorithm you're using to combine the shots does a decent job of deciding what parts of each image should be kept and blends them together well, 2a) the algorithm doesn't introduce artifacts in the process (e.g. halos), and 3) at least one of the shots you're using contains blown highlights, and at least one does not.

Let's start with a practical consideration on the Mavic's HDR/AEB implementation - namely that it uses a 0.7EV interval with a 5 shot maximum spread - meaning that in the best case, we're going to get an additional 2.5-3.0EV of range in the shadows compared to a perfectly exposed ETTR shot - but you'll only get that if you overexpose the entire AEB (i.e. if the "darkest" shot of your AEB is perfectly ETTR). I certainly wouldn't dispute that you *can* get better results in the shadows by doing this. But I'd also contend that it's not, practically, what most people do or what most people mean when they talk about using HDR/AEB to get a "better" result on the Mavic (and doesn't appear to be what you did in the shot above). And remember here that we're really talking about shadow noise and detail, not highlight detail. Half of the information in the (raw) image is dedicated to the brightest 1EV of information. If you've done your ETTR properly, you're going to have plenty of highlight information, and underexposed shots aren't really going to give you much, if any benefit (in fact, they can theoretically hurt you).
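For concreteness, the bracket arithmetic works out like this (a trivial sketch; the 0.7 EV step and five-shot count are taken from the description above and worth verifying against your own firmware):

```python
# EV offsets for an N-shot AEB bracket centered on the metered exposure.
def aeb_offsets(n_shots, step_ev):
    half = (n_shots - 1) / 2
    return [round((i - half) * step_ev, 2) for i in range(n_shots)]

offsets = aeb_offsets(5, 0.7)            # [-1.4, -0.7, 0.0, 0.7, 1.4]
total_spread = offsets[-1] - offsets[0]  # 2.8 EV darkest to brightest
```

So if the *darkest* shot of the bracket is the one that is perfectly ETTR'd, the brightest shot digs roughly 2.8 EV further into the shadows - consistent with the 2.5-3.0 EV figure above.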

My suspicion is that when *most* people talk about using the HDR/AEB functions on the Mavic, they're talking about having the center exposure be "correct" per the camera meter and then blending the +1.3EV / - 1.3EV below that. But here's the thing - if you do that and your brightest shot doesn't have blown highlights (which it's likely not to at only +1.3EV over the camera's meter point), then you actually *can't* do any better than a properly exposed ETTR shot because you don't actually have any additional tonal information (remember - underexposing doesn't give you any additional tonal information. note: not strictly true, but true enough for the purposes of this discussion). Stated differently, if you HDR/AEB and the brightest shot you're using is not blown, then you're better off using the brightest exposure you've got, because it will have the most shadow detail, and the darker exposures don't really give you significant benefit in the highlights.

It's hard to comment on the exact scene you posted without looking at each of the individual DNG files, but I'd encourage you to try yourself - take the brightest exposure that isn't blown and use local adjustments to pull down the exposure on the brightest parts of the image and boost the exposure on the darker parts of the image. I think you'd be surprised how much detail is preserved in just a single layer. My initial thought on your shot is that the sky is completely blown out, so I'm not sure your example is even a particularly good example of what HDR can do (assuming what you posted was the HDR JPG output from the Mavic).
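The "brightest exposure that isn't blown" pick can even be automated. A small sketch, assuming frames normalized to [0, 1] and a hypothetical 0.1%-of-pixels clipping threshold:

```python
import numpy as np

# Pick the brightest frame of a bracket whose highlights are not blown,
# using a simple proxy: fewer than 0.1% of pixels at or near full scale.
def brightest_unclipped(frames, clip=0.99, max_frac=0.001):
    """frames: float arrays in [0, 1], ordered darkest to brightest."""
    best = frames[0]
    for f in frames:
        if np.mean(f >= clip) <= max_frac:
            best = f  # still unclipped: prefer this brighter candidate
    return best

# Demo bracket: +1 EV pushes the top of the gradient past full scale,
# so the 0 EV frame is the brightest usable exposure.
scene = np.tile(np.linspace(0.05, 0.6, 200), (2, 1))
bracket = [np.clip(scene * 2.0 ** ev, 0.0, 1.0) for ev in (-1.0, 0.0, 1.0)]
choice = brightest_unclipped(bracket)
```

In practice you would do the same check on the raw histogram in your editor rather than in code, but the selection logic is the same.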

For the record, some single-layer ETTRs, SOOC and "final product" (though I didn't spend a ton of time on these):

SOOC:
1-original.jpg

Processed:
1-processed.jpg

SOOC:
2-original.jpg

Processed:
2-processed.jpg

And, for grins, I went back and found one that I'd actually used HDR on (one of my 0.7EV brackets was actually blown, so it was a decent test case):

HDR:
3-HDR.jpg

Single Layer:
3-single.jpg
(note - this was a quick process, and I didn't try to match the colors exactly).

tl;dr:

Is it possible to get better results with HDR compared to a single layer, if done properly in camera and using a competent HDR processing engine? Absolutely. Are there situations where you should use it? Of course. Does the Mavic's implementation of HDR/AEB limit its usefulness unless you really know what you're doing? Certainly. Do most people use HDR/AEB in such a way (especially on the Mavic) that maximizes those benefits? Doubtful.
 
So there are a few issues here that are worth untangling, possibly. [...]

Okay, I want your Magic Wand! (g)

Those results do seem magical, but I guess knowing how to process the images is the real magic. Thanks for the visual education of what is possible.
 
So there are a few issues here that are worth untangling, possibly. [...]


Thank you for the elaborate response.

I fully understand your point.

The photo I posted was not bracketed; it was a single shot where I manually controlled the aperture to yield the best photo for further processing.
 
