
Better stills from the Mavic

Is my pic OK? Uploaded it off my iPad. Still all new to this, but it seems a lot better than what I was getting at first.

Since you uploaded from the iPad, I'm wondering if you are posting the "straight from camera" image, or if there is any processing?

I think you did great. Composition is good. I like the sunset; you managed to get some really good colour in it. Shadows are a bit dark, but that isn't easy to get right with that particular shot.

The image is sharp, so that's good. Much better than the first few images I took with mine. Took me a little playing around to get what I wanted.

I personally shoot in RAW with D-Log because it gives a flatter image, making shadows show up more, at the cost of some of the pop; I use Lightroom to bring back what was lost. But if you plan to shoot straight from the camera and upload from an iPad, then you could try changing the style to another option, as you can get really nice colours with some of the styles. I suggest playing with the different styles to figure out what will work best in different situations.

It's always a bit of a learning curve (I'm still learning on the video side of things myself), but keep at it, play with settings, and use the forum and YouTube videos to learn all you can.
 
Curious as to why you're "uprezzing" the photos. Most algorithms degrade the image - there's only one (commercial) way to expand image size with minimal degradation: fractal compression. The current iteration of this product is called ON1 Resize, by ON1 Software.
I have printed 12 mp shots from my Canon 5D at 36x24 that look great with no size manipulation aside from printer rendering.
It's about viewing distance with prints. Next time you're in a store, look at the razor-sharp ad posters - then walk up close. Not so good, eh? There are what we call "pixel peepers" who like to give a print close scrutiny and ooh and ah over the leaf counts in trees. I guess if that's what thrills ya...
Anyway - stacking should actually work better without the resizing, since resizing artificially inflates the noise and "fuzzes" the edges, making them appear less sharp (acutance). Stack first, then blow up if you feel the need, bearing in mind that any algorithm to expand pixels is more or less making them up based on an original pixel plus its surrounding pixels.
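If you want to test the two orderings yourself, here's a minimal numpy/scipy sketch (the frame file names are hypothetical, and it assumes tripod-steady frames so no alignment step is needed) - run both pipelines and compare the outputs:

```python
# Compare stacking order: stack-then-resize vs resize-then-stack.
# Assumes a burst of same-scene frames named frame_0.png ... frame_7.png
# (hypothetical names) that are already aligned (tripod-steady).
import numpy as np
from PIL import Image
from scipy.ndimage import zoom

frames = [np.asarray(Image.open(f"frame_{i}.png").convert("RGB"), dtype=np.float64)
          for i in range(8)]

# Pipeline A: average the originals, then interpolate 2x.
stack_first = zoom(np.mean(frames, axis=0), (2, 2, 1), order=3)

# Pipeline B: interpolate each frame 2x, then average.
resize_first = np.mean([zoom(f, (2, 2, 1), order=3) for f in frames], axis=0)

for name, img in (("stack_then_resize", stack_first),
                  ("resize_then_stack", resize_first)):
    Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)).save(name + ".png")
```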
 
Brojon.
That's great advice too. Like I said, I'm new to this - only been flying for the last 3 months or so.
Where in Texas are you? I was over there last year on holiday, in Mansfield. What a fantastic place to visit. Would move there tomorrow.
 

Read about the technique, then actually do the experiment for a number of images; then see what works best for you.
 
I've been using Photoshop and Lightroom for a long long time. Been a photographer in film and digital even longer.
The things I said I stand behind.
The part that perhaps I'm reading wrong is "In ACR, set resolution at least double the initial file size. If you use LR, you will have to uprez in PS." There are two ways to read this:
1 - double the pixel count - straightforward image size interpolation.
2 - double the resolution - i.e. from 240 dpi to 480 dpi without changing the pixel count.

#2 of course will do nothing except change the perceived print size. If you leave resample checked, then yes, you are interpolating the image - making up new pixels.
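The difference between the two readings is easy to demonstrate with, say, Pillow: changing the DPI tag touches only metadata, while resampling actually invents new pixels. A minimal sketch (the file names are hypothetical):

```python
from PIL import Image

img = Image.open("photo.jpg")  # hypothetical input file

# Reading #2: change the resolution tag only - the pixel count is untouched,
# just the implied print size changes.
img.save("photo_480dpi.jpg", dpi=(480, 480))

# Reading #1 (with "resample" on): double the pixel count by interpolation -
# new pixels are invented from existing ones.
doubled = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
doubled.save("photo_2x.jpg")
```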
It appears that what is being attempted is to "fuzzify" the image edges so the stacking algorithm will "merge" them better. In reality the alignment algorithm depends on the edges to do its job properly. Making them less distinct isn't likely to help the process along.
Have you tried the process without manipulating the image pixel count?
I base my conclusions on the fact that astrophotographers use this stacking technique all the time to reduce noise. Since noise is random, the process can cancel it out to some degree. Fuzzify it with interpolation and it would likely work less effectively.
 
I would use ‘median’ stacking rather than ‘mean’ stacking if your goal is to improve image quality. Mean stacking is good for imitating long exposures.
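To see the difference, here's a minimal numpy sketch (the aligned-frame file names are hypothetical): the median rejects transient outliers such as hot pixels or a passing bird, while the mean blends everything in, which is why it mimics a long exposure.

```python
import numpy as np
from PIL import Image

# Hypothetical pre-aligned frames of the same scene.
stack = np.stack([np.asarray(Image.open(f"aligned_{i}.png").convert("RGB"),
                             dtype=np.float64)
                  for i in range(10)])

mean_img = stack.mean(axis=0)          # blends everything: long-exposure look
median_img = np.median(stack, axis=0)  # rejects transient outliers

Image.fromarray(np.clip(median_img, 0, 255).astype(np.uint8)).save("median.png")
Image.fromarray(np.clip(mean_img, 0, 255).astype(np.uint8)).save("mean.png")
```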

I am dubious about uprezzing in PS. Typically a good printing house will have much more sophisticated uprez software that matches their printer.

The best way to increase resolution is through stitching.
 
By that you mean as in a panoramic stitch?
Printing houses use a thing called a RIP (Raster Image Processor) to print fine art prints. Part of their job in rendering is to match the pixel count of the image to the printer resolution. All printing apps have a render engine which handles this matching but comparing Photoshop print rendering to a professional RIP is like comparing a Toyota to a Lambo.
I'm still unclear on the need for such high resolution. I have 24x36" prints made from a 12 mp sensor (Canon 5D) and a 24 mp sensor (Canon 5D MkII and Sony NEX-7).
At normal viewing distances you're hard-pressed to say which is which. If you pixel peep, then yes. But I would submit that very, very few images are printed at 24x36" or larger, and even if they are, people aren't sticking their nose into them. There are online calculators to determine how large a particular megapixel image can be printed and viewed at a specified viewing distance. It may be educational to play with one to see what's really needed. For example, using the calculator you can see that a 24x36 image to be viewed at a distance of 3 feet only requires an 8 mp image printed at 92 dpi.

Print Resolution Calculator - Points in Focus Photography
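For the curious, the arithmetic behind such calculators is straightforward: normal 20/20 vision resolves about one arc-minute, which works out to roughly 3438 divided by the viewing distance in inches, in pixels per inch. A sketch of that rule of thumb (the linked calculator may use slightly different constants):

```python
import math

def required_ppi(viewing_distance_in: float) -> float:
    """PPI needed so 20/20 vision (~1 arc-minute) can't resolve pixels."""
    # One arc-minute subtends (distance * tan(1/60 degree)) inches.
    return 1.0 / (viewing_distance_in * math.tan(math.radians(1.0 / 60.0)))

def required_megapixels(width_in, height_in, viewing_distance_in):
    ppi = required_ppi(viewing_distance_in)
    return (width_in * ppi) * (height_in * ppi) / 1e6

# A 24x36" print viewed from 3 feet:
print(round(required_ppi(36)))                    # ~95 ppi
print(round(required_megapixels(24, 36, 36), 1))  # ~7.9 MP
```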
 

This is a fairly well-known technique, often referred to as geometric super-resolution imaging, which uses small sensor shifts to achieve both noise reduction and increased resolution. It reduces noise by averaging multiple aligned, shifted images, which reduces both systematic and random noise. It can also increase resolution, because stack averaging of sensor-shifted images is effectively a simple sub-pixel image localization method. To achieve the latter, the original images have to be upsampled first; the stack averaging itself then sharpens the details, prior to the application of any sharpening algorithms.

It's a very simple method to test if you want to see it in action - all you need are a set of handheld (to get some random sensor shift) burst-mode images and Photoshop, Affinity or a similar program with stack capabilities.
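For anyone who wants to try it outside Photoshop or Affinity, here's a rough Python sketch of the upsample-align-average recipe, using scikit-image's sub-pixel phase correlation for the alignment (the burst file names are hypothetical; grayscale keeps it short):

```python
import numpy as np
from PIL import Image
from scipy.ndimage import shift, zoom
from skimage.registration import phase_cross_correlation

UPSAMPLE = 2  # upsample so sub-pixel detail can be represented on a finer grid

# Hypothetical handheld burst frames of the same scene.
frames = [np.asarray(Image.open(f"burst_{i}.png").convert("L"), dtype=np.float64)
          for i in range(10)]

# Upsample first so sub-pixel shifts can be stored.
ups = [zoom(f, UPSAMPLE, order=3) for f in frames]

reference, accum = ups[0], ups[0].copy()
for frame in ups[1:]:
    # Measure the residual shift to sub-pixel precision...
    offset, _, _ = phase_cross_correlation(reference, frame, upsample_factor=20)
    # ...then align the frame to the reference and accumulate.
    accum += shift(frame, offset, order=3)

result = accum / len(ups)
Image.fromarray(np.clip(result, 0, 255).astype(np.uint8)).save("superres.png")
```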
 
Like I said - they're making the edges "fuzzier" to throw off the alignment algorithm and hopefully an improved version pops out the other side.
I see the utility in stacking with astrophotography; I'm not so convinced with ordinary daylight images, which are unlikely to suffer more noise than ISO 800 delivers even at dusk. Although I did once forget to take off my ND32 ;) Lightroom, for example, has an outstanding, easy-to-use noise reduction system.
There are other ways to gain perceived sharpness - that same high-pass filter is used in a procedure called "high-pass frequency separation", allowing selective sharpening and noise reduction with great success. What's perceived as "sharpness" is nothing more than an enhanced contrast boundary. Much better to pay attention to midtone contrast adjustments for picture quality, which can also be accomplished using the same high-pass filter with a high radius and blending modes. I learned that one from Jeff Schewe in Atlanta.
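High-pass sharpening is simple to sketch in code: blur the image, treat the difference from the original as the high-pass layer, and add a scaled copy back. With a small radius that reads as edge "sharpness"; with a large radius the same move lifts midtone/local contrast instead. A rough Pillow/numpy version (the file name and parameters are just illustrative):

```python
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("photo.jpg").convert("RGB")  # hypothetical input
arr = np.asarray(img, dtype=np.float64)

def high_pass_boost(arr, radius, amount):
    """Add back a scaled high-pass layer: a small radius sharpens edges,
    a large radius lifts midtone/local contrast instead."""
    blurred = np.asarray(
        Image.fromarray(arr.astype(np.uint8)).filter(
            ImageFilter.GaussianBlur(radius)),
        dtype=np.float64)
    return np.clip(arr + amount * (arr - blurred), 0, 255)

sharp = high_pass_boost(arr, radius=2, amount=1.0)      # edge "sharpness"
punchy = high_pass_boost(sharp, radius=60, amount=0.3)  # midtone contrast
Image.fromarray(punchy.astype(np.uint8)).save("enhanced.jpg")
```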
As I mentioned, there is little need for increased resolution in the real world unless you just like to go over very large prints with a magnifying glass. Even so, resampling an image is making new pixels from existing pixels - the algorithm creates new pixels on best guesses. Kinda like breast implants adding material to make more than what's natural. ;)
But if that's your thing...
 

Now you are deliberately muddying the subject with random red herrings. You have completely misunderstood how this works - it's not an edge enhancement or high-pass filter algorithm. Additional real spatial information actually exists in such a stack. But since you don't seem to be interested in how it works, and instead are just criticizing your misconception of it, further discussion is rather pointless. And whether or not you think higher-resolution images are useful is a completely different issue.
 
Shooting RAW, using ISO 100, Auto White Balance; Colour is D-Log. Style I'm not sure about at the moment (don't have the drone with me to connect), but I believe it is +1, -1, -1 - I will check on that.

I wasn't aware Color style had any effect on RAW images.
 
I understand but I'm not sure you do. "Geometric super-resolution imaging" uses a specialized algorithm to reconstruct - and Photoshop align isn't it.
Furthermore the technique assumes that the image being enhanced is itself sampled to a lower resolution but still contains the high frequency information which can be reconstructed. All the papers I have read about it also state that it is not acceptable for images with motion artifacts or misalignments.
That's not to say that the procedure put forth doesn't have any merit, just that it isn't quite up to the $10 fancy name you applied. The primary purpose for image stacking is for noise reduction - not enhancing micro details. You may as well get one of the numerous detail enhancer plugins and have at it.
My original quibble is with the assertion that more megapixels = a good thing. Did you bother to read the link I provided to the calculator and an explanation of why more pixels isn't necessary? One photographer did an experiment with a Billboard magazine cover from his friggin iPhone.

A photographer used an iPhone 7 Plus to take this stunning 'Billboard' magazine cover

We won't even go into the fact that most images are not even printed large - instead they're shown on forums such as this and are lucky to have any side over 1200 pixels. Most folks that get wrapped up in pixel counts do so for bragging rights, not practicality. But I don't care per se - I was merely trying to help, as some folks suffer under the misconception that having more pixels somehow makes the image better.

PS: To jog your memory, here's an excerpt from the original post:
The stills (from RAW) are usable for lots of purposes, but not for large prints. If you are handy with Photoshop, though, here's a recipe for improving resolution.
 

You are still dissembling, and wriggling around to try to avoid the point of the original discussion.

Yes - advanced super-resolution systems use specialized algorithms, but those are not essential to be able to combine spatial information from multiple shifted images.

Yes - the method does not work well for images with movement - no one said that it did, and that very issue was mentioned earlier.

Your "$10 fancy name" comment is asinine since the name is correct.

Stacking, depending on how it is used, can both reduce noise and enhance detail. I'll assume (dangerous, I know) that you understand how the noise reduction method works. The enhanced detail is available, in engineered systems, from the inter-image sub-pixel shifts that are applied systematically. In this approach the sensor shift arises from the stochastic variations in handheld (or UAV-held) sensor position, which is simply a poor man's equivalent. You do not have the privilege to define the primary purpose of anyone else's use of stacking.

No - this is not at all equivalent to detail enhancer plugins, and that suggestion clearly demonstrates that you have still missed the entire point of the method, which is that multiple images contain more information than a single image. Detail enhancers take the data available in one image and apply various global and local adjustments, mostly to sharpen edges. They are making assumptions about detail. Sub-pixel image localization, as its name implies, takes advantage of the increased quantity of data in a set of non-identical images. What don't you understand about that? Are you disputing that there really are more data (it is trivially obvious that there are), disputing that this method can extract some of the increased data (e.g. because PS is not a "professional tool"), or simply not thinking it through at all? I don't see any other explanations.

If all the images in a stack were identical (no object motion, no sensor shift, no random sensor noise) then this would not work - the stack average would identically equal each image. In practice the random noise is not zero, and so such a stack could at least reduce that. But if the sensor is moved slightly between images, different spatial data are captured in each image, and multiple shifted images contain more spatial data than any one image of the set. That permits both averaging out of systematic noise and increased localization of point (or small) light sources to better than the original pixel resolution, but requires image upsampling first, to allow the sub-pixel localization to be stored. Upsampling, aligning and averaging is one way to access the increased data, and the more images that are available, the higher the theoretical resolution achievable.
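A toy 1-D demo makes that concrete: sample a detailed signal at coarse resolution with random sub-pixel offsets, then upsample each sample set onto the fine grid, align by the known shifts, and average. A minimal sketch under those assumptions (np.interp does the upsample-and-align in one step here; Photoshop would have to find the shifts by alignment instead):

```python
import numpy as np

rng = np.random.default_rng(0)
N, STEP = 4000, 10
fine = np.sin(np.linspace(0, 120 * np.pi, N)) ** 2   # detailed "scene"
grid = np.arange(N)

aligned = []
for _ in range(16):
    off = int(rng.integers(0, STEP))     # random sub-pixel sensor shift
    pos = np.arange(off, N, STEP)        # where this frame actually sampled
    frame = fine[pos] + rng.normal(0, 0.05, pos.size)  # coarse, noisy exposure
    # Upsample onto the fine grid; the known shift handles the alignment.
    aligned.append(np.interp(grid, pos, frame))

recovered = np.mean(aligned, axis=0)

rms = lambda x: np.sqrt(np.mean((x - fine) ** 2))
print(f"single upsampled frame RMS error: {rms(aligned[0]):.3f}")
print(f"aligned stack average RMS error:  {rms(recovered):.3f}")
# The stack both averages out the noise and localizes detail between the
# coarse samples - the shifted set genuinely holds more information.
```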

And while you have consistently added the comment that no one needs higher-resolution images, that was not your original assertion (post #22), which was that this method cannot increase resolution without image degradation - which it most certainly can.
 
Ok - this has devolved into a p*****g contest. I suspect it started with another thread but how about we just agree to block each other since you obviously take exception to everything I say?
You are trying yet again to equate a sophisticated, specialized algorithm with simplistic image interpolation and a stacking algorithm in Photoshop. Apples and oranges. Just because it bears some superficial similarities does not make it what you claim it to be. SR algorithms at their heart are trying to simulate a smaller pixel than was actually there, based on minor variances between frames due to the way pixels are shifted out of the matrix. Originally I think the SR algorithms were developed for photomicrography and later adapted to computer vision applications. Successive images from a Mavic, while pretty darned good, are magnitudes apart in terms of image shift - or would you have us believe the Mavic can hold perfectly still?
Seriously man - let's just agree the technique has some merit without trying to gild the lily. Interpolation is making up pixels and will always degrade the image. Reconstructing an image with a stacking algorithm that tries to resolve differences between interpolated images can produce an image that is improved, by virtue of differences being magnified by the interpolation process. But SR it ain't.
I also stand by industry expert opinions on what image sizes are necessary as regards to display online and printing.
 
No contest in this, guys.
[Attached image: Be Nice.jpeg] ;)
 

OK - now you have actually read about how this works. I completely agree that this is a cruder version of the advanced techniques used in microscopy, and I made exactly that point earlier. But the principle is the same, and it works, possibly surprisingly well depending on your understanding of the alignment algorithms. I'm not particularly surprised, having been using autocorrelation methods for PIV purposes for many years - they are really quite robust. The latest commercial solutions by Adobe and others work very well, and that's all you need.

No one was gilding the lily. The OP described a well(-ish) known technique to reduce noise and increase detail, together with an example image that clearly illustrated the technique working. You jumped in, questioned why one would ever want to do that, and asserted, wrongly, that the technique fundamentally could not work. That is what I took issue with since, firstly, it does work and, secondly, it is trivially easy to test and verify that it works. But, as you now agree, the technique has merit, especially for certain types of photography.

And, again, no one is arguing with you about industry standards for image sizes as a function of print sizes, so I don't know why you keep bringing up that issue.

Regarding the other thread you alluded to - no, different day, different discussion.
 
Yes, I have a set of Taco RC filters that I use according to lighting conditions.
If you're just taking stills, why the filters? Seems to me you'd want as much light as possible on the sensor. Unless you're also shooting video, and it's just a matter of convenience.
 