
A very useful, easy 3-step technique for getting your Mavic footage to look polished and filmic

People have every right to live in a bubble, but please don't bring your prejudices (and bubble) into a discussion about drones. There are so many more (and better) places to have your little operating-system spats ... :)
 
It's simple: the iMac is a better computer. I left the PC world 10 years ago and have never looked back.
Nonsense. Apple has all but abandoned the serious creatives in its pursuit of dilettantes. They have borked color management, network file sharing and many other things since Tiger - and they simply don't care. Witness that to this day they have no native support for scanners - Fujitsu is the only manufacturer to come up with solutions that work, but those are mostly for document management, not professional, color-managed, quality image scanning.
I finally gave up in disgust and went back to the PC, which has incredibly good color management, and all the tools are on a par with any Mac versions. Microsoft *does* care about creatives and professional needs, as does Linux.
But we will likely agree to disagree, since Mac users are a pretty loyal group. So let's just stop any flames right here, as I have simply expressed my opinion.
I still have 2 Macs and 2 MacBook Pros - but I do all my serious work on a PC. I use my Macs mostly for Scrivener and GarageBand (which has become truly excellent).
 
I'm always amused at what passes for info on the forums. I understand that many folks are relatively new to computer imaging so I get that some things seem enlightening when in fact they're not.
This video is a fluff piece. As a professional photographer I find it troubling that the originator went to all the trouble to use a pre-made commercial LUT ($$$) and attributes the "cinematic look" to the LUT, when all the LUT does is compensate for the color space the video was shot in. Then the saturation is jacked up to where the sky is an unnatural cyan, which is pretty much the antithesis of color correction.
No mention is made of fps to eliminate the jello effect - this is only about slapping a premade LUT and noise reduction profile onto a video clip. Then I'm REALLY puzzled why it's claimed 2.7K is the best resolution to shoot at and then upscale to 4K. The bits are interpolated in "upscaling", which will always be lower quality than shooting at the final resolution.
So color me unimpressed. It reminds me of Arthur C. Clarke's third law: "Any sufficiently advanced technology is indistinguishable from magic" (to a primitive culture).
A LUT isn't magic - someone simply came up with settings that produced a particular output from a specified input and saved them as a "Look Up Table". A preset noise profile is also easy - they've been around since Neat Image first appeared as a Photoshop plugin, back when camera sensors were small and noisy.
It would have been educational to show HOW to make color adjustments (properly) and how to use noise reduction software to create a profile. This is more of an ad for commercial products used improperly.
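Since the thread keeps coming back to what a LUT actually does, here is a minimal sketch of the idea, assuming numpy and a toy 8-bit, per-channel (1D) table; real grading LUTs such as .cube files are usually 3D and camera-specific, and this is not the commercial LUT from the video:

```python
# A LUT is just a precomputed table: every possible input code value is mapped
# to an output code value by some fixed formula, and "grading" is then a lookup.
# Toy 1D example with a gamma-style contrast curve; values are illustrative.
import numpy as np

def build_contrast_lut(gamma=0.8, size=256):
    """Precompute an output value for every possible 8-bit input value."""
    x = np.linspace(0.0, 1.0, size)            # normalized input 0..1
    y = np.clip(x ** gamma, 0.0, 1.0)          # any fixed recipe could go here
    return (y * (size - 1)).astype(np.uint8)   # back to 8-bit code values

def apply_lut(frame, lut):
    """Applying the LUT is plain indexing - no analysis of the image at all."""
    return lut[frame]                          # numpy indexing, per pixel

# Usage: frame is an (H, W, 3) uint8 array, e.g. one decoded video frame.
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
graded = apply_lut(frame, build_contrast_lut())
```

The point the post above makes holds either way: the table encodes someone else's fixed recipe, so it can normalize footage shot in a known profile, but it knows nothing about your particular scene or exposure.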
 
Crikey! Looks like you need a PhD in Photoshop to use that. Good results, though.
I agree with you. I'm a professional and handle post-production software like this all the time, BUT this is definitely not EASY, as announced in the title, for Mr. "Everybody". Either you already know the post-production software and don't particularly need the advice, or you have to learn it from A to Z, because there is no way to edit anything in 3 easy steps if you aren't familiar with it.
 
In the tutorial it mentions in the comments upscaling and why 2.7K is better than 4K, for those who think they know what they're talking about and claim they are "experts". Quoted from the channel in the comments on the video: "2.7K is better than 4K. The compression blocking that happens in 4K is far more severe than in 2.7K, and the 2.7K also holds better detail in low-contrast scenarios. The image loses detail in 4K whereas the 2.7K holds up better. From this point, with 4K it's harder to grade and the compression artifacts are far worse. Chart tests and practical tests are very different. Also, another plus for 2.7K is that it's much easier to work with and process. For me, I don't mind delivering 2.7K as a format, however a lot of people do want 4K and this is a solution to get the best quality out of the Mavic to 4K. Hope this helps? :)"

So yeah, this workflow will definitely help people achieve a quality look and easily get interesting grades by following these simple steps. The proof is in the pudding, which shows in the very high-quality films on the channel.
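For anyone wanting to try the delivery step the channel describes (shoot 2.7K, upscale to 4K/UHD for delivery), a hedged one-frame sketch using OpenCV's resize is below; the file names are hypothetical, and the video's author may well use an NLE's scaler instead. Whichever tool you use, the upscale only interpolates new pixels from the ones already captured, which is exactly the trade-off being argued in this thread:

```python
# Sketch: upscale a single 2.7K frame to UHD with Lanczos interpolation.
# Assumes OpenCV (pip install opencv-python); file names are placeholders.
import cv2

frame_27k = cv2.imread("frame_2704x1520.png")             # hypothetical 2.7K frame
frame_uhd = cv2.resize(frame_27k, (3840, 2160),
                       interpolation=cv2.INTER_LANCZOS4)  # interpolated pixels only
cv2.imwrite("frame_3840x2160.png", frame_uhd)
```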
 
DJI's ads didn't say we needed to spend lots of money and time on programs and parts to make the footage look half decent.

I'm not buying in.

The footage is what it is. I don't feel like wasting more time than necessary with the amount and type of shooting I do.

If you need higher IQ for commercial purposes, then it might be worth the jacking around.

Come back when you find a program where, with one click, you point it at your footage and it does the rest instantly :D
Nikon and Canon don't mention that you need to spend time and money to properly edit photos from their high-end DSLRs either, but all photos need proper editing and color adjustment. The Mavic - and any other drone's camera - will be no different.

Some people edit their images as required, get great results and move on. Others prefer to spend their time on internet forums complaining. To each his own.
 
Well if you are buying top end camera gear it is only natural that you invest considerable time and money on programs to match and to improve your end result. After all, that is what buying top end gear is all about, the end result.

On the other hand we have the skanky results from a drone carrying a camera that, all up, costs less than a single high-end lens.
The results from the Mavic without a lot of intervention need, as you admit, lots of time and money spent on programs to make them decent. Otherwise the IQ is average. Low light, pathetic.
That's my point and my opinion; you can whine and call it complaining all you want. You brought it up. ;)

One actually doesn't spend a lot of time and money on improving the footage. I guess it's relative and subjective. For all the hardware and software required, it's actually very little, and you get amazing cinematic results with minimal cost; all in, it costs less than a cheap lens. I personally think it's a lot of fun going through the process, which we could never have done a few years ago, when people had to hire a helicopter with big camera equipment, a cinematographer and so on. We live in EPIC times.
 
Sorry, I don't share your enthusiasm.

"Going through the process" is not why I bought the Mavic. Spending money on multiple programs and hunching over a PC for hours and hours is not why I bought the Mavic. Becoming a master of photography and learning all the camera settings, which seem to change with every firmware update, is not why I bought the Mavic.

I accept it for what it is. Some see that as complaining and then proceed to complain. :)

I think if I wanted uber photography or video results, I would spend more than $1000, and the Mavic would be nowhere near the top of the list.

That's fine; then perhaps mention that you don't require that for your personal needs. This post is for people who want to get the most out of their Mavic, and a lot of people don't have thousands of dollars to spend on a fancy Inspire 2 with an X5 raw camera. A lot of people with Mavics do want to make awesome videos, so let it be that they can with the resources that are available :)
 
Replying to the post above quoting the channel on why 2.7K upscaled to 4K is supposedly better:
Yet we know that setting sharpness to 0 or below kicks in DJI's excessive compression, which results in the blocky shadows and watercolor effect.
So the recommended settings that were put forth (-3,-3,-3) are what sabotage the 4K video. 4K is the native resolution. Interpolating (making up new pixels from existing ones) is not better than real captured pixels. If your deliverable is 2.7K then by all means shoot at that - bearing in mind that 4K gives better options for cropping, such as when you have tilted horizons.
That said, it does take a beefier machine to deal with 4K video, and it gobbles disk space like there's no tomorrow. But those are subjective matters and have no bearing on the assertion that 2.7K is better than 4K from a quality standpoint. You can find people on the web with both opinions, complete with pixel peeping to look at the shadows.
What's always missing is the actual exposure information, which has an enormous impact on image quality. For the sake of discussion assume the sensor records 10 bits. Those 10 bits run from darkest black (0) to whitest white (1023), and in a linear encoding each bit position represents roughly a full stop of exposure. This is roughly analogous to Ansel Adams' Zone System.
So in effect the shadows - say the bottom four stops - are described by at most 2 to the 4th power, or 16, levels. With a 10-bit sensor your shadows are expressed as one of only 16 possible values. That isn't much at all, and it can show up as banding in gradients. This matters because if you underexpose by 1 stop you literally throw away half your shadow values, resulting in a blocky appearance: shadows that had 3 bits (8 levels, 0-7) are cut in half and are left with 2 bits (4 levels, 0-3).
If you overexpose then you are in effect boosting the shadows, but at the same time throwing away highlights - whited-out areas like bright beach sand. That means you have to reduce "exposure" in post, which pushes those shadows back down to what should have been their normal values - but now with fewer bits. The whites that were blown stay blown, because they were maxed out to begin with.
This is why people use D-Log - it flattens the contrast, compressing the scene's dynamic range so shadows are brighter and highlights are dimmer, so that everything hopefully fits into the sensor's available range. That's why the clips look so "blah" - everything is mostly middle gray. In post you can get an idea of how it should look by applying a LUT, which is simply a preset that tries to "restore" the contrast and color by applying a known formula to the values that make up the image - color, saturation, contrast. LUTs are intended as a tool to "normalize" the scene to some standard, but somewhere along the way they became a "fix" in the hands of folks who don't want to learn how to adjust the image.
As an analogy for those who use Photoshop or Lightroom: those tools apply a known "profile" matched to individual cameras to compensate for each camera's unique output. They can also apply corrections based on the lens being used. That isn't the end of your adjustments by any means - it just "normalizes" the image so you can properly evaluate what more needs to be done to satisfy your creative vision.
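To put rough numbers on the stops-per-bit argument above, here is a tiny sketch assuming an idealized 10-bit linear encoding (real cameras apply log or gamma curves before recording, so treat this as the worst case, not the Mavic's actual pipeline):

```python
# In an idealized 10-bit linear encoding, each stop down from clipping gets
# half as many code values as the stop above it, so the deep shadows are
# described by very few levels - and underexposing by a stop halves them again.
BITS = 10
total_codes = 2 ** BITS              # 1024 code values, 0..1023

for stop in range(1, BITS + 1):      # stop 1 = brightest stop below clipping
    hi = total_codes // (2 ** (stop - 1))
    lo = total_codes // (2 ** stop)
    print(f"stop {stop:2d}: codes {lo}..{hi - 1} -> {hi - lo} levels")

bottom_four = 2 ** 4                 # codes 0..15 cover the darkest four stops
print("darkest four stops share", bottom_four, "levels;",
      "underexpose by one stop and those tones land on", bottom_four // 2)
```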
 
Replying to the post above about top-end gear versus the Mavic's image quality:
I didn't say the Mavic's images need a lot of intervention. Like photos and videos from all cameras, they do need some tweaking. It takes a few seconds in common editing programs. Which programs you use and how much you spend on them are up to you. GIMP is free. There are lots of free or cheap apps for photo editing on a tablet or smartphone. There are other free editors, both photo and video, for computers, as well as a cheaper version of Photoshop. Personally, I use Lightroom most of the time for photos and Adobe Premiere Pro for video editing and color correction.

So, you admit the Mavic is a relatively inexpensive piece of equipment; you could have spent more if image quality was a priority for you, but you didn't. You essentially bought a point-and-shoot camera and you're b****ing that it isn't DSLR quality. You can't have it both ways.

If your photos are "skanky," the problem may be the photographer, it may be the camera, it may be the editing. Maybe post some examples of what you don't like and maybe we can help you. Or maybe you just enjoy the complaining more.
 
Crikey! Looks like you need a PhD in Photoshop to use that. Good results, though.
It's not that hard, and it's most likely done with video editing software as opposed to Photoshop. If you are using a Mac, you can use iMovie to produce the exact same video quality and cinematic effects. Similarly on a Mac, there is Final Cut Pro X (I use this as well as iMovie), which is a bit more powerful and is still pretty easy to use. With that being said, the video looks very cool.
 
Yeah, totally agree; however, it's well worth doing in my opinion. The render times aren't too bad, though. I would rather wait 20-40 minutes for a render that makes my footage look much better. One can grab a quick snack in that time.
Spot on, Shawn. People want something for nothing nowadays. If you want good video, you have to put in the time to get it the way you want - or don't, but then you don't get to complain about it.
 
Replying to the earlier post about DJI's ads and not wanting to spend extra time and money on editing:

No one is asking you to. If you are fine with the way your videos look then be happy with it.
 
I totally agree. Most of us have this as a hobby and enjoy making great films; otherwise people would be buying a drone without a camera. I love the grading and editing process, where you bring everything to life.

Exactly. I am a photographer, which led me to buy an Inspire 2 as well. When I really want GREAT footage, I go with the I2, but for putzing around, the Mavic is okay. However, I still process my images and videos the best way possible regardless of the platform.
 
Nah, I'm pretty familiar with those settings :) I rendered a lot of drone shots this summer, day after day, on my laptop, so I was quite surprised to render something on the iMac and watch the thing blast away. I knew there were some very different render times between Final Cut and Premiere on the Mac, but not like this. Perhaps there is another bottleneck on the Windows machine that's causing trouble. Oh well, I may or may not do a side-by-side test some day. I usually just queue several things overnight anyway.
Could be the data bus on the laptop is not as wide as on the iMac. The iMac may have more throughput, which helps to speed things up. Also, the amount of RAM plays an important part. My iMac 5K just munches through video and rendering, whereas my new MBP takes a bit longer. To be expected, I guess. I am curious to see how the iMac works with natively supported H.265 in High Sierra.
 
Quoting the detailed post above on sharpness settings, bit depth and LUTs:

Class dismissed. Best post ever...
 
Along with my Mavic I recently bought myself a MacBook Pro with a full terabyte of storage and coughed up the extra money for Final Cut Pro, thinking this video editing would be awesome... but I guess I've got a lot to learn. (I was doing some fun stuff with Windows Movie Maker before, and it seemed more logical to me for some reason.) I actually ran into a little brick wall right off the get-go; maybe one of you can clue me in. If I take my SD card out of the Mavic after shooting in 4K and put it into the Apple SD reader dongle, I don't always see the video I just took. However, if I use a cable from the Mavic directly into the MacBook, I can find it and transfer it easily from the same SD card! What gives?
 