DJI Mavic, Air and Mini Drones

I NEED MAJOR HELP PLEASE

Great question WTB

Essentially an I-frame.

Three types of pictures (or frames) are used in video compression: I, P, and B frames.

  • I-frames are the least compressible but don't require other video frames to decode.
  • P-frames can use data from previous frames to decompress and are more compressible than I-frames.
  • B-frames can use both previous and following frames for data reference, achieving the highest amount of compression.

An I‑frame (Intra-coded picture) is a complete image, like a JPG or BMP image file.

A P‑frame (Predicted picture) holds only the changes in the image from the previous frame. For example, in a scene where a car moves across a stationary background, only the car's movements need to be encoded. The encoder does not need to store the unchanging background pixels in the P‑frame, thus saving space. P‑frames are also known as delta‑frames.

A B‑frame (Bidirectional predicted picture) saves even more space by using differences between the current frame and both the preceding and following frames to specify its content.



Because an I-frame is encoded without reference to any other frame (only intra-frame compression, much like a JPEG), it is the best frame to grab from a video if you want the least-compressed image that video can give you.
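The P-frame idea above (store only what changed, e.g. the moving car) can be sketched as a toy delta encoder. This is an illustrative model only, not how a real codec works:

```python
# Toy "delta frame" encoder: a hypothetical sketch of the P-frame idea,
# not an actual video codec. Frames are flat lists of pixel values.

def encode_delta(prev_frame, cur_frame):
    """Store only (index, value) pairs where the pixel changed."""
    return [(i, v) for i, (p, v) in enumerate(zip(prev_frame, cur_frame)) if p != v]

def decode_delta(prev_frame, delta):
    """Rebuild the current frame from the previous frame plus the delta."""
    frame = list(prev_frame)
    for i, v in delta:
        frame[i] = v
    return frame

background = [10] * 16            # stationary background (the "I-frame")
with_car = list(background)
with_car[4:7] = [200, 200, 200]   # the "car" moved into pixels 4-6

delta = encode_delta(background, with_car)
print(delta)                                         # [(4, 200), (5, 200), (6, 200)]
print(decode_delta(background, delta) == with_car)   # True
```

Only the three changed pixels are stored, which is exactly why P-frames are so much smaller than I-frames.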
 
So nothing to do with a key frame then.... didn’t think so.

Does HEVC even have I-frames? And if it does, could you drag one out of the stream?
 
Totally does, the word just has multiple uses that you're ignoring or didn't know of.

You're thinking of an editing keyframe, that's fine but keyframe is also a proper name for an I-frame.

Key frame - Wikipedia

Most software doesn't give you that level of access, but some does, like Avidemux and cut/join programs (since any non-re-encoding cut/join needs to ensure the start of a clip is always a keyframe), which let you specifically select keyframes. H.265 has them as well, of course.
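If ffmpeg happens to be installed, its select filter can do the same keyframe-only extraction from the command line. A hedged sketch (the function name and output filename pattern here are made up for illustration):

```python
# Sketch: build an ffmpeg command that exports only the I-frames of a
# video as PNGs. Assumes ffmpeg is on PATH. The select filter keeps
# frames whose picture type is I, and -vsync vfr drops the gaps left
# by the discarded P- and B-frames.
import subprocess

def iframe_extract_cmd(video_path, out_pattern="iframe_%03d.png"):
    return [
        "ffmpeg", "-i", video_path,
        "-vf", "select='eq(pict_type,I)'",
        "-vsync", "vfr",
        out_pattern,
    ]

cmd = iframe_extract_cmd("flight.mp4")
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

The command is only built here, not executed, so you can inspect it before pointing it at a real file.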
 
OK, so call it an I-frame and avoid the confusion....

How often might we expect to see them in H.265 compression?

I know what I-frames are now, very interesting- thank you.
 
That depends entirely on the settings of the encoding device and the purpose of the encoded file. Every frame could be an I-frame, or you could have only one every 10 seconds.
The M2Z uses one per second, that's a pretty common choice for cameras.
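At one I-frame per second, the keyframe positions follow directly from the frame rate. A trivial sketch (a fixed GOP is assumed; real encoders may also insert extra I-frames at scene cuts):

```python
# Which frame numbers are I-frames, given a frame rate and a keyframe
# interval in seconds? Assumes a fixed GOP length for illustration.

def iframe_indices(fps, interval_s, total_frames):
    gop = int(fps * interval_s)            # frames per group of pictures
    return list(range(0, total_frames, gop))

print(iframe_indices(30, 1, 150))    # [0, 30, 60, 90, 120]
print(iframe_indices(30, 10, 600))   # only [0, 300]: one every 10 s
```

So at 30 fps with a one-per-second setting, every 30th frame is a clean grab candidate.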
 
Thank you EVERYONE for all the help. I pretty much knew all the answers I got; I guess I was hoping for some miracle, lol. Just FYI, I typically shoot 5-shot AEB in DNG, bring the files into Lightroom first, then export to Photoshop to auto-align and stack, or just pick three and merge to HDR. But thanks again. It was just a nice clip I shot while filming, and when I posted it on social media several folks wanted prints.
 
I am attaching two files, a TIFF and a PSD, of the same picture I took of the capitol building here in San Francisco with my Mavic Pro. I am severely bummed out because, as I was in a hurry and running out of battery, I was squeezing off the last of my shots... and ended up running a 30-second 4K clip, and the picture I got was beautiful. Of course it looks terrific on a cell phone, iPad, etc. However, it is severely underexposed, and I have been attempting to save this thing for a week or so. The disappointing part is that I actually have two clients who would like to purchase this ASAP, and I just can't get rid of the noise/graininess in the sky. Nor can I lighten up the underexposed areas to save the details. I typically shoot everything in RAW, so this wouldn't be a problem if I had just followed what I typically do.

If anyone can please help me with this, I would be truly grateful and owe you my undying gratitude. Thank you for even taking the time to read this. Now remember, as I mentioned, I simply played the clip in Adobe Premiere, found a straight-on shot that looked level, did a hold keyframe, and exported. So that is what you are looking at, with a few little edits I did myself. If needed, I can include the actual image I exported.
 
Email me the picture you want fixed: [email protected]. Put "DJI forum" in the subject.
 
I opened this image in Photoshop 2019 (copy image / new / paste, Command+V), then opened it in the Nik software Viveza > open shadows. All I got was terrible blotchy noise. You need to provide a RAW image to capture details in the shadows. The image provided as a JPEG is too small to recover any details.
 
Maybe you can promote it as an Impressionist rendering? I've had video grabs look as good as stills from the MP. Your images are badly compressed; that happens with the MP and certain settings. That is a video problem and cannot be helped by the greater bit depth of a raw image. Look for other frames, since the video compression can vary. Even an out-of-level shot can be rotated.
 
I forgot to mention that Photoshop can take several images and average them into a "better" image. There are astrophotography apps that pull multiple frames from video for higher-resolution stills, just like Photoshop's image averaging.
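The averaging trick works because random noise partially cancels across frames while the scene stays the same. A toy demonstration with made-up per-frame noise values:

```python
# Toy illustration of image averaging: the mean of several noisy
# grabs of the same pixel lands closer to the true value than any
# single grab. The noise values are invented for the demonstration.

true_pixel = 100
noise = [9, -7, 4, -12, 6, -3, 8, -4]     # per-frame "sensor noise"
grabs = [true_pixel + n for n in noise]   # eight noisy captures

average = sum(grabs) / len(grabs)         # 100.125

print(abs(grabs[0] - true_pixel))   # 9: error of a single frame
print(abs(average - true_pixel))    # 0.125: error after averaging
```

With real frames the cancellation is statistical rather than exact, but the noise reduction grows with the number of frames stacked.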
 
Overexposed will give you options. Underexposed....not. There's just nothing there to work with. -Lots of advice given here on a future shoot, but the reality is that what you currently have cannot provide detail in the underexposed parts.
 
You have this backwards. Overexposed has zero prospects for recovery, simply because all information above the maximum recordable value is completely absent from the data (it is all represented as the brightest white). Shadow detail can be increased in brightness; you may have issues with noise, but some data will have been recorded in the darker areas of the scene.
 
What he posted are not the .jpg images that the Mavic produced.
If he posts the original image, it might be possible to get more out of it than he has already.

And contrary to popular opinion, you can usually get quite good results out of original jpg images.
My Corel PaintShop Pro can fix the original nicely... but shooting it again is the best scenario here...
 
Yeah @Nosebump: Think slide film. Unlike negatives, lighter is less info... Expose for detail in the highlights and let the shadows fall where they may. It's counter to what we used to do...
 

In general that's effectively true, since black shadows are less annoying than completely blown highlights. But in reality the problem is symmetric, in that no detail is recorded at either end of the dynamic range: above full saturation or below the noise floor.
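The asymmetry in practice can be shown with a toy 8-bit sensor model (a hedged sketch; real sensors add a noise floor and a nonlinear response):

```python
# Toy model of why blown highlights are unrecoverable while dark
# shadows partly are: an 8-bit capture clips at 255 and quantizes.

def expose(scene, gain):
    """Simulate capture: scale scene luminance, clip to 0..255, round."""
    return [min(255, max(0, round(v * gain))) for v in scene]

scene = [10, 50, 120, 200]        # "true" scene luminances

over = expose(scene, 4.0)         # overexposed: 120 and 200 both hit 255
under = expose(scene, 0.25)       # underexposed: values squeezed near 0

# Try to "fix" it in post by rescaling the captured values:
recovered_over = [v / 4.0 for v in over]    # clipped values stay merged
recovered_under = [v * 4.0 for v in under]  # coarse, but detail survives

print(over)             # [40, 200, 255, 255]: brightest two values merged
print(recovered_over)   # the clipped pair both come back as 63.75
print(under, recovered_under)   # [2, 12, 30, 50] -> [8.0, 48.0, 120.0, 200.0]
```

In the overexposed capture the two brightest scene values are permanently indistinguishable, while the underexposed capture keeps their ordering and rough values, at the cost of quantization error (noise, in a real sensor).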
 
