

I understand how software works, and I understand how optics work, but I don't get how software can "create" a higher resolution than the original raw data provided by the optics (unless you change the optics). When I record an audio track at a 44.1 kHz sampling rate to produce a .wav file, a typical 3-4 minute song is about 30 MB in size. After compression (roughly 10 to 1) to MP3 format, the size is down to about 3 MB. Most people can't hear the difference, but through good headphones, I can. As far as I know, there is no software that can take an audio file originally recorded at 44.1 kHz and magically convert it to a 96 kHz rate. Maybe I'm just not technically smart enough, but I'm skeptical of software taking video at 2.7K resolution and converting it to 4K without adding some kind of "artifacts" that were not part of the original. What am I missing?
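(For what it's worth, that 30 MB figure checks out. A quick back-of-the-envelope sanity check, assuming 16-bit stereo at 44.1 kHz:)

```python
# Back-of-the-envelope size of an uncompressed CD-quality WAV:
# 44,100 samples/s x 2 bytes/sample (16-bit) x 2 channels (stereo).
sample_rate = 44_100
bytes_per_sample = 2
channels = 2
duration_s = 3 * 60  # a 3-minute song

size_mb = sample_rate * bytes_per_sample * channels * duration_s / 1e6
print(f"{size_mb:.0f} MB")  # ~32 MB, and ~3 MB after roughly 10:1 MP3 compression
```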
 
In the old days we used to use line doublers on video projectors. I can imagine the same principle being done in software to add more apparent resolution to a video. Taking an MP3 and converting it back to full-quality WAV is a different ball game, though: you'd have to add back vocals and instrumentation that aren't there, which is really difficult if not impossible. I'm no expert, someone will probably correct me.
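A line doubler is simple enough to sketch in code: it just repeats each scanline, multiplying the pixel count without adding any detail. A toy illustration with NumPy (my own example, not any particular product's algorithm):

```python
import numpy as np

# A tiny 2x2 grayscale "frame" standing in for one video frame.
frame = np.array([[10, 200],
                  [50, 120]], dtype=np.uint8)

# Classic line doubling: repeat every row (and here every column too).
# Four times the pixels, but zero new information.
doubled = np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1)
print(doubled)
```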
 
I understand how software works, and I understand how optics work, but I don't get how software can "create" a higher resolution than the original raw data provided by the optics...

Yeah, I'm not that technically minded either; all I know is it works and definitely improves the "appearance" of the footage at 4K.
It must be all the interpolating, resampling, digital sharpening and all that other clever stuff.
Maybe someone with a bit more knowledge can shed some light on it.
 
You can extract information from the frames before and after the current frame to "add" information to the current frame, producing what appears to be a higher-resolution video overall. Cleverly designed algorithms can also "predict" what would look good in the gaps between pixels and use that to synthesize a higher-resolution image. You can't incorporate any new information, but you can reinterpret what's there and display it in a way that fools you into thinking there's more detail than before.

Theoretically the same could be done with audio, but there you're essentially working with 2-dimensional data (time, amplitude) rather than 4 dimensions (time, X, Y, colour) for video. It's much harder to fool your ears this way than your eyes, as you can't use the other dimensions to obscure artefacts.
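A crude sketch of the multi-frame idea, assuming a static scene. The multiframe_upscale helper here is my own illustration, not how any real upscaler is implemented; real ones motion-compensate the neighbouring frames first, while this toy version just averages them:

```python
import numpy as np

def upscale_2x(frame):
    # Naive spatial 2x upscale by pixel repetition.
    return np.repeat(np.repeat(frame, 2, axis=0), 2, axis=1).astype(float)

def multiframe_upscale(prev_f, cur_f, next_f):
    # Average the upscaled neighbouring frames into the current one.
    # On a static scene this suppresses noise and aliasing, which
    # *looks* like extra detail, but no new information is created.
    return (upscale_2x(prev_f) + upscale_2x(cur_f) + upscale_2x(next_f)) / 3

frames = [np.random.randint(0, 256, (4, 4)) for _ in range(3)]
out = multiframe_upscale(*frames)
print(out.shape)  # (8, 8)
```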
 
I understand how software works, and I understand how optics work, but I don't get how software can "create" a higher resolution than the original raw data provided by the optics

You create new pixels. A red and a black pixel next to each other? Put a dark red in between. Upscaling is just stretching the image to the new resolution and putting appropriate pixels in between. The real algorithms are more sophisticated, obviously.
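Taking that example literally, with 8-bit RGB values:

```python
# Pure red and pure black as 8-bit RGB triples.
red = (255, 0, 0)
black = (0, 0, 0)

# The new in-between pixel is just the average of its two neighbours.
between = tuple((a + b) // 2 for a, b in zip(red, black))
print(between)  # (127, 0, 0) -- a dark red
```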
 
You are not able to create either higher dynamic range or more real pixels in post-processing. All you are doing is smearing the data. Upscaling simply generates in-between pixels that are the result of averaging the data between the two existing pixels. There are various ways of doing this averaging calculation, but absolutely no new information is created.
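That averaging is plain linear interpolation. A one-scanline sketch of what it does:

```python
import numpy as np

# One scanline of 4 pixels, stretched to 7 pixels.
scanline = np.array([0, 100, 50, 200], dtype=float)
x_old = np.arange(4)
x_new = np.linspace(0, 3, 7)  # new pixel positions, old ones included

upscaled = np.interp(x_new, x_old, scanline)
print(upscaled)  # [0. 50. 100. 75. 50. 125. 200.] -- every new pixel is an average
```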
 
You create new pixels. A red and a black pixel next to each other? Put a dark red in between...

That's what I thought... kind of like the compression from .wav to .mp3, but in reverse.
 
Possibly a better audio equivalent might be to mix 4 tracks of 22 kHz audio into a single 22 kHz file, or mix them into an "upscaled" 44 kHz file.

Upscalers can use information in other video frames before and after the current frame to more accurately guess what the new pixels should be to fill the "gaps" between original pixels.
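The same trick in audio terms, as a rough sketch (assuming the usual 22.05 kHz / 44.1 kHz rates): doubling the sample rate by interpolation gives you twice the samples, but nothing above the original Nyquist limit comes back.

```python
import numpy as np

rate = 22_050
t = np.arange(rate) / rate             # one second of 22.05 kHz audio
signal = np.sin(2 * np.pi * 1000 * t)  # a plain 1 kHz tone

# "Upscale" to 44.1 kHz by linear interpolation between existing samples.
# Twice the samples, but no content above the original 11.025 kHz limit.
t_new = np.arange(2 * rate) / (2 * rate)
upsampled = np.interp(t_new, t, signal)
print(len(signal), len(upsampled))     # 22050 44100
```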
 
Upscalers can use information in other video frames before and after the current frame to more accurately guess what the new pixels should be...
That sounds like exactly what the compression software does, in reverse. It looks at all of the frequencies a millisecond before and after, then removes most of the ones that didn't change (or that the ear would mask anyway).
 
So if I render a 2.7K video at 4K, it just spreads exactly the same data over the extra pixels, but if I use a smart upscaler it calculates what it thinks the extra pixels should look like.
Well, all I know is, it seems to work.
 