
OpenCV / Decoding video

Furqan Ahmed

Member
Joined
Sep 30, 2021
Messages
16
Reactions
2
Age
23
Location
Pakistan
Hello Everyone,

I want to run some machine learning models on the images coming from the drone. Can someone tell me how to get frames from the drone's video feed? From my research I understand that I need a video decoder to get the frames, so I checked the VideoStreamDecoder provided with the DJI Mobile SDK, but it does not have getIframeRawId for the Mavic Air 2. So I need some help with how to implement the OpenCV approach for this drone.

Any kind of help would be appreciated. Thanks in advance!!!
Drone: MA2
 
The easiest way is to dump the fpv-widget to a bmp. It saves you a lot of trouble, and it's fast too; I can do 100 fps even from Python.

I wrote more about this on Stack Overflow a while ago.
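For illustration, a minimal read-side sketch in Python, assuming something else (the fpv-widget dump) keeps rewriting the bmp file; the path and the print step are placeholders, not DJI SDK calls:

import time

import cv2

FRAME_FILE = "fpv_frame.bmp"   # placeholder path; point it at wherever your widget dump writes

while True:
    # cv2.imread() returns a numpy array (OpenCV-Python's Mat), ready for any ML model
    frame = cv2.imread(FRAME_FILE, cv2.IMREAD_GRAYSCALE)
    if frame is None:          # file missing or still being written; try again shortly
        time.sleep(0.01)
        continue
    print(frame.shape)         # stand-in for the actual detection / ML step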
 
Are you trying to analyse video or still images? If video, most video editors can output the stream of images that make up the video, probably as JPEGs or perhaps PNGs. Either should be suitable for your analysis software.
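If the video is already recorded, OpenCV can also do that export directly, without a video editor. A minimal sketch; the file name and frame interval are just example values:

import cv2

cap = cv2.VideoCapture("flight.mp4")   # example recording from the drone
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:                         # end of the video
        break
    if frame_idx % 30 == 0:            # keep roughly one frame per second at 30 fps
        cv2.imwrite(f"frame_{frame_idx:06d}.png", frame)
    frame_idx += 1
cap.release()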
 
I think he wants to do it in real time, from the fpv stream.

I answered a similar question here. Dead simple :cool:
 
Thank you for this. I'll let you know once I try it.

And yes, I want to do it in a real-time environment.
 
Can you tell me how I should mark the detected objects on the mobile screen, or, just as an example, how to draw a line on the mobile screen in real time? Since I have never worked with the bmp dump before, I don't know how to convert the bmp data back.

Also, I have to ask: I have seen videos where people use the Mat format to work on the feed from their mobile camera. Since I have to work on the feed coming from the drone, can you tell me in which format I can run the rest of my deep learning algorithms?
 
All the deep learning algorithms I've seen use bmp input, usually converted to greyscale. Which one are you looking at?

Take a look at the source of the TensorFlow Lite demo app. Everything you ask about is in there.

Here's an example that points the gimbal at a human. The bmp is saved to a file and then reread. A little slower, but fast enough for my use.

from typing import Tuple

import cv2
import MavicMaxLib  # helper library used in this example (not part of the DJI SDK)

filename = "frame.bmp"                 # assumed path; not given in the original post
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())  # stock OpenCV people detector

def get_human_position(api: MavicMaxLib.Api) -> Tuple[float, float, float, float]:
    api.ui.save_frame_bitmap(filename, 50)   # dump the current fpv-widget frame to disk
    img = cv2.imread(filename, 0)            # reread it as greyscale

    x_size, y_size = int(1920 / 2), int(1080 / 2)
    img = cv2.resize(img, (x_size, y_size))  # half resolution is enough for detection
    boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        # draw the first box on screen and return it in normalised (0..1) coordinates
        api.ui.draw_rect(api.drone.drone.drawRectXY(x / x_size, y / y_size, (x + w) / x_size, (y + h) / y_size))
        return (x / x_size, y / y_size, (x + w) / x_size, (y + h) / y_size)
    return (0.5, 0.5, 0.5, 0.5)              # no human found: default to the frame centre
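On the Mat question: in OpenCV-Python, cv2.imread() already returns a numpy array, which is what the Python bindings use in place of a Mat, so it can be fed straight to a model. A rough sketch of running the dumped bmp through a TensorFlow Lite model; the model file and its greyscale float input shape are assumptions, not something from this thread:

import cv2
import numpy as np
import tensorflow as tf

# Load the (assumed) detector and inspect its expected input shape.
interpreter = tf.lite.Interpreter(model_path="detector.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

img = cv2.imread("frame.bmp", 0)                        # greyscale frame, as above
h, w = inp["shape"][1], inp["shape"][2]
x = cv2.resize(img, (w, h)).astype(np.float32) / 255.0  # scale to the model's input range
x = x.reshape(inp["shape"])                             # add batch/channel dimensions
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])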
 