DJI Mavic, Air and Mini Drones
Friendly, Helpful & Knowledgeable Community

What is required to port Litchi to the MM3 (with the dumb controller) or the MM2 (with the smart controller)? Do both require attaching leads to the microchips?

Do you have to buy special hardware to dump the stock firmware to be able to modify it?
What is the cheapest hardware that will work?
 
Do you have to buy special hardware to dump the stock firmware to be able to modify it?
What is the cheapest hardware that will work?
You have to wait for DJI to develop an SDK that includes the Mini 3 (and so far there's no indication that they are going to).
Without the SDK, Litchi cannot update their program to work with the Mini 3.
 
Well, the firmware for all of the aircraft out there can't use the optical camera to detect obstacles to avoid.
"Maybe if we wait a few years they will add the ability to set a few waypoints for it to fly to before takeoff, if we are lucky" -- so why bother to improve it?
How about this?
We could make it actually have real AI, a smart aircraft.
 
Not good?
It seems to be fine for most users.
What's not good about it?
It is blind, except for super expensive models with laser range-finders, and those only measure how close the nearest obstacle is to each range-finder.
Also, they have no memory of obstacles already detected, and they do not estimate the velocity of obstacles, so they cannot predict where obstacles will be at future times.
Good firmware would build a 3D map of the world from the forward camera, with which it could predict where out-of-view obstacles will be at later times. Then it could detect obstacles when you fly backwards, for example, or when it does RTH after the remote is lost (rising to a pre-set altitude is very bad if there are trees or bridges overhead).
 
It is blind, except for super expensive models with laser range-finders, and those only measure how close the nearest obstacle is to each range-finder.
Also, they have no memory of obstacles already detected, and they do not estimate the velocity of obstacles, so they cannot predict where obstacles will be at future times.
Good firmware would build a 3D map of the world from the forward camera, with which it could predict where out-of-view obstacles will be at later times. Then it could detect obstacles when you fly backwards, for example, or when it does RTH after the remote is lost (rising to a pre-set altitude is very bad if there are trees or bridges overhead).
That's just a factor of the Mini 3 being a cheap, entry-level drone, rather than a deficiency of the firmware.
 
That's just a factor of the Mini 3 being a cheap, entry-level drone, rather than a deficiency of the firmware.
What aircraft out there is able to use the optical camera to map the world or avoid obstacles?
If somebody has already written such software, I will give up.

Is it able to look for a safe spot to settle if the battery dies while the remote is lost?
Or will it just land on a roof or tree, slide off, and be broken on the ground?
 
Code:
//EXAMPLE (sketch) algorithm to detect/map obstacles with the optical camera
#include <stdint.h> /* uint8_t */
#include <stddef.h> /* size_t */
#include <string.h> /* memcpy */

typedef struct Velocity { //which way an object moves, at what speed
    float x; //what to add to the object's GPS x coord per frame
    float y; //y coord
    float z; //z coord
} Velocity;

typedef Velocity Coords; //same datatype for an object's velocity as for its coords

typedef struct Object { //objects detected from optical data
    Coords coords;     //where the object is
    Velocity velocity; //velocity vector added to the coords each frame
    float radius;      //radius of the object
} Object; //voxels/specks that the camera detects. Complex objects such as
          //creatures or devices will probably show up as many Objects very
          //close together; this is not AI that figures out what the objects
          //are, just a record of where matter is, to see where it is safe to fly.

Object *objects; //objects detected so far, based on the last FPS frames

typedef struct PixelF { //floats for RGBA data
    float r; //how red this particular pixel of the image is
    float g; //green
    float b; //blue
    float a; //alpha (could be omitted)
} PixelF;

typedef struct Pixel32 { //RGBA without an FPU
    uint8_t r; //red
    uint8_t g; //green
    uint8_t b; //blue
    uint8_t a; //alpha (could be omitted)
} Pixel32;

typedef PixelF Pixel;
//typedef Pixel32 Pixel; //to process without an FPU

typedef struct Frame {
    Pixel **pixel; //2D array of pixels
    size_t h;      //height of the array
    size_t w;      //width of the array
} Frame;

typedef struct Camera {
    Frame *frame; //array of frames
    size_t h;     //height of the camera image
    size_t w;     //width of the camera image
    size_t fps;   //framerate of the camera
} Camera;

#define FPS 60

typedef Pixel Block3x3[3][3];
Block3x3 block; //replace 3 with whatever works best; a lot of algorithms use
                //3x3 blocks of pixels to lower the power use

Block3x3 history[FPS]; //stores 60 blocks if the camera is 60 FPS. If memory
                       //allows (and you can process it all), make this as long
                       //as you like, to map all that the camera sees for the
                       //whole flight if possible

void scan(Camera camera)
{
    for (size_t time = 0; FPS != time; ++time) {
        Pixel **pixel = camera.frame[time].pixel;
        for (size_t i = 0; i + 2 < camera.h; ++i) {     //stop 2 rows short of the edge
            for (size_t j = 0; j + 2 < camera.w; ++j) { //and 2 columns short
                block[0][0] = pixel[i + 0][j + 0];
                block[0][1] = pixel[i + 0][j + 1];
                block[0][2] = pixel[i + 0][j + 2];
                block[1][0] = pixel[i + 1][j + 0];
                block[1][1] = pixel[i + 1][j + 1];
                block[1][2] = pixel[i + 1][j + 2];
                block[2][0] = pixel[i + 2][j + 0];
                block[2][1] = pixel[i + 2][j + 1];
                block[2][2] = pixel[i + 2][j + 2];
                //...feed each block to your block-matching algorithm here...
            }
        }
        memcpy(history[time], block, sizeof block); //arrays can't be assigned
                                                    //in C; copy instead
    }
}
//history[] now holds the last 60 frames of one 3x3 square, if the camera is
//60 FPS. Plug in whatever algorithm you prefer that uses blocks of pixels to
//detect obstacles and their velocity over time; the motion estimation in
//HEVC/H.265 or Wikipedia's computer-vision topics have examples.
//For 4k-class input (camera.h = 4096, camera.w = 4096, 16,777,216 pixels),
//that is about 16M * 60 * 32 bits = ~32 Gbit/s of raw data with 32-bit RGBA
//pixels, so I would do it with SIMD (SSE/AVX/NEON) or an FPGA (VHDL/Verilog).

Besides giving the aircraft a world model it could use to fly more safely, this could also be used to feed more accurate data to Google Earth (or similar apps).

Please tell me where to buy a cheap aircraft that lets me reprogram the firmware appropriately, and I will post more exact source code that actually compiles.
 
Do you have to buy special hardware to dump the stock firmware to be able to modify it?
What is the cheapest hardware that will work?
It's not the firmware that needs to be hacked, it's the SDK.
You can decompile Litchi pretty easily and replace the SDK.

The SDK itself is not so easy to decompile. The most common way people do it is to dump everything with Frida (the root protection isn't in the SDK), then decompile the dump with jadx.
Then you can change to a drone type the SDK accepts. The best way is to hook the receive function and replace the drone ID with one the SDK accepts. Now you have all the telemetry and the FPV view, and commands like takeoff and start-motor work.

Now for the hard part :) It seems some messages, like virtual stick, have changed in later firmware, so the real problem is figuring out which msg ID is the new one. Or you can send the physical-sticks message instead.
I struggle with both, but this is where you start. Alternatively, decompile SDK 5, but most of that code is compiled C, so it's much harder.

Yeah, and everything is encrypted with SecNeo. That's why Google doesn't allow these apps in the Play Store. Everything seems to be decrypted at startup though, so you're fine with dumps from Frida.
 