DJI Mavic, Air and Mini Drones
Friendly, Helpful & Knowledgeable Community

3D mapping with the Mini

Kerrowman · Member · Joined Jun 2, 2020 · Cornwall, UK
After owning a Mavic Mini since Feb 2020, and having used it to make a variety of scenic videos of my locality, I have now found a fascinating use for it in 3D site modelling, so I wanted to share how I have been doing this, in enough detail that those interested can do likewise.

Firstly, let me share an example and how best to view it.

Boscawen Un

The images below indicate how to set the correct render setting and also the optional rotate feature once the model has loaded. The model link is below that.

You want the 'Shadeless' setting, shown on the left-hand side of the image; the rotate feature shown on the other side moves you clockwise around the model at any angle you choose. It takes a second or so to start, and the spacebar pauses it, as does using the mouse to change the viewing position.

The left mouse button is tilt and rotate, the scroll wheel is zoom and the right button is drag. Double left-click will take you back to the starting position.

Setting render and rotation

I became interested in archaeology because this area has the highest density of Neolithic and later monuments of any area in the UK, and there are many stones and marks on the landscape attesting to that. So I started to explore photogrammetry: extracting useful information and measurements from photographs and 3D models of single small structures. This led to the notion of creating models of much larger areas and sites that would require hundreds of images to construct.

Enter the drone, which was starting to feel a bit neglected. With its 2.7K video and 12 MP stills camera it is modest by the latest standards, but very adequate for making accurate 3D site models.

The only way to gather suitable data for a large site for subsequent processing is to use an automated system that ensures a suitable overlap (75-80%) between images for the 3D model-building process. There were two contenders for my Mavic Mini: Drone Harmony and DroneLink. Both are good systems, but DH was not fully developed for use with my iPad or iPhone, so in the end I went for DroneLink, which turned out to be the better choice for me.
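To see what an overlap requirement means on the ground, here is a back-of-the-envelope sketch in Python. The sensor and focal-length figures are my own rough approximations for the Mavic Mini, not official specs, so substitute your drone's numbers:

```python
# Rough Mavic Mini camera geometry (approximate figures, not official
# specs): 1/2.3" sensor, ~4.5 mm focal length, 4000 x 3000 px stills.
SENSOR_W_MM, SENSOR_H_MM = 6.3, 4.7
FOCAL_MM = 4.5

def footprint_m(altitude_m):
    """Ground footprint (width, height) in metres of one photo."""
    w = altitude_m * SENSOR_W_MM / FOCAL_MM
    h = altitude_m * SENSOR_H_MM / FOCAL_MM
    return w, h

def capture_spacing_m(altitude_m, front_overlap=0.8, side_overlap=0.75):
    """Distance between shots along the track, and between flight lines,
    needed to achieve the given overlap fractions."""
    w, h = footprint_m(altitude_m)
    along = h * (1 - front_overlap)   # gap between successive shots
    across = w * (1 - side_overlap)   # gap between parallel flight lines
    return along, across

along, across = capture_spacing_m(40)
print(f"At 40 m: shoot every {along:.1f} m, flight lines {across:.1f} m apart")
```

This is the arithmetic a planner such as DroneLink does for you when you set the overlap percentages.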

This lets you create a flight plan on a desktop in a very comprehensive way involving waypoints, map layouts or orbitals, or combinations of those, as well as a range of other more advanced mapping options (such as KML file imports from Google Earth Pro).

Then there is the DL mobile app, which you run the missions with: it accesses the flight plans you have made and also lets you edit them in the field.

So, to run a mission, you set up the drone in the DJI Fly app in the normal way: load your geofencing licence if required, set the camera exposure settings and do any calibrations.

Then open the DL app and select your flight plan. In most cases it is best to close the DJI app while running a mission, although that seems to vary with which phone or tablet you are using.

Hit Run and the drone will take off and fly to its start point. My stone circle flight plan lasted about 22 minutes, so at any time (or when Return to Home is triggered at 20% battery) you can pause the mission, fly the drone back manually (or with auto RTH), swap the battery, take off again (using the DJI app to initiate) and tap Resume, whereupon the drone flies back to exactly where it left off and continues its plan and image capture.

I tested a plan on a local burial chamber to get used to the software, combining vertical shots, where the gimbal is almost straight down (80 degrees, in fact), with orbital shots taken around a series of circles of differing heights, diameters and gimbal angles to give greater detail to the model.
The DroneLink web-based application lets you preview the plan and here is a screen video of such a preview for my burial chamber test flight running at 4x speed:

Mission Preview
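To show what an orbital pattern like this amounts to, here is a plain-Python sketch that generates evenly spaced waypoints on a circle, each facing the centre. The coordinate maths and the dictionary format are my own illustration, not DroneLink's actual plan format:

```python
import math

EARTH_R = 6_371_000  # mean Earth radius in metres

def orbit_waypoints(center_lat, center_lon, radius_m, n_points, altitude_m):
    """Waypoints evenly spaced on a circle around (center_lat, center_lon),
    each with a heading that points back at the centre."""
    pts = []
    for i in range(n_points):
        bearing = 2 * math.pi * i / n_points  # clockwise from north
        dlat = (radius_m * math.cos(bearing)) / EARTH_R
        dlon = (radius_m * math.sin(bearing)) / (
            EARTH_R * math.cos(math.radians(center_lat)))
        heading = (math.degrees(bearing) + 180) % 360  # face the centre
        pts.append({
            "lat": center_lat + math.degrees(dlat),
            "lon": center_lon + math.degrees(dlon),
            "alt": altitude_m,
            "heading_deg": round(heading, 1),
        })
    return pts

# e.g. 12 shots on a 25 m circle at 15 m altitude (made-up coordinates)
for wp in orbit_waypoints(50.09, -5.62, 25, 12, 15):
    print(wp)
```

Stacking several such circles at different radii, altitudes and gimbal angles gives the multi-ring coverage described above.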

I have used my results in the field to refine a plan for optimum results. Lowering the drone gives better image resolution of the structures being captured but a smaller coverage area per shot, so covering the whole site requires more images. There is a balance to be found between the level of detail in the final result and the number of photos your computer will need to crunch. Be aware of lighting too: some say that diffuse lighting from cloud cover gives better results, because it reduces contrast that might otherwise be too high for the algorithms. In my case there was bright sunlight and shadows, but the results seem to have captured it all fine, with acceptable contrast and shadow detail in the final result.
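The height-versus-detail trade-off can be put into rough numbers. This sketch estimates ground sample distance (ground centimetres per pixel) and an approximate photo count per hectare; the camera figures are again my own approximations for the Mavic Mini, not official specs:

```python
import math

def gsd_cm_per_px(altitude_m, sensor_w_mm=6.3, focal_mm=4.5, img_w_px=4000):
    """Ground sample distance: ground cm covered by one image pixel."""
    return (sensor_w_mm * altitude_m * 100) / (focal_mm * img_w_px)

def photos_needed(area_m2, altitude_m, front=0.8, side=0.75,
                  sensor_w_mm=6.3, sensor_h_mm=4.7, focal_mm=4.5):
    """Rough image count to cover an area at the given overlap fractions."""
    w = altitude_m * sensor_w_mm / focal_mm        # footprint width (m)
    h = altitude_m * sensor_h_mm / focal_mm        # footprint height (m)
    new_ground = (w * (1 - side)) * (h * (1 - front))  # fresh area per shot
    return math.ceil(area_m2 / new_ground)

for alt in (30, 40, 60):
    print(f"{alt} m: GSD {gsd_cm_per_px(alt):.2f} cm/px, "
          f"~{photos_needed(10_000, alt)} photos per hectare")
```

Halving the altitude roughly halves the GSD (more detail) but quadruples the photo count, which is exactly the balance described above.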

Having run the flight mission, I quickly scan through the resulting images and remove any that might produce strange results in the model. Typically these are ones where an important object is in shadow or blocked by another, for example where a vertical shot has obscured detail of a lower object; the model would end up showing the higher object transplanted onto the lower one. In such cases it might be necessary to take a few images from ground level of the obscured structure so that the algorithms have at least something to work with.
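If there are hundreds of images to triage, a simple sharpness score can help flag blurry candidates for review. This is a generic sketch (variance of a 3x3 Laplacian, a common blur heuristic) on a plain 2-D intensity list; in practice you would decode each photo with a library such as Pillow or OpenCV, and keep a human in the loop for the final cull:

```python
def laplacian_variance(gray):
    """Variance of a 3x3 Laplacian response over the interior pixels.
    Low values suggest a flat or blurry frame worth reviewing before
    processing. `gray` is a 2-D list of pixel intensities (a stand-in
    for a decoded image)."""
    h, w = len(gray), len(gray[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# A patterned patch scores higher than a featureless one
sharp = [[(x * 37 + y * 11) % 255 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

A score threshold only ranks candidates; occlusion problems like the "transplanted object" case above still need an eye on the images themselves.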

Having got a good set of images (over 400 in this example), I used WebODM (part of the OpenDroneMap project) for the processing, which is free, open-source software. Despite it being free, unless you are happy tinkering under the bonnet of your computer to set it up, you will normally have to pay for an installer that also loads Docker, which manages the 'containers' WebODM runs in. With the installer it is all very smooth to install and operate on whichever platform you are using.

Using a new Mac Mini (with its M1 chip), the limit for processing is around 500 images so far, if I set the Docker resources to 14 GB (of 16) of RAM and a 4 GB swap file, and also reduce the image size in WebODM to 1500 px from the default of about 2000. I could probably go lower, since the result is screen-based, and would happily reduce to 1000 px if required, which would significantly increase the number of images I can crunch. Note that your Docker disk image fills quickly with gigabytes of data, so it is worth purging every so often; seek guidance on the best way to do this, and don't use the Clean/Purge option in Docker unless you have good reason to, as it removes more than one needs!

The WebODM output I need from the download options is the textured model, which consists of an .obj file, an .mtl file, a .conf file and a folder of texture images in PNG format.

I then import just the .obj file into Blender for a bit of editing. Blender is, again, completely free, open-source software, beloved by game makers, animators and artists for creating 3D models. In Blender I trimmed off the unwanted parts of the model; in the stone circle example there were sections around the edges that were not needed, so I trimmed it to a circle (although, in fact, the stone circle is not a precise circle).

After editing, I needed to export a range of files and compress them into a zip for import to a 3D viewer such as P3D or Sketchfab. For this, save the edited Blender file and then, to the same folder, export an .obj file from Blender together with its material file and a folder of textures (see pic). The textures are handled using File > External Data to pack and unpack them to the same folder.

Files to Zip

This collection of files and the texture folder are then compressed into a zip file, resulting in a size considerably under 100 MB, which is handy as the free viewer account limit is typically 100 MB.
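This packaging step is easy to script. A minimal sketch, assuming the export folder layout described above (.obj, .mtl, plus a texture folder); the function name and the 100 MB check are my own illustration:

```python
import zipfile
from pathlib import Path

def pack_model(model_dir, out_zip="model.zip", max_mb=100):
    """Zip everything under the Blender export folder (the .obj, .mtl
    and texture folder) for upload, preserving relative paths, and
    report the archive size against the free-tier viewer limit."""
    model_dir = Path(model_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(model_dir.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(model_dir))
    size_mb = Path(out_zip).stat().st_size / 1_000_000
    if size_mb > max_mb:
        print(f"Warning: {size_mb:.1f} MB exceeds the {max_mb} MB viewer limit")
    return size_mb
```

ZIP_DEFLATED compresses the text-based .obj and .mtl well; the PNG textures are already compressed, so they dominate the final size.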

So, to summarise, the steps required to obtain my 3D model are:

Design flight plan on DroneLink
Capture images at the site using the flight plan
Remove or replace any poor images
Run WebODM with suitable settings
Export Textured model
Import Obj file to Blender and edit model
Save Blender file and export Obj, Mtl, and Texture files
Compress to a zip file
Import to a viewer

There you have it. If anyone is enticed into this sort of work, have fun; the results are worth it.

Julian
 
Wow, that's really impressive. I'd love to do something like that, but a little out of my league. I can read this forum and email, that's about it.
 
You can also use Litchi and Mission Planner to get your photos.
 
The hard work is done by WebODM so as long as you have a drone and a flight planner of some description it is doable by most 😊
 
The hard work is done by WebODM so as long as you have a drone and a flight planner of some description it is doable by most 😊
You don't even have to have a flight planner; I've done them free-flying with my old Yuneec Q500 years ago, rendering the photos in Agisoft Metashape. As long as you have good overlap, the right altitude and the right number of photos, it's pretty easy to get good results free-flying.
Now I use 3DF Zephyr to render my projects. A nice thing about this program is that you can use video to get some pretty good results.
 
Technically you’re right in that you can get vertical shots easily enough, but with orbitals it’s not quite so easy. If circular QuickShots on my Mini didn’t default to video, one could keep taking stills while the drone did the flying. Getting a suitable image overlap, at various gimbal angles and at regular, even intervals, while manually flying a clean circle is not straightforward to achieve. So for those not very experienced, even the free Litchi might be worth exploring using waypoints.

DroneLink is very comprehensive and for about the cost of a drone battery worth it for many.
 
😊 I can see that now it’s £21.99, but when I looked into it 18 months ago I thought it was a free beta version. Yes, DL is not cheap, but its style of working will suit some.
 
LOL. Litchi is not free, just to clear that up, and Dronelink does not offer anything I cannot do in Litchi; it is overpriced in my opinion.
Well, just to be clear, Dronelink can do quite a lot that Litchi can't do -- theoretically, it can do everything that it's possible to do through the SDK, since you can write your own scripts. But just using the built-in capabilities, I'm pretty sure that no other app gives you the detailed level of control that Dronelink does -- certainly not Litchi. But you're right that not everyone needs that much power. Just one example: Yesterday, I built a "component" that takes a three-shot vertical panorama, moves sideways a bit and does it again, then moves a bit more and does it again. The purpose is to make stereographic (3D) photos. (I take three sets so that I will have a choice of three different camera separations for the stereographic pair -- 1+2, 2+3, or 1+3 -- depending on which looks better.) I have this component saved in my repository and can include it in any mission, or easily call it up when out in the field, rotate the whole path to point sideways toward the target, and I'm good to go with a new function that no app has built in.

Edit (after all that, I forgot my main point): Dronelink and some other apps have an automatic mapping function that lets you set the map boundaries, change some defaults such as altitude and overlap percentages if you like, and it will generate the full mission plan. You can do mapping with Litchi, but with considerably more effort.
 
Which is why I chose it as the easiest way to get reliable data sets for large areas. One thing they have yet to do is allow all the drone settings to be managed within the DL app as at the moment you need to do that in the DJI Fly app.

I’m planning my next model which is of Carn Kenidjack, a large outcrop of rock.

(Attached photo: Carn Kenidjack)

I will certainly need to adjust my plan in the field as there is no accurate object height measurement from the map 😳
 
You can do mapping with Litchi, but with considerably more effort.
For the 30-second effort that it takes to do mapping in Litchi, and given the price difference, I will take Litchi over Dronelink. Perhaps if I had a drone with native waypoints, unlike the Air 2S which uses virtual sticks, I might consider Dronelink, but from everything I have seen Dronelink does not produce a better end result.
And mapping is being considered for Litchi, so I have been told, so let's see how that goes.
 
For the 30 second effort that it takes to do mapping in Litchi ...
If you can do that, you'd be doing the user community a great service by making a YouTube video showing how. Sure, it's easy enough to draw a path that looks like a mapping mission in Litchi, but doing it properly to get the required overlap in both directions is not trivial.
 
By using mission planner and litchi together it is very easy. Search the forum, its been posted already.
 
By using mission planner and litchi together it is very easy. Search the forum, its been posted already.
Ah, I see I missed your post up-thread -- very sorry about that -- but that is nice that Litchi users now have a free way to do the hard part. I actually have Litchi, too, so I'll have to give it a try.
 
Ah, I see I missed your post up-thread -- very sorry about that -- but that is nice that Litchi users now have a free way to do the hard part. I actually have Litchi, too, so I'll have to give it a try.
Yes, well, it does take a few tries to get used to how it all goes together, but this is a pretty good workaround. I use it more for photogrammetry 3D scanning. Unless you have a drone made for mapping, using my Air 2S is pretty limited for that purpose.

 
hi Julian...
Very interesting project. I am also interested in 3D modelling of archaeological sites, in particular the Minoan site of Knossos on Crete. I would love to see the results of your endeavour; the links are not shown in your post.
I recently purchased a DJI mini 2... cheers Ian
 
hi Julian...
Update: I managed to see the results of your rendered images of Boscawen Un. It just took a little time to download... Ian
 

Very good explanation, congratulations.
 
When mapping in a grid pattern with the gimbal at -90 degrees, which photo-capture mode would be more precise and/or superior:
  • Interval shots in continuous flight (timed or distance interval)
  • Interval shots with a stop and hover for a few seconds at each shot
Somehow I think that with continuous flight the precision would be a tad better, while sacrificing overall quality (not so sharp an image). Any ideas?
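Either way, the timed-interval setting follows from the overlap the planner needs. A rough sketch of that arithmetic (the 75-80% front overlap figure comes from the post; the altitude, speed and field-of-view numbers below are illustrative assumptions, not Mavic Mini specifications):

```python
import math

def photo_spacing_m(altitude_m: float, fov_deg: float, overlap: float) -> float:
    """Distance between shots along track for a given front overlap,
    with the gimbal pointing straight down (nadir)."""
    # Ground footprint of one image along the flight direction.
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    # Each new shot only needs to advance by the non-overlapping fraction.
    return footprint * (1 - overlap)

def shot_interval_s(altitude_m: float, fov_deg: float,
                    overlap: float, speed_m_s: float) -> float:
    """Seconds between shots when flying continuously at speed_m_s."""
    return photo_spacing_m(altitude_m, fov_deg, overlap) / speed_m_s

# Example: at 40 m altitude with an assumed 66-degree field of view and
# 80% overlap, the drone advances roughly 10 m between shots, so at
# 5 m/s the timer must fire about every 2 seconds.
```

A shorter interval than the camera can sustain in flight is the usual argument for the stop-and-hover mode: it trades mission time for a sharper, better-timed exposure at each station.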
 