After owning a Mavic Mini since February 2020, and having used it to make a variety of scenic videos of my locality, I have now found a fascinating use for it in 3D site modeling. I wanted to share how I have been doing this, in enough detail that those interested can do likewise.
Firstly, let me share an example and how best to view it.
Boscawen Un
The images below indicate how to set the correct render setting and also the optional rotate feature once the model has loaded. The model link is below that.
You want to use the 'Shadeless' setting, as shown on the left-hand side of the image. The rotate feature, shown on the other side, is a nice way to be carried clockwise around the model at any angle you choose. It takes a second or so to start, and the spacebar pauses it, as does using the mouse to change the viewing position.
The left mouse button tilts and rotates, the scroll wheel zooms, and the right button drags. A double left-click takes you back to the starting position.
Setting render and rotation
I became interested in archaeology because this area has the highest density of Neolithic and later monuments of any area in the UK, and there are many stones and marks on the landscape attesting to that. So I started to explore photogrammetry: extracting useful information, measurements and 3D models from photographs of single small structures. This led to the notion of creating models of much larger areas and sites, which would require hundreds of images to construct.
Enter the drone, which was starting to feel a bit neglected. With 2.7K video and a 12 MP stills camera, it's modest by the latest standards but very adequate for making accurate 3D site models.
The only practical way to gather suitable data for a large site is to use an automated flight system that will ensure a suitable overlap (75-80%) between images for the 3D model-building process. There were two contenders for my Mavic Mini: Drone Harmony or DroneLink. Both are good systems, but DH was not fully realised for use with my iPad or iPhone, so in the end I went for DroneLink, which turned out to be the better choice for me.
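As an aside, it's worth seeing what that overlap figure means on the ground. Here's a minimal sketch of the spacing arithmetic in Python; the field-of-view numbers are rough assumptions for a small drone camera (not values taken from DroneLink or the DJI spec sheet), so treat the output as illustrative only:

```python
import math

# Assumed camera field of view for a nadir (straight-down) shot -- rough
# figures for a small 4:3 drone camera, not measured values.
HFOV_DEG = 71.0   # horizontal field of view, degrees (assumption)
VFOV_DEG = 56.0   # vertical field of view, degrees (assumption)

def footprint_m(height_m, fov_deg):
    """Ground distance covered by one photo along the given axis."""
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

def spacing_m(height_m, fov_deg, overlap):
    """Distance between photo centres for a given fractional overlap."""
    return footprint_m(height_m, fov_deg) * (1.0 - overlap)

h = 40.0                     # example flight height, metres
front, side = 0.80, 0.75     # the 75-80% overlaps mentioned above

print(f"Footprint at {h} m: "
      f"{footprint_m(h, HFOV_DEG):.1f} m x {footprint_m(h, VFOV_DEG):.1f} m")
# Assuming the image's vertical axis points along the flight line:
print(f"Photo every {spacing_m(h, VFOV_DEG, front):.1f} m along track, "
      f"flight lines {spacing_m(h, HFOV_DEG, side):.1f} m apart")
```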
DroneLink lets you create a flight plan on a desktop in a very comprehensive way, involving waypoints, map layouts or orbitals, or combinations of those, as well as a range of other more advanced mapping options (such as KML file imports from Google Earth Pro).
Then there's the DL mobile app, which you run the missions with. It accesses the flight plans you have made and also lets you edit them in the field.
So, to run a mission, you set up the drone in the DJI app in the normal way: load your geofencing license if required, set camera exposure settings, do any calibrations, etc.
Then open up the DL app and select your flight plan. In most cases, it is best to close down the DJI app while running a mission, although that seems to vary with what phone or pad you are using.
Hit Run and the drone will take off and fly to its start point. My stone circle mapping flight plan had a duration of about 22 minutes, so a battery swap may be needed mid-mission. At any time, or when a Return to Home is triggered at 20% battery, you can pause the mission, fly the drone back manually (or with auto RTH), swap over the battery, take off again (using the DJI app to initiate) and tap Resume, whereupon the drone will fly back to exactly where it left off and continue its plan and image capture.
To get used to the software, I tested out a plan on a local burial chamber. It combined vertical shots, with the gimbal almost vertically down (80 degrees, in fact), with orbital shots taken around a series of circles of differing heights, diameters and gimbal angles, to give greater detail to the model.
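For anyone curious how such an orbit reduces to numbers, here is a sketch of the geometry: waypoints spaced evenly around a circle, each with a heading facing the centre and a gimbal pitch chosen so the camera looks at the subject. This is illustrative arithmetic only, not DroneLink's format, and the coordinates below are hypothetical:

```python
import math

def orbit_waypoints(centre_lat, centre_lon, radius_m, height_m, n_points=12):
    """Sketch: waypoints evenly spaced on a circle around a subject,
    each facing the centre with the gimbal pitched down at it."""
    # Metres-per-degree approximations, fine for circles this small
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(centre_lat))

    waypoints = []
    for i in range(n_points):
        bearing = 360.0 * i / n_points                    # position around the circle
        dx = radius_m * math.sin(math.radians(bearing))   # east offset, metres
        dy = radius_m * math.cos(math.radians(bearing))   # north offset, metres
        waypoints.append({
            "lat": centre_lat + dy / m_per_deg_lat,
            "lon": centre_lon + dx / m_per_deg_lon,
            "height_m": height_m,
            "heading_deg": (bearing + 180.0) % 360.0,     # face the centre
            # Pitch down so the camera axis meets the ground at the centre
            "gimbal_pitch_deg": -math.degrees(math.atan2(height_m, radius_m)),
        })
    return waypoints

# e.g. a 15 m radius ring at 10 m height around a hypothetical position
for wp in orbit_waypoints(50.08, -5.62, radius_m=15, height_m=10, n_points=6):
    print(wp)
```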
The DroneLink web-based application lets you preview the plan and here is a screen video of such a preview for my burial chamber test flight running at 4x speed:
Mission Preview
I have used results from the field to refine my plans. Lowering the drone height gives better image resolution of the structures being captured, but a smaller coverage area per shot, so covering the whole area will require more images. There's a balance to be found between the level of detail in the final result and the number of photos your computer will need to crunch. Be aware of lighting too. Some say that diffuse lighting from cloud cover gives better results, in that it reduces contrast that might otherwise be too high for the algorithms to handle. In my case there was bright sunlight and shadows, but the results seem to have captured it all fine, with acceptable contrast and shadow detail in the final model.
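The resolution side of that trade-off can be put into numbers with the usual ground-sample-distance formula. The sensor figures below are assumptions for illustration, in the right ballpark for a small 12 MP drone camera rather than measured values:

```python
# Ground sample distance (GSD): real-world size of one image pixel on the ground.
sensor_width_mm = 6.17    # assumption for a 1/2.3" sensor -- check your spec sheet
focal_length_mm = 4.5     # assumption
image_width_px  = 4000    # 12 MP at 4:3

def gsd_cm(height_m):
    """Centimetres of ground per pixel at the given flight height."""
    return (sensor_width_mm * height_m * 100.0) / (focal_length_mm * image_width_px)

for h in (20, 40, 60):
    print(f"{h} m flight height -> ~{gsd_cm(h):.2f} cm/pixel")
# Halving the height halves the cm/pixel, but quarters the area each photo
# covers, so the photo count (and processing load) roughly quadruples.
```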
Having run the flight mission, I quickly scan through the resulting images and remove any that might produce strange results in the model. Typically these are ones where an important object is in shadow or blocked by another: for example, where a vertical shot has obscured detail of a lower object, the model would end up showing the higher object transplanted onto the lower one. In such cases it might be necessary to take a few images from ground level of the obscured structure, so that there is at least something for the algorithms to work with.
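I do this scan by eye, but blur is one fault that can be pre-screened automatically. A common trick is the variance of the Laplacian: sharp images score high, soft ones low. A minimal sketch using OpenCV, with the threshold and folder name as assumptions you would tune yourself:

```python
import cv2
from pathlib import Path

def sharpness(path):
    """Variance of the Laplacian -- higher means more fine detail (sharper)."""
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(img, cv2.CV_64F).var()

THRESHOLD = 100.0   # assumption: calibrate against a few known-good photos
for photo in sorted(Path("mission_photos").glob("*.JPG")):  # hypothetical folder
    score = sharpness(photo)
    flag = "  <-- check for blur" if score < THRESHOLD else ""
    print(f"{photo.name}: {score:.0f}{flag}")
```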
Having got a good set of images, over 400 in this example, I used WebODM (the web front end to OpenDroneMap) for the processing, which is free open-source software. Despite being free, unless you are happy tinkering under the bonnet of your computer to load it up, you will normally have to pay for an installer that sets up Docker, which manages the 'containers' WebODM runs in. With the installer, it's all very smooth to install and operate on whichever platform you are using.
Using a new Mac Mini (with its M1 chip), the limit for processing is around 500 images so far, provided I set the Docker resources to 14 GB (of 16) of RAM with a 4 GB swap file, and also reduce the image size in WebODM to 1500 px from the default of about 2000. I could probably go lower, since the result is screen-based, and would happily reduce to 1000 px if required, which would significantly increase the number of images I can crunch. Note that your Docker disk image will fill quite quickly with gigabytes of data, so it's worth purging every so often; but seek guidance on the best way to do this, and don't use the Clean/Purge option in Docker unless you have good reason to, as that removes more than one needs!
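If you would rather drive the processing from a script than from the WebODM web page, the OpenDroneMap project publishes a small Python client, pyodm, that talks to a running NodeODM instance. A sketch only, assuming NodeODM is listening on its usual port 3000, that the folder name exists, and that the resize option keeps the name given in the ODM docs:

```python
from pathlib import Path
from pyodm import Node   # pip install pyodm

# Assumes a NodeODM processing node is running locally on port 3000
node = Node("localhost", 3000)

photos = [str(p) for p in Path("mission_photos").glob("*.JPG")]  # hypothetical folder

task = node.create_task(
    photos,
    {
        "resize-to": 1500,  # the same 1500 px downsize described above
    },
    name="stone-circle",
)
task.wait_for_completion()
task.download_assets("./odm_results")  # includes the textured model
print("Done: results in ./odm_results")
```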
The WebODM output I need, from the download options, is the Textured Model, which consists of an .obj file, an .mtl material file, a .conf file, and a folder of texture images in PNG format.
I then import just the .obj file into Blender for a bit of editing. Blender, again, is completely free open-source software, beloved by game makers, animators and artists for creating 3D models. In Blender I trimmed off the unwanted parts of the model; in the stone circle example there were sections around the edges that were not needed, so I trimmed it to a circle (although, in fact, the stone circle is not a precise circle).
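That trim can also be scripted from Blender's built-in Python console. Here is a minimal sketch that deletes everything beyond a chosen radius from the model origin; the file path and radius are assumptions, and the import operator shown is the Blender 2.9x/3.x one:

```python
import bpy
import bmesh
from mathutils import Vector

# Import the WebODM mesh (operator for Blender 2.9x/3.x;
# newer releases use bpy.ops.wm.obj_import instead)
bpy.ops.import_scene.obj(filepath="/path/to/odm_textured_model.obj")  # hypothetical path
obj = bpy.context.selected_objects[0]

RADIUS = 25.0  # metres to keep around the origin -- assumption, adjust to your site

# Delete every vertex whose horizontal (XY) distance from the origin exceeds RADIUS
bm = bmesh.new()
bm.from_mesh(obj.data)
outside = [v for v in bm.verts if Vector((v.co.x, v.co.y, 0.0)).length > RADIUS]
bmesh.ops.delete(bm, geom=outside, context='VERTS')
bm.to_mesh(obj.data)
bm.free()
```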
After editing, I needed to export a range of files and make a compressed zip from them to be imported into a 3D viewer such as P3D or Sketchfab. For this, one saves the edited Blender file and then, to the same folder, exports from Blender an .obj file together with its material file, and also a folder of textures (see pic). The textures are handled via Blender's File > External Data menu, packing and then unpacking them to the same folder.
Files to Zip
This collection of files and the texture folder are then compressed into a zip file, resulting in something considerably less than 100 MB, which is handy as the free viewer account limit is typically 100 MB.
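Any zip tool will do, but as a sketch, here is the same packaging step scripted in Python, with the folder and archive names as assumptions matching the export described above:

```python
import zipfile
from pathlib import Path

export_dir = Path("blender_export")        # hypothetical folder holding the exports
archive = Path("stone_circle_model.zip")   # hypothetical output name

with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    for f in export_dir.rglob("*"):
        if f.is_file():
            zf.write(f, f.relative_to(export_dir))  # keep the obj/mtl/texture layout

size_mb = archive.stat().st_size / 1_000_000
print(f"{archive}: {size_mb:.1f} MB (free viewer accounts typically cap at 100 MB)")
```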
So, to summarise, the steps required to obtain my 3D model are:
Design flight plan on DroneLink
Capture images at the site using the flight plan
Remove or replace any poor images
Run WebODM with suitable settings
Export Textured model
Import Obj file to Blender and edit model
Save Blender file and export Obj, Mtl, and Texture files
Compress to a zip file
Import to a viewer
There you have it. If anyone is enticed into this sort of work, have fun; the results are worth it.
Julian