New to tracking? See the Overall Workflow section below.
Tracking drone footage is generally straightforward, but there are pitfalls to avoid, both while flying the drone to shoot the footage and later while tracking it.
- DO choose a drone with a high-quality lens. Lens distortion adds significant extra steps and complexity to your workflow, especially if you're just learning to track.
- DO NOT spin the drone in place, without translating it. Spins are tripod shots, with no perspective shift and no 3D, so you'll have to rely on survey points.
- DO NOT fly the drone straight forward with the camera looking straight ahead. This creates a degeneracy (an “optical illusion”) where the field of view can't be determined; see the worked example after this list.
- DO orbit the scene, focusing on the area of interest.
- DO fly sideways (crabbing) to traverse the area of interest.
- If the drone has a zoom lens, resist the temptation to zoom in tight, which flattens the perspective. Keep the lens wider and fly closer.
- In dynamic shots, keep the shutter time short (if possible) to avoid blurring trackable features; see the blur arithmetic after this list. Very dynamic shots may also exhibit Rolling Shutter, but drone shots are typically not that dynamic.
- Automatic tracking usually works well for drone shots; however...
- For longer shots where features enter and leave the image, increase the number of trackers on the Advanced dialog of the Feature Panel.
- Be sure to delete any trackers on people or moving vehicles. All tracked features must be stationary. The Find Erratic Trackers tool can help find moving features.
- Be sure to add trackers that are up off the ground, to capture height and perspective shifts. Examples: light posts, tops of buildings, cell towers, etc., but not distant features such as mountains. (Why? If all trackers are co-planar, the field of view cannot be determined; see the worked example after this list.) Use Peel on the Feature Panel, or supervised tracking.
- Remove trackers on clouds and distant features, such as mountains. Delete Mode can help.
- Remove trackers on open water or reflective surfaces such as store windows.
- DO set up a coordinate system in SynthEyes that matches what you are trying to insert in the scene. Often this can best be accomplished by importing a (possibly simplified) copy of your 3D model into SynthEyes, to act as a reference.
- DO NOT attempt to reposition, reorient, or rescale the scene in your downstream 3D application, after the scene is exported from SynthEyes. Most such attempts destroy the 3D match. Get it right in SynthEyes before exporting.
- DO carefully add supervised trackers on visible features that can be cross-referenced to site plans and similar records, so that inserted elements such as buildings are located accurately.
- DO NOT assume the ground is perfectly flat. Not only is the earth round, but sites are rarely perfectly flat and level. Usually, inserted objects that appear to “slide” are located underground or floating in the air.
- DO use survey data to help locate the drone and determine the field of view if one of the DO NOT issues above has compromised the shoot. Survey points can come from GPS coordinates, lidar data, or even a Google Maps image used as a reference (there's a tutorial on that); see the conversion sketch after this list.
- General tip: be aware that trying to add buildings, roads, etc. behind trees or other complex objects in the original footage will result in a very difficult rotoscoping task. Typically it will be easier to composite completely over the top of the existing trees and add newly rendered landscaping.
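Why the field of view becomes undeterminable in the cases above: here is a simplified pinhole-camera sketch (the symbols and numbers are illustrative, not anything from SynthEyes). A feature at lateral offset $X$ and depth $Z$ projects to image position

$$x = f\,\frac{X}{Z}$$

Flying straight forward by $\Delta Z$ moves it to $x' = f\,\frac{X}{Z - \Delta Z}$, but staying put and zooming to $f' = f\,\frac{Z}{Z - \Delta Z}$ produces exactly the same position, since $f'\,\frac{X}{Z} = f\,\frac{X}{Z - \Delta Z}$. If every feature sits at nearly the same depth (all on flat ground, or all on a single plane), the solver cannot tell travel from zoom, and the field of view is ambiguous. Trackers at varying depths and heights break the tie.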
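On shutter time, the streak a feature smears across the image is roughly its image-plane speed times the shutter time:

$$\text{blur (pixels)} \approx v_{\text{image}}\;[\text{px/s}] \times t_{\text{shutter}}\;[\text{s}]$$

The numbers here are made up but typical: a feature crossing a 3840-pixel frame in 10 seconds moves at about 384 px/s, so a 1/50 s shutter smears it almost 8 pixels (hard to track reliably), while a 1/400 s shutter keeps it under 1 pixel.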
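Survey fixes from GPS need to be converted into local scene coordinates before use. Over a typical drone site a flat-earth approximation is adequate; here is a minimal Python sketch (the function name and sample coordinates are hypothetical, and this is not a SynthEyes API):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean earth radius, meters

def gps_to_local(lat, lon, alt, origin):
    """Convert a GPS fix (degrees, meters) into meters east/north/up
    of an origin point, using an equirectangular approximation that
    is fine over the few hundred meters a drone site usually spans."""
    lat0, lon0, alt0 = origin
    east = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS_M
    up = alt - alt0
    return east, north, up

# Example: a surveyed feature relative to a chosen origin point
origin = (34.0522, -118.2437, 90.0)
print(gps_to_local(34.0530, -118.2425, 92.0, origin))
# -> roughly (110.6, 89.0, 2.0) meters east/north/up
```

For sites spanning kilometers, use a proper map projection such as UTM instead.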
Overall Workflow
SynthEyes is a tracking application; it analyzes your footage and produces information on where the camera (drone) moved, and about the camera lens. You'll define a coordinate system convenient to your work (including the overall scale/units).
Next, you'll export the tracking information from SynthEyes to a 3D rendering application (3ds Max, Maya, Blender, Cinema 4D, LightWave...), so that it can render new footage that matches your original; see the sketch at the end of this section.
Then you'll use a compositing application (After Effects, Resolve's Fusion) to blend the new rendered footage into the original, matching brightness, shadow levels, color, defocus, noise, etc.
You might also import your 3D models into SynthEyes to help with coordinate system setup, and might export tracking information to the compositing app for 3D compositing.
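To make the export step concrete, here is a minimal sketch of what exported tracking data amounts to on the Blender side: a camera keyframed frame by frame. SynthEyes provides real exporters for this; the hand-rolled version below, with made-up sample values, is purely illustrative.

```python
import bpy

# Hypothetical solved camera path: (frame, location xyz, rotation euler xyz).
# In practice an exporter writes one such sample for every frame of the shot.
samples = [
    (1, (0.0, -12.0, 4.0), (1.45, 0.0, 0.00)),
    (2, (0.1, -11.9, 4.0), (1.45, 0.0, 0.01)),
]

cam_data = bpy.data.cameras.new("DroneCam")
cam_data.lens = 24.0  # solved focal length in mm, from the tracker
cam_obj = bpy.data.objects.new("DroneCam", cam_data)
bpy.context.scene.collection.objects.link(cam_obj)

for frame, loc, rot in samples:
    cam_obj.location = loc
    cam_obj.rotation_euler = rot
    cam_obj.keyframe_insert(data_path="location", frame=frame)
    cam_obj.keyframe_insert(data_path="rotation_euler", frame=frame)
```

The lens, image aspect, and coordinate-axis conventions all have to survive the trip too, which is one more reason to use the prepared exporters rather than hand-rolling.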