National Unmanned Aircraft Systems (UAS) Project Office

Data Research

Today’s low-cost sUAS-compatible sensors have built-in GPS that can provide raw imagery with an average geospatial accuracy of 8-12 meters. When combined with additional ground control, achieved by placing physical ground control points (GCPs) around the mission site, data accuracy can be increased to centimeters. The data research activities at the National UAS Project Office (NUPO) focus on developing new processing techniques to achieve this higher-resolution source data and on maximizing its use for generating geospatial products that support scientific analysis.


Orthophotos

Raw image data acquired from sensors mounted on sUAS must be converted into orthophotos, using a process called orthorectification, before it can be used for geospatial analysis or as source data for geospatial products. Orthorectification removes the effects of topography (surface relief) and compensates for any sensor tilt or distortions in the raw data to produce a distortion-free aerial photograph with a completely uniform scale, called an orthophoto. An orthomosaic, frequently used as a base map, can then be created by combining a series of orthophotos into a seamless image.

Traditionally, orthorectification required knowledge of the distortions associated with a specific camera lens system (i.e., sensor) to perform the high-quality calibration needed for the precise and accurate extraction of topography (e.g., terrain and 3D surfaces) and/or planimetric features (e.g., road centerlines, streams, or vegetation boundaries) from stereoscopic imagery. Today this can be done by using the GPS data acquired during the sUAS flight to provide precise geolocation information that calibrates the imagery in structure from motion (SfM) software, which applies a multi-view solution to empirically model the lens distortion for any radial ground lens, effectively generating a high-quality in-situ calibration. In other words, NUPO researchers can combine the GPS information captured during a low-altitude sUAS flight with imagery acquired from a mounted low-cost digital single-lens reflex (SLR) camera, or any sensor that uses a radial ground lens, to produce high-resolution orthophotos with ground sample distances of less than six inches. And while most digital SLR and point-and-shoot cameras capture natural color (RGB) images, one of the most common types of imagery used to create base maps, the low-cost sensors available today can readily acquire a variety of data types, including natural color, thermal, and multispectral.
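
The ground sample distance mentioned above can be estimated before a flight from the sensor geometry and the planned altitude. The sketch below uses the standard nadir-imaging approximation; the sensor width, focal length, altitude, and pixel count are illustrative values (not taken from the text), chosen to resemble an APS-C camera of the Ricoh GR class.

```python
def ground_sample_distance(sensor_width_mm, focal_length_mm,
                           altitude_m, image_width_px):
    """Approximate ground sample distance (cm/pixel) for a nadir image.

    GSD = (sensor width * altitude) / (focal length * image width in pixels),
    converted to centimeters.
    """
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical mission: 23.7 mm wide APS-C sensor, 18.3 mm lens,
# 4928 pixels across, flown at 120 m above ground level.
gsd = ground_sample_distance(23.7, 18.3, 120.0, 4928)
print(f"{gsd:.1f} cm/px")  # roughly 3 cm, well under six inches
```

Even at a relatively high 120 m altitude, the estimated GSD stays far below the six-inch (about 15 cm) figure cited above, which is why low-altitude sUAS flights can produce such fine resolution.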

Thermal orthophotos are generated from images taken by a thermal camera, such as the FLIR Vue Pro R, that captures non-contact temperature measurements of surfaces as photographs. Orthophotos generated from raw 16-bit radiometrically calibrated thermal imagery can result in a geospatial raster dataset where each pixel location has an associated absolute surface temperature. If absolute temperature is not required, relative temperature orthophotos can be generated from histogram-stretched JPGs with various color palettes such as WhiteHot, BlackHot, etc. UAS-acquired low-altitude thermal imagery generally produces products with ground sample distances of less than 15 cm and can be used to support wildfire monitoring, search and rescue, solar panel inspections, and water temperature monitoring.
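
The two thermal products described above can be sketched in a few lines. The 0.04 K-per-count scale factor below is one convention used by some FLIR radiometric (TLinear) outputs and is an assumption here, not a documented property of any specific camera; the raw frame is made up for illustration, and a real workflow would read the camera's 16-bit TIFF output and confirm the scale factor for the specific camera and firmware.

```python
import numpy as np

# Hypothetical 4-pixel 16-bit radiometric frame (real data would come from
# the camera's TIFF output).
raw = np.array([[7500, 7600],
                [7400, 7650]], dtype=np.uint16)

# Absolute-temperature product: counts -> kelvin -> degrees Celsius.
# The 0.04 K/count scale is an assumed TLinear convention -- verify it
# against your camera's documentation before relying on it.
kelvin = raw.astype(np.float64) * 0.04
celsius = kelvin - 273.15

# Relative-temperature product: a simple 2%-98% histogram stretch to an
# 8-bit range, the kind of normalization behind palettes like WhiteHot.
lo, hi = np.percentile(raw, [2, 98])
stretched = np.clip((raw - lo) / (hi - lo), 0.0, 1.0) * 255.0

print(celsius.round(2))
```

The first array keeps physical units (one temperature per pixel, as in the absolute-temperature rasters above), while the second discards them in exchange for maximum visual contrast.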

Color infrared orthophotos are made from imagery acquired by visible and near-infrared sensors that detect green, blue, and the red/near-infrared edge of the electromagnetic spectrum centered around 690-720 nm. Early research missions used a natural color camera modified with a notch filter that blocks the low to mid red-light range, resulting in a sensor that detects green, blue, and the near-infrared edge centered around 710-740 nm, as a low-cost method of capturing near-infrared imagery. Today a MicaSense RedEdge camera is frequently used to capture source imagery in the visible, red edge, and near-infrared wavelength ranges. Orthophotos made from near-infrared images are a valuable resource for vegetation analysis and support the generation of the Normalized Difference Vegetation Index (NDVI).

A natural color orthomosaic over Devils Tower National Monument made with imagery from a Ricoh GR camera.
Absolute temperature orthomosaic of the Denver Federal Center created with images from the FLIR Vue Pro R.
Color infrared orthomosaic over the Sycan River in Oregon created with images from a converted Canon S100 camera.
Point Clouds and 3D Models

Point clouds are sets of geographic data points in a three-dimensional coordinate system, typically represented as X, Y, and Z values, and are an invaluable resource for a variety of geographic applications that evaluate and monitor landscape change. Point clouds vary from sparse to dense and can be derived by applying structure from motion (SfM) techniques to aerial imagery or collected by Light Detection And Ranging (LiDAR) scanners. True-color point clouds can also be generated by processing standard imagery in SfM software or by combining standard imagery with LiDAR data.
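
As a minimal sketch of the data structure, a true-color point cloud is just a table of (X, Y, Z) coordinates with per-point RGB values, as produced when SfM software colors points from the source imagery. The coordinates and colors below are invented for illustration.

```python
import numpy as np

# Four points with position (meters) and 8-bit color per point.
cloud = np.zeros(4, dtype=[("x", "f8"), ("y", "f8"), ("z", "f8"),
                           ("r", "u1"), ("g", "u1"), ("b", "u1")])
cloud["x"] = [0.0, 1.0, 1.0, 0.0]
cloud["y"] = [0.0, 0.0, 1.0, 1.0]
cloud["z"] = [100.2, 100.4, 101.0, 100.7]
cloud["r"], cloud["g"], cloud["b"] = 120, 140, 90

# A simple landscape-monitoring style query: points above a reference
# elevation, e.g. to isolate material deposited since a prior survey.
above = cloud[cloud["z"] > 100.5]
print(len(above))  # 2 points exceed the threshold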

Highly accurate 3D or physical models can be generated by overlaying natural color orthophotos, or other types of georeferenced orthophotos, onto the point cloud data. These realistic 3D models can be used to support computer simulations, be displayed as two-dimensional images via 3D rendering, or be printed as physical models on a 3D printer.

Point cloud of the Devils Tower generated in PhotoScan using images captured with the Ricoh GR II camera.
Close-up of a section of the 3D model of Devils Tower generated from the point cloud and orthomosaic.
Contours

Contour lines (or contours) are a series of joined points of equal elevation (height) above a given level, such as height above mean sea level. Elevation values derived from an orthomosaic are an ideal data source for generating contour lines. The contour interval used when generating the lines represents the elevation difference between successive contours. The resulting contour maps are effective tools for terrain visualization, showing valleys, hills, and the steepness of slopes.
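
The role of the contour interval can be sketched with a toy elevation grid: dividing each elevation by the interval and taking the floor assigns every cell to a contour band, and a contour line follows the boundary between adjacent cells in different bands. The grid values below are made up; a real input would be an elevation raster.

```python
import numpy as np

# Toy elevation grid in meters.
dem = np.array([[10.2, 11.8, 13.1],
                [11.0, 12.5, 14.2],
                [11.9, 13.4, 15.6]])

interval = 1.0  # 1-meter contour interval

# Band index for each cell; contour lines separate differing bands.
bands = np.floor(dem / interval).astype(int)
print(bands)
```

A smaller interval (e.g. the 1-foot contours shown below) produces more bands and therefore more, closer-spaced lines on steep terrain.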

Contours of the Piute Valley derived at a ground sample distance (GSD) of 1.4 inches.
Derived 1-foot contours overlaid on the orthomosaic of the Devils Tower in Wyoming.
Elevation Models (DEMs, DSMs, DTMs)

A digital elevation model (DEM) is a digital dataset of bare-surface elevations (z) at horizontal (x, y) coordinates, and can be accurately derived from ground positions sampled at regularly spaced horizontal intervals using standard commercial off-the-shelf cameras mounted on a UAS. Once generated, these high-resolution DEMs can also serve as a low-cost option for generating accurate volumetric measurements.
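
The volumetric measurement mentioned above reduces to a simple sum: difference two co-registered DEMs of the same site and multiply the per-cell elevation change by the cell area. The two toy DEMs below are invented (e.g. a surface before and after a stockpile change); real inputs would be co-registered rasters.

```python
import numpy as np

# Toy "before" and "after" DEMs in meters over the same 3x3 footprint.
dem_before = np.full((3, 3), 100.0)
dem_after = dem_before + np.array([[0.0, 0.5, 0.0],
                                   [0.5, 1.0, 0.5],
                                   [0.0, 0.5, 0.0]])

cell_size = 0.10  # 10 cm ground sample distance -> 0.01 m^2 per cell

dz = dem_after - dem_before           # per-cell elevation change (m)
volume = dz.sum() * cell_size ** 2    # net volume change (m^3)
print(f"{volume:.4f} m^3")
```

Positive and negative cells can also be summed separately to report cut and fill volumes individually.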

Digital surface models (DSMs) are a form of DEM that contains reflective surface elevations of natural terrain features in addition to vegetation and cultural features such as buildings and roads. Point clouds generated from UAS-mounted LiDAR sensors, which calculate elevation values from both the tops of surfaces and the bare ground, can be used to generate highly accurate DSMs. LiDAR point cloud data can also be used to generate digital terrain models (DTMs) by removing the elevation signals of features such as vegetation and buildings, leaving only the elevation of the terrain or ground.
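
One common product of the DSM/DTM pairing described above is their difference, often called a normalized DSM or height model: subtracting the terrain from the surface leaves only above-ground feature heights. The term "normalized DSM" and the toy grids below are illustrative additions, not from the text.

```python
import numpy as np

# Toy 2x2 surface and terrain models in meters.
dsm = np.array([[101.0, 105.0],
                [100.5, 112.0]])
dtm = np.array([[100.8, 100.9],
                [100.4, 101.0]])

# Normalized DSM: ~0 on open ground, vegetation/building height elsewhere.
ndsm = dsm - dtm
print(ndsm)
```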

DSM of the West Fork Mine in Missouri generated from high-resolution imagery (5-10 cm pixel size) and elevation data (6-10 cm vertical and 2-4 cm horizontal resolution).
Elevation model of a section of the Elwha River in Olympic National Park generated from the point cloud data that was derived from GoPro Hero imagery.
Extracted Features

Feature extraction automates the process of recognizing spatial and spectral patterns within an image and outlining or classifying those features into a newly defined dataset. A high-resolution orthomosaic generated from imagery collected on low-altitude UAS flights provides an ideal method for accurately identifying small-scale (~1 m) to larger-scale features. Features may then be measured by area, length, or count, with feature counts providing a valuable tool to support species identification, population studies, and other aspects of resource management.
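
The counting step can be sketched as connected-component labeling over a classification mask: each group of touching feature pixels is one feature (e.g. one bird). The mask below is invented; a real one would come from classifying an orthomosaic.

```python
import numpy as np

# Toy mask: 1 where a pixel was classified as the target feature.
mask = np.array([[1, 1, 0, 0, 0],
                 [0, 1, 0, 1, 1],
                 [0, 0, 0, 1, 0],
                 [1, 0, 0, 0, 0]], dtype=int)

def count_features(mask):
    """Count 4-connected components of nonzero pixels via flood fill."""
    rows, cols = mask.shape
    seen = np.zeros((rows, cols), dtype=bool)
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                count += 1                 # new, unvisited feature
                stack = [(r, c)]
                seen[r, c] = True
                while stack:               # flood-fill this component
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

print(count_features(mask))  # 3 separate features
```

In practice a library routine (e.g. a labeling function from an image-processing package) would replace the hand-rolled flood fill, but the idea is the same.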

Extracted American Pelican locations at the Anaho Island National Wildlife Refuge overlaid on a 3D natural color orthomosaic.
Extracted bird locations at the Chase Lake National Wildlife Refuge; pelican nests (red), cormorant nests (blue), gull/snowy egret non-nesting (black).
Extracted bird locations at the Palmyra Atoll National Wildlife Refuge overlaid on orthomosaic of Ricoh GR natural color imagery.
Normalized Difference Vegetation Index

Orthomosaics made from multispectral imagery with bands in the red and near-infrared range are a valuable resource for vegetation analysis and support the generation of Normalized Difference Vegetation Index (NDVI) maps. NDVI is a standardized index based on the amount of near-infrared light reflected by plants: the ratio between reflected near-infrared light and reflected red light correlates strongly with the health of the imaged vegetation, with values closer to 1 indicating healthy vegetation and values near or below zero indicating non-vegetated surfaces such as bare soil and water. In other words, the bright red display of the color ramp indicates healthy, highly reflective plants, while the blue color indicates lower reflectivity and possibly less healthy vegetation. Other spectral indices, such as the Normalized Burn Ratio for assessing burn severity, can be calculated from orthomosaics for various applications.
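
The NDVI calculation itself is a per-pixel ratio, NDVI = (NIR - Red) / (NIR + Red). The reflectance values below are invented for illustration; real inputs would be the red and near-infrared bands of a multispectral orthomosaic.

```python
import numpy as np

# Toy red and near-infrared reflectance bands (0-1 range).
red = np.array([[0.08, 0.30],
                [0.10, 0.25]])
nir = np.array([[0.50, 0.32],
                [0.45, 0.20]])

# Per-pixel NDVI in [-1, 1]: high for vigorous vegetation, near or below
# zero for bare soil or water.
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))
```

The top-left pixel, with strong near-infrared reflectance relative to red, scores above 0.7 (healthy vegetation), while the bottom-right pixel comes out negative (non-vegetated).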

NDVI over a mining impoundment in West Virginia, derived from an orthomosaic of color infrared imagery taken by a modified Canon SX260 camera.
NDVI derived from an orthomosaic of near infrared imagery taken from approximately 400 feet AGL over the Sycan River in the Klamath Basin in Oregon.

Data Processing Techniques