Ohio State University Extension

Remote Sensing in Precision Agriculture

Best Management Practices for Addressing Challenges with Imagery Quality
FABE-554.1
Agriculture and Natural Resources
Date: 03/28/2017
Sami Khanal, PhD, Research Scientist, Food, Agriculture and Biological Engineering
John Fulton, PhD, Associate Professor, Food, Agriculture and Biological Engineering
Elizabeth Hawkins, PhD, Field Specialist, Agronomic Systems
Kaylee Port and Andrew Klopfenstein, Program Managers, Precision Agriculture

Disclaimer – The information presented here is intended for practitioners interested in utilizing remote sensed imagery within analytical processes for field planning, development of recommendations and farm management where spatial and temporal quality are important.

Precision agriculture (PA) provides the tools and technologies to identify in-field soil and crop variability, offering a means to improve sub-field farming practices and optimize agronomic inputs. Variable-rate technology (VRT) provides the capability to vary the rate of soil- and crop-applied inputs for site-specific application. Today, sensing technologies, both ground-based and remote, continue to evolve and have become less expensive for capturing field-level data. For the operational success of VRT, maps of crop growth, crop diseases, weeds, crop nutrient deficiencies, and other crop and soil conditions are required. As a result, maps depicting crop and soil variability through remote sensed images acquired by sensors mounted on satellites, aircraft or ground-based equipment have become an integral part of VRT.

Figure 1. Factors influencing the quality of remote sensed images.

Remote sensed imagery can be used for mapping soil properties, classification of crop species, detection of crop water stress, monitoring of weeds and crop diseases, and mapping of crop yield. Use of remote sensing in PA is influenced by the type of platform (satellite, air or ground) used for data collection; the number and width of spectral bands captured by the sensor (multispectral versus hyperspectral); and the spatial (high, medium and low), temporal (hourly, daily and weekly) and radiometric (8-, 12- and 16-bit) resolutions at which sensors collect data. When using remote sensed images for agricultural decision-making, several issues must be carefully evaluated, including: (1) how accurately the image matches the ground location (also called geometric precision); (2) to what extent the image depicts features on the ground (i.e., spatial and spectral resolutions); and (3) the quality of spectral information represented in acquired images. Figure 1 illustrates some of the issues influencing the quality of remote sensed imagery; these issues are discussed within this publication.

What are the image-related issues and how do we address them?

Geometric Precision

While capturing images, sensors mounted on an unmanned aerial vehicle (UAV), aircraft or satellite are influenced by various unavoidable factors (e.g., position and dynamic state of the platform, topographic relief, and Earth's rotation), which result in geometrically distorted images that do not accurately correspond to ground object locations. Rectification of a geometrically distorted image (commonly called "orthorectification") is the first and foremost step before remote sensed images can be used for meaningful interpretation and analyses. Once the images accurately represent geographic locations, they can be used for crop scouting. Figure 2 provides a comparison of the distortion in a raw image and a corrected image taken from a manned aircraft.

Figure 2. Raw, uncorrected aerial image acquired from manned aircraft (a) and the geometrically rectified image (b). Note: red circles in the left image illustrate misalignments between the raw and existing orthorectified images.

Ground control points (GCPs, i.e., points on the Earth's surface with known locations) are commonly used to orthorectify images and as check points for validation and quality assessment of orthorectified images. The traditional approach for orthorectifying images involves manually feeding information related to the camera system (e.g., lens distortion, focal length), altitude of the sensor and terrain elevation into complex photogrammetric equations that create mathematical relationships between the sensor, the image and the target surface. In situations where this information is unavailable but prior orthorectified images exist, orthorectification can be accomplished by manually finding GCPs in the orthorectified images and relating them to the equivalent points in the distorted images. This is done using commercial software packages such as ERDAS Imagine, ENVI or ArcGIS. When using this approach, it is recommended to select GCPs that are easily recognizable, such as road intersections or landmarks. In the absence of GCPs that can easily be identified in the image, which may occur when the spatial coverage of the image is limited to agricultural fields, GCPs can be established across the study area before taking images. For this method, white metal sheets that appear distinctly in the image can be mounted on posts. Images can then be geometrically corrected by measuring the geographic coordinates of these GCPs with a Global Positioning System (GPS) unit on the ground and relating those coordinates with the GCPs that appear in the images. Note that if georectification involves selection of several GCPs on the image, it may cause some "stretching" of the image pixels, which can decrease spatial accuracy.
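Conceptually, georectification with GCPs amounts to fitting a transformation that maps image pixel coordinates to GPS-measured ground coordinates. The sketch below fits a simple six-parameter affine transform by least squares; all coordinate values are hypothetical, and production software such as ERDAS Imagine, ENVI or ArcGIS uses more rigorous photogrammetric models:

```python
# Minimal sketch: fit a 6-parameter affine transform from GCP pairs
# (image pixel coordinates -> ground coordinates), then map any pixel.
# All coordinate values below are hypothetical examples.
import numpy as np

# (col, row) positions of GCPs located in the distorted image
pixels = np.array([[120, 80], [940, 95], [910, 760], [140, 770]])
# Matching (easting, northing) coordinates measured with a GPS unit
ground = np.array([[350120.0, 4425480.0], [350520.0, 4425470.0],
                   [350505.0, 4425140.0], [350130.0, 4425135.0]])

# Design matrix [col, row, 1]; solve for both ground axes at once
A = np.column_stack([pixels, np.ones(len(pixels))])
coeffs, residuals, _, _ = np.linalg.lstsq(A, ground, rcond=None)

def pixel_to_ground(col, row):
    """Apply the fitted affine transform to one pixel location."""
    return np.array([col, row, 1.0]) @ coeffs

print(pixel_to_ground(500, 400))  # approximate ground coordinate
# Combined RMS of the fit over all GCPs and both axes, in meters
print("RMS residual (m):", np.sqrt(residuals.sum() / len(pixels)))
```

With only four GCPs the fit is exactly this simple; in practice, many well-distributed GCPs are used so that the residuals provide a meaningful accuracy check.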

In recent years, efforts have been made toward automating the orthorectification process through the use of on-board GPS and inertial measurement unit (IMU) technology, pattern recognition technology and digital elevation models. Although there has been some success in automating the orthorectification process, this has been limited to images from UAVs; similar success has not been achieved for images acquired from satellites or manned aircraft. Key factors that have helped the automation of orthorectification for UAV images include the high overlap (approximately 80% frontal overlap and 60% side overlap) between images acquired by UAVs, and the on-board GPS that provides detailed metadata describing the camera position (latitude and longitude) and parameters (sensor size, pixel resolution and focal length).
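These overlap targets translate directly into photo and flight-line spacing when planning a UAV mission. A minimal sketch, using hypothetical camera footprint values: the spacing needed for a given overlap is simply the ground footprint times one minus the overlap fraction.

```python
# Minimal sketch: convert the overlap targets above (about 80% frontal,
# 60% side) into UAV photo and flight-line spacing.
# Camera footprint values are hypothetical examples.
footprint_along = 60.0    # meters covered on the ground along-track
footprint_across = 90.0   # meters covered across-track

frontal_overlap = 0.80    # overlap between successive photos
side_overlap = 0.60       # overlap between adjacent flight lines

photo_spacing = footprint_along * (1 - frontal_overlap)   # 12 m
line_spacing = footprint_across * (1 - side_overlap)      # 36 m

print(f"Trigger a photo every {photo_spacing:.0f} m along track")
print(f"Space flight lines {line_spacing:.0f} m apart")
```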

Image Resolution

There are four types of resolution in remote sensing that need to be considered when analyzing images: spatial, temporal, spectral and radiometric. Among these, spatial and spectral resolution are particularly significant because they influence the ability to extract detailed information from an image.

Spatial Resolution

Spatial resolution, when referring to pixel size, determines the size of the smallest identifiable features in an image. With an image of high spatial resolution, small objects can be detected and features are displayed in detail. Conversely, with low spatial resolution, the pixel size is large; multiple features are represented by a single pixel, and it becomes difficult to separate one feature from another. Imagery with higher spatial resolution provides more detail, illustrating greater in-field variability in crop vigor or health than an image with low spatial resolution (Figure 3).

Figure 3. Calculated vegetation index maps collected at two different spatial resolutions. The low spatial resolution vegetative map (a) has a pixel size of 10 meters, whereas the high-resolution map (b) has a pixel size of 0.25 meters. These data were collected mid-summer from a canopied corn crop; note the difference in detail between resolutions.

Pixel size is determined by the distance between the sensor platform and the target being imaged (i.e., altitude), viewing angle and field of view of the sensor. Images acquired by manned-aircraft or UAVs typically have higher spatial resolution than satellite images, owing to their inherently lower altitudes. 
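The effect of altitude on pixel size can be quantified as ground sample distance (GSD). A minimal sketch, assuming a nadir-pointing frame camera; the sensor and altitude values are hypothetical examples:

```python
# Minimal sketch: ground sample distance (GSD) for a nadir-pointing
# camera. Pixel size on the ground scales linearly with altitude and
# inversely with focal length. Example values are hypothetical.
def ground_sample_distance(altitude_m, pixel_pitch_um, focal_length_mm):
    """GSD in meters per pixel: altitude * pixel pitch / focal length."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

# A UAV at 100 m with a 3.75-micron pixel pitch and an 8-mm lens:
print(f"{ground_sample_distance(100, 3.75, 8):.3f} m/pixel")   # ~0.047
# The same camera flown on a manned aircraft at 1,500 m:
print(f"{ground_sample_distance(1500, 3.75, 8):.3f} m/pixel")  # ~0.703
```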

With manned aircraft and UAVs, the camera position is less stable, which results in variable spatial resolution from one image to another along the same flight plan. For meaningful comparisons between images across multiple time periods and geographic locations, images need to be converted to a common spatial resolution. The process of converting an image to a pixel size different from its original resolution is called re-scaling.
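As a minimal illustration of re-scaling, a fine-resolution band can be aggregated to a coarser common grid by block averaging; GIS and image-processing packages offer this and other resampling methods. The pixel sizes below mirror the Figure 3 example:

```python
# Minimal sketch: re-scale a fine-resolution band to a coarser common
# grid by block averaging (each output pixel is the mean of an n x n
# block of input pixels). Real workflows typically use GIS software.
import numpy as np

def block_average(band, factor):
    """Aggregate a 2-D array by an integer factor along each axis."""
    rows, cols = band.shape
    rows -= rows % factor          # trim edges that don't divide evenly
    cols -= cols % factor
    trimmed = band[:rows, :cols]
    return trimmed.reshape(rows // factor, factor,
                           cols // factor, factor).mean(axis=(1, 3))

fine = np.random.rand(1000, 1000)   # e.g., 0.25-m pixels
coarse = block_average(fine, 40)    # 40 x 0.25 m = 10-m pixels
print(coarse.shape)                 # (25, 25)
```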

Spectral Resolution

Spectral resolution refers to the ability of a sensor to differentiate between wavelength intervals in the electromagnetic spectrum. This is the main factor that distinguishes multispectral images from hyperspectral images. Compared to multispectral images, which capture a few relatively broad bands, hyperspectral images capture thousands of fine wavelength intervals that provide detailed information. As shown in Figure 4, different features and details in an image can often be distinguished by comparing their responses over a range of wavelengths. Sensors with high spectral resolution are useful in distinguishing features that are often not easily detectable by sensors with broad wavelength ranges.

Although hyperspectral images offer additional opportunities to capture variability in crop and soil conditions, there are limitations associated with their use, including the high cost of the sensor, high image storage requirements and the complexities associated with image processing. Thus, when selecting a spectral sensor for agricultural decision-making, the benefits of using hyperspectral images should be weighed against the expected cost.

Temporal Resolution

Temporal resolution signifies the frequency at which images are collected over the same area (e.g., a field). Images from manned aircraft and UAVs can have higher temporal resolution than satellite images due to flexibility in scheduling flight plans (versus the fixed revisit cycles of satellites). When using remote sensed images for in-season agricultural decision-making, such as nutrient application and irrigation scheduling, it is important to acquire images at frequent intervals during the crop growing season to detect possible in-season nutrient and water stress. Timely monitoring of crop signals through images during critical growth stages helps farmers locate potential problem areas and formulate management strategies.

Radiometric Resolution

Radiometric resolution reflects a sensor's ability to identify or discriminate very slight differences in reflected or emitted energy. In a remote sensed image, data are digitized and recorded as positive digital numbers (DNs) that vary from zero to a selected power of 2. The maximum number of brightness levels in an image depends on the number of bits used by the sensor to represent the spectral information. For example, a 4-bit image would have 2^4 = 16 digital values, ranging from zero to 15. The higher the radiometric resolution, the more sensitive the sensor is to small differences in the information represented in images, as illustrated in Figure 5. Since the actual information in an image is represented through the number of bits, images of higher radiometric resolution are useful for detecting differences in features such as crop canopy or soil variations.
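The bit-depth arithmetic can be made concrete with a short sketch showing the number of brightness levels at each radiometric resolution, and how distinct 16-bit values collapse together when re-quantized to 2 bits (the DN values are hypothetical):

```python
# Minimal sketch: brightness levels at each bit depth, and the loss of
# detail when a band is re-quantized to fewer bits.
import numpy as np

for bits in (2, 4, 8, 12, 16):
    print(f"{bits:2d}-bit: {2**bits} levels (0 to {2**bits - 1})")

# Re-quantize a hypothetical 16-bit band down to 2 bits: subtle
# differences in reflected energy collapse into the same DN.
band16 = np.array([100, 5200, 31000, 48000, 65535], dtype=np.uint16)
band2 = (band16.astype(np.float64) / 65535 * 3).round().astype(np.uint8)
print(band2)   # [0 0 1 2 3] -- only four distinguishable values
```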

Figure 5. Red band in visual image at a) 2- and b) 16-bit radiometric resolutions representing the differences in the level of detail.

Quality of Spectral Information

Figure 6. Atmospheric effect on radiation measured by remote sensors.

Spectral information in an image acquired by sensors on board satellites or aircraft is influenced by various factors, such as atmospheric absorption and scattering, sensor-target-illumination geometry, and sensor calibration, all of which tend to change over time. As a result, the spectral data acquired by sensors do not coincide with the reflected or emitted data from ground objects (Figure 6).

Although the spectral values recorded by a sensor in an image are proportional to upwelling radiation (radiance) from objects on the ground, these values are image specific and are not transferable; they depend on the viewing geometry of the sensor at the time images were taken, the location of the sun, specific weather conditions, etc. To detect true changes in crop and soil conditions, as revealed by changes in surface reflectance over multiple time periods, it is necessary to calibrate and correct the spectral reflectance of images. This process is called "radiometric calibration and correction." Images that are radiometrically calibrated and corrected allow for detection of temporal changes in crop condition and tracking of critical crop growth periods. This correction also helps to detect where, when and at what intensity changes occurred.

Radiometric Calibration and Correction

Radiometric calibration of an image is the process by which pixel intensities (i.e., DNs) are converted to a physical parameter (i.e., spectral reflectance), whereas radiometric correction is the process of removing errors in reflectance measurements introduced by sensors, sun angle, topography and atmospheric effects such as absorption and scattering. Spectral reflectance is a standardized measurement that represents the ratio of reflected radiation to incident (downwelling) radiation. These processes make it possible to compare spectral data of different origins and provide measurements of physical parameters.

With satellite images, a standard workflow (Figure 7) exists for these processes, in which every image comes with calibration and correction coefficients that are used to convert DNs to radiance and reflectance (USGS, 2016). However, with manned aircraft and UAV images, such workflows do not exist. To date, most studies based on aircraft and UAV images have applied very little image preprocessing or have simply used raw DN values.
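As an illustration of such a workflow, Landsat Level-1 scenes ship with per-band rescaling coefficients that convert DNs to top-of-atmosphere reflectance, followed by a sun-elevation correction (USGS, 2016). A minimal sketch; the coefficient and sun-elevation values below are illustrative rather than taken from a real scene:

```python
# Minimal sketch: Landsat-style DN-to-reflectance conversion using the
# per-band rescaling coefficients distributed in the scene metadata.
# Coefficient and sun-elevation values are illustrative examples.
import numpy as np

def dn_to_toa_reflectance(dn, mult, add, sun_elev_deg):
    """Top-of-atmosphere reflectance, corrected for sun elevation."""
    rho = mult * dn.astype(np.float64) + add
    return rho / np.sin(np.radians(sun_elev_deg))

dn = np.array([[7500, 9800], [12400, 20100]], dtype=np.uint16)
# Metadata fields of the form REFLECTANCE_MULT_BAND_n,
# REFLECTANCE_ADD_BAND_n and SUN_ELEVATION supply these values.
reflectance = dn_to_toa_reflectance(dn, 2.0e-5, -0.1, 55.0)
print(reflectance.round(3))
```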

To radiometrically calibrate aerial images, a reflectance panel (sometimes called a black-gray-white grayscale board) with known reflectance values is placed in the study region during flights. Images of the reflectance panel are taken in the field before the flight using the same sensor that is mounted on the aircraft. These images are later used to adjust image color, which improves accuracy. This process is often called "white balance adjustment."

For radiometric correction of images, an empirical line method is used in which a relationship is established between image DN values and the spectral reflectance of the panel and of the crop canopy measured with a spectroradiometer (Haghighattalab et al., 2016; Shi et al., 2016). Because reflectance measurements are sensitive to environmental conditions, such as cloudy versus clear sky and windy versus calm periods (Lord et al., 1985), and to sun zenith angle (de Souza et al., 2010), it is recommended to take field measurements and images around solar noon with clear skies during calm periods. The most accurate approach for radiometric correction of remote sensed data requires measuring the radiance of light on a continuous basis while collecting images.
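A minimal sketch of the empirical line method follows, assuming three reference panels with lab-measured reflectance; all DN and reflectance values are hypothetical:

```python
# Minimal sketch: empirical line method. Fit a linear relationship
# between the image DN of reference panels and their known reflectance,
# then apply it to every pixel in the band. Values are hypothetical.
import numpy as np

panel_dn = np.array([18.0, 96.0, 205.0])          # black, gray, white
panel_reflectance = np.array([0.03, 0.35, 0.87])  # lab-measured values

# Least-squares line: reflectance = gain * DN + offset
gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)

band_dn = np.array([[40, 75], [150, 220]], dtype=np.float64)
band_reflectance = np.clip(gain * band_dn + offset, 0.0, 1.0)
print(f"gain={gain:.4f}, offset={offset:.4f}")
print(band_reflectance.round(3))
```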

Figure 7. General framework for radiometric correction of remote sensed imagery.

Summary

The quality of remote sensed images is influenced by various factors, including the GPS receiver integrated into sensors, sensor position and viewing angle, time of day when images are acquired, and the type of sensor used for image acquisition. Because information products derived from high-quality remote sensed images have the potential to improve the application of agricultural inputs while enhancing crop and farm efficiency, careful attention to the aforementioned details is required when processing, analyzing and interpreting images. Quality imagery is imperative to ensure that derived information is accurate.

References

  • de Souza, E. G., Scharf, P. C., & Sudduth, K. A. (2010). Sun Position and cloud effects on reflectance and vegetation indices of corn. Agronomy Journal, 102(2), 734-744.
  • Haghighattalab, A., Pérez, L. G., Mondal, S., Singh, D., Schinstock, D., Rutkoski, J., et al. (2016). Application of unmanned aerial systems for high throughput phenotyping of large wheat breeding nurseries. Plant Methods, 12(1), 1.
  • Lord, D., Desjardins, R. L., & Dube, P. A. (1985). Influence of wind on crop canopy reflectance measurements. Remote Sensing of Environment, 18(2), 113-123.
  • Shi, Y., Thomasson, J. A., Murray, S. C., Pugh, N. A., Rooney, W. L., Shafian, S., et al. (2016). Unmanned aerial vehicles for high-throughput phenotyping and agronomic research. PLoS ONE, 11(7), e0159781.
  • Stavrakoudis, D. G., Dragozi, E., Gitas, I. Z., & Karydas, C. G. (2014). Decision fusion based on hyperspectral and multispectral satellite imagery for accurate forest species mapping. Remote Sensing, 6(8), 6897-6928.
  • U.S. Geological Survey (USGS): Landsat Missions – Calibration. Accessed at usgs.gov/landsat-missions/landsat-calibration-parameter-files (August 2023).
Originally posted Mar 30, 2017.