A raw AVHRR image is converted to a projected image by using a model that maps from line/sample location in satellite perspective to geographic latitude/longitude coordinates. An image that has been projected using this model, called a systematically corrected image, may have slight errors because the location and attitude of the satellite are not known exactly.
Navigation refers to the geo-registration of an AVHRR image using ground control to correct the model. There are two methods of automatic navigation: one uses a vector source as the reference, and the other uses a registered base image. The navigation process consists of a series of modular steps that result in a registered, projected image. Processing is done in LAS, with some specialized functions developed for AVHRR processing.
The vector method of navigation requires three inputs: an AVHRR image, a tie point selection file, and a vector reference, such as the Digital Chart of the World (DCW) database. The AVHRR image is in LAS image format with the usual IMG and DDR files. It also has an ADDR (AVHRR DDR) associated file that contains parameters for the model. The tie point selection file contains a list of points that identify features in a general region of the world referenced by latitude and longitude in geographic degrees and elevation in meters.
The navigation process begins with the production of "chip" images. A chip is a 64x64 image in satellite perspective centered at a tie point. Using the satellite model, each point in the tie point file is converted to a line/sample coordinate to determine whether it falls within the AVHRR image. If it does, a chip image is generated. The appropriate coastlines and inland shorelines are extracted from the DCW and rasterized into the satellite-perspective chip. An output merged tie point file is created linking each chip with the corresponding window in the AVHRR image. The tie point's line/sample location in the chip image (the "reference" coordinate) and in the original AVHRR image (the "search" coordinate) are stored in the merged tie point file, along with the geographic coordinates of the tie point, the elevation, the name of the DCW chip image, and the size of the chip. For a typical AVHRR image, more than 200 chip images may be generated, depending on the location of the image and the density of tie points in that area.
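The in-scene test at the heart of chip generation, checking whether a 64x64 window centered at the tie point's predicted line/sample lies entirely within the AVHRR image, can be sketched as follows (the function name and rounding behavior are illustrative, not taken from LAS):

```python
CHIP_SIZE = 64

def chip_window(line, sample, nl, ns, size=CHIP_SIZE):
    """Return the (start_line, start_sample) of a size x size chip window
    centered at the given line/sample coordinate, or None if the window
    would fall outside an nl x ns image."""
    half = size // 2
    sl = int(round(line)) - half
    ss = int(round(sample)) - half
    # Reject tie points whose full window does not fit in the image.
    if sl < 0 or ss < 0 or sl + size > nl or ss + size > ns:
        return None
    return sl, ss
```

Only tie points for which this test succeeds get a rasterized DCW chip and an entry in the merged tie point file.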
The following is an example of a DCW chip and the corresponding AVHRR image window in satellite perspective:
[Figure: vector chip image and corresponding AVHRR image window]
The next step is correlation. For each point in the merged tie point file, the indicated window from the AVHRR image and the chip image are matched using edge correlation to determine the exact translational alignment of the DCW ("reference") chip relative to the AVHRR ("search") window. Correlation is performed as follows. An edge extraction filter converts the reference chip and the search window into binary edge images. An array of edge density values is compiled for each alignment of the reference chip relative to the search window. The unnormalized sums of the edge-pixel coincidences for every possible alignment of the search window and reference chip are then computed, and these results are used to compute normalized cross-correlation values. Next, the alignment at which the correlation value is maximum is found. If the maximum correlation value is less than a specified minimum, lies too near the edge of the search window, or is matched by a second maximum of comparable value, no correlation between the reference chip and search window is declared. If a correlation is found, a quadratic surface is fit to the correlation values in the neighborhood of the maximum, and the fractional-pixel coordinates of the maximum are computed. These coordinates are then converted to be relative to the full search and reference images. The output of this process is a tie point location file that contains the location of each tie point in the original AVHRR image as well as the location adjusted by the correlation process. Generally, a correlation is found for fewer than half of the points due to cloud cover.
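A minimal sketch of the correlation step, assuming the reference chip and search window have already been converted to binary edge images. The normalization shown (coincidence count divided by the geometric mean of the two edge counts) is one common choice, not necessarily the exact LAS formula; a separable parabolic fit stands in for the quadratic-surface fit, and the comparable-second-maximum test is omitted for brevity:

```python
import numpy as np

def correlate_edges(reference, search, min_corr=0.3):
    """Find the subpixel alignment of a binary edge chip within a larger
    binary edge search window, or return None if the match is rejected."""
    rn, rs = reference.shape
    sn, ss = search.shape
    nrows, ncols = sn - rn + 1, ss - rs + 1
    corr = np.zeros((nrows, ncols))
    nref = reference.sum()                            # edge pixels in the chip
    for i in range(nrows):
        for j in range(ncols):
            win = search[i:i + rn, j:j + rs]
            hits = np.logical_and(reference, win).sum()  # edge-pixel coincidences
            dens = win.sum()                             # edge density at this alignment
            if nref and dens:
                corr[i, j] = hits / np.sqrt(nref * dens)  # normalized correlation
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    if corr[i, j] < min_corr:
        return None                          # correlation peak too weak
    if i in (0, nrows - 1) or j in (0, ncols - 1):
        return None                          # peak too near the window edge
    # A parabolic fit through the peak and its neighbors gives the
    # fractional-pixel location of the maximum.
    di = 0.5 * (corr[i - 1, j] - corr[i + 1, j]) / \
         (corr[i - 1, j] - 2 * corr[i, j] + corr[i + 1, j])
    dj = 0.5 * (corr[i, j - 1] - corr[i, j + 1]) / \
         (corr[i, j - 1] - 2 * corr[i, j] + corr[i, j + 1])
    return i + di, j + dj
```

The returned offset is the chip's position within the search window; adding the window's origin converts it to full-image coordinates for the tie point location file.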
The points that have successfully correlated are then automatically edited to remove points that exceed defined error limits. There are two reasons for doing this: first, to remove any errors that occurred in correlation; and second, to reduce the number of points used and thus speed up the navigation process. First, the automatic editor performs a least-squares fit of the correlated points to a bivariate polynomial, and all points with an RMSE greater than 0.75 are removed. Next, a distance method of editing further refines the points: the mean distance between the original and adjusted coordinates is calculated, and all points whose distance falls outside +/-1.50 times the mean distance are removed.
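The two-stage editor might look like this in outline. The polynomial degree (first-degree bivariate), the per-point residual as the RMSE measure, and the reading of the distance test as |d - mean| <= 1.5 x mean are assumptions made for the sketch:

```python
import numpy as np

def edit_points(orig, adj, max_resid=0.75, dist_factor=1.5):
    """Two-stage automatic edit of correlated tie points."""
    orig = np.asarray(orig, float)   # (n, 2) line/sample before correlation
    adj = np.asarray(adj, float)     # (n, 2) line/sample after correlation
    # Stage 1: least-squares fit of the adjusted coordinates to a
    # first-degree bivariate polynomial of the originals; points whose
    # fit residual exceeds max_resid are removed.
    A = np.column_stack([orig[:, 0], orig[:, 1], np.ones(len(orig))])
    coef, *_ = np.linalg.lstsq(A, adj, rcond=None)
    resid = np.linalg.norm(A @ coef - adj, axis=1)
    keep = resid <= max_resid
    orig, adj = orig[keep], adj[keep]
    # Stage 2: distance editing; points whose original-to-adjusted
    # distance differs from the mean distance by more than dist_factor
    # times that mean are removed.
    d = np.linalg.norm(adj - orig, axis=1)
    keep = np.abs(d - d.mean()) <= dist_factor * d.mean()
    return orig[keep], adj[keep]
```

A consistent set of points (all shifted by roughly the same amount) passes both stages untouched, while a gross correlation blunder is caught by the polynomial fit.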
The next step calculates attitude and altitude corrections for the satellite model using a Least-Median-of-Squares fit of the error in the correlated tie points. At least 8 points must remain from the set of correlated and edited points in order to calculate corrections. The points are grouped five at a time to calculate candidate correction planes in the line and sample dimensions. The pair of planes that has the smallest median residual when applied to all the points, and whose defining points are well distributed around the image, is selected as the initial correction. A residual to these planes is then calculated for each point, and all points with a residual less than 0.8 are used to calculate a new set of correction planes defined by plane coefficients as follows:
    deltaline   = linecoef[0]   * line + linecoef[1]   * sample + linecoef[2]
    deltasample = samplecoef[0] * line + samplecoef[1] * sample + samplecoef[2]

These plane coefficients are converted to roll, pitch, yaw, and altitude coefficients. The pitch error (error in the line direction) is caused by the attitude of the satellite and is indicated by linecoef[0] and linecoef[2]. Since the pitch error represents a change in time, it also causes an error in the sample direction (roll) due to the rotation of the earth during that time change. Roll error is likewise caused by the attitude of the satellite and is related to samplecoef[0] and samplecoef[2]. Yaw error is caused by the satellite's attitude and is related to linecoef[1]. Finally, the altitude of the satellite varies, and this error is indicated by samplecoef[1]. These coefficients refine the satellite model and are stored in the image's associated ADDR file.
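The Least-Median-of-Squares plane fit described above can be sketched as follows. Exhaustively trying 5-point subsets (feasible only for modest point counts) and omitting the well-distributed test are simplifications of the procedure in the text:

```python
import numpy as np
from itertools import combinations

def fit_correction_planes(points, errors, inlier_tol=0.8):
    """Least-Median-of-Squares fit of the line/sample correction planes.
    points: (n, 2) tie point line/sample coordinates; errors: (n, 2)
    measured (deltaline, deltasample) at each point.  Returns
    (linecoef, samplecoef), each [c0, c1, c2] as in the equations above."""
    pts = np.asarray(points, float)
    err = np.asarray(errors, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    best_coef, best_med = None, np.inf
    # Fit planes to each 5-point subset and keep the pair with the
    # smallest median residual over all points.
    for subset in combinations(range(len(pts)), 5):
        rows = list(subset)
        coef, *_ = np.linalg.lstsq(A[rows], err[rows], rcond=None)
        med = np.median(np.linalg.norm(A @ coef - err, axis=1))
        if med < best_med:
            best_coef, best_med = coef, med
    # Refit using every point within the inlier tolerance of the
    # initial planes.
    resid = np.linalg.norm(A @ best_coef - err, axis=1)
    inliers = resid < inlier_tol
    coef, *_ = np.linalg.lstsq(A[inliers], err[inliers], rcond=None)
    return coef[:, 0], coef[:, 1]    # linecoef, samplecoef
```

Because the median is insensitive to a minority of bad points, a few miscorrelated tie points cannot pull the correction planes the way they would in an ordinary least-squares fit.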
A geometric mapping grid is then generated to define a transformation from the AVHRR image's current satellite perspective to the desired output projection. The grid is generated using the refined satellite model and is applied to the image to create a map-projected, registered image. The registered image is accurate to within 1.3 kilometers of the DCW reference.
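A mapping grid of this kind stores input-space coordinates only at sparse output-space nodes; pixels between nodes are mapped by interpolation. A sketch of a grid lookup using bilinear interpolation (the function name and grid layout are illustrative, not the LAS grid format):

```python
import numpy as np

def grid_lookup(out_l, out_s, node_l, node_s, in_l, in_s):
    """Map an output-space (line, sample) to input-space coordinates by
    bilinear interpolation of a sparse mapping grid.  node_l/node_s give
    the output-space node positions; in_l/in_s hold the input-space
    coordinates stored at each node (shape len(node_l) x len(node_s))."""
    # Locate the grid cell containing the output pixel.
    i = np.searchsorted(node_l, out_l, side="right") - 1
    j = np.searchsorted(node_s, out_s, side="right") - 1
    i = min(max(i, 0), len(node_l) - 2)
    j = min(max(j, 0), len(node_s) - 2)
    # Fractional position within the cell.
    fl = (out_l - node_l[i]) / (node_l[i + 1] - node_l[i])
    fs = (out_s - node_s[j]) / (node_s[j + 1] - node_s[j])

    def interp(z):
        return ((1 - fl) * (1 - fs) * z[i, j] + (1 - fl) * fs * z[i, j + 1]
                + fl * (1 - fs) * z[i + 1, j] + fl * fs * z[i + 1, j + 1])

    return interp(in_l), interp(in_s)
```

Storing the model's output only at grid nodes avoids evaluating the full satellite model at every output pixel while reproducing any locally affine mapping exactly.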
The base image method of navigation requires three inputs: an AVHRR image in raw satellite space, a tie point file, and a registered reference base image in some projection. In this method, small windows are extracted from the base image and warped into satellite space to create chip images. A merged tie point file is created linking each chip with the corresponding window in the AVHRR image. The two images are correlated at each point using grey-level correlation. The correlated points are edited only with a least-squares fit to a bivariate polynomial, removing all points with an RMSE greater than 0.75.
Attitude corrections are calculated, a grid is created, and the image is reprojected as in the vector method.