ISIS 2 Documentation
This section provides an overview of the processing stages necessary to produce
Digital Image Models (DIMs) of planetary surfaces. A summary of the processing
steps described in this section is shown in Figure 1. A DIM comprises a
mosaic of digital images that have undergone radiometric, geometric, and
photometric rectification. A DIM provides a uniform cartographic portrayal of
a planetary surface mapped in the Sinusoidal Equal-Area projection (although
production of other map projections is available) allowing an investigator to
explore the geophysical and compositional properties of a planetary surface on
a global or regional scale (Batson, 1987; Batson and Eliason, 1995; McEwen,
et al., 1991a). In essence the processing stages prepare the data for
remote sensing investigations by producing spectrally and spatially
co-registered images with pixel values in radiometric units. Once a DIM has
been produced, the ISIS software can be used to enhance, extract, and display
spectral and spatial signatures of a planetary surface. Alternatively, the
ISIS system can output the DIM in "unlabeled" form for ingestion into
other image processing systems.
Raw planetary images are processed in five stages or "levels". All
corrections made during these stages have some degree of uncertainty; the
processing sequence was designed to proceed from corrections with the highest
probability of accuracy to those with the lowest. Intermediate stages of
processing are usually preserved, at least temporarily, so that an analyst can
return to them later to inspect for processing problems or for future
reprocessing.
DIMs are produced by the ISIS system via specific programs and processing steps
that are often unique to the properties of a given imaging system. For example,
the Galileo/NIMS instrument has significantly different properties and
operating characteristics than the Clementine Mission imaging systems. Thus
separate programs and processing requirements may be applicable to data from
different missions but the processing concepts remain the same. For a
description of the detailed processing associated with a particular imaging
system, refer to the "Mission-Specific Image Processing" section.
Producing DIMs from raw planetary images requires complex image processing
procedures and a knowledge of the ISIS system. A significant investment in time
will be required for a planetary investigator to learn the ISIS procedures
necessary to produce DIMs but the reward will be the ability to explore the
geophysical and spectral properties of a planetary surface. For those willing to
invest the time necessary to learn the ISIS system, customer support is
available from the ISIS developers. Please visit the Isis Support Center
(URL: http://isis.astrogeology.usgs.gov/IsisSupport/) for questions on the ISIS system.
To produce DIM data products, ISIS processing begins with raw
planetary image data as acquired by the spacecraft. Raw images,
traditionally called Engineering Data Records (EDR), contain the
blemish artifacts and the radiometric and geometric characteristics
of unprocessed and uncorrected data. Raw EDRs provide the best
starting points for post-mission data processing because the most
current correction procedures can be applied. Experience has shown
that radiometrically and geometrically corrected images generated
during the course of active mission operations are often inadequate
for producing cartographic-quality DIMs. Spacecraft navigation
and camera pointing data derived during active mission operations
often contain large errors and the sensitivity of the imaging
systems may not be properly characterized.
NASA investigators can obtain digital image data for virtually
all planetary missions by contacting the Planetary Data System
(PDS) Imaging Node (Eliason, et al., 1996). The Imaging
Node acts as NASA's curator for the archives of digital images
acquired by NASA flight projects. The archives are distributed
on CD-ROM multi-volume sets as well as through World-Wide-Web
on-line services. Please contact Eric Eliason (email@example.com)
or Susan LaVoie (Sue_LaVoie@iplmail.jpl.nasa.gov) for information
on how to obtain the planetary image data collections. Visit the
Imaging Node home page on the World-Wide-Web (http://www-pdsimage.jpl.nasa.gov)
for information on how to obtain the planetary image data collections
through the Node's on-line services.
Investigators outside the NASA community can retrieve planetary
images from the Imaging Node's on-line services or can acquire
the CD-ROM volume sets by contacting the National Space Science
Data Center (NSSDC). For more information, please contact Nathan
L. James (firstname.lastname@example.org) or visit the NSSDC World-Wide-Web site.
The Imaging Node and the Geosciences Node (Guinness, et al. 1996) of the PDS maintain on-line search and retrieval catalog systems for the archives of planetary images. The on-line catalogs allow users to search for images of interest using a wide variety of search criteria. For example, images can be selected based on mission, planet, geographic position, observational geometry, photometric parameters, camera operating modes, and other image observation parameters. The catalog systems can be accessed through World-Wide-Web services by visiting the Imaging Node home page (see URL site shown above) or the Geosciences Node page (http://wwwpds.wustl.edu).
ISIS processes data in five stages or levels starting with EDR (Engineering Data Record) images. The level of the data is defined by the output product of a particular level. In other words, level 0 data has completed the level 0 processing steps, level 1 data has completed the level 1 processing steps, and so on. The first level of processing, level 0, prepares the data for processing by ISIS. The images are converted to ISIS format and ancillary data such as viewing geometry are added to the labels of the image file. Level 1 processing applies radiometric corrections and removes artifacts from the image. Level 2 performs geometric processing to remove optical distortions and convert the image geometry to a standard map projection. Level 3 performs photometric processing for normalizing the sun-viewing geometry of an image scene. Level 4 performs mosaicking of individual images to create global or regional views of the planet surface.
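The level sequence can be sketched as a simple pipeline. The function names below are illustrative placeholders, not actual ISIS program names; each stands in for the mission-specific programs described later in this section.

```python
# Placeholder stages standing in for the mission-specific ISIS programs.
def level0_ingest(image):       # convert to ISIS cube, attach labels and SPICE data
    return image

def level1_radiometric(image):  # remove artifacts, apply radiometric calibration
    return image

def level2_geometric(image):    # correct distortions, transform to map projection
    return image

def level3_photometric(image):  # normalize the sun-viewing geometry
    return image

def level4_mosaic(images):      # mosaic the processed images into a DIM
    return images[0] if images else None

def make_dim(raw_images):
    """Run each raw image through levels 0-3, then mosaic them (level 4)."""
    processed = [level3_photometric(level2_geometric(
                 level1_radiometric(level0_ingest(img))))
                 for img in raw_images]
    return level4_mosaic(processed)
```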
3.1 Level 0 - Data Ingestion
The Level 0 processing step prepares the raw image data and associated
meta-data for processing by the ISIS system. Level 0 processing
usually consists of two program steps. The first step reads the
format of the raw image and converts it to an ISIS cube file.
Additionally this step will extract the meta-data from the input
image labels for inclusion into the ISIS label. The meta-data
may contain information such as the instrument operating modes,
temperature of the camera focal plane, UTC time of observation,
and other information necessary to rectify an image. The second
step extracts navigation and pointing data (or "SPICE"
kernel data) for inclusion into the ISIS cube label (see "Level
2 - Geometric Processing" for a description of the SPICE
kernel data). SPICE kernel data, included as an integral part of
the ISIS system, are obtained from the Navigation and Ancillary
Information Facility (NAIF) Node of the PDS (Acton, 1996).
Contact the NAIF World-Wide-Web site for more information about
the SPICE system (http://pds.jpl.nasa.gov/naif.html).
The meta-data necessary to describe the operating modes of an
instrument and viewing conditions of an observation are stored
in the ISIS label and are carried along with the image during
the various processing steps. These ancillary data can be viewed
by a user by printing the ISIS labels but, more importantly, programs
can interrogate the labels to extract parameters needed to process
an image. By storing and accessing ancillary data in this organized
way, highly automated procedures can be applied for systematic
processing.
3.2 Level 1 - Removal of Image Artifacts and Radiometric Correction
The next level of processing, Level 1, performs radiometric correction and data clean-up on an image. Level 1 consists of a series of programs to correct or remove image artifacts such as 1) camera shading inherent in imaging systems, 2) artifacts caused by minute dust specks located in the optical path, 3) microphonic noise introduced by the operation of other instruments on the spacecraft during image observations, and 4) data drop-outs and spikes due to missing or bad data from malfunctioning detectors or missing telemetry data. Level 1 processing results in an "ideal" image that would have been recorded by a camera system with perfect radiometric properties (although in practice residual artifacts and camera shading remain). The density number (DN) values of a radiometrically corrected image are proportional to the brightness of the scene. The radiometric and geometric properties of the Viking Orbiter cameras (Klassen, K.P., et al., 1977; Benesh, M., and Thorpe, T., 1976), Voyager cameras (Benesh, M., and Jepsen, P., 1978; Danielson, G.E., et al., 1981), Galileo/NIMS instrument (Bailey, G., 1979; Carlson, R.W., et al., 1992), and the Clementine cameras (Kordas, J.F., et al., 1995; Priest, R.E., et al., 1995) are well documented.
3.2.1 Removal of Camera Reseau Marks
For vidicon cameras such as the Viking and Voyager imaging systems,
Level 1 processing locates and cosmetically removes reseau marks.
(CCD imaging systems do not contain reseau marks due to the stable
geometry of the optical path and data readout). Reseau marks were
etched on vidicon image tubes to record geometric distortion on
data readout and they appear as black dots on an image distributed
in a regular pattern. The found reseau locations are stored in
the ISIS label for access by the software that corrects the optical
distortions (Level 2 processing). After locating and storing the
reseau locations, they are cosmetically removed from the image.
The ISIS program for finding and removing reseau marks reads a table of their nominal locations and shapes. The program searches for the reseau marks in the neighborhood of the nominal position. Once a reseau is found the program uses the shape of the reseau to determine which pixels to cosmetically correct by replacing them with a weighted average of the neighborhood pixels that are not part of the reseau.
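The search-and-replace step above can be sketched for a single reseau mark. The function and its parameters are illustrative, not the actual ISIS program interface; it assumes the reseau's shape is given as a boolean mask centered on the mark.

```python
import numpy as np

def remove_reseau(img, nominal, shape_mask, search=5):
    """Locate and cosmetically remove one reseau mark (illustrative sketch).

    img        -- 2-D image array (a corrected copy is returned)
    nominal    -- (row, col) nominal reseau position from the table
    shape_mask -- boolean array marking reseau pixels, centred on the mark
    search     -- half-width of the window searched around the nominal position
    """
    out = img.copy()
    r0, c0 = nominal
    h, w = shape_mask.shape
    # Search for the reseau (a dark dot) near its nominal position.
    win = img[r0 - search:r0 + search + 1, c0 - search:c0 + search + 1]
    dr, dc = np.unravel_index(np.argmin(win), win.shape)
    r, c = r0 - search + dr, c0 - search + dc
    # Replace reseau pixels with the average of the non-reseau pixels
    # in the surrounding patch (a simple unweighted average here).
    top, left = r - h // 2, c - w // 2
    patch = out[top:top + h, left:left + w]
    patch[shape_mask] = patch[~shape_mask].mean()
    return out, (r, c)   # corrected image and the found reseau location
```

The found location `(r, c)` would be stored in the ISIS label for later use by the Level 2 distortion correction.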
3.2.2 Removal of Image Blemishes
Image blemishes result from a variety of conditions including
1) telemetry data dropouts or transmission errors, 2) malfunctioning
or dead detectors, 3) minute dust specks located in the optical
path or on the focal plane array, and 4) coherent noise caused
by spurious electronic signals from the operation of instruments
onboard the spacecraft. There are three categories of image blemishes:
1) fixed-location blemishes, 2) randomly occurring blemishes,
and 3) coherent "noise" blemishes.
Fixed-location blemishes are the easiest to identify and correct.
These blemishes always occur at the same locations in the image
array; their positions can be determined a priori and stored in
a table for reference by a correction program. Fixed-location
blemishes can be cosmetically corrected by replacing the bad pixels
with the weighted average of the unaffected neighborhood pixels.
Fixed-location blemishes can result from malfunctioning or dead
detectors or minute dust specks located on the focal plane array.
Randomly occurring blemishes result from data transmission errors
causing data bits to be altered at random intervals in the image.
The random noise produces discrete, isolated pixel variations
or "spikes" and gives an image a "salt-and-pepper"
appearance. Additionally, telemetry drop-outs can cause portions
of an image to be completely missing. The ISIS system uses data
filtering techniques that recognize missing or anomalous data
pixels and replaces these data points with a weighted average
of the unaffected neighborhood pixels. An adaptive box-filtering
technique (McDonnell, 1981) has been demonstrated to be
an effective tool for removal of random noise from digital images
(Eliason and McEwen, 1990).
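A simplified sketch of such an adaptive box filter follows: any pixel that deviates from its local neighborhood statistics by more than a tolerance is replaced with the neighborhood mean. The threshold logic here is an illustrative simplification; the McDonnell (1981) algorithm is more elaborate.

```python
import numpy as np

def adaptive_box_filter(img, box=3, tol=3.0):
    """Replace 'spike' pixels that deviate from the local mean of their
    neighbors by more than tol standard deviations (illustrative sketch)."""
    pad = box // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = img.astype(float).copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            win = padded[r:r + box, c:c + box]
            centre = win[pad, pad]
            # exclude the centre pixel from the local statistics
            neigh = np.delete(win.ravel(), pad * box + pad)
            mean, std = neigh.mean(), neigh.std()
            if std == 0.0:
                if centre != mean:      # isolated spike on a flat background
                    out[r, c] = mean
            elif abs(centre - mean) > tol * std:
                out[r, c] = mean
    return out
```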
Coherent noise can be introduced by spurious electronic signals produced by the operation of instruments onboard the spacecraft during image observations. The spurious signals interfere with the electronics of the imaging system causing coherent noise patterns to be electronically "added" to the images. For example, the shuttering electronics of the Viking cameras introduced a spurious "herring-bone" pattern at the top and bottom of the image. A noise-removal algorithm designed to track the phase and amplitude of the herring-bone pattern can remove most of this noise (Chavez and Soderblom, 1974).
3.2.3 Camera Shading Correction and Radiometric Calibration
Vidicon cameras such as those carried on-board the Viking and
Voyager missions, and Charge Couple Device (CCD) cameras such
as on the Galileo and Clementine missions produce digital images
with the inherent artifact known as camera shading. Camera shading
results from the non-uniform brightness sensitivity across the
field-of-view of the imaging instrument.
Perhaps the best way to illustrate camera shading is to imagine
acquiring a digital image of a target of uniform brightness, say
a screen that has been painted a uniform shade of gray. If the
camera sensitivity across the field-of-view were ideal (and the
flat-field target exactly the same brightness everywhere), then
the acquired digital image would have the same DN value for all
the pixels in the image. However, because of the non-uniform brightness
sensitivity of the camera, the DN values of the resulting image
will vary throughout the image array (a typical camera may have
as much as 20% variation across the field-of-view). Camera shading
corrections compensate for this non-uniform sensitivity so that,
in our flat-field observation example, the radiometrically
corrected image would contain pixels of identical value.
In the simplest terms, shading correction consists of pixel-dependent
multiplicative (gain) and additive (dark-current) coefficients
applied to the image array as shown in the equation below:
O(i,j) = W0 * G(i,j) * (R(i,j) - DC(i,j)) / EXP

where:
  O(i,j)  = output corrected image value at pixel location i,j
  R(i,j)  = input raw image value at pixel location i,j
  G(i,j)  = multiplicative correction (gain) at pixel location i,j
  DC(i,j) = additive correction (dark-current) at pixel location i,j
  EXP     = exposure duration (integration time) of observation
  W0      = omega-naught normalization coefficient
The DC(i,j) term (dark-current) is equivalent to an image acquired
of a zero-radiance target (for example, an image of deep space
without stars). In practice the DC(i,j) coefficients are obtained
by acquiring images of deep space or images acquired with the
camera shutter closed so that no light enters the camera. After
applying a dark-current correction, an image of deep space would
result in a zero DN value for all pixels in the image array
(excluding residual noise).
The G(i,j) term (gain) is the multiplicative constant applied
to the image after the dark current has been subtracted. The gain
can be derived from flat-field observations under varying brightness
and exposure duration of the camera. These observations can be
made in a laboratory environment prior to launch of the spacecraft.
For the Voyager spacecraft, in-flight observations were acquired
of a flat-field target that was positioned in front of the cameras
prior to a planetary encounter. In-flight derivations for G(i,j)
can be acquired by averaging many near flat-field targets as was
done for the Clementine cameras. The G(i,j) term is normalized
with respect to a given control location (usually the center of
an image) so that the result of the correction at this control
position remains unchanged in DN value.
The EXP term is the exposure duration (integration time) of the
image observation. The W0 term (omega-naught) provides the normalization
to convert the image pixel values to standard radiometric units.
The shading correction equation shown above is highly simplified and in practice it may contain many more terms that describe the unique electronics design and characteristics of an imaging system. For example, the dark-current and gain coefficients may be time dependent because of the drift of the camera sensitivity throughout the course of the mission. The camera sensitivity is also dependent on the filter-wheel position, operating modes of the instrument, and temperature of the cameras further complicating the shading-correction equation. Additionally, the cameras may be non-linear at various brightness levels. Details of the shading correction can be found in the documentation for each program that performs radiometric corrections for a specific imaging instrument.
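The simplified correction equation can be applied directly as array arithmetic; a minimal sketch, omitting the time-, temperature-, and mode-dependent terms discussed above:

```python
import numpy as np

def shading_correct(raw, gain, dark, exposure, w0=1.0):
    """Apply the simplified shading-correction equation
        O(i,j) = W0 * G(i,j) * (R(i,j) - DC(i,j)) / EXP
    raw, gain, dark -- image-sized arrays (R, G, DC)
    exposure        -- integration time of the observation (EXP)
    w0              -- normalization to standard radiometric units (W0)
    """
    return w0 * gain * (raw - dark) / exposure
```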
3.2.4 Summary: Level 1 Output
The goal of Level 1 processing is to remove image artifacts introduced by the imaging instruments and to produce output data that can be accurately compared to those of other systems. The primary product of Level 1 processing is a radiometrically calibrated image whose pixel values have been converted from raw density numbers to radiometric units proportional to the brightness of a scene. Radiometric calibration forms the final step of Level 1 processing and it produces output values in units of scaled radiance (micro-watts/cm**2-micrometer-steradian). To compare radiance data to surface and sample reflectance spectra, the calibrated data are often normalized by a nominal solar spectrum to "I/F" (I-over-F) units, or scaled radiance/solar flux. I/F is defined as the ratio of the observed radiance to the radiance of a white screen, normal to the incident rays of the sun, at the sun-to-target distance of the observation. The value of I/F would equal 1.0 for an ideal 100% lambertian reflector with the sun and camera orthogonal to the planet's surface.
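Using the definition of I/F above, the conversion from calibrated radiance can be sketched as follows. The variable names are illustrative; the appropriate solar spectral irradiance depends on the filter bandpass of the instrument.

```python
import math

def radiance_to_iof(radiance, solar_flux_1au, dist_au):
    """Convert calibrated radiance to I/F (illustrative sketch).

    radiance       -- observed radiance in the filter bandpass
    solar_flux_1au -- solar spectral irradiance at 1 AU, same spectral units
    dist_au        -- sun-to-target distance in AU
    """
    # Solar irradiance falls off with the square of the sun distance;
    # a perfect white lambertian screen normal to the sun's rays has
    # radiance (F_sun / d^2) / pi, so I/F = pi * L * d^2 / F_sun.
    return math.pi * radiance * dist_au ** 2 / solar_flux_1au
```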
3.3 Level 2 - Geometric Processing
Producing DIMs requires geometric processing be performed on the
individual images that make up a DIM. The individual images are
geometrically transformed from spacecraft camera orientation to
a common map coordinate system of a specific resolution. Before
geometric transformation, images are first "tied" to
a ground control net or alternatively to each other. Tying images
together minimizes the spatial misregistration along the image
boundaries contained within a DIM mosaic.
Level 2 performs geometric processing which includes correcting camera distortions as well as transformation from image coordinates to map coordinates (Edwards, 1987). All geometric transformations are made simultaneously so that an image is resampled only once and resolution loss is minimal. In the creation of a DIM the user should select an output resolution slightly greater than the input image resolution so that the image is to some extent oversampled (i.e., the output image has more lines and samples than the original image). Optical distortions are measured from preflight calibration of the optical path of the imaging system. The image transformation is based on the original viewing geometry of the observation, relative position of the target, and the mathematical definition of the map projection.
3.3.1 SPICE Kernels
Several parameters are needed to describe the viewing geometry of spacecraft images so they can be geometrically processed. The parameters are organized according to the "SPICE kernel" concepts developed by the PDS Navigation and Ancillary Information Facility (NAIF) Node (Acton, C.H., 1995). SPICE is an acronym for Spacecraft and Planet ephemerides, Instrument, C-Matrix, and Event kernel. The S-kernel defines the absolute positions of the spacecraft and target body in the J2000 coordinate system. The P-kernel defines the planet body's physical and rotational elements (Davies, et al., 1983). For imaging instruments, the I-kernel provides information such as camera mounting alignment, focal lengths, field-of-view specifications, and optical distortion parameters. The C-kernel is the inertially referenced attitude (pointing) for the imaging system. The E-kernel provides information about the time-ordered events of the imaging system such as camera operating modes, instrument temperature, and exposure duration of the observations. The time of observation relates an image to the stored SPICE kernel data for imaging sequences. For instruments such as vidicon and CCD cameras the observation is considered an instantaneous event.
3.3.2 Camera Pointing Errors
Errors in camera pointing (C-matrix) cause images to be improperly
mapped to the body surface, resulting in spatial misregistration
of images in a mosaic as well as improper spectral registration
of images that make up a color observation set. In order to produce
cartographic quality DIMs it is often necessary to adjust the
C-matrix. In most cases the pointing errors (even small errors)
cause the largest problem in the accurate definition of the viewing
geometry. Even minute spacecraft movements caused by the operation
of other instruments or the repositioning of the radio antenna
and scan platform can cause uncertainties in the pointing of the
camera. The pointing errors translate to positional errors on
the planet surface. The further the spacecraft from the target
the larger the effect of the pointing errors on the positional
errors. There are several methods of improving the camera pointing.
3.3.3 Tying Images to a Ground Control Net
One method of adjusting the camera pointing involves tying images
to an established ground control net of the planet surface. The
ground control net is made up of a set of surface features that
have accurate latitude and longitude coordinates established by
photogrammetric triangulation (Davies and Katayama, 1983; and
Wu and Schafer, 1984). Ideally, the ground control net has
sufficient density on the surface to allow images of any area
on the planet to be tied to the ground control net. The method
for improving an image's C-matrix involves selecting a feature
in the image that corresponds to a ground control point. The line
and sample position of the feature and its latitude and longitude
coordinate defined by the ground control point are used to update
the C-matrix. The C-matrix is adjusted to force the position of
the image feature to match the latitude and longitude position
defined by the ground control net. The spacecraft position relative
to the planet is assumed to be correct. In practice several features
on an image are tied to the ground control net in order to minimize
the errors in measuring the line and sample positions.
The USGS has established digital global base maps of planetary bodies that can act as ground control for tying images to the surface. The global base maps are image mosaics whose individual images have gone through the rigorous process of ground point control. ISIS has display software that permits a user to tie an image to a base map. The image to be controlled and the base map are displayed simultaneously on the computer screen. The user can use the display cursor and mouse to mark the positions of common features (usually three or four control points) between the two images. Once the control points are established the software can update the C-matrix to force the pointing geometry to match the base map. Please visit the Isis Support Center (URL: http://isis.astrogeology.usgs.gov/IsisSupport/) to contact us for information on how to obtain global base maps for ground control point selection.
3.3.4 Tying Images to Each Other
Another method for improving camera pointing for images that make up a DIM is to tie the images to each other rather than to a ground control net. Images in a mosaic contain areas of common coverage (overlap between adjacent images). These areas of common coverage are used to tie image pairs together. In this method, control points are selected between images in overlap regions. ISIS has display software that permits a user to display two images simultaneously. The user can use the display cursor and mouse to mark the positions of common features. The feature's line and sample position on each image is recorded for later use in adjusting the C-matrix of the images. Typically, three or four points are selected between each image pair in order to minimize measurement errors of the line and sample positions of the selected features. Once all the images that make up a DIM have been tied to their respective neighbors, an ISIS program can be applied to the control points to determine updated C-matrices that minimize residual positional errors between neighboring images. The C-matrix of each image is adjusted so that the positional mismatch of the selected control features in the overlap areas is minimized. The C-matrix adjustment program has the capability of locking certain images to a fixed viewing geometry. This feature is useful when one or more of the images in the set have been tied to an established ground control net.
3.3.5 Autocorrelation Methods for Tying Images Together
An additional method for tying images to each other involves autocorrelation. This method works well for image pairs that have 1) sufficient overlap, 2) similar scale and sun illumination conditions, 3) adequate brightness variation in the image scene, and 4) good initial pointing information. (Clementine imaging works well for autocorrelation techniques.) The advantage of autocorrelation is that a program can automatically tie images together rather than performing this process by hand (as described in section 3.3.4). The autocorrelation program performs a digital comparison of the overlap areas of an image pair. The initial pointing information of the two images is used to determine a search area where surface features should match between the images. The image pixels are digitally differenced in the overlap areas and the sum of the absolute values of the differences (SUMDIF) is used as a measure of the quality of the image registration. If the images are well registered then SUMDIF will be small. If they are not well registered then SUMDIF will be larger. To determine the best registration of the images, the program performs a shifting procedure between image pairs and computes SUMDIF each time a shift is made. SUMDIF is computed for all permutations of line and sample shifts. The smallest SUMDIF determines the shift that produced the best registration. The relative line and sample positions of the pair are then stored and used to update the C-matrix as described in section 3.3.4. Image pairs that do not tie together well with autocorrelation should be hand tied.
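The SUMDIF search can be sketched as a brute-force loop over candidate line and sample shifts. This is a simplification that assumes identical scale; in practice the initial pointing information bounds the search window.

```python
import numpy as np

def best_shift(ref, mov, max_shift=3):
    """Find the (line, sample) shift of `mov` that minimizes the sum of
    absolute differences (SUMDIF) against `ref` over their common area
    (illustrative sketch of the autocorrelation registration)."""
    best, best_sum = (0, 0), None
    rows, cols = ref.shape
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            # overlapping region of ref and the shifted mov
            r0, r1 = max(0, dr), min(rows, rows + dr)
            c0, c1 = max(0, dc), min(cols, cols + dc)
            a = ref[r0:r1, c0:c1]
            b = mov[r0 - dr:r1 - dr, c0 - dc:c1 - dc]
            # normalize SUMDIF by overlap area so shifts compare fairly
            s = np.abs(a - b).sum() / a.size
            if best_sum is None or s < best_sum:
                best_sum, best = s, (dr, dc)
    return best
```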
3.3.6 Optical Distortions
The vidicon cameras used by the Viking and Voyager spacecraft
have electronic distortions similar in pattern to the optical
distortion in a film camera. These electronic distortions are
introduced because the scanning pattern of the electron beam used
to read out the charge-stored image on the vidicon is more "barrel-shaped"
than rectangular. Interactions between the charge on the photo-cathode
that represent the image itself and the electron beam produce
additional complex high-order distortions. The reseau locations
found in level 1 processing are used to correct for the optical
distortion. The reseau positions of the original image are mapped
to the undistorted positions on output. The image space between
the reseaus is mapped to the output using an interpolation scheme.
The optical distortions in CCD imaging systems are easier to correct because the distortions are invariant for all images acquired by the camera and there are no reseaus to identify and locate. Because optical distortions in modern CCD cameras are small, a simple n-th order two dimensional polynomial can satisfactorily model the relationship between distorted and undistorted space.
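As a sketch of such a model, a second-order 2-D polynomial can be fit by least squares to pairs of distorted and undistorted control coordinates. The functions below are illustrative, not the actual ISIS distortion-model interface.

```python
import numpy as np

def _basis(pts, order=2):
    """Monomial basis [1, y, y^2, x, x*y, x^2, ...] up to the given order."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([x ** i * y ** j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_distortion(distorted, undistorted, order=2):
    """Fit polynomial coefficients mapping distorted -> undistorted space."""
    A = _basis(distorted, order)
    cx, *_ = np.linalg.lstsq(A, undistorted[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, undistorted[:, 1], rcond=None)
    return cx, cy

def apply_distortion(coeffs, pts, order=2):
    """Map points through the fitted polynomial model."""
    cx, cy = coeffs
    A = _basis(pts, order)
    return A @ cx, A @ cy
```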
3.3.7 Geometric Transformation
The geometric transformation algorithm used by ISIS has been well described (Edwards, 1987) and is summarized here. The geometric transformation process is divided into two parts. The first part defines the geometric relationship between the original image and the projected image while the second part performs the actual transformation (i.e., part one defines how the pixels will be moved from the input image to the output image while part two actually moves the pixels). The separation of these two steps provides flexibility. There are several programs that are used to define geometric transformations. For example, the "nuproj" program is responsible for defining transformations between different map projections. Another program, "plansinu", is used to define the transformation from spacecraft camera orientation to the Sinusoidal Equal-Area map projection.
Yet a third program, "planorth", defines the transformation to the orthographic map projection. Even though different programs are used to define different types of transformations, there is only one program, "geom", that performs the transformation process. The geometric transformation program performs resampling by either nearest-neighbor or bilinear interpolation.
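Bilinear interpolation, one of the two resampling options, can be sketched as follows: the output value at a fractional input coordinate is a weighted average of the four surrounding pixels.

```python
import numpy as np

def bilinear(img, r, c):
    """Bilinearly interpolate img at a fractional (row, col) position
    (illustrative sketch of the resampling step)."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    r1 = min(r0 + 1, img.shape[0] - 1)      # clamp at the image edge
    c1 = min(c0 + 1, img.shape[1] - 1)
    fr, fc = r - r0, c - c0                 # fractional weights
    top = img[r0, c0] * (1 - fc) + img[r0, c1] * fc
    bot = img[r1, c0] * (1 - fc) + img[r1, c1] * fc
    return top * (1 - fr) + bot * fr
```

Nearest-neighbor resampling, by contrast, simply picks the closest input pixel; it preserves original DN values but produces blockier output.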
3.3.8 Map Projections
ISIS allows users to transform images to a variety of map projections
based on algorithms adopted by the USGS (Snyder, 1982).
When transforming images from spacecraft camera orientation to
that of a map coordinate system, the Sinusoidal Equal-Area projection
should always be used (the projection can be used over an entire
planet without being segmented into zones). Program "plansinu"
performs the operation. All DIMs are made and stored initially
in this projection and can be retransformed to other desired map
projections (with program "nuproj") as needed. The table
shown below lists the map projections that are supported by the
ISIS system.
Map Projections Supported by the ISIS System
3.3.9 Sub-pixel Color Registration
Pixel-to-pixel misregistration between images acquired through different spectral filters can be a major source of error in the spectral analysis and mapping of planetary soils. A series of ISIS programs have been developed that resample highly correlated images for co-registration to an accuracy of better than 0.2 pixels. These procedures are automated for color filter sets that are initially matched to within a few pixels. If sub-pixel coregistration is not performed on color sets then subsequent image enhancement processes, such as color ratio analysis, can cause "color fringe" artifacts in areas of high spatial frequency content.
3.4 Level 3 - Photometric Normalization
Photometric normalization is applied to images that make up a
DIM in order to balance the brightness levels among the images
that were acquired under different lighting conditions. To illustrate,
consider two images of the same area on the planet where one image
was acquired with the sun directly overhead and the second with
the sun lower to the horizon. The image with the higher sun angle
would be significantly brighter than the image with the low sun
angle. Photometric normalization of the two images would cause
them to be adjusted to the same brightness level.
Radiometrically calibrated spacecraft images measure the brightness
of a scene under specific angles of illumination, emission, and
phase. For an object without an optically significant atmosphere,
this brightness is controlled by two basic classes of information:
1) the intrinsic properties of the surface materials, including
composition, grain size, roughness, and porosity; and 2) variations
in brightness due to the local topography of the surface (McEwen,
1991b). Photometric normalization is effective only to the
extent that all geometric parameters can be modeled. In general
the local topography is not included in the model (i.e., the planetary
surface is treated as a smooth sphere). However, the illumination
geometry at each pixel certainly depends on local topography;
unless the topographic slope within a pixel is accurately known
and compensated, the photometric correction cannot be perfect.
Several photometric normalization models, described in McEwen
(1991b), are supported in the ISIS environment.
Photometric normalization is applied to an image by 1) computing the illumination, emission, and phase angle for each pixel in the image array, and 2) applying the photometric function with the computed angles to determine a multiplicative and additive correction for the pixel.
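As an illustration of step 2, the sketch below applies the Minnaert function (Minnaert, 1941), one of the models discussed in McEwen (1991b), to normalize calibrated pixel values to a fixed reference geometry. The exponent k, the reference angles, and the purely multiplicative form are illustrative assumptions, not ISIS defaults.

```python
import numpy as np

def minnaert_correction(dn, inc, emi, k=0.7, inc_ref=30.0, emi_ref=0.0):
    """Normalize calibrated DN values to a reference viewing geometry.

    Uses the Minnaert photometric function B = mu0**k * mu**(k - 1),
    where mu0 = cos(incidence) and mu = cos(emission). `inc` and `emi`
    are per-pixel angles in degrees (scalars or arrays). The values of
    k, inc_ref, and emi_ref here are illustrative only.
    """
    mu0 = np.cos(np.radians(inc))
    mu = np.cos(np.radians(emi))
    mu0_ref = np.cos(np.radians(inc_ref))
    mu_ref = np.cos(np.radians(emi_ref))
    model = mu0 ** k * mu ** (k - 1.0)          # predicted brightness, observed geometry
    model_ref = mu0_ref ** k * mu_ref ** (k - 1.0)  # predicted brightness, reference geometry
    return dn * (model_ref / model)             # multiplicative correction
```

A pixel imaged at the reference geometry is left unchanged, while a pixel imaged under a lower sun (larger incidence angle) is brightened, which is exactly the balancing behavior described above.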
3.5 Level 4 - Seam Removal and Image Mosaicking
3.5.1 Seam Removal
In spite of best efforts at radiometric calibration and photometric normalization, small residual discrepancies in image brightness are likely to remain. These brightness differences appear as "seams" in a mosaic. A method has been developed that adjusts the image brightness levels to better match the brightness along the boundaries of neighboring (overlapping) frames.
The seam-removal process is twofold. First, a program is run on all neighboring pairs of images to compare the brightness differences in their areas of overlap; this brightness information is stored in the labels of each image. A second program then extracts the brightness information for all images that make up the DIM and computes a correction factor (multiplicative and additive coefficients) for each image. Applying these correction factors to the images minimizes the residual brightness differences among them.
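The global-fit step can be sketched as a least-squares problem. For simplicity the sketch solves for additive offsets only (the ISIS programs also compute multiplicative coefficients), and the function and data layout are hypothetical.

```python
import numpy as np

def solve_offsets(n_images, overlaps):
    """Least-squares additive offsets that minimize seam mismatch.

    `overlaps` is a list of (i, j, mean_i, mean_j): the mean DN of
    images i and j measured inside their common overlap area. Solves
    for offsets o such that (mean_i + o[i]) == (mean_j + o[j]) in a
    least-squares sense. Sketch of the global-fit step only.
    """
    rows, rhs = [], []
    for i, j, mi, mj in overlaps:
        r = np.zeros(n_images)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(mj - mi)      # want o[i] - o[j] == mean_j - mean_i
    # Pin image 0 at zero so the system has a unique solution.
    r0 = np.zeros(n_images)
    r0[0] = 1.0
    rows.append(r0)
    rhs.append(0.0)
    offsets, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return offsets
```

With more overlaps than images the system is overdetermined, and the least-squares solution spreads any inconsistency across all seams rather than concentrating it at one boundary.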
3.5.2 Image Mosaicking
Compilation of an accurate digital mosaic of the individual images
is the final stage in the construction of a DIM. The final DIM
is built by first creating a blank (zero-filled) image that covers
the regional or global extent of the user's research area. The individual
images are then mosaicked into the initially blank DIM. The order
in which individual images are placed into the mosaic is an important
consideration. Because images are mosaicked one on top of the
other, images laid down first are overwritten, in their areas of
overlap, by subsequent images added to the mosaic.
It is therefore preferable to lay down the images with the lowest
data quality first, followed by those with the highest quality. In this
way the areas of image overlap contain the highest-quality data.
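The ordering rule above might be sketched as follows, assuming each frame carries a sortable quality rank; the names and data layout are illustrative, not the ISIS mosaicking program.

```python
import numpy as np

def build_mosaic(shape, frames):
    """Paste frames into an initially blank (zero) mosaic.

    `frames` is a list of (quality, row, col, array) tuples, where
    `quality` is any sortable rank (illustrative). Frames are pasted
    lowest-quality first, so in areas of overlap the later,
    higher-quality frames overwrite the earlier ones.
    """
    mosaic = np.zeros(shape)
    for quality, r, c, data in sorted(frames, key=lambda f: f[0]):
        h, w = data.shape
        mosaic[r:r + h, c:c + w] = data  # later (better) frames win overlaps
    return mosaic
```

Pixels never covered by any frame simply retain the blank (zero) value of the initial DIM array.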
The ISIS system has the ability to perform radiometric, geometric, and photometric processing of the planetary image data collections from NASA flight projects. A DIM, comprised of a mosaic of rectified images, provides a uniform cartographic portrayal of a planetary surface mapped in the Sinusoidal Equal-Area projection, allowing an investigator to explore the geophysical and spectral properties of a planetary surface on a global or regional scale.
References
Acton, C.H., 1996. Ancillary Data Services of NASA's Navigation
and Ancillary Information Facility, Planetary and Space Science,
Vol. 44, No. 1, pp. 65-70.
Bailey, G., 1979, Design and Test of the Near Infrared Mapping
Spectrometer (NIMS) Focal Plane for the Galileo Jupiter Orbiter
Mission, Proceedings of the Fifth Annual Seminar, Society of
Photo-Optical Instrumentation Engineers, pp. 210-216.
Batson, R.M., 1987, Digital Cartography of the Planets: New Methods,
Its Status, and Its Future, Photogrammetric Engineering and
Remote Sensing, Vol. 53, No. 9, pp. 1211-1218.
Batson, R.M. and Eliason, E.M., 1995. Digital Maps of Mars, Photogrammetric
Engineering and Remote Sensing, Vol. 61, No. 12, pp. 1499-1507.
Benesh, M., and Jepsen, P., 1978, Voyager Imaging Science Subsystem
Calibration Report, JPL Document 618-802, Jet Propulsion
Laboratory, Pasadena, California.
Benesh, M., and Thorpe, T., 1976, Viking Orbiter 1975 Visual Imaging
Subsystem Calibration Report, JPL Document 611-125, Jet
Propulsion Laboratory, Pasadena, California.
Carlson, R.W., Weissman, P.R., Smythe, W.D., Mahoney, J.C., 1992,
Near-Infrared Mapping Spectrometer Experiment on Galileo, Space
Science Reviews, Vol. 60, No. 1-4, pp. 457-502.
Chavez, P.S., and Soderblom, L.A., 1974, Simple High Speed Digital
Image Processing to Remove Quasi-Coherent Noise Patterns, Proceedings,
American Society of Photogrammetry Symposium, Washington,
DC, pp. 258-265.
Danielson, G.E., Kuperman P.N., Johnson, T.V., and Soderblom,
L.A., 1981, Radiometric Performance of the Voyager Cameras, Journal
of Geophysical Research, Vol. 86, No. A10, pp. 8683-8689.
Davies, M.E., Abalakin, V.K., Lieske, J.H., Seidelmann, P.K.,
Sinclair, A.T., Sinzi, A.M., Smith, B.A., and Tjuflin, Y.S., 1983,
Report on the IAU Working Group on Cartographic Coordinates and
Rotational Elements of the Planets and Satellites: 1982, Celestial
Mechanics, Vol. 29, pp. 309-321.
Davies, M.E., and Katayama, F.Y., 1983, The 1982 control net of
Mars, Journal of Geophysical Research, Vol. 88, No. B-9.
Edwards, K., 1987, Geometric Processing of Digital Images of the
Planets, Photogrammetric Engineering and Remote Sensing,
Vol. 53, No. 9, pp. 1219-1222.
Eliason, E. M., and McEwen, A. S., 1990. Adaptive Box Filters
for Removal of Random Noise from Digital Images, Photogrammetric
Engineering and Remote Sensing, Vol. 56, No. 4, pp. 453-456.
Eliason, E. M., LaVoie, S.K., and Soderblom, L. A., 1996. The
Imaging Node for the Planetary Data System, Planetary and Space
Science, Vol. 44, No. 1 pp. 23-32.
Guinness, E. A., Arvidson, R. E., and Slavney, S., 1996, The Planetary
Data System Geosciences Node, Planetary and Space Science,
Vol. 44, No. 1, pp. 13-22.
Klaasen, K.P., Thorpe, T.E., and Morabito, L.A., 1977, Inflight
Performance of the Viking Visual Imaging Subsystem, Applied
Optics, Vol. 16, No. 12, pp. 3158-3170.
Kordas, J.F., Lewis, I.T., Priest, R.E., White, W.T., Nielsen,
D.P., Park, H., Wilson, B.A., Shannon M.J., Ledebuhr, A.G., and
Pleasance, L.D., 1995, UV/Visible Camera for the Clementine Mission,
Proceedings SPIE, Vol. 2478, pp. 175-186.
McDonnell, M.J., 1981, Box-filtering techniques, Computer Graphics
and Image Processing, Vol. 17, pp. 394-407.
McEwen, A.S., Duck, B., and Edwards, K., 1991a, Digital Cartography
of Io, Lunar Planet. Sci. XXII, pp. 775-756.
McEwen, A.S., 1991b, Photometric Functions for Photoclinometry
and Other Applications, ICARUS, Vol. 92, pp. 298-311.
Minnaert, M., 1941, The Reciprocity Principle in Lunar Photometry,
Astrophys J., Vol. 93, pp. 403-410.
Priest, R.E., Lewis, I.T., Sewall, N.R., Park, H., Shannon, M.J.,
Ledebuhr, A.G., Pleasance, L.D., Massie, M.A., and Metschuleit, K.,
1995, Near-Infrared Camera for the Clementine Mission, Proceedings
SPIE, Vol. 2475, pp. 393-404.
Snyder, J.P., 1982, Map Projections Used by the U.S. Geological
Survey, Geological Survey Bulletin 1532, U.S. Government
Printing Office, 313 p.
Wu, S.S.C., and Schafer, F.J., 1984, Mars control network, Proceedings of the 50th Annual Meeting, American Society of Photogrammetry, Vol. 2, pp. 456-463.