Computer Science Department
School of Computer Science, Carnegie Mellon University
Generalized Image Matching by the Method of Differences
Bruce D. Lucas
July 1984 - Thesis
Image matching refers to aligning two similar images related by a transformation such as a translation, rotation, etc. In its general form, image matching is a problem of estimating the parameters that determine the transformation. These parameters may be a few global parameters or a field of parameters describing local transformations.
This thesis explores in theory and by experiment image matching by the method of differences. The method uses intensity differences between the images together with the spatial intensity gradient to obtain from each image point a linear constraint on the match parameters; combining constraints from many points yields a parameter estimate. The method is particularly suitable where an initial estimate of the match parameters is available. In such cases it eliminates search, which can be costly, particularly in multi-dimensional parameter spaces. Essential to the technique are smoothing, which increases the range of validity of the constraint provided by the gradient, and iteration, because the parameter estimate is an approximation. Smoothing increases the range of convergence but decreases accuracy, so a coarse-fine approach is needed. A theoretical analysis supports these claims and provides a means for predicting the algorithm's behavior.
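The core of the method can be illustrated in one dimension. Suppose G(x) ≈ F(x + h) for an unknown shift h. Each point supplies the linear constraint F'(x)·h ≈ G(x) − F(x); a least-squares combination of these constraints gives an estimate of h, which iteration refines. The sketch below, a hypothetical minimal implementation (not the thesis's code, and omitting the smoothing/coarse-fine stage), shows the idea:

```python
import numpy as np

def estimate_shift(f, g, n_iters=10):
    """Estimate h such that g(x) ~= f(x + h), by the method of
    differences: each point contributes the linear constraint
    f'(x + h) * dh ~= g(x) - f(x + h); constraints are combined
    by least squares, and iteration refines the estimate."""
    x = np.arange(len(f), dtype=float)
    h = 0.0
    for _ in range(n_iters):
        f_shifted = np.interp(x + h, x, f)   # resample F at x + h
        grad = np.gradient(f_shifted)        # spatial intensity gradient
        diff = g - f_shifted                 # intensity differences
        denom = np.sum(grad * grad)
        if denom == 0.0:
            break
        h += np.sum(grad * diff) / denom     # least-squares update
    return h
```

Because the gradient constraint is only a first-order approximation, the update is repeated until the residual differences stop shrinking; smoothing the images first (not shown) widens the range of shifts over which this linearization, and hence the iteration, converges.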
The first application considered here, optical navigation, requires matching two images to determine their relative camera positions. Here the match parameters are the position parameters, because they determine the image transformation. In many cases, such as robot guidance, the required initial estimate is available. Using information from points near edges minimizes error due to noise, specularity, etc. The relationship between the three-space geometry of the reference points and the stability of the algorithm is investigated. Optical navigation experiments using both real and synthetic images are presented. They support the claims of the theoretical analysis and demonstrate a range of convergence and accuracy adequate for many tasks.
The second application, stereo vision, is a problem of determining a field of local parameters, namely the distance values. Constraints from the neighborhood of each point contribute to its distance estimate. Experiments on both real and synthetic data provide encouraging results.
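The field-of-parameters case can be sketched in the same spirit: instead of one global shift, each point carries its own displacement (a proxy for disparity, and hence distance), estimated from the gradient constraints pooled over a small neighborhood. The following 1-D sketch is a hypothetical illustration of that pooling, not the thesis's implementation; the window size and iteration count are arbitrary assumptions:

```python
import numpy as np

def estimate_disparity(left, right, half_win=5, n_iters=5):
    """Estimate a per-point shift d(x) with right(x) ~= left(x + d(x)).
    Each point's gradient constraint is summed over a window around
    it, so neighborhood constraints contribute to each estimate."""
    x = np.arange(len(left), dtype=float)
    d = np.zeros(len(left))
    window = np.ones(2 * half_win + 1)
    for _ in range(n_iters):
        warped = np.interp(x + d, x, left)   # resample left by current d
        grad = np.gradient(warped)           # spatial intensity gradient
        diff = right - warped                # intensity differences
        # pool numerator and denominator of the least-squares update
        # over each point's neighborhood
        num = np.convolve(grad * diff, window, mode="same")
        den = np.convolve(grad * grad, window, mode="same")
        d += np.where(den > 1e-8, num / den, 0.0)
    return d
```

Points whose neighborhoods lack intensity gradient (den near zero) receive no update, which mirrors the observation that constraints are only informative where the image has local structure, e.g. near edges.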