Seabeam: theory to practice

I provide here a more fundamental theory for dealing with the Seabeam data. I originally approached the data in this more fundamental way, but with time I realized that I paid a high price in code complexity, computational speed, and reliability. The basic problem is that the elegant theory requires a good starting model, which can only come from the linearized theory. I briefly recount the experience here because the fundamental theory is interesting, and because in other applications you will face the challenge of sorting out the theoretically fundamental features from the practically essential ones.

The linear-interpolation operator carries us from a uniform mesh to irregularly distributed data. Fundamentally, we seek to solve the inverse problem and go in the other direction. A nonlinear approach to filling in the missing data is suggested by the one-dimensional examples in Figures 26-27, where the PEF and the missing data are estimated simultaneously. The nonlinear approach has the advantage that it allows completely arbitrary data positioning, whereas the two-stage linear approach forces the data onto a uniform mesh and requires that not too many mesh locations be empty.
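To make the interpolation operator concrete, here is a minimal numpy sketch of 2-D bilinear interpolation from a uniform mesh to irregular point positions, together with its adjoint. The function names, the fractional-mesh-coordinate convention, and the assumption that all points lie strictly inside the mesh are illustrative choices, not the operator used in the original code.

import numpy as np

def lint2_forward(model, xs, ys):
    """Pull values off a uniform mesh at irregular (x, y) positions by
    bilinear interpolation.  Positions are in fractional mesh units and
    are assumed to lie strictly inside the mesh."""
    ix = np.floor(xs).astype(int)
    iy = np.floor(ys).astype(int)
    fx = xs - ix
    fy = ys - iy
    return ((1 - fx) * (1 - fy) * model[iy,     ix    ] +
            (1 - fx) * fy       * model[iy + 1, ix    ] +
            fx       * (1 - fy) * model[iy,     ix + 1] +
            fx       * fy       * model[iy + 1, ix + 1])

def lint2_adjoint(data, xs, ys, shape):
    """Adjoint of lint2_forward: spray each data value back onto its
    four surrounding mesh points with the same bilinear weights."""
    model = np.zeros(shape)
    ix = np.floor(xs).astype(int)
    iy = np.floor(ys).astype(int)
    fx = xs - ix
    fy = ys - iy
    np.add.at(model, (iy,     ix    ), (1 - fx) * (1 - fy) * data)
    np.add.at(model, (iy + 1, ix    ), (1 - fx) * fy       * data)
    np.add.at(model, (iy,     ix + 1), fx       * (1 - fy) * data)
    np.add.at(model, (iy + 1, ix + 1), fx       * fy       * data)
    return model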

For the 2-D nonlinear application, we follow the same approach we used in one dimension, equations (50) and (51), except that the filtering and the linear interpolations are two dimensional.
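A minimal sketch of that simultaneous estimation follows, written as an alternation between two linear subproblems: with the map fixed, fit the PEF coefficients; with the PEF fixed, adjust the unconstrained mesh values. For brevity it assumes the data already sit on mesh points (a binary known-data mask); the full scheme would replace that mask constraint with the bilinear operator sketched above. The filter support, the gradient-descent solver, and the sweep count are illustrative assumptions, not the iteration (50-51) itself.

import numpy as np

# Illustrative 2-D PEF support: a fixed leading 1 at lag (0, 0) plus
# four adjustable coefficients on one-sided lags.
LAGS = [(0, 1), (1, -1), (1, 0), (1, 1)]

def shift(m, ly, lx):
    """s[iy, ix] = m[iy - ly, ix - lx], zero where the index leaves the grid."""
    ny, nx = m.shape
    s = np.zeros_like(m)
    s[max(ly, 0):ny + min(ly, 0), max(lx, 0):nx + min(lx, 0)] = \
        m[max(-ly, 0):ny + min(-ly, 0), max(-lx, 0):nx + min(-lx, 0)]
    return s

def pef_output(m, coefs):
    """Apply the PEF: r = m + sum_k coefs[k] * (m shifted by LAGS[k])."""
    r = m.copy()
    for c, (ly, lx) in zip(coefs, LAGS):
        r += c * shift(m, ly, lx)
    return r

def estimate_pef(m):
    """Map fixed: fit the adjustable PEF coefficients by least squares.
    (Fit over the whole mesh for brevity; a careful version would mask
    filter outputs that touch not-yet-filled cells.)"""
    cols = np.stack([shift(m, ly, lx).ravel() for ly, lx in LAGS], axis=1)
    coefs, *_ = np.linalg.lstsq(cols, -m.ravel(), rcond=None)
    return coefs

def fill_missing(m, known, coefs, niter=100):
    """PEF fixed: adjust the unknown mesh values so the PEF output is small.
    Plain gradient descent restricted to the missing cells (an illustrative solver)."""
    m = m.copy()
    step = 1.0 / (1.0 + np.abs(coefs).sum()) ** 2   # safe step for this operator norm
    for _ in range(niter):
        r = pef_output(m, coefs)
        g = r.copy()                                # gradient of 0.5 * ||r||^2
        for c, (ly, lx) in zip(coefs, LAGS):
            g += c * shift(r, -ly, -lx)             # adjoint shift is the opposite shift
        g[known] = 0.0                              # keep the known values fixed
        m -= step * g
    return m

def simultaneous_estimate(mesh, known, nsweeps=5):
    """Alternate the two linear subproblems, in the spirit of equations (50)-(51)."""
    m = mesh.copy()
    for _ in range(nsweeps):
        coefs = estimate_pef(m)
        m = fill_missing(m, known, coefs)
    return m, coefs

Restricting the gradient to the missing cells is what keeps the known data honored exactly in this sketch.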

I have had considerable experience with this problem on this data set, and I can report that bin filling is easier and works much more quickly and reliably. Eventually I realized that the best way to start the nonlinear iteration (50-51) is with the final result of bin filling. Then I learned that the extra complexity of the nonlinear iteration (50-51) offers little apparent improvement to the quality of the Seabeam result. (This is not to say that we should not try more variations on the idea.)

Not only was the binning method faster, it was much faster (roughly a minute versus an hour). The reasons it is faster, in order of importance, are:

  1. Binning reduces the amount of data handled in each iteration by a factor of the average number of points per bin.
  2. The 2-D linear interpolation operator adds many operations per data point.
  3. Using two fitting goals seems to require more iterations.
(Parenthetically, I later found that helix preconditioning speeds the Seabeam interpolation from minutes to seconds.)
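For comparison, the bin filling used as the starting point can be sketched in a few lines: drop every data point into its nearest mesh cell and average whatever lands in each cell. The nearest-cell binning, the simple average, and the returned occupancy mask (which could serve as the known-data mask in the alternation sketched earlier) are illustrative choices.

import numpy as np

def bin_fill(xs, ys, values, ny, nx):
    """Average irregularly positioned data into the nearest mesh bins.
    Empty bins stay zero; the returned mask marks which bins received data."""
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    iy = np.clip(np.rint(ys).astype(int), 0, ny - 1)
    ix = np.clip(np.rint(xs).astype(int), 0, nx - 1)
    np.add.at(total, (iy, ix), values)
    np.add.at(count, (iy, ix), 1.0)
    mesh = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    return mesh, count > 0

Because each data point is handled only once here, this step alone accounts for much of the speed difference listed as reason 1 above.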

The most serious criticism of the nonlinear approach is that it does not free us from the linearized approaches. We need them to get a ``close enough'' starting solution to the nonlinear problem. I learned that the iteration (50-51), like most nonlinear sequences, behaves unexpectedly and badly when you start too far from the desired solution. For example, I often began by assuming the PEF was a Laplacian and fitting the initial map under that assumption. Oddly, from this starting point I sometimes found myself stuck: the iteration (50-51) would not move toward the map that we humans would consider better.

Having said all those bad things about iteration (50-51), I must hasten to add that with a different type of data set, you might find the results of (50-51) to be significantly better.

