
KRYLOV SUBSPACE ITERATIVE METHODS

The solution time for simultaneous linear equations grows cubically with the number of unknowns. There are three regimes for solution; which regime is applicable depends on the number of unknowns $m$. For $m$ three or less, we use analytical methods. We also sometimes use analytical methods on matrices of size $4\times 4$ if the matrix contains many zeros. My 1988 desktop workstation solved a $100 \times 100$ system in a minute. Ten years later, it would do a $600\times 600$ system in roughly a minute. A nearby, more powerful computer would do 1,000 $\times$ 1,000 in a minute. Because the computing effort increases with the third power of the size, and because $4^3=64\approx 60$, an hour's work solves a matrix four times larger, namely 4,000 $\times$ 4,000, on the more powerful machine. For significantly larger values of $m$, exact numerical methods must be abandoned and iterative methods must be used.

The compute time for a rectangular matrix is slightly more pessimistic. It is the product of the number of data points $n$ times the number of model points squared, $n\,m^2$, which is also the cost of computing the matrix $\mathbf{F}^T\,\mathbf{F}$ from $\mathbf{F}$. Because the number of data points generally exceeds the number of model points, $n>m$, by a substantial factor (to allow averaging of noises), this leaves us with significantly fewer than 4,000 points in model space.
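As a rough illustration of these operation counts, the short sketch below (in Python, with sizes chosen arbitrarily rather than taken from the text) compares the roughly $n\,m^2$ multiply-adds needed to form $\mathbf{F}^T\,\mathbf{F}$ with the roughly $m^3$ operations needed to solve the resulting $m\times m$ system.

# Rough operation counts for a dense n-by-m matrix F with n > m.
# The sizes below are illustrative assumptions, not values from the text.
n, m = 10_000, 2_000
form_normal = n * m**2        # multiply-adds to compute F^T F
solve_square = m**3 // 3      # rough count for factoring the m-by-m system
print(f"forming F^T F  : {form_normal:.1e} operations")
print(f"solving m-by-m : {solve_square:.1e} operations")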

A square image packed into a 4,096-point vector is a $64\times 64$ array. A $k\times k$ image has $m=k^2$ unknowns, so the computing effort for an exact linear-algebra solution is proportional to $m^3=k^6$, which means that even though computer power grows rapidly, imaging resolution using ``exact numerical methods'' hardly grows at all beyond our current practical limit of $64\times 64$.
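A small sketch of this scaling (plain Python, with image sides chosen only for illustration): doubling the image side multiplies the exact-solution cost by $2^6=64$.

# Cost of an exact dense solve for a k-by-k image, relative to k = 64.
# A k-by-k image has m = k**2 unknowns, and the solve costs about m**3 = k**6.
for k in (64, 128, 256):
    m = k * k
    relative_cost = (k / 64) ** 6
    print(f"{k:3d} x {k:<3d} image: m = {m:6d} unknowns, relative cost = {relative_cost:6.0f}")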

The retina in our eyes captures an image of size roughly 1,000 $\times$ 1,000, which is a lot bigger than $64\times 64$. Life offers us many occasions in which final images exceed the roughly 4,000 points of a $64\times 64$ array. To make linear algebra (and inverse theory) relevant to such applications, we investigate special techniques. A numerical technique known as the ``conjugate-direction method'' works well for all values of $m$ and is our subject here. As with most simultaneous-equation solvers, an exact answer (assuming exact arithmetic) is attained in a finite number of steps. And, if $n$ and $m$ are too large to allow enough iterations, the iterative methods can be interrupted at any stage, the partial result often proving useful. Whether or not a partial result actually is useful is the subject of much research; naturally, the results vary from one application to the next.
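As a concrete, hedged illustration of this family of methods, here is a minimal conjugate-gradient sketch for the least-squares problem $\mathbf{0} \approx \mathbf{F}\,\mathbf{m} - \mathbf{d}$, written in Python with NumPy. Conjugate gradients is one particular conjugate-direction method; the function name cgls, the zero starting model, and the iteration count are illustrative assumptions, not the text's own code, and the loop can be stopped early to obtain a partial result.

import numpy as np

def cgls(F, d, niter=10):
    # Conjugate gradients applied to the normal equations of 0 ~ F m - d.
    # A sketch only: no preconditioning, fixed iteration count.
    m = np.zeros(F.shape[1])          # start from the zero model (an assumption)
    r = d - F @ m                     # data residual
    g = F.T @ r                       # gradient F^T r
    p = g.copy()                      # first search direction
    gg = g @ g
    for _ in range(niter):
        if gg == 0:                   # already at the exact answer
            break
        Fp = F @ p                    # search direction mapped to data space
        alpha = gg / (Fp @ Fp)        # step length that minimizes the residual
        m = m + alpha * p             # model update
        r = r - alpha * Fp            # residual update
        g = F.T @ r                   # new gradient
        gg_new = g @ g
        p = g + (gg_new / gg) * p     # conjugate search direction
        gg = gg_new
    return m

# Usage on a small synthetic overdetermined system (n > m):
rng = np.random.default_rng(0)
F = rng.standard_normal((500, 50))
d = F @ rng.standard_normal(50)
m_est = cgls(F, d, niter=20)
print(np.linalg.norm(F @ m_est - d))  # the residual shrinks as niter grows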


