next up previous [pdf]

Next: Fitting success and solver Up: KRYLOV SUBSPACE ITERATIVE METHODS Previous: Routine for one step

A basic solver program

There are many methods for iterative least-squares estimation, some of which are discussed later in this book. The conjugate-gradient (CG) family (including the first-order conjugate-direction method described previously) shares the property that, in exact arithmetic, it reaches the solution in $n$ iterations, where $n$ is the number of unknowns. The various CG methods differ in their numerical errors, memory requirements, adaptability to nonlinear optimization, and requirements on the accuracy of the adjoint. In this section we show you the generic interface.

None of us is an expert in both geophysics and optimization theory (OT), yet we need to handle both. We would like each group to write its own code behind a relatively simple interface. The difficulty is that the OT codes must invoke the physical operators, yet the OT codes should not need to deal with all the data and parameters those operators require.

In other words, if a practitioner decides to swap one solver for another, the only thing needed is the name of the new solver.

The operator entrance is for the geophysicist, who formulates the estimation application. The solver entrance is for the specialist in numerical algebra, who designs a new optimization method. The C programming language allows us to achieve this design goal by means of generic function interfaces.
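To make the idea of a generic function interface concrete, here is a sketch of what the two interface types might look like. These typedefs are illustrative, inferred from how sf_tinysolver invokes its arguments below; the exact definitions live in the library headers. The toy_id operator is a hypothetical stand-in, included only to exercise the interface.

```c
#include <stdbool.h>

/* A linear operator: adj selects forward (false) or adjoint (true)
   application; add selects whether the output is overwritten or
   accumulated. (Signature inferred from the calls in sf_tinysolver.) */
typedef void (*sf_operator)(bool adj, bool add,
                            int nm, int nd, float* mod, float* dat);

/* One step of an iterative solver: given the gradient g and its image
   gg under the operator, update the model m and the residual rr. */
typedef void (*sf_solverstep)(bool forget, int nm, int nd,
                              float* m, const float* g,
                              float* rr, const float* gg);

/* A toy operator (the identity) conforming to sf_operator, just to
   show that any function with this shape can be passed to a solver. */
static void toy_id(bool adj, bool add, int nm, int nd,
                   float* mod, float* dat)
{
    int i, n = (nm < nd) ? nm : nd;
    for (i = 0; i < n; i++) {
        if (adj) mod[i] = (add ? mod[i] : 0.0f) + dat[i];
        else     dat[i] = (add ? dat[i] : 0.0f) + mod[i];
    }
}
```

Because the solver sees only the function pointer, the geophysicist's operator can carry its matrices and parameters in its own file-scope state, invisible to the optimization code.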

The generic solver subroutine is tinysolver(). It is substantially simplified from the library version, which has a much longer list of optional arguments. (The forget parameter is not needed by the solvers we discuss first.)

api/c/tinysolver.c
void sf_tinysolver (sf_operator Fop       /* linear operator */, 
		    sf_solverstep stepper /* stepping function */, 
		    int nm                /* size of model */, 
		    int nd                /* size of data */, 
		    float* m              /* estimated model */,
		    const float* m0       /* starting model */,
		    const float* d        /* data */, 
		    int niter             /* iterations */)
/*< Generic linear solver. Solves oper{x} =  dat >*/
{
    int i, iter;
    float *g, *rr, *gg;

    g =  sf_floatalloc (nm);
    rr = sf_floatalloc (nd);
    gg = sf_floatalloc (nd);

    for (i=0; i < nd; i++) rr[i] = - d[i];
    if (NULL==m0) {
	for (i=0; i < nm; i++) m[i] = 0.0;
    } else {
	for (i=0; i < nm; i++) m[i] = m0[i];
	Fop (false, true, nm, nd, m, rr);
    }
    
    for (iter=0; iter < niter; iter++) {
	Fop (true, false, nm, nd, g, rr);
	Fop (false, false, nm, nd, g, gg);
	
	stepper (false, nm, nd, m, g, rr, gg);
    }
    
    free (g);
    free (rr);
    free (gg);
}

The two most important arguments to tinysolver() are the operator function Fop, which follows the interface defined in Chapter [*], and the stepping function stepper, which implements one step of an iterative estimation. For example, a practitioner who chooses our new cgstep() [*] to solve iteratively with the operator matmult [*] would write the call

tinysolver ( matmult_lop, cgstep, ...

The other required arguments to tinysolver() are d (the data we want to fit), m (the model we want to estimate), and niter (the maximum number of iterations). The starting guess m0 is effectively optional: if it is passed as NULL, the model is initialized to zero. The library version also accepts an optional parameter called res for outputting the final residual vector. We will watch how the list of optional parameters to the generic solver routine grows as we attack more and more complex applications in later chapters.
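The pieces above can be assembled into a small standalone program. The following sketch is not the library code: matmult2_lop is a hypothetical 2x2 matrix operator, sd_step is a plain steepest-descent step (not the library's cgstep()), and toy_solver simply inlines the same loop that sf_tinysolver runs, so everything compiles without the library.

```c
#include <stdbool.h>
#include <math.h>

static const float A[2][2] = {{1.0f, 0.0f}, {0.0f, 2.0f}};

/* forward: dat (+)= A mod ; adjoint: mod (+)= A^T dat */
static void matmult2_lop(bool adj, bool add, int nm, int nd,
                         float* mod, float* dat)
{
    int i, j;
    if (!add) {
        if (adj) for (j = 0; j < nm; j++) mod[j] = 0.0f;
        else     for (i = 0; i < nd; i++) dat[i] = 0.0f;
    }
    for (i = 0; i < nd; i++)
        for (j = 0; j < nm; j++) {
            if (adj) mod[j] += A[i][j] * dat[i];
            else     dat[i] += A[i][j] * mod[j];
        }
}

/* steepest descent: alpha = (g.g)/(gg.gg), m -= alpha g, rr -= alpha gg;
   since rr = Fm - d, this keeps the residual consistent with the model. */
static void sd_step(bool forget, int nm, int nd,
                    float* m, const float* g, float* rr, const float* gg)
{
    int i; double gn = 0.0, ggn = 0.0; float alpha;
    (void) forget;  /* unused, as in the solvers discussed first */
    for (i = 0; i < nm; i++) gn  += (double) g[i] * g[i];
    for (i = 0; i < nd; i++) ggn += (double) gg[i] * gg[i];
    alpha = (ggn > 0.0) ? (float) (gn / ggn) : 0.0f;
    for (i = 0; i < nm; i++) m[i]  -= alpha * g[i];
    for (i = 0; i < nd; i++) rr[i] -= alpha * gg[i];
}

/* the same loop as sf_tinysolver, inlined (NULL starting model case) */
static void toy_solver(int nm, int nd, float* m, const float* d, int niter)
{
    int i, iter;
    float g[2], rr[2], gg[2];
    for (i = 0; i < nd; i++) rr[i] = -d[i];
    for (i = 0; i < nm; i++) m[i] = 0.0f;
    for (iter = 0; iter < niter; iter++) {
        matmult2_lop(true,  false, nm, nd, g, rr); /* g  = A^T rr */
        matmult2_lop(false, false, nm, nd, g, gg); /* gg = A  g   */
        sd_step(false, nm, nd, m, g, rr, gg);
    }
}
```

With A diagonal and d = (1, 2), the exact answer is m = (1, 1), and a few dozen steepest-descent iterations get there to good accuracy. Swapping sd_step for a conjugate-gradient step would require changing only the function passed in, which is precisely the design goal of the generic interface.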



2014-12-01