Regularization is model styling
For the operator approach to the fitting goal $0 \approx \mathbf{F}\mathbf{m}$, we rewrite it as
$$
0 \;\approx\; \mathbf{r} \;=\; \mathbf{F}\mathbf{J}\,\mathbf{m} \;+\; \mathbf{F}(\mathbf{I}-\mathbf{J})\,\mathbf{d}
\eqno(9)
$$
where $\mathbf{J}$ is a diagonal masking matrix with ones at the missing data locations and zeros at the known ones.
We begin the calculation with the known data values, where missing data values are replaced by zeros, namely $(\mathbf{I}-\mathbf{J})\mathbf{d}$. Filter this data, getting $\mathbf{F}(\mathbf{I}-\mathbf{J})\mathbf{d}$, and load it into the residual $\mathbf{r}$. With this initialization completed, we begin an iteration loop. First we compute $\Delta\mathbf{m}$ from equation (9):
$$
\Delta\mathbf{m} \;=\; \mathbf{J}'\,\mathbf{F}'\,\mathbf{r}
\eqno(10)
$$
The subroutine to find missing data is mis1(). It assumes that zero values in the input data correspond to missing-data locations. It uses our convolution operator tcai1(). Other operators and modules are listed in the book's Index.
void mis1(int niter         /* number of iterations */,
          float *xx         /* data/model */,
          const bool *known /* mask for known data */,
          const char *step  /* solver */)
/*< interpolate >*/
{
    switch (step[1]) {
        case 'g': /* conjugate gradients */
            sf_solver (tcai1_lop, sf_cgstep, nx, ny, xx, zero, niter,
                       "x0", xx, "known", known, "end");
            sf_cgstep_close();
            break;
        case 'd': /* conjugate directions */
            sf_cdstep_init();
            sf_solver (tcai1_lop, sf_cdstep, nx, ny, xx, zero, niter,
                       "x0", xx, "known", known, "end");
            sf_cdstep_close();
            break;
        default:
            sf_error("%s: unknown step %s",__FILE__,step);
            break;
    }
}
I sought reference material on conjugate gradients with constraints and did not find anything, leaving me to fear that this chapter was in error and that I had lost the magic property of convergence in a finite number of iterations. I tested the code, and it did converge in a finite number of iterations. The explanation is that these constraints are almost trivial. We pretended we had extra variables, and computed a gradient $\Delta m$ for each. Using the mask $\mathbf{J}$, we then set those components of the gradient to zero, therefore making no changes to anything, as if we had never calculated the extra $\Delta m$'s.
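The argument can be made concrete with a short derivation (a sketch; the mask $\mathbf{J}$ and filter $\mathbf{F}$ follow their usage elsewhere in this chapter):

```latex
% Split the model into a frozen known part and a free missing part:
%   m = (I - J) d + J m .
% The residual then separates into a changeable term and a constant:
\mathbf{r} \;=\; \mathbf{F}\,\mathbf{m}
        \;=\; \mathbf{F}\mathbf{J}\,\mathbf{m}
        \;+\; \mathbf{F}(\mathbf{I}-\mathbf{J})\,\mathbf{d}
% Minimizing |r|^2 with the known values held fixed is therefore an
% ordinary unconstrained least-squares problem in the free components Jm.
% Conjugate gradients on an unconstrained quadratic in n_missing unknowns
% converges in at most n_missing iterations, so zeroing the gradient at
% the known locations preserves finite convergence on the reduced problem.
```

In other words, the "constrained" problem was never truly constrained: it is an unconstrained problem in fewer unknowns.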