
Model-space regularization

Model-space regularization implies adding equations to system (1) to obtain a fully constrained (well-posed) inverse problem. The additional equations take the form

\begin{displaymath}
\epsilon \mathbf{D} \mathbf{m} \approx \mathbf{0} \;,
\end{displaymath} (2)

where $\mathbf {D}$ is a linear operator that represents additional requirements for the model, and $\epsilon$ is a scaling parameter. In many applications, $\mathbf {D}$ can be thought of as a filter that enhances undesirable components of the model, or as the operator of a differential equation that we assume the model should satisfy.
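
As a toy illustration of this interpretation (not part of the original formulation; it assumes NumPy and a hypothetical one-dimensional model), a first-difference operator nearly annihilates a smooth model while producing large output for an oscillatory one, which is why minimizing $\Vert\mathbf{D}\mathbf{m}\Vert$ suppresses rough components:

\begin{verbatim}
import numpy as np

n = 11
# Hypothetical roughening operator D: (n-1) x n first-difference matrix,
# so that D m approximates the derivative of m and ||D m|| measures roughness.
D = np.diff(np.eye(n), axis=0)

smooth = np.linspace(0.0, 1.0, n)                 # slowly varying model
rough = np.random.default_rng(0).normal(size=n)   # oscillatory model

print(np.linalg.norm(D @ smooth))   # small: smooth model nearly satisfies D m = 0
print(np.linalg.norm(D @ rough))    # large: rough components are enhanced by D
\end{verbatim}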

The full system of equations (1)-(2) can be written in compact notation as

\begin{displaymath}
\mathbf{G_m}\,\mathbf{m} =
\left[\begin{array}{c} \mathbf{L} \\ \epsilon \mathbf{D} \end{array}\right] \mathbf{m} \approx
\left[\begin{array}{c} \mathbf{d} \\ \mathbf{0} \end{array}\right] =
\hat{\mathbf{d}}\;,
\end{displaymath} (3)

where $\hat{\mathbf{d}}$ is the augmented data vector:
\begin{displaymath}
\hat{\mathbf{d}} = \left[\begin{array}{c} \mathbf{d} \\ \mathbf{0}
\end{array}\right]\;,
\end{displaymath} (4)

and $\mathbf{G_m}$ is a column operator:
\begin{displaymath}
\mathbf{G_m} = \left[\begin{array}{c} \mathbf{L} \\ \epsilon \mathbf{D}
\end{array}\right]\;.
\end{displaymath} (5)
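
As a sketch of how the augmented system (3)-(5) can be assembled in practice (a toy example with a hypothetical random operator $\mathbf{L}$, assuming NumPy), one stacks $\mathbf{L}$ on top of $\epsilon \mathbf{D}$ and pads the data vector with zeros:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, k = 11, 6                        # model and data sizes (toy example)
L = rng.normal(size=(k, n))         # stand-in forward operator (underdetermined)
d = rng.normal(size=k)              # stand-in observed data
eps = 0.1                           # scaling parameter epsilon

D = np.diff(np.eye(n), axis=0)      # first-difference roughening operator
G_m = np.vstack([L, eps * D])       # column operator G_m, equation (5)
d_hat = np.concatenate([d, np.zeros(n - 1)])   # augmented data, equation (4)
\end{verbatim}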

The estimation problem (3) is fully constrained. We can solve it by means of unconstrained least-squares optimization, minimizing the least-squares norm of the compound residual vector

\begin{displaymath}
\hat{\mathbf{r}} = \hat{\mathbf{d}} - \mathbf{G_m}\,\mathbf{m} =
\left[\begin{array}{c} \mathbf{d} - \mathbf{L}\,\mathbf{m} \\ - \epsilon \mathbf{D}\,\mathbf{m}
\end{array}\right]\;.
\end{displaymath} (6)

The formal solution of the regularized optimization problem has the well-known form (Parker, 1994)
\begin{displaymath}
<\!\!\mathbf{m}\!\!> =
\left(\mathbf{L}^T \mathbf{L} +
\epsilon^2 \mathbf{D}^T \mathbf{D}\right)^{-1} \mathbf{L}^T \mathbf{d}\;,
\end{displaymath} (7)

where $<\!\!\mathbf{m}\!\!>$ denotes the least-squares estimate of $\mathbf{m}$, and $\mathbf{L}^T$ denotes the adjoint operator. One can carry out the optimization iteratively with the help of the conjugate-gradient method (Hestenes and Stiefel, 1952) or its analogs (Paige and Saunders, 1982).
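
For completeness, the following sketch (continuing the toy setup above, assuming NumPy and SciPy) compares the closed-form estimate of equation (7) with an iterative least-squares solution of the augmented system (3) obtained with LSQR (Paige and Saunders, 1982); the two estimates agree to numerical precision.

\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
n, k = 11, 6
L = rng.normal(size=(k, n))         # stand-in forward operator
d = rng.normal(size=k)              # stand-in observed data
eps = 0.1
D = np.diff(np.eye(n), axis=0)      # first-difference roughening operator

# Closed-form estimate from the regularized normal equations, equation (7).
m_direct = np.linalg.solve(L.T @ L + eps**2 * (D.T @ D), L.T @ d)

# Iterative estimate: least squares on the augmented system (3), solved with LSQR.
G_m = np.vstack([L, eps * D])
d_hat = np.concatenate([d, np.zeros(n - 1)])
m_iter = lsqr(G_m, d_hat, atol=1e-12, btol=1e-12)[0]

print(np.max(np.abs(m_direct - m_iter)))   # small difference: the estimates agree
\end{verbatim}

In realistic applications, $\mathbf{L}$ and $\mathbf{D}$ are usually available only as operators rather than explicit matrices, so iterative solvers that require only forward and adjoint applications are preferred over forming the normal equations explicitly.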

In the next subsection, we describe an alternative formulation of the optimization problem.

