Motivation

Regularized linear least squares estimation problems can be written as minimizing the quadratic function
\begin{equation}
Q({\bf m}) = \Vert {\bf d} - {\bf L} {\bf m} \Vert^2 + \epsilon^2 \Vert {\bf A} {\bf m} \Vert^2 ,
\end{equation}

where $\bf d$ is our data, $\bf L$ is our modeling operator, $\bf A$ is our regularization operator, and $\bf m$ is the model we are inverting for. Alternatively, we can write the same problem in terms of fitting goals,
\begin{eqnarray}
{\bf d} & \approx & {\bf L} {\bf m} \\
{\bf 0} & \approx & \epsilon {\bf A} {\bf m} .
\end{eqnarray}
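
To make the equivalence of these two formulations concrete, here is a minimal numerical sketch (Python/NumPy; not from the original paper, with arbitrary sizes and a placeholder regularization operator chosen purely for illustration). It solves equation (1) through its normal equations, solves the fitting goals (2) as one stacked least-squares system, and checks that both routes give the same model:

\begin{verbatim}
import numpy as np

# Illustrative sizes and operators only; real problems use large,
# sparse or operator-form L and A.
rng = np.random.default_rng(0)
n_data, n_model = 20, 15
L = rng.standard_normal((n_data, n_model))  # modeling operator L
d = rng.standard_normal(n_data)             # data d
A = np.eye(n_model)                         # placeholder regularization A
eps = 0.1

# Equation (1): minimize Q(m) via the normal equations
#   (L^T L + eps^2 A^T A) m = L^T d.
m_normal = np.linalg.solve(L.T @ L + eps**2 * (A.T @ A), L.T @ d)

# Fitting goals (2): stack d ~ L m and 0 ~ eps A m into one
# overdetermined system and solve it by ordinary least squares.
G = np.vstack([L, eps * A])
rhs = np.concatenate([d, np.zeros(A.shape[0])])
m_stacked, *_ = np.linalg.lstsq(G, rhs, rcond=None)

assert np.allclose(m_normal, m_stacked)  # the two routes agree
\end{verbatim}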

For the purposes of this paper I will refer to the first goal as the data fitting goal and the second as the model styling goal. Normally we think of the data fitting goal as describing the physics of the problem, while the model styling goal is supposed to provide information about the model's character. Ideally $\bf A$ would be the inverse model covariance. In practice we do not have the model covariance, so we attempt to approximate it with another operator. At SEP the regularization operator is typically one of the following:
Laplacian or gradient: a simple operator that assumes nothing about the model (see the sketch after this list).
Prediction Error Filter (PEF): a stationary operator estimated from known portions of the model, or from some field with the same properties as the model (Claerbout, 1998).
Steering filter: a non-stationary operator built from minimal information about the model (Clapp et al., 1997).
Non-stationary PEF: a non-stationary operator built from a field with the same properties as the model (Crawley, 2000).
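
As a concrete instance of the first item, the sketch below (again Python/NumPy, an illustration rather than SEP code) builds 1-D gradient and Laplacian operators as small dense matrices. Used as $\bf A$ in the model styling goal, they penalize model roughness while assuming nothing else about the model:

\begin{verbatim}
import numpy as np

def gradient_operator(n):
    """1-D first-difference (gradient) operator of shape (n-1, n)."""
    return np.diff(np.eye(n), axis=0)

def laplacian_operator(n):
    """1-D second-difference (Laplacian) operator of shape (n-2, n)."""
    return np.diff(np.eye(n), n=2, axis=0)

# Applied in 0 ~ eps A m, either operator penalizes roughness of m.
m = np.linspace(0.0, 1.0, 8) ** 2
print(gradient_operator(8) @ m)   # first differences of m
print(laplacian_operator(8) @ m)  # second differences of m
\end{verbatim}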
A problem with the first three operators is that, while they approximate the model covariance, they carry little notion of the model variance. As a result, our model estimates tend to have the wrong statistical properties.
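
A small hypothetical experiment (Python/NumPy; every choice here, from the masking operator to the value of eps, is an assumption made only for illustration) shows the effect: interpolating scattered samples of a rough model with gradient regularization yields an estimate whose sample variance is noticeably smaller than that of the true model.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 200
m_true = rng.standard_normal(n)        # a rough "true" model
keep = rng.random(n) < 0.2             # observe roughly 20% of samples
L = np.eye(n)[keep]                    # selection (masking) operator
d = L @ m_true                         # observed data
A = np.diff(np.eye(n), axis=0)         # gradient regularization
eps = 1.0

# Regularized estimate from the normal equations of equation (1)
m_est = np.linalg.solve(L.T @ L + eps**2 * (A.T @ A), L.T @ d)
print(m_true.var(), m_est.var())       # estimate is too smooth: its variance shrinks
\end{verbatim}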

