
Conjugate-Guided-Gradient (CGG) method

From the algorithmic viewpoint of the CG method, the IRLS algorithm can be considered an LS method whose operator $\mathbf L$ is modified by the weights $\mathbf W_r$ and $\mathbf W_m$. The only change that distinguishes the IRLS algorithm from the LS one is the substitution of $\mathbf W_r \mathbf L\mathbf W_m$ and $\mathbf W_m^T\mathbf L^T \mathbf W_r^T$ for $\mathbf L$ and $\mathbf L^T$, respectively. Since the weights $\mathbf W_r$ and $\mathbf W_m$ are functions of the residual and the model, respectively, and the residual $\mathbf r$ and the model $\mathbf m$ change during the iteration, the problem that the IRLS method solves is nonlinear. The IRLS method therefore obtains the $\ell^p$-norm solution at the cost of a nonlinear implementation.

I propose another algorithm that obtains an $\ell^p$-norm solution without breaking the linear inversion template. Instead of modifying the operator, which makes the inversion nonlinear, we can guide the search for the minimum $\ell^2$-norm solution into a specific model subspace, so as to obtain a solution that meets a user's specific criteria. The specific model subspace can be guided by the gradient of a specific $\ell^p$ norm or constrained by an a priori model. Such guiding of the model vector can be realized by weighting the residual vector or the gradient vector in the CG algorithm. Since the weights essentially change the direction of the gradient vector in the CG algorithm, the proposed algorithm is named the Conjugate-Guided-Gradient (CGG) method.
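To make the idea concrete, the following is a minimal sketch (in Python/NumPy, not part of the original text) of a CG-type loop in which the residual is re-weighted elementwise before the gradient is formed, so the search direction is guided toward an $\ell^p$-like solution while each step still performs an exact $\ell^2$ line search. The function name cgg, the norm exponent p, the stabilization constant eps, and the Fletcher-Reeves-style direction update are illustrative assumptions, not the author's exact formulation.

    import numpy as np

    def cgg(L, d, p=1.0, niter=50, eps=1e-8):
        """Sketch of a conjugate-guided-gradient loop (assumed form)."""
        m = np.zeros(L.shape[1])
        r = d - L @ m                        # data residual (m starts at zero)
        s = np.zeros_like(m)                 # conjugate search direction
        g_old = None
        for _ in range(niter):
            # Residual weight W_r ~ |r|^(p-2): p = 2 leaves the gradient
            # unchanged (plain CGLS); p = 1 mimics the l^1 IRLS weight 1/|r|.
            w = (np.abs(r) + eps) ** (p - 2.0)
            g = L.T @ (w * r)                # guided gradient
            if g_old is None:
                s = g
            else:
                beta = (g @ g) / (g_old @ g_old)  # Fletcher-Reeves-style update
                s = g + beta * s
            Ls = L @ s
            # Exact l^2 line search along the guided direction s:
            alpha = (s @ (L.T @ r)) / (Ls @ Ls + eps)
            m += alpha * s
            r -= alpha * Ls
            g_old = g
        return m

Note that only the search direction is weighted: the operator $\mathbf L$ itself is never modified, so each iteration remains a linear CG-style step, in contrast to the IRLS substitution of $\mathbf W_r \mathbf L \mathbf W_m$ for $\mathbf L$.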


