
Short memory of the gradients

Substituting the gradient direction (28) into formula (23) and applying formulas (4) and (27), we can see that
\begin{displaymath}
\beta_n^{(j)} =
{{\left({\bf A c}_n, {\bf r}_{j} - {\bf r}_{j-1}\right)} \over
{\alpha_j\Vert{\bf A s}_{j}\Vert^2}} =
{{\left({\bf c}_n, {\bf c}_{j+1} - {\bf c}_{j}\right)} \over
{\alpha_j\Vert{\bf A s}_{j}\Vert^2}}\;.
\end{displaymath} (33)

The orthogonality condition (30), which makes $({\bf c}_n, {\bf c}_j)$ vanish for every $j < n$, together with the definition of the coefficient $\alpha_j$ from equation (31), reduces this formula to
\begin{displaymath}
\beta_n^{(n-1)} =
{{\Vert{\bf c}_n\Vert^2} \over
{\alpha_{n-1}\Vert{\bf A s}_{n-1}\Vert^2}} =
{{\Vert{\bf c}_n\Vert^2} \over
{\Vert{\bf c}_{n-1}\Vert^2}}\;,
\end{displaymath} (34)

\begin{displaymath}
\beta_n^{(j)} = 0\;,\;\;1 \leq j \leq n-2\;.
\end{displaymath} (35)

Equation (35) shows that the conjugate-gradient method needs to remember only the previous step direction in order to optimize the search at each iteration: the coefficients multiplying all earlier directions vanish. This is another remarkable property that distinguishes the conjugate-gradient method within the family of conjugate-direction methods.
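To see the practical consequence, the following is a minimal NumPy sketch of a conjugate-gradient iteration for minimizing $\Vert{\bf A x} - {\bf d}\Vert^2$ (the least-squares form of the method). It illustrates the short-memory property only and is not the program discussed in the next section; the names cgls, A, d, and niter are chosen for this example. Between iterations the loop keeps just the previous step direction and the previous gradient norm, and forms the coefficient $\beta$ as in equation (34).

import numpy as np

def cgls(A, d, niter):
    """Conjugate-gradient iteration for minimizing ||A x - d||^2.

    Illustrates the short-memory property: only the previous step
    direction s and the previous gradient norm are carried between
    iterations.
    """
    x = np.zeros(A.shape[1])
    r = d - A @ x                  # data residual
    s = np.zeros_like(x)           # previous step direction (the only one stored)
    gamma_prev = None              # ||c_{n-1}||^2 from the previous iteration
    for _ in range(niter):
        c = A.T @ r                # gradient direction c_n
        gamma = c @ c              # ||c_n||^2
        beta = 0.0 if gamma_prev is None else gamma / gamma_prev  # equation (34)
        s = c + beta * s           # new direction uses only the previous s
        As = A @ s
        alpha = gamma / (As @ As)  # step length, so that alpha*||A s||^2 = ||c||^2
        x = x + alpha * s
        r = r - alpha * As
        gamma_prev = gamma
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    d = rng.standard_normal(20)
    x = cgls(A, d, niter=5)
    # with 5 unknowns, 5 iterations recover the least-squares solution
    print(np.allclose(x, np.linalg.lstsq(A, d, rcond=None)[0]))

Because $\beta_n^{(j)}$ vanishes for $1 \leq j \leq n-2$, discarding the older directions loses nothing: the extra storage cost of the method stays at a single vector regardless of the number of iterations.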

