Model fitting by least squares
The conjugate vectors in model space and data space are denoted in the program by S and Ss, alongside the gradient pair g and gg. The inner part of the conjugate-direction task is in the function cgstep().
    if (forget) {
        for (i = 0; i < nx; i++) S[i]  = 0.;
        for (i = 0; i < ny; i++) Ss[i] = 0.;
        beta = 0.0;
        alfa = cblas_dsdot( ny, gg, 1, gg, 1);
        /* Solve G . ( R + G*alfa) = 0 */
        if (alfa <= 0.) return;
        alfa = - cblas_dsdot( ny, gg, 1, rr, 1) / alfa;
    } else {
        /* search plane by solving 2-by-2
           G . (R + G*alfa + S*beta) = 0
           S . (R + G*alfa + S*beta) = 0 */
        gdg = cblas_dsdot( ny, gg, 1, gg, 1);
        sds = cblas_dsdot( ny, Ss, 1, Ss, 1);
        gds = cblas_dsdot( ny, gg, 1, Ss, 1);
        if (gdg == 0. || sds == 0.) return;
        determ = 1.0 - (gds/gdg)*(gds/sds);
        if (determ > EPSILON) determ *= gdg * sds;
        else                  determ  = gdg * sds * EPSILON;
        gdr = - cblas_dsdot( ny, gg, 1, rr, 1);
        sdr = - cblas_dsdot( ny, Ss, 1, rr, 1);
        alfa = ( sds * gdr - gds * sdr ) / determ;
        beta = (-gds * gdr + gdg * sdr ) / determ;
    }
    cblas_sscal(nx, beta, S, 1);
    cblas_saxpy(nx, alfa, g, 1, S, 1);
    cblas_sscal(ny, beta, Ss, 1);
    cblas_saxpy(ny, alfa, gg, 1, Ss, 1);
    for (i = 0; i < nx; i++) { x[i]  += S[i];  }
    for (i = 0; i < ny; i++) { rr[i] += Ss[i]; }
Observe that the cgstep() function has a logical parameter called forget. This parameter need not be supplied by the user. In the normal course of things, forget is true on the first iteration and false on subsequent iterations. On the first iteration there is no previous step, so the conjugate-direction method reduces to the method of steepest descent. At any later iteration, however, you have the option of setting forget=true, which amounts to restarting the calculation from the current location, something we rarely find reason to do.