Steep-dip decon is a heavy consumer of computer time.
Many small optimizations could be done,
but more importantly,
I feel there are some deeper issues that warrant further investigation.
The first question is,
how many filter coefficients should there be
and where should they be?
We would like to keep the number of nonzero filter coefficients to a minimum,
both because doing so would speed the computation
and, more importantly, because I fear the filter output
might be defective in some insidious way (perhaps missing primaries)
when too many filter coefficients are used.
Perhaps if 1-D decon were applied in sequence with steep-dip decon,
the number of free parameters (and hence the amount of computer time)
could be reduced even further.
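As a rough sketch of the per-trace stage of such a cascade
(in Python; the names \texttt{pef\_1d} and \texttt{decon\_1d} and the gather layout are illustrative,
and the steep-dip stage itself is not shown),
a 1-D prediction-error filter could be fit and applied trace by trace:
\begin{verbatim}
import numpy as np

def pef_1d(trace, nlag):
    # Least-squares 1-D prediction-error filter (1, -a_1, ..., -a_nlag):
    # predict trace[t] from its nlag previous samples.
    n = len(trace)
    rows = n - nlag
    X = np.column_stack([trace[nlag-k-1 : nlag-k-1+rows]
                         for k in range(nlag)])
    y = trace[nlag:]
    a = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.concatenate(([1.0], -a))

def decon_1d(gather, nlag):
    # Per-trace 1-D decon applied before the multitrace (steep-dip)
    # stage, so that stage can carry fewer free coefficients.
    out = np.empty(gather.shape, dtype=float)
    for i, trace in enumerate(gather):
        f = pef_1d(np.asarray(trace, dtype=float), nlag)
        out[i] = np.convolve(trace, f)[:gather.shape[1]]
    return out
\end{verbatim}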
I looked at some of the filters
and their coefficients oscillate wildly at the Nyquist frequency
(particularly those coefficients on the trace with the ``1'' constraint).
This suggests adding a damping term on the filter coefficients,
after which the magnitude of a filter coefficient
should be a better measure of whether that coefficient is really needed.
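To make the damping idea concrete,
here is a minimal sketch assuming the filter is fit by least squares;
the name \texttt{fit\_filter\_damped} and the toy data are purely illustrative,
not part of the steep-dip decon program itself:
\begin{verbatim}
import numpy as np

def fit_filter_damped(F, d, eps):
    # Solve  min ||F a - d||^2 + eps^2 ||a||^2  via the normal
    # equations; the damping term eps^2 I pulls wild coefficients
    # (e.g. those oscillating at Nyquist) toward zero.
    na = F.shape[1]
    return np.linalg.solve(F.T @ F + eps**2 * np.eye(na), F.T @ d)

# Toy comparison: coefficient magnitudes with and without damping.
rng = np.random.default_rng(0)
F = rng.normal(size=(200, 12))
d = rng.normal(size=(200,))
print(np.abs(fit_filter_damped(F, d, eps=0.0)).round(3))
print(np.abs(fit_filter_damped(F, d, eps=3.0)).round(3))
\end{verbatim}
With the $\epsilon^2 I$ term in the normal equations,
coefficients that merely chase noise near Nyquist are pulled toward zero,
so a coefficient that remains large is more likely to be doing real work.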
Also, it would, of course, be fun to get some complete data sets
(rather than a single shot profile) to see the difference in the final stack.