Program of the month: sfremap1

November 3, 2013 Programs

sfremap1 interpolates the input data to a different grid along the first axis. Here is an elementary example: making some data and interpolating it to a denser grid:

bash$ sfmath n1=5 o1=0 d1=1 output=x1 | sfdisfil 
0: 0 1 2 3 4 
bash$ sfmath n1=5 o1=0 d1=1 output=x1 | sfremap1 n1=10 d1=0.5 | sfdisfil 
0: 0 0.5 1 1.5 2 
5: 2.5 3 3.5 4 4.5
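
The idea behind such regridding can be sketched in a few lines of Python. This is only a conceptual illustration (plain linear interpolation, with linear extrapolation past the last sample), not Madagascar's source; the function name `remap1` and its arguments are made up to mirror the program's parameters.

```python
# Conceptual sketch of 1-D regridding: resample a trace f defined on
# the regular grid o1 + i*d1 onto a new grid by linear interpolation.
# (sfremap1 itself uses higher-order ENO interpolation by default.)

def remap1(f, o1, d1, n1_new, o1_new, d1_new):
    out = []
    for i in range(n1_new):
        x = o1_new + i * d1_new          # new grid location
        t = (x - o1) / d1                # fractional index on old grid
        j = min(max(int(t), 0), len(f) - 2)
        w = t - j                        # interpolation weight
        out.append((1 - w) * f[j] + w * f[j + 1])
    return out

trace = [0.0, 1.0, 2.0, 3.0, 4.0]        # sfmath n1=5 d1=1 output=x1
print(remap1(trace, 0.0, 1.0, 10, 0.0, 0.5))   # matches the sfdisfil output above
```

For a linear ramp, linear interpolation is exact, so the sketch reproduces the denser grid printed by sfdisfil above.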

sfremap1 uses ENO (essentially non-oscillatory) interpolation to prevent oscillations when interpolating discontinuous data:

bash$ sfspike n1=10 o1=0 d1=1 k1=6 | sfcausint | sfdisfil 
0: 0 0 0 0 0 
5: 1 1 1 1 1 
bash$ sfspike n1=10 o1=0 d1=1 k1=6 | sfcausint | sfremap1 n1=20 d1=0.5 | sfdisfil 
0: 0 0 0 0 0 
5: 0 0 0 0 0.5 
10: 1 1 1 1 1 
15: 1 1 1 1 1
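
To see why stencil selection matters, here is a toy quadratic ENO interpolator in Python. It is a sketch of the general idea only, not Madagascar's implementation: near the jump, the two candidate stencils are compared by the magnitude of their second differences, and the smoother one is used, so for this step function the interpolant stays within the [0, 1] range instead of oscillating.

```python
# Toy illustration of the ENO idea. A quadratic interpolant through
# three samples needs a stencil; ENO picks the candidate stencil with
# the smaller second difference, so the fit stays on one side of a
# discontinuity and does not overshoot.

def quadratic(f, k, t):
    """Newton-form quadratic through f[k], f[k+1], f[k+2] at index t."""
    d1 = f[k + 1] - f[k]
    d2 = f[k + 2] - 2 * f[k + 1] + f[k]
    s = t - k
    return f[k] + s * d1 + 0.5 * s * (s - 1) * d2

def eno_interp(f, t):
    """Quadratic ENO interpolation of f at fractional index t."""
    j = min(max(int(t), 1), len(f) - 3)
    # candidate stencils {j-1,j,j+1} and {j,j+1,j+2}: compare smoothness
    d_left = abs(f[j - 1] - 2 * f[j] + f[j + 1])
    d_right = abs(f[j] - 2 * f[j + 1] + f[j + 2])
    k = j - 1 if d_left <= d_right else j
    return quadratic(f, k, t)

step = [0.0] * 5 + [1.0] * 5            # sfspike k1=6 | sfcausint
print([eno_interp(step, 0.5 * i) for i in range(20)])
```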

The ENO algorithm is described by Harten et al. (1987) and Shu and Osher (1988):

A. Harten, B. Engquist, S. Osher, and S. R. Chakravarthy, 1987, Uniformly high order accurate essentially non-oscillatory schemes, III: Journal of Computational Physics, 71(2), 231-303.

C. W. Shu and S. Osher, 1988, Efficient implementation of essentially non-oscillatory shock-capturing schemes: Journal of Computational Physics, 77(2), 439-471.

The interpolation order is controlled by the order= parameter: order=2 corresponds to linear interpolation, and the default is order=3. The output grid can be specified either by the n1=, o1=, and d1= parameters or by providing an example file with the desired grid through the pattern= parameter. Here is an example:

bash$ sfmath n1=5 o1=0 d1=1 output=x1 > data.rsf
bash$ sfspike n1=10 d1=0.5 > other.rsf
bash$ < data.rsf sfremap1 pattern=other.rsf | sfdisfil
   0:             0          0.5            1          1.5            2
   5:           2.5            3          3.5            4          4.5

For an alternative method (cubic spline interpolation), see sfspline.

Trust, but verify

October 26, 2013 Links

“Doveryai, no proveryai” (translated as “trust, but verify”) is a Russian proverb, which became a signature phrase of Ronald Reagan during his nuclear-disarmament negotiations with Mikhail Gorbachev.

Last week, this phrase was used by The Economist to describe the troublesome state of modern scientific research. The editorial article “How science goes wrong” states:

“A SIMPLE idea underpins science: “trust, but verify”. Results should always be subject to challenge from experiment. That simple but powerful idea has generated a vast body of knowledge. Since its birth in the 17th century, modern science has changed the world beyond recognition, and overwhelmingly for the better. But success can breed complacency. Modern scientists are doing too much trusting and not enough verifying—to the detriment of the whole of science, and of humanity.”

The article goes on to describe the problems of non-reproducible, unverifiable science:

“Too many of the findings that fill the academic ether are the result of shoddy experiments or poor analysis. A rule of thumb among biotechnology venture-capitalists is that half of published research cannot be replicated. Even that may be optimistic…”

and eventually suggests a possible cure:

“Ideally, research protocols should be registered in advance and monitored in virtual notebooks. This would curb the temptation to fiddle with the experiment’s design midstream so as to make the results look more substantial than they are. Where possible, trial data also should be open for other researchers to inspect and test.”

This sounds like another powerful message in support of reproducible research and a call for changes in the culture of scientific publications. In application to computational science, “virtual notebooks” are reproducible scripts that, in the words of Jon Claerbout, “along with required data should be linked with the document itself”. The article ends with a call for science to fix itself:

“Science still commands enormous—if sometimes bemused—respect. But its privileged status is founded on the capacity to be right most of the time and to correct its mistakes when it gets things wrong. And it is not as if the universe is short of genuine mysteries to keep generations of scientists hard at work. The false trails laid down by shoddy research are an unforgivable barrier to understanding.”

DSR eikonal tomography

October 16, 2013 Documentation

A new paper is added to the collection of reproducible documents:
First-break traveltime tomography with the double-square-root eikonal equation

First-break traveltime tomography is based on the eikonal equation. Since the eikonal equation is solved at fixed shot positions and only receiver positions can move along the ray path, adjoint-state tomography relies on inversion to resolve possibly contradictory information between independent shots. The double-square-root eikonal equation allows not only the receivers but also the shots to change position, and thus describes the prestack survey as a whole. Consequently, its linearized tomographic operator naturally handles all shots together, in contrast with the shot-wise approach of the traditional eikonal-based framework. The double-square-root eikonal equation is singular for horizontal waves, which require special handling. Although it is possible to recover all branches of the solution through post-processing, our current forward modeling and tomography focus on the diving-wave branch only. We consider two upwind discretizations of the double-square-root eikonal equation and show that the explicit scheme is only conditionally convergent and relies on non-physical stability conditions. We then prove that an implicit upwind discretization is unconditionally convergent and monotonically causal. The latter property makes it possible to introduce a modified fast marching method, thus obtaining first-break traveltimes both efficiently and accurately. To compare the new double-square-root eikonal-based tomography with traditional eikonal-based tomography, we perform linearizations and apply the same adjoint-state formulation and upwind finite-differences implementation to both approaches. Synthetic model examples demonstrate that the proposed approach converges faster and is more robust than the traditional one.

Seismic data decomposition into spectral components

October 9, 2013 Documentation

A new paper is added to the collection of reproducible documents:
Seismic data decomposition into spectral components using regularized nonstationary autoregression

Seismic data can be decomposed into nonstationary spectral components with smoothly variable frequencies and smoothly variable amplitudes. To estimate local frequencies, I use a nonstationary version of Prony’s spectral analysis method defined with the help of regularized nonstationary autoregression (RNAR). To estimate local amplitudes of different components, I fit their sum to the data using regularized nonstationary regression (RNR). Shaping regularization ensures stability of the estimation process and provides controls on smoothness of the estimated parameters. Potential applications of the proposed technique include noise attenuation, seismic data compression, and seismic data regularization.

Ray tracing on GPU

October 9, 2013 Documentation

A new paper is added to the collection of reproducible documents:
Shortest path ray tracing on parallel GPU devices

A new parallel algorithm for shortest-path ray tracing on graphics processing units was implemented. This algorithm avoids enforcing mutual exclusion during path calculation, a step found in other parallel graph algorithms that degrades their performance. Tests with velocity models composed of millions of vertices with a high connectivity degree show that this parallel algorithm outperforms the sequential implementation.

This is a contribution from Jorge Monsegny, UP Consultorias, and William Agudelo, ICP-Ecopetrol.
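
For readers unfamiliar with the method: shortest-path (network) ray tracing poses first-arrival computation as a shortest-path problem on a graph whose vertices are grid nodes and whose edge weights are traveltimes. The minimal sequential sketch below uses Dijkstra's algorithm; the paper's contribution is the parallel GPU formulation, which this toy code does not attempt to reproduce.

```python
# Sequential sketch of the graph idea behind shortest-path ray tracing:
# Dijkstra's algorithm returns first-arrival times from a source node.
import heapq

def first_arrivals(graph, source):
    """graph: {node: [(neighbor, traveltime), ...]}"""
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > times.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, dt in graph[node]:
            if t + dt < times.get(nbr, float("inf")):
                times[nbr] = t + dt
                heapq.heappush(heap, (t + dt, nbr))
    return times

# tiny 2x2 grid with uniform unit traveltimes between neighbors
g = {0: [(1, 1.0), (2, 1.0)], 1: [(0, 1.0), (3, 1.0)],
     2: [(0, 1.0), (3, 1.0)], 3: [(1, 1.0), (2, 1.0)]}
print(first_arrivals(g, 0))   # → {0: 0.0, 1: 1.0, 2: 1.0, 3: 2.0}
```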

Program of the month: sfunif2

October 3, 2013 Programs

sfunif2 creates a layered model from specified interfaces. It is inspired by an analogous program in Seismic Unix.

The following example from rsf/su/rsfmodel shows a simple model generated with sfunif2 and reproduced from an example in the book Seismic Data Processing with Seismic Un*x by David Forel, Thomas Benz, and Wayne Pennington.

The input to sfunif2 is a file containing sampled interfaces. The velocities in the layers can be specified as linear distributions
$$v(z,x) = v_0 + v_z\,(z-z_0)+v_x\,(x-x_0)$$
with parameters given by v00=, dvdz=, dvdx=, z0=, and x0=. These parameters are lists whose number of elements is one greater than the number of interfaces. The desired sampling of the depth axis is specified by the n1=, o1=, and d1= parameters.
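
The layer-filling logic can be sketched as follows. This is an illustration of the formula above, not sfunif2's source; the function signature merely mirrors the program's parameter names.

```python
# Sketch of the sfunif2 idea: given sampled interface depths, fill a
# (z, x) grid where each cell gets the linear velocity
# v00 + dvdz*(z - z0) + dvdx*(x - x0) of the layer it falls in.

def unif2(interfaces, v00, dvdz, dvdx, n1, d1, o1=0.0,
          z0=None, x0=None, d2=1.0, o2=0.0):
    nx = len(interfaces[0])
    nlayers = len(v00)                   # one more than interfaces
    z0 = z0 or [0.0] * nlayers
    x0 = x0 or [0.0] * nlayers
    model = []
    for ix in range(nx):
        x = o2 + ix * d2
        column = []
        for iz in range(n1):
            z = o1 + iz * d1
            # count how many interfaces lie above this depth
            layer = sum(1 for itf in interfaces if z >= itf[ix])
            column.append(v00[layer] + dvdz[layer] * (z - z0[layer])
                          + dvdx[layer] * (x - x0[layer]))
        model.append(column)
    return model

# two flat interfaces at z=1 and z=2, three constant-velocity layers
m = unif2([[1.0] * 3, [2.0] * 3], v00=[1.5, 2.0, 3.0],
          dvdz=[0.0] * 3, dvdx=[0.0] * 3, n1=3, d1=1.0)
print(m[0])   # → [1.5, 2.0, 3.0], velocity profile at the first position
```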

The 3-D version is sfunif3. See the following example from rsf/rsf/unif3.

In addition to specifying layer velocities as linear (constant gradient) distributions, sfunif3 allows these properties to be specified in an optional file.

Program of the month: sfpatch

September 14, 2013 Programs

sfpatch breaks the input data into local windows or “patches”, possibly with overlap.

The patching technique is explained by Jon Claerbout in the Nonstationarity: patching chapter of Image Estimation by Example.

Suppose you have a 1-D signal with 10 samples:

bash$ sfmath n1=10 output=x1 > data.rsf

You can divide it, for example, into two patches with 5 samples each:

bash$ < data.rsf sfpatch p=2 w=5 > patch.rsf
bash$ sfget n1 n2 < patch.rsf 
n1=5
n2=2

or into 5 overlapping patches with 3 samples each:

bash$ < data.rsf sfpatch p=5 w=3 > patch.rsf
bash$ sfget n1 n2 < patch.rsf 
n1=3
n2=5

If you specify only the patch size (w= parameter), sfpatch tries to select the number of patches to achieve a good overlap:

bash$ < data.rsf sfpatch w=3 > patch.rsf
bash$ sfget n1 n2 < patch.rsf 
n1=3
n2=6

If you put overlapped patches back together, their amplitudes add in the overlapped regions:

bash$ < data.rsf sfdd type=int | sfdisfil 
   0:    0    1    2    3    4    5    6    7    8    9
bash$ < patch.rsf sfpatch inv=y | sfdd type=int | sfdisfil
   0:    0    2    6    6    8   10   12   14    8    9

unless you use the weight option:

bash$ < patch.rsf sfpatch inv=y weight=y | sfdd type=int | sfdisfil
   0:    0    1    2    3    4    5    6    7    8    9
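
The inverse operation can be sketched the same way: overlap-add the patches and, when weighting is on, divide each sample by the number of windows covering it. This illustrates the idea behind inv=y weight=y; sfpatch's actual weighting scheme may differ.

```python
# Sketch of inverse patching: overlap-add, with optional normalization
# by the coverage count so overlapped amplitudes are not doubled.

def unpatch(patches, starts, n, weight=False):
    out = [0.0] * n
    count = [0] * n
    for s, p in zip(starts, patches):
        for k, v in enumerate(p):
            out[s + k] += v
            count[s + k] += 1
    if weight:
        out = [o / c if c else 0.0 for o, c in zip(out, count)]
    return out

data = list(range(10))
starts = [0, 2, 4, 5, 7]                 # five overlapping width-3 windows
patches = [data[s:s + 3] for s in starts]
print(unpatch(patches, starts, 10))               # amplitudes add in overlaps
print(unpatch(patches, starts, 10, weight=True))  # original 0..9 recovered
```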

sfpatch easily handles multidimensional data: w= and p= become lists of dimensions, and the number of dimensions in the output effectively doubles. sfpatch is useful in two applications:

  1. crude handling of non-stationarity when processing non-stationary signals with locally stationary filters,
  2. making non-parallel tasks data-parallel.

A simple example from rsf/rsf/mona illustrates the second use.

Anisotropic diffusion, implemented by sfimpl2, is an effective but relatively slow process. By breaking the input 512×512 image into nine 200×200 overlapping patches and by processing them in parallel on a multi-core computer, we can achieve a significant speed-up without rewriting the original program.

Telling your story with sfresults

September 11, 2013 Programs

Sometimes it is convenient to have quick access to all the results in your experiment. sfresults, a simple Tkinter script, provides that functionality. Run it in your project directory. Click the “Cycle” button to run scons view, click an individual result button to run scons result.view, or select several results and click “Flip” to flip between the figures on the screen.

Madagascar school in Melbourne

August 25, 2013 Celebration

Australia’s population is smaller than that of Texas, yet Australia’s surface area is more than ten times larger than that of Texas. Australia is not only one of the top ten countries by proven natural gas reserves (the full list is Iran, Russia, Qatar, Turkmenistan, USA, Saudi Arabia, Venezuela, Nigeria, Algeria, Australia) but also one of the top ten countries in visits to the Madagascar website (the full list is USA, China, Canada, UK, Brazil, Germany, India, Saudi Arabia, France, Australia).

The Madagascar school in Melbourne, Australia, took place on August 15 and was organized by Jeff Shragge from UWA as a workshop at the 2013 ASEG-PESA meeting. The school attracted nearly 20 participants who listened to presentations and worked through interactive exercises. Jan Stellingwerff from dGB demonstrated the Madagascar interface in OpendTect. The school materials are available now on the website.

Cube-helix color palette

August 23, 2013 Examples

The “cube helix” color palette was suggested by Dave Green for displaying astronomical intensity. The palette is designed to be monotonically increasing in terms of its perceived brightness, which results in a nice greyscale when printed on a black-and-white printer.
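
For the curious, Green's published formula is compact enough to sketch directly. The coefficients below are the standard ones from the 2011 paper; the default start, rotations, hue, and gamma values are assumptions matching common implementations, and this is not Madagascar's source.

```python
# Sketch of the cube-helix formula: the color winds around the grey
# diagonal of the RGB cube while brightness rises monotonically.
import math

def cubehelix(frac, start=0.5, rotations=-1.5, hue=1.0, gamma=1.0):
    """Map frac in [0, 1] to an (r, g, b) tuple, each clipped to [0, 1]."""
    angle = 2 * math.pi * (start / 3.0 + 1.0 + rotations * frac)
    level = frac ** gamma                 # monotone brightness ramp
    amp = hue * level * (1 - level) / 2.0
    c, s = math.cos(angle), math.sin(angle)
    r = level + amp * (-0.14861 * c + 1.78277 * s)
    g = level + amp * (-0.29227 * c - 0.90649 * s)
    b = level + amp * (1.97294 * c)
    return tuple(min(1.0, max(0.0, v)) for v in (r, g, b))

print(cubehelix(0.0))   # → (0.0, 0.0, 0.0), black
print(cubehelix(1.0))   # → (1.0, 1.0, 1.0), white
```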

Now this palette is available to Madagascar plotting programs, such as sfgrey, as color=x. See the following example from rsf/rsf/sfgrey:

More information: