The American Innovation and Competitiveness Act was adopted unanimously by the U.S. Congress and signed into law by President Obama in January 2017.
The law contains a section called Research Reproducibility and Replication, which directed the Director of the National Science Foundation, in agreement with the National Research Council, to prepare a report on issues related to research reproducibility and “to make recommendations for improving rigor and transparency in scientific research”.
To fulfill this requirement, a consensus report of the National Academies of Sciences, Engineering, and Medicine was published in 2019 and summarized in a special issue of the Harvard Data Science Review in December 2020.
Among the recommendations:
All researchers should include a clear, specific, and complete description of how the reported results were reached. Reports should include details appropriate for the type of research, including:
- a clear description of all methods, instruments, materials, procedures, measurements, and other variables involved in the study;
- a clear description of the analysis of the data and of the decisions to exclude some data and include other data;
- for results that depend on statistical inference, a description of the analytic decisions and when these decisions were made and whether the study is exploratory or confirmatory;
- a discussion of the expected constraints on generality, such as which methodological features the authors think could be varied without affecting the result and which must remain constant;
- reporting of precision or statistical power; and
- discussion of the uncertainty of the measurements, results, and inferences.
Funding agencies and organizations should consider investing in research and development of open-source, usable tools and infrastructure that support reproducibility for a broad range of studies across different domains in a seamless fashion. Concurrently, investments in outreach to inform and train researchers on best practices and on the use of these tools would be helpful.
Journals should consider ways to ensure computational reproducibility for publications that make claims based on computations, to the extent ethically and legally possible.
A major new release of Madagascar, stable version 3.0, is now available. The main change is the added support for Python 3; both Python 2 and Python 3 are now supported. The new version also features 14 new reproducible papers, as well as other enhancements.
According to the SourceForge statistics, the previous 2.0 stable distribution has been downloaded about 6,000 times. The top country (with 27% of all downloads) was China, followed by the USA, Brazil, Canada, and India.
In September 2019, the total cumulative number of downloads for the stable version of Madagascar reached 50,000. The current development version continues to be available through GitHub.
Working workshops, as opposed to “talking workshops,” are meetings where the participants collaborate in small groups to develop new software code or to conduct computational experiments addressing a particular problem.
The 2018 Working Workshop took place in Houston on August 8-11. It was hosted by the University of Houston and organized by Karl Schleicher. The topic of the workshop was Python and Julia programming languages, as well as their interfaces to Madagascar.
The workshop attracted 16 participants (students, academic staff, and industry professionals) from 12 different organizations. Software projects included such topics as machine learning, 3D plotting, parallel processing, wave equation modeling, and well log analysis.
A new paper is added to the collection of reproducible documents: Matching and merging high-resolution and legacy seismic images
When multiple seismic surveys are acquired over the same area using different technologies that produce data with different frequency content, it may be beneficial to combine these data to produce a broader bandwidth volume. In this paper, we propose a workflow for matching and blending seismic images obtained from shallow high-resolution seismic surveys and conventional surveys conducted over the same area. The workflow consists of three distinct steps: (a) balancing the amplitudes and frequency content of the two images by non-stationary smoothing of the high-resolution image; (b) estimating and removing variable time shifts between the two images; and (c) blending the two images together by least-squares inversion. The proposed workflow is applied successfully to images from the Gulf of Mexico.
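As a toy illustration of step (b), a constant time shift between two co-located traces can be estimated by picking the peak of their cross-correlation. The paper estimates smoothly variable shifts, so this minimal NumPy sketch (with a hypothetical `estimate_shift` helper, not code from the paper) only conveys the basic idea:

```python
import numpy as np

def estimate_shift(a, b, dt):
    """Estimate a constant time shift of trace a relative to trace b
    (both sampled at interval dt) by picking the cross-correlation peak.
    A positive result means the event in a arrives later than in b."""
    cc = np.correlate(a, b, mode='full')
    lag = np.argmax(cc) - (len(b) - 1)  # convert array index to sample lag
    return lag * dt

# Two spike traces: the event in a is 10 samples later than in b
a = np.zeros(100); a[50] = 1.0
b = np.zeros(100); b[40] = 1.0
shift = estimate_shift(a, b, dt=0.004)  # expected: 10 * 0.004 = 0.04 s
```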
A new paper is added to the collection of reproducible documents: Fast time-to-depth conversion and interval velocity estimation in the case of weak lateral variations
Time-domain processing has a long history in seismic imaging and has always been a powerful workhorse that is routinely utilized. It generally leads to an expeditious construction of the subsurface velocity model in time, which can later be expressed in the Cartesian depth coordinates via a subsequent time-to-depth conversion. The conventional practice of such conversion is done using Dix inversion, which is exact in the case of laterally homogeneous media. For other media with lateral heterogeneity, the time-to-depth conversion involves solving a more complex system of partial differential equations (PDEs). In this study, we propose an efficient alternative for time-to-depth conversion and interval velocity estimation based on the assumption of weak lateral velocity variations. By considering only first-order perturbative effects from lateral variations, the exact system of PDEs required to accomplish the exact conversion reduces to a simpler system that can be solved efficiently in a layer-stripping (downward-stepping) fashion. Numerical synthetic and field data examples show that the proposed method achieves reasonable accuracy and is significantly more efficient than previously proposed methods, with a speedup of an order of magnitude.
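For reference, the Dix inversion mentioned above converts RMS (stacking) velocities to interval velocities. A minimal NumPy sketch of the classic Dix formula (the laterally homogeneous baseline, not the paper's perturbative method) might look like:

```python
import numpy as np

def dix_interval_velocity(t, v_rms):
    """Dix's formula: interval velocities from RMS velocities.

    t     : two-way times at layer boundaries (s), strictly increasing
    v_rms : RMS velocities at those times (m/s)

    Returns one interval velocity per layer between consecutive times:
    v_int = sqrt((v_n^2 t_n - v_{n-1}^2 t_{n-1}) / (t_n - t_{n-1}))
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    num = v[1:]**2 * t[1:] - v[:-1]**2 * t[:-1]
    den = t[1:] - t[:-1]
    return np.sqrt(num / den)
```

In a constant-velocity medium the RMS and interval velocities coincide, which provides a quick sanity check.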
As an exercise for the SEG Reproducibility Zoo, the example in rsf/tutorials/yilmaz1 reproduces examples from the section on the 2-D Fourier transform in Oz Yilmaz’s famous book Seismic Data Analysis.
Madagascar users are encouraged to try improving the results.
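The essence of the exercise can be previewed outside Madagascar with plain NumPy: a dipping plane wave in the t-x domain maps to a line through the origin in the f-k domain. A minimal sketch (the synthetic parameters below are arbitrary, not taken from the tutorial):

```python
import numpy as np

# Synthetic gather: a dipping linear event t = t0 + p*x
nt, nx, dt, dx = 256, 64, 0.004, 25.0   # samples, traces, s, m
x = np.arange(nx) * dx
p = 0.0004                               # slowness (dip), s/m
data = np.zeros((nt, nx))
for ix in range(nx):
    it = int(round((0.2 + p * x[ix]) / dt))
    if it < nt:
        data[it, ix] = 1.0

# 2-D Fourier transform to the f-k domain
fk = np.fft.fftshift(np.fft.fft2(data))
freqs = np.fft.fftshift(np.fft.fftfreq(nt, dt))  # temporal frequency (Hz)
ks = np.fft.fftshift(np.fft.fftfreq(nx, dx))     # spatial wavenumber (1/m)
amplitude = np.abs(fk)  # energy concentrates along a line f = k/p
```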
As an exercise for the SEG Reproducibility Zoo, the example in rsf/tutorials/cg reproduces the tutorial from Karl Schleicher on the method of conjugate gradients.
The tutorial was published in the April 2018 issue of The Leading Edge.
Madagascar users are encouraged to try improving the results.
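The method of conjugate gradients itself fits in a few lines. The tutorial works with Madagascar operators; the following is a minimal NumPy sketch of the standard algorithm for a symmetric positive-definite dense system, not the tutorial's code:

```python
import numpy as np

def conjugate_gradients(A, b, niter=100, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A
    by the method of conjugate gradients."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rr = r @ r
    for _ in range(niter):
        Ap = A @ p
        alpha = rr / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p  # new A-conjugate direction
        rr = rr_new
    return x
```

For an n-by-n system, the exact-arithmetic solution is reached in at most n iterations.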
The major new release of Madagascar, stable version 2.0, was made during the Madagascar school in Shanghai and features 25 new reproducible papers and other significant enhancements, including complete examples of seismic field data processing.
According to the SourceForge statistics, the previous 1.7 stable distribution has been downloaded nearly 12,000 times. The top country (with 28% of all downloads) was the USA, followed by China, Brazil, Germany, and Colombia.
The 2017 Madagascar School on Reproducible Computational Geophysics took place in Shanghai, China, on July 10-11 and was hosted by Professor Jiubing Cheng at Tongji University.
The school attracted nearly 80 participants from 12 different universities and 5 other research organizations. The program included lectures given by 6 different instructors and hands-on exercises on different topics in the use of the Madagascar software framework, as well as presentations sharing the experience of different research groups. The school materials are available on the website.
Earlier this year, on April 21-22, another school took place at the University of Houston and was hosted by SEG Wavelets, the local SEG student chapter. The school materials are available on the website.
A new paper is added to the collection of reproducible documents: Elastic wave-vector decomposition in heterogeneous anisotropic media
The goal of wave-mode separation and wave-vector decomposition is to separate the full elastic wavefield into three wavefields, each corresponding to a different wave mode. This allows elastic reverse-time migration to handle each wave mode independently. Several of the previously proposed methods to accomplish this task require the knowledge of the polarization vectors of all three wave modes in a given anisotropic medium. We propose a wave-vector decomposition method where the wavefield is decomposed in the wavenumber domain via the analytical decomposition operator with improved computational efficiency using low-rank approximations. The method is applicable for general heterogeneous anisotropic media. To apply the proposed method in low-symmetry anisotropic media such as orthorhombic, monoclinic, and triclinic, we define the two S modes by sorting them based on their phase velocities (S1 and S2), which are defined everywhere except at the singularities. The singularities can be located using an analytical condition derived from the exact phase-velocity expressions for S waves. This condition defines a weight function, which can be applied to attenuate the planar artifacts caused by the local discontinuity of polarization vectors at the singularities. The amplitude information lost because of weighting can be recovered using the technique of local signal-noise orthogonalization. Numerical examples show that the proposed approach provides an effective decomposition method for all wave modes in heterogeneous, strongly anisotropic media.