A new paper is added to the collection of reproducible documents: CuQ-RTM: A CUDA-based code package for stable and efficient Q-compensated reverse time migration
Reverse time migration (RTM) in attenuating media should take absorption and dispersion effects into consideration. The recently proposed viscoacoustic wave equation with decoupled fractional Laplacians (DFLs) facilitates separate amplitude compensation and phase correction in $Q$-compensated RTM ($Q$-RTM). However, the intensive computation and enormous storage requirements of $Q$-RTM prevent its extension to practical applications, especially for large-scale 2D or 3D cases. The emerging graphics processing unit (GPU) computing technology, built around a scalable array of multithreaded Streaming Multiprocessors (SMs), presents an opportunity to greatly accelerate $Q$-RTM by appropriately exploiting the GPU’s architectural characteristics. We present cu$Q$-RTM, a CUDA-based code package that implements $Q$-RTM based on a set of stable and efficient strategies, such as streamed CUFFT, checkpointing-assisted time-reversal reconstruction (CATRC) and adaptive stabilization. cu$Q$-RTM can run in a multi-level parallelism (MLP) fashion, either synchronously or asynchronously, to take advantage of all the CPUs and GPUs available, while maintaining impressively good stability and flexibility. We mainly outline the architecture of the cu$Q$-RTM code package and some program optimization schemes. The speedup ratio on a single GeForce GTX760 GPU card relative to a single core of an Intel Core i5-4460 CPU can reach above 80 in large-scale simulations. The strong scaling property of multi-GPU parallelism is demonstrated by performing $Q$-RTM on a Marmousi model with one to six GPU(s) involved. Finally, we further verify the feasibility and efficiency of cu$Q$-RTM on a field data set. The “living” package is available from GitHub at https://github.com/Geophysics-OpenSource/cuQRTM, and peer-reviewed code related to this article can be found at http://software.seg.org/2019/0001.
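The checkpointing-assisted time-reversal reconstruction (CATRC) strategy trades recomputation for memory: only a few snapshots are stored during forward modeling, and intermediate wavefields are regenerated from the nearest checkpoint during backpropagation. A minimal sketch of that idea in plain Python (the function names and the toy one-line `step` are illustrative stand-ins for the real GPU wavefield propagator, not the package's code):

```python
import numpy as np

def forward_with_checkpoints(u0, step, nt, k):
    """Run nt forward steps, storing a checkpoint every k steps
    (a toy stand-in for saving snapshots during forward modeling)."""
    ckpts = {0: u0.copy()}
    u = u0.copy()
    for n in range(1, nt + 1):
        u = step(u)
        if n % k == 0:
            ckpts[n] = u.copy()
    return ckpts

def reconstruct(ckpts, step, n, k):
    """Recover the wavefield at time step n by recomputing forward
    from the nearest stored checkpoint -- the compute-for-memory
    trade behind checkpointing-assisted time-reversal reconstruction."""
    base = (n // k) * k
    u = ckpts[base].copy()
    for _ in range(n - base):
        u = step(u)
    return u
```

With checkpoint interval `k`, storage drops by roughly a factor of `k` at the cost of at most `k - 1` extra propagation steps per reconstructed snapshot.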
A new paper is added to the collection of reproducible documents: Fast dictionary learning for noise attenuation of multidimensional seismic data
The K-SVD algorithm has been successfully utilized for adaptively learning a sparse dictionary in 2D seismic denoising. Because of the high computational cost of the many SVDs required by the K-SVD algorithm, it is not practical in many situations, especially for 3D or 5D problems. In this paper, I extend the dictionary-learning-based denoising approach from 2D to 3D. To address the computational efficiency problem of K-SVD, I propose a fast dictionary learning approach based on the sequential generalized K-means (SGK) algorithm for denoising multidimensional seismic data. The SGK algorithm updates each dictionary atom by taking an arithmetic average of several training signals instead of computing an SVD as in the K-SVD algorithm. I summarize the sparse dictionary learning algorithm using K-SVD, and introduce the SGK algorithm together with its detailed mathematical implications. 3D synthetic, 2D field, and 3D field data examples are used to demonstrate the performance of both the K-SVD and SGK algorithms. The examples show that the SGK algorithm significantly increases the computational efficiency while only slightly degrading the denoising performance.
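The dictionary-update shortcut can be sketched in a few lines. Assuming a one-sparse (K-means-like) coding stage for simplicity, the SGK-style update replaces K-SVD's rank-1 SVD of the residual with a plain average of the training signals assigned to each atom (all names here are illustrative, not the paper's code):

```python
import numpy as np

def sgk_atom_update(Y, assign, K):
    """Update each dictionary atom as the normalized mean of the
    training signals currently assigned to it -- the SGK shortcut
    that replaces the rank-1 SVD used in K-SVD.  One-sparse coding
    is assumed; 'assign' maps each column of Y to an atom index."""
    n = Y.shape[0]
    D = np.zeros((n, K))
    for k in range(K):
        members = Y[:, assign == k]
        if members.size == 0:
            # re-seed an unused atom with a random training signal
            members = Y[:, [np.random.randint(Y.shape[1])]]
        atom = members.mean(axis=1)
        D[:, k] = atom / (np.linalg.norm(atom) + 1e-12)
    return D
```

An average over the assigned signals costs O(n) per member, whereas each K-SVD atom update requires an SVD of the residual matrix, which is where the claimed speedup comes from.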
A new paper is added to the collection of reproducible documents: Plane-wave orthogonal polynomial transform for amplitude-preserving noise attenuation
Amplitude-preserving data processing is an important and challenging topic in many scientific fields. The amplitude-variation details in seismic data are especially important because the amplitude variation is directly related to the subsurface wave impedance and fluid characteristics. We propose a novel seismic noise attenuation approach based on the local plane-wave assumption for seismic events and the amplitude-preserving capability of the orthogonal polynomial transform (OPT). The OPT represents spatially correlative seismic data as a superposition of polynomial basis functions, whereby random noise is distinguished from useful energy by the high-order orthogonal polynomial coefficients. Seismic energy is most correlated along the structural direction, and thus the OPT is optimally performed on a flattened gather. We introduce in detail the flattening operator for creating the flattened dimension, in which the OPT can subsequently be applied. The flattening operator is created by deriving a plane-wave trace continuation relation from the plane-wave equation. We demonstrate that both plane-wave trace continuation and the OPT can well preserve the strong amplitude variations in seismic data. To obtain robust slope estimation in the presence of noise, a robust slope estimation approach is introduced in place of the traditional method. A group of synthetic, pre-stack and post-stack field seismic data examples is used to demonstrate the potential of the proposed framework in realistic applications.
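To make the OPT step concrete: on an already-flattened gather, each time slice across traces is projected onto a few low-order orthogonal polynomials, and the residual is treated as random noise. A minimal sketch using Legendre polynomials (the choice of basis and the `order` cut are assumptions for illustration; the paper's polynomial basis may differ):

```python
import numpy as np
from numpy.polynomial import legendre

def opt_denoise(gather, order):
    """Fit low-order Legendre (orthogonal) polynomials across traces
    at each time sample of an already-flattened gather; the residual
    beyond 'order' is discarded as random noise."""
    nt, nx = gather.shape
    x = np.linspace(-1.0, 1.0, nx)       # normalized trace coordinate
    out = np.empty_like(gather)
    for it in range(nt):
        c = legendre.legfit(x, gather[it], order)
        out[it] = legendre.legval(x, c)  # keep only low-order energy
    return out
```

Because the fit is a linear projection, smooth lateral amplitude variation within the kept orders passes through unchanged, which is the amplitude-preserving property the abstract emphasizes.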
A new paper is added to the collection of reproducible documents: Probing the subsurface karst features using time-frequency decomposition
High-resolution mapping of karst features is of great importance to hydrocarbon discovery and recovery in resource exploration. However, few effective methods are currently tailored to this task. 3D seismic data can reveal the existence of karsts to some extent but cannot provide a precise characterization. I propose an effective framework for accurately probing subsurface karst features using a well-developed time-frequency decomposition algorithm. More specifically, I introduce a frequency-interval analysis approach for obtaining the best karst-detection result using an optimal frequency interval. A high-resolution time-frequency transform is preferred in the proposed framework to capture the inherent frequency components hidden behind the amplitude map. Although a single frequency slice cannot provide a reliable karst depiction, summation over the selected frequency interval yields a high-resolution and high-fidelity delineation of subsurface karsts. I use a publicly available 3D field seismic dataset as an example to show the performance of the proposed method.
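The frequency-interval summation can be illustrated with any time-frequency transform. In this sketch the short-time Fourier transform from SciPy stands in for the paper's preferred high-resolution transform, and the interval bounds are hypothetical:

```python
import numpy as np
from scipy.signal import stft

def band_energy(trace, fs, fmin, fmax, nperseg=64):
    """Sum time-frequency amplitudes over the interval [fmin, fmax],
    analogous to summing the selected frequency slices instead of
    relying on any single slice."""
    f, t, Z = stft(trace, fs=fs, nperseg=nperseg)
    band = (f >= fmin) & (f <= fmax)
    return t, np.abs(Z[band, :]).sum(axis=0)
```

Applied trace by trace over a 3D volume, the summed band amplitude forms the karst-delineation map described in the abstract; choosing `fmin`/`fmax` corresponds to the frequency-interval analysis step.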
A new paper is added to the collection of reproducible documents: Application of spectral decomposition using regularized non-stationary autoregression to random noise attenuation
We propose an application of spectral decomposition using regularized non-stationary autoregression (SDRNAR) to random noise attenuation. SDRNAR is a recently proposed signal-analysis method that decomposes a seismic signal into several spectral components, each of which has a smoothly variable frequency and smoothly variable amplitude. In the proposed denoising approach, random noise is deemed to be the residual part of the decomposed spectral components because it is unpredictable. One unique property of this approach is that the amplitude maps for different frequency components are also obtained during the denoising process, which can be valuable for some interpretation tasks. Compared with spectral decomposition based on empirical mode decomposition (EMD), SDRNAR has higher efficiency and better decomposition performance. Compared with $f$-$x$ deconvolution and mean filtering, the proposed approach obtains a higher signal-to-noise ratio (SNR) and preserves more useful energy. The proposed approach can only be applied to seismic profiles with relatively flat events, which is its main limitation; however, because it is applied trace by trace, it preserves spatial discontinuities. We use both synthetic and field data examples to demonstrate the performance of the proposed method.
A new paper is added to the collection of reproducible documents: An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction
Simultaneous seismic data denoising and reconstruction is currently a popular research subject in modern reflection seismology. The traditional rank-reduction-based 3D seismic data denoising and reconstruction algorithm leaves strong residual noise in the reconstructed data and thus affects subsequent processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach yields nearly perfect reconstruction performance even in the case of low signal-to-noise ratio (SNR). The proposed algorithm is tested on one synthetic example and one field data example. Considering that seismic data interpolation and denoising source packages are seldom in the public domain, we also provide a program template for the rank-reduction-based simultaneous denoising and reconstruction algorithm in the form of an open-source Matlab package.
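The conventional TSVD step that the paper modifies can be written in a few lines. This sketch shows only the traditional truncation; the paper's improvement, a modified reweighting of the kept singular values, is not reproduced here:

```python
import numpy as np

def tsvd_denoise(D, rank):
    """Traditional truncated SVD: keep the largest 'rank' singular
    values and zero the rest, giving the best rank-'rank'
    approximation of the (Hankel/block) data matrix D."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    s[rank:] = 0.0            # truncate the noise subspace
    return (U * s) @ Vt       # equivalent to U @ diag(s) @ Vt
```

In the full algorithm this truncation is applied to a Hankel matrix built from constant-frequency slices of the data; the residual noise that survives truncation is exactly what the paper's modified formula targets.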
A new paper is added to the collection of reproducible documents: Structure-oriented singular value decomposition for random noise attenuation of seismic data
Singular value decomposition (SVD) can be used both globally and locally to remove random noise and improve the signal-to-noise ratio (SNR) of seismic data. However, these approaches can only be applied to seismic data with simple structure, in which there is only one dip component in each processing window. We introduce a novel denoising approach that utilizes a structure-oriented SVD to enhance seismic reflections with continuous slopes. We create a third dimension for a 2D seismic profile by using the plane-wave prediction operator to predict each trace from its neighbour traces, and apply SVD along this dimension. Adding this dimension is equivalent to flattening the seismic reflections within a neighbouring window. The third dimension is then averaged out to return to the original dimensionality. We use two synthetic examples with different complexities and one field data example to demonstrate the performance of the proposed structure-oriented SVD. Compared with global SVD, local SVD, and $f-x$ deconvolution, the structure-oriented SVD obtains much clearer reflections and preserves more useful energy.
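Once the traces are aligned along the local slope, the core operation is a rank-1 SVD across the created dimension followed by averaging. A toy version operating on a single trace's slope-aligned neighbourhood (the plane-wave prediction step itself is omitted, and the function name is hypothetical):

```python
import numpy as np

def so_svd_trace(neigh):
    """Rank-1 SVD of a matrix whose columns are a trace and its
    plane-wave-predicted (slope-aligned) neighbours, followed by
    averaging over the added dimension to return a single trace."""
    U, s, Vt = np.linalg.svd(neigh, full_matrices=False)
    rank1 = s[0] * np.outer(U[:, 0], Vt[0])  # dominant flat component
    return rank1.mean(axis=1)                # collapse third dimension
```

Because flattening makes the aligned event approximately rank-1, the dominant singular component captures the reflection while the incoherent noise spreads over the discarded components.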
A new paper is added to the collection of reproducible documents: One-step slope estimation for dealiased seismic data reconstruction via iterative seislet thresholding
The seislet transform can be used to interpolate regularly under-sampled seismic data if an accurate local slope map can be obtained. The dealiasing capability of this method depends strongly on the accuracy of the estimated local slope, which can be achieved by using the low-frequency components of the aliased seismic data in an iterative manner. Previous approaches to this problem have been limited by unstable estimation of the local slope over a large number of iterations. Here, we propose a new way to obtain the slope estimate: we first estimate the NMO velocity and then use a velocity-slope transformation to get the optimal local slope. The new method avoids iterative slope estimation and obtains an accurate slope field in one step. The one-step slope estimation significantly accelerates the iterative seislet-domain thresholding process and also stabilizes the iterative inversion. Both synthetic and field data examples are used to demonstrate the performance of the proposed approach compared with alternative approaches.
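For a hyperbolic event, the velocity-slope transformation follows directly from differentiating the NMO traveltime $t(x)=\sqrt{t_0^2 + x^2/v^2}$, which gives $p = dt/dx = x/(v^2 t)$. A minimal sketch of that step (function and variable names are illustrative, and this assumes the standard hyperbolic moveout model):

```python
import numpy as np

def slope_from_velocity(t, x, v):
    """One-step local slope for hyperbolic moveout:
    t(x) = sqrt(t0**2 + x**2 / v**2)  =>  p = dt/dx = x / (v**2 * t).
    't' is the two-way time at offset 'x', and 'v' the NMO velocity
    picked at that time."""
    return x / (v ** 2 * np.maximum(t, 1e-12))
```

Evaluating this formula on a picked velocity field yields the whole slope map at once, which is what lets the method skip the iterative low-frequency slope refinement.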
A new paper is added to the collection of reproducible documents: Dip-separated structural filtering using seislet transform and adaptive empirical mode decomposition based dip filter
The seislet transform has been demonstrated to have better compression performance for seismic data than other well-known sparsity-promoting transforms, so it can be used to remove random noise by simply applying a thresholding operator in the seislet domain. Since the seislet transform compresses seismic data along local structures, seislet thresholding can be viewed as a simple structural filtering approach. Because of its dependence on precise local slope estimation, the seislet transform usually suffers from a low compression ratio and high reconstruction error for seismic profiles that have dip conflicts. To remove this limitation of seislet thresholding in dealing with conflicting-dip data, I propose a dip-separated filtering strategy. In this method, I first use an adaptive empirical mode decomposition based dip filter to separate the seismic data into several dip bands (5 or 6). Next, I apply seislet thresholding to each separated dip component to remove random noise. Then I combine all the denoised components to form the final denoised data. Compared with other dip filters, the empirical mode decomposition based dip filter is data-adaptive. Both complicated synthetic and field data examples show the superior performance of the proposed approach over traditional alternatives. The dip-separated structural filtering is not limited to seislet thresholding, and can be extended to all methods that require slope information.
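The dip-separated workflow (split into dip bands, denoise each band, recombine) can be made concrete with a simple stand-in for the dip filter. Note: a plain $f$-$k$ fan split is used below in place of the paper's adaptive EMD-based dip filter, purely to illustrate the pipeline; the function name and band count are hypothetical:

```python
import numpy as np

def dip_separate(data, nbands):
    """Split a 2D (time x space) section into dip bands in the f-k
    domain.  The bands partition the spectrum exactly, so summing
    them reconstructs the input."""
    F = np.fft.fft2(data)
    f = np.fft.fftfreq(data.shape[0])[:, None]
    k = np.fft.fftfreq(data.shape[1])[None, :]
    # apparent dip k/f (zero on the f = 0 row to avoid division by 0)
    slope = np.divide(k, f, out=np.zeros(F.shape), where=np.abs(f) > 1e-12)
    edges = np.linspace(slope.min(), slope.max(), nbands + 1)[1:-1]
    idx = np.digitize(slope, edges)      # band index 0 .. nbands-1
    return [np.fft.ifft2(F * (idx == i)).real for i in range(nbands)]
```

In the full scheme each returned band would be thresholded (e.g., in the seislet domain with a single slope per band) and the denoised bands summed to form the result.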
A new paper is added to the collection of reproducible documents: Irregular seismic data reconstruction using a percentile-half-thresholding algorithm
In this paper, a percentile-half-thresholding approach is proposed for the transform-domain thresholding step of iterative shrinkage thresholding (IST). The percentile-thresholding strategy is more convenient to implement than constant-value, linear-decreasing, or exponential-decreasing thresholding because it is data-driven. The novel half-thresholding strategy is inspired by recent advances in optimization with non-convex regularization. We summarize a general thresholding framework for IST and show that the only difference between half thresholding and the conventional soft or hard thresholding lies in the thresholding operator. Thus it is straightforward to insert the existing percentile-thresholding strategy into the iterative half-thresholding framework. We use both synthetic and field data examples to compare the performance of soft thresholding and half thresholding with a constant threshold or a percentile threshold. Synthetic and field data show consistent results: apart from the convenience of threshold setting, percentile thresholding can also improve the recovery performance. Compared with soft thresholding, half thresholding tends to produce a more accurate reconstruction.
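The operators involved can be written compactly. The soft-thresholding and percentile parts below follow the standard definitions; the half-thresholding operator is a sketch of the closed-form expression of Xu et al. for $\ell_{1/2}$ regularization, one common instance of the non-convex penalties this line of work builds on (assumed here, not taken from the paper's code):

```python
import numpy as np

def soft_thresh(x, lam):
    """Soft thresholding: shrink magnitudes toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def half_thresh(x, lam):
    """Half-thresholding operator (Xu et al.) for the L1/2 penalty:
    zero below tau = (54**(1/3)/4) * lam**(2/3), a cosine-based
    shrinkage above it that approaches the identity for large |x|."""
    x = np.asarray(x, dtype=float)
    tau = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    out = np.zeros_like(x)
    big = np.abs(x) > tau
    phi = np.arccos((lam / 8.0) * (np.abs(x[big]) / 3.0) ** -1.5)
    out[big] = (2.0 / 3.0) * x[big] * (
        1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def percentile_threshold(coeffs, p):
    """Data-driven threshold: the p-th percentile of |coeffs|, so
    roughly the largest (100 - p)% of coefficients survive."""
    return np.percentile(np.abs(coeffs), p)
```

Swapping `soft_thresh` for `half_thresh` inside an IST loop, with `lam` set per iteration by `percentile_threshold`, is exactly the plug-in substitution the abstract describes.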