Documentation

Time-frequency analysis of seismic data using non-stationary Prony method

July 18, 2020 Documentation

A new paper is added to the collection of reproducible documents: Data-driven time-frequency analysis of seismic data using non-stationary Prony method

The empirical mode decomposition aims to decompose the input signal into a small number of components, named intrinsic mode functions, with slowly varying amplitudes and frequencies. In spite of its simplicity and usefulness, however, the empirical mode decomposition lacks a solid mathematical foundation. In this paper, we describe a method to extract the intrinsic mode functions of the input signal using the non-stationary Prony method. The proposed method captures the philosophy of the empirical mode decomposition but uses a different method to compute the intrinsic mode functions. Once the intrinsic mode functions are obtained, we compute the spectrum of the input signal using the Hilbert transform. Synthetic and field data examples validate that the proposed method can correctly compute the spectrum of the input signal and could be used in seismic data analysis to facilitate interpretation.
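
For readers who want to experiment with the Hilbert-transform step, here is a minimal sketch that estimates the instantaneous amplitude and frequency of a single intrinsic mode function; the toy chirp stands in for an IMF extracted by the non-stationary Prony method, which is not reproduced here.

```python
import numpy as np
from scipy.signal import hilbert

dt = 0.004                                      # sampling interval (s)
t = np.arange(0, 2.0, dt)
imf = np.exp(-t) * np.cos(2 * np.pi * (10 * t + 5 * t ** 2))  # toy IMF: a decaying chirp

analytic = hilbert(imf)                         # analytic signal s + i*H{s}
amplitude = np.abs(analytic)                    # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))           # unwrapped instantaneous phase
inst_freq = np.diff(phase) / (2 * np.pi * dt)   # instantaneous frequency (Hz)
```

Plotting `amplitude` against `inst_freq` over time gives the Hilbert spectrum of that single component; summing such contributions over all IMFs gives the time-frequency map of the input signal.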

Enhancing seismic reflections using empirical mode decomposition

July 18, 2020 Documentation

A new paper is added to the collection of reproducible documents: Enhancing seismic reflections using empirical mode decomposition in the flattened domain

For various reasons, seismic reflections are often not continuous even where no faults or other discontinuities exist. We propose a novel approach for enhancing the amplitude of seismic reflections and making them continuous. We use a plane-wave flattening technique to provide horizontal events for the subsequent empirical mode decomposition (EMD) based smoothing in the flattened domain, and the inverse plane-wave flattening restores the original curved events. The plane-wave flattening process requires a precise local slope estimation, which is provided by the plane-wave destruction (PWD) algorithm. The EMD-based smoothing filter is non-parametric and adaptive, and thus can be conveniently applied. Both pre-stack and post-stack field data examples show a tremendous improvement in data quality, which makes subsequent interpretation easier and more reliable.
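
The EMD-based smoothing step can be sketched as follows, assuming the events have already been flattened (the plane-wave flattening and unflattening operators are not reproduced). The sketch relies on the third-party PyEMD package and follows the common convention of discarding the first, most oscillatory IMF of each spatial slice; the paper's exact filtering rule may differ.

```python
import numpy as np
from PyEMD import EMD  # third-party: pip install EMD-signal

def emd_smooth(flat_gather):
    """Lateral EMD smoothing of a flattened gather (nt x nx)."""
    out = np.empty_like(flat_gather)
    emd = EMD()
    for it in range(flat_gather.shape[0]):
        imfs = emd.emd(flat_gather[it, :])       # IMFs along the spatial axis
        # Discard the first IMF, which carries the most rapidly
        # oscillating (least laterally coherent) energy.
        out[it, :] = imfs[1:].sum(axis=0) if imfs.shape[0] > 1 else imfs.sum(axis=0)
    return out
```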

Application of principal component analysis in weighted stacking of seismic data

July 18, 2020 Documentation

A new paper is added to the collection of reproducible documents: Application of principal component analysis in weighted stacking of seismic data

Optimal stacking of multiple datasets plays a significant role in many scientific domains. The quality of stacking affects the signal-to-noise ratio (SNR) and amplitude fidelity of the stacked image. In seismic data processing, similarity-weighted stacking uses the local similarity between each trace and a reference trace as the weight for stacking the flattened prestack seismic data after normal-moveout (NMO) correction. The traditional reference trace is an approximate zero-offset trace calculated as a direct arithmetic mean of the data matrix along the spatial direction. However, when the data matrix contains abnormal misaligned traces or erratic, non-Gaussian random noise, the accuracy of the approximate zero-offset trace is greatly affected, which in turn degrades the quality of stacking. We propose a novel weighted stacking method based on principal component analysis (PCA). The principal components of the data matrix, namely the useful signals, are extracted by solving an optimization problem with a low-rank constraint, via a standard singular value decomposition (SVD) algorithm. The low-rank decomposition of the data matrix alleviates the influence of abnormal traces and erratic, non-Gaussian random noise, and is thus more robust than traditional alternatives. We use both synthetic and field data examples to show the successful performance of the proposed approach.
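
A minimal sketch of the core idea, under the simplifying assumption that a global trace correlation can stand in for the paper's local similarity, might look like this: a rank-r SVD approximation of the NMO-corrected gather supplies the cleaner reference trace, and each trace's correlation with that reference becomes its stacking weight.

```python
import numpy as np

def pca_weighted_stack(gather, rank=1, eps=1e-10):
    """gather: nt x nx NMO-corrected CMP data; returns one stacked trace."""
    u, s, vt = np.linalg.svd(gather, full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]  # principal components
    reference = low_rank.mean(axis=1)                   # cleaner zero-offset estimate
    weights = np.array([
        np.dot(gather[:, ix], reference)
        / (np.linalg.norm(gather[:, ix]) * np.linalg.norm(reference) + eps)
        for ix in range(gather.shape[1])
    ])
    weights = np.clip(weights, 0.0, None)               # drop anti-correlated traces
    return gather @ weights / (weights.sum() + eps)
```

The low-rank step is what makes the reference robust: an erratic trace perturbs an arithmetic mean directly, but contributes little to the leading singular vectors.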

Velocity analysis of simultaneous-source data

April 11, 2020 Documentation

A new paper is added to the collection of reproducible documents: Velocity analysis of simultaneous-source data using high-resolution semblance – coping with the strong noise

Direct imaging of simultaneous-source (or blended) data, without the need for deblending, requires a precise subsurface velocity model. In this paper, we focus on the velocity analysis of simultaneous-source data using the NMO-based velocity-picking approach. We demonstrate that it is possible to obtain a precise velocity model directly from the blended data in the common-midpoint (CMP) domain. The similarity-weighted semblance helps obtain a much better velocity spectrum, with higher resolution and higher reliability, than the traditional semblance. Because the similarity-weighted semblance enforces an inherent noise attenuation solely in the semblance calculation stage, it is not sensitive to intense interference. We use both simulated synthetic and field data examples to demonstrate the performance of the similarity-weighted semblance in obtaining a reliable subsurface velocity model for direct migration of simultaneous-source data. The migrated image of blended field data using the prestack Kirchhoff time migration (PSKTM) approach, based on the velocity picked from the similarity-weighted semblance, is very close to the migrated image of the unblended data.
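
As a rough sketch of the semblance calculation for one trial velocity, the function below computes conventional semblance on an NMO-corrected gather and optionally scales each trace by a supplied weight; the paper's exact similarity-weighted formula may differ, and a full velocity scan would loop this over trial velocities.

```python
import numpy as np

def semblance(gather, weights=None, win=11, eps=1e-10):
    """gather: nt x nx NMO-corrected gather for one trial velocity."""
    d = gather if weights is None else gather * weights[np.newaxis, :]
    num = d.sum(axis=1) ** 2                        # (sum over offsets)^2
    den = gather.shape[1] * (d ** 2).sum(axis=1)    # N * sum of squares
    kernel = np.ones(win) / win                     # temporal smoothing window
    return np.convolve(num, kernel, 'same') / (np.convolve(den, kernel, 'same') + eps)
```

Traces from interfering sources are incoherent after NMO with the correct velocity, so down-weighting traces that are dissimilar to a reference suppresses the blending interference in the spectrum itself.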

Compressive sensing for seismic data reconstruction

April 11, 2020 Documentation

A new paper is added to the collection of reproducible documents: Compressive sensing for seismic data reconstruction via fast projection onto convex sets based on seislet transform

Following the compressive sensing (CS) theory in the signal-processing field, we propose a new CS approach based on a fast projection onto convex sets (FPOCS) algorithm with a sparsity constraint in the seislet transform domain. The seislet transform appears to be the sparsest among the state-of-the-art sparse transforms. FPOCS converges much faster than conventional POCS (about two thirds of the conventional iterations can be saved) while maintaining the same recovery performance. FPOCS obtains faster and better performance than the fast iterative shrinkage-thresholding algorithm (FISTA) for relatively clean data, but slower and worse performance for noisier data, which provides a practical criterion for deciding which algorithm to use according to the noise level in the seismic data. The seislet-transform-based CS approach achieves obviously better data recovery than $f$-$k$ transform based scenarios, in terms of signal-to-noise ratio (SNR), local similarity comparison, and visual observation, because of the much sparser structure in the seislet transform domain. We use both synthetic and field data examples to demonstrate the superior performance of the proposed seislet-based FPOCS approach.
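
A minimal sketch of the fast POCS iteration is given below, using a 2-D Fourier ($f$-$k$) transform as a stand-in sparsifier since the seislet transform is not reproduced here; `mask` is 1 at observed traces and 0 at missing ones, and the FISTA-style extrapolation supplies the "fast" part of FPOCS.

```python
import numpy as np

def fpocs(observed, mask, niter=50, perc=99):
    """observed: zero-filled data; mask: 1 at observed traces, 0 at gaps."""
    x = x_prev = observed.copy()
    t = 1.0
    for _ in range(niter):
        coeffs = np.fft.fft2(x)                          # forward f-k transform
        thresh = np.percentile(np.abs(coeffs), perc)     # keep strongest coefficients
        coeffs[np.abs(coeffs) < thresh] = 0.0
        x_new = np.real(np.fft.ifft2(coeffs))
        x_new = observed * mask + x_new * (1 - mask)     # re-insert known traces
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2        # FISTA-style step size
        x = x_new + ((t - 1) / t_new) * (x_new - x_prev) # extrapolation = "fast"
        x_prev, t = x_new, t_new
    return x_prev
```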

CUDA package for Q-compensated RTM

April 3, 2020 Documentation

A new paper is added to the collection of reproducible documents: CuQ-RTM: A CUDA-based code package for stable and efficient Q-compensated reverse time migration

Reverse time migration (RTM) in attenuating media should take the absorption and dispersion effects into consideration. The recently proposed viscoacoustic wave equation with decoupled fractional Laplacians (DFLs) facilitates separate amplitude compensation and phase correction in $Q$-compensated RTM ($Q$-RTM). However, the intensive computation and enormous storage requirements of $Q$-RTM prevent it from being extended to practical applications, especially for large-scale 2D or 3D cases. The emerging graphics processing unit (GPU) computing technology, built around a scalable array of multithreaded streaming multiprocessors (SMs), presents an opportunity to greatly accelerate $Q$-RTM by appropriately exploiting the GPU’s architectural characteristics. We present cu$Q$-RTM, a CUDA-based code package that implements $Q$-RTM based on a set of stable and efficient strategies, such as streamed CUFFT, checkpointing-assisted time-reversal reconstruction (CATRC), and adaptive stabilization. cu$Q$-RTM can run in a multi-level parallelism (MLP) fashion, either synchronously or asynchronously, to take advantage of all the CPUs and GPUs available, while maintaining impressively good stability and flexibility. We mainly outline the architecture of the cu$Q$-RTM code package and some program optimization schemes. The speedup ratio on a single GeForce GTX 760 GPU card relative to a single core of an Intel Core i5-4460 CPU can reach above 80 in large-scale simulations. The strong scaling property of multi-GPU parallelism is demonstrated by performing $Q$-RTM on a Marmousi model with one to six GPUs involved. Finally, we further verify the feasibility and efficiency of cu$Q$-RTM on a field data set. The “living” package is available from GitHub at https://github.com/Geophysics-OpenSource/cuQRTM, and peer-reviewed code related to this article can be found at http://software.seg.org/2019/0001.
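
Although the package itself is CUDA code, the fractional-Laplacian operator at the heart of the DFL equation is easy to sketch in NumPy: $(-\nabla^2)^\beta$ is diagonal in the wavenumber domain, which is also why the implementation leans on streamed CUFFT. The grid spacing and exponent below are illustrative only.

```python
import numpy as np

def fractional_laplacian(p, dx, dz, beta):
    """Apply (-nabla^2)^beta to a 2-D wavefield snapshot p via FFT."""
    kx = 2 * np.pi * np.fft.fftfreq(p.shape[0], d=dx)
    kz = 2 * np.pi * np.fft.fftfreq(p.shape[1], d=dz)
    k2 = kx[:, None] ** 2 + kz[None, :] ** 2        # |k|^2 on the grid
    return np.real(np.fft.ifft2(k2 ** beta * np.fft.fft2(p)))
```

Evaluating this operator at every time step for every snapshot is the FFT-heavy workload that makes GPU acceleration pay off.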

Fast dictionary learning

April 3, 2020 Documentation

A new paper is added to the collection of reproducible documents: Fast dictionary learning for noise attenuation of multidimensional seismic data

The K-SVD algorithm has been successfully used for adaptively learning a sparse dictionary in 2D seismic denoising. Because of the high computational cost of the many SVDs in the K-SVD algorithm, it is not applicable in practical situations, especially in 3D or 5D problems. In this paper, I extend the dictionary-learning-based denoising approach from 2D to 3D. To address the computational-efficiency problem of K-SVD, I propose a fast dictionary learning approach based on the sequential generalized K-means (SGK) algorithm for denoising multidimensional seismic data. The SGK algorithm updates each dictionary atom by taking an arithmetic average of several training signals instead of calculating an SVD as in the K-SVD algorithm. I summarize the sparse dictionary learning algorithm using K-SVD and introduce the SGK algorithm together with its detailed mathematical implications. 3D synthetic, 2D field, and 3D field data examples are used to demonstrate the performance of both the K-SVD and SGK algorithms. It is shown that the SGK algorithm can significantly increase the computational efficiency while only slightly degrading the denoising performance.
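
The contrast between the two update rules is easy to sketch. In the SGK-style update below, each atom becomes the normalized arithmetic average of the training patches currently assigned to it, with no SVD involved; the sparse-coding stage (e.g., OMP) is assumed to have been run elsewhere, and details may differ from the paper's implementation.

```python
import numpy as np

def sgk_update(D, X, assignments):
    """D: n x K dictionary; X: n x N training patches;
    assignments[i] = index of the atom mainly used by patch i."""
    for k in range(D.shape[1]):
        members = np.where(assignments == k)[0]
        if members.size == 0:
            continue                              # leave unused atoms untouched
        atom = X[:, members].mean(axis=1)         # arithmetic average, no SVD
        norm = np.linalg.norm(atom)
        if norm > 0:
            D[:, k] = atom / norm                 # renormalize the atom
    return D
```

Replacing one SVD per atom with one average per atom is what turns the per-iteration cost from prohibitive to practical for 3D and 5D patch volumes.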

Plane-wave orthogonal polynomial transform

March 27, 2020 Documentation

A new paper is added to the collection of reproducible documents: Plane-wave orthogonal polynomial transform for amplitude-preserving noise attenuation

Amplitude-preserving data processing is an important and challenging topic in many scientific fields. The amplitude-variation details in seismic data are especially important because the amplitude variation is directly related to the subsurface wave impedance and fluid characteristics. We propose a novel seismic noise attenuation approach based on the local plane-wave assumption for seismic events and the amplitude-preserving capability of the orthogonal polynomial transform (OPT). The OPT is a way of representing spatially correlative seismic data as a superposition of polynomial basis functions, whereby random noise is distinguished from the useful energy by the high-order orthogonal polynomial coefficients. The seismic energy is most correlative along the structural direction, and thus the OPT is best performed on a flattened gather. We introduce in detail the flattening operator for creating the flattened dimension, where the OPT can subsequently be applied. The flattening operator is created by deriving a plane-wave trace continuation relation from the plane-wave equation. We demonstrate that both plane-wave trace continuation and the OPT can well preserve the strong amplitude variations in seismic data. In order to obtain robust slope estimation in the presence of noise, a robust slope estimation approach is introduced to replace the traditional method. A group of synthetic, pre-stack, and post-stack field seismic data examples is used to demonstrate the potential of the proposed framework in realistic applications.
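
Here is a minimal sketch of the OPT idea on an already-flattened gather, using Legendre polynomials as the orthogonal basis (the paper's exact basis and coefficient-rejection rule are not reproduced): laterally smooth energy is captured by the low-order coefficients, and whatever the low-order fit cannot represent is treated as noise.

```python
import numpy as np
from numpy.polynomial import legendre

def opt_denoise(flat_gather, degree=4):
    """flat_gather: nt x nx gather with flattened events."""
    nx = flat_gather.shape[1]
    x = np.linspace(-1.0, 1.0, nx)                       # normalized trace coordinate
    # Fit low-order polynomials along the spatial axis, all time slices at once
    coeffs = legendre.legfit(x, flat_gather.T, degree)   # shape (degree+1, nt)
    return legendre.legval(x, coeffs)                    # reconstruction, nt x nx
```

Because the fit is a linear projection rather than a hard amplitude clip, lateral amplitude variation along an event survives the denoising, which is the amplitude-preserving property the abstract emphasizes.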

Time-frequency decomposition

March 18, 2020 Documentation

A new paper is added to the collection of reproducible documents: Probing the subsurface karst features using time-frequency decomposition

High-resolution mapping of karst features is of great importance to hydrocarbon discovery and recovery in the resource exploration field. Currently, however, there are few effective methods specifically tailored to this task. 3D seismic data can reveal the existence of karsts to some extent but cannot provide a precise characterization. I propose an effective framework for accurately probing subsurface karst features using a well-developed time-frequency decomposition algorithm. More specifically, I introduce a frequency-interval analysis approach for obtaining the best karst detection result over an optimal frequency interval. A high-resolution time-frequency transform is preferred in the proposed framework to capture the inherent frequency components hidden behind the amplitude map. Although a single frequency slice cannot provide a reliable karst depiction, the summation over the selected frequency interval yields a high-resolution, high-fidelity delineation of subsurface karsts. I use a publicly available 3D field seismic dataset as an example to show the performance of the proposed method.
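
A rough sketch of the frequency-interval analysis for a single trace follows, with scipy's STFT standing in for the higher-resolution time-frequency transform preferred in the paper; the 30-45 Hz band is purely illustrative.

```python
import numpy as np
from scipy.signal import stft

def band_amplitude(trace, fs, f_lo=30.0, f_hi=45.0):
    """Summed time-frequency amplitude of one trace over [f_lo, f_hi] Hz."""
    freqs, times, tf = stft(trace, fs=fs, nperseg=64)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return times, np.abs(tf[band, :]).sum(axis=0)   # amplitude along time
```

Applying this to every trace in a 3D volume and extracting the result along the target horizon produces the band-summed amplitude map on which the karsts stand out.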

Spectral decomposition using regularized non-stationary autoregression

March 11, 2020 Documentation

A new paper is added to the collection of reproducible documents: Application of spectral decomposition using regularized non-stationary autoregression to random noise attenuation

We propose an application of spectral decomposition using regularized non-stationary autoregression (SDRNAR) to random noise attenuation. SDRNAR is a recently proposed signal-analysis method that aims at decomposing a seismic signal into several spectral components, each of which has a smoothly variable frequency and smoothly variable amplitude. In the proposed denoising approach, random noise is deemed to be the residual left over after the decomposed spectral components are removed, because it is unpredictable. One unique property of this denoising approach is that the amplitude maps for different frequency components are also obtained during the denoising process, which can be valuable for some interpretation tasks. Compared with spectral decomposition by empirical mode decomposition (EMD), SDRNAR has higher efficiency and better decomposition performance. Compared with $f$-$x$ deconvolution and mean filtering, the proposed denoising approach obtains a higher signal-to-noise ratio (SNR) and preserves more useful energy. The main limitation of the proposed approach is that it can only be applied to seismic profiles with relatively flat events. However, because it is applied trace by trace, it preserves spatial discontinuities. We use both synthetic and field data examples to demonstrate the performance of the proposed method.
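
The "noise is the unpredictable residual" idea can be sketched with a plain stationary autoregression per trace; SDRNAR's regularized non-stationary coefficients and per-component amplitude maps are not reproduced here, so this is only the simplest possible analogue.

```python
import numpy as np

def ar_denoise(trace, order=10):
    """Split one trace into its AR-predictable part and the residual."""
    n = trace.size
    # Lagged data matrix: row j holds the `order` samples preceding sample j+order
    A = np.column_stack([trace[i:n - order + i] for i in range(order)])
    b = trace[order:]                                # samples to predict
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares AR coefficients
    predicted = np.concatenate([trace[:order], A @ coef])
    return predicted, trace - predicted              # signal estimate, residual noise
```

Because each trace is processed independently, nothing is smeared across traces, which is why the method preserves spatial discontinuities despite requiring relatively flat events.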