Five-dimensional seismic data reconstruction using the optimally damped rank-reduction method

December 7, 2020 Documentation

A new paper is added to the collection of reproducible documents: Five-dimensional seismic data reconstruction using the optimally damped rank-reduction method


It is difficult to separate additive random noise from spatially coherent signal using a rank-reduction method based on the truncated singular value decomposition (TSVD) operation. This difficulty arises because the signal and noise subspaces remain mixed after the TSVD operation. The drawback can be partially overcome using a damped rank-reduction method, in which the singular values corresponding to effective signals are adjusted via a carefully designed damping operator. The damping operator works best when both the rank and the damping factor are small. However, for complicated seismic data, e.g., multi-channel reflection seismic data containing highly curved events, the rank must be large enough to preserve the details in the data, which makes the damped rank-reduction method less effective. In this paper, we develop an optimal damping strategy for adjusting the singular values when a large rank parameter is selected, so that the estimated signal best approximates the exact signal. We first weight the singular values using optimally calculated weights. The weights are derived theoretically by solving an optimization problem that minimizes the Frobenius-norm difference between the approximated and the exact signal components. The damping operator is then derived from the initial weighting operator to further reduce the residual noise after the optimal weighting. The resulting optimally damped rank-reduction method is nearly adaptive, i.e., insensitive to the rank parameter. We demonstrate the performance of the proposed method on a group of synthetic and real five-dimensional seismic data examples.
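
For intuition, the following minimal numpy sketch shows the general idea: truncate the SVD at a relatively large rank and then shrink the retained singular values with weights and a damping term estimated from the first discarded (noise-dominated) singular value. The specific weight and damping expressions below are illustrative stand-ins, not the formulas derived in the paper.

import numpy as np

def weighted_damped_rank_reduction(D, rank, damping=4):
    # Approximate the signal component of a noisy matrix D.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    K = min(rank, len(s) - 1)
    noise = s[K]                                              # first discarded singular value as a noise proxy
    w = np.maximum(1.0 - (noise / s[:K]) ** 2, 0.0)           # illustrative weights
    t = np.clip(1.0 - (noise / s[:K]) ** damping, 0.0, 1.0)   # illustrative damping term
    return (U[:, :K] * (s[:K] * w * t)) @ Vt[:K, :]

# Toy example: a rank-2 "signal" matrix plus random noise, denoised with a deliberately large rank.
rng = np.random.default_rng(0)
signal = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
noisy = signal + 0.5 * rng.standard_normal((60, 40))
denoised = weighted_damped_rank_reduction(noisy, rank=10)
print(np.linalg.norm(signal - noisy), np.linalg.norm(signal - denoised))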

Seismic signal enhancement based on the lowrank methods

December 6, 2020 Documentation

A new paper is added to the collection of reproducible documents: Seismic signal enhancement based on the lowrank methods


Based on the fact that the Hankel matrix constructed from noise-free seismic data is lowrank (LR), LR approximation (or rank-reduction) methods have been widely used for removing noise from seismic data. Due to the linear-event assumption of the traditional LR approximation method, it is difficult to define a rank that optimally separates the data space into signal and noise subspaces. To preserve the most useful signal energy, a relatively large rank threshold is often chosen, which inevitably leaves residual noise. To reduce the energy of the residual noise, we propose an optimally damped rank-reduction method. The optimal damping is applied in two steps. In the first step, a set of optimal damping weights is derived. In the second step, we derive an optimal singular-value damping operator. We review several traditional lowrank methods and compare their performance with that of the new method. We also compare these lowrank methods with two sparsity-promoting transform methods. Examples demonstrate that the proposed optimally damped rank-reduction method can produce significantly cleaner denoised images than the state-of-the-art methods.
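
The sketch below illustrates the Hankel-matrix (Cadzow-type) workflow that these low-rank methods build on, using a single 1D frequency slice and a plain TSVD; the methods compared in the paper operate on (block) Hankel matrices of multidimensional spatial slices and replace the plain truncation with weighted or damped versions. The helper names are mine, not from the paper.

import numpy as np

def hankelize(x):
    # Embed a 1D (complex) signal into a Hankel matrix.
    n = len(x)
    L = n // 2 + 1
    return np.array([x[i:i + n - L + 1] for i in range(L)])

def dehankelize(H):
    # Recover a 1D signal by averaging the anti-diagonals.
    L, M = H.shape
    out = np.zeros(L + M - 1, dtype=H.dtype)
    cnt = np.zeros(L + M - 1)
    for i in range(L):
        out[i:i + M] += H[i]
        cnt[i:i + M] += 1
    return out / cnt

def rank_reduce(H, rank):
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Toy example: two spatial plane waves (a rank-2 Hankel matrix) plus random noise.
rng = np.random.default_rng(5)
x = np.arange(64)
clean = np.exp(2j * np.pi * 0.05 * x) + 0.7 * np.exp(2j * np.pi * 0.12 * x)
noisy = clean + 0.3 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
denoised = dehankelize(rank_reduce(hankelize(noisy), rank=2))
print(np.linalg.norm(clean - noisy), np.linalg.norm(clean - denoised))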

Simultaneous denoising and reconstruction of 5D seismic data via damped rank-reduction method

December 6, 2020 Documentation

A new paper is added to the collection of reproducible documents: Simultaneous denoising and reconstruction of 5D seismic data via damped rank-reduction method


The Cadzow rank-reduction method can be effectively utilized for simultaneously denoising and reconstructing 5D seismic data that depend on four spatial dimensions. The classic version of the Cadzow rank-reduction method arranges the 4D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank reduction. When the observed data are extremely noisy, as is often the case for real seismic data, the traditional TSVD is not adequate for attenuating the noise and reconstructing the signal. With the traditional TSVD, the reconstructed data tend to contain a significant amount of residual noise, which can be explained by the fact that the reconstructed data space is a mixture of the signal and noise subspaces. In order to better decompose the block Hankel matrix into signal and noise components, we introduce a damping operator into the traditional TSVD formula, resulting in what we call the damped rank-reduction method. The damped rank-reduction method can achieve excellent reconstruction performance even when the observed data have an extremely low signal-to-noise ratio (SNR). The feasibility of the improved 5D seismic data reconstruction method is validated via both 5D synthetic and field data examples. We present a comprehensive analysis of the data examples and provide practical guidelines for better utilizing the proposed method. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in industry.
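
The following is a minimal numpy sketch of the simultaneous denoising/reconstruction loop that such a rank-reduction filter is typically plugged into: at each iteration the current estimate is rank-reduced and the observed samples are re-inserted with a decaying weight. The plain TSVD and the linear weighting schedule are illustrative stand-ins for the level-four block-Hankel damped rank reduction and the weighting used in the paper.

import numpy as np

def tsvd(D, rank):
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def reconstruct(d_obs, mask, rank, niter=30):
    d = d_obs.copy()
    for k in range(niter):
        low = tsvd(d, rank)                       # stand-in for the damped rank-reduction step
        a = 1.0 - k / (niter - 1)                 # illustrative weight that decays over iterations
        d = a * mask * d_obs + (1.0 - a * mask) * low
    return d

# Toy example: a rank-3 "signal" with about half of the samples missing and random noise added.
rng = np.random.default_rng(1)
signal = rng.standard_normal((80, 3)) @ rng.standard_normal((3, 50))
mask = (rng.random((80, 50)) > 0.5).astype(float)
d_obs = mask * (signal + 0.2 * rng.standard_normal((80, 50)))
d_rec = reconstruct(d_obs, mask, rank=3)
print(np.linalg.norm(signal - d_obs), np.linalg.norm(signal - d_rec))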

Separation and imaging of seismic diffractions

December 5, 2020 Documentation

A new paper is added to the collection of reproducible documents: Separation and imaging of seismic diffractions using a localized rank-reduction method with adaptively selected ranks



Seismic diffractions are weak seismic events hidden beneath the more dominant reflection events in a seismic profile. Separating diffraction energy from post-stack seismic profiles can help infer the subsurface discontinuities that generate the diffraction events. The separated seismic diffractions can be migrated with a traditional seismic imaging method or a specifically designed migration method to highlight the diffractors, i.e., to form the diffraction image. Traditional diffraction separation methods based on the plane-wave destruction (PWD) filter are limited by either inaccurate slope estimation or the underlying plane-wave assumption of the PWD filter, and thus cause reflection leakage into the separated diffraction profile. The leaked reflection energy deteriorates the resolution of the subsequent diffraction imaging result. Here, we propose a new diffraction separation method based on a localized rank-reduction method. The localized rank-reduction method assumes the reflection events to be locally low-rank, so that the diffraction energy can be separated by a rank-reduction operation. Compared with the global rank-reduction method, the localized rank-reduction method is more constrained in selecting the rank and is free of separation artifacts. We use a carefully designed synthetic example to demonstrate that the localized rank-reduction method can separate the diffraction energy from a post-stack seismic profile with both kinematically and dynamically accurate performance.
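
A minimal sketch of the localized idea, assuming non-overlapping patches and a fixed rank (the paper selects ranks adaptively): each patch of the post-stack section is approximated by a very low-rank matrix that models the locally planar reflections, and the diffractions are taken as the residual.

import numpy as np

def low_rank_patch(patch, rank):
    U, s, Vt = np.linalg.svd(patch, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def separate_diffractions(section, patch=(32, 32), rank=1):
    reflections = np.zeros_like(section)
    nt, nx = section.shape
    for i0 in range(0, nt, patch[0]):
        for j0 in range(0, nx, patch[1]):
            block = section[i0:i0 + patch[0], j0:j0 + patch[1]]
            reflections[i0:i0 + patch[0], j0:j0 + patch[1]] = low_rank_patch(block, rank)
    return reflections, section - reflections      # (reflections, diffractions)

# Toy example: a laterally invariant (locally rank-1) "reflection" section plus a single
# spike standing in for diffraction energy; the residual concentrates at the spike location.
section = np.outer(np.sin(0.2 * np.arange(96)), np.ones(96))
section[40, 48] += 1.0
reflections, diffractions = separate_diffractions(section)
print(np.unravel_index(np.abs(diffractions).argmax(), diffractions.shape))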

Time-frequency analysis of seismic data using non-stationary Prony method

July 18, 2020 Documentation

A new paper is added to the collection of reproducible documents: Data-driven time-frequency analysis of seismic data using non-stationary Prony method

The empirical mode decomposition aims to decompose the input signal into a small number of components, named intrinsic mode functions, with slowly varying amplitudes and frequencies. In spite of its simplicity and usefulness, however, the empirical mode decomposition lacks a solid mathematical foundation. In this paper, we describe a method to extract the intrinsic mode functions of the input signal using the non-stationary Prony method. The proposed method captures the philosophy of the empirical mode decomposition but uses a different method to compute the intrinsic mode functions. Having obtained the intrinsic mode functions, we then compute the time-frequency spectrum of the input signal using the Hilbert transform. Synthetic and field data examples validate that the proposed method can correctly compute the time-frequency spectrum of the input signal and can be used in seismic data analysis to facilitate interpretation.
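
Once the intrinsic mode functions are available (whether from EMD or from the non-stationary Prony method of the paper), the Hilbert-spectrum step can be sketched as follows with numpy and scipy; the binning scheme and parameter names are illustrative.

import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum(imfs, dt, nf=64, fmax=None):
    # Accumulate the instantaneous amplitude of each mode into a (frequency, time) panel.
    nt = imfs.shape[1]
    fmax = 0.5 / dt if fmax is None else fmax
    tf = np.zeros((nf, nt))
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        freq = np.abs(np.gradient(np.unwrap(np.angle(analytic)), dt)) / (2 * np.pi)
        idx = np.clip((freq / fmax * (nf - 1)).astype(int), 0, nf - 1)
        tf[idx, np.arange(nt)] += amp
    return tf

# Toy example: two "modes" with different constant frequencies.
dt = 0.004
t = np.arange(0, 2, dt)
imfs = np.vstack([np.sin(2 * np.pi * 10 * t), 0.5 * np.sin(2 * np.pi * 40 * t)])
tf = hilbert_spectrum(imfs, dt)
print(tf.shape, tf[:, 250].argmax())   # frequency bin of the dominant energy at mid-time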

Enhancing seismic reflections using empirical mode decomposition

July 18, 2020 Documentation

A new paper is added to the collection of reproducible documents: Enhancing seismic reflections using empirical mode decomposition in the flattened domain

For various reasons, seismic reflections may not be continuous even when no faults or other discontinuities exist. We propose a novel approach for enhancing the amplitude of seismic reflections and making them continuous. We use the plane-wave flattening technique to provide horizontal events for the subsequent empirical mode decomposition (EMD) based smoothing in the flattened domain. The inverse plane-wave flattening then restores the original curved events. The plane-wave flattening process requires precise local slope estimation, which is provided by the plane-wave destruction (PWD) algorithm. The EMD-based smoothing filter is non-parametric and adaptive, and thus can be conveniently used. Both pre-stack and post-stack field data examples show significant improvement in data quality, which makes the subsequent interpretation easier and more reliable.
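
A minimal sketch of the flatten/filter/unflatten workflow, assuming known integer trace shifts in place of PWD-estimated slopes and a simple lateral running mean in place of the EMD-based smoothing; both substitutions are stand-ins for illustration only.

import numpy as np

def flatten(gather, shifts):
    return np.stack([np.roll(tr, -s) for tr, s in zip(gather.T, shifts)], axis=1)

def unflatten(flat, shifts):
    return np.stack([np.roll(tr, s) for tr, s in zip(flat.T, shifts)], axis=1)

def smooth_along_traces(flat, half=3):
    # Lateral running mean in the flattened domain (stand-in for EMD-based smoothing).
    out = np.copy(flat)
    nx = flat.shape[1]
    for ix in range(nx):
        lo, hi = max(0, ix - half), min(nx, ix + half + 1)
        out[:, ix] = flat[:, lo:hi].mean(axis=1)
    return out

def enhance(gather, shifts):
    return unflatten(smooth_along_traces(flatten(gather, shifts)), shifts)

# Toy example: a dipping event (one sample of shift per trace) plus random noise.
nt, nx = 100, 30
gather = np.zeros((nt, nx))
for ix in range(nx):
    gather[40 + ix, ix] = 1.0
shifts = np.arange(nx)                 # known dip; PWD would estimate this from the data
noisy = gather + 0.2 * np.random.default_rng(2).standard_normal((nt, nx))
print(np.linalg.norm(gather - noisy), np.linalg.norm(gather - enhance(noisy, shifts)))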

Application of principal component analysis in weighted stacking of seismic data

July 18, 2020 Documentation

A new paper is added to the collection of reproducible documents: Application of principal component analysis in weighted stacking of seismic data


Optimal stacking of multiple datasets plays a significant role in many scientific domains. The quality of stacking affects the signal-to-noise ratio (SNR) and amplitude fidelity of the stacked image. In seismic data processing, similarity-weighted stacking uses the local similarity between each trace and a reference trace as the weight to stack the flattened prestack seismic data after normal moveout (NMO) correction. The traditional reference trace is an approximate zero-offset trace calculated as a direct arithmetic mean of the data matrix along the spatial direction. However, when the data matrix contains abnormal misaligned traces, erratic noise, or non-Gaussian random noise, the accuracy of the approximate zero-offset trace is greatly affected, which in turn degrades the quality of stacking. We propose a novel weighted stacking method based on principal component analysis (PCA). The principal components of the data matrix, namely the useful signals, are extracted with a low-rank decomposition method by solving an optimization problem with a low-rank constraint. The optimization problem is solved via a standard singular value decomposition algorithm. The low-rank decomposition of the data matrix alleviates the influence of abnormal traces, erratic noise, and non-Gaussian random noise, and is thus more robust than the traditional alternatives. We use both synthetic and field data examples to show the successful performance of the proposed approach.
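
A minimal sketch of the idea: take the reference (approximate zero-offset) trace from a low-rank approximation of the NMO-corrected gather rather than from a plain arithmetic mean, then stack with similarity-based weights. The rank-1 choice and the global-correlation weights below are illustrative simplifications of the local similarity used in the paper.

import numpy as np

def pca_reference_trace(gather, rank=1):
    # Reference trace from a low-rank (principal-component) approximation of the gather.
    U, s, Vt = np.linalg.svd(gather, full_matrices=False)
    low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return low.mean(axis=1)

def weighted_stack(gather, ref):
    # Global correlation per trace as a crude stand-in for local similarity weights.
    w = np.array([max(np.corrcoef(tr, ref)[0, 1], 0.0) for tr in gather.T])
    w = w / (w.sum() + 1e-12)
    return gather @ w

# Toy example: a flattened gather with random noise and one erratic trace.
rng = np.random.default_rng(3)
nt, nx = 200, 24
wavelet = np.exp(-0.5 * ((np.arange(nt) - 100) / 5.0) ** 2)
gather = np.outer(wavelet, np.ones(nx)) + 0.5 * rng.standard_normal((nt, nx))
gather[:, 5] = rng.standard_normal(nt)
print(np.corrcoef(weighted_stack(gather, pca_reference_trace(gather)), wavelet)[0, 1],
      np.corrcoef(gather.mean(axis=1), wavelet)[0, 1])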

Velocity analysis of simultaneous-source data

April 11, 2020 Documentation

A new paper is added to the collection of reproducible documents: Velocity analysis of simultaneous-source data using high-resolution semblance – coping with the strong noise


Direct imaging of simultaneous-source (or blended) data, without the need for deblending, requires a precise subsurface velocity model. In this paper, we focus on the velocity analysis of simultaneous-source data using an NMO-based velocity picking approach. We demonstrate that it is possible to obtain a precise velocity model directly from the blended data in the common-midpoint (CMP) domain. The similarity-weighted semblance helps obtain a much better velocity spectrum, with higher resolution and higher reliability, than the traditional semblance. The similarity-weighted semblance enforces an inherent noise attenuation solely in the semblance calculation stage and is thus not sensitive to the intense interference. We use both simulated synthetic and field data examples to demonstrate the performance of the similarity-weighted semblance in obtaining a reliable subsurface velocity model for direct migration of simultaneous-source data. The migrated image of the blended field data using the prestack Kirchhoff time migration (PSKTM) approach, based on the velocity picked from the similarity-weighted semblance, is very close to the migrated image of the unblended data.
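
For reference, a minimal numpy sketch of NMO-based semblance scanning is given below; it implements the conventional semblance, whereas the similarity-weighted semblance of the paper additionally weights each trace by its local similarity to a reference trace. The sampling values in the toy example are hypothetical.

import numpy as np

def nmo_correct(gather, t, offsets, vel):
    # Map each trace from recorded time sqrt(t0^2 + (h/v)^2) back to zero-offset time t0.
    out = np.zeros_like(gather)
    for ix, h in enumerate(offsets):
        t_src = np.sqrt(t ** 2 + (h / vel) ** 2)
        out[:, ix] = np.interp(t_src, t, gather[:, ix], left=0.0, right=0.0)
    return out

def semblance(gather, t, offsets, velocities, win=5):
    spec = np.zeros((len(t), len(velocities)))
    for iv, v in enumerate(velocities):
        d = nmo_correct(gather, t, offsets, v)
        num = np.convolve(d.sum(axis=1) ** 2, np.ones(win), mode="same")
        den = np.convolve((d ** 2).sum(axis=1), np.ones(win), mode="same")
        spec[:, iv] = num / (d.shape[1] * den + 1e-12)
    return spec

# Toy example with hypothetical sampling: one flat reflector at 0.4 s and v = 2000 m/s.
nt, nx = 250, 30
t = np.arange(nt) * 0.004
offsets = np.arange(nx) * 50.0
gather = np.zeros((nt, nx))
for ix, h in enumerate(offsets):
    gather[int(np.sqrt(0.4 ** 2 + (h / 2000.0) ** 2) / 0.004), ix] = 1.0
spec = semblance(gather, t, offsets, np.linspace(1500.0, 3000.0, 31))
print(np.unravel_index(spec.argmax(), spec.shape))   # peak near t = 0.4 s, v = 2000 m/s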

Compressive sensing for seismic data reconstruction

April 11, 2020 Documentation

A new paper is added to the collection of reproducible documents: Compressive sensing for seismic data reconstruction via fast projection onto convex sets based on seislet transform


Drawing on compressive sensing (CS) theory from the signal-processing field, we propose a new CS approach based on a fast projection onto convex sets (FPOCS) algorithm with a sparsity constraint in the seislet transform domain. The seislet transform appears to be the sparsest among the state-of-the-art sparse transforms. FPOCS converges much faster than the conventional POCS algorithm (about two thirds of the iterations can be saved) while maintaining the same recovery performance. FPOCS also obtains faster and better performance than FISTA for relatively clean data, but slower and worse performance than FISTA for noisier data, which provides a reference for deciding which algorithm to use in practice according to the noise level in the seismic data. The seislet-transform-based CS approach achieves noticeably better data recovery than $f-k$ transform based scenarios, in terms of signal-to-noise ratio (SNR), local similarity comparison, and visual observation, because of the much sparser structure in the seislet transform domain. We use both synthetic and field data examples to demonstrate the superior performance of the proposed seislet-based FPOCS approach.
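
A minimal sketch of a fast POCS iteration of this kind: threshold in a sparsifying transform domain, re-insert the observed samples, and accelerate with a FISTA-style momentum term. The 2D FFT below is a stand-in for the seislet transform, which is not available in plain numpy, and the threshold value is arbitrary.

import numpy as np

def soft(x, tau):
    # Soft thresholding that also works for complex coefficients.
    mag = np.abs(x)
    return x * np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)

def fpocs(d_obs, mask, tau, niter=100):
    x_old = d_obs.copy()
    y = d_obs.copy()
    t_old = 1.0
    for _ in range(niter):
        x_new = np.real(np.fft.ifft2(soft(np.fft.fft2(y), tau)))   # sparsify + threshold
        x_new = mask * d_obs + (1.0 - mask) * x_new                # re-insert observed samples
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_old ** 2))
        y = x_new + (t_old - 1.0) / t_new * (x_new - x_old)        # FISTA-style momentum
        x_old, t_old = x_new, t_new
    return x_old

# Toy example: an oscillatory section with roughly 40% of the samples missing.
rng = np.random.default_rng(4)
n = np.arange(64)
data = np.outer(np.sin(0.3 * n), np.cos(0.2 * n))
mask = (rng.random(data.shape) > 0.4).astype(float)
rec = fpocs(mask * data, mask, tau=5.0)
print(np.linalg.norm(data - mask * data), np.linalg.norm(data - rec))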

CUDA package for Q-compensated RTM

April 3, 2020 Documentation

A new paper is added to the collection of reproducible documents: CuQ-RTM: A CUDA-based code package for stable and efficient Q-compensated reverse time migration

Reverse time migration (RTM) in attenuating media should take absorption and dispersion effects into consideration. The recently proposed viscoacoustic wave equation with decoupled fractional Laplacians (DFLs) facilitates separate amplitude compensation and phase correction in $Q$-compensated RTM ($Q$-RTM). However, the intensive computation and enormous storage requirements of $Q$-RTM prevent it from being extended to practical applications, especially for large-scale 2D or 3D cases. The emerging graphics processing unit (GPU) computing technology, built around a scalable array of multithreaded streaming multiprocessors (SMs), presents an opportunity to greatly accelerate $Q$-RTM by appropriately exploiting the GPU’s architectural characteristics. We present cu$Q$-RTM, a CUDA-based code package that implements $Q$-RTM based on a set of stable and efficient strategies, such as streamed CUFFT, checkpointing-assisted time-reversal reconstruction (CATRC), and adaptive stabilization. cu$Q$-RTM can run in a multi-level parallelism (MLP) fashion, either synchronously or asynchronously, to take advantage of all the CPUs and GPUs available, while maintaining impressively good stability and flexibility. We mainly outline the architecture of the cu$Q$-RTM code package and some program optimization schemes. The speedup ratio on a single GeForce GTX760 GPU card relative to a single core of an Intel Core i5-4460 CPU can reach above 80 in large-scale simulations. The strong scaling property of multi-GPU parallelism is demonstrated by performing $Q$-RTM on a Marmousi model with one to six GPU(s) involved. Finally, we further verify the feasibility and efficiency of cu$Q$-RTM on a field data set. The “living” package is available from GitHub at https://github.com/Geophysics-OpenSource/cuQRTM, and peer-reviewed code related to this article can be found at http://software.seg.org/2019/0001.
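
The checkpointing idea behind CATRC can be sketched in a few lines (shown here in numpy rather than CUDA, with a toy deterministic update standing in for the viscoacoustic propagator): store only occasional snapshots during the forward pass and recompute the wavefield between checkpoints on demand, trading extra computation for greatly reduced storage.

import numpy as np

def step(u):                                      # placeholder one-step propagator
    return 0.95 * np.roll(u, 1)

def forward_with_checkpoints(u0, nt, every=25):
    checkpoints = {0: u0.copy()}
    u = u0.copy()
    for it in range(1, nt + 1):
        u = step(u)
        if it % every == 0:
            checkpoints[it] = u.copy()            # store only a few snapshots
    return checkpoints

def reconstruct_snapshot(checkpoints, it, every=25):
    it0 = (it // every) * every                   # nearest checkpoint at or before it
    u = checkpoints[it0].copy()
    for _ in range(it - it0):                     # recompute the short segment
        u = step(u)
    return u

# Verify that a recomputed snapshot matches the full recursion.
u0 = np.zeros(100)
u0[10] = 1.0
cps = forward_with_checkpoints(u0, nt=100)
u_direct = u0.copy()
for _ in range(87):
    u_direct = step(u_direct)
print(np.allclose(reconstruct_snapshot(cps, 87), u_direct))   # True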