Introduction

In an ideal data acquisition system, receivers are deployed continuously and each receiver records continuous sources; however, such an ideal system has not been achieved in practice. Acquisition geometries vary, and uniform data coverage is rarely achieved because of practical and economic constraints (Stone, 1994). Furthermore, a regular data distribution is a prerequisite for some applications of seismic migration (Claerbout, 1971; Biondi and Palacharla, 1996; Gardner and Canning, 1994; Zhang et al., 2003), multiple elimination (Dragoset and Jericevic, 1998; Verschuur et al., 1992), and 4D seismic monitoring (Morice et al., 2000). Under such circumstances, new data points are constructed from known data points through data interpolation. Therefore, data interpolation has become a key technique in seismic data processing, especially for wide-azimuth seismic surveys.

Many seismic data interpolation methods have been proposed in the past few years, among which the binning method and its improvements are the simplest. However, they always create artifacts that affect the final processing results. Another approach is based on different types of continuation operators (Liu and Fomel, 2010; Stolt, 2002; Fomel, 2003). These operators sometimes perform poorly at small offsets because of limited integration apertures. In other methods, data interpolation is cast as an iterative optimization problem using a prediction-error filter (PEF). Several prediction methods have been proposed, including $f$-$x$ prediction-filter interpolation (Wang, 2002; Spitz, 1991; Naghizadeh and Sacchi, 2009; Porsani, 1999) and $t$-$x$ domain PEF interpolation (Fomel, 2002; Curry, 2003; Claerbout and Nichols, 1991; Liu and Fomel, 2011). Gülünay (2003) used nonaliased lower frequencies to remove aliasing. In fact, Gülünay's method is equivalent to implementing Spitz's method in a different domain. Abma and Kabir (2006b) compared different $t$-$x$, $f$-$x$, and $f$-$k$ methods and characterized their behavior. Many interpolation algorithms have been implemented for various scenarios, yielding improvements in seismic images. The latest data interpolation technologies have brought revolutionary changes to acquisition design, significantly reducing the cost and turnaround time of seismic acquisition. Nevertheless, accurately handling highly aliased seismic data recorded on strongly irregular acquisition grids, and handling complex data in which seismic events with different amplitudes and noise levels interfere with one another, remain challenges for seismic data interpolation algorithms.

If seismic data are sparse in some domain, compressed sensing (CS) (Donoho, 2006) can be used to reconstruct missing data. CS-based interpolation methods usually consist of two parts, a sparse transform and an iterative algorithm, and they have recently undergone rapid development. Different iterative approaches exist for solving the inverse problem corresponding to data interpolation. For instance, Abma and Kabir (2006a) applied a projection onto convex sets (POCS) algorithm to seismic data interpolation. Liang et al. (2014) proposed a split inexact Uzawa algorithm. Yu et al. (2015) used a two-stage method to solve the objective function for data interpolation. Chen et al. (2015) proposed a nonlinear shaping regularization for solving the inverse problem. These iterative strategies are designed to balance fast convergence against accurate recovery of missing data. A widely used sparse transform for seismic data interpolation is the Fourier transform under a plane-wave assumption (Xu et al., 2005; Zwartjes and Sacchi, 2007; Zwartjes and Gisolf, 2007; Naghizadeh and Innanen, 2013). The wavelet transform is powerful in representing piecewise-smooth signals (Mallat, 2009); however, it is still not well suited to compressing nonstationary seismic data.
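The two ingredients named above (a sparse transform plus an iterative algorithm that reinserts the recorded samples) can be illustrated with a minimal POCS-style sketch, assuming a 2-D Fourier transform as the sparsifying basis and a linearly decaying hard threshold. The function name, threshold schedule, and test signal below are illustrative assumptions, not the published algorithms of the cited authors.

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=100, thresh_start=0.5, thresh_end=0.01):
    """Fill in missing traces by iterative Fourier-domain thresholding (POCS).

    observed : 2-D array with missing traces set to zero
    mask     : boolean array, True where samples were actually recorded
    The hard threshold decays linearly from thresh_start to thresh_end,
    expressed as a fraction of the largest spectral amplitude.
    """
    data = observed.copy()
    for k in range(n_iter):
        spec = np.fft.fft2(data)                     # sparse (Fourier) transform
        tau = thresh_start + (thresh_end - thresh_start) * k / (n_iter - 1)
        spec[np.abs(spec) < tau * np.abs(spec).max()] = 0.0  # keep strong coefficients
        model = np.real(np.fft.ifft2(spec))          # back to the data domain
        data = np.where(mask, observed, model)       # project: reinsert known samples
    return data

# Synthetic example: two plane waves with 30% of the traces removed.
nt, nx = 64, 64
t, x = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
true = (np.cos(2 * np.pi * (3 * t + 5 * x) / nt)
        + 0.5 * np.cos(2 * np.pi * (7 * t - 2 * x) / nt))
rng = np.random.default_rng(0)
mask = np.ones((nt, nx), dtype=bool)
mask[:, rng.choice(nx, nx * 3 // 10, replace=False)] = False  # drop random traces
observed = np.where(mask, true, 0.0)

recovered = pocs_interpolate(observed, mask)
err = np.linalg.norm(recovered - true) / np.linalg.norm(true)
```

Because plane-wave events are represented by a few isolated Fourier coefficients, the thresholding step suppresses the leakage caused by the missing traces, and the relative error `err` falls well below the error of the decimated input; real seismic data are only locally plane-wave-like, which is what motivates the data-adaptive transforms discussed next.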

Wavelet-like transforms have found various applications in seismic data analysis (Boßmann and Ma, 2015; Herrmann et al., 2008; Wang et al., 2015). The seislet transform (Fomel, 2006) is a sparse transform specifically designed for characterizing seismic data. Fomel and Liu (2010) further improved the seislet framework and proposed additional applications. The original seislet transform uses local data slopes estimated by plane-wave destruction (PWD) filters (Chen et al., 2013a,b; Fomel, 2002). However, the PWD operator can be sensitive to strong interference, which occasionally causes the PWD-seislet transform to fail in describing noisy signals. To address this problem, Liu et al. (2015) proposed a velocity-dependent (VD) concept in which local slopes in prestack data are evaluated from moveout parameters estimated by conventional velocity-analysis techniques.

In this paper, we extend the velocity-dependent (VD) seislet transform (Liu et al., 2015) to a nonhyperbolic pattern and apply it to data interpolation with a new modified Bregman iteration. We test the performance of the generalized VD-seislet transform in numerical experiments with synthetic and field data and describe the results.


2019-05-06