
Next: Method Up: Chen & Fomel: Denoising Previous: Chen & Fomel: Denoising

Introduction

Most random noise attenuation techniques require controlling the trade-off between preserving useful signal and removing random noise. $ F$ -$ x$ deconvolution and projection (Soubaras, 1995; Canales, 1984), also known as $ f$ -$ x$ predictive filtering (Bekara and van der Baan, 2009; Chen and Ma, 2014), is a common technique for random noise attenuation thanks to its convenient implementation and high efficiency. The prediction filter length is commonly adjusted to strengthen or weaken the ability of $ f$ -$ x$ deconvolution to remove random noise, at the expense of preserving less or more useful signal. When the data are complex, the filter should be long enough to capture the useful signal, which in turn limits the amount of noise it can attenuate; otherwise, a portion of the useful signal is lost because of the large number of dip components in the data (Chen and Ma, 2014).

For certain specific types of random noise, such as blending noise in simultaneous-source seismic data (Chen et al., 2014a; Berkhout, 2008; Abma et al., 2010; Beasley et al., 1998), median filtering (MF) can be particularly effective (Chen et al., 2014b; Huo et al., 2012; Chen, 2014). However, the ability of MF to remove spike-like blending noise depends strongly on the length of the filtering window. A longer window attenuates blending noise more powerfully, but may also remove useful energy. Conversely, a shorter window preserves more useful energy but is less able to remove spiky noise. Removing spiky noise without losing too much useful signal thus becomes a difficult trade-off.
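This window-length trade-off can be illustrated with a toy numeric sketch (the trace, spike bursts, and window sizes below are all invented for demonstration, not taken from the paper): a short median-filter window leaves clustered spikes untouched, while a long window removes them at the cost of clipping the signal.

```python
import numpy as np
from scipy.signal import medfilt

# Hypothetical trace: a smooth event plus spiky, burst-like "blending" noise.
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 5 * t)
noise = np.zeros_like(signal)
for start in (30, 80, 130, 170):      # bursts of three adjacent spikes
    noise[start:start + 3] = 3.0
trace = signal + noise

# Short window: three adjacent spikes survive a 3-sample median.
# Long window: the bursts are removed, but sine peaks get flattened.
short = medfilt(trace, kernel_size=3)
long_ = medfilt(trace, kernel_size=15)

mse = lambda a, b: float(np.mean((a - b) ** 2))
print(mse(short, signal) > mse(long_, signal))  # longer window wins on bursty spikes
```

The point of the sketch is only that neither window length is uniformly best: the short window's error is dominated by surviving spikes, the long window's by distortion of the signal.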

Suppose that the seismic data $ \mathbf{d}$ are composed of signal $ \mathbf{s}_{true}$ and noise $ \mathbf{n}_{true}$ . A successful random noise attenuation then yields accurate signal and noise estimates: $ \mathbf{s}_0\approx\mathbf{s}_{true}$ and $ \mathbf{n}_0\approx\mathbf{n}_{true}$ , where $ \mathbf{s}_0$ and $ \mathbf{n}_0$ are the estimated signal and noise. In practice, however, this ideal situation may not occur, for two main reasons: incorrect parameter selection or inadequacy of the denoising assumptions. When those assumptions are not met, conventional denoising approaches may not achieve optimal performance. For most noise attenuation approaches, the noise section contains a certain amount of useful signal, which can be called signal-leakage energy. Sometimes the leakage energy is negligible, whereas in other cases this loss of energy results in a decrease in resolution.
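The notion of signal-leakage energy can be made concrete with a small numeric sketch (the 20% leakage factor and all signals below are hypothetical, chosen purely for illustration): when a denoiser keeps only part of the signal, the leaked portion shows up as correlation between the noise estimate and the signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
s_true = np.sin(2 * np.pi * np.arange(n) / 50.0)   # "true" signal
n_true = 0.5 * rng.standard_normal(n)              # "true" random noise
d = s_true + n_true

# An imperfect denoiser: keeps only 80% of the signal, so 20% leaks
# into the noise estimate  n0 = n_true + 0.2 * s_true.
s0 = 0.8 * s_true
n0 = d - s0

# Correlating the noise estimate with the signal reveals the leakage;
# for an ideal separation this coefficient would be near zero.
leak = np.dot(n0, s_true) / np.dot(s_true, s_true)
print(round(leak, 2))  # close to the 20% leakage factor
```

In a real application $ \mathbf{s}_{true}$ is of course unknown; the sketch only quantifies what "signal-leakage energy" means when it is known.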

Local seismic attributes measure seismic signal characteristics neither instantaneously, at each signal point, nor globally, across a data window, but locally in the neighborhood of each point (Fomel, 2007a). One of the most useful local attributes is local similarity, which has found numerous successful applications in different areas of seismic data processing: multicomponent image registration (Fomel et al., 2005; Fomel, 2007a), time-lapse registration (Fomel and Jin, 2009; Zhang et al., 2013), time-frequency analysis (Liu et al., 2011b), structure-enhancing filtering (Liu et al., 2010), phase estimation (Fomel and van der Baan, 2014), etc. By stacking seismic data weighted by local similarity to a reference trace, a seismic image with an increased signal-to-noise ratio can be obtained (Liu et al., 2009, 2011a). In this paper, we introduce a new local seismic attribute, the local orthogonalization weight (LOW), in order to perform local orthogonalization. LOW appeared previously as part of the definition of local similarity and can be obtained by solving a minimization problem using shaping regularization with a smoothness constraint.
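As a rough sketch of the local-similarity idea — not the published attribute, which is obtained from two smoothness-regularized inverse problems solved with shaping regularization; here a simple windowed normalized correlation stands in, and the traces and window width are invented:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def local_similarity(a, b, width=25):
    """Simplified local similarity: smoothed normalized cross-correlation.
    Illustrative stand-in only; the published attribute regularizes two
    local least-squares divisions instead of using a fixed window."""
    ab = uniform_filter1d(a * b, width)
    aa = uniform_filter1d(a * a, width)
    bb = uniform_filter1d(b * b, width)
    return ab / np.sqrt(aa * bb + 1e-12)

t = np.arange(400)
a = np.sin(2 * np.pi * t / 40.0)
b = a.copy()
b[200:] = np.random.default_rng(2).standard_normal(200)  # second half uncorrelated

c = local_similarity(a, b)
# c is near 1 where the traces agree and fluctuates near 0 where they do not.
```

Unlike a global correlation coefficient, the attribute varies point by point, which is what makes similarity-weighted stacking and, later, local orthogonalization possible.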

In order to compensate for the loss of useful signal in traditional noise attenuation approaches, caused by incorrect parameter selection or by inadequacy of the denoising assumptions, we apply a weighting operator to the initially denoised section to retrieve the useful signal from the initial noise section. The new denoising process corresponds to a local orthogonalization of signal and noise, based on the assumption that the final estimated signal and noise should be orthogonal to each other in the time-space domain. The orthogonality assumption is similar to assuming that the signal and noise are uncorrelated, and thus is expected to hold for any type of noise that does not correlate with the useful signal, e.g., random noise. The proposed local-orthogonalization approach can be considered a special case of the previously proposed nonstationary matching filtering (Fomel, 2009) with a one-point filter length. The proposed approach is not very sensitive to parameter selection and is therefore robust in practice. We use two synthetic examples and three field data examples to demonstrate the successful performance of the proposed approach in applications to both conventional and simultaneous-source seismic data.
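A minimal sketch of the orthogonalization idea follows — in a global form with a single scalar weight, rather than the local, smoothly varying weight the paper proposes, and with all signals invented for illustration. Scaling the initial signal estimate by a weight chosen from the cross-correlation makes the re-estimated noise orthogonal to the signal estimate:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
s_true = np.sin(2 * np.pi * np.arange(n) / 50.0)
d = s_true + 0.5 * rng.standard_normal(n)

# An imperfect initial denoising that leaks part of the signal into the noise.
s0 = 0.8 * s_true                 # stand-in for any initial signal estimate
n0 = d - s0                       # initial noise estimate (contains leaked signal)

# Global weight: the scalar counterpart of the local (LOW) weight.
w = np.dot(n0, s0) / np.dot(s0, s0)
s1 = s0 + w * s0                  # retrieved signal
n1 = n0 - w * s0                  # re-estimated noise

# After re-weighting, the noise estimate is orthogonal to the signal estimate,
# and the decomposition still reconstructs the data: s1 + n1 == d.
print(abs(np.dot(n1, s0)) < 1e-8)
```

Replacing the single scalar $ w$ with a smoothly varying local weight, estimated with shaping regularization, gives the local version discussed in the following sections.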



2015-03-25