Rank reduction via MSSA

Consider a block of 3D data $\mathbf{D}_{time}(t,x,y)$ of $N_t \times N_x \times N_y$ samples ($t=1\cdots N_t$, $x=1\cdots N_x$, $y=1\cdots N_y$). MSSA (Oropeza and Sacchi, 2011) operates on the data in the following way: first, MSSA transforms $\mathbf{D}_{time}(t,x,y)$ into $\mathbf{D}_{freq}(w,x,y)$ ($w=1\cdots N_w$), a cube of complex values in the frequency domain. Each frequency slice of the data, at a given frequency $w_0$, can be represented by the following matrix:

$\displaystyle \mathbf{D}(w_0)=\left(\begin{array}{cccc}
D(1,1) & D(1,2) & \cdots & D(1,N_x)\\
D(2,1) & D(2,2) & \cdots & D(2,N_x)\\
\vdots & \vdots & \ddots & \vdots\\
D(N_y,1) & D(N_y,2) & \cdots & D(N_y,N_x)
\end{array}\right).$ (1)
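As a concrete illustration of this first step, the following Python sketch (all names hypothetical, not the authors' code; it assumes NumPy and a real-valued data cube `d_time` of shape $(N_t, N_x, N_y)$) transforms the data to the frequency domain and extracts one frequency slice:

    import numpy as np

    # Hypothetical data cube of N_t x N_x x N_y samples.
    N_t, N_x, N_y = 256, 32, 32
    d_time = np.random.randn(N_t, N_x, N_y)

    # Transform t -> w along the time axis; rfft keeps only the
    # non-negative frequencies of a real-valued signal.
    d_freq = np.fft.rfft(d_time, axis=0)    # shape (N_w, N_x, N_y)

    # One complex frequency slice D(w_0), arranged as the
    # N_y-by-N_x matrix of equation 1 (rows index y, columns index x).
    w0 = 10
    D = d_freq[w0].T                        # shape (N_y, N_x)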

To avoid notational clutter we omit the argument $w_0$. Second, MSSA constructs a Hankel matrix from each row of $\mathbf{D}$; for row $i$ of $\mathbf{D}$, the Hankel matrix $\mathbf{R}_i$ is:

$\displaystyle \mathbf{R}_i=\left(\begin{array}{cccc}
D(i,1) & D(i,2) & \cdots & D(i,m)\\
D(i,2) & D(i,3) & \cdots & D(i,m+1)\\
\vdots & \vdots & \ddots & \vdots\\
D(i,N_x-m+1) & D(i,N_x-m+2) & \cdots & D(i,N_x)
\end{array}\right).$ (2)
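A minimal sketch of this step, continuing the Python example above (the window length `m` and the helper name are assumptions for illustration):

    def row_hankel(D, i, m):
        """Hankel matrix R_i of equation 2, built from row i of D
        (0-based indexing); its size is (N_x - m + 1) x m."""
        N_x = D.shape[1]
        return np.array([D[i, j:j + m] for j in range(N_x - m + 1)])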

Then MSSA assembles the matrices $\mathbf{R}_i$ into a block Hankel matrix $\mathbf{M}$:

$\displaystyle \mathbf{M}=\left(\begin{array}{cccc}
\mathbf{R}_1 & \mathbf{R}_2 & \cdots & \mathbf{R}_n\\
\mathbf{R}_2 & \mathbf{R}_3 & \cdots & \mathbf{R}_{n+1}\\
\vdots & \vdots & \ddots & \vdots\\
\mathbf{R}_{N_y-n+1} & \mathbf{R}_{N_y-n+2} & \cdots & \mathbf{R}_{N_y}
\end{array}\right).$ (3)

The size of $\mathbf{M}$ is $I\times J$, with $I=(N_x-m+1)(N_y-n+1)$ and $J=mn$, where $m$ and $n$ are predefined integers chosen such that the Hankel matrices $\mathbf{R}_i$ and the block Hankel matrix $\mathbf{M}$ are close to square. The transformation of the data matrix into a block Hankel matrix can be represented in operator notation as follows:

$\displaystyle \mathbf{M}=\mathcal{H}\mathbf{D},$ (4)

where $\mathcal{H}$ denotes the Hankelization operator.
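Combining the two Hankelization levels gives a sketch of the operator $\mathcal{H}$. The choice of $m$ and $n$ below, $m=\lfloor (N_x+1)/2\rfloor$ and $n=\lfloor (N_y+1)/2\rfloor$, is one common convention (an assumption here, not taken from the source) that keeps the factors close to square while preserving $J\le I$:

    def hankelize(D, m, n):
        """Hankelization operator H of equation 4: map the N_y x N_x
        slice D to the I x J block Hankel matrix M of equation 3,
        with I = (N_x - m + 1)(N_y - n + 1) and J = m * n."""
        N_y = D.shape[0]
        R = [row_hankel(D, i, m) for i in range(N_y)]   # R_1 ... R_{N_y}
        return np.block([[R[i + k] for k in range(n)]
                         for i in range(N_y - n + 1)])

    m, n = (N_x + 1) // 2, (N_y + 1) // 2
    M = hankelize(D, m, n)
    assert M.shape == ((N_x - m + 1) * (N_y - n + 1), m * n)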

In general, the block Hankel matrix $\mathbf{M}$ can be represented as:

$\displaystyle \mathbf{M}=\mathbf{S}+\mathbf{N},$ (5)

where $\mathbf{S}$ and $\mathbf{N}$ denote the block Hankel matrix of signal and of random noise, respectively.

We assume that $\mathbf{M}$ and $\mathbf{N}$ have full rank, $\mathrm{rank}(\mathbf{M})=\mathrm{rank}(\mathbf{N})=J$, and that $\mathbf{S}$ is rank deficient, $\mathrm{rank}(\mathbf{S})=N<J$. The singular value decomposition (SVD) of $\mathbf{M}$ can be represented as:

$\displaystyle \mathbf{M} = [\mathbf{U}_1^M\quad \mathbf{U}_2^M]\left[\begin{array}{cc}
\Sigma_1^M & \mathbf{0}\\
\mathbf{0} & \Sigma_2^M
\end{array}\right]\left[\begin{array}{c}
(\mathbf{V}_1^M)^H\\
(\mathbf{V}_2^M)^H
\end{array}\right],$ (6)

where $\Sigma_1^M$ ($N\times N$) and $\Sigma_2^M$ ($(I-N)\times(J-N)$) are diagonal matrices containing, respectively, the larger and the smaller singular values. $\mathbf{U}_1^M$ ($I\times N$), $\mathbf{U}_2^M$ ($I\times(I-N)$), $\mathbf{V}_1^M$ ($J\times N$), and $\mathbf{V}_2^M$ ($J\times(J-N)$) denote the associated matrices of singular vectors. The symbol $[\cdot]^H$ denotes the conjugate transpose of a matrix. In general, the signal is more energy-concentrated and correlative than the random noise, so the larger singular values and their associated singular vectors represent the signal, while the smaller singular values and their associated singular vectors represent the random noise. We therefore set $\Sigma_2^M$ to $\mathbf{0}$ in order to attenuate the random noise (and, in a reconstruction problem, to recover the missing data during the first iteration):

$\displaystyle \tilde{\mathbf{M}} = \mathbf{U}_1^M\Sigma_1^M(\mathbf{V}_1^M)^H.$ (7)

Equation 7 is referred to as the truncated SVD (TSVD), which is used in the conventional MSSA approach.
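A minimal sketch of the rank reduction of equations 6 and 7, where the rank estimate `N` is an assumed input (in practice it is typically tied to the number of distinct dipping events in the data):

    def tsvd(M, N):
        """Truncated SVD of equation 7: keep the N largest singular
        values and their singular vectors, i.e. set Sigma_2 = 0."""
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        return U[:, :N] @ np.diag(s[:N]) @ Vh[:N, :]

    M_tilde = tsvd(M, N=3)   # low-rank estimate of the signal's block Hankel matrix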

