\[ \newcommand{\NN}{\mathbb{N}} \newcommand{\CC}{\mathbb{C}} \newcommand{\GG}{\mathbb{G}} \newcommand{\LL}{\mathbb{L}} \newcommand{\PP}{\mathbb{P}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\RR}{\mathbb{R}} \newcommand{\VV}{\mathbb{V}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\FF}{\mathbb{F}} \newcommand{\KK}{\mathbb{K}} \newcommand{\UU}{\mathbb{U}} \newcommand{\EE}{\mathbb{E}} \newcommand{\Aa}{\mathcal{A}} \newcommand{\Bb}{\mathcal{B}} \newcommand{\Cc}{\mathcal{C}} \newcommand{\Dd}{\mathcal{D}} \newcommand{\Ee}{\mathcal{E}} \newcommand{\Ff}{\mathcal{F}} \newcommand{\Gg}{\mathcal{G}} \newcommand{\Hh}{\mathcal{H}} \newcommand{\Ii}{\mathcal{I}} \newcommand{\Jj}{\mathcal{J}} \newcommand{\Kk}{\mathcal{K}} \newcommand{\Ll}{\mathcal{L}} \newcommand{\Mm}{\mathcal{M}} \newcommand{\Nn}{\mathcal{N}} \newcommand{\Oo}{\mathcal{O}} \newcommand{\Pp}{\mathcal{P}} \newcommand{\Qq}{\mathcal{Q}} \newcommand{\Rr}{\mathcal{R}} \newcommand{\Ss}{\mathcal{S}} \newcommand{\Tt}{\mathcal{T}} \newcommand{\Uu}{\mathcal{U}} \newcommand{\Vv}{\mathcal{V}} \newcommand{\Ww}{\mathcal{W}} \newcommand{\Xx}{\mathcal{X}} \newcommand{\Yy}{\mathcal{Y}} \newcommand{\Zz}{\mathcal{Z}} \newcommand{\al}{\alpha} \newcommand{\la}{\lambda} \newcommand{\ga}{\gamma} \newcommand{\Ga}{\Gamma} \newcommand{\La}{\Lambda} \newcommand{\Si}{\Sigma} \newcommand{\si}{\sigma} \newcommand{\be}{\beta} \newcommand{\de}{\delta} \newcommand{\De}{\Delta} \renewcommand{\phi}{\varphi} \renewcommand{\th}{\theta} \newcommand{\om}{\omega} \newcommand{\Om}{\Omega} \renewcommand{\epsilon}{\varepsilon} \newcommand{\Calpha}{\mathrm{C}^\al} \newcommand{\Cbeta}{\mathrm{C}^\be} \newcommand{\Cal}{\text{C}^\al} \newcommand{\Cdeux}{\text{C}^{2}} \newcommand{\Cun}{\text{C}^{1}} \newcommand{\Calt}[1]{\text{C}^{#1}} \newcommand{\lun}{\ell^1} \newcommand{\ldeux}{\ell^2} \newcommand{\linf}{\ell^\infty} \newcommand{\ldeuxj}{{\ldeux_j}} \newcommand{\Lun}{\text{\upshape L}^1} \newcommand{\Ldeux}{\text{\upshape L}^2} \newcommand{\Lp}{\text{\upshape L}^p} 
\newcommand{\Lq}{\text{\upshape L}^q} \newcommand{\Linf}{\text{\upshape L}^\infty} \newcommand{\lzero}{\ell^0} \newcommand{\lp}{\ell^p} \renewcommand{\d}{\ins{d}} \newcommand{\Grad}{\text{Grad}} \newcommand{\grad}{\text{grad}} \renewcommand{\div}{\text{div}} \newcommand{\diag}{\text{diag}} \newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} } \newcommand{\pdd}[2]{ \frac{ \partial^2 #1}{\partial #2^2} } \newcommand{\dotp}[2]{\langle #1,\,#2\rangle} \newcommand{\norm}[1]{|\!| #1 |\!|} \newcommand{\normi}[1]{\norm{#1}_{\infty}} \newcommand{\normu}[1]{\norm{#1}_{1}} \newcommand{\normz}[1]{\norm{#1}_{0}} \newcommand{\abs}[1]{\vert #1 \vert} \newcommand{\argmin}{\text{argmin}} \newcommand{\argmax}{\text{argmax}} \newcommand{\uargmin}[1]{\underset{#1}{\argmin}\;} \newcommand{\uargmax}[1]{\underset{#1}{\argmax}\;} \newcommand{\umin}[1]{\underset{#1}{\min}\;} \newcommand{\umax}[1]{\underset{#1}{\max}\;} \newcommand{\pa}[1]{\left( #1 \right)} \newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. } \newcommand{\enscond}[2]{ \left\{ #1 \;:\; #2 \right\} } \newcommand{\qandq}{ \quad \text{and} \quad } \newcommand{\qqandqq}{ \qquad \text{and} \qquad } \newcommand{\qifq}{ \quad \text{if} \quad } \newcommand{\qqifqq}{ \qquad \text{if} \qquad } \newcommand{\qwhereq}{ \quad \text{where} \quad } \newcommand{\qqwhereqq}{ \qquad \text{where} \qquad } \newcommand{\qwithq}{ \quad \text{with} \quad } \newcommand{\qqwithqq}{ \qquad \text{with} \qquad } \newcommand{\qforq}{ \quad \text{for} \quad } \newcommand{\qqforqq}{ \qquad \text{for} \qquad } \newcommand{\qqsinceqq}{ \qquad \text{since} \qquad } \newcommand{\qsinceq}{ \quad \text{since} \quad } \newcommand{\qarrq}{\quad\Longrightarrow\quad} \newcommand{\qqarrqq}{\quad\Longrightarrow\quad} \newcommand{\qiffq}{\quad\Longleftrightarrow\quad} \newcommand{\qqiffqq}{\qquad\Longleftrightarrow\qquad} \newcommand{\qsubjq}{ \quad \text{subject to} \quad } \newcommand{\qqsubjqq}{ \qquad \text{subject to} \qquad } \]

Compressed Sensing of Images


This tour explores compressed sensing of natural images, using different sparsity priors over a wavelet basis.


Installing toolboxes and setting up the path.

You need to download the following files: signal toolbox and general toolbox.

You need to unzip these toolboxes in your working directory, so that you have toolbox_signal and toolbox_general in your directory.

For Scilab users: you must replace the Matlab comment '%' by its Scilab counterpart '//'.

Recommendation: You should create a text file named for instance numericaltour.sce (in Scilab) or numericaltour.m (in Matlab) to write all the Scilab/Matlab commands you want to execute. Then, simply run exec('numericaltour.sce'); (in Scilab) or numericaltour; (in Matlab) to run the commands.

Execute this line only if you are using Matlab.

getd = @(p)path(p,path); % scilab users must *not* execute this

Then you can add the toolboxes to the path.

getd('toolbox_signal/');
getd('toolbox_general/');


Low Pass Linear Measures

We first make use of \(P_0\) low pass linear measurements to capture the low frequency content of the image, which is then excluded from the compressed sensing recovery.

Natural images are not only sparse in the wavelet domain: their coefficients also exhibit a fast decay across scales. The coarse (low pass) wavelets carry much of the image energy. It thus makes sense to measure the low pass coefficients directly.

We load an image \(f \in \RR^{n^2}\) of \(n \times n\) pixels.

name = 'boat';
n = 256;
f = load_image(name, n);
f = rescale(f);

Shortcuts for the wavelet transform \( \{\dotp{f}{\psi_m}\}_m \). We only compute up to a scale \(J\) so that only \(k_0\) sub-bands are transformed.

k0 = 2;
J = log2(n)-k0;
Wav  = @(f)perform_wavelet_transf(f,J,+1);
WavI = @(x)perform_wavelet_transf(x,J,-1);

Compute the wavelet transform.

fw = Wav(f);

Display the coefficients.

plot_wavelet(fw, J);

Exercise 1: (check the solution) Compute an approximation fLow using the \( P_0 = 2^{2J} = (n/2^{k_0})^2 \) low pass coefficients.


Randomized Orthogonal Measurements

We consider a compressed sensing operator that corresponds to randomized orthogonal projections.

Extract the high pass wavelet coefficients, \(x_0 = \{ \dotp{f}{\psi_m} \}_{m \in I_0}\).

A = ones(n,n); A(1:2^J,1:2^J) = 0;
I0 = find(A==1);
x0 = fw(I0);

Number of coefficients.

N = length(x0);

Number \( P_0 = 2^{2J} = (n/2^{k_0})^2 \) of low pass measurements.

P0 = (n/2^k0)^2;

Number of CS measurements.

P = 4 * P0;

Generate random permutation operators \(S_1,S_2 : \RR^N \rightarrow \RR^N\) so that \(S_k(x)_i = x_{\sigma_k(i)}\) where \( \sigma_k \in \Sigma_N \) is a random permutation of \(\{1,\ldots,N\}\).

sigma1 = randperm(N)';
sigma2 = randperm(N)';
S1 = @(x)x(sigma1);
S2 = @(x)x(sigma2);

The adjoint (and also inverse) operators \(S_1^*,S_2^*\) (denoted S1S,S2S in the code) correspond to the inverse permutations \(\sigma_k^*\) such that \(\sigma_k^* \circ \sigma_k(i)=i\).

sigma1S = 1:N; sigma1S(sigma1) = 1:N;
sigma2S = 1:N; sigma2S(sigma2) = 1:N;
S1S = @(x)x(sigma1S);
S2S = @(x)x(sigma2S);

We consider a CS operator \( \Phi : \RR^N \rightarrow \RR^P \) that corresponds to a projection on randomized atoms \[ (\Phi x)_i = \dotp{x}{ \phi_{\sigma_2(i)}} \] where \( \{\phi_i\}_i \) is a scrambled orthogonal basis \[ \phi_i(x) = c_i( \sigma_1(x) ) \] where \(\{ c_i \}_i\) is the orthogonal DCT basis.

This can be rewritten in compact operator form as \[ \Phi x = ( S_2 \circ C \circ S_1 (x) ) \downarrow_P \] where \(S_1,S_2\) are the permutation operators, and \(\downarrow_P\) selects the \(P\) first entries of a vector.

downarrow = @(x)x(1:P);
Phi = @(x)downarrow(S2(dct(S1(x))));

The adjoint operator is \[ \Phi^* x = S_1^* \circ C^* \circ S_2^* (x\uparrow_P) \] where \(\uparrow_P\) appends \(N-P\) zeros at the end of a vector, and \(C^*\) is the inverse DCT transform.

uparrow = @(x)[x; zeros(N-P,1)];
PhiS = @(x)S1S(idct(S2S(uparrow(x))));
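Since \(S_1\), \(S_2\) and \(C\) are orthogonal, the rows of \(\Phi\) are orthonormal, so \(\Phi \Phi^* = \text{Id}_P\) and the pseudo-inverse is simply \(\Phi^+ = \Phi^*\). A minimal NumPy sketch of the same construction checks this identity on a small example (an explicit orthonormal DCT-II matrix stands in for Matlab's dct; all names and sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 16

# orthonormal DCT-II matrix C, so that C @ C.T = Id
k = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(k, k + 0.5) / N)
C[0, :] = np.sqrt(1.0 / N)

# random permutations sigma_1, sigma_2 and their inverses
s1, s2 = rng.permutation(N), rng.permutation(N)
s1i, s2i = np.argsort(s1), np.argsort(s2)

Phi  = lambda x: (C @ x[s1])[s2][:P]                       # S2 o C o S1, keep first P entries
PhiS = lambda y: (C.T @ np.concatenate([y, np.zeros(N - P)])[s2i])[s1i]
# rows of Phi are orthonormal, hence Phi o PhiS is the identity on R^P
```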

Perform the CS (noiseless) measurements.

y = Phi(x0);

Exercise 2: (check the solution) Reconstruct an image using the pseudo-inverse coefficients \(\Phi^+ y = \Phi^* y\).


Compressed Sensing Recovery using the Douglas-Rachford Scheme

We consider the minimum \(\ell^1\) recovery from the measurements \(y = \Phi x_0 \in \RR^P\) \[ \umin{\Phi x = y} \normu{x}. \] This can be written as \[ \umin{ x } F(x) + G(x) \qwhereq \choice{ F(x) = i_{\Cc}(x), \\ G(x) = \normu{x}. } \] where \(\Cc = \enscond{x}{\Phi x =y}\).

One can solve this problem using the Douglas-Rachford iterations \[ \tilde x_{k+1} = \pa{1-\frac{\mu}{2}} \tilde x_k + \frac{\mu}{2} \text{rProx}_{\gamma G}( \text{rProx}_{\gamma F}(\tilde x_k) ) \qandq x_{k+1} = \text{Prox}_{\gamma F}(\tilde x_{k+1}). \]

We use the following definitions for the proximal and reversed-proximal mappings: \[ \text{rProx}_{\gamma F}(x) = 2\text{Prox}_{\gamma F}(x)-x \] \[ \text{Prox}_{\gamma F}(x) = \uargmin{y} \frac{1}{2}\norm{x-y}^2 + \ga F(y). \]
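For instance, for \(G = \normu{\cdot}\), \(\text{Prox}_{\ga G}\) is the entrywise soft thresholding. A minimal NumPy sketch of these two mappings (the function names are ours):

```python
import numpy as np

# prox of ga*||.||_1 is entrywise soft thresholding
prox_l1  = lambda x, ga: np.sign(x) * np.maximum(np.abs(x) - ga, 0)
# reversed prox: rProx = 2*Prox - Id
rprox_l1 = lambda x, ga: 2 * prox_l1(x, ga) - x

x = np.array([-2.0, -0.2, 0.0, 0.3, 1.5])
prox_l1(x, 0.5)   # entries smaller than 0.5 are set to 0, larger ones shrunk by 0.5
```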

One can show that for any value of \(\gamma>0\), any \( 0 < \mu < 2 \), and any \(\tilde x_0\), \(x_k \rightarrow x^\star\) which is a solution of the minimization of \(F+G\).
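The iterations can be sketched in NumPy on a small synthetic instance of \(\umin{Ax=y}\normu{x}\); here \(A\) is a random matrix with orthonormal rows, so the projection on \(\Cc\) is simply \(x + A^*(y - Ax)\), exactly as it will be for \(\Phi\) below. The matrix, seed and sparse vector are ours, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
P, N = 12, 40

# A with orthonormal rows (A @ A.T = Id), so projecting on {A x = y} is cheap
A = np.linalg.qr(rng.standard_normal((N, P)))[0].T
x0 = np.zeros(N)
x0[[5, 23]] = [1.5, -2.0]                 # sparse vector to recover
y = A @ x0

ga, mu = 1.0, 1.0
prox_F = lambda x: x + A.T @ (y - A @ x)                       # projection on C = {A x = y}
prox_G = lambda x: np.sign(x) * np.maximum(np.abs(x) - ga, 0)  # soft thresholding
rprox  = lambda prox, x: 2 * prox(x) - x

tx = np.zeros(N)
for _ in range(500):
    tx = (1 - mu / 2) * tx + (mu / 2) * rprox(prox_G, rprox(prox_F, tx))
x = prox_F(tx)   # feasible iterate; x_k converges to an l1 minimizer on C
```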

Exercise 3: (check the solution) Implement the proximal and reversed-proximal mappings of \(F\) (the orthogonal projection on \(\Cc\)) and of \(G\) (soft thresholding). In Matlab, use anonymous functions with the @ operator.


Values for the \(0 < \mu < 2\) and \(\gamma>0\) parameters. You can use other values; this might speed up the convergence.

mu = 1;
gamma = 1;

Exercise 4: (check the solution) Implement the DR iterative algorithm. Keep track of the evolution of the \(\ell^1\) norm \(G(x_k)\).


Exercise 5: (check the solution) Display the image reconstructed using the \(P_0\) linear and \(P\) CS measurements. The total number of measurements used is thus \(P+P_0\).


Compressed Sensing Reconstruction using Block Sparsity

In order to enhance the CS reconstruction, it is possible to use more advanced priors than plain \( \ell^1 \).

One can for instance use a block \( \ell^1 \) norm \[ G(x) = \sum_i \norm{x_{B_i}} \] where \( (B_i)_i \) is a disjoint segmentation of the index set \( \{1,\ldots,N\} \), where \(x_{B} = \{ x_i \}_{i \in B} \in \RR^{|B|}\) extracts the coefficients within \(B\), and \( \norm{x_B} \) is the \(\ell^2\) norm.

The proximal operator of this block \(\ell^1\) norm is a block thresholding \[ \forall \, m \in B_i, \quad \text{Prox}_{\ga G}(x)_m = \max\pa{0, 1-\frac{\ga}{\norm{x_{B_i}}}} x_m. \]
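A direct NumPy sketch of this block thresholding, looping over an explicit list of blocks (the function name and toy values are ours):

```python
import numpy as np

def prox_block_l1(x, blocks, ga):
    """Prox of ga * sum_i ||x_{B_i}||_2: shrink each block's l2 norm by ga."""
    x = x.copy()
    for B in blocks:
        nrm = np.linalg.norm(x[B])
        x[B] *= max(0.0, 1.0 - ga / nrm) if nrm > 0 else 0.0
    return x

x = np.array([3.0, 4.0, 0.1, 0.1])
prox_block_l1(x, [[0, 1], [2, 3]], 1.0)   # first block shrunk, second block set to 0
```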

We use uniform blocks of size \(w \times w\).

w = 4;

Block positions and offsets in the image domain.

v = 1:w:n;
dv = 0:w-1;
[dX,dY,X,Y] = ndgrid(dv,dv,v,v);
q = size(X,3);
dX = reshape(dX, [w*w q*q]);
dY = reshape(dY, [w*w q*q]);
X = reshape(X, [w*w q*q]);
Y = reshape(Y, [w*w q*q]);

Remove the blocks that fall outside the image.

I = find( sum(X+dX>n | Y+dY>n)  );
X(:,I) = [];
Y(:,I) = [];
dX(:,I) = [];
dY(:,I) = [];

Compute the indices of the blocks in \(\{1,\ldots,N\}\), i.e. not in image space but in the CS coefficient space.

U = zeros(n,n);
U(I0) = 1:N;
Ind = X+dX + (Y+dY-1)*n;
I = U(Ind);

Remove the indices that correspond to low pass wavelet coefficients.

I(:,sum(I==0)>0) = [];

A block is defined as \(B_i = \{ I_{k,i} \}_{k=1}^{w^2}\). Define the energy.

G = @(x)sum( sqrt(sum(x(I).^2)) );
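The same vectorized trick works in NumPy: if an index matrix stacks each block's \(w^2\) entries as one column, fancy indexing evaluates all block norms at once. The toy coefficient vector and index matrix below are ours:

```python
import numpy as np

x = np.arange(1.0, 9.0)                          # 8 coefficients
I = np.array([[0, 4], [1, 5], [2, 6], [3, 7]])   # two blocks of four indices, one per column
# x[I] gathers each block into a column; sum the blocks' l2 norms
G = np.sum(np.sqrt(np.sum(x[I] ** 2, axis=0)))
```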

As a sanity check, display the block structure in the coefficient space.

[A,tmp] = meshgrid( randperm(size(I,2)), ones(w*w,1) );
x = zeros(N,1); x(I) = A;
Z = zeros(n,n); Z(I0) = x;
clf; imageplot(Z);
colormap jet(256);

Exercise 6: (check the solution) Define the proximal operator \( \text{Prox}_{\ga G} \) of \(G\), and its reversed proximal mapping.


Exercise 7: (check the solution) Implement the DR iterative algorithm. Keep track of the evolution of \(G(x_k)\).


Exercise 8: (check the solution) Display the image reconstructed using the \(P_0\) linear and \(P\) CS measurements.