Inverse problem: solving by optimization


$$ \hat{x} = \mathrm{argmin}_x\ \mathcal{L}(y,A,x) + \lambda \mathcal{R}(x) $$ where

  • $\mathcal{L}(y,A,x)$ is the loss: models the link between the signal $x$ and the observation $y$ through the operator $A$
  • $\mathcal{R}(x)$ is the regularization: models the “prior” on the signal $x$
  • $\lambda > 0$ is a “hyper-parameter” balancing data fidelity and regularization.
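
For the fully quadratic instance of this template (L2 loss, L2 regularization), the minimizer has a closed form, $\hat{x} = (A^\top A + \lambda I)^{-1} A^\top y$, which makes a useful sanity check. A minimal sketch with hypothetical dimensions and a random operator:

```python
import numpy as np

# Sketch (assumed setup): L2 loss + L2 regularization ("ridge" / Tikhonov)
# has the closed-form solution x = (A^T A + lambda*I)^{-1} A^T y.
rng = np.random.default_rng(0)
N, M = 20, 50                       # signal length, number of observations
A = rng.standard_normal((M, N))     # hypothetical forward operator
x_true = rng.standard_normal(N)
y = A @ x_true + 0.1 * rng.standard_normal(M)  # noisy observation

lam = 0.5
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ y)

# With more observations than unknowns and small noise, the
# relative error should be small.
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```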

Loss

  • $\frac{1}{2} ||y - Ax ||_2^2$: energy of the residual, adapted to white Gaussian noise
  • $||y - Ax ||_1$: robust regression, adapted to impulsive noise and outliers
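
The robustness of the L1 loss can be seen in the simplest case where $A$ maps a scalar $x$ to a constant vector: the L2 minimizer is the mean of $y$, while the L1 minimizer is the median. A toy sketch:

```python
import numpy as np

# Toy illustration: estimating a scalar x from y = x*1 + n.
# The L2 loss is minimized by the mean, the L1 loss by the median,
# which explains the robustness of L1 to outliers.
y = np.array([1.0, 1.1, 0.9, 1.05, 100.0])  # one gross outlier

x_l2 = y.mean()       # argmin_x sum_i (y_i - x)^2
x_l1 = np.median(y)   # argmin_x sum_i |y_i - x|
print(x_l2, x_l1)     # the mean is pulled toward the outlier; the median is not
```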

Regularization

  • $\frac{1}{2} ||x||_2^2$: energy of the signal
  • $\frac{1}{2} || \nabla x ||_2^2$: energy of the derivative
  • $ || x ||_1$: sparsity of the signal
  • $ || \nabla x ||_1$: sparsity of the derivative (total variation)
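
For the L1 penalty, the special case $A = I$ already shows how sparsity arises: the problem $\mathrm{argmin}_x\ \frac{1}{2}||y - x||_2^2 + \lambda ||x||_1$ is solved coordinate-wise by soft-thresholding, which sets small coefficients exactly to zero. A minimal sketch:

```python
import numpy as np

# Sketch: with A = I, the L1-regularized problem
#   argmin_x (1/2)||y - x||_2^2 + lambda*||x||_1
# has the closed-form solution x = soft(y, lambda),
# where soft-thresholding shrinks every entry toward zero by lambda
# and zeroes out entries smaller than lambda in magnitude.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

y = np.array([3.0, -0.2, 0.5, -2.5, 0.1])
x = soft_threshold(y, 1.0)
print(x)  # small entries are set exactly to zero
```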

Use of the dictionary

  • $\mathcal{R}(x)$ can be difficult to choose
  • Idea: use a dictionary (such as Wavelets or time-frequency), where the signal is known to be sparse (well represented by few coefficients)
  • Let $\Phi\in\mathbb{R}^{N\times K}$ be such a dictionary, with $x=\Phi\alpha$; the coefficients $\alpha\in\mathbb{R}^K$ are called the synthesis coefficients

Then, the direct problem becomes

$$ y = A\Phi \alpha + n $$

And the minimization problem becomes

$$ \hat{\alpha} = \mathrm{argmin}_\alpha\ \frac{1}{2} ||y - A\Phi\alpha||_2^2 + \lambda ||\alpha||_1 $$ and $$ \hat{x} = \Phi \hat{\alpha} $$
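
This synthesis problem can be solved with ISTA (iterative soft-thresholding): a gradient step on the quadratic term followed by soft-thresholding, with step size $1/L$ where $L = ||A\Phi||_2^2$. A hedged sketch with a hypothetical random dictionary and operator (a structured dictionary such as wavelets would be used in practice):

```python
import numpy as np

# Sketch of ISTA for  argmin_alpha (1/2)||y - A Phi alpha||_2^2 + lam*||alpha||_1,
# followed by the synthesis x = Phi alpha. The dictionary and operator below
# are random placeholders, not a real wavelet or time-frequency dictionary.
def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
N, K, M = 30, 60, 25
Phi = rng.standard_normal((N, K)) / np.sqrt(N)   # assumed dictionary (N x K)
A = rng.standard_normal((M, N)) / np.sqrt(M)     # assumed forward operator
alpha_true = np.zeros(K)
alpha_true[[3, 17, 42]] = [2.0, -1.5, 1.0]       # sparse ground truth
y = A @ Phi @ alpha_true + 0.01 * rng.standard_normal(M)

B = A @ Phi
L = np.linalg.norm(B, 2) ** 2       # Lipschitz constant of the gradient
lam = 0.05
alpha = np.zeros(K)
for _ in range(500):                 # ISTA: gradient step, then shrinkage
    grad = B.T @ (B @ alpha - y)
    alpha = soft(alpha - grad / L, lam / L)

x = Phi @ alpha                      # synthesize the signal estimate
print(np.count_nonzero(alpha))       # only a few coefficients survive
```

FISTA (an accelerated variant) or coordinate descent would converge faster; ISTA is shown because it maps one-to-one onto the two terms of the objective.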