Phase transitions in higher dimension

The Asilomar conference was great! I met several new people, and reconnected with several others. In particular, I had an informative chat with Jeremy Vila, who has designed with Phil Schniter one of the most promising inverse problem solvers: expectation-maximization Bernoulli-Gaussian approximate message passing (EMBGAMP). Before the conference I had run some simulations with the algorithm at an ambient dimensionality of 400, and was not seeing appreciable increases in the phase transitions across all problem indeterminacies. Jeremy took a look at my simulations and suggested performing them in a higher dimension. What was going on was that with only a few components active, the algorithm was having a hard time estimating the parameters of the distribution underlying the sparse signal --- as can be expected. So upon returning home, I reran my simulations with an ambient dimension of 1000, and now see improved performance over all indeterminacies. (I am also running experiments this week at 2000.)
Below we see the empirical phase transitions of 14 different algorithms for recovery of sparse signals from noiseless compressive samples, with the non-zero elements distributed Normally. I define a recovery as successful when the support is returned exactly. EMBGAMP is designed precisely for this distribution, and because of that we see a large margin of improvement. As an added benefit, it runs impressively fast!
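To make the success criterion concrete, here is a minimal Python sketch of the exact-support-recovery check. The sizes `N` and `k`, the tolerance, and the helper name `exact_support_recovery` are my own illustrative choices (matching the post's ambient dimension of 1000); any recovery algorithm would supply the estimate in place of the perturbed copies below.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 1000, 50  # ambient dimension and sparsity (illustrative values)

# Bernoulli-(single-)Gaussian sparse signal: k non-zeros drawn i.i.d. N(0, 1)
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.standard_normal(k)

def exact_support_recovery(x_true, x_hat, tol=0.0):
    """Declare success iff the estimated support equals the true support exactly."""
    return np.array_equal(np.abs(x_true) > tol, np.abs(x_hat) > tol)

# A perfect estimate succeeds; dropping even one active entry is a failure
assert exact_support_recovery(x, x.copy())
x_bad = x.copy()
x_bad[support[0]] = 0.0
assert not exact_support_recovery(x, x_bad)
```

Note this criterion is stricter than a small-relative-error criterion: an estimate with tiny but non-zero values off the true support counts as a failure.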

[Figure: Normal1000.png --- empirical phase transitions at ambient dimension 1000, Normally distributed non-zeros]

When the EMBGAMP algorithm attempts to recover sparse signals that are not distributed Bernoulli-(single-)Gaussian, its performance degrades. Below are the phase transitions of these same algorithms, but for sparse signals distributed Rademacher (now that I have learned the proper name :), which means the non-zero elements are equiprobable in \(\{-1,1\}\). (I was calling this distribution "Bernoulli" before. Maleki and Donoho called it CARS, for "constant amplitude random sign".) But look at this! EMBGAMP is still better than regular AMP, and than the theoretical transition of \(\ell_1\) minimization. What a graceful way to "fail". :)

[Figure: Bernoulli1000.png --- empirical phase transitions at ambient dimension 1000, Rademacher-distributed non-zeros]
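For completeness, here is a sketch of how a Rademacher (CARS) sparse signal differs from the Bernoulli-Gaussian one: the active entries carry no amplitude information at all, only random signs. Again, `N` and `k` are illustrative values, not the post's full phase-transition grid.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 1000, 50  # ambient dimension and sparsity (illustrative values)

# Rademacher / CARS sparse signal: k non-zeros equiprobable in {-1, +1}
x = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k)

# Every active entry has constant amplitude 1 and a random sign
assert np.count_nonzero(x) == k
assert set(np.unique(x[support])) <= {-1.0, 1.0}
```

Because the non-zero amplitudes are constant rather than Gaussian, EMBGAMP's Bernoulli-Gaussian signal prior is mismatched here, which is why its margin over the other algorithms shrinks on this signal class.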

This page contains a single entry by Bob L. Sturm published on November 14, 2011 2:54 PM.
