July 2011 Archives

You can read all about it here: http://arxiv.org/abs/1103.6246. Here is the abstract:

It is well known that the performance of sparse vector recovery algorithms from compressive measurements can depend on the distribution underlying the non-zero elements of a sparse vector. However, the magnitudes of these effects have yet to be explored and formally presented. In this paper, I empirically investigate this dependence for seven distributions and fifteen recovery algorithms. The two morals of this work are: 1) any judgment of the recovery performance of one algorithm over that of another must be prefaced by the conditions under which this is observed to be true, including the sparse vector distributions and the criterion for exact recovery; and 2) a recovery algorithm must be selected carefully based on what distribution one expects to underlie the sensed sparse signal.
While I am on vacation in Hawaii :), I have the time to finish my revision and considerable expansion of my empirical studies of recovery algorithm performance for sparse vectors distributed in various ways. I have updated my arXiv paper, but the paper will not be available until Monday --- which I will announce then. Here is an intriguing picture of how the empirical phase transition can change depending on the distribution underlying the sensed sparse signal. Below are the best empirical phase transitions for sparse vectors distributed: Normal (N), Laplacian (L), Uniform (U), Bernoulli (B), Bimodal Gaussian (BG), Bimodal Uniform (BU), Bimodal Rayleigh (BR). My perfect recovery condition is a tough one: find the true support of the signal. (I find the exact recovery condition of Maleki and Donoho to be too forgiving for vectors distributed with densities having large tails and significant density around zero.) My sensing matrix is sampled from the uniform spherical ensemble. The ambient dimension of my signals is 400.
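For concreteness, here is a small NumPy sketch of the two ingredients just described: a sensing matrix drawn from the uniform spherical ensemble (i.i.d. Gaussian columns normalized to unit norm) and the strict support-recovery criterion. The function names and the exact form of the check are my own illustration of the setup, not code from the paper.

    import numpy as np

    def uniform_spherical_ensemble(m, N, rng=None):
        """Sensing matrix with columns drawn uniformly from the unit sphere in R^m."""
        rng = np.random.default_rng() if rng is None else rng
        Phi = rng.standard_normal((m, N))
        return Phi / np.linalg.norm(Phi, axis=0)

    def support_recovered(x_true, x_hat):
        """Strict criterion: the largest entries of x_hat sit exactly on supp(x_true)."""
        true_support = np.flatnonzero(x_true)
        s = true_support.size
        est_support = np.argsort(np.abs(x_hat))[-s:]
        return set(est_support.tolist()) == set(true_support.tolist())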

BP and AMP (approximate message passing) perform the same for the Bernoulli and bimodal uniform distributions, but AMP takes much less time. For the other five distributions, SL0 performs best at larger indeterminacies (\(\delta > 0.2\)), and PrOMP performs best at smaller indeterminacies. Like AMP, SL0 is incredibly fast. From this graph we see the large difference in recoverability from compressive measurements of signals distributed Laplacian versus Bernoulli.

[Figure: phasebest.png, the best empirical phase transitions for the seven sparse vector distributions]

Noting the similarity between the two, Maleki and Donoho generalize CoSaMP and subspace pursuit (SP) into two-stage thresholding (TST). Given the current solution \(\vx_k\), the residual \(\vr_k\), and the sparsity \(s\), TST with parameters \((\alpha,\beta)\) first finds $$ J := \mathcal{S} \left [T_{\alpha s} ( \vx_k + \kappa \MPhi^*\vr_k ) \right ] $$ where \(0 < \kappa < 1\) is a relaxation parameter, \(T_{\alpha s}(\vy)\) nulls all elements of \(\vy\) except for the \(\alpha s\) ones with the largest magnitudes, and \(\mathcal{S}(\vy)\) returns the support set of the vector \(\vy\). TST then thresholds again to find the new support $$ \Omega_{k+1} = \mathcal{S}\left [ T_{\beta s} \left( \arg \min_{\vx} || \vu - \MPhi \MI_{\mathcal{S}(\vx_k)\cup J} \vx ||_2 \right) \right ] $$ where \(T_{\beta s}(\vy)\) nulls all elements of \(\vy\) except for the \(\beta s\) ones with the largest magnitudes, and the \(N \times N\) diagonal matrix \(\MI_{J}\) has \([\MI_{J}]_{jj} = 1 \; \forall j \in J\) and zeros elsewhere. The new solution is then computed by $$ \vx_{k+1} = \arg \min_\vx || \vu - \MPhi\MI_{\Omega_{k+1}}\vx||_2. $$ When \((\alpha,\beta) = (2,1)\), TST becomes similar to CoSaMP; with \(\alpha = \beta = 1\), it becomes similar to SP. However, it appears similarities can be deceiving.
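To make the two stages concrete, here is a minimal NumPy sketch of a single TST\((\alpha,\beta)\) iteration following the equations above. The function name, the default \(\kappa\), and the use of a dense least-squares solve are illustrative choices of mine, not Maleki and Donoho's reference implementation.

    import numpy as np

    def tst_iteration(Phi, u, x_k, s, alpha=1.0, beta=1.0, kappa=0.6):
        """One two-stage thresholding step; returns the new estimate x_{k+1}."""
        N = Phi.shape[1]
        r_k = u - Phi @ x_k                      # current residual
        proxy = x_k + kappa * (Phi.T @ r_k)      # x_k + kappa * Phi^* r_k (real Phi)

        # Stage 1: support of the alpha*s largest-magnitude entries of the proxy
        J = np.argsort(np.abs(proxy))[-int(alpha * s):]
        candidate = np.union1d(np.flatnonzero(x_k), J)   # S(x_k) union J

        # Least squares restricted to the candidate support
        b = np.zeros(N)
        b[candidate] = np.linalg.lstsq(Phi[:, candidate], u, rcond=None)[0]

        # Stage 2: keep only the beta*s largest-magnitude entries
        Omega = np.argsort(np.abs(b))[-int(beta * s):]

        # Final least squares on the new support Omega_{k+1}
        x_next = np.zeros(N)
        x_next[Omega] = np.linalg.lstsq(Phi[:, Omega], u, rcond=None)[0]
        return x_next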
This algorithm power hour is devoted to Gradient Projection for Sparse Reconstruction (GPSR), presented in the paper: M. Figueiredo, R. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," IEEE J. Selected Topics in Signal Process., vol. 1, no. 4, pp. 586-597, 2007.
While running simulations last night I saw some warnings about ill-conditioned matrices. I didn't think much of it at the time, but I awoke this morning knowing where this problem comes from. In my original version of CoSaMP, I have the following:
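(The MATLAB snippet that followed in the original post is not reproduced here.) As a generic stand-in, here is a sketch of the CoSaMP least-squares step where such warnings typically arise, assuming the usual culprit: a candidate support with more indices than measurements, or nearly parallel columns, which makes the subproblem rank deficient.

    import numpy as np

    def cosamp_ls_step(Phi, u, T):
        """Least squares restricted to the candidate support index set T."""
        Phi_T = Phi[:, T]
        # Forming the normal equations squares the condition number and is where
        # "ill-conditioned matrix" warnings tend to appear when T has more
        # indices than Phi has rows, or contains nearly parallel columns:
        #   a = np.linalg.solve(Phi_T.T @ Phi_T, Phi_T.T @ u)
        # A rank-revealing least-squares solver is the safer route:
        a, *_ = np.linalg.lstsq(Phi_T, u, rcond=None)
        return a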
This algorithm power hour is devoted to Approximate Message Passing (AMP), presented in the paper: D. L. Donoho, A. Maleki, and A. Montanari, "Message-passing algorithms for compressed sensing," Proc. National Academy of Sciences, vol. 106, no. 45, pp. 18914-18919, Nov. 2009.
This algorithm power hour is devoted to Subspace Pursuit, presented in the paper: W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Trans. Inf. Theory, vol. 55, pp. 2230-2249, May 2009.
This algorithm power hour is devoted to Compressive Sampling Matching Pursuit (CoSaMP), presented in the paper: D. Needell and J. A. Tropp, "CoSaMP: iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmonic Anal., vol. 26, no. 3, pp. 301-321, 2009.
This algorithm power hour is devoted to Stagewise Orthogonal Matching Pursuit (StOMP), presented in the paper: D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit," submitted to IEEE Trans. Inf. Theory in 2006, but apparently not accepted or published elsewhere; it is available as a technical report.
This algorithm power hour is devoted to Regularized Orthogonal Matching Pursuit (ROMP), presented in the paper: D. Needell and R. Vershynin, "Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit," IEEE J. Selected Topics in Signal Process., vol. 4, pp. 310-316, Apr. 2010.
About six of the many things I learned at SPARS2011 concern how the world of algorithms for the solution of underdetermined systems is growing and growing. My feeble study of nine algorithms with seven sparse vector distributions appears entirely insufficient for providing a good review of state-of-the-art work. Now I am working to fix this state of affairs, and to submit my extended and revised work (rejected from Signal Processing Letters) to this special issue of the EURASIP Journal on Advances in Signal Processing, where it is a much better fit and can fill more than 4 pages. Gone is the testing of CMP (it does pretty much the same as OMP), and in comes the following parade of characters:
As part of my research activities funded by the Danish government, I am happy to announce my new blog: Pursuits in the Null Space. (I decided to change the name from "Null Space Pursuits" because there already is a Null Space Pursuit, though it is not a blog.) I have copied all of my content from CRISSP to here (and, I think, fixed all the links). I will continue to document my research here, in varying detail, over the next 30 months.

SPARS 2011, day 4

The fourth and final day of SPARS 2011 served up two plenaries by two prodigious researchers: Joel Tropp and Stephen Wright. At the beginning of his talk, Tropp asked who in the room knows how MATLAB computes the SVD. Only a few out of about 200 raised their hand, and a few more gestured that they kind of knew. The problem is that the methods we use today are treated as black boxes, but are based on extremely optimized classical methods that are incapable of working with massive matrices (billions by billions and up). So, we need better tools. He presented his work on computing the SVD with a randomized algorithm ... which at first sounds scarily inaccurate, but proves to be extremely effective at a much reduced computational cost.
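For readers who, like most of that room, have not looked inside these routines, here is a minimal sketch of the randomized range-finder idea in the spirit of the Halko, Martinsson, and Tropp line of work. It is my own illustration, not the algorithm Tropp presented, and the target rank \(k\) and oversampling \(p\) are arbitrary choices.

    import numpy as np

    def randomized_svd(A, k, p=10, rng=None):
        """Approximate the top-k SVD of A via a randomized range finder."""
        rng = np.random.default_rng() if rng is None else rng
        Omega = rng.standard_normal((A.shape[1], k + p))   # random test matrix
        Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sampled range
        B = Q.T @ A                           # small (k+p)-by-n matrix
        U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
        U = Q @ U_small                       # lift back to the original space
        return U[:, :k], s[:k], Vt[:k, :]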

In the last plenary, Wright presented a lot of work on state-of-the-art methods for regularized optimization. At the beginning, he showed some fantastic pictures that he called an "Atlas of the Null Space," which showed where solutions to min l1 are the same as min l0. His talk centered on the message that though we talk a lot about exact solutions, or sparsest representations, most real-world applications only need good algorithms that find the correct support well before the whole solution converges. The trick is to determine when to stop an algorithm, and then post-process the result to find a better solution.

In between these talks there were plenty of others, discussing various topics of interest in dictionary learning and audio inpainting (Po'D coming soon), as well as several posters, one of which was by CRISSP reader Graham Coleman. He presented his novel work applying l1 minimization over sound feature mixtures to drive concatenative sound synthesis, or musaicing. (I have discussed an earlier version of this work here.) Coleman's approach appears to be the next generation of concatenative synthesis.

All in all, this workshop was an excellent use of my time and money. Its duration was just right: after the last session I really felt as if my fuel tank was completely full. The organizers did an extremely nice job of selecting plenary speakers, assembling a wide range of quality work, and finding an accommodating venue with helpful staff. I even heard that the committee was able to raise enough funds so that many of the student participants had their accommodations paid for. I am really looking forward to the 2013 edition of SPARS (or CoSPARS).
