August 2011 Archives

Hello, and welcome to Paper of the Day (Po'D): Equivalence Between BPDN and Support Vector Regression Edition. Today's paper is F. Girosi, "An Equivalence Between Sparse Approximation and Support Vector Machines", Neural Computation, vol. 10, pp. 1455-1480, 1998. In this paper, Girosi shows that under some conditions, the optimization problem involved in basis pursuit de-noising (BPDN) can be cast as a quadratic programming (QP) problem that is equivalent to the one solved for support vector regression (or more precisely, \(\epsilon\)-SVR). Chen, Donoho, and Saunders proposed BPDN as a tractable technique for sparse approximation in 1995. It's good to note that in his PhD thesis, Chen showed that the optimization involved in BPDN is equivalent to a perturbed linear programming problem, which is "really quadratic programming".
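The QP reduction at the heart of this equivalence is easy to see in code. Below is a minimal sketch on synthetic data, using the standard variable splitting \(x = u - v\) with \(u, v \ge 0\); the problem sizes, \(\lambda\), and the use of scipy's L-BFGS-B solver are my own illustrative choices, not anything from the paper.

```python
# BPDN: minimize 0.5*||Ax - y||^2 + lam*||x||_1. Splitting x = u - v
# with u, v >= 0 turns the nonsmooth problem into a smooth
# bound-constrained QP.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M, N, K, lam = 50, 100, 5, 0.05
A = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
supp = rng.choice(N, K, replace=False)
x0[supp] = rng.uniform(1, 2, K) * rng.choice([-1.0, 1.0], K)
y = A @ x0

def qp_objective(z):
    # z stacks [u; v]; objective 0.5||A(u-v)-y||^2 + lam*1'(u+v), with gradient
    u, v = z[:N], z[N:]
    r = A @ (u - v) - y
    g = A.T @ r                      # gradient with respect to x = u - v
    return 0.5 * r @ r + lam * (u.sum() + v.sum()), np.concatenate([g + lam, -g + lam])

res = minimize(qp_objective, np.zeros(2 * N), jac=True,
               method="L-BFGS-B", bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
```

At the optimum at most one of \(u_i, v_i\) is nonzero, so \(u_i + v_i = |x_i|\) and the QP objective coincides with the BPDN objective.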

CoSaMP and CoSaOMP

Przemek Dymarski writes,
I have noticed that your CoSaMP code exhibits an important difference from the original algorithm of Needell and Tropp: you recalculate the K gains at the end of each iteration, but in the original algorithm the gains are left unchanged. Thus, you perform two pinv's per iteration while Needell and Tropp perform only one. Your version is in fact a variant of SP; the only difference is the number of vectors being merged (2K in SP, 3K in CoSaMP). Of course your version is better (and slightly more costly), but is it still CoSaMP?
Przemek is absolutely correct!
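In code, the difference amounts to one optional least-squares solve per iteration. Here is a toy sketch (my own naming and structure, not either published implementation):

```python
import numpy as np

def cosamp(A, y, K, n_iter=20, reestimate=False):
    """Toy CoSaMP sketch. reestimate=False follows Needell and Tropp
    (one least-squares solve per iteration); reestimate=True adds a
    second solve that recomputes the K surviving gains, the SP-like
    variant discussed above."""
    N = A.shape[1]
    x = np.zeros(N)
    for _ in range(n_iter):
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10:
            break
        omega = np.argsort(np.abs(A.T @ r))[-2 * K:]      # 2K strongest candidates
        T = np.union1d(omega, np.flatnonzero(x))          # merged support, <= 3K atoms
        b, *_ = np.linalg.lstsq(A[:, T], y, rcond=None)   # first LS ("pinv") solve
        top = np.argsort(np.abs(b))[-K:]                  # prune to the best K
        x = np.zeros(N)
        if reestimate:
            g, *_ = np.linalg.lstsq(A[:, T[top]], y, rcond=None)  # second LS solve
            x[T[top]] = g
        else:
            x[T[top]] = b[top]                            # keep the pruned gains as-is
    return x
```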
Hello, and welcome to Papers of the Day (Po'D): Audio Coding with Psychoacoustic-adaptive Matching Pursuits. Today's historical papers address perceptual audio coding using greedy methods of sparse approximation, or something very much like them.
In my recent article (submitted), I briefly look at the differences between exact recovery conditions based on the normalized Euclidean distance between the real and recovered solutions, and those based on the recovery of the support of the solution. At a dimension of \(N=400\), using sensing matrices sampled from the uniform spherical ensemble, I found for sparse vectors with Laplacian-distributed entries the following differences in the empirical phase transitions between the two recovery conditions (no noise in the measurements):
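In code, the two success criteria look like this (the tolerance is an arbitrary illustrative choice; in the article the criteria are applied to the outputs of actual recovery algorithms):

```python
import numpy as np

def error_recovered(x_true, x_hat, tol=1e-2):
    # success in the normalized-Euclidean-distance sense
    return np.linalg.norm(x_true - x_hat) <= tol * np.linalg.norm(x_true)

def support_recovered(x_true, x_hat):
    # success in the support sense: the K largest entries of x_hat
    # sit exactly on the support of the K-sparse x_true
    supp = np.flatnonzero(x_true)
    K = supp.size
    top = np.argsort(np.abs(x_hat))[-K:]
    return set(top) == set(supp)
```

For Laplacian-distributed coefficients the two can disagree: a solution can be within distance tolerance while a small-magnitude entry of the support is missed entirely.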
Hello, and welcome to Paper of the Day (Po'D): Exemplar-based Sparse Representations for Noise Robust Automatic Speech Recognition. Today's paper is a very recent contribution: J. F. Gemmeke, T. Virtanen and A. Hurmalainen, "Exemplar-based Sparse Representations for Noise Robust Automatic Speech Recognition," IEEE Trans. Audio, Speech, and Lang. Process., vol. 19, no. 7, pp. 2067-2080, Sep. 2011. Subsumed into the work described in this paper are the following works:
  • J. Gemmeke and T. Virtanen, "Noise robust exemplar-based connected digit recognition," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Process., (Dallas, TX, USA), March 2010 (Po'D here)
  • J. Gemmeke, L. ten Bosch, L. Boves, and B. Cranen, "Using sparse representations for exemplar based continuous digit recognition," in Proc. EUSIPCO, pp. 1755-1759, Glasgow, Scotland, Aug. 2009 (Po'D here)
  • J. Gemmeke and B. Cranen, "Noise robust digit recognition using sparse representations," in Proc. of ISCA Speech Analysis and Processing for Knowledge Discovery, 2008.
  • J. Gemmeke, H. Van hamme, B. Cranen and L. Boves, "Compressive sensing for missing data imputation in noise robust speech recognition," IEEE J. Selected Topics in Signal Process., vol. 4, no. 2, pp. 272-287, 2010.

CoSaMP in "Comparative Studies"

In this Po'D I noted that CoSaMP was acting rather odd. When I looked at the code made available by the authors, I saw the reason for it. (Hooray for reproducibility!) Przemyslaw Dymarski has sent me some graphs of the correction to CoSaMP I discuss here and here.

Asilomar 2011: Room Sharing?

Is anyone going to Asilomar 2011 and would like to share a room? I will be there all three nights, driving down from San Francisco on Nov. 5 (after 17h), and back on Nov. 9 (before 16h). I could check the box that says, "Assign roommate randomly," but it would be nice to have some control over that situation. I am a clean roommate, but I might have a surfboard in the room.
To accompany this interesting discussion about the behavior of the inner product magnitudes as matching pursuit (MP) proceeds, I decided to run some experiments in the same way as those of Dymarski et al.
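For concreteness, here is the kind of toy experiment I mean: a unit-norm Gaussian dictionary, with the largest inner product magnitude recorded at each MP iteration (the dimensions here are arbitrary choices, not those of Dymarski et al.):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, n_iter = 64, 256, 50
D = rng.standard_normal((M, N))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
s = rng.standard_normal(M)             # decompose a Gaussian signal
r = s.copy()
max_mags = []
for _ in range(n_iter):
    c = D.T @ r                        # all inner products with the residual
    k = np.argmax(np.abs(c))
    max_mags.append(np.abs(c[k]))      # the quantity whose behavior is studied
    r = r - c[k] * D[:, k]             # standard MP residual update
```

Plotting `max_mags` against iteration number shows how the selected correlation magnitudes evolve as the residual energy decays.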
The other day, I analyzed the use of order statistics to study matching pursuit (MP) by Manuel Moussallam et al. in their paper "Matching Pursuits with Random Sequential Subdictionaries" (Po'D here). Manuel has returned from his vacation and kindly responds to my comments below. Thank you Manuel!
Hello, and welcome to the Papers of the Day (Po'D): Comparative Studies of Greedy Sparse Decompositions. Today's two papers are an interesting lot, looking at the performance of some greedy algorithms for the recovery of sparse signals:
  1. P. Dymarski, N. Moreau, and G. Richard, "Greedy sparse decompositions: a comparative study," EURASIP Journal on Advances in Signal Processing, Aug. 2011.
  2. G. Rath and A. Sahoo, "A comparative study of some greedy pursuit algorithms for sparse approximation," in Proc. EUSIPCO, Glasgow, Scotland, Aug. 2009.
(I attended a seminar by Nicolas Moreau on this subject during my postdoc in Paris in 2009, where he was persuaded to publish such a comprehensive review. It is nice to see the final paper!)
Hello, and welcome to Paper of the Day (Po'D): Audio Inpainting Edition. I saw a presentation of today's paper at the recent SPARS 2011 workshop: A. Adler, V. Emiya, M. G. Jafari, M. Elad, R. Gribonval and M. D. Plumbley, "Audio Inpainting", submitted, 2011. This work contains some key results of the SMALL project. Also related to this paper is A. Adler, V. Emiya, M. G. Jafari, M. Elad, R. Gribonval and M. D. Plumbley, "A Constrained Matching Pursuit Approach to Audio Declipping," in Proc. IEEE ICASSP, Prague, Czech Republic, May 2011. At SPARS 2011, Valentin Emiya announced the release of a MATLAB toolbox for exploring audio inpainting and declipping using sparse approximation.
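The basic inpainting idea is simple to sketch: assume the signal is sparse in some dictionary, fit the sparse model using only the observed samples, and synthesize the missing ones from the fitted model. Here is a generic toy version (my own construction, not the toolbox API or the paper's algorithm) with an orthonormal DCT dictionary and OMP:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 128, 3
n = np.arange(N)
# columns of Psi are orthonormal DCT-II atoms
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
Psi[:, 0] /= np.sqrt(2.0)

x = np.zeros(N)
supp = rng.choice(N, K, replace=False)
x[supp] = rng.uniform(1, 2, K) * rng.choice([-1.0, 1.0], K)
s = Psi @ x                                         # clean K-sparse signal
keep = np.sort(rng.choice(N, 110, replace=False))   # indices of observed samples

def omp(A, y, n_atoms):
    # plain orthogonal matching pursuit on the observed rows only
    resid, chosen = y.copy(), []
    for _ in range(n_atoms):
        chosen.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        resid = y - A[:, chosen] @ coef
    out = np.zeros(A.shape[1])
    out[chosen] = coef
    return out

x_hat = omp(Psi[keep, :], s[keep], K)
s_hat = Psi @ x_hat                    # full signal, missing samples filled in
```

Declipping differs mainly in the constraints: the clipped samples are not simply missing, but known to exceed the clipping level in magnitude.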
In the Po'D yesterday, I discussed the use of order statistics for analyzing the behavior of matching pursuit (MP). I think this is a cool idea, so here I want to do a little more poking around.
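One handy fact underlying such an analysis: if \(N\) correlations were iid with CDF \(F\), their maximum has CDF \(F^N\). A quick empirical sanity check of that fact with Gaussian surrogates (a toy setup of my own, not the paper's experiment):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
N, trials = 64, 4000
# max of |g_1|, ..., |g_N| for iid standard normal g_i has CDF (2*Phi(x)-1)^N
samples = np.abs(rng.standard_normal((trials, N))).max(axis=1)
xs = np.sort(samples)
emp_cdf = np.arange(1, trials + 1) / trials
theo_cdf = (2 * norm.cdf(xs) - 1) ** N
max_gap = np.max(np.abs(emp_cdf - theo_cdf))   # sup distance, empirical vs. theory
```

In MP the correlations are neither independent nor identically distributed after the first iteration, which is exactly where the interesting deviations from this idealized picture appear.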

Note 20110809: Manuel has responded to my analysis here. Thank you!
Hello, and welcome to Paper of the Day (Po'D): Matching Pursuits with Random Sequential Subdictionaries Edition. I saw today's paper at the recent SPARS 2011 workshop: M. Moussallam, L. Daudet, and G. Richard, "Matching Pursuits with Random Sequential Subdictionaries", which is at arXiv while it is being reviewed. The approach proposed in the paper is quite similar to ones I have seen before, e.g., in
  1. P. J. Durka, D. Ircha, and K. J. Blinowska, "Stochastic time-frequency dictionaries for matching pursuit," IEEE Trans. Signal Process., vol. 49, pp. 507-510, Mar. 2001 (Po'D here);
  2. M. Elad and I. Yavneh, "A plurality of sparse representations is better than the sparsest one alone," IEEE Trans. Info. Theory, vol. 55, pp. 4701-4714, Oct. 2009 (Po'D here);
  3. S. E. Ferrando, E. J. Doolittle, A. J. Bernal, and L. J. Bernal, "Probabilistic matching pursuit with Gabor dictionaries", Signal Processing, vol. 80, no. 10, pp. 2099-2120, Oct. 2000 (Po'D here);
  4. A. Divekar and O. K. Ersoy, "Probabilistic Matching Pursuit for Compressive Sensing," Electrical and Computer Engineering Technical Report, Purdue University, May 2010 (Po'D here).
Not to forget Bayesian approaches:
  1. H. Zayyani, M. Babaie-Zadeh, and C. Jutten, "Bayesian pursuit algorithm for sparse representation," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., (Taipei, Taiwan), pp. 1549-1552, Apr. 2009 (Po'D here)
  2. P. Schniter, L. C. Potter, and J. Ziniel, "Fast Bayesian Matching Pursuit," in Proc. Workshop on Information Theory and Applications (ITA), (La Jolla, CA), Jan. 2008 (Po'D here)
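The core idea of Moussallam et al., restricting atom selection at each iteration to a randomly drawn subdictionary, can be sketched as follows. This is only a toy illustration of the general idea, not their exact algorithm; the dictionary, subset size, and iteration count are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, sub, n_iter = 64, 512, 128, 100
D = rng.standard_normal((M, N))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
s = rng.standard_normal(M)
r = s.copy()
for _ in range(n_iter):
    idx = rng.choice(N, sub, replace=False)    # fresh random subdictionary
    c = D[:, idx].T @ r                        # correlate only with the subset
    k = idx[np.argmax(np.abs(c))]              # best atom within the subset
    a = D[:, k] @ r
    r = r - a * D[:, k]                        # standard MP residual update
```

Each iteration costs a fraction of a full-dictionary search, at the price of sometimes missing the globally best atom; the randomization also decorrelates the selections across iterations.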

Coming Papers

I have finally returned from my vacation, which is why "Pursuits in the Null Space" has been so dormant. In the coming months there will be a flurry of activity here, as I gear up to begin in January a two-year research project titled, "Greedy Sparse Approximation and the Automatic Description of Audio and Music Data." It won't be entirely devoted to "greedy" methods, of course, or exclusively focused on audio and music data. But audio and music data provide a good real-world playground with plenty of stuff difficult to climb (sans injury).

And I have some papers accepted to Asilomar this year:
  1. B. L. Sturm, M. Christensen, and R. Gribonval, "Cyclic Greedy Algorithms for Recovering Compressively Sampled Sparse Signals".
  2. M. Christensen and B. L. Sturm, "A Perceptually Re-Weighted Mixed-Norm Method for Sparse Approximation of Audio Signals".
That should be a good conference, as there appear to be several interesting titles in the program. And this time I think I will rent a surfboard.

Regarding my investigations into the effects of coefficient distributions on sparse signal recovery, Phil Schniter has let me know about his work on combining approximate message passing with expectation maximization to learn the signal distribution during recovery. Here is a poster he and a student recently presented at the Duke Workshop on Sensing and Analysis of High-Dimensional Data. The results look promising.

The ICASSP 2012 deadline is a little more than a month away, and soon I will be teaching a course that is new to me: Multivariate Statistics and Pattern Recognition. For this course, I have just completed some notes for my probability review session titled, Three Fun Games of Probability, and the Big Bang Lottery, or, The Fatal Flaw of Arguing for Intelligent Design from Probability. This will be fun!


About this Archive

This page is an archive of entries from August 2011 listed from newest to oldest.
