November 2011 Archives

On his fantastic blog, Dirk discusses a simple scenario where BP fails to recover the sparsest solution. He constructs an overcomplete unit-norm dictionary \(\MPhi\) containing the standard basis for \(\mathbf{R}^N\) and two vectors that are each nearly orthogonal to the measurement vector \(\vb := \mathbf{1}\), but whose span contains it, e.g.,

$$ \MPhi = \begin{bmatrix} c(1+\epsilon/2) & c(-1+\epsilon/2) & 1 & 0 & \dots & \dots & 0\\ c(-1+\epsilon/2) & c(1+\epsilon/2) & 0 & 1 & \ddots & & 0\\ c\epsilon/2 & c\epsilon/2 & \vdots & \ddots & \ddots & \ddots & \vdots\\ \vdots & \vdots & \vdots & & \ddots & \ddots & 0\\ c\epsilon/2 & c\epsilon/2 & 0 & \dots & \dots & 0 & 1 \end{bmatrix} $$

where $$ c = \left[ 2 + \frac{N \epsilon^2}{4} \right]^{-1/2} = \frac{2}{\sqrt{8+N\epsilon^2}} $$ makes the first two columns unit norm.
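As a sanity check of this construction, here is a minimal numpy sketch (the values of \(N\) and \(\epsilon\) are my choices): it verifies that the first two columns are unit norm, that \(\vb\) has a 2-sparse representation, and that the \(\ell_1\) norm of that representation exceeds \(N\), the \(\ell_1\) norm of the dense representation on the standard basis, which is why BP prefers the latter.

```python
import numpy as np

# A minimal sketch of the construction above; N and eps are my choices.
N = 100
eps = 0.01
c = 2.0 / np.sqrt(8.0 + N * eps**2)

# The first two columns, built entrywise from the matrix above.
v1 = np.full(N, c * eps / 2.0); v1[0] += c; v1[1] -= c
v2 = np.full(N, c * eps / 2.0); v2[0] -= c; v2[1] += c
Phi = np.column_stack([v1, v2, np.eye(N)])      # N x (N+2) dictionary

print(np.linalg.norm(v1), np.linalg.norm(v2))   # both 1.0

# b = 1 is in the span of the first two columns: v1 + v2 = c*eps*1.
b = np.ones(N)
x2 = np.zeros(N + 2); x2[:2] = 1.0 / (c * eps)  # the 2-sparse solution
print(np.allclose(Phi @ x2, b))                 # True

# Its l1 norm, 2/(c*eps), exceeds N, the l1 norm of the dense solution
# on the standard basis, so BP returns the dense solution instead.
print(np.abs(x2).sum(), float(N))               # ~283 vs 100
```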

Recognition at last?

Received this morning:

Dear Dr. Bob L. Sturm,

The international conference: 2012 3rd International Conference on Legal Medicine, Medical Negligence and Litigation in Medical Practice & 3rd International Conference on Current Trends in Forensic Sciences, Forensic Medicine & Toxicology will be held on 3rd to 5th February, 2012 in Jaipur, Rajasthan, India. This event will be organized and hosted by Saraswathi Institute of Medical Sciences, Ghaziabad, Uttar Pradesh, India. EPS Inc., a Canada-based biomedical consulting agency, is authorized by the Scientific Committee to co-organize this important and exciting educational event.

On behalf of the organizing committee, we cordially invite you to attend the conferences to present your recent work and ideas and share your knowledge in these specific fields. For the conference program details, please visit our websites at www.epsglobal.ca and www.epsworldlink.com.

We are very interested in your article Recursive nearest neighbor search in a sparse and multiscale domain for comparing audio signals that is published on Signal Processing. This article includes some novel conceptions, which may impress the worldwide experts in your field. The conferences will present an excellent opportunity for you to introduce this article to the worldwide experts, highlighting the great significance of your research achievement. We believe you will enjoy the casual and interactive settings of this meeting...

---
Of course I will skip this opportunity. I just wish I had an automatic reply to these things:

Dear    2012 3rd International Conference on Legal Medicine, Medical Negligence and Litigation in Medical Practice & 3rd International Conference on Current Trends in Forensic Sciences, Forensic Medicine & Toxicology,

I appreciate your enthusiastic and bold invitation to present my work at your conference in    Jaipur, Rajasthan, India, but I have already received a similar request from the    5th International Multi-Conference on Engineering and Technological Innovation: IMETI 2012. Their transrobotic agent emailed me first, and I don't think it polite for me to decline, especially since their form email was thoughtfully in color.

Please consider me again in the future, especially after I obtain tenure.

Sincerely,

Hello, and welcome to Paper of the Day (Po'D): Cyclic Pure Greedy Algorithms Edition. We just finished and submitted today's paper for the proceedings of the 2011 Asilomar Conference: B. L. Sturm, M. G. Christensen and R. Gribonval, "Cyclic pure greedy algorithms for recovering compressively sampled sparse signals," Proc. IEEE Asilomar Conf. Signals, Systems and Comp., Pacific Grove, CA, Nov. 2011. I previously posted the slides here.

We explore the use of cyclic versions of the pure greedy algorithms matching pursuit (MP) and complementary matching pursuit (CompMP), both of which have been discussed on this blog many times before, for solving the inverse problems posed by compressed sensing (CS) in the noiseless regime. Previously, we had only considered applying the cyclic minimization principle to MP, and not to CS. Motivated by the work of Dymarski et al., we realized that CompMP is also a pure greedy algorithm (previously MP was thought to be the only one), with a computational complexity just as small as that of MP (a little more overhead at the beginning), and that it too can be encased within a cyclic minimization framework. So how do these two algorithms perform in full support recovery?

The performance of cyclic CompMP (CompCMP) easily beats that of orthogonal least squares (OLS) for Gaussian-distributed sparse signals, and even for the least favorable Rademacher distribution, as long as the refinement cycles are performed after each model augmentation. We also see that CompMP without any refinement performs as well as OLS for Rademacher signals. In comparison with state-of-the-art methods, when CompCMP refines after every augmentation, it performs competitively in support recovery while maintaining an extremely simple and low-complexity implementation. For Gaussian-distributed signals, CompCMP performs nearly as well as the best-performing SL0 and PrOMP. Curiously, when it waits to refine the model, CompCMP takes a heavier hit to performance than CMP.
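To make the refinement concrete, here is a minimal numpy sketch of one CMP refinement cycle, assuming unit-norm atoms (the function and variable names are mine, not from the paper): each selected atom is removed from the model in turn, and MP re-selects and re-weights it given everything else.

```python
import numpy as np

# A minimal sketch of one CMP refinement cycle, assuming unit-norm atoms
# (function and variable names are mine, not from the paper).
def cmp_refine(Phi, residual, support, coeffs):
    for i in range(len(support)):
        residual = residual + coeffs[i] * Phi[:, support[i]]  # remove atom i
        corr = Phi.conj().T @ residual
        support[i] = int(np.argmax(np.abs(corr)))             # MP re-selects...
        coeffs[i] = corr[support[i]]                          # ...and re-weights
        residual = residual - coeffs[i] * Phi[:, support[i]]
    return residual, support, coeffs
```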
Continuing from the naive OMP, the Cholesky OMP, and the QR OMP, we now look at using the matrix inversion lemma (MIL) to make OMP extremely efficient. This is an approach I have not seen before, but it might be the same as the block inversion approach discussed by Pati et al. in 1993. I still have to take a closer look.


Consider that in its \(k\)th iteration, OMP has the set of dictionary inner products \(\Gamma_l = \{\langle \vphi_l, \vphi_n \rangle\}_{n \in \Omega}\) and squared norms \(\{\|\vphi_n\|^2\}_{n \in \Omega}\), the sets of original and current inner products, \(\mathcal{I}_{0} = \{\langle \vu, \vphi_n \rangle \}_{n \in \Omega}\) and \(\mathcal{I}_{k} = \{\langle \vr_{k}, \vphi_n \rangle \}_{n \in \Omega}\), the current solution \(\vx_k\), and the basis dual to \(\MPhi_{\Omega_k}\), i.e., \(\MPsi_{\Omega_k} = \MPhi_{\Omega_k}(\MPhi_{\Omega_k}^H\MPhi_{\Omega_k})^{-1}\). OMP guarantees that \(\MPhi_{\Omega_k}\) has full column rank when \(k \le M\), and so \(\MPsi_{\Omega_k}\) always exists when \(k \le M\).
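To make the update concrete, here is a minimal numpy sketch, under my reading of the block matrix inversion lemma, of how \(\MB = (\MPhi_{\Omega_k}^H\MPhi_{\Omega_k})^{-1}\) can be grown when an atom \(\vphi\) joins the subdictionary (the function name is mine):

```python
import numpy as np

# A minimal sketch of growing B = inv(Phi_k^H Phi_k) by the block matrix
# inversion lemma when atom phi joins the subdictionary (function name mine).
def grow_gramian_inverse(B, Phi_k, phi):
    v = Phi_k.conj().T @ phi                    # new off-diagonal Gramian block
    Bv = B @ v
    beta = 1.0 / (np.vdot(phi, phi).real - np.vdot(v, Bv).real)  # inverse Schur complement
    return np.block([[B + beta * np.outer(Bv, Bv.conj()), -beta * Bv[:, None]],
                     [-beta * Bv.conj()[None, :],          np.array([[beta]])]])
```

If I have the bookkeeping right, the new solution then follows as \(\vx_{k+1} = \MB\, \MPhi_{\Omega_{k+1}}^H \vu\), where the right-hand side is assembled from the stored \(\mathcal{I}_0\) alone.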
I am back in Rennes, working again on developing and testing a very "fast" version of cyclic matching pursuit in the MPTK framework. You may remember all my trials and tribulations from June of this year. Now that it is working, and MPTK is bug-free, I can begin experimenting with large signals and dictionaries.

Below are the residual energy decays for MP (gray) and CMP (black) for a musical signal (glockenspiel), using a dictionary of 1,662,616 Gabor atoms with dyadic scales from \(2^7\) to \(2^{14}\) samples and translations of one quarter their scale. CMP refines the model at every 3 dB decrease of the residual energy, with a maximum of 10 refinement cycles, or until it sees a residual energy decrease of less than 0.001 dB. This of course takes longer than MP. The race to -40 dB residual energy ends for CMP at 76% of the number of MP iterations: the simple trick of CMP saves nearly a quarter of the atoms.

[Figure: residual energy decay of MP (gray) and CMP (black) for the glockenspiel signal.]
Continuing from the naive OMP and the Cholesky OMP, we now look at using QR decomposition to make OMP extremely efficient. I previously discussed this approach here, but below I make some additions.

Consider that in the \(k\)th step of OMP we have \(\mathcal{I}_{0} = \{\langle \vu, \vphi_n \rangle/\|\vphi_n\| \}_{n \in \Omega}\), \(\mathcal{I}_{k-1} = \{\langle \vr_{k-1}, \vphi_n \rangle/\|\vphi_n\| \}_{n \in \Omega}\), the set of dictionary inner products \(\Gamma_l = \{\langle \vphi_l, \vphi_n \rangle/\|\vphi_n\|\}_{n \in \Omega}\), as well as the QR decomposition of the subdictionary produced thus far, i.e., \(\MPhi_{\Omega_{k-1}} = \MQ_{k-1}\MR_{k-1}\). A QR decomposition of a matrix always exists, but it is not unique.
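Here is a minimal numpy sketch of the update (the function name is mine): one Gram-Schmidt step appends a column to \(\MQ_{k-1}\) and a column to \(\MR_{k-1}\).

```python
import numpy as np

# A minimal sketch of appending atom phi to the QR factorization of the
# subdictionary with one Gram-Schmidt step (function name is mine).
def qr_append(Q, R, phi):
    w = Q.conj().T @ phi               # coefficients on the current columns
    q = phi - Q @ w                    # component orthogonal to span(Q)
    r = np.linalg.norm(q)              # nonzero while the atoms are independent
    k = R.shape[1]
    R_new = np.block([[R, w[:, None]],
                      [np.zeros((1, k)), np.array([[r]])]])
    return np.column_stack([Q, q / r]), R_new
```

And since \(\MP_k = \MP_{k-1} + \vq\vq^H\), the correlations should update as \(\langle \vr_k, \vphi_n \rangle = \langle \vr_{k-1}, \vphi_n \rangle - \langle \vu, \vq \rangle \langle \vq, \vphi_n \rangle\), if I have the bookkeeping right.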
Continuing from my previous post, we now look at using Cholesky decomposition to make OMP extremely efficient.

Consider that in its \(k\)th step, OMP has the two sets of projections \(\mathcal{I}_{k-1} = \{\langle \vr_{k-1}, \vphi_n \rangle/\|\vphi_n\| \}_{n \in \Omega}\) and \(\mathcal{I}_{0} = \{\langle \vu, \vphi_n \rangle/\|\vphi_n\|\}_{n \in \Omega}\), the set of dictionary inner products \(\Gamma_l = \{\langle \vphi_l, \vphi_n \rangle/\|\vphi_n\|\}_{n \in \Omega}\) and norms \(\{\|\vphi_n\|\}_{n \in \Omega}\), and the Cholesky decomposition of the Gramian of the subdictionary \(\MPhi_{\Omega_{k-1}}^H \MPhi_{\Omega_{k-1}} = \ML_{k-1}\ML_{k-1}^H\). Since OMP guarantees \(\MPhi_{\Omega_{k-1}}\) has full column rank, its Gramian is positive definite, and so this Cholesky decomposition always exists.
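Here is a minimal sketch of growing the Cholesky factor when atom \(\vphi\) joins the subdictionary, using scipy's triangular solver (the function name is mine):

```python
import numpy as np
from scipy.linalg import solve_triangular

# A minimal sketch of growing the Cholesky factor L of the Gramian when
# atom phi joins the subdictionary (function name is mine).
def cholesky_append(L, Phi_k, phi):
    v = Phi_k.conj().T @ phi                  # new column of the Gramian
    w = solve_triangular(L, v, lower=True)    # forward solve L w = v
    d = np.sqrt(np.vdot(phi, phi).real - np.vdot(w, w).real)
    k = L.shape[0]
    return np.block([[L, np.zeros((k, 1))],
                     [np.concatenate([w.conj(), [d]])[None, :]]])
```

With \(\ML_k\) in hand, the solution comes from two triangular solves of \(\ML_k \ML_k^H \vx_k = \MPhi_{\Omega_k}^H \vu\).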
Over the next few days, I am posting portions of a paper I am currently writing about making greedy algorithms as efficient as possible. I post this material before submission because: 1) much of it has been discussed before in a variety of places, and I am merely assembling some of these methods in one place; and 2) I think it can find immediate use, rather than waiting for the entire paper to be released after the paper deadline in February. Today, we begin with some notation and a look at the naive implementation of orthogonal matching pursuit (OMP). This provides us with a baseline complexity.

We are interested in efficiently modeling a signal \(\vu\) by a linear combination of the atoms defined in a dictionary \(\mathcal{D} = \{ \vphi_\omega \in \mathbf{C}^M \}_{\omega \in \Omega = \{1, 2, \ldots, N\}}\): $$ \vu = \MPhi \vx + \vn $$ where \(\MPhi := [\vphi_1 | \vphi_2 | \cdots | \vphi_N]\). By \(\Omega_k \subset \Omega\) we denote an ordered subset of cardinality \(k\) of indices into the dictionary. We denote the \(i\)th element of a set by \(\mathcal{I}(i)\). The matrix \(\MPhi_{\Omega_k}\) is thus composed of the ordered atoms indexed by \(\Omega_k\). We denote by \(\MP_k\) the orthogonal projection matrix onto the range space of \(\MPhi_{\Omega_k}\), or equivalently, onto the span of the subdictionary \(\mathcal{D}_k \subset \mathcal{D}\). The projection matrix \(\MP_k^\perp = \MI - \MP_k\) is thus the orthogonal projection onto the subspace orthogonal to the span of the subdictionary, or equivalently, the left null space of \(\MPhi_{\Omega_k}\). We denote the inner product between two vectors in \(\mathbf{C}^M\) as \(\langle \vu_1, \vu_2 \rangle = \vu_2^H \vu_1\), where \(^H\) denotes the complex conjugate transpose. The vector \(\ve_k\) is the \(k\)th element of the standard basis of \(\mathbf{R}^N\), i.e., all zeros with a 1 in its \(k\)th row. Finally, all norms are implicitly the \(\ell_2\)-norm: \(\|\vx\|^2 = \langle \vx, \vx \rangle\).
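As a reference point, here is a minimal numpy sketch of the naive implementation (the function name and fixed iteration count are mine): at each step it recomputes all correlations and solves the least-squares problem from scratch, which is the baseline cost the Cholesky, QR, and MIL variants improve upon.

```python
import numpy as np

# A minimal sketch of naive OMP (function name and loop count are mine).
def omp_naive(Phi, u, K):
    M, N = Phi.shape
    norms = np.linalg.norm(Phi, axis=0)
    support = []
    residual = u.copy()
    for _ in range(K):
        corr = Phi.conj().T @ residual                   # all N inner products: O(MN)
        support.append(int(np.argmax(np.abs(corr) / norms)))
        Phi_k = Phi[:, support]
        x_k = np.linalg.lstsq(Phi_k, u, rcond=None)[0]   # from scratch each step
        residual = u - Phi_k @ x_k                       # orthogonal to span(Phi_k)
    x = np.zeros(N, dtype=np.result_type(Phi, u))
    x[support] = x_k
    return x, support
```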
My tests at an ambient dimension of \(N=2000\) are nearly finished for Normally distributed signals; the Rademacher simulations have yet to run, and will have to wait until after my trip to Bretagne. Previously, here are my tests at a dimensionality of \(N=400\), and here at \(N=1000\). I don't really see any game changers here. For some reason, CompCMP flattens out at higher indeterminacies yet beats SL0 and PrOMP at midlevel indeterminacies; but this could be due to my averaging the results of only 25 signals at each pair of sparsity and indeterminacy. (Again, the criterion here for successful recovery is exact support recovery, i.e., no missed detections and no false alarms.)

Which Mahler symphony is it?

[Image: "What Mahler symphony did you hear?" flowchart.]

This is a poor, but funny, example of applying a hierarchical model to "cover song identification." For instance, it wouldn't be useful to detect Mahler in Uri Caine. From here.
The Asilomar conference was great! I met several new people, and became reacquainted with several others. In particular, I had an informative chat with Jeremy Vila, who, with Phil Schniter, has designed one of the most promising inverse problem solvers: expectation maximization Bernoulli Gaussian approximate message passing (EMBGAMP). Before the conference I had run some simulations with the algorithm at an ambient dimensionality of 400, and was not seeing appreciable increases in the phase transitions across all problem indeterminacies. Jeremy took a look at my simulations and suggested performing them in a higher dimension. What was going on was that with only a few components active, the algorithm was having a hard time estimating the parameters of the distribution underlying the sparse signal --- as can be expected. So upon returning home, I reran my simulations with an ambient dimension of 1000, and now see improved performance over all indeterminacies. (I am also running experiments this week at 2000.)

My Asilomar 2011 Slides

Since I am finished with my presentation, I now upload my slides: Sturm2011.pdf. The paper will soon follow: B. L. Sturm, M. G. Christensen, and R. Gribonval, "Cyclic Greedy Algorithms for Recovering Compressively Sampled Sparse Signals", Proc. Asilomar Conf. Signals, Systems, and Comp., Nov. 2011.

Now, off to attend the talk, "Compressed sensing: snake oil or good idea?"
