Speedups in OMP Implementations

I am now compiling many results in a comparative study of several different implementations of OMP: the naive way, the Cholesky way, the QR way, and the matrix inversion lemma (MIL) way.

So far, it appears that each implementation has some error accumulation from recursion, but at completely acceptable levels (at least on my 64-bit machine). Furthermore, I have yet to observe a case where one fails while the others succeed. Where they differ significantly, however, is in computation time. Below we see in the phase plane how much faster the three other implementations are than the naive way for a particular ambient dimension \(N\). The z-axis is the mean time of the naive implementation (averaged over 100 independent trials at each problem sparsity and indeterminacy) divided by the mean time of another implementation. (This is the mean time of the entire pursuit, not per iteration.) It is interesting to see how large a benefit QR provides over Cholesky for large problem sizes and more iterations.

[Figure: meantimeofpursuit.gif — speedup relative to the naive implementation in the phase plane]
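To make the baseline of the comparison concrete, here is a minimal sketch of the naive implementation: at every iteration it re-solves the full least-squares problem over the current support from scratch, which is exactly the per-iteration cost the faster variants avoid. Function and variable names here are my own illustration, not the code used in the study; the dictionary is assumed to have unit-norm columns.

```python
import numpy as np

def omp_naive(Phi, y, s, tol=1e-6):
    """Naive OMP sketch: re-solve the full least-squares problem each iteration.

    Phi : (N, M) dictionary with unit-norm columns (illustrative names)
    y   : (N,) signal to decompose
    s   : maximum number of atoms to select
    """
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(s):
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(Phi.T @ residual)))
        if k not in support:
            support.append(k)
        # Naive step: full least squares over the whole support, from scratch.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coeffs
    return x, support
```

Since each `lstsq` call grows with the support size, the total cost scales poorly with the number of iterations, which is why the gap to the Cholesky and QR variants widens for larger, less sparse problems.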

1 Comment

Have you compared against the version in the SPAMS toolbox? It's open source C++, so you could convert to Matlab if you want a fair test. I believe they use a single Cholesky factorization, and it's extremely rapid. I tried to break it with gigantic dictionaries and failed; it's always fast.

(Note: if you do compare with their version, the output may be slightly different, because they use a slightly different variant. In my tests, their variant gives better results too.)
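The single-Cholesky approach the commenter describes can be sketched as follows: rather than refactoring at every iteration, keep a lower-triangular \(L\) with \(LL^T = \Phi_S^T\Phi_S\) for the current support \(S\) and extend it by one row per selected atom. This is a generic sketch of the technique, not SPAMS's actual code, and the names are illustrative; the dictionary is assumed to have unit-norm columns.

```python
import numpy as np

def omp_chol(Phi, y, s):
    """Cholesky-updating OMP sketch (not SPAMS's exact variant).

    Maintains lower-triangular L with L @ L.T == G[S][:, S] for support S,
    extending it by one row per new atom instead of refactoring.
    """
    G = Phi.T @ Phi                 # Gram matrix, computed once
    p = Phi.T @ y                   # correlations with the signal
    support = []
    L = np.zeros((s, s))
    alpha = p.copy()                # correlations with the current residual
    for i in range(s):
        k = int(np.argmax(np.abs(alpha)))
        if i == 0:
            L[0, 0] = np.sqrt(G[k, k])
        else:
            # Rank-one extension: append the row [w, sqrt(G_kk - w.w)] to L.
            w = np.linalg.solve(L[:i, :i], G[support, k])
            L[i, :i] = w
            L[i, i] = np.sqrt(G[k, k] - w @ w)
        support.append(k)
        # Solve (L L^T) x_S = p_S by forward then back substitution.
        z = np.linalg.solve(L[:i + 1, :i + 1], p[support])
        x_S = np.linalg.solve(L[:i + 1, :i + 1].T, z)
        alpha = p - G[:, support] @ x_S
    x = np.zeros(Phi.shape[1])
    x[support] = x_S
    return x, support
```

Each iteration then costs only two triangular solves plus the rank-one extension, which is consistent with the comment that this variant stays fast even for very large dictionaries.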


About this Entry

This page contains a single entry by Bob L. Sturm published on February 24, 2012 11:17 AM.
