May 2011 Archives

Quick Cyclic Matching Pursuit, but not Practical

A long time ago, I noted a well-cited trick for making matching pursuit (MP) fast in theory, but not in practice when it comes to the large dictionaries common in audio signal processing. The trick is to calculate all inner products of the dictionary atoms $$\mathcal{D} = \{ \vd_i \}$$ with the original signal $$\vx$$, and then use a simple recursive update formula to choose the next atom by the criterion: $$\vh_{n+1} = \arg \max_{\vd \in \mathcal{D}} | \langle \vr(n+1), \vd \rangle |$$ where \begin{align} \langle \vr(n+1), \vd \rangle & = \langle \vr(n), \vd \rangle - a_n \langle \vh_n, \vd \rangle \\ a_n & = \langle \vr(n), \vh_n \rangle \end{align} and $$\vh_n \in \mathcal{D}$$ is the ($$n+1$$)th atom selected by MP (starting with $$\vh_0$$). Starting from the set $$\{\langle \vr(0), \vd_i \rangle : \vd_i \in \mathcal{D} \}$$, we can essentially read off the atoms and their weights of an MP decomposition without ever having to compute a residual again. Each update requires $$| \mathcal{D} |$$ multiplications, plus a search for the single largest magnitude value, which has complexity of order at most $$| \mathcal{D} |$$. Thus, for $$s$$ iterations of MP, we are looking at a minimum complexity of order $$s | \mathcal{D} |$$. The catch is that the updates require the inner products $$\langle \vh_n, \vd \rangle$$ between all pairs of atoms, i.e., a precomputed Gram matrix with $$| \mathcal{D} |^2$$ entries, which is prohibitive for large dictionaries. MPTK, the fastest implementation of MP in the world, has a complexity of $$s | \mathcal{D} | \log | \mathcal{D} |$$ because it does not use this impractical approach.
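To make the bookkeeping concrete, here is a sketch of MP run entirely from the initial correlations and a precomputed Gram matrix (my own illustration, not MPTK or any published code; the Gram matrix is exactly the part that becomes impractical for large dictionaries):

```python
import numpy as np

def mp_precomputed_gram(D, x, n_iter):
    """Matching pursuit via the recursive correlation update.

    D: (m, N) dictionary with unit-norm columns; x: (m,) signal.
    After the initial correlations <r(0), d_i> = D^T x, each iteration
    needs only one row of the Gram matrix and an argmax; no residual is
    ever formed. Returns the selected atom indices and their weights.
    """
    G = D.T @ D            # N x N Gram matrix: the storage-hungry part
    c = D.T @ x            # <r(0), d_i> for all atoms
    idx, wts = [], []
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(c)))   # atom maximizing |<r(n), d>|
        a = c[k]                        # a_n = <r(n), h_n>
        idx.append(k)
        wts.append(a)
        c = c - a * G[k]   # <r(n+1), d> = <r(n), d> - a_n <h_n, d>
    return idx, wts
```

Running this against a naive MP that explicitly updates the residual gives the same atoms and weights, as the recursion promises.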

Recovery of Compressively Sensed Normally-distributed Sparse Signals in Noise

Previously, I experimentally measured the phase transitions of all kinds of recovery algorithms (greedy, convex optimization, thresholding, cyclic) for vectors of varying lengths, sparsities, and distributions, with no noise corrupting the measurements. For sparse vectors with Normally and Laplacian distributed non-zero elements, I find that greedy approaches like orthogonal matching pursuit (OMP) and cyclic matching pursuit (CMP) outperform, to a large extent, convex optimization approaches like $$\ell_1$$ minimization (BP). For sparse vectors distributed bimodally, the performance of these greedy approaches decreases significantly. Furthermore, the performance of $$\ell_1$$ minimization is extremely robust to the distribution underlying the non-zero elements of a sparse vector. Building on this work, I am now interested in how the performance of these algorithms is affected by additive noise in the measurements.
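As a sketch of the experimental setup described above (function and parameter names are mine, not from the simulation code), here is how such noisy compressively sensed sparse vectors can be generated:

```python
import numpy as np

def sparse_problem(m, n, s, dist, snr_db, rng):
    """Sense an s-sparse length-n vector with m noisy measurements.

    dist selects the distribution of the s non-zero elements:
    'normal', 'laplacian', or 'bimodal' (unit spikes with random signs).
    The noise is scaled so the measurements have the requested SNR in dB.
    """
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    if dist == 'normal':
        x[support] = rng.standard_normal(s)
    elif dist == 'laplacian':
        x[support] = rng.laplace(size=s)
    else:  # bimodal: +/-1 spikes
        x[support] = rng.choice(np.array([-1.0, 1.0]), size=s)
    y_clean = A @ x
    noise = rng.standard_normal(m)
    noise *= np.linalg.norm(y_clean) / (np.linalg.norm(noise) * 10**(snr_db / 20))
    return A, x, y_clean + noise
```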

Ensembles of Sparse Reconstruction Algorithms

Over the past few days, Igor of Nuit Blanche has been asking about this "ensemble" line on my phase plots here, which I also discuss in my article here. I explained in a comment that
We choose the [solution] that produces the smallest $$\ell_2$$ error given the measurements, $$|| y - A x_i ||_2$$, to be the sensed vector. I do not expect that OMP, BP, IHT, CMP, etc., fail for the same vectors. So using an ensemble, I am guaranteed at least the recovery performance of the best-performing algorithm. When the recovery algorithms fail for different vectors, then I will have better performance.
After thinking about this and looking at my code, I see that this is not correct. To clear up any and all confusion, I want to formalize what this "ensemble" line means.
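For concreteness, the selection rule from that comment can be sketched as follows (a hypothetical interface where each solver maps a sensing matrix and measurements to a candidate solution):

```python
import numpy as np

def ensemble_recovery(A, y, solvers):
    """Ensemble of sparse recovery algorithms.

    Each element of `solvers` is a callable (A, y) -> candidate x_i.
    Returns the candidate minimizing the residual ||y - A x_i||_2, so on
    each problem the ensemble does at least as well, in residual, as its
    best member.
    """
    candidates = [solve(A, y) for solve in solvers]
    errors = [np.linalg.norm(y - A @ x) for x in candidates]
    return candidates[int(np.argmin(errors))]
```

Note that the smallest residual need not correspond to the candidate closest to the true sensed vector, which is part of the confusion to be formalized.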

Paper of the Day (Po'D): An Analysis of Single-Layer Networks in Unsupervised Feature Learning Edition

Hello, and welcome to Paper of the Day (Po'D): An Analysis of Single-Layer Networks in Unsupervised Feature Learning Edition. Today's paper is A. Coates, H. Lee, and A. Y. Ng, "An Analysis of Single-Layer Networks in Unsupervised Feature Learning", in Proc. Int. Conf. on AI & Stats., Ft. Lauderdale, FL, USA, Apr. 2011. As in here and here, the focus of this paper is unsupervised feature learning.

Pardis is currently a graduate student in the Department of Computer Engineering and Information Technology at Amirkabir University of Technology, Tehran, Iran. I have invited her to contribute to CRISSP discussions in and about her research in machine learning, in particular methods for data content classification, e.g., deep belief networks. These are exciting areas, and I am looking forward to learning more about them.

Welcome Pardis! :)

Recovery of Compressively Sensed Sparse Signals using Cyclic OLS

My simulations for normally distributed sparse signals have finally finished (after three weeks, jeesh!). This builds on the work here. The question is, what is the performance of cyclic OLS (COLS) in recovering compressively sensed sparse vectors?

A downside of recommender systems

This is a point of view I hadn't even considered before: a recommender system, honed with my personal preferences mined from my history of clicking on stories, buying books, similarity to others' behaviors, etc., could end up reinforcing my unfounded assumptions about the world, and limiting my exposure to a wider range of information. This reminds me of the presentation by an American conservative internet activist who encourages fellow conservatives to give high ratings on Amazon to books denying climate change, and to rate science books low. "I don't read 80% of the books I rate," he says.

Is there some constraint I can add to the Google PageRank algorithm such that my portal to the WWW does not become a cage with glass walls?

Anyhow, it may not matter as tomorrow is the end.

Computational Complexity of Greedy Sparse Approximation Algorithms

Continuing from yesterday, I want to look more closely at the computational complexities of greedy sparse approximation algorithms. Considering that we have a signal of dimension $$m$$ with assumed sparsity $$s \ll m$$, and a dictionary of $$N \gg m$$ atoms, Mailhé et al. (Metal) arrive at orders of computational costs for pursuits using general and fast dictionaries. (I have modified their notation to fit that often used in compressed sensing, where $$m$$ stands for measurements and $$s$$ for sparsity; and it doesn't make sense to me to let $$D$$ be the number of atoms in a dictionary $$\MD$$ --- so I set it to $$N$$, because that is what Tropp and Wright use in J. A. Tropp and S. J. Wright, "Computational methods for sparse solution of linear inverse problems," Proc. IEEE, vol. 98, pp. 948-958, June 2010.) Metal describe a little bit in their paper about how they arrive at these costs, but I need to fill in some details. I reproduce below their costs for general dictionaries, and in parentheses for fast dictionaries where different.

In the past few weeks, Pardis and I have been looking at "deep belief networks" (DBNs) and convolutional DBNs, and their application to learning features for audio and music classification. I am still completely hazy on the details, but in this talk, Professor Ng provides an excellent overview of the power of such approaches. I think one reviewer's comment summarizes it nicely in one line:

Andrew Ng got bored of improving one algorithm so he decided to improve all algorithms at once...
On his course website at Stanford, Ng provides some tutorials.

These approaches with DBNs --- letting the algorithm find the features that make sense with respect to some basic principle of economy (whether it be sparsity or energy) --- make me think about the recent opinion article by Malcolm Slaney, Does Content Matter? Of course content matters, since that is how we expert humans "like" something, e.g., giving a thumbs up on YouTube... I "liked" Ng's talk because of its content and delivery, and not because 52 other people liked it. (Who are the two people that "disliked" Ng's talk!? How could someone not like him?) We just aren't using the best features.

Also note the recent "resurgence" of neural networks!

Paper of the Day (Po'D): Fast Orthogonal Sparse Approximation Algorithms over Local Dictionaries Edition

Hello, and welcome to Paper of the Day (Po'D): Fast Orthogonal Sparse Approximation Algorithms over Local Dictionaries Edition. In the forthcoming weeks, I am preparing for the first part of my scientific mission at INRIA in Rennes, France (details to come). Today's paper is co-authored by members of INRIA and IRISA in Rennes: B. Mailhé, R. Gribonval, P. Vandergheynst and F. Bimbot, "Fast Orthogonal Sparse Approximation Algorithms over Local Dictionaries," Signal Processing, http://dx.doi.org/10.1016/j.sigpro.2011.01.004 (this paper appears in a forthcoming special issue of Signal Processing on "Advances in Multirate Filter Bank Structures and Multiscale Representations.")

100 years ago today

Gustav Mahler, one of the greatest composers ever in my opinion, died a century ago today.

As a teenager, I could only listen to Mahler on Friday and Saturday nights because his music would infect my brain and keep me up too late --- particularly earwormy is the second movement of his first, and all the movements of his second. Thanks for the sleep debt Gustav.

Also, on the school bus rides home in junior high school we could give the driver a music tape to play during the ride. Metallica, Poison, and Whitesnake were the usual. Then I mustered up the courage to have him play my tape of Mahler. After five minutes of loud complaining and a verbal beat down by the other kids, the driver had to switch tapes to keep the peace. Little did those stoners and dirt heads know, Mahler is the heavy metal composer of his era.

Paper of the Day (Po'D): Unsupervised Feature Learning for Audio Classification using Convolutional Deep Belief Networks Edition

Hello, and welcome to Paper of the Day (Po'D): Unsupervised Feature Learning for Audio Classification using Convolutional Deep Belief Networks Edition. Today's paper is H. Lee, Y. Largman, P. Pham, and A. Y. Ng, "Unsupervised Feature Learning for Audio Classification using Convolutional Deep Belief Networks", in Proc. Neural Info. Process. Systems, Vancouver, B.C., Canada, Dec. 2009. We have recently reviewed another deep architectures paper here. Now we are looking at "convolutional deep belief networks" (CDBNs) for describing audio signals, which provide yet another layer of abstraction in a generative modeling approach, reaching highly relevant descriptions of complex signals without engineering features by hand.

The following annotation is authored by Pardis Noorzad, in the Department of Computer Engineering and IT, Amirkabir University of Technology, Tehran, Iran (pardis@aut.ac.ir).

Fantastic reply to an idiotic decision

Gregory Petsko's letter to George M. Philip, President of the State University of New York at Albany, is a wonderfully biting critique of much of what is wrong with today's attitudes toward higher education and universities in the USA. See here, for example.

If college is neither a luxury good nor an investment, what is it?
How about this? A university is a unique human institution participating in the cooperative discovery and preservation of knowledge, not for exploitation for financial gain, but for ameliorating the human condition.

Why do so many people think that university is just a place of teaching?

Two assistant professor positions at AAU Copenhagen!

In our department we are advertising two new assistant professor positions (3 years, excellent benefits, a great dynamic place to do work)!

Assistant Professor in Games and Media Technology (201125)

Assistant Professor in Media Technology (201129)

The first position focuses on "games, artificial intelligence, affective computing, multimodal intelligent interaction and virtual environments." The second focuses on "augmented reality, HCI, computer graphics, animations, computer vision, gesture recognition and multi-modal systems."

Submit your application! Spread the word! Send any questions you have to me or others on the staff!

Paper of the Day (Po'D): Probably Approximately Correct Search Edition

Hello, and welcome to Paper of the Day (Po'D): Probably Approximately Correct Search Edition. Today's paper presents an analysis of an interesting approach to document search: I. J. Cox, R. Fu and L.K. Hansen, "Probably Approximately Correct Search", Proc. of the Int. Conf. on Theoretical Information Retrieval, 2009. In the interest of extreme brevity, here is my one line description of the work in this paper:
We can probably find several matches to a query by independently randomly sampling a large number of documents over a large number of computers, thereby reducing the complexity of the search, and maintenance of the system.
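A back-of-the-envelope illustration of the idea (my own simplification, assuming each document lands in the sampled index independently with probability equal to the coverage fraction):

```python
def prob_match_found(n_matches, coverage):
    """Probability that at least one of n_matches relevant documents
    appears in a random sample indexing a fraction `coverage` of the
    collection, assuming independent inclusion of each document.
    """
    return 1.0 - (1.0 - coverage) ** n_matches
```

Under this toy model, sampling only 30% of the collection already finds one of ten matching documents with probability about 0.97 --- the sense in which the search is "probably approximately correct" at a fraction of the exhaustive cost.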

Paper of the Day (Po'D): Sparse Decomposition Based on Smoothed $$\ell_0$$-norm Edition

Hello, and welcome to Paper of the Day (Po'D): Sparse Decomposition Based on Smoothed $$\ell_0$$-norm Edition. In the forthcoming weeks, I am preparing for the first part of my scientific mission at INRIA in Rennes, France (details to come). Today's paper has fantastic results: G. H. Mohimani, M. Babaie-Zadeh, and C. Jutten, "A fast approach for overcomplete sparse decomposition based on smoothed $$\ell_0$$ norm," IEEE Trans. Signal Process., vol. 57, pp. 289-301, Jan. 2009. In addition to this paper, there is also "Robust SL0" presented in: A. Eftekhari, M. Babaie-Zadeh, C. Jutten, and H. Abrishami Moghaddam, "Robust-SL0 for Stable Sparse Representation in Noisy Settings," in Proc. ICASSP, Taipei, Taiwan, Apr. 2009. In the interest of extreme brevity, here is my one line description of the work in these papers:
Approach $$\ell_0$$-norm minimization by majorization, et voilà --- we find sparse solutions with scalable complexity.
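To make the one-liner concrete, here is a minimal sketch of the SL0 idea as I understand it (parameter names and defaults are illustrative, not the authors' code): maximize a smooth Gaussian surrogate for the number of zero elements over a decreasing sequence of widths $$\sigma$$, projecting back onto the feasible set after every step.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.5, mu=2.0, L=3):
    """Minimal smoothed-l0 sketch: for decreasing sigma, take a few
    steps ascending F_sigma(x) = sum_i exp(-x_i^2 / (2 sigma^2)) (a
    smooth proxy for the number of zeros), projecting back onto
    {x : A x = y} after each step.
    """
    Ainv = A.T @ np.linalg.inv(A @ A.T)   # pseudoinverse of the fat A
    x = Ainv @ y                          # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(L):
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - mu * delta            # ascend the smoothed objective
            x = x - Ainv @ (A @ x - y)    # project onto A x = y
        sigma *= sigma_decay
    return x
```

The gradual shrinking of $$\sigma$$ is what makes the complexity scalable: each step is just a couple of matrix-vector products, with no combinatorial search over supports.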

Does Content Matter?

In his recent Visions and Views column for the IEEE Multimedia Magazine, Malcolm Slaney asks the question, "Does content analysis matter?" when it comes to judging the similarity between media, or making recommendations on media, at the scale of the Internet. He discusses experimental results showing that user ratings, and/or the context of media on the WWW, appear to be much more powerful than automatic content analysis, i.e., careful feature design, segmentation and feature extraction, and classification, for predicting the similarity between two pieces of music, building a playlist of music, recommending a movie, or determining the content of a given image (for instance, as done by Google Image Search, my favorite tool when building a lecture). This is not at all unbelievable, since we humans operate in a non-random and time-varying way; and it shows how far we engineers must go until we can extract from complex media "actionable intelligence" for making decisions on media within particular complex contexts (e.g., pornography from 100 years ago is not pornography today, "rock" in the 60's is now "classic rock", etc.). Thus, Slaney opines, the way forward is to "take advantage of human signals" for aiding content analysis of multimedia, and all things related.

While I agree with his thesis, Slaney says something with which I do not agree: "The [music] audio waveform is rich in information --- it tells us everything we need to know about the music." The first part is ok; the second part, not so much. Barring argument over the composition of the set of "everything we need to know" about the music in a recording, among the things I need to know as a modern classical music lover are the composer and year of composition. A real connoisseur would perhaps also like to know the conductor, orchestra, soloists, and the place and time of the recording. Such things are not present in the audio waveform. Perhaps in only a few cases may the composer be inferred, or a soloist named, but only with expert experience (or expert systems like Shazam) can such judgements be reliable. (There are also a lot of imitators, e.g., Bach's children, David Cope's EMI, counterpoint students, etc.)

Thank you Windows!

For the past week I have been running simulations of compressed sensing recovery using cyclic orthogonal least squares (COLS). Because of the refinement cycles, COLS performs many Gram-Schmidt orthogonalizations (I haven't optimized this part yet). Thus, I am waiting, and waiting, for the simulations to finish on the super powerful 3D computer we have at this lab. I checked in on the process this morning and found the fun Windows login screen, meaning one of our students had disregarded my note to not touch the machine, and rebooted or powered it down. When I logged back in, I was greeted by a little note on that helpful bar of icons at the bottom right: "At 3:00 AM Windows rebooted this machine to finish its automatic update of your computer."
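For reference, the unoptimized step in question is just repeated orthogonalization of the selected atoms; a generic modified Gram-Schmidt sketch (not the simulation code, which selects atoms on top of this):

```python
import numpy as np

def mgs(V):
    """Modified Gram-Schmidt: orthonormalize the columns of V.

    Numerically steadier than classical Gram-Schmidt because each
    remaining column is deflated against the newly normalized one
    immediately, rather than against the original columns.
    """
    Q = np.array(V, dtype=float)
    n = Q.shape[1]
    for j in range(n):
        Q[:, j] /= np.linalg.norm(Q[:, j])
        for k in range(j + 1, n):
            Q[:, k] -= (Q[:, j] @ Q[:, k]) * Q[:, j]
    return Q
```

Repeating this from scratch on every refinement cycle costs order $$m s^2$$ per cycle, which is one reason the simulations crawl.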

Thank you Windows! Thank you for ignoring the MATLAB process taking up so much memory and computation. You probably thought it was some Flash script gone amok in IE. Lucky for me, I didn't lose more than 5 hours of computation since I am saving every iteration. But still --- annoying!

Multivariate Analysis for the Technical and Life Sciences 2011

Yesterday, I gave an invited tutorial on time-frequency processing with sparse approximation at the Multivariate Analysis for the Technical and Life Sciences (MATLS) day --- which is part of the larger 2011 Vision Days conference at the Technical University of Denmark. The audience was extremely diverse, coming from fields ranging from statistics and chemometrics to the food sciences. More and more, I am delving into the world of chemometrics, as it is a wonderful blend of signal processing, statistics and estimation, and pattern recognition. And uniting such techniques with sparse approximation methods --- not just for feature extraction, but also for classification and dictionary learning --- and acoustics, feels natural.

At our department today, one of the organizers of MATLS, Dr. Stina Frosch, is giving a seminar about her work on measuring food quality with acoustic measurements. Naturally, we are working together to tease out the relevant data from acoustic measurements of falling frozen prawns for estimating their ice glaze content. However, what we need are some "shrimp-invariant features," e.g., features that are invariant under changes in shrimp size. This work will no doubt lead to fantastic possibilities for paper titles. And if I can unite this with my work in "dark energy," my moniker could become "the dark energy shrimp."

Archive to Mash Up

Someday soon, the below will be possible automatically. And then the art in creating these things will be in forming interesting queries.

Post-doctoral fellowship at Ohio State University

Post-Doctoral Fellowship in Music Cognition School of Music, Ohio State University

The School of Music at the Ohio State University is pleased to offer a Post-doctoral Fellowship in Music Cognition/Music Theory. The fellowship provides one year of support for full-time music research, with the possibility of renewal for a second year. Candidates may choose their own program of research, although preference is given to studies in listening, performance, analysis, modeling, and cross-cultural phenomena. The fellowship is designed to further a candidate's training and research experience in music cognition and systematic approaches to music theory. Candidates must have completed a doctoral degree at the time the fellowship begins.

The Fellowship is open to any individual who has been awarded a doctoral degree within the past seven years, and who did not receive an MFA or doctorate from the Ohio State University. Untenured faculty at other institutions are eligible to apply.

The deadline for receipt of applications is June 1, 2011. The fellowship is to begin no earlier than September 1, 2011 and no later than November 1, 2011.

Applications should include a covering letter and supporting documents. The covering letter should include (1) a description of the candidate's background, training, recent scholarly interests, and career goals, (2) a research plan that specifies the candidate's proposed area of research, and (3) a specified starting date of appointment.

Supporting documents should include a curriculum vitae, two letters of recommendation from persons not employed by the Ohio State University, and an official transcript showing the awarding of a doctorate (sent by the university conferring the degree). If the doctorate is pending, indicate the anticipated date of completion. N.B. Fellowships cannot be taken up prior to the completion of the doctorate.

Applicants are strongly encouraged to contact Dr. David Huron (huron.1@osu.edu) prior to submitting their applications.

Letters of application and all supporting material must be received by June 1, 2011. Either electronic or printed submissions are welcome.
Prof. David Huron, School of Music, 110 Weigel Hall
1866 North College Road, Ohio State University
Columbus, Ohio 43210-1170 U.S.A.

Presentation of the Day (Po'D): From the Laboratory to the Wild!

On Wednesday, May 4, our department hosted Dr. Rolf Bardeli of the Competence Center NetMedia of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS), in St. Augustin, Germany. He gave a great talk here summarizing his impressive work in speech and audio processing over the past eight years. I initially became aware of his work when I reviewed his paper, "Similarity search in animal sound databases," IEEE Trans. Multimedia, vol. 11, pp. 68-76, Jan. 2009. In his talk, Rolf discussed four main areas, all centering upon the issue of making searchable the content in massive amounts of data: the "controlled domains" of music search and retrieval, and of content extraction and alignment of annotation of news broadcast archives; and the "wild domains" of bioacoustic archives, and of humanistic archives of speech and language studies. The latter are "in the wild" because for bioacoustics the recording conditions are difficult, and for aligning text to speech recordings of dying languages there is not enough data to build accurate models for speech recognition.

What I like about this work is that it takes a formalized approach to similarity search in terms of group theory. In only a few lines, it is not only clear what is meant by comparing two pieces of music, and the conditions under which they are thought to be similar, but we also find an efficient search strategy for large databases of documents. This latter bit is extremely important for creating and deploying useful applications. Rolf showed four such applications that are public or soon will be: searching animal sound archives, making the content of news and other videos searchable, finding and comparing the content of speeches by particular politicians (an extremely useful tool for what the staff of The Daily Show with Jon Stewart do manually), and software for the automatic annotation of field study videos.

Rolf makes available his slides here.

Paper of the Day (Po'D): Learning Features from Audio with Deep Belief Networks Edition

Hello, and welcome to Paper of the Day (Po'D): Learning Features from Audio with Deep Belief Networks Edition. Today's paper looks at the derivation of discriminative features from audio signals using what are called "deep belief networks": P. Hamel and D. Eck, "Learning features from music audio with deep belief networks," in Proc. Int. Symp. Music Info. Retrieval, (Utrecht, Netherlands), Aug. 2010. This is interesting because I wonder if we can apply this technique to process the auditory spectrograms that Panagakis et al. use in their genre recognition task. How do the resulting features compare with the auditory temporal modulations? I would hope that we could learn features as good as ATMs for a variety of time-scales.

The following is co-authored with Pardis Noorzad, in the Department of Computer Engineering and IT, Amirkabir University of Technology, Tehran, Iran (pardis@aut.ac.ir).

Part 5 of STWOPetal

After a long week of Easter and massive amounts of peer reviewing, I am returning to reproducing the results in Y. Panagakis, C. Kotropoulos, and G. R. Arce, "Music genre classification via sparse representations of auditory temporal modulations," in Proc. European Signal Process. Conf., (Glasgow, Scotland), pp. 1-5, Aug. 2009; Y. Panagakis, C. Kotropoulos, and G. R. Arce, "Music genre classification using locality preserving non-negative tensor factorization and sparse representations," in Proc. Int. Soc. Music Info. Retrieval Conf., pp. 249-254, Kobe, Japan, Oct. 2009; Y. Panagakis, C. Kotropoulos, and G. R. Arce, "Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification," IEEE Trans. Audio, Speech, Lang. Process., vol. 18, pp. 576-588, Mar. 2010. We have received some help from the authors and are eagerly awaiting the necessary details to get moving again.

Bob L. Sturm, Associate Professor
Audio Analysis Lab
Aalborg University Copenhagen
A.C. Meyers Vænge 15
DK-2450 Copenhagen SV, Denmark
Email: bst_at_create.aau.dk