## The Sonification Handbook

For those who have not yet heard: The Sonification Handbook, edited by Thomas Hermann, Andy Hunt, and John G. Neuhoff, has been published. And, even better, it is freely available for download here!

## SMC 2012 in Copenhagen!!

9th Sound and Music Computing Conference, 12-14 July 2012
Medialogy section, Department of Architecture, Design and Media Technology, Aalborg University Copenhagen
http://smc2012.smcnetwork.org/

The SMC Conference is the forum for international exchanges around the
core interdisciplinary topics of Sound and Music Computing,
and features workshops, lectures, posters, demos, concerts, sound installations, and
satellite events. The SMC Summer School, which takes place just before the
conference, aims at giving young researchers the opportunity to
interactively learn about core topics in this interdisciplinary field from experts,
and to build a network of international contacts.
The specific theme of SMC 2012 is "Illusions", and
that of the SMC Summer School is "Multimodality".

================Important dates=================
Deadline for submissions of music and sound installations: Friday, February 3, 2012
Deadline for paper submissions: Monday, April 2, 2012
Notification of music acceptances: Friday, March 16, 2012
Deadline for applications to the Summer School: Friday, March 30, 2012
Notification of acceptance to the Summer School: Monday, April 16, 2012
Deadline for submission of final music and sound installation materials: Friday, April 27, 2012
Notification of paper acceptances: Wednesday, May 2, 2012
SMC Summer School: Sunday, July 8 - Wednesday morning, July 11, 2012
SMC Workshops: Wednesday afternoon, July 11, 2012
SMC 2012: Thursday, July 12 - Saturday, July 14, 2012
===========================================

SMC 2012 will cover topics that lie at the core of Sound and Music Computing research and creative exploration:
- processing sound and music data
- modeling and understanding sound and music data
- interfaces for sound and music creation
- music creation and performance with established and novel hardware and software technologies

================Call for papers==================
SMC 2012 will include paper presentations as both lectures and posters/demos. We invite submissions examining all the core areas of the Sound and Music Computing field. Submissions related to the theme "Illusions" are especially encouraged.
All submissions will be peer-reviewed for novelty, technical content, presentation, and contribution to the overall balance of topics represented at the conference. Paper submissions may have a maximum of 8 pages including figures and references, and a length of 6 pages is strongly encouraged. Accepted papers will be designated for presentation either as posters/demos or as lectures. More details are available at
http://smc2012.smcnetwork.org/
===========================================

================Call for music works and sound installations==================
SMC 2012 will include four curated concerts addressing the conference topic "Illusions". We invite submissions of original compositions created for acoustic instruments and electronics, novel instruments and interfaces, music robots, and speakers as sound objects. Submissions of sound installations are also encouraged. See curatorial statements and call specifics at: http://smc2012.smcnetwork.org.
==============================================================

## A blog devoted entirely to sparse representation

As part of my research activities funded by the Danish government, I am happy to announce my new blog: Null Space Pursuits. I have copied all of my content from here to there (though the links still point to CRISSP), and over the next 30 months I will continue to document my research there in varying detail.

## SPARS 2011, day 4

The fourth and final day of SPARS 2011 served up two plenaries by two prodigious researchers: Joel Tropp and Stephen Wright. At the beginning of his talk, Tropp asked who in the room knows how MATLAB computes the SVD. Only a few of the roughly 200 people present raised their hands, and a few more gestured that they sort of knew. The problem is that the methods we use today are treated as black boxes; they are built on highly optimized classical algorithms that are incapable of working with massive matrices (billions by billions and up). So we need better tools. He presented his work on computing the SVD with a randomized algorithm, which at first sounds scarily inaccurate, but proves to be extremely effective at a much reduced computational cost.
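The flavor of randomized SVD Tropp described can be sketched in a few lines of NumPy. This is my own minimal illustration of the random-range-finding idea, not Tropp's code; the function name and parameters are invented for the example:

```python
import numpy as np

def randomized_svd(A, k, n_oversample=10, seed=None):
    """Sketch of a randomized rank-k SVD: sample the range of A with a
    Gaussian test matrix, orthonormalize, then take an exact SVD of the
    small projected matrix. Hypothetical illustration only."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + n_oversample))  # Gaussian sketch
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                          # small (k + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Toy check: a matrix of exact rank 5 is recovered almost perfectly.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 300))
U, s, Vt = randomized_svd(A, k=5, seed=1)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

The expensive full SVD is replaced by a QR on a tall thin sketch and an SVD of a small matrix, which is what makes the approach attractive for massive matrices.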

In the last plenary, Wright presented a lot of work on state-of-the-art methods for regularized optimization. At the beginning, he showed some fantastic pictures he called an "Atlas of the Null Space," showing where the solutions to min l1 coincide with those of min l0. His talk centered on the message that though we talk a lot about exact solutions and sparsest representations, most real-world applications only need good algorithms that recover the correct support before the whole solution. The trick is to determine when to stop an algorithm, and to post-process the results to find a better solution.

In between these talks there were plenty of others, discussing various items of interest in dictionary learning and audio inpainting (Po'D coming soon), and several posters, one of which was by CRISSP reader Graham Coleman. He presented his novel work applying l1 minimization over sound feature mixtures to drive concatenative sound synthesis, or musaicing. (I have discussed an earlier version of this work here.) Coleman's approach appears to be the next generation of concatenative synthesis.

All in all, this workshop was an excellent use of my time and money. Its duration was just right: after the last session I really felt as if my fuel tank was completely full. The organizers did an extremely nice job of selecting plenary speakers, assembling a wide range of quality work, and finding an accommodating venue with helpful staff. I even heard that the committee was able to raise enough funds that many of the student participants had their accommodations paid for. I am really looking forward to the 2013 edition of SPARS (or CoSPARS).

## SPARS 2011, day 3

Big things today, with plenaries given by David Donoho and Martin Vetterli. Donoho answered all the questions I have regarding the variability of recovery algorithms on distributions underlying sparse vectors. I just need a few years to understand them. I also need to look more closely at approximate message passing. And Vetterli gave a great talk, discussing the tendency in algorithm development to jump to a solution before solving the outstanding problem, e.g., sampling the real continuous world on a discretized grid.

Now I need to eat dinner, and run some experiments.

## SPARS 2011, day 2

Though the SPARS 2011 Twitter feed appears miserable, this workshop is jam-packed with excellent presentations and discussions. I think too many people are having too much good discussion to have any time left to tweet.

Today at SPARS 2011: Heavy hitters Francis Bach and Rémi Gribonval delivered the two plenary talks. This morning Bach talked on a subject new to me: submodular functions. In particular, he is exploring these ideas for creating sparsity-inducing norms. A motivation for this work is that while the l1 norm promotes sparsity within groups, it does not promote sparsity among groups, or vice versa. But I liked how he described his formalization as "the norm-design business." Someone asked him about analyzing greedy methods versus convex optimization. Bach's answer made me realize that we can understand the behavior of convex optimization methods more completely than that of greedy methods, because the convex formulations are decoupled from the dictionary. For greedy methods, the dictionary is involved from the get-go.
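The contrast between entrywise sparsity and group sparsity shows up clearly in the proximal operators of the two penalties. This is my own minimal sketch of that contrast, not Bach's formulation:

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||v||_1: entrywise soft thresholding,
    which zeroes individual small entries wherever they occur."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_group_l2(v, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||v_g||_2:
    block soft thresholding, which zeroes entire groups at once."""
    out = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v[g]
    return out

v = np.array([3.0, 0.1, -0.2, 2.0, 1.5, 0.3])
groups = [slice(0, 3), slice(3, 6)]
z1 = prox_l1(v, 0.5)                 # kills small entries inside both groups
zg = prox_group_l2(v, groups, 3.0)   # kills the weaker group as a whole
```

The l1 prox produces zeros scattered within groups, while the group prox either keeps or discards each group wholesale, which is the "sparsity among groups" behavior the plain l1 norm cannot deliver.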

This afternoon, Gribonval talked on "cosparsity", or when a signal is sparsely represented by the dual of a frame instead of the frame itself. His entire talk revolved around looking more closely at the assumption that atomic decomposition and a transform are somehow similar: when we say a signal is sparse, we usually mean it is sparse in some dictionary, but we can also mean its projection onto a frame is sparse. This is "cosparsity", which brings with it l1-analysis. To be a little more formal, we can consider solving the "synthesis" problem $$\min_\vz || \vz ||_1 \; \textrm{subject to} \; \vy = \MA \MD \vz$$ where we assume $$\vz$$ is sparse; or the "analysis" problem $$\min_\vx || \MG \vx ||_1 \; \textrm{subject to} \; \vy = \MA \vx$$ where we assume the analysis (or transformation) of $$\vx$$ by $$\MG$$, i.e., $$\MG\vx$$, is sparse. Gribonval et al. have done an excellent job interpreting what is really going on with l1-analysis: instead of wanting to minimize the number of non-zeros in the signal domain, l1-analysis wants to maximize the number of zeros in the transform domain. Later on, his student Sangnam Nam presented extraordinary results of this work with their Greedy Analysis Pursuit, which attempts to null non-zeros in the solution. This reminded me a bit of the complementary matching pursuit, but it is quantitatively different. Gribonval joked that "sparsity" may now be "over." The new hot topic is "cosparsity."
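A toy numerical illustration of the cosparse viewpoint (my own sketch, with a finite-difference operator standing in for the analysis operator $$\MG$$): a piecewise-constant signal is completely dense in the signal domain, yet its analysis coefficients are almost all zero.

```python
import numpy as np

# A piecewise-constant signal: not at all sparse in the standard basis...
x = np.concatenate([np.full(40, 2.0), np.full(30, -1.0), np.full(30, 4.0)])
n = x.size

# ...but its analysis coefficients under a first-difference operator are:
# row i of Omega computes x[i+1] - x[i], which vanishes wherever x is constant.
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

signal_nonzeros = np.count_nonzero(x)            # dense: every sample nonzero
analysis_nonzeros = np.count_nonzero(Omega @ x)  # only the two jumps survive
cosparsity = (n - 1) - analysis_nonzeros         # number of zeros in Omega @ x
```

Here the natural quantity to reason about is the cosparsity (the count of zeros in the transform domain), exactly the quantity l1-analysis implicitly tries to maximize.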

There were many other exciting talks too, showing extraordinary results; but now I must go and work on some interesting ideas that may or may not require my computer to run through the night.

## SPARS 2011

And so it begins! A whole week of nothing but sparsity in various forms and guises. My summer has officially started!

The proceedings collect all the accepted one-page submissions, which I find provide very tantalizing details. As a cool-down, I am reading Michael Elad's excellent book Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. It does for sparse signal processing what Hamming's book does for digital filters: completely accessible, drawing together numerous disciplines, and giving a good big-picture perspective.

Today began with a bang, featuring Yi Ma. Just as with Andrew Ng's Google Talk, Ma amazed me (and I am sure many others) with his examples of the incredible power of Robust PCA for everything from face and text alignment to extracting the geometry of buildings from 2D pictures without any use of edge or corner detection. All one needs are the pixels; the rest is done by the assumption that the image can be decomposed into a low-rank texture matrix and a sparse matrix of non-textural items, like a person moving in front of a background. One of my favorite examples was where he took 30 images of Bill Gates's face. Robust PCA aligned them all, corrected for transformations like shearing, and produced a mean image of Bill Gates. Now I wonder: can we do the same for a piece of classical music, creating a mean version of a particular Bach Partita from a dozen Glenn Gould recordings?
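The low-rank-plus-sparse assumption behind Robust PCA can be sketched with a crude alternating scheme: project onto low-rank matrices with a truncated SVD, then soft-threshold the residual to get the sparse part. This is a toy stand-in for principal component pursuit, not Ma's actual algorithm:

```python
import numpy as np

def lowrank_plus_sparse(M, rank, lam, n_iter=50):
    """Crude alternating sketch of a Robust-PCA-style split M ~ L + S:
    L is a rank-r projection of M - S, S a soft-thresholded residual.
    Hypothetical illustration, not principal component pursuit proper."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]            # low-rank part
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # sparse part
    return L, S

# Toy data: a rank-1 "background" plus a few large "foreground" spikes.
rng = np.random.default_rng(0)
L0 = np.outer(rng.standard_normal(50), rng.standard_normal(60))
S0 = np.zeros((50, 60))
S0[rng.integers(0, 50, 8), rng.integers(0, 60, 8)] = 10.0
M = L0 + S0
L, S = lowrank_plus_sparse(M, rank=1, lam=0.5)
```

By construction L stays rank 1 and the entries of M - L - S are bounded by the threshold, so the decomposition separates the static background from the spiky outliers, which is the intuition behind the moving-person example.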

There were many other fantastic talks and conversations to be had. Because my internet access at this expensive hotel is free only for 30 minutes every 24 hours at a severely limited bandwidth, I must limit my description to that. Tomorrow will be another exciting day in "Sparseland", as Elad calls it.

## CMP in MPTK: Third Results

In a previous entry, I compared our results with those produced by my own implementation of CMP in MATLAB, which did not suffer from the bug because it computes the optimal amplitudes and phases in a slow way with matrix inverses. Now, with the new corrected code, I have produced the following results. Just for comparison, here are the residual energy decays of my previous experiments, detailed in my paper on CMP with time-frequency dictionaries.

Now, with the corrections, I observe the decays shown below. The "MPold" decay is that produced by the uncorrected MPTK; "MP" shows that of the new code. Only in Attack and Sine do we see much difference, and at times in Sine the previous version of MPTK beats the corrected version. (Such is the behavior of greedy algorithms. I will write a Po'D about this soon.) Anyhow, the decay of CMP-$$\ell$$ (where the number denotes the largest number of possible refinement cycles, though I suspend refinement when energyAfter/energyBefore > 0.999) comports with the decays I see in my MATLAB implementation (see above). So now I am comfortable moving on.
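For readers unfamiliar with these decay curves, here is a minimal matching pursuit sketch (my own illustration, not MPTK) that records the residual energy decay and uses the same kind of energy-ratio rule to suspend iteration:

```python
import numpy as np

def matching_pursuit(y, D, max_iter=100, stop_ratio=0.999):
    """Minimal MP sketch: greedily pick the atom most correlated with the
    residual, subtract its projection, and record the residual energy.
    Stops early when energyAfter/energyBefore exceeds stop_ratio,
    mirroring the suspension rule mentioned above."""
    D = D / np.linalg.norm(D, axis=0)      # unit-norm atoms as columns
    r = y.astype(float).copy()
    coefs = np.zeros(D.shape[1])
    energies = [r @ r]
    for _ in range(max_iter):
        k = np.argmax(np.abs(D.T @ r))     # best-matching atom
        c = D[:, k] @ r
        coefs[k] += c
        r = r - c * D[:, k]
        energies.append(r @ r)
        if energies[-1] / energies[-2] > stop_ratio:
            break
    return coefs, np.array(energies)

# Toy: a signal built from two dictionary atoms decays very quickly.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
y = 3.0 * D[:, 5] / np.linalg.norm(D[:, 5]) \
    - 2.0 * D[:, 40] / np.linalg.norm(D[:, 40])
coefs, energies = matching_pursuit(y, D)
```

The recorded `energies` array is exactly the kind of residual energy decay plotted in these experiments; each MP step can only remove energy, so the curve is nonincreasing.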

Below we see the decays and cycle refinements for three different CMPs for these four signals. (Note the change in the y axes.) Bimodal appears to benefit the most in the short term from the refinement cycles, after which improvement is sporadic. The modeling of Sine has a flurry of improvements. It is interesting to note that as $$\ell$$ increases, we do not necessarily see better models with respect to the residual energy. For instance, for Attack, the residual energy for CMP-1 beats the others.

And briefly back to the glockenspiel signal, below we see the decays and improvements using a multiscale Gabor dictionary (up to atoms with scale 512 samples).

## Grab your things, I've come to take you home!

I have solved the mystery that has pushed me for the past week into excruciatingly fun debugging sessions. Yes, I know I mentioned on June 9 that CMP was extremely easy to implement in MPTK. Then came second thoughts as to the behavior of the implementation. And there followed more observations, and rambling observations, and then the videos appeared. And then the music video appeared. Well, now here's another: ex1_MP_atoms_solved.mov

## Don't Give Up

This is my life the past few days. And yet again, I think I have it cornered. The same thing happens for atoms at the Nyquist frequency. Now, how to fix it?

CRISSP is a research group in ADMT at Aalborg University Copenhagen (AAU-KBH), Denmark.

Authors: Bob L. Sturm, Sofia Dahl, and Stefania Serafin
