September 2012 Archives

A misguided study?

H. Jennings, P. Ivanov, A. Martins, P. da Silva, and G. Viswanathan, "Variance fluctuations in nonstationary time series: a comparative study of music genres," Physica A: Statistical and Theoretical Physics, vol. 336, pp. 585-594, May 2004.

In classic physics style, this work essentially reduces the music signal to an amplitude envelope, and then claims truths about entire genres based on its correlations.
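For a sense of what that reduction looks like, here is a minimal sketch (my own, not the authors' code) that collapses a signal to a windowed amplitude envelope and then estimates its long-range correlations with detrended fluctuation analysis. The window length and scales are illustrative choices; the resulting exponent is the kind of single number on which genre-level claims then rest.

import numpy as np

def amplitude_envelope(x, win=1024):
    """Local standard deviation of the signal over non-overlapping windows."""
    n = len(x) // win
    frames = x[:n * win].reshape(n, win)
    return frames.std(axis=1)

def dfa_exponent(env, scales=(16, 32, 64, 128, 256)):
    """Scaling exponent of the integrated envelope (detrended fluctuation analysis)."""
    y = np.cumsum(env - env.mean())  # integrated profile
    fluct = []
    for s in scales:
        n = len(y) // s
        segments = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # detrend each segment with a least-squares line and keep the RMS residual
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segments]
        fluct.append(np.mean(rms))
    # the slope of log F(s) versus log s is the DFA exponent
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]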

The Medium Shapes the Message

Here is a nice article by David Byrne on how the "venue" shapes his sound, as well as that of many other artists, including birds. It reminds me of this fantastic course I took in graduate school about how minimalism (in music) arose and developed in part from the dawn of the LP and hi-fidelity.
The paper, R. B. Dannenberg, B. Thom, and D. Watson, "A machine learning approach to musical style recognition," in Proc. International Computer Music Conf., Thessaloniki, Greece, Sep. 1997, is regarded as the first to explore something like recognizing the genre of a musical signal. It proposes a system to determine the playing style of a musician. However, I have just discovered the following fascinating paper: K.-P. Han, Y.-S. Park, S.-G. Jeon, G.-C. Lee, and Y.-H. Ha, "Genre classification system of TV sound signals based on a spectrogram analysis," IEEE Transactions on Consumer Electronics, vol. 44, pp. 33-42, Feb. 1998. In that paper, they look at discriminating between speech and music, and Jazz, Classical and Popular genres. Not only do they simulate the algorithm, they actually implement the system using circuits and show the results. They also list the musical pieces they put in each genre dataset. Was Kansas Popular in 1998?
Hello, and welcome to the Paper of the Day (Po'D): Optimization System of Musical Expression for the Music Genre Classification. Today's paper is S. Park, J. Park, and K. Sim, "Optimization System of Musical Expression for the Music Genre Classification," in Proc. Int. Conf. Control, Automation, and Systems, Gyeonggi-do, Korea, Oct. 2011. This paper is best summarized by the first paragraph:

The fountain in the park, several cultural facilities and Landscape indispensable component is an important factor. It is also an environmentally friendly effect is relaxation of the tourists. A fountain (from the Latin "fons" or "foints", a source or spring) is a piece of architecture which pours water into a basin or jets it into the air either to supply drinking water or for decorative or dramatic effect. Today fountains may be practical, such as drinking fountains and village fountains which provide clean drinking water; or designed for recreation, such as splash fountains, where residents can cool off in summer; or ornamental, decorating city parks and squares and home gardens.
To be succinct: Fountains, they be wet.

Music genre flowchart

[Figure: flow.png, the classification flowchart] From: T. Zhang, "Semi-automatic approach for music classification," in Proc. SPIE Conf. on Internet Multimedia Management Systems, 2003.

The author puts together a flowchart for automatic classification. I was curious about "detect features of symphony", especially when one has only a 30-second clip: "Since a symphony is composed of multiple movements and repetitions, there is an alternation between relatively high volume audio signal (e.g. performance of the whole orchestra) and low volume audio signal (e.g. performance of single instrument or a few instruments of the orchestra) along the music piece. ... Thus, by checking the existence of alternation between high volume and low volume intervals (with each interval longer than a certain threshold) and/or repetition(s) in the whole music piece, symphonies will be distinguished [from other genres]."
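The quoted rule is concrete enough to sketch. Below is a rough reading of it (mine, not Zhang's code): segment a clip into loud and quiet intervals by frame-level RMS and flag it when long intervals of both kinds appear. The frame size, the median-based loud/quiet split, and the two-second minimum interval are made-up illustrative thresholds.

import numpy as np

def symphony_like(x, sr=22050, frame=2048, min_interval_s=2.0):
    """Rough heuristic: do long loud and long quiet intervals both occur in the clip?"""
    n = len(x) // frame
    rms = x[:n * frame].reshape(n, frame).std(axis=1)
    loud = rms > np.median(rms)  # loud/quiet decision per frame
    # collapse frame-level decisions into runs of (level, length in frames)
    runs, start = [], 0
    for i in range(1, n + 1):
        if i == n or loud[i] != loud[start]:
            runs.append((bool(loud[start]), i - start))
            start = i
    min_frames = int(min_interval_s * sr / frame)
    long_levels = [level for level, length in runs if length >= min_frames]
    # require at least one long loud interval and at least one long quiet interval
    return any(long_levels) and not all(long_levels)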

Props to the author for attempting the impossible, but any flowchart for assigning music genre must be broken from the very first decision. Genres are not uniquely specified by characteristics that mutually exclude others.
My opinion of the EURASIP Journal on Advances in Signal Processing continues to sink. Consider this "research article": K. Umapathy, B. Ghoraani, and S. Krishnan, "Audio signal processing using time-frequency approaches: coding, classification, fingerprinting, and watermarking," EURASIP Journal on Advances in Signal Processing, 2010:451695, doi:10.1155/2010/451695.

ALL of the text from Sections 4.1.1 to 4.1.4 appears verbatim in K. Umapathy, S. Krishnan, and S. Jimaa, "Multigroup classification of audio signals using time-frequency parameters," IEEE Trans. Multimedia, vol. 7, pp. 308-315, Apr. 2005. This is hardly the new and advanced kind of work to which I thought this journal was aiming to give a platform. I can't find any text that says, "Portions of this article appear exactly in ..." What happened to S. Jimaa, who is not on the "new" version? Does this journal really permit such unacceptable scholarship?

Music genre taxonomy

[Figure: genretax.png, the genre taxonomy] From: J. G. A. Barbedo and A. Lopes, "Automatic genre classification of musical signals," EURASIP Journal on Advances in Signal Processing, 2007.

The authors specify the meaning of each of these labels. For instance, "Dance" music has "strong percussive elements and very marked beating." Stemming from "Dance" there is "Jazz", "characterized by the predominance of instruments like piano and saxophone. Electric guitars and drums can also be present; vocals, when present, are very characteristic." And stemming from "Jazz," which stems from "Dance," there is "Cool", a "jazz style [that is] light and introspective, with a very slow rhythm." The genres "Techno" and "Disco" --- which both emphasize the importance of listening with your body and feet --- do not stem from "Dance," but instead from "Pop/Rock," "the largest class, including a wide variety of songs."

Props to the authors for attempting the impossible, but any taxonomy of music genre must be broken from the very first stem. Genres are not like species, and cannot be arranged as such. (On the plus side, it appears that differentiating introspective music from non-introspective music requires only four spectral features computed over 21.3 ms windows.)
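For scale, here is roughly what four spectral features over 21.3 ms windows can look like in code. The particular features below (centroid, rolloff, flux, flatness) are common stand-ins and not necessarily the four the authors use, and the 1024-sample window at 48 kHz giving 21.3 ms is my assumption.

import numpy as np

def spectral_features(x, sr=48000, win=1024):
    """Four frame-level spectral features over ~21.3 ms windows (illustrative set)."""
    n = len(x) // win
    frames = x[:n * win].reshape(n, win) * np.hanning(win)
    mag = np.abs(np.fft.rfft(frames, axis=1))
    power = mag ** 2 + 1e-12
    freqs = np.fft.rfftfreq(win, 1.0 / sr)
    centroid = (freqs * mag).sum(axis=1) / (mag.sum(axis=1) + 1e-12)
    csum = np.cumsum(power, axis=1)
    rolloff = freqs[np.argmax(csum >= 0.85 * csum[:, -1:], axis=1)]  # 85% energy point
    flux = np.r_[0.0, np.sqrt((np.diff(mag, axis=0) ** 2).sum(axis=1))]
    flatness = np.exp(np.log(power).mean(axis=1)) / power.mean(axis=1)
    return np.column_stack([centroid, rolloff, flux, flatness])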
