Jun 20 2008
All ISMIR 2008 tutorials will be held on Sunday, September 14 at Drexel University. The tutorials run in parallel pairs: two in the morning and two in the afternoon. Unfortunately, the registration system will allow you to sign up (and be charged) for two simultaneous tutorials, so please make your choices carefully.
Tutorial AM I (09:00-12:00): Music Information Retrieval in ChucK; Real-Time Prototyping for MIR Systems & Performance
Ge Wang, Stanford University
Rebecca Fiebrink, Princeton University
Perry R. Cook, Princeton University
This hands-on ISMIR tutorial focuses on the free, open-source ChucK programming language for music analysis, synthesis, learning, and prototyping. Our goal is to familiarize the ISMIR audience with ChucK’s new capabilities for MIR prototyping and real-time performance systems, and to stimulate discussion on future directions for language development and toolkit repository contents. Participants will work with built-in classifiers and implement their own, and we will discuss exciting issues in applying classification to real-time performance, including on-the-fly learning.
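To give a feel for what "on-the-fly learning" means in a performance setting, here is a minimal, language-agnostic sketch in Python of a nearest-centroid classifier whose model can be updated incrementally while it runs. This is an illustrative toy, not ChucK code, and it does not reflect ChucK's actual classifier API; the class and feature values are hypothetical.

```python
import math

class OnTheFlyClassifier:
    """Toy nearest-centroid classifier that can be trained incrementally,
    one example at a time, as a performance unfolds (hypothetical sketch)."""

    def __init__(self):
        self.sums = {}    # label -> running per-dimension feature sums
        self.counts = {}  # label -> number of training examples seen

    def train(self, features, label):
        # Incrementally update the running centroid for this label.
        if label not in self.sums:
            self.sums[label] = [0.0] * len(features)
            self.counts[label] = 0
        for i, x in enumerate(features):
            self.sums[label][i] += x
        self.counts[label] += 1

    def predict(self, features):
        # Return the label whose centroid is nearest (Euclidean distance).
        best, best_dist = None, float("inf")
        for label, sums in self.sums.items():
            centroid = [s / self.counts[label] for s in sums]
            dist = math.dist(features, centroid)
            if dist < best_dist:
                best, best_dist = label, dist
        return best

clf = OnTheFlyClassifier()
clf.train([0.1, 0.2], "snare")   # made-up feature vectors
clf.train([0.9, 0.8], "kick")
print(clf.predict([0.85, 0.75]))  # -> kick
```

The point of the sketch is that training and prediction interleave freely, which is what makes classification usable inside a live performance loop.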
Tutorial AM II (09:00-12:00): Computational Temporal Aesthetics; Relation Between Surprise, Salience and Aesthetics in Music and Audio Signals
Shlomo Dubnov, UC San Diego
In this tutorial we will link measures of aesthetics to expectancies, creating a bridge between meaning and beauty in music. We will present an up-to-date account of different models of expectancy and dynamic information in musical signals, tying these to, and generalizing upon, the notions of aesthetics, surprisal, and salience in other domains. The tutorial also shows the importance of tracking the dynamics of the listening process to account for the time-varying nature of musical experience, suggesting novel description schemes that capture experiential profiles. It surveys recently published methods for measuring the dynamic complexity of temporal signals. Measures such as entropy and mutual information, long used to characterize random processes, have recently been extended to characterize temporal structure in music. This work is also related to research on surprisal and salience in other types of signals, such as Itti and Baldi's Bayesian framework for image salience and interest-point detection, predictive information in machine learning, and grammar-based measures of surprisal in text.
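As a small illustration of entropy as a complexity measure for symbolic music, the following Python sketch computes the Shannon entropy of a pitch sequence; a repetitive melody yields a lower value than a varied one. The example melodies are invented for illustration, and this is of course far simpler than the dynamic, time-varying measures the tutorial surveys.

```python
from collections import Counter
import math

def shannon_entropy(sequence):
    """Shannon entropy (in bits) of a discrete symbol sequence."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A repetitive melody carries less information than a varied one.
repetitive = ["C", "C", "C", "C", "C", "C", "C", "G"]
varied     = ["C", "D", "E", "F", "G", "A", "B", "C"]
print(shannon_entropy(repetitive))  # ~0.54 bits
print(shannon_entropy(varied))      # 2.75 bits
```

Mutual information extends this idea by measuring how much past context predicts the next event, which is closer to the expectancy-based measures discussed in the tutorial.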
Tutorial PM I (13:30-16:30): Social Tags and Music Information Retrieval
Paul Lamere, Sun Labs
Elias Pampalk, Last.fm
Social tags are free-text labels that are applied to items such as artists, playlists, and songs. These tags can have a significant positive impact on music information retrieval research. In this tutorial we describe the state of the art in commercial and research social-tagging systems for music. We explore some of the motivations for tagging and describe the factors that affect the quantity and quality of collected tags. We present a toolkit that MIR researchers can use to harvest and process tags, look at how tags are collected and used in current commercial and research systems, and explore some of the issues and problems encountered when using tags. Finally, we present current MIR-related research centered on social tags and suggest possible areas for future research.
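One small but recurring problem when processing harvested tags is that the same tag appears in many surface forms. The Python sketch below shows one simple normalization strategy; the tag list and the `normalize_tag` helper are hypothetical illustrations, not part of the toolkit the tutorial presents.

```python
from collections import Counter

def normalize_tag(tag):
    # Fold case, hyphen, and whitespace variants together, so that
    # "Hip-Hop" and "hip hop" count as the same tag (illustrative only).
    return tag.lower().replace("-", " ").strip()

raw_tags = ["Hip-Hop", "hip hop", "Jazz", "jazz ", "JAZZ", "electronica"]
counts = Counter(normalize_tag(t) for t in raw_tags)
print(counts.most_common())  # [('jazz', 3), ('hip hop', 2), ('electronica', 1)]
```

Real systems typically go further (spelling correction, synonym merging, spam filtering), which is part of what makes tag quality a research question in its own right.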
Tutorial PM II (13:30-16:30): Survey of Symbolic Data for Music Applications
Eleanor Selfridge-Field, Stanford University
Craig Sapp, University of London
The primary aim of this tutorial is to show how symbolic data usage has been broadened and deepened over the past ten years, carrying it far beyond the confines of program-specific data encoding schemes (principally for musical notation). However, one size does not fit all when dealing with musical data, and an encoding choice will depend on the intended application. Data formats to be discussed include: Humdrum (for music analysis), MusicXML (for data transfer), the Music Encoding Initiative’s XML format (for archival symbolic scores), SCORE (for graphical score layout), as well as legacy musical codes such as Plaine & Easie (used to encode RISM incipits), and pre-computer era notational codes.
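To make the notion of symbolic encoding concrete, here is a short fragment in the Humdrum **kern format (one of the analysis-oriented formats the tutorial covers), encoding four quarter notes of an ascending scale. This is a minimal illustrative sketch, not an example drawn from the tutorial itself.

```
**kern
*clefG2
*M4/4
4c
4d
4e
4f
=
*-
```

Each line is one event in time; tokens like `4c` combine a duration (quarter note) with a pitch, and `*-` terminates the spine. Other formats trade off differently: MusicXML, for instance, encodes the same notes far more verbosely but carries notation and layout detail that **kern omits.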