Event Abstract

Enabling emotional expression and interaction with new expressive interfaces

  • 1 Department of Speech, Music and Hearing, KTH School of Computer Science and Communication, Sweden

Many past studies about music and emotion have focused on the relation between musical features and specific emotions. A large number of these studies have been summarized qualitatively, resulting in detailed specifications of musical features and the corresponding emotional expressions (e.g. Gabrielsson and Lindström, 2001; Juslin, 2001; Juslin and Laukka, 2003). For example, a sad performance is associated with slow tempo, legato articulation and minor mode.
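Such qualitative feature-emotion mappings can be operationalized as simple rules. The sketch below is a hypothetical illustration in this spirit; the threshold values and the specific feature set are illustrative assumptions, not values taken from the cited literature or from any of the systems described here.

```python
def classify_emotion(tempo_bpm, articulation, mode):
    """Map coarse musical features to an emotion label.

    tempo_bpm    -- performance tempo in beats per minute
    articulation -- "legato" or "staccato"
    mode         -- "major" or "minor"
    """
    # Slow tempo, legato articulation and minor mode are typically
    # associated with sadness (see e.g. Gabrielsson and Lindström, 2001).
    if tempo_bpm < 80 and articulation == "legato" and mode == "minor":
        return "sad"
    # Fast, staccato and major is commonly associated with happiness.
    if tempo_bpm > 120 and articulation == "staccato" and mode == "major":
        return "happy"
    # Fast tempo in minor mode often reads as angry or agitated.
    if tempo_bpm > 120 and mode == "minor":
        return "angry"
    return "neutral"

print(classify_emotion(60, "legato", "minor"))  # a slow, legato, minor performance
```

Real recognizers would of course estimate such features from audio or gesture data and combine many more cues, but the rule-like structure follows directly from the qualitative summaries.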

Using this qualitative knowledge as a starting point, many new applications have recently been proposed, including music analysis, music recommendation systems, music performance systems, and artistic interfaces. We have been working mainly on real-time systems, both for generating musical expression (synthesis) and for recognizing musical expression (analysis). The basic tools have been a simple emotion recognizer and the KTH rule system for music performance. The emotion recognizer has been used for gesture and singing input in a computer game, as well as connected to an artificial head and other graphical displays. The rule system has been used in a layman conductor interface, and for experiments with both musicians and children. An extension of the rule system is under development in which an existing audio recording can be morphed into another expression by manipulating the tempo, dynamics and articulation of individual voices.
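The kind of transformation the morphing extension applies can be sketched in miniature: rescaling the tempo, dynamics and articulation of a sequence of note events. The `Note` structure and the scaling parameters below are hypothetical illustrations, not the actual representation used by the KTH rule system.

```python
from dataclasses import dataclass

@dataclass
class Note:
    onset: float     # onset time in seconds
    duration: float  # sounding duration in seconds
    velocity: int    # MIDI-style dynamics, 1-127

def morph(notes, tempo_scale=1.0, dyn_scale=1.0, artic_scale=1.0):
    """Return new notes with tempo, dynamics and articulation rescaled.

    tempo_scale > 1 slows the performance (stretches onsets and durations);
    dyn_scale scales loudness; artic_scale < 1 shortens notes toward
    staccato, > 1 lengthens them toward legato.
    """
    return [
        Note(
            onset=n.onset * tempo_scale,
            duration=n.duration * tempo_scale * artic_scale,
            velocity=max(1, min(127, round(n.velocity * dyn_scale))),
        )
        for n in notes
    ]
```

For example, `morph(notes, tempo_scale=1.5, dyn_scale=0.7, artic_scale=1.2)` would push a neutral performance toward a slower, softer, more legato (sadder) rendering. Applying such scalings per voice, on audio rather than symbolic notes, is what makes the actual extension considerably harder.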

These applications and tools have been found to be particularly useful in non-traditional musical contexts such as computer games, behavioral studies and music therapy. Recently, different innovative sound/music hardware interfaces have been designed, and they are currently being evaluated for their potential to help children with reduced hearing (hearing loss, deafness, cochlear implants) explore and develop their sensitivity to and discrimination of music and sound in a playful way.

Conference: Tuning the Brain for Music, Helsinki, Finland, 5 Feb - 6 Feb, 2009.

Presentation Type: Oral Presentation

Topic: Session Talks

Citation: Friberg A, Bresin R, Falkenberg Hansen K and Fabiani M (2009). Enabling emotional expression and interaction with new expressive interfaces. Conference Abstract: Tuning the Brain for Music. doi: 10.3389/conf.neuro.09.2009.02.012

Copyright: The abstracts in this collection have not been subject to any Frontiers peer review or checks, and are not endorsed by Frontiers. They are made available through the Frontiers publishing platform as a service to conference organizers and presenters.

The copyright in the individual abstracts is owned by the author of each abstract or his/her employer unless otherwise stated.

Each abstract, as well as the collection of abstracts, are published under a Creative Commons CC-BY 4.0 (attribution) licence (https://creativecommons.org/licenses/by/4.0/) and may thus be reproduced, translated, adapted and be the subject of derivative works provided the authors and Frontiers are attributed.

For Frontiers’ terms and conditions please see https://www.frontiersin.org/legal/terms-and-conditions.

Received: 23 Jan 2009; Published Online: 23 Jan 2009.

* Correspondence: Anders Friberg, Department of Speech, Music and Hearing, KTH School of Computer Science and Communication, Stockholm, Sweden, afriberg@csc.kth.se