23 September 2011
Audio-graphic Sound Synthesis Workshop

Versatile Sound Models for Interaction in Audio–Graphic Virtual Environments: Control of Audio-graphic Sound Synthesis

Satellite Workshop at the 14th International Conference on Digital Audio Effects (DAFx), 2011, Paris

Andy Farnell (Friday September 23, 2011, salle Stravinsky at Ircam, Paris)

The use of 3D interactive virtual environments is becoming more widespread in areas such as games, architecture, urbanism, information visualization and sonification, interactive artistic digital media, serious games, and gamification. As requirements grow, the limitations of sound generation in existing environments become increasingly obvious.

This workshop will look at recent advances and future prospects in sound modeling, representation, transformation and synthesis for interactive audio-graphic scene design.

Several approaches to extending sound generation in 3D virtual environments have been developed in recent years, such as sampling, modal synthesis, additive synthesis, corpus-based synthesis, granular synthesis, description-based synthesis, and physical modeling. These techniques can be quite different in their methods and results, but they may also prove complementary towards the common goal of versatile and understandable virtual scenes, covering a wide range of object types, of interactions between objects, and of user interactions with them.
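
As an illustration of one of the techniques listed above, the following is a minimal granular synthesis sketch in plain Python. All names and parameter values are illustrative, not taken from any of the workshop systems: it cuts Hann-windowed grains from a source buffer and overlap-adds them at a fixed hop.

```python
import math

def grain(source, start, length):
    """Extract one grain from a source signal and apply a Hann window."""
    g = []
    for i in range(length):
        w = 0.5 - 0.5 * math.cos(2 * math.pi * i / (length - 1))  # Hann window
        g.append(source[(start + i) % len(source)] * w)
    return g

def granular_resynthesis(source, n_grains, grain_len, hop):
    """Overlap-add a stream of windowed grains read from the source buffer."""
    out = [0.0] * (n_grains * hop + grain_len)
    for k in range(n_grains):
        start = (k * 37) % len(source)  # arbitrary, deterministic read positions
        for i, s in enumerate(grain(source, start, grain_len)):
            out[k * hop + i] += s
    return out

# Source buffer: a short 440 Hz sine at 44.1 kHz
src = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(4410)]
out = granular_resynthesis(src, n_grains=50, grain_len=1024, hop=256)
```

Randomizing the read positions, grain lengths, and hop would turn this deterministic sketch into the textural, cloud-like output granular synthesis is known for.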

The purpose of this workshop is to sum up these different approaches, present current work in the field, and to discuss their differences, commonalities and complementarities.

Workshop program, presentations & videos

  9:00 (0:10)  Welcome and Introduction
               D. Schwarz, R. Cahen » watch the video

  9:10 (0:30)  Invited Presentation: Principles and Practice of Procedural Audio (PDF)
               A. Farnell » watch the video

Presentations 1

  9:40 (0:15)  Phya: A Lightweight Framework and Synthesis Toolkit For Interactive Environmental Audio (PDF)
               D. Menzies » watch the video

  9:55 (0:15)  Modeling of audio-graphic scenes with mass-interaction physical models (PDF)
               M. Christou, O. Tache, C. Cadoz, N. Castagné, A. Luciani » watch the video

  10:10 (0:15) Sound in Virtual Cities: the TerraDynamica Project
               S.H. Chan, C. Le Prado, S. Natkin, G. Tiger, A. Topol » watch the video

  10:25 (0:15) Topophonie Mobile: An immersive audio interactive augmented experience (PDF)
               R. Cahen, X. Boissarie, N. Schnell, D. Schwarz » watch the video

  10:40 (0:05) Gestural Auditory And Visual Interactive Platform (PDF)
               B. Caramiaux, S. Fdili-Alaoui, T. Bouchara, G. Parseihian, M. Rébillat » watch the video

  10:45 (0:15) Break

Presentations 2

  11:00 (0:15) Sonification of drawings (ZIP)
               E. Thoret, M. Aramaki, R. Kronland-Martinet, J.L. Velay, S. Ystad (web page) » watch the video

  11:15 (0:15) Dynamic Intermediate Models for Audio-Graphic Synthesis (web page)
               V. Goudard, H. Genevois, B. Doval » watch the video

  11:30 (0:15) Hybrid sparse models of water stream texture sounds (PDF)
               S. Kersten, H. Purwins » watch the video

  11:45 (0:15) Descriptor-Based Texture-Synthesis Control in Interactive 3D Scenes by Activation Profiles (PDF)
               D. Schwarz, R. Cahen, N. Schnell » watch the video

Short Presentations

  12:00 (0:05) Hierarchical Musical Structures in 3D Virtual Environments (PDF)
               F. Berthaut » watch the video

  12:05 (0:05) Gestural Control of Environmental Texture Synthesis (PDF)
               A. Masurelle, D. Schwarz » watch the video

  12:10 (0:05) Visualization of Perceptual Qualities in Textural Sounds (PDF)
               T. Grill, U. Rauter » watch the video

  12:15 (0:05) Multi-Modal Musical Environments for Mixed-Reality Performance (PDF)
               R. Hamilton » watch the video

  12:20 (0:40) Discussion » watch the video

  13:00        End

The workshop is free for DAFx conference attendees upon registration, and open to non-attendees by invitation.

Program Chairs

Diemo Schwarz, IRCAM
Roland Cahen, ENSCI-les Ateliers
Hui Ding, LIMSI-CNRS & University Paris Sud 11

Program committee

Nicolas Tsingos (Dolby Laboratories)
Lonce Wyse (National University of Singapore)
Andrea Valle (University of Torino)
Hendrik Purwins (University Pompeu Fabra)
Thomas Grill (Institut für Elektronische Musik IEM, Graz)
Charles Verron (McGill University, Montreal)
Cécile Le Prado (Conservatoire National des Arts et Métiers, CNAM)
Christian Jacquemin (LIMSI-CNRS & University Paris Sud 11)

Topics in detail

What better alternatives to traditional sample triggering exist for producing comprehensive, flexible, expressive, and realistic sounds in virtual environments? How can rich interaction with scene objects be achieved, for instance through physically informed models for contact and friction sounds? How can audio-graphic scenes be edited and structured beyond mapping one event to one sound? There is no standardized architecture, representation, or language for auditory scenes and objects, as OpenGL is for graphics.

The workshop will treat higher-level questions of architecture and modeling of interactive audio-graphic scenes, down to the detailed questions of sound modeling, representation, transformation, and synthesis. These questions cannot be detached from implementation issues: novel and hybrid synthesis methods, comparison and improvement of existing platforms, software architecture, plug-in systems, standards, formats, etc.
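
As one example of a physically informed alternative to sample triggering, a contact sound can be approximated by modal synthesis, i.e. a sum of exponentially decaying sinusoids excited by an impact. The sketch below is illustrative only; the mode frequencies, decay rates, and amplitudes are invented, not measured from a real object.

```python
import math

def modal_impact(modes, duration=0.5, sr=44100):
    """Render an impact as a sum of exponentially decaying sinusoids,
    a basic modal model of a struck resonant object.
    Each mode is (frequency in Hz, decay rate in 1/s, amplitude)."""
    n = int(duration * sr)
    out = []
    for i in range(n):
        t = i / sr
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in modes)
        out.append(s)
    return out

# Hypothetical mode set, loosely imitating a small metal bar
bar_modes = [(523.0, 3.0, 0.6), (1308.0, 5.0, 0.3), (2890.0, 9.0, 0.1)]
sound = modal_impact(bar_modes)
```

Because the model is parametric, the same code can render arbitrarily many variations (different strike strengths, materials, object sizes) by scaling the mode parameters, which is exactly what fixed samples cannot do.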

New possibilities regarding the use of audio descriptors and dynamic access to audio databases will also be discussed.
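
As a concrete example of such a descriptor, the spectral centroid (the magnitude-weighted mean frequency of a frame) is commonly used to characterize the brightness of a sound and to select units from an audio database. A naive, self-contained sketch, illustrative only:

```python
import math

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of a frame, via a naive DFT.
    A standard audio descriptor correlated with perceived brightness."""
    n = len(frame)
    num = 0.0
    den = 0.0
    for k in range(n // 2):  # bins up to the Nyquist frequency
        re = sum(frame[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(frame[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        num += (k * sr / n) * mag
        den += mag
    return num / den if den > 0 else 0.0

# A 1000 Hz sine; the frame length (441 at 44.1 kHz) is chosen so that
# 1000 Hz falls exactly on a DFT bin and the centroid is unbiased by leakage
sr = 44100
frame = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(441)]
c = spectral_centroid(frame, sr)  # close to 1000.0
```

In a descriptor-driven system, values like this would be precomputed for every unit in the database so that synthesis can retrieve sounds matching a target descriptor trajectory.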

Beyond these main questions, the workshop will cover other recent advances in audio-graphic scene modeling such as:

  • audio-graphic object rendering, and physically and geometrically driven sound rendering,
  • interactive sound texture synthesis, based on signal models or physically informed,
  • joint representation of sound and graphic spaces and objects,
  • sound rendering for audio-graphic scenes:
    • level of detail, a well-established concept in graphics but rarely treated in audio,
    • representation of space and distance,
    • masking and occlusion of sources,
    • clustering of sources,
  • audio-graphic interface design,
  • sound and graphic localization,
  • cross- and bi-modal perceptual evaluations,
  • interactive audio-graphic arts,
  • industrial audio-graphic data:
    • architectural acoustics,
    • sound maps,
    • urban soundscapes...
  • platforms and tools for audio-graphic scene modeling and rendering.
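
To give one concrete example from the list above, audio level of detail can be sketched as a simple policy that reduces synthesis detail (e.g. the number of rendered modes) with listener distance, just as geometric level of detail reduces polygon count. All names and thresholds below are hypothetical:

```python
def audio_lod(distance, max_modes=16, near=1.0, far=50.0):
    """Map listener distance to a synthesis detail level (number of modes
    to render), analogous to geometric level of detail in graphics."""
    if distance <= near:
        return max_modes           # full detail close to the listener
    if distance >= far:
        return 1                   # minimal detail beyond the far threshold
    frac = (far - distance) / (far - near)  # 1.0 at near, 0.0 at far
    return max(1, round(max_modes * frac))
```

A real system would likely interpolate on a perceptual (e.g. logarithmic) scale and cross-fade between detail levels to avoid audible switching artifacts.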

These areas are interdisciplinary and interrelated; advances in each will benefit the others. This workshop will allow participants to exchange the latest developments and to point out current challenges and new directions.

What is Topophonie?

The Topophonie project concerns sound navigation through flows and masses of spatialized audio-graphic events.

The research project Topophonie proposes lines of research and innovative developments for sound and visual navigation in spaces composed of multiple and disseminated sound and visual elements.

read more...

Who is involved?

The project team is composed of researchers specialized in sound and visualization, designers, artists, and companies specialized in the relevant application domains. The partners are: ENSCI-les Ateliers, LIMSI, IRCAM, Navidis, Orbe, and User Studio.

learn more...

Support

Topophonie is a laureate of the 2009 CONTINT (interactive content) call for projects issued by the Agence Nationale de la Recherche, and receives development funding on that basis. The project is also certified by the Cap Digital competitiveness cluster.

ANR (Agence Nationale de la Recherche) · Cap Digital

In images: Flickr
In videos: Vimeo