Chloé × Ircam @ Fête de la musique 2015

The French Ministère de la culture et de la communication asked IRCAM to imagine a participative concert for the Fête de la musique ’15, whose theme was « Vivre ensemble la musique » (“To live music together”). We partnered with Chloé to design the interactive live experience Chloé × Ircam. During the performance, Chloé alternates between moments when she plays alone, partially distributing the sound onto the audience’s smartphones, and moments when she leaves room for the audience to play with her using their smartphones, enabling a musical dialogue between her and the audience.

The concert took place on June 21st at the Jardin du Palais-Royal in Paris. At the beginning of the concert, participants are invited to join a WiFi network and to open a URL (chloe.ircam.fr). Once they have indicated their position in the venue through a simple interface, the experience can begin. In addition to her usual live electronics setup, Chloé has four tablets on which each participant shows up as a circle at the indicated position. By touching these circles, she can play different sound textures on the participants’ smartphones, and by moving her fingers over the touch screens, she can move these textures across the space of the audience. On an additional tablet, Chloé can enable four different interfaces (i.e. simple instruments) on the participants’ devices, which they can play by touching the screen and shaking the device.
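
For the curious, here is a minimal JavaScript sketch of how such a touch-to-audience mapping can work, assuming normalized venue coordinates and a simple linear falloff; the names and values are illustrative, not the production code.

```js
// Hypothetical sketch: map a touch position on the tablet to per-participant
// gains, so that a sound texture is loudest near the touch and fades with
// distance. `participants` would come from the positions indicated at login.
const participants = [
  { id: 'a', x: 0.2, y: 0.3 }, // normalized venue coordinates in [0, 1]
  { id: 'b', x: 0.8, y: 0.6 },
];

function gainsForTouch(touchX, touchY, radius = 0.3) {
  return participants.map(({ id, x, y }) => {
    const d = Math.hypot(x - touchX, y - touchY);
    // Linear falloff within `radius`, silence beyond it.
    return { id, gain: Math.max(0, 1 - d / radius) };
  });
}

// The server would then forward each gain to the matching smartphone,
// e.g. over a WebSocket connection.
console.log(gainsForTouch(0.5, 0.5));
```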

The preparation of this project took three months, during which we ran two live tests: one at IRCAM in early May with around 30 colleagues and friends, and one at the Centre Pompidou on June 9th with over 150 participants.

Open House @ IRCAM

During IRCAM’s open house on June 6th, CoSiMa presented two projects: Collective Loops and Woodland.


Collective Loops

Collective Loops is a collaborative version of an 8-step loop sequencer. When visitors access the webpage of the installation with their smartphones, they are automatically assigned to an available step of the loop, and their smartphone plays a sound when its turn comes. Each participant controls the pitch of their sound through the inclination of their smartphone. Together, the participants are invited to create a melody of 8 pitches that circulates at a steady tempo over their smartphones.
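
A possible sketch of the client-side logic, using the standard deviceorientation event and the Web Audio API; the tilt mapping and the oscillator sound are assumptions made for the example, not the installation’s actual code.

```js
// Quantize the phone's front-back tilt into one of 12 pitch steps, and play
// the current pitch when the shared clock reaches this participant's step.
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
let pitchIndex = 0;

window.addEventListener('deviceorientation', (e) => {
  // e.beta is the front-back inclination in degrees; map [0°, 90°] to [0, 11].
  const clamped = Math.min(90, Math.max(0, e.beta));
  pitchIndex = Math.min(11, Math.floor((clamped / 90) * 12));
});

// Called when the synchronized sequencer reaches this participant's step.
function playStep() {
  const osc = audioCtx.createOscillator();
  // One semitone per segment above an assumed base frequency of 220 Hz.
  osc.frequency.value = 220 * Math.pow(2, pitchIndex / 12);
  osc.connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + 0.2);
}
```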

A circular visualization of the sequencer is projected on the floor. The projection consists of a circle divided into 8 sections that light up in a counterclockwise circular movement, synchronized with the sounds emitted by the smartphones. Each section is further divided into 12 radial segments that display the pitch of the corresponding sequencer step (i.e. the pitch controlled through the inclination of the participant’s smartphone).
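
The geometry of the projection can be reconstructed from this description; the canvas sketch below is an illustration of that layout, not the installation’s code.

```js
// Draw 8 sections of 12 radial segments each on an assumed <canvas> element;
// the active step is lit, and each step's current pitch segment is marked.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const cx = canvas.width / 2, cy = canvas.height / 2, R = 200;

function draw(currentStep, pitches /* 8 pitch indices in 0..11 */) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (let step = 0; step < 8; step++) {
    const a0 = -step * (Math.PI / 4); // negative sweep: counterclockwise
    const a1 = a0 - Math.PI / 4;
    for (let seg = 0; seg < 12; seg++) {
      const r0 = (R / 12) * seg, r1 = (R / 12) * (seg + 1);
      ctx.beginPath();
      ctx.arc(cx, cy, r1, a0, a1, true);  // outer edge of the segment
      ctx.arc(cx, cy, r0, a1, a0, false); // back along the inner edge
      ctx.closePath();
      ctx.fillStyle = step === currentStep ? '#fff'
        : seg === pitches[step] ? '#888' : '#222';
      ctx.fill();
    }
  }
}
```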

The first 8 participants who connect to the sequencer play a celesta sound, the next 8 play a drum kit, and the last 8 play a bass sound. Altogether, 24 players can create complex rhythmic and melodic patterns.
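
Assuming the server hands each participant a 0-based connection index, the assignment boils down to a few lines:

```js
// Sketch: pick an instrument bank from the order of connection.
function instrumentForIndex(index) {
  const banks = ['celesta', 'drums', 'bass'];
  return banks[Math.floor(index / 8) % 3]; // 0-7 celesta, 8-15 drums, 16-23 bass
}
```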


Woodland

Woodland is a very early-stage prototype that aims at explaining how audio effects such as reverberation arise in natural environments. For this, we create a setting where each participant is a tree in a forest. At some point, a designated player “throws a sound” into the forest by swinging their smartphone upwards. After a few seconds of calculation, the sound falls on one tree; then we hear a first wave of resonances as the sound reaches the other trees, and so on, recursively, until the sound ultimately vanishes.

To make what is going on easier to grasp, we can control several parameters of the simulation, such as the speed of sound in air, the absorption of the air, and the type of sound (with a hard or soft attack). If we set the parameters to match the natural setting, we hear the same reverb as we would in a forest. But if, for example, we slow down the speed of sound, we can hear a very slow version of how this natural reverb is built, echo by echo.
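
For the technically inclined, the propagation can be sketched as a simple recursion over the trees: each hop is delayed by distance divided by the speed of sound and attenuated by air absorption. The positions, the absorption model, and all values below are assumptions made for illustration.

```js
// Each time a wavefront reaches a tree, that tree re-emits it to all the
// others, until the level drops below a threshold and the reverb vanishes.
const trees = [
  { id: 0, x: 0, y: 0 },
  { id: 1, x: 10, y: 5 },
  { id: 2, x: 4, y: 12 }, // positions in meters
];

const params = {
  speedOfSound: 343, // m/s; lowering this slows the whole reverb down
  absorption: 0.05,  // assumed fraction of energy lost per meter
  threshold: 0.05,   // stop the recursion below this gain
};

function propagate(fromTree, gain, time, emit) {
  if (gain < params.threshold) return;
  for (const tree of trees) {
    if (tree.id === fromTree.id) continue;
    const d = Math.hypot(tree.x - fromTree.x, tree.y - fromTree.y);
    const arrivalTime = time + d / params.speedOfSound;
    const arrivalGain = gain * Math.pow(1 - params.absorption, d);
    emit(tree, arrivalTime, arrivalGain); // schedule playback on that phone
    propagate(tree, arrivalGain, arrivalTime, emit);
  }
}

// The thrown sound "falls" on tree 0 at t = 0 with full level.
propagate(trees[0], 1, 0, (tree, t, g) =>
  console.log(`tree ${tree.id} plays at ${t.toFixed(2)} s, gain ${g.toFixed(3)}`));
```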

This first prototype was very promising, and further developments might include a floor visualization of the different sounds that bounce from tree to tree to create the reverb effect.

Web Audio Now! @ Best of Web 2015

When Cyril Balit participated in the CoSiMa demonstration at Paris Face Cachée, he asked us whether we would be willing to present our work at the Best of Web 2015 conference in Paris, which compiles the best talks from 8 Parisian web meetups.

So there we went! Together with the WAVE project (the IRCAM Web Audio library on which most of our work is based; see also wavesjs on GitHub), we presented our ongoing research and projects. In particular, we took advantage of this event to test a new collaborative experience that would premiere at the Fête de la musique (see Chloé × Ircam). With more than 130 connections, it confirmed that we were on the right track for the show! We also got a lot of positive feedback from the JS community on the technologies we are developing.

The slides of the presentation are available here.

Rone : Créatures & Cie @ Palais de Tokyo

Rone was invited to unveil the universe he created for his latest album, Créatures, at the Palais de Tokyo (Le Point Perché). Along with illustrations, photographs, binaural experiences, and video games, CoSiMa presented Créatures & Cie – Collective Sound Check, a spontaneous collective performance that enables the audience to play with Rone’s creatures. By simply opening a web page, the visitors of the exhibition can discover a novel way of exploring Rone’s musical universe and fill the space with his sound creatures.

http://cosima.ircam.fr/creatures-et-cie (from a smartphone)

This performance is a first step towards a new generation of interactive musical experiences that Rone is developing in collaboration with IRCAM.


Sonosphere @ Confluences

Orbe designed and implemented Sonosphere for the Musée des Confluences, which opened at the end of 2014 in Lyon. Sonosphere is an immersive sound experience deployed both indoors and outdoors: using a mobile device, the visitor navigates through the voices that inhabit the areas of the museum. At the Musée des Confluences, Sonosphere allows the visitor to explore the sound memory of the museum, to discover spaces behind the scenes, and to access other hidden dimensions.

As part of the CoSiMa project, Sonosphere integrates 3D-sound technologies provided by IRCAM. Visitors are addressed by spatialized voices that seem to come from the art pieces, columns, or walls; HRTF filtering enables the spatial localization of these sound sources. The visitors are continuously geolocated through a Bluetooth LE beacon network covering the 20,000 m² of the museum, and the smartphone’s inertial sensors give an estimate of each visitor’s orientation within the space. A display-free mode allows the visitors to navigate with their ears; this intuitive interface is close to natural listening and thus requires no learning.
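
As an illustration, here is how the binaural rendering of one such voice could look with the standard Web Audio API. This is a minimal sketch, not Orbe’s or IRCAM’s actual implementation.

```js
// Place a voice at a fixed point in space and update the listener from the
// beacon geolocation and the inertial sensors; HRTF panning renders the
// result binaurally for headphones.
const ctx = new (window.AudioContext || window.webkitAudioContext)();

const panner = ctx.createPanner();
panner.panningModel = 'HRTF';
panner.distanceModel = 'inverse';
panner.setPosition(12, 0, 4); // where the voice "lives", in meters

// Assumes an <audio> element carrying the voice recording.
const source = ctx.createMediaElementSource(document.querySelector('audio'));
source.connect(panner);
panner.connect(ctx.destination);

// Called whenever the beacons / sensors deliver a new position and heading.
function updateListener(x, z, headingRadians) {
  ctx.listener.setPosition(x, 0, z);
  // Forward vector in the horizontal plane, up vector along y.
  ctx.listener.setOrientation(
    Math.sin(headingRadians), 0, -Math.cos(headingRadians), 0, 1, 0);
}
```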

With this geolocation and binaural sound processing, Sonosphere adds a sound layer to the museum spaces. The sounds no longer belong to the headphones, but are truly part of the environment.

Collective Sound Check @ Paris Face Cachée

On February 6th, 7th, and 8th, the Ville de Paris and the À Suivre association organized the 4th edition of Paris Face Cachée, which proposes original and off-the-wall ways to discover the city. The CoSiMa team led the Expérimentations sonores (“sound experimentations”) workshops held at IRCAM on February 7th.

Three groups of 24 participants could test the latest web applications we developed. The audience first tried a few soundscapes (Birds and Monks) to get familiar with the sound-motion interactions on their smartphones, and to learn how to listen to each other while individually contributing to a collective sonic environment.

In the second part of the workshop, we invited the participants to take part in the Drops collective smartphone performance. While the soundscapes also work as standalone web applications (i.e. they do not technically require other players), Drops is inherently designed for a group of players, with the technology directly supporting the social interaction. Each player can play a limited number of sound drops whose pitch varies depending on the touch position. The sound drops are automatically echoed by the smartphones of the other players before coming back to the original player, creating a fading loop of long echoes until they vanish. The collective performance is accompanied by a synchronized soundscape on ambient loudspeakers.
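
The echo mechanism can be sketched as follows; the period, decay factor, and message format are assumptions made for illustration, not the actual implementation.

```js
// When a player triggers a drop, echoes are scheduled on the other players'
// phones with a fixed period and a decaying gain until they fade out.
function scheduleEchoes(drop, otherPlayers, send) {
  const period = 2;   // seconds between echoes (assumed value)
  const decay = 0.75; // gain multiplier per echo (assumed value)
  let gain = drop.gain * decay;
  let time = drop.time + period;
  let i = 0;
  while (gain > 0.05) {
    // Rotate through the other players so the echoes travel across the room.
    const target = otherPlayers[i % otherPlayers.length];
    send(target, { pitch: drop.pitch, gain, time });
    gain *= decay;
    time += period;
    i += 1;
  }
}
```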

The performance is strongly inspired by the mobile application Bloom by Brian Eno and Peter Chilvers.

Below are a few pictures from the event.

CoSiMa @ WAC’15

At the first international Web Audio Conference (WAC’15), CoSiMa presented three pieces of work.


Collective Sound Checks (poster)

Just like at TEI’15 the week before, we presented our work on the Collective Sound Checks through the poster you can see below. Quite a lot of people gathered at our booth during the demo session to play with the web apps and create spontaneous collective performances.

Collective Sound Checks WAC'15 Poster


Soundworks (paper & poster)

We presented a draft of the Soundworks library (which has evolved quite a lot since then): Soundworks is a JavaScript framework that enables artists and developers to create collaborative music performances in which a group of participants distributed in space use their smartphones to generate sound and light through touch and motion.
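
To give an idea of the underlying pattern (and emphatically not the Soundworks API itself; see the repository for that), a bare-bones Node server relaying players’ actions over WebSockets could look like this:

```js
// Minimal relay server: keep track of connected smartphones and forward each
// player's action (touch, shake, ...) to everyone else.
const { WebSocketServer } = require('ws'); // npm install ws
const wss = new WebSocketServer({ port: 8080 });

const clients = new Set();
wss.on('connection', (socket) => {
  clients.add(socket);
  socket.on('message', (data) => {
    for (const other of clients) {
      if (other !== socket) other.send(data.toString());
    }
  });
  socket.on('close', () => clients.delete(socket));
});
```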

In particular, we used Soundworks to build the Drops collective performance (see below). You can read the WAC paper here, or have a look at the GitHub repository for more up-to-date information. Finally, you’ll find the WAC poster below.

Soundworks WAC'15 poster


Drops (performance)

Finally, we gave the first public performance of Drops, a collective smartphone piece built with Soundworks. Drops is strongly inspired by the mobile application Bloom by Brian Eno and Peter Chilvers, and transposes it into a collaborative experience: each participant can only play a single sound (i.e. a single pitch), whose timbre can vary depending on the touch position. Together, the players can construct sound sequences (i.e. melodies) by combining their sounds. The sounds are repeated in a fading loop every few seconds until they vanish, and players can clear the loop by shaking their smartphones. The sounds triggered by one player are automatically echoed by the smartphones of the other players. The collective performance on the smartphones is accompanied by a synchronized soundscape on ambient loudspeakers. This first performance of Drops gathered around 60 players at the WAC.
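
For illustration, the shake gesture can be detected with the standard devicemotion event; the threshold and debounce values below are assumptions, not the values used in Drops.

```js
// Clear the loop when the acceleration magnitude exceeds a threshold,
// with a one-second debounce so a single shake only clears once.
const SHAKE_THRESHOLD = 20; // m/s² (gravity alone is ~9.8 m/s²)
let lastShake = 0;

window.addEventListener('devicemotion', (e) => {
  const a = e.accelerationIncludingGravity;
  if (!a) return;
  const magnitude = Math.hypot(a.x, a.y, a.z);
  const now = Date.now();
  if (magnitude > SHAKE_THRESHOLD && now - lastShake > 1000) {
    lastShake = now;
    clearLoop(); // would remove this player's sounds from the shared loop
  }
});

function clearLoop() {
  console.log('loop cleared');
}
```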

CoSiMa @ TEI’15

CoSiMa submitted a Work-in-Progress paper to the conference on Tangible, Embedded, and Embodied Interaction held at Stanford University in January 2015 (TEI’15). The paper, Collective Sound Checks — Exploring Intertwined Sonic and Social Affordances of Mobile Web Applications, describes the mobile-web scenarios we tested at the Centre Pompidou with the Studio 13/16, and explores how these new forms of musical expression strongly shift the focus of design from human-computer interactions towards the emergence of computer-mediated interactions between players, based on the sonic and social affordances of ubiquitous technologies.

We presented our work during the poster session and got a lot of attention from the conference attendees: people had a lot of fun playing with the CoSC web applications and We Will Rock You: Reloaded, were impressed by the work done, and were looking forward to the upcoming developments.

Collective Sound Checks TEI-15 Poster

The paper is available in the ACM Digital Library (PDF and additional information).

Overexposure / Surexposition @ Fête des Lumières

An interactive public installation with smartphones, Fête des Lumières, Lyon, December 2014

Overexposure is an interactive work bringing together a public installation and a smartphone application. On an urban square, a large black monolith projects an intense beam of white light into the sky. Visible all over the city, the beam turns off and on, pulsating in a way that conveys rigor and a will to communicate, even if we don’t immediately understand the signals it is producing. On one side of the monolith, white dots and dashes scroll past, from the bottom up, marking the installation with their rhythm: each time one reaches the top of the monolith, the light goes off, as if the marks were emptying into the light.

On a completely different scale, we see the same marks scrolling across the smartphone screens of the people in attendance, interacting with the work, following the same rhythm. Here, it is the flash of the smartphones that releases light in accordance with the coded language. These are in fact messages being sent, in Morse code, from everyone, to everyone and to the sky, which we can read thanks to the surtitles that accompany the marks.

Using a smartphone, anyone can send a message, saying what they think and thus presenting themselves, for a few moments, to everyone, to a community sharing the same time and the same rhythm. And we can take the pulse of an even larger community, on the scale of the city and in real time, through a map of mobile phone network use that can be visualized on one side of the monolith or via smartphone.

From an individual device (smartphone) the size of a hand to a shared format on the scale of the city, a momentary community forms and transforms, sharing a space, a pace, the same data, following a type of communication whose ability to bring together through a sensory experience is more important than the meaning of the messages it transmits or their destination, which is lost in the sky.
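
Behind the poetry, the signaling layer is plain Morse timing. Here is a sketch of how a message can be turned into timed on/off light segments; the encoding table is abbreviated and the timing values are assumptions, not the production code.

```js
// Convert a message into a sequence of { on, ms } segments that could drive
// the monolith's beam or a phone's flash.
const MORSE = { E: '.', O: '---', S: '...', T: '-'
                /* ...remaining letters elided... */ };
const UNIT = 200; // ms per Morse time unit (assumed)

function toFlashes(message) {
  const flashes = [];
  for (const char of message.toUpperCase()) {
    if (char === ' ') {
      flashes.push({ on: false, ms: 4 * UNIT }); // extends to a 7-unit word gap
      continue;
    }
    for (const mark of MORSE[char] || '') {
      flashes.push({ on: true, ms: mark === '.' ? UNIT : 3 * UNIT });
      flashes.push({ on: false, ms: UNIT }); // gap between marks
    }
    flashes.push({ on: false, ms: 2 * UNIT }); // extends to a 3-unit letter gap
  }
  return flashes;
}

console.log(toFlashes('SOS')); // alternating on/off segments
```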


(Photos: Samuel Bianchini)

Credits
An Orange/EnsadLab project

A project under the direction of Samuel Bianchini (EnsadLab), in collaboration with Dominique Cunin (EnsadLab), Catherine Ramus (Orange Labs/Sense), and Marc Brice (Orange Labs/Openserv), in the framework of a research partnership with Orange Labs

“Orange/EnsadLab” partnership directors: Armelle Pasco, Director of Cultural and Institutional Partnerships, Orange and Emmanuel Mahé, Head of Research, EnsAD

  • Project Manager (Orange): Abla Benmiloud-Faucher
  • IT Development (EnsadLab): Dominique Cunin, Oussama Mubarak, Jonathan Tanant, and Sylvie Tissot
  • Mobile network data supply: Orange Fluxvision
  • Mobile network data processing: Cezary Ziemlicki and Zbigniew Smoreda (Orange)
  • SMS Server Development: Orange Applications for Business
  • Graphic Design: Alexandre Dechosal (EnsadLab)
  • In situ installation (artistic and engineering collaboration): Alexandre Saunier (EnsadLab)
  • Lighting and construction of the installation structure: Sky Light
  • Wireless network deployment coordination: Christophe Such (Orange)
  • Communication: Nadine Castellani, Karine Duckit, Claudia Mangel (Orange), Nathalie Battais-Foucher (EnsAD)
  • Mediation: Nadjah Djadli (Orange)
  • Project previsualization: Christophe Pornay
  • Assistant: Élodie Tincq
  • Message moderators: Élodie Tincq, Marion Flament, Charlotte Gautier
  • Production: Orange
  • Executive Production: EnsadLab

Research and development for this work was carried out in connection with the research project CoSiMa (“Collaborative Situated Media”), with the support of the French National Research Agency (ANR), and contributes to the development of Mobilizing.js, a programming environment for mobile screens developed by EnsadLab for artists and designers.

CoSC: WWRY:R

We Will Rock You: Reloaded

This mobile web application has been developed in the context of Collective Sound Checks with the Studio 13/16 at the Centre Pompidou.

This application allows a group of players to perform Queen’s song “We Will Rock You” with a set of simple instruments and to create their own versions of the song. The players can choose between drums, the solo voice, choirs, Freddie Mercury’s voice fill-ins (“sing it”), a guitar power chord, and the final guitar riff.

While most of the instruments trigger segments of the original recording when the player strikes in the air with the device, the power chord and the guitar riff resynthesize guitar sounds through granular synthesis.
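
As an illustration of the technique, here is a minimal granular synthesis sketch with the standard Web Audio API; the grain size, hop, and stretch factor are assumptions, not the application’s actual parameters.

```js
// Read short overlapping grains from a decoded AudioBuffer (the guitar
// sample) and envelope each one to avoid clicks, so the sound can be
// stretched and replayed on a gesture.
const ctx = new (window.AudioContext || window.webkitAudioContext)();

function playGrain(buffer, offset, when, duration = 0.08) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  const env = ctx.createGain();
  env.gain.setValueAtTime(0, when);
  env.gain.linearRampToValueAtTime(1, when + duration / 2);
  env.gain.linearRampToValueAtTime(0, when + duration);
  src.connect(env);
  env.connect(ctx.destination);
  src.start(when, offset, duration);
}

// Stretch the first second of the sample to `stretch` times its duration by
// stepping through it slowly while emitting dense overlapping grains.
function playStretched(buffer, stretch = 4) {
  const hop = 0.02; // seconds between grain onsets
  for (let t = 0; t < stretch; t += hop) {
    playGrain(buffer, t / stretch, ctx.currentTime + t);
  }
}
```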

The application has been published here (it requires a mobile device running iOS 6 or later, or Android 4.2 or later with Chrome 35 or later).