Terminal is an interactive installation created in collaboration with Chloé and the Scale collective for the Paris Musique Club. The installation will be shown from October 24, 2015 to January 31, 2016 at the Gaîté Lyrique.
The project transposes the musical elements and mobile interactions of the Chloé ⨉ Ircam concert to an exhibition setting.
The installation features a looped 15-minute four-channel music track staged in a 7-meter corridor, with 21 smartphones aligned along the wall and luminous lines running along the floor.
As in the concert, visitors can connect to the installation with their mobile devices to participate. At given passages of the music track, participants are invited to play sounds with touch and motion interfaces that appear on their mobile devices. The graphical animations and sound of each device are echoed by one of the smartphones on the wall.
Every now and then, waves of sound textures appear on the participants’ mobile devices. In addition, visitors can use a wall-mounted tablet to distribute sound textures over the smartphones on the wall. The light on the floor reacts to the music as well as to the visitors’ interactions with the tablet.
This video summarizes the concert from a rather technical point of view.
Like at the Fête de la musique, the audience participates in this concert by connecting their smartphones to the local Wi-Fi network Chloé × Ircam and opening the web page chloe.ircam.fr in their browser. Once connected, participants are asked to indicate their approximate position on a map of the concert space. During the concert, Chloé can move sounds over the audience’s smartphones – using four tablets integrated into her setup – and make dedicated sound interfaces appear on their touchscreens. The concert starts and ends with everybody playing with Chloé’s whispering voice.
The Orbe collective presented three experimental scenarios of augmented soundwalks. Participants equipped with a smartphone are invited to experience an augmented audio reality that reacts to their position, trajectory and movement (using GPS, Bluetooth LE beacons and motion sensors). Each scenario proposes a different narrative and leads participants along different possible trajectories through the same district of Chalon-sur-Saône. The walks take between 30 minutes and 2 hours depending on the participants’ preferences and their engagement with the proposed activities.
The trajectories of all participants were recorded and visualized on a screen at the arrival point, where the team invited participants to a debriefing of their experience.
The French Ministère de la culture et de la communication asked IRCAM to imagine a participative concert for the Fête de la musique ’15, whose theme was « Vivre ensemble la musique » (“To live music together”). We partnered with Chloé in order to design the interactive live experience Chloé × Ircam. During that experience, Chloé alternates between moments when she plays alone — partially distributing sound on the audience’s smartphones —, and moments when she leaves room for the audience to play with her using their smartphones, thus enabling a musical dialog between her and the audience.
The concert took place on June 21st at the Jardin du Palais-Royal in Paris. At the beginning of the concert, participants are invited to join a Wi-Fi network and connect to a URL (chloe.ircam.fr). After they indicate their positions in the venue through a simple interface, the experience can begin. In addition to her usual live electronics setup, Chloé has four tablets on which each participant shows up as a circle at the indicated position. By touching these circles, she can play different sound textures on the participants’ smartphones. When she moves her fingers over the touchscreens, the sound textures move across the space of the audience. On an additional tablet, Chloé can enable four different interfaces (i.e. simple instruments) on the participants’ devices, which they can play by touching the screen and shaking the device.
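The mapping from a touch on Chloé’s tablet to the audience’s devices can be sketched roughly as a radius query over the positions the participants declared. This is a minimal illustration under assumed names and coordinates, not the actual Chloé × Ircam implementation:

```javascript
// Each participant registered an approximate (x, y) position in the venue
// (the ids, positions and radius below are illustrative assumptions).
const participants = [
  { id: 'a', x: 0.2, y: 0.3 },
  { id: 'b', x: 0.8, y: 0.7 },
  { id: 'c', x: 0.25, y: 0.35 },
];

// Return the ids of the devices within `radius` of a touch point, i.e. the
// smartphones a sound texture would be sent to (e.g. over WebSockets).
function devicesNear(touch, radius, devices = participants) {
  return devices
    .filter(p => Math.hypot(p.x - touch.x, p.y - touch.y) <= radius)
    .map(p => p.id);
}
```

Dragging a finger then amounts to repeating this query along the touch trajectory, so the texture appears to travel across the audience.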
The preparation of this project took three months, during which we ran two live tests: one at IRCAM at the beginning of May with around 30 colleagues and friends, and one at the Centre Pompidou on June 9 with over 150 participants.
During IRCAM’s open house on June 6th, CoSiMa presented two different projects, Collective Loops and Woodland.
Collective Loops is a collaborative version of an 8-step loop sequencer. When visitors access the webpage of the installation with their smartphone, they are automatically assigned to an available step in the sequence loop, and their smartphone plays a sound when it is their turn. Participants control the pitch of the sound through the inclination of their smartphones, and are invited to collaboratively create a melody of 8 pitches that circulates at a steady tempo over their devices.
A circular visualization of the sequencer is projected on the floor. The projection consists of a circle divided into 8 sections that light up in a counterclockwise circular movement synchronized with the sounds emitted by the smartphones. Each section of the projection is further divided into 12 radial segments that display the pitch of the corresponding sequence step (i.e. the pitch controlled through the inclination of the participant’s smartphone).
The first 8 participants who connect to the sequencer play a celesta sound, the following 8 play a drum kit, and the last 8 play a bass sound. Altogether, 24 players can create complex rhythmic and melodic patterns.
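The assignment logic described above can be sketched as follows. The function names and the exact inclination-to-segment mapping are assumptions for illustration, not the actual Collective Loops source:

```javascript
const STEPS = 8;                                  // steps per loop
const INSTRUMENTS = ['celesta', 'drums', 'bass']; // in connection order

// Assign the n-th connecting visitor (0-based) a step and an instrument:
// visitors 0-7 get celesta, 8-15 drums, 16-23 bass; later visitors wait.
function assign(connectionIndex) {
  if (connectionIndex >= STEPS * INSTRUMENTS.length) return null; // full
  return {
    step: connectionIndex % STEPS,
    instrument: INSTRUMENTS[Math.floor(connectionIndex / STEPS)],
  };
}

// Map the device inclination (assumed here to range over -90°..90°) onto
// the 12 radial pitch segments shown in the floor projection.
function pitchSegment(inclinationDeg) {
  const clamped = Math.max(-90, Math.min(90, inclinationDeg));
  return Math.min(11, Math.floor(((clamped + 90) / 180) * 12));
}
```

On each beat, the server would then tell the device assigned to the current step to play its sound at the pitch currently selected by its inclination.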
Woodland is a very early-stage prototype that aims to explain how audio effects such as reverb arise in natural environments. For this, we create a setting where each participant is a tree in a forest. At some point, a designated player “throws a sound” into the forest by swinging their smartphone upwards. After a few seconds of calculation, the sound falls on one tree; then we hear the first wave of resonances as the sound reaches the other trees; and so on recursively, until the sound ultimately vanishes.
To help people understand what is going on, we can control several parameters of the simulation, such as the speed of sound in the air, the absorption of the air, and the type of sound (with a hard or soft attack). That way, if we set the parameters to match the natural setting, we hear the same reverb as we would hear in a forest. But if, for example, we slow down the speed of sound, we can hear a very slow version of how this natural reverb builds up, hearing each echo one by one.
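One propagation step of such a simulation can be sketched as a delay-and-attenuate computation. The attenuation formula and parameter names below are illustrative assumptions, not the actual Woodland prototype:

```javascript
// Compute when and how loud each other "tree" hears a sound emitted at
// `source` at time t0 (in seconds). Distances are in meters.
function propagate(source, trees, t0, { speedOfSound = 340, absorption = 0.1 } = {}) {
  return trees
    .filter(t => t !== source)
    .map(t => {
      const d = Math.hypot(t.x - source.x, t.y - source.y);
      return {
        tree: t,
        time: t0 + d / speedOfSound,               // arrival time
        gain: Math.exp(-absorption * d) / (1 + d), // air + distance losses
      };
    });
}
```

Applying `propagate` recursively from each arrival yields the cascade of echoes; lowering `speedOfSound` stretches the whole pattern in time without changing its shape, which is exactly the slowed-down reverb effect described above.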
This very first prototype was very promising, and further developments might include a floor visualization of the different sounds that bounce from tree to tree to create the reverb effect.
So there we went! Together with the WAVE project (the IRCAM Web Audio library on which most of our work is based; see also wavesjs on GitHub), we presented our ongoing research and projects. In particular, we took advantage of this event to test a new collaborative experience that would premiere at the Fête de la musique (see Chloé × Ircam). With more than 130 connections, it confirmed that we were on the right track for the show! We also got a lot of positive feedback from the JS community on the technologies we are developing.
The slides of the presentation are available here.
Rone was invited to unveil the universe he created for his latest album Créatures at the Palais de Tokyo (Le Point Perché). Along with illustrations, photographs, binaural experiences and video games, CoSiMa presented Créatures & Cie – Collective Sound Check, a spontaneous collective performance that enables the audience to play with Rone’s creatures. By simply opening a web page, visitors of the exhibition can discover a novel way of exploring Rone’s musical universe and fill the space with his sound creatures.
Orbe designed and implemented Sonosphere for the Confluence museum, which opened at the end of 2014 in Lyon. Sonosphere is an immersive sound experience deployed both indoors and outdoors. With a mobile device, the visitor navigates through the voices that inhabit the areas of the museum. At the Confluence museum, Sonosphere allows the visitor to explore the sound memory of the museum, to discover spaces behind the scenes, and to access other hidden dimensions.
As part of the CoSiMa project, Sonosphere integrates 3D-sound technologies provided by IRCAM. Visitors are solicited by spatialized voices that seem to come from the art pieces, columns or walls; HRTF filtering enables the spatial localization of these sound sources. Visitors are continuously geolocated through a network of Bluetooth LE beacons covering the 20,000 m² of the museum, and the smartphones’ inertial sensors give an idea of each visitor’s orientation within the space. A display-free mode allows visitors to navigate with their ears alone. This intuitive interface is close to natural listening and thus requires no learning.
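A common way to turn beacon scans into a coarse position is to pick the zone of the strongest (and therefore probably closest) beacon, smoothing the signal to avoid jumps. This is a generic sketch of that approach, not Sonosphere’s actual implementation:

```javascript
// scans: [{ zone: 'hall', rssi: -62 }, ...] where higher RSSI ≈ closer.
// Returns the zone of the strongest beacon in the scan.
function nearestZone(scans) {
  return scans.reduce((best, s) => (s.rssi > best.rssi ? s : best)).zone;
}

// Exponential smoothing of raw RSSI readings: BLE signal strength is noisy,
// so blending each new reading with the previous estimate stabilizes the
// zone decision (alpha is an assumed tuning parameter).
function smoothRssi(previous, reading, alpha = 0.3) {
  return previous === null ? reading : alpha * reading + (1 - alpha) * previous;
}
```

The binaural rendering then only needs this coarse zone plus the device orientation to place the voices around the visitor.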
With this geolocation and binaural sound processing, Sonosphere adds a sound layer to the museum spaces. The sounds no longer belong to the headphones, but are truly part of the environment.
On February 6th, 7th and 8th, the Ville de Paris and the À Suivre association organized the 4th edition of Paris Face Cachée, which proposes original and off-the-wall ways to discover the city. The CoSiMa team led the Expérimentations sonores workshops held at IRCAM on February 7th.
Three groups of 24 participants tested the latest web applications we developed. Each group first tried a few soundscapes (Birds and Monks) to get familiar with the sound-motion interactions on their smartphones, and to learn how to listen to each other while individually contributing to a collective sonic environment.
In the second part of the workshop, we invited the participants to take part in the Drops collective smartphone performance. While the soundscapes also work as standalone web applications (i.e. they do not technically require other people to play with), Drops is inherently designed for a group of players, with the technology directly supporting the social interaction. Each player can play a limited number of sound drops whose pitch varies depending on the touch position. The sound drops are automatically echoed by the smartphones of the other players before coming back to the original player, creating a fading loop of long echoes until they vanish. The collective performance is accompanied by a synchronized soundscape on ambient loudspeakers.
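The echo pattern of a drop can be sketched as a schedule of fading repetitions passed around the ring of players. The period, decay and audibility threshold below are assumed values for illustration, not the actual Drops parameters:

```javascript
// Schedule the echoes of one drop: at each period, the next player in the
// ring plays it, each repeat quieter than the last, until it fades out.
function echoSchedule(players, sourceIndex, { period = 2, decay = 0.7, floor = 0.05 } = {}) {
  const events = [];
  let gain = decay;
  for (let hop = 1; gain >= floor; hop++, gain *= decay) {
    events.push({
      player: players[(sourceIndex + hop) % players.length],
      time: hop * period, // seconds after the original drop
      gain,
    });
  }
  return events;
}
```

With three players, a drop played by the first one travels to the second, then the third, then back to the first, getting 30% quieter at each hop until it drops below the audibility floor.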
The performance is strongly inspired by the mobile application Bloom by Brian Eno and Peter Chilvers.
At the first international Web Audio Conference (WAC’15), CoSiMa presented three pieces of work.
Collective Sound Checks (poster)
Just like at TEI’15 the week before, we presented our work on the Collective Sound Checks through the poster you can see below. Quite a lot of people gathered at our booth during the demo session to play with the web apps and create spontaneous collective performances.
Soundworks (paper & poster)
Finally, we presented the first public performance of Drops, a collective smartphone performance built with Soundworks. Drops is strongly inspired by the mobile application Bloom by Brian Eno and Peter Chilvers, and transposes it into a collaborative experience: each participant can only play a single sound (i.e. a single pitch), whose timbre varies depending on the touch position. Together, the players construct sound sequences (i.e. melodies) by combining their sounds. The sounds are repeated in a fading loop every few seconds until they vanish, and players can clear the loop by shaking their smartphones. The sounds triggered by one player are automatically echoed by the smartphones of the other players. The collective performance on the smartphones is accompanied by a synchronized soundscape on ambient loudspeakers. This first Drops performance gathered around 60 players at the WAC.