CoSiMa Sonar Innovation Challenge @ Sonar+D

CoSiMa participated in Sonar+D, the international conference on creativity and technology of the Sonar music festival in Barcelona, with a Sonar Innovation Challenge. A team of 5 musicians, designers, and developers – formed over a month before the event – worked for two and a half days on a music application that lets an audience interact collaboratively through their smartphones. The web-based application was developed with the Soundworks framework.

The resulting application is Weather, a performance for a DJ and an audience participating through their smartphones. As usual in performances based on the Soundworks framework, participants connect their smartphones to the local CoSiMa Wi-Fi network and visit the web page of the Weather application. Once connected, the participants can use four gestures to switch between different weather states, each associated with a different sound texture and visualization generated on their mobile devices: (1.) touching the screen generates the bird chirps of a sunny afternoon, (2.) swaying and tilting the device generates wind, (3.) shaking it softly generates a rain sound and rain drops on the screen, and (4.) shaking it harder generates thunder sounds and lightning on screen.
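The four gestures above could be distinguished from device-motion data roughly as follows. This is a hypothetical sketch: the thresholds, state names, and function name are assumptions for illustration, not the actual Weather application code.

```javascript
// Classify a device-motion sample into one of the four weather states.
// `accMagnitude` is the acceleration magnitude (gravity removed, m/s^2),
// `tilting` is true while the device sways, `touching` while the screen is held.
// All thresholds are made-up values for illustration.
function classifyWeatherState({ accMagnitude, tilting, touching }) {
  if (accMagnitude > 15) return 'thunder'; // hard shake: thunder and lightning
  if (accMagnitude > 5)  return 'rain';    // soft shake: rain sound and drops
  if (tilting)           return 'wind';    // swaying / tilting: wind
  if (touching)          return 'sun';     // touch: bird chirps
  return 'calm';                           // no interaction
}
```

In a browser, `accMagnitude` and `tilting` would typically be derived from `devicemotion` events, and `touching` from touch events.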

The sound generated by the participants creates a sound texture distributed over the audience. The current weather states of all clients are collected on the server to generate a weather profile that controls visuals on a public display and environmental sounds on the PA. In addition, the weather profile is interpreted by a DJ playing live electronic music in dialog with the audience’s sound textures.
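The server-side aggregation could be as simple as a histogram over the reported client states. The following is a sketch under that assumption; the names and the profile shape are hypothetical, not the actual Weather server code.

```javascript
// Given the current state reported by each connected client, build a
// profile with the share of each weather state and the dominant one.
function buildWeatherProfile(clientStates) {
  const counts = {};
  for (const state of clientStates)
    counts[state] = (counts[state] || 0) + 1;

  const total = clientStates.length;
  const shares = {};
  for (const [state, n] of Object.entries(counts))
    shares[state] = n / total;

  // The dominant state could drive the public display and the PA sounds.
  const dominant = Object.keys(counts)
    .reduce((a, b) => (counts[a] >= counts[b] ? a : b));

  return { shares, dominant };
}
```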

The five CoSiMa SIC challengers who developed the Weather performance are Matthew Bethancourt, Andrés Ferraro, JP Carrascal, Chaithanya Jade, and Yuli Levtov.

Hack the Audience @ MTF Berlin

CoSiMa participated in the Music Tech Fest in Berlin with a workshop, « Hack the Audience », featuring the Soundworks framework. In two days, May 26 and 27, we developed two performances in which the audience participates with their smartphones: MTF Orgy and GrainField. In both performances, the audience connects their smartphones to the CoSiMa Wi-Fi network and visits a given webpage to participate.

In MTF Orgy, each participant controls the intensity and detuning of two harmonics of a distributed additive synthesizer – the Orgy organ – by tilting their smartphone. The lower harmonics are generated on the PA and the higher ones on the participants’ mobile devices. A musician on stage plays chords on a MIDI keyboard that determine the fundamental frequencies. Other musicians can join the performance; at the MTF performance, we were accompanied by Steve Lawson on bass.
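One way the tilt angle could map to the intensity and detuning of a harmonic is sketched below. The ranges, the ±50-cent detuning, and the function name are assumptions for illustration, not the MTF Orgy code.

```javascript
// Map a tilt angle (-90..90 degrees) to an intensity (0..1) and a slightly
// detuned frequency for one harmonic of the current fundamental (Hz).
function harmonicParams(fundamental, harmonicIndex, tiltDeg) {
  const t = Math.min(Math.max((tiltDeg + 90) / 180, 0), 1); // normalize to 0..1
  const intensity = t;                  // more tilt, louder harmonic
  const detuneCents = (t - 0.5) * 100;  // up to +/- 50 cents of detuning
  const frequency =
    fundamental * harmonicIndex * Math.pow(2, detuneCents / 1200);
  return { intensity, frequency };
}
```

Each phone would apply such a mapping to its two assigned (higher) harmonics, while the PA renders the lower ones.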

In GrainField, the smartphones enable the participants to play with the granular synthesis of 2 seconds of sound recorded from a percussionist sitting in the middle of the audience (see images below). The system regularly records a new 2-second segment of sound that is sent to the smartphones of the audience, so that the sound a participant plays with changes every 8 seconds. The sound generated by the participants’ smartphones can be perceived as a distributed granular echo of the percussionist’s performance, without any other amplification.
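The rotation of the recorded buffer could be modeled as below: every 8 seconds a new 2-second segment becomes the source from which each phone draws its grains. This is a hypothetical sketch, not the GrainField code; the names and default values are assumptions.

```javascript
// Given the current time, compute which recorded segment a phone plays
// from (a new 2 s segment arrives every 8 s) and a random grain position
// inside that segment, both expressed in seconds.
function grainSource(nowSec, segmentPeriodSec = 8, segmentDurSec = 2) {
  const segmentIndex = Math.floor(nowSec / segmentPeriodSec);
  const grainOffset = Math.random() * segmentDurSec; // position inside the buffer
  return { segmentIndex, grainOffset };
}
```

In a Web Audio implementation, `grainOffset` would typically become the `offset` of a short `AudioBufferSourceNode` playback with a small amplitude envelope.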

In addition, we presented the CoSiMa project in a brief talk and performance with the audience playing birds and drops on their smartphones.


Open House @ IRCAM

During IRCAM’s open house on June 6th, CoSiMa presented two different projects, Collective Loops and Woodland.

Collective Loops

Collective Loops is a collaborative version of an 8-step loop sequencer. When visitors access the webpage of the installation with their smartphone, they are automatically assigned to an available step in the sequence loop, and their smartphone plays a sound when it is their turn. The participants control the pitch of the sound through the inclination of their smartphones. The participants are invited to collaboratively create a melody of 8 pitches that circulates in a steady tempo over their smartphones.
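The two mechanisms described above – assigning a newcomer to a free step of the loop and mapping the phone's inclination to a pitch – could look like the following sketch. The names, the 12-semitone range, and the angle range are assumptions, not the Collective Loops code.

```javascript
// Return the first free step index (0..7) for a newly connected client,
// or -1 if all 8 steps of the loop are taken.
function assignStep(occupied) {
  for (let step = 0; step < 8; step++)
    if (!occupied.includes(step)) return step;
  return -1;
}

// Map an inclination angle (-90..90 degrees) to a semitone index 0..11,
// matching the 12 radial segments of the floor projection.
function pitchFromTilt(tiltDeg) {
  const t = Math.min(Math.max((tiltDeg + 90) / 180, 0), 1);
  return Math.min(11, Math.floor(t * 12));
}
```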

A circular visualization of the sequencer is projected on the floor. The projection consists of a circle divided into 8 sections that light up in a counterclockwise circular movement synchronized with the sounds emitted by the smartphones. Each section of the projection is further divided into 12 radial segments that display the pitch of the corresponding sequence step (i.e. the pitch controlled through the inclination of the participant’s smartphone).

The first 8 participants who connect to the sequencer play a celesta sound, the next 8 play a drum kit, and the last 8 a bass sound. Altogether, 24 players can create complex rhythmic and melodic patterns.
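The instrument assignment by connection order reduces to a simple lookup; here is a minimal sketch of it, with a hypothetical function name.

```javascript
// The first 8 connected clients get the celesta, the next 8 the drum kit,
// and the last 8 the bass; beyond 24 clients, no instrument is available.
function instrumentForClient(connectionIndex) {
  const instruments = ['celesta', 'drums', 'bass'];
  const group = Math.floor(connectionIndex / 8);
  return group < instruments.length ? instruments[group] : null;
}
```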


Woodland

Woodland is a very early stage prototype that aims at explaining how natural audio effects (such as reverb) emerge in the natural environment. For this, we create a setting where each participant is a tree in a forest. At some point, a designated player “throws a sound” into the forest by swinging their smartphone upwards. After a few seconds of calculation, the sound falls on one tree; then we hear the first wave of resonances when the sound reaches the other trees; and so on recursively, until the sound ultimately vanishes.

In order to make people understand what is going on, we can control several parameters of the simulation such as the speed of sound in the air, the absorbance of the air, the type of sound (with a hard or soft attack), etc. That way, if we set the parameters to be similar to the natural setting, we hear the same reverb as we would hear in a forest. But if for example we slow down the speed of sound, we can hear a very slow version of how this natural reverb is built, hearing each echo one by one.
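The core of such a simulation is computing, for each hop from tree to tree, when the echo arrives and how much energy remains. A minimal sketch, assuming a simple exponential air-damping model and made-up names (not the Woodland prototype code):

```javascript
// Arrival delay and remaining gain of a sound travelling `distance` meters,
// with an adjustable `speedOfSound` (m/s) and `absorbance` (per-meter damping).
function propagate(distance, { speedOfSound = 343, absorbance = 0.01 } = {}) {
  const delay = distance / speedOfSound;         // seconds until the echo arrives
  const gain = Math.exp(-absorbance * distance); // exponential air damping
  return { delay, gain };
}
```

Lowering `speedOfSound` stretches every delay by the same factor, which is exactly how the slowed-down, echo-by-echo version of the reverb described above can be produced.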

This very first prototype was very promising, and further developments might include a floor visualization of the different sounds that bounce from tree to tree to create the reverb effect.

Expérimentations sonores @ Paris Face Cachée

On February 6th, 7th, and 8th, the City of Paris and the À Suivre association organized the 4th edition of Paris Face Cachée, which proposes original and off-the-wall ways to discover the city. The CoSiMa team led the Expérimentations sonores workshops held at IRCAM on February 7th.

Three groups of 24 participants tried out the latest web applications we had developed. The audience first played with a few soundscapes (Birds and Monks) to get familiar with the sound-motion interactions on their smartphones, and to learn how to listen to each other while individually contributing to a collective sonic environment.

In the second part of the workshop, we invited the participants to take part in the Drops collective smartphone performance. While the soundscapes also work as standalone web applications (i.e. they do not technically require other players), Drops is inherently designed for a group of players, where the technology directly supports the social interaction. Each player can trigger a limited number of sound drops whose pitch varies with the touch position. The sound drops are automatically echoed by the smartphones of the other players before coming back to the player, creating a fading loop of long echoes until they vanish. The collective performance is accompanied by a synchronized soundscape on ambient loudspeakers.
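The fading echo loop and the touch-to-pitch mapping described above could be sketched as follows. The decay factor, threshold, pitch range, and function names are assumptions for illustration, not the actual Drops code.

```javascript
// Gains of the successive echoes of a drop: each round trip through the
// other players' phones comes back attenuated, until the level falls
// below a threshold and the drop vanishes.
function echoGains(initialGain = 1, decay = 0.6, threshold = 0.05) {
  const gains = [];
  let g = initialGain * decay;
  while (g >= threshold) {
    gains.push(g);
    g *= decay;
  }
  return gains;
}

// Map the vertical touch position (0 = top, 1 = bottom) to a pitch in
// semitones above a base note.
function pitchFromTouch(y, range = 12) {
  return Math.round((1 - y) * range);
}
```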

The performance is strongly inspired by the mobile application Bloom by Brian Eno and Peter Chilvers.

Below are a few pictures from the event.

Collective Sound Checks

An important aspect of CoSiMa is experimenting with the user scenarios, technologies, and content developed in the framework of the project together with a community of users.

« Collective Sound Checks » are regular events that allow us to try out new developments with a larger number of users and to validate technological, aesthetic, and social hypotheses of our work. Each event proposes different experiences inviting users to play music together, play a game, or discover augmented reality spaces.

The first CoSiMa Collective Sound Checks have been conducted in collaboration with the Studio 13/16.*

A first series of workshops at the Studio 13/16 took place in spring 2014, on May 14, May 28, and June 14 (for the Open House at IRCAM). A second series followed in fall/winter 2014, on October 1, October 15, November 5, and December 17.

We developed a series of web applications for these sessions. A selection of these applications for smartphones is online (please visit from a smartphone).


* Studio 13/16 on Facebook.