Collective Sound Check @ Paris Face Cachée

On February 6th, 7th and 8th, La Ville de Paris and the À Suivre association organized the 4th edition of Paris Face Cachée, which aims to propose original and off-the-wall ways to discover the city. The CoSiMa team led the Expérimentations sonores workshops held at IRCAM on February 7th.

Three groups of 24 participants were able to test the latest web applications we had developed. The audience first tried a few soundscapes (Birds and Monks) to get familiar with the sound-motion interactions on their smartphones, and to learn how to listen to each other while individually contributing to a collective sonic environment.

In the second part of the workshop, we invited the participants to take part in the Drops collective smartphone performance. While the soundscapes also work as standalone web applications (i.e. they do not technically require other people to play with), Drops is inherently designed for a group of players, where the technology directly supports the social interaction. Each player can play a limited number of sound drops whose pitch varies depending on the touch position. The sound drops are automatically echoed by the smartphones of other players before coming back to the player, creating a fading loop of long echoes until they vanish. The collective performance is accompanied by a synchronized soundscape on ambient loudspeakers.
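As an illustration, the fading echo loop can be sketched as a simple scheduling function. The constants and names below are hypothetical, not taken from the actual application: each echo is delayed by a fixed interval and attenuated until its gain falls below an audible threshold.

```javascript
// Hypothetical sketch of the Drops echo loop: a drop played at startTime
// is repeated every `delay` seconds, each repetition `decay` times quieter,
// until the gain drops below `threshold` and the echo vanishes.
function scheduleEchoes(startTime, delay = 2.0, decay = 0.7, threshold = 0.05) {
  const echoes = [];
  let time = startTime + delay;
  let gain = decay;
  while (gain >= threshold) {
    echoes.push({ time, gain }); // echo played on another player's phone
    time += delay;
    gain *= decay;
  }
  return echoes;
}

// Example: a drop played at t = 0 s yields echoes at 2 s, 4 s, ...
const echoes = scheduleEchoes(0);
console.log(echoes.length); // number of audible echoes before fading out
```

In the real application each echo would be rendered on a different player's smartphone, which is what turns this simple decay loop into a spatial, collective effect.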

The performance is strongly inspired by the mobile application Bloom by Brian Eno and Peter Chilvers.

Below are a few pictures from the event.

CoSiMa @ WAC’15

At the first international Web Audio Conference (WAC’15), CoSiMa presented three pieces of work.


Collective Sound Checks (poster)

Just like at TEI’15 the week before, we presented our work on the Collective Sound Checks through the poster you can see below. Quite a lot of people gathered at our booth during the demo session to play with the web apps and create spontaneous collective performances.

Collective Sound Checks WAC'15 Poster


Soundworks (paper & poster)

We presented a draft of the Soundworks library (which has evolved quite a lot since then): Soundworks is a JavaScript framework that enables artists and developers to create collaborative music performances in which a group of participants distributed in space use their smartphones to generate sound and light through touch and motion.

In particular, we used Soundworks to build the Drops collective performance (see below). You can read the WAC paper here, or have a look at the GitHub repository for more up-to-date information. Finally, you’ll find the WAC poster below.

Soundworks WAC'15 poster


Drops (performance)

Finally, we presented the first public performance of Drops, a collective smartphone piece built with Soundworks. Drops is strongly inspired by the mobile application Bloom by Brian Eno and Peter Chilvers, and transposes it into a collaborative experience: each participant can only play a single sound (i.e. a single pitch), whose timbre can vary depending on the touch position. Together, the players can construct sound sequences (i.e. melodies) by combining their sounds. The sounds are repeated in a fading loop every few seconds until they vanish. Players can clear the loop by shaking their smartphones. The sounds triggered by one player are automatically echoed by the smartphones of other players. The collective performance on the smartphones is accompanied by a synchronized soundscape on ambient loudspeakers. This first Drops performance gathered around 60 players at the WAC.

CoSiMa @ TEI’15

CoSiMa submitted a Work-in-Progress paper to the conference on Tangible, Embedded and Embodied Interaction held at Stanford University in January 2015 (TEI’15). The paper, Collective Sound Checks — Exploring Intertwined Sonic and Social Affordances of Mobile Web Applications, describes the mobile web scenarios we tested at the Centre Pompidou with the Studio 13/16. It explores how these new forms of musical expression strongly shift the focus of design from human-computer interactions towards the emergence of computer-mediated interactions between players, based on the sonic and social affordances of ubiquitous technologies.

We presented our work during the poster session, and we got a lot of attention from the conference attendees: people had a lot of fun playing with the CoSC web applications and We Will Rock You: Reloaded, were impressed by the work done, and were looking forward to the upcoming developments.

Collective Sound Checks TEI-15 Poster

The paper is available in the ACM Digital Library (PDF and additional information).

CoSC: WWRY:R

We Will Rock You: Reloaded

This mobile web application has been developed in the context of Collective Sound Checks with the Studio 13/16 at the Centre Pompidou.

This application allows a group of players to perform Queen’s song “We Will Rock You” with a set of simple instruments and to create their own versions of the song. The players can choose between drums, voice solo, choirs, Freddie Mercury’s voice fill-ins (‘sing it’), a guitar power chord, and the final guitar riff.

While most of the instruments trigger segments of the original recordings when the player strikes in the air with the device, the power chord and the guitar riff resynthesize guitar sounds through granular synthesis.
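Granular synthesis chops a recording into many short, overlapping grains that are replayed in quick succession. The scheduling behind this can be sketched as follows; the parameter values and names are illustrative assumptions, not those of the actual instruments:

```javascript
// Illustrative grain scheduler: grains overlap because their duration
// is longer than the period between successive onsets.
function makeGrains(numGrains, period = 0.02, duration = 0.08, position = 0.5) {
  const grains = [];
  for (let i = 0; i < numGrains; i++) {
    grains.push({
      onset: i * period, // time (s) at which the grain starts playing
      duration,          // grain length (s); duration > period => overlap
      position,          // normalized read position in the source buffer
    });
  }
  return grains;
}
```

In the browser, each grain would then be rendered with the Web Audio API (e.g. an AudioBufferSourceNode with a short amplitude envelope) reading around the given position of the recorded guitar sound.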

The application has been published here (it requires a mobile device running iOS 6 or later, or Android 4.2 or later with Chrome 35 or later).

CoSC: Shaker

This mobile web application has been developed for the Collective Sound Checks that took place in the Studio 13/16 at the Centre Pompidou.

In this scenario, players can record arbitrary percussive sounds (with their voice or using props) using a microphone (and a foot pedal). Once the recording is finished, the players can load the recorded sound onto their mobile devices and perform it by shaking the devices. All the devices are beat-synchronized to a steady tempo (16th notes at 100 BPM) so that multiple players can easily perform together.

Each sound recording is analysed on the server and segmented into percussive elements that are classified by their intensity. On the mobile device, the concatenative synthesizer generates a sound on each beat. Each sound is selected according to the device motion intensity: the synth plays soft sound segments when the player shakes the device softly, and louder segments when the player shakes the device more vigorously.

The players are encouraged to record phrases with percussive elements of a wide dynamic range. They can experiment with different sound recordings and create an ensemble by recording complementary materials.

CoSC: Matrix

This mobile web application has been developed in the context of Collective Sound Checks with the Studio 13/16 at the Centre Pompidou.

The players are sitting on a grid (for instance, 3 rows by 4 columns, for a total of 12 people). Their mobile devices form a matrix of screens and loudspeakers that are used to spatialize sound and light.

For now, the Matrix is performed by one player at a time: a representation of the matrix appears on the screen of the player who becomes the performer. By moving a finger over the matrix on the screen, the performer controls which smartphone(s) the light and sound come from in the real world. (The sound changes with the speed of the finger trajectory.) In this way, the performer remotely plays the other players’ instruments. After a fixed time, another player takes over the control of the sound and light and becomes the new performer.

The video below gives an idea of the technical setup. While the players are usually seated 1 or 2 meters apart, the smartphones are spaced only a few centimeters apart for the purpose of this video.

The sound is generated locally on the mobile devices, which are connected to a WebSocket server (using node.js and socket.io). The server receives the touch position from the performer’s device and controls the sound generators of all the devices in the matrix.
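One piece of this setup is mapping the performer's touch position to a device in the grid. A minimal sketch, assuming the position is normalized to [0, 1] in x and y (the function name and normalization are illustrative, not the actual application code):

```javascript
// Map a normalized touch position to a device index in a rows x cols
// matrix of smartphones, in row-major order.
function positionToDevice(x, y, rows, cols) {
  const col = Math.min(cols - 1, Math.floor(x * cols));
  const row = Math.min(rows - 1, Math.floor(y * rows));
  return row * cols + col;
}

// 3 rows x 4 columns, touching the lower-right corner of the screen
console.log(positionToDevice(1, 1, 3, 4)); // → 11 (last device)
```

The server would then emit a control message (e.g. over socket.io) only to the device at the computed index, so that sound and light appear to travel across the room as the finger moves.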

CoSC Web Applications

For the Collective Sound Checks with the Studio 13/16 at the Centre Pompidou, we have developed a series of web applications for mobile devices. The applications are based on web technologies such as HTML5, JavaScript, the Web Audio API, and WebSockets.

The first set of applications we developed consists of a few gadgets that produce sound depending on the device’s motion. The gadgets can be played individually or in a group of players, and allow for exploring different techniques, sound materials, and metaphors. The drone, birds, monks, and the rainstick are described below.

In addition to these gadgets, we have experimented with collaborative scenarios that are described in separate posts:

The gadgets and the We Will Rock You: Reloaded application have been published at http://cosima.ircam.fr/checks (the applications work on mobile devices and require at least iOS 6 or Android 4.2).


Drone

The drone reacts to the device’s rotation, responding with amplitude and frequency modulation of a set of oscillators that generate a bass drone. Strongly shaking the device generates an electric sound synthesized through granular synthesis.
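A sketch of how the rotation could drive the drone parameters, using the beta/gamma angles of the browser's deviceorientation event; the ranges and names are assumptions for illustration, not the actual application code:

```javascript
// Illustrative mapping from device orientation to drone parameters:
// front/back tilt (beta) controls amplitude, sideways tilt (gamma)
// modulates the oscillator frequency within a bass range.
function droneParams(beta, gamma) {
  // beta, gamma: deviceorientation angles in degrees
  const amplitude = Math.min(1, Math.abs(beta) / 90); // 0..1
  const frequency = 50 + (Math.abs(gamma) / 90) * 50; // 50..100 Hz
  return { amplitude, frequency };
}
```

In the browser, these values would be fed on each deviceorientation event to the gain and frequency AudioParams of the running oscillators.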


Birds

Birds is a collection of bird sounds that are played by jiggling the device. Each player can try different bird calls. Two or more players can communicate through tweeting and create a forest-like atmosphere of distributed bird sounds answering each other.


Monks

Monks features a short extract of the song “Early Morning Melody” from Meredith Monk’s Book of Days and the recording of a Tibetan chant. Both extracts are performed through granular synthesis by tilting the device sideways. A group of players can form a choir.


Rainstick

The rainstick is based on sound materials created by Pierre Jodlowski and also used for the audiovisual installation Grainstick produced at Ircam in 2010. The player holds the device horizontally and tilts it up and down like a rainstick to produce sound.

Collective Sound Checks

An important aspect of CoSiMa is experimenting with the user scenarios, technologies, and content developed in the framework of the project together with a community of users.

“Collective Sound Checks” are regular events that allow us to try out new developments with a larger number of users and to validate the technological, aesthetic, and social hypotheses of our work. Each event proposes different experiences, inviting users to play music together, play a game, or discover augmented reality spaces.

The first CoSiMa Collective Sound Checks were conducted in collaboration with the Studio 13/16.*

A first series of workshops at the Studio 13/16 took place in spring 2014, on May 14, May 28, and June 14 (for the Open House at Ircam). A second series followed in fall/winter 2014, on October 1, October 15, November 5, and December 17.

We developed a series of web applications for these sessions. A selection of these applications for smartphones is online at cosima.ircam.fr/checks (please visit from a smartphone).


* Studio 13/16 on Facebook.