This application allows a group of players to perform Queen’s song “We Will Rock You” with a set of simple instruments and to create their own versions of the song. The players can choose between drums, a solo voice, choirs, Freddie Mercury’s voice fill-ins (‘sing it’), a guitar power chord, and the final guitar riff.
While most of the instruments trigger segments of the original recordings when the player strikes in the air with the device, the power chord and the guitar riff resynthesize guitar sounds through granular synthesis.
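The post does not publish the synthesis code, but the principle of granular synthesis can be sketched as a grain scheduler: short overlapping slices of the source buffer are triggered at regular intervals to form a continuous texture. The function and parameter names below are hypothetical.

```javascript
// Minimal sketch of a grain scheduler for granular (re)synthesis.
// Each grain is a short slice of the source buffer; overlapping grains
// (duration > period) blend into a continuous sound.
function scheduleGrains({ position, duration, period, count }) {
  const grains = [];
  for (let i = 0; i < count; i++) {
    grains.push({
      onset: i * period,      // when the grain starts (seconds)
      bufferOffset: position, // read position in the source buffer (seconds)
      duration,               // grain length (seconds)
    });
  }
  return grains;
}

// 100 ms grains every 50 ms: each grain overlaps the next by 50%.
const grains = scheduleGrains({ position: 1.5, duration: 0.1, period: 0.05, count: 4 });
```

In a browser, each grain would typically become an `AudioBufferSourceNode` started at its onset time; the sketch only computes the schedule.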
The application has been published here (it requires a mobile device running iOS 6 or later, or Android 4.2 or later with Chrome 35 or later).
In this scenario, players can record arbitrary percussive sounds (with their voice or using props) using a microphone (and a foot pedal). Once the recording is finished, the players can load the recorded sound onto their mobile devices and perform it by shaking the devices. All the devices are beat-synchronized to a steady tempo (a sixteenth-note grid at 100 BPM) so that multiple players can easily perform together.
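The project uses its own synchronization layer, but the quantization step can be sketched as follows: given a shared clock, each device triggers its next sound on the next sixteenth-note boundary, so all devices land on the same grid. The helper below is a hypothetical illustration.

```javascript
// Sketch of quantizing playback to a shared sixteenth-note grid at 100 BPM.
const BPM = 100;
const SIXTEENTH = 60 / BPM / 4; // 0.15 s between sixteenth notes

// Given the current time on the shared clock (in seconds), return the time
// of the next sixteenth-note beat; scheduling sounds at these times keeps
// every device in sync.
function nextBeatTime(syncTime) {
  return Math.ceil(syncTime / SIXTEENTH) * SIXTEENTH;
}
```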
Each sound recording is analysed on the server and segmented into percussive elements that are classified by their intensity. On the mobile device, a concatenative synthesizer generates a sound on each beat. Each sound is selected according to the intensity of the device’s motion: the synth plays soft segments when the player shakes the device softly, and louder segments when the player shakes it more vigorously.
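The selection step can be sketched as a simple lookup: assuming the server delivers the segments sorted from softest to loudest (as the analysis described above suggests), a normalized motion intensity indexes into that list. The function and data layout are assumptions for illustration.

```javascript
// Sketch of intensity-based segment selection for the concatenative synth.
// `segments` is assumed sorted from softest to loudest;
// `motionIntensity` is assumed normalized to [0, 1].
function selectSegment(segments, motionIntensity) {
  const clamped = Math.min(Math.max(motionIntensity, 0), 1);
  const index = Math.min(
    Math.floor(clamped * segments.length),
    segments.length - 1
  );
  return segments[index]; // soft shake → soft segment, hard shake → loud one
}
```

On each beat, the synthesizer would call something like `selectSegment(segments, intensity)` with the latest accelerometer-derived intensity and play the returned slice.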
The players are encouraged to record phrases with percussive elements of a wide dynamic range. They can experiment with different sound recordings and create an ensemble by recording complementary materials.
The players are seated in a grid (for instance, 3 rows by 4 columns, for a total of 12 people). Their mobile devices form a matrix of screens and loudspeakers that is used to spatialize sound and light.
For now, the Matrix is performed by one player at a time. A representation of the matrix appears on the screen of the player who becomes the performer; by moving a finger across the on-screen matrix, the performer controls which smartphone(s) emit light and sound in the real world. (The sound changes with the speed of the finger’s trajectory.) In this way, the performer remotely plays the other players’ instruments. After a fixed time, another player takes over the control of the sound and light and becomes the new performer.
The video below gives an idea of the technical setup. While the players are usually seated 1 or 2 meters apart, the smartphones are spaced only a few centimeters apart for the purposes of this video.
The sound is generated locally on the mobile devices, which are connected to a WebSocket server (using node.js and socket.io). The server receives the position from the performer’s device and controls the sound generators of all the devices of the matrix.
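The actual server code is not shown in the post, but its core mapping can be sketched: the performer’s normalized finger position is converted to a cell of the device matrix, and the server then tells the device in that cell to emit light and sound (e.g. over its socket.io connection). The function below is a hypothetical illustration of that mapping.

```javascript
// Sketch of the performer-position-to-device mapping on the server.
// x and y are assumed normalized to [0, 1) on the performer's screen;
// rows and cols describe the seating grid (e.g. 3 rows by 4 columns).
function positionToCell(x, y, rows, cols) {
  const row = Math.min(Math.floor(y * rows), rows - 1);
  const col = Math.min(Math.floor(x * cols), cols - 1);
  return { row, col };
}
```

The server would then forward a trigger message to the device registered at `{row, col}`; the finger speed could travel along as a sound parameter.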
The first set of applications we developed is a collection of gadgets that produce sound depending on the device’s motion. The gadgets can be played individually or with a group of players and allow for exploring different techniques, sound materials, and metaphors. The drone, the birds, the monks, and the rainstick are described below.
In addition to these gadgets we have experimented with collaborative scenarios that are described in separate posts:
The gadgets and the We Will Rock You: Reloaded application have been published at http://cosima.ircam.fr/checks (the applications work on mobile devices and require at least iOS 6 or Android 4.2).
The drone reacts to the device’s rotation with amplitude and frequency modulation of a set of oscillators that generate a bass drone. Strongly shaking the device generates an electric sound synthesized through granular synthesis.
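A mapping of the kind described could be sketched as follows: the orientation angles reported by the browser’s `deviceorientation` event drive the amplitude and frequency of the oscillator bank. The ranges and curve below are assumptions, not the published gadget’s actual parameters.

```javascript
// Sketch of a rotation-to-modulation mapping for a bass drone.
// beta (front/back tilt) is reported in [-180, 180] degrees and
// gamma (left/right tilt) in [-90, 90], per the deviceorientation event.
function droneParams(beta, gamma) {
  const amp = Math.min(Math.abs(beta) / 90, 1);     // tilt further → louder
  const freq = 55 * Math.pow(2, (gamma + 90) / 90); // A1 swept over two octaves
  return { amp, freq };
}
```

Each orientation update would set the oscillators’ gain and frequency from `droneParams`, typically smoothed to avoid zipper noise.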
Birds is a collection of bird sounds that are played by jiggling the device. Each player can try different bird calls. Two or more players can communicate through tweeting and create a forest-like atmosphere of distributed bird sounds answering each other.
Monks features a short extract of the song “Early Morning Melody” from Meredith Monk’s Book of Days and a recording of a Tibetan chant. Both extracts are performed through granular synthesis by tilting the device sideways. A group of players can form a choir.
The rainstick is based on sound materials created by Pierre Jodlowski and also used for the audiovisual installation Grainstick produced at Ircam in 2010. The player has to hold the device horizontally and tilt it up and down like a rainstick to produce sound.
An important aspect of CoSiMa is experimenting with the user scenarios, technologies, and content developed in the framework of the project together with a community of users.
« Collective Sound Checks » are regular events that allow us to try out new developments with a larger number of users and to validate the technological, aesthetic, and social hypotheses of our work. Each event proposes different experiences that invite users to play music together, to play a game, or to discover augmented reality spaces.
The first CoSiMa Collective Sound Checks have been conducted in collaboration with the Studio 13/16.
A first series of workshops at the Studio 13/16 took place in spring 2014, on May 14, May 28, and June 14 (for the Open House at Ircam). A second series followed in fall/winter 2014, on October 1, October 15, November 5, and December 17.
Orbe created Murmures Urbains, an emerging fiction built on the principle of post-narrative writing. First, a staging based on protocols creates multiple situational outbreaks in public space. Then, the traces and testimonies from these experiments are collected in a scenic space. In the epilogue, the routes and stories are presented in an exhibition space.
Murmures Urbains is a rich context for experimenting with situated media. The framework used for these experiments is a foreshadowing of the CoSiMa platform. With Medias-Situés, you can associate media with a combination of trigger constraints: spatial, temporal, environmental, or behavioral conditions. The system also allows you to synchronize events between multiple mobile devices, manage spatialized sound, or hybridize remote areas. Murmures Urbains has been deployed in several contexts: workshops in art and design schools such as ENJMIN (National School of Video Game and Interactive Media), and during festivals and events like Chalon dans la Rue.
Murmures Urbains will be in residence at L’Hôpital Ephémère in April 2015 and will be presented at the Chalon dans la Rue festival in July 2015.