While working at Aesthetec Studio, I developed ShadowRock, which is currently being installed at the Telus World of Science in Calgary. The final version seen in my portfolio is roughly the twelfth version I made. Here are a few of the alternative versions that didn’t make the cut, along with one installation I made for fun while learning contour analysis.
Shadow Sound Interactive Version 1
One of the earliest versions of ShadowRock I developed. I had just gotten blob detection working, so the image of the shadow is treated as a single physical object by the software, allowing it to collide and interact with other objects. Each musical object is programmed to behave like a spring: when displaced from its original position, it bounces back with a force proportional to the distance moved.
Each musical object has its own unique sound, and when it collides with the shadow or another musical object, the sound is played and a burst of particles is generated.
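The spring behaviour described above can be sketched in a few lines of plain C++. This is a minimal, hypothetical version of the idea rather than the installation's actual code; the stiffness, damping, and integration scheme here are all assumptions:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Hypothetical musical object: it remembers a rest position, and a
// restoring force pulls it back with magnitude proportional to its
// displacement (Hooke's law), as described above.
struct MusicalObject {
    Vec2 restPos;   // original anchor position
    Vec2 pos;       // current (possibly displaced) position
    Vec2 vel{0.0f, 0.0f};
};

// One semi-implicit Euler step; k is the assumed spring stiffness and
// damping bleeds off energy so the object settles back to rest instead
// of oscillating forever.
void updateSpring(MusicalObject& m, float k, float damping, float dt) {
    float fx = k * (m.restPos.x - m.pos.x);  // force proportional to displacement
    float fy = k * (m.restPos.y - m.pos.y);
    m.vel.x = (m.vel.x + fx * dt) * damping;
    m.vel.y = (m.vel.y + fy * dt) * damping;
    m.pos.x += m.vel.x * dt;
    m.pos.y += m.vel.y * dt;
}
```

When the shadow blob pushes an object away, calling `updateSpring` every frame makes it snap back, harder the further it was pushed.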
Shadow Sound Interactive Version 2
Later in development, I thought one way to make the installation more interesting was to have the sound emitted by each object depend on the object’s location, so that users would be encouraged to create different shadow shapes and interact with the objects in more varied ways. In the video shown above, the musical objects each play a note in a scale when contact is made, and the exact pitch of the note depends on the object’s y-coordinate.
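A y-coordinate-to-pitch mapping like the one described can be sketched as follows. The particular scale, root note, and range here are assumptions for illustration (the post doesn't say which scale the installation used):

```cpp
#include <algorithm>
#include <array>

// Quantise a vertical position to a MIDI note on a pentatonic scale.
// The pentatonic scale, middle-C root, and two-octave range are all
// assumed values, not taken from the original installation.
int noteForY(float y, float screenHeight) {
    static const std::array<int, 5> pentatonic = {0, 2, 4, 7, 9}; // semitone offsets
    const int baseNote = 60;  // MIDI middle C (assumed root)
    const int octaves  = 2;   // assumed pitch range
    const int steps    = octaves * static_cast<int>(pentatonic.size());

    // Normalise so the top of the screen (small y) gives the highest pitch.
    float t = std::clamp(1.0f - y / screenHeight, 0.0f, 1.0f);
    int step = std::min(static_cast<int>(t * steps), steps - 1);
    return baseNote + 12 * (step / static_cast<int>(pentatonic.size()))
                    + pentatonic[step % pentatonic.size()];
}
```

An object touched near the bottom of the frame would then trigger the root note, and the same object dragged upward would climb the scale.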
Shadow Sound Interactive Version 3
This is an extension of the idea explored in version 2, in that the position of a musical object affects the sound it makes. The difference is that instead of playing notes on a scale, each musical object represents one of five instruments (drums, bass, guitar, piano, synth) and samples one of three tracks, depending on the y-coordinate of its position. This resulted in a very different interaction: the user is encouraged to hold certain positions to let the tracks play and mix, as opposed to version 2, where a lot of up-and-down movement was needed to create interesting sounds.
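The version-3 layout, as I understand it, amounts to banding the screen vertically per instrument. A minimal sketch, assuming three equal bands (the actual band boundaries in the installation may have differed):

```cpp
// The five instruments named above; each musical object would be tagged
// with one of these.
enum class Instrument { Drums, Bass, Guitar, Piano, Synth };

// Pick which of the three pre-recorded tracks an instrument samples,
// based on the object's vertical position. Equal thirds of the screen
// are an assumption for illustration.
int trackForY(float y, float screenHeight) {
    if (y < screenHeight / 3.0f)        return 0; // top band
    if (y < 2.0f * screenHeight / 3.0f) return 1; // middle band
    return 2;                                     // bottom band
}
```

Holding a shadow steady over, say, the drum object in the middle band would keep drum track 1 looping while other objects are mixed in the same way.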
Somebody on the openFrameworks forum had posted a video of a similar exhibit at the Arizona Science Center and was asking how such an installation worked. I thought I’d have a go at it and created the piece above. It uses the openFrameworks box2d addon to turn the detected contours into collision geometry and to handle all the particle collisions. Unfortunately I didn’t get a chance to optimize the program, so you can see it lagging in places, especially when there’s a build-up of particles.
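One likely source of that lag is the sheer number of edge segments a raw contour produces every frame. A simple mitigation (an assumption on my part, not something the original program did) is to decimate the contour before handing it to the physics engine:

```cpp
#include <vector>

struct Point { float x, y; };

// Keep only every Nth contour point so the physics engine has far fewer
// edge segments to test particle collisions against. A stride of 1
// returns the contour unchanged.
std::vector<Point> decimateContour(const std::vector<Point>& contour, int stride) {
    if (stride < 1) stride = 1;
    std::vector<Point> out;
    out.reserve(contour.size() / stride + 1);
    for (std::size_t i = 0; i < contour.size(); i += stride) {
        out.push_back(contour[i]);
    }
    return out;
}
```

The trade-off is a coarser collision outline, which is usually invisible at the scale of a projected shadow.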
All of these were written in openFrameworks. Unfortunately I’m not at liberty to release the source code. However, I will try to do a tutorial on blob detection and contour analysis at some point in the future.