Rapid prototyping for music, art and design work

Pulsar Kites

Pulsar kite video

Kites are fun to fly in Helsinki, since there is good wind for most of the year. For this course we wanted to add value to a kite with electronic features, in other words, to produce a sonification of kite movement. The kite thus works as an interface that generates data to produce sound in real time: an instrument for musical or sound-art performance, controlled manually and modified randomly by the wind conditions.

The interface has a wireless system that measures the speed and rotation of the kite; this data is transformed into sound in real time. The sound modules are designed on the principles of composition with pulsars developed by Curtis Roads, and on the microsound studies of Alberto de Campo. For future development we are considering a dynamic LED light system and an increased number of simultaneous kites. Here are some pictures of the process of creating the kite and of the electronic devices used for the project.
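The post does not describe the actual mapping, but the idea of turning kite telemetry into pulsar-synthesis controls can be sketched roughly as follows. This is a minimal illustration, not the project's code: the function names, sensor ranges and parameter ranges are all assumptions. In pulsar synthesis, a short pulsaret repeats at a fundamental period, and the ratio of pulsaret duration to period (the duty cycle) shapes the spectrum.

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from one range to another, clamped to the input range."""
    value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def kite_to_pulsar(speed_ms, rotation_dps):
    """Hypothetical mapping: kite speed and rotation rate -> pulsar parameters."""
    # Faster kite -> denser pulsar train (assumed 0-20 m/s sensor range).
    fundamental_hz = scale(speed_ms, 0.0, 20.0, 4.0, 80.0)
    # Faster spin -> wider pulsaret duty cycle (assumed up to one turn/second).
    duty_cycle = scale(abs(rotation_dps), 0.0, 360.0, 0.05, 0.9)
    return {"fundamental_hz": fundamental_hz, "duty_cycle": duty_cycle}
```

In practice the wireless unit would stream its readings into the sound modules, which would re-run a mapping like this on every sensor update.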


You can view more test videos and documentation of Pulsar kites here:

Finally, this project was presented at Voda Helsinki, a visual arts, literature and music event in Kalasataman Aukio, on Sunday 19th August 2012.

Presentations from the Physical Computing course at the Sibelius Academy

Perrito faldero (Lap dog)

A sound object that interacts with the voice.

The aim is to create a metaphor for the digitization of sound, from continuous signal to discrete signal, and to make a machine that reacts to the expressivity of the voice.


input sound -> microphone -> audio interface -> computer (max/msp) -> arduino board -> motors -> output sound

Sound: the incoming sound is principally the voice.

Microphone: cardioid headband voice microphone.

Audio interface: it transforms the analog signal into digital information.

Computer (Max/MSP): it analyzes the information (pitch and amplitude) that comes from the interface with the help of the fiddle~ object. To control and communicate with the Arduino, I use the arduino object.

Arduino board: the information from the analysis modifies the number of motors that are active and the velocity of each one.

Sound object: a small machine consisting of twelve motors that are controlled by the input sound (voice). Each motor produces a pulsed, repetitive output sound. Three of them allow their velocity to be controlled; the others can only be switched on and off.

Sound: the pulsed, repetitive sound is created by each motor with a propeller touching a fixed material on each rotation.
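The mapping step in the middle of this chain could look something like the sketch below. It is only an illustration of the idea described above, not the actual Max/MSP patch: the amplitude range, the MIDI pitch range and the function name are assumptions.

```python
def voice_to_motors(pitch_midi, amplitude, n_motors=12, n_speed=3):
    """Hypothetical mapping of fiddle~-style analysis data to motor states.

    amplitude (0..1) decides how many of the twelve motors run;
    pitch (MIDI note number) sets the PWM speed (0..255) of the three
    speed-controllable motors.
    """
    amplitude = max(0.0, min(1.0, amplitude))
    active = round(amplitude * n_motors)          # louder voice -> more motors on
    # Assumed vocal range of roughly C2..C7 mapped onto the full PWM range.
    pwm = int(max(0.0, min(1.0, (pitch_midi - 36) / 60)) * 255)
    speeds = [pwm] * n_speed
    on_off = [i < active for i in range(n_motors)]
    return on_off, speeds
```

The resulting on/off flags and speed values would then be sent to the Arduino board, which drives the motors.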

Alejandro Montes de Oca

Feather Report

My new project plan goes as follows. I have a fan blowing air into a transparent container with feathers of different colours (say, red, green and blue). The movement of these feathers is shot with a video camera or a webcam and colour-tracked with Jitter. The X-Y movement data of each feather is then mapped to drive sound (2 control parameters per feather = 6 in total). To make it more interesting, the fan should be controlled by Arduino as well. I could drive the fan with semi-random algorithmic data to make the whole thing a self-contained living entity, or, say, control it with a MIDI controller for a sound performance. Or the fan could be indirectly controlled by the feathers' movement to introduce some feedback into the system. Furthermore, I could make the system more interesting by having several fans to create different turbulent fields.

So, how easy is it to control a fan with Arduino? Would the small motors we have be powerful enough to operate as a fan (if I build a fan blade, for example, from cardboard), or should I buy ready-made fans?

The colour tracking works already; I've tested it. I should just 1) obtain a fan, 2) obtain feathers, 3) obtain materials to build a container for the feathers and build it, 4) figure out how to control the fan with Arduino and 5) build a sound-making instrument and map the parameters in a musical way.

Comments, suggestions?
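The "2 control parameters per feather" mapping could be sketched like this. A hedged illustration only: the blob dictionary, frame size and normalization are assumptions about what the Jitter colour tracker would output, not its actual format.

```python
def feathers_to_params(blobs, width=320, height=240):
    """Map colour-blob centroids (pixels) to six sound-control values in 0..1.

    blobs: {"red": (x, y), "green": (x, y), "blue": (x, y)}
    Returns [red_x, red_y, green_x, green_y, blue_x, blue_y], normalized.
    """
    params = []
    for colour in ("red", "green", "blue"):
        x, y = blobs[colour]
        params.append(x / width)
        params.append(1.0 - y / height)   # flip so that "up" means a larger value
    return params
```

Each of the six values could then feed one synthesis parameter, e.g. pitch and amplitude of three voices.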

Hands on work started today


Alejandro works with an old keyboard to build a sensor.
Alejandro II first steps

Alejandro II experiments with Arduino and motors.

The sensor is a microphone…

Components that can be connected to Pduino and Maxduino

I made a list of the components that can be used with the Pduino and Maxduino patches, but not all of them have been tested. I will update it when I find new components compatible with the patches.

Analog In

Digital In

Analog Out

    – LED (Available from your local component shop)
    – DC motor (Available from your local component shop)

Digital Out

    – LED (Available from your local component shop)
    – DC motor (Available from your local component shop)

Continuous Motion, Discrete Signal

The concept involves defining discrete regions of space between two 'pads' (preferably small patches that can be attached to the body). Data is controlled in intervals as the proximity between the two objects increases and decreases.

For instance, instead of the continuous stream of data from a source such as a Theremin, there would be defined regions of space that would trigger a discrete sequence of a defined (musical) scale.

The user would not be touching the pads; the signals would be triggered by the pads moving closer to one another or farther apart. In one scenario, a person would have one pad attached to their hand and one to the top of their foot, and a range of intervals would be triggered by the closeness of the hand and the foot.

Working Plan / Technical Solutions

The proximity sensor, a PING ultrasonic sensor, communicates with the BASIC Stamp and Max/MSP via Arduino Bluetooth.

A program defines the distinct regions between the pads.
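Such a program is essentially a quantizer: the continuous distance reading is divided into a fixed number of regions, and each region triggers one note of a scale. The sketch below illustrates the idea; the distance range, number of regions and scale choice are assumptions for the example, not values from the project.

```python
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets within one octave

def distance_to_note(distance_cm, near=5.0, far=60.0, root_midi=60, regions=8):
    """Quantize a continuous pad-to-pad distance into one of `regions`
    discrete notes of a major scale starting at root_midi (C4 here)."""
    distance_cm = max(near, min(far, distance_cm))       # clamp to sensor range
    index = int((distance_cm - near) / (far - near) * regions)
    index = min(index, regions - 1)                      # keep the far edge in range
    octave, degree = divmod(index, len(MAJOR_SCALE))
    return root_midi + 12 * octave + MAJOR_SCALE[degree]
```

Because the output only changes when the distance crosses a region boundary, small hand movements inside one region produce no new triggers, which is exactly the "discrete signal from continuous motion" idea.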



Mat Synthesiser

A “foot controller” for audio synthesis: a sort of “track pad”, sensitive to position and pressure, to be played with the feet.

Technical solutions: big pressure sensors (5 or 6) can be placed under a flat “mat” surface (20 cm × 20 cm) to give position information in an axis system (x, y, z?) to Max/MSP or PD.
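One common way to get an (x, y, z) reading out of a handful of pressure sensors is a pressure-weighted centroid. A minimal sketch, assuming a hypothetical five-sensor layout (four corners plus centre) that is not specified in the post:

```python
# Hypothetical sensor layout on a 20 cm x 20 cm mat: four corners + centre.
SENSOR_POS_CM = [(0, 0), (20, 0), (0, 20), (20, 20), (10, 10)]

def mat_position(readings):
    """Estimate foot position (x, y) and total pressure z from five
    pressure readings, as a pressure-weighted centroid of the sensors."""
    total = sum(readings)
    if total == 0:
        return None                      # no foot on the mat
    x = sum(p * pos[0] for p, pos in zip(readings, SENSOR_POS_CM)) / total
    y = sum(p * pos[1] for p, pos in zip(readings, SENSOR_POS_CM)) / total
    return (x, y, total)
```

The x, y pair would drive two synthesis parameters and the total pressure a third, much like a track pad with an extra z axis.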

Soup sounds

Preliminary plan:

In short, the idea is to use chemical reactions as control data. The key question here is: what kinds of parameters can one measure in a liquid? A few that have come to mind: salt level (conductivity?), temperature, etc. I’m sure there are others…

This control data would be mapped to generate and/or modify sound. The idea is to create a performative composition with little direct user intervention, although it might be necessary to work as a cook, mixing in different ingredients during the performance and changing the temperature of the substance.
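As a rough illustration of such a mapping (all ranges and parameter choices here are invented for the example, since the post leaves them open): conductivity could set a filter cutoff and temperature a grain rate.

```python
def soup_to_sound(conductivity_us, temperature_c):
    """Hypothetical mapping of liquid measurements to two sound parameters.

    conductivity (uS/cm, assumed 0-5000) -> filter cutoff 100..8000 Hz
    temperature  (deg C, assumed 0-100)  -> grain rate 0..30 grains/s
    """
    cutoff_hz = 100.0 + min(conductivity_us, 5000.0) / 5000.0 * 7900.0
    grain_hz = max(0.0, min(100.0, temperature_c)) / 100.0 * 30.0
    return cutoff_hz, grain_hz
```

Adding salt or heating the soup during the performance would then sweep these parameters slowly, which fits the idea of a composition with little direct intervention.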

Soundtrack of our life

The working title for my project is “Soundtrack of our life”. The idea derives from one of my earlier works, a sound installation for an mp3 player in which the viewer/listener was offered an iPod set to play a huge number of various samples at random, changing the experience of the surroundings by changing the soundscape. This time I would like the sound to change in response to the listener’s actions and movements.

Sensor input would change the sound. Ideally the user should be able to carry the set around.

Technical solutions are something about which I have only vague ideas at the moment. I do think I need a sound source, sensors, and a microcontroller that communicates with them both.

Robotic Orchestra

As my project for the physical computing course I’d like to realize a “robotic orchestra”. The basic idea is to control several motors that are connected to the computer via an Arduino board. The actions of these motors are controlled by a Pure Data patch. The motors serve as “real-world players”: they can play and “bang” on all kinds of objects according to the control data they receive from the host software. Digital bits and bytes are transformed into real motions.
I’ve seen a couple of people using solenoids for similar purposes, so this might be one direction to start with.

Technical requirements:

1.) Arduino board

2.) motors (solenoids?)

3.) Pure Data
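The control flow described above is basically a step sequencer whose "voices" are motors. A minimal sketch of that idea, assuming one on/off pattern per motor (the data format is invented for the example; in the real project this logic would live in the Pure Data patch):

```python
def sequence_bangs(patterns, steps=8):
    """Turn per-motor rhythm patterns into (step, motor) bang events
    in playback order, like a PD metro stepping through arrays.

    patterns: {motor_index: [1/0 per step]}; short patterns loop.
    """
    events = []
    for step in range(steps):
        for motor, pattern in sorted(patterns.items()):
            if pattern[step % len(pattern)]:
                events.append((step, motor))
    return events
```

Each event would become one serial message to the Arduino, which fires the corresponding solenoid against its object.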

I Can

The working title of my project is I Can. This might be regarded as the can equivalent of I, Claudius, in the sense that it has something to do with I-dentity. The idea is to make a musical instrument out of a beer can and discover what its potential for sonic performance might be (er). If a performance is forthcoming, it will surely have something to do with the metaphor of ease vs. effort, i.e. the I Can contrasting with the I Cannot. In any case, the can will have to have some life of its own, otherwise it will not be its own I.

I would like to put a wireless Arduino inside, along with a battery box, some sensors and a servo motor to make the beer can shake (rhythmically?) on command from the remote computer. What sensors CAN I put inside? Perhaps touch, velocity, orientation, or a microswitch to detect whether the ring pull is in place or lifted.

What sounds will I control with it? Maybe Offenbach’s Can-Can. OK, as yet I have no idea.

The programming of the beer belly to go with the instrument is another long term project. More on that when I know how many beers I had to drink before I found the ideal can for the prototype.

Nickname: persona

Short description:

Wondering about how the new ways of communication enabled by present-day technology affect the transformation and construction of human identity on the net, and how they can cause physical feeling and the perception of human energy to be lost or experienced in a new way, nickname: persona is an interactive audiovisual installation in which people are obliged to act to make the piece possible. It aims to make them wonder about the kind of relationship established between them, and about how they show their identity to others.
The work would consist of taking data from visitors’ bodies with sensors, which would manipulate the sound, and taking snapshots of their faces to build a new face from all of them. Sound and image would be streamed to a website.

Development (working plan):

During the following three months:
1. Make some tests with sensors to learn what kind of data we can receive from the human body, and decide what relation can be established between that data and the sound of the installation.
2. Make some tests on the placement of the sensors and on how visitors must use them.
3. Decide the form of presentation for this part of the project, and also the output of the whole project.

Technical solutions:

Sensors (undefined)


Are there sensors that I could test before buying them? I’m interested in blood pressure, body temperature, sweat, heartbeat, the nervous system… any other ideas?

Sound visualization using fire.

This is a very interesting sound visualization.

Rubens’ Tube (Wikipedia)

Maywa Denki Edelweiss Series

Maywa Denki official site

Edelweiss Series Video

Maywa Denki Tsukuba Series

Maywa Denki official site

Tsukuba Series Video


Tenori-on by Toshio Iwai and Yamaha

Tenori-on UK official site

Arduino + PD, Max/MSP

Arduino + Max/MSP by Marius Schebella

Arduino + Pure Data by Hans-Christoph Steiner

Examples from Youtube
