Visual drums

The origin of this project was the idea of giving the user (or users) the ability to control the projection mapping on the building. The setup consisted of 4 buttons that triggered different visuals on the canvas, encouraging interaction: one person playing 4 drums, or 4 people with one drum each.

An Arduino Nano along with a microphone module was installed inside each drum. The microphone senses audio peaks in the environment and the Arduino translates them into keyboard commands. That way we can connect the hardware to any computer and it will detect the peaks as key presses. We used a simple project in Resolume Arena (a VJ software) to demonstrate the responsiveness and capabilities of the installation.
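The peak-to-keypress logic can be sketched as a plain function (names and the threshold value are my assumptions, not the installation's actual code; on the board, a detected peak would then be sent as a key press with a keyboard/HID library):

```cpp
// Peak detection for one drum (a sketch; the threshold is an assumed value
// to tune per microphone module, readings are 0-1023 analog samples).
const int PEAK_THRESHOLD = 600;

// True only on the rising crossing of the threshold, so a single drum hit
// produces a single key press instead of one per loud sample.
bool isNewPeak(int sample, int previousSample) {
    return sample >= PEAK_THRESHOLD && previousSample < PEAK_THRESHOLD;
}
```

Comparing against the previous sample is what keeps one loud hit from registering as a burst of key presses.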

Enlighten 02

Hardware

The first tests showed glitches at the lower voltages; knowing that, working between 10% and 30% of the power works fine (50% is too bright to enjoy the experience, though you can go up to 90% easily if you want to). So far there are no significant differences between LED and traditional bulbs, so in the first tests I mixed them without major issues.

8channel.jpeg

Regarding tracking the flame's movements, it could be accomplished using image analysis and video input, but that would require a camera setup and calibration according to the environment's light (necessary in almost every scenario I can imagine). Additional problems with a camera include cast shadows, color calibration, resolution, tele lenses, point of view, blind spots, etc. I decided to use 4 light sensors placed NORTH, SOUTH, WEST, and EAST relative to the candle. Each photoresistor gives us an analog value from 0 to 1023 for how much light its corner is receiving (almost like x and y factors).

In our ideal scenario the flame gives the same light to every sensor, but if one sensor receives more light than another it means the flame has a direction. This behaviour allows us to make this relation:

+X position = EAST - WEST; +Y position = NORTH - SOUTH;

These resulting values are the direction vector of the flame relative to a coordinate system at [0,0]. For example, if NORTH, SOUTH, WEST, and EAST are all 1023, then X and Y = 0, which means no direction, because all the sensors are reading the same quantity of light. The cool part is the noise inherent in a flame's movement, which shifts the readings all the time. The first setup used uncovered sensors, but they didn't work because they picked up external light from the environment. Covered sensors focus on sensing our flame.
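The relation above is small enough to show as code (the struct and function names are mine):

```cpp
// Direction vector of the flame from the four photoresistor readings
// (each 0-1023): +X = EAST - WEST, +Y = NORTH - SOUTH.
struct FlameDir { int x; int y; };

FlameDir flameDirection(int north, int south, int east, int west) {
    FlameDir d;
    d.x = east - west;    // positive X points EAST
    d.y = north - south;  // positive Y points NORTH
    return d;
}
```

Equal readings on all four sensors give the vector [0,0]: no direction.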

photoresistors.jpeg

The first prototype was made of cardboard, which is not the friendliest material to put near fire. So far I haven't had any sort of issue with it, but changing the material is crucial to avoid incidents.

Simulation

Enlighten controls the brightness of 8 bulbs in the space depending on where they are, meaning, their relation to the flame. To make this happen I designed a 3D simulator where the bulbs can be placed wherever necessary. This app makes the experience editable and scalable (which may include new features in the future). The simulator includes a particle system that simulates a virtual flame responding to virtual wind. Brightness and the wind's direction are driven by the serial readings from the Arduino, which then go back out to dim every bulb.
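As an illustration only (this is my sketch, not the simulator's actual code), a bulb's dim level could follow how well its position around the candle aligns with the flame's direction vector:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical mapping: a bulb placed at angle bulbAngle (radians) around
// the candle brightens when the flame's direction vector (flameX, flameY)
// leans toward it. Returns a dim level in 0-255.
int bulbBrightness(double flameX, double flameY, double bulbAngle) {
    double mag = std::sqrt(flameX * flameX + flameY * flameY);
    if (mag == 0.0) return 0;  // no direction: bulb stays at its base level
    // Cosine of the angle between the flame direction and the bulb direction.
    double align = (flameX * std::cos(bulbAngle) + flameY * std::sin(bulbAngle)) / mag;
    return (int)(std::max(0.0, align) * 255.0);
}
```

A bulb directly in the flame's lean direction gets full brightness; bulbs behind it stay off.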

simulator.jpeg

Testing

The first tests used linear arrays of light bulbs (8 of them, using just one 8-channel module).

There was flickering in some of the bulbs due to the frequency of the mains voltage. Although the 8-channel dimmer's documentation says it identifies frequency and voltage automatically, you have to be sure you're using the correct timing setup in your Arduino code in order to work with 50 Hz or 60 Hz (the exact timing data is included in the Arduino code example).
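As a sketch of why the mains frequency matters (the figures are standard mains values, not the module's exact code): phase dimming waits a fraction of each half-cycle after the zero-cross before firing, and the half-cycle length is what changes between 50 Hz and 60 Hz:

```cpp
// Half-cycle length of the mains waveform, in microseconds.
long halfCycleMicros(int mainsHz) {
    return 1000000L / (2L * mainsHz);  // 10000 us at 50 Hz, 8333 us at 60 Hz
}

// dimPercent = 0 (off) .. 100 (full on). A brighter setting means firing
// earlier in the half-cycle, i.e. a shorter delay after the zero-cross.
long firingDelayMicros(int dimPercent, int mainsHz) {
    long half = halfCycleMicros(mainsHz);
    return half - (half * dimPercent) / 100;
}
```

Using 60 Hz timing on a 50 Hz supply makes the firing point drift within the half-cycle, which shows up as flicker.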

Deployment

In order to control 16 bulbs it is necessary to use at least two Arduinos. This implies two serial ports to write to (at least on the Processing side), each opened at the baud rate its Arduino uses; in this setup the two rates differ:

myPort1 = new Serial(this, portName1, 9600);
myPort2 = new Serial(this, portName2, 38400);

The baud rate must be the same on each Arduino's side respectively (in Serial.begin()). That way you can control more than one Arduino.

installationEnlighten.jpeg

Calibration

The light sensors receive bounced light all the time, so they read different values under natural (morning, afternoon) versus artificial light, which makes a calibration process mandatory.
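One possible calibration approach (an assumption on my part, not the project's documented method) is to average a short burst of readings at startup and treat deviations from that baseline as the signal:

```cpp
// Assumed calibration sketch: average a burst of readings while the scene
// is steady, keep that average as the sensor's baseline, and work with
// deviations from it instead of raw values.
int calibrateBaseline(const int readings[], int count) {
    long sum = 0;
    for (int i = 0; i < count; ++i) sum += readings[i];
    return (int)(sum / count);
}

int calibratedValue(int raw, int baseline) {
    return raw - baseline;  // positive = brighter than the ambient baseline
}
```

Re-running the baseline pass whenever the room lighting changes would keep the direction readings comparable between morning, afternoon, and artificial light.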

Wanna Brain?

Brief

Let's make something for Halloween. We didn't want to think about technology first; instead, we decided to think about a Halloween product to share with our audience regardless of the tech involved. Personally speaking, this is the first project I've made just for fun, thinking about what feelings we want to produce in people. Now I feel that's the right and only way to do it: just ask ourselves, what do we want to communicate?

What to produce?

We wanted to produce Halloween feelings, of course! Thinking about the format, target, and place, we decided to hide every tech clue about our device, in order to create a freaky, repulsive experience that people would share, laugh at, and wonder about.
For some reason the first thing that came to our minds was food. Most interaction at Halloween is about getting and giving food: sweets, chocolates, etc. So let's give interactive food. How weird does that sound? That being said, it was fairly easy to picture what we wanted.

References

The heads in jars from Futurama were the main reference: floating heads of different personalities in jars. What about the chance to taste that water, see those faces, or eat those brains? The brain shape gave us a character with no identity, one who can be anyone, with different voices and faces.

wannaBrain07.png

Concept

Once the basics were settled, we started deliberating about the parts of our system. We thought about the experience timeline: people approach this repulsive food and are invited to eat it. They try to eat it, and then they become aware of its "active" state. Let's think about this device as a system with inputs, processes, outputs, and maintenance.

Inputs:

1. touch

Processes:

1. Give the brain a voice and a face - identity
2. Come to life

Outputs:

1. Speak
2. Move
3. Light up
4. Show a face

Maintenance:

1. We made a cardboard box with access through the bottom face (just in case we needed quick access).
2. We realized people might not be interested in eating it because of sanitary concerns, so we brought extra materials such as foil and plastic wrap.
3. The food would be the last part assembled, to ensure cleanliness.

Technical solution

We decided to use this schematic.
wannaBrain06.jpeg
1. Capacitive sensor: based on the CapSense library for Arduino, to trigger the action.
2. Button: to trigger actions in case of emergency.
3. 8-ohm speaker as sound output.
4. ISD1820 record-and-playback module to loop a recorded message: "EAT THE BRAIN".
5. Servo motor to shake the brain.
6. 8 LEDs connected to a shift register.
7. p5.js sketch to trigger images and audio.
8. JELLO as the main material. It is conductive enough (via metal spoons) to act as a bridge between the circuit and us as capacitors.
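The capacitive trigger can be sketched as a thresholded reading with a small debounce (the threshold and count are assumptions; raw CapSense values vary a lot with wiring and material):

```cpp
// Touch-detection sketch around a CapSense-style reading. The threshold is
// a hypothetical value; the real one must be calibrated, since the wobbling
// jello itself can push the reading up (see "Technical issues" below).
const long TOUCH_THRESHOLD = 2000;  // assumed raw CapSense units
const int  REQUIRED_IN_A_ROW = 3;   // consecutive high readings = a touch

bool isTouch(const long readings[], int count) {
    int run = 0;
    for (int i = 0; i < count; ++i) {
        run = (readings[i] > TOUCH_THRESHOLD) ? run + 1 : 0;
        if (run >= REQUIRED_IN_A_ROW) return true;
    }
    return false;
}
```

Requiring several consecutive high readings filters out the single spikes that jello wobble produces.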

Process:

wannaBrain02.png
wannaBrain03.gif
wannaBrain04.png

We made the jello several times because it didn't reach enough density to maintain a firm shape.

wannaBrain05.jpeg

Experience:

wannaBrain.png
wannaBrain01.png

The best part is her face!

Technical issues

  • An 8-ohm speaker's power output is very low. As a recommendation, it's useful to add an amplifier module to make it louder.

  • When you touch the jello you make it shake anyway, so the shake feedback produced by the servo is almost invisible.

  • LEDs emit focused light. To distribute it more uniformly you should use a diffuser.

  • Movement of the capacitor (in this case the jello) can itself activate the capacitive sensor, triggering the experience. You should be sure you have calibrated the right threshold.