type of project: fellowship research project
published: 2021
by: Nico Parisius and Caspar Bankert
contact: nico.parisius@theater.digital
repository:
https://git.theater.digital/nico.parisius
https://github.com/zecktos/rovPlayer
Memory of Things (Nico Parisius & Caspar Bankert)
“Memory of Things” is a walk-in installation in which the audience explores objects in a room that are linked to historical fragments.
These objects, equipped with sensors, can register interaction and trigger light, video and sound.
The idea is to create a “performance of things”: telling stories about times from which no people are left and the objects are the last witnesses.
In other words, we want to use digital tools to make provenance research tangible.
About
We are Nico Alexander Parisius and Caspar Bankert. We both studied puppetry at the Hochschule für Schauspielkunst “Ernst Busch” Berlin. Nico Parisius worked at the Puppentheater Halle for five years and discovered his interest in creative coding in his spare time. He has worked on projects that merge his interest in computer systems with his proficiency on stage. Caspar Bankert is a freelance artist based in Berlin. He combines traditional and modern games and their possibilities with stage plays to create interesting new experiences.
Lena Wimmer also studied puppetry at the Hochschule für Schauspielkunst “Ernst Busch” Berlin. She works as a freelance performer as well as in the field of theater pedagogy for children and adults.
Lea Grüter studied art history in Göttingen and museology in Amsterdam. She works as a provenance researcher at the Rijksmuseum Amsterdam.
Goal
The goal of our research in Dortmund was to formulate a first draft of what Memory of Things could one day become, as well as to build a working prototype that outlines the requirements for a final version. In the ordinary rehearsal schedule of a theatre production there often isn’t enough time to try out new or unfamiliar methods of play or interesting technology. Because of the tight deadlines there is often no room for error while testing things that are off the beaten path. We wanted to use the fellowship to look at different hardware, software, objects, lights, sounds and narrative structures to create a room that tells its visitors an interactive narrative.
Process / Development
February/March
- thinking about different combinations of objects and sensors / which interactions could be interesting?
- deepening our Arduino coding skills
- exploring the special features of the ESP32
- WiFi connection
- Bluetooth connection
- dual-core CPU
- exploring different sensors and peripherals and their protocols, such as I2C, 1-Wire or SPI
April
- starting development of a remotely controllable, Linux-compatible video player
- exploring the Unity game engine and connecting the ESP32 via the OSC protocol
- starting development of a 2D game with the Unity game engine
- setting up a room with furniture, light and audio
May
- gathering historical fragments for a first prototype with provenance researcher Lea Grüter
- researching storytelling with real biographies and how to handle historical fragments with Lena Wimmer
June
- building a functional prototype
Smart Objects
Cup
Our first smart object was a white coffee cup we took from the academy. Although it started as our prototype object, it later proved to be the most reliable and versatile one, mainly thanks to the precise output of its gyroscopic sensor.
v0.1
At first we outfitted it with an ESP32, a gyroscopic sensor, an accelerometer and a battery pack. We simply put all of that into the cup, not hidden in any way. As a test we connected it to the LED lights of the academy so that changing the yaw, pitch and roll of the cup changed the color of the lights.
v0.2
When we moved into our room in Lab 2, we changed the cup to instead send its OSC signals to the Chataigne program, making it able to influence the lights, sounds and interactions in the performance (a sketch of such firmware follows below). We also added a heat sensor to the cup. At this stage we played with the idea that things could be triggered by warming or cooling the cup; for example, the lights would turn warmer when the cup was heated. This proved ineffective because:
1. Body temperature alone warmed the cup too slowly, and the changes weren’t very noticeable.
2. The natural temperature differences in the room fluctuated too much to reliably set triggers at certain temperatures.
3. In the beginning we sent all temperature, gyro and accelerometer data points to Chataigne, which in turn made the ESP32 quite hot, falsifying the temperature measurements.
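To give an impression of what such firmware can look like, here is a minimal sketch. It assumes an MPU-6050 IMU read with the Adafruit_MPU6050 library and OSC messages sent with the CNMAT OSC library for Arduino; the network credentials, IP address and the /cup/orientation path are placeholders, not our production code. The throttled send rate is the lesson from point 3 above.

```cpp
// Hypothetical sketch: ESP32 cup sending pitch/roll to Chataigne via OSC.
// Assumes an MPU-6050 IMU and the Adafruit_MPU6050 + CNMAT OSC libraries.
#include <WiFi.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>
#include <Adafruit_MPU6050.h>
#include <Adafruit_Sensor.h>
#include <Wire.h>

Adafruit_MPU6050 mpu;
WiFiUDP udp;
const IPAddress chataigneIp(192, 168, 1, 100);  // placeholder address
const int chataignePort = 9000;                 // placeholder port

void setup() {
  WiFi.begin("your-ssid", "your-password");     // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
  mpu.begin();
}

void loop() {
  sensors_event_t a, g, temp;
  mpu.getEvent(&a, &g, &temp);
  // Estimate pitch and roll from the accelerometer alone (yaw would
  // additionally need a magnetometer or sensor fusion).
  float pitch = atan2(-a.acceleration.x,
                      sqrt(a.acceleration.y * a.acceleration.y +
                           a.acceleration.z * a.acceleration.z)) * 180.0 / PI;
  float roll  = atan2(a.acceleration.y, a.acceleration.z) * 180.0 / PI;

  OSCMessage msg("/cup/orientation");
  msg.add(pitch);
  msg.add(roll);
  udp.beginPacket(chataigneIp, chataignePort);
  msg.send(udp);
  udp.endPacket();
  msg.empty();

  delay(100);  // throttle to ~10 messages/s so the chip stays cool
}
```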
We also tried to use the Bluetooth capabilities of the ESP32 to measure the distance to another ESP32. More about that in the section →Bluetooth Sensor.
v0.3
For its final incarnation we 3D-printed a lid for the cup, hiding all of its electronics on the inside.
Partytrigger
Discontinued
This first tryout of a timed action using the Chataigne program was a trigger that played an MP3 of the song “Eleven” by Hitchhiker. The trigger started a sequence in Chataigne which simultaneously sent the audio data to the sound card of the computer and changed the DMX lights we had daisy-chained in Lab 2. Activating that trigger would suddenly start a sound and light show. A little party. Later we figured out that loading audio files this way caused the sequence to buffer the audio before starting, which resulted in a delay of up to two seconds. That is why we opted to load the audio files in Ableton Live on another computer and start them with an OSC command from Chataigne, while Chataigne played back a sequence of exactly the same length as the audio clip to set triggers that coincide with audio cues. Certainly not elegant, but functional.
Bluetooth Sensor
Discontinued
For our first presentation, which happened to be two days after Easter, we tried to use our technology to build an interactive Easter game. We used the built-in Bluetooth of the ESP32 in our cup to pair with another ESP32 we hid next to an egg. The two ESPs would send the strength of their connection to Chataigne. In theory, whenever this value reached a certain threshold, we could assume that the cup was right next to the egg and the hiding place had been found. In that case a voice would announce: “Congratulations, the egg has been found - Happy Easter everyone!” In reality, the small size of our room and some random surface reflections of the Bluetooth signal sometimes triggered the “found” threshold from across the room. In a bigger area this approach could be used to guide people to certain locations by displaying proximity through a gradual change of some indicator.
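As an illustration, here is a rough sketch of how such a proximity measurement could look with the BLE library of the older ESP32 Arduino core (recent core versions return the scan results as a pointer instead). The beacon name and the threshold are hypothetical, and running BLE and WiFi together on one ESP32 needs extra care that we gloss over here.

```cpp
// Hypothetical sketch: estimate proximity to a second ESP32 ("egg-beacon")
// via BLE signal strength (RSSI), using the ESP32 Arduino BLE library.
#include <BLEDevice.h>
#include <BLEScan.h>
#include <BLEAdvertisedDevice.h>

BLEScan* bleScan;
const int FOUND_THRESHOLD = -45;  // dBm, tune per room; reflections make this noisy

void setup() {
  Serial.begin(115200);
  BLEDevice::init("");
  bleScan = BLEDevice::getScan();
  bleScan->setActiveScan(true);   // active scan returns names, costs more power
}

void loop() {
  BLEScanResults results = bleScan->start(1);  // scan for one second
  for (int i = 0; i < results.getCount(); i++) {
    BLEAdvertisedDevice device = results.getDevice(i);
    if (device.getName() == "egg-beacon") {    // hypothetical beacon name
      int rssi = device.getRSSI();
      Serial.printf("egg RSSI: %d dBm\n", rssi);
      if (rssi > FOUND_THRESHOLD) {
        // forward a "found" trigger to Chataigne via OSC here
      }
    }
  }
  bleScan->clearResults();
}
```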
Flowers
We installed a Raspberry Pi Zero in a vase with an arrangement of plastic flowers we had borrowed from the theatre department.
v0.1
At first we outfitted the flowers with a screen and a motion sensor, which we hid in the plastic plant life. In this first iteration, a little GIF of an animated dog wagging its tail would play whenever somebody triggered the motion sensor by waving something above the vase.
v0.2
In its second version we added a tiny speaker to the vase. The participants of our presentation could find snippets of the Bertolt Brecht poem “An die Nachgeborenen” (“To Those Born Later”) throughout the room. One of those was hidden in the vase. If somebody activated the motion sensor of the vase, parts of a black-and-white silent movie would play while the poem sounded from the speaker.
Teddy and Map
We used a teddy bear to move an object over an interactive map on a computer screen hidden inside a cupboard. (→ Unity)
v0.1
The first version of Teddy used an ESP32 stuck to the inside of the plushy. Two flex sensors, which detect how much they are bent, protruded through holes in the hips of the bear. At this point we thought we could use them, hidden inside the legs of the teddy, to maneuver the object on screen. Unfortunately, the sensors soon went out of shape, being bent in the same direction over and over again during testing.
v0.2
The second version used the already tested gyroscopic sensor and a pressure sensor to maneuver the object on screen.
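The gyroscopic part works just like in the cup sketch above. For the pressure side, a minimal sketch could look like this, assuming a force-sensitive resistor on an ADC pin and the same CNMAT OSC setup; the pin number, threshold and the /teddy/pressed address are placeholders.

```cpp
// Hypothetical sketch: detect a squeeze of the teddy's belly with an analog
// pressure sensor (e.g. an FSR on GPIO 34) and send a one-shot OSC trigger.
#include <WiFi.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>

WiFiUDP udp;
const IPAddress chataigneIp(192, 168, 1, 100);  // placeholder address
const int chataignePort = 9000;                 // placeholder port
const int FSR_PIN = 34;
const int PRESS_THRESHOLD = 2000;               // tune: 0..4095 on the ESP32 ADC
bool wasPressed = false;

void setup() {
  WiFi.begin("your-ssid", "your-password");     // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
}

void loop() {
  bool pressed = analogRead(FSR_PIN) > PRESS_THRESHOLD;
  if (pressed && !wasPressed) {                 // only fire on the rising edge
    OSCMessage msg("/teddy/pressed");
    msg.add(1);
    udp.beginPacket(chataigneIp, chataignePort);
    msg.send(udp);
    udp.endPacket();
    msg.empty();
  }
  wasPressed = pressed;
  delay(20);
}
```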
Armchair
A big armchair made to recognize whether somebody is sitting in it.
v0.1
The so-called bossa nova chair used a pressure sensor like the one we installed in the teddy to play back a sequence similar to the party sequence (→ Partytrigger). The lights would turn red and a slow bossa nova tune would play whenever the pressure plate detected somebody in the chair.
v0.2
The version we used for our final presentation utilized a scale to detect whether somebody was sitting in it. It also had a speaker installed, similar to the vase. The load cells of the scale were installed on each of the four legs of the armchair. We had hoped to use the four outputs to detect in which direction somebody sitting in the chair was leaning. Unfortunately, the type of scale we were using already averaged the four measurements, not giving us the opportunity to get separate values.
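Load cells like these are commonly read through an HX711 amplifier board. The following sketch shows roughly how that could be wired up with bogde's HX711 Arduino library; the pins, calibration factor and OSC address are placeholders, and four cells combined on one amplifier produce exactly the single averaged value we describe above.

```cpp
// Hypothetical sketch: read the armchair's load cells through an HX711
// amplifier (bogde's HX711 Arduino library) and report the weight via OSC.
#include <WiFi.h>
#include <WiFiUdp.h>
#include <OSCMessage.h>
#include "HX711.h"

HX711 scale;
WiFiUDP udp;
const IPAddress chataigneIp(192, 168, 1, 100);  // placeholder address
const int chataignePort = 9000;                 // placeholder port
const int DOUT_PIN = 16, SCK_PIN = 4;           // placeholder pins

void setup() {
  WiFi.begin("your-ssid", "your-password");     // placeholder credentials
  while (WiFi.status() != WL_CONNECTED) delay(100);
  scale.begin(DOUT_PIN, SCK_PIN);
  scale.set_scale(21500.0f);  // calibration factor, determined experimentally
  scale.tare();               // zero out the weight of the chair itself
}

void loop() {
  float kg = scale.get_units(5);  // average of five readings
  OSCMessage msg("/chair/weight");
  msg.add(kg);
  udp.beginPacket(chataigneIp, chataignePort);
  msg.send(udp);
  udp.endPacket();
  msg.empty();
  delay(250);
}
```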
Painting
An empty frame into which we projected a version of Peter Potter’s self-portrait. This painting was the central narrative piece of the story in our final presentation. A magnetic sensor on the bottom side of the frame, connected to an ESP32, detected whether somebody had moved the picture. Similar sensors detected whether somebody had opened the cupboard the frame was resting on, activating the interactive map (→ Teddy and Map).
Requirements
Hardware
- multiple ESP32 microcontrollers
- Raspberry Pi Zero with HiFi HAT
- 1.54″ 240×240 LCD display
- sensors:
- load sensors
- radar motion sensor
- multiple gyroscopes
- reed sensor
- video projector
- audio system
- DMX light system
- 3 computers
- reliable WiFi network
Software
Video Player
For video projection we built a custom video player with openFrameworks. The player is remote controllable via the OSC protocol and supports basic mapping. For every mapping surface you can load up to two different shaders and set values for the shaders, also via OSC. This made it possible for us to connect audience interaction with the objects to different video effects.
The source code can be found here.
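This is not the player's actual source code, but a minimal openFrameworks sketch of the underlying idea: an app whose playback is driven by incoming OSC messages via the ofxOsc addon. The port and the /player/... addresses are made up for illustration.

```cpp
// Minimal openFrameworks sketch of OSC-controlled video playback (ofxOsc),
// condensed into one file. Addresses and port are illustrative only.
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
  ofxOscReceiver receiver;
  ofVideoPlayer video;

  void setup() {
    receiver.setup(12345);  // placeholder OSC port
  }

  void update() {
    while (receiver.hasWaitingMessages()) {
      ofxOscMessage m;
      receiver.getNextMessage(m);
      if (m.getAddress() == "/player/load") {
        video.load(m.getArgAsString(0));  // path to the video file
        video.play();
      } else if (m.getAddress() == "/player/stop") {
        video.stop();
      }
    }
    video.update();
  }

  void draw() {
    video.draw(0, 0, ofGetWidth(), ofGetHeight());
  }
};

int main() {
  ofSetupOpenGL(1280, 720, OF_WINDOW);
  ofRunApp(new ofApp());
}
```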
Chataigne
The overall logic of the performance and the communication between objects and software was controlled with the open-source software Chataigne (even over the six-month fellowship we couldn't get the pronunciation right!). Chataigne collects all incoming sensor data and then triggers or passes these values on to the audio and video software. The great thing about this software is that you can build your logic in a node-based state machine as well as in timelines.
Ableton
For audio playback we used Ableton Live because of its “Session View”. To remote control Ableton we used this plugin.
Unity
We used the game engine Unity to create an interactive map. At one point in the performance a representation of an object was able to move over a world map to uncover past events in different cities. Unity’s 2D engine made it easy to build a flat plane with an image of the world map, and much like in a top-down RPG, a player representing the object could explore the map. We used this Unity library to communicate via OSC signals with the Chataigne software. That way, actions taken on the map could trigger outcomes in our physical setup, and in reverse, input to our program could be given using the ordinary objects in our room which had been outfitted with microcontrollers.
In our final presentation, for example, you could move the “player” over the map using the gyroscopic input as well as the pressure sensor we had installed inside a teddy bear. By tilting the teddy forward you could accelerate the player model on the map, and by pressing the belly of the teddy you could turn the player 90° clockwise. When the player collided with one of the relevant cities, marked by flags, the room atmosphere would change; in this new scene you could discover new stories and elements depending on which city you were in.
Because the monitor we used to display the Unity map was placed in a central spot of our finished demo, we decided to add some additional features to the map program, such as playing back video files and a black screen, both of which could also be triggered with OSC signals.
----
How to
Let’s walk you through a rough setup similar to the one we used in our final presentation.
Story I
Depending on how you usually approach this sort of project, you might already have a clear story in mind that the objects in your setup should tell. If that’s the case, the next steps should always serve the general story you want to tell. If you want to explore a science-fiction scenario, the room could be a landing pod on its way down from orbit, and the smart objects could be strange alien artifacts made from tissue that talk to your audience when tickled in the right spot. Or, when exploring a haunted living room, the audience might realize that ghosts of the past start to communicate with them when they interact with the objects that were dear to them while they were still alive. But if you do not have a narrative in mind yet, do not fret. Just start by following the next steps.
Room
For the room you should find an enclosed area that suits your general idea. This area should have a stable WiFi connection as well as access to enough power outlets for your technical setup. For a more immersive effect, place the computers that control the performance in a nearby room. Think about the placement of your smart objects. Where is the light coming from? Where is the sound coming from? How is your setup wired? Do you want to let the audience see the source of the sound and the wires, or do you want to hide most of the electronics? Where? Do you want some kind of projection? Where is the projector located? After you have found your room, it is time to acquire your:
Smart Objects
Our performance took place in a living-room-type area, so our smart objects were mainly things you would find in such a room: an armchair, a cup, a picture frame and so on. Building those objects was a trial-and-error process. For us it helped not to think too far outside the box when creating the objects. Think about what a person would reflexively do when prompted to interact with an object; after all, the audience is on its own during your performance. Let them sit in a chair, swing on a swing or step on a carpet. If you cannot come up with smart objects, you can take some inspiration from the ones we built, described earlier in this wiki. We mainly thought about how to connect an object with a sensor and sometimes an effector. Like:
Armchair + scale + speaker = bossa nova chair (a chair that plays relaxing bossa nova music when somebody sits on it)
But you could easily come up with more elaborate contraptions by adding more sensors and effectors to an object. We mainly used ESP32 microcontrollers and sent the data with the OSC protocol via WiFi to the computer running Chataigne.
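To keep the data streams apart, it helps to agree on a consistent OSC namespace early. As an illustration (these addresses are our hypothetical naming, not a required scheme), the objects of a room like ours could report themselves like this:

```
/cup/orientation   ff   pitch and roll in degrees
/cup/temperature   f    degrees Celsius
/teddy/pressed     i    1 on a squeeze of the belly
/chair/weight      f    kilograms from the load cells
/painting/moved    i    1 when the frame is lifted
```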
Computer Terminal
This is the core controlling unit of the performance. In our version it consisted of three computers: 1. a computer running the video player, 2. a computer running the Ableton Live session, and 3. a computer running the Unity game and the Chataigne session. Due to this multi-program setup a considerable amount of computing power is needed, especially for the computer running Unity and Chataigne. We already described the basic functionality of those programs. To put it briefly, the ESP chips send their signals to computer 3. You capture those signals by creating an OSC module within Chataigne. Now you can use the changing data to trigger actions within Chataigne’s state machine. With other OSC modules you can also send new OSC signals, either to ESPs that have some kind of effector or to the other two computers/programs, to trigger the video projection, input into the Unity program, or the sounds. You can get creative here: an action in some software you wrote could report back to Chataigne, which in turn could move motors in some of your smart objects. With this basic setup it is very easy to trigger all kinds of machinery or software with basically any input you could imagine.
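Sketched out (a rough reconstruction of our setup, not a normative layout), the signal flow looks like this:

```
smart objects (ESP32s with sensors/effectors)
            |  OSC over WiFi
            v
computer 3: Chataigne (logic) + Unity (map)
       |  OSC                     |  OSC
       v                          v
computer 1:                  computer 2:
video player --> projector   Ableton Live --> audio system

Chataigne additionally drives the DMX lights directly.
```

But you might also want some: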
Lights
We used some daisy-chained DMX lights to light our performance. That had the benefit that we could use all kinds of colorful light situations to set our scenes. Chataigne is able to communicate with DMX lights directly, but setting this up in a useful way is a little tricky. What worked for us was to build an always-active state in the state machine containing an always-active module for each separate light that sends a color value to the respective light. Then we created a custom variable group with a color variable for each of these modules, and the modules got their color from those variables. Now you can easily create presets for different light situations and switch between them by interpolating between the presets with a trigger somewhere in the state machine. Of course, you have to repeat that process for the brightness and any other effect you want to use.
Story II
Now that your setup is complete, you have to start building your story inside Chataigne. You can think of the states in the state machine as scenes in a theatre play. When you connect different states with a transition, only one of the connected scenes can be active at any time. This means your performance (if, like ours, it is fully automatic) will generally follow a very narrow, logical path, a bit like a flowchart. Think of it as a branching pathway. For example: you are in scene 1, “THE CAVE”; you could now go on to scene 2a, “THE HOUSE”, by opening the cupboard, or to scene 2b, “THE MOUNTAIN”, by sitting in the chair. The more branches you build, the bigger your story, so maybe you reconnect some branching paths later on (scene 3, “THE CASTLE”). Also think about what your performance does if nobody figures out how to advance the story. Is there a timer running in the background of every scene, giving out clues after some amount of time or even advancing the scene by itself? Is there a possibility for your audience to turn back on their path and choose another way? Is there some kind of visual indication of where they are in the story? Or is your performance more of a hub world with many interactions to explore rather than a linear branching path? You can find a lot of inspiration for computer-aided or rules-driven narrative by looking at computer games, escape rooms and narrative tabletop games.