type of project: fellow research project

published: 2022

by: Florencia Alonso & Naoto Hieda




GlitchMe3D (Flor & Naoto)

GlitchMe is a laboratory of witchcraft around the composition of scenic objects and the decomposition of algorithms, invoking a network of feedback loops. Digitality can become dreams moving and interacting in multiple spaces in non-linear time. Through feedback loops, we create windows that open new possibilities to narrate distorted and “glitched” worlds.

Part of the project is a performative installation; its components, such as projections of live-coded visuals, servo motors, scanned 3D models, strings aligned like the warp of a weaving, crumpled paper, laser-cut pieces, and algorithmic, reactive sound from speakers, not only exhibit traces of making but live together to create a landscape. Every day, the artist duo dismantles and rebuilds the installation, working with the dump: the storage. What is left over is important to use; the trace of something, in this sense, is a treasure for us. We collect trash and take 3D scans of spontaneous scenes that appear on the street, for example an abandoned bicycle hanging on a tree. We write code, we read and write texts, describe them in words on whiteboards, and sketch scenes with objects and lights on a possible stage.

This project is a collaboration by Florencia Alonso (Flor de Fuego), Naoto Hieda, Jorge Guevara, Olivia Jack and Annique Nahumury (Terror Kittens).

This process of research led us to questions about the relation between humans and machines: the absence and presence of the body, machines as an extension of it, and the many cameras, live streaming or simply recording, that create multiple points of view beyond the physical, scenic one.

We create scenic objects that dialogue with virtuality, reflecting or mirroring it. Paper constructions and projection mapping are built through code; we aim for a conversation between immaterial and material things. We try to move and interact with bodies that are physically present, but also with virtual bodies.

After two years of online collaboration, we started a fellowship at the Academy for Theater and Digitality in February 2022. Live coding, the niche practice of performing with programming on stage, is a relatively new practice in digital art. The artists push the limits of live coding by applying it to the entire creation process: the visuals, sounds, kinetic motor movements and lights are all live-coded, and the practice bleeds into drawing, weaving, printing and cutting. The practice manifests a decolonization from the productive, design-centered process of technology, namely hacker witchcraft. To practice hacker witchcraft means to alter what is given and to bring new usages or meanings to a reality that does not fit everyone. In this sense, working with glitch means finding beauty in what is not perfect and does not have to be. In literature, ‘magical realism’ means not mimesis in a story, but finding the magic in common things, something that is sometimes hard to see in daily life. Dreams work the opposite way: symbolism and objects from reality appear in a world that is not real.

Hydra is a live-codeable video synth and coding environment that runs directly in the browser. It is free and open source and made for beginners and experts alike. It was created by Olivia Jack; since 2018, Flor and Naoto have been using Hydra and actively publishing sketches and tutorials.
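Hydra sketches are written as chains of sources and transforms, for example `osc(10, 0.1, 1.2).rotate(0.5).kaleid(4).out()`. The real synth runs in the browser; the following is only a minimal mock of that fluent style (the `source` and `add` helpers are ours, not part of hydra-synth), showing how each call appends a step to a chain:

```javascript
// Minimal mock of Hydra's chainable style -- NOT the real hydra-synth
// library, just the shape of the API: each call records a step.
function source(name, ...args) {
  const chain = { steps: [[name, args]] };
  const add = (op) => (...a) => {
    chain.steps.push([op, a]);
    return api;
  };
  const api = {
    rotate: add('rotate'),
    kaleid: add('kaleid'),
    out: () => chain, // in Hydra, out() routes the chain to an output
  };
  return api;
}
const osc = (...args) => source('osc', ...args);

// Mirrors a typical sketch: osc(10, 0.1, 1.2).rotate(0.5).kaleid(4).out()
const sketch = osc(10, 0.1, 1.2).rotate(0.5).kaleid(4).out();
```

In the real Hydra, `osc` takes frequency, sync and offset, and `out()` routes the chain to an output buffer rendered on the canvas.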

Prior to the fellowship, the Hydra maintainers, including Flor and Naoto, received a grant from the Clinic for Open Source Arts to enhance accessibility, which flourished into interactive documentation and translations into Spanish, Arabic, Indonesian and Japanese.

The five-month fellowship started by extending our online practice of the “live-coding jam” into physical space. A coding jam is similar to a jam session in music: several coders play code as a visual or musical instrument; there is no score, though a jam might have a keyword or theme to follow. There are online tools such as Flok, Estuary and PixelJam for coding collaboratively. Flor and Naoto have been using such platforms since 2019 to code together and explore the potential of Hydra.

To bring a live-coding jam into a physical space, we started by using a projector to illuminate the space with patterns generated in real time. Instead of using a flat white screen for video projection, we built a landscape as scenography, drawing on Flor’s and Naoto’s skill sets in theater and projection mapping (first image). A quick way of building a space is to use paper; we have an abundance of packaging material and cardboard boxes, and we upcycled them. In the earlier jams, we used a structured-light technique to generate a mask (second image) of the space for accurate projection mapping, which can reveal layers of textures and colors, with the drawback of the time and effort required for calibration. Another drawback is that since the projection mask is precisely aligned to the objects, once the objects are moved, the projection-mapping effect is lost. In the later sessions, we preferred simple, high-contrast visual patterns, using the projectors as light sources, and instead crafted the space by moving the surfaces (third image).
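Structured-light scanning typically projects a sequence of binary stripe patterns, often Gray-coded so that adjacent projector columns differ in only one pattern; decoding the on/off sequence seen by each camera pixel recovers which projector column illuminates it. A minimal sketch of the pattern side only (our actual calibration pipeline is not shown here):

```javascript
// Gray code: adjacent integers differ in exactly one bit, so each
// camera pixel can decode its projector column from log2(width)
// projected stripe patterns without large errors at stripe edges.
const grayCode = (n) => n ^ (n >> 1);

// Pattern k for a projector of `width` columns: a column is lit (1)
// where bit k of its Gray code is set.
function stripePattern(width, k) {
  return Array.from({ length: width }, (_, x) => (grayCode(x) >> k) & 1);
}
```

For a 1920-column projector, 11 such patterns suffice to identify every column.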

Specific to coding in the physical space, we added two elements. First, we installed spotlights whose colors are controllable through the DMX protocol; we programmed the lights so that they can be seamlessly controlled from Flok, similar to the way Hydra is coded (repo). The second element is our bodies: as we coded, we noticed that our bodies as coders should be present in the space too. In various jams, we often coded inside the projection; at other times we danced to a video loop of a previous session to free ourselves from the computers and experiment with choreography, which benefited from the help of Jorge Guevara, who mentored our project as a choreographer.
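DMX512, the protocol mentioned above, transmits a start code followed by up to 512 one-byte channel values per universe. A minimal sketch of what a color setter for an RGB fixture boils down to (the fixture start address and channel layout are illustrative, not those of our rig, and the actual serial output is omitted):

```javascript
// A DMX512 universe frame: byte 0 is the start code (0 for ordinary
// dimmer data), bytes 1..512 are channel values. DMX channels are
// 1-indexed, which maps directly onto the byte positions.
function makeUniverse() {
  return new Uint8Array(513);
}

// Hypothetical RGB fixture occupying three consecutive channels
// starting at `startAddress`.
function setRGB(universe, startAddress, r, g, b) {
  universe[startAddress] = r;
  universe[startAddress + 1] = g;
  universe[startAddress + 2] = b;
  return universe;
}

const u = setRGB(makeUniverse(), 1, 255, 128, 0);
```

A Flok-side wrapper can then expose such setters in a chainable, Hydra-like syntax while streaming the universe to the lighting interface.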


We presented the project outside the academy on several occasions. At Dortmunder U, we collaborated with storyLab kiU (Fachhochschule Dortmund) to use their immersive dome, as described in the next section. At the Academy of Media Arts Cologne (KHM), Flor, Naoto and Jorge presented a performance in the frame of MidTermReview, organized by Christian Sievers. Unlike other jams and performances, we decided to use our bodies and torch lights in a dark room and apply choreographic techniques. At Time Window (Rotterdam, the Netherlands), we collaborated with Annique Nahumury (Terror Kittens) to experiment with scenography around projection and costumes. The black box in the basement was transformed in an afternoon using laser-cut material and prints that we brought, together with fabrics from Annique. During the performance, Flor and Naoto coded and performed while Annique cut out materials on the spot to create costumes. Projection screens were repurposed as costumes and vice versa, blending screens and bodies.


As a collaborative project with sound artist collective Ekheo (Aude Langlois and Belinda Sykora), we created a video that features Hydra sketches and various photos that we have been taking. The performance was presented in the frame of NO LAB at Gaîté Lyrique (Paris).


exMedia Lab at KHM invited us, in the frame of Science Kitchen, to give a live performance and an artist talk. We performed with the Hydra and Orca live-coding platforms and presented our process of upcycling materials and creating scenic objects. We received positive feedback and had a fruitful discussion about the transformation from concepts and observations into virtual and physical materials.

We collaborated with storyLab kiU (Fachhochschule Dortmund) to experiment with their immersive dome on the ground floor of Dortmunder U. As we had already established a process of collaborative jams for creating visuals, it was a natural transition to the dome; the only difference is that the screen is circular. Therefore, radial, circular or symmetric patterns are preferable to, for example, patterns with a square grid.


Beyond Hydra, we created a 3D scene with a dome-mapping technique, based on the dome-experiments repository, to generate a 3D perspective effect that can be controlled from and blended into Hydra and Flok. This way, the result is not only abstract patterns; the viewer can also feel movement relative to the virtual space.


Our mentor Olivia Jack developed LiveLab, a video-call platform for the performing arts, and she extended the platform to live code and remix each other’s video over the network. Participants in a session can share their video feeds, and anyone can remix the videos using embedded Hydra. We used the platform to perform in the dome, with each of the four of us, Flor, Naoto, Olivia and Jorge, on our own laptop. Since the system is decentralized, each of us could move freely in and out of the dome to experiment with our webcams while we collaboratively created content to be projected in the dome.


As a next step, we are creating an audiovisual piece to be exhibited at the U as a permanent installation. We are also planning to extend our reach to other domes and planetariums to exhibit our work with live coding.

Dump is a key concept of our work, not only as trash but also as an artifact of abundance. The virtual dump is an Airtable database, which we set up to collect all the physical and virtual artifacts as records. The database is published as a website. In the physical sense, we upcycle objects such as cardboard and furniture from a dump and repurpose them for the installation.


Q: Images of your whiteboard thought patterns, with notes to explain your working process during the fellowship?

A whiteboard is an essential element of our process; a series of experiments starts with a brainstorming session using a whiteboard and wraps up with a feedback round, taking notes on a whiteboard. The board “People started coding” (titled “Cleanup & Reset”) is from a feedback session after the open house. “People started coding” is a reaction to some visitors who played with the computer displayed in front of our lab space. The computer was meant to exhibit our interactive website, which showcases some of our sketches coded in a web editor. A visitor opened the editor and started editing the code, which was unexpected. It opened up the possibility that interaction does not always have to be intuitive and universal; a complex modality of interaction can be preferable depending on the target audience.

Q: Small examples of specific objects within your installation environment, with little descriptions that explain their role in your work?

Plotter: Cameo Silhouette is a 2-in-1 plotter that can draw on or cut paper given vectorized image data. We use the following repository to control the plotter with Inkscape.

The image is made with the following process. First, a Hydra sketch is created, featuring several elements such as the video feed of a webcam and a photogrammetry (3D scan) of a table. A snapshot of the sketch is fed into Illustrator to generate an outline (vectorization). Finally, the Cameo plots the lines onto a sheet of paper.
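The vectorization step ends with path data that the plotter driver (via Inkscape) can consume; as an illustration of the format, a traced outline can be wrapped into a minimal SVG like this (point coordinates and page size are made up):

```javascript
// Wrap a traced outline (points in millimetres) into a minimal SVG
// document; a plotter driver reads the <path> element. The page size
// defaults to A4 and is only illustrative.
function toSVG(points, widthMM = 210, heightMM = 297) {
  const d = points
    .map(([x, y], i) => `${i === 0 ? 'M' : 'L'}${x},${y}`)
    .join(' ');
  return (
    `<svg xmlns="http://www.w3.org/2000/svg" ` +
    `width="${widthMM}mm" height="${heightMM}mm">` +
    `<path d="${d}" fill="none" stroke="black"/></svg>`
  );
}

const svg = toSVG([[10, 10], [100, 10], [100, 100]]);
```

The `M`/`L` commands are standard SVG path syntax: move to the first point, then draw straight lines to the rest.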


Cursors: laser cutting is applied to acrylic. Similar to plotting, a laser cutter requires vectorized data. In this case, the data is the outline of a mouse cursor; again, the cursor is deformed and glitched using Hydra, and its outline is generated by Illustrator before being sent to the laser cutter.


Cut paper: the Cameo can be used for cutting paper, too, similar to laser cutting. This pattern is also generated with Hydra, but in this case from a basic Hydra pattern without external images or 3D models. We experimented with overlaying plotter-cut sheets of paper with different Hydra patterns; however, the paper is not rigid and the sheets tangle with each other, so we did not experiment further.


Servo motors: We used servo motors to move light objects, mostly laser-cut material. An Arduino is attached to a servo driver, and we did several experiments:

  1. control the servo motors in a sequence, i.e., in an automated fashion
  2. make the motors react to touch using a capacitive touch sensor
  3. connect a PC that sends the mouse position to the servo motors, using a mouse to move physical mouse cursors (see playfulness)
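The third experiment reduces to a linear map from the mouse position to a hobby-servo angle (typically 0 to 180 degrees), which the PC then sends to the Arduino over serial. A sketch of the mapping alone (the screen width is an assumption; serial I/O is omitted):

```javascript
// Map a horizontal mouse position to a hobby-servo angle (0-180
// degrees), clamped to the screen. The resulting angle would be sent
// to the Arduino's servo driver over serial (not shown).
function mouseToServoAngle(mouseX, screenWidth = 1920) {
  const clamped = Math.min(Math.max(mouseX, 0), screenWidth);
  return Math.round((clamped / screenWidth) * 180);
}
```

Clamping keeps positions that leave the screen edge from producing angles the servo cannot reach.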


Risograph from jam: the print is made with a Risograph, a type of printing invented by Riso Kagaku. Each color is printed layer by layer, and the overlapping inks mix with each other. Because the paper has to be fed back into the machine by hand after every iteration, the overlap of the different inks cannot be perfect; the Risograph thus makes an interesting offset, which can be seen as an analog glitch. We had access to a Risograph at KHM and printed several patterns. This pattern is from a screenshot of a Hydra/Flok jam (small image). We naively separated the red, green and blue channels of the screenshot to print with the pink, green and blue ink cartridges. Since the center is black, it did not appear as any color on the print; unlike an inkjet or laser printer, which adds more ink on black and no ink on white, the Risograph inverts the color. So we decided to add gold ink where the black region is in the original image, and it ended up as an interesting pattern on the photo.
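The naive separation can be sketched as follows: each Risograph layer simply takes one channel of the RGBA screenshot as its ink density, so a black pixel (0, 0, 0) gets 0 in every layer and receives no ink, which is exactly why the black center vanished and was later filled with gold (the drum names here are illustrative):

```javascript
// Naive separation: one Risograph layer per RGB channel of the
// screenshot, using the raw channel value as ink density. A black
// pixel yields 0 in every layer: no ink at all.
function separateChannels(rgbaPixels) {
  const layers = { pink: [], green: [], blue: [] };
  for (let i = 0; i < rgbaPixels.length; i += 4) {
    layers.pink.push(rgbaPixels[i]);      // red channel -> pink drum
    layers.green.push(rgbaPixels[i + 1]); // green channel -> green drum
    layers.blue.push(rgbaPixels[i + 2]);  // blue channel -> blue drum
  }
  return layers;
}

// Two pixels: pure red, then pure black.
const layers = separateChannels(
  new Uint8ClampedArray([255, 0, 0, 255, 0, 0, 0, 255])
);
```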


Risograph of osc pattern: “osc” is a primitive pattern of Hydra and an important motif for our project. As artist Christian Sievers says, it is a pattern of a repeating gradient without edges that suggests smooth transition. We printed it in black, pink, yellow, orange (pink + yellow) and gold, and used it to make a massive pattern by pasting the prints on walls and pillars.



Oscilloscope: similar to the laser cutter and plotter, an oscilloscope can draw vectorized images by controlling the trajectory of the electron beam. We tried two methods to draw images:

  1. xyscope by Ted Davis to seamlessly convert vectorized images into sound waves (large image)
  2. directly using SuperCollider with a 2-channel output to draw Lissajous shapes (small image)
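The SuperCollider method amounts to sending two sine waves to the scope's X and Y inputs: a frequency ratio a:b with a phase offset traces a Lissajous figure, and equal frequencies with a 90-degree offset trace a circle. Sample generation for the two channels can be sketched like this (the sample rate is illustrative):

```javascript
// Two sine channels: left drives X, right drives Y. Frequencies a and
// b in a small integer ratio trace a closed Lissajous figure.
function lissajous(a, b, phase, numSamples = 1024, sampleRate = 44100) {
  const samples = [];
  for (let n = 0; n < numSamples; n++) {
    const t = n / sampleRate;
    samples.push([
      Math.sin(2 * Math.PI * a * t + phase), // left channel -> X
      Math.sin(2 * Math.PI * b * t),         // right channel -> Y
    ]);
  }
  return samples;
}

// Equal frequencies, 90-degree phase offset: a circle.
const circle = lissajous(440, 440, Math.PI / 2);
```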


Fluffy sculptures: these sculptures are handcrafted by cutting strips of paper and sticking them onto a box, or by weaving them similarly to the Rya knot technique. They diffuse light in an unexpected way to create depth in the projection.


Raspberry Pi: a Raspberry Pi 4 can run Hydra in the browser, so we programmed it as a plug-and-play Hydra player showing the osc pattern. The code can be updated remotely using Flok.


Bubble: the children’s toy is attached to a DC motor, which is connected to a motor driver and an Arduino. It can be controlled by MIDI; in the end, we chained 4 drivers and motors with 2 modes each (forward and reverse), thus 8 MIDI channels, to control objects such as the bubble machine and the puppies (image follows).
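With 4 drivers and 2 directions each, a MIDI channel from 0 to 7 can address one (motor, direction) pair. One possible decoding, assuming even channels run a motor forward and odd channels reverse (our actual channel layout may differ):

```javascript
// 4 motor drivers x 2 directions = 8 (motor, direction) pairs, one
// per MIDI channel 0..7. Assumed layout: even channels = forward,
// odd channels = reverse.
function decodeChannel(midiChannel) {
  return {
    motor: Math.floor(midiChannel / 2), // 0..3
    direction: midiChannel % 2 === 0 ? 'forward' : 'reverse',
  };
}
```

A note-on message on a channel would then switch the corresponding driver output on in the decoded direction.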


Paper sculptures are essential for creating depth in the space while effectively diffusing light. We tore used office paper by hand and glued it onto a cardboard frame to create volume.


Turntable: a cardboard box has a stepper motor attached; the motor is controlled by a stepper motor driver and an Arduino Mega to rotate continuously with relatively high torque.
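Continuous rotation comes from repeating a coil-energizing sequence that the driver applies to the motor's two coils; the Arduino only advances a step index. A sketch of the classic full-step sequence for a bipolar stepper (the driver board normally handles this in hardware, so this is purely illustrative):

```javascript
// Classic full-step sequence for a bipolar stepper: the four states
// of the two coils (A and B), repeated forever to rotate continuously.
const FULL_STEP = [
  [1, 0, 1, 0], // A+ B+
  [0, 1, 1, 0], // A- B+
  [0, 1, 0, 1], // A- B-
  [1, 0, 0, 1], // A+ B-
];

// Coil state for an arbitrary (possibly negative) step index;
// stepping backwards through the sequence reverses the rotation.
function coilState(stepIndex) {
  return FULL_STEP[((stepIndex % 4) + 4) % 4];
}
```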

Q: Your thoughts on “playfulness” as a way of working?


Playfulness is a key concept for us in designing interaction. The goal of our interactions is not usability or productivity but to make objects that attract visitors and make them curious to interact. One example is the motorized puppy toys; whatever their purpose, every visitor’s attention is instantly caught by the puppies. They bark and walk in harmony; the toys are connected to an Arduino and a motor driver, controlled by MIDI signals from ORCA, an esoteric programming language for live coding.


Another example is an interactive cursor. The mouse-cursor-shaped objects are attached to servo motors, and they react to the movement of a mouse placed next to the sculpture. Thus, the visitor is not moving a virtual cursor as on a normal computer but moving cursors in real life. This idea came from the concept of the whole installation as a “browser in real life”.

Q: A description of what might come next for your project, after the fellowship? Or what elements of your project do you think could be researched further?

Our next step is to continue the project at a distance, as Naoto will be in Colombia for half a year and Flor will stay in Germany. This means we will collaborate the way we started, when Flor was in Argentina and Naoto was in Germany.

Another concrete step is to exhibit and perform in different contexts. One idea is to present the artifacts we built as an exhibition instead of the performative installation we are used to. Another idea is to create content for dome projection and propose the project to domes and planetariums.

Q: the 3d model of “inside” your space!

We scanned the scenes of some of the sessions using photogrammetry and dropped them into Hydra and the web space again:

  • projects/glitchme3d/start.txt
  • Last modified: 12.07.2022 11:07
  • by Naoto Hieda