type of project: fellow research project

published: 2022

by: Meredith Thomas




Machine Learning in Artistic Production (Meredith Thomas)

I believe firmly in the potential of artificial intelligence to create entirely new, intimate and transformative experiences. It holds the promise of realising the dreams of the early cyberneticists and opening up rich, intuitive mediation between human and machine, performer and audience, real spaces and virtual spaces. However, it is easy to become overwhelmed by the hype surrounding ML. The last decade has seen extremely rapid progress, leaving a gap in our intuition for what these new technologies are capable of and what their limitations are.

ML is a field in its infancy, and I know from long experience that research can be hard, time-consuming, data-hungry, costly, frustrating and often fruitless. The neural networks we train are mostly 'narrow', in the sense that they can tackle only well-defined problems within a very specific domain. These limitations need to be communicated to artists, yet they strongly motivate the direct involvement of performers, directors, choreographers and other creatives in ML research. The only people in a position to apply ML to performance will be performers and performing arts institutions themselves.

If ML is an empirical field, it is also deeply philosophical through its intimate relation to the wider field of AI, which directly considers the nature of intelligence itself. The efficacy of neural networks is underpinned by their ability to extract and abstract semantic representations from raw data. This is a process that rightly fascinates us, and I have never found a better environment than the performing arts in which to holistically explore the technical complexity and philosophical ambiguity of AI.

During my fellowship at [The Academy for Theatre and Digitality]( I embarked on a practice-led programme to explore a range of applications of ML in a series of creative projects. Many of these projects stem from existing collaborations with performers, choreographers, dancers, media artists and programmers.

Follow the links below to see documentation on the projects I worked on during the Fellowship.

  1. Melete
  2. Mortality
  3. Unhate
  4. Terpsichore
  5. ANA
  6. Dancing at the Edge of the World
  7. Carbon Rapture
  8. Humane Methods
  9. Six Scores

“Melete is an interactive video installation that tells a personal story about drawing, machine learning, data and failure. The installation documents a long-running project to build a dataset of drawings and experiment with machine learning algorithms. This piece is open to a single participant at a time and lasts 15 minutes. Drawing is a deeply intuitive, dare I say philosophical, exercise. A draftsman makes hundreds of decisions as she translates a scene in front of her into a series of lines on a page. Is the complexity and artistry of this process something a machine can hope to recreate?

Over a period of many years, I have explored, largely unsuccessfully, the potential for recurrent neural networks to work with lines and drawings. I believe it is important to capture drawings as series of lines, not as arrays of pixels. In this way, the essential nature and timing of each mark is better captured, allowing neural networks to better learn and abstract from the underlying process. At its most abstract, I am interested in how a relationship might develop between the machine and the artist if the machine is allowed to intervene interactively. But machine learning algorithms need data and time, and when no data exists, you have to generate and collect it yourself. As it turns out, that means lots of programming and lots of drawing. Perhaps there are lessons in the act of drawing itself we can meditate on whilst we diligently generate the data?”
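The line-based representation described above can be sketched as a sequence of pen movements, similar in spirit to the (Δx, Δy, pen-state) format popularised by Sketch-RNN. The function below is illustrative only, not the project's actual code.

```python
# Encode a drawing as a sequence of pen movements rather than pixels.
# Each step is (dx, dy, pen_down): relative motion plus pen state,
# which preserves the order and timing of marks for a recurrent model.

def strokes_to_sequence(strokes):
    """Convert a list of strokes (each a list of (x, y) points)
    into a flat sequence of (dx, dy, pen_down) steps."""
    sequence = []
    prev = (0.0, 0.0)
    for stroke in strokes:
        for i, (x, y) in enumerate(stroke):
            dx, dy = x - prev[0], y - prev[1]
            pen_down = 1 if i > 0 else 0  # the pen lifts between strokes
            sequence.append((dx, dy, pen_down))
            prev = (x, y)
    return sequence

# A drawing made of two short strokes:
drawing = [[(0, 0), (1, 0), (2, 1)], [(2, 3), (3, 3)]]
seq = strokes_to_sequence(drawing)
```

A recurrent network trained on such sequences predicts the next movement from the marks so far, which is what makes interactive intervention mid-drawing conceivable.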

Esther de Bruijn (alias Studio Vodka, Alyster) is a creative technologist, 3D artist, singer, VJ, streamer, all-round multipotentialite and old friend. I saw my collaboration with her first and foremost as an opportunity to have fun and push the boundaries of what was visually possible with AI-generated imagery.

We picked a track from GIF's album [Better Than Yesterday]( and set ourselves a challenge of making a music video for the track [Mortality](, mixing machine learning techniques with computer generated imagery to weave together a visual story.

“Emergence is a mysterious phenomenon, fundamental to all living processes, whereby systems with simple components interact to produce unexpectedly complex behaviors; atoms form molecules, molecules form single-celled life and cells form complex social organisms, which in turn comprise our globe-spanning civilization.

'Cellular automata' are a traditional model for studying emergence. A common formulation, named the Game of Life[1], shows that by writing very simple rules to govern the behavior of elements on a grid, complex patterns of behavior such as locomotion, predation and reproduction can be observed. Neural cellular automata (NCA)[2] are a recent invention that take this concept one step further by incorporating a neural network and enabling the cells to learn their own rules, such that they reproduce an arbitrary emergent pattern.
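The rules of the Game of Life are simple enough to sketch in a few lines. This minimal implementation tracks only the set of live cells; starting it from a 'blinker' shows the oscillating behaviour the rules give rise to.

```python
from collections import Counter

# Conway's Game of Life: each cell lives or dies based only on its
# eight nearest neighbours, yet gliders, oscillators and even
# self-reproducing structures emerge from two simple rules.

def life_step(grid):
    """Apply one update to a set of live (x, y) cells."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in grid
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        # Birth: exactly 3 neighbours. Survival: a live cell with 2 or 3.
        if n == 3 or (n == 2 and cell in grid)
    }

# A 'blinker' oscillates between a horizontal and a vertical bar:
blinker = {(0, 1), (1, 1), (2, 1)}
```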

For the production, we used 'neural styles' extracted from images of nature as targets to train neural cellular automata over millions of simulations. As the cells grow, despite being unable to perceive anything other than their very nearest neighbors, they learn to coordinate their actions to form large scale natural formations of roots, leaves and flowers.
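The shape of a neural cellular automaton update can be sketched as follows. This is a toy with random weights, not the trained models used for the production: each cell holds a state vector, perceives only its immediate neighbours through fixed Sobel-like filters, and updates itself with a small network applied identically everywhere.

```python
import numpy as np

# A minimal neural cellular automaton update step (illustrative only).
rng = np.random.default_rng(0)
H, W, C = 16, 16, 8            # grid size and channels per cell
state = np.zeros((H, W, C))
state[H // 2, W // 2] = 1.0    # seed a single living cell

def shift(a, dy, dx):
    """Shift the grid so each cell sees a neighbour's state."""
    return np.roll(np.roll(a, dy, axis=0), dx, axis=1)

def perceive(s):
    """Stack each cell's own state with neighbour gradients."""
    gx = (2 * (shift(s, 0, 1) - shift(s, 0, -1))
          + shift(s, 1, 1) - shift(s, 1, -1)
          + shift(s, -1, 1) - shift(s, -1, -1))
    gy = (2 * (shift(s, 1, 0) - shift(s, -1, 0))
          + shift(s, 1, 1) - shift(s, -1, 1)
          + shift(s, 1, -1) - shift(s, -1, -1))
    return np.concatenate([s, gx, gy], axis=-1)

# A tiny per-cell 'network': one linear layer with random weights.
# In training, these weights would be optimised so the grown pattern
# matches a target (here, the neural styles extracted from nature).
W1 = rng.normal(0, 0.1, size=(3 * C, C))

def step(s):
    update = perceive(s) @ W1
    # Stochastic updates: each cell fires independently half the time.
    fire = rng.random((H, W, 1)) < 0.5
    return s + update * fire

state = step(state)
```

Because perception reaches only one cell in each direction, any large-scale structure that appears must be coordinated purely through local interactions, which is exactly the emergence the piece explores.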

This video installation collates experimental results generated ‘behind the scenes’, while training neural cellular automata for the production of UNHATE Experience, which was installed during MWC Barcelona 2022. The sequences each represent models at set points during millions of training cycles.

The ‘mindful’ experience was inspired by the notion that life will always emerge to win over hate. Corals grow to encrust the hulls of sunken battleships, roots crawl into the cracks of abandoned bunkers and forests slowly cover lands scarred by violence. The hate speech becomes the environment in which the digital organisms survive and flourish, finally coming to overgrow the hate one pixel at a time.

UNHATE Experience by Deutsche Telekom, produced by Elastique.”

[1] Gardner, M. (1970). Mathematical Games: The Fantastic Combinations of John Conway's New Solitaire Game "Life". Scientific American, 223, 120–123.

[2] Mordvintsev, A., et al. (2020). "Thread: Differentiable Self-organizing Systems". Distill.

A personal project to ease data collection and inference at the edge for performance. A common requirement for performance projects involving technology is for cameras or other sensors to collect or stream data of performers in a space.

I began prototyping an experience for interactive storytelling using cutting-edge natural language processing.

ANA is an empathic, AI-based interaction system for generating encounter experiences through an act of collaborative, iterative improvisation of stories. The system can be staged as a single-person walk-in installation in the form of a photo booth. The production focus is on creating a generalisable software system that can be demonstrated in different performance and installation contexts.

The interaction with a visitor is based on the continuous recognition of the visitor's emotional state through verbal and non-verbal cues. At the same time, the system maintains its own affective state, simulated using cognitive models of human affect. Part of this state is the mood of the system, which on the one hand reacts to the recognised emotions of the visitor and on the other hand influences the way the system continues the co-improvised story. In this way, the affect of the system can also be experienced by the visitor, and an emotional feedback loop is created.
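As a deliberately crude illustration of such a feedback loop (not ANA's actual affect model), one might reduce mood to a single valence value that drifts toward the visitor's perceived emotion and in turn biases the tone of the next story beat. All names and thresholds here are invented.

```python
# A simplified sketch of an emotional feedback loop: the system's
# mood drifts toward the visitor's perceived emotion, and the mood
# in turn biases how the story continues.

def update_mood(mood, perceived_valence, inertia=0.8):
    """Blend the system's current mood with the visitor's perceived
    emotional valence (-1 = negative, +1 = positive)."""
    return inertia * mood + (1 - inertia) * perceived_valence

def story_tone(mood):
    """Map the system's mood to a tone for the next story beat."""
    if mood > 0.3:
        return "hopeful"
    if mood < -0.3:
        return "melancholic"
    return "neutral"

# A visitor who stays cheerful gradually pulls the system's mood up:
mood = -0.5
for _ in range(10):
    mood = update_mood(mood, perceived_valence=0.9)
```

The inertia term is what makes the system's affect legible: its mood lags behind the visitor's, so the visitor can feel their own emotions being responded to rather than mirrored instantly.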

The aim of this interaction is to enable an encounter where mutual empathy between human and machine can be experienced through the jointly improvised narrative. This is also intended to convey a more positive vision of the future of AI as a responsive, empathetic collaborator.

[Dancing at the Edge of the World]( is an interdisciplinary collaboration with scientists, designers, creative coders and new media artists. REPLICA imagines and rehearses future societies with their folklores, and prototypes new tools and rituals.

During the fellowship, I took the time to continue this research project, further exploring and visualising patterns in movement data.

Featured in this project, and developed in collaboration with Mika Satomi, is a wearable capture technology: a collar with bend sensors that can be attached to 8 points on the body. The sensor data is processed and gathered locally on a Bela board attached to each item, and can be streamed in real time to a central unit over a wireless network. A speaker sits on the front of the collar, emitting sounds that react to the position of the sensors, to how the performer moves their body.

These sensor data are lists of numbers, which can be sonified or visualised in various ways, creating aural or visual cartographies of movement captured in flight. It’s a hyper-simplification of the reality of movement, yet it has something essential: it’s the beginning of a digital body-language, which the performer is able to connect to, understand and alter through her physicality.
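As an illustration of what turning such lists of numbers into sound might look like, the sketch below maps a frame of eight bend-sensor readings to a few synthesis parameters. The parameter names, ranges and sensor assignments are invented for illustration, not the actual Bela patch used in the project.

```python
# Map a frame of bend-sensor readings to sound parameters.

def normalise(raw, lo=0, hi=1023):
    """Scale a raw ADC reading into the 0..1 range."""
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

def frame_to_sound(readings):
    """Map 8 bend-sensor values to pitch, loudness and brightness."""
    values = [normalise(r) for r in readings]
    return {
        # Average flexion of the first four sensors drives pitch (Hz).
        "pitch_hz": 220 + 440 * sum(values[:4]) / 4,
        # Overall movement energy drives loudness.
        "amplitude": sum(values) / len(values),
        # The most-bent sensor drives filter brightness.
        "brightness": max(values),
    }

frame = [512, 300, 800, 100, 0, 1023, 400, 650]
params = frame_to_sound(frame)
```

Even a mapping this simple already gives the performer something to push against: each parameter answers to a nameable aspect of the body's posture, which is what makes the 'digital body-language' learnable.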

[John Rogers]( is an actor based in Galway, Ireland. He has a long-standing interest in integrating technology into his performance. During his visit in June 2020, we decided to focus on improvised performance with chatbots, in particular [GPT-3]( and [Replika]( The resulting scratch performance can be viewed above.

I began developing new material with Fronte Vacuo for their Humane Methods series. Marco Donnarumma and Andrea Familari are previous fellows at the Academy and have documented Humane Methods more fully here.

  • projects/machinelearninginartisticproduction/start.txt
  • Last modified: 20.09.2022 17:30
  • by Meredith Thomas