type of project: fellow research project
published: 2021
by: Jiayun Zhuang & Edwin van Steenbergen
website(s): http://thefxke.com/
license(s): MIT
maintainer(s)/contact:
repository:
FXKE, A Post-truth Storytelling (Jiayun Zhuang & Edwin van Steenbergen)
Our project, entitled FXKE, is a performance about the post-truth digital environment active in our current attention economy, an economy that stimulates and prioritizes eyeball-grabbing content to keep users engaged. Through FXKE, we want to reflect upon the compulsive undermining of the infrastructure of truth (a post-truth symptom), and our modern-day condition of information overload.
We had two correlated creative impulses in creating this piece: 1. to simulate a post-truth digital media environment in which a wealth of content exhausts the attention of its recipients; 2. to channel the unbounded and unguarded generative energy of digital media into our performance, and to create a performative experience of real-time, algorithm-driven, rewards-optimized content generation.
Research Question(s)
Artistic Questions
- What digital theatrical instruments can we use to build up a performance piece whose process is "generated" in real time through the work of algorithms and human improvisation?
- How do we bring the audience into the content generation process and, in so doing, play with the logic of media engagement?
Technical Questions
- How do we work with GPT-3, OpenAI's language-generation model, and how do we build applications on the GPT-3 platform for the performance piece? (A minimal API sketch follows this list.)
- How do we use other applications (e.g., Vuo) to create real-time generative theatrics?
- How do we build a control system for FXKE via which the audience can engage with the performance digitally?
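As a starting point for the first technical question, the snippet below is a minimal sketch of a GPT-3 completion request, assuming the 2021-era openai Python package and an API key supplied via an environment variable; the prompt and parameters are illustrative, not the ones used in FXKE.

```python
# Minimal sketch of calling the GPT-3 Completions API (2021-era openai package).
# The prompt text and parameter values are illustrative, not those used in FXKE.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 engine available in 2021
    prompt="Write a short user guide for an AI system that re-imagines a crowd:\n",
    max_tokens=150,
    temperature=0.9,    # higher temperature for more inventive output
)

print(response.choices[0].text.strip())
```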
Development
February 2021
- Working on the concept of the project.
- Exploring Python and GPT-2 (we gained access to the GPT-3 API in late March); see the sketch after this block.
- Scraping social media data.
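The GPT-2 exploration mentioned above can be reproduced locally; a minimal sketch, assuming the Hugging Face transformers package (our assumption, the log does not name the tooling):

```python
# Sketch of exploring GPT-2 locally with the Hugging Face transformers library.
from transformers import pipeline

# Downloads the public 124M-parameter GPT-2 checkpoint on first run.
generator = pipeline("text-generation", model="gpt2")

samples = generator(
    "Breaking news:",        # illustrative prompt, not from the performance
    max_length=60,
    num_return_sequences=3,
    do_sample=True,
)

for s in samples:
    print(s["generated_text"])
    print("---")
```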
March 2021
- Developing dramaturgy and narrative elements for the performance.
- Learning Vuo; creating text and audio visualizations with Vuo.
- Exploring data visualization (e.g., Tableau, Flourish).
April 2021
- Exploring GPT-3 and its applications; using it to create textual content for the performance.
- Building a website for FXKE (https://thefxke.com); creating textual and visual content for the website with the assistance of GPT-3.
- Working with Nanne Verheij to develop a polling system (a hypothetical sketch of such a backend follows this block).
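The polling system is described further under "How to" below. A minimal in-memory poll backend along these lines might look like the following Flask sketch; the route names and poll options are hypothetical, not taken from the actual FXKE system.

```python
# Hypothetical minimal polling backend (Flask, in-memory storage).
# Route names and poll options are illustrative; the real FXKE system differs.
from collections import Counter
from flask import Flask, jsonify, request

app = Flask(__name__)
votes = Counter()
OPTIONS = {"real", "fake"}   # hypothetical poll choices

@app.route("/vote", methods=["POST"])
def vote():
    """Called from the audience page."""
    choice = (request.get_json(silent=True) or {}).get("choice")
    if choice not in OPTIONS:
        return jsonify(error="unknown option"), 400
    votes[choice] += 1
    return jsonify(ok=True)

@app.route("/results")
def results():
    """Polled by the screen page and the performers' page."""
    return jsonify(dict(votes))

if __name__ == "__main__":
    app.run(port=5000)
```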
May 2021
- Building a chatbot based on GPT-3 (a sketch of such a loop follows this block).
- Developing the narrative of the performance further, and tailoring content based on technical capacities.
- Working with Sibe Kokke to develop the interactive components of the website for FXKE.
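A minimal sketch of a GPT-3 chatbot loop, assuming the 2021-era Completions endpoint and an illustrative persona prompt (not the one used in FXKE):

```python
# Sketch of a GPT-3 chatbot loop using the 2021-era Completions endpoint.
# The persona text is illustrative, not the prompt used in FXKE.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

history = "The following is a conversation with FXKE, a system that re-imagines its users.\n"

while True:
    user = input("You: ")
    history += f"Human: {user}\nFXKE:"
    completion = openai.Completion.create(
        engine="davinci",
        prompt=history,
        max_tokens=100,
        temperature=0.8,
        stop=["Human:"],   # stop before the model writes the next user turn
    )
    reply = completion.choices[0].text.strip()
    print("FXKE:", reply)
    history += f" {reply}\n"
```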
June 2021
- Researching APIs for image generation.
- Working with Nanne Verheij to engineer a control system for operating the performance.
- Planning and implementing technical setup for the final presentation.
- Editing visuals; creating music and sound.
Project Technical Setup
- 2 projection screens and 1 hologauze
- 3 projectors
- 1 camera (for taking snapshots of the audience)
- 1 Mackie sound mixer
- 2 Electro-Voice speakers
- 1 MacPro
- Vuo Pro, Syphon, QLab, NewTek NDI, Audacity
- Visual Studio
- GPT-3
- 1 5G WLAN Router
- A website for the performance
How to
- Fictionalize an AI-powered crowd-reimagining system, FXKE. The performance piece is then a series of user guides explaining the system's features, and it brings the users (the audience) into the content generation process. The performance script (the user guides issued by the central system) and the content of the performance website (the website acts as the system's subsidiary facility) are generated by GPT-3 from prompts.
- Build the website for the performance (JavaScript, Python, and HTML), which hosts a polling system and a GPT-3-powered chatbot. The audience's chat data is collected and used for real-time generative animation. The polling system consists of four HTML pages: a controlling page, an audience page, a page for the screens, and a page for the performers.
- Create visual and sound compositions in Vuo that respond to the real-time text generated by different GPT-3 API calls, producing new content for the performance. This content is sent through Syphon to QLab and projected onto the screens (a hypothetical Python-to-Vuo bridge is sketched below).
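Vuo compositions are built as node graphs rather than code, but real-time text can be pushed into a running composition from outside, for example over OSC, which Vuo supports. The sketch below shows one hypothetical Python bridge using the python-osc package; the OSC address, port, and prompt are assumptions, not the actual FXKE wiring (which also routed video via Syphon and QLab).

```python
# Hypothetical bridge: push newly generated GPT-3 text into a Vuo composition over OSC.
# The OSC address ("/fxke/text") and port are illustrative, not the actual FXKE setup.
import os
import time
import openai
from pythonosc.udp_client import SimpleUDPClient

openai.api_key = os.environ["OPENAI_API_KEY"]
client = SimpleUDPClient("127.0.0.1", 9000)   # Vuo composition listening on this port

PROMPT = "Generate a one-line caption for a crowd that is being re-imagined:\n"

while True:
    completion = openai.Completion.create(
        engine="davinci", prompt=PROMPT, max_tokens=30, temperature=0.9
    )
    text = completion.choices[0].text.strip()
    line = text.splitlines()[0] if text else ""
    client.send_message("/fxke/text", line)   # picked up by an OSC-receive node in Vuo
    time.sleep(5)                             # pace the generation loop
```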