
type of project: fellow research project

published: 2020

by: Lukas Rehm

website(s): lukasrehm.net

maintainer(s)/contact: mail (at) lukasrehm.net

Sensory Airplanes

Sensory Airplanes is a project by artist and composer Lukas Rehm, tracing the emergence and consequences of posthuman self-conceptions along the gradients between the hard sciences, mythology, the imaginary and the affective. Efforts by physicists to emulate biological neural networks on electronic substrates are paralleled by philosophical advocacy for remodelling thinking and knowledge production itself, based on new insights into non-human agency. The surplus of meaning that both confront and consequently produce is set in relation to earlier societal shifts introduced by new communication media, as well as to a recursive take on the ancient narrative of the invention of Mnemotechnics. The work is realized as an environment composition in multichannel moving image, 4D-Sound and animated light.

The principle underlying the combination of staged scenes with observations of biological agents, seemingly inanimate landscapes, natural and technological artifacts, and prototypes of neuromorphic machines is formulated in the work's core conceptual features: “living structures, as complex systems that change with time, can be considered as computers” is a quote in the exposition of the work by Karlheinz Meier, lead scientist of the Human Brain Project, a large-scale program researching and reverse-engineering biological intelligence with the help of machine models. Consequential equations, simulacra, twins and the exemplary are also subject to Fahim Amir's critique of anthropocentrism and western, image-based modes of thought.

The mythological twin pair Castor and Pollux – one mortal and one immortal, until both transcend into an eternal existence among the stars – can be considered a model for the approximately different progressively turning into the indifferent. The twins appear in the adapted narrative around Simonides of Keos, the inventor of Mnemotechnics, who sings in praise at the banquet of a nobleman. The scene is reimagined as a simulation in which Simonides' songs are rewritten by an artificial neural network trained on databases about the twins in mythology and star constellations, as well as on techno-utopian theory from Silicon Valley. In this segment the score is based on current research on ancient Greek harmonies, whereas in other parts fragments of Jean-Philippe Rameau's opera Castor et Pollux appear, digitally processed or reimagined by machine intelligence.

The technological transhumanism, represented by the fatalistic nobleman running the simulation, is distinguished from its twin sister, the critical transhumanism of philosopher Janina Loh, which exemplifies the fallacies of the first in correspondence with Simonides' own destiny: his enhanced memory enables him to identify the disfigured victims after the fatal end of the banquet. Simonides' willing but demanding approach to the trauma of the catastrophe points back to the surplus of meaning introduced into the work by the sociologist Dirk Baecker: any new form of information processing, including the ones we operate with when we think, introduces new sensitivities.

The project originates in the music theatre piece “Castor&&Pollux” by Lukas Rehm (moving image, spatial sound composition and stage) with Lisa Charlotte Friederich (stage direction and libretto) and Jim Igor Kallenberg (dramaturgy), produced by and premiered at Heidelberger Frühling in 2019. The interviews with Fahim Amir, Janina Loh and Dirk Baecker were conducted by Jim Igor Kallenberg and directed by Lukas Rehm. Tilmann Rödiger acted as the project's director of photography.

The primary goal of the fellowship is dedicated work on the footage and its extension towards an installative format, as well as a virtual edition, addressing questions of both accessibility and representation formats for an artwork based on spatial media.

The research includes:

  • familiarization with the new edition of the 4DSound software and its features
  • examination of the adaptation possibilities of 4DSound compositions in different scenarios and media (different speaker systems and arrangements, realtime application in theatre spaces with existing audio hardware, digital realtime experiences in game engines/VR/hubs)
  • research on new dataset-based image processing tools / generative visual machine learning
  • research on realtime image recognition tools / analytical machine learning and their integration into fixed and interactive media
  • technical and artistic research on the “virtual” realizability of the work and on the qualities of digital three-dimensional editions for documentation

September + October 2020:

  • research on machine-learning-based image processing
  • spectator tracking, integration of realtime image recognition
  • composition of new music
  • editing of the five-channel video version

November 2020:

  • implementation of the project at the Spatial Sound Institute Budapest
  • week 1: research on new features
  • weeks 2+3: spatial sound arrangement and composition
  • weeks 3+4: work with the five-channel projection, adapting the audio spatialization and the arrangement of the audience experience
  • week 4: documentation of the prototype + presentation to Spatial Sound Institute staff

December 2020:

  • documentation
  • continued work on the moving-image edit with ambisonic mixdown (a minimal encoding sketch follows below)
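
As a point of reference for the ambisonic mixdown, here is a minimal first-order encoding sketch in Python. The ACN channel order (W, Y, Z, X) and SN3D normalization are assumptions for illustration, not documented properties of the project's mixdown:

  import numpy as np

  def encode_foa(mono, azimuth_deg, elevation_deg=0.0):
      """Encode a mono signal into first-order ambisonics (B-format).

      Assumes ACN channel order (W, Y, Z, X) with SN3D normalization;
      mono is a 1-D numpy array, angles are in degrees.
      """
      az = np.radians(azimuth_deg)
      el = np.radians(elevation_deg)
      w = mono * 1.0                       # omnidirectional component
      y = mono * np.sin(az) * np.cos(el)   # left/right
      z = mono * np.sin(el)                # up/down
      x = mono * np.cos(az) * np.cos(el)   # front/back
      return np.stack([w, y, z, x])

  # Example: one second of a 440 Hz tone at 48 kHz, panned 90 degrees left.
  tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
  bformat = encode_foa(tone, azimuth_deg=90.0)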

January 2021:

  • installative prototype research in the stage-like labs with 8-channel audio and 5 projectors
  • research on 4DSound without omnidirectional speakers, using different speaker arrays
  • execution of machine-learning-based image generation/processing for the ending
  • evaluation of the integration of live image recognition and position tracking for the ending
  • transformation into an online 3D installation
  • documentation

Equipment:

  • 4D-Sound system (incl. server, amps, omnidirectional speaker array, multiple subs, transducers)
  • laptop with Ableton Live + Max for Live / map
  • Dante Virtual Soundcard
  • Ethernet switch + cabling
  • media server with 5 FHD video outputs (e.g. PC with RTX 2080 Ti + USB-3-to-HDMI adapter + Resolume Arena, MIDI-OX to observe MIDI traffic; a scriptable monitoring sketch follows after this list)
  • 5 projectors and screens + cabling
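
MIDI-OX is Windows-only; as a scriptable way to perform the same traffic-observation step, here is a minimal sketch using the Python mido library. mido is an assumption for illustration, not part of the listed setup, and needs the python-rtmidi backend:

  import time
  import mido  # pip install mido python-rtmidi

  # List the available MIDI input ports, e.g. the port carrying
  # cues from Ableton Live to Resolume Arena.
  ports = mido.get_input_names()
  print("MIDI inputs:", ports)

  # Log every incoming message with a timestamp, mirroring what
  # MIDI-OX shows in its monitor window.
  with mido.open_input(ports[0]) as port:
      for msg in port:
          print(time.strftime("%H:%M:%S"), msg)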

Space:

5-channel video installation, roughly circular arrangement with sufficient distance to enable distant viewing without projector obstruction.
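
A minimal placement sketch for the roughly circular arrangement, assuming five screens evenly spaced around the audience; the 6 m radius is an illustrative value, not a measurement from the installation:

  import math

  def screen_positions(n_screens=5, radius_m=6.0):
      """Evenly space n screens on a circle around the audience centre.

      Returns (angle in degrees, x, y) per screen; the radius is an
      assumed room dimension, not a value from the project.
      """
      positions = []
      for i in range(n_screens):
          theta = 2 * math.pi * i / n_screens
          positions.append((math.degrees(theta),
                            radius_m * math.cos(theta),
                            radius_m * math.sin(theta)))
      return positions

  for angle, x, y in screen_positions():
      print(f"{angle:6.1f} deg  x={x:+5.2f} m  y={y:+5.2f} m")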

Time: 42 minutes

  • 01 Expo
  • 02 Surplus
  • 03 Styx
  • 04 IO
  • 05 Dancer
  • 06 Assembler
  • 07 Speed of Light
  • 08 Disaster Funding
  • 09 Tentakel

Machine Intelligence Applications:

  • realtime image recognition via “Darknet YOLO”, directly compatible with ZED stereoscopic cameras to read depth information (of the audience); a detection sketch follows after this list
  • text generation via “char-rnn”, a recurrent neural network with a character-based training method; roughly 1 MB of raw text is necessary, and with less, artificially enlarging the dataset by copy-pasting delivers considerably better results (an enlargement sketch follows after this list)
  • MIDI note generation via “TensorFlow Magenta”; interesting results when training with a small dataset (overfitting) – it sounds like someone rehearsing into frustration and ending in a quick abstract improvisation
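
A minimal sketch of the depth-aware audience detection, assuming the ZED Python SDK (pyzed) and OpenCV's DNN module with standard YOLOv3 files; the file names are placeholders, and the project's actual Darknet-based pipeline may differ:

  import cv2
  import numpy as np
  import pyzed.sl as sl

  # Open the ZED stereoscopic camera with depth measurement enabled.
  zed = sl.Camera()
  init = sl.InitParameters()
  init.depth_mode = sl.DEPTH_MODE.ULTRA
  if zed.open(init) != sl.ERROR_CODE.SUCCESS:
      raise RuntimeError("ZED camera not found")

  # Load YOLO through OpenCV's DNN module (file names are placeholders).
  net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
  out_layers = net.getUnconnectedOutLayersNames()

  image, depth = sl.Mat(), sl.Mat()
  if zed.grab(sl.RuntimeParameters()) == sl.ERROR_CODE.SUCCESS:
      zed.retrieve_image(image, sl.VIEW.LEFT)
      zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
      frame = cv2.cvtColor(image.get_data(), cv2.COLOR_BGRA2BGR)
      h, w = frame.shape[:2]

      blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                   swapRB=True, crop=False)
      net.setInput(blob)
      for out in net.forward(out_layers):
          for det in out:
              scores = det[5:]
              # COCO class 0 is "person": keep confident detections only.
              if np.argmax(scores) == 0 and scores[0] > 0.5:
                  cx, cy = int(det[0] * w), int(det[1] * h)
                  # Depth in metres; may be NaN where no depth is available.
                  err, dist = depth.get_value(cx, cy)
                  print(f"person at ({cx}, {cy}), {dist:.2f} m away")
  zed.close()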
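
And a sketch of the copy-paste enlargement mentioned for char-rnn: duplicating a small corpus until it reaches the rough 1 MB minimum for character-level training (file names are placeholders):

  # Duplicate a small text corpus until it reaches ~1 MB, the rough
  # minimum noted above for character-level RNN training.
  TARGET_BYTES = 1_000_000

  with open("corpus.txt", encoding="utf-8") as f:  # placeholder file name
      text = f.read()

  enlarged = text
  while len(enlarged.encode("utf-8")) < TARGET_BYTES:
      enlarged += "\n" + text  # naive copy-paste enlargement

  with open("corpus_enlarged.txt", "w", encoding="utf-8") as f:
      f.write(enlarged)

  print(f"{len(text):,} -> {len(enlarged):,} characters")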