Our understanding of space has changed dramatically in recent decades. Digitization has created new spaces, eliminated distances and called our sense of reality into question. Hearing is an extremely immersive sense which today, alongside seeing, is constantly being expanded and translated through technology. In the project she developed at ATD, Ariane Trümper is concerned with the perception of sound and the experience of digital space. She works with a 3D sound space, created in the game engine Unity, which interactively integrates the positions and movements of the listeners. In this spatial, virtual sound scenario, Ariane investigates what stories such a room tells.

Ariane Trümper is a scenographer, artist and researcher living in Rotterdam. She works at the interfaces between media art, performance and spatial design, researching performative processes and perception as filtered through bodies and technologies. Her work deals with the transitions between physical and non-physical spaces and the experience of those. Ariane holds an MFA in Scenography, is a member of the editorial team of the Dutch group Platform-Scenography, and is a pre-PhD candidate and lecturer on the MA Scenography at the University of the Arts in Utrecht. http://www.arianetrümper.de

Hyperlink to the project startpage of WHAT WE WANT TO HEAR

Summing up the fellowship

Looking back on the residency, there have been a few big challenges (besides the bureaucracy of cultural funding being a pain in the ass, and things going way slower than everybody would like ;). The main challenges:

1. DECIDING ON ONE PATH TO GO WITH. Meaning: which technology to use and research further into, without knowing for sure whether it would be successful. For reasons of sustainability for my own practice, I wanted to keep the costs for technology as low as possible and the set-up focused around one computer (the heart of everything). I ended up with two main options: A. Run Unity on one computer and find a solution for getting different audio outputs out of this game engine, which only allows one microphone per scene. B. Design a multiplayer set-up that uses as many computers as participants and works with a network and a server-client architecture; this would have meant more technology and potentially more elements in the set-up chain that could cause problems, and it seemed harder for one person to oversee. (C. Would have been running several instances of the same program on one computer, but this promised to be fiddly and a hassle, so I discarded it relatively early.) After at several points even considering swapping software completely, I decided to go for solution A, as this seemed the most suitable path for my practice, since I often run my performative installations as a 'one-woman show'.

2. FINDING THE RIGHT PERSON TO WORK WITH. In this world of technology and digitalization there are so many specializations and so many different things to know and not to know. Being rather an autodidact myself, someone who knows a bit of many things, I was looking for a programmer, in this case for Unity, who is genuinely interested not just in doing what he or she already knows, but in working within an artistic research process. Meaning: a situation where you don't just get a task to finish, but are invited to think along, to have your own ideas and to discuss them. I started talking to as many people as possible, also in order to decide which technology to go with, and I basically got as many answers as I asked questions. Many said 'it should be possible', which often means 'I actually don't know if it is possible'. Even though this was an overwhelming and sometimes frustrating process, it was necessary and worth it, and in December I found a collaborator, Causa Creation from Vienna. We worked with each other only remotely.

3. CREATING A SOLUTION TO GET 4 DIFFERENT BINAURAL STEREO SIGNALS OUT OF UNITY, aka USING SOFTWARE IN WAYS IT'S NOT MEANT TO BE USED. This includes being able to connect an external audio interface and address its different outputs from Unity. This problem is related to point 1, but it deserves its own point, as it was one of the crucial challenges. Unity is, at its core, a game engine, not live performance software. It is meant for making games that people listen to over their speakers or headphones, which basically means only one audio 'signal' is needed per instance running on a computer. This is what we wanted to hack. The basic assumption is: if we can only have one microphone per scene in Unity (this is pre-given), all the sounds have to go through this microphone. The question is: how do we then get different audio experiences out of Unity? In collaboration mainly with Chris from Causa Creation, we came up with the solution that every 'original' audio source I place in a scene is copied for each participant, so each participant gets their own copies of the audio sources. These copies are then positioned around the single central microphone at the same distance and rotation the participant has to the original audio sources. To explain this further: every participant has a Vive tracker on their headphones that picks up their location and rotation, and these data get sent into Unity. I thus know how far away and at which angle someone is standing relative to an original (virtual) audio source. This relation is transferred to the copied audio sources relative to the one central Unity microphone, placed in the middle of the scene. The copied audio sources are then routed to a specific mixer (one mixer per participant) and from there, with the help of the purchased asset AudioStream, routed to the audio interface. This solution of course means that if I place 8 original audio sources in a scene and have, say, 4 participants, there will be 32 copied audio sources running as well… so far, performance is fine with that.
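The geometry behind this trick can be sketched as follows: for each participant, take the offset from the participant to an original source, rotate it into the participant's local frame (undoing their head rotation), and re-apply that offset around the fixed central microphone. This is a minimal, illustrative 2D (top-down, yaw-only) sketch in Python, not the project's actual Unity code; all names are hypothetical.

```python
import math

def copied_source_position(mic_pos, participant_pos, participant_yaw, source_pos):
    """Place a per-participant copy of an audio source around the single
    central microphone so that its distance and bearing relative to the
    microphone match the participant's distance and bearing to the
    original source. 2D sketch; yaw in radians. Illustrative only."""
    # Offset from the participant to the original source (world frame)
    dx = source_pos[0] - participant_pos[0]
    dy = source_pos[1] - participant_pos[1]
    # Rotate the offset into the participant's local frame, i.e. undo the
    # participant's head rotation, so the central (unrotated) microphone
    # "hears" the copy from the same relative angle
    cos_y, sin_y = math.cos(-participant_yaw), math.sin(-participant_yaw)
    local_dx = dx * cos_y - dy * sin_y
    local_dy = dx * sin_y + dy * cos_y
    # Re-apply that local offset around the central microphone
    return (mic_pos[0] + local_dx, mic_pos[1] + local_dy)
```

Because the transform only rotates and translates the offset, the copy always sits at exactly the same distance from the microphone as the participant is from the original source, which is what keeps the binaural rendering believable.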

4. STAYING WITH THE TROUBLE. It's a lovely statement from the techno-feminist Donna Haraway, which I – especially in our current times – consider an urgent mantra (even more in life than in making art). It means to stay with the problem, even if you don't have the solution yet. It took me until January to have my first beta test version running that included the main components of my project: 4 1/2 months into my fellowship… And of course, when you continue working on the project and add new components, new problems appear. I had a nicely running version and, just before I had to interrupt my fellowship due to Corona, faced a new audio problem after implementing an animation of the max/min distance of the sound sources. I am confident I can fix this problem, but it also reminded me that working with technology in non-'pre-fixed' ways means that 'trouble' is a constant companion. I guess that's the beauty of it: to continue figuring it out.

What's left to say: there are so many ideas, chances and possibilities floating around at the beginning of such a project, and many of them have to be discarded or at least put on hold along the way. This is always a struggle and a bit painful, but I believe it is necessary as well. It has been a valuable experience, with passionate and engaged people who are really trying to push the idea of theatre in Germany further: to stop thinking in historic, no-longer-current terms and to embrace new realities of life and of how our society works.


Public presentation of the research done during the last 5 months. I presented some example scenes of my set-up, accessible to 2 people equipped with headphones. I mainly showed two scenarios: 1.) 4 different audio sources (different songs) were virtually placed in the participants' movement area; 2.) the different tracks of Queen's 'Another One Bites the Dust' were spread over the space. By walking in the real space through the virtual audio sources spread over the area, participants can decide which song, or which part of a song, they want to hear, and at which intensity.
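The "which song, and at which intensity" behaviour comes from distance-based attenuation: each source is loud up close and fades out as you walk away. A minimal sketch of a linear rolloff curve between a min and max distance (illustrative values only; the installation's actual curves are configured in Unity, which also offers logarithmic rolloff):

```python
def linear_rolloff_gain(listener_pos, source_pos, min_dist=1.0, max_dist=10.0):
    """Gain between 0 and 1 for a sound source: full volume inside
    min_dist, silent beyond max_dist, linear fade in between.
    2D sketch with hypothetical parameter values."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= min_dist:
        return 1.0
    if dist >= max_dist:
        return 0.0
    # Linear fade from 1.0 at min_dist down to 0.0 at max_dist
    return 1.0 - (dist - min_dist) / (max_dist - min_dist)
```

With several sources spread over the floor, each participant's position yields a different mix of gains, which is exactly what lets them "walk through" the tracks of a song.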

What did I achieve? What failed?

First of all, I am pretty happy with how the installation is perceived. People immediately start moving in space and exploring the sound environment. They duck down, run faster or stop, testing the experience. The binaurally rendered sound is pretty convincing, and an audio source really does seem to 'float' in space, around which one can move. It was a good first run of the technical set-up; there are still more things to test, such as the implementation of 4 participants, the time-out problem with the trackers, the HMD (somehow unnecessary, but for now it still has to be present) and the implementation of a user interface and further control over the audio sources.

What am I planning for the extension of my fellowship research?

Besides working on improving the technical set-up listed above, I want to work on an aesthetic set-up as well: a more 'concrete' scene in which I compose the experience. At this step, I think it would be helpful to move away from the so far very production-oriented and technical focus of my research towards an artistic focus, which should also help improve the usability of the technical set-up in performance.

  • fellowships/2019-08/what_we_want_to_hear/start.txt
  • Last modified: 25.03.2020 17:39
  • by Ariane Trümper