Interview with Scottish improviser and sound artist Lauren Sarah Hayes

Photo by Tobias Feltus

Lauren Sarah Hayes is a Scottish electronic musician and sound artist who creates music based on improvisation and uncertainty. Working with a variety of electronic instruments and sounds as well as voice, she interacts with her hybrid analogue/digital setup through custom software and a game controller, which brings a sense of immediacy and tangibility to performances and collaborations. Hayes’s most recent release, “Embrace” (Superpang), came out in early 2021. She has many collaborative releases to her name and is also a member of the New BBC Radiophonic Workshop. In an email interview, she discussed her music and creative process.

Could you discuss your background and what led you to experimental and electronic music?

I grew up in a very musical household. I was constantly surrounded by music; not only recorded music, but playing music with groups of people, family and friends. I started performing when I was really young, from plays and musicals, to when I started being in punk/grunge-inspired bands in my late teens. I was drawn to experimental and electronic music for several reasons: firstly, because I had early exposure to technology through DIY electronics and coding. I understood the autonomy of being able to make something for yourself and saw that technology could enable that. Secondly, I was immersed in the electronic music scene in Scotland in the late 1990s/early 2000s, which was an education in how sound could be affective, in how it could make bodies move. Finally, I found the possibilities for understanding what music could be really enticing: being an improviser working with live electronics has allowed me to facilitate rich creative scenarios both intimate and public, build lasting friendships, and work with a variety of different people and communities, all while satisfying my curiosity for exploring different ideas through sound.

Could you discuss your work with the New BBC Radiophonic Workshop?

I first became affiliated with the New BBC Radiophonic Workshop in 2012. My friend Yann Seznec, who was one of the first members when the Workshop was relaunched, got in touch with me, and I was excited by the idea that there would be a contingent based in Scotland doing things related to sound and music, rather than being purely London-centric. I’ve been involved in a variety of projects over the years: I gave a commissioned performance at King’s Place in London with musician and media artist Jo Apps where we performed almost completely in the dark. Photographer and visual artist Tobias Feltus created a set of sonically-responsive camera flashes that reacted to our sound in a concert hall otherwise completely absent of light. With other members, I led a workshop for the BBC Proms on creating electronic music for 12-18 year olds. I was also invited to create a long-form radio piece for the Dutch radio station Concertzender, for their series Inventions for Radio. Most recently, I’ve been involved with supporting artists and adjudicating for the Oram Awards, which promote forward-thinking work from women and gender-marginalized artists.

You use self-built hybrid analogue/digital live electronics, which you’ve developed over the last 14 years. Could you describe those tools and discuss how your set-up has changed/evolved over the years?

In practical terms, I generally use a combination of drum machines, analogue synths, voice processors, MIDI controllers, and a laptop running custom software. Sometimes acoustic piano or my own voice becomes part of this ecosystem, but most often at the moment, it’s a hybrid analogue/digital setup. My goal was always to create a performance system that could become a sort of sonic playground: something that would allow me to create spontaneously by exploring and shifting around within the materials that I put together. My work is very much about the live situation, so most of the work I put in to create this setup is so that I can go into a project with an instrument that can be used with other performers (with regular collaborators or new improvisers), or within unique contexts (performing in unusual architectural spaces, for example). The software design takes care of a lot of this: I try to think beyond individual effects and parameters. I set up connections between processes within the hardware and the software so that I can only really experience the result of these relationships, rather than the individual processes themselves. I interact with everything using a PS2 game controller because it brings a sense of immediacy and tangibility into my hands. I tried it out once in 2007 and it stuck. When I started on this project, I was thinking more rigidly about creating specific pieces. Over the years, the tools have become much more of an instrument, one that continually offers me new places to explore musically.  
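(To illustrate the kind of mapping Hayes describes, here is a minimal Python sketch, not her actual software: one controller gesture fans out to several linked parameters, and those parameters also influence each other, so the performer experiences the result of the relationships rather than each process in isolation. The parameter names and ranges are hypothetical.)

    # A minimal sketch, not Hayes's software: one gesture drives several
    # coupled parameters at once. Names and ranges are hypothetical.

    def map_gesture(axis_x, axis_y, state):
        """axis_x, axis_y in -1..1, e.g. from a PS2-style analogue stick."""
        # One gesture fans out to several destinations.
        state["filter_cutoff"] = 200.0 + (axis_x + 1.0) * 0.5 * 5000.0   # Hz
        state["grain_rate"] = 2.0 + (axis_y + 1.0) * 0.5 * 30.0          # grains/sec
        # Cross-coupling: feedback depends on where the cutoff already sits,
        # so the same stick position can behave differently over time.
        state["delay_feedback"] = min(0.95, 0.3 + state["filter_cutoff"] / 10000.0)
        return state

    state = {"filter_cutoff": 1000.0, "grain_rate": 10.0, "delay_feedback": 0.3}
    for x, y in [(-1.0, -1.0), (0.0, 0.5), (0.8, -0.2)]:   # simulated stick positions
        print(map_gesture(x, y, state))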

I know you use MaxMSP, which is a very powerful and flexible software environment. What role does it play in your creative process and performances?

Most of what I do with electronic music involves Max. I’m not particularly interested in editing or working in DAWs because I find the process tedious and I struggle with the decision making. Other people do it beautifully and I’m very glad for that. For me, this type of software environment lets me investigate all the ‘what if’ questions that come up in the creative process. What if I could analyze the pitch of my voice at a particular moment and use that to set the fundamental frequency of a pulsar synthesis process? What would that sound like? How would it deal with the instability of the voice? What could it offer me as an expressive tool? What would happen if I then changed the input to this process? Or fed the synthesis back into itself? Creative coding environments, once you get to a level where you can put things together quickly (which is sooner than you might expect once you notice that establishing simple relationships can lead to interesting behaviours), can enable you to indulge in these kinds of explorations. Unless, for example, I’m using only synths with a group of improvisers, everything I do on stage involves Max. Working in this way is what has made performing electronic music feel authentic, physical, embodied, and real for me. That’s what I’ll always proselytise though: not the technology itself, but finding the unique combination of things that feels right for each performer, both individually, and also together with others within group formations. In essence, that is the process of finding your own musical voice.
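(The “what if” Hayes mentions, tracking the pitch of the voice and using it to set the fundamental of a pulsar synthesis process, can be roughed out in a few lines. The following Python sketch is an illustration under simplified assumptions, not her Max patch: it estimates pitch by autocorrelation from a synthetic test tone standing in for a voice, then drives a basic pulsar train with that fundamental. It only requires numpy.)

    import numpy as np

    SR = 44100

    def estimate_pitch(x, sr, fmin=80.0, fmax=800.0):
        """Rough autocorrelation pitch estimate over one analysis frame."""
        x = x - np.mean(x)
        corr = np.correlate(x, x, mode="full")[len(x) - 1:]
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + np.argmax(corr[lo:hi])
        return sr / lag

    def pulsar_train(f0, formant, dur, sr):
        """Basic pulsar synthesis: a short windowed pulsaret repeated at f0;
        the pulsaret's own frequency and length shape the formant."""
        period = int(sr / f0)                      # samples between pulsaret onsets
        plen = max(2, int(sr / formant))           # pulsaret length
        t = np.arange(plen) / sr
        pulsaret = np.sin(2 * np.pi * formant * t) * np.hanning(plen)
        out = np.zeros(int(dur * sr))
        for start in range(0, len(out) - plen, period):
            out[start:start + plen] += pulsaret
        return out

    # A steady 220 Hz tone stands in for one analysis frame of voice input.
    frame = np.sin(2 * np.pi * 220.0 * np.arange(2048) / SR)
    f0 = estimate_pitch(frame, SR)
    audio = pulsar_train(f0, formant=1200.0, dur=1.0, sr=SR)
    print(f"estimated f0: {f0:.1f} Hz, rendered {len(audio)} samples")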

How does your creative process compare between your live performances and recorded work? Does the improvisational aspect differ at all? Are there things you can’t do in regular live performances that can be part of recordings? (or live in-studio work vs. public performances)

Aside from some of the most recent things I’ve been doing with machine learning, which I’m still working out how to do reliably in a show, there’s no real difference between live and recorded work. The things that I’ve released have generally been recordings of improvisations. The sound artist, performer and inventor Laetitia Sonami was someone who eschewed making recordings, and I totally feel this because I’m much more interested in what is created together with an audience or co-performers, in the moment, during live performance. Making musical recordings is a phenomenon that is only about 150 years old, which is an incredibly short era in the entire history of human music making. Even though I work extensively with digital sound, I still have mixed feelings about releasing recorded music. My performances are physical and highly gestural. It seems that this is conveyed in my releases, but it’s always a challenge for me to work in a purely auditory format.

What type of preparation goes into your improvised live performances?

I love playing with collaborators and I get so much out of that, even when it’s not in front of an audience, but the idea of playing through improvisations on my own feels very alien. Of course it’s necessary because improvisation is a practice, but at the same time, I trust my instinct in this because I really don’t understand music as something that is only rehearsed and then presented in performance on stage. Doing music is a sensuous way of being and a way of being with other people. So I think of preparation in the same way that dancers use the idea of blocking or staging. I carefully go through every element of my set up in terms of cables, physical arrangement, software protocols, and navigation of musical materials. Everything needs to be as failsafe as it can be and I need to have complete familiarity with all my tools: hardware, settings, software, etc. If something breaks, can I debug the code? Can I repair the controller? What will I do on stage if something fails? Of course I want to get to the stage where the chances of this are extremely minimal and where contingency is built in. Things can go wrong with any instrument or set up, but using non-standard live electronics means that the number of things that can break is multiplied. I’ll rerun my code extensively until the point where I’m confident in my entire system. At this point, when everything feels familiar and safe in this way, I’ll move through some ideas or try out ways of navigating my setup, but it is more about blocking, rather than a full-blown rehearsal. I always want to make sure that there is some friction or unfamiliarity to take into the show. 

Your 2016 release Manipulation consisted of unedited one-take improvisations performed on unpredictable systems, spanning many years. What were your thoughts on how those compositions turned out? Did any, in particular, end up going in unexpected directions?

When I look back at this album, I get a lot of joy because some of the pieces came out of really simple early techniques that I was trying out that totally worked. “Flutter,” for example, involved running my first analogue monosynth, a Jen SX1000 (an unassuming Italian entry-level synth), through some live sampling stuff that I was building in Max. When I found the right spot on the resonance, the combination of analogue and digital led to these lovely swirling descending tendrils of sound. I can also hear some of the techniques in this track that I still use today in Max, but they sound quite different and much more subdued, which I enjoy.

Some of the other improvisations, “bounce” and “technoscribble,” for example, were made in an afternoon of rapidly creating new configurations between my hardware and software. I was in a shared studio in Edinburgh, and two other people were working on another project at the same time. Somehow that actually allowed me to maintain focus on what I was doing and create these playful explorations. My software is designed in such a way that I can quickly add in extra synths or hardware or acoustic instruments. For these tracks, I was creating weird control feedback loops between my beloved Elektron Machinedrum (a now dysfunctional SPS-1 MK1), contrasting that with the crude percussion of the Korg Monotribe, and my original MS-20 (which I never really fell in love with). I enjoyed what came out that day, although I was having too much fun and didn’t make any notes, so I’ll never be able to recreate what I did, but I’m OK with that.

Unpredictability seems to be a common thread throughout your work. Could you discuss the ways you strive for this and what challenges it might introduce?

I’m always looking for that balance between, to steal from your title, chaos and control. Somewhere within that is where the most interesting things can happen musically. I don’t want to be in control of everything going on because then it becomes simply like executing a series of tasks. As an improviser, I thrive on having to respond to unexpected sounds, gestures, or events. These can come either from a co-performer or from my performance environment itself. I’m always investigating how we understand and make sense of the world, our environments, and our social dynamics as embodied beings; technology helps me to do some of that through the dynamic processes of musical improvisation. When I started working with code, I was working with deterministic chaos functions. The main challenge is that you can create something that is interesting conceptually but is pretty dull in practice. I’m interested both in the concept and the aesthetics. I’m much less formalistic now in that I could do an entire set just working with negative feedback.
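(The deterministic chaos functions Hayes mentions are the sort of thing a logistic map gives you: bounded values that never settle, usable as a raw control stream. A minimal Python sketch, with a hypothetical filter-cutoff mapping, might look like this.)

    # Logistic map as a chaotic control stream; r = 3.9 sits in the chaotic regime.
    def logistic_stream(r=3.9, x=0.4, n=16):
        values = []
        for _ in range(n):
            x = r * x * (1.0 - x)
            values.append(x)
        return values

    # Map each value (0..1) onto a hypothetical filter-cutoff range in Hz.
    cutoffs = [200.0 + v * (4000.0 - 200.0) for v in logistic_stream()]
    print([round(c) for c in cutoffs])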

Your latest release, Embrace, also consists of improvisations. Does it represent a complete session, or was it compiled out of a larger group of recordings?

For Embrace, I had the opportunity to explore some ideas that wouldn’t necessarily work in live performance. Or rather, I haven’t quite worked out how to approach that yet. Two of the tracks, “Femme Endings” and “Dont Glitsh On My Cascade,” are unedited improvisations using the most current version of my performance setup. The other two tracks are made up from material recorded in the same session, but are reconstructed through improvisation with machine learning techniques. So, in a sense, I’m remixing my own improvisations; perhaps this could be called second-order improvisation. I’m not familiar enough yet with these newer tools to fully build them into everything I’m doing, so I’m still exploring how they can open up or amplify creative possibilities. It can take years to figure this out in performance. I’m really happy that Embrace came out of this process. It’s been really nice to see how it’s been received given that I’m really just saying: here’s what I’m doing, I’m trying this out, here’s what happened, we can also think about music in this way.

For more info visit: laurensarahhayes.com.
