Stretching Sound, Playing With Time
Sonic Method-Making and Timing Issues in Digital Sound Culture

Paper presented at the conference Hands On Sonic Skills: Practical experiential approaches to sound, music, and media in musicological education

Translated in a hurry with the help of deepl.com

I am very happy to be here today. Above all, because I share the conviction that sonic skills, practical approaches based on experience, or even auditory design – as I would like to say in my beloved Lüneburg dialect – are of central importance for the fields and issues we all work in and on.

In my short presentation, I would like to make a few points against this very background: First, I would like to propose the format of the sound lecture in a very practical way – so to speak, in passing – in order to explore how the practical level of sound can be put to use, especially in presentation and lecture situations.

Secondly – and most importantly – I will take this opportunity to address a topic that is also quite important to me and which, I believe, is particularly suitable for being understood and listened to in a practical way – in or as a sound experience: I am interested in how our current experience in digital cultures is inevitably permeated by a diversity, a heterogeneity of temporalities. And I am interested in the extent to which these different levels of temporal embeddedness of our actions can be made concretely experiential, perhaps not least in sound and music culture – and that would also mean: precisely in and through sonic skills.

And finally, thirdly, I would like to conclude with just a few brief thoughts on the question of the extent to which the development of new formats for academic work – in research and teaching – and the search for new methodological approaches (in a very broad sense) can be a fundamentally critical practice. A method-making, as Katherine McKittrick recently called for.


I would like to begin, however, with the question of experiential approaches, of auditory design. I assume that music in general, and what I would decisively call “media music” with Rolf Großmann in particular, represent an almost paradigmatic object of reflection, on which something like an active creative engagement with the new spaces for agency and play in digital cultures could be developed. For such media-musical practice is at the same time a very concrete reflection on media. It develops practical questions based on and in conjunction with the sounds it creates. Questions such as how ubiquitous and increasingly inconspicuous digital media are realigning our formations of knowledge, our experience of the world and our subjective existence in media worlds. In other words, what I would call digital cultures; media-cultural practice after the digital has become commonplace.

But if we assume that sounds know something specific about these media environments that span our present, then a contemporary scholarly examination – a contemporary sound culture studies – would have the urgent task, in line with the spirit of this conference, of developing formats and methods precisely in order to make this epistemic moment of our very practical handling and experience of sounds productive for scholarly work.

This applies in particular to formats of academic exchange and presentation – in other words, to lectures such as this one. In their 2016 manifesto on audio papers, Sanne Krogh Groth and Kristine Samson already pointed out the possibilities opened up by incorporating the auditory level into academic text work – into the paper:

‘For instance, it combines the rationality of language and speech with the sensation and affective materiality of the voice, or it incorporates the sound aesthetics of various environments, landscapes and spaces to underline and strengthen the academic argument.’ (Sanne Krogh Groth & Kristine Samson (2016): ‘Audio Papers – a manifesto’)

And I would like to add that it is not just a matter of strengthening and underlining independently conceived arguments through sound, but rather that these arguments could perhaps be developed quite differently on the auditory level.

In relation to lecture situations – such as this one – the term ‘sound lecture’ is often used. I am not at all sure whether what I have in mind is a sound lecture. If so, then only insofar as I would like to try to let some of the points I am about to make resound explicitly. The level of sound will give me specific access to my topic. For sound – and this brings us closer to our topic – can hardly be thought of as anything other than temporal, and precisely for this reason offers a special potential for making tangible the new technocultural constellations of temporality in which we find ourselves immersed in contemporary digital cultures. It is precisely this aspect that I would like to develop briefly here today: I would like to understand tinkering with the various temporal aspects of phonographic media as a genuine exercise in the specific ‘chronotopes of digital cultures’.

This modality of temporality in digital cultures – or rather, their specific temporalities, in the plural – currently reminds us once again how much even the most naive conceptualisation of time is based on its own media prerequisites.

The problem of different, heterogeneous media time regimes has rightly become a central topic of discussion in media studies. To give just one example, Isabell Otto writes in her recent book ‘Prozess und Zeitordnung’ (Process and Order of Time):

„Under the conditions of digital networking, it becomes obvious and observable that every temporal order is in a state of tension and competition with all other temporal orders, which are also structured around the challenges of digitally conditioned temporality. There is no uniform time dimension of digitally networked technologies, but rather numerous divergent, heterogeneous dimensions of temporality. The relativity, situationality and context dependency of every temporal structure has thus become particularly clear under the conditions of digital networking.“ (Otto 2020, 11)

This sounds rather complex and yet remains vague. But we can make it quite concrete: like calendars, clocks or metronomes before them, the smart mobile devices in our pockets make time literally tangible in a new way. However, they do not function in a straightforward manner, precisely because they integrate different media and thus different concepts of time: for example, as a calendar that not only reminds me that I have an appointment in Halle today, but can also independently identify this appointment from emails and perhaps voice messages. Or as a mobile connection to McLuhan’s prophecy of an instantly addressable, digitally networked globe, which enables me to incorporate a file into my presentation that is – perhaps – stored on a server on the other side of the world. We’ll come to that in a moment. Or as a tracker whose sensors continuously record my currently slightly nervous health data – and perhaps predict my life expectancy on this basis – in other words, the time I have left. That would be, so to speak, the classic Heideggerian twist: even digital time is perhaps always only the time that remains.

Because my time is also finite – especially when it comes to this lecture: let’s continue!

In the following, I would like to make it clear that what we see here in the background are exemplary first études of a media-musical reflection on digital time regimes. We see various time-related actions in the interface of the DAW Ableton Live, one of the most widely used commercial music and audio production programs currently available. In this video loop, we see a wide variety of such actions, which I repeatedly go through with students in introductory courses on auditory design in digital media. And we see – and this is the really exciting part – that these utterly inconspicuous, trivial actions are oriented towards very different conceptions of time.

Above all, my favourite argument: zooming in on the timeline. The timeline is the time axis stretched from left to right along which the virtual playhead travels across the virtual tape tracks. And we can zoom into this graphically stretched time: by simply clicking and dragging, we can switch instantly from the macro range, where we survey the track structure at the arrangement level, to the millisecond range, where we can shape the micro-rhythmic subtleties of the groove, for example by dragging individual beats and individual sounds slightly against the tempo (and in a completely different way than a drummer would do it…).
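To make the two ends of this zoom concrete, here is a minimal sketch of the arithmetic such a timeline has to bridge – a deliberately naive illustration with an assumed tempo and sample rate, not Ableton’s actual implementation:

```python
# A minimal sketch (assumed values, not Ableton's code) of the two time scales
# a DAW timeline bridges: musical time in beats and audio time in samples.

SAMPLE_RATE = 44_100   # samples per second (assumed)
TEMPO_BPM = 120        # beats per minute (assumed)

def beats_to_samples(beats: float) -> int:
    """Convert a position in musical beats to a position in audio samples."""
    seconds_per_beat = 60.0 / TEMPO_BPM
    return round(beats * seconds_per_beat * SAMPLE_RATE)

# Arrangement-level zoom: a whole four-bar phrase in 4/4 ...
print(beats_to_samples(16))   # 352800 samples, i.e. eight seconds of audio

# ... millisecond-level zoom: nudging a single hit 10 ms "late", against the grid.
nudge_ms = 10
hit_on_beat_two = beats_to_samples(1.0) + round(nudge_ms / 1000 * SAMPLE_RATE)
print(hit_on_beat_two)        # 22050 + 441 = 22491 samples
```

Eight seconds of arrangement and a ten-millisecond nudge sit on the same axis; only the zoom level decides which of the two we see and manipulate.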

As already mentioned, all of these are initially fairly trivial mouse gestures, parameter settings, or visualisations. These are actions and phenomena that, firstly, are not specific to music and, secondly, are acquired by users (especially beginners) as a matter of course – and these users, of course, get along very well without reflecting, as I somewhat grandly demand here, on the media time regimes in which these actions are embedded.

So, to make my point a little clearer, I would like to take a step or two back. Because, of course, this work on the timeline does not begin with digital audio workstations. Rather, as Friedrich Kittler puts it, it is perhaps the basic function of technical media itself – what he calls ‘time axis manipulation’. Let’s listen to what that means…

Tape

So if, in preparation for this lecture, I had recorded my somewhat nervous voice on tape in order to listen to myself speaking and train away all the “ums” and stutters, // then – depending on the age of the tape – my voice would inevitably have been submerged in tape noise. And we could hear this noise as a kind of media time signature. Namely, as the irreversibility with which the magnetisation of the tape classically and thermodynamically strives steadily towards entropy. Such linear time regimes know only one direction: straight from the past towards the future. //

But my voice on the tape already complicates such linear, one-track time: auditory media are devices for manipulating the time axis in real time. Unlike musical scores and phonetic transcriptions, they do not capture the passing of what is said or heard in a sequence of symbols, but translate it into the time regime of a technical sound script – phonography.

So if my tape machine were getting on in years and the drive were no longer running smoothly, this drifting asynchrony between recording and playback would immediately become audible. The media time of the tape no longer runs straight; it increasingly veers off course.

// Phonographic media allow us to tinker with time. They allow us to play with time – via simple technical parameters such as the playback speed of a drive motor. They inevitably multiply the immanent temporality of the sounds they make available.
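What this playing with time via playback speed means can be sketched in a few lines – a deliberately naive NumPy illustration, not a model of an actual tape transport:

```python
# A toy varispeed sketch (NumPy resampling, not a model of a real tape drive):
# reading the recorded samples back faster makes the sound shorter AND higher,
# because in this kind of playback, duration and frequency remain coupled.
import numpy as np

SAMPLE_RATE = 44_100
t = np.arange(0, 1.0, 1 / SAMPLE_RATE)
recorded_voice = np.sin(2 * np.pi * 220 * t)   # a 220 Hz stand-in for my voice

def varispeed(signal: np.ndarray, speed: float) -> np.ndarray:
    """Play back `signal` at `speed` times its original rate by resampling."""
    read_positions = np.arange(0, len(signal) - 1, speed)
    return np.interp(read_positions, np.arange(len(signal)), signal)

double_speed = varispeed(recorded_voice, 2.0)
print(len(double_speed) / SAMPLE_RATE)   # ~0.5 s: half as long ...
# ... and what was a 220 Hz tone now oscillates at 440 Hz: an octave higher.
```

Half the duration, double the frequency: on tape, as in this naive resampling, the two cannot be separated – which is exactly what digital time stretching will later undo.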

// This can also be heard on my tape, which is in dire need of maintenance: the two stereo channels that spatially reproduce my voice are slowly drifting out of phase and beginning to diverge. The simultaneity of my voice in the stereo image suddenly becomes audible as the duality of two separate channels. Due to the poorly adjusted recording head, there is also crosstalk on the tracks. And the supposedly documented real time of my nervous lecture preparations quickly turns into a droning back and forth between a multitude of voices – and a multitude of times.

But did I really fetch a dusty tape recorder from the attic to prepare this lecture? It is much more likely that I simply dictated my text into the built-in microphone of my smartphone. In other words, I created a digital sound file that is safely stored in the solid-state memory chip, protected from analogue interference and analogue time fluctuations.

But even the temporality of this digital sound file is much more complex than it initially appears. // And here I am – for the sake of time – completely ignoring what the psychoacoustic MP3 codec with which my phone compresses the recording does to its temporality.

But let’s assume that my phone doesn’t store the garbled recording of my lecture locally on a memory chip, but outsources it to cloud storage. The recording would therefore be located on some server rack in some data centre on the other side of the world. And if I wanted to access it again here in Halle as part of this lecture, //then the file would be streamed from that server. That, in turn, does not mean that there would be a continuous flow of data from the server to Halle. Instead, according to the Internet Protocol, the data is packaged into packets – packet switching – which take very different routes through the global data networks and therefore arrive here in Halle at very different times. Only as long as all this happens in time frames that are below our human perception thresholds can my computer reassemble everything in time to play back the file in such a way that we hardly notice anything. But if, for whatever reason – certainly not because of poor network coverage in Halle – individual packets were delayed too much or even lost altogether, this would create increasingly noticeable gaps in the seemingly continuous flow of the streamed audio. And if the connection were to stall completely at some point, the recording would keep getting stuck, like a skipping analogue record. Perhaps at some point it would run into a final recursion, a loop, a time loop.
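What packet switching means for the seemingly continuous stream can likewise be sketched in a few lines – a toy model with assumed packet sizes, buffer lengths and delays, not the behaviour of any actual streaming client:

```python
# A toy model of packet-switched audio playback (all numbers are assumptions):
# packets leave the server in order, take different routes (here: random delays),
# and a short playout buffer reassembles them – if they arrive in time.
import random

PACKET_MS = 20    # each packet carries 20 ms of audio
BUFFER_MS = 60    # the player delays playback by 60 ms to absorb network jitter

random.seed(7)
arrival_ms = {seq: seq * PACKET_MS + random.uniform(5, 90) for seq in range(10)}

playout = []
for seq in range(10):
    deadline = seq * PACKET_MS + BUFFER_MS      # when this packet must be played
    if arrival_ms[seq] <= deadline:
        playout.append(f"packet {seq}")
    else:
        playout.append("-- gap --")             # too late: an audible dropout

print(playout)
```

As long as every delay stays within the buffer, the playout list reads as an unbroken sequence; once a packet misses its deadline, the gap the prose above describes becomes visible – and, in playback, audible.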

But after this detour via dusty tape and cloudy server racks, let’s return to the starting point of my reflections: the digital audio workstation.

Just to prove once again that this potential for reflection offered by DAWs, especially for media music education, is not just my idiosyncratic fantasy. Will Kuhn and Ethan Hein open their book ‘Electronic Music School’ with a brief, implicitly media-theoretical classification. Before presenting their, in my opinion, very promising programme for secondary school music lessons using DAWs, the authors write:

„Working with the computer as a medium doesn’t just affect the resulting sounds. The visualisation systems used in digital audio workstations […] can change the entire conceptual imagining of music. For example, producers are constantly zooming in and out across timescales. At one extreme, they manipulate fragments less than a millisecond long, while at the other, they view long and complex compositions compressed to fit into a single screen.“ (Kuhn/Hein, 13)

Unlike with my supposed tape recordings, however, dealing with media time – the media-aesthetic work on the timeline in the DAW – here becomes a genuinely creative option. I do not wish to ignore the long history of aesthetic experiments with tape between Paris, Cologne and San Francisco, most of which were presented with grand avant-garde gestures. But I believe that in the digital realm, what was once an analogue-mechanical necessity – such as the inevitable coupling of playback speed and frequency range – becomes largely contingent and thus a creative option.

And once again: what I call here a reflection on media time regimes may sound very grandiose. But it is meant to be absolutely low-threshold. The students Kuhn and Hein write about, who zoom back and forth across time domains, do exactly that. They engage aesthetically, musically and playfully with the new viscosity of technical time above the clock speed of the CPU. They play with digital time.

And this playing with digital time has, of course, been going on for quite some time. Digital time axis manipulation has a long history, particularly in the sound laboratories and aesthetic experiments of pop music – from house and hip hop to techno and jungle.

So, shortly before the end, let’s switch back to the beginning, back to Ableton Live, or perhaps back to 1995.

// Timestamp 1995 //: ‘We have advanced to a level where we can control time solely by stretching sound… we play with time.’ When jungle musician A Guy Called Gerald writes about ‘playing with time’ in the liner notes to his now iconic LP Black Secret Technology, he means it quite literally, even back in 1995. In the early 1990s, digital samplers such as the Akai S950 offered him, for the first time, the possibility of slowing down or speeding up his breakbeat audio material through so-called time stretching – without changing its pitch. For Gerald, the time of his sounds became an elastic material that he could practically compress and stretch, that he could shape.
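To make the operation Gerald describes tangible in code, here is a minimal overlap-add sketch – emphatically an illustration under simplifying assumptions, not the Akai S950’s actual algorithm: the material is cut into short grains that are laid out again on a longer (or shorter) time grid, so the duration changes while the waveform inside each grain, and with it the pitch, stays put.

```python
# A minimal overlap-add time-stretch sketch (an illustration, not the S950's
# algorithm): grains of the signal are re-spaced on a stretched time grid,
# changing duration without resampling – and therefore without changing pitch.
import numpy as np

SAMPLE_RATE = 44_100

def time_stretch(signal: np.ndarray, factor: float,
                 grain: int = 2048, hop: int = 512) -> np.ndarray:
    """Stretch `signal` to roughly `factor` times its length."""
    window = np.hanning(grain)
    out = np.zeros(int(len(signal) * factor) + grain)
    out_pos = 0.0
    for in_pos in range(0, len(signal) - grain, hop):
        out_start = int(out_pos)
        out[out_start:out_start + grain] += signal[in_pos:in_pos + grain] * window
        out_pos += hop * factor            # grains land on a stretched grid
    return out

t = np.arange(0, 1.0, 1 / SAMPLE_RATE)
breakbeat_stand_in = np.sin(2 * np.pi * 220 * t)   # placeholder material
slowed = time_stretch(breakbeat_stand_in, 1.5)
print(len(slowed) / SAMPLE_RATE)   # roughly 1.5 s – and still a 220 Hz tone
```

The grain spacing changes; what is inside each grain does not – the exact inversion of the varispeed example above, where time and pitch could only be bent together.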

Again, these are completely trivial operations. But I believe, and I experience this time and again in teaching students, that it is precisely these initially self-evident operations that hold great potential for a fundamental engagement with digital media.


I hope that in this hasty overview I have been able to give an impression of what I understand by the heterogeneous technical temporalities of auditory media. And above all, I hope that I have at least been able to suggest the extent to which the sonic level can provide a fundamentally unique perspective on such questions. This does not necessarily require complex technical setups or highly virtuoso performances – but above all the willingness to understand sound itself as part of one’s own argumentation. I am convinced that this would be a crucial aspect: The legitimate demand for more sonic skills in sound culture studies, in teaching and research at universities, is not primarily about imparting manual skills for the “correct” use of software, the “correct” setting of a parameter set, or the “canonical” use of certain sound aesthetics.

Rather, it is precisely about establishing sound as a subject in its own right, BUT ALSO as a method of dedicated sound science. Sound as an integral part of scientific procedure, as part of the ways in which we develop our questions, construct our arguments and draw our conclusions. Sound not as the sonification of an abstract, formal demonstration – but as a medium of knowledge in its own right.

This, in turn, would not necessarily mean positioning one’s own approach as NON- or ANTI-science, as Holger Schulze has at least suggested in relation to sound studies. However, it would definitely mean pursuing the question of sound and its specific epistemic function, as well as the question of a different kind of science – perhaps quite implicitly.

Science is not neutral. Questions about scientific work, about our approach, about our methods are not neutral. Demands for rigour and clarity, for precision and transparency are not neutral – they have been deconstructed in many ways and with good reason, and not coincidentally, especially by queerfeminist or anti-racist voices, for example.

Katherine McKittrick, who is perhaps working her way into sound studies from this very intersection of critical theoretical axes, has in this sense recently framed method-making, the creation of new methods, as the starting point for another possible scientific practice. Against established epistemologies, which ultimately conceptualise knowledge as the greatest possible control, as definitive categorisation and hierarchisation, as the final subjugation of their respective objects, McKittrick wants to develop OTHER methods, a different scientific approach to the world. And that could mean – as she puts it – asking the groove or the poem for theoretical insight.

‘Asking the groove or the poem for theoretical insight.’

That, too, would be sonic skills: developing a different, equally sensitive and knowledgeable approach to sound that always understands scientific sound work as work on the relationships we tinker with when we tinker with sounds.

‘Asking the groove or the poem for theoretical insight.’

That sounds like a start.