Making Video Without A Videocamera (Part 2)

Making a Music Visualiser

In Part 1, I described how I used the Processing 3 coding environment to make some music videos by coding simple animations and recording them from my screen.

One of my original goals was to code a music visualiser — an animation that reacts to parameters of the music itself. In particular, I wanted to represent the frequencies (pitches) present in the music in a visual way. A visualiser like this, based on Digital Signal Processing (DSP) of the sound, could also be used to project images to a screen during performances of my music.

The Fourier Transform

The science behind analysing the frequencies present in music goes something like this: music is made up of a pretty complex sound wave. Any complex sound wave can be broken down into a collection of simple sine waves which tell you how that sound wave is put together. The genius who worked this out was a French mathematician called Joseph Fourier, and he gave his name to the bit of maths that allows you to disassemble the sound signal into its component parts: the Fourier Transform. Once you know what the components are, it’s possible to represent them graphically in any way you like.
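
To get a feel for the idea (this is my own illustration, not something from the course), here’s a minimal Processing sketch that draws the waveform you get by adding three simple sine waves together. The result already looks like a ‘complex’ wave, and the Fourier Transform essentially runs this process in reverse, recovering the component frequencies and amplitudes from the combined signal.

// Sum three sine waves to show how simple components
// build up a complex-looking waveform (illustrative only)
float[] freqs = {2, 5, 9};     // cycles across the width of the window
float[] amps  = {50, 30, 15};  // pixel amplitude of each component

void setup() {
  size(600, 300);
  noLoop();  // a static image, so draw once
}

void draw() {
  background(255);
  stroke(0);
  for (int x = 1; x < width; x++) {
    line(x - 1, height/2 + combined(x - 1), x, height/2 + combined(x));
  }
}

// Add up the sine components at horizontal position x
float combined(int x) {
  float sum = 0;
  for (int i = 0; i < freqs.length; i++) {
    sum += amps[i] * sin(TWO_PI * freqs[i] * x / width);
  }
  return sum;
}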

That’s lovely, and makes sense to me in theory, but the maths looks horrible! Thankfully, this is something that was covered by the Creative Programming for Digital Media & Mobile Apps course that I was following, and the Goldsmiths team had written a module for Processing which included a ‘Fast Fourier Transform’, so I didn’t need to worry about coding this function myself. The snag, however, was that while it worked fine for small chunks of pre-recorded music, I couldn’t get it to work for longer .wav files of normal song length or for analysing music being played in real time.

Head-butting

After a lot of head-butting, I reached a point where I didn’t think I’d be able to solve the original problem of routing live sound through a unique, self-programmed music visualiser. So I went back to basics. Ditching the sound module provided on the Creative Programming course, I looked into Processing’s own sound ‘library’ using the online documentation. Libraries, by the way, are confusingly named – I haven’t found a way to read them! I like to think of them as invisible modules of functions that you can invoke, almost Harry Potter style, with the right lines of code.

To my relief, there is a Fast Fourier Transform (FFT) within the sound library for Processing, and there is sample code in the language reference to show how to invoke it. I just had to make sure I had the sound library installed on my copy of Processing, then try out the sample code.
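
The heart of that sample looks something like this (adapted from the sound library’s reference example; exact details may differ between library versions):

import processing.sound.*;

FFT fft;
AudioIn in;
int bands = 512;                      // number of frequency bands to analyse
float[] spectrum = new float[bands];

void setup() {
  size(512, 360);

  // Create the analyser and route live input from the soundcard into it
  fft = new FFT(this, bands);
  in = new AudioIn(this, 0);
  in.start();
  fft.input(in);
}

void draw() {
  background(255);
  fft.analyze(spectrum);  // fill spectrum[] with the latest frequency data

  // Draw one vertical line per band: the 'fuzzy line' version
  stroke(0);
  for (int i = 0; i < bands; i++) {
    line(i, height, i, height - spectrum[i] * height * 5);
  }
}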

And bingo, I suddenly had something that was responding to input from the soundcard. Just like that. The graphics were terrible — just a fuzzy line at the bottom of the screen — but I finally had the beginnings of a solution.

Getting creative

Having now shown that I could make frequency analysis work for sound that wasn’t pre-recorded, I amended my visualiser code. I imported the sound library, incorporated the relevant commands and used my own graphics to create something more eye-catching than the twitching line in the sample code.

You can tell the FFT function how precisely to analyse the sound data by setting the number of frequency bands that you want it to break the sound into (a power of two, such as 512). A larger number of bands means your processor has to work harder to keep pace with the real-time sound data. I found that my computer was comfortable with the 512 bands suggested in the sample code, and that this gave enough definition to the graphics for the look I wanted to achieve.
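
As a purely hypothetical illustration (this isn’t my actual Silver Bird sketch), the spectrum values can drive whatever drawing you like. Swapping the draw() function in the sample sketch above for something like this turns the bands into coloured circles instead of lines:

// A replacement draw() for the sketch above; assumes the same
// fft, bands and spectrum variables are already set up
void draw() {
  background(0);
  fft.analyze(spectrum);

  noStroke();
  for (int i = 0; i < bands; i += 16) {        // sample every 16th band
    float size = spectrum[i] * height * 10;    // scale the normalised energy
    float x = map(i, 0, bands, 0, width);      // spread bands across the window
    fill(map(i, 0, bands, 0, 255), 120, 255);  // colour shifts with frequency
    ellipse(x, height / 2, size, size);        // louder band = bigger circle
  }
}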

I used the resulting visualiser to record a QuickTime video of my song Silver Bird, then edited the results in iMovie. You can see the finished video via the link at the top of this page. It’s not perfect: I haven’t yet solved the problem of recording sound to a QuickTime video — if that is actually possible. This meant that I had to try to line up the sound with the graphics afterwards, which was extremely difficult, especially as I wanted to make the video more interesting by making some sections monochrome and adding in some speeded-up sections. I didn’t manage to line the sound and visuals up all the time, but there are parts of the video where the music visualisation is more obvious, such as here.