As I approach what is hopefully my big personal project of 2014 (watch this space), I spent an evening putting some of the elements I’ve been working on into a 10-minute video piece.
I call it Spectral Songs of the Slitscanned Selfies.
As the slit-scanner moves across the photograph, each pixel-wide column is stretched across the whole image as a translucent layer. Simultaneously each photo is fed through an ANS synth, which treats it as a sound spectrogram and produces a unique drone. You can follow the progress of each scan by the light grey tick at the bottom of the screen.
The process went through a few stages and builds on lots of previous work of mine.
First off, the source material comes from the hundreds of thousands of selfie photographs I’ve been collecting since November last year. The selfies seem to be becoming raw material rather than essential to the work, and I don’t think I’m making any useful commentary on hypernetworked vernacular self-portraiture at the moment, but that’s fine. Maybe I’ll return to them later. For now the only notable thing is that they are face-shapes.
Next, I’m processing the images using a slit-scan technique. At its most basic this mimics the action of a flat-bed scanner by examining an image one column of pixels at a time. My method was to repeatedly crop the photo into 1x600px strips, saving each one in sequence, and then stretch each strip back out to a 600x600px square. I did this using basic ImageMagick scripts.
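If you want to play along at home, here’s a rough Python/Pillow sketch of that crop-and-stretch step (I actually used ImageMagick; the input filename and output folder here are just placeholders):

```python
from pathlib import Path
from PIL import Image

Path("frames").mkdir(exist_ok=True)                 # placeholder output folder
img = Image.open("selfie.jpg").resize((600, 600))   # placeholder input file

for x in range(600):
    strip = img.crop((x, 0, x + 1, 600))    # one pixel-wide column at offset x
    frame = strip.resize((600, 600))        # stretch the strip back out to a full square
    frame.save(f"frames/slit_{x:04d}.png")  # numbered frames, ready to become a movie
```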
Then I made a movie of the stretched slits moving through the image in sequence from left to right. Here’s a test which I called Selfies in Flatland:
Flatland is a reference to Edwin Abbott’s 1884 novel set in a two-dimensional world, which explains how we 3D people might experience a four-dimensional object. In the book, Square is visited by Sphere but he cannot see Sphere all at once. As Sphere moves through Flatland, Square sees a dot grow into a circle and then shrink back to a dot. Square sees Sphere in slices. So what I’m doing with these 2D photos is passing them through a single dimension, the Lineland of the book.
I heard about Flatland from Rudy Rucker’s excellent book The Fourth Dimension and How to Get There which I can highly recommend. (Out of print for some stupid reason but plenty of cheap copies on Amazon.)
Finally, there’s the sound. My big personal project of 2014 is currently based around an app called Phono Paper, which got some attention recently. It’s based on the Russian ANS synthesizer, which creates sounds from 2D arrays of light and dark points, lines and areas. Points at the top produce a high note, points at the bottom a low note, just like notes on the stave of a traditional score. The resulting sound is called Spectral Music, because the score is a graphical representation of the spectrum of the soundwaves, but the name also alludes to the cosmic, otherworldly feel of the music, which featured in Solaris, Stalker and other Tarkovsky movies.
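To give a flavour of the principle (this is not how the real ANS or Virtual ANS works internally, just a rough numpy sketch of the same spectral idea): each row of the image drives a sine oscillator, higher rows give higher pitches, pixel brightness sets the volume, and the columns are read left to right over time. The pitch range, read speed and filenames are my own arbitrary choices.

```python
import wave

import numpy as np
from PIL import Image

SR = 44100                     # audio sample rate
COLS_PER_SEC = 30              # read one image column per movie frame (30fps)
F_LOW, F_HIGH = 55.0, 7040.0   # pitch range in Hz -- an arbitrary choice

# load the "score" image as greyscale, brightness 0..1 (filename is a placeholder)
img = np.asarray(Image.open("score.png").convert("L"), dtype=float) / 255.0
height, width = img.shape

# row 0 (the top of the image) gets the highest frequency, like notes on a stave
freqs = F_HIGH * (F_LOW / F_HIGH) ** (np.arange(height) / (height - 1))

samples_per_col = SR // COLS_PER_SEC
t = np.arange(width * samples_per_col) / SR
out = np.zeros_like(t)

for row in range(height):
    # brightness along the row sets this oscillator's amplitude, stepped per column
    amp = np.repeat(img[row], samples_per_col)
    out += amp * np.sin(2 * np.pi * freqs[row] * t)

out /= np.abs(out).max() + 1e-9          # normalise to -1..1

with wave.open("drone.wav", "wb") as w:  # write a 16-bit mono WAV
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes((out * 32767).astype(np.int16).tobytes())
```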
Here’s a demo video of Phono Paper:
What’s interesting to me is that the scores that go in and come out of the ANS synth are very similar to photographs - rectangles of black and white dots arranged in a specific way to produce an aesthetic effect. This opens some new doors.
One of the best things I did last year was regularly attend If Wet, a salon-style event where sound artists and experimental musicians meet in a village hall to perform, present and discuss their work. I go to take photos as a favour to the organisers but the real benefit is hearing people on the cutting edge of manipulating and recording soundwaves talk about it in detail. As someone who thinks about manipulating and recording lightwaves with a camera, applying the ideas and concepts from If Wet to the relatively staid and standardised practice of photography has been really useful.
A very simple concept I took from If Wet came from learning about electronic transducers, which turn one form of energy into another. For example, a loudspeaker turns electrical signals into sound by vibrating the air, and a guitar pickup turns the motion of the strings into electricity. Simple stuff, but it got me thinking about converting photos into other signals, such as sound. Obviously this isn’t news, but it gave me a coherent framework to play in.
Namely, is there a relationship between the composition of a sound and the composition of an image?
I touched on this a few months back when playing with processing photos in sound-editing software, noting that an “echo” audio effect produced echoes within the image. But what of the elements of the image themselves?
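As a rough illustration of that earlier experiment (the sound-editing software differs in the details, but the principle is the same): if you read an image’s pixels as one long waveform, row by row, a simple echo, i.e. adding a delayed, quieter copy of the signal to itself, comes back out as a shifted ghost of the picture. The delay length and mix here are arbitrary.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("selfie.jpg").convert("L"), dtype=float)  # placeholder file
h, w = img.shape

signal = img.flatten()                    # read the picture as one long "waveform", row by row
delay = w * 10                            # a delay of ten rows' worth of samples (arbitrary)
echoed = signal.copy()
echoed[delay:] += 0.5 * signal[:-delay]   # echo: add a delayed, quieter copy of the signal

echoed = np.clip(echoed, 0, 255).reshape(h, w)
Image.fromarray(echoed.astype(np.uint8)).save("selfie_echo.png")  # the ghost appears ten rows down
```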
Anyway, back to the Spectral Music, which satisfies the concept of “transduction” nicely. The developer of the Phono Paper app has also produced a more advanced program called Virtual ANS, which mimics the venerable ANS synth and runs on Mac/Linux/PC/iOS/Android, so you should definitely check it out. You can load images in and see what they sound like. Woo!
To create a soundtrack that matched the visual slit-scanning, I stitched the selfies into a row and set the synth to “read” each column of pixels at the same speed as the movie, 30fps. Here it is playing in Virtual ANS:
So when you watch the movie you are hearing the sound of each slit of pixels as it appears to you. My hope is that the viewer can make the connection between the lines, the sounds and the original photo, though I have included a subtle progress line at the bottom to help navigate.
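For anyone curious, the stitching itself is simple. Here’s a rough Python/Pillow sketch of building that long score image (the folder and filenames are placeholders); at 30 columns a second, each 600px-wide selfie lasts 20 seconds.

```python
import glob

from PIL import Image

files = sorted(glob.glob("selfies/*.jpg"))       # placeholder folder of selfies
strip = Image.new("L", (600 * len(files), 600))  # one long greyscale score

for i, path in enumerate(files):
    selfie = Image.open(path).convert("L").resize((600, 600))
    strip.paste(selfie, (600 * i, 0))            # side by side, left to right

strip.save("score_strip.png")                    # load this into Virtual ANS
```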
This work is now complete. I’ll be moving on to the big personal project of 2014 next. Anyone got an old record player they don’t want?