Collective Photography – The Book


Over the last few months I’ve been slowly getting my head around what happened last year when I applied for, and received, my first Arts Council grant to run some photo walks in Birmingham. I went into the process as someone who was probably capable of being an artist but didn’t have the confidence to call themselves one. I came out of it as someone who, when asked by the registrar what to put in the occupation box on his wedding certificate last month, said Artist.

In March I half-joked that I’d got so much stuff in my head that I needed to write a book to get it straight. And then I looked at the sketchy notes I’d written and realised there was a book there. So I wrote it.

The book is called Collective Photography and it’s free to read/download in various ebook formats from Leanpub. (You can pay for it if you like, but there’s no obligation.)

The meat of the book is 10,000 words long, which isn’t that bad, and I’ve tried to keep it light and readable. It’s not the evaluation report, though I did need to write the book to get my thoughts straight for it.

The rest is a huge slab of appendices including blog posts, the funding application, questionnaire answers and a couple of interviews. You don’t need to read these unless they’re of interest.

Amongst other things I’m hoping this book is useful to other people like me who might not come from an art-school background and want to see how the Grants for the Arts process could work for them.

Please note that it isn’t 100% finished. I will be taking on board some feedback and tweaking the text in places, as well as adding some images and the evaluation over the next few weeks. I might even ask someone to do a not-shite cover! But if you go through the Leanpub process you’ll be notified when updates are available. I’ll also extract some of the newer pieces for this blog.

Please do let me know what you think of it and, for the next few weeks anyway, any comments on improving bits.

Download / read it here

Experimenting with repeatedly photographing projections

I had this idea a couple of months ago and have been itching to try it out. The idea is simple, and once again plays on Alvin Lucier’s I Am Sitting In A Room, a work I’ve been referencing for what seems like years now. The text rather nicely explains the work.

I am sitting in a room different from the one you are in now. I am recording the sound of my speaking voice and I am going to play it back into the room again and again until the resonant frequencies of the room reinforce themselves so that any semblance of my speech, with perhaps the exception of rhythm, is destroyed. What you will hear, then, are the natural resonant frequencies of the room articulated by speech. I regard this activity not so much as a demonstration of a physical fact, but more as a way to smooth out any irregularities my speech might have.

Here it is on YouTube:

The key thing that interests me here is something to do with slow entropy and finding, or not being able to find, the point at which the “pixels” (or their equivalent) no longer form a representation of the original thing, and then what the remaining void means in itself. Slow entropy is fascinating wherever you find it, from the You Are Here point on a poster map where repeated touching has worn a hole to desire lines worn in grass verges that weren’t meant to be walked across.

But I digress. This current experiment is slightly different to the strict Lucier and I think it’s going to take a few goes to get right. The basic idea is to photograph a portion of a room and then project that photograph back, aligning it as closely as possible with the room. Then I take another photograph of the projection in the room and so on.

This has involved me investigating projection mapping, something all the cool digital artists are poking at, specifically the demo version of MadMapper which is fairly easy to use.

Anyway, here’s the first attempt, three photos starting with a clean shot and then two projections.




A failure, to be sure, but if we don’t record our failures… Next I think I’ll try something simpler.

Revisiting the Outer Circle Bus Stop photos

On Sunday at the special Magic Cinema for Still Walking I showed a film I made in 2009 for Jon Bounds’ 11 Bus project, where he invited people to travel on Birmingham’s Outer Circle bus route on November 11th to see what happened. I decided to do it on my bike, with the task of photographing every other bus stop in the clockwise direction with the TTV contraption attached to my camera. While liking a few shots, I remember being slightly disappointed with the photos as a whole, and this film was an attempt to make sense of them.

The film takes the fortuitously titled song Outer Circle by local band Woodbine and runs through the photos in sequence, starting by my house in Stirchley and going all the way around through the afternoon into the winter evening.

Watching it again projected on a screen in a room full of people forced me to see it with fresh eyes and I was rather pleased with how it stood up. Hindsight is a wonderful thing, especially when explaining your art process, and I could see some really interesting themes in there.

The main one was this idea that street furniture like bus stops is a design constant. The 11 bus goes through a wide variety of districts and communities in Birmingham (I recommend it to all newcomers to get a sense of the diversity of Birmingham) but the bus stops are all the same. The repetition of this sameness emphasises the differences, which turned out to be a rather neat device.

Of course it helps that the music is great. I love the “11a, 11b” refrain towards the end.

Thanks to Andy of The Magic Cinema for recognising it was interesting and forcing me to make a higher quality version for screening.

Thumbnail Literacy

Pic by Joseph Kesisoglou

The Creative Hacktivism seminar at the not-even-started-the-refurb BOM venue had the usual mix of good and dull stuff but the highlight was probably Robert M Ochshorn, whose presentation reminded me of being at Resonate a few months ago. He’s one of those people who casually demonstrates something that is effectively from “the future” and then more casually mentions he moved on from that a couple of years ago. He also ran his presentation from the command line. No PowerPoint, just shell commands. Hardcore. Sadly his website has nothing on it relating to his talk – he did apologise for this – but I made copious notes in big exciting handwriting.

One of his goals is to visualise a whole film at once, to be able to visually browse a time-based medium. Related to this was a PDF reader where the thumbnails were displayed behind a floating reader window showing where in the publication you are. These are not novel. Video editing has a visual scrubber. PDFs have thumbnail views. But the way he integrated these into the reading / viewing experience was interesting.

His use of slitscan to visualise films obviously caught my attention, since I’m well into slitscan, though I’d not thought of it as a way of effectively thumbnailing a temporal sequence. A film still captures just 1/24th of a second. A slitscan shows, or at least represents, a much longer period of time.
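I don’t know how Robert builds his, but the core of a film slitscan is easy to sketch: take one pixel column from each frame and lay the columns side by side, so the x-axis of the result becomes time rather than space. Here’s a toy version in Python/numpy – the function name and the greyscale-array frame format are my own assumptions, not his code:

```python
import numpy as np

def film_slitscan(frames, col=None):
    """Build a slitscan from a sequence of frames.

    Each frame contributes a single pixel column (the centre one by
    default); the columns are stacked side by side, so each x position
    in the output corresponds to a moment in time.
    """
    columns = []
    for frame in frames:
        c = frame.shape[1] // 2 if col is None else col
        columns.append(frame[:, c])
    return np.stack(columns, axis=1)
```

A ten-minute film at 24fps would give a scan 14,400 pixels wide this way, which is why long sequences get recognisable textures: tracking shots drift smoothly, fixed-camera scenes produce steady bands, and jump-cuts show up as noise.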

But using slitscan as an indexing tool implies a level of literacy which we might take for granted in other forms. I asked Robert about this and he agreed. We do “read” thumbnails and try to make sense of their abstractions. This is not something I’d really considered before but it ties in with a lot of my thinking, bringing photographic composition into the realm of symbolism and such. One of those nice “click!” moments.

One of Robert’s projects was a visual index of every Godard film. The screen showed them all as slitscanned thumbnails which the viewer could scrub through, amongst other things. I was struck by how the traffic jam scene in Weekend, made with long tracking shots, jumped out and was immediately recognisable to me. Another scene, of a woman dancing in a window, was not necessarily recognisable but clearly showed a moving subject shot with a fixed camera. Similarly, you can tell when the film is full of fast jump-cuts because the scan becomes noisy.

Slitscan images made from films are interesting because they compress time into a single image. But if you don’t know how a slitscan is made you might assume it’s still an image of a single moment, not lots of moments compressed into one.

This evening I tried an experiment. I took that traffic jam scene from Weekend and turned it into a slitscan (extracting 10 frames per second). It looks like this. (click for full size)

Godard slitscan

I also squashed it into a 3:2 rectangle.


I then posted it to Twitter, telling people it was a scene from a film and asking them to name it. Here are the answers people gave:

  • The Italian Job
  • Monty Python and the Holy Grail (French castle scene)
  • Monty Python and the Holy Grail (Knights who say Ni)
  • Falling Down
  • Get Carter
  • Shaun of the Dead (back garden first zombie scene)
  • Four Weddings and the Funeral
  • Life of Brian (stoning scene)
  • Clockwork
  • Quadrophenia

The Italian Job was the most popular guess, and the cars do look a bit like Mini Coopers in red, white and blue.

What I think we’re seeing here is thumbnail literacy but for stills rather than for slitscan. Most people don’t know how a slitscan is made, even those who’ve been following my work. The idea that this represents 7 minutes or so of a film is not obvious. So people fall back on the more common method of looking for symbols and icons. Those colours, the era. Interesting that along with The Italian Job you also have Get Carter and Quadrophenia.

On reflection I maybe should have chosen a better-known film (only one person got it right) but the wrong answers were actually more interesting. They show there is some kind of dominant thumbnail literacy out there, and that if we want to use different sorts of visual abstractions to represent time and space we need to take it into account.

Here’s the original scene:


Spectral Songs of the Slitscanned Selfies

Spectral Selfies Screenshots

As I approach what is hopefully my big personal project of 2014 (watch this space) I spent an evening putting some of the elements I’ve been working on into a 10 minute video piece.

I call it Spectral Songs of the Slitscanned Selfies.

As the slit-scanner moves across the photograph, that pixel-wide column is stretched across the whole image in a translucent layer. Simultaneously the photos are fed through an ANS synth, which treats them as sound spectrograms and produces a unique drone. You can follow the progress of each scan via the light grey tick at the bottom of the screen.

The process went through a few stages and builds on lots of previous work of mine.

First off, the source material comes from the hundreds of thousands of selfie photographs I’ve been collecting since November last year. The selfies seem to be becoming raw material rather than essential to the work, and I don’t think I’m making any useful commentary on hypernetworked vernacular self-portraiture at the moment, but that’s fine. Maybe I’ll return to them later. For now the only notable thing is that they are face-shapes.

Next, I’m processing the images using a slit-scan technique. At its most basic this mimics the action of a flat-bed scanner by examining an image one column of pixels at a time. My method was to repeatedly crop the photo into 1x600px strips, saving each one in sequence, then stretch each strip back out to a 600x600px square. I did this with basic ImageMagick scripts.
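For the curious, the crop-and-stretch step can be sketched in Python/numpy rather than ImageMagick – the function name is mine, but it mirrors what the crop-then-resize commands do:

```python
import numpy as np

def stretched_slit(image, x):
    """One step of the slit-scan: take the 1-pixel-wide column at
    position x and stretch it back out to the full width of the image
    (the equivalent of ImageMagick's crop to 1x600 then resize to
    600x600)."""
    column = image[:, x:x + 1]                        # the 1px strip
    return np.repeat(column, image.shape[1], axis=1)  # stretch to full width
```

Running this for every x from left to right, and saving each result as a frame, gives exactly the sequence the movie is built from.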

Then I made a movie of the stretched slits moving through the image in sequence from left to right. Here’s a test which I called Selfies in Flatland:

Flatland is a reference to Edwin Abbott’s 1884 novel set in a two-dimensional world, which explains by analogy how we 3D people might experience a four-dimensional object. In the book, Square is visited by Sphere but cannot see Sphere all at once. As Sphere moves through Flatland, Square sees a dot grow into a circle and then shrink back to a dot. Square sees Sphere in slices. So what I’m doing with these 2D photos is passing them through a single dimension, the Lineland of the book.

I heard about Flatland from Rudy Rucker’s excellent book The Fourth Dimension and How to Get There which I can highly recommend. (Out of print for some stupid reason but plenty of cheap copies on Amazon.)

Finally, there’s the sound. My big personal project of 2014 is currently based around an app called Phono Paper which got some attention recently. It’s based on the Russian ANS synthesizer which creates sounds from 2D arrays of light and dark points, lines and areas. Points at the top produce a high note, points at the bottom a low note, just like notes on the stave of a traditional score. The resulting sound is called Spectral Music, from the score being a graphical representation of the spectrum of soundwaves, but it also alludes to the cosmic, otherworldly feel of the music which featured in Solaris, Stalker and other Tarkovsky movies.
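The real ANS uses banks of optical sine-wave generators and a much finer pitch scale, but the principle – each row of the image drives an oscillator, top rows pitched high, brightness controlling loudness – can be sketched as a toy additive synth. All the names and the frequency range here are my own assumptions, not how Phono Paper actually works:

```python
import numpy as np

def ans_drone(column, duration=0.5, sr=8000, f_lo=100.0, f_hi=2000.0):
    """Turn one column of pixel brightnesses (0..1) into sound,
    ANS-style: each pixel drives a sine oscillator, with the top of
    the column mapped to the highest frequency and brightness to
    amplitude."""
    t = np.arange(int(sr * duration)) / sr
    n = len(column)
    out = np.zeros_like(t)
    for i, brightness in enumerate(column):
        # row 0 is the top of the image -> highest pitch
        freq = f_hi - (f_hi - f_lo) * i / max(n - 1, 1)
        out += brightness * np.sin(2 * np.pi * freq * t)
    return out / max(n, 1)  # keep the mix within -1..1
```

A blank (black) column is silence, a single bright dot is a pure tone, and a face-shaped cluster of bright pixels becomes the sort of dense drone you hear in the piece.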

Here’s a demo video of Phono Paper:

What’s interesting to me is that the scores that go in and come out of the ANS synth are very similar to photographs – rectangles of black and white dots arranged in a specific way to produce an aesthetic effect. This opens some new doors.

One of the best things I did last year was regularly attend If Wet, a salon-style event where sound artists and experimental musicians meet in a village hall to perform, present and discuss their work. I go to take photos as a favour to the organisers but the real benefit is hearing people on the cutting edge of manipulating and recording soundwaves talk about it in detail. As someone who thinks about manipulating and recording lightwaves with a camera, applying the ideas and concepts from If Wet to the relatively staid and standardised practice of photography has been really useful.

A very simple concept came to me from If Wet after learning about electronic transducers, which turn one form of energy into another. For example, a loudspeaker turns electrical signals into sound by vibrating the air, and a guitar pickup turns the motion of the strings into electricity. Simple stuff, but it got me thinking about converting photos into other signals, such as sound. Obviously this isn’t news, but it gave me a coherent framework to play in.

Namely, is there a relationship between the composition of a sound and the composition of an image?

I touched on this a few months back when playing with processing photos in sound-editing software, noting that an “echo” audio effect produced echoes within the image. But what of the elements of the image themselves?

Anyway, back to the Spectral Music, which satisfies the concept of “transduction” nicely. The developer of the Phono Paper app has also produced Virtual ANS, a more advanced program mimicking the venerable ANS synth, which runs on Mac/Linux/PC/iOS/Android, so you should definitely check it out. You can load images in and see what they sound like. Woo!

To create a soundtrack that matched the visual slitscanning I stitched the selfies in a row and set the synth to “read” each column of pixels at the same speed as the movie, 30fps. Here it is playing in Virtual ANS:
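The sync arithmetic is simple: at one pixel column per video frame, a strip n columns wide takes n/30 seconds to play. A sketch of the stitch-and-time step (the names are mine, not Virtual ANS’s):

```python
import numpy as np

def stitch_for_scan(selfies, fps=30):
    """Stitch square selfie images side by side into one long strip
    and report how long the synth will take to 'read' it at one
    pixel column per video frame."""
    strip = np.hstack(selfies)
    seconds = strip.shape[1] / fps
    return strip, seconds
```

So thirty 600px-wide selfies make an 18,000-column strip, which reads out in ten minutes at 30 columns per second – which is how the piece got its length.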

Virtual ANS screenshot small

So when you watch the movie you are hearing the sounds of each slit of pixels as they appear to you. My hope is that the viewer can make the connection between the lines, the sounds and the original photo, though I have included a subtle progress line at the bottom to help navigate.

This work is now complete. I’ll be moving on to the big personal project of 2014 next. Anyone got an old record player they don’t want?