Drop-in Raspberry Pi workshop at Photographers Gallery


This Saturday 24th October I’ll be in London at The Photographers Gallery for their Geekender weekend, exploring how the Raspberry Pi computer can help us reinvent the camera. In retrospect it’ll probably mark the start of the next stage of my BOM fellowship in 2016, pulling together the theories of Vilém Flusser and the experience of the new Camera Obscura this year, where we realised we’d made a tool that had never existed before. Cameras in the modern age are pretty standardised. What happens when we change those standards? That’s the question I’ll be answering by making lots of unique cameras, changing everything from the way you handle them to the systems inside.

The Raspberry Pi is part of that. Essentially a low-powered Linux computer running on a mobile phone chip, its key features are its size and price. You can get a kit for around £50 and it fits in a cigarette packet. And because it’s utterly hackable and programmable you can make it do almost anything.
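To give a taste of what that programmability means for cameras, here’s a minimal sketch (not anything from the workshop, just an illustration) that turns a Pi with the official camera module into a one-button camera using the picamera and gpiozero libraries. The GPIO pin number and filenames are assumptions.

```python
# Minimal sketch: a Pi that snaps a photo whenever a button is pressed.
# Assumes the official camera module, the picamera and gpiozero libraries,
# and a push button wired to GPIO pin 17 (all assumptions, not workshop code).
from datetime import datetime

from gpiozero import Button
from picamera import PiCamera

camera = PiCamera()
button = Button(17)

camera.start_preview()
try:
    while True:
        button.wait_for_press()
        filename = datetime.now().strftime("shot-%Y%m%d-%H%M%S.jpg")
        camera.capture(filename)  # save a still to the current directory
finally:
    camera.stop_preview()
```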

I’ll be set up at the Photographers Gallery from 2-5pm seeing what this “almost anything” might be. It’s a drop-in session where you can help out or just ask questions.

Stirchley Artists Collective show

I will have a print on the walls at this.

2015-10-22 Stirchley Artist Collective Flyer

In celebration of the imminent re-opening of the Stirchley Baths as a community centre – after two decades of dereliction – local resident and graphic designer Kerry Leslie has brought together a number of Stirchley-based artists, designers and illustrators, each producing a piece of work for a collection inspired by swimming and Stirchley Baths.

Artists include: Carla Smith, Mark Murphy, Roxanna Collins, Matt Cox, Mya Munnelly, Pete Ashton, Jonny Graney, Dale Hipkiss, Kat Kon, Shona McQuillan, Ruth Harvey, Joe Flory and Beth Harris, with Justin Wiggan previewing his new commission by Stirchley Baths.

The exhibition is at P Café Stirchley, a new poetry and arts cafe, and runs from 27th October until 23rd December. As part of the exhibition there will also be a number of associated events and workshops.

Further details available on

The private view is on Friday 30th October at 7pm; there will be music, visuals, and drinks (including speciality beers) via Stirchley Wines and Spirits. If you would like to attend please email Kerry at

How InstaBeck was made


I was delighted to be chosen by Jo Gane to exhibit at The JHB Archive, her group show at Birmingham Open Media running from Sept 11th to Oct 3rd, with a launch on the evening of Sept 10th.

All the pieces are inspired by the scant information on the archival record for a missing sculpture, reverse-engineering the metadata into art, if you like.

My piece continues my work taking core samples of photographic images from social sharing networks, which I first developed with the IMG_4228 series. I started with the text of the archive record itself, which I split into 7 lines of 8 words each.

abstract sculpture by julian h beck rectangular mahogany

type walnut according to sculptor marked 25 in

paint near base two roundhead screws for attachment

to plinth on regtangular plinth covered in black

tape inside of plinth marked 23 1961 or

28 1961 number painted out and replaced by

new number see press cuttings file and minutes
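For what it’s worth, that split is trivial to reproduce in code. A minimal sketch, assuming the record text is held as a single string:

```python
# Split the archive record into lines of 8 words each (7 lines for 56 words).
record = (
    "abstract sculpture by julian h beck rectangular mahogany type walnut "
    "according to sculptor marked 25 in paint near base two roundhead screws "
    "for attachment to plinth on regtangular plinth covered in black tape "
    "inside of plinth marked 23 1961 or 28 1961 number painted out and "
    "replaced by new number see press cuttings file and minutes"
)

words = record.split()
lines = [" ".join(words[i:i + 8]) for i in range(0, len(words), 8)]
for line in lines:
    print(line)
```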

I put each word / letter / number into Instagram as a hashtag and saved the first eight images that came up. I then put these into a couple of grids.
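The Instagram searches themselves were done by hand, but the grid-building step can be sketched with Pillow. A rough version, assuming the saved images already sit in a local folder; the folder name, tile size and column count are all assumptions:

```python
# Sketch: tile equally sized thumbnails into a single grid image with Pillow.
# The folder name, thumbnail size and column count are assumptions.
from pathlib import Path
from PIL import Image

TILE = 200      # assumed thumbnail edge in pixels
COLUMNS = 8     # one column per saved image

files = sorted(Path("instagram_saves").glob("*.jpg"))
rows = -(-len(files) // COLUMNS)  # ceiling division

grid = Image.new("RGB", (COLUMNS * TILE, rows * TILE))
for i, path in enumerate(files):
    thumb = Image.open(path).resize((TILE, TILE))
    x, y = (i % COLUMNS) * TILE, (i // COLUMNS) * TILE
    grid.paste(thumb, (x, y))

grid.save("instabeck_grid.jpg")
```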

InstaBeck 1

InstaBeck 2

Unsatisfied with how these looked, I cropped each image to its central pixel (finding the average colour simply produces a lot of grey/brown) and resized it back up to 200x200px, giving a nice clean field of colour.
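That crop-and-blow-up step might look something like this with Pillow; the filenames here are assumptions:

```python
# Sketch: reduce an image to the colour of its central pixel,
# then blow that single pixel back up to a 200x200 field of flat colour.
from PIL import Image

def centre_colour_field(path, size=200):
    img = Image.open(path).convert("RGB")
    centre = (img.width // 2, img.height // 2)
    colour = img.getpixel(centre)            # colour of the central pixel
    return Image.new("RGB", (size, size), colour)

centre_colour_field("hashtag_image.jpg").save("colour_field.png")
```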

The final stage was to randomise the image. For this I used PureData, writing the following patch (available with images in this Dropbox folder if you want to pick it apart).

Instabeck PD patch

Which no doubt looks terrifying, but it’s actually pretty simple. Each of these…

Screen Shot 2015-09-01 at 13.41.56

…builds a square representing a word/letter/number from the description, and all the lines coming into that bit randomise which colour is shown.

To pre-load the work you hit the orange button, then the green button launches the video, changing a square every second; the speed can be adjusted using the blue buttons.

Over in the top-right is an optional bit of code which uses the computer’s microphone to alter the speed, so a noisy room will produce a frenetic display while silence will freeze it.
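For anyone who doesn’t read PureData, here’s the same idea sketched in Python with Pillow rather than the patch itself: one square per word, and each tick one randomly chosen square flips to the colour of one of that word’s eight saved images. The folder layout, tile size, frame count and one-second tick are assumptions, and the microphone control isn’t reproduced.

```python
# Sketch of the InstaBeck logic (not the PureData patch itself):
# a 7x8 grid of colour fields, one per word of the archive record, where each
# tick one random square switches to the colour of a randomly chosen image
# saved for that word. Frames are written to disk for assembly into a video.
import random
from pathlib import Path
from PIL import Image

TILE, COLS, ROWS = 200, 8, 7

def centre_colour(path):
    img = Image.open(path).convert("RGB")
    return img.getpixel((img.width // 2, img.height // 2))

# Assumed layout: one folder per word, each holding the eight saved images.
palettes = [
    [centre_colour(p) for p in sorted(folder.glob("*.jpg"))]
    for folder in sorted(Path("words").iterdir()) if folder.is_dir()
]

canvas = Image.new("RGB", (COLS * TILE, ROWS * TILE))
for frame in range(60):                      # one minute at one tick per second
    i = random.randrange(len(palettes))      # pick a square...
    colour = random.choice(palettes[i])      # ...and one of its eight colours
    x, y = (i % COLS) * TILE, (i // COLS) * TILE
    canvas.paste(Image.new("RGB", (TILE, TILE), colour), (x, y))
    canvas.save(f"frame_{frame:03d}.png")
```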

Here it is in situ during testing:

But for most of the exhibition a pre-recorded loop will simply run, not unlike this:

By a happy coincidence the work bears a striking similarity to one of Beck’s own paintings, and Jo sent me a photo.



The preview is on Thursday 10th September, 6-8pm. Please come along!

Deep Dream resources roundup


I’ve been following the Deep Dream tool since it emerged a few weeks ago with the idea that this is much more interesting than a bunch of freaky psychedelic images. We might not be able to say exactly why it’s more interesting, but there’s something there, I’m sure of it.

So here’s a list of the stuff I’ve seen to date. If you know of anything else please let me know.


Deep Dream is Blowing My Mind and more background info from Memo Akten make a great primer.

Google’s original explanation.

Why Google’s Neural Networks Look Like They’re on Acid – an attempt to explain just that.

As Art

Is it art? Linked specifically for Rich Oglesby’s answer.

Why Google’s Deep Dream Is Future Kitsch

Taking it further

Shardcore attempts to use it sparingly

Avoiding kitsch. A concerted effort to get away from the puppyslug. Also has a good explanation.

Deep Dream Music Video


Running deep dream on Windows and OSX. I was completely flummoxed until I found this.

Google’s deepdream code

How to make gifs and videos using DeepDream Animator

DeepDream Video

Online Deepdream services


There are at least two broad meanings of the term “focus” in relation to photography. The common one is to bring the subject into focus, to make it not-blurry. Effectively it is not the subject that is out of focus but the focal plane, which is either in the wrong place and needs to be moved closer to or further from the aperture, or is not deep enough and needs to be deepened by reducing the size of the aperture.
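To put rough numbers on that second case: with the usual thin-lens approximations, the depth of field deepens as the aperture is closed down (the f-number rises). A quick sketch, with illustrative numbers only:

```python
# Rough depth-of-field sketch using the common thin-lens approximations:
# hyperfocal distance H ~ f^2 / (N * c), near/far limits H*s / (H +/- s).
def depth_of_field(focal_mm, f_number, subject_m, coc_mm=0.03):
    f = focal_mm / 1000.0                    # focal length in metres
    c = coc_mm / 1000.0                      # circle of confusion in metres
    H = (f * f) / (f_number * c)             # hyperfocal distance (approx.)
    near = H * subject_m / (H + subject_m)
    far = H * subject_m / (H - subject_m) if subject_m < H else float("inf")
    return near, far

# A 50mm lens focused at 3m: stopping down from f/2 to f/8 deepens the field.
for N in (2, 8):
    near, far = depth_of_field(50, N, 3)
    print(f"f/{N}: sharp from about {near:.2f}m to {far:.2f}m")
```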

The other meaning of “focus” is slightly more interesting. Here we’re talking about the subject of the photograph, that which we want to draw attention to. There could be many identifiable things in the image but we want the viewer to ignore them and focus on one in particular. We do this in at least two ways.

The first is to use compositional techniques, such as the Rule of Thirds and creative lighting, to leverage cognitive biases in the viewer’s mind telling them what is important and what is irrelevant. We build up the image, as a painter might, to create a story which leads the viewer through the image, causing them to pause as they understand the subject through the visual cues that lead them to it.

The second is through framing. Here we decide what to include and what to leave out. Photography is in many ways a subtractive art. We create involving photos by excluding many of the things that make up reality and then asking the viewer to put them back. What happened before and after is removed as we take a slice through time. And what is happening beyond the edges of the photo is excluded, to be ignored or imagined but never known.

The edges of the photo have become interesting to me. Being in the camera obscura you have a sense that you’re seeing everything, but then someone walks into the frame and you’re surprised. Where did they come from? There isn’t supposed to be anything beyond the edges.

This blinkered view is embedded into photography. We zoom in and crop out to focus on the subject and the resulting image is better for it. This mimics how our brain processes the surprisingly wide visual field our eyes are constantly pouring into it. We are vaguely aware of things in the periphery but we mostly ignore them, focussing on a relatively small area in the middle. We mentally zoom in and crop, so it’s no surprise our cameras are made to do this too.

Wide-angle lenses do exist, but they look wrong. The barrelling of the fisheye is the most immediate problem, but they also push the background away, creating a distance between anything in the foreground and its context. The GoPro, frequently mounted on the subject, actually uses this to its advantage, emphasising the thrillseeker and making their context seem vast and awe-inspiring. They know it didn’t look quite like that in reality, that it looks wrong, but we on our laptops are ignorant that a trade-off has been made.

To focus is to exclude, to create voids. And in doing so it is to invite speculation and imagination, for the brain abhors a vacuum and will fill it with ideas. And that is where photography gets its power from.