Slitscan Selfies 1

It’s nice when two things you’ve been dabbling with, without quite knowing what they might be useful for, come together and reveal something that might actually work.

I started playing properly with slit-scan photography last summer (experiments are here) and I like how it deals with time in a very different way to the long-exposure photo. Here’s a nice collection of landmarks in slit-scan photography, and there are plenty of apps for the networked lens on your pocket computer (I like this one myself).

I also have the Slit Scan Movie Maker app for the Mac, which does the same thing but with videos, creating stuff which looks cool and weird but is also intellectually intriguing, like bending space-time or something. I hadn’t found a good use for it yet, though.

Over the last few months I’ve been collecting Selfies from Instagram. I have over 65,000, which is getting close to a usable sample of the 77 million tagged on Instagram. I’m not 100% sure why I’m collecting them. I know why I think they’re important, why they interest me, and what they represent in the wars of photography. But I’m not sure what I’m going to do with this collection.

The taxonomy thing keeps cropping up. I’ve been (very slowly) visually sorting them into three main groups (looking at camera, looking away from camera, in mirror with camera) and have been trying image-matching software to find similar poses (the DoppelSelfie project), which has been somewhat fruitful but is essentially just moving things into smaller buckets.

What I’m after is something that shows these photos as a mass, to find the order, or chaos, that emerges.

I made a video of 1500 Selfies with the photographer looking straight at the camera. They flash through at 25fps so you can’t dwell on individuals.
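If you fancy knocking up the same sort of flicker video, a minimal Processing sketch goes something like the one below. The filenames and the square 480px size are my own assumptions; it writes out numbered frames which can then be stitched into a movie at 25fps.

```processing
// Minimal sketch: show one selfie per frame, saving each frame
// out so they can be stitched into a 25fps movie afterwards.
// Assumes square 480px images named selfie_0000.jpg onwards.
int numPhotos = 1500;

void setup() {
  size(480, 480);
  frameRate(25);
}

void draw() {
  if (frameCount > numPhotos) {
    noLoop();  // all 1500 selfies shown
    return;
  }
  PImage img = loadImage("selfie_" + nf(frameCount - 1, 4) + ".jpg");
  image(img, 0, 0, width, height);
  saveFrame("flicker-####.png");  // numbered frames for the video
}
```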

I then put this through the Slit Scan app with 10-pixel gaps, scrolling upwards.
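I don’t know exactly what the app is doing under the hood, but the general shape of a slit-scan, as I understand it, is something like this in Processing - the video filename is made up, and I’m assuming the video is 480px tall like the frames above:

```processing
// Rough sketch of the slit-scan idea (not the app itself):
// each new video frame contributes a 10px-tall strip to the
// canvas, and the strip position scrolls upwards over time.
import processing.video.*;

Movie movie;
int slitHeight = 10;  // the "10 pixel gaps"
int y;                // current slit position

void setup() {
  size(640, 480);
  movie = new Movie(this, "selfies.mp4");  // assumed filename
  movie.play();
  y = height - slitHeight;
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  if (movie.width == 0) return;  // no frame decoded yet
  // copy one strip from the current frame onto the canvas at
  // the same height, then move the slit up for the next frame
  copy(movie, 0, y, movie.width, slitHeight, 0, y, width, slitHeight);
  y -= slitHeight;
  if (y < 0) y = height - slitHeight;  // wrap back to the bottom
}
```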

I like how the eyes pop out, and it’s fun to watch patterns emerge, but I’m not sure those patterns say anything.

The app also has a “Finishing Line” mode, which does something I should probably explain. It takes the first column of pixels from each photo and places them in a row (hence the 3:1 wide-screen format - this is 1500x480px). That forms the first frame. It then takes the second column of pixels from each photo and builds the second frame. Then the third, and so on until all 480 columns are accounted for. Let’s have a look:
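Before the video, here’s my guess at what that mode is doing, as a Processing sketch - the dimensions match those 1500x480px frames, the filenames are my invention, and you’d need to crank up Processing’s memory allowance to hold 1500 images at once:

```processing
// My guess at Finishing Line mode: output frame k is built from
// column k of every photo, so each frame compares the same
// one-pixel slice across all 1500 images. Assumes square 480px
// photos named selfie_0000.jpg to selfie_1499.jpg (my naming).
int numPhotos = 1500;
int photoSize = 480;  // square photos assumed
PImage[] photos = new PImage[1500];

void setup() {
  size(1500, 480);  // one column per photo, 480px tall
  for (int i = 0; i < numPhotos; i++) {
    photos[i] = loadImage("selfie_" + nf(i, 4) + ".jpg");
  }
}

void draw() {
  int col = frameCount - 1;  // which column this frame uses
  if (col >= photoSize) {
    noLoop();                // all 480 columns accounted for
    return;
  }
  for (int i = 0; i < numPhotos; i++) {
    // take column `col` from photo i and place it at x = i
    copy(photos[i], col, 0, 1, photoSize, i, 0, 1, photoSize);
  }
  saveFrame("finish-####.png");  // one output frame per column
}
```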

What you’re seeing in the video is a comparison of the same area of each photo across 480 frames. What’s interesting is how the colours change, with flesh-tones dominating in the middle.

A pattern emerges.

I tried adding a histogram to each frame to see if that pattern plays out in the data. The red bars plot the range of dark to light areas of the image, with pure black on the left and pure white on the right.
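The Processing for that is simple enough even for me - roughly this, assuming the current frame is already drawn to the canvas:

```processing
// Sketch of the brightness histogram overlay: count every
// pixel on the canvas into 256 bins, then draw the bins as
// red bars, pure black on the left, pure white on the right.
void drawHistogram() {
  int[] bins = new int[256];
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    bins[int(brightness(pixels[i]))]++;
  }
  int maxCount = max(bins);  // scale so the tallest bar fits
  stroke(255, 0, 0);
  for (int b = 0; b < 256; b++) {
    float h = map(bins[b], 0, maxCount, 0, 100);
    line(b, height, b, height - h);
  }
}
```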

It sort of does, although the histogram is measuring brightness, whereas I’m more interested in colour here. I could do more of an RGB analysis, but that’s stretching my meagre Processing skills too far. More motivation to learn stuff.
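For future me: I suspect the RGB version is mostly the same loop with three sets of bins, one per channel - something like this sketch, which I haven’t battle-tested:

```processing
// Per-channel histogram: same counting loop as before, but
// with separate red, green and blue bins, each drawn as bars
// in its own colour.
void drawRGBHistogram() {
  int[] r = new int[256];
  int[] g = new int[256];
  int[] b = new int[256];
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {
    color c = pixels[i];
    r[int(red(c))]++;
    g[int(green(c))]++;
    b[int(blue(c))]++;
  }
  int maxCount = max(max(r), max(g), max(b));
  for (int i = 0; i < 256; i++) {
    stroke(255, 0, 0);
    line(i, height, i, height - map(r[i], 0, maxCount, 0, 100));
    stroke(0, 255, 0);
    line(i, height, i, height - map(g[i], 0, maxCount, 0, 100));
    stroke(0, 0, 255);
    line(i, height, i, height - map(b[i], 0, maxCount, 0, 100));
  }
}
```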

So, another step up the ladder. What I’m liking here is that it’s not tied merely to Selfies. I’m starting to develop tools to interrogate pools of images which share a visual language and which are used to communicate in ways photographs haven’t been before. That’s pretty exciting.