Difference Point Clouds


Today was a classic Rainy Sunday Afternoon when it felt right to stay in and curl up on the sofa reading. The book I was reading was a seemingly dull computer tome from the O’Reilly stable with the uninspiring title Programming Interactivity, recommended to me by Mark Hancock after I decided I needed to learn the Processing programming language in order to do stuff like this.

Despite the delightful O’Reilly animals on the front it doesn’t look too promising, does it?

But I think this book is going to be the core of my work in 2014, allowing me to really work with the fundamentals of image creation, from understanding and capturing light to rendering images and movies with dots. In other words, photography.

It helps that the book is written for an artistic audience as much as a tech one. Here’s an excerpt from the section “Using Pixels and Bitmaps As Input”:

What does it mean to use bitmaps as input? It means that each pixel is being analyzed as a piece of data or that each pixel is being analyzed to find patterns, colors, faces, contours, and shapes, which will then be analyzed. […] You can perform simple presence detection by taking an initial frame of an image of a room and comparing it with subsequent frames. A substantial difference in the two frames would imply that someone or something is present in the room or space. There are far more sophisticated ways to do motion detection, but at its simplest, motion detection is really just looking for a group of pixels near one another that have changed substantially in color from one frame to the next.
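The excerpt describes the whole technique in prose, so here’s a toy version of that presence detection in Python. This is my own restatement, not the book’s code: the frames are hand-typed 2D lists of greyscale values standing in for real camera input, and the thresholds are invented.

```python
# Presence detection by frame differencing: compare a reference frame
# with a later frame, pixel by pixel, and flag "presence" when enough
# pixels have changed substantially in value.

def frame_difference(frame_a, frame_b):
    """Per-pixel absolute difference between two greyscale frames."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def presence_detected(reference, current, pixel_threshold=50, count_threshold=3):
    """True if enough pixels changed substantially between the frames."""
    diff = frame_difference(reference, current)
    changed = sum(1 for row in diff for d in row if d > pixel_threshold)
    return changed >= count_threshold

# An empty room, then the same room with something bright in one corner.
empty = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
occupied = [[10, 10, 10], [10, 200, 200], [10, 200, 200]]

print(presence_detected(empty, empty))      # False - nothing changed
print(presence_detected(empty, occupied))   # True - something is there
```

As the book says, real motion detection gets far more sophisticated, but this is the kernel of it: a group of nearby pixels changing a lot from one frame to the next.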

The contextualising of how these tools and techniques can be applied is really useful as I’ve been familiar with Processing and Arduino-type devices for a long time without really knowing how they might apply to my work. I’m starting to get a more nuanced idea now.

I had an idea a few weeks ago for a big physical project that tied together a few strands I’ve been thinking about this year. The first ingredient is the Camera Obscura, specifically the room-sized ones, which I’ve wanted to build since researching them for the Pinhole Camera workshop in May. I was particularly drawn to Abelardo Morell’s work contrasting the scale of the landscape with the scale of a living- or bedroom.

The next strand was the transducer which has haunted me since I learned what it meant at the first If Wet. The transducer they used there turned sound signals into physical vibrations but a transducer is just something that converts one form of energy into another. So a digital camera is a transducer which converts light into electricity. That excited me for some reason and got me thinking about photoresistors.

A photoresistor is a pretty simple thing. If a resistor is sort of like a tap that lets more or less electricity pass along a wire, a photoresistor opens or closes this tap depending on how much light hits it. Those lights which turn on automatically at dusk will use a photoresistor as a trigger, but they can be quite subtle. The Optical Theremin, for example, uses a photoresistor to control pitch. More light = high note, less light = low note.
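To make the tap metaphor concrete, here’s a rough Python model of how a photoresistor is usually read: wired in series with a fixed resistor as a voltage divider, so changing light becomes a changing voltage a computer can sample. The component values below are illustrative guesses, not measurements from a real part.

```python
# A photoresistor (LDR) in a voltage divider: as its resistance drops
# in bright light, more of the supply voltage appears across the fixed
# resistor, which is the point we measure.

def divider_voltage(supply_v, fixed_ohms, ldr_ohms):
    """Voltage across the fixed resistor in a simple two-resistor divider."""
    return supply_v * fixed_ohms / (fixed_ohms + ldr_ohms)

# A typical LDR might swing from ~1k ohms in bright light to ~100k in the dark.
bright = divider_voltage(5.0, 10_000, 1_000)     # roughly 4.5 V
dark = divider_voltage(5.0, 10_000, 100_000)     # roughly 0.45 V
print(bright, dark)
```

That swing from tenths of a volt to several volts is exactly the kind of analogue signal an Arduino-type board can read and pass on.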

Camera Obscuras and transducers combined in my head and got me thinking about building a massive camera, maybe out of a shipping container hung from a crane, with the back wall covered in a thousand or so crude photoresistors. These would send an analogue electric signal to the computer dependent on how much light was hitting that section of the wall, just as the electronic CCD sensor in a camera does. Turning those signals into pixels of varying shades of grey would give us a crude photograph or video. Which would be nice.

But all I’d done was build a big camera. It might look impressive but it wouldn’t be very interesting. So I carried on with the transducer idea. What if, rather than producing an image, it produced movement? I thought of those Pinscreen toys where a 3D image is created by resting an array of pins against an object. How far the pin extrudes can be given a value between a minimum and maximum, as can the light hitting the photoresistor. Putting it crudely, if the pins can extrude up to 10cm then pure white would equal 0cm, mid-grey would equal 5cm and pure black 10cm. And, of course, if the system were to run live the pinscreen would undulate as the light in front of it changed.

This was my big idea for a while. While half-formed and still without meaning it felt like I’d hit on something special. And of course it’s been done before. In 1999 Daniel Rozin built his Wooden Mirror which turns a live video feed into an array of wooden pixels lit from above. It’s a beautiful thing which he explains in these two short videos.

So, already done. That’s fine. There’s nothing original under the sun and it just means I have to think harder. And the purely visual representation thing wasn’t really working for me. While gorgeous and enchanting, Rozin’s mirror is essentially a big video screen, and video screens are pretty mundane these days. I also had issues with transducing the flat visual data into three-dimensional co-ordinates (known as a Point Cloud). It might be interesting and strange but it wouldn’t be elegant. I want to avoid weird for the sake of weird. For some reason that’s not Art in my mind. It needs to be beautiful.

Here’s a diagram from Programming Interactivity:

This is the result of a short program that compares two consecutive video frames and displays the difference. It shows movement but it also shows changing light, because when something moves the light reflected off it changes. We can look at this image and understand that the body is moving a little bit while the thumb is moving a lot. Meanwhile the background is static.

This notion of measuring changes in reflected light got me excited as it abstracted away from simple visual mirroring and allowed for the point cloud to be employed without this 2D into 3D baggage. Naturally the idea arrived fully formed in the bath, and here’s what I quickly tapped out while wrapped in a towel.

Not about the image - about mapping the changes between images. The changes in light. When we see differences between frames we’re seeing changes in the light. Photography / video is recording light. The model is based on a bitmap of differences between frames. A large difference creates a point cloud which undulates as the difference moves across the frame. By mapping changes in light we map movement.

Think of a camera pointing at a street. As people or cars move along the street they are recorded as changes in the light hitting the camera. These changes are measured and the values sent to an array of pins or pistons which extend and withdraw accordingly. The pins create a carpet or wall which undulates as the light changes.
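The street-to-pins pipeline can be sketched in Python too, with invented frame values standing in for the camera: one pin per pixel, travel proportional to how much that pixel changed between frames. The 10cm range is carried over from the pinscreen idea above.

```python
# Frame differencing driving a pin array: static parts of the scene
# leave their pins flat, movement pushes pins out in proportion to
# how much the light at that pixel changed.

MAX_TRAVEL_CM = 10.0

def pins_from_frames(prev_frame, next_frame):
    """One pin height (cm) per pixel, scaled by how much it changed."""
    return [[abs(a - b) / 255 * MAX_TRAVEL_CM
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(prev_frame, next_frame)]

street_before = [[40, 40, 40], [40, 40, 40]]
street_after = [[40, 40, 40], [40, 168, 40]]   # something passes through one cell

pins = pins_from_frames(street_before, street_after)
print(pins)   # only the pin under the moving thing rises, to roughly 5 cm
```

Run live, frame after frame, the raised patch would travel across the pin array as the car travels along the street: the undulating carpet from the notes.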

Here’s a rough demo I threw together using some ImageMagick commands on frames extracted from some traffic camera footage. First the original.

Now just the differences between frames, removing everything except the moving traffic and people.
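For anyone wanting to try this, the difference step can be reproduced with ImageMagick’s difference compositing, something like the commands below. They’re shown here on two synthetic frames rather than the traffic footage, and may not match exactly what I ran; options can also vary between ImageMagick versions.

```shell
# Make two toy frames: a flat grey background, then the same
# background with a bright rectangle "moving" into it.
convert -size 64x64 xc:gray20 frame1.png
convert -size 64x64 xc:gray20 -fill white -draw "rectangle 20,20 40,40" frame2.png

# Absolute per-pixel difference between the frames:
# static areas go black, only the change survives.
convert frame1.png frame2.png -compose difference -composite diff.png
```

Applied pairwise over a folder of extracted video frames, this gives the frames-of-differences sequence shown above.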

It’s obviously not there yet but it feels a lot more elegant in thought and design. Still it’s just an idea. Building the thing will take quite a while and probably cost a fair bit. Something for 2014-15 then.

So what has this got to do with Photo Walks in Digbeth? I don’t know and I’m not sure it matters. One of the things I’m investigating is the ubiquity of the camera now it’s been reduced from chunky expensive equipment to a tiny sensor on a phone. Exploring a place with a camera feels like an increasingly vague act as the camera shrinks and threatens to disappear. So thinking about a camera as a sensor that records the environment and renders it in some form is a useful process.

That and I’m itching to build something BIG next year.


Couldn’t get this Processing script to work last night but managed to today. Here’s a 2-tone video of light differences between frames (aka motion). It wouldn’t make for a great sculpture because it’s just On or Off but imagine it with 150 or so shades of grey.
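Going from On/Off to 150 or so shades is just a quantising step. A sketch in Python; the 150 is the number mused above, not anything measured, and the 0–255 range is the usual greyscale assumption.

```python
# Snap a difference value (0-255) onto one of n evenly spaced grey
# levels. With levels=2 you get the On/Off video; with levels=150
# you get something smooth enough to drive the pins.

def quantise(diff, levels=150):
    """Round diff to the nearest of `levels` evenly spaced steps in 0-255."""
    step = 255 / (levels - 1)
    return round(round(diff / step) * step)

print(quantise(0))               # 0 - no change stays flat
print(quantise(255))             # 255 - full difference is preserved
print(quantise(100, levels=2))   # snaps all the way to 0 or 255
```

The interesting middle ground is everything between those extremes: with 150 levels, small flickers of light become small ripples rather than vanishing below a single hard threshold.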