James Bridle — Waving at the Machines
James Bridle’s closing keynote from Web Directions South 2011 was a terrific end to an amazing couple of days, but don’t despair if you weren’t there. You can watch a full-length video, or even read a transcript with the bonus of all the links James refers to.
Like what you see? Then you’d better get yourself along to Web Directions 2014!
Thank you for having me. So it’s not actually the robot-readable world. I’m calling it Waving at the Machines. It’s an aspect of the robot-readable world that may or may not become clear later. The subtitle of the talk is, “It’s 2011 and I Have No Idea What Anything Is or What It Does Anymore”, which is what my friend Tom Taylor always says when I talk to him about the sort of things I’m going to talk to you about now. I know I’m the only thing standing between you and beer. I’m sorry about that. I shall try and be entertaining in the meantime. I promise you there is nothing to be learned here, only possibly to be interested in.
It starts with this. I want to introduce you to some friends of mine. I walk around too much for these things. These are my friends, the render ghosts. The render ghosts live in the buildings we haven’t built yet. The render ghosts inhabit these spaces of the imagination that we’re preparing for, that we’re designing, that have not yet quite come into being in the physical world, but are kind of imminent in it. They go about their daily lives, seemingly, most of the time, unconcerned. They drink coffee. They walk to work. They stand in the sunlight. They go back and forth with their cups of coffee in all directions. You see them all over the place. You don’t always notice.
They inhabit a very bizarre world. It’s very strange-looking, sometimes more than others, because it’s a world of the imagination. It’s a world that hasn’t been entirely formed yet. It’s still coming into being. But it’s very much a world that shares boundaries with ours. It’s very close to ours. And you can see them looking around in it, with the same spirit of interest and curiosity that anyone exhibits in a new world. You see this couple up here, kind of looking around, trying to make sense of this place.
Sometimes you can see them looking back out at you, looking across this increasingly fuzzy border, this threshold. Sometimes they look out with some suspicion or possibly even fear, because they know that this world is fragile. It’s transient. It hasn’t come fully into form yet, and we’re not quite sure what’s going to happen to it. And so in the meantime, they party. You see them dancing. That’s good. You see them occasionally starting to fall in love, because that is what we do in transient passing worlds. We fall in love. And we see the children playing. And this is good.
But mostly they stand and they look out. And they look out nervously and wonderingly, wondering what’s going to happen. Because they live in these imaginary places. And if you look very, very closely at these imaginary places, you can start to see the grain of them, the outline of them, which is pixelated, which is digital. Because these spaces of our imagination are entirely digital now. This is where we do our thinking, this notional space in which we imagine possible visions of the future, which is what these are.
So what I’m going to talk about today, obliquely, is a project that I’ve been sort of accidentally engaged in for the last six months or so, to which I gave the name “The New Aesthetic,” which is a rubbish name but it seems to have taken hold. And people are responding to it, which is good. And I’m going to try and talk through some of the symptoms of that project, this way of seeing, that is itself about ways of seeing. And this talk is about the aesthetics of that. This idea extends in all directions and through all forms of media and technology. But because I have nice big screens here, I’m going to show you a lot of pictures of it.
I started noticing things like this in the world. This is a cushion on sale in a furniture store that’s pixelated. This is a strange thing. This is a look, a style, a pattern that didn’t previously exist in the real world. It’s something that’s come out of digital, out of a digital way of seeing that represents things in this form. The real world doesn’t, or at least didn’t, have a grain that looks like this. But you start to see it everywhere once you start looking for it. It’s very pervasive. It seems like a style, a thing, and we have to look at where that style came from, and what it means, possibly. Things that previously would have been gingham or lacy patterns are suddenly pixelated. Where does that come from? What’s that all about?
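As an aside, that pixelated “grain” is easy to sketch in code: classic pixelation is just averaging an image over square blocks and letting each block stand as one large, flat pixel. A minimal sketch in plain Python, on a made-up greyscale grid rather than a real image:

```python
# A sketch of the pixelated "grain": downsample an image by averaging
# square blocks, so each block becomes one large flat pixel.
# The image here is a made-up 8x8 greyscale ramp, not real data.
def pixelate(img, block):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [img[y][x]
                     for y in range(by, min(by + block, h))
                     for x in range(bx, min(bx + block, w))]
            avg = sum(cells) // len(cells)
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = avg
    return out

img = [[x + y for x in range(8)] for y in range(8)]  # smooth gradient
big = pixelate(img, 4)  # now just four flat blocks: the pixel look
```

The same effect in an image editor is a downscale followed by a nearest-neighbour upscale; the block average is what gives each “pixel” its single colour.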
And as I say, it’s everywhere. This is in a department store in London. And I was in there last week and I see this. This is a mannequin wearing some very nice Lanvin clothing. And his head is entirely pixelated. He’s becoming something, or he represents something that we might become, in the most literal way. He’s a fashion mannequin. And the way we’re choosing to represent that is a sort of digital cloud.
This is a TV advert, a still from a TV advert in the UK for a catalog retail business and mail-order business. And they had to find a way of representing getting things from the internet. Internet shopping, which is how a lot of people now work with the internet. And they chose to do this through this aesthetic of pixelization, this instantiation of virtual things in the real world. This is how we imagine them coming into being. And then you start to see them actually in the real world.
This is a plan for an installation in Zurich by Carina Ow. It’s a visualization of local Wi-Fi activity. So we know, as you saw in the video by Timo Arnall in Mike Kuniavsky’s talk this morning, that when we’re walking down the street there’s this activity going on around us all the time that is totally invisible. And what Carina Ow is doing here is setting up these pillars that vibrate and flux with these pixelated images, representing the Wi-Fi in the area. This is a making visible of something invisible and digital, using pixels to represent it.
And more and more we see things like this. This is street art in New York. It’s a three-dimensional representation of pixels. Some people call these things voxels, an actual solid thing that doesn’t belong in the world. It belongs in digital, the world of the screen, but it’s sort of becoming in the world. And you look at this thing and there’s so much going on here. This is the natural way that someone who’s grown up with eight-bit video games sees the world. This is the grain and resolution of it.
These are sculptures by Shawn Smith. There’s going to be an ongoing problem with this, that if you sit way at the back, you might not see quite how pixelated these things are. There’s a whole different art-historical dissertation about what that means, the distance of the viewer. But these are eight-bit representations of animals in the real world. And there’s a strange thing going on with this, because as a generation who’s grown up with these kind of images, there’s a retro quality to these. But there’s also a kind of an insistent futurism of wanting to see these representations coming into the world and becoming real. Because we’ve grown up knowing things like this should exist, and we’re making them real.
This is a sculpture by Douglas Coupland in Vancouver, of an orca. It looks like it’s been pixelated in the photograph, but it’s been pixelated in real life. And for someone like Douglas Coupland to be doing this, the author of Generation X and Microserfs, terms that have defined a literary aesthetic around this stuff, is kind of extraordinary. He’s a terrible artist, I think, but he’s an incredibly good writer. But I still really like this sculpture– sorry, Doug. I love you.
And so fashion is taking this on. You’ve got heels here made out of pixels. You’ve got this pixelated face effect, mask, going on here. Shoe designers are good for this stuff. This is the Lo Res shoe by United Nude. This is, again, a making real of a digital artefact, something seen at low resolution and carried on out into the world as an aesthetic. Same with these shoes by Marloes Couture. These are 3D, three-dimensional printed shoes. Again, the same low-resolution aesthetic, this view of something coming into being, that this is a work in progress maybe. This is the start of a modeling process.
And you see it in the kind of glitchy effects on things as well. You hear it in glitch music a lot. Huge amounts of modern electronic music uses glitch sound effects. And you see it coming through in visuals. It’s no accident that one of the lead singles of this album, of which this is the cover– Man Alive by Everything Everything. One of the lead singles is called “Photoshop Handsome.” This is something that these artists are concerned with, and they’re bringing it across in this aesthetic of knowing that this is digital imagery. They’re making this connection very clear.
Minecraft has a lot to answer for here. Minecraft is awesome. What’s so strange about it is that the creator knew that, as a small project, he could go a long way with gameplay and interaction without worrying so much about the graphics. But people have taken to the graphics to this extraordinary degree. And again, making these things come through in the world, giving the real world the grain of the virtual.
This is relevant to what Anne Galloway was talking about at the beginning of the conference. This is a Farmville strawberry cow, instantiated in the real world. It’s logical in a game that a cow that produces strawberry milk would exist, and that it would look like that. But when you create that in the real world, you not only ask questions about the virtual versus the physical, and our different rules for these worlds, you bring in a whole bunch of other questions as well, about genetics and cloning. And you can use archetypes from that world to talk about things in this world.
This is doing a very similar thing. It’s fine to like things in the virtual world. It’s a simple interaction that becomes almost meaningless when we try and carry it over into the physical world. Our real interactions are more nuanced and more complex than this. The purpose of this kind of thing is to make us question what’s actually happening when we like something. What’s actually happening when we poke someone? These things which we have metaphors for in one world don’t translate very easily into other worlds. But these worlds are colliding now to such an extent that we need to try and think about them and understand them.
This isn’t so much pixely, but it’s very digital. And I really love it, mostly because it’s incredibly beautiful. But it’s no accident it looks like this, in many ways. This is a pitch– I don’t think it won– by the Access Agency to Virgin Atlantic for new livery for their planes. They commissioned a guy called Andy Gilmore, who is an amazing illustrator, whose illustration style is very brightly coloured. It’s very pixelated. It’s very much this pixelized digital aesthetic.
And they chose it specifically because it makes things look like they belong in the digital world, and their entire pitch as an agency for this job, for this design, was that we want people to want to photograph these planes. We want them to take images of them, share them on the web, spread them around, because they’re ready-made digital artefacts, because they look like this. Because we’ve prepared something in the physical world for its entry into the virtual.
What about this? This is slightly more disturbing. This is a Leopard 2 battle tank in urban warfare camouflage. This is an actual camouflage scheme that’s been developed for very specific purposes. And I’m going to come back to camouflage later, because it’s a big part of this, I’m starting to think.
It’s not possible, as someone digital, to look at a building like this, which is a real building in the real world, and not see pixels, not see the digital, not see something bleeding through from digital design. This is a winery in Spain, but you can’t imagine this building without understanding the technology that presaged it. Or even more so, a building like this. It’s a health department in Bilbao by Coll-Barreu Architects. This building can’t exist without digital technologies. The whole shape of it– not just the look of it, but the ways in which the builders are made to understand how to put things together in order to make it– is entirely digital. It’s a product of digital tools.
And you see this all over modern architecture in particular. If you look at the work of Norman Foster, or OMA, Rem Koolhaas’s practice, or Zaha Hadid, these kinds of architects, their work comes out of their use of digital tools. It’s a making visible of those tools. There’s a whole school of architecture called parametricism, named for the parametric design software that allows complex forms like this to be generated. These are like eruptions of the digital into the physical world.
This building I am completely dangerously obsessed with. It’s a building in East London, and I literally stumbled upon it while out walking and saw it, and I’ve been puzzling over it ever since, and frankly it’s to blame for all of this. It’s a data centre, which is incredibly significant, because if you know anything about the architecture of data centres, they’re usually very anonymous structures. They’re usually big sheds. We have this notion of the cloud, like the cloud is some magic faraway land where computing is done, and it’s not big sheds on ring roads filled with servers. The cloud is a lie. The cloud looks like sheds. And that’s a terrible thing, because the network is awesome. And yet we’ve never figured out a way to– we sort of try to hide it away and tidy it away.
And so a building like this stands out, and it stands out very deliberately; it’s designed to stand out. It’s made of very new materials. It’s got this kind of louvred top. It’s seven stories high. It contains a lot of very important equipment, but it’s part of something even larger, and there’s an attempt to display this. There’s this pattern you see on the front, which the architects refer to as a disruptive pattern, as though it’s camouflage, when it’s clearly intended to do the opposite: to stand out, and also to show that it’s digital. The architects talk about the building as both a new industrial architecture and a new kind of digital infrastructure. They talk about digital real estate. This is the skin of the network. This is what the network looks like, made physical in the world. And there’s something incredibly powerful in that, something that we haven’t yet figured out and we’re still playing with.
Artists do awesome stuff with this. This is a detail of the window in Cologne Cathedral by Gerhard Richter. It’s a window that was blown out in the Second World War. It had plain glass in it for 50 years. He was commissioned to do the new window. He’s an atheist– yeah. The bishop refused to attend the unveiling of this window, because he was so pissed off about this whole thing. It’s amazing it happened, but it happened. And he made this window that’s meant to be for everybody. And he did that by using digital imagery.
The window itself is based on a previous painting called 4096 Colours. Most of you will know that there are 4,096 web-safe colours. That’s not an accident. There are apparently also 4,096 stained-glass-safe colours. The colours used in this window vary slightly from the web colours, but the two things are related. And this window is for everybody, because digital is for everybody. It’s something that can contain everything, when these worlds overlap.
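For what it’s worth, the arithmetic behind that 4,096 figure is the three-hex-digit #RGB shorthand: four bits, so sixteen levels, for each of the red, green and blue channels. A quick sketch in Python:

```python
# 4,096 colours = 12-bit colour: 4 bits (16 levels) per channel,
# as in the three-digit #RGB hex shorthand.
levels = 16  # values 0-f per channel
total = levels ** 3
print(total)  # 4096

# Enumerate the whole palette as #RGB hex codes, #000 through #fff.
palette = [f"#{r:x}{g:x}{b:x}"
           for r in range(16) for g in range(16) for b in range(16)]
```

Sixteen levels cubed gives 4,096, which is why the figure turns up wherever this shorthand does.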
This is a series of paintings by Jens Hesse, which uses datamoshed videos as their source. Datamoshed videos are deliberately glitched videos, videos where different frames are elided and corrupted to produce these strange moving images. It started out as a weird thing. Now you get it in Beyoncé videos. Beyoncé, by the way, is the patron saint of everything I’m talking about. I’m not even kidding. But these paintings are strange. This is a way of seeing something that is only made possible by these recent technologies and the way we’re changing. And now people are making oil paintings out of it.
You know what this is, right? You can look at that and go– it’s an amazing work — Helmut Smits’ Dead Pixel in Google Earth. You know what this is representing, because we’ve all seen Google Maps now. We’ve all seen the satellite view. We’ve all seen Google Earth. 500 years ago, people painted imaginary views from the air. There were such things as panoramic views, as bird’s-eye views, and they were entirely a work of the imagination of the artist. And about 200 years ago, they started to be able to paint them from balloons. And you got things approaching true panoramas.
And now we have satellites everywhere and we can see everything– almost everything, I’ll come back to that. But we’re aware. Even though we’re standing on the ground– seeing a photo of it as though you’re standing on the ground– you know what it looks like from there, because the machines have allowed us to see from the sky. And that is a new thing, and a strange thing.
And so when you see a picture like this, you see pixels, right? Those aren’t pixels. Those are fields. They’re irrigated fields on the border of Namibia and South Africa. But because we expect to see things in a certain way, our understanding of where the border between physical and digital lies has changed, because we’ve experienced this kind of imagery and these kinds of views before, and we’re unconsciously comfortable with them being mixed up.
This is an artist who paints oil paintings based on Google satellite views, because this is landscape painting now. Before landscape painting would involve hay rakes and fields, and later on occasionally cities, but mostly rural views, from a human perspective. This is landscape painting as painted by satellites.
This is, I think, the last of these artworks. Tele-Present Water by David Bowen. This is a kinetic artwork. This thing moves. And what it is– it’s an artwork connected to a buoy in the Pacific that monitors a five-meter-square area of the sea’s surface and transmits that information back to the National Museum in Poland, where it animates this work. So this work provides a one-to-one mapping with the surface of the sea in the Pacific X thousand miles away. And so we’ve got this ability to see vast distances, to appreciate something that’s going on so far away through the network, and to represent it, we use this thing that looks like a wireframe, something that looks very digital but that we understand represents something else.
This is a weird one. So I’ve been talking about these pixels coming into the world, and various things, but what they’re all built on, it seems to me, or why this illusion is occurring is because of these new ways of seeing. These strange new perspectives we have on the world. The camera in your MacBook looking back at you, or in this case, the guy who stole your MacBook. So this guy got his MacBook stolen. He could log into the computer remotely and snap these pictures. This is someone who has no idea their photo is being taken by the machine they’re holding in their hands, which I love even though it’s slightly disturbing.
Computers allow us to see through time. This is the Aral Sea. The circumstances of its disappearance have been much debated. There are a lot of arguments that it never had that extent– it was a largely historically unmapped area. But satellites remember. We can look back through the historical data captured by satellites to make real comparisons across the world, in ways that humans alone couldn’t, comparisons that were subject to much more debate before we had computers to remember this stuff for us. And again, it’s this top-down, machine-aided view that we haven’t seen before.
This is a whole series of images of before and after photos. Before and after photos are not new, and what’s not so well-represented in these pictures– these are the floods in Brisbane. These are Joplin in the US, where there was a tornado last year. What’s not very well represented by these photos is these are dynamic. This line down the middle, if you go to the website– if you go to The New York Times or ABC News, in the last case– you can take that slider and you can move it backwards and forwards and interact with these views.
No one took the before picture with intentionality. That’s a really key difference between these and your standard before and after pictures. There wasn’t supposed to be an after to these pictures. But we took them and we put them together, and now we have the ability to reach back through and interact with them. This sliding, eliding between one and the other– these are similar things from the Guardian on the London riots. No one went out and took a photo of this betting shop as a before photo. But it’s there now. It’s there in the memory of Google Street View. Google Street View is becoming a historical record that we can go back to and work with, moving through history and time to see the differences that occur.
And you see this occurring elsewhere. I don’t believe Greg Kessler’s Model-Morphosis series, which is before and after pictures of models, was based on satellite views or the possibilities of this. But it becomes possible, in the way that it becomes possible on the web, because we’ve interacted with these other ways of seeing. It uses the same slider effect The New York Times developed for looking at before and after historical photos. So these ways of seeing bleed through into all our other ways of seeing and interacting with images and, as I said, everything else.
dearphotograph.com is a lovely website. It consists entirely of images like this. It’s a very physical AR: people taking old photos back to the places where things happened and snapping a two-screen view of them. The two-screen view, again, warrants an entire discussion on its own. But there’s something magical occurring here that wouldn’t have occurred without our previous experiences of screens, and being able to take images, share them, and montage them digitally to produce things that look like this.
Or things that look like this. This is a cousin, I think, to Tele-Present Water. This is the National Memorial for the Mountains, a web project seeking to inform people about environmental problems like the Hobet Mountaintop Removal Complex in West Virginia in the States, where they’re literally removing 10,000 acres of mountain so it can be mined. And to raise awareness of it, they’re taking the data and the scale and the shape of that thing and putting it on Manhattan, at a scale that people can understand. You’ve got a shifting of the viewpoint made possible by digital mapping, in order to move this information through space, in order to make it more comprehensible.
Hawk-Eye is insane. There’s an amazing blog post I read a while back. ESPN was having a slow day, so they put two of their baseball commentators on to watch the World Cup cricket final. These were two Americans who had never seen cricket before. And basically they had to construct it from first principles while watching it, which is just the most awesome thing to read and watch, because cricket is, obviously, ridiculous. But the thing that happens in the middle of this conversation, as they work it all out, is they go to Hawk-Eye. And the Americans have never seen anything like this, because you don’t get it, apparently, in American sports. And they go, hang on a second. Wait. Something just happened over there that we don’t really understand. Everyone on the ground has turned round to watch a computer game representation of what just happened on this screen over there, and that’s going to decide what really happened. What? Whoa.
And that’s what’s happening here. We’ve decided that Hawk-Eye is better at this than humans. We’ve built a system that has better vision of the world, better memory and better acuity than humans do. And so we’re kind of giving up a certain part of the decision-making process to Hawk-Eye. This happens in cricket and tennis. And the really interesting argument is what’s happening in football, in soccer. Sepp Blatter, who’s head of FIFA, the world’s football governing body, is very anti this stuff. There have been a lot of contested games, where a ball has or hasn’t crossed a line and the umpire’s given the wrong decision. They’re like, if we had a computer here, this wouldn’t be a problem.
And Sepp Blatter’s argument is that you change the very nature of the game by bringing this other intelligence into it. It stops being a game between people, and becomes a game between people and computers, or people with computers against other people with computers, or something else in there. But it ceases to be what it was before. Sepp Blatter’s view is that the umpire is as much a part of the game as the players are. And so his decision stands. When you bring in Hawk-Eye, when you bring in this other eye, something fundamental changes in the nature of the game. We haven’t figured out what that is yet. But it’s very visible in things like this.
My camera sees people’s faces. You can get kind of weirded out by eruptions of the future in things, but the fact that my camera knows what people’s faces look like, and it’s not even a very expensive camera, is one of those things that occasionally pokes you and goes: you are living in the future. It’s a very strange thing that these little devices we have are starting to see in this way. We’re training them to see in ways similar to the ways that we see. And as with all these things, there are strange results to that.
This is an advert for Nikon, which is disturbing in more ways than they intended. The tagline is, Nikon– whatever– sees up to 12 faces. So I don’t know how well you guys can see it, but the camera is seeing the two girls on the bed. It’s also seeing the dude hiding behind the curtain at the back. People at the back got there now, good. It’s a horrible advert. But it’s also strangely revealing of what this technology does, in the way that it sees things that we don’t see. That’s the thing. It’s adding an extra layer of vision.
The point of this thing is to focus on faces, to take better photographs of them. To notice when people blink, for example, and take the photo immediately after they blink, or get their best smile, that kind of thing. And yet it’s revealing things that weren’t, we thought, germane to the photos. It also reveals the camera’s inherent biases.
Nikon cameras of certain generations are basically racist. They don’t see certain Asian faces. They’ve got certain software inside them that breaks what they’re supposed to be doing in this case. And in fact this reveals the limitations, but essentially, a different way of seeing. Of course the camera isn’t racist, but it’s been programmed in a certain way that is meant to emulate the way we see. The camera does not have the same interests that we do. Technology has subtly different interests to ours. And this is becoming increasingly important.
Every time you walk through an airport, for example, there are cameras on you. They’re doing gait analysis. They can increasingly tell who you are just by the way that you walk, but they also do emotional analysis on your face, which is why you have a studiedly neutral face when going through security now. But they can tell even more than humans do. This is weird. The computers are looking at us, and they’re trying to work out what we’re thinking by cues that we’re kind of giving them but they’re increasingly evolving on their own. It’s strange. It’s bizarre.
I’ve written, and many others have written in the past, about how weird instant filter effects are. Apps like Hipstamatic and Instagram– this is a series of photos from the Iraq War by Balazs Gardi, an embedded photojournalist, that he took with his iPhone and uploaded with Hipstamatic, which is crazy enough. But what’s particularly weird about this is the way digital manipulation of imagery changes our feelings about those pictures. Digital photography changes our view of events. With traditional photography, there was a distance between the image-making process and the image-viewing process. There was a whole thing you had to go through, of printing and developing.
This is instant now. There’s this kind of instant review. You can take a photo and see back instantly. It instantly makes that moment that just passed a thing that happened, a thing in the past, a memory. If our bodies are machines for negotiating space, our minds are machines for navigating time, and digital photography and technology in general is aimed squarely at our idea of time and our place in it. And there’s no stronger view of that than photos and the ways in which they’re presented back to us and change our perceptions of ourselves in time.
This is a book that I made earlier this year called Where the Fuck Was I? You may remember, back in March or April, there was this big hoo-ha around the iPhone storing locations, pretty much unencrypted, without informing the user. It turned out that ever since the last iOS update, the phone had been storing every single location fix it made, essentially tracking us without our knowledge. And all this data was in the phone, and it was fairly easily accessible. So I took it all out– I had about a year’s worth, since I’d last updated my phone– and I decided to plot it all on maps, basically as a memory for myself, to remind myself where I’d been. And it was kind of nice, a kind of diary exercise. An unintentional diary, but a lovely one.
Except that’s not quite what it revealed. I kind of discovered this going through the data. This is one day’s plotting, a day from back in June, when I went on a boat trip down the Thames. I do live up there, and the train station– where I got the train out to the countryside– is the one in the middle. And then there’s this cloud off to the side. Because it turns out that the data that was recorded wasn’t a point-by-point location for me.
I passed through these areas, but what these dots represent is not my exact location, but a mediation between me and the network. The phone is mapping not just my locations but it’s mapping cell towers and Wi-Fi networks. It’s finding itself according to a whole network that we can’t really perceive. This is an atlas made by robots that is not just about physical space, but is about frequencies in the air and the vagaries of the GPS system. It’s an entirely different way of seeing space. This is not Where the Fuck Was I? It’s where the fuck the phone thought it was. And where it thought it was is using cues that are totally invisible to people. It’s a very odd thing.
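For anyone who wants to try the same diary exercise, the core of it is just grouping the phone’s recorded fixes by day and plotting each day’s cloud of points. A hedged sketch in Python, with a made-up CSV layout standing in for the real data (which lived in the phone’s consolidated.db SQLite file):

```python
# Group a phone's recorded location fixes by day, as a first step
# towards plotting one map per day. The CSV layout here is invented
# sample data; the real records came out of consolidated.db.
import csv, io
from collections import defaultdict

sample = """timestamp,lat,lon
2011-06-04T09:12:00,51.5074,-0.1278
2011-06-04T14:30:00,51.4700,0.4543
2011-06-05T10:00:00,51.5007,-0.1246
"""

by_day = defaultdict(list)
for row in csv.DictReader(io.StringIO(sample)):
    day = row["timestamp"][:10]  # "YYYY-MM-DD" prefix
    by_day[day].append((float(row["lat"]), float(row["lon"])))

for day, points in sorted(by_day.items()):
    print(day, len(points), "fixes")
```

Each day’s list of (lat, lon) pairs can then be handed to any plotting or mapping library; as the talk notes, what you’re plotting is the phone’s guess at where it was, not where you were.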
So we can start to build this typology of machine ways of seeing, essentially– the different things that computer vision and automated cameras and robot-mounted cameras and internet-enabled network cameras do. Again, I’m focusing on the visual because it’s a good way of talking about it, but it doesn’t apply just to vision. So this way of seeing, this digital way of seeing, allows us to see through time. You can see just there, where the wing is cut off, that this is in fact two photos taken at different times stitched together. And over time, we’ll have more obvious ways of seeing that, because we’ll have historical data embedded in it. You’re already getting updates to Google Street View and Google Maps, and you can go back through time.
These things see in different spectra. They see through different frequencies. They see radio waves. They see Wi-Fi networks. They hear the signals coming down from satellites. And that means that the picture of the world they build up is just as foreign to us as the picture that an insect builds up or a fish builds up or something that uses echolocation. It’s a fundamentally different way of viewing the world.
This is– they took the whole of Liberty City from Grand Theft Auto, and they built it into Street View, which is awesome. But it’s very strange interacting with that world with a tool that you’re used to navigating the real world with. You’re seeing a virtual space enacted as if it’s a physical space. It takes a lot of those virtual world things as an extra layer to it. And you also see imaginary places. This, I think, is an utterly beautiful project called The Sky on Trap Street. Trap streets are fake streets that cartographers add to maps. All kinds of fake features they add. So if you’re a cartographer and you don’t want someone nicking your map, to preserve your copyright you add a couple of little tweaks to it. You may put a little extra bend in a river, or a little cul-de-sac. Nothing that’s going to break the map, but enough that if someone copies it, you’ll be able to see that thing there and know they copied your map, because that thing doesn’t exist in the real world. It’s copyright protection, physically, on a map.
The Sky on Trap Street visits those trap streets on Google, because Google does this as well. They all do it. It finds these roads on Google Maps that don’t tally with the real world, and then it sees what the street view is in that place. And it looks up at the sky that exists in an entirely imaginary place.
This is another thing I built called Robot Flaneur. Robot Flaneur is a random explorer for Google Street View. Basically I like to take these technologies and introduce a bit of randomness and a bit of chance to them, to enable what the situationists called a dérive, a kind of meaningless drift through them as a way of re-experiencing them and exploring them again. This is a place in Mexico City. There’s no reason why you’d visit it. It exists as a series of images in Google’s database. It’s a bunch of data until we summon it up and attempt to make it meaningful to us in some way. There’s a whole process of mediation going on there, of meaning-making, that we put onto technology. But the technology has one as well. The technology makes meaning of these things as well. But we’re not quite sure what that is yet.
This is disturbing like that Nikon ad is disturbing. This is a website called Doxy Spotting, where they look for prostitutes on Google Street View. I don’t even know if this woman is a prostitute. I’m very sorry, whether she is or not. I don’t know. I don’t think anyone really knows. But no one went out taking these photos. Someone has then moved through this huge database of imagery in order to pick out things that are kind of meaningful to them. The images were recorded without any intentionality. They’re photographing the whole world and they don’t know why, and then we’re going into it and trying to see.
And the whole view of Google Street View is very weird. It’s six foot up in the sky over a little car, looking around. It’s lensed and networked in a way that human vision isn’t. So we’re looking through the eyes of robots here, and trying to see what they see, even when the unreality of it is pointed out to us, when we see these trademarks overlaying our vision. We’re maybe being made aware of the strangeness. Maybe we don’t notice.
We’re made aware of the strangeness very much when we see things like this. Germany in particular has taken against Street View. It doesn’t like it. It doesn’t like the privacy implications of it. And anywhere, you can write to Google to ask to have things removed from it. And in Germany, that’s been taken up a lot. And you’ll see a lot of this kind of thing on Google– or in Germany, as you’re going down the streets. I just elided Google and Germany there. You see what it does. And you see these areas blanked out, something that would obviously be completely bizarre if you saw it in the real world, something that has a strangeness to it even there.
This is Paul McCartney’s house in London. He did the same thing. He didn’t like it. You can’t go there anymore. It’s a gap in the map. This is in the middle of a restricted area in Canada. No one knows what’s under this black bar. The satellite view gives us this illusion of viewing everything, but if you start looking around, there’s these gaps in it. The Dutch royal family requested that all their residences be removed, which was done really crudely with the Photoshop mosaic filter, which is a strange thing to see in a map. When you see it in satellite photos like this, it feels like you’re instantiating Photoshop filters in the real world, in a very strange way.
But you can get around this, again, by using historical imagery. This is a nuclear plant in France where a minor accident happened. It’s blurred out, but by going back a few years, we can look into it. So there’s unintentional history gathering, there are unintentional “before” photos again.
So this brings me, somewhat shakily, to camouflage. This is Sabina Keric and Yvonne Bayer’s Urban Camouflage project. There’s a whole series of these. You should go and check them out. They’re brilliant. Ghillie suits for Ikea. They use all kinds of materials to make these. But these are artsy, strange camouflage.
This is what actual camouflage looks like now. This is a German Tornado plane and something called splinter camouflage. So it turns out that bits or pixels are really good at hiding things in the real world, because in the real world nothing looks like this, or hasn’t until now, even though, as we saw before, we’re starting to instantiate it. Nothing’s supposed to look like this in the real world. And so it tricks the eye. Under the right circumstances, this is essentially unseeable. The digital nature of this removes it from view. These are soldiers wearing a very similar type of digital camouflage. These principles were discovered before it was possible to do it digitally, but the advent of digital vision has enhanced and pushed it forward.
You can apply it to buildings. You can apply it to anything. This is hyperstealth.com. They do a lot of this kind of stuff. There’s a building at the back there. There’s in fact a whole– there’s a building on the left. There’s another series of buildings on the right that are rendered almost entirely invisible by what is essentially large-format digital photography.
And the thing to realise about this is that it’s not just hiding from sight either. What we talked about, looking through different frequencies and different spectra– what you have here is two different camouflage patterns. This guy’s wearing a standard navy pattern jacket, and he’s wearing marine pattern trousers. The marine patterned trousers don’t just hide stuff from vision. This is an IR photograph. They hide stuff from infrared vision as well. This camouflage is not just intended to hide from human vision, but from computer vision as well, which sees, as we know, in different frequencies.
This is a tank that’s using a particular type of digital camouflage. This is an active camouflage. And again, it’s an IR view. But you can’t see the tank, because it’s giving out the heat signature shape of a car. This is deeply bizarre, that if you looked at it in daylight, you’d see that it was a tank, but because at night we see through the machines and see in IR, you can trick it in this wholly new digital way.
And so how are we responding to stuff like this? This is where I get interested. So we have pixels and a digital way of seeing appearing in the world, as an aesthetic effect. We then realise it comes from our interactions with technologies like satellites, like these things looking down on us, like digital photography, that’s producing all these new interactions in the world. And then how do we respond?
Well, in part we respond with all that stuff we’ve just seen. We’ve responded with camouflage in the military context. We can respond to some extent with camouflage in sort of a– what’s the word– a non-military context as well. This is a proposal from Tag Me Not, where you make signs that would tell Google, don’t photograph this. That’s not going to work, is it? But there’s an intentionality there.
This is a project by Adam Harvey called CV Dazzle, which is a program of design of hairstyles and makeup to trick face detection software, to change the human face in ways that are still aesthetically pleasing to human eyes but rule out the attention of computers. This is interesting, because most of our ways of tricking computers are violently unfriendly ways. This seems like a really lovely nice way of doing so, but presupposes a world in which we are always being looked at by the machines. And this is a bad way of talking to the machines. QR codes are awful. Let’s not do this. Whatever we do, we must have pretty things like this rather than ugly, ugly things like this that please no one.
Captcha– you’ve all had to enter these to prove you’re not a robot, basically. This is what’s happening now, right? We’re teaching technology to be so good that we have to continually prove ourselves to be human. These are different types of captcha, and the captcha things are a total arms race for exactly that reason. The better we make the systems, the better they are at fighting with these kinds of things. But there’s a kind of whole other way of going about this. There’s a whole understanding of this.
The reCaptcha system– which you will have used, it’s on loads of websites– is a way of improving computer vision, of digitising books, so that when you enter a word, you actually enter two words, one of which the computer already knows. The other one it will then learn, so that when it’s scanning books, it improves itself.
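The pairing mechanism James describes can be modelled in a few lines. Everything here is illustrative, not Google's actual system: the class name, the agreement threshold, and the words are all invented for the sketch.

```python
from collections import Counter

class ReCaptchaSketch:
    """Toy model of the reCAPTCHA idea: pair a word the system already
    knows with one it doesn't, and use success on the known word to
    harvest trusted human readings of the unknown one."""

    def __init__(self, agreement_needed=2):
        self.known = {"w1": "liberty"}          # word id -> accepted reading
        self.unknown_votes = {"w2": Counter()}  # word id -> candidate readings
        self.agreement_needed = agreement_needed

    def challenge(self):
        # Serve one known word and one unknown scanned word.
        return ("w1", "w2")

    def submit(self, known_answer, unknown_answer):
        # Only trust the unknown reading if the known word was solved.
        if known_answer != self.known["w1"]:
            return False
        votes = self.unknown_votes["w2"]
        votes[unknown_answer] += 1
        # Once enough humans agree, the system has "learned" the new word.
        if votes[unknown_answer] >= self.agreement_needed:
            self.known["w2"] = unknown_answer
        return True

rc = ReCaptchaSketch()
rc.submit("liberty", "render")
rc.submit("liberty", "render")
print(rc.known.get("w2"))  # two humans agreed, so "render" is learned
```

The point of the design is that the same keystrokes that gate access to a website also correct OCR errors in the book-scanning pipeline, which is the sense in which we are teaching the machines to read.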
And I come from a publishing background, where I spent a lot of my work doing eBooks, digitising books, scanning them, making them essentially machine-readable. Teaching machines how to read, how to understand our world, to present our culture to it. And that’s where I’m interested in going with this stuff. We’ve got this whole world we’re creating where machines see increasingly like us. We’ve given them eyes, but we realise they also have an intentionality behind it. And we’re trying to share something with them.
This, again, is a bad way of communicating with the machines, but how else do you speak to satellites without technology than by writing your name two miles wide on the desert floor so the satellites can see it, which is what this guy did. Or something like this, which is probably my favourite example. I think people have probably seen this. This is, in case you can’t read it, an attempted MySQL injection attack against automated number plate recognition systems.
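The plate works, or would work, because a naive ANPR backend splices the recognised characters straight into an SQL statement. A minimal sketch of both the flaw and its fix, with an invented table name and a payload merely in the spirit of the one in the photo:

```python
import sqlite3

# A toy ANPR logging backend. The "sightings" table is invented here
# to illustrate the attack; real systems differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sightings (plate TEXT)")

# Something in the spirit of the plate in the photo (wording illustrative):
hostile = "ZU 0666'); DROP TABLE sightings; --"

# Vulnerable: the OCR'd plate is spliced straight into the SQL text,
# so the quote in the plate closes the string and the rest executes.
conn.executescript(f"INSERT INTO sightings VALUES ('{hostile}')")
try:
    conn.execute("SELECT count(*) FROM sightings")
except sqlite3.OperationalError:
    print("sightings table dropped by the plate")

# Safe: a parameterised query treats the plate as data, never as SQL.
conn.execute("CREATE TABLE sightings (plate TEXT)")
conn.execute("INSERT INTO sightings VALUES (?)", (hostile,))
print(conn.execute("SELECT count(*) FROM sightings").fetchone()[0])  # 1
```

With the parameterised version the hostile plate is just stored verbatim as a string, which is exactly why injection only succeeds against systems that build queries by concatenation.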
Again, so this sits somewhere in the middle, right? It’s still an attack. It’s not a friendly manoeuvre. But it’s a real genuine attempt to communicate. That’s what I like. It’s seeing a channel. I can talk to that machine, and this is how I’m going to do it. There’s a lovely yearning there.
These are jeans printed with Microsoft Access macros by accident. So there’s a database somewhere of counterfeit jeans logos that went wrong and started spitting this stuff out. And yes, it’s just an error. Yes, it’s just– but it still feels like an attempt to speak. It feels like there’s a surfacing of something else that’s happening with this technology. We’re going to see more things like this. Somewhere in this process, human intervention kind of failed and the machine itself was allowed to speak.
And the machine isn’t very smart yet. The machines aren’t very smart yet, but we’re teaching them this stuff all the time. We’re giving them eyes and ears and we’re giving them access to our world. We’re sharing our social spaces with them increasingly. They increasingly live like the render ghosts, on the borders of our world, and they’re starting to share it with us. These technologies that we’re creating increasingly resemble us, and it’s sort of possible to talk to them.
Unfortunately, because of the way we’re building things, that has bad consequences now, because we have a bad view of these things. We’re building them for the wrong reasons. We’re talking to them in the wrong ways, and it’s encouraging them in the wrong directions. But if we could speak to them better, if we could speak to them more clearly, if we could start to share the world and see it a little as they do, then maybe they’ll start to see it a little as we do.
There’s something about this, this line. This is a blog comment I found. It says, “sorry, I’m telling you, things are getting out of hand, or maybe I’m discovering that things were never in my hands. Help me. I find sites on this topic, replica firearms for sale. I found only this– replica Rolex for sale. Replica, Rome is the court to many being areas. Replica, twelve wrote environment years along the college change principle exploration. Waiting for reply. Sad face, Jerusha from Senegal.”
Jerusha isn’t in Senegal. Jerusha is a script in a server somewhere. But there is something so heartbreaking about that attempt to communicate that for me it stands for everything that technology wants to be increasingly in the world, this world that we share with technology.
This is the last comment and almost the last slide. Thanks and keep up. This is a comment I received on the New Aesthetic Tumblr, which is where I put up a lot of this stuff, where I’ve been storing it, using it as my offboard memory for collecting things that work in this way. And someone sent this to me. It says, “thanks and keep up. I enjoy every piece of your entries and frequently reblog them. I never thought my reality can be anyone else’s too. I know this sounds like a clever spambot, but hey, soon we will be proud catching that level”. I don’t know, someone is trolling me very effectively. Because I don’t know if that’s a real person or a troll or spam. I genuinely have no idea.
And this is increasingly the world in which we’re living, where we see through satellites and understand instinctively what the world looks like from there. We see digitally through cameras and create these memories that have incredible meaning to us, entirely mediated through technologies. And some of the things we make for these technologies are– and there’s a strange long history to them. We have the Kinect because of Israeli military technology, and yet we use it to make ourselves into superheroes.
When I did this earlier, waving, it was because of something my friend Tom Armitage wrote about this. This is waving at the machines. You can foresee a future when, on entering a room, this is what you’ll do, to identify yourselves not just to the people but to the computers and the machines who are watching us too. We’ll have entered into this dialogue with them, and we’re already doing it like this. We already share our world with these things that are watching us. And it can be creepy and it can be surveillance, or it can be a shared vision.
This could have been any point in the talk, but because I think it’s the most beautiful thing, I saved it to the end. It’s a series of finger paintings by Evan Roth, based on the gestures we make with touchscreens. So it’s a touch painting made by what you’d do if you were doing certain things on a touch screen. This is a series of gestures that no human being has made before. It’s a series of gestures entirely mediated by the machine.
We’re almost– puppets sounds like a mean word, but there’s something that’s happening here that is changing our behaviour in order to engage better with these devices, to engage better with these technologies. When we do this, when we do these things, when we make these different motions, physically we’re enacting our interactions with the technologies. I think this is the most beautiful thing, that he’s created this art not just out of the visual aesthetics of it but actually of human behaviour, as we work with these things.
So this is the last slide. Technology wants to be like us, and we kind of want to be more like it. And we’re going through a period now of incredible uncertainty and a huge ethical negotiation of how technology and us see the world and how that changes. But the essence is that we now live in a world that we share with the render ghosts, that we share with the technology, to some extent that we’re building, but it to a huge extent is also shaping the way we behave. And the thing to bear in mind is that we want this. We want to live together with these new beings, this new form, this new culture.
And my only message is that some of this stuff is completely awesome, and you should always remember that, but also that we should go out there with this willingness and friendliness to engage with technology, to engage with all these technologies while understanding how they shape our behaviours and our feelings and our culture all the time. These things are radically transformative. We are creating a new nature in the world. It’s going to be really exciting. Please make it more exciting. Make it better. Thank you. That’s it.
About James Bridle
James Bridle is a publisher, writer and artist based in London, UK. He founded the print-on-demand classics press Bookkake and the e-book-only imprint Artists’ eBooks, and created Bkkeepr, a tool for tracking reading and sharing bookmarks, and Quietube, an accidental anti-censorship proxy for the Middle East. He makes things with words, books and the internet, and writes about what he does at booktwo.org.
This video would not have been possible without the good people at Hunting with Pixels.