Via BLDGBLOG, Timo Arnall’s “Robot Readable World”, “an experiment in found machine-vision footage, exploring the aesthetics of the robot eye”:
This video is rather obviously fantastic, but I do think it’s worth calling attention to a perceptive comment left on the Vimeo page. Arnall describes the video as exploring the questions “how do robots see the world?” and “how do they gather meaning from our streets, cities, media, and us?”, which aligns with the line of inquiry set up by Matt Jones (like Arnall, of BERG) in his talk (well worth reading) on the robot-readable world, which explores “the evolutionary pressure of… three billion (and growing) linked, artificial eyes on our environment”. The comment I mentioned, though, from Greg Borenstein, notes that the video, while it certainly succeeds in exploring the aesthetics of the robot eye, perhaps does not so directly engage the question of “how robots see the world”:
I keep thinking about whether these are equivalent to the Terminator HUDs that Slavin mocks (images.wikia.com/terminator/images/2/25/T-800a_Threat.jpg). Why would a computer communicate to itself with text? These visualizations are really for the human observer of the CV process. They’re akin to Rodney Brooks’s idea of language having been invented by god to make it easier to read our minds. In this case these graphics give a window on the extent to which the CV algorithms are seeing the world the way we want them to, whether their vision agrees with ours. It’s not an internal representation, it’s a performance for our benefit. Like Kyle, [I] find myself doing a dance of rapidly connecting the displays with the semantics of what they represent: arrows for flow, boxes for blobs, etc. These graphics were designed by people to be seen by other people. They’re meant to let us see how the algorithm is doing. It’s only an internal state of the computer to the extent to which conversing is mind reading.
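Borenstein’s distinction between what the algorithm actually holds (a list of numbers) and the graphics drawn over the footage (“arrows for flow, boxes for blobs”) can be made concrete with a toy sketch. Nothing below comes from Arnall’s video or from any real vision library; it is a purely illustrative blob detector in which the “overlay” is a separate rendering step, added only for the human reader:

```python
# Toy illustration: the machine's "internal representation" is just
# coordinates; the box-drawing overlay is a performance for our benefit.

def find_blob_boxes(grid):
    """Return bounding boxes (top, left, bottom, right) of connected
    1-regions in a binary grid -- the machine's 'internal state'."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                # flood-fill one blob, tracking its extent
                stack = [(r, c)]
                seen[r][c] = True
                top, left, bottom, right = r, c, r, c
                while stack:
                    y, x = stack.pop()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

def draw_overlay(grid, boxes):
    """Render the human-facing layer: the same grid with rectangles
    drawn around each detected blob."""
    canvas = [["#" if v else "." for v in row] for row in grid]
    for top, left, bottom, right in boxes:
        for x in range(left, right + 1):
            canvas[top][x] = "+"
            canvas[bottom][x] = "+"
        for y in range(top, bottom + 1):
            canvas[y][left] = "+"
            canvas[y][right] = "+"
    return "\n".join("".join(row) for row in canvas)

frame = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
boxes = find_blob_boxes(frame)      # internal state: bare coordinates
print(boxes)
print(draw_overlay(frame, boxes))   # performance for the human observer
```

The point of the two-function split is the point of the comment: a downstream robot would consume `boxes` directly, and `draw_overlay` exists solely so that we can check whether the machine’s vision agrees with ours.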
Insofar as the aesthetic (or “new aesthetic”) of this robot-readable world is not, in fact, a purely robotic aesthetic, but rather a translation of the algorithmic “reasoning” of these robots into a visual language that makes their reasoning human-readable, I find that this makes Manaugh’s suggestion of a “landscape architecture for machines” at least partially composed of “future gardens optimized for autonomous robot navigation” all the more intriguing, as an intermediary aesthetic that sits between the robot and the human might be the perfect aesthetic for constructing baroque vegetative monuments to the robot-readable world: Vaux-le-Vicomtes of comfortably pixelated topiaries, to be enjoyed by both the Google Car and Sergey Brin. (It’s worth noting here that the French Baroque is typically understood as embodying an interest in demonstrating the imposition of human Order on floral Nature, which also suggests some precedent for a Robot Baroque, monumentalizing the literal interface between the human and the robot.)
[Also worth reading — BLDGBLOG’s speculations on “object cancers”, or a “kind of robot-blocking world” as a “corporate response to the robot-readable world”.]