We spend a lot of time and words on what autonomous cars can do, but sometimes it's a more interesting question to ask what they can't do. The limitations of a technology are at least as important as its capabilities. That's what this little bit of performance art tells me, anyway.

You can see the nature of Autonomous Trap 001 right away. One of the first and most important things a self-driving system will learn or be taught is how to interpret the markings on the road. This is the edge of a lane, this means it's for carpools only, and so on.

British (but Athens-based) artist James Bridle illustrates the limits of knowledge without context, an issue we'll be coming back to a lot in this age of artificial intelligence.

A bargain-bin artificial mind would know that one of the most critical rules of the road is never to cross a solid line with a dashed one on the far side. But of course it's just fine to cross one if the dashes are on the near side.

A circle like this, with the solid line on the inside and dashes on the outside, acts, absent any exculpatory logic, like a roach hotel for dumb smart cars. (Of course, it's just a regular car he drives into it for demonstration purposes. It would take too long to catch a real one.)
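The asymmetry is easy to state in code. Here is a minimal sketch of that naive lane-marking rule, with hypothetical names and a deliberately context-free decision, to show why a circle drawn dashed-out, solid-in admits a car and then never lets it leave:

```python
def may_cross(near_side: str, far_side: str) -> bool:
    """Naive lane-crossing rule: crossing a two-part line marking is
    permitted only when the dashed component is on the car's own side.
    (A toy model of the rule the trap exploits, not any real system's logic.)
    """
    return near_side == "dashed"

# Approaching the circle from outside: dashes are on the near side.
entering = may_cross(near_side="dashed", far_side="solid")   # True: come on in

# Trying to leave from inside: now the solid line is on the near side.
leaving = may_cross(near_side="solid", far_side="dashed")    # False: trapped
```

Absent any higher-level logic ("why am I inside a two-meter circle of road markings?"), the rule that let the car in is exactly the rule that keeps it there.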

It's no coincidence that the trap is drawn with salt (the medium is listed as "salt ritual"); the idea of using salt or ash to create summoning or binding symbols for spirits and demons is an extremely old one. Knowing the words of command or secret workings of these mysterious beings allowed one power over them.

Here too a simple symbol binds the target entity in place, where ideally it would remain until its makers arrived and salvaged it. Or until someone broke the magic circle, or until whoever was in the driver's seat took over control from the AI and hit the gas.

Imagine a distant future in which autonomous systems have taken over the world and knowledge of their creation and internal processes has been lost (or you could just play Horizon: Zero Dawn). This simple trap might appear to our poor debased descendants to be magic.

What other tricks might we devise that inexorably cause a simple-minded AI to stop, pull over, or otherwise disable itself? How will we protect against them? What will the crime against a mechanized AI be: assault, or property damage? Strange days ahead.

Keep an eye on Bridle's Vimeo or blog; the video above is a temporary one, and the performance, like most things, is a work in progress.

Source article via https://techcrunch.com
