
# Researchers fooled AI into ignoring stop signs using a cheap projector


A trio of researchers at Purdue today published pre-print research demonstrating a novel adversarial attack against computer vision systems that can make an AI see – or not see – whatever the attacker wants.

It’s something that could potentially affect self-driving vehicles, such as Teslas, that rely on cameras to navigate and identify objects.

Up front: The researchers wanted to confront the problem of digital manipulation in the physical world. It’s easy enough to hack a computer or fool an AI if you have physical access to it, but tricking a closed system is much harder.

Per the team’s pre-print paper:

Adversarial attacks and defenses today are predominantly driven by studies in the digital space where the attacker manipulates a digital image on a computer. The other form of attacks, which are the physical attacks, have been reported in the literature, but most of the existing ones are invasive in the sense that they need to touch the objects, for example, painting a stop sign, wearing a colored shirt, or 3D-printing a turtle.

In this paper, we present a non-invasive attack using structured illumination. The new attack, called the OPtical ADversarial attack (OPAD), is based on a low-cost projector-camera system where we project calculated patterns to alter the appearance of the 3D objects.

Background: There are a lot of ways to try to trick an AI vision system. These systems use cameras to capture images and then run those images against a database to try to match them with similar images.

If we wanted to stop an AI from scanning our face, we could wear a Halloween mask. And if we wanted to stop an AI from seeing at all, we could cover its cameras. But those solutions require a level of physical access that’s often prohibitive for dastardly deed-doers.

What the researchers have done here is come up with a novel way to attack a digital system in the physical world.

They use a “low-cost projector” to shine an adversarial pattern (a specific arrangement of light, images, and shadows) that tricks the AI into misinterpreting what it’s seeing.
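The team’s actual attack solves an optimization problem that models the full projector-camera response, but the core intuition can be sketched with a toy example: a projector can only add light, so the attacker brightens exactly those pixels that push a classifier’s score in the wrong direction. Everything below (the linear classifier, its weights, and the pixel values) is made up purely for illustration and is not the OPAD method itself:

```python
import numpy as np

# Toy linear "stop sign" detector over 4 pixel intensities.
# Weights and the captured image are hypothetical values for the demo.
w = np.array([1.0, -2.0, 1.5, -1.0])   # classifier weights
x = np.array([0.8,  0.2, 0.9,  0.1])   # captured pixels, in [0, 1]

def predict(img):
    score = float(w @ img)
    return "stop sign" if score > 0 else "not a stop sign"

# A projector can only ADD light, so the perturbation must be non-negative:
# brighten only the pixels whose weights pull the score down.
delta = np.where(w < 0, 0.8, 0.0)      # the projected light pattern
x_adv = np.clip(x + delta, 0.0, 1.0)   # the camera sensor saturates at 1.0

print(predict(x))      # clean image: "stop sign"
print(predict(x_adv))  # under adversarial illumination: "not a stop sign"
```

The non-negativity constraint is what makes structured illumination harder than a purely digital attack: the attacker can never darken a pixel, only brighten it, and the camera clips anything pushed past full brightness.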

A screenshot from a pre-print paper demonstrating an adversarial attack on a basketball and a stop sign using a projector.
Credit: Gnanasambandam, et al.
