AI Has a Hallucination Problem That’s Proving Tough to Fix

Machine learning systems, like those used in self-driving cars, can be tricked into seeing objects that don’t exist. Defenses proposed by Google, Amazon, and others are vulnerable too.

WHY IT MATTERS: It is possible to induce AI systems to hallucinate, that is, to see things that are not there in the real world. This article provides insight into what it may mean when self-driving cars start to hallucinate and read "stop" signs as if they said "speed up"…
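The article does not include code, but the trick it describes is usually an adversarial perturbation: each input value is nudged slightly in the direction that most increases the classifier's error, which can flip the predicted label while the input still looks almost unchanged. Below is a minimal, self-contained sketch of that idea (in the spirit of the fast gradient sign method) on a toy logistic classifier; the weights and inputs are invented for illustration, not taken from any real system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear "stop sign" detector: score > 0.5 means "stop sign".
# Weights are made up for this sketch.
w = np.array([1.0, -2.0, 1.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)

# A clean input the model classifies correctly as "stop sign".
x = np.array([2.0, 0.5, 1.0])
clean_score = predict(x)  # comfortably above 0.5

# Fast-gradient-sign step: for logistic loss with true label 1, the
# gradient of the loss w.r.t. x is (p - 1) * w, i.e. a negative multiple
# of w, so ascending the loss means stepping along -sign(w).
eps = 0.9  # small per-feature perturbation budget
x_adv = x - eps * np.sign(w)
adv_score = predict(x_adv)  # drops below 0.5: the label flips

print(f"clean: {clean_score:.2f}  adversarial: {adv_score:.2f}")
```

Each feature moves by at most `eps`, yet the prediction flips; real attacks do the same thing pixel by pixel, which is why the altered sign still looks normal to a human.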

Farid Mheir