Watch an AI goalie psych out its opponent

No, the red player in the video above is not having a seizure. And the blue player is not drunk. Instead, you are seeing what happens when one artificial intelligence (AI) gets the better of another simply by behaving in an unexpected way.

One way to make AI smarter is to have it learn from its surroundings. Cars of the future, for instance, will get better at reading road signs and avoiding pedestrians as they gain experience. But hackers can exploit these systems with "adversarial attacks": by subtly modifying an image in just the right way, you can fool an AI into misidentifying it. A stop sign with a couple of stickers on it might be read as a speed limit sign, for example.
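To make the stop-sign example concrete, here is a minimal sketch of one well-known attack of this kind, the fast gradient sign method. The PyTorch model, image, and label below are stand-ins of my own, not anything from the study or a real traffic-sign system:

```python
# Sketch of an image-based "adversarial attack": the fast gradient sign method,
# which nudges every pixel slightly in the direction that most increases the
# classifier's error. All objects below are illustrative placeholders.
import torch
import torch.nn as nn

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed so the model is more likely to misclassify it."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the direction of the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixel values in a valid range

# Usage with a stand-in classifier and a random "image":
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder model
image = torch.rand(1, 3, 32, 32)   # placeholder 32x32 RGB image
label = torch.tensor([0])          # its true class
adversarial_image = fgsm_attack(model, image, label)
```

The perturbation is tiny (epsilon controls its size), which is why an altered stop sign can look normal to a person while a classifier reads it as something else.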

The new study shows that AI can be duped not only into seeing something it should not but also into acting in ways it should not. The study takes place in the world of simulated sports: soccer, sumo wrestling, and a game in which one player tries to stop a runner from crossing a line. Typically, both opponents train by playing against each other. Here, however, the researchers trained an attacker bot against a frozen, already-trained victim, and the attacker learned that it did not need to play well; it only needed to behave in a way that confused its opponent. Thrown off by the goalie's bizarre antics, the kicker bot started to play horribly, wobbling to and fro like a drunk and losing up to twice as many games as it should, according to research presented here this month at the Neural Information Processing Systems Conference.
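For a rough feel of what "training against a frozen opponent" means, here is a purely illustrative toy in Python. The one-number game, the victim's reaction function, and the random-search loop are all invented stand-ins, not the study's environments or algorithm; the one ingredient it mirrors is that the victim is never updated, so the attacker is free to find whatever odd behavior provokes the worst reaction:

```python
# Toy sketch of an "adversarial policy": the victim is frozen, only the attacker
# is optimized, so the attacker can win by provoking a bad reaction rather than
# by playing well. Everything here is an illustrative stand-in.
import numpy as np

rng = np.random.default_rng(0)

def victim_policy(observation):
    # Frozen, pretrained victim: reacts to what it sees the opponent doing.
    return np.tanh(2.0 * observation)

def play_episode(attacker_param):
    # The attacker acts first; the victim reacts to the attacker's action.
    attacker_action = np.tanh(attacker_param)
    victim_action = victim_policy(attacker_action)
    # The victim scores best when its reaction lands near 0.5 (its "ideal" play);
    # the attacker wants to drive that score as low as possible.
    return -abs(victim_action - 0.5)

# Train only the attacker with simple random-search hill climbing,
# leaving the victim untouched (the key ingredient of the attack).
param = 0.0
best_victim_score = play_episode(param)
for _ in range(200):
    candidate = param + rng.normal(scale=0.5)
    victim_score = play_episode(candidate)
    if victim_score < best_victim_score:   # lower victim score = better attack
        param, best_victim_score = candidate, victim_score

print("learned attacker parameter:", param, "victim score:", best_victim_score)
```

In the real study the agents are humanoid bodies controlled by deep reinforcement learning, but the logic is the same: the attacker is rewarded for whatever makes the frozen victim fall apart.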

Such adversarial attacks could cause real-world problems for autonomous driving, financial trading, or product recommendation systems such as Amazon's. One can envision a car owned by a prankster or terrorist jiggling its steering wheel in just such a way as to cause a nearby self-driving car to swerve off the road, or a trading algorithm executing trades that cause other algorithms to go haywire and crash the market.
