People have long competed in gaming events to prove who's better, whether in the high-score rivalry chronicled in The King of Kong: A Fistful of Quarters or in the yearly, multi-million-dollar eSports tournaments where players compete for top cash prizes. However, Google may have just created a champion of its own: an artificial-intelligence system.

VentureBeat reports that Google has built an AI system that can teach itself to play video games. Rather than being programmed with the rules, the system learns through trial and error, much as a person would, gradually working out which moves lead to higher scores. Google tested it on the Atari 2600 across 49 unique titles.
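For readers curious what "teaching itself" looks like in practice, here is a minimal, hypothetical sketch of the underlying idea, reinforcement learning: an agent tries actions, observes the score, and reinforces whatever worked. The ToyPaddleGame environment and every parameter value below are illustrative inventions, not DeepMind's code; the actual system learned from raw screen pixels with a deep neural network (a "deep Q-network") rather than the simple lookup table used here.

```python
import random
from collections import defaultdict

class ToyPaddleGame:
    """Hypothetical stand-in for an Atari game: catch a falling ball."""
    def __init__(self):
        self.ball = random.randrange(5)   # ball's column, fixed as it falls
        self.paddle = 2                   # paddle starts mid-screen
        self.row = 0                      # how far the ball has fallen

    def step(self, action):               # action: -1 left, 0 stay, +1 right
        self.paddle = max(0, min(4, self.paddle + action))
        self.row += 1
        done = self.row == 4              # ball reaches the paddle's row
        reward = 1 if done and self.paddle == self.ball else 0
        return (self.ball, self.paddle), reward, done

q = defaultdict(float)                    # Q[(state, action)] -> expected score
actions = (-1, 0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(2000):
    env = ToyPaddleGame()
    state, done = (env.ball, env.paddle), False
    while not done:
        # Mostly pick the best-known action, sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state, reward, done = env.step(action)
        # Q-learning update: nudge the estimate toward reward + future value.
        best_next = 0.0 if done else max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
```

Run long enough, the table converges on "move the paddle toward the ball," which echoes the Breakout learning curve Hassabis describes below.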

Google has been working on this adaptive technology with its DeepMind Technologies subsidiary. It's seeking out ways for machines to think more like humans, and this is a step in that direction, although so far limited to certain games. With this research, Google hopes to gain a better understanding of web content, which it can then use to provide better services to consumers.

DeepMind vice president of engineering Dr. Demis Hassabis explained how the system works during a recent appearance on BBC News, showcasing how it could play Breakout, a classic ball-and-paddle game. "After 30 minutes of gameplay, the system doesn't really know how to play the game yet," he said. "But it's starting to get the hang of it — that it's supposed to move the bat to the ball. After an hour, it's a lot better, but it's still not perfect. And then, if we leave it playing for another hour, it can play the game better than any human. It almost never misses the ball, even when it comes back at very fast angles."

The video below showcases this explanation, and captures the system discovering a Breakout strategy on its own. "This is a strategy that the programmers and researchers working on this project didn't even know about," said Hassabis. "We only told it to get the highest score it could."
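That last quote captures the key design decision: the only training signal is the game's score. Here is a hedged sketch of what that could look like in code; score_to_reward is a hypothetical helper name for illustration, not DeepMind's actual implementation.

```python
def score_to_reward(prev_score: int, new_score: int) -> int:
    """Turn the on-screen score into a learning signal.

    The agent is never told what the game is about; it only sees
    whether its last action made the score go up. (DeepMind's
    published work also clipped this value to the range [-1, 1]
    so the same tuning could work across all 49 games.)
    """
    return new_score - prev_score
```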