In a surprising reversal of the 2016 computer triumph that was seen as a watershed moment for artificial intelligence, a human player has comprehensively defeated a top-ranked AI system at the board game Go.
Kellin Pelrine, an American player one level below the top amateur ranking, beat the system by exploiting a previously unknown weakness that had been identified by another computer. But the head-to-head matches, in which he won 14 of 15 games, were played without direct computer assistance.
The victory, which had not previously been disclosed, showed a flaw in the top Go computer programmes that is shared by most of today’s commonly used AI systems, including OpenAI’s ChatGPT chatbot.
The tactics that put a human back on top of the Go board were suggested by a computer programme that had probed the AI systems for weaknesses. Pelrine then applied the suggested plan mercilessly.
“It was really straightforward for us to attack this system,” said Adam Gleave, CEO of FAR AI, the California-based research group that created the programme. He claimed that the software played more than 1 million games against KataGo, one of the top Go-playing systems, to uncover a “blind spot” that a human player could exploit.
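FAR AI’s actual training pipeline is not described in this article, so the following is only a toy sketch of the general idea: play a fixed opponent many times and measure which counter-strategy exploits it best. The rock-paper-scissors game, the biased victim_policy function, and every other name here are illustrative assumptions, not FAR AI’s code or KataGo’s interface.

```python
import random
from collections import Counter

# Toy illustration (not FAR AI's method): probe a *fixed* opponent policy
# by playing many games against it and recording which of our candidate
# strategies exploits it best. The game is rock-paper-scissors and the
# "victim" has a built-in bias, standing in for a blind spot.

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def victim_policy():
    # Hypothetical frozen opponent with a hidden bias towards "rock".
    return random.choices(MOVES, weights=[0.6, 0.2, 0.2])[0]

def play(our_move):
    their_move = victim_policy()
    if our_move == their_move:
        return 0
    return 1 if BEATS[our_move] == their_move else -1

# Play many games with each fixed candidate strategy and tally the results.
results = Counter()
for candidate in MOVES:
    results[candidate] = sum(play(candidate) for _ in range(100_000))

best = results.most_common(1)[0][0]
print(results)
print(f"Most exploitative fixed response found: {best}")  # expect 'paper'
```

The real attack searched over full Go-playing policies rather than single fixed moves, but the principle of hammering a frozen system until a reliable weakness emerges is the same.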
The winning technique revealed by the programme “is not absolutely basic, but it’s not super-difficult” for a human to learn, and could be used by an intermediate-level player to beat the machines, said Pelrine. He also used the same strategy to defeat Leela Zero, another top Go system.
The decisive victory, albeit with the help of tactics suggested by a computer, comes seven years after AI appeared to have gained an unassailable lead over humans at what is often regarded as the most complex of all board games.
In 2016, AlphaGo, a system developed by Google-owned DeepMind, defeated world Go champion Lee Sedol four games to one. Sedol ascribed his retirement from Go three years later to the emergence of AI, claiming that it was “an entity that cannot be defeated”. Although AlphaGo is not publicly available, the systems Pelrine defeated are comparable.
In a game of Go, two players alternately place black and white stones on a 19×19 grid, trying to encircle their opponent’s stones and enclose the largest area. The number of possible move sequences is so vast that no computer can evaluate every future possibility exhaustively.
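As a rough sense of that scale, the back-of-the-envelope sketch below uses commonly cited ballpark figures, which are assumptions here rather than exact values: a branching factor of roughly 250 legal moves and games of roughly 150 moves, against about 35 and 80 for chess.

```python
# Rough, back-of-the-envelope illustration of why exhaustive search is
# infeasible in Go. Branching factors and game lengths are ballpark figures.

GO_BRANCHING, GO_MOVES = 250, 150      # ~250 legal moves, ~150-move games
CHESS_BRANCHING, CHESS_MOVES = 35, 80  # ~35 legal moves, ~80-ply games

go_tree = GO_BRANCHING ** GO_MOVES           # on the order of 10^360 sequences
chess_tree = CHESS_BRANCHING ** CHESS_MOVES  # on the order of 10^123 sequences

# A 19x19 board also has up to 3^361 raw stone configurations
# (empty / black / white per point), before legality is even considered.
raw_positions = 3 ** (19 * 19)

print(f"Go game tree:          ~10^{len(str(go_tree)) - 1}")
print(f"Chess game tree:       ~10^{len(str(chess_tree)) - 1}")
print(f"Raw Go configurations: ~10^{len(str(raw_positions)) - 1}")
```

This is why modern Go engines rely on learned pattern recognition and selective search rather than brute-force enumeration, which is exactly the part the adversarial strategy targets.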
Pelrine’s strategy involved slowly stringing together a large “loop” of stones to encircle one of his opponent’s groups, while distracting the computer with moves in other parts of the board. Pelrine said the Go-playing machine remained unaware of its weakness until the encirclement was virtually complete.
“As a person, that would be fairly obvious,” he added.
According to Stuart Russell, a computer science professor at the University of California, Berkeley, the discovery of a flaw in some of the most advanced Go-playing machines points to a fundamental weakness in the deep learning techniques that underpin today’s most advanced AI.
He stated that the algorithms can “understand” only specific situations to which they have been exposed in the past and are unable to generalise in a way that humans find easy.
“It once again demonstrates that we’ve been far too quick to credit superhuman levels of intelligence to robots,” Russell added.
The precise cause of the Go-playing algorithms’ failure is, according to the researchers, a matter of conjecture. One likely reason, Gleave said, is that the tactic exploited by Pelrine is rarely used, meaning the AI systems had not been trained on enough similar games to realise they were vulnerable.
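As a loose illustration of that explanation, and not a description of how KataGo actually works, the toy sketch below shows a nearest-neighbour “policy” that has only ever seen situations from a narrow range: it still returns a confident answer far outside that range, with nothing to signal that it is out of its depth. The situations, responses, and the nearest_neighbour_policy function are all hypothetical.

```python
# Toy illustration of the "out of distribution" explanation: a 1-nearest-
# neighbour "policy" trained only on situations it has seen gives confident
# but unreliable answers for situations far from its training data.

TRAINING = {  # situation -> learned response, all from one narrow region
    1: "defend", 2: "defend", 3: "defend",
    4: "extend", 5: "extend", 6: "extend",
}

def nearest_neighbour_policy(situation):
    # Answer with whatever response matched the most similar known situation.
    closest = min(TRAINING, key=lambda known: abs(known - situation))
    return TRAINING[closest]

print(nearest_neighbour_policy(5))    # 'extend': a familiar case, sensible
print(nearest_neighbour_policy(500))  # still 'extend': far outside its
                                      # experience, it cannot flag the gap
```

A system trained almost entirely on conventional games can behave the same way when confronted with an unusual encircling strategy it has rarely, if ever, seen.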
It is common to find flaws in AI systems when they are subjected to the kind of “adversarial attack” used against the Go-playing computers, Gleave added. Despite this, “we’re seeing extremely large [AI] systems deployed at scale with no verification,” he said.