Computer beats human champion at Go

Google announced on Wednesday that AlphaGo, a program created by its DeepMind artificial intelligence lab, had beaten the European Go champion 5-0, without a handicap, in a five-game match last October.

The next test comes in March, when AlphaGo will challenge the world champion, Lee Sedol, in a five-game match in Seoul. Lee, a legendary master of the game, is widely regarded as the best player of the modern era. The match is shaping up to be the Go equivalent of the celebrated chess contest between IBM’s Deep Blue and Garry Kasparov in 1997.

The news appeared in a paper published this week in the journal Nature, titled ‘Mastering the game of Go with deep neural networks and tree search’.

The DeepMind team’s approach combined a well-established algorithm called Monte Carlo tree search, which has helped computers defeat humans at a number of less complex games, with a cutting-edge technique: deep neural networks.
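To give a rough sense of the tree-search half of that combination, here is a minimal Monte Carlo tree search sketch in Python. It is purely illustrative and not DeepMind’s code: the game interface it assumes (`legal_moves`, `play`, `is_over`, `winner`, `to_play`) is a hypothetical stand-in for a Go engine.

```python
import math
import random


class Node:
    """One node of the search tree: a game position plus visit statistics."""

    def __init__(self, state, parent=None, move=None):
        self.state = state                 # position at this node
        self.parent = parent
        self.move = move                   # move that led here
        self.children = []
        self.visits = 0
        self.wins = 0.0                    # wins for the player who just moved
        self.untried = list(state.legal_moves())

    def ucb1(self, c=1.4):
        # Balance exploitation (observed win rate) against exploration
        # (children that have rarely been visited).
        return self.wins / self.visits + c * math.sqrt(
            math.log(self.parent.visits) / self.visits)


def mcts(root_state, iterations=10_000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down while every move at the node has been tried.
        while not node.untried and node.children:
            node = max(node.children, key=lambda n: n.ucb1())
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            move = node.untried.pop()
            node.children.append(Node(node.state.play(move), parent=node, move=move))
            node = node.children[-1]
        # 3. Simulation: play random moves to the end of the game.
        state = node.state
        while not state.is_over():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.winner()            # winning player, or None for a draw
        # 4. Backpropagation: credit the result up the tree.
        while node is not None:
            node.visits += 1
            if winner is not None and winner != node.state.to_play:
                node.wins += 1.0           # the player who moved into this node won
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda n: n.visits).move
```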

Broadly, AlphaGo uses two neural networks: a ‘policy network’ that narrows its analysis to a handful of attractive options for each move, and a ‘value network’ that looks roughly 20 moves ahead to judge which of those options seems most promising. The team explains how the system works in a Nature video.
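In the generic sketch above, the two networks would slot into the expansion and simulation steps. The `policy_net` and `value_net` callables below are hypothetical stand-ins, assumed to return a move-to-probability mapping and a win-probability estimate; this is an assumed interface, not AlphaGo’s actual API.

```python
def expand_with_policy(node, policy_net, top_k=8):
    """Policy network: keep only the handful of moves it rates most highly."""
    priors = policy_net(node.state)        # assumed: dict of move -> probability
    ranked = sorted(node.state.legal_moves(),
                    key=lambda m: priors.get(m, 0.0), reverse=True)
    node.untried = ranked[:top_k]          # prune the search to attractive options


def evaluate_with_value(node, value_net):
    """Value network: score the position instead of a full random playout."""
    return value_net(node.state)           # assumed: win probability in [0, 1]
```

In the published design, the value network’s estimate is blended with the outcome of fast rollouts; the sketch only shows where such calls would hook into a generic search loop.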

The implications of Google’s achievement extend beyond games. Neural networks can be applied in any setting that demands a human-like capacity to evaluate competing strategies under conditions of ambiguity.

For now, identifying the possible applications of such a fundamentally powerful technology requires a feat of imagination in its own right.