Just over a year ago, an artificial intelligence beat the fourth-ranked player in the world at Go, a complex and ancient game that is said to be a better test of the unique capabilities of the human brain than chess. Now, that same artificial intelligence, called AlphaGo, is preparing for its next public demonstration: a summit in China in May where it will collaborate with human players to come up with strategies, and then face off against the top-ranked player in a series of three matches. If it wins, it will have shown that its underlying algorithms are ready for something more than a game.
AlphaGo is the product of DeepMind, a London-based division of Google parent company Alphabet. DeepMind was founded in 2010 around the goal of researching artificial intelligence in order to understand the nature of intelligence and harness computing power for humanity's benefit. Its experiments with Go — a game thought, before last year, to be years away from being conquered by AI — are meant to bring us closer to a computer with human-like understanding that can solve problems the way a human mind can.
Historically, there have been tasks that humans do well — communicating, improvising, emoting — and tasks that computers do well, which tend to be those that require lots of computation — math of any kind, from statistical analysis to modeling a journey to the moon. Slowly, artificial intelligence scientists have been pushing that barrier. At one time, chess was a human's game, something people were better at. Now it's a computer's game. At one time, Go was a human's game — the large number of possible board combinations isn't inherently out of a computer's reach, but teaching a computer to understand the less tangible ways in which groups of stones influence each other is a different thing altogether. Now it's looking like computers will soon be able to claim it too.
Go is played on a board with a grid offering 19-by-19 possible positions. Each player takes turns placing stones (one player with white, the other with black) on empty intersections of the grid. Completely surrounding an opponent's stones captures them, removing them from the board; the player who controls more of the board wins. The sheer number of possible positions — far larger than in chess, thanks in part to the size of the board and the freedom to play on any unoccupied intersection — is part of what makes the game so complex.
Or, as DeepMind co-founder Demis Hassabis put it last year, “There are 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible positions.”
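To get a feel for where a figure like that comes from, here is a minimal sketch (my own illustration, not DeepMind's math): each of the board's 361 intersections can be empty, black, or white, which gives 3^361 arrangements. That over-counts, since not every arrangement is a legal position, but it shows the scale involved.

```python
# Rough upper bound on the number of Go board arrangements.
# Each of the 19 x 19 = 361 intersections can be in one of three
# states: empty, black, or white. Many of these arrangements are
# not legal positions, so this is an over-count, but it conveys
# the scale that makes exhaustive search hopeless.
intersections = 19 * 19          # 361 points on the board
upper_bound = 3 ** intersections

print(intersections)             # 361
print(len(str(upper_bound)))     # 173 -- a number with 173 digits
```

A number with roughly 170 digits dwarfs estimates of the atoms in the observable universe, which is why brute-force search alone could never crack Go.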
Computing many possible combinations is something computers are traditionally good at, however. What makes Go tougher for AI is that it essentially requires the computer to have intuition. Programming that directly is basically impossible, so DeepMind took a different approach: using machine learning to analyze Go matches and let the system teach itself the game.
This is the heart of current artificial intelligence research: programming a computer to teach itself, then watching as that computer exceeds the abilities of its programmers.
AlphaGo doesn’t really approach the game of Go like a human would, because it understands the game in a fundamentally different way. As previously mentioned, a difficult-to-measure aspect of how Go is played is the influence of a stone’s position on the board. Players consider how groups of stones affect one another, and then decide where to place their stones based on that. The difference is that AlphaGo weighs the entire board at once, whereas human players construct that overview piece by piece, from the parts rather than the whole.
AlphaGo defeated South Korean Go champion Lee Sedol in March 2016, winning four out of the five games. That was unexpected: most thought AlphaGo would do well, but ultimately lose. Instead, it handily beat Sedol, at one point making a move so clever that commentators at first thought it was a mistake.
This type of surprise shouldn't be that, well, surprising because humans didn't teach AlphaGo how to play. It taught itself. Not even the scientists at DeepMind know everything that AlphaGo is capable of.
In the future, artificial intelligence could achieve the same leaps in other areas that would be more applicable to daily life, “from climate modelling to complex disease analysis,” according to DeepMind.
In May, AlphaGo will face the top-ranked player in the world, the teenager Ke Jie. It's expected to win outright, since AlphaGo won the three online matches it had secretly played against Ke Jie around January 1.
Before that, Ke Jie had reportedly commented after watching several of the Lee Sedol matches on how good the AI was, saying that, under similar conditions, it was “highly likely” he could lose. Even so, after the unofficial online losses, he seemed to indicate that he had something up his sleeve; Quartz reported that he wrote on the Chinese social network Weibo that he had “one last move.”
Even if AlphaGo beats Ke Jie in every game, it doesn't mean the AI is perfect. It will continue to play Go until the game no longer serves as a useful testbed for new methods of data analysis — even a winning move isn’t necessarily the fastest or best one possible. A next step along the same lines would be to tackle a game where the computer doesn’t have all the information in front of it, like poker. Or even video games like StarCraft II, which DeepMind has been exploring with developer Blizzard.
Correction: An earlier version of this story said Go is played on an 18-by-18 grid; it is in fact an 18-by-18 grid with 19-by-19 possible positions.