AlphaGo

AlphaGo is a computer program developed by Google DeepMind to play the board game Go. AlphaGo became the first computer program to defeat a professional human Go player and the first to defeat a Go world champion.

deepmind.com/research/case-studies/alphago-the-story-so-far
deepmind.com/research/highlighted-research/alphago
deepmind.com...go.html

Is a: Software, Product

Product attributes
Industry: Board game, Machine learning, Artificial Intelligence (AI)
Product Parent Company: Google DeepMind

Other attributes
Official Name: AlphaGo
Publisher: Alphabet Inc.
Wikidata ID: Q22329209
Overview

DeepMind has since developed a more advanced artificial intelligence (AI) program for playing Go, called AlphaGo Zero. The techniques behind AlphaGo Zero have been generalized into another AI system, AlphaZero, which is capable of playing Go, chess, and shogi.

Go originated in China over 2,500 years ago and is played by more than 40 million people worldwide. The game requires multiple levels of strategic thinking to win. Two players (one playing as white and one playing as black) take turns placing stones on a 19 by 19 board. The aim of the game is to surround and capture the opponent's stones or strategically create spaces of territory. After all possible moves have been played, the players count one point for each vacant point inside their own territory, and one point for every stone they have captured. The player with the larger total of territory plus prisoners is the winner. Go has 10^170 possible board configurations, more than the number of atoms in the known universe. Go's level of complexity makes it challenging for computers to play, and before AlphaGo, leading programs could only play Go as well as amateurs.
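The counting rule above is simple arithmetic. A minimal sketch in Python (the function names and example numbers are ours, purely illustrative):

```python
def go_score(territory, prisoners):
    """A player's score: vacant points inside their territory
    plus the stones they captured, per the rule above."""
    return territory + prisoners

def winner(black, white):
    """Each player is a (territory, prisoners) pair; the larger total wins."""
    b, w = go_score(*black), go_score(*white)
    if b == w:
        return "draw"
    return "black" if b > w else "white"

# Black holds 10 points of territory and captured 3 stones;
# White holds 9 points and captured 2.
print(winner((10, 3), (9, 2)))  # -> black (13 points to 11)
```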

Standard AI methods, which test all possible moves and positions using a search tree, cannot cope with the sheer number of possible Go moves or evaluate the strength of each possible board position. DeepMind developed a new approach for AlphaGo, combining an advanced tree search with deep neural networks. A description of the Go board is fed into these neural networks, which process it through many layers containing millions of neuron-like connections. One neural network, the "policy network," selects the next move to play. The other, the "value network," predicts the winner of the game.
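A toy sketch of how the two networks can divide the labor during a search. Both networks here are random stand-ins, and the "search" is a flat sampling loop rather than AlphaGo's actual Monte Carlo tree search; only the roles (policy proposes, value evaluates) reflect the description above:

```python
import random

random.seed(0)  # deterministic stand-ins for the example

def policy_network(position, moves):
    """Stand-in policy network: a prior probability for each legal move.
    (AlphaGo's real policy network is a deep net trained on expert games.)"""
    weights = [random.random() for _ in moves]
    total = sum(weights)
    return {m: w / total for m, w in zip(moves, weights)}

def value_network(position, move):
    """Stand-in value network: an estimated win probability after `move`."""
    return random.random()

def select_move(position, moves, simulations=200):
    """Sample candidate moves by their policy prior, score the results with
    the value network, and pick the move with the best average value."""
    priors = policy_network(position, moves)
    visits = {m: 0 for m in moves}
    value_sum = {m: 0.0 for m in moves}
    for _ in range(simulations):
        m = random.choices(moves, weights=[priors[x] for x in moves])[0]
        visits[m] += 1
        value_sum[m] += value_network(position, m)
    return max(moves, key=lambda m: value_sum[m] / visits[m] if visits[m] else 0.0)
```

The design point is that the policy network prunes the search (only promising moves get simulations) while the value network replaces exhaustive play-out of every line, which is what makes the search tractable on a 19 by 19 board.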

These deep neural networks are trained by a combination of supervised learning from human expert games and reinforcement learning from games of self-play. DeepMind trained the neural networks on 30 million moves from games played by human experts until it could predict the human move 57% of the time. Next, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, adjusting the connections using a trial-and-error process (reinforcement learning). Training AlphaGo required significant computing power, making use of Google's cloud platform.
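The two training stages can be sketched with a tiny logit-based "policy." This is a schematic of supervised learning followed by a REINFORCE-style self-play update, not DeepMind's actual training code; the learning rate and update rules are illustrative:

```python
import math

def softmax(logits):
    """Convert move logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def supervised_step(logits, expert_move, lr=0.1):
    """Stage 1: nudge the policy toward the move a human expert played."""
    probs = softmax(logits)
    return [x + lr * ((1.0 if i == expert_move else 0.0) - p)
            for i, (x, p) in enumerate(zip(logits, probs))]

def reinforce_step(logits, move, outcome, lr=0.1):
    """Stage 2: after a self-play game, reinforce moves from won games
    (outcome=+1) and discourage moves from lost games (outcome=-1)."""
    probs = softmax(logits)
    return [x + lr * outcome * ((1.0 if i == move else 0.0) - p)
            for i, (x, p) in enumerate(zip(logits, probs))]
```

After a `supervised_step` on move 1, the policy assigns move 1 a higher probability; after a `reinforce_step` with outcome -1, the sampled move becomes less likely. Repeating the second step over thousands of self-play games is the trial-and-error adjustment the paragraph above describes.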

AlphaGo is described in detail in a 2016 Nature paper titled "Mastering the game of Go with deep neural networks and tree search."

Matches

AlphaGo was first tested in a tournament against the leading Go-playing computer programs, losing only one game out of 500. In October 2015, DeepMind invited reigning three-time European Go Champion Fan Hui to a closed-door match. AlphaGo won 5-0. In March 2016, AlphaGo defeated Lee Sedol, winner of eighteen world titles, 4-1 in a five-game match watched by over 200 million people worldwide. The victory over Lee Sedol earned AlphaGo a 9-dan professional ranking, the highest certification; it was the first time a computer Go player received the accolade. During the games, AlphaGo played several inventive and surprising winning moves that have been extensively studied since.

In January 2017, DeepMind revealed an improved online version of AlphaGo called Master, which achieved sixty straight wins in time-control games against top international players. In May 2017, AlphaGo took part in the Future of Go Summit in China. The summit included various game formats, such as pair Go, team Go, and a match with the world’s number one player, Ke Jie. After the summit, DeepMind unveiled AlphaGo's successor, AlphaGo Zero.

Further Resources

  • AlphaGo - The Movie | Full award-winning documentary (Web, March 13, 2020): https://www.youtube.com/watch?v=WXuK6gekU1Y
  • "Mastering the game of Go with deep neural networks and tree search," Demis Hassabis et al. (Academic paper): https://storage.googleapis.com/deepmind-media/alphago/AlphaGoNaturePaper.pdf
  • Match 1 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo (Mar 9, 2016): https://www.youtube.com/watch?v=vFr3K2DORc8
  • Match 2 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo (Mar 10, 2016): https://www.youtube.com/watch?v=l-GsfyVCBu0
  • Match 3 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo (Mar 12, 2016): https://www.youtube.com/watch?v=qUAmTYHEyM8
