In early 2009 I discovered the game of Go. Go is an ancient board game, at least 3,000 years old. It has only a few simple rules, and even pre-schoolers can learn them and play the game. Yet these simple rules give rise to an incredible depth of strategy and tactics that is largely unmatched by other games of comparable simplicity. I play Go as a hobby, and although I enjoy it thoroughly, I already know that I will not reach the level of the old masters, some of whom lived many hundreds of years ago and studied the game as a profession. Another interesting aspect of Go is that, despite the great effort that has been put into computer players, computers have not yet come close to the top human players of the day. My current research focus is on helping computers close this gap.
To realise why Go is so difficult for computers, it helps to compare it to another board game: Chess. Whilst Chess is played on an 8×8 board, Go is usually played on a 19×19 board. Also, in contrast to Chess, almost every open point on a Go board is a legal move. These factors combine to mean that whereas Chess has about 20 or so possible moves per player's turn, Go has closer to 200. This explosion in the number of moves makes classical board-game AI techniques like minimax and alpha-beta pruning ineffective.
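To make the branching-factor point concrete, here is a minimal sketch of minimax with alpha-beta pruning over a toy game tree (leaves are numbers, internal nodes are lists). The function and tree are illustrative, not taken from any Chess or Go engine; the point is that even with pruning, the work grows roughly with b^d for branching factor b and depth d, which is why Go's b ≈ 200 overwhelms techniques that cope with Chess's b ≈ 20.

```python
import math

def alpha_beta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy tree.
    A node is either a number (leaf value) or a list of child nodes."""
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizing opponent avoids this branch
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff
        return value

# Depth-2 tree: the maximizer picks among three min-nodes.
tree = [[3, 5], [6, 9], [1, 2]]
best = alpha_beta(tree, -math.inf, math.inf, True)  # best == 6

# Even with pruning, node counts scale like b**d:
chess_like = 20 ** 4   # ≈ 160 thousand positions four plies deep
go_like = 200 ** 4     # ≈ 1.6 billion positions at the same depth
```

Pruning helps (the `[1, 2]` subtree above is cut short), but it only shaves the exponent's constant; it cannot tame a ten-fold larger branching factor.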
In the past few years, a new technique has been applied to Go: Monte-Carlo Tree Search (MCTS). MCTS has allowed computers to make large improvements in Computer Go, and it has become the dominant algorithm across the field. My research focus is on improving MCTS by making it learn from previous situations (something it currently does not do very well) and on segmenting the Go board into sub-problems so that the game can be tackled more efficiently.

My contact details: firstname.lastname@example.org
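For readers unfamiliar with MCTS, here is a minimal sketch of its four phases (selection, expansion, simulation, backpropagation) applied to a toy Nim game rather than Go. All names (`Node`, `mcts`, `uct_child`) are illustrative, not from any Go engine; the win bookkeeping assumes each node stores results from the perspective of the player who just moved.

```python
import math
import random

MOVES = (1, 2, 3)  # toy Nim: take 1-3 sticks; taking the last stick wins

class Node:
    def __init__(self, sticks, parent=None, move=None):
        self.sticks = sticks
        self.parent = parent
        self.move = move                      # move that led to this node
        self.children = []
        self.untried = [m for m in MOVES if m <= sticks]
        self.visits = 0
        self.wins = 0.0  # from the perspective of the player who just moved

    def uct_child(self, c=1.4):
        # UCT: balance exploitation (win rate) against exploration
        return max(self.children, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(sticks, iters=3000):
    root = Node(sticks)
    for _ in range(iters):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCT
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one untried move as a new child
        if node.untried:
            m = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.sticks - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout; parity 0 is the player who
        #    just moved into `node`, parity 1 is the opponent
        s, parity, last = node.sticks, 1, 0
        while s > 0:
            s -= random.choice([m for m in MOVES if m <= s])
            last, parity = parity, parity ^ 1
        result = 1.0 if last == 0 else 0.0
        # 4. Backpropagation: flip the result at each level up the tree
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent
    # play the most-visited move
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(0)
best_move = mcts(6)  # perfect play takes 2, leaving a multiple of 4
```

Because the playouts need only a move generator and a win test, MCTS sidesteps the hand-crafted evaluation functions that Chess engines rely on, which is exactly why it suits a game as hard to evaluate as Go.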