[computer-go] IEEE T-CIAIG Special Issue on Monte Carlo Techniques and Computer Go
IEEE Transactions on Computational Intelligence and AI in Games
Special Issue on Monte Carlo Techniques and Computer Go
Special-issue editors: Chang-Shing Lee, Martin Müller, Olivier Teytaud

In the last few years Monte Carlo Tree Search (MCTS) has revolutionised Computer Go, with MCTS programs such as MoGo, Crazy Stone, Fuego, Many Faces of Go, and Zen achieving a level of play that seemed unthinkable only a decade ago. These programs are now competitive at a professional level for 9x9 Go, and with an 8-stone handicap for 19x19 Go. The purpose of this special issue is to publish high-quality papers reporting the latest research covering the theory and practice of these and other methods applied to Go, and also applying MCTS to other games. MCTS can play very well even with little knowledge about the game, as evidenced by its success in General Game Playing. However, it does not work well for all games, which poses some interesting questions. When and why does it succeed and fail? How can it be extended to new applications where it does not work yet? How best may it be combined with other approaches such as classical minimax search and knowledge-based methods?

Topics include but are not limited to:
· Emergent Technologies for Computer Go
· Variants of Go (phantom Go, Go Siege)
· Knowledge Representation Models for Computer Go
· MCTS and Reinforcement Learning
· MCTS for Video Games
· Approximation Methods for MCTS
· MCTS for General Game Playing
· Hybrid MCTS Approaches
· Evolving MCTS Players

Authors should follow normal T-CIAIG guidelines for their submissions, but should clearly identify their papers for this special issue during the submission process. See http://www.ieee-cis.org/pubs/tciaig/ for author information. Extended versions of previously published conference papers are welcome, provided the journal version significantly extends the conference paper and is accompanied by a covering letter explaining the additional contribution.
Schedule
· Deadline for submissions: March 15, 2010
· Notification of Acceptance: June 15, 2010
· Final copy due: October 20, 2010
· Publication: December 2010 or March 2011

___
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
[computer-go] is Zen gone commercial?
Not so long ago (after its win in the Computer Olympiad) it was announced (or was it just a rumour?) that Zen would become publicly available, or available as a commercial package. Any news about this?
Re: [computer-go] is Zen gone commercial?
Willemien wrote:
> not so long ago (after its win in the computer olympiad) it was announced (or was it just a rumour) that Zen would come publicly available or available as commercial package.

It is already shipped in Japan, as Tencho no Igo. The product's name in English is Zenith Go. (Tencho = Zenith) It will be available on the Internet in the near future.

-- Yamato
Re: [computer-go] is Zen gone commercial?
> It is already shipped in Japan, as Tencho no Igo.

http://soft.mycom.co.jp/pcigo/tencho/index.html

Looks like Windows only. Anyone know if it will run under Wine on Linux?

They are advertising it as 2-dan (i.e. Japanese 2-dan). A rather pricey 13,400 yen, or 10,752 yen ($120) online.

Darren

--
Darren Cook, Software Researcher/Developer
http://dcook.org/gobet/ (Shodan Go Bet - who will win?)
http://dcook.org/mlsn/ (Multilingual open source semantic network)
http://dcook.org/work/ (About me and my work)
http://dcook.org/blogs.html (My blogs and articles)
Re: [computer-go] is Zen gone commercial?
> They are advertising it as 2-dan (i.e. Japanese 2-dan).

Sorry, I skimmed it too quickly. It actually says: KGS 2-dan, which is equivalent to Japanese Nihon Kiin 3-4 dan.

Darren
Re: [computer-go] is Zen gone commercial?
Darren,

If it doesn't work on Wine, you could always load a VM like Sun's VirtualBox, install a copy of Windows in that, and play from there. VirtualBox has very good performance, at above 95% of maximum (non-VM) speed. And there are plenty of throw-away copies of XP licenses available all over the place, as old systems retire and are replaced with newer hardware on which Vista is now installed.

Jim

From: Darren Cook dar...@dcook.org
To: computer-go computer-go@computer-go.org
Sent: Thursday, September 24, 2009 8:00:03 AM
Subject: Re: [computer-go] is Zen gone commercial?
Re: [computer-go] is Zen gone commercial?
Darren Cook wrote:
> Sorry, I skimmed it too quickly. It actually says: KGS 2-dan, which is equivalent to Japanese Nihon Kiin 3-4 dan.

Actually it is a little misleading. They didn't say that the commercial version is KGS 2d :-) Its 2-dan is equivalent to ZenLv6, a weak KGS 1d.

-- Yamato
RE: [computer-go] IEEE T-CIAIG Special Issue on Monte Carlo Techniques and Computer Go
Before Monte Carlo, I spent a couple of years writing and tuning an alpha-beta searcher. It's still in there, and I ship it to provide the lower playing levels. Alpha-beta with limited time makes much prettier moves than Monte Carlo. Would there be interest in a paper that compares the same knowledge and engine used in an alpha-beta and a Monte Carlo framework?

David

-Original Message-
From: computer-go-boun...@computer-go.org [mailto:computer-go-boun...@computer-go.org] On Behalf Of Olivier Teytaud
Sent: Thursday, September 24, 2009 4:45 AM
To: computer-go
Subject: [computer-go] IEEE T-CIAIG Special Issue on Monte Carlo Techniques and Computer Go
Re: RE: [computer-go] IEEE T-CIAIG Special Issue on Monte Carlo Techniques and Computer Go
> Before monte carlo I spent a couple of years writing and tuning an alpha-beta searcher. It's still in there and I ship it to provide the lower playing levels. Alpha-beta with limited time makes much prettier moves than monte carlo. Would there be interest in a paper that compares the same knowledge and engine used in an alpha-beta and monte carlo framework?

In my humble opinion, definitely yes; I'll be an interested reader of this.

Olivier
[computer-go] Generalizing RAVE
RAVE is part of a larger family of algorithms. In general, we can use direct Monte-Carlo results (i.e., the move played directly from a node) to determine the probability of winning after playing such a move. The generalized RAVE (GRAVE?) family does this by also including (usually with some discount) moves played on similar boards. Different algorithms in this family count different boards as similar:

- Basic MCTS (i.e., UCT) without a transposition table counts no other boards.
- A transposition table counts identical boards, i.e., those with the same stones on the board, player to move, simple ko point, and number of passes.
- AMAF counts all boards.
- RAVE counts boards that follow the current board in a playout.
- CRAVE (Context-dependent RAVE) counts boards where the neighborhood of the move in question looks similar. Dave Hillis discussed one implementation of this. I tried another; it works better than plain MCTS, but not as well as RAVE.
- NAVE (Nearest-neighbor RAVE) counts some set of boards which have a small Hamming distance from the current board. Literally storing all board-move pairs is catastrophically expensive in both memory and time.
- DAVE (Distributed RAVE) stores this information holographically, keeping win/run counts for each move combined with each point/color combination on the board. Thus, there is a set of runs for when a2 is black, another for when e3 is vacant, and so forth. To find the values for a particular board, sum across the points on that board. This is too expensive, but by probing based on only one random point, I was able to get something that beats MCTS (but not RAVE).

The following are left as exercises: http://www.onelook.com/?loc=rz4w=*avescwo=1sswo=1

It's conceivable that some statistical machine learning technique (e.g., neural networks) could be applied, with the playouts providing data for the regression.

The more I study this and try different variants, the more impressed I am by RAVE.
"Boards after the current board" is a very clever way of defining similarity. Also, recorded RAVE playouts, being stored in each node, expire in an elegant way.

It still seems that RAVE fails to exploit some sibling information. For example, if I start a playout with black A, white B, and white wins, I should (weakly) consider B as a response to any black first move.

Peter Drake
http://www.lclark.edu/~drake/
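As a footnote to the taxonomy above: most RAVE implementations fold the "similar board" statistics back into move selection as a weighted blend of the direct Monte-Carlo value and the RAVE value, with the weight decaying as direct samples accumulate. Below is a minimal sketch of that idea; the exact schedule and the `rave_bias` constant are illustrative assumptions, not anyone's published tuning.

```python
# Minimal sketch of blending direct Monte-Carlo and RAVE statistics
# for a single candidate move at a node. `rave_bias` is an assumed
# tuning constant controlling how quickly RAVE influence fades.

def rave_value(q_mc, n_mc, q_rave, n_rave, rave_bias=0.001):
    """Blend the direct value q_mc (from n_mc playouts through this
    move) with the RAVE value q_rave (from n_rave playouts in which
    the move appeared later), trusting RAVE less as n_mc grows."""
    if n_mc + n_rave == 0:
        return 0.5  # no information at all: assume an even game
    beta = n_rave / (n_rave + n_mc + rave_bias * n_rave * n_mc)
    return (1.0 - beta) * q_mc + beta * q_rave

# Early on the RAVE estimate dominates; after many direct playouts
# through the move, it fades out almost entirely.
print(rave_value(q_mc=0.4, n_mc=2, q_rave=0.7, n_rave=100))
print(rave_value(q_mc=0.4, n_mc=10000, q_rave=0.7, n_rave=100))
```

With no direct samples the blend is pure RAVE, which is exactly the elegant expiry property noted above: as a node accumulates its own playouts, the borrowed statistics lose their vote.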
Re: [computer-go] Generalizing RAVE
Peter Drake wrote:
> The more I study this and try different variants, the more impressed I am by RAVE. Boards after the current board is a very clever way of defining similarity. Also, recorded RAVE playouts, being stored in each node, expire in an elegant way. It still seems that RAVE fails to exploit some sibling information. For example, if I start a playout with black A, white B, and white wins, I should (weakly) consider B as a response to any black first move.

My thought exactly. I have also tried CRAVE, but the results were worse than normal RAVE. While RAVE is a very efficient algorithm, it strongly limits the scalability of the program. It typically makes a fatal mistake in positions where the order of moves is important. We definitely need to improve RAVE, but it is a very tough job.

-- Yamato
Re: [computer-go] Generalizing RAVE
Yamato wrote:
> I have also tried CRAVE, but the results were worse than normal RAVE. While RAVE is a very efficient algorithm, it strongly limits the scalability of the program. It typically makes a fatal mistake in positions where the order of moves is important. We definitely need to improve RAVE, but it is a very tough job.

Indeed it is. How may a program reason about the order of moves? At higher levels of play, the order of moves is often crucial.
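The move-order problem raised in this thread can be seen directly in the statistics: an AMAF-style update credits a move identically whether it was played first or twentieth in the playout, so two playouts that differ only in ordering are indistinguishable to the table. A toy illustration (the move names and data layout are arbitrary, not from any program discussed here):

```python
# Toy illustration of AMAF's blindness to move order: two playouts
# containing the same moves in different orders produce identical
# updates to the (wins, runs) table.
from collections import defaultdict

def amaf_update(stats, playout_moves, black_won):
    """Credit every black move in the playout as if it had been
    played first (the all-moves-as-first heuristic)."""
    for move in set(playout_moves):
        wins, runs = stats[move]
        stats[move] = (wins + (1 if black_won else 0), runs + 1)

stats_a = defaultdict(lambda: (0, 0))
stats_b = defaultdict(lambda: (0, 0))
amaf_update(stats_a, ["C3", "D4", "E5"], black_won=True)
amaf_update(stats_b, ["E5", "C3", "D4"], black_won=True)  # same moves, reordered
print(dict(stats_a) == dict(stats_b))  # → True
```

Any fix for the order-dependence problem has to break this symmetry somehow, e.g. by discounting credit with the move's depth in the playout.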
RE: [computer-go] Generalizing RAVE
I tried CRAVE also, using 3x3 patterns as the context. It didn't work.

David

-Original Message-
From: computer-go-boun...@computer-go.org [mailto:computer-go-boun...@computer-go.org] On Behalf Of Peter Drake
Sent: Thursday, September 24, 2009 12:00 PM
To: Computer Go
Subject: [computer-go] Generalizing RAVE