After about the 5th reading, I'm concluding that this is an excellent paper. Is anyone (besides the authors) doing research based on this? There is a lot to do.

David Silver wrote:

Hi everyone,

Please find attached my ICML paper with Gerry Tesauro on automatically learning a simulation policy for Monte-Carlo Go. Our preliminary results show a 200+ Elo improvement over previous approaches, although our experiments were restricted to simple Monte-Carlo search with no tree on small boards.

Abstract

In this paper we introduce the first algorithms for efficiently learning a simulation policy for Monte-Carlo search. Our main idea is to optimise the balance of a simulation policy, so that an accurate spread of simulation outcomes is maintained, rather than optimising the direct strength of the simulation policy. We develop two algorithms for balancing a simulation policy by gradient descent. The first algorithm optimises the balance of complete simulations, using a policy gradient algorithm; whereas the second algorithm optimises the balance over every two steps of simulation. We compare our algorithms to reinforcement learning and supervised learning algorithms for maximising the strength of the simulation policy. We test each algorithm in the domain of 5x5 and 6x6 Computer Go, using a softmax policy that is parameterised by weights for a hundred simple patterns. When used in a simple Monte-Carlo search, the policies learnt by simulation balancing achieved significantly better performance, with half the mean squared error of a uniform random policy, and equal overall performance to a sophisticated Go engine.

-Dave

_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
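The balancing idea in the abstract can be sketched in a few lines: instead of ascending the expected simulation outcome E[z], adjust the softmax weights by gradient descent on the squared bias (V* - E[z])^2, so the policy's average outcome matches the position's true value. Below is a minimal toy, not the paper's method: three actions, two hypothetical "pattern" features, a made-up target value V*, and exact gradients where the paper estimates everything from Monte-Carlo simulations of Go games.

```python
import numpy as np

# Toy stand-ins (all values invented for illustration):
features = np.array([[1.0, 0.0],      # per-action pattern features
                     [0.0, 1.0],
                     [1.0, 1.0]])
outcomes = np.array([1.0, 0.0, 0.5])  # simulation result z if that action is taken
v_star = 0.6                          # assumed true value V* of the position

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

theta = np.zeros(2)  # one weight per pattern
alpha = 0.5
for _ in range(500):
    p = softmax(features @ theta)
    ez = p @ outcomes                  # E[z]: expected simulation outcome
    # policy gradient of E[z]: sum_a p_a z_a (phi_a - mean_phi)
    grad_ez = (p * outcomes) @ features - ez * (p @ features)
    # descend on the squared bias (V* - E[z])^2, not ascend on E[z]
    theta += alpha * (v_star - ez) * grad_ez

final_ez = softmax(features @ theta) @ outcomes
print(round(float(final_ez), 3))  # expected outcome ends up near V* = 0.6
```

The point of the toy is the update rule: a strength-maximising learner would push all probability onto the z = 1.0 action, whereas the balancing update settles on a mixture whose mean outcome equals V*, which is what makes the averaged simulations an unbiased evaluator.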
