>
> Your implementation must be very different from mine. Actually I don't
> use Progressive widening (or unpruning) at all. It's a mystery to me why
> others say it does work.
>
>
Hi Yamato. I want to clarify what we use in MoGo.
What we call "progressive unpruning" is what Rémi Coulom terms "progressive
bias". It is the use of a score that is a linear combination of:
1) expert knowledge
2) patterns (in 19x19)
3) RAVE values
4) the regularized success rate (nbWins + K) / (nbSims + 2K)
(the original "progressive bias" is simpler than that)

For small numbers of simulations, 1) and 2) are the most important;
3) becomes important later; and 4) is, later still, the most important term.
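To make the schedule above concrete, here is a minimal sketch of such a blended move score. The regularized success rate is exactly the formula from the post; the decay schedules for the prior terms and the RAVE term (`prior_weight`, `rave_k`, and the beta formula) are illustrative assumptions of mine, not MoGo's actual constants.

```python
import math

def move_score(expert, pattern, rave_value, wins, sims,
               K=50, prior_weight=10.0, rave_k=1000.0):
    """Blend the four terms from the post.

    expert, pattern, rave_value are values in [0, 1]; wins/sims are the
    node's win and simulation counts. All weighting constants here are
    hypothetical placeholders."""
    # 4) regularized success rate, as given in the post:
    reg = (wins + K) / (sims + 2 * K)
    # Assumed decay: expert knowledge and patterns dominate early,
    # then fade as simulations accumulate.
    w_prior = prior_weight / (prior_weight + sims)
    # Assumed RAVE schedule: influential at moderate counts, fading so
    # that the regularized success rate dominates in the limit.
    beta = math.sqrt(rave_k / (3 * sims + rave_k)) if sims else 1.0
    value = beta * rave_value + (1 - beta) * reg
    return w_prior * (expert + pattern) / 2 + (1 - w_prior) * value
```

With zero simulations the score is just the average of the expert and pattern priors; with many simulations it converges toward the regularized success rate, matching the ordering described above.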

Progressive widening is not efficient for us in MoGo when we use
progressive bias.

However, for some applications of UCT, progressive widening as specified in
Rémi's paper is very efficient for us, though with different constants (the
paper by Yizao Wang, Jean-Yves Audibert and Rémi Munos is also interesting
in that area; yet it's not useful for the case of Go).
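For readers unfamiliar with the technique: progressive widening restricts selection to the top-k moves under some heuristic ordering, with k growing as the parent node accumulates visits. A minimal sketch follows; the exponent `alpha` and the offset are hypothetical constants, which matters because, as noted above, the right constants are application-dependent.

```python
import math

def widening_limit(parent_sims, alpha=0.5):
    """Number of candidate moves eligible at a node with parent_sims
    visits; grows polynomially with the visit count. alpha=0.5 is an
    illustrative choice, not a recommended value."""
    return 1 + int(math.floor(parent_sims ** alpha))

def widened_candidates(moves_by_prior, parent_sims, alpha=0.5):
    """Restrict selection to the top-k moves, where moves_by_prior is
    already sorted by some heuristic prior (patterns, expert rules)."""
    return moves_by_prior[:widening_limit(parent_sims, alpha)]
```

So a fresh node considers a single move, and more moves are "unpruned" as evidence accumulates; progressive bias instead scores all moves from the start and lets the prior terms decay.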

Thanks to all the people posting interesting information here; best regards,
Olivier
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
