Hi; I don't know to what extent my terminology is commonly used, but it seems to be close to the distinction Dave makes (though not exactly the same).
For me, "progressive widening" means progressively adding moves to the pool of moves under consideration, whereas "progressive unpruning" means a score is computed for all moves, with a weight depending on the number of simulations; typically, I consider the RAVE formula of Gelly & Silver a form of progressive unpruning (with the RAVE value as the "prior" value).

Progressive unpruning works better in MoGo, in spite of the following:
* with progressive widening, if you apply "pure" UCB values, you are definitely consistent and (asymptotically) explore the whole tree;
* with progressive unpruning, if your prior is bad, you can have situations in which the score of the only good move is 0 while the scores of all the bad moves go to 0 but stay > 0, so the good move is never simulated (this can, however, easily be patched by a lower bound on the value given by the prior).

Progressive unpruning, if you use the same terminology as me, has the advantage that the number of moves under consideration adapts to the scores: if you have three reasonable moves and 300 bad moves, progressive widening will eventually visit all moves, whereas progressive unpruning will stay on the three reasonable moves as long as they provide a better score than the prior score of the 300 bad moves.

(The difference with Dave's terminology is that I do not necessarily use a sum of a prior number of simulations and a number of "real" simulations; I use a weighted sum whose weight depends on the number of simulations in a complicated manner. I think this was not the case in the first paper by Chaslot et al. about progressive unpruning (well, I believe that is the first paper about progressive unpruning).)

Best regards,
Olivier
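PS: in case it helps, here is a minimal Python sketch of the two schemes as I described them. The schedule k * n^alpha, the constants, and the weight function are my own illustrative choices, not MoGo's actual (more complicated) formulas:

```python
import math

def uct_score(wins, visits, parent_visits, c=1.4):
    """Plain UCB1 value for a move that is already in the widened pool."""
    if visits == 0:
        return float("inf")  # unvisited eligible moves are tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def widened_candidates(moves, parent_visits, k=1.0, alpha=0.5):
    """Progressive widening: only the first k * n^alpha moves (in some
    fixed prior ordering) are eligible; the pool grows with n, so
    asymptotically every move is explored."""
    limit = max(1, int(k * parent_visits ** alpha))
    return moves[:limit]

def unpruned_score(prior, wins, visits, weight=50.0, floor=0.01):
    """Progressive unpruning: every move always gets a score, blending
    a prior value with the empirical mean; the prior's weight decays
    with the number of simulations.  `floor` is the lower-bound patch
    mentioned above, so a bad prior cannot pin a move's score at 0."""
    beta = weight / (weight + visits)  # 1 with no data, -> 0 asymptotically
    empirical = wins / visits if visits else 0.0
    return beta * max(prior, floor) + (1 - beta) * empirical
```

With unpruning, a move with no simulations scores exactly its (floored) prior, so the three reasonable moves keep being selected as long as their blended scores beat the priors of the 300 bad moves; with widening, the growing `limit` forces every move into the pool eventually.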
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/