Re: [computer-go] Progressive widening vs unpruning
4) regularized success rate (nbWins + K) / (nbSims + 2K)
(the original progressive bias is simpler than that)
I'm not sure what you mean here. Can you explain a bit more?
Sorry for being unclear, I hope I'll do better below
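For readers following the thread, the regularized success rate in item 4 can be sketched as below. This is a minimal illustration of the formula as quoted; the function name and default K are my own, not from any particular engine:

```python
def regularized_success_rate(nb_wins, nb_sims, k=1.0):
    """Regularized success rate: (nbWins + K) / (nbSims + 2K).

    With zero simulations the value is K / 2K = 0.5, a neutral prior;
    as nbSims grows the value converges to the raw win rate nbWins/nbSims,
    so the regularization fades with experience.
    """
    return (nb_wins + k) / (nb_sims + 2 * k)
```

Note that an unvisited node gets value 0.5 rather than a division by zero, which is one practical reason for regularizing the raw win rate.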
What's your general approach? My understanding from your previous posts is that it's something like:
Your understanding is right.
By the way, all the current strong programs are really very similar... Perhaps Fuego has something different in 19x19 (no big database of patterns?). I'm not
On Oct 2, 2009, at 2:24 PM, Olivier Teytaud <olivier.teyt...@lri.fr> wrote:
> 4) regularized success rate (nbWins + K) / (nbSims + 2K)
> (the original progressive bias is simpler than that)
> I'm not sure what you mean here. Can you explain a bit more?
Look for the graph I posted a few weeks ago. Most things tried make it
worse. Some make it a little better, and every now and then there is a big
jump.
David
I'm wondering, are these tunings about squeezing single-percent increases with very narrow confidence bounds, or something that gives
I guess I'm not really appreciating the difference between node value prior and progressive bias - adding a fixed small number of wins or diminishing heuristic value seems very similar to me in practice. Is the difference noticeable?
It just means that the weight of the prior does not ...
David
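The two schemes being compared above can be sketched side by side. This is a hedged sketch: the decay schedule for progressive bias varies between programs, and the function names and constants here are illustrative only:

```python
def value_with_prior(nb_wins, nb_sims, prior_wins, prior_sims):
    # Node-value prior: initialize the node with virtual wins/simulations.
    # The prior's relative weight shrinks as real simulations accumulate.
    return (nb_wins + prior_wins) / (nb_sims + prior_sims)

def value_with_progressive_bias(nb_wins, nb_sims, heuristic, c=1.0):
    # Progressive bias: add a heuristic bonus that decays with visit count.
    # The schedule c * H / (nbSims + 1) is one common choice; programs differ.
    mean = nb_wins / nb_sims if nb_sims > 0 else 0.5
    return mean + c * heuristic / (nb_sims + 1)
```

Both expressions converge to the raw win rate as nbSims grows, which is why the two feel "very similar in practice"; the difference shows up mainly at low visit counts, where the shape of the decay matters.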
From: computer-go-boun...@computer-go.org
[mailto:computer-go-boun...@computer-go.org] On Behalf Of Olivier Teytaud
Sent: Tuesday, September 29, 2009 1:26 PM
To: computer-go
Subject: Re: [computer-go] Progressive widening vs unpruning