On Tue, Sep 29, 2009 at 10:25:40PM +0200, Olivier Teytaud wrote:
> I think someone pointed out a long time ago on this mailing list that
> initializing the prior in terms of RAVE simulations was far less efficient
> than initializing the prior in terms of "real" simulations, at least with
> the classical RAVE formulas. At least, we saw an improvement when adding
> a prior to the "real" simulations, but we also saw an improvement when
> adding one more term, which is not linear. Sorry for forgetting who :-(

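For concreteness, here is a minimal Python sketch of the two ways such a
prior can be injected; the names and the equiv_rave constant are my own
illustration rather than anything from a particular engine, and the beta
formula is just one of the classical RAVE mixing variants:

    class Node:
        def __init__(self):
            self.wins = 0.0         # wins from "real" playouts
            self.visits = 0.0       # "real" playout count
            self.rave_wins = 0.0    # AMAF/RAVE wins
            self.rave_visits = 0.0  # AMAF/RAVE count

    def seed_prior_real(node, prior_value, prior_games):
        # Prior expressed as virtual "real" simulations: it keeps biasing
        # the node even after the RAVE term has been phased out.
        node.wins += prior_value * prior_games
        node.visits += prior_games

    def seed_prior_rave(node, prior_value, prior_games):
        # Prior expressed as virtual RAVE simulations: its influence decays
        # together with the RAVE term as real visits accumulate.
        node.rave_wins += prior_value * prior_games
        node.rave_visits += prior_games

    def node_value(node, equiv_rave=3000.0):
        # Classical RAVE mixing: beta -> 1 when there are few real visits,
        # -> 0 as real playouts accumulate.
        mc = node.wins / node.visits if node.visits > 0 else 0.5
        rave = (node.rave_wins / node.rave_visits
                if node.rave_visits > 0 else 0.5)
        denom = (node.rave_visits + node.visits
                 + node.rave_visits * node.visits / equiv_rave)
        beta = node.rave_visits / denom if denom > 0 else 0.0
        return beta * rave + (1.0 - beta) * mc
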
I'm wondering, are these tunings about squeezing out single-percent
increases within very narrow confidence bounds, or do they give an
immediately noticeable 10% boost when applied? I'm curious how the
top bots improve: by accumulating many tiny increases, or through long
quests for sudden boosts.

-- 
                                Petr "Pasky" Baudis
A lot of people have my books on their bookshelves.
That's the problem, they need to read them. -- Don Knuth
