Hi,

I know this topic was on the list a while ago. My problem is that, as in
all science, nobody publishes negative results :)

Oakfoam uses both progressive bias and progressive widening. My
understanding is that this is state of the art in many Monte Carlo bots;
at least, the theses I read used both.

Both work well in Oakfoam. Recently I turned progressive widening off
and tuned progressive bias carefully (good scaling of the bias and
improved decay functions). On 9x9 against GNU Go I got the same playing
strength as before with both (bias and widening), but I cannot improve
any further by turning progressive widening back on.
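To make the tuning knobs concrete, here is a minimal sketch of a UCT
score with a progressive-bias term. This is not Oakfoam's actual code;
the 1/(visits + 1) decay and the `bias_weight` scaling parameter are
illustrative choices standing in for the "scaling" and "decay functions"
mentioned above.

```python
import math

def uct_with_progressive_bias(wins, visits, parent_visits, prior,
                              c=1.0, bias_weight=1.0):
    """UCT score plus a heuristic bias that decays with visit count.

    `prior` is the pre-knowledge value of the move in [0, 1]. The bias
    term dominates while the child has few simulations and fades as
    real playout results accumulate.
    """
    if visits == 0:
        return float("inf")  # try unvisited children first
    exploitation = wins / visits
    exploration = c * math.sqrt(math.log(parent_visits) / visits)
    bias = bias_weight * prior / (visits + 1)  # decaying heuristic term
    return exploitation + exploration + bias
```

With this form, the relative sizes of two moves' priors directly shift
their scores, which is exactly why careful scaling matters.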

My interpretation is: progressive bias is the superior concept, but
progressive widening is easier to use.
Progressive widening is not sensitive to the ratio of the pre-knowledge
values of two moves; it only requires that the better move be unpruned
first. Progressive bias, on the other hand, is sensitive to the ratio
between the pre-knowledge values.
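The ordering-only property can be seen in a minimal widening sketch
(again not Oakfoam's implementation; the logarithmic schedule and
`base` parameter are just one common choice):

```python
import math

def widened_children(children_by_prior, parent_visits, base=2.0):
    """Progressive widening: only the top-k children are searchable.

    `children_by_prior` is sorted best-first by heuristic pre-knowledge.
    k grows logarithmically with the parent's visit count. Note that
    only the *ranking* of the priors enters the formula; the numeric
    ratio between two moves' prior values never appears.
    """
    k = 1 + int(math.log(parent_visits + 1, base))
    return children_by_prior[:k]
```

So a widening schedule survives a crude heuristic as long as the
ordering is right, whereas a bias term needs the values themselves to
be well calibrated.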

It may be even harder to tune progressive bias well on 19x19, so there
might still be a reason to use widening there, but at the moment I feel
I should try without it?

Am I wrong?

Detlef


_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go