Hi,

As the original poster in this thread, I want to thank everyone who
contributed insights. It is a good reminder that statistical results
are often not obvious or expected, mostly because the model one
assumes is too simple and reality always has some surprises in store.

I ran similar rounds of tests with pachi and also observed a bias. In
that case, it was because I wasn't using --pass_all_alive, which
resulted in games that were not really finished, and the score
evaluation seemed to be biased in those positions. After adding that
parameter, the results were much closer to what I expected. (I ran
several series of over 1000 games each and never saw the kind of bias
that was there before.)
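As an aside, one way to sanity-check whether a deviation like this is noise or a real bias is an exact two-sided binomial test against the expected win rate. A minimal stdlib-only sketch (the 540-wins-out-of-1000 figure below is invented purely for illustration, not from my actual runs):

```python
# Hypothetical sketch: is an observed win count consistent with an
# assumed 50% win rate? Exact two-sided binomial test, stdlib only.
from math import comb


def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k wins in n independent games."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)


def two_sided_binom_test(wins: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided p-value: sum the probabilities of all
    outcomes at least as unlikely as the observed one."""
    observed = binom_pmf(wins, n, p)
    return sum(
        binom_pmf(k, n, p)
        for k in range(n + 1)
        if binom_pmf(k, n, p) <= observed + 1e-12
    )


# e.g. 540 wins out of 1000 games: real bias or just noise?
p_value = two_sided_binom_test(540, 1000)
print(f"p-value: {p_value:.4f}")
```

With numbers like these the p-value comes out small, so one would suspect a genuine bias rather than an unlucky streak; for deviations within a standard deviation or two of n/2 it stays large.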

It is possible that something similar happened with fuego, or that it
was simply an "unlucky" streak.

best regards,
Vlad
_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
