Hi all! I finally have an idea that seems worth investigating. Luckily, it is something that can be tested by modifying existing programs, so I started setting up an environment for it. To have a reference point, this morning I started a small tournament with two identical versions of fuego (@1k) and gnugo (@level 10), and they have played 863 rounds so far. The two fuegos score almost identically against gnugo, but against each other they stand at 449-415, i.e. 52%, where the 95% confidence margin is ~3%, i.e. ~10 Elo. So this is within the error margin, and it fluctuates a bit, but the lead is always on the side of the same instance, never dropping below 51.5%.
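For reference, this is roughly the arithmetic behind those numbers, as a quick Python sketch (it assumes a plain binomial model with no draws and a normal approximation for the interval; the variable names are just placeholders I picked):

import math

wins, losses = 449, 415
n = wins + losses
p = wins / n   # observed win rate of the leading fuego instance

# 95% half-width under a normal approximation to the binomial
margin = 1.96 * math.sqrt(p * (1 - p) / n)

def elo(win_rate):
    # Elo difference implied by a given win rate
    return 400 * math.log10(win_rate / (1 - win_rate))

print("win rate: %.3f +/- %.3f" % (p, margin))
print("Elo: %.1f (interval %.1f .. %.1f)" % (elo(p), elo(p - margin), elo(p + margin)))

The Elo interval computed this way still straddles zero, which is why I say the result is within limits.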
Is this normal? Sorry for the n00b question :-)

Best regards,
Vlad
