My thinking was (and it's only my judgement, untested, so it could be wrong) that a benchmark consisting of computer-only positions would be more biased than one with some human input. Note that human input can cut both ways: first, you get positions arising from bad plays, and second, you may reach positions from good plays that the computer does not recognise (i.e. more backgames, say).
-Joseph

On 17 January 2012 02:53, Mark Higgins <[email protected]> wrote:
> As I (imperfectly!) understand it, the benchmark database was created by
> randomly picking off 100k positions from FIBS games.
>
> Why use FIBS games instead of generating them from eg an
> intermediate-strength neural net player? Presumably we don't care about the
> exact positions, but rather just want to have a large sample of
> representative positions that happen in realistic games.

_______________________________________________
Bug-gnubg mailing list
[email protected]
https://lists.gnu.org/mailman/listinfo/bug-gnubg
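For what it's worth, the "randomly picking off 100k positions" step itself is straightforward regardless of whether the games come from FIBS logs or from self-play by an intermediate net. A minimal sketch, assuming a stream of game records where each game is just a sequence of opaque position records (the `toy_games` generator below is a hypothetical stand-in, not gnubg's actual data format):

```python
import random

def sample_positions(games, k):
    """Reservoir-sample k positions uniformly from a stream of games,
    without needing to hold all positions in memory at once."""
    reservoir = []
    seen = 0
    for game in games:
        for pos in game:
            seen += 1
            if len(reservoir) < k:
                reservoir.append(pos)
            else:
                # Replace an existing entry with probability k / seen,
                # which keeps the sample uniform over everything seen.
                j = random.randrange(seen)
                if j < k:
                    reservoir[j] = pos
    return reservoir

# Hypothetical stand-in for real game records: each "game" is a list
# of position identifiers; real input would be FIBS match logs or
# self-play games from an intermediate-strength net.
toy_games = [[f"g{g}m{m}" for m in range(random.randint(20, 60))]
             for g in range(500)]

benchmark = sample_positions(toy_games, 1000)
print(len(benchmark))  # -> 1000
```

The one-pass reservoir approach matters only because a full FIBS dump may be too large to load at once; with the data in memory, a plain `random.sample` over the flattened position list does the same job.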
