On Tue, 2008-12-16 at 19:34 -0500, Weston Markham wrote:
> I may do that, although personally I would be far more cautious about
> drawing conclusions from those matches, as compared to ones played
> against a strong reference opponent.  But I guess other people feel
> differently about this.  Anyway, the results would still be
> interesting to me no matter which way they went, even if they failed
> to convince me of anything.

It's my opinion that it's bad to test against a single opponent.
Ideally, if you can arrange it, you want to test against a variety of
opponents that are "different": not based on the same code, or even
similar in design.  I don't think it's possible any longer to avoid
MCTS-based bots entirely, but it is possible with the reference bot.

Also, you want to normalize the strength: find opponents that play at
close to the same level, even if you have to manipulate their playing
levels to achieve this.  It's basically a waste of resources to play
opponents that are going to beat you most of the time, or that you are
going to beat most of the time, because it takes many more games to
zero in on your actual performance with any accuracy.  For instance,
if you can only win 1 out of 100 games, how much are you going to
learn from 100 games?  You will either lose all 100, or possibly win
1 or 2, and you cannot ascertain with much precision how strong the
program is.
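To put rough numbers on that, here is a little back-of-the-envelope
Python sketch (not from any bot; the win rates and the 100-game budget
are just made-up figures).  It propagates the binomial error of the
observed win rate through the usual logistic Elo formula, so you can
see how much wider the error bars get when the match-up is lopsided:

import math

def elo_diff(p):
    """Elo difference implied by a win rate p under the logistic model."""
    return -400.0 * math.log10(1.0 / p - 1.0)

def elo_error(p, n):
    """Rough 1-sigma error of the Elo estimate from n games at win rate p.

    Propagates the binomial standard error sqrt(p*(1-p)/n) through the
    derivative of elo_diff with respect to p.
    """
    se_p = math.sqrt(p * (1.0 - p) / n)
    d_elo_dp = 400.0 / (math.log(10.0) * p * (1.0 - p))
    return d_elo_dp * se_p

for p in (0.50, 0.10, 0.01):   # hypothetical observed win rates
    print("p=%.2f  Elo diff %+7.1f  +/- %5.1f after 100 games"
          % (p, elo_diff(p), elo_error(p, 100)))

Around a 50% win rate, 100 games pin the Elo difference down to
roughly +/- 35 points; at a 1% win rate the same 100 games leave an
error on the order of +/- 175 points.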

- Don


