On Tue, Feb 17, 2009 at 8:23 PM, George Dahl <george.d...@gmail.com> wrote:
> It is very hard for me to figure out how good a given evaluator is (if
> anyone has suggestions for this, please let me know) without seeing it
> incorporated into a bot and looking at the bot's performance. There is
> a complicated trade-off between the accuracy of the evaluator and how
> fast it is. We plan on looking at how well our evaluators predict the
> winner or territory outcome (or something similar) for pro games, but
> in the end, what does that really tell us? There is no way we are ever
> going to be able to make a fast evaluator using our methods that
> perfectly predicts these things.
You are optimizing two things: quality and speed. One can be exchanged
for the other, so together they span a frontier of solutions. Until you
fix one, e.g. by setting a time constraint, you can look at the entire
frontier of (Pareto-optimal) solutions. Any evaluator that falls behind
the frontier is bad.

Erik

_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
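Erik's frontier idea can be sketched in code. This is a minimal
illustration, not anything from the thread: the evaluator names and the
(quality, speed) numbers below are made up, and "dominated" here just
means some other evaluator is at least as good on both axes and strictly
better on one.

```python
def pareto_frontier(evaluators):
    """Return the names of evaluators not dominated on (quality, speed).

    Each entry is (name, quality, speed); higher is better on both axes.
    """
    frontier = []
    for name, quality, speed in evaluators:
        dominated = any(
            q2 >= quality and s2 >= speed and (q2 > quality or s2 > speed)
            for n2, q2, s2 in evaluators
            if n2 != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

# Hypothetical candidates: accuracy on some benchmark vs. evaluations/sec.
candidates = [
    ("A", 0.90, 10),   # accurate but slow
    ("B", 0.70, 500),  # fast but weaker
    ("C", 0.65, 400),  # dominated by B: worse on both axes
]
print(pareto_frontier(candidates))  # ['A', 'B']
```

Fixing a time budget then amounts to filtering on speed first and
picking the highest-quality evaluator that remains.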