Some months ago someone published a set of L&D (life-and-death) problems made for MCTS programs. Working through it, I found a lot of serious bugs in Valkyria where overly aggressive pruning removed tesujis (tesuji = a move that would normally be pruned, but is actually essential).
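To illustrate the bug class described above (this is a toy sketch, not Valkyria's actual code; the functions, cutoff, and widening schedule are all hypothetical): a hard pruning rule that drops any candidate whose heuristic prior falls below a cutoff can silently discard a tesuji, since by definition it looks prunable. A progressive-widening scheme avoids this by merely delaying low-prior moves until the node has enough visits, never removing them outright.

```python
def hard_prune(moves, cutoff=0.05):
    """Moves below the prior cutoff are gone for good."""
    return [m for m in moves if m["prior"] >= cutoff]

def progressively_widened(moves, visits, base=2, growth=1.5):
    """Search the top-k moves by prior, where k grows with the visit
    count -- every move is eventually considered, so no tesuji is
    lost forever. The schedule here is an arbitrary example."""
    k = base + int(visits ** 0.5 / growth)
    ranked = sorted(moves, key=lambda m: m["prior"], reverse=True)
    return ranked[:k]

moves = [
    {"name": "ordinary", "prior": 0.60},
    {"name": "tesuji",   "prior": 0.01},  # looks bad to the heuristic
]

# The tesuji never survives hard pruning...
assert all(m["name"] != "tesuji" for m in hard_prune(moves))
# ...but with widening it is searched once the node is visited enough.
assert any(m["name"] == "tesuji"
           for m in progressively_widened(moves, visits=100))
```

The point is only the shape of the fix: prefer deprioritizing to deleting when the pruning heuristic can be wrong.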

After that Valkyria improved by perhaps 50-100 Elo. But I agree that fine-tuning on difficult problems may make the program weaker. It is like letting evolution run on zoo animals for generations and then setting them loose in their natural environment. Not likely to be an improvement.

So, I think test suites can be very helpful for finding serious bugs, but that's it.

Studying L&D is good for human players, but IMHO the strength gain does not come from solving L&D situations better; it comes from the fact that L&D problems improve the ability to read in general. In other words, studying L&D improves the "human search algorithm" in general.

-Magnus


Quoting David Fotland <[EMAIL PROTECTED]>:

The scary strong Rybka program claims to be weak tactically.  The
developers say that problem solving skill does not correlate strongly
with playing strength and they don't tune or care about that.

I've found the same thing for go.  I have a large tactical problem set, and
I use it for regressions, but I've found that spending much time tuning to
solve problems can make the program weaker.  There is not a strong
correlation between problem solving and general go strength.
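A minimal sketch of the kind of regression harness the quoted text describes (the names `solve` and `problems` are hypothetical, and the toy engine stands in for a real one): run the engine over a fixed problem set and track the solve rate across versions, rather than tuning until every problem passes.

```python
def regression_rate(solve, problems):
    """Fraction of problems where the engine's chosen move matches
    the known answer -- a number to compare between versions, not
    a target to maximize at all costs."""
    solved = sum(1 for p in problems
                 if solve(p["position"]) == p["answer"])
    return solved / len(problems)

# Toy stand-in for an engine: always plays the first legal move.
problems = [
    {"position": ["a", "b"], "answer": "a"},
    {"position": ["c", "d"], "answer": "d"},
]
toy_engine = lambda pos: pos[0]
rate = regression_rate(toy_engine, problems)  # 0.5 on this toy set
```

Used this way, a drop in the rate flags a bug like the ones Magnus describes, without the suite becoming a tuning objective.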

_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/