On Fri, 2008-08-29 at 10:00 +0200, Magnus Persson wrote:
> Some months ago someone published a set of L&D problems made for MCTS  
> programs. Going through this I found a lot of serious bugs in Valkyria  
> where overly aggressive pruning removed tesujis (tesuji = a move that  
> looks like it normally should be pruned).
> 
> After that Valkyria improved perhaps 50-100 Elo. But I agree that  
> fine-tuning on difficult problems may make the program weaker. It is  
> like letting evolution run on zoo animals for generations and then  
> setting them free in their natural environment. Not likely to be an  
> improvement.
> 
> So, I think test suites can be very helpful for finding serious bugs,  
> but that's it.
> 
> Studying L&D is good for human players, but IMHO the strength gain does  
> not come from solving L&D situations better; it comes because L&D  
> problems improve the ability to read in general. Or in other words,  
> studying L&D improves the "human search algorithm" in general.

Problem solving ability of course IS a relevant skill, and there is some
correlation between problem solving and strength, perhaps even a LOT of
correlation. If you took a computer and program from 20 years ago and
held a problem-solving contest against a modern program, there would be
no contest whatsoever.

But I think the fact remains that so much else is involved that this is
just one small aspect of winning ability. You can easily have one
program that is significantly weaker at solving problems but is still
the superior program. But "weaker" is relative - such a program still
plays far better tactically than programs of 20 years ago. We define
tactically weak (for chess programs) much differently than we used to -
if a program cannot find a mate in 10 in a second or two, we might
consider it tactically weak. 20 years ago that would have been
considered amazing, and such a program probably would have dominated the
others.

But as far as I know nobody has successfully used problem sets to
replace playing actual games for measuring strength improvements.
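
To be concrete about what "playing actual games" measures: the standard way to turn a match record into a strength estimate is the logistic Elo model. A minimal sketch (the function name is mine):

```python
import math

def elo_diff_from_score(wins, losses, draws=0):
    """Estimate the Elo difference implied by a head-to-head match record.

    Uses the standard logistic model: expected score
    p = 1 / (1 + 10^(-diff/400)), inverted to diff = 400 * log10(p / (1 - p)).
    """
    games = wins + losses + draws
    p = (wins + 0.5 * draws) / games
    if p <= 0.0 or p >= 1.0:
        raise ValueError("score must be strictly between 0 and 1")
    return 400 * math.log10(p / (1 - p))

# A 55% score over 1000 games corresponds to roughly +35 Elo.
print(round(elo_diff_from_score(550, 450)))  # 35
```

Note that a 50-100 Elo gain of the kind Magnus describes needs many hundreds of games to distinguish from noise, which is exactly why a cheap problem-set proxy would be attractive if it worked.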

I think it might be technically possible to use a large set of positions
of the type "find the best move" for this, but it may be close to
impossible to CHOOSE a set of positions that would actually work for
this purpose. And I'm sure these positions would not be very tactical
and might even be downright boring.

I have found, like you, that using problems for debugging and spotting
issues works best.

- Don

> 
> -Magnus
> 
> 
> Quoting David Fotland <[EMAIL PROTECTED]>:
> 
> >> The scary strong Rybka program claims to be weak tactically.  The
> >> developers say that problem solving skill does not correlate strongly
> >> with playing strength and they don't tune or care about that.
> >
> > I've found the same thing for go.  I have a large tactical problem set, and
> > I use it for regressions, but I've found that spending much time tuning to
> > solve problems can make the program weaker.  There is not a strong
> > correlation between problem solving and general go strength.
> 
> _______________________________________________
> computer-go mailing list
> [email protected]
> http://www.computer-go.org/mailman/listinfo/computer-go/
