Gian-Carlo Pascutto wrote:
> Don Dailey wrote:
>> It looks like we have a clear trend now.   Light play-outs do not scale
>> as well as heavy play-outs.  
>> This is the same behavior we get with computer chess.   For the last few
>> decades it has been understood that a chess program with a better
>> evaluation function  improves MORE with increasing depth than one with a
>> lesser evaluation function so it appears that Go is not unique in this
>> regard or in its general ability to scale with CPU power.
>
> I'm also wondering what chess research result you are basing this on?
I am very surprised that you don't know about this principle, being the
author of a strong chess program. We knew about this in the 80's and
90's, but it's probably not that important to understand in order to
write a chess program. I would like to give you a little historical
perspective on this down below. But first ...

First of all, I am not aware of any published work on this specific
point; there may be some, but I have not seen it. Certainly many chess
concepts in the public domain remain officially unpublished.

However, you ask about chess research, and I have done my own; others
have reported similar results to me. I did a massive (unpublished)
study with my then partner Larry Kaufman (who now works with the Rybka
team). I started with a primitive chess program I had just begun
writing and ran massive self-play tests at several fixed levels. I
found that pawn-structure knowledge mattered more at the higher levels.
My conclusion is that search magnifies the power of the evaluation
function in a non-linear way. I don't know if this holds any water, but
one possible interpretation is that tactics are so dominant at lower
depths that a little extra positional understanding does not matter
much, and it's better to be a little faster. (It's dangerous to make up
ad-hoc explanations that sound reasonable but usually are not.)

This was a long time ago and I no longer have the actual data. Sorry.
However, you can reproduce this study for yourself if you are
interested.

I have been heavily into computer chess since the 70's and originally
believed just the opposite. As I discovered from many conversations
with others (usually at computer chess competitions) during the late
80's and 90's, this is the same conclusion that every other programmer
I talked to arrived at.

Over the years, many researchers did scalability studies to measure the
effects of diminishing returns on search depth, but most of them were
seriously flawed due to a meager number of test games. A friend of
mine, Ernst Heinz, probably did the most definitive study on this, but
he didn't address the effect of evaluation quality on scalability.


Here is some historical perspective, tell me if it matches what you know:

In the beginning of the "brute force" era of computer chess, which in my
mind began in the mid 70's with the Northwestern chess program "Chess
4.7", speed and depth were emphasized at all costs. It was possible to
get good results just by building a well-engineered, fast, and stupid
chess program. (This left a stigma on computer chess that persists to
this very day: that the best programs are stupid and fast, but that's
another diatribe.)

At that time, authors and enthusiasts fell into one of two camps,
sometimes called the "brute force" camp and the "intelligent
heuristics" camp, and they would get into heated arguments from time to
time. Nevertheless, it seems that all of the strongest programs
emphasized speed of execution and simple evaluation functions above all
else. The secret seemed to be a very fast search combined with a
well-balanced but not particularly sophisticated evaluation function.

As the years went by and chess programs started searching much more
deeply, the quality of the evaluation function seemed to be what
decided most games, more so than search depth. This was starting to be
a big deal in the 90's. The fast programs seemed to decline (programs
like Fritz were upgraded with knowledge even though this slowed them
down), and the smarter programs started to dominate. Of course it goes
without saying that you still had to have an excellent search, but the
real minefield was positional understanding: you couldn't misunderstand
a position without getting pounced on and beaten to death unmercifully.

It could be that this was a natural evolution, and that in the early
days chess knowledge was just as important as it is today but authors
simply didn't realize it. I believe, however, that it was a consequence
of the scalability issue. Even though programmers may not be CONSCIOUS
of it, they will gravitate towards whatever gives them the greatest
returns.

So in the early days it seemed more productive to make your program
faster; now it's more productive to make it smarter. I believe that if
you asked any computer chess programmer who has been doing this for 20
years or more, you would get this same understanding. This is not
normally the type of general knowledge that gets studied and published,
since it cannot be directly applied in strict fashion. It was simply
what we all understood from our own testing.

Having said that, it would be easy to study this directly with a
serious published study (if it hasn't been done). You could do this
test with your own chess program: make a version devoid of
pawn-structure knowledge, for instance, and test at about a dozen fixed
depth levels in round-robin fashion.
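To make the bookkeeping concrete, here is a minimal Python sketch of
how such a fixed-depth experiment could be tallied. The play_game
function is a purely hypothetical stand-in (it just simulates results
with a made-up depth-dependent edge); a real study would replace it
with actual games between the full version and the stripped-down
version of your program at each fixed depth:

```python
import random

def play_game(depth, rng):
    """Hypothetical stand-in for one engine-vs-engine game at a fixed
    search depth. Returns 1.0 if the full-evaluation version wins,
    0.5 for a draw, 0.0 if the stripped version wins.

    Toy model (an assumption, not real data): the knowledge edge is
    made to grow with depth, mirroring the hypothesis in the text."""
    edge = 0.02 * depth
    r = rng.random()
    if r < 0.45 + edge:
        return 1.0      # full-eval version wins
    elif r < 0.75:
        return 0.5      # draw
    return 0.0          # stripped version wins

def score_at_depth(depth, games=1000, seed=0):
    """Average score of the full-eval version over many games at one
    fixed depth. Many games per depth keeps the noise small enough to
    see the trend."""
    rng = random.Random(seed)
    return sum(play_game(depth, rng) for _ in range(games)) / games

if __name__ == "__main__":
    # About a dozen fixed depth levels, as suggested above.
    for depth in range(1, 13):
        print(depth, score_at_depth(depth))
```

The interesting output is not the absolute scores (those are
simulated here) but the shape of the curve: if the hypothesis holds,
the knowledge-rich version's score climbs as the fixed depth rises.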

As a thought experiment, you might imagine a chess evaluation function
that always returned zero unless there was a checkmate. Compare that to
an improved function that also knew the values of the pieces. There is
no question which evaluation function is "better", but which improves
more with depth?
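The two functions in the thought experiment can be written down in a
few lines. This is only an illustrative sketch: the position
representation (a list of (piece, sign) pairs, signs relative to the
side to move) and the mate score of -10000 are assumptions for the
example, not anyone's actual engine code:

```python
# Conventional piece values in pawn units.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def eval_zero(position, is_checkmate):
    """The 'knows nothing' function: only checkmate registers."""
    return -10000 if is_checkmate else 0

def eval_material(position, is_checkmate):
    """The 'improved' function: also counts material.
    position is a hypothetical list of (piece_letter, sign) pairs,
    sign +1 for the side to move's pieces, -1 for the opponent's."""
    if is_checkmate:
        return -10000
    return sum(sign * PIECE_VALUES[p] for p, sign in position)
```

For example, being up a queen against a rook scores
eval_material([("Q", +1), ("R", -1)], False) == 4, while eval_zero
sees nothing at all until a mate appears in the search tree.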

- Don

_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/
