On Thu, 2006-11-30 at 14:44 -0800, David Doshay wrote:

> This is not my experience at all.
> 
> SlugGo was first written by a graduate student with data structures  
> that made sense to them, but not to me. I rewrote it to use  
> completely different data structures but with exactly the same  
> algorithm. It took less than half the time to run, and play was at  
> exactly the same level because it was move for move identical. Data  
> structures can have tremendous effect upon speed.
> 
> Also, my data shows that if I doubled the time allowed for playing,  
> thus "using" the time gained from faster execution for doing deeper  
> lookahead, the results did not improve, but actually got worse.
> 

To me, this just seems like the horizon effect in disguise.
Once you exhaust your ability to evaluate, you cannot see through the
fog; the horizon is dictated by the inability to evaluate correctly.
(To see my point, try fuzzing the evaluation results by 0.01-1 stone
by adding some noise. You will see the horizon come closer.)
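To make the fuzzing experiment concrete, here is a toy sketch (not from the original post, and the margin of 0.2 stones is my own assumption): two candidate moves whose true values differ by a fixed margin are scored by a noisy evaluator, and we estimate how often the better move is still preferred as the noise grows from 0.01 to 1 stone.

```python
import random

def pick_better(margin, sigma, trials=10000, seed=42):
    """Estimate the probability that a noisy evaluator still prefers
    the move whose true value is `margin` stones better."""
    rng = random.Random(seed)  # fixed seed for a repeatable estimate
    wins = 0
    for _ in range(trials):
        a = margin + rng.gauss(0, sigma)  # better move, noisy score
        b = 0.0 + rng.gauss(0, sigma)     # worse move, noisy score
        if a > b:
            wins += 1
    return wins / trials

# A 0.2-stone margin survives 0.01 stones of noise but drowns in 1 stone:
for sigma in (0.01, 0.1, 1.0):
    print(f"noise {sigma}: better move chosen {pick_better(0.2, sigma):.2%}")
```

With 0.01 stones of noise the better move is picked essentially always; at 1 stone it is close to a coin flip, i.e. the evaluator can no longer see past that distance, which is the horizon coming closer.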

Using multiple instances of gnugo to do the evaluation for you still
sticks you to the (#1) minimax+evaluation model, even if you apply the
slave-gnugo processes only to "local" problems (not to mention
interactions, or how to identify the subproblems in the first place).

Having slave processes do your tsumego or MC evaluations for you
still keeps you dependent on their evaluation noise. Adding CPUs won't
help you beat the noise, IMHO. It just pushes your horizon up to the
point where the fog hits you.

This is the point where I would like to introduce a paradigm shift,
but I cannot invent one at present.

HTH,
AvK

(#1) By "minimax" I mean minimax variants, including alpha-beta. They
all compute the same value.
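The footnote's claim can be checked directly. This sketch (my own illustration, not from the post) runs plain minimax and alpha-beta over random game trees, represented as nested tuples with numeric leaves, and confirms they back up identical root values; alpha-beta merely prunes work.

```python
import random

def minimax(node, maximizing):
    """Plain minimax: a leaf is a number, an internal node a tuple."""
    if not isinstance(node, tuple):
        return node
    vals = [minimax(c, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    """Alpha-beta: same value as minimax, fewer nodes visited."""
    if not isinstance(node, tuple):
        return node
    if maximizing:
        v = float('-inf')
        for c in node:
            v = max(v, alphabeta(c, False, alpha, beta))
            alpha = max(alpha, v)
            if alpha >= beta:
                break  # prune: this subtree cannot change the root value
        return v
    v = float('inf')
    for c in node:
        v = min(v, alphabeta(c, True, alpha, beta))
        beta = min(beta, v)
        if alpha >= beta:
            break  # prune
    return v

rng = random.Random(1)

def random_tree(depth, branch=3):
    """Uniform random tree with integer leaf values in [-10, 10]."""
    if depth == 0:
        return rng.randint(-10, 10)
    return tuple(random_tree(depth - 1, branch) for _ in range(branch))

for _ in range(20):
    t = random_tree(4)
    assert minimax(t, True) == alphabeta(t, True)
print("minimax and alpha-beta agree on all 20 trees")
```

The pruning changes only the cost of the search, never its result, which is why the (#1) grouping treats them as one model.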


_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/
