Erik van der Werf wrote:
On 4/10/07, alain Baeckeroot <[EMAIL PROTECTED]> wrote:
On Monday 9 April 2007 14:06, Don Dailey wrote:
>  But the point is that as long as you can provide time and memory,
> you will get improvement until perfect play is reached.
Is there any proof that heavy playouts converge toward the same solution as
pure random playouts?

With infinite resources, I agree that random playouts will find the best move.
But it seems that nothing is guaranteed for heavy playouts.

With infinite resources the MC part won't have to make any move (heavy
or light), so it does not matter. Of course, this is all just theoretical
throughout most of the game for any board of reasonable size.

BTW, pure random would fill its own eyes...
The benefit of MC/UCT is that it can be stopped at any time and still give
a reasonable current best move. The probability that it finds better moves
tends to increase with more resources/playouts.

But wouldn't a simple brute-force search of the game tree be more
efficient than MC/UCT at finding the provably best move, given the
hypothetical hyper supercomputer we are talking about?

So if this hyper computer ever exists, with near-infinite speed and
resources, we could just run a methodical brute-force search and be done
with it.  Why would anyone bother running a googolplex of random simulations
to approach perfect play?
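For concreteness, the "methodical brute-force search" here would just be plain negamax to the end of the game, with no evaluation function and no pruning. A sketch, again over a hypothetical game-state interface (legal_moves, play returning a new state, opponent, game_over, score_for), and of course only feasible on that imaginary machine:

def negamax(state, color):
    # Exhaustive game-tree search: the exact game-theoretic value of
    # `state` from `color`'s point of view, searching to the end.
    if state.game_over():
        return state.score_for(color)      # e.g. +1 win, 0 draw, -1 loss
    best = float('-inf')
    for move in state.legal_moves(color):
        child = state.play(move, color)    # returns a new state
        best = max(best, -negamax(child, state.opponent(color)))
    return best

def perfect_move(state, color):
    # The provably best move: the child subtree with the highest value.
    return max(state.legal_moves(color),
               key=lambda m: -negamax(state.play(m, color), state.opponent(color)))

Even with alpha-beta pruning this is hopeless on a real 19x19 board, of course.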

So to be practical, we have to find ways to improve the search beyond
pure scalability, as the MoGo developers are doing.

Does this make sense?

