Thank you for the paper, Olivier.

It's very interesting.  Now I have two questions.

(1) In Algorithm 1, line 7 in the code part (below) looks odd.
>Function reward = PerformSimulation(T, s)
I guess it should just be "Function PerformSimulation(T, s)".

(2) In Table 4, why are the configurations not 32 vs 1, 16 vs 1, 8 vs
1, ... but 32 vs 1, 32 vs 2, 32 vs 4, ...?  The former is more common
and makes the scalability easier to evaluate, I suppose.  Don't you
have those numbers?

Hideki

Olivier Teytaud: <[email protected]>:
>>
>> I'd like to know both numbers, Pasky.  BTW, does pachi use root
>> parallelisation on the cluster, i.e., the same as MoGo, Fuego
>> and MFG?
>>
>>
>In MoGo it's not root parallelization. We share the statistics in the tree,
>e.g. once per second (depending on the time settings);
>more precisely, we average
>- the number of wins,
>- the number of losses,
>- the number of AMAF wins,
>- the number of AMAF losses,
>in all nodes in the tree with more than e.g. 5% of the number of simulations
>at the root.
>
>This is not so different from averaging just at the root, but there's a
>slight improvement.
> (the difference is probably much higher when building strategies, as in
>opening book building, instead of just choosing one move)
>
>It is, on the other hand, much better than averaging just before taking the
>decision (i.e., roughly speaking, voting) as proposed in some papers.
>
>More details in http://hal.inria.fr/inria-00512854/.
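>
>For illustration, the sharing scheme above can be sketched roughly as
>follows.  This is only a hypothetical sketch, not MoGo's actual code:
>the node keying, data structures, and function names are invented; only
>the four averaged quantities and the e.g.-5% threshold come from the
>description above.

```python
from dataclasses import dataclass

@dataclass
class Stats:
    """Per-node statistics accumulated by one cluster worker (hypothetical layout)."""
    wins: float = 0.0
    losses: float = 0.0
    amaf_wins: float = 0.0
    amaf_losses: float = 0.0

def average_stats(per_worker: list[dict[str, Stats]],
                  root_sims: float,
                  threshold: float = 0.05) -> dict[str, Stats]:
    """Average wins/losses/AMAF stats over all workers, but only for tree
    nodes with more than `threshold` * root_sims simulations (the e.g. 5%
    cutoff mentioned in the email); rarely visited nodes are not shared."""
    merged: dict[str, Stats] = {}
    n = len(per_worker)
    for key in set().union(*per_worker):
        stats = [w.get(key, Stats()) for w in per_worker]
        sims = [s.wins + s.losses for s in stats]
        if max(sims) < threshold * root_sims:
            continue  # below the sharing cutoff: keep local statistics only
        merged[key] = Stats(
            wins=sum(s.wins for s in stats) / n,
            losses=sum(s.losses for s in stats) / n,
            amaf_wins=sum(s.amaf_wins for s in stats) / n,
            amaf_losses=sum(s.amaf_losses for s in stats) / n,
        )
    return merged
```

>In a real engine this merge would run periodically (e.g. once per
>second, as above) and the averaged values would be written back into
>each worker's tree.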
>
>Best regards,
>Olivier
>_______________________________________________
>Computer-go mailing list
>[email protected]
>http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
-- 
Hideki Kato <mailto:[email protected]>
