Darren Cook: <[email protected]>: >> The math escapes me here. I think doubling the playouts gains in the >> neighborhood of 70 ELO points. If adding a thread costs 10 ELO, >> adding more threads would stop being beneficial after about 14 >> threads. Doubling from 7 to 14 would lose 7*10 ELO, equaling the gain >> of the extra playouts. After that, adding threads should actually >> lose ELO. Yet we see people trying to put together systems with 100s >> of CPUs. What am I missing? > >Richard Segal (who operates Blue Fuego) has a paper on the upper limit >for scaling: > http://www.springerlink.com/content/b8p81h40129116kl/ >(Sorry, I couldn't find an author's download link for the paper; Richard >is on the Fuego list but I'm not sure he is even a lurker here.) > >I didn't fully understand the methodology, but what I did take away from >it (and discussions with Richard) was that though we're satisfied that >pure UCT eventually expands all nodes and can solve a position just like >minimax, this is not the case once you start adding enhancements such as >rave and virtual loss and *parallelizing the algorithm*.
Hm, even if the theory guarantees convergence, in practice not only time but also memory is finite. Good GC (garbage collection), or rather, algorithms that prune less important leaf nodes, is important for MCTS, because MCTS requires much more memory than depth-first search.

My question is, then: is it possible to design a GC algorithm for MCTS that does not harm the convergence? Currently, I guess, most developers use the visit count of a node to decide whether to prune it. One more question: does this harm the convergence?

Hideki
--
Hideki Kato <mailto:[email protected]>
_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go
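The visit-count pruning Hideki describes can be sketched roughly as below. This is only an illustration in Python, not code from any actual engine; the `Node` class, the `min_visits` threshold, and the `prune` function are all hypothetical names. The comment in `prune` points at the crux of his question: discarding a subtree throws away its statistics, which is exactly what UCT's convergence argument assumes never happens.

```python
# Illustrative sketch of visit-count-based pruning for an MCTS tree.
# All names are hypothetical; real engines (Fuego etc.) differ.

class Node:
    def __init__(self, move=None):
        self.move = move        # move leading to this node
        self.visits = 0         # playouts through this node
        self.wins = 0           # accumulated reward
        self.children = []      # expanded child nodes

def prune(node, min_visits):
    """Recursively drop subtrees whose root has fewer than
    min_visits playouts, freeing their memory.  The discarded
    statistics are lost for good: if the search later revisits
    a pruned line it must re-expand it from scratch, which is
    why such GC can, in principle, interfere with convergence."""
    node.children = [c for c in node.children if c.visits >= min_visits]
    for c in node.children:
        prune(c, min_visits)
    return node
```

In this sketch a rarely visited child is simply dropped; a gentler variant would instead collapse the subtree back to a single unexpanded leaf so only the memory, not the node's own visit count, is given up.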
