I use thread-safe SMP within a node.  The tree expansion and playouts have
no locks, but Many Faces' engine is not thread safe, so there is a lock
around it and only one thread can be biasing moves at a time.
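
Roughly, the pattern looks like the sketch below (just an illustration of
the locking, not the actual Many Faces code; all of the names are made up):

#include <atomic>
#include <mutex>
#include <random>
#include <thread>
#include <vector>

// One UCT node; win/visit counters are atomic so playout results can be
// backed up without locks.
struct Node {
    std::atomic<int> visits{0};
    std::atomic<int> wins{0};
};

std::mutex engine_lock;   // only one thread may use the biasing engine

void bias_moves_with_engine(Node&) {
    // stand-in for the call into the non-thread-safe engine;
    // must be made while holding engine_lock
}

void search_thread(Node& root, int iterations) {
    std::mt19937 rng(std::random_device{}());
    for (int i = 0; i < iterations; ++i) {
        {
            std::lock_guard<std::mutex> guard(engine_lock);
            bias_moves_with_engine(root);        // serialized section
        }
        int win = static_cast<int>(rng() & 1u);  // stand-in for a lock-free playout
        root.visits.fetch_add(1, std::memory_order_relaxed);
        root.wins.fetch_add(win, std::memory_order_relaxed);
    }
}

int main() {
    Node root;
    std::vector<std::thread> pool;
    for (int t = 0; t < 4; ++t)
        pool.emplace_back(search_thread, std::ref(root), 100000);
    for (auto& t : pool) t.join();
    return 0;
}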

For multi-node parallelism I use MPI.  Even though the programming model is
different, the MPI code is not very large; MPI is pretty powerful.  Thread 0
of the SMP search handles the sharing between nodes.  I only share UCT wins
and visits, and the MPI version only scales well to 4 nodes.  Perhaps
sharing RAVE values as well would be better.
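
The exchange itself can be as simple as a periodic reduction of the root
statistics.  Something like the sketch below (not the actual code; the
function name, array layout, and use of MPI_Allreduce are only illustrative
assumptions):

#include <mpi.h>
#include <vector>

// Called periodically by thread 0 of the SMP search on each node: sum the
// root children's UCT win/visit counts over all MPI ranks.  In practice you
// would probably share only the increments since the last exchange so that
// repeated calls don't double-count.
void share_root_statistics(std::vector<double>& wins,
                           std::vector<double>& visits) {
    MPI_Allreduce(MPI_IN_PLACE, wins.data(), (int)wins.size(),
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(MPI_IN_PLACE, visits.data(), (int)visits.size(),
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Dummy per-node statistics for 361 candidate moves at the root.
    std::vector<double> wins(361, 0.0), visits(361, 0.0);
    share_root_statistics(wins, visits);

    MPI_Finalize();
    return 0;
}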

David

> -----Original Message-----
> From: [email protected] [mailto:computer-go-
> [email protected]] On Behalf Of Brian Sheppard
> Sent: Thursday, October 29, 2009 11:40 AM
> To: [email protected]
> Subject: [computer-go] MPI vs Thread-safe
> 
> I have a question for those who have parallelized programs.
> 
> It seems like MPI is the obvious architecture when scaling a program to
> multiple machines. Let's assume that we implement a program that has that
> capability.
> 
> Now, it is possible to use MPI for scaling *within* a compute node. For
> example, on a 4-core machine we could run four processes and use MPI to
> synchronize them.
> 
> That policy has the obvious downside that the shared memory on a multi-core
> box is fragmented, and some portion of the tree is duplicated even within
> processes, which seems wasteful.
> 
> For this reason I have assumed that programs would use a thread-safe
> shared-memory design within a multi-core box, and only use MPI to scale to
> clusters.
> 
> But there are downsides to that design as well. Like the extra complexity
> of having two models for parallel programming.
> 
> And I don't really know the cost of duplicating nodes. Maybe the tree
> splits so much that different processes share relatively few nodes. Or
> maybe you can allocate trials so that is the case.
> 
> And now my question: what do you actually do: MPI, thread-safe, both, or
> something else?
> 
> And can you share any observations about your choices?
> 
> Thanks,
> Brian
> 

_______________________________________________
computer-go mailing list
[email protected]
http://www.computer-go.org/mailman/listinfo/computer-go/
