Darren Cook wrote:
> Over the years my excuse for working on computer go has been that it makes
> a good test-bed for learning new technologies. This time it is taking
> advantage of multi-core. There is an interesting article here [1]:
> http://www.linux-mag.com/launchpad/business-class-hpc/main/3538
I'm too lazy to make an account.
> What do people think about this? What about the specific example of
> wanting to run multiple random playouts (or heavy playouts) in a UCT
> program? Can MPI be as quick as threads on a 2- or 4-core single
> machine? [2] Playouts have practically no memory demands.
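For the single-machine case, fanning playouts out over plain threads is about
as simple as parallelism gets. Here is a minimal sketch in C++; run_playout(),
Position, and the constants are hypothetical stand-ins rather than code from
any real engine:

#include <atomic>
#include <random>
#include <thread>
#include <vector>

struct Position { /* board state at the node being sampled */ };

// Stand-in for a real playout: play random moves to the end and score it.
static int run_playout(const Position& pos, std::mt19937& rng) {
    (void)pos;
    return std::uniform_int_distribution<int>(0, 1)(rng);
}

// Fan n_playouts out over n_threads; each thread keeps its own RNG and
// only touches shared state once, when it adds its local win count.
static int parallel_playouts(const Position& pos, int n_playouts, int n_threads) {
    std::atomic<int> wins{0};
    std::vector<std::thread> workers;
    for (int t = 0; t < n_threads; ++t) {
        workers.emplace_back([&, t] {
            std::mt19937 rng(1234u + t);             // per-thread RNG, nothing shared
            int local = 0;
            for (int i = t; i < n_playouts; i += n_threads)
                local += run_playout(pos, rng);
            wins += local;                           // one shared write per thread
        });
    }
    for (auto& w : workers) w.join();
    return wins.load();
}

int main() {
    Position pos;
    return parallel_playouts(pos, 10000, 4) >= 0 ? 0 : 1;  // e.g. 10k playouts, 4 threads
}

Each thread owns its RNG and only touches shared state when its results come
back, which is why playouts are such an easy fit for threads.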
The MPI usage I've seen requires explicitly populating (user-defined)
communication structures that then get copied as part of each MPI call. I
really didn't want to manually specify, populate, and then read back out data
for every inter-process call, so I ended up using delegates / functors for
inter-process communication and restricted the bot to threading to keep the
implementation simple.
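For anyone who hasn't run into it, the explicit-structure style looks roughly
like the sketch below. PlayoutRequest and its fields are invented purely for
illustration; the point is the layout description and the pack/unpack steps
that surround every send:

// Run with at least two ranks, e.g.  mpirun -np 2 ./a.out
#include <cstddef>
#include <mpi.h>

// Hypothetical message: everything a worker needs to run a batch of playouts.
struct PlayoutRequest {
    int   node_id;
    int   n_playouts;
    float komi;
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Step 1: describe the struct's memory layout to MPI.
    int          lengths[3] = {1, 1, 1};
    MPI_Aint     offsets[3] = {offsetof(PlayoutRequest, node_id),
                               offsetof(PlayoutRequest, n_playouts),
                               offsetof(PlayoutRequest, komi)};
    MPI_Datatype types[3]   = {MPI_INT, MPI_INT, MPI_FLOAT};
    MPI_Datatype req_type;
    MPI_Type_create_struct(3, lengths, offsets, types, &req_type);
    MPI_Type_commit(&req_type);

    if (rank == 0) {
        // Step 2: populate the struct by hand and send a copy of it.
        PlayoutRequest req = {42, 1000, 6.5f};
        MPI_Send(&req, 1, req_type, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        // Step 3: receive the copy and read the fields back out before use.
        PlayoutRequest req;
        MPI_Recv(&req, 1, req_type, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&req_type);
    MPI_Finalize();
    return 0;
}

With the delegate/functor approach on threads, that boilerplate goes away: the
worker just invokes a closure that already has access to whatever state it
needs.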
Maybe after doing enough multi-threaded programming I'll come to realize
that MPI isn't much of a cost and shift my IPC design to use MPI. In the
meantime, I'm comforted that allowing SlugGo support can bridge the gap if
needed.
> What if a heavy playout algorithm uses a pattern library too big to fit
> in the core's local cache? Would that change the MPI vs. threads decision?
>
> Darren
>
> [1]: I think it requires membership, but it is free, and there are no
> catches except that you get emails telling you about new articles.
>
> [2]: My question is about MPI vs. thread overhead, not about overhead
> comparisons between, say, a cluster of 4 single-core machines and a
> single 4-CPU machine. That is a kettle of fish for a different thread. ;-)