Jon's recent post has prompted me to echo a post by Chuck Bower at GoL. I thought it would be useful to have it on this mailing list for the record; maybe some of you don't read GoL, and posts there get lost after a few months anyway.
I don't know whether this would benefit gnubg or not, but I wonder whether the neural net part might benefit from multiple threads. As I understand NNs, they perform a large number of floating-point operations whose results are summed to give the final evaluation. If those operations could be parallelized, there should be a real speed benefit.

Chuck's original post follows:

====
I think this topic came up here a couple of months ago as a potential improvement to GNU-bg (but was shot down?). NVIDIA has a compiler for its latest and greatest GPU which seems to make dual-core and quad-core processors obsolete before they even hit the store shelves. Would it take a totally new development project? Does it even make sense for a backgammon bot? I suspect you could do n-ply lookahead (where n > 3) much more quickly than now, making realtime multiple lookahead feasible. Sounds intriguing. Read more at http://developer.nvidia.com/object/cuda.html and http://tomshardware.co.uk/2006/11/08/geforce_8800_uk/page11.html.
====

Cheers,
Ian

_______________________________________________
Bug-gnubg mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/bug-gnubg
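The parallelism described above can be sketched in a few lines. This is a hypothetical illustration, not gnubg's actual evaluation code: each hidden neuron's output is an independent dot product plus a sigmoid, so the neurons can be farmed out to separate workers (threads here; GPU cores in the CUDA case). All names (`neuron_output`, `evaluate_layer`) are made up for the example.

```python
# Hypothetical sketch, not gnubg's code: a neural-net layer is a set of
# independent dot products, one per hidden neuron, so the work splits
# naturally across threads (or across GPU cores, as CUDA would do).
import math
from concurrent.futures import ThreadPoolExecutor

def neuron_output(weights, inputs, bias):
    # One neuron: weighted sum of the inputs, squashed by a sigmoid.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def evaluate_layer(weight_rows, biases, inputs, n_threads=4):
    # Each neuron depends only on the shared inputs and its own row of
    # weights, so the rows can be evaluated in any order, in parallel.
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(
            lambda row_bias: neuron_output(row_bias[0], inputs, row_bias[1]),
            zip(weight_rows, biases)))

# Tiny example: a layer of 2 neurons over 3 inputs.
weights = [[0.5, -0.25, 0.1], [0.0, 1.0, -1.0]]
biases = [0.0, 0.5]
hidden = evaluate_layer(weights, biases, [1.0, 2.0, 3.0])
```

Note that in CPython the global interpreter lock means threads won't actually speed up this pure-Python arithmetic; the sketch only shows the *structure* of the parallelism. Real gains need SIMD, native threads over C code, or a GPU, which is exactly Chuck's point about CUDA.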
