Hi!
On Mon, May 23, 2011 at 04:49:34PM +0100, [email protected] wrote:
> As an avid game as well as AI programmer, (and long time lurker on this list)
> I was wondering if anyone had given any thought to running their code on a DX11
> graphics card with openCompute enabled?
>
> I have read about some colleagues running market simulations on them due to
> their exceptionally high multithreading and fast memory, but I wondered if
> there was some obvious limitation that I had overlooked before I start
> working in this direction and learning a whole new low-level API.
The restrictions are the same as when using OpenCL and CUDA. See:
http://www.mail-archive.com/[email protected]/msg12474.html
http://www.mail-archive.com/[email protected]/msg12485.html
There are two problems:
  * Latency. Transitioning from tree descent to GPU-accelerated
    simulation would take a VERY long time. Descending the tree on
    the GPU as well would need investigation, but there are trouble
    spots.
  * Parallelism. You have massive multithreading, but large
    blocks of threads must always execute the same instruction,
    though obviously on different data. If some threads diverge,
    the others stall. It is difficult to avoid these divergences
    when implementing the Go rules (particularly captures).
Overall, there might be a bonus, but it's small, and the coding and
debugging are much more laborious. Even when you have basic simulation
working, you also need tree descent. Then you need to start transferring
all the crucial heuristics to your GPU code, adding even more divergence.
But there has not been THAT much investigation, just the attempts of
Christian Nentwich and me. So maybe you would figure out some smart
tricks. :-)
--
Petr "Pasky" Baudis
UNIX is user friendly, it's just picky about who its friends are.
_______________________________________________
Computer-go mailing list
[email protected]
http://dvandva.org/cgi-bin/mailman/listinfo/computer-go