On Wednesday, 18 February 2015 at 15:15:21 UTC, Russel Winder wrote:
It strikes me that D really ought to be able to work with GPGPU. Is there already something, or have I just failed to notice it? This is data parallelism, but of a slightly different sort to that in std.parallelism. std.concurrency, std.parallelism, and a std.gpgpu ought to be harmonious, though.

The issue is to create a GPGPU kernel (usually C code with bizarre data structures and calling conventions), set it running, and then pipe data in and collect data out. That data transfer is currently very slow, but the next generation of Intel chips will fix this (*). And then there is the OpenCL/CUDA debate.

Personally I favour OpenCL, for all its deficiencies, as it is vendor neutral; CUDA binds you to NVIDIA, and in any case there is an NVIDIA back end for OpenCL. With a system like PyOpenCL, the infrastructure, data and process handling is abstracted, but you still have to write the kernels in C. They really ought to do a Python DSL for that, but… So with D, can we write D kernels and have them compiled and loaded using a combination of CTFE, D → C translation, a C compiler call, and other magic?

Is this a GSoC 2015 type thing?


(*) It will be interesting to see how NVIDIA responds to the tack Intel
are taking on GPGPU and main memory access.

I agree it would be very helpful.
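
On the "write the kernels in D" part: here is a minimal sketch of what the CTFE end might look like, just assembling OpenCL C source as a compile-time string. The kernel and names are invented for illustration, and the hard D → C translation step is not touched at all.

import std.stdio : writeln;

// Assemble the OpenCL C source for a trivial element-wise kernel at
// compile time.  T.stringof yields the matching C type name for the
// basic types (float, double, int), so one template covers several
// kernels.  Purely illustrative; nothing here talks to a device.
enum string scaleKernel(T) =
    "__kernel void scale(__global " ~ T.stringof ~ "* data, const "
    ~ T.stringof ~ " factor)\n" ~
    "{\n" ~
    "    size_t i = get_global_id(0);\n" ~
    "    data[i] = data[i] * factor;\n" ~
    "}\n";

void main()
{
    // The source exists at compile time; at run time it would be handed
    // to clCreateProgramWithSource and built for the chosen device.
    static assert(scaleKernel!float.length > 0);
    writeln(scaleKernel!float);
}

The interesting (GSoC-sized) work is everything this skips: translating a restricted subset of D into that kernel source automatically, plus the buffer and queue plumbing around it.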

I have this on my to-look-at list, and don't yet know exactly what it does and doesn't do:
http://code.dlang.org/packages/derelict-cuda
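
If it follows the usual Derelict pattern, it will be a run-time loader for the CUDA driver/runtime C API rather than a way to author kernels in D. A guess at what using it looks like; the module path and the DerelictCUDADriver loader name are assumptions on my part, while the cu* calls are the standard CUDA driver API:

import std.stdio : writefln;
import derelict.cuda;   // assumed module path for the derelict-cuda package

void main()
{
    // Derelict-style run-time loading of the CUDA driver library; the
    // loader name DerelictCUDADriver is assumed from the package name.
    DerelictCUDADriver.load();

    // Plain CUDA driver API from here on.
    cuInit(0);
    int count;
    cuDeviceGetCount(&count);
    writefln("CUDA devices visible: %s", count);

    // Kernels would still be PTX/cubin produced elsewhere and launched
    // via cuModuleLoad/cuLaunchKernel; a binding like this does not let
    // you write them in D.
}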
