On Wednesday, 18 February 2015 at 16:03:20 UTC, Laeeth Isharc wrote:
On Wednesday, 18 February 2015 at 15:15:21 UTC, Russel Winder wrote:
It strikes me that D really ought to be able to work with GPGPU. Is there already something, or did I just fail to notice? This is data parallelism, but of a slightly different sort to that in std.parallelism. std.concurrency, std.parallelism and std.gpgpu ought to be harmonious, though.

The issue is to create a GPGPU kernel (usually C code with bizarre data structures and calling conventions), set it running, and then pipe data in and collect data out. Currently this is very slow, but the next generation of Intel chips will fix it (*). And then there is the OpenCL/CUDA debate.

Personally I favour OpenCL, for all its deficiencies, as it is vendor neutral; CUDA binds you to NVIDIA. Anyway, there is an NVIDIA back end for OpenCL. With a system like PyOpenCL, the infrastructure, data and process handling are abstracted, but you still have to write the kernels in C. They really ought to do a Python DSL for that, but… So with D, can we write D kernels and have them compiled and loaded using a combination of CTFE, D → C translation, C compiler calls, and other magic?

Is this a GSoC 2015 type thing?


(*) It will be interesting to see how NVIDIA responds to the tack Intel are taking on GPGPU and main memory access.

I agree it would be very helpful.

I have this on my to-look-at list, and I don't yet know exactly what it does and doesn't do:
http://code.dlang.org/packages/derelict-cuda

What it does is provide access to the most useful part of the CUDA API, which is two-headed:

- the Driver API provides the most control over the GPU, and it is the one I would recommend: if you are using CUDA, you probably want top efficiency and control (see the sketch after this list).

- the Runtime API abstracts over multiple GPUs and is the basis for the high-level libraries NVIDIA churns out in trendy domains. (Request to Linux/Mac readers: I am still searching for the correct library names for Linux :) ).
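
To make the split concrete, here is a minimal sketch of selecting one of the two bindings. I am assuming the loader names below are what DerelictCUDA exports, so check the package if they differ:

import derelict.cuda;

void main()
{
    // Driver API binding: you manage devices, contexts and modules yourself.
    DerelictCUDADriver.load();

    // Or the Runtime API binding, if you want the higher-level cudaXxx calls:
    // DerelictCUDARuntime.load();
}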

When using DerelictCUDA, you still need nvcc to compile your .cu files, and then you load the resulting modules at run time.
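
Roughly, the flow looks like the sketch below. The kernel name, the .ptx file and the sizes are made up for illustration, I am going from the standard driver API names (adjust to whatever the binding actually exposes), and the error checking you would do on every CUresult is left out:

import derelict.cuda;

void main()
{
    DerelictCUDADriver.load();

    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;

    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    // Compiled ahead of time with, e.g., "nvcc --ptx kernels.cu".
    cuModuleLoad(&mod, "kernels.ptx");
    cuModuleGetFunction(&fn, mod, "myKernel");

    enum n = 1024;
    CUdeviceptr dA;
    cuMemAlloc(&dA, n * float.sizeof);

    float[n] hostA = 1.0f;
    cuMemcpyHtoD(dA, hostA.ptr, n * float.sizeof);

    int len = n;
    void*[2] args = [cast(void*)&dA, cast(void*)&len];
    cuLaunchKernel(fn, n / 256, 1, 1, 256, 1, 1, 0, null, args.ptr, null);
    cuCtxSynchronize();

    cuMemcpyDtoH(hostA.ptr, dA, n * float.sizeof);
    cuMemFree(dA);
    cuCtxDestroy(ctx);
}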

This is "less easy" than when using the NVIDIA SDK which will eventually allow to combine GPU and CPU code in the same source file. Apart from that, this is 2015 and I see little reasons to start new projects in CUDA with the advent of OpenCL 2.0 drivers.
