It strikes me that D really ought to be able to work with GPGPU – or is
there already something and I just failed to notice it? This is data
parallelism, but of a slightly different sort to that in std.parallelism.
std.concurrency, std.parallelism and std.gpgpu ought to be harmonious,
though.

The issue is to create a GPGPU kernel (usually C code with bizarre data
structures and calling conventions), set it running, and then pipe data
in and collect data out. The data transfer to and from the device is
currently very slow, but the next generation of Intel chips will fix
this (*). And then there is the OpenCL/CUDA debate.

Personally I favour OpenCL, for all its deficiencies, as it is vendor
neutral; CUDA binds you to NVIDIA, and anyway there is an NVIDIA back end
for OpenCL. With a system like PyOpenCL, the infrastructure, data and
process handling are abstracted, but you still have to write the kernels
in C. They really ought to do a Python DSL for that, but… So with D, can
we write D kernels and have them compiled and loaded using a combination
of CTFE, D → C translation, a C compiler call, and other magic?
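
To make that concrete, here is a minimal sketch of the round trip with
PyOpenCL (the "scale" kernel, the scale-by-a-constant example, the array
size and the factor are made up purely for illustration): the host-side
plumbing is Python, but the kernel is still a string of OpenCL C compiled
at run time – which is exactly the part one would like to write in D
instead.

import numpy as np
import pyopencl as cl

# Illustrative only: the "scale" kernel, the 1024 elements and the
# factor 2.0 are arbitrary choices for this sketch.
# The kernel is still plain OpenCL C, embedded as a string.
kernel_source = """
__kernel void scale(__global const float *in_data,
                    __global float *out_data,
                    const float factor)
{
    const int i = get_global_id(0);
    out_data[i] = factor * in_data[i];
}
"""

ctx = cl.create_some_context()                    # pick a platform/device
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, kernel_source).build()  # compile the C kernel at run time

host_in = np.arange(1024, dtype=np.float32)
mf = cl.mem_flags
dev_in = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=host_in)  # pipe data in
dev_out = cl.Buffer(ctx, mf.WRITE_ONLY, host_in.nbytes)

program.scale(queue, host_in.shape, None, dev_in, dev_out, np.float32(2.0))  # set it running
host_out = np.empty_like(host_in)
cl.enqueue_copy(queue, host_out, dev_out)         # collect data out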

Is this a GSoC 2015 type thing?


(*) It will be interesting to see how NVIDIA responds to the tack Intel
are taking on GPGPU and main memory access.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:[email protected]
41 Buckmaster Road    m: +44 7770 465 077   xmpp: [email protected]
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
