On Saturday, 4 April 2015 at 10:26:27 UTC, Walter Bright wrote:
On 4/4/2015 3:04 AM, weaselcat wrote:
PR?
Exactly!
The idea is that GPUs can greatly accelerate code (anywhere from 2x to
1000x), and if D wants to appeal to high-performance computing
programmers, we need a workable way to program the GPU.
At this point it doesn't have to be slick or great, but it has
to be doable.
Nvidia appears to have put a lot of effort into CUDA, and it
shouldn't be hard to work with CUDA given the Derelict D
headers; doing so would give us an answer for D users who want
to leverage the GPU.
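For concreteness, a minimal sketch of what that could look like
through the DerelictCUDA bindings (this assumes DerelictCUDA's
driver-API loader; the cu* functions are just the C driver API):

import derelict.cuda;
import std.stdio;

void main()
{
    DerelictCUDADriver.load();   // dynamically load libcuda at runtime

    cuInit(0);                   // must precede any other driver-API call

    int count;
    cuDeviceGetCount(&count);    // query how many CUDA devices exist
    writefln("CUDA devices: %s", count);
}

Nothing fancy, but it shows the whole path (load the shared
library, init, talk to the device) already works from plain D.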
It would also be dazz if someone were to look at std.algorithm
and see what could be accelerated with GPU code.
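To make that concrete, here is the shape of std.algorithm code
that would be the obvious first target: an element-wise map
feeding a reduction, which is embarrassingly parallel. (CPU-only
sketch; the point is that a GPU backend could dispatch this same
expression as a kernel.)

import std.algorithm : map, reduce;
import std.array : array;
import std.range : iota;
import std.stdio;

void main()
{
    auto data = iota(1_000_000).map!(i => cast(float)i).array;

    // saxpy-like transform plus a sum: no cross-element
    // dependencies, so it maps cleanly onto GPU threads
    auto total = data.map!(x => 2.0f * x + 1.0f)
                     .reduce!((a, b) => a + b);

    writefln("total = %s", total);
}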
A good OpenCL wrapper library like cl4d would do wonders.
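For reference, even a trivial platform query through the raw C
API takes this much ceremony (sketch via the DerelictCL bindings,
assuming its loader; this boilerplate is exactly what a wrapper
like cl4d is meant to hide):

import derelict.opencl.cl;
import std.stdio;

void main()
{
    DerelictCL.load();   // load the system OpenCL library

    // standard OpenCL idiom: call once with null to get the count
    cl_uint numPlatforms;
    clGetPlatformIDs(0, null, &numPlatforms);
    writefln("OpenCL platforms available: %s", numPlatforms);
}

Multiply that by contexts, queues, buffers, and kernel builds and
the case for a wrapper makes itself.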