One of the reasons I bound to ArrayFire was that it provides a single 
interface to AMD (OpenCL) and NVIDIA (CUDA, which is faster on NVIDIA 
hardware than OpenCL), and also to parallel CPU libraries when no GPU is 
available.

A practical downside is that ArrayFire routines have a JIT compilation step, 
so the first time you run any operation (a matrix product, say) on a given 
set of dimensions, it is slow.  But overall it is quite convenient from 
high-level J with self-managed memory, because you can chain operations on 
results: you can treat an array of pointers (to device arrays) as though 
they were semi-native J arrays.  Until you are done processing, you want to 
keep the data on the GPU for a performance boost.


________________________________
From: Henry Rich <[email protected]>
To: [email protected] 
Sent: Tuesday, December 19, 2017 4:22 PM
Subject: Re: [Jprogramming] gpu-backed J vs Tensorflow




>    Given that much of said works reduces to cuBlas + cuDNN, it seems like a
> GPU-backed-J, although more concise, would end up calling the same
> functions.
I expect you're right.  I don't know what the interfaces to GPUs look 
like, but the goal would be to have the rank operator (gpufunc"2 for 
example), which loops over cells of input, allow operation on cells in 
parallel.

As it happens I have some work to do in that area, aimed at reducing the 
amount of data-copying for large arguments.  What should I Google to 
learn about interfacing to GPUs?

Henry Rich



----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
----------------------------------------------------------------------
