I've noticed that some computer owners only have access to an AMD GPU (e.g. 
the 2013 Mac Pro or the 2015 iMac) and would like to tap into it for extra 
processing power in machine learning applications. 

cuBLAS won't help them, and out-of-the-box OpenCL tooling (i.e. tools that 
work without much customization by the user) seems to be lacking progress. 
I wish to change that.
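
For context, here is roughly what a simple element-wise addition on the GPU 
looks like today with the existing OpenCL.jl package (adapted from its README 
at the time; the exact API may differ between versions) -- the user still has 
to write the kernel source, build it, and manage buffers by hand:

    import OpenCL
    const cl = OpenCL

    # OpenCL C kernel source: the user has to write this by hand
    const sum_kernel = "
        __kernel void sum(__global const float *a,
                          __global const float *b,
                          __global float *c)
        {
            int gid = get_global_id(0);
            c[gid] = a[gid] + b[gid];
        }
    "

    a = rand(Float32, 50_000)
    b = rand(Float32, 50_000)

    # pick a device and set up a context and command queue
    device, ctx, queue = cl.create_compute_context()

    # copy the inputs to the device and allocate space for the result
    a_buff = cl.Buffer(Float32, ctx, (:r, :copy), hostbuf=a)
    b_buff = cl.Buffer(Float32, ctx, (:r, :copy), hostbuf=b)
    c_buff = cl.Buffer(Float32, ctx, :w, length(a))

    # compile the kernel and launch it over the whole array
    p = cl.Program(ctx, source=sum_kernel) |> cl.build!
    k = cl.Kernel(p, "sum")
    queue(k, size(a), nothing, a_buff, b_buff, c_buff)

    r = cl.read(queue, c_buff)   # copy the result back to the host

Compare that with simply writing c = a + b on ordinary Julia arrays, which is 
the kind of out-of-the-box experience a GPU BLAS layer should eventually offer.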

On Tuesday, April 26, 2016 at 7:20:52 AM UTC-4, Chris Rackauckas wrote:
>
> I think the GPU integration libraries for Julia are already really good if 
> you're using CUDA. CUDArt.jl and ArrayFire.jl work quite well. I don't know 
> too much about the OpenCL side, but I don't tend to have a use for it.
>
> On Monday, April 25, 2016 at 6:06:10 PM UTC-7, Michael Jin wrote:
>>
>> (Reposted from julia-dev; I was told that julia-user was a 
>> more appropriate place for this thread.)
>>
>> Hi, I'm an undergraduate student and I've been using Julia since 2013. 
>> I've been trying to use the GPU seamlessly for projects involving Julia 
>> matrices. To that end, I have started working on my own OpenCL BLAS Julia 
>> library to test the clBLAS library at the lowest level possible, driving 
>> the GPU through the OpenCL C library.
>>
>> Here's a link to my project: https://github.com/mikhail-j/OpenCLBLAS.jl
>>
>> This project has been tested on an NVIDIA GTX 780 Ti.
>>
>> Any suggestions on what I can do to improve the state of GPU integration 
>> with Julia?
>>
>
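
To illustrate what "testing clBLAS at the lowest level possible" in the quoted 
message can look like in practice, here is a minimal sketch of wrapping 
clblasSgemm with ccall. The library name, the enum constants (mirroring 
clBLAS.h), and the assumption that the caller already holds raw cl_mem and 
cl_command_queue handles (e.g. obtained via OpenCL.jl) are illustrative; 
OpenCLBLAS.jl may well structure this differently.

    # Minimal sketch of calling clBLAS from Julia via ccall.
    # Assumes libclBLAS is on the library search path, A_buf/B_buf/C_buf are
    # raw cl_mem handles, and `queue` is a raw cl_command_queue handle.
    const libclblas = "libclBLAS"

    # Enum values mirroring clBLAS.h
    const clblasColumnMajor = Cint(1)
    const clblasNoTrans     = Cint(0)

    # clBLAS must be initialized once before any BLAS routine is called.
    function clblas_setup()
        err = ccall((:clblasSetup, libclblas), Cint, ())
        err == 0 || error("clblasSetup failed with status $err")
    end

    # C := alpha*A*B + beta*C for column-major single-precision matrices,
    # where A is m x k, B is k x n, C is m x n (no offsets, tight leading
    # dimensions).
    function clblas_sgemm!(m, n, k, alpha, A_buf, B_buf, beta, C_buf, queue)
        queues = [queue]                       # clBLAS takes an array of queues
        err = ccall((:clblasSgemm, libclblas), Cint,
            (Cint, Cint, Cint,                 # order, transA, transB
             Csize_t, Csize_t, Csize_t,        # M, N, K
             Cfloat,                           # alpha
             Ptr{Void}, Csize_t, Csize_t,      # A, offA, lda
             Ptr{Void}, Csize_t, Csize_t,      # B, offB, ldb
             Cfloat,                           # beta
             Ptr{Void}, Csize_t, Csize_t,      # C, offC, ldc
             Cuint, Ptr{Ptr{Void}},            # numCommandQueues, commandQueues
             Cuint, Ptr{Void}, Ptr{Void}),     # numEventsInWaitList, eventWaitList, events
            clblasColumnMajor, clblasNoTrans, clblasNoTrans,
            m, n, k, alpha,
            A_buf, 0, m,
            B_buf, 0, k,
            beta,
            C_buf, 0, m,
            1, queues, 0, C_NULL, C_NULL)
        err == 0 || error("clblasSgemm failed with status $err")
        return C_buf
    end

Getting the argument types and enum values exactly right is the fiddly part, 
which is why exercising clBLAS at this level first seems useful before 
building a higher-level wrapper over it.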
