[julia-users] Re: GPU capabilities

2016-04-29 Thread Chris Rackauckas
I cover that in another tutorial. With the current packages you have to manually split the data across the GPUs yourself. If you are trying to parallelize a vectorized call, I worked out all the details for doing that split in the tutorial.
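The "manual split" described above boils down to partitioning the input into one contiguous chunk per device before copying each chunk over. A minimal plain-Julia sketch of just the partitioning step (the helper name `device_chunks` is hypothetical, not from any package):

```julia
# Partition the indices 1:n into `ngpus` contiguous, near-equal ranges,
# one per GPU. The first `rem` chunks get one extra element.
function device_chunks(n::Int, ngpus::Int)
    base, rem = divrem(n, ngpus)
    ranges = UnitRange{Int}[]
    start = 1
    for g in 1:ngpus
        len = base + (g <= rem ? 1 : 0)
        push!(ranges, start:start+len-1)
        start += len
    end
    return ranges
end

# e.g. device_chunks(10, 3) partitions ten elements over three GPUs;
# each x[ranges[g]] would then be copied to device g and the results
# gathered back on the host.
```

Each range covers a slice of the host array that gets uploaded to its own device, processed, and copied back.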

[julia-users] Re: GPU capabilities

2016-04-29 Thread feza
Thanks for sharing. For multiple GPUs, do you have to manually split the data across the GPUs, or does that get taken care of automatically? BTW, for multi-GPU work I assume you don't need SLI and that SLI is just for gaming.

[julia-users] Re: GPU capabilities

2016-04-29 Thread David Parks
I'm just getting started in this area myself, so this is not personal experience (yet), but for an example of what's been done you might want to look through the Mocha code, which implements a deep learning library on the GPU. It's probably a great example.

[julia-users] Re: GPU capabilities

2016-04-29 Thread Matthew Pearce
My university cluster uses Tesla M2090 cards. My experience so far (not comprehensive) is that the CUDArt.jl + CU.jl libraries work as one would expect. They're not all 100% complete, and further documentation in places would be nice, but they're pretty good. The only funny behaviour
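For reference, basic usage of CUDArt.jl in that era looked roughly like the sketch below (patterned on the CUDArt.jl README of the time; exact API details may have changed since, and this requires CUDA-capable hardware to run):

```julia
using CUDArt  # assumes CUDArt.jl is installed and a CUDA GPU is present

# Select all devices of compute capability >= 2.0, run the block on them,
# and let CUDArt clean up device state afterwards.
devices(dev -> capability(dev)[1] >= 2) do devlist
    device(devlist[1])        # make the first eligible GPU active
    d_x = CudaArray(rand(100))  # copy host data onto the device
    x = to_host(d_x)            # copy results back to the host
end
```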