I cover that in another tutorial: <http://www.stochasticlifestyle.com/multiple-gpu-on-the-hpc-with-julia/>. With the current packages you have to manually split the data across the GPUs. If you are trying to solve a vectorized call, I worked out all the details for doing that split in the tutorial, so actually doing it isn't hard, but it is a little tedious. The GPUs don't have to be in SLI; as far as I know, data doesn't transfer over SLI at all when using CUDA. Note that a Tesla K80 is actually a dual-GPU card, so you have to program for multiple GPUs in order to use all of its cores.
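To give a flavor of what the manual split looks like, here's a rough sketch of a vectorized call divided between two GPUs. It uses CUDA.jl-style calls (`device!`, `CuArray`); the exact package and function names in your Julia setup may differ, and the array sizes are made up, so treat this as an illustration rather than copy-paste code:

```julia
using CUDA  # assumes the CUDA.jl-style API; adjust to your GPU package

x = rand(Float32, 1_000_000)
n = length(x) ÷ 2
halves = (1:n, n+1:length(x))

results = Vector{Vector{Float32}}(undef, 2)
for (i, r) in enumerate(halves)
    device!(i - 1)               # select GPU 0, then GPU 1
    d = CuArray(x[r])            # copy this GPU's slice over
    results[i] = Array(sin.(d))  # run the broadcast kernel, copy back
end
y = vcat(results...)             # stitch the halves back together
```

This version runs the two halves sequentially; to actually overlap the work you would launch each device's chunk on its own task, but the splitting and stitching logic is the same.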
Note that there is a way to do linear algebra easily with multiple GPUs: NVIDIA's cuBLAS-XT API. It automatically spreads its routines across multiple GPUs. However, I don't think its functionality is bound by any Julia package (I could be wrong).

On Friday, April 29, 2016 at 3:12:38 PM UTC-7, feza wrote:
>
> Thanks for sharing. For multiple GPUs do you have to manually split the data
> to each GPU, or does that get taken care of automatically? BTW, for multi-GPU
> stuff I assume you don't need SLI and that SLI is just for gaming.
>
> On Friday, April 29, 2016 at 4:31:32 PM UTC-4, Chris Rackauckas wrote:
>>
>> Works great for me. Here's a tutorial where I describe something I did
>> on XSEDE's Comet
>> <http://www.stochasticlifestyle.com/julia-on-the-hpc-with-gpus/>, which
>> has Tesla K80s. It works great. I have had code running on GTX 970s, 980 Tis,
>> K40s, and K80s with no problem.
>>
>> On Thursday, April 28, 2016 at 1:13:56 PM UTC-7, feza wrote:
>>>
>>> Hi All,
>>>
>>> Has anyone here had experience programming in Julia using NVIDIA's
>>> Tesla K80 or K40 GPU? What was the experience: is it buggy, or does Julia
>>> handle it with no problems?
>>>
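Since cuBLAS-XT doesn't seem to be wrapped by a Julia package, one option is to `ccall` libcublas directly. The cublasXt routines take plain host arrays and tile the work across the GPUs you select themselves. A hedged sketch (the signatures follow NVIDIA's C header as I remember it, so double-check against the cuBLAS docs before relying on this):

```julia
# Sketch: calling cuBLAS-XT from Julia via ccall. Assumes libcublas is on
# the library path and that GPUs 0 and 1 exist. Not tested; the argument
# types mirror the C API (cublasXt uses size_t for matrix dimensions).
const libcublas = "libcublas"

handle = Ref{Ptr{Cvoid}}()
ccall((:cublasXtCreate, libcublas), Cint, (Ptr{Ptr{Cvoid}},), handle)

devs = Cint[0, 1]  # run on GPUs 0 and 1
ccall((:cublasXtDeviceSelect, libcublas), Cint,
      (Ptr{Cvoid}, Cint, Ptr{Cint}), handle[], length(devs), devs)

n = 4096
A, B, C = rand(Float32, n, n), rand(Float32, n, n), zeros(Float32, n, n)
alpha, beta = Ref(1.0f0), Ref(0.0f0)
# C = alpha*A*B + beta*C, tiled across both GPUs; 0 = CUBLAS_OP_N
ccall((:cublasXtSgemm, libcublas), Cint,
      (Ptr{Cvoid}, Cint, Cint, Csize_t, Csize_t, Csize_t,
       Ptr{Cfloat}, Ptr{Cfloat}, Csize_t, Ptr{Cfloat}, Csize_t,
       Ptr{Cfloat}, Ptr{Cfloat}, Csize_t),
      handle[], 0, 0, n, n, n, alpha, A, n, B, n, beta, C, n)

ccall((:cublasXtDestroy, libcublas), Cint, (Ptr{Cvoid},), handle[])
```

The nice part is that you never touch device memory yourself: the library handles the splitting that you'd otherwise do by hand.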
