Hello,

Does anybody out there know how to build Julia with cuBLAS as a replacement 
for its BLAS? cuBLAS is not open source, but it has been freely available 
since CUDA 6.0.
According to my preliminary tests, multiplying random Float32 matrices (size 
8K) is roughly 6x faster on a GTX 760 than on a quad-core i7, so this seems 
to be quite a gain.

More specifically, two questions:

1. With octave, one can simply switch BLAS versions, e.g.

"LD_PRELOAD=/usr/local/cuda-6.5/lib64/libnvblas.so octave" 

or

"LD_PRELOAD=/usr/lib/openblas-base/libopenblas.so.0 octave"

However, with Julia this does not work. Why not?
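One thing I have since read (untested on my side, so take it as a sketch): NVBLAS apparently refuses to intercept anything unless it finds a configuration file naming a regular CPU BLAS to fall back on for the routines it does not accelerate. So the octave-style preload may need something like the following; the paths are the ones from my commands above, and the key names (NVBLAS_CPU_BLAS_LIB, NVBLAS_GPU_LIST, NVBLAS_CONFIG_FILE) are from the NVBLAS documentation:

```shell
# Write a minimal nvblas.conf naming a CPU BLAS fallback.
cat > nvblas.conf <<'EOF'
NVBLAS_CPU_BLAS_LIB /usr/lib/openblas-base/libopenblas.so.0
NVBLAS_GPU_LIST ALL
EOF

# Then launch with both the config file and the preload, e.g.:
#   NVBLAS_CONFIG_FILE=$PWD/nvblas.conf \
#   LD_PRELOAD=/usr/local/cuda-6.5/lib64/libnvblas.so julia
```

Whether Julia's BLAS calls are actually interceptable this way is part of what I am asking.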

2. When building Julia from the GitHub source, what do I have to change in 
the Makefile and Make.inc in order to replace Julia's default 
libopenblas.so.0 with libnvblas.so?
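For what it is worth, my current (unverified) guess is that no edits to Make.inc itself are needed, since the build picks up overrides from a Make.user file in the source tree. The variable names below (USE_SYSTEM_BLAS, LIBBLAS, LIBBLASNAME) are the ones Make.inc uses for a system BLAS; whether libnvblas.so actually exports all the symbols Julia expects is exactly what I am unsure about:

```shell
# Sketch of a Make.user placed in the Julia source tree; the cuda path is
# from my system and would need adjusting.
cat > Make.user <<'EOF'
USE_SYSTEM_BLAS = 1
LIBBLAS = -L/usr/local/cuda-6.5/lib64 -lnvblas
LIBBLASNAME = libnvblas
EOF
```

If someone has a known-good Make.user for this, that would answer my question directly.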

Please note that the speed gain is dramatic only for Float32, but this is 
still quite important to me, as my codes are much faster with nvblas.

I look forward to your suggestions; thank you for your time.

John Smith
