I think USE_SYSTEM_LAPACK should be 0. That way Julia uses the system-provided
BLAS (in this case the CUDA BLAS) and builds LAPACK from source. This is
meant to work, and if it does not, please file an issue.
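
Concretely, that would mean something like the following in Make.user (or
edited into Make.inc, as you did); this is only a sketch, and the CUDA path
below is just the one from your message, so adjust it for other setups:

USE_SYSTEM_BLAS = 1
USE_SYSTEM_LAPACK = 0
LIBBLAS = -L/usr/local/cuda-6.5/lib64 -lnvblas
LIBBLASNAME = libnvblas

LAPACK is then built from source against whatever BLAS those settings
resolve to.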

-viral

On Thursday, November 13, 2014 2:16:38 PM UTC+5:30, John Smith wrote:
>
> Thanks for your input. Not much luck with my first attempts at CUBLAS.jl 
> (some errors), but this is exactly what I am looking for: basically the 
> ability of Python's cudamat to multiply two matrices on the GPU and send 
> the result back to the host.
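>
> (To be concrete, what I mean is that with a GPU-backed BLAS underneath, an 
> ordinary product should be all that is needed, with no special syntax; the 
> matrix sizes below are only for illustration:
>
> A = rand(2000, 2000)
> B = rand(2000, 2000)
> C = A * B   # plain gemm call; a GPU-backed BLAS can run this on the device
>
> and C comes back as an ordinary host array.)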
>
> Regarding recompilation, I have tried linking explicitly to the available 
> "/usr/local/cuda-6.5/lib64/libnvblas.so" in Make.inc:
>
> LIBBLAS = -L/usr/local/cuda-6.5/lib64 -lnvblas
> LIBBLASNAME = libnvblas
>
> It passes the test on BLAS, but fails with LAPACK:
>
> checking for sgemm_ in -L/usr/local/cuda-6.5/lib64 -lnvblas... yes
> checking for cheev_ in -L/usr/local/cuda-6.5/lib64 -lnvblas... no
> ...
> make: *** [release] Error 2
>
> It is a shame that it is hard to compile Julia with BLAS but without 
> LAPACK, or with the BLAS routines taken from cuBLAS while falling back to 
> the CPU LAPACK. Julia has a lot of other goodies and my applications only 
> need matrix products, yet the compilation gets stuck on cheev_, the LAPACK 
> eigensolver for complex Hermitian matrices, which I do not need at all. 
> Presumably the check fails because libnvblas provides only a subset of 
> BLAS (the Level-3 routines) and no LAPACK at all. It seems this is a 
> problem experienced by Elliot Saba as well. 
>
> BTW, in Octave it is easy to switch BLAS to libnvblas.so while keeping the 
> LAPACK part of the same libopenblas.so (see the first part of my post; I 
> forgot to mention that one must also supply an nvblas.conf file, and 
> http://www.tuicool.com/articles/mQb6bu has more details and seems quite 
> useful). So the "old wheel" Octave is in a way much closer to "hybrid 
> computing", and I wish Julia had the same functionality: there would be no 
> need for any external libs or extra syntax to multiply two matrices on the 
> GPU as with CUBLAS.jl (if you get it working); in Octave you just run the 
> same code, which saves a lot of time and debugging.
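>
> For completeness: the nvblas.conf I mentioned is a small text file that 
> nvblas reads at load time (from the current directory by default, or from 
> the path in the NVBLAS_CONFIG_FILE environment variable). A minimal one, 
> with the CPU BLAS path adjusted to your system, might look like:
>
> NVBLAS_LOGFILE nvblas.log
> NVBLAS_CPU_BLAS_LIB /usr/lib/libopenblas.so
> NVBLAS_GPU_LIST ALL
>
> The CPU library named there is what nvblas falls back to for calls it does 
> not offload to the GPU.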
>
> On Thursday, November 13, 2014 12:47:04 AM UTC+2, Elliot Saba wrote:
>>
>> When compiling your Julia, you need to set the following make variables:
>>
>> LIBBLAS=-lnvblas
>> LIBLAPACK=-lnvblas
>> USE_SYSTEM_BLAS=1
>> USE_SYSTEM_LAPACK=1
>>
>> I'm assuming that libnvblas provides LAPACK as well. If it doesn't, you 
>> may run into issues, because the LAPACK library needs access to BLAS 
>> functionality itself.
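>>
>> These can go in a Make.user file at the top of the Julia source tree, or 
>> be passed straight on the make command line, e.g. (untested with nvblas 
>> on my end):
>>
>> make USE_SYSTEM_BLAS=1 USE_SYSTEM_LAPACK=1 LIBBLAS=-lnvblas LIBLAPACK=-lnvblas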
>> -E
>>
>>
>> On Wed, Nov 12, 2014 at 1:55 PM, cdm <[email protected]> wrote:
>>
>>>
>>> this may be helpful ...
>>>
>>>    https://github.com/nwh/CUBLAS.jl
>>>
>>>
>>> i have not tried adding this package and have no experience with it.
>>>
>>> good luck,
>>>
>>> cdm
>>>
>>>
>>>
>>> On Wednesday, November 12, 2014 1:35:20 PM UTC-8, John Smith wrote:
>>>>
>>>> Hello,
>>>>
>>>> Does anybody out there know how to compile Julia with cuBLAS as a 
>>>> replacement for BLAS? cuBLAS is not open source, but it has been freely 
>>>> available since CUDA 6.0.
>>>>
>>>
>>
