I'd like to add that in many cases it is not necessary to call the methods
in the BLAS module to get the benefit from BLAS 3. You can use the A_mul_B!
family of functions in base/linalg/matmul.jl, which is more generic and is
the approach I prefer.
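
For example, a minimal sketch of the in-place multiply (the array names
are just illustrative):

A = ones(5,5); B = ones(5,2)
C = Array(Float64, 5, 2)  # preallocate the output
A_mul_B!(C, A, B)         # C = A*B in place; hits BLAS gemm! for Float64

A_mul_B! falls back to a generic method for element types that BLAS does
not cover, which is why it is the more generic approach.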

What I meant about BLAS 3 is that you want to structure your code so
that it performs matrix-matrix multiplications instead of matrix-vector
multiplications. That is what gives you the speed. You can see an example
of such Julia code below for the Cholesky factorization, which is easy to
block (a toy illustration of the restructuring follows the link):

https://github.com/andreasnoack/LinearAlgebra.jl/blob/master/src/cholesky.jl
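
To make the BLAS 2 vs. BLAS 3 point concrete, here is a toy sketch (not
taken from the linked file; the sizes are arbitrary). Both versions
compute the same Y, but the second does all the work in a single
matrix-matrix call:

n = 1000
A = randn(n, n); B = randn(n, 10); Y = similar(B)
for j = 1:size(B, 2)
    Y[:, j] = A*B[:, j]  # one matrix-vector product per column (BLAS 2)
end
A_mul_B!(Y, A, B)        # all columns in one matrix-matrix product (BLAS 3)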

Kind regards

Andreas Noack

2014-09-26 12:55 GMT-04:00 Douglas Bates <[email protected]>:

> On Friday, September 26, 2014 2:50:51 AM UTC-5, Staro Pickle wrote:
>>
>> Thank you all.
>>
>> On Friday, September 26, 2014 2:50:56 AM UTC+8, Jason Riedy wrote:
>>>
>>>
>>> Others have given some reasons...  If you want to see contortions
>>> that were (perhaps still are) necessary to be competitive, see:
>>>   https://groups.google.com/forum/#!msg/julia-users/DJRxApmpJmU/NqepNaX5ARIJ
>>>
>>
>> That thread is expert-level, but as you say, the contortions are not
>> clear and straightforward. Is there an easy way to call BLAS 3?
>>
>
> There are two ways to access BLAS subroutines from within Julia, but
> neither is considered "easy".  The first is direct calls to functions in
> the BLAS and LAPACK namespaces.  The fact that these names are not exported
> means that they are not intended for general use.  However, sometimes it is
> useful to access them directly.  To find the available functions use
>
> julia> whos(BLAS)
> BLAS                          Module
> asum                          Function
> axpy!                         Function
> dot                           Function
> dotc                          Function
> dotu                          Function
> gbmv                          Function
> gbmv!                         Function
> gemm                          Function
> gemm!                         Function
> ger!                          Function
> her!                          Function
> her2k                         Function
> her2k!                        Function
> herk                          Function
> herk!                         Function
> iamax                         Function
> nrm2                          Function
> sbmv                          Function
> sbmv!                         Function
> scal                          Function
> scal!                         Function
> symm                          Function
> symm!                         Function
> symv                          Function
> symv!                         Function
> syr!                          Function
> syr2k                         Function
> syr2k!                        Function
> syrk                          Function
> syrk!                         Function
>
>
> If you know the BLAS names, you will see that these names are the same
> with the storage type indicator removed.  So instead of the BLAS names
> cgemm, dgemm, sgemm and zgemm there is a single gemm function that
> dispatches on the element type.  The difference between the functions
> whose names end in ! and those whose names don't is that the name ending
> in ! is the mutating version of the function: it overwrites one of its
> arguments with the result.  The name without the ! copies that argument
> before calling the mutating version.
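>
> For example, the same gemm name works for real and complex matrices (a
> small illustrative snippet; the literal arguments are arbitrary):
>
> BLAS.gemm('N', 'N', 1.0, ones(2,2), ones(2,2))        # uses dgemm
> BLAS.gemm('N', 'N', 1.0+0.0im, complex(ones(2,2)),
>           complex(ones(2,2)))                         # uses zgemm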
>
> The functions are called as, e.g.
> julia> A = ones(5,5)
> 5x5 Array{Float64,2}:
>  1.0  1.0  1.0  1.0  1.0
>  1.0  1.0  1.0  1.0  1.0
>  1.0  1.0  1.0  1.0  1.0
>  1.0  1.0  1.0  1.0  1.0
>  1.0  1.0  1.0  1.0  1.0
>
> julia> B = ones(5,2)
> 5x2 Array{Float64,2}:
>  1.0  1.0
>  1.0  1.0
>  1.0  1.0
>  1.0  1.0
>  1.0  1.0
>
> julia> BLAS.gemm('N','N',1.0,A,B)
> 5x2 Array{Float64,2}:
>  5.0  5.0
>  5.0  5.0
>  5.0  5.0
>  5.0  5.0
>  5.0  5.0
>
> julia> C = Array(Float64,(5,2))
> 5x2 Array{Float64,2}:
>  0.0  0.0
>  0.0  0.0
>  0.0  0.0
>  0.0  0.0
>  0.0  0.0
>
> julia> BLAS.gemm!('N','N',2.0,A,B,0.0,C)
> 5x2 Array{Float64,2}:
>  10.0  10.0
>  10.0  10.0
>  10.0  10.0
>  10.0  10.0
>  10.0  10.0
>
> julia> C
> 5x2 Array{Float64,2}:
>  10.0  10.0
>  10.0  10.0
>  10.0  10.0
>  10.0  10.0
>  10.0  10.0
>
>
> You can determine the calling sequence from a call like
> methods(BLAS.gemm!)
> but the result isn't the most readable you will ever see.  You may be
> better off reading the source code in base/linalg/blas.jl
>
> In that file you will see that these functions just marshal argument
> values for the Fortran calling conventions and then use ccall to invoke
> the BLAS or LAPACK subroutine.  That's the other way of calling BLAS or
> LAPACK directly, although it should not be necessary to drop down to that
> level of code.
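>
> Just to illustrate what that level looks like, here is a hedged sketch of
> roughly what blas.jl does for the dot product.  The name raw_ddot is mine,
> not Base's, and it assumes the plain ddot_ symbol is present in the
> library that Base.libblas_name points to:
>
> function raw_ddot(x::Vector{Float64}, y::Vector{Float64})
>     n = length(x)
>     length(y) == n || error("dimension mismatch")
>     # ddot(n, dx, incx, dy, incy): Fortran takes every argument by reference
>     ccall((:ddot_, Base.libblas_name), Float64,
>           (Ptr{Base.LinAlg.BlasInt}, Ptr{Float64},
>            Ptr{Base.LinAlg.BlasInt}, Ptr{Float64},
>            Ptr{Base.LinAlg.BlasInt}),
>           &n, x, &1, y, &1)
> end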
>
>
>>
>>> However, if your matrices always have this form, I would highly
>>> recommend solving a few by hand to notice the pattern.
>>>
>>
>> Good observation and good sense! Yes, I know that it can be calculated by
>> hand. I use this matrix in a case related to sparse matrices.
>>
>> By the way, the quoted text above is not displayed in grey as usual. I
>> made a few tries but didn't see how to do it. Should that text be
>> formatted in grey?
>>
>
