Basic linear algebra is done by BLAS and LAPACK, so I don't think these issues 
cause performance problems.
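For instance, a dense Float64 matrix–vector product dispatches straight to BLAS gemv, which can be checked by making the same call explicitly (a quick sketch; `BLAS` lives in the `LinearAlgebra` stdlib in current Julia, it was in Base back then):

```julia
using LinearAlgebra  # provides BLAS in Julia 1.0+

A = [1.0 2.0; 3.0 4.0]
x = [5.0, 6.0]

r1 = A * x                       # high-level product; lowers to BLAS dgemv
r2 = BLAS.gemv('N', 1.0, A, x)   # the same gemv call made by hand
@assert r1 == r2 == [17.0, 39.0]
```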

 -- John


On Jul 14, 2014, at 6:55 PM, Andrei Zh <[email protected]> wrote:

> A linear transformation is a very frequent operation in scientific computing, 
> so I'd like to know how to perform it in the most efficient way. Often a 
> linear transformation is written as:
> 
>     r = A * x
> 
> where A is the transformation matrix and x is a data vector. I don't know 
> exactly how matrix-by-vector multiplication is implemented in Julia, but I 
> believe for vectors it reduces to something like this (please correct me if 
> this is algorithmically wrong):
>     
>     r = zeros(size(A, 1))
>     for i = 1:size(A, 1)
>        r[i] = dot(vec(A[i, :]), x)
>     end
> 
> The important part here is "A[i, :]": the transformation matrix is accessed 
> row by row. Since matrices in Julia are stored in column-major order, 
> row-wise access is pretty inefficient.
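One way to see the difference: the same product can be accumulated one column at a time, so the inner loop only ever walks contiguous column-major memory (a hypothetical `colwise_mul`, just a sketch, not anything from Base):

```julia
# Sketch: r = A * x accumulated column by column, so the inner loop
# traverses one contiguous (column-major) column of A at a time.
function colwise_mul(A::Matrix{Float64}, x::Vector{Float64})
    r = zeros(size(A, 1))
    for j in 1:size(A, 2)      # over columns of A
        for i in 1:size(A, 1)  # down one contiguous column
            r[i] += A[i, j] * x[j]
        end
    end
    return r
end

A = rand(4, 3)
x = rand(3)
@assert colwise_mul(A, x) ≈ A * x
```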
> 
> I thought about a trick with multiplication order. Instead of calculating "A 
> * x" I can store a transposed version of A, say B = A', and compute the 
> transformation as follows:
> 
>     r = (x' * B)'
> 
> B here is accessed column by column, so it should work well. 
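For what it's worth, the trick is easy to sanity-check numerically (a sketch; note that in recent Julia `A'` is a lazy wrapper, so `copy(A')` is needed to actually materialize the transposed storage):

```julia
A = rand(4, 3)
x = rand(3)
B = copy(A')       # materialized transpose; B itself is column-major

r = vec(x' * B)    # x' * B is a 1×4 row; vec turns it into a vector
@assert r ≈ A * x  # (x' * B)' == B' * x == A * x
```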
> 
> But what if instead of a single vector "x" we have a data matrix "X"? In 
> this case both "A * X" and "X * A" are inefficient, since one of these 
> matrices will be accessed row by row. In NumPy I would just set C order 
> (row-major) for the first matrix and Fortran order (column-major) for the 
> second one, but AFAIK there's no such option in Julia (is there?). So what's 
> the most efficient way to perform the linear transformation defined by 
> matrix A on a data matrix X?
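To expand on my point above: for a data matrix the high-level product already goes through BLAS gemm, which tiles both operands for the cache internally, so there's no layout juggling to do on the Julia side (a sketch):

```julia
using LinearAlgebra  # provides BLAS in Julia 1.0+

A = rand(4, 3)
X = rand(3, 5)       # five data vectors stored as columns

R1 = A * X                           # lowers to BLAS dgemm
R2 = BLAS.gemm('N', 'N', 1.0, A, X)  # the same gemm call made by hand
@assert R1 ≈ R2
```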
