Ivar:
Thanks, I assumed numpy made a copy while slicing rather than referencing 
the original array (which numpy calls a view). I found that going from a 
double copy to a single copy cut ~0.08s, so going from a single copy to no 
copy should put the time at roughly 0.22s, which is very close to numpy's 
time.

Stefan:
Thanks, I forgot that malloc doesn't return 0-filled memory (so that 
initializing a tridiagonal matrix still involves setting all of the 
elements not on the tridiagonal to 0).
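To make that concrete, here is a small sketch of the difference in Julia. The exact constructor spelling is version-dependent: the Julia of this thread wrote uninitialized allocation as `Array(Float64, n, n)`, while later versions spell it `Matrix{Float64}(undef, n, n)`; the sketch below uses the latter.

```julia
# zeros both allocates AND fills every element with 0.0 --
# the fill is the O(N^2) part of the work.
Z = zeros(1000, 1000)

# An uninitialized allocation skips the fill; like C's malloc,
# the contents are whatever happened to be in memory.
U = Matrix{Float64}(undef, 1000, 1000)
fill!(U, 0.0)   # after an explicit fill, U is equivalent to zeros(1000, 1000)
```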

So, does Julia have some equivalent to a numpy view (so that I can perform 
the matrix multiplication only on the slice of the matrix without having to 
copy that slice), or would I need to just hard-code this matrix 
multiplication (on a submatrix) to avoid copying?
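To make the question concrete, here is a sketch of the copying and non-copying versions of a submatrix multiply. This assumes a SubArray-based view; it was spelled `sub(A, ...)` in the Julia of this thread, and `view(A, ...)` / `@view A[...]` in later versions.

```julia
A = randn(1000, 1000)
B = randn(999, 999)

# Copying approach: indexing with ranges allocates a new 999x999 matrix.
C_copy = A[1:end-1, 1:end-1] * B

# Non-copying approach: a SubArray wraps A's data without copying it.
Av = view(A, 1:size(A, 1) - 1, 1:size(A, 2) - 1)
C_view = Av * B   # same result, without materializing the slice
```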

-Eric

On Tuesday, December 24, 2013 5:45:39 PM UTC-6, Stefan Karpinski wrote:
>
> On Tue, Dec 24, 2013 at 5:17 PM, Eric Martin <[email protected]> wrote:
>
>> Also, it's (marginally but consistently) faster in Julia to just call 
>> "copy(T)" rather than "full(Tridiagonal(map(i -> diag(T, i), -1:1)...))", 
>> which seems odd to me because "copy" should be O(N^2) and the other method 
>> (only copying the sub, main, and super diagonals) is O(N), where N is the 
>> dimension of a square matrix.
>
>
> Both do work proportional to the number of elements in the matrix, which 
> is O(N^2). The copy is a *lot* simpler, however. All it does is a memcpy, 
> which is not much slower than just filling the matrix with zeros:
>
> julia> A = randn(10000,10000);
>
> julia> @time copy(A);
> elapsed time: 0.432631311 seconds (800450296 bytes allocated)
>
> julia> @time zeros(10000,10000);
> elapsed time: 0.365747207 seconds (800000128 bytes allocated)
>
