@all who provided Julia optimization advice

Thanks for the help optimizing my code. I made two more optimizations to 
my previously posted code (the current code is here: 
https://gist.github.com/lightcatcher/8118181 ):
1. Hardcoding both matrix multiplications so that they occur in the same 
loop, which lets me avoid copying the matrices.
2. Computing Q rather than Q', which lets me iterate over the matrix in a 
cache-friendly (column-major) order.

Note only the Julia code is updated (not the timings or the Python code in 
the gist).
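
For anyone following along, here is a schematic of the approach in Python 
(the gist's other language); the function name and the dense-matrix layout 
are mine and purely illustrative, since the actual Julia code keeps only the 
bands. It uses 2x2 Givens rotations, which for this purpose are equivalent 
(up to sign) to the 2-component Householder reflectors, and it applies each 
rotation to R and accumulates Q in a single fused loop, as in optimization 1:

```python
import math

def tridiag_qr(d, e):
    # QR of the symmetric tridiagonal matrix with main diagonal d
    # (length n) and off-diagonal e (length n-1), zeroing each
    # subdiagonal entry in turn with a 2x2 rotation.
    n = len(d)
    # Dense working copies, purely for illustration; the real code
    # only needs three diagonals of R.
    R = [[0.0] * n for _ in range(n)]
    Q = [[float(i == j) for j in range(n)] for i in range(n)]
    for i in range(n):
        R[i][i] = d[i]
    for i in range(n - 1):
        R[i][i + 1] = e[i]
        R[i + 1][i] = e[i]
    for k in range(n - 1):
        a, b = R[k][k], R[k + 1][k]
        r = math.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        # Fused loop: update rows k, k+1 of R and accumulate
        # columns k, k+1 of Q in the same pass.
        for j in range(n):
            t1, t2 = R[k][j], R[k + 1][j]
            R[k][j] = c * t1 + s * t2
            R[k + 1][j] = -s * t1 + c * t2
            q1, q2 = Q[j][k], Q[j][k + 1]
            Q[j][k] = c * q1 + s * q2
            Q[j][k + 1] = -s * q1 + c * q2
    return Q, R
```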

These reduced my tridiagonal QR times to ~0.04s compared to the 0.14s 
required by Julia's built-in 'qr' method, making my Julia tridiagonal QR 
implementation faster than any of the alternatives I looked at.

@Alan

I agree I'm "chasing the subdiagonal", but I'm doing it with Householder 
reflections, using only the non-zero components of the "u" vector (where 
the Householder reflector is P = I - u*u' and only the first 2 components 
of u are non-zero).
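
Concretely, the 2x2 active block of the reflector can be sketched like this 
in Python (illustrative only; the name `householder2` is mine, and I use the 
textbook normalization P = I - 2*v*v'/(v'v), which matches P = I - u*u' 
when u = sqrt(2)*v/||v||):

```python
import math

def householder2(a, b):
    # 2x2 Householder reflector P = I - 2*v*v'/(v'v) mapping
    # [a, b]' to [-sign(a)*r, 0]' with r = sqrt(a^2 + b^2).
    r = math.hypot(a, b)
    if r == 0.0:
        return (1.0, 0.0), (0.0, 1.0)  # nothing to do
    s = -1.0 if a < 0.0 else 1.0       # sign choice avoids cancellation
    v0, v1 = a + s * r, b              # v = [a, b]' + sign(a)*r*e1
    vn2 = v0 * v0 + v1 * v1            # v'v
    p00 = 1.0 - 2.0 * v0 * v0 / vn2
    p01 = -2.0 * v0 * v1 / vn2
    p11 = 1.0 - 2.0 * v1 * v1 / vn2
    return (p00, p01), (p01, p11)      # P is symmetric
```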

I'm writing all of this code to learn Julia as well as to better understand 
some of the material from my numerical linear algebra class. I am writing 
the QR decomposition to use in QR iteration to find the eigenvalues of a 
tridiagonal matrix (as part of computing low-rank SVD by Lanczos iteration).
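
As a sanity check on that plan, unshifted QR iteration (A <- R*Q) on a 
symmetric 2x2 matrix can be written out by hand; this toy Python sketch 
(my own illustrative code, using a Givens rotation for the QR step) drives 
the off-diagonal to zero and leaves the eigenvalues on the diagonal:

```python
import math

def qr_step_2x2(a11, a12, a22):
    # One unshifted QR iteration step, A -> R*Q, on the symmetric
    # 2x2 matrix [[a11, a12], [a12, a22]].
    r = math.hypot(a11, a12)
    if r == 0.0:
        return a11, a12, a22
    c, s = a11 / r, a12 / r       # Givens rotation zeroing the (2,1) entry
    r11 = c * a11 + s * a12       # R = G' * A (upper triangular)
    r12 = c * a12 + s * a22
    r22 = -s * a12 + c * a22
    # Next iterate A' = R * G; it stays symmetric, so three entries suffice.
    return r11 * c + r12 * s, r22 * s, r22 * c

# eigenvalues of [[2, 1], [1, 2]] are 3 and 1
a11, a12, a22 = 2.0, 1.0, 2.0
for _ in range(30):
    a11, a12, a22 = qr_step_2x2(a11, a12, a22)
```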

I've read about "chasing the bulge" and the implicit QR algorithm, but I 
haven't been able to find a description (or implementation/pseudocode) that 
I could easily understand. Is there any chance you could point me to an 
implementation, pseudocode, or a good explanation of the algorithm? Do you 
happen to know of an exposition of the implicit QR algorithm that uses 
Householder reflectors rather than Givens rotations? 

In particular, I've looked at the description in Demmel's "Applied 
Numerical Linear Algebra" and am having trouble understanding the 
algorithm from it. If you happen to be familiar with that book, the only 
real coverage of the implicit QR algorithm seems to be Example 4.10, which 
doesn't make sense to me if the matrix A is already tridiagonal or upper 
Hessenberg (it just looks like a way to reduce a matrix to upper 
Hessenberg form).

Thanks for any help with the numerical linear algebra, and thanks to all 
for the optimization advice. Sorry if the numerical linear algebra stuff is 
off-topic for this mailing list.

-Eric

On Wednesday, December 25, 2013 7:42:50 AM UTC-6, Alan Edelman wrote:
>
> You say Householder, but I would say you could be  "chasing the 
> subdiagonal" with Givens Rotations
> You are using a full array but you only need three diagonals of R, and if 
> you save Q as n-1 angles you don't even need a dense Q
>
> This begs the question of why you are doing QR on a symmetric tridiagonal? 
>  hopefully not a step for eigenvalues because chasing the bulge is 
> the way to go for that.
>
>
>
> On Tuesday, December 24, 2013 7:55:23 PM UTC-5, Ivar Nesje wrote:
>>
>> There is a SubArray implementation, sub(), but it has some performance 
>> issues related to indexing that should be fixed before it becomes default 
>> for slicing.
>>
>> For small arrays I would guess that hardcoded multiplication is faster 
>> than calling a BLAS.
>>
>> You can find info about you julia by calling versioninfo()
>>
>>
