Can someone explain why qrfact is faster and requires less memory than 
lufact for square, sparse matrices? LU has lower computational complexity 
than QR (for a dense n-by-n matrix, roughly 2n^3/3 flops versus 4n^3/3 for 
Householder QR), and with decent pivoting LU should be able to preserve 
sparsity better than QR, so I'm confused by what I'm seeing in practice.
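
For contrast, the flop-count intuition is easy to check in the dense case 
(a minimal sketch; the sizes here are arbitrary and timings will vary with 
machine and BLAS threading):

    # Dense sanity check: LU needs about half the flops of QR.
    for n in (500, 1000, 2000)
        A = rand(n, n)
        lufact(A); qrfact(A)          # warm up JIT compilation
        t_lu = @elapsed lufact(A)
        t_qr = @elapsed qrfact(A)
        println("n = $n: LU $(round(t_lu, 4)) s, QR $(round(t_qr, 4)) s")
    end

On typical LAPACK builds, dense LU lands around half the time of dense QR, 
which is why the sparse result surprises me.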

The plots below compare computation time, memory allocation, and garbage 
collection time for the two factorization methods. I generated them by 
factorizing random square matrices produced with sprand. The top plot shows 
results for matrices with 10% nonzeros; the bottom plot shows results for 
matrices with 50% nonzeros. The two methods seem to converge in memory use 
as density increases, but LU still loses to QR at both densities.
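
A minimal version of the benchmark loop is sketched below (the sizes, 
density, and single-trial timings here are illustrative, not the exact 
runs behind the plots):

    # Sketch of the sparse benchmark: time and allocations for LU vs. QR.
    # Note: @allocated only sees Julia-side allocations, not whatever
    # UMFPACK/SPQR allocate internally through malloc.
    for n in (1000, 2000, 4000)
        A = sprand(n, n, 0.1)         # square sparse matrix, 10% nonzeros
        lufact(A); qrfact(A)          # warm up JIT compilation
        t_lu = @elapsed lufact(A)
        b_lu = @allocated lufact(A)
        t_qr = @elapsed qrfact(A)
        b_qr = @allocated qrfact(A)
        println("n = $n: LU $t_lu s / $b_lu bytes, QR $t_qr s / $b_qr bytes")
    end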

I have looked through the paper describing the multifrontal QR algorithm 
Julia calls (SuiteSparse's SPQR), but I don't understand it fully. Before 
I spend more time studying it, I figured I would see if someone here knows 
the secret sauce that helps it beat LU.
Top plot (10% nonzeros):
<https://lh3.googleusercontent.com/-EIfk6ZvRVY8/Vx6AN44Se4I/AAAAAAACOgI/hIKvJGOuRGc45IGlAJ_zobqoMUquBFeCwCLcB/s1600/2016-04-25-lu-vs-qr-01.png>
Bottom plot (50% nonzeros):
<https://lh3.googleusercontent.com/-agWjcrtP4iE/Vx6ASu0AnBI/AAAAAAACOgM/sOyXToRy2BEuW1gjaIcHNsbEDLYtp-LtACLcB/s1600/2016-04-25-lu-vs-qr-05.png>

