On Monday, April 25, 2016 at 4:49:40 PM UTC-4, Jonas Kersulis wrote:
>
> Can someone explain why qrfact is faster and requires less memory than 
> lufact for square, sparse matrices? LU is less computationally complex than 
> QR, and with decent pivoting LU should be able to preserve sparsity better 
> than QR, so I'm confused by what I'm seeing in practice.
>

50% nonzeros is not sparse enough to exploit sparsity effectively, and even 
10% nonzeros is probably too much. None of the sparse algorithms are really 
going to work particularly well in such cases; you are better off with 
dense matrices. I'm not sure comparing the sparse LU and QR algorithms in 
such cases is particularly meaningful.
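To make that concrete, here is a minimal sketch of the comparison (in post-1.0 Julia, where `lufact` has been renamed `lu` and `full` is `Matrix`; the 50%-density matrix and the diagonal shift for conditioning are my own illustrative choices, not from the original thread):

```julia
using SparseArrays, LinearAlgebra, Random

Random.seed!(0)
n = 1000
A = sprand(n, n, 0.5) + n*I   # ~50% nonzeros, shifted to be well-conditioned
b = randn(n)

# Sparse LU (UMFPACK) on the sparse matrix, versus dense LU (LAPACK)
# after converting to a dense Matrix — at this density the dense path
# is typically the faster one, which is the point being made above.
x_sparse = lu(A) \ b
x_dense  = lu(Matrix(A)) \ b
```

Timing the two factorizations (e.g. with `@time` or BenchmarkTools) on a matrix this full should show the dense route winning; both solves agree to roundoff.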

A more realistic application of sparse matrices would be something like 
discretizing a PDE on a 2d mesh. A 100x100 grid with nearest-neighbor 
interactions gives a 10000x10000 matrix with about 5*10000 nonzero 
entries, i.e. a density of 5/10000 = 0.05% nonzeros.
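A sketch of that count, building the standard 5-point Laplacian on a 100x100 grid as a Kronecker sum (post-1.0 Julia syntax; the construction itself is not from the original post, just the textbook discretization):

```julia
using SparseArrays, LinearAlgebra

# 1d second-difference matrix on n points (tridiagonal -1, 2, -1)
lap1d(n) = spdiagm(-1 => fill(-1.0, n-1), 0 => fill(2.0, n), 1 => fill(-1.0, n-1))

n = 100
In = sparse(1.0I, n, n)
A = kron(lap1d(n), In) + kron(In, lap1d(n))  # 5-point Laplacian, 10000x10000

density = nnz(A) / length(A)  # fraction of entries actually stored
```

Interior grid points contribute 5 nonzeros per row (boundaries slightly fewer), so `nnz(A)` is 5n^2 - 4n = 49600 and the density comes out just under 0.05% — the regime where sparse factorizations actually pay off.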

(There is a famous Wilkinson quote: "A sparse matrix is any matrix with 
enough zeros that it is worthwhile to take advantage of them." You don't 
have enough zeros to be worth exploiting.)
