Good point. The matrices I actually want to factorize are about 1% 
nonzeros. I changed the nonzero parameter to that value and re-ran the 
experiment. Now LU beats QR. 
<https://lh3.googleusercontent.com/-OF5V5uLMic4/Vx_xurvpgnI/AAAAAAACOkY/pwZ9_iVZj005nGZzymXZ0pQiJpi5y8RnwCLcB/s1600/2016-04-26-lu-vs-qr-01.png>
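For reference, the comparison I'm running looks roughly like the sketch below. It uses the current Julia names (`lu`/`qr` from LinearAlgebra, which replaced the thread-era `lufact`/`qrfact`); the matrix size and the `+ I` shift (to avoid an accidentally singular random matrix) are illustrative choices, not part of the original benchmark.

```julia
using LinearAlgebra, SparseArrays

n = 2000
A = sprand(n, n, 0.01) + I   # ~1% nonzeros; + I guards against singularity

tlu = @elapsed lu(A)         # sparse LU (UMFPACK)
tqr = @elapsed qr(A)         # sparse QR (SPQR)
println("LU: $(tlu)s, QR: $(tqr)s")
```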


On Tuesday, April 26, 2016 at 12:05:00 PM UTC-4, Steven G. Johnson wrote:
>
> On Monday, April 25, 2016 at 4:49:40 PM UTC-4, Jonas Kersulis wrote:
>>
>> Can someone explain why qrfact is faster and requires less memory than 
>> lufact for square, sparse matrices? LU is less computationally complex than 
>> QR, and with decent pivoting LU should be able to preserve sparsity better 
>> than QR, so I'm confused by what I'm seeing in practice.
>>
>
> 50% nonzeros is not sparse enough to exploit sparsity effectively, and 
> even 10% nonzeros is probably too much.  None of the sparse algorithms are 
> really going to work particularly well in such cases; you are better off 
> with dense matrices.    I'm not sure comparing the sparse LU and QR 
> algorithms in such cases is particularly meaningful.
>
> A more realistic application of sparse matrices would be something like 
> discretizing a PDE on a 2d mesh.  A 100x100 grid with nearest-neighbor 
> interactions is a 10000x10000 matrix, with about 5*10000 nonzero entries, 
> i.e. a nonzero density of 5/10000 = 0.05%.
>
> (There is a famous Wilkinson quote: "A sparse matrix is any matrix with 
> enough zeros that it is worthwhile to take advantage of them."   You don't 
> have enough zeros to be worth exploiting.)
>
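Steven's grid figure is easy to check directly: building the 5-point Laplacian on a 100x100 grid as a Kronecker sum gives a 10000x10000 sparse matrix whose nonzero density comes out just under 0.05%. A quick sketch (the helper function names here are my own, not from any library):

```julia
using LinearAlgebra, SparseArrays

# 1-D second-difference matrix on m points
lap1d(m) = spdiagm(-1 => fill(-1.0, m - 1),
                    0 => fill( 2.0, m),
                    1 => fill(-1.0, m - 1))

# 2-D 5-point Laplacian via the Kronecker sum L ⊗ I + I ⊗ L
function lap2d(m)
    L  = lap1d(m)
    Im = sparse(1.0I, m, m)
    return kron(L, Im) + kron(Im, L)
end

A = lap2d(100)                 # 10000x10000
density = nnz(A) / length(A)   # just under 5e-4, i.e. ~0.05% nonzeros
```

(The exact count is 5*10000 - 4*100 = 49600 nonzeros, slightly below 5 per row because boundary rows have fewer neighbors.)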
