Here is the result of rounding each basis to 15 decimal places. I definitely 
should have done that earlier, in the interest of comparing apples to apples. 
From this figure, I can believe the vectors are the same (up to scaling and 
order).

<https://lh3.googleusercontent.com/-l6Omfrrv37k/VmWrMX0AsWI/AAAAAAACERk/oWg9bImr4VE/s1600/Screenshot%2Bfrom%2B2015-12-07%2B10%253A51%253A39.png>
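The eyeball check above could also be automated. Here is a minimal sketch (the names `V1`/`V2`/`same_basis` are mine, not from the thread): assuming each candidate basis is orthonormal after normalizing its columns — which holds for eigenvectors of the symmetric A'A and for singular vectors — the two bases agree up to scaling and order exactly when |V1'V2| is a permutation matrix.

```julia
using LinearAlgebra

# Normalize each column to unit length (absorbs any scaling).
normcols(V) = V ./ sqrt.(sum(abs2, V, dims=1))

# Assuming orthonormal columns after normalization, the bases match
# up to scaling/sign/order iff |V1'V2| is a permutation matrix:
# every entry is ~0 or ~1, and each row and column sums to ~1.
function same_basis(V1, V2; tol=1e-10)
    P = abs.(normcols(V1)' * normcols(V2))
    all(x -> x < tol || abs(x - 1) < tol, P) &&
        all(abs.(sum(P, dims=1) .- 1) .< tol) &&
        all(abs.(sum(P, dims=2) .- 1) .< tol)
end

# Quick sanity check: reorder and rescale an orthonormal basis.
U = Matrix(qr(randn(6, 6)).Q)
W = U[:, [3, 1, 2, 6, 5, 4]] .* [2 -1 1 3 -5 1]
same_basis(U, W)   # → true
```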


The matrix has 6 rows and full row rank. If I use eigs or svds to obtain 
the 6 eigenvectors/left singular vectors corresponding to the nonzero 
eigenvalues/singular values, then augment the vectors with zero columns (to 
yield square matrices) and take the QR decomposition, Q will be a completed 
basis. Here is what Q looks like:

<https://lh3.googleusercontent.com/-Y8f6r5esPeA/VmWwxY5Eq1I/AAAAAAACER0/rfCxJdc-4q4/s1600/Screenshot%2Bfrom%2B2015-12-07%2B11%253A11%253A57.png>

It's the same in both cases, as expected.
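The completion step could be sketched like this, in current Julia syntax (`qr`/`svd` from `LinearAlgebra`); `A` here is a random placeholder, not the actual 6-row matrix from the screenshots:

```julia
using LinearAlgebra

A = randn(6, 10)     # placeholder for the actual 6 x n full-row-rank matrix
n = size(A, 2)

# Right singular vectors for the 6 nonzero singular values
# (equivalently, eigenvectors of A'A for its nonzero eigenvalues):
V = svd(A).V[:, 1:6]

# Pad with zero columns to a square matrix and let QR complete the basis.
Q = Matrix(qr([V zeros(n, n - 6)]).Q)

# Since the first 6 columns are already orthonormal, the first 6 columns
# of Q equal V up to sign; the remaining n - 6 columns are an orthonormal
# basis for the nullspace of A, so Q is a completed basis.
```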


As far as speed goes, I thought svd would be faster because it avoids 
forming the product A'A. I'm still not sure I understand why it's actually 
slower.
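My current guess (an assumption on my part, not established in the thread): although eig(A'A) has to form the product, A'A is only 200 x 200 in the example below, so the eigensolve runs on a small symmetric matrix, while svd(A) has to factor the full 1000 x 200 matrix. A quick consistency check of the two routes:

```julia
using LinearAlgebra

A = rand(1000, 200)

# The Gram matrix is only n x n (200 x 200), so the symmetric
# eigensolve is cheap; the dominant cost is the ~ m*n^2 multiply.
G = Symmetric(A'A)
vals = eigvals(G)          # ascending order

# The two routes agree on the singular values (up to the accuracy
# loss from squaring the condition number via A'A):
sqrt.(reverse(vals)) ≈ svdvals(A)   # → true
```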


Thanks for your help, I appreciate it!




On Thursday, December 3, 2015 at 8:23:00 PM UTC-5, Steven G. Johnson wrote:
>
> PS. eig(A'A) is typically going to be much faster (but less accurate) than 
> svd(A), so I'm not sure why you say that the latter is faster.  e.g. here 
> are some sample timings (timing everything twice to avoid the initial 
> compilation cost):
>
> *julia>* A = rand(1000,200);
>
>
> *julia>* @time eig(A'A); @time eig(A'A);
>
>   0.790879 seconds (1.12 M allocations: 52.454 MB, 3.47% gc time)
>
>   0.027542 seconds (37 allocations: 1.294 MB)
>
>
> *julia>* @time svd(A); @time svd(A);
>
>   0.160237 seconds (90.29 k allocations: 8.908 MB)
>
>   0.079330 seconds (28 allocations: 4.908 MB)
>
>
> *julia>* @time svd(A'); @time svd(A');
>
>   0.234619 seconds (107 allocations: 7.659 MB, 1.39% gc time)
>
>   0.234533 seconds (33 allocations: 7.655 MB)
>
>
