On 06.06.2022 at 23:56, Cottrell, Allin wrote:
On Mon, Jun 6, 2022 at 5:24 PM Sven Schreiber <sveto...@gmx.net> wrote:

OTOH, I ran some very crude and simple speed comparisons of
qrdecomp with and without pivoting (on Linux, current git), and
depending on the input the advantage seemed to go either way,
which I found a little surprising. But then I also ran SVD on the same
input (also grabbing the optional outputs), and that was even
faster, so now I'm puzzled... isn't SVD supposed to be the most
expensive of the candidates? What am I missing?

Are you doing enough replications to get good resolution from the timer?

Here's what I see for a 100 x 10 random matrix with 10,000 replications:

SVD 0.324311s
QR  0.094924s
QRP 0.114874s

I think I had 500 x 12 or so. But it's interesting that QRP is almost as fast
as QR.
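
For concreteness, a minimal hansl sketch of such a timing comparison would
look something like this (the matrix size and replication count are just
placeholders, and I leave out the pivoting variant rather than guess at its
exact signature in current git):

  set verbose off
  matrix X = mnormal(500, 12)
  scalar reps = 10000
  matrix R

  set stopwatch
  loop i=1..reps
      matrix Q = qrdecomp(X, &R)
  endloop
  printf "QR  %gs\n", $stopwatch

  set stopwatch
  loop i=1..reps
      matrix s = svd(X)
  endloop
  printf "SVD %gs\n", $stopwatch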


But as I mentioned earlier, big m favors SVD. Here's 800 x 20 with
1000 replications.

SVD 0.137257s
QR  0.231528s
QRP 0.248974s

Apparently I missed that; this is something I was completely unaware of.
I thought one tries to avoid SVD because of the cost. But here I
see no cost!? For mols(), for example, SVD has to be forced explicitly.
OK, you could say that at least in macro settings T=100 is more common than
T=800. But maybe a heuristic could help, so that SVD is used for
sufficiently large T (or T-K, or T/K). (Only as a fallback from Cholesky, of
course.)
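
Purely as an illustration of the kind of heuristic I mean (the threshold is
arbitrary, and I'm assuming "set svd on" is the switch that mols() respects),
at the script level it would be something like:

  function matrix my_ols (const matrix y, const matrix X)
      # illustrative only: force SVD when the system is "tall enough";
      # the cutoff of 40 for T/k is completely arbitrary
      if rows(X) / cols(X) > 40
          set svd on
      endif
      matrix b = mols(y, X)
      set svd off
      return b
  end function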

For the dropcoll() function and related ones that currently use the R
diagonal internally, I guess a similar argument applies. The extra cost of
QRP over plain QR seems negligible, and SVD takes over at some point. The
advantage of plain QR seems to be limited.
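
As a toy illustration of what the R diagonal (or the singular values) reveal
about a redundant column:

  matrix X = mnormal(100, 4)
  X = X ~ (X[,1] + X[,2])    # append an exactly collinear column
  matrix R
  matrix Q = qrdecomp(X, &R)
  eval abs(diag(R))'   # a (near-)zero entry flags the redundant column
  eval svd(X)          # the smallest singular value tells the same story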

thanks
sven