Presently, if you use solve_right() to solve a linear system, Sage
will first investigate the rank of the coefficient matrix, and branch
accordingly.  Some classes of matrices can compute rank very quickly
(ZZ, QQ, mod p), while others compute the echelon form and infer rank
from that.  Eventually, the coefficient matrix (or a chosen
nonsingular submatrix) is augmented and a second reduced row-echelon
form is computed.  When the coefficient matrix has full rank, the
second echelon form computation is a near-duplicate of the first.  So
instead, if you just dive right in and do the naive thing you were
taught as a student (augment and row-reduce no matter what), solving
can be twice as fast.
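To make the comparison concrete, here is a minimal pure-Python sketch of
the "naive" strategy (not Sage's actual implementation, and with exact
rationals via the fractions module standing in for QQ): augment [A | b]
and compute the reduced row-echelon form directly, with no rank
computation beforehand.  The function name naive_solve is my own.

```python
from fractions import Fraction

def naive_solve(A, b):
    """Solve A x = b by augmenting and row-reducing, skipping any rank check.

    A is a list of rows (assumed square and nonsingular here, to keep the
    sketch short); b is the list of constants.  Exact Fraction arithmetic
    mimics solving over the rationals.
    """
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        # Pivoting: find a row at or below the diagonal with a nonzero
        # entry in this column, and swap it into place.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry is 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Clear this column in every other row (full reduction, so the
        # left block ends up as the identity).
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The solution vector is the last column of the reduced matrix.
    return [M[i][n] for i in range(n)]
```

For example, naive_solve([[2, 1], [1, 3]], [3, 5]) returns
[Fraction(4, 5), Fraction(7, 5)].  The point of the patch is that a
single reduction like this does all the work, whereas the current code
path effectively performs two echelon computations when the matrix has
full rank.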

It turns out that the naive approach seems to be faster almost always:
3% to 10% faster for the special cases, and twice as fast for "general"
matrices, sometimes even more.  The only exception I can find is when
the entries are integers mod p and the matrix of "constants" has about
10 times more columns than the coefficient matrix, and even then the
timings are still very close.

Despite a lot of testing with random matrices, I remain skeptical, in
part because the results are sensitive to how the random matrices are
generated.

See ticket for preliminary (but fully functional) patch and some
timings:
http://trac.sagemath.org/sage_trac/ticket/11286

If you have any good ideas for Sage routines that will really stress
solve_right() with something "real," I'd love to hear about them so I
can test this further.  Here or on the ticket, as appropriate.  Thanks
in advance.

Rob
