#8096: Speed up parent creation for multiplication of square matrices
---------------------------------------+------------------------------------
   Reporter:  boothby                  |          Owner:  boothby          
       Type:  enhancement              |         Status:  needs_info       
   Priority:  minor                    |      Milestone:                   
  Component:  linear algebra           |       Keywords:                   
Work_issues:  various doc test errors  |       Upstream:  N/A              
   Reviewer:                           |         Author:  boothby, robertwb
     Merged:                           |   Dependencies:                   
---------------------------------------+------------------------------------

Comment(by malb):

 > As far as I know, there already is an experimental version for matrices
 > over GF(2^n), isn't it, Martin? How fast is it?

 Sorry, I completely missed this question. M4RIE, i.e. dense linear algebra
 over GF(2^e^) for 2 <= e <= 10, is either about as fast as Magma or much
 faster. Actually, it's not experimental: a fully working patch + library
 has been around for ages, and it was agreed months ago that it would go
 into Sage. There just hasn't been enough push to finish the review: #9562

 Here's a summary of the current approach & performance:

 http://martinralbrecht.files.wordpress.com/2011/03/20110330_-_m4ri_-_nancy.pdf

 To compare with your results:

 '''Karatsuba'''

 {{{
 #!python
 sage: K.<a> = GF(4)
 sage: A = random_matrix(K,5000,5000)
 sage: B = random_matrix(K,5000,5000)
 sage: %time C=A*B
 CPU times: user 0.51 s, sys: 0.01 s, total: 0.52 s
 Wall time: 0.52 s
 }}}

 '''Travolta'''

 {{{
 #!python
 sage: K.<a> = GF(4)
 sage: A = random_matrix(K,5000,5000)
 sage: B = random_matrix(K,5000,5000)
 sage: %time C=A._multiply_travolta(B)
 CPU times: user 1.19 s, sys: 0.00 s, total: 1.20 s
 Wall time: 1.20 s
 }}}

 So both algorithms in M4RIE are much faster than the ~80 seconds reported
 above. I didn't measure memory consumption precisely, though.
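
 If someone wants a rough number, here is a minimal sketch for measuring it
 (assuming Sage's top-level get_memory_usage() helper; it only reports
 process-level memory, so the figure is coarse):

 {{{
 #!python
 sage: K.<a> = GF(4)
 sage: A = random_matrix(K, 5000, 5000)
 sage: B = random_matrix(K, 5000, 5000)
 sage: m0 = get_memory_usage()      # resident memory (MB) before the product
 sage: C = A*B
 sage: get_memory_usage() - m0      # rough extra memory taken by the product
 }}}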



 > The default matrix multiplication over finite fields that is currently
 > provided in Sage is not competitive at all.

 Very much agreed!

 > Over prime fields, Linbox is still faster, but my implementation of
 > Strassen multiplication needs only one tenth of the memory.

 I hope to fix this next week at Sage Days, i.e. to wrap LinBox properly.

 > As far as I know, Linbox does not officially support non-prime fields,
 > in contrast to meataxe. And Meataxe is a lot faster than the current
 > implementation in Sage, except for GF(2).

 Clément told me that LinBox does support non-prime fields; we just never
 wrapped them in Sage.
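
 Until that wrapping happens, a quick way to see where the current generic
 code path stands over another non-prime field is something like the
 following (field and dimensions are just an illustration, no timings
 implied):

 {{{
 #!python
 sage: K.<a> = GF(9)                     # non-prime field, generic code path
 sage: A = random_matrix(K, 1000, 1000)
 sage: B = random_matrix(K, 1000, 1000)
 sage: %time C = A*B                     # compare against LinBox once wrapped
 }}}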

 > My meataxe wrapper could still be improved - I have no idea how to
 > optimize the use of L1 and L2 cache, and would need help to do so.

 Let's move that to e-mail.

 > Big disadvantage of my current meataxe wrapper: It only supports field
 > sizes less than 256.

 I don't think that's a big issue: a lot of applications only need small
 fields!
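
 For what it's worth, a small dispatcher along the following lines would
 keep such a wrapper usable despite the size limit; the method name
 _multiply_meataxe is hypothetical and only meant to illustrate the idea:

 {{{
 #!python
 def multiply_over_small_field(A, B):
     # Hypothetical dispatch: route small non-prime fields through a
     # meataxe-backed product, fall back to Sage's default otherwise.
     K = A.base_ring()
     if not K.is_prime_field() and K.order() < 256:
         return A._multiply_meataxe(B)   # hypothetical meataxe code path
     return A * B                        # default Sage multiplication
 }}}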

-- 
Ticket URL: <http://trac.sagemath.org/sage_trac/ticket/8096#comment:26>