Hi!

Recently, a "MeatAxe" package was added to Sage. Currently, it is
experimental --- based on my mistaken assumption that a new package
should first be experimental before it can be made optional.

So, I'm asking here whether MeatAxe should become "optional". I actually
wouldn't mind "standard" either. My reasoning is below.

[ ] Keep it experimental
[ ] Make it optional
[ ] Make it optional and consider making it standard

I'd also like to ask people to test the package on a range of platforms.
Previous versions of the package did work on Solaris and on bsd.math,
but I don't think that the latest version has been tested on those
platforms. I myself have no access to a wide range of platforms.

Cheers,
Simon

Purpose of upstream MeatAxe
===========================
MeatAxe is a set of programs for working with modular representations.
If I am not mistaken, it is also available as a GAP3 package.

Upstream is not very active; even the newest version of MeatAxe is
several years old. But I was told that they are considering publishing
yet another version.

Purpose of the meataxe package for Sage
=======================================

I am not so much interested in the "representation" part as in the
implementation of matrix arithmetic over finite non-prime fields
of odd characteristic and order < 255. The timings below show that Sage
is *ridiculously* slow in that setting. MeatAxe provides a major
improvement to Sage.

Of course, people interested in the representation part can use that as
well.

Changes to upstream
===================

I had to patch the upstream sources considerably.
- One patch prevents MeatAxe from writing multiplication tables into the
  current directory. Upstream claims that setting an appropriate
  environment variable is enough for that purpose, but a bug first needs
  to be fixed for that to actually work.
- One patch makes some rather trivial improvements to Gaussian
  elimination.
- One patch implements asymptotically fast matrix multiplication
  (Strassen-Winograd with a memory-efficient schedule); a sketch of one
  recursion level is shown after this list. It beats MeatAxe's schoolbook
  multiplication even for relatively small matrices.
- One patch makes the modular representation part of MeatAxe use the fast
  matrix multiplication, too.
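
For illustration, here is a sketch of a single recursion level of the
Winograd variant of Strassen multiplication (seven block products instead
of eight). This is *not* the actual memory-efficient schedule from my
patch, just a toy version in Sage syntax; `winograd_step` is a made-up
name, and square matrices of even dimension are assumed:

  def winograd_step(A, B):
      # One level of Strassen-Winograd: 7 block products instead of 8.
      # Sketch only: assumes square matrices of even dimension.
      n = A.nrows() // 2
      A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
      B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
      S1 = A21 + A22; S2 = S1 - A11; S3 = A11 - A21; S4 = A12 - S2
      T1 = B12 - B11; T2 = B22 - T1; T3 = B22 - B12; T4 = T2 - B21
      P1 = A11*B11; P2 = A12*B21; P3 = S4*B22; P4 = A22*T4
      P5 = S1*T1;   P6 = S2*T2;   P7 = S3*T3
      U2 = P1 + P6; U3 = U2 + P7; U4 = U2 + P5
      C11 = P1 + P2; C12 = U4 + P3; C21 = U3 - P4; C22 = U3 + P5
      return block_matrix([[C11, C12], [C21, C22]], subdivide=False)

  # quick sanity check:
  A = random_matrix(GF(9,'x'), 100, 100)
  B = random_matrix(GF(9,'x'), 100, 100)
  assert winograd_step(A, B) == A*B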

I did send my patches to upstream, but they didn't react (though they
did react to some other questions).

What does "sage -i meataxe" do?
===============================

The latest Sage versions contain an *optional* Cython wrapper for some
parts of MeatAxe; "optional" means that the wrapper is always present but
can only be used if one additionally does "sage -i meataxe".

So, if the package is installed and a matrix over a non-prime field of
odd order < 255 is created, then MeatAxe is used as the default backend.
In addition, some of MeatAxe's executables become available (I've never
used them, to be honest).
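
For illustration (the exact output may look slightly different), this is
roughly what one sees after installing the package:

  sage: M = random_matrix(GF(9,'x'), 5, 5)
  sage: type(M)
  <type 'sage.matrix.matrix_gfpn_dense.Matrix_gfpn_dense'>

Without the package, the same construction falls back to the generic
dense matrix class.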

Licence
=======

GPL version 2 or later.

Test coverage
=============

- A previous version of the MeatAxe wrapper is part of my optional group
  cohomology spkg (which is currently broken, as it is an old-style spkg).
  The cohomology package has a very extensive test suite and used to work
  on all supported platforms. Thus, the old version of MeatAxe did work
  there, too.
- MeatAxe has a test suite that is executed on request when installing
  the package. It includes tests of the modular representation
  functionality.
- The wrapper in SAGE_ROOT/src/sage/matrix/matrix_gfpn_dense.pyx of
  course has lots of doctests. These include consistency tests against
  several other matrix backends; a sketch of such a check is shown after
  this list.
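
For illustration, a consistency check in the spirit of those doctests
could look roughly like this (not a verbatim quote from the file; it
reuses the trick from the timing section below to force the generic
backend):

  sage: K = GF(9,'x')
  sage: M = random_matrix(K, 30, 30); N = random_matrix(K, 30, 30)
  sage: P = M*N          # computed by the MeatAxe backend
  sage: MS = M.parent()
  sage: MS._MatrixSpace__matrix_class = sage.matrix.matrix_generic_dense.Matrix_generic_dense
  sage: Mgen = MS(M._list()); Ngen = MS(N._list())
  sage: P._list() == (Mgen*Ngen)._list()
  True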

Timings
=======

I am considering 4000x4000 matrices over GF(9); the timings are from my
laptop, with Sage 7.1.beta1. They cover the MeatAxe backend, the matrix
backend of GAP, and the generic code that is currently used in Sage for
matrices over GF(p^n) with p odd and n > 1.

Setup (with meataxe installed in Sage)
--------------------------------------

  sage: M = random_matrix(GF(9,'x'),4000,4000)
  sage: K = M.base_ring()
  sage: D = dict((a,libgap(a)) for a in K)
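  sage: # (D caches the libgap image of each field element, so converting
  sage: #  all 16 million entries of M only needs dictionary lookups)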
  sage: gL = [[D[a] for a in row] for row in M]
  sage: gM = libgap.ImmutableMatrix(K, gL)

MeatAxe
-------

  sage: %time M2 = M*M
  CPU times: user 25.7 s, sys: 0 ns, total: 25.7 s
  Wall time: 25.7 s
  sage: %time Minv = M^(-1)
  CPU times: user 1min 1s, sys: 4 ms, total: 1min 1s
  Wall time: 1min 1s

GAP
---

  sage: %time gM2 = gM*gM
  CPU times: user 2min 6s, sys: 0 ns, total: 2min 6s
  Wall time: 2min 6s
  sage: %time gMinv = gM^(-1)
  CPU times: user 2min 48s, sys: 0 ns, total: 2min 48s
  Wall time: 2min 48s

Sage generic code
-----------------

  sage: MS = M.parent()
  sage: MS._MatrixSpace__matrix_class = sage.matrix.matrix_generic_dense.Matrix_generic_dense
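  sage: # (setting this private attribute makes MS() build generic dense
  sage: #  matrices instead of MeatAxe matrices)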
  sage: Mgen = MS(M._list())
  sage: %time Mgen2 = Mgen*Mgen

I tried to interrupt the computation with Ctrl-C after MORE THAN ONE
HOUR! Strangely, the interruption did not work, and I had to kill the
Sage process. I didn't try to compute Mgen^(-1).

That said, Magma would still be much faster than MeatAxe. And some
experimental code for matrix multiplication that Martin Albrecht and
I once created would be faster, too. But alas it has never been
finished...
