Not to mutate this thread, but comparing software with buildings
rarely takes requirement changes into account, like the changed use
of buildings, or, in Sage's case, tool changes like the Py2-to-3
transition. That transition seems to be stalling at the point where the
easy fixes are done and concerted effort is needed to tackle the hard
Hi all,
Anton Mellit reported a bug in flint's heuristic GCD code which was hit in
the wild.
Anton also supplied a working patch for this issue, which we applied to our
code.
After generating lots of examples quite similar to Anton's, we also
discovered another very subtle, very rare bug, which
On Tuesday, February 17, 2015 at 4:14:37 AM UTC-8, pdenapo wrote:
Writing something like
SR(0).function(x)
instead of
ConstantFunction(0)
is not what most mathematicians or students would do, I guess. Maybe
there is something to improve here.
Yes, looking at the documentation
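For readers wondering what semantics pdenapo is asking for: a constant function is simply a callable that ignores its arguments and always returns the same value. A minimal plain-Python sketch of that interface (an illustration only, not Sage's actual implementation):

```python
class ConstantFunction:
    """Sketch of a constant function: a callable that ignores its
    arguments and always returns the fixed value it was built with."""
    def __init__(self, value):
        self.value = value

    def __call__(self, *args, **kwargs):
        return self.value

zero = ConstantFunction(0)
assert zero(5) == 0 and zero() == 0
```

Sage's `SR(0).function(x)` builds a symbolic function with the same behaviour, but that spelling is arguably much less discoverable for a student.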
Hi!
I have vectors (say, nx1 matrices) over finite fields, and I have nxn
matrices by which I want to multiply the vectors.
If I am taking the default matrix implementations for fields GF(2),
GF(4), GF(5) and GF(25), the timings are considerably worse than when
taking my age-old wrapper for an
Hi Simon,
over GF(2) it helps to multiply from the left:
sage: A = random_matrix(GF(2), 1024, 1024)
sage: v = random_matrix(GF(2), 1024, 1)
sage: %timeit A*v
1 loops, best of 3: 85 µs per loop
sage: vT = v.transpose()
sage: AT = A.transpose()
sage: %timeit vT*AT
10 loops, best of 3: 15
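A note on why the trick is legitimate: (A*v)^T equals v^T*A^T, so both sides compute exactly the same entries and only the memory-access pattern and code path differ. A quick pure-Python check of the identity at a small size (illustrative only; Sage's GF(2) matrices are backed by M4RI, which is what makes one side faster):

```python
import random

random.seed(0)
n = 64
# random GF(2) matrix A and vector v
A = [[random.randrange(2) for _ in range(n)] for _ in range(n)]
v = [random.randrange(2) for _ in range(n)]

# A*v: entry i is the dot product of row i of A with v, mod 2
Av = [sum(A[i][j] * v[j] for j in range(n)) % 2 for i in range(n)]
# v^T * A^T: entry i is the dot product of v with row i of A --
# the same arithmetic, read out as a row vector
vTAT = [sum(v[j] * A[i][j] for j in range(n)) % 2 for i in range(n)]

assert Av == vTAT
```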
On Tuesday, February 17, 2015 at 9:06:30 AM UTC-8, Simon King wrote:
Hi!
I have vectors (say, nx1 matrices) over finite fields, and I have nxn
matrices by which I want to multiply the vectors.
If I am taking the default matrix implementations for fields GF(2),
GF(4), GF(5) and GF(25),
Hello everybody,
I am trying to compute a couple of things on polyhedra, and for that I
need to generate and test a lot of them, one at a time. While I never
store them in any way, my code stops after a while because of lack of
memory: would somebody know if this is caused by the same reason as for
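One pattern worth ruling out, sketched below in plain Python: a module-level cache (Sage keeps caches of parents and coercions, for example) that retains a reference to every object ever built, so objects you never store yourself still cannot be garbage collected. The cache here is artificial, just to show the symptom:

```python
import gc

_cache = []                     # stands in for a hidden global cache

def make_object():
    obj = list(range(100))      # stands in for building one polyhedron
    _cache.append(obj)          # the leak: a reference quietly survives
    return obj

gc.collect()
before = len(gc.get_objects())
for _ in range(1000):
    make_object()               # "never stored" by the caller...
gc.collect()
after = len(gc.get_objects())
# ...yet the live-object count grows by roughly one per call
assert after - before >= 900
```

If the count keeps climbing across iterations like this, the memory is being held by a reference you don't see, not by objects you forgot to delete.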
Hi Martin,
On 2015-02-17, Martin Albrecht martinralbre...@googlemail.com wrote:
over GF(2) it helps to multiply from the left:
Indeed. And GF(2) actually is a case where MeatAxe matrices are slower
than the Sage standard.
Over other fields (where MeatAxe matrices are faster, I just checked it
Clearly they are in the business of reinventing the wheel every year, with
each generation of students. Coincidentally, I read something today that
resonated with me on that issue:
Real Software Engineering is still in the future. There is nothing in
current SE that is like the construction of
On Tuesday, February 17, 2015 at 8:46:57 PM UTC+1, William wrote:
On Tue, Feb 17, 2015 at 2:24 PM, Volker Braun vbrau...@gmail.com wrote:
Clearly they are in the business of reinventing the wheel every year, with
Just curious -- who is they?
Well TFA was about the
You can deamortise matrix-matrix multiplication to get better than
quadratic time for matrix-vector multiplication if you allow
precomputation. Probably not relevant here though.
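To make the precomputation idea concrete: over GF(2) one can tabulate, for each block of t columns, the XOR of every subset of those columns; a subsequent matrix-vector product then costs only n/t table lookups instead of n column additions (the classic "Four Russians" trade-off). A hedged sketch with columns packed as Python integers (the names are mine, not from any library):

```python
def precompute_tables(cols, t):
    """cols[j] = column j of a GF(2) matrix, packed as an int
    (bit i set  <=>  entry in row i is 1).  For each block of t
    columns, tabulate the XOR of every subset of the block."""
    tables = []
    for b in range(0, len(cols), t):
        block = cols[b:b + t]
        tab = [0] * (1 << len(block))
        for m in range(1, len(tab)):
            low = m & -m                  # lowest set bit of m
            tab[m] = tab[m ^ low] ^ block[low.bit_length() - 1]
        tables.append(tab)
    return tables

def gf2_mat_vec(tables, v_bits, t):
    """A*v over GF(2) using one table lookup per column block."""
    acc = 0
    mask = (1 << t) - 1
    for i, tab in enumerate(tables):
        acc ^= tab[(v_bits >> (i * t)) & mask]
    return acc

# 4x4 example: columns of the identity matrix, so A*v == v
tables = precompute_tables([0b0001, 0b0010, 0b0100, 0b1000], 2)
assert gf2_mat_vec(tables, 0b1011, 2) == 0b1011
```

With t roughly log2(n) this gives the well-known O(n^2/log n) bound for GF(2) matrix-vector products, at the price of (n/t)*2^t precomputed table entries, which is why it only pays off when many vectors are multiplied by the same matrix.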
On Tuesday, 17 February 2015 18:40:40 UTC+1, Peter Bruin wrote:
Hi Simon,
I have vectors (say, nx1 matrices)
Seems relevant and possibly related to e.g. SMC or sagenb - we have already
had many discussions of this type in some of the education circles about
homework with Sage.
http://www.wired.com/2015/02/university-bans-github-homework-changes-mind/
Hi Simon,
I have vectors (say, nx1 matrices) over finite fields, and I have nxn
matrices by which I want to multiply the vectors.
If I am taking the default matrix implementations for fields GF(2),
GF(4), GF(5) and GF(25), the timings are considerably worse than when
taking my age-old
Although it is frustrating to have learned this only hours after a
release (which had at least 3 rc's first), please can we have a
bug-fix release as soon as Bill has patched Flint? No Sage code would
need to be changed, though adding a doctest with this example would be
a good idea.
John
On 17
On Tue, Feb 17, 2015 at 2:24 PM, Volker Braun vbraun.n...@gmail.com wrote:
Clearly they are in the business of reinventing the wheel every year, with
Just curious -- who is they?
each generation of students. Coincidentally, I read something today that
resonated with me on that issue:
Real
Someone (thanks Curtis Bright) has now reported that this passes on a 32
bit machine. So it looks like this patch release is good to go.
Bill.
On Tuesday, 17 February 2015 16:19:02 UTC+1, Bill Hart wrote:
Hi all,
Anton Mellit reported a bug in flint's heuristic GCD code which was hit in
Hi!
On 2015-02-17, Peter Bruin p.j.br...@math.leidenuniv.nl wrote:
Asymptotically fast multiplication doesn't seem relevant here.
Indeed, the classical algorithm is the optimal way to multiply a matrix
by a vector.
So I wonder: Is there perhaps some overhead killing the performance?
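For concreteness, the classical algorithm referred to above is just n dot products; every one of the n^2 matrix entries must be read at least once, so its n^2 multiplications cannot be beaten for a single dense product (without precomputation). A minimal sketch:

```python
def classical_mat_vec(A, v):
    """Classical O(n^2) matrix-vector product: output entry i is
    the dot product of row i of A with v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

assert classical_mat_vec([[1, 2], [3, 4]], [5, 6]) == [17, 39]
```

So if Sage's timings are much worse than MeatAxe's at these sizes, the gap is plausibly constant-factor overhead (coercion, Python-level dispatch) rather than anything algorithmic.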
For anyone who is interested in when this bug was likely to bite, the
*only* cases I could get it to trigger were when taking gcd(f, g) where
either f or g, but not both, was divisible by x+1 or x-1 (there are other
conditions on these polynomials, which I didn't quantify).
Typically one of the
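For readers unfamiliar with the heuristic: GCDHEU evaluates both polynomials at a single large integer xi, takes the integer gcd of the two values, and reads a candidate polynomial gcd back off the balanced base-xi digits of that integer; the candidate must then be verified (e.g. by trial division), and the evaluate/reconstruct corner cases are exactly where subtle bugs can hide. A hedged plain-Python sketch of the two core steps (an illustration of the idea, not flint's implementation):

```python
from math import gcd

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients low-degree first) at x."""
    r = 0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def reconstruct(h, xi):
    """Balanced base-xi digits of the integer h, read off as
    candidate gcd coefficients (low degree first)."""
    coeffs = []
    while h:
        d = h % xi
        if d > xi // 2:
            d -= xi               # balance the digit into (-xi/2, xi/2]
        coeffs.append(d)
        h = (h - d) // xi
    return coeffs

# f = (x+1)(x+2) = x^2 + 3x + 2,  g = (x+1)(x+3) = x^2 + 4x + 3
f, g, xi = [2, 3, 1], [3, 4, 1], 100
candidate = reconstruct(gcd(poly_eval(f, xi), poly_eval(g, xi)), xi)
# candidate is [1, 1], i.e. x + 1 -- and a real implementation must
# still confirm it by trial division before returning it
```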
I can confirm this is definitely a bug in flint. And the fix is absolutely
correct. Thanks to Anton Mellit for not only figuring out which library was
at fault, but actually tracing it to an individual line of C code!!
It is a rare corner case, but very serious.
I will issue a patch release
If ever there was a blocker, surely this is it!
John
-- Forwarded message --
From: Anton Mellit mel...@gmail.com
Date: 16 February 2015 at 23:54
Subject: [sage-support] Bug in polynomial GCD (FLINT)
To: sage-supp...@googlegroups.com
Here is the code:
R.<q> = QQ[]
X = 3*q^12 - 8*q^11
Many thanks Nils for your help.
I think it is important that Sage has consistent and easy-to-use
interfaces, and that functions do what most people would expect them to
do everywhere. Especially if we want it to be used in calculus
classes, etc.
Writing something like
SR(0).function(x)