Computing the rank over Z or Q will be slower than over Zp, but it need not be too slow.
Consider the following method, for example:
http://issac2009.kias.re.kr/Storjohann.pdf
http://www.cs.uwaterloo.ca/~astorjoh/issac09.pdf
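For a rough sense of the gap, here is a minimal Sage sketch (not Storjohann's method; the 200 x 200 size and the prime are arbitrary): the rank modulo a word-size prime is cheap and gives a lower bound on the rational rank, with equality unless the prime is unlucky.

# minimal Sage sketch: rank mod a word-size prime versus rank over Q
from sage.all import random_matrix, random_prime, ZZ, GF
A = random_matrix(ZZ, 200, 200)
p = random_prime(2**31)
r_p = A.change_ring(GF(p)).rank()   # fast: single-word arithmetic
r_Q = A.rank()                      # slower: coefficients grow over Z/Q
assert r_p <= r_Q                   # equal unless p is an unlucky prime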
I was surprised to still see a bit of this in practice on an 8x Core2 system
with the example from our paper:
f := (1 + x + y + 2*z^2 + 3*t^3 + 5*u^5)^12:
g := (1 + u + t + 2*z^2 + 3*y^3 + 5*x^5)^12:
What happens here is that we construct the result one term at a time, and
doing that requires
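For reference, the same product can be posed in Sage roughly as follows (a sketch only; the product has millions of terms, so expect it to take a while and use a fair amount of memory):

# sketch: the sparse benchmark product from the paper, posed in Sage
from sage.all import PolynomialRing, ZZ
import time
R = PolynomialRing(ZZ, 'x,y,z,t,u')
x, y, z, t, u = R.gens()
f = (1 + x + y + 2*z**2 + 3*t**3 + 5*u**5)**12
g = (1 + u + t + 2*z**2 + 3*y**3 + 5*x**5)**12
t0 = time.time()
h = f * g
print(len(h.dict()), 'terms in', time.time() - t0, 'seconds')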
I'm curious to know whether the performance of Maple 15 follows the same pattern as Maple 14.
Very similar. SDMP was refactored for Maple 15. Its size was cut in half,
it was made re-entrant to allow parallel polynomial algorithms, and we now
support Zp for multiprecision p. We added Kronecker
Congratulations, this looks really good.
On Oct 28, 4:20 am, luisfe lftab...@yahoo.es wrote:
I am afraid that computing with generic quotient rings will be slow
and will lead to various errors, especially in a case like this one,
where the ideal is not prime (you are looking for solutions in GF(4)).
Doesn't GF(4) construct a field with 4 elements?
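For the record, in Sage (a trivial sketch; the generator name 'a' is arbitrary):

# GF(4) is the field with four elements
from sage.all import GF
k = GF(4, 'a')
a = k.gen()
print(k.cardinality())   # 4
print(a**2 + a + 1)      # 0: a satisfies the Conway polynomial x^2 + x + 1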
On Oct 19, 9:09 am, kcrisman kcris...@gmail.com wrote:
Yes, let's keep in mind that notebook servers with fewer users are
usually very snappy and a great resource. It's not CPU power but the
number of simultaneous users, I think.
That suggests the bottleneck is disk I/O. Sage is quite large,
Maple 14 on iMac Core i5 2.66 GHz 8GB (64-bit):
f := x*y^3*z^2 + x^2*y^2*z + x*y^3*z + x*y^2*z^2 + y^3*z^2 + y^3*z +
2*y^2*z^2 + 2*x*y*z + y^2*z + y*z^2 + y^2 + 2*y*z + z;
curr := 1:
TIMER := time[real]():
# multiply by f repeatedly, printing cumulative wall-clock time after each step
for i from 1 to 100 do
  curr := expand(curr*f):
  lprint(i=time[real]()-TIMER):
end do:
I get that f^100 is a polynomial with 3721951 terms. The largest
coefficient belongs to x^44*y^181*z^131 and is
540685566063956356849231312581525435336487979299724512007837438591842230283354998840425635151449237483722428755963200
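The same check can be posed in Sage roughly as follows (a sketch only; with 3.7 million terms it needs patience and a few GB of memory):

# sketch: count the terms of f^100 and find its largest coefficient
from sage.all import PolynomialRing, ZZ
R = PolynomialRing(ZZ, 'x,y,z')
x, y, z = R.gens()
f = (x*y**3*z**2 + x**2*y**2*z + x*y**3*z + x*y**2*z**2 + y**3*z**2 + y**3*z
     + 2*y**2*z**2 + 2*x*y*z + y**2*z + y*z**2 + y**2 + 2*y*z + z)
h = f**100
d = h.dict()                                    # exponent tuple -> coefficient
print(len(d))                                   # expected: 3721951
e, c = max(d.items(), key=lambda kv: abs(kv[1]))
print(e, c)                                     # exponent and value of the largest coefficient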
On May 15, 6:21 pm, Bill Hart goodwillh...@googlemail.com wrote:
I have the right number of terms, but not quite the right coefficient,
as of yet. This is a good test to help me dig out the bug. :-)
Do you have a division routine? I divided f^100 by f to check the
result. This is one way I
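A scaled-down version of that consistency check in Sage might look like this (a sketch; it uses the same f as the benchmark above but only f^10, and works over Q so that multivariate quo_rem applies):

# sketch: verify a computed power by exact division
from sage.all import PolynomialRing, QQ
R = PolynomialRing(QQ, 'x,y,z')
x, y, z = R.gens()
f = (x*y**3*z**2 + x**2*y**2*z + x*y**3*z + x*y**2*z**2 + y**3*z**2 + y**3*z
     + 2*y**2*z**2 + 2*x*y*z + y**2*z + y*z**2 + y**2 + 2*y*z + z)
h = f**10
q, r = h.quo_rem(f)
assert r == 0 and q == f**9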
On May 14, 9:54 am, Bill Hart goodwillh...@googlemail.com wrote:
On the other hand, I am unable to replicate the very sparse benchmark
unless I assume the result will fit in 2 limbs and allocate all the
output mpz's in advance, etc. Then I can basically replicate it. If I
use my generic no
On May 13, 2:45 am, parisse bernard.pari...@ujf-grenoble.fr wrote:
In my own experience, coding with a univariate polynomial is not
efficient, especially if the polynomial is sparse.
There must be some kind of inefficiency. If you use word operations
for all monomial operations then it should
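The usual trick is to pack the whole exponent vector into one machine word, so that multiplying monomials is a single integer addition and comparing them is a single integer comparison. A Python sketch (the 16-bit field width and the assumption that nothing overflows are illustrative only):

BITS = 16   # bits per exponent; assumes every exponent fits and the total fits a word

def pack(exps):
    # (e1, e2, ..., en) -> one integer with the exponents in adjacent bit fields
    word = 0
    for e in exps:
        word = (word << BITS) | e
    return word

def unpack(word, nvars):
    mask = (1 << BITS) - 1
    exps = []
    for _ in range(nvars):
        exps.append(word & mask)
        word >>= BITS
    return tuple(reversed(exps))

# multiplying monomials is now a single addition of packed words
assert pack((1, 2, 3)) + pack((4, 5, 6)) == pack((5, 7, 9))
assert unpack(pack((5, 7, 9)), 3) == (5, 7, 9)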
Since this is turning into an all-purpose post, I'm going to crosspost
to sci.math.symbolic. I want to start by saying that the heap method
should be called Johnson's algorithm. See
http://portal.acm.org/citation.cfm?id=1086847
We've made contributions to improve it, but our actual work has
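For readers following along, here is a toy univariate version of the heap idea in Python (a sketch only; it leaves out the chaining, exponent packing, and memory management that real implementations rely on):

import heapq

def heap_mul(f, g):
    # f, g: sparse polynomials as lists of (exponent, coefficient) pairs,
    # with distinct exponents sorted in increasing order
    if not f or not g:
        return []
    result = []
    # one heap entry per term of f, each pointing at its current partner term of g
    heap = [(f[i][0] + g[0][0], i, 0) for i in range(len(f))]
    heapq.heapify(heap)
    while heap:
        e = heap[0][0]
        c = 0
        # merge every product that lands on exponent e
        while heap and heap[0][0] == e:
            _, i, j = heapq.heappop(heap)
            c += f[i][1] * g[j][1]
            if j + 1 < len(g):
                heapq.heappush(heap, (f[i][0] + g[j + 1][0], i, j + 1))
        if c:
            result.append((e, c))
    return result

# (1 + x) * (1 + x) = 1 + 2x + x^2
assert heap_mul([(0, 1), (1, 1)], [(0, 1), (1, 1)]) == [(0, 1), (1, 2), (2, 1)]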
For what it's worth, PowerPC is totally obsolete and there were not
that many 32-bit only Intel Macs shipped before they switched to the
Core2. I think you would do fine supporting only 64-bit x86 on 10.5
and 10.6. That should cover everything back to Fall 2006, i.e. 0-4
year old machines, and
On Jan 19, 7:21 pm, Jonathan Bober jwbo...@gmail.com wrote:
Should PARI always be compiled with -fPIC? (Should I really be asking
this question to PARI developers who decided not to use PIC?) I don't
know much about this, but apparently -fPIC might cause some slowdown on
some systems. It seems
On Aug 5, 2:35 pm, mirko mirko.vison...@gmail.com wrote:
I would be interested if Cylindrical Algebraic Decomposition is
implemented in Sage?
QEPCAD
Just a guess, but is Sage trying to find a solution over the integers
(not the rationals)? That would take forever. You should try it
over Q. For solving over the rationals, Chinese remaindering is not
the best approach. It can be very fast for small matrices with small
solutions, but for
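To make the suggestion concrete, solving over the rationals in Sage looks like this (a sketch; the sizes are arbitrary and this says nothing about which algorithm is used internally):

# sketch: solve A x = b over Q rather than over Z
from sage.all import random_matrix, random_vector, ZZ, QQ
A = random_matrix(ZZ, 100, 100)        # almost surely nonsingular
b = random_vector(ZZ, 100)
x = A.change_ring(QQ).solve_right(b)   # rational entries, denominators allowed
assert A * x == b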
If you were to print out the source code and distribute it in a book,
it should not change the conclusions of copyright law. People tend to
get very caught up in technical theories, and they often view the law
the way they view software, but a judge will do a basic sanity
check. If you
On Apr 29, 4:27 am, Martin Albrecht m...@informatik.uni-bremen.de
wrote:
Yes, sparse LA is definitely the main obstacle and yes I'm trying to
implement it myself. I know of the existence of M4RI but I'm interested
in larger fields and also in large systems that require sparse LA.
My
On Apr 29, 4:39 pm, Franco Saliola sali...@gmail.com wrote:
I wonder if they fixed the 'numbpart' function.
It looks like they did.
On Apr 28, 4:10 pm, William Stein wst...@gmail.com wrote:
Maple 13 was released today, I think. The new features page is here:
http://www.maplesoft.com/products/maple/new_features/full_list.aspx
Looking it over, the only overlap with Sage (current or in development
features) seems to be
On Mar 29, 12:49 am, Ondrej Certik ond...@certik.cz wrote:
I just tried the following code on several linuxes (Debian, Ubuntu,
Gentoo, Red Hat, OpenSUSE) and on OS X 10.5 Intel and it seems to just
work everywhere:
/* Linux */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* count the CPUs in this process's affinity mask */
static inline int num_processors(void)
{
    unsigned int bit;
    int np = 0;
    cpu_set_t aff;
    memset(&aff, 0, sizeof(aff));
    sched_getaffinity(0, sizeof(aff), &aff);
    for (bit = 0; bit < 8 * sizeof(aff); bit++)
        if (CPU_ISSET(bit, &aff))
            np++;
    return np;
}

int main(void)
{
    int ncpus = num_processors();
    printf("%d\n", ncpus);
    return 0;
}
On Feb 1, 6:18 pm, William Stein wst...@gmail.com wrote:
with(linalg);
A := LinearAlgebra:-RandomMatrix(200);
det(A);
and it takes 30 seconds.
I know it was deprecated in Maple 6, but isn't it odd that, some six years
later, Maple doesn't even print a warning or anything that one is
I just want to point out that Maple's linear algebra is not quite as
bad as the old Linbox timings imply. The linalg package has been obsolete
for some time now.
-bash-3.2$ maple
Maple 12 (X86 64 LINUX)
Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2008
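For the Sage/Linbox side of that comparison, the corresponding computation is roughly the following (a sketch only; the 200 x 200 size mirrors the benchmark above and timings depend on the build):

# sketch: the equivalent 200 x 200 integer determinant in Sage
from sage.all import random_matrix, ZZ
A = random_matrix(ZZ, 200, 200)
d = A.det()
print(d.ndigits(), "digits")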
On Feb 1, 12:46 pm, William Stein wst...@gmail.com wrote:
On Sun, Feb 1, 2009 at 11:21 AM, Roman Pearce rpear...@gmail.com wrote:
I just want to point out that Maple's linear algebra is not quite as
bad as the old Linbox timings imply. The linalg package has been obsolete
for some time now
On Jan 25, 1:31 am, parisse bernard.pari...@ujf-grenoble.fr wrote:
I also implemented parallel multiplication in giac, but with the
degree of the first variable to separate threads (that's easier to
implement than rebuilding one heap from several heaps). This work also
on distributed
On Jan 26, 7:22 am, parisse bernard.pari...@ujf-grenoble.fr wrote:
For dense problems the answer is tentatively yes; however, you
can also shrink the size of the heap. See the chaining section
in http://www.cecm.sfu.ca/~rpearcea/sdmp/sdmp_div.pdf
The details of what may be faster or not
Following up, here is the first version of the paper on parallel
sparse polynomial multiplication:
http://www.cecm.sfu.ca/~rpearcea/sdmp/sdmp_pmul.pdf
Thank you for the use of the machine. We did acknowledge the NSF
grant. Does anyone here feel like discussing high performance
parallel
but the paper should be very helpful to anyone trying to implement something.
Roman Pearce
CECM/SFU
Wow you guys must have a lot of money :)
Thanks!
On Jan 21, 10:13 pm, William Stein wst...@gmail.com wrote:
On Wed, Jan 21, 2009 at 9:36 PM, Roman Pearce rpear...@gmail.com wrote:
Let me start by thanking William Stein for making this machine
available. I would like to run a parallel
I liked the abstract2 version better. It had a better overview of the
project :)
BTW, asking for contributors is the surest way to get zero
contributors. You should invite people to try Sage (online) and to
download it so it runs faster.
Also, I thought of another great reason why they would like Sage.
Many of these people write their own libraries. Then you have to
write
On Apr 29, 11:57 pm, William Stein [EMAIL PROTECTED] wrote:
I'm giving a plenary talk at ISSAC in Linz, Austria this summer. I'm supposed
to write a 2-page abstract/paper for the proceedings. I just wrote
something:
http://sage.math.washington.edu/home/was/tmp/abstract.pdf
I think what
On Apr 30, 8:09 am, William Stein [EMAIL PROTECTED] wrote:
The open source philosophy is the entire reason for the
existence of Sage.
That may be true, but it won't sell. There have been other open
source systems before Sage (Axiom, Maxima, ...) and very good
specialized systems (Singular,
On Apr 1, 11:36 pm, Michael Brickenstein [EMAIL PROTECTED] wrote:
I don't find it very impressive to post some benchmark for just one
example.
There are 4 benchmarks in
http://www.cecm.sfu.ca/~rpearcea/sdmp/2008_04_01/benchmarks.txt
46376 x 46376 = 635376 terms (dense, 4 variables)
26599 x
On Mar 31, 10:55 pm, William Stein [EMAIL PROTECTED] wrote:
On Mon, Mar 31, 2008 at 6:48 PM, Roman Pearce [EMAIL PROTECTED] wrote:
You need Algorithms for Computer Algebra by Geddes, Czapor, and
Labahn:
Chapter 5: Chinese Remainder Theorem
Chapter 6: Newton's Iteration and Hensel
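The CRT primitive those chapters build on is already exposed in Sage, for what it's worth (a trivial sketch):

# sketch: the unique x mod 35 with x = 2 (mod 5) and x = 3 (mod 7)
from sage.all import crt
print(crt([2, 3], [5, 7]))   # 17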
Please excuse a (possibly naive) suggestion, but why not use Maxima
for multivariate gcds and factorization? I looked at the source code
and it appears to do Hensel lifting for both. That is the correct
algorithm that Sage appears to need. I'm not sure how to run it mod p
or over GF(p^q), but
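Calling Maxima from Sage for such a task looks roughly like this (a sketch; the printed form of the factorization may differ from the mathematical one given in the comment):

# sketch: a multivariate factorization done by Maxima through Sage's interface
from sage.all import maxima
print(maxima('factor(x^4 - y^4)'))   # mathematically (x - y)*(x + y)*(x^2 + y^2)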
On Feb 21, 1:18 pm, William Stein [EMAIL PROTECTED] wrote:
FYI I'll be an invited speaker at ISSAC 2008 in Linz, Austria in July:
http://www.risc.uni-linz.ac.at/about/conferences/issac2008/
... I don't have any idea what to expect
since I've never been to ISSAC before.
That's very good
On Feb 18, 6:21 am, Bill Hart [EMAIL PROTECTED] wrote:
Laurent Bernardin and Michael B. Monagan.
Efficient Multivariate Factorization Over Finite Fields.
If Sage has or can get fast LLL you should implement the new algorithm
of Mark van Hoeij.
However, I don't know of any new (or old) algorithm by Mark van Hoeij
that addresses the problem of Efficient Multivariate Factorization Over
Finite Fields using LLL. Could you please clarify?
I am aware of Mark's algorithms for univariate polynomial factorization
over global fields using