You can access the development version of FLINT from its svn
repository:

https://flint.svn.sourceforge.net/svnroot/flint/trunk

To be honest, though, I had to hack FLINT a little because of some
bugs this question exposed. I hadn't yet implemented aliasing in all
the FLINT multiplication functions, so I substituted a suboptimal
multiplication function for the optimal one.

FLINT is also not ready for prime time. It may not even compile on
your machine! But if it does, run make fmpz_poly-test. The final test
it runs exercises the polynomial powering function; the test code
itself is at the bottom of the file fmpz_poly-test.c.

I'll commit the new code and test code within the next half hour or
so.

I'll fix the aliasing problems soonish, but that will take longer,
maybe a couple of days. The time difference is not noticeable though.
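
For anyone following along, the binary (square-and-multiply) powering
being compared throughout this thread amounts to something like the
following. This is a generic Python sketch for illustration, not the
FLINT code:

```python
def binary_power(base, n):
    """Compute base**n by repeated squaring (square-and-multiply).

    Works for any object supporting * (integers, polynomials, ...).
    Generic sketch only -- not FLINT's implementation.
    """
    result = 1
    while n:
        if n & 1:               # low bit of n set: fold base into the result
            result = result * base
        base = base * base      # square for the next bit of n
        n >>= 1
    return result
```

For integers this agrees with Python's built-in pow; the same loop
works verbatim on polynomial objects, which is essentially what the
Sage for-loop timings below are measuring.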

Bill.

On 28 Aug, 01:13, [EMAIL PROTECTED] wrote:
> Wow, that's fast!  Where can I download this?
>
> On Mon, 27 Aug 2007, Bill Hart wrote:
>
> > I wrote up a ridiculously naive polynomial powering function in FLINT.
> > Here is a basic FLINT program for computing the above powers:
>
> >    fmpz_poly_t poly, power;
> >    fmpz_poly_init(power);
> >    fmpz_poly_init(poly);
> >    fmpz_poly_set_coeff_ui(poly, 0, 1);
> >    fmpz_poly_set_coeff_ui(poly, 1, 1);
> >    fmpz_poly_power(power, poly, (1UL<<13));
>
> > Here are the times:
>
> > real    0m1.190s
> > user    0m0.960s
> > sys     0m0.232s
>
> > If I replace 2^13 with 2^13-1 I get:
>
> > real    0m0.758s
> > user    0m0.628s
> > sys     0m0.128s
>
> > I'm sure there's plenty we can do to speed that up. For a start the
> > system time could trivially be all but wiped out by allocating all the
> > required memory up front. There's probably also some FFT caching I
> > could do but haven't.
>
> > Polynomial evaluation should probably be used beyond some point, but
> > we haven't implemented that yet.
>
> > Bill.
>
> > On 27 Aug, 18:32, [EMAIL PROTECTED] wrote:
> >> I was recently contacted by Neill Clift, who is arguably the foremost 
> >> expert on addition chains.  Though he's most concerned with computing 
> >> minimal addition chains, which aren't always optimal and can take a 
> >> ridiculous amount of time to compute, I believe that some of the work that 
> >> he's done can be used to construct a rather generic addition chain 
> >> package.  I don't expect to beat numeric exponentiation, but polynomial 
> >> exponentiation seems oddly slow.
>
> >> Already, there seems to be a cutoff point where my work on addition chains 
> >> can be used to improve the speed of exponentiation.
>
> >> (x^n)(x+1) constructs the polynomial x^n and evaluates it at x+1 (this 
> >> uses my polynomial evaluation code)
>
> >> The for loop performs binary exponentiation (I pick 2^13 and 2^13-1 to 
> >> make this easy).  This puzzles me -- binary exponentiation in python 
> >> currently beats the pants off of whatever is getting used for the 
> >> polynomials.  What gives?
>
> >> sage: x = polygen(ZZ)
> >> sage: n = 2^13
>
> >> sage: time a = (x+1)^n
> >> CPU time: 16.82 s,  Wall time: 16.82 s
>
> >> sage: time a = (x^n)(x+1)
> >> CPU time: 2.47 s,  Wall time: 2.47 s
>
> >> sage: %time
> >> sage: z = x+1
> >> sage: for i in range(13):
> >> ...       z = z*z
> >> CPU time: 2.46 s,  Wall time: 2.46 s
>
> >> sage: n = 2^13-1
>
> >> sage: time a = (x+1)^n
> >> CPU time: 3.66 s,  Wall time: 3.66 s
>
> >> sage: time a = (x^n)(x+1)
> >> CPU time: 0.91 s,  Wall time: 0.91 s
>
> >> sage: %time
> >> sage: z = x+1
> >> sage: y = z
> >> sage: for i in range(12):
> >> ...       z = z*z
> >> ...       y *= z
> >> CPU time: 1.61 s,  Wall time: 1.61 s
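
P.S. The loop in that last quoted timing implements the classic chain
for exponents of the form 2^k - 1: after i squarings z holds
z0^(2^i), and y accumulates z0^(2^(i+1) - 1), walking the exponent
chain 1, 3, 7, ..., 2^k - 1. A quick check of the exponent
bookkeeping in plain Python (a sketch, independent of Sage):

```python
def power_2k_minus_1(z, k):
    """Compute z**(2**k - 1) with k-1 squarings and k-1 multiplies,
    mirroring the quoted Sage loop (exponents 1, 3, 7, ..., 2**k - 1)."""
    y = z
    for _ in range(k - 1):
        z = z * z   # z is z0**(2**i) after i squarings
        y = y * z   # y is z0**(2**(i+1) - 1)
    return y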


--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/
-~----------~----~----~----~------~----~------~--~---