[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Joel B. Mohler

On Sunday 17 August 2008 01:05:53 pm John Cremona wrote:
> I have needed the following.  g(x) is a univariate polynomial which I
> know to be a square, and I want its square root.  g.factor() does too
> much, as does g.squarefree_decomposition().  I can get the sqrt via
> g.gcd(g.derivative()), which works in my case since I know that the
> sqrt is itself square-free.
>
> Is there a better way, and would there be a demand for adding an
> implementation?
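
A quick illustration of the trick: if g = s^2 with s square-free, then
gcd(g, g') = gcd(s^2, 2*s*s') = s up to units, e.g.

sage: R.<x> = ZZ[]
sage: s = (x^2 + x + 1)*(x - 3)
sage: g = s^2
sage: g.gcd(g.derivative()) == s
True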

I've wanted a sqrt for polynomials as well (actually I wanted it for
multivariate polynomials).  Your request caught my imagination, so I whipped
one up.  I suppose the most sensible algorithm is just to solve the system of
equations that you get by equating corresponding coefficients -- there's an
obvious order in which to solve these equations, so each equation involves
only one new unknown.
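
Here is a minimal sketch of that idea in plain Python (not the actual
patch; it assumes an exact square over ZZ with a square leading
coefficient, and it leaves the final check of the low-order equations
as a comment):

from fractions import Fraction
from math import isqrt

def poly_sqrt(c):
    # c: coefficients of f, lowest degree first, with deg f = 2n.
    # Solve for the coefficients of g (with g^2 = f) from the top down;
    # each new equation involves exactly one new unknown.
    m = len(c) - 1
    if m % 2:
        raise ValueError("odd degree: not a square")
    n = m // 2
    r = isqrt(c[m])                     # a_n^2 = c_{2n}
    if r * r != c[m]:
        raise ValueError("leading coefficient not a square")
    a = [Fraction(0)] * (n + 1)
    a[n] = Fraction(r)
    for k in range(1, n + 1):
        # c_{2n-k} = 2*a_n*a_{n-k} + sum_{s=1}^{k-1} a_{n-s}*a_{n-k+s}
        known = sum(a[n - s] * a[n - k + s] for s in range(1, k))
        a[n - k] = (c[m - k] - known) / (2 * a[n])
    # a real implementation would now check that the remaining n
    # equations (coefficients of x^0 .. x^(n-1)) are also satisfied
    return a

For example, poly_sqrt([25, 30, 29, 12, 4]) returns [5, 3, 2], i.e.
2*x^2 + 3*x + 5.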

Some observations:
1)  Over ZZ[x] and GF(p)[x], computing
        gcd(f, f.derivative())
is really incredibly fast.  You'd have to get very aggressive with Cython to
make my implementation faster than the gcd (but my implementation is much
more reliable -- e.g. it handles the square of a square correctly).  Even up
to degree 1000, the gcd is competitive, but I think there has to be a
cross-over somewhere.  Note, though, that this obvious algorithm is quadratic
in the degree, and I don't know the run-time of the gcd.

2)  gcd of things in QQ[x] is very slow:
sage: f=ZZ[x].random_element(50)^2
sage: %time _=gcd(f,f.derivative())
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.00 s
sage: f=QQ[x].random_element(50)^2
sage: %time _=gcd(f,f.derivative())
CPU times: user 18.41 s, sys: 0.07 s, total: 18.48 s
Wall time: 18.49 s
It should be much much faster to pull out a denominator and do the gcd over 
ZZ[x] and possibly do some book-keeping.  Or, is there something I'm missing?
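
Something like this sketch, say (qq_gcd_via_zz is a made-up name, not an
existing Sage function):

def qq_gcd_via_zz(f, g):
    # f, g in QQ[x]: clear denominators, take the fast FLINT gcd over
    # ZZ[x], and return the monic result back in QQ[x] (the gcd in
    # QQ[x] is only defined up to units anyway).
    R = f.parent()
    ZX = R.change_ring(ZZ)
    h = ZX(f * f.denominator()).gcd(ZX(g * g.denominator()))
    return R(h).monic()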

3) The '//' operator is not implemented for GF(p)
http://trac.sagemath.org/sage_trac/ticket/3890

4)  The 'extend=False' parameter for CyclotomicField(4) is not implemented.
http://trac.sagemath.org/sage_trac/ticket/3889

My sqrt implementation is much faster than the gcd method over number fields 
and QQ.  Fixing QQ gcd will change that considerably though.

Here's a trac ticket:
http://trac.sagemath.org/sage_trac/ticket/3891
and preliminary implementation (actual patch will probably come tomorrow):
http://kiwistrawberry.us/poly-sqrt-bench.py

--
Joel




[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Bill Hart

Oh of course, how could I forget.  There's also functional
decomposition.  But I've not investigated the asymptotics.  I'm sure
Paul Zimmermann knows, since he's been studying this in relation to
factoring polynomials, I think.  I expect the asymptotics to be bad in
comparison with power series methods, though.  But as it only depends
on computing some gcds, it might be viable in some regime.

Bill.

P.S.: 'here' -> 'hear' in my last post.

On 18 Aug, 16:44, Bill Hart [EMAIL PROTECTED] wrote:
> Assuming FLINT is in fact being used for the GCD in Z[x], the
> implementation is only fast up to degree about 250.  It depends on
> your definition of fast, though. :-) In a later release of FLINT, GCD
> will be asymptotically faster, and certainly by degree 1000 will be
> many times faster than it currently is.
>
> Unfortunately I see no trivial way to add a fast square root to FLINT,
> or I would knock it up.  However, I have added it to the list of
> features, as I anticipate further need for it.  So FLINT will implement
> it for polys over Z and Z/pZ eventually.  It's now trac ticket #71 for
> FLINT.  I'll make sure SAGE developers hear about it when it is done.
>
> Almost everything over Q should probably be converted to a problem
> over Z.  I haven't seen any polynomial problems over Q which should not
> be dealt with this way so far, but I suppose they may exist.
>
> If I'm not mistaken, the linear algebra method is nearly cubic
> complexity.  The GCD method should be subquadratic (though only just).
> The method using power series has the same complexity as Karatsuba
> multiplication of polynomials, in terms of ring operations (see papers
> of Paul Zimmermann), and the constant can be made very low by use of
> the recursive middle product algorithm.
>
> I guess it should be possible to use the FFT to get something like
> asymptotic complexity O(n log n log log n).  Certainly this can be
> achieved if one exponentiates the logarithm.  The question is what the
> constant would be in that case.  In terms of Karatsuba multiplication
> the constant is less than 1, but for FFT it's likely to be 3/2, 5/2 or
> 2 or something like that, but I don't have a handy reference for this.
> Dan Bernstein's famous multapps paper does give the asymptotic
> complexity, though, I think.
>
> Note that if one requires the square root of an exact square of
> polynomials, if one uses the power series method, one only needs to
> work to a precision about half the length of the original polynomial,
> since that is how large the square root will be.  I think so anyway,
> unless I'm missing something silly here (altogether likely).


[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Bill Hart

Doh, there's another simple way. Maybe Joel already does this. Simply
substitute a large prime or sufficiently high power of 2 and take the
square root of the resulting integer. Read off the square root of the
polynomial from the p-adic expansion of the square root. This should
be asymptotically fast and has the advantage of being easy to
implement. It's not necessarily as fast as the power series way, but
should be relatively quick in practice. Guaranteeing you picked the
prime large enough without checking the result at the end is probably
hard though.
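
For what it's worth, a rough sketch of that idea over ZZ in plain Python,
using a power of two as the evaluation point (it assumes the square root's
coefficients are non-negative and below 2^k -- choosing k safely up front
is exactly the hard part):

from math import isqrt

def sqrt_by_evaluation(c, k=64):
    # c: coefficients of f in ZZ[x], lowest degree first.
    B = 1 << k
    val = sum(ci * B**i for i, ci in enumerate(c))   # f(2^k)
    r = isqrt(val)                                   # integer square root
    if r * r != val:
        raise ValueError("f(2^k) is not a square, so f is not a square")
    digits = []                                      # coefficients of sqrt(f),
    while r:                                         # read off base 2^k
        digits.append(r % B)
        r //= B
    return digits

E.g. sqrt_by_evaluation([1, 2, 1]) gives [1, 1], i.e. x + 1.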

Sorry Joel I didn't read your patch yet, but given that this is a
method you championed for polynomial factorisation I'm guessing you
already thought of it.

Bill.


[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Joel B. Mohler

On Monday 18 August 2008 11:44:11 am Bill Hart wrote:
> Assuming FLINT is in fact being used for the GCD in Z[x], the
> implementation is only fast up to degree about 250.  It depends on
> your definition of fast, though. :-) In a later release of FLINT, GCD
> will be asymptotically faster, and certainly by degree 1000 will be
> many times faster than it currently is.

By "fast", I meant faster than my naive implementation of the sqrt.  The ZZ
gcd certainly seems fast enough for any problem I'm likely to have in the
next 6 months :).  The Sage I'm using (3.0.6) uses the FLINT
fmpz_poly_gcd.

> If I'm not mistaken, the linear algebra method is nearly cubic
> complexity.  The GCD method should be subquadratic (though only just).
> The method using power series has the same complexity as Karatsuba
> multiplication of polynomials, in terms of ring operations (see papers
> of Paul Zimmermann), and the constant can be made very low by use of
> the recursive middle product algorithm.

My algorithm is most definitely quadratic (as I had already expected from the
simple nested for-loops).  Doubling the degree produces a nearly perfect
quadrupling of run-time.  Note that it isn't a linear system in the
coefficients, but the structure is so obvious that it's easy to find one
unknown at a time.  I'm not sure what you are calling the power series
method -- I wonder if it may be what I'm doing.

> Note that if one requires the square root of an exact square of
> polynomials, if one uses the power series method, one only needs to
> work to a precision about half the length of the original polynomial
> since that is how large the square root will be.  I think so anyway,
> unless I'm missing something silly here (altogether likely).

That makes sense.  If you know that you have a perfect square you only need to 
think about half the coefficients of the square input.  Of course, if you 
want to raise an error when something is not a perfect square, you'll need to 
test all the coefficients by comparing with the square of your tentative 
square root.

--
Joel




[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Joel B. Mohler

On Monday 18 August 2008 12:06:06 pm Bill Hart wrote:
> Doh, there's another simple way.  Maybe Joel already does this.  Simply
> substitute a large prime or sufficiently high power of 2 and take the
> square root of the resulting integer.  Read off the square root of the
> polynomial from the p-adic expansion of the square root.  This should
> be asymptotically fast and has the advantage of being easy to
> implement.  It's not necessarily as fast as the power series way, but
> should be relatively quick in practice.  Guaranteeing you picked the
> prime large enough without checking the result at the end is probably
> hard though.
>
> Sorry Joel, I didn't read your patch yet, but given that this is a
> method you championed for polynomial factorisation, I'm guessing you
> already thought of it.

Yep, I should've thought of that.  No, I didn't do it that way.  One strike
against that method is that it's not so general across base rings (unless
there is another trick I'm missing).

My implementation works unchanged over number fields, finite fields, QQ, and
ZZ, modulo some bugs I found and trac'ed in various base rings.

--
Joel




[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Bill Hart



On 18 Aug, 17:39, Joel B. Mohler [EMAIL PROTECTED] wrote:
> On Monday 18 August 2008 11:44:11 am Bill Hart wrote:
>
> > Assuming FLINT is in fact being used for the GCD in Z[x], the
> > implementation is only fast up to degree about 250.  It depends on
> > your definition of fast, though. :-) In a later release of FLINT, GCD
> > will be asymptotically faster, and certainly by degree 1000 will be
> > many times faster than it currently is.
>
> By "fast", I meant faster than my naive implementation of the sqrt.  The ZZ
> gcd certainly seems fast enough for any problem I'm likely to have in the
> next 6 months :).  The Sage I'm using (3.0.6) uses the FLINT
> fmpz_poly_gcd.

OK. That is currently quadratic, but with a low constant. :-)


> > If I'm not mistaken, the linear algebra method is nearly cubic
> > complexity.  The GCD method should be subquadratic (though only just).
> > The method using power series has the same complexity as Karatsuba
> > multiplication of polynomials, in terms of ring operations (see papers
> > of Paul Zimmermann), and the constant can be made very low by use of
> > the recursive middle product algorithm.
>
> My algorithm is most definitely quadratic (as I had already expected from the
> simple nested for-loops).

Yes I've had a look now, and I think you are right.

> Doubling the degree produces a nearly perfect
> quadrupling of run-time.  Note that it isn't a linear system in the
> coefficients, but the structure is so obvious that it's easy to find one
> unknown at a time.  I'm not sure what you are calling the power series
> method -- I wonder if it may be what I'm doing.

No, it definitely can't be, since the latter has much better asymptotic
complexity.  I'm simply referring to thinking of the polynomial as a
power series, then essentially using Newton iteration to compute the
square root, though there are many variants of this basic idea which
can cut the implied constant by a very considerable factor.

Another method is to take the logarithm of the power series, multiply
by 0.5 and then exponentiate.
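
A minimal sketch of the Newton variant in Sage (the textbook
g <- (g + f/g)/2 form, not the division-free middle-product version;
series_sqrt is an illustrative name):

def series_sqrt(f, prec):
    # f: a power series with constant term 1; the approximation g is
    # correct to precision k, which doubles each pass until it hits prec.
    R = f.parent()
    g = R(1)
    k = 1
    while k < prec:
        k = min(2 * k, prec)
        g = (g + f.add_bigoh(k) / g) / 2
    return g.add_bigoh(prec)

The log/exp route is just as short on a series with constant term 1:
something like (f.log()/2).exp().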


> > Note that if one requires the square root of an exact square of
> > polynomials, if one uses the power series method, one only needs to
> > work to a precision about half the length of the original polynomial
> > since that is how large the square root will be.  I think so anyway,
> > unless I'm missing something silly here (altogether likely).
>
> That makes sense.  If you know that you have a perfect square you only need to
> think about half the coefficients of the square input.  Of course, if you
> want to raise an error when something is not a perfect square, you'll need to
> test all the coefficients by comparing with the square of your tentative
> square root.


It depends on how you implement it. As soon as you know one of the
coefficients is wrong for it to be a square, you can bail out and say
it isn't. It may not be necessary to jack the precision right up to n
for that. E.g. suppose n is just below a power of 2 and you use Newton
iteration to double the number of coefficients each time, you may
already know by the time you have just passed half way that you don't
have a square. For that matter, you may know at any time.

It should also be possible to use a multimodular method to check it is
a square root in time something like n log(nm), where n is the degree
and m is the largest coefficient.  This has a significant chance of
bailing out early if it isn't a square root, and would certainly be
faster than multiplying out in full.  I suspect O(1) primes are needed
on average to check that it isn't a square, and so this gives an
asymptotic improvement over multimodular multiplication, which takes
log m primes.
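
A sketch of that check in Sage (illustrative only -- a real version would
bound the number of primes needed by the coefficient sizes):

def could_be_square_of(f, g, num_primes=3):
    # f, g in ZZ[x]: compare g^2 with f modulo a few large primes.
    # Any mismatch proves f != g^2 without a full integer multiply;
    # agreement modulo enough primes (relative to the coefficient
    # bounds) would prove equality.
    p = next_prime(2**61)
    for _ in range(num_primes):
        Rp = f.parent().change_ring(GF(p))
        if Rp(g)**2 != Rp(f):
            return False
        p = next_prime(p + 1)
    return True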

Obviously over a general ring, you have to multiply using classical or
Karatsuba or multiplication, not all of which needs to be done to get
some of the coefficients. Of course that is a pointless observation in
special rings like Z, Z/pZ and Q where you don't have to use these
algorithms.

And also note you only need a truncated product, since you already know
the top half of the terms is correct.  Asymptotically that doesn't save
anything, but FLINT saves a small factor doing truncated products anyway,
since there is less real work to do, even when it uses an FFT.  You can
combine this suggestion with the multimodular suggestion.
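
Concretely, in Sage you can get the truncation for free by squaring in a
power series ring (a sketch; which half is already forced depends on
whether the root was built from the top coefficients down or the bottom
coefficients up):

def check_low_half(f, g):
    # f, g in ZZ[x] with deg f = n: verify the low n/2 + 1 coefficients
    # of g^2 against f by a truncated squaring.
    k = f.degree() // 2 + 1
    S = PowerSeriesRing(f.base_ring(), 'x')
    return S(g).add_bigoh(k)**2 == S(f).add_bigoh(k)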

Another throwaway observation is that squaring can be done faster
than a full product.  FLINT does this automatically for sufficiently
large polynomials, i.e. length > 20 (FLINT 2.0 will do it for all
polynomials, and I've already written some of the code for that).  Again
you can use this with the multimodular and truncated product
suggestions.

Anyhow, I've just outlined why I won't be doing this in FLINT
soon. :-) There are actually quite a lot of tricks to go through, and
there may even be an algorithm for detecting perfect squares which I
know nothing about.

Bill.

[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread John Cremona

The power series method would apply to any polynomial whose constant
term is a square (or rather, whose lowest-degree monomial has the form
(square coefficient)*(even power of x)).  Viewed as a power series, such
a thing is always a square, as a simple induction constructs a square
root.  We already have this:

sage: R.<x> = PowerSeriesRing(QQ)
sage: f = R(1+2*x+3*x^2)
sage: f.sqrt()
1 + x + x^2 - x^3 + 1/2*x^4 + 1/2*x^5 - 3/2*x^6 + 3/2*x^7 + 3/8*x^8 -
29/8*x^9 + 43/8*x^10 - 11/8*x^11 - 155/16*x^12 + 325/16*x^13 -
215/16*x^14 - 393/16*x^15 + 9891/128*x^16 - 10269/128*x^17 -
5901/128*x^18 + 36813/128*x^19 + O(x^20)
sage: g = f^2
sage: g.sqrt()
1 + 2*x + 3*x^2
sage: PolynomialRing(QQ,'X')(g.sqrt().list())
3*X^2 + 2*X + 1

John


[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Bill Hart

If a polynomial is a perfect square, is it not necessarily true that
the constant term is a square?  Thus you can already bail out if this
is not the case.

I wonder when the time will come that virtually everything that can be
implemented is implemented in Sage.  The algorithm will then always be:
"let's ask Sage"... "nope, an algorithm does not exist!"

Bill.


[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Bill Hart

Oh sorry, I get you. You mean that if the first term *is* a square
then the power series method will *always* find a purported square
root for you.

So what I meant is that as soon as the power series algorithm returns
a non-zero coefficient past the n/2-th coefficient you know your
original polynomial was not a square. This may occur if your Newton
iteration exceeds this many coefficients of the power series square
root. So I think what I said makes sense.

Of course, to prove it is a polynomial square root, if it is, you need
to do an (n/2) x (n/2) truncated squaring.  I think that is less than
what is required to complete the final step of the Newton iteration.

Bill.


[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread John Cremona

2008/8/18 Bill Hart [EMAIL PROTECTED]:

> Oh sorry, I get you.  You mean that if the first term *is* a square
> then the power series method will *always* find a purported square
> root for you.

Yes, that's what I meant.


> So what I meant is that as soon as the power series algorithm returns
> a non-zero coefficient past the n/2-th coefficient you know your
> original polynomial was not a square.  This may occur if your Newton
> iteration exceeds this many coefficients of the power series square
> root.  So I think what I said makes sense.


That's true.

> Of course, to prove it is a polynomial square root, if it is, you need
> to do an (n/2) x (n/2) truncated squaring.  I think that is less than
> what is required to complete the final step of the Newton iteration.


John


[sage-devel] Re: sqrt for polynomials

2008-08-18 Thread Robert Bradshaw

On Mon, 18 Aug 2008, John Cremona wrote:


> 2008/8/18 Bill Hart [EMAIL PROTECTED]:
>
> > Oh sorry, I get you.  You mean that if the first term *is* a square
> > then the power series method will *always* find a purported square
> > root for you.
>
> Yes, that's what I meant.
>
> > So what I meant is that as soon as the power series algorithm returns
> > a non-zero coefficient past the n/2-th coefficient you know your
> > original polynomial was not a square.  This may occur if your Newton
> > iteration exceeds this many coefficients of the power series square
> > root.  So I think what I said makes sense.
>
> That's true.

The power series sqrt code already checks to see if an exact square was
found (note that the result above is not followed by +O(x^20)).  It may
not abort as early as it could, though.

