[sage-devel] Re: Request for comments: Padé approximants

2019-11-30 Thread Vincent Neiger
Concerning what minimal_approximant_basis returns: this is specified in the 
documentation, 
http://doc.sagemath.org/html/en/reference/matrices/sage/matrix/matrix_polynomial_dense.html#sage.matrix.matrix_polynomial_dense.Matrix_polynomial_dense.minimal_approximant_basis
but in formal (hence technical) terms. Let me try to give a more 
"intuitive" description of the terms in the documentation.

*Basis of the module of approximants:*
In your case, the set of all solutions to the Padé equation (ignoring 
degree constraints) forms a module of rank 2 over the polynomial ring: it is 
generated by 2 elements (each of them a pair of univariate polynomials).
The algorithm returns a basis of that module, represented as a 2 x 2 matrix: 
the set of all solutions to the Padé equation is precisely the set of 
linear combinations (with polynomial coefficients) of the two rows of that 
matrix.

*Basis in shifts-ordered weak Popov form:*
The algorithm does not return an arbitrary basis; it returns one with specific 
degree properties (setting the input parameter normal_form to True further 
reduces the entries, yielding a uniquely defined basis).
By setting the shifts to the negated degree constraints, these degree 
properties ensure in particular that one of the two rows will satisfy the 
Padé degree constraints (assuming these were chosen so that a solution 
exists). If the degree constraints were chosen tight (their sum equals the 
precision), the other row will not satisfy them, so in this case the choice 
is easy: take the row which satisfies the constraints.

Hoping this clarifies a bit what the output looks like.
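
For concreteness, here is a small sketch of this recipe; the base ring (QQ), 
the precision (d = 5) and the degree bounds (at most 2 for numerator and 
denominator, hence shifts = [-2, -2]) are arbitrary choices for illustration:

sage: R.<x> = QQ[]
sage: d = 5
sage: S = R([1/factorial(k) for k in range(d)])   # truncation of exp(x)
sage: F = matrix(R, [[S], [-1]])                  # encodes g*S - f = O(x^d)
sage: P = F.minimal_approximant_basis(d, shifts=[-2, -2], normal_form=True)
sage: [row] = [r for r in P.rows() if max(e.degree() for e in r) <= 2]
sage: g, f = row[0], row[1]                       # the Padé approximant is f/g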

Concerning speed, for which kind of base field did you make your 
experiments? In any case, this algorithm is much slower than the best 
existing ones (its complexity is quadratic in the precision instead of 
quasi-linear). For the general Hermite-Padé problem handled by this 
algorithm, experiments showed that the faster ones were not that 
interesting in Sage for the moment: we would need a faster polynomial 
matrix multiplication. However, for this specific Padé case the matrices 
are 2 x 2, so it seems to me that here these faster algorithms could 
indeed be much faster (again I have "base field = finite field" in mind). I 
will look into this and keep you informed on this thread.


On Friday, November 29, 2019 at 20:32:47 UTC+1, Emmanuel Charpentier wrote:
>
> A big thank you for your help, which was absolutely necessary. Synthesis:
>
> 1) Vincent Neiger's proposal works; I have been able to obtain acceptable 
> reproductions of Maxima's pade results in a small sample of test cases 
> (mostly by treating the "shifts" symmetrically).
>
> However, the overall combination of Sage's series and 
> minimal_approximant_basis, with the necessary base ring changes, seems a 
> tad slower than a straight translation to Maxima, chaining Maxima's taylor 
> and pade functions and converting back to Sage.
>
> Furthermore, I think that this method is more general and may have other 
> applications; the problem is that I do not (yet) understand what it does. 
> For example, it returns two rational fractions, and which of the two is 
> "best" is not always obvious to my naked eye...
>
> I will set it aside for further exploration and a separate implementation 
> (possibly more general than SR rational approximations).
>
> 2) I think that the "naïve" method of handling multiple variables is the 
> only one that can be used without supplementary information (and much more 
> understanding of the subject than I have currently).
>
> Therefore, unless I receive further advice or comments, a ticket (to be 
> announced) will propose a wrapper around Maxima's implementations, 
> "naïvely" extended to the multivariate case, with an extension of the 
> current argument passing of  taylor.
>
> Do you think that a paragraph should be added to the relevant 
> introduction/tutorial texts (and if so, which ones)? In that case, I think 
> that a word of caution about the limitations and pitfalls of this approach 
> to approximation should be part of it (e.g. the introduction of "spurious" 
> poles and zeros, radius of convergence, etc.). Again, advice welcome...
>
> On Sunday, November 10, 2019 at 14:32:52 UTC+1, Emmanuel Charpentier wrote:
>>
>> Dear list,
>>
>> IMHO, Sage could use an implementation of Padé approximants (similar to its 
>> implementation of Taylor series), if only for numerical purposes. 
>> Currently, this can be done:
>>
>>- by wrapping the Maxima functions taylor and pade (Maxima's pade 
>>needs a Maxima taylor development object, which does not cleanly 
>>translate to Sage);
>>- by using the PowerSeriesRing.pade method.
>>
>> Some trials have shown that the latter method, as advertised in its 
>> documentation, is indeed unsuitable even for moderately large degrees of 
>> the numerator and denominator: the expressions thus obtained are extremely 
>> large and unwieldy, and are slow to evaluate numerically.
>> In contrast, the 

[sage-devel] Re: Request for comments: Padé approximants

2019-11-29 Thread Emmanuel Charpentier
A big thank you for your help, which was absolutely necessary. Synthesis:

1) Vincent Neiger's proposal works; I have been able to obtain acceptable 
reproductions of Maxima's pade results in a small sample of test cases 
(mostly by treating the "shifts" symmetrically).

However, the overall combination of Sage's series and 
minimal_approximant_basis, with the necessary base ring changes, seems a 
tad slower than a straight translation to Maxima, chaining Maxima's taylor 
and pade functions and converting back to Sage.
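
For reference, that Maxima route can be sketched roughly as follows; the 
helper name and the exact maxima_calculus calls are illustrative assumptions, 
not the interface a ticket would propose:

sage: def pade_via_maxima(expr, v, at, num_deg, den_deg):
....:     # Maxima's pade() wants a taylor() object of sufficient order...
....:     t = maxima_calculus.taylor(expr, v, at, num_deg + den_deg)
....:     # ...and returns a list of candidate rational approximants.
....:     return maxima_calculus.pade(t, num_deg, den_deg).sage()
sage: # e.g.:  pade_via_maxima(atan(x), x, 0, 3, 2)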

Furthermore, I think that this method is more general and may have other 
applications; the problem is that I do not (yet) understand what it does. 
For example, it returns two rational fractions, and which of the two is 
"best" is not always obvious to my naked eye...

I will set it aside for further exploration and a separate implementation 
(possibly more general than SR rational approximations).

2) I think that the "naïve" method of handling multiple variables is the 
only one that can be used without supplementary information (and much more 
understanding of the subject than I have currently).

Therefore, unless I receive further advice or comments, a ticket (to be 
announced) will propose a wrapper around Maxima's implementations, 
"naïvely" extended to the multivariate case, with an extension of the 
current argument passing of  taylor.
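
To make the naïve multivariate extension concrete, here is a rough sketch 
built on a univariate helper such as the pade_via_maxima sketch above; the 
function name, the degrees argument and taking the first Maxima candidate are 
assumptions for illustration only:

sage: def pade_multivariate(expr, degrees):
....:     # degrees: {variable: (num_deg, den_deg)}, processed in order,
....:     # i.e. this computes p(p(p(f, x), y), z) one variable at a time.
....:     approx = expr
....:     for v, (m, n) in degrees.items():
....:         # keep the first candidate returned by the univariate helper
....:         approx = pade_via_maxima(approx, v, 0, m, n)[0]
....:     return approx
sage: # e.g. with x, y = var('x y'):  pade_multivariate(exp(x + y), {x: (2, 2), y: (2, 2)})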

Do you think that a paragraph should be added to the relevant 
introduction/tutorial texts (and if so, which ones)? In that case, I think 
that a word of caution about the limitations and pitfalls of this approach 
to approximation should be part of it (e.g. the introduction of "spurious" 
poles and zeros, radius of convergence, etc.). Again, advice welcome...

On Sunday, November 10, 2019 at 14:32:52 UTC+1, Emmanuel Charpentier wrote:
>
> Dear list,
>
> IMHO, Sage could use an implementation of Padé approximants (similar to its 
> implementation of Taylor series), if only for numerical purposes. 
> Currently, this can be done:
>
>- by wrapping the Maxima functions taylor and pade (Maxima's pade 
>needs a Maxima taylor development object, which does not cleanly 
>translate to Sage);
>- by using the PowerSeriesRing.pade method.
>
> Some trials have shown that the latter method, as advertised in its 
> documentation, is indeed unsuitable even for moderately large degrees of 
> the numerator and denominator: the expressions thus obtained are extremely 
> large and unwieldy, and are slow to evaluate numerically.
> In contrast, the algorithm used by Maxima gives easily usable results 
> (even if they can be enhanced by expansion and possibly factorization 
> and/or simplification). But using it worsens our dependence on Maxima.
> Hence, a couple of questions:
>
> *Algorithms:*
>
> Do you have pointers to better algorithms for Padé approximant 
> computation (especially for the multivariate case)? These might also be 
> helpful in the implementation of PowerSeriesRing.pade ...
>
> *User interface:*
>
> We can follow our current taylor() convention, which is a bit of a 
> straitjacket in the multivariate case, imposing the same degree for all 
> the developments wrt the different variables.
> We can also allow one to specify different degrees for the developments wrt 
> the different variables (this can make sense for very asymmetric functions).
>
> Suggestions ?
>
> *Multivariate case:*
>
> An elementary implementation (see (Huard and Guillaume, 2000) for 
> example) is to iteratively compute a Padé approximant for the successive 
> variables, i.e. if p(f, x) denotes the Padé approximant wrt the variable 
> x, you end up computing p(p(p(f,x),y),z) (the implementation is 
> trivial). The paper I quoted hints that there are better solutions, but it is 
> a bit above my pay grade (my initial training is in dentistry and 
> surgery...).
>
> Do you have suggestions on this point ?
>
> More generally, any comments are welcome !
>
>



Re: [sage-devel] Re: Request for comments: Padé approximants

2019-11-13 Thread Emmanuel Charpentier
Dear Dima,

I'm trying to offer a tool for "engineering" problems, similar to (and 
along the lines of) our taylor() and series() functions for symbolic 
expressions. Therefore, few assumptions can be made. Accordingly, we can 
accept some failures, just as we accept failures of taylor() or series(), as 
in (for example):

sage: sqrt(x).series(x, 5)
1 + Order(x^5)
sage: sqrt(x).taylor(x, 0, 5)
sqrt(x)

(Note: the second one is more informative than the first...).

Since rational-fraction approximations need some serious thinking anyway 
before use (e.g. they can introduce poles where the original function is 
smooth...), this tool is more of an aid than a black-box solution...

On Wednesday, November 13, 2019 at 13:57:57 UTC+1, Dima Pasechnik wrote:
>
> In the multivariate case a lot depends on input. 
> E.g.,  do you know something about zeros of your function?
>

Not in general
 

> E.g. do you have derivatives easily available?
>

I'm postulating that either taylor() or series() can be used  to get them.
 

> If derivatives are hard, you probably would like to avoid them altogether, 
> using something known as Newton-Padé
> approximation: https://link.springer.com/article/10.1007/BF01390129
>

Thanks for this reference. I think that this is a useful tool for getting 
numerical approximations in cases where symbolic solutions (which I'm aiming 
for) aren't available...

I'll keep them in mind for further work.
 

>
> By the way, have you looked at the survey
> https://www.sciencedirect.com/science/article/pii/S03770427337X ?
>

That's the paper I quoted in my first mail... ;-).

Thanks a lot !
 

>
> Dima
>
> On Wed, 13 Nov 2019, 11:16 Emmanuel Charpentier wrote:
>
>> On Tuesday, November 12, 2019 at 20:42:16 UTC+1, rjf wrote:
>>>
>>> Since Maxima is  free and open source and gpl, why not just read the 
>>> algorithm implemented there
>>> and rewrite it in Python?
>>>
>>
>> That can be done. But I had other interests in mind:
>>
>>- multivariate case (my solution of iterative univariate 
>>approximations may not be optimal...)
>>- other (newer) algorithms, possibly more efficient.
>>
>> The hint given by Vincent is interesting, and I'm following it. I'll keep 
>> this thread posted.
>> Anyway, wrapping the relevant Maxima calls in a Python function is 
>> trivial, and may serve as a first implementation.
>>  
>>
>>>
>>> RJF
>>> \
>>>
>>> On Monday, November 11, 2019 at 1:29:56 AM UTC-8, Emmanuel Charpentier 
>>> wrote:

 Dear Vincent,

 a very quick answer (limbic system level. :-). Thank you for this hint. 
 It seems really interesting. But I'll need time to explore it and find my 
 way.

 Of course, I will have to work on PolynomialRing(SR) in order to be 
 able to work on one variable while ignoring the rest...

 I'll keep you posted.

 On Monday, November 11, 2019 at 09:28:58 UTC+1, Vincent Neiger wrote:
>
> Dear Emmanuel,
>
> You may be interested in taking a look at the following function:
> Matrix_polynomial_dense.minimal_approximant_basis
>
> This only supports the univariate case. This solves a problem which 
> generalizes Padé approximation (the documentation gives a precise 
> description of what it computes; taking notation from there, instead of 
> thinking in terms of polynomials you may view the matrix F as having 
> power 
> series entries, with the column j truncated at order d_j ).
>
> Coming back to our problem: if you have a power series S(x) at some 
> order d and you want to find a Padé approximation f(x) / g(x) of it, you 
> can find it by calling the above function on the 2 x 1 polynomial matrix 
> [[S(x).polynomial()], [-1]] with the input order being d. Specifically on 
> this input the above function will return a 2 x 2 matrix of univariate 
> polynomials, and its first row will be [[g(x) , f(x)]], your sought 
> approximant.
>
> By default you will get the approximant with f and g of degree about 
> d/2 and deg(g) > deg(f) (well, at least on typical instances), but by 
> manipulating the optional argument "shift" you can get any other type 
> that 
> you want. This shift is basically equivalent to the notion of "defects" 
> often found in the literature on approximants.
>
> This is based on an algorithm described by Van Barel and Bultheel and 
> by Beckermann and Labahn in the early 1990s. A much faster algorithm 
> exists, using a divide and conquer approach and fast polynomial 
> multiplication, but for it to be interesting we would need Sage to have a 
> faster polynomial matrix multiplication (for the moment this faster 
> algorithm is not really interesting except for small matrix dimensions... 
> so it could actually 

Re: [sage-devel] Re: Request for comments: Padé approximants

2019-11-13 Thread Dima Pasechnik
In the multivariate case a lot depends on input.
E.g.,  do you know something about zeros of your function?
E.g. do you have derivatives easily available?
If derivatives are hard, you probably would like to avoid them altogether,
using something known as Newton-Padé
approximation: https://link.springer.com/article/10.1007/BF01390129

By the way, have you looked at the survey
https://www.sciencedirect.com/science/article/pii/S03770427337X ?

Dima

On Wed, 13 Nov 2019, 11:16 Emmanuel Charpentier, <
emanuel.charpent...@gmail.com> wrote:

> On Tuesday, November 12, 2019 at 20:42:16 UTC+1, rjf wrote:
>>
>> Since Maxima is  free and open source and gpl, why not just read the
>> algorithm implemented there
>> and rewrite it in Python?
>>
>
> That can be done. But I had other interests in mind:
>
>- multivariate case (my solution of iterative univariate
>approximations may not be optimal...)
>- other (newer) algorithms, possibly more efficient.
>
> The hint given by Vincent is interesting, and I'm following it. I'll keep
> this thread posted.
> Anyway, wrapping the relevant Maxima calls in a Python function is
> trivial, and may serve as a first implementation.
>
>
>>
>> RJF
>> \
>>
>> On Monday, November 11, 2019 at 1:29:56 AM UTC-8, Emmanuel Charpentier
>> wrote:
>>>
>>> Dear Vincent,
>>>
>>> a very quick answer (limbic system level. :-). Thank you for this hint.
>>> It seems really interesting. But I'll need time to explore it and find my
>>> way.
>>>
>>> Of course, I will have to work on PolynomialRing(SR) in order to be able
>>> to work on one variable while ignoring the rest...
>>>
>>> I'll keep you posted.
>>>
 On Monday, November 11, 2019 at 09:28:58 UTC+1, Vincent Neiger wrote:

 Dear Emmanuel,

 You may be interested in taking a look at the following function:
 Matrix_polynomial_dense.minimal_approximant_basis

 This only supports the univariate case. This solves a problem which
 generalizes Padé approximation (the documentation gives a precise
 description of what it computes; taking notation from there, instead of
 thinking in terms of polynomials you may view the matrix F as having power
 series entries, with the column j truncated at order d_j ).

 Coming back to our problem: if you have a power series S(x) at some
 order d and you want to find a Padé approximation f(x) / g(x) of it, you
 can find it by calling the above function on the 2 x 1 polynomial matrix
 [[S(x).polynomial()], [-1]] with the input order being d. Specifically on
 this input the above function will return a 2 x 2 matrix of univariate
 polynomials, and its first row will be [[g(x) , f(x)]], your sought
 approximant.

 By default you will get the approximant with f and g of degree about
 d/2 and deg(g) > deg(f) (well, at least on typical instances), but by
 manipulating the optional argument "shift" you can get any other type that
 you want. This shift is basically equivalent to the notion of "defects"
 often found in the literature on approximants.

 This is based on an algorithm described by Van Barel and Bultheel and
 by Beckermann and Labahn in the early 1990s. A much faster algorithm
 exists, using a divide and conquer approach and fast polynomial
 multiplication, but for it to be interesting we would need Sage to have a
 faster polynomial matrix multiplication (for the moment this faster
 algorithm is not really interesting except for small matrix dimensions...
 so it could actually bring substantial speedups for this 2 x 1 case).

 As you can guess from the documentation of the above function, it has
 been mainly designed with an "exact arithmetic" context in mind, typically
 working over a finite field or the rationals. So it is my turn to write
 that I would be very interested in reading your comments on how this
 existing method behaves in your context.


 On Sunday, November 10, 2019 at 14:32:52 UTC+1, Emmanuel Charpentier wrote:
>
> Dear list,
>
> IMHO, Sage could use an implementation of Padé approximants (similar to
> its implementation of Taylor series), if only for numerical purposes.
> Currently, this can be done:
>
>- by wrapping the Maxima functions taylor and pade (Maxima's pade
>needs a Maxima taylor development object, which does not cleanly
>translate to Sage);
>- by using the PowerSeriesRing.pade method.
>
> Some trials have shown that the latter method, as advertised in its
> documentation, is indeed unsuitable even for moderately large degrees of
> the numerator and denominator: the expressions thus obtained are extremely
> large and unwieldy, and are slow to evaluate numerically.
> In contrast, the algorithm used by Maxima gives easily usable results
> (even if they can be enhanced by expansion and possibly factorization
> and/or 

[sage-devel] Re: Request for comments: Padé approximants

2019-11-13 Thread 'Martin R' via sage-devel
For the univariate case, to compare speed etc., you could also call FriCAS 
and do something like

sage: fricas.pade(3,2, fricas.taylor(atan(x), x=0)).sage()
(4/9*x^3 + 5/3*x)/(x^2 + 5/3)

In fact, there is also Hermite-Padé in FriCAS, but I cannot remember the 
details.



[sage-devel] Re: Request for comments: Padé approximants

2019-11-13 Thread Emmanuel Charpentier
On Tuesday, November 12, 2019 at 20:42:16 UTC+1, rjf wrote:
>
> Since Maxima is  free and open source and gpl, why not just read the 
> algorithm implemented there
> and rewrite it in Python?
>

That can be done. But I had other interests in mind:

   - multivariate case (my solution of iterative univariate approximations 
   may not be optimal...)
   - other (newer) algorithms, possibly more efficient.
   
The hint given by Vincent is interesting, and I'm following it. I'll keep 
this thread posted.
Anyway, wrapping the relevant Maxima calls in a Python function is trivial, 
and may serve as a first implementation.
 

>
> RJF
> \
>
> On Monday, November 11, 2019 at 1:29:56 AM UTC-8, Emmanuel Charpentier 
> wrote:
>>
>> Dear Vincent,
>>
>> a very quick answer (limbic system level. :-). Thank you for this hint. 
>> It seems really interesting. But I'll need time to explore it and find my 
>> way.
>>
>> Of course, I will have to work on PolynomialRing(SR) in order to be able 
>> to work on one variable while ignoring the rest...
>>
>> I'll keep you posted.
>>
>> On Monday, November 11, 2019 at 09:28:58 UTC+1, Vincent Neiger wrote:
>>>
>>> Dear Emmanuel,
>>>
>>> You may be interested in taking a look at the following function:
>>> Matrix_polynomial_dense.minimal_approximant_basis
>>>
>>> This only supports the univariate case. This solves a problem which 
>>> generalizes Padé approximation (the documentation gives a precise 
>>> description of what it computes; taking notation from there, instead of 
>>> thinking in terms of polynomials you may view the matrix F as having power 
>>> series entries, with the column j truncated at order d_j ).
>>>
>>> Coming back to our problem: if you have a power series S(x) at some 
>>> order d and you want to find a Padé approximation f(x) / g(x) of it, you 
>>> can find it by calling the above function on the 2 x 1 polynomial matrix 
>>> [[S(x).polynomial()], [-1]] with the input order being d. Specifically on 
>>> this input the above function will return a 2 x 2 matrix of univariate 
>>> polynomials, and its first row will be [[g(x) , f(x)]], your sought 
>>> approximant.
>>>
>>> By default you will get the approximant with f and g of degree about d/2 
>>> and deg(g) > deg(f) (well, at least on typical instances), but by 
>>> manipulating the optional argument "shift" you can get any other type that 
>>> you want. This shift is basically equivalent to the notion of "defects" 
>>> often found in the literature on approximants.
>>>
>>> This is based on an algorithm described by Van Barel and Bultheel and by 
>>> Beckermann and Labahn in the early 1990s. A much faster algorithm exists, 
>>> using a divide and conquer approach and fast polynomial multiplication, but 
>>> for it to be interesting we would need Sage to have a faster polynomial 
>>> matrix multiplication (for the moment this faster algorithm is not really 
>>> interesting except for small matrix dimensions... so it could actually 
>>> bring substantial speedups for this 2 x 1 case).
>>>
>>> As you can guess from the documentation of the above function, it has 
>>> been mainly designed with an "exact arithmetic" context in mind, typically 
>>> working over a finite field or the rationals. So it is my turn to write 
>>> that I would be very interested in reading your comments on how this 
>>> existing method behaves in your context.
>>>
>>>
>>> On Sunday, November 10, 2019 at 14:32:52 UTC+1, Emmanuel Charpentier wrote:

 Dear list,

 IMHO, Sage could use an implementation of Padé approximants (similar to 
 its implementation of Taylor series), if only for numerical purposes. 
 Currently, this can be done:

- by wrapping the Maxima functions taylor and pade (Maxima's pade 
needs a Maxima taylor development object, which does not cleanly 
translate to Sage);
- by using the PowerSeriesRing.pade method.

 Some trials have shown that the latter method, as advertised in its 
 documentation, is indeed unsuitable even for moderately large degrees of 
 the numerator and denominator: the expressions thus obtained are extremely 
 large and unwieldy, and are slow to evaluate numerically.
 In contrast, the algorithm used by Maxima gives easily usable results 
 (even if they can be enhanced by expansion and possibly factorization 
 and/or simplification). But using it worsens our dependence on Maxima.
 Hence, a couple of questions:

 *Algorithms:*

 Do you have pointers to better algorithms for Padé approximant 
 computation (especially for the multivariate case)? These might also be 
 helpful in the implementation of PowerSeriesRing.pade ...

 *User interface:*

 We can follow our current taylor() convention, which is a bit of a 
 straitjacket in the multivariate case, imposing the same degree for all 
 the developments wrt the different variables.
 We can also allow one to specify 

[sage-devel] Re: Request for comments: Padé approximants

2019-11-12 Thread rjf
Since Maxima is  free and open source and gpl, why not just read the 
algorithm implemented there
and rewrite it in Python?

RJF
\

On Monday, November 11, 2019 at 1:29:56 AM UTC-8, Emmanuel Charpentier 
wrote:
>
> Dear Vincent,
>
> a very quick answer (limbic system level. :-). Thank you for this hint. It 
> seems really interesting. But I'll need time to explore it and find my way.
>
> Of course, I will have to work on PolynomialRing(SR) in order to be able 
> to work on one variable while ignoring the rest...
>
> I'll keep you posted.
>
>> On Monday, November 11, 2019 at 09:28:58 UTC+1, Vincent Neiger wrote:
>>
>> Dear Emmanuel,
>>
>> You may be interested in taking a look at the following function:
>> Matrix_polynomial_dense.minimal_approximant_basis
>>
>> This only supports the univariate case. This solves a problem which 
>> generalizes Padé approximation (the documentation gives a precise 
>> description of what it computes; taking notation from there, instead of 
>> thinking in terms of polynomials you may view the matrix F as having power 
>> series entries, with the column j truncated at order d_j ).
>>
>> Coming back to our problem: if you have a power series S(x) at some order 
>> d and you want to find a Padé approximation f(x) / g(x) of it, you can find 
>> it by calling the above function on the 2 x 1 polynomial matrix 
>> [[S(x).polynomial()], [-1]] with the input order being d. Specifically on 
>> this input the above function will return a 2 x 2 matrix of univariate 
>> polynomials, and its first row will be [[g(x) , f(x)]], your sought 
>> approximant.
>>
>> By default you will get the approximant with f and g of degree about d/2 
>> and deg(g) > deg(f) (well, at least on typical instances), but by 
>> manipulating the optional argument "shift" you can get any other type that 
>> you want. This shift is basically equivalent to the notion of "defects" 
>> often found in the literature on approximants.
>>
>> This is based on an algorithm described by Van Barel and Bultheel and by 
>> Beckermann and Labahn in the early 1990s. A much faster algorithm exists, 
>> using a divide and conquer approach and fast polynomial multiplication, but 
>> for it to be interesting we would need Sage to have a faster polynomial 
>> matrix multiplication (for the moment this faster algorithm is not really 
>> interesting except for small matrix dimensions... so it could actually 
>> bring substantial speedups for this 2 x 1 case).
>>
>> As you can guess from the documentation of the above function, it has 
>> been mainly designed with an "exact arithmetic" context in mind, typically 
>> working over a finite field or the rationals. So it is my turn to write 
>> that I would be very interested in reading your comments on how this 
>> existing method behaves in your context.
>>
>>
>> On Sunday, November 10, 2019 at 14:32:52 UTC+1, Emmanuel Charpentier wrote:
>>>
>>> Dear list,
>>>
>>> IMHO, Sage could use an implementation of Padé approximants (similar to its 
>>> implementation of Taylor series), if only for numerical purposes. 
>>> Currently, this can be done:
>>>
>>>- by wrapping the Maxima functions taylor and pade (Maxima's pade 
>>>needs a Maxima taylor development object, which does not cleanly 
>>>translate to Sage);
>>>- by using the PowerSeriesRing.pade method.
>>>
>>> Some trials have shown that the latter method, as advertised in its 
>>> documentation, is indeed unsuitable even for moderately large degrees of 
>>> the numerator and denominator: the expressions thus obtained are extremely 
>>> large and unwieldy, and are slow to evaluate numerically.
>>> In contrast, the algorithm used by Maxima gives easily usable results 
>>> (even if they can be enhanced by expansion and possibly factorization 
>>> and/or simplification). But using it worsens our dependence on Maxima.
>>> Hence, a couple of questions:
>>>
>>> *Algorithms:*
>>>
>>> Do you have pointers to better algorithms for Padé approximant 
>>> computation (especially for the multivariate case)? These might also be 
>>> helpful in the implementation of PowerSeriesRing.pade ...
>>>
>>> *User interface:*
>>>
>>> We can follow our current taylor() convention, which is a bit of a 
>>> straitjacket in the multivariate case, imposing the same degree for all 
>>> the developments wrt the different variables.
>>> We can also allow one to specify different degrees for the developments wrt 
>>> the different variables (this can make sense for very asymmetric functions).
>>>
>>> Suggestions ?
>>>
>>> *Multivariate case:*
>>>
>>> An elementary implementation (see (Huard and Guillaume, 2000) for 
>>> example) is to iteratively compute a Padé approximant for the successive 
>>> variables, i.e. if p(f, x) denotes the Padé approximant wrt the 
>>> variable x, you end up computing p(p(p(f,x),y),z) (the implementation 
>>> is trivial). The paper I quoted hints that there are 

[sage-devel] Re: Request for comments: Padé approximants

2019-11-11 Thread Emmanuel Charpentier
Dear Vincent,

a very quick answer (limbic system level. :-). Thank you for this hint. It 
seems really interesting. But I'll need time to explore it and find my way.

Of course, I will have to work on PolynomialRing(SR) in order to be able to 
work on one variable while ignoring the rest...

I'll keep you posted.

On Monday, November 11, 2019 at 09:28:58 UTC+1, Vincent Neiger wrote:
>
> Dear Emmanuel,
>
> You may be interested in taking a look at the following function:
> Matrix_polynomial_dense.minimal_approximant_basis
>
> This only supports the univariate case. This solves a problem which 
> generalizes Padé approximation (the documentation gives a precise 
> description of what it computes; taking notation from there, instead of 
> thinking in terms of polynomials you may view the matrix F as having power 
> series entries, with the column j truncated at order d_j ).
>
> Coming back to our problem: if you have a power series S(x) at some order 
> d and you want to find a Padé approximation f(x) / g(x) of it, you can find 
> it by calling the above function on the 2 x 1 polynomial matrix 
> [[S(x).polynomial()], [-1]] with the input order being d. Specifically on 
> this input the above function will return a 2 x 2 matrix of univariate 
> polynomials, and its first row will be [[g(x) , f(x)]], your sought 
> approximant.
>
> By default you will get the approximant with f and g of degree about d/2 
> and deg(g) > deg(f) (well, at least on typical instances), but by 
> manipulating the optional argument "shift" you can get any other type that 
> you want. This shift is basically equivalent to the notion of "defects" 
> often found in the literature on approximants.
>
> This is based on an algorithm described by Van Barel and Bultheel and by 
> Beckermann and Labahn in the early 1990s. A much faster algorithm exists, 
> using a divide and conquer approach and fast polynomial multiplication, but 
> for it to be interesting we would need Sage to have a faster polynomial 
> matrix multiplication (for the moment this faster algorithm is not really 
> interesting except for small matrix dimensions... so it could actually 
> bring substantial speedups for this 2 x 1 case).
>
> As you can guess from the documentation of the above function, it has been 
> mainly designed with an "exact arithmetic" context in mind, typically 
> working over a finite field or the rationals. So it is my turn to write 
> that I would be very interested in reading your comments on how this 
> existing method behaves in your context.
>
>
> On Sunday, November 10, 2019 at 14:32:52 UTC+1, Emmanuel Charpentier wrote:
>>
>> Dear list,
>>
>> IMHO, Sage could use an implementation of Padé approximants (similar to its 
>> implementation of Taylor series), if only for numerical purposes. 
>> Currently, this can be done:
>>
>>- by wrapping the Maxima functions taylor and pade (Maxima's pade 
>>needs a Maxima taylor development object, which does not cleanly 
>>translate to Sage);
>>- by using the PowerSeriesRing.pade method.
>>
>> Some trials have shown that the latter method, as advertised in its 
>> documentation, is indeed unsuitable even for moderately large degrees of 
>> the numerator and denominator: the expressions thus obtained are extremely 
>> large and unwieldy, and are slow to evaluate numerically.
>> In contrast, the algorithm used by Maxima gives easily usable results 
>> (even if they can be enhanced by expansion and possibly factorization 
>> and/or simplification). But using it worsens our dependence on Maxima.
>> Hence, a couple of questions:
>>
>> *Algorithms:*
>>
>> Do you have pointers to better algorithms for Padé approximant 
>> computation (especially for the multivariate case)? These might also be 
>> helpful in the implementation of PowerSeriesRing.pade ...
>>
>> *User interface:*
>>
>> We can follow our current taylor() convention, which is a bit of a 
>> straitjacket in the multivariate case, imposing the same degree for all 
>> the developments wrt the different variables.
>> We can also allow one to specify different degrees for the developments wrt 
>> the different variables (this can make sense for very asymmetric functions).
>>
>> Suggestions ?
>>
>> *Multivariate case:*
>>
>> An elementary implementation (see (Huard and Guillaume, 2000) for 
>> example) is to iteratively compute a Padé approximant for the successive 
>> variables, i.e. if p(f, x) denotes the Padé approximant wrt the 
>> variable x, you end up computing p(p(p(f,x),y),z) (the implementation 
>> is trivial). The paper I quoted hints that there are better solutions, but it 
>> is a bit above my pay grade (my initial training is in dentistry and 
>> surgery...).
>>
>> Do you have suggestions on this point ?
>>
>> More generally, any comments are welcome !
>>
>>


[sage-devel] Re: Request for comments: Padé approximants

2019-11-11 Thread Vincent Neiger
Dear Emmanuel,

You may be interested in taking a look at the following function:
Matrix_polynomial_dense.minimal_approximant_basis

This only supports the univariate case. This solves a problem which 
generalizes Padé approximation (the documentation gives a precise 
description of what it computes; taking notation from there, instead of 
thinking in terms of polynomials you may view the matrix F as having power 
series entries, with the column j truncated at order d_j ).

Coming back to our problem: if you have a power series S(x) at some order d 
and you want to find a Padé approximation f(x) / g(x) of it, you can find 
it by calling the above function on the 2 x 1 polynomial matrix 
[[S(x).polynomial()], [-1]] with the input order being d. Specifically on 
this input the above function will return a 2 x 2 matrix of univariate 
polynomials, and its first row will be [[g(x) , f(x)]], your sought 
approximant.
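
As a toy illustration of this call (the base field, the precision and the 
series coefficients below are arbitrary choices):

sage: R.<x> = GF(97)[]
sage: d = 6
sage: S = R([1, 3, 5, 7, 11, 13])        # some truncated power series mod 97
sage: F = matrix(R, [[S], [-1]])
sage: P = F.minimal_approximant_basis(d)
sage: g, f = P[0, 0], P[0, 1]            # first row of the basis, [g, f]
sage: ((g*S - f) % x^d).is_zero()        # the defining equation holds mod x^d
True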

By default you will get the approximant with f and g of degree about d/2 
and deg(g) > deg(f) (well, at least on typical instances), but by 
manipulating the optional argument "shift" you can get any other type that 
you want. This shift is basically equivalent to the notion of "defects" 
often found in the literature on approximants.

This is based on an algorithm described by Van Barel and Bultheel and by 
Beckermann and Labahn in the early 1990s. A much faster algorithm exists, 
using a divide and conquer approach and fast polynomial multiplication, but 
for it to be interesting we would need Sage to have a faster polynomial 
matrix multiplication (for the moment this faster algorithm is not really 
interesting except for small matrix dimensions... so it could actually 
bring substantial speedups for this 2 x 1 case).

As you can guess from the documentation of the above function, it has been 
mainly designed with an "exact arithmetic" context in mind, typically 
working over a finite field or the rationals. So it is my turn to write 
that I would be very interested in reading your comments on how this 
existing method behaves in your context.


On Sunday, November 10, 2019 at 14:32:52 UTC+1, Emmanuel Charpentier wrote:
>
> Dear list,
>
> IMHO, Sage could use an implementation of Padé approximants (similar to 
> its implementation of Taylor series), if only for numerical purposes. 
> Currently, this can be done:
>
>- by wrapping the Maxima functions taylor and pade (Maxima's pade 
>needs a Maxima taylor development object, which does not cleanly 
>translate to Sage);
>- by using the PowerSeriesRing.pade method.
>
> Some trials have shown that the latter method, as advertised in its 
> documentation, is indeed unsuitable even for moderately large degrees of 
> the numerator and denominator: the expressions thus obtained are extremely 
> large and unwieldy, and are slow to evaluate numerically.
> In contrast, the algorithm used by Maxima gives easily usable results 
> (even if they can be enhanced by expansion and possibly factorization 
> and/or simplification). But using it worsens our dependence on Maxima.
> Hence, a couple of questions:
>
> *Algorithms:*
>
> Do you have pointers to better algorithms for Padé approximant 
> computation (especially for the multivariate case)? These might also be 
> helpful in the implementation of PowerSeriesRing.pade ...
>
> *User interface:*
>
> We can follow our current taylor() convention, which is a bit of a 
> straitjacket in the multivariate case, imposing the same degree for all 
> the developments wrt the different variables.
> We can also allow one to specify different degrees for the developments wrt the 
> different variables (this can make sense for very asymmetric functions).
>
> Suggestions ?
>
> *Multivariate case:*
>
> An elementary implementation (see (Huard and Guillaume, 2000) for 
> example) is to iteratively compute a Padé approximant for the successive 
> variables, i.e. if p(f, x) denotes the Padé approximant wrt the variable 
> x, you end up computing p(p(p(f,x),y),z) (the implementation is 
> trivial). The paper I quoted hints that there are better solutions, but it is 
> a bit above my pay grade (my initial training is in dentistry and 
> surgery...).
>
> Do you have suggestions on this point ?
>
> More generally, any comments are welcome !
>
>
