Announcement:
-----------------------
I have started to implement vectorized univariate truncated Taylor
polynomial operations (add, sub, mul, div, sin, exp, ...) in ANSI-C.
The interface to Python is implemented using numpy.ndarray's ctypes
functionality. Unit tests are implemented using nose.
It is BSD licensed and hosted on github: http://github.com/b45ch1/taylorpoly

Rationale:
------------------------

A truncated Taylor polynomial is of the form
[x]_D = \sum_{d=0}^{D-1} x_d t^d
where x_d is a real number and t is an external parameter.

Truncated Taylor polynomials should not be confused with the
interpolation polynomials implemented in numpy.polynomial.
The difference is that Taylor polynomials, as implemented in
`taylorpoly`, are polynomials in an "external parameter" t, i.e. they
are *never* evaluated at some value of t.

One should think of this algebraic class as an extension of the real
numbers (similar, e.g., to the complex numbers).

At the moment, I am not aware of any such algorithms available in
Python. I believe algorithms for this algebraic structure are a very
important building block for more sophisticated algorithms, e.g. for
Algorithmic Differentiation (AD) tools and  Taylor polynomial
integrators.

More Detailed Description:
---------------------------------------

The operations add,mul,etc. are extended to compute on truncated
Taylor polynomials, e.g. for the multiplication

[z]_{D+1} =_{D+1} [x]_{D+1} [y]_{D+1}
\sum_{d=0}^D z_d t^d =_{D+1} \left( \sum_{d=0}^D x_d t^d \right)
\left( \sum_{c=0}^D y_c t^c \right)

where D+1 is the number of coefficients (the polynomials have degree
D). The coefficients z_d are computed only up to degree D, i.e.
higher-order terms are truncated.
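To make the truncated multiplication concrete, here is a minimal
sketch in plain Python (the library itself implements this in ANSI-C;
the function name `truncated_mul` is illustrative and not part of
`taylorpoly`):

```python
def truncated_mul(x, y):
    """Multiply two truncated Taylor polynomials given as coefficient
    lists [x_0, ..., x_D]; all terms beyond t^D are dropped."""
    D = len(x)
    z = [0.0] * D
    for d in range(D):
        # z_d = sum_{k=0}^{d} x_k * y_{d-k}  (truncated Cauchy product)
        for k in range(d + 1):
            z[d] += x[k] * y[d - k]
    return z
```

For example, `truncated_mul([1., 2., 3.], [4., 5., 6.])` returns
`[4.0, 13.0, 28.0]`; the degree-3 and degree-4 terms of the full
product are discarded.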


Request for Opinions:
-------------------------------

Before I continue implementing more algorithms, it would be nice to
have some feedback.
There are things I'm still not sure about:

1) data structure for vectorized operations:

For the non-vectorized algorithms, it is quite clear that the
coefficients of a Taylor polynomial \sum_{d=0}^{D-1} x_d t^d should be
stored in a 1D array [x_0, x_1, ..., x_{D-1}].
In the vectorized version, P different Taylor polynomials are computed
at once, but the base coefficient x_0 is the same for all of them.

At the moment I have implemented the data structure:
[x]_{D,P} := [x_0, x_{1,1},...,x_{1,D-1},x_{2,1},...,x_{P,D-1}].

Another possibility would be:
[x] =  [x_0, x_{1,1}, ..., x_{1,P}, x_{2, 1}, ..., x_{D-1, P}]

Any thoughts about which to choose? The first version is much easier
to implement. The second is possibly easier to vectorize by a
compiler.
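To illustrate the two candidate layouts, here is a small sketch in
numpy. The index helpers and the subscript conventions (first layout:
x_{p,d} grouped per polynomial p; second layout: x_{d,p} grouped per
degree d) are my reading of the two formulas above, not part of the
library:

```python
import numpy as np

D, P = 4, 3  # illustrative degree bound and number of polynomials

# Layout 1 (currently implemented): the shared base coefficient x_0,
# then one contiguous block of the D-1 higher coefficients of each
# polynomial p:  [x_0, x_{1,1},...,x_{1,D-1}, ..., x_{P,1},...,x_{P,D-1}]
def index1(p, d):
    """Position of x_{p,d} for p in 1..P, d in 1..D-1."""
    return 1 + (p - 1) * (D - 1) + (d - 1)

# Layout 2 (alternative): the shared base coefficient, then all P
# coefficients of each degree d stored contiguously, degree by degree:
# [x_0, x_{1,1},...,x_{1,P}, ..., x_{D-1,1},...,x_{D-1,P}]
def index2(d, p):
    """Position of x_{d,p} for d in 1..D-1, p in 1..P."""
    return 1 + (d - 1) * P + (p - 1)

# Both layouts need the same storage: 1 + P*(D-1) entries.
data = np.zeros(1 + P * (D - 1))
```

In layout 1 the coefficients of one polynomial are contiguous, which
keeps the scalar algorithms simple; in layout 2 all P coefficients of
one degree are contiguous, which is the access pattern of the inner
loop and hence the easier one for a compiler to vectorize.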


2) implementation of binary operators in Python:
I have defined a class UTPS (univariate Taylor polynomial over Scalar)
that basically only stores the above array [x]_{D,P} in the attribute
`UTPS.data` and the algorithms add,sub,mul,div take instances of the
class UTPS.

I.e. the functions are not implemented as member functions. For
convenience, I plan to add member functions later that call those
functions. Is this a good idea? I think it would be good to be
consistent with numpy.
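A minimal sketch of that design, for the non-vectorized case (a single
coefficient array, so the details of the vectorized data layout are
left out here):

```python
import numpy as np

class UTPS(object):
    """Univariate Taylor polynomial over scalars: only stores the
    coefficient array [x_0, ..., x_{D-1}] in the attribute `data`."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)

    # convenience member operators that just delegate to the free functions
    def __add__(self, other):
        return add(self, other)

    def __mul__(self, other):
        return mul(self, other)

def add(x, y):
    """Coefficient-wise addition of two UTPS instances."""
    return UTPS(x.data + y.data)

def mul(x, y):
    """Truncated multiplication of two UTPS instances."""
    D = x.data.shape[0]
    z = np.zeros(D)
    for d in range(D):
        # z_d = sum_{k=0}^{d} x_k * y_{d-k}
        z[d] = np.dot(x.data[:d + 1], y.data[d::-1])
    return UTPS(z)
```

With this layering, the free functions carry the actual algorithms
(matching the numpy style of module-level functions such as np.add),
and the operators are thin wrappers that can be added without touching
the algorithms.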

3) coding style of the algorithms in C
I'm making heavy use of pointer arithmetic, which makes the algorithms
a little hard to write and to understand.
The alternative would be plain array indexing, leaving the address
computations to the compiler. Does anyone know how good compilers are
nowadays at optimizing those address computations away?

I'd be very happy to get some feedback.

regards,
Sebastian
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
