I hadn't thought of making two passes. Thanks!

I'd have to have the terms indexed by two different orderings (nondecreasing in x's degree and nondecreasing in y's), or be willing to sort. That seems tricky or slowish, but much better than what I had in mind. It should also work with other orthogonal bases, like the Chebyshev polynomials, using Clenshaw's algorithm (of which Horner's is a special case).
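For the power basis, the two-pass idea is just Horner nested twice. Here's a minimal sketch, assuming a dense representation I made up for illustration: a list of rows, one per power of y (highest first), where each row lists the x-coefficients (highest first):

```racket
#lang racket

;; Standard Horner evaluation: coefficients highest-degree first.
(define (horner coeffs x)
  (for/fold ([sum 0]) ([c (in-list coeffs)])
    (+ c (* sum x))))

;; Evaluate p(x, y) by treating p as an element of R[x][y]:
;; first collapse each row to a value in R via Horner in x,
;; then run Horner again in y over those values.
(define (horner2 rows x y)
  (horner (map (lambda (row) (horner row x)) rows) y))

;; Example: p(x, y) = (2x + 3)y + (x^2 + 1)
;; rows: '((2 3) (1 0 1)), highest y-power first
(horner2 '((2 3) (1 0 1)) 2 5)  ; => (2*2+3)*5 + (2^2+1) = 40
```

Since `horner` only uses ring operations, the same nesting would carry over to any coefficient ring, which is what makes the R[x][y] view attractive.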

To let you know where I'm going with this, I'm considering designs for `math/polynomial'. I want a data type that can represent sparse polynomials over any ring, with any orthogonal basis in that ring. Basic polynomial ops would be +, *, and conversion to another basis or another ring type (e.g. from Exact-Rational to Flonum). A quotient polynomial type would lift polynomial rings to fields.

I'd also like another basic op to be generating the syntax of a function that evaluates the polynomial. Polynomials could then be built at expansion time - say, by building a Taylor, Chebyshev, or minimax approximation - and then evaluated quickly at runtime.
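The "generate the syntax of an evaluator" idea might look something like this sketch: a toy macro (my own, not from any library) that takes literal coefficients and unrolls the Horner recurrence at expansion time, so no loop or coefficient list survives to runtime. A real design would compute the coefficients at expansion time too, from a Taylor or Chebyshev construction:

```racket
#lang racket
(require (for-syntax racket/base))

;; (horner-lambda c ...) expands to a lambda with the Horner
;; recurrence fully unrolled. Coefficients are highest-degree first.
(define-syntax (horner-lambda stx)
  (syntax-case stx ()
    [(_ c ...)
     (let loop ([cs (syntax->list #'(c ...))] [body #'0])
       (if (null? cs)
           #`(lambda (x) #,body)
           (loop (cdr cs) #`(+ #,(car cs) (* x #,body)))))]))

;; (horner-lambda 1 0 1) expands to
;; (lambda (x) (+ 1 (* x (+ 0 (* x (+ 1 (* x 0)))))))
(define f (horner-lambda 1 0 1))  ; x^2 + 1
(f 3)  ; => 10
```

The expansion is pure arithmetic syntax, so it should run at full speed and even constant-fold when the argument is known.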

It looks like you know what you're talking about (you probably understood all the preceding text and can even point out errors :D), so I'd love to hear any ideas you have along these lines.

Neil ⊥

On 10/15/2012 10:46 PM, Gregory Woodhouse wrote:
I suppose I'm just stating the obvious here, but R[x, y] is naturally 
isomorphic to R[x][y]. That is, polynomials in x and y over the ring R have a 
natural interpretation as polynomials in y over the ring R[x] of polynomials 
over R. So, if you had a good library for working with polynomials (of one 
variable) you could just evaluate p(a, b) by evaluating Horner's rule twice.

Sent from my iPhone

On Oct 15, 2012, at 10:15 AM, Neil Toronto <neil.toro...@gmail.com> wrote:

On 10/15/2012 11:49 AM, Jens Axel Søgaard wrote:
2012/10/15 Stephen Bloch <bl...@adelphi.edu>:

But probably slower, at least for exact numbers. If "expt" were implemented naively as "for i = 1 to num", the total number of multiplications would be quadratic in the degree; if it were implemented by repeated squaring, the total number of multiplications would be O(n log n); with Horner's algorithm or your "values" approach, it's linear.

Horner's algorithm gives us

(lambda (poly x)
  ;; coefficients highest-degree first
  (for/fold ([sum 0])
            ([c (polynomial-coeffs poly)])
    (+ c (* sum x))))

If I recall correctly, Horner's algorithm also gives more precise results
when used with inexact (floating-point) real numbers.

That's especially true when the coefficients alternate in sign, which can produce 
massive cancellation. Precision is the main reason to use it, since floating-point 
exponentiation runs in constant time anyway. (Horner's is also a tad faster, saving 
at least one or two flops per coefficient... but probably many more in practice, 
because hardware `pow' is usually inaccurate for non-integer inputs and library 
implementations have to work around that.)

What I'd really like, by the time the math library has polynomials, is a 
Horner-like algorithm for sparse polynomials of the form

    c0 + ... + cn * x^(i_n) * y^(j_n) + ...

The sequence of (i_n,j_n) can only be partially ordered, which makes it tricky.
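One way to sidestep the partial order is to sort each variable separately: a sparse Horner in one variable that multiplies by x^(gap) between consecutive exponents, then lifted to two variables by grouping terms on the y-exponent. This is only a sketch under a representation I invented for illustration (terms as (coeff . exponent) pairs, sorted by decreasing exponent):

```racket
#lang racket

;; Sparse Horner in one variable. Between consecutive terms, multiply
;; the running sum by x^(exponent gap), computed by `expt' (which uses
;; repeated squaring on exact inputs), so the work is proportional to
;; the number of terms plus the log of the degree.
(define (sparse-horner terms x)
  (let loop ([terms terms] [sum 0] [prev-e #f])
    (cond
      [(null? terms)
       (if prev-e (* sum (expt x prev-e)) sum)]
      [else
       (define c (caar terms))
       (define e (cdar terms))
       (loop (cdr terms)
             (if prev-e (+ c (* sum (expt x (- prev-e e)))) c)
             e)])))

;; Two variables: groups is a list of (y-exponent . x-terms), sorted by
;; decreasing y-exponent. Collapse each group with Horner in x, then
;; run sparse Horner in y over the results.
(define (sparse-horner2 groups x y)
  (sparse-horner
   (map (lambda (g) (cons (sparse-horner (cdr g) x) (car g))) groups)
   y))

;; p(x) = 3x^5 + 2x^2 + 7 at x = 2
(sparse-horner '((3 . 5) (2 . 2) (7 . 0)) 2)  ; => 111
;; p(x, y) = x^2 y^3 + 5xy + 2 at x = 2, y = 3
(sparse-horner2 '((3 . ((1 . 2))) (1 . ((5 . 1))) (0 . ((2 . 0)))) 2 3)  ; => 140
```

It does two passes' worth of sorting rather than finding a single clever order, but every multiplication by a gap power is still shared across the terms below it.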

Neil ⊥

____________________
Racket Users list:
http://lists.racket-lang.org/users
