I suppose I'm just stating the obvious here, but R[x, y] is naturally isomorphic to R[x][y]. That is, polynomials in x and y over the ring R have a natural interpretation as polynomials in y over the ring R[x] of polynomials over R. So, if you had a good library for working with polynomials of one variable, you could evaluate p(a, b) with two nested applications of Horner's rule: first evaluate each coefficient (itself a polynomial in x) at a, then evaluate the resulting polynomial in y at b.
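To make that concrete, here is a sketch of the nested evaluation in Racket. The representation (a list of coefficient lists, highest powers first) and the names horner and horner2 are made up for illustration, not taken from any existing library:

#lang racket

;; A bivariate polynomial p(x, y) as a list of coefficient lists:
;; each inner list is a polynomial in x (highest power first), and
;; the outer list runs over powers of y (highest power first).

;; Horner's rule for a univariate polynomial given as a coefficient list.
(define (horner coeffs x)
  (for/fold ([sum 0]) ([c coeffs])
    (+ c (* sum x))))

;; Evaluate p(a, b) by treating p as an element of R[x][y]:
;; evaluate each coefficient (a polynomial in x) at a, then run
;; Horner again, in y, over the results at b.
(define (horner2 coeff-lists a b)
  (horner (for/list ([cs coeff-lists]) (horner cs a)) b))

;; Example: p(x, y) = (x + 2)*y + (3*x + 4), so
;; (horner2 '((1 2) (3 4)) 5 7) = (5 + 2)*7 + (3*5 + 4) = 68
(horner2 '((1 2) (3 4)) 5 7)

The total work is still linear in the number of coefficients, since each coefficient participates in exactly one inner Horner pass plus one step of the outer pass.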
On Oct 15, 2012, at 10:15 AM, Neil Toronto <neil.toro...@gmail.com> wrote:

> On 10/15/2012 11:49 AM, Jens Axel Søgaard wrote:
>> 2012/10/15 Stephen Bloch <bl...@adelphi.edu>:
>>
>>> But probably slower, at least for exact numbers. If "expt" were
>>> implemented naively as "for i = 1 to num", the total number of
>>> multiplications would be quadratic in the degree; if it were
>>> implemented by repeated squaring, the total number of multiplications
>>> would be O(n log n); with Horner's algorithm or your "values"
>>> approach, it's linear.
>>>
>>> Horner's algorithm gives us
>>>
>>> (lambda (poly x)
>>>   (for/fold ([sum 0])
>>>             ([c (polynomial-coeffs poly)])
>>>     (+ c (* sum x))))
>>
>> If I recall correctly, Horner's algorithm also gives more precise
>> results when used with floating-point numbers.
>
> Especially true when the coefficients alternate in sign, producing
> massive cancellation. That's the main reason to use it, since
> floating-point exponentiation runs in constant time. (It's also a tad
> faster, requiring at least 1-2 fewer flops per coefficient... but
> probably many more, because hardware `pow' is usually broken for
> non-rational inputs and implementations have to work around that.)
>
> What I'd really like, by the time the math library has polynomials, is
> a Horner-like algorithm for sparse polynomials of the form
>
>   c_0 + ... + c_n * x^(i_n) * y^(j_n) + ...
>
> The sequence of (i_n, j_n) can only be partially ordered, which makes
> it tricky.
>
> Neil ⊥
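For what it's worth, the R[x][y] view at least gives a baseline for the sparse case: group the terms by their power of y, evaluate each group (a sparse polynomial in x) with a gap-bridging Horner, then do the same in y. Here is a rough sketch, with a made-up term representation and made-up names (sparse-horner, sparse-horner2); it makes no attempt at the cleverer schedules the partial order on (i_n, j_n) might allow:

#lang racket

;; Sparse Horner in one variable. `terms` is a list of
;; (exponent . coefficient) pairs sorted by decreasing exponent;
;; gaps between exponents are bridged with expt (repeated squaring).
(define (sparse-horner terms x)
  (define-values (acc last-e)
    (for/fold ([acc 0] [prev-e #f]) ([t terms])
      (match-define (cons e c) t)
      (values (+ c (* acc (if prev-e (expt x (- prev-e e)) 1)))
              e)))
  (if last-e (* acc (expt x last-e)) acc))

;; Bivariate case via R[x][y]: group the terms by their power of y,
;; evaluate each group (a sparse polynomial in x) at a, then run
;; sparse Horner in y over the group values at b.
;; `terms` is a list of (i j . c), meaning c * x^i * y^j.
(define (sparse-horner2 terms a b)
  (define groups  ; j -> list of (i . c)
    (for/fold ([h (hash)]) ([t terms])
      (match-define (list* i j c) t)
      (hash-update h j (lambda (g) (cons (cons i c) g)) '())))
  (sparse-horner
   (for/list ([j (sort (hash-keys groups) >)])
     (cons j (sparse-horner (sort (hash-ref groups j) > #:key car) a)))
   b))

;; Example: 2*x^3*y^2 + x*y^2 + 5*y + 7 at x = 2, y = 3 is 184.
(sparse-horner2 '((3 2 . 2) (1 2 . 1) (0 1 . 5) (0 0 . 7)) 2 3)

Bridging each gap with expt costs O(log gap) multiplications, so this is linear in the number of terms up to log factors. A smarter ordering could presumably share more partial powers between terms, which I take to be the tricky scheduling problem Neil is pointing at.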