Hi Jim.

> The main problem is that "must" doesn't exist for IEEE floating point
> numbers. You can find the root for one of the endpoints and it may
> return "t = -.00001" even though the value exactly matched the
> endpoint, but after all the math was said and done the answer
> it came up had the bit pattern for a tiny negative number, not
> 0 (or 1.0001). That t value would be rejected and then you'd
> have no roots.

That's true. That's what I meant when I said "finite precision math
doesn't necessarily care what calculus says" ;-)
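To make that failure mode concrete, something along these lines would
accept roots that land just outside the interval because of rounding.
This is only a rough sketch, not the actual Dasher/Pisces code; the
method name and the ROOT_ERR tolerance are invented for illustration:

    // Rough sketch only -- not the actual Dasher/Pisces code. The method
    // name and the ROOT_ERR tolerance are invented for illustration.
    final class RootFilter {
        // Roots this close to an endpoint are treated as the endpoint
        // itself, off by rounding error, not as genuinely outside [0, 1].
        private static final float ROOT_ERR = 1e-4f;

        // Keeps the roots in roots[0..numRoots) that lie in [0, 1],
        // clamping values like -0.00001f or 1.0001f back onto the
        // endpoints instead of rejecting them. Returns the count kept.
        static int filterRoots(float[] roots, int numRoots) {
            int kept = 0;
            for (int i = 0; i < numRoots; i++) {
                float t = roots[i];
                if (t < 0f && t >= -ROOT_ERR) {
                    t = 0f;            // "t = -.00001" really meant the left endpoint
                } else if (t > 1f && t <= 1f + ROOT_ERR) {
                    t = 1f;            // likewise for the right endpoint
                }
                if (t >= 0f && t <= 1f) {
                    roots[kept++] = t; // accept the (possibly clamped) root
                }
            }
            return kept;
        }
    }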
> It's not hurting anything and we may find it useful in other contexts.
> We probably should have put it in a more generic package.

Sounds good. I suppose we can move it if the need ever arises.

> Shouldn't it be [A, B]?

I thought about this when implementing it, but I don't think it mattered
whether it was closed or half open, and the closed interval would have
been somewhat more awkward to implement.

> getMaxAcc functions - don't we want the furthest value from 0,
> positive or negative? You are looking for the most positive value,
> and negative accelerations are equally problematic, aren't they?
> If so then these functions need some work.

You're right about both, but there's a much more serious problem that I
didn't think of when writing them: the value I compute in the if
statement in Dasher:355 is not an upper bound on the acceleration of
the curve. The acceleration is

    C'(t).dot(C''(t)) / len(C'(t))

which in terms of the parameter polynomials is

    (x'(t)*x''(t) + y'(t)*y''(t)) / sqrt(x'(t)^2 + y'(t)^2)

(a small sketch of this computation is below my signature). What those
functions would compute if they were "correct" would be
max(abs(x''(t))) and max(abs(y''(t))), and the sum of these is not
closely related to the maximum absolute acceleration, which is what we
want. Without the upper bound property, I don't think it's a very
meaningful test, and I think we should abandon this optimization.

Do you agree?

Regards,
Denis.
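P.S. For concreteness, this is the quantity I mean. It's a rough
sketch only, not code from Dasher.java; the class and method names are
invented, and dx/dy/ddx/ddy are assumed to hold the coefficients of
x'(t), y'(t), x''(t), y''(t) with the highest power first:

    // Rough sketch of the acceleration above, not code from Dasher.java;
    // the class and method names are invented for this mail.
    final class CurveAccel {
        // Horner evaluation of a polynomial at t (highest power first).
        private static double eval(double[] coeffs, double t) {
            double v = 0.0;
            for (double c : coeffs) {
                v = v * t + c;
            }
            return v;
        }

        // (x'(t)*x''(t) + y'(t)*y''(t)) / sqrt(x'(t)^2 + y'(t)^2), i.e.
        // the component of C''(t) along the unit tangent -- the quantity
        // whose maximum over the curve the test would have to bound.
        static double acceleration(double[] dx, double[] dy,
                                   double[] ddx, double[] ddy, double t) {
            double xp  = eval(dx,  t);
            double yp  = eval(dy,  t);
            double xpp = eval(ddx, t);
            double ypp = eval(ddy, t);
            return (xp * xpp + yp * ypp) / Math.sqrt(xp * xp + yp * yp);
        }
    }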