Hi all, I was wondering how much error is introduced by linearly interpolating between two sample points of 1/x in the range [.5, 1]. I was considering writing a program to determine this empirically, but then I realized I could get a good picture from a single Python expression.
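For what it's worth, the empirical program is easy enough to sketch (a rough sketch only; the 1024 evenly spaced intervals over [.5, 1], with a table entry at each interval endpoint, are my assumptions):

```python
# Sketch of the empirical check: worst-case linear-interpolation error
# for 1/x over [0.5, 1].  Assumes 1024 evenly spaced intervals (i.e. a
# 1025-entry table); sample each interval finely and keep the worst
# absolute error seen.

N = 1024              # number of interpolation intervals (assumed)
h = 0.5 / N           # interval width

def interval_error(a, h, samples=100):
    """Worst |1/x - lerp(x)| on [a, a+h], sampled at `samples` points."""
    fa, fb = 1.0 / a, 1.0 / (a + h)   # table entries at the endpoints
    worst = 0.0
    for i in range(1, samples):
        x = a + h * i / samples
        t = (x - a) / h                # position within the interval
        worst = max(worst, abs(1.0 / x - (fa + t * (fb - fa))))
    return worst

errs = [interval_error(0.5 + k * h, h) for k in range(N)]
print("worst interval:        ", max(errs))   # next to .5, where 1/x curves most
print("interval nearest 1:    ", errs[-1])
```

By my reckoning this should report roughly 5e-7 for the worst interval (the one next to .5, where the curvature 2/x^3 peaks at 16) and roughly 6e-8 for the interval nearest 1, consistent with the standard h^2*max|f''|/8 bound for linear interpolation.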
My reasoning went: the error must be greatest near the middle of an interpolation interval, so I'll calculate the error there. We'll be using a 1024-entry lookup table, and that's close enough to 1000 that I'll work it out with intervals of width .001 instead, just so I can write a simple expression in decimal. (Strictly, 1024 intervals over [.5, 1] would be about .0005 wide; since the error of linear interpolation scales as the square of the interval width, the .001 figure makes this a conservative estimate.)

I imagine (but did not prove) that the maximum error is broadly similar over all the intervals between .5 and 1. The curvature of 1/x, namely 2/x^3, does grow toward .5, by a factor of 8 over the interval, but that only costs about 3 bits, so I'll just see what happens in the middle of the interval nearest 1. That is, I'll take the inverse of .9995 and see how much it differs from the average of 1/.999 and 1. So:

>>> 1/.9995 - (1/.999 + 1)/2
-2.5037543815997765e-07

So there we have it: it's off by about one part in 4 million, or roughly 22 bits of accuracy. That should do. The error will be dominated by the limited precision of the sample points, not by the interpolation. Note that this excellent behavior relies on the fact that the input is normalized.

Regards,
Daniel

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
