Here is a technical question. I know that people always talk about ulps in the context of how good a function implementation is. I think the ulp count tells you how many base-2 digits at the end of the mantissa we cannot rely on.
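In case it helps pin down what I mean, here is roughly how I would measure the error in ulps, using expl as a higher-precision reference (ulps_between is just a helper name I made up, not a libm function):

    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    /* Rough sketch: compare a double result against a higher-precision
     * reference and divide the difference by the size of one ulp at the
     * reference value.  "ulps_between" is a made-up name for this post. */
    static double ulps_between(double computed, long double reference)
    {
        double ref = (double)reference;              /* reference rounded to double */
        double ulp = nextafter(ref, INFINITY) - ref; /* value of one ulp at ref */
        if (ulp == 0.0)
            ulp = DBL_MIN;                           /* guard for degenerate cases */
        return fabsl(computed - reference) / ulp;
    }

    int main(void)
    {
        double x = 0.7;
        printf("exp(%g) is off by %.2f ulps\n", x,
            ulps_between(exp(x), expl((long double)x)));
        return 0;
    }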

So if one were to write a naive implementation of lexp(x) that used the Taylor series if x is positive, and 1/lexp(-x) if x is negative, one could fairly easily estimate an upper bound on the error. It wouldn't be low (like 1 or 2 ulps), but probably rather higher (on the order of 10 or 20 ulps).
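Concretely, I'm imagining something like this (naive_exp is a placeholder name, the slow and sloppy version, not anything a real libm would ship):

    #include <float.h>

    /* The sort of naive implementation I mean: sum x^n/n! until the terms
     * stop mattering, and fall back to 1/exp(-x) for negative arguments. */
    static double naive_exp(double x)
    {
        if (x < 0.0)
            return 1.0 / naive_exp(-x);

        double term = 1.0;   /* current term x^n / n! */
        double sum  = 1.0;   /* partial sum of the series */
        for (int n = 1; term > sum * DBL_EPSILON; n++) {
            term *= x / n;
            sum  += term;
        }
        return sum;          /* each add can lose a little, so the error piles up */
    }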

So do people really work hard to get that last drop of accuracy out of their calculations? Would an error of 10 ulps be considered unacceptable?

Also, looking through the source code for the FreeBSD implementation of exp, I saw that they use a rather clever rational function. (I don't know how they came up with it.)
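From what I can tell, the overall shape of such implementations is argument reduction followed by a short approximation on the reduced interval. Here is a sketch of that structure as I understand it; this is not the actual FreeBSD code, and reduced_exp is a made-up name:

    #include <float.h>
    #include <math.h>

    /* My rough picture of the usual structure: write x = k*ln2 + r with
     * |r| <= 0.5*ln2, evaluate exp(r) on that tiny interval, and scale by
     * 2^k at the end.  The real e_exp.c evaluates a fitted polynomial /
     * rational correction instead of the series below, and splits ln2 into
     * high and low parts so the reduction itself stays accurate. */
    static double reduced_exp(double x)
    {
        double k = round(x / M_LN2);   /* nearest integer multiple of ln2 */
        double r = x - k * M_LN2;      /* reduced argument, |r| <= ~0.3466 */

        /* On this small interval the series converges in a handful of terms. */
        double term = 1.0, sum = 1.0;
        for (int n = 1; fabs(term) > fabs(sum) * DBL_EPSILON; n++) {
            term *= r / n;
            sum  += term;
        }
        return ldexp(sum, (int)k);     /* multiply by 2^k exactly */
    }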

Presumably a big part of the issue is making these functions fast, and a naive Taylor-series implementation wouldn't be fast. But if people ask for lexp rather than exp, they must have already decided that accuracy matters more than speed.