On Sun, 02 Oct 2011 14:06:48 EDT erik quanstrom <[email protected]> wrote:
> > > To my limited knowledge, an OS is integer based, so the floating
> > > point support is mainly "user space" and is, despite IEEE754 and due to
> > > the interaction between hardware, software, and programmer, really
> > > floating, but is there a range given for the association of OS/hardware
> > > telling that say sin(r) or asin(s) is accurate, at worst, at some
> > > epsilon near?
> >
> > It depends on the algorithm used, not on the OS. The C
> > standard leaves accuracy upto the implementation. If you care,
> > you can compare the result of a C function with what bc(1)
> > computes for the same function (by using a suitably large
> > scale).
>
> unless the hardware doesn't actually have floating point, doesn't
> this depend only on the hardware? (c.f. /sys/src/libc/386/387/sin.s)
>
> 754 defines the results to be accurate to within 1 bit. obviously
> that's as good as you can get. minix's math(3) points to a collection
> of detailed man pages on the subject.
IEEE 754-1985 didn't specify circular, hyperbolic, or other advanced
functions, so hardware can be 754 compliant without implementing them,
and in any case the standard cannot dictate the accuracy of functions
it does not specify. An iterative algorithm may lose more than 1 bit
of accuracy, since the iterations are not carried out in infinite
precision, so one cannot assume accuracy to within a bit even where
these functions are implemented in h/w. For x86, the accuracy may be
specified in some Intel or AMD manual.
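As a concrete version of the bc(1) suggestion quoted above, here is a
sketch in Python (standing in for C plus bc): the decimal module plays
the role of bc's large scale, summing the Taylor series for sin to
roughly 50 digits and comparing the result against the libm-backed
math.sin. The name sin_decimal and the digit counts are arbitrary
choices for illustration, not anything from the thread.

```python
import math
from decimal import Decimal, getcontext

def sin_decimal(x, digits=50):
    # High-precision sin via its Taylor series, with decimal in the
    # role of bc(1) running at a large scale.  x is converted exactly
    # from the double, so this is the true sin of that double.
    getcontext().prec = digits + 10          # working precision + guard digits
    xd = Decimal(x)
    total, term, k = Decimal(0), xd, 0
    while abs(term) > Decimal(10) ** -(digits + 5):
        total += term
        term *= -xd * xd / ((2 * k + 2) * (2 * k + 3))
        k += 1
    return +total                            # round to context precision

ref = sin_decimal(1.0)                       # ~50-digit reference for sin(1)
lib = Decimal(math.sin(1.0))                 # exact value of what libm returned
print(ref)
print(abs(ref - lib))                        # on a good libm, well under 1 ulp
```

An ulp of a double near 0.84 is about 1.1e-16, so a difference below
that is as good as the format allows.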
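And a small illustration of the point about iterative algorithms losing
accuracy: a textbook Taylor-series sin, summed in ordinary doubles at
x = 30, suffers heavy cancellation (the intermediate terms reach about
7.7e11 before collapsing to a result near -0.99), so the answer is off
by far more than 1 bit even though every individual operation is
correctly rounded. naive_sin is a deliberately bad toy, not how any
real libm computes sin.

```python
import math

def naive_sin(x, terms=120):
    # Mathematically exact Taylor series for sin, summed left to right
    # in double precision; numerically disastrous for large x because
    # the partial sums grow huge before cancelling.
    total, term = 0.0, x
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

x = 30.0
print(naive_sin(x))                 # roughly 12 of the 16 digits cancel,
print(math.sin(x))                  # so only a few digits survive
print(abs(naive_sin(x) - math.sin(x)))
```

The rounding happens at the magnitude of the largest partial sum
(ulp of 7.7e11 is about 1e-4), which is why the final error dwarfs
the 1-ulp figure the thread discusses.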
