> > To my limited knowledge, an OS is integer based, so floating-point
> > support lives mainly in "user space" and, despite IEEE 754 and due to
> > the interaction between hardware, software, and programmer, is really
> > floating. But is there a bound documented for a given OS/hardware
> > combination saying that, say, sin(r) or asin(s) is accurate, at worst,
> > to within some epsilon?
> 
> It depends on the algorithm used, not on the OS. The C
> standard leaves accuracy up to the implementation. If you care,
> you can compare the result of a C function with what bc(1)
> computes for the same function (using a suitably large
> scale).

unless the hardware doesn't actually have floating point, doesn't
this depend only on the hardware?  (cf. /sys/src/libc/386/387/sin.s)

754 requires the basic operations to be correctly rounded (error of
at most half an ulp); for transcendentals like sin it only recommends
correct rounding, so the usual target is accuracy to within 1 bit.
obviously that's about as good as you can get.  minix's math(3)
points to a collection of detailed man pages on the subject.

- erik
