On Sat, 10 Nov 2007 [EMAIL PROTECTED] wrote:
Quoting [EMAIL PROTECTED]:
Then, a *rational* approximation gives you the same precision with
fewer coefficients. Nowadays division is not so much more expensive
than multiplication, so the efficiency doesn't suffer much.
It might not
On Sat, 10 Nov 2007, Daniel Fischer wrote:
Since you seem to know a lot about these things, out of curiosity, do you know
how these functions are actually implemented? Do they use Taylor series or
other techniques?
I think that for sin and cos the Taylor series are a good choice. For
other
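A minimal sketch of the technique being discussed: a truncated Taylor series for sin. The helper name `sinTaylor` is ours, for illustration only; a real libm first reduces the argument to a small interval around 0 and then evaluates a fitted polynomial.

```haskell
-- Truncated Taylor series for sin: sum of (-1)^k x^(2k+1) / (2k+1)!
-- Illustration only; not how production math libraries are written.
sinTaylor :: Int -> Double -> Double
sinTaylor nTerms x =
  sum [ (-1) ^ k * x ^ (2 * k + 1) / fromIntegral (fact (2 * k + 1))
      | k <- [0 .. nTerms - 1] ]
  where
    fact :: Int -> Integer
    fact m = product [1 .. fromIntegral m]
```

With around ten terms this already agrees with the library sin to roughly machine precision for |x| <= 1, which is why the series looks attractive at first glance.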
Brent Yorgey wrote:
More generally, this is due to the fact that floating-point numbers can
only have finite precision, so a little bit of rounding error is
inevitable when dealing with irrational numbers like pi. This problem
is in no way specific to Haskell.
But some systems always
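A quick check of Brent's point in Haskell (a sketch; the exact tiny value depends on the platform's libm, so only its rough magnitude is claimed here):

```haskell
-- pi here is the Double closest to the irrational pi, so sin applied
-- to it cannot return exactly 0; on IEEE-754 doubles the result is a
-- nonzero value on the order of 1e-16.
sinOfPi :: Double
sinOfPi = sin pi

piIsNotExact :: Bool
piIsNotExact = sinOfPi /= 0.0
```

In GHCi, `sinOfPi` prints a number around 1e-16 rather than 0.0, and `piIsNotExact` is True.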
Carl Witty writes:
On Sat, 2007-11-10 at 01:29 +0100, Daniel Fischer wrote:
... do you know
how these functions are actually implemented? Do they use Taylor
series or other techniques?
I don't really know that much about it;
... It seems likely that this instruction (and library
G'day all.
Quoting [EMAIL PROTECTED]:
== No, Gentlemen, nobody rational would use Taylor nowadays! It is
lousy.
This is correct. Real implementations are far more likely to use the
minimax polynomial of some order. However...
Then, a *rational* approximation gives you the same precision
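To illustrate what a rational approximation looks like, here is the [3/2] Padé approximant of sin around 0 (derived for this sketch; production code would instead use coefficients fitted for minimax error over the reduced range):

```haskell
-- [3/2] Pade approximant of sin around 0:
--   sin x ~ (x - 7 x^3 / 60) / (1 + x^2 / 20)
-- One division and a handful of multiplications; accurate to roughly
-- 2e-4 on [-1, 1], which shows how few coefficients a rational form needs.
sinPade :: Double -> Double
sinPade x = (x - 7 * x3 / 60) / (1 + x2 / 20)
  where
    x2 = x * x
    x3 = x2 * x
```

Higher-order rational approximants of the same shape reach full double precision with fewer total coefficients than a pure polynomial, at the cost of that one division.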
Hello All,
Can anybody explain the results for 1.0, 2.0 and 3.0 times pi below?
GHCi yields the same results. I did search the Haskell report and my
text books, but to no avail. Thanks in advance,
Hans van Thiel
Hugs> sin (0.0 * pi)
0.0
Hugs> sin (0.5 * pi)
1.0
Hugs> sin (1.0 * pi)
Hans van Thiel wrote:
Can anybody explain the results for 1.0, 2.0 and 3.0 times pi below?
It's due to rounding error in the platform's math library. You'll see
the same results in most other languages that call into libm.
b
On Nov 9, 2007 2:08 PM, Hans van Thiel [EMAIL PROTECTED] wrote:
Hello All,
Can anybody explain the results for 1.0, 2.0 and 3.0 times pi below?
GHCi yields the same results. I did search the Haskell report and my
text books, but to no avail. Thanks in advance,
Hans van Thiel
Hugs sin (0.0
On Nov 9, 2007 11:30 AM, Brent Yorgey [EMAIL PROTECTED] wrote:
More generally, this is due to the fact that floating-point numbers can only
have finite precision
This popped up on reddit recently:
http://blogs.sun.com/jag/entry/transcendental_meditation .
Interestingly, AMD did apparently
On Fri, 2007-11-09 at 14:30 -0500, Brent Yorgey wrote:
On Nov 9, 2007 2:08 PM, Hans van Thiel [EMAIL PROTECTED] wrote:
Hello All,
Can anybody explain the results for 1.0, 2.0 and 3.0 times pi
below?
GHCi yields the same results. I did search the Haskell
Am Freitag, 9. November 2007 21:02 schrieb Hans van Thiel:
On Fri, 2007-11-09 at 14:30 -0500, Brent Yorgey wrote:
On Nov 9, 2007 2:08 PM, Hans van Thiel [EMAIL PROTECTED] wrote:
Hello All,
Can anybody explain the results for 1.0, 2.0 and 3.0 times pi
below?
On Fri, 2007-11-09 at 21:34 +0100, Daniel Fischer wrote:
Am Freitag, 9. November 2007 21:02 schrieb Hans van Thiel:
On Fri, 2007-11-09 at 14:30 -0500, Brent Yorgey wrote:
On Nov 9, 2007 2:08 PM, Hans van Thiel [EMAIL PROTECTED] wrote:
Hello All,
Can anybody explain the
Am Samstag, 10. November 2007 00:36 schrieb Carl Witty:
Actually, there are about 95 million floating-point values in the
vicinity of pi/2 such that the best possible floating-point
approximation of sin on those values is exactly 1.0 (this number is
2^(53/2), where 53 is the number of mantissa bits)
On Sat, 2007-11-10 at 01:29 +0100, Daniel Fischer wrote:
The above essay was written after much experimentation using the MPFR
library for correctly-rounded arbitrary-precision floating point, as
exposed in the Sage computer algebra system.
Carl Witty
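Carl's claim can be probed locally with a small sketch: stepping a Double one unit in the last place away from pi/2, sin still returns exactly 1.0 on a typical IEEE-754 libm, because sin is flat to second order at its maximum. The helper `nextUp` is our own (base has no nextafter) and assumes a positive finite argument:

```haskell
-- Step to the next representable Double above x, by bumping the
-- mantissa from decodeFloat. Our own helper for this sketch only;
-- assumes x is positive, finite, and normalized.
nextUp :: Double -> Double
nextUp x = let (m, e) = decodeFloat x in encodeFloat (m + 1) e

-- Near pi/2 the error term d^2/2 is around 1e-32, far below the
-- 1.1e-16 needed to round away from 1.0, so both values map to 1.0.
flatAtTop :: [Bool]
flatAtTop = [ sin x == 1.0 | x <- [pi / 2, nextUp (pi / 2)] ]
```

This checks only two of the roughly 95 million such inputs, of course, but it shows the mechanism: many distinct arguments share the single best representable answer 1.0.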
Thanks a lot.
Since you seem