On Saturday, 28 June 2014 at 09:07:17 UTC, John Colvin wrote:
On Saturday, 28 June 2014 at 06:16:51 UTC, Walter Bright wrote:
On 6/27/2014 10:18 PM, Walter Bright wrote:
On 6/27/2014 4:10 AM, John Colvin wrote:
The number of algorithms that are both numerically stable/correct and benefit
significantly from >64-bit doubles is very small.

To be blunt, baloney. I ran into these problems ALL THE TIME when doing
professional numerical work.


Sorry for being so abrupt. FP is important to me - it's not just about performance, it's also about accuracy.

I still maintain that the need for the precision of 80-bit reals is a niche demand. It's a very important niche, but it doesn't justify making its relatively extreme requirements the default. Someone writing a matrix inversion has only themselves to blame if they don't know plenty of numerical analysis and look very carefully at the specifications of all the operations they are using.

Paying the cost of moving to/from the x87 FPU and missing out on increasingly large SIMD units: these make everyone pay the price.

Inclusion of the 'real' type in D was a great idea, but std.math should be overloaded for float/double/real so that people can choose where they stand on the performance/precision front.

Would it make sense to have std.math and std.fastmath, or something along these lines?
