On 5/16/16 8:37 AM, Walter Bright wrote:
> On 5/16/2016 3:27 AM, Andrei Alexandrescu wrote:
>> I'm looking, for example, at
>> http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/ and see
>> that on all Intel and compatible hardware, the speed of 80-bit
>> floating point operations ranges between much slower and
>> disastrously slower.

> It's not a totally fair comparison. A matrix inversion algorithm that
> compensates for cumulative precision loss involves executing a lot more
> FP instructions (I don't know the ratio).

It is rare to need to actually compute the inverse of a matrix. Most of the time it's of interest to solve a linear equation of the form Ax = b, for which a variety of good methods exist that don't entail computing the actual inverse.

I emphasize the danger of this kind of thinking: one or two anecdotes trump a lot of other evidence. This is what happened with input vs. forward C++ iterators as the main motivator for a variety of concepts designs.

> Some counterpoints:

Glad to see these!

> 1. Go uses 256-bit soft float for constant folding.

Go can afford it because it does no interesting things during compilation. We can't.

> 2. Speed is hardly the only criterion. Quickly getting the wrong
> answer (and not just a few bits off, but total loss of precision) is
> of no value.

Of course. But it turns out the precision argument loses to the speed argument.

A. It's been many, many years, and very few people, if any, commend D for its superior approach to FP precision.

B. In contrast, a bunch of folks complain about anything slow, be it during compilation or at runtime.

Good algorithms lead to good precision, not 16 additional bits. Precision is overrated. Speed isn't.

> 3. Supporting 80-bit reals does not take away from the speed of
> floats/doubles at runtime.

Fast compile-time floats are of strategic importance to us. Give me fast FP during compilation and I'll make it go slow again, by putting it to amazing amounts of work.

> 4. Removing 80-bit reals will consume resources (adapting the test
> suite, rewriting the math library, ...).

I won't argue with that! Let's just focus on the right things: good, fast, streamlined computing using the appropriate hardware.

> 5. Other languages not supporting it means D has a capability they
> don't have. My experience with selling products is that if you have
> an exclusive feature that a particular customer needs, it's a slam
> dunk sale.

Again: I'm not seeing people coming out of the woodwork to praise D's precision. What they would indeed enjoy is amazing FP use during compilation, and that can be done only if CTFE FP is __FAST__. That _is_, indeed, the capability others are missing!

> 6. My other experience with feature sets is if you drop things that
> make your product different, and concentrate on matching feature
> checklists with Major Brand X, customers go with Major Brand X.

This is true in principle but too vague to be relevant here. Again, what evidence do you have that D's additional precision is revered? I've seen none in over a decade.

> 7. 80-bit reals are there and they work. The support is mature, and
> is rarely worked on, i.e. it does not consume resources.

Yeah, I just don't want it used in any new code. Please. It's like using lead to build boats.

> 8. Removing it would break an unknown amount of code, and there's no
> reasonable workaround for those that rely on it.

Let's do what everybody did to x87: keep supporting it, but slowly and surely drive it to obsolescence.

That's the right way.


Andrei
