On 5/16/2016 7:32 AM, Andrei Alexandrescu wrote:
> It is rare to need to actually compute the inverse of a matrix. Most of the time
> it's of interest to solve a linear equation of the form Ax = b, for which a
> variety of good methods exist that don't entail computing the actual inverse.

I was solving n equations with n unknowns.
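
For reference, the standard no-inverse method is Gaussian elimination
with partial pivoting. A minimal sketch in D (illustrative only; no
handling of singular or ill-conditioned systems):

import std.algorithm : swap;
import std.math : abs;

// Solve Ax = b in place; on return, b holds x. The inverse is never formed.
void solve(double[][] a, double[] b)
{
    immutable n = b.length;
    foreach (k; 0 .. n)
    {
        // Partial pivoting: bring up the row with the largest |a[i][k]|.
        // This keeps every multiplier m below at |m| <= 1, which is what
        // limits the rounding-error growth of the naive textbook version.
        size_t p = k;
        foreach (i; k + 1 .. n)
            if (abs(a[i][k]) > abs(a[p][k]))
                p = i;
        swap(a[k], a[p]);
        swap(b[k], b[p]);

        // Eliminate column k below the pivot.
        foreach (i; k + 1 .. n)
        {
            immutable m = a[i][k] / a[k][k];
            foreach (j; k .. n)
                a[i][j] -= m * a[k][j];
            b[i] -= m * b[k];
        }
    }

    // Back substitution.
    foreach_reverse (i; 0 .. n)
    {
        foreach (j; i + 1 .. n)
            b[i] -= a[i][j] * b[j];
        b[i] /= a[i][i];
    }
}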

> I emphasize the danger of this kind of thinking: 1-2 anecdotes trump a lot of
> other evidence. This is what happened with input vs. forward C++ iterators as
> the main motivator for a variety of concepts designs.

What I did was implement the algorithm out of my calculus textbook. Sure, it's a naive algorithm - but it is highly unlikely that untrained FP programmers know intuitively how to deal with precision loss. I bring up our very own Phobos sum algorithm, which was re-implemented later with the Kahan method to reduce precision loss.
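
For the record, the Kahan fix is tiny; here's a sketch of the technique
(not the actual Phobos code, just the idea behind it):

// Compensated (Kahan) summation: carry each addition's rounding error
// forward instead of discarding it.
double kahanSum(const double[] xs)
{
    double sum = 0.0;
    double c = 0.0;             // running compensation for lost low bits
    foreach (x; xs)
    {
        immutable y = x - c;    // apply the correction first
        immutable t = sum + y;  // low-order bits of y can be lost here
        c = (t - sum) - y;      // algebraically zero; numerically, the lost bits
        sum = t;
    }
    return sum;
}

Note that an untrained eye (or an overly aggressive optimizer) would
simplify c away to zero, which is exactly the point: the correct version
does not look obviously different from the naive one.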


>> 1. Go uses 256 bit soft float for constant folding.
>
> Go can afford it because it does no interesting things during compilation. We
> can't.

The "we can't" is conjecture at the moment.


>> 2. Speed is hardly the only criterion. Quickly getting the wrong answer
>> (and not just a few bits off, but total loss of precision) is of no value.
>
> Of course. But it turns out the precision argument loses to the speed argument.
>
> A. It's been many many years and very few if any people commend D for its
> superior approach to FP precision.
>
> B. In contrast, a bunch of folks complain about anything slow, be it during
> compilation or at runtime.

D's support for reals does not negatively impact the speed of float or double computations.
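
To put a number on what real buys: on x86/x86-64 it is the 80 bit x87
format, and the extra mantissa bits are directly observable. A toy demo
(output is platform dependent; on some targets real is just double, and
constant folding may itself run at real precision, hence the runtime
variables):

import std.stdio;

void main()
{
    // Decimal digits of precision; typically 6 / 15 / 18 on x86.
    writeln(float.dig, " ", double.dig, " ", real.dig);

    // 1e-17 is below double's epsilon (~2.2e-16) but above real's (~1.1e-19).
    double one = 1.0, tiny = 1e-17;
    writeln((one + tiny) - one);      // prints 0: the addend vanished

    real oneR = 1.0L, tinyR = 1e-17L;
    writeln((oneR + tinyR) - oneR);   // prints ~1e-17: still representable
}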


>> 3. Supporting 80 bit reals does not take away from the speed of
>> floats/doubles at runtime.
>
> Fast compile-time floats are of strategic importance to us. Give me fast FP
> during compilation, I'll make it go slow (whilst put to do amazing work).

I still have a hard time seeing what you plan to do at compile time that would involve tens of millions of FP calculations.
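
For scale, the compile-time FP that D already does is CTFE constant
folding, and even a deliberately heavy (hypothetical) example like this
one is only on the order of 100K operations:

// The whole loop runs inside the compiler; no FP happens at runtime.
enum basel = () {
    double s = 0.0;
    foreach (k; 1 .. 100_000)
        s += 1.0 / (cast(double) k * k);
    return s;
}();

static assert(basel > 1.6449 && basel < 1.6450);  // converging to pi^2/6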


>> 6. My other experience with feature sets is if you drop things that make
>> your product different, and concentrate on matching feature checklists
>> with Major Brand X, customers go with Major Brand X.
>
> This is true in principle but too vague to be relevant. Again, what evidence do
> you have that D's additional precision is revered? I see none, over like a
> decade.

Fortran supports quad precision (128-bit floats) as standard.

https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format
