On 11/5/2013 8:19 AM, Don wrote:
On Wednesday, 30 October 2013 at 18:28:14 UTC, Walter Bright wrote:
Not exactly what I meant - I mean the algorithm should be designed so that
extra precision does not break it.
Unfortunately, that's considerably more difficult than writing an algorithm for
a known precision.
And it is impossible in any case where you need full machine precision (which
applies to practically all library code, and most of my work).
I have a hard time buying this. For example, when I wrote matrix inversion code,
more precision always gave more accurate results.
A compiler intrinsic, which generates no code (simply inserting a barrier for
the optimiser), sounds like the correct approach.
Coming up for a name for this operation is difficult.
float toFloatPrecision(real arg) ?