On Monday, 16 May 2016 at 09:54:51 UTC, Iain Buclaw wrote:
On 16 May 2016 at 10:52, Ola Fosheim Grøstad via Digitalmars-d <[email protected]> wrote:
On Monday, 16 May 2016 at 08:47:03 UTC, Iain Buclaw wrote:

But you *didn't* request coercion to 32 bit floats. Otherwise you would have used 1.30f.


        const float f = 1.3f;
        float c = f;
        assert(c*1.0 == f*1.0); // Fails! SHUTDOWN!



You're still using doubles. Are you intentionally missing the point?

What is your point? My point is that no other language I have ever used has overruled my request for a coercion to single precision. And yes, binding a value to a single-precision float does qualify as a coercion on every platform I have ever used.

I should not have to implement a function with float parameters if I have a working function with real parameters, just to get reasonable behaviour:

void assert_equality(real x, real y) { assert(x == y); }

void main(){
  const float f = 1.3f;
  float c = f;
  assert_equality(f,c); // Fails!
}


Stuff like this makes the codebase brittle.

Not being able to control precision in unit tests makes tests potentially succeed when they should fail. That makes testing floating point code for correctness virtually impossible in D.

I don't use 32-bit float scalars to save space. I use them to get higher performance and to be able to turn the code into pure SIMD code at a later stage. So I require SIMD-like semantics for float scalars. Anything less is unacceptable.

If I want high precision at compile time, then I use rational numbers like std::ratio in C++, which gives me _exact_ values. If I want something more advanced, then I use Maxima and literally copy in the results.

C++17 is getting hexadecimal floating-point literals for a reason: accurate bit-level representation.

