I mean "surprising inequality", but I'm not talking about `<` or `>`. Here are 
the wrappers:

  * https://github.com/nim-lang/Nim/blob/master/lib/system.nim#L1079



From the manual:

> An implementation should always use the maximum precision available to 
> evaluate floating point values at compile time;

So if I _store_ a float as float32, I see loss of precision. But if I 
type-convert to float32 and compare, I see none.
    
    
    let x: float64 = 47.11
    assert x == 47.11  # passes
    let y: float32 = 47.11'f32
    assert y == 47.11  # fails, since the down-converted y becomes 47.11000061035156
    assert y == 47.11'f32  # also fails! but why?
    assert y == float32(47.11)  # same question
    

I guess the conversion from the default float64 down to float32 does not actually 
occur unless absolutely necessary. I can understand this behavior, but since 
`cfloat` is 32-bit, it causes problems with **c2nim**.
