This question was literally asked yesterday!

https://groups.google.com/forum/#!topic/julia-users/JQQJN58-Zes
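
In short, it isn't a representation difference: Julia, MATLAB, and C all use
the same IEEE 754 doubles. Julia prints the shortest decimal string that reads
back to the same double, while MATLAB's and C's default output formats round
to fewer digits and so hide the error. A quick sketch of the idea (assuming
current Julia, where @printf needs "using Printf"; in the 0.3/0.4-era Julia of
this thread @printf lived in Base):

julia> x = 0.1 - 0.3
-0.19999999999999998

julia> using Printf

julia> @printf("%g\n", x)     # rounded to 6 significant digits, so it shows -0.2
-0.2

julia> @printf("%.17g\n", x)  # full precision: the stored double is the same everywhere
-0.19999999999999998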

On Friday, September 4, 2015 at 8:57:38 AM UTC+2, Konstantinos Prokopidis 
wrote:
>
> hi,
>
> I performed some tests of Julia's floating-point precision as follows (on
> my 32-bit computer):
>
> julia> 0.1-0.3
> -0.19999999999999998
>
> julia> 0.1f0-0.3f0
> -0.20000002f0
>
> julia> 0.1e0-0.3e0
> -0.19999999999999998
>
> julia> float32(0.1-0.3)
> -0.2f0
>
> julia> float64(0.1-0.3)
> -0.19999999999999998
>
> In contrast, in MATLAB and C (using either double or float) we get
> -0.2
>
> Why are there these differences? Is the number representation different in
> the C language?
>
> Thanks a lot
>
>
