On 7/18/06, Bradley Arsenault <[EMAIL PROTECTED]> wrote:
So why do you think that a compiler would intentionally leave bits
uninitialized in the mantissa when it could initialize them and add
better determinism?

All of my uses of float are casts, temporaries, and un-accumulated
values from integers. There's no reason for the compiler to leave bits
uninitialized when it's being provided the same deterministic value as
input.
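
For what it's worth, here's a tiny standalone test (just a sketch, not
glob2 code) of what I mean by a cast being deterministic: an integer
with magnitude below 2^53 converts to double exactly, so the resulting
bit pattern depends only on the input value:

    // not glob2 code -- standalone illustration
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        int n = 1000;
        double d = static_cast<double>(n);   // exact: |n| < 2^53

        uint64_t bits;
        std::memcpy(&bits, &d, sizeof bits); // raw IEEE-754 encoding
        std::printf("%d -> %016llx\n", n, (unsigned long long)bits);
        return 0;
    }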

Float unsafety is caused mainly by accumulated error and by a
different exponent and mantissa representing the same value (50 * 0.1
vs. 5 * 1.0).
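
As a rough standalone illustration (again, not glob2 code), you can see
the accumulation effect by comparing the bit pattern of a running sum
of 0.1 against the directly computed value; on typical IEEE-754
hardware the two usually differ in the last bits of the mantissa:

    // not glob2 code -- standalone illustration
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    static uint64_t bits_of(double d)
    {
        uint64_t b;
        std::memcpy(&b, &d, sizeof b);
        return b;
    }

    int main()
    {
        double acc = 0.0;
        for (int i = 0; i < 50; ++i)
            acc += 0.1;             // rounding error accumulates each step

        double direct = 5.0 * 1.0;  // exact

        std::printf("acc    = %.17g (%016llx)\n",
                    acc, (unsigned long long)bits_of(acc));
        std::printf("direct = %.17g (%016llx)\n",
                    direct, (unsigned long long)bits_of(direct));
        std::printf("bitwise equal: %s\n",
                    bits_of(acc) == bits_of(direct) ? "yes" : "no");
        return 0;
    }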

If you're willing to explain yourself, then good, but I see no reason
for the compiler to leave uninitialized bits inside a value that's
being initialized.



>
> Do _not_ _ever_ depend on doubles for bit perfect reproducibility.
>
> > I'll fix this as soon as I have time. Sorry everyone, my mistake.
>
> CU
>
>  -- CFD


--
Start and finish, Bradley Arsenault



Oops, I didn't notice: I am using an accumulated value. But still,
it's provided the same values (because they're from casts), using the
same operations (which are deterministic in the computer's sense), so
there shouldn't be a problem with the same computer and same compiler.
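
To make "same computer, same compiler" checkable, here is a rough
sketch (not the actual glob2 sync code; mix_double is a made-up helper)
of folding the bit patterns of the simulation's doubles into a checksum
that two runs, or two machines, could compare:

    // not glob2 code -- standalone illustration
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Fold the raw bit pattern of each double into a running checksum.
    static uint32_t mix_double(uint32_t sum, double d)
    {
        uint64_t b;
        std::memcpy(&b, &d, sizeof b);
        sum = (sum << 5) | (sum >> 27);      // rotate left by 5
        sum ^= (uint32_t)(b ^ (b >> 32));    // fold both halves in
        return sum;
    }

    int main()
    {
        uint32_t sum = 0;
        double acc = 0.0;
        for (int i = 0; i < 1000; ++i)
        {
            acc += (double)i * 0.1;          // stand-in for a simulation step
            sum = mix_double(sum, acc);
        }
        std::printf("checksum: %08x\n", sum);
        return 0;
    }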


--
Start and finish, Bradley Arsenault


_______________________________________________
glob2-devel mailing list
[email protected]
http://lists.nongnu.org/mailman/listinfo/glob2-devel
