On Thursday, 13 October 2022 at 19:27:22 UTC, Steven Schveighoffer wrote:
On 10/13/22 3:00 PM, Sergey wrote:
[...]

It doesn't really look that far off. You can't expect floating-point parsing to be exact: binary floating point cannot represent most decimal numbers perfectly, and the discrepancies show up in the least significant bits.
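(A generic illustration, not from the original thread: Python, like D's `double`, uses IEEE binary64, so it can show the exact value a decimal literal actually parses to.)

```python
from decimal import Decimal

# Decimal(float) displays the exact binary64 value that the literal 0.1 parses to:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# The familiar consequence of that representation error:
assert 0.1 + 0.2 != 0.3
```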

[...]
To me it looks like there is a conversion to `real` (80 bit floats) somewhere in the D code and that the other languages stay in `double` mode everywhere. Maybe forcing `double` by disabling x87 on the D side would yield the same results as the other languages?
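To see why an 80-bit `real` detour can change the answer, here is a sketch (my own constructed example, not from the thread) that simulates round-to-nearest-even at x87 extended precision (64-bit significand) versus `double` precision (53-bit significand) using exact rationals. The test value is deliberately chosen to sit just above the midpoint of two adjacent doubles, so rounding through the wider format first ("double rounding") lands on a different double than rounding directly:

```python
from fractions import Fraction

def round_nearest_even(x: Fraction, sig_bits: int) -> Fraction:
    """Round positive x to the nearest binary float with a sig_bits-bit
    significand (unbounded exponent range), ties to even."""
    # Find e such that 2**e <= x < 2**(e+1).
    e = x.numerator.bit_length() - x.denominator.bit_length()
    if x < Fraction(2) ** e:
        e -= 1
    ulp = Fraction(2) ** (e - sig_bits + 1)  # spacing of representable values near x
    q = x // ulp                             # whole ulps below x (floor)
    r = x - q * ulp                          # remainder, 0 <= r < ulp
    if r * 2 > ulp or (r * 2 == ulp and q % 2 == 1):
        q += 1                               # round up past halfway, or break a tie to even
    return q * ulp

# A value just above the midpoint of the adjacent doubles 2**53 and 2**53 + 2:
v = Fraction(2**53 + 1) + Fraction(1, 2**20)

direct   = round_nearest_even(v, 53)                           # parse straight to double
via_real = round_nearest_even(round_nearest_even(v, 64), 53)   # x87 real first, then double

print(direct)    # 9007199254740994  (2**53 + 2)
print(via_real)  # 9007199254740992  (2**53): the first rounding lands exactly on the
                 # midpoint, and the second resolves that tie to the even double below
```

This is exactly the kind of last-bit difference one would expect if the D code rounds through `real` somewhere while the other languages stay in `double` throughout.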
