On Saturday, 9 February 2019 at 04:30:22 UTC, DanielG wrote:
On Saturday, 9 February 2019 at 03:33:13 UTC, Murilo wrote:
Thanks, but here is the situation: I use printf("%.20f", 0.1); in both C and D. C prints 0.10000000000000000555, whereas D prints 0.10000000000000001000. So I understand your point, D rounds off more, but doesn't that cause a loss of precision? Isn't that bad if you are working with math or physics, for example?
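
A minimal D program reproducing the comparison could look like the sketch below. Using writefln on the D side is an assumption here (the post only mentions printf), and which digits you actually see depends on the formatter in the D runtime and the C library the program links against.

import std.stdio : writefln;
import core.stdc.stdio : printf;

void main()
{
    // D's own formatter (std.format):
    writefln("%.20f", 0.1);

    // The C library's printf, called from D:
    printf("%.20f\n", 0.1);
}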

0.1 as a 64-bit double is actually 0.1000000000000000055511151231257827021181583404541015625 behind the scenes.

So why is it important that it displays as:

0.10000000000000000555

versus

0.10000000000000001000

?

*Technically* the C version is closer to the internal binary representation (D stops at 17 significant digits, which is already enough to uniquely identify a double, and pads the rest with zeros), but both strings describe exactly the same stored value. Since there's no exact way of representing 0.1 in binary floating point, the computer has no way of knowing you really mean "0.1 decimal". If that accuracy is important to you, you'll probably have to look into software-only number representations for arbitrary decimal precision (I've not explored them in D, but other languages have things like "BigDecimal" data types)
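
One way to convince yourself that both strings name the same stored value is to look at the raw bits instead of a decimal rendering. A small sketch (the hex-float and bit-pattern values in the comments are the standard IEEE 754 double encoding of 0.1):

import std.stdio : writefln;

void main()
{
    double x = 0.1;

    // Hexadecimal float notation shows mantissa and exponent exactly;
    // mathematically, this double is 0x1.999999999999ap-4.
    writefln("%a", x);

    // The same 64 bits viewed as an integer: 0x3FB999999999999A.
    writefln("0x%016X", *cast(ulong*) &x);
}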

Thank you very much for clearing this up. But how do I make D behave just like C? Is there a way to do that?
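
One possible direction, offered only as a sketch and not necessarily the settled answer: since the extra digits in the C output come from the C library's formatting routine, you can hand the formatting to the C runtime from D via core.stdc.stdio. Whether the result then matches your C program depends on which C runtime your D binary actually links against.

import std.stdio : writeln;
import std.string : fromStringz;
import core.stdc.stdio : snprintf;

void main()
{
    char[64] buf;

    // Let the C runtime render the digits; the output depends on
    // that runtime's snprintf implementation.
    snprintf(buf.ptr, buf.length, "%.20f", 0.1);
    writeln(fromStringz(buf.ptr));
}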
