Thanks to all who replied.

On Thu, May 30, 2013 at 10:07 PM, Diggory <digg...@googlemail.com> wrote:
> Since D does all operations at highest possible precision anyway (even for
> double or float) it only makes a difference when the value is being stored
> to memory and then read back again.
But isn't this true even for C/C++, i.e. that the actual FP calculation is done at a higher precision than what the type exposes? And isn't that precisely so that rounding errors are minimized? (I mean, I can see how repeated multiplications, square roots and the like would totally devalue the LSBs of a double if the calculations were done only in double precision.)

So IIUC the only new thing D does is to actually *expose* the full machine precision, via real, for those who want it? But really, how much use is that? A friend of mine was warning (in general, not particularly about D) against falling into the illusion that higher precision == higher accuracy. If I use 80-bit FP as my data storage type, then the LSBs would retain their significance only if an even higher precision were actually used inside the processor for the calculations, right? So in the end, what is real actually useful for?

--
Shriramana Sharma ஶ்ரீரமணஶர்மா श्रीरमणशर्मा
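P.S. To make concrete what I mean by the LSBs losing significance, here is the kind of toy experiment I have in mind (a minimal untested sketch; it just naively sums 0.1 ten million times into an accumulator of each type and prints the results):

import std.stdio;

void main()
{
    // Accumulate the same increment into float, double and real;
    // the narrower accumulators should lose more low-order bits.
    float  f = 0;
    double d = 0;
    real   r = 0;
    foreach (i; 0 .. 10_000_000)
    {
        f += 0.1f;
        d += 0.1;
        r += 0.1L;
    }
    // The mathematically exact sum is 1,000,000; the printed digits
    // show how much of each type's precision actually survived.
    writefln("float:  %.20g", f);
    writefln("double: %.20g", d);
    writefln("real:   %.20g", r);
}

If I understand my friend's warning correctly, real would win here only because it is the type the arithmetic is carried out in, not because merely storing results in 80 bits buys any accuracy by itself.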