http://d.puremagic.com/issues/show_bug.cgi?id=9297
[email protected] changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |[email protected]

--- Comment #1 from [email protected] 2013-02-26 16:56:01 PST ---
Does gcc/g++ support 80-bit floats?

I made a little test program in a Linux environment, and it seems that in
spite of the cast, some extra precision is still getting through:

import std.stdio;
void main() {
    // Far more digits than any of these types can hold, to see how
    // far each one gets.
    enum float  phi_f = 1.6180339887_4989484820_4586834365f;
    enum double phi_d = 1.6180339887_4989484820_4586834365;
    enum real   phi_r = 1.6180339887_4989484820_4586834365L;

    // The exact digits, taken from:
    //
    // http://fabulousfibonacci.com/portal/index.php?option=com_content&view=article&id=7&Itemid=17
    //
    // (NOTE: the last digit is actually 6 if we round up the next digit
    // instead of truncating. But the built-in types won't even come close
    // to that point.)
    string phi_s = "1.618033988749894848204586834365";

    writeln(float.dig);
    writeln(double.dig);
    writeln(real.dig);
    writefln("%.25f", phi_f);
    writefln("%.25f", phi_d);
    writefln("%.25f", phi_r);
    writefln("%s", phi_s);
}

Output:

$ ./test
6
15
18
1.6180340051651000976562500
1.6180339887498949025257389
1.6180339887498948482072100
1.618033988749894848204586834365
$

Notice that the third formatted value, which is the output of the real, shows
more matching digits than the double. Could this have something to do with the
way the Linux C/C++ ABI works? (IIRC some floats get passed in the x87 FPU
registers, where they retain their 80-bit precision.) I don't have a way to
test this on VC, though.
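[Editor's note: a minimal sketch, not from the original report, that could help
isolate the register-passing hypothesis above. It rounds the 80-bit value to
double two ways: with a cast directly in the argument list, and via a named
double variable that forces a 64-bit store to memory first. The variable names
are illustrative, and this assumes DMD on x86 Linux where real is the x87
80-bit extended format.]

import std.stdio;

void main() {
    // 80-bit extended value ("real" is x87 extended on x86 Linux).
    real phi_r = 1.6180339887_4989484820_4586834365L;

    // Round to 64 bits two different ways. If the ABI really passed
    // the casted value to writefln in an x87 register with its extra
    // bits intact, the two lines below could print different digits;
    // if D's cast semantics round properly, they must match.
    double phi_mem = cast(double) phi_r;

    writefln("%.25f", cast(double) phi_r); // cast at the call site
    writefln("%.25f", phi_mem);            // forced through memory

}

If both lines print the same 64-bit rounding of phi, the extra matching digits
seen above come from the compile-time constant folding or the formatting path,
not from an 80-bit value leaking through the argument registers.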
