On 27.06.2014 14:20, Russel Winder via Digitalmars-d wrote:
On Fri, 2014-06-27 at 11:10 +0000, John Colvin via Digitalmars-d wrote:
[...]
I understand why the current situation exists. In 2000, x87 was
the standard and the 80-bit precision came for free.

Real programmers have been using 128-bit floating point for decades. All
this namby-pamby 80-bit stuff is just an aberration and should never
have happened.

What consumer hardware and compilers support 128-bit floating point?
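
As far as I know, no consumer x86 hardware has native 128-bit floats; what does exist is software-emulated quad precision, e.g. GCC's __float128 backed by libquadmath. A minimal sketch, assuming GCC on x86-64 and linking with -lquadmath:

/* Sketch: GCC's software-emulated 128-bit float (__float128).
 * Build with: gcc quad.c -lquadmath
 */
#include <quadmath.h>
#include <stdio.h>

int main(void)
{
    __float128 third = 1.0Q / 3.0Q;   /* 113-bit significand, ~34 decimal digits */
    char buf[128];

    /* libquadmath provides its own printf-style formatting (Q modifier) */
    quadmath_snprintf(buf, sizeof buf, "%.36Qg", third);
    printf("1/3 as __float128: %s\n", buf);
    printf("sizeof(long double) = %zu, sizeof(__float128) = %zu\n",
           sizeof(long double), sizeof(__float128));
    return 0;
}

Since every operation goes through soft-float routines rather than the FPU, it is considerably slower than hardware double or the x87 80-bit real.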
