On 2019-10-30 15:12:29 +0000, H. S. Teoh said:

It wasn't a wrong *decision* per se, but a wrong *prediction* of where
the industry would be headed.

Fair point...

Walter was expecting that people would move towards higher precision, but what with SSE2 and other such trends, and the general neglect of x87 in hardware developments, it appears that people have been moving towards 64-bit doubles rather than 80-bit extended.

Yes, I wonder about that as well... but all the AI stuff seems to dominate the game, and following the hype is still a frequently used management strategy.
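
Just to put numbers to the gap being discussed: a quick C sketch (untested; assumes GCC or Clang on x86, where long double maps to the 80-bit x87 format):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* Machine epsilon: gap between 1.0 and the next representable value. */
        printf("64-bit double epsilon:      %g\n",  DBL_EPSILON);
        printf("80-bit long double epsilon: %Lg\n", LDBL_EPSILON);
        /* Typical x86 output: 2.22045e-16 vs. 1.0842e-19. */
        return 0;
    }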

Though TBH, my opinion is that it's not so much neglecting higher
precision as a general sentiment in recent years towards
standardization, i.e., being IEEE-compliant (64-bit floating point)
rather than working with a non-standard format (80-bit x87 reals).

I see it more as a "let's sell what people want" attitude. The CPU vendors don't seem able to market higher precision. Better to implement a highly specific, ever-growing instruction set...

Do you mean *simulated* 128-bit reals (e.g. with a pair of 64-bit
doubles), or do you mean actual IEEE 128-bit reals?

Simulated, because HW support is lacking on x86. And PPC is not that mainstream. I expect Apple to move to ARM, but I've never heard about 128-bit support for ARM.
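
Roughly, the pair-of-doubles trick ("double-double") builds on Knuth's two-sum: each addition is split into the rounded result plus its exact error term, so two 64-bit doubles together carry about 106 significand bits. A minimal, untested C sketch of the building block:

    #include <stdio.h>

    /* Error-free transformation: s = round(a + b), e = the exact rounding
       error, so a + b == s + e holds exactly (barring FP contraction). */
    static void two_sum(double a, double b, double *s, double *e)
    {
        *s = a + b;
        double bv = *s - a;
        *e = (a - (*s - bv)) + (b - bv);
    }

    int main(void)
    {
        double s, e;
        two_sum(1.0, 1e-20, &s, &e);
        printf("high part: %.17g\nlow part:  %.17g\n", s, e);
        /* The 1e-20 that a plain double sum would drop survives in the
           low part; libraries like QD layer full arithmetic on top. */
        return 0;
    }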

I'm still longing for 128-bit reals (i.e., actual IEEE 128-bit format)
to show up in x86, but I'm not holding my breath.

Me too.
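
FWIW, GCC already exposes a software-emulated IEEE binary128 on x86 as __float128 via libquadmath; slow, but usable. A small sketch (untested; link with -lquadmath):

    #include <stdio.h>
    #include <quadmath.h>

    int main(void)
    {
        __float128 x = 1.0Q / 3.0Q;   /* ~33-36 significant decimal digits */
        char buf[128];
        quadmath_snprintf(buf, sizeof buf, "%.36Qg", x);
        printf("1/3 in binary128: %s\n", buf);
        return 0;
    }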

In the meantime, I've been looking into arbitrary-precision float libraries like libgmp instead. It's software-simulated, and therefore slower, but for certain applications where I want very high precision, it's currently the only option.

Yes, but it's way too slow for our product.
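
For reference, the kind of code involved with GMP's mpf interface (a rough, untested sketch; the 256-bit precision is just an arbitrary choice, link with -lgmp):

    #include <stdio.h>
    #include <gmp.h>

    int main(void)
    {
        mpf_t a, b, sum;
        mpf_set_default_prec(256);          /* significand size in bits */
        mpf_inits(a, b, sum, NULL);

        mpf_set_d(a, 1.0);
        mpf_set_str(b, "1e-40", 10);
        mpf_add(sum, a, b);                 /* representable at 256 bits */

        gmp_printf("1 + 1e-40 = %.50Ff\n", sum);

        mpf_clears(a, b, sum, NULL);
        return 0;
    }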

Maybe one day we will need to deliver an FPGA-based co-processor PCI card that can run 128-bit calculations... but that will be a pretty hard way to go.

--
Robert M. Münch
http://www.saphirion.com
smarter | better | faster
