In the spirit of repeating previous messages, I again refer interested readers to my 2018 email to core-libs-dev which addresses many of the technical points being (re)raised here:

http://mail.openjdk.java.net/pipermail/core-libs-dev/2018-March/051952.html

Appeal to authority is a commonly used rhetorical technique. A worse variant of appeal to authority is when the work being cited does not in fact support the argument being put forward. Case in point: the document

    "How Java’s Floating-Point Hurts Everyone Everywhere"
    https://people.eecs.berkeley.edu/~wkahan/JAVAhurt.pdf

originally authored by Prof. Kahan and myself in 1998, does not advocate decimal-only computation. The word "decimal" appears exactly zero times in the July 30, 2004 version of the document (Prof. Kahan often revises and reposts his documents).

In brief, the thesis of the "JavaHURT" paper is that the Java platform commits sins of omission by requiring IEEE 754 arithmetic while forbidding certain mandatory features of the IEEE 754 standard and by precluding support for the 80-bit floating-point format found on contemporary x86 processors. Moral conclusions aside, it is correct that the Java platform then, as now, does not natively support those mandatory features of IEEE 754 (rounding modes, floating-point exception handling) for the built-in floating-point types float and double. No other widely used and available programming platform I know of supports those features either. In the intervening years, the 80-bit format originating with the x87 co-processor has been effectively deprecated by both Intel and AMD.
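
For concreteness, the closest the core libraries come to user-selectable rounding today is java.math.BigDecimal with an explicit RoundingMode; a small illustration (it is not a substitute for per-operation rounding control on double):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class RoundingModes {
        public static void main(String[] args) {
            BigDecimal x = new BigDecimal("2.5");
            // On BigDecimal the rounding direction can be chosen per call...
            System.out.println(x.setScale(0, RoundingMode.HALF_EVEN)); // 2
            System.out.println(x.setScale(0, RoundingMode.HALF_UP));   // 3
            System.out.println(x.setScale(0, RoundingMode.FLOOR));     // 2
            // ...but there is no corresponding way to change the rounding of an
            // individual double operation, which is always round-to-nearest-even.
            System.out.println(1.0 / 3.0); // 0.3333333333333333
        }
    }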

While IEEE 754 is commonly thought of as a hardware standard, the designers of the standard intended it to provide a programming platform. More recent revisions of IEEE 754 have tried to make this intention clearer.

Without researching the exact JDK release where SSE support was first included, it would have been at least 15 years ago, probably more.

The SSE instructions do *not* support decimal floating-point computation; they are primarily, but not exclusively, about 32-bit and 64-bit binary floating-point operations:

    https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions
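
As a reminder of what "binary" means in practice, the double literal 0.1 stores the nearest binary fraction rather than the decimal value 0.1; a small Java illustration:

    import java.math.BigDecimal;

    public class BinaryNotDecimal {
        public static void main(String[] args) {
            // The exact value held in the double 0.1:
            System.out.println(new BigDecimal(0.1));
            // 0.1000000000000000055511151231257827021181583404541015625
            System.out.println(Double.toHexString(0.1));
            // 0x1.999999999999ap-4
        }
    }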

The POWER6 chip is one of the few that does have hardware support for decimal floating-point, a feature added in the 2008 revision of IEEE 754:

    https://en.wikipedia.org/wiki/Decimal_floating_point

Rather than re-asking core-libs-dev every few days whether the platform stewards have suddenly decided to undertake this large, but poorly defined, effort, I suggest you try to organize like-minded parties, perhaps including A Z poweruserm at live.com.au, and build yourselves a library/environment/platform with the features you envision, to concretely demonstrate its benefits.

-Joe

On 4/21/2022 11:55 PM, sminervini.prism wrote:
To core-libs, OpenJDK, JCP, and all

For the sake of the real issues it raises,
I include a rebuttal to Andrew Haley's earlier comments,
and I reiterate that the real need is to improve the Java software
at its roots.

Andrew Haley said, and we reply:

1) Firstly, it is not possible to change Java's core floating-point
arithmetic in a compatible way. We certainly will not adopt decimal
floating-point for Java's core arithmetic.

While I don't like re-submitting articles, certainly not on this forum,
there has always been this one:

https://people.eecs.berkeley.edu/~wkahan/JAVAhurt.pdf

The age of this article does not matter. It bears on action in 2022
as much as it did then.

-Even without recompiling floating-point or StrictMath code, of course it could
be changed, and compatibly. Runtime or compile-time switches could be
introduced.
Keywords could be introduced that apply at many different levels. Maybe
annotations could even be used to direct the compiler; annotations can already
be applied at any point where floating-point arithmetic and StrictMath methods
and fields may occur. Wherever there is a code space, there could be an
annotation or a keyword: at the class, interface or static-block level; at the
variable, data, field and method level; even at the main method,
Thread, Runnable or Future level, or further still.
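
A minimal sketch of what such an annotation might look like; @CorrectedFP and any semantics behind it are entirely hypothetical and exist in no JDK:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical marker annotation; nothing like it exists in the JDK today.
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD,
             ElementType.LOCAL_VARIABLE})
    @interface CorrectedFP { }

    @CorrectedFP                    // class level
    class Pricing {
        @CorrectedFP double rate;   // field level

        @CorrectedFP                // method level
        double total(double qty) {
            @CorrectedFP double t = rate * qty;   // local-variable level
            return t;
        }
    }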

2) Secondly, Java should not be your focus. We will not change Java's
arithmetic in a way that is incompatible with other languages or
which makes Java slower on existing hardware.

-There could be dual-mode floating-point correction, implemented inside
Java at any level you like. Dual mode would not be incompatible
with anything.
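
A sketch of what "dual mode" could mean, assuming a hypothetical fp.corrected system property; the corrected path below simply reroutes through BigDecimal, purely for illustration:

    import java.math.BigDecimal;
    import java.math.MathContext;

    public class DualMode {
        // Hypothetical flag; existing double code is untouched unless it is set.
        private static final boolean CORRECTED = Boolean.getBoolean("fp.corrected");

        static double add(double a, double b) {
            if (!CORRECTED) {
                return a + b;                      // today's behaviour, unchanged
            }
            return new BigDecimal(Double.toString(a))
                    .add(new BigDecimal(Double.toString(b)), MathContext.DECIMAL64)
                    .doubleValue();
        }

        public static void main(String[] args) {
            // Prints 0.30000000000000004 normally, 0.3 with -Dfp.corrected=true.
            System.out.println(add(0.1, 0.2));
        }
    }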

3) You must read and fully understand this before you go any further. It
will require a lot of study:

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

May the Java Community Process reconsider the present floating point
operations and method calls situation, based on an imperfect
standard and improper workaround, and provide corrected, defaulting
or specifying-compatible ways for Java floating point arithmetic and
Calculator class method results to always cohere in their ranges,
without denormal and pronormal inclusion?
In a word, no. That possibility is so unlikely that it is not worthy
of consideration.

-IEEE 754 has a blind spot, an oversight error. It says nothing about
operation values that straddle the range of the arithmetic or method result,
partway in, partway out. BigInteger, BigDecimal, or the big-math library
at https://github.com/eobermuhlner/big-math are only temporary stop-gap measures
that are too large in memory, more than is needed, and too slow. The article
included as part of 3) doesn't even mention SSE, or the presence of end-of-range
carrying additional bits.
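
For comparison, this is roughly what the stop-gap looks like with plain java.math.BigDecimal at a chosen precision (BigDecimal.sqrt has been available since Java 9; the 34-digit context is just an example):

    import java.math.BigDecimal;
    import java.math.MathContext;

    public class DecimalStopgap {
        public static void main(String[] args) {
            MathContext mc = new MathContext(34);   // roughly decimal128 precision
            // Correctly rounded to the requested number of decimal digits:
            System.out.println(new BigDecimal(2).sqrt(mc));
            // 1.414213562373095048801688724209698
            // The double result carries a fixed 53 binary digits, with no way
            // to ask for more:
            System.out.println(Math.sqrt(2.0));     // 1.4142135623730951
        }
    }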

Speed at the expense of accuracy, i.e. providing a rapid falsehood, is also a
logic error that can compromise the entire enterprise of computer software
itself. What is required is the inclusion of the SSE additional registers and
their use; just a little bit of extra register space to handle range-end carries
if they occur.
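
As an aside, one place today's Java already exposes extra bits carried inside an operation is Math.fma, which computes a*b + c with a single rounding; a small example:

    public class FmaExample {
        public static void main(String[] args) {
            double a = 134217729.0;                // 2^27 + 1, exact as a double
            double b = 134217727.0;                // 2^27 - 1, exact as a double
            double c = -18014398509481984.0;       // -(2^54), exact as a double
            // The product a * b is rounded to a double before c is added,
            // so the -1 is lost:
            System.out.println(a * b + c);         // 0.0
            // fma keeps the exact product internally and rounds only once:
            System.out.println(Math.fma(a, b, c)); // -1.0
        }
    }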

We wish to appeal to reason and software needs on this subject, if not the odds!

A mechanical clock is really amazing because it is complex, and because all the
pieces whizz around together at amazing speed, but above all because it maintains
those two properties while keeping accurate time. The clock loses usefulness,
and even any usefulness, if the accurate time can't be set or maintained
at the rate it needs to operate at.

The fact that more registers are referred to, to uphold float and double, has
not been enough of a speed compromise to prevent 128-bit numbers elsewhere.
Besides, the emptiness of extra bits past the range limit of float and double
could be optimised and controlled by one flag bit.

When the enhancements to the switch statements came along, all previous options
were maintained, while including, even integrating, the new ones: the ability
to switch on String, and the ability to coalesce cases in any way. There ended
up being no kind of ultimate problem, no matter which developers used which
approach for accurate, logically correct software. Whatever approach is taken,
floating point correction need be no different, and it offers in-place
advantages. And the present state of floating point is a logic error, with
IEEE 754 on this precise point being silent, incorrect, and irrelevant.

Is there someone else involved in core-libs-dev@openjdk.java.net, or the JCP,
who can give a more particular response to this issue via the points raised
here, and engender change to the attitudes to floating point arithmetic and
floating point elementary functions, so that release versions of Java SE and
OpenJDK can include FP error correction in this Java domain?

Sergio Minervini
S.M.

