Re: Questions about enhancement and Correction to Java OpenJDK Floating Point?

2022-03-14 Thread Raffaello Giulietti

Hi Terry,

as pointed out by Martin, the real issue is using *binary* 
floating-point arithmetic, like float or double, to emulate *decimal* 
arithmetic.


When you write 0.1D in Java, C or C++, what happens is that this decimal 
number is rounded to the double closest to the mathematical value 1/10. 
There's no double that is exactly 1/10, so you start with a value that 
is already rounded and inexact. Multiplication rounds as well, so you 
end up with a value that was subject to 3 roundings: twice for the 
conversion of the two operands from 0.1D to the closest doubles and once 
for the multiplication. The result is slightly different from the 
naively "expected" 0.01D, which is subject to one rounding only during 
conversion to the closest double. In other words, 0.1D*0.1D != 0.01D, 
even in C/C++ and most programming languages/environments.
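
The three roundings can be made visible directly in Java; a small 
sketch (the class name is mine; new BigDecimal(double) is used here 
precisely because it exposes the exact binary value of a double with no 
further rounding):

```java
import java.math.BigDecimal;

public class ThreeRoundings {
    public static void main(String[] args) {
        double tenth = 0.1;             // rounded: no double is exactly 1/10
        double product = tenth * tenth; // the multiplication rounds again
        // new BigDecimal(double) prints the exact binary value.
        System.out.println(new BigDecimal(tenth));
        System.out.println(new BigDecimal(product));
        System.out.println(new BigDecimal(0.01)); // one rounding only
        System.out.println(0.1 * 0.1 == 0.01);    // prints false
    }
}
```

The last line is the whole point: the triply-rounded product and the 
singly-rounded literal are different doubles.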


In Java, however, when you convert a double to a decimal string by means 
of System.out.print[ln](), the library outputs just as many digits as 
necessary, and no fewer, for an input routine to be able to recover the 
original double. C and C++ do *not* ensure this. In Java, 0.01D (1 
rounding) is correctly converted to "0.01" while 0.1D*0.1D (3 roundings) 
is correctly converted to "0.010000000000000002".


In C/C++, try to output both 0.1D*0.1D and 0.01D with 20 digits, say, 
instead of the default 6 and you'll see a difference.
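
The same experiment can be run in Java; a minimal sketch, where the 
20-digit precision is an arbitrary illustrative choice:

```java
public class TwentyDigits {
    public static void main(String[] args) {
        // With enough fractional digits forced, the two values are
        // no longer indistinguishable, just as with C's printf("%.20f", ...).
        System.out.printf("%.20f%n", 0.1 * 0.1);
        System.out.printf("%.20f%n", 0.01);
    }
}
```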


As observed by Rémi, Java offers formatting similar to C/C++ if that is 
what you want.


To summarize, Java uses IEEE 754 binary arithmetic as specified, as do 
most other languages, including C/C++. It is however fundamentally 
wrong to use binary floating-point arithmetic to emulate decimal 
behavior. Also, pay attention to the output routines that convert float 
and double values to a decimal representation: by default, C and C++ 
lose information, as in your case.



HTH
Raffaello


On 3/14/22 07:49, A Z wrote:

(…snip — the original message is reproduced in full in the 
thread-starting post below…)

Re: Questions about enhancement and Correction to Java OpenJDK Floating Point?

2022-03-14 Thread Martin Desruisseaux

Hello A.Z

As far as I know, the difference in output that you observed between 
Java and C/C++ is not caused by a difference in floating-point 
computation, but by a difference in the way numbers are rounded by the 
"printf" command. Java has a policy of showing all significant digits, 
while C "printf" has a policy of rounding before printing. In my 
opinion, the latter is more dangerous because it hides what really 
happens in floating-point computation.
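
The two policies can be put side by side; a small sketch:

```java
public class PrintPolicy {
    public static void main(String[] args) {
        double c = 0.1 * 0.1;
        // Java's println: shortest decimal that round-trips to this double,
        // so the accumulated rounding error is visible.
        System.out.println(c);        // 0.010000000000000002
        // C-style %f: rounded to 6 fractional digits before printing,
        // so the error is hidden.
        System.out.printf("%f%n", c);
    }
}
```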


The following statement is not entirely true when using finite floating 
point precision:



(…snip…) It is a mathematical fact that, for
consistent, necessary and even fast term, 10% of 10% must
always precisely be 1%, and by no means anything else.


The above statement can be true only in base 10 or certain other bases. 
We could equally say "it is a mathematical fact that 1/3 of 1/3 must 
always precisely be 1/9 and nothing else", but that cannot be 
represented fully accurately in base 10; it can, however, be represented 
fully accurately in base 3. There will always be examples that work in 
one base and not in another, and natural laws have no preference for 
base 10. I understand that base 10 is special for financial 
applications, but for many other applications (scientific, engineering, 
rendering…) base 2 is as good as any other base. I would even argue that 
base 10 can be dangerous because it gives a false sense of accuracy: it 
gives the illusion that rounding errors do not happen when testing with 
a few sample values in base 10 (like 10% of 10%), while in reality 
rounding errors continue to exist in the general case.
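
The 1/3 example shows up directly in BigDecimal, which is base-10: 
exact division fails on a non-terminating expansion unless a scale and 
rounding mode are supplied explicitly. A sketch:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BaseDependence {
    public static void main(String[] args) {
        BigDecimal one = BigDecimal.ONE;
        BigDecimal three = BigDecimal.valueOf(3);
        try {
            one.divide(three); // 1/3 has no finite base-10 expansion
        } catch (ArithmeticException e) {
            System.out.println("exact division failed: " + e.getMessage());
        }
        // A rounding choice must be made, just as base-2 hardware does:
        System.out.println(one.divide(three, 20, RoundingMode.HALF_EVEN));
    }
}
```

So even "exact" decimal arithmetic must round; it merely rounds in a 
base that happens to match 10% of 10%.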


    Martin




Re: Questions about enhancement and Correction to Java OpenJDK Floating Point?

2022-03-14 Thread Remi Forax
- Original Message -
> From: "A Z" 
> To: "core-libs-dev" 
> Sent: Monday, March 14, 2022 7:49:04 AM
> Subject: Questions about enhancement and Correction to Java OpenJDK Floating 
> Point?

Hi Terry,
if you want to have the same output as C, instead of println() use printf().

In your example, using
  out.printf("%f\n", c);

prints
  0.010000

  0.010000

regards,
Rémi

> (…snip — the original message is reproduced in full in the 
> thread-starting post below…)

Questions about enhancement and Correction to Java OpenJDK Floating Point?

2022-03-14 Thread A Z
To whom it may concern,

Having noticed

https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8190947
https://bugs.openjdk.java.net/browse/JDK-8190991

and similar, at 
https://community.oracle.com/tech/developers/discussion/4126262/big-issue-with-float-double-java-floating-point

I have been referred on to the core-libs-dev area.

The software development company I represent wishes to keep its name 
confidential, and unmentioned, at this time.

A number of us at our end have found that floating point and StrictMath 
arithmetic
on both float and double does not result in range accuracy, but produces 
denormal
and pronormal values.

We are aware of the Java Language Specification and the IEEE 754 
specification as they relate to these issues, but are still finding that 
they are not the most relevant or greatest issue.

While we are aware of the BigDecimal and BigInteger workarounds, and, 
furthermore, of the calculator classes included in big-math 
(https://github.com/eobermuhlner),
we are finding, in the development, debugging, and editing of our Java 
programs, that in using other classes to compensate for the lack of 
range accuracy in float, double and java.lang.StrictMath, we get bogged 
down in what turns into ever more inferior software.  The known and 
available workaround approaches become stop-gap measures, forcibly put 
in place, while introducing other problems into OpenJDK or Java 
software that have no particular, immediate solutions.

Substituting float and double data in and out of BigDecimal and 
BigInteger produces source code which is much messier, more complicated, 
error prone, difficult to understand and to change, and definitely 
slower: an inferior substitute when float and double would be more than 
enough in the overwhelming majority of corresponding cases.
This is particularly showing up in 2D and 3D Graphics software, by the default
OpenJDK Libraries, but also through JMonkeyEngine 3.5.
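
(For what it's worth, the substitution itself has traps of its own: 
new BigDecimal(double) captures the exact binary value of the double, 
while BigDecimal.valueOf(double) goes through Double.toString and 
yields the short decimal most people expect. A small sketch:

```java
import java.math.BigDecimal;

public class DoubleToDecimal {
    public static void main(String[] args) {
        double x = 0.1;
        // Exact binary value of the double 0.1: long and surprising.
        System.out.println(new BigDecimal(x));
        // Via Double.toString(x): prints the short form 0.1.
        System.out.println(BigDecimal.valueOf(x));
    }
}
```

Picking the wrong one silently bakes binary artifacts into the 
"decimal" code path.)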

Possessing the option to deal immediately with the precondition, 
postcondition and field types of float and double would be far superior 
and closer to ideal.
All this is before the massive advantage of being able to use operators, 
but the case for change becomes overwhelming when set alongside a 
range-accurate, double (and maybe also float) supporting Scientific 
Calculator class.

If I want to discuss (at least OpenJDK) change in this area, I have
been pointed to the core-libs area, by one of the respondents
of the article:

https://community.oracle.com/tech/developers/discussion/4126262/big-issue-with-float-double-java-floating-point.

Is there anyone here, at core-libs-dev, who can point
me in a better Oracle or OpenJDK direction, to discuss further
and see about Java float and double and StrictMath floating point
arithmetic denormal and pronormal values being repaired away
and being made range accurate for all evaluation operations
with them?

Certainly, since other languages that are open source and open resource 
already have.  It is a mathematical fact that, for
consistent, necessary and even fast term, 10% of 10% must
always precisely be 1%, and by no means anything else.

Consider these three different language API evaluations,
using their equivalents of float and double to perform
the floating point equivalent of that precise evaluation:

//--
//The C Language.
#include <stdio.h>

int main()
{
printf("Program has started...");
printf("\n");
printf("\n");
double a = 0.1;
double b = 0.1;
double c = a*b;
printf("%lf",c);
printf("\n");
printf("\n");
float d = 0.1F;
float e = 0.1F;
float f = d*e;
printf("%lf",f);
printf("\n");
printf("\n");
printf("Program has Finished.");
return 0;
}

/*
Program has started...

0.010000

0.010000

Program has Finished.
*/
//--
//The C++ Language.

#include <iostream>

using namespace std;

int main()
{
cout << "Program has started..." << endl;
double a = 0.1;
double b = 0.1;
double c = a*b;
cout << endl << c << endl << endl;
float d = 0.1F;
float e = 0.1F;
float f = d*e;
cout << f << endl << endl;
cout << "Program has Finished.";
return 0;
}

/*
Program has started...

0.01

0.01

Program has Finished.
*/

//--
//The Java Language.

import static java.lang.System.*;
public class Start
{
public static void main(String ... args)
{
out.println("Program has started...");
double a = 0.1D;
double b = 0.1D;
double c = a*b;
out.println();
out.println(c);
float d = 0.1F;
float e = 0.1F;
float f = d*e;
out.println();
out.println(f);
out.println();
out.println("Program has Finished.");
}}

/*
Program has started...

0.010000000000000002

0.010000001

Program has Finished.
*/
//--

In order for java