[ https://issues.apache.org/jira/browse/NUMBERS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17700306#comment-17700306 ]

Gilles Sadowski commented on NUMBERS-193:
-----------------------------------------

bq. From here, I understood that the DD class would be implemented in an 
immutable class.

Sure, that would be the ideal outcome for several reasons, one of them being 
that all "number" types currently implemented in [Numbers] ({{Fraction}}, 
{{BigFraction}}, {{Complex}}, {{Quaternion}}) are immutable, for the same 
rationale that e.g. {{Double}} in the JDK is also immutable.

bq. The reason for changing from a mutable class to an immutable class is that 
I believe encapsulation prevents the use of unnormalized double-double numbers.

I don't understand the relationship...

bq. And when implemented in an immutable class, performance becomes an issue 
apart from the benefit of encapsulation.
In particular, each method requires a process to create a new instance, which 
may degrade performance.

Yes, but there are other benefits, and maybe some JVM optimizations can only 
be performed on objects guaranteed to be immutable...

bq. Therefore, before changing to the immutable class, we wanted to investigate 
the impact of object creation and garbage collection on performance by 
evaluating performance with and without adding a method that creates a new 
instance.

Yes, but my point was that the benchmark could be biased if the class is 
mutable but does not take advantage of it, and creates new instances instead 
(in effect taking the worst of both worlds).  The comparison should be between
* mutability and not creating instances
* immutability and (necessarily) creating new instances
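To make the two sides of that comparison concrete, here is a minimal sketch (hypothetical names, not the actual "DD" API in Commons Numbers): both variants perform the same double-double addition, built on the standard two-sum error-extraction trick, but the mutable one updates its fields in place while the immutable one allocates a new object per operation.

```java
// Hypothetical sketch of the two designs under comparison; "ImmutableDD" and
// "MutableDD" are illustrative names, not the actual Commons Numbers classes.
public final class DDBench {

    // Immutable variant: every operation allocates a new instance.
    static final class ImmutableDD {
        final double hi;
        final double lo;
        ImmutableDD(double hi, double lo) { this.hi = hi; this.lo = lo; }
        ImmutableDD add(double y) {
            double s = hi + y;
            double t = s - hi;
            // Exact rounding error of the sum s (two-sum).
            double e = (hi - (s - t)) + (y - t);
            return new ImmutableDD(s, e + lo); // new object per call
        }
    }

    // Mutable variant: the same arithmetic updates the fields in place.
    static final class MutableDD {
        double hi;
        double lo;
        MutableDD(double hi, double lo) { this.hi = hi; this.lo = lo; }
        void addSelf(double y) {
            double s = hi + y;
            double t = s - hi;
            double e = (hi - (s - t)) + (y - t);
            hi = s;
            lo = e + lo; // no allocation
        }
    }

    public static void main(String[] args) {
        // Adding 1.0 to 1e100: a plain double discards the 1.0 entirely,
        // but both DD variants retain it in the low-order part.
        ImmutableDD a = new ImmutableDD(1e100, 0).add(1.0);
        MutableDD b = new MutableDD(1e100, 0);
        b.addSelf(1.0);
        System.out.println(a.lo == 1.0 && b.lo == 1.0);
    }
}
```

A fair benchmark would pit {{addSelf}} (no allocation) against {{add}} (allocation on every call), so that the measured difference is actually the cost of object creation and collection rather than an artifact of a mutable class that allocates anyway.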

bq. What do both implementations mean, the original DD class and the other?

Yes. The current "DD" was meant for performance, but it uses an unsafe API. 
For most usage, top performance might not be required, provided the trade-off 
is acceptable given the increased precision.

bq. If not a change to an immutable class,

In my understanding, the immutable class is the "other" implementation.



> Add support for extended precision floating-point numbers
> ---------------------------------------------------------
>
>                 Key: NUMBERS-193
>                 URL: https://issues.apache.org/jira/browse/NUMBERS-193
>             Project: Commons Numbers
>          Issue Type: New Feature
>            Reporter: Alex Herbert
>            Priority: Major
>              Labels: full-time, gsoc2023, part-time
>
> Add implementations of extended precision floating point numbers.
> An extended precision floating point number is a series of floating-point 
> numbers that are non-overlapping such that:
> {noformat}
> double-double (a, b):
> |a| > |b|
> a == a + b{noformat}
> Common representations are double-double and quad-double (see for example 
> David Bailey's paper on a quad-double library: 
> [QD|https://www.davidhbailey.com/dhbpapers/qd.pdf]).
> Many computations in the Commons Numbers and Statistics libraries use 
> extended precision computations where the accumulated error of a double would 
> lead to complete cancellation of all significant bits; or create intermediate 
> overflow of integer values.
> This project would formalise the code underlying these use cases with a 
> generic library applicable for use in the case where the result is expected 
> to be a finite value and using Java's BigDecimal and/or BigInteger negatively 
> impacts performance.
> An example would be the average of long values where the intermediate sum 
> overflows or the conversion to a double loses bits:
> {code:java}
> long[] values = {Long.MAX_VALUE, Long.MAX_VALUE}; 
> System.out.println(Arrays.stream(values).average().getAsDouble()); 
> System.out.println(Arrays.stream(values).mapToObj(BigDecimal::valueOf)
>     .reduce(BigDecimal.ZERO, BigDecimal::add)
>     .divide(BigDecimal.valueOf(values.length)).doubleValue());
> long[] values2 = {Long.MAX_VALUE, Long.MIN_VALUE}; 
> System.out.println(Arrays.stream(values2).asDoubleStream().average().getAsDouble());
> System.out.println(Arrays.stream(values2).mapToObj(BigDecimal::valueOf)
>     .reduce(BigDecimal.ZERO, BigDecimal::add)
>     .divide(BigDecimal.valueOf(values2.length)).doubleValue());
> {code}
> Outputs:
> {noformat}
> -1.0
> 9.223372036854776E18
> 0.0
> -0.5{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
