[
https://issues.apache.org/jira/browse/NUMBERS-193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17700276#comment-17700276
]
Sentaro Onizuka commented on NUMBERS-193:
-----------------------------------------
I may have misunderstood Alex's suggestion. Please let me know if I have
misunderstood any of the following.
{quote}
The DD class would be a more general number to be used as you would use a
double or a BigDecimal. I would imagine the API would consist of methods acting
on the current instance and returning a new instance:
{quote}
From here, I understood that the DD class would be implemented as an immutable
class.
{quote}
The key point of the current API is that it requires no memory allocation
within the class. As such the class has been written to be mutable. All methods
act on primitives and write results to an output argument. However this does
not fully encapsulate the functionality and methods may be called with
arguments that are not normalised double-double numbers.
{quote}
Also, from this I understood that the reason for changing from a mutable class
to an immutable class is that encapsulation prevents the use of unnormalized
double-double numbers.
However, when implemented as an immutable class, performance becomes a concern
separate from the benefit of encapsulation: each method must create a new
instance, which may degrade performance.
Therefore, before changing to an immutable class, I wanted to investigate the
impact of object creation and garbage collection on performance by comparing
performance with and without a method that creates a new instance.
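For illustration, here is a minimal sketch of the two API styles being compared.
All class and method names below are hypothetical stand-ins, not the actual DD API:
{code:java}
/** Immutable style: every operation allocates and returns a new value object. */
final class ImmutableDD {
    final double hi;
    final double lo;

    ImmutableDD(double hi, double lo) {
        this.hi = hi;
        this.lo = lo;
    }

    /** Returns a new instance holding this + y (two-sum followed by renormalisation). */
    ImmutableDD add(double y) {
        double s = hi + y;
        double t = s - hi;
        double e = (hi - (s - t)) + (y - t) + lo; // rounding error of hi + y, plus the low part
        double r = s + e;
        return new ImmutableDD(r, e - (r - s));   // allocation on every call
    }
}

/** Mutable style: operations act on primitives and write into a reusable output holder. */
final class MutableDD {
    double hi;
    double lo;

    /** Writes the normalised result of (xHi, xLo) + y into the supplied holder; no allocation. */
    static void add(double xHi, double xLo, double y, MutableDD result) {
        double s = xHi + y;
        double t = s - xHi;
        double e = (xHi - (s - t)) + (y - t) + xLo;
        result.hi = s + e;
        result.lo = e - (result.hi - s);
    }
}
{code}
The immutable style encapsulates normalisation (callers can only obtain instances produced
by the class itself), while the mutable style avoids per-call allocation at the cost of
accepting unchecked primitive arguments; the open question is how much the extra
allocations actually cost in practice.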
{quote}
I would have thought of the reverse: How would one measure relative performance
before having implemented the two alternative APIs?
{quote}
As described above, I considered comparing performance with and without a
method that creates a new instance.
Of course, after implementing the DD class as an immutable class, I believe a
comparison with the original DD class will also be necessary.
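As a concrete (hypothetical) way to run such a comparison, a JMH benchmark along
the following lines could measure an allocating, immutable-style operation against a
non-allocating, mutable-style one. The value types and method names are placeholders,
not the real DD class:
{code:java}
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class DDAllocationBenchmark {
    /** Immutable pair allocated on every operation. */
    static final class Pair {
        final double hi, lo;
        Pair(double hi, double lo) { this.hi = hi; this.lo = lo; }
    }

    /** Mutable holder reused across operations. */
    static final class Holder {
        double hi, lo;
    }

    double x = Math.PI;
    double y = Math.E;
    final Holder holder = new Holder();

    @Benchmark
    public Pair immutableAdd() {
        // Allocates a new object per call; allocation and GC pressure are part of the measurement.
        double s = x + y;
        return new Pair(s, (x - s) + y); // low word via fast two-sum (valid since |x| >= |y|)
    }

    @Benchmark
    public void mutableAdd(Blackhole bh) {
        // Reuses a pre-allocated holder; no per-call allocation.
        double s = x + y;
        holder.hi = s;
        holder.lo = (x - s) + y;
        bh.consume(holder.hi);
        bh.consume(holder.lo);
    }
}
{code}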
{quote}
However, I'd think that it's safer and simpler to first ensure that both
implementations are correct through unit testing
{quote}
What are the two implementations referred to here: the original DD class and which other one?
If the change is not to an immutable class, what changes are necessary?
> Add support for extended precision floating-point numbers
> ---------------------------------------------------------
>
> Key: NUMBERS-193
> URL: https://issues.apache.org/jira/browse/NUMBERS-193
> Project: Commons Numbers
> Issue Type: New Feature
> Reporter: Alex Herbert
> Priority: Major
> Labels: full-time, gsoc2023, part-time
>
> Add implementations of extended precision floating point numbers.
> An extended precision floating point number is a series of floating-point
> numbers that are non-overlapping such that:
> {noformat}
> double-double (a, b):
> |a| > |b|
> a == a + b{noformat}
> Common representations are double-double and quad-double (see for example
> David Bailey's paper on a quad-double library:
> [QD|https://www.davidhbailey.com/dhbpapers/qd.pdf]).
> Many computations in the Commons Numbers and Statistics libraries use
> extended precision computations where the accumulated error of a double would
> lead to complete cancellation of all significant bits; or create intermediate
> overflow of integer values.
> This project would formalise the code underlying these use cases with a
> generic library applicable for use in the case where the result is expected
> to be a finite value and using Java's BigDecimal and/or BigInteger negatively
> impacts performance.
> An example would be the average of long values where the intermediate sum
> overflows or the conversion to a double loses bits:
> {code:java}
> long[] values = {Long.MAX_VALUE, Long.MAX_VALUE};
> System.out.println(Arrays.stream(values).average().getAsDouble());
> System.out.println(Arrays.stream(values).mapToObj(BigDecimal::valueOf)
> .reduce(BigDecimal.ZERO, BigDecimal::add)
> .divide(BigDecimal.valueOf(values.length)).doubleValue());
> long[] values2 = {Long.MAX_VALUE, Long.MIN_VALUE};
> System.out.println(Arrays.stream(values2).asDoubleStream().average().getAsDouble());
> System.out.println(Arrays.stream(values2).mapToObj(BigDecimal::valueOf)
> .reduce(BigDecimal.ZERO, BigDecimal::add)
> .divide(BigDecimal.valueOf(values2.length)).doubleValue());
> {code}
> Outputs:
> {noformat}
> -1.0
> 9.223372036854776E18
> 0.0
> -0.5{noformat}