Hi Joe,

On 2021-03-30 21:56, Joe Darcy wrote:
Hi Raffaello,

On 3/29/2021 1:14 PM, Raffaello Giulietti wrote:
Hello,


Assuming you have DecimalN <-> BigDecimal conversions, the BigDecimal type should be usable for testing at least. For in-range values not near the exponent range, the scale (exponent) and significand for finite and nonzero values should be the same for the basic arithmetic operations and square root in DecimalN and BigDecimal.

(Around the limits of the exponent range, there are subnormal and "supernormal" results where the rounding of the significand interacts with the exponent computation. This would make it a bit tricky to offload BigDecimal computations in range of, say, a Decimal128 to a hardware implementation where one was available.)


Yes, some of my current tests exploit BigDecimal for cross-checks in the normal range.
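
For example, a check along these lines, where Decimal128, of() and toBigDecimal() are just placeholders for my type's API rather than the actual names:

import java.math.BigDecimal;
import java.math.MathContext;

// Cross-check of addition in the normal range: BigDecimal rounded with
// MathContext.DECIMAL128 (34 digits, HALF_EVEN) serves as the reference.
static void crossCheckAdd(String x, String y) {
    BigDecimal ref = new BigDecimal(x)
            .add(new BigDecimal(y), MathContext.DECIMAL128);

    // Result from the type under test (placeholder API, not the real one).
    BigDecimal got = Decimal128.of(x).add(Decimal128.of(y)).toBigDecimal();

    // Away from the exponent limits, scale and unscaled value should agree
    // exactly, not merely compare equal numerically.
    if (got.scale() != ref.scale()
            || !got.unscaledValue().equals(ref.unscaledValue()))
        throw new AssertionError(x + " + " + y + ": got " + got
                + ", expected " + ref);
}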


Fixed-size decimal computations have an interesting design space to explore in terms of trading off memory compactness and computational cost. For example, the IEEE 754 spec details two different encodings of three decimal digits in 10 bits. However, it is not strictly necessary for each operation to produce results whose internal storage maps to an "interchange format," in the terminology of 754. It would be possible to keep the significand encoded as normal integers and only produce an interchange representation on something akin to a serialization event.


The internal representation is not one of the interchange formats. Similarly to the interchange formats' 1'000-based declets, it holds 1'000'000'000-based (30-bit) "declets", one per int, in addition to the unbiased exponent, the sign, the precision and the kind, as you mention below.
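
Roughly, the layout looks like the sketch below, and an interchange-style packing is produced only at a serialization-like boundary. All names are invented for the sketch, not the actual code, and the 10-bit groups use plain binary (0..999 fits in 10 bits) as a stand-in for real IEEE 754 declets:

final class DecimalSketch {

    // Significand in base 1'000'000'000: each int carries nine decimal
    // digits in 30 of its 32 bits, least significant limb first.
    private final int[] limbs;
    private final int exponent;      // unbiased exponent
    private final boolean negative;  // sign
    private final int precision;     // number of significant decimal digits
    private final byte kind;         // finite / infinite / NaN, precomputed

    DecimalSketch(int[] limbs, int exponent, boolean negative,
                  int precision, byte kind) {
        this.limbs = limbs;
        this.exponent = exponent;
        this.negative = negative;
        this.precision = precision;
        this.kind = kind;
    }

    // Packing happens only on demand; arithmetic never works on this form.
    // Here: the low 18 digits, three decimal digits per 10-bit group,
    // six groups in one long.
    long packLow18Digits() {
        long packed = 0;
        int shift = 0;
        for (int i = 0; i < limbs.length && i < 2; i++) {
            int limb = limbs[i];
            for (int g = 0; g < 3; g++) {
                packed |= (long) (limb % 1000) << shift;
                limb /= 1000;
                shift += 10;
            }
        }
        return packed;
    }
}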


I understand hardware implementations of FPUs use these sorts of techniques and also have architecturally invisible bits indicating what kind of value a floating-point number is. For example, many math library functions begin with "Handle NaN cases" ... "Handle infinity cases" ... "Handle zero cases" ... Sometimes the handling falls out naturally and doesn't require explicit testing, but in the cases where it does not, it can be cheaper to have the "isFoo" bit already computed.
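
Exactly: with the kind precomputed and stored, the "handle NaN / infinity / zero" preamble reduces to cheap tests on already-known state. In a rough, made-up form (not my actual code):

enum Kind { ZERO, FINITE, INFINITE, NAN }

// Made-up value type with a precomputed kind field.
record Dec(Kind kind, int signum, long significand, int exponent) {

    Dec plus(Dec y) {
        // Special cases first, as cheap tests on stored state.
        if (kind == Kind.NAN || y.kind == Kind.NAN)
            return new Dec(Kind.NAN, 0, 0, 0);
        if (kind == Kind.INFINITE || y.kind == Kind.INFINITE) {
            if (kind == Kind.INFINITE && y.kind == Kind.INFINITE
                    && signum != y.signum)
                return new Dec(Kind.NAN, 0, 0, 0);   // (+inf) + (-inf)
            return kind == Kind.INFINITE ? this : y;
        }
        if (kind == Kind.ZERO) return y;
        if (y.kind == Kind.ZERO) return this;
        return plusFinite(y);                        // the common path
    }

    private Dec plusFinite(Dec y) {
        // The real significand/exponent arithmetic is not sketched here.
        throw new UnsupportedOperationException();
    }
}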


I would be glad to contribute the code to the OpenJDK provided there is genuine interest from the community and a reviewer willing to sponsor my effort. (Judging from what I can read on this mailing list, I guess the best candidate for this role would be Joe Darcy.)

At least for the time being, I don't think the OpenJDK (at least the main project) would be the best home for such code. As you've noted, post-Valhalla, having such a type in the JDK becomes more interesting from a performance perspective.


These decimal types are already more compact and faster than BigDecimal even on the current, pre-Valhalla release, so they would make sense even today, were they ready for review.

(I'm spending my free time on this, so I just wanted to make sure I'm not wasting energy on something that will only sit on my computer.)


Greetings
Raffaello
