On 29-1-2018 10:45, Alex Peshkoff via Firebird-devel wrote:
> Maybe, taken standalone, your suggestion is simpler, but in conjunction with the existing logic for numerics based on bigint and smaller integer values, and the need to cast between them, using the existing implementation appeared simpler to me.

I admit I'm not privy to all the details of the calculation performed, but wouldn't the same behavior be achieved by using the normal Decimal128 division operation and then rescaling with rounding down (truncation) to the required scale of -1 * (ScaleOp1 + ScaleOp2)?
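For illustration, a minimal sketch of what I mean, using Java's BigDecimal with MathContext.DECIMAL128 as a stand-in for the IEEE 754 Decimal128 operations (the class and method names are mine, not the engine's):

  import java.math.BigDecimal;
  import java.math.MathContext;
  import java.math.RoundingMode;

  public class FixedPointDivision {
      // Divide with full Decimal128 precision, then truncate to the
      // fixed-point result scale (the sum of the operand scales,
      // i.e. -1 * (ScaleOp1 + ScaleOp2) in Firebird terms).
      static BigDecimal divide(BigDecimal op1, BigDecimal op2) {
          int resultScale = op1.scale() + op2.scale();
          return op1.divide(op2, MathContext.DECIMAL128)
                  .setScale(resultScale, RoundingMode.DOWN);
      }

      public static void main(String[] args) {
          // 123.45 / 0.7 = 176.3571428..., truncated to scale 3 -> 176.357
          System.out.println(divide(
                  new BigDecimal("123.45"), new BigDecimal("0.7")));
      }
  }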

>> And if it is used as a container for a plain big integer, why not just use densely packed decimal (the format used in a Decimal128 to encode the coefficient) or, 'better', a binary-encoded big integer? Both would allow for even higher precision with the same storage requirements (or smaller storage requirements for the same precision).

> And have a nightmare casting from this 'better' format to Decimal(34)?

You still have some of that nightmare when casting between decfloat and decimal(34, x).

> Also take into account that for the existing format we already have support in indices. Alternative formats/libraries were discussed, but the decision was taken to use the same library for both Decimal and DecimalFixed.

Point taken, but my suggestion was more that we now don't use the Decimal128 to its fullest for decimal, and users of the direct API now need to handle decfloat and decimal(19+, x) in very different ways even though the underlying datatype is the same (and conveniently supports communicating the scale inline).

The reason for this design seems to be mainly that it was more convenient internally, and to be honest, I don't think Firebird's public API (and its wire protocol) should expose internal design problems in this manner.

In any case, if this ship has sailed and this is not going to change, then this needs to be documented very clearly in the release notes, preferably including code examples showing how to handle both decfloat and the extended decimal precision, for users of both the old and the new API.

>> And also very interesting: inserting a pre-scaled Decimal128 value is allowed, but results in a doubly scaled value (so in essence a decimal(34,2) suddenly contains a value with scale 4). For example, inserting a Decimal128 with value 123.45 (as in 12345E-2) into a decimal(34,2) and then casting to varchar results in the value 1.2345.

> It's just a bug - a missing check for an incorrect input parameter.

Ok, I'll create a ticket for that.
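For completeness, a minimal BigDecimal sketch of how the double scaling plays out (BigDecimal standing in for the decoded Decimal128 here; this is only an illustration of the effect, not of the engine's actual code):

  import java.math.BigDecimal;
  import java.math.BigInteger;

  public class DoubleScalingDemo {
      public static void main(String[] args) {
          // The client already encoded 123.45 as coefficient 12345
          // with exponent -2 ...
          BigDecimal sent = new BigDecimal(BigInteger.valueOf(12345), 2);
          // ... but the engine applies the declared scale (2) once more:
          BigDecimal stored = sent.scaleByPowerOfTen(-2);
          System.out.println(stored); // prints 1.2345
      }
  }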

>> From a technical perspective, I would prefer it if Firebird would use (send and accept) Decimal128 values that correspond with the actual value (that is, correctly scaled, i.e. 12345E-2 in the example above), and reject values that have the wrong scale.

>> Alternatively - but more complex - scale and round where appropriate (that is, a 100E0 value sent for a decimal(34,2) is scaled to 10000E-2, and 123456E-3 is rounded to 12346E-2), and throw an overflow error if the value is too big (e.g. sending 1234567890123456789012345678901234E0 for a decimal(34,1)).

>> However, if the current solution is kept, then Firebird must reject any Decimal128 parameter value with an (unbiased) exponent other than 0, to prevent issues like the one demonstrated above with inserting a pre-scaled value.


> Certainly.

I'll create a ticket for that as well.
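To make the alternatives above concrete, a rough Java sketch of both the "scale and round" coercion and the check the current solution needs (the method names and the HALF_UP rounding mode are my assumptions, not anything Firebird defines):

  import java.math.BigDecimal;
  import java.math.RoundingMode;

  public class Decimal34Params {
      // The "scale and round" alternative for a DECIMAL(34,x) target:
      // 100E0 becomes 10000E-2, 123456E-3 becomes 12346E-2, and anything
      // whose coefficient exceeds 34 digits is an overflow.
      static BigDecimal coerce(BigDecimal value, int targetScale) {
          BigDecimal rescaled = value.setScale(targetScale, RoundingMode.HALF_UP);
          if (rescaled.precision() > 34) {
              throw new ArithmeticException(
                      "overflow for DECIMAL(34," + targetScale + ")");
          }
          return rescaled;
      }

      // The check the current solution needs: only plain coefficients
      // (exponent 0, which maps to BigDecimal scale 0) are acceptable.
      static void validate(BigDecimal decoded) {
          if (decoded.scale() != 0) {
              throw new IllegalArgumentException(
                      "expected exponent 0, got E" + (-decoded.scale()));
          }
      }

      public static void main(String[] args) {
          System.out.println(coerce(new BigDecimal("100"), 2));     // 100.00
          System.out.println(coerce(new BigDecimal("123.456"), 2)); // 123.46
          validate(new BigDecimal("12345"));  // ok: 12345E0
          validate(new BigDecimal("123.45")); // throws: 12345E-2
      }
  }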

>> From the perspective of Jaybird, where otherwise I would have been able to reuse the Decimal128 handling I already had in place for DECFLOAT (only needing to round to the target scale when sending values), I now need to do additional fiddling to either correctly apply the scale on receive, or obtain the unscaled value and convert that to a Decimal128 when sending values.

> Exactly the same logic as applied when dealing with Numeric(15,2), isn't it?

For the rescaling, yes, but for precision <= 18 I would then obtain the unscaled value and use its primitive value. Here I need to create yet another object, instead of being able to use the correctly rescaled BigDecimal directly for encoding on send, or the BigDecimal derived from decoding the Decimal128 on receive. Technically, creating new objects is dirt cheap, but all in all this does add to the garbage collection overhead.

>> In terms of memory, this will require the creation of at least one additional intermediate object per value sent or received, or ugly hacks to circumvent the need for these intermediate objects.

> I'm not familiar with the details of the JB implementation. In the native API one places a value into the message and maybe scales it correctly. That's what the engine itself does here and there; I see no problems with it and no memory losses. But certainly all of that highly depends upon the rest of the implementation details.

There is no memory loss, but the Java objects involved are immutable, so you need to create a new one, which then also needs to be garbage collected, and that has its CPU overhead.
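To make the extra fiddling concrete, roughly what this looks like on the Jaybird side (a sketch only, assuming a positive column scale; HALF_EVEN is my choice here, not something mandated by the protocol):

  import java.math.BigDecimal;
  import java.math.RoundingMode;

  public class Decimal34Codec {
      // On send: rescale to the column scale, then strip the scale so
      // only the coefficient goes into the Decimal128 (exponent 0).
      // The second BigDecimal is the extra intermediate object.
      static BigDecimal forEncoding(BigDecimal value, int columnScale) {
          BigDecimal rescaled = value.setScale(columnScale, RoundingMode.HALF_EVEN);
          return new BigDecimal(rescaled.unscaledValue());
      }

      // On receive: the decoded Decimal128 only carries the coefficient,
      // so the column scale has to be re-applied.
      static BigDecimal fromDecoded(BigDecimal coefficient, int columnScale) {
          return coefficient.movePointLeft(columnScale);
      }

      public static void main(String[] args) {
          System.out.println(forEncoding(new BigDecimal("123.45"), 2)); // 12345
          System.out.println(fromDecoded(new BigDecimal("12345"), 2));  // 123.45
      }
  }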

Mark
--
Mark Rotteveel
