On 11/13/16 19:40, Dmitry Yemanov wrote:
> 13.11.2016 19:18, Alex Peshkoff wrote:
>> SQL> SELECT * FROM TESTDECFLOAT;
>>
>>               FEE_DECFLOAT       FEE_REAL               PERCENTAGE
>> ======================== ============== ========================
>>                       0.70     0.69999999                     0.05
> I do see the difference with REAL but I'm asking for a difference with
> NUMERIC backed by int32/int64.
>
> I don't mind DECFLOAT being a "better float" than FLOAT / DOUBLE
> PRECISION but so far I don't see how it's better than NUMERIC/DECIMAL
> (which also stores exactly 0.70 in dialect 3).

It's better due to the presence of an exponent. For example, the 
following does not work with DECIMAL without loss of precision:

SQL> select x, ln(x), ln(x) / 1000000000 from td;

                        X  LN                                    DIVIDE

                      1.5  0.4054651081081643819780131154643491  4.054651081081643819780131154643491E-10
                 3.587817  1.277543939551353278138773202143221   1.277543939551353278138773202143221E-9
                    2.876  1.056400439858800255306975940749954   1.056400439858800255306975940749954E-9

I.e. this type provides the precision of NUMERIC and the flexibility of 
FLOAT at the same time.
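
For comparison, here is a minimal sketch (the CAST and the use of 
RDB$DATABASE are only illustrative) of the same division performed on an 
exact numeric in dialect 3, where the scale of the result is the sum of 
the scales of the operands:

SQL> select cast(0.4054651081 as numeric(18,10)) / 1000000000 from rdb$database;

The quotient 4.054651081E-10 has to fit into scale 10, so it comes back 
as roughly 0.0000000004 and nearly all significant digits are lost, 
while DECFLOAT simply adjusts its exponent.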


>>> Moreover, what are we going to do when people ask us for precisions
>>> beyond the 34 decimal digits? Introduce blr_dec256/blr_dec512/etc or
>>> switch to blr_varydec backed by decNumber (and probably stored as packed
>>> BCD)? Are there any reasons why the current implementation doesn't
>>> go this way, other than hardware-accelerated computations for 64/128
>>> bits?
>> Use of unlimited length fields is certainly great but I suppose we
>> should switch to it in future versions (including unlimited length
>> strings). Better precision calculations are needed right now, in v.4.
> My question wasn't about v4 but rather about extensibility in general,
> how do you plan to support longer precisions? By increasing bits or by
> switching to some variable-length implementation?

Certainly the ability to use variable-length columns is great. As soon 
as we have other types of variable-length objects, adding numbers with 
variable precision will not be too big a problem.

> If the latter, why is
> dec64/dec128 better *now* than varydec? Just hardware backed
> computations or anything else?
>

Hardware-backed computations currently work only on some relatively 
exotic CPUs (PowerPC, it seems, but I have lost the link proving it), 
i.e. this is not a serious reason. In the decNumber library, using 
unlimited-precision numbers means using dynamic memory allocation, and 
that alone is reason enough to avoid them right now. If we decide to use 
them in the future, I suppose we will have to rework parts of the 
library, and the need for that rework is a strong argument for using 
fixed-length variables right now.
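
At the SQL level, the two fixed-length types under discussion would look 
roughly like this (a sketch assuming the DECFLOAT(16) / DECFLOAT(34) 
syntax, backed by blr_dec64 and blr_dec128 respectively; the table name 
is arbitrary):

SQL> create table t1 (d16 decfloat(16), d34 decfloat(34));

Extending precision later would then mean either adding new fixed-length 
types on top of blr_dec256 and friends, or the variable-length 
representation discussed above.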


