I've compared various possible implementations of high-precision
numeric. Besides the decfloat-based one existing in fb4, I checked
gcc's native __int128 and ttmath (a fixed high-precision library with a
pure header-only implementation). The test performed a mix of
sum/mult/div operations in a loop. Native code was compiled w/o
optimization - even with -O1 the loop was optimized away and the test
completed at once.
Something like this:
for (int i = 0; i < n; ++i)
{
    e += (a / b) + (c * d);  // the measured mix of div, mult and add
    a++;
    c--;
}
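
For reference, a self-contained variant of such a test for the
__int128 case could look like this (the seed values and iteration
count below are made up for illustration, not the ones from my run):

#include <cstdio>
#include <ctime>

int main()
{
    // Arbitrary seed values; any mix giving non-trivial
    // div/mult results will do.
    __int128 a = 1000000007, b = 97, c = 123456789, d = 31, e = 0;
    const int n = 10000000;

    clock_t start = clock();
    for (int i = 0; i < n; ++i)
    {
        e += (a / b) + (c * d);
        a++;
        c--;
    }
    clock_t stop = clock();

    // Using the result afterwards makes it harder for the compiler
    // to drop the loop even when optimization is on.
    printf("result (low 64 bits): %llu\n", (unsigned long long) e);
    printf("time: %.2f sec\n",
        (double) (stop - start) / CLOCKS_PER_SEC);
    return 0;
}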
Results are generally as expected (x64):
gcc - 0.5 sec
ttmath - 1.2 sec
decfloat - 10.5 sec
As an additional bonus, the internal binary layout of a 128-bit integer
is the same for __int128 on x64 (__int128 is unsupported on x86),
ttmath's 128-bit class on x64, and ttmath's 128-bit class on x86. I
could not test other architectures (big-endian ones are the most
interesting), but looking at the code I do not expect bad surprises
from that library.
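
If someone wants to double-check the layout claim on a concrete
platform, a quick sanity test could look like this (my reading of
ttmath is that Int<N> stores N machine words in a public 'table'
array with the least significant word first, so Int<2> is the 128-bit
type on x64 and Int<4> on x86 - treat that as an assumption, not a
documented guarantee):

#include <cstdio>
#include <cstring>
#include "ttmath/ttmath.h"

int main()
{
    // A 128-bit value with two distinct halves, built natively...
    __int128 native = ((__int128) 0x1122334455667788LL << 64)
                      | 0x99AABBCCDDEEFF00ULL;

    // ...and the same value in ttmath (x64: 2 words of 64 bits).
    // table[0] = least significant word is my reading of the
    // library's sources.
    ttmath::Int<2> portable;
    portable.table[0] = 0x99AABBCCDDEEFF00ULL;
    portable.table[1] = 0x1122334455667788ULL;

    if (sizeof(native) == sizeof(portable)
        && memcmp(&native, &portable, sizeof(native)) == 0)
        printf("layouts match\n");
    else
        printf("layouts differ\n");

    return 0;
}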
I.e. I suggest replacing the decfloat-based implementation of
high-precision numeric with the native 128-bit integer where possible
and with ttmath in other cases. That would make it possible to use
128-bit integers whenever 64 bits are not enough, without a serious
performance penalty. Comments?
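
On the implementation side the backend selection could be a simple
compile-time switch, something like the sketch below (Int128 is just an
illustrative name; __SIZEOF_INT128__ is the macro gcc defines when the
native type is available, and TTMATH_BITS is ttmath's helper that
converts a bit count into its machine word count, if I recall its name
correctly):

#if defined(__SIZEOF_INT128__)
// Native 128-bit integer (gcc on 64-bit targets)
typedef __int128 Int128;
#else
// Header-only ttmath fallback for targets without __int128
#include "ttmath/ttmath.h"
typedef ttmath::Int<TTMATH_BITS(128)> Int128;
#endif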