Firebird allows you to specify a precision with FLOAT(p), where precisions 0-7 give you a 32-bit single precision (aka REAL), and 8 and higher give you a 64-bit double precision. In other words, the precision is interpreted as (approximate) decimal digits.

Is this documented and 'supported' behavior? It isn't mentioned in the IB 6 documentation or in the language reference, and I couldn't find a ticket that documents the introduction of this change.

I'm asking because the current interpretation of precision isn't SQL-compliant: the standard specifies that the precision is the (minimum requested) binary precision (i.e. the number of bits of the significand).

That would mean that 1-24 should map to REAL (32-bit single precision) and 25-53 should map to DOUBLE PRECISION (64-bit double precision).
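As a quick sanity check, here's a small sketch of why those cutoffs line up with the current decimal interpretation. The 24- and 53-bit significand widths are the IEEE 754 single and double precision formats; the conversion factor log10(2) gives the approximate number of decimal digits per bit:

```python
import math

def decimal_digits(bits):
    """Approximate decimal digits representable in `bits` bits of significand."""
    return bits * math.log10(2)

# IEEE 754 single precision: 24-bit significand (incl. the implicit leading bit)
print(f"24 bits ~ {decimal_digits(24):.2f} decimal digits")
# IEEE 754 double precision: 53-bit significand
print(f"53 bits ~ {decimal_digits(53):.2f} decimal digits")
```

24 bits comes out at roughly 7.2 decimal digits and 53 bits at roughly 16, so the current behavior (decimal precisions up to 7 map to single precision) is an approximation of the standard's binary cutoff at 24 - which is probably how the decimal interpretation came about in the first place.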

If this behavior isn't documented anywhere, we might get away with 'changing' it. This would also make us 'more compatible' with for example SQL Server.

--
Mark Rotteveel

Firebird-Devel mailing list, web interface at 
https://lists.sourceforge.net/lists/listinfo/firebird-devel