On Thursday, 1 December, 2016 19:50, Jens Alfke <j...@mooseyard.com> said:
> > On Nov 30, 2016, at 5:53 PM, Keith Medcalf <kmedc...@dessus.com> wrote:
> > Wattage problem based on incoherent understanding of how floating point
> > numbers are stored.
> You may not be aware of this, Keith, but that comes off as really snarky
> and condescending.
Factual information often does when it is contrary to something that has been
deliberately oversimplified for mass consumption to the point of being
incorrect.
> I feel I have a fairly coherent (if not domain-expert) understanding of
> FP, and it sounds like others here do too. And while I do not myself work
> with database schema involving money, I understand that the consensus
> among those who do is to _not_ store monetary values as floating-point,
> due to the roundoff error. Some people reject algorithms out-of-hand that
> accumulate even tiny amounts of roundoff error*. And of course the
> roundoff error increases as the size of the integer portion of the number
> increases.
That ancient recommendation is from the good old days when computers used what
is now called "fast floating point" which was designed to have about a digit
more accuracy than a slide-rule -- and performed computations with about the
same speed as a slide-rule.
In IEEE 754 that is called binary16 (half precision), with an accuracy of about
3 decimal digits -- completely unsuitable for money. The larger formats:

  binary32  (single precision):    about 7 decimal digits; good as long as no
            operand, intermediate, or result exceeds about $100.
  binary64  (double precision):    about 15 decimal digits; good up to about
            $1,000,000,000.
  binary128 (quadruple precision): about 33 decimal digits; good up to about
            $10^28.
  binary256 (octuple precision):   about 71 decimal digits; good up to about
            $10^67.

Decimal32, Decimal64 and Decimal128 have about the same precision as their
binary counterparts (as do the various fixed-point BCD or packed-decimal
formats, scaled-integer binary types, and even bignums). Fixed and floating
arithmetic are equally accurate within the limits of precision associated with
the storage and calculation format in use.
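The binary32 vs binary64 cutoffs can be demonstrated directly. Here is a small
sketch in Python using only the standard library; the round-trip through
struct's 'f' format simulates storing a value in binary32:

```python
import struct

def as_binary32(x: float) -> float:
    """Round-trip a Python float (binary64) through binary32 storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

amount = 1234567.89  # about $1.2 million -- well past the binary32 comfort zone

# binary32 (~7 significant digits) can no longer resolve cents here; the
# spacing between adjacent binary32 values near 1.2 million is 0.125:
print(f"binary32: {as_binary32(amount):.2f}")  # 1234567.88 -- the cent is gone

# binary64 (~15 significant digits) still holds the cents comfortably:
print(f"binary64: {amount:.2f}")               # 1234567.89

# At small amounts binary32 is still fine:
print(f"binary32: {as_binary32(199.99):.2f}")  # 199.99
```

The break-even point is exactly what the precision figures predict: once the
integer part consumes most of the ~7 significant digits, there is nothing left
for the cents.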
> * just as some people reject UUID schemes with a tiny-but-nonzero chance
> of collisions, hmm?
Not really. Using a storage format that has the required precision cannot lead
to "happenstance" (probabilistic) errors, whether that storage format is fixed
or floating point, binary, decimal, BCD, packed-decimal, or scaled integer.
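To make that concrete, here is a sketch (Python, standard library only)
showing that scaled integers, decimal arithmetic, and binary64 all give the
correct answer once results are rounded to the precision the format actually
supports:

```python
from decimal import Decimal

# Summing a tenth of a dollar ten times, three ways.

# Scaled integers (amounts kept in cents) are exact by construction:
total_cents = sum([10] * 10)
assert total_cents == 100

# Decimal arithmetic is exact too, since 0.10 is a decimal fraction:
assert sum([Decimal("0.10")] * 10) == Decimal("1.00")

# binary64 carries a tiny representation error in 0.10, but within its
# ~15-digit precision that error stays far below half a cent, so rounding
# to cents recovers the exact result:
total_float = sum([0.10] * 10)
print(total_float)                   # 0.9999999999999999
assert round(total_float, 2) == 1.00
```

The binary64 total is off by one unit in the last place, not by anything a
currency column would ever see -- which is the point: within the format's
precision limits, the error is bounded and deterministic, not probabilistic.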
However, UUID schemes have a 100% certainty of collision given enough
identifiers, since the identifier space is finite. There is a huge
sqlite-users mailing list