Author: wyoung
Date: Thu Aug 15 21:03:30 2013
New Revision: 2749
URL: http://svn.gna.org/viewcvs/mysqlpp?rev=2749&view=rev
Log:
- Previous comparison to Larry Ellison's lost change was mathematical
nonsense. MySQL++ currently distinguishes $1,000,000,000.00001 from
$1 bn, which is even more risible than its proper inverse: ignoring
a difference of $100,000 when starting with $1 bn.
Now the prose contains mathematically *sound* nonsense. Better.
- Picking on Carlos Slim for the third billionaire comparison, instead of
doubling up on Bill Gates. Poor guy.
Modified:
trunk/Wishlist
Modified: trunk/Wishlist
URL:
http://svn.gna.org/viewcvs/mysqlpp/trunk/Wishlist?rev=2749&r1=2748&r2=2749&view=diff
==============================================================================
--- trunk/Wishlist (original)
+++ trunk/Wishlist Thu Aug 15 21:03:30 2013
@@ -55,11 +55,15 @@
dealing with Null<T> wrapped types, such as sql_blob_null.
o The current method SSQLS uses to compare floating point numbers
- is highly dubious. It just subtracts them, and checks that the
- absolute difference is under some threshold. According to this
- current rule, $1,000,010,000 is the same as $1,000,000,000,
- which means Larry Ellison should send me a check for tens of
- thousands of dollars, since it is an insignificant amount.
+ is highly dubious. It just subtracts them, and checks that
+ the absolute difference is under some threshold. The manual
+ warns that this is fine for "human scale" applications, but even
+ that's not actually true. It means that if Larry Ellison loses
+ a hundredth of a penny in his couch, it is somehow significant.
+ I have no idea how much money Larry Ellison is comfortable losing
+ to his couch cushions, but it's probably closer to 1 ppm than
+ the current threshold, which is 100 parts per quadrillion on
+ the scale of $1 bn.
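For illustration, the absolute-difference test described above amounts
to roughly the following sketch (this is not the actual SSQLS comparison
code; the function name and the 0.00001 epsilon are only stand-ins):

    #include <cmath>

    // Absolute-difference test: a and b count as equal whenever they
    // differ by less than a fixed epsilon, regardless of their magnitude.
    inline bool
    float_equal_abs(double a, double b, double epsilon = 0.00001)
    {
        return std::fabs(a - b) < epsilon;
    }

    // The trouble is that the test is scale-dependent:
    //   float_equal_abs(1.0e9, 1.0e9 + 0.0001)  // false: a hundredth of
    //                                           //   a penny on $1 bn is
    //                                           //   "significant"
    //   float_equal_abs(2.0e-6, 1.0e-6)         // true: a 100% relative
    //                                           //   error passes as equal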
For backwards compatibility, we should keep this method, but we
should add these two more mathematically sound methods:
@@ -67,9 +71,9 @@
- Percentage: Divide the smaller number into the larger, then
compare against a threshold. The default should be
something like 1.000001 (1 ppm), which lets us make much
- finer distinctions without running out of precision, more
- on the order of the amount of money Bill Gates loses in his
- couch cushions.
+ finer distinctions without running out of precision, even
+ with single-precision numbers counting Bill Gates' losses to
+ his couch cushions.
- Logarithmic, or "Bels": Same as percentage, but on a log10
scale so it works better for FP numbers, which are based on
@@ -80,10 +84,14 @@
1 ppm is ~4.3e-7, which is below what single-precision FP
can distinguish. Increasing the threshold to a value you
*can* distinguish with a 32-bit IEEE float makes it ignore
- significant amounts of money in Bill Gates' couch cushions.
+ significant amounts of money in Carlos Slim's couch cushions.
(Hundreds of dollars.) Therefore, we should use something
like 1e-7 or 1e-8 anyway, and make it clear that the default
threshold is only suitable for doubles.
+
+ Someone using single precision FP should increase the threshold
+ to 1e-5 or so. Such a person would be assumed to know what
+ they're doing.
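A rough sketch of what the two proposed tests might look like (the
function names, signatures, and default thresholds below are guesses
at the behavior described above, not an existing MySQL++ API):

    #include <algorithm>
    #include <cmath>

    // Ratio ("percentage") test: divide the smaller magnitude into the
    // larger and call the pair equal if the quotient stays under a
    // threshold just above 1.  1.000001 is the 1 ppm default suggested
    // above.
    inline bool
    float_equal_ratio(double a, double b, double max_ratio = 1.000001)
    {
        if (a == b) return true;        // exact match, including 0 == 0
        if (a == 0.0 || b == 0.0 || (a < 0) != (b < 0)) {
            return false;               // no meaningful ratio exists
        }
        double hi = std::max(std::fabs(a), std::fabs(b));
        double lo = std::min(std::fabs(a), std::fabs(b));
        return hi / lo < max_ratio;
    }

    // Logarithmic ("Bels") test: the same idea on a log10 scale, which
    // lines up better with how FP numbers are stored.  log10(1.000001)
    // is ~4.3e-7, i.e. 1 ppm expressed in Bels; the text above suggests
    // defaulting to something like 1e-7 or 1e-8 and documenting that it
    // only suits doubles.  (FLT_EPSILON is ~1.2e-7, so a 32-bit float
    // user would raise the threshold to roughly 1e-5, as noted above.)
    inline bool
    float_equal_bels(double a, double b, double max_bels = 1e-7)
    {
        if (a == b) return true;
        if (a == 0.0 || b == 0.0 || (a < 0) != (b < 0)) {
            return false;
        }
        return std::fabs(std::log10(std::fabs(a)) -
                std::log10(std::fabs(b))) < max_bels;
    }

With these defaults, either test treats $1,000,000,000.0001 as equal
to $1 bn but still flags $1,000,010,000 (a 10 ppm difference) as
different.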
It's probably more efficient to change the algorithm from: