On Wed, Jul 20, 2016 at 11:54 PM, Marko Rauhamaa <ma...@pacujo.net> wrote:
>  2. Floating-point numbers are *imperfect approximations* of real
>     numbers. Even when real numbers are derived exactly, floating-point
>     operations may introduce "lossy compression artifacts" that have to
>     be compensated for in application programs.

This is the kind of black FUD that has to be fought off. What
"compression artifacts" are introduced? The *only* lossiness in IEEE
binary floating-point arithmetic is rounding. (This is the bit where
someone like Steven is going to point out that there's something else
as well.) Unless you are working with numbers that require more
precision than you have available, the result should be perfectly
accurate. And there are other number systems that are far less
'simple' than binary floating point. Can you imagine this second
assertion failing?

assert x <= y # if not, swap the values
assert x <= (x+y)/2 <= y

Because it can fail with decimal.Decimal, due to the way rounding
works in decimal arithmetic.
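A minimal sketch of that failure (the specific values and the tiny
context precision are my own illustrative choices, not from the post --
shrinking the precision just makes the rounding easy to see):

```python
from decimal import Decimal, getcontext

# Work with only two significant digits so the rounding in the
# addition step is visible.
getcontext().prec = 2

x = Decimal('6.1')
y = Decimal('6.3')

# The exact sum is 12.4, but at two significant digits it rounds to
# 12, so the "midpoint" lands below x.
mid = (x + y) / 2
print(mid)            # 6
print(x <= mid <= y)  # False

# The same assertion holds for binary floats (barring overflow),
# because dividing by two is exact in base 2 and the sum rounds to
# a value still bracketed by x and y.
fx, fy = 6.1, 6.3
print(fx <= (fx + fy) / 2 <= fy)  # True
```

In base 2, halving only decrements the exponent, so the division
introduces no rounding at all; in base 10 it can, which is where the
assertion breaks.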

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list