M.-A. Lemburg wrote:

> If you are eventually rounding to say 2 decimal
> places in the end of the calculation, you won't
> want to find yourself presenting the user 1.12
> and 1.13 as equal values :-)

Even if, before rounding, they were actually
1.12499999999 and 1.125000000001? And if the
difference were only due to the unrepresentability
of some decimal fraction exactly in binary?
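A quick sketch in Python of both points, using the literals from the paragraph above: two values that differ only around the twelfth decimal place land on opposite sides of the .005 rounding boundary, and binary representation error is enough to create exactly that kind of gap:

```python
a = 1.12499999999    # just below the .005 boundary
b = 1.125000000001   # just above it
print(round(a, 2))   # 1.12
print(round(b, 2))   # 1.13 -- nearly equal values presented as different

# The gap can arise purely from decimal fractions being
# unrepresentable in binary:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```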

I still maintain that (a) rounding a *binary*
float to *decimal* places is wrongheaded, and
(b) digit chopping is a bad way to decide
whether two inexact numbers should be
considered equal. Not just a different way,
but a poorer way.
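To make the contrast concrete, here is a small sketch comparing digit chopping with a tolerance-based test (`math.isclose` is one spelling of the tolerance test; a hand-rolled `abs(a - b) <= tol` does the same job):

```python
import math

a = 1.1249999999
b = 1.1250000001

# Digit chopping: two values about 2e-10 apart compare unequal...
print(round(a, 2) == round(b, 2))   # False (1.12 vs 1.13)

# ...while values 0.004 apart compare equal.
print(round(1.120, 2) == round(1.124, 2))   # True

# A tolerance comparison gets both cases right:
print(math.isclose(a, b, rel_tol=1e-6))          # True
print(math.isclose(1.120, 1.124, rel_tol=1e-6))  # False
```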

> Most
> such calculations do work with floats, so having
> round() return an int would just add another
> coercion to a float for those use-cases.

I'm *not* proposing to eliminate round-to-float,
despite the fact that I can't see much use for
it personally.

I'm also *not* advocating changing the existing
behaviour of round() or int(). That was just
tentative speculation.

All I'm asking for is another function that does
round-to-int directly. I wouldn't have thought
that was such a controversial idea, given the
frequency of use for that operation.
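As a sketch of what such a function could look like (the name `iround` and the round-half-away-from-zero policy are assumptions for illustration, not an existing API):

```python
import math

def iround(x):
    """Hypothetical round-to-int: nearest integer, halves away from zero.

    Caveat: the floor(x + 0.5) trick has a known edge case -- for the
    largest float just below 0.5, the sum x + 0.5 rounds up to exactly
    1.0, so the result is 1 rather than 0.
    """
    if x >= 0:
        return int(math.floor(x + 0.5))
    return int(math.ceil(x - 0.5))

print(iround(2.5))    # 3
print(iround(-2.5))   # -3
print(iround(1.4))    # 1
print(iround(-1.6))   # -2
```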

--
Greg

_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev