On 1 August 2013 07:32, Chris Angelico <ros...@gmail.com> wrote:
> On Thu, Aug 1, 2013 at 7:20 AM, Steven D'Aprano
> <steve+comp.lang.pyt...@pearwood.info> wrote:
>> I know this, and that's not what surprised me. What surprised me was that
>> Fraction converts the float to a fraction, then compares. It surprises me
>> because in other operations, Fractions down-cast to float.
>>
>> Adding a float to a Fraction converts the Fraction to the nearest float,
>> then adds:
>>
>> py> 1/3 + Fraction(1, 3)
>> 0.6666666666666666
>
> Hmm. This is the one that surprises me. That would be like the
> addition of a float and an int resulting in an int (at least in C; in
> Python, where floats have limited range and ints have arbitrary
> precision, the matter's not quite so clear-cut). Perhaps this needs to
> be changed?
The Python numeric tower is described here:

http://docs.python.org/3/library/numbers.html#module-numbers

Essentially it says that

    Integral < Rational < Real < Complex

and that numeric coercions in mixed-type arithmetic should go from left to right, which makes sense mathematically in terms of the subset/superset relationships between the numeric fields.

When you recast this in terms of Python's builtin/stdlib types it becomes

    int < Fraction < {float, Decimal} < complex

and, taking account of boundedness and imprecision, we find that the only subset/superset relationships that are actually valid are

    int < Fraction    and    float < complex

In fact Fraction is a superset of both float and Decimal (ignoring inf/nan/-0 etc.). int is not a subset of float, Decimal or complex. float is a superset of none of the types. Decimal is a superset of float, but the tower places them on the same level.

The real dividing line between {int, Fraction} and {float, Decimal, complex} is about (in)exactness. The numeric tower ensures the property that inexactness is contagious, which I think is a good thing. This is not explicitly documented anywhere: PEP 3141 makes a dangling reference to an Exact ABC as a superclass of Rational, but this is unimplemented anywhere AFAICT:

http://www.python.org/dev/peps/pep-3141/

The reason contagious inexactness is a good thing is the same as the reason for contagious quiet NaNs: it makes it possible to rule out inexact computations playing a role in the final computed result.

In my previous post I asked what the use case is for mixing floats and Rationals in computation. I have always considered this something to avoid, and I'm glad that contagious inexactness helps keep floats out of exact computations.

Oscar
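
P.S. A quick sketch of what the coercion rules above look like in practice (an illustrative Python 3 session, assuming a typical CPython with IEEE 754 doubles; the exact digits in the reprs may differ on other builds):

    >>> from fractions import Fraction
    >>> Fraction(1, 3) + 1          # int coerces up to Fraction: exact stays exact
    Fraction(4, 3)
    >>> Fraction(1, 3) + 1/3        # float wins: the Fraction is converted to float
    0.6666666666666666
    >>> Fraction(1, 3) + 1j         # complex wins: the result is an inexact complex
    (0.3333333333333333+1j)
    >>> Fraction(0.1)               # but every finite float is exactly a Fraction
    Fraction(3602879701896397, 36028797018963968)

Only the first result stays exact; as soon as a float or complex enters the expression the inexactness propagates, which is the contagion described above.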