On Fri, Feb 22, 2013 at 8:59 AM, Peter Pearson <ppearson@nowhere.invalid> wrote:
> On Fri, 22 Feb 2013 08:23:27 +1100, Chris Angelico <ros...@gmail.com> wrote:
>> In theory, a float should hold the nearest representable value to the
>> exact result. Considering that only one operation is being performed,
>> there should be no accumulation of error. The integer results show a
>> small number (618) of collisions, eg 2**16 and 4**8; why should some
>> of those NOT collide when done with floating point? My initial thought
>> was "Oh, this is comparing floats for equality", but after one single
>> operation, that should be not a problem.
>
> Does this help explain it?
>
>>>> print hex(int(math.pow(3,60))); print hex(3**60)
> 0x88f924eeceeda80000000000L
> 0x88f924eeceeda7fe92e1f5b1L
>
I understand how the inaccuracy works, but I'm expecting it to be as
consistent as Mr Grossmith's entertainments. It doesn't matter that
math.pow(3,60) != 3**60, but the number of collisions is different when
done with floats on the OP's Mac. Here's what I'm talking about:

>>> set((3**60,9**30,27**20))
{42391158275216203514294433201}
>>> set((math.pow(3,60),math.pow(9,30),math.pow(27,20)))
{4.23911582752162e+28}

Note how, in each case, calculating three powers that have the same
real-number result gives a one-element set. Three to the sixtieth power
can't be perfectly rendered with a 53-bit mantissa, but it's rendered
the same way whichever route is used to calculate it.

ChrisA
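
P.S. For anyone who wants to reproduce the collision counts being
discussed, here's a minimal sketch. The OP's actual loop isn't quoted
above, so the ranges below are assumptions for illustration only; the
point is just to compare how many a**b values coincide when computed
with ints versus with math.pow.

import math

# Hypothetical ranges -- the OP's exact bounds aren't quoted, so these
# are assumptions chosen only to make the comparison visible.
pairs = [(a, b) for a in range(2, 101) for b in range(2, 101)]

int_results = [a ** b for a, b in pairs]
float_results = [math.pow(a, b) for a, b in pairs]

# A "collision" is a pair whose value matches some earlier pair's value,
# e.g. 2**16 == 4**8 == 65536.
print("int collisions:   %d" % (len(int_results) - len(set(int_results))))
print("float collisions: %d" % (len(float_results) - len(set(float_results))))

If math.pow were always returning the correctly rounded double, the
float count could only be greater than or equal to the int count (equal
exact values give equal doubles, and rounding can only merge extra
pairs); a smaller float count would point to platform-dependent rounding
in the C pow() the OP's Mac is using.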