On 9/10/07, Guido van Rossum <[EMAIL PROTECTED]> wrote:
> On 9/10/07, Nicholas Bastin <[EMAIL PROTECTED]> wrote:
> > > > I did redo my benchmark using 200 as the increment number instead of
> > > > 1, to duck any impact from the interning of small value ints in 2.6,
> > > > and it made no discernible difference in the results.
> > >
> > > I'm sorry, I've lost context. I'm not at all clear at this point what
> > > benchmark you might have ran.
> >
> > I posted a tiny snippet of code earlier in the thread that was a
> > sortof silly benchmark of integer math operations.
>
> Can you report the exact code after all the changes you made, *and*
> the results that you are now comparing?
Simple example code, inttest.py:

    def int_test2(rounds):
        index = 0
        while index < rounds:
            foo = 0
            while foo < 200000000:
                foo += 200
                # ... above line repeated 99 more times
            index += 1

Run with:

    python timeit.py "import inttest; inttest.int_test2(5)"

Results:

    3.0: 10 loops, best of 3: 6.76 sec per loop
    2.6: 10 loops, best of 3: 2.61 sec per loop

The case of foo += 200 actually performs worse in 3.0 than foo += 1,
although 2.6 is consistent using either value. This is on Windows XP Pro,
Pentium D 3.00 GHz (dual core). Python was invoked with REALTIME process
priority and thread affinity set to 1. Without thread affinity, 3.0
averaged 7.15 seconds per loop and 2.6 averaged 2.64 seconds per loop.

--
Nick
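
For anyone who wants to re-run this without pasting the foo += 200 line
100 times by hand, here is a rough sketch that builds an equivalent
function programmatically. It is not the file I actually timed; the
reproduce_inttest.py name and the exec()-based construction are just one
way to get an equivalent body, and the printed numbers will of course
depend on the machine and Python build:

    # reproduce_inttest.py -- sketch only; builds the same 100-statement
    # inner-loop body as inttest.py with exec() instead of hand-pasting.
    lines = ["def int_test2(rounds):",
             "    index = 0",
             "    while index < rounds:",
             "        foo = 0",
             "        while foo < 200000000:"]
    lines += ["            foo += 200"] * 100   # the hand-repeated line
    lines += ["        index += 1"]

    namespace = {}
    exec("\n".join(lines), namespace)   # compiles equivalently to the typed-out file
    int_test2 = namespace["int_test2"]

    if __name__ == "__main__":
        import timeit
        # mirrors: python timeit.py "import inttest; inttest.int_test2(5)"
        times = timeit.repeat("int_test2(5)",
                              setup="from __main__ import int_test2",
                              repeat=3, number=10)
        print("best of 3: %.2f sec per loop" % (min(times) / 10))

Running it under 2.6 and 3.0 on the same box should show roughly the same
ratio as the timeit.py invocation above, even if the absolute times differ.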