On 8/16/06, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> I have now some numbers. For the attached t.py, the unmodified svn
> python gives
>
> Test 1 3.25420880318
> Test 2 1.86433696747
>
> and the one with the attached patch gives
>
> Test 1 3.45080399513
> Test 2 2.09729003906
>
> So there apparently is a performance drop on int allocations of about
> 5-10%.
>
> On this machine (P4 3.2GHz) I could not find any difference in pystones
> from this patch.
>
> Notice that this test case is extremely focused on measuring int
> allocation (I just noticed I should have omitted the for loop in
> the second case, though).

I think the test isn't focused enough on int allocation. I
wonder if you could come up with a benchmark that repeatedly allocates
hundreds of thousands of ints and then deletes them? What if it also
allocates other small objects so the ints become more fragmented?
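[A benchmark along those lines might look something like the sketch below. It is a hypothetical illustration, not code from the thread: the function name `bench_ints`, the iteration counts, and the use of small tuples as the "other small objects" are all my own assumptions. Offsetting by 1000 avoids the interned small-int cache so each value really is a fresh allocation.]

```python
# Hypothetical sketch of the suggested benchmark: allocate hundreds of
# thousands of ints, delete them, and optionally interleave other small
# objects so the freed int blocks become fragmented.
import time

def bench_ints(n=500_000, fragment=False):
    start = time.perf_counter()
    # i + 1000 keeps values outside the cached small-int range,
    # so every element forces a real int allocation.
    ints = [i + 1000 for i in range(n)]
    if fragment:
        # Interleaved small-object allocations leave holes among
        # the ints, so freeing the ints fragments the free lists.
        others = [(i,) for i in range(0, n, 2)]
        del others
    del ints
    return time.perf_counter() - start

if __name__ == "__main__":
    print("plain     :", bench_ints())
    print("fragmented:", bench_ints(fragment=True))
```
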

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)