Antoine Pitrou wrote:
> On Thu, 15 Dec 2011 22:18:18 +0000
> Mark Shannon <m...@hotpy.org> wrote:
>> For the gcbench benchmark (from unladen swallow),
>> cpython with the new dict is about 9% faster and, more importantly,
>> reduces memory use from 99 Mbytes to 61 Mbytes (a 38% reduction).
>> All tests were done on my ancient 32 bit intel linux machine,
>> please try it out on your machines and let me know what sort of results
>> you get.
> Benchmark results under a Core i5, 64-bit Linux:
>
> Report on Linux localhost.localdomain 2.6.38.8-desktop-8.mga #1 SMP Fri Nov 4 00:05:53 UTC 2011 x86_64 x86_64
> Total CPU cores: 4
>
> ### call_method ###
> Min: 0.292352 -> 0.274041: 1.07x faster
> Avg: 0.292978 -> 0.277124: 1.06x faster
> Significant (t=17.31)
> Stddev: 0.00053 -> 0.00351: 6.5719x larger
>
> ### call_method_slots ###
> Min: 0.284101 -> 0.273508: 1.04x faster
> Avg: 0.285029 -> 0.274534: 1.04x faster
> Significant (t=26.86)
> Stddev: 0.00068 -> 0.00135: 1.9969x larger
>
> ### call_simple ###
> Min: 0.225191 -> 0.222104: 1.01x faster
> Avg: 0.227443 -> 0.222776: 1.02x faster
> Significant (t=9.53)
> Stddev: 0.00181 -> 0.00056: 3.2266x smaller
>
> ### fastpickle ###
> Min: 0.482402 -> 0.493695: 1.02x slower
> Avg: 0.486077 -> 0.496568: 1.02x slower
> Significant (t=-5.35)
> Stddev: 0.00340 -> 0.00276: 1.2335x smaller
>
> ### fastunpickle ###
> Min: 0.394846 -> 0.433733: 1.10x slower
> Avg: 0.397362 -> 0.436318: 1.10x slower
> Significant (t=-23.73)
> Stddev: 0.00234 -> 0.00283: 1.2129x larger
>
> ### float ###
> Min: 0.052567 -> 0.051377: 1.02x faster
> Avg: 0.053812 -> 0.052669: 1.02x faster
> Significant (t=3.72)
> Stddev: 0.00110 -> 0.00107: 1.0203x smaller
>
> ### json_dump ###
> Min: 0.381395 -> 0.391053: 1.03x slower
> Avg: 0.381937 -> 0.393219: 1.03x slower
> Significant (t=-7.15)
> Stddev: 0.00043 -> 0.00350: 8.1447x larger
>
> ### json_load ###
> Min: 0.347112 -> 0.369763: 1.07x slower
> Avg: 0.347490 -> 0.370317: 1.07x slower
> Significant (t=-69.64)
> Stddev: 0.00045 -> 0.00058: 1.2717x larger
>
> ### nbody ###
> Min: 0.238068 -> 0.219208: 1.09x faster
> Avg: 0.238951 -> 0.220000: 1.09x faster
> Significant (t=36.09)
> Stddev: 0.00076 -> 0.00090: 1.1863x larger
>
> ### nqueens ###
> Min: 0.262282 -> 0.252576: 1.04x faster
> Avg: 0.263835 -> 0.254497: 1.04x faster
> Significant (t=7.12)
> Stddev: 0.00117 -> 0.00269: 2.2914x larger
>
> ### regex_effbot ###
> Min: 0.060298 -> 0.057791: 1.04x faster
> Avg: 0.060435 -> 0.058128: 1.04x faster
> Significant (t=17.82)
> Stddev: 0.00012 -> 0.00026: 2.1761x larger
>
> ### richards ###
> Min: 0.148266 -> 0.143755: 1.03x faster
> Avg: 0.150677 -> 0.145003: 1.04x faster
> Significant (t=5.74)
> Stddev: 0.00200 -> 0.00094: 2.1329x smaller
>
> ### silent_logging ###
> Min: 0.057191 -> 0.059082: 1.03x slower
> Avg: 0.057335 -> 0.059194: 1.03x slower
> Significant (t=-17.40)
> Stddev: 0.00020 -> 0.00013: 1.4948x smaller
>
> ### unpack_sequence ###
> Min: 0.000046 -> 0.000042: 1.10x faster
> Avg: 0.000048 -> 0.000044: 1.09x faster
> Significant (t=128.98)
> Stddev: 0.00000 -> 0.00000: 1.8933x smaller
Thanks for running the benchmarks.

It's probably best not to attach too much significance to a few percent
here and there, but it's good to see that performance is OK.
> gcbench first showed no memory consumption difference (using "ps -u").
> I then removed the "stretch tree" (which apparently reserves memory
> upfront) and I saw a ~30% memory saving as well as a 20% performance
> improvement on large sizes.
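(Aside: as I understand it, the "stretch tree" is a large throw-away tree
that gcbench builds once at start-up purely to push the heap up to its
high-water mark before the timed runs, which is why a coarse tool like ps
sees little difference until it is removed. A rough Python sketch of the
idea, with illustrative names rather than the benchmark's exact code:

class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def make_tree(depth):
    # Build a full binary tree of the given depth.
    if depth <= 0:
        return Node()
    return Node(make_tree(depth - 1), make_tree(depth - 1))

def stretch(depth):
    # Allocate one large tree and drop it immediately, so the heap (and
    # the figure reported by ps) reaches its peak before timing starts.
    make_tree(depth)

)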
I should say how I did my memory tests.
I did a search using ulimit to limit the maximum amount of memory the
process was allowed; the numbers I gave were the minimum required to
complete. I did not remove the "stretch tree".
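
For concreteness, the search was along these lines (a sketch only; the
bash/"ulimit -v" wrapper, the bisection and the file names are
illustrative, not my exact script):

import subprocess

def runs_ok(limit_kb):
    # Run the benchmark under a hard address-space limit (ulimit -v is in
    # kilobytes); running out of memory shows up as a non-zero exit status.
    cmd = "ulimit -v %d; exec ./python gcbench.py" % limit_kb
    return subprocess.call(["bash", "-c", cmd]) == 0

def min_memory_kb(lo, hi):
    # Assumes the run fails at lo and completes at hi; narrow the interval
    # until the smallest limit that still completes is found.
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if runs_ok(mid):
            hi = mid
        else:
            lo = mid
    return hi

For example, min_memory_kb(20 * 1024, 200 * 1024) would report the
smallest limit (in KB) under which the run still finishes.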
Cheers,
Mark.