Hi Victor,

Understood on the obsolete benchmark part.  This work was done before the new 
benchmark suite was created on GitHub.
I thought it was related, and thus didn't open a new thread.

Maybe you could point me to one single micro-benchmark for the time being, and 
then we could compare results across setups?
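
For example, something along these lines (just a sketch, assuming the new 
'performance' suite; the call_method benchmark name and the compare 
subcommand are my guesses at a suitable workflow):

    # Run the same micro-benchmark on two setups, saving results as JSON.
    pyperformance run -b call_method -o setup_a.json
    pyperformance run -b call_method -o setup_b.json

    # Compare the two result files.
    pyperformance compare setup_a.json setup_b.json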
 
Regards,

Peter



-----Original Message-----
From: Victor Stinner [mailto:victor.stin...@gmail.com] 
Sent: Wednesday, March 15, 2017 5:51 PM
To: speed@python.org; Wang, Peter Xihong <peter.xihong.w...@intel.com>
Subject: ASLR

2017-03-16 1:38 GMT+01:00 Wang, Peter Xihong <peter.xihong.w...@intel.com>:
> Hi All,
>
> I am attaching an image comparing runs of the CALL_METHOD micro-benchmark in 
> the old Grand Unified Python Benchmark (GUPB) suite 
> (https://hg.python.org/benchmarks), with ASLR enabled and with it disabled.

This benchmark suite is now deprecated; please update to the new 'performance' 
benchmark suite:
https://github.com/python/performance
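
For reference, getting started looks roughly like this (a sketch; I believe 
the pip package is named 'performance' and installs a pyperformance command):

    # Install the new benchmark suite and list the micro-benchmarks it provides.
    pip install performance
    pyperformance list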

The old benchmark suite didn't spawn multiple processes and so was less 
reliable.
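
The 'perf' module underlying the new suite spawns worker processes by default; 
a minimal sketch (the --processes value here is arbitrary, only to illustrate):

    # Time a statement across 20 freshly spawned Python processes, so that
    # per-process effects such as ASLR average out across runs.
    python3 -m perf timeit --processes=20 -s "x = list(range(1000))" "sorted(x)"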

By the way, maybe I should commit a change in hg.python.org/benchmarks to 
remove the code and keep only a README.txt? The code would still be accessible 
in the Mercurial history.


> You can see that the run-to-run variation was reduced significantly, from data 
> scattered all over the place to just one single outlier out of 30 repeated 
> runs.
> This effectively eliminated most of the variation for this micro-benchmark.
>
> On a Linux system, you can do this as root:
> echo 0 > /proc/sys/kernel/randomize_va_space   # to disable
> echo 2 > /proc/sys/kernel/randomize_va_space   # to enable
>
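
Side note: if you don't want to change the system-wide setting, ASLR can also 
be disabled for a single process; a sketch using util-linux's setarch, 
assuming its -R/--addr-no-randomize option:

    # Check the current system-wide ASLR setting (0 = off, 2 = full).
    cat /proc/sys/kernel/randomize_va_space

    # Disable address-space randomization for one benchmark run only.
    setarch $(uname -m) -R python3 -m perf timeit -s "x = 1" "x + 1"
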
> If anyone still experiences run-to-run variation, I'd suggest reading on:
> based on my observations in our labs, a lot of factors can impact 
> performance, including the environment (yes, even the room temperature),

I ran my own experiment on the impact of temperature on performance, and even 
above 100°C I didn't notice anything:
https://haypo.github.io/intel-cpus-part2.html
"Impact of the CPU temperature on benchmarks"

I tested a desktop and a laptop PC, both with Intel CPUs.


>  HW components and related factors such as platform, chipset, memory DIMMs, 
>  CPU generation and stepping, BIOS version, and kernel; the list goes on and 
>  on.
>
> That being said, would it be helpful if we worked together to identify the 
> root cause, be it due to SW or anything else?  We could start with a specific 
> micro-benchmark, with a specific goal as to what to measure.
> After that, or in parallel once some baseline work is done, we could focus on 
> the measurement process/methodology.
>
> Is this helpful?
>
> Thanks,
>
> Peter

Note: please open a new thread instead of replying to an email in an existing 
thread.

Victor