Hi David.

Any reason you run such a tiny subset of benchmarks?

On Tue, Dec 1, 2015 at 5:26 PM, Stewart, David C
<david.c.stew...@intel.com> wrote:
>
>
> From: Fabio Zadrozny <fabi...@gmail.com>
> Date: Tuesday, December 1, 2015 at 1:36 AM
> To: David Stewart <david.c.stew...@intel.com>
> Cc: "R. David Murray" <rdmur...@bitdance.com>, "python-dev@python.org"
> <python-dev@python.org>
> Subject: Re: [Python-Dev] Avoiding CPython performance regressions
>
>
> On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C
> <david.c.stew...@intel.com> wrote:
>
> On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray"
> <python-dev-bounces+david.c.stewart=intel....@python.org on behalf of
> rdmur...@bitdance.com> wrote:
>
>>
>>There's also an Intel project posted about here recently that checks
>>individual benchmarks for performance regressions and posts the results
>>to python-checkins.
>
> The description of the project is at https://01.org/lp - Python results are
> indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1 due to
> the Romanian National Day holiday!)
>
> There is also a graphic dashboard at http://languagesperformance.intel.com/
>
> Hi Dave,
>
> Interesting, but I'm curious: which benchmark set are you running? From the
> graphs it seems to have a really high standard deviation, so I'm curious
> whether that's due to changes in the CPython codebase, issues in the
> benchmark set, or how the benchmarks are run... (it doesn't seem to be the
> benchmarks from https://hg.python.org/benchmarks/, right?)
>
> Fabio – my advice to you is to check out the daily emails sent to 
> python-checkins. An example is 
> https://mail.python.org/pipermail/python-checkins/2015-November/140185.html. 
> If you still have questions, Stefan can answer (he is copied).
>
> The graphs are really just a manager-level indicator of trends, which I find
> very useful (I have it running continuously on one of the monitors in my
> office), but core developers might want to see the day-to-day effect of
> their changes. (Particularly if they thought one was going to improve
> performance; it's nice to see if you get community confirmation.)
>
> We do run a subset of https://hg.python.org/benchmarks/ nightly, and run the
> full set when we are evaluating our performance patches.
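>
> If you want to reproduce a comparable run locally, the suite's perf.py
> compares a baseline interpreter against a modified one and takes a -b
> option to select individual benchmarks. A minimal sketch (the flags are
> from memory and the benchmark names are only examples; check
> perf.py --help in your checkout):
>
>     # compare two builds on a hand-picked subset of the suite
>     python perf.py -b django,nbody --rigorous \
>         /path/to/python-baseline /path/to/python-patched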
>
> Some of the "benchmarks" really do have a high standard deviation, which 
> makes them hardly very useful for measuring incremental performance 
> improvements, IMHO. I like to see it spelled out so I can tell whether I 
> should be worried or not about a particular delta.
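>
> As a rough illustration of why the spread matters, here is a minimal
> Python sketch (the helper name and the two-sigma cutoff are mine, not
> something from our harness) that treats a delta as noise unless the shift
> in the mean exceeds two standard deviations of the baseline samples:
>
>     import statistics
>
>     def significant(baseline, new, k=2.0):
>         """True if the mean shifted by more than k baseline stdevs."""
>         delta = statistics.mean(new) - statistics.mean(baseline)
>         return abs(delta) > k * statistics.stdev(baseline)
>
>     base = [10.2, 10.4, 10.1, 10.6, 10.3]      # seconds per run
>     patched = [10.8, 11.0, 10.9, 11.1, 10.7]
>     print(significant(base, patched))          # True: delta ~0.58s, stdev ~0.19s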
>
> Dave