On Thu, 1 Sep 2016 at 03:58 Victor Stinner <victor.stin...@gmail.com> wrote:

> Hi,
>
> Would it be possible to run a new instance of CodeSpeed (the website
> behind speed.python.org) which would run the "performance" benchmark
> suite rather than the "benchmarks" benchmark suite? And would it be
> possible to run it on CPython (2.7 and 3.5 branches) and PyPy (master
> branch, maybe also the py3k branch)?
>

I believe Zach has the repo containing the code. He also said it's all
rather hacked up at the moment. Maybe something to discuss next week at
the sprint, since I think you're both going to be there.


>
> I found https://github.com/tobami/codespeed/ but I haven't looked at it
> closely yet. I guess that some code should be written to convert the
> perf JSON file to the format expected by CodeSpeed?
>
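
My guess is yes: a small script that loads the perf JSON file and POSTs
one result per benchmark to CodeSpeed's /result/add/ endpoint would
probably cover it. Something roughly like this (the perf JSON keys and
all of the metadata values are guesses on my part; the form field names
are the ones in the codespeed README, so double-check both):

import json
import statistics
import urllib.parse
import urllib.request

CODESPEED_URL = 'http://localhost:8000'  # wherever the CodeSpeed instance runs


def iter_benchmarks(path):
    # NOTE: guessing at the perf JSON layout here ("benchmarks",
    # "metadata", "runs", "samples"); adjust the keys to whatever
    # performance 0.2 actually writes.
    with open(path) as fp:
        data = json.load(fp)
    for bench in data['benchmarks']:
        name = bench['metadata']['name']
        samples = [s for run in bench['runs'] for s in run['samples']]
        yield name, statistics.mean(samples)


def post_result(name, value):
    # Placeholder metadata; each of these has to match what the CodeSpeed
    # instance has configured (project, executable, environment, ...).
    payload = urllib.parse.urlencode({
        'commitid': 'tip',
        'branch': 'default',
        'project': 'CPython',
        'executable': 'cpython-3.5',
        'benchmark': name,
        'environment': 'speed-python',
        'result_value': value,
    }).encode()
    urllib.request.urlopen(CODESPEED_URL + '/result/add/', payload)


if __name__ == '__main__':
    for name, value in iter_benchmarks('py35.json'):
        post_result(name, value)


>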
> FYI I released performance 0.2 yesterday. JSON files now contain the
> version of the benchmark suite ("performance_version: 0.2"). I plan to
> use semantic versioning: increase the major version (e.g. upgrade to
> 0.3, but later it will be 1.x, 2.x, etc.) when benchmark results are
> no longer considered compatible.
>

SGTM.


>
> For example, I upgraded Django (from 1.9 to 1.10) and Chameleon (from
> 2.22 to 2.24) in performance 0.2.
>
> The question is how to upgrade the performance suite to a new major
> version: should we drop previous benchmark results?
>

They aren't really comparable anymore, so at the very least they should
not be compared against results from a newer version of the benchmark
suite.
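
CodeSpeed (or whatever uploads the results) could even enforce that;
something as small as this would do, assuming the "performance_version"
key you mentioned ends up at the top level of the JSON file:

def comparable(result_a, result_b):
    # Only compare results produced by the same major version of the
    # benchmark suite ("performance_version" from the perf JSON file).
    def major(result):
        return result['performance_version'].split('.')[0]
    return major(result_a) == major(result_b)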


>
> Maybe we should put the performance version in the URL, and use
> "/latest/" by default. Only /latest/ would get new results, and it
> would restart from an empty set of results when performance is
> upgraded?
>

SGTM.
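
One way to read the "/latest/" idea in Django terms (everything here is
hypothetical; I haven't checked how codespeed's URL conf is organised):

from django.conf.urls import include, url
from django.views.generic import RedirectView

# Bump this when performance's major version changes; the old trees stay
# readable but frozen, and only the current one receives new results.
CURRENT = '0.x'

urlpatterns = [
    url(r'^%s/' % CURRENT, include('codespeed.urls')),
    # /latest/ is just an alias for the tree that currently gets results.
    url(r'^latest/(?P<rest>.*)$',
        RedirectView.as_view(url='/%s/%%(rest)s' % CURRENT)),
]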


>
> Another option, less exciting, is to never upgrade benchmarks. The
> benchmarks project *added* new benchmarks when a dependency was
> "upgraded": the old dependency was kept and a new one (in fact a full
> copy of the code ;-)) was added. So it has django, django_v2,
> django_v3, etc. The problem is that it still uses Mercurial 1.2, which
> was released 7 years ago (2009)... Since upgrading is painful, most
> dependencies became outdated.
>

Based on my experience with the benchmark suite, I don't like this option
either; it just accumulates cruft. As Maciej and the PyPy folks have
pointed out, benchmarks should try to represent modern code, and old
benchmarks won't necessarily do that.


>
> Do you care about old benchmark results? It's quite easy to regenerate
> them (on demand?) if needed, no? Using Mercurial and Git, it's easy to
> check out any old revision and run a benchmark again on an old version
> of CPython / PyPy / etc.
>

I personally don't, but that's because I care about either current
performance in comparison to others, or very short timescales to see when
a regression occurred (hence a switchover has a very small chance of
impacting that investigation), not long-timescale results for historical
purposes.
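
That said, regenerating on demand does look cheap enough to script.
Roughly (the hg/make steps assume CPython's current Mercurial layout, and
I'm writing the performance invocation from memory, so check
python3 -m performance run --help before trusting it):

import subprocess

def regenerate(revision, json_out, cpython_dir='cpython'):
    # Check out the old revision and rebuild the interpreter.
    subprocess.check_call(['hg', 'update', '-r', revision], cwd=cpython_dir)
    subprocess.check_call(['./configure'], cwd=cpython_dir)
    subprocess.check_call(['make', '-j4'], cwd=cpython_dir)
    # Re-run the suite against the freshly built python and dump JSON.
    subprocess.check_call(['python3', '-m', 'performance', 'run',
                           '--python', cpython_dir + '/python',
                           '-o', json_out])

regenerate('v3.5.2', 'py352.json')
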
_______________________________________________
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed
