[Speed] Re: How can I integrate pyperf[ormance] w/ a heavy-duty memory profiler?

2022-02-08 Thread Victor Stinner
On Tue, Feb 8, 2022 at 11:28 AM Christos P. Lamprakos wrote: > As you point out, pyperf spawns lots of processes. I am interested only in > those that actually run each benchmark. For instance, in the case of 2to3, I > want to profile just the command that executes 2to3. If the inherit-environ

[Speed] Re: How can I integrate pyperf[ormance] w/ a heavy-duty memory profiler?

2022-02-07 Thread Victor Stinner
Hi, Did you try to write a pyperf benchmark and run it with: $ MEMORY_PROFILER_LOG=warn LD_PRELOAD=./libbytehound.so python3 mybench.py -o -v --inherit-environ=MEMORY_PROFILER_LOG,LD_PRELOAD By default, pyperf removes most environment variables; you have to explicitly specify which ones are
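
A pyperf benchmark script such as the mybench.py above can be very small. A minimal sketch (the benchmark name and statement are illustrative, not from the original mail):

    # mybench.py -- minimal pyperf benchmark (hypothetical example)
    import pyperf

    runner = pyperf.Runner()
    # setup is executed before the timed statement in each worker process
    runner.timeit("sort_1000",
                  stmt="sorted(data)",
                  setup="data = list(range(1000, 0, -1))")

With --inherit-environ, the listed variables are passed through to the worker processes that pyperf spawns.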

[Speed] Re: Fwd: feedback on: speed.python.org/comparison/

2021-10-07 Thread Victor Stinner
Hi, If I recall correctly, speed.python.org is run by this software: https://github.com/tobami/codespeed/ You can propose a pull request there ;-) I don't know if it's still actively maintained. Victor On Thu, Oct 7, 2021 at 11:17 AM Christopher Brousseau wrote: > > Forwarding this feedback

[Speed] Re: Benchmarks not running after "main" branch rename.

2021-05-10 Thread Victor Stinner
The issue is discussed at: https://github.com/python/pyperformance/issues/98 Pablo fixed the branch name, but genshi benchmark is failing. Victor On Sun, May 9, 2021 at 1:49 AM Dennis Sweeney wrote: > > It looks like speed.python.org has no benchmarks from after May 3. I suspect > this is

[Speed] pyperf 2.1.0 released

2021-01-14 Thread Victor Stinner
Hi, I released pyperf 2.1.0: the compare_to command now computes the geometric mean of a whole benchmark suite and no longer displays percentages (display fewer numbers to avoid confusing readers). If the benchmark suites contain more than one benchmark, the geometric mean is computed: normalize

[Speed] New geometric mean feature in pyperf compare_to command

2020-10-27 Thread Victor Stinner
Hi, I implemented a new feature in the pyperf compare_to command: compute the geometric mean of the benchmark means normalized to the reference benchmark suite. Before making a release, I'm looking for testers and feedback to ensure that I implemented it properly and to make sure that it's
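
The computation itself is simple; here is a sketch of the idea in plain Python (an illustration, not pyperf's actual code; the benchmark names and numbers are invented):

    import math

    # mean of each benchmark: reference suite vs. changed suite (seconds)
    reference = {"pickle": 1.20, "json_dumps": 0.50, "nbody": 2.00}
    changed = {"pickle": 1.00, "json_dumps": 0.55, "nbody": 1.60}

    # normalize each mean to the reference, then take the geometric mean
    norms = [changed[name] / reference[name] for name in reference]
    geomean = math.exp(math.fsum(math.log(x) for x in norms) / len(norms))
    print("geometric mean: %.3f (< 1.0 means the changed suite is faster)" % geomean)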

[Speed] pyperf 1.7.0 and pyperformance 1.0.0 released

2019-12-17 Thread Victor Stinner
Hi, I released pyperf 1.7.0 and pyperformance 1.0.0. The major change is that pyperformance 1.0 dropped Python 2 support. pyperformance changes between 0.9.1 and 1.0.0: * Enable pyflate benchmarks on Python 3. * Remove ``spambayes`` benchmark: it is not compatible with Python 3. * Remove

[Speed] pyperformance drops Python 2.7 support

2019-12-17 Thread Victor Stinner
Hi, Python 3.9 introduced incompatible changes which broke the Django and Tornado benchmarks of pyperformance. But Python 2.7 support prevented me from upgrading Django and Tornado, which were stuck at Django 1.11.x and Tornado 5.x. I decided to drop Python 2.7 support in pyperformance to support Python

[Speed] Python 3.8 compared to Python 3.7

2019-10-10 Thread Victor Stinner
I have been asked to update speed.python.org, so here are some data. I didn't look into the details. Python 3.8 compared to 3.7 (compare development branches): At least 10% difference: 10 faster, 3 slower haypo@speed-python$ PYTHONPATH=~/pyperf python3 -m pyperf compare_to
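
The truncated command above, together with the "at least 10% difference" grouping, suggests an invocation along these lines (a hedged reconstruction; the JSON file names are placeholders): $ python3 -m pyperf compare_to 3.7.json 3.8.json -G --min-speed=10 where -G groups benchmarks into faster/slower and --min-speed=10 hides differences smaller than 10%.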

[Speed] Re: Rename performance project to pyperformance?

2019-05-28 Thread Victor Stinner
Ok, I released pyperformance 0.9.0 which is now compatible with Python 3.8 alpha4: Genshi is upgraded to 0.7.3, which supports Python 3.8. Victor

[Speed] Re: Rename performance project to pyperformance?

2019-05-24 Thread Victor Stinner
On Thu, May 23, 2019 at 22:32, Brett Cannon wrote: >> What do you think of renaming "performance" to "pyperformance"? > > Seems reasonable to me. Ok, thanks. Does anyone have an opinion? My plan is to rename the project in one week if nobody else is opposed to this change ;-) Victor

[Speed] Rename performance project to pyperformance?

2019-05-21 Thread Victor Stinner
Hi, I just renamed my "perf" module to "pyperf" to avoid confusion with the Linux perf tool, which provides a Python binding using the "perf" name as well. For the Python benchmark suite https://github.com/python/performance/ I chose to use the "performance" name on GitHub and PyPI, but

[Speed] performance 0.8.0 released

2019-05-09 Thread Victor Stinner
Hi, I just released performance 0.8.0: https://pyperformance.readthedocs.io/ Changes: * compile command: Add "pkg_only" option to benchmark.conf. Add support for native libraries that are installed but not on path. Patch by Robert Grimm. * Update Travis configuration: use trusty image, use

[Speed] perf 1.6.0 released

2019-01-11 Thread Victor Stinner
Changes between 1.5.1 and 1.6.0: * Add *teardown* optional parameter to Runner.timeit() and --teardown option to the "perf timeit" command. Patch by **Alex Khomchenko**. * Runner.timeit(stmt) can now use the statement as the benchmark name. * Port system tune command to Python 2
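
A minimal sketch of the new *teardown* parameter (a hypothetical benchmark; note the module was still named perf at the time of this release, and was later renamed to pyperf):

    import perf

    runner = perf.Runner()
    # teardown is executed after the timed statement, mirroring setup
    runner.timeit("list_append",
                  stmt="lst.append(1)",
                  setup="lst = []",
                  teardown="del lst[:]")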

[Speed] Re: http://speed.python.org/ updated manually

2018-12-07 Thread Victor Stinner
hine and call it a day, check > back every couple of weeks that it is still running. > I could do this over the next few days just to get the data flowing > Matti > > On 28/06/18 07:38, Victor Stinner wrote: > > I'm not aware of any existing doc... Let me see: > > * There a

[Speed] Benchmark Alpine Linux, Debian Stretch and "Slim"

2018-11-06 Thread Victor Stinner
Hi, I have been told about a benchmark comparing 3 Linux distributions. It runs pyperformance on docker containers. Benchmark: https://github.com/tao12345666333/docker-python-perf Results: http://moelove.info/docker-python-perf/ Victor

[Speed] Re: performance 0.7.0 released

2018-10-17 Thread Victor Stinner
if the upstream reacts quickly enough (so I can re-enable the two benchmarks), or if I should release performance 0.7.1 with the benchmarks disabled. Victor On Tue, Oct 16, 2018 at 17:01, Victor Stinner wrote: > > Hi, > > It seems like the performance benchmark suite is broken since

Re: [Speed] adding a pypy runner for speed.python.org

2018-01-31 Thread Victor Stinner
Hi, I tried but failed to find someone in PyPy to adjust the performance benchmarks for PyPy. Currently, the JIT is not properly warmed up, and the results can be dishonest or unreliable. My latest attempt to support PyPy is: http://vstinner.readthedocs.io/pypy_warmups.html IMHO we need to

Re: [Speed] Update Django from 1.11 to 2.0? What about Python 2.7 and PyPy2?

2018-01-09 Thread Victor Stinner
2018-01-09 17:55 GMT+01:00 Antoine Pitrou : > I don't know, what is the point of dropping it? I'm trying to keep pyperformance up to date: always update the next performance release to the latest versions of its requirements. Use an older performance version if you want older

Re: [Speed] Update Django from 1.11 to 2.0? What about Python 2.7 and PyPy2?

2018-01-09 Thread Victor Stinner
2018-01-09 18:01 GMT+01:00 Antoine Pitrou : > I admit, I find the whole virtual environment thing annoying. If I > even want to run *one* benchmark, it starts downloading and > installing *every* potentially useful third-party library. If Django is installed in your

Re: [Speed] Update Django from 1.11 to 2.0? What about Python 2.7 and PyPy2?

2018-01-09 Thread Victor Stinner
n 2018 17:39:33 +0100 > Victor Stinner <victor.stin...@gmail.com> wrote: >> 2018-01-09 16:42 GMT+01:00 INADA Naoki <songofaca...@gmail.com>: >> > We already compare different libraries. For example, pickle is very different between Pyth

Re: [Speed] Update Django from 1.11 to 2.0? What about Python 2.7 and PyPy2?

2018-01-09 Thread Victor Stinner
2018-01-09 16:17 GMT+01:00 Antoine Pitrou : > How do you plan to make numbers comparable if you change the Django > version for a given benchmark? The only solution IMHO is to add a > different benchmark. I mostly use performance to compare Python versions, like compare

Re: [Speed] Update Django from 1.11 to 2.0? What about Python 2.7 and PyPy2?

2018-01-09 Thread Victor Stinner
2018-01-09 16:42 GMT+01:00 INADA Naoki : > We already compare different libraries. For example, pickle is very different > between Python 2.7 and 3.6. > Even though it's not good for comparing interpreter performance, it's good > for people comparing Python 2 and 3. > > If

Re: [Speed] No cronjob yet at speed.python.org

2017-12-20 Thread Victor Stinner
I was in touch with Intel who asked me to add a "Broadwell-EP" config, but they never published any result. Victor 2017-12-20 20:09 GMT+01:00 Antoine Pitrou <solip...@pitrou.net>: > On Mon, 18 Dec 2017 16:00:08 +0100 > Victor Stinner <victor.stin...@gmail.com>

[Speed] No cronjob yet at speed.python.org

2017-12-18 Thread Victor Stinner
Hi, FYI there is still no cronjob at speed.python.org to run the benchmark once per week. I run the script manually, so don't be surprised if sometimes there are holes of 3 months or longer :-) We can easily fill these holes, someone just has to pick the right Git commit number and run the

Re: [Speed] Python Performance Benchmark Suite revision request

2017-09-13 Thread Victor Stinner
2017-09-14 1:19 GMT+02:00 Nick Coghlan : > It would be useful to provide (...) Contributions as pull requests are welcome ;-) Victor

Re: [Speed] Python Performance Benchmark Suite revision request

2017-09-13 Thread Victor Stinner
Hi, 2017-09-12 18:35 GMT+02:00 Wang, Peter Xihong : > I am currently using the Python Benchmark Suite > https://github.com/python/performance for a customer and running into some > issues. Due to specific circumstance, the net speed/bandwidth must be > limited. As a

Re: [Speed] Threading benchmark

2017-08-10 Thread Victor Stinner
I don't understand what you are trying to test. For example, for a lock, it's very different if a single thread uses the lock, or if two threads use the lock. None of your benchmarks seem to measure concurrency. Victor 2017-08-11 0:33 GMT+02:00 Bhavishya : > Hello, > >

Re: [Speed] option in performance to invoke number of worker preocess

2017-07-26 Thread Victor Stinner
There is a --fast option to spawn fewer processes: http://pyperformance.readthedocs.io/usage.html#run But I don't suggest using it, since it's less reliable ;-) For me, it's really important to get stable benchmarks: http://pyperformance.readthedocs.io/usage.html#how-to-get-stable-benchmarks
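
For reference, the option goes directly on the run command, e.g. (the benchmark selection here is just an example): $ pyperformance run --fast -b 2to3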

[Speed] performance 0.6.0 released

2017-07-06 Thread Victor Stinner
Hi, I released performance 0.6.0: http://pyperformance.readthedocs.io/ This release uses the just released perf 1.4, which fixes parse_cpu_list() for the Linux setup of Xiang Zhang :-) https://github.com/python/performance/issues/29 As with any performance release, results produced by

Re: [Speed] About issue 30815

2017-06-30 Thread Victor Stinner
2017-06-30 14:31 GMT+02:00 INADA Naoki : > This issue may be relatively easy and small. > I think it's good for your first step of successful optimization. I'm not sure about "easy and small", but since I wrote a similar patch 5 years ago (and that I wrote

Re: [Speed] To start work on "fat-python".

2017-06-28 Thread Victor Stinner
Hi, FYI last year I proposed FAT Python as a GSoC project, but I failed to pick a candidate. You may be interested in my project page: http://fatoptimizer.readthedocs.io/en/latest/gsoc.html And the TODO list: http://fatoptimizer.readthedocs.io/en/latest/todo.html The good news is that I rebased

Re: [Speed] performance 0.5.5 and perf 1.3 released

2017-05-29 Thread Victor Stinner
2017-05-29 22:57 GMT+02:00 Victor Stinner <victor.stin...@gmail.com>: > When I started to work on benchmarks last year, I noticed that we used > a Mercurial version which was 5 years old, and a Django version which > was something like 3 years old. I would like to benchmark

Re: [Speed] performance 0.5.5 and perf 1.3 released

2017-05-29 Thread Victor Stinner
2017-05-29 22:45 GMT+02:00 Antoine Pitrou : > I don't know. It means that benchmark results published on the Web > are generally not comparable with each other unless they happen to be > generated with the exact same version. It reduces the usefulness of > the benchmarks

Re: [Speed] performance 0.5.5 and perf 1.3 released

2017-05-29 Thread Victor Stinner
2017-05-29 19:10 GMT+02:00 Antoine Pitrou : > Also, to expand a bit on what I'm trying to say: like you, I have my own > idea of which benchmarks are pointless and unrepresentative, but when > maintaining the former benchmarks suite I usually refrained from > removing those

[Speed] pybench, call_simple and call_method microbenchmarks removed

2017-04-13 Thread Victor Stinner
Hi, I removed the pybench, call_simple and call_method microbenchmarks from the performance project and moved them to my other project: https://github.com/haypo/pymicrobench The pymicrobench project is a collection of CPython microbenchmarks used to optimize CPython and test specific

Re: [Speed] Testing a wide Unicode build of Python 2 on speed.python.org?

2017-04-12 Thread Victor Stinner
2017-04-12 10:52 GMT+02:00 Victor Stinner <victor.stin...@gmail.com>: > I'm running benchmarks with this option. Once results are ready, I > will remove the old 2.7 result and replace it with the new one. Done. speed.python.org now uses UCS-4 on Python 2.7. Is it better now? P

Re: [Speed] CPython benchmark status, April 2017

2017-04-07 Thread Victor Stinner
2017-04-07 7:22 GMT+02:00 Serhiy Storchaka : >>https://speed.python.org/timeline/ > > Excellent! I always wanted to see such graphics. Cool :-) > But can you please output years on the scale? Ah ah, yeah, the lack of year becomes painful :-) You should look at

[Speed] CPython benchmark status, April 2017

2017-04-06 Thread Victor Stinner
Hi, I'm still working on analyzing past optimizations to guide future optimizations. I succeeded in identifying multiple significant optimizations over the last 3 years. At least for me, some were unexpected, like "Use the test suite for profile data" which made pidigits 1.16x faster. Here is a

Re: [Speed] Timeline of CPython performance April, 2014 - April, 2017

2017-04-04 Thread Victor Stinner
(Crap, how did I send an incomplete email? Sorry about that.) Hi, I hacked my "performance compile" command to force pip 7.1.2 on alpha versions of Python 3.5, which worked around the pyparsing regression (used since pip 8): https://sourceforge.net/p/pyparsing/bugs/100/ I succeeded in running

[Speed] Timeline of CPython performance April, 2014 - April, 2017

2017-04-04 Thread Victor Stinner
Hi, I hacked my "performance compile" command to force pip 7.1.2 on alpha versions of Python 3.5, which worked around the pyparsing regression: https://sourceforge.net/p/pyparsing/bugs/100/ I succeeded in running benchmarks on CPython over the period April 2014 - April 2017, with one dot per

Re: [Speed] performance: remove "pure python" pickle benchmarks?

2017-04-04 Thread Victor Stinner
2017-04-04 12:06 GMT+02:00 Serhiy Storchaka : > I consider it as a benchmark of Python interpreter itself. Don't we have enough benchmarks to test the Python interpreter? I would prefer to have more realistic use cases than "reimplement pickle in pure Python".

Re: [Speed] performance: remove "pure python" pickle benchmarks?

2017-04-03 Thread Victor Stinner
2017-04-04 0:59 GMT+02:00 R. David Murray <rdmur...@bitdance.com>: > On Tue, 04 Apr 2017 00:21:33 +0200, Victor Stinner <victor.stin...@gmail.com> > wrote: >> I don't see the point of testing the pure Python implementation, since >> the C accelerator (_pickle) is al

Re: [Speed] Issues to run benchmarks on Python before 2015-04-01

2017-04-03 Thread Victor Stinner
2017-04-01 0:47 GMT+02:00 Victor Stinner <victor.stin...@gmail.com>: > (2) 2014-04-01, 2014-07-01, 2014-10-01, 2015-01-01: "venv/bin/python > -m pip install" fails in extract_stack() of pyparsing I reported the issue on pip bug tracker: https://github.com/pypa/p

[Speed] Issues to run benchmarks on Python before 2015-04-01

2017-03-31 Thread Victor Stinner
Hi, I'm trying to run benchmarks on revisions between 2014-01-01 and today, but I got two different issues: see below. I'm now looking for workarounds :-/ Because of these bugs, I'm unable to get benchmark results before 2015-04-01 (at 2015-04-01, benchmarks work again). (1) 2014-01-01:

Re: [Speed] speed.python.org: move to Git, remove old previous results

2017-03-28 Thread Victor Stinner
2017-03-28 9:36 GMT+02:00 Miquel Torres : > I can have a look into increasing the number of points displayed. There is a "Show the last [50] results" widget, but it's disabled if you select "(o) Display all in a grid". Maybe we should enable the first widget but limit the

Re: [Speed] ASLR

2017-03-27 Thread Victor Stinner
2017-03-16 17:19 GMT+01:00 Brett Cannon : >> By the way, maybe I should commit a change in hg.python.org/benchmarks >> to remove the code and only keep a README.txt? Code will still be >> accessible in Mercurial history. > > Since we might not shut down hg.python.org for a long

Re: [Speed] speed.python.org: move to Git, remove old previous results

2017-03-27 Thread Victor Stinner
.. command: Mean +- std dev: 21.2 ms +- 3.2 ms Victor 2017-03-27 0:12 GMT+02:00 Victor Stinner <victor.stin...@gmail.com>: > Hi, > > I'm going to remove old previous benchmark results from > speed.python.org. As we discussed previously, there is no plan to keep > old

[Speed] speed.python.org: move to Git, remove old previous results

2017-03-26 Thread Victor Stinner
Hi, I'm going to remove the old benchmark results from speed.python.org. As we discussed previously, there is no plan to keep old results when we need to change something. In this case, CPython moved from Mercurial to Git, and I'm too lazy to upgrade the revisions in the database. I prefer to

[Speed] pymicrobench: collection of CPython microbenchmarks

2017-03-16 Thread Victor Stinner
Hi, I started to create a collection of microbenchmarks for CPython from scripts found on the bug tracker: https://github.com/haypo/pymicrobench I'm not sure that this collection is used yet, but some of you may want to take a look :-) I know that some people have random microbenchmarks in a

Re: [Speed] ASLR

2017-03-16 Thread Victor Stinner
2017-03-17 0:00 GMT+01:00 Wang, Peter Xihong : > In addition to turbo boost, I also turned off hyperthreading, and c-state, > p-state, on Intel CPUs. My "python3 -m perf system tune" command sets the minimum frequency of CPUs used for benchmarks to the maximum

[Speed] perf 1.0 released: with a stable API

2017-03-16 Thread Victor Stinner
Hi, After 9 months of development, the perf API became stable with the awaited "1.0" version. The perf module now has a complete API to write, run and analyze benchmarks, and a nice documentation explaining the traps of benchmarking and how to avoid, or even fix, them. http://perf.readthedocs.io/

Re: [Speed] Median +- MAD or Mean +- std dev?

2017-03-16 Thread Victor Stinner
2017-03-15 23:44 GMT+01:00 Serhiy Storchaka : > Don't use the "+-" notation. It is misleading even for the stddev of a normal > distribution, because with a chance of 1 in 3 the sample is outside the > specified interval. Use "Mean: 10 ms Stddev: 1 ms" or "Median: 10 ms

Re: [Speed] ASLR

2017-03-16 Thread Victor Stinner
2017-03-16 10:22 GMT+01:00 Antoine Pitrou : > I suspect temperature can have an impact on performance if Turbo is > enabled (or, as you noticed, if CPU cooling is deficient). Oh sure, I now always start by disabling Turbo Boost. It's common that I run benchmarks on my desktop

Re: [Speed] perf 0.9.6 released

2017-03-15 Thread Victor Stinner
It's easier for me to understand wall clock time than CPU time, and it's more consistent with other perf methods. Victor 2017-03-16 1:59 GMT+01:00 Victor Stinner <victor.stin...@gmail.com>: > Hi, > > I released perf 0.9.6 with many changes. First, "Mean +- std dev" is

[Speed] perf 0.9.6 released

2017-03-15 Thread Victor Stinner
Hi, I released perf 0.9.6 with many changes. First, "Mean +- std dev" is now displayed, instead of "Median +- std dev", as a result of the previous thread on this list. The median is still accessible via the stats command. By the way, the "stats" command now displays "Median +- MAD" instead of

[Speed] ASLR

2017-03-15 Thread Victor Stinner
2017-03-16 1:38 GMT+01:00 Wang, Peter Xihong : > Hi All, > > I am attaching an image comparing runs of CALL_METHOD in the old > Grand Unified Python Benchmark (GUPB) suite > (https://hg.python.org/benchmarks), with and without ASLR disabled. This benchmark

Re: [Speed] Median +- MAD or Mean +- std dev?

2017-03-15 Thread Victor Stinner
2017-03-15 18:11 GMT+01:00 Antoine Pitrou : > I would say keep it simple. mean/stddev is informative enough, no need > to add or maintain options of dubious utility. Ok. I added a message suggesting the use of perf stats to analyze results. Example of warnings with a benchmark

Re: [Speed] Median +- MAD or Mean +- std dev?

2017-03-15 Thread Victor Stinner
While I like the "automatic removal of outliers" feature of median and MAD ("robust" statistics), I'm not comfortable with these numbers. They are new to me and uncommon in other benchmark tools. It's not easy to compare MAD to standard deviation. It seems like MAD can even be misleading when

Re: [Speed] Median +- MAD or Mean +- std dev?

2017-03-15 Thread Victor Stinner
2017-03-14 18:13 GMT+01:00 Antoine Pitrou : > Victor is trying to eliminate the effects of system noise by using the > median, but if that's the primary goal, using the minimum is arguably > better, since the system noise is always a positive contributor (i.e. > it can only

Re: [Speed] Median +- MAD or Mean +- std dev?

2017-03-15 Thread Victor Stinner
2017-03-14 15:42 GMT+01:00 Nick Coghlan : > That would suggest that the implicit assumption of a measure-of-centrality > with a measure-of-symmetric-deviation may need to be challenged, as at least > some meaningful performance problems are going to show up as non-normal >

Re: [Speed] Median +- MAD or Mean +- std dev?

2017-03-15 Thread Victor Stinner
2017-03-14 8:14 GMT+01:00 Serhiy Storchaka : > Std dev is well understood for the distribution close to normal. But when > the distribution is too skewed or multimodal (as in your quick example) > common assumptions (that 2/3 of samples are in the range of the std dev, 95% >

Re: [Speed] Median +- MAD or Mean +- std dev?

2017-03-15 Thread Victor Stinner
2017-03-13 21:38 GMT+01:00 Antoine Pitrou : >> If the goal is to get reproductible results, Median +- MAD seems better. > > Getting reproducible results is only half of the goal. Getting > meaningful (i.e. informative) results is the other half. If the system is tuned for

Re: [Speed] Median +- MAD or Mean +- std dev?

2017-03-06 Thread Victor Stinner
Another example on the same computer. It's interesting: * MAD and std dev are half of result 1 * the benchmark is less unstable * median is very close to result 1 * mean changed much more than median Benchmark result 1: Median +- MAD: 276 ns +- 10 ns Mean +- std dev: 371 ns +- 196 ns

[Speed] Median +- MAD or Mean +- std dev?

2017-03-06 Thread Victor Stinner
Hi, Serhiy Storchaka opened a bug report against my perf module: perf displays Median +- std dev, whereas the median absolute deviation (MAD) should be displayed instead: https://github.com/haypo/perf/issues/20 I just modified perf to display Median +- MAD, but I'm not sure that it's better than Mean +-
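
For readers new to MAD: it is the median of the absolute deviations from the median, and unlike the standard deviation it is barely affected by outliers. A small sketch with made-up samples:

    import statistics

    samples = [10.1, 10.3, 9.8, 10.2, 35.0]  # last value is an outlier

    mean = statistics.mean(samples)      # 15.08, pulled up by the outlier
    stdev = statistics.stdev(samples)    # inflated by the outlier
    median = statistics.median(samples)  # 10.2, barely affected
    mad = statistics.median([abs(x - median) for x in samples])  # 0.1
    print("Mean +- std dev: %.1f +- %.1f" % (mean, stdev))
    print("Median +- MAD:   %.1f +- %.1f" % (median, mad))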

[Speed] perf 0.9.4 released

2017-03-01 Thread Victor Stinner
Hi, I released the version 0.9.4 of my Python perf module: * Add --compare-to option to the Runner CLI * compare_to command: Add --table option to render a table http://perf.readthedocs.io/en/latest/ I used the --table feature to write this FASTCALL microbenchmarks article:

[Speed] Benchmark tooling broken by the migration to Git

2017-02-20 Thread Victor Stinner
Hi, The website speed.python.org is made of different tools which were written for Mercurial, but CPython moved to Git. These tools should be updated. Tools: * The Django application "codespeed" * scripts/bench_cpython.py and scripts/bench_revisions.py of https://github.com/python/performance/

[Speed] Slides and video of my "How to run a stable benchmark" talk at FOSDEM

2017-02-07 Thread Victor Stinner
Hi, I annoyed many of you on this mailing list with all my WTF performance issues. I consider that I have now succeeded in getting benchmarks stable enough to be "usable" (reliable). Two days ago, I gave a talk "How to run a stable benchmark" at FOSDEM (Brussels, Belgium) on my findings. Slides and

Re: [Speed] Running pyperformance on Windows

2017-01-16 Thread Victor Stinner
Hi Peter, Thank you for your bug report. The bug is a recent regression in the perf module. I just fixed it with the newly released perf 0.9.3. http://perf.readthedocs.io/en/latest/changelog.html#version-0-9-3-2017-01-16 I also released performance 0.5.1 which now uses perf 0.9.3. By the way, I

Re: [Speed] speed.python.org: recent issues to run benchmarks

2017-01-03 Thread Victor Stinner
2017-01-03 0:56 GMT+01:00 Victor Stinner <victor.stin...@gmail.com>: > I'm now trying to run benchmarks one more time... ;-) Good news: results are slowly being uploaded to https://speed.python.org/ It takes 50 minutes to benchmark one revision: first compilation, run the test suite

[Speed] speed.python.org: recent issues to run benchmarks

2017-01-02 Thread Victor Stinner
Hi, tl;dr I'm trying to run benchmarks with PGO, but I get new errors, so speed.python.org is currently broken. Last December, I upgraded the speed-python server (the server used to run benchmarks for CPython) from Ubuntu 14.04 (LTS) to 16.04 (LTS) to be able to compile Python using PGO compilation.

Re: [Speed] Analysis of a Python performance issue

2016-11-19 Thread Victor Stinner
On Nov 19, 2016 21:29, "serge guelton" wrote: > Thanks *a lot* victor for this great article. You not only very > accurately describe the method you used to track the performance bug, > but also give very convincing results. You're welcome. I'm not 100% sure that

[Speed] Analysis of a Python performance issue

2016-11-18 Thread Victor Stinner
Hi, I'm happy because I just finished an article putting together the most important things that I learnt this year about the silliest issue with Python performance: code placement. https://haypo.github.io/analysis-python-performance-issue.html I explain how to debug such an issue and my attempt to fix it

Re: [Speed] Ubuntu 16.04 speed issues

2016-11-10 Thread Victor Stinner
Hello, > The OpenStack-Ansible project has noticed that performance on Ubuntu 16.04 is > quite significantly slower than on 14.04. > At the moment it's looking like *possibly* a GCC related bug. Is it exactly the same Python version? What is the full version? Try to get compiler flags:
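
One way to check the build flags from inside each interpreter (a sketch; CFLAGS is usually the interesting variable when comparing distribution builds):

    import sysconfig

    # compiler and flags recorded when this Python was built
    print(sysconfig.get_config_var("CC"))
    print(sysconfig.get_config_var("CFLAGS"))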

Re: [Speed] Latest enhancements of perf 0.8.1 and performance 0.3.1

2016-11-05 Thread Victor Stinner
2016-11-05 15:56 GMT+01:00 Nick Coghlan : > Since the use case for --duplicate is to reduce the relative overhead > of the outer loop when testing a micro-optimisation within a *given* > interpreter, perhaps the error should be for combining --duplicate and > --compare-to at

Re: [Speed] Benchmarks: Comparison between Python 2.7 and Python 3.6 performance

2016-11-04 Thread Victor Stinner
2016-11-04 21:58 GMT+01:00 Victor Stinner <victor.stin...@gmail.com>: > 2016-11-04 20:21 GMT+01:00 Yury Selivanov <yselivanov...@gmail.com>: >> I'm curious why call_* benchmarks became slower on 3.x? > > It's almost the same between 2.7 and default. For 3.

Re: [Speed] Performance difference in call_method()

2016-11-04 Thread Victor Stinner
I proposed a patch which fixes the issue: http://bugs.python.org/issue28618 "Decorate hot functions using __attribute__((hot)) to optimize Python" Victor

Re: [Speed] Performance difference in call_method()

2016-11-04 Thread Victor Stinner
I found some interesting differences using the Linux perf tool. # perf stat -e L1-icache-loads,L1-icache-load-misses ./python performance/benchmarks/bm_call_method.py --inherit=PYTHONPATH -v --worker -l1 -n 25 -w0 2016-11-04 23:35 GMT+01:00 Victor Stinner <victor.stin...@gmail.com>

[Speed] Performance difference in call_method()

2016-11-04 Thread Victor Stinner
Hi, I noticed a temporary performance peak in the call_method benchmark: https://speed.python.org/timeline/#/?exe=4&ben=call_method&env=1&revs=50&equid=off&quarts=on&extr=on The difference is major: 17 ms => 29 ms, 70% slower! I expected a temporary issue on the server used to run benchmarks, but... I reproduced the result on the

Re: [Speed] Latest enhancements of perf 0.8.1 and performance 0.3.1

2016-11-02 Thread Victor Stinner
2016-11-02 15:20 GMT+01:00 Armin Rigo : > Is that really the kind of examples you want to put forward? I am not a big fan of timeit, but we must sometimes use it for micro-optimizations in CPython, to check if an optimization really makes CPython faster or not. I am only trying to

Re: [Speed] Latest enhancements of perf 0.8.1 and performance 0.3.1

2016-11-02 Thread Victor Stinner
Hum, so from a usability point of view, I think that the best thing to do is to ignore the option if Python has a JIT. On CPython, --duplicate makes sense (no?). So for example, the following command should use duplicate on CPython but not on PyPy: python2 -m perf timeit '[1,2]*1000'
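
The suggested behaviour amounts to a one-line check on the implementation; a rough sketch of the idea (not perf's actual code; the helper name is made up):

    import platform

    def effective_duplicate(requested):
        # sketch: ignore --duplicate on implementations with a JIT
        # (e.g. PyPy), where duplicating the statement skews results
        if platform.python_implementation() == "PyPy":
            return None
        return requested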

[Speed] Fwd: Benchmarking Python and micro-optimizations

2016-10-20 Thread Victor Stinner
If you are not subscribed to the Python-Dev mailing list, here is the copy of the email I just sent. Victor -- Forwarded message -- From: Victor Stinner <victor.stin...@gmail.com> Date: 2016-10-20 12:56 GMT+02:00 Subject: Benchmarking Python and micro-optimizations To:

[Speed] Performance 0.3 released with 10 new benchmarks

2016-10-10 Thread Victor Stinner
Hi, I just released performance 0.3, the Python benchmark suite, with 10 new benchmarks from the PyPy benchmark suite: https://github.com/python/performance Version 0.3.0 changelog. New benchmarks: * Add ``crypto_pyaes``: Benchmark a pure-Python implementation of the AES block-cipher in CTR

[Speed] perf 0.7.12: --python and --compare-to options

2016-09-30 Thread Victor Stinner
Hi, I always wanted to be able to compare the performance of two Python versions using timeit *in a single command*. So I just implemented it! I added --python and --compare-to options. Real example to show the new "timeit --compare-to" feature: --- $ export PYTHONPATH=~/prog/GIT/perf $
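
A command of roughly this shape demonstrates the feature (a hypothetical invocation, not the one from the original mail): $ python3 -m perf timeit --compare-to=python2 -s "x = 1" "x + 1"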

Re: [Speed] intel_pstate C0 bug on isolated CPUs with the performance governor

2016-09-29 Thread Victor Stinner
2016-09-29 17:11 GMT+02:00 Armin Rigo : > On my laptop, the speed ranges between 500MHz and 2300MHz. Oh I see, on this computer the difference can be up to 5x slower! >> * (Use NOHZ_FULL but) Force frequency to the maximum >> * Don't use NOHZ_FULL > > I think we should force

Re: [Speed] intel_pstate C0 bug on isolated CPUs with the performance governor

2016-09-23 Thread Victor Stinner
2016-09-23 12:19 GMT+02:00 Antoine Pitrou <solip...@pitrou.net>: > On Fri, 23 Sep 2016 11:44:12 +0200 > Victor Stinner <victor.stin...@gmail.com> > wrote: >> I guess that for some reason, the CPU frequency was 1.6 GHz (min >> frequency) even if I conf

[Speed] perf 0.7.11 released

2016-09-19 Thread Victor Stinner
Hi, I released perf 0.7.11. News since perf 0.7.3: * Support PyPy * Add units to samples: second, byte, integer. Benchmarks on the memory usage (track memory) are now displayed correctly. * Remove environment variables: add --inherit-environ cmdline option. * Add more metadata: mem_max_rss,

Re: [Speed] New instance of CodeSpeed at speed.python.org running performance on CPython and PyPy?

2016-09-01 Thread Victor Stinner
2016-09-01 19:53 GMT+02:00 Kevin Modzelewski : > Just my two cents -- having a benchmark change underneath the benchmark > runner is quite confusing to debug, because it looks indistinguishable from > a non-reproducible regression that happens in the performance itself. I agree.

[Speed] New instance of CodeSpeed at speed.python.org running performance on CPython and PyPy?

2016-09-01 Thread Victor Stinner
Hi, Would it be possible to run a new instance of CodeSpeed (the website behind speed.python.org) which would run the "performance" benchmark suite rather than the "benchmarks" benchmark suite? And would it be possible to run it on CPython (2.7 and 3.5 branches) and PyPy (master branch, maybe

Re: [Speed] performance 0.1 (and 0.1.1) release

2016-08-26 Thread Victor Stinner
Release early, release often: performance 0.1.2 has been released! The first version supporting Windows. I renamed the GitHub project from python/benchmarks to python/performance. All changes: * Windows is now supported * Add a new ``venv`` command to show, create, recreate or remove the

Re: [Speed] Rename python/benchmarks GitHub project to python/performance?

2016-08-26 Thread Victor Stinner
On Friday, August 26, 2016, Antoine Pitrou <solip...@pitrou.net> wrote: > On Fri, 26 Aug 2016 00:07:53 +0200 > Victor Stinner <victor.stin...@gmail.com> > wrote: > > > > By the way, I don't know if it's worth it to have a "pyperformance

[Speed] Rename python/benchmarks GitHub project to python/performance?

2016-08-25 Thread Victor Stinner
Hi, For the first release of the "new" benchmark suite, I chose the name "performance", since "benchmark" and "benchmarks" names were already reserved on PyPI. It's the name of the Python module, but also of the command line tool: "pyperformance". Since there is an "old" benchmark suite

Re: [Speed] performance 0.1 (and 0.1.1) release

2016-08-24 Thread Victor Stinner
2016-08-24 17:38 GMT+02:00 Victor Stinner <victor.stin...@gmail.com>: > Now the development version always installs performance 0.1.1 (see > performance/requirements.txt). I should fix this to install the > development version of performance/ when it is run from the source c

Re: [Speed] New benchmark suite for Python

2016-08-22 Thread Victor Stinner
Done: I renamed the "django" benchmark to "django_template": https://github.com/python/benchmarks/commit/d674a99e3a9a10a29c44349b2916740680e936c8 Victor 2016-08-21 19:36 GMT+02:00 Victor Stinner <victor.stin...@gmail.com>: > On Aug 21, 2016 11:02 AM, "Maciej

Re: [Speed] New benchmark suite for Python

2016-08-17 Thread Victor Stinner
2016-08-18 2:37 GMT+02:00 Victor Stinner <victor.stin...@gmail.com>: > PyPy, Pyston, Pyjion, Numba, etc.: Hey! It's now time to start taking a > look at my project and test it ;-) Tell me what is broken, what is missing, and I will try to help you to move your project to this

[Speed] perf 0.7.3 released

2016-08-17 Thread Victor Stinner
The Python perf module is a toolkit to write, run, analyze and modify benchmarks: http://perf.readthedocs.io/en/latest/ Version 0.7.3 (2016-08-17): * add a new ``slowest`` command * convert: add ``--extract-metadata=NAME`` * add ``--tracemalloc`` option: use the ``tracemalloc`` module to track

Re: [Speed] CPython Benchmark Suite usinf my perf module, GitHub, etc.

2016-07-29 Thread Victor Stinner
2016-07-29 20:51 GMT+02:00 Zachary Ware : > I think rather than using virtual environments which aren't truly > supported by <3.3 anyway, ... What do you mean? I'm building and destroying dozens of venvs every day at work using tox on Python 2.7. The virtualenv command

[Speed] perf module 0.7 released

2016-07-18 Thread Victor Stinner
Hi, I released perf 0.7 (quickly followed by a 0.7.1 bugfix): http://perf.readthedocs.io/ I wrote this new version to collect more data in each process. It now reads (and stores) CPUs config, CPUs temperature, CPUs frequency, system load average, etc. Later we can add for example the process RSS

Re: [Speed] perf 0.6 released

2016-07-06 Thread Victor Stinner
2016-07-06 22:24 GMT+02:00 Antoine Pitrou : >> 5% smallest/5% largest: do you mean something like sorting all >> samples, remove items from the two tails? >> >> Something like sorted(samples)[3:-3] ? > > Yes. Hum, it may work if the distribution is uniform (symmetric), but

Re: [Speed] perf 0.6 released

2016-07-06 Thread Victor Stinner
2016-07-06 18:41 GMT+02:00 Antoine Pitrou : > I'm not sure this is meant to implement my suggestion from the other > thread, Yes, I implemented this after the discussion we had in the other thread. > but if so, there is a misunderstanding: I did not suggest to > remove the
