[Speed] Re: Rename performance project to pyperformance?

2019-05-23 Thread Brett Cannon
On Tue, May 21, 2019 at 4:39 PM Victor Stinner  wrote:

> Hi,
>
> I just renamed my "perf" module to "pyperf" to avoid confusion with
> the Linux perf tool which provides a Python binding using "perf" name
> as well.
>
> For the Python benchmark suite https://github.com/python/performance/
> I chose to use the "performance" name on GitHub and PyPI, but
> "pyperformance" on ReadTheDocs to avoid confusion:
>
> http://github.com/python/performance/
> vs
> https://pyperformance.readthedocs.io/
>
> Moreover, "pip install performance" installs a program called...
> "pyperformance" :-)
>
> What do you think of renaming "performance" to "pyperformance"?
>

Seems reasonable to me.

-Brett


>
> For perf/pyperf, I prefer "pyperf" since pyperf is really designed to
> measure the performance of Python applications. Same for
> "performance": it is designed to measure the performance of ... Python
> itself :-)
>
> Victor
> --
> Night gathers, and now my watch begins. It shall not end until my death.
> ___
> Speed mailing list -- speed@python.org
> To unsubscribe send an email to speed-le...@python.org
>
___
Speed mailing list -- speed@python.org
To unsubscribe send an email to speed-le...@python.org


Re: [Speed] steps to get pypy benchmarks running

2018-03-26 Thread Brett Cannon
On Sun, 25 Mar 2018 at 07:37 Matti Picus  wrote:

> On 20/03/18 17:31, Matti Picus wrote:
> > On 14/02/18 20:18, Nick Coghlan wrote:
> >> On 14 February 2018 at 07:52, Mark Shannon  wrote:
> >>> Hi,
> >>>
> >>> On 13/02/18 14:27, Matti Picus wrote:
>  I have begun to dive into the performance/perf code. My goal is to get
>  pypy benchmarks running on http://speed.python.org. Since PyPy has
>  a JIT,
>  the benchmark runs must have a warmup stage.
> >>> Why?
> >>> The other interpreters don't get an arbitrary chunk of time for
> >>> free, so
> >>> neither should PyPy. Warmup in an inherent cost of dynamic
> >>> optimisers. The
> >>> benefits should outweigh the costs, but the costs shouldn't be ignored.
> >> For speed.python.org purposes, that would likely be most usefully
> >> reported as separate "PyPy (cold)" and "PyPy (warm)" results (where
> >> the former runs under the same conditions as CPython, while the latter
> >> is given the benefit of warming up the JIT first).
> >>
> >> Only reporting the former would miss the point of PyPy's main use case
> >> (i.e. long lived processes), while only reporting the latter would
> >> miss one of the main answers to "Why hasn't everyone already switched
> >> to PyPy for all their Python needs?" (i.e. when the app doesn't run
> >> long enough to pay back the increased start-up overhead).
> >>
> >> Cheers,
> >> Nick.
> > So would it be reasonable as a first step to get the PyPy runner(s)
> > into operation by modifying the nightly runs to download from the
> > latest nightly builds [1], [2]?
> > We can deal with reporting cold/warm statistics later. As people have
> > said, they are really two orthogonal issues.
> >
> > [1]
> > http://buildbot.pypy.org/nightly/trunk/pypy-c-jit-latest-linux64.tar.bz2
> > for python 2.7
> > [2]
> > http://buildbot.pypy.org/nightly/py3.5/pypy-c-jit-latest-linux64.tar.bz2
> > for python 3.5 (latest released pypy3 version, python 3.6 is still alpha)
> >
> > Matti
>
> No responses, maybe I asked the wrong question.
>

I think the people who have traditionally maintained speed.python.org are
just not available to answer the question, not that it was the wrong
question.


> I would be willing to issue a pull request to get PyPy runners into
> operation on "the beast" so it can report results to speed.python.org.
> Which repo holds the code that stages `performance` runs and reports to
> speed.pypy.org?
>

Unfortunately I don't know.

-Brett


> Matti
> ___
> Speed mailing list
> Speed@python.org
> https://mail.python.org/mailman/listinfo/speed
>
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] Speed Digest, Vol 36, Issue 1

2017-06-26 Thread Brett Cannon
On Mon, 26 Jun 2017 at 01:07 Bhavishya <bhavishyagop...@gmail.com> wrote:

> Hi,
> I'm having some trouble making the lazy_import "implicit"; I tried the
> following approaches:
>
> 1) Overriding "__import__" using 'builtins.__import__ =
> lazy_import.lazy_import_module', but this can't take more than 2 args
> whereas "__import__" takes up to 5 args, so now I would have to make changes
> to "_bootstrap.py" (which I probably shouldn't touch) and __init__.py to
> accommodate everything new.
>

Don't override __import__ basically ever.


>
> 2) Using "meta_path" and "path_hooks" to register a custom "Loader" and
> "finder", but somehow it's breaking during 'make'... probably due to the
> 'finder' (unable to import stuff like '.org*')... working on this.
>

I don't know what you mean by this, e.g. what is ".org*"?


>
> 3) The whitelist option (that Antoine suggested), somewhat hardcoded, i.e.
> identify modules and, in place of "import", add the lazy_import_module
> function.
>

I actually don't think that's what Antoine meant. I think he was suggesting
setting up a finder and loader as normal that would do lazy loading, but
controlling it with a whitelist/blacklist so that, for modules not on the
list, it declines and the normal finders/loaders do the right thing.
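
For illustration only, here is a minimal sketch of that idea: a meta_path
finder that applies importlib.util.LazyLoader to a whitelist of modules and
declines everything else so the normal machinery imports those eagerly. The
whitelist contents are made up and this is not anyone's actual implementation.

    import importlib.abc
    import importlib.util
    import sys

    LAZY_MODULES = {"json", "difflib"}  # hypothetical whitelist


    class LazyWhitelistFinder(importlib.abc.MetaPathFinder):
        def find_spec(self, fullname, path, target=None):
            if fullname not in LAZY_MODULES:
                return None  # decline; the normal finders import it eagerly
            # Ask the remaining finders for the real spec, then wrap its loader
            # so the module body only executes on first attribute access.
            for finder in sys.meta_path:
                if finder is self:
                    continue
                spec = finder.find_spec(fullname, path, target)
                if spec is not None and spec.loader is not None:
                    spec.loader = importlib.util.LazyLoader(spec.loader)
                    return spec
            return None


    sys.meta_path.insert(0, LazyWhitelistFinder())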


>
>  Any suggestion?
>

As I said in my other email, just don't bother with this right now. To
quickly see if things will be faster you should just edit the modules that
are imported at startup directly to use a lazy_import() function and
measure the performance.
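
As a concrete (if hypothetical) starting point, a lazy_import() helper along
the lines of the recipe in the importlib documentation looks roughly like
this; nothing here is specific to the GSoC code:

    import importlib.util
    import sys

    def lazy_import(name):
        """Return module `name`, loaded only on first attribute access."""
        spec = importlib.util.find_spec(name)  # assumes the module exists
        loader = importlib.util.LazyLoader(spec.loader)
        spec.loader = loader
        module = importlib.util.module_from_spec(spec)
        sys.modules[name] = module
        loader.exec_module(module)
        return module

    # Usage: replaces "import difflib" at startup; difflib's body only runs
    # when an attribute is first touched.
    difflib = lazy_import("difflib")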

-Brett


>
> Regards,
> Bhavishya
>
> On Sun, Jun 25, 2017 at 12:52 PM, Bhavishya <bhavishyagop...@gmail.com>
> wrote:
>
>> Also, the "fatoptimizer" project fails to compile due to a missing
>> "ma_version_tag" (and works after changing it to "ma_version").
>>
>> Regards,
>> Bhavishya
>>
>> On Sun, Jun 25, 2017 at 12:32 PM, Bhavishya <bhavishyagop...@gmail.com>
>> wrote:
>>
>>> Hello,
>>> I have added the "lazy_import"
>>> <https://github.com/bhavishyagopesh/gsoc_2017/blob/master/python_startup_time/lazy_loader.py>
>>> function but am still working on adding it implicitly (to ensure that at
>>> startup it is actually used).
>>>
>>> Thanks Antoine for the suggestion
>>>
>>> Regards,
>>> Bhavishya
>>>

Re: [Speed] Speed Digest, Vol 36, Issue 1

2017-06-25 Thread Brett Cannon
An easier way to guarantee its use is to have the lazy import print
something under 'python -v' and then make sure every logged import has a
corresponding lazy message to go with it.
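
As a rough sketch of that cross-check (the "# lazy:" log format is made up;
the real lazy importer would need to emit something equivalent):

    import subprocess
    import sys

    proc = subprocess.run([sys.executable, "-v", "-c", "pass"],
                          capture_output=True, text=True)
    # `python -v` logs eager imports to stderr as: import 'name' # <loader>
    eager = {line.split()[1].strip("'") for line in proc.stderr.splitlines()
             if line.startswith("import ")}
    # Hypothetical lines emitted by the lazy importer: "# lazy: name"
    lazy = {line.split()[-1] for line in proc.stderr.splitlines()
            if line.startswith("# lazy:")}
    print("imported eagerly but never lazified:", sorted(eager - lazy))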

On Sun, Jun 25, 2017, 08:33 Bhavishya, <bhavishyagop...@gmail.com> wrote:

> Hello,
> I have added the "lazy_import"
> <https://github.com/bhavishyagopesh/gsoc_2017/blob/master/python_startup_time/lazy_loader.py>
> function but am still working on adding it implicitly (to ensure that at
> startup it is actually used).
>
> Thanks Antoine for the suggestion
>
> Regards,
> Bhavishya
>

Re: [Speed] Lazy-loading to decrease python_startup time

2017-06-24 Thread Brett Cannon
On Sat, 24 Jun 2017 at 09:50 Antoine Pitrou <solip...@pitrou.net> wrote:

> On Sat, 24 Jun 2017 16:28:29 +0000
> Brett Cannon <br...@python.org> wrote:
> > >
> > > My experience is that:
> > > - you want lazy imports to be implicit, i.e. they should work using the
> > >   "import" statement and not any special syntax or function invocation
> > > - you want a blacklist and/or whitelist mechanism to restrict lazy
> > >   imports to a particular set of modules and packages, because some
> > >   modules may not play well with lazy importing (e.g. anything that
> > >   registers plugins, specializations -- think singledispatch --, etc.)
> > >
> > > For example, I may want to register the "tornado", "asyncio" and
> "numpy"
> > > namespaces / packages for lazy importing, but not the "numba" namespace
> > > as it uses registration mechanisms quite heavily.
> > >
> > > (and the stdlib could be part of the default lazy import whitelist)
> > >
> >
> > That's all true for an end user's application, but for the stdlib where
> > there are no knock-on effects from dependencies not being loaded lazily I
> > don't think it's quite as critical. Plus lazy loading does make debugging
> > harder by making loads that trigger an exception happen at the line of
> > first use instead of at the import line, so I don't know if it's
> desirable
> > to automatically make the whole stdlib be lazily loaded from the outset
> > (which is what would be required since doing it in e.g. sitecustomize.py
> > wouldn't happen until after startup anyway).
>
> Yes, you are right.  I was assuming that if we take the time to include
> a lazy import system, we'd make it available for third-party
> applications, though ;-)
>

That's step 2. :) While EIBTI for the lazy_import(), "practicality beats
purity" in making it easy to just flip a switch to make all imports lazy.
Plus I have not thought through the design of the "switch" solution yet
while the function solution is already done thanks to
importlib.import_module() (which I honestly thought of giving a 'lazy'
keyword-only argument to, but since the algorithm would change I figured it
was probably best not to go down that route).
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] performance 0.5.5 and perf 1.3 released

2017-05-29 Thread Brett Cannon
I thought Pyston had been shuttered by Dropbox, so I wouldn't worry about
convincing them to change speed.pyston.org.

On Mon, May 29, 2017, 13:57 Victor Stinner, 
wrote:

> 2017-05-29 22:45 GMT+02:00 Antoine Pitrou :
> > I don't know.  It means that benchmark results published on the Web
> > are generally not comparable with each other unless they happen to be
> > generated with the exact same version.  It reduces the usefulness of
> > the benchmarks suite quite a bit IMHO.
>
> I only know of 3 websites that compare Python performance:
>
> * speed.python.org
> * speed.pypy.org
> * speed.pyston.org
>
> My goal is to convince PyPy developers to use performance. I'm not
> sure that pyston.org is relevant: it seems like their forked benchmark
> suite is modified, so I don't expect that results on pypy.org and
> pyston.org are comparable. I would also prefer that Pyston use the
> same benchmark suite.
>
> About speed.python.org, what was decided is to *drop* all previous
> results if we modify benchmarks. That's what I already did 3 times:
>
> * 2017-03-31: old results removed, new CPython results to use Git
> commits instead of Mercurial.
> * 2017-01: old results computed without PGO removed (unstable because
> of code placement), new CPython results using PGO
> * 2016-11-04: old results computed with the old "benchmarks" suite removed,
> new CPython results (using LTO but not PGO) computed with the new
> performance benchmark suite.
>
> To be honest, in the meantime I chose to run the master branches of
> perf and performance while developing them. In practice, I never
> noticed any significant performance change on any benchmark over
> the last 12 months when I updated dependencies. Sadly, it seems like
> no significant optimization was merged in our dependencies.
>
> > Let's ask the question a different way: was there any necessity to
> > update those dependencies?  If yes, then fair enough.  Otherwise, the
> > compatibility breakage is gratuitous.
>
> When I started to work on benchmarks last year, I noticed that we used
> a Mercurial version which was 5 years old, and a Django version which
> was something like 3 years old. I would like to benchmark the
> Mercurial and Django versions deployed in production.
>
> Why do you want to update performance if you want a pinned version of
> Django? Just always use the same performance version, no?
>
> For speed.python.org, maybe we can decide that we always use a fixed
> version of performance, and that we must remove all data each time we
> change the performance version. For my needs, maybe we could spawn a
> "beta" subdomain running master branches? Again, I expect no
> significant difference between the main website and the beta website.
> But we can do it if you want.
>
> Victor
> ___
> Speed mailing list
> Speed@python.org
> https://mail.python.org/mailman/listinfo/speed
>
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] ASLR

2017-03-16 Thread Brett Cannon
On Wed, 15 Mar 2017 at 17:54 Victor Stinner 
wrote:

> 2017-03-16 1:38 GMT+01:00 Wang, Peter Xihong  >:
> > Hi All,
> >
> > I am attaching an image comparing runs of the CALL_METHOD benchmark in the
> old Grand Unified Python Benchmark (GUPB) suite (
> https://hg.python.org/benchmarks), with ASLR enabled and disabled.
>
> This benchmark suite is now deprecated, please update to the new
> 'performance' benchmark suite:
> https://github.com/python/performance
>
> The old benchmark suite didn't spawn multiple processes and so was
> less reliable.
>
> By the way, maybe I should commit a change in hg.python.org/benchmarks
> to remove the code and only keep a README.txt? Code will still be
> accessible in Mercurial history.
>

Since we might not shut down hg.python.org for a long time I say go ahead
and commit such a change.
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] Running pyperformance on Windows

2017-01-15 Thread Brett Cannon
Have you reported this to the performance issue tracker, Peter?
https://github.com/python/performance

On Sun, 15 Jan 2017 at 02:31 Peter Cawley  wrote:

> Hi all,
>
> I'd like to run pyperformance (https://github.com/python/performance)
> for exploring Python 3.6 x64 speed on Windows. Apparently, "Windows is
> now supported" since version 0.1.2, but I'm observing errors such as
> the below when trying to run pyperformance, which at first glance seem
> pretty fatal for doing anything on Windows - am I missing something?
>
> -
> Execute: venv\cpython3.6-68187c45ff81\Scripts\python.exe -m
> performance run --inside-venv
> Python benchmark suite 0.5.0
> INFO:root:Skipping Python2-only benchmark pyflate; not compatible with
> Python sys.version_info(major=3, minor=6, micro=0,
> releaselevel='final', serial=0)
> INFO:root:Skipping Python2-only benchmark spambayes; not compatible
> with Python sys.version_info(major=3, minor=6, micro=0,
> releaselevel='final', serial=0)
> INFO:root:Skipping Python2-only benchmark hg_startup; not compatible
> with Python sys.version_info(major=3, minor=6, micro=0,
> releaselevel='final', serial=0)
> [ 1/51] 2to3...
> INFO:root:Running `c:\program
> files\python36\venv\cpython3.6-68187c45ff81\Scripts\python.exe
> c:\program
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\benchmarks\bm_2to3.py
> --output C:\Users\Peter\AppData\Local\Temp\tmpwc1mjawc`
> Traceback (most recent call last):
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\benchmarks\bm_2to3.py",
> line 30, in 
> runner.bench_func('2to3', bench_2to3, command, devnull_out)
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py",
> line 573, in bench_func
> return self._main(name, sample_func, inner_loops)
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py",
> line 496, in _main
> bench = self._master()
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py",
> line 735, in _master
> bench = self._spawn_workers()
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py",
> line 698, in _spawn_workers
> worker_bench = self._spawn_worker_bench(calibrate)
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py",
> line 631, in _spawn_worker_bench
> suite = self._spawn_worker_suite(calibrate)
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\perf\_runner.py",
> line 617, in _spawn_worker_suite
> proc = subprocess.Popen(cmd, env=env, **kw)
>   File "c:\program files\python36\lib\subprocess.py", line 707, in __init__
> restore_signals, start_new_session)
>   File "c:\program files\python36\lib\subprocess.py", line 961, in
> _execute_child
> assert not pass_fds, "pass_fds not supported on Windows."
> AssertionError: pass_fds not supported on Windows.
> ERROR: Benchmark 2to3 failed: Benchmark died
> Traceback (most recent call last):
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\run.py",
> line 126, in run_benchmarks
> bench = func(cmd_prefix, options)
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\benchmarks\__init__.py",
> line 121, in BM_2to3
> return run_perf_script(python, options, "2to3")
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\run.py",
> line 92, in run_perf_script
> run_command(cmd, hide_stderr=not options.verbose)
>   File "c:\program
>
> files\python36\venv\cpython3.6-68187c45ff81\lib\site-packages\performance\run.py",
> line 61, in run_command
> raise RuntimeError("Benchmark died")
> RuntimeError: Benchmark died
> -
>
> Thanks,
> Peter
> ___
> Speed mailing list
> Speed@python.org
> https://mail.python.org/mailman/listinfo/speed
>
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] New instance of CodeSpeed at speed.python.org running performance on CPython and PyPy?

2016-09-01 Thread Brett Cannon
On Thu, 1 Sep 2016 at 03:58 Victor Stinner  wrote:

> Hi,
>
> Would it be possible to run a new instance of CodeSpeed (the website
> behind speed.python.org) which would run the "performance" benchmark
> suite rather than the "benchmarks" benchmark suite? And would it be
> possible to run it on CPython (2.7 and 3.5 branches) and PyPy (master
> branch, maybe also the py3k branch)?
>

I believe Zach has the repo containing the code. He also said it's all
rather hacked up at the moment. Maybe something to discuss next week at the
sprint as I think you're both going to be there.


>
> I found https://github.com/tobami/codespeed/ but I haven't looked at it
> closely yet. I guess that some code should be written to convert the perf
> JSON file to the format expected by CodeSpeed?
>
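
For illustration, a rough sketch of such a converter: it assumes perf's
BenchmarkSuite JSON layout, the /result/add/ endpoint a stock Codespeed
install exposes, and the third-party requests package; the field values are
placeholders, not speed.python.org's actual configuration.

    import perf
    import requests

    suite = perf.BenchmarkSuite.load("cpython-master.json")
    for bench in suite.get_benchmarks():
        payload = {
            "commitid": "abc1234",            # placeholder CPython commit
            "branch": "default",
            "project": "CPython",
            "executable": "cpython-lto-pgo",  # must match a Codespeed executable
            "environment": "speed-python",    # must match a Codespeed environment
            "benchmark": bench.get_name(),
            "result_value": bench.median(),
        }
        requests.post("https://speed.python.org/result/add/", data=payload)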
> FYI I released performance 0.2 yesterday. JSON files now contain the
> version of the benchmark suite ("performance_version: 0.2"). I plan to
> use semantic versioning: increase the major version (e.g. upgrade to 0.3,
> but later it will be 1.x, 2.x, etc.) when benchmark results are
> considered to not be compatible.
>

SGTM.


>
> For example, I upgraded Django (from 1.9 to 1.10) and Chameleon (from
> 2.22 to 2.24) in performance 0.2.
>
> The question is how to upgrade the performance to a new major version:
> should we drop previous benchmark results?
>

They don't really compare anymore, so they should at least not be compared
to benchmark results from a newer benchmark.


>
> Maybe we should put the performance version in the URL, and use
> "/latest/" by default. Only /latest/ would get new results, and
> /latest/ would restart from an empty set of results when performance
> is upgraded?
>

SGTM


>
> Another option, less exciting, is to never upgrade benchmarks. The
> benchmarks project *added* new benchmarks when a dependency was
> "upgraded". In fact, the old dependency was kept and a new dependency
> (full copy of the code in fact ;-)) was added. So it has django,
> django_v2, django_v3, etc. The problem is that it still uses Mercurial
> 1.2 which was released 7 years ago (2009)... Since it's painful to
> upgrade, most dependencies were outdated.
>

Based on my experience with the benchmark suite I don't like this option
either; it just gathers cruft. As Maciej and the PyPy folks have pointed
out, benchmarks should try to represent modern code and old benchmarks
won't necessarily do that.


>
> Do you care about old benchmark results? It's quite easy to regenerate
> them (on demand?) if needed, no? Using Mercurial and Git, it's easy to
> update to any old revision to run a benchmark again on an old version
> of CPython / PyPy / etc.
>

I personally don't, but that's because I care about either current
performance in comparison to others or very short timescales to see when a
regression occurred (hence a switchover has a very small chance of
impacting that investigation), not long timescale results for historical
purposes.
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] Rename python/benchmarks GitHub project to python/performance?

2016-08-26 Thread Brett Cannon
On Thu, 25 Aug 2016 at 15:08 Victor Stinner 
wrote:

> Hi,
>
> For the first release of the "new" benchmark suite, I chose the name
> "performance", since "benchmark" and "benchmarks" names were already
> reserved on PyPI. It's the name of the Python module, but also of the
> command line tool: "pyperformance".
>
> Since there is an "old" benchmark suite
> (https://hg.python.org/benchmarks), PyPy has its benchmark suite, etc.
> I propose to rename the GitHub project to "performance" to avoid
> confusion.
>
> What do you think?
>

If you want to do it then go ahead, but I don't think it will be a big issue
in the grand scheme of things.


>
> Note: I'm not a big fan of the "performance" name, but I don't think
> it matters much. The name only needs to be unique and available on
> PyPI :-D
>
> By the way, I don't know if it's worth it to have a "pyperformance"
> command line tool. You can already use "python3 -m performance ..."
> syntax. But you have to recall the Python version used to install the
> module. "python2 -m performance ..." doesn't work if you only
> installed performance for Python 3!
>

As Antoine pointed out, if it doesn't matter what interpreter has the
script installed to run the benchmarks against another interpreter, then a
script makes sense (but do keep it available through -m).
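
For what it's worth, a sketch of how both entry points can coexist (the
"performance.cli:main" path is a placeholder, not necessarily the project's
real module layout; a performance/__main__.py calling the same main() keeps
"python -m performance" working too):

    # setup.py (sketch)
    from setuptools import setup

    setup(
        name="performance",
        version="0.0",
        packages=["performance"],
        entry_points={
            "console_scripts": [
                "pyperformance = performance.cli:main",
            ],
        },
    )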
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] CPython Benchmark Suite usinf my perf module, GitHub, etc.

2016-07-29 Thread Brett Cannon
On Thu, 28 Jul 2016 at 10:25 Victor Stinner 
wrote:

> Hi,
>
> I updated all benchmarks of the CPython Benchmark Suite to use my perf
> module. So you get timings of all individual runs of *all* benchmarks
> and can store them in JSON to analyze them in detail. Each benchmark has a
> full CLI; for example it gets a new --output option to store results as
> JSON directly. But it also gets nice options like --hist for a
> histogram, --stats for statistics, etc.
>
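
As an illustration of what such a benchmark script looks like with the perf
Runner API (the workload here is made up; bench_func is the same call used by
the real benchmarks, e.g. bm_2to3.py):

    import perf  # the module this thread is about; later renamed pyperf

    def bench_sort():
        # Toy workload standing in for a real benchmark body.
        sorted(list(range(1000)), reverse=True)

    runner = perf.Runner()
    runner.bench_func("sort_list", bench_sort)

Running the script gives the full CLI described above (--output, --hist,
--stats, ...).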
> The two remaining questions are:
>
> * Should it support --track_memory? It doesn't support it correctly
> right now; it's less precise than before (memory is tracked directly
> in worker processes, no longer by the main process) => discuss this
> point in my previous email
>

I don't have an opinion as I have never gotten to use the old feature.


>
> * Should we remove vendor copies of libraries and work with virtual
> environments? Not all libraries are available on PyPI :-/ See the
> requirements.txt file and TODO.
>

If they are not on PyPI then we should just drop the benchmark. And I say
we do use virtual environments to keep the repo size down.
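
A minimal sketch of that approach (assuming a requirements.txt with pinned
versions sits next to the runner; this is not the suite's actual code):

    import os
    import subprocess
    import sys
    import venv

    venv_dir = os.path.join("venv", "cpython-test")
    venv.EnvBuilder(with_pip=True).create(venv_dir)
    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
    pip = os.path.join(venv_dir, bin_dir, "pip")
    subprocess.check_call([pip, "install", "-r", "requirements.txt"])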


>
> My repository:
> https://hg.python.org/sandbox/benchmarks_perf
>
> I would like to push my work as a single giant commit.
>
> Brett also proposed me to move the benchmarks repository to GitHub
> (and so convert it to Git). I don't know if it's appropriate to do all
> these things at once? What do you think?
>

I say just start a new repo from scratch. There isn't a ton of magical
history in the hg repo that I think we need to have carried around in the
git repo. Plus if we stop shipping project source with the repo then it
will be immensely smaller if we start from scratch.


>
> Reminder: my final goal is to merge all the benchmark suites
> (CPython, PyPy, Pyston, Pyjion, ...) back into one unique project!


I hope this happens!
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] New CPython benchmark suite based on perf

2016-07-04 Thread Brett Cannon
I just wanted to quickly say, Victor, this all sounds great!

On Mon, 4 Jul 2016 at 07:17 Victor Stinner  wrote:

> Hi,
>
> I modified the CPython benchmark suite to use my perf module:
> https://hg.python.org/sandbox/benchmarks_perf
>
>
> Changes:
>
> * use statistics.median() rather than mean() to compute the "average"
> of samples. Example:
>
>Median +- Std dev: 256 ms +- 3 ms -> 262 ms +- 4 ms: 1.03x slower
>
> * replace compat.py with external six dependency
> * replace util.py with perf
> * replace explicit warmups with perf automatic warmup
> * add name metadata
> * for benchmark taking parameters, save parameters in metadata
> * avoid nested loops, prefer a single level of loop: perf is
> responsible for calling the sample function enough times to collect
> enough samples
> * store django and mako version in metadata
> * use JSON format to exchange timings between benchmarks and runner.py
>
>
> perf adds more features:
>
> * run each benchmark in multiple processes (25 by default, 50 in rigorous
> mode)
> * calibrate each benchmark to compute the number of loops to get a
> sample between 100 ms and 1 second
>
>
> TODO:
>
> * Right now the calibration is done twice: in the reference python and
> in the changed python. It should only be done once, in the reference python
> * runner.py should write results to a JSON file. Currently, data are
> not written to disk (a pipe is used with child processes)
> * Drop external dependencies and create a virtual environment per python
> * Port more Python 2-only benchmarks to Python 3
> * Add more benchmarks from PyPy, Pyston and Pyjion benchmark suites:
> unify again the benchmark suites :-)
>
>
> perf has builtin tools to analyze the distribution of samples:
>
> * add --hist option to a benchmark to display a histogram in text mode
> * add --stats option to a benchmark to display statistics: number of
> samples, shortest raw sample, min, max, etc.
> * the "python3 -m perf" CLI has many commands to analyze a benchmark:
> http://perf.readthedocs.io/en/latest/cli.html
>
>
> Right now, perf JSON format is only able to store one benchmark. I
> will extend the format to be able to store a list of benchmarks. So it
> will be possible to store all results of a python version into a
> single file.
>
> By the way, I also want to change the runner.py CLI to be able to run the
> benchmarks on a single python version and then use a second command to
> compare two files, rather than always running each benchmark twice
> (reference python, changed python). PyPy's runner also works like that,
> if I recall correctly.
>
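
A rough sketch of what that second command could boil down to, using perf's
loading API (the file names are placeholders and the exact method names are
assumptions; check the perf docs):

    import perf

    ref = perf.Benchmark.load("reference.json")
    new = perf.Benchmark.load("patched.json")
    # ratio > 1 means the patched run is slower than the reference run
    ratio = new.median() / ref.median()
    print("%s: %.1f ms -> %.1f ms (%.2fx)"
          % (ref.get_name(), ref.median() * 1e3, new.median() * 1e3, ratio))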
> Victor
> ___
> Speed mailing list
> Speed@python.org
> https://mail.python.org/mailman/listinfo/speed
>
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] [Python-Dev] speed.python.org

2016-02-05 Thread Brett Cannon
To piggyback on Zach's speed.python.org announcement, we will most likely
be kicking off a discussion of redoing the benchmark suite, tweaking the
test runner, etc. over on the speed@ ML. Those of us who have been doing
perf work lately have found some shortcomings we would like to fix in our
benchmark suite, so if you want to participate in that discussion, please
join speed@ by next week.

On Wed, 3 Feb 2016 at 22:49 Zachary Ware <zachary.ware+py...@gmail.com>
wrote:

> I'm happy to announce that speed.python.org is finally functional!
> There's not much there yet, as each benchmark builder has only sent
> one result so far (and one of those involved a bit of cheating on my
> part), but it's there.
>
> There are likely to be rough edges that still need smoothing out.
> When you find them, please report them at
> https://github.com/zware/codespeed/issues or on the speed@python.org
> mailing list.
>
> Many thanks to Intel for funding the work to get it set up and to
> Brett Cannon and Benjamin Peterson for their reviews.
>
> Happy benchmarking,
> --
> Zach
> ___
> Python-Dev mailing list
> python-...@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/brett%40python.org
>
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


[Speed] People interested in reworking the benchmark suite in 2016?

2016-01-03 Thread Brett Cannon
With the planned move to GitHub, there is an opportunity to try and rework
the set of benchmarks -- and anything else -- in 2016 by starting a new
benchmark repo from scratch. E.g., modern numeric benchmarks, long-running
benchmarks that warm up JITs, using pip with pegged bugfix versions so we
stop shipping library code with the benchmarks, etc.  We could also
standardize results output -- e.g. should we just make everything run under
codespeed? -- so that the benchmarks are easy to run locally for one-off
results as well as continuous benchmarking for trend details with a common
benchmark driver?

Would people be interested and motivated enough in getting representatives
from the various Python implementations together at PyCon and have a BoF to
discuss what we want from a proper, unified, baseline benchmark suite and
see if we can pull one together -- or at least start one -- in 2016?
___
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed


Re: [Speed] Status of speed.python.org?

2012-10-19 Thread Brett Cannon
On Fri, Oct 19, 2012 at 3:38 PM, Miquel Torres tob...@gmail.com wrote:

 Right. On the webapp (Codespeed) side of things I am willing to help
 in anything you need.
 The blocker has been mostly the benchmark runner, AFAIK.


And how specifically is that a blocker so we can work on eliminating those
issues?

-Brett


 Miquel


 2012/10/19 Maciej Fijalkowski fij...@gmail.com:
  On Fri, Oct 19, 2012 at 12:46 PM, Jesse Noller jnol...@gmail.com
 wrote:
 
 
  On Oct 19, 2012, at 6:25 AM, Maciej Fijalkowski fij...@gmail.com
 wrote:
 
  On Thu, Oct 18, 2012 at 10:40 PM, Victor Stinner
  victor.stin...@gmail.com wrote:
  Hi,
 
  What is the status of speed.python.org? Where are the benchmarks?
 Does
  anyone try to set up something to run benchmarks regularly and display
  data on web pages?
 
  How can I help?
 
  Victor
 
  Hi Victor.
 
  The status is no one cares.
 
  Hi. I care. Other people do too. Maybe you don't - that's ok. The
 problem is lack of time / project planning.
 
  Ok, to be precise no one cares enough to make things happen, this is
  a fact. I actually care to some extent and I'm willing to help with
  stuff, however, there must be someone who cares more on the python
  core development team to make the exercise not-completely-pointless.
 
  Cheers,
  fijal
  ___
  Speed mailing list
  Speed@python.org
  http://mail.python.org/mailman/listinfo/speed
 ___
 Speed mailing list
 Speed@python.org
 http://mail.python.org/mailman/listinfo/speed

___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed


Re: [Speed] slow benchmark

2012-09-30 Thread Brett Cannon
On Sun, Sep 30, 2012 at 7:21 PM, Antoine Pitrou solip...@pitrou.net wrote:


 Hello,

 The hexiom benchmark is very slow. Is there a reason it's included
 there?


Already been asked and answered:
http://mail.python.org/pipermail/speed/2012-September/000209.html
___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed


Re: [Speed] standalone PyPy benchmarks ported

2012-09-17 Thread Brett Cannon
On Sun, Sep 16, 2012 at 10:54 AM, Maciej Fijalkowski fij...@gmail.com wrote:

 On Sun, Sep 16, 2012 at 4:43 PM, Brett Cannon br...@python.org wrote:
  Quick question about the hexiom2 benchmark: what does it measure? It is
 by
  far the slowest benchmark I ported, and considering it isn't a real-world
  app benchmark I want to make sure the slowness of it is worth it.
 Otherwise
  I would rather drop it since having something run 1/25 as many iterations
  compared to the other simple benchmarks seems to water down its
 robustness.

 It's a puzzle solver. It got included because PyPy 1.9 got slower than
 1.8 on this particular benchmark that people were actually running
 somewhere, so it has *some* value.


Fair enough. Just wanted to make sure that it was worth having a slow
execution over.


 I wonder, does adding a fixed
 random number seed help the distribution?


Fix how? hexiom2 doesn't use a random value for anything.

-Brett



 
 
  On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski fij...@gmail.com
  wrote:
 
  On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon br...@python.org
 wrote:
   So I managed to get the following benchmarks moved into the unladen
 repo
   (not pushed yet until I figure out some reasonable scaling values as
   some
   finish probably too fast and others go for a while):
  
   chaos
   fannkuch
   meteor-contest (renamed meteor_contest)
   spectral-norm (renamed spectral_norm)
   telco
   bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this
   benchmark)
   go
   hexiom2
   json_bench (renamed json_dump_v2)
   raytrace_simple (renamed raytrace)
  
   Most of the porting was range/xrange related. After that it was
   str/unicode.
   I also stopped having the benchmarks write out files as it was always
 to
   verify results and not a core part of the benchmark.
  
   That leaves us with the benchmarks that rely on third-party projects.
   The
   chameleon benchmark can probably be ported as chameleon has a version
   released running on Python 3. But django and html5lib have only
   in-development versions that support Python 3. If we want to pull in
 the
   tip
   of their repos then those benchmarks can also be ported now rather
 than
   later. People have opinions on in-dev code vs. released for
   benchmarking?
  
   There is also the sphinx benchmark, but that requires getting
 CPython's
   docs
   building under Python 3 (see http://bugs.python.org/issue10224).
  
   ___
   Speed mailing list
   Speed@python.org
   http://mail.python.org/mailman/listinfo/speed
  
 
  great job!
 
 

___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed


Re: [Speed] standalone PyPy benchmarks ported

2012-09-17 Thread Brett Cannon
On Mon, Sep 17, 2012 at 11:36 AM, Maciej Fijalkowski fij...@gmail.com wrote:

 On Mon, Sep 17, 2012 at 5:00 PM, Brett Cannon br...@python.org wrote:
 
 
  On Sun, Sep 16, 2012 at 10:54 AM, Maciej Fijalkowski fij...@gmail.com
  wrote:
 
  On Sun, Sep 16, 2012 at 4:43 PM, Brett Cannon br...@python.org wrote:
   Quick question about the hexiom2 benchmark: what does it measure? It
 is
   by
   far the slowest benchmark I ported, and considering it isn't a
   real-world
   app benchmark I want to make sure the slowness of it is worth it.
   Otherwise
   I would rather drop it since having something run 1/25 as many
   iterations
   compared to the other simple benchmarks seems to water down its
   robustness.
 
  It's a puzzle solver. It got included because PyPy 1.9 got slower than
  1.8 on this particular benchmark that people were actually running
  somewhere, so it has *some* value.
 
 
  Fair enough. Just wanted to make sure that it was worth having a slow
  execution over.
 
 
  I wonder, does adding a fixed
  random number seed help the distribution?
 
 
  Fix how? hexiom2 doesn't use a random value for anything.

 Ok, then please explain why having 1/25th of the iterations kills robustness?


Fewer iterations mean fewer chances to smooth out any bumps in the
measurements. E.g. 4 iterations compared to 100 don't lead to as even a
measurement. I mean, you would hope that because the benchmark goes for so
long it would just level out within a single run instead of needing multiple
runs to get the same evening out.

-Brett



 
  -Brett
 
 
 
  
  
   On Fri, Sep 14, 2012 at 5:44 PM, Maciej Fijalkowski fij...@gmail.com
 
   wrote:
  
   On Fri, Sep 14, 2012 at 10:19 PM, Brett Cannon br...@python.org
   wrote:
So I managed to get the following benchmarks moved into the unladen
repo
(not pushed yet until I figure out some reasonable scaling values
 as
some
finish probably too fast and others go for a while):
   
chaos
fannkuch
meteor-contest (renamed meteor_contest)
spectral-norm (renamed spectral_norm)
telco
bm_mako (renamed bm_mako_v2; also pulled in mako 0.9.7 for this
benchmark)
go
hexiom2
json_bench (renamed json_dump_v2)
raytrace_simple (renamed raytrace)
   
Most of the porting was range/xrange related. After that it was
str/unicode.
I also stopped having the benchmarks write out files as it was
 always
to
verify results and not a core part of the benchmark.
   
That leaves us with the benchmarks that rely on third-party
 projects.
The
chameleon benchmark can probably be ported as chameleon has a
 version
released running on Python 3. But django and html5lib have only
in-development versions that support Python 3. If we want to pull
 in
the
tip
of their repos then those benchmarks can also be ported now rather
than
later. People have opinions on in-dev code vs. released for
benchmarking?
   
There is also the sphinx benchmark, but that requires getting
CPython's
docs
building under Python 3 (see http://bugs.python.org/issue10224).
   
___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed
   
  
   great job!
  
  
 
 

___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed


Re: [Speed] merging PyPy and Python benchmark suite

2012-07-26 Thread Brett Cannon
On Wed, Jul 25, 2012 at 4:54 PM, Antoine Pitrou solip...@pitrou.net wrote:

 On Wed, 25 Jul 2012 16:45:27 -0400
 Brett Cannon br...@python.org wrote:
  On Wed, Jul 25, 2012 at 2:39 PM, Antoine Pitrou solip...@pitrou.net
 wrote:
 
  
   Should we add the MIT license to our benchmarks repo as well?
  
 
  I'm fine with it, although is there an issue with changing it? I know
 that
  the code has no history and thus doesn't strictly need to use the PSF
  license, but IANAL.

 Well there's no license right now, which makes it non-open source
 software :)


And I noticed you added the MIT license, so now we are license-compatible.
Let the wholesale copying begin! =)

-Brett



 Regards

 Antoine.


 --
 Software development and contracting: http://pro.pitrou.net


 ___
 Speed mailing list
 Speed@python.org
 http://mail.python.org/mailman/listinfo/speed

___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed


Re: [Speed] merging PyPy and Python benchmark suite

2012-07-25 Thread Brett Cannon
On Wed, Jul 25, 2012 at 6:54 AM, Maciej Fijalkowski fij...@gmail.com wrote:

 On Wed, Jul 25, 2012 at 5:12 AM, Alex Gaynor alex.gay...@gmail.com
 wrote:
 
 
  On Tue, Jul 24, 2012 at 7:37 PM, Nick Coghlan ncogh...@gmail.com
 wrote:
 
  On Wed, Jul 25, 2012 at 9:51 AM, Brett Cannon br...@python.org wrote:
  
  
   On Tue, Jul 24, 2012 at 5:38 PM, Nick Coghlan ncogh...@gmail.com
   wrote:
  
   Antoine's right on this one - just use and redistribute the upstream
   components under their existing licenses. CPython itself is different
   because the PSF has chosen to reserve relicensing privileges for
 that,
   which
   requires the extra permissions granted in the contributor agreement.
  
  
   But I'm talking about the benchmarks themselves, not the wholesale
   inclusion
   of Mako, etc. (which I am not worried about since the code in the
   dependencies is not edited). Can we move the PyPy benchmarks
 themselves
   (e.g. bm_mako.py that PyPy has) over to the PSF benchmarks without
   getting
   contributor agreements.
 
  The PyPy team need to put a clear license notice (similar to the one
  in the main pypy repo) on their benchmarks repo. But yes, I believe
  you're right that copying that code as it stands would technically be
  a copyright violation, even if the PyPy team intend for it to be
  allowed.
 
  If you're really concerned, check with Van first, but otherwise I'd
  just file a bug with the PyPy folks requesting that they clarify the
  licensing by adding a LICENSE file and in the meantime assume they
  intended for it to be covered by the MIT license, just like PyPy
  itself.
 
  The PSF license is necessary for CPython because of the long and
  complicated history of that code base. We can use simpler licenses for
  other stuff (like the benchmark suite) and just run with license in =
  license out rather than preserving the right for the PSF to change the
  license.
 
  Cheers,
  Nick.
 
  --
  Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
  ___
  Speed mailing list
  Speed@python.org
  http://mail.python.org/mailman/listinfo/speed
 
 
  First, I believe all the Unladen Swallow stuff (including the runner) is
  under the PSF license, though you'd have to check the repo for a license
  file or bug Jeffrey and Collin.  Someone (fijal) will add an MIT license
 for
  our half of the repo.
 
 
  Alex

 Done. PyPy benchmarks are MIT


Great! Then I'm happy with moving PyPy benchmarks over wholesale. Are there
any benchmarks that are *really* good and are thus a priority to move, or
any that are just flat-out bad and I shouldn't bother moving?
___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed


Re: [Speed] merging PyPy and Python benchmark suite

2012-07-25 Thread Brett Cannon
On Wed, Jul 25, 2012 at 2:39 PM, Antoine Pitrou solip...@pitrou.net wrote:


 Should we add the MIT license to our benchmarks repo as well?


I'm fine with it, although is there an issue with changing it? I know that
the code has no history and thus doesn't strictly need to use the PSF
license, but IANAL.

-Brett



 cheers

 Antoine.



 On Wed, 25 Jul 2012 14:16:34 -0400
 Brett Cannon br...@python.org wrote:

  On Wed, Jul 25, 2012 at 6:54 AM, Maciej Fijalkowski fij...@gmail.com
 wrote:
 
   On Wed, Jul 25, 2012 at 5:12 AM, Alex Gaynor alex.gay...@gmail.com
   wrote:
   
   
On Tue, Jul 24, 2012 at 7:37 PM, Nick Coghlan ncogh...@gmail.com
   wrote:
   
On Wed, Jul 25, 2012 at 9:51 AM, Brett Cannon br...@python.org
 wrote:


 On Tue, Jul 24, 2012 at 5:38 PM, Nick Coghlan ncogh...@gmail.com
 
 wrote:

 Antoine's right on this one - just use and redistribute the
 upstream
 components under their existing licenses. CPython itself is
 different
 because the PSF has chosen to reserve relicensing privileges for
   that,
 which
 requires the extra permissions granted in the contributor
 agreement.


 But I'm talking about the benchmarks themselves, not the wholesale
 inclusion
 of Mako, etc. (which I am not worried about since the code in the
 dependencies is not edited). Can we move the PyPy benchmarks
   themselves
 (e.g. bm_mako.py that PyPy has) over to the PSF benchmarks without
 getting
 contributor agreements.
   
The PyPy team need to put a clear license notice (similar to the one
in the main pypy repo) on their benchmarks repo. But yes, I believe
you're right that copying that code as it stands would technically
 be
a copyright violation, even if the PyPy team intend for it to be
allowed.
   
If you're really concerned, check with Van first, but otherwise I'd
just file a bug with the PyPy folks requesting that they clarify the
licensing by adding a LICENSE file and in the meantime assume they
intended for it to be covered by the MIT license, just like PyPy
itself.
   
The PSF license is necessary for CPython because of the long and
complicated history of that code base. We can use simpler licenses
 for
other stuff (like the benchmark suite) and just run with license in
 =
license out rather than preserving the right for the PSF to change
 the
license.
   
Cheers,
Nick.
   
--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed
   
   
First, I believe all the unalden swallow stuff (including the
 runner) is
under the PSF licence, though you'd have to check the repo for a
 license
file or bug Jeffrey and Collin.  Someone (fijal) will add an MIT
 license
   for
our half of the repo.
   
   
Alex
  
   Done. PyPy benchmarks are MIT
 
 
  Great! Then I'm happy with moving PyPy benchmarks over wholesale. Are
 there
  any benchmarks that are *really* good and are thus a priority to move, or
  any that are just flat-out bad and I shouldn't bother moving?
 


 --
 Software development and contracting: http://pro.pitrou.net


 ___
 Speed mailing list
 Speed@python.org
 http://mail.python.org/mailman/listinfo/speed

___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed


Re: [Speed] merging PyPy and Python benchmark suite

2012-07-24 Thread Brett Cannon
On Tue, Jul 24, 2012 at 1:14 PM, Alex Gaynor alex.gay...@gmail.com wrote:



 On Tue, Jul 24, 2012 at 10:10 AM, Brett Cannon br...@python.org wrote:



 On Mon, Jul 23, 2012 at 7:34 PM, Maciej Fijalkowski fij...@gmail.com wrote:

 On Mon, Jul 23, 2012 at 11:46 PM, Brett Cannon br...@python.org wrote:
 
 
  On Mon, Jul 23, 2012 at 4:39 PM, Armin Rigo ar...@tunes.org wrote:
 
  Hi Brett,
 
  On Mon, Jul 23, 2012 at 10:15 PM, Brett Cannon br...@python.org
 wrote:
   That's what I'm trying to establish; how much have they diverged
 and if
   I'm
   looking in the proper place.
 
  bm_mako.py is not from Unladen Swallow; that's why it is in
  pypy/benchmarks/own/.  In case of doubts, check it in the history of
  Hg.  The PyPy version was added by virhilo, which seems to be the
  name of its author, on 2010-12-21, and was not changed at all since
  then.
 
 
  OK. Maciej has always told me that a problem with the Unladen
 benchmarks was
  that some of them had artificial loop unrolling, etc., so I had
 assumed you
  had simply fixed those instances instead of creating entirely new
  benchmarks.

 No we did not use those benchmarks. Those were mostly completely
 artificial microbenchmarks (call, call_method etc.). We decided we're
 not really that interested in microbenchmarks.

 
 
 
  Hg tells me that there was no change at all in the 'unladen_swallow'
  subdirectory, apart from 'unladen_swallow/perf.py' and adding some
  __init__.py somewhere.  So at least these benchmarks did not receive
  any pypy-specific adaptations.  If there are divergences, they come
  from changes done to the unladen-swallow benchmark suite after PyPy
  copied it on 2010-01-15.
 
 
  I know that directory wasn't changed, but I also noticed that some
  benchmarks had the same name, which is why I thought they were forked
  versions of the same-named Unladen benchmarks.

 Not if they're in own/ directory.


 OK, good to know. I realized I can't copy code wholesale from PyPy's
 benchmark suite as I don't know the code's history and thus if the
 contributor signed Python's contributor agreement. Can the people who are
 familiar with the code help move benchmarks over where the copyright isn't
 in question?

 I can at least try to improve the Python 3 situation by doing things like
 pulling in Vinay's py3k port of Django, etc. to fill in gaps. I will also
 try to get the benchmarks to work with a Python 2.7 control and a Python 3
 experimental target for comparing performance since that's what I need
 (or at least be able to run the benchmarks on their own and write out the
 results for later comparison).

 Anything else that should be worked on?

 ___
 Speed mailing list
 Speed@python.org
 http://mail.python.org/mailman/listinfo/speed


 The important thing is that once a benchmark is in the repo it can *never*
  change, including all the versions of dependencies; only Python can vary,
 otherwise you kill the ability to actually do science with the numbers.

  So, e.g., I wouldn't pull in Vinay's fork, since that's going to be
  utterly obsolete in a few weeks probably; I'd wait for the django-on-py3k
  work to all be merged into django itself.


If that's happening in a few weeks then I can wait. But remember my desire
is to get benchmark numbers between Python 2.7 and Python 3.3 for my
November keynotes so I can't punt indefinitely.

-Brett



 Alex

 --
 I disapprove of what you say, but I will defend to the death your right
 to say it. -- Evelyn Beatrice Hall (summarizing Voltaire)
 The people's good is the highest law. -- Cicero


___
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed