On Mon, Jul 23, 2012 at 7:34 PM, Maciej Fijalkowski <fij...@gmail.com> wrote:

> On Mon, Jul 23, 2012 at 11:46 PM, Brett Cannon <br...@python.org> wrote:
> >
> >
> > On Mon, Jul 23, 2012 at 4:39 PM, Armin Rigo <ar...@tunes.org> wrote:
> >>
> >> Hi Brett,
> >>
> >> On Mon, Jul 23, 2012 at 10:15 PM, Brett Cannon <br...@python.org> wrote:
> >> > That's what I'm trying to establish; how much have they diverged and
> >> > if I'm looking in the proper place.
> >>
> >> bm_mako.py is not from Unladen Swallow; that's why it is in
> >> pypy/benchmarks/own/.  In case of doubts, check it in the history of
> >> Hg.  The PyPy version was added from virhilo, which seems to be the
> >> name of his author, on 2010-12-21, and was not changed at all since
> >> then.
> >
> >
> > OK. Maciej has always told me that a problem with the Unladen benchmarks
> > was that some of them had artificial loop unrolling, etc., so I had
> > assumed you had simply fixed those instances instead of creating entirely
> > new benchmarks.
>
> No, we did not use those benchmarks. Those were mostly completely
> artificial microbenchmarks (call, call_method, etc.). We decided we're
> not really that interested in microbenchmarks.
>
> >
> >>
> >>
> >> Hg tells me that there was no change at all in the 'unladen_swallow'
> >> subdirectory, apart from 'unladen_swallow/perf.py' and adding some
> >> __init__.py somewhere.  So at least these benchmarks did not receive
> >> any pypy-specific adaptations.  If there are divergences, they come
> >> from changes done to the unladen-swallow benchmark suite after PyPy
> >> copied it on 2010-01-15.
> >
> >
> > I know that directory wasn't changed, but I also noticed that some
> > benchmarks had the same name, which is why I thought they were forked
> > versions of the same-named Unladen benchmarks.
>
> Not if they're in own/ directory.
>

OK, good to know. I realized I can't copy code wholesale from PyPy's
benchmark suite, as I don't know the code's history and thus whether the
contributor signed Python's contributor agreement. Can the people who are
familiar with the code help move over the benchmarks whose copyright isn't
in question?

I can at least try to improve the Python 3 situation by doing things like
pulling in Vinay's py3k port of Django, etc. to fill in gaps. I will also
try to get the benchmarks to work with a Python 2.7 control and a Python 3
"experimental" target for comparing performance, since that's what I need
(or at least be able to run the benchmarks on their own and write out the
results for later comparison).

Anything else that should be worked on?
_______________________________________________
Speed mailing list
Speed@python.org
http://mail.python.org/mailman/listinfo/speed