On Thu, Sep 13, 2012 at 12:35 AM, Brett Cannon <br...@python.org> wrote:

> I went through the list of benchmarks that PyPy has to see which ones
> could be ported to Python 3 now (others can be ported in the future, but
> they depend on a project that has not released an official version with
> Python 3 support):
>
> ai
> chaos
> fannkuch
> float
> meteor-contest
> nbody_modified
> richards
> spectral-norm
> telco
>
> bm_chameleon*
> bm_mako
> go
> hexiom2
> json_bench
> pidigits
> pyflate-fast
> raytrace-simple
> sphinx*
>
> The first grouping is the 20 shown on the speed.pypy.org homepage; the
> rest are in the complete list. Anything with an asterisk has an external
> dependency that is not already in the Unladen benchmarks.
>
> Are the twenty shown on the homepage of speed.pypy.org in some way
> special, or were they the first benchmarks that you were good/bad at, or
> what? Are there any benchmarks here that are particularly good or bad?
> I'm trying to prioritize which benchmarks I port, so that if I hit a time
> crunch I get the critical ones moved first.
The 20 shown on the front page are the ones for which we have full historical data, so we can compare. The others are simply newer. I don't think there is any priority associated with them; we should probably put the others on the first page as well, despite not having full data.