Re: [pypy-dev] Idea for speed.pypy.org

2010-12-22 Thread Miquel Torres
So, what about the ai and spectral-norm benchmarks? Can anybody come
up with descriptions for them?



___
pypy-dev@codespeak.net
http://codespeak.net/mailman/listinfo/pypy-dev


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-22 Thread Alex Gaynor
ai: runs a brute-force N-queens solver
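(For illustration only — a minimal brute-force N-queens counter in the same spirit as the description above; this is a sketch, not the benchmark's actual code:)

```python
from itertools import permutations

def n_queens_solutions(n):
    # Brute force: place one queen per row, try every permutation of
    # column positions, and keep boards where no two queens share a
    # diagonal (rows and columns are distinct by construction).
    count = 0
    for cols in permutations(range(n)):
        if all(abs(cols[i] - cols[j]) != j - i
               for i in range(n) for j in range(i + 1, n)):
            count += 1
    return count

print(n_queens_solutions(8))  # -> 92
```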

Alex


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-14 Thread Paolo Giarrusso
On Mon, Dec 13, 2010 at 09:31, Miquel Torres tob...@googlemail.com wrote:
 Oh, btw., the normalized stacked bars now display a warning note
 about their correctness, explaining that the results should be viewed
 as weighted rather than normalized. It even includes a link to the
 relevant paper. I hope that is enough for the strict statisticians
 among us ;-)

I see. Thanks!

-- 
Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/

Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread Miquel Torres
Thanks all for the input.
I've compiled a list based on your mails, the Unladen benchmarks page
(http://code.google.com/p/unladen-swallow/wiki/Benchmarks), and the
alioth descriptions. Here is an extract of the current speed.pypy.org
admin:

ai
chaos              Creates chaosgame-like fractals
crypto_pyaes       A pure Python implementation of AES
django             Uses the Django template system to build a 150x150-cell
HTML table

fannkuch           Indexed access to a tiny integer sequence. The fannkuch
benchmark is defined by programs in "Performing Lisp Analysis of the
FANNKUCH Benchmark", Kenneth R. Anderson and Duane Rettig.

float              Creates an array of points using circular projection and then
normalizes and maximizes them. Floating-point heavy.
go                 A computer player AI for the board game Go.
html5lib           Parses the HTML 5 spec using html5lib
meteor-contest     Searches for solutions to a shape-packing puzzle.
nbody_modified     Double-precision N-body simulation. It models the
orbits of Jovian planets, using a simple symplectic integrator.
pyflate-fast       Stand-alone pure-Python DEFLATE (gzip) and bzip2
decoder/decompressor.
raytrace-simple    A raytracing renderer
richards           Medium-sized language benchmark that simulates the task
dispatcher in the kernel of an operating system.
rietveld           A Django application benchmark.
slowspitfire
spambayes          Runs a canned mailbox through a SpamBayes ham/spam classifier
spectral-norm
spitfire           Uses the Spitfire template system to build a 1000x1000-cell
HTML table.
spitfire_cstringio Uses the Spitfire template system to build a
1000x1000-cell HTML table, using the cStringIO module.
telco
twisted_iteration
twisted_names
twisted_pb
twisted_tcp        Connects one Twisted client to one Twisted server over TCP
(on the loopback interface) and then writes bytes as fast as it can.
waf                Python-based framework for configuring, compiling and installing
applications. It derives from the concepts of other build tools such
as SCons, Autotools, CMake or Ant.
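
(For reference, the flip count that fannkuch measures can be sketched as follows — an illustration of the technique, not the benchmark's actual code:)

```python
from itertools import permutations

def fannkuch(n):
    # For each permutation of 1..n, repeatedly reverse the first k
    # elements, where k is the current first element, until a 1 is in
    # front; report the maximum number of such flips over all permutations.
    max_flips = 0
    for perm in permutations(range(1, n + 1)):
        p = list(perm)
        flips = 0
        while p[0] != 1:
            k = p[0]
            p[:k] = p[k - 1::-1]  # reverse the first k elements
            flips += 1
        max_flips = max(max_flips, flips)
    return max_flips

print(fannkuch(7))  # -> 16
```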


So the remaining descriptions are
ai
slowspitfire (what is the exact difference between the three spitfire benches?)
spectral-norm
telco
twisted (most of them)

Are the descriptions all right so far? They can be made much longer
if you deem it desirable.

On speed.pypy.org you will currently see the descriptions in 3 places:
- Changes view: A tooltip on hover over each benchmark
- Timeline: a description box beneath each plot
- Comparison: A tooltip over each benchmark when hovering the
selection menu on the left side.

Any suggestions on how to improve it further are welcome ;-)

Miquel


2010/12/9 Paolo Giarrusso p.giarru...@gmail.com:
 On Thu, Dec 9, 2010 at 14:14, Leonardo Santagada santag...@gmail.com wrote:
 Here is an incomplete draft list:

 [slow]spitfire[cstringio]: Spitfire is a template language, the
 cstringio version uses a modified engine (that uses cstringio)

 spambayes: Spambayes is a Bayesian spam filter

 Why is [slow]spitfire slower with PyPy? Is it regex-related? I
 remember when, because of this, spambayes was slower (including
 release 1.3, now solved). But for spitfire, 1.3 was faster than 1.4
 and the head (for slowspitfire it's the opposite).

 For the rest, I see no significant case of slowdown of PyPy over time.
 http://speed.pypy.org/comparison/?exe=2%2B35,1%2B41,1%2B172,1%2BLben=1,2,25,3,4,5,22,6,7,8,23,24,9,10,11,12,13,14,15,16,17,18,19,20,26env=1hor=truebas=2%2B35chart=normal+bars
 --
 Paolo Giarrusso - Ph.D. Student
 http://www.informatik.uni-marburg.de/~pgiarrusso/



Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread Miquel Torres
Oh, btw., the normalized stacked bars now display a warning note
about their correctness, explaining that the results should be viewed
as weighted rather than normalized. It even includes a link to the
relevant paper. I hope that is enough for the strict statisticians
among us ;-)

See:
http://speed.pypy.org/comparison/?exe=1%2B172,3%2B172,1%2BL,3%2BLben=1,2,25,3,4,5,22,6,7,8,23,24,9,10,11,12,13,14,15,16,17,18,19,20env=1hor=truebas=2%2B35chart=stacked+bars

PS: there is a bug in the jqPlot plotting library when null values are
present. Trying to display PyPy 1.3 results for the newer go, pyflate
or raytrace benchmarks will create some nasty JS loops. It also
sometimes has problems with autoscaling the axes.





Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread Carl Friedrich Bolz
Hi Miquel,

On 12/13/2010 09:20 AM, Miquel Torres wrote:
 So the remaining descriptions are
 ai
 slowspitfire (what is the exact difference between the three spitfire 
 benches?)

The difference is the way the final string is built: using cStringIO, or 
other means. I guess Maciek knows the details.

 spectral-norm
 telco

Telco covers the essence of a telephone company billing application 
using decimal arithmetic: http://speleotrove.com/decimal/telco.html
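
(A rough sketch of the kind of arithmetic involved, using Python's decimal module — the rates and rounding below are illustrative, loosely modelled on the telco spec, not the benchmark's actual code:)

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_DOWN

def price_call(seconds, distance=False):
    # Per-second call rates (assumed values for illustration); prices are
    # rounded half-even to whole cents, and the tax is truncated to cents,
    # in the style of the telco billing benchmark.
    rate = Decimal("0.00894") if distance else Decimal("0.0013")
    price = (Decimal(seconds) * rate).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_EVEN)
    tax = (price * Decimal("0.0675")).quantize(
        Decimal("0.01"), rounding=ROUND_DOWN)
    return price + tax

print(price_call(120))  # -> 0.17
```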

Thanks for doing this,

Carl Friedrich


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread exarkun
On 08:20 am, tob...@googlemail.com wrote:
Thanks all for the input.
I've compiled a list based on your mails, the Unladen benchmarks page
(http://code.google.com/p/unladen-swallow/wiki/Benchmarks), and the
alioth descriptions. Here is an extract of the current speed.pypy.org
admin:

[snip]
twisted_iteration

Iterates a Twisted reactor as quickly as possible without doing any 
work.
twisted_names

Runs a DNS server with Twisted Names and then issues requests to it over 
loopback UDP.
twisted_pb

Runs a Perspective Broker server with a no-op method and invokes that 
method over loopback TCP with some strings, dictionaries, and tuples as 
arguments.


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread Miquel Torres
@Carl Friedrich & exarkun: thanks, I've added those.

only spectral-norm, slowspitfire and ai to go.

slowspitfire is described on the Unladen page as using Psyco, but that
doesn't make sense in our case, does it?





Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread Miquel Torres
Sorry, I meant the opposite. To recap, according to
http://code.google.com/p/unladen-swallow/wiki/Benchmarks,
spitfire: psyco
slowspitfire: pure python

In addition we have spitfire_cstringio, which uses a C module (so it
is even faster).

What is vanilla spitfire in our case?




Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread Miquel Torres
@Maciej: it doesn't make a lot of sense. Looking at this graph:
http://speed.pypy.org/comparison/?exe=2%2B35,4%2B35,1%2B172,3%2B172ben=11,14,15env=1hor=falsebas=nonechart=normal+bars

slowspitfire is much faster than the other two. Is that because it
performs more iterations?

Also, how come pypy-c-jit is faster than CPython or Psyco precisely in
cstringio, where performance should depend on cStringIO and thus
be more similar across interpreters?


2010/12/13 Leonardo Santagada santag...@gmail.com:
 why not have only 2 versions, both with the same size table and name
 one spitfire_cstringio and the other spitfire_strjoin? I think it
 would make things clearer.


 On Mon, Dec 13, 2010 at 2:43 PM, Maciej Fijalkowski fij...@gmail.com wrote:
 Hi.

 spitfires are confusing.

 slowspitfire and spitfire use ''.join(list-of-strings), where
 spitfire_cstringio uses cStringIO instead.

 spitfire and spitfire_cstringio use a smaller table to render (100x100,
 I think), which was the default in the original benchmarks.

 slowspitfire uses 1000x1000 (which is why it used to be slower than
 spitfire) and was chosen by the Unladen Swallow (US) guys to let the
 JIT warm up. We should remove _slow these days.




 --
 Leonardo Santagada



Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread Maciej Fijalkowski
On Mon, Dec 13, 2010 at 10:51 PM, Miquel Torres tob...@googlemail.com wrote:
 @Maciej: it doesn't make a lot of sense. Looking at this graph:
 http://speed.pypy.org/comparison/?exe=2%2B35,4%2B35,1%2B172,3%2B172ben=11,14,15env=1hor=falsebas=nonechart=normal+bars

 slowspitfire is much faster than the other two. Is that because it
 performs more iterations?

I think it's apples to oranges (they have different table sizes and a
different number of iterations).


 Also, how come pypy-c-jit is faster than cpython or psyco precisely in
 cstringio, where performance should be dependent on cstringIO and thus
 be more similar across interpreters?

Because having a list of small strings means you have a large (old)
object referencing a lot of young objects, hence GC cost. That's not the
case with cStringIO, where you have a single chunk of memory which does
not contain GC pointers.
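
(To make the contrast concrete, a sketch of the two string-building strategies being compared — `io.StringIO` stands in here for Python 2's cStringIO, and the cell contents are made up for illustration:)

```python
import io

def build_with_join(n):
    # Accumulates many small strings in a list: one large, long-lived
    # list holding references to many young objects, which the GC must
    # trace on every collection.
    parts = []
    for i in range(n):
        parts.append("<td>%d</td>" % i)
    return "".join(parts)

def build_with_stringio(n):
    # Writes into a single growable character buffer that contains no
    # GC pointers to trace.
    buf = io.StringIO()
    for i in range(n):
        buf.write("<td>%d</td>" % i)
    return buf.getvalue()

print(build_with_join(3) == build_with_stringio(3))  # -> True
```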



 2010/12/13 Leonardo Santagada santag...@gmail.com:
 why not have only 2 versions, both with the same size table and name
 one spitfire_cstringio and the other spitfire_strjoin? I think it
 would make things clearer.


 On Mon, Dec 13, 2010 at 2:43 PM, Maciej Fijalkowski fij...@gmail.com wrote:
 Hi.

 spitfires are confusing.

 slowspitfire and spitfire use ''.join(list-of-strings) where
 spitfire_cstringio uses cStringIO instead.

 spitfire and spitfire_cstringio use smaller table to render (100x100 I
 think) which was the default on original benchmarks

 slowspitfire uses 1000x1000 (which is why it used to be slower than
 spitfire) and was chosen by US guys to let the JIT warm up. We should
 remove _slow these days.

 On Mon, Dec 13, 2010 at 5:08 PM, Miquel Torres tob...@googlemail.com 
 wrote:
 sorry, I meant the opposite. To recap, according to
 http://code.google.com/p/unladen-swallow/wiki/Benchmarks,
 spitfire: psyco
 slowspitfire: pure python

 in addition we have spitfire_cstringio, which uses a c module (so it
 is even faster).

 what is vanilla spitfire in our case?


 2010/12/13 Miquel Torres tob...@googlemail.com:
 @Carl Friedrich  exarkun: thanks, I've added those.

 only spectral-norm, slowspitfire and ai to go.

 slowspitfire is described at the Unladen page as using psyco, but it
 doesn't make sense in our case?



 2010/12/13  exar...@twistedmatrix.com:
 On 08:20 am, tob...@googlemail.com wrote:

 Thanks all for the input.
 I've compiled a list based on your mails, the Unladen benchmarks page
 (http://code.google.com/p/unladen-swallow/wiki/Benchmarks), and the
 alioth descriptions. Here is an extract of the current speed.pypy.org
 admin:

 [snip]
 twisted_iteration

 Iterates a Twisted reactor as quickly as possible without doing any work.

 twisted_names

 Runs a DNS server with Twisted Names and then issues requests to it over
 loopback UDP.

 twisted_pb

 Runs a Perspective Broker server with a no-op method and invokes that 
 method
 over loopback TCP with some strings, dictionaries, and tuples as 
 arguments.


 ___
 pypy-dev@codespeak.net
 http://codespeak.net/mailman/listinfo/pypy-dev




 --
 Leonardo Santagada



Re: [pypy-dev] Idea for speed.pypy.org

2010-12-13 Thread Maciej Fijalkowski
Hey Miquel, didn't we lose colors somehow?

On Tue, Dec 14, 2010 at 8:32 AM, Maciej Fijalkowski fij...@gmail.com wrote:
 On Mon, Dec 13, 2010 at 10:51 PM, Miquel Torres tob...@googlemail.com wrote:
 @Maciej: it doesn't make a lot of sense. Looking at this graph:
 http://speed.pypy.org/comparison/?exe=2%2B35,4%2B35,1%2B172,3%2B172ben=11,14,15env=1hor=falsebas=nonechart=normal+bars

 slowspitfire is much faster than the other two. Is that because it
 performs more iterations?

 I think it's apples to oranges (they have different table sizes and
 different number of iterations)


 Also, how come pypy-c-jit is faster than cpython or psyco precisely in
 cstringio, where performance should be dependent on cstringIO and thus
 be more similar across interpreters?

 because having a list of small strings means you have a large (old)
 object referencing a lot of young objects, hence GC cost. It's not the
 case with cstringio where you have a single chunk of memory which does
 not contain GC pointers.



 [snip]

Re: [pypy-dev] Idea for speed.pypy.org

2010-12-09 Thread Leonardo Santagada
Here is an incomplete draft list:

ai:

chaos: creates chaosgame-like fractals

fannkuch: 
http://shootout.alioth.debian.org/u64/performance.php?test=fannkuchredux

float: (this is just from looking at it) creates an array of points
using circular projection and then normalizes and maximizes them.

html5:

meteor-contest:
http://shootout.alioth.debian.org/u64/performance.php?test=meteor

nbody_modified: http://shootout.alioth.debian.org/u64/performance.php?test=nbody

richards: Martin Richards benchmark, implemented in many languages

rietveld: A Django application benchmark

[slow]spitfire[cstringio]: Spitfire is a template language; the
cstringio versions use a modified engine (one that uses cStringIO)

spambayes: SpamBayes is a Bayesian spam filter

telco:

go: A Go (chess-like board game) computer player AI

pyflate-fast: Stand-alone pure-Python DEFLATE (gzip) and bzip2
decoder/decompressor.

raytrace: A raytracing renderer

crypto_pyaes: A pure python implementation of AES

waf: Waf is a Python-based framework for configuring, compiling and
installing applications. It derives from the concepts of other build
tools such as Scons, Autotools, CMake or Ant.
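To make one of the entries above concrete, the kernel of fannkuch can be sketched like this (a simplified version; the shootout programs generate permutations in place and are considerably more elaborate):

```python
from itertools import permutations


def fannkuch(n):
    # For each permutation of 1..n, repeatedly reverse ("flip") the
    # prefix whose length equals the first element, until a 1 comes to
    # the front; report the maximum flip count over all permutations.
    max_flips = 0
    for perm in permutations(range(1, n + 1)):
        p = list(perm)
        flips = 0
        while p[0] != 1:
            k = p[0]
            p[:k] = p[k - 1::-1]  # flip the first k elements
            flips += 1
        max_flips = max(max_flips, flips)
    return max_flips
```

For example, fannkuch(7) returns 16, matching the published Pfannkuchen values; the brute-force enumeration is exactly the kind of tight integer loop that JITs like.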

On Wed, Dec 8, 2010 at 4:41 PM, Miquel Torres tob...@googlemail.com wrote:
 Anyone want to help in creating that benchmark description list? Just
 reply to this mail so that everyone can review what the benchmarks
 do.




-- 
Leonardo Santagada


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-09 Thread Jacob Hallén
Wednesday 08 December 2010 you wrote:
 On Wed, Dec 8, 2010 at 8:41 PM, Miquel Torres tob...@googlemail.com wrote:
  Hey, I was serious when I said I would improve on benchmarks info!
  
  Anyone want to help in creating that benchmark description list? Just
  reply to this mail so that everyone can review what the benchmarks
  do.
 
 Hey.
 
 I can do some stuff. I was serious about documenting why we're
 slow/fast on benchmarks - that maybe we should bring down our docs to
 a manageable number first :) Benchmark descriptions are, however,
 unlikely to change.

Just having the descriptions would be great. Why we are slow/fast was a 
wishlist item, and I can see that it is more difficult to provide this 
information and keep it up to date.

Providing Miquel with the information can be done in the simplest way, for 
instance by mailing texts to this list (so that people can see which ones have 
been produced).

Jacob



Re: [pypy-dev] Idea for speed.pypy.org

2010-12-09 Thread Jacob Hallén
Extracted from what exarkun said on the IRC channel.

twisted-tcp:

Connects one Twisted client to one Twisted server over TCP (on the loopback 
interface) and then writes bytes as fast as it can.
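As a rough illustration of the traffic pattern (plain stdlib sockets only; the real benchmark drives the connection through Twisted's reactor and protocol machinery):

```python
import socket
import threading


def loopback_throughput(total_bytes=1_000_000, chunk_size=16384):
    """Write bytes over a loopback TCP connection as fast as possible
    and count how many arrive on the other side."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # let the OS pick a free port
    server.listen(1)

    received = 0

    def reader():
        nonlocal received
        conn, _ = server.accept()
        while True:
            data = conn.recv(65536)
            if not data:  # client closed the connection
                break
            received += len(data)
        conn.close()

    t = threading.Thread(target=reader)
    t.start()

    client = socket.create_connection(server.getsockname())
    payload = b"x" * chunk_size
    sent = 0
    while sent < total_bytes:
        client.sendall(payload)
        sent += chunk_size
    client.close()
    t.join()
    server.close()
    return sent, received
```

Timing such a run gives a bytes-per-second figure; the benchmark's interest is how much interpreter overhead (buffer copies, object allocation) sits between the loop and the `send`/`recv` system calls.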




Re: [pypy-dev] Idea for speed.pypy.org

2010-12-09 Thread René Dudfield
Hello,

Why is this Twisted TCP benchmark slow in PyPy?  Does memory management play
a role?  Or perhaps the interface to the system calls?  E.g., is there no
event I/O being used by the Twisted version?

I think in CPython it would be using a C binding, and not allocating any
memory, as it would all be coming from memory pools.  It's probably reusing
the same pieces of memory too.

cu.

On Thu, Dec 9, 2010 at 2:30 PM, Jacob Hallén ja...@openend.se wrote:

 Extracted from what exarkun said on the IRC channel.

 twisted-tcp:

 Connects one Twisted client to one Twisted server over TCP (on the loopback
 interface) and then writes bytes as fast as it can.




Re: [pypy-dev] Idea for speed.pypy.org

2010-12-09 Thread Paolo Giarrusso
On Thu, Dec 9, 2010 at 14:14, Leonardo Santagada santag...@gmail.com wrote:
 Here is a incomplete draft list:

 [slow]spitfire[cstringio]: Spitfire is a template language, the
 cstringio version uses a modified engine (that uses cstringio)

 spambayes: Spambayes is a bayesian spam filter

Why is [slow]spitfire slower with PyPy? Is it regex-related? I
remember that spambayes used to be slower because of this (including in
release 1.3; now solved). But for spitfire, 1.3 was faster than 1.4
and head (for slowspitfire it's the opposite).

For the rest, I see no significant case of slowdown of PyPy over time.
http://speed.pypy.org/comparison/?exe=2%2B35,1%2B41,1%2B172,1%2BLben=1,2,25,3,4,5,22,6,7,8,23,24,9,10,11,12,13,14,15,16,17,18,19,20,26env=1hor=truebas=2%2B35chart=normal+bars
-- 
Paolo Giarrusso - Ph.D. Student
http://www.informatik.uni-marburg.de/~pgiarrusso/


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-08 Thread Miquel Torres
Hey, I was serious when I said I would improve on benchmarks info!

Anyone want to help in creating that benchmark description list? Just
reply to this mail so that everyone can review what the benchmarks
do.


2010/12/4 Maciej Fijalkowski fij...@gmail.com:
 [snip]



Re: [pypy-dev] Idea for speed.pypy.org

2010-12-08 Thread Maciej Fijalkowski
On Wed, Dec 8, 2010 at 8:41 PM, Miquel Torres tob...@googlemail.com wrote:
 Hey, I was serious when I said I would improve on benchmarks info!

 Anyone want to help in creating that benchmark description list? Just
 reply to this mail so that everyone can review what the benchmarks
 do.


Hey.

I can do some stuff. I was serious about documenting why we're
slow/fast on benchmarks - it's just that maybe we should bring down our
docs to a manageable number first :) Benchmark descriptions are, however,
unlikely to change.


 [snip]



Re: [pypy-dev] Idea for speed.pypy.org

2010-12-04 Thread Laura Creighton
re: keeping the 'why we are slower/what we could do to fix it' info up
to date -- one possibility is to make a 'why we were/what we are for
release 1.4.'  Then every time you make a major release, you update
those fields as needed.  And if major changes happen between 'what
was in the last major release' vs 'what's on trunk now' you can even
make a note of it -- 'fixed in rev whatever it was, when we merged in
whoever it was' branch, see blog post over here'.  And if you forget,
well, you will catch it when the next major release comes out.

Just an idea.

Laura



Re: [pypy-dev] Idea for speed.pypy.org

2010-12-04 Thread Maciej Fijalkowski
On Sat, Dec 4, 2010 at 1:05 PM, Laura Creighton l...@openend.se wrote:
 [snip]


I think the general idea of 'we know what's wrong' vs 'we have no clue'
is good. However, I would like to maybe decide what we do with tons of
docs that we already have first :)

[pypy-dev] Idea for speed.pypy.org

2010-12-03 Thread Jacob Hallén
Something that would be nice on the speed.pypy.org site would be that each 
benchmark links to a page that describes the benchmark:

- What the benchmark is designed to measure
- Why PyPy performs better/worse than CPython (if known)
- What might be done to further improve performance (if there are ideas)
- Link to the source code of the benchmark

Any takers?

Jacob



Re: [pypy-dev] Idea for speed.pypy.org

2010-12-03 Thread exarkun
On 3 Dec, 11:42 pm, ja...@openend.se wrote:
[snip]

There's a column in the database where descriptions can be added - this 
could either be the full description you're talking about, or a link to 
a page with more information.  The description is rendered on hover, and 
below the large version of the graph (if you have a recent enough 
codespeed).

So it may just be a matter of entering some text for each benchmark (as 
opposed to making changes to the codespeed software).

Jean-Paul


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-03 Thread Maciej Fijalkowski
 - Why PyPy performs better/worse than CPython (if known)

I'm a bit worried about keeping this up to date


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-03 Thread Andy
The one thing that would be most helpful is if each test showed the 
memory usage comparison between PyPy-JIT and CPython.


--- On Fri, 12/3/10, Jacob Hallén ja...@openend.se wrote:

 [snip]


  


Re: [pypy-dev] Idea for speed.pypy.org

2010-12-03 Thread Miquel Torres
Hi Jacob,

benchmark descriptions is certainly something that could be improved upon.

As exarkun said, a quick way to get *something* is to just fill out
the description fields for each benchmark. I can happily do that, I
just need a mail containing a list with brief descriptions for all the
benchmarks ;-)

I can even imagine improving it further so that there is a link to
source code (your point 4) or even an info page dedicated to each
benchmark.

Points 2 and 3, as Maciej said, are problematic. Reasons why it
performs better or worse than CPython, and how it could be improved,
may change fairly often for some benchmarks, which
would be a pain to maintain. And speed.pypy.org doesn't have 3 or 4
benchmarks, it has 25 right now!

@Andy: memory consumption is difficult to measure. I think someone was
looking at how to best measure it?

Cheers,
Miquel


2010/12/4 Andy angelf...@yahoo.com:
 The one thing that would be most helpful is if each test showed the
 memory usage comparison between PyPy-JIT and CPython.


 [snip]
