Author: Armin Rigo <ar...@tunes.org>
Branch: extradoc
Changeset: r612:6aa41e28f863
Date: 2015-06-01 12:12 +0200
http://bitbucket.org/pypy/pypy.org/changeset/6aa41e28f863/

Log:    Put a link to vmprof.

diff --git a/performance.html b/performance.html
--- a/performance.html
+++ b/performance.html
@@ -72,17 +72,32 @@
 <div class="contents topic" id="contents">
 <p class="topic-title first">Contents</p>
 <ul class="simple">
-<li><a class="reference internal" href="#optimization-strategy" id="id1">Optimization strategy</a></li>
-<li><a class="reference internal" href="#micro-tuning-tips" id="id2">Micro-tuning tips</a></li>
-<li><a class="reference internal" href="#insider-s-point-of-view" id="id3">Insider's point of view</a></li>
+<li><a class="reference internal" href="#profiling-vmprof" id="id1">Profiling: vmprof</a></li>
+<li><a class="reference internal" href="#optimization-strategy" id="id2">Optimization strategy</a></li>
+<li><a class="reference internal" href="#micro-tuning-tips" id="id3">Micro-tuning tips</a></li>
+<li><a class="reference internal" href="#insider-s-point-of-view" id="id4">Insider's point of view</a></li>
 </ul>
 </div>
 <p>This document collects strategies, tactics and tricks for making your
 code run faster under PyPy.  Many of these are also useful hints for
 stock Python and other languages.  For contrast, we also describe some
 CPython (stock Python) optimizations that are not needed in PyPy.</p>
+<hr class="docutils" />
+<div class="section" id="profiling-vmprof">
+<span id="profiling"></span><span id="profiler"></span><h1><a class="toc-backref" href="#id1">Profiling: vmprof</a></h1>
+<p>As a general rule, when considering performance issues, follow these
+three points: first <em>measure</em> them (it is counter-productive to fight
+imaginary performance issues); then <em>profile</em> your code (it is useless
+to optimize the wrong parts).  Only then optimize.</p>
+<p>PyPy 2.6 introduced <a class="reference external" href="https://vmprof.readthedocs.org/">vmprof</a>, a very-low-overhead statistical profiler.
+The standard, non-statistical <tt class="docutils literal">cProfile</tt> is also supported, and can be
+enabled without turning off the JIT.  We do recommend vmprof anyway
+because turning on cProfile can distort the result (sometimes massively,
+though hopefully this should not be too common).</p>
+</div>
+<hr class="docutils" />
 <div class="section" id="optimization-strategy">
-<h1><a class="toc-backref" href="#id1">Optimization strategy</a></h1>
+<h1><a class="toc-backref" href="#id2">Optimization strategy</a></h1>
 <p>These suggestions apply to all computer languages.  They're here as
 reminders of things to try before any Python or PyPy-specific tweaking.</p>
 <div class="section" id="build-a-regression-test-suite">
@@ -95,7 +110,7 @@
 <div class="section" id="measure-don-t-guess">
 <h2>Measure, don't guess</h2>
 <p>Human beings are bad at guessing or intuiting where the hotspots in code are.
-Measure, don't guess; use a profiler to pin down the 20% of the
+Measure, don't guess; use a <a class="reference internal" href="#profiler">profiler</a> to pin down the 20% of the
 code where the code is spending 80% of its time, then speed-tune that.</p>
 <p>Measuring will save you a lot of effort wasted on tuning parts of the code
 that aren't actually bottlenecks.</p>
@@ -109,7 +124,7 @@
 bound (slow because of disk or network delays).</p>
 <p>Expect to get most of your gains from optimizing compute-bound code.
 It's usually (though not always) a sign that you're near the end of
-worthwhile tuning when profiling shows that the bulk of the
+worthwhile tuning when <a class="reference internal" href="#profiling">profiling</a> shows that the bulk of the
 application's time is spent on network and disk I/O.</p>
 </div>
 <div class="section" id="tune-your-algorithms-first">
@@ -160,8 +175,9 @@
 function of your regression test suite can be as a speed benchmark.</p>
 </div>
 </div>
+<hr class="docutils" />
 <div class="section" id="micro-tuning-tips">
-<h1><a class="toc-backref" href="#id2">Micro-tuning tips</a></h1>
+<h1><a class="toc-backref" href="#id3">Micro-tuning tips</a></h1>
 <p>These are in no particular order.</p>
 <div class="section" id="keep-it-simple">
 <h2>Keep it simple</h2>
@@ -270,8 +286,9 @@
 <p><em>(Thanks Eric S. Raymond for the text above)</em></p>
 </div>
 </div>
+<hr class="docutils" />
 <div class="section" id="insider-s-point-of-view">
-<h1><a class="toc-backref" href="#id3">Insider's point of view</a></h1>
+<h1><a class="toc-backref" href="#id4">Insider's point of view</a></h1>
 <p>This section describes performance issues from the point of view of
 insiders of the project; it should be particularly interesting if you
 plan to contribute in that area.</p>
diff --git a/source/performance.txt b/source/performance.txt
--- a/source/performance.txt
+++ b/source/performance.txt
@@ -11,6 +11,31 @@
 stock Python and other languages.  For contrast, we also describe some
 CPython (stock Python) optimizations that are not needed in PyPy.
 
+
+=================
+
+.. _profiler:
+.. _profiling:
+
+Profiling: vmprof
+=================
+
+As a general rule, when considering performance issues, follow these
+three points: first *measure* them (it is counter-productive to fight
+imaginary performance issues); then *profile* your code (it is useless
+to optimize the wrong parts).  Only then optimize.
+
+PyPy 2.6 introduced vmprof_, a very-low-overhead statistical profiler.
+The standard, non-statistical ``cProfile`` is also supported, and can be
+enabled without turning off the JIT.  We do recommend vmprof anyway
+because turning on cProfile can distort the result (sometimes massively,
+though hopefully this should not be too common).
+
+.. _vmprof: https://vmprof.readthedocs.org/
+
+
+=====================
+
 Optimization strategy
 =====================
 
@@ -29,7 +54,7 @@
 --------------------
 
 Human beings are bad at guessing or intuiting where the hotspots in code are.
-Measure, don't guess; use a profiler to pin down the 20% of the 
+Measure, don't guess; use a profiler_ to pin down the 20% of the 
 code where the code is spending 80% of its time, then speed-tune that.
 
 Measuring will save you a lot of effort wasted on tuning parts of the code
@@ -47,7 +72,7 @@
 
 Expect to get most of your gains from optimizing compute-bound code.
 It's usually (though not always) a sign that you're near the end of
-worthwhile tuning when profiling shows that the bulk of the
+worthwhile tuning when profiling_ shows that the bulk of the
 application's time is spent on network and disk I/O.
 
 Tune your algorithms first
@@ -107,6 +132,9 @@
 which takes us right back to "Measure, don't guess".  And another
 function of your regression test suite can be as a speed benchmark.
 
+
+=================
+
 Micro-tuning tips
 =================
 
@@ -239,6 +267,8 @@
 *(Thanks Eric S. Raymond for the text above)*
 
 
+=======================
+
 Insider's point of view
 =======================
 