Author: David Schneider <david.schnei...@picle.org> Branch: extradoc Changeset: r4361:2d01ba83b98b Date: 2012-07-25 15:26 +0200 http://bitbucket.org/pypy/extradoc/changeset/2d01ba83b98b/
Log: start explaining the contents of the tables

diff --git a/talk/vmil2012/paper.tex b/talk/vmil2012/paper.tex
--- a/talk/vmil2012/paper.tex
+++ b/talk/vmil2012/paper.tex
@@ -455,17 +455,55 @@
 \section{Evaluation}
 \label{sec:evaluation}
+The following analysis is based on a selection of benchmarks taken from the set
+of benchmarks used to measure the performance of
+PyPy\footnote{http://speed.pypy.org/}. The selection is based on the following
+criteria \bivab{??}. The benchmarks were taken from the PyPy benchmarks
+repository at revision
+\texttt{ff7b35837d0f}\footnote{https://bitbucket.org/pypy/benchmarks/src/ff7b35837d0f}.
+The benchmarks were run on a version of PyPy based on the
+tag~\texttt{release-1.9}, patched to collect additional data about the guards
+in the machine code
+backends\footnote{https://bitbucket.org/pypy/pypy/src/release-1.9}. All
+benchmark data was collected on a 64-bit MacBook Pro running Mac OS X
+10.7.4 \bivab{do we need more data for this kind of benchmarks} with the loop
+unrolling optimization disabled\bivab{rationale?}.
+
+Figure~\ref{fig:ops_count} shows the total number of operations recorded
+during tracing for each of the benchmarks and what percentage of these are
+guards. Figure~\ref{fig:ops_count} also shows the number of operations left
+after the different optimizations performed by the trace optimizer, such as
+xxx. The last columns show the overall optimization rate and the optimization
+rate specific to guard operations, i.e., what percentage of the operations was
+removed during the optimization phase.
+
 \begin{figure*}
 \include{figures/benchmarks_table}
 \caption{Benchmark Results}
 \label{fig:ops_count}
 \end{figure*}
+
+\bivab{should we rather count the trampolines as part of the guard data instead
+of counting them as part of the instructions}
+
+Figure~\ref{fig:backend_data} shows the total memory consumption of the code
+and of the data generated by the machine code backend for the different
+benchmarks mentioned above. That is, once compiled, the operations left after
+optimization occupy the space shown in Figure~\ref{fig:backend_data}, together
+with the additional data stored for the guards, which is used in case of a
+bailout and when attaching a bridge.
 \begin{figure*}
 \include{figures/backend_table}
 \caption{Total size of generated machine code and guard data}
 \label{fig:backend_data}
 \end{figure*}
+Neither figure takes garbage collection into account. Pieces of machine code
+can be globally invalidated or simply become cold again; in both cases the
+generated machine code and the related data are garbage collected. The figures
+show the total number of operations that are evaluated by the JIT and the
+total amount of code and data that is generated from the optimized traces.
+
 * Evaluation
 * Measure guard memory consumption and machine code size
 * Extrapolate memory consumption for other guard encodings
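The optimization-rate columns described in the patch above could be computed as sketched below. This is a hypothetical illustration, not part of PyPy's actual measurement tooling; the function name and all numbers are invented.

```python
# Hypothetical sketch of the "optimization rate" columns: the percentage of
# operations (overall, and guards specifically) removed by the trace
# optimizer. Not taken from the PyPy code base; numbers are made up.

def optimization_rate(before: int, after: int) -> float:
    """Percentage of operations removed during the optimization phase."""
    if before == 0:
        return 0.0
    return 100.0 * (before - after) / before

# Example: a trace with 2000 recorded operations, 400 of them guards;
# optimization leaves 800 operations, 120 of them guards.
overall_rate = optimization_rate(2000, 800)  # 60.0% of operations removed
guard_rate = optimization_rate(400, 120)     # 70.0% of guards removed
```

The guard-specific rate is computed over the guard subset only, so it can differ substantially from the overall rate, which is what the last two columns of the benchmark table are meant to show.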