Author: Carl Friedrich Bolz <[email protected]>
Branch: extradoc
Changeset: r3653:896af7f41eed
Date: 2011-06-12 22:25 +0200
http://bitbucket.org/pypy/extradoc/changeset/896af7f41eed/

Log:    various fixes/rewrites

diff --git a/talk/iwtc11/paper.tex b/talk/iwtc11/paper.tex
--- a/talk/iwtc11/paper.tex
+++ b/talk/iwtc11/paper.tex
@@ -219,6 +219,7 @@
 
 
 \begin{figure}
+XXX the code for is\_positive is missing everywhere
 \begin{lstlisting}[mathescape,numbers = right,basicstyle=\setstretch{1.05}\ttfamily\scriptsize]
 $l_0$($p_{0}$, $p_{1}$):
 # inside f: y = y.add(step)
@@ -305,20 +306,16 @@
 followed by a set of other optimizations and how they interact with
 loop peeling.
 
-\subsection{Loop peeling}
-Loop peeling is achieved by inlining the trace at the end of
-itself. The input arguments of the second iteration are replaced with
-the jump arguments of the first iterations and then the arguments of all
-the operations are updated to operate on the new input arguments. To
-keep the single-assignment form new variables has to be introduced as
-the results of all the operations. The first iteration of the loop
-will end with a jump to the second iteration of the loop while the
-second iteration will end with a jump to itself. This way the first
-copy of the trace only be executed once while the second copy will be
-used for every other iteration. The rationality here is that the
-optimizations below typically will be able to optimize the second copy
-more efficiently than the first. The trace from Figure~\ref{fig:unopt-trace} would
-after this operation become the trace in Figure~\ref{fig:peeled-trace}.
+\subsection{Loop Peeling}
+
+XXX find reference
+
+Loop peeling is achieved by prefixing the loop with one iteration of itself. The
+peeled-off iteration of the loop ends with a jump to the full loop, which
+ends with a jump to itself. This way the peeled-off iteration is only
+executed once while the second copy is used for every further iteration.
+After this transformation the trace from Figure~\ref{fig:unopt-trace} becomes
+the trace in Figure~\ref{fig:peeled-trace}.
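At the source level the transformation can be pictured with a small, purely illustrative Python analogy (the function names and the example are hypothetical, not taken from the paper):

```python
# Source-level analogy of loop peeling: one iteration is unrolled in
# front of the loop.  The peeled-off copy runs exactly once; the loop
# that follows handles every further iteration.

def sum_squares(xs):
    total = 0
    for x in xs:                  # original loop
        total += x * x
    return total

def sum_squares_peeled(xs):
    total = 0
    if xs:
        total += xs[0] * xs[0]    # peeled-off iteration, executed once
        for x in xs[1:]:          # full loop, every further iteration
            total += x * x
    return total
```

Both versions compute the same result; the point of the duplication is that the optimizations discussed below can treat the second copy more aggressively than the first.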
 
 \begin{figure}
 \begin{lstlisting}[mathescape,numbers = right,basicstyle=\setstretch{1.05}\ttfamily\scriptsize]
@@ -354,23 +351,23 @@
 \label{fig:peeled-trace}
 \end{figure}
 
-When applying the following optimizations to this two iteration trace
+When applying the following optimizations to this two-iteration trace
 some care has to be taken as to how the jump arguments of both
 iterations and the input arguments of the second iteration are
 treated. It has to be ensured that the second iteration stays a proper
-trace in the sens that the operations within it only operations on
+trace in the sense that the operations within it only operate on
 variables that are either among the input arguments of the second iteration
 or are produced within the second iteration. To ensure this we need
 to introduce a bit of formalism. 
 
 The original trace (prior to peeling) consists of three parts. 
 A vector of input
-variables, $I=\left(I_1, I_2, \cdots, I_{|I|}\right)$, a list of non
+variables, $I=\left(I_1, I_2, \cdots, I_{|I|}\right)$, a list of non-
 jump operations and a single
 jump operation. The jump operation contains a vector of jump variables,
 $J=\left(J_1, J_2, \cdots, J_{|J|}\right)$, that are passed as the input variables of the target loop. After
 loop peeling there will be a second copy of this trace with input
-variables equal to the jump arguments of the first copy, $J$, and jump
+variables equal to the jump arguments of the peeled copy, $J$, and jump
 arguments $K$. Looking back at our example we have
 \begin{equation}
   %\left\{
@@ -383,24 +380,15 @@
   .
 \end{equation}
 To construct the second iteration from the first we also need a
-function, $m$, mapping the variables of the first iteration onto the
+function $m$, mapping the variables of the first iteration onto the
 variables of the second. This function is constructed during the
 inlining. It is initialized by mapping the input arguments, $I$, to
 the jump arguments $J$,
 \begin{equation}
   m\left(I_i\right) = J_i \ \text{for}\ i = 1, 2, \cdots |I| .
 \end{equation}
-In the example that means (XXX which notation do we prefer?)
-\begin{equation}
-  m(v) = 
-  \left\{
-    \begin{array}{lcl}
-      p_0 &\text{if}& v=p_0 \\
-      p_5 &\text{if}& v=p_1 \\
-    \end{array}
-  \right.
-  .
-\end{equation}
+In the example that means:
+
 \begin{equation}
   %\left\{
     \begin{array}{lcl}
@@ -410,18 +398,18 @@
   %\right.
   .
 \end{equation}
-Each operation in the trace is inlined in the order they are
-executed. To inline an operation with argument vector 
-$A=\left(A_1, A_2, \cdots, A_{|A|}\right)$ producing the variable $v$
+
+Each operation in the trace is inlined in order.
+To inline an operation $v=op\left(A_1, A_2, \cdots, A_{|A|}\right)$
 a new variable, $\hat v$, is introduced. The inlined operation will
 produce $\hat v$ from the input arguments 
 \begin{equation}
-  \left(m\left(A_1\right), m\left(A_2\right), 
+  \hat v = op\left(m\left(A_1\right), m\left(A_2\right), 
     \cdots, m\left(A_{|A|}\right)\right) . 
 \end{equation}
 Before the
-next operation is inlined, $m$ is extend by making $m\left(v\right) = \hat
-v$. After all the operations in the example have been inlined we have
+next operation is inlined, $m$ is extended by assigning $m\left(v\right) = \hat
+v$. For the example above, after all the operations have been inlined we have
 \begin{equation}
   %\left\{
     \begin{array}{lcl}
@@ -436,13 +424,14 @@
   .
 \end{equation}
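The inlining procedure just described can be sketched in Python, assuming a simplified trace format of (result, opname, args) triples; this is not PyPy's actual IR:

```python
# Sketch of constructing the second iteration from the first.  Trace
# operations are hypothetical (result, opname, args) triples.

def inline_second_iteration(input_args, ops, jump_args, make_fresh):
    # m is initialized with m(I_i) = J_i
    m = dict(zip(input_args, jump_args))
    inlined = []
    for v, opname, args in ops:
        v_hat = make_fresh()                  # fresh result variable
        inlined.append((v_hat, opname, [m.get(a, a) for a in args]))
        m[v] = v_hat                          # extend m before the next op
    # the new jump arguments K are the images of J under m
    return inlined, [m.get(a, a) for a in jump_args], m

fresh = iter(['i4', 'i5'])
ops = [('i2', 'int_add', ['i0', 'i1']),
       ('i3', 'int_mul', ['i2', 'i0'])]
inlined, K, m = inline_second_iteration(
    ['i0', 'i1'], ops, ['i0', 'i2'], lambda: next(fresh))
# inlined == [('i4', 'int_add', ['i0', 'i2']),
#             ('i5', 'int_mul', ['i4', 'i0'])]
```

Note that every inlined operation gets a fresh result variable, preserving the single-assignment property of the trace.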
 
-\subsection{Redundant guard removal}
+\subsection{Redundant Guard Removal}
+
 No special care needs to be taken when implementing redundant
-guard removal together with loop peeling. However the the guards from
+guard removal together with loop peeling. The guards from
 the first iteration might make the guards of the second iteration
-redundant and thus removed. So the net effect of combining redundant
-guard removal with loop peeling is that guards are moved out of the
-loop. The second iteraton of the example reduces to
+redundant, allowing them to be removed. Therefore the net effect of combining redundant
+guard removal with loop peeling is that loop-invariant guards are moved out of the
+loop. The second iteration of the example reduces to
 
 \begin{lstlisting}[mathescape,numbers = right,basicstyle=\setstretch{1.05}\ttfamily\scriptsize]
 $l_1$($p_{0}$, $p_{5}$):
@@ -463,13 +452,12 @@
 guard on $p_0$ on line 20 can be removed since it is identical to the
 guard on line 6.
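A minimal sketch of this optimization, again assuming a hypothetical (result, opname, args) trace format (a real guard also carries a failure path, omitted here, and the class name is only illustrative):

```python
# Sketch: drop guards identical to a guard seen earlier in the linear
# trace.  After peeling, a guard established in the peeled-off
# iteration makes the identical guard in the loop body redundant.

def remove_redundant_guards(ops):
    seen = set()
    kept = []
    for res, opname, args in ops:
        if opname.startswith('guard'):
            key = (opname, tuple(args))
            if key in seen:       # identical guard already holds here
                continue
            seen.add(key)
        kept.append((res, opname, args))
    return kept

ops = [(None, 'guard_class', ['p0', 'W_IntObject']),  # peeled iteration
       ('i2', 'int_add', ['i0', 'i1']),
       (None, 'guard_class', ['p0', 'W_IntObject'])]  # second iteration
```

For this trace only the first guard survives, mirroring how the guard on line 20 is removed in favor of the one on line 6.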
 
-\subsection{Heap caching}
+\subsection{Heap Caching}
 The objective of heap caching is to remove \lstinline{get} and
 \lstinline{set} operations whose results can be deduced from previous
 \lstinline{get} and \lstinline{set} operations. Exact details of the
-process are outside the scope of this paper We will here assume that
-it works perfectly and only consider the interactions with loop
-peeling.
+process are outside the scope of this paper. We only consider the interaction
+with loop peeling.
 
 The issue at hand is to keep the second iteration a proper
 trace. Consider the \lstinline{get} operation on line 19 of
@@ -481,18 +469,18 @@
 replace $i_6$ with $i_4$ and $i_7$ with $i_3$. 
 
 After that, the second
-iteration will no longer be proper as it operates on $i_3$ and $i_4$
+iteration will no longer be in SSA form as it operates on $i_3$ and $i_4$
 which are not part of it. The solution is to extend the input
 arguments, $J$, with those two variables. This will also extend the
 jump arguments of the first iteration, which is also $J$. 
 Implicitly that also extends the jump arguments of the second iteration, $K$,
-since they are the inlined versions of $J$. That is the, $I$ has to
+since they are the inlined versions of $J$. For the example, $I$ has to
 be replaced by $\hat I$ which is formed as a concatenation of $I$ and
 $\left(i_3, i_4\right)$. At the same time $K$ has to be replaced by
 $\hat K$ which is formed as a concatenation of $K$ and 
 $\left(m\left(i_3\right), m\left(i_4\right)\right) = \left(i_7, i_8\right)$. 
 The variable $i_7$ will then be replaced by $i_3$ by the heap caching
-algorithm as it has removed the variable $i_7$. XXX: Maybe we should
+optimization as it has removed the variable $i_7$. XXX: Maybe we should
 replace $i_7=$get(...) with $i_7=i_3$ instead of removing it?
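The extension of the argument vectors can be sketched as follows, reusing the variable names of the example; the helper itself is a hypothetical illustration, not PyPy code:

```python
# Sketch: when the optimized second iteration uses variables produced
# in the first (here i3 and i4), extend the input vector I with them
# and the jump vector K with their images under the renaming m.

def extend_arguments(I, K, m, escaping):
    """Return I-hat and K-hat as described in the text."""
    I_hat = list(I) + list(escaping)
    K_hat = list(K) + [m[v] for v in escaping]
    return I_hat, K_hat

m = {'i3': 'i7', 'i4': 'i8'}          # renaming built during inlining
I_hat, K_hat = extend_arguments(['p0', 'p1'], ['p0', 'p9'],
                                m, ['i3', 'i4'])
# I_hat == ['p0', 'p1', 'i3', 'i4']
# K_hat == ['p0', 'p9', 'i7', 'i8']
```

As in the text, $i_7$ in $\hat K$ is subsequently replaced by $i_3$ once the heap-caching optimization removes the corresponding get.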
 
 In general, what is needed is for the heap optimizer to keep track of
@@ -507,7 +495,7 @@
   .
 \end{equation}
 In the optimized trace $I$ is replaced by $\hat I$ and $K$ by $\hat
-K$. The trace from Figure~\ref{fig:unopt-trace} will be optimized into
+K$. The trace from Figure~\ref{fig:unopt-trace} will be optimized to:
 
 \begin{lstlisting}[mathescape,numbers = right,basicstyle=\setstretch{1.05}\ttfamily\scriptsize]
 $l_0$($p_{0}$, $p_{1}$):
@@ -535,11 +523,11 @@
 jump($l_1$, $p_{0}$, $p_{9}$, $i_3$, $i_8$)
 \end{lstlisting}
 
-\subsection{Virtualization}
+\subsection{Allocation Removals}
 Using escape analysis we can XXX
 
 Let $\tilde J$ be all variables in $J$ not representing virtuals (in the
-same order). Extend it with all non virtual fields, $H_i$, of the
+same order). Extend it with all non-virtual fields, $H_i$, of the
 removed virtuals,
 \begin{equation}
   \hat J = \left(\tilde J_1, \tilde J_2, \cdots, \tilde J_{|\tilde J|}, 