[pypy-commit] extradoc extradoc: Add a table showing the percentage of guards that ever fail and the percentage of guards that fail more than 200 times

2012-08-09 Thread bivab
Author: David Schneider david.schnei...@picle.org
Branch: extradoc
Changeset: r4486:6ee6eb13d8bb
Date: 2012-08-09 11:32 +0200
http://bitbucket.org/pypy/extradoc/changeset/6ee6eb13d8bb/

Log: Add a table showing the percentage of guards that ever fail and the
percentage of guards that fail more than 200 times

diff --git a/talk/vmil2012/Makefile b/talk/vmil2012/Makefile
--- a/talk/vmil2012/Makefile
+++ b/talk/vmil2012/Makefile
@@ -1,5 +1,5 @@
 
-jit-guards.pdf: paper.tex paper.bib figures/log.tex figures/example.tex figures/benchmarks_table.tex figures/backend_table.tex figures/ops_count_table.tex figures/loop_bridge.pdf figures/guard_table.tex figures/resume_data_table.tex
+jit-guards.pdf: paper.tex paper.bib figures/log.tex figures/example.tex figures/benchmarks_table.tex figures/backend_table.tex figures/ops_count_table.tex figures/loop_bridge.pdf figures/guard_table.tex figures/resume_data_table.tex figures/failing_guards_table.tex
 	pdflatex paper
 	bibtex paper
 	pdflatex paper
@@ -18,7 +18,7 @@
 
 %.tex: %.py
 	pygmentize -l python -o $@ $<
 
-figures/%_table.tex: tool/build_tables.py logs/backend_summary.csv logs/summary.csv tool/table_template.tex logs/bridge_summary.csv logs/resume_summary.csv
+figures/%_table.tex: tool/build_tables.py logs/backend_summary.csv logs/summary.csv tool/table_template.tex logs/bridge_summary.csv logs/resume_summary.csv logs/guard_summary.json
 	tool/setup.sh
 	paper_env/bin/python tool/build_tables.py $@
 
diff --git a/talk/vmil2012/paper.tex b/talk/vmil2012/paper.tex
--- a/talk/vmil2012/paper.tex
+++ b/talk/vmil2012/paper.tex
@@ -632,6 +632,13 @@
 \label{fig:resume_data_sizes}
 \end{figure}
 
+\begin{figure}
+\include{figures/failing_guards_table}
+\caption{Failing guards}
+\label{fig:failing_guards}
+\end{figure}
+
+
 \todo{figure about failure counts of guards (histogram?)}
 \todo{add resume data sizes without sharing}
 \todo{add a footnote about why guards have a threshold of 100}
diff --git a/talk/vmil2012/tool/build_tables.py b/talk/vmil2012/tool/build_tables.py
--- a/talk/vmil2012/tool/build_tables.py
+++ b/talk/vmil2012/tool/build_tables.py
@@ -1,9 +1,10 @@
 from __future__ import division
 import csv
 import django
-from django.template import Template, Context
+import json
 import os
 import sys
+from django.template import Template, Context
 
 # This line is required for Django configuration
 django.conf.settings.configure()
@@ -15,6 +16,33 @@
     return [l for l in reader]
 
 
+def build_failing_guards_table(files, texfile, template):
+    BRIDGE_THRESHOLD = 200
+    assert len(files) == 2
+    with open(files[1]) as f:
+        failures = json.load(f)
+    for l in getlines(files[0]):
+        failures[l['bench']]['nguards'] = float(l['number of guards'])
+
+    table = []
+    head = ['Benchmark',
+            'failing guards',
+            'over %d failures' % BRIDGE_THRESHOLD]
+
+    for bench, info in failures.iteritems():
+        total = failures[bench]['nguards']
+        total_failures = len(info['results'])
+        bridges = len([k for k, v in info['results'].iteritems() \
+                           if v > BRIDGE_THRESHOLD])
+        res = [bench.replace('_', '\\_'),
+               "%.2f \\%%" % (100 * total_failures / total),
+               "%.2f \\%%" % (100 * bridges / total),
+        ]
+        table.append(res)
+    output = render_table(template, head, sorted(table))
+    write_table(output, texfile)
+
+
 def build_resume_data_table(csvfiles, texfile, template):
     assert len(csvfiles) == 1
     lines = getlines(csvfiles[0])
@@ -82,6 +110,7 @@
     assert len(csvfiles) == 2
     lines = getlines(csvfiles[0])
     bridge_lines = getlines(csvfiles[1])
+    # keep this around for the assertion below
     bridgedata = {}
     for l in bridge_lines:
         bridgedata[l['bench']] = l
@@ -178,6 +207,8 @@
         (['summary.csv'], build_guard_table),
     'resume_data_table.tex':
         (['resume_summary.csv'], build_resume_data_table),
+    'failing_guards_table.tex':
+        (['resume_summary.csv', 'guard_summary.json'], build_failing_guards_table),
     }
 
 
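In short, the new build_failing_guards_table step reduces each benchmark to two percentages: how many guards ever failed, and how many failed often enough to get a bridge compiled. The following is a minimal standalone sketch of that computation; the data shapes (a dict of per-guard failure counts plus a separately known total number of guards) are inferred from the diff above, and the concrete numbers are made up for illustration:

    # Sketch of the computation behind the two table columns; assumes a
    # per-guard failure-count dict and a known total number of guards.
    BRIDGE_THRESHOLD = 200   # guards failing more often than this get a bridge

    def failing_guard_percentages(failure_counts, total_guards):
        ever_failed = len(failure_counts)
        over_threshold = len([c for c in failure_counts.values()
                              if c > BRIDGE_THRESHOLD])
        return (100.0 * ever_failed / total_guards,
                100.0 * over_threshold / total_guards)

    # hypothetical numbers, for illustration only
    counts = {'guard1': 3, 'guard2': 950, 'guard3': 12}
    print failing_guard_percentages(counts, total_guards=120)   # (2.5, 0.83...)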


[pypy-commit] extradoc extradoc: merge heads

2012-08-09 Thread bivab
Author: David Schneider david.schnei...@picle.org
Branch: extradoc
Changeset: r4487:72fb3711f20c
Date: 2012-08-09 11:33 +0200
http://bitbucket.org/pypy/extradoc/changeset/72fb3711f20c/

Log: merge heads

diff --git a/blog/draft/stm-jul2012.rst b/blog/draft/stm-jul2012.rst
--- a/blog/draft/stm-jul2012.rst
+++ b/blog/draft/stm-jul2012.rst
@@ -4,14 +4,14 @@
 Hi all,
 
 This is a short position paper kind of post about my view (Armin
-Rigo's) on the future of multicore programming.  It is a summary of the
+Rigo's) on the future of multicore programming in high-level languages.
+It is a summary of the
 keynote presentation at EuroPython.  As I learned by talking with people
 afterwards, I am not a good enough speaker to manage to convey a deeper
 message in a 20-minutes talk.  I will try instead to convey it in a
-150-lines post...
+250-lines post...
 
-This is fundamentally about three points, which can be summarized as
-follow:
+This is about three points:
 
 1. We often hear about people wanting a version of Python running without
the Global Interpreter Lock (GIL): a GIL-less Python.  But what we
@@ -20,8 +20,9 @@
threads and locks.  One way is Automatic Mutual Exclusion (AME), which
would give us an AME Python.
 
-2. A good enough Software Transactional Memory (STM) system can do that.
-   This is what we are building into PyPy: an AME PyPy.
+2. A good enough Software Transactional Memory (STM) system can be used
+   as an internal tool to do that.
+   This is what we are building into an AME PyPy.
 
 3. The picture is darker for CPython, though there is a way too.  The
problem is that when we say STM, we think about either GCC 4.7's STM
@@ -49,51 +50,96 @@
 We need to solve this issue with a higher-level solution.  Such
 solutions exist theoretically, and Automatic Mutual Exclusion (AME) is
 one of them.  The idea of AME is that we divide the execution of each
-thread into a number of blocks.  Each block is well-delimited and
-typically large.  Each block runs atomically, as if it acquired a GIL
-for its whole duration.  The trick is that internally we use
-Transactional Memory, which is a a technique that lets the interpreter
-run the blocks from each thread in parallel, while giving the programmer
+thread into a number of atomic blocks.  Each block is well-delimited
+and typically large.  Each block runs atomically, as if it acquired a
+GIL for its whole duration.  The trick is that internally we use
+Transactional Memory, which is a technique that lets the system run the
+atomic blocks from each thread in parallel, while giving the programmer
 the illusion that the blocks have been run in some global serialized
 order.
 
 This doesn't magically solve all possible issues, but it helps a lot: it
-is far easier to reason in terms of a random ordering of large blocks
-than in terms of a random ordering of individual instructions.  For
-example, a program might contain a loop over all keys of a dictionary,
-performing some mostly-independent work on each value.  By using the
-technique described here, putting each piece of work in one block
-running in one thread of a pool, we get exactly the same effect: the
-pieces of work still appear to run in some global serialized order, in
-some random order (as it is anyway when iterating over the keys of a
-dictionary).  There are even techniques building on top of AME that can
-be used to force the order of the blocks, if needed.
+is far easier to reason in terms of a random ordering of large atomic
+blocks than in terms of a random ordering of lines of code --- not to
+mention the mess that multithreaded C is, where even a random ordering
+of instructions is not a sufficient model any more.
+
+What do such atomic blocks look like?  For example, a program might
+contain a loop over all keys of a dictionary, performing some
+mostly-independent work on each value.  This is a typical example:
+each atomic block is one iteration through the loop.  By using the
+technique described here, we can run the iterations in parallel
+(e.g. using a thread pool) but using AME to ensure that they appear to
+run serially.
+
+In Python, we don't care about the order in which the loop iterations
+are done, because we are anyway iterating over the keys of a dictionary.
+So we get exactly the same effect as before: the iterations still run in
+some random order, but --- and that's the important point --- in a
+global serialized order.  In other words, we introduced parallelism, but
+only under the hood: from the programmer's point of view, his program
+still appears to run completely serially.  Parallelisation as a
+theoretically invisible optimization...  more about the "theoretically"
+in the next paragraph.
+
+Note that randomness of order is not fundamental: there are techniques
+building on top of AME that can be used to force the order of the
+atomic blocks, if needed.
 
 
 PyPy and STM/AME
 
 
 Talking more precisely about PyPy: the current 
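To make the atomic-block idea in this post concrete, here is a sketch of the "mostly-independent work on each value" loop, run with one thread per dictionary entry and a plain lock standing in for the transactional machinery, so each iteration behaves as one atomic block. This illustrates only the programming model, not PyPy's actual API; a real AME runtime would run the blocks optimistically in parallel instead of serializing them on a lock. The names data and work are made up.

    import threading

    data = {'a': 1, 'b': 2, 'c': 3}
    _atomic = threading.Lock()        # stand-in for the transactional memory

    def work(key, value):
        with _atomic:                 # one atomic block per loop iteration
            data[key] = value * 2     # mostly-independent work on each value

    threads = [threading.Thread(target=work, args=item)
               for item in data.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()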

[pypy-commit] cffi default: Finally found out the right way to implement ffi.gc(), in just a

2012-08-09 Thread arigo
Author: Armin Rigo ar...@tunes.org
Branch: 
Changeset: r793:a8efbdc7c1cc
Date: 2012-08-09 11:34 +0200
http://bitbucket.org/cffi/cffi/changeset/a8efbdc7c1cc/

Log: Finally found out the right way to implement ffi.gc(), in just a
few lines of Python code using weakrefs with callbacks.

diff --git a/cffi/api.py b/cffi/api.py
--- a/cffi/api.py
+++ b/cffi/api.py
@@ -234,6 +234,18 @@
         replace_with = ' ' + replace_with
         return self._backend.getcname(cdecl, replace_with)
 
+    def gc(self, cdata, destructor):
+        """Return a new cdata object that points to the same
+        data.  Later, when this new cdata object is garbage-collected,
+        'destructor(old_cdata_object)' will be called.
+        """
+        try:
+            gc_weakrefs = self.gc_weakrefs
+        except AttributeError:
+            from .gc_weakref import GcWeakrefs
+            gc_weakrefs = self.gc_weakrefs = GcWeakrefs(self)
+        return gc_weakrefs.build(cdata, destructor)
+
     def _get_cached_btype(self, type):
         try:
             BType = self._cached_btypes[type]
diff --git a/cffi/backend_ctypes.py b/cffi/backend_ctypes.py
--- a/cffi/backend_ctypes.py
+++ b/cffi/backend_ctypes.py
@@ -2,7 +2,7 @@
 from . import model
 
 class CTypesData(object):
-    __slots__ = []
+    __slots__ = ['__weakref__']
 
     def __init__(self, *args):
         raise TypeError("cannot instantiate %r" % (self.__class__,))
diff --git a/cffi/gc_weakref.py b/cffi/gc_weakref.py
new file mode 100644
--- /dev/null
+++ b/cffi/gc_weakref.py
@@ -0,0 +1,19 @@
+from weakref import ref
+
+
+class GcWeakrefs(object):
+    # code copied and adapted from WeakKeyDictionary.
+
+    def __init__(self, ffi):
+        self.ffi = ffi
+        self.data = data = {}
+        def remove(k):
+            destructor, cdata = data.pop(k)
+            destructor(cdata)
+        self.remove = remove
+
+    def build(self, cdata, destructor):
+        # make a new cdata of the same type as the original one
+        new_cdata = self.ffi.cast(self.ffi.typeof(cdata), cdata)
+        self.data[ref(new_cdata, self.remove)] = destructor, cdata
+        return new_cdata
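The pattern above generalizes beyond cffi: weakref.ref accepts a callback that fires once the referent has been collected, and keeping the weakref together with the destructor and its argument alive in a dictionary is all the bookkeeping needed. A standalone sketch with made-up names (call_when_collected, Handle):

    import weakref

    _pending = {}    # weakref -> (destructor, argument); keeps both alive

    def _fire(wr):
        destructor, argument = _pending.pop(wr)
        destructor(argument)

    def call_when_collected(obj, destructor, argument):
        # arrange for destructor(argument) to run after obj is collected
        _pending[weakref.ref(obj, _fire)] = (destructor, argument)

    class Handle(object):    # any weakref-able stand-in object
        pass

    h = Handle()
    call_when_collected(h, destructor=lambda payload: None, argument='payload')
    del h    # once the Handle is unreachable, _fire calls the destructor

The extra twist in GcWeakrefs.build is that it casts to a fresh cdata object and weak-references that one, so the original cdata can stay alive in the dictionary as the destructor's argument.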
diff --git a/doc/source/index.rst b/doc/source/index.rst
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -914,6 +914,14 @@
 ``ffi.getcname(ffi.typeof(x), "*")`` returns the string representation
 of the C type "pointer to the same type than x".
 
+``ffi.gc(cdata, destructor)``: return a new cdata object that points to the
+same data.  Later, when this new cdata object is garbage-collected,
+``destructor(old_cdata_object)`` will be called.  Example of usage:
+``ptr = ffi.gc(lib.malloc(42), lib.free)``.  *New in version 0.3* (together
+with the fact that any cdata object can be weakly referenced).
+
+.. versionadded:: 0.3 --- inlined in the previous paragraph
+
 
 Unimplemented features
 --
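Returning to the ``ffi.gc()`` documentation entry above, a complete (hypothetical) use of the new API, loading malloc and free through cffi itself; this sketch assumes a Unix libc reachable via ``ffi.dlopen(None)``:

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("""
        void *malloc(size_t size);
        void free(void *ptr);
    """)
    C = ffi.dlopen(None)                   # the standard C library

    buf = ffi.gc(C.malloc(42), C.free)     # freed once 'buf' is collected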
diff --git a/testing/backend_tests.py b/testing/backend_tests.py
--- a/testing/backend_tests.py
+++ b/testing/backend_tests.py
@@ -1279,3 +1279,18 @@
         q = ffi.cast("int[3]", p)
         assert q[0] == -5
         assert repr(q).startswith("<cdata 'int[3]' 0x")
+
+    def test_gc(self):
+        ffi = FFI(backend=self.Backend())
+        p = ffi.new("int *", 123)
+        seen = []
+        def destructor(p1):
+            assert p1 is p
+            assert p1[0] == 123
+            seen.append(1)
+        q = ffi.gc(p, destructor)
+        import gc; gc.collect()
+        assert seen == []
+        del q
+        import gc; gc.collect(); gc.collect(); gc.collect()
+        assert seen == [1]


[pypy-commit] extradoc extradoc: add ssa reference

2012-08-09 Thread cfbolz
Author: Carl Friedrich Bolz cfb...@gmx.de
Branch: extradoc
Changeset: r4488:43bbddb246d7
Date: 2012-08-09 14:43 +0200
http://bitbucket.org/pypy/extradoc/changeset/43bbddb246d7/

Log: add ssa reference

diff --git a/talk/vmil2012/zotero.bib b/talk/vmil2012/zotero.bib
--- a/talk/vmil2012/zotero.bib
+++ b/talk/vmil2012/zotero.bib
@@ -116,6 +116,17 @@
 	pages = {32--43}
 },
 
+@article{cytron_efficiently_1991,
+	title = {Efficiently Computing Static Single Assignment Form and the Control Dependence Graph},
+	volume = {13},
+	number = {4},
+	journal = {{ACM} Transactions on Programming Languages and Systems},
+	author = {Cytron, Ron and Ferrante, Jeanne and Rosen, Barry K. and Wegman, Mark N. and Zadeck, F. Kenneth},
+	month = oct,
+	year = {1991},
+	pages = {451--490}
+},
+
 @inproceedings{bolz_tracing_2009,
 	address = {Genova, Italy},
 	title = {Tracing the meta-level: {PyPy's} tracing {JIT} compiler},


[pypy-commit] extradoc extradoc: remove some todos and update one

2012-08-09 Thread bivab
Author: David Schneider david.schnei...@picle.org
Branch: extradoc
Changeset: r4489:cba57497c2a5
Date: 2012-08-09 14:44 +0200
http://bitbucket.org/pypy/extradoc/changeset/cba57497c2a5/

Log: remove some todos and update one

diff --git a/talk/vmil2012/paper.tex b/talk/vmil2012/paper.tex
--- a/talk/vmil2012/paper.tex
+++ b/talk/vmil2012/paper.tex
@@ -352,7 +352,6 @@
 \item For virtuals,
 the payload is an index into a list of virtuals, see next section.
 \end{itemize}
-\todo{figure showing linked resume-data}
 
 \subsection{Interaction With Optimization}
 \label{sub:optimization}
@@ -639,9 +638,7 @@
 \end{figure}
 
 
-\todo{figure about failure counts of guards (histogram?)}
-\todo{add resume data sizes without sharing}
-\todo{add a footnote about why guards have a threshold of 100}
+\todo{add a footnote about why guards have a threshold of 200}
 
 The overhead that is incurred by the JIT to manage the \texttt{resume data},
 the \texttt{low-level resume data} as well as the generated machine code is


[pypy-commit] extradoc extradoc: Use SSA reference

2012-08-09 Thread bivab
Author: David Schneider david.schnei...@picle.org
Branch: extradoc
Changeset: r4490:9cd7a4b73cc8
Date: 2012-08-09 14:48 +0200
http://bitbucket.org/pypy/extradoc/changeset/9cd7a4b73cc8/

Log: Use SSA reference

diff --git a/talk/vmil2012/paper.tex b/talk/vmil2012/paper.tex
--- a/talk/vmil2012/paper.tex
+++ b/talk/vmil2012/paper.tex
@@ -237,9 +237,10 @@
 interpreter profiles the executed program and selects frequently executed code
 paths to be compiled to machine code. After profiling identified an interesting
 path, tracing is started, recording all operations that are executed on this
-path. Like in most compilers tracing JITs use an intermediate representation
-to store the recorded operations, which is typically in SSA form\todo{some ssa
-reference}. Since tracing follows actual execution the code that is recorded
+path. Like in most compilers tracing JITs use an intermediate representation to
+store the recorded operations, which is typically in SSA
+form~\cite{cytron_efficiently_1991}. Since tracing follows actual execution the
+code that is recorded
 represents only one possible path through the control flow graph. Points of
 divergence from the recorded path are marked with special operations called
 \emph{guards}, these operations ensure that assumptions valid during the
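As a schematic illustration of the paragraph above (hand-written pseudo-IR, not actual PyPy JIT output): tracing a small branchy function records only the path that was actually taken and turns the untaken branch into a guard.

    def step(x):
        if x < 100:        # tracing records only the path actually taken
            return x + 1
        return x

    # a possible trace for step(5), in SSA-like form:
    #   i1 = int_lt(i0, 100)
    #   guard_true(i1)      # side exit taken if a later call has x >= 100
    #   i2 = int_add(i0, 1)
    #   finish(i2)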


[pypy-commit] extradoc extradoc: fix

2012-08-09 Thread bivab
Author: David Schneider david.schnei...@picle.org
Branch: extradoc
Changeset: r4491:cf6f9d7d26d8
Date: 2012-08-09 17:15 +0200
http://bitbucket.org/pypy/extradoc/changeset/cf6f9d7d26d8/

Log: fix

diff --git a/talk/vmil2012/tool/bridgedata.py b/talk/vmil2012/tool/bridgedata.py
--- a/talk/vmil2012/tool/bridgedata.py
+++ b/talk/vmil2012/tool/bridgedata.py
@@ -20,6 +20,7 @@
         summary = logparser.extract_category(logfile, 'jit-summary')
         if len(summary) == 0:
             yield (exe, name, log, 'n/a', 'n/a')
+            continue
         summary = summary[0].splitlines()
         for line in summary:
             if line.startswith('Total # of bridges'):


[pypy-commit] extradoc extradoc: make the data size table fit into one column

2012-08-09 Thread bivab
Author: David Schneider david.schnei...@picle.org
Branch: extradoc
Changeset: r4492:14fa16b2eeaa
Date: 2012-08-09 17:15 +0200
http://bitbucket.org/pypy/extradoc/changeset/14fa16b2eeaa/

Log: make the data size table fit into one column

diff --git a/talk/vmil2012/tool/build_tables.py b/talk/vmil2012/tool/build_tables.py
--- a/talk/vmil2012/tool/build_tables.py
+++ b/talk/vmil2012/tool/build_tables.py
@@ -157,11 +157,11 @@
     for l in resume_lines:
         resumedata[l['bench']] = l
 
-    head = ['Benchmark',
-            'Machine code size (kB)',
-            'hl resume data (kB)',
-            'll resume data (kB)',
-            'machine code resume data relation in \\%']
+    head = [r'Benchmark',
+            r'Code',
+            r'resume data',
+            r'll data',
+            r'relation']
 
     table = []
     # collect data
@@ -171,12 +171,12 @@
         gmsize = float(bench['guard map size'])
         asmsize = float(bench['asm size'])
         rdsize = float(resumedata[name]['total resume data size'])
-        rel = "%.2f" % (asmsize / (gmsize + rdsize) * 100,)
+        rel = r"%.1f {\scriptsize \%%}" % (asmsize / (gmsize + rdsize) * 100,)
         table.append([
-            bench['bench'],
-            "%.2f" % (asmsize,),
-            "%.2f" % (rdsize,),
-            "%.2f" % (gmsize,),
+            r"%s" % bench['bench'],
+            r"%.1f {\scriptsize kB}" % (asmsize,),
+            r"%.1f {\scriptsize kB}" % (rdsize,),
+            r"%.1f {\scriptsize kB}" % (gmsize,),
             rel])
     output = render_table(template, head, sorted(table))
     write_table(output, texfile)


[pypy-commit] extradoc extradoc: Move some figures around and add sub sections to the evaluation section

2012-08-09 Thread bivab
Author: David Schneider david.schnei...@picle.org
Branch: extradoc
Changeset: r4493:14bfddc82d2e
Date: 2012-08-09 17:16 +0200
http://bitbucket.org/pypy/extradoc/changeset/14bfddc82d2e/

Log: Move some figures around and add sub sections to the evaluation
section

diff --git a/talk/vmil2012/paper.tex b/talk/vmil2012/paper.tex
--- a/talk/vmil2012/paper.tex
+++ b/talk/vmil2012/paper.tex
@@ -608,7 +608,16 @@
 \end{description}
 
 From the mentioned benchmarks we collected different datasets to evaluate the
-Frequency, the overhead and overall behaviour of guards.
+frequency, the overhead and overall behaviour of guards; the results are
+summarized in the remainder of this section.
+
+\subsection{Frequency of Guards}
+\label{sub:guard_frequency}
+\begin{figure*}
+\include{figures/benchmarks_table}
+\caption{Benchmark Results}
+\label{fig:benchmarks}
+\end{figure*}
 Figure~\ref{fig:benchmarks} summarizes the total number of operations that were
 recorded during tracing for each of the benchmarks and what percentage of these
 operations are guards. The number of operations was counted on the unoptimized
@@ -618,29 +627,14 @@
 Figure~\ref{fig:guard_percent}. These numbers show that guards are a rather
 common operation in the traces, which is a reason to put effort into
 optimizing them.
-\todo{some pie charts about operation distribution}
-
-\begin{figure*}
-\include{figures/benchmarks_table}
-\caption{Benchmark Results}
-\label{fig:benchmarks}
-\end{figure*}
-
+\subsection{Overhead of Guards}
+\label{sub:guard_overhead}
 \begin{figure}
 \include{figures/resume_data_table}
 \caption{Resume Data sizes in KiB}
 \label{fig:resume_data_sizes}
 \end{figure}
 
-\begin{figure}
-\include{figures/failing_guards_table}
-\caption{Failing guards}
-\label{fig:failing_guards}
-\end{figure}
-
-
-\todo{add a footnote about why guards have a threshold of 200}
-
 The overhead that is incurred by the JIT to manage the \texttt{resume data},
 the \texttt{low-level resume data} as well as the generated machine code is
 shown in Figure~\ref{fig:backend_data}. It shows the total memory consumption
@@ -667,11 +661,6 @@
 the overhead associated to guards to resume execution from a side exit appears
 to be high.\bivab{put into relation to other JITs, compilers in general}
 
-\begin{figure*}
-\include{figures/backend_table}
-\caption{Total size of generated machine code and guard data}
-\label{fig:backend_data}
-\end{figure*}
 
 Both figures do not take into account garbage collection. Pieces of machine
 code can be globally invalidated or just become cold again. In both cases the
@@ -681,6 +670,23 @@
 
 \todo{compare to naive variant of resume data}
 
+\begin{figure}
+\include{figures/backend_table}
+\caption{Total size of generated machine code and guard data}
+\label{fig:backend_data}
+\end{figure}
+
+\subsection{Guard Failures}
+\label{sub:guard_failure}
+\begin{figure}
+\include{figures/failing_guards_table}
+\caption{Failing guards}
+\label{fig:failing_guards}
+\end{figure}
+
+
+\todo{add a footnote about why guards have a threshold of 200}
+
 \section{Related Work}
 \label{sec:Related Work}
 


[pypy-commit] pypy default: hopefully fix test_jit_get_stats

2012-08-09 Thread fijal
Author: Maciej Fijalkowski fij...@gmail.com
Branch: 
Changeset: r56664:645f736fefcf
Date: 2012-08-09 18:30 +0200
http://bitbucket.org/pypy/pypy/changeset/645f736fefcf/

Log: hopefully fix test_jit_get_stats

diff --git a/pypy/jit/backend/x86/assembler.py b/pypy/jit/backend/x86/assembler.py
--- a/pypy/jit/backend/x86/assembler.py
+++ b/pypy/jit/backend/x86/assembler.py
@@ -127,9 +127,13 @@
             self._build_stack_check_slowpath()
         if gc_ll_descr.gcrootmap:
             self._build_release_gil(gc_ll_descr.gcrootmap)
-        debug_start('jit-backend-counts')
-        self.set_debug(have_debug_prints())
-        debug_stop('jit-backend-counts')
+        if not self._debug:
+            # if self._debug is already set it means that someone called
+            # set_debug by hand before initializing the assembler. Leave it
+            # as it is
+            debug_start('jit-backend-counts')
+            self.set_debug(have_debug_prints())
+            debug_stop('jit-backend-counts')
 
     def setup(self, looptoken):
         assert self.memcpy_addr != 0, "setup_once() not called?"
diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py
--- a/pypy/jit/backend/x86/test/test_ztranslation.py
+++ b/pypy/jit/backend/x86/test/test_ztranslation.py
@@ -172,7 +172,6 @@
         assert bound & (bound-1) == 0   # a power of two
 
     def test_jit_get_stats(self):
-        py.test.xfail()
         driver = JitDriver(greens = [], reds = ['i'])
 
         def f():


[pypy-commit] pypy stm-jit: Start to draft the tests for the GcStmReviewerAssembler as

2012-08-09 Thread arigo
Author: Armin Rigo ar...@tunes.org
Branch: stm-jit
Changeset: r56665:5c1d01b84795
Date: 2012-08-09 20:43 +0200
http://bitbucket.org/pypy/pypy/changeset/5c1d01b84795/

Log: Start to draft the tests for the GcStmReviewerAssembler as a
llsupport subclass of GcRewriterAssembler. Unsure yet if this is
the ideal level.

diff --git a/pypy/jit/backend/llsupport/gc.py b/pypy/jit/backend/llsupport/gc.py
--- a/pypy/jit/backend/llsupport/gc.py
+++ b/pypy/jit/backend/llsupport/gc.py
@@ -16,7 +16,6 @@
 from pypy.jit.backend.llsupport.descr import GcCache, get_field_descr
 from pypy.jit.backend.llsupport.descr import get_array_descr
 from pypy.jit.backend.llsupport.descr import get_call_descr
-from pypy.jit.backend.llsupport.rewrite import GcRewriterAssembler
 from pypy.rpython.memory.gctransform import asmgcroot
 
 # 
@@ -103,6 +102,11 @@
         gcrefs_output_list.append(p)
 
     def rewrite_assembler(self, cpu, operations, gcrefs_output_list):
+        if not self.stm:
+            from pypy.jit.backend.llsupport.rewrite import GcRewriterAssembler
+        else:
+            from pypy.jit.backend.llsupport import stmrewrite
+            GcRewriterAssembler = stmrewrite.GcStmReviewerAssembler
         rewriter = GcRewriterAssembler(self, cpu)
         newops = rewriter.rewrite(operations)
         # record all GCREFs, because the GC (or Boehm) cannot see them and
@@ -658,10 +662,10 @@
         GcLLDescription.__init__(self, gcdescr, translator, rtyper)
         self.translator = translator
         self.llop1 = llop1
-        try:
-            self.stm = translator.config.translation.stm
-        except AttributeError:
-            pass  # keep the default of False
+        #try:
+        self.stm = gcdescr.config.translation.stm
+        #except AttributeError:
+        #    pass  # keep the default of False
         if really_not_translated:
             assert not self.translate_support_code  # but half does not work
             self._initialize_for_tests()
diff --git a/pypy/jit/backend/llsupport/test/test_gc.py b/pypy/jit/backend/llsupport/test/test_gc.py
--- a/pypy/jit/backend/llsupport/test/test_gc.py
+++ b/pypy/jit/backend/llsupport/test/test_gc.py
@@ -305,6 +305,7 @@
                 gcrootfinder = 'asmgcc'
                 gctransformer = 'framework'
                 gcremovetypeptr = False
+                stm = False
         class FakeTranslator(object):
             config = config_
         class FakeCPU(object):
@@ -405,6 +406,7 @@
         assert self.llop1.record == [('barrier', s_adr)]
 
     def test_gen_write_barrier(self):
+        from pypy.jit.backend.llsupport.rewrite import GcRewriterAssembler
         gc_ll_descr = self.gc_ll_descr
         llop1 = self.llop1
         #
diff --git a/pypy/jit/backend/llsupport/test/test_rewrite.py b/pypy/jit/backend/llsupport/test/test_rewrite.py
--- a/pypy/jit/backend/llsupport/test/test_rewrite.py
+++ b/pypy/jit/backend/llsupport/test/test_rewrite.py
@@ -26,6 +26,7 @@
         tdescr = get_size_descr(self.gc_ll_descr, T)
         tdescr.tid = 5678
         tzdescr = get_field_descr(self.gc_ll_descr, T, 'z')
+        tydescr = get_field_descr(self.gc_ll_descr, T, 'y')
         #
         A = lltype.GcArray(lltype.Signed)
         adescr = get_array_descr(self.gc_ll_descr, A)
@@ -209,6 +210,7 @@
                 gcrootfinder = 'asmgcc'
                 gctransformer = 'framework'
                 gcremovetypeptr = False
+                stm = False
         gcdescr = get_description(config_)
         self.gc_ll_descr = GcLLDescr_framework(gcdescr, None, None, None,
                                                really_not_translated=True)
diff --git a/pypy/jit/backend/llsupport/test/test_stmrewrite.py b/pypy/jit/backend/llsupport/test/test_stmrewrite.py
new file mode 100644
--- /dev/null
+++ b/pypy/jit/backend/llsupport/test/test_stmrewrite.py
@@ -0,0 +1,332 @@
+from pypy.jit.backend.llsupport.gc import *
+from pypy.jit.metainterp.gc import get_description
+from pypy.jit.backend.llsupport.test.test_rewrite import RewriteTests
+
+
+class TestStm(RewriteTests):
+    def setup_method(self, meth):
+        class config_(object):
+            class translation(object):
+                stm = True
+                gc = 'stmgc'
+                gcrootfinder = 'stm'
+                gctransformer = 'framework'
+                gcremovetypeptr = False
+        gcdescr = get_description(config_)
+        self.gc_ll_descr = GcLLDescr_framework(gcdescr, None, None, None,
+                                               really_not_translated=True)
+        #
+        class FakeCPU(object):
+            def sizeof(self, STRUCT):
+                descr = SizeDescrWithVTable(104)
+                descr.tid = 9315
+                return descr
+        self.cpu = FakeCPU()
+
+    def test_rewrite_one_setfield_gc(self):
+        self.check_rewrite("""
+            [p1, p2]
+

[pypy-commit] pypy default: improve the message not to get too annoyed

2012-08-09 Thread fijal
Author: Maciej Fijalkowski fij...@gmail.com
Branch: 
Changeset: r5:756cbdf37781
Date: 2012-08-09 22:42 +0200
http://bitbucket.org/pypy/pypy/changeset/756cbdf37781/

Log: improve the message not to get too annoyed

diff --git a/pypy/translator/backendopt/removeassert.py b/pypy/translator/backendopt/removeassert.py
--- a/pypy/translator/backendopt/removeassert.py
+++ b/pypy/translator/backendopt/removeassert.py
@@ -41,7 +41,19 @@
             log.removeassert("removed %d asserts in %s" % (count, graph.name))
         checkgraph(graph)
         #transform_dead_op_vars(graph, translator)
-    log.removeassert("Could not remove %d asserts, but removed %d asserts." % tuple(total_count))
+    total_count = tuple(total_count)
+    if total_count[0] == 0:
+        if total_count[1] == 0:
+            msg = None
+        else:
+            msg = "Removed %d asserts" % (total_count[1],)
+    else:
+        if total_count[1] == 0:
+            msg = "Could not remove %d asserts" % (total_count[0],)
+        else:
+            msg = "Could not remove %d asserts, but removed %d asserts." % total_count
+    if msg is not None:
+        log.removeassert(msg)
 
 
 def kill_assertion_link(graph, link):


[pypy-commit] pypy default: fix the test

2012-08-09 Thread fijal
Author: Maciej Fijalkowski fij...@gmail.com
Branch: 
Changeset: r56667:c5bf753ea9c2
Date: 2012-08-09 22:43 +0200
http://bitbucket.org/pypy/pypy/changeset/c5bf753ea9c2/

Log: fix the test

diff --git a/pypy/jit/backend/x86/test/test_ztranslation.py b/pypy/jit/backend/x86/test/test_ztranslation.py
--- a/pypy/jit/backend/x86/test/test_ztranslation.py
+++ b/pypy/jit/backend/x86/test/test_ztranslation.py
@@ -187,7 +187,8 @@
 return len(ll_times)
 
 res = self.meta_interp(main, [])
-assert res == 1
+assert res == 3
+# one for loop, one for entry point and one for the prologue
 
 class TestTranslationRemoveTypePtrX86(CCompiledMixin):
 CPUClass = getcpuclass()


[pypy-commit] pypy default: patch from matkor for PLD and other strange linux distros

2012-08-09 Thread fijal
Author: Maciej Fijalkowski fij...@gmail.com
Branch: 
Changeset: r56668:0cf0134d39eb
Date: 2012-08-10 00:04 +0200
http://bitbucket.org/pypy/pypy/changeset/0cf0134d39eb/

Log: patch from matkor for PLD and other strange linux distros

diff --git a/pypy/module/_minimal_curses/fficurses.py b/pypy/module/_minimal_curses/fficurses.py
--- a/pypy/module/_minimal_curses/fficurses.py
+++ b/pypy/module/_minimal_curses/fficurses.py
@@ -9,10 +9,12 @@
 from pypy.module._minimal_curses import interp_curses
 from pypy.translator.tool.cbuild import ExternalCompilationInfo
 from sys import platform
+import os.path
 
 _CYGWIN = platform == 'cygwin'
+_NCURSES_CURSES = os.path.isfile("/usr/include/ncurses/curses.h")
 
-if _CYGWIN:
+if _CYGWIN or _NCURSES_CURSES:
     eci = ExternalCompilationInfo(
         includes = ['ncurses/curses.h', 'ncurses/term.h'],
         libraries = ['curses'],