[pypy-commit] pypy default: It's been six years since these lines were last uncommented. They can go.

2014-11-08 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r74400:96df37fb66d7
Date: 2014-11-08 10:55 +
http://bitbucket.org/pypy/pypy/changeset/96df37fb66d7/

Log:It's been six years since these lines were last uncommented. They
can go.

diff --git a/rpython/jit/metainterp/pyjitpl.py 
b/rpython/jit/metainterp/pyjitpl.py
--- a/rpython/jit/metainterp/pyjitpl.py
+++ b/rpython/jit/metainterp/pyjitpl.py
@@ -1333,8 +1333,6 @@
 while True:
 pc = self.pc
 op = ord(self.bytecode[pc])
-#debug_print(self.jitcode.name, pc)
-#print staticdata.opcode_names[op]
 staticdata.opcode_implementations[op](self, pc)
 except ChangeFrame:
 pass


[pypy-commit] pypy recursion_and_inlining: Stop tracing when inlining is detected.

2014-11-12 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74470:b60064f55316
Date: 2014-11-11 15:54 +
http://bitbucket.org/pypy/pypy/changeset/b60064f55316/

Log:Stop tracing when inlining is detected.

Currently, if a function called part way through a trace turns out
to be recursive, it is endlessly inlined, often leading to a
tracing abort. This is tremendously inefficient, and many seemingly
innocent recursive functions have extremely bad performance
characteristics.

This patch is the first step in trying to address this. Notice that
this breaks a few tests, in the sense that it changes what is
traced. It should have no visible effect on end behaviour.
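
As a rough illustration of the problem (an editor's sketch, not part of
the patch): a plain recursive function reached from inside a traced loop
would previously be inlined level after level, so deep recursion could
blow past the trace limit and cause an abort:

    def factorial(n):
        if n <= 1:
            return 1
        return n * factorial(n - 1)   # recursive call: previously inlined again and again

    def loop(xs):
        total = 0
        for x in xs:                  # the loop being traced
            total += factorial(x)
        return total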

diff --git a/rpython/jit/metainterp/pyjitpl.py 
b/rpython/jit/metainterp/pyjitpl.py
--- a/rpython/jit/metainterp/pyjitpl.py
+++ b/rpython/jit/metainterp/pyjitpl.py
@@ -951,9 +951,31 @@
 assembler_call = False
 if warmrunnerstate.inlining:
 if warmrunnerstate.can_inline_callable(greenboxes):
+# We've found a potentially inlinable function; now we need to
+# see if it's already on the stack. In other words: are we about
+# to enter recursion? If so, we don't want to inline the
+# recursion, which would be equivalent to unrolling a while
+# loop.
 portal_code = targetjitdriver_sd.mainjitcode
-return self.metainterp.perform_call(portal_code, allboxes,
-greenkey=greenboxes)
+inline = True
+if self.metainterp.is_main_jitcode(portal_code):
+for gk, _ in self.metainterp.portal_trace_positions:
+if gk is None:
+continue
+assert len(gk) == len(greenboxes)
+i = 0
+for i in range(len(gk)):
+if not gk[i].same_constant(greenboxes[i]):
+break
+else:
+# The greenkey of a trace position on the stack
+# matches what we have, which means we're definitely
+# about to recurse.
+inline = False
+break
+if inline:
+return self.metainterp.perform_call(portal_code, allboxes,
+greenkey=greenboxes)
 assembler_call = True
 # verify that we have all green args, needed to make sure
 # that assembler that we call is still correct


[pypy-commit] pypy recursion_and_inlining: We only need can_inline_callable.

2014-11-12 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74471:45bc61aafcd3
Date: 2014-11-11 16:02 +
http://bitbucket.org/pypy/pypy/changeset/45bc61aafcd3/

Log:We only need can_inline_callable.

diff --git a/rpython/jit/metainterp/warmstate.py 
b/rpython/jit/metainterp/warmstate.py
--- a/rpython/jit/metainterp/warmstate.py
+++ b/rpython/jit/metainterp/warmstate.py
@@ -567,17 +567,14 @@
 jd = self.jitdriver_sd
 cpu = self.cpu
 
-def can_inline_greenargs(*greenargs):
+def can_inline_callable(greenkey):
+greenargs = unwrap_greenkey(greenkey)
 if can_never_inline(*greenargs):
 return False
 cell = JitCell.get_jitcell(*greenargs)
 if cell is not None and (cell.flags & JC_DONT_TRACE_HERE) != 0:
 return False
 return True
-def can_inline_callable(greenkey):
-greenargs = unwrap_greenkey(greenkey)
-return can_inline_greenargs(*greenargs)
-self.can_inline_greenargs = can_inline_greenargs
 self.can_inline_callable = can_inline_callable
 
 if jd._should_unroll_one_iteration_ptr is None:


[pypy-commit] pypy recursion_and_inlining: Force recursive functions to be (separately) traced sooner.

2014-11-12 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74472:ebc86588479f
Date: 2014-11-12 09:44 +
http://bitbucket.org/pypy/pypy/changeset/ebc86588479f/

Log:Force recursive functions to be (separately) traced sooner.

As soon as we've identified a recursive function, we know we don't
want to inline it into other functions. Instead, we want to have it
traced separately. This patch simply uses the same mechanism as
aborted traces to achieve this.
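
As a toy model (plain Python, not the RPython warmstate code; the flag
value here is an assumption), the mechanism amounts to a per-greenkey
cell carrying a bit flag: dont_trace_here sets the flag, and the
inlining check refuses any greenkey whose cell carries it:

    JC_DONT_TRACE_HERE = 0x01   # assumed flag value, for illustration only

    class JitCell(object):
        def __init__(self):
            self.flags = 0

    cells = {}   # greenkey -> JitCell

    def dont_trace_here(greenkey):
        cells.setdefault(greenkey, JitCell()).flags |= JC_DONT_TRACE_HERE

    def can_inline_callable(greenkey):
        cell = cells.get(greenkey)
        return cell is None or (cell.flags & JC_DONT_TRACE_HERE) == 0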

diff --git a/rpython/jit/metainterp/pyjitpl.py 
b/rpython/jit/metainterp/pyjitpl.py
--- a/rpython/jit/metainterp/pyjitpl.py
+++ b/rpython/jit/metainterp/pyjitpl.py
@@ -971,6 +971,7 @@
 # The greenkey of a trace position on the stack
 # matches what we have, which means we're definitely
 # about to recurse.
+warmrunnerstate.dont_trace_here(greenboxes)
 inline = False
 break
 if inline:
diff --git a/rpython/jit/metainterp/warmstate.py 
b/rpython/jit/metainterp/warmstate.py
--- a/rpython/jit/metainterp/warmstate.py
+++ b/rpython/jit/metainterp/warmstate.py
@@ -577,6 +577,16 @@
 return True
 self.can_inline_callable = can_inline_callable
 
+def dont_trace_here(greenkey):
+# Set greenkey as somewhere that tracing should not occur into;
+# notice that, as per the description of JC_DONT_TRACE_HERE earlier,
+# if greenkey hasn't been traced separately, setting
+# JC_DONT_TRACE_HERE will force tracing the next time the function
+# is encountered.
+cell = JitCell.ensure_jit_cell_at_key(greenkey)
+cell.flags |= JC_DONT_TRACE_HERE
+self.dont_trace_here = dont_trace_here
+
 if jd._should_unroll_one_iteration_ptr is None:
 def should_unroll_one_iteration(greenkey):
 return False


[pypy-commit] pypy recursion_and_inlining: Unroll a (customisable) fixed number of iterations of recursive functions.

2014-11-13 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74498:2fa67aa20eea
Date: 2014-11-13 10:53 +
http://bitbucket.org/pypy/pypy/changeset/2fa67aa20eea/

Log:Unroll a (customisable) fixed number of iterations of recursive
functions.

In essence, we count how many instances of the function we're about
to call are already on the meta-interpreter stack, and only stop
inlining once that count exceeds a threshold. Initial experiments
suggest that 7 is a reasonable threshold, though this shouldn't be
considered set in stone, as it's heavily dependent on which
benchmarks one uses.
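
A sketch of the counting idea in plain Python (the real code, in the
diff below, walks portal_trace_positions and compares greenkeys with
same_constant):

    def exceeds_unroll_limit(stack_greenkeys, greenkey, max_unroll_recursion=7):
        count = 0
        for gk in stack_greenkeys:
            if gk is not None and gk == greenkey:
                count += 1
        return count >= max_unroll_recursion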

diff --git a/rpython/jit/metainterp/pyjitpl.py 
b/rpython/jit/metainterp/pyjitpl.py
--- a/rpython/jit/metainterp/pyjitpl.py
+++ b/rpython/jit/metainterp/pyjitpl.py
@@ -959,6 +959,7 @@
 portal_code = targetjitdriver_sd.mainjitcode
 inline = True
 if self.metainterp.is_main_jitcode(portal_code):
+count = 0
 for gk, _ in self.metainterp.portal_trace_positions:
 if gk is None:
 continue
@@ -968,12 +969,16 @@
 if not gk[i].same_constant(greenboxes[i]):
 break
 else:
-# The greenkey of a trace position on the stack
-# matches what we have, which means we're definitely
-# about to recurse.
-warmrunnerstate.dont_trace_here(greenboxes)
-inline = False
-break
+count += 1
+memmgr = self.metainterp.staticdata.warmrunnerdesc.memory_manager
+if count >= memmgr.max_unroll_recursion:
+# This function is recursive and has exceeded the
+# maximum number of unrollings we allow. We want to stop
+# inlining it further and to make sure that, if it
+# hasn't happened already, the function is traced
+# separately as soon as possible.
+warmrunnerstate.dont_trace_here(greenboxes)
+inline = False
 if inline:
 return self.metainterp.perform_call(portal_code, allboxes,
 greenkey=greenboxes)
diff --git a/rpython/jit/metainterp/warmspot.py 
b/rpython/jit/metainterp/warmspot.py
--- a/rpython/jit/metainterp/warmspot.py
+++ b/rpython/jit/metainterp/warmspot.py
@@ -69,7 +69,8 @@
 backendopt=False, trace_limit=sys.maxint,
 inline=False, loop_longevity=0, retrace_limit=5,
 function_threshold=4,
-enable_opts=ALL_OPTS_NAMES, max_retrace_guards=15, **kwds):
+enable_opts=ALL_OPTS_NAMES, max_retrace_guards=15, 
+max_unroll_recursion=7, **kwds):
 from rpython.config.config import ConfigError
 translator = interp.typer.annotator.translator
 try:
@@ -91,6 +92,7 @@
 jd.warmstate.set_param_retrace_limit(retrace_limit)
 jd.warmstate.set_param_max_retrace_guards(max_retrace_guards)
 jd.warmstate.set_param_enable_opts(enable_opts)
+jd.warmstate.set_param_max_unroll_recursion(max_unroll_recursion)
 warmrunnerdesc.finish()
 if graph_and_interp_only:
 return interp, graph
diff --git a/rpython/jit/metainterp/warmstate.py 
b/rpython/jit/metainterp/warmstate.py
--- a/rpython/jit/metainterp/warmstate.py
+++ b/rpython/jit/metainterp/warmstate.py
@@ -291,6 +291,11 @@
 if self.warmrunnerdesc.memory_manager:
 self.warmrunnerdesc.memory_manager.max_unroll_loops = value
 
+def set_param_max_unroll_recursion(self, value):
+if self.warmrunnerdesc:
+if self.warmrunnerdesc.memory_manager:
+self.warmrunnerdesc.memory_manager.max_unroll_recursion = value
+
 def disable_noninlinable_function(self, greenkey):
 cell = self.JitCell.ensure_jit_cell_at_key(greenkey)
 cell.flags |= JC_DONT_TRACE_HERE
diff --git a/rpython/rlib/jit.py b/rpython/rlib/jit.py
--- a/rpython/rlib/jit.py
+++ b/rpython/rlib/jit.py
@@ -463,6 +463,7 @@
 'max_unroll_loops': 'number of extra unrollings a loop can cause',
 'enable_opts': 'INTERNAL USE ONLY (MAY NOT WORK OR LEAD TO CRASHES): '
'optimizations to enable, or all = %s' % ENABLE_ALL_OPTS,
+'max_unroll_recursion': 'how many levels deep to unroll a recursive function'
 }
 
 PARAMETERS = {'threshold': 1039, # just above 1024, prime
@@ -476,6 +477,7 @@
   'max_retrace_guards': 15,
   'max_unroll_loops': 0,
   'enable_opts': 'all',
+  'max_unroll_recursion': 7,
   }
 unroll_parameters

[pypy-commit] pypy recursion_and_inlining: Use framestack instead of portal_trace_positions.

2014-12-09 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74863:fe3efdc5abfa
Date: 2014-12-08 15:22 +
http://bitbucket.org/pypy/pypy/changeset/fe3efdc5abfa/

Log:Use framestack instead of portal_trace_positions.

The latter does not, despite first appearances, model the frame
stack: it models all call positions the portal has gone through in
its history. If I'd looked more carefully, I might have noticed that
the portal has a semi-hidden framestack attribute, whose frames have
a semi-hidden greenkey attribute. This records exactly what we want,
and also means we're no longer tied to being the main jitcode.
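
A sketch of the framestack walk (an approximation, not the RPython
code; names follow the diff below): each frame records the greenkey it
was entered with, so the recursion depth is simply the number of frames
whose greenkey matches the callee's:

    def recursion_depth(framestack, portal_code, greenboxes):
        count = 0
        for f in framestack:
            if f.jitcode is portal_code and f.greenkey is not None:
                if all(a.same_constant(b) for a, b in zip(f.greenkey, greenboxes)):
                    count += 1
        return count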

diff --git a/rpython/jit/metainterp/pyjitpl.py 
b/rpython/jit/metainterp/pyjitpl.py
--- a/rpython/jit/metainterp/pyjitpl.py
+++ b/rpython/jit/metainterp/pyjitpl.py
@@ -958,27 +958,29 @@
 # loop.
 portal_code = targetjitdriver_sd.mainjitcode
 inline = True
-if self.metainterp.is_main_jitcode(portal_code):
-count = 0
-for gk, _ in self.metainterp.portal_trace_positions:
-if gk is None:
-continue
-assert len(gk) == len(greenboxes)
-i = 0
-for i in range(len(gk)):
-if not gk[i].same_constant(greenboxes[i]):
-break
-else:
-count += 1
-memmgr = self.metainterp.staticdata.warmrunnerdesc.memory_manager
-if count >= memmgr.max_unroll_recursion:
-# This function is recursive and has exceeded the
-# maximum number of unrollings we allow. We want to stop
-# inlining it further and to make sure that, if it
-# hasn't happened already, the function is traced
-# separately as soon as possible.
-warmrunnerstate.dont_trace_here(greenboxes)
-inline = False
+count = 0
+for f in self.metainterp.framestack:
+if f.jitcode is not portal_code:
+continue
+gk = f.greenkey
+if gk is None:
+continue
+assert len(gk) == len(greenboxes)
+i = 0
+for i in range(len(gk)):
+if not gk[i].same_constant(greenboxes[i]):
+break
+else:
+count += 1
+memmgr = self.metainterp.staticdata.warmrunnerdesc.memory_manager
+if count >= memmgr.max_unroll_recursion:
+# This function is recursive and has exceeded the
+# maximum number of unrollings we allow. We want to stop
+# inlining it further and to make sure that, if it
+# hasn't happened already, the function is traced
+# separately as soon as possible.
+warmrunnerstate.dont_trace_here(greenboxes)
+inline = False
 if inline:
 return self.metainterp.perform_call(portal_code, allboxes,
 greenkey=greenboxes)


[pypy-commit] pypy recursion_and_inlining: Add output to JIT logging informing people of when recursive functions have stopped being inlined.

2014-12-09 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74866:8b9a6b6f9c9d
Date: 2014-12-09 15:44 +
http://bitbucket.org/pypy/pypy/changeset/8b9a6b6f9c9d/

Log:Add output to JIT logging informing people of when recursive
functions have stopped being inlined.

diff --git a/rpython/jit/metainterp/pyjitpl.py 
b/rpython/jit/metainterp/pyjitpl.py
--- a/rpython/jit/metainterp/pyjitpl.py
+++ b/rpython/jit/metainterp/pyjitpl.py
@@ -978,6 +978,9 @@
 # inlining it further and to make sure that, if it
 # hasn't happened already, the function is traced
 # separately as soon as possible.
+if have_debug_prints():
+loc = targetjitdriver_sd.warmstate.get_location_str(greenboxes)
+debug_print("recursive function (not inlined):", loc)
 warmrunnerstate.dont_trace_here(greenboxes)
 else:
 return self.metainterp.perform_call(portal_code, allboxes,


[pypy-commit] pypy default: Merge recursion_and_inlining.

2014-12-09 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r74869:70d88f23b9bb
Date: 2014-12-09 16:29 +
http://bitbucket.org/pypy/pypy/changeset/70d88f23b9bb/

Log:Merge recursion_and_inlining.

This branch stops inlining recursive function calls after N levels
of (possibly indirect) recursion in a function (where N is
configurable; what the best possible value of N might be is still a
little unclear, and ideally requires testing on a wider range of
benchmarks). This stops us abusing trace aborts as a way of stopping
inlining in recursion, and improves performance.
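
For interpreter authors, the new knob is exposed like the existing
unrolling parameters. A hedged usage sketch (the parameter name and
default follow the diffs below; the driver is just a placeholder):

    from rpython.rlib.jit import JitDriver, set_param

    driver = JitDriver(greens=['code'], reds=['frame'])   # placeholder driver
    # Allow up to 10 levels of recursive unrolling before the callee is
    # traced separately (the default is 7, per the PARAMETERS table).
    set_param(driver, "max_unroll_recursion", 10)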

diff --git a/rpython/jit/metainterp/pyjitpl.py 
b/rpython/jit/metainterp/pyjitpl.py
--- a/rpython/jit/metainterp/pyjitpl.py
+++ b/rpython/jit/metainterp/pyjitpl.py
@@ -964,9 +964,40 @@
 assembler_call = False
 if warmrunnerstate.inlining:
 if warmrunnerstate.can_inline_callable(greenboxes):
+# We've found a potentially inlinable function; now we need to
+# see if it's already on the stack. In other words: are we about
+# to enter recursion? If so, we don't want to inline the
+# recursion, which would be equivalent to unrolling a while
+# loop.
 portal_code = targetjitdriver_sd.mainjitcode
-return self.metainterp.perform_call(portal_code, allboxes,
-greenkey=greenboxes)
+count = 0
+for f in self.metainterp.framestack:
+if f.jitcode is not portal_code:
+continue
+gk = f.greenkey
+if gk is None:
+continue
+assert len(gk) == len(greenboxes)
+i = 0
+for i in range(len(gk)):
+if not gk[i].same_constant(greenboxes[i]):
+break
+else:
+count += 1
+memmgr = self.metainterp.staticdata.warmrunnerdesc.memory_manager
+if count >= memmgr.max_unroll_recursion:
+# This function is recursive and has exceeded the
+# maximum number of unrollings we allow. We want to stop
+# inlining it further and to make sure that, if it
+# hasn't happened already, the function is traced
+# separately as soon as possible.
+if have_debug_prints():
+loc = targetjitdriver_sd.warmstate.get_location_str(greenboxes)
+debug_print("recursive function (not inlined):", loc)
+warmrunnerstate.dont_trace_here(greenboxes)
+else:
+return self.metainterp.perform_call(portal_code, allboxes,
+greenkey=greenboxes)
 assembler_call = True
 # verify that we have all green args, needed to make sure
 # that assembler that we call is still correct
diff --git a/rpython/jit/metainterp/test/test_recursive.py 
b/rpython/jit/metainterp/test/test_recursive.py
--- a/rpython/jit/metainterp/test/test_recursive.py
+++ b/rpython/jit/metainterp/test/test_recursive.py
@@ -1112,6 +1112,37 @@
 assert res == 2095
 self.check_resops(call_assembler=12)
 
+def test_inline_recursion_limit(self):
+driver = JitDriver(greens = ["threshold", "loop"], reds=["i"])
+@dont_look_inside
+def f():
+set_param(driver, "max_unroll_recursion", 10)
+def portal(threshold, loop, i):
+f()
+if i > threshold:
+return i
+while True:
+driver.jit_merge_point(threshold=threshold, loop=loop, i=i)
+if loop:
+portal(threshold, False, 0)
+else:
+portal(threshold, False, i + 1)
+return i
+if i > 10:
+return 1
+i += 1
+driver.can_enter_jit(threshold=threshold, loop=loop, i=i)
+
+res1 = portal(10, True, 0)
+res2 = self.meta_interp(portal, [10, True, 0], inline=True)
+assert res1 == res2
+self.check_resops(call_assembler=2)
+
+res1 = portal(9, True, 0)
+res2 = self.meta_interp(portal, [9, True, 0], inline=True)
+assert res1 == res2
+self.check_resops(call_assembler=0)
+
 def test_handle_jitexception_in_portal(self):
 # a test for _handle_jitexception_in_portal in blackhole.py
 driver = JitDriver(greens = ['codeno'], reds = ['i', 'str'],
diff --git a/rpython/jit/metainterp/warmspot.py 
b/rpython/jit/metainterp/warmspot.py
--- a/rpython/jit/metainterp/warmspot.py
+++ b/rpython/jit/metainterp/warmspot.py
@@ -69,7 +69,8 @@
 backendopt=F

[pypy-commit] pypy recursion_and_inlining: Test that recursing past a specified threshold turns off recursion inlining.

2014-12-09 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74865:64b3af5150aa
Date: 2014-12-09 15:44 +
http://bitbucket.org/pypy/pypy/changeset/64b3af5150aa/

Log:Test that recursing past a specified threshold turns off recursion
inlining.

diff --git a/rpython/jit/metainterp/test/test_recursive.py 
b/rpython/jit/metainterp/test/test_recursive.py
--- a/rpython/jit/metainterp/test/test_recursive.py
+++ b/rpython/jit/metainterp/test/test_recursive.py
@@ -1112,6 +1112,37 @@
 assert res == 2095
 self.check_resops(call_assembler=12)
 
+def test_inline_recursion_limit(self):
+driver = JitDriver(greens = ["threshold", "loop"], reds=["i"])
+@dont_look_inside
+def f():
+set_param(driver, "max_unroll_recursion", 10)
+def portal(threshold, loop, i):
+f()
+if i > threshold:
+return i
+while True:
+driver.jit_merge_point(threshold=threshold, loop=loop, i=i)
+if loop:
+portal(threshold, False, 0)
+else:
+portal(threshold, False, i + 1)
+return i
+if i > 10:
+return 1
+i += 1
+driver.can_enter_jit(threshold=threshold, loop=loop, i=i)
+
+res1 = portal(10, True, 0)
+res2 = self.meta_interp(portal, [10, True, 0], inline=True)
+assert res1 == res2
+self.check_resops(call_assembler=2)
+
+res1 = portal(9, True, 0)
+res2 = self.meta_interp(portal, [9, True, 0], inline=True)
+assert res1 == res2
+self.check_resops(call_assembler=0)
+
 def test_handle_jitexception_in_portal(self):
 # a test for _handle_jitexception_in_portal in blackhole.py
 driver = JitDriver(greens = ['codeno'], reds = ['i', 'str'],


[pypy-commit] pypy recursion_and_inlining: Close to-be-merged branch.

2014-12-09 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74868:dcda54278f74
Date: 2014-12-09 16:26 +
http://bitbucket.org/pypy/pypy/changeset/dcda54278f74/

Log:Close to-be-merged branch.



[pypy-commit] pypy recursion_and_inlining: We no longer need a separate inline variable.

2014-12-09 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74864:b02aa3253678
Date: 2014-12-08 15:39 +
http://bitbucket.org/pypy/pypy/changeset/b02aa3253678/

Log:We no longer need a separate inline variable.

diff --git a/rpython/jit/metainterp/pyjitpl.py 
b/rpython/jit/metainterp/pyjitpl.py
--- a/rpython/jit/metainterp/pyjitpl.py
+++ b/rpython/jit/metainterp/pyjitpl.py
@@ -957,7 +957,6 @@
 # recursion, which would be equivalent to unrolling a while
 # loop.
 portal_code = targetjitdriver_sd.mainjitcode
-inline = True
 count = 0
 for f in self.metainterp.framestack:
 if f.jitcode is not portal_code:
@@ -980,8 +979,7 @@
 # hasn't happened already, the function is traced
 # separately as soon as possible.
 warmrunnerstate.dont_trace_here(greenboxes)
-inline = False
-if inline:
+else:
 return self.metainterp.perform_call(portal_code, allboxes,
 greenkey=greenboxes)
 assembler_call = True


[pypy-commit] pypy recursion_and_inlining: Merge default.

2014-12-09 Thread ltratt
Author: Laurence Tratt 
Branch: recursion_and_inlining
Changeset: r74867:fd5a11ac1b71
Date: 2014-12-09 15:48 +
http://bitbucket.org/pypy/pypy/changeset/fd5a11ac1b71/

Log:Merge default.

diff too long, truncating to 2000 out of 18633 lines

diff --git a/lib-python/2.7/subprocess.py b/lib-python/2.7/subprocess.py
--- a/lib-python/2.7/subprocess.py
+++ b/lib-python/2.7/subprocess.py
@@ -655,6 +655,21 @@
 """Create new Popen instance."""
 _cleanup()
 
+# --- PyPy hack, see _pypy_install_libs_after_virtualenv() ---
+# match arguments passed by different versions of virtualenv
+if args[1:] in (
+['-c', 'import sys; print(sys.prefix)'],# 1.6 10ba3f3c
+['-c', "\nimport sys\nprefix = sys.prefix\n"# 1.7 0e9342ce
+ "if sys.version_info[0] == 3:\n"
+ "prefix = prefix.encode('utf8')\n"
+ "if hasattr(sys.stdout, 'detach'):\n"
+ "sys.stdout = sys.stdout.detach()\n"
+ "elif hasattr(sys.stdout, 'buffer'):\n"
+ "sys.stdout = sys.stdout.buffer\nsys.stdout.write(prefix)\n"],
+['-c', 'import sys;out=sys.stdout;getattr(out, "buffer"'
+ ', out).write(sys.prefix.encode("utf-8"))']):  # 1.7.2 a9454bce
+_pypy_install_libs_after_virtualenv(args[0])
+
 if not isinstance(bufsize, (int, long)):
 raise TypeError("bufsize must be an integer")
 
@@ -1560,6 +1575,27 @@
 self.send_signal(signal.SIGKILL)
 
 
+def _pypy_install_libs_after_virtualenv(target_executable):
+# https://bitbucket.org/pypy/pypy/issue/1922/future-proofing-virtualenv
+#
+# PyPy 2.4.1 turned --shared on by default.  This means the pypy binary
+# depends on the 'libpypy-c.so' shared library to be able to run.
+# The virtualenv code existing at the time did not account for this
+# and would break.  Try to detect that we're running under such a
+# virtualenv in the "Testing executable with" phase and copy the
+# library ourselves.
+caller = sys._getframe(2)
+if ('virtualenv_version' in caller.f_globals and
+  'copyfile' in caller.f_globals):
+dest_dir = sys.pypy_resolvedirof(target_executable)
+src_dir = sys.pypy_resolvedirof(sys.executable)
+for libname in ['libpypy-c.so']:
+dest_library = os.path.join(dest_dir, libname)
+src_library = os.path.join(src_dir, libname)
+if os.path.exists(src_library):
+caller.f_globals['copyfile'](src_library, dest_library)
+
+
 def _demo_posix():
 #
 # Example 1: Simple redirection: Get process list
diff --git a/lib-python/conftest.py b/lib-python/conftest.py
--- a/lib-python/conftest.py
+++ b/lib-python/conftest.py
@@ -59,7 +59,7 @@
 def __init__(self, basename, core=False, compiler=None, usemodules='',
  skip=None):
 self.basename = basename
-self._usemodules = usemodules.split() + ['signal', 'rctime', 'itertools', '_socket']
+self._usemodules = usemodules.split() + ['signal', 'time', 'itertools', '_socket']
 self._compiler = compiler
 self.core = core
 self.skip = skip
diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py
--- a/pypy/config/pypyoption.py
+++ b/pypy/config/pypyoption.py
@@ -29,7 +29,7 @@
 # --allworkingmodules
 working_modules = default_modules.copy()
 working_modules.update([
-"_socket", "unicodedata", "mmap", "fcntl", "_locale", "pwd", "rctime" ,
+"_socket", "unicodedata", "mmap", "fcntl", "_locale", "pwd", "time" ,
 "select", "zipimport", "_lsprof", "crypt", "signal", "_rawffi", "termios",
 "zlib", "bz2", "struct", "_hashlib", "_md5", "_sha", "_minimal_curses",
 "cStringIO", "thread", "itertools", "pyexpat", "_ssl", "cpyext", "array",
@@ -40,7 +40,7 @@
 
 translation_modules = default_modules.copy()
 translation_modules.update([
-"fcntl", "rctime", "select", "signal", "_rawffi", "zlib", "struct", "_md5",
+"fcntl", "time", "select", "signal", "_rawffi", "zlib", "struct", "_md5",
 "cStringIO", "array", "binascii",
 # the following are needed for pyrepl (and hence for the
 # interactive prompt/pdb)
@@ -64,19 +64,15 @@
 default_modules.add("_locale")
 
 if sys.platform == "sunos5":
-working_modules.remove('mmap')   # depend on ctypes, can't get at c-level 'errono'
-working_modules.remove('rctime') # depend on ctypes, missing tm_zone/tm_gmtoff
-working_modules.remove('signal') # depend on ctypes, can't get at c-level 'errono'
 working_modules.remove('fcntl')  # LOCK_NB not defined
 working_modules.remove("_minimal_curses")
 working_modules.remove("termios")
-working_modules.remove("_multiprocessing")   # depends on rctime
 if "cppyy" in working_modules:
 working_modules.remove("cppyy")  # depends on ctypes
 
 
 module_dependencies = {
-'_multiprocessing': [('objspace.usemodules.rctime

[pypy-commit] benchmarks default: Krakatau takes a long time to warm up, so discard warmup iterations.

2014-12-10 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r294:63916bb5a798
Date: 2014-12-10 17:48 +
http://bitbucket.org/pypy/benchmarks/changeset/63916bb5a798/

Log:Krakatau takes a long time to warm up, so discard warmup iterations.

When it is warmed up, it runs much faster than cold, so we also run
the workload several times per timed iteration to make each
measurement long enough to be reliable.
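
The pattern, as a generic sketch rather than the benchmark harness
itself: time every iteration, repeat the workload a few times per
iteration so each timing is long enough to be meaningful, and report
only the post-warmup timings:

    import time

    def run_benchmark(workload, iterations, warmup=30, inner_repeats=4):
        timings = []
        for _ in range(warmup + iterations):
            t0 = time.time()
            for _ in range(inner_repeats):   # lengthen each timed iteration
                workload()
            timings.append(time.time() - t0)
        return timings[warmup:]              # discard the warmup iterations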

diff --git a/own/bm_krakatau.py b/own/bm_krakatau.py
--- a/own/bm_krakatau.py
+++ b/own/bm_krakatau.py
@@ -47,19 +47,22 @@
 source = javaclass.generateAST(c, makeGraph).print_()
 
 
+WARMUP_ITERATIONS = 30 # Krakatau needs a number of iterations to warmup...
+
 def main(n):
 l = []
 old_stdout = sys.stdout
 sys.stdout = cStringIO.StringIO()
 try:
-for i in range(n):
+for i in range(WARMUP_ITERATIONS + n):
 t0 = time.time()
-decompileClass()
+for j in range(4):
+decompileClass()
 time_elapsed = time.time() - t0
 l.append(time_elapsed)
 finally:
 sys.stdout = old_stdout
-return l
+return l[WARMUP_ITERATIONS:]
 
 if __name__ == "__main__":
 parser = optparse.OptionParser(


[pypy-commit] benchmarks default: Stop Krakatau from using cache files.

2014-12-11 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r295:d1adfc92d840
Date: 2014-12-11 12:35 +
http://bitbucket.org/pypy/benchmarks/changeset/d1adfc92d840/

Log:Stop Krakatau from using cache files.

diff --git a/lib/Krakatau/Krakatau/stdcache.py 
b/lib/Krakatau/Krakatau/stdcache.py
--- a/lib/Krakatau/Krakatau/stdcache.py
+++ b/lib/Krakatau/Krakatau/stdcache.py
@@ -6,11 +6,9 @@
 self.env = env
 self.filename = filename
 
-try:
-with open(self.filename, 'rb') as f:
-fdata = f.read()
-except IOError:
-fdata = ''
+# XXX for a benchmark, we don't ever want to use a cache, so we simply
+# don't load data from a cache file (even if it exists)
+fdata = ''
 
 #Note, we assume \n will never appear in a class name. This should be true for classes in the Java package,
 #but isn't necessarily true for user defined classes (Which we don't cache anyway)
@@ -46,4 +44,4 @@
 class_ = self.env.getClass(name, partial=True)
 if shouldCache(name):
 self._cache_info(class_)
-return class_.flags
\ No newline at end of file
+return class_.flags


[pypy-commit] benchmarks default: Don't read/write from a cache file.

2014-12-11 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r298:2a17a0177782
Date: 2014-12-11 14:50 +
http://bitbucket.org/pypy/benchmarks/changeset/2a17a0177782/

Log:Don't read/write from a cache file.

Without this, the benchmark's behaviour is (even more) non-
deterministic from one run to another.

diff --git a/own/krakatau/Krakatau/Krakatau/stdcache.py 
b/own/krakatau/Krakatau/Krakatau/stdcache.py
--- a/own/krakatau/Krakatau/Krakatau/stdcache.py
+++ b/own/krakatau/Krakatau/Krakatau/stdcache.py
@@ -6,11 +6,14 @@
 self.env = env
 self.filename = filename
 
-try:
-with open(self.filename, 'rb') as f:
-fdata = f.read()
-except IOError:
-fdata = ''
+# XXX for a benchmark, we don't ever want to use a cache, so we simply
+# don't load data from a cache file (even if it exists)
+fdata = ''
+#try:
+#with open(self.filename, 'rb') as f:
+#fdata = f.read()
+#except IOError:
+#fdata = ''
 
 #Note, we assume \n will never appear in a class name. This should be true for classes in the Java package,
 #but isn't necessarily true for user defined classes (Which we don't cache anyway)
@@ -24,8 +27,8 @@
 newvals = class_.getSuperclassHierarchy(), class_.flags 
 self.data[class_.name] = newvals 
 writedata = ';'.join(','.join(x) for x in newvals)
-with open(self.filename, 'ab') as f:
-f.write(writedata + '\n')
+#with open(self.filename, 'ab') as f:
+#f.write(writedata + '\n')
 print class_.name, 'cached'
 
 def isCached(self, name): return name in self.data
@@ -46,4 +49,4 @@
 class_ = self.env.getClass(name, partial=True)
 if shouldCache(name):
 self._cache_info(class_)
-return class_.flags
\ No newline at end of file
+return class_.flags


[pypy-commit] benchmarks min_5_secs: Merge default.

2014-12-12 Thread ltratt
Author: Laurence Tratt 
Branch: min_5_secs
Changeset: r299:e3aea38bffa2
Date: 2014-12-12 17:47 +
http://bitbucket.org/pypy/benchmarks/changeset/e3aea38bffa2/

Log:Merge default.

diff too long, truncating to 2000 out of 119576 lines

diff --git a/benchmarks.py b/benchmarks.py
--- a/benchmarks.py
+++ b/benchmarks.py
@@ -82,7 +82,8 @@
  'spectral-norm', 'chaos', 'telco', 'go', 'pyflate-fast',
  'raytrace-simple', 'crypto_pyaes', 'bm_mako', 'bm_chameleon',
  'json_bench', 'pidigits', 'hexiom2', 'eparse', 'deltablue',
- 'bm_dulwich_log']:
+ 'bm_dulwich_log', 'bm_krakatau', 'bm_mdp', 'pypy_interp',
+ 'bm_icbd']:
 _register_new_bm(name, name, globals(), **opts.get(name, {}))
 
 for name in ['names', 'iteration', 'tcp', 'pb', ]:#'web']:#, 'accepts']:
diff --git a/lib/Krakatau/Documentation/assembler.txt 
b/lib/Krakatau/Documentation/assembler.txt
deleted file mode 100644
--- a/lib/Krakatau/Documentation/assembler.txt
+++ /dev/null
@@ -1,88 +0,0 @@
-Krakatau Assembler Syntax
-
-This guide is intended to help write bytecode assembly files for use with the 
Krakatau assembler. It assumes that you are already familiar with the JVM 
classfile format and how to write bytecode. If not, you can find a simple 
tutorial to writing bytecode at 
https://greyhat.gatech.edu/wiki/index.php?title=Java_Bytecode_Tutorial. You can 
also find some examples of assembler files in the examples directory.
-
-Krakatau syntax is largely backwards compatible with the classic Jasmin 
assembler syntax. In a couple of places, backwards compatibility is broken 
either by the introduction of new keywords or to fix ambiguities in the Jasmin 
syntax. However, Krakatau is not necessarily compatible with the extensions 
introduced by JasminXT.
-
-The basic format for an assembler file consists of a list of classfile 
entries. Each entry will result in the generation of a seperate classfile, so a 
single assembly file can contain multiple classes where convienent. These 
entries are completely independent - mutiple classes never share constant pool 
entries, fields, methods, or directives, even the version directive. Each one 
has the format
-
-.bytecode major minor  (optional)
-class directives
-.class classref
-.super classref
-interface declarations
-class directives
-topitems
-.end class
-
-The .end class on the final entry may be ommitted. So the simplest possible 
assembler file to declare a class named Foo would be
-
-.class Foo
-.super java/lang/Object
-
-To declare three classes A, B, and C in the same file with B and C inheriting 
from A and different versions, you could do
-
-.class A
-.super java/lang/Object
-.end class
-.class B
-.super A
-.end class
-.class C
-.super A
-
-The classfile version is specified by the .bytecode directive. It is specified 
by major, minor, a pair of decimal integers. If ommitted, the default is 
version 49.0. So the following is equivalent to the earlier example
-
-.bytecode 49 0
-.class Foo
-.super java/lang/Object
-
-Other class directives include .runtimevisible, .runtimeinvisible, .signature, 
.attribute, .source, .inner, .innerlength, and .enclosing. These are used to 
control the attributes of the class and will be covered later.
-
-Topitems are the actual meat of the class. There are three types: fields, 
methods, and constant definitions. The last is unique to Krakatau and is 
closely related to the rest of the syntax. In Krakatau, there are multiple ways 
to specify a constant pool entry. The most common are via WORD tokens, symbolic 
references and numerical references. The later constist of square brackets with 
lowercase alphanumerics and underscores inside.
-
-When you specify .class Foo, the string Foo isn't directly included in the 
output. The classfile format says that the class field is actually a two byte 
index into the constant pool of the classfile. This points to a Class_info 
which points to a Utf8_info which holds the actual name of the class. 
Therefore, Krakatau implicitly creates constant pool entries and inserts the 
appropriate references. But this process can be controlled more directly.
-
-Instead of writing
-.class Foo
-.super java/lang/Object
-
-You could explicitly write out all the classfile references as follows
-
-.class [foocls]
-.super [objcls]
-
-.const [foocls] = Class [fooutf]
-.const [fooutf] = Utf8 Foo
-.const [objcls] = Class [objutf]
-.const [objutf] = Utf8 java/lang/Object
-
-There are two types of references. If the contents are a decimal int, then it 
is a direct numerical reference to a particular slot in the constant pool. You 
are responsible for making sure that everything is consistent and that the 
contents of that slot are valid. This option is most useful for specifiying the 
null entry [0]. For example, to express the Object class itself, one would do
-
-.class java/lang/Object
-.super [0]
-
-If the contents are any other nonempty lowercase alphanumeric + underscores 

[pypy-commit] pypy default: (arigo, ltratt) Avoid allocating memory when ordereddicts have spare capacity.

2014-12-15 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r74939:7ff0e531c521
Date: 2014-12-15 15:35 +
http://bitbucket.org/pypy/pypy/changeset/7ff0e531c521/

Log:(arigo, ltratt) Avoid allocating memory when ordereddicts have spare
capacity.

Previously, dicts could yo-yo in size if someone continually added
and removed elements, but the total number of live elements remained
more-or-less constant. This patch changes things so that compaction
of entries happens more often than shrinking of memory. Put another
way, we only shrink the size allocated to an ordereddict when it is
extremely sparsely populated. This reduces GC pressure quite a bit.
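
Schematically (an editor's sketch of the thresholds used in the diff
below, not the rordereddict code itself): compaction happens once at
least half of the ever-used entries are dead, and the allocation is
only shrunk when at least three quarters of it is dead:

    def needs_compaction(num_live_items, num_used_items):
        # >= 50% of the entries ever written are dead
        return num_live_items < num_used_items // 2

    def shrink_while_compacting(num_live_items, allocated_entries):
        # >= 75% of the allocated entries are dead
        return num_live_items < allocated_entries // 4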

diff --git a/rpython/rtyper/lltypesystem/rordereddict.py 
b/rpython/rtyper/lltypesystem/rordereddict.py
--- a/rpython/rtyper/lltypesystem/rordereddict.py
+++ b/rpython/rtyper/lltypesystem/rordereddict.py
@@ -575,7 +575,11 @@
 
 @jit.dont_look_inside
 def ll_dict_grow(d):
-if d.num_items < d.num_used_items // 4:
+if d.num_items < d.num_used_items // 2:
+# At least 50% of the allocated entries are dead, so perform a
+# compaction. If ll_dict_remove_deleted_items detects that over
+# 75% of allocated entries are dead, then it will also shrink the
+# memory allocated at the same time as doing a compaction.
 ll_dict_remove_deleted_items(d)
 return True
 
@@ -594,8 +598,10 @@
 return False
 
 def ll_dict_remove_deleted_items(d):
-new_allocated = _overallocate_entries_len(d.num_items)
-if new_allocated < len(d.entries) // 2:
+if d.num_items < len(d.entries) // 4:
+# At least 75% of the allocated entries are dead, so shrink the memory
+# allocated as well as doing a compaction.
+new_allocated = _overallocate_entries_len(d.num_items)
 newitems = lltype.malloc(lltype.typeOf(d).TO.entries.TO, new_allocated)
 else:
 newitems = d.entries


[pypy-commit] pypy default: Compact dictionaries that have very few live items left in them.

2014-12-15 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r74945:22ee57670e0d
Date: 2014-12-15 16:37 +
http://bitbucket.org/pypy/pypy/changeset/22ee57670e0d/

Log:Compact dictionaries that have very few live items left in them.

For some reason, we copied CPython's hard-to-defend behaviour here.
Replace it with a (very) conservative check that will only shrink
the memory allocated for a dictionary if it has very few items left.
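
The new check on the delete path, as a sketch (constants follow the
diff below; the DICT_INITSIZE value is an assumption):

    DICT_INITSIZE = 8   # assumed minimal table size, for illustration

    def consider_shrinking_after_delete(num_live_items, allocated_entries):
        # at least 87.5% of the allocated entries are dead
        return num_live_items + DICT_INITSIZE <= allocated_entries / 8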

diff --git a/rpython/rtyper/lltypesystem/rordereddict.py 
b/rpython/rtyper/lltypesystem/rordereddict.py
--- a/rpython/rtyper/lltypesystem/rordereddict.py
+++ b/rpython/rtyper/lltypesystem/rordereddict.py
@@ -661,19 +661,11 @@
 entry.key = lltype.nullptr(ENTRY.key.TO)
 if ENTRIES.must_clear_value:
 entry.value = lltype.nullptr(ENTRY.value.TO)
-#
-# The rest is commented out: like CPython we no longer shrink the
-# dictionary here.  It may shrink later if we try to append a number
-# of new items to it.  Unsure if this behavior was designed in
-# CPython or is accidental.  A design reason would be that if you
-# delete all items in a dictionary (e.g. with a series of
-# popitem()), then CPython avoids shrinking the table several times.
-#num_entries = len(d.entries)
-#if num_entries > DICT_INITSIZE and d.num_items <= num_entries / 4:
-#ll_dict_resize(d)
-# A previous xxx: move the size checking and resize into a single
-# call which is opaque to the JIT when the dict isn't virtual, to
-# avoid extra branches.
+
+# If the dictionary is at least 87.5% dead items, then consider shrinking
+# it.
+if d.num_live_items + DICT_INITSIZE <= len(d.entries) / 8:
+ll_dict_resize(d)
 
 def ll_dict_resize(d):
 # make a 'new_size' estimate and shrink it if there are many
___
pypy-commit mailing list
pypy-commit@python.org
https://mail.python.org/mailman/listinfo/pypy-commit


[pypy-commit] pypy default: Merge.

2014-12-15 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r74947:acbf6e9532c4
Date: 2014-12-15 17:52 +
http://bitbucket.org/pypy/pypy/changeset/acbf6e9532c4/

Log:Merge.

diff --git a/rpython/rlib/rgc.py b/rpython/rlib/rgc.py
--- a/rpython/rlib/rgc.py
+++ b/rpython/rlib/rgc.py
@@ -330,6 +330,20 @@
 keepalive_until_here(newp)
 return newp
 
+@jit.dont_look_inside
+@specialize.ll()
+def ll_arrayclear(p):
+# Equivalent to memset(array, 0).  Only for GcArray(primitive-type) for now.
+from rpython.rlib.objectmodel import keepalive_until_here
+
+length = len(p)
+ARRAY = lltype.typeOf(p).TO
+offset = llmemory.itemoffsetof(ARRAY, 0)
+dest_addr = llmemory.cast_ptr_to_adr(p) + offset
+llmemory.raw_memclear(dest_addr, llmemory.sizeof(ARRAY.OF) * length)
+keepalive_until_here(p)
+
+
 def no_release_gil(func):
 func._dont_inline_ = True
 func._no_release_gil_ = True
diff --git a/rpython/rlib/test/test_rgc.py b/rpython/rlib/test/test_rgc.py
--- a/rpython/rlib/test/test_rgc.py
+++ b/rpython/rlib/test/test_rgc.py
@@ -158,6 +158,16 @@
 assert a2[2].x == 3
 assert a2[2].y == 15
 
+def test_ll_arrayclear():
+TYPE = lltype.GcArray(lltype.Signed)
+a1 = lltype.malloc(TYPE, 10)
+for i in range(10):
+a1[i] = 100 + i
+rgc.ll_arrayclear(a1)
+assert len(a1) == 10
+for i in range(10):
+assert a1[i] == 0
+
 def test__contains_gcptr():
 assert not rgc._contains_gcptr(lltype.Signed)
 assert not rgc._contains_gcptr(
diff --git a/rpython/rtyper/lltypesystem/rordereddict.py 
b/rpython/rtyper/lltypesystem/rordereddict.py
--- a/rpython/rtyper/lltypesystem/rordereddict.py
+++ b/rpython/rtyper/lltypesystem/rordereddict.py
@@ -72,6 +72,8 @@
 'must_clear_value': (isinstance(DICTVALUE, lltype.Ptr)
  and DICTVALUE._needsgc()),
 }
+if getattr(ll_eq_function, 'no_direct_compare', False):
+entrymeths['no_direct_compare'] = True
 
 # * the key
 entryfields.append(("key", DICTKEY))
@@ -416,6 +418,7 @@
 TYPE_LONG  = lltype.Unsigned
 
 def ll_malloc_indexes_and_choose_lookup(d, n):
+# keep in sync with ll_clear_indexes() below
 if n <= 256:
 d.indexes = lltype.cast_opaque_ptr(llmemory.GCREF,
lltype.malloc(DICTINDEX_BYTE.TO, n,
@@ -437,6 +440,16 @@
  zero=True))
 d.lookup_function_no = FUNC_LONG
 
+def ll_clear_indexes(d, n):
+if n <= 256:
+rgc.ll_arrayclear(lltype.cast_opaque_ptr(DICTINDEX_BYTE, d.indexes))
+elif n <= 65536:
+rgc.ll_arrayclear(lltype.cast_opaque_ptr(DICTINDEX_SHORT, d.indexes))
+elif IS_64BIT and n <= 2 ** 32:
+rgc.ll_arrayclear(lltype.cast_opaque_ptr(DICTINDEX_INT, d.indexes))
+else:
+rgc.ll_arrayclear(lltype.cast_opaque_ptr(DICTINDEX_LONG, d.indexes))
+
 def ll_call_insert_clean_function(d, hash, i):
 DICT = lltype.typeOf(d).TO
 if d.lookup_function_no == FUNC_BYTE:
@@ -605,6 +618,11 @@
 newitems = lltype.malloc(lltype.typeOf(d).TO.entries.TO, new_allocated)
 else:
 newitems = d.entries
+# The loop below does a lot of writes into 'newitems'.  It's a better
+# idea to do a single gc_writebarrier rather than activating the
+# card-by-card logic (worth 11% in microbenchmarks).
+from rpython.rtyper.lltypesystem.lloperation import llop
+llop.gc_writebarrier(lltype.Void, newitems)
 #
 ENTRIES = lltype.typeOf(d).TO.entries.TO
 ENTRY = ENTRIES.OF
@@ -702,13 +720,17 @@
 ll_dict_reindex(d, new_size)
 
 def ll_dict_reindex(d, new_size):
-ll_malloc_indexes_and_choose_lookup(d, new_size)
+if bool(d.indexes) and _ll_len_of_d_indexes(d) == new_size:
+ll_clear_indexes(d, new_size)   # hack: we can reuse the same array
+else:
+ll_malloc_indexes_and_choose_lookup(d, new_size)
 d.resize_counter = new_size * 2 - d.num_live_items * 3
 assert d.resize_counter > 0
 #
 entries = d.entries
 i = 0
-while i < d.num_ever_used_items:
+ibound = d.num_ever_used_items
+while i < ibound:
 if entries.valid(i):
 hash = entries.hash(i)
 ll_call_insert_clean_function(d, hash, i)
diff --git a/rpython/rtyper/test/test_rordereddict.py 
b/rpython/rtyper/test/test_rordereddict.py
--- a/rpython/rtyper/test/test_rordereddict.py
+++ b/rpython/rtyper/test/test_rordereddict.py
@@ -292,9 +292,6 @@
 res = self.interpret(func, [5])
 assert res == 6
 
-def test_dict_with_SHORT_keys(self):
-py.test.skip("I don't want to edit this file on two branches")
-
 def test_memoryerror_should_not_insert(self):
 py.test.skip("I don't want to edit this file on two branches")
 


[pypy-commit] pypy default: Rename ordereddict attributes to be less confusing.

2014-12-15 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r74944:7e75a782be2c
Date: 2014-12-15 16:33 +
http://bitbucket.org/pypy/pypy/changeset/7e75a782be2c/

Log:Rename ordereddict attributes to be less confusing.

diff --git a/rpython/rtyper/lltypesystem/rordereddict.py 
b/rpython/rtyper/lltypesystem/rordereddict.py
--- a/rpython/rtyper/lltypesystem/rordereddict.py
+++ b/rpython/rtyper/lltypesystem/rordereddict.py
@@ -28,8 +28,8 @@
 #}
 #
 #struct dicttable {
-#int num_items;
-#int num_used_items;
+#int num_live_items;
+#int num_ever_used_items;
 #int resize_counter;
 #{byte, short, int, long} *indexes;
 #dictentry *entries;
@@ -113,8 +113,8 @@
 DICTENTRY = lltype.Struct("odictentry", *entryfields)
 DICTENTRYARRAY = lltype.GcArray(DICTENTRY,
 adtmeths=entrymeths)
-fields =  [ ("num_items", lltype.Signed),
-("num_used_items", lltype.Signed),
+fields =  [ ("num_live_items", lltype.Signed),
+("num_ever_used_items", lltype.Signed),
 ("resize_counter", lltype.Signed),
 ("indexes", llmemory.GCREF),
 ("lookup_function_no", lltype.Signed),
@@ -492,11 +492,11 @@
 return objectmodel.hlinvoke(DICT.r_rdict_eqfn, d.fnkeyeq, key1, key2)
 
 def ll_dict_len(d):
-return d.num_items
+return d.num_live_items
 
 def ll_dict_bool(d):
 # check if a dict is True, allowing for None
-return bool(d) and d.num_items != 0
+return bool(d) and d.num_live_items != 0
 
 def ll_dict_getitem(d, key):
 index = d.lookup_function(d, key, d.keyhash(key), FLAG_LOOKUP)
@@ -519,18 +519,18 @@
 entry = d.entries[i]
 entry.value = value
 else:
-if len(d.entries) == d.num_used_items:
+if len(d.entries) == d.num_ever_used_items:
 if ll_dict_grow(d):
-ll_call_insert_clean_function(d, hash, d.num_used_items)
-entry = d.entries[d.num_used_items]
+ll_call_insert_clean_function(d, hash, d.num_ever_used_items)
+entry = d.entries[d.num_ever_used_items]
 entry.key = key
 entry.value = value
 if hasattr(ENTRY, 'f_hash'):
 entry.f_hash = hash
 if hasattr(ENTRY, 'f_valid'):
 entry.f_valid = True
-d.num_used_items += 1
-d.num_items += 1
+d.num_ever_used_items += 1
+d.num_live_items += 1
 rc = d.resize_counter - 3
 if rc <= 0:
 ll_dict_resize(d)
@@ -540,16 +540,16 @@
 
 def _ll_dict_insertclean(d, key, value, hash):
 ENTRY = lltype.typeOf(d.entries).TO.OF
-ll_call_insert_clean_function(d, hash, d.num_used_items)
-entry = d.entries[d.num_used_items]
+ll_call_insert_clean_function(d, hash, d.num_ever_used_items)
+entry = d.entries[d.num_ever_used_items]
 entry.key = key
 entry.value = value
 if hasattr(ENTRY, 'f_hash'):
 entry.f_hash = hash
 if hasattr(ENTRY, 'f_valid'):
 entry.f_valid = True
-d.num_used_items += 1
-d.num_items += 1
+d.num_ever_used_items += 1
+d.num_live_items += 1
 rc = d.resize_counter - 3
 d.resize_counter = rc
 
@@ -575,7 +575,7 @@
 
 @jit.dont_look_inside
 def ll_dict_grow(d):
-if d.num_items < d.num_used_items // 2:
+if d.num_live_items < d.num_ever_used_items // 2:
 # At least 50% of the allocated entries are dead, so perform a
 # compaction. If ll_dict_remove_deleted_items detects that over
 # 75% of allocated entries are dead, then it will also shrink the
@@ -598,10 +598,10 @@
 return False
 
 def ll_dict_remove_deleted_items(d):
-if d.num_items < len(d.entries) // 4:
+if d.num_live_items < len(d.entries) // 4:
 # At least 75% of the allocated entries are dead, so shrink the memory
 # allocated as well as doing a compaction.
-new_allocated = _overallocate_entries_len(d.num_items)
+new_allocated = _overallocate_entries_len(d.num_live_items)
 newitems = lltype.malloc(lltype.typeOf(d).TO.entries.TO, new_allocated)
 else:
 newitems = d.entries
@@ -610,7 +610,7 @@
 ENTRY = ENTRIES.OF
 isrc = 0
 idst = 0
-isrclimit = d.num_used_items
+isrclimit = d.num_ever_used_items
 while isrc < isrclimit:
 if d.entries.valid(isrc):
 src = d.entries[isrc]
@@ -624,8 +624,8 @@
 dst.f_valid = True
 idst += 1
 isrc += 1
-assert d.num_items == idst
-d.num_used_items = idst
+assert d.num_live_items == idst
+d.num_ever_used_items = idst
 if ((ENTRIES.must_clear_key or ENTRIES.must_clear_value) and
 d.entries == newitems):
 # must clear the extra entries: they may contain valid pointers
@@ -652,7 +652,7 @@
 @jit.look_inside_iff(lambda d, i: jit.isvirtual(d) and jit.isconstant(i))
 def _ll_dict_del(d, i

[pypy-commit] pypy default: Reuse deleted items at the end of an orderedarray.

2014-12-15 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r74946:16ce1a0def4d
Date: 2014-12-15 17:03 +
http://bitbucket.org/pypy/pypy/changeset/16ce1a0def4d/

Log:Reuse deleted items at the end of an orderedarray.

If a user deletes item(s) at the end of an ordereddict's entry array,
those entries can be immediately reused rather than marked as dead and
compacted later.
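
A sketch of the reuse logic (mirroring the diff below, with entries
modelled as a plain list whose dead slots are None): after deleting the
last used entry, scan backwards over dead entries so the trailing slots
become reusable:

    def reclaim_trailing_dead(entries, num_ever_used_items, deleted_index):
        if deleted_index == num_ever_used_items - 1:
            i = num_ever_used_items - 2
            while i >= 0 and entries[i] is None:   # skip dead slots behind it
                i -= 1
            num_ever_used_items = i + 1
        return num_ever_used_items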

diff --git a/rpython/rtyper/lltypesystem/rordereddict.py 
b/rpython/rtyper/lltypesystem/rordereddict.py
--- a/rpython/rtyper/lltypesystem/rordereddict.py
+++ b/rpython/rtyper/lltypesystem/rordereddict.py
@@ -662,6 +662,19 @@
 if ENTRIES.must_clear_value:
 entry.value = lltype.nullptr(ENTRY.value.TO)
 
+if index == d.num_ever_used_items - 1:
+# The last element of the ordereddict has been deleted. Instead of
+# simply marking the item as dead, we can safely reuse it. Since it's
+# also possible that there are more dead items immediately behind the
+# last one, we reclaim all the dead items at the end of the ordereditem
+# at the same point.
+i = d.num_ever_used_items - 2
+while i >= 0 and not d.entries.valid(i):
+i -= 1
+j = i + 1
+assert j >= 0
+d.num_ever_used_items = j
+
 # If the dictionary is at least 87.5% dead items, then consider shrinking
 # it.
 if d.num_live_items + DICT_INITSIZE <= len(d.entries) / 8:


[pypy-commit] pypy all_ordered_dicts: Merge default.

2014-12-16 Thread ltratt
Author: Laurence Tratt 
Branch: all_ordered_dicts
Changeset: r74957:a8e941c88899
Date: 2014-12-16 16:21 +
http://bitbucket.org/pypy/pypy/changeset/a8e941c88899/

Log:Merge default.

diff --git a/lib-python/2.7/test/test_collections.py 
b/lib-python/2.7/test/test_collections.py
--- a/lib-python/2.7/test/test_collections.py
+++ b/lib-python/2.7/test/test_collections.py
@@ -1108,6 +1108,16 @@
 od.popitem()
 self.assertEqual(len(od), 0)
 
+def test_popitem_first(self):
+pairs = [('c', 1), ('b', 2), ('a', 3), ('d', 4), ('e', 5), ('f', 6)]
+shuffle(pairs)
+od = OrderedDict(pairs)
+while pairs:
+self.assertEqual(od.popitem(last=False), pairs.pop(0))
+with self.assertRaises(KeyError):
+od.popitem(last=False)
+self.assertEqual(len(od), 0)
+
 def test_pop(self):
 pairs = [('c', 1), ('b', 2), ('a', 3), ('d', 4), ('e', 5), ('f', 6)]
 shuffle(pairs)
@@ -1179,7 +1189,11 @@
 od = OrderedDict(pairs)
 # yaml.dump(od) -->
# '!!python/object/apply:__main__.OrderedDict\n- - [a, 1]\n  - [b, 2]\n'
-self.assertTrue(all(type(pair)==list for pair in od.__reduce__()[1]))
+
+# PyPy bug fix: added [0] at the end of this line, because the
+# test is really about the 2-tuples that need to be 2-lists
+# inside the list of 6 of them
+self.assertTrue(all(type(pair)==list for pair in od.__reduce__()[1][0]))
 
 def test_reduce_not_too_fat(self):
 # do not save instance dictionary if not needed
@@ -1189,6 +1203,16 @@
 od.x = 10
 self.assertEqual(len(od.__reduce__()), 3)
 
+def test_reduce_exact_output(self):
+# PyPy: test that __reduce__() produces the exact same answer as
+# CPython does, even though in the 'all_ordered_dicts' branch we
+# have to emulate it.
+pairs = [['c', 1], ['b', 2], ['d', 4]]
+od = OrderedDict(pairs)
+self.assertEqual(od.__reduce__(), (OrderedDict, (pairs,)))
+od.x = 10
+self.assertEqual(od.__reduce__(), (OrderedDict, (pairs,), {'x': 10}))
+
 def test_repr(self):
od = OrderedDict([('c', 1), ('b', 2), ('a', 3), ('d', 4), ('e', 5), ('f', 6)])
 self.assertEqual(repr(od),


[pypy-commit] pypy all_ordered_dicts: Make all dictionaries be ordered by default.

2014-12-16 Thread ltratt
Author: Laurence Tratt 
Branch: all_ordered_dicts
Changeset: r74956:91b77ba66c70
Date: 2014-12-16 14:48 +
http://bitbucket.org/pypy/pypy/changeset/91b77ba66c70/

Log:Make all dictionaries be ordered by default.

This could be done in a less ugly way in the long term.

diff --git a/rpython/annotator/model.py b/rpython/annotator/model.py
--- a/rpython/annotator/model.py
+++ b/rpython/annotator/model.py
@@ -389,6 +389,8 @@
 assert isinstance(dct2, SomeOrderedDict), "OrderedDict.update(dict) 
not allowed"
 dct1.dictdef.union(dct2.dictdef)
 
+SomeDict=SomeOrderedDict
+
 
 class SomeIterator(SomeObject):
 "Stands for an iterator returning objects from a given container."


[pypy-commit] pypy more_strategies: Provide fast paths in find for integer and float strategy lists.

2013-11-07 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r67876:599ed4285a6d
Date: 2013-11-07 21:56 +
http://bitbucket.org/pypy/pypy/changeset/599ed4285a6d/

Log:Provide fast paths in find for integer and float strategy lists.

This patch affects "x in l" and "l.index(x)" where l is a list. It
leaves the expected common path (searching for an integer in an
integer list; for a float in a float list) unchanged. However,
comparisons of other types are significantly sped up. In some cases,
we can use the type of an object to immediately prove that it can't
be in the list (e.g. a user object which doesn't override __eq__
can't possibly be in an integer or float list) and return
immediately; in others (e.g. when searching for a float in an
integer list), we can convert the input type into a primitive that
allows significantly faster comparisons.

As rough examples, searching for a float in an integer list is
approximately 3x faster; for a long in an integer list approximately
10x faster; searching for a string in an integer list returns
immediately, no matter the size of the list.
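
A quick illustration of the cases described above, in terms of the
plain Python semantics the fast paths must preserve:

    l = list(range(1000000))   # an int-strategy list

    1.0 in l        # True:  converted to an int, compared primitively
    10**100 in l    # False: overflows a machine int, can't be in the list
    "5" in l        # False: a string never equals an int, answered at once

    class C(object):            # no __eq__, compares by identity
        pass
    C() in l        # False: provably absent without scanning the list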

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -19,6 +19,7 @@
 from pypy.objspace.std import slicetype
 from pypy.objspace.std.floatobject import W_FloatObject
 from pypy.objspace.std.intobject import W_IntObject
+from pypy.objspace.std.longobject import W_LongObject
 from pypy.objspace.std.iterobject import (W_FastListIterObject,
 W_ReverseSeqIterObject)
 from pypy.objspace.std.sliceobject import W_SliceObject, normalize_simple_slice
@@ -1537,6 +1538,47 @@
 def getitems_int(self, w_list):
 return self.unerase(w_list.lstorage)
 
+_orig_find = find
+def find(self, w_list, w_obj, start, stop):
+# Find an element in this integer list. For integers, floats, and longs,
+# we can use primitive comparisons (possibly after a conversion to an
+# int). For other user types (strings and user objects which don't play
+# funny tricks with __eq__ etc.) we can prove immediately that an object
+# could not be in the list and return.
+#
+# Note: although it might seem we want to do the clever tricks first,
+# we expect that the common case is searching for an integer in an
+# integer list. The clauses of this if are thus ordered in likely order
+# of frequency of use.
+
+w_objt = type(w_obj)
+if w_objt is W_IntObject:
+return self._safe_find(w_list, self.unwrap(w_obj), start, stop)
+elif w_objt is W_FloatObject or w_objt is W_LongObject:
+if w_objt is W_FloatObject:
+# Asking for an int from a W_FloatObject can return either a
+# W_IntObject or W_LongObject, so we then need to disambiguate
+# between the two.
+w_obj = self.space.int(w_obj)
+w_objt = type(w_obj)
+
+if w_objt is W_IntObject:
+intv = self.unwrap(w_obj)
+else:
+assert w_objt is W_LongObject
+try:
+intv = w_obj.toint()
+except OverflowError:
+# Longs which overflow can't possibly be found in an integer
+# list.
+raise ValueError
+return self._safe_find(w_list, intv, start, stop)
+elif w_objt is W_StringObject or w_objt is W_UnicodeObject:
+raise ValueError
+elif self.space.type(w_obj).compares_by_identity():
+raise ValueError
+return self._orig_find(w_list, w_obj, start, stop)
+
 
 _base_extend_from_list = _extend_from_list
 
@@ -1581,6 +1623,19 @@
 def list_is_correct_type(self, w_list):
 return w_list.strategy is self.space.fromcache(FloatListStrategy)
 
+_orig_find = find
+def find(self, w_list, w_obj, start, stop):
+w_objt = type(w_obj)
+if w_objt is W_FloatObject:
+return self._safe_find(w_list, self.unwrap(w_obj), start, stop)
+elif w_objt is W_IntObject or w_objt is W_LongObject:
+return self._safe_find(w_list, w_obj.float_w(self.space), start, stop)
+elif w_objt is W_StringObject or w_objt is W_UnicodeObject:
+raise ValueError
+elif self.space.type(w_obj).compares_by_identity():
+raise ValueError
+return self._orig_find(w_list, w_obj, start, stop)
+
 def sort(self, w_list, reverse):
 l = self.unerase(w_list.lstorage)
 sorter = FloatSort(l, len(l))
diff --git a/pypy/objspace/std/test/test_listobject.py 
b/pypy/objspace/std/test/test_listobject.py
--- a/pypy/objspace/std/test/test_listobject.py
+++ b/pypy/objspace/std/test/test_listobject.py
@@ -457,6 +457,39 @@

[pypy-commit] pypy more_strategies: Treat floats in integer lists more carefully.

2013-11-07 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r67877:b18acdb9aaf4
Date: 2013-11-07 23:11 +
http://bitbucket.org/pypy/pypy/changeset/b18acdb9aaf4/

Log:Treat floats in integer lists more carefully.

Previously floats were all rounded off, which led to incorrect
semantics for any float with a fractional component. A float which
doesn't compare equal to its integer conversion can never match
against any integer, so simply bail out when such a float is
encountered. Bug pointed out by Amaury Forgeot d'Arc.
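
As a plain-Python sketch of the intended semantics (the names here are
illustrative only, not the RPython code in the diff below):

    def int_list_contains(lst, x):
        # A float with a fractional part can never equal any integer, so
        # bail out immediately instead of rounding it off.
        if isinstance(x, float):
            as_int = int(x)
            if as_int != x:
                return False
            x = as_int
        return x in lst

    assert int_list_contains([1, 2, 3], 1.0)
    assert not int_list_contains([1, 2, 3], 1.1)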

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -1556,10 +1556,17 @@
 return self._safe_find(w_list, self.unwrap(w_obj), start, stop)
 elif w_objt is W_FloatObject or w_objt is W_LongObject:
 if w_objt is W_FloatObject:
+# Floats with a fractional part can never compare True with
+# respect to an integer, so we convert the float to an int and
+# see if it compares True to itself or not. If it doesn't, we
+# can immediately bail out.
+w_objn = self.space.int(w_obj)
+if not self.space.eq_w(w_obj, w_objn):
+raise ValueError
+w_obj = w_objn
 # Asking for an int from a W_FloatObject can return either a
 # W_IntObject or W_LongObject, so we then need to disambiguate
 # between the two.
-w_obj = self.space.int(w_obj)
 w_objt = type(w_obj)
 
 if w_objt is W_IntObject:
diff --git a/pypy/objspace/std/test/test_listobject.py 
b/pypy/objspace/std/test/test_listobject.py
--- a/pypy/objspace/std/test/test_listobject.py
+++ b/pypy/objspace/std/test/test_listobject.py
@@ -457,8 +457,7 @@
 assert l.__contains__(2)
 assert not l.__contains__("2")
 assert l.__contains__(1.0)
-assert l.__contains__(1.1)
-assert l.__contains__(1.9)
+assert not l.__contains__(1.1)
 assert l.__contains__(1L)
 assert not l.__contains__(object())
 assert not l.__contains__(object())


[pypy-commit] pypy more_strategies: Remove unnecessary double type-check.

2013-11-08 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r67881:1f36c73c569a
Date: 2013-11-08 10:29 +
http://bitbucket.org/pypy/pypy/changeset/1f36c73c569a/

Log:Remove unnecessary double type-check.

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -1538,7 +1538,6 @@
 def getitems_int(self, w_list):
 return self.unerase(w_list.lstorage)
 
-_orig_find = find
 def find(self, w_list, w_obj, start, stop):
 # Find an element in this integer list. For integers, floats, and 
longs,
 # we can use primitive comparisons (possibly after a conversion to an
@@ -1584,7 +1583,7 @@
 raise ValueError
 elif self.space.type(w_obj).compares_by_identity():
 raise ValueError
-return self._orig_find(w_list, w_obj, start, stop)
+return ListStrategy.find(self, w_list, w_obj, start, stop)
 
 
 _base_extend_from_list = _extend_from_list
@@ -1630,7 +1629,6 @@
 def list_is_correct_type(self, w_list):
 return w_list.strategy is self.space.fromcache(FloatListStrategy)
 
-_orig_find = find
 def find(self, w_list, w_obj, start, stop):
 w_objt = type(w_obj)
 if w_objt is W_FloatObject:
@@ -1641,7 +1639,7 @@
 raise ValueError
 elif self.space.type(w_obj).compares_by_identity():
 raise ValueError
-return self._orig_find(w_list, w_obj, start, stop)
+return ListStrategy.find(self, w_list, w_obj, start, stop)
 
 def sort(self, w_list, reverse):
 l = self.unerase(w_list.lstorage)


[pypy-commit] pypy more_strategies: Be more conservative about comparing floats within an integer list.

2013-11-08 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r67884:08eb5e457fba
Date: 2013-11-08 16:03 +
http://bitbucket.org/pypy/pypy/changeset/08eb5e457fba/

Log:Be more conservative about comparing floats within an integer list.

Very large floats can have no representation as a machine integer,
even on a 64-bit machine. Rather than encoding lots of clever logic,
be simple and conservative: any float whose representation as an int
compares true to the original float can go through the fast path.
Otherwise use the slow path.
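
The problem case can be seen directly in plain Python; this is just the
observation the paragraph above describes, not code from the patch:

    # Doubles have a 53-bit mantissa, so some large integers have no
    # exact float representation; a naive int()-based fast path would
    # silently change the answer for them.
    assert float(2**53) == 2**53
    assert not (float(2**53 + 1) == 2**53 + 1)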

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -1555,13 +1555,18 @@
 return self._safe_find(w_list, self.unwrap(w_obj), start, stop)
 elif w_objt is W_FloatObject or w_objt is W_LongObject:
 if w_objt is W_FloatObject:
-# Floats with a fractional part can never compare True with
-# respect to an integer, so we convert the float to an int and
-# see if it compares True to itself or not. If it doesn't, we
-# can immediately bail out.
+# We take a conservative approach to floats. Any float which,
+# when converted into an integer compares true to the
+# original float, can be compared using a fast case. When that
+# isn't true, it either means the float is fractional or it's
+# got to the range that doubles can't accurately represent
+# (e.g. float(2**53+1) == 2**53+1 evaluates to False). Rather
+# than encoding potentially platform dependent stuff here, we
+# simply fall back on the slow-case to be sure we're not
+# unintentionally changing number semantics.
 w_objn = self.space.int(w_obj)
 if not self.space.eq_w(w_obj, w_objn):
-raise ValueError
+return ListStrategy.find(self, w_list, w_obj, start, stop)
 w_obj = w_objn
 # Asking for an int from a W_FloatObject can return either a
 # W_IntObject or W_LongObject, so we then need to disambiguate


[pypy-commit] pypy more_strategies: Collapse two identical cases.

2013-11-08 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r67883:31b5f4d5ba4b
Date: 2013-11-08 11:34 +
http://bitbucket.org/pypy/pypy/changeset/31b5f4d5ba4b/

Log:Collapse two identical cases.

No functional change.

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -1579,9 +1579,8 @@
 # list.
 raise ValueError
 return self._safe_find(w_list, intv, start, stop)
-elif w_objt is W_StringObject or w_objt is W_UnicodeObject:
-raise ValueError
-elif self.space.type(w_obj).compares_by_identity():
+elif w_objt is W_StringObject or w_objt is W_UnicodeObject \
+  or self.space.type(w_obj).compares_by_identity():
 raise ValueError
 return ListStrategy.find(self, w_list, w_obj, start, stop)
 
@@ -1635,9 +1634,8 @@
 return self._safe_find(w_list, self.unwrap(w_obj), start, stop)
 elif w_objt is W_IntObject or w_objt is W_LongObject:
 return self._safe_find(w_list, w_obj.float_w(self.space), start, 
stop)
-elif w_objt is W_StringObject or w_objt is W_UnicodeObject:
-raise ValueError
-elif self.space.type(w_obj).compares_by_identity():
+elif w_objt is W_StringObject or w_objt is W_UnicodeObject \
+  or self.space.type(w_obj).compares_by_identity(): 
 raise ValueError
 return ListStrategy.find(self, w_list, w_obj, start, stop)
 


[pypy-commit] pypy more_strategies: Remove pointless print statements.

2013-11-11 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r67959:3a4f3f694fe9
Date: 2013-11-11 15:29 +
http://bitbucket.org/pypy/pypy/changeset/3a4f3f694fe9/

Log:Remove pointless print statements.

Presumably these are long-forgotten debugging aids.

diff --git a/pypy/objspace/std/test/test_listobject.py 
b/pypy/objspace/std/test/test_listobject.py
--- a/pypy/objspace/std/test/test_listobject.py
+++ b/pypy/objspace/std/test/test_listobject.py
@@ -1187,7 +1187,6 @@
 skip("not reliable on top of Boehm")
 class A(object):
 def __del__(self):
-print 'del'
 del lst[:]
 for i in range(10):
 keepalive = []
@@ -1257,7 +1256,6 @@
 (dict, []), (dict, [(5,6)]), (dict, [('x',7)]), (dict, 
[(X,8)]),
 (dict, [(u'x', 7)]),
 ]:
-print base, arg
 class SubClass(base):
 def __iter__(self):
 return iter("foobar")


[pypy-commit] pypy more_strategies: Add IntegerListAscending strategy.

2013-11-13 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r68006:ef5630ce70e3
Date: 2013-11-13 15:26 +
http://bitbucket.org/pypy/pypy/changeset/ef5630ce70e3/

Log:Add IntegerListAscending strategy.

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -896,7 +896,7 @@
 
 def switch_to_correct_strategy(self, w_list, w_item):
 if type(w_item) is W_IntObject:
-strategy = self.space.fromcache(IntegerListStrategy)
+strategy = self.space.fromcache(IntegerListAscendingStrategy)
 elif type(w_item) is W_StringObject:
 strategy = self.space.fromcache(StringListStrategy)
 elif type(w_item) is W_UnicodeObject:
@@ -1010,7 +1010,11 @@
 
 def switch_to_integer_strategy(self, w_list):
 items = self._getitems_range(w_list, False)
-strategy = w_list.strategy = self.space.fromcache(IntegerListStrategy)
+start, step, length = self.unerase(w_list.lstorage)
+if step > 0:
+strategy = w_list.strategy = 
self.space.fromcache(IntegerListAscendingStrategy)
+else:
+strategy = w_list.strategy = 
self.space.fromcache(IntegerListStrategy)
 w_list.lstorage = strategy.erase(items)
 
 def wrap(self, intval):
@@ -1518,6 +1522,25 @@
 def unwrap(self, w_int):
 return self.space.int_w(w_int)
 
+def init_from_list_w(self, w_list, list_w):
+# While unpacking integer elements, also determine whether they're
+# pre-sorted.
+assert len(list_w) > 0
+asc = True
+l = [0] * len(list_w)
+lst = l[0] = self.unwrap(list_w[0])
+for i in range(1, len(list_w)):
+item_w = list_w[i]
+it = self.unwrap(item_w)
+if asc and it < lst:
+asc = False
+l[i] = it
+lst = it
+w_list.lstorage = self.erase(l)
+if asc:
+# The list was already sorted into ascending order.
+w_list.strategy = 
self.space.fromcache(IntegerListAscendingStrategy)
+
 erase, unerase = rerased.new_erasing_pair("integer")
 erase = staticmethod(erase)
 unerase = staticmethod(unerase)
@@ -1526,7 +1549,8 @@
 return type(w_obj) is W_IntObject
 
 def list_is_correct_type(self, w_list):
-return w_list.strategy is self.space.fromcache(IntegerListStrategy)
+return w_list.strategy is self.space.fromcache(IntegerListStrategy) \
+  or w_list.strategy is 
self.space.fromcache(IntegerListAscendingStrategy)
 
 def sort(self, w_list, reverse):
 l = self.unerase(w_list.lstorage)
@@ -1534,6 +1558,8 @@
 sorter.sort()
 if reverse:
 l.reverse()
+else:
+w_list.strategy = 
self.space.fromcache(IntegerListAscendingStrategy)
 
 def getitems_int(self, w_list):
 return self.unerase(w_list.lstorage)
@@ -1611,6 +1637,94 @@
 self.space, storage, self)
 return self._base_setslice(w_list, start, step, slicelength, w_other)
 
+class IntegerListAscendingStrategy(IntegerListStrategy):
+def sort(self, w_list, reverse):
+if reverse:
+self.unerase(w_list.lstorage).reverse()
+w_list.strategy = self.space.fromcache(IntegerListStrategy)
+
+def append(self, w_list, w_item):
+if type(w_item) is W_IntObject:
+l = self.unerase(w_list.lstorage)
+length = len(l)
+item = self.unwrap(w_item)
+if length == 0 or l[length - 1] <= item:
+l.append(item)
+return
+w_list.strategy = self.space.fromcache(IntegerListStrategy)
+IntegerListStrategy.append(self, w_list, w_item)
+
+def insert(self, w_list, index, w_item):
+if type(w_item) is W_IntObject:
+l = self.unerase(w_list.lstorage)
+length = len(l)
+item = self.unwrap(w_item)
+if length == 0 or \
+  ((index == 0 or l[index - 1] <= item) and (index == length or 
l[index] >= item)):
+l.insert(index, item)
+return
+w_list.strategy = self.space.fromcache(IntegerListStrategy)
+IntegerListStrategy.insert(self, w_list, index, w_item)
+
+def _extend_from_list(self, w_list, w_item):
+if type(w_item) is W_ListObject and \
+  w_item.strategy is 
self.space.fromcache(IntegerListAscendingStrategy):
+self_l = self.unerase(w_list.lstorage)
+other_l = self.unerase(w_item.lstorage)
+if len(self_l) == 0 or len(other_l) == 0 or self_l[len(self_l) - 
1] <= other_l[0]:
+self_l.extend(other_l)
+return
+w_list.strategy = self.space.fromcache(IntegerListStrategy)
+IntegerListStrategy._extend_from_list(self,w_list, w_item)
+
+def setitem(self, w_list, index, w_item):
+if type

[pypy-commit] pypy more_strategies: Revert the sorted list experiment.

2013-11-24 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r68313:b80e7392cb94
Date: 2013-11-24 20:17 +
http://bitbucket.org/pypy/pypy/changeset/b80e7392cb94/

Log:Revert the sorted list experiment.

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -896,7 +896,7 @@
 
 def switch_to_correct_strategy(self, w_list, w_item):
 if type(w_item) is W_IntObject:
-strategy = self.space.fromcache(IntegerListAscendingStrategy)
+strategy = self.space.fromcache(IntegerListStrategy)
 elif type(w_item) is W_StringObject:
 strategy = self.space.fromcache(StringListStrategy)
 elif type(w_item) is W_UnicodeObject:
@@ -1010,11 +1010,7 @@
 
 def switch_to_integer_strategy(self, w_list):
 items = self._getitems_range(w_list, False)
-start, step, length = self.unerase(w_list.lstorage)
-if step > 0:
-strategy = w_list.strategy = 
self.space.fromcache(IntegerListAscendingStrategy)
-else:
-strategy = w_list.strategy = 
self.space.fromcache(IntegerListStrategy)
+strategy = w_list.strategy = self.space.fromcache(IntegerListStrategy)
 w_list.lstorage = strategy.erase(items)
 
 def wrap(self, intval):
@@ -1522,25 +1518,6 @@
 def unwrap(self, w_int):
 return self.space.int_w(w_int)
 
-def init_from_list_w(self, w_list, list_w):
-# While unpacking integer elements, also determine whether they're
-# pre-sorted.
-assert len(list_w) > 0
-asc = True
-l = [0] * len(list_w)
-lst = l[0] = self.unwrap(list_w[0])
-for i in range(1, len(list_w)):
-item_w = list_w[i]
-it = self.unwrap(item_w)
-if asc and it < lst:
-asc = False
-l[i] = it
-lst = it
-w_list.lstorage = self.erase(l)
-if asc:
-# The list was already sorted into ascending order.
-w_list.strategy = 
self.space.fromcache(IntegerListAscendingStrategy)
-
 erase, unerase = rerased.new_erasing_pair("integer")
 erase = staticmethod(erase)
 unerase = staticmethod(unerase)
@@ -1549,8 +1526,7 @@
 return type(w_obj) is W_IntObject
 
 def list_is_correct_type(self, w_list):
-return w_list.strategy is self.space.fromcache(IntegerListStrategy) \
-  or w_list.strategy is 
self.space.fromcache(IntegerListAscendingStrategy)
+return w_list.strategy is self.space.fromcache(IntegerListStrategy)
 
 def sort(self, w_list, reverse):
 l = self.unerase(w_list.lstorage)
@@ -1558,8 +1534,6 @@
 sorter.sort()
 if reverse:
 l.reverse()
-else:
-w_list.strategy = 
self.space.fromcache(IntegerListAscendingStrategy)
 
 def getitems_int(self, w_list):
 return self.unerase(w_list.lstorage)
@@ -1637,94 +1611,6 @@
 self.space, storage, self)
 return self._base_setslice(w_list, start, step, slicelength, w_other)
 
-class IntegerListAscendingStrategy(IntegerListStrategy):
-def sort(self, w_list, reverse):
-if reverse:
-self.unerase(w_list.lstorage).reverse()
-w_list.strategy = self.space.fromcache(IntegerListStrategy)
-
-def append(self, w_list, w_item):
-if type(w_item) is W_IntObject:
-l = self.unerase(w_list.lstorage)
-length = len(l)
-item = self.unwrap(w_item)
-if length == 0 or l[length - 1] <= item:
-l.append(item)
-return
-w_list.strategy = self.space.fromcache(IntegerListStrategy)
-IntegerListStrategy.append(self, w_list, w_item)
-
-def insert(self, w_list, index, w_item):
-if type(w_item) is W_IntObject:
-l = self.unerase(w_list.lstorage)
-length = len(l)
-item = self.unwrap(w_item)
-if length == 0 or \
-  ((index == 0 or l[index - 1] <= item) and (index == length or 
l[index] >= item)):
-l.insert(index, item)
-return
-w_list.strategy = self.space.fromcache(IntegerListStrategy)
-IntegerListStrategy.insert(self, w_list, index, w_item)
-
-def _extend_from_list(self, w_list, w_item):
-if type(w_item) is W_ListObject and \
-  w_item.strategy is 
self.space.fromcache(IntegerListAscendingStrategy):
-self_l = self.unerase(w_list.lstorage)
-other_l = self.unerase(w_item.lstorage)
-if len(self_l) == 0 or len(other_l) == 0 or self_l[len(self_l) - 
1] <= other_l[0]:
-self_l.extend(other_l)
-return
-w_list.strategy = self.space.fromcache(IntegerListStrategy)
-IntegerListStrategy._extend_from_list(self,w_list, w_item)
-
-def setitem(self, w_list, index, w_item):
-if type

[pypy-commit] pypy more_strategies: Remove some of the unlikely special cases.

2013-11-26 Thread ltratt
Author: Laurence Tratt 
Branch: more_strategies
Changeset: r68325:422d7a1ecc4d
Date: 2013-11-26 15:45 +
http://bitbucket.org/pypy/pypy/changeset/422d7a1ecc4d/

Log:Remove some of the unlikely special cases.

These don't do any harm, but they are unlikely to trigger very
often. By common consensus, they're probably better off removed.

diff --git a/pypy/objspace/std/listobject.py b/pypy/objspace/std/listobject.py
--- a/pypy/objspace/std/listobject.py
+++ b/pypy/objspace/std/listobject.py
@@ -1584,9 +1584,6 @@
 # list.
 raise ValueError
 return self._safe_find(w_list, intv, start, stop)
-elif w_objt is W_StringObject or w_objt is W_UnicodeObject \
-  or self.space.type(w_obj).compares_by_identity():
-raise ValueError
 return ListStrategy.find(self, w_list, w_obj, start, stop)
 
 
@@ -1639,9 +1636,6 @@
 return self._safe_find(w_list, self.unwrap(w_obj), start, stop)
 elif w_objt is W_IntObject or w_objt is W_LongObject:
 return self._safe_find(w_list, w_obj.float_w(self.space), start, 
stop)
-elif w_objt is W_StringObject or w_objt is W_UnicodeObject \
-  or self.space.type(w_obj).compares_by_identity(): 
-raise ValueError
 return ListStrategy.find(self, w_list, w_obj, start, stop)
 
 def sort(self, w_list, reverse):


[pypy-commit] pypy default: OpenBSD doesn't currently support vmprof.

2017-12-26 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r93578:8390220c0526
Date: 2017-12-26 14:39 +
http://bitbucket.org/pypy/pypy/changeset/8390220c0526/

Log:OpenBSD doesn't currently support vmprof.

diff --git a/rpython/rlib/rvmprof/cintf.py b/rpython/rlib/rvmprof/cintf.py
--- a/rpython/rlib/rvmprof/cintf.py
+++ b/rpython/rlib/rvmprof/cintf.py
@@ -17,7 +17,7 @@
 
 # vmprof works only on x86 for now
 IS_SUPPORTED = detect_cpu.autodetect().startswith('x86')
-if sys.platform == 'win32':
+if sys.platform == 'win32' or sys.platform.startswith("openbsd"):
 IS_SUPPORTED = False
 
 ROOT = py.path.local(rpythonroot).join('rpython', 'rlib', 'rvmprof')


[pypy-commit] pypy default: OpenBSD also needs sys/ttycom.h included.

2017-05-14 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r91291:51b52e05a32a
Date: 2017-05-14 19:22 +0800
http://bitbucket.org/pypy/pypy/changeset/51b52e05a32a/

Log:OpenBSD also needs sys/ttycom.h included.

diff --git a/rpython/rlib/rposix.py b/rpython/rlib/rposix.py
--- a/rpython/rlib/rposix.py
+++ b/rpython/rlib/rposix.py
@@ -239,7 +239,7 @@
 'signal.h', 'sys/utsname.h', _ptyh]
 if sys.platform.startswith('linux'):
 includes.append('sys/sysmacros.h')
-if sys.platform.startswith('freebsd'):
+if sys.platform.startswith('freebsd') or 
sys.platform.startswith('openbsd'):
 includes.append('sys/ttycom.h')
 libraries = ['util']
 eci = ExternalCompilationInfo(


[pypy-commit] pypy default: hg merge default

2017-05-14 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r91294:92e51c0101b5
Date: 2017-05-15 09:18 +0800
http://bitbucket.org/pypy/pypy/changeset/92e51c0101b5/

Log:hg merge default

diff --git a/rpython/rlib/streamio.py b/rpython/rlib/streamio.py
--- a/rpython/rlib/streamio.py
+++ b/rpython/rlib/streamio.py
@@ -902,18 +902,30 @@
 self.do_read = base.read
 self.do_write = base.write
 self.do_flush = base.flush_buffers
-self.lfbuffer = ""
+self.readahead_count = 0   # either 0 or 1
 
 def read(self, n=-1):
-data = self.lfbuffer + self.do_read(n)
-self.lfbuffer = ""
+"""If n >= 1, this should read between 1 and n bytes."""
+if n <= 0:
+if n < 0:
+return self.readall()
+else:
+return ""
+
+data = self.do_read(n - self.readahead_count)
+if self.readahead_count > 0:
+data = self.readahead_char + data
+self.readahead_count = 0
+
 if data.endswith("\r"):
 c = self.do_read(1)
-if c and c[0] == '\n':
-data = data + '\n'
-self.lfbuffer = c[1:]
-else:
-self.lfbuffer = c
+if len(c) >= 1:
+assert len(c) == 1
+if c[0] == '\n':
+data = data + '\n'
+else:
+self.readahead_char = c[0]
+self.readahead_count = 1
 
 result = []
 offset = 0
@@ -936,21 +948,21 @@
 
 def tell(self):
 pos = self.base.tell()
-return pos - len(self.lfbuffer)
+return pos - self.readahead_count
 
 def seek(self, offset, whence):
 if whence == 1:
-offset -= len(self.lfbuffer)   # correct for already-read-ahead 
character
+offset -= self.readahead_count   # correct for already-read-ahead 
character
 self.base.seek(offset, whence)
-self.lfbuffer = ""
+self.readahead_count = 0
 
 def flush_buffers(self):
-if self.lfbuffer:
+if self.readahead_count > 0:
 try:
-self.base.seek(-len(self.lfbuffer), 1)
+self.base.seek(-self.readahead_count, 1)
 except (MyNotImplementedError, OSError):
 return
-self.lfbuffer = ""
+self.readahead_count = 0
 self.do_flush()
 
 def write(self, data):
diff --git a/rpython/rlib/test/test_streamio.py 
b/rpython/rlib/test/test_streamio.py
--- a/rpython/rlib/test/test_streamio.py
+++ b/rpython/rlib/test/test_streamio.py
@@ -657,6 +657,23 @@
 assert line == ''
 self.interpret(f, [])
 
+def test_read1(self):
+s_input = "abc\r\nabc\nd\r\nef\r\ngha\rbc\rdef\n\r\n\r"
+s_output = "abc\nabc\nd\nef\ngha\rbc\rdef\n\n\r"
+assert s_output == s_input.replace('\r\n', '\n')
+packets = list(s_input)
+expected = list(s_output)
+crlf = streamio.TextCRLFFilter(TSource(packets))
+def f():
+blocks = []
+while True:
+block = crlf.read(1)
+if not block:
+break
+blocks.append(block)
+assert blocks == expected
+self.interpret(f, [])
+
 class TestTextCRLFFilterLLInterp(BaseTestTextCRLFFilter):
 pass
 


[pypy-commit] pypy default: string.h needs to be included for strlen to be found.

2017-05-14 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r91292:42c6ee223963
Date: 2017-05-14 19:25 +0800
http://bitbucket.org/pypy/pypy/changeset/42c6ee223963/

Log:string.h needs to be included for strlen to be found.

diff --git a/rpython/rlib/rvmprof/src/shared/machine.c 
b/rpython/rlib/rvmprof/src/shared/machine.c
--- a/rpython/rlib/rvmprof/src/shared/machine.c
+++ b/rpython/rlib/rvmprof/src/shared/machine.c
@@ -4,6 +4,7 @@
 #include 
 
 #ifdef VMPROF_UNIX
+#include 
 #include 
 #include 
 #endif


[pypy-commit] pypy default: Disable vmprof on OpenBSD as it doesn't build.

2017-05-14 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r91293:00193a29fff8
Date: 2017-05-15 09:10 +0800
http://bitbucket.org/pypy/pypy/changeset/00193a29fff8/

Log:Disable vmprof on OpenBSD as it doesn't build.

diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py
--- a/pypy/config/pypyoption.py
+++ b/pypy/config/pypyoption.py
@@ -42,8 +42,9 @@
 from rpython.jit.backend import detect_cpu
 try:
 if detect_cpu.autodetect().startswith('x86'):
-working_modules.add('_vmprof')
-working_modules.add('faulthandler')
+if not sys.platform.startswith('openbsd'):
+working_modules.add('_vmprof')
+working_modules.add('faulthandler')
 except detect_cpu.ProcessorAutodetectError:
 pass
 


[pypy-commit] pypy default: Build main binaries on OpenBSD with wxallowed.

2017-05-22 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r91362:008c6f3a2f8f
Date: 2017-05-22 15:18 +0200
http://bitbucket.org/pypy/pypy/changeset/008c6f3a2f8f/

Log:Build main binaries on OpenBSD with wxallowed.

In RPython, JIT compilers that need to read and write to the same
page need to be marked as wxallowed (unless they've been built to
cope with this restriction). Previously, this meant specifying
LDFLAGS in the environment before building an RPython VM, which I
always forgot to do. This commit automatically marks the final
binary as wxallowed without any such annoyances.
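
A minimal sketch of the hook this introduces (the OpenBSD class, the
makefile_link_flags hook and the -Wl,-z,wxneeded flag come from the diff
below; everything else is illustrative, not the real build code):

    class BasePlatform(object):
        link_flags = ["-pthread"]

        def makefile_link_flags(self):
            # Default: the final Makefile links with the ordinary flags.
            return list(self.link_flags)

    class OpenBSD(BasePlatform):
        def makefile_link_flags(self):
            # Only the final binary gets -Wl,-z,wxneeded; the throwaway
            # platform-check binaries built under /tmp keep the plain
            # flags, since /tmp is usually not mounted wxallowed.
            return list(self.link_flags) + ["-Wl,-z,wxneeded"]

    assert OpenBSD().makefile_link_flags() == ["-pthread", "-Wl,-z,wxneeded"]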

diff --git a/rpython/translator/platform/openbsd.py 
b/rpython/translator/platform/openbsd.py
--- a/rpython/translator/platform/openbsd.py
+++ b/rpython/translator/platform/openbsd.py
@@ -16,5 +16,14 @@
 libraries=set(libraries + ("intl", "iconv"))
 return ['-l%s' % lib for lib in libraries if lib not in ["crypt", 
"dl", "rt"]]
 
+def makefile_link_flags(self):
+# On OpenBSD, we need to build the final binary with the link flags
+# below. However, if we modify self.link_flags to include these, the
+# various platform check binaries that RPython builds end up with these
+# flags: since these binaries are generally located on /tmp -- which
+# isn't a wxallowed file system -- that gives rise to "permission
+# denied" errors, which kill the build.
+return list(self.link_flags) + ["-Wl,-z,wxneeded"]
+
 class OpenBSD_64(OpenBSD):
 shared_only = ('-fPIC',)
diff --git a/rpython/translator/platform/posix.py 
b/rpython/translator/platform/posix.py
--- a/rpython/translator/platform/posix.py
+++ b/rpython/translator/platform/posix.py
@@ -98,6 +98,9 @@
 def get_shared_only_compile_flags(self):
 return tuple(self.shared_only) + ('-fvisibility=hidden',)
 
+def makefile_link_flags(self):
+return list(self.link_flags)
+
 def gen_makefile(self, cfiles, eci, exe_name=None, path=None,
  shared=False, headers_to_precompile=[],
  no_precompile_cfiles = [], config=None):
@@ -113,7 +116,7 @@
 else:
 exe_name = exe_name.new(ext=self.exe_ext)
 
-linkflags = list(self.link_flags)
+linkflags = self.makefile_link_flags()
 if shared:
 linkflags = self._args_for_shared(linkflags)
 


[pypy-commit] pypy default: Merge default

2017-05-22 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r91363:cfe544728497
Date: 2017-05-22 15:23 +0200
http://bitbucket.org/pypy/pypy/changeset/cfe544728497/

Log:Merge default

diff --git a/lib_pypy/_ctypes/function.py b/lib_pypy/_ctypes/function.py
--- a/lib_pypy/_ctypes/function.py
+++ b/lib_pypy/_ctypes/function.py
@@ -1,4 +1,3 @@
-
 from _ctypes.basics import _CData, _CDataMeta, cdata_from_address
 from _ctypes.primitive import SimpleType, _SimpleCData
 from _ctypes.basics import ArgumentError, keepalive_key
@@ -9,13 +8,16 @@
 import sys
 import traceback
 
-try: from __pypy__ import builtinify
-except ImportError: builtinify = lambda f: f
+
+try:
+from __pypy__ import builtinify
+except ImportError:
+builtinify = lambda f: f
 
 # XXX this file needs huge refactoring I fear
 
-PARAMFLAG_FIN   = 0x1
-PARAMFLAG_FOUT  = 0x2
+PARAMFLAG_FIN = 0x1
+PARAMFLAG_FOUT = 0x2
 PARAMFLAG_FLCID = 0x4
 PARAMFLAG_COMBINED = PARAMFLAG_FIN | PARAMFLAG_FOUT | PARAMFLAG_FLCID
 
@@ -24,9 +26,9 @@
 PARAMFLAG_FIN,
 PARAMFLAG_FIN | PARAMFLAG_FOUT,
 PARAMFLAG_FIN | PARAMFLAG_FLCID
-)
+)
 
-WIN64 = sys.platform == 'win32' and sys.maxint == 2**63 - 1
+WIN64 = sys.platform == 'win32' and sys.maxint == 2 ** 63 - 1
 
 
 def get_com_error(errcode, riid, pIunk):
@@ -35,6 +37,7 @@
 from _ctypes import COMError
 return COMError(errcode, None, None)
 
+
 @builtinify
 def call_function(func, args):
 "Only for debugging so far: So that we can call CFunction instances"
@@ -94,14 +97,9 @@
 "item %d in _argtypes_ has no from_param method" % (
 i + 1,))
 self._argtypes_ = list(argtypes)
-self._check_argtypes_for_fastpath()
+
 argtypes = property(_getargtypes, _setargtypes)
 
-def _check_argtypes_for_fastpath(self):
-if all([hasattr(argtype, '_ffiargshape_') for argtype in 
self._argtypes_]):
-fastpath_cls = make_fastpath_subclass(self.__class__)
-fastpath_cls.enable_fastpath_maybe(self)
-
 def _getparamflags(self):
 return self._paramflags
 
@@ -126,27 +124,26 @@
 raise TypeError(
 "paramflags must be a sequence of (int [,string [,value]]) 
"
 "tuples"
-)
+)
 if not isinstance(flag, int):
 raise TypeError(
 "paramflags must be a sequence of (int [,string [,value]]) 
"
 "tuples"
-)
+)
 _flag = flag & PARAMFLAG_COMBINED
 if _flag == PARAMFLAG_FOUT:
 typ = self._argtypes_[idx]
 if getattr(typ, '_ffiargshape_', None) not in ('P', 'z', 'Z'):
 raise TypeError(
 "'out' parameter %d must be a pointer type, not %s"
-% (idx+1, type(typ).__name__)
-)
+% (idx + 1, type(typ).__name__)
+)
 elif _flag not in VALID_PARAMFLAGS:
 raise TypeError("paramflag value %d not supported" % flag)
 self._paramflags = paramflags
 
 paramflags = property(_getparamflags, _setparamflags)
 
-
 def _getrestype(self):
 return self._restype_
 
@@ -156,7 +153,7 @@
 from ctypes import c_int
 restype = c_int
 if not (isinstance(restype, _CDataMeta) or restype is None or
-callable(restype)):
+callable(restype)):
 raise TypeError("restype must be a type, a callable, or None")
 self._restype_ = restype
 
@@ -168,15 +165,18 @@
 
 def _geterrcheck(self):
 return getattr(self, '_errcheck_', None)
+
 def _seterrcheck(self, errcheck):
 if not callable(errcheck):
 raise TypeError("The errcheck attribute must be callable")
 self._errcheck_ = errcheck
+
 def _delerrcheck(self):
 try:
 del self._errcheck_
 except AttributeError:
 pass
+
 errcheck = property(_geterrcheck, _seterrcheck, _delerrcheck)
 
 def _ffishapes(self, args, restype):
@@ -188,7 +188,7 @@
 raise TypeError("invalid result type for callback function")
 restype = restype._ffiargshape_
 else:
-restype = 'O' # void
+restype = 'O'  # void
 return argtypes, restype
 
 def _set_address(self, address):
@@ -201,7 +201,7 @@
 
 def __init__(self, *args):
 self.name = None
-self._objects = {keepalive_key(0):self}
+self._objects = {keepalive_key(0): self}
 self._needs_free = True
 
 # Empty function object -- this is needed for casts
@@ -222,10 +222,8 @@
 if self._argtypes_ is None:
 self._argtypes_ = []
 self._ptr = self._getfuncptr_fromaddress(self._argtypes_, restype)
-self._check_argtypes_for_fastpath(

[pypy-commit] pypy default: The UNIX FILE type is opaque.

2014-06-25 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r72236:324c9d701969
Date: 2014-06-25 23:55 +0100
http://bitbucket.org/pypy/pypy/changeset/324c9d701969/

Log:The UNIX FILE type is opaque.

Unless an opaque pointer is used, RPython generates code which can
call C functions that are really macros, which expand to horrible
things. Put another way: without this, things don't work on OpenBSD.

diff --git a/rpython/rlib/rfile.py b/rpython/rlib/rfile.py
--- a/rpython/rlib/rfile.py
+++ b/rpython/rlib/rfile.py
@@ -32,32 +32,32 @@
 config = platform.configure(CConfig)
 
 OFF_T = config['off_t']
-FILE = lltype.Struct('FILE')  # opaque type maybe
+FILEP = rffi.COpaquePtr("FILE")
 
-c_open = llexternal('fopen', [rffi.CCHARP, rffi.CCHARP], lltype.Ptr(FILE))
-c_close = llexternal('fclose', [lltype.Ptr(FILE)], rffi.INT, releasegil=False)
+c_open = llexternal('fopen', [rffi.CCHARP, rffi.CCHARP], FILEP)
+c_close = llexternal('fclose', [FILEP], rffi.INT, releasegil=False)
 c_fwrite = llexternal('fwrite', [rffi.CCHARP, rffi.SIZE_T, rffi.SIZE_T,
- lltype.Ptr(FILE)], rffi.SIZE_T)
+ FILEP], rffi.SIZE_T)
 c_fread = llexternal('fread', [rffi.CCHARP, rffi.SIZE_T, rffi.SIZE_T,
-   lltype.Ptr(FILE)], rffi.SIZE_T)
-c_feof = llexternal('feof', [lltype.Ptr(FILE)], rffi.INT)
-c_ferror = llexternal('ferror', [lltype.Ptr(FILE)], rffi.INT)
-c_clearerror = llexternal('clearerr', [lltype.Ptr(FILE)], lltype.Void)
-c_fseek = llexternal('fseek', [lltype.Ptr(FILE), rffi.LONG, rffi.INT],
+   FILEP], rffi.SIZE_T)
+c_feof = llexternal('feof', [FILEP], rffi.INT)
+c_ferror = llexternal('ferror', [FILEP], rffi.INT)
+c_clearerror = llexternal('clearerr', [FILEP], lltype.Void)
+c_fseek = llexternal('fseek', [FILEP, rffi.LONG, rffi.INT],
  rffi.INT)
-c_tmpfile = llexternal('tmpfile', [], lltype.Ptr(FILE))
-c_fileno = llexternal(fileno, [lltype.Ptr(FILE)], rffi.INT)
+c_tmpfile = llexternal('tmpfile', [], FILEP)
+c_fileno = llexternal(fileno, [FILEP], rffi.INT)
 c_fdopen = llexternal(('_' if os.name == 'nt' else '') + 'fdopen',
-  [rffi.INT, rffi.CCHARP], lltype.Ptr(FILE))
-c_ftell = llexternal('ftell', [lltype.Ptr(FILE)], rffi.LONG)
-c_fflush = llexternal('fflush', [lltype.Ptr(FILE)], rffi.INT)
+  [rffi.INT, rffi.CCHARP], FILEP)
+c_ftell = llexternal('ftell', [FILEP], rffi.LONG)
+c_fflush = llexternal('fflush', [FILEP], rffi.INT)
 c_ftruncate = llexternal(ftruncate, [rffi.INT, OFF_T], rffi.INT, macro=True)
 
-c_fgets = llexternal('fgets', [rffi.CCHARP, rffi.INT, lltype.Ptr(FILE)],
+c_fgets = llexternal('fgets', [rffi.CCHARP, rffi.INT, FILEP],
  rffi.CCHARP)
 
-c_popen = llexternal('popen', [rffi.CCHARP, rffi.CCHARP], lltype.Ptr(FILE))
-c_pclose = llexternal('pclose', [lltype.Ptr(FILE)], rffi.INT, releasegil=False)
+c_popen = llexternal('popen', [rffi.CCHARP, rffi.CCHARP], FILEP)
+c_pclose = llexternal('pclose', [FILEP], rffi.INT, releasegil=False)
 
 BASE_BUF_SIZE = 4096
 BASE_LINE_SIZE = 100
@@ -157,7 +157,7 @@
 ll_f = self.ll_file
 if ll_f:
 # double close is allowed
-self.ll_file = lltype.nullptr(FILE)
+self.ll_file = lltype.nullptr(FILEP.TO)
 res = self._do_close(ll_f)
 if res == -1:
 errno = rposix.get_errno()


[pypy-commit] pypy default: htons and friends are macros on OpenBSD.

2014-06-27 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r72253:cad6c535d3a5
Date: 2014-06-27 13:59 +0100
http://bitbucket.org/pypy/pypy/changeset/cad6c535d3a5/

Log:htons and friends are macros on OpenBSD.

diff --git a/rpython/rlib/_rsocket_rffi.py b/rpython/rlib/_rsocket_rffi.py
--- a/rpython/rlib/_rsocket_rffi.py
+++ b/rpython/rlib/_rsocket_rffi.py
@@ -493,10 +493,16 @@
 getnameinfo = external('getnameinfo', [sockaddr_ptr, socklen_t, CCHARP,
size_t, CCHARP, size_t, rffi.INT], rffi.INT)
 
-htonl = external('htonl', [rffi.UINT], rffi.UINT, releasegil=False)
-htons = external('htons', [rffi.USHORT], rffi.USHORT, releasegil=False)
-ntohl = external('ntohl', [rffi.UINT], rffi.UINT, releasegil=False)
-ntohs = external('ntohs', [rffi.USHORT], rffi.USHORT, releasegil=False)
+if sys.platform.startswith("openbsd"):
+htonl = external('htonl', [rffi.UINT], rffi.UINT, releasegil=False, 
macro=True)
+htons = external('htons', [rffi.USHORT], rffi.USHORT, releasegil=False, 
macro=True)
+ntohl = external('ntohl', [rffi.UINT], rffi.UINT, releasegil=False, 
macro=True)
+ntohs = external('ntohs', [rffi.USHORT], rffi.USHORT, releasegil=False, 
macro=True)
+else:
+htonl = external('htonl', [rffi.UINT], rffi.UINT, releasegil=False)
+htons = external('htons', [rffi.USHORT], rffi.USHORT, releasegil=False)
+ntohl = external('ntohl', [rffi.UINT], rffi.UINT, releasegil=False)
+ntohs = external('ntohs', [rffi.USHORT], rffi.USHORT, releasegil=False)
 
 if _POSIX:
 inet_aton = external('inet_aton', [CCHARP, lltype.Ptr(in_addr)],


[pypy-commit] pypy default: Use appropriate types for struct kevent on OpenBSD.

2012-06-29 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r55871:6942c79aa982
Date: 2012-06-27 12:03 +0100
http://bitbucket.org/pypy/pypy/changeset/6942c79aa982/

Log:Use appropriate types for struct kevent on OpenBSD.

Without this, RPython's type system spots that something's wrong and
throws an error; even if it didn't, the resulting C code would
probably have been wrong.

diff --git a/pypy/module/select/interp_kqueue.py 
b/pypy/module/select/interp_kqueue.py
--- a/pypy/module/select/interp_kqueue.py
+++ b/pypy/module/select/interp_kqueue.py
@@ -7,6 +7,7 @@
 from pypy.rpython.lltypesystem import rffi, lltype
 from pypy.rpython.tool import rffi_platform
 from pypy.translator.tool.cbuild import ExternalCompilationInfo
+import sys
 
 
 eci = ExternalCompilationInfo(
@@ -20,14 +21,26 @@
 _compilation_info_ = eci
 
 
-CConfig.kevent = rffi_platform.Struct("struct kevent", [
-("ident", rffi.UINTPTR_T),
-("filter", rffi.SHORT),
-("flags", rffi.USHORT),
-("fflags", rffi.UINT),
-("data", rffi.INTPTR_T),
-("udata", rffi.VOIDP),
-])
+if "openbsd" in sys.platform:
+IDENT_UINT = True
+CConfig.kevent = rffi_platform.Struct("struct kevent", [
+("ident", rffi.UINT),
+("filter", rffi.SHORT),
+("flags", rffi.USHORT),
+("fflags", rffi.UINT),
+("data", rffi.INT),
+("udata", rffi.VOIDP),
+])
+else:
+IDENT_UINT = False
+CConfig.kevent = rffi_platform.Struct("struct kevent", [
+("ident", rffi.UINTPTR_T),
+("filter", rffi.SHORT),
+("flags", rffi.USHORT),
+("fflags", rffi.UINT),
+("data", rffi.INTPTR_T),
+("udata", rffi.VOIDP),
+])
 
 
 CConfig.timespec = rffi_platform.Struct("struct timespec", [
@@ -243,16 +256,24 @@
 self.event.c_udata = rffi.cast(rffi.VOIDP, udata)
 
 def _compare_all_fields(self, other, op):
-l_ident = self.event.c_ident
-r_ident = other.event.c_ident
+if IDENT_UINT:
+l_ident = rffi.cast(lltype.Unsigned, self.event.c_ident)
+r_ident = rffi.cast(lltype.Unsigned, other.event.c_ident)
+else:
+l_ident = self.event.c_ident
+r_ident = other.event.c_ident
 l_filter = rffi.cast(lltype.Signed, self.event.c_filter)
 r_filter = rffi.cast(lltype.Signed, other.event.c_filter)
 l_flags = rffi.cast(lltype.Unsigned, self.event.c_flags)
 r_flags = rffi.cast(lltype.Unsigned, other.event.c_flags)
 l_fflags = rffi.cast(lltype.Unsigned, self.event.c_fflags)
 r_fflags = rffi.cast(lltype.Unsigned, other.event.c_fflags)
-l_data = self.event.c_data
-r_data = other.event.c_data
+if IDENT_UINT:
+l_data = rffi.cast(lltype.Signed, self.event.c_data)
+r_data = rffi.cast(lltype.Signed, other.event.c_data)
+else:
+l_data = self.event.c_data
+r_data = other.event.c_data
 l_udata = rffi.cast(lltype.Unsigned, self.event.c_udata)
 r_udata = rffi.cast(lltype.Unsigned, other.event.c_udata)
 


[pypy-commit] pypy default: Fix typing issue.

2013-04-08 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r63151:9ed36ac750d8
Date: 2013-04-08 18:50 +0100
http://bitbucket.org/pypy/pypy/changeset/9ed36ac750d8/

Log:Fix typing issue.

This prevented PyPy from building on OpenBSD (and maybe other
platforms). Fix suggested by mattip, based on precedent in
ll_os_stat.py.

diff --git a/pypy/module/__pypy__/interp_time.py 
b/pypy/module/__pypy__/interp_time.py
--- a/pypy/module/__pypy__/interp_time.py
+++ b/pypy/module/__pypy__/interp_time.py
@@ -61,7 +61,7 @@
 ret = c_clock_gettime(clk_id, tp)
 if ret != 0:
 raise exception_from_errno(space, space.w_IOError)
-return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9)
+return space.wrap(int(tp.c_tv_sec) + 1e-9 * int(tp.c_tv_nsec))
 
 @unwrap_spec(clk_id="c_int")
 def clock_getres(space, clk_id):
@@ -69,4 +69,4 @@
 ret = c_clock_getres(clk_id, tp)
 if ret != 0:
 raise exception_from_errno(space, space.w_IOError)
-return space.wrap(tp.c_tv_sec + tp.c_tv_nsec * 1e-9)
+return space.wrap(int(tp.c_tv_sec) + 1e-9 * int(tp.c_tv_nsec))


[pypy-commit] pypy default: Update import to reflect pypy/rpython directory split.

2013-04-29 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r63753:5ddc7be1ee16
Date: 2013-04-29 17:50 +0200
http://bitbucket.org/pypy/pypy/changeset/5ddc7be1ee16/

Log:Update import to reflect pypy/rpython directory split.

diff --git a/rpython/translator/platform/openbsd.py 
b/rpython/translator/platform/openbsd.py
--- a/rpython/translator/platform/openbsd.py
+++ b/rpython/translator/platform/openbsd.py
@@ -2,7 +2,7 @@
 
 import os
 
-from pypy.translator.platform.bsd import BSD
+from rpython.translator.platform.bsd import BSD
 
 class OpenBSD(BSD):
 name = "openbsd"


[pypy-commit] pypy default: The default compiler on OpenBSD isn't clang.

2013-04-29 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r63754:dbd71b24f537
Date: 2013-04-29 17:53 +0200
http://bitbucket.org/pypy/pypy/changeset/dbd71b24f537/

Log:The default compiler on OpenBSD isn't clang.

There's no reason to hard-code a compiler name: cc will always point
to the operating system's blessed compiler.

diff --git a/rpython/translator/platform/openbsd.py 
b/rpython/translator/platform/openbsd.py
--- a/rpython/translator/platform/openbsd.py
+++ b/rpython/translator/platform/openbsd.py
@@ -5,6 +5,7 @@
 from rpython.translator.platform.bsd import BSD
 
 class OpenBSD(BSD):
+DEFAULT_CC = "cc"
 name = "openbsd"
 
 link_flags = os.environ.get("LDFLAGS", '-pthread').split()


[pypy-commit] pypy sanitise_bytecode_dispatch: (cfbolz, ltratt) Manually unroll the opcode dispatch.

2013-08-30 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66673:69d275947985
Date: 2013-08-30 16:38 +0100
http://bitbucket.org/pypy/pypy/changeset/69d275947985/

Log:(cfbolz, ltratt) Manually unroll the opcode dispatch.

This should be much easier to read than the previous magic.
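
The shape of the change, on a toy interpreter (invented names; the diff
below does the same thing for every real opcode):

    class ToyFrame(object):
        TABLE = {0: "LOAD", 1: "ADD"}

        def dispatch_table(self, opcode, arg):
            # Old style: look the handler up by name and rely on the
            # translator to unroll the dispatch behind the scenes.
            return getattr(self, self.TABLE[opcode])(arg)

        def dispatch_unrolled(self, opcode, arg):
            # New style: one explicit elif per opcode, readable as-is
            # and turned into a single switch() after translation.
            if opcode == 0:
                return self.LOAD(arg)
            elif opcode == 1:
                return self.ADD(arg)
            raise ValueError("unknown opcode %d" % opcode)

        def LOAD(self, arg):
            return "load %d" % arg

        def ADD(self, arg):
            return "add %d" % arg

    f = ToyFrame()
    assert f.dispatch_table(1, 5) == f.dispatch_unrolled(1, 5) == "add 5"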

diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -179,8 +179,7 @@
 
 # note: the structure of the code here is such that it makes
 # (after translation) a big "if/elif" chain, which is then
-# turned into a switch().  It starts here: even if the first
-# one is not an "if" but a "while" the effect is the same.
+# turned into a switch().
 
 while opcode == self.opcodedesc.EXTENDED_ARG.index:
 opcode = ord(co_code[next_instr])
@@ -201,8 +200,7 @@
 unroller = SReturnValue(w_returnvalue)
 next_instr = block.handle(self, unroller)
 return next_instr# now inside a 'finally' block
-
-if opcode == self.opcodedesc.END_FINALLY.index:
+elif opcode == self.opcodedesc.END_FINALLY.index:
 unroller = self.end_finally()
 if isinstance(unroller, SuspendedUnroller):
 # go on unrolling the stack
@@ -214,23 +212,246 @@
 else:
 next_instr = block.handle(self, unroller)
 return next_instr
-
-if opcode == self.opcodedesc.JUMP_ABSOLUTE.index:
+elif opcode == self.opcodedesc.JUMP_ABSOLUTE.index:
 return self.jump_absolute(oparg, ec)
-
-for opdesc in unrolling_all_opcode_descs:
-# the following "if" is part of the big switch described
-# above.
-if opcode == opdesc.index:
-# dispatch to the opcode method
-meth = getattr(self, opdesc.methodname)
-res = meth(oparg, next_instr)
-# !! warning, for the annotator the next line is not
-# comparing an int and None - you can't do that.
-# Instead, it's constant-folded to either True or False
-if res is not None:
-next_instr = res
-break
+elif opcode == self.opcodedesc.BREAK_LOOP.index:
+next_instr = self.BREAK_LOOP(oparg, next_instr)
+elif opcode == self.opcodedesc.CONTINUE_LOOP.index:
+next_instr = self.CONTINUE_LOOP(oparg, next_instr)
+elif opcode == self.opcodedesc.FOR_ITER.index:
+next_instr = self.FOR_ITER(oparg, next_instr)
+elif opcode == self.opcodedesc.JUMP_FORWARD.index:
+next_instr = self.JUMP_FORWARD(oparg, next_instr)
+elif opcode == self.opcodedesc.JUMP_IF_FALSE_OR_POP.index:
+next_instr = self.JUMP_IF_FALSE_OR_POP(oparg, next_instr)
+elif opcode == self.opcodedesc.JUMP_IF_NOT_DEBUG.index:
+next_instr = self.JUMP_IF_NOT_DEBUG(oparg, next_instr)
+elif opcode == self.opcodedesc.JUMP_IF_TRUE_OR_POP.index:
+next_instr = self.JUMP_IF_TRUE_OR_POP(oparg, next_instr)
+elif opcode == self.opcodedesc.POP_JUMP_IF_FALSE.index:
+next_instr = self.POP_JUMP_IF_FALSE(oparg, next_instr)
+elif opcode == self.opcodedesc.POP_JUMP_IF_TRUE.index:
+next_instr = self.POP_JUMP_IF_TRUE(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_ADD.index:
+self.BINARY_ADD(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_AND.index:
+self.BINARY_AND(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_DIVIDE.index:
+self.BINARY_DIVIDE(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_FLOOR_DIVIDE.index:
+self.BINARY_FLOOR_DIVIDE(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_LSHIFT.index:
+self.BINARY_LSHIFT(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_MODULO.index:
+self.BINARY_MODULO(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_MULTIPLY.index:
+self.BINARY_MULTIPLY(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_OR.index:
+self.BINARY_OR(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_POWER.index:
+self.BINARY_POWER(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_RSHIFT.index:
+self.BINARY_RSHIFT(oparg, next_instr)
+elif opcode == self.opcodedesc.BINARY_SUBSCR.

[pypy-commit] pypy sanitise_bytecode_dispatch: (arigo, cfbolz, ltratt) Kill CPythonFrame.

2013-08-30 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66674:af10c9476946
Date: 2013-08-30 17:02 +0100
http://bitbucket.org/pypy/pypy/changeset/af10c9476946/

Log:(arigo, cfbolz, ltratt) Kill CPythonFrame.

diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py
--- a/pypy/interpreter/pycode.py
+++ b/pypy/interpreter/pycode.py
@@ -251,8 +251,10 @@
 tuple(self.co_cellvars))
 
 def exec_host_bytecode(self, w_globals, w_locals):
-from pypy.interpreter.pyframe import CPythonFrame
-frame = CPythonFrame(self.space, self, w_globals, None)
+if sys.version_info < (2, 7):
+raise Exception("PyPy no longer supports Python 2.6 or lower")
+from pypy.interpreter.pyframe import PyFrame
+frame = PyFrame(self.space, self, w_globals, None)
 frame.setdictscope(w_locals)
 return frame.run()
 
diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py
--- a/pypy/interpreter/pyframe.py
+++ b/pypy/interpreter/pyframe.py
@@ -52,7 +52,7 @@
 
 def __init__(self, space, code, w_globals, outer_func):
 if not we_are_translated():
-assert type(self) in (space.FrameClass, CPythonFrame), (
+assert type(self) == space.FrameClass, (
 "use space.FrameClass(), not directly PyFrame()")
 self = hint(self, access_directly=True, fresh_virtualizable=True)
 assert isinstance(code, pycode.PyCode)
@@ -674,17 +674,6 @@
 return space.wrap(self.builtin is not space.builtin)
 return space.w_False
 
-class CPythonFrame(PyFrame):
-"""
-Execution of host (CPython) opcodes.
-"""
-
-bytecode_spec = host_bytecode_spec
-opcode_method_names = host_bytecode_spec.method_names
-opcodedesc = host_bytecode_spec.opcodedesc
-opdescmap = host_bytecode_spec.opdescmap
-HAVE_ARGUMENT = host_bytecode_spec.HAVE_ARGUMENT
-
 
 # 
 
diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -1282,49 +1282,6 @@
 self.space.setitem(w_dict, w_key, w_value)
 
 
-class __extend__(pyframe.CPythonFrame):
-
-def JUMP_IF_FALSE(self, stepby, next_instr):
-w_cond = self.peekvalue()
-if not self.space.is_true(w_cond):
-next_instr += stepby
-return next_instr
-
-def JUMP_IF_TRUE(self, stepby, next_instr):
-w_cond = self.peekvalue()
-if self.space.is_true(w_cond):
-next_instr += stepby
-return next_instr
-
-def BUILD_MAP(self, itemcount, next_instr):
-if sys.version_info >= (2, 6):
-# We could pre-allocate a dict here
-# but for the moment this code is not translated.
-pass
-else:
-if itemcount != 0:
-raise BytecodeCorruption
-w_dict = self.space.newdict()
-self.pushvalue(w_dict)
-
-def STORE_MAP(self, zero, next_instr):
-if sys.version_info >= (2, 6):
-w_key = self.popvalue()
-w_value = self.popvalue()
-w_dict = self.peekvalue()
-self.space.setitem(w_dict, w_key, w_value)
-else:
-raise BytecodeCorruption
-
-def LIST_APPEND(self, oparg, next_instr):
-w = self.popvalue()
-if sys.version_info < (2, 7):
-v = self.popvalue()
-else:
-v = self.peekvalue(oparg - 1)
-self.space.call_method(v, 'append', w)
-
-
 ###  ###
 
 class ExitFrame(Exception):


[pypy-commit] pypy sanitise_bytecode_dispatch: (arigo, cfbolz, ltratt) Remove the special case for interpreter/translated.

2013-08-30 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66670:68f8446b80f5
Date: 2013-08-30 15:02 +0100
http://bitbucket.org/pypy/pypy/changeset/68f8446b80f5/

Log:(arigo, cfbolz, ltratt) Remove the special case for
interpreter/translated.

diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -219,45 +219,27 @@
 if opcode == self.opcodedesc.JUMP_ABSOLUTE.index:
 return self.jump_absolute(oparg, ec)
 
-if we_are_translated():
-for opdesc in unrolling_all_opcode_descs:
-# static checks to skip this whole case if necessary
-if opdesc.bytecode_spec is not self.bytecode_spec:
-continue
-if not opdesc.is_enabled(space):
-continue
-if opdesc.methodname in (
-'EXTENDED_ARG', 'RETURN_VALUE',
-'END_FINALLY', 'JUMP_ABSOLUTE'):
-continue   # opcodes implemented above
+for opdesc in unrolling_all_opcode_descs:
+# static checks to skip this whole case if necessary
+if opdesc.bytecode_spec is not self.bytecode_spec:
+continue
+if not opdesc.is_enabled(space):
+continue
 
-# the following "if" is part of the big switch described
-# above.
-if opcode == opdesc.index:
-# dispatch to the opcode method
-meth = getattr(self, opdesc.methodname)
-res = meth(oparg, next_instr)
-# !! warning, for the annotator the next line is not
-# comparing an int and None - you can't do that.
-# Instead, it's constant-folded to either True or False
-if res is not None:
-next_instr = res
-break
-else:
-self.MISSING_OPCODE(oparg, next_instr)
-
-else:  # when we are not translated, a list lookup is much faster
-methodname = self.opcode_method_names[opcode]
-try:
-meth = getattr(self, methodname)
-except AttributeError:
-raise BytecodeCorruption("unimplemented opcode, ofs=%d, "
- "code=%d, name=%s" %
- (self.last_instr, opcode,
-  methodname))
-res = meth(oparg, next_instr)
-if res is not None:
-next_instr = res
+# the following "if" is part of the big switch described
+# above.
+if opcode == opdesc.index:
+# dispatch to the opcode method
+meth = getattr(self, opdesc.methodname)
+res = meth(oparg, next_instr)
+# !! warning, for the annotator the next line is not
+# comparing an int and None - you can't do that.
+# Instead, it's constant-folded to either True or False
+if res is not None:
+next_instr = res
+break
+else:
+self.MISSING_OPCODE(oparg, next_instr)
 
 if jit.we_are_jitted():
 return next_instr


[pypy-commit] pypy sanitise_bytecode_dispatch: (cfbolz, ltratt) Remove the CALL_METHOD option.

2013-08-30 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66672:e434731eb29b
Date: 2013-08-30 15:47 +0100
http://bitbucket.org/pypy/pypy/changeset/e434731eb29b/

Log:(cfbolz, ltratt) Remove the CALL_METHOD option.

This has been enabled by default for some time.

diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py
--- a/pypy/config/pypyoption.py
+++ b/pypy/config/pypyoption.py
@@ -127,11 +127,6 @@
 
 
 pypy_optiondescription = OptionDescription("objspace", "Object Space Options", 
[
-OptionDescription("opcodes", "opcodes to enable in the interpreter", [
-BoolOption("CALL_METHOD", "emit a special bytecode for expr.name()",
-   default=False),
-]),
-
 OptionDescription("usemodules", "Which Modules should be used", [
 BoolOption(modname, "use module %s" % (modname, ),
default=modname in default_modules,
@@ -307,7 +302,6 @@
 
 # all the good optimizations for PyPy should be listed here
 if level in ['2', '3', 'jit']:
-config.objspace.opcodes.suggest(CALL_METHOD=True)
 config.objspace.std.suggest(withrangelist=True)
 config.objspace.std.suggest(withmethodcache=True)
 config.objspace.std.suggest(withprebuiltchar=True)
diff --git a/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt 
b/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-Enable a pair of bytecodes that speed up method calls.
-See ``pypy.interpreter.callmethod`` for a description.
-
-The goal is to avoid creating the bound method object in the common
-case.  So far, this only works for calls with no keyword, no ``*arg``
-and no ``**arg`` but it would be easy to extend.
-
-For more information, see the section in `Standard Interpreter Optimizations`_.
-
-.. _`Standard Interpreter Optimizations`: 
../interpreter-optimizations.html#lookup-method-call-method
diff --git a/pypy/doc/config/objspace.opcodes.txt 
b/pypy/doc/config/objspace.opcodes.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.opcodes.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-..  intentionally empty
diff --git a/pypy/doc/interpreter-optimizations.rst 
b/pypy/doc/interpreter-optimizations.rst
--- a/pypy/doc/interpreter-optimizations.rst
+++ b/pypy/doc/interpreter-optimizations.rst
@@ -198,9 +198,6 @@
 if it is not None, then it is considered to be an additional first
 argument in the call to the *im_func* object from the stack.
 
-You can enable this feature with the :config:`objspace.opcodes.CALL_METHOD`
-option.
-
 .. more here?
 
 Overall Effects
diff --git a/pypy/interpreter/astcompiler/codegen.py 
b/pypy/interpreter/astcompiler/codegen.py
--- a/pypy/interpreter/astcompiler/codegen.py
+++ b/pypy/interpreter/astcompiler/codegen.py
@@ -982,9 +982,8 @@
 return self._call_has_no_star_args(call) and not call.keywords
 
 def _optimize_method_call(self, call):
-if not self.space.config.objspace.opcodes.CALL_METHOD or \
-not self._call_has_no_star_args(call) or \
-not isinstance(call.func, ast.Attribute):
+if not self._call_has_no_star_args(call) or \
+   not isinstance(call.func, ast.Attribute):
 return False
 attr_lookup = call.func
 assert isinstance(attr_lookup, ast.Attribute)
diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -219,9 +219,6 @@
 return self.jump_absolute(oparg, ec)
 
 for opdesc in unrolling_all_opcode_descs:
-if not opdesc.is_enabled(space):
-continue
-
 # the following "if" is part of the big switch described
 # above.
 if opcode == opdesc.index:
diff --git a/pypy/interpreter/test/test_compiler.py 
b/pypy/interpreter/test/test_compiler.py
--- a/pypy/interpreter/test/test_compiler.py
+++ b/pypy/interpreter/test/test_compiler.py
@@ -953,10 +953,6 @@
 assert i > -1
 assert isinstance(co.co_consts[i], frozenset)
 
-
-class AppTestCallMethod(object):
-spaceconfig = {'objspace.opcodes.CALL_METHOD': True}
-
 def test_call_method_kwargs(self):
 source = """def _f(a):
 return a.f(a=a)
diff --git a/pypy/interpreter/test/test_executioncontext.py 
b/pypy/interpreter/test/test_executioncontext.py
--- a/pypy/interpreter/test/test_executioncontext.py
+++ b/pypy/interpreter/test/test_executioncontext.py
@@ -253,10 +253,6 @@
 """)
 
 
-class TestExecutionContextWithCallMethod(TestExecutionContext):
-spaceconfig ={'objspace.opcodes.CALL_METHOD': True}
-
-
 class AppTestDelNotBlocked:
 
 def s

[pypy-commit] pypy sanitise_bytecode_dispatch: (cfbolz, fijal, ltratt) Apparently we no longer need this check.

2013-08-30 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66671:28b6b806d7b2
Date: 2013-08-30 15:15 +0100
http://bitbucket.org/pypy/pypy/changeset/28b6b806d7b2/

Log:(cfbolz, fijal, ltratt) Apparently we no longer need this check.

According to Maciej.

diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -63,7 +63,6 @@
 """A PyFrame that knows about interpretation of standard Python opcodes
 minus the ones related to nested scopes."""
 
-bytecode_spec = bytecode_spec
 opcode_method_names = bytecode_spec.method_names
 opcodedesc = bytecode_spec.opcodedesc
 opdescmap = bytecode_spec.opdescmap
@@ -220,9 +219,6 @@
 return self.jump_absolute(oparg, ec)
 
 for opdesc in unrolling_all_opcode_descs:
-# static checks to skip this whole case if necessary
-if opdesc.bytecode_spec is not self.bytecode_spec:
-continue
 if not opdesc.is_enabled(space):
 continue
 


[pypy-commit] pypy sanitise_bytecode_dispatch: Manually unroll the comparison operations.

2013-08-30 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66687:4ef98acff95d
Date: 2013-08-30 22:33 +0100
http://bitbucket.org/pypy/pypy/changeset/4ef98acff95d/

Log:Manually unroll the comparison operations.

diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -41,23 +41,6 @@
 
 return func_with_new_name(opimpl, "opcode_impl_for_%s" % operationname)
 
-compare_dispatch_table = [
-"cmp_lt",   # "<"
-"cmp_le",   # "<="
-"cmp_eq",   # "=="
-"cmp_ne",   # "!="
-"cmp_gt",   # ">"
-"cmp_ge",   # ">="
-"cmp_in",
-"cmp_not_in",
-"cmp_is",
-"cmp_is_not",
-"cmp_exc_match",
-]
-
-unrolling_compare_dispatch_table = unrolling_iterable(
-enumerate(compare_dispatch_table))
-
 
 class __extend__(pyframe.PyFrame):
 """A PyFrame that knows about interpretation of standard Python opcodes
@@ -975,11 +958,28 @@
 def COMPARE_OP(self, testnum, next_instr):
 w_2 = self.popvalue()
 w_1 = self.popvalue()
-w_result = None
-for i, attr in unrolling_compare_dispatch_table:
-if i == testnum:
-w_result = getattr(self, attr)(w_1, w_2)
-break
+if testnum == 0:
+w_result = self.cmp_lt(w_1, w_2)
+elif testnum == 1:
+w_result = self.cmp_le(w_1, w_2)
+elif testnum == 2:
+w_result = self.cmp_eq(w_1, w_2)
+elif testnum == 3:
+w_result = self.cmp_ne(w_1, w_2)
+elif testnum == 4:
+w_result = self.cmp_gt(w_1, w_2)
+elif testnum == 5:
+w_result = self.cmp_ge(w_1, w_2)
+elif testnum == 6:
+w_result = self.cmp_in(w_1, w_2)
+elif testnum == 7:
+w_result = self.cmp_not_in(w_1, w_2)
+elif testnum == 8:
+w_result = self.cmp_is(w_1, w_2)
+elif testnum == 9:
+w_result = self.cmp_is_not(w_1, w_2)
+elif testnum == 10:
+w_result = self.cmp_exc_match(w_1, w_2)
 else:
 raise BytecodeCorruption("bad COMPARE_OP oparg")
 self.pushvalue(w_result)
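
For anyone who has not met the RPython idiom being retired here: the old
COMPARE_OP walked a table of method names wrapped in unrolling_iterable, so
that at translation time the loop is expanded and getattr only ever sees
constant names; the new code simply spells the chain out. The toy below
contrasts the two styles on plain Python values (ToyFrame and its abridged
table are invented for this sketch; the real code goes through the object
space):

    COMPARE_TABLE = ["cmp_lt", "cmp_le", "cmp_eq"]   # abridged toy table

    class ToyFrame(object):
        def cmp_lt(self, a, b): return a < b
        def cmp_le(self, a, b): return a <= b
        def cmp_eq(self, a, b): return a == b

        def compare_via_table(self, testnum, a, b):
            # Old style: data-driven dispatch.  Under RPython this loop
            # needs unrolling hints (unrolling_iterable) so each iteration
            # is expanded at translation time and getattr gets a constant
            # name.
            for i, attr in enumerate(COMPARE_TABLE):
                if i == testnum:
                    return getattr(self, attr)(a, b)
            raise ValueError("bad COMPARE_OP oparg")

        def compare_unrolled(self, testnum, a, b):
            # New style: an explicit if/elif chain (the next commit inlines
            # the cmp_* helpers as well); the translator can turn this into
            # a switch with no extra machinery.
            if testnum == 0:
                return self.cmp_lt(a, b)
            elif testnum == 1:
                return self.cmp_le(a, b)
            elif testnum == 2:
                return self.cmp_eq(a, b)
            raise ValueError("bad COMPARE_OP oparg")

    f = ToyFrame()
    assert f.compare_via_table(0, 1, 2) == f.compare_unrolled(0, 1, 2)

Behaviour is identical either way; the unrolled form just needs no
translation-time hints, which is what makes the dispatcher easier to follow.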


[pypy-commit] pypy sanitise_bytecode_dispatch: Manually inline the comparison operations.

2013-08-30 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66688:99a6090d8e49
Date: 2013-08-30 22:54 +0100
http://bitbucket.org/pypy/pypy/changeset/99a6090d8e49/

Log:Manually inline the comparison operations.

As far as I can see, breaking these out to separate functions causes
bloat for no good reason.

diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -912,36 +912,6 @@
 self.pushvalue(w_value)
 LOAD_ATTR._always_inline_ = True
 
-def cmp_lt(self, w_1, w_2):
-return self.space.lt(w_1, w_2)
-
-def cmp_le(self, w_1, w_2):
-return self.space.le(w_1, w_2)
-
-def cmp_eq(self, w_1, w_2):
-return self.space.eq(w_1, w_2)
-
-def cmp_ne(self, w_1, w_2):
-return self.space.ne(w_1, w_2)
-
-def cmp_gt(self, w_1, w_2):
-return self.space.gt(w_1, w_2)
-
-def cmp_ge(self, w_1, w_2):
-return self.space.ge(w_1, w_2)
-
-def cmp_in(self, w_1, w_2):
-return self.space.contains(w_2, w_1)
-
-def cmp_not_in(self, w_1, w_2):
-return self.space.not_(self.space.contains(w_2, w_1))
-
-def cmp_is(self, w_1, w_2):
-return self.space.is_(w_1, w_2)
-
-def cmp_is_not(self, w_1, w_2):
-return self.space.not_(self.space.is_(w_1, w_2))
-
 @jit.unroll_safe
 def cmp_exc_match(self, w_1, w_2):
 space = self.space
@@ -959,25 +929,25 @@
 w_2 = self.popvalue()
 w_1 = self.popvalue()
 if testnum == 0:
-w_result = self.cmp_lt(w_1, w_2)
+w_result = self.space.lt(w_1, w_2)
 elif testnum == 1:
-w_result = self.cmp_le(w_1, w_2)
+w_result = self.space.le(w_1, w_2)
 elif testnum == 2:
-w_result = self.cmp_eq(w_1, w_2)
+w_result = self.space.eq(w_1, w_2)
 elif testnum == 3:
-w_result = self.cmp_ne(w_1, w_2)
+w_result = self.space.ne(w_1, w_2)
 elif testnum == 4:
-w_result = self.cmp_gt(w_1, w_2)
+w_result = self.space.gt(w_1, w_2)
 elif testnum == 5:
-w_result = self.cmp_ge(w_1, w_2)
+w_result = self.space.ge(w_1, w_2)
 elif testnum == 6:
-w_result = self.cmp_in(w_1, w_2)
+w_result = self.space.contains(w_2, w_1)
 elif testnum == 7:
-w_result = self.cmp_not_in(w_1, w_2)
+w_result = self.space.not_(self.space.contains(w_2, w_1))
 elif testnum == 8:
-w_result = self.cmp_is(w_1, w_2)
+w_result = self.space.is_(w_1, w_2)
 elif testnum == 9:
-w_result = self.cmp_is_not(w_1, w_2)
+w_result = self.space.not_(self.space.is_(w_1, w_2))
 elif testnum == 10:
 w_result = self.cmp_exc_match(w_1, w_2)
 else:


[pypy-commit] pypy sanitise_bytecode_dispatch: (arigo, ltratt) Remove some unneeded imports and simplify code accordingly.

2013-08-31 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66710:9e4b0a2b129a
Date: 2013-08-31 13:03 +0100
http://bitbucket.org/pypy/pypy/changeset/9e4b0a2b129a/

Log:(arigo, ltratt) Remove some unneeded imports and simplify code
accordingly.

diff --git a/pypy/interpreter/pyopcode.py b/pypy/interpreter/pyopcode.py
--- a/pypy/interpreter/pyopcode.py
+++ b/pypy/interpreter/pyopcode.py
@@ -13,10 +13,8 @@
 from rpython.rlib.objectmodel import we_are_translated
 from rpython.rlib import jit, rstackovf
 from rpython.rlib.rarithmetic import r_uint, intmask
-from rpython.rlib.unroll import unrolling_iterable
 from rpython.rlib.debug import check_nonneg
-from pypy.tool.stdlib_opcode import (bytecode_spec,
- unrolling_all_opcode_descs)
+from pypy.tool.stdlib_opcode import bytecode_spec
 
 def unaryoperation(operationname):
 """NOT_RPYTHON"""
@@ -42,15 +40,13 @@
 return func_with_new_name(opimpl, "opcode_impl_for_%s" % operationname)
 
 
+opcodedesc = bytecode_spec.opcodedesc
+HAVE_ARGUMENT = bytecode_spec.HAVE_ARGUMENT
+
 class __extend__(pyframe.PyFrame):
 """A PyFrame that knows about interpretation of standard Python opcodes
 minus the ones related to nested scopes."""
 
-opcode_method_names = bytecode_spec.method_names
-opcodedesc = bytecode_spec.opcodedesc
-opdescmap = bytecode_spec.opdescmap
-HAVE_ARGUMENT = bytecode_spec.HAVE_ARGUMENT
-
 ### opcode dispatch ###
 
 def dispatch(self, pycode, next_instr, ec):
@@ -152,7 +148,7 @@
 opcode = ord(co_code[next_instr])
 next_instr += 1
 
-if opcode >= self.HAVE_ARGUMENT:
+if opcode >= HAVE_ARGUMENT:
 lo = ord(co_code[next_instr])
 hi = ord(co_code[next_instr+1])
 next_instr += 2
@@ -164,16 +160,16 @@
 # (after translation) a big "if/elif" chain, which is then
 # turned into a switch().
 
-while opcode == self.opcodedesc.EXTENDED_ARG.index:
+while opcode == opcodedesc.EXTENDED_ARG.index:
 opcode = ord(co_code[next_instr])
-if opcode < self.HAVE_ARGUMENT:
+if opcode < HAVE_ARGUMENT:
 raise BytecodeCorruption
 lo = ord(co_code[next_instr+1])
 hi = ord(co_code[next_instr+2])
 next_instr += 3
 oparg = (oparg * 65536) | (hi * 256) | lo
 
-if opcode == self.opcodedesc.RETURN_VALUE.index:
+if opcode == opcodedesc.RETURN_VALUE.index:
 w_returnvalue = self.popvalue()
 block = self.unrollstack(SReturnValue.kind)
 if block is None:
@@ -183,7 +179,7 @@
 unroller = SReturnValue(w_returnvalue)
 next_instr = block.handle(self, unroller)
 return next_instr# now inside a 'finally' block
-elif opcode == self.opcodedesc.END_FINALLY.index:
+elif opcode == opcodedesc.END_FINALLY.index:
 unroller = self.end_finally()
 if isinstance(unroller, SuspendedUnroller):
 # go on unrolling the stack
@@ -195,245 +191,245 @@
 else:
 next_instr = block.handle(self, unroller)
 return next_instr
-elif opcode == self.opcodedesc.JUMP_ABSOLUTE.index:
+elif opcode == opcodedesc.JUMP_ABSOLUTE.index:
 return self.jump_absolute(oparg, ec)
-elif opcode == self.opcodedesc.BREAK_LOOP.index:
+elif opcode == opcodedesc.BREAK_LOOP.index:
 next_instr = self.BREAK_LOOP(oparg, next_instr)
-elif opcode == self.opcodedesc.CONTINUE_LOOP.index:
+elif opcode == opcodedesc.CONTINUE_LOOP.index:
 next_instr = self.CONTINUE_LOOP(oparg, next_instr)
-elif opcode == self.opcodedesc.FOR_ITER.index:
+elif opcode == opcodedesc.FOR_ITER.index:
 next_instr = self.FOR_ITER(oparg, next_instr)
-elif opcode == self.opcodedesc.JUMP_FORWARD.index:
+elif opcode == opcodedesc.JUMP_FORWARD.index:
 next_instr = self.JUMP_FORWARD(oparg, next_instr)
-elif opcode == self.opcodedesc.JUMP_IF_FALSE_OR_POP.index:
+elif opcode == opcodedesc.JUMP_IF_FALSE_OR_POP.index:
 next_instr = self.JUMP_IF_FALSE_OR_POP(oparg, next_instr)
-elif opcode == self.opcodedesc.JUMP_IF_NOT_DEBUG.index:
+elif opcode == opcodedesc.JUMP_IF_NOT_DEBUG.index:
 next_instr = self.JUMP_IF_NOT_DEBUG(oparg, next_instr)
-elif opcode == self.opcodedesc.JUMP_IF_TRUE_OR_POP.index:
+elif opcode == opcodedesc.JUMP_IF_TRUE_OR_POP.ind

[pypy-commit] pypy sanitise_bytecode_dispatch: Close branch.

2013-08-31 Thread ltratt
Author: Laurence Tratt 
Branch: sanitise_bytecode_dispatch
Changeset: r66718:3fe1e3d41a0a
Date: 2013-08-31 14:35 +0100
http://bitbucket.org/pypy/pypy/changeset/3fe1e3d41a0a/

Log:Close branch.



[pypy-commit] pypy default: Merge heads.

2013-08-31 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r66720:d8a0476111d0
Date: 2013-08-31 14:37 +0100
http://bitbucket.org/pypy/pypy/changeset/d8a0476111d0/

Log:Merge heads.

diff --git a/lib-python/2.7/uuid.py b/lib-python/2.7/uuid.py
--- a/lib-python/2.7/uuid.py
+++ b/lib-python/2.7/uuid.py
@@ -127,8 +127,12 @@
 overriding the given 'hex', 'bytes', 'bytes_le', 'fields', or 'int'.
 """
 
-if [hex, bytes, bytes_le, fields, int].count(None) != 4:
-raise TypeError('need one of hex, bytes, bytes_le, fields, or int')
+if (
+((hex is not None) + (bytes is not None) + (bytes_le is not None) +
+ (fields is not None) + (int is not None)) != 1
+):
+raise TypeError('need exactly one of hex, bytes, bytes_le, fields,'
+' or int')
 if hex is not None:
 hex = hex.replace('urn:', '').replace('uuid:', '')
 hex = hex.strip('{}').replace('-', '')
diff --git a/pypy/doc/tool/makecontributor.py b/pypy/doc/tool/makecontributor.py
--- a/pypy/doc/tool/makecontributor.py
+++ b/pypy/doc/tool/makecontributor.py
@@ -60,6 +60,11 @@
 'Roberto De Ioris': ['roberto@mrspurr'],
 'Sven Hager': ['hager'],
 'Tomo Cocoa': ['cocoatomo'],
+'Romain Guillebert': ['rguillebert', 'rguillbert', 'romain', 'Guillebert Romain'],
+'Ronan Lamy': ['ronan'],
+'Edd Barrett': ['edd'],
+'Manuel Jacob': ['mjacob'],
+'Rami Chowdhury': ['necaris'],
 }
 
 alias_map = {}
@@ -80,7 +85,8 @@
 if not match:
 return set()
 ignore_words = ['around', 'consulting', 'yesterday', 'for a bit', 'thanks',
-'in-progress', 'bits of', 'even a little', 'floating',]
+'in-progress', 'bits of', 'even a little', 'floating',
+'a bit', 'reviewing']
 sep_words = ['and', ';', '+', '/', 'with special  by']
 nicknames = match.group(1)
 for word in ignore_words:
@@ -119,7 +125,7 @@
 ## print '%5d %s' % (n, name)
 ## else:
 ## print name
-
+
 items = authors_count.items()
 items.sort(key=operator.itemgetter(1), reverse=True)
 for name, n in items:
diff --git a/pypy/doc/whatsnew-head.rst b/pypy/doc/whatsnew-head.rst
--- a/pypy/doc/whatsnew-head.rst
+++ b/pypy/doc/whatsnew-head.rst
@@ -75,3 +75,12 @@
 .. branch: reflex-support
 .. branch: numpypy-inplace-op
 .. branch: rewritten-loop-logging
+
+.. branch: nobold-backtrace
+Work on improving UnionError messages and stack trace displays.
+
+.. branch: improve-errors-again
+More improvements and refactorings of error messages.
+
+.. branch: improve-errors-again2
+Unbreak tests in rlib.
diff --git a/pypy/module/micronumpy/loop.py b/pypy/module/micronumpy/loop.py
--- a/pypy/module/micronumpy/loop.py
+++ b/pypy/module/micronumpy/loop.py
@@ -132,7 +132,7 @@
 
 reduce_driver = jit.JitDriver(name='numpy_reduce',
   greens = ['shapelen', 'func', 'done_func',
-'calc_dtype', 'identity'],
+'calc_dtype'],
   reds = 'auto')
 
 def compute_reduce(obj, calc_dtype, func, done_func, identity):
@@ -146,7 +146,7 @@
 while not obj_iter.done():
 reduce_driver.jit_merge_point(shapelen=shapelen, func=func,
   done_func=done_func,
-  calc_dtype=calc_dtype, identity=identity,
+  calc_dtype=calc_dtype,
   )
 rval = obj_iter.getitem().convert_to(calc_dtype)
 if done_func is not None and done_func(calc_dtype, rval):
diff --git a/pypy/module/micronumpy/test/test_zjit.py b/pypy/module/micronumpy/test/test_zjit.py
--- a/pypy/module/micronumpy/test/test_zjit.py
+++ b/pypy/module/micronumpy/test/test_zjit.py
@@ -56,7 +56,7 @@
 elif isinstance(w_res, interp_boxes.W_BoolBox):
 return float(w_res.value)
 raise TypeError(w_res)
-
+  
 if self.graph is None:
 interp, graph = self.meta_interp(f, [0],
  listops=True,
@@ -139,11 +139,17 @@
 'int_add': 3,
 })
 
+def define_reduce():
+return """
+a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
+sum(a)
+"""
+
 def test_reduce_compile_only_once(self):
 self.compile_graph()
 reset_stats()
 pyjitpl._warmrunnerdesc.memory_manager.alive_loops.clear()
-i = self.code_mapping['sum']
+i = self.code_mapping['reduce']
 # run it twice
 retval = self.interp.eval_graph(self.graph, [i])
 retval = self.interp.eval_graph(self.graph, [i])
diff --git a/pypy/module/pypyjit/test_pypy_c/test_containers.py b/pypy/module/pypyjit/test_pypy_c/test_containers.py
--- a/pypy/module/p

[pypy-commit] pypy default: Merge in sanitise_bytecode_dispatch.

2013-08-31 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r66719:66de007c669a
Date: 2013-08-31 14:35 +0100
http://bitbucket.org/pypy/pypy/changeset/66de007c669a/

Log:Merge in sanitise_bytecode_dispatch.

diff --git a/pypy/config/pypyoption.py b/pypy/config/pypyoption.py
--- a/pypy/config/pypyoption.py
+++ b/pypy/config/pypyoption.py
@@ -127,11 +127,6 @@
 
 
 pypy_optiondescription = OptionDescription("objspace", "Object Space Options", [
-OptionDescription("opcodes", "opcodes to enable in the interpreter", [
-BoolOption("CALL_METHOD", "emit a special bytecode for expr.name()",
-   default=False),
-]),
-
 OptionDescription("usemodules", "Which Modules should be used", [
 BoolOption(modname, "use module %s" % (modname, ),
default=modname in default_modules,
@@ -307,7 +302,6 @@
 
 # all the good optimizations for PyPy should be listed here
 if level in ['2', '3', 'jit']:
-config.objspace.opcodes.suggest(CALL_METHOD=True)
 config.objspace.std.suggest(withrangelist=True)
 config.objspace.std.suggest(withmethodcache=True)
 config.objspace.std.suggest(withprebuiltchar=True)
diff --git a/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt b/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.opcodes.CALL_METHOD.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-Enable a pair of bytecodes that speed up method calls.
-See ``pypy.interpreter.callmethod`` for a description.
-
-The goal is to avoid creating the bound method object in the common
-case.  So far, this only works for calls with no keyword, no ``*arg``
-and no ``**arg`` but it would be easy to extend.
-
-For more information, see the section in `Standard Interpreter Optimizations`_.
-
-.. _`Standard Interpreter Optimizations`: ../interpreter-optimizations.html#lookup-method-call-method
diff --git a/pypy/doc/config/objspace.opcodes.txt b/pypy/doc/config/objspace.opcodes.txt
deleted file mode 100644
--- a/pypy/doc/config/objspace.opcodes.txt
+++ /dev/null
@@ -1,1 +0,0 @@
-..  intentionally empty
diff --git a/pypy/doc/interpreter-optimizations.rst b/pypy/doc/interpreter-optimizations.rst
--- a/pypy/doc/interpreter-optimizations.rst
+++ b/pypy/doc/interpreter-optimizations.rst
@@ -198,9 +198,6 @@
 if it is not None, then it is considered to be an additional first
 argument in the call to the *im_func* object from the stack.
 
-You can enable this feature with the :config:`objspace.opcodes.CALL_METHOD`
-option.
-
 .. more here?
 
 Overall Effects
diff --git a/pypy/interpreter/astcompiler/codegen.py b/pypy/interpreter/astcompiler/codegen.py
--- a/pypy/interpreter/astcompiler/codegen.py
+++ b/pypy/interpreter/astcompiler/codegen.py
@@ -982,9 +982,8 @@
 return self._call_has_no_star_args(call) and not call.keywords
 
 def _optimize_method_call(self, call):
-if not self.space.config.objspace.opcodes.CALL_METHOD or \
-not self._call_has_no_star_args(call) or \
-not isinstance(call.func, ast.Attribute):
+if not self._call_has_no_star_args(call) or \
+   not isinstance(call.func, ast.Attribute):
 return False
 attr_lookup = call.func
 assert isinstance(attr_lookup, ast.Attribute)
diff --git a/pypy/interpreter/pycode.py b/pypy/interpreter/pycode.py
--- a/pypy/interpreter/pycode.py
+++ b/pypy/interpreter/pycode.py
@@ -251,8 +251,10 @@
 tuple(self.co_cellvars))
 
 def exec_host_bytecode(self, w_globals, w_locals):
-from pypy.interpreter.pyframe import CPythonFrame
-frame = CPythonFrame(self.space, self, w_globals, None)
+if sys.version_info < (2, 7):
+raise Exception("PyPy no longer supports Python 2.6 or lower")
+from pypy.interpreter.pyframe import PyFrame
+frame = PyFrame(self.space, self, w_globals, None)
 frame.setdictscope(w_locals)
 return frame.run()
 
diff --git a/pypy/interpreter/pyframe.py b/pypy/interpreter/pyframe.py
--- a/pypy/interpreter/pyframe.py
+++ b/pypy/interpreter/pyframe.py
@@ -52,7 +52,7 @@
 
 def __init__(self, space, code, w_globals, outer_func):
 if not we_are_translated():
-assert type(self) in (space.FrameClass, CPythonFrame), (
+assert type(self) == space.FrameClass, (
 "use space.FrameClass(), not directly PyFrame()")
 self = hint(self, access_directly=True, fresh_virtualizable=True)
 assert isinstance(code, pycode.PyCode)
@@ -674,17 +674,6 @@
 return space.wrap(self.builtin is not space.builtin)
 return space.w_False
 
-class CPythonFrame(PyFrame):
-"""
-Execution of host (CPython) opcodes.
-"""
-
-bytecode_spec = host_bytecode_spec
-opcode_method_names = host_bytecode_spec.method_names
-opcodedesc = host_bytecode_spec.opcodedesc
-opdescmap = host_bytecode_spec.opdescmap
-HAVE_ARGUMENT = host_byte

[pypy-commit] pypy default: Document the sanitise_bytecode_dispatch branch.

2013-08-31 Thread ltratt
Author: Laurence Tratt 
Branch: 
Changeset: r66728:141c29c67263
Date: 2013-08-31 15:48 +0100
http://bitbucket.org/pypy/pypy/changeset/141c29c67263/

Log:Document the sanitise_bytecode_dispatch branch.

diff --git a/pypy/doc/whatsnew-head.rst b/pypy/doc/whatsnew-head.rst
--- a/pypy/doc/whatsnew-head.rst
+++ b/pypy/doc/whatsnew-head.rst
@@ -5,6 +5,11 @@
 .. this is a revision shortly after release-2.1-beta
 .. startrev: 4eb52818e7c0
 
+.. branch: sanitise_bytecode_dispatch
+Make PyPy's bytecode dispatcher easy to read, and less reliant on RPython
+magic. There is no functional change, though the removal of dead code leads
+to many fewer tests to execute.
+
 .. branch: fastjson
 Fast json decoder written in RPython, about 3-4x faster than the pure Python
 decoder which comes with the stdlib