Author: mattip <[email protected]>
Branch: cpyext-for-merge
Changeset: r83877:b88a7cbb6b17
Date: 2016-04-25 07:59 +0300
http://bitbucket.org/pypy/pypy/changeset/b88a7cbb6b17/

Log:    merge default into branch

diff --git a/.hgtags b/.hgtags
--- a/.hgtags
+++ b/.hgtags
@@ -20,3 +20,4 @@
 5f8302b8bf9f53056e40426f10c72151564e5b19 release-4.0.1
 246c9cf22037b11dc0e8c29ce3f291d3b8c5935a release-5.0
 bbd45126bc691f669c4ebdfbd74456cd274c6b92 release-5.0.1
+3260adbeba4a8b6659d1cc0d0b41f266769b74da release-5.1
diff --git a/lib_pypy/syslog.py b/lib_pypy/syslog.py
--- a/lib_pypy/syslog.py
+++ b/lib_pypy/syslog.py
@@ -51,6 +51,8 @@
     # if log is not opened, open it now
     if not _S_log_open:
         openlog()
+    if isinstance(message, unicode):
+        message = str(message)
     lib.syslog(priority, "%s", message)
 
 @builtinify
diff --git a/pypy/doc/build.rst b/pypy/doc/build.rst
--- a/pypy/doc/build.rst
+++ b/pypy/doc/build.rst
@@ -102,7 +102,7 @@
 
     apt-get install gcc make libffi-dev pkg-config libz-dev libbz2-dev \
     libsqlite3-dev libncurses-dev libexpat1-dev libssl-dev libgdbm-dev \
-    tk-dev
+    tk-dev libgc-dev
 
 For the optional lzma module on PyPy3 you will also need ``liblzma-dev``.
 
diff --git a/pypy/doc/introduction.rst b/pypy/doc/introduction.rst
--- a/pypy/doc/introduction.rst
+++ b/pypy/doc/introduction.rst
@@ -1,16 +1,22 @@
 What is PyPy?
 =============
 
-In common parlance, PyPy has been used to mean two things.  The first is the
-:ref:`RPython translation toolchain <rpython:index>`, which is a framework for generating
-dynamic programming language implementations.  And the second is one
-particular implementation that is so generated --
-an implementation of the Python_ programming language written in
-Python itself.  It is designed to be flexible and easy to experiment with.
+Historically, PyPy has been used to mean two things.  The first is the
+:ref:`RPython translation toolchain <rpython:index>` for generating
+interpreters for dynamic programming languages.  And the second is one
+particular implementation of Python_ produced with it. Because RPython
+uses the same syntax as Python, this generated version became known as
+a Python interpreter written in Python. It is designed to be flexible and
+easy to experiment with.
 
-This double usage has proven to be confusing, and we are trying to move
-away from using the word PyPy to mean both things.  From now on we will
-try to use PyPy to only mean the Python implementation, and say the
+To make this clearer: we start with source code written in RPython,
+apply the RPython translation toolchain, and end up with PyPy as a
+binary executable. This executable is the Python interpreter.
+
+This double usage has proven confusing, so we have moved away from using
+the word PyPy to mean both the toolchain and the generated interpreter.
+Now we use the word PyPy to refer only to the Python implementation, and
+explicitly mention the
 :ref:`RPython translation toolchain <rpython:index>` when we mean the framework.
 
 Some older documents, presentations, papers and videos will still have the old
diff --git a/pypy/doc/release-5.1.0.rst b/pypy/doc/release-5.1.0.rst
--- a/pypy/doc/release-5.1.0.rst
+++ b/pypy/doc/release-5.1.0.rst
@@ -3,10 +3,17 @@
 ========
 
 We have released PyPy 5.1, about a month after PyPy 5.0.
-We encourage all users of PyPy to update to this version. Apart from the usual
-bug fixes, there is an ongoing effort to improve the warmup time and memory
-usage of JIT-related metadata, and we now fully support the IBM s390x 
-architecture.
+
+This release includes further improvements to warmup time and memory
+requirements. We have seen about a 20% memory requirement reduction and up to
+a 30% warmup time improvement; more details are in the `blog post`_.
+
+We also now have `full support for the IBM s390x`_. Since this support is in
+`RPython`_, any dynamic language written using RPython, like PyPy, will
+automagically be supported on that architecture.
+
+We updated cffi_ to 1.6 and continue to improve support for the wider
+Python ecosystem using the PyPy interpreter.
 
 You can download the PyPy 5.1 release here:
 
@@ -26,6 +33,9 @@
 .. _`modules`: http://doc.pypy.org/en/latest/project-ideas.html#make-more-python-modules-pypy-friendly
 .. _`help`: http://doc.pypy.org/en/latest/project-ideas.html
 .. _`numpy`: https://bitbucket.org/pypy/numpy
+.. _cffi: https://cffi.readthedocs.org
+.. _`full support for the IBM s390x`: http://morepypy.blogspot.com/2016/04/pypy-enterprise-edition.html
+.. _`blog post`: http://morepypy.blogspot.com/2016/04/warmup-improvements-more-efficient.html
 
 What is PyPy?
 =============
@@ -46,7 +56,7 @@
   
   * big- and little-endian variants of **PPC64** running Linux,
 
-  * **s960x** running Linux
+  * **s390x** running Linux
 
 .. _`PyPy and CPython 2.7.x`: http://speed.pypy.org
 .. _`dynamic languages`: http://pypyjs.org
@@ -74,6 +84,8 @@
   * Fix a corner case in the JIT
 
   * Fix edge cases in the cpyext refcounting-compatible semantics
+    (more work on cpyext compatibility is coming in the ``cpyext-ext``
+    branch, but isn't ready yet)
 
   * Try harder to not emit NEON instructions on ARM processors without NEON
     support
@@ -92,11 +104,17 @@
 
   * Fix sandbox startup (a regression in 5.0)
 
+  * Fix possible segfault for classes with mangled mro or __metaclass__
+
+  * Fix isinstance(deque(), Hashable) on the pure python deque
+
+  * Fix an issue with forkpty()
+
  * Issues reported with our previous release were resolved_ after reports from users on
     our issue tracker at https://bitbucket.org/pypy/pypy/issues or on IRC at
     #pypy
 
-* Numpy:
+* Numpy_:
 
   * Implemented numpy.where for a single argument
 
@@ -108,6 +126,8 @@
     functions exported from libpypy.so are declared in pypy_numpy.h, which is
     included only when building our fork of numpy
 
+  * Add broadcast
+
 * Performance improvements:
 
   * Improve str.endswith([tuple]) and str.startswith([tuple]) to allow JITting
@@ -119,14 +139,18 @@
   * Remove the forced minor collection that occurs when rewriting the
     assembler at the start of the JIT backend
 
+  * Port the resource module to cffi
+
 * Internal refactorings:
 
   * Use a simpler logger to speed up translation
 
   * Drop vestiges of Python 2.5 support in testing
 
+  * Update rpython functions with ones needed for py3k
+
 .. _resolved: http://doc.pypy.org/en/latest/whatsnew-5.0.0.html
-.. _`blog post`: http://morepypy.blogspot.com/2016/02/c-api-support-update.html
+.. _Numpy: https://bitbucket.org/pypy/numpy
 
 Please update, and continue to help us make PyPy better.
 
diff --git a/pypy/doc/whatsnew-5.1.0.rst b/pypy/doc/whatsnew-5.1.0.rst
--- a/pypy/doc/whatsnew-5.1.0.rst
+++ b/pypy/doc/whatsnew-5.1.0.rst
@@ -60,3 +60,13 @@
 Remove old uneeded numpy headers, what is left is only for testing. Also 
 generate pypy_numpy.h which exposes functions to directly use micronumpy
 ndarray and ufuncs
+
+.. branch: rposix-for-3
+
+Reuse rposix definition of TIMESPEC in rposix_stat, add wrapper for fstatat().
+This updates the underlying rpython functions with the ones needed for the 
+py3k branch
+ 
+.. branch: numpy_broadcast
+
+Add broadcast to micronumpy
diff --git a/pypy/doc/whatsnew-head.rst b/pypy/doc/whatsnew-head.rst
--- a/pypy/doc/whatsnew-head.rst
+++ b/pypy/doc/whatsnew-head.rst
@@ -3,14 +3,10 @@
 =========================
 
 .. this is a revision shortly after release-5.1
-.. startrev: 2180e1eaf6f6
+.. startrev: aa60332382a1
 
-.. branch: rposix-for-3
+.. branch: techtonik/introductionrst-simplify-explanation-abo-1460879168046
 
-Reuse rposix definition of TIMESPEC in rposix_stat, add wrapper for fstatat().
-This updates the underlying rpython functions with the ones needed for the 
-py3k branch
- 
-.. branch: numpy_broadcast
+.. branch: gcheader-decl
 
-Add broadcast to micronumpy
+Reduce the size of generated C sources.
diff --git a/pypy/module/_cffi_backend/__init__.py b/pypy/module/_cffi_backend/__init__.py
--- a/pypy/module/_cffi_backend/__init__.py
+++ b/pypy/module/_cffi_backend/__init__.py
@@ -46,6 +46,7 @@
         '_get_types': 'func._get_types',
         '_get_common_types': 'func._get_common_types',
         'from_buffer': 'func.from_buffer',
+        'gcp': 'func.gcp',
 
         'string': 'func.string',
         'unpack': 'func.unpack',
diff --git a/pypy/module/_cffi_backend/test/test_recompiler.py b/pypy/module/_cffi_backend/test/test_recompiler.py
--- a/pypy/module/_cffi_backend/test/test_recompiler.py
+++ b/pypy/module/_cffi_backend/test/test_recompiler.py
@@ -1773,14 +1773,14 @@
 
     def test_introspect_order(self):
         ffi, lib = self.prepare("""
-            union aaa { int a; }; typedef struct ccc { int a; } b;
-            union g   { int a; }; typedef struct cc  { int a; } bbb;
-            union aa  { int a; }; typedef struct a   { int a; } bb;
+            union CFFIaaa { int a; }; typedef struct CFFIccc { int a; } CFFIb;
+            union CFFIg   { int a; }; typedef struct CFFIcc  { int a; } CFFIbbb;
+            union CFFIaa  { int a; }; typedef struct CFFIa   { int a; } CFFIbb;
         """, "test_introspect_order", """
-            union aaa { int a; }; typedef struct ccc { int a; } b;
-            union g   { int a; }; typedef struct cc  { int a; } bbb;
-            union aa  { int a; }; typedef struct a   { int a; } bb;
+            union CFFIaaa { int a; }; typedef struct CFFIccc { int a; } CFFIb;
+            union CFFIg   { int a; }; typedef struct CFFIcc  { int a; } CFFIbbb;
+            union CFFIaa  { int a; }; typedef struct CFFIa   { int a; } CFFIbb;
         """)
-        assert ffi.list_types() == (['b', 'bb', 'bbb'],
-                                        ['a', 'cc', 'ccc'],
-                                        ['aa', 'aaa', 'g'])
+        assert ffi.list_types() == (['CFFIb', 'CFFIbb', 'CFFIbbb'],
+                                    ['CFFIa', 'CFFIcc', 'CFFIccc'],
+                                    ['CFFIaa', 'CFFIaaa', 'CFFIg'])
diff --git a/pypy/module/_cffi_backend/wrapper.py b/pypy/module/_cffi_backend/wrapper.py
--- a/pypy/module/_cffi_backend/wrapper.py
+++ b/pypy/module/_cffi_backend/wrapper.py
@@ -92,7 +92,8 @@
         return ctype._call(self.fnptr, args_w)
 
     def descr_repr(self, space):
-        return space.wrap("<FFIFunctionWrapper for %s()>" % (self.fnname,))
+        doc = self.rawfunctype.repr_fn_type(self.ffi, self.fnname)
+        return space.wrap("<FFIFunctionWrapper '%s'>" % (doc,))
 
     def descr_get_doc(self, space):
         doc = self.rawfunctype.repr_fn_type(self.ffi, self.fnname)
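[Editor's note: the repr change above makes the function wrapper display the full C signature, reusing the same repr_fn_type() string that already backs __doc__. A toy sketch of that idea follows; the class and signature string here are hypothetical stand-ins, not the actual cffi internals.]

```python
class FFIFunctionWrapper:
    """Toy model of the patched wrapper: repr shows the whole signature."""

    def __init__(self, fnname, signature):
        self.fnname = fnname
        self.signature = signature  # e.g. "int add(int, int)", as repr_fn_type would build

    def __repr__(self):
        # mirror the patch: display the signature, not just the bare name
        return "<FFIFunctionWrapper '%s'>" % (self.signature,)

w = FFIFunctionWrapper("add", "int add(int, int)")
assert repr(w) == "<FFIFunctionWrapper 'int add(int, int)'>"
```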
diff --git a/pypy/objspace/std/specialisedtupleobject.py b/pypy/objspace/std/specialisedtupleobject.py
--- a/pypy/objspace/std/specialisedtupleobject.py
+++ b/pypy/objspace/std/specialisedtupleobject.py
@@ -180,10 +180,9 @@
 
 def specialized_zip_2_lists(space, w_list1, w_list2):
     from pypy.objspace.std.listobject import W_ListObject
-    if (not isinstance(w_list1, W_ListObject) or
-        not isinstance(w_list2, W_ListObject)):
+    if type(w_list1) is not W_ListObject or type(w_list2) is not W_ListObject:
         raise OperationError(space.w_TypeError,
-                             space.wrap("expected two lists"))
+                             space.wrap("expected two exact lists"))
 
     if space.config.objspace.std.withspecialisedtuple:
         intlist1 = w_list1.getitems_int()
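[Editor's note: the hunk above tightens an isinstance() check into an exact type() check, because the specialised fast path is only safe for plain lists, not for subclasses that may override behaviour. A small plain-Python illustration of the distinction the patch relies on:]

```python
class MyList(list):
    """A list subclass; it could override iteration or item access."""
    pass

m = MyList([1, 2, 3])

# isinstance() accepts subclasses, so it would let MyList into the fast path
assert isinstance(m, list)

# the exact-type check from the patch rejects it, accepting only plain lists
assert type(m) is not list
assert type([1, 2, 3]) is list
```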
diff --git a/pypy/tool/release/repackage.sh b/pypy/tool/release/repackage.sh
--- a/pypy/tool/release/repackage.sh
+++ b/pypy/tool/release/repackage.sh
@@ -3,13 +3,17 @@
 min=1
 rev=0
 branchname=release-$maj.x  # ==OR== release-$maj.$min.x
-tagname=release-$maj.$min.$rev
+tagname=release-$maj.$min  # ==OR== release-$maj.$min.$rev
+
+hg log -r $branchname || exit 1
+hg log -r $tagname || exit 1
+
 # This script will download latest builds from the buildmaster, rename the top
 # level directory, and repackage ready to be uploaded to bitbucket. It will also
 # download source, assuming a tag for the release already exists, and repackage them.
 # The script should be run in an empty directory, i.e. /tmp/release_xxx
 
-for plat in linux linux64 linux-armhf-raspbian linux-armhf-raring linux-armel osx64
+for plat in linux linux64 linux-armhf-raspbian linux-armhf-raring linux-armel osx64 s390x
   do
     wget http://buildbot.pypy.org/nightly/$branchname/pypy-c-jit-latest-$plat.tar.bz2
     tar -xf pypy-c-jit-latest-$plat.tar.bz2
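[Editor's note: the two `hg log -r ... || exit 1` lines added above act as a preflight check, aborting before any downloads if the branch or tag name does not resolve. The same fail-fast pattern, sketched generically; `rev_exists` below is a hypothetical stand-in for `hg log -r $tagname`.]

```shell
#!/bin/sh
# Fail-fast preflight, mirroring the patch: verify a revision name
# resolves before doing any expensive work.
# 'rev_exists' is a stand-in for 'hg log -r "$1"'.
rev_exists() { [ "$1" = "release-5.1" ]; }

rev_exists "release-5.1" || { echo "unknown tag"; exit 1; }
echo "tag ok, proceeding"
```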
diff --git a/rpython/memory/gc/incminimark.py b/rpython/memory/gc/incminimark.py
--- a/rpython/memory/gc/incminimark.py
+++ b/rpython/memory/gc/incminimark.py
@@ -341,6 +341,20 @@
         self.prebuilt_root_objects = self.AddressStack()
         #
         self._init_writebarrier_logic()
+        #
+        # The size of all the objects turned from 'young' to 'old'
+        # since we started the last major collection cycle.  This is
+        # used to track progress of the incremental GC: normally, we
+        # run one major GC step after each minor collection, but if a
+        # lot of objects are made old, we need to run two or more steps.
+        # Otherwise the risk is that we create old objects faster than
+        # we're collecting them.  The 'threshold' is incremented after
+        # each major GC step at a fixed rate; the idea is that as long
+        # as 'size_objects_made_old > threshold_objects_made_old' then
+        # we must do more major GC steps.  See major_collection_step()
+        # for more details.
+        self.size_objects_made_old = r_uint(0)
+        self.threshold_objects_made_old = r_uint(0)
 
 
     def setup(self):
@@ -464,7 +478,7 @@
                 self.gc_nursery_debug = True
             else:
                 self.gc_nursery_debug = False
-            self.minor_collection()    # to empty the nursery
+            self._minor_collection()    # to empty the nursery
             llarena.arena_free(self.nursery)
             self.nursery_size = newsize
             self.allocate_nursery()
@@ -509,8 +523,8 @@
         self.min_heap_size = max(self.min_heap_size, self.nursery_size *
                                               self.major_collection_threshold)
         # the following two values are usually equal, but during raw mallocs
-        # of arrays, next_major_collection_threshold is decremented to make
-        # the next major collection arrive earlier.
+        # with memory pressure accounting, next_major_collection_threshold
+        # is decremented to make the next major collection arrive earlier.
         # See translator/c/test/test_newgc, test_nongc_attached_to_gc
         self.next_major_collection_initial = self.min_heap_size
         self.next_major_collection_threshold = self.min_heap_size
@@ -700,21 +714,60 @@
     def collect(self, gen=2):
         """Do a minor (gen=0), start a major (gen=1), or do a full
         major (gen>=2) collection."""
-        if gen <= 1:
-            self.minor_collection()
-            if gen == 1 or (self.gc_state != STATE_SCANNING and gen != -1):
+        if gen < 0:
+            self._minor_collection()   # dangerous! no major GC cycle progress
+        elif gen <= 1:
+            self.minor_collection_with_major_progress()
+            if gen == 1 and self.gc_state == STATE_SCANNING:
                 self.major_collection_step()
         else:
             self.minor_and_major_collection()
         self.rrc_invoke_callback()
 
 
+    def minor_collection_with_major_progress(self, extrasize=0):
+        """Do a minor collection.  Then, if there is already a major GC
+        in progress, run at least one major collection step.  If there is
+        no major GC but the threshold is reached, start a major GC.
+        """
+        self._minor_collection()
+
+        # If the gc_state is STATE_SCANNING, we're not in the middle
+        # of an incremental major collection.  In that case, wait
+        # until there is too much garbage before starting the next
+        # major collection.  But if we are in the middle of an
+        # incremental major collection, then always do (at least) one
+        # step now.
+        #
+        # Within a major collection cycle, every call to
+        # major_collection_step() increments
+        # 'threshold_objects_made_old' by nursery_size/2.
+
+        if self.gc_state != STATE_SCANNING or self.threshold_reached(extrasize):
+            self.major_collection_step(extrasize)
+
+            # See documentation in major_collection_step() for target invariants
+            while self.gc_state != STATE_SCANNING:    # target (A1)
+                threshold = self.threshold_objects_made_old
+                if threshold >= r_uint(extrasize):
+                    threshold -= r_uint(extrasize)     # (*)
+                    if self.size_objects_made_old <= threshold:   # target (A2)
+                        break
+                    # Note that target (A2) is tweaked by (*); see
+                    # test_gc_set_max_heap_size in translator/c, test_newgc.py
+
+                self._minor_collection()
+                self.major_collection_step(extrasize)
+
+        self.rrc_invoke_callback()
+
+
     def collect_and_reserve(self, totalsize):
         """To call when nursery_free overflows nursery_top.
         First check if pinned objects are in front of nursery_top. If so,
         jump over the pinned object and try again to reserve totalsize.
-        Otherwise do a minor collection, and possibly a major collection, and
-        finally reserve totalsize bytes.
+        Otherwise do a minor collection, and possibly some steps of a
+        major collection, and finally reserve totalsize bytes.
         """
 
         minor_collection_count = 0
@@ -757,47 +810,27 @@
                 self.nursery_top = self.nursery_barriers.popleft()
             else:
                 minor_collection_count += 1
-                self.minor_collection()
                 if minor_collection_count == 1:
+                    self.minor_collection_with_major_progress()
+                else:
+                    # Nursery too full again.  This is likely because of
+                    # execute_finalizers() or rrc_invoke_callback().
+                    # We need to fix it with another call to _minor_collection()
+                    # ---this time only the minor part so that we are sure that
+                    # the nursery is empty (apart from pinned objects).
                     #
-                    # If the gc_state is STATE_SCANNING, we're not in
-                    # the middle of an incremental major collection.
-                    # In that case, wait until there is too much
-                    # garbage before starting the next major
-                    # collection.  But if we are in the middle of an
-                    # incremental major collection, then always do (at
-                    # least) one step now.
+                    # Note that this still works with the counters:
+                    # 'size_objects_made_old' will be increased by
+                    # the _minor_collection() below.  We don't
+                    # immediately restore the target invariant that
+                    # 'size_objects_made_old <= threshold_objects_made_old'.
+                    # But we will do it in the next call to
+                    # minor_collection_with_major_progress().
                     #
-                    # This will increment next_major_collection_threshold
-                    # by nursery_size//2.  If more than nursery_size//2
-                    # survives, then threshold_reached() might still be
-                    # true after that.  In that case we do a second step.
-                    # The goal is to avoid too high memory peaks if the
-                    # program allocates a lot of surviving objects.
-                    # 
-                    if (self.gc_state != STATE_SCANNING or
-                           self.threshold_reached()):
-
-                        self.major_collection_step()
-
-                        if (self.gc_state != STATE_SCANNING and
-                               self.threshold_reached()):  # ^^but only if still
-                            self.minor_collection()        # the same collection
-                            self.major_collection_step()
-                    #
-                    self.rrc_invoke_callback()
-                    #
-                    # The nursery might not be empty now, because of
-                    # execute_finalizers() or rrc_invoke_callback().
-                    # If it is almost full again,
-                    # we need to fix it with another call to minor_collection().
-                    if self.nursery_free + totalsize > self.nursery_top:
-                        self.minor_collection()
-                    #
-                else:
                     ll_assert(minor_collection_count == 2,
-                            "Seeing minor_collection() at least twice."
-                            "Too many pinned objects?")
+                              "Calling minor_collection() twice is not "
+                              "enough. Too many pinned objects?")
+                    self._minor_collection()
             #
             # Tried to do something about nursery_free overflowing
             # nursery_top before this point. Try to reserve totalsize now.
@@ -855,21 +888,9 @@
         # to major_collection_step().  If there is really no memory,
         # then when the major collection finishes it will raise
         # MemoryError.
-        #
-        # The logic is to first do a minor GC only, and check if that
-        # was enough to free a bunch of large young objects.  If it
-        # was, then we don't do any major collection step.
-        #
-        while self.threshold_reached(raw_malloc_usage(totalsize)):
-            self.minor_collection()
-            if self.threshold_reached(raw_malloc_usage(totalsize) +
-                                      self.nursery_size // 2):
-                self.major_collection_step(raw_malloc_usage(totalsize))
-            self.rrc_invoke_callback()
-            # note that this loop should not be infinite: when the
-            # last step of a major collection is done but
-            # threshold_reached(totalsize) is still true, then
-            # we should get a MemoryError from major_collection_step().
+        if self.threshold_reached(raw_malloc_usage(totalsize)):
+            self.minor_collection_with_major_progress(
+                raw_malloc_usage(totalsize) + self.nursery_size // 2)
         #
         # Check if the object would fit in the ArenaCollection.
         # Also, an object allocated from ArenaCollection must be old.
@@ -1547,7 +1568,7 @@
     # ----------
     # Nursery collection
 
-    def minor_collection(self):
+    def _minor_collection(self):
         """Perform a minor collection: find the objects from the nursery
         that remain alive and move them out."""
         #
@@ -1718,6 +1739,10 @@
         self.old_objects_pointing_to_pinned.foreach(
                 self._reset_flag_old_objects_pointing_to_pinned, None)
         #
+        # Accounting: 'nursery_surviving_size' is the size of objects
+        # from the nursery that we just moved out.
+        self.size_objects_made_old += r_uint(self.nursery_surviving_size)
+        #
         debug_print("minor collect, total memory used:",
                     self.get_total_memory_used())
         debug_print("number of pinned objects:",
@@ -1958,6 +1983,7 @@
             self.header(obj).tid &= ~GCFLAG_HAS_SHADOW
             #
             totalsize = size_gc_header + self.get_size(obj)
+            self.nursery_surviving_size += raw_malloc_usage(totalsize)
         #
         # Copy it.  Note that references to other objects in the
         # nursery are kept unchanged in this step.
@@ -2002,6 +2028,11 @@
             return
         hdr.tid |= GCFLAG_VISITED_RMY
         #
+        # Accounting
+        size_gc_header = self.gcheaderbuilder.size_gc_header
+        size = size_gc_header + self.get_size(obj)
+        self.size_objects_made_old += r_uint(raw_malloc_usage(size))
+        #
         # we just made 'obj' old, so we need to add it to the correct lists
         added_somewhere = False
         #
@@ -2084,14 +2115,14 @@
 
     def gc_step_until(self, state):
         while self.gc_state != state:
-            self.minor_collection()
+            self._minor_collection()
             self.major_collection_step()
 
     debug_gc_step_until = gc_step_until   # xxx
 
     def debug_gc_step(self, n=1):
         while n > 0:
-            self.minor_collection()
+            self._minor_collection()
             self.major_collection_step()
             n -= 1
 
@@ -2111,37 +2142,44 @@
         self.debug_check_consistency()
 
         #
+        # 'threshold_objects_made_old' is used inside comparisons
+        # with 'size_objects_made_old' to know when we must do
+        # several major GC steps (i.e. several consecutive calls
+        # to the present function).  Here is the target that
+        # we aim for: either (A1) or (A2)
+        #
+        #  (A1)  gc_state == STATE_SCANNING   (i.e. major GC cycle ended)
+        #  (A2)  size_objects_made_old <= threshold_objects_made_old
+        #
         # Every call to major_collection_step() adds nursery_size//2
-        # to the threshold.  It is reset at the end of this function
-        # when the major collection is fully finished.
-        #
+        # to 'threshold_objects_made_old'.
         # In the common case, this is larger than the size of all
         # objects that survive a minor collection.  After a few
         # minor collections (each followed by one call to
         # major_collection_step()) the threshold is much higher than
-        # the currently-in-use old memory.  Then threshold_reached()
-        # won't be true again until the major collection fully
-        # finishes, time passes, and it's time for the next major
-        # collection.
+        # the 'size_objects_made_old', making the target invariant (A2)
+        # true by a large margin.
         #
         # However there are less common cases:
         #
-        # * if more than half of the nursery consistently survives: we
-        #   call major_collection_step() twice after a minor
-        #   collection;
+        # * if more than half of the nursery consistently survives:
+        #   then we need two calls to major_collection_step() after
+        #   some minor collection;
         #
         # * or if we're allocating a large number of bytes in
-        #   external_malloc().  In that case, we are likely to reach
-        #   again the threshold_reached() case, and more major
-        #   collection steps will be done immediately until
-        #   threshold_reached() returns false.
+        #   external_malloc() and some of them survive the following
+        #   minor collection.  In that case, more than two major
+        #   collection steps must be done immediately, until we
+        #   restore the target invariant (A2).
         #
-        self.next_major_collection_threshold += self.nursery_size // 2
+        self.threshold_objects_made_old += r_uint(self.nursery_size // 2)
 
 
-        # XXX currently very coarse increments, get this working then split
-        # to smaller increments using stacks for resuming
         if self.gc_state == STATE_SCANNING:
+            # starting a major GC cycle: reset these two counters
+            self.size_objects_made_old = r_uint(0)
+            self.threshold_objects_made_old = r_uint(self.nursery_size // 2)
+
             self.objects_to_trace = self.AddressStack()
             self.collect_roots()
             self.gc_state = STATE_MARKING
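[Editor's note: the counters introduced above implement a simple pacing invariant: each major_collection_step() raises threshold_objects_made_old by nursery_size//2, and extra steps run while size_objects_made_old still exceeds the threshold (target A2). A toy model of that pacing logic, with the real GC machinery stubbed out; only the two counter names are taken from the patch, everything else is illustrative:]

```python
NURSERY_SIZE = 1000  # arbitrary size for the sketch

class ToyGC:
    def __init__(self):
        self.size_objects_made_old = 0
        self.threshold_objects_made_old = 0
        self.steps = 0

    def major_collection_step(self):
        # each step raises the threshold at a fixed rate, as in the patch
        self.threshold_objects_made_old += NURSERY_SIZE // 2
        self.steps += 1

    def minor_collection_with_major_progress(self, surviving):
        # a minor collection turns 'surviving' bytes of objects old
        self.size_objects_made_old += surviving
        self.major_collection_step()
        # target (A2): keep stepping until size <= threshold
        while self.size_objects_made_old > self.threshold_objects_made_old:
            self.major_collection_step()

gc = ToyGC()
gc.minor_collection_with_major_progress(surviving=300)   # one step suffices
assert gc.steps == 1
gc.minor_collection_with_major_progress(surviving=1200)  # needs extra steps
assert gc.steps == 3
assert gc.size_objects_made_old <= gc.threshold_objects_made_old
```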
diff --git a/rpython/memory/gc/test/test_object_pinning.py b/rpython/memory/gc/test/test_object_pinning.py
--- a/rpython/memory/gc/test/test_object_pinning.py
+++ b/rpython/memory/gc/test/test_object_pinning.py
@@ -19,6 +19,8 @@
         BaseDirectGCTest.setup_method(self, meth)
         max = getattr(meth, 'max_number_of_pinned_objects', 20)
         self.gc.max_number_of_pinned_objects = max
+        if not hasattr(self.gc, 'minor_collection'):
+            self.gc.minor_collection = self.gc._minor_collection
 
     def test_pin_can_move(self):
         # even a pinned object is considered to be movable. Only the caller
diff --git a/rpython/tool/ansi_print.py b/rpython/tool/ansi_print.py
--- a/rpython/tool/ansi_print.py
+++ b/rpython/tool/ansi_print.py
@@ -67,6 +67,8 @@
 
     def dot(self):
         """Output a mandelbrot dot to the terminal."""
+        if not isatty():
+            return
         global wrote_dot
         if not wrote_dot:
             mandelbrot_driver.reset()
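[Editor's note: the guard added to dot() suppresses progress dots when output is not an interactive terminal, e.g. when translation output is piped into a log file. A minimal sketch of the same pattern; the function below is illustrative, not the ansi_print implementation:]

```python
import io
import sys

def dot(stream=None):
    # emit a progress dot only when writing to an interactive terminal,
    # mirroring the isatty() guard in the patch
    stream = stream or sys.stdout
    if not stream.isatty():
        return
    stream.write(".")

buf = io.StringIO()          # StringIO.isatty() is False, like a pipe
dot(buf)
dot(buf)
assert buf.getvalue() == ""  # nothing written to a non-tty stream
```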
diff --git a/rpython/tool/test/test_ansi_print.py b/rpython/tool/test/test_ansi_print.py
--- a/rpython/tool/test/test_ansi_print.py
+++ b/rpython/tool/test/test_ansi_print.py
@@ -65,6 +65,19 @@
     assert output[3] == ('[test:WARNING] maybe?\n', (31,))
     assert len(output[4][0]) == 1    # single character
 
+def test_no_tty():
+    log = ansi_print.AnsiLogger('test')
+    with FakeOutput(tty=False) as output:
+        log.dot()
+        log.dot()
+        log.WARNING('oops')
+        log.WARNING('maybe?')
+        log.dot()
+    assert len(output) == 2
+    assert output[0] == ('[test:WARNING] oops\n', ())
+    assert output[1] == ('[test:WARNING] maybe?\n', ())
+        
+
 def test_unknown_method_names():
     log = ansi_print.AnsiLogger('test')
     with FakeOutput() as output:
diff --git a/rpython/translator/c/node.py b/rpython/translator/c/node.py
--- a/rpython/translator/c/node.py
+++ b/rpython/translator/c/node.py
@@ -547,7 +547,6 @@
             gct = self.db.gctransformer
             if gct is not None:
                 self.gc_init = gct.gcheader_initdata(self.obj)
-                db.getcontainernode(self.gc_init)
             else:
                 self.gc_init = None
 
@@ -678,7 +677,6 @@
             gct = self.db.gctransformer
             if gct is not None:
                 self.gc_init = gct.gcheader_initdata(self.obj)
-                db.getcontainernode(self.gc_init)
             else:
                 self.gc_init = None
 
diff --git a/rpython/translator/driver.py b/rpython/translator/driver.py
--- a/rpython/translator/driver.py
+++ b/rpython/translator/driver.py
@@ -399,7 +399,7 @@
             try:
                 configure_boehm(self.translator.platform)
             except CompilationError, e:
-                i = 'Boehm GC not installed.  Try e.g. "translate.py --gc=hybrid"'
+                i = 'Boehm GC not installed.  Try e.g. "translate.py --gc=minimark"'
                 raise Exception(str(e) + '\n' + i)
 
     @taskdef([STACKCHECKINSERTION, '?'+BACKENDOPT, RTYPE, '?annotate'],
_______________________________________________
pypy-commit mailing list
[email protected]
https://mail.python.org/mailman/listinfo/pypy-commit
