Hello community,

Here is the log from the commit of package python-pyperf for openSUSE:Leap:15.2,
checked in at 2020-03-13 10:57:54
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2/python-pyperf (Old)
 and      /work/SRC/openSUSE:Leap:15.2/.python-pyperf.new.3160 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-pyperf"

Fri Mar 13 10:57:54 2020 rev:2 rq:783424 version:1.7.0

Changes:
--------
--- /work/SRC/openSUSE:Leap:15.2/python-pyperf/python-pyperf.changes    2020-02-22 17:51:24.289645627 +0100
+++ /work/SRC/openSUSE:Leap:15.2/.python-pyperf.new.3160/python-pyperf.changes  2020-03-13 10:59:14.544514559 +0100
@@ -1,0 +2,20 @@
+Mon Mar  9 10:35:44 UTC 2020 - Tomáš Chvátal <[email protected]>
+
+- Add patches to work with py3.8:
+ * python-retcode.patch
+ * python38.patch
+
+-------------------------------------------------------------------
+Mon Mar  9 09:21:24 UTC 2020 - Tomáš Chvátal <[email protected]>
+
+- Update to 1.7.0:
+  * metadata: add ``python_compiler``
+  * Windows: inherit ``SystemDrive`` environment variable by default.
+  * Fix tests on ARM and PPC: cpu_model_name metadata is no longer required
+    on Linux.
+  * tests: Do not allow test suite to execute without unittest2 on Python2,
+    otherwise many failures occur due to missing 'assertRegex'.
+  * doc: Update old/dead links.
+  * Travis CI: drop Python 3.4 support.
+
+-------------------------------------------------------------------

Old:
----
  pyperf-1.6.1.tar.gz

New:
----
  pyperf-1.7.0.tar.gz
  python-retcode.patch
  python38.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-pyperf.spec ++++++
--- /var/tmp/diff_new_pack.0RjxzJ/_old  2020-03-13 10:59:14.996514881 +0100
+++ /var/tmp/diff_new_pack.0RjxzJ/_new  2020-03-13 10:59:15.004514887 +0100
@@ -1,7 +1,7 @@
 #
 # spec file for package python-pyperf
 #
-# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2020 SUSE LLC
 #
 # All modifications and additions to the file contributed by third parties
 # remain the property of their copyright owners, unless otherwise agreed
@@ -12,20 +12,26 @@
 # license that conforms to the Open Source Definition (Version 1.9)
 # published by the Open Source Initiative.
 
-# Please submit bugfixes or comments via http://bugs.opensuse.org/
+# Please submit bugfixes or comments via https://bugs.opensuse.org/
+#
 
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-pyperf
-Version:        1.6.1
+Version:        1.7.0
 Release:        0
-License:        MIT
 Summary:        Python module to run and analyze benchmarks
-Url:            https://github.com/vstinner/pyperf
-Group:          Development/Languages/Python
+License:        MIT
+URL:            https://github.com/vstinner/pyperf
 Source:         https://files.pythonhosted.org/packages/source/p/pyperf/pyperf-%{version}.tar.gz
-BuildRequires:  python-rpm-macros
+Patch0:         python38.patch
+Patch1:         python-retcode.patch
 BuildRequires:  %{python_module setuptools}
+BuildRequires:  fdupes
+BuildRequires:  python-rpm-macros
+Requires:       python-six
+Recommends:     python-psutil
+BuildArch:      noarch
 # SECTION test requirements
 BuildRequires:  %{python_module mock}
 BuildRequires:  %{python_module psutil}
@@ -35,15 +41,10 @@
 BuildRequires:  python2-statistics
 BuildRequires:  python2-unittest2
 # /SECTION
-BuildRequires:  fdupes
-Requires:       python-six
-Recommends:     python-psutil
 %ifpython2
 Requires:       python2-contextlib2
 Requires:       python2-statistics
 %endif
-BuildArch:      noarch
-
 %python_subpackages
 
 %description
@@ -51,6 +52,7 @@
 
 %prep
 %setup -q -n pyperf-%{version}
+%autopatch -p1
 
 %build
 %python_build
@@ -60,8 +62,7 @@
 %python_expand %fdupes %{buildroot}%{$python_sitelib}
 
 %check
-# See https://github.com/vstinner/pyperf/issues/58 for test_collect_metadata failure
-%pytest -k 'not test_collect_metadata'
+%pytest
 
 %files %{python_files}
 %doc README.rst
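For readers unfamiliar with the macro added to %prep above: `%autopatch -p1` applies every `PatchN:` listed in the spec (here python38.patch and python-retcode.patch), in order, with a `-p1` strip level. A minimal standalone sketch of the same mechanism using plain `patch(1)` — the file and patch below are toy stand-ins, not the actual pyperf patches:

```shell
# Build a toy source tree plus a unified diff, then apply it the way
# %autopatch -p1 would: patch -p1 strips the leading "a/"/"b/" component.
mkdir -p src
printf 'VERSION = (1, 6, 1)\n' > src/version.py

cat > fix.patch <<'EOF'
--- a/version.py
+++ b/version.py
@@ -1 +1 @@
-VERSION = (1, 6, 1)
+VERSION = (1, 7, 0)
EOF

( cd src && patch -p1 < ../fix.patch )
cat src/version.py   # now reads: VERSION = (1, 7, 0)
```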

++++++ pyperf-1.6.1.tar.gz -> pyperf-1.7.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/.travis.yml new/pyperf-1.7.0/.travis.yml
--- old/pyperf-1.6.1/.travis.yml        2018-10-19 17:50:15.000000000 +0200
+++ new/pyperf-1.7.0/.travis.yml        2019-10-14 14:37:29.000000000 +0200
@@ -1,7 +1,7 @@
 language: python
 env:
   - TOXENV=py27
-  - TOXENV=py34
+  - TOXENV=py3
   - TOXENV=doc
   - TOXENV=pep8
 # upgrade setuptools to support environment markers
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/PKG-INFO new/pyperf-1.7.0/PKG-INFO
--- old/pyperf-1.6.1/PKG-INFO   2019-05-21 23:00:50.000000000 +0200
+++ new/pyperf-1.7.0/PKG-INFO   2019-12-17 22:11:41.000000000 +0100
@@ -1,10 +1,10 @@
 Metadata-Version: 1.1
 Name: pyperf
-Version: 1.6.1
+Version: 1.7.0
 Summary: Python module to run and analyze benchmarks
 Home-page: https://github.com/vstinner/pyperf
 Author: Victor Stinner
-Author-email: [email protected]
+Author-email: [email protected]
 License: MIT license
 Description: ******
         pyperf
@@ -135,7 +135,7 @@
         .. _the API docs: http://pyperf.readthedocs.io/en/latest/api.html#Runner.timeit
         .. _analyze benchmark results: https://pyperf.readthedocs.io/en/latest/analyze.html
 Platform: UNKNOWN
-Classifier: Development Status :: 4 - Beta
+Classifier: Development Status :: 5 - Production/Stable
 Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: MIT License
 Classifier: Natural Language :: English
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/doc/api.rst new/pyperf-1.7.0/doc/api.rst
--- old/pyperf-1.6.1/doc/api.rst        2019-05-14 23:25:47.000000000 +0200
+++ new/pyperf-1.7.0/doc/api.rst        2019-09-02 11:30:34.000000000 +0200
@@ -673,6 +673,7 @@
 
 Python metadata:
 
+* ``python_compiler``: Compiler name and version.
 * ``python_cflags``: Compiler flags used to compile Python.
 * ``python_executable``: path to the Python executable
 * ``python_hash_seed``: value of the ``PYTHONHASHSEED`` environment variable
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/doc/changelog.rst new/pyperf-1.7.0/doc/changelog.rst
--- old/pyperf-1.6.1/doc/changelog.rst  2019-05-21 22:59:46.000000000 +0200
+++ new/pyperf-1.7.0/doc/changelog.rst  2019-12-17 22:07:31.000000000 +0100
@@ -1,6 +1,20 @@
 Changelog
 =========
 
+Version 1.7.0 (2019-12-17)
+--------------------------
+
+* metadata: add ``python_compiler``
+* Windows: inherit ``SystemDrive`` environment variable by default.
+  Contribution by Steve Dower.
+* Fix tests on ARM and PPC: cpu_model_name metadata is no longer required
+  on Linux.
+* tests: Do not allow test suite to execute without unittest2 on Python2,
+  otherwise many failures occur due to missing 'assertRegex'.
+  Contribution by John Vandenberg.
+* doc: Update old/dead links.
+* Travis CI: drop Python 3.4 support.
+
 Version 1.6.1 (2019-05-21)
 --------------------------
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/doc/conf.py new/pyperf-1.7.0/doc/conf.py
--- old/pyperf-1.6.1/doc/conf.py        2019-05-14 23:27:53.000000000 +0200
+++ new/pyperf-1.7.0/doc/conf.py        2019-12-17 22:08:09.000000000 +0100
@@ -55,7 +55,7 @@
 # built documents.
 #
 # The short X.Y version.
-version = release = '1.6.1'
+version = release = '1.7.0'
 
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/doc/run_benchmark.rst new/pyperf-1.7.0/doc/run_benchmark.rst
--- old/pyperf-1.6.1/doc/run_benchmark.rst      2019-05-14 23:28:54.000000000 +0200
+++ new/pyperf-1.7.0/doc/run_benchmark.rst      2019-06-28 16:34:25.000000000 +0200
@@ -151,7 +151,7 @@
 See also:
 
 * `Microbenchmarks article
-  <http://vstinner.readthedocs.io/microbenchmark.html>`_ (by Victor Stinner)
+  <http://vstinner.readthedocs.io/benchmark.html>`_ (by Victor Stinner)
   contains misc information on how to run stable benchmarks.
 * `SPEC CPU2000: Measuring CPU Performance in the New Millennium
   <https://open.spec.org/cpu2000/papers/COMPUTER_200007.JLH.pdf>`_ by John L.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/doc/system.rst new/pyperf-1.7.0/doc/system.rst
--- old/pyperf-1.6.1/doc/system.rst     2019-05-14 23:29:42.000000000 +0200
+++ new/pyperf-1.7.0/doc/system.rst     2019-10-14 14:37:29.000000000 +0200
@@ -33,7 +33,7 @@
 See also:
 
 * `nohz_full=godmode ?
-  <http://www.breakage.org/2013/11/15/nohz_fullgodmode/>`_ (by Jeremy Eder, November 2013)
+  <https://jeremyeder.com/2013/11/15/nohz_fullgodmode/>`_ (by Jeremy Eder, November 2013)
 * `cset shield - easily configure cpusets
   
<http://skebanga.blogspot.it/2012/06/cset-shield-easily-configure-cpusets.html>`_
 * `cpuset <https://github.com/lpechacek/cpuset>`_
@@ -295,7 +295,7 @@
   C-states. Open the device, write a 32-bit ``0`` to it, then keep it open
   while your tests runs, close when you're finished. See
   `processor.max_cstate, intel_idle.max_cstate and /dev/cpu_dma_latency
-  <http://www.breakage.org/2012/11/14/processor-max_cstate-intel_idle-max_cstate-and-devcpu_dma_latency/>`_.
+  <https://jeremyeder.com/2012/11/14/processor-max_cstate-intel_idle-max_cstate-and-devcpu_dma_latency/>`_.
 
 Misc (untested) Linux commands::
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf/__init__.py new/pyperf-1.7.0/pyperf/__init__.py
--- old/pyperf-1.6.1/pyperf/__init__.py 2019-05-14 23:31:37.000000000 +0200
+++ new/pyperf-1.7.0/pyperf/__init__.py 2019-12-17 22:08:00.000000000 +0100
@@ -1,6 +1,6 @@
 from __future__ import division, print_function, absolute_import
 
-VERSION = (1, 6, 1)
+VERSION = (1, 7, 0)
 __version__ = '.'.join(map(str, VERSION))
 
 # Clocks
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf/_collect_metadata.py new/pyperf-1.7.0/pyperf/_collect_metadata.py
--- old/pyperf-1.6.1/pyperf/_collect_metadata.py        2019-05-14 23:32:52.000000000 +0200
+++ new/pyperf-1.7.0/pyperf/_collect_metadata.py        2019-10-14 14:37:29.000000000 +0200
@@ -108,6 +108,11 @@
         else:
             metadata['python_hash_seed'] = hash_seed
 
+    # compiler
+    python_compiler = normalize_text(platform.python_compiler())
+    if python_compiler:
+        metadata['python_compiler'] = python_compiler
+
     # CFLAGS
     try:
         import sysconfig
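The hunk above is the whole of the new `python_compiler` metadata entry from the 1.7.0 changelog: the value comes from the standard library's `platform.python_compiler()`, passed through pyperf's `normalize_text()` and stored only when non-empty. A rough standalone sketch of the same logic, with simple whitespace collapsing as a stand-in for `normalize_text()`:

```python
import platform

def collect_compiler_metadata():
    """Sketch of the hunk above: record the compiler this Python was built with."""
    metadata = {}
    # Stand-in for pyperf's normalize_text(): collapse whitespace runs.
    python_compiler = ' '.join(platform.python_compiler().split())
    if python_compiler:  # skip the key entirely if the platform reports nothing
        metadata['python_compiler'] = python_compiler
    return metadata

print(collect_compiler_metadata())
# e.g. {'python_compiler': 'GCC 9.3.1 ...'} on a GCC-built CPython
```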
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf/_memory.py new/pyperf-1.7.0/pyperf/_memory.py
--- old/pyperf-1.6.1/pyperf/_memory.py  2019-05-14 23:33:19.000000000 +0200
+++ new/pyperf-1.7.0/pyperf/_memory.py  2019-10-14 14:37:29.000000000 +0200
@@ -8,8 +8,9 @@
 
 # Code to parse Linux /proc/%d/smaps files.
 #
-# See http://bmaurer.blogspot.com/2006/03/memory-usage-with-smaps.html for
-# a quick introduction to smaps.
+# See
+# https://web.archive.org/web/20180907232758/https://bmaurer.blogspot.com/2006/03/memory-usage-with-smaps.html
+# for a quick introduction to smaps.
 #
 # Need Linux 2.6.16 or newer.
 def read_smap_file():
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf/_utils.py new/pyperf-1.7.0/pyperf/_utils.py
--- old/pyperf-1.6.1/pyperf/_utils.py   2018-05-11 19:00:49.000000000 +0200
+++ new/pyperf-1.7.0/pyperf/_utils.py   2019-10-14 14:37:29.000000000 +0200
@@ -350,7 +350,7 @@
 def create_environ(inherit_environ, locale):
     env = {}
 
-    copy_env = ["PATH", "HOME", "TEMP", "COMSPEC", "SystemRoot"]
+    copy_env = ["PATH", "HOME", "TEMP", "COMSPEC", "SystemRoot", "SystemDrive"]
     if locale:
         copy_env.extend(('LANG', 'LC_ADDRESS', 'LC_ALL', 'LC_COLLATE',
                          'LC_CTYPE', 'LC_IDENTIFICATION', 'LC_MEASUREMENT',
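The one-line `_utils.py` change above is the Windows fix from the changelog: pyperf builds a minimal environment for benchmark worker processes by copying only a whitelist of variables, and `SystemDrive` now joins that whitelist. A simplified sketch of the whitelist copy (locale handling omitted; `inherit_environ` mirrors the pyperf option of the same name):

```python
import os

COPY_ENV = ["PATH", "HOME", "TEMP", "COMSPEC", "SystemRoot", "SystemDrive"]

def create_environ(inherit_environ=None):
    """Build a minimal child environment from a whitelist, as pyperf does."""
    env = {}
    copy_env = list(COPY_ENV)
    if inherit_environ:
        copy_env.extend(inherit_environ)  # user-requested extra variables
    for name in copy_env:
        value = os.environ.get(name)
        if value is not None:
            env[name] = value
    return env
```

Without `SystemDrive`, worker subprocesses on Windows could not resolve drive-relative paths; inheriting it by default is the 1.7.0 fix contributed by Steve Dower.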
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf/tests/__init__.py new/pyperf-1.7.0/pyperf/tests/__init__.py
--- old/pyperf-1.6.1/pyperf/tests/__init__.py   2019-05-14 23:35:00.000000000 +0200
+++ new/pyperf-1.7.0/pyperf/tests/__init__.py   2019-10-15 23:30:37.000000000 +0200
@@ -14,19 +14,21 @@
 except ImportError:
     import mock   # noqa
 try:
-    # Python 2.7
-    import unittest2 as unittest   # noqa
-except ImportError:
-    import unittest   # noqa
-try:
     # Python 3.3
     from contextlib import ExitStack   # noqa
 except ImportError:
     # Python 2.7: use contextlib2 backport
     from contextlib2 import ExitStack   # noqa
 
+import six
+
 from pyperf._utils import popen_communicate, popen_killer
 
+if six.PY2:
+    import unittest2 as unittest   # noqa
+else:
+    import unittest   # noqa
+
 
 @contextlib.contextmanager
 def _capture_stream(name):
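The reordered imports above are the substance of the "Do not allow test suite to execute without unittest2 on Python2" changelog entry: the old try/except silently fell back to the stdlib `unittest`, which on Python 2 lacks `assertRegex` and produced many confusing failures. The new code branches on the interpreter version so a missing unittest2 fails loudly at import time. The same gate, sketched without the `six` dependency:

```python
import sys

# Fail fast on Python 2 if unittest2 (which backports assertRegex and
# friends) is missing, instead of collecting dozens of test errors later.
if sys.version_info[0] == 2:
    import unittest2 as unittest  # ImportError here is deliberate
else:
    import unittest

# Either way, the test suite can rely on the modern assertion API:
assert hasattr(unittest.TestCase, "assertRegex")
```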
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf/tests/test_metadata.py new/pyperf-1.7.0/pyperf/tests/test_metadata.py
--- old/pyperf-1.6.1/pyperf/tests/test_metadata.py      2019-05-14 23:37:00.000000000 +0200
+++ new/pyperf-1.7.0/pyperf/tests/test_metadata.py      2019-10-15 23:30:37.000000000 +0200
@@ -14,7 +14,7 @@
     'python_implementation', 'python_version',
     'platform']
 if sys.platform.startswith('linux'):
-    MANDATORY_METADATA.extend(('aslr', 'cpu_model_name'))
+    MANDATORY_METADATA.append('aslr')
 
 
 class TestMetadata(unittest.TestCase):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf/tests/test_tools.py new/pyperf-1.7.0/pyperf/tests/test_tools.py
--- old/pyperf-1.6.1/pyperf/tests/test_tools.py 2019-05-14 23:38:06.000000000 +0200
+++ new/pyperf-1.7.0/pyperf/tests/test_tools.py 1970-01-01 01:00:00.000000000 +0100
@@ -1,234 +0,0 @@
-import datetime
-
-import six
-
-import pyperf
-from pyperf._formatter import (format_filesize, format_seconds, format_timedelta,
-                               format_timedeltas, format_number)
-from pyperf import _cpu_utils as cpu_utils
-from pyperf import _utils as utils
-from pyperf.tests import mock
-from pyperf.tests import unittest
-
-
-class TestClocks(unittest.TestCase):
-    def test_perf_counter(self):
-        t1 = pyperf.perf_counter()
-        t2 = pyperf.perf_counter()
-        self.assertGreaterEqual(t2, t1)
-
-
-class TestStatistics(unittest.TestCase):
-    def test_is_significant(self):
-        # There's no particular significance to these values.
-        DATA1 = [89.2, 78.2, 89.3, 88.3, 87.3, 90.1, 95.2, 94.3, 78.3, 89.3]
-        DATA2 = [79.3, 78.3, 85.3, 79.3, 88.9, 91.2, 87.2, 89.2, 93.3, 79.9]
-
-        # not significant
-        significant, tscore = utils.is_significant(DATA1, DATA2)
-        self.assertFalse(significant)
-        self.assertAlmostEqual(tscore, 1.0947229724603977, places=4)
-
-        significant, tscore2 = utils.is_significant(DATA2, DATA1)
-        self.assertFalse(significant)
-        self.assertEqual(tscore2, -tscore)
-
-        # significant
-        inflated = [x * 10 for x in DATA1]
-        significant, tscore = utils.is_significant(inflated, DATA1)
-        self.assertTrue(significant)
-        self.assertAlmostEqual(tscore, 43.76839453227327, places=4)
-
-        significant, tscore2 = utils.is_significant(DATA1, inflated)
-        self.assertTrue(significant)
-        self.assertEqual(tscore2, -tscore)
-
-    def test_is_significant_FIXME(self):
-        # FIXME: _TScore() division by zero: error=0
-        # n = 100
-        # values1 = (1.0,) * n
-        # values2 = (2.0,) * n
-        # self.assertEqual(utils.is_significant(values1, values2),
-        #                  (True, -141.4213562373095))
-
-        # FIXME: same error
-        # # same values
-        # values = (1.0,) * 50
-        # self.assertEqual(utils.is_significant(values, values),
-        #                  (True, -141.4213562373095))
-        pass
-
-    def test_median_abs_dev(self):
-        self.assertEqual(utils.median_abs_dev(range(97)), 24.0)
-        self.assertEqual(utils.median_abs_dev((1, 1, 2, 2, 4, 6, 9)), 1.0)
-
-
-class TestUtils(unittest.TestCase):
-    def test_parse_iso8601(self):
-        # Default format using 'T' separator
-        self.assertEqual(utils.parse_iso8601('2016-07-20T14:06:07'),
-                         datetime.datetime(2016, 7, 20, 14, 6, 7))
-        # Microseconds
-        self.assertEqual(utils.parse_iso8601('2016-07-20T14:06:07.608319'),
-                         datetime.datetime(2016, 7, 20, 14, 6, 7, 608319))
-        # Space separator
-        self.assertEqual(utils.parse_iso8601('2016-07-20 14:06:07'),
-                         datetime.datetime(2016, 7, 20, 14, 6, 7))
-
-    def test_format_seconds(self):
-        self.assertEqual(format_seconds(0),
-                         "0 sec")
-        self.assertEqual(format_seconds(316e-4),
-                         "31.6 ms")
-        self.assertEqual(format_seconds(15.9),
-                         "15.9 sec")
-        self.assertEqual(format_seconds(3 * 60 + 15.9),
-                         "3 min 15.9 sec")
-        self.assertEqual(format_seconds(404683.5876653),
-                         "4 day 16 hour 24 min")
-
-    def test_format_timedelta(self):
-        fmt_delta = format_timedelta
-
-        self.assertEqual(fmt_delta(555222), "555222 sec")
-
-        self.assertEqual(fmt_delta(1e0), "1.00 sec")
-        self.assertEqual(fmt_delta(1e-3), "1.00 ms")
-        self.assertEqual(fmt_delta(1e-6), "1.00 us")
-        self.assertEqual(fmt_delta(1e-9), "1.00 ns")
-
-        self.assertEqual(fmt_delta(316e-3), "316 ms")
-        self.assertEqual(fmt_delta(316e-4), "31.6 ms")
-        self.assertEqual(fmt_delta(316e-5), "3.16 ms")
-
-        self.assertEqual(fmt_delta(1e-10), "0.10 ns")
-
-        self.assertEqual(fmt_delta(-2), "-2.00 sec")
-
-    def test_timedelta_stdev(self):
-        def fmt_stdev(seconds, stdev):
-            return "%s +- %s" % format_timedeltas((seconds, stdev))
-
-        self.assertEqual(fmt_stdev(58123, 192), "58123 sec +- 192 sec")
-        self.assertEqual(fmt_stdev(100e-3, 0), "100 ms +- 0 ms")
-        self.assertEqual(fmt_stdev(102e-3, 3e-3), "102 ms +- 3 ms")
-
-    def test_format_number(self):
-        # plural
-        self.assertEqual(format_number(0, 'unit'), '0 units')
-        self.assertEqual(format_number(1, 'unit'), '1 unit')
-        self.assertEqual(format_number(2, 'unit'), '2 units')
-        self.assertEqual(format_number(123, 'unit'), '123 units')
-
-        # powers of 10
-        self.assertEqual(format_number(10 ** 3, 'unit'),
-                         '1000 units')
-        self.assertEqual(format_number(10 ** 4, 'unit'),
-                         '10^4 units')
-        self.assertEqual(format_number(10 ** 4 + 1, 'unit'),
-                         '10001 units')
-        self.assertEqual(format_number(33 * 10 ** 4, 'unit'),
-                         '330000 units')
-
-        # powers of 10
-        self.assertEqual(format_number(2 ** 10, 'unit'),
-                         '1024 units')
-        self.assertEqual(format_number(2 ** 15, 'unit'),
-                         '2^15 units')
-        self.assertEqual(format_number(2 ** 15),
-                         '2^15')
-        self.assertEqual(format_number(2 ** 10 + 1, 'unit'),
-                         '1025 units')
-
-    def test_format_filesize(self):
-        self.assertEqual(format_filesize(0),
-                         '0 bytes')
-        self.assertEqual(format_filesize(1),
-                         '1 byte')
-        self.assertEqual(format_filesize(10 * 1024),
-                         '10.0 kB')
-        self.assertEqual(format_filesize(12.4 * 1024 * 1024),
-                         '12.4 MB')
-
-    def test_get_python_names(self):
-        self.assertEqual(utils.get_python_names('/usr/bin/python2.7',
-                                                '/usr/bin/python3.5'),
-                         ('python2.7', 'python3.5'))
-
-        self.assertEqual(utils.get_python_names('/bin/python2.7',
-                                                '/usr/bin/python2.7'),
-                         ('/bin/python2.7', '/usr/bin/python2.7'))
-
-
-class CPUToolsTests(unittest.TestCase):
-    def test_parse_cpu_list(self):
-        parse_cpu_list = cpu_utils.parse_cpu_list
-
-        self.assertIsNone(parse_cpu_list(''))
-        self.assertIsNone(parse_cpu_list('\x00'))
-        self.assertEqual(parse_cpu_list('0'),
-                         [0])
-        self.assertEqual(parse_cpu_list('0-1,5-6'),
-                         [0, 1, 5, 6])
-        self.assertEqual(parse_cpu_list('1,3,7'),
-                         [1, 3, 7])
-
-        # tolerate spaces
-        self.assertEqual(parse_cpu_list(' 1 , 2 '),
-                         [1, 2])
-
-        # errors
-        self.assertRaises(ValueError, parse_cpu_list, 'x')
-        self.assertRaises(ValueError, parse_cpu_list, '1,')
-
-    def test_format_cpu_list(self):
-        self.assertEqual(cpu_utils.format_cpu_list([0]),
-                         '0')
-        self.assertEqual(cpu_utils.format_cpu_list([0, 1, 5, 6]),
-                         '0-1,5-6')
-        self.assertEqual(cpu_utils.format_cpu_list([1, 3, 7]),
-                         '1,3,7')
-
-    def test_get_isolated_cpus(self):
-        BUILTIN_OPEN = 'builtins.open' if six.PY3 else '__builtin__.open'
-
-        def check_get(line):
-            with mock.patch(BUILTIN_OPEN) as mock_open:
-                mock_file = mock_open.return_value
-                mock_file.readline.return_value = line
-                return cpu_utils.get_isolated_cpus()
-
-        # no isolated CPU
-        self.assertIsNone(check_get(''))
-
-        # isolated CPUs
-        self.assertEqual(check_get('1-2'), [1, 2])
-
-        # /sys/devices/system/cpu/isolated doesn't exist (ex: Windows)
-        with mock.patch(BUILTIN_OPEN, side_effect=IOError):
-            self.assertIsNone(cpu_utils.get_isolated_cpus())
-
-    def test_parse_cpu_mask(self):
-        parse_cpu_mask = cpu_utils.parse_cpu_mask
-        self.assertEqual(parse_cpu_mask('f0'),
-                         0xf0)
-        self.assertEqual(parse_cpu_mask('fedcba00,12345678'),
-                         0xfedcba0012345678)
-        self.assertEqual(parse_cpu_mask('ffffffff,ffffffff,ffffffff,ffffffff'),
-                         2**128 - 1)
-
-    def test_format_cpu_mask(self):
-        format_cpu_mask = cpu_utils.format_cpu_mask
-        self.assertEqual(format_cpu_mask(0xf0),
-                         '000000f0')
-        self.assertEqual(format_cpu_mask(0xfedcba0012345678),
-                         'fedcba00,12345678')
-
-    def test_format_cpus_as_mask(self):
-        format_cpus_as_mask = cpu_utils.format_cpus_as_mask
-        self.assertEqual(format_cpus_as_mask({4, 5, 6, 7}),
-                         '000000f0')
-
-
-if __name__ == "__main__":
-    unittest.main()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf/tests/test_utils.py new/pyperf-1.7.0/pyperf/tests/test_utils.py
--- old/pyperf-1.6.1/pyperf/tests/test_utils.py 1970-01-01 01:00:00.000000000 +0100
+++ new/pyperf-1.7.0/pyperf/tests/test_utils.py 2019-06-26 22:51:50.000000000 +0200
@@ -0,0 +1,243 @@
+import datetime
+
+import six
+
+import pyperf
+from pyperf._formatter import (format_filesize, format_seconds, format_timedelta,
+                               format_timedeltas, format_number)
+from pyperf import _cpu_utils as cpu_utils
+from pyperf import _utils as utils
+from pyperf.tests import mock
+from pyperf.tests import unittest
+
+
+class TestClocks(unittest.TestCase):
+    def test_perf_counter(self):
+        t1 = pyperf.perf_counter()
+        t2 = pyperf.perf_counter()
+        self.assertGreaterEqual(t2, t1)
+
+
+class TestStatistics(unittest.TestCase):
+    def test_is_significant(self):
+        # There's no particular significance to these values.
+        DATA1 = [89.2, 78.2, 89.3, 88.3, 87.3, 90.1, 95.2, 94.3, 78.3, 89.3]
+        DATA2 = [79.3, 78.3, 85.3, 79.3, 88.9, 91.2, 87.2, 89.2, 93.3, 79.9]
+
+        # not significant
+        significant, tscore = utils.is_significant(DATA1, DATA2)
+        self.assertFalse(significant)
+        self.assertAlmostEqual(tscore, 1.0947229724603977, places=4)
+
+        significant, tscore2 = utils.is_significant(DATA2, DATA1)
+        self.assertFalse(significant)
+        self.assertEqual(tscore2, -tscore)
+
+        # significant
+        inflated = [x * 10 for x in DATA1]
+        significant, tscore = utils.is_significant(inflated, DATA1)
+        self.assertTrue(significant)
+        self.assertAlmostEqual(tscore, 43.76839453227327, places=4)
+
+        significant, tscore2 = utils.is_significant(DATA1, inflated)
+        self.assertTrue(significant)
+        self.assertEqual(tscore2, -tscore)
+
+    def test_is_significant_FIXME(self):
+        # FIXME: _TScore() division by zero: error=0
+        # n = 100
+        # values1 = (1.0,) * n
+        # values2 = (2.0,) * n
+        # self.assertEqual(utils.is_significant(values1, values2),
+        #                  (True, -141.4213562373095))
+
+        # FIXME: same error
+        # # same values
+        # values = (1.0,) * 50
+        # self.assertEqual(utils.is_significant(values, values),
+        #                  (True, -141.4213562373095))
+        pass
+
+    def test_median_abs_dev(self):
+        self.assertEqual(utils.median_abs_dev(range(97)), 24.0)
+        self.assertEqual(utils.median_abs_dev((1, 1, 2, 2, 4, 6, 9)), 1.0)
+
+    def test_percentile(self):
+        # randomized range(10)
+        values = [4, 6, 9, 7, 5, 8, 3, 0, 1, 2]
+        self.assertEqual(utils.percentile(values, 0.00), 0)
+        self.assertEqual(utils.percentile(values, 0.25), 2.25)
+        self.assertEqual(utils.percentile(values, 0.50), 4.5)
+        self.assertEqual(utils.percentile(values, 0.75), 6.75)
+        self.assertEqual(utils.percentile(values, 1.00), 9)
+
+
+class TestUtils(unittest.TestCase):
+    def test_parse_iso8601(self):
+        # Default format using 'T' separator
+        self.assertEqual(utils.parse_iso8601('2016-07-20T14:06:07'),
+                         datetime.datetime(2016, 7, 20, 14, 6, 7))
+        # Microseconds
+        self.assertEqual(utils.parse_iso8601('2016-07-20T14:06:07.608319'),
+                         datetime.datetime(2016, 7, 20, 14, 6, 7, 608319))
+        # Space separator
+        self.assertEqual(utils.parse_iso8601('2016-07-20 14:06:07'),
+                         datetime.datetime(2016, 7, 20, 14, 6, 7))
+
+    def test_format_seconds(self):
+        self.assertEqual(format_seconds(0),
+                         "0 sec")
+        self.assertEqual(format_seconds(316e-4),
+                         "31.6 ms")
+        self.assertEqual(format_seconds(15.9),
+                         "15.9 sec")
+        self.assertEqual(format_seconds(3 * 60 + 15.9),
+                         "3 min 15.9 sec")
+        self.assertEqual(format_seconds(404683.5876653),
+                         "4 day 16 hour 24 min")
+
+    def test_format_timedelta(self):
+        fmt_delta = format_timedelta
+
+        self.assertEqual(fmt_delta(555222), "555222 sec")
+
+        self.assertEqual(fmt_delta(1e0), "1.00 sec")
+        self.assertEqual(fmt_delta(1e-3), "1.00 ms")
+        self.assertEqual(fmt_delta(1e-6), "1.00 us")
+        self.assertEqual(fmt_delta(1e-9), "1.00 ns")
+
+        self.assertEqual(fmt_delta(316e-3), "316 ms")
+        self.assertEqual(fmt_delta(316e-4), "31.6 ms")
+        self.assertEqual(fmt_delta(316e-5), "3.16 ms")
+
+        self.assertEqual(fmt_delta(1e-10), "0.10 ns")
+
+        self.assertEqual(fmt_delta(-2), "-2.00 sec")
+
+    def test_timedelta_stdev(self):
+        def fmt_stdev(seconds, stdev):
+            return "%s +- %s" % format_timedeltas((seconds, stdev))
+
+        self.assertEqual(fmt_stdev(58123, 192), "58123 sec +- 192 sec")
+        self.assertEqual(fmt_stdev(100e-3, 0), "100 ms +- 0 ms")
+        self.assertEqual(fmt_stdev(102e-3, 3e-3), "102 ms +- 3 ms")
+
+    def test_format_number(self):
+        # plural
+        self.assertEqual(format_number(0, 'unit'), '0 units')
+        self.assertEqual(format_number(1, 'unit'), '1 unit')
+        self.assertEqual(format_number(2, 'unit'), '2 units')
+        self.assertEqual(format_number(123, 'unit'), '123 units')
+
+        # powers of 10
+        self.assertEqual(format_number(10 ** 3, 'unit'),
+                         '1000 units')
+        self.assertEqual(format_number(10 ** 4, 'unit'),
+                         '10^4 units')
+        self.assertEqual(format_number(10 ** 4 + 1, 'unit'),
+                         '10001 units')
+        self.assertEqual(format_number(33 * 10 ** 4, 'unit'),
+                         '330000 units')
+
+        # powers of 10
+        self.assertEqual(format_number(2 ** 10, 'unit'),
+                         '1024 units')
+        self.assertEqual(format_number(2 ** 15, 'unit'),
+                         '2^15 units')
+        self.assertEqual(format_number(2 ** 15),
+                         '2^15')
+        self.assertEqual(format_number(2 ** 10 + 1, 'unit'),
+                         '1025 units')
+
+    def test_format_filesize(self):
+        self.assertEqual(format_filesize(0),
+                         '0 bytes')
+        self.assertEqual(format_filesize(1),
+                         '1 byte')
+        self.assertEqual(format_filesize(10 * 1024),
+                         '10.0 kB')
+        self.assertEqual(format_filesize(12.4 * 1024 * 1024),
+                         '12.4 MB')
+
+    def test_get_python_names(self):
+        self.assertEqual(utils.get_python_names('/usr/bin/python2.7',
+                                                '/usr/bin/python3.5'),
+                         ('python2.7', 'python3.5'))
+
+        self.assertEqual(utils.get_python_names('/bin/python2.7',
+                                                '/usr/bin/python2.7'),
+                         ('/bin/python2.7', '/usr/bin/python2.7'))
+
+
+class CPUToolsTests(unittest.TestCase):
+    def test_parse_cpu_list(self):
+        parse_cpu_list = cpu_utils.parse_cpu_list
+
+        self.assertIsNone(parse_cpu_list(''))
+        self.assertIsNone(parse_cpu_list('\x00'))
+        self.assertEqual(parse_cpu_list('0'),
+                         [0])
+        self.assertEqual(parse_cpu_list('0-1,5-6'),
+                         [0, 1, 5, 6])
+        self.assertEqual(parse_cpu_list('1,3,7'),
+                         [1, 3, 7])
+
+        # tolerate spaces
+        self.assertEqual(parse_cpu_list(' 1 , 2 '),
+                         [1, 2])
+
+        # errors
+        self.assertRaises(ValueError, parse_cpu_list, 'x')
+        self.assertRaises(ValueError, parse_cpu_list, '1,')
+
+    def test_format_cpu_list(self):
+        self.assertEqual(cpu_utils.format_cpu_list([0]),
+                         '0')
+        self.assertEqual(cpu_utils.format_cpu_list([0, 1, 5, 6]),
+                         '0-1,5-6')
+        self.assertEqual(cpu_utils.format_cpu_list([1, 3, 7]),
+                         '1,3,7')
+
+    def test_get_isolated_cpus(self):
+        BUILTIN_OPEN = 'builtins.open' if six.PY3 else '__builtin__.open'
+
+        def check_get(line):
+            with mock.patch(BUILTIN_OPEN) as mock_open:
+                mock_file = mock_open.return_value
+                mock_file.readline.return_value = line
+                return cpu_utils.get_isolated_cpus()
+
+        # no isolated CPU
+        self.assertIsNone(check_get(''))
+
+        # isolated CPUs
+        self.assertEqual(check_get('1-2'), [1, 2])
+
+        # /sys/devices/system/cpu/isolated doesn't exist (ex: Windows)
+        with mock.patch(BUILTIN_OPEN, side_effect=IOError):
+            self.assertIsNone(cpu_utils.get_isolated_cpus())
+
+    def test_parse_cpu_mask(self):
+        parse_cpu_mask = cpu_utils.parse_cpu_mask
+        self.assertEqual(parse_cpu_mask('f0'),
+                         0xf0)
+        self.assertEqual(parse_cpu_mask('fedcba00,12345678'),
+                         0xfedcba0012345678)
+        self.assertEqual(parse_cpu_mask('ffffffff,ffffffff,ffffffff,ffffffff'),
+                         2**128 - 1)
+
+    def test_format_cpu_mask(self):
+        format_cpu_mask = cpu_utils.format_cpu_mask
+        self.assertEqual(format_cpu_mask(0xf0),
+                         '000000f0')
+        self.assertEqual(format_cpu_mask(0xfedcba0012345678),
+                         'fedcba00,12345678')
+
+    def test_format_cpus_as_mask(self):
+        format_cpus_as_mask = cpu_utils.format_cpus_as_mask
+        self.assertEqual(format_cpus_as_mask({4, 5, 6, 7}),
+                         '000000f0')
+
+
+if __name__ == "__main__":
+    unittest.main()
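[Editor's note, not part of the diff: the new test_utils.py tests above pin down the behaviour of `parse_cpu_list` — `None` for empty/NUL input, range expansion, tolerated spaces, `ValueError` on malformed input. A minimal standalone sketch consistent with those tests (not pyperf's actual implementation):]

```python
def parse_cpu_list(cpu_list):
    # Sketch matching the tests above: None for empty input,
    # expands ranges like '0-1,5-6', tolerates surrounding spaces,
    # raises ValueError on junk such as 'x' or a trailing comma.
    cpu_list = cpu_list.strip(' \x00')
    if not cpu_list:
        return None
    cpus = []
    for part in cpu_list.split(','):
        part = part.strip()
        if '-' in part:
            first, last = part.split('-', 1)
            cpus.extend(range(int(first), int(last) + 1))
        else:
            cpus.append(int(part))
    return sorted(cpus)

print(parse_cpu_list('0-1,5-6'))  # -> [0, 1, 5, 6]
```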
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf.egg-info/PKG-INFO new/pyperf-1.7.0/pyperf.egg-info/PKG-INFO
--- old/pyperf-1.6.1/pyperf.egg-info/PKG-INFO   2019-05-21 23:00:49.000000000 +0200
+++ new/pyperf-1.7.0/pyperf.egg-info/PKG-INFO   2019-12-17 22:11:40.000000000 +0100
@@ -1,10 +1,10 @@
 Metadata-Version: 1.1
 Name: pyperf
-Version: 1.6.1
+Version: 1.7.0
 Summary: Python module to run and analyze benchmarks
 Home-page: https://github.com/vstinner/pyperf
 Author: Victor Stinner
-Author-email: [email protected]
+Author-email: [email protected]
 License: MIT license
 Description: ******
         pyperf
@@ -135,7 +135,7 @@
         .. _the API docs: http://pyperf.readthedocs.io/en/latest/api.html#Runner.timeit
         .. _analyze benchmark results: https://pyperf.readthedocs.io/en/latest/analyze.html
 Platform: UNKNOWN
-Classifier: Development Status :: 4 - Beta
+Classifier: Development Status :: 5 - Production/Stable
 Classifier: Intended Audience :: Developers
 Classifier: License :: OSI Approved :: MIT License
 Classifier: Natural Language :: English
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/pyperf.egg-info/SOURCES.txt new/pyperf-1.7.0/pyperf.egg-info/SOURCES.txt
--- old/pyperf-1.6.1/pyperf.egg-info/SOURCES.txt        2019-05-21 23:00:49.000000000 +0200
+++ new/pyperf-1.7.0/pyperf.egg-info/SOURCES.txt        2019-12-17 22:11:40.000000000 +0100
@@ -70,5 +70,5 @@
 pyperf/tests/test_runner.py
 pyperf/tests/test_system.py
 pyperf/tests/test_timeit.py
-pyperf/tests/test_tools.py
+pyperf/tests/test_utils.py
 pyperf/tests/track_memory.json
\ No newline at end of file
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pyperf-1.6.1/setup.py new/pyperf-1.7.0/setup.py
--- old/pyperf-1.6.1/setup.py   2019-05-15 00:01:39.000000000 +0200
+++ new/pyperf-1.7.0/setup.py   2019-12-17 22:09:42.000000000 +0100
@@ -15,9 +15,9 @@
 # Release a new version:
 #
 #  - git tag VERSION
-#  - git push --tags
-#  - Remove untracked files/dirs: git clean -fdx
+#  - git clean -fdx  # Remove untracked files/dirs
 #  - python3 setup.py sdist bdist_wheel
+#  - git push --tags
 #  - twine upload dist/*
 #
 # After the release:
@@ -26,11 +26,11 @@
 #  - git commit -a -m "post-release"
 #  - git push
 
-VERSION = '1.6.1'
+VERSION = '1.7.0'
 
 DESCRIPTION = 'Python module to run and analyze benchmarks'
 CLASSIFIERS = [
-    'Development Status :: 4 - Beta',
+    'Development Status :: 5 - Production/Stable',
     'Intended Audience :: Developers',
     'License :: OSI Approved :: MIT License',
     'Natural Language :: English',
@@ -57,7 +57,7 @@
         'long_description': long_description,
         'url': 'https://github.com/vstinner/pyperf',
         'author': 'Victor Stinner',
-        'author_email': '[email protected]',
+        'author_email': '[email protected]',
         'classifiers': CLASSIFIERS,
         'packages': ['pyperf', 'pyperf.tests'],
         'install_requires': ["six"],

++++++ python-retcode.patch ++++++
Index: pyperf-1.7.0/pyperf/tests/test_system.py
===================================================================
--- pyperf-1.7.0.orig/pyperf/tests/test_system.py
+++ pyperf-1.7.0/pyperf/tests/test_system.py
@@ -18,7 +18,9 @@ class SystemTests(unittest.TestCase):
 
         # The return code is either 0 if the system is tuned or 2 if the
         # system isn't
-        self.assertIn(proc.returncode, (0, 2), msg=proc)
+        # Also it can return 1 if /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
+        # is not available
+        self.assertIn(proc.returncode, (0, 1, 2), msg=proc)
 
 
 if __name__ == "__main__":
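[Editor's note, not part of the patch: python-retcode.patch widens the accepted exit codes because `/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor` is absent in build chroots, VMs, and containers, where `pyperf system show` then exits with 1. A hedged sketch of probing for that sysfs file (the path is the one named in the patch comment; the probe itself is illustrative, not pyperf code):]

```python
import os

# sysfs path from the patch comment above; only exposed when the
# kernel's cpufreq subsystem is active on cpu0.
GOVERNOR_PATH = '/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor'

def governor_available():
    # True on bare-metal Linux with cpufreq; typically False in
    # containers/VMs, where the patched test now tolerates exit code 1.
    return os.path.exists(GOVERNOR_PATH)

print(governor_available())
```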
++++++ python38.patch ++++++
Index: pyperf-1.7.0/pyperf/_collect_metadata.py
===================================================================
--- pyperf-1.7.0.orig/pyperf/_collect_metadata.py
+++ pyperf-1.7.0/pyperf/_collect_metadata.py
@@ -92,7 +92,8 @@ def collect_python_metadata(metadata):
         metadata['timer'] = ('%s, resolution: %s'
                              % (info.implementation,
                                 format_timedelta(info.resolution)))
-    elif pyperf.perf_counter == time.clock:
+    elif (hasattr(time, 'clock')
+       and pyperf.perf_counter == time.clock):
         metadata['timer'] = 'time.clock()'
     elif pyperf.perf_counter == time.time:
         metadata['timer'] = 'time.time()'
