Hello community,

here is the log from the commit of package python-stestr for openSUSE:Factory checked in at 2018-12-14 20:47:41
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-stestr (Old)
 and      /work/SRC/openSUSE:Factory/.python-stestr.new.28833 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-stestr"

Fri Dec 14 20:47:41 2018 rev:7 rq:656068 version:2.2.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-stestr/python-stestr.changes      2018-11-14 14:41:04.942846553 +0100
+++ /work/SRC/openSUSE:Factory/.python-stestr.new.28833/python-stestr.changes   2018-12-14 20:47:42.817482500 +0100
@@ -1,0 +2,16 @@
+Fri Dec  7 13:41:42 UTC 2018 - Thomas Bechtold <[email protected]>
+
+- update to 2.2.0:
+  * Change title of project in readme
+  * Add a better description to README Overview section
+  * Fix discovery import error formatting on py3
+  * Cleanup unused parameters in \_run\_tests
+  * Enable doc8
+  * Add all stream to repo even if some tests fail
+  * Extract loading case code to \_load\_case() function
+  * Make test running serially when just loading
+  * Fix time measurement for load command too
+  * Use reported times instead of wall time in subunit-trace
+  * Add support for test class and method by path on no-discover
+
+-------------------------------------------------------------------

Old:
----
  stestr-2.1.1.tar.gz

New:
----
  stestr-2.2.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-stestr.spec ++++++
--- /var/tmp/diff_new_pack.hRc8p1/_old  2018-12-14 20:47:43.893481019 +0100
+++ /var/tmp/diff_new_pack.hRc8p1/_new  2018-12-14 20:47:43.893481019 +0100
@@ -12,13 +12,13 @@
 # license that conforms to the Open Source Definition (Version 1.9)
 # published by the Open Source Initiative.
 
-# Please submit bugfixes or comments via http://bugs.opensuse.org/
+# Please submit bugfixes or comments via https://bugs.opensuse.org/
 #
 
 
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-stestr
-Version:        2.1.1
+Version:        2.2.0
 Release:        0
 Summary:        A test runner runner similar to testrepository
 License:        Apache-2.0

++++++ stestr-2.1.1.tar.gz -> stestr-2.2.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/ChangeLog new/stestr-2.2.0/ChangeLog
--- old/stestr-2.1.1/ChangeLog  2018-08-09 14:50:00.000000000 +0200
+++ new/stestr-2.2.0/ChangeLog  2018-11-30 02:32:07.000000000 +0100
@@ -1,12 +1,28 @@
 CHANGES
 =======
 
+2.2.0
+-----
+
+* Change title of project in readme
+* Add a better description to README Overview section
+* Fix discovery import error formatting on py3
+* Cleanup unused parameters in \_run\_tests
+* Enable doc8
+* Add all stream to repo even if some tests fail
+* Extract loading case code to \_load\_case() function
+* Make test running serially when just loading
+* Fix time measurement for load command too
+* Use reported times instead of wall time in subunit-trace
+* Add support for test class and method by path on no-discover
+
 2.1.1
 -----
 
 * Add support for python 3.7
 * Fix handling of unexpected success results
 * Allow stestr to be called as a module (#185)
+* Make warning and error messages use stderr
 * Add error handling for invalid input regexes
 * Cleanup the manpage section on dealing with failed tests
 * Cleanup argument help text on load command
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/PKG-INFO new/stestr-2.2.0/PKG-INFO
--- old/stestr-2.1.1/PKG-INFO   2018-08-09 14:50:02.000000000 +0200
+++ new/stestr-2.2.0/PKG-INFO   2018-11-30 02:32:07.000000000 +0100
@@ -1,14 +1,13 @@
-Metadata-Version: 1.1
+Metadata-Version: 2.1
 Name: stestr
-Version: 2.1.1
+Version: 2.2.0
 Summary: A parallel Python test runner built around subunit
 Home-page: http://stestr.readthedocs.io/en/latest/
 Author: Matthew Treinish
 Author-email: [email protected]
 License: UNKNOWN
-Description-Content-Type: UNKNOWN
-Description: Slim/Super Test Repository
-        ==========================
+Description: stestr
+        ======
         
         .. image:: https://img.shields.io/travis/mtreinish/stestr/master.svg?style=flat-square
             :target: https://travis-ci.org/mtreinish/stestr
@@ -31,12 +30,22 @@
         Overview
         --------
         
-        stestr is a fork of the `testrepository`_ that concentrates on being a
-        dedicated test runner for python projects. The generic abstraction
-        layers which enabled testr to work with any subunit emitting runner are gone.
-        stestr hard codes python-subunit-isms into how it works. The code base is also
-        designed to try and be explicit, and to provide a python api that is documented
-        and has examples.
+        stestr is parallel Python test runner designed to execute `unittest`_ test
+        suites using multiple processes to split up execution of a test suite. It also
+        will store a history of all test runs to help in debugging failures and
+        optimizing the scheduler to improve speed. To accomplish this goal it uses the
+        `subunit`_ protocol to facilitate streaming and storing results from multiple
+        workers.
+        
+        .. _unittest: https://docs.python.org/3/library/unittest.html
+        .. _subunit: https://github.com/testing-cabal/subunit
+        
+        stestr originally started as a fork of the `testrepository`_ that concentrates
+        on being a dedicated test runner for python projects. The generic abstraction
+        layers which enabled testrepository to work with any subunit emitting runner
+        are gone. stestr hard codes python-subunit-isms into how it works. The code
+        base is also designed to try and be explicit, and to provide a python api that
+        is documented and has examples.
         
         .. _testrepository: https://testrepository.readthedocs.org/en/latest
         
@@ -141,3 +150,5 @@
 Classifier: Programming Language :: Python :: 3.7
 Classifier: Topic :: Software Development :: Testing
 Classifier: Topic :: Software Development :: Quality Assurance
+Provides-Extra: test
+Provides-Extra: sql
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/README.rst new/stestr-2.2.0/README.rst
--- old/stestr-2.1.1/README.rst 2018-05-30 23:02:51.000000000 +0200
+++ new/stestr-2.2.0/README.rst 2018-11-29 21:57:37.000000000 +0100
@@ -1,5 +1,5 @@
-Slim/Super Test Repository
-==========================
+stestr
+======
 
 .. image:: https://img.shields.io/travis/mtreinish/stestr/master.svg?style=flat-square
     :target: https://travis-ci.org/mtreinish/stestr
@@ -22,12 +22,22 @@
 Overview
 --------
 
-stestr is a fork of the `testrepository`_ that concentrates on being a
-dedicated test runner for python projects. The generic abstraction
-layers which enabled testr to work with any subunit emitting runner are gone.
-stestr hard codes python-subunit-isms into how it works. The code base is also
-designed to try and be explicit, and to provide a python api that is documented
-and has examples.
+stestr is parallel Python test runner designed to execute `unittest`_ test
+suites using multiple processes to split up execution of a test suite. It also
+will store a history of all test runs to help in debugging failures and
+optimizing the scheduler to improve speed. To accomplish this goal it uses the
+`subunit`_ protocol to facilitate streaming and storing results from multiple
+workers.
+
+.. _unittest: https://docs.python.org/3/library/unittest.html
+.. _subunit: https://github.com/testing-cabal/subunit
+
+stestr originally started as a fork of the `testrepository`_ that concentrates
+on being a dedicated test runner for python projects. The generic abstraction
+layers which enabled testrepository to work with any subunit emitting runner
+are gone. stestr hard codes python-subunit-isms into how it works. The code
+base is also designed to try and be explicit, and to provide a python api that
+is documented and has examples.
 
 .. _testrepository: https://testrepository.readthedocs.org/en/latest
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/doc/source/MANUAL.rst new/stestr-2.2.0/doc/source/MANUAL.rst
--- old/stestr-2.1.1/doc/source/MANUAL.rst      2018-07-12 15:08:35.000000000 +0200
+++ new/stestr-2.2.0/doc/source/MANUAL.rst      2018-11-27 21:49:05.000000000 +0100
@@ -100,14 +100,23 @@
 will also bypass discovery and directly call subunit.run on the module
 specified.
 
+Additionally you can specify a specific class or method within that file using
+``::`` to specify a class and method. For example::
+
+  $ stestr run --no-discover project/tests/test_foo.py::TestFoo::test_method
+
+will skip discovery and directly call subunit.run on the test method in the
+specified test class.
+
 Test Selection
 --------------
 
-Arguments passed to ``stestr run`` are used to filter test ids that will be run.
-stestr will perform unittest discovery to get a list of all test ids and then
-apply each argument as a regex filter. Tests that match any of the given filters
-will be run. For example, if you called ``stestr run foo bar`` this will only
-run the tests that have a regex match with foo **or** a regex match with bar.
+Arguments passed to ``stestr run`` are used to filter test ids that will be
+run. stestr will perform unittest discovery to get a list of all test ids and
+then apply each argument as a regex filter. Tests that match any of the given
+filters will be run. For example, if you called ``stestr run foo bar`` this
+will only run the tests that have a regex match with foo **or** a regex match
+with bar.
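The OR semantics described above can be sketched in isolation. This is an illustrative approximation, not stestr's actual selection code; the test ids and the use of `re.search` for matching are assumptions:

```python
import re

def select_tests(test_ids, filters):
    # Keep any test id that matches at least one of the regex filters,
    # mirroring the OR semantics of ``stestr run foo bar`` (sketch only).
    compiled = [re.compile(f) for f in filters]
    return [t for t in test_ids if any(c.search(t) for c in compiled)]

ids = ["tests.test_foo.TestFoo.test_a",
       "tests.test_bar.TestBar.test_b",
       "tests.test_baz.TestBaz.test_c"]
# Matches anything containing "foo" OR "bar":
print(select_tests(ids, ["foo", "bar"]))
```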
 
 stestr allows you do to do simple test exclusion via passing a rejection/black
 regexp::
@@ -118,8 +127,9 @@
 
   $ stestr run --black-regex 'slow_tests|bad_tests' ui\.interface
 
-Here first we selected all tests which matches to ``ui\.interface``, then we are
-dropping all test which matches ``slow_tests|bad_tests`` from the final list.
+Here first we selected all tests which matches to ``ui\.interface``, then we
+are dropping all test which matches ``slow_tests|bad_tests`` from the final
+list.
 
 stestr also allows you to specify a blacklist file to define a set of regexes
 to exclude. You can specify a blacklist file with the
@@ -175,9 +185,9 @@
 
 However, the test run output is configurable, you can disable this output
 with the ``--no-subunit-trace`` flag which will be completely silent except for
-any failures it encounters. There is also the ``--color`` flag which will enable
-colorization with subunit-trace output. If you prefer to deal with the raw
-subunit yourself and run your own output rendering or filtering you can use
+any failures it encounters. There is also the ``--color`` flag which will
+enable colorization with subunit-trace output. If you prefer to deal with the
+raw subunit yourself and run your own output rendering or filtering you can use
 the ``--subunit`` flag to output the result stream as raw subunit v2.
 
 There is also an ``--abbreviate`` flag available, when this is used a single
@@ -199,10 +209,10 @@
 append the subunit stream from the test run into the most recent entry in the
 repository.
 
-Alternatively, you can manually load the test results from a subunit stream into
-an existing test result in the repository using the ``--id``/``-i`` flag on
-the ``stestr load`` command. This will append the results from the input subunit
-stream to the specified id.
+Alternatively, you can manually load the test results from a subunit stream
+into an existing test result in the repository using the ``--id``/``-i`` flag
+on the ``stestr load`` command. This will append the results from the input
+subunit stream to the specified id.
 
 
 Running previously failed tests
@@ -280,9 +290,9 @@
 This will list all tests found by discovery.
 
 You can also use this to see what tests will be run by a given stestr run
-command. For instance, the tests that ``stestr run myfilter`` will run are shown
-by ``stestr list myfilter``. As with the run command, arguments to list are used
-to regex filter the tests.
+command. For instance, the tests that ``stestr run myfilter`` will run are
+shown by ``stestr list myfilter``. As with the run command, arguments to list
+are used to regex filter the tests.
 
 Parallel testing
 ----------------
@@ -476,8 +486,8 @@
 -----------------
 
 Sometimes it is useful to force a separate test runner instance for each test
-executed. The ``--isolated`` flag will cause stestr to execute a separate runner
-per test::
+executed. The ``--isolated`` flag will cause stestr to execute a separate
+runner per test::
 
   $ stestr run --isolated
 
@@ -517,8 +527,8 @@
 * format: This file identifies the precise layout of the repository, in case
   future changes are needed.
 
-* next-stream: This file contains the serial number to be used when adding another
-  stream to the repository.
+* next-stream: This file contains the serial number to be used when adding
+  another stream to the repository.
 
 * failing: This file is a stream containing just the known failing tests. It
   is updated whenever a new stream is added to the repository, so that it only
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/doc/source/README.rst new/stestr-2.2.0/doc/source/README.rst
--- old/stestr-2.1.1/doc/source/README.rst      2018-05-30 23:02:51.000000000 +0200
+++ new/stestr-2.2.0/doc/source/README.rst      2018-11-29 21:57:37.000000000 +0100
@@ -1,5 +1,5 @@
-Slim/Super Test Repository
-==========================
+stestr
+======
 
 .. image:: https://img.shields.io/travis/mtreinish/stestr/master.svg?style=flat-square
     :target: https://travis-ci.org/mtreinish/stestr
@@ -22,12 +22,22 @@
 Overview
 --------
 
-stestr is a fork of the `testrepository`_ that concentrates on being a
-dedicated test runner for python projects. The generic abstraction
-layers which enabled testr to work with any subunit emitting runner are gone.
-stestr hard codes python-subunit-isms into how it works. The code base is also
-designed to try and be explicit, and to provide a python api that is documented
-and has examples.
+stestr is parallel Python test runner designed to execute `unittest`_ test
+suites using multiple processes to split up execution of a test suite. It also
+will store a history of all test runs to help in debugging failures and
+optimizing the scheduler to improve speed. To accomplish this goal it uses the
+`subunit`_ protocol to facilitate streaming and storing results from multiple
+workers.
+
+.. _unittest: https://docs.python.org/3/library/unittest.html
+.. _subunit: https://github.com/testing-cabal/subunit
+
+stestr originally started as a fork of the `testrepository`_ that concentrates
+on being a dedicated test runner for python projects. The generic abstraction
+layers which enabled testrepository to work with any subunit emitting runner
+are gone. stestr hard codes python-subunit-isms into how it works. The code
+base is also designed to try and be explicit, and to provide a python api that
+is documented and has examples.
 
 .. _testrepository: https://testrepository.readthedocs.org/en/latest
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/doc/source/api/test_processor.rst new/stestr-2.2.0/doc/source/api/test_processor.rst
--- old/stestr-2.1.1/doc/source/api/test_processor.rst  2018-02-18 00:39:04.000000000 +0100
+++ new/stestr-2.2.0/doc/source/api/test_processor.rst  2018-11-27 21:49:05.000000000 +0100
@@ -5,9 +5,9 @@
 
 This module contains the definition of the ``TestProcessorFixture`` fixture
 class. This fixture is used for handling the actual spawning of worker
-processes for running tests, or listing tests. It is constructed as a `fixture`_
-to handle the lifecycle of the test id list files which are used to pass test
-ids to the workers processes running the tests.
+processes for running tests, or listing tests. It is constructed as a
+`fixture`_ to handle the lifecycle of the test id list files which are used to
+pass test ids to the workers processes running the tests.
 
 .. _fixture: https://pypi.python.org/pypi/fixtures
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/doc/source/api.rst new/stestr-2.2.0/doc/source/api.rst
--- old/stestr-2.1.1/doc/source/api.rst 2018-02-18 00:39:04.000000000 +0100
+++ new/stestr-2.2.0/doc/source/api.rst 2018-11-27 21:49:05.000000000 +0100
@@ -2,10 +2,10 @@
 
 Internal API Reference
 ======================
-This document serves as a reference for the python API used in stestr. It should
-serve as a guide for both internal and external use of stestr components via
-python. The majority of the contents here are built from internal docstrings
-in the actual code.
+This document serves as a reference for the python API used in stestr. It
+should serve as a guide for both internal and external use of stestr components
+via python. The majority of the contents here are built from internal
+docstrings in the actual code.
 
 Repository
 ----------
@@ -54,8 +54,8 @@
 another public function which performs the real work for the command. Each one
 of these functions has a defined stable Python API signature with args and
 kwargs so that people can easily call the functions from other python programs.
-This function is what can be expected to be used outside of stestr as the stable
-interface.
+This function is what can be expected to be used outside of stestr as the
+stable interface.
 All the stable functions can be imported the command module directly::
 
   from stestr import command
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/doc/source/internal_arch.rst new/stestr-2.2.0/doc/source/internal_arch.rst
--- old/stestr-2.1.1/doc/source/internal_arch.rst       2018-05-30 23:02:51.000000000 +0200
+++ new/stestr-2.2.0/doc/source/internal_arch.rst       2018-11-27 21:49:05.000000000 +0100
@@ -45,8 +45,8 @@
 '''''''''''''''''''''
 
 This function is used to define subcommand arguments. It has a single argparse
-parser object passed into it. The intent of this function is to have any command
-specific arguments defined on the provided parser object by calling
+parser object passed into it. The intent of this function is to have any
+command specific arguments defined on the provided parser object by calling
 `parser.add_argument()`_ for each argument.
 
 .. _parser.add_argument(): https://docs.python.org/3/library/argparse.html#the-add-argument-method
@@ -83,8 +83,9 @@
 to the scheduler/partitioner. The scheduler takes the list of tests and splits
 it into N groups where N is the concurrency that stestr will use to run tests.
 If there is any timing data available in the repository from previous runs this
-is used by the scheduler to try balancing the test load between the workers. For
-the full details on how the partitioning is performed see: :ref:`api_scheduler`.
+is used by the scheduler to try balancing the test load between the workers.
+For the full details on how the partitioning is performed see:
+:ref:`api_scheduler`.
 
 With the tests split into multiple groups for each worker process we're
 ready to start executing the tests. Each group of tests is used to launch a
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/bisect_tests.py new/stestr-2.2.0/stestr/bisect_tests.py
--- old/stestr-2.1.1/stestr/bisect_tests.py     2018-02-18 00:39:04.000000000 +0100
+++ new/stestr-2.2.0/stestr/bisect_tests.py     2018-11-27 21:49:05.000000000 +0100
@@ -57,7 +57,7 @@
                     repo_type=self.repo_type, repo_url=self.repo_url,
                     serial=self.serial, concurrency=self.concurrency,
                     test_path=self.test_path, top_dir=self.top_dir)
-                self.run_func(cmd, False, True, False, False,
+                self.run_func(cmd, False,
                               pretty_out=False,
                               repo_type=self.repo_type,
                               repo_url=self.repo_url)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/commands/load.py new/stestr-2.2.0/stestr/commands/load.py
--- old/stestr-2.1.1/stestr/commands/load.py    2018-08-09 14:40:52.000000000 +0200
+++ new/stestr-2.2.0/stestr/commands/load.py    2018-11-27 21:49:05.000000000 +0100
@@ -13,7 +13,6 @@
 """Load data into a repository."""
 
 
-import datetime
 import functools
 import os
 import sys
@@ -107,13 +106,14 @@
              force_init=force_init, streams=args.files,
              pretty_out=pretty_out, color=color,
              stdout=stdout, abbreviate=abbreviate,
-             suppress_attachments=suppress_attachments)
+             suppress_attachments=suppress_attachments, serial=True)
 
 
 def load(force_init=False, in_streams=None,
          partial=False, subunit_out=False, repo_type='file', repo_url=None,
          run_id=None, streams=None, pretty_out=False, color=False,
-         stdout=sys.stdout, abbreviate=False, suppress_attachments=False):
+         stdout=sys.stdout, abbreviate=False, suppress_attachments=False,
+         serial=False):
     """Load subunit streams into a repository
 
     This function will load subunit streams into the repository. It will
@@ -180,12 +180,34 @@
             decorate = functools.partial(mktagger, pos)
             case = testtools.DecorateTestCaseResult(case, decorate)
             yield (case, str(pos))
-
-    case = testtools.ConcurrentStreamTestSuite(make_tests)
     if not run_id:
         inserter = repo.get_inserter()
     else:
         inserter = repo.get_inserter(run_id=run_id)
+
+    retval = 0
+    if serial:
+        for stream in streams:
+            # Calls StreamResult API.
+            case = subunit.ByteStreamToStreamResult(
+                stream, non_subunit_name='stdout')
+            result = _load_case(inserter, repo, case, subunit_out, pretty_out,
+                                color, stdout, abbreviate,
+                                suppress_attachments)
+            if result or retval:
+                retval = 1
+            else:
+                retval = 0
+    else:
+        case = testtools.ConcurrentStreamTestSuite(make_tests)
+        retval = _load_case(inserter, repo, case, subunit_out, pretty_out,
+                            color, stdout, abbreviate, suppress_attachments)
+
+    return retval
+
+
+def _load_case(inserter, repo, case, subunit_out, pretty_out,
+               color, stdout, abbreviate, suppress_attachments):
     if subunit_out:
         output_result, summary_result = output.make_result(inserter.get_id,
                                                            output=stdout)
@@ -208,15 +230,22 @@
             inserter.get_id, stdout, previous_run)
         summary_result = output_result.get_summary()
     result = testtools.CopyStreamResult([inserter, output_result])
-    start_time = datetime.datetime.utcnow()
     result.startTestRun()
     try:
         case.run(result)
     finally:
         result.stopTestRun()
-    stop_time = datetime.datetime.utcnow()
-    elapsed_time = stop_time - start_time
     if pretty_out and not subunit_out:
+        start_times = []
+        stop_times = []
+        for worker in subunit_trace.RESULTS:
+            start_times += [
+                x['timestamps'][0] for x in subunit_trace.RESULTS[worker]]
+            stop_times += [
+                x['timestamps'][1] for x in subunit_trace.RESULTS[worker]]
+        start_time = min(start_times)
+        stop_time = max(stop_times)
+        elapsed_time = stop_time - start_time
         subunit_trace.print_fails(stdout)
         subunit_trace.print_summary(stdout, elapsed_time)
     if not results.wasSuccessful(summary_result):
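The timing change in the hunk above replaces wall-clock measurement with the spread of per-test reported timestamps. The aggregation can be illustrated standalone; the `results` mapping below is a hypothetical stand-in shaped like `subunit_trace.RESULTS` (worker id mapped to test dicts carrying a `(start, stop)` timestamp pair):

```python
import datetime

# Hypothetical mapping shaped like subunit_trace.RESULTS.
results = {
    0: [{'timestamps': (datetime.datetime(2018, 11, 27, 21, 49, 0),
                        datetime.datetime(2018, 11, 27, 21, 49, 5))}],
    1: [{'timestamps': (datetime.datetime(2018, 11, 27, 21, 49, 1),
                        datetime.datetime(2018, 11, 27, 21, 49, 9))}],
}

start_times = []
stop_times = []
for worker in results:
    start_times += [x['timestamps'][0] for x in results[worker]]
    stop_times += [x['timestamps'][1] for x in results[worker]]

# Elapsed time spans the earliest reported start to the latest reported stop,
# independent of how long the loading process itself took.
elapsed_time = max(stop_times) - min(start_times)
print(elapsed_time)  # 0:00:09
```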
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/commands/run.py new/stestr-2.2.0/stestr/commands/run.py
--- old/stestr-2.1.1/stestr/commands/run.py     2018-08-09 14:40:52.000000000 +0200
+++ new/stestr-2.2.0/stestr/commands/run.py     2018-11-27 21:49:05.000000000 +0100
@@ -313,8 +313,10 @@
         combine_id = six.text_type(latest_id)
     if no_discover:
         ids = no_discover
+        if '::' in ids:
+            ids = ids.replace('::', '.')
         if ids.find('/') != -1:
-            root, _ = os.path.splitext(ids)
+            root = ids.replace('.py', '')
             ids = root.replace('/', '.')
         run_cmd = 'python -m subunit.run ' + ids
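The id normalization added in this hunk can be traced in isolation. The helper below is a standalone sketch mirroring the string transformations shown in the diff (the function name and example path are illustrative, not stestr API):

```python
def path_to_test_id(ids):
    # Sketch of the --no-discover id normalization from the diff above:
    # ``::`` separators become dots, then a file path is turned into a
    # dotted module path by dropping ``.py`` and swapping ``/`` for ``.``.
    if '::' in ids:
        ids = ids.replace('::', '.')
    if ids.find('/') != -1:
        ids = ids.replace('.py', '').replace('/', '.')
    return ids

print(path_to_test_id('project/tests/test_foo.py::TestFoo::test_method'))
# project.tests.test_foo.TestFoo.test_method
```

The resulting dotted id is what gets handed to `python -m subunit.run`.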
 
@@ -395,7 +397,7 @@
                     randomize=random, test_path=test_path, top_dir=top_dir)
 
                 run_result = _run_tests(
-                    cmd, failing, analyze_isolation, isolated, until_failure,
+                    cmd, until_failure,
                     subunit_out=subunit_out, combine_id=combine_id,
                     repo_type=repo_type, repo_url=repo_url,
                     pretty_out=pretty_out, color=color, abbreviate=abbreviate,
@@ -404,8 +406,7 @@
                     result = run_result
             return result
         else:
-            return _run_tests(cmd, failing, analyze_isolation,
-                              isolated, until_failure,
+            return _run_tests(cmd, until_failure,
                               subunit_out=subunit_out,
                               combine_id=combine_id,
                               repo_type=repo_type,
@@ -430,8 +431,7 @@
                 whitelist_file=whitelist_file, black_regex=black_regex,
                 randomize=random, test_path=test_path,
                 top_dir=top_dir)
-            if not _run_tests(cmd, failing, analyze_isolation, isolated,
-                              until_failure):
+            if not _run_tests(cmd, until_failure):
                 # If the test was filtered, it won't have been run.
                 if test_id in repo.get_test_ids(repo.latest_id()):
                     spurious_failures.add(test_id)
@@ -456,7 +456,7 @@
         return bisect_runner.bisect_tests(spurious_failures)
 
 
-def _run_tests(cmd, failing, analyze_isolation, isolated, until_failure,
+def _run_tests(cmd, until_failure,
                subunit_out=False, combine_id=None, repo_type='file',
                repo_url=None, pretty_out=True, color=False, stdout=sys.stdout,
                abbreviate=False, suppress_attachments=False):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/config_file.py new/stestr-2.2.0/stestr/config_file.py
--- old/stestr-2.1.1/stestr/config_file.py      2018-02-19 17:33:06.000000000 +0100
+++ new/stestr-2.2.0/stestr/config_file.py      2018-11-27 21:49:05.000000000 +0100
@@ -90,11 +90,10 @@
         if not test_path and self.parser.has_option('DEFAULT', 'test_path'):
             test_path = self.parser.get('DEFAULT', 'test_path')
         elif not test_path:
-            print("No test_path can be found in either the command line "
-                  "options nor in the specified config file {0}.  Please "
-                  "specify a test path either in the config file or via the "
-                  "--test-path argument".format(self.config_file))
-            sys.exit(1)
+            sys.exit("No test_path can be found in either the command line "
+                     "options nor in the specified config file {0}.  Please "
+                     "specify a test path either in the config file or via "
+                     "the --test-path argument".format(self.config_file))
         if not top_dir and self.parser.has_option('DEFAULT', 'top_dir'):
             top_dir = self.parser.get('DEFAULT', 'top_dir')
         elif not top_dir:
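The refactor above relies on the fact that calling `sys.exit` with a string prints that string to stderr and exits with status 1, collapsing the separate print/exit pair into one call. A minimal demonstration (the error message is illustrative):

```python
import subprocess
import sys

# Child process that exits via sys.exit("message"): the interpreter
# writes the message to stderr and sets the exit status to 1.
code = 'import sys; sys.exit("no test_path found")'
proc = subprocess.run([sys.executable, '-c', code],
                      capture_output=True, text=True)
print(proc.returncode)       # 1
print(proc.stderr.strip())   # no test_path found
```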
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/repository/sql.py new/stestr-2.2.0/stestr/repository/sql.py
--- old/stestr-2.1.1/stestr/repository/sql.py   2017-07-27 01:43:41.000000000 +0200
+++ new/stestr-2.2.0/stestr/repository/sql.py   2018-11-27 21:49:05.000000000 +0100
@@ -11,7 +11,7 @@
 # under the License.
 
 """Persistent storage of test results."""
-
+from __future__ import print_function
 
 import datetime
 import io
@@ -45,7 +45,7 @@
     def initialise(klass, url):
         """Create a repository at url/path."""
         print("WARNING: The SQL repository type is still experimental. You "
-              "might encounter issues while using it.")
+              "might encounter issues while using it.", file=sys.stderr)
         result = Repository(url)
         # TODO(mtreinish): Figure out the python api to run the migrations for
         # setting up the schema.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/repository/util.py new/stestr-2.2.0/stestr/repository/util.py
--- old/stestr-2.1.1/stestr/repository/util.py  2017-07-27 01:43:41.000000000 +0200
+++ new/stestr-2.2.0/stestr/repository/util.py  2018-11-27 21:49:05.000000000 +0100
@@ -37,10 +37,9 @@
         repo_module = importlib.import_module('stestr.repository.' + repo_type)
     except ImportError:
         if repo_type == 'sql':
-            print("sql repository type requirements aren't installed. To use "
-                  "the sql repository ensure you installed the extra "
-                  "requirements with `pip install 'stestr[sql]'`")
-            sys.exit(1)
+            sys.exit("sql repository type requirements aren't installed. To "
+                     "use the sql repository ensure you installed the extra "
+                     "requirements with `pip install 'stestr[sql]'`")
         else:
             raise
     if not repo_url:
@@ -59,10 +58,9 @@
         repo_module = importlib.import_module('stestr.repository.' + repo_type)
     except ImportError:
         if repo_type == 'sql':
-            print("sql repository type requirements aren't installed. To use "
-                  "the sql repository ensure you installed the extra "
-                  "requirements with `pip install 'stestr[sql]'`")
-            sys.exit(1)
+            sys.exit("sql repository type requirements aren't installed. To "
+                     "use the sql repository ensure you installed the extra "
+                     "requirements with `pip install 'stestr[sql]'`")
         else:
             raise
     if not repo_url:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/selection.py new/stestr-2.2.0/stestr/selection.py
--- old/stestr-2.1.1/stestr/selection.py        2018-07-12 15:08:35.000000000 +0200
+++ new/stestr-2.2.0/stestr/selection.py        2018-11-27 21:49:05.000000000 +0100
@@ -9,6 +9,7 @@
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 # License for the specific language governing permissions and limitations
 # under the License.
+from __future__ import print_function
 
 import contextlib
 import re
@@ -34,7 +35,8 @@
             try:
                 _filters.append(re.compile(f))
             except re.error:
-                print("Invalid regex: %s provided in filters" % f)
+                print("Invalid regex: %s provided in filters" % f,
+                      file=sys.stderr)
                 sys.exit(5)
         else:
             _filters.append(f)
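The stderr change in this hunk keeps the error message out of any captured stdout stream (e.g. when stestr's output is piped). A standalone sketch of the same pattern, run in a subprocess so the exit status and stream routing are observable (the regex and message are illustrative):

```python
import subprocess
import sys

# Child mimicking the diff's pattern: report a bad regex on stderr, exit 5.
code = (
    "import re, sys\n"
    "try:\n"
    "    re.compile('[invalid')\n"          # unterminated character set
    "except re.error:\n"
    "    print('Invalid regex: [invalid provided in filters',"
    " file=sys.stderr)\n"
    "    sys.exit(5)\n"
)
proc = subprocess.run([sys.executable, '-c', code],
                      capture_output=True, text=True)
print(proc.returncode)        # 5
print(proc.stdout == '')      # True: stdout stays clean
```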
@@ -66,7 +68,7 @@
                 regex_comment_lst.append((re.compile(line_regex), comment, []))
             except re.error:
                 print("Invalid regex: %s in provided blacklist file" %
-                      line_regex)
+                      line_regex, file=sys.stderr)
                 sys.exit(5)
     return regex_comment_lst
 
@@ -82,7 +84,7 @@
                 lines.append(re.compile(line_regex))
             except re.error:
                 print("Invalid regex: %s in provided whitelist file" %
-                      line_regex)
+                      line_regex, file=sys.stderr)
                 sys.exit(5)
     return lines
 
@@ -127,7 +129,8 @@
         try:
             record = (re.compile(black_regex), msg, [])
         except re.error:
-            print("Invalid regex: %s used for black_regex" % black_regex)
+            print("Invalid regex: %s used for black_regex" % black_regex,
+                  file=sys.stderr)
             sys.exit(5)
         if black_data:
             black_data.append(record)
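The selection.py hunks all make the same change: regex-compilation diagnostics now go to stderr instead of stdout, so that stdout stays machine-parsable for test-list and subunit output. A minimal sketch of the pattern (the function name is illustrative, not stestr's API):

```python
import io
import re
import sys
from contextlib import redirect_stderr, redirect_stdout

def compile_filters(filters):
    # Compile errors are reported on stderr, mirroring the selection.py
    # change, so stdout stays clean for test-list/subunit output.
    compiled = []
    for f in filters:
        try:
            compiled.append(re.compile(f))
        except re.error:
            print("Invalid regex: %s provided in filters" % f,
                  file=sys.stderr)
    return compiled

out, err = io.StringIO(), io.StringIO()
with redirect_stdout(out), redirect_stderr(err):
    compile_filters(['good.*', '*bad'])  # '*bad' raises re.error
print(out.getvalue() == '')                # True: nothing leaked to stdout
print('Invalid regex' in err.getvalue())   # True
```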
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/subunit_trace.py new/stestr-2.2.0/stestr/subunit_trace.py
--- old/stestr-2.1.1/stestr/subunit_trace.py    2018-08-09 14:40:52.000000000 +0200
+++ new/stestr-2.2.0/stestr/subunit_trace.py    2018-11-27 21:49:05.000000000 +0100
@@ -16,9 +16,9 @@
 
 """Trace a subunit stream in reasonable detail and high accuracy."""
 from __future__ import absolute_import
+from __future__ import print_function
 
 import argparse
-import datetime
 import functools
 import os
 import re
@@ -367,17 +367,22 @@
     result = testtools.StreamResultRouter(result)
     cat = subunit.test_results.CatFiles(stdout)
     result.add_rule(cat, 'test_id', test_id=None)
-    start_time = datetime.datetime.utcnow()
     result.startTestRun()
     try:
         stream.run(result)
     finally:
         result.stopTestRun()
-    stop_time = datetime.datetime.utcnow()
+    start_times = []
+    stop_times = []
+    for worker in RESULTS:
+        start_times += [x['timestamps'][0] for x in RESULTS[worker]]
+        stop_times += [x['timestamps'][1] for x in RESULTS[worker]]
+    start_time = min(start_times)
+    stop_time = max(stop_times)
     elapsed_time = stop_time - start_time
 
     if count_tests('status', '.*') == 0:
-        print("The test run didn't actually run any tests")
+        print("The test run didn't actually run any tests", file=sys.stderr)
         return 1
     if post_fails:
         print_fails(stdout)
@@ -387,7 +392,7 @@
     # NOTE(mtreinish): Ideally this should live in testtools streamSummary
     # this is just in place until the behavior lands there (if it ever does)
     if count_tests('status', '^success$') == 0:
-        print("\nNo tests were successful during the run")
+        print("\nNo tests were successful during the run", file=sys.stderr)
         return 1
     return 0 if results.wasSuccessful(summary) else 1
 
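The timing change above drops the wall-clock measurement around stream parsing and instead derives elapsed time from the timestamps the tests themselves reported, taking the earliest start and latest stop across all workers. A sketch with a hypothetical stand-in for subunit-trace's `RESULTS` mapping:

```python
import datetime

def ts(seconds):
    return datetime.datetime(2018, 11, 27, 21, 49, seconds)

# Hypothetical stand-in for subunit-trace's RESULTS structure:
# worker id -> list of test dicts carrying (start, stop) timestamps.
RESULTS = {
    0: [{'timestamps': (ts(0), ts(5))}, {'timestamps': (ts(5), ts(9))}],
    1: [{'timestamps': (ts(1), ts(12))}],
}

start_times, stop_times = [], []
for worker in RESULTS:
    start_times += [x['timestamps'][0] for x in RESULTS[worker]]
    stop_times += [x['timestamps'][1] for x in RESULTS[worker]]

# Elapsed time spans the earliest reported start to the latest reported
# stop, not the wall time spent parsing the stream.
elapsed = max(stop_times) - min(start_times)
print(elapsed)  # 0:00:12
```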
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/test_processor.py new/stestr-2.2.0/stestr/test_processor.py
--- old/stestr-2.1.1/stestr/test_processor.py   2018-02-19 17:33:06.000000000 +0100
+++ new/stestr-2.2.0/stestr/test_processor.py   2018-11-29 21:57:37.000000000 +0100
@@ -211,9 +211,15 @@
                     results.CatFiles(new_out))
             out = new_out.getvalue()
             if out:
-                sys.stdout.write(six.text_type(out))
+                if six.PY3:
+                    sys.stdout.write(out.decode('utf8'))
+                else:
+                    sys.stdout.write(out)
             if err:
-                sys.stderr.write(six.text_type(err))
+                if six.PY3:
+                    sys.stderr.write(err.decode('utf8'))
+                else:
+                    sys.stderr.write(err)
             sys.stdout.write("\n" + "=" * 80 + "\n"
                              "The above traceback was encountered during "
                              "test discovery which imports all the found test"
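The test_processor.py hunk addresses the py3 discovery-error formatting: the captured subunit payload arrives as bytes on Python 3 and must be decoded before it can be written to a text stream. A minimal sketch of the idea, without the six dependency:

```python
import io

def write_captured(stream, payload):
    # Captured subunit output arrives as bytes on Python 3; decode it
    # before writing to a text stream. On Python 2 a str payload can be
    # written as-is. (Same idea as the test_processor.py change.)
    if isinstance(payload, bytes):
        payload = payload.decode('utf8')
    stream.write(payload)

buf = io.StringIO()
write_captured(buf, b'Traceback (most recent call last):\n')
write_captured(buf, 'ImportError: No module named fake\n')
print(buf.getvalue())
```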
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/tests/test_bisect_return_codes.py new/stestr-2.2.0/stestr/tests/test_bisect_return_codes.py
--- old/stestr-2.1.1/stestr/tests/test_bisect_return_codes.py   2018-04-11 02:31:10.000000000 +0200
+++ new/stestr-2.2.0/stestr/tests/test_bisect_return_codes.py   2018-11-27 21:49:05.000000000 +0100
@@ -66,7 +66,7 @@
         lines = six.text_type(out.rstrip()).splitlines()
         self.assertEqual(3, p_analyze.returncode,
                          'Analyze isolation returned an unexpected return code'
-                         'Stdout: %s\nStderr: %s' % (out, err))
+                         '\nStdout: %s\nStderr: %s' % (out, err))
         last_line = ('tests.test_serial_fails.TestFakeClass.test_B  '
                      'tests.test_serial_fails.TestFakeClass.test_A')
         self.assertEqual(last_line, lines[-1])
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/tests/test_return_codes.py new/stestr-2.2.0/stestr/tests/test_return_codes.py
--- old/stestr-2.1.1/stestr/tests/test_return_codes.py  2018-08-09 14:40:52.000000000 +0200
+++ new/stestr-2.2.0/stestr/tests/test_return_codes.py  2018-11-27 21:49:05.000000000 +0100
@@ -314,3 +314,33 @@
         stdout = fixtures.StringStream('stdout')
         self.useFixture(stdout)
         self.assertEqual(0, list_cmd.list_command(stdout=stdout.stream))
+
+    def test_run_no_discover_pytest_path(self):
+        passing_string = 'tests/test_passing.py::FakeTestClass::test_pass_list'
+        out, err = self.assertRunExit('stestr run -n %s' % passing_string, 0)
+        lines = out.decode('utf8').splitlines()
+        self.assertIn(' - Passed: 1', lines)
+        self.assertIn(' - Failed: 0', lines)
+
+    def test_run_no_discover_pytest_path_failing(self):
+        passing_string = 'tests/test_failing.py::FakeTestClass::test_pass_list'
+        out, err = self.assertRunExit('stestr run -n %s' % passing_string, 1)
+        lines = out.decode('utf8').splitlines()
+        self.assertIn(' - Passed: 0', lines)
+        self.assertIn(' - Failed: 1', lines)
+
+    def test_run_no_discover_file_path(self):
+        passing_string = 'tests/test_passing.py'
+        out, err = self.assertRunExit('stestr run -n %s' % passing_string, 0)
+        lines = out.decode('utf8').splitlines()
+        self.assertIn(' - Passed: 2', lines)
+        self.assertIn(' - Failed: 0', lines)
+        self.assertIn(' - Expected Fail: 1', lines)
+
+    def test_run_no_discover_file_path_failing(self):
+        passing_string = 'tests/test_failing.py'
+        out, err = self.assertRunExit('stestr run -n %s' % passing_string, 1)
+        lines = out.decode('utf8').splitlines()
+        self.assertIn(' - Passed: 0', lines)
+        self.assertIn(' - Failed: 2', lines)
+        self.assertIn(' - Unexpected Success: 1', lines)
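These new tests exercise the "test class and method by path on no-discover" feature: `stestr run -n` now accepts pytest-style ids like `tests/test_passing.py::FakeTestClass::test_pass_list` as well as bare file paths. A hypothetical sketch of the id normalization involved; the real stestr implementation may differ in details and this helper name is illustrative only:

```python
def normalize_no_discover_id(test_id):
    # Hypothetical sketch: split any pytest-style '::Class::method'
    # suffix off the file path, then build a dotted unittest-style id
    # from the path. Not stestr's actual function.
    path, _, rest = test_id.partition('::')
    module = path[:-3] if path.endswith('.py') else path
    module = module.replace('/', '.')
    if rest:
        return module + '.' + rest.replace('::', '.')
    return module

print(normalize_no_discover_id(
    'tests/test_passing.py::FakeTestClass::test_pass_list'))
# tests.test_passing.FakeTestClass.test_pass_list
```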
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/tests/test_user_config.py new/stestr-2.2.0/stestr/tests/test_user_config.py
--- old/stestr-2.1.1/stestr/tests/test_user_config.py   2018-04-11 02:31:10.000000000 +0200
+++ new/stestr-2.2.0/stestr/tests/test_user_config.py   2018-11-27 21:49:05.000000000 +0100
@@ -12,10 +12,8 @@
 
 import io
 import os
-import sys
 
 import mock
-import six
 
 from stestr.tests import base
 from stestr import user_config
@@ -67,7 +65,8 @@
     @mock.patch('stestr.user_config.UserConfig')
     def test_get_user_config_invalid_path(self, user_mock, exit_mock):
         user_config.get_user_config('/i_am_an_invalid_path')
-        exit_mock.assert_called_once_with(1)
+        msg = 'The specified stestr user config is not a valid path'
+        exit_mock.assert_called_once_with(msg)
 
     @mock.patch('os.path.isfile')
     @mock.patch('stestr.user_config.UserConfig')
@@ -103,41 +102,24 @@
         user_conf = user_config.UserConfig('/path')
         self.assertEqual({}, user_conf.config)
 
-    def _restore_stdout(self, old_out):
-        sys.stdout = old_out
-
     @mock.patch('yaml.load', return_value={'init': {'subunit-trace': True}})
     @mock.patch('sys.exit')
     @mock.patch('six.moves.builtins.open', mock.mock_open())
     def test_user_config_invalid_command(self, exit_mock, yaml_mock):
-
-        temp_out = sys.stdout
-        std_out = six.StringIO()
-        sys.stdout = std_out
-        self.addCleanup(self._restore_stdout, temp_out)
         user_config.UserConfig('/path')
-        exit_mock.assert_called_once_with(1)
         error_string = ("Provided user config file /path is invalid because:\n"
                         "extra keys not allowed @ data['init']")
-        std_out.seek(0)
-        self.assertEqual(error_string, std_out.read().rstrip())
+        exit_mock.assert_called_once_with(error_string)
 
     @mock.patch('yaml.load', return_value={'run': {'subunit-trace': True}})
     @mock.patch('sys.exit')
     @mock.patch('six.moves.builtins.open', mock.mock_open())
     def test_user_config_invalid_option(self, exit_mock, yaml_mock):
-
-        temp_out = sys.stdout
-        std_out = six.StringIO()
-        sys.stdout = std_out
-        self.addCleanup(self._restore_stdout, temp_out)
         user_config.UserConfig('/path')
-        exit_mock.assert_called_once_with(1)
         error_string = ("Provided user config file /path is invalid because:\n"
                         "extra keys not allowed @ "
                         "data['run']['subunit-trace']")
-        std_out.seek(0)
-        self.assertEqual(error_string, std_out.read().rstrip())
+        exit_mock.assert_called_once_with(error_string)
 
     @mock.patch('six.moves.builtins.open',
                 return_value=io.BytesIO(FULL_YAML.encode('utf-8')))
@@ -171,30 +153,18 @@
     @mock.patch('six.moves.builtins.open',
                 return_value=io.BytesIO(INVALID_YAML_FIELD.encode('utf-8')))
     def test_user_config_invalid_value_type(self, open_mock, exit_mock):
-        temp_out = sys.stdout
-        std_out = six.StringIO()
-        sys.stdout = std_out
-        self.addCleanup(self._restore_stdout, temp_out)
         user_config.UserConfig('/path')
-        exit_mock.assert_called_once_with(1)
         error_string = ("Provided user config file /path is invalid because:\n"
                         "expected bool for dictionary value @ "
                         "data['run']['color']")
-        std_out.seek(0)
-        self.assertEqual(error_string, std_out.read().rstrip())
+        exit_mock.assert_called_once_with(error_string)
 
     @mock.patch('sys.exit')
     @mock.patch('six.moves.builtins.open',
                 return_value=io.BytesIO(YAML_NOT_INT.encode('utf-8')))
     def test_user_config_invalid_integer(self, open_mock, exit_mock):
-        temp_out = sys.stdout
-        std_out = six.StringIO()
-        sys.stdout = std_out
-        self.addCleanup(self._restore_stdout, temp_out)
         user_config.UserConfig('/path')
-        exit_mock.assert_called_once_with(1)
         error_string = ("Provided user config file /path is invalid because:\n"
                         "expected int for dictionary value @ "
                         "data['run']['concurrency']")
-        std_out.seek(0)
-        self.assertEqual(error_string, std_out.read().rstrip())
+        exit_mock.assert_called_once_with(error_string)
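The simplification in these test hunks follows directly from the `sys.exit(msg)` change: once the error message travels through `sys.exit`, the tests can assert on it via the mocked `sys.exit` and drop all the stdout capture/restore boilerplate. The pattern in miniature (the `validate` function is a stand-in, not stestr code):

```python
import sys
from unittest import mock

def validate(path):
    # Stand-in for the user_config validation path: the error message
    # is passed to sys.exit() instead of being printed separately.
    msg = 'Provided user config file %s is invalid because:\nbad key' % path
    sys.exit(msg)

# Patching sys.exit lets the test assert on the message directly,
# with no stdout redirection or cleanup needed.
with mock.patch('sys.exit') as exit_mock:
    validate('/path')
exit_mock.assert_called_once_with(
    'Provided user config file /path is invalid because:\nbad key')
```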
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr/user_config.py new/stestr-2.2.0/stestr/user_config.py
--- old/stestr-2.1.1/stestr/user_config.py      2018-04-11 02:31:10.000000000 +0200
+++ new/stestr-2.2.0/stestr/user_config.py      2018-11-27 21:49:05.000000000 +0100
@@ -31,8 +31,7 @@
     else:
         if not os.path.isfile(path):
             msg = 'The specified stestr user config is not a valid path'
-            print(msg)
-            sys.exit(1)
+            sys.exit(msg)
 
     return UserConfig(path)
 
@@ -75,8 +74,7 @@
         except vp.MultipleInvalid as e:
             msg = 'Provided user config file %s is invalid because:\n%s' % (
                 path, str(e))
-            print(msg)
-            sys.exit(1)
+            sys.exit(msg)
 
     @property
     def run(self):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr.egg-info/PKG-INFO new/stestr-2.2.0/stestr.egg-info/PKG-INFO
--- old/stestr-2.1.1/stestr.egg-info/PKG-INFO   2018-08-09 14:50:01.000000000 +0200
+++ new/stestr-2.2.0/stestr.egg-info/PKG-INFO   2018-11-30 02:32:05.000000000 +0100
@@ -1,14 +1,13 @@
-Metadata-Version: 1.1
+Metadata-Version: 2.1
 Name: stestr
-Version: 2.1.1
+Version: 2.2.0
 Summary: A parallel Python test runner built around subunit
 Home-page: http://stestr.readthedocs.io/en/latest/
 Author: Matthew Treinish
 Author-email: [email protected]
 License: UNKNOWN
-Description-Content-Type: UNKNOWN
-Description: Slim/Super Test Repository
-        ==========================
+Description: stestr
+        ======
         
         .. image:: https://img.shields.io/travis/mtreinish/stestr/master.svg?style=flat-square
             :target: https://travis-ci.org/mtreinish/stestr
@@ -31,12 +30,22 @@
         Overview
         --------
         
-        stestr is a fork of the `testrepository`_ that concentrates on being a
-        dedicated test runner for python projects. The generic abstraction
-        layers which enabled testr to work with any subunit emitting runner are gone.
-        stestr hard codes python-subunit-isms into how it works. The code base is also
-        designed to try and be explicit, and to provide a python api that is documented
-        and has examples.
+        stestr is a parallel Python test runner designed to execute `unittest`_ test
+        suites using multiple processes to split up execution of a test suite. It also
+        will store a history of all test runs to help in debugging failures and
+        optimizing the scheduler to improve speed. To accomplish this goal it uses the
+        `subunit`_ protocol to facilitate streaming and storing results from multiple
+        workers.
+
+        .. _unittest: https://docs.python.org/3/library/unittest.html
+        .. _subunit: https://github.com/testing-cabal/subunit
+
+        stestr originally started as a fork of the `testrepository`_ that concentrates
+        on being a dedicated test runner for python projects. The generic abstraction
+        layers which enabled testrepository to work with any subunit emitting runner
+        are gone. stestr hard codes python-subunit-isms into how it works. The code
+        base is also designed to try and be explicit, and to provide a python api that
+        is documented and has examples.
         
         .. _testrepository: https://testrepository.readthedocs.org/en/latest
         
@@ -141,3 +150,5 @@
 Classifier: Programming Language :: Python :: 3.7
 Classifier: Topic :: Software Development :: Testing
 Classifier: Topic :: Software Development :: Quality Assurance
+Provides-Extra: test
+Provides-Extra: sql
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr.egg-info/pbr.json new/stestr-2.2.0/stestr.egg-info/pbr.json
--- old/stestr-2.1.1/stestr.egg-info/pbr.json   2018-08-09 14:50:01.000000000 +0200
+++ new/stestr-2.2.0/stestr.egg-info/pbr.json   2018-11-30 02:32:06.000000000 +0100
@@ -1 +1 @@
-{"git_version": "48e3c05", "is_release": true}
\ No newline at end of file
+{"git_version": "8d35dcb", "is_release": true}
\ No newline at end of file
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/stestr.egg-info/requires.txt new/stestr-2.2.0/stestr.egg-info/requires.txt
--- old/stestr-2.1.1/stestr.egg-info/requires.txt       2018-08-09 14:50:01.000000000 +0200
+++ new/stestr-2.2.0/stestr.egg-info/requires.txt       2018-11-30 02:32:05.000000000 +0100
@@ -10,3 +10,12 @@
 
 [sql]
 subunit2sql>=1.8.0
+
+[test]
+hacking<0.12,>=0.11.0
+sphinx>=1.5.1
+mock>=2.0
+subunit2sql>=1.8.0
+coverage>=4.0
+ddt>=1.0.1
+doc8>=0.8.0
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/test-requirements.txt new/stestr-2.2.0/test-requirements.txt
--- old/stestr-2.1.1/test-requirements.txt      2017-10-16 20:31:25.000000000 +0200
+++ new/stestr-2.2.0/test-requirements.txt      2018-11-27 21:49:05.000000000 +0100
@@ -8,3 +8,4 @@
 subunit2sql>=1.8.0
 coverage>=4.0 # Apache-2.0
 ddt>=1.0.1 # MIT
+doc8>=0.8.0 # Apache-2.0
\ No newline at end of file
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/tools/testr_to_stestr.py new/stestr-2.2.0/tools/testr_to_stestr.py
--- old/stestr-2.1.1/tools/testr_to_stestr.py   2018-02-18 00:39:04.000000000 +0100
+++ new/stestr-2.2.0/tools/testr_to_stestr.py   2018-11-27 21:49:05.000000000 +0100
@@ -18,8 +18,7 @@
 import six
 
 if not os.path.isfile('.testr.conf'):
-    print("Testr config file not found")
-    sys.exit(1)
+    sys.exit("Testr config file not found")
 
 with open('.testr.conf', 'r') as testr_conf_file:
     config = six.moves.configparser.ConfigParser()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/stestr-2.1.1/tox.ini new/stestr-2.2.0/tox.ini
--- old/stestr-2.1.1/tox.ini    2018-08-09 14:40:52.000000000 +0200
+++ new/stestr-2.2.0/tox.ini    2018-11-27 21:49:05.000000000 +0100
@@ -32,7 +32,9 @@
     coverage html -d cover
 
 [testenv:docs]
-commands = python setup.py build_sphinx
+commands =
+  doc8 -e .rst doc/source CONTRIBUTING.rst README.rst
+  python setup.py build_sphinx
 
 [testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
