Hello community,
here is the log from the commit of package python-pytest-rerunfailures for
openSUSE:Factory checked in at 2020-11-08 20:59:13
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-pytest-rerunfailures (Old)
     and /work/SRC/openSUSE:Factory/.python-pytest-rerunfailures.new.11331 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "python-pytest-rerunfailures"
Sun Nov 8 20:59:13 2020 rev:6 rq:846883 version:9.1.1
Changes:
--------
--- /work/SRC/openSUSE:Factory/python-pytest-rerunfailures/python-pytest-rerunfailures.changes 2020-03-23 12:53:10.492054918 +0100
+++ /work/SRC/openSUSE:Factory/.python-pytest-rerunfailures.new.11331/python-pytest-rerunfailures.changes 2020-11-08 20:59:18.112289842 +0100
@@ -1,0 +2,19 @@
+Sat Nov 7 18:33:40 UTC 2020 - Benjamin Greiner <[email protected]>
+
+- Update to 9.1.1
+ Compatibility fix.
+ * Ignore --result-log command line option when used together with
+ pytest >= 6.1.0, as it was removed there. This is a quick fix,
+ use an older version of pytest, if you want to keep this
+ feature for now. (Thanks to @ntessore for the PR)
+ * Support up to pytest 6.1.0.
+- Changelog for 9.1
+ Features
+ * Add a new flag --only-rerun to allow for users to rerun only
+ certain errors.
+ Other changes
+ * Drop dependency on mock.
+ * Add support for pre-commit and add a linting tox target. (#117)
+ (PR from @gnikonorov)
+
+-------------------------------------------------------------------
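
As a quick illustration of the new ``--only-rerun`` flag mentioned in the changelog entry above (a hypothetical test module, not shipped with the package): run with ``pytest --reruns 5 --only-rerun AssertionError``, only the AssertionError failure is retried, while the ValueError failure is reported immediately without reruns.

    # test_only_rerun_demo.py -- hypothetical example module
    def test_flaky_assertion():
        # matches --only-rerun AssertionError, so it is retried up to 5 times
        assert False

    def test_hard_error():
        # does not match the pattern, so it fails on the first run, no reruns
        raise ValueError("not retried")
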
Old:
----
pytest-rerunfailures-9.0.tar.gz
New:
----
pytest-rerunfailures-9.1.1.tar.gz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ python-pytest-rerunfailures.spec ++++++
--- /var/tmp/diff_new_pack.PXDLd4/_old 2020-11-08 20:59:18.840288429 +0100
+++ /var/tmp/diff_new_pack.PXDLd4/_new 2020-11-08 20:59:18.844288422 +0100
@@ -19,7 +19,7 @@
%{?!python_module:%define python_module() python-%{**} python3-%{**}}
%define skip_python2 1
Name: python-pytest-rerunfailures
-Version: 9.0
+Version: 9.1.1
Release: 0
Summary: A pytest plugin to re-run tests
License: MPL-2.0
@@ -42,8 +42,6 @@
%prep
%setup -q -n pytest-rerunfailures-%{version}
-# do not depend on mock https://github.com/pytest-dev/pytest-rerunfailures/pull/107
-sed -i -e 's:import mock:from unittest import mock:g' test_pytest_rerunfailures.py
%build
%python_build
@@ -53,12 +51,13 @@
%python_expand %fdupes %{buildroot}%{$python_sitelib}
%check
-export PYTHONDONTWRITEBYTECODE=1
%pytest
%files %{python_files}
%doc CHANGES.rst README.rst
%license LICENSE
-%{python_sitelib}/*
+%{python_sitelib}/pytest_rerunfailures.py*
+%pycache_only %{python_sitelib}/__pycache__/pytest_rerunfailures*
+%{python_sitelib}/pytest_rerunfailures-%{version}*-info
%changelog
++++++ pytest-rerunfailures-9.0.tar.gz -> pytest-rerunfailures-9.1.1.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/CHANGES.rst new/pytest-rerunfailures-9.1.1/CHANGES.rst
--- old/pytest-rerunfailures-9.0/CHANGES.rst 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/CHANGES.rst 2020-09-29 08:28:33.000000000 +0200
@@ -1,6 +1,43 @@
Changelog
=========
+9.1.1 (2020-09-29)
+------------------
+
+Compatibility fix.
+++++++++++++++++++
+
+- Ignore ``--result-log`` command line option when used together with ``pytest
+ >= 6.1.0``, as it was removed there. This is a quick fix, use an older
+ version of pytest, if you want to keep this feature for now.
+ (Thanks to `@ntessore`_ for the PR)
+
+- Support up to pytest 6.1.0.
+
+.. _@ntessore: https://github.com/ntessore
+
+
+9.1 (2020-08-26)
+----------------
+
+Features
+++++++++
+
+- Add a new flag ``--only-rerun`` to allow for users to rerun only certain
+ errors.
+
+Other changes
++++++++++++++
+
+- Drop dependency on ``mock``.
+
+- Add support for pre-commit and add a linting tox target.
+ (`#117 <https://github.com/pytest-dev/pytest-rerunfailures/pull/117>`_)
+ (PR from `@gnikonorov`_)
+
+.. _@gnikonorov: https://github.com/gnikonorov
+
+
9.0 (2020-03-18)
----------------
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/CONTRIBUTING.rst new/pytest-rerunfailures-9.1.1/CONTRIBUTING.rst
--- old/pytest-rerunfailures-9.0/CONTRIBUTING.rst 1970-01-01 01:00:00.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/CONTRIBUTING.rst 2020-09-29 08:28:33.000000000 +0200
@@ -0,0 +1,40 @@
+============================
+Contribution getting started
+============================
+
+Contributions are highly welcomed and appreciated. Every little bit of help counts,
+so do not hesitate!
+
+.. contents::
+ :depth: 2
+ :backlinks: none
+
+
+Preparing Pull Requests
+-----------------------
+
+#. Fork the repository.
+
+#. Enable and install `pre-commit <https://pre-commit.com>`_ to ensure style-guides and code checks are followed::
+
+ $ pip install --user pre-commit
+ $ pre-commit install
+
+ Afterwards ``pre-commit`` will run whenever you commit.
+
+ Note that this is automatically done when running ``tox -e linting``.
+
+   https://pre-commit.com/ is a framework for managing and maintaining multi-language pre-commit hooks
+ to ensure code-style and code formatting is consistent.
+
+#. Install `tox <https://tox.readthedocs.io/en/latest/>`_:
+
+ Tox is used to run all the tests and will automatically setup virtualenvs
+ to run the tests in. Implicitly https://virtualenv.pypa.io/ is used::
+
+ $ pip install tox
+ $ tox -e linting,py37
+
+#. Follow **PEP-8** for naming and `black <https://github.com/psf/black>`_ for formatting.
+
+#. Add a line item to the current **unreleased** version in ``CHANGES.rst``, unless the change is trivial.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/LICENSE new/pytest-rerunfailures-9.1.1/LICENSE
--- old/pytest-rerunfailures-9.0/LICENSE 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/LICENSE 2020-09-29 08:28:33.000000000 +0200
@@ -1,3 +1,3 @@
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
-file, You can obtain one at http://mozilla.org/MPL/2.0/.
+file, You can obtain one at https://www.mozilla.org/MPL/2.0/.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/PKG-INFO new/pytest-rerunfailures-9.1.1/PKG-INFO
--- old/pytest-rerunfailures-9.0/PKG-INFO 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/PKG-INFO 2020-09-29 08:28:33.859913300 +0200
@@ -1,6 +1,6 @@
-Metadata-Version: 1.1
+Metadata-Version: 1.2
Name: pytest-rerunfailures
-Version: 9.0
+Version: 9.1.1
Summary: pytest plugin to re-run tests to eliminate flaky failures
Home-page: https://github.com/pytest-dev/pytest-rerunfailures
Author: Leah Klearman
@@ -11,7 +11,7 @@
pytest-rerunfailures
====================
-        pytest-rerunfailures is a plugin for `py.test <http://pytest.org>`_ that
+        pytest-rerunfailures is a plugin for `pytest <https://pytest.org>`_ that
re-runs tests to eliminate intermittent failures.
.. image:: https://img.shields.io/badge/license-MPL%202.0-blue.svg
@@ -65,6 +65,24 @@
$ pytest --reruns 5 --reruns-delay 1
+ Re-run all failures matching certain expressions
+ ------------------------------------------------
+
        + To re-run only those failures that match a certain list of expressions, use the
        + ``--only-rerun`` flag and pass it a regular expression. For example, the following would
+ only rerun those errors that match ``AssertionError``:
+
+ .. code-block:: bash
+
+ $ pytest --reruns 5 --only-rerun AssertionError
+
        + Passing the flag multiple times accumulates the arguments, so the following would only rerun
+ those errors that match ``AssertionError`` or ``ValueError``:
+
+ .. code-block:: bash
+
        + $ pytest --reruns 5 --only-rerun AssertionError --only-rerun ValueError
+
Re-run individual failures
--------------------------
@@ -127,8 +145,8 @@
Resources
---------
        - - `Issue Tracker <http://github.com/pytest-dev/pytest-rerunfailures/issues>`_
        - - `Code <http://github.com/pytest-dev/pytest-rerunfailures/>`_
        + - `Issue Tracker <https://github.com/pytest-dev/pytest-rerunfailures/issues>`_
+ - `Code <https://github.com/pytest-dev/pytest-rerunfailures/>`_
Development
-----------
@@ -137,7 +155,7 @@
.. code-block:: python
- @hookimpl(tryfirst=True, hookwrapper=True)
+ @hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
print(item.execution_count)
@@ -145,6 +163,43 @@
Changelog
=========
+ 9.1.1 (2020-09-29)
+ ------------------
+
+ Compatibility fix.
+ ++++++++++++++++++
+
        + - Ignore ``--result-log`` command line option when used together with ``pytest
        + >= 6.1.0``, as it was removed there. This is a quick fix, use an older
+ version of pytest, if you want to keep this feature for now.
+ (Thanks to `@ntessore`_ for the PR)
+
+ - Support up to pytest 6.1.0.
+
+ .. _@ntessore: https://github.com/ntessore
+
+
+ 9.1 (2020-08-26)
+ ----------------
+
+ Features
+ ++++++++
+
        + - Add a new flag ``--only-rerun`` to allow for users to rerun only certain
+ errors.
+
+ Other changes
+ +++++++++++++
+
+ - Drop dependency on ``mock``.
+
+ - Add support for pre-commit and add a linting tox target.
        +   (`#117 <https://github.com/pytest-dev/pytest-rerunfailures/pull/117>`_)
+ (PR from `@gnikonorov`_)
+
+ .. _@gnikonorov: https://github.com/gnikonorov
+
+
9.0 (2020-03-18)
----------------
@@ -377,3 +432,4 @@
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
+Requires-Python: >=3.5
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/README.rst new/pytest-rerunfailures-9.1.1/README.rst
--- old/pytest-rerunfailures-9.0/README.rst 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/README.rst 2020-09-29 08:28:33.000000000 +0200
@@ -1,7 +1,7 @@
pytest-rerunfailures
====================
-pytest-rerunfailures is a plugin for `py.test <http://pytest.org>`_ that
+pytest-rerunfailures is a plugin for `pytest <https://pytest.org>`_ that
re-runs tests to eliminate intermittent failures.
.. image:: https://img.shields.io/badge/license-MPL%202.0-blue.svg
@@ -55,6 +55,24 @@
$ pytest --reruns 5 --reruns-delay 1
+Re-run all failures matching certain expressions
+------------------------------------------------
+
+To re-run only those failures that match a certain list of expressions, use the
+``--only-rerun`` flag and pass it a regular expression. For example, the following would
+only rerun those errors that match ``AssertionError``:
+
+.. code-block:: bash
+
+ $ pytest --reruns 5 --only-rerun AssertionError
+
+Passing the flag multiple times accumulates the arguments, so the following would only rerun
+those errors that match ``AssertionError`` or ``ValueError``:
+
+.. code-block:: bash
+
+ $ pytest --reruns 5 --only-rerun AssertionError --only-rerun ValueError
+
Re-run individual failures
--------------------------
@@ -117,8 +135,8 @@
Resources
---------
-- `Issue Tracker <http://github.com/pytest-dev/pytest-rerunfailures/issues>`_
-- `Code <http://github.com/pytest-dev/pytest-rerunfailures/>`_
+- `Issue Tracker <https://github.com/pytest-dev/pytest-rerunfailures/issues>`_
+- `Code <https://github.com/pytest-dev/pytest-rerunfailures/>`_
Development
-----------
@@ -127,6 +145,6 @@
.. code-block:: python
- @hookimpl(tryfirst=True, hookwrapper=True)
+ @hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
print(item.execution_count)
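
The ``pytest_runtest_makereport`` snippet shown in the README's Development section above would normally live in a project's conftest.py; a minimal self-contained sketch (the file placement and the print statement are assumptions for illustration, not part of this diff) could look like:

    # conftest.py -- sketch only; assumes pytest and pytest-rerunfailures are installed
    from pytest import hookimpl

    @hookimpl(tryfirst=True)
    def pytest_runtest_makereport(item, call):
        # pytest-rerunfailures sets item.execution_count (1 on the first run,
        # incremented on each rerun), so it can be inspected or logged here
        print(getattr(item, "execution_count", None))
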
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/pytest_rerunfailures.egg-info/PKG-INFO new/pytest-rerunfailures-9.1.1/pytest_rerunfailures.egg-info/PKG-INFO
--- old/pytest-rerunfailures-9.0/pytest_rerunfailures.egg-info/PKG-INFO 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/pytest_rerunfailures.egg-info/PKG-INFO 2020-09-29 08:28:33.000000000 +0200
@@ -1,6 +1,6 @@
-Metadata-Version: 1.1
+Metadata-Version: 1.2
Name: pytest-rerunfailures
-Version: 9.0
+Version: 9.1.1
Summary: pytest plugin to re-run tests to eliminate flaky failures
Home-page: https://github.com/pytest-dev/pytest-rerunfailures
Author: Leah Klearman
@@ -11,7 +11,7 @@
pytest-rerunfailures
====================
        - pytest-rerunfailures is a plugin for `py.test <http://pytest.org>`_ that
        + pytest-rerunfailures is a plugin for `pytest <https://pytest.org>`_ that
re-runs tests to eliminate intermittent failures.
.. image:: https://img.shields.io/badge/license-MPL%202.0-blue.svg
@@ -65,6 +65,24 @@
$ pytest --reruns 5 --reruns-delay 1
+ Re-run all failures matching certain expressions
+ ------------------------------------------------
+
        + To re-run only those failures that match a certain list of expressions, use the
        + ``--only-rerun`` flag and pass it a regular expression. For example, the following would
+ only rerun those errors that match ``AssertionError``:
+
+ .. code-block:: bash
+
+ $ pytest --reruns 5 --only-rerun AssertionError
+
        + Passing the flag multiple times accumulates the arguments, so the following would only rerun
+ those errors that match ``AssertionError`` or ``ValueError``:
+
+ .. code-block:: bash
+
        + $ pytest --reruns 5 --only-rerun AssertionError --only-rerun ValueError
+
Re-run individual failures
--------------------------
@@ -127,8 +145,8 @@
Resources
---------
        - - `Issue Tracker <http://github.com/pytest-dev/pytest-rerunfailures/issues>`_
        - - `Code <http://github.com/pytest-dev/pytest-rerunfailures/>`_
        + - `Issue Tracker <https://github.com/pytest-dev/pytest-rerunfailures/issues>`_
+ - `Code <https://github.com/pytest-dev/pytest-rerunfailures/>`_
Development
-----------
@@ -137,7 +155,7 @@
.. code-block:: python
- @hookimpl(tryfirst=True, hookwrapper=True)
+ @hookimpl(tryfirst=True)
def pytest_runtest_makereport(item, call):
print(item.execution_count)
@@ -145,6 +163,43 @@
Changelog
=========
+ 9.1.1 (2020-09-29)
+ ------------------
+
+ Compatibility fix.
+ ++++++++++++++++++
+
        + - Ignore ``--result-log`` command line option when used together with ``pytest
        + >= 6.1.0``, as it was removed there. This is a quick fix, use an older
+ version of pytest, if you want to keep this feature for now.
+ (Thanks to `@ntessore`_ for the PR)
+
+ - Support up to pytest 6.1.0.
+
+ .. _@ntessore: https://github.com/ntessore
+
+
+ 9.1 (2020-08-26)
+ ----------------
+
+ Features
+ ++++++++
+
        + - Add a new flag ``--only-rerun`` to allow for users to rerun only certain
+ errors.
+
+ Other changes
+ +++++++++++++
+
+ - Drop dependency on ``mock``.
+
+ - Add support for pre-commit and add a linting tox target.
        +   (`#117 <https://github.com/pytest-dev/pytest-rerunfailures/pull/117>`_)
+ (PR from `@gnikonorov`_)
+
+ .. _@gnikonorov: https://github.com/gnikonorov
+
+
9.0 (2020-03-18)
----------------
@@ -377,3 +432,4 @@
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
+Requires-Python: >=3.5
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/pytest_rerunfailures.egg-info/SOURCES.txt new/pytest-rerunfailures-9.1.1/pytest_rerunfailures.egg-info/SOURCES.txt
--- old/pytest-rerunfailures-9.0/pytest_rerunfailures.egg-info/SOURCES.txt 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/pytest_rerunfailures.egg-info/SOURCES.txt 2020-09-29 08:28:33.000000000 +0200
@@ -1,4 +1,5 @@
CHANGES.rst
+CONTRIBUTING.rst
LICENSE
MANIFEST.in
README.rst
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/pytest_rerunfailures.py new/pytest-rerunfailures-9.1.1/pytest_rerunfailures.py
--- old/pytest-rerunfailures-9.0/pytest_rerunfailures.py 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/pytest_rerunfailures.py 2020-09-29 08:28:33.000000000 +0200
@@ -1,13 +1,19 @@
-import pkg_resources
+import re
import time
import warnings
+import pkg_resources
import pytest
-
from _pytest.runner import runtestprotocol
-from _pytest.resultlog import ResultLog
-PYTEST_GTE_54 = pkg_resources.parse_version(pytest.__version__) >= pkg_resources.parse_version("5.4")
+PYTEST_GTE_54 = pkg_resources.parse_version(
+ pytest.__version__
+) >= pkg_resources.parse_version("5.4")
+
+PYTEST_GTE_61 = pkg_resources.parse_version(
+ pytest.__version__
+) >= pkg_resources.parse_version("6.1")
+
def works_with_current_xdist():
"""Returns compatibility with installed pytest-xdist version.
@@ -19,8 +25,8 @@
"""
try:
- d = pkg_resources.get_distribution('pytest-xdist')
- return d.parsed_version >= pkg_resources.parse_version('1.20')
+ d = pkg_resources.get_distribution("pytest-xdist")
+ return d.parsed_version >= pkg_resources.parse_version("1.20")
except pkg_resources.DistributionNotFound:
return None
@@ -28,61 +34,79 @@
# command line options
def pytest_addoption(parser):
group = parser.getgroup(
- "rerunfailures",
- "re-run failing tests to eliminate flaky failures")
+ "rerunfailures", "re-run failing tests to eliminate flaky failures"
+ )
group._addoption(
- '--reruns',
+ "--only-rerun",
+ action="append",
+ dest="only_rerun",
+ type=str,
+ default=None,
+ help="If passed, only rerun errors matching the regex provided. "
+ "Pass this flag multiple times to accumulate a list of regexes "
+ "to match",
+ )
+ group._addoption(
+ "--reruns",
action="store",
dest="reruns",
type=int,
default=0,
- help="number of times to re-run failed tests. defaults to 0.")
+ help="number of times to re-run failed tests. defaults to 0.",
+ )
group._addoption(
- '--reruns-delay',
- action='store',
- dest='reruns_delay',
+ "--reruns-delay",
+ action="store",
+ dest="reruns_delay",
type=float,
default=0,
- help='add time (seconds) delay between reruns.'
+ help="add time (seconds) delay between reruns.",
)
def pytest_configure(config):
# add flaky marker
config.addinivalue_line(
- "markers", "flaky(reruns=1, reruns_delay=0): mark test to re-run up "
- "to 'reruns' times. Add a delay of 'reruns_delay' seconds "
- "between re-runs.")
+ "markers",
+ "flaky(reruns=1, reruns_delay=0): mark test to re-run up "
+ "to 'reruns' times. Add a delay of 'reruns_delay' seconds "
+ "between re-runs.",
+ )
def _get_resultlog(config):
- if PYTEST_GTE_54:
+ if PYTEST_GTE_61:
+ return None
+ elif PYTEST_GTE_54:
# hack
from _pytest.resultlog import resultlog_key
+
return config._store.get(resultlog_key, default=None)
else:
- return getattr(config, '_resultlog', None)
+ return getattr(config, "_resultlog", None)
def _set_resultlog(config, resultlog):
- if PYTEST_GTE_54:
+ if PYTEST_GTE_61:
+ pass
+ elif PYTEST_GTE_54:
# hack
from _pytest.resultlog import resultlog_key
+
config._store[resultlog_key] = resultlog
else:
config._resultlog = resultlog
# making sure the options make sense
-# should run before / at the begining of pytest_cmdline_main
+# should run before / at the beginning of pytest_cmdline_main
def check_options(config):
val = config.getvalue
if not val("collectonly"):
if config.option.reruns != 0:
- if config.option.usepdb: # a core option
+ if config.option.usepdb: # a core option
raise pytest.UsageError("--reruns incompatible with --pdb")
-
resultlog = _get_resultlog(config)
if resultlog:
logfile = resultlog.logfile
@@ -137,8 +161,9 @@
if delay < 0:
delay = 0
- warnings.warn('Delay time between re-runs cannot be < 0. '
- 'Using default value: 0')
+ warnings.warn(
+ "Delay time between re-runs cannot be < 0. Using default value: 0"
+ )
return delay
@@ -147,9 +172,9 @@
"""
Note: remove all cached_result attribute from every fixture
"""
- cached_result = 'cached_result'
- fixture_info = getattr(item, '_fixtureinfo', None)
- for fixture_def_str in getattr(fixture_info, 'name2fixturedefs', ()):
+ cached_result = "cached_result"
+ fixture_info = getattr(item, "_fixtureinfo", None)
+ for fixture_def_str in getattr(fixture_info, "name2fixturedefs", ()):
fixture_defs = fixture_info.name2fixturedefs[fixture_def_str]
for fixture_def in fixture_defs:
if getattr(fixture_def, cached_result, None) is not None:
@@ -163,16 +188,32 @@
def _remove_failed_setup_state_from_session(item):
"""
-    Note: remove all _prepare_exc attribute from every col in stack of _setupstate and cleaning the stack itself
+ Note: remove all _prepare_exc attribute from every col in stack of
+ _setupstate and cleaning the stack itself
"""
prepare_exc = "_prepare_exc"
- setup_state = getattr(item.session, '_setupstate')
+ setup_state = getattr(item.session, "_setupstate")
for col in setup_state.stack:
if hasattr(col, prepare_exc):
delattr(col, prepare_exc)
setup_state.stack = list()
+def _should_hard_fail_on_error(session_config, report):
+ if report.outcome != "failed":
+ return False
+
+ rerun_errors = session_config.option.only_rerun
+ if not rerun_errors:
+ return False
+
+ for rerun_regex in rerun_errors:
+ if re.search(rerun_regex, report.longrepr.reprcrash.message):
+ return False
+
+ return True
+
+
def pytest_runtest_protocol(item, nextitem):
"""
Note: when teardown fails, two reports are generated for the case, one for
@@ -189,25 +230,30 @@
# first item if necessary
check_options(item.session.config)
delay = get_reruns_delay(item)
- parallel = hasattr(item.config, 'slaveinput')
+ parallel = hasattr(item.config, "slaveinput")
item.execution_count = 0
need_to_run = True
while need_to_run:
item.execution_count += 1
- item.ihook.pytest_runtest_logstart(nodeid=item.nodeid,
- location=item.location)
+        item.ihook.pytest_runtest_logstart(nodeid=item.nodeid, location=item.location)
reports = runtestprotocol(item, nextitem=nextitem, log=False)
for report in reports: # 3 reports: setup, call, teardown
+                is_terminal_error = _should_hard_fail_on_error(item.session.config, report)
report.rerun = item.execution_count - 1
- xfail = hasattr(report, 'wasxfail')
- if item.execution_count > reruns or not report.failed or xfail:
+ xfail = hasattr(report, "wasxfail")
+ if (
+ item.execution_count > reruns
+ or not report.failed
+ or xfail
+ or is_terminal_error
+ ):
# last run or no failure detected, log normally
item.ihook.pytest_runtest_logreport(report=report)
else:
# failure detected and reruns not exhausted, since i < reruns
- report.outcome = 'rerun'
+ report.outcome = "rerun"
time.sleep(delay)
if not parallel or works_with_current_xdist():
@@ -222,8 +268,7 @@
else:
need_to_run = False
- item.ihook.pytest_runtest_logfinish(nodeid=item.nodeid,
- location=item.location)
+    item.ihook.pytest_runtest_logfinish(nodeid=item.nodeid, location=item.location)
return True
@@ -231,8 +276,8 @@
def pytest_report_teststatus(report):
"""Adapted from https://pytest.org/latest/_modules/_pytest/skipping.html
"""
- if report.outcome == 'rerun':
- return 'rerun', 'R', ('RERUN', {'yellow': True})
+ if report.outcome == "rerun":
+ return "rerun", "R", ("RERUN", {"yellow": True})
def pytest_terminal_summary(terminalreporter):
@@ -244,7 +289,7 @@
lines = []
for char in tr.reportchars:
- if char in 'rR':
+ if char in "rR":
show_rerun(terminalreporter, lines)
if lines:
@@ -258,35 +303,38 @@
if rerun:
for rep in rerun:
pos = rep.nodeid
- lines.append("RERUN %s" % (pos,))
+ lines.append("RERUN {}".format(pos))
-class RerunResultLog(ResultLog):
- def __init__(self, config, logfile):
- ResultLog.__init__(self, config, logfile)
-
- def pytest_runtest_logreport(self, report):
- """
- Adds support for rerun report fix for issue:
- https://github.com/pytest-dev/pytest-rerunfailures/issues/28
- """
- if report.when != "call" and report.passed:
- return
- res = self.config.hook.pytest_report_teststatus(report=report)
- code = res[1]
- if code == 'x':
- longrepr = str(report.longrepr)
- elif code == 'X':
- longrepr = ''
- elif report.passed:
- longrepr = ""
- elif report.failed:
- longrepr = str(report.longrepr)
- elif report.skipped:
- longrepr = str(report.longrepr[2])
- elif report.outcome == 'rerun':
- longrepr = str(report.longrepr)
- else:
- longrepr = str(report.longrepr)
+if not PYTEST_GTE_61:
+ from _pytest.resultlog import ResultLog
+
+ class RerunResultLog(ResultLog):
+ def __init__(self, config, logfile):
+ ResultLog.__init__(self, config, logfile)
+
+ def pytest_runtest_logreport(self, report):
+ """
+ Adds support for rerun report fix for issue:
+ https://github.com/pytest-dev/pytest-rerunfailures/issues/28
+ """
+ if report.when != "call" and report.passed:
+ return
+ res = self.config.hook.pytest_report_teststatus(report=report)
+ code = res[1]
+ if code == "x":
+ longrepr = str(report.longrepr)
+ elif code == "X":
+ longrepr = ""
+ elif report.passed:
+ longrepr = ""
+ elif report.failed:
+ longrepr = str(report.longrepr)
+ elif report.skipped:
+ longrepr = str(report.longrepr[2])
+ elif report.outcome == "rerun":
+ longrepr = str(report.longrepr)
+ else:
+ longrepr = str(report.longrepr)
- self.log_outcome(report, code, longrepr)
+ self.log_outcome(report, code, longrepr)
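
For readers skimming the module diff above: the new ``_should_hard_fail_on_error`` helper lets a failed test be rerun only when at least one ``--only-rerun`` regex matches the crash message; with no regexes configured, every failure stays eligible for rerun. A standalone sketch of that rule (the function name and the messages below are made up for illustration):

    import re

    def hard_fail(only_rerun_patterns, crash_message):
        # no patterns configured -> every failure may be rerun
        if not only_rerun_patterns:
            return False
        # otherwise the failure is final unless some pattern matches the message
        return not any(re.search(p, crash_message) for p in only_rerun_patterns)

    assert hard_fail(["AssertionError"], "ValueError: boom") is True
    assert hard_fail(["AssertionError"], "AssertionError: ERR") is False
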
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/setup.cfg new/pytest-rerunfailures-9.1.1/setup.cfg
--- old/pytest-rerunfailures-9.0/setup.cfg 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/setup.cfg 2020-09-29 08:28:33.859913300 +0200
@@ -1,6 +1,10 @@
[bdist_wheel]
universal = 0
+[check-manifest]
+ignore =
+ .pre-commit-config.yaml
+
[egg_info]
tag_build =
tag_date = 0
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/setup.py new/pytest-rerunfailures-9.1.1/setup.py
--- old/pytest-rerunfailures-9.0/setup.py 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/setup.py 2020-09-29 08:28:33.000000000 +0200
@@ -1,43 +1,39 @@
from setuptools import setup
-with open('README.rst') as readme, open('CHANGES.rst') as changelog:
- long_description = (
- '.. contents::\n\n' +
- readme.read() +
- '\n\n' +
- changelog.read())
+with open("README.rst") as readme, open("CHANGES.rst") as changelog:
+ long_description = ".. contents::\n\n" + readme.read() + "\n\n" +
changelog.read()
-setup(name='pytest-rerunfailures',
- version='9.0',
- description='pytest plugin to re-run tests to eliminate flaky failures',
- long_description=long_description,
- author='Leah Klearman',
- author_email='[email protected]',
- url='https://github.com/pytest-dev/pytest-rerunfailures',
- py_modules=['pytest_rerunfailures'],
- entry_points={'pytest11': ['rerunfailures = pytest_rerunfailures']},
- install_requires=[
- 'setuptools>=40.0',
- 'pytest >= 5.0',
- ],
- license='Mozilla Public License 2.0 (MPL 2.0)',
- keywords='py.test pytest rerun failures flaky',
- zip_safe=False,
- classifiers=[
- 'Development Status :: 5 - Production/Stable',
- 'Framework :: Pytest',
- 'Intended Audience :: Developers',
- 'License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)',
- 'Operating System :: POSIX',
- 'Operating System :: Microsoft :: Windows',
- 'Operating System :: MacOS :: MacOS X',
- 'Topic :: Software Development :: Quality Assurance',
- 'Topic :: Software Development :: Testing',
- 'Topic :: Utilities',
- 'Programming Language :: Python :: 3.5',
- 'Programming Language :: Python :: 3.6',
- 'Programming Language :: Python :: 3.7',
- 'Programming Language :: Python :: 3.8',
- 'Programming Language :: Python :: Implementation :: CPython',
- 'Programming Language :: Python :: Implementation :: PyPy',
- ])
+setup(
+ name="pytest-rerunfailures",
+ version="9.1.1",
+ description="pytest plugin to re-run tests to eliminate flaky failures",
+ long_description=long_description,
+ author="Leah Klearman",
+ author_email="[email protected]",
+ url="https://github.com/pytest-dev/pytest-rerunfailures",
+ py_modules=["pytest_rerunfailures"],
+ entry_points={"pytest11": ["rerunfailures = pytest_rerunfailures"]},
+ install_requires=["setuptools>=40.0", "pytest >= 5.0"],
+ python_requires=">=3.5",
+ license="Mozilla Public License 2.0 (MPL 2.0)",
+ keywords="py.test pytest rerun failures flaky",
+ zip_safe=False,
+ classifiers=[
+ "Development Status :: 5 - Production/Stable",
+ "Framework :: Pytest",
+ "Intended Audience :: Developers",
+ "License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
+ "Operating System :: POSIX",
+ "Operating System :: Microsoft :: Windows",
+ "Operating System :: MacOS :: MacOS X",
+ "Topic :: Software Development :: Quality Assurance",
+ "Topic :: Software Development :: Testing",
+ "Topic :: Utilities",
+ "Programming Language :: Python :: 3.5",
+ "Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: Implementation :: CPython",
+ "Programming Language :: Python :: Implementation :: PyPy",
+ ],
+)
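
The ``entry_points={"pytest11": ...}`` argument kept in the reformatted setup() call above is what makes pytest load the plugin automatically once the package is installed; a quick way to list the plugins advertised that way (an illustrative snippet, not part of the package) is:

    import pkg_resources

    # print every pytest plugin registered via the pytest11 entry point group
    for ep in pkg_resources.iter_entry_points("pytest11"):
        print(ep.name, "->", ep.module_name)
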
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/test_pytest_rerunfailures.py new/pytest-rerunfailures-9.1.1/test_pytest_rerunfailures.py
--- old/pytest-rerunfailures-9.0/test_pytest_rerunfailures.py 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/test_pytest_rerunfailures.py 2020-09-29 08:28:33.000000000 +0200
@@ -1,10 +1,16 @@
-import mock
-import pytest
import random
import time
+from unittest import mock
+
+import pkg_resources
+import pytest
+
+pytest_plugins = "pytester"
-pytest_plugins = 'pytester'
+PYTEST_GTE_61 = pkg_resources.parse_version(
+ pytest.__version__
+) >= pkg_resources.parse_version("6.1")
def temporary_failure(count=1):
@@ -15,229 +21,298 @@
if int(count) <= {0}:
path.write(int(count) + 1)
raise Exception('Failure: {{0}}'.format(count))""".format(
- count)
+ count
+ )
-def assert_outcomes(result, passed=1, skipped=0, failed=0, error=0, xfailed=0,
- xpassed=0, rerun=0):
+def check_outcome_field(outcomes, field_name, expected_value):
+ field_value = outcomes.get(field_name, 0)
+ assert (
+ field_value == expected_value
+ ), "outcomes.{} has unexpected value. Expected '{}' but got '{}'".format(
+ field_name, expected_value, field_value
+ )
+
+
+def assert_outcomes(
+    result, passed=1, skipped=0, failed=0, error=0, xfailed=0, xpassed=0, rerun=0,
+):
outcomes = result.parseoutcomes()
- assert outcomes.get('passed', 0) == passed
- assert outcomes.get('skipped', 0) == skipped
- assert outcomes.get('failed', 0) == failed
- assert outcomes.get('xfailed', 0) == xfailed
- assert outcomes.get('xpassed', 0) == xpassed
- assert outcomes.get('rerun', 0) == rerun
+ check_outcome_field(outcomes, "passed", passed)
+ check_outcome_field(outcomes, "skipped", skipped)
+ check_outcome_field(outcomes, "failed", failed)
+ check_outcome_field(outcomes, "xfailed", xfailed)
+ check_outcome_field(outcomes, "xpassed", xpassed)
+ check_outcome_field(outcomes, "rerun", rerun)
def test_error_when_run_with_pdb(testdir):
- testdir.makepyfile('def test_pass(): pass')
- result = testdir.runpytest('--reruns', '1', '--pdb')
- result.stderr.fnmatch_lines_random(
- 'ERROR: --reruns incompatible with --pdb')
+ testdir.makepyfile("def test_pass(): pass")
+ result = testdir.runpytest("--reruns", "1", "--pdb")
+ result.stderr.fnmatch_lines_random("ERROR: --reruns incompatible with
--pdb")
def test_no_rerun_on_pass(testdir):
- testdir.makepyfile('def test_pass(): pass')
- result = testdir.runpytest('--reruns', '1')
+ testdir.makepyfile("def test_pass(): pass")
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result)
def test_no_rerun_on_skipif_mark(testdir):
reason = str(random.random())
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
- @pytest.mark.skipif(reason='{0}')
+ @pytest.mark.skipif(reason='{}')
def test_skip():
pass
- """.format(reason))
- result = testdir.runpytest('--reruns', '1')
+ """.format(
+ reason
+ )
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=0, skipped=1)
def test_no_rerun_on_skip_call(testdir):
reason = str(random.random())
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
def test_skip():
- pytest.skip('{0}')
- """.format(reason))
- result = testdir.runpytest('--reruns', '1')
+ pytest.skip('{}')
+ """.format(
+ reason
+ )
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=0, skipped=1)
def test_no_rerun_on_xfail_mark(testdir):
- reason = str(random.random())
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
@pytest.mark.xfail()
def test_xfail():
assert False
- """.format(reason))
- result = testdir.runpytest('--reruns', '1')
+ """
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=0, xfailed=1)
def test_no_rerun_on_xfail_call(testdir):
reason = str(random.random())
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
def test_xfail():
- pytest.xfail('{0}')
- """.format(reason))
- result = testdir.runpytest('--reruns', '1')
+ pytest.xfail('{}')
+ """.format(
+ reason
+ )
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=0, xfailed=1)
def test_no_rerun_on_xpass(testdir):
- reason = str(random.random())
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
@pytest.mark.xfail()
def test_xpass():
pass
- """.format(reason))
- result = testdir.runpytest('--reruns', '1')
+ """
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=0, xpassed=1)
def test_rerun_fails_after_consistent_setup_failure(testdir):
- testdir.makepyfile('def test_pass(): pass')
- testdir.makeconftest("""
+ testdir.makepyfile("def test_pass(): pass")
+ testdir.makeconftest(
+ """
def pytest_runtest_setup(item):
- raise Exception('Setup failure')""")
- result = testdir.runpytest('--reruns', '1')
+ raise Exception('Setup failure')"""
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=0, error=1, rerun=1)
def test_rerun_passes_after_temporary_setup_failure(testdir):
- testdir.makepyfile('def test_pass(): pass')
- testdir.makeconftest("""
+ testdir.makepyfile("def test_pass(): pass")
+ testdir.makeconftest(
+ """
def pytest_runtest_setup(item):
- {0}""".format(temporary_failure()))
- result = testdir.runpytest('--reruns', '1', '-r', 'R')
+ {}""".format(
+ temporary_failure()
+ )
+ )
+ result = testdir.runpytest("--reruns", "1", "-r", "R")
assert_outcomes(result, passed=1, rerun=1)
def test_rerun_fails_after_consistent_test_failure(testdir):
- testdir.makepyfile('def test_fail(): assert False')
- result = testdir.runpytest('--reruns', '1')
+ testdir.makepyfile("def test_fail(): assert False")
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=0, failed=1, rerun=1)
def test_rerun_passes_after_temporary_test_failure(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
def test_pass():
- {0}""".format(temporary_failure()))
- result = testdir.runpytest('--reruns', '1', '-r', 'R')
+ {}""".format(
+ temporary_failure()
+ )
+ )
+ result = testdir.runpytest("--reruns", "1", "-r", "R")
assert_outcomes(result, passed=1, rerun=1)
def test_rerun_passes_after_temporary_test_failure_with_flaky_mark(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
@pytest.mark.flaky(reruns=2)
def test_pass():
- {0}""".format(temporary_failure(2)))
- result = testdir.runpytest('-r', 'R')
+ {}""".format(
+ temporary_failure(2)
+ )
+ )
+ result = testdir.runpytest("-r", "R")
assert_outcomes(result, passed=1, rerun=2)
def test_reruns_if_flaky_mark_is_called_without_options(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
@pytest.mark.flaky()
def test_pass():
- {0}""".format(temporary_failure(1)))
- result = testdir.runpytest('-r', 'R')
+ {}""".format(
+ temporary_failure(1)
+ )
+ )
+ result = testdir.runpytest("-r", "R")
assert_outcomes(result, passed=1, rerun=1)
def test_reruns_if_flaky_mark_is_called_with_positional_argument(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
@pytest.mark.flaky(2)
def test_pass():
- {0}""".format(temporary_failure(2)))
- result = testdir.runpytest('-r', 'R')
+ {}""".format(
+ temporary_failure(2)
+ )
+ )
+ result = testdir.runpytest("-r", "R")
assert_outcomes(result, passed=1, rerun=2)
def test_no_extra_test_summary_for_reruns_by_default(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
def test_pass():
- {0}""".format(temporary_failure()))
- result = testdir.runpytest('--reruns', '1')
- assert 'RERUN' not in result.stdout.str()
- assert '1 rerun' in result.stdout.str()
+ {}""".format(
+ temporary_failure()
+ )
+ )
+ result = testdir.runpytest("--reruns", "1")
+ assert "RERUN" not in result.stdout.str()
+ assert "1 rerun" in result.stdout.str()
def test_extra_test_summary_for_reruns(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
def test_pass():
- {0}""".format(temporary_failure()))
- result = testdir.runpytest('--reruns', '1', '-r', 'R')
- result.stdout.fnmatch_lines_random(['RERUN test_*:*'])
- assert '1 rerun' in result.stdout.str()
+ {}""".format(
+ temporary_failure()
+ )
+ )
+ result = testdir.runpytest("--reruns", "1", "-r", "R")
+ result.stdout.fnmatch_lines_random(["RERUN test_*:*"])
+ assert "1 rerun" in result.stdout.str()
def test_verbose(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
def test_pass():
- {0}""".format(temporary_failure()))
- result = testdir.runpytest('--reruns', '1', '-v')
- result.stdout.fnmatch_lines_random(['test_*:* RERUN*'])
- assert '1 rerun' in result.stdout.str()
+ {}""".format(
+ temporary_failure()
+ )
+ )
+ result = testdir.runpytest("--reruns", "1", "-v")
+ result.stdout.fnmatch_lines_random(["test_*:* RERUN*"])
+ assert "1 rerun" in result.stdout.str()
def test_no_rerun_on_class_setup_error_without_reruns(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
class TestFoo(object):
@classmethod
def setup_class(cls):
assert False
def test_pass():
- pass""")
- result = testdir.runpytest('--reruns', '0')
+ pass"""
+ )
+ result = testdir.runpytest("--reruns", "0")
assert_outcomes(result, passed=0, error=1, rerun=0)
def test_rerun_on_class_setup_error_with_reruns(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
class TestFoo(object):
@classmethod
def setup_class(cls):
assert False
def test_pass():
- pass""")
- result = testdir.runpytest('--reruns', '1')
+ pass"""
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=0, error=1, rerun=1)
+@pytest.mark.skipif(PYTEST_GTE_61, reason="--result-log removed in pytest>=6.1")
def test_rerun_with_resultslog(testdir):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
def test_fail():
- assert False""")
+ assert False"""
+ )
- result = testdir.runpytest('--reruns', '2',
- '--result-log', './pytest.log')
+ result = testdir.runpytest("--reruns", "2", "--result-log", "./pytest.log")
assert_outcomes(result, passed=0, failed=1, rerun=2)
[email protected]('delay_time', [-1, 0, 0.0, 1, 2.5])
[email protected]("delay_time", [-1, 0, 0.0, 1, 2.5])
def test_reruns_with_delay(testdir, delay_time):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
def test_fail():
- assert False""")
+ assert False"""
+ )
time.sleep = mock.MagicMock()
- result = testdir.runpytest('--reruns', '3',
- '--reruns-delay', str(delay_time))
+ result = testdir.runpytest("--reruns", "3", "--reruns-delay",
str(delay_time))
if delay_time < 0:
+ result.stdout.fnmatch_lines(
+ "*UserWarning: Delay time between re-runs cannot be < 0. "
+ "Using default value: 0"
+ )
delay_time = 0
time.sleep.assert_called_with(delay_time)
@@ -245,20 +320,28 @@
assert_outcomes(result, passed=0, failed=1, rerun=3)
[email protected]('delay_time', [-1, 0, 0.0, 1, 2.5])
[email protected]("delay_time", [-1, 0, 0.0, 1, 2.5])
def test_reruns_with_delay_marker(testdir, delay_time):
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
@pytest.mark.flaky(reruns=2, reruns_delay={})
def test_fail_two():
- assert False""".format(delay_time))
+ assert False""".format(
+ delay_time
+ )
+ )
time.sleep = mock.MagicMock()
result = testdir.runpytest()
if delay_time < 0:
+ result.stdout.fnmatch_lines(
+ "*UserWarning: Delay time between re-runs cannot be < 0. "
+ "Using default value: 0"
+ )
delay_time = 0
time.sleep.assert_called_with(delay_time)
@@ -270,7 +353,8 @@
"""
Case: setup_class throwing error on the first execution for parametrized
test
"""
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
pass_fixture = False
@@ -285,16 +369,19 @@
assert True
@pytest.mark.parametrize('param', [1, 2, 3])
def test_pass(self, param):
- assert param""")
- result = testdir.runpytest('--reruns', '1')
+ assert param"""
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=3, rerun=1)
def test_rerun_on_class_scope_fixture_with_error_with_reruns(testdir):
"""
-    Case: Class scope fixture throwing error on the first execution for parametrized test
+ Case: Class scope fixture throwing error on the first execution
+ for parametrized test
"""
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
pass_fixture = False
@@ -310,17 +397,19 @@
assert True
@pytest.mark.parametrize('param', [1, 2, 3])
def test_pass(self, setup_fixture, param):
- assert param""")
- result = testdir.runpytest('--reruns', '1')
+ assert param"""
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=3, rerun=1)
def test_rerun_on_module_fixture_with_reruns(testdir):
"""
-    Case: Module scope fixture is not re-executed when class scope fixture throwing error on the first execution
-    for parametrized test
+    Case: Module scope fixture is not re-executed when class scope fixture throwing
+ error on the first execution for parametrized test
"""
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
pass_fixture = False
@@ -341,17 +430,19 @@
assert True
def test_pass_2(self, module_fixture, setup_fixture):
- assert True""")
- result = testdir.runpytest('--reruns', '1')
+ assert True"""
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=2, rerun=1)
def test_rerun_on_session_fixture_with_reruns(testdir):
"""
-    Case: Module scope fixture is not re-executed when class scope fixture throwing error on the first execution
- for parametrized test
+ Case: Module scope fixture is not re-executed when class scope fixture
+ throwing error on the first execution for parametrized test
"""
- testdir.makepyfile("""
+ testdir.makepyfile(
+ """
import pytest
pass_fixture = False
@@ -372,38 +463,80 @@
def test_pass_1(self, session_fixture, setup_fixture):
assert True
def test_pass_2(self, session_fixture, setup_fixture):
- assert True""")
- result = testdir.runpytest('--reruns', '1')
+ assert True"""
+ )
+ result = testdir.runpytest("--reruns", "1")
assert_outcomes(result, passed=2, rerun=1)
def test_execution_count_exposed(testdir):
- testdir.makepyfile('def test_pass(): assert True')
- testdir.makeconftest("""
+ testdir.makepyfile("def test_pass(): assert True")
+ testdir.makeconftest(
+ """
def pytest_runtest_teardown(item):
- assert item.execution_count == 3""")
- result = testdir.runpytest('--reruns', '2')
+ assert item.execution_count == 3"""
+ )
+ result = testdir.runpytest("--reruns", "2")
assert_outcomes(result, passed=3, rerun=2)
def test_rerun_report(testdir):
- testdir.makepyfile('def test_pass(): assert False')
- testdir.makeconftest("""
+ testdir.makepyfile("def test_pass(): assert False")
+ testdir.makeconftest(
+ """
def pytest_runtest_logreport(report):
assert hasattr(report, 'rerun')
assert isinstance(report.rerun, int)
assert report.rerun <= 2
- """)
- result = testdir.runpytest('--reruns', '2')
+ """
+ )
+ result = testdir.runpytest("--reruns", "2")
assert_outcomes(result, failed=1, rerun=2, passed=0)
def test_pytest_runtest_logfinish_is_called(testdir):
hook_message = "Message from pytest_runtest_logfinish hook"
- testdir.makepyfile('def test_pass(): pass')
- testdir.makeconftest(r"""
+ testdir.makepyfile("def test_pass(): pass")
+ testdir.makeconftest(
+ r"""
def pytest_runtest_logfinish(nodeid, location):
- print("\n{0}\n")
- """.format(hook_message))
- result = testdir.runpytest('--reruns', '1', '-s')
+ print("\n{}\n")
+ """.format(
+ hook_message
+ )
+ )
+ result = testdir.runpytest("--reruns", "1", "-s")
result.stdout.fnmatch_lines(hook_message)
+
+
+@pytest.mark.parametrize(
+ "only_rerun_texts, should_rerun",
+ [
+ (["AssertionError"], True),
+ (["Assertion*"], True),
+ (["Assertion"], True),
+ (["ValueError"], False),
+ ([""], True),
+ (["AssertionError: "], True),
+ (["AssertionError: ERR"], True),
+ (["ERR"], True),
+ (["AssertionError,ValueError"], False),
+ (["AssertionError ValueError"], False),
+ (["AssertionError", "ValueError"], True),
+ ],
+)
+def test_only_rerun_flag(testdir, only_rerun_texts, should_rerun):
+ testdir.makepyfile('def test_only_rerun(): raise AssertionError("ERR")')
+
+ num_failed = 1
+ num_passed = 0
+ num_reruns = 1
+ num_reruns_actual = num_reruns if should_rerun else 0
+
+ pytest_args = ["--reruns", str(num_reruns)]
+ for only_rerun_text in only_rerun_texts:
+ pytest_args.extend(["--only-rerun", only_rerun_text])
+ result = testdir.runpytest(*pytest_args)
+ assert_outcomes(
+ result, passed=num_passed, failed=num_failed, rerun=num_reruns_actual
+ )
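
The new ``test_only_rerun_flag`` above drives rerun filtering from the command line; the same rerun behaviour can also be requested per test with the ``flaky`` marker that the plugin registers. A minimal usage sketch (test name and values chosen for illustration):

    import pytest

    @pytest.mark.flaky(reruns=2, reruns_delay=1)
    def test_sometimes_flaky():
        # rerun up to 2 times, waiting 1 second between attempts
        assert True
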
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/pytest-rerunfailures-9.0/tox.ini new/pytest-rerunfailures-9.1.1/tox.ini
--- old/pytest-rerunfailures-9.0/tox.ini 2020-03-18 09:18:08.000000000 +0100
+++ new/pytest-rerunfailures-9.1.1/tox.ini 2020-09-29 08:28:33.000000000 +0200
@@ -1,17 +1,32 @@
# Tox (https://tox.readthedocs.io/en/latest/) is a tool for running tests
# in multiple virtualenvs. This configuration file will run the
-# test suite on all supported python versions. To use it, "pip install tox"
+# test suite on all supported Python versions. To use it, "pip install tox"
# and then run "tox" from this directory.
+[flake8]
+# NOTE: This is kept in line with Black
+# See: https://black.readthedocs.io/en/stable/the_black_code_style.html#line-length
+max-line-length = 88
+
[tox]
-envlist = py{35,36,37,38,py3}-pytest{50,51,52,53,54},
+envlist =
+ linting
+ py{35,36,37,38,py3}-pytest{50,51,52,53,54,60,61}
+minversion = 3.17.1
[testenv]
-commands = py.test test_pytest_rerunfailures.py {posargs}
+commands = pytest test_pytest_rerunfailures.py {posargs}
deps =
- mock
pytest50: pytest==5.0.*
pytest51: pytest==5.1.*
pytest52: pytest==5.2.*
pytest53: pytest==5.3.*
pytest54: pytest==5.4.*
+ pytest60: pytest==6.0.*
+ pytest61: pytest==6.1.*
+
+[testenv:linting]
+basepython = python3
+commands = pre-commit run --all-files --show-diff-on-failure {posargs:}
+deps = pre-commit>=1.11.0
+skip_install = True