New submission from Zachary Ware: The title can barely be called accurate; the description of the problem isn't easy to condense to title length. Here's the issue:
$ cat subtest_test.py
import os
import unittest


class TestClass(unittest.TestCase):

    def test_subTest(self):
        for t in map(int, os.environ.get('tests', '1')):
            with self.subTest(t):
                if t > 1:
                    raise unittest.SkipTest('skipped')
                self.assertTrue(t)


if __name__ == '__main__':
    unittest.main()

$ ./python.exe subtest_test.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

$ tests=01 ./python.exe subtest_test.py

======================================================================
FAIL: test_subTest (__main__.TestClass) (<subtest>)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)

$ tests=012 ./python.exe subtest_test.py
s
======================================================================
FAIL: test_subTest (__main__.TestClass) (<subtest>)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1, skipped=1)

Note that on the first run, the short summary is ".", as expected.  The second is "" when one of the subTests fails, but the third is "s" when one subtest fails and another is skipped.

This also extends to verbose mode:

$ ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.001s

OK

$ tests=01 ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ...
======================================================================
FAIL: test_subTest (__main__.TestClass) (<subtest>)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)

$ tests=012 ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ... skipped 'skipped'
======================================================================
FAIL: test_subTest (__main__.TestClass) (<subtest>)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1, skipped=1)

Note that the first run shows "... ok", the second shows only "... ", and the third shows "... skipped 'skipped'".

I'm unsure what the solution should be.  There should at least be some indication that the test finished, but should mixed results be reported as 'm' ("mixed results" in verbose mode), should failure/error take precedence, or should every different result be represented?
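For context on where the reporting falls through: TextTestResult writes its status character (or the verbose "ok"/"FAIL"/"skipped" text) from addSuccess/addFailure/addError/addSkip, while subtest outcomes are recorded through TestResult.addSubTest, which TextTestResult does not override, so a test whose only problems are subtest failures never gets a status written at all.  Below is a minimal sketch of a user-side workaround; the class name SubTestAwareResult and the choice of indicator characters are illustrative only, not anything unittest itself defines.

import unittest


class SubTestAwareResult(unittest.TextTestResult):
    """Hypothetical result class that reports each subtest outcome."""

    def addSubTest(self, test, subtest, err):
        # TestResult.addSubTest records the failure/error (err is an
        # exc_info tuple) or does nothing when err is None (success).
        super().addSubTest(test, subtest, err)
        if self.showAll:
            # Verbose mode: write an outcome per subtest.
            self.stream.writeln('ok' if err is None else 'FAIL')
        elif self.dots:
            # Dots mode: one character per subtest.
            self.stream.write('.' if err is None else 'F')
            self.stream.flush()


if __name__ == '__main__':
    runner = unittest.TextTestRunner(resultclass=SubTestAwareResult)
    unittest.main(testRunner=runner)

This only papers over the reporting from the user's side; the question above of what the built-in TextTestResult should print for mixed results still stands.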
----------
components: Library (Lib)
messages: 256580
nosy: ezio.melotti, michael.foord, pitrou, rbcollins, zach.ware
priority: normal
severity: normal
stage: test needed
status: open
title: unittest subTest failure causes result to be omitted from listing
type: behavior
versions: Python 3.5, Python 3.6

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue25894>
_______________________________________