[issue25894] unittest subTest failure causes result to be omitted from listing

2021-09-10 Thread Łukasz Langa

Łukasz Langa  added the comment:

> Any suggestions for the format of output? Currently PR 28082 formats lines 
> for subtest skipping, failure or error with a 2-space indentation. Lines for 
> skipping, failure or error in tearDown() or functions registered with 
> addCleanup() do not differ from a line for skipping, failure or error in the 
> test method itself.

I'm fine with that. Ultimately I don't think differentiating subtest status 
from method status is that important.

> I am not sure about backporting this change.

Since we're too late for 3.10.0, and this is not a trivially small change (it 
changes test output), I think we shouldn't backport it. There are editor tools 
that parse this output. I'm afraid backporting is too risky since we haven't 
given the community enough time to test for the output differences.

I'm marking this as resolved. Thanks for your patch, Serhiy! If you feel 
strongly about backporting, we'd have to reopen and mark this as a release 
blocker.

--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed

[issue25894] unittest subTest failure causes result to be omitted from listing

2021-09-10 Thread Łukasz Langa

Łukasz Langa  added the comment:


New changeset f0f29f328d8b4568e8c0d4c55c7d120d96f80911 by Serhiy Storchaka in 
branch 'main':
bpo-25894: Always report skipped and failed subtests separately (GH-28082)
https://github.com/python/cpython/commit/f0f29f328d8b4568e8c0d4c55c7d120d96f80911


--
nosy: +lukasz.langa

[issue25894] unittest subTest failure causes result to be omitted from listing

2021-09-05 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

Any suggestions for the format of output? Currently PR 28082 formats lines for 
subtest skipping, failure or error with a 2-space indentation. Lines for 
skipping, failure or error in tearDown() or functions registered with 
addCleanup() do not differ from a line for skipping, failure or error in the 
test method itself.

I am not sure about backporting this change. On one hand, it fixes an old flaw 
in the unittest output. On the other hand, the change affects not only subtests, 
and it can confuse programs which parse the unittest output, because test 
descriptions can now occur multiple times.

--
versions: +Python 3.11 -Python 3.5, Python 3.6

[issue25894] unittest subTest failure causes result to be omitted from listing

2021-08-31 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

PR 28082 is a draft that implements this idea. Skipped and failed (but not 
successfully passed) subtests are now reported separately, either as a 
character ('s', 'F' or 'E') or as a line ("skipped", "FAIL", "ERROR"). The 
description of the subtest is included in the line. For example:

$ tests=.sFE ./python test_issue25894.py 
sFE
======================================================================
ERROR: test_subTest (__main__.TestClass) [3] (t='E')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/test_issue25894.py", line 15, in test_subTest
    raise Exception('error')
    ^^^^^^^^^^^^^^^^^^^^^^^^
Exception: error

======================================================================
FAIL: test_subTest (__main__.TestClass) [2] (t='F')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/test_issue25894.py", line 13, in test_subTest
    self.fail('failed')
    ^^^^^^^^^^^^^^^^^^^
AssertionError: failed

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1, errors=1, skipped=1)

$ tests=.sFE ./python test_issue25894.py -v
test_subTest (__main__.TestClass) ... 
  test_subTest (__main__.TestClass) [1] (t='s') ... skipped 'skipped'
  test_subTest (__main__.TestClass) [2] (t='F') ... FAIL
  test_subTest (__main__.TestClass) [3] (t='E') ... ERROR

======================================================================
ERROR: test_subTest (__main__.TestClass) [3] (t='E')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/test_issue25894.py", line 15, in test_subTest
    raise Exception('error')
    ^^^^^^^^^^^^^^^^^^^^^^^^
Exception: error

======================================================================
FAIL: test_subTest (__main__.TestClass) [2] (t='F')
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/serhiy/py/cpython/test_issue25894.py", line 13, in test_subTest
    self.fail('failed')
    ^^^^^^^^^^^^^^^^^^^
AssertionError: failed

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1, errors=1, skipped=1)
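
The test file itself isn't attached here; a plausible reconstruction of 
test_issue25894.py, inferred from the output and tracebacks above (the names 
and exact line numbers are guesses), is:

import os
import unittest

class TestClass(unittest.TestCase):

    def test_subTest(self):
        # Hypothetical: each character of $tests drives one subtest;
        # '.' passes, 's' skips, 'F' fails, 'E' errors.
        for i, t in enumerate(os.environ.get('tests', '.')):
            with self.subTest(i, t=t):
                if t == 's':
                    raise unittest.SkipTest('skipped')
                elif t == 'F':
                    self.fail('failed')
                elif t == 'E':
                    raise Exception('error')

if __name__ == '__main__':
    unittest.main()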

As a side effect, the test description is also repeated for every error in the 
test cleanup code (in tearDown() and doCleanups()).

Similar changes should also be made in RegressionTestResult. If issue45057 is 
applied first, they will be much simpler.

Issue29152 may be related. If addError() and addFailure() are called from 
addSubTest(), PR 28082 will need to be rewritten.

--

[issue25894] unittest subTest failure causes result to be omitted from listing

2021-08-31 Thread Serhiy Storchaka


Change by Serhiy Storchaka :


--
keywords: +patch
pull_requests: +26525
stage: test needed -> patch review
pull_request: https://github.com/python/cpython/pull/28082

[issue25894] unittest subTest failure causes result to be omitted from listing

2021-08-30 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

Weren't subtests proposed as a more flexible replacement for parametrized 
tests? I think that every subtest should be counted as a separate test case: in 
verbose mode it should output a separate line, and in non-verbose mode it should 
output a separate character.
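
For illustration, the parametrization pattern in question, with one test 
method standing in for several conceptually separate test cases (a minimal 
sketch; the names are made up):

import unittest

class TestSquare(unittest.TestCase):

    def test_square(self):
        # Four conceptual test cases in one method; each iteration
        # should arguably be counted and reported like the four
        # separate test methods it replaces.
        for n, expected in [(0, 0), (1, 1), (2, 4), (3, 9)]:
            with self.subTest(n=n):
                self.assertEqual(n * n, expected)

if __name__ == '__main__':
    unittest.main()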

--
nosy: +serhiy.storchaka

[issue25894] unittest subTest failure causes result to be omitted from listing

2016-03-15 Thread Robert Collins

Robert Collins added the comment:

The basic model is this:
 - a test can have a single outcome [yes, the API is ambiguous, but there it is]
 - subtests let you identify multiple variations of a single test (note the id 
tweak etc.), and those variations *may* be reported differently

We certainly must not report the test as a whole as passing if any subtest did 
not pass.

Long term I want to remove the error/failure partitioning of exceptions; it's 
not actually useful.

The summary for the test, when subtests are used, should probably enumerate the 
states:

test_foo (3 passed, 2 skipped, 1 failure)

in much the same way the run as a whole is enumerated.
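
unittest does not produce such a summary itself; a custom result class along 
these lines could gather the per-test tallies (a sketch only, not what any 
patch here implements; skipped subtests arrive via addSkip() and are omitted 
for brevity, and successful subtests are only reported to addSubTest() on 
recent Pythons):

import unittest
from collections import Counter

class TallyingResult(unittest.TextTestResult):
    """Tally subtest outcomes per test id so that a runner could
    print e.g. 'test_foo (3 passed, 2 skipped, 1 failure)'."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.tallies = {}

    def addSubTest(self, test, subtest, outcome):
        # outcome is None on success, or an exc_info tuple otherwise.
        super().addSubTest(test, subtest, outcome)
        tally = self.tallies.setdefault(test.id(), Counter())
        if outcome is None:
            tally['passed'] += 1
        elif issubclass(outcome[0], test.failureException):
            tally['failures'] += 1
        else:
            tally['errors'] += 1

# Usage:
# unittest.main(testRunner=unittest.TextTestRunner(resultclass=TallyingResult))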

--

[issue25894] unittest subTest failure causes result to be omitted from listing

2015-12-28 Thread Ezio Melotti

Ezio Melotti added the comment:

> My gut feeling is that any subtest failure should be counted as the
> whole test failing. I’m not sure how the failure vs error cases should
> be handled. Maybe error should trump failure.

I think the priority should be error > failure > skip > pass.
IOW, pass should never be reported if any of the other three happen, skip should 
be reported only if there are no errors/failures, and errors should trump 
failures.
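
That precedence is a simple total order; collapsing a test's subtest outcomes 
into one reported status could look like this toy sketch (illustrative only, 
not an existing unittest API):

# Priority order: earlier entries trump later ones.
PRIORITY = ('error', 'failure', 'skip', 'pass')

def overall_status(outcomes):
    """Return the highest-priority status among subtest outcomes."""
    return min(outcomes, key=PRIORITY.index, default='pass')

assert overall_status(['pass', 'skip', 'failure']) == 'failure'
assert overall_status(['skip', 'pass']) == 'skip'
assert overall_status([]) == 'pass'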

--

[issue25894] unittest subTest failure causes result to be omitted from listing

2015-12-17 Thread Zachary Ware

Zachary Ware added the comment:

Martin Panter added the comment:
> Okay, so you have a test with subtests. You have presented three cases:
>
> 1. Single subtest which passes. No problem I assume.

Or several subtests which pass.  No problems.

> 2. Two subtests: 1st fails, 2nd passes. This is how subtests are normally 
> used, so I guess there is no problem. Is that right?

If any of multiple subtests fail, there is no indication in the
"summary line" (the line that is usually "..", with a dot for each
successful test).  When a regular test fails, an F (or an E, if the
raised exception was anything but self.failureException) is added to
the line; when any subtest fails, nothing is added.  If you have 10
test methods that use subtests, and a subtest in each method fails,
your summary line will be blank.  In verbose mode, you'd get
"test_one ... test_two ... test_three ... ..." (note the lack of
newlines) instead of the expected "test_one ...
FAILURE\ntest_two ... FAILURE\ntest_three ... FAILURE\n..." (note the
newlines).

> 3. After two subtests have already run (one of which failed), SkipTest is 
> raised. I guess you want the test results to be reported better in this case.
>
> What is the use case? Why not skip the test before any subtests are started?

Only the subtest is skipped (which should be valid, or documented as
not valid), and the order of the subtests doesn't matter:

$ tests=210 ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ... skipped 'skipped'

======================================================================
FAIL: test_subTest (__main__.TestClass) ()
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 14, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1, skipped=1)

But the summary makes it seem as though the entire test was skipped.

Hopefully this makes it a bit clearer :)

--

[issue25894] unittest subTest failure causes result to be omitted from listing

2015-12-17 Thread R. David Murray

R. David Murray added the comment:

I believe this was discussed at the time subTest was added and deemed an 
acceptable tradeoff for a simpler implementation.  I'm not sure it is, but I'm 
not prepared to write code to fix it :)  I'm bothered every time I see this, 
but I have to admit that the tracebacks are the most important feedback and you 
do get those.

--
nosy: +r.david.murray

[issue25894] unittest subTest failure causes result to be omitted from listing

2015-12-17 Thread Martin Panter

Martin Panter added the comment:

Yes, now I understand. If a subtest fails, there is no status update (not even a 
newline in verbose mode), and each subtest skip triggers a separate status 
update.

My gut feeling is that any subtest failure should be counted as the whole test 
failing. I’m not sure how the failure vs error cases should be handled. Maybe 
error should trump failure.

Judging by , Antoine intended for 
SkipTest to skip subtests. But I’m not sure that should be reported as the whole 
test being skipped.

--

[issue25894] unittest subTest failure causes result to be omitted from listing

2015-12-16 Thread Zachary Ware

New submission from Zachary Ware:

The title can barely be called accurate; the description of the problem isn't 
easy to condense to title length.  Here's the issue:

$ cat subtest_test.py 
import os
import unittest

class TestClass(unittest.TestCase):

    def test_subTest(self):
        for t in map(int, os.environ.get('tests', '1')):
            with self.subTest(t):
                if t > 1:
                    raise unittest.SkipTest('skipped')
                self.assertTrue(t)

if __name__ == '__main__':
    unittest.main()
$ ./python.exe subtest_test.py 
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
$ tests=01 ./python.exe subtest_test.py 

======================================================================
FAIL: test_subTest (__main__.TestClass) ()
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)
$ tests=012 ./python.exe subtest_test.py 
s
======================================================================
FAIL: test_subTest (__main__.TestClass) ()
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1, skipped=1)



Note that on the first run, the short summary is ".", as expected.  The second 
is "", when one of the subtests fails; but the third is "s", when one subtest 
fails and another is skipped.  This also extends to verbose mode:

$ ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.001s

OK
$ tests=01 ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ... 
======================================================================
FAIL: test_subTest (__main__.TestClass) ()
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)
$ tests=012 ./python.exe subtest_test.py -v
test_subTest (__main__.TestClass) ... skipped 'skipped'

======================================================================
FAIL: test_subTest (__main__.TestClass) ()
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtest_test.py", line 12, in test_subTest
    self.assertTrue(t)
AssertionError: 0 is not true

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1, skipped=1)


Note that the first run shows "... ok", the second "... ", and the third "... 
skipped 'skipped'".


I'm unsure what the solution should be.  There should at least be some 
indication that the test finished, but should mixed results be reported as 'm' 
("mixed results" in verbose mode), or should failure/error take precedence, or 
should every different result be represented?

--
components: Library (Lib)
messages: 256580
nosy: ezio.melotti, michael.foord, pitrou, rbcollins, zach.ware
priority: normal
severity: normal
stage: test needed
status: open
title: unittest subTest failure causes result to be omitted from listing
type: behavior
versions: Python 3.5, Python 3.6

[issue25894] unittest subTest failure causes result to be omitted from listing

2015-12-16 Thread Martin Panter

Martin Panter added the comment:

Okay, so you have a test with subtests. You have presented three cases:

1. Single subtest which passes. No problem I assume.

2. Two subtests: 1st fails, 2nd passes. This is how subtests are normally used, 
so I guess there is no problem. Is that right?

3. After two subtests have already run (one of which failed), SkipTest is 
raised. I guess you want the test results to be reported better in this case.

What is the use case? Why not skip the test before any subtests are started?
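
For concreteness, the two shapes of skipping being contrasted might look like 
this (a sketch; the skip reason and values are made up):

import unittest

class TestSkips(unittest.TestCase):

    def test_skip_up_front(self):
        # Whole-test skip before any subtests start: the single
        # "skipped" status is accurate here.
        self.skipTest('not applicable')

    def test_skip_inside_subtest(self):
        # Only one variation is skipped; the others still run, so a
        # single whole-test status cannot describe the outcome.
        for t in (0, 1, 2):
            with self.subTest(t=t):
                if t == 2:
                    raise unittest.SkipTest('skipped')
                self.assertTrue(t)  # fails for t == 0

if __name__ == '__main__':
    unittest.main()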

--
nosy: +martin.panter
