Barry A. Warsaw added the comment:
We just discovered that this change breaks testtools. I will file a new bug on
that.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue16997
___
Barry A. Warsaw added the comment:
See issue #20687
___
Python-bugs-list mailing list
Roundup Robot added the comment:
New changeset 5c09e1c57200 by Antoine Pitrou in branch 'default':
Issue #16997: unittest.TestCase now provides a subTest() context manager to
procedurally generate, in an easy way, small test instances.
http://hg.python.org/cpython/rev/5c09e1c57200
Antoine Pitrou added the comment:
Finally committed :) Thanks everyone for the reviews and suggestions.
--
resolution: - fixed
stage: patch review - committed/rejected
status: open - closed
Changes by STINNER Victor victor.stin...@gmail.com:
--
nosy: +haypo
Antoine Pitrou added the comment:
Updated patch simplifying the expectedFailure implementation, as suggested by Michael and Nick.
(admire how test_socket was importing a private exception class!)
--
Added file: http://bugs.python.org/file29428/subtests6.patch
Nick Coghlan added the comment:
I think I have figured out what bothers me about the expectedFailure changes, and it actually relates to how expectedFailure was implemented in the first place: I had previously assumed that decorator was an *annotating* decorator -
that it set an attribute on
Michael Foord added the comment:
Getting rid of the thread local would be an improvement, and the change to how expected failures are handled sounds good too.
Antoine Pitrou added the comment:
However, I'm wondering if it might still be possible to avoid the need for a thread-local context to handle the combination of expected failures and subtests when we have access to the test case, by adding the annotation that I expected to be there in the
Michael Foord added the comment:
That's a use case that I'm not very *interested* in supporting personally -
however it may well be a use case that was designed in and that others have a
need for (I didn't implement expectedFailure support).
I think expectedFailure should be used sparingly
Nick Coghlan added the comment:
The docs are fairly explicit about the intended use case: Mark the test as an
expected failure. If the test fails when run, the test is not counted as a
failure. (from
http://docs.python.org/3/library/unittest#unittest.expectedFailure)
Nothing there about
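For reference, the documented use case quoted above amounts to no more than this (a minimal sketch; the class and test names are illustrative):

```python
import unittest

class ExpectedFailureExample(unittest.TestCase):
    @unittest.expectedFailure
    def test_known_bug(self):
        # This failure is recorded as an "expected failure",
        # not counted as a regular failure of the run.
        self.assertEqual(1, 0)

result = unittest.TestResult()
ExpectedFailureExample("test_known_bug").run(result)
```

Nothing in that contract says anything about how the decorator must interact with test results internally, which is the point under discussion.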
Michael Foord added the comment:
So, if it's not documented behaviour I think it's fine to lose it.
Changes by Barry A. Warsaw ba...@python.org:
--
nosy: +barry
Changes by Terry J. Reedy tjre...@udel.edu:
--
nosy: +terry.reedy
Antoine Pitrou added the comment:
That means there's a part of Antoine's patch I disagree with: the
change to eliminate the derived overall result attached to the
aggregate test.
The patch doesn't eliminate it; there are even tests for it.
(see the various call order tests)
The complexity
Nick Coghlan added the comment:
My day job these days is to work on the Beaker test system
(http://beaker-project.org).
I realised today that it actually includes a direct parallel to Antoine's
proposed subtest concept: whereas in unittest, each test currently has exactly
one result, in
holger krekel added the comment:
On Sun, Feb 10, 2013 at 12:41 PM, Antoine Pitrou rep...@bugs.python.org wrote:
Antoine Pitrou added the comment:
Please don't commit. I think we still need a discussion as to whether
subtests or parameterized tests are a better approach. I certainly
don't
holger krekel added the comment:
On Sun, Feb 10, 2013 at 12:43 PM, Nick Coghlan rep...@bugs.python.org wrote:
Nick Coghlan added the comment:
You can use subtests to build parameterized tests, you can't use
parameterized tests to build subtests.
I doubt you can implement parametrized tests
Antoine Pitrou added the comment:
what if there are 500 subtests in a loop and you don't want 500 failures to be
registered for that test case?
Parameterized tests have the same issue. In this case you simply don't use
subtests
or test cases. On the other hand, the issue doesn't exist in most
Chris Jerdonek added the comment:
what if there are 500 subtests in a loop and you don't want 500 failures to
be
registered for that test case?
Parameterized tests have the same issue. In this case you simply don't use
subtests
or test cases.
Right, but then you lose out on both of the
Changes by Brett Cannon br...@python.org:
--
nosy: -brett.cannon
Changes by Yaroslav Halchenko yarikop...@gmail.com:
--
nosy: -Yaroslav.Halchenko
Andrew Bennetts added the comment:
googletest (an xUnit style C++ test framework) has an interesting feature: in
addition to assertions like ASSERT_EQ(x, y) that stop the test, it has
EXPECT_EQ(x, y) etc that will cause the test to fail without causing it to
stop. I think this decoupling of
Antoine Pitrou added the comment:
Any other comments on this? If not, I would like to commit soon.
Michael Foord added the comment:
Please don't commit. I think we still need a discussion as to whether subtests
or parameterized tests are a better approach. I certainly don't think we need
both, and there are a lot of people asking for parameterized tests. I also
haven't had a chance to look
Antoine Pitrou added the comment:
Please don't commit. I think we still need a discussion as to whether
subtests or parameterized tests are a better approach. I certainly
don't think we need both, and there are a lot of people asking for
parameterized tests.
I think they don't cater to the
Nick Coghlan added the comment:
You can use subtests to build parameterized tests, you can't use
parameterized tests to build subtests. The standard library can also
be converted to using subtests *far* more readily than it could be
converted to parameterized tests. There's also the fact that
Michael Foord added the comment:
Subtests break the current unittest API of suite.countTests(), and I fear they
will also break tools that use the existing test result API to generate JUnit
XML for continuous integration.
I would like to add a parameterized test mechanism to unittest - but
Michael Foord added the comment:
However, I think you're making a mistake by seeing them as
*competing* APIs, rather than seeing subtests as a superior
implementation strategy for the possible later introduction of a
higher level parameterized tests API.
Parameterized tests are done at test
Antoine Pitrou added the comment:
Subtests break the current unittest api of suite.countTests() and I
fear they will also break tools that use the existing test result api
to generate junit xml for continuous integration.
It depends how you define countTests(). sub-tests, as the name
Michael Foord added the comment:
A comment from lifeless on IRC (Robert Collins):
[12:15:46] <lifeless> please consider automated analysis. How can someone
tell which test actually failed?
[12:15:55] <lifeless> How can they run just that test in future?
Michael Foord added the comment:
My concern is that this re-uses the existing TestResult.add* methods in a
different way (including calling addError multiple times). This can break
existing tools.
A fix suggested by lifeless on IRC: a subtest failure / success / exception
calls the following
Changes by Florent Xicluna florent.xicl...@gmail.com:
--
nosy: +flox
Michael Foord added the comment:
And on the superior implementation strategy, both nose and py.test used to
have runtime test generation and both have deprecated them and moved to
collection time parameterization. (But I guess we know better.)
You don't need PEP 422 for parameterization. The
R. David Murray added the comment:
I don't really have strong feelings about this, but I will just note as a data
point that I implemented parameterized tests for the email package, and have no
interest myself in subtests. This is for exactly the collection time vs
runtime reason that
Antoine Pitrou added the comment:
Here is a patch implementing Michael's and lifeless' proposed strategy.
--
Added file: http://bugs.python.org/file29029/subtests5.patch
Chris Jerdonek added the comment:
I'm still opposed to exposing these features only together. Annotating the
failure message with parametrization data is useful in its own right, but what
if there are 500 subtests in a loop and you don't want 500 failures to be
registered for that test case?
Chris Jerdonek added the comment:
It seems like the last patch (subtests5.patch) dropped the parameter data from
the failure output as described in the docs. For example, the example in the
docs yields the following:
FAIL: test_even (__main__.NumbersTest)
instead of the documented:
Changes by Brian Curtin br...@python.org:
--
nosy: -brian.curtin
Antoine Pitrou added the comment:
It seems like the last patch (subtests5.patch) dropped the parameter
data from the failure output as described in the docs. For example,
the example in the docs yields the following:
FAIL: test_even (__main__.NumbersTest)
Weird, since there are unit
Antoine Pitrou added the comment:
This new patch adds some documentation.
--
Added file: http://bugs.python.org/file28942/subtests4.patch
Michael Foord added the comment:
I am concerned that this feature changes the TestResult API in a backwards
incompatible way. There are (quite a few) custom TestResult objects that just
implement the API and don't inherit from TestResult. I'd like to try this new
code with (for example) the
Michael Foord added the comment:
Note, some brief discussion on the testing in python mailing list:
http://lists.idyll.org/pipermail/testing-in-python/2013-January/005356.html
Antoine Pitrou added the comment:
I am concerned that this feature changes the TestResult API in a
backwards incompatible way. There are (quite a few) custom TestResult
objects that just implement the API and don't inherit from TestResult.
I'd like to try this new code with (for example) the
Chris Jerdonek added the comment:
I am concerned that this feature changes the TestResult API in a backwards
incompatible way.
My suggestion to add the original TestCase object to TestResult.errors, etc.
instead and add the extra failure data to the longDescription would address
this
Antoine Pitrou added the comment:
The current API doesn't seem like a good building block because it
bundles orthogonal features (i.e. to add loop failure data to a block
of asserts you have to use the continuance feature). Why not expose
*those* as the building blocks? The API can be
Chris Jerdonek added the comment:
I've already replied to all this.
You didn't respond to the idea of exposing both features separately after
saying you didn't understand what I meant and saying that they were pointless
and didn't make sense. So I explained and also proposed a specific API
Antoine Pitrou added the comment:
You didn't respond to the idea of exposing both features separately
after saying you didn't understand what I meant and saying that they
were pointless and didn't make sense. So I explained and also
proposed a specific API to make the suggestion clearer and
Nick Coghlan added the comment:
Right. I have *heaps* of tests that would be very easy to migrate to
Antoine's subtest API. A separate addMessage API could conceivably be
helpful for debugging, but it's not the obvious improvement to my existing
looping tests that subtests would be.
Nick Coghlan added the comment:
I like the idea of the subTest API being something like:
def subTest(self, _id, **params):
However, I'd still factor that in to the reported test ID, not into the
exception message.
Nick Coghlan added the comment:
Right, if you want independently addressable/runnable, then you're back to
parameterised tests as discussed in issue7897.
What I like about Antoine's subtest idea is that I think it can be used to
split the execution/reporting part of parameterised testing from
Chris Jerdonek added the comment:
1. Easily append data to failure messages coming from a block of asserts
2. Continue running a test case after a failure from a block of asserts
Both of these seem independently useful and more generally applicable,
I don't understand what you mean. 1 is
Antoine Pitrou added the comment:
If the API is more like self.assert*()'s msg parameter which appends
data to the usual exception, then it will be the same as what people
are already used to.
It might be a good idea to allow both this and the arbitrary parameter kwargs,
then.
I'm not
Chris Jerdonek added the comment:
After thinking about this more, it seems this lets you do two orthogonal things:
1. Easily append data to failure messages coming from a block of asserts
2. Continue running a test case after a failure from a block of asserts
Both of these seem independently
Antoine Pitrou added the comment:
1. Easily append data to failure messages coming from a block of asserts
2. Continue running a test case after a failure from a block of asserts
Both of these seem independently useful and more generally applicable,
I don't understand what you mean. 1 is
Changes by Arfrever Frehtes Taifersar Arahesis arfrever@gmail.com:
--
nosy: +Arfrever
Nick Coghlan added the comment:
I think we're going to have to separate out two counts in the metrics - the
total number of tests (the current counts), and the total number of subtests
(the executed subtest blocks). (Other parameterisation solutions can then
choose whether to treat each pair
Antoine Pitrou added the comment:
The way expectedFailure is currently implemented (it's a decorator which knows
nothing about test cases and test results, it only expects an exception to be
raised by its callee), it's gonna be difficult to make it participate with
subtests without breaking
Changes by Brett Cannon br...@python.org:
--
nosy: +brett.cannon
Antoine Pitrou added the comment:
I think we're going to have to separate out two counts in the metrics
- the total number of tests (the current counts), and the total number
of subtests (the executed subtest blocks).
This is a reasonable proposal. On the other hand, it was already the
case
Antoine Pitrou added the comment:
New patch attached:
- makes _SubTest a TestCase subclass
- clarifies test skipping for subtests (skipTest() only skips the subtest)
- makes expected failures work as expected by resorting to a thread-local
storage hack
--
Added file:
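The clarified skipping behaviour can be sketched as follows (illustrative names; as described above, skipTest() inside a subtest skips only that subtest):

```python
import unittest

class SkipInSubTest(unittest.TestCase):
    def test_partial_skip(self):
        for i in range(3):
            with self.subTest(i=i):
                if i == 1:
                    # Skips only this subtest; the loop continues.
                    self.skipTest("case 1 not supported")
                self.assertLess(i, 5)

result = unittest.TestResult()
SkipInSubTest("test_partial_skip").run(result)
```

One skip is recorded for the i=1 subtest while the other iterations run and pass normally.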
Changes by florian-rathgeber florian.rathge...@gmail.com:
--
nosy: -florian-rathgeber
New submission from Antoine Pitrou:
subtests are a light alternative to parameterized tests as in issue7897. They
don't generate the tests for you; they simply allow you to partition a given test
case into several logical units. Meaning, when a subtest fails, the other
subtests in the test will still
Antoine Pitrou added the comment:
Attaching patch.
--
keywords: +patch
Added file: http://bugs.python.org/file28776/subtests.patch
Changes by Antoine Pitrou pit...@free.fr:
--
stage: - patch review
Serhiy Storchaka added the comment:
+1. I was going to suggest something similar to display a clarification message
in case of a failure:
for arg, expected in [(...), ...]:
    with self.somegoodname(msg="arg=%s" % arg):
Antoine Pitrou added the comment:
Since I was asked on IRC, an example of converting an existing test. It's quite
trivial really:
diff --git a/Lib/test/test_codecs.py b/Lib/test/test_codecs.py
--- a/Lib/test/test_codecs.py
+++ b/Lib/test/test_codecs.py
@@ -630,9 +630,10 @@ class
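The quoted diff is truncated; the general shape of such a conversion is roughly this (a hypothetical test, not the actual test_codecs change):

```python
import unittest

class CodecLoopTest(unittest.TestCase):
    def test_decode(self):
        # Before the conversion this was a bare loop; wrapping the
        # body in subTest() makes each encoding report independently.
        for encoding in ("utf-8", "latin-1", "ascii"):
            with self.subTest(encoding=encoding):
                self.assertEqual(b"abc".decode(encoding), "abc")

result = unittest.TestResult()
CodecLoopTest("test_decode").run(result)
print(result.wasSuccessful())  # True
```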
Ezio Melotti added the comment:
I like the idea, and I think this would be a useful addition to unittest.
OTOH while this would be applicable to most of the tests (almost every test has
a for loop to check valid/invalid values, or a few related subtests in the
same test method), I'm not sure
Nick Coghlan added the comment:
This looks very nice. For cases where you decide you don't want it, some
kind of fail fast mechanism would be helpful (e.g. when you've seen the
full set of failures, and are now just trying to resolve the first one)
Antoine Pitrou added the comment:
Updated patch makes subtests play nicely with unittest's failfast flag:
http://docs.python.org/dev/library/unittest.html#cmdoption-unittest-f
--
Added file: http://bugs.python.org/file28780/subtests2.patch
Chris Jerdonek added the comment:
Nice/elegant idea. A couple comments:
(1) What will be the semantics of TestCase/subtest failures? Currently, it
looks like each subtest failure registers as an additional failure, meaning
that the number of test failures can exceed the number of test
Antoine Pitrou added the comment:
On Saturday, 19 January 2013 at 00:33 +, Chris Jerdonek wrote:
With the way I understand it, it seems like a subtest failure should
register as a failure of the TestCase as a whole, unless the subtests
should be enumerated and considered tests in their own
Chris Jerdonek added the comment:
Either way, something isn't right about how it's integrated now. With
the patch, this:
@unittest.expectedFailure
def test(self):
    with self.subTest():
        self.assertEqual(0, 1)
gives: FAILED (failures=1, unexpected successes=1)
And