On 1/13/06, Fredrik Lundh [EMAIL PROTECTED] wrote:
my main nit is the name: the test isn't broken in itself, and doesn't need
to be fixed; it's just not expected to succeed at this time.
the usual term for this is expected failure (sometimes called XFAIL).
for the developer, this means that
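The expected-failure (XFAIL) behaviour being discussed can be sketched as a small decorator. This is a hypothetical helper, not something unittest offered at the time of this thread (later Python versions added unittest.expectedFailure for exactly this purpose); the names `expected_failure` and `Demo` are illustrative:

```python
import functools
import unittest

def expected_failure(test_method):
    # Swallow the failure we expect; complain if the test suddenly passes.
    @functools.wraps(test_method)
    def wrapper(self, *args, **kwargs):
        try:
            test_method(self, *args, **kwargs)
        except AssertionError:
            return  # the failure is expected at this time
        raise AssertionError("%s passed unexpectedly" % test_method.__name__)
    return wrapper

class Demo(unittest.TestCase):
    @expected_failure
    def test_known_bug(self):
        self.assertEqual(1 + 1, 3)  # stands in for a known-broken behaviour
```

While the bug persists, the marked test counts as a pass; once the bug is fixed, the decorator turns the clean run into a failure, reminding the developer to remove the marker.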
Collin Winter wrote:
When I've implemented this kind of thing in the past, I've generally
called the decorator/marker/whatever TODO (or some variation of
caps/lowercase).
I usually call things TODO if they need to be done. The test case is
not TODO, since it is already done. TODO would be the
Fred L. Drake, Jr. wrote:
Scott David Daniels wrote:
Would expect_fail, expect_failure, expected_fail, or
expected_failure, work for you?
None of these use the same naming convention as the other unittest object
attributes. Perhaps something like failureExpected?
I'd definitely prefer something that reads cleanly;
Scott David Daniels wrote:
OK I carried the code I offered earlier in this whole thread (tweaked in
reaction to some comments) over to comp.lang.python, gathered some
feedback, and put up a recipe on the cookbook. After a week or so for
more comment, I'll be happy to submit a patch to include the broken_test
decorator
Fredrik Lundh wrote:
Scott David Daniels wrote:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/466288
my main nit is the name: the test isn't broken in itself, and doesn't need
to be fixed; it's just not expected to succeed at this time.
the usual term for this is expected failure (sometimes called XFAIL).
Scott David Daniels wrote:
Fredrik Lundh wrote:
Scott David Daniels wrote:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/466288
my main nit is the name: the test isn't broken in itself, and doesn't need
to be fixed; it's just not expected to succeed at this time.
the usual term for this is expected failure (sometimes called XFAIL).
Scott David Daniels wrote:
Would expect_fail, expect_failure, expected_fail, or
expected_failure, work for you?
None of these use the same naming convention as the other unittest object
attributes. Perhaps something like failureExpected?
I'd definitely prefer something that reads cleanly;
Fredrik == Fredrik Lundh [EMAIL PROTECTED] writes:
Fredrik many test frameworks support expected failures for this
Fredrik purpose. how hard would it be to add a
Fredrik unittest.FailingTestCase
Fredrik class that runs a TestCase, catches any errors in it, and
Fredrik signals an error (test foo passed unexpectedly) if it runs cleanly?
[Stephen J. Turnbull wrote]
Fredrik == Fredrik Lundh [EMAIL PROTECTED] writes:
Fredrik many test frameworks support expected failures for this
Fredrik purpose. how hard would it be to add a
Fredrik unittest.FailingTestCase
Fredrik class that runs a TestCase, catches any errors in it, and
Fredrik signals an error (test foo passed unexpectedly) if it runs cleanly?
Neal Norwitz wrote:
[moving to python-dev]
On 1/7/06, Reinhold Birkenfeld [EMAIL PROTECTED] wrote:
Well, it is not the test that's broken... it's compiler.
[In reference to:
http://mail.python.org/pipermail/python-checkins/2006-January/048715.html]
In the past, we haven't checked in tests which are known to be broken.
Neal Norwitz wrote:
In the past, we haven't checked in tests which are known to be broken.
There are several good reasons for this. I would prefer you, 1) also
fix the code so the test doesn't fail, 2) revert the change (there's
still a bug report open, right?), or 3) generalize tests for
Fredrik Lundh wrote:
many test frameworks support expected failures for this purpose.
how hard would it be to add a
unittest.FailingTestCase
class that runs a TestCase, catches any errors in it, and signals an
error (test foo passed unexpectedly) if it runs cleanly ?
I don't know how hard it would be, but I would also consider this appropriate.
On Sunday 08 January 2006 12:19, Martin v. Löwis wrote:
I don't know how hard it would be, but I would also consider this
appropriate. Of course, this should work on a case-by-case basis:
if there are multiple test methods in a class, unexpected passes
of each method should be reported.
I
Fred L. Drake, Jr. wrote:
I like the way trial (from twisted) supports this. The test method is
written
normally, in whatever class makes sense. Then the test is marked with an
attribute to say it isn't expected to pass yet. When the code is fixed and
the test passes, you get that
On Jan 8, 2006, at 1:01 PM, Martin v. Löwis wrote:
Fred L. Drake, Jr. wrote:
I like the way trial (from twisted) supports this. The test
method is written
normally, in whatever class makes sense. Then the test is marked
with an
attribute to say it isn't expected to pass yet. When
[moving to python-dev]
On 1/7/06, Reinhold Birkenfeld [EMAIL PROTECTED] wrote:
Well, it is not the test that's broken... it's compiler.
[In reference to:
http://mail.python.org/pipermail/python-checkins/2006-January/048715.html]
In the past, we haven't checked in tests which are known to be broken.
[Neal Norwitz]
...
In the past, we haven't checked in tests which are known to be broken.
It's an absolute rule that you never check in a change (whether a test
or anything else) that causes ``regrtest.py -uall`` to fail. Even if
it passes on your development box, but fails on someone else's
On 1/7/06, Neal Norwitz [EMAIL PROTECTED] wrote:
I'm proposing something like add two files to Lib/test:
outstanding_bugs.py and outstanding_crashes.py. Both would be normal
test files with info about the bug report and the code that causes
problems.
I like this approach. regrtest.py won't