FWIW, I didn't intend skipped tests to be counted as failures when I wrote
the original script. I'm guessing the fact that they are was either an
oversight in the original code or something that was introduced (possibly
accidentally) at a later point. Either way I agree with you: it doesn't
make much sense for them to count as failures. I'd classify it as a bug and
go ahead and fix it.
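For illustration, here is a minimal sketch of the proposed fix, assuming a
runner structured roughly like this (the names `Skipped` aside, everything
here is hypothetical and not the actual qpid-python-test code): skips are
counted and reported, but only real failures drive the exit status.

```python
class Skipped(Exception):
    """Raised by a test to indicate it cannot run in this environment,
    e.g. a required library is not installed."""

def run_tests(tests):
    """Run (name, callable) pairs; return the process exit status.

    tests that raise Skipped are reported but, per the proposed fix,
    do NOT cause a non-zero exit status."""
    passed, failed, skipped = 0, 0, 0
    for name, fn in tests:
        try:
            fn()
            passed += 1
        except Skipped as e:
            skipped += 1
            print("SKIP %s: %s" % (name, e))
        except Exception as e:
            failed += 1
            print("FAIL %s: %s" % (name, e))
    print("passed %d, failed %d, skipped %d" % (passed, failed, skipped))
    # The fix: exit status reflects only real failures, not skips.
    return 1 if failed else 0
```

With this scheme a run containing only passes and skips returns 0, so CI
frameworks that key off the exit status stay quiet, while the SKIP lines
and the summary keep the skip statistics visible in the output.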

--Rafael


On Mon, Jan 27, 2014 at 4:20 PM, Alan Conway <[email protected]> wrote:

> The qpid-python-test script has a facility for skipping tests (by
> raising a Skipped exception) which works fine, BUT if any tests are
> skipped the script exits with non-0 status, i.e. failure.
>
> I propose we change this behavior. It is clear in the test output that
> tests were skipped rather than failed, but that distinction is lost when
> the script is incorporated into larger test suites, CI frameworks, etc.,
> where a non-0 exit status is treated as a failure. That sets people off
> in failure-investigation mode, only for them to dig down and find, to
> much annoyance, that skipped (not failed) tests are causing the alarms.
>
> IMO skipping a test is different from failing - you skip because you
> can't run the test for some environmental reason that has no bearing on
> whether the functionality works, e.g. there's some library not installed
> or what have you. We should certainly try to get skip stats reflected in
> higher level tools that measure test health but I don't think we should
> be ringing alarm bells as if something had failed.
>
> Opinions? This behavior has been around for a long time, so I'm wary of
> changing it unilaterally. It is, however, the reason I've never used the
> skip functionality, resorting instead to hacks like making tests pass
> but print SKIP, which is not as nice as doing it properly.
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>
>