Jim Meyering wrote:
> Imagine that the first 10 tests pass, then each of the remaining ones is
> killed via e.g., SIGHUP. ...
> a naive search for "FAIL:" in the build output would find nothing.

Yes, and it should be this way, IMO. Every "FAIL:" a user sees should
prompt him to investigate.

In the gettext test suite, by contrast, when I sent SIGINT I often saw
tests fail without explanation. (This was due to a missing 'exit' statement
in the trap handler, but it would be the same if there were an 'exit 1' in
the trap handler.) I guessed that the FAIL reports were caused by the SIGINT
and did not investigate. But that attitude should not be encouraged.
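
To illustrate the failure mode, here is a minimal hypothetical sketch (made-up
names, not the actual gettext code): the trap cleans up but does not exit, so
after the handler runs, the script falls through to its normal failure path.

  #!/bin/sh
  # Hypothetical test script, for illustration only.
  trap 'rm -f t-foo.tmp' INT         # cleans up, but no 'exit' here
  some_command > t-foo.tmp || exit 1
  # A SIGINT kills some_command, the trap runs, and the script then
  # continues: the nonzero status triggers 'exit 1', so the harness
  # prints "FAIL: ..." with no hint that a signal was the cause --
  # just as it would if the trap itself ended with 'exit 1'.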

Similarly, when I get reports from Nelson Beebe with lots of failing tests,
I don't want to spend time on spurious failures that were caused by, say, a
shutdown of his virtual machine or something similar.

> The final result would be highly misleading:
>
>     ========================
>     All 10 tests passed
>     (300 tests were not run)
>     ========================

But before this final result, you would see 300 times:

  Skipping test: caught fatal signal
  SKIP: test-foo1
  Skipping test: caught fatal signal
  SKIP: test-foo2
  Skipping test: caught fatal signal
  SKIP: test-bar
  ...

That should be enough of an explanation, no? And it will tell us that there's
no gnulib bug to investigate.
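
A sketch of how each test script could produce that output, assuming an
Automake-style harness where exit status 77 is reported as SKIP; the exact
message and signal list are illustrative:

  # Illustrative trap for each test script; 77 is the exit status
  # that the Automake test driver reports as SKIP.
  trap 'echo "Skipping test: caught fatal signal" >&2; exit 77' \
      HUP INT QUIT TERM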

Bruno

