On Thu, Aug 10, 2017 at 5:40 PM, Nathan Lynch <nathan_ly...@mentor.com> wrote:
>
> Ping?
>

Sorry for the delay. We looked into this: there seems to be no easy way
with Automake to have individual `check` targets in subdirectories while
also having the top-level `check` target recurse into all of them
without stopping as soon as one subdirectory fails.

As you wrote, it is possible with `make --keep-going` (`-k`), but this
is not common practice, and we don't expect our users to read make(1)
just to be able to run all the tests unconditionally.
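To illustrate the behavior (this is a minimal sketch, not babeltrace's
actual build system; the file name and target names are made up), here
is a two-target reproduction of how a recursive `check` stops at the
first failure, and how `-k` changes that. It uses GNU make's
`.RECIPEPREFIX` (3.82+) so that recipes use `>` instead of tabs:

```shell
# Fake "subdirectory" check targets: check-a always fails, check-b passes.
cat > /tmp/demo-keep-going.mk <<'EOF'
.RECIPEPREFIX = >
check: check-a check-b
check-a:
>@echo "running a"; exit 1
check-b:
>@echo "running b"
EOF

# Without -k, make stops after check-a fails; check-b never runs:
make -f /tmp/demo-keep-going.mk check || true

# With -k (--keep-going), make continues and still runs check-b,
# then reports that `check' could not be remade because of errors:
make -k -f /tmp/demo-keep-going.mk check || true
```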

I asked the Automake mailing list about this
<http://lists.gnu.org/archive/html/automake/2017-08/msg00004.html>, and
we will most probably implement the suggested solution: individual
top-level (in tests/) `check-X` targets which rerun Make with specific
sets of tests.
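Roughly, that could look like the following sketch of tests/Makefile.am
(all variable, target, and test names here are illustrative, not the
actual babeltrace ones). It keeps a single top-level TESTS list, then
adds convenience targets that rerun Make with TESTS restricted, which
the Automake parallel test harness supports:

```makefile
# tests/Makefile.am -- sketch only; names are illustrative.
LIB_TESTS = lib/test_foo lib/test_bar
CLI_TESTS = cli/test_baz

TESTS = $(LIB_TESTS) $(CLI_TESTS)

# Rerun the harness with a restricted TESTS set, e.g. `make check-lib`.
check-lib:
	$(MAKE) $(AM_MAKEFLAGS) check TESTS='$(LIB_TESTS)'

check-cli:
	$(MAKE) $(AM_MAKEFLAGS) check TESTS='$(CLI_TESTS)'

.PHONY: check-lib check-cli
```

With this layout, a plain `make check` runs every test in one suite (so
one failure no longer hides the other suites' results), while the
`check-X` targets preserve the ability to run only a subset.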

Thank you for reporting this, by the way.

Phil

>
> Nathan Lynch <nathan_ly...@mentor.com> writes:
>> How is the babeltrace test suite on the master and stable-2.0 branches
>> intended to be used?
>>
>> Effectively there are four test suites, one for each definition of TESTS
>> in the tests/ directory.  If one of them has a failure, 'make check'
>> will not proceed to the next unless the -k/--keep-going flag is used.
>> Is this intended?  I was unwittingly running only the tests under
>> tests/cli (some of which always fail) until I discovered this.
>>
>> Defining multiple test suites this way seems to be unusual; other
>> projects I've checked just have one top-level definition of TESTS.
>>
>> I'm aware of the rationale for doing it this way:
>>
>> commit 6ca1931cb32ca2eb33252896d2a42a4c48af436a
>> Author: Philippe Proulx <eeppelitel...@gmail.com>
>> Date:   Fri May 5 16:45:11 2017 -0400
>>
>>     tests: put TESTS list in each Makefile.am
>>
>>     So that you can do `make check` in any subdirectory to run only those
>>     tests.
>>
>> But it still feels non-idiomatic to me, and I'm wondering if there's a
>> better way.  There are other ways to limit the set of tests to run,
>> detailed here:
>>
>> https://www.gnu.org/software/automake/manual/html_node/Parallel-Test-Harness.html
>>
>> Here are my current results from running all test suites; let me know if
>> you want more detail and I'll follow up; there are likely several
>> different issues to investigate.
>>
>> $ find tests/ -name test-suite.log -exec head {} +
>> ==> tests/lib/test-suite.log <==
>> =====================================================
>>    babeltrace 2.0.0-pre1: tests/lib/test-suite.log
>> =====================================================
>>
>> # TOTAL: 1200
>> # PASS:  1195
>> # SKIP:  0
>> # XFAIL: 0
>> # FAIL:  1
>> # XPASS: 0
>>
>> ==> tests/bindings/python/bt2/test-suite.log <==
>> =====================================================================
>>    babeltrace 2.0.0-pre1: tests/bindings/python/bt2/test-suite.log
>> =====================================================================
>>
>> # TOTAL: 11152
>> # PASS:  0
>> # SKIP:  0
>> # XFAIL: 0
>> # FAIL:  11151
>> # XPASS: 0
>>
>> ==> tests/cli/test-suite.log <==
>> =====================================================
>>    babeltrace 2.0.0-pre1: tests/cli/test-suite.log
>> =====================================================
>>
>> # TOTAL: 153
>> # PASS:  143
>> # SKIP:  0
>> # XFAIL: 0
>> # FAIL:  7
>> # XPASS: 0
>>
>> ==> tests/plugins/test-suite.log <==
>> =========================================================
>>    babeltrace 2.0.0-pre1: tests/plugins/test-suite.log
>> =========================================================
>>
>> # TOTAL: 2
>> # PASS:  0
>> # SKIP:  0
>> # XFAIL: 0
>> # FAIL:  0
>> # XPASS: 0
>>
>> (BTW, since the tests don't pass/skip/xfail 100%, 'make distcheck' is 
>> failing.)
> _______________________________________________
> lttng-dev mailing list
> lttng-dev@lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev