Hi Kevin,
On Mon, Oct 03, 2016 at 03:08:32PM +0200, Kevin Townsend wrote:
> I was wondering if there were any suggestions on how it might be
> possible to improve the output of the unit tests to be a bit more
> verbose, following some of the other frameworks out there for
> embedded systems like CMock.
>
> Unit testing and test simulation is an important part of the system
> for any professionally maintained project, but could be even more
> useful with a little bit of refinement.
>
> Personally, I find it useful to see a list of tests being run and
> maybe spot if I missed a module, and to know how many tests were
> run, etc., so something like this when running the test suite(s)?
>     Running 'TestSuiteName' test suite:
>         Running 'Test Name' ... [OK][FAILED]
>         Running 'Test Name' ... [OK][FAILED]
>     [n] unit tests passed, [n] failed
>
>     Running 'TestSuiteName' test suite:
>         Running 'Test Name' ... [OK][FAILED]
>         Running 'Test Name' ... [OK][FAILED]
>     [n] unit tests passed, [n] failed
>
>     Ran [n] unit tests in [n] test suites
>     [n] unit tests passed, [n] failed
> It's a poor example that needs more thought, but I was interested in
> getting the discussion started.
The thinking was that the user doesn't want to be bothered with a bunch
of text when there are no failures. That said, I agree that more
verbose output in the success case would be useful in some cases. You
can get something kind of like your example if you provide the -ldebug
command line option when you run the test, e.g.,
newt -ldebug test net/nimble/host
Executing test:
/home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
2016/10/03 08:00:49 [DEBUG] /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
2016/10/03 08:00:50 [DEBUG] o=[pass] ble_att_clt_suite/ble_att_clt_test_tx_find_info
[pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
[pass] ble_att_clt_suite/ble_att_clt_test_tx_read
[...]
[pass] ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
[pass] ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
[pass] ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
[pass] ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
[pass] ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
[pass] ble_uuid_test_suite/ble_uuid_test_128_to_16
Passed tests: [net/nimble/host/test]
All tests passed
The output is a bit rough, and -ldebug produces a lot of extra output
that is not relevant, so there is some work to do here. As an aside, I
think newt is not very consistent with its "-v" and "-ldebug" options.
As I understand it, "-v" is supposed to produce extra output about the
user's project; "-ldebug" is meant for debugging the newt tool itself,
and is supposed to generate output relating to newt's internals.
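
To make the distinction concrete, this is how I would expect the two
invocations to differ (describing the intent, not necessarily the
current behavior):

    newt -v test net/nimble/host        (extra output about the user's project)
    newt -ldebug test net/nimble/host   (debug logging from newt's own internals)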
> Also, having a 'startup' and 'teardown' function that runs before
> and after every unit test in the test suite may be nice as well, to
> clear any variables or put things into a known state, but I'm also
> curious about opinions there.
>
> Maybe have optional functions like this in every test suite module
> (this is taken from a project where we used CMock and Unity:
> http://www.throwtheswitch.org/#download-section):
>     void setUp(void)
>     {
>         fifo_clear(&ff_non_overwritable);
>     }
>
>     void tearDown(void)
>     {
>     }
I agree. Again, this is currently only half-implemented and needs some
more work. The testutil library exposes the following:
    typedef void tu_post_test_fn_t(void *arg);
    void tu_suite_set_post_test_cb(tu_post_test_fn_t *cb, void *cb_arg);
So, there is a "teardown" function at the suite level, but no startup
functions, and nothing at the individual case level. Also, this
function doesn't get executed automatically unless testutil is
configured to do so.
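
Something along these lines is roughly how I'd expect a suite to use
it today (my_test_suite, my_post_test_cb, and the my_module_* helpers
are all invented for illustration):

    #include "testutil/testutil.h"

    /* Invented cleanup callback: put shared state back into a known
     * condition after each test case runs. */
    static void
    my_post_test_cb(void *arg)
    {
        (void)arg;
        my_module_reset();      /* hypothetical helper */
    }

    TEST_CASE(my_test_case_1)
    {
        TEST_ASSERT(my_module_do_thing() == 0);  /* hypothetical */
    }

    TEST_SUITE(my_test_suite)
    {
        /* Nothing runs automatically; the suite has to register the
         * callback itself. */
        tu_suite_set_post_test_cb(my_post_test_cb, NULL);

        my_test_case_1();
    }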
Long ago when I was working on the testutil library, I consciously
avoided adding this type of functionality. I wanted the unit tests to
be easy to understand and debug, so I strived for a small API and
nothing automatic. In retrospect, after writing several unit tests, I
do think automatic setup and teardown functions are useful enough to
include in the API.
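
To sketch what that might look like (pure strawman; none of these
names exist in testutil today):

    /* Strawman API additions -- not implemented. */
    typedef void tu_pre_test_fn_t(void *arg);

    /* Run automatically before / after every test case in the suite. */
    void tu_suite_set_pre_test_cb(tu_pre_test_fn_t *cb, void *cb_arg);

    /* Run automatically once when the suite starts / finishes. */
    void tu_suite_set_init_cb(tu_pre_test_fn_t *cb, void *cb_arg);
    void tu_suite_set_complete_cb(tu_post_test_fn_t *cb, void *cb_arg);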
I also recall looking at CMock a while back when I was searching for
ideas. I think it provides a lot of useful functionality, but it
looked like it did way more than we were interested in at the time.
Now that the project is a bit more mature, it might be a good time to
add some needed functionality to the unit testing framework.
> Happy to help here, but wanted to get a discussion started first.
I for one would welcome all ideas and contributions to the testutil
library. Could you expand on the setup / teardown thoughts? Would
these be executed per test case, or just per suite? Also, my
understanding is that these functions get executed automatically,
without the framework needing to be told about them; is that correct?
Thanks,
Chris