Yeah, mocking is a great tool; we can test a high-level module based on the behaviors of the lower one without going down to BSP/peripheral simulation. Since I plan to port my library as a newt project, and it already has decent tests running CMock, I will try to pull that off later.

On 06/10/2016 00:28, Sterling Hughes wrote:
Hi,

I don't think we planned on providing a mocking framework in V1 of Mynewt. The approach to mocking has been to implement the lower layers on sim, and then to special-case things where it only makes sense for a particular regression or unit test. While you won't get the control you have with mocking (i.e., a guaranteed set of responses to external function calls), it does allow a fair number of regression tests to run simulated, and it should catch the vast majority of cases.

Going forward, it does sound like having this ability would be useful. If somebody wanted to provide a patch to newt that allows it either to use an external framework like CMock or to generate a set of mock templates itself, I think that would be a great contribution!
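
For illustration, a generated mock template for a single function might look something like this (the function, its signature, and the generated shape are all invented for this sketch; newt has no such generator today):

     /* Sketch of what a generated mock might contain for:
      *     int ble_gatts_register_svcs(const struct ble_gatt_svc_def *svcs);
      * (assumed signature). One capture/return block per mocked function.
      */
     static const struct ble_gatt_svc_def *mock_register_svcs_arg;
     static int mock_register_svcs_rc;         /* canned return value */
     static int mock_register_svcs_num_calls;

     int
     ble_gatts_register_svcs(const struct ble_gatt_svc_def *svcs)
     {
         mock_register_svcs_arg = svcs;        /* capture for assertions */
         mock_register_svcs_num_calls++;
         return mock_register_svcs_rc;
     }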

Sterling

On 3 Oct 2016, at 12:26, hathach wrote:

Hi all,

I previously used CMock & Unity as the unit testing framework for my own project. CMock is rather complex, since it allows mocking the lower layers, thus isolating a module and making it easy to test and probe its behavior.

For example, when testing a service-adding function, all we care about is that ble_gatts_register_svcs() is ultimately invoked with the exact same svc_def. The behavior of ble_gatts_register_svcs() itself is subject to its own unit testing.
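
In CMock terms, such a test might look roughly like this (the header names, the module under test, and the signature of ble_gatts_register_svcs() are all assumed for illustration; the _ExpectAndReturn function is what CMock generates from the mocked header):

     #include "unity.h"
     #include "mock_ble_gatts.h"   /* CMock-generated mock (assumed header) */
     #include "my_svc.h"           /* hypothetical module under test */

     void test_svc_add_registers_svc_def(void)
     {
         /* Expect exactly one call into the lower layer, with our table
          * (my_svc_defs is assumed to be exported by my_svc.h). */
         ble_gatts_register_svcs_ExpectAndReturn(my_svc_defs, 0);

         TEST_ASSERT_EQUAL(0, my_svc_add());
     }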

Though newt's testutil is still at an early stage of development, do we have a plan to implement some level of mocking framework like CMock? Without one, it will be a challenge to simulate the lower layers and stimulate certain scenarios.

PS: I found that even with mocking, it is hard to get decent unit test coverage of things like peripherals. And the integration test is completely out of control :(

On 03/10/2016 22:42, Christopher Collins wrote:
Hi Kevin,

On Mon, Oct 03, 2016 at 03:08:32PM +0200, Kevin Townsend wrote:
I was wondering if there were any suggestions on how it might be
possible to improve the output of the unit tests to be a bit more
verbose, following some of the other frameworks out there for embedded
systems like CMock.

Unit testing and test simulation are an important part of the system for
any professionally maintained project, but they could be even more useful
with a little bit of refinement.

Personally, I find it useful to see a list of the tests being run (maybe
to spot if I missed a module), and to know how many tests were run, etc.
So perhaps something like this when running the test suite(s):

     Running 'TestSuiteName' test suite:
        Running 'Test Name' ... [OK][FAILED]
        Running 'Test Name' ... [OK][FAILED]
        [n] unit tests passed, [n] failed

     Running 'TestSuiteName' test suite:
        Running 'Test Name' ... [OK][FAILED]
        Running 'Test Name' ... [OK][FAILED]
        [n] unit tests passed, [n] failed

     Ran [n] unit tests in [n] test suites
     [n] unit tests passed, [n] failed

It's a poor example that needs more thought, but I was interested in
getting the discussion started.

The thinking was that the user doesn't want to be bothered with a bunch
of text when there are no failures.  That said, I agree that more
verbose output in the success case would be useful in some cases.  You
can get something kind of like your example if you provide the -ldebug
command line option when you run the test, e.g.,

     newt -ldebug test net/nimble/host
     Executing test: /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
     2016/10/03 08:00:49 [DEBUG] /home/ccollins/repos/mynewt/core/bin/targets/unittest/net_nimble_host_test/test/net/nimble/host/test/net_nimble_host_test
     2016/10/03 08:00:50 [DEBUG] o=[pass] ble_att_clt_suite/ble_att_clt_test_tx_find_info
     [pass] ble_att_clt_suite/ble_att_clt_test_rx_find_info
     [pass] ble_att_clt_suite/ble_att_clt_test_tx_read
     [...]
     [pass] ble_sm_sc_test_suite/ble_sm_sc_peer_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
     [pass] ble_sm_sc_test_suite/ble_sm_sc_peer_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
     [pass] ble_sm_sc_test_suite/ble_sm_sc_us_jw_iio3_rio3_b1_iat2_rat2_ik3_rk3
     [pass] ble_sm_sc_test_suite/ble_sm_sc_us_nc_iio1_rio1_b1_iat2_rat2_ik3_rk3
     [pass] ble_sm_sc_test_suite/ble_sm_sc_us_pk_iio2_rio0_b1_iat2_rat2_ik7_rk3
     [pass] ble_uuid_test_suite/ble_uuid_test_128_to_16

     Passed tests: [net/nimble/host/test]
     All tests passed

The output is a bit rough, and -ldebug produces a lot of extra output
that is not relevant, so there is some work to do here.  As an aside, I
think newt is not very consistent with its "-v" and "-ldebug" options.
As I understand it, "-v" is supposed to produce extra output about the
user's project; "-ldebug" is meant for debugging the newt tool itself,
and is supposed to generate output relating to newt's internals.

Also, having 'startup' and 'teardown' functions that run before and
after every unit test in the test suite may be nice as well, to clear
any variables or put things into a known state, but I'm also curious
about opinions there.

Maybe have optional functions like this in every test suite module
(this is taken from a project where we used CMock and Unity:
http://www.throwtheswitch.org/#download-section):

    void setUp(void)
    {
       /* Runs before each test case: restore the FIFO to a known state. */
       fifo_clear(&ff_non_overwritable);
    }

    void tearDown(void)
    {
       /* Runs after each test case; nothing to clean up yet. */
    }

I agree.  Again, this is kind of half implemented currently, but it
needs some more work.  The testutil library exposes the following:

     typedef void tu_post_test_fn_t(void *arg);
     void tu_suite_set_post_test_cb(tu_post_test_fn_t *cb, void *cb_arg);

So, there is a "teardown" function at the suite level, but no startup
functions, and nothing at the individual case level.  Also, this
function doesn't get executed automatically unless testutil is
configured to do so.
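
For illustration, a suite might wire up the existing hook like this (the fixture and case names are made up; TEST_SUITE and tu_suite_set_post_test_cb are the testutil pieces mentioned above):

     #include <string.h>
     #include "testutil/testutil.h"

     /* Hypothetical shared state that the test cases mutate. */
     static struct {
         int n_events;
     } g_fixture;

     /* Runs after each case once registered (and once testutil is
      * configured to invoke it). */
     static void
     my_suite_post_test(void *arg)
     {
         memset(&g_fixture, 0, sizeof g_fixture);
     }

     TEST_SUITE(my_suite)
     {
         tu_suite_set_post_test_cb(my_suite_post_test, NULL);

         my_test_case_1();   /* hypothetical TEST_CASE functions */
         my_test_case_2();
     }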

Long ago when I was working on the testutil library, I consciously
avoided adding this type of functionality.  I wanted the unit tests to
be easy to understand and debug, so I strived for a small API and
nothing automatic.  In retrospect, after writing several unit tests, I
do think automatic setup and teardown functions are useful enough to
include in the API.
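
Concretely, that might mean declarations along these lines (purely a sketch; none of this exists in testutil today):

     /* Hypothetical counterparts to the existing suite-level
      * post-test callback: */
     typedef void tu_pre_test_fn_t(void *arg);
     void tu_suite_set_pre_test_cb(tu_pre_test_fn_t *cb, void *cb_arg);
     void tu_case_set_pre_test_cb(tu_pre_test_fn_t *cb, void *cb_arg);
     void tu_case_set_post_test_cb(tu_post_test_fn_t *cb, void *cb_arg);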

I also recall looking at CMock a while back when I was searching for
ideas. I think it provides a lot of useful functionality, but it looked
like it did way more than we were interested in at the time. Now that
the project is a bit more mature, it might be a good time to add some
needed functionality to the unit testing framework.

Happy to help here, but wanted to get a discussion started first.

I for one would welcome all ideas and contributions to the testutil
library.  Could you expand on the setup / teardown thoughts? Would
these be executed per test case, or just per suite?  Also, my
understanding is that these functions get executed automatically without
the framework needing to be told about them, is that correct?

Thanks,
Chris



