On 26 March 2015 at 22:56, Vadim Zeitlin <vz-s...@zeitlins.org> wrote:
> On Thu, 26 Mar 2015 20:17:27 +0100 Mateusz Loskot <mate...@loskot.net> wrote:
>
> ML> >  I am less sure about how to handle the transition from assert() to
> ML> > CATCH.
> ML> > One possibility would be to just globally replace all asserts with
> ML> > REQUIRE() macros. This would be suboptimal (if only because some of the
> ML> > asserts should probably be replaced with CHECK(), to allow the tests to
> ML> > continue if one of them fails), but should be relatively simple to do.
> ML>
> ML> I believe in single assert per test routine/case...see below.
>
>  Sorry, this is unworkable. Currently, i.e. with catch-tests and Firebird
> we have
>
>         All tests passed (1851 assertions in 54 test cases)
>
> i.e. an average of more than 34 assertions per test. Multiplying the number
> of test cases by 34 is not going to help.
>
>  Of course, the current tests are not well-organized and there is ample
> scope for improvement here. And we definitely should be splitting some test
> cases in sections using the eponymous macro. But we still won't have a
> single assert per test case or even per section. But why should this be a
> goal in the first place? I really don't see any good reasons to require it.

Not literally; I meant "single assert" in the sense of testing one thing per
test routine. Currently, the tests are bundled together around shared test
data/tables.
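
To make that concrete, here is a rough sketch of the kind of test I have in
mind. The soci_test table and the backEnd/connectString globals are only
stand-ins for whatever shared fixture the common test code actually provides:

    #include "catch.hpp"
    #include "soci.h"   // adjust the include path to the build layout

    // Assumed to come from the shared test setup; names are illustrative.
    extern soci::backend_factory const& backEnd;
    extern std::string connectString;

    TEST_CASE("integer round trip", "[core]")
    {
        soci::session sql(backEnd, connectString);
        sql << "create table soci_test(id integer)";
        sql << "insert into soci_test(id) values(7)";

        int id = 0;
        sql << "select id from soci_test", soci::into(id);

        // One behaviour, one assertion: CHECK records a failure and lets
        // the run continue, while REQUIRE would abort this test case.
        CHECK(id == 7);

        sql << "drop table soci_test";
    }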

> ML> My plan was to keep the current tests and start (re)writing with CATCH
>
>  This would be my preference as well, in theory, but looking at the amount
> of existing testing code, I realized it would take an inordinate amount of
> time, so it's just not going to happen in practice. Hence I decided to
> do what I can to improve things right now instead of hoping that the best
> case somehow happens on its own in the future...

Yes, that's why it hasn't happened yet: my plan required more resources.
Your plan has a slightly different goal and approach, and I support it too.

> ML> in a new folder: tests/integrate.
>
>  What would be the meaning of integration tests for SOCI? I.e. integration
> of what with what would they be testing? I am not sure how we can write
> more than just unit tests for a library like this.

We don't have unit tests; the tests we have are integration tests, as they
exercise the library against a real database. What I had in mind was writing
unit tests for the API using mocks, hence the distinction between the folders.
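
Just to illustrate the idea; this is not SOCI's real backend interface, only
a made-up stand-in showing how a hand-rolled mock could verify what the API
layer passes down:

    #include "catch.hpp"
    #include <string>
    #include <vector>

    // Stand-in for a backend-facing interface (hypothetical).
    struct query_executor
    {
        virtual ~query_executor() {}
        virtual void execute(std::string const& query) = 0;
    };

    // Hand-rolled mock recording every statement it receives.
    struct mock_executor : query_executor
    {
        std::vector<std::string> executed;
        virtual void execute(std::string const& query)
        {
            executed.push_back(query);
        }
    };

    // Hypothetical API-layer function under test.
    void drop_table(query_executor& exec, std::string const& name)
    {
        exec.execute("drop table " + name);
    }

    TEST_CASE("drop_table issues the expected SQL", "[unit]")
    {
        mock_executor exec;
        drop_table(exec, "soci_test");

        REQUIRE(exec.executed.size() == 1);
        CHECK(exec.executed[0] == "drop table soci_test");
    }

No database needed, which is exactly the point of the tests/unit folder.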

> ML> Finally, along the transition, I had an idea to refactor and clean up
> ML> the tests slightly to test a single thing per routine/case, as short
> ML> as possible, with the Arrange/Act/Assert principle in mind.
> ML> Perhaps some modularisation and grouping would be a good idea too.
>
>  Yes, it would be definitely nice to streamline all this. But,
> realistically, who is going to do it and when is it going to happen?

Not in the foreseeable future, I'm afraid. I was only explaining my CATCH
plans/ideas from the roadmap in some more detail.
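
Still, for the record, the Arrange/Act/Assert shape I was thinking of, using
CATCH's SECTION macro (same assumed fixture globals as in the earlier sketch;
remember that the enclosing test case re-runs from the top for each section):

    TEST_CASE("counting rows", "[core]")
    {
        // Arrange: executed once per SECTION, so each section
        // starts from a fresh table.
        soci::session sql(backEnd, connectString);
        sql << "create table soci_test(id integer)";

        SECTION("single insert gives one row")
        {
            // Act
            sql << "insert into soci_test(id) values(1)";

            // Assert
            int count = 0;
            sql << "select count(*) from soci_test", soci::into(count);
            CHECK(count == 1);
        }

        SECTION("fresh table is empty")
        {
            int count = 0;
            sql << "select count(*) from soci_test", soci::into(count);
            CHECK(count == 0);
        }

        sql << "drop table soci_test";
    }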

> ML> Having explained my points above, I don't think that find & replace
> ML> assert with CATCH macro is the way to go.

I should have added: "the way to go to get where I had planned to go" :)
It's not that I dislike your proposal; I just had somewhat different goals.

>  OK, as you know I've already done this in the meantime. I actually didn't
> want to start on this, I just wanted to add a new test (for checking the
> error messages) using CATCH. But it turned out that I couldn't do it
> without switching all the rest to CATCH first because I had to use
> TEST_CASE or TEST_CASE_METHOD for all the test functions to run them, so I
> started doing this and then was annoyed by seeing CATCH messages about N
> tests passing but 0 checks being verified, so I also went ahead and did
> replace assert()s with CHECKs.
>
>  So the question now is whether this makes things better or worse. IMHO it
> does improve them, as the commit message says, using CATCH:
>
> - Provides information about the failed test and the values of variables in 
> it.
> - Allows the tests to continue running even if one of them fails.
> - Allows running just some of the tests with a flexible selection mechanism.
>
> and all this is pretty convenient already. Additionally, it allows writing
> new tests using CATCH and organizing them better, which was the original
> impetus for this change.
>
>  It took me a couple of hours to make the changes so far and I'll probably
> need at least as much to update the other backends (so far I've only done
> it for Firebird), so I'd rather not do it if we don't want to replace
> asserts with CATCH CHECK()s. But IMHO it would be a waste, the benefits
> above are real and can be had today without any drawbacks (AFAICS).

I agree, and I'm supportive of merging your work.
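
For anyone following the thread, my understanding is that the conversion
boils down to something like the following; the fixture name is made up here,
and the catch-tests branch will have its own equivalent:

    // Hypothetical fixture wrapping the per-test session setup.
    struct firebird_fixture
    {
        firebird_fixture() : sql(backEnd, connectString) {}
        soci::session sql;
    };

    TEST_CASE_METHOD(firebird_fixture, "empty blob", "[firebird][blob]")
    {
        soci::blob b(sql);
        CHECK(b.get_len() == 0);
    }

and the flexible selection then comes straight from CATCH's command line,
e.g. passing "[blob]" to the test binary to run only the blob tests, or -l
to list the registered test cases.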

>  The possibilities I see are:
>
> 1. Throw away catch-tests branch and start adding new tests using CATCH
>    in tests/unit.

Nah, we don't have the resources for that.

> 2. Complete the work done for Firebird on catch-tests branch for the other
>    backends and merge it.

Let's stick to this one, IMHO.

Thanks for your efforts, Vadim!

Best regards,
-- 
Mateusz Loskot, http://mateusz.loskot.net
