Hi Bram,

I agree that our code should be under test and that we should agree upon some 
minimal code coverage for these tests. I also agree that a unit test is 
preferable to an integration test, because a unit test is an integrated part 
of the bundle itself, whereas an integration test lives separately. 
However, I don't think we should make creating a unit test a goal in itself. 
The goal is to automate testing of our code and to have a mechanism in place 
that notifies you automatically as soon as some part of the system starts 
behaving in an unexpected manner because of your code change (maybe even 
preventing you from committing the change). That is the goal, not creating 
unit tests. It doesn't make sense to enforce writing unit tests if they don't 
contribute to that goal.
That having been said, I would agree on stating that all code should be under 
test with some minimal code coverage. But I do not agree on stating that this 
must be a unit test, since creating a unit test is a means, not an end. So 
whether code is covered by a unit test or an integration test, I don't care. 

In my opinion, a unit test is useful if the unit is fairly easy to test in 
isolation. Units that perform complex calculations are very suitable for unit 
testing, as the calculation happens inside the unit itself. But as soon as a 
unit crosses its own boundary, testing it in isolation becomes a struggle: you 
need to create mock objects, stubs, fake services, etc. just to keep the unit 
test from crossing the boundaries of the unit. In a service framework like 
ours, most of the complexity is in wiring all these services together, and 
most units cross those boundaries very often. That makes unit testing in 
Amdatu a struggle. If your unit registers a REST service, invokes the HTTP 
service, invokes OAuth servlets, or uses the Cassandra persistence manager or 
the RDF2Go API, you need to create fake/mock/stub objects for all of these 
APIs. That is not only a hell of a job, it also introduces a new layer with 
possible bugs of its own. If I test against a mocked Cassandra persistence 
manager, how do I ensure that this mock perfectly mimics the behavior of the 
real thing? I would be testing my unit against a representation of the real 
thing, without knowing whether that representation is correct.
In an integration test, on the other hand, you are testing the unit against 
the real world, so I can be sure that my unit behaves the same once it enters 
the real world. 
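To make the contrast concrete, here is a minimal sketch (the `VatCalculator` 
class and its numbers are hypothetical, not taken from the Amdatu codebase): a 
unit whose work is a pure calculation never crosses its own boundary, so it 
can be tested in complete isolation with a plain assertion and no mocks at all.

```java
// Hypothetical example: a unit that performs a calculation entirely
// inside its own boundary is trivially testable in isolation.
public class VatCalculator {

    private final double rate;

    public VatCalculator(double rate) {
        this.rate = rate;
    }

    // Pure calculation: no services, no I/O, no boundaries crossed.
    public double grossPrice(double netPrice) {
        return netPrice * (1.0 + rate);
    }

    public static void main(String[] args) {
        VatCalculator calc = new VatCalculator(0.19);
        // The whole "test setup" is one constructor call and one assertion.
        if (Math.abs(calc.grossPrice(100.0) - 119.0) > 1e-9) {
            throw new AssertionError("unexpected gross price");
        }
        System.out.println("VatCalculator test passed");
    }
}
```

The moment such a unit would need an HTTP service or a persistence manager, 
this few-line test stops being possible without a mocking layer.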

A good example, I think, are the storage providers. The FS storage providers 
are very suitable for unit testing, as they do not need to cross the boundary 
of their own unit (unless you call the filesystem a boundary, but in that case 
it is either an integration test or you should create a mocked filesystem). 
The Cassandra storage providers, on the other hand, all invoke the Cassandra 
persistence manager, which in turn crosses another boundary to invoke the 
Cassandra daemon. Creating a mock object for the Cassandra persistence manager 
is not only a hell of a job, but also not very useful. Why would I want to 
test my unit against some large and complex mock object which may itself 
contain plenty of bugs? If such a unit test passes, it doesn't tell me much 
about how the unit will behave in the real world. It doesn't even tell me 
whether the unit functions correctly in isolation, since the definition of 
'correct' is determined by a large and complex fake object which may or may 
not be a good representation of the real one.
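That fidelity problem can be sketched in a few lines (all names below are 
invented for illustration; they are not the actual Amdatu or Cassandra APIs). 
The fake is itself a new layer of code whose author has to guess the real 
backend's behavior:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: StorageProvider, InMemoryStorageFake and
// ProfileService are invented here, not real Amdatu classes.
interface StorageProvider {
    void store(String key, String value);
    String retrieve(String key);
}

// A hand-rolled fake standing in for a real persistence manager.
// It is a new layer with its own potential bugs: for instance, whether
// retrieve() of a missing key returns null or throws is a choice the
// fake's author makes -- the real backend may behave differently.
class InMemoryStorageFake implements StorageProvider {
    private final Map<String, String> data = new HashMap<String, String>();
    public void store(String key, String value) { data.put(key, value); }
    public String retrieve(String key) { return data.get(key); } // null if absent
}

// The unit under test, which crosses its boundary via StorageProvider.
class ProfileService {
    private final StorageProvider storage;
    ProfileService(StorageProvider storage) { this.storage = storage; }

    String displayName(String userId) {
        String name = storage.retrieve(userId);
        // A passing test only proves correctness relative to the fake's
        // notion of "missing key" -- not relative to the real thing's.
        return name != null ? name : "anonymous";
    }
}
```

A green unit test here demonstrates agreement with the fake's semantics; 
whether those semantics match the real persistence manager is exactly the 
open question.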
Hopefully I have succeeded in explaining why I don't think unit testing should 
be a goal in itself.

Finally, I do recognize your concerns regarding integration tests:

- Integration tests are failing very often on Bamboo - true indeed, and we 
need to work on that. On the other hand, in many cases the failure was valid, 
and we have already fixed quite a few issues that came to light after a 
failing integration test. 
- No code coverage - also true; we should have a look at code coverage support 
in Pax Exam. It's on the roadmap (see 
http://issues.ops4j.org/browse/PAXEXAM-139), but I doubt whether Pax itself 
will support this in the near future. I tried Clover before, but it displayed 
code coverage of the tests themselves instead of the code invoked from the 
tests (that should be fixable, I think).
- Heavy - again true, but that is also caused by the way we implemented the 
tests. We could annotate a single class with @Test methods that invoke all the 
other tests, in which case the framework would be started only once. We could 
also take a look at JUnit4OSGi instead (see 
http://ipojo-dark-side.blogspot.com/2009/05/junit4osgi-paxexam-mix.html).

Regards, Ivo




-----Original Message-----
From: amdatu-developers-bounces at amdatu.org 
[mailto:[email protected]] On Behalf Of Bram de Kruijff
Sent: maandag 17 januari 2011 16:14
To: amdatu-developers at amdatu.org
Subject: [Amdatu-developers] Conventions on unit/integration testing

Hi List,

a while back we briefly discussed
(http://lists.amdatu.org/pipermail/amdatu-developers/2010-October/000025.html)
the need for unit/integration testing and coverage measurement. At
this point we have a (kind of shaky) Pax itest suite without coverage
measurement, about 26,755 LOC covered by no more than 52 JUnit tests,
and no policy/guidelines on the matter. This deeply concerns me, and
after doing some refactoring on code not under JUnit test for
AMDATU-263 I thought it time to reopen this discussion and at least
come to a documented consensus/guideline.

I am not going to re-iterate the rationale for unit testing or the
difference in purpose, scope and applicability between unit and
integration testing. I think unit testing is good (to an extent) for
the numerous reasons documented all around the web, and it is
especially valuable for a fine-grained component/service model such as
ours, where any particular integration test most probably only covers
a subset of the lower-level use cases of a unit. At the same time you
can already observe how heavy the itest suite is becoming, and I think
it is unreasonable for a developer to have to rely on executing it
fully each dev cycle to convince him/herself that a local refactor
does not violate the basic contract of the unit. Even now it is still
possible to do so on a local machine, which may very well change at
some point due to external integrations (cloud / IaaS / etc.).

Therefore I'd like to propose that we complement our guidelines with a
'unit test unless' policy. It should roughly say that all (business)
code must have a reasonable degree of coverage, e.g. 75%, and IMHO
even basic service lifecycle (but I may be overplaying my hand here).
It's easy, it's lightweight, it's invaluable (literally) on a large
codebase... it's common sense! I know there are some different
opinions on the matter, but I'd like to hear them all, discuss them
and come to a policy, because if we do not implement a guideline now
we will never be able to go back and deal with it for the rest of the
codebase's lifecycle.

WDYT?
Bram
_______________________________________________
Amdatu-developers mailing list
Amdatu-developers at amdatu.org
http://lists.amdatu.org/mailman/listinfo/amdatu-developers
