Hi devs,

I've finally spent the time to measure our TPC (Total Percentage Coverage) 
using Clover.
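
In case you want to reproduce, this is roughly the kind of invocation used 
(goal names from Atlassian's maven-clover2-plugin; the exact profiles and 
options in our build may differ):

  mvn clover2:setup install clover2:aggregate clover2:clover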

The reports are very interesting and show where we need to focus our testing 
efforts (be they automated or manual).

Here are the raw top level results:
===========================

All results: http://maven.xwiki.org/site/clover/20120701/

* Only Commons (this means unit tests and integration tests in the commons 
repository only): 62.4%
* Only Rendering: 75.5%
* Only Platform: 36.8%
* Commons+Rendering (i.e. Rendering tests also exercise Commons code): 72.1%
* Commons+Rendering+Platform: 47.4%
* Commons+Rendering+Platform+Enterprise: 

So our code is roughly tested at X% (note that 100% TPC doesn't mean the code 
is working: it means each line of code has been executed in some way, but it 
doesn't mean all input combinations have been tested!).
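
To illustrate (a minimal, hypothetical Java sketch, not from our code base): 
the single JUnit 4 test below executes every line of apply(), giving 100% 
TPC, yet two of the four flag combinations are never verified:

  import org.junit.Assert;
  import org.junit.Test;

  public class Discount
  {
      public int apply(int price, boolean member, boolean sale)
      {
          int result = price;
          if (member) {
              result -= 10;
          }
          if (sale) {
              result -= 20;
          }
          return result;
      }
  }

  public class DiscountTest
  {
      @Test
      public void applyAllDiscounts()
      {
          // Covers all lines of apply() (100% TPC) but only tests one of
          // the four (member, sale) combinations.
          Assert.assertEquals(70, new Discount().apply(100, true, true));
      }
  }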

A good value is between 70% and 80%.

We're actually better than I thought (I had 50% or lower in mind) :) However 
there's a huge disparity between code with 90+% coverage and code with 0% 
coverage (402 classes have 0% coverage!).

Quick high level analysis:
====================

* Commons is missing quite a lot of unit tests
* WikiModel in Rendering is dragging the TPC down; I think we were above 80% 
before we included WikiModel
* Platform has a very low level of unit tests, which means it's easy to break 
stuff in Platform and hard to verify that what we add there works well in 
general (which means going through longer stabilization phases than necessary)
* Our functional tests are quite effective (even though they take a very long 
time to execute) since they make the TPC go from 47% to 63% for 
Commons+Rendering+Platform. The low unit-test TPC for Platform is probably 
compensated for by the functional tests.

Next steps
=========

* IMO all developers need to look at the results in detail and familiarize 
themselves with the Clover reports
* Each dev should look at the code they have written and ensure it has a good 
level of unit tests
* All new code should get, at the very minimum, 70-80% unit test coverage. 
For example the new diff module in Commons has only 43.4% coverage, which is 
far from enough (see the sketch after this list for one way to enforce a 
minimum)
* Spend time to automate the generation of these reports. We need a dedicated 
agent for this since the build "contaminates" the local repository with 
Clover-instrumented JARs. The overall generation time is around 4-5 hours.
* Dig more around the reports to identify functional areas of XWiki missing 
automated tests. These areas must be covered by manual testing at each release.
* Decide if we want a more rigorous strategy for new code, and how to get 
alerts for modules missing coverage.
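
For the "more rigorous strategy" point, one option (a sketch, assuming we 
keep using the Atlassian maven-clover2-plugin; the threshold value is just 
illustrative) is to fail the build when a module drops below the target, via 
the clover2:check goal:

  <plugin>
    <groupId>com.atlassian.maven.plugins</groupId>
    <artifactId>maven-clover2-plugin</artifactId>
    <configuration>
      <targetPercentage>70%</targetPercentage>
    </configuration>
  </plugin>

  (then run: mvn clover2:instrument clover2:check)

This would also double as the "alert" for modules missing coverage, since the 
CI build would simply go red.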

WDYT?

Please share your analysis in this thread so that we can brainstorm about 
what to do with these reports.

Thanks
-Vincent
