1. Do you use some kind of testing criterion? In other words, is there some
kind of rule describing what should be tested? Examples of such a criterion
could be: "every line of code should be executed during testing" or "every
method should have at least one test case." Such a criterion is often
referred to as a code coverage criterion.

There is no real rule.

Test cases tend to be added to Ant after problems have appeared, for
instance to demonstrate that a change fixes a bug report from Bugzilla. In
some instances, test cases are added proactively, to ensure that
functionality will be preserved after a change.
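For illustration, such a test is usually a small JUnit case along these
lines (the class name and the scenario are made up here; only
Project.translatePath is a real Ant utility method):

    import java.io.File;
    import junit.framework.TestCase;
    import org.apache.tools.ant.Project;

    public class TranslatePathTest extends TestCase {
        // Illustrative only: checks that mixed separators come out as the
        // platform's File.separator, the kind of behaviour a Bugzilla
        // report might have flagged and a fix would need to pin down.
        public void testMixedSeparators() {
            String expected = "foo" + File.separator + "bar"
                + File.separator + "baz";
            assertEquals(expected, Project.translatePath("foo\\bar/baz"));
        }
    }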

One of the problems with the test cases is that a lot of tasks require
specific external resources (databases, application servers, version control
systems, SMTP mail servers, ...) which do not always exist and do not exist
under the same name everywhere. These tasks are often poorly covered by
test cases.
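Where such tests exist at all, they usually have to guard themselves and
silently do nothing when the resource is not available, roughly like this
(the property name ant.test.smtp.host is made up for the example):

    import java.io.IOException;
    import java.net.Socket;
    import junit.framework.TestCase;

    public class MailTaskTest extends TestCase {
        public void testSendMail() throws IOException {
            String host = System.getProperty("ant.test.smtp.host");
            if (host == null) {
                // No SMTP server configured on this machine: the test
                // degenerates into a no-op rather than failing.
                return;
            }
            // Only checks that the server is reachable; a real test would
            // run the <mail> task against it and inspect the result.
            Socket s = new Socket(host, 25);
            s.close();
        }
    }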

The situation is not ideal, but the current test suite runs on my PC under
Win 2000 in approximately 5 minutes and gives a hint as to whether the
current version of Ant is OK or not.

2. Is the level of compliance to the testing criterion subject to
measurement? In the case of the first example, the percentage of lines
executed during testing could be measured. Do you use a tool to
automatically calculate your code coverage?

Since there is no real rule, there is also no measurement.


