2006/9/4, Richard Liang <[EMAIL PROTECTED]>:


Alex Blewitt wrote:
> IMNSHO it doesn't make sense to arbitrarily partition the tests based
> on a moniker, such as 'integration test', 'unit test', 'regression
> test' etc. For one thing, developers are generally not good at
> agreeing on the difference between them :-)

This is really a problem; however, it might be simpler than we imagine.
We are open to any discussion. ;-) In any case, developers are required
to write unit tests.

>
> If you've got fast and slow tests, then have a group for fast and slow
> tests. Then you can choose to just run the fast tests, and any
> automated build system can handle running the slow tests.
IMHO, "fast or slow" may not be the key point. The question is whether
we have any requirements to run only the regression tests.

I believe we do not. If a test case was added to prevent a regression,
it basically means that there was a hole in the test coverage for some
reason. Given that such "holes" are scattered randomly through the
module in question, why would we want to run such a sieve of a test
suite on its own? The only reason I can think of is that regression
tests may *potentially* highlight weak spots in the code (or design)
which are more prone to fail as the code evolves. But even then I see
no reason to run only the regression tests and ignore the others. I'd
rather second Alex on grouping tests by fast/slow, orthogonally to
whether they guard against regressions.
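Just to make the fast/slow idea concrete, here is a minimal sketch of
what such grouping could look like with plain JUnit 3.x suite classes.
This is only an illustration, not an agreed Harmony convention: the
FastTests name, the nested sample test, and the idea of a parallel
SlowTests suite are all assumptions of mine.

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

// FastTests.java -- a suite a developer could run locally on every change;
// a parallel SlowTests suite would be left to the automated build.
public class FastTests {

    // Hypothetical quick, in-memory test; a real suite would simply
    // reference the existing test classes of the module.
    public static class HashMapPutTest extends TestCase {
        public void testPutThenGet() {
            java.util.HashMap map = new java.util.HashMap();
            map.put("key", "value");
            assertEquals("value", map.get("key"));
        }
    }

    public static Test suite() {
        TestSuite suite = new TestSuite("Fast tests");
        suite.addTestSuite(HashMapPutTest.class);
        // Long-running tests (network timeouts, large I/O) would go into
        // a separate SlowTests suite instead, regardless of whether they
        // were originally written as regression tests.
        return suite;
    }
}

An Ant <junit> target (or a naming convention like *FastTest / *SlowTest
picked up by a fileset) could achieve the same split; the suite classes
are just the most portable way to express it.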

For informational purposes, Mikhail (?) suggested a good idea:
explicitly specify the issue number in the test's description comment
(or in an annotation).
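To show what that suggestion might look like in practice, here is a
hedged sketch assuming Java 5 annotations are available (a plain
Javadoc comment carrying the issue number works just as well). The
@RegressionTest annotation, the test class, and the HARMONY-0000 key
are placeholders I made up for illustration, not an existing
convention.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

import junit.framework.TestCase;

/** Hypothetical marker recording which JIRA issue a regression test guards. */
@Retention(RetentionPolicy.RUNTIME)
public @interface RegressionTest {
    /** Issue key, e.g. "HARMONY-0000" (placeholder format). */
    String issue();
}

/** Example usage; kept in the same file only to keep the sketch short. */
class IntegerParseRegressionTest extends TestCase {

    /**
     * Regression test: Integer.parseInt must reject a lone "-".
     * The issue key below is a placeholder, not a real report.
     */
    @RegressionTest(issue = "HARMONY-0000")
    public void test_parseInt_minusOnly() {
        try {
            Integer.parseInt("-");
            fail("NumberFormatException expected");
        } catch (NumberFormatException expected) {
            // the behaviour the (hypothetical) fix is guarding
        }
    }
}

Either form -- the comment or the runtime-visible annotation -- makes
it easy to trace a failing test back to the original issue without
implying that regression tests need a suite of their own.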

[snip]

--
Alexey

