Hi Brian,

I like the idea, but it's certainly difficult to implement, as you need to know all of the dependencies of a class, and there may be other external factors too. That said, anything that happens to get missed could perhaps be required to re-run before any sort of deploy.

Certainly worth thinking about, but definitely a separate issue - it wouldn't actually help in my case, because the larger unit tests spend all of their time in setup (recreating a test database from a known state), and that takes 30 seconds whether you run 1 test case or all 40.

Cheers,
Brett

Brian Ewins wrote:
On a totally different tack... if JUnit cached the test results, it would be possible to do this (a rough sketch follows the list):
- At the start of each run, collect the 'test list' of Java test classes to use and record timestamps for them.
- Compare this to the cached list of *passed* tests; remove any tests that are still current from the 'test list'.
- Run only the tests remaining in the 'test list', and write those that pass to the pass cache.
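
The cache bookkeeping itself looks straightforward - say, a properties file keyed by class name. Something like this (just a sketch, all the names are made up):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Properties;

/** Pass cache: maps test class name -> source timestamp when it last passed. */
public class TestPassCache {
    private File cacheFile;
    private Properties cache = new Properties();

    public TestPassCache(File cacheFile) throws IOException {
        this.cacheFile = cacheFile;
        if (cacheFile.exists()) {
            InputStream in = new FileInputStream(cacheFile);
            try {
                cache.load(in);
            } finally {
                in.close();
            }
        }
    }

    /** A passed test is still current if its source timestamp is unchanged. */
    public boolean isCurrent(String className, long sourceTimestamp) {
        String recorded = cache.getProperty(className);
        return recorded != null && Long.parseLong(recorded) == sourceTimestamp;
    }

    /** Record a pass; the test listener calls this for each passing class. */
    public void recordPass(String className, long sourceTimestamp) {
        cache.setProperty(className, String.valueOf(sourceTimestamp));
    }

    /** Drop a test from the cache (e.g. for test-one) so it must re-run. */
    public void invalidate(String className) {
        cache.remove(className);
    }

    public void save() throws IOException {
        OutputStream out = new FileOutputStream(cacheFile);
        try {
            cache.store(out, "junit pass cache");
        } finally {
            out.close();
        }
    }
}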


This would require a special testrunner designed for multiple runs, but it would match testing during development better - you only run tests that have changed or otherwise need a re-run. The other goals could be updated to interact with the cache sensibly - e.g. test-one would remove that test from the pass cache and run it; the old 'test:test' would clear the cache and run this 'test-some' goal.

Writing a testrunner/testlistener doesn't look entirely trivial, since it's what made Bill Venners go and write his own test framework for Artima. But it could be done.
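
Roughly, the listener half might look like this (again just a sketch, building on the TestPassCache above; how the runner builds the class-name-to-timestamp map is hand-waved):

import java.util.HashSet;
import java.util.Iterator;
import java.util.Map;
import java.util.Set;

import junit.framework.AssertionFailedError;
import junit.framework.Test;
import junit.framework.TestListener;

/**
 * Tracks failures per test class; after the run, every class with no
 * failures is written to the pass cache (keyed per class, since the
 * timestamps are per source file).
 */
public class PassRecordingListener implements TestListener {
    private Set seenClasses = new HashSet();
    private Set failedClasses = new HashSet();
    private TestPassCache cache;
    private Map sourceTimestamps; // class name -> Long timestamp, built by the runner

    public PassRecordingListener(TestPassCache cache, Map sourceTimestamps) {
        this.cache = cache;
        this.sourceTimestamps = sourceTimestamps;
    }

    public void addError(Test test, Throwable t) {
        failedClasses.add(nameOf(test));
    }

    public void addFailure(Test test, AssertionFailedError t) {
        failedClasses.add(nameOf(test));
    }

    public void startTest(Test test) {
        seenClasses.add(nameOf(test));
    }

    public void endTest(Test test) {
    }

    /** Call once after the whole run, before saving the cache. */
    public void recordPasses() {
        for (Iterator it = seenClasses.iterator(); it.hasNext();) {
            String name = (String) it.next();
            if (!failedClasses.contains(name)) {
                Long stamp = (Long) sourceTimestamps.get(name);
                if (stamp != null) {
                    cache.recordPass(name, stamp.longValue());
                }
            }
        }
    }

    private String nameOf(Test test) {
        return test.getClass().getName();
    }
}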

Brett Porter wrote:

Hi - final one today, I promise!

Another previously discussed issue I'd like to go about implementing is
test levels: I have created MAVEN-515 to track it.

proposal: allow multiple unit test sets in the POM
justification: some tests take a long time to run or require resources
that are not always present; these should be omitted in regular
development but run in nightly builds, for example.

implementation:

<unitTest>
  <type>default</type> (same as if type omitted)
  ... as before ...
</unitTest>

<unitTest>
  <type>performance</type>
  ...
</unitTest>

<unitTest>
  <type>integration</type>
  ...
</unitTest>

<unitTest>
  <type>touchstone</type>
  ...
</unitTest>

By default, just the first set is run; but by setting
-Dmaven.test.sets=performance,integration
those sets will also be run (plus default, which is always run).

There will also be the special case of -Dmaven.test.sets=all, which
will run everything. Or perhaps test:test-all?
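
The selection logic itself ought to be simple - roughly this (a sketch only; the names are made up, and how the plugin reads the property is glossed over):

import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

/** Picks which unitTest sets to run from the maven.test.sets property. */
public class TestSetSelector {
    /**
     * @param definedSets set types declared in the POM, e.g. default, performance...
     * @param property value of maven.test.sets, or null if not given
     */
    public static List selectSets(List definedSets, String property) {
        if ("all".equals(property)) {
            return definedSets; // special case: run everything
        }
        List selected = new ArrayList();
        selected.add("default"); // default is always run
        if (property != null) {
            StringTokenizer tok = new StringTokenizer(property, ",");
            while (tok.hasMoreTokens()) {
                String set = tok.nextToken().trim();
                if (definedSets.contains(set) && !selected.contains(set)) {
                    selected.add(set);
                }
            }
        }
        return selected;
    }
}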

They could all use the same unitTestSourceDirectory, or that could move
into each unitTest set (with code to keep it backwards compatible).

Each set would have its own JUnit report, I imagine.

Thoughts and objections?

Cheers,
Brett





--
Web Developer
f2 network ~ everything essential
02 8596 4437




