Tom Eyckmans wrote:
Hi guys,
I'm working on the native test framework execution stuff.
Excellent
I just want to let you guys know how I'm trying to implement this so
you can contribute ideas / better ways of doing things.
Global overview (a rough sketch of the queues follows after this list):
- recurse through compiled test classes, filter out class files that
contain a $ sign => testClassFilesQueue (BlockingQueue)
- testClassFilesQueue => scan for test classes (the first test class
found determines the test framework) => testInfosQueue (BlockingQueue)
- testServer / testClient
  - the server controls the forked test process:
    - the test process requests work (which test to execute next,
      dequeued from testInfosQueue -> work/memory throttle; no more
      work -> terminate)
  - the client sends progress events to the server =>
    progressEventQueue => these events get duplicated to BuildListener
    notifications and Test Report output, which both need to be
    executed serially.
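A minimal sketch of the queue hand-off I have in mind (the type names
here are placeholders, not the actual classes):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Placeholder types, only to illustrate the pipeline.
    class TestClassFile { /* a compiled .class file under test-classes */ }
    class TestInfo { /* a reference to a detected test class */ }

    class TestDetectionQueues {
        // the directory scan fills this while recursing through test-classes
        final BlockingQueue<TestClassFile> testClassFilesQueue =
                new LinkedBlockingQueue<TestClassFile>();
        // the detection thread fills this; the test server drains it to
        // feed the forked vm (bounded, so it doubles as a memory throttle)
        final BlockingQueue<TestInfo> testInfosQueue =
                new LinkedBlockingQueue<TestInfo>(1000);
    }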
Smarter test detection. This is done on the compiled test classes.
Only test classes whose names don't contain a $ sign are queued on a
blocking queue for processing.
I'm pretty sure it's quite possible for a test class to have a $ in its
name - it's just a bit unusual. We should probably support this. What
was the thinking behind excluding classes with a $ sign?
Queued classes are checked using Javassist to get at the needed
information (annotations, parent classes). The logic required to
detect test classes is located in implementations of the TestFramework
interface. I've added some base logic to scan for annotations in the
AbstractTestFramework class; the logic in there checks for a number of
annotations on methods. As Adam mentioned to me, test methods can
be inherited, so if no methods are found on the current class it will
scan upwards on the inheritance tree and stop when the parent class is
java.lang.Object. Scanning up the inheritance tree is currently only
done when the super test class is available in the test-classes
directory (eventually scanning upwards should be done over the
complete classpath).
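Roughly, the annotation scan on a queued class looks something like
this (a sketch only, not the actual code; the detector class is made
up):

    import javassist.ClassPool;
    import javassist.CtClass;
    import javassist.CtMethod;
    import javassist.NotFoundException;
    import javassist.bytecode.AnnotationsAttribute;

    class JUnit4Detector {
        private final ClassPool pool;

        JUnit4Detector(String testClassesDir) throws NotFoundException {
            pool = new ClassPool(true);
            pool.appendClassPath(testClassesDir);
        }

        // true when the class, or one of its super classes, declares a
        // method annotated with @org.junit.Test
        boolean isTestClass(String className) throws NotFoundException {
            CtClass current = pool.get(className);
            while (current != null && !"java.lang.Object".equals(current.getName())) {
                for (CtMethod method : current.getDeclaredMethods()) {
                    AnnotationsAttribute attr = (AnnotationsAttribute)
                            method.getMethodInfo2().getAttribute(AnnotationsAttribute.visibleTag);
                    if (attr != null && attr.getAnnotation("org.junit.Test") != null) {
                        return true;
                    }
                }
                // fails with NotFoundException when the super class is not
                // on the pool's class path (i.e. not under test-classes)
                current = current.getSuperclass();
            }
            return false;
        }
    }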
This is something we could cache later on. In fact, it would be
interesting to somehow sync this meta-info up with that produced by an
incremental Compile task.
When a class is identified as an actual test class, an implementation
of TestInfo is queued on the testInfosQueue. This object is a reference
to the test class, and the type of the implementation determines the
way the test class needs to be processed. This is needed to support
executing JUnit TestCases and TestSuites or TestNG test classes and xml
suite files.
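For example, the TestInfo hierarchy could be as simple as this (a
sketch; the names are made up):

    // A TestInfo only carries a reference to what has to be run; the
    // concrete type tells the framework-specific processor how to run it.
    interface TestInfo {
        String getRef(); // a class name, or a path to an xml suite file
    }

    class JUnitTestClassInfo implements TestInfo {
        private final String className;
        JUnitTestClassInfo(String className) { this.className = className; }
        public String getRef() { return className; }
    }

    class TestNGSuiteFileInfo implements TestInfo {
        private final String suiteFile;
        TestNGSuiteFileInfo(String suiteFile) { this.suiteFile = suiteFile; }
        public String getRef() { return suiteFile; }
    }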
When the first test class is identified, the TestFramework that
identified it is used; from that point on only that framework is used
for test detection.
This is too dependent on the environment when the test source contains
both types of tests. For example, on some file systems we may find the
TestNG tests first, and on others we may find the JUnit tests first. We
should do something which behaves the same way everywhere, something
like:
- have the build file specify which framework to use for a given test
suite, with a default (i.e. what we do now)
- or, scan for all types of tests and assert that there is exactly 1
test framework detected (see the sketch below).
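The second option could look something like this (a sketch; the
isTestClass method here is an assumption, not necessarily what the
TestFramework interface looks like today):

    import java.util.ArrayList;
    import java.util.List;

    // Cut-down view of the TestFramework interface, just for this sketch.
    interface TestFramework {
        boolean isTestClass(String className);
    }

    class FrameworkSelection {
        // Scan with every known framework and fail when we don't end up
        // with exactly one framework that matched something.
        static TestFramework selectFramework(List<TestFramework> frameworks,
                                             List<String> testClassNames) {
            List<TestFramework> matched = new ArrayList<TestFramework>();
            for (TestFramework framework : frameworks) {
                for (String className : testClassNames) {
                    if (framework.isTestClass(className)) {
                        matched.add(framework);
                        break;
                    }
                }
            }
            if (matched.size() != 1) {
                throw new IllegalStateException(
                        "expected exactly 1 test framework, but detected " + matched.size());
            }
            return matched.get(0);
        }
    }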
The only mechanism I could come up with to provide the most 'real-time'
progress notifications and a way to control the forked vm was something
network-ish => a client-server communication process. Currently I've
implemented this as a java.nio socket client/server using Apache MINA.
Another option, which may be simpler, is to use the stdin and stdout
streams of the child process.
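For example, from the build process side (purely a sketch; the worker
main class and classpath below are placeholders):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;

    class StdioTestServer {
        public static void main(String[] args) throws Exception {
            // "TestWorker" and the classpath are placeholders for the real forked main class
            Process forkedVm = new ProcessBuilder(
                    "java", "-cp", "placeholder-test-classpath", "TestWorker").start();

            // the child's stdin: send it the next piece of work
            PrintWriter toChild = new PrintWriter(forkedVm.getOutputStream(), true);
            toChild.println("org.example.SomeTest");

            // the child's stdout: read progress events back
            BufferedReader fromChild = new BufferedReader(
                    new InputStreamReader(forkedVm.getInputStream()));
            String event;
            while ((event = fromChild.readLine()) != null) {
                System.out.println("progress: " + event);
            }

            toChild.close();
            forkedVm.waitFor();
        }
    }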
To control the ports used by the server processes (across multiple
Gradle builds running at the same time) I'm currently using a single
file in ~/.gradle/internal/testing/ports.used that is exclusively
locked while a new port is being determined or when a port is no
longer used. Currently I'm starting to use ports from 2000 on; I've
just picked this number, so it may not be the best starting point.
Ideally I want to detect free ports so there is really no need for any
port configuration.
A simpler option would be to let the OS select the port when creating
the listen socket in the build process, then let the child process know
which port was selected, either with a command-line arg, or system
property, or using the stdin of the child process.
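i.e. roughly (a sketch):

    import java.net.ServerSocket;

    class PortSelection {
        public static void main(String[] args) throws Exception {
            // port 0 => the OS picks any free port, so no ports.used file is needed
            ServerSocket listenSocket = new ServerSocket(0);
            int port = listenSocket.getLocalPort();
            // hand the chosen port to the forked vm, e.g. as a system property
            String forkArg = "-Dtest.server.port=" + port;
            System.out.println(forkArg);
            listenSocket.close();
        }
    }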
With this client-server in place, the forked vm gets work from the
server, which dequeues work from the testInfosQueue (dequeueing in
blocks of 100 (just to name a number) tests). We can also provide a way
for the forked vm to wait until we want it to start executing tests
(this is useful when a user wants to debug the tests: the user
specifies -Dtest.debug and Gradle asks the user to confirm, which may
be a first way of solving GRADLE-388; ideally I'd want to detect when
the debugger is attached and proceed then).
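The block dequeueing itself could be as simple as this (a sketch; the
block size is arbitrary and TestInfo here is just a placeholder):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;

    class WorkDispatcher {
        interface TestInfo { }  // placeholder for the real TestInfo

        // Called when the forked vm asks for more work. An empty result means
        // either "nothing ready yet" or "no more work", depending on whether
        // test detection has finished.
        static List<TestInfo> nextBlock(BlockingQueue<TestInfo> testInfosQueue) {
            List<TestInfo> block = new ArrayList<TestInfo>();
            testInfosQueue.drainTo(block, 100); // 100 just to name a number
            return block;
        }
    }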
Progress events are sent by the client and queued on the server side;
these can be used to notify BuildListeners and for test output.
I don't think we should add anything to BuildListener for this. These
events should go on a new TestListener interface. They should be
registered with the Test task, or possibly some convention object added
by the Java plugin.
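Something as small as this might be enough to start with (a sketch; the
method names are just a guess):

    // A first cut at a TestListener, registered on the Test task.
    interface TestListener {
        void testStarted(String testClassName, String testName);
        void testPassed(String testClassName, String testName);
        void testFailed(String testClassName, String testName, Throwable failure);
    }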
I'm currently undecided where to do the output processing.
What do you mean by 'output processing'?
I'm currently in favour of doing this in the Gradle build process, so
as to limit the classpath of the forked vm. I think we can re-use some
of the Ant JUnit output code, but I'm not sure if this is something
that we want.
I'd love to receive feedback on this.
Thx,
Tom