Hi guys,

I'm working on the native test framework execution. I just want to let you
know how I'm trying to implement this, so you can contribute ideas or better
ways of doing things.

Global overview:
- recurse through compiled test classes, filter out class files that contain
a $ sign => testClassFilesQueue ( BlockingQueue )
- testClassFilesQueue => scan for test classes (first found test class
determines test framework) => testInfosQueue ( BlockingQueue )
- testServer / testClient
     - server controls the forked test process:
          - test process requests work: the next tests to execute are dequeued
from testInfosQueue (which doubles as a work/memory throttle); no more work ->
terminate
     - client sends progress events to the server => progressEventQueue =>
each event is forwarded to both BuildListener notifications and Test Report
output, which both need to be processed serially (a rough sketch of these
queues follows right after this list).
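Just to make the data flow concrete, the shared state basically boils down to
a few queues. A minimal sketch, where the class name, capacities and the nested
placeholder type are mine and not final:

import java.io.File;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the shared queues the pipeline stages hand work through.
// Bounded queues give the work/memory throttling mentioned above; the
// capacities here are arbitrary placeholders.
public class TestExecutionQueues {

    // placeholder for the TestInfo abstraction described further down
    interface TestInfo {}

    // compiled .class files that passed the '$' filter
    final BlockingQueue<File> testClassFilesQueue = new LinkedBlockingQueue<File>(1024);

    // detected test classes, wrapped in TestInfo implementations
    final BlockingQueue<TestInfo> testInfosQueue = new LinkedBlockingQueue<TestInfo>(1024);

    // progress events received from the forked test client
    final BlockingQueue<Object> progressEventQueue = new LinkedBlockingQueue<Object>();
}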

Smarter test detection. This is done on the compiled test classes. Only class
files whose names don't contain a $ sign are queued on a blocking queue for
processing.
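The filter itself is trivial; something along these lines is what I have in
mind (class and method names are placeholders, not the actual code):

import java.io.File;
import java.util.concurrent.BlockingQueue;

// Recurses through the compiled test-classes directory and queues every
// candidate .class file. Files containing a '$' (inner/anonymous classes)
// are skipped, as are non-class files.
public class TestClassFileScanner {
    private final BlockingQueue<File> testClassFilesQueue;

    public TestClassFileScanner(BlockingQueue<File> testClassFilesQueue) {
        this.testClassFilesQueue = testClassFilesQueue;
    }

    public void scan(File dir) throws InterruptedException {
        File[] children = dir.listFiles();
        if (children == null) {
            return;
        }
        for (File child : children) {
            if (child.isDirectory()) {
                scan(child);
            } else if (child.getName().endsWith(".class") && child.getName().indexOf('$') < 0) {
                testClassFilesQueue.put(child); // blocks when the queue is full
            }
        }
    }
}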

Queued classes are inspected using javassist to get at the required
information (annotations, parent classes). The logic required to detect test
classes is located in implementations of the TestFramework interface. I've
added some base logic to scan for annotations in the AbstractTestFramework
class; the logic in there checks for a number of annotations on methods. As
Adam mentioned to me, test methods can be inherited, so if no annotated
methods are found on the current class it will scan upwards in the
inheritance tree and stop when the parent class is java.lang.Object. Scanning
up the inheritance tree is currently only done when the super test class is
available in the test-classes directory (eventually this scanning should be
done against the complete classpath).
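To make the inheritance scanning concrete, here's roughly the shape of the
check I'm describing. This is only a sketch: the class name is made up, the
annotation name is passed in by the caller, and the real logic lives in the
TestFramework implementations. The ClassPool would have the test-classes
directory (and eventually the full classpath) added to its class path.

import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtMethod;
import javassist.NotFoundException;
import javassist.bytecode.AnnotationsAttribute;
import javassist.bytecode.MethodInfo;

// Walks up the inheritance tree of a candidate class and reports whether any
// declared method carries the given annotation, stopping at java.lang.Object
// or when a superclass isn't available on the pool's class path.
public class AnnotationScanner {
    public boolean hasAnnotatedMethod(ClassPool pool, String className, String annotationName) {
        try {
            CtClass current = pool.get(className);
            while (current != null && !"java.lang.Object".equals(current.getName())) {
                for (CtMethod method : current.getDeclaredMethods()) {
                    MethodInfo info = method.getMethodInfo2();
                    AnnotationsAttribute attr = (AnnotationsAttribute)
                            info.getAttribute(AnnotationsAttribute.visibleTag);
                    if (attr != null && attr.getAnnotation(annotationName) != null) {
                        return true;
                    }
                }
                current = current.getSuperclass();
            }
        } catch (NotFoundException e) {
            // superclass not available (e.g. outside test-classes) -> stop scanning here
        }
        return false;
    }
}

For JUnit 4 this would be called with "org.junit.Test", for TestNG with
"org.testng.annotations.Test".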

When a class is identified as an actual test class, an implementation of
TestInfo is queued on testInfosQueue. This object is a reference to the test
class, and the type of the implementation determines the way the test class
needs to be processed. This is needed to support executing JUnit TestCases
and TestSuites as well as TestNG test classes and xml suite files.
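As a rough illustration of what I mean by "the type of the implementation
determines the processing" (these names are placeholders, not the final API):

// Marker for a unit of test work the forked vm can execute; the concrete
// implementation tells the test framework how to run it.
public interface TestInfo {
}

// A single test class (JUnit TestCase or TestNG test class).
class TestClassInfo implements TestInfo {
    final String testClassName;

    TestClassInfo(String testClassName) {
        this.testClassName = testClassName;
    }
}

// A suite definition (JUnit TestSuite class or TestNG xml suite file).
class TestSuiteInfo implements TestInfo {
    final String suiteResource;

    TestSuiteInfo(String suiteResource) {
        this.suiteResource = suiteResource;
    }
}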

When the first test class is identified, the TestFramework that identified it
is selected, and only that framework is used for test detection from that
point on.
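In code that "first match pins the framework" step would look something like
this; the nested interface is just a stand-in for the real TestFramework:

import java.util.List;

// Tries all registered TestFramework implementations until one recognises a
// test class; from then on only that framework is consulted.
public class TestFrameworkDetector {
    private final List<TestFramework> frameworks;
    private TestFramework detectedFramework;

    public TestFrameworkDetector(List<TestFramework> frameworks) {
        this.frameworks = frameworks;
    }

    public TestFramework frameworkFor(String testClassName) {
        if (detectedFramework != null) {
            return detectedFramework.isTestClass(testClassName) ? detectedFramework : null;
        }
        for (TestFramework candidate : frameworks) {
            if (candidate.isTestClass(testClassName)) {
                detectedFramework = candidate; // first hit pins the framework
                return candidate;
            }
        }
        return null;
    }

    // minimal placeholder for the real TestFramework interface
    public interface TestFramework {
        boolean isTestClass(String testClassName);
    }
}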

The only mechanism I could come up with that provides (near) real-time
progress notifications and a way to control the forked vm is something
network-ish: a client-server communication process. Currently I've
implemented this as a java.nio socket client/server using Apache MINA. To
control the ports used by the server processes (across multiple Gradle builds
running at the same time) I'm currently using a single file,
~/.gradle/internal/testing/ports.used, which is exclusively locked while a
new port is being determined or when a port is released. Currently I start
handing out ports from 2000 on; I've just picked this number, so it may not
be the best starting point. Ideally I want to detect free ports so there is
really no need for any port configuration.
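For the free-port detection, one possible approach (not what's implemented
yet) is to let the OS pick an ephemeral port:

import java.io.IOException;
import java.net.ServerSocket;

// Asks the OS for any free port by binding to port 0, then releases it so
// the real test server can bind to it right after. There is a small race
// window between close() and the real bind, which the ports.used file lock
// could still guard against.
public class FreePortFinder {
    public static int findFreePort() throws IOException {
        ServerSocket socket = new ServerSocket(0);
        try {
            return socket.getLocalPort();
        } finally {
            socket.close();
        }
    }
}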

With this client-server in place, the forked vm gets work from the server,
which dequeues it from the testInfosQueue (in blocks of, say, 100 tests, just
to name a number). We can also provide a way for the forked vm to wait until
we want it to start executing tests. This is useful when a user wants to
debug the tests: the user specifies -Dtest.debug and Gradle asks the user to
confirm, which may be a first step towards solving GRADLE-388 (ideally I'd
want to detect when the debugger is attached and proceed then).
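Handing out work in blocks could be as simple as draining the queue in
chunks. A sketch under the assumption that the real protocol distinguishes
"ask again later" from "terminate" (class name and block size are mine):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Server-side handler for a 'give me work' request from the forked vm.
// Returns up to blockSize TestInfos; an empty list means either 'ask again
// later' (detection still running) or 'terminate' (detection finished),
// which the real protocol would have to distinguish.
public class TestWorkDispatcher {
    private final BlockingQueue<Object> testInfosQueue; // Object stands in for TestInfo
    private final int blockSize;

    public TestWorkDispatcher(BlockingQueue<Object> testInfosQueue, int blockSize) {
        this.testInfosQueue = testInfosQueue;
        this.blockSize = blockSize;
    }

    public List<Object> nextBlock() throws InterruptedException {
        List<Object> block = new ArrayList<Object>(blockSize);
        // hand out at most blockSize tests per request, so the forked vm
        // never holds more work than it can chew on
        Object first = testInfosQueue.poll(100, TimeUnit.MILLISECONDS);
        if (first != null) {
            block.add(first);
            testInfosQueue.drainTo(block, blockSize - 1);
        }
        return block;
    }
}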

Progress events are sent by the client and queued on the server side; these
can be used to notify BuildListeners and to produce the test output. I'm
currently undecided about where to do the output processing. I'm in favour of
doing it in the Gradle build process so as to limit the classpath of the
forked vm. I think we can re-use some of the Ant JUnit output code, but I'm
not sure whether that is something we want.
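For the serial part, a single consumer thread draining progressEventQueue
would guarantee that listeners and report output see the events in the same
order. Roughly (the EventSink interface is a placeholder for whatever the
listener/report abstractions end up being):

import java.util.concurrent.BlockingQueue;

// Single consumer thread that drains the progress event queue and forwards
// each event to both the BuildListener notifications and the test report
// writer, so both always process events serially and in the same order.
public class ProgressEventPump implements Runnable {
    private final BlockingQueue<Object> progressEventQueue;
    private final EventSink listenerBroadcaster; // wraps BuildListener notification
    private final EventSink testReportWriter;    // wraps report/output generation

    public ProgressEventPump(BlockingQueue<Object> progressEventQueue,
                             EventSink listenerBroadcaster, EventSink testReportWriter) {
        this.progressEventQueue = progressEventQueue;
        this.listenerBroadcaster = listenerBroadcaster;
        this.testReportWriter = testReportWriter;
    }

    public void run() {
        try {
            while (true) {
                Object event = progressEventQueue.take();
                listenerBroadcaster.onEvent(event);
                testReportWriter.onEvent(event);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // build is shutting down
        }
    }

    // placeholder for the real listener/report interfaces
    public interface EventSink {
        void onEvent(Object event);
    }
}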

I'd love to receive feedback on this.

Thx,

Tom
