On 2/27/07, Alan Conway <[EMAIL PROTECTED]> wrote:
> Rupert Smith wrote:
> > If we go for a centralized controller approach, then the controller
> > can supply the class + function name.
> I don't follow. If I write a CppUnit test I fix the class name when I
> write the tests, not when I run them. That's been my experience of JUnit
> too, but there may be extra flexibility there I'm not aware of.
> > It'd be nice if the function name in the test report contained the
> > sending and receiving clients' names, plus
> > the test case name. E.g. "testSimpleP2P_FromCpp_ToJava" or something
> > like that.
> >
> I disagree. A test should only know what language it is written in; it
> should not need to be aware of what language the other participants may
> be in. I should be able to run the same C++ client unmodified against a
> C++ or Java broker, with the other participating clients in the test
> being in any supported language. The actual components involved in a
> given test run should be determined at runtime by the controller, not
> baked into the tests. Or maybe I am missing the point and this is
> related to your point above that I didn't get?

What I was imagining is that each client would hear the declarations
of the other clients: when each client declares itself it declares its
name, and it is these declared names that would be used to name the
test outputs. Which is what this rule is about:

 IOP-27. Client Name. Each test client will provide a unique name for
itself that reflects its implementation language and distinguishes it
from the other clients. Clients should append a timestamp or UUID onto
this name to cater for the case where the same client is used multiple
times in an interop test. For example, the same client might be run on
two different operating systems, in order to check that it works
correctly on both.
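
To make that concrete, a Java client might build its name with something
like the sketch below (the class and method names are just made up for
illustration, not existing code):

    import java.util.UUID;

    public class ClientName
    {
        // Builds a unique client name per IOP-27: an implementation
        // language prefix plus a UUID (a timestamp would do equally well).
        public static String create()
        {
            return "Java-" + UUID.randomUUID();
        }
    }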

So, if the client "Java-32454" heard the client "Cpp-436565" declare
itself (and others too), it knows it is going to have to run all of
its Test Cases against that client (and the others). I think it would
be good if the test results for that test reflected the fact that it
was a Java to Cpp interop test, making it easy to spot in the results
which combination of clients produces interop problems. It might even
be advantageous to get the broker type in there somewhere too.

Originally, I was thinking that each client would be responsible for
writing out the results of the tests where it plays the sender role, in
the JUnit XML format. When Gordon suggested a more centralized
approach, I liked the idea, because only the coordinator is going to do
the result logging, saving us the trouble of writing it in each
implementation language. So, now I'm thinking that the coordinator
sends out an invite for test case X; "Java-32765" and "Cpp-21364"
reply to it; the coordinator assigns the sender role to one and the
receiver role to the other, runs test case X (through broker Y), and so
on for all the other permutations. The coordinator then knows that this
is a Java to Cpp test for case X through broker Y, so it can name the
test results appropriately. If the coordinator is written in Java, I
know that it is definitely possible to make it use JUnit to dynamically
create and name test cases like this; it may require writing a special
test decorator or test case implementation or something, but it can be
done.
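
As a rough illustration of what I mean (only a sketch; the class and the
exact name format are made up, not code that exists yet), a JUnit 3
style test case can take its name from the participants at runtime:

    import junit.framework.TestCase;

    public class InteropTestCase extends TestCase
    {
        // Names the test after the case, sender, receiver and broker, e.g.
        // "testSimpleP2P_FromJava-32765_ToCpp-21364_OnBrokerY".
        public InteropTestCase(String testCaseName, String sender, String receiver, String broker)
        {
            super(testCaseName + "_From" + sender + "_To" + receiver + "_On" + broker);
        }

        protected void runTest() throws Throwable
        {
            // The coordinator would assign the sender and receiver roles,
            // run the test case through the broker and assert on the
            // results reported back by the clients.
        }
    }

The XML report formatters take the test name from getName(), so reports
generated this way should show the client combination directly.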

I want to be able to do something like this:

svn co https://blah/qpid
cd qpid/interop/bin
build_everything
run_interop_tests

What I'm thinking is that you will have to do a little bit more than
this. To begin with, the qpid/interop directory won't contain the
scripts to start the brokers or clients; they will be put there as a
result of doing the build. I was thinking that the startall and
testall scripts would probably already exist under /interop, as their
purpose is to work out what client scripts are available to run. Which
is what these requirements were about:

IOP-8. Broker Start Script. The Java and C++ brokers will define
scripts that can start the broker running on the local machine, and
these scripts will be located at interop/java/broker/start and
interop/cpp/broker/start. The Java and C++ build processes will
generate these scripts (or copy pre-defined ones to the output
location) as part of the build.

IOP-14. Client Start Scripts. For each client implementation,
<client>, there will be a start script located at
interop/<client>/client/start. The build process for each client
will generate these scripts and output them to this location as part
of the build.
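
Just to illustrate the discovery idea (whether it lives in the
startall/testall scripts themselves or in a Java coordinator is still
open, and none of this is real code yet), finding the available clients
could be as simple as scanning for those start scripts:

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;

    public class ClientScanner
    {
        // Returns the start scripts found under interop/*/client/start.
        public static List<File> findClientStartScripts(File interopDir)
        {
            List<File> scripts = new ArrayList<File>();
            File[] entries = interopDir.listFiles();

            if (entries != null)
            {
                for (File entry : entries)
                {
                    File start = new File(entry, "client/start");

                    if (start.exists())
                    {
                        scripts.add(start);
                    }
                }
            }

            return scripts;
        }
    }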

So I'm imagining that in order to run the interop tests you'll have to do:

svn co https://blah/qpid
cd qpid/cpp
./configure
make    (puts the cpp broker and client scripts under
interop/cpp/broker and interop/cpp/client)

cd qpid/java
mvn     (puts the java broker and client scripts under
interop/java/broker and interop/java/client)

cd qpid/interop
cpp/broker/start
./startall    (starts all the available clients running)
./testall     (starts the coordinator running to kick off the tests)
cpp/broker/stop

The reason I thought this would be a good approach is that you can
generate just the clients you want to test and the startall script
will find them. Obviously, during a complete build-and-test-everything
run, all clients will be generated. I'm thinking that eventually
I want an automated build server to run that full test procedure on a
daily basis, and that it is going to have to run this procedure
successfully on more than one machine to cater for Windows and Unix
builds. So, for example, it might run the cpp build on a Linux box, run
the .net build on a Windows box, start the broker on the Linux box,
run startall on both boxes, run testall on the Linux box, and stop the
broker on the Linux box.

Is this an acceptable approach? Can the build scripts for each client
inject whatever paths and environment variables they need into their
start scripts during their builds?
