Rupert Smith wrote:
If we go for a centralized controller approach, then the controller
can supply the class + function name.
I don't follow. If I write a CppUnit test I fix the class name when I
write the tests, not when I run them. That's been my experience of JUnit
too, but there may be extra flexibility there I'm not aware of.
It'd be nice if the function name in the test report contained the
sending and receiving clients' names, plus the test case name.
E.g. "testSimpleP2P_FromCpp_ToJava" or something like that.
I disagree. A test should only know what language it is written in, it
should not need to be aware of what language the other participants may
be in. I should be able to run the same C++ client unmodified against
C++ or Java broker with other participating clients in the test being in
any supported language. The actual components involved in a given test
run should be determined at runtime by the controller, not baked into
the tests. Or maybe I am missing the point and this is related to your
point above that I didn't get?
The idea behind the timeouts is that they get reset on every message
received (or maybe sent too). So they take no account of client
processing time at all. This idea came from when we were writing the
performance tests. To begin with I had fixed timeouts, in which the
test had to run. But we had to adjust these timeouts for different
test cases as some of the perftests take a long time to run. We
replaced this with a timeout that gets reset on every message
received. Then if one end of the test silently packs in and stops
sending, the other end detects the long pause and times out on it. As
long as the messages keep flowing the timeout will keep being reset.
+ Requirement for timeout only when client is waiting for something.
I'm all in favour of
a) avoiding arbitrary timeouts that have to be tweaked and
b) keeping it simple
so let's just try your scheme and make it more complicated only if we
have a real problem. If we trigger the timeouts on sends and receives I
think it sounds like a good heuristic for most cases - it's OK for a
test to take a long time as long as it's doing *something* (even if it's
all sending or all receiving) but there should be no extended period
when nothing happens at all.
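A minimal sketch of that reset-on-activity watchdog, in Python; the class and callback names are illustrative, not part of any existing Qpid test framework:

```python
import threading

class ActivityTimeout:
    """Watchdog that trips only after an extended quiet period with no
    sends or receives, rather than after a fixed total test duration."""

    def __init__(self, quiet_period, on_timeout):
        self.quiet_period = quiet_period  # seconds of silence tolerated
        self.on_timeout = on_timeout      # called if the test goes quiet
        self._timer = None
        self._lock = threading.Lock()

    def reset(self):
        """Call this on every message sent or received."""
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.quiet_period, self.on_timeout)
            self._timer.daemon = True
            self._timer.start()

    def cancel(self):
        """Call when the test completes normally."""
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
```

As long as messages keep flowing, each reset() pushes the deadline back; a client that silently packs in stops the resets, and the other end times out.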
IOP-8a: the *only* prerequisite for the scripts to run is that they
are located in a checkout where cpp/make java/mvn have been run. In
particular they must NOT assume any environment variables are set.
+ IOP-8a.
Can they assume that the environment variables set up to run
make/mvn/msbuild are available? For example, JAVA_HOME and java on the
path need to be set up for Maven. Can scripts assume they are
correctly set after a build?
That's exactly what I want to avoid. We need one-button build and test
scripts in the interop toolkit so everyone doesn't have to learn the
random quirks of every build system to find out if their stuff plays
nice with others. I want to be able to do something like this:
svn co https://blah/qpid
cd qpid/interop/bin
build_everything
run_interop_tests
and find out what interoperates and what doesn't. I know setting
JAVA_HOME seems like a small thing, but if I could get back the hours
of my life that have been wasted figuring out and doing the one or two
small things for each of two or three systems in three or four languages
on five or six platforms just to make the #)@*! interop tests
RUN, I would be a younger and more optimistic man today.
I think Gordon's more centralized approach will help clear that up.
'Invite', gather all invites, 'assign roles' and wait for 'ready'
acknowledgements to come back in before issuing a 'start'.
+ Two stage, invite then start, sequence.
++ from me; Gordon's is a better articulated version of what I was
trying to say :)
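That invite/assign/ready/start sequence could look something like the sketch below; the control-channel object and the message field names are assumptions, and a real coordinator would carry this traffic over broker queues:

```python
def run_test_case(control, test_name, required_roles):
    """Drive one test case through the two-stage sequence: invite first,
    and only issue 'start' once every participant has acknowledged."""
    # Stage 1: broadcast an invite and gather enlist replies from the
    # clients willing and able to take part.
    control.broadcast({"type": "invite", "test": test_name})
    enlisted = control.gather_replies("enlist")

    # Assign a role to each enlisted client, then wait for every one of
    # them to acknowledge that it is ready.
    for client, role in zip(enlisted, required_roles):
        control.send(client, {"type": "assign_role", "role": role})
    control.gather_replies("ready")

    # Stage 2: all participants are set up, so fire the start signal
    # and collect the test reports that come back.
    control.broadcast({"type": "start", "test": test_name})
    return control.gather_replies("report")
```

The point of the two stages is that no client begins sending until every role has been filled and acknowledged, so a missing participant shows up as a failed setup rather than a hung test.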
More centralized approach. Less framework code in each client, more in
the centralized coordinator. Tests send back reports to the
coordinator which writes the XML report out. Invite messages contain
the parameters for each test, so each test case will need to define
its parameters. Yes, I think this approach looks good. It's getting a
bit more heavily engineered, but I do agree that there's a saving to be
made by putting common work in the coordinator.
+ Rewrite the spec to use this more centralized approach.
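For the reporting side, a sketch of the coordinator collecting the per-client reports and writing them out as XML; the report dictionary keys and the JUnit-style element names here are assumptions, not an agreed format:

```python
import xml.etree.ElementTree as ET

def write_report(test_name, reports, out_path):
    """Aggregate the reports sent back by the test clients into a single
    XML file, so the clients themselves never touch report output."""
    failures = sum(1 for r in reports if not r["passed"])
    suite = ET.Element("testsuite", name=test_name,
                       tests=str(len(reports)), failures=str(failures))
    for r in reports:
        case = ET.SubElement(suite, "testcase",
                             classname=test_name, name=r["client"])
        if not r["passed"]:
            ET.SubElement(case, "failure").text = r.get("reason", "")
    ET.ElementTree(suite).write(out_path, encoding="utf-8",
                                xml_declaration=True)
```

Using a JUnit-style layout is just one option, but it means existing CI tooling can render the interop results with no extra work.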
Careful not to get carried away here. I think most of the central
coordination we need can be achieved using the broker itself and a set
of conventions about using queues. We need something to fire up the
clients, but most of the work there should be done by CppUnit, JUnit or
whatever. I'm thinking more of scripts to kick off the collection of
clients we want to test at roughly the same time and then let them hash
it out through the broker. Also, let's not overlook the reporting side;
that's where ad-hoc test frameworks usually suck the most.
More to the point, one of the basic ideas you propose is already using
broker communication and queues to connect the test controller with the
individual test clients, meaning you already assume a fairly high level
of interoperability being possible between clients and brokers (and even
between clients, seeing as the controller would be written in a single
language and each client in a different one).
Yes, we're testing Qpid and using Qpid to coordinate the tests! These
tests are only going to work if they work. I guess it would be nice
if all the coordination logic happened completely out of band to Qpid...
No, please no! Using Qpid to manage Qpid interop testing is absolutely
the right thing to do. We will learn a lot about the Qpid user
experience by using it to solve our own problems, and we will avoid
wasting time stitching together technologies that are otherwise
irrelevant to us. If we can't achieve the level of interop needed to use
Qpid as our test fabric then we are not in the right profession.
+ Try to keep the level of functionality required by the central
coordinator to a minimum.
I think it's worth adding a few more test cases. One I'm particularly
keen on at the moment is a test that sends every possible kind of
header field table type and ensures that all clients can send/receive
each other's field tables. Any more?
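A sketch of what that field-table coverage case might check; the sample values and the checking helper are illustrative assumptions, not a specific Qpid binding, and the real list of types should be fleshed out against the AMQP spec:

```python
# Reference field table with one entry per type we want covered. The
# exact wire types each client supports will vary, so this sample set
# is only a starting point.
SAMPLE_FIELD_TABLE = {
    "a_string":  "hello",
    "an_int":    42,
    "a_long":    2 ** 40,
    "a_bool":    True,
    "a_double":  3.14,
    "a_bytes":   b"\x00\x01",
    "a_table":   {"inner": 1},   # nested field table
    "an_array":  [1, 2, 3],
}

def check_field_table_roundtrip(received):
    """Compare a field table received from another client against the
    reference, reporting entries dropped or altered in transit."""
    problems = []
    for key, expected in SAMPLE_FIELD_TABLE.items():
        if key not in received:
            problems.append("missing entry: %s" % key)
        elif received[key] != expected:
            problems.append("value mismatch: %s" % key)
    return problems
```

Each client would send SAMPLE_FIELD_TABLE in a message's headers and run the check on whatever it receives from the other participants, so a client that silently drops or coerces a type gets flagged rather than passing by accident.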
The AMQP spec itself provides quite a few test cases in XML. I started
on that road with the python tests but didn't go very far. I think they
make a good shopping list to get started.