I forgot:

+ Use unspecified virtual host as default.

and

+ Dummy Test Case. Implements a run-through of the interaction with the coordinator for a test, but does not actually test anything.

On 2/26/07, Rupert Smith <[EMAIL PROTECTED]> wrote:
Hi,

I've had a chance now to read this stuff through. I will aim to send out an updated interop test spec around Wednesday-ish, taking on board the points made. Points to take account of:

Alan Conway

> We can write a CppUnit formatter in junit style, but do we need to agree
> on qualified test class names to appear in the report or will
> unqualified class name + function name suffice?

If we go for a centralized controller approach, then the controller can supply the class + function name. It would be nice if the function name in the test report contained the sending and receiving clients' names, plus the test case name. E.g. "testSimpleP2P_FromCpp_ToJava" or something like that.

+ Requirement for definition of test case names and classes.

> Simple, but includes client processing time in the timeout. More
> accurate would be to have timeout in effect only when the client has
> reason to expect something from the broker.

The idea behind the timeouts is that they get reset on every message received (or maybe sent too), so they take no account of client processing time at all. This idea came from when we were writing the performance tests. To begin with I had fixed timeouts within which a test had to run, but we had to adjust these timeouts for different test cases, as some of the perftests take a long time to run. We replaced this with a timeout that gets reset on every message received. Then if one end of the test silently packs in and stops sending, the other end detects the long pause and times out on it. As long as the messages keep flowing, the timeout keeps being reset.

+ Requirement for timeout only when client is waiting for something.

> IOP-8a: the *only* prerequisite for the scripts to run is that they
> are located in a checkout where cpp/make java/mvn have been run. In
> particular they must NOT assume any environment variables are set.

+ IOP-8a. Can they assume that environment variables set up to run make/mvn/msbuild are available?
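The reset-on-every-message timeout described above could be sketched as a small watchdog, something like the following. The class and method names are illustrative only, not part of any Qpid API, and the clock is injected so the behaviour can be tested deterministically:

```java
import java.util.function.LongSupplier;

/**
 * Minimal sketch of a timeout that is reset on every message received
 * (or sent), so it measures gaps in traffic rather than total run time.
 */
public class WatchdogTimeout {
    private final long timeoutMillis;
    private final LongSupplier clock;
    private volatile long lastActivity;

    public WatchdogTimeout(long timeoutMillis, LongSupplier clock) {
        this.timeoutMillis = timeoutMillis;
        this.clock = clock;
        this.lastActivity = clock.getAsLong();
    }

    /** Call this from the message listener on every receive (or send). */
    public void onMessage() {
        lastActivity = clock.getAsLong();
    }

    /** True once no message has arrived within the timeout window. */
    public boolean hasExpired() {
        return clock.getAsLong() - lastActivity > timeoutMillis;
    }
}
```

In real use the clock would simply be `System::currentTimeMillis`, and a monitor thread would poll `hasExpired()` to decide that the other end has silently packed in.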
For example, JAVA_HOME needs to be set and java needs to be on the path for maven. Can scripts assume these are correctly set after a build?

> For consistency and to avoid possible headaches with special characters
> I'd suggest cpp rather than c++ for file/directory names.

+ That. Will change c++ to cpp.

> NB: timing issues - e.g. one that plagues the current C++ topic test. If
> the publisher finishes publishing before all of the subscribers are
> listening (few messages, many subscribers) you get hanging subscribers
> that missed the "TERMINATE" message

I think Gordon's more centralized approach will help clear that up: 'invite', gather all invites, 'assign roles', and wait for 'ready' acknowledgements to come back before issuing a 'start'.

+ Two stage, invite then start, sequence.

Gordon Sim

> I think the approach is great. One thought that occurred is that by
> offloading more work to the controller/master we minimise the amount of
> framework code we need to write for each test in each language. The
> controller would only need to be written in one language.

A more centralized approach: less framework code in each client, more in the centralized coordinator. Tests send back reports to the coordinator, which writes the XML report out. Invite messages contain the parameters for each test, so each test case will need to define its parameters. Yes, I think this approach looks good. It's getting a bit more heavily engineered, but I do agree that there's a saving to be made by putting common work in the coordinator.

+ Rewrite the spec to use this more centralized approach.

I think there should be some sort of compulsory invite that the coordinator uses to discover what clients are available to test. Then if a client cannot accept an invite, the coordinator knows to give that client a fail for that test.
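The two-stage invite-then-start sequence could look roughly like the sketch below. The phase names, role names, and method signatures are assumptions for illustration, not an agreed wire format:

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

/**
 * Sketch of a coordinator that invites, assigns roles, gathers 'ready'
 * acknowledgements, and only then issues 'start'. This ordering avoids the
 * topic-test race where the publisher finishes before subscribers listen.
 */
public class CoordinatorSketch {
    public enum Phase { INVITING, AWAITING_READY, RUNNING }

    private Phase phase = Phase.INVITING;
    private final Set<String> enlisted = new LinkedHashSet<>();
    private final Set<String> ready = new HashSet<>();

    /** A client answers the broadcast invite. */
    public void onEnlist(String client) {
        if (phase == Phase.INVITING) enlisted.add(client);
    }

    /** Close the invite window: first enlistee sends, the rest receive. */
    public Map<String, String> assignRoles() {
        phase = Phase.AWAITING_READY;
        Map<String, String> roles = new LinkedHashMap<>();
        Iterator<String> it = enlisted.iterator();
        if (it.hasNext()) roles.put(it.next(), "SENDER");
        while (it.hasNext()) roles.put(it.next(), "RECEIVER");
        return roles;
    }

    /** 'start' is only issued once every enlisted client has acknowledged. */
    public void onReady(String client) {
        ready.add(client);
        if (phase == Phase.AWAITING_READY && ready.containsAll(enlisted))
            phase = Phase.RUNNING;
    }

    public Phase getPhase() { return phase; }
}
```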
Example: start all clients; the coordinator sends the compulsory invite and all clients acknowledge it; the coordinator then invites them to do the simple p2p test; some clients haven't implemented that one yet, so the coordinator gives those a fail and runs the others.

> I like the idea of having a single client executable for each language

Yes, it makes them easier to run in a fully automated way. There is nothing to stop the clients being written in separate sender and receiver parts with their own main methods, so that they can be run in separate pieces, with another class that ties them together into a single executable for both parts for the purposes of this spec.

Tomas Restrepo

> 3- I think the initial client to broker connection tests might want to be
> handled a bit differently (maybe a set of more automated, but regular,
> integration/unit tests for each client testing different success and failure
> connection conditions).

I think each client needs to have a simple "connect and send a message to itself" test as Test Case 0. This will just be a copy of the existing client tests that do this simple test.

> More to the point, one of the basic ideas you propose is already using
> broker communication and queues to connect the test controller with the
> individual test clients, meaning already you assume a fairly high-level of
> interoperability being possible between clients and brokers (and even
> between clients seeing as how the controller would be written in a single
> language and each client in a different one).

Yes, we're testing Qpid and using Qpid to coordinate the tests! These tests are only going to work if they work. I guess it would be nice if all the coordination logic happened completely out of band to Qpid, but a cross-language asynchronous messaging system just seems like such a convenient tool for writing a set of distributed cross-language tests, how can we resist?
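The compulsory-invite bookkeeping described above amounts to something like the following: any client known from the compulsory invite that does not accept a particular test's invite is recorded as a fail rather than silently skipped. All names here are illustrative:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

/**
 * Sketch of scoring a test invite against the set of clients discovered
 * by the compulsory invite. Declining (or never answering) a test invite
 * is an automatic fail for that client on that test case.
 */
public class InviteScoring {
    public static Map<String, String> score(Set<String> knownClients,
                                            Set<String> accepted) {
        Map<String, String> verdicts = new TreeMap<>();
        for (String client : knownClients) {
            verdicts.put(client, accepted.contains(client) ? "RUN" : "FAIL");
        }
        return verdicts;
    }
}
```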
I'm not too fazed by this, as I always "code for the success scenario" when writing tests and forget about failures. So when it all works, we get green ticks everywhere. When it's not working, arbitrary things go wrong, but hopefully it's robust enough not to freeze the build process. When we don't get green ticks everywhere, there is stuff to be fixed. Unless someone can think of a clever and convenient way to get out of this snake-eating-its-own-tail dependency, we'll just have to go with it.

+ Try to keep the level of functionality required by the central coordinator to a minimum.

I think it's worth adding a few more test cases. One I'm particularly keen on at the moment is a test that sends every possible kind of header field table type and ensures that all clients can send/receive each other's field tables. Any more?

Rupert
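The field table test case suggested above might take a shape like this: build a table with one entry of each basic type, round-trip it through the broker, and compare on the receiving side. The type list below is an illustration, not the full AMQP field table type set, and the plain `Map` stands in for whatever field table class each client library provides:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of a send/receive-all-field-table-types interop test case. */
public class FieldTableSketch {
    /** One entry per basic type the clients should agree on. */
    public static Map<String, Object> allTypesTable() {
        Map<String, Object> table = new LinkedHashMap<>();
        table.put("boolean", Boolean.TRUE);
        table.put("byte", (byte) 0x7f);
        table.put("short", (short) 32000);
        table.put("int", 1 << 30);
        table.put("long", 1L << 62);
        table.put("float", 3.14f);
        table.put("double", 2.718281828);
        table.put("string", "utf8 text");
        return table;
    }

    /** What the receiving client would assert after the round trip. */
    public static boolean roundTripOk(Map<String, Object> sent,
                                      Map<String, Object> received) {
        return sent.equals(received);
    }
}
```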
