Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Alan Conway

Daniel Kulp wrote:
> Alan,
>
> If you change to look at just the real source:
> wc `find java/client java/common -wholename '*/src/main/*.java'` | tail -1
>   33180  102555 1049430 total
>
> Thus, well over half the code in the java tree is tests.  That's a
> GOOD thing.  (If the tests are actually running and testing things
> correctly.  That could be a false assumption.  I don't really know.)

That makes sense. Wish I could say the same of the C++ broker!


Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Daniel Kulp

Alan,

If you change to look at just the real source:
wc `find java/client java/common -wholename '*/src/main/*.java'` | tail -1
  33180  102555 1049430 total

Thus, well over half the code in the java tree is tests.  That's a
GOOD thing.  (If the tests are actually running and testing things
correctly.  That could be a false assumption.  I don't really know.)


Dan




On Wednesday 14 March 2007 10:31, Alan Conway wrote:
> wc `find cpp -name '*.h' -o -name '*.cpp'` | tail -1
>   42537  133707 1247482 total
> wc `find java/client java/common -name '*.java'` | tail -1
>   77318  232166 2475349 total
> wc `find python -name '*.py'` | tail -1
>   4396  14149 149905 total
> wc `find ruby -name '*.rb'` | tail -1
>  1312  3465 29522 total

-- 
J. Daniel Kulp
Principal Engineer
IONA
P: 781-902-8727C: 508-380-7194
[EMAIL PROTECTED]
http://www.dankulp.com/blog


Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Alan Conway

Rupert Smith wrote:
> Actually, I see I could have done:
>
> Pattern.matches(regexp, somestring);
>
> but you get the idea.

I see I'll have to put my code where my mouth is :) When I get to this
I'll see if I can't knock up a python or ruby port that convinces you.

Cheers,
Alan.


Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Rupert Smith

Actually, I see I could have done:

Pattern.matches(regexp, somestring);

but you get the idea.





On 3/14/07, Rupert Smith <[EMAIL PROTECTED]> wrote:


On 3/14/07, Alan Conway <[EMAIL PROTECTED]> wrote:
>
> If you can identify a subset of the Java codebase that corresponds to
> the python client's
> functionality, filter out all the comments and get a byte count
> difference explainable as
> curly braces & newlines I will never mention this again :)
>

Ok, point conceded. Yes, you can do a lot with one line of python.

For a comparison, I saw in the testlib.py file a function that scans the
filesystem for tests. Compared with the horrible thing that is
ClasspathScanner.java to do the same in Java, it is obvious that the
python is far more concise. Stuff like:

python:
 re.match(regexp, somestring)

java:
 Pattern pattern = Pattern.compile(regexp);
 Matcher matcher = pattern.matcher(somestring);
 matcher.matches();

shows this up well. My response? Stuff those three lines of java into a
static in RegexpUtils (ignoring the loss in efficiency from recompiling the
regexp every time). Then do:

RegexpUtils.match(regexp, somestring);

I do this sort of thing all the time, in an effort to write more concise
java.

Rupert



Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Rupert Smith

On 3/14/07, Alan Conway <[EMAIL PROTECTED]> wrote:


If you can identify a subset of the Java codebase that corresponds to
the python client's
functionality, filter out all the comments and get a byte count
difference explainable as
curly braces & newlines I will never mention this again :)



Ok, point conceded. Yes, you can do a lot with one line of python.

For a comparison, I saw in the testlib.py file a function that scans the
filesystem for tests. Compared with the horrible thing that is
ClasspathScanner.java to do the same in Java, it is obvious that the
python is far more concise. Stuff like:

python:
re.match(regexp, somestring)

java:
Pattern pattern = Pattern.compile(regexp);
Matcher matcher = pattern.matcher(somestring);
matcher.matches();

shows this up well. My response? Stuff those three lines of java into a static
in RegexpUtils (ignoring the loss in efficiency from recompiling the regexp
every time). Then do:

RegexpUtils.match(regexp, somestring);

I do this sort of thing all the time, in an effort to write more concise
java.
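A minimal sketch of what such a helper could look like (the class and method names follow the mail above; any real qpid utility may differ):

```java
// Hypothetical RegexpUtils, as described above: a one-line regex match
// helper that hides the Pattern/Matcher boilerplate. It recompiles the
// pattern on every call, trading efficiency for brevity as noted above.
import java.util.regex.Pattern;

final class RegexpUtils {
    private RegexpUtils() {} // static utility, no instances

    static boolean match(String regexp, String input) {
        return Pattern.compile(regexp).matcher(input).matches();
    }
}
```

One subtlety: Java's Matcher.matches() requires the whole input to match, whereas Python's re.match only anchors at the start of the string; Matcher.lookingAt() is the closer analogue.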

Rupert


Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Alan Conway

Rupert Smith wrote:
> Aha! So we get down to the real reason that all language choice arguments
> are founded on: personal preference.

Not entirely :)

There are 2 objective reasons for preferring a scripting language:
- no compiler.
- faster implementation cycle, less code.

On the first point, full interop testing will involve a Java build 
anyway. However I'd like to be able to run ruby/python interop against 
the C++ broker as part of our standard build cycle - precisely because I 
can run those tests with no further compilation. Adding a Java build to 
the C++ build cycle is possible but very unappealing.


The second point is more important though. I've used C++, Java, python 
and ruby and for time spent per feature without a doubt C++ is the most 
painful, Java is in the middle, and the scripting languages are way out 
in front.  Here's an interesting metric:


wc `find cpp -name '*.h' -o -name '*.cpp'` | tail -1
 42537  133707 1247482 total
wc `find java/client java/common -name '*.java'` | tail -1
 77318  232166 2475349 total
wc `find python -name '*.py'` | tail -1
 4396  14149 149905 total
wc `find ruby -name '*.rb'` | tail -1
1312  3465 29522 total

Line counts are distorted by code formatting practices, but byte counts
are a good reflection at the very least of how long it would take to
type all the code in.  The java counts are inflated by JMS support;
however, even if you cut it in half it is still an order of magnitude
higher than python. It's very hard to argue that the Java client could
have been implemented as quickly as python.  (Aside: I'm still surprised
that java client+common is 2x C++ *including broker*!)



> If you want to run the full interop tests, you'll need everything anyway.
> And that includes the .Net, which as a Linux developer I'd be more worried
> about than the Java. The .Net client ought to run on Mono though. Or you
> could run it through the windows emulator thingy perhaps? Or as a VM?

Full interop tests yes, but it should be possible to do interop tests
restricted to a subset of clients - .Net is a fine example of why.

> I reckon it will take you about 30 minutes to install Java + Maven (+ Ant,
> can't remember if that's still needed for the code generator?) and get a
> successful build of the Java ... Compared with how long I spent
> figuring out how to run the C++ build, that's good going.

Good going indeed! But for someone with a qpid checkout, a python
"build" takes 0 minutes.

> I admit the real reason that I did it in Java is personal preference.
> I know the quirks of extending JUnit, because I wrote the junit-toolkit
> that does the performance tests.

I don't want to slow your progress; if Java is the fastest for you then
go ahead in Java, we need to get something in place.

When I get to C++ interop I will take a look at porting to ruby/python.

> As a contractor, I've done more than my fair share of maintaining other
> people's code. Here's my top list of the qualities that easy-to-maintain
> code should have:
>
> * High level documentation, explains the purpose and intention of the
> code, and the interesting parts of the grand scale design (I leave the
> boring and irrelevant bits out, because no-one will read them).
> * Comments in the code. Explains the purpose and intention of the code,
> with the details.
> * No cut and paste coding. Where possible re-usable code should be put
> into convenient libraries.

Hear hear! Applies to python or ruby as much as Java. There are
pydoc/rubydoc tools similar to javadoc.

> Choice of language seems pretty irrelevant on the whole.

In my experience this is definitely not the case. I have written
python/ruby code in hours that would have taken me days to write in
C++ or Java.

> Looks like python is only more concise because there's no curly braces on
> new lines and no comments.

If you can identify a subset of the Java codebase that corresponds to
the python client's functionality, filter out all the comments and get
a byte count difference explainable as curly braces & newlines I will
never mention this again :)
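For what it's worth, the experiment could be roughed out like this (an approximate sketch only - regex comment stripping would also eat comment-like text inside string literals, and ByteCount is an invented name, not qpid code):

```java
// Rough sketch of the byte-count experiment above: strip // and /* */
// comments, then discount braces and whitespace, and compare what is left.
// Approximate by design: a real stripper would need to respect string
// literals, but this suffices for a ballpark comparison.
class ByteCount {
    // Bytes remaining after removing Java/C++ style comments.
    static int nonCommentBytes(String source) {
        String s = source
            .replaceAll("(?s)/\\*.*?\\*/", "") // block comments
            .replaceAll("//[^\n]*", "");       // line comments
        return s.getBytes(java.nio.charset.StandardCharsets.UTF_8).length;
    }

    // Bytes remaining after also discounting braces, newlines and other
    // whitespace - the "explainable difference" named in the challenge.
    static int essentialBytes(String source) {
        String s = source
            .replaceAll("(?s)/\\*.*?\\*/", "")
            .replaceAll("//[^\n]*", "")
            .replaceAll("[{}\\s]", "");
        return s.getBytes(java.nio.charset.StandardCharsets.UTF_8).length;
    }
}
```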

Cheers,
Alan.


Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Rupert Smith

Aha! So we get down to the real reason that all language choice arguments
are founded on: personal preference.

If you want to run the full interop tests, you'll need everything anyway.
And that includes the .Net, which as a Linux developer I'd be more worried
about than the Java. The .Net client ought to run on Mono though. Or you
could run it through the windows emulator thingy perhaps? Or as a VM?

I reckon it will take you about 30 minutes to install Java + Maven (+ Ant,
can't remember if that's still needed for the code generator?) and get a
successful build of the Java on a linux box. At least that's how long it took
me the other day, when I put it all on my newly linuxified laptop (sorry
redhat, I installed debian because that's the one I tried first, many years
ago). Compared with how long I spent figuring out how to run the C++ build,
that's good going.

I admit the real reason that I did it in Java is personal preference. I know
the quirks of extending JUnit, because I wrote the junit-toolkit that does
the performance tests.

As a contractor, I've done more than my fair share of maintaining other
people's code. Here's my top list of the qualities that easy-to-maintain
code should have:

* High level documentation, explains the purpose and intention of the code,
and the interesting parts of the grand scale design (I leave the boring and
irrelevant bits out, because no-one will read them).
* Comments in the code. Explains the purpose and intention of the code, with
the details.
* No cut and paste coding. Where possible re-usable code should be put into
convenient libraries.

Choice of language seems pretty irrelevant on the whole. The best book I've
ever seen on writing good code comments is "Code Complete".

I call my method of documenting things a "sea-level" principle. Stuff above
the level of javadoc (or equivalent) goes in a Word doc or a Wiki page.
Stuff below that level goes in the code. This means that the higher level
documentation doesn't get bogged down in the details and go stale too
quickly. Also, I put a little table in each class header comment, which
contains a CRC card (class, responsibility, collaboration), which provides a
concise guide to the responsibilities of each class and how it fits together
with other classes in order to achieve those. I do that because I'm
constantly striving to follow Parnas's principles of modular design, and
perfect re-usability. It generally takes a lot of shifting around to
get there though.

I've always been a little suspicious of dynamically typed languages, because
I like type safety and static typing, combined with type inference and
parametric polymorphism. I did say personal preferences though, didn't I? I'm
definitely in the minority here, because I think checked exceptions are a
marvelous thing too...

Looks like python is only more concise because there's no curly braces on
new lines and no comments. Ruby looks like an interesting language and very
concise. I don't know it, but I'm sure I will soon. I've always thought that
writing re-usable abstractions with first class functions (closures) is far
more natural than with class inheritance. Given my preference mix, the ideal
language for me is probably OCaml, but that's French and nobody uses it!

Seriously though, I think the best language for doing this kind of stuff has
to be spl; the communicating agent model is perfect for asynchronous
messaging.

Good day,
Rupert

On 3/14/07, Andrew Stitcher < [EMAIL PROTECTED]> wrote:


On Wed, 2007-03-14 at 09:10 +, Robert Greig wrote:
> On 13/03/07, Andrew Stitcher <[EMAIL PROTECTED] > wrote:
>
> > One thing worth noting is that Java is much less common on Linux than
> > either Python or Ruby, as the licensing made it hard to ship the Sun
> > implementation. Whereas there are very good implementations of both
> > Python and Ruby for Windows.
>
> I don't understand this point.
>
> Java is freely available on linux for download although it isn't
> included on distros. Python and Ruby are freely available on Windows
> although they aren't included in the installation by default ( i.e. are
> not shipped by Microsoft). Is this not the same situation?
>
> I don't really have an opinion on which language the controller should
> be written in.
>

Uhm, you're probably correct there - I guess what I'm really saying is
that I'd just prefer it that way for (my) convenience when developing
the C++ broker, as not needing java would be one less thing to worry
about!

Andrew





Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Andrew Stitcher
On Wed, 2007-03-14 at 09:10 +, Robert Greig wrote:
> On 13/03/07, Andrew Stitcher <[EMAIL PROTECTED]> wrote:
> 
> > One thing worth noting is that Java is much less common on Linux than
> > either Python or Ruby, as the licensing made it hard to ship the Sun
> > implementation. Whereas there are very good implementations of both
> > Python and Ruby for Windows.
> 
> I don't understand this point.
> 
> Java is freely available on linux for download although it isn't
> included on distros. Python and Ruby are freely available on Windows
> although they aren't included in the installation by default (i.e. are
> not shipped by Microsoft). Is this not the same situation?
> 
> I don't really have an opinion on which language the controller should
> be written in.
> 

Uhm, you're probably correct there - I guess what I'm really saying is
that I'd just prefer it that way for (my) convenience when developing
the C++ broker, as not needing java would be one less thing to worry
about!

Andrew




Re: Draft Interop Testing Spec - Please Read

2007-03-14 Thread Robert Greig

On 13/03/07, Andrew Stitcher <[EMAIL PROTECTED]> wrote:


One thing worth noting is that Java is much less common on Linux than
either Python or Ruby, as the licensing made it hard to ship the Sun
implementation. Whereas there are very good implementations of both
Python and Ruby for Windows.


I don't understand this point.

Java is freely available on linux for download although it isn't
included on distros. Python and Ruby are freely available on Windows
although they aren't included in the installation by default (i.e. are
not shipped by Microsoft). Is this not the same situation?

I don't really have an opinion on which language the controller should
be written in.

RG


Re: Draft Interop Testing Spec - Please Read

2007-03-13 Thread Andrew Stitcher
On Tue, 2007-03-13 at 17:30 +, Rupert Smith wrote:
> It may only be my ignorance, but Java is easiest for me...

I'd agree with Alan; Python or Ruby seem to me to be simpler to work
with than Java for scripting-type tasks, which is what the co-ordinator
really is.

One thing worth noting is that Java is much less common on Linux than
either Python or Ruby, as the licensing made it hard to ship the Sun
implementation. Whereas there are very good implementations of both
Python and Ruby for Windows.

> 
> On 3/13/07, Alan Conway <[EMAIL PROTECTED]> wrote:
> >
> > Rupert Smith wrote:
> > > Skeleton of the interop test is checked in under
> > > qpid/java/integrationtests
> > > for a Java coordinator and test client.
> > Wouldn't it be better to write the co-ordinator in python or ruby? Since
> > it's language-neutral it seems it should be in the easiest language to
> > write/maintain.
> > > *Only the test client needs to be ported to every other language*.
> > I'll look at this in C++ as soon as I can but I've got some other stuff
> > on my plate if anyone else fancies taking it.
> >
> >



Re: Draft Interop Testing Spec - Please Read

2007-03-13 Thread Rupert Smith

It may only be my ignorance, but Java is easiest for me...

On 3/13/07, Alan Conway <[EMAIL PROTECTED]> wrote:


Rupert Smith wrote:
> Skeleton of the interop test is checked in under
> qpid/java/integrationtests
> for a Java coordinator and test client.
Wouldn't it be better to write the co-ordinator in python or ruby? Since
it's language-neutral it seems it should be in the easiest language to
write/maintain.
> *Only the test client needs to be ported to every other language*.
I'll look at this in C++ as soon as I can but I've got some other stuff
on my plate if anyone else fancies taking it.




Re: Draft Interop Testing Spec - Please Read

2007-03-13 Thread Alan Conway

Rupert Smith wrote:
> Skeleton of the interop test is checked in under
> qpid/java/integrationtests
> for a Java coordinator and test client.

Wouldn't it be better to write the co-ordinator in python or ruby? Since
it's language-neutral it seems it should be in the easiest language to
write/maintain.

> *Only the test client needs to be ported to every other language*.

I'll look at this in C++ as soon as I can but I've got some other stuff
on my plate if anyone else fancies taking it.




Re: Draft Interop Testing Spec - Please Read

2007-03-13 Thread Rupert Smith

Skeleton of the interop test is checked in under qpid/java/integrationtests
for a Java coordinator and test client.

*Only the test client needs to be ported to every other language*. The
outline of this may be found under the org...interop.testclient package.
There is no need for other language implementations to closely follow this
example (I used some Java enum syntax anyway, I don't know if C++ or C# has
similar syntax, python/ruby probably have funky ways of doing it better).
Something broadly similar would be nice, so that moving between the
different implementations is easy. In particular the InteropClientTestCase
interface might be worth standardizing on. Will flesh this out over the
week, depending on other priorities.

Also, I forgot to put the test case names in the spec, so that clients can
tell which test case the invites are for. I went for:

"TC1_DummyRun"
"TC2_BasicP2P"
"TC3_BasicPubSub"

Will add these to the spec.

Rupert


Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Gordon Sim

Alan Conway wrote:
> - assert expected conditions are met.

My view was that the test clients would not do much in the way of
asserting anything. They would do 'something' and tell the coordinator
they did it (with some details perhaps). The coordinator would assert
that the reports from all clients, in all roles, were as expected for
the overall system test.


If using JUnit or similar in the test clients simplifies the test 
runner, great. I can't say I'm sure either way without actually trying 
to write one but I agree that it will likely be very useful in the 
coordinator.


Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Rupert Smith

> - group related tests together.

Coordinator does this (in the sense that it produces a test suite of
all available combinations of clients and test cases).

> - assert expected conditions are met.

Coordinator does this.

> - handle unexpected failures.

If clients fail to produce reports, coordinator will time them out,
they get a fail. I've tried to arrange things deliberately so that
unexpected failures result in timeouts, resulting in a robust
framework that won't lock up and wait forever. Admittedly, you will
lose the failure stack trace when this happens, but can always examine
clients' local logs to figure out what happened.

> - collect and report results.

Reports are sent to the coordinator; it compares sender and receiver
reports, decides if the test passed/failed, logs out the result.
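That report/timeout loop can be sketched as a bounded wait on the coordinator side (all names here are invented for illustration, not the actual qpid code):

```java
// Illustrative sketch: the coordinator waits a bounded time for each
// client's report; a missing report times out and is scored as a fail,
// so the framework never locks up waiting forever.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class ReportCollector {
    private final BlockingQueue<String> reports = new LinkedBlockingQueue<>();

    // Called by the messaging layer when a client's report arrives.
    void onReport(String report) {
        reports.add(report);
    }

    // Returns the next report, or null if the client timed out -
    // the caller then records a failure rather than blocking forever.
    String await(long timeoutMillis) throws InterruptedException {
        return reports.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```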

On 3/7/07, Rupert Smith <[EMAIL PROTECTED]> wrote:

The coordinator will only be implemented in one language, once only.
There will be many test clients in all languages, just test clients,
with no coordinator part.

I really don't see what *Unit adds to the test client end. Please note
Alan, we've switched from my original idea of having each test client
output its own test results to *Unit XML format, to having the
coordinator do it, as per Gordon's centralized approach. The result of
this, is that the test clients are really quite simple, reactive
agents.

Rupert

On 3/7/07, Alan Conway <[EMAIL PROTECTED]> wrote:
> Rupert Smith wrote:
> > I don't think the test clients will need to use
> > JUnit/CppUnit/WhateverUnit. Mostly they will just be fairly simple
> > scripts that react to instructions sent by the coordinator. Although,
> > having said that, you could use *Unit if you like.
> The test clients will need a way to:
>  - group related tests together.
>  - assert expected conditions are met.
>  - handle unexpected failures.
>  - collect and report results.
>  - provide a natural test environment for each language
>
> In other words the tests will need everything that the *Unit frameworks
> provide. It may not seem like a lot, but would you consider writing
> JUnit from scratch to test Java code? It won't be any easier to build a
> distributed test framework from scratch - in fact it'll be harder and
> the result will be less usable. Trust me, I've been there.
>
> > I'm just not sure
> > it adds anything, because these clients won't do anything when run on
> > their own. They need the coordinator to push them along.
> Right but how do you express that on the client side? I would say you
> express it as a "QpidTestRunner" test runner for *Unit tests. The
> QpidTestRunner is responsible for talking to the co-ordinator. The
> co-ordinator tells the test runner "Run BasicTests.getGet", the test
> runner sets up the test environment, executes the corresponding JUnit
> test, collects the results and sends them back to the co-ordinator. The
> "test client" *is* a testrunner. That means we can flexibly load as many
> or as few tests as we want into the runner, run tests selectively and do
> all the other good stuff that you can do with *Unit tests. It also means
> the tests themselves are straight *Unit, nothing new to learn for test
> writers.
>
> It may not seem like *Unit is doing a lot in the scenario above, but
> believe me, speaking as one who has participated in reinventing it
> several times *it is not worth reinventing*.
>
> > The coordinator itself is a different story. I've sketched an outline
> > for it in Java built on top of JUnit. Trying to use JUnit's existing
> > TestSuite mechanism, and ability to locate tests in *Test classes. A
> > little bit of adaptation perhaps needed to dynamically name the tests,
> > or to run the same test case many times across different combinations
> > of clients. A root test case called something like InviteTestCase to
> > provide common code to do the invite and gather reports, and provide
> > convenience methods for common tasks etc. Here JUnit adds a lot as a
> > convenient framework to base this on.
> All the more reason to use it on both sides: give people a common
> framwork instead of making them jump from JUnit to some ad-hoc
> framework. E.g. with JUnit on both ends the client part of a test can
> sit right beside the co-ordinator part and use some simple naming
> conventions to make it obvious what goes with what.
> > I'll aim to get this skeleton code, and skeleton code for a java test
> > client checked in before fleshing it out. That way, other test clients
> > can copy the same structure.
> Looking forward to it  :) I'll try to put some code where my mouth is
> when the skeleton is in place.
>
>
> Cheers,
> Alan.
>



Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Alan Conway

Rupert Smith wrote:
> The coordinator will only be implemented in one language, once only.
> There will be many test clients in all languages, just test clients,
> with no coordinator part.
>
> I really don't see what *Unit adds to the test client end. Please note
> Alan, we've switched from my original idea of having each test client
> output its own test results to *Unit XML format, to having the
> coordinator do it, as per Gordon's centralized approach. The result of
> this, is that the test clients are really quite simple, reactive
> agents.
>
> Rupert

Put up the skeleton, if I can't code up a convincing case for *Unit on
the client side I'll shut up about it.


Cheers,
Alan.


Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Rupert Smith

The coordinator will only be implemented in one language, once only.
There will be many test clients in all languages, just test clients,
with no coordinator part.

I really don't see what *Unit adds to the test client end. Please note
Alan, we've switched from my original idea of having each test client
output its own test results to *Unit XML format, to having the
coordinator do it, as per Gordon's centralized approach. The result of
this, is that the test clients are really quite simple, reactive
agents.

Rupert

On 3/7/07, Alan Conway <[EMAIL PROTECTED]> wrote:

Rupert Smith wrote:
> I don't think the test clients will need to use
> JUnit/CppUnit/WhateverUnit. Mostly they will just be fairly simple
> scripts that react to instructions sent by the coordinator. Although,
> having said that, you could use *Unit if you like.
The test clients will need a way to:
 - group related tests together.
 - assert expected conditions are met.
 - handle unexpected failures.
 - collect and report results.
 - provide a natural test environment for each language

In other words the tests will need everything that the *Unit frameworks
provide. It may not seem like a lot, but would you consider writing
JUnit from scratch to test Java code? It won't be any easier to build a
distributed test framework from scratch - in fact it'll be harder and
the result will be less usable. Trust me, I've been there.

> I'm just not sure
> it adds anything, because these clients won't do anything when run on
> their own. They need the coordinator to push them along.
Right but how do you express that on the client side? I would say you
express it as a "QpidTestRunner" test runner for *Unit tests. The
QpidTestRunner is responsible for talking to the co-ordinator. The
co-ordinator tells the test runner "Run BasicTests.getGet", the test
runner sets up the test environment, executes the corresponding JUnit
test, collects the results and sends them back to the co-ordinator. The
"test client" *is* a testrunner. That means we can flexibly load as many
or as few tests as we want into the runner, run tests selectively and do
all the other good stuff that you can do with *Unit tests. It also means
the tests themselves are straight *Unit, nothing new to learn for test
writers.

It may not seem like *Unit is doing a lot in the scenario above, but
believe me, speaking as one who has participated in reinventing it
several times *it is not worth reinventing*.

> The coordinator itself is a different story. I've sketched an outline
> for it in Java built on top of JUnit. Trying to use JUnit's existing
> TestSuite mechanism, and ability to locate tests in *Test classes. A
> little bit of adaptation perhaps needed to dynamically name the tests,
> or to run the same test case many times across different combinations
> of clients. A root test case called something like InviteTestCase to
> provide common code to do the invite and gather reports, and provide
> convenience methods for common tasks etc. Here JUnit adds a lot as a
> convenient framework to base this on.
All the more reason to use it on both sides: give people a common
framework instead of making them jump from JUnit to some ad-hoc
framework. E.g. with JUnit on both ends the client part of a test can
sit right beside the co-ordinator part and use some simple naming
conventions to make it obvious what goes with what.
> I'll aim to get this skeleton code, and skeleton code for a java test
> client checked in before fleshing it out. That way, other test clients
> can copy the same structure.
Looking forward to it  :) I'll try to put some code where my mouth is
when the skeleton is in place.


Cheers,
Alan.



Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Alan Conway

Rupert Smith wrote:
> I don't think the test clients will need to use
> JUnit/CppUnit/WhateverUnit. Mostly they will just be fairly simple
> scripts that react to instructions sent by the coordinator. Although,
> having said that, you could use *Unit if you like.

The test clients will need a way to:
- group related tests together.
- assert expected conditions are met.
- handle unexpected failures.
- collect and report results.
- provide a natural test environment for each language

In other words the tests will need everything that the *Unit frameworks 
provide. It may not seem like a lot, but would you consider writing 
JUnit from scratch to test Java code? It won't be any easier to build a 
distributed test framework from scratch - in fact it'll be harder and 
the result will be less usable. Trust me, I've been there.



> I'm just not sure
> it adds anything, because these clients won't do anything when run on
> their own. They need the coordinator to push them along.

Right but how do you express that on the client side? I would say you
express it as a "QpidTestRunner" test runner for *Unit tests. The
QpidTestRunner is responsible for talking to the co-ordinator. The
co-ordinator tells the test runner "Run BasicTests.getGet", the test
runner sets up the test environment, executes the corresponding JUnit
test, collects the results and sends them back to the co-ordinator. The
"test client" *is* a testrunner. That means we can flexibly load as many
or as few tests as we want into the runner, run tests selectively and do
all the other good stuff that you can do with *Unit tests. It also means
the tests themselves are straight *Unit, nothing new to learn for test
writers.
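The runner idea described above could look something like this (the QpidTestRunner name comes from the mail; the reflection-based dispatch and boolean result are illustrative assumptions, not the checked-in code):

```java
// Hypothetical core of a "QpidTestRunner": the co-ordinator sends a test
// name such as "BasicTests.getGet"; the runner locates the class and method
// by reflection, executes it, and reports pass/fail back.
import java.lang.reflect.Method;

class QpidTestRunner {
    // Runs "SomeClass.someMethod" and returns true on pass, false on fail.
    static boolean run(String testName) {
        try {
            int dot = testName.lastIndexOf('.');
            Class<?> cls = Class.forName(testName.substring(0, dot));
            Method test = cls.getMethod(testName.substring(dot + 1));
            test.invoke(cls.getDeclaredConstructor().newInstance());
            return true;  // would be reported to the co-ordinator as a pass
        } catch (Exception e) {
            return false; // any failure (or missing test) reported as a fail
        }
    }
}
```

In a real runner the boolean would of course be replaced by a report message carrying details and failure traces back to the co-ordinator.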


It may not seem like *Unit is doing a lot in the scenario above, but 
believe me, speaking as one who has participated in reinventing it 
several times *it is not worth reinventing*.



> The coordinator itself is a different story. I've sketched an outline
> for it in Java built on top of JUnit. Trying to use JUnit's existing
> TestSuite mechanism, and ability to locate tests in *Test classes. A
> little bit of adaptation perhaps needed to dynamically name the tests,
> or to run the same test case many times across different combinations
> of clients. A root test case called something like InviteTestCase to
> provide common code to do the invite and gather reports, and provide
> convenience methods for common tasks etc. Here JUnit adds a lot as a
> convenient framework to base this on.

All the more reason to use it on both sides: give people a common
framework instead of making them jump from JUnit to some ad-hoc
framework. E.g. with JUnit on both ends the client part of a test can
sit right beside the co-ordinator part and use some simple naming
conventions to make it obvious what goes with what.

> I'll aim to get this skeleton code, and skeleton code for a java test
> client checked in before fleshing it out. That way, other test clients
> can copy the same structure.

Looking forward to it  :) I'll try to put some code where my mouth is
when the skeleton is in place.



Cheers,
Alan.


Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Gordon Sim

Alan Conway wrote:

Gordon Sim wrote:

Alan Conway wrote:
Don't forget there are *two* points for centralizing test control 
code. One is the co-ordinator, the other is the test runners that 
will actually be running the tests.  


My view is that anything that can be done in the co-ordinator should 
be done there. The test runners only centralize code for one language; 
the co-ordinator centralizes code for the entire framework.


Agreed, but some things can only be done locally - e.g. checking 
timeouts, capturing failure messages.


Indeed.

https://issues.apache.org/jira/browse/QPID-406 proposes an 
interceptor/handler framework for the timeout issue; comments 
appreciated. 


Looks very sensible.


Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Alan Conway

Gordon Sim wrote:

Alan Conway wrote:
Don't forget there are *two* points for centralizing test control 
code. One is the co-ordinator, the other is the test runners that 
will actually be running the tests.  


My view is that anything that can be done in the co-ordinator should 
be done there. The test runners only centralize code for one language; 
the co-ordinator centralizes code for the entire framework.


Agreed, but some things can only be done locally - e.g. checking 
timeouts, capturing failure messages.


https://issues.apache.org/jira/browse/QPID-406 proposes an 
interceptor/handler framework for the timeout issue; comments 
appreciated. From past experience I can attest that this is an 
extremely powerful pattern. E.g. the existing C++ logging code could be 
rewritten as a pair of Frame interceptors and shared between client and 
broker. The C++ 0-9 request/response logic is effectively a pair of 
interceptors.  Management instrumentation could be written as interceptors.


There are two big payoffs from formalising this stuff as interceptors:
1. You can *configure* behavior rather than coding it. Instead of 
having a bunch of "if (debug) log(debugmsg)", "if (managementOn) 
issue(managementInfo)" on the call path, you build a chain of 
interceptors that reflects what you want. The core just calls the first 
one; the last one calls back into the core. The core doesn't need to 
know anything about logging, management etc.
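The chain described here can be illustrated with a minimal Python sketch (an illustration only, not actual Qpid code; the interceptor names are hypothetical): each interceptor wraps the next handler, the core only calls the first one, and the innermost interceptor calls back into the core.

```python
def build_chain(interceptors, core):
    """Compose interceptors so the first wraps the second, and the
    innermost interceptor calls back into the core handler."""
    handler = core
    for interceptor in reversed(interceptors):
        handler = interceptor(handler)
    return handler

def make_logger(log):
    """Hypothetical logging interceptor: records traffic, then delegates."""
    def interceptor(next_handler):
        def handle(frame):
            log.append(('in', frame))
            result = next_handler(frame)
            log.append(('out', frame))
            return result
        return handle
    return interceptor

def management_stub(next_handler):
    """Stand-in for a management-instrumentation interceptor."""
    def handle(frame):
        return next_handler(frame.upper())  # transform, then delegate
    return handle

# The core knows nothing about logging or management:
log = []
chain = build_chain([make_logger(log), management_stub],
                    core=lambda frame: frame + '!')
```

Turning logging or management on is then a matter of which interceptors go into the list, not of "if (debug)" checks scattered along the call path.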


2. You can do plug-ins, and dynamically extend behaviour at runtime 
*without even having access to core source code.* Even in the open 
source world this really lowers the barrier of entry for people who want 
to contribute to or experiment with qpid: they can write plugins, test 
and use them without ever getting commit rights to change the core.


Comments to the list or on the JIRA; I think this is a worthwhile 
direction to go in. We can start with just enough for the interop test 
use case and extend as we go.


Cheers,
Alan.


Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Rupert Smith

I don't think the test clients will need to use
JUnit/CppUnit/WhateverUnit. Mostly they will just be fairly simple
scripts that react to instructions sent by the coordinator. Although,
having said that, you could use *Unit if you like. I'm just not sure
it adds anything, because these clients won't do anything when run on
their own. They need the coordinator to push them along.

The coordinator itself is a different story. I've sketched an outline
for it in Java built on top of JUnit. Trying to use JUnit's existing
TestSuite mechanism, and its ability to locate tests in *Test classes. A
little bit of adaptation is perhaps needed to dynamically name the tests,
or to run the same test case many times across different combinations
of clients. A root test case called something like InviteTestCase to
provide common code to do the invite and gather reports, and provide
convenience methods for common tasks etc. Here JUnit adds a lot as a
convenient framework to base this on.
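The combination generation being described could look something like this sketch (Python for brevity, hypothetical names, and not the real JUnit TestSuite API): the coordinator enumerates every ordered sender/receiver pair and gives each generated test a descriptive name.

```python
import itertools

def interop_suite(test_cases, clients):
    """Generate one dynamically named test per (case, sender, receiver)
    combination, for the coordinator to run as a suite."""
    suite = []
    for case in test_cases:
        # permutations gives every ordered sender/receiver pairing
        for sender, receiver in itertools.permutations(clients, 2):
            suite.append(f"{case}[{sender}->{receiver}]")
    return suite
```

For example, interop_suite(['BasicP2P'], ['Java', 'Cpp']) would yield BasicP2P[Java->Cpp] and BasicP2P[Cpp->Java], so the per-language tests are written once and the harness generates the combinations.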

I'll aim to get this skeleton code, and skeleton code for a java test
client checked in before fleshing it out. That way, other test clients
can copy the same structure.

Spec updated, in response to Gordon's comments.

http://cwiki.apache.org/confluence/display/qpid/Interop+Testing+Specification

Rupert

On 3/7/07, Gordon Sim <[EMAIL PROTECTED]> wrote:

Alan Conway wrote:
> Don't forget there are *two* points for centralizing test control code.
> One is the co-ordinator, the other is the test runners that will
> actually be running the tests.

My view is that anything that can be done in the co-ordinator should be
done there. The test runners only centralize code for one language; the
co-ordinator centralizes code for the entire framework.




Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Gordon Sim

Alan Conway wrote:
Don't forget there are *two* points for centralizing test control code. 
One is the co-ordinator, the other is the test runners that will 
actually be running the tests.  


My view is that anything that can be done in the co-ordinator should be 
done there. The test runners only centralize code for one language; the 
co-ordinator centralizes code for the entire framework.




Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Alan Conway

Gordon Sim wrote:

Rupert Smith wrote:

Thanks for taking time to read through and provide feedback Gordon.


Thanks for taking the time to write up a good spec for the rest of us 
to read; that's the real work! It's going to be of huge benefit to have 
this framework in place, your efforts are very much appreciated.

I second that!


Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Alan Conway

Rupert Smith wrote:

I see your point, shifting as much stuff into the coordinator as
possible. I had a suspicion I was making the test case interaction
between the test clients too complex.
Don't forget there are *two* points for centralizing test control code. 
One is the co-ordinator, the other is the test runners that will 
actually be running the tests.  The tests themselves should be straight 
JUnit/CppUnit tests but the test runners can intervene pre- and  post-  
each test, they can collect failures, communicate with coordinator etc. 
Things that are more appropriate to do in the client address space can 
be done in the test runners. I don't have specific items in mind; it's 
just a general observation.


The only thing that has to be done *within* the tests is handling 
timeouts based on message traffic, and I think even that should be hidden 
in the test setup code. It should be trivial to add an extension point 
in each qpid client library to run a callback for each message 
sent/received. We can use it right now to measure timeouts for tests, 
and if done right it will be useful for all sorts of other "frameworky" 
things in future - capturing management information, logging, debugging 
etc.  I feel an interceptor pattern coming on :)
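That extension point might look like the following speculative Python sketch (the per-message hook and the class name are assumptions, not an existing qpid client API): the client library invokes a callback on every message sent or received, and the test only times out when traffic goes quiet.

```python
import time

class TrafficTimeout:
    """Times out a test based on message traffic rather than wall clock:
    every sent/received message resets the quiet-period timer."""

    def __init__(self, quiet_seconds):
        self.quiet_seconds = quiet_seconds
        self.last_traffic = time.monotonic()

    def on_message(self, message=None):
        # Hypothetical hook the client library would call per message.
        self.last_traffic = time.monotonic()

    def expired(self):
        return time.monotonic() - self.last_traffic > self.quiet_seconds
```

The same per-message hook could later carry logging, debugging or management callbacks, which is the interceptor direction being hinted at.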


Cheers,
Alan.



Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Gordon Sim

Rupert Smith wrote:

Thanks for taking time to read through and provide feedback Gordon.


Thanks for taking the time to write up a good spec for the rest of us to 
read; that's the real work! It's going to be of huge benefit to have this 
framework in place, your efforts are very much appreciated.


Re: Draft Interop Testing Spec - Please Read

2007-03-07 Thread Rupert Smith

I see your point, shifting as much stuff into the coordinator as
possible. I had a suspicion I was making the test case interaction
between the test clients too complex.

I will change it so that the coordinator waits for report messages
from both senders and receivers in order to determine whether the test
has completed. Coordinator to compare sender and receiver reports to
decide if test has passed or not. "Complete Role" and "Test Done"
messages are not needed because the clients will know that their role
is complete when they send their reports. This is much closer to your
original proposal.

I will also add a reply queue to the coordinator, so that responses to
its broadcast messages are returned only to it.

Thanks for taking time to read through and provide feedback Gordon.
Your comments are definitely guiding me towards a better solution.

Rupert


On 3/6/07, Gordon Sim <[EMAIL PROTECTED]> wrote:

Rupert Smith wrote:
> http://cwiki.apache.org/confluence/display/qpid/Interop+Testing+Specification

Control of tests is still done through direct interaction between
clients. The aim of the centralized approach was to restrict all control
traffic to occur between a test client and the controller, not between
test clients themselves. I think this keeps the test clients simpler
while allowing richer composition of these clients into different scenarios.

e.g. A test sender only needs to send messages to a particular
'destination', then inform the controller what it has done. A test
receiver merely listens for messages, and reports back what it received
when the test completes. Any communication between the test clients is
part of the test, not part of the control.

I'm also not clear why the clients respond to invites by sending
messages to the control topic. That means all clients see all messages
and have to ignore those that are not relevant. I'd prefer to keep the
clients as isolated from each other as possible. Messages for the
controller would be sent directly to the controller and would not be
broadcast to all listeners.

The controller in this view is not a generic test runner that merely
triggers tests; it simplifies the language-dependent parts by extracting
as much as possible from them, implementing it once in one place where
it is easier to change or extend.



Re: Draft Interop Testing Spec - Please Read

2007-03-06 Thread Gordon Sim

Rupert Smith wrote:
http://cwiki.apache.org/confluence/display/qpid/Interop+Testing+Specification 


Control of tests is still done through direct interaction between 
clients. The aim of the centralized approach was to restrict all control 
traffic to occur between a test client and the controller, not between 
test clients themselves. I think this keeps the test clients simpler 
while allowing richer composition of these clients into different scenarios.


e.g. A test sender only needs to send messages to a particular 
'destination', then inform the controller what it has done. A test 
receiver merely listens for messages, and reports back what it received 
when the test completes. Any communication between the test clients is 
part of the test, not part of the control.


I'm also not clear why the clients respond to invites by sending 
messages to the control topic. That means all clients see all messages 
and have to ignore those that are not relevant. I'd prefer to keep the 
clients as isolated from each other as possible. Messages for the 
controller would be sent directly to the controller and would not be 
broadcast to all listeners.


The controller in this view is not a generic test runner that merely 
triggers tests; it simplifies the language-dependent parts by extracting 
as much as possible from them, implementing it once in one place where 
it is easier to change or extend.


Re: Draft Interop Testing Spec - Please Read

2007-03-06 Thread Alan Conway
On Mon, 2007-03-05 at 10:21 +, Rupert Smith wrote:
> As for gathering reports, I was thinking that each test (sender part)
> would report pass/fail (message body containing reason for failure in
> failure cases) to the coordinator, of which there is only one, and it
> will write out the xml reports. Reason for that being that the code to
> write the reports only needs to be written/maintained in one place.
Well everybody needs to write their own XML fragments; the coordinator
is just gluing them together into a report. Not hard, though; JUnit &
CppUnit both have XML outputters already. I do see the advantages of
having a central point collate all the results. Ideally the tests should
be plain JUnit/CppUnit and all the extra wiring to send results should
be in special test runners.

> Was also thinking of adding the requirement that each test client talk
> to the coordinator over a separate AMQ connection from the one it
> sends its test messages over. The idea being, that if a test failure
> causes closure of the connection, it should still manage to send its
> report. A more serious melt-down that causes the test client to
> completely fail, should still result in the test report being written
> out as a fail, because the coordinator knows which tests/clients it
> started -> which ones did not produce a report -> which ones to write
> out a failure for. So no covered tracks. In this case, look in the log
> for the dead test client to figure out what happened. Does this sound
> ok?

Yup, give it a go. If necessary we can have tests also log to a local
file or console as a back-up in the event of a crash preventing the
result message from being sent.

> Producing updated working interop spec, with a view to putting it on
> the wiki later today.

Looking forward to it!



Re: Draft Interop Testing Spec - Please Read

2007-03-06 Thread Rupert Smith

Test clients are just going to send a pass/fail to the coordinator. In
the failure case they can put whatever text they like in the message
body, by way of explanation (log, stack trace, whatever). Coordinator
will take care of outputting the results to XML.
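That report-writing step could be as simple as this sketch (assumed shape: each client report is a (test name, passed, detail) tuple; the testsuite/testcase/failure element names follow the conventional JUnit XML layout, not anything mandated by the spec):

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def junit_xml(suite_name, reports):
    """Collate client pass/fail reports into JUnit-style XML."""
    failures = sum(1 for _name, passed, _detail in reports if not passed)
    suite = Element('testsuite', name=suite_name,
                    tests=str(len(reports)), failures=str(failures))
    for name, passed, detail in reports:
        case = SubElement(suite, 'testcase', name=name)
        if not passed:
            # Free-form failure text from the client's message body
            SubElement(case, 'failure').text = detail
    return tostring(suite, encoding='unicode')
```

Clients that never report at all can simply be appended to the reports list as failures by the coordinator, so a crashed client can't cover its tracks.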

On 3/6/07, Alan Conway <[EMAIL PROTECTED]> wrote:

On Mon, 2007-03-05 at 10:21 +, Rupert Smith wrote:
> As for gathering reports, I was thinking that each test (sender part)
> would report pass/fail (message body containing reason for failure in
> failure cases) to the coordinator, of which there is only one, and it
> will write out the xml reports. Reason for that being that the code to
> write the reports only needs to be written/maintained in one place.
Well everybody needs to write their own XML fragments; the coordinator
is just gluing them together into a report. Not hard, though; JUnit &
CppUnit both have XML outputters already. I do see the advantages of
having a central point collate all the results. Ideally the tests should
be plain JUnit/CppUnit and all the extra wiring to send results should
be in special test runners.

> Was also thinking of adding the requirement that each test client talk
> to the coordinator over a separate AMQ connection from the one it
> sends its test messages over. The idea being, that if a test failure
> causes closure of the connection, it should still manage to send its
> report. A more serious melt-down that causes the test client to
> completely fail, should still result in the test report being written
> out as a fail, because the coordinator knows which tests/clients it
> started -> which ones did not produce a report -> which ones to write
> out a failure for. So no covered tracks. In this case, look in the log
> for the dead test client to figure out what happened. Does this sound
> ok?

Yup, give it a go. If necessary we can have tests also log to a local
file or console as a back-up in the event of a crash preventing the
result message from being sent.

> Producing updated working interop spec, with a view to putting it on
> the wiki later today.

Looking forward to it!




Re: Draft Interop Testing Spec - Please Read

2007-03-06 Thread Gordon Sim

Rupert Smith wrote:

This describes the centralized approach advocated by Gordon. One
important difference with Gordon's proposal is that the assign role
(and some other control messages) are not sent out on a fanout
exchange like he describes, but on a topic exchange instead. This is
because there may well be a pure JMS test client, and the JMS
implementation only talks to direct and topic exchanges, not fanout.


The exchange type used isn't critical; the purpose was to allow the test 
controller to easily address a particular subset of clients (e.g. 
senders or receivers).


If the test clients merely return the name of their control queue then 
the controller can bind them as it sees fit for the purposes of its 
communication. Each test client is then only responsible for creating 
its control queue, binding that such that it receives invites and then 
reading all the control messages that arrive in the queue.



Also, I'm thinking that for each test case instance there will only be
one sender and one receiver client (although the receiver may be asked
to open multiple connections for pubsub tests), whereas Gordon's
solution was a bit more general-purpose in that there could be many of
each. 


One of each is fine to begin with (though I do think that many of each 
will be desirable as well) and makes the point above (i.e. easy 
addressing of a subset of clients) less of an issue.




Re: Draft Interop Testing Spec - Please Read

2007-03-06 Thread Rupert Smith

Oops, it's at:

http://cwiki.apache.org/confluence/display/qpid/Interop+Testing+Specification

On 3/6/07, Rupert Smith <[EMAIL PROTECTED]> wrote:

Working copy of the spec is now at:

http://cwiki.apache.org/confluence/display/qpid/Interop+Testing+Spec.

This describes the centralized approach advocated by Gordon. One
important difference with Gordon's proposal is that the assign role
(and some other control messages) are not sent out on a fanout
exchange like he describes, but on a topic exchange instead. This is
because there may well be a pure JMS test client, and the JMS
implementation only talks to direct and topic exchanges, not fanout.
Also, I'm thinking that for each test case instance there will only be
one sender and one receiver client (although the receiver may be asked
to open multiple connections for pubsub tests), whereas Gordon's
solution was a bit more general-purpose in that there could be many of
each. I've been careful to make sure the many-of-each approach is
still viable, by mandating the use of correlation ids in test
conversations, so that receivers can disambiguate senders in a
many-to-many test. Left as a future direction for the spec.

Don't like something in the spec? It's a working copy, not set in
stone. I expect we'll discover a few things along the way and need to
adapt which is fine. I'm thoroughly bored of writing it now, time to
write some code.

Rupert

On 3/5/07, Rupert Smith <[EMAIL PROTECTED]> wrote:
> Alan, thanks for the scripting advice. Will definitely follow your
> lead here as you seem to have a clear idea of how you want this to
> work and how to make it convenient to use.
>
> As for gathering reports, I was thinking that each test (sender part)
> would report pass/fail (message body containing reason for failure in
> failure cases) to the coordinator, of which there is only one, and it
> will write out the xml reports. Reason for that being that the code to
> write the reports only needs to be written/maintained in one place.
> Was also thinking of adding the requirement that each test client talk
> to the coordinator over a separate AMQ connection from the one it
> sends its test messages over. The idea being, that if a test failure
> causes closure of the connection, it should still manage to send its
> report. A more serious melt-down that causes the test client to
> completely fail, should still result in the test report being written
> out as a fail, because the coordinator knows which tests/clients it
> started -> which ones did not produce a report -> which ones to write
> out a failure for. So no covered tracks. In this case, look in the log
> for the dead test client to figure out what happened. Does this sound
> ok?
>
> Producing updated working interop spec, with a view to putting it on
> the wiki later today.
>
> Rupert
>
> On 3/2/07, Alan Conway <[EMAIL PROTECTED]> wrote:
> > On Tue, 2007-02-27 at 20:43 +, Marnie McCormack wrote:
> > > Not too sure how to completely get round some system variables being set for
> > > the tests to run. For example, if you don't set JAVA_HOME or similar, how
> > > can you run the java command ?
> > JAVA_HOME is Sun's fault, not ours ;) but you can run Java without
> > JAVA_HOME based on PATH - e.g. the standard gcj install works like that.
> > Our test suite should work either way.
> >
> > > Can Alan or someone shed
> > > some light on how the scripts would work without some info about QPID_HOME
> > > type stuff ?
> >
> > Scripts (programs in general) can figure out where they themselves are
> > installed and find their config files etc. by looking relative to that
> > without using env vars. E.g. add this as qpid/interop/qpid_java_env:
> >
> > #!/bin/sh
> > export SCRIPT_DIR=`dirname $0`
> > ROOT=`dirname $SCRIPT_DIR`
> > export QPID_HOME=$ROOT/java/bin
> > export PATH=$QPID_HOME/bin:$PATH
> > # etc...
> >
> > Now an imaginary test script first sources the environment as follows:
> >
> > #!/bin/sh
> > # qpid_test
> > source `dirname $0`/qpid_java_env
> > echo QPID_HOME=$QPID_HOME
> >
> > Finally I put /home/aconway/svn/qpid/interop in my path, now I can call
> > qpid_test from anywhere and it always prints:
> >
> > [EMAIL PROTECTED] /]$ qpid_test
> > QPID_HOME=/home/aconway/svn/qpid/java/bin
> >
> > So now the user has only one thing to set - PATH - instead of 2 or 3,
> > which is important if you work in multiple checkouts and need to switch
> > easily from one to another. You can also run tests from a different
> > checkout *without* setting paths by simply saying:
> >  /home/aconway/svn/qpid2/interop/run_test
> > and automatically the environment is set for /home/aconway/svn/qpid2
> >
> > (NB: the example above fails if you are in a subdirectory of qpid and do
> > something like ../../interop/run_test, a slightly more complicated
> > script can handle that case too.)
> >
> > > Once the spec is agreed (probably we need to draw a line in the sand
> > > somewhere here ?) I think we need to:
> > >
> > > - documen

Re: Draft Interop Testing Spec - Please Read

2007-03-06 Thread Rupert Smith

Working copy of the spec is now at:

http://cwiki.apache.org/confluence/display/qpid/Interop+Testing+Spec.

This describes the centralized approach advocated by Gordon. One
important difference with Gordon's proposal is that the assign role
(and some other control messages) are not sent out on a fanout
exchange like he describes, but on a topic exchange instead. This is
because there may well be a pure JMS test client, and the JMS
implementation only talks to direct and topic exchanges, not fanout.
Also, I'm thinking that for each test case instance there will only be
one sender and one receiver client (although the receiver may be asked
to open multiple connections for pubsub tests), whereas Gordon's
solution was a bit more general-purpose in that there could be many of
each. I've been careful to make sure the many-of-each approach is
still viable, by mandating the use of correlation ids in test
conversations, so that receivers can disambiguate senders in a
many-to-many test. Left as a future direction for the spec.
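The correlation-id mandate makes the many-of-each case mechanical: a receiver just buckets incoming messages by correlation id. A minimal sketch (hypothetical message shape: (correlation_id, body) pairs, not taken from the spec):

```python
from collections import defaultdict

def split_by_correlation(messages):
    """Group received (correlation_id, body) pairs into per-sender
    conversations, so a receiver can report on each sender separately."""
    conversations = defaultdict(list)
    for correlation_id, body in messages:
        conversations[correlation_id].append(body)
    return dict(conversations)
```

Each sender would stamp its messages with a unique id (e.g. its declared client name), and the receiver's report then covers each conversation independently.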

Don't like something in the spec? It's a working copy, not set in
stone. I expect we'll discover a few things along the way and need to
adapt which is fine. I'm thoroughly bored of writing it now, time to
write some code.

Rupert

On 3/5/07, Rupert Smith <[EMAIL PROTECTED]> wrote:

Alan, thanks for the scripting advice. Will definitely follow your
lead here as you seem to have a clear idea of how you want this to
work and how to make it convenient to use.

As for gathering reports, I was thinking that each test (sender part)
would report pass/fail (message body containing reason for failure in
failure cases) to the coordinator, of which there is only one, and it
will write out the xml reports. Reason for that being that the code to
write the reports only needs to be written/maintained in one place.
Was also thinking of adding the requirement that each test client talk
to the coordinator over a seperate AMQ connection to that which is
sends its test messages over. The idea being, that if a test failure
causes closure of the connection, it should still manage to send its
report. A more serious melt-down that causes the test client to
completely fail, should still result in the test report being written
out as a fail, because the coordinator knows which tests/clients it
started -> which ones did not produce a report -> which ones to write
out a failure for. So no covered tracks. In this case, look in the log
for the dead test client to figure out what happened. Does this sound
ok?

Producing updated working interop spec, with a view to putting it on
the wiki later today.

Rupert

On 3/2/07, Alan Conway <[EMAIL PROTECTED]> wrote:
> On Tue, 2007-02-27 at 20:43 +, Marnie McCormack wrote:
> > Not too sure how to completely get round some system variables being set for
> > the tests to run. For example, if you don't set JAVA_HOME or similar, how
> > can you run the java command ?
> JAVA_HOME is Sun's fault, not ours ;) but you can run Java without
> JAVA_HOME based on PATH - e.g. the standard gcj install works like that.
> Our test suite should work either way.
>
> > Can Alan or someone shed
> > some light on how the scripts would work without some info about QPID_HOME
> > type stuff ?
>
> Scripts (programs in general) can figure out where they themselves are
> installed and find their config files etc. by looking relative to that
> without using env vars. E.g. add this as qpid/interop/qpid_java_env:
>
> #!/bin/sh
> export SCRIPT_DIR=`dirname $0`
> ROOT=`dirname $SCRIPT_DIR`
> export QPID_HOME=$ROOT/java/bin
> export PATH=$QPID_HOME/bin:$PATH
> # etc...
>
> Now an imaginary test script first sources the environment as follows:
>
> #!/bin/sh
> # qpid_test
> source `dirname $0`/qpid_java_env
> echo QPID_HOME=$QPID_HOME
>
> Finally I put /home/aconway/svn/qpid/interop in my path, now I can call
> qpid_test from anywhere and it always prints:
>
> [EMAIL PROTECTED] /]$ qpid_test
> QPID_HOME=/home/aconway/svn/qpid/java/bin
>
> So now the user has only one thing to set - PATH - instead of 2 or 3,
> which is important if you work in multiple checkouts and need to switch
> easily from one to another. You can also run tests from a different
> checkout *without* setting paths by simply saying:
>  /home/aconway/svn/qpid2/interop/run_test
> and automatically the environment is set for /home/aconway/svn/qpid2
>
> (NB: the example above fails if you are in a subdirectory of qpid and do
> something like ../../interop/run_test, a slightly more complicated
> script can handle that case too.)
>
> > Once the spec is agreed (probably we need to draw a line in the sand
> > somewhere here ?) I think we need to:
> >
> > - document on the wiki
> > - prioritise some elements
> > - split into compact JIRAs, by technology, so that individuals can pick up
> > the tasks and contribute (we don't have too many all-code-all-clients people
> > on the project !)
>
> I'm happy with the proposal so far, once this gets underway I'll tr

Re: Draft Interop Testing Spec - Please Read

2007-03-05 Thread Rupert Smith

Alan, thanks for the scripting advice. Will definitely follow your
lead here as you seem to have a clear idea of how you want this to
work and how to make it convenient to use.

As for gathering reports, I was thinking that each test (sender part)
would report pass/fail (message body containing reason for failure in
failure cases) to the coordinator, of which there is only one, and it
will write out the xml reports. Reason for that being that the code to
write the reports only needs to be written/maintained in one place.
Was also thinking of adding the requirement that each test client talk
to the coordinator over a seperate AMQ connection to that which is
sends its test messages over. The idea being, that if a test failure
causes closure of the connection, it should still manage to send its
report. A more serious melt-down that causes the test client to
completely fail, should still result in the test report being written
out as a fail, because the coordinator knows which tests/clients it
started -> which ones did not produce a report -> which ones to write
out a failure for. So no covered tracks. In this case, look in the log
for the dead test client to figure out what happened. Does this sound
ok?

Producing updated working interop spec, with a view to putting it on
the wiki later today.

Rupert

On 3/2/07, Alan Conway <[EMAIL PROTECTED]> wrote:

On Tue, 2007-02-27 at 20:43 +, Marnie McCormack wrote:
> Not too sure how to completely get round some system variables being set for
> the tests to run. For example, if you don't set JAVA_HOME or similar, how
> can you run the java command ?
JAVA_HOME is Sun's fault, not ours ;) but you can run Java without
JAVA_HOME based on PATH - e.g. the standard gcj install works like that.
Our test suite should work either way.

> Can Alan or someone shed
> some light on how the scripts would work without some info about QPID_HOME
> type stuff ?

Scripts (programs in general) can figure out where they themselves are
installed and find their config files etc. by looking relative to that
without using env vars. E.g. add this as qpid/interop/qpid_java_env:

#!/bin/sh
export SCRIPT_DIR=`dirname $0`
ROOT=`dirname $SCRIPT_DIR`
export QPID_HOME=$ROOT/java/bin
export PATH=$QPID_HOME/bin:$PATH
# etc...

Now an imaginary test script first sources the environment as follows:

#!/bin/sh
# qpid_test
source `dirname $0`/qpid_java_env
echo QPID_HOME=$QPID_HOME

Finally I put /home/aconway/svn/qpid/interop in my path, now I can call
qpid_test from anywhere and it always prints:

[EMAIL PROTECTED] /]$ qpid_test
QPID_HOME=/home/aconway/svn/qpid/java/bin

So now the user has only one thing to set - PATH - instead of 2 or 3,
which is important if you work in multiple checkouts and need to switch
easily from one to another. You can also run tests from a different
checkout *without* setting paths by simply saying:
 /home/aconway/svn/qpid2/interop/run_test
and automatically the environment is set for /home/aconway/svn/qpid2

(NB: the example above fails if you are in a subdirectory of qpid and do
something like ../../interop/run_test, a slightly more complicated
script can handle that case too.)

> Once the spec is agreed (probably we need to draw a line in the sand
> somewhere here ?) I think we need to:
>
> - document on the wiki
> - prioritise some elements
> - split into compact JIRAs, by technology, so that individuals can pick up
> the tasks and contribute (we don't have too many all-code-all-clients people
> on the project !)

I'm happy with the proposal so far, once this gets underway I'll try to
put my code where my mouth is for any further suggestions ;)

Cheers,
Alan.




Re: Draft Interop Testing Spec - Please Read

2007-03-02 Thread Alan Conway
On Tue, 2007-02-27 at 10:23 +, Rupert Smith wrote:
> On 2/27/07, Alan Conway <[EMAIL PROTECTED]> wrote:
snip
> > I disagree.  The actual components involved in a given test
> > run should be determined at runtime by the controller, not baked into
> > the tests. 

> What I was imagining is that each client would hear the declarations
> of the other clients and when each of those clients declared itself it
> declared its name and that it is these declared names that would be
> used to name the test outputs. 

Apologies for irrelevant rant - including language info in test reports
is good, I got the wrong end of the stick.

> It might even be
> advantageous to get the broker type in there too somewhere?
It might indeed. We'll have to play with some real reporting output to
figure out the right level of detail.

> Originally, I was thinking that each client would be responsible for
> writing out the results of the tests where it is the sending part, in
> the JUnit XML format. When Gordon suggested a more centralized
> approach, I liked the idea because only the coordinator is going to do
> the result logging, saving us the trouble of writing it in each
> implementation language. So, now I'm thinking that the coordinator
> sends out an invite for test case X, "Java-32765" and "Cpp-21364"
> reply to it, it sets up one with the sender role, one with the
> receiver role and runs test case X (through broker Y) and so on for
> all the other permutations. So the coordinator knows that this is a
> Java to Cpp test for case X through broker Y so can name the test
> results appropriately. If the coordinator is written in Java, I know
> that it is definitely possible to make it use JUnit to dynamically
> create and name test cases like this; it may require writing a special
> test decorator or test case implementation or something, but can be
> done.
> 

I like it - each interop test is a runtime composite of a selection of
"compatible" JUnit/CppUnit/pythonunit/rubyunit tests. We write the
per-language tests once and we get the harness to generate the
combinations we want to test.

On results: regardless of who produces the final report, we do have to
collect results from all participants. I'd be inclined to go lo-tech:
everybody dumps their assertions to the file system and we scrape it all
up afterwards, or use some simple non-qpid protocol like syslog to
gather results. If we use XML output we can stitch it all back together
in nice HTML pages at the end. It is tempting to use qpid  to gather the
reports, but then qpid failures could hide their own tracks.
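The lo-tech gathering step could look something like this minimal Python sketch, which stitches per-participant JUnit-style XML fragments into one summary. The report strings, attribute names, and participant names here are illustrative assumptions, not anything defined by the draft spec:

```python
# Sketch: scrape per-participant JUnit-style XML results and stitch them
# into one total. In practice the reports would be read off the file
# system; here they are inline strings for illustration.
import xml.etree.ElementTree as ET

def summarise(xml_texts):
    """Count tests and failures across a set of JUnit-style reports."""
    tests = failures = 0
    for text in xml_texts:
        suite = ET.fromstring(text)
        tests += int(suite.get("tests", 0))
        failures += int(suite.get("failures", 0))
    return tests, failures

# Two hypothetical reports, as dumped by a Java and a C++ participant.
reports = [
    '<testsuite name="interop.Java-32765" tests="3" failures="1"/>',
    '<testsuite name="interop.Cpp-21364" tests="3" failures="0"/>',
]
print(summarise(reports))  # (6, 1)
```

Because the scraping happens after the run, a qpid failure cannot hide its own tracks in the gathering step.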

> > I want to be able to do something like this:
> >
svn co https://blah/qpid
> > cd qpid/interop/bin
> > build_everything
> > run_interop_tests
> 
> What I'm thinking is that you will have to do a little bit more than
> this. 
Only the first time, after that I'll write scripts :) Seriously - let's
get it working, then we can make it easier to use. We'll need the finer
granularity anyway for investigating specific problems.

A couple of caveats:
 - No script that starts background processes returns until all such
processes are fully initialized (see QPID-304)
 - scripts return 0 exit status if and only if everything really is OK.
 - scripts finish in a finite time even in the event of failures.

Once we have that we can automate to our heart's content. Without those
guarantees reliable automation is impossible. (See the unreliable
automation in Qpid C++ for proof :)

> Is this an acceptable approach? The build scripts for each client can
> inject whatever paths and environment variables they need into their
start scripts during their builds?

Absolutely. I think the overall ideas are sound, and we can iron out the
wrinkles as we go.  Thanks for putting this together.

Cheers,
Alan.



Re: Draft Interop Testing Spec - Please Read

2007-03-02 Thread Alan Conway
On Tue, 2007-02-27 at 20:43 +, Marnie McCormack wrote:
> Not too sure how to completely get round some system variables being set for
> the tests to run. For example, if you don't set JAVA_HOME or similar, how
> can you run the java command ? 
JAVA_HOME is Sun's fault, not ours ;) but you can run Java without
JAVA_HOME based on PATH - e.g. the standard gcj install works like that.
Our test suite should work either way.

> Can Alan or someone shed
> some light on how the scripts would work without some info about QPID_HOME
> type stuff ?

Scripts (programs in general) can figure out where they themselves are
installed and find their config files etc. by looking relative to that
without using env vars. E.g. add this as qpid/interop/qpid_java_env:

#!/bin/sh
export SCRIPT_DIR=`dirname $0`
ROOT=`dirname $SCRIPT_DIR`
export QPID_HOME=$ROOT/java/bin
export PATH=$QPID_HOME/bin:$PATH
# etc...

Now an imaginary test script first sources the environment as follows:

#!/bin/sh
# qpid_test
source `dirname $0`/qpid_java_env
echo QPID_HOME=$QPID_HOME

Finally I put /home/aconway/svn/qpid/interop in my path, now I can call
qpid_test from anywhere and it always prints:

[EMAIL PROTECTED] /]$ qpid_test
QPID_HOME=/home/aconway/svn/qpid/java/bin

So now the user has only one thing to set - PATH - instead of 2 or 3,
which is important if you work in multiple checkouts and need to switch
easily from one to another. You can also run tests from a different
checkout *without* setting paths by simply saying:
 /home/aconway/svn/qpid2/interop/run_test
and automatically the environment is set for /home/aconway/svn/qpid2

(NB: the example above fails if you are in a subdirectory of qpid and do
something like ../../interop/run_test, a slightly more complicated
script can handle that case too.)
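The same self-location trick works in any language. As an illustration, a Python test driver could derive QPID_HOME from its own path the same way; the directory layout below is just the example layout above, not a fixed convention, and plain `dirname` (like `dirname $0`) is used for clarity rather than full canonicalization:

```python
# Sketch: derive QPID_HOME from the script's own location instead of
# requiring an environment variable. posixpath is used so the example
# is deterministic; a real driver would canonicalize with abspath.
import posixpath

def qpid_home(script_path):
    script_dir = posixpath.dirname(script_path)   # like `dirname $0`
    root = posixpath.dirname(script_dir)          # parent of interop/
    return posixpath.join(root, "java", "bin")

print(qpid_home("/home/aconway/svn/qpid/interop/qpid_test"))
# /home/aconway/svn/qpid/java/bin
```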

> Once the spec is agreed (probably we need to draw a line in the sand
> somewhere here ?) I think we need to:
> 
> - document on the wiki
> - prioritise some elements
> - split into compact JIRAs, by technology, so that individuals can pick up
> the tasks and contribute (we don't have too many all-code-all-clients people
> on the project !)

I'm happy with the proposal so far, once this gets underway I'll try to
put my code where my mouth is for any further suggestions ;) 

Cheers,
Alan.



Re: Draft Interop Testing Spec - Please Read

2007-02-27 Thread Marnie McCormack

I haven't managed to take in all the detail yet (great to see such an active
discussion on this topic) but noted a couple of things.

I agree with the feedback on pseudo-code from Alan. I can feel my life
slipping through my hands fast enough already :-)

Not too sure how to completely get round some system variables being set for
the tests to run. For example, if you don't set JAVA_HOME or similar, how
can you run the java command ? Should we search through paths - I'm not sure
I understand the alternative to some minimal setup. Can Alan or someone shed
some light on how the scripts would work without some info about QPID_HOME
type stuff ?

Once the spec is agreed (probably we need to draw a line in the sand
somewhere here ?) I think we need to:

- document on the wiki
- prioritise some elements
- split into compact JIRAs, by technology, so that individuals can pick up
the tasks and contribute (we don't have too many all-code-all-clients people
on the project !)

I'm happy to contribute effort on the legwork for these points to help us
get started. This is important for some of my users and I'm keen to see it
get easier to ensure interop. It's the bane of my (all of our) life at the
moment and something that lets us down a little.

Thanks for all your input !

Bfn,
Marnie


On 2/27/07, Rupert Smith <[EMAIL PROTECTED]> wrote:


On 2/27/07, Alan Conway <[EMAIL PROTECTED]> wrote:
> The actual components involved in a given test
> run should be determined at runtime by the controller, not baked into
> the tests.

Just to be clear: I'm in agreement with you here. What I'm saying is that
the coordinator works out what is available to test but dynamically names
the results to reflect what was actually tested. The JUnit XML output
format is convenient because most automated build servers understand it
and can produce reports from it. It may prove convenient to use JUnit to
write the coordinator, but it is not necessary in order to generate
results in a format matching the one used by JUnit.



Re: Draft Interop Testing Spec - Please Read

2007-02-27 Thread Rupert Smith

On 2/27/07, Alan Conway <[EMAIL PROTECTED]> wrote:

The actual components involved in a given test
run should be determined at runtime by the controller, not baked into
the tests.


Just to be clear: I'm in agreement with you here. What I'm saying is that
the coordinator works out what is available to test but dynamically names
the results to reflect what was actually tested. The JUnit XML output
format is convenient because most automated build servers understand it
and can produce reports from it. It may prove convenient to use JUnit to
write the coordinator, but it is not necessary in order to generate
results in a format matching the one used by JUnit.


Re: Draft Interop Testing Spec - Please Read

2007-02-27 Thread Rupert Smith

On 2/27/07, Alan Conway <[EMAIL PROTECTED]> wrote:

Rupert Smith wrote:
> If we go for a centralized controller approach, then the controller
> can supply the class + function name.
I don't follow. If I write a CppUnit test I fix the class name when I
write the tests, not when I run them. That's been my experience of JUnit
too but there may be extra flexibility there I'm not aware of.
> It'd be nice if the function name in the test report contained the
> sending and receiving clients' names, plus
> the test case name. E.g. "testSimpleP2P_FromCpp_ToJava" or something
> like that.
>
I disagree. A test should only know what language it is written in, it
should not need to be aware of what language the other participants may
be in. I should be able to run the same C++ client unmodified against
C++ or Java broker with other participating clients in the test being in
any supported language. The actual components involved in a given test
run should be determined at runtime by the controller, not baked into
the tests. Or maybe I am missing the point and this is related to your
point above that I didn't get?


What I was imagining is that each client would hear the declarations
of the other clients; when each client declared itself it would give
its name, and these declared names would be used to name the test
outputs. Which is what this rule is about:

 IOP-27. Client Name. Each test client will provide a unique name for
itself that reflects its implementation language and distinguishes it
from the other clients. Clients should append a timestamp or UUID onto
this name to cater for the case where the same client is used multiple
times in an interop test. For example, the same client might be run on
two different operating systems, in order to check that it works
correctly on both.
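A minimal sketch of the IOP-27 naming rule, assuming a UUID suffix (the exact suffix format is left to each client implementation):

```python
# Sketch of IOP-27: build a unique client name from the implementation
# language plus a UUID fragment, so the same client binary can join an
# interop run more than once without name collisions.
import uuid

def client_name(language):
    return "%s-%s" % (language, uuid.uuid4().hex[:8])

a = client_name("Java")
b = client_name("Java")
print(a != b)  # True: two launches of the same client stay distinct
```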

So, if the client "Java-32454" heard the client "Cpp-436565" declare
itself (and others too), it knows it is going to have to run all of
its Test Cases against that client (and the others). I think it would
be good if the test results for that test reflected the fact that it was
a Java to Cpp interop test, making it easy to spot in the results what
combination of clients produce interop problems. It might even be
advantageous to get the broker type in there too somewhere?

Originally, I was thinking that each client would be responsible for
writing out the results of the tests where it is the sending part, in
the JUnit XML format. When Gordon suggested a more centralized
approach, I liked the idea because only the coordinator is going to do
the result logging, saving us the trouble of writing it in each
implementation language. So, now I'm thinking that the coordinator
sends out an invite for test case X, "Java-32765" and "Cpp-21364"
reply to it, it sets up one with the sender role, one with the
receiver role and runs test case X (through broker Y) and so on for
all the other permutations. So the coordinator knows that this is a
Java to Cpp test for case X through broker Y so can name the test
results appropriately. If the coordinator is written in Java, I know
that it is definitely possible to make it use JUnit to dynamically
create and name test cases like this; it may require writing a special
test decorator or test case implementation or something, but can be
done.
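As a rough illustration of that planning step: after the invite round the coordinator knows which clients replied, and it can name each pairing so the report shows exactly which combination was exercised. The name format and client identifiers below are assumptions following the examples in this thread, not anything fixed by the spec:

```python
# Sketch: generate dynamically named test runs for every ordered
# sender/receiver pairing of the clients that answered the invite.
import itertools

def plan(test_case, clients, broker):
    """All ordered sender/receiver pairings for one test case."""
    return ["%s_From%s_To%s_Via%s" % (test_case,
                                      sender.split("-")[0],
                                      receiver.split("-")[0],
                                      broker)
            for sender, receiver in itertools.permutations(clients, 2)]

runs = plan("testSimpleP2P", ["Java-32765", "Cpp-21364"], "JavaBroker")
for name in runs:
    print(name)
# testSimpleP2P_FromJava_ToCpp_ViaJavaBroker
# testSimpleP2P_FromCpp_ToJava_ViaJavaBroker
```

In JUnit this list would drive dynamically created, dynamically named test cases; the planning logic itself is framework-independent.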


I want to be able to do something like this:

svn co https://blah/qpid
cd qpid/interop/bin
build_everything
run_interop_tests


What I'm thinking is that you will have to do a little bit more than
this. To begin with the qpid/interop directory won't contain the
scripts to start the brokers or clients, they will be put there as a
result of doing the build. I was thinking that the startall and
testall scripts would probably already exist under /interop as the
purpose of them is to work out what client scripts are available to
run. Which is what these requirements were about:

IOP-8. Broker Start Script. The java and c++ brokers will define
scripts that can start the broker running on the local machine, and
these scripts will be located at interop/java/broker/start and
interop/cpp/broker/start. The Java and C++ build processes will
generate these scripts (or copy pre-defined ones to the output
location) as part of their build processes.

IOP-14. Client Start Scripts. For each client implementation,
<language>, there will be a start script located at
interop/<language>/client/start. The build processes for each client
will generate these scripts and output them to this location as part
of their build process.

So I'm imagining that in order to run the interop tests you'll have to do:

svn co https://blah/qpid
cd qpid/cpp
./configure
make          (puts the cpp broker and client scripts under
interop/cpp/broker and interop/cpp/client)

cd qpid/java
mvn (puts the java broker and client scripts under
interop/java/broker and interop/java/client)

cd qpid/interop
cpp/broker/start
./startall    (starts all the available clients running)
./testall (starts the coordinator running t

Re: Draft Interop Testing Spec - Please Read

2007-02-26 Thread Alan Conway

Rupert Smith wrote:

If we go for a centralized controller approach, then the controller
can supply the class + function name.
I don't follow. If I write a CppUnit test I fix the class name when I 
write the tests, not when I run them. That's been my experience of JUnit 
too but there may be extra flexibility there I'm not aware of.

It'd be nice if the function name in the test report contained the
sending and receiving clients' names, plus
the test case name. E.g. "testSimpleP2P_FromCpp_ToJava" or something 
like that.


I disagree. A test should only know what language it is written in, it 
should not need to be aware of what language the other participants may 
be in. I should be able to run the same C++ client unmodified against 
C++ or Java broker with other participating clients in the test being in 
any supported language. The actual components involved in a given test 
run should be determined at runtime by the controller, not baked into 
the tests. Or maybe I am missing the point and this is related to your 
point above that I didn't get?

The idea behind the timeouts is that they get reset on every message
received (or maybe sent too). So they take no account of client
processing time at all. This idea came from when we were writing the
performance tests. To begin with I had fixed timeouts, in which the
test had to run. But we had to adjust these timeouts for different
test cases as some of the perftests take a long time to run. We
replaced this with a timeout that gets reset on every message
received. Then if one end of the test silently packs in and stops
sending, the other end detects the long pause and times out on it. As
long as the messages keep flowing the timeout will keep being reset.

+ Requirement for timeout only when client is waiting for something.


I'm all in favour of
a) avoiding arbitrary timeouts that have to be tweaked and
b) keeping it simple
so let's just try your scheme and make it more complicated only if we
have a real problem. If we trigger the timeouts on sends and receives I
think it sounds like a good heuristic for most cases - it's OK for a
test to take a long time as long as it's doing *something* (even if it's
all sending or all receiving) but there should be no extended period
when nothing happens at all.
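A minimal sketch of that rolling-timeout heuristic: the deadline is pushed back on every send or receive, so a busy test never times out while a silent one is detected after a fixed quiet period. The class name and the quiet period here are illustrative, not part of any client:

```python
# Sketch: a timeout that is reset by traffic. A test calls reset() on
# every message sent or received; expired() fires only after the wire
# has been quiet for longer than the configured period.
import time

class RollingTimeout:
    def __init__(self, quiet_period):
        self.quiet_period = quiet_period
        self.reset()

    def reset(self):
        """Call on every message sent or received."""
        self.deadline = time.monotonic() + self.quiet_period

    def expired(self):
        return time.monotonic() > self.deadline

t = RollingTimeout(0.05)        # 50 ms quiet period for the demo
time.sleep(0.02)
t.reset()                        # traffic observed: push deadline back
time.sleep(0.02)
print(t.expired())               # False: messages kept flowing
time.sleep(0.1)
print(t.expired())               # True: quiet longer than the period
```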

IOP-8a: the *only* prerequisite for the scripts to run is that they
are located in a checkout where cpp/make java/mvn have been run. In
particular they must NOT assume any environment variables are set.


+ IOP-8a.

Can they assume that environment variables set up to run
make/mvn/msbuild are available? For example, JAVA_HOME and java on the
path needs to be set up for maven. Scripts can assume they are
correctly set after a build?
That's exactly what I want to avoid. We need one-button build and test 
scripts in the interop toolkit so everyone doesn't have to learn the 
random quirks of every build system to find out if their stuff plays 
nice with others. I want to be able to do something like this:


svn co https://blah/qpid
cd qpid/interop/bin
build_everything
run_interop_tests

and find out what interoperates and what doesn't. I know setting 
JAVA_HOME seems like a small thing, but if I could get  back the hours 
of my life that have been wasted figuring out and doing the one or two 
small things for each of two or three systems in three or four languages 
on  five or six platforms just to make the #)[EMAIL PROTECTED] interop tests 
RUN, I would be a younger and more optimistic man today.



I think Gordon's more centralized approach will help clear that up.
'Invite', gather all invites, 'assign roles' and wait for 'ready'
acknowledgements to come back in before issuing a 'start'.

+ Two stage, invite then start, sequence.
++ from me, Gordon's is a better-articulated version of what I was trying 
to say :)

More centralized approach. Less framework code in each client, more in
the centralized coordinator. Tests send back reports to the
coordinator which writes the XML report out. Invite messages contain
the parameters for each test, so each test case will need to define
its parameters. Yes, I think this approach looks good. It's getting a
bit more heavily engineered but I do agree that there's a saving to be
made by putting common work in the coordinator.

+ Rewrite the spec to use this more centralized approach.
Careful not to get carried away here. I think most of the central 
co-ordination we need can be achieved using the broker itself and a set 
of conventions about using queues. We need something to fire up the 
clients but most of the work there should  be done by CppUnit, JUnit or 
whatever. I'm thinking more of scripts to kick off the collection of 
clients we want to test at roughly the same time and then let them hash 
it out through the broker. Also let's not overlook the reporting side, 
that's where ad-hoc test frameworks usually suck the most.





More to the point, one of the basic ideas you propose is already using
broker communication and queues

Re: Draft Interop Testing Spec - Please Read

2007-02-26 Thread Martin Ritchie

On 26/02/07, Rupert Smith <[EMAIL PROTECTED]> wrote:

I forgot:

+ Use unspecified virtual host as default.

and

+ Dummy Test Case. Implements a run-through of the interaction with the
coordinator for a test, but does not actually test anything.

On 2/26/07, Rupert Smith <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I've had a chance now to read this stuff through. I will aim to send
> out an updated interop test spec around about wednesday-ish taking on
> board points made.
>
> Points to take account of:
>
> Alan Conway
> > We can write a CppUnit formatter in junit style, but do we need to agree
> > on qualified test class names to appear in the report or will
> > unqualified class name + function name suffice?
>
> If we go for a centralized controller approach, then the controller
> can supply the class + function name.
> It'd be nice if the function name in the test report contained the
> sending and receiving clients' names, plus
> the test case name. E.g. "testSimpleP2P_FromCpp_ToJava" or something like 
that.
>
> + Requirement for definition of test case names and classes.
>
> > Simple, but includes client processing time in the timeout. More
> > accurate would be to have timeout in effect only when the client has
> > reason to expect something from the broker.
>
> The idea behind the timeouts is that they get reset on every message
> received (or maybe sent too). So they take no account of client
> processing time at all. This idea came from when we were writing the
> performance tests. To begin with I had fixed timeouts, in which the
> test had to run. But we had to adjust these timeouts for different
> test cases as some of the perftests take a long time to run. We
> replaced this with a timeout that gets reset on every message
> received. Then if one end of the test silently packs in and stops
> sending, the other end detects the long pause and times out on it. As
> long as the messages keep flowing the timeout will keep being reset.
>
> + Requirement for timeout only when client is waiting for something.
>
> > IOP-8a: the *only* prerequisite for the scripts to run is that they
> > are located in a checkout where cpp/make java/mvn have been run. In
> > particular they must NOT assume any environment variables are set.
>
> + IOP-8a.
>
> Can they assume that environment variables set up to run
> make/mvn/msbuild are available? For example, JAVA_HOME and java on the
> path needs to be set up for maven. Scripts can assume they are
> correctly set after a build?
>
> > For consistency and to avoid possible headaches with special characters
> > I'd  suggest cpp rather than c++ for file/directory names.
>
> + That. Will change c++ to cpp.
>
> > NB: timing issues - e.g. one that plagues the current C++ topic test. If
> > the publisher finishes publishing before all of the subscribers are
> > listening (few messages, many subscribers) you get hanging subscribers
> > that missed the "TERMINATE" message
>
> I think Gordon's more centralized approach will help clear that up.
> 'Invite', gather all invites, 'assign roles' and wait for 'ready'
> acknowledgements to come back in before issuing a 'start'.
>
> + Two stage, invite then start, sequence.
>
> Gordon Sim
> > I think the approach is great. One thought that occurred is that by
> > offloading more work to the controller/master we minimise the amount of
> > framework code we need to write for each test in each language. The
> > controller would only need to be written in one language.
>
> More centralized approach. Less framework code in each client, more in
> the centralized coordinator. Tests send back reports to the
> coordinator which writes the XML report out. Invite messages contain
> the parameters for each test, so each test case will need to define
> its parameters. Yes, I think this approach looks good. It's getting a
> bit more heavily engineered but I do agree that there's a saving to be
> made by putting common work in the coordinator.
>
> + Rewrite the spec to use this more centralized approach.
>
> I think there should be some sort of compulsory invite that the
> coordinator uses to discover what clients are available to test. Then
> if a client cannot accept an invite the coordinator knows to give that
> client a fail for that test. Example, start all clients, coordinator
> sends compulsory invite, all clients acknowledge it, coordinator
> invites to do simple p2p test, some clients haven't implemented that
> one yet, coordinator gives those ones a fail, then runs the others.
>
> > I like the idea of having a single client executable for each language
>
> Yes, makes them easier to run in a fully automated way. Nothing to
> stop the clients being written in separate sender and receiver parts
> with their own main methods so that they can be run in separate
> pieces, then having another class that ties them together into a
> single executable for both parts for the purposes of this spec.
>
> Tomas Restrepo
> > 3- I think the initial client to broker connect

Re: Draft Interop Testing Spec - Please Read

2007-02-26 Thread Rupert Smith

I forgot:

+ Use unspecified virtual host as default.

and

+ Dummy Test Case. Implements a run-through of the interaction with the
coordinator for a test, but does not actually test anything.

On 2/26/07, Rupert Smith <[EMAIL PROTECTED]> wrote:

Hi,

I've had a chance now to read this stuff through. I will aim to send
out an updated interop test spec around about wednesday-ish taking on
board points made.

Points to take account of:

Alan Conway
> We can write a CppUnit formatter in junit style, but do we need to agree
> on qualified test class names to appear in the report or will
> unqualified class name + function name suffice?

If we go for a centralized controller approach, then the controller
can supply the class + function name.
It'd be nice if the function name in the test report contained the
sending and receiving clients' names, plus
the test case name. E.g. "testSimpleP2P_FromCpp_ToJava" or something like that.

+ Requirement for definition of test case names and classes.

> Simple, but includes client processing time in the timeout. More
> accurate would be to have timeout in effect only when the client has
> reason to expect something from the broker.

The idea behind the timeouts is that they get reset on every message
received (or maybe sent too). So they take no account of client
processing time at all. This idea came from when we were writing the
performance tests. To begin with I had fixed timeouts, in which the
test had to run. But we had to adjust these timeouts for different
test cases as some of the perftests take a long time to run. We
replaced this with a timeout that gets reset on every message
received. Then if one end of the test silently packs in and stops
sending, the other end detects the long pause and times out on it. As
long as the messages keep flowing the timeout will keep being reset.

+ Requirement for timeout only when client is waiting for something.

> IOP-8a: the *only* prerequisite for the scripts to run is that they
> are located in a checkout where cpp/make java/mvn have been run. In
> particular they must NOT assume any environment variables are set.

+ IOP-8a.

Can they assume that environment variables set up to run
make/mvn/msbuild are available? For example, JAVA_HOME and java on the
path needs to be set up for maven. Scripts can assume they are
correctly set after a build?

> For consistency and to avoid possible headaches with special characters
> I'd  suggest cpp rather than c++ for file/directory names.

+ That. Will change c++ to cpp.

> NB: timing issues - e.g. one that plagues the current C++ topic test. If
> the publisher finishes publishing before all of the subscribers are
> listening (few messages, many subscribers) you get hanging subscribers
> that missed the "TERMINATE" message

I think Gordon's more centralized approach will help clear that up.
'Invite', gather all invites, 'assign roles' and wait for 'ready'
acknowledgements to come back in before issuing a 'start'.

+ Two stage, invite then start, sequence.

Gordon Sim
> I think the approach is great. One thought that occurred is that by
> offloading more work to the controller/master we minimise the amount of
> framework code we need to write for each test in each language. The
> controller would only need to be written in one language.

More centralized approach. Less framework code in each client, more in
the centralized coordinator. Tests send back reports to the
coordinator which writes the XML report out. Invite messages contain
the parameters for each test, so each test case will need to define
its parameters. Yes, I think this approach looks good. It's getting a
bit more heavily engineered but I do agree that there's a saving to be
made by putting common work in the coordinator.

+ Rewrite the spec to use this more centralized approach.

I think there should be some sort of compulsory invite that the
coordinator uses to discover what clients are available to test. Then
if a client cannot accept an invite the coordinator knows to give that
client a fail for that test. Example, start all clients, coordinator
sends compulsory invite, all clients acknowledge it, coordinator
invites to do simple p2p test, some clients haven't implemented that
one yet, coordinator gives those ones a fail, then runs the others.

> I like the idea of having a single client executable for each language

Yes, makes them easier to run in a fully automated way. Nothing to
stop the clients being written in separate sender and receiver parts
with their own main methods so that they can be run in separate
pieces, then having another class that ties them together into a
single executable for both parts for the purposes of this spec.

Tomas Restrepo
> 3- I think the initial client to broker connection tests might want to be
> handled a bit differently (maybe a set of more automated, but regular,
> integration/unit tests for each client testing different success and failure
> connection conditions).

I think each client nee

Re: Draft Interop Testing Spec - Please Read

2007-02-26 Thread Rupert Smith

Hi,

I've had a chance now to read this stuff through. I will aim to send
out an updated interop test spec around about wednesday-ish taking on
board points made.

Points to take account of:

Alan Conway

We can write a CppUnit formatter in junit style, but do we need to agree
on qualified test class names to appear in the report or will
unqualified class name + function name suffice?


If we go for a centralized controller approach, then the controller
can supply the class + function name.
It'd be nice if the function name in the test report contained the
sending and receiving clients' names, plus
the test case name. E.g. "testSimpleP2P_FromCpp_ToJava" or something like that.

+ Requirement for definition of test case names and classes.


Simple, but includes client processing time in the timeout. More
accurate would be to have timeout in effect only when the client has
reason to expect something from the broker.


The idea behind the timeouts is that they get reset on every message
received (or maybe sent too). So they take no account of client
processing time at all. This idea came from when we were writing the
performance tests. To begin with I had fixed timeouts, in which the
test had to run. But we had to adjust these timeouts for different
test cases as some of the perftests take a long time to run. We
replaced this with a timeout that gets reset on every message
received. Then if one end of the test silently packs in and stops
sending, the other end detects the long pause and times out on it. As
long as the messages keep flowing the timeout will keep being reset.

+ Requirement for timeout only when client is waiting for something.


IOP-8a: the *only* prerequisite for the scripts to run is that they
are located in a checkout where cpp/make java/mvn have been run. In
particular they must NOT assume any environment variables are set.


+ IOP-8a.

Can they assume that environment variables set up to run
make/mvn/msbuild are available? For example, JAVA_HOME and java on the
path needs to be set up for maven. Scripts can assume they are
correctly set after a build?


For consistency and to avoid possible headaches with special characters
I'd  suggest cpp rather than c++ for file/directory names.


+ That. Will change c++ to cpp.


NB: timing issues - e.g. one that plagues the current C++ topic test. If
the publisher finishes publishing before all of the subscribers are
listening (few messages, many subscribers) you get hanging subscribers
that missed the "TERMINATE" message


I think Gordon's more centralized approach will help clear that up.
'Invite', gather all invites, 'assign roles' and wait for 'ready'
acknowledgements to come back in before issuing a 'start'.

+ Two stage, invite then start, sequence.

Gordon Sim

I think the approach is great. One thought that occurred is that by
offloading more work to the controller/master we minimise the amount of
framework code we need to write for each test in each language. The
controller would only need to be written in one language.


More centralized approach. Less framework code in each client, more in
the centralized coordinator. Tests send back reports to the
coordinator which writes the XML report out. Invite messages contain
the parameters for each test, so each test case will need to define
its parameters. Yes, I think this approach looks good. It's getting a
bit more heavily engineered but I do agree that there's a saving to be
made by putting common work in the coordinator.

+ Rewrite the spec to use this more centralized approach.

I think there should be some sort of compulsory invite that the
coordinator uses to discover what clients are available to test. Then
if a client cannot accept an invite the coordinator knows to give that
client a fail for that test. Example, start all clients, coordinator
sends compulsory invite, all clients acknowledge it, coordinator
invites to do simple p2p test, some clients haven't implemented that
one yet, coordinator gives those ones a fail, then runs the others.


I like the idea of having a single client executable for each language


Yes, makes them easier to run in a fully automated way. Nothing to
stop the clients being written in separate sender and receiver parts
with their own main methods so that they can be run in separate
pieces, then having another class that ties them together into a
single executable for both parts for the purposes of this spec.

Tomas Restrepo

3- I think the initial client to broker connection tests might want to be
handled a bit differently (maybe a set of more automated, but regular,
integration/unit tests for each client testing different success and failure
connection conditions).


I think each client needs to have a simple connect and send a message
to itself test as Test Case 0. This will just be a copy of the
existing client tests that do this simple test.


More to the point, one of the basic ideas you propose is already using
broker communication and queues to connect the test controller with the
individual test clients.

Re: Draft Interop Testing Spec - Please Read

2007-02-23 Thread Rupert Smith

Good to see lots of constructive points being made, and glad to see
that people are enthusiastic about the general approach. I'll have to
wait until monday to give everything a good read through and produce a
proper account of what points to take on board. But in general, yes to
most points suggested so far.

Tomas - I'm thinking it should test both. At the moment I'm more
concerned with client-client interop because I'm having problems there
(field tables at the moment). I think that in many cases the broker
never fully decodes the messages so these problems only show up
between different clients. But in general, I think the goal is to
fully interop test everything.

John's idea sounds like a worthy one, but I'd like to keep it in
the waiting room for now. I agree with Alan here that the goal at the
moment is to get at least some tests up and running quickly. I'll try
to bear the idea in mind and ensure that we keep things open enough
to add it in one day when we feel we have plenty of time on our hands.

Rupert

On 2/23/07, Alan Conway <[EMAIL PROTECTED]> wrote:

John O'Hara wrote:
> Excellent.
>
> We don't have a common API (yet) so this could be tricky.
>
> One way might be to write a command processor in each language to process
> "pseudo-code" and call the right API functions in that language.
> This way we could have a core set of test definitions for any language
> -- all
> you'd have to do for the new language is write the parser-to-api mapping.

In principle I like this idea. In practice I suspect it will suck up
time like a sponge leaving little or none to write actual tests. I
suggest we focus first on building a small but useful suite of tests
described in human text and implement them across the board. Having done
that we will probably have more insight into the best way to
automate/formalize the framework. I'd prefer to get a working suite of
tests, then refine the framework and tests, than spend months on a
framework and still have no interop testing.

Cheers,
Alan.



Re: Draft Interop Testing Spec - Please Read

2007-02-23 Thread Alan Conway

John O'Hara wrote:

Excellent.

We don't have a common API (yet) so this could be tricky.

One way might be to write a command processor in each language to process
"pseudo-code" and call the right API functions in that language.
This way we could have a core set of test definitions for any language
-- all

you'd have to do for the new language is write the parser-to-api mapping.


In principle I like this idea. In practice I suspect it will suck up 
time like a sponge leaving little or none to write actual tests. I 
suggest we focus first on building a small but useful suite of tests 
described in human text and implement them across the board. Having done 
that we will probably have more insight into the best way to 
automate/formalize the framework. I'd prefer to get a working suite of 
tests, then refine the framework and tests, than spend months on a
framework and still have no interop testing.


Cheers,
Alan.


Re: Draft Interop Testing Spec - Please Read

2007-02-23 Thread Gordon Sim

Tomas Restrepo wrote:

Hi Rupert,


 I'm particularly interested in getting some constructive feedback on
this. If you disagree with something, please also suggest an
alternative way of doing it, that you feel will be better, more
reliable, easier to implement, or whatever. Thanks.


I think this is a fantastic idea and very needed. Some thoughts:

1- Would this be mostly aimed at testing client-client interop,
client-broker interop or both? It seems to me much of the implementation
needs you specify seemed to be aimed at client-client, but maybe I'm
mistaken.


I think it is both. You raise a good point though, in that if the test 
infrastructure assumes a degree of interoperability to begin with, the 
tests it runs are only worthwhile if they test interoperability in a 
more advanced sense. That was the assumption I was making; allowing easy 
creation of tests that give us confidence that systems written using 
different pieces of the product will all work as expected rather than 
just verifying that clients can 'talk to' brokers in a basic way.



2- Personally, I'd favor an approach a bit more like Gordon's idea of it
being more "centrally controlled" by the controller. Start a client-test
process, launch the controller, and do everything from there. It would be
simpler to create, run and maintain, I think.

3- I think the initial client to broker connection tests might want to be
handled a bit differently (maybe a set of more automated, but regular,
integration/unit tests for each client testing different success and failure
connection conditions).


Yes, I agree. In c++ we have a really simple client test program that 
connects, sends itself a message then stops. Something similar for each 
language would give a simple test for the ability to connect, send and 
receive in basic form. That's probably the first step before any test 
framework is run. Maybe we should also have a dummy test that doesn't do 
any real work but just allows the operation of the framework to be verified.



My main reason for saying this is that I think it might be more awkward to
cram the connection-level tests into the kind of structure proposed, and
even more if we went with a more central architecture such as the one Gordon
proposed.


Yes, I completely agree.


More to the point, one of the basic ideas you propose is already using
broker communication and queues to connect the test controller with the
individual test clients, meaning you already assume a fairly high level of
interoperability being possible between clients and brokers (and even
between clients seeing as how the controller would be written in a single
language and each client in a different one). 
This is also one of the main reasons I ask whether the tests will mostly

target client-client scenarios or client-broker scenarios (as for most of
the infrastructure it seems to assume the latter already works pretty well).

Then again, maybe I'm just missing something :)


I think you are right. Any framework based on AMQP itself should 
minimise the functionality it relies on. There needs to be some test of 
basic connectivity separate from that to ensure that a given client can 
be relied on to work within the framework. The value of the framework is 
then dependent on what other useful aspects it can test. There is always 
a danger of over-engineering.


However assuming that we want to test more than just single client to 
broker interactions, there are some common aspects that you don't want 
to have to keep re-writing, there are co-ordination issues such as the 
ones Alan raised, verification and reporting of results etc. I also 
think that there is useful interop testing to be done around features 
not required by the framework e.g. acks, txns, more advanced routing 
patterns etc.


Re: Draft Interop Testing Spec - Please Read

2007-02-22 Thread John O'Hara

Or silly me, it could just be a bunch of XML (for those that can stand it)
cos everything parses XML these days.

On 23/02/07, John O'Hara <[EMAIL PROTECTED]> wrote:


Excellent.

We don't have a common API (yet) so this could be tricky.

One way might be to write a command processor in each language to process
"pseudo-code" and call the right API functions in that language.
This way we could have a core set of test definitions for any language --
all you'd have to do for the new language is write the parser-to-api
mapping.

Create Queue
Subscribe
Publish
Transmit

This wouldn't be perfect, since there would be edge cases specific to a
given language/API.  But it would give a good solid interop core for common
patterns of usage.

Where this leads is towards the iMatix idea of a "low-level protocol
exerciser", where an XML language is used to drive tests at the protocol
level.
But that has its own issues, since it doesn't exercise the client APIs.
It also causes an issue that it kind of starts to treat the wire commands as
top-level API calls; a subject which has been the source of animated
arguments in the past.

What I'm advocating is more the pseudo-code idea: take the test cases and
translate them to pseudo-code, then do the most "obvious" mapping to each
client API in the parser.  The controller would run these engines, feeding
them either commands or scripts it has generated.  The output would go back
to the controller with some kind of correlation.  To be really with it, you
should use AMQP to communicate between the controller and its children, or
good old fashioned pipes.

It would make writing one test across five APIs/languages easy, and would
help ensure some consistency in the tests (at least for a core set).

Lex and Yacc are available for C/C++, Java, C#, Perl, Python and Ruby.  So
getting a "little language" to all those from a common grammar would also be
quite straightforward (and not require all testers to have those tools
either).

Does this have any attraction to anyone?
John


On 22/02/07, Tomas Restrepo <[EMAIL PROTECTED] > wrote:
>
> Hi Rupert,
>
> >  I'm particularly interested in getting some constructive feedback on
> > this. If you disagree with something, please also suggest an
> > alternative way of doing it, that you feel will be better, more
> > reliable, easier to implement, or whatever. Thanks.
>
> I think this is a fantastic idea and very needed. Some thoughts:
>
> 1- Would this be mostly aimed at testing client-client interop,
> client-broker interop or both? It seems to me much of the implementation
> needs you specify seemed to be aimed at client-client, but maybe I'm
> mistaken.
>
> 2- Personally, I'd favor an approach a bit more like Gordon's idea of it
> being more "centrally controlled" by the controller. Start a client-test
> process, launch the controller, and do everything from there. It would
> be
> simpler to create, run and maintain, I think.
>
> 3- I think the initial client to broker connection tests might want to
> be
> handled a bit differently (maybe a set of more automated, but regular,
> integration/unit tests for each client testing different success and
> failure
> connection conditions).
>
> My main reason for saying this is that I think it might be more awkward
> to
> cram the connection-level tests into the kind of structure proposed, and
> even more if we went with a more central architecture such as the one
> Gordon
> proposed.
>
> More to the point, one of the basic ideas you propose is already using
> broker communication and queues to connect the test controller with the
> individual test clients, meaning you already assume a fairly high level
> of
> interoperability being possible between clients and brokers (and even
> between clients seeing as how the controller would be written in a
> single
> language and each client in a different one).
> This is also one of the main reasons I ask whether the tests will mostly
>
> target client-client scenarios or client-broker scenarios (as for most
> of
> the infrastructure it seems to assume the latter already works pretty
> well).
>
> Then again, maybe I'm just missing something :)
>
> Tomas Restrepo
> [EMAIL PROTECTED]
> http://www.winterdom.com/weblog/
>
>
>
>
>



Re: Draft Interop Testing Spec - Please Read

2007-02-22 Thread John O'Hara

Excellent.

We don't have a common API (yet) so this could be tricky.

One way might be to write a command processor in each language to process
"pseudo-code" and call the right API functions in that language.
This way we could have a core set of test definitions for any language -- all
you'd have to do for the new language is write the parser-to-api mapping.

Create Queue
Subscribe
Publish
Transmit

This wouldn't be perfect, since there would be edge cases specific to a
given language/API.  But it would give a good solid interop core for common
patterns of usage.
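As a hedged sketch of that idea in Python (every name here is hypothetical, not an existing Qpid API): a shared table maps each pseudo-command to a client API call, so porting the tests to another language only means rewriting the table and the client binding behind it.

```python
# Illustrative only: RecordingClient stands in for a real AMQP client
# binding and simply records the calls it receives, so we can see the
# pseudo-code -> API mapping in action.

class RecordingClient:
    """Stands in for a real AMQP client; records the calls it receives."""
    def __init__(self):
        self.calls = []

    def create_queue(self, name):
        self.calls.append(("create_queue", name))

    def publish(self, queue, body):
        self.calls.append(("publish", queue, body))

def run_script(script, client):
    # Map pseudo-commands to API methods; a port to another language
    # only needs to rewrite this table and the client behind it.
    commands = {
        "CREATE_QUEUE": lambda args: client.create_queue(args[0]),
        "PUBLISH": lambda args: client.publish(args[0], args[1]),
    }
    for line in script.strip().splitlines():
        op, *args = line.split()
        commands[op](args)

client = RecordingClient()
run_script("CREATE_QUEUE q1\nPUBLISH q1 hello", client)
print(client.calls)
```

The edge cases John mentions would still need per-language escape hatches, but the common core stays in one place.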

Where this leads is towards the iMatix idea of a "low-level protocol
exerciser", where an XML language is used to drive tests at the protocol
level.
But that has its own issues, since it doesn't exercise the client APIs.  It
also causes an issue that it kind of starts to treat the wire commands as
top-level API calls; a subject which has been the source of animated
arguments in the past.

What I'm advocating is more the pseudo-code idea: take the test cases and
translate them to pseudo-code, then do the most "obvious" mapping to each
client API in the parser.  The controller would run these engines, feeding
them either commands or scripts it has generated.  The output would go back
to the controller with some kind of correlation.  To be really with it, you
should use AMQP to communicate between the controller and its children, or
good old fashioned pipes.

It would make writing one test across five APIs/languages easy, and would
help ensure some consistency in the tests (at least for a core set).

Lex and Yacc are available for C/C++, Java, C#, Perl, Python and Ruby.  So
getting a "little language" to all those from a common grammar would also be
quite straightforward (and not require all testers to have those tools
either).

Does this have any attraction to anyone?
John


On 22/02/07, Tomas Restrepo <[EMAIL PROTECTED]> wrote:


Hi Rupert,

>  I'm particularly interested in getting some constructive feedback on
> this. If you disagree with something, please also suggest an
> alternative way of doing it, that you feel will be better, more
> reliable, easier to implement, or whatever. Thanks.

I think this is a fantastic idea and very needed. Some thoughts:

1- Would this be mostly aimed at testing client-client interop,
client-broker interop or both? It seems to me much of the implementation
needs you specify seemed to be aimed at client-client, but maybe I'm
mistaken.

2- Personally, I'd favor an approach a bit more like Gordon's idea of it
being more "centrally controlled" by the controller. Start a client-test
process, launch the controller, and do everything from there. It would be
simpler to create, run and maintain, I think.

3- I think the initial client to broker connection tests might want to be
handled a bit differently (maybe a set of more automated, but regular,
integration/unit tests for each client testing different success and
failure
connection conditions).

My main reason for saying this is that I think it might be more awkward to
cram the connection-level tests into the kind of structure proposed, and
even more if we went with a more central architecture such as the one
Gordon
proposed.

More to the point, one of the basic ideas you propose is already using
broker communication and queues to connect the test controller with the
individual test clients, meaning you already assume a fairly high level of
interoperability being possible between clients and brokers (and even
between clients seeing as how the controller would be written in a single
language and each client in a different one).
This is also one of the main reasons I ask whether the tests will mostly
target client-client scenarios or client-broker scenarios (as for most of
the infrastructure it seems to assume the latter already works pretty
well).

Then again, maybe I'm just missing something :)

Tomas Restrepo
[EMAIL PROTECTED]
http://www.winterdom.com/weblog/







RE: Draft Interop Testing Spec - Please Read

2007-02-22 Thread Tomas Restrepo
Hi Rupert,

>  I'm particularly interested in getting some constructive feedback on
> this. If you disagree with something, please also suggest an
> alternative way of doing it, that you feel will be better, more
> reliable, easier to implement, or whatever. Thanks.

I think this is a fantastic idea and very needed. Some thoughts:

1- Would this be mostly aimed at testing client-client interop,
client-broker interop or both? It seems to me much of the implementation
needs you specify seemed to be aimed at client-client, but maybe I'm
mistaken.

2- Personally, I'd favor an approach a bit more like Gordon's idea of it
being more "centrally controlled" by the controller. Start a client-test
process, launch the controller, and do everything from there. It would be
simpler to create, run and maintain, I think.

3- I think the initial client to broker connection tests might want to be
handled a bit differently (maybe a set of more automated, but regular,
integration/unit tests for each client testing different success and failure
connection conditions).

My main reason for saying this is that I think it might be more awkward to
cram the connection-level tests into the kind of structure proposed, and
even more if we went with a more central architecture such as the one Gordon
proposed.

More to the point, one of the basic ideas you propose is already using
broker communication and queues to connect the test controller with the
individual test clients, meaning you already assume a fairly high level of
interoperability being possible between clients and brokers (and even
between clients seeing as how the controller would be written in a single
language and each client in a different one). 
This is also one of the main reasons I ask whether the tests will mostly
target client-client scenarios or client-broker scenarios (as for most of
the infrastructure it seems to assume the latter already works pretty well).

Then again, maybe I'm just missing something :)

Tomas Restrepo
[EMAIL PROTECTED]
http://www.winterdom.com/weblog/






Re: Draft Interop Testing Spec - Please Read

2007-02-22 Thread Gordon Sim

Rupert Smith wrote:

I'm particularly interested in getting some constructive feedback on
this. If you disagree with something, please also suggest an
alternative way of doing it, that you feel will be better, more
reliable, easier to implement, or whatever. Thanks.


I think the approach is great. One thought that occurred is that by 
offloading more work to the controller/master we minimise the amount of 
framework code we need to write for each test in each language. The 
controller would only need to be written in one language.


I like the idea of having a single client executable for each language 
also, that can run a named test under a particular role. That makes it 
easy to start up a given number of instances of different client 
implementations without having to remember too many script options etc. 
The controller can handle the test to run and the role each client 
should take on.


I'd suggest we use an unspecified virtual host. Assuming we will be 
using a dedicated broker instance for the tests we won't need the 
isolation virtual hosts provide, and just using the default makes setup 
easier in my view.


Attached are some notes that offer an alternative system. I find it 
simpler and yet quite flexible, but as is often the case that might just 
be because it fits more with my preconceptions. It's not as tightly 
specified as your document but if you like the direction we can work on 
the details a bit more. If you don't like the direction I won't be 
offended, so please feel free to ignore it!




==Actors==

There are two types of actor in the test:

1. controller 

A single controller will operate every test. The controller framework
only needs to be written in one language. The steps a controller
follows for each test are:

creates and binds an exclusive queue to the nameless exchange with
key 'control', and consumes from this queue.

sends out an 'invite' for a named test to amq.topic with routing
key 'control'. Any listening clients that are able to be part of
that test respond with an 'enlist' message.

checks a queue exists for each enlisted client and binds them to
one of two new fanout exchanges created for the senders and
receivers on this test

sends a 'role-assignment' message to each exchange

waits for 'ready' messages from all enlisted receivers

sends a 'start' message to the senders exchange

waits for 'sent-report's from each sender(*)

sends an 'end-test' to all receivers

waits for 'received-report's from each receiver(*)

compares actual and expected reports and prints results

deletes senders and receivers exchanges

(*)if not all reports are received in a given time, the controller
will send an 'end-test' message to all outstanding participants

2. participant

Each client is capable of participating in a set of named tests in a
particular role. Each client has a unique id. The roles are for now
'sender' and 'receiver'.

In general the clients follow these steps:

create a queue named after the client's id, bind it to amq.topic
with routing key 'control' and consume from it

listen for 'invite's to tests they are able to participate in

when such an 'invite' is received, send an 'enlist' message to the
control queue

wait for a 'role-assignment' then carry out the relevant role

when a 'terminate' message is received, shutdown

The steps for the different roles are:

2(a) sender 

waits for 'start' message

sends messages to a defined 'route'

sends a 'sent-report' to the master

2(b) receiver

prepares to consume messages from a defined 'route'

sends a 'ready' message to control queue

records all received messages

detects completion by receiving an 'end-test' marker

on completion sends a 'received-report' to the master and carries
out any cleanup required
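The invite/enlist handshake described above can be sketched in miniature. This is an illustrative in-process simulation only: plain Python lists stand in for AMQP queues and the control topic, and all ids and message fields are assumptions, not real client API.

```python
# Simulated control plane: one list per queue, dicts as messages with the
# 'control-type' property the notes describe.

control_queue = []                             # controller's queue (key 'control')
client_queues = {"java-1": [], "cpp-1": []}    # one queue per client id

def broadcast_invite(test_name):
    # controller -> amq.topic('control'): every listening client sees the invite
    for q in client_queues.values():
        q.append({"control-type": "invite", "test": test_name})

def client_step(cid, supported_tests):
    # each client: on an invite for a test it supports, enlist on the control queue
    msg = client_queues[cid].pop(0)
    if msg["control-type"] == "invite" and msg["test"] in supported_tests:
        control_queue.append({"control-type": "enlist", "id": cid})

broadcast_invite("p2p-basic")
client_step("java-1", {"p2p-basic"})
client_step("cpp-1", {"pubsub-basic"})   # can't take part, so stays silent

enlisted = [m["id"] for m in control_queue if m["control-type"] == "enlist"]
print(enlisted)   # only the clients that accepted the invite
```

From the enlisted set the controller would then assign roles, wait for 'ready' messages, and issue 'start', as in the step list above.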

==Control Messages==

All control messages have the 'control-type' property set to the name
of the message type. E.g. a sent-report contains 'sent-report' in the
control-type property. Unless otherwise specified all control messages
are empty messages.

A sent-report is:

A text message with a line for each sent message using the test's
stringified format.

It also contains a string header indicating the sender's id.

A received-report is:

A text message with a line for each received message using the
test's stringified format.

It also contains a string header indicating the receiver's id.
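As an illustration, the controller's comparison of these two reports might look like the sketch below. The stringified message format itself is deliberately left unspecified here, and every name (headers, ids) is an assumption:

```python
# Build a report message as described: a text body with one line per
# message, plus the 'control-type' property and a client-id header.

def make_report(control_type, client_id, messages):
    return {"control-type": control_type,
            "headers": {"client-id": client_id},
            "body": "\n".join(messages)}

sent = make_report("sent-report", "sender-1", ["msg-1", "msg-2"])
received = make_report("received-report", "receiver-1", ["msg-1", "msg-2"])

# The controller passes the test when every sent line was received, in order.
passed = sent["body"].splitlines() == received["body"].splitlines()
print(passed)
```

Tests that legitimately drop or reorder messages would need a looser comparison, but line-for-line equality covers the basic p2p and pub/sub cases.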

A role-assignment message is:

A text message containing the test name and a string header 'role'
containing one of the valid roles for the test (i.e. sender or
receiver to begin with). It may also contain other properties for
parameterised tests.

An invite is:

A text message containing the name of the test that the master
wants to run. The 'control-type' property is set to 'invite'. The
parameters of the test may also 

Re: Draft Interop Testing Spec - Please Read

2007-02-22 Thread Alan Conway
On Thu, 2007-02-22 at 14:48 +, Rupert Smith wrote:
> I would like to propose an interop testing spec that defines
> exactly how test clients should behave, so that we can write a suite
> of tests that can talk to each other succesfully. 
Good work! We really need this. Comments inline. I need to think
more about the fine detail but I like the overall approach.

>  =
> 
>  Qpid Interop Testing Spec. Draft 1.
> 
>  Draft 1. Rupert Smith. Document started.
> 
> Introduction:
> 
>  The requirements in this specification use a common format, an
> example of which is given below:
> 
>  RE-1. Sample Requirement. A brief description of the requirement.
> 
>  The requirements are numbered sequentially from 1.
> 
> Purpose:
> 
>  * Test sending from and receiving by each of the clients in Qpid over
> both of the broker implementations.
> 
>  * Make tests robust enough to run as part of an automated build. The
> scripts should pass or fail, not hang, wait forever, run out of memory
> or otherwise cause an automated build process to be flaky.
> 
>  * Be capable of running the full test suite on several machines in a
> hands free way. In particular C++ tests need to run on unix and .Net
> on windows.
> 
>  * Run just a few tests to begin with. More can be added later, I'm
> interested in getting the test framework established as quickly as
> possible. Just
>minimal p2p and pub/sub tests to begin with.
> 
> Constraints:
> 
>  IOP-1. Operating System: The test client scripts must run on Unix and
> Windows. If a test client implementation is only available on one of
> these platforms it only needs to run on its supported platform.
> 
>  IOP-2. Scripting Language: Each test client must be startable from a
> Unix shell script. Tests run on Windows will use Cygwin to run these
> scripts. There is no need to support Windows .bat scripts.
> 
> Functional Requirements:
> 
>  Introduction.
> 
>  These requirements describe the behaviour of test clients for interop
> testing between the different client implementations in Qpid. Each
> client is expected to be a program that is capable of sending test
> messages to other clients and receiving and responding to test
> messages received from other test clients. The clients are not to be
> run as separate programs for the sending and receiving parts for the
> sake of convenience in being able to run the clients as part of an
> automated build. The clients will listen for control messages sent by
> a master client on a topic, to tell them when to begin their tests and
> when to shutdown. The clients will send control messages to each other
> individually on a direct exchange in order to communicate about
> individual test cases being run.
> 
>  Common Requirements.
> 
>  IOP-3. Directory Structure. All scripts to start and stop brokers and
> run test clients will be placed in a directory structure underneath a
> top-level directory called 'interop' that sits at the top level of the
> Qpid project.
> 
>  IOP-4. Test Output Format. Output in junit xml format (because a lot
> of automated build software understands this format). There doesn't
> seem to be a schema or DTD for this format but it is simple enough.
> See Appendix B for an example. Each sending test client will output a
> test suite report for each receiving test client that it runs the test
> cases against.

We can write a CppUnit formatter in junit style, but do we need to agree
on qualified test class names to appear in the report or will
unqualified class name + function name suffice? 
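Producing a junit-style report of the kind IOP-4 asks for is cheap in any language with an XML library; here is a sketch using Python's standard library. Element and attribute names follow common junit conventions rather than the spec's Appendix B, so treat the details as illustrative:

```python
import xml.etree.ElementTree as ET

def junit_report(suite_name, results):
    # results: list of (test_name, error_message_or_None)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)),
                       failures=str(sum(1 for _, e in results if e)))
    for name, error in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if error:
            ET.SubElement(case, "failure", message=error)
    return ET.tostring(suite, encoding="unicode")

# Hypothetical suite: one sender/receiver pairing, two test cases.
xml = junit_report("java-to-cpp", [("p2p-basic", None),
                                   ("pubsub-basic", "no messages received")])
print(xml)
```

The qualified-name question raised above is then just a choice of what string to put in each testcase's name attribute.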

>  IOP-5. Terminate On Timeout. Each client will keep a timeout count.
> Every time it gets a message it will reset this count. If it does not
> hear from the broker at all for 60 seconds then it will assume that
> the broker has died and will terminate with test failures.

Simple, but includes client processing time in the timeout. More
accurate would be to have timeout in effect only when the client has
reason to expect something from the broker. 
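That refinement could look something like the following sketch (all names are illustrative): the timeout clock runs only while a response from the broker is outstanding, and any broker traffic clears it, so client-side processing time never counts against the broker.

```python
import time

class BrokerWatchdog:
    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.deadline = None        # None => not currently expecting anything

    def expect_response(self):
        # call just before a request that the broker must answer
        self.deadline = time.monotonic() + self.timeout

    def on_message(self):
        # any traffic from the broker satisfies the expectation
        self.deadline = None

    def broker_dead(self):
        return self.deadline is not None and time.monotonic() > self.deadline

w = BrokerWatchdog(timeout=0.01)    # tiny timeout just for the demo
assert not w.broker_dead()          # idle: no timeout clock running
w.expect_response()
time.sleep(0.02)
print(w.broker_dead())              # expectation unmet past the deadline
```

With the simpler IOP-5 rule, the sleep above would also have tripped the timer even if the client were merely busy, which is the inaccuracy being pointed out.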

>  IOP-6. Default Virtual Host. All test clients will use the virtual
> host '/test' for all tests.
> 
>  IOP-7. Default Control Topic. All test clients will use a control
> topic with the routing key 'iop.control' on the default virtual host
> on the default topic exchange. This control topic is used for sending
> the start messages (IOP-23), declare available messages (IOP-24), test
> complete messages (IOP-25) and terminates messages (IOP-26) only.
> 
>  Use Case 1. Starting a Broker.
> 
>   Run the broker start script.
>   The script starts a broker running and tries to connect to it (or
> otherwise ping it) until it is verified to be running.
>   Once the broker is verified to be running the script terminates with
> no error code.
> 
>   Failure path: The broker fails to start or does not appear to be
> running after a timeout has passed. The script fails with an error
> code.
> 
>  IOP-8. Broker Start Script. The java and c++ brokers will define