Excellent.

We don't have a common API (yet) so this could be tricky.

One way might be to write a command processor in each language to process
"pseudo-code" and call the right API functions in that language.
This way we could have a core set of test definitions for any language -- all
you'd have to do for a new language is write the parser-to-API mapping.

Create Queue
Subscribe
Publish
Transmit
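
To make that concrete (the verb spellings and client method names below are
just made up for illustration, not an agreed interface), a test definition
would be a plain script of those verbs, and each language's engine would do
the most obvious mapping onto its own client API.  A minimal sketch in
Python, assuming a hypothetical "client" object:

    # Sketch only: the verbs and the client methods (declare_queue,
    # subscribe, publish) are placeholders for whatever each client exposes.
    def run_script(script, client):
        for line in script.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # skip blanks and comments
            parts = line.split()
            verb, args = parts[0], parts[1:]
            if verb == "CREATE_QUEUE":
                client.declare_queue(args[0])
            elif verb == "SUBSCRIBE":
                client.subscribe(args[0])
            elif verb == "PUBLISH":
                client.publish(args[0], args[1], " ".join(args[2:]))
            else:
                raise ValueError("unknown verb: " + verb)

    # The same shared test definition gets fed to every language's engine:
    #     CREATE_QUEUE interop.q1
    #     SUBSCRIBE    interop.q1
    #     PUBLISH      amq.direct interop.q1 hello

Each language would only need to re-implement that mapping against its own
client; the test scripts themselves stay shared.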

This wouldn't be perfect, since there would be edge cases specific to a
given language/API.  But it would give a good solid interop core for common
patterns of usage.

Where this leads is towards the iMatix idea of a "low-level protocol
exerciser", where an XML language is used to drive tests at the protocol
level.
But that has its own issues, since it doesn't exercise the client APIs.  It
also tends to treat the wire commands as top-level API calls, a subject
which has been the source of animated arguments in the past.

What I'm advocating is more the pseudo-code idea: take the test cases,
translate them to pseudo-code, then do the most "obvious" mapping to each
client API in the parser.  The controller would run these engines, feeding
them either commands or scripts it has generated.  The output would go back
to the controller with some kind of correlation.  To be really with it, you
should use AMQP to communicate between the controller and its children, or
good old-fashioned pipes.
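
For the pipes flavour, the controller loop could be as simple as stamping an
id on each command and matching it against the reply.  A rough sketch (the
"<id> VERB args" request format and "<id> OK/FAIL ..." reply format are just
assumptions to show the correlation idea, not a proposal):

    # Controller side, pipe flavour: spawn one engine per client language
    # and correlate replies by the id stamped on each command.
    import subprocess

    def drive_engine(engine_cmd, commands):
        proc = subprocess.Popen(engine_cmd,
                                stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE,
                                universal_newlines=True)
        for i, cmd in enumerate(commands):
            proc.stdin.write("%d %s\n" % (i, cmd))   # e.g. "0 CREATE_QUEUE interop.q1"
        proc.stdin.close()
        results = {}
        for line in proc.stdout:                     # e.g. "0 OK" or "2 FAIL timeout"
            if not line.strip():
                continue
            cid, status = line.split(None, 1)
            results[int(cid)] = status.strip()
        proc.wait()
        return results

    # e.g. drive_engine(["python", "py_engine.py"], commands)

The AMQP flavour would be the same loop with the pipe swapped for a request
queue and a reply queue, carrying the id in the message.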

It would make writing one test across five APIs/languages easy, and would
help ensure some consistency in the tests (at least for a core set).

Lex and Yacc are available for C/C++, Java, C#, Perl, Python and Ruby.  So
getting a "little language" to all of those from a common grammar would also
be quite straightforward (and it needn't require all testers to have those
tools either).

Does this have any attraction to anyone?
John


On 22/02/07, Tomas Restrepo <[EMAIL PROTECTED]> wrote:

Hi Rupert,

>  I'm particularly interested in getting some constructive feedback on
> this. If you disagree with something, please also suggest an
> alternative way of doing it, that you feel will be better, more
> reliable, easier to implement, or whatever. Thanks.

I think this is a fantastic idea and much needed. Some thoughts:

1- Would this be mostly aimed at testing client-client interop,
client-broker interop or both? It seems to me that many of the
implementation needs you specified are aimed at client-client, but maybe
I'm mistaken.

2- Personally, I'd favor an approach a bit more like Gordon's idea of it
being more "centrally controlled" by the controller. Start a client-test
process, launch the controller, and do everything from there. It would be
simpler to create, run and maintain, I think.

3- I think the initial client-to-broker connection tests might want to be
handled a bit differently (maybe a set of more automated, but regular,
integration/unit tests for each client, covering different success and
failure connection conditions).

My main reason for saying this is that I think it might be more awkward to
cram the connection-level tests into the kind of structure proposed, and
even more so if we went with a more central architecture such as the one
Gordon proposed.

More to the point, one of the basic ideas you propose is already using
broker communication and queues to connect the test controller with the
individual test clients, meaning you already assume a fairly high level of
interoperability is possible between clients and brokers (and even between
clients, seeing as the controller would be written in a single language and
each client in a different one).
This is also one of the main reasons I ask whether the tests will mostly
target client-client scenarios or client-broker scenarios (as most of the
infrastructure seems to assume the latter already works pretty well).

Then again, maybe I'm just missing something :)

Tomas Restrepo
[EMAIL PROTECTED]
http://www.winterdom.com/weblog/




