I think the key is to make sure you exchange one of each type of management command via QMF to the broker. Then we will know if anything breaks at the protocol level.

Then we can expand it to cover more operations, but that is not critical as it is all generic code. So working out how to test each type of generic code is the way to go to get all the types of methods covered. That will catch 90+% of all errors, with no repetition. Picking a function that has
params and a response is better, as it covers more code paths.

The flow, though, is fine.

Carl.


Andrea Gazzarini wrote:
Hi Carl, and what do you think about the following scenario?
Precondition: there's a JUnit test for testing the invocation of the
queue.purge() method.

- Qpid starts;
- QMan starts;
- The test case starts: using the standard JMX interface it registers itself as a
notification listener of QMan and waits...
- Qpid sends a content information message (to QMan) about a queue instance;
- QMan stores that data in raw format (it does not yet have the queue class
definition) and requests the schema for the queue class from Qpid;
- Qpid sends a schema response message;
- QMan builds the class definition, uses the previously received data to
create the queue object instance, and emits a JMX notification to inform
registered listeners about that;
- The test case receives the notification and gets (via the QMan JMX interface) the
current number of messages on that queue;
- The test case invokes the purge() method of the queue instance on QMan and
waits;
- QMan sends a method request message to Qpid;
- Qpid invokes the method and sends a method response message;
- QMan receives the method response message and emits another JMX
notification to inform registered listeners;
- QMan receives the content information message containing data updates for the
queue, updates the object, and emits another JMX notification;
- The test receives the notification and checks the result.
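
To make this concrete, here is a minimal sketch of what such a test could look like. The JMX service URL, the ObjectName patterns and the "MsgDepth"/"purge" names below are just placeholders, not QMan's real interface:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

import javax.management.MBeanServerConnection;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import junit.framework.TestCase;

public class QueuePurgeTest extends TestCase {

    public void testPurge() throws Exception {
        // Connect to QMan's MBean server (service URL is a placeholder).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:8999/qman");
        MBeanServerConnection qman =
                JMXConnectorFactory.connect(url).getMBeanServerConnection();

        // Register as a notification listener and collect incoming events.
        final BlockingQueue<Notification> events =
                new LinkedBlockingQueue<Notification>();
        NotificationListener listener = new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                events.offer(n);
            }
        };
        ObjectName emitter = new ObjectName("QMan:Type=emitter"); // placeholder
        qman.addNotificationListener(emitter, listener, null, null);

        // Wait for the queue object to be announced (content + schema done).
        assertNotNull("queue instance never announced",
                events.poll(5, TimeUnit.SECONDS));

        // Read the current depth, invoke purge(), then wait for the method
        // response notification and the subsequent content update.
        ObjectName queue =
                new ObjectName("QMan:Type=queue,name=test.queue"); // placeholder
        qman.getAttribute(queue, "MsgDepth"); // attribute name is a placeholder
        qman.invoke(queue, "purge", new Object[0], new String[0]);

        assertNotNull("no method response notification",
                events.poll(5, TimeUnit.SECONDS));
        assertNotNull("no content update notification",
                events.poll(5, TimeUnit.SECONDS));

        // Finally verify the queue is now empty.
        Number depth = (Number) qman.getAttribute(queue, "MsgDepth");
        assertEquals(0, depth.intValue());
    }
}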

The JUnit test could also be decorated with JUnitPerf to run a timed test, in
order to ensure that everything described above is performed within a certain
amount of time...
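
With JUnitPerf that decoration would be something like the following (the 10-second budget is just an example and would need tuning against a real broker):

import junit.framework.Test;
import junit.framework.TestSuite;

import com.clarkware.junitperf.TimedTest;

public class QueuePurgeTimedTest {

    // Fail the whole round trip above if it takes longer than 10 seconds.
    public static Test suite() {
        Test purge = new TestSuite(QueuePurgeTest.class);
        return new TimedTest(purge, 10000); // max elapsed time in millis
    }
}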

Of course it's still not a complete idea, but I think there's something
interesting in it...

The interesting thing is that you could have listeners other than the JUnit
test listening for QMan events.
So, for example, you could create a separate module that registers as a
QMan listener and, using something like JBoss Rules, defines
configurable rules for detecting threshold conditions and alerts for QMF events,
and then determines an output channel (SNMP traps, email, SMS, I don't
know....)
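
A rough sketch of such a module, with a hard-coded rule standing in for what JBoss Rules would make configurable (the notification type and payload layout are placeholders, since they depend on what QMan ends up emitting):

import javax.management.Notification;
import javax.management.NotificationListener;

public class ThresholdAlerter implements NotificationListener {

    private static final long MAX_DEPTH = 10000; // arbitrary threshold

    public void handleNotification(Notification n, Object handback) {
        // Placeholder type string; a rules engine would replace this check.
        if ("qman.object.updated".equals(n.getType())) {
            Object data = n.getUserData();
            if (data instanceof Number
                    && ((Number) data).longValue() > MAX_DEPTH) {
                alert("queue depth " + data + " exceeds " + MAX_DEPTH);
            }
        }
    }

    private void alert(String message) {
        // Stand-in for the real output channel (SNMP trap, email, SMS, ...).
        System.err.println("ALERT: " + message);
    }
}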

Regards,
Andrea

2008/10/31 Andrea Gazzarini <[EMAIL PROTECTED]>

Yes Carl, I agree... what do you think about this?
http://clarkware.com/software/JUnitPerf.html

I used it in my previous project and that was great!
Regards,
Andrea

2008/10/30 Carl Trieloff <[EMAIL PROTECTED]>

Andrea Gazzarini wrote:
Hi all,
The actual bundle of QMan has only "offline" unit tests that run against
isolated "components" of QMan. That means that in order to see those tests
running you don't need to have QMan and/or Qpid running. This is good for
the development stage, allowing a (more or less) test-driven development and
therefore flexible code, but in order to see that everything is working (Test
--> QMan --> Qpid) we need to add tests against a running QMan connected to
a broker. I'm thinking about that... I have already coded some tests, but it's
hard work because of the asynchronous nature of the interaction between QMan &
Qpid. Probably QMan will be extended to support JMX notifications. I'm still
thinking about that so I'm not sure, but from a test perspective it would be
cool if you could register a test as a listener of QMan notifications; in
that way you would be informed about object creations, events, method
invocations and everything you need to run your verifications. If you have some
kind of idea feel free to suggest... Regards, Andrea



yes, the question is how to prove all the function interop in a clean and
automated way.

I expect you will need a time-based test: you know an update will be
reported back within the interval configured on the broker. You can set that
to 1 sec in the script, and have the test wait say 5 sec max before reporting
it as failed.
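
Something like this (the emitter ObjectName is hypothetical, but the latch/timeout pattern is the point):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import javax.management.MBeanServerConnection;
import javax.management.Notification;
import javax.management.NotificationListener;
import javax.management.ObjectName;

public final class UpdateWait {

    // Returns true if an update notification arrives within 5 seconds.
    // With the broker's management interval set to 1 second this should
    // trip comfortably; otherwise the test reports a failure.
    public static boolean updateArrived(MBeanServerConnection qman)
            throws Exception {
        final CountDownLatch updated = new CountDownLatch(1);
        NotificationListener listener = new NotificationListener() {
            public void handleNotification(Notification n, Object handback) {
                updated.countDown();
            }
        };
        ObjectName emitter = new ObjectName("QMan:Type=emitter"); // placeholder
        qman.addNotificationListener(emitter, listener, null, null);
        try {
            return updated.await(5, TimeUnit.SECONDS);
        } finally {
            qman.removeNotificationListener(emitter, listener);
        }
    }
}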

Carl.


