Some excellent new unit tests have been added to the python code on trunk. Not all of these pass at present.

Brokers that use the python run-tests script to test themselves will pick up these new tests and report failures. One option is to add those failures to each broker's list of expected failures.
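For illustration, that might look roughly like the following, assuming the runner can be pointed at a file of tests to skip (the flag, file name, and test names below are all hypothetical, not taken from the actual script):

    # broker_x_failing.txt -- tests known to fail against broker X
    tests.codec.FieldTableTestCase.test_decode
    tests.queue.QueueTests.test_declare_exclusive

    ./run-tests --ignore-file broker_x_failing.txt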

However, as these new tests don't even open a connection to a broker, I wondered whether it would be more sensible to start partitioning the tests into unit tests for the python code itself and tests that verify broker behaviour. That seemed worth raising as a question for the group... thoughts?
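To make the idea concrete, one possible split might look like this (directory names are hypothetical):

    tests/unit/      - pure python unit tests, no broker needed
    tests/broker/    - tests that open a connection and exercise a broker

A broker's self-test run would then load only the broker suite, while the python unit tests could run on their own, e.g. via standard unittest discovery (needs Python 2.7 or later):

    import unittest

    # run only the pure python unit tests; no broker required
    suite = unittest.defaultTestLoader.discover("tests/unit")
    unittest.TextTestRunner(verbosity=2).run(suite)

This is just a sketch; the same split could equally well be expressed through the existing run-tests script rather than unittest discovery.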

