Janus Dam Nielsen <[EMAIL PROTECTED]> writes:

>>> I think that having parametrized tests is good, however I just
>>> wanted to point out that defining the parameters in the Runtime
>>> class/object might not be sufficiently expressive for what we want.
>>> We might want some kind of grouping/system of tests so that it is
>>> easy to run the tests without any particular knowledge of which
>>> protocols support which parameters.
>>>
>>> For tests where a given set of parameters is invalid, the test
>>> could return an undefined value, or it could be elided from the
>>> set of tests, since it makes no sense for those parameters anyhow.
>>
>> The test suite is implemented using Trial, a Twisted tool which
>> extends the standard Python unittest module with support for
>> Deferreds. The Python unittest module is modelled after JUnit.
>>
>> In Trial there is support for marking a test as skipped, and that
>> might be useful for what you are describing -- we could query the
>> tests for their requirements, and if they do not match the
>> parameters of the current run, then we skip that test.
>>
>> Something like that could work, but I don't know if it is the best
>> way... Have you looked at the Trial documentation to see how it could
>> be done? There is a tutorial here:
>>
>>   http://twistedmatrix.com/trac/browser/branches/trial-tutorial-2443/doc/core/howto/trial.xhtml?format=raw
>>
>> and the API documentation is here:
>>
>>   http://twistedmatrix.com/documents/current/api/twisted.trial.unittest.TestCase.html
>>
>> Trial is not as well documented as the rest of Twisted, so reading
>> the source code helped me until I found the tutorial above.
>
> I haven't looked at the code. I just wanted to make sure you didn't
> shoot yourself in the foot unintentionally :)

Yeah, thanks, that would be bad :-) It is really nice to get some
input on these design questions!
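
To make the skipping idea from my quoted mail concrete: something like
the minimal sketch below could work. Raising SkipTest is real Trial
API, but the required_players attribute is just something I made up
for illustration:

    from twisted.trial import unittest

    class ParameterizedTest(unittest.TestCase):
        # Parameters of the current test run (illustrative value).
        num_players = 3

        def setUp(self):
            # required_players is a hypothetical per-test requirement;
            # raising SkipTest makes Trial report the test as skipped
            # instead of failed.
            required = getattr(self, "required_players", None)
            if required is not None and self.num_players < required:
                raise unittest.SkipTest("needs at least %d players"
                                        % required)

Trial also honours a plain skip attribute on a test method or class,
which amounts to the same thing when the condition is known up front.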

That said, I asked because I am not sure I completely understand how
you want this implemented.

Do you want a set of parameters specified on the command line when
one starts Trial, or should they be put in the unit tests directly?

The latter is easy: we can already generate many unit tests with
small variations such as the number of players. Adding this to
test_active_runtime.py:

    class Active5(ActiveRuntimeTest):
        num_players = 5

    class Active6(ActiveRuntimeTest):
        num_players = 6

    class Active7(ActiveRuntimeTest):
        num_players = 7

makes the Bracha broadcast test run three more times, each with more
players. It even works; I just tested! :-)
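
To spell out the pattern in code -- this is a toy sketch, not the real
ActiveRuntimeTest -- the trick is that every test body reads
self.num_players instead of hard-coding a player count:

    from twisted.trial import unittest

    class GenericRuntimeTest(unittest.TestCase):
        # Subclasses override this single attribute.
        num_players = 3

        def test_player_ids(self):
            # Written against self.num_players, so the same test is
            # meaningful for every subclass.
            ids = range(1, self.num_players + 1)
            self.assertEqual(len(ids), self.num_players)

    class Generic5(GenericRuntimeTest):
        num_players = 5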

That might be a good way to do things: code a TestCase with some unit
tests, but be sure to make them generic in the sense that they can be
run with any number of players (or threshold). Then create several
subclasses like above. The classes can even be created at runtime:

for n in range(3, 6):
    # Defines Active3, Active4, and Active5 at import time.
    code = """
class Active%d(ActiveRuntimeTest):
    num_players = %d
""" % (n, n)
    exec code

Trial never notices the difference and runs our tests as normal!

This gives us lots of control over how the tests are generated: we
could easily test, say, all n < 10, then 10 randomly chosen n between
10 and 50, and 5 more between 50 and 100. Something like that...
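
Concretely, and without the exec trick, that selection could look like
the sketch below. It uses type() to build the classes; make_test_class
is just a helper name I made up:

    import random

    def make_test_class(n):
        # Same effect as the exec version above: define ActiveN with
        # num_players = n and put it where Trial's collector sees it.
        name = "Active%d" % n
        globals()[name] = type(name, (ActiveRuntimeTest,),
                               {"num_players": n})

    for n in range(3, 10):                      # all n < 10
        make_test_class(n)
    for n in random.sample(range(10, 50), 10):  # 10 random n in [10, 50)
        make_test_class(n)
    for n in random.sample(range(50, 100), 5):  # 5 random n in [50, 100)
        make_test_class(n)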

-- 
Martin Geisler
