Hugh,

> Sorry about the delay in replying.

No bother. I still have an even older post of yours on my todo list.

> Either calling individual tests or using a runAll() function as you
> suggested below. runAll() could take a spec for the functions to test,
> maybe as a regular expression and any function names that match would be
> called.

Sure enough.
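Roughly what I have in mind is something like the sketch below (the
testSuite registry and the shape of the return values are my assumptions,
nothing that exists yet):

    // Hypothetical registry of test functions, keyed by name.
    var testSuite = {
      testFoo : function() {
        return { name: "testFoo", passed: true, message: "", log: "" };
      },
      testBar : function() {
        return { name: "testBar", passed: true, message: "", log: "" };
      }
    };

    // runAll() takes a spec (regular expression source) and calls every
    // function in the registry whose name matches it.
    function runAll(spec) {
      var pattern = new RegExp(spec);
      var results = [];
      for (var name in testSuite) {
        if (typeof testSuite[name] == "function" && pattern.test(name)) {
          results.push(testSuite[name]());
        }
      }
      return results;
    }

    // e.g. runAll("^test") runs both tests above and returns their results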

> We need to provide feedback on what tests have been run and what the
> result was. Perhaps for any test we could have a tuple of (test function
> name, boolean result, failure message, log). Then if multiple tests are
> run as a result of a function call it would give a list of these results.

Sure.
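In JS that tuple would probably come back as a list of small maps, one per
test, something like this (the field names are just for illustration):

    // One record per test; running several tests yields a list of these.
    var record = {
      name    : "testTreeOpen",               // test function name
      passed  : false,                        // boolean result
      message : "expected 4 nodes, found 3",  // failure message ("" on success)
      log     : "DEBUG: opening node\nDEBUG: counting children"  // newline-separated log
    };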

> The log would be the qooxdoo log messages which might include some
> debugging output, useful for finding what's going on. Send it as one long
> string but make it easy to parse by splitting on newline, comma etc.

Hm, of course you can make the tests themselves pass back whatever
information they want; that is all in the hands of the test programmer. To
receive this information at the client driver, the logs would have to
become part of the return value of the tests. But as for the AUT (the
qooxdoo app), you might have to re-define the logger object from within
the tests in order to capture the standard app log output, if that is what
you're after?!
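Very roughly, and without committing to the real qooxdoo logging API (the
"appLogger" object and its debug() method below are placeholders; the
actual names depend on the qooxdoo version), the idea would be:

    // Hypothetical: wrap the app's logger so its output is also collected
    // in a buffer that the test wrapper can return to the client driver.
    var logBuffer = [];
    var originalDebug = appLogger.debug;

    appLogger.debug = function(msg) {
      logBuffer.push(String(msg));                // keep a copy for the result
      originalDebug.apply(appLogger, arguments);  // and still log as before
    };

    // later, when assembling the test result:
    var logString = logBuffer.join("\n");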


> Indeed - I would see that as our responsibility. If we can just get the
> log of tests that have been carried out and any results into our Python
> code I would be happy. Then we can email etc from there.

Ok, so let me wrap this up into a little protocol:

- The client driver script invokes an SRC command of the *Eval() family,
passing as an argument a Javascript function, possibly with arguments, to
be invoked.
- The SRC infrastructure passes this call to the SRC engine in the
browser, where the JS function is evaluated.
- The JS function invokes some test(s).
- The tests return results (success/failure, more?), log, and probably
error information, which are nicely marshaled back to the client driver as
the result of the *Eval command.
- No exceptions will be allowed to leak on the browser side, and the only
footprint of this protocol in the Selenium core will be the log entries of
the *Eval commands invoked (which are of no great interest). All test
result evaluation, failure analysis, relevant logging, notifying etc. etc.
will be done on the client driver side (a rough sketch of the browser-side
wrapper follows below).
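
To make that concrete, the browser-side entry point evaluated by the *Eval
call could look roughly like this (just a sketch; the serialization format
and the runAll()/record shape from above are my assumptions):

    // Hypothetical wrapper invoked through the *Eval command.  It never
    // lets an exception escape; everything comes back as one string that
    // the Python client driver can split on newline and comma.
    function seleniumRunTests(spec) {
      var results;
      try {
        results = runAll(spec);   // list of result records, cf. above
      } catch (ex) {
        results = [{ name: "(wrapper)", passed: false,
                     message: String(ex), log: "" }];
      }
      var lines = [];
      for (var i = 0; i < results.length; i++) {
        var r = results[i];
        // one line per test: name,passed,message,log (newlines in the log
        // are replaced so the per-test line structure survives)
        lines.push([r.name, r.passed, r.message,
                    r.log.replace(/\n/g, "|")].join(","));
      }
      return lines.join("\n");
    }

The client driver would call this through the *Eval command, split the
returned string, and do all the evaluation, logging and notifying in
Python.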

Hm, as we already stated, this is significantly different from the JSUnit
protocol for tests, which relies on exceptions. This means you cannot
immediately use the TestLoader.js et al. code from the Testrunner for the
test wrapper layer (although it would make for a good starting point).
Also, I presume tests written for the Testrunner would not suit the
scenario very well, because they just throw exceptions which the wrapper
would have to catch and parse. It would be much easier if the tests were
written to return values. Would you want to write different tests, one set
for use with the Testrunner, the other for use with the "SeleniumRunner"
(as I call it)?! That would defeat the idea of having the same set of
tests for both interactive testing in the Testrunner and automated testing
through SRC.

Of course it seems possible to combine both protocols in the same tests,
with each test checking whether it is being run in "Testrunner" or
"SeleniumRunner" mode.

Thomas

