Mikeal Rogers wrote:
On Feb 10, 2006, at 10:04 AM, Philippe Bossut wrote:
John Anderson wrote:
Having written my first functional test yesterday, I have some
thoughts. The biggest problem I encountered when trying to write and
debug tests is navigating all the layers:
my test <-> CATS <-> CPIA Script <-> Chandler
Fortunately I'm very familiar with Chandler, somewhat familiar with
CPIA Script, and CATS is small enough to grok without much effort.
However, I suspect most developers would find all the layers
daunting and trying to debug things would only make it worse.
Agree with that.
One of the requirements is that the system be easy to use. Obviously
there is another layer of complexity over what we do with CATS, but it
is still designed to be very easy for someone to pick up and start
writing scripts and to see legible output. Part of the deliverables
for the first version of this framework will be:
-Command line python wrapper (much like do_tests: a script is imported
and legible output is generated using a set of default parameters for
the framework); a rough sketch of what such a wrapper could look like
follows
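(The sketch below is illustrative only; the module layout, the run()
entry point, and the log format are assumptions for this example, not
the actual framework API.)

    import sys
    import importlib

    def run(script_name, log_path="functional_test.log"):
        # Import the named test script and run it with the framework's
        # default parameters, writing human-readable output to a log.
        module = importlib.import_module(script_name)
        with open(log_path, "w") as log:
            passed = module.run(log)        # assumed entry point
            log.write("PASS\n" if passed else "FAIL\n")
        return passed

    if __name__ == "__main__":
        ok = run(sys.argv[1])
        sys.exit(0 if ok else 1)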
Yes, we've all been using variations of this for unit tests and
functional tests and find it useful. Also, when you get a particularly
tricky functional test failing somewhere deep in Chandler, where a
traceback isn't enough to diagnose the problem, it's often handy to
track it down in a debugger. So you might set things up so you can
attach in Wing, e.g. include wingdbstub.py.
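One way to wire that in (sketch only; the environment variable name is
made up, but importing wingdbstub is the usual way to let the Wing IDE
attach):

    import os

    if os.environ.get("CHANDLER_WING_DEBUG"):   # hypothetical opt-in flag
        try:
            import wingdbstub    # starts the Wing IDE debug listener
        except ImportError:
            pass                 # no debugger available; run normally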
-Sufficient documentation (a "Writing Chandler automation in 10
minutes" style doc, extended OAF documentation for developers who wish
to use non-default features in the system, and, maybe most importantly,
GOOD documentation for the Chandler test library that can facilitate
both easy test script authoring and developer improvements to the
Chandler testing library itself).
I think much of what makes writing functional tests difficult has little
to do with your proposed framework, and more to do with how you access
the pieces of Chandler, do menu commands, click on buttons, etc., i.e.
the stuff that is mostly in CPIA Script and CPIA.
The output can be heavily customized using this framework, but the
default output will be human-legible and go directly to a file.
Also, a -debug flag can be set, which causes all output in the
framework to be processed by the output object as it comes in. This is
no good for performance tests but will make debugging issues worlds
easier than in CATS.
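A toy sketch of that output object (class and method names invented
for illustration, not the framework's real interface): with debug on,
every message is written and flushed immediately; with debug off,
messages are buffered so logging I/O doesn't distort performance runs.

    class TestOutput:
        def __init__(self, stream, debug=False):
            self.stream = stream
            self.debug = debug
            self.buffer = []

        def report(self, message):
            if self.debug:
                # process the message as it comes in, so a hang or
                # crash still leaves the last message in the log
                self.stream.write(message + "\n")
                self.stream.flush()
            else:
                # defer I/O so it doesn't skew performance tests
                self.buffer.append(message)

        def close(self):
            for message in self.buffer:
                self.stream.write(message + "\n")
            self.stream.flush()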
To finish up, many of the extra layers that developers might find
"daunting" will be transparent in the implementation, while the output
that developers depend on (such as tracebacks in the log if a
failure occurs) is made easy and reliable by this abstraction.
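For example, the framework could run each test step along these lines
(again just a sketch; run_step and the output object are illustrative
names) so the full traceback always lands in the log:

    import traceback

    def run_step(step, output):
        # Run one test step, recording a full traceback on any failure.
        try:
            step()
            output.report("PASS: %s" % step.__name__)
            return True
        except Exception:
            output.report("FAIL: %s\n%s" % (step.__name__,
                                            traceback.format_exc()))
            return False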
I hope this alleviates your concerns.
-Mikeal
I think it would be preferable to make the small changes necessary
to CPIA Script to make it appropriate for testing instead of adding
another layer, e.g. CATS.
Improving CPIA Script to make scripting easier is indeed a good idea.
I don't think it will entirely replace a test harness like CATS or,
better, OAF (proposed by Mikeal), though. There are a lot of test
functions (batching, logging, data gathering and stats) that have no
place in a Chandler-level scripting language. John, I suggest you read
Mikeal's proposal
(http://wiki.osafoundation.org/bin/view/Projects/OpenAutomationFramework)
first. Keep in mind also that Mikeal is trying to solve a problem
that includes both Chandler and Cosmo.
Similarly, I think it's preferable to modify Chandler to eliminate
some of CPIA Script.
What alternative to CPIA scripting do you propose? No scripting at
all? Another scripting mechanism? Leveraging an existing one?
Cheers,
- Philippe
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Open Source Applications Foundation "Dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/dev