Hi, Aparna et al
Overall, I can see why people like wx-level acceptance testing: there
is a certain warm fuzzy feeling from knowing that the app in its
entirety is behaving reasonably. (Even if you have to watch it to be
sure :). The concerns I have are:
+ How would these recorded scripts be maintained? If we change the
app's layout, does someone have to go re-record them all? (One
mitigation is sketched after this list.)
+ Each script currently requires a launch and quit of the app (which
is reasonable, given the nasty inter-test dependencies that have
arisen with the current functional tests). If we really do cover all
the current manual test specifications with recorded scripts, how
long will a full run take? Will developers be required to run them
all before check-ins?
+ I think I remember John/Dan saying scripts currently have to be
recorded on Windows (something to do with keycodes). Is that likely
to still be the case? (The sketch below touches on this too.)
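For what it's worth, the usual mitigation for both the layout and the
keycode concern is to record against widget names and symbolic actions
rather than screen positions and platform scan codes, so a pure layout
change or an OS switch doesn't invalidate the script. A rough sketch of
the replay side, assuming a running wx app; the widget names and
recording format here are made up, not what the framework actually
produces:

    import wx

    # A recorded script as (widget name, action, argument) steps
    # rather than coordinates or raw keycodes. Names are hypothetical.
    SCRIPT = [
        ("itemTitleField", "type", "Lunch with Katie"),
        ("saveButton", "click", None),
    ]

    def replay(step):
        name, action, arg = step
        widget = wx.FindWindowByName(name)  # look up by name, not position
        if widget is None:
            raise AssertionError("no widget named %r" % name)
        if action == "type":
            widget.SetValue(arg)            # drive the control directly
        elif action == "click":
            event = wx.CommandEvent(wx.wxEVT_COMMAND_BUTTON_CLICKED,
                                    widget.GetId())
            wx.PostEvent(widget.GetEventHandler(), event)

    for step in SCRIPT:
        replay(step)

If the recorder captured names like these instead of pixel positions,
a layout change would only break scripts whose widgets were actually
renamed or removed.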
Possibly these all point at a more general issue, which is whether
you see the recorded test suites as being part of the core
development process, or something QA maintains on the side to help
validate builds and find bugs.
--Grant
On 5 Sep, 2007, at 20:55, Aparna Kadakia wrote:
Much in line with Katie's email, I too have been agonizing over the
Desktop Test Automation project lately. The last few weeks have been
rather painful as we have manually gone through the test specs to
validate the release. Clearly we need to make a decision on the test
framework for Desktop soon, especially in light of shorter release
cycles.
Over the last three years we have seen CATS, the CPIA-script-based
test automation framework, and then more recently the Script/Recording
framework, both trying to fill the gap of insufficient automated UI
tests with easy test development. Even though CATS is a full-blown,
fully functional test framework, test development requires adding
methods to its libraries, which in turn requires a good understanding
of the internal architecture of the component under test. This turned
out to be not so scalable after all.
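To illustrate why: a CATS-style library method ends up looking roughly
like the sketch below. The names are imagined, not CATS's actual API,
but the shape is the point: the test author reaches into the
component's internals rather than driving the UI the user sees.

    # Imagined example, not CATS's actual API: a test library method
    # that manipulates the app through internal objects, so writing it
    # requires knowing the component's architecture.
    def set_event_duration(app_ns, title, hours):
        event = app_ns.calendar.findEventByTitle(title)  # internal lookup
        event.duration = hours                           # internal model
        app_ns.repository.commit()                       # internal API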
To overcome this handicap, John initiated the effort on the Script/
Recording framework. It is still too early to say how successful it
will be, as it is far from complete: we still need to add support for
a number of significant things, e.g. dragging in the calendar canvas,
modal dialogs, etc. It is important to remember that the framework we
adopt needs to support testing at the UI level, not from within the
code; it is aimed at functional testing that covers scenarios, rather
than unit-level testing.
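Modal dialogs are a good example of why this is hard at the UI level:
wx's ShowModal blocks the caller, so the test has to arrange for the
dismissal before triggering the dialog. One common workaround, sketched
here with made-up helper names:

    import wx

    def dismiss_next_dialog(result=wx.ID_OK):
        # Schedule a close that will fire from inside the modal
        # dialog's own event loop.
        def _close():
            win = wx.GetActiveWindow()
            if isinstance(win, wx.Dialog):
                win.EndModal(result)
        wx.CallLater(200, _close)

    # In a test: arrange for the confirmation dialog to be answered
    # with OK, then perform the step that pops it up.
    dismiss_next_dialog(wx.ID_OK)
    click_button_named("deleteEvent")   # hypothetical UI-driving helper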
Since John will be back in the office next week and we will have
wrapped up the Preview release, I propose we restart the discussion
on Desktop test automation. The plan is to evaluate the progress on
the Script/Recording framework and whether it is a viable next option,
and to analyze collectively whether any architectural changes are
necessary to build a more reliable test system.
Proposed Date: Tuesday, Sept 11, 2007
Proposed Time: 2:30 PM (taking the Design Discussion slot since we
aren't having those)
Conference Room: Shambala
The high level goals of the automation framework remain the same:
1. easy UI test development
2. easy debugging of test failures (capturing errors and stack
traces in log files, etc.)
3. very clear pass/fail results from test execution
4. flexibility of running a single test vs. a full suite
5. easy installation of the framework and execution of test scripts
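Goals 2 through 4 are mostly plumbing that Python's standard library
already provides; a minimal sketch of what I mean (the test case
itself is a made-up example):

    import logging
    import unittest

    logging.basicConfig(filename="functional_tests.log",
                        level=logging.INFO)

    class NewEventTest(unittest.TestCase):      # hypothetical test case
        def test_create_event(self):
            logging.info("creating an event through the UI")
            # drive the UI here; a failed assertion is reported with
            # its traceback (goal 2) and counted as a clear FAIL (goal 3)
            self.assertTrue(True)

    if __name__ == "__main__":
        # Goal 4: "python run_tests.py NewEventTest.test_create_event"
        # runs one test; no arguments runs everything in this module.
        unittest.main()

Whatever framework we pick, that is the level of reporting and
granularity I have in mind.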
Let me know if that works for everyone.
Thanks
Aparna
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Open Source Applications Foundation "chandler-dev" mailing list
http://lists.osafoundation.org/mailman/listinfo/chandler-dev