Hi André, Maho, *,

sorry for the late reply, I must have missed this mail :-(

On 13.04.09 18:11, André Schnabel wrote:
> Anyway - even if we resolve all the issues for running the scripts in a
> batch, I'm still not sure how useful this would be - unless we know
> how (exactly) the Sun team is getting the results. Looking at the status
> in quaste, for OOo310_M9 no community tester seems to be able to get
> the same results as the Sun team did.

Let me comment on this and try to help explain what automated
testing involves.

1. Trust your tools (automated tests, testtool, scripts)
From my point of view there still seems to be a lack of confidence in
running automated tests and in the results they produce. The results
created by the automated testers at Sun prove to me that the tests run
reliably in general and that the results they create are reproducible.
All issues found by automated tests during this release cycle were 100%
reproducible and identified within a short timeframe.
The VCLTestTool itself is thoroughly proven and has passed a long
testing phase; that is why the VCLTestTool from the Automation pages[1]
should always be used.
With this information you, as a member of quality assurance, should be
self-assured enough to say: Yes, I trust the tools I have available.

2. Test environment
One must understand that certain boundary conditions have to be met to
create reproducible test results. It is an absolute must that the
environment is proven and remains the same during a test cycle (such as
testing the release for OOO310). It makes no sense to change machines,
environments or the VCLTestTool during this time if test results are to
be reproducible. The optimum would be a dedicated machine or a virtual
machine used for automated testing purposes only. Only with these
prerequisites are you able to produce credible results.
If possible, always have the same users run those tests...
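
To illustrate what "remains the same" can mean in practice, here is a
minimal Python sketch of an environment check a tester could run before
each session. It is only an illustration: the testtool path and the
fingerprint file are made-up names, not part of any official setup. Run
it once at the start of a cycle to record the environment; every later
run then verifies that nothing has drifted.

#!/usr/bin/env python3
# Minimal sketch (not an official QA script): record the test
# environment at the start of a test cycle and refuse to run if it
# changes later, so all results of the cycle stay comparable.
# The TESTTOOL path and the fingerprint file name are assumptions.
import getpass
import hashlib
import json
import platform
import sys
from pathlib import Path

TESTTOOL = Path("/opt/testtool/testtool.bin")   # hypothetical install path
FINGERPRINT = Path.home() / ".ooo310-test-env.json"

def current_env():
    """Collect the properties that must stay constant during a cycle."""
    binary = TESTTOOL.read_bytes() if TESTTOOL.exists() else b""
    return {
        "machine": platform.node(),
        "os": platform.platform(),
        "user": getpass.getuser(),
        "testtool_sha256": hashlib.sha256(binary).hexdigest(),
    }

def main():
    env = current_env()
    if not FINGERPRINT.exists():
        # First run of the cycle: remember the environment.
        FINGERPRINT.write_text(json.dumps(env, indent=2))
        print("Environment recorded; OK to run tests.")
        return 0
    recorded = json.loads(FINGERPRINT.read_text())
    if recorded != env:
        print("Environment changed since the cycle started:", file=sys.stderr)
        for key in env:
            if env[key] != recorded.get(key):
                print("  %s: %r -> %r" % (key, recorded.get(key), env[key]),
                      file=sys.stderr)
        return 1
    print("Environment unchanged; OK to run tests.")
    return 0

if __name__ == "__main__":
    sys.exit(main())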

3. How to handle results of automated tests?
I often hear: "Autotests are failing, they don't work!"
Have you ever considered that the test fails because an issue in
OpenOffice.org has been found by exactly this test? Most people do not,
because it is time-consuming to reproduce those issues manually. But
before trying to reproduce a scenario manually, it is often helpful to
run the single testcase (where the error occurred) a second time. In
most cases this resolves the failure and your test results are fine.
If this doesn't help, try to understand what the testcase does and
recreate the same scenario manually to find out where the issue lies.
That is what quality assurance is all about. And if you ever stumble
over an issue in the autotest itself (which might occur; nobody said it
doesn't), you can be sure your tested product is fine... and assuring
this is your goal.
Of course the autotest should also be fixed or adapted.
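
As a rough sketch of this "run it a second time first" rule, in Python.
The testtool command line below is an assumed example, not the
documented CLI; substitute whatever you use to start a single script.

import subprocess

def run_testcase(script):
    """Run one testtool script; True means it passed.
    The command line is an assumption, adjust it to your setup."""
    result = subprocess.run(["testtool", "-run", script])
    return result.returncode == 0

def classify_result(script):
    if run_testcase(script):
        return "pass"
    # First failure: rerun the single testcase before anything else.
    if run_testcase(script):
        return "pass (failed once, OK on rerun)"
    # Failed twice in a row: now it is worth reproducing the scenario
    # manually and, if the product really misbehaves, filing an issue.
    return "fail - reproduce manually"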


This information can only be an abstract of automated testing... it is
more complex indeed, but I hope it helps to understand what we are
doing and what testers from the community should bear in mind when
running automated tests. And I'm pretty sure that if these rules are
followed, there will be satisfying results soon.


Best Regards
 Helge


[1]
http://qa.openoffice.org/ooQAReloaded/AutomationTeamsite/ooQA-TeamAutomationBin.html

-- 
===============================================================
Sun Microsystems GmbH           Helge Delfs
Nagelsweg 55                    Quality Assurance Engineer
20097 Hamburg                   OOo Team Lead Automation
http://qa.openoffice.org        mailto:[email protected]
http://wiki.services.openoffice.org/wiki/User:Hde
