Hi,

I understand you this way:

*The test automation suite and the testtool seem to be based on an asynchronous protocol*:

There seem to be many cases in the test scripts where an event is triggered by the testtool and then a fixed amount of time is waited (sleep()) until the corresponding window shows up (see the sketch below). This can lead to problems if
* machines of different performance are used (the sleep time is not long enough)
* different window managers are used (the window shows up but does not have the focus)
* remote connections are used (depending on the remote and network technology you are using)

This leads to the situation that the tests run quite well in a well-defined, reproducible environment, but can produce unreproducible or at least different results when run outside that environment. On new platforms, for example, the results will be completely unpredictable.
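To make the timing problem concrete, here is a minimal Python sketch (not VCLTestTool syntax; is_window_visible() and trigger_event() are hypothetical stand-ins for whatever the tool actually provides). A fixed sleep() is tuned to one machine, while a polling wait with a timeout adapts to slower machines, other window managers and remote connections:

    import time

    def wait_for_window(is_window_visible, timeout=30.0, poll_interval=0.5):
        # Poll until the window is reported visible, or give up after `timeout` seconds.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if is_window_visible():
                return True
            time.sleep(poll_interval)  # short polls instead of one long, fixed sleep
        return False  # timed out: report a clear synchronization failure

    # Fragile fixed-sleep style:  trigger_event(); time.sleep(5); use_window()
    # Polling style:              trigger_event(); assert wait_for_window(is_window_visible)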

*Binary identical install sets can give different results when run on different OS configurations*:

Different environments (libc.so, ld.so, libX11.so, window manager, CPU / network performance) can lead to unexpected results.

*The official source code can also be configured with many different options (use of system libraries or of the tested/defined libraries which come with the OpenOffice.org source tarballs)*:

There are more than 30 different with-system-lib switches, and we know from examples in the past that some of them (berkeleydb, neon, libxml, etc.) can cause major trouble. I also expect different test automation results when different versions of these libraries are used on the same machine. Additionally, Unix-based distributions will often come with patched versions of these libraries, which finally ends in a nightmare.

Martin

Helge Delfs wrote:
Hi André, Maho, *,

sorry for the late answer, I must have missed this mail :-(

On 13.04.09 18:11, André Schnabel wrote:
Anyway - even if we resolve all the issues for running the scripts in a
batch, I'm still not sure how useful this would be - unless we know
how (exactly) the Sun team is getting its results. Looking at the status
in quaste, for OOo310_M9 no community tester seems to be able to get
the same results as the Sun team did.

Let me take this up and try to help you understand what automated
testing involves.

1. Trust your tools (automated tests, testtool, scripts)
From my point of view it seems there is still a lack of confidence in
running automated tests and in the results they produce. The results
created by the automated testers at Sun prove to me that the tests run
in general and that the results they create are reproducible. All
issues found by automated tests during this release cycle were 100%
reproducible and were identified within a short timeframe.
The VCLTestTool itself is thoroughly proven and has passed a long testing
phase. That is why the VCLTestTool from the Automation pages[1] should
always be used.
With this information you, as a member of quality assurance, should be
self-assured enough to say: yes, I trust the tools I have available.

2. Test environment
One must understand that there are some boundary
conditions to be met to create reproducible test results. It is an absolute
must that the environment is proven and remains the same during a test
cycle (like testing the release for OOO310). It makes no sense to change
machines, environments or the VCLTestTool during this time if test results
are to stay reproducible. The optimum would be a dedicated machine or a
virtual machine used for automated testing purposes only. Only with these
prerequisites are you able to produce believable results.
If possible, always have the tests run by the same users...

3. How to handle results of automated tests?
I often hear: "Autotests are failing, they don't work!"
Have you ever considered that the test fails because an issue in
OpenOffice.org has been found by exactly this test? Most people do not,
because it is time-consuming to reproduce those issues manually. But
before trying to reproduce a scenario manually, it is often helpful to
run the single testcase where the error occurred a second time (see the
sketch below). In most cases this fixes the problem and your test results
are fine.
If this doesn't work, also try to understand what the testcase does and
recreate the same scenario manually to find out where the issue is. That is
what quality assurance is all about. And if you ever stumble over an issue
in the autotest itself (which might occur, I never said it doesn't), you can
be sure your tested product is fine... and assuring this is your goal.
Of course the autotest should also be fixed or adapted.
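As a rough illustration of the "run it a second time" advice, here is a small
Python sketch (run_testcase() is a hypothetical stand-in for however the
testtool is actually invoked; it is assumed to return True on success):

    def run_with_one_retry(run_testcase, testcase_name):
        # Run a testcase; on failure, rerun it once before treating it as a real issue.
        if run_testcase(testcase_name):
            return "passed"
        # First failure: rerun once to rule out a transient timing/environment problem.
        if run_testcase(testcase_name):
            return "passed on retry (likely timing or environment, still worth a look)"
        # Failed twice: reproducible - investigate manually and file an issue.
        return "failed twice - investigate manually"

A failure that survives the retry is the interesting case: it is either a real
OpenOffice.org issue or a bug in the autotest itself, and both are worth reporting.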


This information can only be an abstract of automated testing... it is
more complex indeed, but I hope it helps to explain
what we are doing and what testers from the community should bear in mind
when running automated tests. And I'm pretty sure that if these rules are
followed, there will be satisfying results soon.


Best Regards
 Helge


[1]
http://qa.openoffice.org/ooQAReloaded/AutomationTeamsite/ooQA-TeamAutomationBin.html







