Hi,

-------- Original Message --------
> From: Helge Delfs <[email protected]>
> 
> Let me state this, and let me try to help you understand what automated
> testing represents.
> 
> 1. Trust your tools (automated tests, testtool, scripts)
....
> With the
> results created by the automated testers at Sun, for me it is proven that the
> tests run in general and that the results created are reproducible.

(Notice here that this is "at Sun" - the QUASTE status clearly shows that this
is not the case outside Sun.)

> 
> 2. Test environment
> One must understand that there are some boundary
> conditions to be met to create reproducible test results.

Which are?

> It is an absolute
> must that the environment is proven and remains the same during a test
> cycle (like testing a release for OOO310). It makes no sense to change
> machines, environments or the VCLTestTool during this time if the test
> results are to be reproducible.

So the previously used environment defines what environment to use the next 
time.
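
If that really is the requirement, then at the very least every published
result should carry a fingerprint of those conditions, so that a mismatch
between a Sun run and a community run becomes visible instead of being guessed
at. A minimal sketch in Python (all names hypothetical; the real VCLTestTool
offers nothing like this out of the box):

    import platform
    import getpass

    def environment_fingerprint(testtool_version):
        # The boundary conditions named above: same machine, same
        # environment, same tool, same user for a whole test cycle.
        # "testtool_version" is a hypothetical parameter; the real
        # VCLTestTool does not report its version to a script.
        return {
            "machine": platform.node(),
            "os": platform.platform(),
            "user": getpass.getuser(),
            "testtool": testtool_version,
        }

    def results_comparable(run_a, run_b):
        # Two runs are only worth comparing if they were produced
        # under the same recorded conditions.
        return run_a["fingerprint"] == run_b["fingerprint"]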

> The optimum would be a dedicated machine, or a
> virtual machine, used for automated testing purposes only. Only with these
> prerequisites are you able to produce believable results.
> If possible, always have those tests run by the same users...

In short: if Sun does the initial tests, only Sun can do reliable tests
for comparison.
Or in other words: it is not useful if the community is doing tests.


> 
> 3. How to handle results of automated tests?
> I often hear: Autotests are failing, they don't work!

Look at QUASTE!

> Have you ever thought that the test fails because an issue in
> OpenOffice.org has been found by exactly this test? Most do not,
> because it's time-consuming to reproduce those issues manually.

Sorry, Helge - what do you think we as QA community testers are doing? Just
ranting around without verifying the results?
Yes, at the moment I *am* ranting.

We *try* to analyse the results ... and the outcome often is that we cannot
reproduce the results that the Sun team gets. But every time we report this,
we hear "oh, there must be something wrong on your side".

> But
> before trying to reproduce a scenario manually, it is often helpful to
> run the single testcase (where the error occurred) a second time. In most
> cases this fixes the problem and your test results are fine.
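
To spell out what is being proposed here - a failure only counts if it happens
twice in a row - a rough sketch in Python (the run_testcase hook is
hypothetical, not a real VCLTestTool interface):

    from dataclasses import dataclass

    @dataclass
    class Result:
        testcase: str
        passed: bool

    def run_with_retry(run_testcase, testcase):
        # The rule described above: re-run the single failing
        # testcase once and let the second result replace the first.
        first = run_testcase(testcase)
        if first.passed:
            return first
        return run_testcase(testcase)  # keep whatever comes back

A testcase that fails intermittently will pass such a gate about half the
time; the flakiness is hidden, not explained.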

And you call this "reliable"?

Sorry - I'm going to lose the last grain of confidence I had in automated
testing.

André
