Hi Andre,

I just wanted to offer some assistance to the community and never
intended to harm anyone. I have the fullest respect for what you and the
community are contributing.
What I wrote here is a fraction of what I have learned in more than 10
years of automated testing, and I wanted both to share it with you and
to gather more experience from others. My aim was simply to help people
understand which requirements must be met to generate useful results. I
am well aware of the problems the community has with the autotests, and
that was my motivation for writing this.

Thank you!


Best Regards
 Helge



On 24.04.09 11:24, Andre Schnabel wrote:
> Hi,
> 
> -------- Original Message --------
>> From: Helge Delfs <[email protected]>
>>
>> Let me state this, and give me a chance to help explain what automated
>> testing represents.
>>
>> 1. Trust your tools (automated tests, testtool, scripts)
> ....
>> The
>> results created by automated testers at Sun prove to me that the tests
>> run in general and that the results they create are reproducible. 
> 
> (Notice here that this is "at Sun" - the QUASTE status clearly shows 
> that this is not the case "outside Sun".)
> 
>> 2. Test environment
>> One must understand that there are some boundary
>> conditions to be met to create reproducible test results. 
> 
> Which are?
> 
>> It is an absolute
>> must that the environment is proven and remains the same during a test
>> cycle (such as testing the release for OOO310). It makes no sense to
>> change machines, environments, or the VCLTestTool during this time if
>> test results are to be reproducible. 
> 
> So the previously used environment defines what environment to use the next 
> time.
> 
>> The optimum would be a dedicated machine or a virtual machine used for
>> automated testing purposes only. Only with these prerequisites are you
>> able to produce believable results.
>> If possible, always have those tests run by the same users...
> 
> In short: if Sun does the initial tests, only Sun can do reliable tests 
> for comparison.
> Or in other words: it is not useful if the community is doing tests.
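
(Adding one concrete idea to point 2 here: the "same environment"
requirement can be made checkable by machine instead of depending on who
runs the tests. Below is a minimal sketch in plain Python - the file
name and the recorded fields are my assumptions, not an existing
VCLTestTool feature:

    import json, os, platform, sys

    BASELINE = "testcycle_env.json"       # hypothetical per-cycle file

    def fingerprint():
        """Collect the environment facts that must stay constant."""
        return {
            "host": platform.node(),
            "os": platform.platform(),
            "user": os.environ.get("USER") or os.environ.get("USERNAME", ""),
        }

    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(fingerprint(), f)   # first run of the cycle: record it
        print("Baseline recorded for this test cycle.")
    elif json.load(open(BASELINE)) != fingerprint():
        sys.exit("Environment changed during the test cycle - "
                 "results are not comparable.")
    else:
        print("Environment unchanged - results remain comparable.")

If the check fails, the new results should simply not be compared
against the earlier ones from that cycle - regardless of who produced
them.)
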
> 
> 
>> 3. How to handle results of automated tests ?
>> I often hear: "Autotests are failing, they don't work!"
> 
> Look at QUASTE!
> 
>> Have you ever considered that a test fails because an issue in
>> OpenOffice.org has been found by exactly this test? Most people do not,
>> because it's time-consuming to reproduce those issues manually. 
> 
> Sorry Helge - what do you think we as QA community testers are doing? 
> Just ranting around without verifying the results?
> Yes, at the moment I *am* ranting.
> 
> We *try* to analyse the results ... and the outcome often is that we 
> cannot reproduce the results that the Sun team gets. But every time we 
> report this we hear "oh, there must be something wrong on your side".
> 
>> But
>> before trying to reproduce a scenario manually, it is often helpful to
>> run the single testcase (where the error occurred) a second time. In
>> most cases this fixes the problem and your test results are fine.
> 
> And you call this "reliable"?
> 
> Sorry - I'm going to lose the last grain of confidence I had in 
> automated testing.
> 
> André
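
P.S.: To make the re-run advice from point 3 concrete: the second run
can be automated, so that only failures which reproduce twice in a row
need manual analysis. A minimal sketch in plain Python, assuming each
testcase can be started as an external command that exits with 0 on
success (the "run_single_test.sh" wrapper is hypothetical, not actual
VCLTestTool syntax):

    import subprocess

    def run_testcase(name):
        """Run one testcase; True means it passed (exit code 0)."""
        result = subprocess.run(["./run_single_test.sh", name])  # hypothetical
        return result.returncode == 0

    def classify(name):
        """One run, plus a single re-run on failure, as in point 3."""
        if run_testcase(name):
            return "PASS"
        if run_testcase(name):            # second run passed,
            return "FLAKY"                # so the first failure was noise
        return "REPRODUCIBLE FAILURE"     # failed twice: analyse manually

    for testcase in ["writer_basic", "calc_formula"]:  # example names
        print(testcase, "->", classify(testcase))

Everything classified as FLAKY is then a candidate for an environment or
timing problem rather than an OpenOffice.org issue, and does not need to
be reproduced manually first.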


-- 
===============================================================
Sun Microsystems GmbH           Helge Delfs
Nagelsweg 55                    Quality Assurance Engineer
20097 Hamburg                   OOo Team Lead Automation
http://qa.openoffice.org        mailto:[email protected]
http://wiki.services.openoffice.org/wiki/User:Hde
