Hi,

On 24.04.09 10:49, Martin Hollmichel wrote:
> Hi,
>
> I understand you this way:
>
> *The test automation suite and the testtool seem to be based on an
> asynchronous protocol*:
>
> There seem to be many cases in the test scripts where events are
> triggered by the testtool and then a fixed amount of time is waited
> (sleep()) until the corresponding window shows up. This can lead to
> problems if
> * machines of different performance are used (sleep time not long
>   enough)
> * different window managers are used (the window shows up but does
>   not have the focus)
> * remote connections are used, depending on the remote and network
>   technology in use
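First, to make the failure mode concrete: a fixed sleep() encodes an
assumption about machine speed, while polling for the expected
condition does not. A minimal sketch of the difference, in Python for
readability (the actual scripts are written in the TestTool's BASIC
dialect, and dialog_is_open() is a hypothetical query function, not a
real API):

import time

def wait_for(condition, timeout=10.0, poll=0.1):
    """Poll until condition() is true instead of sleeping a fixed time.

    A fixed sleep(2) breaks on a slow machine (the window is not up
    yet) and wastes time on a fast one; polling adapts to both and
    fails loudly when the window never appears at all.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1f s" % timeout)

# Hypothetical usage; dialog_is_open() stands in for whatever query
# the test framework offers for window state:
#   wait_for(lambda: dialog_is_open("Tools/Options"), timeout=30.0)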
These issues are constantly being removed from the scripts. A sleep()
found in an automated script does not mean it belongs there. There are
some genuinely asynchronous slots in OpenOffice.org, but these are
rare; see the blog by Joerg Skottke that explains the exceptions [1].
The automated test scripts have been sped up over the last months;
they are of high quality and will produce reproducible results if the
documented guidelines are followed.

> This leads to the situation that the tests run quite well when they
> are run in a well-defined, reproducible environment, but can produce
> unreproducible or at least different results when run outside that
> well-defined environment. For new platforms, e.g., results will be
> completely unpredictable.

For new platforms you normally have to run a test cycle to find the
differences and possible issues. This is a normal process that must
take place. It is absolutely clear to me that this cannot always be
done, but every difference has a reason, and finding it out is QA
work. It was a long journey for us to learn this, and that is why we
decided to set up reference machines. See my latest blog about the
results of the automated tests during OOO310 testing [2], where
reproducible results are demonstrated.

> *Binary-identical install sets can give different results when run
> on different OS configurations*:
>
> Different environments (libc.so, ld.so, libX11.so, window manager,
> CPU/network performance) can lead to unexpected results.
>
> * Also, the official source code can be configured with many
> different options (use of system libraries, or of the tested/defined
> libraries that come with the OpenOffice.org source tarballs).
>
> There are more than 30 different with-system-lib switches; we know
> from examples in the past that some of them (berkeleydb, neon,
> libxml, etc.) can cause major trouble. I also expect different test
> automation results when different versions of these libraries are
> used on the same machine. Additionally, Unix-based distributions will
> often ship patched versions of these libraries, which finally ends in
> a nightmare.

I fully agree, but all differences are issues to be found and fixed.
One must have a defined environment in which to run these automated
tests to get reproducible test results. That is what it is all about.

Regards,
Helge
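P.S. On the point of a defined environment: a minimal, purely
illustrative sketch (again Python, not the TestTool BASIC the suite
actually uses) of recording the environment a test run executed in, so
that diverging results can be traced back to a diverging environment:

import json
import platform
import sys

def environment_fingerprint():
    """Collect host facts known to influence automated GUI test
    results: OS level, CPU architecture, and the C library version."""
    return {
        "os": platform.platform(),              # kernel / distribution
        "machine": platform.machine(),          # cpu architecture
        "libc": " ".join(platform.libc_ver()),  # e.g. "glibc 2.9"
    }

# Attach this to every test log; two runs are only directly
# comparable when their fingerprints match.
json.dump(environment_fingerprint(), sys.stdout, indent=2)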
