Hi Thorsten,

Thorsten Ziehm schrieb:
Hi Andre,

Andre Schnabel schrieb:
Hi,

(Note here that this is "at Sun" - the QUASTE status clearly shows that
this is not the case "outside Sun".)

When you are running a test on your system, are the results reproducible
when you run it more than once?

In general - yes.

Perfect. This is why Helge and I said that the tests show reproducible
results.

Well - suppose we have a public building with a security gate. The gate is controlled by a fingerprint sensor. The sensor recognizes only one fingerprint. In this case you can argue that all attempts to enter the building are perfectly reproducible. Everybody (but the one person with the correct fingerprint) will fail to enter the building.

Inside the building there is a testing lab. All works fine (including the security gate). All is perfect ... but still the testing lab does not have enough resources to get all the tests done.

....

In my world it isn't a problem, because the solution can be the TestBots
for general use.

Depends on what the TestBots would look like.
If the TestBots consist of a set of scripts that can run on any given system, we have essentially what we have now. You would need a dedicated system for each TestBot. But "dedicated system" is the opposite of "general use".

An alternative would be to use virtual machines that could be cloned. But this is still no solution for "general use". E.g. for CWS testing, three major platforms should be covered. With cloned virtual machines we can easily cover the free platforms (Linux, Solaris x86, BSD). For legal reasons we cannot clone proprietary platforms like Windows and MacOSX. So what should we do with these platforms?

And what about the platforms that we do not really know yet? The RedFlag team is testing on the MIPS platform, OOo is working on ARM, we have a port to eComStation ...

I have often written about this and can repeat it again
and again. Please wait some more months. I hope the Sun team can present
it soon.

And I would still argue that this is the wrong way. We need a way that helps people get the same results on (slightly) different systems. We need a minimum set of tests that is reliable even when run by different people.

If we don't get this, we will always have just a very small set of people who run automated VCL tests. We might have this small set of people running the tests on 100 TestBots instead of 10 dedicated machines - but still only that small set of people will be able to analyze the results and develop the scripts.

André

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]