Hi Tora,

tora wrote:
Hi,

Thorsten Ziehm wrote:
Sun's automated GUI testing team wants to support the release testing of
the localized builds of OOo.

That would be so nice!

Automated GUI testing of Sun's StarOffice/StarSuite (the Sun-branded
build of OOo) will be done on each Sun-supported platform [1] and
language [2] for each RC.

I do not think there would be any objection to your proposal.
So, can we discuss it further?

Could you describe the features of the automated GUI test a little bit more?

It is not possible for me to describe the TestTool functionality in only
a few words. All information about the TestTool can be found here:
http://qa.openoffice.org/qatesttool/index.html
All information about testable controls etc. can be found in the
cookbook: http://qa.openoffice.org/qatesttool/OOo_tt_CookBook.pdf

As you know, invoking a function by its slot number can potentially miss
localization-related bugs. For example, there was a bug several years
ago: Insert -> Header -> Default did not work with the mouse in a
localized build, but the same function worked through the TestTool.


The TestTool does not test 100% of OOo/SO functionality. Selection with
the mouse is difficult to realize with the TestTool, so there are not
many tests of mouse functionality. This is not a problem specific to
localized builds: if this bug had also occurred in an English version,
it would never have been found by the TestTool either, unless somebody
wrote a test script especially for this problem. But we do not have the
resources to write a test case for each bug.
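To illustrate why a slot-based script misses such bugs, a TestTool test
case for the header example might look roughly like the sketch below.
This is a hypothetical sketch in the TestTool's BASIC dialect; the
helper routines (hNewDocument, hCloseDocument) and the slot command
name are assumptions based on the conventions in the cookbook linked
above, not verified against the current test scripts.

```
' Hypothetical sketch of a TestTool test case for the header example.
' The slot command name (InsertHeaderFooterHeaderDefault) and the
' helpers hNewDocument/hCloseDocument are assumed, not verified.
testcase tInsertDefaultHeader
    Call hNewDocument                 ' open a new Writer document
    Kontext "DocumentWriter"
    InsertHeaderFooterHeaderDefault   ' invoke the function via its slot
    ' The slot call bypasses the localized menu entirely, so a broken
    ' localized menu entry would not be caught by this script.
    Call hCloseDocument
endcase
```

Such a script exercises the function itself but not the localized menu
path a user would take, which is exactly the gap tora's example points
at.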

In the meantime, some results produced automatically by the test
scripts might have to be verified by a human being who has the cultural
background for the localized build.


Testing and QA by people with the appropriate cultural background are
needed anyway. If some of these testers can also write automated test
scripts, that will be very helpful. ;-)

Therefore, knowing what will be done by the automated GUI testing and
what will not be covered might help us discuss the testing further.


How should we generate such a list? I do not know. Only when a bug is
found do we know that something is missing in the automated test
scripts. And then we have to find the resources to write those test
scripts.

Regards,
  Thorsten

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
