Hi Sophie,
Sophie wrote:
[...]
The problem is not that the smoketest wasn't performed. The problem is
that smoketest_oo did run without a problem. It doesn't do an
installation the way a user does, and it starts OOo in an environment
different from the one a user would use.
smoketest_oo native just runs and reports success in both cases.
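Just to make that concrete, here is a minimal sketch (not the actual
smoketest code; the binary path and the flag are placeholders I made up)
of what such a check boils down to: launch whatever office binary you
point it at and treat a clean exit as success. A pass therefore only
tells you something about the environment the binary was started in,
not about a user-style installation.

import subprocess
import sys

def run_smoke_check(office_binary, args=(), timeout=300):
    """Launch the given office binary and treat a zero exit code as success.

    This only exercises the binary in the environment it is started from
    (e.g. the build tree); it says nothing about an installation done the
    way a user would do it."""
    try:
        result = subprocess.run([office_binary, *args], timeout=timeout)
    except (OSError, subprocess.TimeoutExpired) as exc:
        print("smoke check could not complete: %s" % exc, file=sys.stderr)
        return False
    return result.returncode == 0

if __name__ == "__main__":
    # Hypothetical path and flag; point it at the build tree or an install.
    ok = run_smoke_check("/path/to/program/soffice", ["-invisible"])
    print("PASS" if ok else "FAIL")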
From what I know, a comparison between cws and mws tests will not
provide reliable information if they are not run in exactly the same
environment, so the community is out of scope here, or did I miss
something? I found a stopper on 3.1 by accident, only because I was
doing localization; this is not a process I have much confidence in.
Changing the environment (e.g. testing on a new Windows system like
Windows 7) will identify problems which aren't new in the product but
show up only in this environment. The same applies to localization,
accessibility, different platforms, etc.
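Because of that, it helps to record the environment together with every
result. The sketch below (the field names are my own, not any defined
report format) collects the details that usually explain "works here,
fails there" differences:

import json
import locale
import platform

def environment_fingerprint():
    """Collect OS, architecture and locale, the details that usually
    explain why a problem shows up only in one environment."""
    locale.setlocale(locale.LC_ALL, "")   # use the user's default locale
    lang, encoding = locale.getlocale()
    return {
        "platform": platform.platform(),  # e.g. "Windows-7-..." or "Linux-..."
        "machine": platform.machine(),
        "locale": lang,
        "encoding": encoding,
    }

if __name__ == "__main__":
    # Attach this to a test report so environment-specific issues stay traceable.
    print(json.dumps(environment_fingerprint(), indent=2))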
Therefore we ask people to check the Office in different locales and on
different platforms. When you do this with automated testing (smoketest,
API tests, complex tests, GUI tests with TestTool ...) you have to check
against defined reference systems. For GUI testing with TestTool this is
done by the Sun team. You can compare your results against the results
in QUASTe and identify the differences. Perhaps the differences are
bugs, perhaps it is only a timing issue or something else. But they
should be reported as issues to the automated testers. They can then
adapt the tests so that the problem doesn't show up in the next master
test on your system. Only then can you be confident that the tests will
run on 'your system' without differences compared to the defined systems.
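The comparison itself does not need to be complicated. Here is a sketch
of the idea; it assumes both the local results and the reference results
(e.g. taken from QUASTe) are available as simple "test name: status"
text files, which is my own assumption about the format, not how QUASTe
actually exports them:

def load_results(path):
    """Parse lines of the form 'test_name: PASS' or 'test_name: FAIL'."""
    results = {}
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            line = line.strip()
            if not line or ":" not in line:
                continue
            name, status = line.split(":", 1)
            results[name.strip()] = status.strip()
    return results

def report_differences(local_path, reference_path):
    """Print every test whose status differs from the reference run.

    Each difference is a candidate issue for the automated-test owners:
    it may be a real bug, a timing issue or an environment effect."""
    local = load_results(local_path)
    reference = load_results(reference_path)
    for name in sorted(set(local) | set(reference)):
        mine = local.get(name, "MISSING")
        theirs = reference.get(name, "MISSING")
        if mine != theirs:
            print("%s: local=%s reference=%s" % (name, mine, theirs))

if __name__ == "__main__":
    # Hypothetical file names; the reference would come from the defined system.
    report_differences("my_results.txt", "quaste_reference.txt")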
The second just means: the smoketest cannot test everything. In fact, I
think the smoketest is just a very minimal set; there should be other
(automatic) tests for that kind of problem.
Sure, and I don't know which tests were actually run during the
RC testing that allowed the PDF-export crasher to go undetected, so I
don't want to speculate about it.
But I still think that (at least currently) the number of issues found
via test scripts is very low compared to those found manually by people
actually using OOo...
This is something I would like to understand as well: what is the right
process? Who is doing the manual test cases from TCS? Could we have the
same test examples, etc.? Do we believe in snapshot testing, and in that
case do we provide all the necessary manpower for those tests? And to
Thorsten: it's not about documenting the process, but more about
lobbying for it among the volunteers.
Without documenting processes you cannot show volunteers how to do their
work, I think. That lobbying is important, you are right. But here we
need the support of the L10N teams and the whole QA project. We at
Sun do not have any chance of lobbying for this, as I have seen over the
past years. People like you are the supporters and multipliers for us.
Thorsten