Hi!

Dan Kegel wrote:
> On Wed, Jul 1, 2009 at 2:18 AM, Stefan Weigel <[email protected]> wrote:
>> No doubt, testing is necessary. But one cannot produce quality by testing alone.
>> High quality is the result of *preventing* bugs.

All people involved are humans, no matter what they might think of themselves. Thus: errors do occur, and errors will occur, in every state of the code's evolution.

> But thorough automated testing can help by
> a) pointing out regressions early so they can be fixed before they
> infect the trunk, and

That's it. What can be done is to catch errors at the earliest possible stage, so that they do not produce bugs in the trunk (= the current master that will become the release).

Code review by a skeptical (!!!) second developer is a good thing.

> b) rubbing the noses of those who committed the regressions in their mess
> closer to the time they did it; that way they get the message better that
> they might need to improve their quality processes.

Hmmm... the "personal motivation" of "naming the culprit in public" might work for some. But it leads to the always-first, always-stupid question "Who did it?". The question must always be "What can we all learn?", and never "How do we punish one person with maximum effect, to the entertainment of those whose wrongdoing went undetected?".

Back to the much-discussed automated tests.
There is no doubt that they are an effective way to find regressions.

So how come people rightfully write "The regression XYZ was not found by the autotests"? How can that happen?
(A) No autotest has been written for this scenario yet.
(B) There is one, but it did not run.

A seemingly easy solution:
(1) Write autotests for EVERY possible scenario.
(2) Run all of them before any bit in the Office trunk is changed.

What spoils this "armchair strategy" are two simple facts:

(1) There are more scenarios than any of us can imagine.
Example: think of "6 clicks". How many permutations do you get in an unmodified Writer main toolbar alone?
(2) The tests require time and hardware.
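To make the combinatorial point concrete, here is a back-of-the-envelope sketch. The button count of 25 is an assumed, illustrative figure, not the actual number of buttons on the Writer toolbar:

```python
# Back-of-the-envelope: how many distinct 6-click sequences exist
# if each click can hit any of N toolbar buttons?
# N_BUTTONS = 25 is an assumed, illustrative count.
N_BUTTONS = 25
CLICKS = 6

# Clicks are ordered and a button may be clicked repeatedly,
# so the number of sequences is N^6.
sequences = N_BUTTONS ** CLICKS
print(sequences)  # 244140625, i.e. well over 10^8 scenarios from one toolbar
```

Even with very rough assumptions, one toolbar alone yields hundreds of millions of click sequences, which is why "test every scenario" is not a plan.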

So what about supporting work to cover more and more scenarios with autotests?

Note that only what is covered by an autotest that actually runs will NEVER EVER break undetected. No matter how careful, optimistic, or experienced the developer is, errors will happen. And the complexity of the Office requires testing the interaction of modules in different contexts, on different platforms, in an RTL UI, you-name-it.

All these are "unexpected things" that the OOo core developers and QA folks here at Sun have seen more often than not over the past ten years. So never underestimate the impact of a "safe two-liner that looks like good code". In the old days (when all changes went into the trunk immediately) we had the legendary case of a rotated bitmap as the only change that prevented the Office from starting.
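The principle of locking behavior in with an autotest can be sketched in a few lines. The rotate_bitmap() helper and its tests below are hypothetical illustrations inspired by that anecdote, not part of the actual OOo test suite:

```python
# Minimal sketch of a regression autotest. rotate_bitmap() is a
# hypothetical helper; the real OOo test harness is of course larger.

def rotate_bitmap(bitmap, degrees):
    """Rotate a bitmap (given as a list of rows) clockwise in 90-degree steps."""
    for _ in range((degrees // 90) % 4):
        bitmap = [list(row) for row in zip(*bitmap[::-1])]
    return bitmap

def test_rotate_360_is_identity():
    # Regression guard: a full rotation must reproduce the input exactly.
    bmp = [[1, 2], [3, 4]]
    assert rotate_bitmap(bmp, 360) == bmp

def test_rotate_90_clockwise():
    bmp = [[1, 2], [3, 4]]
    assert rotate_bitmap(bmp, 90) == [[3, 1], [4, 2]]

# Run the guards; wired into a pre-commit gate, a change that breaks
# bitmap rotation could never reach the trunk undetected.
test_rotate_360_is_identity()
test_rotate_90_clockwise()
```

Whether such a check is written in BASIC, Python, or C++ matters less than that it runs automatically before a change reaches the trunk.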

Summarized:
Autotests are our best weapon against regressions.
But only if the right ones get written and run.

Regards
Stefan, QA Writer

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
