Hi Frank, *,

On Tue, Jun 30, 2009 at 5:45 PM, Frank Schönheit - Sun Microsystems
Germany<[email protected]> wrote:
>
> don't have something (in this very mail) about the "what to do with the
> bug pile" topic, but a few other items cried for my response ...
>
>>> To improve this we need people who write Testtool testscripts.
>> I disagree. The problem is not finding the bugs.
>
> Not sure. Looking at the amount of stoppers which came in in 3.1's
> release phase, and the amount of stoppers already raised for 3.1.1 (and
> most often for good reasons), I think that *finding* bugs *is* a problem.

I aimed at a different issue - the "community feels
misunderstood/ignored" topic.

> [...] I think another
> line of defense must be with the Devs, by not *introducing* the bugs.

Yes, that would of course be nice.

>> And honestly, I'm /very/ disappointed of the automatic testing,
>> admittedly not the testtool, but smoketestoo_native.
>> Smoketestoo_native failed to
>> * Detect a regression where the installation would not run /at all/
>> because of wrong permissions
>> * Detect the PDF-export breaker
>
> The first of those should of course have been detected. If it hasn't,
> this means somebody didn't follow the process

The problem is not that the smoketest wasn't performed. The problem is
that smoketestoo_native did run without problems. It doesn't do an
installation the way a user does, and it starts OOo in a different
environment than a user would.
smoketestoo_native just runs and reports success in both cases.
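
Just to sketch what I mean by "as the user does" (this is only an
illustration, not how smoketestoo_native actually works - the soffice
path and the command line switches below are assumptions on my side):

import subprocess
import sys
import tempfile
import time

SOFFICE = "/opt/openoffice.org3/program/soffice"  # hypothetical install path

def userlike_startup_check(grace_seconds=30):
    # A fresh, empty user profile exercises first-start behaviour
    # (profile creation, file permissions) the way a real user would.
    with tempfile.TemporaryDirectory() as profile:
        try:
            proc = subprocess.Popen(
                [SOFFICE, "-headless", "-invisible",
                 "-env:UserInstallation=file://" + profile])
        except OSError as exc:
            print("could not start soffice:", exc)
            return False
        deadline = time.time() + grace_seconds
        while time.time() < deadline:
            if proc.poll() is not None:
                # soffice died during start-up, i.e. the kind of regression
                # (e.g. wrong permissions) the smoketest did not catch.
                print("soffice exited early with code", proc.returncode)
                return False
            time.sleep(1)
        proc.terminate()
        proc.wait()
        return True

if __name__ == "__main__":
    sys.exit(0 if userlike_startup_check() else 1)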

> The second just means: The smoketest cannot test everything. In fact, I
> think the smoketest is just a very minimal set, there should be other
> (automatic) tests for that kind of problem.

Sure, and I don't know which tests were actually run during the
RC testing, or why the PDF-export crasher was not detected, so I don't
want to speculate about it.

But I still think that (at least currently) the number of issues found
via testscripts is very low compared to those found manually, by people
actually using OOo.
Of course there might be a huge difference between my perception and
reality, since masters are released only after they have passed some of
the tests, so I'm not sure how many problems are identified at that stage.
Apart from finding bugs or not, the other big problem with the
testscripts is their non-deterministic nature. You can run a test and it
reports some warning/error; you run it again and it passes. This makes
it hard to really automate the process and to interpret the results.
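
To illustrate (run_test below is just a hypothetical stand-in for
whatever drives a Testtool script, not a real interface): about the only
way to interpret such results automatically is to run the same script
several times and treat diverging outcomes as "flaky", which still means
somebody has to look at them manually.

from collections import Counter

def classify(run_test, runs=3):
    # Run the same script several times and interpret the combined result.
    outcomes = Counter(bool(run_test()) for _ in range(runs))
    if len(outcomes) > 1:
        # Passed on some runs, reported a warning/error on others:
        # the result tells you very little on its own.
        return "FLAKY"
    return "PASS" if outcomes[True] == runs else "FAIL"

if __name__ == "__main__":
    import random
    # Simulate a script that randomly reports an error.
    print(classify(lambda: random.random() > 0.3, runs=5))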

> [...]
>>> - verifying issues of just integrated child workspaces within the current
>>> master build (to verify there are no integration related issues regarding
>>> this CWS)
>>
>> Jup, nice janitorial task that takes off some workload of the other
>> QA/Devs that can focus on other tasks
>> +1
>
> From your experience: Is this worth it?

It was definitely worth it since the svn migration... :-( (changes from
a CWS not being integrated, and similar problems when the CWS was
integrated).
In general? Hmm, good question. I guess it wouldn't hurt if EIS closed
those issues with an appropriate comment when the CWS is integrated.

> So, in how many cases did checking a VERIFIED FIXED issue in the MWS
> really detect a problem? And in how many cases was it just moving the
> issue to CLOSED?

Here's the next big, big problem. Often enough, developers file issues
with descriptions that are incomprehensible to others, so a QA volunteer
has no chance of knowing what was actually changed or how to actually
verify a "VERIFIED" issue. This was a major problem during the QA IRC
days that aimed to clean up the VERIFIED FIXED issues.

> I'm asking because my gut feeling is that the latter takes too much
> time. And seeing that all issues have already been VERIFIED in the CWS,
> and assuming that *breaking* a fix by merely integrating the CWS is
> unlikely (though surely possible),

Well, it /is/ possible. But it is also possible that a CWS that gets
integrated in milestone "n" reverts a fix that was integrated in
milestone "n-2".
Been there, seen that (I noticed because those were "my" issues), so
that's another argument against manually closing verified issues.

> I wonder whether auto-CLOSING issues
> would free precious QA resources.

Definitely worth a thought: auto-closing by EIS with a comment along
the lines of "The fix for this issue has been integrated into
<milestone>. Please reopen this issue if you can still reproduce it in
any milestone newer than (and including) <milestone>. <URL that
explains milestones/where to get them>"
You've got my vote on this one :-)
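
Just as a sketch of what such an auto-close step could look like (the
close_issue hook is purely hypothetical, EIS has no such interface that
I know of; only the comment template is the point here):

TEMPLATE = (
    "The fix for this issue has been integrated into {milestone}. "
    "Please reopen this issue if you can still reproduce it in any "
    "milestone newer than (and including) {milestone}. "
    "<URL that explains milestones/where to get them>")

def close_issue(issue_id, milestone):
    # Hypothetical hook: in reality EIS would have to talk to
    # IssueTracker here instead of just printing.
    comment = TEMPLATE.format(milestone=milestone)
    print("issue %s: set to CLOSED with comment:\n  %s" % (issue_id, comment))

if __name__ == "__main__":
    # Example run for all issues of a CWS integrated into DEV300 m52
    # (the milestone name is just an example).
    for issue in (101234, 102345):
        close_issue(issue, "DEV300 m52")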

ciao
Christian
