Hi all,
Christian Lohmaier wrote:
Hi Frank, *,
On Tue, Jun 30, 2009 at 5:45 PM, Frank Schönheit - Sun Microsystems
Germany <[email protected]> wrote:
don't have something (in this very mail) about the "what to do with the
bug pile" topic, but a few other items cried for my response ...
To improve this we need people who write Testtool testscripts.
^^^^
I disagree. The problem is not finding the bugs.
Not sure. Looking at the amount of stoppers which came in in 3.1's
release phase, and the amount of stoppers already raised for 3.1.1 (and
most often for good reasons), I think that *finding* bugs *is* a problem.
Yes, and they are only the top of the pile.
I aimed at a different issue - the "community feels
misunderstood/ignored" topic.
+1
[...] I think another
line of defense must be with the Devs, by not *introducing* the bugs.
Yes, that would of course be nice.
And honestly, I'm /very/ disappointed by the automatic testing -
admittedly not the testtool, but smoketestoo_native.
Smoketestoo_native failed to
* Detect a regression where the installation would not run /at all/
because of wrong permissions
* Detect the PDF-export breaker
The first of those should of course have been detected. If it wasn't,
this means somebody didn't follow the process.
The problem is not that the smoketest wasn't performed. The problem is
that smoketestoo_native did run without any problem. It doesn't do an
installation the way the user does, and it starts OOo in a different
environment than the user would.
smoketestoo_native just runs and reports success in both cases.
From what I know, comparing CWS and MWS test runs will not provide
reliable information if they are not run in exactly the same
environment, so the community is out of scope here - or did I miss
something? I found a stopper on 3.1 by accident, only because I was
doing localization; that is not a process I can have confidence in.
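Just to make it concrete: the kind of check that would have caught the
permission regression is small. A rough sketch only (the install path
is made up, and this is not what smoketestoo_native actually does):

  # Walk an installed tree and complain about files a normal user could
  # not read, or directories a normal user could not enter.
  # /opt/ooo-dev3 is just an example path, not the real install location.
  import os, stat, sys

  def check_permissions(root):
      problems = []
      for dirpath, dirnames, filenames in os.walk(root):
          for name in dirnames + filenames:
              path = os.path.join(dirpath, name)
              mode = os.stat(path).st_mode
              if not mode & stat.S_IROTH:
                  problems.append(path + ": not world-readable")
              if stat.S_ISDIR(mode) and not mode & stat.S_IXOTH:
                  problems.append(path + ": directory not enterable")
      return problems

  if __name__ == "__main__":
      root = sys.argv[1] if len(sys.argv) > 1 else "/opt/ooo-dev3"
      for problem in check_permissions(root):
          print(problem)

Of course the real test would also have to install and start OOo as an
unprivileged user, which is exactly the part that is skipped today.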
The second just means: the smoketest cannot test everything. In fact,
I think the smoketest is just a very minimal set; there should be
other (automatic) tests for that kind of problem.
Sure, and I don't know what tests were actually run during the RC
testing, or why the PDF-export crasher was not detected, so I don't
want to speculate about it.
But I still think that (at least currently) the number of issues
found via testscripts is very low compared to the ones found manually,
by people actually using OOo...
This is something that I would like to understand as well: what is
the right process? Who is running the manual test cases from TCS, and
could we have the same test examples, etc.? Do we believe in snapshot
testing, and if so, do we provide all the necessary manpower for those
tests? And to Thorsten: it's not about documenting the process, it's
more about lobbying for it among the volunteers.
Of course there might be a huge difference between my perception and
reality - masters are released only after they passed some of the
tests, so I'm not sure how many problems are identified at that stage.
Apart from finding bugs or not, the other big problem with the
testscripts is their non-deterministic nature. You can run a test and
it reports some warning/error; you run it again and it passes. This
makes it hard to really automate the process and to interpret the
results.
And even if the error appears several times, you're not sure you
understand what is really behind the test and what you see. I have an
example in the archive of this list with a tab missing in a dialog.
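If a warning only matters when it comes back consistently, the runner
could simply say so. A minimal sketch of what I mean (the script name
and the test name are invented; this is not how the Testtool is driven
today):

  # Rerun a test a few times and report how often it failed, so a
  # one-off warning can be told apart from a reproducible error.
  # "./run_test.sh" and "writer_dialogs" are placeholders.
  import subprocess

  def failure_rate(cmd, runs=5):
      failures = 0
      for _ in range(runs):
          if subprocess.run(cmd, capture_output=True).returncode != 0:
              failures += 1
      return failures / runs

  if __name__ == "__main__":
      rate = failure_rate(["./run_test.sh", "writer_dialogs"], runs=5)
      print("failed in {:.0%} of the runs".format(rate))

That would not explain a missing tab, but at least the results would be
easier to interpret.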
[...]
- verifying issues of just-integrated child workspaces within the current
master build (to verify there are no integration-related issues regarding
this CWS)
Yup, a nice janitorial task that takes some workload off the other
QA/Devs, who can then focus on other tasks.
+1
From your experience: Is this worth it?
It was definitely worth it since the svn migration... :-( (changes of
a cws not being integrated and similar problems when the cws was
integrated).
In general? Hmm. Good question. I guess it wouldn't hurt if EIS closed
those issues with an appropriate comment when the cws is integrated.
I would second that if we had a serious QA process on dev snapshots,
especially for new features - see the mysterious revert of the
spellcheck color to white in the last dev snapshot.
So, in how many cases did checking a VERIFIED FIXED issue in the MWS
really detect a problem? And in how many cases was it just moving the
issue to CLOSED?
Here's the next big, big problem. Often enough, developers file
issues with descriptions that are incomprehensible to others; a QA
volunteer has no chance to know what was actually changed or how to
actually verify a "VERIFIED" issue. This was a major problem at the
QA IRC days that aimed to clean up the verified-fixed issues.
Yes, it's hard when you spend more time trying to understand the
wording on IZ than really testing; you try to rely on the cws comments
and find nothing at all.
I'm asking because my gut feeling is that the latter takes too much
time. And seeing that all issues have already been VERIFIED in the CWS,
and assuming that *breaking* a fix by merely integrating the CWS is
unlikely (though surely possible),
Well, it /is/ possible. But what is also possible is that a cws that
gets integrated in milestone "n" reverts a fix that was integrated in
milestone "n-2".
Seen that, been there (I noticed because those were "my" issues), so
that's another argument against manually closing verified issues.
I wonder whether auto-CLOSING issues
would free precious QA resources.
Definitely worth a thought. Auto-closing by EIS, with a comment along
the lines of "The fix for this issue has been integrated into
<milestone>. Please reopen this issue if you can still reproduce it in
any milestone newer than (and including) <milestone>. <URL that
explains milestones/where to get them>"
You got my vote on this one :-)
I'm not sure the issue is really QA resources so much as managing the
QA process with the community, in a way that lets the community
participate effectively and have fun doing it.
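Just to show how little the auto-close step Christian describes would
need, a sketch of the idea only - nothing EIS does today, and
close_issue() is a placeholder for whatever IZ interface would really
be used:

  # Sketch only: post the canned comment and close each issue that was
  # fixed in the integrated cws. close_issue(issue_id, comment) stands
  # in for the real IssueZilla call and is passed in from outside.
  COMMENT = ("The fix for this issue has been integrated into {milestone}. "
             "Please reopen it if you can still reproduce the problem in "
             "any milestone newer than (and including) {milestone}. "
             "See {url} for what milestones are and where to get them.")

  def auto_close(issue_ids, milestone, url, close_issue):
      for issue_id in issue_ids:
          close_issue(issue_id, COMMENT.format(milestone=milestone, url=url))

The comment text is Christian's, with the milestone and the URL filled
in by EIS at integration time.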
Kind regards
Sophie