Hi Martin,
From my perspective, one reason for the high number of regressions is the
large number of child workspaces integrated shortly before a feature
freeze. At the moment the ITeam (the QA representative) does the
nomination before feature freeze. As an immediate action (for the
Hi Mathias,
Mathias Bauer wrote:
Ingrid Halama wrote:
This is not sufficient. Heavy code restructurings and cleanups are not
bound to the feature freeze date,
Perhaps they should? And at least as far as it concerns me they are.
but have a great potential to
introduce regressions also. I
Mathias Bauer wrote:
Ingrid Halama wrote:
This is not sufficient. Heavy code restructurings and cleanups are not
bound to the feature freeze date,
Perhaps they should? And at least as far as it concerns me they are.
yes, I also consider large amounts of new, moved or restructured
I notice there is a qa sub-module under each module, for example sw/qa,
sd/qa.. but there is also a qadevOOo.
Who can tell me the relationship of these qa-related modules? Does the
automation test of qadevOOo depend on the qa of each module? How do I build
these /qa ?
Thanks in advance!
Martin Hollmichel wrote:
Mathias Bauer wrote:
Ingrid Halama wrote:
This is not sufficient. Heavy code restructurings and cleanups are
not bound to the feature freeze date,
Perhaps they should? And at least as far as it concerns me they are.
yes, I also consider large amounts of new,
Hi Oliver,
thanks for the data.
Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems wrote:
IMHO, we do not find critical problems (show stoppers) in DEV builds
very early; only half of them are found early, according to my experience.
Some data about the show stoppers, which I
As Thorsten pointed out, we are NOT capable of covering the QA for our
product completely NOR are we able to extend QA to new features (no time
for writing new tests, etc.). We also know that this is not because we
are lazy ...
As a matter of fact, many issues are reported by the community, at
Hi,
Original Message
From: Ingrid Halama ingrid.hal...@sun.com
...
So I would like to see mandatory automatic tests that detect whether the
important user scenarios still work properly, whether files are still
rendered as they should, whether the performance of the
Hi Ingrid,
Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether the
important user scenarios still work properly, whether files are still
rendered as they should, whether the performance of the office has not
significantly decreased, ... We have
Hi Mathias et al.,
The problem is ...
Seeing many different explanations in this thread, and suggested
solutions ... I wonder if we should collect some data about the concrete
regressions, before we start speculating 'bout the larger picture.
Oliver's table with the 'introduced in' and 'found in'
Hi all,
Thorsten Ziehm wrote:
Hi Mathias,
Mathias Bauer wrote:
Ingrid Halama wrote:
This is not sufficient. Heavy code restructurings and cleanups are
not bound to the feature freeze date,
Perhaps they should? And at least as far as it concerns me they are.
but have a great potential
Hi Jochen,
Joachim Lingner wrote:
As Thorsten pointed out, we are NOT capable of covering the QA for our
product completely NOR are we able to extend QA to new features (no time
for writing new tests, etc.). We also know that this is not because we
are lazy ...
As a matter of fact, many
Hi Ingrid,
Two problems here. The worst one is that you cannot control that this
new rule is applied. Who decides that a code change is too huge to risk
it for the next release in two months or so? You won't count lines,
will you - that would be stupid. Those who are willing to act
Mathias Bauer wrote:
Ingrid Halama wrote:
This is not sufficient. Heavy code restructurings and cleanups are not
bound to the feature freeze date,
Perhaps they should? And at least as far as it concerns me they are.
but have a great potential to
introduce regressions also. I
Hi Thorsten,
The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should and at
Hi Shuang,
I notice there is a qa sub-module under each module, for example sw/qa,
sd/qa.. but there is also a qadevOOo.
Who can tell me the relationship of these qa-related modules? Does the
automation test of qadevOOo depend on the qa of each module? How do I build
these /qa ?
module/qa
Thorsten Ziehm wrote:
Hi Ingrid,
Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether
the important user scenarios still work properly, whether files are
still rendered as they should, whether the performance of the office
has not significantly
Hi Ingrid,
that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.
I don't agree. Preventing the integration of bugs earlier in the
Hi,
There are more than the VCLTestTool tests. We have the performance tests
and the UNO API test and the convwatch test. All those are in the
responsibility of the developers. I think only convwatch is not mandatory.
it would be really nice to have all these tests also in a cygwin
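The mandatory/optional split described in the snippet above could be modeled like this; a minimal Python sketch, where the suite names follow the mail (only convwatch being optional), but the gate logic itself is an illustrative assumption, not the actual CWS tooling:

```python
# Sketch of a CWS quality gate over the test suites named above.
# Which suites are mandatory follows the mail (only convwatch is not);
# the gating policy here is an illustrative assumption.

MANDATORY = {"vcltesttool", "performance", "uno-api"}
OPTIONAL = {"convwatch"}

def cws_passes(results):
    """A CWS passes when every mandatory suite is green; optional
    suites are reported but do not block integration."""
    return all(results.get(name, False) for name in MANDATORY)

# Example: a failing optional suite does not block integration.
results = {"vcltesttool": True, "performance": True,
           "uno-api": True, "convwatch": False}
print("CWS ready for integration:", cws_passes(results))
```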
Hi Ingrid,
There are more than the VCLTestTool tests. We have the performance tests
and the UNO API test and the convwatch test. All those are in the
responsibility of the developers. I think only convwatch is not mandatory.
As far as I know, convwatch is mandatory, too. In theory, at
Hi Frank,
Frank Schönheit - Sun Microsystems Germany wrote:
Hi Thorsten,
[...]
For instance, is it possible that QA does not have time to write new
automated tests because this is such a laborious and time-consuming
task, but we do not have the time/resources to make it an easy and quick
Hi,
The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should
Maybe it is an
Hi,
I'd prefer a button in EIS that runs all those tests, resulting in an
overview page, showing you red or green for both the overall status
and every single test. Only then, when you need to manually run a red
test to debug and fix it, you're required to do this on your console -
so, I think this
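The red/green overview page proposed above could be aggregated along these lines; a minimal Python sketch with hypothetical test names, not the actual EIS implementation:

```python
# Sketch of the red/green overview described above: one status per
# test plus an overall status that is green only if everything passed.
# Test names are hypothetical; this is not the actual EIS code.

def overall_status(results):
    """Return 'green' only if every single test passed, else 'red'."""
    return "green" if all(results.values()) else "red"

def render_overview(results):
    """Render one line per test plus the overall status line."""
    lines = [f"{name}: {'green' if ok else 'red'}"
             for name, ok in sorted(results.items())]
    lines.append(f"overall: {overall_status(results)}")
    return "\n".join(lines)

# Example: a single red test turns the overall status red.
print(render_overview({"smoketest": True, "uno-api": True,
                       "convwatch": False}))
```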
Hi Thorsten,
Writing good test scripts isn't an easy task, you are right. This is
the status for all software products. Writing test code costs more time
than writing other code. Try it out with UNIT tests ;-)
I know for sure. Writing complex test cases for my UNO API
implementations usually
Hi Max,
Maximilian Odendahl wrote:
Hi,
The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as
Hi Max,
I just thought there is a higher chance of getting support for cygwin in
the near term than having these automated tests in EIS.
As far as I know, there's a group working on this. It would still leave
us with the reliability problem (sometimes a test simply gives you bogus
results,
Hi Ingrid,
Ingrid Halama wrote:
Thorsten Ziehm wrote:
Hi Ingrid,
Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether
the important user scenarios still work properly, whether files are
still rendered as they should, whether the performance of
Hi,
Do you know how often a CWS returns to development because of
broken functionality, unfixed issues or process violations?
of course in regards to process violations, nothing would change. I am
talking about e.g. crashing issues. If the developer tried it and it does
not crash
On 2009.03.13. 12:08, Frank Schönheit - Sun Microsystems Germany wrote:
Hi Ingrid,
that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.
Hi Max,
Do you know how often a CWS returns to development because of
broken functionality, unfixed issues or process violations?
of course in regards to process violations, nothing would change. I am
talking about e.g. crashing issues. If the developer tried it and it does
not
Hi Thorsten,
On Fri, Mar 13, 2009 at 11:23 AM, Thorsten Ziehm thorsten.zi...@sun.com wrote:
[...]
But checking the issues in Master ('verified' -> 'closed') could be
discussed. Here the numbers are really 99%, I think. Nearly all
issues which are fixed in a CWS are fixed in Master too.
Maybe in the
Hi Rich,
summary - while 'release early, release often' is very important, stable
dev snapshots are as important.
Yes, but how to reach that? In theory, trunk is always stable, since
every CWS has undergone tests (before integration) which ensure that it
doesn't break anything. Okay, enough
Hi,
Also, having seen a lot of misunderstandings (Oh! I thought you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the best place for this verification, I'd say.
yes, this is true, so
Hi Max,
yes, this is true, so would you say we could skip the step of going
from verified to closed, doing this verification again?
I'd say this gives the least pain/loss.
(Though my experience with dba31g, whose issues were not fixed at all in
the milestone in which the CWS was integrated
Hi Christian,
Maybe in the cvs days. Now with svn there have been a couple of failed
integrations, quite a number of changes that were reverted by other
CWSs.
Using the verified-closed step to find broken
tooling/SVN/integrations/builds sounds weird, doesn't it? So, this
shouldn't be a
Hi,
yes, this is true, so would you say we could skip the step of going
from verified to closed, doing this verification again?
I'd say this gives the least pain/loss.
by freeing time for other stuff for QA at the same time.
So maybe this idea can be discussed by the QA Leads as a
Hello,
Thorsten Ziehm wrote:
Hi Max,
Maximilian Odendahl wrote:
Hi,
Do you know how often a CWS returns to development because of
broken functionality, unfixed issues or process violations? It's
up to 25-30% of all CWSs. You can check this in EIS. The data is
stable over
Hi Max,
Maximilian Odendahl wrote:
Hi,
Also, having seen a lot of misunderstandings (Oh! I thought you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the best place for this
Hi Mechtilde,
Mechtilde wrote:
Hello,
Thorsten Ziehm wrote:
Hi Max,
Maximilian Odendahl wrote:
Hi,
Do you know how often a CWS returns to development because of
broken functionality, unfixed issues or process violations? It's
up to 25-30% of all CWSs. You can check this in
Hi,
I wrote some comments in this thread already. But I was working on
a longer mail with my collected thoughts about this topic.
What are my thoughts on this topic? My first thought was that there is
nothing (really) different in this release compared to past releases. But
this doesn't mean that
Hi All,
I didn't realize that my login to openoffice.org gave me an OO.org address,
so I apologize for my criticism.
In the future, I would suggest that people be advised to use their
http://www.openoffice.org/servlets/StartPage login IDs, as I would say that
I'm a junior-junior open office
On 2009.03.13. 12:37, Frank Schönheit - Sun Microsystems Germany wrote:
Hi Rich,
summary - while 'release early, release often' is very important, stable
dev snapshots are as important.
Yes, but how to reach that? In theory, trunk is always stable, since
every CWS has undergone tests (before
Joachim Lingner wrote:
As a matter of fact, many issues are reported by the community, at
least the critical ones which often promote to stoppers. IMO, we
should therefore promote the QA community, so there will be more
volunteers (who maybe also develop tests) and extend the time span
Hello KP,
Promoting QA in the community is not enough - you have to retain people. In
order to retain people, the project needs to fix their issues, which inspires
people to use milestones in daily work.
Many developers do not know how great it feels when an issue you are
interested in gets fixed
Ingrid Halama wrote:
Martin Hollmichel wrote:
Mathias Bauer wrote:
Ingrid Halama wrote:
This is not sufficient. Heavy code restructurings and cleanups are
not bound to the feature freeze date,
Perhaps they should? And at least as far as it concerns me they are.
yes, I also
Frank Schönheit - Sun Microsystems Germany wrote:
I think many users would rather have faster fixes than a more stable
milestone (you can always go to the previous release/milestone).
Uhm, I doubt that. What you're saying here is that we should sacrifice
quality for more fixes. I believe this would
Mathias Bauer wrote:
Ingrid Halama wrote:
Martin Hollmichel wrote:
Mathias Bauer wrote:
Ingrid Halama wrote:
This is not sufficient. Heavy code restructurings and cleanups are
not bound to the feature freeze date,
Perhaps they should? And at least as
Hi Mathias,
Mathias Bauer wrote:
More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.
Yes, more testing on Master is welcome, that is
Hello,
Thorsten Ziehm wrote:
Hi Mathias,
Mathias Bauer wrote:
More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.
Yes more
On 12/03/2009 13:36, Mathias Bauer wrote:
Rainman Lee wrote:
Hi Andrew
I know that implicit conversions usually bring more side effects than
convenience. But it is not a reason that we should give them all up,
I think ;)
There is no implicit conversion from std::string to const char*,
because
Hi Thorsten,
The time to master isn't a problem currently, I think.
That's not remotely my experience.
See dba31g
(http://eis.services.openoffice.org/EIS2/cws.ShowCWS?Id=7708&OpenOnly=false&Section=History)
for a recent example of a CWS which needed 36 days from ready for QA
to integrated state
Hi Mechtilde,
So more testing on CWS is also welcome!
Yes, full ACK to the last sentence.
And this is not only a task for the Sun people. The persons who are
interested in a CWS must be able to test a CWS, and this also applies if
they aren't able to build OOo on their own.
I think especially we in
Thorsten Ziehm wrote:
Hi Mathias,
Mathias Bauer wrote:
More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.
Yes more testing on
Hello Kirill,
Uhm, I doubt that. What you're saying here is that we should sacrifice
quality for more fixes. I believe this would be bad for OOo's overall
reputation.
What I mean to say is that we could sacrifice the quality of snapshots to
bring in features faster and to motivate QA
Hi all,
Thorsten Ziehm wrote:
Hi Mathias,
Mathias Bauer wrote:
More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.
Yes more testing
Hi Mathias,
I don't see a lot of sense in making tests mandatory just because we
have them. If a test can probably help to find problems in areas where
we know that we have them, fine. So when tests are defined, it's
necessary to see which problems they can catch and whether that's what we need.
Hello Frank,
Frank Schönheit - Sun Microsystems Germany wrote:
Hi Mechtilde,
So more testing on CWS is also welcome!
Yes, full ACK to the last sentence.
And this is not only a task for the Sun people. The persons who are
interested in a CWS must be able to test a CWS, and this also applies if they
Hi Mechtilde,
I don't think that the developers have to upload each CWS build. I prefer
that the possible testers are able to pick up the CWS builds they want
besides the normal test scenario.
Ah, you're right, that would be most helpful ...
Ciao
Frank
--
- Frank Schönheit, Software Engineer
Hi Ingrid,
please calm down, no reason to become upset.
Ingrid Halama wrote:
This is a matter of how teams work. In general I would give everybody
the credit of being able to judge whether his work imposes a huge risk
on the product or not.
Doesn't the current situation show that this is
Michael Stahl wrote:
On 12/03/2009 13:36, Mathias Bauer wrote:
Rainman Lee wrote:
Hi Andrew
I know that implicit conversions usually bring more side effects than
convenience. But it is not a reason that we should give them all up,
I think ;)
There is no implicit conversion from
Regina Henschel wrote:
Hi all,
Thorsten Ziehm wrote:
Hi Mathias,
Mathias Bauer wrote:
More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in
61 matches