Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Martin,

 From my perspective, one reason for the high number of regressions is the
 high number of child workspaces integrated shortly before a feature
 freeze. At the moment the iTeam (the QA representative) does the
 nomination before feature freeze. As an immediate action (for the
 upcoming 3.2 release), I will limit this freedom to the period up to 4
 weeks before feature freeze; in the last 4 weeks before feature freeze,

In my opinion, it's strictly necessary then to have parallel development
branches earlier than we do today. That is, if there are a lot of
CWSes coming in but not approved/nominated for the next release, then
we should *not* pile them up, but instead have a different branch to
integrate them into. Otherwise, the quality problems will only be shifted
to after the release.

And yes, extending the various phases we have - feature implementation,
bug fixing, release candidates - as suggested by Ingrid would help, too.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer wrote:

Ingrid Halama wrote:

This is not sufficient. Heavy code restructurings and cleanups are not 
bound to the feature freeze date, 

Perhaps they should? And at least as far as it concerns me they are.

but have a great potential to 
introduce regressions also. I think the show-stopper phase must be 
extended in relation to the feature-phase *and* the normal-bug-fixing-phase.


Furthermore what does it help to simply let different people do the 
nominations while the criteria are not clear? So I would like to suggest 
a criterion: In the last four weeks before the feature freeze only those 
 (but all those) CWSes get nominated that have a complete set of 
required tests run successfully. Same for the last four weeks before end 
of normal-bug-fixing-phase. We could start with the tests that are there 
already and develop them further.


The problem is that the usual test runs obviously don't find the bugs
that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality;
they do not have time to check fixed issues in the master; they do not have
time to check code changes in a CWS as much as they should; and in the
end you are right, they do not have the time for real-life testing.

But I want to qualify that last point a little. The QA community and the
L10N testers do find critical problems in DEV builds very early. Most of
the regressions which were reported in the past days on the releases list
are regressions from builds quite far in the past. Some of the issues
weren't identified very early by Sun employees, because they have to look
into a lot of issues these days to identify the show stoppers.

So the QA project has a big problem with the mass of integrations: they
cannot check every new piece of functionality on a regular basis, because
they do not find the time to write the corresponding test cases for
VCLTestTool, and they do not find the time to check whether the
functionality is correctly integrated in the master build.


I think we need to

- stop with larger code changes (not only features) much earlier before
the release. We should not plan for finishing the work right before the
feature freeze date, if something that is not critical for the release
is at risk we better move it to the next release *early* (instead of
desperately trying to keep the schedule) to free time and space for
other things that are considered as critical or very important for the
release.


+1


- make sure that all CWS, especially the bigger ones, get integrated as
fast as possible to allow for more real-life testing. This includes that
no such CWS should lie around for weeks because there is still so much
time to test it as the feature freeze is still 2 months away. This will
require reliable arrangements between development and QA.


+1


- reduce the number of bug fixes we put into the micro releases to free
QA resources to get the CWS with larger changes worked on when
development finished the work. This self-limitation will need a lot of
discipline of everybody involved (including myself, I know ;-)).


+1


Ah, and whatever we do, we should write down why we are doing it, so
that we can present it to everybody who blames us for moving his/her
favorite feature to the next release. ;-)


+1

Regards,
  Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Martin Hollmichel

Mathias Bauer wrote:

Ingrid Halama wrote:

  
This is not sufficient. Heavy code restructurings and cleanups are not 
bound to the feature freeze date, 


Perhaps they should? And at least as far as it concerns me they are.
  
Yes, I also consider a large amount of new, moved or restructured code as a 
feature, and I erroneously expected that this was already common 
sense. If all agree, we should add this to the Feature Freeze criteria 
(http://wiki.services.openoffice.org/wiki/Feature_freeze).


Martin





[dev] qa and qadevOOo

2009-03-13 Thread Shuang Qin

I notice there is a qa sub-module under each module, for example sw/qa,
sd/qa, but there is also a qadevOOo.
Can someone tell me the relationship between these qa-related modules? Do
the automation tests in qadevOOo depend on the qa directory of each module?
How do I build these /qa directories?
Thanks in advance!

Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Ingrid Halama

Martin Hollmichel wrote:

Mathias Bauer wrote:

Ingrid Halama wrote:

 
This is not sufficient. Heavy code restructurings and cleanups are 
not bound to the feature freeze date, 

Perhaps they should? And at least as far as it concerns me they are.
  
Yes, I also consider a large amount of new, moved or restructured code as 
a feature, and I erroneously expected that this was already 
common sense. If all agree, we should add this to the Feature Freeze 
criteria (http://wiki.services.openoffice.org/wiki/Feature_freeze).

Two problems here. The worst one is that you cannot control whether this 
new rule is applied. Who decides that a code change is too huge to risk 
for the next release in two months or so? You won't count lines, will 
you - that would be stupid. Those who are willing to act carefully 
are doing so already, I am convinced. And those who are not acting 
carefully you cannot really control with this new rule. So introducing 
this new rule will basically change nothing.

The second problem is that sometimes bad bugs, even ones detected late in 
the phase, need bigger code changes. In my opinion only very experienced 
developers are able to make serious estimates of whether a fix is worth 
the risk or not. So what to do now? Should we make a rule 'Let a very 
experienced developer check your code'? Sometimes I wish for that, but I am 
not sure that it would scale - and how would you organize and control 
such a rule? We have a similar rule for the show stopper phase (let your 
change be reviewed by a second developer), but even that rule is 
violated often, I am convinced.

So I would like to see mandatory automatic tests that detect whether the 
important user scenarios still work properly, whether files are still 
rendered as they should, whether the performance of the office has not 
significantly decreased, and so on. We have a lot of tests already, even if 
there is much room for improvement. In principle some of the tests are 
mandatory already, but this rule gets ignored very often.

The good thing is that a violation of this rule can be detected 
relatively easily and thus used as a criterion for nomination.
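
For illustration only - this is not part of any existing OOo tooling, and
the baseline file format, metric names and 5% threshold below are invented -
a minimal Python sketch of how such a test result could be turned into a
mechanical nomination criterion:

import json
import sys

# Invented threshold: allow at most 5% slowdown against the recorded
# baseline before the CWS nomination is refused.
MAX_SLOWDOWN = 1.05

def gate(baseline_path, current_path):
    # Both files are assumed to map scenario names to milliseconds,
    # e.g. {"load_large_odt_ms": 850, ...} - a hypothetical format.
    with open(baseline_path) as f:
        baseline = json.load(f)
    with open(current_path) as f:
        current = json.load(f)

    ok = True
    for scenario, base_ms in baseline.items():
        cur_ms = current.get(scenario)
        if cur_ms is None:
            print("MISSING  %s: no measurement on the CWS" % scenario)
            ok = False
        elif cur_ms > base_ms * MAX_SLOWDOWN:
            print("FAIL     %s: %s ms vs baseline %s ms" % (scenario, cur_ms, base_ms))
            ok = False
        else:
            print("ok       %s: %s ms" % (scenario, cur_ms))
    return ok

if __name__ == "__main__":
    sys.exit(0 if gate(sys.argv[1], sys.argv[2]) else 1)

A CWS whose gate exits non-zero would simply not be nominated; the point is
that the criterion is mechanical rather than a matter of judgement.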


Ingrid





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Oliver,

thanks for the data.

Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems wrote:

IMHO, we do not find critical problems (show stoppers) in DEV builds 
very early, only half of them are found early according to my experience.

Some data about the show stoppers, which I have fixed in the last days:

ISSUE     INTRODUCED IN           FOUND IN
i99822    DEV300m2 (2008-03-12)   OOO310m3 (2009-02-26)
i99876    DEV300m30 (2008-08-25)  OOO310m3
i99665    DEV300m39 (2009-01-16)  OOO310m3
i100043   OOO310m1                OOO310m4 (2009-03-04)
i100014   OOO310m2                OOO310m4
i100132   DEV300m38 (2008-12-22)  OOO310m4
i100035   SRCm248 (2008-02-21)    OOO310m4
This issue is special, because it was a memory problem, that by accident 
was not detected. Thus, it should not be counted in this statistic.


Looking at this concrete data, I personally can say that we find more or 
less half of the show stoppers early.


Half of them is, in my opinion, a good rate. But this doesn't mean that
we do not have to improve it. And one point is that features have to be
checked more often in the master - perhaps with automated testing, with
regular manual testing or with real-life testing. But this costs resources,
and this is the critical point. Most of the QA community is also part of
the L10N community. This means they are working on translation when OOo
runs into a critical phase like Code Freeze, where most real-life testing
is needed.

So it isn't easy to fix. Therefore I think 50% is a good rate under the
known circumstances.

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Joachim Lingner
As Thorsten pointed out, we are NOT capable of covering the QA for our 
product completely, NOR are we able to extend QA to new features (no time 
for writing new tests, etc.). We also know that this is not because we 
are lazy ...
As a matter of fact, many issues are reported by the community, at least 
the critical ones, which are often promoted to stoppers. IMO, we should 
therefore promote the QA community, so that there will be more volunteers 
(who may also develop tests), and extend the time span between 
feature/code freeze and the actual release date.


Jochen

Martin Hollmichel wrote:

Hi,

so far almost 40 regressions have been reported as stoppers for the 3.1 
release, see query

http://tinyurl.com/cgsm3y .

For 3.0 (http://tinyurl.com/ahkosf) we had 27 of these issues, 
for 2.4 (http://tinyurl.com/c86n3u) we had 23.


We are obviously getting worse, and I would like to know the reasons for 
this. There are too many issues for me to evaluate the root cause of every 
single one, so I would like to ask the project and QA leads to do a root 
cause analysis and to come up with suggestions for avoiding such issues 
in the future.


Additionally, there might be other ideas or suggestions on how to detect 
and fix those issues earlier in our release process.


From my perspective, one reason for the high number of regressions is the 
high number of child workspaces integrated shortly before a feature 
freeze. At the moment the iTeam (the QA representative) does the 
nomination before feature freeze. As an immediate action (for the 
upcoming 3.2 release), I will limit this freedom to the period up to 4 
weeks before feature freeze; in the last 4 weeks before feature freeze, 
I or other members of the release status meeting will do the 
nomination of the CWSes for the upcoming release, or decide to postpone 
them to the release after that.

Martin







Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Andre Schnabel
Hi,

 Original Message 
 From: Ingrid Halama ingrid.hal...@sun.com
...

 So I would like to see mandatory automatic tests that detect whether the 
 important user scenarios still work properly, whether files are still 
 rendered as they should, whether the performance of the office has not 
 significantly decreased, and so on. We have a lot of tests already, even if 
 there is much room for improvement. In principle some of the tests are 
 mandatory already, but this rule gets ignored very often.


The problem with this rule is that there is only a very limited set of people 
who can follow it. E.g. running automated tests and getting reliable 
results is essentially restricted to the Sun QA team in Hamburg at the moment. So - 
no matter what rules we define - for the moment we either have to break them or 
we will delay the integration of CWSes. If we delay the integration, we will 
delay public testing. If we delay public testing, we will find critical errors 
(that cannot be identified by automatic testing) even later.

I know, I still have to write a more complete report about automated 
testing. :(

But as I suggested in another thread, I did some comparisons with automated 
testing on an OOO310m1 build from Sun and one from a buildbot. 
The good thing is that there are not many differences (the buildbot build 
had about 3 errors and 10 warnings more). 
The bad thing is that I had a total of 190 errors in the release and required 
tests. 

I did not yet have the time to analyze what happened. But these results are not 
usable. (And I still would say, I know how to get good results from the 
testtool).
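
To make that kind of comparison repeatable, something like the following
Python sketch could be used - purely an illustration, and the
one-finding-per-line 'ERROR: ...' / 'WARNING: ...' log format is invented,
not the real testtool result format:

import sys

def findings(path, level):
    # Collect the lines of a given level ("ERROR:" or "WARNING:") as a set,
    # so the two runs can be compared regardless of ordering.
    with open(path) as f:
        return set(line.strip() for line in f if line.startswith(level))

def compare(sun_log, buildbot_log):
    for level in ("ERROR:", "WARNING:"):
        sun = findings(sun_log, level)
        bot = findings(buildbot_log, level)
        print("%s only in buildbot build: %d" % (level, len(bot - sun)))
        print("%s only in Sun build:      %d" % (level, len(sun - bot)))
        print("%s in both builds:         %d" % (level, len(sun & bot)))

if __name__ == "__main__":
    compare(sys.argv[1], sys.argv[2])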

André




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Ingrid,

Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether the 
important user scenarios still work properly, whether files are still 
rendered as they should, whether the performance of the office has not 
significantly decreased, and so on. We have a lot of tests already, even if 
there is much room for improvement. In principle some of the tests are 
mandatory already, but this rule gets ignored very often.


What do you mean? There are mandatory tests, and each tester in the Sun QA
team runs these tests on a CWS. You can check whether your CWS has been
tested with VCLTestTool in QUASTe [1].

On the other hand, the CWS policies [2] allow code changes to be
integrated and approved by code review only; the only requirement is that
the CWS owner and the QA representative are different people. This was
introduced to lower the barrier for external developers. If you think
that this is a reason for lower quality in the product, perhaps this
policy has to be discussed.

Thorsten

[1] : http://quaste.services.openoffice.org/
  Use 'search' for CWSs which are integrated or use the CWS-listbox
  for CWSs which are in work.
[2] : http://wiki.services.openoffice.org/wiki/CWS_Policies




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Mathias et al.,

 The problem is ...

Seeing many different explanations in this thread, and suggested
solutions ... I wonder if we should collect some data about the concrete
regressions before we start speculating about the larger picture.

Oliver's table with the 'introduced in' and 'found in' milestones was a good
start; I think we should have this for nearly all of the regressions, ideally
together with a root cause.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems

Hi all,

Thorsten Ziehm wrote:

Hi Mathias,

Mathias Bauer wrote:

Ingrid Halama wrote:

This is not sufficient. Heavy code restructurings and cleanups are 
not bound to the feature freeze date, 

Perhaps they should? And at least as far as it concerns me they are.

but have a great potential to introduce regressions also. I think the 
show-stopper phase must be extended in relation to the feature-phase 
*and* the normal-bug-fixing-phase.


Furthermore what does it help to simply let different people do the 
nominations while the criteria are not clear? So I would like to 
suggest a criterion: In the last four weeks before the feature freeze 
only those (but all those) CWSes get nominated that have a complete 
set of required tests run successfully. Same for the last four weeks 
before end of normal-bug-fixing-phase. We could start with the tests 
that are there already and develop them further.


The problem is that the usual test runs obviously don't find the bugs
that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should and at the
end you are right, they do not have the time for real-life testing.

But at the last point I want to relativize a little bit. The QA 
community and the L10N testers find critical problems in DEV build very

early. Most of the regressions which were reported in the past days on
the releases list, are regressions in the very past builds. Some of the
issues weren't identified very early by Sun employees, because they have
to look in a lot of issues these days to identify the show stoppers.



IMHO, we do not find critical problems (show stoppers) in DEV builds 
very early, only half of them are found early according to my experience.

Some data about the show stoppers, which I have fixed in the last days:

ISSUE   INTRODUCED IN   FOUND IN
i99822  DEV300m2 (2008-03-12)   OOO310m3 (2009-02-26)

i99876  DEV300m30 (2008-08-25)  OOO310m3

i99665  DEV300m39 (2009-01-16)  OOO310m3

i100043 OOO310m1OOO310m4 (2009-03-04)

i100014 OOO310m2OOO310m4

i100132 DEV300m38 (2008-12-22)  OOO310m4

i100035 SRCm248 (2008-02-21)OOO310m4
This issue is special, because it was a memory problem, that by accident 
was not detected. Thus, it should not be counted in this statistic.


Looking at this concrete data, I personally can say that we find more or 
less half of the show stoppers early.
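
As a rough cross-check of the "more or less half" observation, here is a
small Python sketch (not from the original mail); the 90-day threshold for
"found early" is an invented criterion, and i100043/i100014 are skipped
because the table gives no dates for OOO310m1/m2:

from datetime import date

# Milestone dates as given in the table above.
milestone_date = {
    "DEV300m2":  date(2008, 3, 12),
    "DEV300m30": date(2008, 8, 25),
    "DEV300m38": date(2008, 12, 22),
    "DEV300m39": date(2009, 1, 16),
    "OOO310m3":  date(2009, 2, 26),
    "OOO310m4":  date(2009, 3, 4),
}

stoppers = [
    ("i99822",  "DEV300m2",  "OOO310m3"),
    ("i99876",  "DEV300m30", "OOO310m3"),
    ("i99665",  "DEV300m39", "OOO310m3"),
    ("i100132", "DEV300m38", "OOO310m4"),
    # i100043 and i100014 (introduced in OOO310m1/m2) are left out because
    # the table gives no dates for those milestones; i100035 is excluded as
    # noted above.
]

EARLY_DAYS = 90  # invented threshold for "found early"

for issue, intro, found in stoppers:
    lag = (milestone_date[found] - milestone_date[intro]).days
    verdict = "early" if lag <= EARLY_DAYS else "late"
    print("%s: introduced %s, found %s, %d days -> %s" % (issue, intro, found, lag, verdict))

Two of these four come out "early" (i99665, i100132); counting i100043 and
i100014 as early as well gives roughly half, matching the observation above.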



Just my 2 cents,
Oliver.




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Jochen,

Joachim Lingner wrote:
As Thorsten pointed out, we are NOT capable of covering the QA for our 
product completely NOR are we able to extend QA to new features (no time 
for writing new tests, etc.) We also know, that this is not because we 
are lazy ...
As a matter of fact, many issues are reported by the community, at least 
the critical ones which often promote to stoppers. IMO, we should 
therefore promote the QA community, so there will be more volunteers 
(who maybe also develop tests) and extend the time span between 
feature/code freeze and the actual release date.


There is one critical point. When you extend the time for testing and QA
between Feature Freeze and release date, you bind the QA community to
one release (code line) - and then who should do the QA work for the next
release, which the developers are already working on and creating their
CWSes for?

I have talked very often with Martin about extending the time buffers
between Feature Freeze, Code Freeze, Translation Handover ... and
it isn't easy to find a good compromise for all teams.
Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Ingrid,

 Two problems here. The worst one is that you cannot control that this 
 new rule is applied. Who decides that a code change is too huge to risk 
 it for the next release in two months or so? You won't count lines, 
 don't you - that would be stupid. Those who are willing to act carefully 
 are doing that already I am convinced. And those who are not acting 
 carefully you cannot control really with this new rule. So introducing 
 this new rule will basically change nothing.

I beg to disagree. Of course, as you point out, there cannot be a
definite rule for which change is too big in which release phase. But
merely raising the awareness that large code changes are Bad (TM) after
feature freeze might already help.
And if the analysis of the current show stoppers reveals that a significant
amount of them is caused by late big code changes, that is a good argument, I'd say.

So, let's not treat this rule as 'if you don't follow it, you'll be
shot'. I'd consider it a guideline which every reasonable developer
would usually follow.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Ingrid Halama

Mathias Bauer wrote:

Ingrid Halama wrote:

  
This is not sufficient. Heavy code restructurings and cleanups are not 
bound to the feature freeze date, 


Perhaps they should? And at least as far as it concerns me they are.

  
but have a great potential to 
introduce regressions also. I think the show-stopper phase must be 
extended in relation to the feature-phase *and* the normal-bug-fixing-phase.


Furthermore what does it help to simply let different people do the 
nominations while the criteria are not clear? So I would like to suggest 
a criterion: In the last four weeks before the feature freeze only those 
(but all those) CWSes get nominated that have a complete set of 
required tests run successfully. Same for the last four weeks before end 
of normal-bug-fixing-phase. We could start with the tests that are there 
already and develop them further.



The problem is that the usual test runs obviously don't find the bugs
  
That is not obvious to me. Too often the mandatory tests haven't been 
run. And if the tests do not find an important problem, well, then the tests 
should be improved.

that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.
  
I don't agree. Preventing the integration of bugs earlier in the 
production phase, especially before the integration into the master trunk, 
would give us much more freedom. Now we always need to react to show 
stoppers, and react and react, and then the release timeline is at 
risk. All that because the bugs are already in the product. If you 
instead detect the bugs before they are integrated into the product, you 
can keep cool and refuse the bad CWS; then it is not the release that is 
at risk, but only the single bad CWS.

I think we need to

- stop with larger code changes (not only features) much earlier before
the release. We should not plan for finishing the work right before the
feature freeze date, if something that is not critical for the release
is at risk we better move it to the next release *early* (instead of
desperately trying to keep the schedule) to free time and space for
other things that are considered as critical or very important for the
release.

- make sure that all CWS, especially the bigger ones, get integrated as
fast as possible to allow for more real-life testing. This includes that
no such CWS should lie around for weeks because there is still so much
time to test it as the feature freeze is still 2 months away. This will
require reliable arrangements between development and QA.

- reduce the number of bug fixes we put into the micro releases to free
QA resources to get the CWS with larger changes worked on when
development finished the work. This self-limitation will need a lot of
discipline of everybody involved (including myself, I know ;-)).

Ah, and whatever we do, we should write down why we are doing it, so
that we can present it to everybody who blames us for moving his/her
favorite feature to the next release. ;-)
  
I am missing an incentive for good behaviour in these plans. There are 
people who do the feature design, who do the development work, who do the 
testing, who create the automatic tests, who do the documentation - and 
after all these people have done their work, and let's assume they have 
done it well and without show stoppers, after all this someone else comes 
along and says: oh no, I do not think I want to have this for this 
release, there are other things that I want more, and in sum I guess it 
might be too much for the next release? Where is the incentive for good 
behaviour here? There is none; instead there is an incentive to push 
changes into the product quickly and skip careful testing.
Come on, we will lose the good and careful people if there is no 
incentive for good behaviour.


Ingrid




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Thorsten,


 The problem is a bit more complex. The testers and test script writers
 do not have any time for writing new test cases for new functionality,
 they do not have time to check fixed issues in master, they do not have
 time to check code changes in a CWS as much as they should and at the
 end you are right, they do not have the time for real-life testing.

That statement frightens me - way too many "they do not have time" for
my taste.

Is there any chance to change this? Or have we already reached the point
where the daily effort to keep QA running on the current (insufficient)
level prevents us from investing the effort to make QA more efficient?

For instance, is it possible that QA does not have time to write new
automated tests because this is such a laborious and time-consuming
task, but we do not have the time/resources to make it an easy and quick
task?

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] qa and qadevOOo

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Shuang,

 I notice there is a qa sub-module under each module, for example sw/qa,
 sd/qa, but there is also a qadevOOo.
 Can someone tell me the relationship between these qa-related modules? Do
 the automation tests in qadevOOo depend on the qa directory of each module?
 How do I build these /qa directories?

module/qa contains test cases for the module's code. Often, but not
always, this is Java code which depends on the qadevOOo framework -
so-called complex test cases or UNO API tests.

The concrete structure of the qa folder depends on the module; there's
no general rule. To find out, I suggest you look for a makefile.mk
somewhere in the qa folder (or subfolders thereof) and try what happens
when you do a dmake. Either the tests are run immediately (which
is usually the case for C++ test cases), or the tests are compiled, and
a subsequent dmake run or dmake run_test_name will actually
run them.

Note that for the complex tests and the UNO API tests you need a running
OpenOffice.org instance, started with the usual --accept=... parameter.
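
For illustration, a minimal connection sketch using the Python-UNO bridge
that ships with the office (the port number and accept string below are just
the commonly used example values, not required ones). With the office started
along the lines of

  soffice "--accept=socket,host=localhost,port=2002;urp;"

a script can reach it like this:

import uno

# Get the local component context and use it to create a UNO URL resolver.
local_ctx = uno.getComponentContext()
resolver = local_ctx.ServiceManager.createInstanceWithContext(
    "com.sun.star.bridge.UnoUrlResolver", local_ctx)

# Connect to the running office and fetch its component context.
ctx = resolver.resolve(
    "uno:socket,host=localhost,port=2002;urp;StarOffice.ComponentContext")
smgr = ctx.ServiceManager

# From here on a test can drive the office, e.g. via the Desktop service.
desktop = smgr.createInstanceWithContext("com.sun.star.frame.Desktop", ctx)
print("connected: %s" % (desktop is not None))

The qadevOOo-based Java tests do essentially the same connection dance
before exercising the UNO API of the module under test.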

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Ingrid Halama

Thorsten Ziehm wrote:

Hi Ingrid,

Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether 
the important user scenarios still work properly, whether files are 
still rendered as they should, whether the performance of the office 
has not significantly decreased, and so on. We have a lot of tests already 
even if there is much room for improvement. In principle some of the 
tests are mandatory already, but this rule gets ignored very often.


What do you mean? There are mandatory tests, and each tester in the Sun QA
team runs these tests on a CWS. You can check whether your CWS has been
tested with VCLTestTool in QUASTe [1].
There are more tests than the VCLTestTool tests. We have the performance 
tests, the UNO API tests and the convwatch test. All of those are the 
responsibility of the developers. I think only convwatch is not mandatory.

Ingrid


On the other side the CWS policies [2] are that code changes can be
integrated and approved by code review only. Only the CWS owner and
the QA representative must be different. This was introduced to lower
the barrier for external developers. If you think, that this is a
reason for lower quality in the product, perhaps this policy has to be
discussed.

Thorsten

[1] : http://quaste.services.openoffice.org/
  Use 'search' for CWSs which are integrated or use the CWS-listbox
  for CWSs which are in work.
[2] : http://wiki.services.openoffice.org/wiki/CWS_Policies










Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Ingrid,

 that now bite us, most of them have been found by users or testers
 *working* with the program. Adding more CWS test runs and so shortening
 the time for real-life testing will not help us but make things worse.
   
 I don't agree. Preventing the integration of bugs earlier in the 
 production phase especially before the integration into the master trunk 
 would give us much more freedom. Now we always need to react on show 
 stoppers and react and react and uh then the release time line is on 
 risk. All that, because the bugs are already in the product. If you 
 instead detect the bugs before they are integrated into the product you 
 can keep cool, refuse the bad CWS and thus not the  release is on risk 
 but only the single bad CWS.

Hmmm ... difficult.

On the one hand, I agree (and this is what you can read in every QA
handbook) that finding bugs earlier reduces the overall costs.

On the other hand, I suppose (! - that's an interesting facet to find
out when analyzing the current show stoppers: found by whom?) that in
fact the majority of problems are found during real-life usage. And
nobody will use a CWS in real life. So, getting the CWS into the MWS
early has its advantages, too.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,

There are more than the VCLTestTool tests. We have the performance tests 
and the UNO API test and the convwatch test. All those are in the 
responsibility of the developers. I think only convwatch is not mandatory.


it would be really nice to have all these tests usable in a Cygwin 
environment for external contributors as well.


Regards
Max





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Ingrid,

 There are more than the VCLTestTool tests. We have the performance tests 
 and the UNO API test and the convwatch test. All those are in the 
 responsibility of the developers. I think only convwatch is not mandatory.

As far as I know, convwatch is mandatory, too. In theory, at least. In
practice, I doubt anybody is running it, given its reliability.

Which brings me to a very favourite topic of mine: we urgently need the
possibility to run all kinds of automated tests (testtool tests,
convwatch tests, UNO API tests, performance tests, complex test cases -
more, anybody?) in an easy way. Currently, this is *a lot* of manual
work, and not remotely reliable (some test infrastructures are
semi-permanently broken, and some tests produce different results on
subsequent runs, which effectively makes them useless).

As a consequence, a lot of those tests are not run all the time, and the
bugs they could reveal are found too late.
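
A sketch of the 'run everything with one command' idea, in Python - the
suite names and commands below are placeholders, not the real invocations of
the OOo test infrastructure:

import subprocess

# name -> command line; placeholders only, each real suite (testtool,
# convwatch, UNO API, performance, complex tests) needs its own invocation
# and its own way of signalling success.
SUITES = {
    "testtool":    ["true"],
    "convwatch":   ["true"],
    "uno-api":     ["true"],
    "performance": ["true"],
}

def run_all():
    overall_ok = True
    for name, cmd in SUITES.items():
        ok = subprocess.call(cmd) == 0
        overall_ok = overall_ok and ok
        print("%-6s %s" % ("GREEN" if ok else "RED", name))
    print("overall: %s" % ("GREEN" if overall_ok else "RED"))
    return overall_ok

if __name__ == "__main__":
    raise SystemExit(0 if run_all() else 1)

The value is less in the script itself than in having one entry point whose
red/green result can be recorded per CWS.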

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Frank,

Frank Schönheit - Sun Microsystems Germany wrote:

Hi Thorsten,

[...]


For instance, is it possible that QA does not have time to write new
automated tests because this is such a laborious and time-consuming
task, but we do not have the time/resources to make it an easy and quick
task?


Writing good test scripts isn't an easy task, you are right. This is the
status for all software products: writing test code costs more time
than writing other code. Try it out with unit tests ;-)

So it's the same for automated testing with VCLTestTool for OOo. But
the problem here is that the QA team leads for an application are
often the same people who have to write the test scripts. They have
to check the new incoming issues, work in iTeams, work on
verifying issues in CWSes ...

When you have the time to concentrate on writing test scripts only,
you can create hundreds of lines of code per day. But the high workload
on these people leads to only hundreds of lines of code per month.

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should 


Maybe it is an idea to change the 'resolved fixed' - 'verified' process? It 
is probably a waste of time in about 99% of all cases. The developer 
tests the issue before handing over the CWS and then sets it to 
Resolved, so there is a pretty small chance that the issue itself is not 
really fixed. And remember, it will be checked again anyway when setting 
the issue to closed.


This would free time for real-life testing and testing the involved 
area, instead of the specific issue. IMO, this would help to find 
showstoppers a lot earlier.


Best regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,


I'd prefer a button in EIS to run all those tests, resulting in an
overview page showing you red or green for both the overall status
and every single test. Only when you need to manually run a red
test to debug and fix it would you be required to do this on your console -
so I think this could be the second step.


this would be even better of course :-)

I just thought there is a higher chance of getting support for Cygwin in 
the near term than of having these automated tests in EIS.


Regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Thorsten,

 Writing good test scripts isn't an easy tasks you are right. This is
 status for all software products. Writing test code costs more time
 than writing other code. Try it out with UNIT tests ;-)

I know that for sure. Writing complex test cases for my UNO API
implementations usually takes the same time as the implementation
took, or even more. Usually, but not always, I think it's worth it :)

Okay, so let me make this more explicit: I see a ... remote possibility
that our *tooling* for writing tests - namely the testtool - has strong
usability deficiencies, and thus costs too much time fighting the
testtool/infrastructure, time which could be better spent on the actual test.

I might be wrong on that, since I seldom come into contact with the testtool.
But whenever I do, I find it hard to believe we live in the 21st century ...

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Max,

Maximilian Odendahl wrote:

Hi,


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should 


Maybe it is an idea to change the resolved fixed - verified process? It 
is a waste of time in about 99% of all case probably. The developer 
tests the issue before handing over the CWS and then sets it to 
Resolved, so there is a pretty small chance that the issue itself is not 
really fixed. And remember, it will be again checked for setting the 
issue to closed anyway.


Do you know how often a CWS returns to development because of
broken functionality, unfixed issues or process violations? It's
up to 25-30% of all CWSes. You can check this in EIS. The data has been
stable over the past years. :-(

Therefore, in my opinion it isn't good to change the handling of the
'resolved/fixed' - 'verified' status.


But checking the issues in the master ('verified' - 'closed') could be
discussed. Here the numbers really are 99%, I think. Nearly all
issues which are fixed in a CWS are fixed in the master too.

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Max,

 I just thought there is a higher chance of getting support for cygwin in 
 the near time than having these automated tests in EIS.

As far as I know, there's a group working on this. It would still leave
us with the reliability problem (sometimes a test simply gives you bogus
results, and the solution is to re-run it), but it would be a
tremendous step forward.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Ingrid,

Ingrid Halama wrote:

Thorsten Ziehm wrote:

Hi Ingrid,

Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether 
the important user scenarios still work properly, whether files are 
still rendered as they should, whether the performance of the office 
has not significantly decreased, and so on. We have a lot of tests already 
even if there is much room for improvement. In principle some of the 
tests are mandatory already, but this rule gets ignored very often.


What do you mean? There are mandatory tests and each tester in Sun QA
team run these tests on a CWS. You can check if your CWS is tested with
VCLTestTool in QUASTe [1].
There are more than the VCLTestTool tests. We have the performance tests 
and the UNO API test and the convwatch test. All those are in the 
responsibility of the developers. I think only convwatch is not mandatory.

Ingrid


OK, you are right. From my perspective I often look only at the VCLTestTool
tests instead of the whole stack of tools we have.

Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,


Do you know, how often a CWS returns back to development because of
broken functionality, not fixed issues or process violation? 


Of course, with regard to process violations nothing would change. I am 
talking about e.g. crashing issues. If the developer tried it and it does 
not crash anymore, QA should not have to test the scenario again and 
waste time on reproducing the issue (and again when closing the issue).



But to check the issues in Master = verified - closed could be
discussed. Here the numbers are really 99% I think. Nearly all
issues which are fixed in CWS are fixed in Master too.


So I guess we have an agreement to reduce some of QA's workload in this 
Resolved-Verified-Closed chain. At which exact point of it would need 
to be discussed some more, I guess.


Best regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Rich

On 2009.03.13. 12:08, Frank Schönheit - Sun Microsystems Germany wrote:

Hi Ingrid,


that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.
  
I don't agree. Preventing the integration of bugs earlier in the 
production phase especially before the integration into the master trunk 
would give us much more freedom. Now we always need to react on show 
stoppers and react and react and uh then the release time line is on 
risk. All that, because the bugs are already in the product. If you 
instead detect the bugs before they are integrated into the product you 
can keep cool, refuse the bad CWS and thus not the  release is on risk 
but only the single bad CWS.


Hmmm ... difficult.

On the one hand, I agree (and this is what you can read in every QA
handbook) that finding bugs earlier reduces the overall costs.

On the other hand, I suppose (! - that's an interesting facet to find
out when analyzing the current show stoppers: found by whom?) that in
fact the majority of problems are found during real-life usage. And
nobody will use a CWS in real life. So, getting the CWS into the MWS
early has its advantages, too.


seeing as my bug made the list, i'll chime in.
i've reported quite a few bugs, the absolute majority being found in real 
life usage. but i'm not a casual user, as i run dev snapshots most of 
the time, which increases the chances of stumbling upon bugs.


i'd like to think i'm not the only one who does that ;), so i think this 
is the group that would find most of the stoppers in real life scenarios 
(that slipped past automated testing). an important factor would be to get 
as many users (and developers!) as possible doing the same.
i think you have already guessed where i'm heading - stability and 
usability (as in fit for purpose) of dev snapshots. if a dev snapshot 
has a critical problem that prevents me from using it, i'll fall back to 
the last stable version and, quite likely, will stay with it for a while.
that means i won't find some other bug that would prevent somebody else 
from using a dev build and finding yet another bug, and so on ;)


many open source projects tend to keep trunk relatively stable to 
increase the proportion of users who stay on trunk, and use trunk 
themselves. thus breakages are discovered sooner.


summary - while release early, release often is very important, stable 
dev snapshots are as important.



Ciao
Frank

--
 Rich




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Max,

 Do you know, how often a CWS returns back to development because of
 broken functionality, not fixed issues or process violation? 
 
 of course in regards to process violation, nothing would change. I am 
 talking about e.g crashing issues. If the developer tried it and it does 
   not crash anymore, QA should not have to test the scenario again and 
 waste time on reproducing the issue again(and again when closing the issue)

Difficult to draw the line: which issues need verification, which don't?

Also, having seen a lot of misunderstandings (Oh! I thought you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the best place for this verification, I'd say.

Also, IMO good QA engineers tend not only to blindly verify that the concrete
issue is fixed, but also to think about what they saw and did. Sometimes this
leads to additional issues, or to discussions about whether the new behaviour
is really intended and Good (TM), and so on. At least this is my experience
with DBA's QA, and I would not like to miss that, since in the end it also
improves the product.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Christian Lohmaier
Hi Thorsten,

On Fri, Mar 13, 2009 at 11:23 AM, Thorsten Ziehm thorsten.zi...@sun.com wrote:
 [...]
 But to check the issues in Master = verified - closed could be
 discussed. Here the numbers are really 99% I think. Nearly all
 issues which are fixed in CWS are fixed in Master too.

Maybe in the CVS days. Now, with SVN, there have been a couple of failed
integrations and quite a number of changes that were reverted by other
CWSes.
So I don't trust that number. And it makes it really hard to
check: your fix was integrated in mXX, but later gets reverted by the
integration of another CWS in mXX+4.

ciao
Christian




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Rich,

 summary - while release early, release often is very important, stable 
 dev snapshots are as important.

Yes, but how to reach that? In theory, trunk is always stable, since
every CWS has undergone tests (before integration) which ensure that it
doesn't break anything. Okay, enough laughing.

After all, that's exactly the problem here: too many serious bugs slip
past Dev/QA's attention in the CWS and are found on trunk only. If we fix
that, trunk will automatically be much more stable. Easy to say, hard to do.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,


Also, having seen a lot of misunderstandings (Oh! I though you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the the best place for this verification, I'd say.


Yes, this is true. So would you say we could skip the step of going 
from 'verified' to 'closed', which does this verification again?


Regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Max,

 yes, this is true, so would you say we could skip the step from going 
 from verified to closed, doing this verification again?

I'd say this gives the least pain/loss.

(Though my experience with dba31g, whose issues were not fixed at all in
the milestone which the CWS was integrated into, makes me careful
here. On the other hand, this was not discovered by the 'verify in
master and close' step, so ...)

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Christian,

 Maybe in the cvs days. Now with svn there have been a couple of failed
 integrations, quite a number of changes that were reverted by other
 cws.

Using the verified-closed step to find broken
tooling/SVN/integrations/builds sounds weird, doesn't it? So, this
shouldn't be a permanent argument (though it might be a good one at the
moment), but only hold until the root causes are fixed.

 So I don't have trust in that number. And this makes it really hard to
 check when your fix was integrated in mXX, but later gets reverted by
 integration of another cws in mXX+4.

You can never prevent that. If you verify and close the issue in mXX+2, you
won't find the reversal in mXX+4 anyway.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Maximilian Odendahl

Hi,

yes, this is true, so would you say we could skip the step from going 
from verified to closed, doing this verification again?


I'd say this gives the least pain/loss.


And it would free time for other QA work at the same time.

So maybe this idea can be discussed by the QA leads as a possible process 
change for the future?


Regards
Max




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mechtilde
Hello,

Thorsten Ziehm wrote:
 Hi Max,
 
 Maximilian Odendahl wrote:
 Hi,


 
 Do you know, how often a CWS returns back to development because of
 broken functionality, not fixed issues or process violation? Its
 up to 25-30% of all CWSs. You can check this in EIS. The data is
 stable over the past years. :-(

Can you tell me the path - how can I find this information in EIS?

Regards

Mechtilde


-- 
Dipl. Ing. Mechtilde Stehmann
## http://de.openoffice.org
## Contact person for German-language QA
## Free office suite for Linux, Mac, Windows, Solaris
## My page: http://www.mechtilde.de
## PGP encryption welcome! Key-ID: 0x53B3892B





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Max,

Maximilian Odendahl wrote:

Hi,


Also, having seen a lot of misunderstandings (Oh! I though you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the the best place for this verification, I'd say.


yes, this is true, so would you say we could skip the step from going 
from verified to closed, doing this verification again?


We cannot free up any more time here. It isn't mandatory anymore for the
Sun QA team to check the fixes in the master; we skipped this nearly one
year ago - but we didn't change the policy!

But we have tried to organize QA issue-hunting days, where these issues are
addressed. With more or less success :-(

Thorsten





Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello,

Thorsten Ziehm wrote:

Hi Max,

Maximilian Odendahl wrote:

Hi,




Do you know, how often a CWS returns back to development because of
broken functionality, not fixed issues or process violation? Its
up to 25-30% of all CWSs. You can check this in EIS. The data is
stable over the past years. :-(


Can you tell me the Path  how I can find this information in the EIS?


Child Workspace / Search
When you have searched for CWSes with this query, you can find a button
'status change statistics' at the bottom of the results. On this page you
can see how often a CWS toggled between its states.

Thorsten




Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi,

I wrote some comments in this thread already. But I was working on
a longer mail with my collected thoughts about this topic.


Here are my thoughts on this topic. My first thought was that there is
nothing (really) different about this release compared to past releases.
But this doesn't mean that this is good news.

My second thought was to write down all the points I identified which
were (perhaps) different from past releases. Some of the points I will
list I can support with data; others are rumors in the teams which I
heard over the past months or years.

1. The number of regression issues for OOo 3.1
For this release the stopper issue was opened long before code freeze,
in November 08, and 10-15 issues were added before the code freeze at
the beginning of March. What isn't typical for this release is that in
the past days a mass of new regressions came in, and the quality of
these regressions gives a bad picture of the product.
So the overall number isn't that bad over the long run, but I expect
quality issues with the product when so many stopper reports (~20)
were made on the releases list in the past 4 days!

2. What is special in the release plan for OOo 3.1
- the first integration of CWSs was on 30 July 2008 (DEV300m29)
- Feature Freeze for OOo 3.1 was on 18th of December 08
  => starting on 11th of December, in builds DEV300m38 and m39, 56 CWSs
     with 403 issues were integrated for Feature Freeze
     (the CWSs cloned from the OOO300 code line are taken out)
- Code Freeze for OOo 3.0.1 was on 11th of December 08
  => starting on 11th of December, in build OOO300m14, 19 CWSs with 88
     issues were integrated for Code Freeze
This means a very high number of CWSs were handled/finalized by DEV and
QA in a very short time frame - especially before Christmas (most of
the full-time engineers at Sun wanted to go on vacation for 2 weeks).
For me it's the first time that such dates were so close together.

3. What's new in the Build Environment
Starting with build DEV300m33, the source control management (SCM) was
switched to Subversion. Subversion wasn't as good as expected, and it
has some bugs and challenges. I read a lot of internal and external
mails saying that processes were broken, features weren't supported, and
some people simply needed information on how to do this or that. This
was another reason for additional regressions in the code on the master
code line.

4. External CWSs (not handled by Sun developers)
a) In the past months the Sun QA team has gotten more and more external
CWSs, or CWSs where only the QA work is done at Sun. The numbers aren't
so high, but these CWSs bind resources in the Sun QA team, while often
not so much in the development team. This could lead to an unequal
balance between the teams.
b) I heard the rumor in the corridors here at Sun that some external
CWSs lead to broken functionality. If this is correct, why couldn't the
QA representative identify these regressions? Who are the QA
representatives, etc.? Or do we have to change the CWS policies, where a
code review is one possible solution for approving a CWS?

5. General quality of the code
a) I also heard the rumor in the corridors here at Sun that some
features aren't completely ready by the feature freeze date. But the
L10N teams need the UI for translation. So the strings are integrated
first and the functionality is checked in with another CWS later.

If this is really done, it leads to the problem that the iTeam does not
have enough time for regression testing, because the functionality
testing can only start shortly before code freeze or the first Release
Candidate. Also the time for bug fixing is too short.
b) The number of issues marked as regressions in IssueTracker hasn't
gone down in the past years. We still have a rate of 7-8% of all
reported issues which are marked as regressions. For me this means that
we aren't getting better with the developed code, but we aren't getting
worse either. And I think that when I take out the ~25% duplicate
issues and the 10-15% features and enhancements from all reported
issues, the regressions become even more significant. What does 7-8%
mean? It means that for 50 developers, 2 are working only on the
regressions of the other developers.
c) As I said in another thread, the rate at which a CWS returns back to
development is ~25-30%. It has stayed that high over the past years.
And remember, we do not work much with an iteration process. Often the
CWS returns because of process violations, bugs that aren't fixed, or
new bugs raised.

6. What features are important for a release
Do you know the features for the next release? I don't! I am surprised
every time I create the feature lists for general and L10N testing. For
me it looks like everybody can work on the feature he likes most. And
then this feature has the highest priority for him and is a must for
the next release. On the other side, QA, Release Engineering and the
Program Manager don't know which features they have to work on first.
Because 

Re: [dev] SCM System Survey

2009-03-13 Thread David
Hi All,

I didn't realize that my login to openoffice.org gave me an OO.org address,
so I apologize for my criticism.

In the future, I would suggest that people be advised to use their
http://www.openoffice.org/servlets/StartPage login IDs, as I would say that
I'm a junior-junior OpenOffice developer.

Many thanks for all your hard work!

David

2009/3/12 Maximilian Odendahl i...@sept-solutions.de

 Hi again,

  and no link is sent. Someone using my email adress or is the survey not
 configured correctly?


 just after sending I got the other mail, so I guess you added all of us
 already. Sorry for the noise

 Best regards
 Max


 -
 To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
 For additional commands, e-mail: dev-h...@openoffice.org




-- 
+-
David

Mobile  |  +44 (0) 755.269.4191
Email   |  da...@hackbinary.com
Blog|  blog.hackbinary.com
Website |  www.hackbinary.com

I have no doubt that in reality the future will be vastly more surprising
than anything I can imagine. Now my own suspicion is that the Universe is
not only queerer than we suppose, but queerer than we can suppose.
- J.B.S. Haldane
+-


Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Rich

On 2009.03.13. 12:37, Frank Schönheit - Sun Microsystems Germany wrote:

Hi Rich,

summary - while release early, release often is very important, stable 
dev snapshots are as important.


Yes, but how to reach that? In theory, trunk is always stable, since
every CWS has undergone tests (before integration) which ensure that it
doesn't break anything. Okay, enough laughing.


haha. almost got me there :)


Finally, that's exactly the problem here: Too many serious bugs slip
Dev/QA's attention in the CWS, and are found on trunk only. If we fix
that, trunk will automatically be much more stable. Easy to say, hard to do.


when i wrote "serious problems" i meant something similar to a recent 
issue in a dev build where simply accessing tools-options crashed oo.org.


i believe that reducing the amount of such simple-to-find problems would 
be a notable step forward, and what's important - these tests should be 
relatively easy to automate.
i'll admit ignorance on current automated testing processes, but i mean 
stuff that would walk through all menu options (with some advanced custom 
paths), a repository of testdocs in various formats etc. etc.
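
to make that concrete, here is a rough, untested sketch of such a dumb 
"load every testdoc and see that nothing blows up" run, written against 
the public C++ UNO binding from the SDK. it is only an illustration - the 
document list, the paths and the use of cppu::bootstrap() are my own 
assumptions, this is not the project's real testtool:

// rough sketch, not project code: loads a fixed list of test documents
// into a hidden office instance and reports any document that fails.
#include <stdio.h>
#include <sal/types.h>
#include <rtl/ustring.hxx>
#include <cppuhelper/bootstrap.hxx>
#include <com/sun/star/uno/XComponentContext.hpp>
#include <com/sun/star/uno/Sequence.hxx>
#include <com/sun/star/beans/PropertyValue.hpp>
#include <com/sun/star/lang/XComponent.hpp>
#include <com/sun/star/lang/XMultiComponentFactory.hpp>
#include <com/sun/star/frame/XComponentLoader.hpp>

using namespace ::com::sun::star;

int main()
{
    // hypothetical repository of test documents in various formats
    static const char* pDocs[] = {
        "file:///qa/testdocs/sample.odt",
        "file:///qa/testdocs/sample.xls",
        "file:///qa/testdocs/sample.ppt"
    };

    // starts (or connects to) a soffice instance and returns its context
    uno::Reference< uno::XComponentContext > xContext( ::cppu::bootstrap() );

    // the Desktop service does the actual document loading
    uno::Reference< frame::XComponentLoader > xLoader(
        xContext->getServiceManager()->createInstanceWithContext(
            ::rtl::OUString::createFromAscii( "com.sun.star.frame.Desktop" ),
            xContext ),
        uno::UNO_QUERY_THROW );

    // load invisibly so the run can happen unattended
    uno::Sequence< beans::PropertyValue > aArgs( 1 );
    aArgs[0].Name = ::rtl::OUString::createFromAscii( "Hidden" );
    aArgs[0].Value <<= sal_True;

    int nFailed = 0;
    for ( unsigned int i = 0; i < sizeof( pDocs ) / sizeof( pDocs[0] ); ++i )
    {
        try
        {
            uno::Reference< lang::XComponent > xDoc(
                xLoader->loadComponentFromURL(
                    ::rtl::OUString::createFromAscii( pDocs[i] ),
                    ::rtl::OUString::createFromAscii( "_blank" ), 0, aArgs ) );
            if ( xDoc.is() )
                xDoc->dispose();        // loaded fine - close it again
            else
            {
                printf( "FAILED (no component): %s\n", pDocs[i] );
                ++nFailed;
            }
        }
        catch ( const uno::Exception& )
        {
            printf( "FAILED (exception): %s\n", pDocs[i] );
            ++nFailed;
        }
    }
    return nFailed == 0 ? 0 : 1;
}

a real version would of course read the document list from a shared 
repository and report into the usual test infrastructure; walking all the 
menus via the dispatch framework is the harder part that this sketch 
doesn't attempt.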


this, of course, correlates with the testing process discussions in this 
thread, which means more work creating tests - but i'm trying to highlight 
simple problems that somehow slip past this process. i mean, if a crash on 
opening the options dialog slipped past it, there aren't enough of these 
simple checks, which, i hope, would take less developer (or even qa) time 
to develop.


if test creation is too complex and there are ways to simplify it, that 
could be made a priority. such a change would improve long term quality 
and reduce workload, so it seems to be worth concentrating on.


obviously, that's just a small improvement, seen through my lens, 
which should be viewed in context with other things already mentioned.



Ciao
Frank

--
 Rich

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread ksp

Joachim Lingner wrote:
As a matter of fact, many issues are reported by the community, at 
least the critical ones which often promote to stoppers. IMO, we 
should therefore promote the QA community, so there will be more 
volunteers (who maybe also develop tests) and extend the time span 
between feature/code freeze and the actual release date.


Jochen


Promoting QA in the community is not enough - you have to retain people. In 
order to retain people, the project needs to fix their issues, which inspires 
people to use milestones in daily work.
Many developers do not know how great it feels when an issue you are 
interested in gets fixed relatively quickly. Developers also do not know 
that it sucks when the fix for your issue is delayed and delayed.
I think many users would rather have faster fixes than a more stable 
milestone (you can always go back to the previous release/milestone).


Just my two copecks.
WBR,
KP.

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hello KP,

 Promoting QA in community is not enough - you have to retain people. In 
 order to retain people project needs to fix their issues, which inspires 
 people to use milestones in daily work.
 Many of developers do not know how great it feels when issue you are 
 interested in gets fixed relatively quick. Developers also do not know 
 that it sucks when fix for your issue is delayed and delayed.

I'd claim a lot of developers, if not most, know how this feels - in
both ways. It's just that we have many more issues than developers, and
developers have only a pretty small amount of time for the kind of
fixing that retains people. In combination, this might look like
developers do not care about the issues reported by other people, but
it's just not as easy as that.

 I think many users would rather have faster fixes than more stable 
 milestone (you always can go to prev release/milestone).

Uhm, I doubt that. What you're saying here is that we should sacrifice
quality for more fixes. I believe this would be bad for OOo's overall
reputation.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mathias Bauer
Ingrid Halama wrote:

 Martin Hollmichel wrote:
 Mathias Bauer wrote:
 Ingrid Halama wrote:

  
 This is not sufficient. Heavy code restructurings and cleanups are 
 not bound to the feature freeze date, 
 Perhaps they should? And at least as far as it concerns me they are.
   
 yes, I also consider large amount or new, move or restructured code as 
 a feature and had erroneously the expectation that this is already 
 common sense. If all agree we should add this to the Feature Freeze 
 criteria (http://wiki.services.openoffice.org/wiki/Feature_freeze)
 Two problems here. The worst one is that you cannot control that this 
 new rule is applied. 
Well, of course you can control that - at least after the fact if that
breaks. :-)

Of course you can't enforce this rule, as is true for most rules I know.
Bank robbery is strictly forbidden, but people still do it. But be sure,
if it wasn't forbidden, many more people would do it. It's common sense
that having rules that work towards the goal is good, even if you can't
always enforce them.

 Who decides that a code change is too huge to risk 
 it for the next release in two months or so? You won't count lines, 
 don't you - that would be stupid. Those who are willing to act carefully 
 are doing that already I am convinced. And those who are not acting 
 carefully you cannot control really with this new rule. So introducing 
 this new rule will basically change nothing.

This is a matter of how teams work. In general I would give everybody
the credit of being able to judge whether his work imposes a huge risk
on the product or not. If a team member repeatedly showed him/herself as
unable to do that, the team should address that in a way the team sees fit.

The idea of being bound to (trapped in?) a set of rules that can and must
be enforced all the time is not my understanding of how we should work
together. Many problems can be solved or at least reduced by appealing
to the good forces in each of us, our skills, our will to do the right
thing and our pride in our work. Sometimes it needs some reminders,
and rules come in handy here. But you can never enforce all the rules you
make in a free community; perhaps you can do that in prison. The
community as a whole must take care of the value of rules, each member
by following them, and all together by reminding others and taking action
in cases where the rules have been violated.

 The second problem is that sometimes bad bugs even detected later in the 
 phase need bigger code changes. In my opinion only very experienced 
 developers are able to make serious estimations whether the fix is worth 
 the risk or not. So what to do now? Should we make a rule 'Let a very 
 experienced developer check your code'? Sometimes I wish that but I am 
 not sure that it would scale and - how would you organize and control 
 such a rule? We have a similar rule for the show stopper phase (let you 
 change be reviewed by a second developer), but even that rule is 
 violated often I am convinced.

You tell us that we don't live in a perfect world and that indeed some
rules don't apply always. Yes, true, but at least for me that isn't
really new. Of course we must do a risk assessment in some cases that -
due to other measures we have taken - hopefully will happen less often
in the future. And risk assessment itself is risky (developers like
recursions ;-)).

But in case we are unsure, we could move a bugfix that looks too risky
but isn't a showstopper to the next release. Instead of asking if
someone can judge whether a fix is too risky, let's put it the other way
around: something is too risky if you can't say with good confidence
that it's not. Does that need an experienced developer? I would expect
that it's easier for the less experienced developer to answer that question!

But anyway, we will fail here at times, sure. No problem for me, if the
number of failures drops to an acceptable level. We don't need a panacea
and I doubt that we will ever find one.

 So I would like to see mandatory automatic tests that detect whether the 
 important user scenarios still work properly, whether files are still 
 rendered as they should, whether the performance of the office has not 
 significantly decreased,  . We have a lot tests already even if 
 there is much room for improvement. In principle some of the tests are 
 mandatory already, but this rule gets ignored very often.
 The good thing is that a violation of this rule could be detected 
 relative easily and thus used as a criterion for nomination.

More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.

Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread ksp

Frank Schönheit - Sun Microsystems Germany wrote:
I think many users would rather have faster fixes than more stable 
milestone (you always can go to prev release/milestone).



Uhm, I doubt that. What you're saying here is that we should sacrifice
quality to more fixes. I believe this would be bad for OOo's overall
reputation.
  
What I mean to say is that we could sacrifice the quality of snapshots to 
bring in features faster and to motivate QA volunteers to test in real 
life (fast-paced development is yet another usage motivator). Besides, 
it is questionable what is worse for the reputation - having 2-3-4-5 year 
old usability defects or bugs, versus regressions.


WBR,
K. Palagin.

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Ingrid Halama

Mathias Bauer wrote:

Ingrid Halama wrote

Martin Hollmichel wrote:


Mathias Bauer wrote:
  

Ingrid Halama wrote:

 

This is not sufficient. Heavy code restructurings and cleanups are 
not bound to the feature freeze date, 
  

Perhaps they should? And at least as far as it concerns me they are.
  

yes, I also consider large amount or new, move or restructured code as 
a feature and had erroneously the expectation that this is already 
common sense. If all agree we should add this to the Feature Freeze 
criteria (http://wiki.services.openoffice.org/wiki/Feature_freeze)
  
Two problems here. The worst one is that you cannot control that this 
new rule is applied. 


Well, of course you can control that - at least after the fact if that
breaks. :-)

Of course you can't enforce this rule, as is true for most rules I know.
Bank robbery is strictly forbidden, but people still do it. But be sure,
if it wasn't forbidden, much more people would do it. It's common sense
that having rules that work towards the goal is good, even if you can't
enforce them always.

  
Who decides that a code change is too huge to risk 
it for the next release in two months or so? You won't count lines, 
don't you - that would be stupid. Those who are willing to act carefully 
are doing that already I am convinced. And those who are not acting 
carefully you cannot control really with this new rule. So introducing 
this new rule will basically change nothing.



This is a matter of how teams work. In general I would give everybody
the credit of being able to judge whether his work imposes a huge risk
on the product or not.

Doesn't the current situation show that this is absolutely not the case?

If a team member repeatedly showed him/herself as
unable to do that, the team should address that in a way the team sees fit.
  
Hm, I would prefer to give the team member a chance to avoid his 
repeated failures, and to allow and ask him to check his changes himself.

Oh yes - that could be done by automatic tests - how cool!

The idea of being bound to (trapped into?) a rules set that can and must
be enforeced all the time is not my understanding of how we should work
together. Many problems can be solved or at least reduced by appealing
to the good forces in each of us, our skills, our will to do the right
thing and our pride about our work. Sometimes it needs some reminders,
  
The 'careful forces' are not very strong at the moment. And I doubt that 
some nice reminders will bring a significant change in behavior here.
But no problem, if we significantly enlarge the stopper phase we can 
live with the current behavior also.



and rules come in handy here. But you never can't enforce all rules you
make in a free community, perhaps you can do that in prison. The
community as a whole must take care for the value of rules, each member
by following them and all together by reminding others and taking action
in cases where the rules have been violated.
  
What are the actions that are currently taken if someone has brought 
too much risk into the master?


  
The second problem is that sometimes bad bugs even detected later in the 
phase need bigger code changes. In my opinion only very experienced 
developers are able to make serious estimations whether the fix is worth 
the risk or not. So what to do now? Should we make a rule 'Let a very 
experienced developer check your code'? Sometimes I wish that but I am 
not sure that it would scale and - how would you organize and control 
such a rule? We have a similar rule for the show stopper phase (let you 
change be reviewed by a second developer), but even that rule is 
violated often I am convinced.



You tell us that we don't live in a perfect world and that indeed some
rules don't apply always. Yes, true, but at least for me that isn't
really new. Of course we must do a risk assessment in some cases that -
due to other measures we have taken - hopefully will happen less often
in the future. And risk assessment itself is risky (developers like
recursions ;-)).

But in case we are unsure, we could move a bugfix that looks too risky
but isn't a showstopper to the next release. Instead of asking if
  
So who is 'we' in this case? Is it the developer and tester who know 
their own module best? Or is it some board of extra-privileged people 
far away from the concrete bug?
If you believe in the good forces within all of us, then give all of us 
the freedom to decide whether fixes go in!
If you don't believe in the good forces, then there must be clear 
criteria for why and when fixes or features will not make it into the release.
Anything else has the smell of an arbitrary regime - or is the English term 
despotism? Sorry for not being a native speaker.

someone can judge whether a fix is too risky, let's put it the other way
around: something is too risky if you can't say with good confidence
that it's not. Does that need an experienced developer? 

Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer schrieb:


More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.


Yes, more testing on the master is welcome, that is true. But most testing
must be done on the CWS. When broken code is in the master code line, it
takes too much time to fix it. And then you cannot do quality assurance.
You can do testing, but that has nothing to do with holding a quality
standard!

The time to master isn't a problem currently, I think. A general bugfix 
CWS can be 'approved by QA' in 2 days. But when the master is broken,
you do not know whether the bug is in the CWS or in the master. It takes
longer to check the problems, and this is what we have now. A reduced time
to master will come when the general quality of the master and the CWSs
is better.

So more testing on CWS is also welcome!

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mechtilde
Hello,

Thorsten Ziehm schrieb:
 Hi Mathias,
 
 Mathias Bauer schrieb:
 
 More testing on the master(!) would be very welcome. But on the CWS?
 This will make the time to master even longer and then again we are in
 the vicious cycle I explained in my first posting in this thread.
 
 Yes more testing on Master is welcome, that is true. But most testing
 must be done on CWS. When broken code is in the master code line it
 take too much time to fix it. And then you cannot do Quality Assurance.
 You can make testing, but that has nothing to do with hold a Quality
 standard!
 
 The time to master isn't a problem currently, I think. A general bugfix
 CWS can be 'approved by QA' in 2 days. But when the master is broken,
 you do not know, is the bug in the CWS or in the master. It takes longer
 for checking the problems and this is what we have now. Reduce the time
 to master will come, when the general quality of the master and the CWSs
 is better.
 
 So more testing on CWS is also welcome!

Yes, full ACK to the last sentence.
And this is not only a task for the Sun people. The persons who are
interested in a CWS must be able to test it - even if they aren't able
to build OOo on their own.
This is nearly impossible for people outside of Sun.

The same applies to external CWSes.

If we should discuss this in more detail, we can do it in a separate thread.

Kind regards


Mechtilde

-- 
Dipl. Ing. Mechtilde Stehmann
## http://de.openoffice.org
## Ansprechpartnerin für die deutschsprachige QA
## Freie Office-Suite für Linux, Mac, Windows, Solaris
## Meine Seite http://www.mechtilde.de
## PGP encryption welcome! Key-ID: 0x53B3892B


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



[dev] Re: Simplify Reference Casts by template constructors

2009-03-13 Thread Michael Stahl

On 12/03/2009 13:36, Mathias Bauer wrote:

Rainman Lee wrote:


Hi Andrew
I know that implicit conversions usually bring more side effects than
convenience. But it is not the reason that we should give all them up
I think ;)
There is no implicit conversion from std::string to const char*,
because if a string is destroyed, the pointer to its content will be
invalid.


No, there indeed is no implicit conversion primarily for the reason
mentioned by Andrew (at least the inventor of this class told me so
many years ago): developers should not inadvertedly pass non-ascii
character strings to a UniCode string ctor. Creating a UniCode string
from a character string always needs an accompanying string encoding as
parameter.


if the inventor of OUString was indeed so conscientious, then i really 
have to wonder...


here's a little quiz (and don't ask me why i know this):
why does the following program (compiled without warning by sunCC and 
gcc4.2) do what it does:



#include <rtl/ustring.hxx>
#include <stdio.h>

int main()
{
    ::rtl::OUString foo( ::rtl::OUString::createFromAscii( "foo" ) );
    ::rtl::OUString bar( foo + sal_Unicode(1) + foo );
    printf( "result: %s\n",
        ::rtl::OUStringToOString( bar, RTL_TEXTENCODING_UTF8 ).getStr() );
}


 ./unxsoli4.pro/bin/test
result: oofoo


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Thorsten,

 The time to master isn't a problem currently, I think.

That's not remotely my experience.

See dba31g
(http://eis.services.openoffice.org/EIS2/cws.ShowCWS?Id=7708&OpenOnly=false&Section=History)
for a recent example of a CWS which needed 36 days from "ready for QA"
to "integrated" state (and add a few more days for the milestone to be
finished).

A few more?
dba31a: 26 days
dba31b: 42 days
dba31e: 26 days
dba31f: 13 days
dba31h: 23 days
mysql1: 17 days (and that one was really small)
rptfix04: 9 days (though this number is misleading for other reasons)

dba32a is currently in QA - for 9 days already (which admittedly is also
somewhat misleading, since a regression was fixed meanwhile without
resetting the CWS to new).

Okay, there were also:
fixdbares: 2 days
dba31i: 7 days


Don't get me wrong, that's not remotely QA's alone responsibility.
Especially the first OOO310 milestones had a lot of delay between CWSes
being approved and being integrated.

But: time-to-master *is* a problem. At least for the majority of CWSes
which I participated in, over the last months.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Mechtilde,


 So more testing on CWS is also welcome!
 
 Yes Full ACK to last sentence.
 And this is not only a task for the Sun people. The persons who are
 interested at a CWS must be able to test a CWS. And this also if they
 aren't able to build OOo on their own.

I think especially we in the DBA team have good experiences with
providing CWS snapshots at qa-upload. I am really glad that our
community (including you!) does intensive CWS QA, and this way also
found bugs pretty early.

However, as well as this worked out for us, I am unsure whether it
would scale. I suppose if *every* developer would upload his/her CWS,
then people would pick only a few, and the majority would still be
untested.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mathias Bauer
Thorsten Ziehm wrote:

 Hi Mathias,
 
 Mathias Bauer schrieb:
 
 More testing on the master(!) would be very welcome. But on the CWS?
 This will make the time to master even longer and then again we are in
 the vicious cycle I explained in my first posting in this thread.
 
 Yes more testing on Master is welcome, that is true. But most testing
 must be done on CWS. When broken code is in the master code line it
 take too much time to fix it. And then you cannot do Quality Assurance.
 You can make testing, but that has nothing to do with hold a Quality
 standard!

I don't see a lot of sense in making tests mandatory just because we
have them. If a test probably can help to find problems in areas where
we know that we have them, fine. So when tests are defined it's
necessary to see which problems they can catch and if that's what we need.

I had a look at the regressions that I can judge - some of them might
have been found with convwatch; for most of them I have serious doubts
that any test we have would have found them. It's still working with the
product that is necessary.

So until now I fail to see which tests could help us further without
burning a lot of time.

There's one exception. I'm a big fan of convwatch, and so I have often
asked for a *reliable* test environment that is easily configurable for
arbitrary documents. So I would still welcome it if that could be
accelerated. But even these tests shouldn't be mandatory for every CWS.

 The time to master isn't a problem currently, I think. A general bugfix 
 CWS can be 'approved by QA' in 2 days. 
But that is not the time to master. It can still take a week or so on
average until it's available in the master (even slower around feature
freeze, faster in the last phase of a release cycle). If we had tests
that let me believe they could find more regressions early, instead
of just holding CWSs back from approval and burning CPU time, I would
welcome them.

But before that we should at least be able to have these tests run on
several masters without failure. This is not true for many tests now (as
a quick glance on QUASTE shows). These may be bugs in the master (that
have to be fixed), bugs in the tests (that also have to be fixed) or -
and that would be the most unfortunate reason - something else.

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hello Kirill,

 Uhm, I doubt that. What you're saying here is that we should sacrifice
 quality to more fixes. I believe this would be bad for OOo's overall
 reputation.
   
 What I mean to say is that we could sacrifice quality of snapshots to 
 bring in features faster and to motivate QA volunteers to test in real 
 life (fast-paced development is yet another usage motivator). Besides, 
 it is questionable what is worse for reputation - having 2-3-4-5 y/old 
 usability defects or bugs versus regressions.

But bringing CWSes into the master faster would not increase the
developers' output. In other words, we would not be able to fix even one
more bug by that. On the contrary, I would assume that if we reduce the
snapshot quality in the way you propose, then developers would be
able to fix *fewer* issues, since they would need to do more regression
fixing, which gets more expensive the later in the release cycle it
happens.

Ciao
Frank


-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Regina Henschel

Hi all,

Thorsten Ziehm schrieb:

Hi Mathias,

Mathias Bauer schrieb:


More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.


Yes more testing on Master is welcome, that is true. But most testing
must be done on CWS. When broken code is in the master code line it
take too much time to fix it. And then you cannot do Quality Assurance.
You can make testing, but that has nothing to do with hold a Quality
standard!

The time to master isn't a problem currently, I think. A general bugfix 
CWS can be 'approved by QA' in 2 days. But when the master is broken,

you do not know, is the bug in the CWS or in the master. It takes longer
for checking the problems and this is what we have now. Reduce the time
to master will come, when the general quality of the master and the CWSs
is better.

So more testing on CWS is also welcome!



I second the idea of more CWS testing. Remember the new chart module: 
there we had a CWS for testing, and a lot of bugs were found before the 
CWS was integrated into the master. There is no need to spread CWS builds 
via the mirror network; a single server which holds the builds for Windows, 
Linux and Mac is sufficient. You can free the space after the CWS is 
integrated.


For most of the testing people outside, it is impossible to build CWSs 
on their own, but I am sure many of them would test a CWS. They will 
not test all of them, but those which concern the area of OOo they 
often use, or the CWS which contains a fix for their feature wish. In 
addition, to help people decide whether they want to test a CWS, it 
is necessary to give a good, not too short description of what the 
purpose of the CWS is.


kind regards
Regina

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Mathias,

 I don't see a lot of sense in making tests mandatory just because we
 have them. If a test probably can help to find problems in areas where
 we know that we have them, fine. So when tests are defined it's
 necessary to see which problems they can catch and if that's what we need.
 
 I had a look on the regressions that I can judge - some of them might
 have been found with convwatch, for most of them I have serious doubts
 that any test we have would have found them. It's still working with the
 product that is necessary.
 
 So until now I fail to see which tests could help us further without
 burning a lot of time.

Quite true ...

A personal goal I set for the 3.1 release was to write complex test
cases for (most of) the show stoppers found in the DBA project. Since we
regularly run our complex tests on all CWSes, at least those concrete
stoppers would not have much chance to re-appear. (And as is usually
the case with complex test cases, they also cover related areas pretty
well.)
Unfortunately, we didn't have too many 3.1 stoppers so far :), so I am
not sure whether it will help. But it's worth a try, /me thinks.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mechtilde
Hello Frank,

Frank Schönheit - Sun Microsystems Germany schrieb:
 Hi Mechtilde,
 
 
 So more testing on CWS is also welcome!
 Yes Full ACK to last sentence.
 And this is not only a task for the Sun people. The persons who are
 interested at a CWS must be able to test a CWS. And this also if they
 aren't able to build OOo on their own.
 

 
 However, as good as this worked out for us, I am unsure whether this
 would scale. I suppose if *every* developer would upload his/her CWS,
 then people would pick a few only, and the majority would be still be
 untested.

I don't think that the developers have to upload each CWS build. I would
prefer that potential testers are able to pick up the CWS builds they
want, beside the normal test scenario.

I don't want to inflate this thread with a discussion about the
buildbots. ;-)

Regards

Mechtilde


-- 
Dipl. Ing. Mechtilde Stehmann
## http://de.openoffice.org
## Ansprechpartnerin für die deutschsprachige QA
## Freie Office-Suite für Linux, Mac, Windows, Solaris
## Meine Seite http://www.mechtilde.de
## PGP encryption welcome! Key-ID: 0x53B3892B


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Frank Schönheit - Sun Microsystems Germany
Hi Mechtilde,

 I don't think that the developer have to upload each CWS build. I prefer
 that the possible tester are able to pick up the CWS builds they want
 beside the normal test scenario.

Ah, you're right, that would be most helpful ...

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mathias Bauer
Hi Ingrid,

please calm down, no reason to become upset.

Ingrid Halama wrote:

 This is a matter of how teams work. In general I would give everybody
 the credit of being able to judge whether his work imposes a huge risk
 on the product or not.
 Doesn't the current situation show that this is absolutely not the case?

No, this is just a situation that tells us that something goes wrong,
but it does not mean that everybody in the team is throwing garbage into
the master. So I would like to address this "something", but not
exaggerate and put everybody under general suspicion.

 If a team member repeatedly showed him/herself as
 unable to do that, the team should address that in a way the team sees fit.
   
 Hm, I would prefer to give the team member a chance to avoid his 
 repeated failures and to allow and to ask him to check his changes himself.

Of course! But that's something different from urging all developers to
do a fixed number of mandatory tests on each and every CWS. There should
be a QA person for every CWS. I would consider it enough to leave it up
to the QA and development members of each CWS to find out whether
additional testing (and which tests) can help or not. It seems that
currently this can be improved ;-), so let's work on that, let's
strengthen the good forces. But a general rule for all CWSs - sorry,
that's too much.

There is something that I heard from everybody who knows something about
quality (not only software): the best way to achieve better product
quality is to make the people who create the product better. Replacing
the necessary learning process by dictating to people what they have to
do will not suffice. Give people the tools they need, but don't
force them to use the only tools you have.

 The idea of being bound to (trapped into?) a rules set that can and must
 be enforeced all the time is not my understanding of how we should work
 together. Many problems can be solved or at least reduced by appealing
 to the good forces in each of us, our skills, our will to do the right
 thing and our pride about our work. Sometimes it needs some reminders,
   
 The 'careful forces' are not very strong at the moment. And I doubt that 
 some nice reminders will bring a significant change in behavior here.
 But no problem, if we significantly enlarge the stopper phase we can 
 live with the current behavior also.

I don't want to share your miserable picture of your colleagues. So I
won't answer your cynical remarks about them.

 and rules come in handy here. But you never can't enforce all rules you
 make in a free community, perhaps you can do that in prison. The
 community as a whole must take care for the value of rules, each member
 by following them and all together by reminding others and taking action
 in cases where the rules have been violated.
   
 What are those actions that are taken currently if someone has brought 
 to much risk to the master?

This is an individual consideration that each responsible person needs
to find out for her/himself. I hope that you don't want to discuss how
to deal with other people's performance in public.

 But in case we are unsure, we could move a bugfix that looks too risky
 but isn't a showstopper to the next release. Instead of asking if
   
 So who is 'we' in this case? Is it the developer and tester who know 
 their own module best? Or is it some board of extra privileged people 
 far away from the concrete bug?
 If you believe in the good forces within all of us, then give all of us 
 the freedom to decide whether fixes go in!

I'm not sure if I understand, but probably you are mixing things. What
goes into a release first depends on priority/severity/importance,
something that usually is not decided on by a single person. Once
something is nominated to get into the release, I still see the
reservation that the code change is too much effort or seems to be too
risky at the time of integration. This indeed is something that the
developer should judge, either with consulting others or not. The final
decision always has to take more into account than just the risk. But
judging the risk indeed should be the task of the developer. And if the
developer can't confirm that it's not a huge risk, (s)he better should
assume it is.

 If you don't believe in the good forces then their must be clear 
 criteria why and when fixes or features will not make it into the release.
 Anything else has the smell of arbitrary regime - or is the english term 
 despotism? Sorry for not being a native speaker.

I think you are on the wrong track. I can't make any sense of that.
Perhaps a consequence of the mixture I tried to sort out in the last
paragraph?

 More testing on the master(!) would be very welcome. But on the CWS?
 This will make the time to master even longer and then again we are in
 the vicious cycle I explained in my first posting in this thread.
   
 Maybe we could try that at least? At the moment we are in the vicious 
 cycle of  another 

Re: [dev] Re: Simplify Reference Casts by template constructors

2009-03-13 Thread Mathias Bauer
Michael Stahl wrote:

 On 12/03/2009 13:36, Mathias Bauer wrote:
 Rainman Lee wrote:
 
 Hi Andrew
 I know that implicit conversions usually bring more side effects than
 convenience. But it is not the reason that we should give all them up
 I think ;)
 There is no implicit conversion from std::string to const char*,
 because if a string is destroyed, the pointer to its content will be
 invalid.
 
 No, there indeed is no implicit conversion primarily for the reason
 mentioned by Andrew (at least the inventor of this class told me so
 many years ago): developers should not inadvertedly pass non-ascii
 character strings to a UniCode string ctor. Creating a UniCode string
 from a character string always needs an accompanying string encoding as
 parameter.
 
 if the inventor of OUString was indeed so conscientious, then i really 
 have to wonder...

No reason to wonder. Intending to do something does not necessarily mean
that you succeed. :-)

It seems that it wasn't his intention to prevent all implicit
conversions (there still are three of them, one using a single
sal_Unicode, as you showed); he just wanted to control how character
string constants are treated.
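
Just to illustrate for readers who don't know the class: when you really
do start from 8-bit character data, the API asks you to state your
assumption explicitly. A small sketch (the concrete strings and the
Latin-1 example are made up for illustration, this is not code from the
repository):

#include <rtl/ustring.hxx>
#include <rtl/textenc.h>
#include <stdio.h>

int main()
{
    // plain ASCII literal: the ASCII assumption is spelled out in the call
    ::rtl::OUString aAscii( ::rtl::OUString::createFromAscii( "foo" ) );

    // ASCII literal via the usual macro, which expands to the
    // (literal, length, RTL_TEXTENCODING_ASCII_US) ctor arguments
    ::rtl::OUString aMacro( RTL_CONSTASCII_USTRINGPARAM( "foo" ) );

    // arbitrary 8-bit data: the encoding has to be named explicitly
    const sal_Char pLatin1[] = "f\xF6\xF6";   // "föö" encoded as ISO-8859-1
    ::rtl::OUString aLatin1( pLatin1, sizeof( pLatin1 ) - 1,
                             RTL_TEXTENCODING_ISO_8859_1 );

    printf( "%s %s %s\n",
        ::rtl::OUStringToOString( aAscii,  RTL_TEXTENCODING_UTF8 ).getStr(),
        ::rtl::OUStringToOString( aMacro,  RTL_TEXTENCODING_UTF8 ).getStr(),
        ::rtl::OUStringToOString( aLatin1, RTL_TEXTENCODING_UTF8 ).getStr() );
    return 0;
}

The few remaining implicit conversions are presumably exactly what lets
something like Michael's quiz program compile without stating any such
intent.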

Regards,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Mathias Bauer
Regina Henschel wrote:

 Hi all,
 
 Thorsten Ziehm schrieb:
 Hi Mathias,
 
 Mathias Bauer schrieb:
 
 More testing on the master(!) would be very welcome. But on the CWS?
 This will make the time to master even longer and then again we are in
 the vicious cycle I explained in my first posting in this thread.
 
 Yes more testing on Master is welcome, that is true. But most testing
 must be done on CWS. When broken code is in the master code line it
 take too much time to fix it. And then you cannot do Quality Assurance.
 You can make testing, but that has nothing to do with hold a Quality
 standard!
 
 The time to master isn't a problem currently, I think. A general bugfix 
 CWS can be 'approved by QA' in 2 days. But when the master is broken,
 you do not know, is the bug in the CWS or in the master. It takes longer
 for checking the problems and this is what we have now. Reduce the time
 to master will come, when the general quality of the master and the CWSs
 is better.
 
 So more testing on CWS is also welcome!
 
 
 I second the idea of more CWS testing. Remember the new chart module. 
 There we had a CWS for testing and a lot of bugs were found before the 
 CWS was integrated in master. There is no need to spread CWS builds with 
 the mirrors net, but a single server witch holds the builds for Windows, 
 Linux and Mac is sufficient. You can free the space after the CWS is 
 integrated.

The problem is that only a few CWSs get so much interest that people jump
on the CWS testing. My experience with providing CWS builds and asking
interested users for testing is rather that nobody did it. And even if
people tested them, they usually didn't use the CWS for serious work;
they just had a look at the new features. Serious work is still the best
regression testing.

It's always a good idea to ask users to test a CWS. But if you don't
get any help, it's often better to integrate the CWS early, as this
increases the probability that people find bugs in passing. Having CWSs
lying around for weeks and months (as happened especially with the huge
ones in the past) is definitely bad.

We always ask developers to have huge or risky changes ready early in
the release cycle, so that they can be seen by users. Real-life testing
done by human beings is the best way to iron out the remaining
hard-to-find quirks. But unfortunately, "early in the release cycle"
quite often means that CWSs get in conflict with those of the micro
release, and usually the latter get higher priority, both in QA and
integration. Perhaps this should be changed?

Ciao,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org