[dev] Re: [qa-dev] CWS printerpullpages now ready for QA

2009-10-27 Thread Thorsten Ziehm

Hi Phillip,

thanks for sending this complex feature again to the QA and other
projects for testing and playing ;-). It's the best way to get such an
important new feature well tested before integration.

I have one question about the next steps. As you wrote, the CWS is in
state 'ready for QA' now. Is this an iteration before the known issues
are fixed in the CWS - i.e. will the state change to 'new' again? Or is it
planned to integrate the feature with these known issues and fix them
later (as long as nobody declares one of the issues a stopper)?

Thorsten


On 10/26/09 14:14, Philipp Lohmann wrote:

Hi,

CWS printerpullpages is now in state ready for QA. Since there are a 
lot of tasks to verify in this CWS the QA-Rep would appreciate any help 
he can get in verifying this CWS.


Of course anybody else is invited to play with this CWS build, too.
The following tasks are known and will be fixed after
printerpullpages has been integrated; some of them are string changes,
some concern UI details the UX discussion brought no final results on,
and a few small known issues are in there, too.


ID Summary
104528 [cws printerpullpages] printing from page preview is confusi
105299 [CWS printerpullpages] printing does not use paper tray sett
106196 [CWS printerpullpages] tooltips on dialog page Impress hav
104312 CWS printerpullpages: Do not print graphics and diagrams inc
104784 printerpullpages: preview of HTML source view renders differ
105055 [CWS Printerpullpages] problem with tiled printing
105067 [CWS printerpullpages] For small pages OOo does not recalcul
105434 [CWS printerpullpages] unwanted scaling when printing 2 DIN
105727 printerpullpages Disable Note options as long as no notes ar
105728 printerpullpages - Disable the Selection button as long as n
106192 printerpullpages - Unnecessary error alert by canceling prin
104934 [CWS printerpullpages] Disable Size-settings for N-Up printi
105730 printerpullpages - Selektion won't recognized

For those interested, there are current install sets for Windows,
Linux (Intel), Solaris (Sparc) and Mac (Intel) at


ftp://qa-upload.services.openoffice.org/printerpullpages

Kind regards, pl



-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] Re: [qa-dev] Looking for QA-Rep for cws cloph13 - create backwards compatible builds for Mac by using the developer-sdk during compilation

2009-09-10 Thread Thorsten Ziehm

Hi Christian,

what is the state of your CWS? Today I saw it in a query on EIS and did
not see any progress documented in the CWS. Do you think it will be
ready soon to be integrated in OOo 3.2? Release Engineering has planned
only one more build before the branch - m58 is in work now and m59
should be the version for the branch. So the CWS must be ready by
this weekend, I think.

Thorsten


Christian Lohmaier wrote:

Hi Aaron, Thorsten, *,

thanks for your kind offers :-) - see below

On Wed, Sep 2, 2009 at 7:54 PM, Thorsten Behrens t...@openoffice.org wrote:

Christian Lohmaier wrote:

(i.e. if you have Mac OS X 10.4 and can check the installset, or if
you can verify the code changes by themselves, ...)

can test-drive the installset and do the code review; likely no time
for a proper testtool or TCM QA though.


Great - no worries, tinderbox did just finish building the Intel-installset:

http://termite.go-oo.org/install_sets/MacIntel-3589-cloph13-install_set.zip

Please give it a spin :-)

As for code review: Feel free to directly comment in the corresponding
issue or in EIS or ping me on IRC, write a mail (you get the idea :-))

Unfortunately I have no PPC build machine that runs Leopard, only Tiger.
While it can be used to check that no regressions are introduced when
building with Tiger, it isn't quite what the issue is about.
So again the plea for help:
@all: Got a PPC with Mac OS X 10.5? Can build OOo? Great - please give
cws cloph13 a shot!
(even if you cannot perform tests yourself, having a 10.5-built
install set for PPC would help a lot)

Nevertheless I'll build one on 10.4 and upload it tomorrow/later today.

ciao
Christian




-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



[dev] Re: [qa-dev] Re: [mac] Re: [dev] Re: [qa-dev] Looking for QA-Rep for cws cloph13 - create backwards compatible builds for Mac by using the developer-sdk during compilation

2009-09-10 Thread Thorsten Ziehm

Hi Christian,

if you think everything is done for this CWS, it can be set to status
'approved by QA'.

To follow the process correctly, please do the following:
- add Thorsten Behrens as QA representative
- add the information about bot results, code review etc. to the CWS
  description
- set all issues to a correct target (perhaps 3.2, or all to dev-tools)
- set all issues to status 'verified' and note that the changes were
  checked by another person
- have Thorsten change the CWS status to 'approved by QA' (I do not
  know if you can do this yourself)

If you need more feedback to be safe, you don't have much time ;-)

Regards,
  Thorsten


Christian Lohmaier wrote:

Hi Thorsten, *,

On Thu, Sep 10, 2009 at 11:49 AM, Thorsten Ziehm thorsten.zi...@sun.com wrote:

what is the state of your CWS? Today I saw it in a query on EIS and did
not see any progress documented in the CWS. Do you think it will be
ready soon to be integrated in OOo 3.2?


From my side it is ready - but unfortunately only Thorsten provided
feedback (all OK so far).


Release Engineering has planned
only one more build before the branch - m58 is in work now and m59
should be the version for the branch. So the CWS must be ready by
this weekend, I think.


As nobody reported any problems, and the CWS doesn't touch any
actual code, only the makefiles, I think it is OK to integrate.

I myself didn't find any problems either (well, that doesn't count for much, I guess :-))...

ciao
Christian

PS: The build breakers of the Ubuntu and Solaris bots in connectivity
are unrelated to the CWS - apparently they're using the wrong mozilla zips
(the ones needed for m53 and newer, but the CWS is based on m52, so
you'd need the old zips):
/MNSInclude.hxx:77:35: error: nsIAbLDAPAttributeMap.h: No such file or directory

-
To unsubscribe, e-mail: dev-unsubscr...@qa.openoffice.org
For additional commands, e-mail: dev-h...@qa.openoffice.org



-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] Mercurial Pilot Feedback, Results

2009-07-31 Thread Thorsten Ziehm

Hi Rene,

Rene Engelhard wrote:

Jens-Heiner Rechtien wrote:

Conclusion:
===
The purpose of the pilot was to find out if there are any important
aspects which render Mercurial unusable as SCM for OOo. We found that
there are none. This doesn't mean that Mercurial couldn't use some


Not difficult if you ignore problems ;-(


I don't think that Heiner ignored problem reports. He was very open
to comments on this pilot and did a lot to evaluate problems. As you
can read, he listed positive as well as negative feedback. Your
feedback is also listed, with the conclusion he took from the discussion.

When I look at your style of communication at that time, I can
understand if somebody isn't willing to communicate with you at all
and isn't willing to read all your comments. But I don't think that
Heiner did this.

So please be constructive with your feedback and be friendly.

Thanks,
  Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



[dev] Re: [qa-dev] Proposal : Recheck of verified issues isn't mandatory anymore

2009-07-21 Thread Thorsten Ziehm

Hi,

the proposal was approved yesterday in the Release Status Meeting.
http://wiki.services.openoffice.org/wiki/ReleaseStatus_Minutes#2009-07-20

After the meeting I started the queries and closed 916 fixed/verified
issues.

-  12 issues without a target ('-')
-  15 issues with target 'DevTools'
-  11 issues with target 'milestone1', 'next build' or 'not determine'
-   2 issues with target 'OOo 2.2'
-   1 issue with target 'OOo 2.2.1'
- 141 issues with target 'OOo 2.3'
-  16 issues with target 'OOo 2.3.1'
- 217 issues with target 'OOo 2.4'
-  10 issues with target 'OOo 2.4.x'
-  10 issues with target 'OOo 2.x'
- 438 issues with target 'OOo 3.0'
-  43 issues with target 'OOo 3.0.1'

By accident I closed 83 issues wrongly. I fixed this error, with the
result that the owners of those issues got 3 additional mails. Now the
issues should be in status 'fixed/verified' again. Sorry for that.

All issues with a target equal to or higher than release OOo 3.1 are still
open for verification (873 issues).

- 260 issues with target 'OOo 3.1'
-  60 issues with target 'OOo 3.1.1' (not released yet)
- 553 issues with target 'OOo 3.2'   (not released yet)

There are still 118 'fixed/verified' issues open with other targets. In
the next days I will try to identify whether they are integrated in OOo or
not. Perhaps some of the following issues can/will be closed too.

- 26 issues without any target ('-')
- 35 issues with target 'DevTools'
- 33 issues with target 'milestone1', 'next build' or 'not determine'
- 21 issues with target 'OOo 3.x'
-  3 issues with target 'OfficeLater'

After one night only one issue has been reopened:
http://qa.openoffice.org/issues/show_bug.cgi?id=87538
All others are still closed!

Now I will try to update the documentation on how to handle fixed
and verified issues and link it to the wiki. This will take some days.

Regards,
  Thorsten


Thorsten Ziehm wrote:

Hi QA community, (cc'd dev@openoffice.org)

in nearly all past discussions (threads) on the QA list I read about the
annoying job of closing all fixed/verified issues. I collected some feedback
from community members and worked with Joerg Jahnke on a proposal that a
recheck of verified issues is no longer mandatory.

Read the whole proposal :
http://wiki.services.openoffice.org/wiki/Handle_fixed_verified_issues

If there are bigger disadvantages, let's discuss them on the QA mailing list
d...@qa.openoffice.org.

I put this topic/proposal on the agenda for the next release status
meetings (each Monday on IRC in #oooreleases at 15:00 CET). I hope we can
decide on this proposal quickly.

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@qa.openoffice.org
For additional commands, e-mail: dev-h...@qa.openoffice.org



-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



[dev] Re: [l10n-dev] Localisation moved into own module

2009-06-22 Thread Thorsten Ziehm

Hi Eike,

Eike Rathke wrote:

Hi Ivo,

I guess that for building a language pack the OOo source tree would not
be needed anymore, except maybe a few modules - is that still a wish for the
far future?


Ivo and others are working on it to realize it in the near future ;-)
http://eis.services.openoffice.org/EIS2/cws.ShowCWS?Path=DEV300%2Fl10nframework01

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] Re: [l10n-dev] Localisation moved into own module

2009-06-22 Thread Thorsten Ziehm

Hi Rene,

the goal is to build and deliver language packs without any
dependencies. This CWS is the first step; there are still some steps
open. But as I heard from Ivo, when this CWS is ready you can
build the localization without any obj's / libs / whatever. But the
source code is still needed in this step.

So there will be a benefit. ;-)

Thorsten


Rene Engelhard wrote:

Hi,

Thorsten Ziehm wrote:

Ivo and others are working on it to realize it in the near future ;-)
http://eis.services.openoffice.org/EIS2/cws.ShowCWS?Path=DEV300%2Fl10nframework01


How is that helpful?

# build the l10n tools
cd transex3 && build --all && deliver
cd xmlhelp && build --all && deliver
cd rsc && build --all && deliver

OK. Obvious.

# build only l10n targets, skip HID generation
cd instsetoo_native && build --all L10N_framework=1 NO_HIDS=1

Why --all? Just for completeness and because you need some stuff for packaging?
Or do we really need *all* the other modules? That effectively means needing
the whole source tree anyway.

What we need to have, and what Eike's question was about, is to just need:

cd l10n && build --all && deliver
do some packaging steps

If that works out, fine. If not, that issue isn't what we are talking about.
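For comparison, the two variants side by side (just a sketch; the standalone
'l10n' module in the second variant is hypothetical and does not exist today):

# documented today: build the l10n tooling, then a full --all run in instsetoo_native
cd transex3 && build --all && deliver
cd instsetoo_native && build --all L10N_framework=1 NO_HIDS=1

# what Eike asked about: one self-contained module plus some packaging steps
cd l10n && build --all && deliver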

Regards,

Rene




-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-18 Thread Thorsten Ziehm

Hi Rich,

Rich wrote:

On 2009.03.17. 19:56, Mathias Bauer wrote:

Hi,

Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems wrote:


Hi all,

Thorsten Ziehm wrote:

...
IMHO, we do not find critical problems (show stoppers) in DEV builds 
very early, only half of them are found early according to my 
experience.

Some data about the show stoppers, which I have fixed in the last days:


Thinking about the time when bugs are found, I got reminded about one issue -
actually getting the dev/testing builds.
I'll admit ignorance about how these things are handled, but that might
also help to understand how many possible testers would see it.
From my point of view, dev snapshots aren't always available fast
enough, and quite often in a dev series only every other build is
available to the public. Of course, this means less testing, and testing
later in the development process.


For example, currently OOO310_m6 is supposedly available, but the public
download is m5.


The milestones are uploaded as soon as possible. When a milestone
is ready for use, the upload is started. The amount of uploaded files
and the delivery time to the mirror network take more than 24 hours.
So the current situation (builds for different platforms in different
languages) needs that much time.

I do not know how this can be sped up for all OOo users. Do you have
any idea?

Thorsten

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Ziehm

Hi Frank,

if a CWS is needed and is important, it is possible to have it approved in 2 days:
1 night for automated testing and 1 day for checking the fixes. That
this isn't possible anymore when 30-40 CWSs are ready for QA on one day
is correct!

So your experience is correct.

Thorsten


Frank Schönheit - Sun Microsystems Germany wrote:

Hi Thorsten,


The time to master isn't a problem currently, I think.


That's not remotely my experience.

See dba31g
(http://eis.services.openoffice.org/EIS2/cws.ShowCWS?Id=7708&OpenOnly=false&Section=History)
for a recent example of a CWS which needed 36 days from ready for QA
to integrated state (and add a few more days for the milestone to be
finished).

A few more?
dba31a: 26 days
dba31b: 42 days
dba31e: 26 days
dba31f: 13 days
dba31h: 23 days
mysql1: 17 days (and that one was really small)
rptfix04: 9 days (though this number is misleading for other reasons)

dba32a is currently in QA - for 9 days already (which admittedly is also
somewhat misleading, since a regression was fixed meanwhile without
resetting the CWS to new).

Okay, there were also:
fixdbares: 2 days
dba31i: 7 days


Don't get me wrong, that's not remotely QA's responsibility alone.
Especially the first OOO310 milestones had a lot of delay between CWSes
being approved and being integrated.

But: time-to-master *is* a problem. At least for the majority of CWSes
which I participated in, over the last months.

Ciao
Frank



-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello,

Thorsten Ziehm wrote:

Hi Mathias,

Mathias Bauer wrote:


More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.

Yes, more testing on the master is welcome, that is true. But most testing
must be done on the CWS. When broken code is in the master code line it
takes too much time to fix it. And then you cannot do quality assurance.
You can do testing, but that has nothing to do with holding a quality
standard!

The time to master isn't a problem currently, I think. A general bugfix
CWS can be 'approved by QA' in 2 days. But when the master is broken,
you do not know whether the bug is in the CWS or in the master. It takes longer
to check the problems, and this is what we have now. Reducing the time
to master will come when the general quality of the master and the CWSs
is better.

So more testing on CWS is also welcome!


Yes, full ACK to the last sentence.
And this is not only a task for the Sun people. The persons who are
interested in a CWS must be able to test it - even if they
aren't able to build OOo on their own.


As we discussed in many other threads: where is the problem with testing
a CWS from a BuildBot? You can check fixes, you can test functionality
etc. You are perhaps right that automated testing with VCLTestTool
doesn't always show the same results. But for this the QA team at Sun
is working on TestBots, so it isn't necessary to adjust the test scripts
for every test environment we will find in the community.
What more do you want?

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello Thorsten, *,


[...]


Then it is not possible for the community to do automated tests for CWSes
which come from the community, because nobody can evaluate the results in
a reasonable time.
For example: in QUASTe I can see that OOO310_m5 under Linux shows 14
errors and 27 warnings, and the tests aren't finished yet.

So independently of how many errors I find in a build from the buildbot, I have
to take much time to evaluate these errors. So I can't test any
CWS build, because then I don't know if the errors come from the CWS
build or from the system around it.


Since last week, checking the results with QUASTe doesn't take so much time
anymore. QUASTe has a new feature: you can check on one
page the differences between master and CWS. On this page only the
differences are shown - this is different from the past.

I should ask Helge to promote this feature in a blog or so.

Thorsten

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-16 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello Thorsten, *

Thorsten Ziehm wrote:

Hi Mechtilde,




For example: in QUASTe I can see that OOO310_m5 under Linux shows 14
errors and 27 warnings, and the tests aren't finished yet.

So independently of how many errors I find in a build from the buildbot, I have
to take much time to evaluate these errors. So I can't test any
CWS build, because then I don't know if the errors come from the CWS
build or from the system around it.

Since last week, checking the results with QUASTe doesn't take so much time
anymore. QUASTe has a new feature: you can check on one
page the differences between master and CWS. On this page only the
differences are shown - this is different from the past.


And this only works if Master *and* CWS use the same environment.


The comparison shows you the differences between the test run in a Sun
environment and your test run. So it can also be used to check
whether your test environment shows the same test results.


Or do I have the possibility to check in the results of a master build
from a buildbot as well as the results of a CWS build from a buildbot?


If you want to participate in this project, we can also integrate such
a mechanism. It isn't possible to get a master test run on all BuildBots.
If you want to do this, QUASTe can be extended to handle such
information. It will then also be possible for those platforms and
environments which aren't supported by Sun (QA), e.g. 64-bit Linux or
Deb packages or ...


Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer wrote:

Ingrid Halama wrote:

This is not sufficient. Heavy code restructurings and cleanups are not 
bound to the feature freeze date, 

Perhaps they should be? And at least as far as it concerns me, they are.

but have a great potential to 
introduce regressions also. I think the show-stopper phase must be 
extended in relation to the feature-phase *and* the normal-bug-fixing-phase.


Furthermore, what does it help to simply let different people do the
nominations while the criteria are not clear? So I would like to suggest
a criterion: in the last four weeks before the feature freeze, only those
(but all those) CWSes get nominated that have a complete set of
required tests run successfully. The same for the last four weeks before the end
of the normal-bug-fixing phase. We could start with the tests that are there
already and develop them further.


The problem is that the usual test runs obviously don't find the bugs
that now bite us, most of them have been found by users or testers
*working* with the program. Adding more CWS test runs and so shortening
the time for real-life testing will not help us but make things worse.


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in the master, they do not have
time to check code changes in a CWS as much as they should, and in the
end you are right: they do not have the time for real-life testing.

But on the last point I want to qualify things a little bit. The QA
community and the L10N testers find critical problems in DEV builds very
early. Most of the regressions which were reported in the past days on
the releases list are regressions from very recent builds. Some of the
issues weren't identified very early by Sun employees, because they have
to look at a lot of issues these days to identify the show stoppers.

So the QA project has a big problem with the mass of integrations:
they cannot check every new functionality on a regular basis,
they do not find the time to write the corresponding test cases
for VCLTestTool, and they do not find the time to check whether the
functionality is correctly integrated in the master build.


I think we need to

- stop with larger code changes (not only features) much earlier before
the release. We should not plan for finishing the work right before the
feature freeze date, if something that is not critical for the release
is at risk we better move it to the next release *early* (instead of
desperately trying to keep the schedule) to free time and space for
other things that are considered as critical or very important for the
release.


+1


- make sure that all CWS, especially the bigger ones, get integrated as
fast as possible to allow for more real-life testing. This includes that
no such CWS should lie around for weeks because there is still so much
time to test it as the feature freeze is still 2 months away. This will
require reliable arrangements between development and QA.


+1


- reduce the number of bug fixes we put into the micro releases to free
QA resources to get the CWS with larger changes worked on when
development finished the work. This self-limitation will need a lot of
discipline of everybody involved (including myself, I know ;-)).


+1


Ah, and whatever we do, we should write down why we are doing it, so
that we can present it to everybody who blames us for moving his/her
favorite feature to the next release. ;-)


+1

Regards,
  Thorsten

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Oliver,

thanks for the data.

Oliver-Rainer Wittmann - Software Engineer - Sun Microsystems wrote:

IMHO, we do not find critical problems (show stoppers) in DEV builds 
very early, only half of them are found early according to my experience.

Some data about the show stoppers, which I have fixed in the last days:

ISSUE     INTRODUCED IN             FOUND IN
i99822    DEV300m2 (2008-03-12)     OOO310m3 (2009-02-26)
i99876    DEV300m30 (2008-08-25)    OOO310m3
i99665    DEV300m39 (2009-01-16)    OOO310m3
i100043   OOO310m1                  OOO310m4 (2009-03-04)
i100014   OOO310m2                  OOO310m4
i100132   DEV300m38 (2008-12-22)    OOO310m4
i100035   SRCm248 (2008-02-21)      OOO310m4
This issue is special because it was a memory problem that by accident
was not detected. Thus, it should not be counted in this statistic.


Looking at this concrete data, I personally can say that we find more or 
less half of the show stoppers early.


Half of them is in my opinion a good rate. But this doesn't mean that
we do not have to improve it. And one point is that features have to be
checked more often in the master - perhaps with automated testing, with
regular manual testing or with real-life testing. But this costs resources,
and that is the critical point. Most of the QA community is also part of
the L10N community. This means they are working on translation when OOo
runs into a critical phase like code freeze, where most real-life testing
is needed.

So it isn't easy to fix. Therefore I think 50% is a good rate under the
known circumstances.

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Ingrid,

Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether the
important user scenarios still work properly, whether files are still
rendered as they should, whether the performance of the office has not
significantly decreased, ... We have a lot of tests already, even if
there is much room for improvement. In principle some of the tests are
mandatory already, but this rule gets ignored very often.


What do you mean? There are mandatory tests, and each tester in the Sun QA
team runs these tests on a CWS. You can check in QUASTe [1] whether your
CWS was tested with VCLTestTool.

On the other side, the CWS policies [2] allow code changes to be
integrated and approved by code review only; just the CWS owner and
the QA representative must be different persons. This was introduced to lower
the barrier for external developers. If you think that this is a
reason for lower quality in the product, perhaps this policy has to be
discussed.

Thorsten

[1] : http://quaste.services.openoffice.org/
  Use 'search' for CWSs which are integrated or use the CWS-listbox
  for CWSs which are in work.
[2] : http://wiki.services.openoffice.org/wiki/CWS_Policies

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Jochen,

Joachim Lingner wrote:
As Thorsten pointed out, we are NOT capable of covering the QA for our
product completely, NOR are we able to extend QA to new features (no time
for writing new tests, etc.). We also know that this is not because we
are lazy ...
As a matter of fact, many issues are reported by the community, at least
the critical ones which often get promoted to stoppers. IMO, we should
therefore promote the QA community, so there will be more volunteers
(who maybe also develop tests), and extend the time span between
feature/code freeze and the actual release date.


There is one critical point: when you extend the time for testing and QA
between feature freeze and release date, you bind the QA community to
one release (code line) - and who should then do QA work on the next release,
which the developers are already working on and creating their CWSes for?

I talked very often with Martin about extending the time buffers
between FeatureFreeze, CodeFreeze, Translation Handover ... and
it isn't easy to find a good choice for all teams.

Thorsten

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Frank,

Frank Schönheit - Sun Microsystems Germany wrote:

Hi Thorsten,

[...]


For instance, is it possible that QA does not have time to write new
automated tests because this is such a laborious and time-consuming
task, but we do not have the time/resources to make it an easy and quick
task?


Writing good test scripts isn't an easy task, you are right. That is the
case for all software products: writing test code costs more time
than writing other code. Try it out with unit tests ;-)

So it's the same for automated testing with VCLTestTool for OOo. But
the problem here is that the QA team leads for an application are
often the same persons who have to write the test scripts. They have
to check the new incoming issues, work in iTeams, work on
verifying issues in CWSs ...

When you have the time to concentrate on writing test scripts only,
you can create hundreds of lines of code per day. But the high workload
on these persons leads to only hundreds of lines of code per month.

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Max,

Maximilian Odendahl wrote:

Hi,


The problem is a bit more complex. The testers and test script writers
do not have any time for writing new test cases for new functionality,
they do not have time to check fixed issues in master, they do not have
time to check code changes in a CWS as much as they should 


Maybe it is an idea to change the 'resolved/fixed' to 'verified' process? It
is probably a waste of time in about 99% of all cases. The developer
tests the issue before handing over the CWS and then sets it to
'resolved', so there is a pretty small chance that the issue itself is not
really fixed. And remember, it will be checked again anyway when setting the
issue to 'closed'.


Do you know how often a CWS returns to development because of
broken functionality, not-fixed issues or process violations? It's
up to 25-30% of all CWSs. You can check this in EIS. The data has been
stable over the past years. :-(

Therefore, in my opinion it isn't good to change the handling of the
'resolved/fixed' to 'verified' status.


But checking the issues in the master ('verified' to 'closed') could be
discussed. Here the numbers really are 99%, I think. Nearly all
issues which are fixed in a CWS are fixed in the master too.

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Ingrid,

Ingrid Halama wrote:

Thorsten Ziehm wrote:

Hi Ingrid,

Ingrid Halama wrote:
[...]
So I would like to see mandatory automatic tests that detect whether
the important user scenarios still work properly, whether files are
still rendered as they should, whether the performance of the office
has not significantly decreased, ... We have a lot of tests already,
even if there is much room for improvement. In principle some of the
tests are mandatory already, but this rule gets ignored very often.


What do you mean? There are mandatory tests, and each tester in the Sun QA
team runs these tests on a CWS. You can check in QUASTe [1] whether your
CWS was tested with VCLTestTool.
There are more than the VCLTestTool tests. We have the performance tests,
the UNO API test and the convwatch test. All those are the
responsibility of the developers. I think only convwatch is not mandatory.

Ingrid


OK, you are right. From my perspective I often look only at the VCLTestTool
tests instead of the whole stack of tools we have.

Thorsten

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Max,

Maximilian Odendahl wrote:

Hi,


Also, having seen a lot of misunderstandings (Oh! I thought you meant
*this* button, but now I see you meant *that* one!), I think it is a
good idea that somebody who did not fix the issue verifies it. And the
CWS is the best place for this verification, I'd say.


Yes, this is true. So would you say we could skip the step of going
from 'verified' to 'closed', doing this verification again?


We cannot free up time here anymore. It isn't mandatory anymore for the
Sun QA team to check the fixes in the master. We skipped this nearly one
year ago - but we didn't change the policy!

But we tried to organize QA issue-hunting days where these issues are
addressed. With more or less success :-(

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mechtilde,

Mechtilde wrote:

Hello,

Thorsten Ziehm wrote:

Hi Max,

Maximilian Odendahl wrote:

Hi,




Do you know how often a CWS returns to development because of
broken functionality, not-fixed issues or process violations? It's
up to 25-30% of all CWSs. You can check this in EIS. The data has been
stable over the past years. :-(


Can you tell me the path, i.e. how I can find this information in EIS?


Childworkspace / Search
When you have searched for CWSs with this query, you will find a button 'status
change statistics' at the bottom of the results. On that page you
can see how often a CWS toggled between the CWS states.

Thorsten

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi,

I wrote some comments in this thread already. But I was working on
a longer mail with my collected thoughts about this topic.


What are my thoughts on this topic? My first thought was that there is
nothing (really) different in this release compared to past releases. But
this doesn't mean that this is good news.

My second thought was to write down all the points I identified
which were (perhaps) different from past releases. Some of the
points I will list I can support with data; others are rumors in the teams
which I heard over the past months or years.

1. The number of regression issues for OOo 3.1
For this release the stopper issue was opened long before code freeze -
in November 08 - and 10-15 issues were added before code freeze at the
beginning of March. But what isn't typical for this release is that in
the past days a mass of new incoming regressions were added, and the
quality of these regressions gives a bad view of the product.
So the overall number isn't that bad for such a long time span, but I expect a
quality issue with the product when so many stopper reports (~20)
were posted on the releases list in the past 4 days!

2. What is special in the release plan for OOo 3.1
- the first integration of CWSs was on 30 July 2008 (DEV300m29)
- Feature Freeze for OOo 3.1 was on 18 December 08
  = starting on 11 December, 56 CWSs with 403 issues were integrated
    into builds DEV300m38 and m39 for the Feature Freeze
    (the CWSs cloned from the OOO300 code line are not counted)
- Code Freeze for OOo 3.0.1 was on 11 December 08
  = starting on 11 December, 19 CWSs with 88 issues were integrated
    into build OOO300m14 for the Code Freeze
This means a very high number of CWSs were handled/finalized by DEV and
QA in a very short time frame - especially before Christmas (most of
the full-time engineers at Sun wanted to go on vacation for 2 weeks).
For me it's the first time that such dates were so close together.

3. What's new in the build environment
Starting with build DEV300m33, the source control management (SCM) was
switched to Subversion. Subversion wasn't as good as expected, and
it has some bugs and challenges. I read a lot of internal and external
mails that processes were broken, features weren't supported, and some
people just needed information on how to do this or that. This was another
reason for additional regressions in the code on the master code line.

4. External CWSs (not handled by Sun developers)
a) In the past months the Sun QA team has gotten more and more external
CWSes, where Sun only does the QA work. The numbers aren't that high, but these
CWSs bind resources in the Sun QA team, but often not so much in the
development team. This could lead to an unequal balance between the
teams.
b) I heard the rumor in the corridors here at Sun that some external
CWSs led to broken functionality. If this is correct, why couldn't the QA
representative identify these regressions? Who are the QA
representatives, etc.? Or do we have to change the CWS policies, where
code review is one possible way of approving a CWS?

5. General quality of the code
a) I also heard the rumor in the corridors here at Sun that some
features aren't completely ready by the feature freeze date. But the L10N
teams need the UI for translation, so the strings are integrated first
and the functionality is checked in with another CWS later.
If this is really done, it leads to the problem that the iTeam does
not have enough time for regression testing, because the functionality
testing can only start shortly before code freeze or the first release
candidate. The time for bug fixing is also too short.
b) The number of issues marked as regressions in IssueTracker
hasn't gone down in the past years. We still have a rate of 7-8% of
all reported issues which are marked as regressions. For me this means
that we aren't getting better with the developed code, but we
aren't getting worse either. But I think, when I subtract ~25% duplicate
issues and 10-15% features and enhancements from all reported issues,
the regressions become even more relevant. What does 7-8% mean?
It means that out of 50 developers, 2 are working only on the regressions
of the other developers.
c) As I said in another thread, the rate of how often a CWS returns
to development is ~25-30%. It has stayed that high over the past
years. And remember, we do not work much with an iteration process.
Often the CWS returns because of process violations, bugs that aren't fixed,
or new bugs that were introduced.

6. What features are important for a release
Do you know the features for the next release? I don't! I am surprised
every time when I create the feature lists for general and L10N
testing. For me it looks like everybody can work on the feature
he likes most, and then this feature has the highest priority
for him and is a must for the next release. On the other side,
QA, Release Engineering and the Program Manager don't know
which features they have to work on first. Because 

Re: [dev] amount of stopper / regressions for 3.1 release

2009-03-13 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer wrote:


More testing on the master(!) would be very welcome. But on the CWS?
This will make the time to master even longer and then again we are in
the vicious cycle I explained in my first posting in this thread.


Yes, more testing on the master is welcome, that is true. But most testing
must be done on the CWS. When broken code is in the master code line it
takes too much time to fix it. And then you cannot do quality assurance.
You can do testing, but that has nothing to do with holding a quality
standard!

The time to master isn't a problem currently, I think. A general bugfix
CWS can be 'approved by QA' in 2 days. But when the master is broken,
you do not know whether the bug is in the CWS or in the master. It takes longer
to check the problems, and this is what we have now. Reducing the time
to master will come when the general quality of the master and the CWSs
is better.

So more testing on CWS is also welcome!

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] Re: buildbot builds vs standard builds

2009-02-27 Thread Thorsten Ziehm

Hi Andre and Mechtilde,

André Schnabel wrote:

Hi,

But yes ... there are no problems with testing builds from build bots. I 
think, we should just go on as we do now.


That BuildBots have to generate installable builds is a MUST. I am on
your side. But as I said before, it isn't possible for the RE team in
Hamburg to support and maintain all the BuildBots which exist for OOo.
We do not have the resources! So the help of the community is needed to
maintain them.

So do we know who maintains this BuildBot?

And this isn't sarcasm!

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] Re: buildbot builds vs standard builds

2009-02-27 Thread Thorsten Ziehm

Hi Juergen,

Juergen Schmidt wrote:

Thorsten Ziehm wrote:

Hi Andre and Mechtilde,

André Schnabel wrote:

Hi,

But yes ... there are no problems with testing builds from build 
bots. I think, we should just go on as we do now.


That BuildBots have to generate installable builds is a MUST. I am on
your side. But as I said before, it isn't possible for the RE team in
Hamburg to support and maintain all the BuildBots which exist for OOo.
I think nobody has requested that. The idea was more to have at least
one reliable build bot for all platforms that we also build in Hamburg
(maybe Linux, Windows and Mac, which are probably the most important
ones for the community). These build bots should have the same baseline
as our internal build clients.


Which Linux? Which Mac? 32-bit or 64-bit?
For the L10N tests more than just 3 platforms were asked for. And not
all of the requested platforms are supported by Sun. So where do you want
to begin and where do you want to end? It will not be an easy task to
identify the needed platforms for such a project.


We do not have the resources! So the help of the community is needed to
maintain them.
That is a valid point. So do we have a brief description for the setup 
of our build clients that we can share with potential maintainers?


I do not know what information is needed for maintaining BuildBots.
There are many wiki pages for this, and there is a Tinderbox mailing
list where you or other interested people will get answers.

 Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] Re: buildbot builds vs standard builds

2009-02-27 Thread Thorsten Ziehm

Hi Mechtilde,

[...]


I want to test - manually - some dba CWSes. Therefore I must be sure
that everything besides this CWS is the same as in the corresponding
developer snapshot.

Only when we have this premise can we do reliable QA work on CWSes.


This and all the other points are correct and valid. But I want to know
which problems you had with such a BuildBot in the past. It wasn't
always the case that you couldn't get a build. So what was the problem
with Base that makes you want to have the same build environment as Sun
has?

Only when I know the root cause can I work on a solution. And as I said,
currently the solution of having the same build environment as Sun has
would take too many resources in my team.

Thorsten


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] buildbot builds vs standard builds

2009-02-26 Thread Thorsten Ziehm

Hi Andre,


Perhaps using build bots for CWS builds or for master builds here in
Hamburg could be a solution. But my intention is to collect all
requirements which are needed for the build environment and then to find
a solution. And I do not want to nail down now that we have to use
build bots only because of the request by the QA community, because I do not see
that this request is valid (see my earlier mail in this thread).


Hmm? What request is not valid? That the QA community should be able to
somehow get reliable builds from buildbots to be able to do CWS tests?


I will explain it again. I have tried to explain my future view of QA for CWSes
in different threads and also in my presentation at OOoCon 2008. But I
see that my vision isn't really clear to everybody, and therefore I'll try
again.

1. Quality assurance is different from testing.

2. For quality assurance you need defined processes and a defined
   environment.

3. I want to have TestBots - or however you want to call them - with a
   defined environment, to check each CWS automatically or by manual
   trigger. On these TestBots automated testing will run automatically
   on each CWS. Which automated tests run has to be defined. Currently
   I know the tests with VCLTestTool are all open source. Other tooling
   like performance tests, API tests and other stuff isn't under my
   responsibility, and therefore I cannot say whether this can also be
   done on the TestBots.

4. All results of the TestBots will be stored in QUASTe, and a quick
   overview (green or red status) will be stored in EIS. This will be
   the place where QA representatives or other interested people can
   get an overview of the general quality of a CWS or the master.
   A quick comparison between CWS and master will be available very
   soon, so that you can check whether errors are in the CWS only or
   also in the corresponding master.

5. The fully automated process should be possible for all CWSs.

6. Builds from a defined build environment are needed.
   Therefore BuildBots have to be defined with which the automated
   tests can run stably and without errors other than the ones seen in
   the Sun-internal build environment. Whether this is a BuildBot like
   the Sun-internal environment or something different has to be checked.

7. When each CWS is tested fully automatically, the QA community -
   including the QA engineers in Hamburg - can concentrate
   on testing the new features, checking the bug fixes etc.

8. The TestBots and the manual testing on a CWS should give a high
   guarantee of the general quality of a CWS, and that should work for
   all CWSes (internal and external to the Sun environment).

9. The TestBots should be installable inside or outside the Sun environment,
   so if an L10N community wants to use such an environment for its
   L10N testing, it can copy such system(s).

If this request is not valid - what is the alternative (how would QA 
community get cws builds?)


TestBots should be the solution, and I hope that by the middle of this year
we will have first results.

Thorsten

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] buildbot builds vs standard builds

2009-02-25 Thread Thorsten Ziehm

Hi Rene,

Rene Engelhard wrote:

Hi,

Thorsten Ziehm wrote:

I do not see the need to bring the build bots close to the build
environment here in Hamburg. The request for build bots was (as far as I know)
to have builds in different environments, to find build issues in these
different environments. When these environments are nearly the same,
we will miss these build breakers.


buildbots != tinderbox.


Perhaps I am wrong, but build bots are used for this! As far as I know the
results of the build bots are used in EIS to get the state of whether a
CWS can be built. But perhaps I am wrong.

Thorsten

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] buildbot builds vs standard builds

2009-02-24 Thread Thorsten Ziehm

Hi Mathias,

I do not see the need to bring the build bots close to the build
environment here in Hamburg. The request for build bots was (as far as I know)
to have builds in different environments, to find build issues in these
different environments. When these environments are nearly the same,
we will miss these build breakers.

Perhaps using build bots for CWS builds or for master builds here in
Hamburg could be a solution. But my intention is to collect all
requirements which are needed for the build environment and then to find
a solution. And I do not want to nail down now that we have to use
build bots only because of the request by the QA community, because I do not see
that this request is valid (see my earlier mail in this thread).

I talked with some people in RE and BuildEnv over the past weeks, and
I want to have a build environment which can be used for the next years.
Perhaps it is possible to begin with a white paper to define new tooling,
new processes etc. But this will take time, and I do not want to end up
now with a solution which cannot be handled by the current RE resources. When
I start this project I will invite you and others for brainstorming,
but we should stop saying now that this or that is the best solution.
We need a stable solution for the next years, especially when we want
or have to switch to a DSCM.

Thorsten

[...]

-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] buildbot builds vs standard builds

2009-02-19 Thread Thorsten Ziehm

Hi Andre,

where is the problem? (I know, I have been asking this for months and do not get
any detailed answer) :-(

I heard often in the past months that it isn't possible to compare
the test results in QUASTe of a Sun build with those of an OOo buildbot build.
As far as I know there are only a few differences, and differences also exist
when running the automated tests with VCL TestTool in different test
environments. And I often told you and other QA members that it will
not be possible to get the same test results in all environments.

Therefore I said to you, and also at the OOoCon, that I want to introduce
TestBots. Then CWS and master builds are tested in the same
environment, and one major root cause for differences in the test
results is eliminated. The other thing is to run the TestBots on builds
from a BuildBot; then all results should be the same. That's the theory,
and some engineers are working on the TestBots solution.

If this isn't a solution for you and the OOo community, you should write
to me and make other proposals. I want to work with you and the QA
community to eliminate the barriers. But I haven't gotten any of the promised
feedback for months.

That the BuildBots could be made more identical between SO and OOo builds is
another issue, I think. That this also has to be addressed is clear
to me as well. But I want to separate this from the QA part.

Regards,
  Thorsten


Andre Schnabel wrote:

Hi Nils,

 Original message 

From: Nils Fuhrmann nils.fuhrm...@sun.com



Stephan Bergmann wrote:
During FOSDEM, Mechtilde told me about a problem the QA community is 
experiencing, namely that buildbot builds (of CWSs) are quite different 
functionality-wise from the standard builds (of milestones and releases,
often done by Sun Hamburg Release Engineering).  Those differences are 
especially apparent in Base, Mechtilde told me.  This problem in some 
cases prevents easy testing of a CWS by the QA community, or even 
thorough testing of a CWS in real life by replacing a standard OOo build

with a buildbot CWS build in (semi-)production use.
I know that there were some issues regarding QA'ing buildbot builds in
the past. To get an idea of what the real problem is, we should collect those
issues in detail when they occur, to find the root cause for them


This is quite like going into the woods and looking at each tree separately to
understand what the wood is.

(as we
always do). If those issues still exist, I would expect that this
list is already available somewhere (Mechtilde, do you have such a list?).


There is no full list, as we would need to do several test runs on Sun builds,
compare those to test runs done on equivalent buildbot builds, and identify the
differences ...

You will find differences due to:
- different configure settings (this is from my experience the biggest part, as
  complete functional areas might be missing)
- different compilers
- different build environment
- different test environment

This would need to be done for at least all the major platforms and at least 
all cat0 tests. This is a total of some weeks for running, analyzing and 
comparing tests.


Really, we should investigate into the concrete list of issues before 
thinking about any additional infrastructure. 


Sorry, this is the totally wrong way of thinking. 


The correct way would be: how can we get more people helping in development (here:
QA) by using existing infrastructure?

We do not need *additional* infrastructure. We just want to use existing buildbots to help with CWS testing.


-
To unsubscribe, e-mail: dev-unsubscr...@openoffice.org
For additional commands, e-mail: dev-h...@openoffice.org



Re: [dev] Re: testautomation the effects on the CWS process

2008-10-10 Thread Thorsten Ziehm

Hi Caolán,

Caolán McNamara wrote:

On Fri, 2008-10-10 at 06:50 +0200, Helge Delfs wrote:

However you might run these tests by yourself and it is of
course acceptable to fix these tests if required. 


What's the (hopefully one line) way to run these tests myself ? Or is
this a work in progress and not for use right now ?


My vision and the vision of the GUI automation QA team [1] is to have
the mandatory GUI tests (QA processes) run on a build bot or on test bots.
We are working on such a solution for automated GUI testing with VCL
TestTool. Other tools can follow, but that isn't my responsibility.

Having the test scripts for automated testing with VCL TestTool in the
same repository was the first step. Now it is possible to change the
scripts in the same CWS when new functionality is added or old
functionality is changed in a way that would break the tests.
The changes to the test scripts will be done by the automators; it isn't
necessary for the OOo developer to change the scripts herself.
It is important for us that this tooling does not become a new barrier for
contributions. It should make it easier to check the new code in the UI
(integration test).

Since the DEV300 code line is on Subversion (SVN), you get the test scripts
anyway, so there isn't any change for you as a developer on OOo.
Only when you bring a CWS into OOo whose changes break a test script
will the test scripts be changed in the CWS once it is 'ready for QA'
(though our intention is to set the CWS back to 'new', or to introduce another
state, so that the QA persons - the automators - can do their job on the CWS).

So there isn't any new barrier or change for the developer on OOo since
the code line changed to SVN.
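
Just as an illustration (the repository URL and layout below are an assumption
and may differ; the point is only that a normal checkout of the code line
brings the test scripts along with the source modules):

# illustrative sketch only - exact repository URL/path may differ
svn checkout http://svn.services.openoffice.org/ooo/trunk DEV300
ls DEV300/testautomation   # the VCL TestTool scripts live in this module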

Thorsten

PS.: The policies are changed. But as I wrote since SVN migration a
 policy change isn't needed anymore.
 http://wiki.services.openoffice.org/wiki/Approve_a_CWS
 http://wiki.services.openoffice.org/wiki/CWS_Policies

[1] : 
http://qa.openoffice.org/ooQAReloaded/AutomationTeamsite/ooQA-TeamAutomationResp.html


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: testautomation the effects on the CWS process

2008-10-10 Thread Thorsten Ziehm

Hi,

Martin Hollmichel wrote:

Caolán McNamara wrote:

On Fri, 2008-10-10 at 06:50 +0200, Helge Delfs wrote:
 

However you might run these tests by yourself and it is of
course acceptable to fix these tests if required. 


What's the (hopefully one line) way to run these tests myself ? Or is
this a work in progress and not for use right now ?

C.

  
Yes, you're right, having this available in the build via make would
help for a non-feature CWS when a developer has to decide whether to involve
full-blown QA or whether he can stay with the expedited CWS approval process,
e.g. with automated tests and peer review.


There are some misunderstandings here. QA isn't the same as QA in software testing.

Developers often talk about code quality, and a developer needs tools he can
run to check whether his code breaks anything. The GUI testing with VCL
TestTool isn't such a tool. The VCL TestTool checks functionality
as a user would, in an installed office. These tests need an installed
office and the TestTool as an external testing tool. This cannot run
in the development environment and shouldn't be started by 'make'.

I know that developers want to start tests from the command line. But for
that you need unit tests, complex tests and perhaps API tests; those tests
work on the code base. The tests with the VCL TestTool come later in the
QA process and have to be run outside the developer environment.

So 'make' isn't a solution for this kind of test script.

The unit tests and complex tests aren't my team's responsibility.

Thorsten


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: testautomation the effects on the CWS process

2008-10-10 Thread Thorsten Ziehm

Hi Stephan,

Stephan Bergmann schrieb:

On 10/10/08 15:02, Thorsten Ziehm wrote:

Here are some misunderstandings. QA isn't QA in software testing.

The developer talked often about code quality and he needs tools to
run, to check if his code break anything. The GUI testing with VCL
TestTool isn't such a tooling. The VCL TestTool check functionality
as a user in an installed Office. These tests need an installed
office and the TestTool as external testing tool. This cannot run
in the development environment and should be started by 'make'.

I know that developers want to start it from the command line. But
therefore you need unit tests, complex tests and perhaps API tests.
These tests work on code base. The test with VCL TestTool are later
in the QA process and it has to done out of the developer environment.

So 'make' isn't a solution for such kind of test scripts.


I have to disagree.  Running VCL TestTool tests (in addition to other 
tests, like unit tests and complex tests and whatnot tests) from within 
a build environment could indeed be useful (e.g., to integrate them in 
automatic builds, as Thorsten Behrens already pointed out in a recent 
mail).  That doing so should technically be possible is demonstrated by 
module smoketestoo_native.  What probably is a prerequisite for all 
this, though, is that the VCL TestTool has to produce stable, 
reproducible, unambiguous results (which, as my experience in the past 
has been, it does not always do).


You are right on that point: the developer needs a tool which easily shows
him whether his code changes work. As a first step this could be done in
the development environment, but the final approval of the changes must
happen outside of it.

Why?
1. You cannot identify dependencies which only work in the development
   environment. Without such dependencies (perhaps a link to special libs
   or something else) the code could break outside that environment, and
   the tests would show an error in the implementation.

2. Code may exist in the developer's environment without being submitted
   to the code management system. Such a problem can only be found when
   the code is built and executed outside this environment.

Neither scenario is particularly rare.

But as I wrote above, this tooling should also work in the development
environment; it just shouldn't run only there.

We will take care of that requirement.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: testautomation the effects on the CWS process

2008-10-10 Thread Thorsten Ziehm

Hi,

I invite all of you to my presentation at OOoCon 2008 in Beijing
(Thursday at 9:00, as I remember). There I will talk about this vision
and the possibilities, and also about general software QA, which is a
complex topic.

There we can talk about the concerns and requirements of developers
inside and outside Sun.

Helge will be there too and will have a presentation directly after mine.

See you,
  Thorsten

[...]

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[dev] Re: [releases] OOo 3 backward compatibility and #93298

2008-10-07 Thread Thorsten Ziehm

Hi Nguyen,

this problem was fixed for .ods files with issue 87128 in OOo 2.4.1.
Perhaps the same fix is possible for .sxc files.


In general, the file format of OOo 3.0 is based on ODF 1.2, while the file
format of OOo 2.x is based on ODF 1.1. Therefore an update notification
comes up in OOo 2.x when you load a document whose file format is ODF 1.2
or higher. Nearly all new features are integrated into the new file
format, and you will get a warning message in 2.x that some features
cannot be displayed when you load a document based on OOo 3.x. So there
will be many inconsistencies between 2.x and 3.x, but the user is informed
by the warning and the update notification.

That it is possible to save the new cell range of spreadsheets to older
formats like ODF 1.0 (sxw) wasn't in focus. In my opinion this bug has to
be fixed for OOo 2.4.2.

Thorsten


Nguyen Vu Hung wrote:

Hello all,

It seems that OOo3 is very vulnerable to backward compatibility tests.

For example, a recent bug[2] has been found[1] and
I am sure we will find more bugs like this if we have a serious test case.

This time, Calc 2.4.1 *crashes* when loading a .sxc file saved by
Calc 3.0 beta2.
The issue is serious! What do you think?

[1] http://www.nabble.com/Issue-93298-for-2.4.2-td19839142.html
[2] Calc 2.4.1 crashes when loading a .sxc file saved by Calc 3.0 beta2



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[dev] Re: [qa-dev] Proposal : tar.gz packages for Linux as default for CWS builds

2008-07-03 Thread Thorsten Ziehm

Hi,

so in my opinion the proposal is accepted. I will now collect the
dependencies and come back to the lists when it is integrated.

Currently known dependencies are:
- the smoke test on Linux has to run with tar files
- the automated test environment inside Sun has to be adapted

I think work on these issues will start after the release of the
OOo 3.0 Beta Refresh. So stay tuned ... :-)

Thorsten


Thorsten Ziehm wrote:

Hi,

in the past some requests came up to also provide DEB packages besides RPM
packages when a build from a Child Work Space (CWS) has to be tested
by the community.

This is a valid request, because the number of Linux distributions using
DEB packages has increased. But does it make sense to increase the time
for packaging the install sets on Linux because two package formats have
to be delivered?

The CWS tooling supports packing a CWS as tar.gz on Linux. Isn't this a
valid format on Linux for exchanging CWS builds between community members?

The Proposal:
The default format for CWS builds on Linux should be tar.gz (instead of
RPM). It should still be possible to create install sets as RPM or DEB,
or both.

The benefits:
- can be installed on Linux distros with RPM or DEB package management
  systems (just untar the files)
- easy to remove
  (just delete the directory/files)

The negative aspects:
- no system integration
- no testing of the Java setup
- perhaps some automated installation tooling has to be adapted

In most cases the system integration and the Java setup are needed only
for special testing areas, so this shouldn't hinder testing of general
CWS builds. And if needed, the CWS owner can create RPM or DEB packages
on request.

If there are no major objections, this can and will be changed soon.

Thorsten


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[dev] Proposal : tar.gz packages for Linux as default for CWS builds

2008-07-02 Thread Thorsten Ziehm

Hi,

in the past some requests came up to also provide DEB packages besides RPM
packages when a build from a Child Work Space (CWS) has to be tested
by the community.

This is a valid request, because the number of Linux distributions using
DEB packages has increased. But does it make sense to increase the time
for packaging the install sets on Linux because two package formats have
to be delivered?

The CWS tooling supports packing a CWS as tar.gz on Linux. Isn't this a
valid format on Linux for exchanging CWS builds between community members?

The Proposal:
The default format for CWS builds on Linux should be tar.gz (instead of
RPM). It should still be possible to create install sets as RPM or DEB,
or both.

The benefits:
- can be installed on Linux distros with RPM or DEB package management
  systems (just untar the files)
- easy to remove
  (just delete the directory/files)

The negative aspects:
- no system integration
- no testing of the Java setup
- perhaps some automated installation tooling has to be adapted

In most cases the system integration and the Java setup are needed only
for special testing areas, so this shouldn't hinder testing of general
CWS builds. And if needed, the CWS owner can create RPM or DEB packages
on request.

If there are no major objections, this can and will be changed soon.

Thorsten
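
To make the "easy to install, easy to remove" benefit above concrete, here
is a small Python sketch of unpacking and removing such a tar.gz install
set. The archive name is invented for illustration, and - exactly as listed
under the negative aspects - no RPM/DEB system integration happens at all:

    # Sketch: unpack a CWS tar.gz install set into a scratch directory and
    # remove it again.  The file name is hypothetical.
    import shutil
    import tarfile

    ARCHIVE = "OOo_cws_example_LinuxIntel_install.tar.gz"   # invented name
    TARGET = "/tmp/ooo-cws-test"

    with tarfile.open(ARCHIVE) as tar:
        tar.extractall(TARGET)    # "install" = just untar the files

    # ... start soffice from the extracted directory and test the CWS ...

    shutil.rmtree(TARGET)         # "uninstall" = just delete the directory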


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Opening SXW files with OOo2.x

2008-05-22 Thread Thorsten Ziehm

Hi Tobias,

Tobias Krais wrote:

Hi together,

we just upgraded from OOo 1.1.4 to OOo 2.4. Our documents are all sxw
documents. Some customers claim, that the formatting is now different
My question: does OOo illustrate / display sxw files as OOo 1.1.x does?
Or are there differences?

If there are differences, is there a way to to open the files as they
have been displayed in OOo 1.1.4?


Yes, there are many differences :-) Thousands of bugfixes and features
have been implemented since OOo 1.1.4. That old version has many layout
errors and missing functionality, which can lead to a different layout
between these two versions. If the formatting errors are too dramatic,
please open issues as bug reports, and if possible attach the test
documents to those issues.

In general, compatibility should be maintained between these versions and
file formats.

Regards,
  Thorsten


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[dev] Re: [qa-dev] Re: [dev] proposal for change of cws policies

2007-07-06 Thread Thorsten Ziehm

Hi Michael,

Michael Meeks schrieb:

Hi Martin,

On Thu, 2007-07-05 at 14:14 +0200, Martin Hollmichel wrote:

With the help of Nikolai we are now able to provide a proposal for a
modified version of the child workspace policies on
http://wiki.services.openoffice.org/wiki/CWS_Policies


This looks like an improvement :-) thanks.

Under the Setting a CWS to approved by QA - since this is something
developers can do (for category B) - can you expand on the (should for
bug fixes) section - Make a test specification / test case available -
is there some repository of such things somewhere ? how is that done ?
in what form ? can this be waived in the case that a unit test exercises
the code paths ? :-)


I can speak for QA on the GUI level. We define 'developer issues' (like
the ones in Category B) as issues where it isn't possible to create test
case specifications or to write test cases for automated testing with the
TestTool. If my team gets such issues (or a CWS with such issues), we do
general regression testing in the areas where the changes were made.
But these issues were tested, and still can be tested, by another
developer through code review, unit tests or API testing.

I cannot speak for unit or API testing of issues or CWSes. But does that
need to be explained in such a document? Or what should be included?

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] proposal for change of cws policies

2007-07-05 Thread Thorsten Ziehm

Moin Peter :-)

Peter Junge schrieb:

Hi Martin,

Martin Hollmichel wrote:

Hi,

for a long time it has been already practice that not all child 
workspaces had to be approved by a QA Team member but also by another 
developer. The same applies for the involvement of the user experience 
Team. Together with Lutz for the user experience team and Thorsten for 
the QA team we review the existing Child Workspace policies on 
http://tools.openoffice.org/dev_docs/child_workspace_policies.html.


With the help of Nikolai we are now able to provide a proposal for a 
modified version of the child workspace policies on 
http://wiki.services.openoffice.org/wiki/CWS_Policies


in the section 'Making the CWS Ready for QA, Approved by QA and
Nominated for Integration', the role of the QA representative is
defined wrongly. Please refer to the included link
http://wiki.services.openoffice.org/wiki/Approve_a_CWS.


The QA representative is not the one to do the 'necessary tests', but the
person who coordinates the QA effort. This could even mean that the QA
rep does no testing at all, when other tasks have higher priority in
driving the CWS approval process.




You are right. I changed this part.

Thorsten

PS: It's no wonder to me that you found this, because you were responsible
for working out the 'Approve a CWS' documentation with a team. ;-)

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-04 Thread Thorsten Ziehm

Hi Mathias and Martin,

most of you want to find regressions in less than an hour. Such tooling
doesn't exist for a program as complex as OpenOffice.org. Christoph wrote
that all the API tests together run for more than 4 hours, and API testing
is one of the quickest kinds of test there is.

I want the same as you, but for that we first need unit tests for all
implementations. That is the quickest type of test and would show you the
first regressions. Then write API and complex tests for your
implementation, and you can be fairly confident that GUI testing with the
TestTool will find fewer errors.

But in most cases these tests don't exist. So we have to live with long
testing phases with the TestTool on the GUI level.

I do not understand why you keep writing that you want quick results for
your implementation. That is exactly the point: you will get the results
more quickly when the tooling runs automatically between development and
QA. If you prefer to wait each time until a QA rep has time for your CWS,
that is OK with me, but then you will currently wait 5-10 days. And with
the higher number of CWSes, which has grown in the last weeks through
community CWSes, this time will increase.

Thorsten


Mathias Bauer wrote:

Martin Hollmichel wrote:


Do we have some statistics in which areas we have what amount of
regressions ?

For example I would think that regressions caused by broken resources
don't occur that much any more and are also easy to find by broad
testing. On the other hand I could imagine that regressions in document
layout occur much more often and would be reported much later
than broken resources?


Exactly this is one of my concerns. The tests that Jogi mentioned are
surely useful to prevent huge disasters but I'm not sure if they are
qualified for testing for the regressions we usually have.





Regression tests IMHO should tackle the areas where we know that
regressions happen more frequently. I hope it's undisputed that we can
only execute a very limited set of tests and so it should be our
interest to use those test that are able to catch the biggest fish possible.

From the Writer's perspective interesting areas could be text formatting
and layout, undo/redo, redlining, numbering and some more. Regressions
in resource files or the complete inability to start an application
rarely happen and the latter IMHO is already covered by the smoketest.
BTW: extending the smoketest would be better than adding another tool
for testing.

I'm also not concerned about regressions in the main features of the
applications as usually regressions in these areas are discovered pretty
fast. I'm concerned about regressions that are not so obvious and
usually are found too late. My impression was that this was what created
the idea to have more regression testing, so we should put our focus on
them.

These regressions are not only crashes, often they are more or less
subtle formatting or functionality differences where I wonder how the
provided tests could discover them.

Tests are software and we have learned that for good software one needs
to understand the requirements first and then select the best design to
implement them. I know that we are not perfect in following that but
IMHO we should at least try.

As my requirement would be to find as many regressions as possible with
as little effort as possible I would like to:

- identify the areas of regressions (we should have some data about it)
- think about what kind of tests are best suited to find them earlier
- try to implement such tests so that they are reliable and as easy and
fast to execute as possible
- help developers to find out fast and easily which test should be
applied when

As obviously noone else is interested in a content oriented discussion I
think I will quit and wait what we can test with the provided tests and
if this is what we need.

Until then my impression is that continuing this discussion will be less
valuable than the waste air our computers create while we are writing
our mails. :-)

'nuff said,
Mathias



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-04 Thread Thorsten Ziehm

Hi Eike,

Eike Rathke wrote:

Maybe that's part of how the problem was perceived: discussions
_internal to Sun_. Or was Rene involved? Did he even know there was
a discussion ongoing?


'Internal to Sun' - I thought that was the implication of some mails here
in this thread. If I'm wrong, sorry. I'm happy to talk about the next
step of quality assurance with the community too; that's why I'm here on
the list.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-04 Thread Thorsten Ziehm

Hi Stefan,

the same holds for automated GUI testing with the TestTool: those tests
can also run in parallel on different machines.

Thorsten


Stefan Zimmermann wrote:

... and that is exactly what Christoph wrote...

The UNO API test will be a distributed test. That means the whole API
is split into small pieces. A pool of test machines is registered on
a server which distributes the pieces. If, for example, three test
machines are available, the UNO API test is done in 4-6 hours. If there
are more, the test is faster.

which means that it is related to the number of machines involved.

regards

Stefan
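
The distribution scheme described above can be pictured with a small
Python sketch - purely illustrative, with invented machine names and API
"pieces"; this is not the real OOo test infrastructure:

    # Sketch: a server-side queue of API test "pieces" and a pool of
    # registered test machines pulling work from it until nothing is left.
    import queue
    import threading

    pieces = queue.Queue()
    for module in ["piece-sw", "piece-sc", "piece-sd"]:
        pieces.put(module)            # each piece is one chunk of the UNO API

    def worker(machine_name):
        while True:
            try:
                piece = pieces.get_nowait()   # a registered machine asks for work
            except queue.Empty:
                return                        # no pieces left, this machine is done
            print(machine_name, "runs API tests for", piece)

    # With three registered machines the total runtime shrinks roughly
    # with the size of the pool, as described above.
    machines = [threading.Thread(target=worker, args=("machine-%d" % i,))
                for i in range(3)]
    for m in machines:
        m.start()
    for m in machines:
        m.join()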


Stefan Zimmermann said the following on 06/04/2007 10:13 AM:

Hi Thorsten, Mathias, Martin

Thorsten Ziehm said the following on 06/04/2007 09:27 AM:

Hi Mathias and Martin,

most of you want to find regressions in less than an hour. These tooling
doesn't exists for a complex program like OpenOffice.org. Christoph
wrote that all API-tests will run more than 4 hours. And API testing is
one of the quickest tests which exists.


That information can't be right. API tests are highly parallelized and 
should be able to complete in 30min to one hour. At least it was like 
that as long as I led the team. There may be something wrong with the
setup if it takes longer or the value of 4 hours relates to the 
available and involved hardware resources.


Second, if you need to find regressions in an area you just worked on,
you don't need to run the whole thing but only parts of the job, which 
means that you can figure out in minutes if you have regressions or not.


[snip]

regards

Stefan

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]





-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-04 Thread Thorsten Ziehm

Hi Bernd,

Analyse the code coverage of each and every test, then compare it to the
modules added to the CWS, and then, when running the tests automatically,
just run those which cover modules added to the CWS? We would just need a
table in some database somewhere where individual tests are assigned to a
list of modules to be able to automate something like this.


We tried that for some months this year, but it seems to be impossible
with code as complex as OOo's. When you start OpenOffice.org and open one
document, nearly 80% of all libraries are touched, and the line-level
results we collected with some tooling (I do not know what was used)
weren't very useful. So we stopped this project.

So we think a fully automated mechanism on the code level isn't possible.
A module-based approach could help, but then the complexity of OOo is
still there - and do you know anybody who knows all the dependencies? ;-)

So our idea is to identify test cases per feature, e.g. group all the
test cases which test OLE objects, and group other tests which test
tables in Writer. The tooling should then make it possible to select
such groups.
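
As a rough illustration of that grouping idea, here is a small Python
sketch; the group names and test script names are invented, not the real
TestTool catalogue:

    # Sketch: map feature groups to test scripts and let the developer
    # pick the groups that match the area of the change.
    TEST_GROUPS = {
        "ole_objects":   ["w_ole_insert.bas", "c_ole_chart.bas"],
        "writer_tables": ["w_table_basic.bas", "w_table_split.bas"],
        "toolbars":      ["f_toolbars.bas"],
    }

    def select_tests(groups):
        """Return the de-duplicated list of test scripts for the chosen groups."""
        selected = []
        for group in groups:
            for script in TEST_GROUPS.get(group, []):
                if script not in selected:
                    selected.append(script)
        return selected

    # A developer who changed table handling in Writer would select only:
    print(select_tests(["writer_tables"]))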

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-01 Thread Thorsten Ziehm

Hi Hennes,

[...]
What you wrote is an argument against automated regression tests on CWSs 
. If we are not able to detect regression on whatever workspace 
(MWS/CWS), we don't even need to think about it.


I do not understand your point here. I gave an example where all the test
mechanisms we have did not work. That was two CWSes, not all of them! And
do not forget, we found a lot of regressions in these CWSes with the
automated testing - just not all of them!

But don't make everything mandatory. If I change a string in the setup
or change platform-dependent code for system integration I don't want to
run a mandatory test that checks whether all dialogs in Calc still work.


QA will and must do the mandatory tests for each CWS. Most CWSes need
the testing; if one doesn't, why not run the tests anyway? You do not have
to wait for them: if you haven't checked in any regression, you will never
see a problem, and you won't even think about the time the tests need to
run.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-01 Thread Thorsten Ziehm

Hi Frank,

Frank Schönheit - Sun Microsystems Germany schrieb:

Hi Joerg,

Do you agree to do regression testing with the testtool BEFORE you
(the developer) hand your work to QA, to get CWSes integrated faster?
You won't have to maintain the testing code, nor do you have to learn
the script language or debug the test code


Hmm? Do you suggest that I, as the developer, just start the test, and
in case something goes wrong you, as the QA person, are responsible for
finding out *what* goes wrong, and stripping down the test case to a
short reproducible one? If so, what would we gain? If not, then I don't
understand what you say.


That's the point. In most cases the error messages are about as good as
the messages in our Office (short joke). When it isn't possible to find
the error easily by interpreting the error message, the Automators will
help to debug the situation and find the problem.

What we gain in this situation is that the developer knows earlier
whether the code changes introduce regressions or not. It will also be
possible to run these tests during the development phase and not only at
the end - only at the end are the tests mandatory. We also gain that the
resources are needed neither in QA nor in development; the work sits in
between and should cost only machine resources and power.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-01 Thread Thorsten Ziehm

Hi Rene,

I do not want to discuss individual CWSes here in detail.

On the way to OOo 2.3 we integrated more than 180 CWSes in the last 3
months, and by the end we will be near 400. Perhaps 10% of them do not
need mandatory automated tests running 4-8 hours. But in some cases the
developer and the QA people have different opinions about whether the
tests are needed or not. In this case that is so.

Thorsten


Rene Engelhard schrieb:


Hi,

Thorsten Ziehm wrote:

Or imagine such a test run (failing or not) short before a release,
where you have a small CWS fixing a showstopper only. We don't really
want to have a mandatory 3 day delay in such situations, do we?

Best example currently: cws freetypettg. tiny *security* patch.
(As the freetype issue is public anyway I can say this here)

6 days from RfQA to QA approval (running tests?), now we are on the 8th
and miss the release date because rc3 will only be uploaded today/monday
(why do we need a rc3 anyway?) and keep our users one week more with open
security issues.

The test on this CWS ran only one night. The delay is because of a
weekend in between and some clarifications, if we need the fix for OOo


One night? That would have meant that (because the CWS was RfQA on the
24th) the tests would have been finished on the 25th. What is the weekend
argument for, then?

What did you need to clarify? (see also below)


2.2.1. Most of the time was internal discussions!


Sure. We don't need a security fix for a library we ship in the tree in
the next release... Come on, what does that need for a discussion?


Please don't mix up how long a CWS is in state 'ready for QA' with how
long the tests run. This is actually a good example for not running the
mandatory tests in QA but running them right after the developer finishes
the CWS. Then the time in state 'ready for QA' will be reduced.


I don't have the infrastructure to do so. Not that I'd see the sense
in this specific case anyway.
I am one of the persons who will *NOT* run anything like this except
when it is done via a normal build (which I mostly don't do either, at
least not for such CWSes). Then again, I normally don't do real code CWSes
affecting the office's functionality either...

I don't even know what tests you needed in this case anyway...
My main system isn't even an i386, fwiw.


Even the *current* procedures produce such useless delays, what if we
would have such mandatory things?

I do not think that QA is wasted time!


I don't either, but in this case it was.
The whole thing could have been done on the 24th, of, if you really think
the tests were needed (I don't), on the 25th.

Regards,

Rene

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-01 Thread Thorsten Ziehm

Hi Frank,


Hmm. Even today I have sometimes 3 or more CWS' to handle in parallel.
If the life time of a CWS becomes longer, it will become more difficult
to keep track of what you're doing. If a test fails after three days,
but meantime you started another project/CWS which you cannot leave
immediately, then you might find it difficult to come back to the first
CWS one or two or more weeks after you finished it. But maybe that's
only me.


It's the same situation for most developers now: they have to wait until
a QA person takes their CWS, and currently that takes more than 3 days!
And do not forget, we are not talking about 3 days here; we are talking
about at most 12 hours for the test runs.


Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-01 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer schrieb:

So I do not think it makes sense to discuss only the 'release
testing' mode. In the past the regressions were integrated before
QA switched into this mode.


I'm not sure if you understood my concern. Let me put it simple: what
makes us think that the current tests we are talking about that AFAIK
have been used in QA for testing the master for quite some time will
help to find regressions that currently stay unnoticed?


You are talking about finding all regressions. I am talking about finding
the most important regressions - the ones which could become P1 issues
once the CWS is integrated into the master. The mandatory tests are for
these errors.

In addition, you should get a mechanism to run more automated test scripts
than just the mandatory ones. Then you can check your implementation more
effectively and find the regressions in your feature.


When I read all your mails, I get the impression you know which code
brings in the regressions ;-)


I know some places, yes. Of course not all of them. But I don't know
which tests we could use to find them. But many of the regressions I
remember couldn't be found by automated GUI testing as they manifested
themselves by showing some more or less subtle formatting differences.


You should talk with Helge, the Automator for Writer, about these points.
I think we will find ways to catch these regressions. If it isn't possible
with the TestTool, then we should use ConvWatch or something we still have
to develop. We have so many tools, but nobody knows when and how each of
them should be used. But that is another problem.


As I wrote some mails ago, my suggestion is to introduce only a small set
of mandatory tests, but to provide a way to select testing areas. Then you
can run dedicated tests on your implementation, and you will not run
toolbar tests on your bugfix for automatic styles.


Anything else would be insane. I took that for granted. But I also want
to believe that running several hours of tests for e.g. automatic styles
would be worth the effort. This is a good example where I suspect that
possible regressions would stay unnoticed by automatic GUI testing. But


Do not forget that the automated tests found the XML file format error
which was introduced by the automatic styles (found on the master, because
the tester didn't think that a file format problem could be integrated),
and that some other errors were found before integration.


of course that's open for debate. And exactly this debate is what I want
to see happening. So let's wait until the proposed test cases are
published and until we have verified that they run reliably. Then the QA
and development engineers of the different teams can investigate them
and decide if they make sense or if we can create other tests that serve
the desired purpose better. *Then* we can decide whether we want to
run the tests more frequently or even make them mandatory.


Jogi sent out the link with the tests. Can we start to haggle over the
mandatory tests? ;-)

As I have written very often, and here again: the time shouldn't be the
problem, because neither the developer nor the QA person stops working
while a machine runs the tests.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-06-01 Thread Thorsten Ziehm

Hi Rene,


Or imagine such a test run (failing or not) short before a release,
where you have a small CWS fixing a showstopper only. We don't really
want to have a mandatory 3 day delay in such situations, do we?


Best example currently: cws freetypettg. tiny *security* patch.
(As the freetype issue is public anyway I can say this here)

6 days from RfQA to QA approval (running tests?), now we are on the 8th
and miss the release date because rc3 will only be uploaded today/monday
(why do we need a rc3 anyway?) and keep our users one week more with open
security issues.


The tests on this CWS ran only one night. The delay came from a weekend
in between and some clarification of whether we need the fix for OOo
2.2.1. Most of the time was internal discussion!

Please don't mix up how long a CWS is in state 'ready for QA' with how
long the tests run. This is actually a good example for not running the
mandatory tests in QA but running them right after the developer finishes
the CWS. Then the time in state 'ready for QA' will be reduced.


Even the *current* procedures produce such useless delays, what if we
would have such mandatory things?


I do not think that QA is wasted time!

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-31 Thread Thorsten Ziehm

Hi Christoph,

Christoph Neumann wrote:

Hi,

Thorsten Ziehm schrieb:

Hi Mathias,


Do you think it's worth it?


I think it's not primarily a matter of running the regression suite
before QA approval but of having a small set of meaningful regression
tests available.

Exactly, and I would prefer to have regression tests based on the API or
complex test framework and not based on the GUI testtool. We shouldn't
raise even more barriers to contribution.


I'm really on your side! But how many complex tests do we have? How high
is the code coverage of API tests in complex scenarios? I do not think
that we have enough test scripts at the code level. If we get them, then
we should make them mandatory quickly. Perhaps then the tests on the GUI
level with the TestTool could become unnecessary.


how high is the code coverage of TestTool? Both can be extended.


The code coverage of the TestTool tests is more than 50% overall; in some
applications we are testing more than 70%. But that isn't the point I want
to put my finger on. I see that we have a stable API in most cases: I do
not see many issues about broken APIs, but I do see a lot of issues about
regressions. My question is: do we find the regressions with API tests
alone? Do we need more complex tests which exercise the API in complex
scenarios? Will they find the regressions which hinder the user from
working with the product? I do not know, and that is why I am asking.


But nobody should forget that API testing is another level of testing: it
is testing at the code level. GUI testing is the highest level; it tests
against the specification and the functionality at the user level. It will
find completely different problems than unit, API or complex tests.

But do not misunderstand me. I want as much testing and quality assurance
on a CWS as I can get. So I'm a friend of unit, API and complex tests, and
if they should become mandatory for approving a CWS, I will vote for it.


I have planned to establish automated UNO API testing for CWS and master.
The CWS tests should be started in EIS like ConvWatch; the master builds
would be tested automatically as soon as they are available. This should
be done by the end of summer (hopefully).


Super !

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-31 Thread Thorsten Ziehm

Hi Jörg,

[...]


Ause just informed me about another solution that might remove the need
to have the tests run on every CWS, i.e. we wouldn't need to make the
tests mandatory. His idea is to run the tests on the Master Workspace
prior to announcing it as ready for CWS use. If a test fails then
this would result in a P1 issue that has to be fixed before the MWS can
be used by everyone. Very similar to how we handle it for the Smoketest
on the MWS nowadays.


Additionally, the list of tests to run would be checked in to CVS, so
that we could disable a test for every user on a given milestone if a
fix cannot be done in time.


That way a developer could get an _optional_ means at hand of doing 
regression tests, with no obligation to always run these tests. If the 
developer feels that he should run the tests, then he could do so and 
invest the (machine) time. If he thinks that the tests will be no 
additional help, he just does not run them.


Of course the question then is how often such a regression happens. If 
we have to expect to have half a dozen P1 bugs each milestone due to the 
mass of regressions, then the mandatory for every CWS seems the better 
solution to me. But if we expect to have such a P1 bug from the 
automatic tests only once every 2 or 3 milestones (or hopefully even 
less often), then this seems an acceptable way to me.


Does that make sense?


No, not from the QA team's point of view. They would still have to do the
regression tests on the CWS as they do now, so we would not gain anything
from this solution.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-31 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer wrote:

Martin Hollmichel wrote:


Jörg Jahnke schrieb:

Hi,

one of the questions is whether it would be acceptable for everyone to
run a small regression test-suite prior to the QA-approval of a CWS.
These tests would probably run several hours, depending on the
hardware being used, and therefore cost time and hardware-resources.

Do you think it's worth it?


I think it's not primarily a matter of running the regression suite
before QA approval but of having a small set of meaningful regression
tests available.


The whole discussion IMHO goes into the wrong direction as it neglects
an important word mentioned in this mail from Martin: meaningful.

Before discussing additional regression tests we first must find out
*what* we have to test. Yes, having reliable tests that hopefully don't
take too much time is the *necessary* precondition without that I even
wouldn't think about it. But there is more: we must get an idea which
areas of the code we need to check as they are known to have a history
of appearing regressions. Why should we do regression tests of code
and functionality that never contained a high regression risk and most
probably never will? That would be a waste of time and we already waste
too much of it.


The QA team has identified 45 tests which reveal broken functionality
very quickly. We use these tests for release testing, when the team has
to determine quickly whether a build is good or not. So we do have such
a set of test cases. These tests cover more than 80% of all OOo files.


Please consider: even in the QA process we currently do not execute
every possible existing test on a CWS for several reasons, mainly the
extraordinary long time it takes to execute them all. I assume the same
should apply to the tests we are considering now. So what we are
currently discussing are *selected* tests that most probably help to
avoid regressions. *What* we select here is crucial for the success.
Martin tried to consider this by his 20% rule mentioned in another
mail but I'm not sure if that makes sense - IMHO we need to cover *the*
20% (or maybe even less) of the code that is worth the effort.


I do not want to run every test on every CWS; that doesn't make sense.
But then the tooling must offer a selection mechanism, so that the
developer can select, say, 'tables' and 'writer' when he has changed
something in tables in Writer. Then all the test cases which work with
tables in Writer are selected and check that functionality. On top of
that some other tests will run, to avoid general regressions in other
applications (about 4 hours).

[...]


There is something else that should be thought-provoking: AFAIK most or
nearly all discovered regressions we had on the master in the last
releases haven't been found by the existing automated tests. They have
been found by manual testing of users. So what makes us think that
applying the existing test cases earlier and more often will help us to
find these regressions? For me this is a hint that we might need at
least additional or even other tests if we wanted to test efficiently.
I'm not sure about that but it would be careless to ignore this fact.


You are right, not all regressions in the master were found by the
automated tests. But some of them were found when more than the mandatory
tests were run. In the past only 2 smaller tests were mandatory for
approving a CWS. Many testers run more than these tests, but not all do.
Therefore some regressions went into the master which could have been
identified by the test cases.

On the other hand, do not forget the regressions which were identified by
the automated test scripts, where the CWS went back to development. This
process will be sped up, because the developer does not have to wait until
the responsible QA person has time.

So mandatory tests will help to identify more regressions before the
integration of a CWS - but not all of them. That is true and cannot be
denied.


So currently I don't know where this discussion will end. If the
expected result is a confirmation that developers agreed to executing
some arbitrary tests not known yet to test something not defined yet I
doubt that this will come to an end. But if we are talking about tests
that will be reliable, not too slow and that will be specifically
designed to investigate the dark corners that are known to produce
regressions more frequently: I think that wouldn't get a lot resistance.
But that's not where we are now.


I don't think so.


So my question is: any objections against my three suggested
preconditions? I know, not too slow still must be defined. But as IMHO
it is the least important condition I don't expect that this will be the
most critical point.


If 'not too slow' means that the rest of the automated testing has to be
done by the QA team, then we do not need mandatory tests for developers,
because QA will then need the same effort to check the CWSes.

For me it is important that the automated testing time is spent between
the development process

Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-31 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer wrote:
[...]

You are right, not all regressions in the master were found by the
automated tests. But some of them were found when more than the mandatory
tests were run. In the past only 2 smaller tests were mandatory for
approving a CWS. Many testers run more than these tests, but not all do.
Therefore some regressions went into the master which could have been
identified by the test cases.


I didn't talk about tests on a CWS (I know that we only have a few
mandatory tests), I was talking about the regressions that haven't been
detected by the release testing on the master.



We will never find all regressions with the TestTool or any other tooling,
or with human testing. This has nothing to do with 'release testing' as
such. We do not have test cases which identify problems in displaying
documents or anything similar; perhaps intensive use of ConvWatch could
help a little, but making that mandatory is another story!

As for the last big regressions, from CWS aw024 or the CWS with automatic
styles in Writer: I do not yet have a solution for how such regressions
could be minimized in the near future. Our test mechanisms do not find
such regressions; only a mass of human testers could help here. So we are
trying to bring more QA members of the community to the relevant master
builds for intensive testing of the new features.

So I do not think it makes sense to discuss only the 'release testing'
mode. In the past the regressions were integrated before QA switched into
this mode.


We should try to identify tests that will be able to detect regressions
in code where we know that it is prone to regressions. I don't want to
make tests mandatory if it tests code that most probably will not create
a single regression in 5 years or so.


This isn't the case.

[...]

and then deciding how to deal with them. If you think that the 45 test
cases identified by the QA team are a proper selection we should have a
closer look on them and identify which code they test.


When I read all your mails, I get the impression you know which code
brings in the regressions ;-)

As I wrote some mails ago, my suggestion is to introduce only a small set
of mandatory tests, but to provide a way to select testing areas. Then you
can run dedicated tests on your implementation, and you will not run
toolbar tests on your bugfix for automatic styles.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-30 Thread Thorsten Ziehm

Hi Frank et al.,


Frank Schönheit - Sun Microsystems Germany wrote:

Hi Oliver,

Do you have such tests? Those that are able to find more regressions 
than they overlook?


Hmm? How do you measure *this*? If they find regressions, that's good.
Every test will overlook some regressions.

Those that run only several hours not weeks like the 
current ones?


That's important indeed. If I have to wait several days between finishing
my builds and passing the CWS to QA, just because of the test, this
would certainly be a serious hurdle.



Why is it a problem to wait 1-3 days before you set a CWS to state
'ready for QA'? Usually it doesn't prevent you from working on another
CWS. So the time for running the tests is not so important to me.

It is more important that the tests do not run on the developers' own
machines. Only then will they not hinder Sun-internal developers or
developers in the community from working on other projects. We need a
tinderbox with a defined environment which builds the install sets,
installs them, runs the smoke tests and then the TestTool tests. Then the
time for running the tests isn't important.
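
As a sketch only, such a tinderbox sequence might be driven by something
like the following Python loop; every command name here is a placeholder,
not the real OOo build tooling:

    # Sketch of the tinderbox stages: build install sets, install, run the
    # smoke tests, then the TestTool tests.  All commands are placeholders.
    import subprocess
    import sys

    STAGES = [
        ["./build_install_sets.sh"],          # placeholder build step
        ["./install_office.sh", "/tmp/ooo"],  # placeholder install step
        ["./run_smoketest.sh", "/tmp/ooo"],   # placeholder smoke test
        ["./run_testtool.sh", "--required"],  # placeholder mandatory GUI tests
    ]

    for stage in STAGES:
        result = subprocess.run(stage)
        if result.returncode != 0:
            print("stage failed:", " ".join(stage), file=sys.stderr)
            sys.exit(result.returncode)       # stop on the first failing stage
    print("CWS build passed all tinderbox stages")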



Additionally, I'd like to raise the wish for deterministic tests. Our
current testtool scripts do not always fulfil this - the QA guys using it
all day long will tell you there are tests which sometimes fail, but
succeed the next time you run them. That'd be unacceptable for required
tests.


You are right that the TestTool does not run without problems, but the
team is working on making it more deterministic.

This shouldn't be a problem at all, though, because test scripts with
known problems will not be used for the test runs in the first step. Only
test cases which run deterministically will be checked in for this
tooling.


And, while I am at bashing the testtool :) (no pun intended):
Tests are only useful if you are able to track down the problem with a
reasonable effort. If the outcome of the test is foo failed, but it
takes you hours to just identify what foo this is, then the test is
useless. Everybody who ever tried finding an error reported by the
testtool knows that this is ... strictly important.



The problem is debugging in the TestTool. Often the reported errors are
easy to understand and easy to reproduce, but you are right, some cases
are trickier. And often the users of the TestTool do not understand an
error message like 'button XYZ is disabled when doing YXZ = bug'. :-(

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-30 Thread Thorsten Ziehm

Hi Martin,

[...]

I still think that making a test mandatory is not the first step in the
process. I would like to name these requirements with the following priorities:


1. Test should be reproducible and generate easy-to-read and unambiguous
logs with clear error codes.


done, with the planned solution


2. Test should be run within approx. 1 hour.


Why 1 hour? Why not one night or 24 hours or so? It is only machine 
power and resources you need for it.


3. Test should cover 20% of the functionality of each application 
(typically used function)


done, with the planned solution


4. Test should be mandatory.


done, with the planned solution

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-30 Thread Thorsten Ziehm

Hi Mathias,


Do you think it's worth it?

I think it's not primarily a matter of running the regression suite
before QA approval but of having a small set of meaningful regression
tests available.


Exactly, and I would prefer to have regression tests based on the API or
complex test framework and not based on the GUI testtool. We shouldn't
raise even more barriers to contribution.



I'm really on your side! But how many complex tests do we have? How high
is the code coverage of API tests in complex scenarios? I do not think
that we have enough test scripts at the code level. If we get them, then
we should make them mandatory quickly. Perhaps then the tests on the GUI
level with the TestTool could become unnecessary.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-30 Thread Thorsten Ziehm

Hi Frank,

Frank Schönheit - Sun Microsystems Germany wrote:

Hi Ingrid,

Why is it a serious hurdle to wait let's say 3 days? For me this is not 
so obvious.


Imagine your frustration if the test fails after 2 days and
20 hours ... Or the turnaround times you have when the test fails there,
you fix it, and the test fails again an hour later.


This is the current situation when QA has to run the automated test
scripts - except that you also have to wait until a tester has the time
to run them, so you wait even longer for the error report. I do not see
a problem here.



Or imagine such a test run (failing or not) short before a release,
where you have a small CWS fixing a showstopper only. We don't really
want to have a mandatory 3 day delay in such situations, do we?


You are right: for showstopper testing a time frame of several days is
too long. But in these cases only a small set of changes is integrated in
the CWS, so the testing must be flexible enough to select only the tests
which cover the changes rather than the complete office. But this must be
possible.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] Can we do more regression testing?

2007-05-30 Thread Thorsten Ziehm

Hi Frank,

Frank Schönheit - Sun Microsystems Germany wrote:

Hi Thorsten,


That's important indeed. If I have to wait several days between finishing
my builds and passing the CWS to QA, just because of the test, this
would certainly be a serious hurdle.

Generally, no. For a normal CWS, cycle time in QA is weeks, so this
really does not add significant overhead.


I call the difference between 3 weeks and 4 weeks significant. Also,
there are more than enough CWS where your statement simply doesn't hold
(especially for small CWS in pre-release times), where several days of
test runs would be even worse.


We are currently talking about a subset of 45 test cases which are also
used for release testing. These tests need roughly one night - at most 12
hours. If I remember the discussions with Jörg correctly, we want to make
a subset of these tests required first; they should run for roughly
4 hours. But if you want to test more, you can select more.

In past releases QA ran these 45 tests on most of the showstopper CWSes,
and we did not shift the release because of the running time of these
tests.


We are not talking about all test cases and test scripts, which would run
for roughly 2 weeks on one platform.

 Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] a new crash bug

2007-05-23 Thread Thorsten Ziehm
Hi Gao,

first of all please write an issue in IssueTracker for this bug.
Without an issue, this bug report will disappear over time.

Thanks
  Thorsten


gaopeng wrote:
 Hi all,
 
 Now I am doing some work related to OpenOffice.org. Recently I
 encountered a new crash bug. I have spent some time on it but I have not
 found any way to fix it.
 
 The steps to reproduce the crash are as follows:
 1. Create a new Writer document.
 2. Insert -> Envelope; the envelope dialog pops up. Configure the
    envelope and click the 'New Doc' button.
 3. For the envelope that was created, click File -> Save / Save As to
    save the envelope.
 4. File -> Reload, then it crashes.
 
 This bug occurs in both OpenOffice.org 2.1 and 2.2.
 
 
 Has anybody been puzzled by this bug or found out how to fix it?
 I look forward to your reply.
 
 
 Best Regards
 Gao Peng
 
 E-mail : [EMAIL PROTECTED]
 Date :  2007-05-23 
 

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



[dev] Re: [qa-dev] info about red tinderbox status in EIS now available in OOoWiki

2007-04-27 Thread Thorsten Ziehm

Hi all,

could this be the starting point for QA having to set a CWS back to the
CWS owner when a tinderbox result is red? If so, I will add this
information to the Wiki page and link it to the QA CWS approval
process.

I have only one question about the current situation: how can the CWS
owner or QA rep see whether the build breaker isn't also on the master?
Currently issue 75975 breaks the Mac builds.

http://www.openoffice.org/issues/show_bug.cgi?id=75975

Thorsten


Bernd Eilers schrieb:


Hi there!

A new Wiki page about the red tinderbox status shown in EIS, and what you
can do to get more detailed information etc. if you see it, is now
available.


The Wiki Entry is at:

http://wiki.services.openoffice.org/wiki/RedTinderboxStatusInEIS

Feel free to add any useful information you have in this area there.

Kind regards,
Bernd Eilers

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Re: [qa-dev] info about red tinderbox status in EIS now available in OOoWiki

2007-04-27 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer schrieb:

AFAIK there are some people already discussing the possible work flow.
So we don't need to rush things.



I know that the release status meeting decided that somebody should
define a process. So far I have only seen open discussion, but I couldn't
see a team working on this action item - perhaps I don't read
enough mailing lists :-( -
So I wanted to speed up the definition of a process with my provocative
question. Currently QA has a big problem:

1. QA cannot find out what the problem behind a tinderbox result is;
   they can only decide based on the colour of the result
2. we cannot reject the CWS, because development doesn't know exactly
   how to analyze and/or fix the problems
3. the code line maintainers are justifiably angry when QA approves a CWS
   with build breakers
4. it isn't possible to compare the results of the master build with the CWS

I talked with the Gatekeeper and he told me that he needs a clear
statement on what should be done with CWSes where the tinderbox results
are not green (or at least what to do when they are red).


For me a process that rejects CWS with red tinderbox build does not
meet these quality requirements at the moment. A new process like this
one must be formerly agreed on by *all* participants. And it must be
tested for some time (like we test our products) before it is rolled out
and made mandatory for all CWS.


You are absolutely right here.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Specification Process Possibilities ... what about a wiki?

2006-10-31 Thread Thorsten Ziehm

Hi Kohei,



There are mainly two complaints I have with the current specification 
project:


1) It asks for way too many details, especially in the UI design
section.  It's not too bad if the feature involves only one
control/widget change.  But once the change involves three or more
dialogs, with each dialog having an average of 7 to 8 controls, it
starts to become a real pain in the rear, aside from the fact that the
Basic runs and errors everytime I change the value of that combo box
(ok, I'll stop complaining about this because I think I've already got
my point across to the right person :-).



Without details a new feature isn't clear.
An example: you want to design a new car, and one important part is the
tires. You tell the development team that you need a tire for your new
car. In your mind you know all the details you need for the tires -
height, width, rim, how many bolts etc. Without writing this into a
specification, the other team does not know what you need and what it is
good for.

The specification template is meant as a support so that you do not
forget anything in a dialog - and there are many things you can forget
when you have to work platform-independently and language-independently.


2) The target audience is not very clear.  Thanks to this thread,
though, now I'm beginning to see who the specification documents are
intended for (mostly for QA, right?).  But without knowing who will
actually read my specification and how my spec will get used, I'd have
a hard time setting the right level of granularity so that I can do
absolutely minimal but still make my spec document useful for someone.



The specifications are for the developers, the quality assurance and the
documentation team.

- without a specification the developers don't know what has to be
  implemented
- without a specification the QA members don't know what has to be
  checked/tested
- without a specification the writer of the online help doesn't know what
  has to be written



Call me lazy, but when I'm writing a spec, I don't feel productive, so
I just want to get it over with as quickly as possible.  Aside from
the fact that, when I'm trying to write a spec late at night after my
kids are asleep, my motivation meter begins to fall rapidly, and my
typing speed begins to crawl. ;-)



I know that. Documentation isn't my favorite job either. But it has to be
done so that my boss and my team know all the important things. It's the
same with writing specifications. You write down all the things you have to
communicate, so that all the teams around you (and the users) know how your
implementation works.


But doesn't an externally contributed feature come pretty much when
it's complete (or nearly so)?  If so, then a spec is written after the
fact, which means the spec can be easily retrofitted to be in sync
with the code.  In this scenario, a spec cannot be used to verify the
implementation, because the implementation is done first.  You can do
the opposite, perhaps, to verify the spec against the implementation.
I did that for my first specification (natural sort), and I'll
probably do it for my second spec (solver), too.

So, my workflow seems different from yours, which itself may be a
problem when being involved in this project.  But that's how I write
my code in my free time.



I learned this during my studies: before implementing, draw flow charts,
write down all dependencies, and make small specifications for every action
you want to implement. That reduces the re-work costs. What I actually did
was write the code first and create the flow charts afterwards. I found the
dependencies because my code failed in its first implementation.

I learned that it takes more time to implement the code first and write
down the specification afterwards. But the coding was more fun, and I
didn't change my habits during my studies.

Now I work at a company and have learned that re-work really does cost
money and is very annoying for the users. Therefore I am now a fan of
the processes that are taught at university.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Specification Process Possibilities ... what about a wiki?

2006-10-31 Thread Thorsten Ziehm

Hi Michael,

Michael Meeks wrote:

Hi Mathias,

So, while broadly agreeing with most of what you say:

On Mon, 2006-10-30 at 08:53 +0100, Mathias Bauer wrote:

Without the spec the QA wouldn't be able to even find bugs in
many cases (with the exception of obvious ones).


We hear this a lot. And, now we know that specifications are frequently
inaccurate, buggy / out of sync with the code anyway. So - I'm having


The team which worked out the specification process knows that the
specifications are not of the highest quality yet. This is a learning
process for everyone involved in the specification work (User Experience,
Development, Quality Assurance and Documentation). So each team still makes
errors now. But there are fewer errors than without a process, as in
the OOo 1.1.x time frame.


problems understanding what -exactly- QA need here. It'd help to have 10
representative examples of times when a specification has actually
helped distinguish between bugs  features, and what was done with that
information [ writing tests / whatever ].



You can take nearly any of the new specifications and the corresponding
CWS. You will see that very often bugs were filed after the CWS went
into QA for the first time.


Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Specification Process Possibilities ...

2006-10-31 Thread Thorsten Ziehm

Hi Michael,

many questions have already been answered by Mathias. I want to answer
some of the open ones.

Michael Meeks wrote:


As for the quality of OO.o 2.0.0 [ created AFAIR after the
specification process was introduced ], I think it was a fairly
'interesting' release (defect wise), hence the compound slippage etc.



Yes, the specification process was introduced in the OOo 2.0 time frame. But
it didn't work well, as you said. The bug count was high in OOo 2.0.
Therefore a template for specifications was developed to eliminate the
most important faults.




I think this is one reason why OpenOffice.org is so successful.


Do you have data to back that up ?



It isn't possible to get hard data here. But from my own impression and
from discussions with many people, quality is the highest priority.


Perhaps their bugs are of the form:

OO.o is incredibly slow to start



Yes this is a bug. But I think it is more than one bug.


Good unit testing [ as in I can run dmake check in configmgr and get
a yes/no answer in a few seconds ], such as I've implemented in previous
projects I've worked on [eg. the several thousand lines of unit test in
ORBit2] is invaluable. It helps re-factoring, it improves quality and
confidence in the product, and so on. Of course UNO components
substantially complicate such unit testing in OO.o (along with
(apparently) a love of only testing what can be tested via UNO, from
Java ;-). At least, I've not been able to understand the useful / common
recipe for running do tests or whatever in a given source directory &
getting useful data - I'd love to be shown how this works.



Unit tests and tests with the automated TestTool are different levels of
quality assurance in software. Unit tests are used to check the code
itself. The next level is API tests, which check the integrated code in
its whole context. At the UI level the automated testing with the TestTool
is done. If the first levels are not done thoroughly, the UI testing
becomes more difficult - mostly the general stability is broken, or
something similar.

When we have more testing at the integration level, it will reduce the
amount of UI testing needed with the TestTool.
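
To illustrate the difference between these levels, here is a minimal sketch
of an "API level" check driven through the UNO scripting bridge (PyUNO). It
is only an illustration, not one of the test suites actually used in the
project; it assumes PyUNO is available and that an office instance was
started with a listening socket (e.g. soffice
"-accept=socket,host=localhost,port=2002;urp;"):

import uno

def connect(port=2002):
    # Bootstrap a local UNO context and resolve the remote office context.
    local_ctx = uno.getComponentContext()
    resolver = local_ctx.ServiceManager.createInstanceWithContext(
        "com.sun.star.bridge.UnoUrlResolver", local_ctx)
    return resolver.resolve(
        "uno:socket,host=localhost,port=%d;urp;StarOffice.ComponentContext"
        % port)

def test_new_writer_document_is_empty():
    # API-level check: everything goes through the public UNO interfaces,
    # not through a single class (unit level) and not through the UI
    # (TestTool level).
    ctx = connect()
    smgr = ctx.ServiceManager
    desktop = smgr.createInstanceWithContext("com.sun.star.frame.Desktop", ctx)
    doc = desktop.loadComponentFromURL("private:factory/swriter", "_blank", 0, ())
    try:
        assert doc.getText().getString() == ""
    finally:
        doc.close(False)

if __name__ == "__main__":
    test_new_writer_document_is_empty()
    print("API-level check passed")

A unit test would instead exercise a single class directly, and a TestTool
script would drive the same scenario through the menus and dialogs.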


So - I need a deeper understanding of what you understand by quality
and how you weight these statements:



User perspective:
In my opinion we had the following goals in the last updates.
(I changed the order of your points.)


+ Quality is OO.o not crashing (stability)
+ Quality is OO.o not losing data
+ Quality is OO.o loading & saving my 'foreign' data files
+ Quality is OO.o performing acceptably

+ Quality is OO.o not consuming all available memory

+ Quality is OO.o behaving ergonomically as I expect
+ Quality is OO.o being slick & beautiful
+ Quality is OO.o being feature competitive with others


Code contributor perspective:
These are important points too. They are, and should be, goals for
the development. I cannot say much about them, because I am not
a professional in code quality.


+ Quality is OO.o source code being readable
+ Quality is OO.o source code being maintainable
+ Quality is OO.o source code being consistent
+ Quality is OO.o source code not being cut/pasted


The quality (from both the user and the developer perspective) can be
increased with specifications. But specifications are not themselves part
of that quality.

+ Quality is every aspect of OO.o having a detailed spec.

Regards,
  Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Specification Process Possibilities ... what about a wiki?

2006-10-31 Thread Thorsten Ziehm

Hi Thorsten B.


The specification template is meant as a support so that you do not forget
anything in a dialog. And there are many things you can forget when
you have to work platform-independently and language-independently.


I think the basic misunderstanding between our two camps (Sun people
vs. OOo volunteers) is the fact that the typical workflow is simply
radically different.



I see that the workflow is different, but why should it be? Re-work is
painful for every developer and everyone else who works on such a product.
Therefore the workflow worked out by Sun should help to reduce the hours
spent in spare time. If somebody wants to code first and write the
specification down afterwards, that is fine for me. But I have learned that
this takes more time than working the other way round.

I do not want to dictate whether a developer (internally or in the
community) creates the specification before or after the code
implementation. As you wrote, the most important thing is that the changes
are documented. Which tooling is used does not matter to me. Use a wiki, or
HTML, or the specification template. But do not forget to document all the
important things listed in the template.

Currently the team has worked out a Writer document with Basic in it. If
somebody can create a similar wiki template, perhaps it can help to reduce
the barrier between the two camps (Sun and community).



Absolutely correct. And I'd recommend every community developer
starting to implement a major feature to spend some time on planning &
discussion, even doing UI mock-ups. But we shouldn't _force_ them to
do that, and we shouldn't use that as an all-or-nothing argument for
the spec process. We should request what we really need
(documentation), and leave all the rest unspecified. ;-)



I totally agree.
As I have often written in this mail and in other mails, the description of
the specification process and the approval process of a CWS (or whatever)
should help everybody to work in an optimized way. Whether the specification
(documentation) is written before or after the code is implemented in a CWS
is not important to me. But when you hand over a CWS to other people, or
before you want to integrate a new feature (especially UI features), the
specification (documentation) must be in its final state.

Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Specification Process Possibilities ...

2006-10-25 Thread Thorsten Ziehm

Hi Michael,


them: We need this burdensome process for Higher Quality !
us:   But let's face it, quality is still not good
them: Then we need -even-more- burdensome process !
  repeat ad nauseam.


Nobody said that it is necessary to introduce _burdensome_ processes to get
higher quality. If everybody who works on such a big and complex project as
OpenOffice.org could take enough care of quality for each piece of newly
integrated code, then quality assurance, other control mechanisms or
processes would not be needed. And there is no difference between a new
feature and a bug fix here. But in the past years especially _I_ learned
that it is easier to bring in a mass of new features than to bring stability
and high quality into the product. The OpenOffice.org project is too complex
for anybody to have an overview of all the dependencies. Through the hard
work of raising the quality from one update to the next, some gaps in the
work flows were identified. And to avoid such problems, some processes like
the new specification template were introduced.

Throughout industry you see processes and control mechanisms. Without them
we would not have such a high standard of living. So why should the software
industry and OpenOffice.org give up processes and control mechanisms?

That the quality is still not good - you are right in some cases. Yes, we
still have more than 9000 issues open, and nearly 6500 of them are defects.
But in my opinion OpenOffice.org 2.x is more stable, more usable and more
bug free than ever. We still have problems in special areas where people can
say OpenOffice.org is still in beta status. But for general private and
business use the office has very high quality.

I think this is one reason why OpenOffice.org is so successful.

If somebody thinks the quality isn't high enough, why are they working on
new features instead of working on fixing bugs?


That if Sun QA
wants to include all this process for Quality reasons, then -it- must
shoulder the burden [ at least for volunteer contributions ].


That's not the point. It isn't possible for the Sun QA team to check the
quality of all integrated code. Therefore processes are defined (e.g. how
to approve a CWS) so that every community member and developer can help
here. Without such definitions and tools to support the QA work, it would
not be possible to handle so many code changes.

One point was not understood for years in the StarOffice team at Sun, and
for other software products around the world: quality assurance cannot put
quality into the product. The developers bring the quality into the code,
and QA has to do regression testing. If the quality of the code is bad, QA
cannot turn it into a good product. So the code must be better, and so must
the documentation of the changes, because then the regression testing is
more efficient. That's one reason why the specifications are needed.


Having a formalised process (1 paragraph necessary?) for quickly
including code into OO.o that is disabled in all Sun builds, and quickly
getting fixes / changes into that etc. would be much appreciated. This
is something we have been wanting for some years now; but no action.


I learned from the past that quality takes time. If you want to get quick
fixes and changes into a code line, the quality will decrease. What do you
want to have: a product with higher quality, or a product with many more
features and changes? I have listened to some customers, and they told me
they want higher stability and higher quality. But you are right that,
additionally, they said they want feature xyz. Often, though, these are
features which are very specific to their needs and have no place in an
open source project like OpenOffice.org. That's why Sun and other companies
make their own brands of OOo.


Regards,
  Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Valgrind automatic tests for accessibility

2006-10-12 Thread Thorsten Ziehm

Hi Mathias,

Mathias Bauer wrote:

Thorsten Ziehm wrote:


Hi Caolan,

the automated testing team at Sun tried to determine the code coverage of
their TestTool test scripts. They ran the tests with accessibility enabled
and with it disabled on the system. The result in our environment was the
same.


Code coverage is one thing (and it's only status quo!), code behavior
another. Does just switching on a11y support in the configuration
*without actually using it* really make the testing with the testtool
impossible? How does it hurt us?



It doesn't make testing with the TestTool impossible. But the accessibility
code isn't touched during testing. That is why I mentioned the analysis of
the code coverage of the TestTool scripts.

If it becomes possible to check the accessibility functionality with the
TestTool too, then the code coverage of the scripts will increase. But as
I heard, this isn't possible without using the accessibility tools.


Regards,
  Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Valgrind automatic tests for accessibility

2006-10-12 Thread Thorsten Ziehm

Hi Caolan,

Caolan McNamara wrote:

On Thu, 2006-10-12 at 09:05 +0200, Thorsten Ziehm wrote:

Hi Mathias,

Mathias Bauer wrote:

Code coverage is one thing (and it's only status quo!), code behavior
another. Does just switching on a11y support in the configuration
*without actually using it* really make the testing with the testtool
impossible? How does it hurt us?

It doesn't make testing with the TestTool impossible. But the accessibility
code isn't touched during testing. That is why I mentioned the analysis of
the code coverage of the TestTool scripts.


If it becomes possible to check the accessibility functionality with the
TestTool too, then the code coverage of the scripts will increase. But as
I heard, this isn't possible without using the accessibility tools.


Maybe something of interest here then might be dogtail which RedHat uses
for app GUI testing. It uses the a11y interface to traverse the app's GUI
elements, and so tests some a11y functionality as it goes by nature of
how it works.

http://people.redhat.com/zcerza/dogtail/
http://people.redhat.com/zcerza/dogtail/doc/apps/categories.html



The resources in my team are limited. Currently I do not see any chance for
us to use dogtail to check the accessibility code under Valgrind. Perhaps a
team at RedHat, or somebody else from the community, can do this?
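
For anyone who wants to try it, a dogtail script is only a few lines of
Python. The sketch below is illustrative only: the application name
'soffice' and the 'File' menu label are assumptions, and it requires
dogtail, an a11y-enabled desktop, and an office instance already running
and exposed over AT-SPI.

from dogtail import tree

# Attach to the running office as exposed over the AT-SPI accessibility
# interface; every node touched below goes through the accessible object
# hierarchy, so the a11y code paths are exercised as a side effect.
office = tree.root.application('soffice')

# Find the File menu by its accessible role and name, and open it.
file_menu = office.child(roleName='menu', name='File')
file_menu.click()

# Print the accessible subtree that was created for the open menu.
file_menu.dump()

Running the office itself under Valgrind while such a script drives it
would then also cover the a11y objects Caolan mentions.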


Regards,
  Thorsten

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: [dev] Valgrind automatic tests for accessibility

2006-10-11 Thread Thorsten Ziehm

Hi Caolan,

the automated testing team at Sun tried to determine the code coverage of
their TestTool test scripts. They ran the tests with accessibility enabled
and with it disabled on the system. The result in our environment was the
same.


Perhaps you will find another way. But the testing team's understanding is
that the accessibility code is only activated when the connection comes
through the accessibility tools/bridges or the like; it isn't activated
merely by enabling accessibility in the system.


So we cannot help here with Valgrind testing when accessibility is
activated in the system.


 Thorsten


Caolan McNamara wrote:

On Wed, 2006-10-11 at 14:54 +0200, Nikolai Pretzell wrote:

Hi,

regarding the Valgrind Tasks 
(http://wiki.services.openoffice.org/wiki/ValgrindTasks) somebody 
(Caolan?) asked some time ago, if we could do something like this for 
accessibility features.


I have contacted the people creating the automatic tests we use, and the 
answer is unfortunately: No, with the current tools this is not 
possible. Accessibility features trigger the Office via third-party tools,
and the TestTool cannot catch signals in the Office caused by third-party
applications; therefore there are no automatic test
scripts we could use with Valgrind.



Not sure if we're talking about exactly the same thing, but maybe I just
don't understand the current valgrind test harness. I didn't really mean
to actually test a11y features specifically; I just meant that when
running OOo under valgrind on e.g. Linux, gnome's a11y is enabled,
which triggers OOo's a11y to be enabled and the various OOo a11y objects
created during normal operations. i.e. not poking OOo directly with any
external a11y tools.

C.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]