Hi Xuefei,

Your questions aren't easy to answer. From them I guess you attended
my presentation at OOoCon in Beijing, am I right?
I will try to answer your questions; see my answers inline.

xduan_bj wrote:

> Hi All,
> I'm very interested in the OOo testing process and have read the CWS
> policy. I have some questions; can anyone help me answer them?
> Thanks a lot in advance.
>
> 1. Does a new feature have only one CWS? I found some documents
> saying one feature may have milestone1, milestone2, ...; for each
> milestone, will the developer create a different CWS or work on
> only one CWS?

I do not know where you found the information about milestones of
a feature, so I cannot say what is meant by "milestone" in that
documentation.
The word "milestone" is used with different meanings inside the
development process. I want to explain both meanings and what each
can mean for working with/on a CWS.

a) Milestone = a big feature is cut into separate parts
When a feature can be developed in separate parts or phases, you can
use a different CWS for each part. Each part has to be complete and
should work as a standalone implementation without the others.
For example, if you want to implement a new user interface, you can
split it into parts: in one CWS you change the toolbars, in another
CWS you change the page view in Impress. But each CWS has to work
without the other implementations.

b) Milestone = different development steps of one feature
As far as I know, "milestone" is used with this meaning in most
cases. The integration team (iTeam) works in iterations on a new
feature. The install sets for each iteration (milestone) are handed
over to the iTeam to check each step of the whole implementation.
This has to be done in one CWS.
E.g. you want to change the toolbars. The first step can be to change
the UI for this control only and present it to the team. The next
step can be to change all the functionality of the buttons on the
toolbar, and so on. At the end, the full functionality of the
toolbars is changed, and after the whole implementation in this CWS
has been checked/QAed, the change can be integrated.

It is important to check in new code via a CWS which works on its own
and does not need any other CWS for full functionality.

> 2. For every issue, we must verify not only in the CWS but also in
> the MWS, right?

Yes (in an ideal world, where we have the resources for it!).
If you want to be safe, you have to check each issue in the CWS and
in the MWS. A problem can occur in the MWS only when the integration
of the CWS fails. Sometimes this is the case when different CWSes
work in the same code areas (the same code files) and code conflicts
show up at the integration of the CWS into the main trunk. When the
code conflicts aren't resolved correctly, the new implementation or
older features in OOo can break. The more CWSes in the same code
areas are integrated at the same time, the higher the risk.

So if fewer CWSes are integrated and the code areas aren't the same,
the risk is low that an implementation in a CWS doesn't work in the
MWS. But you will never know without testing it :-) Therefore we try
to check all integrated issues in the master and close an issue only
after that testing. We try to use the monthly QA testing days for
such tasks, to use as many resources of the community as possible.

> 3. In a CWS, I know we'll do manual tests, performance (load/save)
> tests, and the 26 required automated tests. In the MWS, we'll do all
> automated tests every 3 builds, plus performance tests and manual
> tests. For the manual tests, I want to know: are they the same test
> cases on CWS and MWS, or just random testing?

You are right, we spend the most testing effort on CWS testing. If
the quality of a CWS is good and the issues are checked again on the
master, not as much testing is needed on the MWS.
The general quality of the MWS is checked by the automated tests and
by verifying the integrated issues in the MWS. Coordinated manual
testing of all functionality isn't done on the MWS. There are TCM
test cases which are checked by the L10N teams, but only a small part
of the new and older features in OOo is tested manually this way.

But most of the testing is done by the users of OOo. They report
problems to us very quickly. So the usage of the product (releases
and developer milestones) is most important for the general quality
assurance of OOo.

My vision is to also have manual testing based on test case
specifications for all features on a regular basis. But this needs
resources we currently do not have in the community. We also do not
have the tooling to organize such work and record the test results.
:-( I will work on the tooling this year. I am not a fan of TCM, so
I want to have a new and better tool.

I hope this answers your questions.

Thorsten

