Michael Meeks wrote:

>       Which is strange. At least, from where I stand we need more quality
> [ by which I mean all-around polish ] not less; from a UI perspective we
> need thousands of tiny ergonomic fixes all over the place: almost all of
> them trivial in and of themselves; but if each one requires a multi-page
> specification, they will never get done.

You are mixing some things up here. Nobody said that we need a spec for
each and every "tiny ergonomic fix". We need them for new features - e.g.
a quickstarter on Linux. :-P

>       Furthermore, I find the quality of the QA tests generally rather poor.
> My recollection (which may be wildly astray) is that broadly there is a
> large list of StarBasic test cases [ which is good ], but many of these
> are known to fail / spit warnings, they take several (3+) -days- to run
> (mostly with the machine idle / sleeping), and they're not particularly
> reliable :-) [ is that unfair ? ]

I agree here, but that's not the fault of the automated QA tests; they
are just our last resort, because our code was not implemented to be
unit-testable from the very beginning (more than 10 years ago). So we
have parts of the code that are unit-testable (and we have tests for
them), but most code unfortunately is bound to some vcl/sfx/svx/etc.
stuff that makes unit testing impossible. So while we don't think that
the automated QA tests are the greatest thing on earth, we are glad to
have them, to get at least a certain amount of regression testing.

The long duration of the tests is indeed a problem. We have moved the
target of a considerable number of CWSes from OOo2.1 to OOo2.2 just
because we couldn't finish their testing in time (before code freeze) -
and most of them (if not all) are not from "non-Sun" developers, BTW.

We are currently investigating how we can get faster tests; one
direction we are looking into is avoiding, or at least reducing, the
idle/sleeping times. Other ideas are welcome. My very personal opinion
is that we should have more API (code) based tests and fewer GUI
testtool based ones, but I know that there are other opinions. This must
be discussed.

For features with a user interface we also need some testing "by hand",
and that's mainly the testing where the spec is needed. In my
understanding the automated testing is done afterwards and is our best
weapon against regressions (as I stated above, nobody would complain if
we had better ones).

>       Good unit testing [ as in I can run "dmake check" in configmgr and get
> a yes/no answer in a few seconds ], such as I've implemented in previous
> projects I've worked on [eg. the several thousand lines of unit test in
> ORBit2] is invaluable. It helps re-factoring, it improves quality and
> confidence in the product, and so on. Of course UNO components
> substantially complicate such unit testing in OO.o (along with
> (apparently) a love of only testing what can be tested via UNO, from
> Java ;-). 

There's nothing wrong with doing unit and API tests in Java.
And UNO components don't make anything more complicated; on the
contrary: as they have a stable API and follow the idea of separating
interface from implementation, they are easier to test than other C++
code in OOo. And - again thanks to UNO - you can write your tests in
Java, C++, OOo Basic or Python, just as you prefer.

Or is there anything that you find more complicated about unit testing
UNO components? If so, please give an example.
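
To sketch why the separation of interface and implementation helps (in
plain Python, without real UNO; `XTextRange` and `SimpleTextRange` are
invented stand-ins, not actual UNO types): the test is written once
against the interface and can be reused for any implementation of it.

```python
import unittest

class XTextRange:
    """Stand-in for an interface: declares what, not how."""
    def get_string(self):
        raise NotImplementedError
    def set_string(self, text):
        raise NotImplementedError

class SimpleTextRange(XTextRange):
    """One concrete implementation of the interface."""
    def __init__(self):
        self._text = ""
    def get_string(self):
        return self._text
    def set_string(self, text):
        self._text = str(text)

class TextRangeContract(unittest.TestCase):
    """Tests written purely against the interface contract."""
    def make_range(self):
        return SimpleTextRange()   # swap in another implementation here
    def test_roundtrip(self):
        r = self.make_range()
        r.set_string("Hello")
        self.assertEqual(r.get_string(), "Hello")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TextRangeContract)
assert unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

A second implementation only needs to override make_range() in a
subclass of the contract test; the assertions themselves stay untouched.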

> At least, I've not been able to understand the useful / common
> recipe for running "do tests" or whatever in a given source directory &
> getting useful data - I'd love to be shown how this works.

Sorry, I don't understand this. There is no relation between the
automated QA tests and any particular source directories. And as I said,
if the code were testable by unit tests we would surely prefer to do it
that way.

>       The problem is you are retarding not just features, but fix inclusion.
> This was dramatically the case with the old-style 1.1.x branch: the cost
> & penalty of back-porting *fixes* was so high that only very
> infrequently did people bother to actually do it, consequently the
> quality of the 1.1.x branch stayed low.

You are again mixing things up here. This is no longer true for fixes in
2.0, and nobody asks for specs for bug fixes. Please give examples of
where a bug fix was not integrated because a spec was missing.

>       My feeling is that -if- you can be sure that commits will (on average)
> fix more than they break, and that you can be sure critical regressions
> are extremely unlikely then accelerating the pace of change will lead to
> higher quality :-)

The problem is that you don't know this beforehand - and in the past one
of our biggest problems was that we introduced code changes and
afterwards the master wasn't usable for weeks. This hit not only QA and
users (who wanted to play with the milestones) but also developers who
needed the new master because of other changes made in that release. It
is much cheaper to let *one* developer wait a few more days for the
integration of his code changes than to put the work of many more
developers at risk through sloppy coding (or sloppy testing).

>       Are you suggesting that on occasion StarOffice chooses a lower quality
> than OO.o in order to satisfy specific customer feature requests ? if so
> I think that's screwed up ;-) 

Don't be silly. With a little bit of goodwill you should be able to
understand what Thorsten wrote. He was talking about the special
releases for special customers that we always do between "official"
releases. Of course they get the same treatment as all other releases,
but we don't have to wait for the next "official" release to hand the
fix (or feature) over to the customer. All fixes/features also get
integrated into the next "official" OOo release (if they are made in
code that is part of OOo and not in one of the few modules that are not
open-source modules).

> OO.o should be of a lower quality in order
> to get great testing & feedback to improve the StarOffice quality; at
> least that is how I would structure it. Indeed - I am surprised that
> OO.o and StarOffice releases are ~concurrent (or that StarOffice
> sometimes leads) - that seems to me to be a recipe for poorer quality in
> StarOffice.

I don't understand how you come to that crazy conclusion. Step back and
think about Thorsten's words with an open mind, not with the explicit
will to use them against him.

You have a lot of experience in development and you are unquestionably a
very skilled developer. I would like to see this put to constructive
work towards a better OOo, and not just to fighting against the current
approach without pointing out better ways. And please accept that "throw
everything in and fix the bugs later" can't be the way to go. Been
there, done that, felt the pain. Don't want to be there again.

We have developed some rules/processes that should help us to create a
better product. I can't speak for everybody, but at least for myself
(and I know also for some others): it should always be possible to
question them in particular cases - but this must be justified, and it
must happen before the fact, not as a little something a few days before
code freeze.

Of course every rule can be changed if good arguments and (IMHO even
more important!) good alternatives are presented.

We have some goals that we try to reach with specs and with our CWS and
QA procedures. We can discuss alternative ways to reach these goals, but
this needs a constructive atmosphere. This thread started with a cry for
help from someone who couldn't do his work properly because the acting
person(s) didn't take care of him. Please let us concentrate on this and
discuss how it can be avoided in the future.

OTOH we have also read in this thread that our current rules and
procedures apparently created unsatisfactory results in some cases:
CWSes didn't get integrated because of build or platform problems,
exaggerated demands for specifications, missing responsiveness in QA or
UserExperience, missing support from code owners, etc. Point taken. So
we must take this more seriously than has apparently happened. Expect to
hear something about this in the very near future. But please don't play
this off against the problem that started this thread.

Ciao,
Mathias

-- 
Mathias Bauer - OpenOffice.org Application Framework Project Lead
Please reply to the list only, [EMAIL PROTECTED] is a spam sink.

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
