Frank Schoenheit, Sun Microsystems Germany wrote:

> Hi Bjoern,
> 
>> Assertions should be tested with the common tests (cwscheckapi has
>> decent code coverage) preventing the non-pro master to become unusable.
> 
> Ah!
> 
> Did you know that testtool, the program for running automated UI level
> tests on OOo, can capture and report assertions?
> 
> If you claim that assertions should be "tested with the common tests",
> this immediately implies that testtool runs should be done on
> non-product builds. Which brings the "QA should use non-products" topic
> back onto the plate.
> 
> And I continue to think that if we're serious about assertions being a
> cheaply available "first line of defense against bugs" (and it seems we
> all agreed on that), then non-product builds should get a much higher
> standing than they have now - in all departments: engineering, QA,
> release engineering.

As it seems, most of us believe that, at least in its current state, our
code is not ready for aborting assertions, so the discussion of whether
aborting assertions are a good idea is perhaps overshadowed by the fear
of getting an unusable product for months. My suggestion is to leave
that aside for now and think about how we can first reduce the number of
assertions triggered when executing code.

Recently I ran a CAT0 test with a non-pro build and was astonished (or
should I say: horrified) by the large number of assertions I got. Some
of them were asserts for missing resources - this definitely points to a
problem in our ability to detect severe errors. Getting all autotests
assertion-free could be a first step towards improving the situation
considerably. The next step would be to keep that state, and it is
interesting to think about how we could achieve that.

I tend to like the idea of using non-pro builds in QA, but we must
recall why we stopped doing that in the past. Rüdiger mentioned the
performance problem; there may have been others. The general statement
"QA should use the code that the user gets" sounds reasonable, but it is
interesting to see where it leads us once we take a look at what
actually happens in the code.

Let's have a look at the performance first.

IMHO the tests are so slow anyway that you never get the result of a
test the same day you started it (usual working hours assumed ;-)). A
CAT0 test takes ~8 hours (net time), so waiting one or two hours more
wouldn't be a killer. I doubt that the performance decrease would be
larger than that, though of course this needs verification. Or do we
already have some (at least approximate) data that would help to judge
my rough estimate?

Now let's look at "QA should use the code that the user gets". This
boils down to the question: does a non-pro build hide bugs that a pro
build would show?

Think about that - do you really think the probability of such an
incident is so high that it justifies abandoning the superior bug
detection that a test with activated debugging code gives us? Should we
follow a dogma (in a neutral sense!), or should we simply weigh the
trade-off: by using non-pro builds in QA we lose something (100%
certainty that we test the exact code we deliver to users), but we could
gain a lot.

A short litmus test: all errors I found in the CAT0 test with a pro
build of my CWS also appeared in the CAT0 test with the non-pro build I
mentioned above, so nothing got lost. But in the test with the non-pro
build I found a lot of additional problems (because I got assertions)
that I would never have seen in the pro build. The fact that none of
these problems had been created by the CWS is another story that should
be told another time.

A variant of "everybody uses builds that give assertions" could be that
QA uses assertion-enabled builds for CWS tests, but pro builds for tests
on the master.

OK, but what about the release code lines where we only do "pro" builds?

I want to throw in an idea that was discussed some months ago, just to
see if it could be a good compromise. Like all compromises it has its
pros and cons, so I would like to encourage you to think about the pros,
as (I'm sure ;-)) you will immediately discover the cons.

The idea is the unification of the pro and non-pro builds. We discussed
it back then only as a way to save time in release engineering and a
nice opportunity for additional diagnostics of customer problems, but
maybe it can help in our current discussion as well.

When we thought about getting rid of the pro/non-pro differences, I made
a rough estimate of the influence on code size. I listed all libraries
that are loaded on startup and compared their sizes in the pro and
non-pro builds (Windows and Linux). On both platforms the code loaded on
startup would become ~6% larger by converting all "#ifdef DBG_UTIL" to
"if (bDBG_UTIL==true)". The influence on startup performance would be a
little less than this 6%, but probably measurable. IMHO bearable, but -
as always - YMMV. From a closer look at which code exactly gets loaded,
I estimated that a large part of it could be converted to load on
demand, so the influence would become even smaller.

"bDBG_UTIL" would be a global variable that can be set from the
environment, a configuration file etc. [Before someone asks: all
"#if(n)def PRODUCT" statements have been converted to use DBG_UTIL
instead already.]
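To make the idea concrete, here is a minimal sketch of how such a
runtime switch could look. This is not the actual OOo implementation;
the environment variable name and the function names are made up for
illustration:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Hypothetical runtime replacement for the compile-time DBG_UTIL
// switch; "OOO_DBG_UTIL" is an assumed variable name for this sketch.
static bool lcl_ReadDbgUtilFlag()
{
    const char* pEnv = std::getenv("OOO_DBG_UTIL");
    return pEnv != 0 && std::strcmp(pEnv, "1") == 0;
}

// Set once at startup; could equally be read from a configuration file.
static const bool bDBG_UTIL = lcl_ReadDbgUtilFlag();

// Formerly guarded by #ifdef DBG_UTIL ... #endif:
void CheckInvariants(int nValue)
{
    if (bDBG_UTIL)
    {
        // Debug-only diagnostics: always compiled into the binary,
        // but in a "pro" run this costs just one branch on bDBG_UTIL.
        assert(nValue >= 0 && "invariant violated");
    }
}
```

The debugging code is always compiled in (hence the ~6% size growth),
but executed only when the flag is set - which is also what would make
the additional diagnostics available for customer problems.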

If nothing more than checking bDBG_UTIL is done in all the places where
debugging code exists, I doubt that the cost will be measurable except
in some special cases. So overall I don't expect a discernible
performance loss for the version a *user* will get and execute (though,
as I said, it is perhaps just measurable). Admittedly, this has never
been verified by performance tests, but IMHO it sounds quite probable
and reasonable.

We never verified my assumption about the performance of a unified
build, as it would take some effort to get such a build in the first
place. Investing that effort only to be told afterwards that "5% is too
much" wasn't a very attractive prospect. Agreeing on a performance goal
for such a build up-front could allow us to do it, if we are confident
we can reach it.

Regards,
Mathias

-- 
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to "nospamfor...@gmx.de".
I use it for the OOo lists and only rarely read other mails sent to it.
