[dev] Test Cleanup

2009-12-14 Thread Stephan Bergmann

Hi all,

I just embarked on a new project, namely to clean up and consolidate the 
various test frameworks and corresponding tests available in the OOo 
build environment.  These include at least:


- C++ unit tests, based on CppUnit and/or testshl2, and Java unit 
tests, based on JUnit and/or qadevOOo.  These are scattered across the 
code base (*/qa, */test, */workben, testtools/, ...); some are executed 
during a regular build (o3tl/qa, basegfx/test, basebmp/test), but most 
are not even compiled during a regular build (and thus rot over time). 
Some of the tests need no special environment, while others require a 
working UNO runtime environment.


- The so-called UNO-API and Complex tests.  These are located in 
*/qa/unoapi and */qa/complex, use OOoRunner from qadevOOo, and can be 
executed via cwscheckapi or checkapi.  They require an installed OOo, 
which cwscheckapi takes care of.  They are not compiled or executed 
during a regular build (they clearly cannot be executed, as they require 
an installed OOo), but it is expected that cwscheckapi is manually 
executed for each CWS.


- The smoke test in smoketestoo_native.  It requires an installed OOo, 
which smoketest.pl takes care of.  It is executed at the end of a 
regular build.


- The ConvWatch and Performance tests, which can be started from the 
EIS page of a CWS.  They require an installed OOo (and also the 
installation of a corresponding master-workspace OOo, for result 
comparison), which they take care of themselves.  They are not executed 
during a regular build; rather, it is expected that they are manually 
triggered from EIS for each CWS (where they are executed asynchronously 
on dedicated machines, and their results are made available in EIS).


Not covered (for now?) are the automated QA tests based on testtool, as 
well as the portability tests (so to speak) of building OOo on a wide 
range of platforms via buildbots and tinderboxes.


The two main problems with the tests listed above appear to be that (a) 
many of them require an OOo installation, and each has invented its own 
way of providing one, all of which are brittle and sooner or later 
start to fail, and (b) the tests that are not compiled or executed 
during each build (CWS as well as master) sooner or later start to rot. 
A third problem is probably that the tests and test frameworks are 
often poorly documented and do things in non-standard ways (e.g., 
testshl2 vs. plain CppUnit), so that it is not easy to maintain 
existing tests and write additional ones.


I would like to address these problems.  My guiding vision in doing so 
is the following perfect world:  There is one OOo installation in the 
solver.  (Ideally, it would automatically emerge from delivering the 
various files directly to the appropriate locations in the solver.)  All 
the tests that require an OOo installation use that one installation. 
(They do not modify it.  Each test probably has its own throw-away 
UserInstallation directory, and soffice is started with appropriate 
switches so as not to show unwanted first-start wizards etc.)  All the 
tests are written using standard tools (the xUnit frameworks CppUnit 
and JUnit).  For tests that have specific requirements on their 
environment (i.e., require a working UNO runtime environment, or an OOo 
installation), there are library routines available to set up and tear 
down such environments, to be called from the xUnit setUp/tearDown 
methods.  Generally, tests are compiled and executed during every 
regular build.  For tests that absolutely cannot be executed during 
every regular build (maybe because they are too expensive, or require a 
dedicated machine, as could be the case for the performance test), the 
main way to execute them is still to have some (manual) makefile target 
for them.  (There may be additional convenience mechanisms, like 
buttons in EIS, but they are strictly secondary.)
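
For illustration, starting soffice against such a throw-away 
UserInstallation could look roughly like the following sketch.  The 
-env:UserInstallation bootstrap override is a real mechanism; the exact 
set of suppression switches shown here is an assumption and may vary by 
version:

  # sketch: run soffice with a disposable user profile and suppress
  # interactive first-start dialogs (switch names as in OOo 3.x; verify)
  soffice -env:UserInstallation=file:///tmp/test-profile \
      -headless -norestore -nofirststartwizard -nologo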


I know that this picture is not perfectly realistic, and that there will 
be obstacles along the way that require pragmatic workarounds.  Still, I 
think it is important to know what the ideal should look like, even if 
you have to deviate from it.


As a first step, I set up CWS sb118 to experiment with turning 
smoketestoo_native into such an ideal test, as a showcase and to gain 
further insight.  As it turned out, the first thing I had to do on that 
CWS was to replace the heavily modified CppUnit 1.8 currently used by 
OOo with a plain, unmodified, latest version of CppUnit 1.12.1.
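
For illustration, a test written against plain CppUnit 1.12.1 could 
look like the following minimal sketch; the two environment helpers are 
hypothetical placeholders for the proposed set up/tear down library 
routines, not an existing OOo API:

  #include <cppunit/TestFixture.h>
  #include <cppunit/extensions/HelperMacros.h>
  #include <cppunit/extensions/TestFactoryRegistry.h>
  #include <cppunit/ui/text/TestRunner.h>

  // Hypothetical placeholders for the proposed set-up/tear-down
  // library routines; not an existing OOo API:
  static void setUpUnoEnvironment() { /* bootstrap a UNO environment */ }
  static void tearDownUnoEnvironment() { /* dispose of it again */ }

  class SampleTest : public CppUnit::TestFixture {
      CPPUNIT_TEST_SUITE(SampleTest);
      CPPUNIT_TEST(testSomething);
      CPPUNIT_TEST_SUITE_END();

  public:
      // called before/after each test case:
      virtual void setUp() { setUpUnoEnvironment(); }
      virtual void tearDown() { tearDownUnoEnvironment(); }

      void testSomething() {
          // a real test would exercise the UNO environment here
          CPPUNIT_ASSERT_EQUAL(4, 2 + 2);
      }
  };

  CPPUNIT_TEST_SUITE_REGISTRATION(SampleTest);

  int main() {
      CppUnit::TextUi::TestRunner runner;
      runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
      return runner.run() ? 0 : 1;
  }

The point being that this is the stock CppUnit cookbook pattern; 
nothing here is specific to any home-grown framework beyond the two 
helper calls.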


Comments on all of this are, of course, very welcome.

-Stephan




Re: [dev] Test Cleanup

2009-12-14 Thread Steffen Grund

Hello,

see my 2 cents below.

Stephan Bergmann wrote:

...


- C++ unit tests, based on CppUnit and/or testshl2, and Java unit tests, 
based on JUnit and/or qadevOOo.  These are scattered across the code 
base (*/qa, */test, */workben, testtools/, ...), some are executed 
during a regular build (o3tl/qa, basegfx/test, basebmp/test) but most 
are not even compiled during a regular build (and thus rot over time). 
Some of the tests need no special environment, while others require a 
working UNO runtime environment.


- The so-called UNO-API and Complex tests.  These are located in 
*/qa/unoapi and */qa/complex, use OOoRunner from qadevOOo, and can be 
executed via cwscheckapi or checkapi.  They require an installed OOo, 
which cwscheckapi takes care of.  They are not compiled or executed 
during a regular build (they clearly cannot be executed, as they require 
an installed OOo), but it is expected that cwscheckapi is manually 
executed for each CWS.


Not directly connected, but I would like to change something: as far as 
possible (from a build-order perspective), I plan to include all Java 
tests in the build, where this has not already been done.  Tests can 
then be executed with dmake or cwscheckapi.
Execution while building is not planned; nearly all tests need an 
installed, runnable office to execute.

When no Java environment is configured, the tests will of course not be 
compiled.

-Steffen




Re: [dev] Test Cleanup

2009-12-14 Thread bjoern michaelsen - Sun Microsystems - Hamburg Germany
On Mon, 14 Dec 2009 15:06:45 +0100
Stephan Bergmann stephan.bergm...@sun.com wrote:

 Hi all,
 
 I just embarked on a new project, namely to clean up and consolidate
 the various test frameworks and corresponding tests available in the
 OOo build environment. 
 ...
 Comments on all of this are, of course, very welcome.

Yay! Sounds like another great step forward for the development
environment.

Best Regards,

Bjoern Michaelsen





Re: [dev] Test Cleanup

2009-12-14 Thread Juergen Schmidt

Stephan Bergmann wrote:

...
I know that this picture is not perfectly realistic, and that there will 
be obstacles along the way that require pragmatic workarounds.  Still, I 
think it is important to know what the ideal should look like, even if 
you have to deviate from it.

Thanks for sharing your vision of the ideal world.  I agree that it is 
important, and of course very useful, to know where to go ...




...
Comments on all of this are, of course, very welcome.

A cleanup and a consolidation of the different available and used test 
frameworks etc. sounds very useful.  And once we have reached a state 
where we have working tests, a working framework, and some 
documentation in place that describes how to write new tests ...

Re: [dev] Test Cleanup

2009-12-14 Thread Frank Schoenheit, Sun Microsystems Germany
Hi Stephan,

 I just embarked on a new project, namely to clean up and consolidate the 
 various test frameworks and corresponding tests available in the OOo 
 build environment.

That's highly appreciated!

 - The so-called UNO-API and Complex tests.  These are located in 
 */qa/unoapi and */qa/complex, use OOoRunner from qadevOOo, and can be 
 executed via cwscheckapi or checkapi.

Not sure whether you are mixing things up here, or whether I am simply
not up to date: to my knowledge, the complex test cases in */qa/complex
are not (read: cannot be) executed by (cws)checkapi.  At least in all
modules I know, they are accompanied by some makefile which allows
invoking them via "dmake run" or some such.
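
(For illustration, with hypothetical module and test names, such a 
makefile target typically boils down to an invocation of qadevOOo's 
runner along these lines; the exact options vary per module:)

  cd some/module/qa/complex    # hypothetical location
  dmake run
  # which, roughly, ends up running:
  java org.openoffice.Runner -TestBase java_complex \
      -o complex.somemodule.SomeTest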

 ...
 I know that this picture is not perfectly realistic, and that there will 
 be obstacles along the way that require pragmatic workarounds.  Still, I 
 think it is important to know what the ideal should look like, even if 
 you have to deviate from it.

Agreed. I'd already be very happy if only some parts of this could be
achieved.

For the record, since you didn't mention it explicitly, though I think
it's on your list: unreliability of the tests is currently another
major blocker for their acceptance.  Of course, where this is due to a
concrete test rather than to the test framework, it can only be solved
case by case.  But we shouldn't forget this important goal: if tests do
not run reliably, then the best test framework in the world won't get
us anywhere.

Ciao
Frank

-- 
- Frank Schönheit, Software Engineer frank.schoenh...@sun.com -
- Sun Microsystems  http://www.sun.com/staroffice -
- OpenOffice.org Base   http://dba.openoffice.org -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -




Re: [dev] Test Cleanup

2009-12-14 Thread Stephan Bergmann

On 12/14/09 16:21, Frank Schoenheit, Sun Microsystems Germany wrote:
...
- The so-called UNO-API and Complex tests.  These are located in 
*/qa/unoapi and */qa/complex, use OOoRunner from qadevOOo, and can be 
executed via cwscheckapi or checkapi.


Not sure whether you are mixing things up here, or whether I am simply
not up to date: to my knowledge, the complex test cases in */qa/complex
are not (read: cannot be) executed by (cws)checkapi.  At least in all
modules I know, they are accompanied by some makefile which allows
invoking them via "dmake run" or some such.


You are right.  As Steffen already wrote, he is currently (and somewhat 
independently) looking into treating the complex tests more like the 
unoapi tests, so I took the liberty of discussing those two kinds of 
tests here as if they were more or less the same sort of thing.


...

For the record, since you didn't mention it explicitly, though I think
it's on your list: unreliability of the tests is currently another
major blocker for their acceptance.  Of course, where this is due to a
concrete test rather than to the test framework, it can only be solved
case by case.  But we shouldn't forget this important goal: if tests do
not run reliably, then the best test framework in the world won't get
us anywhere.


Yes, thanks for mentioning it here.  Unreliable tests are a waste of 
time (as are unreliable test frameworks).  We have to get rid of them 
(by fixing them or by dumping them).  It's on the list.


-Stephan
