Re: [dev] subsequenttests

2010-10-01 Thread Stephan Bergmann

On 09/30/10 15:51, Frank Schönheit wrote:

Well, this "the trick is ..." part is exactly why I think that
issuing a statement like "from now on, we do tests for our code"
won't work - this is a complex topic, with a lot of tricks to know,
so "Just Do It!" is an approach which simply doesn't work. But okay,
that's a different story.


I beg to differ: If code is not testable, it is not good and needs to
be changed.


You didn't get my point. Writing *good* and *useful* tests needs
education, for most, if not all of us. "Just do it!" won't give you
those tests, just a pile of frustrated developers.

So, the learning which is needed here (and the "is needed here" is the
part which some of those saying "just do it!" miss) will be a long
and hard road.

I didn't mean to say we should not take this road. I just wanted to say
(and this was probably the wrong forum for it) that words are easy, while
deeds are more difficult.


The assumption behind "write new tests" was that people learn 
while doing.  At least for me, that's always worked out best.  It was 
not meant to imply that writing those tests, especially the first ones, 
is easy.


If you have difficulties getting started, just talk to me, and I'm sure 
we'll get any initial blockers out of the way.  (Admittedly, I seem to be 
unable to write nice getting-started tutorials...)


-Stephan




Re: [dev] subsequenttests

2010-10-01 Thread Frank Schönheit
Hi Stephan,

 I am not better at giving useful numbers here than you or anybody else 
 are.  If you want your tests integrated directly in the build, and think 
 their failure rate is acceptable, go ahead, put them in the build. 

Will do. Probably as soon as your sb123 is integrated, so that the necessary
changes for splitting our existing complex tests into "100%" and "less
than 100%" tests do not conflict with your/Lars' work done there.

 People will start telling you if your assumption about tolerable failure 
 rates matches theirs.  (And if it doesn't, be prepared to remove your 
 tests from the build again.)

Fine with me, and absolutely legitimate.

Thanks & Ciao
Frank
-- 
ORACLE
Frank Schönheit | Software Engineer | frank.schoenh...@oracle.com
Oracle Office Productivity: http://www.oracle.com/office




Re: [dev] subsequenttests

2010-09-30 Thread Frank Schönheit
Hi Stephan,

 So, I would be somewhat unhappy to throw all those "they require a
 running OOo instance" tests into the same "unreliable" category.
 
 See the list of sporadic failures at the end of 
 http://wiki.services.openoffice.org/wiki/Test_Cleanup#unoapi_Tests_2. 
 Many of them deal with problems during process shutdown, and many of 
 them are probably generic enough to not only affect qa/unoapi tests, but 
 also qa/complex tests.

Indeed, this list is horrifying. Given that the problems there affect not
only UNO-API tests or complex tests, but probably (potentially) each
and every client accessing OOo via a remote bridge - shouldn't fixing
them have a somewhat higher priority? (Yes, that's a rhetorical question.)

 However, if you have a complex test for which you can show that it works 
 reliably enough on all relevant platforms and on all buildbots so that 
 it can be executed during every build -- no problem to actually include 
 that test in every build (i.e., go down the "if there ever crop up new 
 tests ..." route detailed in the OP).

What would be your requirement for "can show"? 10 test runs in a row which
don't fail? 100? 1000? On one, two, three, four, five, or six platforms?

In other words: I'd prefer doing it the other way 'round: Include tests
for which we're *very* sure that they work reliably, and later exclude
those for which reality proves us wrong.

Personally, I'd put a large number (but not all) of dbaccess/qa/complex,
forms/qa/integration, and connectivity/qa/complex (the latter only after
the integration of CWS dba34b) into the "reliable" list. At the moment,
I execute all of those manually for each and every CWS, but this is
somewhat unfortunate, given that we (nearly) have a ready infrastructure
to automate this.

 The trick is to let writing tests guide you when writing an 
 implementation, so that the resulting implementation is indeed (unit) 
 testable.  See for example 
 http://www.growing-object-oriented-software.com/ for some food for 
 thought.  However, how well this works out for us needs to be seen, 
 indeed...

Well, this "the trick is ..." part is exactly why I think that issuing
a statement like "from now on, we do tests for our code" won't work -
this is a complex topic, with a lot of tricks to know, so "Just Do
It!" is an approach which simply doesn't work. But okay, that's a
different story.

Even if I (and others) get my fingers onto a TDD book (something I have
been planning for a long time already), this doesn't mean that everything is
immediately testable without a running UNO environment (or even OOo).
So, I continue to think having the infrastructure for this is good and
necessary.

 One more reason to keep a subsequenttests infrastructure which can be
 run all the time (i.e. excludes unoapi) - we'll need it sooner rather than
 later, if we take "write tests" seriously.
 
 The subsequenttests infrastructure will not go away.  And I urge every 
 developer to routinely run subsequenttests for each CWS (just as you 
 routinely ran cwscheckapi for each CWS in the past) -- it is just that 
 its output is apparently not stable enough for automatic processing.

Not even for manual processing ... There are a few quirks which gave me
headaches in my last CWSes when I ran subsequenttests, and which often
resulted in some "Just Don't Do It" habit. But that's a different story,
too - and yes, we had better embark on fixing those quirks.

Ciao
Frank

-- 
ORACLE
Frank Schönheit | Software Engineer | frank.schoenh...@oracle.com
Oracle Office Productivity: http://www.oracle.com/office




Re: [dev] subsequenttests

2010-09-30 Thread Björn Michaelsen
On Thu, 30 Sep 2010 14:19:37 +0200, Frank Schönheit
frank.schoenh...@oracle.com wrote:

 Hi Stephan,
 [...]
  The trick is to let writing tests guide you when writing an 
  implementation, so that the resulting implementation is indeed
  (unit) testable.  See for example 
  http://www.growing-object-oriented-software.com/ for some food
  for thought.  However, how well this works out for us needs to be
  seen, indeed...
 
 Well, this "the trick is ..." part is exactly why I think that
 issuing a statement like "from now on, we do tests for our code"
 won't work - this is a complex topic, with a lot of tricks to know,
 so "Just Do It!" is an approach which simply doesn't work. But okay,
 that's a different story.

I beg to differ: If code is not testable, it is not good and needs to
be changed. If you fear to change it because it is complex and has
weird dependencies, you have two choices:
- make it testable and the world will be a better place
- leave the design as is, and actually make it worse by adding workarounds
  that do not really fit the old design. If you shy away from
  changing the design now, the next change to the code will be even more
  daunting. And there will be a next change. There always is.

You probably say now: "I know, but ..." If you do, take a step back and
meditate about whether there can be any "but" that will be valid against the
fundamental truth that we need unit tests for the messy code we have, and
we need unit tests even more for the messier code we have. There isn't.

So "Just Do It!" is exactly the right thing to do, even if -- and
maybe because -- it is not a comfortable road to take.

"We choose to write unit tests, not because they are easy, but
because they are hard; because that goal will serve to organize and
measure the best of our energies and skills; because that challenge is
one that we are willing to accept, one we are unwilling to postpone,
and one which we intend to win, and the others, too."
 -- paraphrasing JFK at Rice ;)

Best Regards,

Bjoern







Re: [dev] subsequenttests

2010-09-30 Thread Frank Schönheit
Hi Björn,

 Well, this "the trick is ..." part is exactly why I think that
 issuing a statement like "from now on, we do tests for our code"
 won't work - this is a complex topic, with a lot of tricks to know,
 so "Just Do It!" is an approach which simply doesn't work. But okay,
 that's a different story.
 
 I beg to differ: If code is not testable, it is not good and needs to
 be changed.

You didn't get my point. Writing *good* and *useful* tests needs
education, for most, if not all of us. "Just do it!" won't give you
those tests, just a pile of frustrated developers.

So, the learning which is needed here (and the "is needed here" is the
part which some of those saying "just do it!" miss) will be a long
and hard road.

I didn't mean to say we should not take this road. I just wanted to say
(and this was probably the wrong forum for it) that words are easy, while
deeds are more difficult.

 If code is not testable, it is not good and needs to be changed.

Sure. And if OOo is not stable for remote access, this needs to be
fixed, since this is one of our most important features. And if a single
feature of OOo is not accessible, this needs to be fixed. And if there
is a crash somewhere, this needs to be fixed. And if code is not
maintainable, this needs to be fixed.

The list could be much longer. There are more things which need to be
fixed than can be fixed, actually.

It boils down to a question of priority. I fully agree with you that tests
*are* very important (hey, incidentally, I just spent the whole day
writing new tests for my current CWS's changes!), but I would have a hard
time arguing with my manager that I want to spend the next two years
refactoring the x*100,000 lines in my modules to make them testable.

So, seriously, please spare me these fundamentalist "But it *must* be
that way!" phrases, and let's find compromises with reality. Thanks.

Ciao
Frank

-- 
ORACLE
Frank Schönheit | Software Engineer | frank.schoenh...@oracle.com
Oracle Office Productivity: http://www.oracle.com/office




Re: [dev] subsequenttests

2010-09-24 Thread Stephan Bergmann

On 05/31/10 10:24, Stephan Bergmann wrote:

[...]


Unfortunately it seems impossible to stabilize the old test code and, 
even more so, OOo itself to the point that subsequenttests would be 
reliable enough to routinely run during every build.  Experiments (with 
local machines on CWS sb123 and with buildbots at 
http://eis.services.openoffice.org/EIS2/cws.ShowCWS?Id=9754&OpenOnly=false&Section=Tests) 
show that even if individual machines manage to successfully run 
subsequenttests 50, 60 times in a row, they all do fail sooner or 
later, without pattern (see the lengthy lists at 
http://wiki.services.openoffice.org/wiki/Test_Cleanup#unoapi_Tests_2).


The old, existing tests are definitely not useless (they do find errors 
in new code, occasionally).  But they are obviously too fragile to tie 
the decision whether a build succeeded or failed to the random outcome 
of whether the tests succeeded or failed.


Therefore, I propose the following two things:

1. Keep the old tests for manual developer execution.  With CWS sb123, 
they should be in a shape where they mostly work, so that a developer 
could run them to see whether they unearth any problems in newly written 
code.  But, of course, this would occasionally need manual 
interpretation of the test results:  If they fail just once, and 
re-running them succeeds, you probably hit one of those spurious 
failures unrelated to your new code.  If they fail repeatedly, maybe 
even on multiple platforms, it smells like you broke something.  There 
will be a Mechanism A to (manually) execute those tests.


2. Urge developers to write new, solid tests for new code or code under 
maintenance.  These should typically be unit tests (i.e., should not 
require a complete OOo instance), which would have (at least) three 
advantages over the old, qadevOOo-based tests:  They would run quickly, 
they would not suffer from OOo's fragility, and they could be run 
directly within the build process.  There will be a Mechanism B to 
(automatically) execute those tests.


As for Mechanisms A and B:  As long as those new tests are indeed all 
unit tests, Mechanism B is simply to build and execute the tests 
unconditionally from within the build system (as is, for example, 
already done in basegfx/test/).  And that implies that Mechanism A can 
for now simply be the existing subsequenttests tool (which should then 
no longer be included in buildbot scripts, of course; and note that CWS 
sb129 adds a -k switch to subsequenttests, a la make, making it more 
useful with our sporadically failing tests).
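
Concretely, the two mechanisms would be driven roughly as follows (the
module name and invocation details are illustrative, and the assumption
that subsequenttests signals failure via a non-zero exit status is just
that, an assumption):

  # Mechanism B: unit tests hooked into the ordinary module build,
  # as basegfx/test/ already does -- building the module runs them.
  cd basegfx && build

  # Mechanism A: the existing tool, run manually after a full build.
  # -k (from CWS sb129) keeps going past individual failures, a la make;
  # a green re-run after a single failure points to a spurious failure,
  # two failures in a row smell like broken code.
  subsequenttests -k || subsequenttests -k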


If there ever crop up new tests that do require a complete OOo 
installation, we could then shift things as follows:  Use plain 
subsequenttests as (part of) Mechanism B (and add it back to the 
buildbots) and add a switch like --extra to subsequenttests as Mechanism 
A, to run the old tests (whose makefiles would need to be adapted to 
only trigger on something like $(OOO_SUBSEQUENT_TESTS)=="extra" 
instead of $(OOO_SUBSEQUENT_TESTS)!="").
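
In makefile terms, such an adapted guard might look roughly like this
dmake-style fragment (the target name is made up, and the syntax is a
sketch from memory, not a tested recipe):

  .IF "$(OOO_SUBSEQUENT_TESTS)" == "extra"
  # old, OOo-installation-dependent tests: only reached via
  # "subsequenttests --extra" (Mechanism A)
  ALLTAR : run_old_tests
  .ENDIF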


So, action items:

- Gregor, please remove the subsequenttests step from the buildbots again.

- All developers, write new tests.

-Stephan




Re: [dev] subsequenttests

2010-09-24 Thread Frank Schönheit
Hi Stephan,

 If there ever crop up new tests that do require a complete OOo 
 installation,

While I agree that the unoapi tests are quite fragile, the current
subsequenttests are more than just those. In particular, there are complex
test cases which I'd claim are much more stable. (More precisely, I'd
claim this for the complex tests in at least forms and dbaccess, since
we spent significant effort in the past to actually make them stable
and reliable.)

So, I would be somewhat unhappy to throw all those "they require a
running OOo instance" tests into the same "unreliable" category.

I'm all for disabling unoapi tests in subsequenttests, but there are
tests which I'd like to be executed by default, even if they require a
running office.

Other than that, I'd claim that for a halfway complex implementation,
you pretty early reach a state where you need a UNO infrastructure at
least, and quickly even a running office. So, I don't share your
optimism that new tests can nearly always be written to not require a
running OOo.
One more reason to keep a subsequenttests infrastructure which can be
run all the time (i.e. excludes unoapi) - we'll need it sooner rather than
later, if we take "write tests" seriously.

JM2C

Ciao
Frank

-- 
ORACLE
Frank Schönheit | Software Engineer | frank.schoenh...@oracle.com
Oracle Office Productivity: http://www.oracle.com/office




Re: [dev] subsequenttests

2010-06-23 Thread Stephan Bergmann

On 06/07/10 14:14, Mathias Bauer wrote:

On 31.05.2010 10:24, Stephan Bergmann wrote:

[...]


As expected, all build bots that don't skip that test break. Some of 
them, as expected, because of DISPLAY problems, some others because they 
can't generate a PDF file in some of the tests (which apparently worked in 
CWS sb120). Do we nevertheless want to keep these tests running on the 
build bots? A side effect of this will be that all CWSes based on m80 and 
later will have status red in EIS.


By now, it's probably too late to be able to extract information on exactly 
where the tests started to break on the various bots when DEV300_m80 
became available at the end of May, right?  :(


-Stephan




Re: [dev] subsequenttests

2010-06-23 Thread Ingrid Halama

On 06/23/10 15:57, Stephan Bergmann wrote:
[...]


By now, it's probably too late to be able to extract information on exactly 
where the tests started to break on the various bots when DEV300_m80 
became available at the end of May, right?  :(




The bots Solaris-Intel and Ubuntu-9.04-i386 failed due to DISPLAY errors 
(see the first comment at CWS chart47).
I don't remember the exact error on Ubuntu-8.04-amd64, but there was 
also something with subsequenttests.

-Ingrid




Re: [dev] subsequenttests

2010-06-07 Thread Mathias Bauer

On 31.05.2010 10:24, Stephan Bergmann wrote:

[...]


As expected, all build bots that don't skip that test break. Some of 
them, as expected, because of DISPLAY problems, some others because they 
can't generate a PDF file in some of the tests (which apparently worked in 
CWS sb120). Do we nevertheless want to keep these tests running on the 
build bots? A side effect of this will be that all CWSes based on m80 and 
later will have status red in EIS.


Regards,
Mathias

--
Mathias Bauer (mba) - Project Lead OpenOffice.org Writer
OpenOffice.org Engineering at Sun: http://blogs.sun.com/GullFOSS
Please don't reply to nospamfor...@gmx.de.
I use it for the OOo lists and only rarely read other mails sent to it.




[dev] subsequenttests

2010-05-31 Thread Stephan Bergmann
Just a reminder.  As announced 
(http://www.openoffice.org/servlets/ReadMsg?list=interface-announce&msgNo=1266 
"cwscheckapi replaced with subsequenttests"), subsequenttests is the new 
tool to run all kinds of OOo developer tests (that require a complete 
installation set to test against), particularly replacing cwscheckapi.


With CWS sb120 integrated in DEV300_m80, the framework and tests will 
hopefully be reliable enough for actual use, see 
http://tools.openoffice.org/servlets/ReadMsg?list=tinderbox&msgNo=360 
"new step for buildbots".


Developers are encouraged to run subsequenttests now, similarly to how 
they ran cwscheckapi in the past.  See the first link above for details.


Due to a code change in CWS sb120, Gregor's buildbot framework will no 
longer patch away the subsequenttests step in DEV300_m80, so most of the 
buildbots (those running on Gregor's framework) will automatically start 
to include that step.  There are apparently problems with X11 on some of 
the bots (see the second link above), and there might still be sporadic 
failures (see 
http://wiki.services.openoffice.org/w/index.php?title=Test_Cleanup#unoapi_Tests_2), 
potentially causing buildbot builds to go red.  I leave it up to Gregor 
to disable any test steps again that turn out to cause trouble; please 
inform him about problems you encounter.  (Due to vacation schedules, we 
probably won't be able to track down those X11 problems for the next two 
weeks.)


-Stephan




Re: [dev] subsequenttests

2010-05-31 Thread Stephan Bergmann

On 05/31/10 10:37, Rene Engelhard wrote:

On Mon, May 31, 2010 at 10:24:17AM +0200, Stephan Bergmann wrote:
With CWS sb120 integrated in DEV300_m80, the framework and tests will  
hopefully be reliable enough for actual use, see  

[...]
the bots (see the second link above), and there might still be sporadic  
failures (see  
http://wiki.services.openoffice.org/w/index.php?title=Test_Cleanup#unoapi_Tests_2), 
potentially causing buildbot builds to go red.  I leave it up to Gregor  


So it's not reliable enough.

to disable any test steps again that turn out to cause trouble; please  
inform him about problems you encounter.  (Due to vacation schedules, we  
probably won't be able to track down those X11 problems for the next two  
weeks.)


How should people who get accused of breaking stuff then handle red
tinderboxes where the red status is caused by this?


Nobody gets accused.  Erroneous red statuses, while they admittedly 
suck, are not too uncommon, for various reasons.  People know how to 
handle them (by looking at the logs, finding out what caused the 
breakage, and taking a note in the CWS EIS data in case the cause is 
external to their CWS).


I am all for doing everything to reduce false positives to as low a 
level as practically possible, and I am especially determined to do so 
for the parts I own.  However, we cannot improve subsequenttests 
without trying it out, in the wild.  We need to balance the value we get 
out of these tests against the annoyances that the false positives cause.


Timing of CWS sb120 hitting the master and me going on two weeks of 
vacation might be a little unfortunate.  That's why I put it into 
Gregor's hands to get that balancing right for now.  But be assured that 
I will evaluate the usefulness of subsequenttests as soon as I return, 
based on any data that has accumulated by then.


-Stephan




Re: [dev] subsequenttests

2010-05-31 Thread Bernd Eilers


Hi Stephan,

What X server do the subsequenttests use, by the way? xvfb, or xvnc, or 
something else?


Kind regards,
Bernd Eilers


Stephan Bergmann wrote:
[...]







Re: [dev] subsequenttests

2010-05-31 Thread Stephan Bergmann

On 05/31/10 11:55, Bernd Eilers wrote:
What X server do the subsequenttests use, by the way? xvfb, or xvnc, or 
something else?


subsequenttests and the layers below it set up nothing.  The started soffice 
instances simply use whatever DISPLAY points to.
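
So, on a headless machine you have to provide a display yourself, e.g.
roughly like this (the display number and screen geometry are just an
example):

  # start a virtual X server and point the tests at it
  Xvfb :99 -screen 0 1024x768x24 &
  DISPLAY=:99 subsequenttests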


-Stephan




Re: [dev] subsequenttests

2010-05-31 Thread Rene Engelhard
On Mon, May 31, 2010 at 01:25:26PM +0200, Stephan Bergmann wrote:
 On 05/31/10 11:55, Bernd Eilers wrote:
 What X server do the subsequenttests use, by the way? xvfb, or xvnc, or 
 something else?

 subsequenttests and the layers below it set up nothing.  The started soffice 
 instances simply use whatever DISPLAY points to.

"svp" not possible? The smoketest at least works with it.

Grüße/Regards,

René




Re: [dev] subsequenttests

2010-05-31 Thread Stephan Bergmann

On 05/31/10 15:20, Rene Engelhard wrote:

On Mon, May 31, 2010 at 01:25:26PM +0200, Stephan Bergmann wrote:

On 05/31/10 11:55, Bernd Eilers wrote:
What X server do the subsequenttests use, by the way? xvfb, or xvnc, or 
something else?
subsequenttests and the layers below it set up nothing.  The started soffice 
instances simply use whatever DISPLAY points to.


"svp" not possible? The smoketest at least works with it.


No idea what you mean.

-Stephan




Re: [dev] subsequenttests

2010-05-31 Thread Rene Engelhard
On Mon, May 31, 2010 at 03:47:00PM +0200, Stephan Bergmann wrote:
 "svp" not possible? The smoketest at least works with it.

 No idea what you mean.

SAL_USE_VCLPLUGIN=svp, a.k.a. headless.
As said, it works for me for the smoketest in 3.2.x.
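
For reference, roughly like this (the soffice path is just an example):

  # run the office with the headless "svp" VCL plugin instead of X11
  SAL_USE_VCLPLUGIN=svp ./soffice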

Grüße/Regards,

René




Re: [dev] subsequenttests

2010-05-31 Thread Stephan Bergmann

On 05/31/10 16:18, Rene Engelhard wrote:

On Mon, May 31, 2010 at 03:47:00PM +0200, Stephan Bergmann wrote:

"svp" not possible? The smoketest at least works with it.

No idea what you mean.


SAL_USE_VCLPLUGIN=svp, a.k.a. headless.
As said, it works for me for the smoketest in 3.2.x.


Ah -- see my response at 
http://tools.openoffice.org/servlets/ReadMsg?list=tinderbox&msgNo=364.


-Stephan
