Re: We *really* need a development model change !

2002-01-09 Thread Andriy Palamarchuk


--- Gerard Patel [EMAIL PROTECTED] wrote:
 At 04:44 PM 08/01/2002 -0500, you wrote:
 
 snip
 (Another Pythoner, cool :) )
 
 This has given me an idea - while I don't expect it to be
 used in Wine, I will try to write my own test progs
 with it : use the *windows* python interpreter under Wine.
 From the doc, it's possible to call any win32 api from
 it using a 'calldll' interface. If it works well, I won't need
 a 'test framework' - just use the standard, available tool.

Could you please port my or Alexandre's Perl samples
to Python, so we'll be able to compare them easily? It
will be interesting to see the differences.

There is no big difference between Perl and Python for
this application.
IMHO they have the same problems:
a) no C compiler support. We need to manually define
constants, parameter types, return values, structure
sizes, etc.
Even with C I have seen discussions on this list about
the correct sizes of structures, let alone in scripting
languages.

b) developer skills. C is the most familiar language to
those who know or are learning Win32.

About my background - I know Perl pretty well, and I
know and like Python too.

Andriy Palamarchuk

__
Do You Yahoo!?
Send FREE video emails in Yahoo! Mail!
http://promo.yahoo.com/videomail/





Re: We *really* need a development model change !

2002-01-09 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 Which approaches can you suggest to reach the goals I
 believe are important:
 
 1) bundling - are we going to have separate
 distributions - Wine with tests, tests only, Wine
 only? There are a lot of cases where only one of them
 is required.

My view is that everything is distributed with Wine, and we have a
script on WineHQ that builds a zip of the test-only environment for
use under Windows, and/or a script to fetch it from CVS on Windows.

 2) development of the unit tests under Windows.
 Obviously, we don't need to have Wine itself when we
 work with unit tests on Windows. Plus, we need to
 create a development environment usable by Windows
 developers.

For Perl we need to ship a winetest.exe and a couple of scripts to run
through the tests. For C we need to generate makefiles one way or
another, including support for the major Windows compilers.

 3) Organization of the unit tests in such a way that they
 can be used by other Win32 implementation projects.
 Conditional TODOs I suggested above will help to
 manage different TODO lists for different projects.

Looks good, though I would suggest having simply a TODO_WINE instead
of making people write the same test thousands of times.  Then we can
add TODO_ODIN or whatever if the need arises. And I think the TODOs
should be controlled by a command-line option, so you can switch them
on under Wine too. But these are details, I think overall it looks
quite good.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-09 Thread Andriy Palamarchuk

--- Alexandre Julliard [EMAIL PROTECTED] wrote:
 Andriy Palamarchuk [EMAIL PROTECTED] writes:
  3) Organization of the unit tests in such a way that they
  can be used by other Win32 implementation projects.
  Conditional TODOs I suggested above will help to
  manage different TODO lists for different projects.
 
 Looks good, though I would suggest having simply a TODO_WINE instead
 of making people write the same test thousands of times.  Then we can
 add TODO_ODIN or whatever if the need arises.

Good idea - I'll update the library.

 And I think the TODOs
 should be controlled by a command-line option, so
 you can switch them on under Wine too. 

I do not understand why we need the command-line
option. TODO_WINE will automatically detect when the
application runs under Wine. Under Windows the code in
TODO_WINE will be executed as if there were no TODO.

 But these are details, I think
 overall it looks quite good.

I'm also satisfied with the overall status.
I don't have much experience in build process creation,
so I'm leaving that part up to you. Otherwise I'm
willing to keep working on the unit tests.

Thanks,
Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-09 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 I do not understand why we need the command-line
 option. TODO_WINE will automatically detect when the
 application runs under Wine. Under Windows the code in
 TODO_WINE will be executed as if there were no TODO.

The idea is that you might want to run the TODO_WINE tests under Wine
in Windows mode, for instance to find a problem you'd like to work
on. You could grep for TODO_WINE in the test scripts but this is less
convenient IMO.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-09 Thread Andriy Palamarchuk


--- Alexandre Julliard [EMAIL PROTECTED] wrote:
 The idea is that you might want to run the TODO_WINE tests under Wine
 in Windows mode, for instance to find a problem you'd like to work
 on. You could grep for TODO_WINE in the test scripts but this is less
 convenient IMO.

Understood. IMO grep is better - you can choose from
a list of TODOs. Let's leave it until somebody
needs the feature.

Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-09 Thread Andriy Palamarchuk


--- Alexandre Julliard [EMAIL PROTECTED] wrote:
 Andriy Palamarchuk [EMAIL PROTECTED] writes:

 But I think implementing it now would actually make your life easier.
 This way you wouldn't have to require some magic call to detect you
 are running on Wine, you could simply rely on the Wine makefiles to
 pass you the right option.

I don't see any value in information about the first
TODO check in a test script. I personally prefer
using the magic call.

One more plus of autodetection: I can run the same
Windows test binary on Windows and on Wine without
any trouble.

But I'd like to use your idea in another place. I feel
uncomfortable when a test does not show anything. It is
not clear whether it ran any checks, or whether my latest
changes were executed. I suggest having a switch which
turns off output. By default the test will print some
statistics on a successful run - the number of checks
run, the number of TODO tests, execution time -
anything else? The make script can run the test with
output turned off.

Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-08 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 Almost all complexity you are talking about is already
 implemented for us. Usage of the framework is very
 simple and does not require anything from test writers.
 They are only required to use Test::Simple
 (or Test::More) correctly. They don't need to remember
 about Test::Harness.

But adapting the framework to do what we want is IMO more work than
simply reimplementing from scratch the few features that we actually
need. We don't really gain anything by making the effort of reusing
all that code, since we don't need most of it.

 There are reasons to use Test::Harness:
 1) control of TODO tests - I really want to use this
 feature.
 2) control of SKIP tests - very useful for
 Wine-specific tests, choosing behavior depending on
 Windows versions, etc. I need this feature too.

Yes, I agree we want that. I think these are easy to implement no
matter what we use; we don't really need Test::Harness for that.

 3) we already need to manage the test output. I'd
 estimate the number of checks for my existing
 SystemParametersInfo unit test as:
 25 (number of implemented actions) * 10 (minimal
 number of checks for each action) = 250 - 350 tests
 We'll definitely have a huge number of tests. Why not
 pick a scalable approach from the very beginning?

For me your SystemParametersInfo test is one test, not 250. All I want to
know is whether it passed or not, and if not, what was the cause of the
failure. I don't want to know about the details of the 250 individual
checks.

 Suggest decisions from the discussion:
 1) unit tests are very important for the project
 2) mixed C/Perl environment will be used for the tests
 development. Choosing the tool is a matter of personal
 preferences.

I don't think I agree. For me the value of Perl is that it makes it
trivial to take the suite over to Windows; but if half the tests are
in C we lose this advantage, and then we might as well do everything
in C.

 3) Test::Harness will be used to compile report for
 test batches

I don't see the need. What I want is a make-like system that keeps
track of which tests have been run, which ones need to be re-run
because they have been modified etc.  I don't think there is any use
in a report stating that 12.42% of the tests failed, this doesn't tell
us anything.

 4) The unit test will be a separate application

You cannot put the whole test suite in a single application, you need
to split things up some way. A decent test suite will probably be
several times the size of the code it is testing; you don't want that
in a single application.

 Alexandre, we explicitly did not agree on this
 decision yet. You preferred to have unit tests
 spread over the Wine directory tree. The main
 argument for this was the possibility of running
 subsets of tests.

No, the argument is modularity. The tests for kernel32 have nothing to
do with the tests for opengl, and have everything to do with the
kernel32 code they are testing. So it seems logical to put them
together.

Then when you change something in kernel32 you can change the test
that is right next to it, run make test in the kernel32 directory and
have it re-run the relevant tests, and then do cvs diff dlls/kernel32
and get a complete diff including the code and the test changes.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-08 Thread Andriy Palamarchuk

--- Alexandre Julliard [EMAIL PROTECTED] wrote:
 Andriy Palamarchuk [EMAIL PROTECTED] writes:

[...]
 But adapting the framework to do what we want is IMO more work than
 simply reimplementing from scratch the few features that we actually
 need. We don't really gain anything by making the effort of reusing
 all that code, since we don't need most of it.

Could you please list the additional features we
need? I'll try to estimate the amount of work
necessary to implement them in Test::Harness.

Do you want to use an architecture completely different
from the Test::Harness + Test::Simple modules, or do
you only want to replace Test::Harness?

Existing architecture:
Individual test scripts (executables) are very simple
(they use Test::Simple). They only print messages like
"ok 4 - action SPI_FOO # TODO not implemented" for
each check.
The Test::Harness module parses the output, has logic
to analyze test results, and creates an overall report
and details of failures, including crashes.
 I don't want to know about the details of the 250
 individual checks.

You got the impression that Test::Harness does not
report individual failures. Sorry, I gave too simple
a demo.

Example of Test::Harness output for a few failures:

test2.p.NOK 2# Failed test (test2.pl
at line 8)  
#  got: '1'
# expected: '0'
test2.p.NOK 3# Failed test (test2.pl
at line 9)  
#  got: '1'
# expected: '0'
test2.p.NOK 4# Failed test (test2.pl
at line 10) 
test2.p.ok 8/8# Looks like you failed 3
tests of 8.

[... summary report output is skipped ...]

Does this output look closer to the one you want?
Let me know if you need any other information.

  2) mixed C/Perl environment will be used for the tests
  development. Choosing the tool is a matter of personal
  preferences.
 
 I don't think I agree. For me the value of Perl is that it makes it
 trivial to take the suite over to Windows; but if half the tests are
 in C we lose this advantage, and then we might as well do everything
 in C.

Sorry, I misinterpreted your statement that threads
won't be used for tests in Perl. Could you give your
vision of when C should be used?

 What I want is a make-like system that keeps
 track of which tests have been run, which ones need to be re-run
 because they have been modified etc.

This usage of make is fine with me. I just want to
separate the unit tests from the main code and have
centralized control. You can still call a subset of the
unit tests from the build process.

 You cannot put the whole test suite in a single application, you need
 to split things up some way. A decent test suite will probably be
 several times the size of the code it is testing; you don't want that
 in a single application.

We were not careful about the terms. Yes, the tests
will be a bunch of executables. I assumed these
executables to be parts of one test application.
I completely agree with you.

  You preferred to have unit tests
  spread over the Wine directory tree. The main
  argument for this was the possibility of running
  subsets of tests.
 
 No, the argument is modularity.

I'm all for modular unit tests and there are a few
ways to divide the tests between modules.

 Then when you change something in kernel32 you can change the test
 that is right next to it, run make test in the kernel32 directory and
 have it re-run the relevant tests, and then do cvs diff dlls/kernel32
 and get a complete diff including the code and the test changes.

Agreed, this is not so convenient for a separate test
application.

A separate unit test application has its own
advantages:
1) Separate distributions are used for Wine and the
test application. The only case where we want them
together is Wine development under *nix; in all other
cases we need only one of them.
2) The unit tests will be mostly developed under
Windows. The unit test build process has extra
Windows compatibility requirements.
3) Having the unit tests as a separate application
creates possibilities to collaborate with other Win32
implementation projects (ODIN comes to mind).

All the tasks we mentioned can be implemented with
either approach. I believe the tasks you described
will not be much more difficult with my approach,
especially if the directory tree has a parallel structure.

Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-08 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 Could you please list the additional features we
 need? I'll try to estimate the amount of work
 necessary to implement them in Test::Harness.

Basically the features I get with my 10-line makefile hack: ability to
run tests for modified dlls and their dependencies (for instance if
you change something in user32 and run make test it doesn't run tests
for kernel32), ability to remember which tests failed and run only
these the second time around, trivial integration of new tests (just
add one line in a makefile).

 Do you want to use an architecture completely different
 from the Test::Harness + Test::Simple modules, or do
 you only want to replace Test::Harness?

I think we would be much better off developing a simple infrastructure
from scratch that does exactly what we want, than trying to bend an
existing framework to our needs. This will also ensure that the
framework remains as simple as possible, which is important since
every extra feature, even one we don't use, can possibly introduce
problems (like the pipes/fork issues have shown).

 Does this output look closer to the one you want?
 Let me know if you need any other information.

I don't really care about the output. My point is that this output is
not necessary; a simple "assertion foo failed at line xx" on failure
and no output at all on success would work just as well. I've nothing
against such an output either, but I don't think it justifies
introducing the complexity of reusing the Test stuff.

 Sorry, I misinterpreted your statement that threads
 won't be used for tests in Perl. Could you give your
 vision of when C should be used?

In my vision we either use Perl everywhere, or C everywhere. If we use
Perl there may be a need for a few little glue programs in C, but this
doesn't require any C infrastructure.

I personally think Perl would be a better choice, but I seem to be
pretty much the only one of this opinion. In any case the most
important thing is to choose something that people are going to use,
and so far the Perl stuff isn't a success in this respect. I'm not
convinced a C infrastructure would fare better, but maybe we should
try it and see.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-08 Thread Ove Kaaven


On 8 Jan 2002, Alexandre Julliard wrote:

 I personally think Perl would be a better choice, but I seem to be
 pretty much the only one of this opinion.

Well, if random opinions count here, I also would prefer Perl. As much as
I hate Perl (I'm more in the Python camp), I'd hate writing regression
tests in C much more.






Re: We *really* need a development model change !

2002-01-08 Thread Dimitrie O. Paun

On Tue, 8 Jan 2002, Alexandre Julliard wrote:

 I personally think Perl would be a better choice, but I seem to be
 pretty much the only one of this opinion. In any case the most
 important thing is to choose something that people are going to use,
 and so far the Perl stuff isn't a success in this respect. I'm not
 convinced a C infrastructure would fare better, but maybe we should
 try it and see.

And this is an excellent point. I, for one, know enough Perl to know that
I find it ugly as bloody hell, and as such I have no desire to learn it.
Writing tests is a big pain in the ass in the first place, and I can tell
you I would not do it in Perl if I was paid to do it.

Now, I kept quiet on this issue because I can see the merits of using a
scripting language to write said tests. However, I would like to point out
that the _hard_ problem is getting the tests written, it doesn't really
matter in what language. If the infrastructure (compiler, libs, etc) is
not present on the current platform (configure is the 'man'), it is
trivial enough to simply not run them (or run some dummied up tests
instead, something like true(1)).

So, bottom line, I think you should accept whatever tests you get. If the
author gets a woody writing them in C, or using some test harness or
another, let's just consider it the motivating factor behind writing the
tests in the first place.

--
Dimi.






Re: We *really* need a development model change !

2002-01-08 Thread Dimitrie O. Paun

On Tue, 8 Jan 2002, Ove Kaaven wrote:

 Well, if random opinions count here, I also would prefer Perl. As much as
 I hate Perl (I'm more in the Python camp), I'd hate writing regression
 tests in C much more.

(Another Pythoner, cool :) )

But if we accept tests in C, we don't lose anything. If it's a pain to do
them in C, we'll end up with just a handful of them, which we can simply
convert to Perl (which should be trivial since there are just a few). On
the other hand, if we end up with a lot of C tests, it means that people
simply prefer C over Perl (for whatever reason), so we still win since
we end up with all those tests.

--
Dimi.






Re: We *really* need a development model change !

2002-01-08 Thread Alexandre Julliard

Dimitrie O. Paun [EMAIL PROTECTED] writes:

 So, bottom line, I think you should accept whatever tests you get. If the
 author gets a woody writing them in C, or using some test harness or
 another, let's just consider it the motivating factor behind writing the
 tests in the first place.

Unfortunately that's not possible. If everybody uses his favorite test
harness we will soon have more of them than actual tests. It's already
going to be enough work maintaining one framework and making sure it
always works both on Windows and Wine, we can't afford to have several.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-08 Thread Dimitrie O. Paun

On Tue, 8 Jan 2002, Alexandre Julliard wrote:

 Unfortunately that's not possible. If everybody uses his favorite test
 harness we will soon have more of them than actual tests.

Certainly, I was exaggerating. However, we should accept tests written in C.

--
Dimi.






Re: We *really* need a development model change !

2002-01-08 Thread Francois Gouget

On 8 Jan 2002, Alexandre Julliard wrote:

 Dimitrie O. Paun [EMAIL PROTECTED] writes:

  So, bottom line, I think you should accept whatever tests you get. If the
  author gets a woody writing them in C, or using some test harness or
  another, let's just consider it the motivating factor behind writing the
  tests in the first place.

 Unfortunately that's not possible. If everybody uses his favorite test
 harness we will soon have more of them than actual tests. It's already
 going to be enough work maintaining one framework and making sure it
 always works both on Windows and Wine, we can't afford to have several.

   I agree that we should not have as many testing frameworks as test
writers. But maybe two would be acceptable. I propose the following:

 * you posted the beginning of a perl test framework. What is needed
before it can be committed to CVS? AFAICS all it lacks is support for
difference files, aka TODO tests.
 * so let's complete that framework and commit it to CVS
 * then if someone really wants a C testing framework, let them develop
it. And then we can replace

test:
        run-perl-tests

with:

ptests:
        run-perl-tests
ctests:
        run-c-tests
tests: ptests ctests

   The perl test framework will need a way to build a zip file of some
sort with all the necessary stuff to run the perl tests on Windows. All
we need is for this to not be confused when we add the C tests. The C
tests will need such a functionality too. And it should package just the
C tests and not the perl tests.
   So we should be able to have perl and C tests side by side, in such a
way that they basically ignore each other.

   Then we'll see after a little while which framework is being used.
And if one of them is never used, then as Dimitrie said, we can convert
its tests and remove it.



--
Francois Gouget [EMAIL PROTECTED] http://fgouget.free.fr/
  In a world without fences who needs Gates?






Re: We *really* need a development model change !

2002-01-08 Thread Gerard Patel

At 04:44 PM 08/01/2002 -0500, you wrote:

snip
(Another Pythoner, cool :) )

This has given me an idea - while I don't expect it to be
used in Wine, I will try to write my own test progs
with it : use the *windows* python interpreter under Wine.
From the doc, it's possible to call any win32 api from
it using a 'calldll' interface. If it works well, I won't need
a 'test framework' - just use the standard, available tool.

Gerard






Re: We *really* need a development model change !

2002-01-08 Thread Dimitrie O. Paun

On Tue, 8 Jan 2002, Francois Gouget wrote:

The perl test framework will need a way to build a zip file of some
 sort with all the necessary stuff to run the perl tests on Windows. All
 we need is for this to not be confused when we add the C tests. The C
 tests will need such a functionality too. And it should package just the
 C tests and not the perl tests.

In fact, this should be possible in C as well. Say configure checks whether
gcc can generate PE executables; if so, we can compile (on
Linux/*BSD/etc.) the C tests as PE executables so that we actually run the
exact same binary on Wine and Windows.

From the testers' POV they should be as easy to run as the Perl tests (if
not easier), and all that is required is a cross-compiling gcc. If you
don't have that, no big problem - you simply cannot package the C tests.

--
Dimi.






Re: We *really* need a development model change !

2002-01-08 Thread Dimitrie O. Paun

On Tue, 8 Jan 2002, Gerard Patel wrote:

 This has given me an idea - while I don't expect it to be
 used in Wine, I will try to write my own test progs
 with it : use the *windows* python interpreter under Wine.
 From the doc, it's possible to call any win32 api from
 it using a 'calldll' interface. If it works well, I won't need
 a 'test framework' - just use the standard, available tool.

And that's a very cool idea. In fact, we don't need a test harness. What
we need is to say:

'A test is an executable. The exit status is 0 on success, and non-zero
on error. If the test fails, it should explain why on stderr. Verbose
output (if any) goes on stdout.'

Now, from the POV of Wine, we don't care (just like the kernel) whether
the executable is a native binary or a #! executable. Everything else is a
few Makefile rules, which are conditional on the given
compiler/interpreter being available (which can be checked quite easily by
configure).

--
Dimi.






Re: We *really* need a development model change !

2002-01-08 Thread Francois Gouget

On Tue, 8 Jan 2002, Dimitrie O. Paun wrote:
[...]
 Now, from the POV of Wine, we don't care (just like the kernel), if the
 executable is a native binary or a #! executable. Everything else are a
 few Makefile rules, which are conditional on the given
 compiler/interpreter being available (which can be checked quite easily by
 configure).

   That's true on Unix because sh, Perl, and C executables will just
work. But if some of your tests are sh scripts you will have trouble
running them on Windows.
   We probably won't often need to run all the tests on Windows, but I
can imagine that it would still be necessary to check behavior on
different setups: 16bpp vs. 32bpp, the English vs. the Russian vs. the
Chinese version, with IE 5 installed or not installed, etc. So we need a
framework that makes it easy to run all the tests on Windows. Since sh
scripts tend to invoke a ton of Unix tools like expr, awk, sed, and perl,
they are not a good basis for writing tests.

   But I agree with the approach: a test is an executable that returns 0
if successful and non-zero if not. It's pretty much the foundation of
my proposal, except that all tests should be of the same type:
Perl or C (or whatever).

--
Francois Gouget [EMAIL PROTECTED] http://fgouget.free.fr/
   Cahn's Axiom: When all else fails, read the instructions.






Re: We *really* need a development model change !

2002-01-08 Thread Dimitrie O. Paun

On Wed, 9 Jan 2002, Francois Gouget wrote:

That's true on Unix because sh, perl, and C executables will just
 work. But if some of your tests are sh scripts you will have trouble
 running them on Windows.

Yes, but nobody really proposes writing tests in Bourne shell. In fact,
you can't easily do it whether you run under Unix or Windows. What I was
saying is that the execution engine should not really matter, generally
speaking. In practice, there are only 3 choices:
  1. Native executable (most likely C based)
  2. Perl script
  3. Python script
In all these cases we can package things such that the *exact* same tests
run under both Wine and Windows. In all these cases, it is possible to
make it trivial for the tester to run the tests, without them knowing
what language was used to write the actual tests.

We probably won't often need to run all the tests in Windows, but I
 can imagine that it would still be necessary to check behavior on
 different setups: in 16bpp vs. 32bpp, in the english vs. the russian vs.
 chinese version, with IE 5 installed or not installed, etc. So we need a
 framework that makes it easy to run all the tests on Windows. Since sh
 scripts tend to invoke a ton of Unix tools like expr, awk, sed, perl,
 this seems not to be a good basis for writing tests.

Again, you will not be able to easily invoke Win32 APIs from sh anyway, so
this is not really an option.

--
Dimi.






Re: We *really* need a development model change !

2002-01-07 Thread Andriy Palamarchuk

I spent a couple of days trying to port my test to the
C unit testing framework Check.

Simple tests work fine. Problems start when I try to
use multithreaded Wine tests. In one case the
framework gets stuck reading from a pipe. Other code
layouts have other issues. The cause may be
incorrect pipe handling, multi-process test handling,
or interaction with Wine's thread implementation.

I remember I also had some problems with Perl
multithreaded tests...

I attached my unit test example, almost identical to
the one I published before for the Perl framework.
There is also a whole working directory. If you want
to play with it, just unpack it into the wine/programs
directory.

Andriy Palamarchuk



check.tar.gz
Description: check.tar.gz


sysparams.c
Description: sysparams.c


Re: We *really* need a development model change !

2002-01-07 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 Simple tests work fine. Problems start when I try to
 use multithreaded Wine tests. In one case the
 framework gets stuck reading from a pipe. Other code
 layouts have other issues. The cause may be
 incorrect pipe handling, multi-process test handling,
 or interaction with Wine's thread implementation.

I'm not sure if this is what happens, but if two threads try to send
results down the same pipe you'll have problems. You need explicit
threading support in the framework if you want to create threads (the
Perl framework doesn't support threads either at this point, so if
your test works there it's by pure luck).

Another potential problem is that the fork() option of the framework
is going to cause major breakage. You cannot use fork inside a Winelib
app. And in any case I don't think a GPLed framework is appropriate
for Wine.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-07 Thread Andriy Palamarchuk

Hmm, I don't see my post on wine-devel.
Alexandre, did I send the message directly to you?
If so, could you please forward it to the mailing list?
Thank you.

Alexandre Julliard wrote:

 Andriy Palamarchuk [EMAIL PROTECTED] writes:
 
 
Simple tests work fine. Problems start when I try to
use multithreaded Wine tests. In one case the
framework gets stuck reading from a pipe. Other code
layouts have other issues. The cause may be
incorrect pipe handling, multi-process test handling,
or interaction with Wine's thread implementation.

 

 I'm not sure if this is what happens, but if two threads try to send
 results down the same pipe you'll have problems.


I think there is no contention for the pipe. A new pipe is opened for 
each test. It looks like not all handles to the write side of the pipe 
are closed, and the read hangs. Can it happen because the handle is 
inherited by the wineserver?

 You need explicit
 threading support in the framework if you want to create threads (the
 Perl framework doesn't support threads either at this point, so if
 your test works there it's by pure luck).


:-( I need more than one thread to catch the message on a system 
parameter value change.

 Another potential problem is that the fork() option of the framework
 is going to cause major breakage. You cannot use fork inside a Winelib
 app. 


I really like the memory protection which is provided by fork(). We are 
going to need it (e.g. for TODO tests) :-/ The framework can be used in 
forkless mode; however, we still need to remove that piping stuff.

Can we have the same advantages by using a two-stage launching process? 
The first part coordinates everything and starts each test with exec. 
Such an implementation is less efficient but much more portable.

 And in any case I don't think a GPLed framework is appropriate for Wine.

a) this license is not for Wine, but for the Wine testing application. 
From my point of view the GPL is very appropriate in this case. Any 
other open-source projects using the test application will have to 
contribute their changes back. On the other hand, the license won't hurt 
any commercial application, because nobody will build a business on it.

b) on the Check mailing list I saw a request to change the license to a less restrictive one.

The main developer of the framework took this request into account. I 
also asked for a less restrictive license and posted a couple of bug 
reports, but have not had any feedback yet.

I have an idea for a compromise - let's use both Perl and C tests. The 
Perl module Test::Harness can be used to manage both types of tests. A C 
test application can print feedback information in the format 
Test::Harness needs; a library can be created for this. This gives all 
the advantages of both tools, plus choice, and does not impose big 
constraints on developers.
Test::Harness is a very mature framework and has a clearly defined interface.
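The format Test::Harness consumes is just a plan line followed by numbered "ok" / "not ok" lines, so any language can emit it. A minimal sketch (illustrative only, not actual winetest code):

```shell
#!/bin/sh
# Sketch of the output format Test::Harness consumes: a plan line,
# then one "ok N" / "not ok N" line per check.
count=0

# ok <exit-status-of-check> <description>
ok() {
    count=$((count + 1))
    if [ "$1" -eq 0 ]; then
        echo "ok $count - $2"
    else
        echo "not ok $count - $2"
    fi
}

echo "1..2"                     # the plan: two checks follow
[ $((1 + 1)) -eq 2 ]; ok $? "addition works"
[ "abc" = "abc" ];   ok $? "string comparison works"
```

A C test application printing the same lines would integrate with Test::Harness just as transparently as a Perl one.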

I'm still for keeping all the tests as a separate application. It is 
very easy to create a test hierarchy for the test application if you 
want to have subsets of the tests.

Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-07 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 :-( I need more than one thread to catch message on system parameter
 value change.

Not really, if you create a window your winproc will be called even if
it's in the same thread. But we need to make threading work in any
case.

 I really like memory protection which is provided by fork(). We are
 going to need it (e.g. for TODO tests) :-/ The framework can be used
 in forkless mode, however we still need to remove that piping stuff.
 
 Can we have the same advantages by using two-stage launching process?
 The first part coordinates everything and starts each test with
 exec. Such implementation is less efficient but much more portable.

IMO that's what we want, and the next step is to recognize that a
simple shell script and a couple of makefile rules can do the exec
job just as well, without having to worry about pipes or fork or
whatever. I frankly don't see the need for a complex test harness to
launch tests and print reports. Maybe when we have 1000 tests we will
want some of that complexity, but for now it's only a waste of time.
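Such a script could be as small as the following sketch (the tests/*.test layout and names are assumptions for illustration, not Wine's actual makefile rules):

```shell
#!/bin/sh
# Sketch of the "simple shell script" runner: exec each test unit in
# turn and report failures.  The tests/*.test layout is an assumption
# for illustration only.
run_tests() {
    failed=0
    for t in tests/*.test; do
        [ -f "$t" ] || continue            # no tests found: nothing to do
        if "$t" > "$t.out" 2>&1; then
            echo "PASS $t"
        else
            echo "FAIL $t (output in $t.out)"
            failed=$((failed + 1))
        fi
    done
    return "$failed"                       # 0 means a fully successful run
}
```

A makefile rule can then simply depend on the script exiting 0, which is what lets make test stop on the first regression.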

 a) this is license not for Wine, but for Wine testing
 application. From my point of view GPL is very appropriate in this
 case. Any other OS projects, using the test application will have to
 contribute the changes back. On other hand the license won't hurt any
 commercial application because nobody will make business on it.

Let's not debate licenses again. Anything that is included in the Wine
distribution has to be under the Wine license. If you want a GPLed
test suite you'll have to distribute it separately from Wine; but I
think that would be a mistake.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-03 Thread Andriy Palamarchuk

Sylvain Petreolle wrote:

 Running test1.pl returns to me :
 
 [syl@snoop winetest]$ cd /c/winetest 
 [syl@snoop winetest]$ perl test1.pl
 Can't locate wine.pm in @INC (@INC contains:
 /usr/lib/perl5/5.6.0/i386-linux /usr/lib/perl5/5.6.0
 /usr/lib/perl5/site_perl/5.6.0/i386-linux
 /usr/lib/perl5/site_perl/5.6.0
 /usr/lib/perl5/site_perl .) at test1.pl line 8.
 BEGIN failed--compilation aborted at test1.pl line
 8.--- 

Sorry, I was not specific enough about the script
launching instructions.
It looks like you are trying to start the script from
a directory which does not contain the wine.pm module.
You need to put the contents of the archive into the
programs/winetest directory of the Wine source tree
and build the winetest application; then it will work.
This directory should contain wine.pm.

Because of the problems I reported in the message (see
below) the scripts require different launching. You
can start the scripts as:

# this script uses winetest framework
winetest test1.pl 

#these don't
perl test2.pl
perl test_all.pl

[...]
However I found a few issues with winetest:
1) For some reason, running test_all.pl with winetest
gives a compilation error. I saw the same compilation
error when I tried to use another Perl testing
framework, Test::Unit.
2) Compilation failure when I try to run test1.pl
directly with Perl, as in perl test1.pl

Let me know if you still can't start the scripts.

Andriy Palamarchuk

__
Do You Yahoo!?
Send your FREE holiday greetings online!
http://greetings.yahoo.com





Re: We *really* need a development model change !

2002-01-03 Thread Andriy Palamarchuk

--- Andriy Palamarchuk [EMAIL PROTECTED] wrote:
 Bad:
5) no types checking, so errors in values, calculated
manually won't be caught :-(

Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-03 Thread Francois Gouget

On 3 Jan 2002, Alexandre Julliard wrote:

 Andriy Palamarchuk [EMAIL PROTECTED] writes:

  Always succeed *under Windows*. Do you really, really,
  really think all the tests will succeed under Wine
  from day 1 and we will be able to maintain them
  failure-free?

 Absolutely. There's a very simple way of enforcing that: I'm not
 comitting anything that causes make test to fail.

  The value of unit tests is exactly in failures! The
  more failures of unit tests we have - the better test
  developers do their work.

 Well, I guess it's a philosophical point, but for me the value of the
 tests is for regression testing. If you allow the tests to fail you'll
 pretty soon have 90% of the tests fail somewhere, and this is
 completely useless except maybe as a list of things we still have to
 do. While if the tests usually succeed, as soon as something fails you
 know there's a regression and you have to fix it.

   This is why the notion of 'TODO tests' or known differences (my
'.ref.diff') is useful. This way when writing a test you don't have to
restrict yourself to just what works at a given point in Wine, and
still, the tests don't fail. But as soon as something changes in the
failure mode, or even stops failing, then you know it.


--
Francois Gouget [EMAIL PROTECTED]http://fgouget.free.fr/
   New version: the old bugs have been replaced by new ones.






Re: We *really* need a development model change !

2002-01-03 Thread Dan Kegel

Alexandre Julliard wrote:
  Do you really, really,
  really think all the tests will succeed under Wine
  from day 1 and we will be able to maintain them
  failure-free?
 
 Absolutely. There's a very simple way of enforcing that: I'm not
 comitting anything that causes make test to fail.

That's great to hear, but I think you have to modify your statement a
bit -- you may want to commit new tests that don't yet pass,
if they show a real flaw in Wine.

That means you probably want to live with less than 100%  success rates.
The important thing when committing a new change to Wine
(as opposed to a change to the test suite) is that it not
cause any *new* failures.  I bet that's what you meant.

I'm so jazzed by the new emphasis on regression testing.
There were sparks of it in previous years
http://groups.google.com/groups?hl=enselm=38CE7B2D.77204824%40alumni.caltech.edu
but it didn't catch on for some reason.

Francois, are your tests (from http://fgouget.free.fr/wine/booktesting-en.shtml)
part of this framework yet?
- Dan





Re: We *really* need a development model change !

2002-01-03 Thread Andreas Mohr

On Thu, Jan 03, 2002 at 10:59:37AM -0800, Dan Kegel wrote:
 Alexandre Julliard wrote:
   Do you really, really,
   really think all the tests will succeed under Wine
   from day 1 and we will be able to maintain them
   failure-free?
  
  Absolutely. There's a very simple way of enforcing that: I'm not
  comitting anything that causes make test to fail.
 
 That's great to hear, but I think you have to modify your statement a
 bit -- you may want to commit new tests that don't yet pass,
 if they show a real flaw in Wine.
OK, let me show my support for your view, too.

IMHO not committing any tests that fail is a high goal, but it's simply the
wrong one ;-)

Right now we've got a *lot* of problems giving people something worthwhile to
hack on:
Getting started with Wine is very difficult.

By giving people 150 (or, as far as I'm concerned, it'll be many more)
tests that fail on Wine, they even have a *choice* in selecting the specific issue
that they want to fix !

That's why a perfect test suite is bad.

-- 
Andreas MohrStauferstr. 6, D-71272 Renningen, Germany
Tel. +49 7159 800604http://home.nexgo.de/andi.mohr/





Re: We *really* need a development model change !

2002-01-03 Thread Francois Gouget

On Thu, 3 Jan 2002, Dan Kegel wrote:
[...]
 Francois, are your tests (from http://fgouget.free.fr/wine/booktesting-en.shtml)
 part of this framework yet?

   No and they will not be. The reason is that the source for these
tests are part of books and as such it is all copyrighted material.
   Plus, the actual programs in these tests are almost all interactive
so they are not suited for our automated testing purposes. But I will
continue to maintain them (hmmm, whenever I have time) as they are a
good complement, especially for testing winemaker and Winelib.


--
Francois Gouget [EMAIL PROTECTED]http://fgouget.free.fr/
Linux: It is now safe to turn on your computer.






Re: We *really* need a development model change !

2002-01-03 Thread Alexandre Julliard

Dan Kegel [EMAIL PROTECTED] writes:

 That's great to hear, but I think you have to modify your statement a
 bit -- you may want to commit new tests that don't yet pass,
 if they show a real flaw in Wine.

In that case the test should use a TODO mechanism or equivalent, and
it must still be possible to run make test without failure (but there
would be an option to switch the failures on if you want).

 That means you probably want to live with less than 100%  success rates.
 The important thing when committing a new change to Wine
 (as opposed to a change to the test suite) is that it not
 cause any *new* failures.  I bet that's what you meant.

No, what I mean is that you can't spot new failures if every test run
shows hundreds of existing ones. The only way to find new failures is
if you can do a successful test run before a change.

Imagine that you have 1000 tests, and a typical run shows 250
failures. Then you make a change, and you now see 248 failures. Does
it mean you fixed 2 bugs, or does it mean you fixed 5 and introduced 3
new ones?  You have no way of knowing without going through the 250
failures one by one. This is clearly not possible.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-03 Thread Andriy Palamarchuk


--- Alexandre Julliard [EMAIL PROTECTED] wrote:
 Andriy Palamarchuk [EMAIL PROTECTED] writes:
 
  Always succeed *under Windows*. Do you really,
 really,
  really think all the tests will succeed under Wine
  from day 1 and we will be able to maintain them
  failure-free?
 
 Absolutely. There's a very simple way of enforcing
 that: I'm not
 comitting anything that causes make test to fail.

  The value of unit tests is exactly in failures!
 The
  more failures of unit tests we have - the better
 test
  developers do their work.
 
 Well, I guess it's a philosophical point, but for me
 the value of the
 tests is for regression testing. If you allow the
 tests to fail you'll
 pretty soon have 90% of the tests fail somewhere,
 and this is
 completely useless except maybe as a list of things
 we still have to
 do. While if the tests usually succeed, as soon as
 something fails you
 know there's a regression and you have to fix it.

As Francois mentioned, this is why TODO tests exist.
Even without TODO tests, it is not wise to reject a
perfectly correct patch from a Windows developer who
does not even have Wine.

Even without the TODO construct, a compromise is still
possible. E.g. we can use conditional compilation or
execution to select only the tests which succeed for
finding regressions. I believe there will be somebody
interested in mining the dirty tests.
Another option is to store failure lists and compare
them from time to time in a search for regressions.

 What you can do with my make test patch is run make
 test -k first, let
 it go through everything, and then run make test
 again and it will
 only run the tests that failed the first time, or
 that have been
 modified. This is a major gain. It could certainly
 be done some other
 way too without using make, but AFAICS your test
 harness would force
 to run all tests all the time (or to disable them
 manually). This is
 not acceptable once you have a large number of
 tests.

I don't see the point in selecting subsets of tests.
From experience - even with a pretty big number of
tests it does not take long to execute them.

 OTOH a Wine test suite can happen (I hope) because
 this is something
 Wine developers need when they write code, so there
 is at least some
 motivation for them to write tests.

Exactly! I do not want to spend the resources of the
Wine project on some nice documentation. On the
contrary, the goal of this change is to invite
external resources.

Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-03 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 I don't see poing in selecting subsets of tests. From
 experience - even with pretty big number of tests it
 does not take long time to execute them.

A good testing suite is going to take a long time to run. My simple
atom test takes about 3 seconds on my machine, 1000 such tests would
take an hour. Now some of that is Perl overhead of course, C tests
would likely run faster but you need to compile them first so it's not
necessarily a gain.

In either case a full run will most likely take longer than compiling
Wine itself. Running all tests all the time is simply not an option;
just imagine if you had to rebuild all of Wine everytime you change
something.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-03 Thread Ulrich Weigand

Alexandre Julliard wrote:
 Dan Kegel [EMAIL PROTECTED] writes:
  That's great to hear, but I think you have to modify your statement a
  bit -- you may want to commit new tests that don't yet pass,
  if they show a real flaw in Wine.
 
 In that case the test should use a TODO mechanism or equivalent, and
 it must still be possible to run make test without failure (but there
 would be an option to switch the failures on if you want).

The dejagnu test harness that is used by e.g. the gcc test suite
makes it possible to classify a test case as 'expected to fail'.

When you run the test, every test case either passes or fails,
which results in a classification into four subsets:

  PASS   Test case was expected to pass, and it did
  FAIL   Test case was expected to pass, but failed
  XPASS  Test case was expected to fail, but passed
  XFAIL  Test case was expected to fail, and it did

Only a test case in the FAIL category causes the whole test run
to fail, and is reported even in the test run summary.  The other
categories are only reported as total numbers.

If you are getting nonzero FAIL numbers, you have introduced a
regression.  Nonzero PASS and XFAIL numbers are expected; if you
get nonzero XPASS numbers you might look at the cases in question
and decide whether you want to remove the 'expected to fail' flag.

This system works quite well in my experience with gcc, maybe something
like this could be implemented for Wine as well ...
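The four-way classification described above can be sketched in a few lines of shell (all names here are illustrative, not from any existing harness):

```shell
#!/bin/sh
# Sketch of dejagnu-style classification: combine each case's observed
# outcome with its "expected to fail" flag into four totals.
pass=0; fail=0; xpass=0; xfail=0

# record <passed: 1|0> <expected_to_fail: 1|0>
record() {
    case "$1$2" in
        10) pass=$((pass + 1)) ;;    # expected to pass, did: PASS
        00) fail=$((fail + 1)) ;;    # expected to pass, failed: FAIL
        11) xpass=$((xpass + 1)) ;;  # expected to fail, passed: XPASS
        01) xfail=$((xfail + 1)) ;;  # expected to fail, did: XFAIL
    esac
}

record 1 0   # a working API: PASS
record 0 1   # a known Wine shortcoming: XFAIL, run still succeeds
record 1 1   # fixed without clearing the flag: XPASS, worth reviewing

echo "PASS: $pass  FAIL: $fail  XPASS: $xpass  XFAIL: $xfail"
[ "$fail" -eq 0 ]    # only unexpected failures fail the whole run
```

Only the FAIL counter drives the run's exit status, which is exactly what makes new regressions stand out against a background of known failures.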

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  [EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-03 Thread Robert Baruch

On Thursday 03 January 2002 07:54 am, Andriy Palamarchuk wrote:
 Alexandre Julliard wrote:

 The value of unit tests is exactly in failures! The
 more failures of unit tests we have - the better test
 developers do their work.

 The whole programming methodology exists which
 dictates that you write tests first, then implement
 code which makes them succeed.
 Please, look at this short article to better
 understand my point of view:
 Test Infected: Programmers Love Writing Tests
 http://members.pingnet.ch/gamma/junit.htm

According to Extreme Programming Installed, the chapter on Unit Tests, page 
97: "Everyone on the team releases code only when all the unit tests in the 
entire system run at 100 percent." So in theory there shouldn't be any 
failures, since the code wouldn't make it into the CVS tree. The only way this 
could work in the face of missing functionality is if the tests for that 
functionality are not run until the functionality is implemented.

The value is when you add new functionality (and possibly new tests) and old 
tests break. Then you can pinpoint the changes that caused the old tests to 
break. Again, that can only work if all the old tests succeeded, which means 
you can't include tests that you know will fail in a release.

--Rob





Re: We *really* need a development model change !

2002-01-03 Thread Sylvain Petreolle

Hi Andriy and all,

Thanks,
the scripts are now running as expected.

 --- Andriy Palamarchuk [EMAIL PROTECTED] wrote: 
Sylvain Petreolle wrote:
 
  Running test1.pl returns to me :
  
  [syl@snoop winetest]$ cd /c/winetest 
  [syl@snoop winetest]$ perl test1.pl
  Can't locate wine.pm in @INC (@INC contains:
  /usr/lib/perl5/5.6.0/i386-linux
 /usr/lib/perl5/5.6.0
  /usr/lib/perl5/site_perl/5.6.0/i386-linux
  /usr/lib/perl5/site_perl/5.6.0
  /usr/lib/perl5/site_perl .) at test1.pl line 8.
  BEGIN failed--compilation aborted at test1.pl line
  8.--- 
 
 Sorry, I was not specific enough about the scripts
 launching instructions.
 It looks like you try to start the script from
 directory, which does not have wine.pm module. You
 need to put contents of the archive to directory
 programs/winetest of the wine source tree, build
 winetest application, then it will work. This
 directory should have wine.pm.
 
 Because of the problems I reported in the message
 (see
 below) the scripts require different launching. You
 can start the scripts as:
 
 # this script uses winetest framework
 winetest test1.pl 
 
 #these don't
 perl test2.pl
 perl test_all.pl
 
 [...]
 However I found a few issues with winetest:
 1) For some reason running test_all.pl with
 winetest
 gives compilation error. I saw the same
 compilation
 error when I tried to use other Perl testing
 framework
 Test::Unit.
 2) Compilation failure when I try to run test1.pl
 directly with Perl, like perl test1.pl
 
 Let me know if you still can't start the scripts.
 
 Andriy Palamarchuk
 
 __
 Do You Yahoo!?
 Send your FREE holiday greetings online!
 http://greetings.yahoo.com
 
  






Re: We *really* need a development model change !

2002-01-03 Thread Andreas Mohr

On Thu, Jan 03, 2002 at 04:55:03PM -0500, Robert Baruch wrote:
 On Thursday 03 January 2002 07:54 am, Andriy Palamarchuk wrote:
 The value is when you add new functionality (and possibly new tests) and old 
 tests break. Then you can pinpoint the changes that caused the old tests to 
 break. Again, that can only work if all the old tests succeeded, which means 
 you can't include tests that you know will fail in a release.
No, you can !

This is exactly what everybody seems to assume we don't need:
tests that are *known* to fail.
(like Ulrich Weigand said: have status variables like FAIL, XFAIL, GOOD, XGOOD)

The key to success is to check the *difference* from *expected* behaviour.
And if there is indeed a *difference*, then we know that something changed
and we need to examine it more closely and thus derive our ultimate result
codes from it.

Again, we can't include only tests that work on all occasions.
Instead we should have tests that are as thorough/strict as possible,
and thus with all sorts of failures, but which ultimately don't make the
test suite fail, since they're *expected* to fail for now.

-- 
Andreas MohrStauferstr. 6, D-71272 Renningen, Germany
Tel. +49 7159 800604http://home.nexgo.de/andi.mohr/





Re: We *really* need a development model change !

2002-01-03 Thread Robert Baruch

On Thursday 03 January 2002 06:05 pm, Andreas Mohr wrote:
 On Thu, Jan 03, 2002 at 04:55:03PM -0500, Robert Baruch wrote:
  The value is when you add new functionality (and possibly new tests) and
  old tests break. Then you can pinpoint the changes that caused the old
  tests to break. Again, that can only work if all the old tests succeeded,
  which means you can't include tests that you know will fail in a release.

 No, you can !

 This is exactly what everybody seems to assume we don't need:
 tests that are *known* to fail.
 (like Ulrich Weigand said: have status variables like FAIL, XFAIL, GOOD,
 XGOOD)

 The key to success is to check the *difference* to *expected* behaviour.

Oh, I see. That does make more sense.

I think my problem was in what XP defines as a release, which is a system 
which performs some of its functionality perfectly, and doesn't perform the 
rest of the functionality at all. That is, a customer can play with a 
release and expect not to break the app.

Since Wine effectively gives the customer (the Windows exe) access to 
functionality that hasn't been completed yet, Wine releases aren't the same 
as XP releases, so the XP concept of 100% success in unit tests doesn't apply.

--Rob





Re: We *really* need a development model change !

2002-01-02 Thread Francois Gouget

On 1 Jan 2002, Alexandre Julliard wrote:

 Jeremy White [EMAIL PROTECTED] writes:

  I've started playing with this, Alexandre, and I had a thought/question:
  why not put the tests under 'wine/tests'?  I recognize the benefit
  of having a test immediately associated with the implementation.
  But, I would argue
a)  that not all tests are going to be specific to one dll

 It seems to me that a given test should always be specific not only to
 a dll, but to a single or a few functions of this dll. When do you
 think this would not be the case?

   I can think of one case that I burn to put into the Wine testing
framework: the command line/argc/argv handling. I think it
would make sense to test simultaneously:
 * kernel32.CreateProcess
   It is clearly involved when the parent process is a Wine process
 * kernel32.GetCommandLineA/W
   Clearly related
 * main and WinMain
   Not from a specific dll
 * msvcrt.__getmainargs
   One of the ways to retrieve the parameters
 * msvcrt.__argc, msvcrt.__argv, msvcrt.__wargv
   Other functions returning the process's parameters
 * shell32.CommandLineToArgvW
   Not involved in the process creation but performs exactly the same
   command line to argument array conversion that is involved there.
   So I think it makes sense to test it in the same program.


   But I believe that even that case can fit reasonably well in the test
architecture. For instance we could do these tests in msvcrt/test since
msvcrt depends on kernel32. Then it's just a matter of
CommandLineToArgvW that we may want to separate.


[...]
b)  by placing all the tests together, you make exporting
  a 'test' package to Windows simpler.
c)  You centralize the info and allow for good doco
d)  We can create and sustain a whole Windows make
  file hierarchy, which would be useful to a test
  writer in Windows.

   I think the real rationale for putting tests in a separate directory
would be to make them completely separate from Wine. After all, a Win32
conformance test suite could also benefit other projects like Odin,
ReactOS, and any other Win32 reimplementation. Thus there may be an
argument that it could be beneficial to have a test suite that everyone
could contribute to. It could even be under a different license, like
the GPL, to ensure that all projects contribute to it fairly (and who
wants a proprietary Win32 conformance suite anyway). Such a strategy
would make even more sense if there were already such a test suite out there.

   But I'm not aware of any such test suite, so I say we should start
with an integrated test suite, which seems more practical at the moment,
and we'll see when the time comes if we need to separate it from Wine.


--
Francois Gouget [EMAIL PROTECTED]http://fgouget.free.fr/
The nice thing about meditation is that it makes doing nothing quite respectable
  -- Paul Dean






Re: We *really* need a development model change !

2002-01-02 Thread Andreas Mohr

On Wed, Jan 02, 2002 at 02:34:56AM -0800, Francois Gouget wrote:
I can think of one case that I burn to put into the Wine testing
 framework: the command line/argc/argv handling. I think it
 would make sense to test simultaneously:
Yes, yes, and again: yes !

Wine's cmdline handling is still *very* buggy !
(which is not too astonishing, given the tons of quirks in the Win32 API)
I've had several programs already which didn't work due to cmdline problems.
This is a prime target for API testing.
Another prime target would be file system handling (I know of several functions
that are still broken).

-- 
Andreas MohrStauferstr. 6, D-71272 Renningen, Germany
Tel. +49 7159 800604http://home.nexgo.de/andi.mohr/





Re: We *really* need a development model change !

2002-01-02 Thread Francois Gouget

On 30 Dec 2001, Alexandre Julliard wrote:
[...]
 In fact here's a 10-minute hack to add a make test target. With that
 all you have to do is create a test script in dlls/xxx/tests/foo.test,
 put the expected output in tests/foo.test.ref (presumably generated by
 running the test under Windows), add your script to the makefile and
 run make test.


   I think what we need with this is a couple of guidelines and
documentation for potential test writers, and maybe a couple of
extensions. The following is half proposed documentation that we could
put in the Wine Developer Guide, and half a proposed specification for
some possible extensions. As usual, comments and suggestions are
welcome.



What is a test
--

   A test unit is an executable or script. You can name it any way you
like (please, no spaces in the names; they are always annoying). All test
units should be non-interactive. A test unit called xxx generates two
outputs:
 * its exit code
 * text output on either or both of stdout and stderr, both of which are
normally redirected to a file called 'xxx.out'.

   A test succeeds if:
 * its exit code is 0
 * and its output, 'xxx.out' matches the reference output according to
the rules described later.

   Reciprocally, it fails if:
 * its exit code is non-zero
   Either because one aspect of the test failed and thus the test unit
decided to return a non-zero code to indicate failure, or because it
crashed and thus the parent got a >= 128 error code.
 * or because its output differs from the reference output established
on Windows

   Under this model each test unit may actually comprise more than one
process (for instance to test CreateProcess, inter-process messaging,
inter-process DDE, etc.). All that counts is that the original process
does not finish until the testing is complete, so that the testing
framework knows when to check the test output and move on.
   (There is no provision for hung tests. A time-out based mechanism,
with a large time-out, like 5 minutes, could do the trick.)


   A test unit can also exercise more than one aspect of one or more
APIs. But, as a rule of thumb, a specific test should not exercise more
than a couple to a handful of related APIs (or up to a dozen in extreme
cases). Also, more than one test could exercise different aspects of a
given API.
   So when running the Wine regression tests, if we find that 3 tests
out of 50 failed, it means that three processes out of fifty had an
incorrect exit code or output. One should then analyze in more detail
what went wrong during the execution of these processes to determine
which specific API, or aspect of an API, misbehaved.  This can be done
either by looking at their output, by running them again with Wine
traces, or even by running them in a debugger.



Test Output
---

   Wine tests can write their output in any form they like. The only
important criteria are:
 * it should be reproducible from one run to the next: don't print
pointer values. They are most likely to change in the next run and thus
cannot be checked
 * it should be the same on a wide range of systems: don't print things
like the screen resolution!
 * it should be easy to correlate with the source of the test. For
instance if a check fails, it would be a good idea to print a message
that can easily be grepped for in the source code, or even the line
number of that check. But don't print line numbers for success messages;
they will change whenever someone changes the test and would require an
update to the reference files.
 * the output should not be empty (just in case the process may die with
a 0 return code / fail to start before writing anything to the output)
 * finally it should be easy to read by the people who are going to be
debugging the test when something goes wrong.


   To each test we associate a file containing the reference output for
that test. If the test's output consists of a single "Test Ok", then
that file may be omitted. (I am not sure this shortcut is actually
very needed/useful)

   Otherwise this file is called either:
 * 'xxx.ref'
 * or 'xxx.win95', 'xxx.win98', ... if the output depends on the
Windows version being emulated. The winver-specific file takes
precedence over the '.ref' file, and the '.ref' file, which should
exist, serves as a fallback.

   This second feature is probably best avoided as much as possible, as
multiple reference files are harder to maintain than a single reference
file. But they may be useful for some APIs (can anyone think of any?).
In any case I propose not to implement it until we actually find the
need for it.


   One may also create a file called 'xxx.ref.diff' (resp.
'xxx.win95.diff', etc.) which contains a diff between the test output on
Windows and the test output in Wine. The goal is to:
 * make it unnecessary to tweak tests so they do not report known Wine
shortcomings/bugs, or to remove these tests altogether
 * but not have hundreds of tests that systematically fail due to

Re: We *really* need a development model change !

2002-01-02 Thread Andreas Mohr

On Wed, Jan 02, 2002 at 04:36:14AM -0800, Francois Gouget wrote:
I think what we need with this is a couple of guidelines and
 documentation for potential test writers, and maybe a couple of
 extensions. The following is half a proposed documentation that we could
 put in the Wine Developer Guide, and half a proposed specification for
 some possible extensions. As usual, comments and suggestions are
 welcome.
Good !

Well, I've read the whole damn thing, and I don't have many
comments/flames/whatever to make (damn ! :)

The criteria to determine success/failure of a test unit xxx then
 becomes:
xxx > xxx.out 2>&1
if the return code is != 0
   then the test failed
diff -u xxx.ref xxx.out > xxx.diff
if there is no xxx.ref.diff && xxx.diff is not empty
   then the test failed
if xxx.diff is different from xxx.ref.diff
   then the test failed
otherwise the test is successful
Wow, someone must have been very bored ;)

What is needed most is two sample tests:
  * one simple console-based test
  * another one involving some GUI stuff
No !
We need Win32 GUI, Win32 console and Win16.

Then the documentation could use them as examples and discuss the
 interesting aspects.
Yep !
(in a very simple way...)

I believe that all the above is fairly neutral as far as perl vs. C
 is concerned. Except for the compilation issues, and maybe the exact
 command to use to invoke a test, whether a test is written in C or in
 perl should not make any difference.
Great job !
(unfortunately I don't have much to add :-\)

I'm damn sure some issues will still arise, but we can only find problems
if we go ahead and implement it.

-- 
Andreas Mohr                Stauferstr. 6, D-71272 Renningen, Germany
Tel. +49 7159 800604        http://home.nexgo.de/andi.mohr/





Re: We *really* need a development model change !

2002-01-02 Thread Andriy Palamarchuk


--- Alexandre Julliard [EMAIL PROTECTED] wrote:
 In theory tests should be written under Windows yes.
 In practice the
 initial version of a test may be done on Windows,
 but I'm sure people
 will then modify the test under Wine without
 bothering to retry under
 Windows every time.

What is the point of having a Win32 API unit test which
does not conform to the API reference implementation?!

I agree that some developers don't have Windows
installed, don't have time to recompile the tests
under Windows, etc., but Win32 conformance is the
purpose of the whole project. No big qualification is
required to fix conformance problems, and a few
volunteers with access to different Windows platforms
can perform this task.

Andriy



__
Do You Yahoo!?
Send your FREE holiday greetings online!
http://greetings.yahoo.com





Re: We *really* need a development model change !

2002-01-02 Thread Andriy Palamarchuk


--- Francois Gouget [EMAIL PROTECTED] wrote:
[...]

 What is a test

I wonder if I'm the only one who favours using an
existing testing framework? Why create something
new if you have not reviewed the existing options?

Perl has a big choice of tools. In previous messages I
reported on the choices for C.

Are you afraid that it will be difficult to learn a new
API? We impose some conventions on ourselves anyway.
All the frameworks I saw provide a very simple API.

Examples of usages of different frameworks:

Perl module Test::Simple:
ok( 1 == 1, '1 == 1' );
ok( 2 == 2, '2 == 2' );

Perl module Test::Unit:
sub test_ok_1 {
  assert(1 == 1);
  assert(2 == 2);
}   

C framework Check:
fail_unless(1 == 1, "1 == 1");
fail_unless(2 == 2, "2 == 2");

Sure, there is more code to structure the test suites
and glue them together, but the API is very simple and
can be easily guessed from examples. I don't see a
developer spending more than a few minutes to learn the
framework basics.

Advantages we get using an existing framework:
1) existing services of the framework can be used. Some
of the services I'm interested in:
 - TODO tests (not reported by default) and SKIPPED
tests (a test is not executed under some conditions) -
Test::Simple Perl module
 - powerful reporting capabilities
 - test code structuring (init, run, teardown, test
hierarchy)
 - address space protection for individual test
applications - Check C framework

2) the implementation of the API can be extended as we
like without changing the API. We can use the help of
the framework developers. Conformance to the API is
maintained by the compilation process. The conventions
you suggested can be changed only by changing the tests
and can't be easily checked.


Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-02 Thread Andriy Palamarchuk

--- Francois Gouget [EMAIL PROTECTED] wrote:
[...]

 A test unit can also exercise more than one aspect of
 one or more APIs. But, as a rule of thumb, a specific
 test should not exercise more than a couple to a
 handful of related APIs (or up to a dozen in extreme
 cases). Also, more than one test could exercise
 different aspects of a given API.

I don't see a problem in having complex, task-oriented
tests. Simple tests check one module; complex tests
help to check the integration between components. Both
are needed. I especially like to create tests which
implement long data-conversion chains where the output
should be equal to the input :-)
I still want to have well-structured tests.

 Wine tests can write their output in any form they 
 like. The only important criteria are:

One more criterion - restore the system state (if
possible). We don't want to have a misconfigured
machine after running the test suite.

 To each test we associate a file containing the 
 reference output for
 that test.

Maintaining reference output file(s) is difficult
because of:
- EOL conversions
- keeping all the files in sync
- output files can be huge
- the test suite can have different output for
different versions of Windows
- one person can't run the suite on all Windows
platforms, and those who can are afraid to touch the
fragile system of output reference files.

I suggest using explicit checks and printing
descriptive messages in case of failure. I agree, this
approach is more labour-intensive, especially for tests
using IPC. It is also much more maintainable once you
have coded it. Everything, including the differences
between Windows platforms, is documented directly in
the code! This gives much better control. E.g., it is
possible to comment out part of the test and still get
meaningful results about what worked and what did not.

One more idea about reference output - it can be
included in the Perl script as an in-line constant, so
we can keep the output in the same file as the test itself.
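(A sketch of that idea - the test body below is a hypothetical stand-in; only the here-document trick for keeping the reference output in the same file matters:)

```perl
use strict;
use warnings;

# Keep the reference output as an in-line constant (a here-document)
# in the same file as the test itself.
my $reference = <<'END_REF';
ok 1 - Valid atom handle
ok 2 - No error code defined
END_REF

# Stand-in for real test output; an actual test would collect this
# while exercising the API under test.
my $output = "ok 1 - Valid atom handle\n"
           . "ok 2 - No error code defined\n";

my $passed = ($output eq $reference);
print $passed ? "PASS\n" : "FAIL\n";
```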

 Each test should contain a section that looks 
 something like:

 # @START_TEST_COVERAGE@
 # kernel32.CreateProcess
 # kernel32.GetCommandLineA
 # kernel32.GetCommandLineW
 # __getmainargs
 # __p__argc
 # __p_argv
 # __p_wargv
 # @END_TEST_COVERAGE@

I prefer to have descriptive comments and failure
messages. It will be difficult to keep the description
you suggest in sync with the test implementation.

 * TEST_WINVER
This contains the value of the '-winver' Wine 
 argument, or the
 Windows version if the test is being run in Windows.

 We should mandate the use of specific values of 
 winver so that tests don't have to
 recognize all the synonyms of win2000 (nt2k...),
etc.
(Do we need to distinguish between Windows and 
 Wine?)

Yes, we do. Sometimes we have behavior implemented
only for one Windows version, e.g. NT. In this case
the -winver switch won't affect Wine's behaviour.

 * TEST_BATCH
If true (equal to 1) or unset, then the test 
 should assume that it is being run from within the 
 test framework and thus that it should be
 non-interactive. If TEST_BATCH is set to 0, then the

 test can assume that it is being run in interactive 
 mode, and thus ask questions to the
 user. Of course most tests will simply behave 
 identically in both cases,

I strongly recommend using only the batch version. If
somebody has reasons to play with different forms of
input, he can use any of the following easy options:
a) hardcode it - the original test is improved ;-)
b) take the test code into his interactive application
c) add interaction with the user to his local copy of
the tests

 Running tests
 -
 In Wine:
 
'make tests' seems the best way to do things.
But some tests may need to create windows. For 
 instance I have a DIB test that creates a window, 
 draws a number of DIBs in it and checks the
 bitmap bits of these DIBs and then exits. Thus it is

 a non-interactive test. I am not really sure whether

 the window actually needs to be made
 visible or not, but even if this particular example
 does not require it,
 I suspect that others, checking for message sequences
 for instance, may need to make the window visible.

I have tests which show a window to use window
messaging. The idea of separating tests into cui and
gui looks good.
You can run cui tests while doing something else.

 In Windows:

  Hmmm, not sure how that is done. Run
'winetest.exe'?

Something like:
perl run_all_tests.pl

The unit test application we develop will be a test of
the Win32 API in general, useful for all implementations.
I'm for keeping this test in a separate directory tree,
not mixing it with the Wine files.

Andriy Palamarchuk






Re: We *really* need a development model change !

2002-01-02 Thread Alexandre Julliard

Francois Gouget [EMAIL PROTECTED] writes:

I think what we need with this is a couple of guidelines and
 documentation for potential test writers, and maybe a couple of
 extensions. The following is half a proposed documentation that we could
 put in the Wine Developer Guide, and half a proposed specification for
 some possible extensions. As usual, comments and suggestions are
 welcome.

Great job!

  * it should be easy to correlate with the source of the test. For
 instance if a check fails, it would be a good idea to print a message
 that can easily be grepped in the source code, or even the line number
 for that check. But don't print line numbers for success messages, they
 will change whenever someone changes the test and would require an
 update to the reference files.

IMO the test should not be printing successes or failures at all. If
it can determine whether some result is OK or not, it should simply do
an assert on the result. Printing things should be only for the cases
where checking for failure is too complicated, and so we need to rely
on the output comparison to detect failures.

Otherwise this file is either called:
  * 'xxx.ref'
  * or 'xxx.win95' or 'xxx.win98' ... if the output depends on the
 Windows version being emulated. The winver-specific file takes
 precedence over the '.ref' file, and the '.ref' file, which should
 exist, serves as a fallback.

No, there should always be a single .ref file IMO. Version checks
should be done inside the test itself to make sure the output is
always the same.

Each test should contain a section that looks something like:
 
 # @START_TEST_COVERAGE@
 # kernel32.CreateProcess
 # kernel32.GetCommandLineA
 # kernel32.GetCommandLineW
 # __getmainargs
 # __p__argc
 # __p_argv
 # __p_wargv
 # @END_TEST_COVERAGE@

This is already part of the Perl framework, you have to explicitly
declare the functions you use. So we don't want to duplicate the
information.

  * TEST_WINVER
This contains the value of the '-winver' Wine argument, or the
 Windows version if the test is being run in Windows. We should mandate
 the use of specific values of winver so that tests don't have to
 recognize all the synonyms of win2000 (nt2k...), etc.
(Do we need to distinguish between Windows and Wine?)

The test should use GetVersion() and friends IMO, no need for a
separate variable.

  * and add two corresponding targets: 'make cui-tests' runs only those
 tests that do not pop up windows, and 'make gui-tests' runs only those
 tests that do pop up windows
  * 'make tests' would be 'tests: cui-tests gui-tests'

I don't think this complexity is necessary. You can always redirect
the display if the windows annoy you. And tests should try to keep the
windows hidden as much as possible.

What is needed most is a two sample tests:
  * one simple console based test
  * another one involving some GUI stuff

I have attached two sample Perl scripts that were written some time
ago by John Sturtz and myself. One is testing the atom functions and
the other is creating a window. They should probably be simplified a
bit to serve as documentation samples.

-- 
Alexandre Julliard
[EMAIL PROTECTED]



atom.pl
Description: Perl program


win.pl
Description: Perl program


Re: We *really* need a development model change !

2002-01-02 Thread Andreas Mohr

On Wed, Jan 02, 2002 at 10:20:25AM -0800, Alexandre Julliard wrote:
 Francois Gouget [EMAIL PROTECTED] writes:
   * it should be easy to correlate with the source of the test. For
  instance if a check fails, it would be a good idea to print a message
  that can easily be grepped in the source code, or even the line number
  for that check. But don't print line numbers for success messages, they
  will change whenever someone changes the test and would require an
  update to the reference files.
 
 IMO the test should not be printing successes or failures at all. If
 it can determine whether some result is OK or not, it should simply do
 an assert on the result. Printing things should be only for the cases
 where checking for failure is too complicated, and so we need to rely
 on the output comparison to detect failures.
Hmm, I don't know how you'd do that exactly.
If we implement strict testing, then tons of functions will fail on Wine.
And then we get an assert() every 20 seconds or what ??
More info needed here, I guess...

 Otherwise this file is either called:
   * 'xxx.ref'
   * or 'xxx.win95' or 'xxx.win98' ... if the output depends on the
  Windows version being emulated. The winver-specific file takes
  precedence over the '.ref' file, and the '.ref' file, which should
  exist, serves as a fallback.
 
 No, there should always be a single .ref file IMO. Version checks
 should be done inside the test itself to make sure the output is
 always the same.
[...]
   * TEST_WINVER
 This contains the value of the '-winver' Wine argument, or the
  Windows version if the test is being run in Windows. We should mandate
  the use of specific values of winver so that tests don't have to
  recognize all the synonyms of win2000 (nt2k...), etc.
 (Do we need to distinguish between Windows and Wine?)
 
 The test should use GetVersion() and friends IMO, no need for a
 separate variable.
Doh ! Right !
Like Andriy already said: the tests themselves should reflect the entire
behaviour of the functions, and even the version differences.
One additional step in the direction of very simple, self-checking tests...

   * and add two corresponding targets: 'make cui-tests' runs only those
  tests that do not pop up windows, and 'make gui-tests' runs only those
  tests that do pop up windows
   * 'make tests' would be 'tests: cui-tests gui-tests'
 
 I don't think this complexity is necessary. You can always redirect
 the display if the windows annoy you. And tests should try to keep the
 windows hidden as much as possible.
Hmm, why complexity ?
Is it really that difficult to implement ?
I'd say it's a useful feature, and it doesn't incur much penalty,
so the feature/penalty quotient is high enough ;-)
Avoiding visible Windows as much as possible would be nice to have, though.

Hmm, OTOH:
maybe it'd be better to use make targets tests-unattended and tests-visual
instead (note that I'm writing the names the other way around).
This of course means that even many GUI tests would fall under the unattended
category, thus annoying window popups would have to be minimized.

OTOH we already kind of decided that we don't want to care about GUI testing
right now, so maybe we should really just use one test target for now.
Splitting later should be easy anyway.

-- 
Andreas Mohr                Stauferstr. 6, D-71272 Renningen, Germany
Tel. +49 7159 800604        http://home.nexgo.de/andi.mohr/





Re: We *really* need a development model change !

2002-01-02 Thread Jeremy White



* its exit code
 * text output on either or both of stdout and stderr, both of which are
normally redirected to a file called 'xxx.out'.

   A test succeeds if:
 * its exit code is 0
 * and its output, 'xxx.out' matches the reference output according to
the rules described later.

I think that it would be handy to use stderr for status/diagnostics, and
only use
stdout for reference checking.  Chatty people like me get their statii,
but the tests remain clean.  Perhaps a WINETEST_DEBUG env variable
would be a good addition as well (and since Alexandre controls the commits,
my guess is it'll default to off <g>).

Otherwise this file is either called:
 * 'xxx.ref'
 * or 'xxx.win95' or 'xxx.win98' ... if the output depends on the
Windows version being emulated. The winver-specific file takes
precedence over the '.ref' file, and the '.ref' file, which should
exist, serves as a fallback.

I hope that the cases where this would be needed would be few enough
that we wouldn't need to build in a general purpose exception;
we can just have a mytest.win95.test and a mytest.win98.test for
the cases where it's needed.

[snipping chunks largely agreed with]


Test coverage
-

   Each test should contain a section that looks something like:

I started trying to tweak Alexandre's patch to create some sample
tests, and I learned the following:
1.  Cygwin installation has *dramatically* improved.
 Getting a full working toolchain is no longer
 a big pain in the rear end, it's actually pretty easy.

2.  Having '.test' files be implicitly Perl scripts is too
 limiting,  IMHO.  I hate to add another format, but
 I've been toying with YAFF (yet another file format)
 so that a '.test' file describes a test, as follows:
# Comment lines
script=name_of_my_perl_test_script
invoke=name_of_c_test_or_shell_script
status=-eq 0
pattern=.*OK
compare=name_of_my_ref_file
 
 where one of script or invoke is required, and
  status, if given, is a test expression that $0 is compared to,
  pattern is a regexp applied to stdout, if given,
  and compare is the name of a .ref.out file to compare
  the output with.

  The default would be just a case of 'status=-eq 0'.

  The nice thing about this approach is that you can
  handle the multiple version testing just by creating
  a new reference file and a new .test file.
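(A parser for such a '.test' description could be tiny. A sketch in Perl - the format, its keys, and the defaults are Jeremy's proposal above, not anything Wine actually uses:)

```perl
use strict;
use warnings;

# Parse the proposed '.test' description format: '#' comments,
# key=value lines, with 'script' or 'invoke' required and
# 'status' defaulting to '-eq 0'.
sub parse_test_file {
    my ($text) = @_;
    my %desc;
    for my $line (split /\n/, $text) {
        next if $line =~ /^\s*#/ or $line =~ /^\s*$/;   # skip comments/blanks
        my ($key, $value) = $line =~ /^(\w+)=(.*)$/ or next;
        $desc{$key} = $value;
    }
    die "need script= or invoke=\n"
        unless exists $desc{script} or exists $desc{invoke};
    $desc{status} = '-eq 0' unless exists $desc{status};  # default
    return \%desc;
}

my $sample = <<'END';
# Comment line
script=atom.pl
pattern=.*OK
END
my $desc = parse_test_file($sample);
```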
 

   

In Windows:

   Hmmm, not sure how that is done. Run 'winetest.exe'?

IMO, we should have a script that creates a 'winetest.zip',
with simple batch files to make it easy to run a single
test.

But, (as a new convert to the ease of use of Cygwin), I think
we can just stick with 'make tests' for the full deal
on Windows.

Jer







Re: We *really* need a development model change !

2002-01-02 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 I suggest to use explicit checks and print descriptive
 messages in case of failure. I agree, this approach is
 more labour-intensive, especially for tests using IPC.
 It is also much more maintainable as soon as you coded
 it. Everything, including differences between
 different Windows platforms is documented directly in
 the code! This gives much better control. E.g, it is
 possible to comment-out part of the test, still
 getting meaningful results what worked, what did not.

I definitely agree here, having the code check everything itself is
much better that having to compare the output. The drawback is that it
makes tests more complex to write, so I'm not sure if we want to make
it the rule for all tests.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-02 Thread Alexandre Julliard

Francois Gouget [EMAIL PROTECTED] writes:

I am not sure about using asserts. In the case where one calls the
 same API ten times with different parameters, it would probably be
 better to still do all ten tests even if the second call fails. This way
 the developer debugging things immediately knows if just one aspect
 failed or if it's everything. If we call assert as soon as one item fails
 we don't have the information of whether the rest still works or
 not.

Sure, it doesn't have to be the C library version of assert, it can be
an equivalent that continues execution and only sets the exit status
at the end.
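(Such an assert-equivalent is only a few lines in either language. A sketch in Perl - the name ok() is borrowed from Test::Simple, and nothing here is Wine's actual implementation:)

```perl
use strict;
use warnings;

# A non-fatal assert: record each failure and keep going, so one bad
# check does not hide the results of the later ones. The failures are
# folded into the exit status only once, at the very end of the run.
my $failures = 0;

sub ok {
    my ($condition, $message) = @_;
    if ($condition) {
        print "ok - $message\n";
    } else {
        print "not ok - $message\n";
        $failures++;
    }
}

ok(1 == 1, "1 == 1");
ok("abc" eq "abc", "string comparison");

END { $? = 1 if $failures }   # set the exit status at the end
```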

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-02 Thread Francois Gouget

On 2 Jan 2002, Alexandre Julliard wrote:

 Francois Gouget [EMAIL PROTECTED] writes:

 I am not sure about using asserts. In the case where one calls the
  same API ten times with different parameters, it would probably be
  better to still do all ten tests even if the second call fails. This way
  the developer debugging things immediately knows if just one aspect
  failed or if it's everything. If we call assert as soon as one item fails
  we don't have the information of whether the rest still works or
  not.

 Sure, it doesn't have to be the C library version of assert, it can be
 an equivalent that continues execution and only sets the exit status
 at the end.

   Kind of like my proposed 'test_failed(message)' function :-)
   I agree then.


--
Francois Gouget [EMAIL PROTECTED]        http://fgouget.free.fr/
 Avoid the Gates of Hell - use Linux.







Re: We *really* need a development model change !

2002-01-02 Thread David Elliott

On 2002.01.02 14:12 Jeremy White wrote:
[big snip]
1.  Cygwin installation has *dramatically* improved.
 Getting a full working toolchain is no longer
 a big pain in the rear end, it's actually pretty easy.
 
[big snip]

Well, as I mentioned the other day... I have recently built a linux cross 
mingw32 toolchain with the latest released binutils (maybe I should 
upgrade that, seems the mingw people also use a newer unstable binutils) 
and the latest released gcc (3.0.3) along with the MinGW w32api and 
mingw-runtime packages (both version 1.2).

There were a few issues, such as building with multithread support, that 
crept up in the build process, and thus I have some patches for that if 
anyone is interested.  Most of them patch the configure scripts and so on 
so that it uses threading for the target environment instead of trying to 
compile POSIX threads.

I will be contacting the MinGW team about this shortly.

-Dave





Re: We *really* need a development model change !

2002-01-02 Thread Andreas Mohr

On Wed, Jan 02, 2002 at 11:40:40AM -0800, Alexandre Julliard wrote:
 Andriy Palamarchuk [EMAIL PROTECTED] writes:
 
  I suggest to use explicit checks and print descriptive
  messages in case of failure. I agree, this approach is
  more labour-intensive, especially for tests using IPC.
  It is also much more maintainable as soon as you coded
  it. Everything, including differences between
  different Windows platforms is documented directly in
  the code! This gives much better control. E.g, it is
  possible to comment-out part of the test, still
  getting meaningful results what worked, what did not.
 
 I definitely agree here, having the code check everything itself is
 much better that having to compare the output. The drawback is that it
 makes tests more complex to write, so I'm not sure if we want to make
 it the rule for all tests.
I think we do want to do this :-)
It doesn't add significant overhead to the test procedure,
and as far as third-party (aka Windows) developers are concerned,
they could just hack away at their specific test; even if they miss
version checking we could easily add it later, I guess, as someone *will*
notice and will enhance it properly.

-- 
Andreas Mohr                Stauferstr. 6, D-71272 Renningen, Germany
Tel. +49 7159 800604        http://home.nexgo.de/andi.mohr/





Re: We *really* need a development model change !

2002-01-02 Thread Andriy Palamarchuk

A final attempt to solicit feedback on my
suggestion to use an existing testing framework.


I want to bring to your attention the testing framework
Test::Simple. I think you'll like this one the most
because it implements exactly the ideas you suggested
earlier, plus some more.

You can play with the examples by unpacking the file
winetest.tar.gz into the existing winetest application
directory.

1) look at file test1.pl. It implements exactly the
functionality of the existing test.pl module using
the Test::Simple framework. The only changes I made are
the descriptive error messages for the first few tests.

Output of test1.pl:
ok 1 - Valid atom handle
ok 2 - No error code defined
ok 3
ok 4 - Succeed code
ok 5 - Atom name
ok 6
ok 7
ok 8
ok 9
ok 10
ok 11
ok 12
ok 13
ok 14
ok 15
1..15

The basic usage is not more difficult than the one you
suggested, right?

2) test2.pl - a very simple test script. It demonstrates
the TODO tests functionality. These are tests which are
known to fail - you are notified if any of them
succeeds by miracle. You'll see the following output if
you run the test:

1..4
ok 1 - Success
not ok 2
# Failed test (test2.pl at line 8)
not ok 3 # TODO Example of using TODO tests
# Failed (TODO) test (test2.pl at line 12)
ok 4 - Example of successfull TODO test # TODO Example
of using TODO tests
# Looks like you failed 1 tests of 4.

3) Things become even more interesting when
Test::Simple is used with the module Test::Harness.
Test::Harness allows running many tests at once and
consolidating their results.
test_all.pl uses the module to run all the tests
(currently test2.pl only). The output of the script:

test2.p.# Failed test (test2.pl at
line 8)
# Looks like you failed 1 tests of 4.
dubious
Test returned status 1 (wstat 256, 0x100)
DIED. FAILED tests 2-3
Failed 2/4 tests, 50.00% okay
Failed Test  Status Wstat Total Fail  Failed  List of
failed
---
test2.pl  1   256 42  50.00%  2-3
Failed 1/1 test scripts, 0.00% okay. 2/4 subtests
failed, 50.00% okay.

4) the framework has other nice features, like
skipping tests. Useful in choosing platform-specific
tests, gui vs cli, etc.

Is this functionality sufficient for winetest?

However I found a few issues with winetest:
1) For some reason running test_all.pl with winetest
gives a compilation error. I saw the same compilation
error when I tried to use another Perl testing
framework, Test::Unit.
2) A compilation failure when I try to run test1.pl
directly with Perl, like perl test1.pl

I look forward to your answer.
Let me know if you need more information.

Andriy Palamarchuk





winetest.tar.gz
Description: winetest.tar.gz


Re: We *really* need a development model change !

2002-01-02 Thread Francois Gouget

On Wed, 2 Jan 2002, Andreas Mohr wrote:
[...]
 What is needed most is a two sample tests:
   * one simple console based test
   * another one involving some GUI stuff
 No !
 We need Win32 GUI, Win32 console and Win16.

   Why is it necessary to separate Win16 from the rest?
   On Windows it could make sense: obviously you cannot run the 32bit
tests on Windows 3.x. But in Wine if you are running the Win32 GUI
tests, why not run the Win16 GUI tests? Is a 'Win16 console' test
something that can exist? Would that be a DOS program? In any case
couldn't they be run with all the Win32 console tests?


--
Francois Gouget [EMAIL PROTECTED]        http://fgouget.free.fr/
Linux: It is now safe to turn on your computer.







Re: We *really* need a development model change !

2002-01-02 Thread Sylvain Petreolle

Running test1.pl gives me:

[syl@snoop winetest]$ cd /c/winetest 
[syl@snoop winetest]$ perl test1.pl
Can't locate wine.pm in @INC (@INC contains:
/usr/lib/perl5/5.6.0/i386-linux /usr/lib/perl5/5.6.0
/usr/lib/perl5/site_perl/5.6.0/i386-linux
/usr/lib/perl5/site_perl/5.6.0
/usr/lib/perl5/site_perl .) at test1.pl line 8.
BEGIN failed--compilation aborted at test1.pl line
8.

--- Andriy Palamarchuk [EMAIL PROTECTED] wrote:
[big snip]

___
Do You Yahoo!? -- Une adresse @yahoo.fr gratuite et en français !
Yahoo! Courrier : http://courrier.yahoo.fr





Re: We *really* need a development model change !

2002-01-02 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 1) look at file test1.pl. It implements exactly the
 functionality of existing test.pl module with using
 Test::Simple framework. The only change I made are
 descriptive error messages for the first few tests. 
 
 Output of test1.pl:
 ok 1 - Valid atom handle
 ok 2 - No error code defined
 ok 3
 ok 4 - Succeed code
 ok 5 - Atom name
 ok 6
 ok 7
 ok 8
 ok 9
 ok 10
 ok 11
 ok 12
 ok 13
 ok 14
 ok 15
 1..15
 
 The basic usage is not more difficult than one you
 suggested, right?

Yes, using ok() or assert() is pretty much the same. But it should not
be printing all that stuff IMO, unless you explicitly ask it to when
debugging a test, for instance. The TODO/SKIP stuff is interesting;
I agree we probably want something like that.

 3) Things become even more interesting when
 Test::Simple is used with module Test::Harness.
 Test::Harness allows to run many tests at once and
 consolidate results of these tests.
 test_all.pl uses the module to run all the tests
 (currently test2.pl only). The output of the script:
 
 test2.p.# Failed test (test2.pl at
 line 8)
 # Looks like you failed 1 tests of 4.
 dubious
   Test returned status 1 (wstat 256, 0x100)
 DIED. FAILED tests 2-3
   Failed 2/4 tests, 50.00% okay
 Failed Test  Status Wstat Total Fail  Failed  List of
 failed
 ---
 test2.pl  1   256 42  50.00%  2-3
 Failed 1/1 test scripts, 0.00% okay. 2/4 subtests
 failed, 50.00% okay.

I really don't see a need for this kind of thing. IMO we should
enforce that tests always succeed, otherwise we can't do regression
testing. And running the tests through the Makefile has the advantage
that you can check dependencies and only run tests when something
affecting them has changed.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-01 Thread Jeremy White




In fact here's a 10-minute hack to add a make test target. With that
all you have to do is create a test script in dlls/xxx/tests/foo.test,
put the expected output in tests/foo.test.ref (presumably generated by
running the test under Windows), add your script to the makefile and
run make test.

I've started playing with this, Alexandre, and I had a thought/question:
why not put the tests under 'wine/tests'?  I recognize the benefit
of having a test immediately associated with the implementation.
But, I would argue
  a)  that not all tests are going to be specific to one dll
  b)  by placing all the tests together, you make exporting
a 'test' package to Windows simpler.
  c)  You centralize the info and allow for good doco
  d)  We can create and sustain a whole Windows make
file hierarchy, which would be useful to a test
writer in Windows.

(And yes, I ask because I am threatening to actually do some of the work;
of course, I'll run out of time shortly, and it will be an empty 
threat...g).

Thoughts?

Jer








Re: We *really* need a development model change !

2002-01-01 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 Before I was confident the tests would be developed under Windows and
 then run under Wine. You described reverse situation.
 
 To create a test people will have to use Windows to check it works
 properly. Of course all the tests must succeed under Windows.

In theory tests should be written under Windows yes. In practice the
initial version of a test may be done on Windows, but I'm sure people
will then modify the test under Wine without bothering to retry under
Windows every time.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-01 Thread Alexandre Julliard

Jeremy White [EMAIL PROTECTED] writes:

 I've started playing with this, Alexandre, and I had a thought/question:
 why not put the tests under 'wine/tests'?  I recognize the benefit
 of having a test immediately associated with the implementation.
 But, I would argue
   a)  that not all tests are going to be specific to one dll

It seems to me that a given test should always be specific not only to
a dll, but to a single or a few functions of this dll. When do you
think this would not be the case?

This really goes with the dll separation strategy: Wine should no
longer be viewed as a monolithic program, but more as a set of dll
packages grouped in the same tar file. And at some point it could
become desirable to split some dlls out of the main tree, or to have
separate people maintain separate dlls independently. So I think the
unit tests should be part of their respective dll.

   b)  by placing all the tests together, you make exporting
 a 'test' package to Windows simpler.
   c)  You centralize the info and allow for good doco
   d)  We can create and sustain a whole Windows make
 file hierarchy, which would be useful to a test
 writer in Windows.

I don't think we should maintain a Windows make hierarchy, at least
not manually. If we have to ship Windows makefiles they should be
generated from the Wine makefiles (or both types of makefile generated
from some other source file). Asking people to keep two hierarchies in
sync won't work.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2002-01-01 Thread Jeremy White



I don't think we should maintain a Windows make hierarchy, at least
not manually. If we have to ship Windows makefiles they should be
generated from the Wine makefiles (or both types of makefile generated
from some other source file). Asking people to keep two hierarchies in
sync won't work.


I'm relatively neutral on the tests vs. dlls issue, and so I'm willing
to defer to your judgement.

However, I think it's critical that this process somehow be set up to be
trivial for a Windows geek to use.  And requiring the Cygwin
toolchain on Windows defeats the whole purpose.  For example,
here at home I have nothing but a totally brain-dead Win98 partition.
No compilers, nothing.  (Okay, Diablo II, but that's it).

For me, at a minimum, I need to have a precompiled winetest.exe.

Ideally, we would have a 'winetest.zip' such that all I would
have to do is install Perl, and then I'd have a nice set of
sample test scripts I could run/modify/tweak to my heart's
satisfaction.  If I had a C compiler, I could also compile
the C tests.

Hmm.  What if I had a 'make export-tests' target that created
a template 'winetest.zip' file?  Then I've just got to
get a Windows winetest.exe file built and repackage
the 'winetest.zip' file.  

So, if we had *one* Windows machine with a full Cygwin/CVS/gmake
toolchain, it could periodically build new 'winetest.zip'
files and publish them as a separate download at winehq.com.

What do you think?  If I extended your patch to add an export-tests
target, would this be useful?

Jer







Re: We *really* need a development model change !

2002-01-01 Thread Alexandre Julliard

Jeremy White [EMAIL PROTECTED] writes:

 Ideally, we would have a 'winetest.zip' such that all I would
 have to do is install Perl, and then I'd have a nice set of
 sample test scripts I could run/modify/tweak to my hearts
 satisfaction.

Exactly, yes. If possible winetest.zip should also include perl.dll
and the needed packages so that you don't even need to install perl at
all. I think it may be a good idea to have a winetest.zip that
contains winetest.exe and related files that you only have to install
once, and another zip that contains the tests themselves, which you may
need to upgrade more frequently (or that you could get directly from
CVS if you have a CVS client on Windows).

 Hmm.  What if I had a 'make export-tests' that created a template
 'winetest.zip' file.  Then I've just got to get a Windows
 winetest.exe file built and repackage the 'winetest.zip' file.  So,
 if we had *one* Windows machine with a full Cygwin/CVS/gmake
 toolchain, it could periodically build new 'winetest.zip' files and
 publish them as a separate download at winehq.com.
 
 What do you think?  If I extended your patch to add an export-tests
 target, would this be useful?

Sure, but I don't think a makefile target is the best way to do it. It
would be better to simply have a script that could be run on WineHQ
every night to rebuild the zip directly from CVS.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2001-12-31 Thread Andriy Palamarchuk

One more point in favour of C-based tests: they will be very useful in
porting Wine to non-Intel platforms. C tests will help to test both
execution of Windows applications under a processor emulator and
compilation of Windows applications with Winelib.

 1. It is much easier to install under Windows than a full development
    environment, and we don't have to worry about supporting a dozen
    different compilers. We can simply provide a zip file containing the
    compiled script interpreter, and people can be up and running in seconds.

We can create a MinGW toolchain distribution customized for the test
application. Besides the original MinGW tools (about 32MB), this
distribution can include command-line and GUI CVS clients and a GUI
test application launcher. Can you suggest any other tools?

The launcher will be able to run the Wine test application, update it
from CVS, build it, and create patches. Such a distribution is mostly
newbie-oriented.

With a script-based tool it is still necessary to install CVS
separately, or we need to create our own distribution.


Another advantage of C-based tests is that they provide a smooth path
from test creation to contributing to the core Wine project.

Thanks,
Andriy Palamarchuk

__
Do You Yahoo!?
Send your FREE holiday greetings online!
http://greetings.yahoo.com





Re: We *really* need a development model change !

2001-12-31 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 It seems the problem is not so big. It will be sufficient to run the
 binary, compiled under Windows not more often than once a month.

But it won't compile. Once we have a simple environment in Wine where
you run make test and everything happens automatically, people will
use that. They won't bother to update all the makefiles etc. that you
need in order to build on Windows. Then every time someone tries to
build the tests under Windows they will have to fix a ton of problems
before it works.

We simply cannot expect people to constantly dual-boot to run their
tests in both environments, so we need a way to make sure that when
code works on one platform it also works on the other without extra
work. We could certainly build a Windows infrastructure that does
everything automatically for C tests, but this is a massive amount of
work.

 BTW, tests in Perl address only execution of applications, not compilation.

Compilation is not an interesting case. 99% of it is tested by
compiling Wine itself, and the remaining occasional problem is trivial
to locate and fix. There's simply no need for regression testing of
the compilation environment.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2001-12-31 Thread Andriy Palamarchuk

Alexandre Julliard wrote:

 Andriy Palamarchuk [EMAIL PROTECTED] writes:
It seems the problem is not so big. It will be sufficient to run the
binary, compiled under Windows not more often than once a month.

 
 But it won't compile. Once we have a simple environment in Wine where
 you run make test and everything happens automatically, people will
 use that. They won't bother to update all the makefiles etc. that you
 need in order to build on Windows. Then every time someone tries to
 build the tests under Windows they will have to fix a ton of problems
 before it works.

 
 

 We simply cannot expect people to constantly dual-boot to run their
 tests in both environments, so we need a way to make sure that when
 code works on one platform it also works on the other without extra
 work. We could certainly build a Windows infrastructure that does
 everything automatically for C tests, but this is a massive amount of
 work.


Until now I was confident the tests would be developed under Windows
and then run under Wine. You described the reverse situation.
To create a test, people will have to use Windows to check that it
works properly. Of course all the tests must succeed under Windows.

BTW, tests in Perl address only execution of applications, not compilation.

 
 Compilation is not an interesting case. 99% of it is tested by
 compiling Wine itself, and the remaining occasional problem is trivial
 to locate and fix. There's simply no need for regression testing of
 the compilation environment.


You are right under current conditions. The situation may be more
interesting if Wine is ported to a non-Intel architecture.

Ok, this week I'll create a few tests in Perl and share my unbiased 
experience :-)

Andriy









Re: We *really* need a development model change !

2001-12-30 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 No, ability to call W32 API functions is not
 considered a unit test infrastructure. I may say Wine
 has had such infrastructure for C since 1993 :-). From
 this point of view Andreas' test application provides
 more support for unit tests than a plain Perl module.

Not really, because you need a lot more infrastructure to build C
tests than to simply run Perl scripts. For Perl all you need is the
winetest app that is in the tree, plus a bit of Makefile glue which is
pretty trivial to do.

In fact here's a 10-minute hack to add a "make test" target. With that
all you have to do is create a test script in dlls/xxx/tests/foo.test,
put the expected output in tests/foo.test.ref (presumably generated by
running the test under Windows), add your script to the makefile and
run "make test".

Index: Make.rules.in
===
RCS file: /opt/cvs-commit/wine/Make.rules.in,v
retrieving revision 1.95
diff -u -r1.95 Make.rules.in
--- Make.rules.in   2001/12/14 23:14:22 1.95
+++ Make.rules.in   2001/12/30 18:00:41
@@ -47,6 +47,7 @@
 RM= rm -f
 MV= mv
 MKDIR = mkdir -p
+DIFF  = diff -u
 C2MAN = @C2MAN@
 MANSPECS  = -w $(TOPSRCDIR)/dlls/gdi/gdi32.spec \
-w $(TOPSRCDIR)/dlls/user/user32.spec \
@@ -58,6 +59,8 @@
 ALLLINTFLAGS = $(LINTFLAGS) $(DEFS) $(OPTIONS) $(DIVINCL)
 WINAPI_CHECK = $(TOPSRCDIR)/tools/winapi_check/winapi_check
 WINEBUILD = $(TOPOBJDIR)/tools/winebuild/winebuild
+WINETEST  = $(TOPOBJDIR)/programs/winetest/winetest
+RUNTEST   = $(TOPOBJDIR)/programs/winetest/runtest
 MAKEDEP   = $(TOPOBJDIR)/tools/makedep
 WRC   = $(TOPOBJDIR)/tools/wrc/wrc
 WMC   = $(TOPOBJDIR)/tools/wmc/wmc
@@ -95,7 +98,7 @@
 
 # Implicit rules
 
-.SUFFIXES: .mc .rc .mc.rc .res .spec .spec.c .glue.c
+.SUFFIXES: .mc .rc .mc.rc .res .spec .spec.c .glue.c .test .test.out .test.ref
 
 .c.o:
$(CC) -c $(ALLCFLAGS) -o $@ $<
@@ -121,6 +124,12 @@
 .c.ln:
$(LINT) -c $(ALLLINTFLAGS) $< || ( $(RM) $@ && exit 1 )
 
+.test.test.out:
+   $(RUNTEST) $(TOPOBJDIR) $< > $@
+
+.test.out.test.ref:
+   $(DIFF) $< $@ && touch $@
+
 .PHONY: all install uninstall clean distclean depend dummy
 
 # 'all' target first in case the enclosing Makefile didn't define any target
@@ -216,7 +225,7 @@
-cd `dirname $@` && $(RM) $(CLEAN_FILES)
 
 clean:: $(SUBDIRS:%=%/__clean__) $(EXTRASUBDIRS:%=%/__clean__)
-   $(RM) $(CLEAN_FILES) $(GEN_C_SRCS) $(GEN_ASM_SRCS) $(RC_SRCS:.rc=.res) $(RC_SRCS16:.rc=.res) $(MC_SRCS:.mc=.mc.rc) $(PROGRAMS)
+   $(RM) $(CLEAN_FILES) $(GEN_C_SRCS) $(GEN_ASM_SRCS) $(RC_SRCS:.rc=.res) $(RC_SRCS16:.rc=.res) $(MC_SRCS:.mc=.mc.rc) $(TESTS:%=tests/%.out) $(PROGRAMS)
 
 # Rules for installing
 
@@ -225,6 +234,13 @@
 
 $(SUBDIRS:%=%/__uninstall__): dummy
cd `dirname $@` && $(MAKE) uninstall
+
+# Rules for testing
+
+test:: $(WINETEST) $(TESTS:%=tests/%.out) $(TESTS:%=tests/%.ref)
+
+$(WINETEST):
+   cd $(TOPOBJDIR)/programs/winetest && $(MAKE) winetest
 
 # Misc. rules
 
Index: Makefile.in
===
RCS file: /opt/cvs-commit/wine/Makefile.in,v
retrieving revision 1.103
diff -u -r1.103 Makefile.in
--- Makefile.in 2001/11/23 23:04:58 1.103
+++ Makefile.in 2001/12/30 18:00:41
@@ -132,6 +132,9 @@
@cd dlls && $(MAKE) checklink
@cd debugger && $(MAKE) checklink
 
+test::
+   @cd dlls && $(MAKE) test
+
 TAGS etags:
etags `find $(TOPSRCDIR) -name '*.[chS]' -print | grep -v dbgmain`
 
Index: dlls/Makedll.rules.in
===
RCS file: /opt/cvs-commit/wine/dlls/Makedll.rules.in,v
retrieving revision 1.16
diff -u -r1.16 Makedll.rules.in
--- dlls/Makedll.rules.in   2001/09/17 20:09:08 1.16
+++ dlls/Makedll.rules.in   2001/12/30 18:01:00
@@ -36,6 +36,10 @@
 checklink:: lib$(MODULE).$(LIBEXT)
$(CC) -o checklink $(TOPSRCDIR)/library/checklink.c -L. -l$(MODULE) $(ALL_LIBS) && $(RM) checklink
 
+# Rules for testing
+
+$(TESTS:%=tests/%.out): lib$(MODULE).$(LIBEXT)
+
 # Rules for debug channels
 
 debug_channels: dummy
Index: dlls/Makefile.in
===
RCS file: /opt/cvs-commit/wine/dlls/Makefile.in,v
retrieving revision 1.109
diff -u -r1.109 Makefile.in
--- dlls/Makefile.in2001/11/06 17:52:37 1.109
+++ dlls/Makefile.in2001/12/30 18:01:02
@@ -764,6 +764,9 @@
 
 # Misc rules
 
+$(SUBDIRS:%=%/__test__): dummy
+   @cd `dirname $@` && $(MAKE) test
+
 $(SUBDIRS:%=%/__checklink__): dummy
@cd `dirname $@` && $(MAKE) checklink
 
@@ -773,6 +776,8 @@
 install:: $(SUBDIRS:%=%/__install__)
 
 uninstall:: $(SUBDIRS:%=%/__uninstall__)
+
+test:: $(SUBDIRS:%=%/__test__)
 
 checklink:: $(SUBDIRS:%=%/__checklink__)
 
--- /dev/null   Fri Dec  7 20:45:56 2001
+++ programs/winetest/runtest   Sat Dec 29 17:00:48 2001
@@ -0,0 +1,9 @@
+#!/bin/sh
+topobjdir=$1
+shift

Re: We *really* need a development model change !

2001-12-30 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 1) The discussion started from John Sturtz post, who
 created the Perl module for Win32 functions.
 Discussion what is better - C or Perl for unit testing
 started later as I understand there was no conclusion.
 Now I can assume that this topic was not discussed to
 death and we can do it now ;-)

OK you are right, it was discussed to death inside CodeWeavers, but
not all of that was public. Basically the argument is that some sort
of scripting language is better than plain C for two reasons:

1. It is much easier to install under Windows than a full development
  environment, and we don't have to worry about supporting a dozen
  different compilers. We can simply provide a zip file containing the
  compiled script interpreter, and people can be up and running in
  seconds.

2. The scripts are independent from the compilation environment, which
  allows testing binary compatibility. In C you have to compile the
  tests under Wine using the Wine headers, which means you can't spot
  wrong definitions in the headers since the test will see the same
  definition as Wine itself. The only way around is to build tests
  under Windows and run them under Wine but this is a major pain.
  With a script you are guaranteed to run the exact same thing in both
  environments.

I started implementing a simple scripting language, but then John
Sturtz showed that it was possible to leverage Perl to do the same
thing, so I think it's the way to go.

There are probably a number of things you cannot do from Perl, like
threads or exception handling, and for that we will want some kind of
C framework too. But I believe we can already go a long way with the
Perl stuff we have today. Maybe I'm wrong, maybe it's really unusable
and we need to scrap it and redo a C environment from scratch; but we
won't know that until we try to use it seriously.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2001-12-30 Thread David Elliott

On 2001.12.30 15:34 Alexandre Julliard wrote:

 2. The scripts are independent from the compilation environment, which
   allows testing binary compatibility. In C you have to compile the
   tests under Wine using the Wine headers, which means you can't spot
   wrong definitions in the headers since the test will see the same
   definition as Wine itself. The only way around is to build tests
   under Windows and run them under Wine but this is a major pain.
   With a script you are guaranteed to run the exact same thing in both
   environments.
 

Well, if it helps any, I have recently built a linux-cross-mingw32 
toolchain as RPMs.

For those not running on an RPM distro the specfiles should be easy enough 
to understand and do manually.

For those who are running RH7.2 (and probably 7.1 as well) you can have 
the binary packages.  And for all other RPM distros the source RPMs should 
build without problems.

I have the following RPMs built:

i386-mingw32-binutils-2.11.2-0_biscuit_0
i386-mingw32-gcc-bootstrap-3.0.3-0_biscuit_0 (built from the same specfile 
as the normal gcc but only makes a C compiler good enough to build w32api 
and mingw-runtime properly).
i386-mingw32-w32api-1.2-0_biscuit_0
i386-mingw32-mingw-runtime-1.2-0_biscuit_0
All the rest are part of the gcc build:
i386-mingw32-gcc-3.0.3-0_biscuit_0
i386-mingw32-gcc-c++-3.0.3-0_biscuit_0
i386-mingw32-libstdc++-devel-3.0.3-0_biscuit_0
i386-mingw32-gcc-g77-3.0.3-0_biscuit_0
i386-mingw32-gcc-java-3.0.3-0_biscuit_0 (unfortunately it didn't compile 
the java runtime, so this one is useless until I figure that out)

If anyone is interested in this I can probably put it in 
kernelrpm.sourceforge.net temporarily before I contact the MinGW team and 
see if they would be interested in hosting these files on their 
sourceforge site.

Please note that this is not just a simple compile.  I had to do quite a 
bit of patching (well, figuring out what to patch was the issue, the 
patches are tiny) to get it to build and work properly even including 
thread support.

I also have compiled wxMSW with this compiler and successfully built and 
tested the minimal, mdi, and taskbar test programs.

Of note is that the MDI program does not show any icons in the toolbar
when run under Wine, but works perfectly under Windows (95osr2 in
Win4Lin).  Is anyone aware of this?  It might be related to IE5.5
displaying black squares instead of icons in its toolbar, and it would
be significantly easier to debug as I built all of this with debugging
info (assuming winedbg can read it), plus you'd have the full source
code from wxwindows.org.

-Dave





Re: We *really* need a development model change !

2001-12-30 Thread Andreas Mohr

[omitting comments about very nice make test framework]

On Sun, Dec 30, 2001 at 12:34:06PM -0800, Alexandre Julliard wrote:
 Andriy Palamarchuk [EMAIL PROTECTED] writes:
 
  1) The discussion started from John Sturtz post, who
  created the Perl module for Win32 functions.
  Discussion what is better - C or Perl for unit testing
  started later as I understand there was no conclusion.
  Now I can assume that this topic was not discussed to
  death and we can do it now ;-)
 
 OK you are right, it was discussed to death inside CodeWeavers, but
 not all of that was public. Basically the argument is that some sort
Hmm right.
I really should have remembered the extent of these discussions.

 of scripting language is better than plain C for two reasons:
 
 1. It is much easier to install under Windows than a full development
   environment, and we don't have to worry about supporting a dozen
   different compilers. We can simply provide a zip file containing the
   compiled script interpreter, and people can be up and running in
   seconds.
That one goes to you, I guess.

 There are probably a number of things you cannot do from Perl, like
 threads or exception handling, and for that we will want some kind of
 C framework too. But I believe we can already go a long way with the
 Perl stuff we have today. Maybe I'm wrong, maybe it's really unusable
 and we need to scrap it and redo a C environment from scratch; but we
 won't know that until we try to use it seriously.
...at which point we already have a huge data collection of known
expected function behaviour that's just waiting for us to port it easily
to C then or so...

Hmm, and is testing the behaviour of different Wine --winver settings,
and adapting to different Windows versions, possible with this
framework too?
(we'd very much want to have that, I guess)

Oh, and what about Win16 support of the test suite ?
This is why I started this in the first place.
I'm very much afraid of losing HUGE amounts of Win16 compatibility
due to everybody using more and more Win32 programs only...

I haven't toyed with the make test environment yet (severe lack of time
- exam time), so I don't know much about it.
Unfortunately I'm afraid I won't have much time in the foreseeable future
either.

Still, I think we should try to have a C compatibility layer, too, for
two reasons:
- people who aren't familiar with anything else
- supporting diversity (what if someday we find out that perl sucks ? :-)

That's it for now,

Andreas (who'd really like to be able to contribute more to this now)





Re: We *really* need a development model change !

2001-12-30 Thread Alexandre Julliard

Andreas Mohr [EMAIL PROTECTED] writes:

 Hmm, and different winver settings for testing of the behaviour of
 different Wine --winver settings and adapting to different Windows versions
 are possible with this framework, too ?

In its current state it uses the default Wine config from ~/.wine so
any config changes can be done this way (of course we will have to
support --winver in the config file, but we need that anyway). We could
provide a set of standardized configs to reduce variation, but it may
in fact be preferable to let people run with the config they normally
use, so that we have a wider coverage of the different options.

 Oh, and what about Win16 support of the test suite ?

Not supported yet, though this could be added. In fact since the Perl
layer already needs to convert the arguments, making it support 16-bit
should be relatively easy, and probably a lot more transparent than
doing it in C.

 Still, I think we should try to have a C compatibility layer, too, for
 two reasons:
 - people who aren't familiar with anything else
 - supporting diversity (what if someday we find out that perl sucks ? :-)

If it turns out Perl doesn't work, we can always convert the tests to
C, this should be relatively easy. Most of the tests will simply be a
bunch of function calls and result checks, which is simple to do in
any language. Besides, unless you want to do elaborate tests, you
really don't need to know much Perl to be able to write tests with
winetest.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2001-12-30 Thread Andriy Palamarchuk

Responding to Alexandre and Jeremy.

Alexandre Julliard wrote:

 Basically the argument is that some sort
 of scripting language is better than plain C for two reasons:
 
 1. It is much easier to install under Windows than a full development
   environment, and we don't have to worry about supporting a dozen
   different compilers. We can simply provide a zip file containing the
   compiled script interpreter, and people can be up and running in
   seconds.


Completely agree with this one.

 2. The scripts are independent from the compilation environment, which
   allows testing binary compatibility. In C you have to compile the
   tests under Wine using the Wine headers, which means you can't spot
   wrong definitions in the headers since the test will see the same
   definition as Wine itself. The only way around is to build tests
   under Windows and run them under Wine but this is a major pain.
   With a script you are guaranteed to run the exact same thing in both
   environments.


It seems the problem is not so big. It will be sufficient to run the 
binary, compiled under Windows not more often than once a month.

 There are probably a number of things you cannot do from Perl, like
 threads or exception handling, and for that we will want some kind of
 C framework too. But I believe we can already go a long way with the
 Perl stuff we have today. 


Some technical challenges we can overcome by improving our Perl
framework; others will have to be handled with C.

 Maybe I'm wrong, maybe it's really unusable
 and we need to scrap it and redo a C environment from scratch; but we
 won't know that until we try to use it seriously.


Whether Perl is a convenient enough language in comparison with C
remains to be decided.

I see the following big problems with Perl:

1) As I mentioned before, the first problem is capturing an audience
of test creators. Even if Perl were the best programming language ever:
   a) existing Wine developers are experts in C. Even those who know
Perl are more experienced in C.
   b) attracting new developers. We want to target developers who
already know or want to learn the Win32 API, right? Usually these
people are not the ones who program in scripting languages. I'm not
sure we will be able to justify learning a new language to them.

2) Wine targets the problem of compiling and executing Win32
applications. C is the native language of this problem. All the
problems and solutions can be easily expressed in C and require
translation when converted to other languages. All the documentation
and learning materials are C-oriented.
BTW, tests in Perl address only execution of applications, not compilation.

The biggest advantage of a scripting language in this application is
ease of installation.
Most people who know or want to learn the Win32 API already have to
use some kind of C development environment.

I'd prefer to lose 10 developers who don't want to learn C rather than
lose one expert in the Win32 API who thinks that Perl sucks ;-)

Andriy Palamarchuk






Re: We *really* need a development model change !

2001-12-28 Thread Andriy Palamarchuk

Simplicity is one of the goals of the testing framework, but besides
being simple the framework should be powerful enough for such a big
project.

I'm looking for an existing library which would suit our needs.

The unit test framework "Check" looks promising. It provides many
features which I'd love to use for our tests, including:
- protection of the test address space: tests that crash do not bring
down the whole test suite and are reported. At the same time you can
request to run tests in the same address space, which is good for
debugging.
- grouping tests into suites, with tree-like grouping of the test
suites. You can choose to run only a subset of the tests.
- "set up" and "tear down" sections which can be used to set up tests
and free resources afterwards.
- a few forms of output, from silent to detailed.

An individual test looks like this (section from the manual):

START_TEST(test_create)
{
   fail_unless (money_amount(five_dollars) == 5,
                "Amount not set correctly on creation");
   fail_unless (strcmp(money_currency(five_dollars), "USD") == 0,
                "Currency not set correctly on creation");
}
END_TEST

They also give information how to configure the tests
with autoconf.

The issues which need to be resolved:
1) License - GPL. Is it OK to have the test suite under the GPL?
2) Environment - POSIX. The library uses POSIX calls to manage
processes, in particular the functions fork, _exit, write, read,
close, getppid, getpid and pipe.
The calls fork, _exit and getppid are used only when a test runs in
FORK mode. We don't need address space protection under Windows -
tests there should not crash.
The rest of the functions are used for communication between the test
launcher and the tests and can be implemented using the corresponding
Win32 calls. After these changes the framework can be used in a pure
Win32 environment, without Cygwin.

Let me know if you are interested and I'll start to work on the port.
Meanwhile we can start to use the framework under Wine, and on Windows
with Cygwin.

You can find more information about Check here:
http://check.sourceforge.net/ - home page
http://sourceforge.net/project/showfiles.php?group_id=28255 - tutorial

Thanks,
Andriy Palamarchuk






Re: We *really* need a development model change !

2001-12-28 Thread Jeremy White

Andriy (and all),

I think you have dismissed winetest much too quickly.

We spent a considerable amount of energy thinking
about a test harness (largely because one of our investors
felt passionately that it was vital), so not only did
we have the public conversations you saw on wine-devel,
we here at CodeWeavers also had some lengthy private
conversations.

I entered that process feeling much as you do:  that 'C' based
tests were the way and the truth and the light.

I was persuaded that a Perl based test was, in fact,
a viable alternative, if not a better alternative.

Several points:
 1.  John Sturtz worked very hard to make the
 Perl code able to take callbacks.
 AFAIK, it does that.

 2.  The Perl code makes it easy to do pattern
 matching, hex dumps, and general text manipulation,
 which is a good thing.

 3.  Perl is really easy to install and use in Windows;
 writing these tests in Windows is a total
 snap, and, I would argue, Windows is exactly
 where the tests should be written.

All that aside, I think the most important tasks
are a simple set:
 1.  Modify the makefiles so that 'make test'
 (or make regress, or pick your poison)
 does something interesting, and does
 so without much hassle.

 2.  Define a standard for how tests should
 work.  For example, if I wanted to
 write a test for FunctionFoo(), I should
 be able to create wine/programs/winetest/tests/foo.pl,
 add 'foo.pl' to winetest/tests/Makefile.in,
 and my test should automatically become part
 of the regression suite.

 (Note that adding tests/foo.c is just as trivial).

 I believe that the de facto standard chosen
 at this point is that a test prints 'OK' if
 all went well; on error it doesn't, and instead
 prints some clue as to what went wrong.

 3.  Add a section to the documentation on how
 to do the same.


In my not so humble opinion, that is a very straightforward
task, that shouldn't take very long, and would be an
excellent beginning for shifting the development model.

Once we have in place the concept of a 'make regress',
we can start expanding on that to roll in new
testing tools; I just think it's a mistake to arbitrarily
discard a perfectly useful tool.

Jer

Andriy Palamarchuk wrote:

 Simplicity is one of the goals of the testing
 framework, but besides being simple the framework
 should be powerful enough for such big project.
 
 I'm looking for existing library which would suit our
 needs.
 
 The unit test framework Check looks promising. It
 provides many features which I'd love to use for our
 tests, including:
 - protection of the test address space - crashed tests
 do not bring down the whole test suite and are reported.
 At the same time you can request to run tests in the
 same address space, which is good for debugging.
 - grouping tests in suites, tree-like grouping of the
 test suites. You can choose to run only a subset of the
 tests
 - set-up and tear-down sections which can be used to
 prepare tests and free resources afterwards
 - a few forms of output - from silent to detailed
 An individual test looks like this (section from the manual):
 
 START_TEST(test_create)
 {
    fail_unless (money_amount(five_dollars) == 5,
                 "Amount not set correctly on creation");
    fail_unless
     (strcmp(money_currency(five_dollars), "USD") == 0,
      "Currency not set correctly on creation");
 }
 END_TEST
 
 They also explain how to configure the tests
 with autoconf.
 
 The issues which need to be resolved:
 1) License - GPL. Is it Ok to have the test suite
 under GPL?
 2) Environment - POSIX. The library uses POSIX calls
 to manage processes, in particular functions fork,
 _exit, write, read, close, getppid, getpid, pipe.
 The calls fork, _exit and getppid are used only
 when a test runs in FORK mode. We don't need to have
 address space protection under Windows - tests there
 should not crash.
 The rest of the functions are used for communication
 between the test launcher and tests and can be
 implemented using the corresponding Win32 calls.
 After these changes the framework can be
 used in a pure Win32 environment, without Cygwin.
 
 Let me know if you are interested. I'll start to work
 on the port. Meantime we can start to use the
 framework under Wine and on Windows with Cygwin.
 
 You can find more information about Check here:
 http://check.sourceforge.net/ - home page
 http://sourceforge.net/project/showfiles.php?group_id=28255
  - tutorial
 
 Thanks,
 Andriy Palamarchuk
 
 __
 Do You Yahoo!?
 Send your FREE holiday greetings online!
 http://greetings.yahoo.com
 
 








Re: We *really* need a development model change !

2001-12-27 Thread Andriy Palamarchuk


Alexandre Julliard wrote:

 Andreas Mohr [EMAIL PROTECTED] writes:
Please comment on both my intended posting and the
way I programmed the first
version of the test suite (I'm not extremely happy
with the current program;
if you have any improvements, then get them here
ASAP !).
 
 Look at programs/winetest, that's the tool we should
use to write
 tests IMO.

Just looked at the tool. It only provides a gateway from
Perl to Wine API functions, right?

Advantages I see in using script-based tests:
1) tests can be used without recompilation
2) Perl is familiar to many developers
3) programming in a scripting language is usually easier
than in C
4) Perl is widely used for testing purposes and I
expect to find many testing tools in Perl.

Cons:
1) Many current Wine contributors don't know Perl
2) one more level of abstraction does not give
significant advantages in this application. On the
contrary, it is more difficult to locate the cause of
problems because the developer has to go through one more,
often unfamiliar, layer. Absence of strict typing in
this layer will hurt a lot.

Advantages of using C-based tests:
1) compilation has to be used to run the tests. In
some cases this is an advantage. Before running the
tests you'd better be sure it at least compiles in
both environments.
2) C is the language most familiar to developers, and
the language itself is simpler than Perl
3) Documentation for the Win32 API is C-oriented.
Examples given in the documentation are in C or easily
translated to C
4) Developers already have some testing code snippets
in C

Summary:
Requirements for the unit testing tool:
1) should help to quickly create tests
2) easy to use, should help to involve as many
developers as possible
3) may be useful for developers using Wine for their
projects

Because of these goals I'm more inclined to use a C-based
test suite. IMHO it is better suited to the existing Wine
developer audience and will provide us a much bigger
code pool. I'm even ready to have tests in both - Perl
and C.

The big question is a tool to test the GUI. I did not find
any open-source Windows GUI testing frameworks :-(

Comments, suggestions?

Thanks,
Andriy Palamarchuk






Re: We *really* need a development model change !

2001-12-27 Thread Andreas Mohr

On Thu, Dec 27, 2001 at 07:38:24AM -0800, Andriy Palamarchuk wrote:
 Because of the goals I'm more inclined to use C-based
 test suite. IMHO it is better suited for existing Wine
 developers audience and will provide us much bigger
 code pool. I'm even ready to have tests in both - Perl
 and C.

Yes, IMHO we really need a C based test.
After all, in order to leverage the Windows developer group's skills,
we probably really shouldn't ask them to code it up in Perl...

The best way IMHO would be to try to make C- and Perl-based test output
compatible.
That way people familiar with Perl coding could code it in Perl,
and other people would have a choice ;)

Hmm, wait.
Also, creating Perl-based tests would mean that you need to have Perl
installed on Windows, too.
Maybe that is too much of a disadvantage...

 The big question is a tool to test GUI. I did not find
 any OS Windows GUI testing frameworks :-(
I don't think we should even bother right now.
Many Windows functions are not GUI-based, and those that are could
at least get tested against their functional behaviour.

-- 
Andreas MohrStauferstr. 6, D-71272 Renningen, Germany
Tel. +49 7159 800604http://home.nexgo.de/andi.mohr/





Re: We *really* need a development model change !

2001-12-27 Thread Alexandre Julliard

Andriy Palamarchuk [EMAIL PROTECTED] writes:

 Just looked at the tool. It only provides a gateway from
 Perl to Wine API functions, right?

Yes, though a lot of higher-level stuff could be added to it once
people start using it.

The C vs. Perl question has been debated already, please check the
archive. The truth is that a lot of people are willing to setup a
testing infrastructure, but nobody is willing to write the actual
tests.

So right now we have a minimal infrastructure in Perl, and I'm not
going to setup another one until someone starts using the existing one
seriously and demonstrates whether it works or not in practice. The
theory has been discussed to death already.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2001-12-27 Thread Andriy Palamarchuk


--- Alexandre Julliard [EMAIL PROTECTED] wrote:
 The truth is that a lot of people are
 willing to setup a
 testing infrastructure, but nobody is willing to
 write the actual
 tests.

Counterexamples:
1) I suspect that you try to run the code you develop
before committing it to CVS ;-) You work on core Wine
functionality and the test code snippets you use would
be invaluable as unit tests. However I can't find
these tests anywhere in the CVS tree :-P

2) Besides Perl testing framework there are a few
testing applications (vartest, guitest) in CVS, but
they are not merged into a single test suite.

3) I tried to submit a unit test application for
SystemParametersInfo with my first patch to this
function, but the unit test was not accepted. I
assumed that no unit test is necessary in the CVS tree
for such a really simple function.

Summary - there are people who want to develop unit
tests. IMO the problem with unit tests in the project
is:
1) Our attitude. We don't have *any* policy about unit
tests. Developers are not asked about unit tests, and from
the documents on WineHQ it is not clear whether we
need them at all.

2) Absence of infrastructure. We need to develop or
choose a unit testing framework, define unit test
policies, add information about unit tests to the Wine
documentation, and keep unit tests visible to developers
all the time.

 So right now we have a minimal infrastructure in
 Perl

No, the ability to call Win32 API functions is not
considered a unit test infrastructure. I may say Wine
has had such infrastructure for C since 1993 :-). From
this point of view Andreas' test application provides
more support for unit tests than a plain Perl module.

In the existing CVS tree we have many more tests in C than
in Perl (both - a little more than nothing)

 I'm not
 going to setup another one until someone starts
 using the existing one
 seriously and demonstrates whether it works or not
 in practice. The
 theory has been discussed to death already.

I agree with you - only usage of a framework can
help to choose the better one. I'll look more closely at
the C-based ones and try to combine our Perl module with
Perl-based unit test frameworks.


Andriy Palamarchuk






Re: We *really* need a development model change !

2001-12-27 Thread Francois Gouget

On Thu, 27 Dec 2001, Andriy Palamarchuk wrote:

[... Perl/C pros and cons for testing]

   I think you summarised the pros and cons of both options quite
well. I would add just one thing against Perl in its current form: AFAIK
(but it may have changed since), the current Perl framework does not
support callback functions. This can be a problem for testing things
like:
 * CreateThread: takes a function to be run in the new thread
 * timers: SetTimer(...,timerproc)
 * window procs: We can write quite a few tests that create some widget
(a list, table, ...) and then send messages to check the state of that
widget (select an item in the list, then check which item is selected,
whether the 'hot' item is still hot, etc.), and/or the resulting
sequence of other messages. While these are not 'GUI' tests in that they
don't make sure that the list displays correctly (or at all), they check
important properties of the widget implementation.



 The big question is a tool to test GUI. I did not find
 any OS Windows GUI testing frameworks :-(

   Andreas has already replied and I agree with him. But I'll basically
repeat what he said to give it more weight :-)

   GUI testing is not 'the big question'. It's irrelevant right now.
   And what I really don't want to happen is to see us refuse to pick
the low-hanging fruit and starve to death because we don't have a
ladder that lets us reach the fruit at the top of the tree.

   In other words, there are thousands of APIs that we can test very
easily and we should write tests for them NOW. We should not wait for
the development of a hypothetical framework that would let us also test
graphical (i.e. does it display right) issues.
   Testing all these APIs represents enough work to keep us busy for
quite some time and can already benefit Wine greatly. So really, at this
time, given the amount of effort required to even get something usable,
GUI testing is irrelevant and should be postponed.

   (of course if someone out there really wants to develop a GUI testing
framework and donate it to Wine, go ahead, all I want is that we don't
wait for one)


--
Francois Gouget [EMAIL PROTECTED]http://fgouget.free.fr/
 Linux: the choice of a GNU generation






Re: We *really* need a development model change !

2001-12-27 Thread Francois Gouget

On 27 Dec 2001, Alexandre Julliard wrote:
[...]
 The C vs. Perl question has been debated already, please check the
 archive. The truth is that a lot of people are willing to setup a
 testing infrastructure, but nobody is willing to write the actual
 tests.

   I don't know if it is that no one wants to write tests or if it is
that:
 * the current infrastructure has never been officially announced
 * the Wine project never officially said 'go ahead, write tests
   with our new testing infrastructure'
 * there is no documentation on how to write tests, no guidelines
   on how to write good tests, and none on how one is supposed to run the
   Wine regression test suite
 * there is no sample test in any of the dlls/xxx directories. There is
   just one sample perl script in programs/winetest/test.pl
 * and there is also an autoconf test missing for '-lperl':
gcc -shared  -Wl,-Bsymbolic winetest.spec.o  winetest.o wine.o -o
winetest.so -L../../library -lwine `perl -MExtUtils::Embed -e ldopts`
-lm  -lutil -ldl
/usr/bin/ld: cannot find -lperl


   So I would say we still have work to do on the infrastructure, if
nothing else in terms of documentation and 'advertising' it, if we are
to expect people to start writing tests.


--
Francois Gouget [EMAIL PROTECTED]http://fgouget.free.fr/
$live{free} || die ;






Re: We *really* need a development model change !

2001-12-27 Thread Andriy Palamarchuk

I looked at the thread 'Perl/Wine extension for perusal'
from February 2001. I want to bring some information
from that thread into this discussion:

1) The discussion started with John Sturtz's post; he
created the Perl module for Win32 functions.
The discussion of what is better - C or Perl - for unit testing
started later; as I understand, there was no conclusion.
Now I can assume that this topic was not discussed to
death and we can do it now ;-)

2) One of the arguments about the tool choice was its
availability.

Currently you are free to use one of a few commercial
compilers, the free compiler lcc, or gcc as part of the
Cygwin and MinGW packages.
One more problem for C is supporting several compiler
environments.
Perl is available as ActiveState and standard ports.

3) No existing unit test frameworks were discussed.

4) There was a suggestion to use both C and Perl
tests.

5) It was decided that existing applications should not be
run as part of the unit tests.

I think these are all the points of that discussion
which will be interesting for this one. Feel
free to correct me.


--- Andreas Mohr [EMAIL PROTECTED] wrote:
 On Thu, Dec 27, 2001 at 12:13:05PM -0800, Francois
 Gouget wrote:
   The big question is a tool to test GUI. I did
 not find
   any OS Windows GUI testing frameworks :-(
  
 Andreas has already replied and i agree with
 him. But I'll basically
  repeat what he said to give it more weight :-)
  
 GUI testing is not 'the big question'. It's
 irrelevant right now.

I'm convinced.

 In the meantime we should try to discuss more about
 what the test suite
 framework should look like, i.e. whether my approach
 is good/bad, what
 to possibly improve, what the output should look
 like and whether
 it's suitably parsable.

Results of preliminary review of the unit testing
frameworks. Some of these frameworks can run tests in
a separate address space and can report crashes!
Surprisingly, I don't have too many options.

C frameworks:
1) Check. Problem - POSIX-based (under Windows needs
Cygwin).
2) CUnit. Problem - very simple, in development. I
think it is not worth trying.
3) cUnit. Problem - Linux only.
4) Autounit. Problem - POSIX-based
5) QMTest. Problem - needs Python.

C++ frameworks:
1) CPPUnit. Problem - in C++

Perl frameworks:
- there are quite a few Perl modules for testing on
CPAN, including port of JUnit to Perl.

Summary:
What do you think - which ones can we still use
despite the constraints? I'll review the chosen
frameworks more closely.

Perl modules are Ok and I can review them in detail if
we decide to go with Perl.

Andriy Palamarchuk






Re: We *really* need a development model change !

2001-12-27 Thread Geoff Thorpe

On Friday 28 December 2001 05:00, Andreas Mohr wrote:

 Yes, IMHO we really need a C based test.

I'd go along with this. It seems that a variety of tests, written and 
contributed by a variety of people, and thus written in a variety of 
mutually inconsistent and collectively odd ways, is inevitable. Rather than 
people wasting their valuable time on rejigging any existing modular 
extensible test framework or writing anything new, the simplest way 
forward would surely be to document some simple input/output behaviours 
test programs should use.

Eg. as long as it is described what should and should not be sent by a test 
program to stdout+stderr, and what input (if any) should be supported on 
the command line and/or stdin, then you have achieved the only consistency 
that matters. A simple shell script could then, for example, control the 
level of debugging in any/all such tests by simply piping output(s) through 
egrep or whatever - stdout itself could be piped to /dev/null if necessary 
leaving only stderr, etc.

Let's face it - anyone adding new functionality to wine or working on 
anything non-trivial will be cooking up their own foo.c test programs on 
the fly when developing - the less work it takes for that developer to 
convert their foo.c into a test-foo32-api.c for use by test scripts, the 
better off everyone will be.

Let's also not forget, if that standardised form for input/output of the 
testing is as logical and uncomplicated as possible, more and more 
well-meaning code grunts will be capable of making useful contributions - 
be it clunky GUI wrappers for the testing suites (think what 'make xconfig' 
did for Linux kernel rebuilding), or by even taking the various foo.c test 
fragments from the wine developers and coaxing them into the standardised 
form on their behalf.

If a testing framework is anything more complicated - the only people 
working on the test suites will be whoever defines/understands the test 
framework specification and the hard-core wine developers themselves. In 
other words, we won't get far.

Cheers,
Geoff






Re: We *really* need a development model change !

2001-12-27 Thread Andreas Mohr

On Fri, Dec 28, 2001 at 12:49:54PM +1300, Geoff Thorpe wrote:

[voting for a simple test interface]

 If a testing framework is anything more complicated - the only people 
 working on the test suites will be whoever defines/understands the test 
 framework specification and the hard-core wine developers themselves. In 
 other words, we won't get far.
Exactly.

We've got so many (up to 12000) functions to write tests for that we really
can't afford a complicated test environment. Just about everybody
should be able to write tests for a function.
And if this is the case, then we really should try to leverage as much
existing Windows developer knowledge/skillset as possible in order
to have a working test suite in place in just about an instant.

-- 
Andreas MohrStauferstr. 6, D-71272 Renningen, Germany
Tel. +49 7159 800604http://home.nexgo.de/andi.mohr/





Re: We *really* need a development model change !

2001-12-26 Thread Andriy Palamarchuk

Andreas Mohr wrote:
 I guess we really should change our development
model from trying tons of
 programs to *systematically* testing functions and
Windows mechanisms now.
 If we can show everyone where stuff is failing, it
might be a lot easier
 to attract new people.

I *completely* support this idea. The benefits of such a
test suite are enormous. Existing developers can
contribute a lot by adding test snippets for the
functions they create. Now they create such snippets
anyway and throw them away.

 I attached a preview of the posting I intend to post
on *tons* of Windows
 devel newsgroups (Call For Volunteers). That way
we might actually get
 hold of hundreds of Windows developers helping us
implement a complete
 test suite (complete tests of up to 12000 Windows
functions).
 Not to mention the additional PR we might get out of
this...

Let's ask for help only after the suite structure is
more or less defined and we'll be able to give people
something to work on.

Comments:
- Don't want to reinvent the wheel. Is there any
existing test suite framework we can use? Sorry, I
can't suggest any for C but I'm very impressed with
JUnit in Java. It is even Ok if the framework is GPLed
or LGPLed - I don't think any company will make
business based on the test suite.
- I /personally/ prefer a command-line-only interface for
such a suite
- it would be better if the suite printed summary
information and information about failed tests only
- make the test suite more visible for existing
developers. Ask them to run the test suite before
submitting a patch?
- I think the test suite will consist of a few
separate applications because different tests may have
different requirements for GUI configuration,
processes, etc. We need a way to run all the
applications in one batch.
- define a variable which indicates whether the suite
runs under Wine. Such an indicator can be used for Wine
white-box testing.
- it would be great to have functionality to support
output comparison. For some functionality it is easier
to write tests that compare output instead of doing
explicit checks (e.g. tests involving a few
processes). The output can be redirected to a file and
the files compared. If we use files we need to store files
for Wine and a few versions of Windows :-(
- the suite applications' size will be pretty big. Is
it better to move them to a separate CVS tree?
- what about running the suite weekly (or daily)
automatically and publishing the results to
wine-devel?
- most developers on this list have access to one
version of Windows. Is it difficult to create a testing
farm with remote access to a few versions of Windows?
This would help developers to test their code on a few
platforms. Existing environments in the companies
involved in the project could be used.
- I remember a long time ago there was a post on
wine-devel about using Perl or a Perl-like language for
unit testing.
What is the current status of that project?

Thanks,
Andriy Palamarchuk







Re: We *really* need a development model change !

2001-12-26 Thread Andriy Palamarchuk

C Unit test frameworks I found after a quick search:
http://check.sourceforge.net/
http://people.codefactory.se/~spotty/cunit/
http://freshmeat.net/projects/autounit/

C++:
http://sourceforge.net/projects/cppunit/

Thanks,
Andriy Palamarchuk






Re: We *really* need a development model change !

2001-12-26 Thread Andreas Mohr

On Wed, Dec 26, 2001 at 10:07:20AM -0800, Andriy Palamarchuk wrote:
 Andreas Mohr wrote:
  I guess we really should change our development
 model from trying tons of
  programs to *systematically* testing functions and
 Windows mechanisms now.
  If we can show everyone where stuff is failing, it
 might be a lot easier
  to attract new people.
 
 I *completely* support this idea. Benefits of such
 test suite are enormous. Existing developers can
 contribute a lot by adding test snippets for the
 functions they create. Now they create such snippets
 anyway and throw them away.
Ah, good ! :-)
Exactly. A lot of people create test code e.g. for undocumented functions etc.
With a *slight* bit more work, they'd have a test for this function.

 Comments:
 - Don't want to reinvent the wheel. Is there any
 existing test suite framework we can use? Sorry, I
 can't suggest any for C but I'm very impressed with
 JUnit in Java. It is even Ok if the framework is GPLed
 or LGPLed - I don't think any company will make
 business based on the test suite.
Hmm, good question. I don't know of any, but we should probably do some more
research. After all it's about 12000 functions, so we should get it right.

 - I /personally/ prefer CL interface only for such
 suite
Yes, yes, yes. *Much* easier to use. That's why I did exactly that kind of
thing.

 - it would be better if the suite print summary
 information and information about failed tests only
Yep. Current output is something like:
WINETEST:test:Loader_16:LoadModule:FAILED:01:[retval]
WINETEST:test:Loader_16:LoadModule:FAILED:12:[retval]
WINETEST:test:Loader_16:LoadModule:FAILED:13:[retval]

or, in case of success, only:
WINETEST:test:Loader_16:LoadModule:OK

(yeah, I know, wishful thinking ;-)

This output is pretty useful, I think:
It can be parsed *very* easily, and grepping for regressions is also pretty
easy.

WINETEST exists to be able to distinguish this output from bogus Wine
messages,
test indicates that this is a test line output versus a warning message or
similar output,
Loader_16 indicates testing of 16bit loader functionality,
LoadModule - well... ;-)
FAILED - obvious
01 - test number 01 failed.
[retval] - contains the (wrong) return value of the function, if applicable.

BTW, I think having a test suite wouldn't be about hunting regressions
at first: just look at my LoadModule16 example and you'll see that we're
still quite far from hunting regressions *only*.
My guess is that we'll be shocked at how many functions fail in how many ways.

 - make the test suite more visible for existing
 developers. Ask them to run the test suite before
 submitting a patch?
No, I don't think so.
I think it suffices if Alexandre runs the test suite before or after every
large commit cycle.
That way he'd be able to back out problematic patches.
Asking developers to run the *whole* test suite for each patch could be
pretty painful.

 - I think the suite test will consist from a few
 separate applications because different tests may have
 different requirements to GUI configuration,
 processes, etc. We need a way to run all the
 applications in one batch.
Exactly. Which is why I really prefer simple text output. IMHO it's the only way
to go.

 - define variable which indicates whether the suite
 runs under Wine. Such indicator can be used for Wine
 white-box testing.
Hmm, yes, that might be useful.
We'd also need to pass a winver value to the test suite via command line
in order to let the test app adapt to different windows environments
(and thus also to different wine --winver settings !).

 - it would be greate to have functionality to support
 output comparison? For some functionality it is easier
 to write tests to compare output instead of doing
 explicit checks (e.g. tests, involving a few
 processes). The output can be redirected to file and
 files compared. If we use files we need to store files
 for Wine and a few versions of Windows :-(
Hmm, I don't quite get what exactly you're talking about.

 - the suite applications size will be pretty big. Is
 it better to move it to separate CVS tree?
Yep, I'd say so. There definitely is no business for it to reside in the main
Wine tree.

 - what about running the suite weekly (or daily)
 automatically and publishing the results to
 wine-devel?
Good idea ! Might prove worthwhile.

 - most developers on this list have access to one
 version of Windows. Is it difficult to create testing
 farm with remote access to a few versions of windows?
 This would help developers to test their code on a few
 platforms. Existing environments in the companies,
 involved in the project can be used.
Hmm, why ?
The idea is that hundreds (or hopefully thousands ?) of volunteer Windows
developers create bazillions of test functions for specific API functions.
That will happen on specific Windows version only, of course.
Now we have a test framework for a specific API function on a specific
Windows version.
Now if there are behavioral conflicts 

Re: We *really* need a development model change !

2001-12-26 Thread Andriy Palamarchuk


--- Andreas Mohr [EMAIL PROTECTED] wrote:
 On Wed, Dec 26, 2001 at 10:07:20AM -0800, Andriy
 Palamarchuk wrote:
  Andreas Mohr wrote:

[... skipped ...]

  - it would be better if the suite print summary
  information and information about failed tests
 only
 Yep. Current output is something like:

WINETEST:test:Loader_16:LoadModule:FAILED:01:[retval]

WINETEST:test:Loader_16:LoadModule:FAILED:12:[retval]

WINETEST:test:Loader_16:LoadModule:FAILED:13:[retval]
 
 or, in case of success, only:
 WINETEST:test:Loader_16:LoadModule:OK
I mean something like: 
===
Run: 1234 tests
Failed: 2 Errors: 1

Fail 1: 
Fail 2: 
Error 1: 
===
In the example above, failure means a condition check
failed; Error means an exception was raised.

I suggest printing nothing for successful tests. At
least this is the way I am accustomed to with JUnit.
We are not interested in successful tests, are we?
;-)

 This output is pretty useful, I think:
 It can be parsed *very* easily, and grepping for
 regressions is also pretty
 easy.
 
 WINETEST exists to be able to distinguish this
 output from bogus Wine
 messages,
 test indicates that this is a test line output
 versus a warning message or
 similar output,
 Loader_16 indicates testing of 16bit loader
 functionality,
 LoadModule - well... ;-)
 FAILED - obvious
 01 - test number 01 failed.
 [retval] - contains the (wrong) return value of
 the function, if applicable.

Looks simple, and the output is really useful. I just
don't see any reason to show information about
successful tests.
At least we can get a short form of the output by
clipping all Ok messages from your suggested form.

 BTW, I think having a test suite wouldn't be about
 hunting regressions
 at first: just look at my LoadModule16 example and
 you'll see that we're
 still quite far from hunting regressions *only*.
 My guess is that we'll be shocked at how many
 functions fail in how many ways.

Agree, agree, agree... We can even use eXtreme
Programming approaches :-) See
http://xprogramming.com/ and other sites on the subj.
I also like this article:
http://members.pingnet.ch/gamma/junit.htm
I use JUnit extensively and like the whole idea.

  - make the test suite more visible for existing
  developers. Ask them to run the test suite before
  submitting a patch?
 No, I don't think so.
 I think it suffices if Alexandre runs the test suite
 before or after every
 large commit cycle.
 That way he'd be able to back out problematic
 patches.
 Asking developers to run the *whole* test suite for
 each patch could be
 pretty painful.

I don't see why running the unit tests is painful.
I'd estimate that it would not take more than 5
minutes to test all 12000 Win32 functions. We can also
keep tests for slow/rarely changed areas of the API in
a separate complete suite.

I think the test suite is for developers, not for
Alexandre (I mean as a team leader :-) or QA. This is
why I want to increase the visibility of unit tests.
Again, developers will be more likely to contribute
to the suite if they are reminded of it.

I do not suggest enforcing unit test usage
because we'll always have developers/companies who
don't want to do that. It would suffice to recommend
checking, before submitting a patch, that we have the
same (accidentally - fewer :-) number of failures as we
had before, or reporting any new bugs introduced.
It is even Ok to have an increased number of issues as
long as the developer consciously decides to break
something. The compact test output I describe above also
will help to quickly identify any changes in unit test
output.

 We'd also need to pass a winver value to the test
 suite via command line
 in order to let the test app adapt to different
 windows environments
 (and thus also to different wine --winver settings
 !).

Sounds good.

  - it would be greate to have functionality to
 support
  output comparison? For some functionality it is
 easier
  to write tests to compare output instead of doing
  explicit checks (e.g. tests, involving a few
  processes). The output can be redirected to file
 and
  files compared. If we use files we need to store
 files
  for Wine and a few versions of Windows :-(
 Hmm, I don't quite get what exactly you're talking
 about.

Example: I have a pretty big unit test for the
SystemParametersInfo function. Part of the test is to
ensure that the WM_SETTINGCHANGE window message is
fired when necessary. I have a simple handler for the
message which prints a confirmation when the message
is received. I save the output when I run the tests
under Windows and under Wine and compare the two.
Advantages: 1) simplicity, 2) I can see the contents
of the failure. To do an explicit check I would need
to set up some communication (a shared variable, step
counter, etc.) between the message handler and the
testing code. And if these two code snippets are in
different processes I would need IPC to do the
explicit check.
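
A minimal sketch of that handler pattern in C; the window procedure,
the SPI call, and the message format are illustrative, not the actual
test code under discussion:

```c
#include <windows.h>
#include <stdio.h>

/* Illustrative window procedure: print a marker line whenever
 * WM_SETTINGCHANGE arrives.  Runs under Windows and under Wine can
 * then be captured to files and compared with diff. */
static LRESULT CALLBACK test_wnd_proc(HWND hwnd, UINT msg,
                                      WPARAM wp, LPARAM lp)
{
    if (msg == WM_SETTINGCHANGE)
        printf("WM_SETTINGCHANGE received, action=%u\n", (unsigned)wp);
    return DefWindowProcA(hwnd, msg, wp, lp);
}

/* Trigger the change with SPIF_SENDCHANGE, then pump pending
 * messages so the handler runs before the output is compared. */
static void trigger_and_flush(void)
{
    MSG msg;
    SystemParametersInfoA(SPI_SETBORDER, 2, NULL,
                          SPIF_UPDATEINIFILE | SPIF_SENDCHANGE);
    while (PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE))
        DispatchMessageA(&msg);
}
```

The appeal of the approach is visible here: the handler stays a
one-liner, while an explicit check would need shared state between
the window procedure and the test body.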

Ideally I'd like to print nothing to the screen -
the developer does not need to see all this
information. The information can

Re: We *really* need a development model change !

2001-12-26 Thread Alexandre Julliard

Andreas Mohr [EMAIL PROTECTED] writes:

 I attached a preview of the posting I intend to post on *tons* of Windows
 devel newsgroups (Call For Volunteers). That way we might actually get
 hold of hundreds of Windows developers helping us implement a complete
 test suite (complete tests of up to 12000 Windows functions).
 Not to mention the additional PR we might get out of this...

Yeah, I'm sure spamming the Windows newsgroups is a great PR
strategy. Please don't do that.

 Please comment on both my intended posting and the way I programmed the first
 version of the test suite (I'm not extremely happy with the current program;
 if you have any improvements, then get them here ASAP !).

Look at programs/winetest, that's the tool we should use to write
tests IMO.

-- 
Alexandre Julliard
[EMAIL PROTECTED]





Re: We *really* need a development model change !

2001-12-26 Thread Joerg Mayer

On Wed, Dec 26, 2001 at 10:26:27AM -0800, Andriy Palamarchuk wrote:
 C Unit test frameworks I found after a quick search:
 http://check.sourceforge.net/
 http://people.codefactory.se/~spotty/cunit/
 http://freshmeat.net/projects/autounit/
 
 C++:
 http://sourceforge.net/projects/cppunit/

Seen on /. two days ago:
http://www.codesourcery.com/qm/qmtest

  Ciao
  Jörg
--
Joerg Mayer  [EMAIL PROTECTED]
I found out that pro means instead of (as in proconsul). Now I know
what proactive means.





Re: We *really* need a development model change !

2001-12-26 Thread Francois Gouget


   I wholeheartedly agree with you.

   I think that both approaches (application oriented, and API oriented)
are necessary.

 * We need the application oriented approach because this makes Wine
useful to people now. But maybe we should focus more on specific
applications: getting a few applications working very well seems more
interesting to me than getting a lot of applications kind of working.
The reason is that people migrating from the Windows world don't want
applications that kind of work, they want to see stuff that works just
as well as on Windows, even if it is limited to a few applications.
 * We also need the test framework but not only for regression testing.
All you need to write tests is a Windows computer so we could probably
get the help of people who would otherwise not contribute to Wine
(windows programmers who feel intimidated by contributing to Wine). And
these tests look like a great way to find bugs in our current
implementations: I am quite sure that with just a couple of hours of
coding tests you could find quite a few bugs that may otherwise take you
days to track down in a relay trace.
   But as Alexandre said, tact is needed when soliciting the help of
windows programmers so that this is not seen as blatant spam.


   I also like your section about why Wine is worth contributing to. It
may need some polishing but it would be nice to have it on WineHQ. After
all, WineHQ may tell you what Wine is but there is not a single word
about why it is important and why it is worth contributing to. We keep
hearing about how Wine is bad for Linux, how we should all stop wasting
our time and contribute to native Linux applications instead, how Linux
has all the applications it needs anyway, etc., etc. No wonder! We don't
even tell our side of the story!


   Also we need to have a framework in place so that we can handle the
sudden downpour of help we are going to receive ;-) Well, even if it's
not such a massive number of contributions. We need:
 * someone to organize things (and unfortunately no-one seems to have
the time, please, someone, volunteer!)
 * a mechanism to track which APIs already have tests in place and which
don't. Sure there are 12000 APIs so the risk of overlap seems small, but
I am quite sure that the distribution of submissions will not be random.
 * something to do a 'make test' and report which tests succeeded and
which failed
 * we also need to decide how to write these tests. You seem inclined to
write them in C but there has been some work before to create a
framework in perl. I consider that both have pros and cons and all I
care about at this point is that we do finally get something in place.



--
Francois Gouget [EMAIL PROTECTED]http://fgouget.free.fr/
  Any sufficiently advanced bug is indistinguishable from a feature.
-- from some indian guy