Ben Klein wrote:
2009/4/13 James McKenzie <[email protected]>:
Ben Klein wrote:
2009/4/13 chris ahrendt <[email protected]>:

Vincent Povirk wrote:

On Sun, Apr 12, 2009 at 5:24 PM, Ben Klein <[email protected]> wrote:

2009/4/13 Vincent Povirk <[email protected]>:

But the description doesn't say "invalid conditions". It says "serious
errors in Wine". That's something that should never happen in tests,
as it would imply that the state of the libraries we're testing (and
thus the test) is invalid.

How about "serious errors in Wine, or in the test cases, sometimes
deliberate errors in the tests"? As Vitaliy points out, some tests
deliberately do invalid things to make sure they fail. We can't ONLY
test on things that succeed!

I'm not against changing the description. If it's OK to have err
messages for cases that we know fail (but don't crash or prevent the
library from continuing to function), then the Developer's Guide is
wrong and should be changed. I also don't care how long it takes to
make the change, as long as there's a consensus that it's the guide
that's wrong and not the reality of what's in Wine.

This is EXACTLY the point I am trying to get to: if they are normal, and are warnings rather than ERRORS, then they should be marked as such and noted in the Developer's Guide. Right now Wine isn't even following its own guidelines in this case.

No. Not warnings. Errors. They are errors. There is no way to
distinguish an error caused by a real application from an error caused
by a Wine test.


If the situation is an error and it is expected, the test should handle
this, like:

ok(!some_call_that_should_fail(), "the call that should have failed succeeded\n");

I'm guessing that most of the tests that should fail do fail.  I don't know
whether there is a "failure" check the way there is an ok().

AFAIK, this is how expected failures in tests are handled. I saw a few
recent tests submitted and/or committed that do it like this.
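
For illustration, here is a minimal sketch of that pattern in the style of Wine's conformance tests; the tested call and the test name are stand-ins, not a real test from the tree:

#include <windows.h>
#include "wine/test.h"

static void test_invalid_params(void)
{
    HANDLE file;

    /* Deliberately invalid input: a NULL file name.  The call is
     * expected to fail, and ok() passes when its condition is TRUE,
     * so the test succeeds precisely because the call failed.  Any
     * ERR that Wine prints while rejecting the call has no bearing
     * on the test result.  A real test would usually also check
     * GetLastError() against the code Windows returns. */
    file = CreateFileA(NULL, GENERIC_READ, 0, NULL, OPEN_EXISTING,
                       FILE_ATTRIBUTE_NORMAL, NULL);
    ok(file == INVALID_HANDLE_VALUE,
       "CreateFileA with a NULL name should have failed\n");
    if (file != INVALID_HANDLE_VALUE) CloseHandle(file);
}

START_TEST(example)
{
    test_invalid_params();
}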

If you don't like it, run all your tests with WINEDEBUG=-all.



And that will prove nothing.  Tests should be run with debugging on.
You are being sarcastic, right?

As to the discussion, I will add my .02 Eurocent here:

Fixme:  code that should be implemented but is not.
Warning:  code that encountered an unexpected or abnormal condition.
Data should not be corrupted by the event.
Error:  code that encountered a condition that will result in corruption of data.
It appears that we have 'error creep', and that is not good from a user's
point of view and is really irritating from a support point of view.
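
For reference, those classes map onto Wine's debug channel macros roughly as follows. This is a minimal sketch, with a made-up channel name, function, and conditions:

#include <windows.h>
#include "wine/debug.h"

WINE_DEFAULT_DEBUG_CHANNEL(example);  /* hypothetical channel name */

static void handle_request(int mode)
{
    if (mode == 2)
        FIXME("mode %d not implemented yet\n", mode);    /* missing code */
    else if (mode < 0)
        WARN("unexpected mode %d, continuing\n", mode);  /* abnormal, no corruption */
    else if (mode > 100)
        ERR("mode %d would corrupt state\n", mode);      /* serious problem */
    else
        TRACE("handling mode %d\n", mode);               /* routine detail */
}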

During testing, an error could be either a Warning or an Error.  Tests
should not be run against non-existent Wine code, but against behavior
that exists in Windows.  During a test run, unexpected or improper
behavior of the code under test should be an error; there is no such
thing as a warning in a test run.  Either the test passes, which means
that Wine is acting the same as a certain version of Windows, or it does not.

There is no way for the Wine components that are producing the errors
to distinguish between a test run and a real application. Again, the
tests are triggering these errors not in the test applications but in
other Wine components. In some (possibly all) cases where this happens,
this is expected and correct (because it is impossible to distinguish
between a genuine error and a test error).

Now, the problem is that we are sending cryptic messages to end users,
most of which they can and should ignore.  ERR messages piling up in
their terminal windows are, to those users, a cause for alarm.  If we
know that a condition does not cause data corruption, then we should not
be marking it as an error, but as a warning, or, if the code is missing
or incomplete, as a fixme.

End users shouldn't be running the tests without understanding how they work.

Can we start cleaning up some of the most obvious "it is not an error
but the message says it is" cases, so that we can calm down users who
encounter them?

The ERRs are being produced by components of Wine outside the test
cases. It's highly likely for those ERRs to change in later versions
of Wine. If you want to maintain a complete list of where ERRs will be
triggered by test cases and include a message for each one that says
"It's OK, you can ignore this ERR", then I'm sure you're welcome to
try it.

2009/4/13 James McKenzie <[email protected]>:
Vitaliy Margolen wrote:
You're wrong. As I said, tests are _allowed_ to, and some are really
_supposed_ to, test invalid cases, where Wine is 100% correct to complain.
But the goal of each test is to either succeed or fail. If it succeeds,
then there is no bug.

Conversely, if a good test starts to fail, or error messages appear where
there were none, we have a problem that needs to be reported so that it
can be fixed (this happened with Wine 1.1.18).

There is a BIG difference between "test caused an ERR message" and
"test failed". They are handled differently in the test runs.

Again, there are no failures in the tests you ran! There is nothing to fix
here. Move along, and don't waste everyone's time on something YOU DO NOT
LIKE! Or send patches.

The problem is that error messages should not appear where there are no
errors.  They should be either warnings or fixmes.  If the test passes
but a bunch of error messages appear during the run, there is a coding
problem that needs to be fixed.  This is why we are answering user
questions with 'The error message can be ignored; there is nothing wrong
with the execution of your program.'  That makes it appear that the
developers don't know what they are doing, which makes the overall
project look bad.

So deliberately invalid cases in the tests should never cause ERRs,
even though the same invalid cases produced by genuine applications
should? Good luck implementing that.

Conversation is over. DO NOT WASTE everyone's time!!!


We are not 'wasting time.'  This issue needs to be addressed by the
development community so that errors, warnings and fixmes are properly
reported.

They are. It's just that *some* people, who shall remain nameless
here, are misinterpreting them. These are test cases, and test cases
have to test on both positive and negative cases, expecting success
and failure respectively. If the positive cases are generating ERRs,
then that's indicative of code in Wine that needs cleaning up, but if
the negative cases do something *deliberately invalid* and generate
ERRs, then there's nothing to worry about.

We can, and maybe should, get more granular when we work with code:
perhaps a fixme for a known but unimplemented function and a warning
for an incomplete function.

Shouldn't this be the other way around? Complete non-implementation
(or stubbiness) is more likely to cause problems for applications than
partially implemented functions. At the moment, both of these cases
are FIXMEs (stub and partial stub respectively).
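
For context, the usual convention in Wine code for those two cases looks roughly like this; the function names and parameters are made up for illustration:

#include <windows.h>
#include "wine/debug.h"

WINE_DEFAULT_DEBUG_CHANNEL(example);  /* hypothetical channel name */

/* Complete non-implementation: log the arguments and bail out. */
BOOL WINAPI SomeFunctionA(const char *name, DWORD flags)
{
    FIXME("(%s, %#x): stub\n", debugstr_a(name), (unsigned)flags);
    return FALSE;
}

/* Partial implementation: handle what we support, flag the rest. */
BOOL WINAPI OtherFunctionA(const char *name, DWORD flags)
{
    if (flags) FIXME("flags %#x not supported\n", (unsigned)flags);
    /* ... handle the flag-free case ... */
    return TRUE;
}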

Maybe we should change the wording of an error message when it really
is an error in the code but will not cause a program to crash.
I've seen error messages that really should be fixmes, and the other
way around, where a fixme causes some programs to terminate
with a screen full of debugging output.

ERRs and FIXMEs are classified by developers for developers, not for
end-users. It's impractical to limit all ERRs to things that crash
programs, and all FIXMEs to things that don't crash. In some cases,
I'm sure, the app will detect an unexpected value caused by a problem
that triggers a FIXME or an ERR and work around it.

At least I have the time to waste at the moment :P
So basically, in your opinion, it comes down to this: ERRs and the debug
output, whether from running tests or anything else, should be ignored by
everyone but developers?

That does not make any sense.  There has to be a consistent way for developers and users to report and work on issues, and, as Vincent says, the usage has gotten a little skewed.

chris

