JAMon can keep aggregate stats for your log messages
(error/fatal/warn/info/debug and more), and you could then write JUnit
tests against the JAMon data to ensure no errors occurred.  For more on
JAMon, see www.jamonapi.com.  For more on how JAMon works with log4j (no
code changes are required; just use the JAMonAppender), see
http://jamonapi.sourceforge.net/log4j_jamonappender.html
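
For instance, a test could assert on the aggregate counts JAMon has
collected.  A rough sketch only: the monitor label and units below are my
assumption, so check the JAMonAppender docs linked above for the labels it
actually registers.

    import com.jamonapi.MonitorFactory;
    import junit.framework.TestCase;

    public class NoErrorsLoggedTest extends TestCase {
        public void testNoErrorsWereLogged() {
            // NOTE: "log4j.ERROR" / "log4j" is only a guess at the
            // label/units the JAMonAppender uses; see the docs above.
            double errorCount =
                MonitorFactory.getMonitor("log4j.ERROR", "log4j").getHits();
            assertEquals("errors were logged during the tests",
                         0.0, errorCount, 0.0);
        }
    }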

On 6/23/07, Bob Jacobsen <[EMAIL PROTECTED]> wrote:

I've got a bunch of code that uses log4j for logging, and a large
number of unit tests done with JUnit.

As is the nature of these tests, they test a lot of error conditions,
which results in lots of error and warning messages logged when
running the tests.  Up until now, we just ignored those:  If the
tests passed, nobody looked at the test output. If the tests failed,
then we started looking at the output for "abnormal errors", which is
really a mess.

My co-developers and I would like to clean this whole situation up,
so that the tests run with no error or warning messages appearing in
the output log.  But we still want to _test_ the conditions that are
"errors" and that, in the code as it exists, produce those messages.
And we still want to see if _other_ error and warning messages start
appearing when the tests are run.

Anybody know how to Do This Right?

So far, we've got a couple suggestions:

1) Go through and find all the places where the error messages are
logged, and make them log through a service routine which can be
overridden.  E.g.

  class Mine {
      void doSomething() {
          ...
          if (bad) log.error("Oops");
      }
  }

becomes

  class Mine {
      void doSomething() {
          ...
          if (bad) report("Oops");
      }
      void report(String msg) {
          log.error(msg);
      }
  }

Then we can test it with

      new Mine() {
          void report(String msg) {
              if (!msg.equals("Oops"))
                  fail("unexpected message: " + msg);
          }
      }.doSomething();

The subclassing lets us intercept the error _before_ it gets to
log4j, and test it manually.  But there's a _huge_ amount of this code to
change.  Doing it everywhere seems like a complete pain, and since we'd be
putting a "cover" layer in front of log4j in each class, it seems far from
a best practice.

2) Instead of suppressing the "good" messages, learn to ignore them.

To do this, we'd just take a log from "normal" tests, and build into
our testing a step that diffs the current output against that "normal"
log.  New or missing lines would be visible to the author; the same old
stuff would pass through the diff.
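
Roughly, the comparison step might look like this (just a sketch; the
file names are invented, and the real thing would come from the build):

    import java.io.*;
    import java.util.*;

    // Compare this run's log against the recorded "normal" log and
    // report anything that is new or has disappeared.
    public class LogDiff {
        static Set<String> lines(String file) throws IOException {
            Set<String> result = new LinkedHashSet<String>();
            BufferedReader in = new BufferedReader(new FileReader(file));
            for (String line = in.readLine(); line != null; line = in.readLine())
                result.add(line);
            in.close();
            return result;
        }

        public static void main(String[] args) throws IOException {
            Set<String> expected = lines("normal-tests.log");
            Set<String> actual   = lines("current-tests.log");

            Set<String> unexpected = new LinkedHashSet<String>(actual);
            unexpected.removeAll(expected);   // new lines: author looks at these
            Set<String> missing = new LinkedHashSet<String>(expected);
            missing.removeAll(actual);        // lines that disappeared

            if (!unexpected.isEmpty() || !missing.isEmpty()) {
                System.out.println("Unexpected: " + unexpected);
                System.out.println("Missing:    " + missing);
                System.exit(1);               // fail the build
            }
        }
    }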

Of course, this is a maintenance nightmare.  As code changes, we'd
have to keep the "master error log" synchronized, and it would be
easy for somebody to pass off a Really Bad Thing as a "new normal error".

This is currently the most popular choice, mostly because it doesn't
require reworking the existing code, but I really don't like it.

3) Do something via log4j itself.

I'm not even sure what I'm asking here, but can we somehow
programmatically access a stream of log4j log entries, and manipulate
it?

What I'm imagining is a test like this:

a) Push all existing stuff in log to whatever outputs are configured.

b) Temporarily stop output of log entries

c) run test

d) Check through log entries accumulated since (b), which come from
the test, and remove the ones that are expected (optionally, tell
JUnit to fail if something is missing)

e) Allow all other log entries to continue to their configured outputs


If there were some way to do this, we could have "unexpected" messages
still go to the usual places, as configured, while removing "expected"
messages from the log stream.
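
If log4j lets a test attach its own Appender, I imagine something
roughly like this would cover (b) through (d).  Untested sketch only;
the class and test names are mine, not an existing API:

    import java.util.*;
    import junit.framework.TestCase;
    import org.apache.log4j.AppenderSkeleton;
    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;
    import org.apache.log4j.spi.LoggingEvent;

    public class MineLoggingTest extends TestCase {

        // Records every event that reaches the root logger while attached.
        static class CapturingAppender extends AppenderSkeleton {
            final List<LoggingEvent> events = new ArrayList<LoggingEvent>();
            protected void append(LoggingEvent event) { events.add(event); }
            public boolean requiresLayout() { return false; }
            public void close() { }
        }

        public void testExpectedErrorOnly() {
            CapturingAppender capture = new CapturingAppender();
            Logger root = Logger.getRootLogger();
            root.addAppender(capture);            // (b) start collecting
            try {
                new Mine().doSomething();         // (c) the code under test
            } finally {
                root.removeAppender(capture);
            }
            // (d) drop the expected entries, complain about anything left
            for (LoggingEvent e : capture.events) {
                if (e.getLevel().isGreaterOrEqual(Level.WARN)
                        && !"Oops".equals(e.getRenderedMessage())) {
                    fail("unexpected log entry: " + e.getRenderedMessage());
                }
            }
        }
    }

To really implement (b) and (e) we'd presumably also have to detach the
configured appenders (or raise their thresholds) for the duration of the
test, and then re-send whatever entries are left over afterwards.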

Is there a way to do this?

Or is there a better solution to the whole need?

Thanks in advance.

Bob

--
Bob Jacobsen, UC Berkeley
[EMAIL PROTECTED] +1-510-486-7355 fax +1-510-643-8497 AIM, Skype
JacobsenRG
