Wrapping your RollingFileAppender with an AsyncAppender might address your problem. There were a number of issues with the AsyncAppender that were recently (March 2006) addressed in the SVN trunk. If you are using log4j 1.2 and run into problems, you may want to backport the log4j 1.3 AsyncAppender (see the note on http://issues.apache.org/bugzilla/show_bug.cgi?id=38982 for instructions).

Basically, an AsyncAppender places logging events in a fixed-size queue and dispatches them to its attached appenders on a separate thread. Calling threads only need to block for the time it takes to add the event to the queue, not the time needed to do the file I/O. When the queue reaches its maximum size, the default behavior in the log4j 1.3 appender (and the only behavior in log4j 1.2) is to block further logging requests until some events have been drained from the queue. The log4j 1.3 appender adds a new "blocking" property which, when set to false, does not wait for the queue to have room and instead summarizes the events that were dropped.
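For reference, wiring this up looks roughly like the following log4j 1.2 XML configuration (loaded via DOMConfigurator; AsyncAppender cannot be configured from a properties file). This is just a sketch: the appender names, file path, pattern, and buffer size are illustrative, not taken from your setup.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

  <!-- The real file appender that does the (slow) disk I/O -->
  <appender name="FILE" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="app.log"/>
    <param name="MaxFileSize" value="10MB"/>
    <param name="MaxBackupIndex" value="5"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
    </layout>
  </appender>

  <!-- AsyncAppender queues events and forwards them to FILE
       on a background thread; BufferSize is the queue capacity -->
  <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
    <param name="BufferSize" value="512"/>
    <appender-ref ref="FILE"/>
  </appender>

  <root>
    <priority value="info"/>
    <appender-ref ref="ASYNC"/>
  </root>

</log4j:configuration>
```

With the log4j 1.3 appender (or a backport) you could additionally set a "Blocking" param to false to get the drop-and-summarize behavior described above instead of blocking when the queue fills.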


On May 17, 2006, at 9:24 AM, Néstor Boscán wrote:

Hi

I'm working with Log4J on a heavy load application and we're using it for
performance, debug and error information. All our loggers are using
RollingFileAppender and we generate a lot of logging information. Our
application is running very slowly, and after a profiling test we discovered
that Log4J was blocking the threads. I've read about this problem on some
web sites, but none gives a clear solution. Has anyone successfully
resolved it? What options do I have?

Regards,

Néstor Boscán

