Folks

We use log4j extensively in our application and need to reduce the
overhead of log4j writes to the log file. At this stage we can trade
off memory for CPU cycles. One approach I was considering is to log a
"batch" of messages into an in-memory structure, and then flush that
structure once it reaches a configured batch size, writing all the
messages to the file in one shot.
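To make the idea concrete, here is a minimal stdlib-only sketch of the buffering core (independent of log4j; the class and method names are illustrative, not part of any log4j API): messages accumulate in memory and are written to the underlying Writer in one shot when the batch size is reached.

```java
import java.io.IOException;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the batching idea described above: buffer
// formatted messages in memory and write them out in a single call
// once the configured batch size is hit. A real log4j appender would
// wrap something like this inside append()/close().
public class BatchingWriter {
    private final Writer out;
    private final int batchSize;                 // configured flush threshold
    private final List<String> buffer = new ArrayList<String>();

    public BatchingWriter(Writer out, int batchSize) {
        this.out = out;
        this.batchSize = batchSize;
    }

    public synchronized void log(String formattedMessage) throws IOException {
        buffer.add(formattedMessage);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // Concatenate every buffered message and emit them with one
    // write() call, then clear the buffer.
    public synchronized void flush() throws IOException {
        StringBuilder sb = new StringBuilder();
        for (String msg : buffer) {
            sb.append(msg).append(System.lineSeparator());
        }
        out.write(sb.toString());
        out.flush();
        buffer.clear();
    }
}
```

Note that an explicit flush() on shutdown (or in the appender's close()) is needed so a partially filled batch is not lost.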

I ran some quick and dirty tests and it does seem to make a
significant difference in performance.

So I am looking to see if there is a way to do this. My conclusion is
that I can perhaps use the WriterAppender (to write to a String), or
perhaps write my own class that subclasses AppenderSkeleton. By the
way, I looked at using AsyncAppender, but it looks like it enforces
XML-based configuration, which I would like to avoid.

Before I delve into that, I wanted to ask whether anyone else has
thoughts on the validity of the problem and/or the proposed solution?

Best Regards
Menon
-----------------------------------------------------------
R. M. Menon - A Rafi fan(www.mohdrafi.com)
Author, Expert Oracle JDBC Programming,
http://www.amazon.com/exec/obidos/tg/detail/-/159059407X/
-----------------------------------------------------------
