On 26 February 2010 01:13, ramakrishna menon <[email protected]> wrote:
> We use log4j extensively in our application and need to reduce the
> overhead of log4j writes into file. We can trade off memory at this
> stage for cpu cycles. One way I was thinking of was that we log into
> memory for a given "batch" of messages and then we flush this memory
> structure when we hit a configured batch size where we write all the
> messages in one shot into the file.
Surely your operating system should be buffering filesystem writes for you!

Seriously though, you wouldn't be trading off memory for CPU cycles: you'd be deferring the writes until a more convenient time, and most probably using additional CPU cycles as a result! If you have performance issues in your application then I'd suggest you address them directly. If logging is slowing your application, switch it off.

If you require an audit trail, however, then basic Log4j is probably not the right way to go: at the very least, write a custom appender that does _exactly_ what you need. AppenderSkeleton is a good starting point; study the code of the other appenders to see how they achieve what they need.

I have written quite a few custom appenders that operate under heavy load in heavily multithreaded systems, and they all follow the same pattern: quickly append a lightweight representation of the LoggingEvent* to a java.util.Deque and perform the writes (to file, JDBC, socket, etc.) in a daemon thread.

[*] not the LoggingEvent itself - it has excess baggage!

I recommend the book Java Performance Tuning by Jack Shirazi, published by O'Reilly. It's not just for Java performance geeks but for ALL Java developers! And ALWAYS read the code of any libraries you're thinking of using!

Regards,
Michael Erskine

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
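A minimal sketch of the queue-plus-daemon-thread pattern described above, written without a log4j dependency so it runs standalone. In a real appender you would extend AppenderSkeleton and override append(LoggingEvent); here a small immutable Line class stands in for the lightweight event snapshot, and an in-memory list stands in for the file. All class, field, and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;

class AsyncLogSketch {
    // Lightweight stand-in for what you'd copy out of a LoggingEvent:
    // just a timestamp, level and rendered message, nothing heavier.
    static final class Line {
        final long timestamp;
        final String level, message;
        Line(long timestamp, String level, String message) {
            this.timestamp = timestamp;
            this.level = level;
            this.message = message;
        }
    }

    private final BlockingQueue<Line> queue = new LinkedBlockingDeque<Line>();
    // Stands in for the real sink (file, JDBC, socket); only the writer
    // thread touches it, and close() joins before callers read it.
    final List<String> written = new ArrayList<String>();
    private final Thread writer;

    AsyncLogSketch() {
        writer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        Line line = queue.take(); // blocks until work arrives
                        written.add(line.timestamp + " " + line.level + " " + line.message);
                    }
                } catch (InterruptedException shutdown) {
                    // Drain anything still queued before exiting.
                    Line line;
                    while ((line = queue.poll()) != null) {
                        written.add(line.timestamp + " " + line.level + " " + line.message);
                    }
                }
            }
        });
        writer.setDaemon(true); // never keeps the JVM alive
        writer.start();
    }

    // The hot path: the moral equivalent of AppenderSkeleton.append().
    // Enqueues and returns immediately; no I/O on the caller's thread.
    void append(String level, String message) {
        queue.offer(new Line(System.currentTimeMillis(), level, message));
    }

    // Stop the writer and wait for it to flush the queue.
    void close() {
        writer.interrupt();
        try {
            writer.join();
        } catch (InterruptedException ignored) {
        }
    }
}
```

The design point is that the logging thread only pays for constructing a small object and an unbounded queue offer; everything slow happens on the daemon thread, which drains in FIFO order.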
