Thanks!
1) Is there a reason why the buffer of the AsyncAppender is implemented
as an ArrayList and not a LinkedList?
2) I tried running this on log4j 1.2.14 with the following configuration,
and got a NullPointerException (see below).
If LocationInfo was set to "false", why did the Dispatcher get into the
getLocationInformation() method?
BTW, I set LocationInfo to "false" explicitly because I had to; the
default value (of "false") didn't work.
<appender name="ASYNC-MAIN" class="org.apache.log4j.AsyncAppender">
  <param name="BufferSize" value="20000" />
  <param name="blocking" value="false" />
  <param name="LocationInfo" value="false" />
  <appender-ref ref="R" />
</appender>

<appender name="R" class="org.apache.log4j.RollingFileAppender">
  <param name="Append" value="false" />
  <param name="BufferedIO" value="true" />
  <param name="BufferSize" value="8192" />
  <param name="File" value="R.OUT" />
  <param name="MaxFileSize" value="1MB" />
  <param name="MaxBackupIndex" value="1" />
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern"
           value="%d{yyyy/MM/dd HH:mm:ss}#%5p#[%t]#%c#%C#%L#%X{user}# %m %n" />
  </layout>
</appender>

<root>
  <level value="ALL" />
  <appender-ref ref="ASYNC-MAIN" />
</root>
The Exception:

Exception in thread "Dispatcher-Thread-0" java.lang.NullPointerException
    at java.lang.String.lastIndexOf(String.java:1654)
    at java.lang.String.lastIndexOf(String.java:1636)
    at org.apache.log4j.spi.LocationInfo.<init>(LocationInfo.java:119)
    at org.apache.log4j.spi.LoggingEvent.getLocationInformation(LoggingEvent.java:191)
    at org.apache.log4j.helpers.PatternParser$ClassNamePatternConverter.getFullyQualifiedName(PatternParser.java:538)
    at org.apache.log4j.helpers.PatternParser$NamedPatternConverter.convert(PatternParser.java:511)
    at org.apache.log4j.helpers.PatternConverter.format(PatternConverter.java:64)
    at org.apache.log4j.PatternLayout.format(PatternLayout.java:503)
    at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:301)
    at org.apache.log4j.RollingFileAppender.subAppend(RollingFileAppender.java:236)
    at org.apache.log4j.WriterAppender.append(WriterAppender.java:159)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:230)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:65)
    at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:578)
    at java.lang.Thread.run(Thread.java:595)
-----Original Message-----
From: Curt Arnold [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 24, 2006 1:27 AM
To: Log4J Users List
Subject: Re: Multiple threads that log events to the same file
concurrently
On Oct 22, 2006, at 4:49 AM, Cohen Oren (ICS) wrote:
> Thanks Curt!
>
> Regarding item 3 (below):
>
> I meant that I will not use "append" for adding a LoggingEvent to the
> thread's queue, but rather a regular "add()" call.
> Each thread will put its events on its own logging queue using the
> "add()" method (like adding a regular object to a linked list).
> The threads will NOT use log4j.
> However, the new dispatcher I'll write will go over these queues
> (round-robin), get the logging events from them, and use the log4j
> mechanism and its "append()" method to add them to the FileAppender.
>
> So, it seems as if no blocking will occur. Correct?
>
By the time log4j has processed a logging request to the point that an
Appender is called with a LoggingEvent, log4j will have acquired a
synchronization lock and will block any other threads until the appender
completes its append action (which in your case would be the add to a
thread-local collection). In either your suggested approach or using
the AsyncAppender with blocking=false, there is still the chance for a
thread to be blocked, but the duration of the block is substantially
reduced, since it is only the time to add to a collection, not the time
it takes for the file I/O to complete. Since your appender is guaranteed
to be called while synchronized, using thread-local collections would
provide no benefit.
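To illustrate the point (this is a hypothetical sketch, not log4j's actual implementation): callers hold the lock only long enough to enqueue, so contending threads are blocked for the duration of an add(), while the slow sink work happens on the dispatcher thread outside the lock.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the locking pattern described above.
public class SketchAsyncAppender {
    private final List<String> buffer = new ArrayList<>();

    // Called on application threads, analogous to Appender.doAppend():
    // peers block only for the duration of this cheap add.
    public synchronized void append(String event) {
        buffer.add(event);
        notifyAll(); // wake the dispatcher
    }

    // Called on the dispatcher thread: drain the buffer under the lock,
    // then the caller does the slow work (file I/O) outside it.
    public synchronized List<String> drain() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();
        }
        List<String> drained = new ArrayList<>(buffer);
        buffer.clear();
        return drained;
    }
}
```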
If logging requests are generated in bursts faster than the underlying
appender can handle, you need some mechanism to avoid exhausting all your
memory holding logging events for later processing. The new AsyncAppender
provides two mechanisms for this case: the old-style blocking behavior,
where the calling thread is blocked until the queue drops below the
specified maximum size (which can cause the calling application to stall
until the queue is drained), and the newly introduced non-blocking style,
where overflow logging events are counted and summarized. You haven't
described how you would address that issue in your appender.
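The two overflow policies can be sketched with a stdlib bounded queue (again hypothetical, for illustration only; the real AsyncAppender manages its own buffer):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical illustration of the two overflow policies described above.
public class OverflowPolicies {
    private final ArrayBlockingQueue<String> queue;
    private final AtomicLong discarded = new AtomicLong();

    public OverflowPolicies(int capacity) {
        queue = new ArrayBlockingQueue<>(capacity);
    }

    // blocking=true style: the calling thread stalls until space frees up.
    public void appendBlocking(String event) throws InterruptedException {
        queue.put(event);
    }

    // blocking=false style: never stall; count overflow events so they
    // can be summarized later instead of exhausting memory.
    public void appendNonBlocking(String event) {
        if (!queue.offer(event)) {
            discarded.incrementAndGet();
        }
    }

    public long discardedCount() {
        return discarded.get();
    }
}
```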
Again, I would recommend that you look at the log4j 1.2.14 AsyncAppender
with the blocking=false option as it should address your major concerns.
---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]