[
https://issues.apache.org/jira/browse/LOG4NET-190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12658440#action_12658440
]
kennethxu edited comment on LOG4NET-190 at 12/21/08 7:39 PM:
--------------------------------------------------------------
Hi Ron,
Thanks for your comments. You could be right about the use of buffering. IMHO, I
try my best to avoid using the ThreadPool for many reasons that have been
discussed by others, so I won't repeat them here.
The updated patch has the fixes below.
a) Although a TimeSpan can also be set via XML configuration using the
"hh:mm:ss" format, I have changed it to an int as you requested and renamed the
property to BatchWaitTimeout.
b) The queue copying logic has been improved in ActivateOptions.
c) Events are now fixed, and a Fix property has been added that defaults to All.
d) Better synchronization on close. Close tells the worker thread to shut down
and waits for it to finish flushing the buffer and exit. I didn't put in a
timeout, but one can be added later if needed. This close logic has the same
effect as closing the attached appender when async is not used. (See the sketch
right after this list.)
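For clarity, here is a minimal sketch of how (b), (c) and (d) fit together. The
class and member names (AsyncAppenderSketch, m_queue, m_worker, m_shutdown) are
made up for illustration and are not necessarily how the attached patch is
organized:
{code}
using System.Collections.Generic;
using System.Threading;
using log4net.Appender;
using log4net.Core;

// Illustrative only; names and structure are assumptions, not the patch itself.
public class AsyncAppenderSketch : ForwardingAppender
{
    private readonly Queue<LoggingEvent> m_queue = new Queue<LoggingEvent>();
    private Thread m_worker;
    private bool m_shutdown;

    // (b) start the worker thread when the appender is activated
    public override void ActivateOptions()
    {
        base.ActivateOptions();
        m_worker = new Thread(WorkerLoop);
        m_worker.IsBackground = true;
        m_worker.Start();
    }

    // (c) fix volatile data before handing the event to another thread
    protected override void Append(LoggingEvent loggingEvent)
    {
        loggingEvent.Fix = FixFlags.All;
        lock (m_queue)
        {
            m_queue.Enqueue(loggingEvent);
            Monitor.Pulse(m_queue);          // wake the worker
        }
    }

    // (d) tell the worker to drain and stop, then wait for it to exit
    protected override void OnClose()
    {
        lock (m_queue)
        {
            m_shutdown = true;
            Monitor.PulseAll(m_queue);
        }
        m_worker.Join();                     // waits for the final flush
        base.OnClose();
    }

    private void WorkerLoop()
    {
        while (true)
        {
            LoggingEvent[] batch;
            lock (m_queue)
            {
                while (m_queue.Count == 0 && !m_shutdown)
                {
                    Monitor.Wait(m_queue);
                }
                batch = m_queue.ToArray();   // take the whole backlog at once
                m_queue.Clear();
                if (batch.Length == 0)
                {
                    return;                  // shutdown requested and queue drained
                }
            }
            foreach (LoggingEvent e in batch)
            {
                base.Append(e);              // forward to the attached appenders
            }
        }
    }
}
{code}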
In addition, unit tests for AsyncAppender have been added.
As for extending BufferingForwardingAppender, that was actually where I started.
But I realized that the way it releases the buffer when full is not what I
wanted: I want the worker thread to free the queue quickly rather than waiting
for it to fill up. And we can always put a buffering appender before or after
AsyncAppender to get the lossy effect, as sketched below.
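For example, a lossy buffer could be wired in front of the async appender
roughly like this. AsyncAppender here stands for the appender in the attached
patch and is assumed to implement IAppenderAttachable; the file appender and
the settings are just illustrative:
{code}
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using log4net.Layout;

public static class LossyAsyncSetup
{
    public static void Configure()
    {
        // Final destination of the events.
        PatternLayout layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
        layout.ActivateOptions();

        FileAppender file = new FileAppender();
        file.File = "app.log";
        file.AppendToFile = true;
        file.Layout = layout;
        file.ActivateOptions();

        // AsyncAppender from the patch: hands events to its worker thread.
        AsyncAppender async = new AsyncAppender();
        async.AddAppender(file);
        async.ActivateOptions();

        // Lossy buffering in front of it: an Evaluator is required when
        // Lossy is true; events below WARN may be dropped under pressure.
        BufferingForwardingAppender lossy = new BufferingForwardingAppender();
        lossy.BufferSize = 512;
        lossy.Lossy = true;
        lossy.Evaluator = new LevelEvaluator(Level.Warn);
        lossy.AddAppender(async);
        lossy.ActivateOptions();

        BasicConfigurator.Configure(lossy);
    }
}
{code}
The same chain can of course be expressed in the XML configuration with
appender-ref instead of code.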
I have also tested the overhead of AsyncAppender in one of the test cases. On
my Core 2 Duo 2.33GHz desktop, the result of logging 1M events is below. Mock
is a dummy implementation of AppenderSkeleton.
Async 1000000 events: 00:00:00.4531250
Mock  1000000 events: 00:00:00.1406250
You can see the overhead is quite acceptable. I think what matters is how long
the lock is held, not how many places in the code the lock is used.
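For reference, the comparison can be made with a rough harness like the one
below (the event construction and appender wiring are assumptions, not the
exact unit test in the patch):
{code}
using System;
using System.Diagnostics;
using log4net;
using log4net.Appender;
using log4net.Core;

// Dummy appender comparable to the Mock baseline: full AppenderSkeleton
// pipeline, but Append does nothing.
public class NullAppender : AppenderSkeleton
{
    protected override void Append(LoggingEvent loggingEvent)
    {
        // intentionally empty
    }
}

public static class OverheadTest
{
    // Times how long it takes to push 'count' events through an appender,
    // including the final Close() (which, for the async case, waits for the
    // worker thread to finish flushing).
    public static TimeSpan Time(IAppender appender, int count)
    {
        LoggingEvent evt = new LoggingEvent(
            typeof(OverheadTest), LogManager.GetRepository(),
            "OverheadTest", Level.Info, "hello", null);

        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            appender.DoAppend(evt);   // reuse one event to isolate appender cost
        }
        appender.Close();
        watch.Stop();
        return watch.Elapsed;
    }
}
{code}
Timing a bare NullAppender gives the Mock figure; timing the async appender
with a NullAppender attached gives the Async figure.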
As for the LockingQueue, I agree with you. I have actually worked a little on
Spring.Threading, which is a port of the Java concurrent API. A full
implementation of a BlockingQueue is quite involved, and it is very difficult,
if not impossible, to ensure capacity before queuing. With my limited skill and
the amount of effort I can devote, I decided to let it stay with the appender
itself, which is still relatively small, until I can figure out an efficient,
simple and still atomic LockingQueue. Or maybe some smart soul can help :)
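For what it's worth, the kind of bounded LockingQueue being discussed could
look roughly like this (purely illustrative; the names are made up and this is
not the code in the patch):
{code}
using System.Collections.Generic;
using System.Threading;

// Minimal bounded queue guarded by one lock. Enqueue blocks while the queue
// is full; DequeueAll blocks while it is empty and hands the worker the
// whole backlog in one shot so the queue is freed quickly.
public class LockingQueue<T>
{
    private readonly Queue<T> m_items = new Queue<T>();
    private readonly int m_capacity;

    public LockingQueue(int capacity)
    {
        m_capacity = capacity;
    }

    public void Enqueue(T item)
    {
        lock (m_items)
        {
            while (m_items.Count >= m_capacity)
            {
                Monitor.Wait(m_items);       // producer blocks until drained
            }
            m_items.Enqueue(item);
            Monitor.PulseAll(m_items);       // wake the consumer
        }
    }

    public T[] DequeueAll()
    {
        lock (m_items)
        {
            while (m_items.Count == 0)
            {
                Monitor.Wait(m_items);       // consumer blocks until there is work
            }
            T[] batch = m_items.ToArray();
            m_items.Clear();
            Monitor.PulseAll(m_items);       // wake producers blocked on a full queue
            return batch;
        }
    }
}
{code}
The lock is only held around the queue operations themselves, which is the
point above about lock duration mattering more than lock count.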
>> What happens when the queue is full? Does the appender block, throw away
>> LoggingEvents, or grow its internal LoggingEvents array?
As of now, the appender blocks. It can easily be enhanced to do any of those in
the future.
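If dropping were ever preferred over blocking, the enqueue path could be
changed along these lines. This is purely hypothetical and not in the patch;
TryEnqueue would be a non-blocking addition to a bounded queue like the sketch
above:
{code}
// Hypothetical lossy variant of the Append path; not part of the patch.
protected override void Append(LoggingEvent loggingEvent)
{
    loggingEvent.Fix = FixFlags.All;
    if (!m_queue.TryEnqueue(loggingEvent))   // assumed non-blocking enqueue
    {
        ErrorHandler.Error("AsyncAppender queue is full; event discarded.");
    }
}
{code}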
Thanks again!
> [Patch] Need true asynchronous appender
> ---------------------------------------
>
> Key: LOG4NET-190
> URL: https://issues.apache.org/jira/browse/LOG4NET-190
> Project: Log4net
> Issue Type: New Feature
> Components: Appenders
> Affects Versions: 1.2.11
> Environment: DotNet
> Reporter: Kenneth Xu
> Fix For: 1.2.11
>
> Attachments: AsyncAppenderPatch.patch
>
>
> The AsyncAppender in the example uses the .NET ThreadPool; a busy logging
> system can quickly fill up its work item queue. As the ThreadPool is also
> used by the framework itself, using it for async logging is inappropriate.
> Thus, a true asynchronous appender with its own dedicated queue and thread
> has been requested by the community many times.