I'm trying to understand what locking is necessary in AppenderSkeleton and its
derived classes.  There is a lock(this) in AppenderSkeleton's DoAppend and
Close methods, which ensures that the appender can't be closed while an Append
is in progress.  Implementing Append in a derived class is made easier, because
the lock ensures it can never be called concurrently by more than one thread.
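
For reference, the shape of the locking I mean is roughly this (simplified
from my reading of the AppenderSkeleton sources; the real methods do more):

public void DoAppend(LoggingEvent loggingEvent)
{
    lock (this)
    {
        // Don't append to an appender that has already been closed.
        if (m_closed)
        {
            return;
        }
        if (FilterEvent(loggingEvent) && PreAppendCheck())
        {
            this.Append(loggingEvent);
        }
    }
}

public void Close()
{
    lock (this)
    {
        if (!m_closed)
        {
            OnClose();
            m_closed = true;
        }
    }
}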

That's all fairly clear, but I don't understand the comment in the 
AppenderSkeleton.DoAppend method:

       // This lock is absolutely critical for correct formatting
       // of the message in a multi-threaded environment.  Without
       // this, the message may be broken up into elements from
       // multiple thread contexts (like get the wrong thread ID).

If that comment is right, the lock is clearly necessary, but I don't see what
race condition could cause a message to be "broken up into elements from
multiple thread contexts".

Can you throw any light on that?



From: Dominik Psenner [mailto:dpsen...@apache.org]
Sent: 31 October 2016 15:31
To: log4net-dev@logging.apache.org
Subject: Re: AsyncAppenderSkeleton


See inlines.
On 2016-10-31 11:30, Joe wrote:
Hi Dominik,

Thanks for the feedback

> Please note also that MSMQ sounds like a MS only service and that would in 
> turn mean that the .net core variant would no longer be able to benefit from 
> the AsyncAppenderSkeleton. To me this outlines a path that we would not like 
> to walk on

I don't see the problem here.

I'm proposing that we could implement the queue as a separate class 
implementing a suitable interface (ILoggingEventQueue, IAppenderQueue or 
whatever - I'm with Philip Karlton on naming).    The default implementation 
would be an in-memory queue, would not depend on MSMQ and would be available 
for .net core etc.
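
To make that concrete, here is the kind of interface I have in mind (member
names are only illustrative, not a proposal):

using System;
using log4net.Core;

// Pluggable queue abstraction; the default implementation would be a plain
// in-memory queue with no MSMQ dependency, so it works on .net core too.
public interface ILoggingEventQueue : IDisposable
{
    // Called on the logging thread; must be thread safe and must not block.
    void Enqueue(LoggingEvent loggingEvent);

    // Called on the background thread; returns up to maxEvents queued events,
    // or an empty array if nothing is available.
    LoggingEvent[] Dequeue(int maxEvents);
}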


Sorry, my fault, the sentence was TL;DR in its entirety. I had read it as "The
default implementation could be MSMQ". ;-) Thanks for the clarification.


Then there could be an alternate implementation with a persistent queue using 
MSMQ, or users could provide their own custom implementations using some other 
persistent queueing technology if they wish.

The alternative of a persistent queue is useful to reduce the risk of logging
events being lost when an application crashes (typically the last events
before the crash, and therefore the most important ones): with a persistent
queue they could be picked up again when the application restarts, or by an
independent utility.


> This sounds mostly like the BufferingAppenderSkeleton, which only misses the 
> background worker thread to send the buffer.

I'm not convinced that BufferingAppenderSkeleton is a good candidate.  For 
example:

- Locking is problematic.  The appender would need to lock(this) while it is 
forwarding logging events to the sink (BufferingAppenderSkeleton.SendBuffer).  
This could hold the lock for an extended period (e.g. due to a communication 
timeout).  Therefore DoAppend should not lock(this) while enqueueing logging 
events or we'll be unnecessarily blocking the calling application.  This is one 
of the main reasons I want to implement my own DoEvents rather than deriving 
from AppenderSkeleton.

If the implementation requires lock(this) to work, the implementation is
broken. The queue itself has to be thread safe. Hence, a true async appender
should block the calling application only long enough to fix the few logging
event properties that would otherwise be lost (e.g. stack trace or thread
information).
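
For example, with a thread safe queue the append path could be as small as
this (sketch only; ConcurrentQueue needs .NET 4, on 3.5 a Queue<T> guarded by
a private lock object would do the same job):

using System.Collections.Concurrent;
using log4net.Core;

public class AsyncAppendSketch
{
    private readonly ConcurrentQueue<LoggingEvent> queue =
        new ConcurrentQueue<LoggingEvent>();

    public void DoAppend(LoggingEvent loggingEvent)
    {
        // Capture data that is only valid on the calling thread; it cannot
        // be recovered later on the background thread.
        loggingEvent.Fix = FixFlags.ThreadName | FixFlags.Exception;

        // The queue is thread safe, so no lock(this) and no blocking here.
        queue.Enqueue(loggingEvent);
    }
}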


- I see the buffering and triggering logic being implemented in a pluggable 
ILoggingEventQueue.   By default, there would be no buffering, except what is 
implicit in the fact that events may be enqueued faster than they can be 
dequeued.  I.e. whenever the background thread detects events in the queue, by 
default it processes all available events immediately, in blocks whose maximum 
size is a configurable SendBufferSize.    A custom queue implementation could 
implement triggering logic akin to BufferingAppenderSkeleton, e.g. wait for a 
timeout defined by TimeEvaluator if there are fewer than SendBufferSize events 
in the queue.
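
Roughly, the default behaviour of the background thread would be something
like this (sketch only; SendBufferSize, SendBuffer and the field names are
assumptions, not existing log4net members):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using log4net.Core;

public class DequeueLoopSketch
{
    private readonly ConcurrentQueue<LoggingEvent> queue =
        new ConcurrentQueue<LoggingEvent>();
    private readonly AutoResetEvent eventsAvailable = new AutoResetEvent(false);
    private volatile bool shutdownRequested;
    private int sendBufferSize = 512;  // the configurable SendBufferSize

    private void WorkerLoop()
    {
        var block = new List<LoggingEvent>(sendBufferSize);
        while (!shutdownRequested)
        {
            block.Clear();
            LoggingEvent loggingEvent;
            while (block.Count < sendBufferSize
                && queue.TryDequeue(out loggingEvent))
            {
                block.Add(loggingEvent);
            }

            if (block.Count > 0)
            {
                // Forward the block to the sink appender(s).
                SendBuffer(block.ToArray());
            }
            else
            {
                // Nothing queued: wait until the enqueue side signals.
                eventsAvailable.WaitOne(TimeSpan.FromSeconds(1));
            }
        }
    }

    private void SendBuffer(LoggingEvent[] events)
    {
        // Hand the block to the wrapped appender(s); omitted in this sketch.
    }
}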

An async appender working in async mode always buffers, by definition. If it
didn't buffer, there would be nothing for a background thread to work on and
it would block the calling application.



> System.Threading.Tasks.Task.Run().

The TPL could be one way of implementing the queue, though I'm not convinced 
that it adds much value.   The custom implementation I did recently didn't use 
TPL, and that would be my starting point.  This also means it would be 
compatible with .NET 3.5.

If .net 3.5 is a target for async logging, then the implementation cannot use 
the System.Threading.Tasks namespace. Otherwise I would build upon the default 
task scheduler implementation or provide a custom task scheduler implementation 
that derives from System.Threading.Tasks.TaskScheduler and let all logging 
tasks run on that task scheduler.
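
A minimal scheduler of that kind could look something like this (sketch only,
.NET 4 or later; all queued tasks run on one dedicated background thread):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public sealed class SingleThreadTaskScheduler : TaskScheduler, IDisposable
{
    private readonly BlockingCollection<Task> tasks = new BlockingCollection<Task>();
    private readonly Thread thread;

    public SingleThreadTaskScheduler()
    {
        thread = new Thread(() =>
        {
            // Drain queued tasks on the dedicated background thread.
            foreach (Task task in tasks.GetConsumingEnumerable())
                TryExecuteTask(task);
        });
        thread.IsBackground = true;
        thread.Start();
    }

    protected override void QueueTask(Task task)
    {
        tasks.Add(task);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        // Never run inline; logging work must stay off the calling thread.
        return false;
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return tasks.ToArray();
    }

    public void Dispose()
    {
        tasks.CompleteAdding();
        thread.Join();
        tasks.Dispose();
    }
}

// Usage (SendToSink is a placeholder for the actual forwarding call):
// var scheduler = new SingleThreadTaskScheduler();
// Task.Factory.StartNew(() => SendToSink(loggingEvent),
//     CancellationToken.None, TaskCreationOptions.None, scheduler);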


I found having a single background thread made it easier to implement things
like flushing.

Mileage may vary but to me, this is not the case.


Flush was implemented to:
- return true immediately if the queue is empty and all events have been 
successfully sent to the sink.
- return false immediately if the last attempt to send events to the sink 
failed.
- else wait for the background thread to set a ManualResetEvent when it's 
finished processing all events in the queue.

That could read as:

async Task<bool> Flush() {
    return await Task.Run(() => {
        return doFlush();
    });
}

or:

bool Flush() {
    Task<bool> task = Task.Run(() => {
        return doFlush();
    });
    task.Wait();
    return task.Result;
}

or even:

Task<bool> FlushAsync() {
    return Task.Run(() => {
        return doFlush();
    });
}



> The default implementation should probably be able to operate asynchronously 
> or synchronously and change mode of operation based on a configuration flag 
> "Asynchronous=True"

That's exactly what I had in mind.

Cheers
