[ 
https://issues.apache.org/jira/browse/LOG4J2-151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13561484#comment-13561484
 ] 

Remko Popma edited comment on LOG4J2-151 at 1/24/13 8:07 AM:
-------------------------------------------------------------

Ralph,

About logging throughput: 
I don't think it makes sense to look at logging throughput in isolation. What 
matters is *application* throughput and latency, and how logging latency 
contributes to that.

You are right in one sense: with asynchronous logging there is a queue, and 
once that queue is full, the logger becomes essentially synchronous and logging 
latency becomes equal to the appender latency.

A clear case where logging latency (not throughput) is crucial is during 
activity peaks. Asynchronous logging can help prevent or dampen latency spikes 
during bursts of events.


Let me illustrate with an example application:
* on average, 5,000 events/second (on avg., a new event every 200 microseconds)
* during bursts, 20,000 events/second (on avg., a new event every 50 microseconds)
* bursts usually last up to 10 seconds
* the business logic takes 30 microseconds to process an event (without logging)

Now we want to add logging: say, 10 Logger.log calls per event.
* With synchronous logging, Logger.log takes 4 microseconds: 4 x 10 = 40 micros/event
* With AsyncLogger, Logger.log takes 150 nanoseconds: 0.150 x 10 = 1.5 micros/event

So the total latency of processing one event is:
business logic latency + logging latency =
30 + 40  = 70   micros/event with synchronous logging, and 
30 + 1.5 = 31.5 micros/event with asynchronous logging.

The problem is that with synchronous logging we would fall behind during bursts.
In our example, a 10-second burst causes a latency spike of 4 seconds.

        (Calculation: during a burst, events come in at a rate of 20,000 events/sec,
        i.e. one event every 50 micros.
        We need 70 micros per event, so we fall behind 20 micros per event.
        20,000 events/sec x 20 micros delay/event = 400 millis of delay per second,
        and 400 millis/second x 10 seconds = 4 seconds of delay over the burst.)
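
To make the arithmetic concrete, here is a small stand-alone sketch (not Log4j 
code; the class name is made up and the figures are simply the assumed example 
numbers above) that reproduces this calculation:

{code:java}
// Back-of-the-envelope model of how backlog accumulates during a burst.
// All figures are the assumed example numbers from this comment, not measurements.
public final class BurstBacklogModel {
    public static void main(final String[] args) {
        final double arrivalIntervalMicros = 50;   // 20,000 events/sec during a burst
        final double businessMicros = 30;          // business logic per event
        final double syncLogMicros = 4 * 10;       // 10 Logger.log calls x 4 us (synchronous)
        final double asyncLogMicros = 0.150 * 10;  // 10 Logger.log calls x 150 ns (async hand-off)
        final double burstSeconds = 10;

        report("synchronous ", businessMicros + syncLogMicros, arrivalIntervalMicros, burstSeconds);
        report("asynchronous", businessMicros + asyncLogMicros, arrivalIntervalMicros, burstSeconds);
    }

    private static void report(final String label, final double costMicros,
                               final double intervalMicros, final double burstSeconds) {
        // Events arrive every intervalMicros; any processing cost above that
        // accumulates as backlog for the duration of the burst.
        final double behindPerEventMicros = Math.max(0.0, costMicros - intervalMicros);
        final double eventsPerSecond = 1_000_000 / intervalMicros;
        final double delayPerSecond = behindPerEventMicros * eventsPerSecond / 1_000_000;
        System.out.printf("%s: %.1f us/event -> falls behind %.0f ms per second, "
                + "%.1f s over a %.0f s burst%n",
                label, costMicros, delayPerSecond * 1000,
                delayPerSecond * burstSeconds, burstSeconds);
    }
}
{code}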

With AsyncLogger, the application would not experience *any* delay until the 
logger queue is full. Once the logger queue is full, the application would 
experience the same delay as with synchronous logging.
To avoid these latency spikes, you configure your queue for the expected burst 
duration. In our example we may want space for ~2,000,000 LogEvents 
(10 seconds x 20,000 events/sec x 10 log calls per event).
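
Purely as an illustration (the system property names below are the ones used by 
Log4j 2's AsyncLoggers; the FastLog4j prototype attached to this issue may 
configure this differently), sizing the queue for such a burst could look like 
this, rounded up to the next power of two (2^21 = 2,097,152):

{code:java}
// Illustrative sketch: size the async logger ring buffer for a 10-second burst.
// ASSUMPTION: these property names come from Log4j 2's AsyncLogger support, not
// necessarily from the prototype attached to this issue. They must be set
// before Log4j initializes (typically via -D on the command line).
public final class AsyncLoggingSetup {
    public static void main(final String[] args) {
        // Make all loggers asynchronous.
        System.setProperty("Log4jContextSelector",
                "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
        // 10 s x 20,000 events/s x 10 log calls/event = 2,000,000 LogEvents;
        // ring buffer sizes are powers of two, so use 2^21 = 2,097,152 slots.
        System.setProperty("AsyncLogger.RingBufferSize", "2097152");
        // ... then configure and use Loggers as usual ...
    }
}
{code}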

Of course, AsyncLogger is not a magic bullet; if bursts last longer than your 
queue size can accommodate, you will eventually end up with synchronous logging. 
But AsyncLogger can act like a buffer to dampen the latency spikes that your 
application would otherwise experience during bursts.

I think activity peaks are common for many applications, and this is one case 
where AsyncLogger can add value.
                
> Please facilitate subclassing Logger and LoggerContext (in 
> org.apache.logging.log4j.core)
> -----------------------------------------------------------------------------------------
>
>                 Key: LOG4J2-151
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-151
>             Project: Log4j 2
>          Issue Type: New Feature
>          Components: Core
>    Affects Versions: 2.0-beta3
>            Reporter: Remko Popma
>         Attachments: FastLog4j.zip
>
>
> I would like to create a custom logger, while reusing the 
> org.apache.logging.log4j.core.Logger functionality.
> The following two changes would make subclassing possible:
> * change visibility of method Logger$PrivateConfig#logEvent(LogEvent) (line 
> 265) from protected to public
> * change visibility of method LoggerContext#newInstance(LoggerContext, 
> String) (line 310) from private to protected
> My use case is that I want to create an asynchronous Logger for low latency 
> logging.
> This custom logger hands off control to a separate thread as early as 
> possible. In my case, AsynchAppender is not a good match for my requirements, 
> as with that approach (a) the logging call still needs to flow down the 
> hierarchy to the appender, doing synchronization and creating objects at 
> various points on the way, and (b) when serializing the LogEvent, the 
> getSource() method is always called, which is expensive.
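
For illustration, here is a minimal sketch of the intended subclassing, assuming 
the two visibility changes above are made. The class names (AsyncLoggerContext, 
MyAsyncLogger) are hypothetical, and the constructor and method signatures are 
taken from the description above rather than verified against the 2.0-beta3 
sources:

{code:java}
import org.apache.logging.log4j.core.Logger;
import org.apache.logging.log4j.core.LoggerContext;

// Hypothetical sketch: assumes LoggerContext#newInstance(LoggerContext, String)
// has been made protected, as requested above.
public class AsyncLoggerContext extends LoggerContext {

    public AsyncLoggerContext(final String name) {
        super(name);
    }

    @Override
    protected Logger newInstance(final LoggerContext ctx, final String name) {
        // Hand out the custom Logger subclass instead of the default core Logger.
        return new MyAsyncLogger(ctx, name);
    }
}

// Hypothetical subclass that would hand each logging call off to a background
// thread as early as possible, eventually delivering the LogEvent via
// privateConfig.logEvent(event) once that method is public. The
// (LoggerContext, String) constructor is an assumption about the beta3 sources.
class MyAsyncLogger extends Logger {

    MyAsyncLogger(final LoggerContext ctx, final String name) {
        super(ctx, name);
    }

    // enqueue-and-return logic would go here
}
{code}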
