[ https://issues.apache.org/jira/browse/LOG4J2-928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16071507#comment-16071507 ]

ASF GitHub Bot commented on LOG4J2-928:
---------------------------------------

Github user leventov commented on the issue:

    https://github.com/apache/logging-log4j2/pull/87
  
    ### Latency comparison
    
    ```
    java -jar log4j-perf/target/benchmarks.jar ".*Log4j2AppenderLatencyBenchmark.end2end" -f 1 -wi 1 -i 30 -t 2 -p loggerType=MMap
    ```
    Measured on my MBP with 4 cores and an SSD.
    
    https://cdn.rawgit.com/leventov/288f6ff4ea960998e6479001976b0755/raw/aeb2f65560241deaa9394ff1ef8044bb961ea37e/plotFiles.html (red line is before this PR, blue line is this PR)
    
    Tail latency is much better than it used to be, but worse than I expected. Calling `MappedByteBuffer.load()` significantly improves latency around the 99.99-99.999 percentiles, but dramatically worsens the extreme tail latency, from ~4.5 milliseconds to ~300 milliseconds. I don't know the reason for that.
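
    A minimal sketch of the remap-plus-`load()` step discussed above (not the PR's actual code; the class and method names here are made up for illustration):
    ```java
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class PreFaultSketch {
        // Map a region of the log file and touch every page up front with load(),
        // so that later writes on the hot logging path do not take page faults.
        static MappedByteBuffer mapAndPreFault(String path, long offset, int size)
                throws Exception {
            try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
                 FileChannel channel = raf.getChannel()) {
                MappedByteBuffer buf =
                        channel.map(FileChannel.MapMode.READ_WRITE, offset, size);
                buf.load(); // pre-fault the pages; the mapping outlives the channel
                return buf;
            }
        }
    }
    ```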
    
    Also quite "fun" than hand-rolled version of MappedByteBuffer.load(): 
https://github.com/leventov/logging-log4j2/tree/wait-free-MemoryMappedFileManager-crash
 crashes the JVM when I try to run the same command
    Could somebody else please try it on other machines?
    



> Lock-free synchronous sub-microsecond appender
> ----------------------------------------------
>
>                 Key: LOG4J2-928
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-928
>             Project: Log4j 2
>          Issue Type: New Feature
>          Components: Appenders
>            Reporter: Remko Popma
>
> _(This is a work in progress.)_
> *Goal*
> It should be possible to create synchronous file appenders with (nearly) the 
> same performance as async loggers.
> *Background*
> The key to the async loggers performance is the lock-free queue provided by 
> the LMAX Disruptor. Multiple threads can add events to this queue without 
> contending on a lock. This means throughput scales linearly with the number 
> of threads: more threads = more throughput.
> With a lock-based design, on the other hand, performance maxes out with a 
> single thread logging. Adding more threads does not help. In fact, total 
> logging throughput goes down slightly when multiple threads are logging (see 
> the Async Appenders in the [async performance 
> comparison|http://logging.apache.org/log4j/2.x/manual/async.html#Asynchronous_Throughput_Comparison_with_Other_Logging_Packages]).
>  Lock contention means that multiple threads together end up logging slower 
> than a single thread.
> *Currently only async loggers are lock-free*
> Log4j2 provides good performance with async loggers, but this approach has 
> several drawbacks:
> * dependency on external LMAX disruptor library
> * possibility of data loss: log events that have been put on the queue but 
> not flushed to disk yet may be lost in the event of an application crash
> This ticket proposes a new feature to address these issues.
> *Proposal: a lock-free synchronous appender*
> For a single-threaded application the current MemoryMappedFileAppender has 
> performance comparable to Async Loggers (TODO: perf test).
> However, the current implementation uses locks to control concurrency, and 
> suffers from lock contention in multi-threaded scenarios.
> For inspiration for a lock-free solution, we can look at 
> [Aeron|https://github.com/real-logic/Aeron], specifically Aeron's design for 
> Log Buffers. Martin Thompson's September 2014 Strangeloop 
> [presentation|https://www.youtube.com/watch?v=tM4YskS94b0] gives details on 
> the design (the section from 16:45 to 23:30 is especially relevant).
> The way this works is that instead of using locks, concurrency is handled 
> with a protocol where threads "reserve" blocks of memory atomically. Each 
> thread (having serialized the log event) knows how many bytes it wants to 
> write. It then atomically moves the buffer tail pointer by that many bytes 
> using a CAS operation. Once the tail has moved, the thread is free to write 
> the message payload bytes to the region of the buffer it just reserved, 
> without worrying about other threads. Between threads, the only point of 
> contention is the tail pointer, which is similar to the disruptor. We can 
> reasonably expect performance to scale linearly with the number of threads, 
> as with async loggers.
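> A minimal sketch of this reservation protocol (all names are illustrative; 
> here a per-thread {{ByteBuffer.duplicate()}} stands in for the 
> {{sun.misc.Unsafe}} absolute write discussed below):
> {code:java}
> import java.nio.ByteBuffer;
> import java.util.concurrent.atomic.AtomicLong;
>
> // Threads claim a region by CAS-ing the tail pointer; each thread then
> // writes into its reserved slice without further coordination.
> final class ReservingBuffer {
>     private final ByteBuffer buffer;
>     private final AtomicLong tail = new AtomicLong();
>
>     ReservingBuffer(int capacity) {
>         this.buffer = ByteBuffer.allocateDirect(capacity);
>     }
>
>     /** Returns the start offset of the reserved region, or -1 if full. */
>     long reserve(int length) {
>         while (true) {
>             long current = tail.get();
>             long next = current + length;
>             if (next > buffer.capacity()) {
>                 return -1; // buffer exhausted: caller swaps in a fresh one
>             }
>             if (tail.compareAndSet(current, next)) {
>                 return current; // [current, next) now belongs to this thread
>             }
>         }
>     }
>
>     void write(long offset, byte[] payload) {
>         // A per-thread duplicate has its own cursor, so positioning here
>         // cannot race with other threads' writes.
>         ByteBuffer slice = buffer.duplicate();
>         slice.position((int) offset);
>         slice.put(payload);
>     }
> }
> {code}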
> *Still needs work*
> This looks promising, but there are a few snags. 
> # Needs the Unsafe. {{java.nio.ByteBuffer}} only provides relative, not 
> absolute bulk put operations. That is, it only allows appending byte arrays 
> at the current cursor location, not at some user-specified absolute location. 
> The above design requires random access to be thread-safe. Aeron works around 
> this by using {{sun.misc.Unsafe}} (a rough sketch follows after this list). 
> Users should be aware of this so they can decide whether the performance gain 
> is worth the risk. Also, this may make 
> the OSGi folks unhappy (see LOG4J2-238 discussion)... Not sure how serious we 
> are about making Log4j2 work on OSGi, but perhaps it is possible to mark the 
> package for this feature as optional in the OSGi manifest. An alternative may 
> be to put this appender in a separate module.
> # TBD: How many files/buffers to use? In his presentation Martin mentions 
> that using a single large memory mapped file will cause a lot of page faults, 
> page cache churn, and unspecified VM issues. He recommends cycling between 
> three smaller buffers, one active (currently written to), one dirty (full, 
> now being processed by a background thread) and one clean (to swap in when 
> the active buffer becomes full). I am not sure if the page fault problem will 
> occur for our use case: a Log4j appender is append-only, and there is no 
> separate thread or process reading this data at the same time. If it does, 
> and we decide on a similar design with three smaller buffers, we still need 
> to work out if these can be three different mapped regions in the same log 
> file, or if it is better to use a separate temporary file and copy from the 
> temporary file to the target log file in a background thread. I would prefer 
> to have a single file. Note that even with a single file we may want a 
> background thread for mapping a new region at every swap and occasionally 
> extending the file when the EOF is reached.
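> A rough sketch of the {{sun.misc.Unsafe}} workaround from point 1 (simplified 
> and illustrative; real code must also obtain the direct buffer's base 
> address, e.g. via {{sun.nio.ch.DirectBuffer}}):
> {code:java}
> import java.lang.reflect.Field;
> import sun.misc.Unsafe;
>
> // Absolute bulk put in the style Aeron uses: ByteBuffer only offers
> // relative bulk puts, so Unsafe.copyMemory fills the gap.
> final class UnsafeBytes {
>     private static final Unsafe UNSAFE;
>     private static final long BYTE_ARRAY_BASE;
>     static {
>         try {
>             Field f = Unsafe.class.getDeclaredField("theUnsafe");
>             f.setAccessible(true);
>             UNSAFE = (Unsafe) f.get(null);
>             BYTE_ARRAY_BASE = UNSAFE.arrayBaseOffset(byte[].class);
>         } catch (Exception e) {
>             throw new ExceptionInInitializerError(e);
>         }
>     }
>
>     /** Copy src into native memory at bufferAddress + offset. */
>     static void putBytes(long bufferAddress, long offset, byte[] src) {
>         UNSAFE.copyMemory(src, BYTE_ARRAY_BASE, null,
>                 bufferAddress + offset, src.length);
>     }
> }
> {code}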
> Feedback welcome. I intend to update this ticket as I learn more.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
