[
https://issues.apache.org/jira/browse/LOG4J2-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15278388#comment-15278388
]
Remko Popma edited comment on LOG4J2-1179 at 5/10/16 4:22 PM:
--------------------------------------------------------------
Ok. Then still outstanding for this ticket are:
* File Appender response time comparison. This involves running the
ResponseTimeTest many times to find a reasonable workload that is interesting
but does not saturate the appender. I've found response time tests to be fiddly
and I expect this will take me some time.
* Cost of various APIs/wrappers (SLF4J, Log4j1, JUL, Commons Logging). Can be
done in a JMH benchmark, which makes things easier, but the benchmarks.jar
uber-jar will contain all dependencies, so it cannot be used to test the
log4j-1.2-api or the log4j-slf4j-impl wrappers. Instead we'll need to use the
log4j-perf-2.6-SNAPSHOT.jar and manually add the other necessary dependencies
to the classpath. A bit fiddly, but it should not be too hard. (A rough sketch
of such a benchmark is at the end of this comment.)
* Create a JMH benchmark for the Advanced Filtering section (currently at the
bottom of the performance page). See FilterPerformanceComparison.java.
Volunteers? :-)
I'll start with the first one and go down the list. If anyone wants to help out
I would appreciate it.
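For the second item, here is a minimal sketch of what such a wrapper comparison
could look like (the class name, message text and the assumption of a fast
appender in log4j2.xml are placeholders, not the final benchmark):
{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.logging.log4j.LogManager;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.slf4j.LoggerFactory;

// Sketch only: cost of the same call through the Log4j 2 API, the SLF4J facade
// (via log4j-slf4j-impl) and the Log4j 1.2 API (via log4j-1.2-api).
// Assumes a log4j2.xml on the classpath that routes the root logger to a fast
// appender, so the facade overhead is not drowned out by I/O.
@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public class ApiWrapperBenchmark {

    private org.apache.logging.log4j.Logger log4j2Logger;
    private org.slf4j.Logger slf4jLogger;
    private org.apache.log4j.Logger log4j12Logger;

    @Setup
    public void setUp() {
        log4j2Logger = LogManager.getLogger(ApiWrapperBenchmark.class);
        slf4jLogger = LoggerFactory.getLogger(ApiWrapperBenchmark.class);
        log4j12Logger = org.apache.log4j.Logger.getLogger(ApiWrapperBenchmark.class);
    }

    @Benchmark
    public void log4j2Api() {
        log4j2Logger.info("Simple message without parameters");
    }

    @Benchmark
    public void slf4jFacade() {
        slf4jLogger.info("Simple message without parameters");
    }

    @Benchmark
    public void log4j12Bridge() {
        log4j12Logger.info("Simple message without parameters");
    }
}
{code}
Running this against log4j-perf-2.6-SNAPSHOT.jar with log4j-slf4j-impl and
log4j-1.2-api added to the classpath by hand should show the per-call overhead
of each facade relative to the native Log4j 2 API.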
was (Author: [email protected]):
Ok. Then still outstanding for this ticket are:
* File Appender response time comparison. This involves running the
ResponseTimeTest many times to find a reasonable workload that is interesting
but does not saturate the appender. I've found response time tests to be fiddly
and I expect this will take me some time.
* Cost of various APIs/wrappers (SLF4J, Log4j1, JUL, Commons Logging). Can be
done in a JMH benchmark, which makes things easier, but the benchmarks.jar
uber-jar will contain all dependencies, so it cannot be used to test the
log4j-1.2-api or the log4j-slf4j-impl wrappers. A bit of classpath fiddling may
be required, but should not be too hard.
* Create a JMH benchmark for the Advanced Filtering section (currently at the
bottom of the performance page). See FilterPerformanceComparison.java.
Volunteers? :-)
I'll start with the first one and go down the list. If anyone wants to help out
I would appreciate it.
> Log4j performance documentation
> -------------------------------
>
> Key: LOG4J2-1179
> URL: https://issues.apache.org/jira/browse/LOG4J2-1179
> Project: Log4j 2
> Issue Type: Documentation
> Components: Documentation, Performance Benchmarks
> Affects Versions: 2.4.1
> Reporter: Remko Popma
> Assignee: Remko Popma
> Fix For: 2.6
>
> Attachments: ParamMsgThrpt1T.png, ParamMsgThrpt2T.png,
> ParamMsgThrpt4T.png
>
>
> Reorganize and extend performance data on the site.
> *Async Loggers Manual Page*
> Should be more focused. Proposed changes:
> (/) Link from the _"Location, location, location..."_ section of the Async
> Loggers page to the Location section on the Performance page.
> (/) Similarly, move _"Throughput of Logging With Location
> (includeLocation="true")"_ table with throughput results to general
> Performance page. UPDATE: replaced with new data from JMH benchmark.
> (/) Move _"FileAppender vs. RandomAccessFileAppender"_ section to general
> Performance page. (Again, keep anchors and link to new section on Performance
> page to avoid breaking links.)
> (/) Rewrite opening paragraph of Async Logger manual page to remove reference
> to RandomAccessFile appender
> (/) Rewrite section on _Latency_
> * The histogram shows service time; more useful for users is response time
> (service time + wait time).
> * The bar chart on "average latency" is nonsense: latency is not normally
> distributed, so terms like "average latency" don't make sense. Remove this.
> (A histogram showing the full range of percentiles _does_ make sense.)
> * The bar chart showing the maximum of 99.99% of observations is better than
> an average, but it still has big drawbacks: it shows service time (omitting
> the crucial wait time), and it says nothing about how high the peaks are in
> the 0.01% that was not reported. Better to remove this and instead show a
> histogram with the full range of percentiles. (A rough measurement sketch
> follows below.)
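> A rough sketch of the kind of measurement I mean (HdrHistogram is what
> ResponseTimeTest already uses; the target rate, event count and message are
> placeholders):
> {code:java}
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.locks.LockSupport;
> 
> import org.HdrHistogram.Histogram;
> import org.apache.logging.log4j.LogManager;
> import org.apache.logging.log4j.Logger;
> 
> public class ResponseTimeSketch {
>     public static void main(String[] args) {
>         Logger logger = LogManager.getLogger(ResponseTimeSketch.class);
>         // Track values up to 1 minute with 3 significant digits.
>         Histogram histogram = new Histogram(TimeUnit.MINUTES.toNanos(1), 3);
> 
>         long intervalNanos = 10_000; // placeholder: 100,000 events/second
>         long intendedStartTime = System.nanoTime();
>         for (int i = 0; i < 1_000_000; i++) {
>             // Response time = wait time + service time: measure from the
>             // moment the event *should* have started, not from when the
>             // logging call actually began.
>             while (System.nanoTime() < intendedStartTime) {
>                 LockSupport.parkNanos(100);
>             }
>             logger.info("Response time test message");
>             histogram.recordValue(System.nanoTime() - intendedStartTime);
>             intendedStartTime += intervalNanos;
>         }
>         // Report the full percentile distribution (scaled to microseconds)
>         // instead of a single "average latency" number.
>         histogram.outputPercentileDistribution(System.out, 1000.0);
>     }
> }
> {code}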
> *Performance Page*
> (/) Briefly explain the various aspects of "performance": peak measured
> throughput (what kind of bursts can we deal with?), sustained throughput, and
> response time (service time + wait time).
> 2. Then show how Log4j 2 compares to the alternatives (Logback, Log4j-1.2 and
> JUL) on all these three performance dimensions.
> 3. Finally, document some performance trade-offs for Log4j 2 functionality.
> *2. Comparison to alternative logging libraries*
> (/) Peak throughput comparison of Async Loggers vs. async appenders for
> bursty logging.
> (/) Response time comparison of Async Loggers vs async appenders
> (/) Parameterized messages: use these JMH [benchmark
> results|https://issues.apache.org/jira/browse/LOG4J2-1278?focusedCommentId=15216236&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15216236]?
> (Looks like parameterized messages are currently quite expensive... A sketch
> of such a benchmark appears at the end of this section.)
> (/) Compare the performance impact of including location information across
> logging libraries.
> For various appenders, compare Log4j 2 to the alternatives with regard to max
> sustained throughput (and, separately, response time).
> (/) [File Appender max sustained
> throughput|https://issues.apache.org/jira/browse/LOG4J2-1297?focusedCommentId=15256490&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15256490]
> (-) File Appender response time comparison
> (?) Socket appender (TCP/UDP)
> (?) Syslog appender (TCP/UDP)
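> As a rough illustration of what such a parameterized-message benchmark
> measures (class and method names are placeholders, not the actual code behind
> the linked results):
> {code:java}
> import java.util.concurrent.TimeUnit;
> 
> import org.apache.logging.log4j.LogManager;
> import org.apache.logging.log4j.Logger;
> import org.openjdk.jmh.annotations.*;
> 
> // Sketch only: a parameterized call vs. the equivalent string concatenation.
> // Assumes the logger is enabled at INFO and bound to a fast appender.
> @State(Scope.Thread)
> @BenchmarkMode(Mode.Throughput)
> @OutputTimeUnit(TimeUnit.SECONDS)
> public class ParameterizedMessageSketch {
> 
>     private final Logger logger =
>             LogManager.getLogger(ParameterizedMessageSketch.class);
>     private int value;
> 
>     @Benchmark
>     public void parameterized() {
>         logger.info("Computed value was {}", ++value);
>     }
> 
>     @Benchmark
>     public void concatenated() {
>         logger.info("Computed value was " + (++value));
>     }
> }
> {code}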
> *3. Log4j 2 functionality performance trade-offs*
> (/) Compare performance of Log4j 2 appenders (File, RandomAccessFile,
> MemoryMappedFile, Console, Rewrite, other?). Use the same layout for the
> comparison, perhaps the PatternLayout with the {{%d \[%t\] %p %c - %m%n}}
> pattern. (A configuration sketch for this follows this list.)
> (-) Cost of various APIs/wrappers (SLF4J, Log4j1, JUL, Commons Logging)
> (?) Compare performance of all layouts (CSV, Gelf, HTML, JSON, Pattern,
> RFC-5424, Serialized, Syslog, XML). Perhaps for log events with and without
> Throwable. TBD: any layout options to compare? (It may be good to document
> which features have a performance cost.)
> (?) Cost of various Pattern Layout options. Are there any converters that are
> particularly expensive (other than location)?
> (?) JDBC appenders? Different JDBC drivers and target databases may have
> very different performance, so this may become a big project. We could do a
> quick comparison of the JDBC appender (writing to the JDK's bundled Derby
> database) against the FileAppender, just to get an idea of max sustained
> throughput.
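> For the appender comparison above, a programmatic configuration along these
> lines could keep the layout identical across runs (the appender name, file
> name and level are placeholders):
> {code:java}
> import org.apache.logging.log4j.Level;
> import org.apache.logging.log4j.core.config.Configurator;
> import org.apache.logging.log4j.core.config.builder.api.AppenderComponentBuilder;
> import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
> import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
> import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;
> 
> public class AppenderComparisonConfig {
>     // Builds a configuration with one file-based appender and the same
>     // PatternLayout for every run; swap the plugin type ("File",
>     // "RandomAccessFile", "MemoryMappedFile", ...) between runs.
>     static void configure(String appenderPluginType) {
>         ConfigurationBuilder<BuiltConfiguration> builder =
>                 ConfigurationBuilderFactory.newConfigurationBuilder();
>         AppenderComponentBuilder appender = builder
>                 .newAppender("target", appenderPluginType)
>                 .addAttribute("fileName", "target/perftest.log")
>                 .add(builder.newLayout("PatternLayout")
>                         .addAttribute("pattern", "%d [%t] %p %c - %m%n"));
>         builder.add(appender);
>         builder.add(builder.newRootLogger(Level.INFO)
>                 .add(builder.newAppenderRef("target")));
>         Configurator.initialize(builder.build());
>     }
> }
> {code}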
> -------------------
> Of the existing Performance page sections:
> (-) Briefly mention that disabled logging has no measurable cost, but
> de-emphasize this section by moving it down the page.
> (-) I like the part about the filters because it (a) compares Log4j 2 to
> Logback and (b) considers multithreaded applications. I'll turn this into a
> JMH test and show the result as a bar chart. (A first sketch follows below.)
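> A first stab at that JMH test could look roughly like this (the MarkerFilter
> configuration, marker name and thread count are my assumptions; see
> FilterPerformanceComparison.java for the existing non-JMH version):
> {code:java}
> import java.util.concurrent.TimeUnit;
> 
> import org.apache.logging.log4j.LogManager;
> import org.apache.logging.log4j.Logger;
> import org.apache.logging.log4j.Marker;
> import org.apache.logging.log4j.MarkerManager;
> import org.openjdk.jmh.annotations.*;
> 
> // Sketch only: cost of a log call rejected by a filter, with several threads
> // hammering the same logger. Assumes a configuration with a MarkerFilter
> // (onMatch="DENY") so events carrying the "FLOW" marker are filtered out.
> @State(Scope.Benchmark)
> @BenchmarkMode(Mode.Throughput)
> @OutputTimeUnit(TimeUnit.SECONDS)
> @Threads(4)
> public class FilterBenchmarkSketch {
> 
>     private final Logger logger =
>             LogManager.getLogger(FilterBenchmarkSketch.class);
>     private final Marker flowMarker = MarkerManager.getMarker("FLOW");
> 
>     @Benchmark
>     public void filteredByMarker() {
>         logger.info(flowMarker, "This event is rejected by the marker filter");
>     }
> 
>     @Benchmark
>     public void notFiltered() {
>         logger.info("This event passes the filter");
>     }
> }
> {code}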