[
https://issues.apache.org/jira/browse/LOG4J2-1430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
John Cairns updated LOG4J2-1430:
--------------------------------
Attachment: log4jperfnumactl.log
[[email protected]][~Anthony Maire] Here are the benchmarks at 1, 2, 4, 8, 16,
32, and 64 threads with numactl and without taskset. Both disruptors are set to
WAITING/PARK respectively. I'm using the same 12 core Haswell box with the
29x clock multiplier as before (standard enterprise pizza box circa 2016).
I'm not sure this result is any more or less relevant than the previous results,
but it does match my interpretation of the data. numactl introduced some sort of
degenerate pauses at high iteration counts, so I had to reduce the number of
iterations to get a usable result. There is also likely a concurrency bug in the
Xfer version, as it appeared to deadlock in the 32-thread case, so that data is
lost. Although this run was not as careful, and IMHO not as good, as the
previous one, it at least agrees with it overall.
I'm satisfied that the evidence conclusively demonstrates the performance of
Conversant Disruptor across a wide variety of real cases, benchmarks, and
experimental conditions.
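For anyone following the thread: the queue under test is a drop-in java.util.concurrent.BlockingQueue, which is what lets these benchmarks swap implementations without touching the appender logic. A minimal sketch of the producer/consumer contract being exercised, shown with the stdlib ArrayBlockingQueue (the commented-out line is the hypothetical swap to Conversant's DisruptorBlockingQueue, assuming its artifact is on the classpath):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueSketch {
    public static void main(String[] args) throws InterruptedException {
        // Any BlockingQueue implementation works here unchanged; Conversant's
        // DisruptorBlockingQueue is a drop-in replacement (hypothetical swap below,
        // requires the conversant/disruptor artifact on the classpath).
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
        // BlockingQueue<String> queue = new DisruptorBlockingQueue<>(1024);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put("event-" + i); // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int drained = 0;
        while (drained < 100) {
            queue.take(); // blocks until an element is available
            drained++;
        }
        producer.join();
        System.out.println("drained=" + drained);
    }
}
```

The benchmarks above measure exactly this put/take path under contention, which is where the implementations diverge.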
> Add optional support for Conversant DisruptorBlockingQueue in AsyncAppender
> ---------------------------------------------------------------------------
>
> Key: LOG4J2-1430
> URL: https://issues.apache.org/jira/browse/LOG4J2-1430
> Project: Log4j 2
> Issue Type: New Feature
> Components: Appenders
> Affects Versions: 2.6.1
> Reporter: Matt Sicker
> Assignee: Matt Sicker
> Fix For: 2.7
>
> Attachments: AsyncAppenderPerf01.txt, AsyncLogBenchmarks.log,
> conversantvsjctoolsnumthreads.jpg, jctools-vs-conversant-service-time.png,
> log4j2-1430-jctools-tmp-patch.txt, log4jHaswell2cpu2core.jpg,
> log4jHaswell2cpu4core.jpg, log4jperfnumactl.log, log4jrafile.log,
> log4jthread2cpu2core.log, log4jthread2cpu4core.log
>
>
> [Conversant Disruptor|https://github.com/conversant/disruptor] works as an
> implementation of BlockingQueue that is much faster than ArrayBlockingQueue.
> I did some benchmarks earlier and found it to be a bit faster:
> h3. AsyncAppender/ArrayBlockingQueue
> {code}
> Benchmark                                                    Mode  Samples        Score       Error  Units
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput10Params  thrpt      20  1101267.173 ± 17583.204  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput11Params  thrpt      20  1128269.255 ± 12188.910  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput1Param    thrpt      20  1525470.805 ± 56515.933  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput2Params   thrpt      20  1789434.196 ± 42733.475  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput3Params   thrpt      20  1803276.278 ± 34938.176  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput4Params   thrpt      20  1468550.776 ± 26402.286  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput5Params   thrpt      20  1322304.349 ± 22417.997  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput6Params   thrpt      20  1179756.489 ± 16502.276  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput7Params   thrpt      20  1324660.677 ± 18893.944  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput8Params   thrpt      20  1309365.962 ± 19602.489  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput9Params   thrpt      20  1422144.180 ± 20815.042  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughputSimple    thrpt      20  1247862.372 ± 18300.764  ops/s
> {code}
> h3. AsyncAppender/DisruptorBlockingQueue
> {code}
> Benchmark                                                    Mode  Samples        Score        Error  Units
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput10Params  thrpt      20  3704735.586 ±  59766.253  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput11Params  thrpt      20  3622175.410 ±  31975.353  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput1Param    thrpt      20  6862480.428 ± 121473.276  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput2Params   thrpt      20  6193288.988 ±  93545.144  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput3Params   thrpt      20  5715621.712 ± 131878.581  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput4Params   thrpt      20  5745187.005 ± 213854.016  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput5Params   thrpt      20  5307137.396 ±  88135.709  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput6Params   thrpt      20  4953015.419 ±  72100.403  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput7Params   thrpt      20  4833836.418 ±  52919.314  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput8Params   thrpt      20  4353791.507 ±  79047.812  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput9Params   thrpt      20  4136761.624 ±  67804.253  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughputSimple    thrpt      20  6719456.722 ± 187433.301  ops/s
> {code}
> h3. AsyncLogger
> {code}
> Benchmark                                             Mode  Samples         Score        Error  Units
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput10Params  thrpt      20   5075883.371 ± 180465.316  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput11Params  thrpt      20   4867362.030 ± 193909.465  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput1Param    thrpt      20  10294733.024 ± 226536.965  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput2Params   thrpt      20   9021650.667 ± 351102.255  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput3Params   thrpt      20   8079337.905 ± 115824.975  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput4Params   thrpt      20   7347356.788 ±  66598.738  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput5Params   thrpt      20   6930636.174 ± 150072.908  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput6Params   thrpt      20   6309567.300 ± 293709.787  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput7Params   thrpt      20   6051997.196 ± 268405.087  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput8Params   thrpt      20   5273376.623 ±  99168.461  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput9Params   thrpt      20   5091137.594 ± 150617.444  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughputSimple    thrpt      20  11136623.731 ± 400350.272  ops/s
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]