I see two possible approaches here: an optional dependency on Conversant
Disruptor to replace ArrayBlockingQueue, or a more generic
BlockingQueueFactory that can be specified via a log4j property. I'll
file a Jira issue for this.
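
Roughly what I'm picturing, as a sketch only (the factory interface, the
reflective classpath check, and the assumption that Conversant's
DisruptorBlockingQueue exposes an (int capacity) constructor are all
placeholders, not a final design):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical factory; a log4j property could select the implementation.
public interface BlockingQueueFactory<E> {
    BlockingQueue<E> create(int capacity);
}

final class DefaultBlockingQueueFactory<E> implements BlockingQueueFactory<E> {

    private static final String CONVERSANT_QUEUE =
            "com.conversantmedia.util.concurrent.DisruptorBlockingQueue";

    @Override
    @SuppressWarnings("unchecked")
    public BlockingQueue<E> create(final int capacity) {
        // Prefer Conversant's DisruptorBlockingQueue when the optional
        // dependency is on the classpath; otherwise keep today's behaviour.
        try {
            final Class<?> queueClass = Class.forName(CONVERSANT_QUEUE);
            return (BlockingQueue<E>) queueClass
                    .getConstructor(int.class)
                    .newInstance(capacity);
        } catch (final ReflectiveOperationException | LinkageError ignored) {
            return new ArrayBlockingQueue<>(capacity);
        }
    }
}

The property-driven variant would simply map the configured factory class
name to an instance instead of hard-coding the classpath check.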

On 15 June 2016 at 11:28, Ralph Goers <ralph.go...@dslextreme.com> wrote:

> This is still a pretty good option. I would modify the code to use that
> queue if the library is present, and otherwise fall back to ArrayBlockingQueue.
>
> Ralph
>
> On Jun 15, 2016, at 9:24 AM, Matt Sicker <boa...@gmail.com> wrote:
>
> Using the 4-thread versions of the benchmarks, here's what I've found:
>
> *ArrayBlockingQueue:*
>
> Benchmark                                                       Mode  Samples        Score       Error  Units
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput10Params    thrpt       20  1101267.173 ± 17583.204  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput11Params    thrpt       20  1128269.255 ± 12188.910  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput1Param      thrpt       20  1525470.805 ± 56515.933  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput2Params     thrpt       20  1789434.196 ± 42733.475  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput3Params     thrpt       20  1803276.278 ± 34938.176  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput4Params     thrpt       20  1468550.776 ± 26402.286  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput5Params     thrpt       20  1322304.349 ± 22417.997  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput6Params     thrpt       20  1179756.489 ± 16502.276  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput7Params     thrpt       20  1324660.677 ± 18893.944  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput8Params     thrpt       20  1309365.962 ± 19602.489  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput9Params     thrpt       20  1422144.180 ± 20815.042  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughputSimple      thrpt       20  1247862.372 ± 18300.764  ops/s
>
>
> *DisruptorBlockingQueue:*
>
> Benchmark                                                       Mode  Samples        Score        Error  Units
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput10Params    thrpt       20  3704735.586 ±  59766.253  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput11Params    thrpt       20  3622175.410 ±  31975.353  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput1Param      thrpt       20  6862480.428 ± 121473.276  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput2Params     thrpt       20  6193288.988 ±  93545.144  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput3Params     thrpt       20  5715621.712 ± 131878.581  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput4Params     thrpt       20  5745187.005 ± 213854.016  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput5Params     thrpt       20  5307137.396 ±  88135.709  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput6Params     thrpt       20  4953015.419 ±  72100.403  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput7Params     thrpt       20  4833836.418 ±  52919.314  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput8Params     thrpt       20  4353791.507 ±  79047.812  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput9Params     thrpt       20  4136761.624 ±  67804.253  ops/s
> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughputSimple      thrpt       20  6719456.722 ± 187433.301  ops/s
>
>
> *AsyncLogger:*
>
> Benchmark                                                Mode  Samples         Score        Error  Units
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput10Params    thrpt       20   5075883.371 ± 180465.316  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput11Params    thrpt       20   4867362.030 ± 193909.465  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput1Param      thrpt       20  10294733.024 ± 226536.965  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput2Params     thrpt       20   9021650.667 ± 351102.255  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput3Params     thrpt       20   8079337.905 ± 115824.975  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput4Params     thrpt       20   7347356.788 ±  66598.738  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput5Params     thrpt       20   6930636.174 ± 150072.908  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput6Params     thrpt       20   6309567.300 ± 293709.787  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput7Params     thrpt       20   6051997.196 ± 268405.087  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput8Params     thrpt       20   5273376.623 ±  99168.461  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput9Params     thrpt       20   5091137.594 ± 150617.444  ops/s
> o.a.l.l.p.j.AsyncLoggersBenchmark.throughputSimple      thrpt       20  11136623.731 ± 400350.272  ops/s
>
>
> So while the Conversant BlockingQueue implementation significantly
> improves the performance of AsyncAppender, it looks like AsyncLogger is
> still faster.
>
> On 15 June 2016 at 10:43, Matt Sicker <boa...@gmail.com> wrote:
>
>> I'm going to play with the microbenchmarks and see what the numbers say.
>> I'll report back with the results.
>>
>> On 15 June 2016 at 10:33, Matt Sicker <boa...@gmail.com> wrote:
>>
>>> There's a much smaller disruptor library,
>>> <https://github.com/conversant/disruptor>, which implements
>>> BlockingQueue. I'm not sure how it compares in performance to LMAX
>>> (it's supposedly better on Intel machines), but it might be worth
>>> looking into as an alternative (or at least making a
>>> BlockingQueueFactory like Camel does for its SEDA component).
>>>
>>> --
>>> Matt Sicker <boa...@gmail.com>
>>>
>>
>>
>>
>> --
>> Matt Sicker <boa...@gmail.com>
>>
>
>
>
> --
> Matt Sicker <boa...@gmail.com>
>
>
>


-- 
Matt Sicker <boa...@gmail.com>
