I do not think Java 8 adds anything in this area:
https://docs.oracle.com/javase/8/docs/technotes/guides/concurrency/changes8.html

Gary

On Wed, Jun 15, 2016 at 6:17 PM, Remko Popma <remko.po...@gmail.com> wrote:

> Matt,
>
> Would you be interested in also looking at using TransferQueue in
> AsyncAppender?
> (
> https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/TransferQueue.html
> )
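>
> For illustration, a minimal, hypothetical sketch of the hand-off semantics that
> TransferQueue adds (LinkedTransferQueue is the JDK implementation; it also
> implements BlockingQueue, so in principle it could back the appender's queue --
> the appender wiring itself is not shown here):
>
>     import java.util.concurrent.LinkedTransferQueue;
>     import java.util.concurrent.TransferQueue;
>
>     public class TransferQueueSketch {
>         public static void main(String[] args) throws InterruptedException {
>             // TransferQueue extends BlockingQueue, so put()/take() still apply.
>             TransferQueue<String> queue = new LinkedTransferQueue<>();
>
>             Thread consumer = new Thread(() -> {
>                 try {
>                     // Blocks until a producer hands over an element.
>                     System.out.println("got: " + queue.take());
>                 } catch (InterruptedException e) {
>                     Thread.currentThread().interrupt();
>                 }
>             });
>             consumer.start();
>
>             // Unlike put(), transfer() waits until a consumer has received the
>             // element, giving a direct producer-to-consumer hand-off.
>             queue.transfer("log event");
>             consumer.join();
>         }
>     }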
>
> Remko
>
> Sent from my iPhone
>
> On 2016/06/16, at 9:33, Matt Sicker <boa...@gmail.com> wrote:
>
> What I found really neat about the Conversant disruptor is that it doesn't
> use sun.misc.Unsafe, which makes it a bit more future-proof. On the other
> hand, the main developer said it's optimised for Intel architectures, so
> it's somewhat special-purpose.
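>
> As a rough sketch of the drop-in aspect (assuming the com.conversantmedia:disruptor
> artifact and its DisruptorBlockingQueue class; the factory method here is
> illustrative, not an existing Log4j API):
>
>     import java.util.concurrent.ArrayBlockingQueue;
>     import java.util.concurrent.BlockingQueue;
>
>     import com.conversantmedia.util.concurrent.DisruptorBlockingQueue;
>
>     public class QueueChoiceSketch {
>         // Both implementations honour the same BlockingQueue contract, so code
>         // that calls offer()/poll()/take() does not need to change.
>         static BlockingQueue<Object> newQueue(boolean useDisruptor, int capacity) {
>             if (useDisruptor) {
>                 return new DisruptorBlockingQueue<>(capacity);
>             }
>             return new ArrayBlockingQueue<>(capacity);
>         }
>     }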
>
> On 15 June 2016 at 19:14, Remko Popma <remko.po...@gmail.com> wrote:
>
>> Very nice numbers!
>> I think this would be a great addition to Log4j 2.
>>
>> As with any performance-centric improvement, we should add a section to the
>> Performance page that compares the new capability to the previous options. It
>> would be great if we could run these benchmarks across a range of thread
>> counts and show the results in a graph.
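>>
>> As a sketch of how such a sweep might be scripted with the JMH Runner API
>> (the include pattern matches the AsyncAppenderLog4j2Benchmark class from the
>> results below; the thread counts and iteration settings are just examples):
>>
>>     import org.openjdk.jmh.runner.Runner;
>>     import org.openjdk.jmh.runner.options.Options;
>>     import org.openjdk.jmh.runner.options.OptionsBuilder;
>>
>>     public class ThreadSweep {
>>         public static void main(String[] args) throws Exception {
>>             // Run the same benchmarks at several thread counts so the scores
>>             // can be plotted against the number of threads.
>>             for (int threads : new int[] {1, 2, 4, 8, 16}) {
>>                 Options opts = new OptionsBuilder()
>>                         .include("AsyncAppenderLog4j2Benchmark")
>>                         .threads(threads)
>>                         .forks(1)
>>                         .warmupIterations(10)
>>                         .measurementIterations(20)
>>                         .build();
>>                 new Runner(opts).run();
>>             }
>>         }
>>     }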
>>
>> Sent from my iPhone
>>
>> On 2016/06/16, at 1:24, Matt Sicker <boa...@gmail.com> wrote:
>>
>> Using the 4-thread versions of the benchmarks, here's what I've found:
>>
>> *ArrayBlockingQueue:*
>>
>> Benchmark                                                      Mode  Samples        Score       Error  Units
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput10Params   thrpt       20  1101267.173 ± 17583.204  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput11Params   thrpt       20  1128269.255 ± 12188.910  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput1Param     thrpt       20  1525470.805 ± 56515.933  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput2Params    thrpt       20  1789434.196 ± 42733.475  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput3Params    thrpt       20  1803276.278 ± 34938.176  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput4Params    thrpt       20  1468550.776 ± 26402.286  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput5Params    thrpt       20  1322304.349 ± 22417.997  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput6Params    thrpt       20  1179756.489 ± 16502.276  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput7Params    thrpt       20  1324660.677 ± 18893.944  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput8Params    thrpt       20  1309365.962 ± 19602.489  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput9Params    thrpt       20  1422144.180 ± 20815.042  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughputSimple     thrpt       20  1247862.372 ± 18300.764  ops/s
>>
>>
>> *DisruptorBlockingQueue:*
>>
>> Benchmark                                                      Mode  Samples        Score        Error  Units
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput10Params   thrpt       20  3704735.586 ±  59766.253  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput11Params   thrpt       20  3622175.410 ±  31975.353  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput1Param     thrpt       20  6862480.428 ± 121473.276  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput2Params    thrpt       20  6193288.988 ±  93545.144  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput3Params    thrpt       20  5715621.712 ± 131878.581  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput4Params    thrpt       20  5745187.005 ± 213854.016  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput5Params    thrpt       20  5307137.396 ±  88135.709  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput6Params    thrpt       20  4953015.419 ±  72100.403  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput7Params    thrpt       20  4833836.418 ±  52919.314  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput8Params    thrpt       20  4353791.507 ±  79047.812  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughput9Params    thrpt       20  4136761.624 ±  67804.253  ops/s
>> o.a.l.l.p.j.AsyncAppenderLog4j2Benchmark.throughputSimple     thrpt       20  6719456.722 ± 187433.301  ops/s
>>
>>
>> *AsyncLogger:*
>>
>> Benchmark                                               Mode  Samples         Score        Error  Units
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput10Params   thrpt       20   5075883.371 ± 180465.316  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput11Params   thrpt       20   4867362.030 ± 193909.465  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput1Param     thrpt       20  10294733.024 ± 226536.965  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput2Params    thrpt       20   9021650.667 ± 351102.255  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput3Params    thrpt       20   8079337.905 ± 115824.975  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput4Params    thrpt       20   7347356.788 ±  66598.738  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput5Params    thrpt       20   6930636.174 ± 150072.908  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput6Params    thrpt       20   6309567.300 ± 293709.787  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput7Params    thrpt       20   6051997.196 ± 268405.087  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput8Params    thrpt       20   5273376.623 ±  99168.461  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughput9Params    thrpt       20   5091137.594 ± 150617.444  ops/s
>> o.a.l.l.p.j.AsyncLoggersBenchmark.throughputSimple     thrpt       20  11136623.731 ± 400350.272  ops/s
>>
>>
>> So while the Conversant BlockingQueue implementation significantly
>> improves the performance of AsyncAppender, it looks like AsyncLogger is
>> still faster.
>>
>> On 15 June 2016 at 10:43, Matt Sicker <boa...@gmail.com> wrote:
>>
>>> I'm gonna play with the microbenchmarks and see what I find number-wise.
>>> I'll return with some results.
>>>
>>> On 15 June 2016 at 10:33, Matt Sicker <boa...@gmail.com> wrote:
>>>
>>>> There's a much smaller disruptor library, <
>>>> https://github.com/conversant/disruptor>, which implements
>>>> BlockingQueue. I'm not sure how it compares in performance to LMAX (it's
>>>> supposedly better on Intel machines), but it might be worth looking into as
>>>> an alternative (or at least making a BlockingQueueFactory like Camel does
>>>> for its SEDA component).
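>>>>
>>>> A hypothetical factory along those lines (the names here are illustrative,
>>>> not an existing Log4j API) could be as simple as:
>>>>
>>>>     import java.util.concurrent.ArrayBlockingQueue;
>>>>     import java.util.concurrent.BlockingQueue;
>>>>
>>>>     // Sketch: lets the appender be configured with different queue
>>>>     // implementations instead of hard-coding ArrayBlockingQueue.
>>>>     public interface BlockingQueueFactory<E> {
>>>>         BlockingQueue<E> create(int capacity);
>>>>     }
>>>>
>>>>     class ArrayBlockingQueueFactory<E> implements BlockingQueueFactory<E> {
>>>>         @Override
>>>>         public BlockingQueue<E> create(int capacity) {
>>>>             return new ArrayBlockingQueue<>(capacity);
>>>>         }
>>>>     }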
>>>>
>>>> --
>>>> Matt Sicker <boa...@gmail.com>
>>>>
>>>
>>>
>>>
>>> --
>>> Matt Sicker <boa...@gmail.com>
>>>
>>
>>
>>
>> --
>> Matt Sicker <boa...@gmail.com>
>>
>>
>
>
> --
> Matt Sicker <boa...@gmail.com>
>
>


-- 
E-Mail: garydgreg...@gmail.com | ggreg...@apache.org
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory
