Just to double-check: when using the blocking wait strategy and running jstack, does BlockingWaitStrategy appear in the stack trace? Also, is it possible to double-check (perhaps by attaching VisualVM) that it definitely is the AsyncLoggerConfig-1 thread that consumes so much CPU?
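For reference, here is a sketch of how to switch the strategy and check the dump. The system property name is taken from the log4j2 async-logger documentation of that era; please verify it against the 2.0-rc1 jars actually bundled with Archiva, and `<websphere-pid>` is of course a placeholder:

```shell
# Select the Disruptor wait strategy for the AsyncLoggerConfig threads
# (valid values include Block, Sleep, Yield). Property name per the
# log4j2 async-logger docs; confirm for your exact version.
export JAVA_OPTS="$JAVA_OPTS -DAsyncLoggerConfig.WaitStrategy=Block"

# With the blocking strategy active, the parked frame in a thread dump
# should mention BlockingWaitStrategy rather than SleepingWaitStrategy:
jstack <websphere-pid> | grep -B2 -A8 'AsyncLoggerConfig-1'
```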
Sent from my iPhone

> On 2014/03/19, at 12:31, Chris Graham <[email protected]> wrote:
>
> I have tried both Block and Sleep (the default), but not Yield. No
> discernible difference.
>
>
>> On Wed, Mar 19, 2014 at 1:53 PM, Remko Popma <[email protected]> wrote:
>>
>> As tweeted, I suggest trying the blocking wait strategy. Can you run a
>> jstack dump (and perhaps attach the result to a Jira ticket)? In the
>> attached stack trace below, the AsyncLoggerConfig-1 thread seems to be
>> parked, waiting for a new log event... Doesn't explain the high CPU
>> usage...
>>
>> Sent from my iPhone
>>
>>> On 2014/03/19, at 10:27, Chris Graham <[email protected]> wrote:
>>>
>>> Hello Everyone.
>>>
>>> In this instance, I'm an indirect user of log4j2 2.0-rc1, as it's in the
>>> web app that I'm using, Apache Archiva 2.0.1.
>>>
>>> The issue is that when running under WebSphere 8.5.0.2 (obviously on the
>>> IBM JDK, 1.6) on AIX 6.1 TL8, Apache Archiva, even when it's doing
>>> nothing, is sitting idle at around 50% CPU.
>>>
>>> Obviously, this is not good!
>>>
>>> I've performed the AIX native analysis to get the native thread ID,
>>> mapped it to a Java thread ID, triggered a heap dump, and I've found
>>> this as the culprit:
>>>
>>> 3XMTHREADINFO "AsyncLoggerConfig-1" J9VMThread:0x0000000031D14600,
>>> j9thread_t:0x00000100137D8BD0, java/lang/Thread:0x000000004301C508,
>>> state:CW, prio=5
>>> 3XMJAVALTHREAD (java/lang/Thread getId:0x6A, isDaemon:true)
>>> 3XMTHREADINFO1 (native thread ID:0x2BF00F9, native priority:0x5,
>>> native policy:UNKNOWN)
>>> 3XMHEAPALLOC Heap bytes allocated since last GC cycle=0 (0x0)
>>> 3XMTHREADINFO3 Java callstack:
>>> 4XESTACKTRACE at sun/misc/Unsafe.park(Native Method)
>>> 4XESTACKTRACE at java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:332)
>>> 4XESTACKTRACE at com/lmax/disruptor/SleepingWaitStrategy.applyWaitMethod(SleepingWaitStrategy.java:66)
>>> 4XESTACKTRACE at com/lmax/disruptor/SleepingWaitStrategy.waitFor(SleepingWaitStrategy.java:39)
>>> 4XESTACKTRACE at com/lmax/disruptor/ProcessingSequenceBarrier.waitFor(ProcessingSequenceBarrier.java:55)
>>> 4XESTACKTRACE at com/lmax/disruptor/BatchEventProcessor.run(BatchEventProcessor.java:115)
>>> 4XESTACKTRACE at java/util/concurrent/ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>>> 4XESTACKTRACE at java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>>> 4XESTACKTRACE at java/lang/Thread.run(Thread.java:773)
>>> 3XMTHREADINFO3 Native callstack:
>>> 4XENATIVESTACK _event_wait+0x2b8 (0x09000000007E7D3C [libpthreads.a+0x16d3c])
>>> 4XENATIVESTACK _cond_wait_local+0x4e4 (0x09000000007F5A48 [libpthreads.a+0x24a48])
>>> 4XENATIVESTACK _cond_wait+0xbc (0x09000000007F6020 [libpthreads.a+0x25020])
>>> 4XENATIVESTACK pthread_cond_wait+0x1a8 (0x09000000007F6C8C [libpthreads.a+0x25c8c])
>>> 4XENATIVESTACK (0x0900000001223014 [libj9thr26.so+0x6014])
>>> 4XENATIVESTACK (0x0900000001222C60 [libj9thr26.so+0x5c60])
>>> 4XENATIVESTACK (0x090000000116AE58 [libj9vm26.so+0xfe58])
>>> 4XENATIVESTACK (0x090000000116B17C [libj9vm26.so+0x1017c])
>>> 4XENATIVESTACK (0x0900000001810528 [libjclscar_26.so+0x5c528])
>>> 4XENATIVESTACK (0x0900000001813B98 [libjclscar_26.so+0x5fb98])
>>> 4XENATIVESTACK (0x0900000001161764 [libj9vm26.so+0x6764])
>>> 4XENATIVESTACK (0x0900000001239CA0 [libj9prt26.so+0x2ca0])
>>> 4XENATIVESTACK (0x09000000011615D4 [libj9vm26.so+0x65d4])
>>> 4XENATIVESTACK (0x090000000121FAF4 [libj9thr26.so+0x2af4])
>>> 4XENATIVESTACK _pthread_body+0xf0 (0x09000000007D4D34 [libpthreads.a+0x3d34])
>>> NULL
>>>
>>> I've been dealing with Olivier, from Archiva, and he suggested that I
>>> drop a message in here.
>>>
>>> Are there any known issues with this?
>>>
>>> -Chris
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: [email protected]
>> For additional commands, e-mail: [email protected]
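As an aside, the native-TID-to-Java-thread mapping step described above can be scripted: the IBM javacore prints the native thread ID in hex (e.g. 0x2BF00F9 here), while AIX tools such as `ps -mo THREAD -p <pid>` and tprof report TIDs in decimal. A minimal sketch (the helper name is mine, not from any tool):

```python
def native_tid_to_decimal(hex_tid: str) -> int:
    """Convert a javacore native thread ID like '0x2BF00F9' to the
    decimal TID that AIX 'ps -mo THREAD' / tprof output uses."""
    return int(hex_tid, 16)

# The AsyncLoggerConfig-1 thread from the dump above:
print(native_tid_to_decimal("0x2BF00F9"))  # 46072057
```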
