No, unfortunately I only have a thread dump from JVM shutdown :(
On Mon, Mar 15, 2021, 6:29 PM Remko Popma wrote:
> Leon, can you definitely rule out the possibility that there are any
> objects being logged which themselves call Log4j from their toString
> method?
>
> On Tue, Mar 16, 2021 at 7:28 AM Leon Finker wrote:
Leon, can you definitely rule out the possibility that there are any
objects being logged which themselves call Log4j from their toString
method?
On Tue, Mar 16, 2021 at 7:28 AM Leon Finker wrote:
> We are using the default TimeoutBlockingWaitStrategy. I doubt it will
> reproduce again. We have been using 1.13.1 since October 2020 ...
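For illustration, the situation Remko asks about (an object being logged whose
toString() itself calls Log4j) would look roughly like this. A minimal sketch,
with the class and field names invented:

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    // Hypothetical domain object whose toString() logs through Log4j.
    public class Order {
        private static final Logger LOG = LogManager.getLogger(Order.class);
        private final long id;

        public Order(long id) {
            this.id = id;
        }

        @Override
        public String toString() {
            // If this ever runs on the single async logging thread while the
            // ring buffer is full, the nested call can end up waiting for a
            // slot that only this same thread could free.
            LOG.debug("rendering order {}", id);
            return "Order[id=" + id + "]";
        }
    }

A call such as LOG.info("processing {}", order) then triggers a second logging
call whenever the parameter is formatted.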
We are using the default TimeoutBlockingWaitStrategy. I doubt it will
reproduce again. We have been using 1.13.1 since October 2020 and this
never happened till now (been using log4j2 since the beginning on
various versions). We run about 200 of these specific service
instances and never observed it before ...
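For reference, the wait strategy Leon mentions is the default for async loggers
in recent 2.x releases, and it can be set explicitly with a system property in
the same style as the args in the original post (Timeout selects
TimeoutBlockingWaitStrategy):

    -DAsyncLogger.WaitStrategy=Timeout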
Yes, it looks the same. In our case it's also under JRE 11 with G1 and
string deduplication enabled (I doubt it matters :) but I'll mention it
anyway). And we're running on virtual machines.
On Mon, Mar 15, 2021 at 8:50 AM Carter Kozak wrote:
>
> This sounds a lot like an issue I discussed with the lmax-disruptor folks here: ...
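The GC setup Leon describes corresponds to standard HotSpot flags along these
lines (string deduplication requires G1):

    -XX:+UseG1GC -XX:+UseStringDeduplication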
My bad, I read things a little too quickly and I hadn't seen the separation
between the application threads and the background logging thread in the
original message. There are two issues here:
- the one from Leon: the application thread cannot make progress because
the disruptor buffer is full, but ...
> The issue here is that you are trying to log from the consumer thread.
That doesn't match my understanding of the problem, but it's entirely possible
I've missed something. The "Log4j2-TF-1-AsyncLogger[AsyncContext@5cb0d902]-1"
thread in the example above is waiting for events to become available on the ...
Hello
The issue here is that you are trying to log from the consumer thread. Since
this thread is the only one that consumes events from the disruptor, if the
buffer is full, it deadlocks itself.
It is not really a disruptor bug; it is an inappropriate usage of it which
leads to this behavior.
In this thread ...
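To make that failure mode concrete in plain Disruptor terms, it is the pattern
of a handler publishing back into the ring buffer it consumes from. A minimal
sketch, assuming Disruptor 3.x and an invented event type:

    import com.lmax.disruptor.EventHandler;
    import com.lmax.disruptor.RingBuffer;

    // Invented event type for the sketch.
    final class MsgEvent {
        String text;
    }

    // A handler that publishes into the same ring buffer it consumes from.
    // publishEvent() waits for a free slot; once the buffer is full, the only
    // thread that could free a slot (the single consumer) is this thread
    // itself, so it waits forever.
    final class ReentrantHandler implements EventHandler<MsgEvent> {
        private final RingBuffer<MsgEvent> ringBuffer;

        ReentrantHandler(RingBuffer<MsgEvent> ringBuffer) {
            this.ringBuffer = ringBuffer;
        }

        @Override
        public void onEvent(MsgEvent event, long sequence, boolean endOfBatch) {
            // ... write event.text to the underlying sink ...
            ringBuffer.publishEvent((e, seq) -> e.text = "nested: " + event.text);
        }
    }

The async logger's consumer plays a role analogous to ReentrantHandler whenever
something it calls during event processing logs again.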
This sounds a lot like an issue I discussed with the lmax-disruptor folks here:
https://github.com/LMAX-Exchange/disruptor/issues/307
I ended up switching my services to log synchronously rather than enqueue when
the buffer is full, which can produce events out of order; however, we log into
a ...
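For what it's worth, the "synchronous instead of enqueue" behavior Carter
describes maps onto Log4j 2's AsyncQueueFullPolicy extension point. A rough
sketch, assuming the 2.x AsyncQueueFullPolicy/EventRoute API and with the class
name invented:

    import org.apache.logging.log4j.Level;
    import org.apache.logging.log4j.core.async.AsyncQueueFullPolicy;
    import org.apache.logging.log4j.core.async.EventRoute;

    // When the ring buffer is full, log in the calling thread instead of
    // blocking until a slot becomes available.
    public class SynchronousWhenFullPolicy implements AsyncQueueFullPolicy {
        @Override
        public EventRoute getRoute(long backgroundThreadId, Level level) {
            return EventRoute.SYNCHRONOUS;
        }
    }

Log4j picks the policy up from the log4j2.AsyncQueueFullPolicy system property
(a fully qualified class name, or the built-in "Discard").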
Yes, unfortunately all logging stopped and nothing was seen in the log
file for about 2 hrs till the service was restarted. But there is nothing
pointing to a deadlock in the thread dump we got so far :( However,
the thread dump captured was part of our standard kill -3 when the
process isn't exiting ...
One other thing. My expectation is that once the buffer is full you should
continue to log, but at the rate your underlying system can handle. This
wouldn't be a deadlock, but if the rate at which you are logging is higher than
the system can write, the result will be that the application slows down ...
Yes, it rings bells and really should have a slot on our FAQ page (although the
question really isn’t asked all that frequently).
First, it means that you are logging faster than your system can handle. The
proper response is to figure out why you are logging so much and try to reduce
it, or use ...
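One built-in option in that direction is the discarding queue-full policy,
which drops less important events once the buffer is full. It is enabled with
system properties roughly like the following (the threshold shown is
illustrative; events at that level or less severe are dropped):

    -Dlog4j2.AsyncQueueFullPolicy=Discard
    -Dlog4j2.DiscardThreshold=INFO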
Hi,
Using log4j2 1.13.1 with disruptor 3.4.2. Linux CentOS. JRE 11.0.5
Using the following JVM args:
-DAsyncLogger.RingBufferSize=32768
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
-Dlog4j.configurationFile=...
The disruptor queue has filled up. And we've ...
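The same selector and ring buffer size can also be placed in a
log4j2.component.properties file on the classpath instead of being passed as
-D flags; a sketch mirroring the args above:

    Log4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
    AsyncLogger.RingBufferSize=32768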