[
https://issues.apache.org/jira/browse/ARTEMIS-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17940754#comment-17940754
]
Justin Bertram commented on ARTEMIS-5325:
-----------------------------------------
As far as I can tell [^thread-dump-consumer-events.txt] is categorically
different from [^thread-dump.txt] because the former only demonstrates threads
blocking on sending management notifications while creating a consumer. It
doesn't demonstrate any kind of hard or soft dead-lock. Furthermore, there is
no blockage of Netty threads.
It looks like your broker was experiencing a high number of remote clients
attempting to create consumers when this thread dump was acquired. Is this
load normal in your environment? If so, it might be evidence of an anti-pattern
in your clients (creating & closing consumers often rather than letting them
remain open).
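To illustrate what I mean, compare the two patterns below. This is a hypothetical standalone client (the URL, queue name, and class names are made up) using the Jakarta JMS client; it is only a sketch of the churn anti-pattern versus a long-lived consumer, not something taken from your application.
{code:java}
import jakarta.jms.Connection;
import jakarta.jms.MessageConsumer;
import jakarta.jms.Queue;
import jakarta.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class ConsumerChurnExample {

   // Anti-pattern: every receive creates and closes a consumer, so the broker
   // emits CONSUMER_CREATED / CONSUMER_CLOSED notifications on each iteration.
   static void churningReceive(Session session, Queue queue) throws Exception {
      for (int i = 0; i < 10_000; i++) {
         MessageConsumer consumer = session.createConsumer(queue);
         consumer.receive(100);
         consumer.close();
      }
   }

   // Preferred: create the consumer once and keep it open for the life of the session.
   static void longLivedReceive(Session session, Queue queue) throws Exception {
      try (MessageConsumer consumer = session.createConsumer(queue)) {
         for (int i = 0; i < 10_000; i++) {
            consumer.receive(100);
         }
      }
   }

   public static void main(String[] args) throws Exception {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = cf.createConnection();
           Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
         connection.start();
         Queue queue = session.createQueue("exampleQueue");
         churningReceive(session, queue);
         longLivedReceive(session, queue);
      }
   }
}
{code}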
In any case, the thread that is blocking all the others is in code that pages
the management notification message to disk (which is relatively slow).
Typically, management notifications are consumed very quickly, which prevents
any accumulation and therefore prevents paging. However, for whatever reason
your broker is paging management notifications to disk, which might represent
a configuration or runtime issue that you should investigate.
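If it is not obvious why notifications are accumulating, a throw-away subscriber on the notification address will both drain it and show which events are being emitted. The sketch below assumes the default management-notification-address (activemq.notifications) and the Jakarta JMS client; _AMQ_NotifType is the standard header carrying the notification type.
{code:java}
import jakarta.jms.Connection;
import jakarta.jms.Message;
import jakarta.jms.MessageConsumer;
import jakarta.jms.Session;
import jakarta.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class NotificationDrainer {
   public static void main(String[] args) throws Exception {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = cf.createConnection();
           Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
         connection.start();
         // "activemq.notifications" is the default management-notification-address.
         Topic notifications = session.createTopic("activemq.notifications");
         try (MessageConsumer consumer = session.createConsumer(notifications)) {
            Message message;
            while ((message = consumer.receive(5000)) != null) {
               // Each notification carries its type (e.g. CONSUMER_CREATED) in this header.
               System.out.println(message.getStringProperty("_AMQ_NotifType"));
            }
         }
      }
   }
}
{code}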
bq. I think that the fix you provided (many thanks btw) will cover it as all
management notification events will be sent in a new thread now...Am I correct ?
That is not correct. The PR only sends _session_ notifications in a new thread.
Consumer notifications are still sent synchronously.
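Conceptually the change looks like the sketch below. It is not the actual diff and the names are made up; the point is simply that the session notification send is handed off to an executor so the thread creating or closing the session is never blocked, while the consumer notification path is untouched.
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch only; not the code from the PR.
public class AsyncSessionNotificationSketch {

   // Stand-in for the broker's management/notification service.
   interface NotificationSender {
      void send(String notification) throws Exception;
   }

   private final ExecutorService executor = Executors.newSingleThreadExecutor();
   private final NotificationSender sender;

   AsyncSessionNotificationSketch(NotificationSender sender) {
      this.sender = sender;
   }

   // Session notifications: handed off to the executor, so the caller never blocks.
   void onSessionCreated(String sessionName) {
      executor.execute(() -> {
         try {
            sender.send("SESSION_CREATED " + sessionName);
         } catch (Exception e) {
            // in the broker this failure would be logged; the caller is unaffected
         }
      });
   }

   // Consumer notifications: still sent synchronously on the calling thread.
   void onConsumerCreated(String consumerName) throws Exception {
      sender.send("CONSUMER_CREATED " + consumerName);
   }
}
{code}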
> Don't block session creation/closing with sending management notification
> -------------------------------------------------------------------------
>
> Key: ARTEMIS-5325
> URL: https://issues.apache.org/jira/browse/ARTEMIS-5325
> Project: ActiveMQ Artemis
> Issue Type: Bug
> Components: Broker, Clustering
> Affects Versions: 2.36.0, 2.37.0, 2.38.0, 2.39.0
> Reporter: Jean-Pascal Briquet
> Assignee: Justin Bertram
> Priority: Major
> Labels: pull-request-available
> Attachments: PrimaryDeadLockOnBackupSyncTest.java,
> thread-dump-consumer-events.txt, thread-dump.txt
>
> Time Spent: 20m
> Remaining Estimate: 0h
>
> h2. Configuration
> Artemis cluster with three primary/backup pairs using a ZooKeeper quorum.
> h2. Description
> The initial primary/backup replication can impact the primary (live) node,
> causing it to crash or freeze for an extended period.
> After an in-depth investigation, I found that the primary becomes dead-locked
> because no Netty threads are available to process the replication
> synchronization confirmation coming from the backup.
> This issue occurs when client applications create too many connections during
> the final phase of replication.
> Below, I provide details of my investigation and a potential workaround.
> A thread-dump and a test-case are attached.
> h3. Lock / Unlock
> At the very end of the replication process, the Artemis primary locks its
> internal state, including the journal (see
> ReplicationManager.sendSynchronizationDone()).
> It then waits for a synchronization confirmation packet from the backup
> before releasing the lock (see ReplicationManager.handlePacket()).
> This confirmation packet indicates to the primary that the backup is
> synchronized and ready for duty. While locked, the primary is essentially
> frozen: no operation can proceed on the broker.
> Under normal circumstances, this lock lasts only a few seconds or less.
> However, in my scenario, the confirmation packet from the backup is never
> processed.
> As a result, the primary remains locked indefinitely, freezing all activity
> until the replication process times out or the Artemis critical analyzer
> decides to stop the process.
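> For readers unfamiliar with this part of the code, the flow is roughly the
> following. This is a simplified sketch of the behaviour described above, not
> the actual ReplicationManager implementation; the class, method, and packet
> names are illustrative.
> {code:java}
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.TimeUnit;
>
> // Simplified sketch of the lock-and-wait flow described above; not the real code.
> class ReplicationHandshakeSketch {
>
>    private final CountDownLatch backupSynchronized = new CountDownLatch(1);
>
>    // Called at the end of the initial synchronization
>    // (compare ReplicationManager.sendSynchronizationDone()).
>    void sendSynchronizationDone() throws InterruptedException {
>       lockJournalAndState();                        // primary freezes its internal state
>       sendPacketToBackup("SYNCHRONIZATION_DONE");   // tell the backup it has everything
>       // Block until the backup's confirmation packet has been handled. If no
>       // Netty thread is free to call handlePacket(), this wait never completes.
>       backupSynchronized.await(30, TimeUnit.SECONDS);
>       unlockJournalAndState();
>    }
>
>    // Called from a Netty thread when the backup's confirmation packet arrives
>    // (compare ReplicationManager.handlePacket()).
>    void handlePacket(String packet) {
>       if ("SYNCHRONIZATION_CONFIRMED".equals(packet)) {
>          backupSynchronized.countDown();
>       }
>    }
>
>    private void lockJournalAndState() { /* illustrative placeholder */ }
>    private void unlockJournalAndState() { /* illustrative placeholder */ }
>    private void sendPacketToBackup(String packet) { /* illustrative placeholder */ }
> }
> {code}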
> h3. Confirmation packet handling issue
> All incoming packets arriving at Artemis are handled by Netty threads, which
> are managed via a dedicated Netty thread pool of size = 3 * processor count.
> After adding low-level logs in the packet handlers and analyzing TCP dumps, I
> am sure that the confirmation packet is received by the primary but never
> processed.
> Inspecting the thread dump shows that no free Artemis Netty threads are
> available.
> All Netty threads are blocked handling connection creation requests while
> attempting to send session notification events to the other cluster nodes.
> However, these notification events cannot be sent because of the replication
> and journal lock.
> During the investigation, I saw that some client applications were
> misbehaving, aggressively creating new connections.
> When these excessive connection requests occur in the final phase of the
> initial replication, they can block all Netty threads, leading to the
> deadlock.
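> This failure mode is classic thread pool starvation: every pool thread is
> blocked on a condition that only another task submitted to the same pool can
> satisfy. The snippet below is a minimal, self-contained illustration of the
> mechanism only; it does not use any Artemis classes.
> {code:java}
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> // Minimal demonstration of pool starvation: every worker blocks on a latch
> // that can only be released by a task which never gets a free worker to run on.
> public class PoolStarvationDemo {
>    public static void main(String[] args) {
>       int poolSize = 3 * Runtime.getRuntime().availableProcessors();
>       ExecutorService pool = Executors.newFixedThreadPool(poolSize);
>       CountDownLatch confirmation = new CountDownLatch(1);
>
>       // Flood the pool with "connection creation" tasks that block waiting for
>       // the confirmation, like the Netty threads blocked on the replication lock.
>       for (int i = 0; i < poolSize; i++) {
>          pool.submit(() -> {
>             try {
>                confirmation.await();   // blocks forever: no thread left to release it
>             } catch (InterruptedException e) {
>                Thread.currentThread().interrupt();
>             }
>          });
>       }
>
>       // The "confirmation packet" task is queued but never runs: the pool is
>       // starved and the JVM hangs, which is exactly the frozen-primary symptom.
>       pool.submit(confirmation::countDown);
>       System.out.println("Release task submitted but it will never execute.");
>    }
> }
> {code}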
> h2. Workaround
> Enable the following configuration in broker.xml:
> {quote}<suppress-session-notifications>true</suppress-session-notifications>
> {quote}
> This property disables session creation notifications, preventing Netty
> threads from being blocked and therefore avoiding the deadlock.
> https://activemq.apache.org/components/artemis/documentation/latest/management.html#suppressing-session-notifications
> Disabling session notifications seems acceptable for my use cases, which rely
> on the CORE, AMQP, and OPENWIRE protocols.
> However, according to the documentation, this option should not be used with
> the MQTT protocol.
> h2. Test
> Add the provided test under
> tests/integration-tests/src/test/java/org/apache/activemq/artemis/tests/integration/cluster/failover