[
https://issues.apache.org/jira/browse/IGNITE-10808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16728702#comment-16728702
]
Stanislav Lukyanov commented on IGNITE-10808:
---------------------------------------------
There are two parts to this problem:
1) The queue may grow indefinitely if metrics updates are generated faster than
they're processed.
This can be solved by removing all of the updates but the latest one.
When a new metrics update is added to the queue, we should check whether another
metrics update is already in the queue. If there is, replace the old one with the
new one (at the same place in the queue). We should be careful to replace metrics
updates only on their first ring pass - messages on their second ring pass should
be left in the queue (see the first sketch below).
2) The metrics updates may take up too much of the discovery worker's capacity,
leading to starvation-type issues.
This can be solved by making metrics updates normal priority instead of high
priority.
To avoid triggering failure detection, we need to make sure that all messages,
not only metrics updates, reset the failure detection timer (see the second
sketch below).
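A minimal sketch of the de-duplication idea from part 1, combined with the normal-priority
enqueue from part 2. The class and field names below are illustrative placeholders, not the
actual ServerImpl/RingMessageWorker code:
{code:java}
import java.util.LinkedList;
import java.util.ListIterator;

/** Sketch of the proposed queue handling; names are illustrative stand-ins. */
class DiscoveryQueueSketch {
    /** Simplified stand-in for a discovery message (e.g. a metrics update). */
    static class Msg {
        final boolean metricsUpdate;  // true for a metrics update message
        final boolean firstRingPass;  // false once the message has completed a ring pass

        Msg(boolean metricsUpdate, boolean firstRingPass) {
            this.metricsUpdate = metricsUpdate;
            this.firstRingPass = firstRingPass;
        }
    }

    /** Stand-in for the discovery worker's message queue (single consumer assumed). */
    private final LinkedList<Msg> queue = new LinkedList<>();

    /** Part 1: keep at most one first-pass metrics update in the queue. */
    void addMessage(Msg msg) {
        if (msg.metricsUpdate && msg.firstRingPass) {
            for (ListIterator<Msg> it = queue.listIterator(); it.hasNext(); ) {
                Msg old = it.next();

                // Replace only a first-pass metrics update, keeping its position in the
                // queue; updates on their second ring pass are left untouched.
                if (old.metricsUpdate && old.firstRingPass) {
                    it.set(msg);
                    return;
                }
            }
        }

        // Part 2: enqueue with normal priority (at the tail), not at the head.
        queue.addLast(msg);
    }
}
{code}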
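And a sketch of the failure detection side of part 2, assuming a per-connection
"last message received" timestamp that every incoming message refreshes (again,
the names are placeholders, not the real TcpDiscoverySpi fields):
{code:java}
/** Sketch: any received message, not only a metrics update, resets the timer. */
class FailureDetectionTimerSketch {
    private final long failureDetectionTimeoutNanos;

    private volatile long lastMsgReceivedNanos = System.nanoTime();

    FailureDetectionTimerSketch(long failureDetectionTimeoutMillis) {
        failureDetectionTimeoutNanos = failureDetectionTimeoutMillis * 1_000_000L;
    }

    /** Called for every message read from the socket, regardless of its type. */
    void onMessageReceived() {
        lastMsgReceivedNanos = System.nanoTime();
    }

    /** The previous node is suspected only if nothing at all arrived within the timeout. */
    boolean previousNodeSuspected() {
        return System.nanoTime() - lastMsgReceivedNanos > failureDetectionTimeoutNanos;
    }
}
{code}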
> Discovery message queue may build up with TcpDiscoveryMetricsUpdateMessage
> --------------------------------------------------------------------------
>
> Key: IGNITE-10808
> URL: https://issues.apache.org/jira/browse/IGNITE-10808
> Project: Ignite
> Issue Type: Bug
> Reporter: Stanislav Lukyanov
> Priority: Major
> Attachments: IgniteMetricsOverflowTest.java
>
>
> A node receives a new metrics update message every `metricsUpdateFrequency`
> milliseconds, and the message is put at the top of the queue (because it is a
> high-priority message).
> If processing one message takes longer than `metricsUpdateFrequency` then
> multiple `TcpDiscoveryMetricsUpdateMessage`s will accumulate in the queue. A
> long enough delay (e.g. caused by a network glitch or GC) may leave the queue
> holding tens of metrics update messages that are essentially useless to
> process. Finally, if processing a message takes on average a little longer
> than `metricsUpdateFrequency` (even for a relatively short period of time,
> say, a minute due to network issues) then the message worker will end up
> processing only the metrics updates and the cluster will essentially hang.
> A reproducer is attached. In the test, the queue first builds up and is then
> torn down very slowly, causing "Failed to wait for PME" messages.
> We need to change ServerImpl's SocketReader not to put another metrics update
> message at the top of the queue if one is already there (or to replace the one
> at the top with the new one).
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)