[
https://issues.apache.org/jira/browse/NIFI-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14982598#comment-14982598
]
Matt Gilman commented on NIFI-1085:
-----------------------------------
The 35 MB heartbeat messages are just the statistics and bulletins for the
components in the flow from each node. Since a history of these statistics is
held in memory, that is likely what is filling your heap. The stats history
repository is an interface, so you should be able to create an implementation
that does not hold the history in memory; that may help, since you said the
symptoms occur most often when garbage collection begins.
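A minimal sketch of that idea follows. The interface and method signatures here are hypothetical (NiFi's actual status-history repository interface differs); the point is only to show an implementation that discards snapshots instead of accumulating them on the heap:

```java
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for NiFi's stats history repository interface;
// the real interface and its signatures differ.
interface StatusHistoryRepository {
    void capture(String componentId, long timestamp, long bytesProcessed);
    List<Long> getHistory(String componentId);
}

// An implementation that keeps no history: each snapshot is dropped on
// arrival, so nothing accumulates in memory. The trade-off is that the
// UI's stats-history charts would have no data to show.
class NoHistoryStatusRepository implements StatusHistoryRepository {
    @Override
    public void capture(String componentId, long timestamp, long bytesProcessed) {
        // intentionally discard the snapshot
    }

    @Override
    public List<Long> getHistory(String componentId) {
        return Collections.emptyList();
    }
}
```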
> WebClusterManager starves immutable API requests under heavy load conditions
> ----------------------------------------------------------------------------
>
> Key: NIFI-1085
> URL: https://issues.apache.org/jira/browse/NIFI-1085
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 0.3.0
> Reporter: Michael Moser
>
> With a 6 node cluster with thousands of components on the graph, we noticed
> that the ReentrantReadWriteLock in WebClusterManager can starve NiFi Web
> Server threads that are waiting on the read lock.
> Thread dumps show the HeartbeatMonitoringTimerTask thread holding the write
> lock while many Web Server threads are parked waiting on the read lock.
> Modify the ReentrantReadWriteLock to operate in fair mode (to give the lock
> to threads waiting the longest, such as those wanting the read lock).
> Modify the HeartbeatMonitoringTimerTask timer to not use
> scheduleAtFixedRate() but instead use schedule(), so that it executes less
> often if garbage collection blocks it.
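The first proposed change is a one-argument switch in the standard library: ReentrantReadWriteLock takes a fairness flag, and in fair mode the lock is granted to the longest-waiting threads, so queued readers are not repeatedly passed over by a recurring writer. A minimal illustration:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairLockDemo {
    public static void main(String[] args) {
        // Default (non-fair) mode: arriving threads may "barge" ahead of
        // queued ones, which can starve long-waiting readers under load.
        ReentrantReadWriteLock unfair = new ReentrantReadWriteLock();

        // Fair mode: the lock is handed to the longest-waiting thread(s),
        // so readers parked behind the heartbeat writer eventually run.
        ReentrantReadWriteLock fair = new ReentrantReadWriteLock(true);

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

Note the documented trade-off: fair locks generally have lower overall throughput than the default mode, in exchange for bounded waiting.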
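The second proposed change rests on how java.util.Timer differs between its two scheduling methods: scheduleAtFixedRate() will fire missed executions in rapid succession to "catch up" after a long pause (such as a GC stall), whereas schedule() uses fixed-delay semantics, where each run is timed relative to when the previous one finished, so a pause simply spaces executions out. A rough sketch (the task body and period here are placeholders, not NiFi's actual values):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

public class HeartbeatScheduling {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger runs = new AtomicInteger();
        Timer timer = new Timer("heartbeat-monitor", true);

        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                // placeholder for heartbeat processing
                runs.incrementAndGet();
            }
        };

        // Fixed-delay scheduling: each execution starts one full period
        // after the previous execution FINISHES. If a GC pause blocks the
        // task, subsequent runs are pushed back rather than bunched up,
        // unlike scheduleAtFixedRate(), which would fire the missed runs
        // back to back while holding the write lock.
        timer.schedule(task, 0, 250);

        Thread.sleep(600);
        timer.cancel();
        System.out.println("heartbeat checks run: " + runs.get());
    }
}
```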
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)