[
https://issues.apache.org/jira/browse/RATIS-840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087664#comment-17087664
]
Lokesh Jain commented on RATIS-840:
-----------------------------------
[~yjxxtd] Thanks for updating the patch! Please find my comments below.
# LogAppender$AppenderDaemon#run - I think we can use server.isAlive() instead
of raftlog.isOpen().
# LogAppender#stopAppender - We should move the daemon.interrupt() and
daemon.join() calls inside AppenderDaemon. Since we are interrupting the
daemon in AppenderDaemon#stop, the join call might not give the desired
result. (See the first sketch after these comments.)
# Star import in LeaderState.
# In LeaderState#getSorted - Let's also print the leader information while
throwing the exception.
# LeaderState#addSenders - A new FollowerInfo should be created in this
function instead of in RaftServerImpl#newLogAppender. Then we can remove the
following update call from LogAppender (see also the second sketch below):
{code:java}
leaderState.putFollowerInfo(f);
{code}
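For comments 1 and 2, the following is a rough sketch of what I have in mind.
The field names, the plain Thread in place of the Daemon wrapper, and the loop
body are assumptions for illustration, not the actual code:
{code:java}
class AppenderDaemon {
  private final RaftServerImpl server;
  private final Thread daemon = new Thread(this::run);

  AppenderDaemon(RaftServerImpl server) {
    this.server = server;
  }

  void start() {
    daemon.start();
  }

  void run() {
    // Comment 1: gate on server.isAlive() instead of raftlog.isOpen(), so a
    // log closed during shutdown is not mistaken for a failure to recover from.
    while (server.isAlive()) {
      // ... send appendEntries to the follower ...
    }
  }

  void stop() throws InterruptedException {
    // Comment 2: keep the interrupt/join pair encapsulated here. Since stop()
    // interrupts the thread, callers such as LogAppender#stopAppender cannot
    // rely on join() alone for ordering guarantees.
    daemon.interrupt();
    daemon.join();
  }
}
{code}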
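And for comment 5, roughly as below. The addSenders signature and the
FollowerInfo constructor arguments are simplified for illustration:
{code:java}
// LeaderState#addSenders: create and register the FollowerInfo here, so the
// putFollowerInfo update call can be removed from LogAppender.
private Collection<LogAppender> addSenders(Collection<RaftPeer> newPeers, long nextIndex) {
  final List<LogAppender> newAppenders = new ArrayList<>();
  for (RaftPeer peer : newPeers) {
    final FollowerInfo f = new FollowerInfo(peer, Timestamp.currentTime(), nextIndex, true);
    putFollowerInfo(f);  // replaces the update call inside LogAppender
    newAppenders.add(server.newLogAppender(this, f));
  }
  return newAppenders;
}
{code}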
> Memory leak of LogAppender
> --------------------------
>
> Key: RATIS-840
> URL: https://issues.apache.org/jira/browse/RATIS-840
> Project: Ratis
> Issue Type: Bug
> Components: server
> Reporter: runzhiwang
> Assignee: runzhiwang
> Priority: Critical
> Attachments: RATIS-840.001.patch, image-2020-04-06-14-27-28-485.png,
> image-2020-04-06-14-27-39-582.png, screenshot-1.png, screenshot-2.png
>
>
> *What's the problem?*
> After running hadoop-ozone for 4 days, the datanode leaked memory. In a heap
> dump I found 460710 instances of GrpcLogAppender, but only 6 instances of
> SenderList, and each SenderList contains 1-2 instances of GrpcLogAppender.
> There are also a lot of logs related to
> [LeaderState::restartSender|https://github.com/apache/incubator-ratis/blob/master/ratis-server/src/main/java/org/apache/ratis/server/impl/LeaderState.java#L428].
> {code:java}
> INFO impl.RaftServerImpl:
> 1665f5ea-ab17-4a0e-af6d-6958efd322fa@group-F64B465F37B5-LeaderState:
> Restarting GrpcLogAppender for
> 1665f5ea-ab17-4a0e-af6d-6958efd322fa@group-F64B465F37B5->229cbcc1-a3b2-4383-9c0d-c0f4c28c3d4a
> {code}
>
> So there are a lot of GrpcLogAppender instances that did not stop their
> Daemon Thread when they were removed from senders.
> !image-2020-04-06-14-27-28-485.png!
> !image-2020-04-06-14-27-39-582.png!
>
> *Why is
> [LeaderState::restartSender|https://github.com/apache/incubator-ratis/blob/master/ratis-server/src/main/java/org/apache/ratis/server/impl/LeaderState.java#L428]
> called so many times?*
> 1. As the image shows, when a group is removed, SegmentedRaftLog is closed,
> and GrpcLogAppender throws an exception when it finds that the
> SegmentedRaftLog was closed. GrpcLogAppender is then
> [restarted|https://github.com/apache/incubator-ratis/blob/master/ratis-server/src/main/java/org/apache/ratis/server/impl/LogAppender.java#L94],
> and the new GrpcLogAppender throws the same exception on the closed
> SegmentedRaftLog and is restarted again, and so on. This results in an
> infinite restart of GrpcLogAppender (see the sketch after the screenshot).
> 2. Actually, when a group is removed, GrpcLogAppender is stopped:
> RaftServerImpl::shutdown ->
> [RoleInfo::shutdownLeaderState|https://github.com/apache/incubator-ratis/blob/master/ratis-server/src/main/java/org/apache/ratis/server/impl/RaftServerImpl.java#L266]
> -> LeaderState::stop -> LogAppender::stopAppender. Then SegmentedRaftLog
> is closed: RaftServerImpl::shutdown ->
> [ServerState:close|https://github.com/apache/incubator-ratis/blob/master/ratis-server/src/main/java/org/apache/ratis/server/impl/RaftServerImpl.java#L271]
> ... . Although RoleInfo::shutdownLeaderState is called before
> ServerState:close, the GrpcLogAppender is stopped asynchronously. So the
> infinite restart of GrpcLogAppender happens whenever the GrpcLogAppender
> stops after the SegmentedRaftLog has closed.
> !screenshot-1.png!
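> In effect the loop looks like this (simplified pseudo-Java for illustration,
> not the actual Ratis code; appendEntriesToFollower and restartSender stand in
> for the linked methods):
> {code:java}
> // AppenderDaemon#run exits exceptionally once SegmentedRaftLog is closed,
> // and LeaderState::restartSender unconditionally creates a replacement.
> while (true) {
>   try {
>     appendEntriesToFollower();       // throws: SegmentedRaftLog already closed
>   } catch (Exception e) {
>     sender = restartSender(sender);  // new GrpcLogAppender, same closed log;
>   }                                  // it fails immediately and we loop again
> }
> {code}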
> *Why did GrpcLogAppender not stop the Daemon Thread when removed from
> senders?*
> I found a lot of GrpcLogAppender threads blocked inside log4j. I think the
> GrpcLogAppenders restart too fast and then get blocked in log4j.
> !screenshot-2.png!
> *Can the new GrpcLogAppender work normally?*
> 1. Even without the above problem, the newly created GrpcLogAppender still
> cannot work normally.
> 2. When a new GrpcLogAppender is created, a new FollowerInfo is also created:
> LeaderState::addAndStartSenders ->
> LeaderState::addSenders -> RaftServerImpl::newLogAppender -> [new
> FollowerInfo|https://github.com/apache/incubator-ratis/blob/master/ratis-server/src/main/java/org/apache/ratis/server/impl/RaftServerImpl.java#L129]
> 3. When the newly created GrpcLogAppender appends an entry to the follower,
> the follower responds with SUCCESS.
> 4. Then LeaderState::updateCommit -> [LeaderState::getMajorityMin |
> https://github.com/apache/incubator-ratis/blob/master/ratis-server/src/main/java/org/apache/ratis/server/impl/LeaderState.java#L599]
> ->
> [voterLists.get(0) |
> https://github.com/apache/incubator-ratis/blob/master/ratis-server/src/main/java/org/apache/ratis/server/impl/LeaderState.java#L607].
> {color:#DE350B}The error happens because voterLists.get(0) returns the
> FollowerInfo of the old GrpcLogAppender, not the FollowerInfo of the new
> GrpcLogAppender.{color}
> 5. The majority commit computed from the FollowerInfo of the old
> GrpcLogAppender never changes. So even though the follower has appended the
> entry successfully, the leader cannot update the commit index, and the newly
> created GrpcLogAppender can never work normally (see the sketch at the end
> of this list).
> 6. The reason the runTestRestartLogAppender unit test passes is that it does
> not stop the old GrpcLogAppender, and it is the old GrpcLogAppender, not the
> new one, that appends entries to the follower. If the old GrpcLogAppender
> were stopped, runTestRestartLogAppender would fail.
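> To make steps 4 and 5 concrete (a simplified illustration with assumed
> shapes, not the actual code):
> {code:java}
> // voterLists.get(0) still holds the FollowerInfo objects created for the
> // OLD appenders, so the majority index computed from them is frozen.
> List<FollowerInfo> voters = voterLists.get(0);  // old FollowerInfo objects
> long majorityIndex = getMajorityMin(voters);    // never advances
> // The follower's SUCCESS replies update the NEW FollowerInfo created in
> // RaftServerImpl::newLogAppender, which voterLists does not contain, so the
> // leader's commit index can never move forward.
> {code}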
--
This message was sent by Atlassian Jira
(v8.3.4#803005)