[ https://issues.apache.org/jira/browse/HDFS-9178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15019094#comment-15019094 ]
Sangjin Lee commented on HDFS-9178:
-----------------------------------
Does this issue exist in 2.6.x? Should this be backported to branch-2.6?
> Slow datanode I/O can cause a wrong node to be marked bad
> ---------------------------------------------------------
>
> Key: HDFS-9178
> URL: https://issues.apache.org/jira/browse/HDFS-9178
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Kihwal Lee
> Assignee: Kihwal Lee
> Priority: Critical
> Fix For: 3.0.0, 2.7.2
>
> Attachments: HDFS-9178.branch-2.6.patch, HDFS-9178.patch
>
>
> When a non-leaf datanode in a pipeline is slow at or stuck on disk I/O, the
> downstream node can time out on reading a packet, since even the heartbeat
> packets will not be relayed downstream.
> The packet read timeout is set in {{DataXceiver#run()}}:
> {code}
> peer.setReadTimeout(dnConf.socketTimeout);
> {code}
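> The failure mode can be seen with plain {{java.net}} sockets. The sketch below
> is a standalone illustration, not HDFS code: a receiver reads with a short
> timeout, like the downstream node's {{peer.setReadTimeout()}}, while its
> upstream relays nothing; the receiver times out and closes the connection,
> which is what surfaces as {{EOFException}} at the upstream.
> {code}
> import java.io.DataInputStream;
> import java.net.ServerSocket;
> import java.net.Socket;
> import java.net.SocketTimeoutException;
>
> public class ReadTimeoutSketch {
>   public static void main(String[] args) throws Exception {
>     try (ServerSocket server = new ServerSocket(0);
>          Socket upstream = new Socket("localhost", server.getLocalPort());
>          Socket downstream = server.accept()) {
>       // Like peer.setReadTimeout(dnConf.socketTimeout) in DataXceiver#run(),
>       // but 1s instead of the usual 60s to keep the demo short.
>       downstream.setSoTimeout(1000);
>       DataInputStream in = new DataInputStream(downstream.getInputStream());
>       try {
>         in.readInt(); // upstream writes nothing: simulates a stuck relay
>       } catch (SocketTimeoutException e) {
>         System.out.println("downstream timed out first: " + e);
>         downstream.close(); // upstream's next read now gets EOFException
>       }
>     }
>   }
> }
> {code}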
> When the downstream node times out and closes the connection to the upstream
> node, the upstream node's {{PacketResponder}} gets an {{EOFException}} and
> sends an ack upstream with the downstream node's status set to {{ERROR}}.
> This causes the client to exclude the downstream node, even though the
> upstream node was the one that got stuck.
> The connection to the downstream node has a longer timeout, so the downstream
> node will always time out first. That timeout is set in {{writeBlock()}}:
> {code}
> int timeoutValue = dnConf.socketTimeout +
>     (HdfsConstants.READ_TIMEOUT_EXTENSION * targets.length);
> int writeTimeout = dnConf.socketWriteTimeout +
>     (HdfsConstants.WRITE_TIMEOUT_EXTENSION * targets.length);
> NetUtils.connect(mirrorSock, mirrorTarget, timeoutValue);
> OutputStream unbufMirrorOut = NetUtils.getOutputStream(mirrorSock,
>     writeTimeout);
> {code}
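> As a back-of-the-envelope check (assuming a 60 second {{dnConf.socketTimeout}}
> and a 5 second {{HdfsConstants.READ_TIMEOUT_EXTENSION}}; these are assumed
> defaults, not values taken from this issue), the effective timeouts for a
> three-node pipeline DN0 -> DN1 -> DN2 work out so that the plain packet-read
> timeout is always the shortest:
> {code}
> public class PipelineTimeoutSketch {
>   static final int SOCKET_TIMEOUT = 60_000;        // assumed dnConf.socketTimeout
>   static final int READ_TIMEOUT_EXTENSION = 5_000; // assumed HdfsConstants value
>
>   public static void main(String[] args) {
>     // Every downstream node reads packets with the plain socketTimeout.
>     int packetReadTimeout = SOCKET_TIMEOUT;                     // 60s
>     // DN0 connects to DN1 with two remaining targets (DN1, DN2).
>     int dn0ToDn1 = SOCKET_TIMEOUT + READ_TIMEOUT_EXTENSION * 2; // 70s
>     // DN1 connects to DN2 with one remaining target (DN2).
>     int dn1ToDn2 = SOCKET_TIMEOUT + READ_TIMEOUT_EXTENSION * 1; // 65s
>     System.out.printf("packet read timeout: %ds%n", packetReadTimeout / 1000);
>     System.out.printf("DN0->DN1: %ds, DN1->DN2: %ds%n",
>         dn0ToDn1 / 1000, dn1ToDn2 / 1000);
>     // 60s < 65s < 70s: the furthest-downstream node gives up first, so the
>     // upstream PacketResponder sees EOFException and reports the healthy
>     // downstream node as ERROR.
>   }
> }
> {code}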