tasanuma commented on a change in pull request #3704:
URL: https://github.com/apache/hadoop/pull/3704#discussion_r772796622
##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##########
@@ -1254,10 +1273,52 @@ public void run() {
}
}
+  void markSlowNode(List<DatanodeInfo> slownodesFromAck) throws IOException {
+    Set<DatanodeInfo> discontinuousNodes = new HashSet<>(slowNodeMap.keySet());
+    for (DatanodeInfo slowNode : slownodesFromAck) {
+      if (!slowNodeMap.containsKey(slowNode)) {
+        slowNodeMap.put(slowNode, 1);
+      } else {
+        int oldCount = slowNodeMap.get(slowNode);
+        slowNodeMap.put(slowNode, ++oldCount);
+      }
+      discontinuousNodes.remove(slowNode);
+    }
+    for (DatanodeInfo discontinuousNode : discontinuousNodes) {
+      slowNodeMap.remove(discontinuousNode);
+    }
+
+    if (!slowNodeMap.isEmpty()) {
+      for (Map.Entry<DatanodeInfo, Integer> entry : slowNodeMap.entrySet()) {
+        if (entry.getValue() >= markSlowNodeAsBadNodeThreshold) {
+          DatanodeInfo slowNode = entry.getKey();
+          int index = getDatanodeIndex(slowNode);
+          if (index >= 0) {
+            errorState.setBadNodeIndex(
+                getDatanodeIndex(entry.getKey()));
Review comment:
We can reuse the `index` variable.
```suggestion
errorState.setBadNodeIndex(index);
```
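For context, this is roughly how that branch would read with the suggestion applied; a sketch only, with all names taken from the diff above:
```java
DatanodeInfo slowNode = entry.getKey();
int index = getDatanodeIndex(slowNode);
if (index >= 0) {
  // Reuse the index computed above instead of calling
  // getDatanodeIndex(entry.getKey()) a second time.
  errorState.setBadNodeIndex(index);
  // ... rest of the slow-node handling from the patch ...
}
```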
##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
##########
@@ -230,14 +260,27 @@ public static ECN getECNFromHeader(int header) {
    return StatusFormat.getECN(header);
  }
+  public static SLOW getSLOWFromHeader(int header) {
+    return StatusFormat.getSLOW(header);
+  }
+
  public static int setStatusForHeader(int old, Status status) {
    return StatusFormat.setStatus(old, status);
  }
+  public static int setSLOWForHeader(int old, SLOW slow) {
Review comment:
Only the unit test uses this method. Would you please add `@VisibleForTesting`?
```suggestion
@VisibleForTesting
public static int setSLOWForHeader(int old, SLOW slow) {
```
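For completeness, the annotated method would look roughly as below. The exact import depends on which copy of the annotation this module uses; Hadoop's own `org.apache.hadoop.classification.VisibleForTesting` is assumed here:
```java
  // Only exercised by unit tests, hence the annotation.
  @VisibleForTesting
  public static int setSLOWForHeader(int old, SLOW slow) {
    return StatusFormat.setSLOW(old, slow);
  }
```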
##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
##########
@@ -230,14 +260,27 @@ public static ECN getECNFromHeader(int header) {
    return StatusFormat.getECN(header);
  }
+  public static SLOW getSLOWFromHeader(int header) {
+    return StatusFormat.getSLOW(header);
+  }
+
  public static int setStatusForHeader(int old, Status status) {
    return StatusFormat.setStatus(old, status);
  }
+  public static int setSLOWForHeader(int old, SLOW slow) {
+    return StatusFormat.setSLOW(old, slow);
+  }
+
  public static int combineHeader(ECN ecn, Status status) {
+    return combineHeader(ecn, status, SLOW.DISABLED);
+  }
+
+  public static int combineHeader(ECN ecn, Status status, SLOW slow) {
Review comment:
I want `PipelineAck#getHeaderFlag()` to use this method.
```java
  public int getHeaderFlag(int i) {
    if (proto.getFlagCount() > 0) {
      return proto.getFlag(i);
    } else {
      return combineHeader(ECN.DISABLED, proto.getReply(i), SLOW.DISABLED);
    }
  }
```
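A brief hedged illustration of the effect: with the explicit three-argument call, a header produced by that no-flags fallback decodes back to the DISABLED state for both optional signals. Only methods visible in these hunks are assumed:
```java
// Sketch: decoding a header built by the fallback path in getHeaderFlag.
int header = PipelineAck.combineHeader(
    PipelineAck.ECN.DISABLED, Status.SUCCESS, PipelineAck.SLOW.DISABLED);
assert PipelineAck.getECNFromHeader(header) == PipelineAck.ECN.DISABLED;
assert PipelineAck.getSLOWFromHeader(header) == PipelineAck.SLOW.DISABLED;
```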
##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java
##########
@@ -1620,8 +1623,10 @@ private void sendAckUpstreamUnprotected(PipelineAck ack, long seqno,
      // downstream nodes, reply should contain one reply.
      replies = new int[] { myHeader };
    } else if (mirrorError) { // ack read error
-      int h = PipelineAck.combineHeader(datanode.getECN(), Status.SUCCESS);
-      int h1 = PipelineAck.combineHeader(datanode.getECN(), Status.ERROR);
+      int h = PipelineAck.combineHeader(datanode.getECN(), Status.SUCCESS,
+          datanode.getSLOW());
+      int h1 = PipelineAck.combineHeader(datanode.getECN(), Status.ERROR,
+          datanode.getSLOW());
Review comment:
Why doesn't it use `datanode.getSLOWByBlockPoolId(block.getBlockPoolId())`?
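If that per-block-pool accessor is the intended one, the change being asked about would presumably read as follows. Sketch only: `getSLOWByBlockPoolId` and `block` come from the reviewer's comment and the surrounding BlockReceiver context, and are not verified here:
```java
int h = PipelineAck.combineHeader(datanode.getECN(), Status.SUCCESS,
    datanode.getSLOWByBlockPoolId(block.getBlockPoolId()));
int h1 = PipelineAck.combineHeader(datanode.getECN(), Status.ERROR,
    datanode.getSLOWByBlockPoolId(block.getBlockPoolId()));
```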
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]