jojochuang commented on a change in pull request #3247:
URL: https://github.com/apache/hadoop/pull/3247#discussion_r679795972
##########
File path: hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##########
@@ -1386,19 +1386,11 @@ private void addDatanode2ExistingPipeline() throws IOException {
     * Case 2: Failure in Streaming
     * - Append/Create:
     *    + transfer RBW
-    *
-    * Case 3: Failure in Close
-    * - Append/Create:
-    *    + no transfer, let NameNode replicates the block.
     */
    if (!isAppend && lastAckedSeqno < 0
        && stage == BlockConstructionStage.PIPELINE_SETUP_CREATE) {
      //no data have been written
      return;
-    } else if (stage == BlockConstructionStage.PIPELINE_CLOSE
Review comment:
The stage is PIPELINE_CLOSE when the packet is the last one in the block and all transferred data has been acknowledged.
With this change, the re-replication that used to be triggered periodically by the NameNode block manager is moved to the client side.
It looks fine to me, but the HDFS re-replication logic is complex, so we should validate it with a test (for example, along the lines of the sketch below).
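
A minimal sketch of such a test, assuming a MiniDFSCluster-based setup. The cluster size, file size, class name, and the way the failure is injected (stopping a DataNode before close()) are my assumptions; a deterministic test would probably need DataNodeFaultInjector to fail the pipeline exactly in the PIPELINE_CLOSE stage rather than relying on the stopped node being in the write pipeline.

import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestPipelineCloseRecoverySketch {

  @Test
  public void testReplicationRestoredAfterFailureNearClose() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // 4 DataNodes so a replacement node is available after one is stopped.
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(4).build();
    try {
      cluster.waitActive();
      DistributedFileSystem fs = cluster.getFileSystem();
      Path file = new Path("/testPipelineCloseRecovery");

      byte[] data = new byte[64 * 1024];
      new Random().nextBytes(data);

      // Write with replication 3 and make sure the data is acked by the pipeline.
      FSDataOutputStream out = fs.create(file, (short) 3);
      out.write(data);
      out.hflush();

      // Simulate a failure near the end of the write by stopping one DataNode
      // before close(). (Assumption: the stopped node is in the write pipeline;
      // DataNodeFaultInjector would make this deterministic.)
      cluster.stopDataNode(0);
      out.close();

      // With the removed PIPELINE_CLOSE branch the client is expected to
      // transfer the replica itself; either way the block must end up with
      // the requested replication factor.
      DFSTestUtil.waitReplication(fs, file, (short) 3);
    } finally {
      cluster.shutdown();
    }
  }
}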