[ 
https://issues.apache.org/jira/browse/HDFS-15856?focusedWorklogId=558929&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-558929
 ]

ASF GitHub Bot logged work on HDFS-15856:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/Feb/21 13:25
            Start Date: 27/Feb/21 13:25
    Worklog Time Spent: 10m 
      Work Description: ayushtkn commented on a change in pull request #2721:
URL: https://github.com/apache/hadoop/pull/2721#discussion_r584121631



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##########
@@ -1263,14 +1265,18 @@ private boolean processDatanodeOrExternalError() throws IOException {
       packetSendTime.clear();
     }
 
-    // If we had to recover the pipeline five times in a row for the
+    // If we had to recover the pipeline exceed times which
+    // defined in maxPipelineRecoveryRetries in a row for the

Review comment:
       nit:
   Looks like a grammatical error; can we change it to:
   ```
   If we had to recover the pipeline more than the value
   defined by maxPipelineRecoveryRetries in a row for the
   ```
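
For context, below is a minimal sketch of how such a configurable cap could be wired in on the client side. It is not the actual patch: only the key name `dfs.client.pipeline.recovery.max-retries` and its default of 5 come from the hdfs-default.xml hunk later in this review; the class, field, and method names are hypothetical and not the real DataStreamer code.

```java
// Illustrative sketch only: a configurable retry cap guarding the
// same-packet pipeline recovery loop. Only the key name and its default of 5
// come from the hdfs-default.xml hunk in this PR; everything else is
// hypothetical, not the actual DataStreamer implementation.
import org.apache.hadoop.conf.Configuration;

class PipelineRecoveryGuard {
  private final int maxPipelineRecoveryRetries;
  private int recoveriesForCurrentPacket = 0;

  PipelineRecoveryGuard(Configuration conf) {
    // Fall back to the previously hardcoded limit of 5 when the key is unset.
    this.maxPipelineRecoveryRetries =
        conf.getInt("dfs.client.pipeline.recovery.max-retries", 5);
  }

  /** Record one recovery attempt; true means "give up and close the stream". */
  boolean recordRecoveryAttempt() {
    recoveriesForCurrentPacket++;
    return recoveriesForCurrentPacket > maxPipelineRecoveryRetries;
  }

  /** Reset the counter once the packet is successfully acknowledged. */
  void onPacketAcked() {
    recoveriesForCurrentPacket = 0;
  }
}
```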
   

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
##########
@@ -4352,6 +4352,17 @@
   </description>
 </property>
 
+<property>
+  <name>dfs.client.pipeline.recovery.max-retries</name>
+  <value>5</value>
+  <description>
+    If we had to recover the pipeline exceed times which
+    this value defined in a row for the same packet,
+    this client likely has corrupt data or corrupting
+    during transmission.

Review comment:
       Same as above.
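
As a usage illustration (not part of the patch), a client could override the new property programmatically instead of through hdfs-site.xml; again, only the key name and the default of 5 are taken from the snippet above, the rest is a hypothetical sketch.

```java
// Hypothetical usage sketch: overriding the new property on a client
// Configuration (equivalent to setting it in hdfs-site.xml on the client).
import org.apache.hadoop.conf.Configuration;

public class PipelineRetryConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Allow more recovery attempts per packet on an unreliable network.
    conf.setInt("dfs.client.pipeline.recovery.max-retries", 10);
    System.out.println("Effective cap: "
        + conf.getInt("dfs.client.pipeline.recovery.max-retries", 5));
  }
}
```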





Issue Time Tracking
-------------------

    Worklog Id:     (was: 558929)
    Time Spent: 1h  (was: 50m)

> Make the number of pipeline recovery retries for the same packet before 
> closing the stream configurable
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-15856
>                 URL: https://issues.apache.org/jira/browse/HDFS-15856
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Qi Zhu
>            Assignee: Qi Zhu
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, recovering the pipeline five times in a row for the same packet 
> closes the stream, but I think this limit should be configurable to suit 
> different cluster needs.
>  


