[ 
https://issues.apache.org/jira/browse/HDFS-8838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14648802#comment-14648802
 ] 

Li Bo commented on HDFS-8838:
-----------------------------

Hi Nicholas,
I think you can commit your patch first, and I will update mine after that.
Some points:
1.      {{DFSStripedOutputStream#getNumBlockWriteRetry}} returns 0, which 
allows connecting to a datanode only once. I think we should allow the 
connection to be retried several times. One way is to store the located 
block returned by {{locateFollowingBlock()}}, so that the following retries 
reuse the stored one instead of calling {{locateFollowingBlock()}} again.
2.      In {{TestDFSStripedOutputStreamWithFailure}}, the test lengths are 
stored in {{LENGTHS}}. But when reading the code, I have to calculate each 
length myself to see what kind of case the test covers. How about adding 
some comments, or showing the file length directly in the parameter, such 
as {{testDatanodeFailure(4 * cellSize + 123)}}?
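To illustrate point 1, here is a minimal sketch (not the actual HDFS client code) of the retry shape I have in mind: the block is located once, and every subsequent retry reuses the cached result instead of calling {{locateFollowingBlock()}} again. The method and parameter names here are hypothetical.

```java
import java.util.function.Supplier;

public class BlockRetrySketch {
    /**
     * Sketch: allocate a block once via 'locate', then retry the connect
     * step up to maxRetries extra times, reusing the cached located block
     * on every retry instead of asking the namenode again.
     */
    static <T> T allocateWithRetry(Supplier<T> locate,
                                   Runnable connect,
                                   int maxRetries) {
        T located = null;
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            if (located == null) {
                located = locate.get(); // called once; retries reuse it
            }
            try {
                connect.run();
                return located;         // connected successfully
            } catch (RuntimeException e) {
                last = e;               // keep the cached block and retry
            }
        }
        throw last;                     // all attempts failed
    }
}
```

With this shape, {{getNumBlockWriteRetry}} returning a value greater than 0 would retry only the datanode connection, not the block allocation.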
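And for point 2, a small sketch of what I mean by showing the length as an expression: each entry documents its own boundary case, instead of a bare number in the {{LENGTHS}} table. The cell size value and names below are illustrative, not the real test constants.

```java
public class TestLengthSketch {
    // Illustrative cell size; the real value comes from the EC policy.
    static final int CELL_SIZE = 64 * 1024;

    // Writing each length as an expression makes the tested boundary
    // obvious at a glance.
    static int[] lengths() {
        return new int[] {
            CELL_SIZE - 1,       // just below one cell
            CELL_SIZE,           // exactly one cell
            4 * CELL_SIZE + 123, // several full cells plus a partial cell
        };
    }
}
```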


> Tolerate datanode failures in DFSStripedOutputStream when the data length is 
> small
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-8838
>                 URL: https://issues.apache.org/jira/browse/HDFS-8838
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>         Attachments: h8838_20150729.patch
>
>
> Currently, DFSStripedOutputStream cannot tolerate datanode failures when the 
> data length is small.  We fix the bugs here and add more tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
