[ https://issues.apache.org/jira/browse/BIGTOP-1560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245930#comment-14245930 ]

jay vyas edited comment on BIGTOP-1560 at 12/14/14 2:44 PM:
------------------------------------------------------------

Thanks for looking into it. According to 
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml, 
the minimum replication is actually 1, even though any sane HDFS deployment will 
have 3 or more. Either way, I can do a final review and commit this today; I'm 
pretty sure it's ready to go!



> Add a test case for performing block corruption recovery
> --------------------------------------------------------
>
>                 Key: BIGTOP-1560
>                 URL: https://issues.apache.org/jira/browse/BIGTOP-1560
>             Project: Bigtop
>          Issue Type: Test
>          Components: tests
>    Affects Versions: 0.8.0
>            Reporter: Dasha Boudnik
>            Assignee: Dasha Boudnik
>             Fix For: 0.9.0
>
>         Attachments: BIGTOP-1560.patch, BIGTOP-1560.patch
>
>
> Found this issue in internal testing. Roughly:
> create file in HDFS
> find block for the file
> corrupt a block
> trigger recovery by trying to read the file
> check recovery was successful
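
The steps above can be sketched against a live HDFS cluster roughly as follows. This is a hedged outline, not the actual Bigtop test: the file path, test data, and `DATA_DIR` (the `dfs.datanode.data.dir` location) are assumptions that must be adjusted for a real deployment, and the script needs a running cluster with write access.

```shell
#!/usr/bin/env bash
# Sketch of the block-corruption recovery steps; assumes a live HDFS
# cluster and shell access to a DataNode's local block storage.
set -euo pipefail

FILE=/tmp/corruption-test.txt
DATA_DIR=/var/lib/hadoop-hdfs/cache   # assumed dfs.datanode.data.dir

# 1. Create a file in HDFS.
echo "some test data" | hdfs dfs -put - "$FILE"

# 2. Find a block for the file via fsck.
BLOCK=$(hdfs fsck "$FILE" -files -blocks | grep -o 'blk_[0-9]*' | head -1)

# 3. Corrupt one on-disk replica of that block.
REPLICA=$(find "$DATA_DIR" -name "$BLOCK" | head -1)
dd if=/dev/urandom of="$REPLICA" bs=1 count=32 conv=notrunc

# 4. Trigger recovery by reading the file (the client detects the bad
#    checksum and falls back to a healthy replica).
hdfs dfs -cat "$FILE" > /dev/null

# 5. Check recovery: fsck should report the file healthy again.
hdfs fsck "$FILE" | grep -q 'HEALTHY'
```

Note that step 5 may need a retry loop in practice, since re-replication of the corrupted block is asynchronous.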



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)