[
https://issues.apache.org/jira/browse/MAPREDUCE-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13545352#comment-13545352
]
Hadoop QA commented on MAPREDUCE-4917:
--------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12563471/MAPREDUCE-4917.2.patch
against trunk revision .
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output:
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/3206//console
This message is automatically generated.
> Multiple BlockFixers should be supported in order to improve scalability and
> reduce the load on a single BlockFixer
> ---------------------------------------------------------------------------------------------------------------------
>
> Key: MAPREDUCE-4917
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-4917
> Project: Hadoop Map/Reduce
> Issue Type: Improvement
> Components: contrib/raid
> Affects Versions: 0.22.0
> Reporter: Jun Jin
> Assignee: Jun Jin
> Labels: patch
> Fix For: 0.22.0
>
> Attachments: MAPREDUCE-4917.1.patch, MAPREDUCE-4917.2.patch
>
> Original Estimate: 672h
> Remaining Estimate: 672h
>
> The current implementation can only run a single BlockFixer, since the fsck (in
> RaidDFSUtil.getCorruptFiles) only checks the whole DFS file system. If multiple
> BlockFixers are launched, they will all do the same work and try to fix the
> same files.
> The change/fix will mainly be in BlockFixer.java and
> RaidDFSUtil.getCorruptFiles(), to enable fsck to check only the paths
> defined in a separate Raid.xml for each RaidNode/BlockFixer.
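The partitioning idea in the description above can be sketched in plain Java. This is a minimal illustration, not the patch's actual code: the class and method names are hypothetical, and it assumes each BlockFixer instance is configured with its own monitored path prefixes (per-instance Raid.xml) and keeps only the corrupt files under those prefixes, so two fixers never race to fix the same file.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of per-fixer corrupt-file partitioning.
// Each BlockFixer instance owns a set of path prefixes; from the global
// corrupt-file list it keeps only the files under its own prefixes.
public class BlockFixerPartition {

    // Return the subset of corruptFiles that falls under one of this
    // fixer's monitored prefixes.
    public static List<String> filterOwnedFiles(List<String> corruptFiles,
                                                List<String> monitoredPrefixes) {
        List<String> owned = new ArrayList<>();
        for (String file : corruptFiles) {
            for (String prefix : monitoredPrefixes) {
                if (file.startsWith(prefix)) {
                    owned.add(file);
                    break; // already owned; no need to test other prefixes
                }
            }
        }
        return owned;
    }
}
```

With disjoint prefix sets per instance, each corrupt file is handled by at most one fixer, which is the scalability point the issue makes.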
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira