[ https://issues.apache.org/jira/browse/HADOOP-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12663428#action_12663428 ]
Raghu Angadi commented on HADOOP-5019:
--------------------------------------

Using a "blk_" prefix to indicate a different mode of operation for fsck would work. But I think other options might be cleaner than this hack:
 - first preference: this is not really fsck. It could just be another command ("-blockInfo" or some such).
 - second: add an explicit option to fsck.

> add querying block's info in the fsck facility
> ----------------------------------------------
>
>                 Key: HADOOP-5019
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5019
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: zhangwei
>            Priority: Minor
>         Attachments: HADOOP-5019.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> fsck works pretty well now, but when a developer comes across a log message such as "Block blk_28622148 is not valid", we want to know which file the block belongs to and which datanodes hold it. This can be answered by running "bin/hadoop fsck -files -blocks -locations / | grep <blockid>", but as mentioned earlier in HADOOP-4945, that is not an efficient approach on a big production cluster.
> So maybe we could do something to make fsck more convenient.
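To make the difference concrete: the fsck workaround above scans the entire namespace and greps for the block id, while a dedicated command ("-blockInfo" or an explicit fsck option) could answer the question with a direct lookup on the namenode. Below is a minimal, hypothetical Java sketch of such a lookup; the class and member names (BlockIndex, BlockInfo, lookup) are illustrative assumptions only and do not correspond to any existing Hadoop API.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: an index from block id to the owning file and the
    // datanode locations of its replicas -- the information a "-blockInfo"
    // style command would need. None of these types exist in Hadoop; they
    // only illustrate the idea of a direct lookup instead of a full scan.
    public class BlockIndex {

        public static class BlockInfo {
            final String fileName;          // file the block belongs to
            final List<String> dataNodes;   // datanodes holding replicas

            BlockInfo(String fileName, List<String> dataNodes) {
                this.fileName = fileName;
                this.dataNodes = dataNodes;
            }

            public String toString() {
                return "file=" + fileName + " locations=" + dataNodes;
            }
        }

        private final Map<Long, BlockInfo> blocks = new HashMap<Long, BlockInfo>();

        public void add(long blockId, String fileName, List<String> dataNodes) {
            blocks.put(blockId, new BlockInfo(fileName, dataNodes));
        }

        // Direct lookup by block id: O(1), in contrast to scanning every file
        // with "fsck -files -blocks -locations / | grep <blockid>".
        public BlockInfo lookup(long blockId) {
            return blocks.get(blockId);
        }

        public static void main(String[] args) {
            BlockIndex index = new BlockIndex();
            // The file path and datanode addresses below are made up for the demo.
            index.add(28622148L, "/user/foo/data.txt",
                      Arrays.asList("dn1:50010", "dn2:50010"));
            System.out.println(index.lookup(28622148L));
        }
    }

Whether this lives behind a new command or an fsck option is exactly the choice discussed above; the sketch only shows why a keyed lookup is cheaper than grepping the full fsck output on a large cluster.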