[ 
https://issues.apache.org/jira/browse/HDFS-7648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325196#comment-14325196
 ] 

Colin Patrick McCabe commented on HDFS-7648:
--------------------------------------------

bq. I see Colin's point that, before we understand the problem, our system 
should not be too smart about fixing it. However, once we know the cause of the 
problem (say, the admin moved some blocks manually), we need some way to fix 
those misplaced blocks. How about adding a conf to enable/disable the auto-fix 
feature, with the default being disabled?

I wouldn't object to a configuration like that, but I also question whether it 
is needed.  Has this ever actually happened?  And if it did happen, isn't the 
answer more likely to be "stop editing the VERSION file manually, silly" or 
"your ext4 filesystem is bad and needs to be completely reformatted" rather 
than "DN should cleverly fix"?

bq. Colin Patrick McCabe kindly review the patch. Thanks!

We should be logging in the {{compileReport}} function, not in a new function.  
We can check whether the location is correct in the same place where we check 
the file name, etc.
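As a rough sketch of what that check in {{compileReport}} could look like: the HDFS-6482 layout derives two subdirectory levels from bits of the block ID, so a replica is misplaced whenever its actual parent directory differs from the one computed from its ID. Note this is a hedged illustration, not Hadoop's actual code; the class name, method names, and the exact bit masks (here 5 bits per level) are assumptions for the example.

```java
// Illustrative sketch only -- not the real DataNode implementation.
public class BlockDirCheck {

    // Assumed mapping in the spirit of the HDFS-6482 layout: two directory
    // levels taken from bits of the block ID (mask widths are an assumption).
    static String idToBlockDirPath(long blockId) {
        int d1 = (int) ((blockId >> 16) & 0x1F);
        int d2 = (int) ((blockId >> 8) & 0x1F);
        return "subdir" + d1 + "/subdir" + d2;
    }

    // While compiling the report, flag a replica whose on-disk directory
    // does not match the directory derived from its block ID.
    static boolean isMisplaced(long blockId, String actualDir) {
        return !actualDir.endsWith(idToBlockDirPath(blockId));
    }

    public static void main(String[] args) {
        long blockId = 0x40000001L; // example block ID
        System.out.println(idToBlockDirPath(blockId));
        System.out.println(isMisplaced(blockId, "/data/current/subdir0/subdir0"));
    }
}
```

The point is that the verification is a cheap string comparison done while the scanner is already walking the file, so it fits naturally next to the existing file-name checks rather than in a separate pass.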

> Verify the datanode directory layout
> ------------------------------------
>
>                 Key: HDFS-7648
>                 URL: https://issues.apache.org/jira/browse/HDFS-7648
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Rakesh R
>         Attachments: HDFS-7648-3.patch, HDFS-7648-4.patch, HDFS-7648.patch, 
> HDFS-7648.patch
>
>
> HDFS-6482 changed datanode layout to use block ID to determine the directory 
> to store the block.  We should have some mechanism to verify it.  Either 
> DirectoryScanner or block report generation could do the check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
