[
https://issues.apache.org/jira/browse/HDFS-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14351167#comment-14351167
]
Chris Nauroth commented on HDFS-7722:
-------------------------------------
[~eddyxu], sorry I haven't had a chance to dig into this patch yet. If I
understand correctly, you're saying that removing a path from configuration and
running reconfig will not clear volume failure information, but keeping the
path in configuration, fixing the disk at that mount point and running reconfig
will clear it. Do I have it right? I would like us to have some means to take
corrective action and clear the volume failure information "online". As long
as that's still possible in some way, then it's probably sticking to the spirit
of the code I wrote earlier.
Would you mind holding off on the commit until early next week so I can take a
closer look? Thanks!
> DataNode#checkDiskError should also remove Storage when error is found.
> -----------------------------------------------------------------------
>
> Key: HDFS-7722
> URL: https://issues.apache.org/jira/browse/HDFS-7722
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.6.0
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Attachments: HDFS-7722.000.patch, HDFS-7722.001.patch,
> HDFS-7722.002.patch
>
>
> When {{DataNode#checkDiskError}} finds disk errors, it removes all block
> metadata from {{FsDatasetImpl}}. However, it does not remove the
> corresponding {{DataStorage}} and {{BlockPoolSliceStorage}}.
> As a result, we cannot directly run {{reconfig}} to hot swap the
> failed disks without changing the configuration file.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)