[
https://issues.apache.org/jira/browse/HDFS-7722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14369491#comment-14369491
]
Hudson commented on HDFS-7722:
------------------------------
FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #137 (See
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/137/])
Fix CHANGES.txt for HDFS-7722. (arp: rev
02a67aad65e790cddba6f49658664f459e1de788)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> DataNode#checkDiskError should also remove Storage when error is found.
> -----------------------------------------------------------------------
>
> Key: HDFS-7722
> URL: https://issues.apache.org/jira/browse/HDFS-7722
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.6.0
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Fix For: 2.7.0
>
> Attachments: HDFS-7722.000.patch, HDFS-7722.001.patch,
> HDFS-7722.002.patch, HDFS-7722.003.patch, HDFS-7722.004.patch
>
>
> When {{DataNode#checkDiskError}} finds disk errors, it removes all block
> metadata from {{FsDatasetImpl}}. However, it does not remove the
> corresponding {{DataStorage}} and {{BlockPoolSliceStorage}}.
> As a result, we cannot directly run {{reconfig}} to hot-swap the failed
> disks without changing the configuration file.
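Illustrative aside (not part of the original report): the following is a minimal, self-contained Java sketch of the behavior described above. The classes {{Dataset}} and {{Storage}} are hypothetical stand-ins for {{FsDatasetImpl}} and {{DataStorage}}/{{BlockPoolSliceStorage}}; they are not HDFS code and not the HDFS-7722 patch. The point is only that dropping block metadata alone leaves the failed directory registered at the storage layer, which is what blocked a {{reconfig}}-based hot swap.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of the bug described above -- not actual
// HDFS code. Removing only block metadata leaves the failed directory
// registered in the storage layer, so a reconfig-based hot swap of the
// same mount point cannot proceed until the storage entry is also removed.
public class DiskErrorSketch {

    /** Stand-in for FsDatasetImpl: storage directory -> number of blocks. */
    static class Dataset {
        final Map<String, Integer> blocksPerDir = new HashMap<>();
        void removeBlocks(String dir) { blocksPerDir.remove(dir); }
    }

    /** Stand-in for DataStorage / BlockPoolSliceStorage: registered dirs. */
    static class Storage {
        final Map<String, Boolean> registered = new HashMap<>();
        void remove(String dir) { registered.remove(dir); }
        boolean isRegistered(String dir) { return registered.containsKey(dir); }
    }

    public static void main(String[] args) {
        Dataset dataset = new Dataset();
        Storage storage = new Storage();
        dataset.blocksPerDir.put("/data/1", 42);
        storage.registered.put("/data/1", true);

        // Behavior before the fix: only block metadata is dropped.
        dataset.removeBlocks("/data/1");
        // Still registered -> the directory cannot be hot-swapped in again.
        System.out.println("registered after block removal: "
                + storage.isRegistered("/data/1"));   // true

        // Behavior after the fix: the storage entry is removed as well,
        // so the same mount point can be re-added via reconfig.
        storage.remove("/data/1");
        System.out.println("registered after storage removal: "
                + storage.isRegistered("/data/1"));   // false
    }
}
{code}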