[
https://issues.apache.org/jira/browse/HDFS-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Manoj Govindassamy updated HDFS-11340:
--------------------------------------
Attachment: HDFS-11340-branch-2.01.patch
[~eddyxu], Attached the branch-2 patch corresponding to the trunk v05 patch. Please
take a look.
> DataNode reconfigure for disks doesn't remove the failed volumes
> ----------------------------------------------------------------
>
> Key: HDFS-11340
> URL: https://issues.apache.org/jira/browse/HDFS-11340
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha1
> Reporter: Manoj Govindassamy
> Assignee: Manoj Govindassamy
> Attachments: HDFS-11340.01.patch, HDFS-11340.02.patch,
> HDFS-11340.03.patch, HDFS-11340.04.patch, HDFS-11340.05.patch,
> HDFS-11340-branch-2.01.patch
>
>
> Say a DataNode (uuid:xyz) has disks D1 and D2. When D1 turns bad, a JMX query
> on FSDatasetState-xyz rightly shows the "NumFailedVolumes" attribute as 1, and
> the "FailedStorageLocations" attribute lists the failed storage location "D1".
> Disks can be added to or removed from this DataNode by running the
> {{reconfigure}} command. Suppose the failed disk D1 is removed from the conf,
> so that the new conf has only the one good disk D2. After running the
> reconfigure command for this DataNode with the new disk conf, the expectation
> is that the DataNode would no longer report any "NumFailedVolumes" or
> "FailedStorageLocations". But even after the failed disk is removed from the
> conf and the reconfigure completes successfully, the DataNode continues to
> show "NumFailedVolumes" as 1 and "FailedStorageLocations" as "D1", and these
> values never get reset.
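For reference, the reproduction can be sketched with the standard {{dfsadmin -reconfig}} and JMX commands below. The host/port placeholders are illustrative, and the JMX HTTP port is an assumption (9864 is the 3.x default; 2.x uses 50075):

```shell
# 1) Inspect failed-volume state via the DataNode's JMX servlet
#    (HTTP port 9864 on Hadoop 3.x; adjust for your deployment).
curl 'http://<dn-host>:9864/jmx?qry=Hadoop:service=DataNode,name=FSDatasetState*'

# 2) Remove the failed dir (D1) from dfs.datanode.data.dir in the
#    DataNode's hdfs-site.xml, then trigger a live reconfigure.
hdfs dfsadmin -reconfig datanode <dn-host>:<ipc-port> start
hdfs dfsadmin -reconfig datanode <dn-host>:<ipc-port> status

# 3) Re-run the JMX query from step 1: NumFailedVolumes is expected to
#    drop to 0 and FailedStorageLocations to be empty, but without this
#    fix they still report the removed volume.
```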
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]