[
https://issues.apache.org/jira/browse/HDFS-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15858416#comment-15858416
]
Manoj Govindassamy edited comment on HDFS-11340 at 2/8/17 7:19 PM:
-------------------------------------------------------------------
Thanks for the review, [~eddyxu] and [~linyiqun]. Attached v03 patch to make use
of {{DataNodeTestUtils#waitForDiskError}}. Can you please take a look?
was (Author: manojg):
Thanks for the review [~eddyxu] and [~linyiqun]. Attached v03 patch to make use
of {{DataNodeTestUtils#waitForDiskError}}.
> DataNode reconfigure for disks doesn't remove the failed volumes
> ----------------------------------------------------------------
>
> Key: HDFS-11340
> URL: https://issues.apache.org/jira/browse/HDFS-11340
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha1
> Reporter: Manoj Govindassamy
> Assignee: Manoj Govindassamy
> Attachments: HDFS-11340.01.patch, HDFS-11340.02.patch,
> HDFS-11340.03.patch
>
>
> Say a DataNode (uuid:xyz) has disks D1 and D2. When D1 turns bad, a JMX query
> on FSDatasetState-xyz correctly reports the "NumFailedVolumes" attribute as 1,
> and the "FailedStorageLocations" attribute lists the failed storage location
> "D1".
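> For reference, these attributes can be inspected through the DataNode's
> {{/jmx}} servlet. A minimal sketch, assuming a placeholder hostname and the
> default 3.x DataNode HTTP port 9864 (adjust both for your cluster):
> {noformat}
> # Query the FSDatasetState bean; the trailing * matches the per-DataNode uuid suffix
> curl 'http://dn-host.example.com:9864/jmx?qry=Hadoop:service=DataNode,name=FSDatasetState*'
> {noformat}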
> Disks can be added to or removed from this DataNode by running the
> {{reconfigure}} command. Suppose the failed disk D1 is removed from the conf,
> so the new conf has only the good disk D2. After running the reconfigure
> command on this DataNode with the new disk conf, the expectation is that the
> DataNode would no longer report any "NumFailedVolumes" or
> "FailedStorageLocations". But even after the failed disk is removed from the
> conf and the reconfigure succeeds, the DataNode continues to report
> "NumFailedVolumes" as 1 and "FailedStorageLocations" as "D1", and these values
> are never reset.