[
https://issues.apache.org/jira/browse/HDFS-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liang Xie updated HDFS-5917:
----------------------------
Attachment: HDFS-5917.txt
A minor patch without any new test case; this one is very straightforward, so
it should be OK, right? :)
> Have an ability to refresh deadNodes list periodically
> ------------------------------------------------------
>
> Key: HDFS-5917
> URL: https://issues.apache.org/jira/browse/HDFS-5917
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.0.0, 2.2.0
> Reporter: Liang Xie
> Assignee: Liang Xie
> Attachments: HDFS-5917.txt
>
>
> In the current HBase + HDFS trunk implementation, once a node is inserted
> into the deadNodes list, it can never be chosen again until deadNodes.clear()
> is invoked. When I fixed HDFS-5637, I had a rough idea: since quite a few
> conditions can cause a node to be inserted into deadNodes, we should have the
> ability to refresh this important cached list automatically. It would benefit
> the HBase scenario at least. E.g., before HDFS-5637 was fixed, if a local
> node was inserted into deadNodes, reads went remote even though the local
> node was not actually dead :) Even more unfortunately, if that block belongs
> to a huge HFile that is not picked up by any minor compaction for a long
> period, the performance penalty persists until a major compaction, a region
> reopen, or a deadNodes.clear() invocation...
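The refresh idea described above could be sketched as a deadNodes cache whose entries expire after a configurable interval, rather than living until deadNodes.clear(). This is only an illustrative sketch of the approach, not the actual DFSInputStream code; the class name, method names, and the expiry parameter are all hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a deadNodes cache with time-based expiry, so a node
// that was marked dead (possibly spuriously, as in the HDFS-5637 local-read
// case) becomes eligible for selection again after expiryMillis, instead of
// staying excluded until an explicit clear().
public class ExpiringDeadNodes {
    // datanode id -> timestamp (ms) at which it was marked dead
    private final Map<String, Long> deadNodes = new ConcurrentHashMap<>();
    private final long expiryMillis;

    public ExpiringDeadNodes(long expiryMillis) {
        this.expiryMillis = expiryMillis;
    }

    public void addDeadNode(String datanode, long nowMillis) {
        deadNodes.put(datanode, nowMillis);
    }

    // A node counts as dead only while its entry is fresh; a stale entry is
    // dropped on lookup, which is the "automatic refresh" this issue asks for.
    public boolean isDead(String datanode, long nowMillis) {
        Long insertedAt = deadNodes.get(datanode);
        if (insertedAt == null) {
            return false;
        }
        if (nowMillis - insertedAt > expiryMillis) {
            deadNodes.remove(datanode);
            return false;
        }
        return true;
    }
}
```

Timestamps are passed in explicitly here only to keep the sketch testable; real code would use a clock, and a periodic background sweep (or a ready-made expiring cache) would work equally well.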
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)