[ 
https://issues.apache.org/jira/browse/HDFS-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HDFS-5917:
----------------------------

    Description: In the current HBase + HDFS trunk implementation, once a node 
is added to the deadNodes map it cannot be chosen again until 
deadNodes.clear() is invoked. While fixing HDFS-5637 I had a rough idea: since 
quite a few conditions can cause a node to be added to the deadNodes map, it 
would be better to have the ability to refresh this cached map automatically. 
This would benefit the HBase scenario at least. For example, before HDFS-5637 
was fixed, if a local node was added to deadNodes, reads went remote even 
though the local node was actually alive :) Even worse, if the block belongs 
to a huge HFile that is not picked up by any minor compaction for a long time, 
the performance penalty persists until a major compaction, a region reopen, or 
a deadNodes.clear() call...  (was: In current HBase + HDFS trunk 
impl, if one node is inserted into deadNodes list, before deadNodes.clear() be 
invoked, this node could not be choose always. When i fixed HDFS-5637, i had a 
raw thought, since there're not a few conditions could trigger a node be 
inserted into deadNodes,  we should have an ability to refresh this important 
cache list info automaticly. It's benefit for HBase scenario at least, e.g. 
before HDFS-5637 fixed, if a local node be inserted into deadNodes, then it 
will read remotely even the local node is not dead:) if more unfortunately, 
this block is in a huge HFile which doesn't be picked into any minor compaction 
in short period, the performance penality will be continued until a large 
compaction or region reopend or deadNodes.clear() be invoked...)

> Have an ability to refresh deadNodes list periodically
> ------------------------------------------------------
>
>                 Key: HDFS-5917
>                 URL: https://issues.apache.org/jira/browse/HDFS-5917
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Liang Xie
>            Assignee: Liang Xie
>         Attachments: HDFS-5917.txt
>
>
> In the current HBase + HDFS trunk implementation, once a node is added to 
> the deadNodes map it cannot be chosen again until deadNodes.clear() is 
> invoked. While fixing HDFS-5637 I had a rough idea: since quite a few 
> conditions can cause a node to be added to the deadNodes map, it would be 
> better to have the ability to refresh this cached map automatically. This 
> would benefit the HBase scenario at least. For example, before HDFS-5637 was 
> fixed, if a local node was added to deadNodes, reads went remote even though 
> the local node was actually alive :) Even worse, if the block belongs to a 
> huge HFile that is not picked up by any minor compaction for a long time, 
> the performance penalty persists until a major compaction, a region reopen, 
> or a deadNodes.clear() call...
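
The periodic-refresh idea above could be sketched as a time-expiring dead-node cache: each entry carries the timestamp at which the node was declared dead, and entries older than a configurable interval stop counting, so a wrongly blacklisted (e.g. actually live local) node gets retried. This is only an illustrative sketch; the class and method names (ExpiringDeadNodes, markDead, isDead, expiryMs) are hypothetical and are not the actual HDFS API or the HDFS-5917 patch.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of a dead-node cache whose entries expire after a
 * configurable interval, instead of living until an explicit clear().
 */
public class ExpiringDeadNodes {
    // datanode id -> wall-clock time (ms) at which it was marked dead
    private final Map<String, Long> deadNodes = new ConcurrentHashMap<>();
    private final long expiryMs;

    public ExpiringDeadNodes(long expiryMs) {
        this.expiryMs = expiryMs;
    }

    public void markDead(String datanode) {
        deadNodes.put(datanode, System.currentTimeMillis());
    }

    /** A node counts as dead only while its entry is still fresh. */
    public boolean isDead(String datanode) {
        Long markedAt = deadNodes.get(datanode);
        if (markedAt == null) {
            return false;
        }
        if (System.currentTimeMillis() - markedAt > expiryMs) {
            // Entry expired: drop it so the node can be chosen again.
            deadNodes.remove(datanode, markedAt);
            return false;
        }
        return true;
    }
}
```

With such a cache, a locally live node that was blacklisted by a transient error would become eligible again after expiryMs, instead of only after deadNodes.clear(), a major compaction, or a region reopen.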



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)