[ https://issues.apache.org/jira/browse/HDFS-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14277426#comment-14277426 ]

Daryn Sharp commented on HDFS-7433:
-----------------------------------

[~mingma], I'm not sure I understand your scenario.  When a node starts 
decommissioning, the monitor will notice that its scan number is _less_ than the 
current scan number, scan the node, and set its scan number to the current one.  
When the next cycle starts, it will skip that node because its scan number is the 
same as the current one.  It's not until the monitor hits the end of the list that 
it bumps the current scan number, which triggers a rescan of the nodes because 
they once again have a lower scan number.

How would the continuous re-scan case you describe occur?
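
In case it helps, here is a minimal sketch of the scan-number scheme as I read it.  
The class and field names below are made up for illustration and are not the 
actual DatanodeManager/DecommissionManager code:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the scan-number bookkeeping described above.
class ScanMonitor {
  static class Node {
    final String name;
    int scanNumber = -1;            // last cycle in which this node was scanned
    Node(String name) { this.name = name; }
  }

  private final List<Node> nodes = new ArrayList<>();
  private int currentScanNumber = 0;

  // A newly added (e.g. newly decommissioning) node starts with a scan number
  // lower than the current one, so the next cycle will pick it up.
  void add(Node n) { nodes.add(n); }

  // One pass of the monitor over the node list.
  void runCycle() {
    for (Node n : nodes) {
      if (n.scanNumber >= currentScanNumber) {
        continue;                   // already scanned this cycle: skip
      }
      scan(n);                      // the expensive per-node work
      n.scanNumber = currentScanNumber;
    }
    // Only after reaching the end of the list is the current scan number
    // bumped, which makes every node eligible again on the next cycle.
    currentScanNumber++;
  }

  private void scan(Node n) {
    System.out.println("scanning " + n.name);
  }
}
{code}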

> DatanodeManager#datanodeMap should be a HashMap, not a TreeMap, to optimize 
> lookup performance
> ----------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7433
>                 URL: https://issues.apache.org/jira/browse/HDFS-7433
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.0.0-alpha, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>         Attachments: HDFS-7433.patch, HDFS-7433.patch
>
>
> The datanode map is currently a {{TreeMap}}.  For many thousands of 
> datanodes, tree lookups are ~10X more expensive than a {{HashMap}}.  
> Insertions and removals are up to 100X more expensive.
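
Purely for illustration, a rough micro-benchmark along the lines below shows the 
kind of lookup gap the description is talking about; the key format and node 
count are made up and not taken from the real datanodeMap:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Rough, self-contained comparison of per-key lookup cost in TreeMap vs HashMap.
public class MapLookupBench {
  public static void main(String[] args) {
    final int numNodes = 50_000;
    Map<String, Integer> tree = new TreeMap<>();
    Map<String, Integer> hash = new HashMap<>();
    String[] keys = new String[numNodes];
    for (int i = 0; i < numNodes; i++) {
      keys[i] = "datanode-uuid-" + i;   // stand-in for the real storage ID keys
      tree.put(keys[i], i);
      hash.put(keys[i], i);
    }
    System.out.println("TreeMap lookups: " + time(tree, keys) + " ns");
    System.out.println("HashMap lookups: " + time(hash, keys) + " ns");
  }

  // Time one full pass of lookups over all keys.
  static long time(Map<String, Integer> map, String[] keys) {
    long start = System.nanoTime();
    long sink = 0;
    for (String k : keys) {
      sink += map.get(k);              // O(log n) for TreeMap, ~O(1) for HashMap
    }
    long elapsed = System.nanoTime() - start;
    if (sink == 42) System.out.println(); // keep the loop from being optimized away
    return elapsed;
  }
}
{code}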



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
