[ https://issues.apache.org/jira/browse/HDFS-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16870566#comment-16870566 ]

Lars Francke commented on HDFS-11242:
-------------------------------------

I think I'm getting a clearer picture now, but (and there's always a chance I'm 
mistaken) I believe what you're suggesting does not work.

Focusing on your second point: This is what's not currently possible!

That's because the script is only ever called _once_ for a given host and the 
result is then cached; have a look at CachedDNSToSwitchMapping.
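
To make the caching explicit, here is a minimal sketch of that pattern as I read 
it (not the actual Hadoop source; the ScriptMapping interface below just stands 
in for the script-based raw mapping that CachedDNSToSwitchMapping wraps):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: the wrapped "raw" mapping (i.e. the topology script) is consulted
// solely for hosts that are not yet in the cache, so a bad first answer sticks
// around for the lifetime of the NameNode.
class CachedMappingSketch {
  private final Map<String, String> cache = new HashMap<>();
  private final ScriptMapping rawMapping; // stand-in for the script-based mapping

  CachedMappingSketch(ScriptMapping rawMapping) {
    this.rawMapping = rawMapping;
  }

  List<String> resolve(List<String> hosts) {
    // Only hosts we have never seen before are passed to the script.
    List<String> uncached = new ArrayList<>();
    for (String h : hosts) {
      if (!cache.containsKey(h)) {
        uncached.add(h);
      }
    }
    if (!uncached.isEmpty()) {
      List<String> racks = rawMapping.resolve(uncached);
      for (int i = 0; i < uncached.size(); i++) {
        // A wrong rack returned here (e.g. from a buggy Consul lookup) is
        // cached and never re-resolved until the NameNode restarts.
        cache.put(uncached.get(i), racks.get(i));
      }
    }
    List<String> result = new ArrayList<>();
    for (String h : hosts) {
      result.add(cache.get(h));
    }
    return result;
  }

  interface ScriptMapping {
    List<String> resolve(List<String> hosts);
  }
}
{code}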

So if you have a flexible rack awareness script (which we do) but it has a bug 
(e.g. it reads racks from Consul and there's an error in the mapping), you have 
no way of fixing that error without restarting the NameNode, because the bad 
result stays cached.

Regarding your first point: Thanks for the pointer. After a restart, a fixed 
rack awareness script would be called anyway, so this wouldn't make any 
difference.

In short: I believe the patch & solution that [~reidchan] proposed address a 
very valid use case.

> Add refresh cluster network topology operation to dfs admin
> -----------------------------------------------------------
>
>                 Key: HDFS-11242
>                 URL: https://issues.apache.org/jira/browse/HDFS-11242
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Reid Chan
>            Priority: Minor
>         Attachments: HDFS-11242.002.patch, HDFS-11242.patch
>
>
> The network topology and dns to switch mapping are initialized at the start 
> of the namenode.
> If an admin wants to change the topology because new datanodes were added, 
> they have to stop and restart the namenode(s); otherwise the newly added 
> datanodes are squeezed under /default-rack.
> It is a low-frequency operation, but it should be handled properly, so dfs 
> admin should take on this responsibility.


