[ https://issues.apache.org/jira/browse/HDFS-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16352663#comment-16352663 ]

Íñigo Goiri commented on HDFS-13098:
------------------------------------

bq. From previous comments, the DataNodes will first register with the Router, 
then talk to the NNs afterwards.
That'd be the optimized version; for now, I would support both and let the 
admin choose whether to heartbeat through the Router or directly.
Proxying the heartbeats is clearly the simpler option, as it requires no 
changes to the DN code (otherwise, the DN has to register with one entity and 
then switch to another).
We can start with that one and then do the other.
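
A rough sketch of the proxied path, just to make the idea concrete (all class 
and method names below are simplified placeholders, not the real 
DatanodeProtocol or Router classes):

{code:java}
// Simplified illustration: the Router accepts a DN heartbeat and forwards it
// to the Namenode of the subcluster the DN is currently assigned to.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RouterHeartbeatProxy {

  /** Hypothetical forwarding client, one per subcluster Namenode. */
  public interface NamenodeClient {
    HeartbeatResponse sendHeartbeat(String datanodeId, long capacity, long remaining);
  }

  /** Simplified response; the real protocol returns commands for the DN. */
  public static class HeartbeatResponse {
    public final String sourceNamenode;
    public HeartbeatResponse(String sourceNamenode) {
      this.sourceNamenode = sourceNamenode;
    }
  }

  // <DataNode, subcluster> mapping; in the real design this would be backed
  // by the State Store rather than kept only in memory.
  private final Map<String, String> dnToSubcluster = new ConcurrentHashMap<>();
  private final Map<String, NamenodeClient> subclusterClients;

  public RouterHeartbeatProxy(Map<String, NamenodeClient> subclusterClients) {
    this.subclusterClients = subclusterClients;
  }

  /** Registration picks a subcluster (policy not shown) and records the mapping. */
  public void registerDatanode(String datanodeId, String subcluster) {
    dnToSubcluster.put(datanodeId, subcluster);
  }

  /** Heartbeats are proxied to the Namenode of the assigned subcluster. */
  public HeartbeatResponse sendHeartbeat(String datanodeId, long capacity, long remaining) {
    String subcluster = dnToSubcluster.get(datanodeId);
    if (subcluster == null) {
      throw new IllegalStateException("Datanode " + datanodeId + " is not registered");
    }
    return subclusterClients.get(subcluster).sendHeartbeat(datanodeId, capacity, remaining);
  }
}
{code}

With this approach the DN keeps talking to a single endpoint (the Router), and 
any switch between subclusters happens entirely on the Router side.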

bq. I think here the Router will maintain the <DataNode, NameNode(SubCluster)> 
mapping, which will be used when a DN restarts. Do we plan to persist this 
mapping info in the State Store?
Yes, that was my idea. We can apply some optimizations here to reduce its size, 
but persisting the mapping is the obvious approach.
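
If we persist it, the record could be as small as a <DN UUID, nameservice> pair 
plus a timestamp; a minimal sketch, assuming a hypothetical store interface 
(this is not the actual RBF State Store API):

{code:java}
// Illustrative only: a tiny record the Router could persist so the
// <DataNode, NameNode(SubCluster)> assignment survives DN and Router restarts.
import java.util.Optional;

public class DatanodeSubclusterRecord {
  private final String datanodeUuid;   // which DN
  private final String nameserviceId;  // which subcluster it was assigned to
  private final long dateModified;     // lets us expire stale assignments

  public DatanodeSubclusterRecord(String datanodeUuid, String nameserviceId,
      long dateModified) {
    this.datanodeUuid = datanodeUuid;
    this.nameserviceId = nameserviceId;
    this.dateModified = dateModified;
  }

  public String getDatanodeUuid() { return datanodeUuid; }
  public String getNameserviceId() { return nameserviceId; }
  public long getDateModified() { return dateModified; }

  /** Hypothetical store front-end used on DN registration and restart. */
  public interface DatanodeMappingStore {
    void put(DatanodeSubclusterRecord record);
    Optional<DatanodeSubclusterRecord> get(String datanodeUuid);
  }
}
{code}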

> RBF: Datanodes interacting with Routers
> ---------------------------------------
>
>                 Key: HDFS-13098
>                 URL: https://issues.apache.org/jira/browse/HDFS-13098
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Íñigo Goiri
>            Priority: Major
>
> Datanodes talk to particular Namenodes. We could use the Router 
> infrastructure so that Datanodes register with and heartbeat to the Routers, 
> which would forward these messages to particular Namenodes. This would make 
> the assignment of Datanodes to subclusters potentially more dynamic.
> The implementation would potentially include:
> * Router to implement part of DatanodeProtocol
> * Forwarding DN messages through the Routers
> * Policies to assign datanodes to subclusters
> * Datanodes to make blockpool configuration dynamic
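
For the "policies to assign datanodes to subclusters" item in the description 
above, one possible (purely hypothetical) shape for the policy interface:

{code:java}
// Sketch only: the interface name and the example policy are assumptions.
import java.util.List;

public interface SubclusterAssignmentPolicy {
  /** Choose the subcluster (nameservice) a registering Datanode should join. */
  String chooseSubcluster(String datanodeUuid, List<String> subclusters);
}

/** Trivial example policy: spread Datanodes by hashing their UUID. */
class HashSubclusterAssignmentPolicy implements SubclusterAssignmentPolicy {
  @Override
  public String chooseSubcluster(String datanodeUuid, List<String> subclusters) {
    int index = Math.floorMod(datanodeUuid.hashCode(), subclusters.size());
    return subclusters.get(index);
  }
}
{code}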


