[ 
https://issues.apache.org/jira/browse/HDFS-13219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383835#comment-16383835
 ] 

Íñigo Goiri commented on HDFS-13219:
------------------------------------

We also have this information in the Datanodes tab. When we do 
{{getDatanodeReport()}} in {{RouterRpcServer}}, there is some merging which 
considers the node id, so those datanodes shouldn't be counted twice. This 
particular case is fine.

To provide the information mentioned in this issue, we would have to go over 
this properly composed DN report and account for capacity, used space, etc. 
Currently, {{FederationMetrics#getTotalCapacity()}} only uses 
{{getNameserviceAggregatedLong()}} to sum the stats already in the local cache; 
this is pretty fast. We could add an option to enable detailed metrics and make 
these expensive calls. Obviously, we should cache the DN reports, etc.
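A minimal sketch of the deduplication idea, assuming a simplified {{DnReport}} record in place of Hadoop's real {{DatanodeInfo}} (the field and method names here are illustrative, not the actual API): merge the per-nameservice reports keyed by datanode UUID, then count each node's capacity once.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch: DnReport stands in for Hadoop's DatanodeInfo; the
// field names (datanodeUuid, capacity, used) are assumptions for
// illustration, not the real HDFS API.
public class DedupCapacity {

    record DnReport(String datanodeUuid, long capacity, long used) {}

    // Merge the reports from all nameservices, keeping one entry per
    // datanode UUID so datanodes shared across subclusters are not
    // double counted in the total.
    static long totalCapacity(List<List<DnReport>> perNameservice) {
        Map<String, DnReport> byUuid = new HashMap<>();
        for (List<DnReport> reports : perNameservice) {
            for (DnReport r : reports) {
                byUuid.putIfAbsent(r.datanodeUuid(), r);
            }
        }
        return byUuid.values().stream()
                .mapToLong(DnReport::capacity)
                .sum();
    }

    public static void main(String[] args) {
        // Two nameservices sharing the same two datanodes.
        List<DnReport> ns1 = List.of(new DnReport("dn-1", 100, 10),
                                     new DnReport("dn-2", 100, 20));
        List<DnReport> ns2 = List.of(new DnReport("dn-1", 100, 10),
                                     new DnReport("dn-2", 100, 20));
        System.out.println(totalCapacity(List.of(ns1, ns2))); // 200, not 400
    }
}
```

The same keyed merge would apply to used and remaining space; the expensive part is fetching the full DN reports, which is why caching them (as mentioned above) would be needed.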

Internally, we don't have such a setup (each subcluster is independent), so we 
don't have this requirement and I cannot put cycles into it. I'd be happy to 
review, though.

> RBF:Cluster information on Router is not correct when the Federation shares 
> datanodes.
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-13219
>                 URL: https://issues.apache.org/jira/browse/HDFS-13219
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: 2.9.0
>            Reporter: Tao Jie
>            Priority: Major
>
> Now the summary information on the Router web UI aggregates the summary of 
> each nameservice. However, in a typical federation cluster deployment, 
> datanodes are shared among nameservices. Suppose we have 2 namespaces and 
> 100 datanodes in one cluster. All 100 datanodes are available to each 
> namespace, but we see 200 datanodes on the router website. The same applies 
> to other information such as {{Total capacity}} and {{Remaining capacity}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
