[ 
https://issues.apache.org/jira/browse/HADOOP-1138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12524814
 ] 

dhruba borthakur commented on HADOOP-1138:
------------------------------------------

The code looks good. A few comments:

1. FSNamesystem.getDatanodeListForReport excludes nodes that are listed in 
dfs.hosts.exclude. Maybe a better option would be to show them with a status of 
"Excluded". Currently, it shows "Decommissioned" or "In Service".

2. The comment in FSNamesystem.getDatanodeListForReport talks about 
"dfs.report.datanode.timeout.day" but it should be 
"dfs.report.datanode.timeout.hours".

3. A unit test that exercises this functionality would be really nice. 

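To illustrate comment 1, here is a minimal sketch of tagging excluded hosts with an "Excluded" status instead of filtering them out of the report. The class and method names below mirror the discussion but are simplified stand-ins, not the actual Hadoop code:

```java
import java.util.*;

// Simplified stand-in for a datanode descriptor; the real class in
// Hadoop carries much more state.
class DatanodeInfo {
    final String host;
    String adminState; // "In Service", "Decommissioned", or "Excluded"
    DatanodeInfo(String host, String adminState) {
        this.host = host;
        this.adminState = adminState;
    }
}

class ReportBuilder {
    // Instead of dropping hosts found in dfs.hosts.exclude, keep them
    // in the report and surface their status as "Excluded".
    static List<DatanodeInfo> getDatanodeListForReport(
            List<DatanodeInfo> allNodes, Set<String> excludedHosts) {
        List<DatanodeInfo> report = new ArrayList<>();
        for (DatanodeInfo node : allNodes) {
            if (excludedHosts.contains(node.host)) {
                node.adminState = "Excluded";
            }
            report.add(node);
        }
        return report;
    }
}
```

This keeps the admin UI honest: an operator sees that a node was deliberately excluded rather than wondering why it vanished from the list.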
> Datanodes that are dead for a long long time should not show up in the UI
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-1138
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1138
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: Raghu Angadi
>             Fix For: 0.15.0
>
>         Attachments: HADOOP-1138.patch
>
>
> Proposal 1:
> If an include file is used, then show all nodes (dead/alive) that are listed 
> in the include file. If there isn't an include file, then display only nodes 
> that have pinged this instance of the namenode.
> Proposal 2:
> A config variable specifies the time duration. The namenode, on a restart, 
> purges all datanodes that have not pinged for that time duration. The default 
> value of this config variable can be 1 week. 
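Proposal 2's restart-time purge can be sketched as below. This is an illustrative stand-in, not the actual namenode code; the method name and the last-contact map are assumptions:

```java
import java.util.*;

// Hypothetical sketch of Proposal 2: on namenode restart, drop datanodes
// whose last heartbeat is older than a configured window (default 1 week).
class StalePurge {
    static final long DEFAULT_WINDOW_MS = 7L * 24 * 60 * 60 * 1000; // 1 week

    // lastContactMillis: datanode host -> last heartbeat time (ms since epoch).
    // Returns only the entries that pinged within the window.
    static Map<String, Long> purgeStaleDatanodes(
            Map<String, Long> lastContactMillis, long nowMs, long windowMs) {
        Map<String, Long> kept = new HashMap<>();
        for (Map.Entry<String, Long> e : lastContactMillis.entrySet()) {
            if (nowMs - e.getValue() <= windowMs) {
                kept.put(e.getKey(), e.getValue());
            }
        }
        return kept;
    }
}
```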

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.