[ https://issues.apache.org/jira/browse/HADOOP-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12556844#action_12556844 ]

dhruba borthakur commented on HADOOP-2447:
------------------------------------------

Hi Konstantin,

You had a cluster that initially did not have dfs.max.objects set, so objects 
could be created without limit. Then you set dfs.max.objects to 15 and 
restarted the cluster. In this case the reported percentage can exceed 100%, 
because the system does not auto-purge existing filesystem objects to get back 
under the new limit. Please let me know if this scenario makes sense.
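
For reference, the scenario assumes a setting like the following in 
hadoop-site.xml (the value 15 is the number from the example above, and the 
description text here is only illustrative):

    <property>
      <name>dfs.max.objects</name>
      <value>15</value>
      <description>Maximum number of filesystem objects the namenode
      will allow. Zero (the default) means no limit.</description>
    </property>

If the cluster already held, say, 20 objects when this limit took effect, the 
web-ui would report 20/15, i.e. roughly 133%.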

I forgot to mention why the new statistic in the web-ui is not in table 
format: adding it to the table increases the display size, typically forcing 
the administrator to scroll down the window. Writing the statistic as a single 
line at the top of the screen was deemed better at the time.

> HDFS should be capable of limiting the total number of inodes in the system
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-2447
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2447
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Sameer Paranjpye
>            Assignee: dhruba borthakur
>             Fix For: 0.16.0
>
>         Attachments: fileLimit.patch, fileLimit2.patch, fileLimit3.patch
>
>
> The HDFS Namenode should be capable of limiting the total number of Inodes 
> (files + directories). This can be done through a config variable, settable in 
> hadoop-site.xml. The default should be no limit.
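
For illustration only, here is a minimal sketch of how such a check might be 
enforced in the namenode. The class and member names below are hypothetical, 
not taken from the attached fileLimit patches:

    import java.io.IOException;

    // Hypothetical guard applied before allocating a new inode.
    public class ObjectLimitChecker {
        private final long maxObjects; // from dfs.max.objects; 0 = no limit
        private long totalObjects;     // current files + directories

        public ObjectLimitChecker(long maxObjects) {
            this.maxObjects = maxObjects;
        }

        // Called before creating a file or directory. Existing objects
        // are never purged, so a count above the limit only blocks new
        // creates; it does not remove anything.
        public synchronized void checkObjectLimit() throws IOException {
            if (maxObjects != 0 && totalObjects >= maxObjects) {
                throw new IOException("Exceeded configured limit of "
                        + maxObjects + " filesystem objects.");
            }
        }

        public synchronized void objectCreated() { totalObjects++; }
        public synchronized void objectRemoved() { totalObjects--; }
    }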

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
