[ https://issues.apache.org/jira/browse/HADOOP-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12553420 ]
Doug Cutting commented on HADOOP-2447:
--------------------------------------

I agree that this is a good first step. Perhaps it will even be sufficient.

We might also add some statistics to the web UI, to facilitate configuration. For example, the "Cluster Summary" section might add something like:

Objects: 2136 files, 342 directories, 5779 blocks = 8257 total / 12000 (69%)
Heap: 34 MB

The latter would come from the value of Runtime.getRuntime().totalMemory(). Would this be useful?

> HDFS should be capable of limiting the total number of inodes in the system
> ----------------------------------------------------------------------------
>
>          Key: HADOOP-2447
>          URL: https://issues.apache.org/jira/browse/HADOOP-2447
>      Project: Hadoop
>   Issue Type: New Feature
>     Reporter: Sameer Paranjpye
>     Assignee: dhruba borthakur
>      Fix For: 0.16.0
>
>  Attachments: fileLimit.patch
>
>
> The HDFS Namenode should be capable of limiting the total number of inodes (files + directories). This can be done through a config variable, settable in hadoop-site.xml. The default should be no limit.
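As a concrete illustration of the summary Doug proposes, here is a minimal Java sketch that renders the two lines. The object counts are stand-in parameters (in the NameNode they would come from the namespace), and treating a cap of 0 as "no limit" mirrors the issue's proposed default; the only real API used is Runtime.getRuntime().totalMemory() for the heap figure.

// Illustrative only: not code from the patch. The counts are stand-ins for
// values the NameNode would supply; maxObjects <= 0 means "no limit".
public class ClusterSummarySketch {

    static String format(long files, long dirs, long blocks, long maxObjects) {
        long total = files + dirs + blocks;
        StringBuilder sb = new StringBuilder();
        sb.append("Objects: ").append(files).append(" files, ")
          .append(dirs).append(" directories, ")
          .append(blocks).append(" blocks = ").append(total).append(" total");
        if (maxObjects > 0) {
            long pct = Math.round(100.0 * total / maxObjects);
            sb.append(" / ").append(maxObjects)
              .append(" (").append(pct).append("%)");
        }
        // Heap line, as suggested: the JVM's current total heap in megabytes.
        long heapMb = Runtime.getRuntime().totalMemory() / (1024 * 1024);
        sb.append("\nHeap: ").append(heapMb).append(" MB");
        return sb.toString();
    }

    public static void main(String[] args) {
        // Numbers from the example above; prints "... 8257 total / 12000 (69%)".
        System.out.println(format(2136, 342, 5779, 12000));
    }
}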
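And a hedged sketch of the limiting mechanism the issue describes, not the attached fileLimit.patch: the property name "dfs.max.objects" and the checkMaxObjects() hook are assumptions for illustration, with 0 (the proposed default) meaning unlimited.

import java.io.IOException;

// Illustrative only: not the attached fileLimit.patch. The property name
// "dfs.max.objects" and this checkMaxObjects() hook are assumptions.
public class ObjectLimitSketch {
    private final long maxObjects;   // 0 = unlimited, the proposed default
    private long currentObjects;     // running count of files + directories

    public ObjectLimitSketch(long configuredMax) {
        // In the NameNode this would be read from hadoop-site.xml, e.g.
        // something like conf.getLong("dfs.max.objects", 0).
        this.maxObjects = configuredMax;
    }

    /** Called before allocating new inodes; rejects the create when full. */
    synchronized void checkMaxObjects(long toAdd) throws IOException {
        if (maxObjects > 0 && currentObjects + toAdd > maxObjects) {
            throw new IOException("Exceeded configured limit of " + maxObjects
                + " objects in the filesystem");
        }
        currentObjects += toAdd;
    }
}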