[ https://issues.apache.org/jira/browse/HADOOP-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12553424 ]

Doug Cutting commented on HADOOP-2447:
--------------------------------------

> Heap: 34 M/b

Oops.  This might better be:

Heap: 34 / 90 MB (37%)

Where these would be the results of Runtime.totalMemory() and 
Runtime.maxMemory().  Then one could compare the two percentages (of objects 
and of maximum memory) to decide whether it was safe to increase the maximum 
number of objects, with lots of provisos.

> HDFS should be capable of limiting the total number of inodes in the system
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-2447
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2447
>             Project: Hadoop
>          Issue Type: New Feature
>            Reporter: Sameer Paranjpye
>            Assignee: dhruba borthakur
>             Fix For: 0.16.0
>
>         Attachments: fileLimit.patch
>
>
> The HDFS Namenode should be capable of limiting the total number of Inodes 
> (files + directories). This can be done through a config variable, settable in 
> hadoop-site.xml. The default should be no limit.
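
A sketch of what such an entry in hadoop-site.xml could look like (the 
property name dfs.max.objects is an assumption, not confirmed from the 
attached patch; 0 would mean no limit):

    <property>
      <name>dfs.max.objects</name>
      <value>0</value>
      <description>Maximum number of files and directories permitted in
      HDFS. A value of 0 indicates no limit. (Hypothetical property name,
      for illustration only.)</description>
    </property>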

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
