[ https://issues.apache.org/jira/browse/HADOOP-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12553434 ]
Allen Wittenauer commented on HADOOP-2447:
------------------------------------------

The stats look good. Is the scaling different for inodes vs. blocks, though? For example, which extreme is more dangerous: one file holding all of the blocks, or lots and lots of files with one block each? Or are they the same? I'd be concerned about having a single tunable given edge-case scenarios like these.

> This is true for most other existing namenode parameters and I do not want to
> introduce a special command just to change this dfs.max.objects parameter.

I guess I need to get my act together and file that JIRA I've been meaning to file: Stop Making Me Restart. :) [Keep in mind that a restart leads to downtime, which leads to unhappiness... the bigger the HDFS, the longer it takes to restart, the longer the downtime, ... which results in more unhappy villagers with pitchforks outside the castle doors.]

Rather than a special command, I *really* want to be able to HUP the process or something and have it re-read any parameters that have "reread support". I can certainly understand that certain settings will require a full restart, but some shouldn't, and this is one of them, IMO. (A rough sketch of that idea follows the issue summary below.)

> HDFS should be capable of limiting the total number of inodes in the system
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-2447
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2447
>             Project: Hadoop
>          Issue Type: New Feature
>            Reporter: Sameer Paranjpye
>            Assignee: dhruba borthakur
>             Fix For: 0.16.0
>
>         Attachments: fileLimit.patch
>
>
> The HDFS Namenode should be capable of limiting the total number of Inodes
> (files + directories). This can be done through a config variable, settable in
> hadoop-site.xml. The default should be no limit.
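The issue description above proposes a single config variable, settable in hadoop-site.xml, with "no limit" as the default. As a purely illustrative sketch, assuming dfs.max.objects is that variable and that 0 means "no limit" (this is not the attached fileLimit.patch, and the class and method names are made up), the namenode-side check might look roughly like this:

    // Illustrative only: read the proposed dfs.max.objects setting and refuse
    // to allocate another namespace object (file, directory, or block) once
    // the configured ceiling is reached. Everything except the property key
    // and org.apache.hadoop.conf.Configuration is an assumed name.
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;

    class ObjectLimitChecker {
      private final long maxObjects;   // 0 = unlimited (assumed default)

      ObjectLimitChecker(Configuration conf) {
        this.maxObjects = conf.getLong("dfs.max.objects", 0L);
      }

      /** Throws if creating one more object would exceed the configured limit. */
      void checkObjectLimit(long currentObjects) throws IOException {
        if (maxObjects > 0 && currentObjects + 1 > maxObjects) {
          throw new IOException("Exceeded configured limit of " + maxObjects
              + " namespace objects (files, directories, blocks)");
        }
      }
    }

Whether inodes and blocks should count against one ceiling or two separate ones is exactly the single-tunable question raised in the comment above.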
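And for the "reread support" idea in the comment, here is a minimal, hypothetical sketch of re-reading such a parameter on SIGHUP instead of requiring a namenode restart. It leans on the JVM-internal sun.misc.Signal API purely for illustration; nothing like this exists in the namenode today, and the class, field, and reload() method are assumptions:

    // Hypothetical: trap SIGHUP and re-read "reread-safe" parameters such as
    // the proposed dfs.max.objects, so changing them does not force a restart.
    import org.apache.hadoop.conf.Configuration;
    import sun.misc.Signal;
    import sun.misc.SignalHandler;

    public class RereadableSettings {
      // volatile so a value refreshed by the signal handler is visible to readers
      private volatile long maxObjects;

      public RereadableSettings() {
        reload();
        // Re-read the configuration whenever the process receives SIGHUP.
        Signal.handle(new Signal("HUP"), new SignalHandler() {
          public void handle(Signal sig) {
            reload();
          }
        });
      }

      private void reload() {
        // A fresh Configuration re-reads hadoop-site.xml from the classpath.
        Configuration conf = new Configuration();
        maxObjects = conf.getLong("dfs.max.objects", 0L);  // 0 assumed to mean "no limit"
      }

      public long getMaxObjects() {
        return maxObjects;
      }
    }

Settings that require rebuilding in-memory state would still need a full restart, as noted above; only parameters explicitly flagged as reread-safe would be refreshed this way.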