[ https://issues.apache.org/jira/browse/HADOOP-2447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12556766#action_12556766 ]
Konstantin Shvachko commented on HADOOP-2447:
---------------------------------------------

{code}
volatile private long totalNodes = 1; // number of inodes, for rootdir
{code}
totalNodes should be totalINodes; otherwise it is not clear which nodes are being referred to, e.g. data-nodes or nodes related to network topology.

{code}
private long maxFsObjects = 0; // maximum allowed inodes.
{code}
The comment should say "objects" rather than "inodes". Also, this member logically belongs to FSNamesystem, because:
- FSDirectory has knowledge only about INodes, but not blocks.
- Traditionally we have tried to keep all configurable parameters inside FSNamesystem and set them using setConfigurationParameters(). I don't see why we should do any different here.

The next step would then be to move checkFsObjectLimit() from FSDirectory to FSNamesystem. After that, it also looks like you can call checkFsObjectLimit() in the FSNamesystem methods rather than inside FSDirectory.

The statistics are a really good idea. Should we display them the same way the other stat fields are displayed? Something like:
{code}
DFS Used% : 0 %
Live Nodes : 0
Dead Nodes : 0
Files and directories : 49
Blocks : 36
Total objects : 85 (100%) out of max allowed 50
Name-node Heap Size: 74.38 MB / 733.81 MB (10%)
{code}

The number of files and directories displayed is inconsistent with the number reported by fsck. Fsck apparently does not count the root directory as an entry. I'd say fsck is wrong, but the important thing is that they should be consistent.

The percentage of objects should not exceed 100%. Right now it is reported as:
{code}
20 files and directories, 17 blocks = 37 total / 15 (246%). Heap Size is 50.94 MB / 733 MB (6%)
{code}

I was not able to apply this patch to the current trunk after the permissions patch.
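As a rough illustration of the check being discussed, the sketch below shows one way an FSNamesystem-side checkFsObjectLimit() could enforce the limit before a new inode or block is created, so the reported percentage can never exceed 100%. The class name FsObjectLimit, the treatment of 0 as "no limit", and the exception message are assumptions for illustration, not the actual code from the patch.

```java
// Hypothetical sketch of the limit check the comment proposes moving into
// FSNamesystem. Names and semantics here are illustrative, not Hadoop's code.
public class FsObjectLimit {
    // Maximum allowed objects (inodes + blocks); 0 means "no limit" (assumed).
    private final long maxFsObjects;

    public FsObjectLimit(long maxFsObjects) {
        this.maxFsObjects = maxFsObjects;
    }

    /**
     * Called before creating a new inode or block. Throws if the current
     * object count has already reached the configured maximum.
     */
    public void checkFsObjectLimit(long totalINodes, long totalBlocks)
            throws java.io.IOException {
        long totalObjects = totalINodes + totalBlocks;
        if (maxFsObjects != 0 && totalObjects >= maxFsObjects) {
            throw new java.io.IOException(
                "Maximum number of filesystem objects reached: "
                + totalObjects + " out of max allowed " + maxFsObjects);
        }
    }
}
```

Because the check runs before each creation, the count can reach the limit but never pass it, which would rule out reports like "37 total / 15 (246%)".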
> HDFS should be capable of limiting the total number of inodes in the system
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-2447
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2447
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Sameer Paranjpye
>            Assignee: dhruba borthakur
>             Fix For: 0.16.0
>
>         Attachments: fileLimit.patch, fileLimit2.patch
>
> The HDFS Namenode should be capable of limiting the total number of Inodes (files + directories). This can be done through a config variable, settable in hadoop-site.xml. The default should be no limit.

--
This message is automatically generated by JIRA.
- You can reply to this email to add a comment to the issue online.
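The issue asks for a config variable with "no limit" as the default. A minimal sketch of that read, using java.util.Properties as a stand-in for Hadoop's Configuration class; the property name "dfs.max.objects" and the 0-means-unlimited default are assumptions for illustration, not values confirmed by this issue.

```java
import java.util.Properties;

// Hedged sketch: reading the configurable object limit described in the issue.
// Properties stands in for Hadoop's Configuration; the key name is assumed.
public class LimitConfig {
    public static long readMaxObjects(Properties conf) {
        // Default of 0 is taken to mean "no limit", matching the issue's
        // request that the default should be no limit (assumption).
        return Long.parseLong(conf.getProperty("dfs.max.objects", "0"));
    }
}
```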