[
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235821#comment-14235821
]
Chris Nauroth commented on HDFS-7473:
-------------------------------------
Actually, I need to retract my prior statement. There was a conscious decision
to stop supporting 0 during HDFS-6102. See this comment:
https://issues.apache.org/jira/browse/HDFS-6102?focusedCommentId=13934262&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13934262
> Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
> ---------------------------------------------------------------------------
>
> Key: HDFS-7473
> URL: https://issues.apache.org/jira/browse/HDFS-7473
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: documentation
> Affects Versions: 2.4.0, 2.5.2
> Reporter: Jason Keller
> Assignee: Akira AJISAKA
> Labels: newbie
> Attachments: HDFS-7473-001.patch
>
>
> When setting dfs.namenode.fs-limits.max-directory-items to 0 in
> hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set
> dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater
> than 6400000" is produced. However, the documentation shows that 0 is a
> valid setting for dfs.namenode.fs-limits.max-directory-items, turning the
> check off.
> Looking into the code in
> hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
> shows that the culprit is
> Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
>     "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
>     + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
> The check requires maxDirItems to be strictly greater than 0, so a
> configured value of 0 fails it and produces the error above, even though
> the message only mentions values less than 0.
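For reference, a minimal, self-contained sketch of that check (an assumption
for illustration only: Guava's Preconditions on the classpath, the 6400000
limit taken from the quoted message, and the configuration key inlined
instead of DFSConfigKeys) reproduces the rejection of 0:

    import com.google.common.base.Preconditions;

    public class MaxDirItemsCheckDemo {
        // Upper bound taken from the quoted error message (6400000).
        static final int MAX_DIR_ITEMS = 6400000;

        public static void main(String[] args) {
            // The value the reporter configured in hdfs-site.xml.
            int maxDirItems = 0;

            // Same condition as the quoted snippet: the check requires a
            // strictly positive value, so 0 is rejected even though the
            // message only mentions values "less than 0".
            Preconditions.checkArgument(
                maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
                "Cannot set dfs.namenode.fs-limits.max-directory-items"
                    + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
        }
    }

Running this throws the same java.lang.IllegalArgumentException the reporter
observed.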
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)