[
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235345#comment-14235345
]
Hadoop QA commented on HDFS-7473:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12685270/HDFS-7473-001.patch
against trunk revision 0653918.
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+0 tests included{color}. The patch appears to be a
documentation patch that doesn't require tests.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-hdfs-project/hadoop-hdfs:
org.apache.hadoop.hdfs.TestSetTimes
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/8925//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8925//console
This message is automatically generated.
> Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
> ---------------------------------------------------------------------------
>
> Key: HDFS-7473
> URL: https://issues.apache.org/jira/browse/HDFS-7473
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: documentation
> Affects Versions: 2.4.0, 2.5.2
> Reporter: Jason Keller
> Assignee: Akira AJISAKA
> Labels: newbie
> Attachments: HDFS-7473-001.patch
>
>
> When setting dfs.namenode.fs-limits.max-directory-items to 0 in
> hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set
> dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater
> than 6400000" is produced. However, the documentation states that 0 is a
> valid setting for dfs.namenode.fs-limits.max-directory-items, which disables
> the check.
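
For reference, this is a minimal sketch of the hdfs-site.xml entry that triggers the exception (property name taken from the report above; the surrounding configuration file is assumed to be otherwise valid):

```xml
<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>0</value>
</property>
```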
> Looking into the code in
> hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
> shows that the culprit is
> Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
>     "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
>     + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
> This requires maxDirItems to be strictly greater than 0. Since 0 is not
> greater than 0, the precondition fails and the exception is thrown, even
> though the error message only mentions values "less than 0".
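
The behavior described above can be reproduced with a small standalone sketch. This is not the actual FSDirectory source: the checkArgument helper here mirrors Guava's Preconditions.checkArgument, and the MAX_DIR_ITEMS value is taken from the 6400000 figure in the error message quoted above.

```java
// Standalone sketch of the precondition that rejects 0 (hypothetical
// class/field names; only the check's logic follows the quoted snippet).
public class MaxDirItemsCheck {
    // 6400000, per the error message quoted in the report above.
    static final int MAX_DIR_ITEMS = 6400000;
    static final String KEY = "dfs.namenode.fs-limits.max-directory-items";

    // Mirrors Guava's Preconditions.checkArgument(boolean, Object).
    static void checkArgument(boolean expression, String message) {
        if (!expression) {
            throw new IllegalArgumentException(message);
        }
    }

    // The check as quoted: strictly greater than 0, so 0 is rejected
    // even though the documentation says 0 disables the limit.
    static void validate(int maxDirItems) {
        checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
            "Cannot set " + KEY
            + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
    }
}
```

A fix along the lines of the issue title would either relax the condition to maxDirItems >= 0 or document that 0 is invalid; the attached patch takes the documentation route.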
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)