[
https://issues.apache.org/jira/browse/HDFS-10731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417938#comment-15417938
]
Xiao Chen commented on HDFS-10731:
----------------------------------
This changes the constructor of the exception, but since the class is annotated
{{Private}} and {{Evolving}}, changing its signature is acceptable. I agree with
the assessment of the test failures too.
+1, thanks Wei-Chiu.
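For illustration, here is a minimal self-contained sketch of the idea behind the fix: the offending path is passed to the exception at construction time, so the logged message names the directory instead of printing null. The class and method names below are hypothetical stand-ins, not the actual Hadoop {{FSLimitException.MaxDirectoryItemsExceededException}} / {{FSDirectory}} code.

```java
// Sketch only: hypothetical stand-ins for the Hadoop classes discussed above.
public class DirItemLimitDemo {

    // Stand-in for the exception whose constructor the patch changes:
    // it now receives the path, so getMessage() includes the directory name.
    static class MaxDirItemsExceeded extends RuntimeException {
        MaxDirItemsExceeded(String path, long limit, long items) {
            super("The directory item limit of " + path
                    + " is exceeded: limit=" + limit + " items=" + items);
        }
    }

    // Stand-in for FSDirectory#verifyMaxDirItems: the path is supplied
    // when the exception is created, not filled in (or forgotten) later.
    static void verifyMaxDirItems(String path, long items, long limit) {
        if (items >= limit) {
            throw new MaxDirItemsExceeded(path, limit, items);
        }
    }

    public static void main(String[] args) {
        try {
            verifyMaxDirItems("/user/foo/manyfiles", 1048576, 1048576);
        } catch (MaxDirItemsExceeded e) {
            // Logs the real directory name rather than "null".
            System.out.println(e.getMessage());
        }
    }
}
```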
> FSDirectory#verifyMaxDirItems does not log path name
> ----------------------------------------------------
>
> Key: HDFS-10731
> URL: https://issues.apache.org/jira/browse/HDFS-10731
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.7.2
> Reporter: Wei-Chiu Chuang
> Assignee: Wei-Chiu Chuang
> Priority: Minor
> Labels: supportability
> Attachments: HDFS-10731.001.patch
>
>
> {quote}
> 2016-08-05 14:42:04,687 ERROR
> org.apache.hadoop.hdfs.server.namenode.NameNode:
> FSDirectory.verifyMaxDirItems: The directory item limit of null is exceeded:
> limit=1048576 items=1048576
> {quote}
> The error message above logs the path name incorrectly (null). Without the
> path name it is hard to tell which directory is in trouble. The exception
> should set the path name before being logged.
> This bug was seen on a CDH 5.5.2 cluster, but CDH 5.5.2 is roughly up to date
> with Apache Hadoop 2.7.2.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)