[
https://issues.apache.org/jira/browse/HDFS-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18036440#comment-18036440
]
ASF GitHub Bot commented on HDFS-16698:
---------------------------------------
github-actions[bot] closed pull request #4644: HDFS-16698. Add a metric to
sense possible MaxDirectoryItemsExceededException in time.
URL: https://github.com/apache/hadoop/pull/4644
> Add a metric to sense possible MaxDirectoryItemsExceededException in time.
> --------------------------------------------------------------------------
>
> Key: HDFS-16698
> URL: https://issues.apache.org/jira/browse/HDFS-16698
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: ZanderXu
> Assignee: ZanderXu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h
>
> In our prod environment, we occasionally encounter job failures caused by
> MaxDirectoryItemsExceededException.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
> The directory item limit of /user/XXX/.sparkStaging is exceeded:
> limit=1048576 items=1048576
> {code}
> In order to avoid this, we add a metric to detect a possible
> MaxDirectoryItemsExceededException early, so that operators can act in time
> and avoid job failures.
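
(Editor's note: the limit in the stack trace above comes from the NameNode
setting dfs.namenode.fs-limits.max-directory-items, which defaults to
1048576. A minimal hdfs-site.xml sketch of the setting, shown here only to
illustrate where the limit is configured; the value below is the default,
not a recommendation.)

{code:xml}
<!-- hdfs-site.xml: maximum number of items a single directory may contain.
     Default is 1048576; the NameNode rejects values above 6400000. -->
<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>1048576</value>
</property>
{code}

Raising the limit only postpones the failure, which is why the issue
proposes a metric so the condition can be detected and handled before the
limit is reached.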
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]