[ https://issues.apache.org/jira/browse/HDFS-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18036032#comment-18036032 ]
ASF GitHub Bot commented on HDFS-16698:
---------------------------------------
github-actions[bot] commented on PR #4644:
URL: https://github.com/apache/hadoop/pull/4644#issuecomment-3499945676
We're closing this stale PR because it has been open for 100 days with no
activity. This isn't a judgement on the merit of the PR in any way. It's just a
way of keeping the PR queue manageable.
If you feel like this was a mistake, or you would like to continue working
on it, please feel free to re-open it and ask for a committer to remove the
stale tag and review again.
Thanks all for your contribution.
> Add a metric to sense possible MaxDirectoryItemsExceededException in time.
> --------------------------------------------------------------------------
>
> Key: HDFS-16698
> URL: https://issues.apache.org/jira/browse/HDFS-16698
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: ZanderXu
> Assignee: ZanderXu
> Priority: Major
> Labels: pull-request-available
> Time Spent: 20m
> Remaining Estimate: 0h
>
> In our prod environment, we occasionally encounter job failures caused by
> MaxDirectoryItemsExceededException.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
> The directory item limit of /user/XXX/.sparkStaging is exceeded:
> limit=1048576 items=1048576
> {code}
> To catch this condition early, we add a metric that senses a possible
> MaxDirectoryItemsExceededException in time, so that we can act on it before
> it causes job failures.
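For illustration only, below is a minimal client-side sketch of the same idea,
not the NameNode-side metric added by this PR. It assumes the directory-item
limit is readable from the client configuration; the class name, the
checkDirectoryItemUsage helper, and the warn threshold are hypothetical:
{code:java}
// Hypothetical sketch, NOT the patch from this PR: compare a directory's
// direct child count against the configured directory-item limit and warn
// once usage crosses a threshold. The config key and its default (1048576)
// are the real HDFS settings; everything else is made up for illustration.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DirectoryItemUsageCheck {

  private static final String LIMIT_KEY =
      "dfs.namenode.fs-limits.max-directory-items";
  private static final int DEFAULT_LIMIT = 1048576;

  /**
   * Warn when a directory's direct child count reaches warnRatio of the
   * configured limit. Note: the authoritative limit lives in the NameNode
   * configuration, so reading it from the client conf is an approximation.
   */
  public static void checkDirectoryItemUsage(FileSystem fs, Path dir,
      double warnRatio) throws IOException {
    int limit = fs.getConf().getInt(LIMIT_KEY, DEFAULT_LIMIT);
    // listStatus returns direct children only, which is what the limit counts.
    int items = fs.listStatus(dir).length;
    if (items >= warnRatio * limit) {
      System.err.printf("WARN: %s holds %d of %d allowed items (%.0f%%)%n",
          dir, items, limit, 100.0 * items / limit);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      // Warn once the given directory passes 90% of the item limit.
      checkDirectoryItemUsage(fs, new Path(args[0]), 0.90);
    }
  }
}
{code}
Polling with listStatus can pull up to a million FileStatus objects per check
on a directory near the limit, which is why surfacing this as a metric, as the
issue proposes, is the cheaper design.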