[
https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15977147#comment-15977147
]
Wei-Chiu Chuang commented on HDFS-11661:
----------------------------------------
[~kihwal], [~daryn] and [~nroberts]:
could you please share what workloads your clusters run that exposed this bug? I
would like to understand the bug and reproduce the issue you mentioned. We can
revert HDFS-10797, but in the long run I would still like to fix the bug that
HDFS-10797 intended to fix.
Thanks very much!
> GetContentSummary uses excessive amounts of memory
> --------------------------------------------------
>
> Key: HDFS-11661
> URL: https://issues.apache.org/jira/browse/HDFS-11661
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.8.0, 3.0.0-alpha2
> Reporter: Nathan Roberts
> Priority: Blocker
> Attachments: Heap growth.png
>
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track
> of all INodes visited during the current content summary calculation. This
> can be all of the INodes in the filesystem, making for a VERY large hash
> table (see the sketch after the quoted description). This simply won't work
> on large filesystems.
> We noticed this after upgrading: a namenode with ~100 million filesystem
> objects was spending significantly more time in GC. Fortunately this system
> had some memory breathing room; other clusters we have will not run with this
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that
> have already been accounted for - to avoid double counting.
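For illustration, below is a minimal, self-contained sketch of the pattern described in the quoted report. The INode and ComputationContext classes, and the method names, are simplified stand-ins invented for this sketch rather than the actual Hadoop sources: a per-computation context records every inode it visits in a hash set so that nodes reachable through more than one path are counted only once, which means the set grows by one entry per inode visited.

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ContentSummarySketch {

  /** Simplified stand-in for an HDFS INode. */
  static class INode {
    final long id;
    final long length;                        // 0 for directories
    final List<INode> children = new ArrayList<>();
    INode(long id, long length) { this.id = id; this.length = length; }
  }

  /** Simplified stand-in for a content summary computation context. */
  static class ComputationContext {
    // Grows by one entry per visited inode; over a subtree covering the
    // whole namespace that is one entry per filesystem object.
    private final Set<Long> includedNodes = new HashSet<>();
    long totalLength = 0;

    /** Returns true the first time an inode is seen, false on revisits. */
    boolean nodeIncluded(INode node) {
      return includedNodes.add(node.id);
    }

    int visitedCount() { return includedNodes.size(); }
  }

  /** Recursive traversal that skips inodes already accounted for. */
  static void computeSummary(INode node, ComputationContext ctx) {
    if (!ctx.nodeIncluded(node)) {
      return;                                 // already counted, avoid double counting
    }
    ctx.totalLength += node.length;
    for (INode child : node.children) {
      computeSummary(child, ctx);
    }
  }

  public static void main(String[] args) {
    INode root = new INode(1, 0);
    INode dir = new INode(2, 0);
    INode file = new INode(3, 1024);
    root.children.add(dir);
    dir.children.add(file);
    root.children.add(file);                  // file reachable via two paths

    ComputationContext ctx = new ComputationContext();
    computeSummary(root, ctx);
    // The file is counted once, but the set now holds one entry per inode
    // visited, so its size scales with the subtree being summarized.
    System.out.println("total length = " + ctx.totalLength
        + ", tracked inodes = " + ctx.visitedCount());
  }
}
{code}

On a summary over the whole namespace, the tracking set ends up holding one entry per filesystem object, which is the memory growth the report describes.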