[ https://issues.apache.org/jira/browse/HDFS-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16003243#comment-16003243 ]

Daryn Sharp commented on HDFS-11661:
------------------------------------

Sorry, I thought I commented last week.  I agree we should just revert the 
original change entirely.  I may/should/hopefully have a patch by EOW, but 
don't let it block the release.  We've "lived" with the inconsistency this long, 
so waiting a bit longer won't hurt.  Between debugging 2.8 and grokking 
snapshots, fixing the discrepancies is taking longer than expected.

> GetContentSummary uses excessive amounts of memory
> --------------------------------------------------
>
>                 Key: HDFS-11661
>                 URL: https://issues.apache.org/jira/browse/HDFS-11661
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.8.0, 3.0.0-alpha2
>            Reporter: Nathan Roberts
>            Assignee: Wei-Chiu Chuang
>            Priority: Blocker
>         Attachments: HDFS-11661.001.patch, HDFs-11661.002.patch, Heap 
> growth.png
>
>
> ContentSummaryComputationContext::nodeIncluded() is being used to keep track 
> of all INodes visited during the current content summary calculation. This 
> can be all of the INodes in the filesystem, making for a VERY large hash 
> table. This simply won't work on large filesystems. 
> We noticed this after an upgrade: a namenode with ~100 million filesystem 
> objects was spending significantly more time in GC. Fortunately that system 
> had some memory breathing room; other clusters we have will not run with this 
> additional demand on memory.
> This was added as part of HDFS-10797 as a way of keeping track of INodes that 
> have already been accounted for - to avoid double counting.
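
For context, here is a minimal sketch (in Java, since HDFS is Java) of the
pattern described above: a per-computation set that records every INode
visited so it is only counted once. The class and field names are
illustrative, not the actual ContentSummaryComputationContext internals.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only -- not the real HDFS code. It models a visited-set
// keyed by inode id, mirroring the behaviour the report describes.
class ContentSummaryVisitedSetSketch {

    // One entry per INode visited during the current getContentSummary call.
    // On a namespace with ~100 million objects this set alone can hold on the
    // order of 100 million entries, which is where the extra heap pressure
    // and GC time come from.
    private final Set<Long> includedNodes = new HashSet<>();

    /**
     * Returns true the first time an inode id is seen, false afterwards.
     * The duplicate check avoids double counting (e.g. an inode reachable
     * both in the live tree and through a snapshot), but the set is never
     * trimmed until the whole computation finishes.
     */
    boolean nodeIncluded(long inodeId) {
        return includedNodes.add(inodeId);
    }
}
{code}

Because the set only grows for the lifetime of a single getContentSummary
call, its peak size is bounded by the number of reachable INodes rather than
by the size of the result, which is what makes it problematic on large
namespaces.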



