[ https://issues.apache.org/jira/browse/HDFS-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13702372#comment-13702372 ]
Hadoop QA commented on HDFS-4879:
---------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12591245/hdfs-4879.txt
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/4603//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4603//console
This message is automatically generated.
> Add "blocked ArrayList" collection to avoid CMS full GCs
> --------------------------------------------------------
>
> Key: HDFS-4879
> URL: https://issues.apache.org/jira/browse/HDFS-4879
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Affects Versions: 3.0.0, 2.0.4-alpha
> Reporter: Todd Lipcon
> Assignee: Todd Lipcon
> Attachments: hdfs-4879.txt, hdfs-4879.txt, hdfs-4879.txt
>
>
> We recently saw an issue where a large deletion was issued, which caused 25M
> blocks to be collected during {{deleteInternal}}. Currently, the list of
> collected blocks is an ArrayList, meaning that we had to allocate a
> contiguous 25M-entry array (~400MB). After a NN has been running for a
> long time, the old generation may become fragmented such that it's hard
> to find a 400MB contiguous chunk of heap.
> In general, we should try to design the NN such that the only large objects
> are long-lived and created at startup time. We can improve this particular
> case (and perhaps some others) by introducing a new List implementation
> that is made of a linked list of arrays, each of which is size-limited
> (e.g., to 1MB).
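A minimal sketch of the chunked-list idea described above, for illustration only: this is not the attached patch, and the class name, chunk capacity, and field names are hypothetical. The key property is that every backing array is pre-sized to a fixed capacity, so no single allocation ever approaches the 400MB contiguous chunk described above:

{code:java}
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.NoSuchElementException;

/**
 * Append-only list backed by a linked list of fixed-size chunks, so no
 * single allocation ever exceeds the chunk capacity. This avoids the huge
 * contiguous array an ArrayList needs for tens of millions of entries.
 */
public class ChunkedList<T> implements Iterable<T> {
  // 128K object references at 8 bytes each (64-bit JVM, no compressed
  // oops) keeps each chunk's backing array around 1MB.
  private static final int DEFAULT_CHUNK_CAPACITY = 128 * 1024;

  private final int chunkCapacity;
  private final LinkedList<List<T>> chunks = new LinkedList<>();
  private int size = 0;

  public ChunkedList() {
    this(DEFAULT_CHUNK_CAPACITY);
  }

  public ChunkedList(int chunkCapacity) {
    this.chunkCapacity = chunkCapacity;
  }

  public void add(T item) {
    List<T> last = chunks.peekLast();
    if (last == null || last.size() == chunkCapacity) {
      // Pre-size the chunk so it never grows or copies internally.
      last = new ArrayList<>(chunkCapacity);
      chunks.add(last);
    }
    last.add(item);
    size++;
  }

  public int size() {
    return size;
  }

  @Override
  public Iterator<T> iterator() {
    // Walk the chunks in order, then the elements within each chunk.
    final Iterator<List<T>> chunkIt = chunks.iterator();
    return new Iterator<T>() {
      private Iterator<T> elemIt = Collections.emptyIterator();

      @Override
      public boolean hasNext() {
        while (!elemIt.hasNext() && chunkIt.hasNext()) {
          elemIt = chunkIt.next().iterator();
        }
        return elemIt.hasNext();
      }

      @Override
      public T next() {
        if (!hasNext()) {
          throw new NoSuchElementException();
        }
        return elemIt.next();
      }
    };
  }
}
{code}

The trade-off in a sketch like this is that random access by index would require walking the chunk list, but a collected-blocks list as described above only needs append and iteration, both of which remain O(1) amortized per element.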
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira