[
https://issues.apache.org/jira/browse/AMBARI-18936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682063#comment-15682063
]
Hadoop QA commented on AMBARI-18936:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12839703/AMBARI-18936.02.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:red}-1 tests included{color}. The patch doesn't appear to include
any new or modified tests.
Please justify why no new tests are needed for this
patch.
Also please list what manual steps were performed to
verify this patch.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
ambari-server.
Test results:
https://builds.apache.org/job/Ambari-trunk-test-patch/9332//testReport/
Console output:
https://builds.apache.org/job/Ambari-trunk-test-patch/9332//console
This message is automatically generated.
> DataNode JVM heap settings should include CMSInitiatingOccupancy
> ----------------------------------------------------------------
>
> Key: AMBARI-18936
> URL: https://issues.apache.org/jira/browse/AMBARI-18936
> Project: Ambari
> Issue Type: Improvement
> Affects Versions: 2.2.2
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Fix For: 2.5.0
>
> Attachments: AMBARI-18936.02.patch
>
>
> This is a followup to AMBARI-18694 to fix remaining stack versions:
> ___________________________
> As HDFS-11047 reported, DirectoryScanner performs its scan by deep-copying
> FinalizedReplica objects. In a deployment with 500,000+ blocks, we have seen
> DataNode heap usage climb to high peaks very quickly. Deep copies of
> FinalizedReplica make DN heap usage even worse when directory scans are
> scheduled more frequently.
> Another factor is that a huge number of ScanInfo instances, one per HDFS
> block, linger as garbage and consume heap memory until a full GC takes
> place.
> This patch proposes adding JVM settings that trigger GC more frequently,
> releasing the DataNode heap consumed for the two reasons above, i.e. adding
> the following options to HADOOP_DATANODE_OPTS:
> {noformat}
> -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly
> -XX:ConcGCThreads=8 -XX:+UseConcMarkSweepGC
> {noformat}
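A minimal sketch of how these flags could end up in effect via hadoop-env.sh. The file path and the pre-existing contents of HADOOP_DATANODE_OPTS vary by stack version and distribution, so this is illustrative only; the key point is to append the CMS flags to the existing value rather than overwrite it.

```shell
# hadoop-env.sh (illustrative; actual location and baseline opts vary by stack)
# Append the CMS tuning flags from the patch to any DataNode opts already set.
export HADOOP_DATANODE_OPTS="-XX:+UseConcMarkSweepGC \
-XX:CMSInitiatingOccupancyFraction=70 \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:ConcGCThreads=8 \
${HADOOP_DATANODE_OPTS}"
```

-XX:CMSInitiatingOccupancyFraction=70 with -XX:+UseCMSInitiatingOccupancyOnly makes CMS start a concurrent collection once old-gen occupancy reaches 70%, instead of letting the JVM adapt the threshold upward, so garbage from directory scans is reclaimed before heap usage peaks.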
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)