[ https://issues.apache.org/jira/browse/AMBARI-11552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14565851#comment-14565851 ]

Hadoop QA commented on AMBARI-11552:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12736274/AMBARI-11552.00.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in ambari-server.

Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/2936//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/2936//console

This message is automatically generated.

> 2.3 stack advisor doesn't take into account HBASE-11520
> -------------------------------------------------------
>
>                 Key: AMBARI-11552
>                 URL: https://issues.apache.org/jira/browse/AMBARI-11552
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Nick Dimiduk
>             Fix For: 2.1.0
>
>         Attachments: AMBARI-11552.00.patch
>
>
> Launching an HDP-2.3 RegionServer with the HDP-2.2 stack_advisor's output on a
> machine with lots of RAM results in
> {noformat}
> regionserver.HRegionServer: Failed init
> java.lang.OutOfMemoryError: Direct buffer memory
> at java.nio.Bits.reserveMemory(Bits.java:658)
> at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> at org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
> at org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:47)
> at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:309)
> at org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:219)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:568)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:511)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:591)
> at org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:231)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.handleReportForDutyResponse(HRegionServer.java:1349)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:896)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> HBASE-11520 introduced a subtle change to the configuration; its release
> note says
> bq. Remove "hbase.bucketcache.percentage.in.combinedcache". Simplifies config 
> of block cache. If you are using this config., after this patch goes in, it 
> will be ignored. The L1 LruBlockCache will be whatever hfile.block.cache.size 
> is set to and the L2 BucketCache will be whatever hbase.bucketcache.size is 
> set to.



