[ https://issues.apache.org/jira/browse/HBASE-14598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14956343#comment-14956343 ]

Hadoop QA commented on HBASE-14598:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12766461/hbase-14598-v1.patch
  against master branch at commit 08df55defc052a674d13ff0dbfb5e82618775293.
  ATTACHMENT ID: 12766461

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.6.1 2.7.0 
2.7.1)

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

    {color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning message.

    {color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100.

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

     {color:red}-1 core tests{color}.  The patch failed these unit tests:
     

     {color:red}-1 core zombie tests{color}.  There are 8 zombie test(s):
        at 
org.apache.hadoop.hbase.client.TestSnapshotFromClient.testSnapshotDeletionWithRegex(TestSnapshotFromClient.java:170)
        at 
org.apache.hadoop.hbase.namespace.TestNamespaceAuditor.testRegionOperations(TestNamespaceAuditor.java:469)
        at 
org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd.testEndToEnd(TestFuzzyRowFilterEndToEnd.java:143)
        at 
org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat.testMRIncrementalLoadWithSplit(TestHFileOutputFormat.java:384)
        at 
org.apache.hadoop.hbase.client.TestFromClientSide.testJiraTest33(TestFromClientSide.java:2339)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15995//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15995//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15995//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15995//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/15995//console

This message is automatically generated.

> ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations
> ----------------------------------------------------------------------
>
>                 Key: HBASE-14598
>                 URL: https://issues.apache.org/jira/browse/HBASE-14598
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.98.12
>            Reporter: Ian Friedman
>            Assignee: Ian Friedman
>         Attachments: 14598.txt, hbase-14598-v1.patch
>
>
> We noticed that when returning a Scan against a region containing particularly 
> large (wide) rows, it is possible for ByteBufferOutputStream.checkSizeAndGrow() 
> to attempt to create a new ByteBuffer larger than the JVM allows, which then 
> throws an OutOfMemoryError. The code currently caps the size at 
> Integer.MAX_VALUE, which is actually larger than the JVM allows. This led to 
> cascading region server deaths: the RegionServer hosting the region died, the 
> region opened on a new server, the client retried the scan, and the new RS 
> died as well. 
> I believe ByteBufferOutputStream should not try to create ByteBuffers that 
> large and should instead throw an exception back up if it needs to grow any 
> bigger. The limit should probably be something like Integer.MAX_VALUE-8, as 
> that is what ArrayList uses. ref: 
> http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/util/ArrayList.java#221
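A minimal sketch of the capped growth described above (the class name, the MAX_ARRAY_SIZE constant, the doubling strategy, and the choice of BufferOverflowException are assumptions for illustration, not the contents of the attached patch):

{code:java}
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

public class CappedGrowthSketch {

  // Largest array size the JVM will reliably allocate; ArrayList uses the same
  // Integer.MAX_VALUE - 8 ceiling to leave room for array header words.
  private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

  private ByteBuffer buf = ByteBuffer.allocate(16);

  // Sketch of a capped checkSizeAndGrow: refuse to grow past MAX_ARRAY_SIZE and
  // throw instead of attempting an allocation that ends in OutOfMemoryError.
  private void checkSizeAndGrow(int extra) {
    long capacityNeeded = buf.position() + (long) extra;
    if (capacityNeeded > buf.limit()) {
      if (capacityNeeded > MAX_ARRAY_SIZE) {
        // Surface a recoverable exception to the caller rather than letting
        // ByteBuffer.allocate() kill the region server with an OOME.
        throw new BufferOverflowException();
      }
      // Double the capacity, capped at the safe ceiling, and never allocate
      // less than what is actually needed.
      long nextCapacity = Math.min((long) buf.capacity() * 2L, MAX_ARRAY_SIZE);
      ByteBuffer newBuf = ByteBuffer.allocate((int) Math.max(nextCapacity, capacityNeeded));
      buf.flip();
      newBuf.put(buf);
      buf = newBuf;
    }
  }
}
{code}

Which exception to propagate (BufferOverflowException, a plain RuntimeException, or a checked IOException) is a design choice the actual patch would need to settle.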



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
