David Wayne Birdsall created TRAFODION-2043:
-----------------------------------------------

             Summary: Bulk load may fail if bucket cache is configured and is large
                 Key: TRAFODION-2043
                 URL: https://issues.apache.org/jira/browse/TRAFODION-2043
             Project: Apache Trafodion
          Issue Type: Bug
          Components: sql-cmu
    Affects Versions: 2.0-incubating, 2.1-incubating
         Environment: Potentially all; this particular example was seen on a 10-node cluster
            Reporter: David Wayne Birdsall
            Assignee: David Wayne Birdsall
             Fix For: 2.1-incubating


Bulk load may fail when HBase is configured to use bucket cache. An example: 

SQL>LOAD WITH CONTINUE ON ERROR INTO TK.DEVICES SELECT * FROM HIVE.TK.DEVICES ;
 
UTIL_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------
Task: LOAD             Status: Started    Object: TRAFODION.TK.DEVICES          
                                                
Task:  CLEANUP         Status: Started    Object: TRAFODION.TK.DEVICES          
                                                
Task:  CLEANUP         Status: Ended      Object: TRAFODION.TK.DEVICES          
                                                
Task:  PREPARATION     Status: Started    Object: TRAFODION.TK.DEVICES          
                                                
*** ERROR[8448] Unable to access Hbase interface. Call to 
ExpHbaseInterface::addToHFile returned error HBASE_ADD_TO_HFILE_ERROR(-713). 
Cause: 
java.lang.OutOfMemoryError: Direct buffer memory
java.nio.Bits.reserveMemory(Bits.java:658)
java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
org.apache.hadoop.hbase.util.ByteBufferArray.<init>(ByteBufferArray.java:65)
org.apache.hadoop.hbase.io.hfile.bucket.ByteBufferIOEngine.<init>(ByteBufferIOEngine.java:47)
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.getIOEngineFromName(BucketCache.java:307)
org.apache.hadoop.hbase.io.hfile.bucket.BucketCache.<init>(BucketCache.java:217)
org.apache.hadoop.hbase.io.hfile.CacheConfig.getBucketCache(CacheConfig.java:614)
org.apache.hadoop.hbase.io.hfile.CacheConfig.getL2(CacheConfig.java:553)
org.apache.hadoop.hbase.io.hfile.CacheConfig.instantiateBlockCache(CacheConfig.java:637)
org.apache.hadoop.hbase.io.hfile.CacheConfig.<init>(CacheConfig.java:231)
org.trafodion.sql.HBulkLoadClient.doCreateHFile(HBulkLoadClient.java:209)
org.trafodion.sql.HBulkLoadClient.addToHFile(HBulkLoadClient.java:245)
. [2016-06-09 00:31:55]

The failure occurs because the bulk load client code uses a server-side API
that requires a CacheConfig object, and that object configures itself from the
settings in the hbase-site.xml file. In particular, if a large bucket cache is
configured there, constructing the CacheConfig attempts to allocate that cache
in the calling JVM, which can exceed the direct buffer memory we specify for
Trafodion client servers.
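
For illustration only (a sketch of what the stack trace shows, not the actual
Trafodion code), the client side does roughly the following; the property
values in the comments are examples, not the settings on any particular
cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;

// Reads hbase-site.xml, including bucket cache settings meant for region servers.
Configuration conf = HBaseConfiguration.create();

// If hbase-site.xml contains, for example:
//   hbase.bucketcache.ioengine = offheap
//   hbase.bucketcache.size     = 8192   (MB; example value only)
// then this constructor instantiates the bucket cache in the calling JVM,
// reserving the full cache size as direct ByteBuffers, and fails with
// "java.lang.OutOfMemoryError: Direct buffer memory" when -XX:MaxDirectMemorySize
// is smaller than the configured cache.
CacheConfig cacheConf = new CacheConfig(conf);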

The fix is either to avoid using a block cache at all, or to unset the bucket
cache properties before constructing the CacheConfig object.
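
A minimal sketch of the second approach (an assumption about how it might look,
not the committed change), assuming "conf" is the client Configuration already
in hand; the unset keys are the standard HBase bucket cache property constants:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;

// Work on a copy so the change stays local to the HFile-writing path.
Configuration writerConf = new Configuration(conf);

// With no ioengine/size set, CacheConfig will not instantiate an L2 (bucket)
// cache, so no direct buffer memory is reserved in the Trafodion client process.
writerConf.unset(HConstants.BUCKET_CACHE_IOENGINE_KEY);  // "hbase.bucketcache.ioengine"
writerConf.unset(HConstants.BUCKET_CACHE_SIZE_KEY);      // "hbase.bucketcache.size"

CacheConfig cacheConf = new CacheConfig(writerConf);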



