Chaoyu Tang created THRIFT-3914:
-----------------------------------

             Summary: TSaslServerTransport throws OOM due to 
ByteArrayOutputStream limitation
                 Key: THRIFT-3914
                 URL: https://issues.apache.org/jira/browse/THRIFT-3914
             Project: Thrift
          Issue Type: Bug
          Components: Java - Library
    Affects Versions: 0.9.3
            Reporter: Chaoyu Tang


TSaslServerTransport uses a ByteArrayOutputStream as its write buffer, but 
a ByteArrayOutputStream is backed by a single byte array, so its capacity 
is capped at Integer.MAX_VALUE (2,147,483,647) bytes. If a write would grow 
the buffer beyond this limit, it throws an OutOfMemoryError with the message 
"Requested array size exceeds VM limit". Following is the stack trace from a 
Hive use case:
{code}
Exception in thread "pool-6-thread-9" java.lang.OutOfMemoryError: Requested array size exceeds VM limit
        at java.util.Arrays.copyOf(Arrays.java:2271)
        at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
        at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
        at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
        at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:476)
        at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
        at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:202)
        at org.apache.hadoop.hive.metastore.api.SerDeInfo$SerDeInfoStandardScheme.write(SerDeInfo.java:579)
        at org.apache.hadoop.hive.metastore.api.SerDeInfo$SerDeInfoStandardScheme.write(SerDeInfo.java:501)
        at org.apache.hadoop.hive.metastore.api.SerDeInfo.write(SerDeInfo.java:439)
        at org.apache.hadoop.hive.metastore.api.StorageDescriptor$StorageDescriptorStandardScheme.write(StorageDescriptor.java:1490)
        at org.apache.hadoop.hive.metastore.api.StorageDescriptor$StorageDescriptorStandardScheme.write(StorageDescriptor.java:1288)
        at org.apache.hadoop.hive.metastore.api.StorageDescriptor.write(StorageDescriptor.java:1154)
        at org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.write(Partition.java:1072)
        at org.apache.hadoop.hive.metastore.api.Partition$PartitionStandardScheme.write(Partition.java:929)
        at org.apache.hadoop.hive.metastore.api.Partition.write(Partition.java:825)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.write(ThriftHiveMetastore.java)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result$get_partitions_resultStandardScheme.write(ThriftHiveMetastore.java)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partitions_result.write(ThriftHiveMetastore.java:65485)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:707)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:702)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:702)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
{code} 
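For context, the limit is inherent to java.io.ByteArrayOutputStream: it keeps all buffered data in one byte[], and grow() doubles that array on demand, so once the requested capacity passes Integer.MAX_VALUE the VM cannot allocate the array. The sketch below (illustrative capacity arithmetic only, not the JDK source, and no 2 GB allocation takes place) replays the doubling schedule that ends in the error above:

```java
// Sketch of why ByteArrayOutputStream cannot exceed ~2 GB: all data lives
// in ONE byte[], and grow() doubles it on demand. Once the requested
// capacity passes Integer.MAX_VALUE, the array cannot be allocated and
// "Requested array size exceeds VM limit" is thrown.
public class GrowLimitDemo {
    public static void main(String[] args) {
        long cap = 32;       // ByteArrayOutputStream's default initial size
        int doublings = 0;
        while (cap <= Integer.MAX_VALUE) {
            cap <<= 1;       // grow() doubles the backing array
            doublings++;
        }
        // 26 doublings from 32 bytes reach 2,147,483,648 -- one past the
        // maximum int, so the Arrays.copyOf in grow() can never succeed.
        System.out.println(doublings + " doublings, requested capacity " + cap);
    }
}
```

Nothing here allocates buffer memory; it only shows that any single-array buffer hits this wall regardless of heap size, which is why raising -Xmx does not help.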



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)