[ 
https://issues.apache.org/jira/browse/HIVE-11153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14617659#comment-14617659
 ] 

Sergey Shelukhin edited comment on HIVE-11153 at 7/7/15 11:37 PM:
------------------------------------------------------------------

That is probably HADOOP-10027
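
For readers without that JIRA handy: HADOOP-10027 covers the Zlib compressor/decompressor natives calling {{GetStaticObjectField}} with the instance ({{this}}) where a {{jclass}} is required, which lines up with the {{jni_GetStaticObjectField}} frame under {{inflateBytesDirect}} in the trace quoted below. A minimal sketch of that misuse pattern (illustrative only, not the actual libhadoop source; {{staticBufFieldID}} is a hypothetical field ID):

{code}
/* Sketch of the JNI misuse class described in HADOOP-10027. */
JNIEXPORT jint JNICALL
Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_inflateBytesDirect(
        JNIEnv *env, jobject this)
{
    /* BUG: GetStaticObjectField expects a jclass as its second argument,
     * but 'this' is a jobject (the instance). The JVM then interprets an
     * instance pointer as a class pointer -- undefined behavior that can
     * surface as a SIGSEGV inside jni_GetStaticObjectField. */
    jobject bad = (*env)->GetStaticObjectField(env, this, staticBufFieldID);

    /* Correct form: resolve the class and pass that instead. */
    jclass clazz = (*env)->GetObjectClass(env, this);
    jobject ok = (*env)->GetStaticObjectField(env, clazz, staticBufFieldID);
    /* ... */
}
{code}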



was (Author: sershe):
That is probably https://issues.apache.org/jira/browse/HADOOP-10027


> LLAP: SIGSEGV in Off-heap decompression routines
> ------------------------------------------------
>
>                 Key: HIVE-11153
>                 URL: https://issues.apache.org/jira/browse/HIVE-11153
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Hive
>    Affects Versions: llap
>            Reporter: Gopal V
>            Assignee: Sergey Shelukhin
>         Attachments: llap-cn105-coredump.log
>
>
> LLAP started with 
> {code}
> ./dist/hive/bin/hive --service llap --cache 57344m --executors 16 --size 131072m --xmx 65536m --name llap0 --loglevel WARN --instances 1
> {code}
> Running date_dim filters from query27 with the large cache enabled.
> {code}
> R13=0x00007f2ca9d15ca0 is pointing into the stack for thread: 0x00007f2d4cece800
> R14=0x00007f3d7e2bfc00: <offset 0xf9dc00> in /usr/jdk64/jdk1.8.0_40/jre/lib/amd64/server/libjvm.so at 0x00007f3d7d322000
> R15=0x00007f3d7e2bb6a0: <offset 0xf996a0> in /usr/jdk64/jdk1.8.0_40/jre/lib/amd64/server/libjvm.so at 0x00007f3d7d322000
> Stack: [0x00007f2ca9c17000,0x00007f2ca9d18000],  sp=0x00007f2ca9d15ca0,  free space=1019k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
> V  [libjvm.so+0x6daca3]  jni_GetStaticObjectField+0xc3
> C  [libhadoop.so.1.0.0+0x100e9]  Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_inflateBytesDirect+0x49
> C  0x00007f2ca9d15e60
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j  org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect()I+0
> j  org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateDirect(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)I+93
> j  org.apache.hadoop.io.compress.zlib.ZlibDecompressor$ZlibDirectDecompressor.decompress(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)V+72
> j  org.apache.hadoop.hive.shims.ZeroCopyShims$DirectDecompressorAdapter.decompress(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)V+6
> j  org.apache.hadoop.hive.ql.io.orc.ZlibCodec.directDecompress(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)V+15
> j  org.apache.hadoop.hive.ql.io.orc.ZlibCodec.decompress(Ljava/nio/ByteBuffer;Ljava/nio/ByteBuffer;)V+17
> j  org.apache.hadoop.hive.ql.io.orc.InStream.decompressChunk(Ljava/nio/ByteBuffer;Lorg/apache/hadoop/hive/ql/io/orc/CompressionCodec;Ljava/nio/ByteBuffer;)V+14
> j  org.apache.hadoop.hive.ql.io.orc.InStream.readEncodedStream(JJLorg/apache/hadoop/hive/common/DiskRangeList;JJLorg/apache/hadoop/hive/shims/HadoopShims$ZeroCopyReaderShim;Lorg/apache/hadoop/hive/ql/io/orc/CompressionCodec;ILorg/apache/hadoop/hive/llap/io/api/cache/LowLevelCache;Lorg/apache/hadoop/hive/llap/io/api/EncodedColumnBatch$StreamBuffer;JJLorg/apache/hadoop/hive/llap/counters/LowLevelCacheCounters;)Lorg/apache/hadoop/hive/common/DiskRangeList;+376
> j  org.apache.hadoop.hive.ql.io.orc.EncodedReaderImpl.readEncodedColumns(ILorg/apache/hadoop/hive/ql/io/orc/StripeInformation;[Lorg/apache/hadoop/hive/ql/io/orc/OrcProto$RowIndex;Ljava/util/List;Ljava/util/List;[Z[[Z)V+2079
> j  org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal()Ljava/lang/Void;+1244
> j  org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal()Ljava/lang/Object;+1
> j  org.apache.hadoop.hive.common.CallableWithNdc.call()Ljava/lang/Object;+8
> j  java.util.concurrent.FutureTask.run()V+42
> j  java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V+95
> j  java.util.concurrent.ThreadPoolExecutor$Worker.run()V+5
> j  java.lang.Thread.run()V+11
> v  ~StubRoutines::call_stub
> {code}
> Always reproducible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
