[ https://issues.apache.org/jira/browse/HIVE-12171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14959773#comment-14959773 ]

Sergey Shelukhin edited comment on HIVE-12171 at 10/15/15 10:41 PM:
--------------------------------------------------------------------

This is by design (sort of). The cache size needs to be increased: the cache is
almost full and too fragmented to accommodate a larger-than-usual uncompressed
stream, which is cached in max-alloc-sized parts. Max alloc in this case is
16MB and the stream is 4-something MB, but the largest chunk available in the
buddy allocator is only 4MB. Alternatively, we could try to break the stream
into smaller parts for this case (see the sketch after the stack trace below).
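To make the failure mode concrete, here is a minimal self-contained sketch
(hypothetical code, not Hive's actual BuddyAllocator) of why a ~4.9MB request
can fail even when max alloc is 16MB: a buddy allocator rounds each request up
to the next power of two, so 4,920,000 bytes needs an 8MB block, and a
fragmented cache whose largest free chunk is 4MB cannot serve it no matter how
much total free space remains.

{code}
// Hypothetical sketch -- not Hive's BuddyAllocator. Illustrates the failure
// described above: the request fits under max alloc, so it is attempted as a
// single buddy block, but fragmentation leaves no block of that order free.
public class BuddyFragmentationDemo {
  static final int MAX_ALLOC = 16 << 20;          // 16MB max alloc, per the comment

  // Buddy allocators serve power-of-two blocks, so round the request up.
  static int buddyBlockSize(int size) {
    return Integer.highestOneBit(size - 1) << 1;
  }

  public static void main(String[] args) {
    int request = 4_920_000;                      // size from the stack trace
    int largestFreeChunk = 4 << 20;               // 4MB, per the comment

    int block = buddyBlockSize(request);          // 8MB, next power of two
    if (request <= MAX_ALLOC && block > largestFreeChunk) {
      // Mirrors AllocatorOutOfMemoryException: total free space may be
      // ample, but no single contiguous block of the needed order exists.
      System.out.println("Failed to allocate " + request + "; at 0 out of 1");
    }
  }
}
{code}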


> LLAP: BuddyAllocator failures when querying uncompressed data
> -------------------------------------------------------------
>
>                 Key: HIVE-12171
>                 URL: https://issues.apache.org/jira/browse/HIVE-12171
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>            Reporter: Gopal V
>            Assignee: Sergey Shelukhin
>
> {code}
> hive> select sum(l_extendedprice * l_discount) as revenue from testing.lineitem
> where l_shipdate >= '1993-01-01' and l_shipdate < '1994-01-01';
> Caused by: org.apache.hadoop.hive.common.io.Allocator$AllocatorOutOfMemoryException: Failed to allocate 4920000; at 0 out of 1
>         at org.apache.hadoop.hive.llap.cache.BuddyAllocator.allocateMultiple(BuddyAllocator.java:176)
>         at org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.preReadUncompressedStream(EncodedReaderImpl.java:882)
>         at org.apache.hadoop.hive.ql.io.orc.encoded.EncodedReaderImpl.readEncodedColumns(EncodedReaderImpl.java:319)
>         at org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:413)
>         at org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:194)
>         at org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:191)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:191)
>         at org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:74)
>         at org.apache.hadoop.hive.common.CallableWithNdc.call(CallableWithNdc.java:37)
>         ... 4 more
> {code}
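
A rough illustration of the alternative mentioned in the comment above
(a hypothetical helper, not Hive's actual allocation logic): splitting the
~4.9MB stream into parts no larger than the chunks the fragmented allocator
can still serve, so the read succeeds as two small allocations instead of
one 8MB buddy block.

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed fix: break the stream into parts that
// each fit into the 4MB chunks still available in the fragmented cache.
public class StreamSplitDemo {
  // Split streamSize bytes into parts of at most partSize bytes each.
  static List<Integer> splitIntoParts(int streamSize, int partSize) {
    List<Integer> parts = new ArrayList<>();
    for (int remaining = streamSize; remaining > 0; remaining -= partSize) {
      parts.add(Math.min(remaining, partSize));
    }
    return parts;
  }

  public static void main(String[] args) {
    // 4,920,000 bytes with 4MB parts -> [4194304, 725696]: both allocations
    // fit the largest free chunk, avoiding the single 8MB buddy block.
    System.out.println(splitIntoParts(4_920_000, 4 << 20));
  }
}
{code}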


