How much heap have you currently given HS2? Have you tried bumping
that up?
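
If HS2 is started through the stock hive scripts, the heap typically
comes from hive-env.sh. A minimal sketch of what I mean, with
illustrative values (adjust sizes and paths for your install):

  # hive-env.sh (read by the hive launcher scripts that start HiveServer2)
  # HADOOP_HEAPSIZE is in MB; 4096 is only an example value.
  export HADOOP_HEAPSIZE=4096
  # Optionally log GC activity, so you can see whether the heap is really
  # exhausted or the collector is thrashing before the OutOfMemoryError.
  export HADOOP_OPTS="$HADOOP_OPTS -verbose:gc -XX:+PrintGCDetails -Xloggc:/tmp/hs2-gc.log"

For what it's worth, the trace below dies while Thrift is serializing a
TStringColumn, i.e. while buffering a result set to send back to a
client, so a single very large fetch can exhaust even a generous heap.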

On Mon, Sep 7, 2015 at 1:09 AM, Sanjeev Verma <sanjeev.verm...@gmail.com>
wrote:

> *I am getting the following exception when HS2 crashes; any idea why
> this is happening?*
>
> "pool-1-thread-121" prio=4 tid=19283 RUNNABLE
> at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
> at java.util.Arrays.copyOf(Arrays.java:2271)
> Local Variable: byte[]#1
> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
> Local Variable: org.apache.thrift.TByteArrayOutputStream#42
> Local Variable: byte[]#5378
> at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:446)
> at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
> at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
> at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:186)
> Local Variable: byte[]#2
> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:490)
> Local Variable: java.util.ArrayList$Itr#1
> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:433)
> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme#1
> at org.apache.hive.service.cli.thrift.TStringColumn.write(TStringColumn.java:371)
> at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWriteValue(TColumn.java:381)
> Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
> at org.apache.thrift.TUnion.write(TUnion.java:152)
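>
> The failure is while HS2 serializes a fetched result set back to a
> client. Something as plain as the following JDBC reader drives that
> path; this is a purely illustrative sketch (names and values are made
> up, not our exact code):
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> public class BigFetch {
>     public static void main(String[] args) throws Exception {
>         Class.forName("org.apache.hive.jdbc.HiveDriver");
>         Connection conn = DriverManager.getConnection(
>             "jdbc:hive2://hs2-host:10000/default", "hive", "");
>         Statement stmt = conn.createStatement();
>         // Large client batches mean large Thrift TStringColumn buffers
>         // on the server while the rows are being serialized.
>         stmt.setFetchSize(10000);
>         ResultSet rs = stmt.executeQuery("SELECT * FROM some_big_table");
>         while (rs.next()) {
>             rs.getString(1); // consume the string column
>         }
>         rs.close();
>         stmt.close();
>         conn.close();
>     }
> }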
>
>
>
> On Fri, Aug 21, 2015 at 6:16 AM, kulkarni.swar...@gmail.com <
> kulkarni.swar...@gmail.com> wrote:
>
>> Sanjeev,
>>
>> One possibility is that you are running into [1], which affects Hive
>> 0.13. Is it possible for you to apply the patch from [1] and see if it
>> fixes your problem?
>>
>> [1] https://issues.apache.org/jira/browse/HIVE-10410
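>>
>> If you want to try it, the usual routine is to apply the patch to the
>> 0.13 source tree and rebuild, roughly like this (the patch file name
>> below is illustrative, take the latest attachment on the JIRA; the
>> hadoop-1 profile matches your Hadoop 1 setup):
>>
>>   cd apache-hive-0.13.1-src
>>   patch -p0 < HIVE-10410.1.patch    # or: git apply HIVE-10410.1.patch
>>   mvn clean package -DskipTests -Phadoop-1,dist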
>>
>> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <sanjeev.verm...@gmail.com
>> > wrote:
>>
>>> We are using Hive 0.13 with Hadoop 1.
>>>
>>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swar...@gmail.com <
>>> kulkarni.swar...@gmail.com> wrote:
>>>
>>>> Sanjeev,
>>>>
>>>> Can you share more details about your setup, such as your Hive and Hadoop versions?
>>>>
>>>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <
>>>> sanjeev.verm...@gmail.com> wrote:
>>>>
>>>>> Can somebody give me some pointers to look into?
>>>>>
>>>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>>>> sanjeev.verm...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>> We are experiencing a strange problem with HiveServer2: on one of the
>>>>>> jobs it hits a "GC overhead limit exceeded" error from a mapred task
>>>>>> and then hangs, even though enough heap appears to be available. We
>>>>>> have not been able to identify what is causing this issue. Could
>>>>>> anybody help me identify it and let me know what pointers I should
>>>>>> look at?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Swarnim
>>>>
>>>
>>>
>>
>>
>> --
>> Swarnim
>>
>
>


-- 
Swarnim
