Hey Sanjeev.
Can you share the /tmp/hive/hive.log (from the hiveserver2 host) from when
you launch the query?
Best regards.
Tale
On Thu, Sep 22, 2016 at 5:03 AM, Sanjeev Verma
wrote:
Lowered 1073741824 to half of it, but still getting the same issue.
On Wed, Sep 21, 2016 at 6:44 PM, Sanjeev Verma
wrote:
It's 1073741824 now, but I can't see anything running on the client side; the
job kicked off by the query completed, but HS2 is crashing.
On Wed, Sep 21, 2016 at 6:40 PM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:
FetchOperator will run client side. What is the value for
hive.fetch.task.conversion.threshold?
Thanks
Prasanth
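For anyone following along: the property Prasanth asks about can be inspected and adjusted from a Hive/beeline session. A minimal sketch (the 1 GB value shown is the one quoted elsewhere in the thread, used here only for illustration):

```sql
-- Show the current value (in bytes); queries whose input size is below
-- this threshold are converted to a client-side fetch (FetchOperator,
-- running inside HiveServer2) instead of a MapReduce/Tez job.
SET hive.fetch.task.conversion.threshold;

-- Lower the threshold so larger scans are not pulled through
-- HiveServer2's FetchOperator.
SET hive.fetch.task.conversion.threshold=1073741824;
```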
> On Sep 21, 2016, at 6:37 PM, Sanjeev Verma wrote:
>
> I am getting hiveserver2 OOM even after increasing the heap size from 8G
> to 24G, no clue why it is still going to OOM with e
In my experience, having looked at way too many heap dumps from
hiveserver2, it always ends up being a seriously over-partitioned table
and a user who decided to do a full table scan, basically requesting all
partitions. This is often by accident, for example when using
unix_timestamp to convert date
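To make that last point concrete, here is a hedged sketch of the accidental-full-scan pattern described above. The table name `events` and partition column `dt` are made up for illustration; the point is that wrapping the partition column in a function such as unix_timestamp defeats partition pruning, so the metastore hands HiveServer2 metadata for every partition:

```sql
-- Assumed layout for illustration:
--   CREATE TABLE events (...) PARTITIONED BY (dt STRING);

-- Accidental full scan: applying a function to the partition column
-- prevents partition pruning, so ALL partitions are requested, and an
-- over-partitioned table can blow up the HiveServer2 heap.
SELECT count(*)
FROM events
WHERE unix_timestamp(dt, 'yyyy-MM-dd')
      >= unix_timestamp('2016-09-01', 'yyyy-MM-dd');

-- Prunable version: compare the partition column directly, so only the
-- matching partitions are loaded.
SELECT count(*)
FROM events
WHERE dt >= '2016-09-01';
```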