> memory and more GC, which can cause issues.
>
> Also, have you tried to persist data in any way? If so, that might be
> causing an issue.
>
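
If the job does cache an intermediate result, a disk-backed storage level is one way to take pressure off executor memory. A minimal sketch in Scala, assuming Spark 2.x; the paths, dataset names, and join column below are placeholders, not details from this thread:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.storage.StorageLevel

    val spark = SparkSession.builder().appName("persist-check").getOrCreate()

    // Placeholder inputs standing in for the two sides of the large join.
    val bigLeft  = spark.read.parquet("/data/left")
    val bigRight = spark.read.parquet("/data/right")

    // A disk-backed level lets cached partitions spill to disk instead of
    // pinning everything in executor memory; release the cache when done.
    val joined = bigLeft.join(bigRight, Seq("key"))
    joined.persist(StorageLevel.MEMORY_AND_DISK_SER)
    joined.count()
    joined.unpersist()
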
> Lastly, I am not sure whether your data is skewed and whether that is
> forcing a lot of data onto one executor node.
>
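
One quick way to check for that kind of skew is to count rows per join key and look at the heaviest keys. A sketch in Scala, assuming Spark 2.x; the input path and the column name "key" are placeholders:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder().appName("skew-check").getOrCreate()
    val df = spark.read.parquet("/data/left")  // placeholder input

    // Rows per join key, heaviest first: a handful of keys carrying a large
    // share of the data would push most of the shuffle onto one executor.
    df.groupBy("key")
      .count()
      .orderBy(desc("count"))
      .show(20, truncate = false)
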
> Sent from my Windows 10 phone
>
> *From: *Rodrick Brown
> *Sent: *Friday, November 25, 2016 12:25 AM
> *To: *Aniket Bhatnagar
> *Cc: *user
> *Subject: *Re: OS killing Executor due to high (possibly off heap) memory
> usage
>
Try setting spark.yarn.executor.memoryOverhead 1
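
A minimal sketch of setting that property in Scala, assuming Spark 2.x on YARN. spark.yarn.executor.memoryOverhead is the extra memory, in MB, that YARN requests for each executor container on top of spark.executor.memory, to cover off-heap usage; the 4096 below is only a placeholder, not a value from this thread, and the property can equally be passed to spark-submit with --conf:

    import org.apache.spark.sql.SparkSession

    // Must be set before the SparkSession/SparkContext is created so it is
    // applied when the YARN containers are requested; 4096 is a placeholder.
    val spark = SparkSession.builder()
      .appName("large-join")
      .config("spark.yarn.executor.memoryOverhead", "4096")
      .getOrCreate()
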
On Thu, Nov 24, 2016 at 11:16 AM, Aniket Bhatnagar <
aniket.bhatna...@gmail.com> wrote:
> Hi Spark users
>
> I am running a job that does join of a huge dataset (7 TB+) and the
> executors keep crashing randomly, eventually causing the job to crash.