Is it possible to take a heap dump specifically for the task that failed?


"16/12/16 12:25:54 WARN YarnSchedulerBackend$YarnSchedulerEndpoint:
Container killed by YARN for exceeding memory limits. 20.0 GB of 19.8 GB
physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

16/12/16 12:25:54 ERROR YarnClusterScheduler: Lost executor 1 on
ip-.dev: Container killed by YARN for exceeding memory limits. 20.0 GB
of 19.8 GB physical memory used. Consider boosting
spark.yarn.executor.memoryOverhead.
16/12/16 12:25:55 WARN TaskSetManager: Lost task 7.0 in stage 1.0 (TID
9, ip.dev): ExecutorLostFailure (executor 1 exited caused by one of
the running tasks) Reason: Container killed by YARN for exceeding
memory limits. 20.0 GB of 19.8 GB physical memory used. Consider
boosting spark.yarn.executor.memoryOverhead.
16/12/16 12:25:55 INFO BlockManagerMasterEndpoint: Trying to remove
executor 1 from BlockManagerMaster.
16/12/16 12:25:55 INFO BlockManagerMaster: Removal of executor 1 requested
16/12/16 12:25:55 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asked
to remove non-existent executor 1

"
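One way to get a dump for exactly the executor that dies is to have every executor JVM write one automatically. A sketch, assuming spark-submit is used (the dump path, overhead value, and jar name below are illustrative, not taken from this cluster):

```shell
# Illustrative flags only: /mnt/dumps, 2048, and your_app.jar are placeholders.
# -XX:+HeapDumpOnOutOfMemoryError writes an .hprof file when the JVM itself
# throws OutOfMemoryError; raising spark.yarn.executor.memoryOverhead addresses
# the YARN physical-memory limit shown in the log above.
spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/dumps" \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  your_app.jar
```

One caveat: the log shows YARN killing the container for exceeding its physical memory limit, which is not a Java OutOfMemoryError, so the automatic dump may never trigger. In that case a live dump with jmap against the running executor PID is the alternative.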

Thanks,
Selvam R

On Fri, Dec 16, 2016 at 12:30 PM, Selvam Raman <sel...@gmail.com> wrote:

> Hi,
>
> How can I take a heap dump on an EMR slave node to analyze?
>
> I have one master and two slaves.
>
> If I run the jps command on the master, I can see SparkSubmit with its PID.
>
> But I could not see anything on the slave nodes.
>
> How can I take a heap dump for a Spark job?
>
> --
> Selvam Raman
> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>
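On the slave nodes the executors run as separate JVMs named CoarseGrainedExecutorBackend, typically under the yarn user, which is why a plain jps from your own account shows nothing there. A sketch of taking a dump on a slave (the PID and file path are placeholders):

```shell
# List JVMs owned by the yarn user; executors appear as
# ...CoarseGrainedExecutorBackend rather than SparkSubmit.
sudo -u yarn jps -l

# Dump the heap of one executor (replace <pid>; the path is illustrative).
sudo -u yarn jmap -dump:live,format=b,file=/mnt/executor.hprof <pid>
```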



-- 
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
