Hi,

I'm running a Spark job on YARN with 6 executors, each with 25 GB of
memory and spark.yarn.executor.memoryOverhead set to 5 GB. Despite this,
YARN is still killing my executors for exceeding the container memory limit.
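
For reference, this is roughly how the job is configured (a sketch; the app
name is a placeholder and other settings are omitted; the overhead value is
in MB):

  import org.apache.spark.sql.SparkSession

  // Executor sizing as described above: 6 executors, 25 GB heap each,
  // plus 5 GB (5120 MB) of YARN memory overhead per executor.
  val spark = SparkSession.builder()
    .appName("my-job")  // placeholder
    .config("spark.executor.instances", "6")
    .config("spark.executor.memory", "25g")
    .config("spark.yarn.executor.memoryOverhead", "5120")
    .getOrCreate()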

Reading the docs, it looks like the overhead defaults to around 10% of the
heap size - yet I'm still seeing failures even with it set to 20% of the
heap. Is this expected? Are there any particular issues or antipatterns in
Spark code that could cause the JVM to use an excessive amount of memory
beyond the heap?
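
For concreteness, the arithmetic as I understand it (assuming the overhead
is added on top of the heap when YARN sizes the container):

  default overhead   = max(384 MB, 0.10 * 25 GB) = 2.5 GB
  my setting         = 5 GB  (about 20% of the 25 GB heap)
  container request  = 25 GB + 5 GB = 30 GB per executor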

Thanks,

Tim.

