Hi guys,

I have a job that gets stuck if a couple of tasks get killed due to OOM
exceptions. Spark doesn't kill the job, and it keeps running for hours.
Ideally I would expect Spark to either kill the job or restart the killed
executors, but nothing seems to be happening. Does anybody have any idea
what might be going on?
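
For reference, here is a minimal sketch of how I understand the relevant
settings (the app name and values below are placeholders, not my actual
configuration); my expectation is that spark.task.maxFailures should make
Spark abort the job after repeated task failures rather than leave it hanging:

import org.apache.spark.sql.SparkSession

object OomRetrySketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("oom-retry-sketch")  // hypothetical app name
      // Spark aborts the stage (and the job) after this many failures of a
      // single task; the default is 4.
      .config("spark.task.maxFailures", "4")
      // Extra executor memory as a mitigation for the OOMs themselves.
      .config("spark.executor.memory", "4g")
      .getOrCreate()

    // ... job logic would go here ...

    spark.stop()
  }
}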

Thanks
Nikhil
