). This is especially true if you're using spot
instances.
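If spot interruption is a possibility here, one common mitigation (just a sketch; the capacityType label below is an EKS-specific assumption, so substitute whatever label marks on-demand nodes in your cluster) is to pin the Spark pods onto on-demand capacity with a node selector:

    # keep driver and executor pods off spot nodes; label key/value are cluster-specific
    spark-submit \
      --conf spark.kubernetes.node.selector.eks.amazonaws.com/capacityType=ON_DEMAND \
      ...

Note that spark.kubernetes.node.selector.[labelKey] applies to both the driver and the executors; a driver pod template can be used if only the driver should be pinned.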
BR, Martin
From: Shrikant Prasad
Sent: Wednesday, November 9, 2022 12:11
To: Dongjoon Hyun
Cc: dev
Subject: Re: Spark Context Shutdown
I have gone through the debug logs of the jobs. There are no failures or exceptions
in the logs.
The issue does not seem to be specific to any particular job: several of our jobs have
been impacted, and the same jobs pass on retry.
I am trying to figure out why the driver pod is getting deleted when the job itself
shows no failures.
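For reference, checking the Kubernetes events for the driver pod before they expire can show what issued the delete (pod and namespace names below are placeholders):

    kubectl describe pod <driver-pod> -n <namespace>
    kubectl get events -n <namespace> --sort-by=.lastTimestamp | grep <driver-pod>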
Maybe enable DEBUG-level logging in your job and follow the processing logic
until the failure?
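For example, a minimal sketch using the log4j 1.x properties format that Spark 3.0-3.2 ships with (the file path inside the image is a placeholder):

    # log4j.properties: raise the root logger to DEBUG
    log4j.rootCategory=DEBUG, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

    # point the driver at it via spark-submit
    --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///opt/spark/conf/log4j.properties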
BTW, you need to look at what happens during job processing.
`Spark Context was shutdown` is not the root cause, but the result of job
failure in most cases.
Dongjoon.
On Fri, Oct 28, 2022 at 12:10 AM, Shrikant Prasad wrote:
Thanks, Dongjoon, for replying. I have tried with Spark 3.2 and am still facing
the same issue.
I am looking for pointers that can help in debugging to find the root cause.
Regards,
Shrikant
On Thu, 27 Oct 2022 at 10:36 PM, Dongjoon Hyun wrote:
Hi, Shrikant.
It seems that you are using non-GA features.
FYI, Kubernetes support became GA in the community as of Apache Spark 3.1.1.
https://spark.apache.org/releases/spark-release-3-1-1.html
In addition, Apache Spark 3.1 reached EOL last month.
Could you try the latest distribution?
Hi Everyone,
We are using Spark 3.0.1 with the Kubernetes resource manager. We are facing an
intermittent issue in which the driver pod gets deleted and the driver logs
contain the message that the Spark Context was shut down.
The same job works fine with the given set of configurations most of the time,
but sometimes it fails this way.