Hello,
I am using Spark 2.2.1 with the standalone resource manager.

I have a streaming job in which, from time to time, jobs are aborted with the
exception below. The underlying causes vary, e.g.
FileNotFoundException, NullPointerException, etc.

org.apache.spark.SparkException: Job aborted due to stage failure:
Task 7 in stage 0.0 failed 4 times, most recent failure: Lost task 7.3
in stage 0.0.....

These exceptions are printed in the driver logs.

I have a try/catch around my streaming job. Strangely, sometimes the above
exceptions are printed in the logs but my try/catch block never catches
them, and sometimes it does catch them. In both cases, the job continues to
process data.
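
For reference, the structure I have in mind is roughly the following (a
simplified sketch, not my actual job; the socket source and the per-batch
logic are just placeholders):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamingJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("StreamingJob")
        val ssc = new StreamingContext(conf, Seconds(10))

        // placeholder source; the real job reads from a different source
        val stream = ssc.socketTextStream("localhost", 9999)

        try {
          stream.foreachRDD { rdd =>
            // per-batch processing; task failures here end up as
            // "Job aborted due to stage failure" in the driver logs
            rdd.map(_.length).count()
          }
          ssc.start()
          ssc.awaitTermination()
        } catch {
          case e: Exception =>
            // sometimes this catches the aborted-job exception, sometimes it doesn't
            println(s"Caught in driver: ${e.getMessage}")
        }
      }
    }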

I am trying to understand this behavior, i.e., in which cases I will be able
to catch the exception.

I have tried to reproduce this using something like rdd.map(x => 1/0).print(),
but failed: I can see the exception in the driver logs, but my catch block
never catches it.
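
Concretely, the reproduction attempt was along these lines, using the same
try/catch structure as above (a sketch; the queue-backed stream is just a
convenient stand-in for a real source):

    import scala.collection.mutable
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setAppName("ReproStageFailure")
    val ssc = new StreamingContext(conf, Seconds(5))

    // a single-RDD queue stream, just to have some data flowing
    val stream = ssc.queueStream(mutable.Queue(ssc.sparkContext.parallelize(1 to 10)))

    try {
      // every record triggers an ArithmeticException, so tasks fail and the
      // stage is eventually aborted
      stream.map(x => 1 / 0).print()
      ssc.start()
      ssc.awaitTermination()
    } catch {
      case e: Exception =>
        // in my test this was never reached; the exception only showed up
        // in the driver logs
        println(s"Caught in driver: ${e.getMessage}")
    }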

Regards,
Behroz
