[ 
https://issues.apache.org/jira/browse/SPARK-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14620021#comment-14620021
 ] 

Esben S. Nielsen commented on SPARK-7736:
-----------------------------------------

Thanks for the comment. I don't understand how it applies here, however, as both 
listed pyspark programs (in my understanding) should result in step 2) of your 
scenario:

p1) Unhandled exception raised before SparkContext initialization:
---
from pyspark import SparkContext
raise Exception('Fail')
sc = SparkContext(appName="raise_seen_by_yarn")
---
This results in an AM retry (total 2 AM tries as per YARN default) and 
subsequent marking of the application YARN status as FAILED. This is what I 
expect for a "designed to fail AM".

p2) Unhandled exception raised after SparkContext initialization:
---
from pyspark import SparkContext
sc = SparkContext(appName="raise_not_seen_by_yarn")
raise Exception('Fail')
---
This results in the application being marked as SUCCEEDED (total of 1 AM try), 
which is not what I expect for a "designed to fail AM".

I've looked through the Spark documentation for any special action required to 
signal failure to YARN, but I haven't found anything. And looking at 
src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala : L118, 
where all sys.exit calls are considered successful termination regardless of 
exit code, I can't see any way to signal failure to YARN after SparkContext 
initialization.

Both p1 and p2 return with a non-zero exit code when run with spark-submit 
--master yarn-client, which is what I would expect.
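To illustrate that last point with a minimal sketch (plain Python, no Spark 
installation needed, since the behaviour in question is just the driver 
process's exit status): the child script below stands in for p1/p2 with the 
SparkContext calls stripped out, and its unhandled exception surfaces as a 
non-zero return code, exactly what spark-submit in yarn-client mode propagates. 
The bug is that in yarn-cluster mode the AM ignores this exit code and still 
reports SUCCEEDED.

```python
import subprocess
import sys

# Stand-in for the p1/p2 driver scripts: an unhandled exception,
# with the SparkContext calls omitted so this runs without Spark.
driver = "raise Exception('Fail')\n"

result = subprocess.run(
    [sys.executable, "-c", driver],
    capture_output=True,
    text=True,
)

# The Python driver process itself exits non-zero on an unhandled
# exception; this is what yarn-client mode surfaces. In yarn-cluster
# mode the AM (per ApplicationMaster.scala above) treats any sys.exit
# as successful termination regardless of this code.
print(result.returncode)  # non-zero
```
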


> Exception not failing Python applications (in yarn cluster mode)
> ----------------------------------------------------------------
>
>                 Key: SPARK-7736
>                 URL: https://issues.apache.org/jira/browse/SPARK-7736
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>         Environment: Spark 1.3.1, Yarn 2.7.0, Ubuntu 14.04
>            Reporter: Shay Rojansky
>
> It seems that exceptions thrown in Python spark apps after the SparkContext 
> is instantiated don't cause the application to fail, at least in Yarn: the 
> application is marked as SUCCEEDED.
> Note that any exception right before the SparkContext correctly places the 
> application in FAILED state.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
