Sarto Mihai created SPARK-9252:
----------------------------------

             Summary: Spark client application must be restarted if the cluster (YARN) gets restarted
                 Key: SPARK-9252
                 URL: https://issues.apache.org/jira/browse/SPARK-9252
             Project: Spark
          Issue Type: Bug
          Components: Java API
    Affects Versions: 1.3.0
         Environment: Spark 1.3.0, Apache Hadoop 2.6
            Reporter: Sarto Mihai


We have a Java application that builds and runs RDDs successfully. But if the 
cluster gets restarted, then even if we detect that from the application and 
rebuild the JavaSparkContext, every subsequent execution fails until we restart 
the application too.
We suspect there is some static state in the JavaSparkContext that does not 
get reinitialized, because we do build new JavaSparkContext objects whenever we 
detect oldSparkContext.env().isStopped().
If we also restart the 'client' application, the RDD executions work just fine.
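
For reference, here is a minimal sketch of the context-recreation pattern 
described above. The class and method names and the SparkConf values are 
illustrative placeholders, not our actual code; the env().isStopped() check 
mirrors the one we use in the application:

{code:java}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Illustrative sketch only; names and conf values are made up for this report.
public class ContextHolder {
    private JavaSparkContext sc;

    // Build a fresh context; app name and master are placeholders.
    private JavaSparkContext newContext() {
        SparkConf conf = new SparkConf()
            .setAppName("long-running-client")
            .setMaster("yarn-client");
        return new JavaSparkContext(conf);
    }

    // Called before each job: if the old context reports stopped
    // (as after a cluster restart), replace it with a new one.
    public synchronized JavaSparkContext current() {
        if (sc == null || sc.env().isStopped()) {
            if (sc != null) {
                sc.stop(); // best-effort cleanup of the dead context
            }
            sc = newContext();
        }
        return sc;
    }
}
{code}

Even with this in place, jobs submitted through the new context keep failing 
until the whole JVM is restarted, which is what leads us to suspect static 
state.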
Therefore, we would like to avoid restarting our application when the Hadoop 
cluster gets restarted, and instead be able to create a new JavaSparkContext 
whenever the old YARN application (Spark) has stopped.

Let me know if you need any more details.


