[
https://issues.apache.org/jira/browse/SPARK-9559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14651965#comment-14651965
]
partha bishnu commented on SPARK-9559:
--------------------------------------
Hi
Yes, I requested 1 executor, as mentioned in the original description (I
used --total-executor-cores 1 with spark-submit).
We are using 1.3 so far; as you suggested, we will try to reproduce this on
1.4 and report back. Thanks again for looking into it.
Again to recap:
--------------------------------------------------------------------------------
With --total-executor-cores 1 and checkpointing enabled, I have:
node-1: Spark Master running
node-2: 1 worker JVM running, which can start at most one executor
node-3: 1 worker JVM running, which can start at most one executor
> I launched the job using spark-submit; it started in the single executor on node-2
> I killed node-2 (both the worker JVM and the executor)
> Expected behavior: Spark master should ask worker JVM on node-3 to launch a
> new executor and restart the jobs in that executor.
> Observed behavior: Jobs got stuck
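For reference, a minimal sketch of the kind of submission used in the steps above. The master URL, class name, and jar path are placeholders I am assuming for illustration, not values from the original report; checkpointing itself is enabled inside the application via StreamingContext.checkpoint(...), not on the command line.

```shell
# Hypothetical repro submission against the standalone master on node-1.
# Class name and jar path are placeholders (assumptions), not from the report.
spark-submit \
  --master spark://node-1:7077 \
  --total-executor-cores 1 \
  --class com.example.StreamingJob \
  /path/to/streaming-job.jar
```

With the core cap at 1, only one of the two workers can host the executor, which is what makes the failover question above interesting: after node-2 dies, the master should be free to schedule a replacement executor on node-3.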
> Worker redundancy/failover in spark stand-alone mode
> ----------------------------------------------------
>
> Key: SPARK-9559
> URL: https://issues.apache.org/jira/browse/SPARK-9559
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.3.0
> Reporter: partha bishnu
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)