Is it currently possible to share a SparkContext among machines (through serialization or some other means)? I am looking for ways to make Spark job submission highly available (HA). For example, if a job submitted from machine A fails midway (because machine A crashes), I want that job to automatically re-run on machine B.
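To make the scenario concrete: as far as I know, SparkContext itself is not serializable, so what I imagine is each machine being able to rebuild an equivalent context and pick the job up. A rough sketch of what I mean (the app name and master URL below are made up):

    import org.apache.spark.{SparkConf, SparkContext}

    // Rebuild an equivalent context on whichever machine takes over,
    // rather than trying to serialize the original SparkContext.
    def buildContext(): SparkContext = {
      val conf = new SparkConf()
        .setAppName("ha-job")                   // hypothetical app name
        .setMaster("spark://master-host:7077")  // hypothetical cluster URL
      new SparkContext(conf)
    }

    // Machine B could call buildContext() and re-run the job's logic
    // once it detects that machine A's submission has failed.

Is something along these lines the right direction, or is there a built-in mechanism for this?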
Thanks