I'm not sure what the design is, but I believe the current behavior when
the driver can't reach the master is to attempt to connect once and fail
if that attempt fails.  Is that what you're observing?  (Also, which
version of Spark are you running?)
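
If you need the driver to tolerate a briefly unreachable master, one
application-level workaround is to retry SparkContext creation yourself.
This is just an untested sketch (the master URL, retry count, and wait
time below are placeholders, not Spark settings):

    import org.apache.spark.{SparkConf, SparkContext}

    object SubmitWithRetry {
      // Retry SparkContext creation a few times before giving up,
      // since a single failed connect otherwise kills the app.
      def createContextWithRetry(conf: SparkConf,
                                 maxAttempts: Int = 3,
                                 waitMs: Long = 5000): SparkContext = {
        var lastError: Throwable = null
        for (attempt <- 1 to maxAttempts) {
          try {
            return new SparkContext(conf)
          } catch {
            case e: Exception =>
              lastError = e
              System.err.println(
                s"Attempt $attempt/$maxAttempts could not reach the master: ${e.getMessage}")
              if (attempt < maxAttempts) Thread.sleep(waitMs)
          }
        }
        throw lastError
      }

      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("retry-example")
          .setMaster("spark://master-host:7077") // placeholder master URL
        val sc = createContextWithRetry(conf)
        try println(sc.parallelize(1 to 10).sum())
        finally sc.stop()
      }
    }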

On Fri, Oct 17, 2014 at 3:51 AM, preeze <etan...@gmail.com> wrote:

> Hi all,
>
> I am running a standalone spark cluster with a single master. No HA or
> failover is configured explicitly (no ZooKeeper etc).
>
> What is the designed default behavior for submitting new jobs when the
> single master goes down or becomes unreachable?
>
> I couldn't find it documented anywhere.
> Thanks.
