Lately I've been messing around with Hadoop and Spark.

I noticed Spark can leverage ZooKeeper to run multiple
standby masters.

I was wondering, however, how one should implement the client
in such a situation.

That is, what should the Spark master URL be for a Spark client application?

Let's say, for example, I have 10 nodes, and 3 of them (nodes 1, 3, and 5)
are masters. I don't want to hard-code any single master's URL, since that
master may be brought down.
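
To make the problem concrete, here is a minimal sketch of the single-master
setup I want to avoid (hostnames and the app name are hypothetical):

    import org.apache.spark.{SparkConf, SparkContext}

    object SingleMasterClient {
      def main(args: Array[String]): Unit = {
        // Hard-coded master URL: if node1 loses leadership to node3
        // or node5, this client can no longer reach the cluster.
        val conf = new SparkConf()
          .setAppName("my-app")
          .setMaster("spark://node1:7077")
        val sc = new SparkContext(conf)
        // ... job logic ...
        sc.stop()
      }
    }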

So, which master URL do I use? Or rather, how do I use a single URL
that keeps working when a new master is elected?

Note:
I know I could simply keep a list of masters, use try/catch to see which one
fails, and fall back to the others (sketched below) - I was hoping for
something "better" performance-wise, and more dynamic as well.
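
For reference, this is roughly what I mean by the try/catch approach,
assuming SparkContext construction fails fast on an unreachable master
(in practice it may block and retry, which is part of my performance
concern); hostnames are hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}

    object FallbackClient {
      // The three master nodes from the example above.
      val masters = Seq("spark://node1:7077",
                        "spark://node3:7077",
                        "spark://node5:7077")

      // Try each master in turn until one accepts the application.
      def connect(appName: String): SparkContext = {
        for (url <- masters) {
          try {
            val conf = new SparkConf().setAppName(appName).setMaster(url)
            return new SparkContext(conf)
          } catch {
            case e: Exception =>
              println(s"Master $url unreachable (${e.getMessage}), trying next")
          }
        }
        sys.error("No reachable Spark master")
      }
    }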

Yours, Jones.
