In my limited understanding, there must be a single "leader" master in the
cluster. If there are multiple leaders, the cluster becomes unstable because
each master will keep scheduling independently. You should use ZooKeeper
for HA, so that standby masters can elect a new leader if the primary
goes down.
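
For reference, standalone HA with ZooKeeper is enabled through the
recovery-mode properties in spark-env.sh; a minimal sketch (the ZooKeeper
hostnames zk1/zk2/zk3 are placeholders):

```shell
# conf/spark-env.sh on each master node -- ZooKeeper-backed HA.
# Standby masters register with the same ZooKeeper ensemble and
# take over leadership if the current leader dies.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"
```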

Now, you can still have multiple masters running as leaders, but
conceptually they should be thought of as separate clusters.

Regarding workers, they should follow their master.

Not sure if this answers your question, as I am sure you have read the
documentation thoroughly.
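
For what it's worth, the standalone master's web UI also exposes its status
as JSON (typically at http://<master>:8080/json), listing the registered
workers and the master's own state. A minimal sketch of polling it to count
live workers; the host name and the abbreviated sample payload below are
illustrative, not taken from a real cluster:

```python
def count_alive_workers(status):
    """Count workers reported as ALIVE in the master's JSON status."""
    return sum(1 for w in status.get("workers", []) if w.get("state") == "ALIVE")

# In practice you would fetch the status from the master's web UI, e.g.:
#   import json, urllib.request
#   status = json.load(urllib.request.urlopen("http://master-host:8080/json"))
# Abbreviated shape of the response (illustrative values):
sample = {
    "url": "spark://master-host:7077",
    "status": "ALIVE",          # this master's own state (leader vs. standby)
    "workers": [
        {"id": "worker-1", "state": "ALIVE"},
        {"id": "worker-2", "state": "DEAD"},
    ],
}
print(count_alive_workers(sample))  # → 1
```

A supervisor script could poll each master this way and restart any
daemon whose state is not ALIVE.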

Best
Ayan

On Sun, Apr 26, 2015 at 6:31 PM, James King <jakwebin...@gmail.com> wrote:

> If I have 5 nodes and I wish to maintain 1 Master and 2 Workers on each
> node, then in total I will have 5 Masters and 10 Workers.
>
> Now, to maintain that setup, I would like to query Spark for the number
> of Masters and Workers that are currently available using API calls, and
> then take some appropriate action based on the information I get back, like
> restarting a dead Master or Worker.
>
> Is this possible? Does Spark provide such an API?
>



-- 
Best Regards,
Ayan Guha
