Hi, I have a job that runs on Spark on EC2. The cluster currently contains 1 master node and 2 worker nodes.
I am planning to add several more worker nodes to the cluster. How should I do that so that the master node knows about the new worker nodes? I couldn't find documentation on this on Spark's site. Can anybody help a bit? Thanks, Xiaobing
