spark-ec2 does not directly support adding instances to an existing cluster, apart from the special case of adding slaves to a cluster with a master but no slaves. There is an open issue to track adding this support, SPARK-2008 <https://issues.apache.org/jira/browse/SPARK-2008>, but it doesn't have any momentum at the moment.
Your best bet currently is to do what you did and hack your way through using spark-ec2's various scripts. You probably already know this, but to be clear: Spark itself supports adding slaves to a running cluster. It's just that spark-ec2 hasn't implemented a feature to do that work for you.

Nick

On Wed, Nov 25, 2015 at 2:27 PM Dillian Murphey <crackshotm...@gmail.com> wrote:

> It appears start-slave.sh works on a running cluster. I'm surprised I
> can't find more info on this. Maybe I'm not looking hard enough?
>
> Using AWS and spot instances is vastly more efficient, which calls for
> the ability to add nodes dynamically while the cluster is up, yet
> everything I've found so far seems to indicate it isn't supported yet.
>
> Yet here I am on 1.5, and it at least appears to be working. Am I
> missing something?
>
> On Tue, Nov 24, 2015 at 4:40 PM, Dillian Murphey <crackshotm...@gmail.com>
> wrote:
>
>> What's the current status of adding slaves to a running cluster? I want
>> to leverage spark-ec2 and autoscaling groups. I want to launch slaves as
>> spot instances when I need to do some heavy lifting, but I don't want to
>> bring down my cluster in order to add nodes.
>>
>> Can this be done by just running start-slave.sh?
>>
>> What about using Mesos?
>>
>> I just want to create an AMI for a slave and, on some trigger, launch it
>> and have it automatically add itself to the cluster.
>>
>> thanks
>>
>
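P.S. For anyone finding this thread later, the manual version of what Dillian describes is a single command on the new node. This is a minimal sketch for Spark standalone mode (1.4+, where start-slave.sh takes the master URL as its argument); SPARK_HOME and the master hostname are illustrative placeholders, and 7077 is the default standalone master port:

```shell
# Run on the freshly launched worker instance after Spark is installed.
# Adjust SPARK_HOME and the master hostname for your deployment.
export SPARK_HOME=/root/spark
$SPARK_HOME/sbin/start-slave.sh spark://ec2-master-hostname:7077
```

The worker registers itself with the master over the network, so as long as your security groups allow the worker to reach the master on port 7077, it should appear in the master's web UI shortly after starting. Baking that command into an AMI's boot script is essentially the autoscaling trigger Dillian is after.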