Hello,

I'm new to EC2. I've set up a Spark cluster on EC2 and am using
persistent-hdfs with the data nodes mounting EBS volumes. I launched my
cluster using spot instances:

./spark-ec2 -k mykeypair -i ~/aws/mykeypair.pem -t m3.xlarge -s 4 \
  -z us-east-1c --spark-version=1.2.0 --spot-price=0.0321 \
  --hadoop-major-version="2" --copy-aws-credentials --ebs-vol-size=100 \
  launch mysparkcluster

My question is: if the spot instances get dropped and I try to attach new
slaves to the existing master with --use-existing-master, can I mount those
new slaves to the same EBS volumes? I'm guessing spark-ec2 won't do that
for me automatically. If somebody has experience with this, how is it done?
Would something like the sketch below work?
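
Here's a rough sketch of what I imagine doing by hand, assuming the EBS
volumes survive the spot termination and that spark-ec2 keeps the
persistent-hdfs data under /vol (the volume ID, instance ID, and device
names below are just placeholders):

# Re-attach a surviving EBS volume to a replacement slave
aws ec2 attach-volume --volume-id vol-0abc123 --instance-id i-0def456 \
  --device /dev/sdf

# Then, on that slave, mount it where persistent-hdfs expects its data
# (/dev/sdf typically shows up as /dev/xvdf on these instances)
sudo mkdir -p /vol
sudo mount /dev/xvdf /vol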

Thanks.
Sincerely,
Deb
