Originally posted here:
http://stackoverflow.com/questions/28002443/cluster-hangs-in-ssh-ready-state-using-spark-1-2-ec2-launch-script

I'm trying to launch a standalone Spark cluster using its pre-packaged EC2
scripts, but it just hangs indefinitely in the 'ssh-ready' state:

    ubuntu@machine:~/spark-1.2.0-bin-hadoop2.4$ ./ec2/spark-ec2 -k <key-pair> -i <identity-file>.pem -r us-west-2 -s 3 launch test
    Setting up security groups...
    Searching for existing cluster test...
    Spark AMI: ami-ae6e0d9e
    Launching instances...
    Launched 3 slaves in us-west-2c, regid = r-b_______6
    Launched master in us-west-2c, regid = r-0______0
    Waiting for all instances in cluster to enter 'ssh-ready' state..........
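As far as I can tell, the script polls each instance with a non-interactive
SSH connection until it succeeds, so the hang means that probe keeps failing.
A rough way to reproduce that kind of probe by hand (the exact user and SSH
options the script uses are assumptions on my part; the key path and hostname
are placeholders):

    # Approximation of the 'ssh-ready' probe; not the script's exact command.
    ssh -i <identity-file>.pem \
        -o StrictHostKeyChecking=no \
        -o BatchMode=yes \
        -o ConnectTimeout=3 \
        root@<instance-public-dns> true \
        && echo ssh-ready || echo not-ready

If that fails while an interactive ssh works, the difference in options
(BatchMode, host key checking, connection timeout) is probably where the hang
comes from.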

Yet I can SSH into these instances without complaint:

    ubuntu@machine:~$ ssh -i <identity-file>.pem root@master-ip
    Last login: Day MMM DD HH:mm:ss 20YY from c-AA-BBB-CCCC-DDD.eee1.ff.provider.net

           __|  __|_  )
           _|  (     /   Amazon Linux AMI
          ___|\___|___|

    https://aws.amazon.com/amazon-linux-ami/2013.03-release-notes/
    There are 59 security update(s) out of 257 total update(s) available
    Run "sudo yum update" to apply all updates.
    Amazon Linux version 2014.09 is available.
    [root@ip-internal ~]$

I'm trying to figure out whether this is a problem on the AWS side or with the
Spark scripts. I had never run into this issue until recently.
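
To help narrow down which side the problem is on, one thing I can check is
whether the security groups the script set up actually allow inbound SSH; a
sketch with the AWS CLI (assuming the groups are named test-master and
test-slaves after the cluster name, which I haven't verified):

    # List inbound rules for the security groups spark-ec2 created (names assumed).
    aws ec2 describe-security-groups \
        --region us-west-2 \
        --group-names test-master test-slaves \
        --query 'SecurityGroups[].{Group:GroupName,Inbound:IpPermissions}'

If port 22 is open there, that would point back at the launch script rather
than AWS.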


-- 
Nathan Murthy // 713.884.7110 (mobile) // @natemurthy
