The cluster is not running, or spark-shell cannot reach it. You need to set the MASTER environment variable and point it to your master's IP to connect to it. Also, if you are running in distributed mode, the workers should be registered with the master.
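For example, something like this before launching spark-shell (the host/port below is taken from your log output; substitute your actual master address, and note the exact variable name may differ by Spark version):

```shell
# Point spark-shell at the standalone master (address from the error log above)
export MASTER=spark://ip-172-xxx-xxx-xxx:7077
spark-shell
```

You can confirm the master is actually up and that workers have registered by opening the master web UI at http://<master-host>:8080.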
Mayur Rustagi
Ph: +1 (760) 203 3257
http://www.sigmoidanalytics.com
@mayur_rustagi <https://twitter.com/mayur_rustagi>

On Wed, Apr 2, 2014 at 12:44 AM, Denny Lee <denny.g....@gmail.com> wrote:
> I've been able to get CDH5 up and running on EC2 and according to Cloudera
> Manager, Spark is running healthy.
>
> But when I try to run spark-shell, I eventually get the error:
>
> 14/04/02 07:18:18 INFO client.AppClient$ClientActor: Connecting to master
> spark://ip-172-xxx-xxx-xxx:7077...
> 14/04/02 07:18:38 ERROR client.AppClient$ClientActor: All masters are
> unresponsive! Giving up.
> 14/04/02 07:18:38 ERROR cluster.SparkDeploySchedulerBackend: Spark cluster
> looks dead, giving up.
> 14/04/02 07:18:38 ERROR scheduler.TaskSchedulerImpl: Exiting due to error
> from cluster scheduler: Spark cluster looks down
>
> Wondering which configurations I would need to change to get this to work?
>
> Thanks!
> Denny