Thanks Mayur - I thought I had made those configuration changes, but perhaps 
I'm pointing to the wrong master IP.
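
For what it's worth, the setup Mayur describes below can be sketched roughly as 
follows (the hostname is just the placeholder from my log; the actual URL should 
be copied verbatim from the top of the master's web UI):

```shell
# Point spark-shell at the standalone master before launching it.
# Assumption: the spark:// URL below is a placeholder -- replace it with the
# exact URL the master's web UI reports, or connection attempts will time out.
export MASTER=spark://ip-172-xxx-xxx-xxx:7077

# Then launch the shell, which picks MASTER up from the environment:
# spark-shell
```

Once connected, the master's web UI should also list the registered workers; if 
none appear there, the workers are not registered and jobs will not run.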


> On Apr 2, 2014, at 9:34 AM, Mayur Rustagi <mayur.rust...@gmail.com> wrote:
> 
> The cluster is not running. You need to set the MASTER environment variable 
> and point it at your master IP to connect to it.
> Also, if you are running in distributed mode, the workers should be 
> registered with the master. 
> 
> Mayur Rustagi
> Ph: +1 (760) 203 3257
> http://www.sigmoidanalytics.com
> @mayur_rustagi
> 
> 
> 
>> On Wed, Apr 2, 2014 at 12:44 AM, Denny Lee <denny.g....@gmail.com> wrote:
>> I’ve been able to get CDH5 up and running on EC2, and according to Cloudera 
>> Manager, Spark is healthy.
>> 
>> But when I try to run spark-shell, I eventually get the error:
>> 
>> 14/04/02 07:18:18 INFO client.AppClient$ClientActor: Connecting to master 
>> spark://ip-172-xxx-xxx-xxx:7077...
>> 14/04/02 07:18:38 ERROR client.AppClient$ClientActor: All masters are 
>> unresponsive! Giving up.
>> 14/04/02 07:18:38 ERROR cluster.SparkDeploySchedulerBackend: Spark cluster 
>> looks dead, giving up.
>> 14/04/02 07:18:38 ERROR scheduler.TaskSchedulerImpl: Exiting due to error 
>> from cluster scheduler: Spark cluster looks down
>> 
>> Wondering which configurations I would need to change to get this to work?
>> 
>> Thanks!
>> Denny
> 
