Re: cannot exec. job: TaskSchedulerImpl: Initial job has not accepted any resources

2014-04-14 Thread Praveen R
Can you try adding this to your spark-env file and syncing it to all hosts? export MASTER=spark://hadoop-pg-5.cluster:7077 On Sat, Apr 12, 2014 at 6:50 PM, ge ko koenig@gmail.com wrote: Hi, I'm starting to use Spark and have installed Spark within CDH5 using Cloudera Manager. I set up one
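
For reference, the suggested change is presumably a line like the following in conf/spark-env.sh on the master and every worker (the exact file location depends on the CDH5 installation; the hostname and port are taken from the master URL quoted above):

    # conf/spark-env.sh -- add on every host, then restart the Spark roles
    # (hostname/port copied from the advice above; adjust if your master differs)
    export MASTER=spark://hadoop-pg-5.cluster:7077

With MASTER set, spark-shell registers with the standalone master that actually owns the workers instead of falling back to a local or mismatched master, which is a common cause of the "Initial job has not accepted any resources" warning.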

cannot exec. job: TaskSchedulerImpl: Initial job has not accepted any resources

2014-04-12 Thread Gerd Koenig
Hi, I'm starting to use Spark and have installed Spark within CDH5 using Cloudera Manager. I set up one master (hadoop-pg-5) and 3 workers (hadoop-pg-7[-8,-9]). The master WebUI looks good; all workers seem to be registered. If I open spark-shell and try to execute the wordcount example, the execution
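
The wordcount example mentioned is presumably along these lines when typed into spark-shell; the HDFS input path below is only illustrative, not taken from the original post:

    // Run inside spark-shell, which already provides the SparkContext as `sc`.
    // The input path is a placeholder.
    val lines  = sc.textFile("hdfs:///tmp/input.txt")
    val counts = lines.flatMap(_.split(" "))
                      .map(word => (word, 1))
                      .reduceByKey(_ + _)
    counts.collect().foreach(println)   // the warning appears when this first action is scheduled

The "Initial job has not accepted any resources" message is logged by TaskSchedulerImpl when the first action is scheduled but no registered executor offers enough cores or memory, which is why checking the master URL in use and the workers' available resources is the usual first step.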
