Hi All,

I am using the spark-submit command to submit my jar to a standalone cluster
with two executors.
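
For reference, the command I'm running looks roughly like this; the class
name, master URL, and jar path below are simplified stand-ins for my real
values:

  ./bin/spark-submit \
    --class com.example.KafkaMessageReceiver \
    --master spark://master-host:7077 \
    /path/to/kafka-message-receiver.jar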

When I use spark-submit, it deploys the application twice, and I see two
application entries in the master UI.

The master logs shown below also indicate that spark-submit tries to deploy
the application twice, and the deployment of the second application fails.
On the driver side I see this error:

14/07/31 17:13:34 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient memory

I have already increased the executor memory.
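
Concretely, I raised it with the --executor-memory flag (2g here is just an
illustrative value, not necessarily what I used):

  ./bin/spark-submit --executor-memory 2g ...

As far as I understand, this is equivalent to setting spark.executor.memory
in the application's SparkConf.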


Could you please help me understand what is going wrong here?

Thanks,
Ali 


14/07/31 17:09:32 INFO Master: Registering app KafkaMessageReceiver
14/07/31 17:09:32 INFO Master: Registered app KafkaMessageReceiver with ID
app-20140731170932-0016
14/07/31 17:09:32 INFO Master: Launching executor app-20140731170932-0016/0
on worker worker-20140731192616-dev1.dr.com-46317
14/07/31 17:09:32 INFO Master: Launching executor app-20140731170932-0016/1
on worker worker-20140731162612-dev.dr.com-58975
14/07/31 17:09:33 INFO Master: Registering app KafkaMessageReceiver
14/07/31 17:09:33 INFO Master: Registered app KafkaMessageReceiver with ID
app-20140731170933-0017




