Try using --executor-memory 12g with spark-submit. Or you can set it
in conf/spark-defaults.conf, rsync it to all workers, and then
restart. -Xiangrui
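
A rough sketch of the two options described above (the application class, jar
name, and master URL are placeholders, not details from this thread):

    # Option 1: pass the executor memory on the spark-submit command line
    ./bin/spark-submit \
      --class com.example.MyApp \
      --master spark://<master-host>:7077 \
      --executor-memory 12g \
      my-app.jar

    # Option 2: set it once in conf/spark-defaults.conf on every node,
    # then restart the cluster
    spark.executor.memory    12g

With option 2, any spark-submit run that doesn't override --executor-memory
picks up the 12g default from the config file.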

On Fri, Jun 27, 2014 at 1:05 PM, Peng Cheng <pc...@uow.edu.au> wrote:
> I give up; communication must be blocked by the complex EC2 network topology
> (though the error message could indeed use some improvement). It doesn't make
> sense to run a client thousands of miles away that communicates frequently with
> the workers. I have moved everything to EC2 now.
