Hello spark folks,

I have a simple Spark cluster set up, but I can't get jobs to run on it.  I
am using standalone mode.

One master, one slave.  Both machines have 32GB of RAM and 8 cores.

The slave is set up with one worker that has 8 cores and 24GB of memory
allocated.
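
In case it helps, the worker limits are set in conf/spark-env.sh on the
slave, roughly like this (simplified sketch of my config, values as above):

    # Worker resource limits on the slave (approximate sketch)
    SPARK_WORKER_CORES=8
    SPARK_WORKER_MEMORY=24g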

My application requires 2 cores and 5GB of memory.
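
For reference, the driver declares those requirements roughly like this
(a simplified sketch; the app name and master host are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    // Simplified sketch of the driver setup; name and host are placeholders
    val conf = new SparkConf()
      .setAppName("MyApp")                      // placeholder app name
      .setMaster("spark://<master-host>:7077")  // placeholder master URL
      .set("spark.cores.max", "2")              // total cores requested
      .set("spark.executor.memory", "5g")       // memory per executor
    val sc = new SparkContext(conf)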

However, I'm getting the following error:

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check
your cluster UI to ensure that workers are registered and have sufficient
memory

What else should I check for?

This is a simplified setup (the real cluster has 20 nodes); here I am
starting the master and the slave manually.  The master's web UI shows the
worker as registered, and it lists the application with the memory/core
requirements I mentioned above.

I also tried running the SparkPi example via bin/run-example and got the
same result.  It requires 8 cores and 512MB of memory, which is also
clearly within the limits of the available worker.
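
The invocation was along these lines (approximate; the exact run-example
syntax depends on the Spark version, and the master host is a placeholder):

    # Approximate command; syntax varies by Spark version
    ./bin/run-example org.apache.spark.examples.SparkPi spark://<master-host>:7077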

Any ideas would be greatly appreciated!!

Matt
