"Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient resources".

I'm assuming you are submitting the job in coarse-grained mode; in that
case, make sure the job is not asking for more resources than the cluster
has available.
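
For example, you can cap the number of cores a coarse-grained application
takes so that a second application still gets resource offers. A minimal
Scala sketch, assuming placeholder values for the app name, master URL,
core cap, and executor memory:

    import org.apache.spark.{SparkConf, SparkContext}

    // Coarse-grained mode: Spark reserves its cores for the whole
    // lifetime of the application. If spark.cores.max is left unset,
    // one application can grab every core the cluster offers.
    val conf = new SparkConf()
      .setAppName("MyJob")                      // placeholder app name
      .setMaster("mesos://<master-host>:5050")  // placeholder master URL
      .set("spark.cores.max", "2")              // leave cores free for other apps
      .set("spark.executor.memory", "2g")       // placeholder memory setting
    val sc = new SparkContext(conf)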

If you want to submit multiple applications and run them side by side, then
you can submit the applications in fine-grained mode.
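
You can select fine-grained mode per application through the
spark.mesos.coarse property. Again, a rough sketch with placeholder values:

    import org.apache.spark.{SparkConf, SparkContext}

    // Fine-grained mode: each Spark task runs as a separate Mesos task,
    // so cores are released as tasks finish and several applications can
    // share the same machines dynamically (at the cost of some extra
    // task-launch overhead).
    val conf = new SparkConf()
      .setAppName("MyOtherJob")                 // placeholder app name
      .setMaster("mesos://<master-host>:5050")  // placeholder master URL
      .set("spark.mesos.coarse", "false")       // request fine-grained mode
    val sc = new SparkContext(conf)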

Read more here:
http://spark.apache.org/docs/latest/running-on-mesos.html#mesos-run-modes

Thanks
Best Regards

On Wed, Sep 2, 2015 at 4:02 PM, srungarapu vamsi <srungarapu1...@gmail.com>
wrote:

> Hi,
>
> I am using a Mesos cluster to run my Spark jobs.
> I have one mesos-master and two mesos-slaves set up on two machines.
> On one machine, both the master and a slave are set up; on the second
> machine, only a mesos-slave is set up.
> I run these on m3.large EC2 instances.
>
> 1. When I try to submit two jobs using spark-submit in parallel, one job
> hangs with the message: "Initial job has not accepted any resources; check
> your cluster UI to ensure that workers are registered and have sufficient
> resources". But when I check the Mesos cluster UI, which runs on port
> 5050, I can see idle memory that the hanging job could use; however, the
> number of idle cores is 1.
> So, does this mean that cores are pinned to a spark-submit job and no
> other spark-submit job can get those cores until the running one completes?
>
> 2. Assumption: "submitting multiple Spark jobs using spark-submit has the
> above-mentioned problem".
> Now my task is to run a Spark Streaming job which reads from Kafka and
> does some precomputation.
> The nature of my precomputation jobs is such that each job has a few
> mutually exclusive tasks to complete, and the tasks have an inherent tree
> structure: a task initiates a few other tasks, and those in turn initiate
> further tasks.
> I already have Spark jobs which run as batch jobs to perform the
> precomputations mentioned above. Now, is it a good idea to convert these
> precomputation jobs into Akka actors?
>
> 3. If running multiple spark-submit jobs with shared CPUs is possible at
> all, then for the scenario explained in point 2, which approach is better:
> "precomputation jobs as actors" or "multiple spark-submits"?
>
> Any pointers to clear up the above doubts are highly appreciated.
> --
> /Vamsi
>