What do the full driver logs show?  If you enable DEBUG logging, they
should give you more information about the rejected offers.  This can also
happen when offers are being accepted but the tasks immediately die for
some reason, so you should also check the Mesos UI for failed tasks.  If
any exist, please include those logs here as well.
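
In case it helps, here's a minimal sketch of turning DEBUG logging on
(assuming a stock Spark install; your log4j setup may differ):

  // From a running spark-shell, using the implicit SparkContext `sc`:
  sc.setLogLevel("DEBUG")

  // Or, to get DEBUG output from driver startup onward, copy
  // conf/log4j.properties.template to conf/log4j.properties and change
  //   log4j.rootCategory=INFO, console
  // to
  //   log4j.rootCategory=DEBUG, console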

On Tue, Nov 22, 2016 at 4:52 AM, John Yost <hokiege...@gmail.com> wrote:

> Hi Everyone,
>
> There is probably an obvious answer to this, but I'm not sure what it is. :)
>
> I am attempting to launch 2..n spark shells using Mesos as the master
> (this is to support 1..n researchers running pyspark stuff on our data). I
> can launch two or more spark shells without any problem. But when I
> attempt any kind of operation that requires a Spark executor outside the
> driver program, such as:
>
> val numbers = Range(1, 1000)
> val pNumbers = sc.parallelize(numbers)
> pNumbers.take(5)
>
> I get the dreaded message:
> TaskSchedulerImpl: Initial job has not accepted any resources; check your
> cluster UI to ensure that workers are registered and sufficient resources
>
> I confirmed that both spark shells are listed as separate, uniquely-named
> Mesos frameworks and that there are plenty of CPU cores and memory resources
> on our cluster.
>
> I am using Spark 2.0.1 on Mesos 0.28.1. Any ideas that y'all may have
> would be very much appreciated.
>
> Thanks! :)
>
> --John
>
>


-- 
Michael Gummelt
Software Engineer
Mesosphere
