Turning on Spark debug logs in conf/log4j.properties may help.  The problem
could be any number of things, including that your agents don't have enough
free resources to satisfy the default executor sizes.
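For example, a minimal sketch, assuming you start from the stock
conf/log4j.properties.template that ships with Spark 2.0 (the logger names
below are from Spark 2.0's source layout):

  # conf/log4j.properties -- copied from conf/log4j.properties.template
  log4j.rootCategory=INFO, console
  log4j.appender.console=org.apache.log4j.ConsoleAppender
  log4j.appender.console.target=System.err
  log4j.appender.console.layout=org.apache.log4j.PatternLayout
  log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

  # Surface the Mesos scheduler backend's offer handling, and the task
  # scheduler that printed your WARN
  log4j.logger.org.apache.spark.scheduler.cluster.mesos=DEBUG
  log4j.logger.org.apache.spark.scheduler.TaskSchedulerImpl=DEBUG

On the resource side, a quick sanity check is to request less than a single
agent offers and see whether the framework starts accepting offers.  The
numbers here are only illustrative; pick values below what your agents
advertise in the Mesos UI:

  ./spark-shell --master mesos://zk://moe:2181/mesos \
    --conf spark.mesos.executor.home=/opt/spark \
    --conf spark.executor.memory=512m \
    --conf spark.cores.max=2

If that version accepts offers, the original failure was a sizing problem
rather than a connectivity one.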

On Sun, Aug 14, 2016 at 2:37 PM, Peter Figliozzi <[email protected]>
wrote:

> Hi All, I am new to Mesos.  I set up a cluster this weekend with 3 agents
> and 1 master, running Mesos 1.0.0.  The resources show up in the Mesos UI
> and all the agents appear in the Agents tab.  So everything looks good from
> that vantage point.
>
> Next I installed Spark 2.0.0 on each agent and the master, in the same
> path (/opt/spark) on each machine.  I run the spark-shell from the master
> like this:
>
> ./spark-shell --master mesos://zk://moe:2181/mesos -c spark.mesos.executor.home=`pwd`
>
> The shell comes up nicely; however, none of the resources get assigned to
> the Spark framework (zeros for everything).
>
> If I try a simple task like
>
> sc.parallelize(0 to 10, 8).count
>
> it fails:
>
> WARN TaskSchedulerImpl: Initial job has not accepted any resources; check
> your cluster UI to ensure that workers are registered and have sufficient
> resources
>
>
> I'll post my logs in a little bit if need be.  Hopefully it's a common
> newbie error with a simple fix.
>
> Thank you
>



-- 
Michael Gummelt
Software Engineer
Mesosphere
