Problem solved: I was incorrectly specifying the Spark directory in the
command-line argument.  It wants the Spark root directory, not the bin
directory.

I unpacked mine in /opt, so the right way is:

spark.mesos.executor.home=/opt/spark-2.0.0-bin-hadoop2.7/

Wrong way:

spark.mesos.executor.home=/opt/spark-2.0.0-bin-hadoop2.7/bin
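
For reference, here is the full launch with the corrected setting (just a
sketch based on my earlier command quoted below; the zk:// URL is specific
to my cluster, so substitute your own):

./spark-shell --master mesos://zk://moe:2181/mesos \
  -c spark.mesos.executor.home=/opt/spark-2.0.0-bin-hadoop2.7/

I believe you could also put the same property in conf/spark-defaults.conf
on the machine you launch from:

spark.mesos.executor.home  /opt/spark-2.0.0-bin-hadoop2.7/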

On Sun, Aug 14, 2016 at 9:09 PM, Peter Figliozzi <pete.figlio...@gmail.com>
wrote:

>
> The first thing I notice when I run the spark-shell is *three failed
> 'sandbox' tasks* appearing in the Mesos UI (one for each agent), followed
> by another three, as if it tried twice.
>
> I attached the Mesos logs from the master and one of the agents.
>
> All of this happens when I run the spark-shell... before any commands in
> the shell.
>
> Summary/Master:
>
>    - Received SUBSCRIBE call for spark shell
>    - Adds and launches task with 4 cpus and 1408 mem on each agent
>    - Something about "Processing ACCEPT call for offers"
>    - TASK_RUNNING on all agents
>    - Processing ACKNOWLEDGE calls all agents
>    - Status update TASK_FAILED from all agents
>
> Summary/Agent:
>
>    - "Got assigned" task 1
>    - Trying to chown /var/lib/mesos/slaves/..... to user 'peter'
>    - Launching executor 1 of framework with resources cpus 0.1, mem 32 in
>    work directory...
>    - Queuing task '1' for executor '1'...
>    - docker: No container info found, skipping launch
>    - containerizer: starting container...
>    - linux_launcher: cloning child process with flags =
>    - systemd: assigned child process '17043' to 'mesos_executors.slice'
>    - ... some more slave stuff ....
>    - Handling status update TASK_FAILED for task 1
>
> Could someone explain what's supposed to happen here when running the
> spark shell?
>
> Thanks,
>
> Pete
>
>
> On Sun, Aug 14, 2016 at 6:57 PM, Michael Gummelt <mgumm...@mesosphere.io>
> wrote:
>
>> Turning on Spark debug logs in conf/log4j.properties may help.  The
>> problem could be any number of things, including that you don't have enough
>> resources for the default executor sizes.
>>
>> On Sun, Aug 14, 2016 at 2:37 PM, Peter Figliozzi <
>> pete.figlio...@gmail.com> wrote:
>>
>>> Hi All, I am new to Mesos.  I set up a cluster this weekend with 3
>>> agents, 1 master, Mesos 1.0.0.  The resources show up in the Mesos UI and
>>> the agents all appear in the Agents tab, so everything looks good from
>>> that vantage point.
>>>
>>> Next I installed Spark 2.0.0 on each agent and the master, in the same
>>> path (/opt/spark) on each machine.  I run the spark-shell from the master
>>> like this:
>>>
>>> ./spark-shell --master mesos://zk://moe:2181/mesos -c
>>> spark.mesos.executor.home=`pwd`
>>>
>>> The shell comes up nicely; however, none of the resources get assigned
>>> to the Spark framework (zeros for everything).
>>>
>>> If I try a simple task like
>>>
>>> sc.parallelize(0 to 10, 8).count
>>>
>>> it fails:
>>>
>>> WARN TaskSchedulerImpl: Initial job has not accepted any resources;
>>> check your cluster UI to ensure that workers are registered and have
>>> sufficient resources
>>>
>>>
>>> I'll post my logs in a little bit if need be.  Hopefully it's a common
>>> newb error with a simple fix.
>>>
>>> Thank you
>>>
>>
>>
>>
>> --
>> Michael Gummelt
>> Software Engineer
>> Mesosphere
>>
>
>
