Hello Tim,

Here is my conf/spark-defaults.conf, which is inside the docker image:

$ cat conf/spark-defaults.conf

spark.mesos.coarse: false
spark.mesos.executor.docker.image: docker-registry/mesos-spark:master-12
spark.mesos.mesosExecutor.cores: 0.25
spark.mesos.executor.home: /opt/spark
spark.mesos.uris: file:///etc/docker.tar.gz

I am already setting it inside the docker image.
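
For what it's worth, this is how I verify that the file is actually baked
into the image (a sketch, assuming as above that Spark lives under
/opt/spark inside the image):

$ docker run --rm docker-registry/mesos-spark:master-12 \
    cat /opt/spark/conf/spark-defaults.conf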

Am I missing something?

Regards,

On Tue, Mar 15, 2016 at 4:37 PM, Tim Chen <[email protected]> wrote:

> Hi Pradeep,
>
> Yes, we still have a pending PR that will start propagating these settings
> down to the executors; right now the setting is only applied on the driver.
> As a workaround, you can set spark.mesos.executor.docker.image in the
> spark-defaults.conf file in the docker image you use to launch the driver,
> and the driver should automatically pick up this setting when it is launched.
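>
> For example, baking something like this into the driver image should be
> enough (a sketch; the image tag below is just a placeholder):
>
> # conf/spark-defaults.conf inside the driver image
> spark.mesos.executor.docker.image docker-registry/mesos-spark:master-12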
>
> Tim
>
> On Tue, Mar 15, 2016 at 9:26 AM, Pradeep Chhetri <
> [email protected]> wrote:
>
>> Hello Timothy,
>>
>> I am setting spark.mesos.executor.docker.image. In my case, the driver
>> is actually started as a docker container (SparkPi in the screenshot), but
>> the tasks spawned by the driver are started as plain Java processes rather
>> than containers. Is this expected?
>>
>> Thanks
>>
>> On Tue, Mar 15, 2016 at 4:19 PM, Timothy Chen <[email protected]> wrote:
>>
>>> You can launch the driver and executor in docker containers as well by
>>> setting spark.mesos.executor.docker.image to the image you want to use to
>>> launch them.
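>>>
>>> For example, a submission through the dispatcher would look roughly like
>>> this (a sketch; the dispatcher host, image name and jar location are
>>> placeholders):
>>>
>>> spark-submit \
>>>   --master mesos://dispatcher-host:7077 \
>>>   --deploy-mode cluster \
>>>   --conf spark.mesos.executor.docker.image=docker-registry/mesos-spark:master-12 \
>>>   --class org.apache.spark.examples.SparkPi \
>>>   http://some-host/spark-examples.jar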
>>>
>>> Tim
>>>
>>> On Mar 15, 2016, at 8:49 AM, Radoslaw Gruchalski <[email protected]>
>>> wrote:
>>>
>>> Pradeep,
>>>
>>> You can mount a Spark directory as a volume. This means you have to have
>>> Spark deployed on every agent.
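>>>
>>> For example, in spark-defaults.conf (a sketch, assuming Spark is deployed
>>> under /opt/spark on every agent; spark.mesos.executor.docker.volumes takes
>>> the host_path:container_path[:ro|rw] format):
>>>
>>> spark.mesos.executor.docker.volumes /opt/spark:/opt/spark:ro
>>> spark.mesos.executor.home /opt/spark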
>>>
>>> Another thing you can do is place Spark in HDFS, assuming you have HDFS
>>> available, but that too will download a copy to the sandbox.
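>>>
>>> For the HDFS route, that would be something like this (a sketch; the
>>> namenode address and tarball path are placeholders):
>>>
>>> spark.executor.uri hdfs://namenode:8020/spark/spark-1.6.0-bin-hadoop2.6.tgz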
>>>
>>> I'd prefer the former.
>>>
>>> _____________________________
>>> From: Pradeep Chhetri <[email protected]>
>>> Sent: Tuesday, March 15, 2016 4:41 pm
>>> Subject: Apache Spark Over Mesos
>>> To: <[email protected]>
>>>
>>>
>>> Hello,
>>>
>>> I am able to run Apache Spark over Mesos. It's quite simple to run the
>>> Spark Dispatcher over Marathon and ask it to run the Spark driver as a
>>> docker container.
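>>>
>>> The Marathon app essentially just runs the dispatcher; the command looks
>>> roughly like this (a sketch; the ZooKeeper address is a placeholder):
>>>
>>> /opt/spark/bin/spark-class \
>>>   org.apache.spark.deploy.mesos.MesosClusterDispatcher \
>>>   --master mesos://zk://zk-host:2181/mesos \
>>>   --name spark-dispatcher \
>>>   --port 7077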
>>>
>>> I have a query regarding this:
>>>
>>> All Spark tasks are spawned only after first downloading the Spark
>>> artifacts. I was wondering whether there is some way to start them as
>>> docker containers too, which would save the time spent downloading the
>>> Spark artifacts. I am running Spark in fine-grained mode.
>>>
>>> I have attached a screenshot of a sample job:
>>>
>>> <Screen Shot 2016-03-15 at 15.15.06.png>
>>> Thanks,
>>>
>>> --
>>> Pradeep Chhetri
>>>
>>>
>>>
>>
>>
>> --
>> Pradeep Chhetri
>>
>
>


-- 
Pradeep Chhetri
