Hello Timothy,

I am setting spark.mesos.executor.docker.image. In my case the driver is
indeed started as a Docker container (SparkPi in the screenshot), but the
tasks spawned by the driver are not started as containers, only as plain
Java processes. Is this expected?
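
For context, a rough sketch of the kind of submission I mean (the
dispatcher URL, image name, and jar location are placeholders, not my
actual values):

```shell
# Cluster-mode submission with the executor Docker image set.
# Host names, image name, and paths here are all placeholders.
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --conf spark.mesos.executor.docker.image=my-registry/spark:1.6.1 \
  --class org.apache.spark.examples.SparkPi \
  http://artifact-host/spark-examples.jar 1000
```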

Thanks

On Tue, Mar 15, 2016 at 4:19 PM, Timothy Chen <[email protected]> wrote:

> You can launch the driver and executor in docker containers as well by
> setting spark.mesos.executor.docker.image to the image you want to use to
> launch them.
>
> Tim
>
> On Mar 15, 2016, at 8:49 AM, Radoslaw Gruchalski <[email protected]>
> wrote:
>
> Pradeep,
>
> You can mount a Spark directory as a volume. This means you have to have
> Spark deployed on every agent.
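>
> A minimal sketch of that volume approach (paths and image name are
> placeholders; assumes executors run in Docker via
> spark.mesos.executor.docker.image):
>
> ```shell
> # Mount a Spark directory pre-deployed on each agent into the executor
> # container, read-only. Hosts, paths, and image name are placeholders.
> spark-submit \
>   --master mesos://dispatcher-host:7077 \
>   --conf spark.mesos.executor.docker.image=my-registry/spark:1.6.1 \
>   --conf spark.mesos.executor.docker.volumes=/opt/spark:/opt/spark:ro \
>   --class org.apache.spark.examples.SparkPi \
>   http://artifact-host/spark-examples.jar 1000
> ```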
>
> Another thing you can do is place Spark in HDFS, assuming that you have
> HDFS available, but that too will download a copy to the sandbox.
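>
> A sketch of the HDFS variant (namenode address and tarball path are
> placeholders): pointing spark.executor.uri at a Spark tarball makes each
> executor fetch it into its sandbox before starting.
>
> ```shell
> # Executors fetch this tarball into their sandbox; the HDFS path,
> # hosts, and jar location below are placeholders.
> spark-submit \
>   --master mesos://dispatcher-host:7077 \
>   --conf spark.executor.uri=hdfs://namenode:8020/dist/spark-1.6.1-bin-hadoop2.6.tgz \
>   --class org.apache.spark.examples.SparkPi \
>   http://artifact-host/spark-examples.jar 1000
> ```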
>
> I'd prefer the former.
>
> Sent from Outlook Mobile <https://aka.ms/qtex0l>
>
> _____________________________
> From: Pradeep Chhetri <[email protected]>
> Sent: Tuesday, March 15, 2016 4:41 pm
> Subject: Apache Spark Over Mesos
> To: <[email protected]>
>
>
> Hello,
>
> I am able to run Apache Spark over Mesos. It's quite simple to run the
> Spark Dispatcher over Marathon and have it run the Spark executor (which I
> guess can also be called the Spark driver) as a Docker container.
>
> I have a query regarding this:
>
> All Spark tasks are spawned by first downloading the Spark artifacts
> directly. I was wondering whether there is some way to start them as
> Docker containers too; this would save the time spent downloading the
> Spark artifacts. I am running Spark in fine-grained mode.
>
> I have attached a screenshot of a sample job.
>
> <Screen Shot 2016-03-15 at 15.15.06.png>
> Thanks,
>
> --
> Pradeep Chhetri
>
>
>


-- 
Pradeep Chhetri
