You can launch the driver and executors in Docker containers as well by setting 
spark.mesos.executor.docker.image to the image you want to use to launch them.
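
A minimal sketch of the relevant settings, e.g. in spark-defaults.conf — the image name and volume paths below are placeholders, not values from this thread:

```properties
# Docker image used to launch the driver and executors on Mesos
# (image name here is an assumption)
spark.mesos.executor.docker.image    example-registry/spark:1.6.0

# Optionally mount a Spark directory from the host into the container,
# avoiding the artifact download (host/container paths are placeholders)
spark.mesos.executor.docker.volumes  /opt/spark:/opt/spark:ro
```

The same properties can also be passed per job via spark-submit --conf.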

Tim

> On Mar 15, 2016, at 8:49 AM, Radoslaw Gruchalski <ra...@gruchalski.com> wrote:
> 
> Pradeep,
> 
> You can mount a Spark directory as a volume. This means you have to have 
> Spark deployed on every agent.
> 
> Another thing you can do is place Spark in HDFS, assuming you have HDFS 
> available, but that too will download a copy to the sandbox.
> 
> I'd prefer the former.
> 
> Sent from Outlook Mobile
> 
> _____________________________
> From: Pradeep Chhetri <pradeep.chhetr...@gmail.com>
> Sent: Tuesday, March 15, 2016 4:41 pm
> Subject: Apache Spark Over Mesos
> To: <user@mesos.apache.org>
> 
> 
> Hello,
> 
> I am able to run Apache Spark over Mesos. It's quite simple to run the Spark 
> Dispatcher over Marathon and have it run the Spark Executor (which I guess can 
> also be called the Spark Driver) as a Docker container.
> 
> I have a query regarding this:
> 
> All Spark tasks are spawned directly, by first downloading the Spark 
> artifacts. I was wondering if there is some way I can start them as Docker 
> containers too, which would save the time spent downloading the Spark 
> artifacts. I am running Spark in fine-grained mode.
> 
> I have attached a screenshot of a sample job.
> 
> <Screen Shot 2016-03-15 at 15.15.06.png> 
>  
> Thanks,
> 
> -- 
> Pradeep Chhetri
> 
> 
