[ https://issues.apache.org/jira/browse/SPARK-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194679#comment-14194679 ]

Chris Heller commented on SPARK-2691:
-------------------------------------

Ok here is the patch as a PR: https://github.com/apache/spark/pull/3074

[~tarnfeld] feel free to expand on this patch. I was looking at the code today 
and realized that coarse-mode support should be trivial (just setting a 
ContainerInfo inside the created TaskInfo) -- it just cannot reuse the 
fine-grained code path in its current form, since that assumes passing an 
ExecutorInfo, but it could easily be generalized over a ContainerInfo instead.
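
To make that concrete, here is a rough sketch against the Mesos protobuf API 
(Scala; the helper name is mine, illustrative only, not from the PR):

{noformat}
import org.apache.mesos.Protos._

// Illustrative helper: attach a Docker ContainerInfo to the TaskInfo the
// coarse-grained backend builds, instead of relying on an ExecutorInfo.
def withDockerContainer(taskBuilder: TaskInfo.Builder, image: String): TaskInfo.Builder = {
  val docker = ContainerInfo.DockerInfo.newBuilder()
    .setImage(image) // e.g. an image built from the Dockerfile below
  val container = ContainerInfo.newBuilder()
    .setType(ContainerInfo.Type.DOCKER) // run the task inside Docker
    .setDocker(docker)
  taskBuilder.setContainer(container)
}
{noformat}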

We are not shipping the Spark image as an executor URI; instead, Spark is 
bundled in the image. Only a stock Spark is needed in the image. A simple 
Dockerfile would look like the following (assuming you have a Spark tarball 
and libmesos in the same directory as the Dockerfile):

{noformat}
FROM ubuntu

# Install a headless JRE and Python, which Spark needs at runtime
RUN apt-get -y update
RUN apt-get -y install default-jre-headless
RUN apt-get -y install python2.7

# Unpack a stock Spark distribution and drop in the Mesos native library
ADD spark-1.1.0-bin-hadoop1.tgz /
RUN mv /spark-1.1.0-bin-hadoop1 /spark
COPY libmesos-0.20.1.so /usr/lib/libmesos.so

# Tell Spark where it lives and where to find libmesos
ENV SPARK_HOME /spark
ENV MESOS_JAVA_NATIVE_LIBRARY /usr/lib/libmesos.so

CMD ps -ef
{noformat}
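
With an image like that published, pointing Spark at it is just configuration. 
For example (assuming the configuration key proposed in the PR; the image name 
is a placeholder):

{noformat}
import org.apache.spark.SparkConf

// "my-registry/spark:1.1.0" is a placeholder image name; the config key
// is the one proposed in the PR above.
val conf = new SparkConf()
  .setMaster("mesos://zk://host:2181/mesos")
  .set("spark.mesos.executor.docker.image", "my-registry/spark:1.1.0")
{noformat}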

[~yoeduardoj] one awesome thing, which is actually beyond the scope of Docker 
support but still related to Mesos, would be the ability to configure which 
role and attributes in a Mesos offer Spark filters on -- this is not directly 
relevant, I just wanted to bring it up while folks are digging into the Mesos 
backend code.
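
Something like the following, purely illustrative, sketch of attribute-based 
offer filtering (the helper and the constraint map are not from any existing 
code):

{noformat}
import scala.collection.JavaConverters._
import org.apache.mesos.Protos.Offer

// Illustrative only: accept an offer when every required attribute
// (e.g. "rack" -> "us-east-1") matches a text attribute on the offer.
def offerMatches(offer: Offer, required: Map[String, String]): Boolean = {
  val attrs = offer.getAttributesList.asScala
    .filter(_.hasText)
    .map(a => a.getName -> a.getText.getValue)
    .toMap
  required.forall { case (name, value) => attrs.get(name).exists(_ == value) }
}
{noformat}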

> Allow Spark on Mesos to be launched with Docker
> -----------------------------------------------
>
>                 Key: SPARK-2691
>                 URL: https://issues.apache.org/jira/browse/SPARK-2691
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Timothy Chen
>            Assignee: Timothy Chen
>              Labels: mesos
>         Attachments: spark-docker.patch
>
>
> Currently, to launch Spark with Mesos one must upload a tarball and specify 
> the executor URI to be passed in, which is downloaded on each slave, or even 
> on each execution, depending on whether coarse mode is used.
> We want to make Spark able to support launching executors via a Docker image 
> that utilizes the recent Docker and Mesos integration work. 
> With the recent integration, Spark can simply specify a Docker image and the 
> options that are needed, and it should continue to work as-is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
