Thanks for pointing this out, I had not seen this one. Wow, that's exactly
what one needs to run a mesos slave in docker. But the image is not kept up
to date: the latest tag is 0.2.4_mesos-0.26.0_docker-1.8.2_ubuntu-14.04.3.
Do you know how one can trigger an update to keep it on par with
mesosphere/mesos-slave?

On Tue, Mar 15, 2016 at 1:53 AM, Aaron Carey <aca...@ilm.com> wrote:

> Would the officially provided docker-in-docker image help?
>
> mesosphere/mesos-slave-dind
>
>
> ------------------------------
> *From:* Yuri Finkelstein [yurif2...@gmail.com]
> *Sent:* 15 March 2016 04:25
> *To:* user@mesos.apache.org
>
> *Subject:* Re: running mesos slave in a docker container
>
> Sure, but my point was: why would mesosphere not put the docker binary in
> the official docker image? Maintaining my own docker image of anything is the
> last resort I reach for. That's what "official" images are for, after all.
>
> On Mon, Mar 14, 2016 at 8:30 PM, Yong Tang <yong.tang.git...@outlook.com>
> wrote:
>
>> One way to avoid mapping docker's library dependencies between the host and
>> the docker container is to install the docker binaries inside the docker
>> container:
>>
>>
>> https://docs.docker.com/engine/installation/binaries/
>>
>>
>> and then map /var/run/docker.sock between the host and the container. This
>> way, library dependency conflicts between the host and the container can be
>> mostly avoided.
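>>
>> For example, a rough sketch of this approach (the static-binary URL, the
>> docker version, and the image tag are assumptions; pick the build that
>> matches the host daemon from the page above):
>>
>> ```
>> # Bake a statically linked docker client into a derived image so that no
>> # host libraries have to be bind-mounted into the container.
>> mkdir -p mesos-slave-docker && cd mesos-slave-docker
>> curl -fsSL https://get.docker.com/builds/Linux/x86_64/docker-1.10.3 -o docker
>> chmod +x docker
>> cat > Dockerfile <<'EOF'
>> FROM mesosphere/mesos-slave
>> COPY docker /usr/local/bin/docker
>> EOF
>> docker build -t local/mesos-slave-docker .
>> ```
>>
>> At run time only /var/run/docker.sock then needs to be shared with the host,
>> in addition to the usual mesos-slave options shown elsewhere in this thread.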
>>
>>
>> Thanks
>> Yong
>>
>> ------------------------------
>> Date: Mon, 14 Mar 2016 18:49:45 -0700
>> Subject: Re: running mesos slave in a docker container
>> From: yurif2...@gmail.com
>> To: user@mesos.apache.org
>>
>>
>> Enumerating each and every library path and dealing with potential conflicts
>> between the host and container libc, etc. - I didn't want to go down that
>> road; it's quite bad imho.
>>
>> On Mon, Mar 14, 2016 at 6:42 PM, haosdent <haosd...@gmail.com> wrote:
>>
>> >2. --volumes-from
>> So far the DockerContainerizer in Mesos doesn't support this option.
>>
>> >1. What is the best method to point mesos-slave running in a container
>> >to a working copy of docker binary
>> Usually I mount the docker binary into the container from the host.
>>
>> ```
>> docker run --privileged -d \
>>   --name=mesos-slave \
>>   --net=host \
>>   -p 31000-31300:31000-31300 \
>>   -p 5051:5051 \
>>   -v /usr/bin/docker:/bin/docker \
>>   -v /lib/x86_64-linux-gnu/libdevmapper.so.1.02.1:/usr/lib/libdevmapper.so.1.02 \
>>   -v /lib/x86_64-linux-gnu/libpthread.so.0:/lib/libpthread.so.0 \
>>   -v /usr/lib/x86_64-linux-gnu/libsqlite3.so:/lib/libsqlite3.so.0 \
>>   -v /lib/x86_64-linux-gnu/libudev.so.1:/lib/libudev.so.1 \
>>   -v /var/run/docker.sock:/var/run/docker.sock \
>>   -v /sys:/sys \
>>   -v /tmp:/tmp \
>>   -e MESOS_MASTER=zk://10.10.10.9:2181/mesos \
>>   -e MESOS_LOG_DIR=/tmp/log \
>>   -e MESOS_CONTAINERIZERS=docker \
>>   -e MESOS_LOGGING_LEVEL=INFO \
>>   -e MESOS_IP=10.10.10.9 \
>>   -e MESOS_WORK_DIR=/tmp \
>>   mesosphere/mesos-slave mesos-slave
>> ```
>>
>> On Tue, Mar 15, 2016 at 8:47 AM, Yuri Finkelstein <yurif2...@gmail.com>
>> wrote:
>>
>> Since mesosphere distributes images of mesos software as containers (
>> https://hub.docker.com/r/mesosphere/mesos-slave/), I decided to try this
>> option. After experimenting with various settings I settled on a
>> configuration that basically works. But I do see one problem, and that is
>> what this message is about.
>>
>> To start off, I find it strange that the image does not contain the docker
>> distribution itself. After all, in order to use containerizers=docker one
>> needs to point mesos slave at a docker binary. If I bind-mount the docker
>> binary to the container's /usr/local/bin/docker and use the option
>> --docker=/usr/local/bin/docker, I run into the problem of dynamic library
>> dependencies: docker depends on a bunch of dynamic libraries:
>> ======================
>> ldd /usr/bin/docker
>> linux-vdso.so.1 =>  (0x00007fffaebfe000)
>> libsystemd-journal.so.0 => /lib/x86_64-linux-gnu/libsystemd-journal.so.0 (0x00007f0a1458b000)
>> libapparmor.so.1 => /usr/lib/x86_64-linux-gnu/libapparmor.so.1 (0x00007f0a1437f000)
>> libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f0a14160000)
>> libdevmapper.so.1.02.1 => /lib/x86_64-linux-gnu/libdevmapper.so.1.02.1 (0x00007f0a13f27000)
>> libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0a13b62000)
>> ... and many more
>> ======================
>> Mounting */lib/x86_64-linux-gnu/* into docker is a horrible idea and not
>> worth discussing. So I wonder what the rationale was behind the decision not
>> to include the docker binary in the mesosphere image, and how other people
>> solve this problem.
>>
>>
>> Here is one solution that I found. I use *docker:dind*, not as a running
>> container but rather as a volume source:
>>
>> ==============================
>> docker create --name "docker-proxy" \
>>   -v /var/run/docker.sock:/var/run/docker.sock \
>>   -v /usr/local/bin \
>>   docker:dind
>> ==============================
>>
>>
>> This created container holds a fully functional docker binary in its
>> /usr/local/bin, and this is all I need it for. To make the mesos-slave
>> container see that binary I simply use the *--volumes-from* option:
>> ==========
>> docker run -d --restart=unless-stopped --volumes-from "docker-proxy" \
>>   --name $MESOS_SLAVE $MESOS_SLAVE_IMAGE \
>>   --docker=/usr/local/bin/docker \
>>   --containerizers="docker,mesos" ...
>> ==========
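>>
>> As a quick sanity check (just a sketch; overriding the image entrypoint this
>> way is an assumption about how the mesosphere image is built), one can
>> confirm that the proxy volume actually exposes the binary:
>>
>> ```
>> # Run a throwaway container from the slave image, attach the docker-proxy
>> # volumes, and ask the shared docker binary for its version.
>> docker run --rm --volumes-from "docker-proxy" \
>>   --entrypoint /usr/local/bin/docker \
>>   -v /var/run/docker.sock:/var/run/docker.sock \
>>   $MESOS_SLAVE_IMAGE version
>> ```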
>>
>> This works like a charm. But there is the following problem.
>> In order for mesos-slave to function in this mode, it needs to spawn its
>> executors in docker containers as well. For that purpose mesos slave has the
>> option *--docker_mesos_image=*, which should be set to the same container
>> image name that is used to launch mesos slave itself. If I do this,
>> --docker_mesos_image="$MESOS_SLAVE_IMAGE",
>>
>> I see that every attempt to spawn a task fails, because the option
>> *--docker=/usr/local/bin/docker* is apparently injected into the executor
>> container but the *--volumes-from="docker-proxy"* option is NOT! So the
>> executor becomes dysfunctional without the docker binary.
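>>
>> A quick way to see this (just a sketch; the "mesos-" name prefix for
>> executor containers is an assumption about the DockerContainerizer's naming
>> scheme) is to dump the mounts of the containers the slave has launched:
>>
>> ```
>> # List containers started by the containerizer and print their mounts;
>> # the /usr/local/bin volume from "docker-proxy" is missing on executors.
>> for c in $(docker ps -a --filter 'name=mesos-' --format '{{.Names}}'); do
>>   echo "== $c"
>>   docker inspect \
>>     --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' "$c"
>> done
>> ```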
>>
>>
>> So, to summarize, I'm raising 2 questions:
>> 1. What is the best method to point mesos-slave running in a container to
>> a working copy of the docker binary, and to make this work such that
>> executor containers also inherit visibility of this binary?
>> 2. If my proposed method based on docker:dind is deemed reasonable in
>> general, should I file a Jira to request that, in addition to
>> *--docker_mesos_image*, one gets the ability to pass additional settings
>> such as *--volumes-from* to the executor container? This is not easy to
>> formulate, as other similar options may potentially need to be configured
>> as well.
>>
>>
>> P.S. The full script showing how I launch the mesos slave is below:
>>
>> for i in ${MESOS_SLAVE_NODES[*]}; do
>>   eval $(docker-machine env $i)
>>   NODE_IP=$(docker-machine ip $i)
>>
>>   # mesos-slave requires access to the docker binary, but the container image
>>   # does not contain it. For that reason I'm creating (but not running!) a
>>   # docker-in-docker container which contains a statically linked version of
>>   # the docker binary in /usr/local/bin. Then, using the '--volumes-from'
>>   # option on the mesos container, I'm making this binary visible.
>>   remove_container "docker-proxy"
>>   docker create --name "docker-proxy" \
>>     -v /var/run/docker.sock:/var/run/docker.sock \
>>     -v /usr/local/bin \
>>     docker:dind
>>
>>   remove_container $MESOS_SLAVE
>>   log "Starting mesos slave on $i"
>>
>>   docker run -d --restart=unless-stopped --volumes-from "docker-proxy" \
>>     --name $MESOS_SLAVE \
>>     --net='host' \
>>     --pid='host' \
>>     -e "TZ=$TIMEZONE" \
>>     --privileged \
>>     -v /sys/fs/cgroup:/host/sys/fs/cgroup \
>>     $MESOS_SLAVE_IMAGE \
>>     --master="zk://$zk/mesos" \
>>     --advertise_ip=$NODE_IP \
>>     --ip=$NODE_IP \
>>     --resources="ports:[8000-9000, 3000-3200]" \
>>     --cgroups_hierarchy=/host/sys/fs/cgroup \
>>     --docker=/usr/local/bin/docker \
>>     --containerizers="docker,mesos" \
>>     --log_dir=/var/log/mesos \
>>     --logging_level=INFO \
>>     --docker_remove_delay=1hrs \
>>     --gc_delay=2hrs \
>>     --executor_registration_timeout=5mins
>>     # --docker_mesos_image="$MESOS_SLAVE_IMAGE" \
>> done
>>
>> --
>> Best Regards,
>> Haosdent Huang
>>
>>
>>
>
