Hi Tim,

Yes, you're right: I was running the latest automated build from the master 
branch; with v0.7.1 everything works fine.

I am running Marathon, the Mesos master and slave, and ZooKeeper, each in a 
container on CoreOS.

My plan is to create the unit files needed to run it on CoreOS, and to write a 
simple HTTP endpoint (probably in Node) that uses Hipache to route the 
requests (as I will have to run a lot of containers).
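
The Hipache half of that plan can be surprisingly small, since Hipache routes from Redis lists keyed `frontend:<domain>`. A minimal sketch of registering one Marathon task (the domain, app id, and backend address are made-up placeholders; the commands are printed rather than executed, so no Redis server is needed to try it):

```shell
#!/bin/sh
# Hypothetical sketch: register a Marathon task with Hipache.
# Hipache reads one Redis list per virtual host, "frontend:<domain>",
# where the first element is an identifier and the rest are backend URLs.
DOMAIN="ubuntu.example.com"      # placeholder vhost
APP_ID="ubuntu"                  # app id from Docker.json below
BACKEND="http://1.2.3.4:31000"   # placeholder slave host:port

# Print the redis-cli commands instead of running them; pipe the
# output to sh once a Redis server is actually available.
echo "redis-cli rpush frontend:$DOMAIN $APP_ID"
echo "redis-cli rpush frontend:$DOMAIN $BACKEND"
```

The HTTP endpoint would then just issue these RPUSHes (and an LREM on scale-down) whenever a task starts or stops.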

Are you interested in putting this in the main repo, or should I create a 
dedicated side project?

Thanks again,

Alessandro

> On 28 Oct 2014, at 22:19, Tim Chen <[email protected]> wrote:
> 
> Hi Alessandro,
> 
> I think Mesos is running your task fine, but Marathon is killing your task.
> 
> Are you launching Marathon through a docker container as well? And what 
> version of Marathon are you using?
> 
> Tim
> 
> On Tue, Oct 28, 2014 at 2:07 PM, Alessandro Siragusa 
> <[email protected]> wrote:
> Hi guys,
> 
> I still have a problem running mesos-slave in a Docker container. It 
> continuously kills and restarts the containers on all three slave nodes. In 
> the Marathon UI I can see multiple instances at the same time on all the 
> nodes.
> 
> I1028 20:43:19.572377     8 slave.cpp:1002] Got assigned task 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c for framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:19.572691     8 slave.cpp:1112] Launching task 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c for framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:19.573457     8 slave.cpp:1222] Queuing task 
> 'ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c' for executor 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> '20141027-004948-352326060-38124-1-0000
> I1028 20:43:19.575451    13 docker.cpp:743] Starting container 
> '7f23db8e-9fb5-4e20-9f06-eb4caf361d86' for task 
> 'ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c' (and executor 
> 'ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c') of framework 
> '20141027-004948-352326060-38124-1-0000'
> I1028 20:43:20.936192     8 slave.cpp:2538] Monitoring executor 
> 'ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> '20141027-004948-352326060-38124-1-0000' in container 
> '7f23db8e-9fb5-4e20-9f06-eb4caf361d86'
> I1028 20:43:20.947391    13 slave.cpp:1733] Got registration for executor 
> 'ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> 20141027-004948-352326060-38124-1-0000 from executor(1)@176.31.235.180:42593
> I1028 20:43:20.947986    13 slave.cpp:1853] Flushing queued task 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c for executor 
> 'ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:20.949553     9 slave.cpp:2088] Handling status update 
> TASK_RUNNING (UUID: ebb06849-c0ed-470e-95f0-3c652f6a2eee) for task 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 from executor(1)@176.31.235.180:42593
> I1028 20:43:20.949733     6 status_update_manager.cpp:320] Received status 
> update TASK_RUNNING (UUID: ebb06849-c0ed-470e-95f0-3c652f6a2eee) for task 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:20.949831     6 status_update_manager.cpp:373] Forwarding status 
> update TASK_RUNNING (UUID: ebb06849-c0ed-470e-95f0-3c652f6a2eee) for task 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 to [email protected]:5050
> I1028 20:43:20.949935     6 slave.cpp:2252] Sending acknowledgement for 
> status update TASK_RUNNING (UUID: ebb06849-c0ed-470e-95f0-3c652f6a2eee) for 
> task ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 to executor(1)@176.31.235.180:42593
> I1028 20:43:20.955905    10 status_update_manager.cpp:398] Received status 
> update acknowledgement (UUID: ebb06849-c0ed-470e-95f0-3c652f6a2eee) for task 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:21.938161     9 docker.cpp:1286] Updated 'cpu.shares' to 614 at 
> /sys/fs/cgroup/cpu,cpuacct/system.slice/docker-e84f6042adeb23101e6a147f6f5ed1a01f748b432d7e33c8c8d1e9c091487095.scope
>  for container 7f23db8e-9fb5-4e20-9f06-eb4caf361d86
> I1028 20:43:21.938460     9 docker.cpp:1321] Updated 
> 'memory.soft_limit_in_bytes' to 544MB for container 
> 7f23db8e-9fb5-4e20-9f06-eb4caf361d86
> I1028 20:43:21.938865     9 docker.cpp:1347] Updated 'memory.limit_in_bytes' 
> to 544MB at 
> /sys/fs/cgroup/memory/system.slice/docker-e84f6042adeb23101e6a147f6f5ed1a01f748b432d7e33c8c8d1e9c091487095.scope
>  for container 7f23db8e-9fb5-4e20-9f06-eb4caf361d86
> I1028 20:43:25.571907    10 slave.cpp:1002] Got assigned task 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c for framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:25.572101    10 slave.cpp:1112] Launching task 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c for framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:25.572816    10 slave.cpp:1222] Queuing task 
> 'ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c' for executor 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> '20141027-004948-352326060-38124-1-0000
> I1028 20:43:25.574316    12 docker.cpp:743] Starting container 
> 'b02a4193-0aea-47cd-b2ef-bf4d9cdf7c4a' for task 
> 'ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c' (and executor 
> 'ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c') of framework 
> '20141027-004948-352326060-38124-1-0000'
> I1028 20:43:26.941576     8 slave.cpp:2538] Monitoring executor 
> 'ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> '20141027-004948-352326060-38124-1-0000' in container 
> 'b02a4193-0aea-47cd-b2ef-bf4d9cdf7c4a'
> I1028 20:43:26.953220     8 slave.cpp:1733] Got registration for executor 
> 'ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> 20141027-004948-352326060-38124-1-0000 from executor(1)@176.31.235.180:47347
> I1028 20:43:26.953403     8 slave.cpp:1853] Flushing queued task 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c for executor 
> 'ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:26.955296    13 slave.cpp:2088] Handling status update 
> TASK_RUNNING (UUID: 856e1be8-1eeb-441d-b3da-9087c71122e8) for task 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 from executor(1)@176.31.235.180:47347
> I1028 20:43:26.955451     6 status_update_manager.cpp:320] Received status 
> update TASK_RUNNING (UUID: 856e1be8-1eeb-441d-b3da-9087c71122e8) for task 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:26.955806     6 status_update_manager.cpp:373] Forwarding status 
> update TASK_RUNNING (UUID: 856e1be8-1eeb-441d-b3da-9087c71122e8) for task 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 to [email protected]:5050
> I1028 20:43:26.955899    13 slave.cpp:2252] Sending acknowledgement for 
> status update TASK_RUNNING (UUID: 856e1be8-1eeb-441d-b3da-9087c71122e8) for 
> task ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 to executor(1)@176.31.235.180:47347
> I1028 20:43:26.966419    12 status_update_manager.cpp:398] Received status 
> update acknowledgement (UUID: 856e1be8-1eeb-441d-b3da-9087c71122e8) for task 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:27.943474     8 docker.cpp:1286] Updated 'cpu.shares' to 614 at 
> /sys/fs/cgroup/cpu,cpuacct/system.slice/docker-79f811f1846a19dd58a17252538e88aa90c3197355311b83072628c6b57ee23d.scope
>  for container b02a4193-0aea-47cd-b2ef-bf4d9cdf7c4a
> I1028 20:43:27.943781     8 docker.cpp:1321] Updated 
> 'memory.soft_limit_in_bytes' to 544MB for container 
> b02a4193-0aea-47cd-b2ef-bf4d9cdf7c4a
> I1028 20:43:27.944221     8 docker.cpp:1347] Updated 'memory.limit_in_bytes' 
> to 544MB at 
> /sys/fs/cgroup/memory/system.slice/docker-79f811f1846a19dd58a17252538e88aa90c3197355311b83072628c6b57ee23d.scope
>  for container b02a4193-0aea-47cd-b2ef-bf4d9cdf7c4a
> I1028 20:43:31.576675    10 slave.cpp:1002] Got assigned task 
> ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c for framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:31.576858    10 slave.cpp:1112] Launching task 
> ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c for framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:31.577553    10 slave.cpp:1222] Queuing task 
> 'ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c' for executor 
> ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> '20141027-004948-352326060-38124-1-0000
> I1028 20:43:31.578986     6 docker.cpp:743] Starting container 
> '55fe7a4f-f36a-486e-9f19-74948e7bed18' for task 
> 'ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c' (and executor 
> 'ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c') of framework 
> '20141027-004948-352326060-38124-1-0000'
> I1028 20:43:32.948483    12 slave.cpp:2538] Monitoring executor 
> 'ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> '20141027-004948-352326060-38124-1-0000' in container 
> '55fe7a4f-f36a-486e-9f19-74948e7bed18'
> I1028 20:43:32.960706     8 slave.cpp:1733] Got registration for executor 
> 'ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> 20141027-004948-352326060-38124-1-0000 from executor(1)@176.31.235.180:46388
> I1028 20:43:32.961201     8 slave.cpp:1853] Flushing queued task 
> ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c for executor 
> 'ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c' of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:32.962913    11 slave.cpp:2088] Handling status update 
> TASK_RUNNING (UUID: 86932ecb-0144-4e9f-a8c2-c3eb0a2389a1) for task 
> ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 from executor(1)@176.31.235.180:46388
> I1028 20:43:32.963019     7 status_update_manager.cpp:320] Received status 
> update TASK_RUNNING (UUID: 86932ecb-0144-4e9f-a8c2-c3eb0a2389a1) for task 
> ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:32.963100     7 status_update_manager.cpp:373] Forwarding status 
> update TASK_RUNNING (UUID: 86932ecb-0144-4e9f-a8c2-c3eb0a2389a1) for task 
> ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 to [email protected]:5050
> I1028 20:43:32.963224    11 slave.cpp:2252] Sending acknowledgement for 
> status update TASK_RUNNING (UUID: 86932ecb-0144-4e9f-a8c2-c3eb0a2389a1) for 
> task ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000 to executor(1)@176.31.235.180:46388
> I1028 20:43:32.973897     9 status_update_manager.cpp:398] Received status 
> update acknowledgement (UUID: 86932ecb-0144-4e9f-a8c2-c3eb0a2389a1) for task 
> ubuntu.778ffe01-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:33.000520     9 slave.cpp:1278] Asked to kill task 
> ubuntu.70687acd-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:33.000617     9 slave.cpp:1278] Asked to kill task 
> ubuntu.73fb8c9e-5ee3-11e4-8b6f-42c8cb288d5c of framework 
> 20141027-004948-352326060-38124-1-0000
> I1028 20:43:33.949836    13 docker.cpp:1286] Updated 'cpu.shares' to 614 at 
> /sys/fs/cgroup/cpu,cpuacct/system.slice/docker-9af70962d8e0f18bc94bb4dafaeab19275f36b54815fb29e6aacda9520340d7a.scope
>  for container 55fe7a4f-f36a-486e-9f19-74948e7bed18
> I1028 20:43:33.950157    13 docker.cpp:1321] Updated 
> 'memory.soft_limit_in_bytes' to 544MB for container 
> 55fe7a4f-f36a-486e-9f19-74948e7bed18
> I1028 20:43:33.950603    13 docker.cpp:1347] Updated 'memory.limit_in_bytes' 
> to 544MB at 
> /sys/fs/cgroup/memory/system.slice/docker-9af70962d8e0f18bc94bb4dafaeab19275f36b54815fb29e6aacda9520340d7a.scope
>  for container 55fe7a4f-f36a-486e-9f19-74948e7bed18
> 
> 
> This is the task I launch:
> 
> $ cat Docker.json 
> {
>   "container": {
>     "type": "DOCKER",
>     "docker": {
>       "image": "libmesos/ubuntu"
>     }
>   },
>   "id": "ubuntu",
>   "instances": "1",
>   "cpus": "0.5",
>   "mem": "512",
>   "uris": [],
>   "cmd": "while sleep 10; do date -u +%T; done"
> }
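
(Side note: the numeric fields above are quoted as strings; Marathon evidently coerces them, since the response below echoes real numbers, but the canonically typed form of the same app definition would be:)

```json
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "libmesos/ubuntu"
    }
  },
  "id": "ubuntu",
  "instances": 1,
  "cpus": 0.5,
  "mem": 512,
  "uris": [],
  "cmd": "while sleep 10; do date -u +%T; done"
}
```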
> 
> With the following command:
> 
> $ curl -X POST -H "Content-Type: application/json" http://MASTER:8080/v2/apps [email protected]
> {"id":"/ubuntu","cmd":"while sleep 10; do date -u +%T; 
> done","args":null,"user":null,"env":{},"instances":1,"cpus":0.5,"mem":512.0,"disk":0.0,"executor":"","constraints":[],"uris":[],"storeUrls":[],"ports":[0],"requirePorts":false,"backoffSeconds":1,"backoffFactor":1.15,"container":{"type":"DOCKER","volumes":[],"docker":{"image":"libmesos/ubuntu","network":null,"portMappings":null}},"healthChecks":[],"dependencies":[],"upgradeStrategy":{"minimumHealthCapacity":1.0},"version":"2014-10-28T20:46:01.567Z"}
> 
> I have started the slave with the following command:
> 
> docker run --name=slave --net=host -e 
> MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins -e 
> MESOS_ISOLATOR=cgroups/cpu,cgroups/mem -e MESOS_CONTAINERIZERS=docker,mesos 
> -e MESOS_IP=${COREOS_PUBLIC_IPV4} -v /run/docker.sock:/run/docker.sock -v 
> /sys:/sys -v /proc:/proc -e MESOS_LOG_DIR=/var/log -e 
> MESOS_MASTER=zk://MASTER_IPS:2181/mesos -p 5051:5051 --privileged 
> mesos-slave
> 
> 
> Any thoughts about this issue? I can’t find anything interesting in the logs, 
> no errors :/
> 
> Thanks
> 
>> On 27 Oct 2014, at 21:12, Alessandro Siragusa <[email protected]> wrote:
>> 
>> Thanks, you saved my life :)
>> 
>> Exactly, I am running on CoreOS and I am going to write the unit files 
>> needed to make it run. 
>> 
>> To pin the masters and the ZooKeeper daemons to the same nodes, I am 
>> considering adding a metadata tag to the master nodes; moreover, I am 
>> thinking of publishing their hostnames to etcd so that no manual 
>> configuration will be needed.
>> 
>> I think this could be an effortless way to build a Mesos cluster.
>> 
>> I will give you the repository name once I make it work ;)
>> 
>> @Kashif The containers will run on the host node and not inside the slave 
>> container
>> 
>> 
>>> On 27 Oct 2014, at 20:13, Grzegorz Graczyk <[email protected]> wrote:
>>> 
>>> Change the container's name to something else; it can't start with "mesos-", 
>>> because that's how Mesos knows which containers it manages. The slave 
>>> just ends up killing itself.
>>> Are you running this on CoreOS? 
>>> Read this thread: 
>>> https://www.mail-archive.com/[email protected]/msg01602.html
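
The naming rule Grzegorz describes can be illustrated with a plain shell sketch (this mimics the prefix convention; it is not Mesos's actual code):

```shell
#!/bin/sh
# The Docker containerizer assumes any container whose name starts with
# "mesos-" was launched by it, and destroys the ones it does not recognize
# during recovery -- so a slave container named "mesos-slave" ends up
# killing itself. Sketch of that prefix check:
for NAME in mesos-slave slave; do
  case "$NAME" in
    mesos-*) echo "$NAME: treated as Mesos-managed (would be killed)" ;;
    *)       echo "$NAME: ignored by the containerizer" ;;
  esac
done
```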
>>> 
>>> On 27 October 2014 18:33, Alessandro Siragusa 
>>> <[email protected]> wrote:
>>> Hi all,
>>> 
>>> I’m trying to run mesos-slave in a Docker container with the containerizers 
>>> docker,mesos:
>>> 
>>> $ docker run -ti --rm --name=mesos-slave --net=host -e 
>>> MESOS_ISOLATOR=cgroups/cpu,cgroups/mem -e MESOS_CONTAINERIZERS=docker,mesos 
>>> -e MESOS_HOSTNAME=slave-hostname -e MESOS_IP=1.2.3.4 -v 
>>> /run/docker.sock:/run/docker.sock -v /sys:/sys -e MESOS_LOG_DIR=/var/log -e 
>>> MESOS_MASTER=zk://ZOOKEEPER_IPS:2181/mesos -p 5051:5051 -e 
>>> MESOS_CHECKPOINT=false  mesos-slave
>>> I1027 16:53:04.851111     1 logging.cpp:142] INFO level logging started!
>>> I1027 16:53:04.851359     1 main.cpp:126] Build: 2014-09-23 05:36:09 by root
>>> I1027 16:53:04.851375     1 main.cpp:128] Version: 0.20.1
>>> I1027 16:53:04.851384     1 main.cpp:131] Git tag: 0.20.1
>>> I1027 16:53:04.851393     1 main.cpp:135] Git SHA: 
>>> fe0a39112f3304283f970f1b08b322b1e970829d
>>> I1027 16:53:05.853960     1 containerizer.cpp:89] Using isolation: 
>>> posix/cpu,posix/mem
>>> 2014-10-27 16:53:05,854:1(0x7fd859eee700):ZOO_INFO@log_env@712: Client 
>>> environment:zookeeper.version=zookeeper C client 3.4.5
>>> 2014-10-27 16:53:05,854:1(0x7fd859eee700):ZOO_INFO@log_env@716: Client 
>>> environment:host.name=slave-hostname
>>> 2014-10-27 16:53:05,854:1(0x7fd859eee700):ZOO_INFO@log_env@723: Client 
>>> environment:os.name=Linux
>>> 2014-10-27 16:53:05,854:1(0x7fd859eee700):ZOO_INFO@log_env@724: Client 
>>> environment:os.arch=3.16.2+
>>> 2014-10-27 16:53:05,854:1(0x7fd859eee700):ZOO_INFO@log_env@725: Client 
>>> environment:os.version=#2 SMP Wed Oct 1 20:08:48 UTC 2014
>>> 2014-10-27 16:53:05,854:1(0x7fd859eee700):ZOO_INFO@log_env@733: Client 
>>> environment:user.name=(null)
>>> I1027 16:53:05.854481     1 main.cpp:149] Starting Mesos slave
>>> I1027 16:53:05.854866    11 slave.cpp:167] Slave started on 1)@1.2.3.4:5051
>>> I1027 16:53:05.855069    11 slave.cpp:278] Slave resources: cpus(*):8; 
>>> mem(*):31188; disk(*):447410; ports(*):[31000-32000]
>>> I1027 16:53:05.855099    11 slave.cpp:306] Slave hostname: slave-hostname
>>> I1027 16:53:05.855110    11 slave.cpp:307] Slave checkpoint: false
>>> I1027 16:53:05.856284     6 state.cpp:33] Recovering state from 
>>> '/tmp/mesos/meta'
>>> I1027 16:53:05.856372     8 status_update_manager.cpp:193] Recovering 
>>> status update manager
>>> I1027 16:53:05.856470     6 containerizer.cpp:252] Recovering containerizer
>>> I1027 16:53:05.856484     9 docker.cpp:577] Recovering Docker containers
>>> 2014-10-27 16:53:05,856:1(0x7fd859eee700):ZOO_INFO@log_env@741: Client 
>>> environment:user.home=/root
>>> 2014-10-27 16:53:05,856:1(0x7fd859eee700):ZOO_INFO@log_env@753: Client 
>>> environment:user.dir=/tmp
>>> 2014-10-27 16:53:05,856:1(0x7fd859eee700):ZOO_INFO@zookeeper_init@786: 
>>> Initiating client connection, host=MASTERS:2181 sessionTimeout=10000 
>>> watcher=0x7fd85de56a30 sessionId=0 sessionPasswd=<null> 
>>> context=0x7fd828000f80 flags=0
>>> 2014-10-27 16:53:05,862:1(0x7fd856a8c700):ZOO_INFO@check_events@1703: 
>>> initiated connection to server [MASTER_IP:2181]
>>> 2014-10-27 16:53:05,865:1(0x7fd856a8c700):ZOO_INFO@check_events@1750: 
>>> session establishment complete on server [REPLICA_IP:2181], 
>>> sessionId=0x1494d70090e0025, negotiated timeout=10000
>>> I1027 16:53:05.865543     7 group.cpp:313] Group process 
>>> (group(1)@1.2.3.4:5051) connected to ZooKeeper
>>> I1027 16:53:05.865581     7 group.cpp:787] Syncing group operations: queue 
>>> size (joins, cancels, datas) = (0, 0, 0)
>>> I1027 16:53:05.865608     7 group.cpp:385] Trying to create path '/mesos' 
>>> in ZooKeeper
>>> I1027 16:53:05.866819     6 detector.cpp:138] Detected a new leader: 
>>> (id='7')
>>> I1027 16:53:05.866902     8 group.cpp:658] Trying to get 
>>> '/mesos/info_0000000007' in ZooKeeper
>>> I1027 16:53:05.867444     7 detector.cpp:426] A new leading master 
>>> (UPID=master@MASTER_IP:5050) is detected
>>> I1027 16:53:07.855305    13 slave.cpp:3198] Finished recovery
>>> I1027 16:53:07.855576    12 slave.cpp:589] New master detected at 
>>> master@MASTER_IP:5050
>>> I1027 16:53:07.855669    12 slave.cpp:625] No credentials provided. 
>>> Attempting to register without authentication
>>> I1027 16:53:07.855680     7 status_update_manager.cpp:167] New master 
>>> detected at master@MASTER_IP:5050
>>> I1027 16:53:07.855695    12 slave.cpp:636] Detecting new master
>>> 
>>> The process exits right after the last line.
>>> 
>>> If I don’t start the Docker containerizer everything works fine:
>>> 
>>> $ docker run -ti --rm --name=mesos-slave --net=host -e 
>>> MESOS_ISOLATOR=cgroups/cpu,cgroups/mem -e MESOS_CONTAINERIZERS=mesos -e 
>>> MESOS_HOSTNAME=slave-host -e MESOS_IP=1.2.3.4 -v 
>>> /run/docker.sock:/run/docker.sock -v /sys:/sys -e MESOS_LOG_DIR=/var/log -e 
>>> MESOS_MASTER=zk://ZOOKEEPER_IPS:2181/mesos -p 5051:5051 -e 
>>> MESOS_CHECKPOINT=false mesos-slave 
>>> I1027 17:03:28.288579     1 logging.cpp:142] INFO level logging started!
>>> I1027 17:03:28.288823     1 main.cpp:126] Build: 2014-09-23 05:36:09 by root
>>> I1027 17:03:28.288838     1 main.cpp:128] Version: 0.20.1
>>> I1027 17:03:28.288849     1 main.cpp:131] Git tag: 0.20.1
>>> I1027 17:03:28.288857     1 main.cpp:135] Git SHA: 
>>> fe0a39112f3304283f970f1b08b322b1e970829d
>>> I1027 17:03:28.290194     1 containerizer.cpp:89] Using isolation: 
>>> posix/cpu,posix/mem
>>> I1027 17:03:28.290340     1 main.cpp:149] Starting Mesos slave
>>> 2014-10-27 17:03:28,290:1(0x7f89ef493700):ZOO_INFO@log_env@712: Client 
>>> environment:zookeeper.version=zookeeper C client 3.4.5
>>> 2014-10-27 17:03:28,290:1(0x7f89ef493700):ZOO_INFO@log_env@716: Client 
>>> environment:host.name=slave-hostname
>>> 2014-10-27 17:03:28,290:1(0x7f89ef493700):ZOO_INFO@log_env@723: Client 
>>> environment:os.name=Linux
>>> 2014-10-27 17:03:28,290:1(0x7f89ef493700):ZOO_INFO@log_env@724: Client 
>>> environment:os.arch=3.16.2+
>>> 2014-10-27 17:03:28,290:1(0x7f89ef493700):ZOO_INFO@log_env@725: Client 
>>> environment:os.version=#2 SMP Wed Oct 1 20:08:48 UTC 2014
>>> 2014-10-27 17:03:28,290:1(0x7f89ef493700):ZOO_INFO@log_env@733: Client 
>>> environment:user.name=(null)
>>> I1027 17:03:28.290735     8 slave.cpp:167] Slave started on 1)@1.2.3.4:5051
>>> I1027 17:03:28.290910     8 slave.cpp:278] Slave resources: cpus(*):8; 
>>> mem(*):31188; disk(*):447410; ports(*):[31000-32000]
>>> I1027 17:03:28.290946     8 slave.cpp:306] Slave hostname: slave-hostname
>>> I1027 17:03:28.290964     8 slave.cpp:307] Slave checkpoint: false
>>> I1027 17:03:28.292132    13 state.cpp:33] Recovering state from 
>>> '/tmp/mesos/meta'
>>> I1027 17:03:28.292201    12 status_update_manager.cpp:193] Recovering 
>>> status update manager
>>> I1027 17:03:28.292268    12 containerizer.cpp:252] Recovering containerizer
>>> I1027 17:03:28.292431     8 slave.cpp:3198] Finished recovery
>>> 2014-10-27 17:03:28,292:1(0x7f89ef493700):ZOO_INFO@log_env@741: Client 
>>> environment:user.home=/root
>>> 2014-10-27 17:03:28,292:1(0x7f89ef493700):ZOO_INFO@log_env@753: Client 
>>> environment:user.dir=/tmp
>>> 2014-10-27 17:03:28,292:1(0x7f89ef493700):ZOO_INFO@zookeeper_init@786: 
>>> Initiating client connection, host=ZOOKEEPER_IPS:2181 sessionTimeout=10000 
>>> watcher=0x7f89f33fba30 sessionId=0 sessionPasswd=<null> 
>>> context=0x7f89d0002580 flags=0
>>> 2014-10-27 17:03:28,296:1(0x7f89e7fff700):ZOO_INFO@check_events@1703: 
>>> initiated connection to server [REPLICA_IP:2181]
>>> 2014-10-27 17:03:28,297:1(0x7f89e7fff700):ZOO_INFO@check_events@1750: 
>>> session establishment complete on server [REPLICA_IP:2181], 
>>> sessionId=0x3494d6e6dc00013, negotiated timeout=10000
>>> I1027 17:03:28.298100     7 group.cpp:313] Group process 
>>> (group(1)@1.2.3.4:5051) connected to ZooKeeper
>>> I1027 17:03:28.298125     7 group.cpp:787] Syncing group operations: queue 
>>> size (joins, cancels, datas) = (0, 0, 0)
>>> I1027 17:03:28.298138     7 group.cpp:385] Trying to create path '/mesos' 
>>> in ZooKeeper
>>> I1027 17:03:28.299088    12 detector.cpp:138] Detected a new leader: 
>>> (id='7')
>>> I1027 17:03:28.299154     9 group.cpp:658] Trying to get 
>>> '/mesos/info_0000000007' in ZooKeeper
>>> I1027 17:03:28.299595    10 detector.cpp:426] A new leading master 
>>> (UPID=master@MASTER_IP:5050) is detected
>>> I1027 17:03:28.299659     8 slave.cpp:589] New master detected at 
>>> master@MASTER_IP:5050
>>> I1027 17:03:28.299717     8 slave.cpp:625] No credentials provided. 
>>> Attempting to register without authentication
>>> I1027 17:03:28.299723     9 status_update_manager.cpp:167] New master 
>>> detected at master@MASTER_IP:5050
>>> I1027 17:03:28.299741     8 slave.cpp:636] Detecting new master
>>> I1027 17:03:29.265413    12 slave.cpp:754] Registered with master 
>>> master@MASTER_IP:5050; given slave ID 20141027-040801-148971440-5050-1-5
>>> 
>>> And therefore I can see this slave on the web interface of the master.
>>> 
>>> This is the Dockerfile of the container that I run:
>>> 
>>> $ cat Dockerfile 
>>> FROM redjack/mesos-slave
>>> 
>>> RUN apt-get install -y docker.io
>>> 
>>> I installed docker inside the image to prevent the following error:
>>> 
>>> Failed to create a containerizer: Could not create DockerContainerizer: 
>>> Failed to execute 'docker version': exited with status exited with status 
>>> 127
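
A hypothetical preflight check for this inside the slave image (exit status 127 is the shell's "command not found", which matches the error above):

```shell
#!/bin/sh
# The Docker containerizer shells out to `docker version` at startup;
# status 127 means the docker CLI is not on PATH inside the image.
# This check only inspects PATH; it does not talk to the Docker daemon.
if command -v docker >/dev/null 2>&1; then
  echo "docker CLI present"
else
  echo "docker CLI missing (slave would fail with status 127)"
fi
```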
>>> 
>>> I know that somebody managed to make this work in v0.19, but as far as I 
>>> understand, things have changed a lot in v0.20. Any thoughts?
>>> 
>>> Thanks
>>> 
>> 
> 
> 
