[
https://issues.apache.org/jira/browse/MESOS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122714#comment-15122714
]
chenqiang commented on MESOS-3573:
----------------------------------
Yes, I also removed that directory, because it makes the mesos-slave service fail to
start with an error about failing to find the executor libprocess pid file. More
details are listed below, and a rough sketch of the cleanup I do is after the log.
```
[[email protected] ~]# systemctl status mesos-slave -l
mesos-slave.service - Mesos Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/mesos-slave.service; enabled)
Active: active (running) since Fri 2016-01-29 09:38:17 CST; 7s ago
Process: 65443 ExecStop=/usr/bin/killall -s 15 mesos-slave (code=exited, status=1/FAILURE)
Main PID: 783 (mesos-slave)
CGroup: /system.slice/mesos-slave.service
└─783 /usr/sbin/mesos-slave
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: I0129 09:38:18.010277 817 state.cpp:54] Recovering state from '/data/mesos/meta'
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: I0129 09:38:18.010331 817 state.cpp:690] No checkpointed resources found at '/data/mesos/meta/resources/resources.info'
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: 2016-01-29 09:38:18,015:783(0x7fe2f086b700):ZOO_INFO@check_events@1750: session establishment complete on server [10.10.187.35:2181], sessionId=0x7527cc2a34d2358, negotiated timeout=10000
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: I0129 09:38:18.015884 814 group.cpp:331] Group process (group(1)@10.10.144.187:5051) connected to ZooKeeper
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: I0129 09:38:18.015938 814 group.cpp:805] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: I0129 09:38:18.015965 814 group.cpp:403] Trying to create path '/mesos/mesos-bjdx-dev01' in ZooKeeper
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: I0129 09:38:18.018003 795 detector.cpp:156] Detected a new leader: (id='32')
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: I0129 09:38:18.018306 814 group.cpp:674] Trying to get '/mesos/mesos-bjdx-dev01/json.info_0000000032' in ZooKeeper
Jan 29 09:38:18 mesos-slave-dev187-xxx.domain mesos-slave[783]: I0129 09:38:18.019448 816 detector.cpp:481] A new leading master ([email protected]:5050) is detected
Jan 29 09:38:25 mesos-slave-dev187-xxx.domain mesos-slave[783]: W0129 09:38:25.164949 817 state.cpp:508] Failed to find executor libprocess pid file '/data/mesos/meta/slaves/20151224-124919-3658156554-5050-27206-S14/frameworks/20151223-150303-2677017098-5050-30032-0000/executors/22c7c8de-fd0b-472e-bd0b-b16d175fcb90/runs/83476ff1-1460-4128-a573-d84ebff37589/pids/libprocess.pid'
```
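This is roughly the manual cleanup I do before restarting the slave. It is only a
sketch, not an official procedure: it assumes --work_dir=/data/mesos (as in the log
above), assumes "that dir" means the checkpointed slave state under
/data/mesos/meta/slaves, and accepts that wiping checkpointed state gives up recovery
of any still-running executors.
```
# Sketch only: assumes --work_dir=/data/mesos and that losing the
# checkpointed agent state (and therefore executor recovery) is acceptable.
systemctl stop mesos-slave

# Remove the checkpointed slave state that the recovery warning points at.
# Deleting the whole "slaves" subtree is an assumption about "that dir";
# a narrower fix is to delete only the offending runs/<run-id> directory
# from the path shown in the warning.
rm -rf /data/mesos/meta/slaves

systemctl start mesos-slave
systemctl status mesos-slave -l
```
Note that after wiping this state the agent registers again with a fresh ID, and any
executors that were still running become orphans, which is exactly what this ticket is
about, so their Docker containers still have to be cleaned up by hand.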
> Mesos does not kill orphaned docker containers
> ----------------------------------------------
>
> Key: MESOS-3573
> URL: https://issues.apache.org/jira/browse/MESOS-3573
> Project: Mesos
> Issue Type: Bug
> Components: docker, slave
> Reporter: Ian Babrou
> Labels: mesosphere
>
> After upgrading to 0.24.0 we noticed hanging containers appearing. It looks like
> there were changes between 0.23.0 and 0.24.0 that broke cleanup.
> Here's how to trigger this bug:
> 1. Deploy an app in a Docker container.
> 2. Kill the corresponding mesos-docker-executor process.
> 3. Observe the hanging container (see the shell sketch at the end of this message).
> Here are the logs after kill:
> {noformat}
> slave_1 | I1002 12:12:59.362002 7791 docker.cpp:1576] Executor for container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8' has exited
> slave_1 | I1002 12:12:59.362284 7791 docker.cpp:1374] Destroying container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8'
> slave_1 | I1002 12:12:59.363404 7791 docker.cpp:1478] Running docker stop on container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8'
> slave_1 | I1002 12:12:59.363876 7791 slave.cpp:3399] Executor 'sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c' of framework 20150923-122130-2153451692-5050-1-0000 terminated with signal Terminated
> slave_1 | I1002 12:12:59.367570 7791 slave.cpp:2696] Handling status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 from @0.0.0.0:0
> slave_1 | I1002 12:12:59.367842 7791 slave.cpp:5094] Terminating task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c
> slave_1 | W1002 12:12:59.368484 7791 docker.cpp:986] Ignoring updating unknown container: f083aaa2-d5c3-43c1-b6ba-342de8829fa8
> slave_1 | I1002 12:12:59.368671 7791 status_update_manager.cpp:322] Received status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000
> slave_1 | I1002 12:12:59.368741 7791 status_update_manager.cpp:826] Checkpointing UPDATE for status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000
> slave_1 | I1002 12:12:59.370636 7791 status_update_manager.cpp:376] Forwarding update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 to the slave
> slave_1 | I1002 12:12:59.371335 7791 slave.cpp:2975] Forwarding the update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 to [email protected]:5050
> slave_1 | I1002 12:12:59.371908 7791 slave.cpp:2899] Status update manager successfully handled status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000
> master_1 | I1002 12:12:59.372047 11 master.cpp:4069] Status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 from slave 20151002-120829-2153451692-5050-1-S0 at slave(1)@172.16.91.128:5051 (172.16.91.128)
> master_1 | I1002 12:12:59.372534 11 master.cpp:4108] Forwarding status update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000
> master_1 | I1002 12:12:59.373018 11 master.cpp:5576] Updating the latest state of task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 20150923-122130-2153451692-5050-1-0000 to TASK_FAILED
> master_1 | I1002 12:12:59.373447 11 hierarchical.hpp:814] Recovered cpus(*):0.1; mem(*):16; ports(*):[31685-31685] (total: cpus(*):4; mem(*):1001; disk(*):52869; ports(*):[31000-32000], allocated: cpus(*):8.32667e-17) on slave 20151002-120829-2153451692-5050-1-S0 from framework 20150923-122130-2153451692-5050-1-0000
> {noformat}
> Another issue: if you restart mesos-slave on a host with orphaned Docker
> containers, they do not get killed. This was the case before, and I had hoped this
> trick would kill the hanging containers, but it doesn't work now.
> Marking this as critical because it hoards cluster resources and blocks
> scheduling.
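To make the quoted reproduction steps and the manual cleanup concrete, here is a rough
shell sketch. None of it is from the ticket itself: the Docker containerizer normally
names its containers with a "mesos-" prefix, but adjust the filter if your setup
differs, and the placeholder PID is illustrative only.
```
# 1. Find the mesos-docker-executor process for the task and kill it,
#    simulating an executor crash (step 2 of the reproduction above).
pgrep -f mesos-docker-executor        # prints the executor PID(s)
kill <executor-pid>                   # substitute a PID from the line above

# 2. The task is reported TASK_FAILED, but the Docker container keeps
#    running -- this is the orphaned container described in the ticket.
docker ps --filter name=mesos-

# 3. Manual cleanup until the slave handles this itself: stop and remove
#    the orphaned containers (assumes the "mesos-" name prefix).
docker ps -q --filter name=mesos- | xargs -r docker stop
docker ps -aq --filter name=mesos- | xargs -r docker rm
```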