[
https://issues.apache.org/jira/browse/MESOS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15075015#comment-15075015
]
Ian Babrou commented on MESOS-3573:
-----------------------------------
After digging around the code for a bit I think I have an explanation for one part
of the issue: when you remove /var/lib/mesos/meta/slaves/latest you also lose the
slave id. The next time you start the mesos slave, it only tries to recover
containers that belong to the new id. I don't know if it is possible to change
anything about that behavior. Looks like I have to manually kill everything.
[~tnachen] please correct me if I'm wrong here.
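For the record, this is roughly what I mean by "manually kill everything". The
containers started by the docker containerizer show up in docker ps -a with a
mesos-<slave-id>.<container-id> name (visible in the log below), so anything
carrying an old slave id can be stopped by hand. A rough sketch, not anything from
Mesos itself: the name matching is only an assumption based on the names in the
log, and the slave id is just the example id from the log:
{noformat}
#!/usr/bin/env python
# Rough manual-cleanup sketch: stop docker containers whose names look like they
# were started under a previous (now abandoned) slave id.
# ASSUMPTION: the "mesos-<slave-id>.<container-id>" naming seen in the log below.
import subprocess

DOCKER = ["docker", "-H", "unix:///var/run/docker.sock"]
CURRENT_SLAVE_ID = "57997611-238c-4c65-a47d-5784298129e3-S0"  # example id from the log

def container_names():
    # List the names of all containers, running or exited.
    out = subprocess.check_output(DOCKER + ["ps", "-a", "--format", "{{.Names}}"])
    return out.decode().split()

for name in container_names():
    if not name.startswith("mesos-"):
        continue  # not started by the docker containerizer
    if CURRENT_SLAVE_ID in name:
        continue  # belongs to the live slave id, leave it to recovery
    # Looks orphaned under an old slave id: stop and remove it by hand.
    subprocess.call(DOCKER + ["stop", name])
    subprocess.call(DOCKER + ["rm", name])
{noformat}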
* https://mail-archives.apache.org/mod_mbox/mesos-commits/201505.mbox/%[email protected]%3E
Still, the death of mesos-docker-executor leads to an orphaned docker container.
Relevant logs from the mesos slave (a bunch of errors here and the container never
actually gets stopped!):
{noformat}
I1230 12:54:29.586717 22451 state.cpp:54] Recovering state from
'/tmp/mesos/meta'
I1230 12:54:29.587085 22451 state.cpp:681] No checkpointed resources found at
'/tmp/mesos/meta/resources/resources.info'
I1230 12:54:29.598330 22455 fetcher.cpp:79] Clearing fetcher cache
I1230 12:54:29.598755 22455 slave.cpp:4318] Recovering framework
57997611-238c-4c65-a47d-5784298129e3-0000
I1230 12:54:29.599159 22455 slave.cpp:5108] Recovering executor
'bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework
57997611-238c-4c65-a47d-5784298129e3-0000
I1230 12:54:29.600065 22455 status_update_manager.cpp:202] Recovering status
update manager
I1230 12:54:29.600433 22455 status_update_manager.cpp:210] Recovering executor
'bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework
57997611-238c-4c65-a47d-5784298129e3-0000
I1230 12:54:29.600770 22455 status_update_manager.cpp:499] Creating
StatusUpdate stream for task bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152 of
framework 57997611-238c-4c65-a47d-5784298129e3-0000
I1230 12:54:29.600404 22452 slave.cpp:682] Successfully attached file
'/tmp/mesos/slaves/57997611-238c-4c65-a47d-5784298129e3-S0/frameworks/57997611-238c-4c65-a47d-5784298129e3-0000/executors/bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152/runs/da778106-fc42-48d7-bc79-655e9c8cce5a'
I1230 12:54:29.601595 22455 status_update_manager.cpp:802] Replaying status
update stream for task bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152
I1230 12:54:29.602447 22455 docker.cpp:536] Recovering Docker containers
I1230 12:54:29.602839 22454 containerizer.cpp:384] Recovering containerizer
I1230 12:54:29.603252 22454 containerizer.cpp:433] Skipping recovery of
executor 'bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework
57997611-238c-4c65-a47d-5784298129e3-0000 because it was not launched from
mesos containerizer
I1230 12:54:29.604511 22455 docker.cpp:842] Running docker -H
unix:///var/run/docker.sock ps -a
I1230 12:54:29.770815 22450 docker.cpp:723] Running docker -H
unix:///var/run/docker.sock inspect
mesos-57997611-238c-4c65-a47d-5784298129e3-S0.da778106-fc42-48d7-bc79-655e9c8cce5a
I1230 12:54:29.873257 22450 docker.cpp:640] Recovering container
'da778106-fc42-48d7-bc79-655e9c8cce5a' for executor
'bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework
'57997611-238c-4c65-a47d-5784298129e3-0000'
I1230 12:54:29.874300 22450 docker.cpp:687] Checking if Docker container named
'/mesos-57997611-238c-4c65-a47d-5784298129e3-S0.da778106-fc42-48d7-bc79-655e9c8cce5a'
was started by Mesos
I1230 12:54:29.874343 22450 docker.cpp:697] Checking if Mesos container with ID
'da778106-fc42-48d7-bc79-655e9c8cce5a' has been orphaned
I1230 12:54:29.874552 22450 docker.cpp:1584] Executor for container
'da778106-fc42-48d7-bc79-655e9c8cce5a' has exited
I1230 12:54:29.874567 22450 docker.cpp:1385] Destroying container
'da778106-fc42-48d7-bc79-655e9c8cce5a'
I1230 12:54:29.874590 22450 docker.cpp:1487] Running docker stop on container
'da778106-fc42-48d7-bc79-655e9c8cce5a'
I1230 12:54:29.874892 22450 slave.cpp:4170] Sending reconnect request to
executor 'bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework
57997611-238c-4c65-a47d-5784298129e3-0000 at executor(1)@127.0.0.1:42053
E1230 12:54:29.875147 22450 slave.cpp:3537] Termination of executor
'bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework
57997611-238c-4c65-a47d-5784298129e3-0000 failed: Container
'da778106-fc42-48d7-bc79-655e9c8cce5a' not found
I1230 12:54:29.876190 22457 poll_socket.cpp:111] Socket error while connecting
I1230 12:54:29.876230 22457 process.cpp:1603] Failed to send
'mesos.internal.ReconnectExecutorMessage' to '127.0.0.1:42053', connect: Socket
error while connecting
E1230 12:54:29.876283 22457 process.cpp:1911] Failed to shutdown socket with fd
9: Transport endpoint is not connected
I1230 12:54:29.876399 22450 slave.cpp:2762] Handling status update TASK_FAILED
(UUID: b4d23693-b17d-4e66-9d98-09da7839a731) for task
bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152 of framework
57997611-238c-4c65-a47d-5784298129e3-0000 from @0.0.0.0:0
I1230 12:54:29.876497 22450 slave.cpp:5298] Terminating task
bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152
E1230 12:54:29.876644 22450 slave.cpp:2963] Failed to update resources for
container da778106-fc42-48d7-bc79-655e9c8cce5a of executor
'bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152' running task
bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152 on status update for terminal task,
destroying container: Container 'da778106-fc42-48d7-bc79-655e9c8cce5a' not found
W1230 12:54:29.876747 22450 composing.cpp:515] Container
'da778106-fc42-48d7-bc79-655e9c8cce5a' not found
I1230 12:54:29.876760 22450 status_update_manager.cpp:322] Received status
update TASK_FAILED (UUID: b4d23693-b17d-4e66-9d98-09da7839a731) for task
bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152 of framework
57997611-238c-4c65-a47d-5784298129e3-0000
I1230 12:54:29.876781 22450 status_update_manager.cpp:826] Checkpointing UPDATE
for status update TASK_FAILED (UUID: b4d23693-b17d-4e66-9d98-09da7839a731) for
task bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152 of framework
57997611-238c-4c65-a47d-5784298129e3-0000
I1230 12:54:29.882992 22450 status_update_manager.cpp:376] Forwarding update
TASK_FAILED (UUID: b4d23693-b17d-4e66-9d98-09da7839a731) for task
bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152 of framework
57997611-238c-4c65-a47d-5784298129e3-0000 to the slave
W1230 12:54:29.883154 22450 slave.cpp:3033] Dropping status update TASK_FAILED
(UUID: b4d23693-b17d-4e66-9d98-09da7839a731) for task
bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152 of framework
57997611-238c-4c65-a47d-5784298129e3-0000 sent by status update manager because
the slave is in RECOVERING state
I1230 12:54:29.883180 22450 slave.cpp:3005] Status update manager successfully
handled status update TASK_FAILED (UUID: b4d23693-b17d-4e66-9d98-09da7839a731)
for task bbbb.5b08ba75-aef4-11e5-a8e8-aae46d62e152 of framework
57997611-238c-4c65-a47d-5784298129e3-0000
I1230 12:54:31.876523 22449 slave.cpp:2702] Cleaning up un-reregistered
executors
I1230 12:54:31.876610 22449 slave.cpp:4230] Finished recovery
{noformat}
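Coming back to the slave id point at the top: judging from the docker invocations
in the log, recovery lists everything with docker ps -a but only inspects names
built from the current slave id plus a checkpointed container id, so a container
named after an old slave id never enters that set and is neither reattached nor
stopped. A loose sketch of that reading (pseudo-logic only, not the actual
docker.cpp code; the docker commands and the name format are the ones visible in
the log):
{noformat}
# Loose sketch of how I read the recovery flow above; NOT the real docker.cpp logic.
import subprocess

DOCKER = ["docker", "-H", "unix:///var/run/docker.sock"]

def recover(slave_id, checkpointed_container_ids):
    # "Running docker -H unix:///var/run/docker.sock ps -a"
    subprocess.check_output(DOCKER + ["ps", "-a"])

    for container_id in checkpointed_container_ids:
        # Only names built from the *current* slave id are ever inspected:
        name = "mesos-%s.%s" % (slave_id, container_id)
        # "Running docker -H unix:///var/run/docker.sock inspect mesos-<slave-id>.<container-id>"
        subprocess.check_output(DOCKER + ["inspect", name])
        # ...followed by the "started by Mesos" / "orphaned" checks and, once the
        # executor turns out to have exited, "Running docker stop on container ...".

    # A container named after an *old* slave id never appears in
    # checkpointed_container_ids, so it is neither recovered nor stopped here.
{noformat}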
> Mesos does not kill orphaned docker containers
> ----------------------------------------------
>
> Key: MESOS-3573
> URL: https://issues.apache.org/jira/browse/MESOS-3573
> Project: Mesos
> Issue Type: Bug
> Components: docker, slave
> Reporter: Ian Babrou
> Assignee: Gilbert Song
> Priority: Blocker
> Labels: mesosphere
>
> After upgrading to 0.24.0 we noticed hanging containers appearing. It looks like
> there were changes between 0.23.0 and 0.24.0 that broke cleanup.
> Here's how to trigger this bug:
> 1. Deploy an app in a docker container.
> 2. Kill the corresponding mesos-docker-executor process.
> 3. Observe the hanging container.
> Here are the logs after kill:
> {noformat}
> slave_1 | I1002 12:12:59.362002 7791 docker.cpp:1576] Executor for
> container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8' has exited
> slave_1 | I1002 12:12:59.362284 7791 docker.cpp:1374] Destroying
> container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8'
> slave_1 | I1002 12:12:59.363404 7791 docker.cpp:1478] Running docker stop
> on container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8'
> slave_1 | I1002 12:12:59.363876 7791 slave.cpp:3399] Executor
> 'sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c' of framework
> 20150923-122130-2153451692-5050-1-0000 terminated with signal Terminated
> slave_1 | I1002 12:12:59.367570 7791 slave.cpp:2696] Handling status
> update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000 from @0.0.0.0:0
> slave_1 | I1002 12:12:59.367842 7791 slave.cpp:5094] Terminating task
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c
> slave_1 | W1002 12:12:59.368484 7791 docker.cpp:986] Ignoring updating
> unknown container: f083aaa2-d5c3-43c1-b6ba-342de8829fa8
> slave_1 | I1002 12:12:59.368671 7791 status_update_manager.cpp:322]
> Received status update TASK_FAILED (UUID:
> 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000
> slave_1 | I1002 12:12:59.368741 7791 status_update_manager.cpp:826]
> Checkpointing UPDATE for status update TASK_FAILED (UUID:
> 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000
> slave_1 | I1002 12:12:59.370636 7791 status_update_manager.cpp:376]
> Forwarding update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550)
> for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000 to the slave
> slave_1 | I1002 12:12:59.371335 7791 slave.cpp:2975] Forwarding the
> update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000 to [email protected]:5050
> slave_1 | I1002 12:12:59.371908 7791 slave.cpp:2899] Status update
> manager successfully handled status update TASK_FAILED (UUID:
> 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000
> master_1 | I1002 12:12:59.372047 11 master.cpp:4069] Status update
> TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000 from slave
> 20151002-120829-2153451692-5050-1-S0 at slave(1)@172.16.91.128:5051
> (172.16.91.128)
> master_1 | I1002 12:12:59.372534 11 master.cpp:4108] Forwarding status
> update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000
> master_1 | I1002 12:12:59.373018 11 master.cpp:5576] Updating the latest
> state of task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework
> 20150923-122130-2153451692-5050-1-0000 to TASK_FAILED
> master_1 | I1002 12:12:59.373447 11 hierarchical.hpp:814] Recovered
> cpus(*):0.1; mem(*):16; ports(*):[31685-31685] (total: cpus(*):4;
> mem(*):1001; disk(*):52869; ports(*):[31000-32000], allocated:
> cpus(*):8.32667e-17) on slave 20151002-120829-2153451692-5050-1-S0 from
> framework 20150923-122130-2153451692-5050-1-0000
> {noformat}
> Another issue: if you restart mesos-slave on a host with orphaned docker
> containers, they do not get killed. Restarting used to kill them, and I hoped to
> use that trick to get rid of the hanging containers, but it doesn't work now.
> Marking this as critical because it hoards cluster resources and blocks
> scheduling.