[jira] [Comment Edited] (MESOS-3573) Mesos does not kill orphaned docker containers

2016-01-28 Thread haosdent (JIRA)

[ https://issues.apache.org/jira/browse/MESOS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15121882#comment-15121882 ]

haosdent edited comment on MESOS-3573 at 1/28/16 5:37 PM:
--

My idea is that we could add a check to skip the container and log a warning 
when parse() finds a slaveId that doesn't match the current slaveId: 
https://reviews.apache.org/r/42915/
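
For illustration only, here is a minimal, self-contained sketch of the kind of 
check being proposed. It is not the actual patch in 
https://reviews.apache.org/r/42915/; the ParsedName struct, parse() helper and 
recoverable() function below are hypothetical stand-ins for the Docker 
containerizer's recovery code, assuming the "mesos-<slaveId>.<containerId>" 
container naming visible in the logs further down:

{noformat}
// Illustrative-only sketch: during recovery, skip (and warn about) Docker
// containers whose embedded slave ID does not match the current slave ID.
// All names here are hypothetical, not the Mesos sources.
#include <iostream>
#include <optional>
#include <string>
#include <vector>

struct ParsedName
{
  std::string slaveId;
  std::string containerId;
};

// Containers launched by the Docker containerizer are named
// "mesos-<slaveId>.<containerId>".
std::optional<ParsedName> parse(const std::string& name)
{
  const std::string prefix = "mesos-";
  if (name.rfind(prefix, 0) != 0) {
    return std::nullopt; // Not started by Mesos.
  }

  const std::string::size_type dot = name.find('.', prefix.size());
  if (dot == std::string::npos) {
    return std::nullopt;
  }

  return ParsedName{
      name.substr(prefix.size(), dot - prefix.size()),
      name.substr(dot + 1)};
}

// Keep only the containers that belong to the current slave run; anything
// checkpointed under a different slave ID is skipped with a warning instead
// of being silently ignored.
std::vector<std::string> recoverable(
    const std::vector<std::string>& dockerNames,
    const std::string& currentSlaveId)
{
  std::vector<std::string> containerIds;

  for (const std::string& name : dockerNames) {
    const std::optional<ParsedName> parsed = parse(name);
    if (!parsed) {
      continue;
    }

    if (parsed->slaveId != currentSlaveId) {
      std::cerr << "WARNING: Skipping container '" << parsed->containerId
                << "' checkpointed under slave ID '" << parsed->slaveId
                << "', which does not match the current slave ID '"
                << currentSlaveId << "'" << std::endl;
      continue;
    }

    containerIds.push_back(parsed->containerId);
  }

  return containerIds;
}

int main()
{
  // One container from the current slave ID and one orphan left behind by a
  // previous slave ID (e.g. after the meta directory was wiped).
  const std::vector<std::string> names = {
      "mesos-NEW-SLAVE-ID-S0.11111111-1111-1111-1111-111111111111",
      "mesos-OLD-SLAVE-ID-S0.22222222-2222-2222-2222-222222222222"};

  for (const std::string& id : recoverable(names, "NEW-SLAVE-ID-S0")) {
    std::cout << "Recovering container " << id << std::endl;
  }

  return 0;
}
{noformat}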


was (Author: haosd...@gmail.com):
My idea is that we could add a check to skip the container and log a warning 
when parse() finds a slaveId that doesn't match the current slaveId.

> Mesos does not kill orphaned docker containers
> --
>
> Key: MESOS-3573
> URL: https://issues.apache.org/jira/browse/MESOS-3573
> Project: Mesos
>  Issue Type: Bug
>  Components: docker, slave
>Reporter: Ian Babrou
>  Labels: mesosphere
>
> After upgrading to 0.24.0 we noticed hanging containers appearing. It looks 
> like there were changes between 0.23.0 and 0.24.0 that broke cleanup.
> Here's how to trigger this bug:
> 1. Deploy an app in a Docker container.
> 2. Kill the corresponding mesos-docker-executor process.
> 3. Observe the hanging container.
> Here are the logs after the kill:
> {noformat}
> slave_1| I1002 12:12:59.362002  7791 docker.cpp:1576] Executor for 
> container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8' has exited
> slave_1| I1002 12:12:59.362284  7791 docker.cpp:1374] Destroying 
> container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8'
> slave_1| I1002 12:12:59.363404  7791 docker.cpp:1478] Running docker stop 
> on container 'f083aaa2-d5c3-43c1-b6ba-342de8829fa8'
> slave_1| I1002 12:12:59.363876  7791 slave.cpp:3399] Executor 
> 'sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c' of framework 
> 20150923-122130-2153451692-5050-1- terminated with signal Terminated
> slave_1| I1002 12:12:59.367570  7791 slave.cpp:2696] Handling status 
> update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task 
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1- from @0.0.0.0:0
> slave_1| I1002 12:12:59.367842  7791 slave.cpp:5094] Terminating task 
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c
> slave_1| W1002 12:12:59.368484  7791 docker.cpp:986] Ignoring updating 
> unknown container: f083aaa2-d5c3-43c1-b6ba-342de8829fa8
> slave_1| I1002 12:12:59.368671  7791 status_update_manager.cpp:322] 
> Received status update TASK_FAILED (UUID: 
> 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task 
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1-
> slave_1| I1002 12:12:59.368741  7791 status_update_manager.cpp:826] 
> Checkpointing UPDATE for status update TASK_FAILED (UUID: 
> 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task 
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1-
> slave_1| I1002 12:12:59.370636  7791 status_update_manager.cpp:376] 
> Forwarding update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) 
> for task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1- to the slave
> slave_1| I1002 12:12:59.371335  7791 slave.cpp:2975] Forwarding the 
> update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task 
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1- to master@172.16.91.128:5050
> slave_1| I1002 12:12:59.371908  7791 slave.cpp:2899] Status update 
> manager successfully handled status update TASK_FAILED (UUID: 
> 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task 
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1-
> master_1   | I1002 12:12:59.372047    11 master.cpp:4069] Status update 
> TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task 
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1- from slave 
> 20151002-120829-2153451692-5050-1-S0 at slave(1)@172.16.91.128:5051 
> (172.16.91.128)
> master_1   | I1002 12:12:59.372534    11 master.cpp:4108] Forwarding status 
> update TASK_FAILED (UUID: 4a1b2387-a469-4f01-bfcb-0d1cccbde550) for task 
> sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1-
> master_1   | I1002 12:12:59.373018    11 master.cpp:5576] Updating the latest 
> state of task sleepy.87eb6191-68fe-11e5-9444-8eb895523b9c of framework 
> 20150923-122130-2153451692-5050-1- to TASK_FAILED
> master_1   | I1002 12:12:59.373447    11 hierarchical.hpp:814] Recovered 
> cpus(*):0.1; mem(*):16; ports(*):[31685-31685] (total: cpus(*):4; 
> mem(*):1001; disk(*):52869; ports(*):[31000-32000], allocated: 
> cpus(*):8.32667e-17) on slave 20151002-120829-2153451692-5050-1-S0 from 
> framework 20150923-122130-2153451692-5050-1-
> {noformat}
> Another issue: 

[jira] [Comment Edited] (MESOS-3573) Mesos does not kill orphaned docker containers

2015-12-30 Thread Ian Babrou (JIRA)

[ https://issues.apache.org/jira/browse/MESOS-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15075015#comment-15075015 ]

Ian Babrou edited comment on MESOS-3573 at 12/30/15 1:09 PM:
-

After digging around the code for a bit, I think I have an explanation for one 
part of the issue: when you remove /var/lib/mesos/meta/slaves/latest you also 
lose the slave ID. The next time you start the Mesos slave, it only tries to 
recover containers that belong to the new ID. I don't know if it is possible to 
change anything about that behavior. Looks like I have to kill everything 
manually. [~tnachen] please correct me if I'm wrong here.

* 
https://mail-archives.apache.org/mod_mbox/mesos-commits/201505.mbox/%3ca4ce47aae392480894e375e0b4626...@git.apache.org%3E
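
For context, a minimal sketch of where that slave ID comes from on disk. The 
helper name and error handling below are hypothetical, not the actual recovery 
code; only the layout assumption (meta/slaves/latest is a symlink to the 
directory named after the slave ID) mirrors what the comment above describes. 
Once the symlink is gone, the slave registers with a fresh ID, so containers 
named after the old ID are never matched during recovery:

{noformat}
// Illustrative-only: read the checkpointed slave ID from the work directory.
#include <filesystem>
#include <iostream>
#include <optional>
#include <string>

namespace fs = std::filesystem;

std::optional<std::string> checkpointedSlaveId(const fs::path& workDir)
{
  const fs::path latest = workDir / "meta" / "slaves" / "latest";

  std::error_code error;
  const fs::path target = fs::read_symlink(latest, error);
  if (error) {
    // e.g. the operator removed meta/slaves/latest.
    return std::nullopt;
  }

  // The directory name is the slave ID, e.g.
  // "57997611-238c-4c65-a47d-5784298129e3-S0".
  return target.filename().string();
}

int main()
{
  const std::optional<std::string> slaveId =
    checkpointedSlaveId("/var/lib/mesos");

  if (slaveId) {
    std::cout << "Recovering as slave " << *slaveId << std::endl;
  } else {
    std::cout << "No checkpointed slave ID; the slave registers with a new "
                 "one, so containers named after the previous ID are never "
                 "considered for recovery." << std::endl;
  }

  return 0;
}
{noformat}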

Still, the death of mesos-docker-executor leads to an orphaned Docker 
container. Relevant logs from the Mesos slave (a bunch of errors here and no 
call to docker stop!):

{noformat}
I1230 12:54:29.586717 22451 state.cpp:54] Recovering state from 
'/tmp/mesos/meta'
I1230 12:54:29.587085 22451 state.cpp:681] No checkpointed resources found at 
'/tmp/mesos/meta/resources/resources.info'
I1230 12:54:29.598330 22455 fetcher.cpp:79] Clearing fetcher cache
I1230 12:54:29.598755 22455 slave.cpp:4318] Recovering framework 
57997611-238c-4c65-a47d-5784298129e3-
I1230 12:54:29.599159 22455 slave.cpp:5108] Recovering executor 
'.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework 
57997611-238c-4c65-a47d-5784298129e3-
I1230 12:54:29.600065 22455 status_update_manager.cpp:202] Recovering status 
update manager
I1230 12:54:29.600433 22455 status_update_manager.cpp:210] Recovering executor 
'.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework 
57997611-238c-4c65-a47d-5784298129e3-
I1230 12:54:29.600770 22455 status_update_manager.cpp:499] Creating 
StatusUpdate stream for task .5b08ba75-aef4-11e5-a8e8-aae46d62e152 of 
framework 57997611-238c-4c65-a47d-5784298129e3-
I1230 12:54:29.600404 22452 slave.cpp:682] Successfully attached file 
'/tmp/mesos/slaves/57997611-238c-4c65-a47d-5784298129e3-S0/frameworks/57997611-238c-4c65-a47d-5784298129e3-/executors/.5b08ba75-aef4-11e5-a8e8-aae46d62e152/runs/da778106-fc42-48d7-bc79-655e9c8cce5a'
I1230 12:54:29.601595 22455 status_update_manager.cpp:802] Replaying status 
update stream for task .5b08ba75-aef4-11e5-a8e8-aae46d62e152
I1230 12:54:29.602447 22455 docker.cpp:536] Recovering Docker containers
I1230 12:54:29.602839 22454 containerizer.cpp:384] Recovering containerizer
I1230 12:54:29.603252 22454 containerizer.cpp:433] Skipping recovery of 
executor '.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework 
57997611-238c-4c65-a47d-5784298129e3- because it was not launched from 
mesos containerizer
I1230 12:54:29.604511 22455 docker.cpp:842] Running docker -H 
unix:///var/run/docker.sock ps -a
I1230 12:54:29.770815 22450 docker.cpp:723] Running docker -H 
unix:///var/run/docker.sock inspect 
mesos-57997611-238c-4c65-a47d-5784298129e3-S0.da778106-fc42-48d7-bc79-655e9c8cce5a
I1230 12:54:29.873257 22450 docker.cpp:640] Recovering container 
'da778106-fc42-48d7-bc79-655e9c8cce5a' for executor 
'.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework 
'57997611-238c-4c65-a47d-5784298129e3-'
I1230 12:54:29.874300 22450 docker.cpp:687] Checking if Docker container named 
'/mesos-57997611-238c-4c65-a47d-5784298129e3-S0.da778106-fc42-48d7-bc79-655e9c8cce5a'
 was started by Mesos
I1230 12:54:29.874343 22450 docker.cpp:697] Checking if Mesos container with ID 
'da778106-fc42-48d7-bc79-655e9c8cce5a' has been orphaned
I1230 12:54:29.874552 22450 docker.cpp:1584] Executor for container 
'da778106-fc42-48d7-bc79-655e9c8cce5a' has exited
I1230 12:54:29.874567 22450 docker.cpp:1385] Destroying container 
'da778106-fc42-48d7-bc79-655e9c8cce5a'
I1230 12:54:29.874590 22450 docker.cpp:1487] Running docker stop on container 
'da778106-fc42-48d7-bc79-655e9c8cce5a'
I1230 12:54:29.874892 22450 slave.cpp:4170] Sending reconnect request to 
executor '.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework 
57997611-238c-4c65-a47d-5784298129e3- at executor(1)@127.0.0.1:42053
E1230 12:54:29.875147 22450 slave.cpp:3537] Termination of executor 
'.5b08ba75-aef4-11e5-a8e8-aae46d62e152' of framework 
57997611-238c-4c65-a47d-5784298129e3- failed: Container 
'da778106-fc42-48d7-bc79-655e9c8cce5a' not found
I1230 12:54:29.876190 22457 poll_socket.cpp:111] Socket error while connecting
I1230 12:54:29.876230 22457 process.cpp:1603] Failed to send 
'mesos.internal.ReconnectExecutorMessage' to '127.0.0.1:42053', connect: Socket 
error while connecting
E1230 12:54:29.876283 22457 process.cpp:1911] Failed to shutdown socket with fd 
9: Transport endpoint is not connected
I1230 12:54:29.876399 22450 slave.cpp:2762] Handling status update TASK_FAILED 
(UUID: b4d23693-b17d-4e66-9d98-09da7839a731) for task