I thought we killed this problem?

nj12:~ # lxc-start -n vps001 -f /etc/lxc/vps001/config
lxc-start: Device or resource busy - failed to remove previous cgroup '/sys/fs/cgroup/vps001'
lxc-start: failed to spawn 'vps001'
lxc-start: Device or resource busy - failed to remove cgroup '/sys/fs/cgroup/vps001'

nj12:~ # lxc-ps auxwww |grep vps001
            root      9307  0.0  0.0   7668   808 pts/0    S+   14:06  0:00 grep vps001

nj12:~ # lxc-info -n vps001
'vps001' is STOPPED

nj12:~ # lxc-destroy -n vps001
'vps001' does not exist

nj12:~ # mount |grep cgroup
cgroup on /sys/fs/cgroup type cgroup (rw)

nj12:~ # rm -rf /sys/fs/cgroup/vps001
rm: cannot remove `/sys/fs/cgroup/vps001/30149/cpuset.memory_spread_slab': Operation not permitted
rm: cannot remove `/sys/fs/cgroup/vps001/30149/cpuset.memory_spread_page': Operation not permitted
[...]
rm: cannot remove `/sys/fs/cgroup/vps001/cgroup.procs': Operation not permitted
rm: cannot remove `/sys/fs/cgroup/vps001/tasks': Operation not permitted
nj12:~ #

The directories and files still exist, so "just ignore the error" doesn't apply here. What happened is that the user issued "reboot" from within the container; in my own testing I had only ever used "shutdown -r now", which worked fine.

This is lxc 0.7.4.2 on kernel 2.6.39.

How can I clear this cgroup? How can I even tell whether any processes are really holding it open, given that lxc-ps shows none? And how can I restart this container, other than by editing the start script to use a different cgroup name or rebooting the entire host?
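The only check I've come up with for that second question, assuming the stale tasks files are still readable, is to cat them all and see whether any PIDs turn up:

  # print any PIDs still attached anywhere under the stale cgroup;
  # empty output would mean nothing is actually holding it open
  find /sys/fs/cgroup/vps001 -name tasks -exec cat {} +

But I don't know whether empty tasks files are enough to rule out something else pinning the hierarchy.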

-- 
bkw
