[Lxc-users] Problem with cgroup with LXC
Hello all, I have started using LXC. It is very nice, but I have a problem with cgroups: old entries are sometimes not removed from the cgroup subdirectory. It happens only occasionally; typically the zabbix agent inside an LXC container triggers it, but not always. Please see:

ls -l /cgroup/test_lxc
drwxr-xr-x 3 root root 0 2010-09-29 23:07 10194
drwxr-xr-x 3 root root 0 2010-10-01 21:11 11382
drwxr-xr-x 3 root root 0 2010-10-03 18:29 12632
drwxr-xr-x 3 root root 0 2010-09-15 15:10 1715
drwxr-xr-x 3 root root 0 2010-10-15 07:31 20270
drwxr-xr-x 3 root root 0 2010-10-16 02:05 20468
drwxr-xr-x 3 root root 0 2010-10-16 22:42 21090
drwxr-xr-x 3 root root 0 2010-10-19 04:58 22349
drwxr-xr-x 3 root root 0 2010-08-27 16:09 22455
drwxr-xr-x 3 root root 0 2010-08-29 10:45 23636
drwxr-xr-x 3 root root 0 2010-09-16 19:10 2398
drwxr-xr-x 3 root root 0 2010-10-22 00:27 24182
drwxr-xr-x 3 root root 0 2010-10-26 06:45 27044
drwxr-xr-x 3 root root 0 2010-09-04 18:26 27119
drwxr-xr-x 3 root root 0 2010-09-05 04:24 27187
drwxr-xr-x 3 root root 0 2010-09-09 21:39 30581
drwxr-xr-x 3 root root 0 2010-09-20 10:10 4793
-r--r--r-- 1 root root 0 2010-08-02 13:53 cgroup.procs
-r--r--r-- 1 root root 0 2010-08-02 13:53 cpuacct.stat
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuacct.usage
-r--r--r-- 1 root root 0 2010-08-02 13:53 cpuacct.usage_percpu
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpu.rt_period_us
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpu.rt_runtime_us
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.cpu_exclusive
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.cpus
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.mem_exclusive
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.mem_hardwall
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.memory_migrate
-r--r--r-- 1 root root 0 2010-08-02 13:53 cpuset.memory_pressure
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.memory_spread_page
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.memory_spread_slab
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.mems
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.sched_load_balance
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpuset.sched_relax_domain_level
-rw-r--r-- 1 root root 0 2010-08-02 13:53 cpu.shares
--w------- 1 root root 0 2010-08-02 13:53 devices.allow
--w------- 1 root root 0 2010-08-02 13:53 devices.deny
-r--r--r-- 1 root root 0 2010-08-02 13:53 devices.list
-rw-r--r-- 1 root root 0 2010-08-02 13:53 freezer.state
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.failcnt
--w------- 1 root root 0 2010-08-02 13:53 memory.force_empty
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.limit_in_bytes
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.max_usage_in_bytes
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.memsw.failcnt
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.memsw.limit_in_bytes
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.memsw.max_usage_in_bytes
-r--r--r-- 1 root root 0 2010-08-02 13:53 memory.memsw.usage_in_bytes
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.soft_limit_in_bytes
-r--r--r-- 1 root root 0 2010-08-02 13:53 memory.stat
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.swappiness
-r--r--r-- 1 root root 0 2010-08-02 13:53 memory.usage_in_bytes
-rw-r--r-- 1 root root 0 2010-08-02 13:53 memory.use_hierarchy
-rw-r--r-- 1 root root 0 2010-08-02 13:53 net_cls.classid
-rw-r--r-- 1 root root 0 2010-08-02 13:53 notify_on_release
-rw-r--r-- 1 root root 0 2010-08-02 13:53 tasks

ls -R1 10194
10194:
2
cgroup.procs
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpu.rt_period_us
cpu.rt_runtime_us
cpuset.cpu_exclusive
cpuset.cpus
cpuset.mem_exclusive
cpuset.mem_hardwall
cpuset.memory_migrate
cpuset.memory_pressure
cpuset.memory_spread_page
cpuset.memory_spread_slab
cpuset.mems
cpuset.sched_load_balance
cpuset.sched_relax_domain_level
cpu.shares
devices.allow
devices.deny
devices.list
freezer.state
memory.failcnt
memory.force_empty
memory.limit_in_bytes
memory.max_usage_in_bytes
memory.memsw.failcnt
memory.memsw.limit_in_bytes
memory.memsw.max_usage_in_bytes
memory.memsw.usage_in_bytes
memory.soft_limit_in_bytes
memory.stat
memory.swappiness
memory.usage_in_bytes
memory.use_hierarchy
net_cls.classid
notify_on_release
tasks

10194/2:
cgroup.procs
cpuacct.stat
cpuacct.usage
cpuacct.usage_percpu
cpu.rt_period_us
cpu.rt_runtime_us
cpuset.cpu_exclusive
cpuset.cpus
cpuset.mem_exclusive
cpuset.mem_hardwall
cpuset.memory_migrate
cpuset.memory_pressure
cpuset.memory_spread_page
cpuset.memory_spread_slab
cpuset.mems
cpuset.sched_load_balance
cpuset.sched_relax_domain_level
cpu.shares
devices.allow
devices.deny
devices.list
freezer.state
memory.failcnt
memory.force_empty
memory.limit_in_bytes
memory.max_usage_in_bytes
memory.memsw.failcnt
memory.memsw.limit_in_bytes
memory.memsw.max_usage_in_bytes
memory.memsw.usage_in_bytes
memory.soft_limit_in_bytes
memory.stat
memory.swappiness
memory.usage_in_bytes
memory.use_hierarchy
net_cls.classid
notify_on_release
tasks

It looks like this problem:
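[Editorial note: stale per-PID cgroup directories like the ones listed above can usually be removed with a plain rmdir, since the kernel only allows rmdir on a cgroup that has no remaining tasks and no child groups. A minimal sketch, assuming the /cgroup/test_lxc path from the listing; the prune_stale_cgroups helper name is my own illustration, not an lxc tool:]

```shell
#!/bin/sh
# Remove stale per-PID subdirectories under a cgroup mount point.
# The kernel refuses rmdir on a cgroup that still has tasks or child
# groups, so plain rmdir is safe here: occupied groups simply fail
# (errors discarded) and are left in place.
prune_stale_cgroups() {
    cgroup_dir="$1"
    # Depth-first (-depth) so child directories go before their parents,
    # e.g. 10194/2 is removed before 10194.
    find "$cgroup_dir" -mindepth 1 -depth -type d \
        -exec rmdir {} \; 2>/dev/null
    return 0
}

# Example invocation, using the path from the listing above:
# prune_stale_cgroups /cgroup/test_lxc
```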
Re: [Lxc-users] Problem with cgroup with LXC
On 10/26/2010 08:38 AM, Miroslav Lednicky, AVONET, s.r.o. wrote:

[ snip ]
Re: [Lxc-users] karmic on maverick
On 10/26/2010 08:57 AM, Papp Tamás wrote:

On 2010.10.24. 22:00, Daniel Lezcano wrote:

Can you check that you have at least the /etc/init/console.conf file in the rootfs? The content should be:

# console - getty
#
# This service maintains a console on tty1 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]

respawn
exec /sbin/getty -8 38400 /dev/console

The problem was not with the console but with ssh and other services, which were supposed to be started by the rc system. After some hours I got it working: I downloaded an OpenVZ template, made a diff, and applied the needed changes to the lxc template.

What are these changes?

Yeah, I suppose you logged in as a non-root user and then did a su -. The tty belongs to the initial user, not the current user, which is why it fails. I suppose this shouldn't happen when root opens the tty, but in any case you can work around it with chmod ugo+rw $(tty) until there is a proper fix.

Thank you, I'll try it next time.

Ok, thanks for pointing out the problem. This is something I will fix.

-- Daniel

--
Nokia and AT&T present the 2010 Calling All Innovators-North America contest
Create new apps & games for the Nokia N8 for consumers in U.S. and Canada
$10 million total in prizes - $4M cash, 500 devices, nearly $6M in marketing
Develop with Nokia Qt SDK, Web Runtime, or Java and Publish to Ovi Store
http://p.sf.net/sfu/nokia-dev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
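[Editorial note: Daniel's chmod ugo+rw $(tty) workaround above can be wrapped in a small helper. A hedged sketch; fix_tty is my own name, not part of lxc, and it does nothing more than apply his suggested chmod to the current (or a given) terminal device:]

```shell
#!/bin/sh
# Workaround for the tty ownership problem after `su -` in a container:
# the tty device still belongs to the original login user, so make it
# readable and writable for everyone, as suggested above.
fix_tty() {
    dev="${1:-$(tty)}"     # default to the current terminal device
    [ -e "$dev" ] || return 1
    chmod ugo+rw "$dev"
}

# Typical use from a root shell obtained via `su -`:
# fix_tty
```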
Re: [Lxc-users] Two virtual interfaces in a container
On Mon, Oct 25, 2010 at 4:15 AM, Daniel Lezcano daniel.lezc...@free.fr wrote:

On 10/25/2010 07:24 AM, Nirmal Guhan wrote:

On Sun, Oct 24, 2010 at 3:07 PM, Daniel Lezcano dlezc...@fr.ibm.com wrote:

[ snip ]

How does it work when I have eth0 in the container attached to br0? I still assign an IP to eth0 in this case as part of the lxc config. Is this a special case where an IP is required for an interface attached to the bridge?

I assume you are talking about veth + bridge, right? The network stacks are separated between the host and the container, and the veth is a pass-through network device: it is a device pair (vethA - vethB). Packets injected into vethA are received by vethB, and packets injected into vethB are received by vethA. In practice, when the container is created, vethA is attached to the bridge and vethB is moved inside the container and renamed eth0 for convenience. No IP address is assigned to vethA, but one is assigned to vethB.

Assuming vethB has the IP address 1.2.3.4 and another host has the IP 1.2.3.5, here is what happens when you ping from the container to that host:

(container): look up the route for destination address 1.2.3.5
(container): the device to send the packet from is eth0 (aka vethB)
(container): send the packet out of this device
(host): the packet arrives from vethA
(host): the bridge hooks the packet
(host): look up the destination by MAC address
(host): send the packet out on all the ports
(host): the packet goes out through the real device eth0
(peer): the packet arrives at the peer, which answers
(host): the reply arrives on the real device eth0
(host): the packet is hooked by the bridge code
(host): the bridge looks up the destination MAC address and finds vethA
(host): the bridge sends the packet to vethA
(container): the packet arrives at eth0 (aka vethB)

Thanks for the detailed explanation. So if I have multiple interfaces (eth, tap) attached to the bridge, I will assign the IP to the bridge. As I tested, I was also able to assign an IP to a tap interface attached to the bridge (so there are two IPs) and still ping both of them. The only piece I was missing is that a bridge is a layer 2 device that can take an L3 IP too :-) This helps me, though!!

~Nirmal
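[Editorial note: the veth + bridge wiring Daniel describes can be reproduced by hand with modern iproute2 plus bridge-utils, using a network namespace to stand in for the container. This is only an illustrative sketch under those assumptions (the names br0, vethA, vethB, and the 1.2.3.0/24 addresses follow the example above, and the "demo" namespace name is my own); it requires root:]

```shell
#!/bin/sh
# Recreate the veth + bridge topology from the explanation above,
# with a network namespace standing in for the container. Needs root.
set -e

ip netns add demo                      # the "container" network stack
ip link add vethA type veth peer name vethB

brctl addbr br0                        # bridge on the host side
brctl addif br0 vethA                  # vethA stays on the host, no IP
ip link set br0 up
ip link set vethA up

ip link set vethB netns demo           # move the peer into the "container"
ip netns exec demo ip link set vethB name eth0   # renamed for convenience
ip netns exec demo ip addr add 1.2.3.4/24 dev eth0
ip netns exec demo ip link set eth0 up

# From inside "demo", packets to 1.2.3.5 now travel
# eth0 (aka vethB) -> vethA -> br0, exactly as in the trace above.
```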