Re: [Lxc-users] two NICs in container
On 06/01/2011 07:26 AM, Mihamina Rakotomandimby wrote:
> Hi all,
> When I create a container, I usually create it with only one NIC:
> [...]
> lxc.network.type = veth
> lxc.network.flags = up
> lxc.network.link = br0
> lxc.network.ipv4 = 41.204.96.2/28
> lxc.network.name = eth0
> [...]
> Now I want to create a container with 2 NICs. On the host I have another
> bridge br1, and I want the 2nd container interface to be eth1. How to?

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 41.204.96.2/28
lxc.network.name = eth0

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br1
lxc.network.ipv4 = myip/28
lxc.network.name = eth1

Alternatively, you can get rid of the virtual interface names, because they are assigned automatically:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 41.204.96.2/28

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br1
lxc.network.ipv4 = myip/28

--
Simplify data backup and recovery for your virtual environment with vRanger. Installation's a snap, and flexible recovery options mean your data is safe, secure and there when you need it. Data protection magic? Nope - It's vRanger. Get your free trial download today. http://p.sf.net/sfu/quest-sfdev2dev
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
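A quick way to sanity-check a multi-NIC config like the one above is to count the `lxc.network.type` lines, since each one begins a new network section. A minimal sketch (the `myip` placeholder is the poster's own stand-in for the real second address, kept as-is):

```python
# Two-NIC container config as suggested in the reply above.
CONFIG = """\
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 41.204.96.2/28
lxc.network.name = eth0

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br1
lxc.network.ipv4 = myip/28
lxc.network.name = eth1
"""

# Each 'lxc.network.type' line starts a new interface definition, so the
# count equals the number of NICs the container will get.
nics = sum(1 for line in CONFIG.splitlines()
           if line.startswith("lxc.network.type"))
print(nics)  # 2
```

The same counting trick works on a real config file by reading it with `open()` instead of the inline string.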
[Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
After a minor kernel update lxc-start does not work any more:

root@vms2:/lxc# lxc-start -f bunny.cfg -n bunny -d -o /dev/tty
lxc-start 1306913218.901 ERROR lxc_namespace - failed to clone(0x6c020000): Invalid argument
lxc-start 1306913218.901 ERROR lxc_start - Invalid argument - failed to fork into a new namespace
lxc-start 1306913218.901 ERROR lxc_start - failed to spawn 'bunny'
lxc-start 1306913218.901 ERROR lxc_cgroup - No such file or directory - failed to remove cgroup '/cgroup/bunny'

root@vms2:/lxc# uname -a; lxc-version
Linux vms2 2.6.32-32-server #62-Ubuntu SMP Wed Apr 20 22:07:43 UTC 2011 x86_64 GNU/Linux
lxc version: 0.7.4.1

root@vms2:/lxc# mount | grep cgroup
none on /cgroup type cgroup (rw)

--
Ullrich Horlacher          Server- und Arbeitsplatzsysteme
Rechenzentrum              E-Mail: horlac...@rus.uni-stuttgart.de
Universitaet Stuttgart     Tel:    ++49-711-685-65868
Allmandring 30             Fax:    ++49-711-682357
70550 Stuttgart (Germany)  WWW:    http://www.rus.uni-stuttgart.de/
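The flag word in the error message already hints at the cause. Decoding 0x6c020000 against the kernel's clone-flag constants (values from include/uapi/linux/sched.h) shows that lxc-start requested a network namespace, and clone() returns EINVAL when the kernel was built without support for a requested namespace. A small decoding sketch:

```python
# clone() namespace flag values from the Linux kernel's sched.h.
CLONE_FLAGS = {
    0x00020000: "CLONE_NEWNS",    # mount namespace
    0x04000000: "CLONE_NEWUTS",   # hostname namespace
    0x08000000: "CLONE_NEWIPC",   # IPC namespace
    0x10000000: "CLONE_NEWUSER",  # user namespace
    0x20000000: "CLONE_NEWPID",   # PID namespace
    0x40000000: "CLONE_NEWNET",   # network namespace
}

def decode(flags):
    """Return the names of the namespace flags set in a clone() flag word."""
    return [name for bit, name in sorted(CLONE_FLAGS.items()) if flags & bit]

# The flag word from the lxc_namespace error above.
print(decode(0x6c020000))
# ['CLONE_NEWNS', 'CLONE_NEWUTS', 'CLONE_NEWIPC', 'CLONE_NEWPID', 'CLONE_NEWNET']
```

CLONE_NEWNET is among the requested flags, which is consistent with the diagnosis later in the thread: the updated Ubuntu 2.6.32 kernel had CONFIG_NET_NS disabled.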
Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
On 06/01/2011 10:25 AM, Ulli Horlacher wrote:
> On Wed 2011-06-01 (10:18), Daniel Lezcano wrote:
> root@vms2:/lxc# lxc-start -f bunny.cfg -n bunny -d -o /dev/tty
> lxc-start 1306913218.901 ERROR lxc_namespace - failed to clone(0x6c020000): Invalid argument

Can you show the content of the /cgroup root directory please?
Any message in /var/log/messages?
Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
On Wed 2011-06-01 (10:30), Daniel Lezcano wrote:
> On 06/01/2011 10:25 AM, Ulli Horlacher wrote:
> > On Wed 2011-06-01 (10:18), Daniel Lezcano wrote:
> > root@vms2:/lxc# lxc-start -f bunny.cfg -n bunny -d -o /dev/tty
> > lxc-start 1306913218.901 ERROR lxc_namespace - failed to clone(0x6c020000): Invalid argument
>
> Can you show the content of the /cgroup root directory please?
> Any message in /var/log/messages?

2011-06-01 10:34:53 [ 5228.816214] device vetheBqcj5 entered promiscuous mode
2011-06-01 10:34:53 [ 5228.817240] ADDRCONF(NETDEV_UP): vetheBqcj5: link is not ready

This is strange, because I have not configured vetheBqcj5. I use:

root@vms2:/var/log# grep network /lxc/bunny.cfg
lxc.network.type = veth
lxc.network.link = br8
lxc.network.name = eth0

With:

root@vms2:/var/log# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
        up ifconfig eth0 up

auto br0
iface br0 inet static
        address 129.69.1.68
        netmask 255.255.255.0
        gateway 129.69.1.254
        bridge_ports eth0
        bridge_stp off
        bridge_maxwait 5
        post-up /usr/sbin/brctl setfd br0 0

# VLAN8
auto eth2
iface eth2 inet manual
        up ifconfig eth2 up

auto vlan8
iface vlan8 inet manual
        vlan_raw_device eth2
        up ifconfig vlan8 up

auto br8
iface br8 inet manual
        bridge_ports vlan8
        bridge_maxwait 5
        bridge_stp off
        post-up /usr/sbin/brctl setfd br8 0

root@vms2:/var/log# ifconfig
br0       Link encap:Ethernet  HWaddr 00:23:ae:6c:4f:cd
          inet addr:129.69.1.68  Bcast:129.69.1.255  Mask:255.255.255.0
          inet6 addr: fe80::223:aeff:fe6c:4fcd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:140360 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4288 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:10991449 (10.9 MB)  TX bytes:631122 (631.1 KB)

br8       Link encap:Ethernet  HWaddr 00:e0:52:b7:37:fe
          inet6 addr: fe80::2e0:52ff:feb7:37fe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:54832 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2654293 (2.6 MB)  TX bytes:468 (468.0 B)

eth0      Link encap:Ethernet  HWaddr 00:23:ae:6c:4f:cd
          inet6 addr: fe80::223:aeff:fe6c:4fcd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:154863 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4300 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:30994037 (30.9 MB)  TX bytes:632223 (632.2 KB)
          Memory:fe9e-fea0

eth2      Link encap:Ethernet  HWaddr 00:e0:52:b7:37:fe
          inet6 addr: fe80::2e0:52ff:feb7:37fe/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:55625 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3513414 (3.5 MB)  TX bytes:936 (936.0 B)
          Interrupt:18 Base address:0x8f00

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:152 errors:0 dropped:0 overruns:0 frame:0
          TX packets:152 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:12394 (12.3 KB)  TX bytes:12394 (12.3 KB)

veth80Eh90 Link encap:Ethernet  HWaddr 3a:ce:cf:67:68:55
          UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

veth9NBAU3 Link encap:Ethernet  HWaddr 2a:00:8d:c1:1f:98
          UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vethO3dr66 Link encap:Ethernet  HWaddr 62:05:7b:53:4d:07
          UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vethORKGHz Link encap:Ethernet  HWaddr 0e:b6:91:af:d2:9d
          UP BROADCAST PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0
Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
On 06/01/2011 10:45 AM, Ulli Horlacher wrote:
> [ ... ]
> 2011-06-01 10:34:53 [ 5228.816214] device vetheBqcj5 entered promiscuous mode
> 2011-06-01 10:34:53 [ 5228.817240] ADDRCONF(NETDEV_UP): vetheBqcj5: link is not ready
>
> This is strange, because I have not configured vetheBqcj5.

It is configured by lxc automatically. No worries.

Oh! As far as I remember, the Ubuntu kernel team disabled the network namespace in the kernel. Can you check that with lxc-checkconfig?

Thanks
  -- Daniel
Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
On 06/01/2011 11:17 AM, Daniel Lezcano wrote:
> On 06/01/2011 10:45 AM, Ulli Horlacher wrote:
> > [ ... ]
> > 2011-06-01 10:34:53 [ 5228.816214] device vetheBqcj5 entered promiscuous mode
> > 2011-06-01 10:34:53 [ 5228.817240] ADDRCONF(NETDEV_UP): vetheBqcj5: link is not ready
> >
> > This is strange, because I have not configured vetheBqcj5.
>
> It is configured by lxc automatically. No worries.
>
> Oh! As far as I remember, the Ubuntu kernel team disabled the network
> namespace in the kernel. Can you check that with lxc-checkconfig?

Daniel, I use Natty (11.04) and there it's enabled:

$ lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.38-8-server
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup namespace: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note: Before booting a new kernel, you can check its configuration.
Usage: CONFIG=/path/to/config /usr/bin/lxc-checkconfig

Just FYI.

tamas
Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
On Wed 2011-06-01 (11:17), Daniel Lezcano wrote:
> On 06/01/2011 10:45 AM, Ulli Horlacher wrote:
> > [ ... ]
> > 2011-06-01 10:34:53 [ 5228.816214] device vetheBqcj5 entered promiscuous mode
> > 2011-06-01 10:34:53 [ 5228.817240] ADDRCONF(NETDEV_UP): vetheBqcj5: link is not ready
> >
> > This is strange, because I have not configured vetheBqcj5.
>
> It is configured by lxc automatically. No worries.

Ahh.. ok :-)

> Oh! As far as I remember, the Ubuntu kernel team disabled the network
> namespace in the kernel. Can you check that with lxc-checkconfig?

root@vms2:~# lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.32-32-server
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: missing
                   ^^ A!

Can I enable it at runtime or is it a compile time feature?
[Lxc-users] reboot fail on natty container on natty host
Hi,

Can you reboot on the console (via lxc-console) in a container?

HostOS: Ubuntu natty
lxc version: 0.7.4-0ubuntu7.1

I make a natty container on a natty host:

# lxc-create -t natty -n natty01

I start the container, run lxc-console, then run reboot on the console:

# lxc-start -n natty01 -d -o log -l debug
# lxc-console -n natty01
: (snip)
natty01 login: root
Password:
: (snip)
root@natty01:~# reboot
: (snip)
root@natty01:~# lxc-console: Input/output error - failed to read

But the reboot fails, so the container is stopped:

# lxc-info -n natty01
'natty01' is STOPPED

The log is as follows:

(run reboot)
lxc-start 1306922495.549 DEBUG lxc_utmp - got inotify event 2 for utmp
lxc-start 1306922495.551 DEBUG lxc_utmp - utmp handler - run level is 2/6
lxc-start 1306922495.551 DEBUG lxc_utmp - Setting up utmp shutdown timer
lxc-start 1306922495.551 DEBUG lxc_utmp - Container rebooting
lxc-start 1306922495.684 DEBUG lxc_utmp - got inotify event 2 for utmp
lxc-start 1306922495.684 DEBUG lxc_utmp - utmp handler - run level is 2/6
lxc-start 1306922495.685 DEBUG lxc_utmp - got inotify event 2 for utmp
lxc-start 1306922495.685 DEBUG lxc_utmp - utmp handler - run level is 2/6
lxc-start 1306922495.686 DEBUG lxc_utmp - got inotify event 2 for utmp
lxc-start 1306922495.686 DEBUG lxc_utmp - utmp handler - run level is 2/6
lxc-start 1306922495.689 DEBUG lxc_utmp - got inotify event 2 for utmp
lxc-start 1306922495.689 DEBUG lxc_utmp - utmp handler - run level is 2/6
lxc-start 1306922496.439 DEBUG lxc_utmp - got inotify event 2 for utmp
lxc-start 1306922496.440 DEBUG lxc_utmp - utmp handler - run level is /
lxc-start 1306922496.552 DEBUG lxc_cgroup - using cgroup mounted at '/cgroup'
lxc-start 1306922496.552 DEBUG lxc_utmp - there are 4 tasks running
lxc-start 1306922497.552 DEBUG lxc_utmp - there are 4 tasks running
lxc-start 1306922498.341 DEBUG lxc_utmp - got inotify event 2 for utmp
lxc-start 1306922498.341 DEBUG lxc_utmp - utmp handler - run level is /
lxc-start 1306922498.552 DEBUG lxc_utmp - there are 1 tasks running
lxc-start 1306922498.552 INFO  lxc_utmp - container has rebooted
lxc-start 1306922498.552 DEBUG lxc_utmp - Clearing utmp shutdown timer
lxc-start 1306922498.564 DEBUG lxc_start - container init process exited
lxc-start 1306922498.570 INFO  lxc_error - child 4391 ended on signal (9)
lxc-start 1306922498.571 DEBUG lxc_cgroup - using cgroup mounted at '/cgroup'
lxc-start 1306922498.572 DEBUG lxc_cgroup - '/cgroup/natty01' unlinked
lxc-start 1306922498.574 INFO  lxc_start_ui - rebooting container

I did the same thing with a lucid container on a maverick host, and there I can reboot on the console in the container.

--
ka...@jazz.email.ne.jp / KATOH Yasufumi
Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
On 06/01/2011 11:55 AM, Papp Tamas wrote:
> On 06/01/2011 11:17 AM, Daniel Lezcano wrote:
> > Oh! As far as I remember, the Ubuntu kernel team disabled the network
> > namespace in the kernel. Can you check that with lxc-checkconfig?
>
> Daniel, I use Natty (11.04) and there it's enabled.

Yep, only the 2.6.32 kernel is concerned by this modification. What I meant is that the kernel team disabled netns with an update to 2.6.32:

https://lists.ubuntu.com/archives/kernel-team/2011-March/015173.html

That looked very weird to me. I tried to convince them that netns was needed, but they decided to remove a kernel feature in a kernel update. :s
Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
On 06/01/2011 12:25 PM, Ulli Horlacher wrote:
> On Wed 2011-06-01 (11:17), Daniel Lezcano wrote:
> > Oh! As far as I remember, the Ubuntu kernel team disabled the network
> > namespace in the kernel. Can you check that with lxc-checkconfig?
>
> root@vms2:~# lxc-checkconfig
> Kernel config /proc/config.gz not found, looking in other places...
> Found kernel config file /boot/config-2.6.32-32-server
> --- Namespaces ---
> Namespaces: enabled
> Utsname namespace: enabled
> Ipc namespace: enabled
> Pid namespace: enabled
> User namespace: enabled
> Network namespace: missing
>
> Can I enable it at runtime or is it a compile time feature?

It is a compile-time feature :(

But you can install a more recent kernel as suggested in the kernel team email thread:

https://lists.ubuntu.com/archives/kernel-team/2011-March/015173.html

"Well, there is an alternative for those folks that _are_ dependent on NET_NS:

    sudo apt-get install linux-image-server-lts-backport-maverick"
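The lxc-checkconfig test in question boils down to looking up CONFIG_NET_NS in the kernel's config file. A sketch of that check (the config fragment below is a made-up example resembling the affected 2.6.32-32-server kernel with NET_NS disabled, not the poster's actual file):

```python
# Example kernel-config fragment; against a live system you would read
# /boot/config-<kernel version> instead.
KCONFIG = """\
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
# CONFIG_NET_NS is not set
"""

def option_enabled(config_text, option):
    # An enabled option appears as 'OPTION=y' (or '=m' for modules);
    # a disabled one shows up as a '# OPTION is not set' comment.
    return any(line.startswith(option + "=")
               for line in config_text.splitlines())

status = "enabled" if option_enabled(KCONFIG, "CONFIG_NET_NS") else "missing"
print("Network namespace: " + status)  # Network namespace: missing
```

Because the option is compiled out, there is nothing to toggle at runtime, which is why the only fixes are rebuilding the kernel or installing one built with CONFIG_NET_NS=y, as suggested above.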
Re: [Lxc-users] lxc_namespace - failed to clone(0x6c020000): Invalid argument
On 06/01/2011 03:06 PM, Ulli Horlacher wrote:
> On Wed 2011-06-01 (14:40), Daniel Lezcano wrote:
> > > root@vms2:~# lxc-checkconfig
> > > Kernel config /proc/config.gz not found, looking in other places...
> > > Found kernel config file /boot/config-2.6.32-32-server
> > > --- Namespaces ---
> > > Namespaces: enabled
> > > Utsname namespace: enabled
> > > Ipc namespace: enabled
> > > Pid namespace: enabled
> > > User namespace: enabled
> > > Network namespace: missing
> > >
> > > Can I enable it at runtime or is it a compile time feature?
> >
> > It is a compile-time feature :(
>
> Bad...
>
> > https://lists.ubuntu.com/archives/kernel-team/2011-March/015173.html
> >
> > "Well, there is an alternative for those folks that _are_ dependent on
> > NET_NS:
> >
> > sudo apt-get install linux-image-server-lts-backport-maverick"
>
> With this workaround my LXC containers are working again! Thanks!
>
> Nevertheless this IS an (Ubuntu) bug! Both packages, lxc and linux-image,
> belong to the same Ubuntu (LTS!) version and should work together! I will
> file a bug report at launchpad.net.

I fully agree with you. Disabling a kernel feature in an update in order to fix a bug is not a solution.
Re: [Lxc-users] reboot fail on natty container on natty host
On 06/01/2011 12:33 PM, KATOH Yasufumi wrote:
> Hi,
> Can you reboot on console by lxc-console in a container?
> HostOS: Ubuntu natty
> lxc version: 0.7.4-0ubuntu7.1

Good catch, thanks for reporting! Fixed with this patch:

Index: lxc/src/lxc/commands.c
===================================================================
--- lxc.orig/src/lxc/commands.c	2011-06-01 16:56:16.911017001 +0200
+++ lxc/src/lxc/commands.c	2011-06-01 16:57:37.661017001 +0200
@@ -236,6 +236,11 @@ static int incoming_command_handler(int
 		return -1;
 	}
 
+	if (fcntl(connection, F_SETFD, FD_CLOEXEC)) {
+		SYSERROR("failed to set close-on-exec on incoming connection");
+		goto out_close;
+	}
+
 	if (setsockopt(connection, SOL_SOCKET, SO_PASSCRED, &opt, sizeof(opt))) {
 		SYSERROR("failed to enable credential on socket");

I will commit in a few hours.
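The one-line fix marks the incoming connection close-on-exec so the descriptor is not inherited across the exec of the rebooting container's init. The same fcntl operation, sketched in Python for illustration (the patch itself is C; a socket stands in for the incoming connection fd):

```python
import fcntl
import socket

# A socket as a stand-in for the incoming connection descriptor.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
fd = sock.fileno()

# The call the patch adds: set the close-on-exec flag, so the fd is
# closed automatically when the process exec()s instead of being
# leaked into the new program.
fcntl.fcntl(fd, fcntl.F_SETFD, fcntl.FD_CLOEXEC)

# Read the flag word back to confirm the bit is set.
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
print(bool(flags & fcntl.FD_CLOEXEC))  # True
```

Without the flag, the command socket survives into the restarted init, which is consistent with the I/O errors seen on lxc-console during the failed reboot.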
Re: [Lxc-users] failed to create pty #0
Hey guys!

I know, I know... This is like 9 months old. But I finally got caught between a rock and a hard place on Fedora 15 and was researching that problem (for which others have posted good pointers on this list - thank you very much - got me going again), and I kept constantly running into this same damn problem. Still. So I finally had to drill into it.

On Mon, 2010-09-20 at 09:03 -0400, Michael H. Warfield wrote:
> On Mon, 2010-09-20 at 05:29 -0400, l...@jelmail.com wrote:
> > Hi Daniel, I have tracked down this issue somewhat. It seems to be caused
> > by shutting down a container (not by lxc-stop) and is caused by the
> > rc.shutdown script present in Arch Linux.
>
> I've seen this problem too, even when lxc-stop is used and the container is
> a Fedora container (mostly F12's).
>
> > If I shut down the container and stop it with lxc-stop, then restart the
> > container, I get that "failed to create pty #0" when sshing into the
> > container. I have to restart the host system once that's happened. I don't
> > know what specifically causes the problem because I haven't had time to
> > investigate, but I do know that it's fixed by removing everything in
> > rc.shutdown from the line containing stat_busy "Saving System Clock"
> > onwards, as suggested on lxc.teegra.net (I had done this on a prior
> > container but missed this step on a new one, which is why the problem only
> > started happening recently).
>
> I'm going to have to see if there's something similar in the Fedora
> shutdown scripts.
>
> Interesting. I hadn't tried using lxc-stop without shutting down the
> contained OS, so I hadn't narrowed it down that far.

Interesting. I narrowed this down to a specific set of commands in the Fedora halt script. These are the buggers that are causing the problem...

# Remount read only anything that's left mounted.
echo $"Remounting remaining filesystems readonly"
mount | awk '{ print $1,$3 }' | while read dev dir; do
	fstab-decode mount -n -o ro,remount $dev $dir
done

Comment those lines out. Problem goes away.

Oh, I gotta bad feeling here. We've been fighting this whole bloody remount thing propagating back into the host, and the random acts of terrorism that lie therein, for a long time. Let's see...

mount | awk '{ print $1,$3 }'
rootfs /
/dev/sdb1 /
/dev/sda8 /srv/shared
none /dev/pts
none /proc
none /sys
none /dev/shm
devpts /dev/console
devpts /dev/tty1
devpts /dev/tty2
devpts /dev/tty3
devpts /dev/tty4
devpts /dev/tty5
devpts /dev/tty6
/proc/bus/usb /proc/bus/usb
none /proc/sys/fs/binfmt_misc

Yup... Ok... That doesn't take much guessing. The container is remounting /dev/pts read-only and we kiss it good-bye in the host. Sigh.

I just got done testing this on an F15 host / F14 client w/ LXC 0.7.4.2, 2.6.38.6-27.fc15 kernel.

Probably not a lot we can do from user space. That's some isolation we really need down in kernel land somewhere. Yes, I can hear it now, the old country doctor's advice: "Well, then, don't do that." But the fact is the container can do something horrible that propagates back into the host. Yes! Now that I know what, specifically, is causing this, I can correct it in the guest. But a rogue guest can do bad things. This is not good. The container should NEVER have that kind of power to affect the host.

Regards,
Mike

> So something in that shutdown file has the capacity to disable the host's
> ability to start further containers and also disable the ability to ssh
> into already running ones (thankfully, lxc-console still worked).
>
> John

Regards,
Mike

--
Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com
  /\/\|=mhw=|\/\/           | (678) 463-0932 | http://www.wittsend.com/mhw/
  NIC whois: MHW9           | An optimist believes we live in the best of all
  PGP Key: 0x674627FF       | possible worlds. A pessimist is sure of it!
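The hazard Mike describes is easy to replay: the halt script's `awk '{ print $1,$3 }'` takes every device/mount-point pair from `mount` output and the loop then ro-remounts all of them, with no filter for pseudo-filesystems like the devpts instance shared with the host. A re-enactment on sample data (made-up mount lines shaped like the listing above; nothing is actually remounted here):

```python
# Sample 'mount' output resembling the container's view in the thread.
MOUNT_OUTPUT = """\
rootfs on / type rootfs (rw)
/dev/sdb1 on / type ext4 (rw)
none on /dev/pts type devpts (rw)
none on /proc type proc (rw)
devpts on /dev/console type devpts (rw)
"""

# The halt script's parsing: field 1 is the device, field 3 the mount
# point (equivalent to awk '{ print $1,$3 }').
targets = []
for line in MOUNT_OUTPUT.splitlines():
    fields = line.split()
    dev, mountpoint = fields[0], fields[2]
    # The script would then run, for EVERY entry:
    #   fstab-decode mount -n -o ro,remount $dev $dir
    # Here we only record what it would touch.
    targets.append((dev, mountpoint))

print(("none", "/dev/pts") in targets)  # True
```

The absence of any guard on `mountpoint` is exactly the bug: the devpts entries get ro-remounted along with everything else, and that remount propagates back to the host.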
Re: [Lxc-users] [PATCH] ignore non-lxc configuration line
On Fri, 2011-05-13 at 22:32 +0200, Daniel Lezcano wrote:
> From: Daniel Lezcano daniel.lezc...@free.fr
>
> Ignore lines in the configuration file that do not begin with "lxc.".
> This way the configuration file can be mixed with information used by
> another component through the lxc library.

Wow... I seem to recall requesting this sort of thing ages ago - maybe even before we created the -users list, when we only had the -dev list - and being shot down. I have SO wanted this feature. With it, the high-level scripts can implement many of the OpenVZ compatibility things we need and keep them in one file.

Many thanks. I am SO glad to see this!

Regards,
Mike

> Signed-off-by: Daniel Lezcano dlezc...@fr.ibm.com
> ---
>  src/lxc/confile.c |   12 ++++++++----
>  1 files changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/src/lxc/confile.c b/src/lxc/confile.c
> index 791f04f..d632404 100644
> --- a/src/lxc/confile.c
> +++ b/src/lxc/confile.c
> @@ -799,7 +799,7 @@ static int parse_line(char *buffer, void *data)
>  	char *dot;
>  	char *key;
>  	char *value;
> -	int ret = -1;
> +	int ret = 0;
>
>  	if (lxc_is_line_empty(buffer))
>  		return 0;
> @@ -815,10 +815,14 @@ static int parse_line(char *buffer, void *data)
>  	}
>
>  	line += lxc_char_left_gc(line, strlen(line));
> -	if (line[0] == '#') {
> -		ret = 0;
> +
> +	/* martian option - ignoring it, the commented lines beginning by '#'
> +	 * fall in this case
> +	 */
> +	if (strncmp(line, "lxc.", 4))
> 		goto out;
> -	}
> +
> +	ret = -1;
>
>  	dot = strstr(line, "=");
>  	if (!dot) {
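The patched parse_line() behavior is simple to restate: empty lines are fine, anything not beginning with "lxc." (which includes "#" comments) is silently ignored, and only lines that start with "lxc." but lack an "=" are errors. A sketch of that logic (a hypothetical Python rendering for illustration, not the C code itself):

```python
def classify_config_line(line):
    """Mimic the patched parse_line(): return 'empty', 'ignored',
    'invalid', or a (key, value) tuple for real lxc options."""
    stripped = line.strip()
    if not stripped:
        return "empty"
    # After the patch: any line not beginning with "lxc." is a
    # "martian option" and is skipped; '#' comments fall in this case.
    if not stripped.startswith("lxc."):
        return "ignored"
    # A real lxc option must contain '='.
    if "=" not in stripped:
        return "invalid"
    key, _, value = stripped.partition("=")
    return (key.strip(), value.strip())

print(classify_config_line("# a comment"))              # ignored
print(classify_config_line("other.tool.option = 1"))    # ignored
print(classify_config_line("lxc.network.type = veth"))  # ('lxc.network.type', 'veth')
```

This is what lets an lxc config file double as the single store for another tool's settings, as Mike wanted for the OpenVZ-compatibility scripts.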