[Lxc-users] ovs-switch networking
Could somebody let me know how to set up a veth network on a container using Open vSwitch? I read through a few links on the net that asked to run a script (http://people.canonical.com/~serge/user-data-lxc-ovs.sh) which contains:

    ovs-vsctl add-port br0 \$5

What does $5 signify? Is it possible to use Open vSwitch without the host interface entering promiscuous mode? What is the advantage of using Open vSwitch instead of a bridge?

--
Kalyanasundaram
http://blogs.eskratch.com/
https://github.com/kalyanceg/
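For reference, a minimal sketch of the container config that ties such a script to a veth interface; the option names are those of lxc from that era, and the script path is illustrative:

    lxc.network.type      = veth
    lxc.network.script.up = /etc/lxc/ovs-up
    lxc.network.flags     = up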
Re: [Lxc-users] shared memory between containers
Quoting zheng_hua...@163.com (zheng_hua...@163.com):

> Hi, I have two processes running in two containers. They are expected
> to communicate over shared-memory IPC, but it fails. Is there any way
> to address this problem?

Yes, have the containers share an IPC namespace.

-serge
Re: [Lxc-users] IPC with shared memory?
Quoting Binknight (zheng_hua...@163.com):

> Hi, I have two processes running in two containers on the same
> hardware node. They are expected to communicate via the shared-memory
> IPC mechanism, but it fails. It seems that shared memory created in
> one container is not visible to a process in the other container
> because of the separate namespaces. Is there any way to address this
> problem?

Hm, I thought you could specify the namespaces to be unshared in lxc.conf, but I see you can't. Given the flexibility that lxc strives toward, I find that surprising. Please feel free to write a simple patch adding an lxc.ns option to lxc.conf, and use it in start.c to pick the namespaces to unshare.

Then you would create a shell in a new IPC namespace,

    lxc-unshare -s IPC -- /bin/bash

and from there start the two containers without IPC in their clone flags, so that they would share an IPC namespace with each other, but not with the host.

-serge
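A minimal sketch of that workflow; the lxc.ns option is hypothetical (it would come from the proposed patch), and the container names "web" and "db" are placeholders:

    # 1. Start a shell in a fresh IPC namespace:
    sudo lxc-unshare -s IPC -- /bin/bash

    # 2. In each container's config, list the namespaces to unshare,
    #    omitting ipc (hypothetical option from the proposed patch):
    #    lxc.ns = mnt pid uts net

    # 3. From that shell, start both containers; lacking CLONE_NEWIPC,
    #    they inherit the shell's IPC namespace, so a SysV shm segment
    #    created in one is visible in the other:
    lxc-start -n web -d
    lxc-start -n db -d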
Re: [Lxc-users] ovs-switch networking
Quoting Kalyana sundaram (kalyan...@gmail.com):

> Could somebody let me know how to set up a veth network on a container
> using Open vSwitch? I read through a few links on the net that asked to
> run a script (http://people.canonical.com/~serge/user-data-lxc-ovs.sh)

To make sure this is clear, this script is meant to be run as a user-data file for an ec2 or openstack node being brought up.

>     ovs-vsctl add-port br0 \$5
>
> What does $5 signify?

It's the name of the interface, passed in as argument 5 to the lxc.network.script= script. (The \ is there to keep $5 from being expanded by the shell as I'm catting into the script.)

> Is it possible to use Open vSwitch without the host interface entering
> promiscuous mode?

(It brings the host interface into promiscuous mode?)

> What is the advantage of using Open vSwitch instead of a bridge?

The fact that you can use a GRE tunnel (as shown in the comment at the bottom of that script) to connect containers on different lxc hosts, regardless of the networking topology behind the hosts.

So for instance, when I'm going to reproduce a bunch of distro bugs, I have a script that uses juju to fire up n openstack nodes (on a cloud over which I have no control, such as amazon's ec2). These nodes pre-populate lvm-backed containers. When I want to create a new precise container I run 'startcontainer precise', which clones a new container on the next lxc host node. All the containers are linked to each other with a GRE tunnel (served by a dnsmasq running on the first lxc node).

-serge
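For reference, a minimal sketch of such an up script; the script path and bridge name are illustrative, and br0 must already exist as an OVS bridge:

    #!/bin/sh
    # /etc/lxc/ovs-up -- run by lxc when the container's veth comes up.
    # lxc invokes the hook as: <container> net up veth <ifname>,
    # so $5 is the name of the host-side veth interface.
    BRIDGE=br0                        # pre-existing OVS bridge
    ovs-vsctl add-port "$BRIDGE" "$5"

To span hosts as described above, a GRE port can then be added to the bridge on each side (remote_ip is a placeholder):

    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.10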
[Lxc-users] lxc container shutdown or restart fails after upgrade to ubuntu 12.10
In Ubuntu 12.04 lxc worked fine for me. After upgrading to Ubuntu 12.10 I've had a persistent problem. While working with an ubuntu container I sometimes try to either shut down or restart the container:

    bmullan@container:~$ sudo shutdown -r now

or

    bmullan@container:~$ sudo shutdown -h now

In either case I'll sometimes see the following:

    The system is going down for reboot NOW!
    bmullan@container:~$
    init: tty4 main process (515) killed by TERM signal
    init: tty2 main process (530) killed by TERM signal
    init: tty3 main process (532) killed by TERM signal
    init: cron main process (541) killed by TERM signal
    init: anacron main process (539) killed by TERM signal
    init: console main process (599) killed by TERM signal
    init: tty1 main process (601) killed by TERM signal
    init: hwclock-save main process (2915) terminated with status 70
    init: alsa-store main process (2922) terminated with status 19
    init: plymouth-upstart-bridge main process (2927) terminated with status 1
     * Asking all remaining processes to terminate...   [ OK ]
     * All processes ended within 4 seconds             [ OK ]
    initctl: Event failed
     * Deactivating swap...
    swapoff: Not superuser.
                                                        [fail]
    umount: /run/lock: not mounted
    umount: /run/shm: not mounted
    mount: cannot mount block device /dev/disk/by-uuid/f2c86851-f893-4b48-b589-767ddf04caa1 read-only
     * Will now restart

At this point the terminal window will just freeze.

Now the real problem this causes: upon shutdown or restart of my HOST system (Ubuntu 12.10), my pc gets stuck in a loop that keeps repeating:

    [1176.213467] unregistered_netdevice: Waiting for lo to become free. Usage Count = 2

Any advice is appreciated.

thanks
Brian
Re: [Lxc-users] lxc container shutdown or restart fails after upgrade to ubuntu 12.10
Quoting brian mullan (bmullan.m...@gmail.com):

> Now the real problem this causes: upon shutdown or restart of my HOST
> system (Ubuntu 12.10), my pc gets stuck in a loop that keeps repeating:
>
>     [1176.213467] unregistered_netdevice: Waiting for lo to become free. Usage Count = 2
>
> Any advice is appreciated.

This is a known kernel bug in quantal, fixed (I believe) in the raring kernel. (The bug was introduced, I believe, in 3.5, and 'fixed' by the removal of the routing table soon after.)

-serge
[Lxc-users] (no subject)
Sent from my Verizon Wireless 4G LTE Smartphone