Re: [Lxc-users] [Newbie] lxc container to container.
Hi,

I think the best approach is the traditional way between two machines: connect via ssh and do what you want. In my eyes it is not a good idea to break through the security isolation of containers, because that could affect all of them (something bad, like getting root on the lxc host and killing its processes). But maybe there is a way I didn't see.

Regards,
Andreas

On 17.10.2013 03:48, Vijay Viswanathan wrote:
> Hi,
>
> What is the best approach to communicate between containers? Basically, I need to be able to kill a process running in one container [container-1] from another container [container-2]. Currently, I am able to kill a process inside a container from the host, but how can container-2 see stuff inside container-1? I understand containers are for isolation, but unfortunately I have a scenario where container-2 needs to act as a master container for container-1.
>
> Thx,
> Vijay

--
October Webinars: Code for Performance
Free Intel webinars can help you accelerate application performance. Explore tips for MPI, OpenMP, advanced profiling, and more. Get the most from the latest Intel processors and coprocessors. See abstracts and register:
http://pubads.g.doubleclick.net/gampad/clk?id=60135031&iu=/4140/ostg.clktrk
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
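[Editorial sketch] The ssh approach Andreas suggests boils down to sending a signal remotely. A minimal sketch, where the address 10.0.3.101, root key-based login, and the process name "myservice" are all assumptions, not from the thread:

```shell
# container-2 side (address, login and process name are hypothetical):
#   ssh root@10.0.3.101 'pkill -TERM myservice'
#
# What runs on the remote end is ordinary signal delivery, shown here
# locally with a throwaway background process as a stand-in:
sleep 300 &               # stand-in for the process inside container-1
pid=$!
kill -TERM "$pid"         # the signal pkill would send remotely
wait "$pid" 2>/dev/null
echo "exit status: $?"    # 143 = 128 + SIGTERM
```

This keeps the containers' isolation intact: container-2 only needs network reachability and an ssh credential, not visibility into container-1's pid namespace.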
Re: [Lxc-users] lxc.blkio.weight question
Hi,

You have to run the tests in both containers at the same time; then you will see the difference. If no other container needs I/O, your first container will get the same speed as your second.

Regards,
Andreas

On 17.10.2013 10:54, autumn_sky_is wrote:
> Hi,
>
> I'm using lxc in my project. I want to control disk I/O speed, so I use blkio.weight, but the result is confusing. I created 2 containers, lxc1 and lxc2:
>
> lxc1: lxc.blkio.weight=100
> lxc2: lxc.blkio.weight=1000
>
> The scripts run in lxc1 and lxc2 are the same:
>
> sync
> echo 3 > /proc/sys/vm/drop_caches
> dd if=/var/a.img of=/dev/null bs=1M count=2000
>
> I have read [1], and I checked the I/O scheduler: noop anticipatory deadline [cfq]. I supposed that lxc1 would be slower than lxc2, but the tests turned out to run at the same speed. Can you help me figure it out? Thank you.
>
> PS: my kernel is 2.6.32, lxc version is 0.9.0
>
> [1] http://osdir.com/ml/lxc-chroot-linux-containers/2011-12/msg00083.html
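[Editorial sketch] blkio.weight sets a proportional share, so it only changes anything while two cgroups compete for the disk at the same moment. A minimal sketch of the concurrent pattern Andreas describes; in a real test each dd would run inside its container (e.g. via ssh or lxc-attach, an assumption here) against /var/a.img with caches dropped, whereas /dev/zero to /dev/null below is just a local stand-in:

```shell
# Launch both transfers in parallel and wait for both to finish.
# Run sequentially, the two never contend, so the 100 vs 1000
# weights have nothing to arbitrate and both see full speed.
dd if=/dev/zero of=/dev/null bs=1M count=64 2>/dev/null &
p1=$!
dd if=/dev/zero of=/dev/null bs=1M count=64 2>/dev/null &
p2=$!
wait "$p1" "$p2"
echo "both transfers finished"
```

Timing each dd while both run concurrently should then show roughly the 1:10 split the weights request (under the CFQ scheduler the original poster confirmed).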
Re: [Lxc-users] Problem with lxc and multiple ips
Thanks Tamas and Guido for your help.

Dealing with STP options on the bridge wasn't a good option for us, because we don't run a closed network and it could shut down our provider's net (if they use STP too). We figured out that interfaces start more smoothly in lxc if we set lxc.network.hwaddr. Sometimes an IP doesn't get configured on the first try, but arping or pinging the gateway from inside the container repairs that.

Regards,
Andreas

On 11.10.2013 14:58, Tamas Papp wrote:
> On 10/11/2013 02:42 PM, Andreas Laut wrote:
>> Hi,
>>
>> Actually I can't get the lxc nightly compiled on Debian right now; configure has problems with pkg-config/python3-dev (the pkg-config and python3-dev packages are installed; configure line 5588). I tried lxc 0.9 from the tarball instead and got the same problem. Our LXC config is attached, maybe this helps. Our lxc-host bridge is configured like:
>>
>> auto br0
>> iface br0 inet static
>>     bridge_ports eth4
>>     bridge_stp off
>>     address 10.5.255.80
>>     netmask 255.255.0.0
>>     gateway 10.5.255.252
>
> I used to add
>
>     bridge_fd 0
>     bridge_maxwait 0
>
> to the bridge config.
>
> tamas
[Lxc-users] Problem with lxc and multiple ips
Dear list,

We are using lxc 0.8 on Debian Wheezy (the official Debian package). Now we wanted to start an lxc container with more than one IP address, and we got strange behavior: after starting the container, some IPs are reachable and some are not. If we shut the container down and boot it again, a different set of IPs is reachable. There seems to be no logic behind this. And, after some time, all IPs become reachable. If we use only one IP for the container, all is fine.

Has anyone else encountered this problem? All help and ideas are appreciated.

Regards,
Andreas
Re: [Lxc-users] Problem with lxc and multiple ips
Ok, sorry, you're right. We are using a bridge named br0 bound to eth0 on the lxc host. In the containers we are using veth, but the problem also happens with type macvlan; no change at all. We also tried setting hwaddr. We're doing further research in the hope of showing you a way to reproduce this.

Andreas

On 11.10.2013 09:41, Jäkel, Guido wrote:
> Dear Andreas,
>
> Please substantiate your phrases "start a lxc with multiple IPs" and "If we are using only one IP for LXC, all is fine": what kind of network setup do you use? Is it e.g. a bridge on the lxc host and veths on the containers? A guess might be that you have a MAC address clash; did you override lxc.network.hwaddr?
>
> Guido
>
> -----Original Message-----
> From: Andreas Laut [mailto:andreas.l...@spark5.de]
> Sent: Friday, October 11, 2013 8:53 AM
> To: lxc-users@lists.sourceforge.net
> Subject: [Spam-Wahrscheinlichkeit=45][Lxc-users] Problem with lxc and multiple ips
>
> Dear list, we are using lxc 0.8 on Debian Wheezy (official Debian package). [...]
Re: [Lxc-users] Problem with lxc and multiple ips
Hi,

Actually I can't get the lxc nightly compiled on Debian right now; configure has problems with pkg-config/python3-dev (the pkg-config and python3-dev packages are installed; configure line 5588). I tried lxc 0.9 from the tarball instead and got the same problem. Our LXC config is attached, maybe this helps. Our lxc-host bridge is configured like:

auto br0
iface br0 inet static
    bridge_ports eth4
    bridge_stp off
    address 10.5.255.80
    netmask 255.255.0.0
    gateway 10.5.255.252

Andreas

On 11.10.2013 10:45, Tamas Papp wrote:
> On 10/11/2013 10:40 AM, Andreas Laut wrote:
>> Ok, sorry, you're right. We are using a bridge named br0 bound to eth0 on the lxc host. In the containers we are using veth, but the problem also happens with type macvlan; no change at all. We also tried setting hwaddr. We're doing further research in the hope of showing you a way to reproduce this.
>
> Are you able to reproduce that against a recent nightly build?
>
> tamas

Attachment:

# /var/lib/lxc/lxc-container/config

## Container
lxc.utsname = lxc-container

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 10.05.225.10/16

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 10.05.225.11/16

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 10.05.100.12/16

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 10.05.100.13/16

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 10.05.225.14/16
lxc.network.ipv4.gateway = 10.05.255.252

lxc.rootfs = /var/lib/lxc/lxc-container/rootfs
lxc.arch = x86_64
#lxc.console = /var/log/lxc/lxc-container.console
lxc.tty = 5
lxc.pts = 1024

## Capabilities
lxc.cap.drop = mac_admin
lxc.cap.drop = mac_override
lxc.cap.drop = sys_admin
lxc.cap.drop = sys_module
lxc.cap.drop = sys_rawio

## Devices
# Allow all devices
#lxc.cgroup.devices.allow = a
# Deny all devices
lxc.cgroup.devices.deny = a
# Allow to mknod all devices (but not to use them)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
# /dev/console
lxc.cgroup.devices.allow = c 5:1 rwm
# /dev/fuse
lxc.cgroup.devices.allow = c 10:229 rwm
# /dev/null
lxc.cgroup.devices.allow = c 1:3 rwm
# /dev/ptmx
lxc.cgroup.devices.allow = c 5:2 rwm
# /dev/pts/*
lxc.cgroup.devices.allow = c 136:* rwm
# /dev/random
lxc.cgroup.devices.allow = c 1:8 rwm
# /dev/rtc
lxc.cgroup.devices.allow = c 254:0 rwm
# /dev/tty
lxc.cgroup.devices.allow = c 5:0 rwm
# /dev/urandom
lxc.cgroup.devices.allow = c 1:9 rwm
# /dev/zero
lxc.cgroup.devices.allow = c 1:5 rwm

## Limits
#lxc.cgroup.cpu.shares = 1024
#lxc.cgroup.cpuset.cpus = 0
#lxc.cgroup.memory.limit_in_bytes = 4G
#lxc.cgroup.memory.memsw.limit_in_bytes = 1G
#lxc.cgroup.blkio.weight = 500

## Filesystem
lxc.mount.entry = proc /var/lib/lxc/lxc-container/rootfs/proc proc nodev,noexec,nosuid,ro 0 0
lxc.mount.entry = sysfs /var/lib/lxc/lxc-container/rootfs/sys sysfs defaults,ro 0 0
Re: [Lxc-users] Problem with lxc and multiple ips
Sorry, I found the mistake in my lxc config myself; I need to do further tests.

On 11.10.2013 14:42, Andreas Laut wrote:
> Hi,
>
> Actually I can't get the lxc nightly compiled on Debian right now; configure has problems with pkg-config/python3-dev (the pkg-config and python3-dev packages are installed; configure line 5588). I tried lxc 0.9 from the tarball instead and got the same problem. Our LXC config is attached, maybe this helps. [...]
Re: [Lxc-users] lxc mtab
Hi,

That's the usual behavior with Ubuntu/Debian and nothing to worry about. The rootfs mount entry is fake, not real. (You can test this with the command "umount rootfs", or by typing "mount".)

Regards,
Andreas

On 05.10.2013 06:50, Kalyana sundaram wrote:
> Hi,
>
> I run a group of lxc containers on an Ubuntu host. When I do df in a container, I get both rootfs and /dev/disk. Since both are the same (mounted on /), why does mtab show them separately? I think I am missing some insight here.
>
> df -TH
> Filesystem  Type    Size  Used  Avail  Use%  Mounted on
> rootfs      rootfs  984G  54G   881G   6%    /
> /dev/sdb1   ext4    984G  54G   881G   6%    /
>
> --
> Kalyanasundaram
> http://blogs.eskratch.com/
> https://github.com/kalyanceg/
Re: [Lxc-users] (no subject)
You can also symlink to the path you want:

rmdir /var/lib/lxc
ln -s /mywantedpath /var/lib/lxc

Regards,
Andreas

On 04.10.2013 07:40, Tamas Papp wrote:
> On 10/04/2013 06:03 AM, Kalyana sundaram wrote:
>> Hi,
>>
>> lxc by default creates rootfs and fstab in /var/lib/lxc. Is it possible to use some other directory? Because when I do lxc-ls, it does an ls of /var/lib/lxc.
>
> Either use lxcpath = /some/other/dir in /etc/lxc/lxc.conf, or the -P switch.
>
> Cheers,
> tamas
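[Editorial sketch] The symlink approach can be tried safely on scratch paths first. A runnable sketch using throwaway directories under /tmp (the real commands operate on /var/lib/lxc, which must be empty for rmdir to succeed, and on a target directory of your choice):

```shell
# Scratch-path rehearsal of: rmdir /var/lib/lxc && ln -s /mywantedpath /var/lib/lxc
mkdir -p /tmp/mywantedpath                # the directory you actually want to use
rm -rf /tmp/var-lib-lxc                   # stands in for the (empty) /var/lib/lxc
ln -s /tmp/mywantedpath /tmp/var-lib-lxc  # tools now resolve through the link
readlink /tmp/var-lib-lxc                 # prints /tmp/mywantedpath
```

Compared with the lxcpath setting Tamas mentions, the symlink redirects every tool that hard-codes /var/lib/lxc, at the cost of being invisible in the lxc configuration itself.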
[Lxc-users] mounting nfs not working
Hi list,

I've tried to mount an external NFS share into a container, but I get the following error message from lxc-start:

lxc-start: Invalid argument - failed to mount 'nfsserver:/srv/services' on '/usr/lib/x86_64-linux-gnu/lxc//home'

My lxc.mount.entry looks like:

nfsserver:/srv/services /srv/lxc/container/rootfs/home nfs defaults,_netdev,rsize=8192,rw 0 0

My lxc version is 0.8.0~rc1-8+deb7u1. The NFS share is reachable from both the host and the container. All help is appreciated.

Regards,
Andreas
Re: [Lxc-users] Read-only container /proc
Ubuntu 13.04 comes with lxc 0.9? I used the same version (from Debian testing) on Wheezy and I have no problem with a read-only /proc, and my mount options are the same as yours. Strange.

lxc.mount.entry = proc /srv/vserver/vs-db01-dev/rootfs/proc proc nodev,noexec,nosuid,ro 0 0

Regards,
Andreas

On 18.09.2013 15:15, Andre Nathan wrote:
> Hello,
>
> In Ubuntu 12.04 I used to be able to create containers with this line in the container's fstab:
>
> proc /var/lib/lxc/test/rootfs/proc proc ro,nodev,noexec,nosuid 0 0
>
> Now in 13.04 I get the following error:
>
> $ sudo lxc-start -n test -f /var/lib/lxc/test/lxc.conf -lDEBUG -L /dev/stdout
> lxc-start: Permission denied - failed to create symlink for kmsg
> lxc-start: failed to setup kmsg for 'test'
> lxc-start: Read-only file system - failed to change apparmor profile to unconfined
> lxc-start: invalid sequence number 1. expected 4
> lxc-start: failed to spawn 'test'
>
> This happens even when apparmor is disabled for lxc-start. Just changing the ro to rw in fstab allows the container to start. Is it possible to have a read-only container /proc in newer LXC?
>
> Thanks,
> Andre

--
LIMITED TIME SALE - Full Year of Microsoft Training For Just $49.99!
1,500+ hours of tutorials including VisualStudio 2012, Windows 8, SharePoint 2013, SQL 2012, MVC 4, more. BEST VALUE: New Multi-Library Power Pack includes Mobile, Cloud, Java, and UX Design. Lowest price ever! Ends 9/20/13.
http://pubads.g.doubleclick.net/gampad/clk?id=58041151&iu=/4140/ostg.clktrk
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
[Lxc-users] debian and cgroup behaviour
Dear list,

I get the following error message with Debian Squeeze (kernel 2.6.32.5) and lxc 0.8:

lxc-start: No such file or directory - failed to rename cgroup /sys/fs/cgroup//lxc/18381 -> /sys/fs/cgroup//lxc/vs-db

lxc 0.7.x creates a folder under /sys/fs/cgroup/[init-process-id] and renames it to /sys/fs/cgroup/[container-name]. lxc 0.8.x creates the same folder under /sys/fs/cgroup/lxc/[init-process-id] and renames it to /sys/fs/cgroup/lxc/[container-name]. The result is that lxc 0.8 won't start containers on Squeeze and throws the rename error, while lxc 0.7 can start them. And I don't know why. Anyone?

(Please don't advise upgrading to Wheezy; this is part of our upgrade tests. :) )