Re: [Lxc-users] [lxc-devel] [PATCH] ignore non-lxc configuration line
On Sat, Jun 4, 2011 at 23:16, Rob Landley <r...@landley.net> wrote:
> On 06/02/2011 02:41 PM, Daniel Lezcano wrote:
>> It will be for the lxc-0.7.5 version. No ETA for the moment. I would
>> like to have new features for lxc before releasing a new version; the
>> delta with 0.7.4 is mostly bug fixes.
>
> Just a random observation, but there would appear to be at least a
> couple of people on the list who consider this to _be_ a new feature.

Me among them, FWIW.

--
David Serrano

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] lxc.cgroup.memory.limit_in_bytes has no effect
On Tue, May 17, 2011 at 09:10, Daniel Lezcano <daniel.lezc...@free.fr> wrote:
> On 05/17/2011 08:34 AM, Ulli Horlacher wrote:
>> root@flupp:~# clp
>> Command Line Perl with readline support, @ARGV and Specials.
>> Type ? for help.
>> (perl):: undef $/; open F,'/tmp/1GB.tmp' or die; $_=<F>; print length
>> 1073741824
>
> I don't know exactly what your perl program does

It reads the whole file into a variable and then prints the length of
that variable, which shows that the file has actually been read into
memory.

> When a process reaches the memory limit size then the container will
> begin to swap.

Yes, that's what I saw in a quick test.

--
David Serrano
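For anyone reproducing this, a minimal sketch of setting the limit by hand. The cgroup mount point and container name below are assumptions; adjust them to wherever your memory cgroup hierarchy is mounted:

```shell
# Sketch, not a complete recipe: cap a container's memory at 256 MiB by
# writing to memory.limit_in_bytes. The path /cgroup/mycontainer is
# hypothetical -- use your actual cgroup mount point.
LIMIT=$((256 * 1024 * 1024))   # 256 MiB expressed in bytes
echo "$LIMIT"                  # -> 268435456
# As root, on a real system:
#   echo "$LIMIT" > /cgroup/mycontainer/memory.limit_in_bytes
# Page-cache pages from a read like the 1 GB Perl test above count
# against this limit, so the container starts swapping rather than
# failing the read.
```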
Re: [Lxc-users] [PATCH] ignore non-lxc configuration line
On Sat, May 14, 2011 at 00:15, Serge Hallyn <serge.hal...@canonical.com> wrote:
> I'm curious, whatcha got in mind?

I don't think you have to have something in mind to implement this.
Just the old motto "Be lenient in what you accept" :).

--
David Serrano
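The behaviour the patch title describes (skipping lines that are not lxc.* settings) can be illustrated with a trivial filter. This is only an illustration of the idea, not the actual patch:

```shell
# Illustration only, not the actual patch: keep lxc.* settings and drop
# comments and foreign keys from a mixed configuration file.
printf 'lxc.tty = 4\n# a comment\nmy.other.tool = foo\nlxc.arch = x86\n' |
  grep -E '^[[:space:]]*lxc\.'
# -> lxc.tty = 4
# -> lxc.arch = x86
```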
Re: [Lxc-users] Making LXC accept an already open network interface—or other options
On Mon, May 9, 2011 at 14:52, Serge Hallyn <serge.hal...@canonical.com> wrote:

Thanks for your response. Before scripting it, let's try manually first:

> devs=`ls /sys/class/net/veth*`
> ip link add type veth
> newdevs=`ls /sys/class/net/veth*`
> # Get the intersection of $devs and $newdevs

I assume you mean difference instead of intersection, since the first
execution of ls gives an empty output, and the purpose of this is
obtaining the new devices, right?

host# ls /sys/class/net/
eth0  eth1  lo  br0
host# ip link add type veth
host# ls /sys/class/net/
eth0  eth1  lo  br0  veth0  veth1
host# _

> # Attach $dev1 to your bridge

Assuming $dev1 is the first of the new devices:

host# brctl addif br0 veth0
host# _

> lxc-start -n mycontainer   # mycontainer has no network

After this, the container sees the same interfaces as the host and it
does have connectivity to the outside:

host# cat testimg01.conf
lxc.tty = 4
lxc.pivotdir = .pivot
lxc.arch = x86
lxc.utsname = testimg01
lxc.console = /tmp/lxc-testimg01-console.log
lxc.rootfs = /root/lxc/nfsroot
lxc.mount.entry = proc /root/lxc/nfsroot/proc proc defaults 0 0
lxc.mount.entry = sys /root/lxc/nfsroot/sys sysfs defaults 0 0
lxc.mount.entry = devpts /root/lxc/nfsroot/dev/pts devpts defaults 0 0
lxc.mount.entry = varlock /root/lxc/nfsroot/var/lock tmpfs defaults 0 0
lxc.mount.entry = tmp /root/lxc/nfsroot/tmp tmpfs mode=1777 0 0
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 254:0 rm
host# lxc-start -f testimg01.conf -n testimg01 -l DEBUG -o /tmp/lxc-testimg01.log
_

container# ip link show | grep '^[0-9]'
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
6: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
7: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
container# telnet 172.20.64.20 22   ## outside node
Trying 172.20.64.20...
Connected to 172.20.64.20.
Escape character is '^]'.
SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu4
_

> # get $PID as the init pid of mycontainer
> ip link set $dev2 netns $PID

host# pgrep init
1
4809
host# ip link set veth1 netns 4809
host# _

> # now from your mycontainer console, configure $dev2 which is now in
> # the container. You can rename it to eth0 in the container with
> # ip link set $dev2 name eth0

Since eth0 exists inside the container, renaming veth1 returns an error:

container# ip link set veth1 name eth0
RTNETLINK answers: File exists

Am I doing something wrong?

--
David Serrano
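The difference of the two device lists (new minus old, i.e. "the freshly created veth pair") can be computed portably with comm over sorted lists. A sketch with canned data; on a real host the two lists would come from `ls /sys/class/net/` before and after `ip link add type veth`:

```shell
# Sketch: set difference of device lists. comm -13 suppresses lines
# unique to the first file and lines common to both, leaving only the
# lines unique to the second file: the new devices.
printf '%s\n' eth0 eth1 lo br0 | sort > /tmp/devs.old
printf '%s\n' eth0 eth1 lo br0 veth0 veth1 | sort > /tmp/devs.new
comm -13 /tmp/devs.old /tmp/devs.new
# -> veth0
# -> veth1
```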
Re: [Lxc-users] Making LXC accept an already open network interface—or other options
On Tue, May 10, 2011 at 16:36, Serge Hallyn <serge.hal...@canonical.com> wrote:
> 1. tell it to give you a normal network interface
>      lxc.network.type = veth
>      lxc.network.link = br0
>      lxc.network.flags = down
> 2. bring up the container
> 3. bring down the normal interface
> 4. Continue here with passing veth1 into the container.

Thank you. We're almost there. With this configuration, there are now
indeed only lo and eth0 inside the container. Then I:

host# ip link set veth1 netns $pid
container# ip link del eth0
container# ip link set veth1 name eth0
container# ifconfig eth0 10.1.0.253 up
container# ping 10.1.0.101   ## address of br0 in the host
PING 10.1.0.101 (10.1.0.101) 56(84) bytes of data.
From 10.1.0.253 icmp_seq=1 Destination Host Unreachable
From 10.1.0.253 icmp_seq=2 Destination Host Unreachable
From 10.1.0.253 icmp_seq=3 Destination Host Unreachable

If I bring up eth0 before deleting it and putting veth1 in its place,
the network works as expected and I can ping the host's br0. But the
veth1-renamed-to-eth0 doesn't want to work. Interestingly:

container# ifconfig eth0
eth0  Link encap:Ethernet  HWaddr ae:6c:69:6a:f5:08
      inet addr:10.1.0.253  Bcast:10.255.255.255  Mask:255.0.0.0
      UP BROADCAST MULTICAST  MTU:1500  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
container# ip neigh show
10.1.0.101 dev eth0 FAILED
container# arp -an
? (10.1.0.101) at <incomplete> on eth0

You can see that the packet counts remain at 0.

--
David Serrano
Re: [Lxc-users] Hide container processes on the host...
If the parent of slapd is init, you could also check for a PPID of 1;
this will only be true for the host's slapd.

--
David Serrano
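A minimal sketch of that check, filtering a full process listing on PPID (slapd is the example from this thread; from the host's point of view, a container's slapd has the container's init as parent, and that init is not PID 1 on the host):

```shell
# Sketch: print only the PIDs of slapd processes whose parent is
# PID 1 (init), which on the host matches only the host's own slapd.
ps -eo pid=,ppid=,comm= | awk '$2 == 1 && $3 == "slapd" { print $1 }'
```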
[Lxc-users] Slow and unexpected umounts after pivot_root
event 1073742080 for qvd
lxc-start 1298972578.076 DEBUG lxc_utmp - got inotify event 256 for atd.pid
lxc-start 1298972578.076 DEBUG lxc_utmp - got inotify event 2 for atd.pid
lxc-start 1298972578.076 DEBUG lxc_utmp - got inotify event 256 for crond.reboot
lxc-start 1298972578.079 DEBUG lxc_utmp - got inotify event 256 for sshd.pid
lxc-start 1298972578.079 DEBUG lxc_utmp - got inotify event 2 for sshd.pid
lxc-start 1298972578.242 DEBUG lxc_utmp - got inotify event 1073742080 for cups
lxc-start 1298972578.300 DEBUG lxc_utmp - got inotify event 2 for utmp
lxc-start 1298972578.300 DEBUG lxc_utmp - utmp handler - run level is /2
lxc-start 1298972578.301 DEBUG lxc_utmp - Container running
-- 8< --

In this second log, there are two lines that surprise me:

lxc-start 1298972574.493 DEBUG lxc_conf - umounted '/.pivot/var/lib/lxc/testimg00/ovl00'
lxc-start 1298972574.973 DEBUG lxc_conf - umounted '/.pivot/var/lib/lxc/testimg00/root00/lib/modules/2.6.32-24-server'

Why does LXC have to umount these directories? They belong to another
container, so I understand they shouldn't appear here at all. I noticed
this when starting the 9th or 10th container and having to wait longer
than usual for it to come online. My kernel version is 2.6.32-24-server
(x86_64) and this happens with both LXC 0.7.2 and 0.7.4 (I didn't try
0.7.3).

This is the configuration (the same for both containers):

-- 8< --
## general
lxc.tty = 4
lxc.pivotdir = .pivot
#lxc.arch = x86

## network
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth1
lxc.network.mtu = 1500

## devices
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 254:0 rm

## there's some capabilities stuff here but the
## problem still exists if I comment it out
-- 8< --

And this is how I run lxc-start for each container:

lxc-start -f container.conf -n testimg00 -l DEBUG -o /tmp/lxc-testimg00.log \
  -s lxc.utsname=testimg00 \
  -s lxc.console=/tmp/lxc-testimg00-console.log \
  -s lxc.rootfs=/var/lib/lxc/testimg00/root00 \
  -s 'lxc.mount.entry=proc /var/lib/lxc/testimg00/root00/proc proc defaults 0 0' \
  -s 'lxc.mount.entry=sys /var/lib/lxc/testimg00/root00/sys sysfs defaults 0 0' \
  -s 'lxc.mount.entry=devpts /var/lib/lxc/testimg00/root00/dev/pts devpts defaults 0 0' \
  -s 'lxc.mount.entry=varlock /var/lib/lxc/testimg00/root00/var/lock tmpfs defaults 0 0' \
  -s 'lxc.mount.entry=tmp /var/lib/lxc/testimg00/root00/tmp tmpfs mode=1777 0 0' \
  -s lxc.network.hwaddr=54:52:00:00:00:00 \
  -s lxc.network.ipv4=10.1.0.230/24

lxc-start -f container.conf -n testimg01 -l DEBUG -o /tmp/lxc-testimg01.log \
  -s lxc.utsname=testimg01 \
  -s lxc.console=/tmp/lxc-testimg01-console.log \
  -s lxc.rootfs=/var/lib/lxc/testimg01/root01 \
  -s 'lxc.mount.entry=proc /var/lib/lxc/testimg01/root01/proc proc defaults 0 0' \
  -s 'lxc.mount.entry=sys /var/lib/lxc/testimg01/root01/sys sysfs defaults 0 0' \
  -s 'lxc.mount.entry=devpts /var/lib/lxc/testimg01/root01/dev/pts devpts defaults 0 0' \
  -s 'lxc.mount.entry=varlock /var/lib/lxc/testimg01/root01/var/lock tmpfs defaults 0 0' \
  -s 'lxc.mount.entry=tmp /var/lib/lxc/testimg01/root01/tmp tmpfs mode=1777 0 0' \
  -s lxc.network.hwaddr=54:52:00:00:00:01 \
  -s lxc.network.ipv4=10.1.0.231/24

Has anyone experienced something like this? Any clue would be appreciated.

--
David Serrano
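The two invocations above differ only in the container index. A hypothetical wrapper (names and numbering are just the pattern from this message) could derive the per-container values:

```shell
# Hypothetical helper: derive each container's name, MAC address and IP
# from an index, matching the pattern of the two lxc-start calls above.
for i in 0 1; do
  name=$(printf 'testimg%02d' "$i")
  mac=$(printf '54:52:00:00:00:%02x' "$i")
  ip=$(printf '10.1.0.%d/24' $((230 + i)))
  echo "$name $mac $ip"
  # real use would then run:
  #   lxc-start -f container.conf -n "$name" ... \
  #     -s lxc.network.hwaddr="$mac" -s lxc.network.ipv4="$ip"
done
# -> testimg00 54:52:00:00:00:00 10.1.0.230/24
# -> testimg01 54:52:00:00:00:01 10.1.0.231/24
```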
Re: [Lxc-users] Slow and unexpected umounts after pivot_root
On Tue, Mar 1, 2011 at 11:37, Daniel Lezcano <daniel.lezc...@free.fr> wrote:
> The kernel 2.6.32-24 has a regression with umount, I think. I recommend
> you install a more recent kernel version.

Good, that did it.

> If I understood correctly, you mount these directories before
> lxc-start, right? When the container is launched, the mount points are
> inherited and appear in the pivot_root, so they are unmounted.

Oh, I had a wrong understanding of what was being unmounted. Thank you
for your quick response!

--
David Serrano