Re: [Lxc-users] On clean shutdown of Ubuntu 10.04 containers

2010-12-06 Thread atp
Hi, The way it works on 0.7 has always been a stopgap - there does not seem to be a clean way of doing it that bridges both sysv init and upstart. The ideal thing would be to intercept the reboot() syscall. The clean way would be in the kernel. The nasty way would be via LD_PRELOAD or other tricks. The

Re: [Lxc-users] How make top, meminfo etc. to show the limits of the container?

2011-01-21 Thread atp
Hi, It's not as simple as it seems. What you're asking for is to selectively hide or modify what gets shown to container processes by the /proc file system. In other words, making /proc container aware. /proc is already partially there - with the pid namespace, but not for ram and cpus. We've trie

Re: [Lxc-users] single root io virtualization

2011-02-23 Thread atp
Hi, There's no reason why you shouldn't be able to use it with lxc. Looking at the guide here: http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization/sect-Para-virtualized_Windows_Drivers_Guide-How_SR_IOV_Libvirt_Works.html The virtual devices show up as virtual pci cards.

Re: [Lxc-users] single root io virtualization

2011-02-23 Thread atp
23/11, siraj rathore > wrote: > > From: siraj rathore > Subject: Re: [Lxc-users] single root io virtualization > To: "atp" > Date: Wednesday, February 23, 2011, 10:11 AM > > Thanks, This is a good tutorial. But I wonder how t

[Lxc-users] restricting container visible cpus

2010-01-28 Thread atp
Hi, I'm looking at trying to restrict a container's view of the cpus available on the system. I'm on fedora 12, with lxc-0.6.5-1.x86_64 Does anyone know if it is possible to restrict the container's view of the number of cpus it has access to? Would the libvirt interface to lxc be able to do t
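For the cgroup side of this (limiting which CPUs the container can actually run on, as opposed to hiding them from /proc), a minimal sketch, assuming cgroups are mounted at /cgroup and a container named test01 (both placeholders):

```shell
# Pin the container's tasks to CPUs 0 and 1 via the cpuset controller.
# This restricts scheduling, but /proc/cpuinfo inside the container
# will still list every host CPU - the view is not virtualized.
echo 0-1 > /cgroup/test01/cpuset.cpus

# The equivalent setting in the container's lxc config file:
#   lxc.cgroup.cpuset.cpus = 0-1
```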

[Lxc-users] procfs and cpu masking.

2010-02-23 Thread atp
Hi, Apologies for the delay - I've just got to looking at the procfs tarball. > It's for the moment very experimental, it's a prototype: > http://lxc.sourceforge.net/download/procfs/procfs.tar.gz > > IMO, the code is easy to follow. > > The fuse is mounted in the container but the code expect

Re: [Lxc-users] restricting container visible cpus

2010-03-15 Thread atp
Daniel, > Not really. I think we should create a single daemon on the host > providing services for the container, but there is the isolation > preventing us to do that easily, so I don't have yet a clear idea of how > to do that. Ok, fair enough. I'll proceed with the setup as it stands the

[Lxc-users] updated procfs

2010-05-14 Thread atp
Hi, Prompted by the "LXC a feature complete replacement of OpenVZ" thread, I've uploaded my interim changes to procfs to http://www.tinola.com/lxc/ It's very much a work in progress, and the code is pretty horrible in places. Sorry it's taken so long. I've learnt a lot about fuse. Changes fro

Re: [Lxc-users] Memory reports inside a contener

2010-05-20 Thread atp
Benoit, It's experimental. I've taken the procfs.tar.gz tarball and updated it to include restricting the view of /proc/cpuinfo and /proc/stat. The updated version is up on http://www.tinola.com/lxc/procfs-1.2.tar.gz However there are some known bugs with it. The first is that /proc/self

[Lxc-users] help with root mount parameters

2010-05-25 Thread atp
Hi, I've synced to git head this afternoon, and firing up a container I now get [r...@islab01 lxc]# lxc-start -n test01.dev.tradefair/ lxc-start: No such file or directory - failed to access to '/usr/lib64/lxc', check it is present lxc-start: failed to set rootfs for 'test01.dev.tradefair/' lxc

Re: [Lxc-users] help with root mount parameters

2010-05-26 Thread atp
Thanks to both for the replies. This now makes sense. I've specified the rootfs.mount in the container config, and it gets past there and boots ok. Just in case anyone else cares, a very handy debug log can be had by using this command: lxc-start --logpriority=TRACE -o /tmp/trace.log --name my
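The full invocation can be sketched as follows; "mycontainer" is a placeholder standing in for the (truncated) original container name:

```shell
# Start the container with TRACE-level logging written to a file.
lxc-start --logpriority=TRACE -o /tmp/trace.log --name mycontainer

# Follow the trace in another terminal while the container boots.
tail -f /tmp/trace.log
```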

Re: [Lxc-users] help with root mount parameters

2010-05-26 Thread atp
Daniel, > > The autoconf maze has me befuddled as well. I tried briefly to see where > > VERSION and PACKAGE_VERSION were defined but to no avail. > > > > They should be defined in src/config.h (generated by autoconf). But where does autoconf get it from :-) ? I wanted to set the version n

Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread atp
Hi, Send ifconfig br0 from the host, ifconfig eth0 from the container, and the version of lxc you're using. Do you have anything special in /etc/sysctl.conf? On a completely blank container with no tuning, I get with scp; host->container squashfs.img 100% 639MB 33.6MB/s 0
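Collected as commands, the diagnostics requested above might look like this (lxc-version is an assumption; the exact command for printing the release varies between lxc versions):

```shell
ifconfig br0                    # on the host: bridge configuration
ifconfig eth0                   # inside the container: its virtual NIC
lxc-version                     # lxc release in use (if present)
grep -v '^#' /etc/sysctl.conf   # any non-default sysctl tuning
```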

Re: [Lxc-users] Dreadful network performance, only to host from container

2010-05-27 Thread atp
Toby, Just FYI in case you were unaware - it seems one of your MXs is black holed. I tried to email you direct, but messagelabs said; : 74.125.148.10 does not like recipient. Remote host said: 554 5.7.1 Service unavailable; Client host [74.125.149.113] blocked using sbl-xbl.spamhaus.org; http://w

[Lxc-users] container shutdown

2010-06-01 Thread atp
Hello, Been looking at getting this patch working; http://lxc.git.sourceforge.net/git/gitweb.cgi?p=lxc/lxc;a=commitdiff;h=563f2f2ccd2891661836c96f92f047a735355c1b;hp=3bdf52d753ecf347b3b5cbff97675032f2de3e5e This patch allows shutting down the container when the system is powered off in the contai

Re: [Lxc-users] container shutdown

2010-06-01 Thread atp
Bad idea to follow up on yourself, however I've got a bit further; running an inotifywait -m on the file at the same time as I'm tailing the log file gives you: lxc-start 1275415483.290 DEBUG lxc_cgroup - using cgroup mounted at '/cgroup' lxc-start 1275415483.290 DEBUG lxc_utmp

Re: [Lxc-users] container shutdown

2010-06-01 Thread atp
Ok, absolutely the last post tonight. I promise. I fixed the find /var/run -exec rm -f {} command in rc.sysinit. Now the problem is that the runlevel is written whilst things are still shutting down; /lxc/test01.dev.tradefair/rootfs/var/run/utmp MODIFY /lxc/test01.dev.tradefair/rootfs/var/run/

Re: [Lxc-users] LXC bringup issue on Fedora

2010-06-02 Thread atp
Nirmal, From a quick look I'd suggest you investigate your lxc.tty setting. You've allowed a single tty for your container. It's likely that your container is starting gettys for more than one tty. They're dying immediately, hence the respawning too fast. Either reduce the number of ttys, or i

Re: [Lxc-users] LXC bringup issue on Fedora

2010-06-03 Thread atp
Nirmal, To do this you'll also need to make sure you have a getty listening on /dev/console in the container. If you use upstart then make sure there's a file in the container's /etc/event.d/ that holds something like; # tty1 - getty # # This service maintains a getty on tty1 from the point the sy
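A sketch of such an upstart job file; the path, job name, and agetty arguments are assumptions based on upstart 0.3.x-era layouts, not the original (truncated) file:

```
# /etc/event.d/console inside the container
# console - getty on /dev/console
start on runlevel 2
start on runlevel 3
stop on runlevel 0
respawn
exec /sbin/agetty 38400 /dev/console vt100
```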

[Lxc-users] one or two things

2010-06-15 Thread atp
Hi, If anyone is interested, I've put the script(s) I use to create fedora core 12 containers up at: http://www.tinola.com/lxc/ It needs the latest git version of lxc. I've also included a temporary patch to fs/proc/stat.c (tested on 2.6.34) that masks the set of visible cpus to only those

Re: [Lxc-users] one or two things

2010-06-16 Thread atp
(let's call it lxcproc) of fs/proc in kernel code? > > This copy could have all necessary patches applied (memory and cpu views). > So we could "mount -t lxcproc none /proc" in containers. > > I think, it's a cleaner way than fuse overlay. > > Regards, >

Re: [Lxc-users] Reboot from container

2010-06-21 Thread atp
John, > I disagree. Trying to migrate from openvz (which has a working > reboot/shutdown inside the guest) to lxc this was one of the show > stopper bugs / features that prevented me from using lxc in a > production environment to replace openvz. What are the others? My goal is to run with the

Re: [Lxc-users] What's the setup for macvlan on the host to talk to containers?

2010-07-06 Thread atp
You'll need a recent version of iproute2. I have iproute2-2.6.34.tar.bz2 1) Add a macvlan: ip link add link <parent-dev> name <name> address <mac> type macvlan mode (bridge|vepa|private) e.g. ip link add link bond200 name bond200:0 address 00:aa:bb:cc:dd:ee \ type macvlan mode bridge 2) Show a macvlan
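The steps above can be sketched end to end; the device names and MAC address are the example values from the message, while the step-2 commands are guesses, since the original snippet is cut off there:

```shell
# 1) Add a macvlan interface on top of bond200, in bridge mode
#    (bond200, bond200:0 and the MAC are example values).
ip link add link bond200 name bond200:0 address 00:aa:bb:cc:dd:ee \
    type macvlan mode bridge

# 2) Bring it up and inspect it (assumed continuation; the original
#    message is truncated here).
ip link set bond200:0 up
ip -d link show bond200:0
```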

Re: [Lxc-users] PHYS type lxc not working

2010-07-08 Thread atp
You need to add a macvlan device to the parent device with the same mode. So, if you have a macvlan with a link device of eth0, you'll need to create a macvlan device off eth0 - e.g. eth0:1 that is of mode bridge. There was an email yesterday that showed how to do that. Andy On Thu, 2010-07-08

Re: [Lxc-users] not separeted resources

2010-07-12 Thread atp
Hi, > I'm wondering on some problems that came up in the near past using LXC. All > of our systems are Ubuntu 10.04 (2.6.32-server) with lxc 0.7.1. > > 1. Why does the container see the host's dmesg? Why doesn't it have its > own one? Because the work to create per namespace kernel ring buffers h

Re: [Lxc-users] not separeted resources

2010-07-12 Thread atp
Nirmal, > > > > Per container/per cgroup resource tracking has not been implemented. > > I think only the *tracking* has not been implemented. It would still > be possible to configure resources per container using cgroup (cpuset, > memory etc.) Please confirm. Yes, resource constraint has

Re: [Lxc-users] not separeted resources

2010-07-13 Thread atp
Nirmal, > I read this as the reporting is not virtualized for container but > limits will still apply per container. For instance I can configure > 256MB physical memory for container but it won't show up in meminfo > or free. The physical limit will still apply. Did I misread? No, your readin
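As a concrete sketch of that 256MB example; the cgroup mount point and container name are placeholders:

```shell
# Enforce a 256MB physical memory cap via the cgroup memory controller.
# The limit is enforced, but meminfo/free inside the container will
# still report the host's totals, as discussed above.
echo $((256*1024*1024)) > /cgroup/test01/memory.limit_in_bytes

# Equivalent lxc config line:
#   lxc.cgroup.memory.limit_in_bytes = 256M
```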