Re: [Lxc-users] Bonding inside LXC container
Dear Yao,

I have no experience with the macvlan device because I can't even find clear documentation for it, and I wonder if there is anyone here who could give an abstract and a howto. In my diagram I meant to use lxc.network.type = phys to reach through the NIC device directly. Reading https://www.kernel.org/doc/Documentation/networking/bonding.txt I got the impression that the bonding driver is strongly tied to the physical network drivers. I don't think it will work with virtual devices like macvlan or even veth as underlying devices, but I may be wrong about this. I'm not sure, but I suspect it may be the network namespace or something similar that causes this problem. I was able to google up a thread "Bonding simplifications and netns support" on the kernel.org mailing list. It's from the end of 2009, but I think it's irrelevant nowadays.

-----Original Message-----
From: wang yao [mailto:yaowang2...@gmail.com]
Sent: Monday, November 18, 2013 5:09 AM
To: Jäkel, Guido
Cc: lxc-users@lists.sourceforge.net
Subject: Re: [Lxc-users] Bonding inside LXC container

Hi Jake,

First of all, thank you for your reply, and I am very sorry for such a late response. Just as you said, I had already tried bonding like this:

    eth0--+--bond0--[veth]--eth0
    eth1--/

But when I used mode=6 (alb) of bonding this way, there was 80% packet loss in the container; I would have to patch the kernel to fix the problem. On the other hand, my current approach is:

    eth0--[phys]--eth0--+--bond0
    eth1--[phys]--eth1--/

My lxc configuration looks like this (networking part):

    # Networking
    lxc.network.type = macvlan
    lxc.network.flags = up
    lxc.network.link = eth0
    lxc.network.name = eth0
    lxc.network.ipv4 = 172.19.8.168/16
    lxc.network.mtu = 1500
    lxc.network.hwaddr = fe:67:f5:42:40:14

    lxc.network.type = macvlan
    lxc.network.flags = up
    lxc.network.link = eth1
    lxc.network.name = eth1
    lxc.network.ipv4 = 172.19.8.169/16
    lxc.network.mtu = 1500
    lxc.network.hwaddr = fe:67:f5:42:40:15
    ...
I did the bonding in the container; the bonding configuration is the same as what I did before on the host. When I started the bonding device in the container, this message came out:

    Bringing up interface bond0: bonding device bond0 does not seem to be present, delaying initialization.

I'm not sure, but I suspect it may be the network namespace or something similar that causes this problem. What's your idea?

Regards,
Yao

2013/11/15 Jäkel, Guido g.jae...@dnb.de:

Dear Yao,

as I understand it, you want to bond two physical interfaces of the host hardware and use the bond inside a container:

    eth0--[phys]--eth0--+--bond0
    eth1--[phys]--eth1--/

Because no other party -- neither the host nor another container -- may use one of the NICs in addition, I would suggest putting the virtual bonding interface on the host and reaching through the bond into the container via a veth. To me it seems to be a better separation of concerns:

    eth0--+--bond0--[veth]--eth0
    eth1--/

Following this way, you may also share the bond with more than one container by putting a virtual bridge between the virtual bonding interface and the virtual Ethernet adapters of the containers. By the way, I don't see a clear reason why your current approach should fail. Would you please present your configuration here?

Greetings,
Guido

-----Original Message-----
From: wang yao [mailto:yaowang2...@gmail.com]
Sent: Friday, November 15, 2013 4:33 AM
To: lxc-users@lists.sourceforge.net
Subject: [Lxc-users] Bonding inside LXC container

Hi all,

I tried to bond two NICs (eth0 and eth1) in the container, but when I finished the bonding configuration (I think my configuration is correct) and started the bonding device inside the container, this message came out:

    Bringing up interface bond0: bonding device bond0 does not seem to be present, delaying initialization.

So I want to know whether LXC can't support bonding configured this way, or whether I can do something to achieve it. I am glad to talk about bonding and LXC with anyone interested in it.
Regards,
Yao

___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] Bonding inside LXC container
Dear Yao,

as I understand it, you want to bond two physical interfaces of the host hardware and use the bond inside a container:

    eth0--[phys]--eth0--+--bond0
    eth1--[phys]--eth1--/

Because no other party -- neither the host nor another container -- may use one of the NICs in addition, I would suggest putting the virtual bonding interface on the host and reaching through the bond into the container via a veth. To me it seems to be a better separation of concerns:

    eth0--+--bond0--[veth]--eth0
    eth1--/

Following this way, you may also share the bond with more than one container by putting a virtual bridge between the virtual bonding interface and the virtual Ethernet adapters of the containers. By the way, I don't see a clear reason why your current approach should fail. Would you please present your configuration here?

Greetings,
Guido

-----Original Message-----
From: wang yao [mailto:yaowang2...@gmail.com]
Sent: Friday, November 15, 2013 4:33 AM
To: lxc-users@lists.sourceforge.net
Subject: [Lxc-users] Bonding inside LXC container

Hi all,

I tried to bond two NICs (eth0 and eth1) in the container, but when I finished the bonding configuration (I think my configuration is correct) and started the bonding device inside the container, this message came out:

    Bringing up interface bond0: bonding device bond0 does not seem to be present, delaying initialization.

So I want to know whether LXC can't support bonding configured this way, or whether I can do something to achieve it. I am glad to talk about bonding and LXC with anyone interested in it.

Regards,
Yao
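For illustration, the host-side layout Guido suggests (bond on the host, veth into the container) could look roughly like this on a Debian-style host. This is a sketch only: the interface names, bridge name `br0`, addresses, and bonding options are assumptions, not taken from the thread.

```text
# /etc/network/interfaces on the host -- hypothetical sketch
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode balance-alb
    bond-miimon 100

# optional bridge so several containers can share the bond
auto br0
iface br0 inet manual
    bridge_ports bond0

# container configuration: reach through via a veth pair
# lxc.network.type  = veth
# lxc.network.link  = br0
# lxc.network.flags = up
```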
Re: [Lxc-users] Problem with lxc and multiple ips
Dear Andreas,

please substantiate your phrase "start a lxc with multiple IPs" and the line "If we are using only one IP for LXC, all is fine": What kind of network setup do you use -- e.g. a bridge on the LXC host and veths on the containers? A guess might be that you have a MAC address clash; did you override lxc.network.hwaddr?

Guido

-----Original Message-----
From: Andreas Laut [mailto:andreas.l...@spark5.de]
Sent: Friday, October 11, 2013 8:53 AM
To: lxc-users@lists.sourceforge.net
Subject: [Lxc-users] Problem with lxc and multiple ips

Dear list,

we are using lxc 0.8 on Debian Wheezy (official Debian package). Now we wanted to start a container with more than one IP address, and we got strange behavior: after starting the container some IPs are reachable, some not. If we shut the container down and boot it again, some other IPs are reachable. There seems to be no logic behind this. And -- in time -- all IPs became reachable. If we use only one IP, all is fine. Has anyone else seen this problem? All help and ideas are appreciated.
Re: [Lxc-users] Problem with lxc and mutliple ips
Dear Andreas,

Thank you for the first clarification. But now I want to ask what exactly is not working:

* Are you trying to use more than one container with one IP, or one container with more than one IP?
* Are you using the same subnets?
* From what location can't you reach the IP? Does that station at least list the right MAC in its ARP table?
* Can you reach the LXC host from there? Can you reach others from the host? Can you reach the host or others from the containers?
* Does your host have an IP (on the bridge)? Is STP enabled? Are the forward delay and hello time set to appropriately low values?
* Is the host connected to a switched network? What did you observe there with respect to the MACs/IPs used?

Greetings,
Guido

-----Original Message-----
From: Andreas Laut [mailto:andreas.l...@spark5.de]
Sent: Friday, October 11, 2013 10:41 AM
To: Jäkel, Guido; 'lxc-users@lists.sourceforge.net'
Subject: Re: [Lxc-users] Problem with lxc and mutliple ips

Ok, sorry. You're right. We are using a bridge named br0 bound to eth0 on the LXC host. On the containers we are using veth, but the problem also happens with type macvlan -- no change at all. We also tried setting hwaddr. We're doing further research in the hope of showing you a way to reproduce this.

Andreas

Am 11.10.2013 09:41, schrieb Jäkel, Guido:

Dear Andreas, please substantiate your phrase "start a lxc with multiple IPs" and the line "If we are using only one IP for LXC, all is fine": What kind of network setup do you use -- e.g. a bridge on the LXC host and veths on the containers? A guess might be that you have a MAC address clash; did you override lxc.network.hwaddr?

Guido

-----Original Message-----
From: Andreas Laut [mailto:andreas.l...@spark5.de]
Sent: Friday, October 11, 2013 8:53 AM
To: lxc-users@lists.sourceforge.net
Subject: [Lxc-users] Problem with lxc and multiple ips

Dear list, we are using lxc 0.8 on Debian Wheezy (official Debian package).
Now we wanted to start a container with more than one IP address, and we got strange behavior: after starting the container some IPs are reachable, some not. If we shut the container down and boot it again, some other IPs are reachable. There seems to be no logic behind this. And -- in time -- all IPs became reachable. If we use only one IP, all is fine. Has anyone else seen this problem? All help and ideas are appreciated.
Re: [Lxc-users] Bind mount point must be in container root?
Dear Kaj,

You've stepped into a non-trivial trap. It will work either if your mount path inside the container isn't 'mnt', or if you use lxc.pivotdir to set it to something other than its default 'mnt'. To get around this problem, I'm using an argument like -s lxc.pivotdir=$CONTAINER in my starter script.

With greetings,
Guido

-----Original Message-----
From: Kaj Wiik [mailto:kaj.w...@iki.fi]
Sent: Tuesday, October 08, 2013 12:21 PM
To: lxc-users@lists.sourceforge.net
Subject: [Lxc-users] Bind mount point must be in container root?

Hi!

I noticed that in order to get a bind mount (from host to container) to work, the mount point must be in the container root.

This works:

    lxc.mount.entry = /mnt/raid/course_data course_data none bind 0 0

This does not:

    lxc.mount.entry = /mnt/raid/course_data mnt/course_data none bind 0 0

The files just do not show up; no error messages that I could see are emitted... LXC 0.7.5-3ubuntu67, kernel 3.5.0-41-generic #64~precise1-Ubuntu.

Is this expected and documented? Detecting this (and the workaround) took quite a lot of time...

Cheers,
Kaj
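To illustrate the workaround Guido describes, the container configuration could be adjusted in either of two ways. This is a sketch; the container name `course` and the paths are placeholders, not from the thread.

```text
# Option 1: keep the mount target outside 'mnt'
lxc.mount.entry = /mnt/raid/course_data course_data none bind 0 0

# Option 2: move the pivot directory away from its default 'mnt',
# so a bind mount under mnt/ no longer collides with it, e.g.:
lxc.pivotdir = course
# or, as Guido does, per container from the starter script:
#   lxc-start -n course -s lxc.pivotdir=course ...
lxc.mount.entry = /mnt/raid/course_data mnt/course_data none bind 0 0
```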
Re: [Lxc-users] veth interface not deleted?
Would injecting a TCP RST really be necessary? In my test, doing ip link del on the host side of the interface always succeeds, no matter what state the guest container's interface is in. Serge, do you have the particular commit IDs for lxc.network.script.down support? Backporting that would probably be the best step for me to try.

Dear Fajar, dear Serge,

With my setup I found on a test machine that tcpdump detects a RST packet from the container (for an open, idle ssh connection to the container's sshd) only if the network interface of the container is *not* brought down at shutdown. Obviously, at that moment the ssh client exits immediately with "Connection to container closed by remote host." And I did not observe any undeleted veths.

If the interface is closed as usual, nothing happens on the test machine. Here the veth stays alive on the LXC host. I'm able to remove the interface using 'ip link del dev veth'. This allows the container to start up again with the same veth name (I name it after the container). But in spite of this action, the ssh connection stays alive.

At the moment I don't have an idea where the friendly RST comes from. But it terminates the TCP connections, and therefore the veth vanishes at once.

Guido
Re: [Lxc-users] veth interface not deleted?
Hi,

I want to contribute an observation made while playing around with my empty plain-vanilla container template. The test cycle is to start it, open an ssh terminal session to it, leave it idle, and regularly shut down the container.

Now, if the container's eth0 is brought down by the shutdown, then after the end of the lxc process the corresponding veth is still there and the idle ssh terminal client doesn't notice anything. This seems to hold for a long time (maybe forever if there is no traffic?). I'm not able to rename the veth at this point; it's said to be busy. But if I hit a key in the terminal, after some short timeout the client quits with a broken-connection message, and shortly after, the veth disappears.

On the other hand, if I prevent eth0 from being brought down inside the container by configuration, then right at the termination of the lxc process the ssh terminal quits and the veth disappears as well.

Apart from this test, I noticed a similar effect on other in-real-usage containers with connections to listeners inside: the veth stays around for a while until these inbound connections have died.

Guido
Re: [Lxc-users] veth interface not deleted?
Quoting Jäkel, Guido (g.jae...@dnb.de):

> Hi, I want to contribute an observation made while playing around with my empty plain-vanilla container template. The test cycle is to start it, open an ssh terminal session to it, leave it idle, and regularly shut down the container. Now, if the container's eth0 is brought down by the shutdown, then after the end of the lxc process the corresponding veth is still there and the idle ssh terminal client doesn't notice anything. This seems to hold for a long time (maybe forever if there is no traffic?). I'm not able to rename the veth at this point; it's said to be busy.

Yes, AIUI this is proper TCP behavior, and there's really nothing we can do about it. The TCP socket has to be kept open so we can tell the other end to shut down.

I completely agree ...

> But if I hit a key in the terminal, after some short timeout the client quits with a broken-connection message, and shortly after, the veth disappears. On the other hand, if I prevent eth0 from being brought down inside the container by configuration, then right at the termination of the lxc process the ssh terminal quits and the veth disappears as well. Apart from this test, I noticed a similar effect on other in-real-usage containers with connections to listeners inside: the veth stays around for a while until these inbound connections have died.

... but what causes this helpful effect? I guess the open connections are reset, maybe by the stack as a result of closing the network namespace. But why does this happen only if the interface was left up (which is the abnormal case)?
Re: [Lxc-users] clones of clones are failing to start
Dear Serge,

To help avoid such problems, I would propose introducing macro expansion (of LXC's own tags, but also of environment variables) into the configuration argument parser, and providing some useful built-ins like the container name. Then one could use e.g.

    lxc.hook.mount = $MYCONTAINER_HOME/hooks/$lxc.name

In my personal LXC framework I need this because I want to have abstract configuration files for classes of containers. I simulate it by pre-parsing additional configuration files and converting them into -s options for lxc-start.

It would also be helpful if one could use the -f option more than once (I haven't checked whether this is already possible), and if there were a meta tag to include another configuration file from inside one ('lxc.include = foo' or '@INCLUDE foo').

Guido

-----Original Message-----
From: Serge Hallyn [mailto:serge.hal...@ubuntu.com]
Sent: Wednesday, July 17, 2013 11:32 PM
To: Jay Taylor
Cc: lxc-users
Subject: Re: [Lxc-users] clones of clones are failing to start

Clearly the updating of hostnames should always exempt lxc.cap.drop and a few other lines. Just how robust we can make this I'm not 100% sure. (I.e. in lxc.hook.mount = /opt/mycontainer/hooks/mycontainer.1, how can we know which 'mycontainer' strings should be replaced?)
Re: [Lxc-users] clones of clones are failing to start
Hi Serge,

>> To help avoid such problems, I would propose introducing macro expansion (of LXC's own tags, but also of environment variables) into the configuration argument parser, and providing some useful built-ins like the container name. Then one could use e.g. lxc.hook.mount = $MYCONTAINER_HOME/hooks/$lxc.name

> That sounds good. Would you be able to post a patch to do this?

I'm very sorry, I'm not a C developer but a systems engineer. I understand a lot of different code just from reading, and may even point out bugs. Or more abstract things like this :}

> And if you had this, you'd be able to simply use lxc-start without the -s? That sounds worthwhile then. Can you show us an example of a pre-parsed config, and the final executed lxc-start command?

I'm using the standard lxc configuration file just for the general setup. The concrete details are in an additional file per container. But a concrete container's extra config file might be a symlink to a file describing the configuration for a class of containers. Also, the container's standard config file is normally a symlink to a central version. By cheating around with such symlinks, I'm easily able to test out and then migrate to new configuration scenarios.

In my central lxc command script, I haven't implemented expansion of internal lxc tags, but of environment variables. There's a section that gathers the additional config into the array EXTRA_OPTS, which is later appended to the arguments of lxc-start:

    LXCBASE=/etc/lxc
    [...]
    CONTAINERBASE=$LXCBASE/$CONTAINER
    [ ! -d $CONTAINERBASE ] && { LOG "unknown container \"$CONTAINER\" (no LXC basedir found)"; exit 1; }
    [...]
    # build/check standard and extra configuration
    declare -a EXTRA_OPTS
    [ -e $CONTAINERBASE/fstab ] && EXTRA_OPTS=(-s lxc.mount=$CONTAINERBASE/fstab)   # use a central fstab if it exists

    EXTRA_CONFIG=$CONTAINERBASE/config.$CONTAINER
    [ -f $EXTRA_CONFIG ] && while read LINE; do
        LINE=${LINE%%#*}                                 # delete comments
        [ -z "$LINE" ] && continue                       # skip empty lines
        TAG=${LINE%%=*};   TAG=`echo $TAG`               # split and remove surrounding whitespace
        VALUE=${LINE##*=}; VALUE=`echo $VALUE`; eval VALUE=\"$VALUE\"   # same, and evaluate
        [ -n "$VALUE" ] && EXTRA_OPTS=("${EXTRA_OPTS[@]}" -s "$TAG=$VALUE")   # push to array
    done < $EXTRA_CONFIG && LOG "extra config processed."
    [...]
    LOG "starting container \"$CONTAINER\" ..."
    lxc-start \
        -n $CONTAINER -d \
        -l $LXCLOGLEVEL -o /var/log/lxc/$CONTAINER.log -c /var/log/lxc/$CONTAINER.out \
        -f $CONTAINERBASE/config \
        -s lxc.utsname=$CONTAINER \
        -s lxc.rootfs=$CONTAINERBASE/rootfs \
        -s lxc.pivotdir=$CONTAINER \
        -s lxc.network.link=$BRIDGE \
        -s lxc.network.hwaddr=$HWADDR \
        -s lxc.network.veth.pair=$CONTAINER \
        -s "lxc.mount.entry=$CGROUPPATH/$CONTAINER cgroup none ro,bind 0 0" \
        "${EXTRA_OPTS[@]}"; RC=$?
    [...]

An additional config file containing

    # sample
    lxc.cgroup.memory.limit_in_bytes = 9G
    lxc.cgroup.memory.soft_limit_in_bytes = 8G
    lxc.editor = $EDITOR    # just a senseless example for expansion

would be translated into the extra options

    -s lxc.cgroup.memory.limit_in_bytes=9G -s lxc.cgroup.memory.soft_limit_in_bytes=8G -s lxc.editor=vim
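The expansion loop above can be condensed into a small self-contained function. This is a sketch, not Guido's actual script: it reads "tag = value" lines, strips comments and blank lines, expands environment variables in the value, and prints one "-s tag=value" pair per line.

```shell
#!/bin/bash
# Minimal re-implementation of the config-to-options expansion (a sketch).
expand_config() {
    while read -r LINE; do
        LINE=${LINE%%#*}                       # delete comments
        [ -z "$(echo $LINE)" ] && continue     # skip empty/whitespace lines
        TAG=$(echo ${LINE%%=*})                # split off the tag, trim whitespace
        VALUE=$(echo ${LINE#*=})               # split off the value, trim whitespace
        eval VALUE=\"$VALUE\"                  # expand $VARS inside the value
        [ -n "$VALUE" ] && printf -- '-s %s=%s\n' "$TAG" "$VALUE"
    done < "$1"
}

# Example: a per-container extra config with an environment reference
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
# sample
lxc.cgroup.memory.limit_in_bytes = 9G
lxc.editor = $EDITOR
EOF
EDITOR=vim expand_config "$CFG"
rm -f "$CFG"
```

Note that `eval` happily executes anything embedded in the value, so a real script should only feed it trusted config files.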
Re: [Lxc-users] lxcbr0 MAC addr issue
Dear Hans,

this is a FAQ here, but -- as you already found -- not fundamentally caused by LXC. The software bridge will always choose the lowest MAC of the attached devices, or keep an explicitly assigned one (from the set of currently attached devices) as long as possible. In your case you may either set the MAC of the outgoing NIC, or set the MACs of the veths from a range above those of common hardware. There are more or less reserved ranges used for this by us here and in similar projects.

Guido

-----Original Message-----
From: Hans Feldt [mailto:hans.fe...@ericsson.com]
Sent: Wednesday, June 05, 2013 8:23 AM
To: lxc-users@lists.sourceforge.net
Subject: [Lxc-users] lxcbr0 MAC addr issue

It is a fact that the bridge takes the lowest MAC address from the attached ports for the host port. See for example http://backreference.org/2010/07/28/linux-bridge-mac-addresses-and-dynamic-ports/

Thus if a container is restarted, the host port can potentially change its MAC address, and containers will have a stale ARP cache. This of course causes problems for container-host communication. I tested the workaround mentioned in the link, but then I got a problem with NetworkManager on a later Ubuntu version. Then I tried using a dummy container and reusing its MAC addr for the host port. Works, but...

Now my question: couldn't lxc (at boot) set up a fixed MAC addr for the host port?

Thanks,
Hans
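One way to apply Guido's advice is to pin the bridge's MAC in the interface configuration so it no longer follows the lowest MAC of the attached veth ports. This is a sketch for a Debian/Ubuntu ifupdown setup; the addresses shown are arbitrary (the MAC is a made-up locally administered address), not from the thread.

```text
# /etc/network/interfaces -- hypothetical sketch
auto lxcbr0
iface lxcbr0 inet static
    address 10.0.3.1
    netmask 255.255.255.0
    bridge_ports none
    hwaddress ether 02:00:00:00:03:01   # locally administered, fixed MAC
```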
[Lxc-users] lxcbr0 MAC addr issue
Yes, and it does this. The point is that lxcbr0 is not tied to any physical NIC. So for the first container you start, however high its macaddr is, lxcbr0 takes its MAC. If the next container gets a lower macaddr, lxcbr0's macaddr drops.

This lxcbr0 is special to Ubuntu, right? And if not to a physical NIC, to what is this bridge connected on the host?
Re: [Lxc-users] list admin
Ok, who wants to be co-administrator of the mailing list?

Tamas and Mike
Re: [Lxc-users] Restarting LXC containers after power failure
Dear David,

this will require persisting the current power state of a container by some kind of marker. A tricky way is to mark some container-related file, e.g. to (mis)use the sticky bit of the container's lxc configuration file, or to put a marker file into the container's rootfs. This will allow you to write an init script that scans for such a marker and (re)starts the corresponding container. Using LXC's startup/shutdown hook scripts, you may even set such a marker automagically and, in particular, remove it on a regular shutdown.

-----Original Message-----
From: David Parks [mailto:david.pa...@frugg.com]
Sent: Monday, May 06, 2013 2:59 PM
To: lxc-users@lists.sourceforge.net
Cc: Ha Ho
Subject: [Lxc-users] Restarting LXC containers after power failure

We just had a datacenter power failure; upon restart, our LXC containers were of course stopped. It would be just dandy if there were an LXC option to restart the containers that were already running before the sudden failure. Is there any such capability?
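The marker idea could be sketched as follows. Everything here is hypothetical: the marker file name `.was-running`, the layout under `$LXCBASE`, and the `START_CMD` indirection (which lets you substitute `lxc-start`) are illustrative assumptions, not part of any LXC feature.

```shell
#!/bin/bash
# Restart every container whose rootfs carries a marker file (a sketch).
# The marker would be set by a start hook and removed by a clean-shutdown
# hook, so after a power failure only previously-running containers have it.
LXCBASE=${LXCBASE:-/etc/lxc}
START_CMD=${START_CMD:-lxc-start}

restart_marked() {
    local dir name
    for dir in "$LXCBASE"/*/; do
        name=$(basename "$dir")
        if [ -e "$dir/rootfs/.was-running" ]; then   # hypothetical marker
            $START_CMD -n "$name" -d
        fi
    done
}
```

A boot-time init script would simply call `restart_marked` after the network is up.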
Re: [Lxc-users] LXC icon for Ubuntu's Juju
TBH, I prefer the icon on the right, with boxes inside the monitor.

+1

Or what about something with a container -- like http://serverservice.sytes.net/wp-content/uploads/2012/06/lxc11.png
Re: [Lxc-users] appropriate architecture for two sets of containers on one host
Dear Mike,

Don't put an IP on the second (or further) bridges. Think of this bridge's configuration slot as an additional virtual interface card connecting your host's IP stack to that network. With that in mind, you will not be surprised that you got two network interface devices and two default routes with your configuration. And just as on a plain machine with two network cards on two different networks, you'll get in trouble routing your outgoing traffic, and run into advanced problems if there are multiple routes to reach your host -- especially if that traffic goes through a stateful firewall.

I think you don't want (and I would even say you should not) offer any services from the host. Therefore you should not need to reach the host from the VLANs you're using for the different groups of containers (respectively, you don't need an IP for the host in these subnets). I would suggest using a separate management network/VLAN. Then just add a bridge for it and put the host's IP on it to plug the host's IP stack into that network. You'll have a single default route to your router, and it's up to the router to provide/control the interconnection to other network ranges.

I'm curious how you configured yours. Because I use PXE and an NFS rootfs for my hosts, I'm using two physical interfaces on them. eth0 is for accessing the host. It's attached to an untrunked, plain old port on the switch, because I can't figure out how to PXE- and NFS-boot from a trunked VLAN. To provide access to our different VLANs for the containers, eth1 carries all the virtual VLAN interfaces (named vlan#) and these in turn the bridges (br###). These take the outer sides of the container veths (named by container name), and inside the container you'll see the VLAN of interest unrolled on eth0. If you know what you're doing, you may also connect a container to more VLANs by adding additional veths.
And because the containers' rootfs and dataspace filesystems are on an NFS mount done by the host too, this network traffic goes through eth0. Therefore there's no need to expose the storage architecture or other backside services to any VLAN other than the management one.

For separation of concerns I also suggest using DHCP to configure the containers. Many routers provide a DHCP relay agent (i.e. a DHCP proxy) to span across networks; with this you don't need to make your DHCP server multi-homed on all the VLANs.

Greetings,
Guido

> dropping the PVID from the switch. But when I added another VLAN:

[ASCII diagram, garbled in the archive: containers c1 (eth0/.17.3) and c2 (eth0/.17.4) attach via br1.17/eth1.17 to eth1 (.17.2); a second bridge br1.18 over eth1.18 was added alongside]

with the following in /etc/network/interfaces:

    iface eth1.18 inet manual

    auto br1.18
    iface br1.18 inet static
        bridge_ports eth1.18
        bridge_maxwait 0
        bridge_fd 0
        bridge_stp off
        address 192.168.18.2
        netmask 255.255.255.0
        gateway 192.168.18.1
        dns...

    iface eth1.17 inet manual

    auto br1.17
    iface br1.17 inet static
        bridge_ports eth1.17
        bridge_maxwait 0
        bridge_fd 0
        bridge_stp off
        address 192.168.17.2
        netmask 255.255.255.0
        gateway 192.168.17.1
        dns...

I got two default routes:

    host$ ip route show
    192.168.18.0/24 dev br1.18  proto kernel  scope link  src 192.168.18.2
    192.168.17.0/24 dev br1.17  proto kernel  scope link  src 192.168.17.2
    default via 192.168.17.1 dev br1.17
    default via 192.168.18.1 dev br1.18
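Following Guido's advice, the second bridge would carry no address at all, leaving only the management bridge with an IP and a single default route. This is a sketch derived from Mike's stanzas (the `inet manual` form for the bridge is the assumed change):

```text
# /etc/network/interfaces -- second VLAN bridge, no host IP, no gateway
auto eth1.18
iface eth1.18 inet manual

auto br1.18
iface br1.18 inet manual
    bridge_ports eth1.18
    bridge_maxwait 0
    bridge_fd 0
    bridge_stp off
```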
Re: [Lxc-users] appropriate architecture for two sets of containers on one host
... and if you don't want to deal with changing spanning trees or broadcast/multicast storms, I strongly recommend letting only *one* device do any routing for everything -- for the LXC host and for all other machines in the network. Of course, this one is the (core) router. Guido -Original Message- From: Fajar A. Nugraha [mailto:l...@fajar.net] The only time you need to have an IP address on the host bridge is if you want the host to communicate directly with the containers. But if you have an external router (e.g. 192.168.18.1, which can also connect to 192.168.17.0/24), then your host can still communicate with the container, even when the host itself does not have an IP address in 192.168.18.0/24. -- Fajar
Re: [Lxc-users] Syslog
Dear Miroslav, please ensure that the syslog daemon within each container does not log the kernel log source. If this source is drained by more than one syslog process, the log messages will be spread over the different syslog files. If you state which concrete syslog daemon you use, I may have concrete configuration instructions for it. Sincerely Guido

Hello, I have been using LXC for a long time. I have a problem with syslog messages: they are corrupted. For example:

Jan 24 09:05:27 lx3 kernel: 78e1601e8c48:00 =.27T2.1N4S0R0 = 6 TDP5 = = 6241.950 hrwl:6n:OIe13Ue0Ac4ee1601e8c4004009S=2879S1..3L=4O00R=0T=3D2 O=PP58D=2E1 62e:R:e.Oe =48:60:89:05:9R78.D13.L1T0 C0T5D2R= =9P6E24091 owl6eRIt6UtMc7:11:ed:00:00C2.2S703E4OxP=0L3=9OUS58T2N4eh Ca78:60:89:05:9R78.D13.L1T0 C0T5D2R= =9P6E26[224021.773178] Shorewall:vl632net:DROP:IN=eth1.63 OUT=eth0 MAC=ca:47:e8:e9:17:61:00:19:e7:8d:c9:42:08:00:45:00:00:90 SRC=172.28.27.27 DST=172.30.0.31 LEN=144 TOS=0x00 PREC=0x00 TTL=253 ID=6529 PROTO=UDPSP=59D= = ::: C7222D=20. N4T=0PCx L5I61RODS=1 T6L=4[204519]Soelv3eDPNt. Tt Ca78971097d92805000R1... T7301E1 Sx E00T2 =0PTU T48P1 N2

Does any solution exist? Is it possible to isolate syslogs between containers? Thank you and best regards,
Re: [Lxc-users] Physical interface not getting released after container shutdown
Dear Benoit, Serge Hallyn suggested that 7b35f3d should fix my problem. I noticed that -- thanks for the tip. A careful analysis of netstat does not lead me to think I have remaining container connections. I'm not using physical interfaces, but instead of the default veth names (veth plus a random suffix) I name the veth interfaces after the container. That said, I had run into the same problems. Notice that if a connection is in a TCP *_WAIT state, it needs up to 6 min (with defaults) to disappear. After this time, you might be able to restart the container without rebooting the host. Yours Guido ***Lesen. Hören. Wissen. Deutsche Nationalbibliothek*** -- Dr. Guido Jäkel Deutsche Nationalbibliothek IT SG 4.3 (Infrastruktur Unix) Adickesallee 1 60322 Frankfurt am Main Tel. +49-69-1525-1750 Fax +49-69-1525-1799 mailto:g.jae...@dnb.de http://www.dnb.de
Re: [Lxc-users] Syslog
Dear Miroslav, in the case of rsyslog: for every container (but obviously not for the host), edit /etc/rsyslog.conf and disable loading of the imklog module by commenting it out:

    [...]
    #$ModLoad imklog # provides kernel logging support (previously done by rklogd)
    [...]

Sincerely Guido

-Original Message- From: Miroslav Lednicky [mailto:miroslav.ledni...@fnusa.cz] Sent: Thursday, January 24, 2013 11:11 AM To: Jäkel, Guido Cc: 'lxc-users@lists.sourceforge.net' Subject: Re: [Lxc-users] Syslog Dear Guido, I am using rsyslog on the Ubuntu distribution. Thank you, Miroslav. -- Miroslav Lednický, Fakultní nemocnice u sv. Anny v Brně
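That edit can be scripted across all containers at once. A hedged sketch -- the `/srv/lxc/*/rootfs` layout is an assumption, not from the thread; adjust it to where your container root filesystems actually live:

```shell
#!/bin/bash
# Comment out the imklog module in every container's rsyslog.conf so that
# only the host's syslog daemon drains the kernel log source.
disable_imklog() {
  local conf="$1"
  # turn an active '$ModLoad imklog' line into a comment; keep everything else
  sed -i 's/^\$ModLoad imklog/#$ModLoad imklog/' "$conf"
}

# /srv/lxc/<name>/rootfs is an assumed layout -- adapt to your setup
for rootfs in /srv/lxc/*/rootfs; do
  if [ -f "$rootfs/etc/rsyslog.conf" ]; then
    disable_imklog "$rootfs/etc/rsyslog.conf"
  fi
done
```

After the change, restart rsyslog inside each container; the host's rsyslog keeps imklog enabled and remains the only consumer of the kernel log.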
Re: [Lxc-users] start order
On the other hand, I *do* also feel that any services in the containers ought to be robust against unavailability, so that startup order should not matter.

Dear Serge, yes -- it's Christmas time, bells are ringing and all is warm and bright ;) Unfortunately, order matters to the greater part of software. The better part will run into a (comparably short) timeout and fail in a defined way on startup. The other part will run but fail later, or will be functionally degraded without recovering once the preconditions become available. And all of us have seen software that needs requirements it later never uses. Why should we need similar mechanisms in every system init framework, if every piece of the puzzle would just wait, well-behaved, until all its preconditions are fulfilled ... Guido
Re: [Lxc-users] start order
Hi all, here are my 5ct on autostart and start order: Because I'm using a farm of LXC hosts over which my containers may be spread, I also need to persist the preferred host of a container. This is currently stored in a separate configuration file. Because this information should also be easily accessible by the container itself, for some special reasons, it is stored in the container's root file system. When an LXC host boots, it scans for this information and starts its assigned containers -- but only if they have not already been started on other, non-preferred hosts by failover mechanisms. This is detected by some heuristic inspections, like a ping to the container's management network address. For sure, an LXC host farm is an advanced feature, but in this case an lxc-internal autostart feature would need a hook script to decide whether an autostart-tagged container should really be started.

For a start-order configuration mechanism I would prefer some kind of local before/after paradigm over a concrete, total enumeration: within the LXC configuration of a container A, it should be stated that A depends on B (should be started after B) and/or provides something for C (should be started before C). The autoboot feature then has to calculate a concrete boot order from these constraints. With this, one only has to write down the really important dependencies (which will be few in most cases). This kind of description also covers a possible extension: starting sets of (so-calculated) independent containers in parallel, with a configurable degree of parallelism. That would be useful for powerful hosts with a large number of containers. with greetings Guido
[Lxc-users] Shared file access inside a container (was: Converting existing CentOS 6.x to container within Ubuntu 12.04 - can that be simple?)
(1) I'm not sure you can do an NFS mount inside an LXC container

Yes, you can -- that's the simplest solution. But you can also mount it on the host and propagate it (or any subtree, e.g. one per container) via a bind mount into the container. If you have a lot of containers, this reduces the number of NFS mounts to one per host. And if the containers use the same set of files, they will use local locking and share the same fs cache. Also, since the network traffic caused by NFS operations is handled by the host and file access inside the containers causes no network traffic, a container doesn't need network access to the NFS server. In other words, the NFS server doesn't need to be exposed to the network domain of the containers, just to that of the host.

An entry in an lxc fstab file (referenced by lxc.mount=) like

    /mnt/ext_nfs/container_foo mnt/my_nfs_part none bind 0 0

will propagate an external NFS source (previously mounted on the host at /mnt/ext_nfs, with a subtree container_foo) to the mount point /mnt/my_nfs_part of the container foo. This paradigm also serves the principle of separation of concerns, because the container doesn't have to know about the source of the shared file space. It may be shifted, split, or otherwise reconfigured as external needs dictate, and it doesn't even need to be served by NFS. Guido
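The one-NFS-mount-per-host pattern means each container differs only in the subtree name, so the fstab lines can be generated. A small sketch under the same assumptions as the mail (/mnt/ext_nfs on the host, mnt/my_nfs_part inside the container; the container names are placeholders):

```shell
#!/bin/bash
# Emit the lxc fstab bind-mount line for one container, following the
# pattern from the mail: one host-side NFS mount, one subtree per container.
bind_line() {
  local container="$1"
  printf '/mnt/ext_nfs/container_%s mnt/my_nfs_part none bind 0 0\n' "$container"
}

# one line per container, to be appended to the file named by lxc.mount=
for c in foo bar; do
  bind_line "$c"
done
```

Each emitted line goes into that container's fstab file; the host keeps its single NFS mount at /mnt/ext_nfs.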
Re: [Lxc-users] What should 'uptime' say inside an lxc container?
Dear Dan, as a workaround you may use the following Perl script written by Ullrich Horlacher. It also demonstrates the basic idea of where to get a container's uptime from: here he uses a well-known file, but I think one could also use information related to the container's init process.

lxc-uptime:

    #!/usr/bin/perl -w
    $uptime = `/usr/bin/uptime`;
    @s = lstat '/dev/pts' or die $uptime;
    $s = time - $s[10];               # seconds since the ctime of /dev/pts
    if ($s > 172800) {                # more than two days
      $d = int($s/86400);
      $uptime =~ s/up .*?,/up $d days,/;
    } else {
      $h = int($s/3600);
      $m = int(($s-$h*3600)/60);
      $uptime =~ s/up .*?,/sprintf("up %02d:%02d,",$h,$m)/e;
    }
    print $uptime;

with greetings Guido
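The hint about the init process can be sketched in shell, too. This is an assumption-laden alternative, not Horlacher's script: inside the container, /proc/1 belongs to the container's init, and the ctime of that /proc entry roughly marks the container's start (GNU stat assumed):

```shell
#!/bin/bash
# Container uptime in seconds, taken from the ctime of init's /proc entry.
# Run inside the container. The parameter exists only to ease testing;
# normally the default /proc/1 is what you want.
container_uptime() {
  local ref="${1:-/proc/1}"
  echo $(( $(date +%s) - $(stat -c %Z "$ref") ))
}

container_uptime
```

The number printed could then be reformatted into the "up N days" form the same way the Perl script does.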
Re: [Lxc-users] when Host OS upgrades its linux kernel .. what happens in LXC containers?
So what happens with the containers when the host OS gets an upgrade that includes a new kernel? Are the containers still reachable, runnable, etc.? I guess what I'm asking is: what happens?

Dear Brian, a new kernel will not be used until you reboot the host. So after an OS upgrade on the host, nothing changes for the moment -- neither for the host nor for the containers. Of course, before the reboot of the host you might need to upgrade the OS of a container *if* it is incompatible with the new kernel. But for normal update intervals this is very unlikely. Greetings Guido
[Lxc-users] Proposal: Change default value of lxc.pivotdir
Dear developers, I want to propose changing the default value of the temporary lxc pivot directory from 'mnt' to '.lxc-mnt' or something similarly unusual. It took me about an hour to trace down why I could successfully bind mount a resource from the host into the container at any mount point -- except at locations below /mnt, which is the canonical choice of mount point for many users. Yes, I could have read it in the manual. And I actually was able to find it there the moment I saw, at debug level, what happens in the background. As I see no reason to keep this nice pitfall around for others, I vote to change the default name of the pivot directory away from 'mnt'. greetings Guido
Re: [Lxc-users] Proposal: Change default value of lxc.pivotdir
Dear Chris,

I think many of us have been caught out by this feature.

No need to let that number keep rising, right? ;)

I now set all my config files to use /mnt/.lxc/NAME as the lxc.pivotdir entry for a container named NAME.

Did you choose the NAME postfix because, in addition, there's a possible race condition when starting two containers at the same time? If that is the case then please, developers, make it built in to lxc-start to choose such an arbitrary name based on some adequate prefix (maybe .lxc-pivot is also a good, speaking one) AND the actual container name. For the moment I'll add this to my wrapper script as a dynamic configuration passed via the -s option. greetings Guido
Re: [Lxc-users] Proposal: Change default value of lxc.pivotdir
perhaps just using tempnam suffices.

Or the process id? To use something unique, but still related ...
Re: [Lxc-users] Multiple lxc containers with same IP/ethernet address
I have a setup where there are multiple short-lived containers (sharing the same IP address) on one host.

Why? Don't do that.

I agree... what is your goal? As others said, this is very free-spirited and typically only used in a high-availability cluster setup or other failover scenarios. You might use the arping package to explicitly broadcast the change of the MAC for the IP, by sending an additional unsolicited ARP announcement right after bringing up the interface in the container:

    /sbin/arping -c 1 -b -U $IP

When bringing up an interface with an IP address, such a packet is already sent out. But it might be swallowed by a switch, because it is configured to do so or even just because of bad firmware. Because I work with these HA things, I know that some switches remember just *one* IP per MAC and will be confused by any alias IP assignments.
Re: [Lxc-users] [Spam-Wahrscheinlichkeit=54]Re: lxc-execute fails to exec lxc-init
I know this is a digression, but I wondered if you could expand on this? Perhaps I could explain our use case and you could tell me if I'm doing the right thing? 1. We create a new container 2. We want to bootstrap it with a puppet script (apt-get install puppet; puppet apply script.pp) We see two options for this: 1. lxc-execute 2. issue a remote ssh command.

Dear Peter, if this is the common way you want to set up/bootstrap a new container, maybe you can -- as a third way -- inject a (self-deleting?) script into the container's filesystem before the first start. Guido
Re: [Lxc-users] ]Re: container shutdown
Executable name: I would prefer several almost identical actions to be implemented in one program with options instead of several almost identical programs. So I'd rather say lxc-shutdown -r than lxc-reboot. But I have no problem with lxc-shutdown doing -r based on argv[0] as well as getopts. Everyone can have what they want without asking you, the author, to write multiple programs.

... or -- as is common practice on unix -- with a multi-named (linked) executable containing some $0 mechanism --

Isn't that what I said? :)

Dear Brian, oh yes, of course -- I didn't read it accurately, sorry. Guido
Re: [Lxc-users] Ubuntu template questions
Can the host send a signal to the container's init? If yes, sysvinit responds to SIGINT. Does upstart behave the same (e.g. process control-alt-delete.conf when the signal is received)? It's set to reboot by default, but perhaps there's some other signal that we can use for shutdown?

SysVInit responds to SIGINT -- normally with a reboot action -- and to SIGPWR. I add the line

    pf:12345:powerwait:/sbin/halt

to the container's /etc/inittab file. In my lxc managing script I use this to gracefully reboot or halt (power down) containers by sending these signals to the container's init. If someone is able to add something to upstart or other corresponding frameworks, I would suggest using the same signals for compatibility. Guido
Re: [Lxc-users] Ubuntu template questions
After some experiments: upstart ignores SIGPWR, but it still listens to SIGINT, and killing the process from the host works. So modifying the container's control-alt-delete.conf to run shutdown -h instead of shutdown -r can let the host tell the guest to shut down cleanly.

Dear Fajar, because a container reboot may be emulated externally by a start after a stop, patching control-alt-delete.conf in such a way will have the most benefit as long as there's no patch for reacting to SIGPWR itself. Could the patch of control-alt-delete.conf include some detection of the in-container situation? Is it possible to key it to the (normally) absent sys_boot capability inside a container? Guido
Re: [Lxc-users] adding a default gateway inside a container as a non root user
Dear Arun, you may also use a DHCP environment to set up the container's network IP, routing, DNS servers etc. This approach will ease any changes of the network infrastructure and will help you make your templates more generic. For that, you have to assign a fixed MAC address to the container and configure a fixed parameter table (host/IP/MAC) at the dhcpd. In my lxc starter I'm using the formula

    IP=$(gethostbyname $CONTAINER)
    HWADDR=`IP=${IP#*.}; printf 00:50:C2:%02X:%02X:%02X ${IP//./ }`  # a.b.c.d -> 00:50:C2:bb:cc:dd (hex)
    [...]
    lxc-start -n $CONTAINER [...] -s lxc.network.hwaddr=$HWADDR

Guido

My tactical workaround was to inject the route add into /etc/rc.d/rc.local in the rootfs template for my LXC containers, so when I create each container rc.local is staged; I did the same with /etc/resolv.conf as well.

Hi, I am trying to add a default gateway inside an lxc container so that the application can talk to the outside network.
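The MAC derivation from the mail can be checked standalone. A sketch wrapping it in a function (bash; the OUI 00:50:C2 and the sample IP follow the mail, the function name is my own):

```shell
#!/bin/bash
# Derive a stable MAC from a container's IPv4 address, as in the mail:
# a.b.c.d -> 00:50:C2:bb:cc:dd (hex). Unique as long as the last three
# octets are unique, e.g. within one 10.0.0.0/8-style LAN.
ip_to_mac() {
  local rest=${1#*.}                 # drop the first octet: "b.c.d"
  # split on dots and hex-format each remaining octet
  printf '00:50:C2:%02X:%02X:%02X\n' ${rest//./ }
}

ip_to_mac 172.19.8.168   # -> 00:50:C2:13:08:A8
```

This MAC is then pinned both in the container config (lxc.network.hwaddr) and in the dhcpd's host table, so the container gets the same IP wherever it starts.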
Re: [Lxc-users] PostgreSQL - sh: cannot create /dev/null: Permission denied - LXC Issue?
Dear Patrick, as I understand it, /dev/null isn't writable in your container. That's definitely a wrong configuration. Please check that there is a real device node for /dev/null (and the other basic devices) in your container, and that you have it (and the others) in the lxc device access control list:

    lxc.cgroup.devices.allow = c 1:3 rw

Note that -- depending on the Linux flavor in your LXC container -- you might have to populate /dev on your own, because it's not reasonable to run udev or anything like it inside a container. Greetings Guido
Re: [Lxc-users] seeing a network pause when starting and stopping LXCs - how do I stop this ?
Dear Michael,

I always hate replying to my own posts, but I have stumbled onto some interesting clarification as I've continued to play with this... Below, in-line. [...]

Again, a well-done investigation. For everyone who doesn't have the time to read these threads carefully, I want to summarize the (imho) key statements:

* With kernel 2.6.27ff, one MAY make a MAC sticky on a bridge using standard tools, e.g.

    ifconfig br0 hw ether 12:34:56:78:90:ab

  This turns off the auto-select-the-lowest-MAC algorithm.

* But the MAC MUST be that of one of the attached NICs to get the stack's layer-2 packet routing working:

    The problem is that the bridge only thinks a packet is local if it arrives with destination hw addr == incoming device address.

  You might have more than one bridge, attached to different groups of NICs. Therefore there has to be a layer-2 instrument (the MAC) to find the right bridge by its table of MACs spanned by the attached interfaces.

For us LXC users this means we should make the MAC of the host's interface sticky on the attached bridge. Unfortunately there's no support in the brctl command for it (like 'brctl addif bridge interface [--sticky]'). Therefore, one has to use something like

    ifconfig $BRIDGE hw ether `LANG= ifconfig $HOSTNIC | sed -ne '/HWaddr/ s/.* HWaddr \(.*\)$/\1/p'`

in an ifup post-script. greetings Guido
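With iproute2 (the modern replacement for ifconfig/brctl), the same stickiness can be had by copying the NIC's MAC from sysfs onto the bridge. A sketch; br0/eth0 are placeholder names, and the function only prints the command so it can be inspected before being eval'ed as root:

```shell
#!/bin/bash
# Make a bridge's MAC sticky by giving it the MAC of the enslaved host NIC.
# SYSFS is overridable only for testing; it defaults to the real /sys.
mac_of() { cat "${SYSFS:-/sys}/class/net/$1/address"; }

bridge_mac_cmd() {
  local bridge="$1" nic="$2"
  echo "ip link set dev $bridge address $(mac_of "$nic")"
}

# As root, one would run:  eval "$(bridge_mac_cmd br0 eth0)"
```

Like Guido's ifconfig/sed pipeline, this belongs in an ifup post-script so the bridge reclaims the NIC's MAC every time the interface comes up.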
Re: [Lxc-users] LXC guests and their PTY (permissions): PTY allocation request failed on channel 0
Hi all, I am really very happy about the goal of getting a virtualization solution mainline; however, there are quite a few things I really hate about LXC right now, and this is one:

Dear Christian, because I'm using Gentoo too, I'll try to support you via direct mail communication. Guido
Re: [Lxc-users] Graceful shutdowns: current best practices?
4. Which signal? SIGINT? SIGPWR? Both? It only works for sysvinit-based systems, not for upstart, as on Ubuntu!

Dear Derek, sending a SIGINT to init will invoke the ctrlaltdel entry of /etc/inittab. A SIGPWR will (in the absence of /etc/powerstatus) call the powerfail entry. In a common setup, sending SIGINT will cause a reboot and SIGPWR will halt the client system. In my Gentoo environment I'm currently using

    container/etc/inittab
    [...]
    ca:12345:ctrlaltdel:/sbin/shutdown -r now
    pf:12345:powerwait:/sbin/halt
    [...]

But I'm still using the so-called baselayout-1. Some time ago Gentoo shifted to baselayout-2, which uses OpenRC; to my knowledge its init doesn't respect these signals. Because of that, I have decided to wait until the lxc-attach functionality is stable. greetings Guido
Re: [Lxc-users] OUI
Looks like the cheap and easy-to-get OUI is 36 bits long, leaving only 12 bits for the user. Are 4096 possible unique MACs enough?

I appreciate the development of letting LXC assign a usable random MAC with an adequate prefix in the default case, because this will fit most users and use cases. But for instance, my LXC environment is designed to run its containers on arbitrary hosts. Therefore I rely on DHCP for the network setup (hostname, IP, route, nameservers, ...) of a container, and I want a systematic, DHCP-centralized, computable relation between the hostname, the IP, and the MAC used. Therefore I calculate the MAC in the LXC starter script from a 3-byte prefix and the last 3 bytes of the IP. Because we use a LAN based on 10.0.0.0/8, this is unique within it:

    HWADDR=`IP=${IP:3}; printf 00:50:C2:%02X:%02X:%02X ${IP//./ }`  # a.b.c.d -> 00:50:C2:bb:cc:dd (hex)

Shifting to a.b.c.d -> pp:pp:pp:pp:pu:dd, with any formula that melts b and c down into the low nibble of u, would not be enough. Of course, this is an individual situation. And I may change this from a formula to a request to the DHCPd, or even use a DHCP pool and a dynamic DNS environment. But others may have similar constraints. To me it doesn't sound very sustainable to be limited to just 4096 usable MACs. It means that you have to administrate them in some way, either by hand or with technical tools. Consider that one may systematically need more than one virtual network card per container. And with such a small range, if you choose a MAC at random out of 4096, it would be advisable to check by ARP that it's unused. Guido
Re: [Lxc-users] New LXC Creation Script: lxc-ubuntu-x
I think there is about 80% overlap between the two projects, but enough differences to be interesting. I'll take a closer look at your script looking for ideas I may have missed, and I invite you to do the same.

@Derek: well spoken. @Daniel, Serge: Is there already something like a wiki to collect such contributed work? I think there are many more people around here who have developed such tools around LXC: focused on their own requirements and conditions, and therefore not fit for publishing to the community, but useful for others to study and take ideas from for their own purposes. I also had to write my own lxc wrapper script to support my operating environment: to manage the arrangement and control of LXC containers running anywhere on a farm of identical LXC hosts. And like Ulli, I have written an lxc-free -- a substitute for free that reflects the memory values from the cgroup. Greetings Guido
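Guido's lxc-free itself isn't shown in the thread; as an illustration of the idea, here is a minimal sketch that reads a container's memory figures from the cgroup v1 memory controller. The /sys/fs/cgroup/memory/lxc/<name> layout is an assumption (it varies by distribution), and CGROOT is overridable only for testing:

```shell
#!/bin/bash
# Minimal "lxc-free"-style report: a container's memory usage and limit,
# read from the cgroup v1 memory controller files on the host.
lxc_free() {
  local cg="${CGROOT:-/sys/fs/cgroup/memory/lxc}/$1"
  local used limit
  used=$(cat "$cg/memory.usage_in_bytes")
  limit=$(cat "$cg/memory.limit_in_bytes")
  printf '%s: used %s kB of %s kB\n' "$1" $((used / 1024)) $((limit / 1024))
}
```

A fuller substitute for free would also read memory.stat for cache and swap figures, but the principle is the same: the cgroup, not /proc/meminfo, holds the per-container truth.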
[Lxc-users] ]Re: Bug with cgroup devices access rights!?
Problem solved. /dev/rtc is only used to read the time. To write the date and time, the settimeofday syscall is used. To prevent this, you have to drop the capability sys_time.

Dear sfrazt, good job! Could you figure out whether there are unwanted side effects when one drops the sys_time capability for a container, i.e. will something else be denied that one will probably need? @Dev: If not, this dropping should be added to the reference manual and the example configuration snippets. Greetings Guido
[Lxc-users] Is lxc-start thread-safe?
Hi all, is lxc-start thread-safe, i.e. may I start up different containers in parallel? Do I have to apply an individual value for 'lxc.rootfs.mount', e.g. by using the process id or 'mktemp'? Or something else on top? Thanks, Guido
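The mktemp idea from the question can be sketched like this: give each parallel start its own scratch mount point so concurrent invocations cannot collide on lxc.rootfs.mount (the path template is an invented example):

```shell
# One private mount point per start attempt; mktemp -d guarantees the
# paths differ even when the commands run concurrently.
mnt1=$(mktemp -d /tmp/lxc-rootfs.XXXXXX)
mnt2=$(mktemp -d /tmp/lxc-rootfs.XXXXXX)
echo "container A: lxc.rootfs.mount = $mnt1"
echo "container B: lxc.rootfs.mount = $mnt2"
```

Each generated path would then be written into (or passed to) the respective container's configuration before its lxc-start.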
Re: [Lxc-users] veth name
Is there a way to assign the veth name (visible from the host) to be the same each time the container boots? At the moment it is a random value like vethFFzyq2. Yes there is: it's in the man page, but it's not written in bold letters ;) man 5 lxc.conf I wonder why it is not on the project page: http://lxc.sourceforge.net/man/ Because it says at the footer: lxc man pages, generated manually from the lxc version 0.7.0. Notice the word manually ...
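The option in question from lxc.conf(5) is lxc.network.veth.pair; a minimal sketch (the interface name veth-web01 is a made-up example):

```
# Fix the host-side name of the veth pair instead of the random vethXXXXXX
lxc.network.type = veth
lxc.network.veth.pair = veth-web01
lxc.network.flags = up
```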
[Lxc-users] Howto detect the container's host
Hi all, something related to the Howto detect we're a LXC container question is: how to detect, from inside a container, the name (or something equivalent) of the machine we're hosted on? This might be of interest for administration-level scripts on setups like the one I'm going to use: a farm of identical hosts where I may start the prepared containers on any of them, because all of this stuff is using an nfs-rootfs. To be usable to run an isolated Linux instance, at startup the uts information of the container is cloned from the host and the so-called nodename (which is available as 'hostname' or 'uname -n' at user level) is replaced by the value of lxc.utsname from the config. Is there any possibility to let the container have access to the original nodename? Might it be made available in some procfs entry like /proc/lxc -- together with some other information? Another idea is to use, replace or append something to another uts field. Maybe the domainname field in the uts-struct copy for the container may be abused to hold the host's nodename. To my knowledge it is only used by NIS/YP and is separate from the dnsdomainname used for DNS name resolution. And it will probably be overwritten by NIS/YP startup, if used. Or maybe we may append something to the release field. It is filled with the kernel version string, like '2.6.37-gentoo'. Here we may append something to get version-lxc@host's nodename. To have another way in addition -- because on some init frameworks a passed environment isn't available -- we may add an analogous putenv() call to lxc_start.c's main to pass host=hostname to the container's environment. Greetings Guido
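The environment-passing idea can already be approximated from the host side, without patching lxc_start.c, by exporting the value in a wrapper before lxc-start. A sketch; the variable name LXC_HOST and the container name are invented examples, and this only helps with inits that keep the inherited environment:

```shell
# Wrapper-side sketch: export the host's nodename so that inits which keep
# the inherited environment expose it inside the container as $LXC_HOST.
LXC_HOST=$(uname -n)
export LXC_HOST
echo "starting container with LXC_HOST=$LXC_HOST"
# exec lxc-start -n mycontainer   # (not executed in this sketch)
```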
Re: [Lxc-users] Howto detect the container's host
Ulli> My lxc meta-script creates /lxc/hostname inside the container at startup. As a workaround my meta-script does something similar, to be able to re-start the appropriate containers in case of a panic, power failure or similar on the supporting host. But IMHO it is the concern of basic lxc, and not of your, my and other people's meta-scripts, to provide such things. Papp> I hope a container cannot identify its host. You mean that's a concern of security? Why shouldn't it? Security through obscurity is never a solution, as you know! Guido
Re: [Lxc-users] updated lxc template for debian squeeze - with attached script ;)
Hi, I have tried to find an RFC about this but have failed; instead, the only (serious/credible) documentation I could find was http://wiki.xen.org/xenwiki/XenNetworking#head-d5446face7e308f577e5aee1c72cf9d156903722 , so I updated the script accordingly; here is the updated patch. Again, Dear Jon, at the given link, right in the last sentence of the paragraph, you'll find: It's recommended to use a MAC address inside the range 00:16:3e:xx:xx:xx. This address range is reserved for use by Xen. You see, there's not only the reserved prefix 00:50:C2; there is at least one more official MAC space, 00:16:3e, reserved by Xen. Why don't you use this and follow my simple or advanced suggestions macaddr=$(echo -n 00:50:C2; hexdump -n 3 -v -e '/1 :%02X' /dev/urandom) macaddr=$(echo -n 00:50:C2; echo ${hostname:0:1}${hostname: -2} $(head -c 3 /dev/urandom) | hexdump -n 3 -v -e '/1 :%02X') Of course, the prefix may be replaced by any of these appropriate prefixes. Probably only Daniel will know if there is already a MAC space requested by the LXC project. Guido
Re: [Lxc-users] updated lxc template for debian squeeze - with attached script ;)
Dear John, - generate random mac address for the guest so it gets always the same lease from a dhcp server You suggest doing this with macaddr=$(echo -n 00; hexdump -n 5 -v -e '/1 :%02X' /dev/urandom) I think this is a little bit too random. The German Wikipedia at http://de.wikipedia.org/wiki/MAC-Adresse describes a reserved MAC range for private use (sorry, it's not in the corresponding English article; translated): [Besides the OUI there is also a small address block (IAB - Individual Address Block) intended for private individuals and small companies and organizations that do not need that many addresses. The address begins with 00-50-C2 and is followed by three further hex digits (12 bits) that are assigned per organization. This leaves the address range within bits 11 to 0 usable, so that 2^12 = 4096 individual addresses are possible.] Maybe we should respect this and use macaddr=$(echo -n 00:50:C2; hexdump -n 3 -v -e '/1 :%02X' /dev/urandom) for this. Another approach is to derive it from the designated name of the container (i.e. $hostname in terms of the script). Because there might be typical clustering naming schemes based on a name and some digits, I suggest selecting the first and the last two characters of the hostname (padded with random bytes in the unlikely case of a hostname shorter than 3 chars) echo -n 00:50:C2; echo ${hostname:0:1}${hostname: -2} $(head -c 3 /dev/urandom) | hexdump -n 3 -v -e '/1 :%02X' -> 00:50:C2:<first>:<next-to-last>:<last>, padded with random @Daniel: Because this would be of common use for all, it might be included into the lxc-conf parser [lxc.network.hwaddr: the interface mac address is dynamically allocated by default to the virtual interface ...] We maybe should have a special keyword for a derived, semi-static MAC that would not change at every startup of the container but may be calculated by the formula given above.
Guido
Re: [Lxc-users] LXC Container Boot/Shutdown errors
Hi, I was facing a similar problem with IPv6 with a 2.6.36 kernel. What's the similarity? The bug was corrected in 2.6.36-rc4. But maybe it's not the same? What's the kernel version? 2.6.37-gentoo