Re: [Lxc-users] Mailing-list move on Sunday 8th of December
On Wed, 2013-12-04 at 19:04 -0500, Stéphane Graber wrote:
> On Wed, Dec 04, 2013 at 06:57:20PM -0500, Stéphane Graber wrote:
> > Hello, You are receiving this e-mail because you are currently
> > subscribed to: lxc-de...@lists.sourceforge.net
> ^ I meant lxc-users@lists.sourceforge.net

Are Tamas and I going to be co-owners of the new lists as we were with
the old lists? If so, will we need new list ownership passwords (in
private if so, obviously)? I'm presuming that Daniel won't be an owner
(we became co-owners because he was so swamped) since he's even busier
than before. Since you're setting this up, I'm also presuming you'll be
the primary owner?

Regards,
Mike

> On this coming Sunday (8th of December), all LXC mailing-lists will be
> moved to a new home at: http://lists.linuxcontainers.org
>
> This is the last step of our migration off SourceForge. The new
> mailman server is hosted by myself and shared with a few other
> projects (on other domains). That new server has daily offsite backups
> and a redundant e-mail infrastructure on two continents, so I'm not
> expecting any more problems with our lists there than on SourceForge.
>
> On Sunday, I'll disable the list on SourceForge, do one last mbox
> export and load it onto the new server. From that point on, any e-mail
> reaching the old address will simply be rejected with an error
> indicating the new address (short of having found a way to redirect to
> the new address...). All of the list history and all subscriptions and
> settings will stay as they are, so once you have updated your mail
> filters and aliases everything should be back to normal.
>
> Sorry for the inconvenience, and looking forward to a SourceForge-free
> world!
>
> --
> Stéphane Graber
> Ubuntu developer
> http://www.ubuntu.com
--
Michael H. Warfield (AI4NB) | (770) 978-7061 | m...@wittsend.com
/\/\|=mhw=|\/\/             | (678) 463-0932 | http://www.wittsend.com/mhw/
NIC whois: MHW9             | An optimist believes we live in the best of all
PGP Key: 0x674627FF         | possible worlds. A pessimist is sure of it!
Re: [Lxc-users] Couple questions regarding IPSec and LXC containers
> ... the Openswan and kernel-level IPsec infrastructure.
>
> Robert Adams

I may experiment with this further as you now have my curiosity up.

Regards,
Mike
Re: [Lxc-users] Fwd: Fwd: LXC and sound in container -
Hey...

On Sun, 2013-11-17 at 11:44 -0500, brian mullan wrote:
> Attached is a v2.0 writeup of how I configured sound in my LXC
> containers. Please note that this document is based on further
> research into how PulseAudio can be configured and, because of what I
> learned, it is both much shorter and less complex to set up sound in
> LXC. See the attached LibreOffice .ODT file for the updated
> information.
>
> Brian Mullan

That's really interesting. Good job. I also have a personal use case
for this that, ironically, has nothing to do with LXC containers. I've
got a little backburner project at home of setting up a small audio
server using a Raspberry Pi that I can park around (outdoors during the
summer) and drive as a remote setup connected over WiFi. For some
reason, it never dawned on me to just remote the audio over the network
using PulseAudio. Nice.

Regards,
Mike
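For anyone wanting to reproduce the network-audio idea discussed above, the usual PulseAudio pieces are the native-protocol TCP module on the receiving machine and the PULSE_SERVER setting on the sending one. A minimal, untested sketch (the addresses and subnet are placeholders; Brian's .ODT writeup covers the real details):

```
# On the box with the speakers (e.g. the Raspberry Pi), in
# /etc/pulse/default.pa -- accept streams from the local subnet:
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1;192.168.1.0/24

# On the sending machine, point PulseAudio clients at the remote daemon:
#   export PULSE_SERVER=tcp:192.168.1.50
```

With auth-ip-acl there is no cookie exchange to set up, at the cost of trusting everything on that subnet.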
Re: [Lxc-users] Lxc-users Digest, Vol 47, Issue 13
On Fri, 2013-11-15 at 13:02 +0000, John wrote:
> On 15/11/13 12:21, brian mullan wrote:
> > John...
> > Thanks for your note also. I'd seen a very similar Bash script for
> > Arch Linux here: http://pastebin.com/zZEAk3Ny while researching all
> > of this.
> > Brian
> Ah-ha, yes, that pastebin is mine. That paste pre-dates systemd. I
> think the current implementation using the autodev hook is much
> cleaner. I used to have a separate script called make_sound_devices
> that was called on the host after boot to write devs to the containers
> that used ALSA. This was needed because the device nodes appeared to
> change on every boot.

Yes. This was a point hammered on multiple times by Greg K-H at Linux
Plumbers. Device nodes are potentially volatile over reboot and should
not be relied on. That's what devtmpfs and udev are intended to deal
with.

Before systemd, it was possible to write a container /dev from the
host. That may, hopefully, become possible again. I have patches in
front of Serge and Stéphane which should make this and a few other
facilities available to us once again by providing hooks from the
container /dev into a subdirectory of the host /dev at
/dev/.lxc/${CONTAINER} (actually a hashed unique name) with some
symlinks pointing into it from the container configuration areas. I'm
waiting to hear feedback on my latest proposal.

> The autodev hook does exactly the same thing but is automated
> per-container during container startup. Good find though :)

Yes, we want to be able to do this at container startup. Ultimately,
this should provide some semblance of udev capability from the host
into the container, but that's further off.

Regards,
Mike
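To make the autodev approach discussed in this thread concrete, here is roughly what such a setup looked like in the LXC of this era. This is a hedged configuration sketch, not a tested recipe: the container name, hook path, and ALSA minor numbers are placeholders (the minors in particular can vary per boot, which is exactly why this runs at container startup rather than once on the host):

```
# In the container config:
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/mycontainer/autodev.sh

# /var/lib/lxc/mycontainer/autodev.sh -- LXC runs this at startup with
# LXC_ROOTFS_MOUNT pointing at the container's mounted rootfs:
#
#   #!/bin/sh
#   cd "${LXC_ROOTFS_MOUNT}/dev"
#   mkdir -p snd
#   mknod -m 666 snd/timer     c 116 33
#   mknod -m 666 snd/controlC0 c 116 7    # check minors on host /dev/snd
#   mknod -m 666 snd/pcmC0D0p  c 116 16
```

Because the hook runs on every container start, it picks up whatever device numbers the host booted with, which is the problem the old host-side make_sound_devices script was papering over.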
Re: [Lxc-users] System destabilization
On Wed, 2013-11-06 at 12:41 -0500, Dustin Oprea wrote:
> I'm a newcomer to LXC. I'm aware of the security disclaimers behind
> using an LXC container (such as access to the same sysfs as the host),
> but is it also fair to say that it's just as likely for a rogue
> application inside a container to cause a kernel panic or some kind of
> disastrous segfault that will destabilize the host?
>
> Dustin Oprea

I don't really think the question is quantifiable or answerable in a
formal or definitive way, but I'll give you my arguments to the
contrary.

I haven't really run into a rogue application causing a kernel oops or
panic in years, and I've had plenty of experience diagnosing panics and
oopses in the past. Not to say it can't happen, but it does indicate a
kernel bug and, as such, a security issue in the kernel. The kernel is
supposed to protect itself from such rogue behavior. But there's always
something and, as a professional security researcher, I'm well aware of
that. As such, it's no MORE likely in a container than running on the
host, and it's entirely possible that the container namespace isolation
could convey some protection against a number of areas where such a
thing could arise.

If you're comparing it to things like shared proc, sysfs, or devtmpfs,
I do see those issues show up (systemd and devtmpfs being my primary
example and PITA), but I have never seen a rogue container application,
on its own, do much more than resource starvation (I've got a container
with a mysql process that occasionally sends my load average into lala
land).

So, my response would be no, it's not just as likely, for the simple
reason that kernel security bugs that would allow it are much less
likely than configuration collisions that allow conflicts over proc,
sysfs, or devtmpfs. Possible - yes. Likely - no. As likely - no.

Regards,
Mike
Re: [Lxc-users] lxc-centos/lxc-rhel?
On Mon, 2013-09-30 at 12:14 -0400, Dwight Engen wrote:
> On Sat, 28 Sep 2013 09:52:15 +0700, Fajar A. Nugraha wrote:
> > On Sat, Sep 28, 2013 at 3:03 AM, Michael H. Warfield wrote:
> > > On Fri, 2013-09-27 at 10:38 +0700, Fajar A. Nugraha wrote:
> > > > In particular, it solves the problem of mismatched rpmdb version
> > > > (i.e. when installing centos5 on latest ubuntu) by doing yum
> > > > install twice.
> > > I accomplished that with an rpm --rebuilddb shortly after
> > > installing the minimal packages.
> > Unfortunately, using JUST that didn't work the last time I tested
> > installing CentOS 5 from Ubuntu 12.04. So what I did was:
> > - move the rpmdb location to the correct place (Ubuntu puts this in
> >   $HOME/.rpmdb)
> > - try rpm --rebuilddb
> > - test with yum
> > - if yum still complains, then reinstall a new environment using
> >   yum/rpm from the temporary environment.
> Hi guys, just wanted to mention in case it helps: the way I solved the
> db version mismatch in the Oracle Linux template was to use
> db_dump | db_load.

Nice. I've been trying rpm --initdb followed by rpm --rebuilddb; that
seems to cover the vast majority of the cases, but I wasn't aware of
those commands. That might be another idea.

BTW... While I have your attention... The Oracle Linux template worked
well on Fedora. Will it work on other distros like Arch, Alt, or Suse
which don't have a (compatible) version of yum/rpm? That's one of my
major take-aways from Linux Plumbers: to tub-thump the message that
these templates need to be as distro agnostic as possible. I think I
now have the Fedora template in pretty good shape and will be posting a
patch for that in the next day or two (after reviewing your
suggestions).

Many thanks!

Regards,
Mike
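The two rpmdb-repair approaches traded in this thread can be sketched in shell. This is an untested sketch under my own naming (the function names and the rootfs path are mine, not from any template), and the db_dump/db_load loop glosses over details the Oracle template handles:

```shell
#!/bin/sh
# Sketch: repairing a container rpmdb whose Berkeley DB format does not
# match the host's rpm.

# Approach 1 (rpm --initdb / --rebuilddb, as tried for the Fedora
# template): wipe and rebuild the indexes under the container root.
rebuild_rpmdb() {
    rootfs="$1"    # e.g. /var/lib/lxc/centos5/rootfs
    rpm --root "$rootfs" --initdb
    rpm --root "$rootfs" --rebuilddb
}

# Approach 2 (db_dump | db_load, as Dwight describes for the Oracle
# Linux template): dump each index with the host's Berkeley DB tools
# and reload it, converting the on-disk format in the process.
convert_rpmdb() {
    rootfs="$1"
    rm -f "$rootfs"/var/lib/rpm/__db.*   # drop stale environment files
    for db in "$rootfs"/var/lib/rpm/*; do
        db_dump "$db" | db_load "$db.new" && mv "$db.new" "$db"
    done
}
```

Neither is guaranteed on its own; as Fajar notes above, on some host/target pairs you still end up reinstalling from a temporary environment.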
Re: [Lxc-users] lxc-centos/lxc-rhel?
On Mon, 2013-09-30 at 19:53 +0200, Tamas Papp wrote:
> On 09/30/2013 06:23 PM, Michael H. Warfield wrote:
> > The Oracle Linux template worked well on Fedora. Will it work on
> > other distros like Arch, Alt, or Suse which don't have a
> > (compatible) version of yum/rpm?
> It works on Ubuntu (12.04+ in my case).
> > That's one of my major take-aways from Linux Plumbers: to tub-thump
> > the message that these templates need to be as distro agnostic as
> > possible. I think I now have the Fedora template in pretty good
> > shape and will be posting a patch for that in the next day or two
> > (after reviewing your suggestions).
> I'm waiting for it very much; the fedora template doesn't work on
> Ubuntu at this moment...
>
> Cheers,
> tamas

Oh? I thought it did, but you had to have the rpm and yum packages
installed (you won't with the new one). I thought I had tested that out
on one of my Ubuntu systems.

Regards,
Mike
Re: [Lxc-users] lxc-centos/lxc-rhel?
On Fri, 2013-09-27 at 10:38 +0700, Fajar A. Nugraha wrote:
> On Fri, Sep 27, 2013 at 6:55 AM, Michael H. Warfield wrote:
> > On Thu, 2013-09-26 at 19:43 +0530, Shridhar Daithankar wrote:
> > > Would there be an official centos/rhel template bundled with lxc?
> > > If not, which one is the most preferred one?
> > It's on my list of things to do. Templates were a hot topic in a
> > couple of talks at Linux Plumbers last week in New Orleans. I seem
> > to have ended up handling the Fedora template at the moment and I
> > see the templates for RHEL/CentOS/SL as an offshoot of that effort.
> > I've got a number on my list. Any help is appreciated...
> This is what I modified from the official lxc-fedora template:
> https://github.com/fajarnugraha/lxc/tree/centos-template

Got it. It installed CentOS 5 & 6 quite nicely on my Fedora 19 system.
I need to give it a close look over, particularly with regard to
distro-independent installation (IOW, installing, say, CentOS on
OpenSuse, which was failing). This should go into the package, ASAP.

> In particular, it solves the problem of mismatched rpmdb version (i.e.
> when installing centos5 on latest ubuntu) by doing yum install twice.

I accomplished that with an rpm --rebuilddb shortly after installing
the minimal packages.

This is good. Much appreciated. I'll look the actual code over in
closer detail later. You said it was based on the official lxc-fedora
template. That template has been changing (by me). What version of LXC
were you working from?

> --
> Fajar

Regards,
Mike
Re: [Lxc-users] lxc-centos/lxc-rhel?
On Thu, 2013-09-26 at 19:43 +0530, Shridhar Daithankar wrote:
> Hi,
> Recently I had to set up containers on CentOS and discovered that
> there is no template for centos/rhel in the lxc source. Googling
> turned up quite a few templates, but I am not sure which one is
> preferred. I tried one which created the container alright, but on
> starting the container it killed the host X and didn't continue
> working afterwards, i.e. it was stopped. The host OS is CentOS as
> well. Would there be an official centos/rhel template bundled with
> lxc? If not, which one is the most preferred one?
>
> --
> Regards
> Shridhar

It's on my list of things to do. Templates were a hot topic in a couple
of talks at Linux Plumbers last week in New Orleans. I seem to have
ended up handling the Fedora template at the moment and I see the
templates for RHEL/CentOS/SL as an offshoot of that effort. I've got a
number on my list. Any help is appreciated...

Regards,
Mike
Re: [Lxc-users] [lxc-devel] Working LXC templates? EUREAKA! I think I've got it!
On Tue, 2013-09-17 at 10:26 -0700, Tony Su wrote:
> Regarding LXC and the use of Linux Bridge devices for configured
> networking: at least on openSUSE, LXC is not configured with any
> /etc/lxc/* by default, possibly because I also have libvirt configured
> to support LXC (although that is not working on my system). From what
> I've seen though, I cannot see why configuring networking should be in
> a general lxc configuration file, or even if it should exist. I see it
> as reasonable to configure within a template (why not point the build
> to a pre-existing virtual network complete with its own
> configuration?). In any case, as I noted before, pointing the
> container interface to a pre-existing virtual Linux Bridge device
> instead of a physical interface in the template should be nearly
> trivial. If you wanted to support bridge creation and configuration,
> that'd be a big project outside the scope of what I'm describing.
>
> yum and chroot. Hmmm. From what I can see, at the point you invoke
> yum, nothing has yet been downloaded, so any functionality you invoke
> must exist in the host OS. Despite running in the chroot environment,
> I can't see how yum can work unless it's also present in the host OS.
> Maybe if I test this on a distro that doesn't support yum natively
> I'll find different, but based purely on perusing the code I can't see
> how it would work otherwise.
>
> line 155:
>   chroot ${rootfs_path} yum --releasever=${release} -y install fedora-release

That line is in config_fedora, called at line 926 after the call to
install_fedora at line 920, where the run time install is downloaded
and run. Yum should be installed at the time in question, or something
earlier has failed. Flow of control definitely needs to be cleaned up.
Regards,
Mike

> Requirement for GPG keys: again, based at this point purely on
> perusing and not actually testing the code on an appropriately set up
> system, unless the repo is configured without keys or the retrieval
> utility is configured not to require key verification, I don't see how
> you've avoided this requirement, particularly since in this case
> you're using yum, which is a repo client utility. This is aside from
> verifying the downloaded image, whose integrity could be verified
> easily by invoking and comparing checksum values if desired. I'd
> recommend no verification, though, because it would add maintenance
> issues. A superior solution might be a file transfer transport that
> automatically does checksums (like torrent, but that opens up
> potential issues since some ISPs block torrent indiscriminately).
>
> Regards,
> Tony
>
> On Tue, Sep 17, 2013 at 6:34 AM, Michael H. Warfield wrote:
> > On Sun, 2013-09-15 at 16:17 -0700, Tony Su wrote:
> > > Hello Michael,
> > > First, a comment on the problems with systemd you describe. I have
> > > probably run into many of the things you itemized, but since my
> > > time is usually focused on something I'm trying to use LXC for,
> > > and not LXC itself, I usually just drop any further attempts and
> > > move on to find a workaround (e.g. consoles) or use a different
> > > technology (X server issue).
> > >
> > > Regarding many of the issues you describe, though, I wonder if
> > > they couldn't be addressed with stricter enforcement of using
> > > namespaces (and less often cgroups). I've read how namespaces are
> > > supposed to be an extremely powerful means of isolating processes,
> > > and yet I don't see any obvious indications it's being done
> > > consistently... by either prepending to standard process or
> > > service names (if the goal is to easily identify the namespace) or
> > > using a random string (if the goal is better security, so exploits
> > > can't anticipate commonly used namespaces). In fact, I think I see
> > > this namespace issue in various parts of the template you created.
> > > If I understand what is happening, there are numerous places where
> > > you create special nodes on the host OS instead of (a) using the
> > > existing host OS nodes but using namespaces to isolate container
> > > processes, or (b) creating nodes entirely within the container,
> > > which would make the container entirely portable but lose the
> > > benefit perhaps of the better ways nodes are created and mounted
> > > today (e.g. tmpfs in RAM).
> > >
> > > Diving more into your template code: I applaud your effort, it's
> > > significant and no minor effort. As of this moment, I've mainly
> > > been perusing what I might call the host OS container pre-install,
> > > the part which precedes the actual installation and relies on
> > > components running in the host OS only. This would be your script,
> > > approx. lines 0-410.
> > >
> > > 1. I like your method of identifying whether the OS is Fedora, and
> > >    additionally whether it is ARM or not.
> >
> > That was an effort working on my Raspberry Pis.
> >
> > > 2. It looks like you're configuring networking binding directly to
> > >    eth0. I would recommend instead supporting the use of Linux
> > >    Bridge devices
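The bridge-based networking Tony suggests is what LXC container configs of this era typically expressed with the lxc.network.* keys. A minimal, untested sketch (the bridge name is a placeholder; the bridge itself must already exist on the host, which is the "pre-existing virtual network" Tony describes):

```
# Container config fragment: attach a veth pair to a pre-existing host
# bridge instead of binding a physical interface like eth0 directly.
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
```

Creating and configuring the bridge (brctl, distro network scripts) stays on the host side, outside the template, exactly as Tony argues it should.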
Re: [Lxc-users] [lxc-devel] Working LXC templates? EUREAKA! I think I've got it!
Hey Serge,

On Fri, 2013-09-20 at 09:57 -0500, Serge Hallyn wrote:
> Hey Michael,
> tried this out on a saucy vm, and it looked good until it died with:
>
>   receiving incremental file list
>   fedora-release-19-2.noarch.rpm
>   sent 47 bytes  received 33329 bytes  9536.00 bytes/sec
>   total size is 32472  speedup is 0.97
>   warning: fedora-release-19-2.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID fb4b18e6: NOKEY
>   Preparing...                # [100%]
>   Updating / installing...
>      1:fedora-release-19-2    # [100%]
>   Loaded plugins: fastestmirror, langpacks
>   Error: Cannot retrieve metalink for repository: fedora/19/x86_64.
>   Please verify its path and try again
>   mount: mount point proc does not exist
>   chroot: failed to run command 'yum': No such file or directory
>   Build of Installation RTE failed. Temp directory not removed so it
>   can be investigated.
>   Fedora Run Time Environment setup failed
>   Failed to download 'fedora base'
>   failed to install fedora
>   lxc-create: container creation template for f1 failed
>   lxc-create: Error creating container f1
>
> Looks like unpacking didn't go right? <shrug>
>
> -serge

That looks like the first one I sent out. I caught a couple of typos
immediately after that and resent it with an oops. I'll double check it
though.

Regards,
Mike
Re: [Lxc-users] [lxc-devel] Working LXC templates? EUREAKA! I think I've got it!
is out of date, but that is beyond our control. The template I sent out
is also very preliminary and will be heavily revised before being
submitted as a patch (now, after the Linux Plumbers conference). It may
not even get into the 1.0.0 release at this point, as we're already
seeing our first alpha tags and pulls.

> I've made the two following modifications:
>
> 1. line 33 - added release=18
>    The comments in this script describe passing the release number as
>    an option to lxc-create, but that is not supported in openSUSE.
>    Despite being unable to pass it as a command line option, passing
>    it within the template with this line works. BTW - if no release
>    is specified, Fedora defaults to the earliest release in the repos
>    (which is 14) rather than the latest.
>
> 2. Line 153 - The template hardcodes the RELEASE_URL string, which is
>    created by appending a hardcoded string to the MIRROR_URL string,
>    but it appears that Fedora restructured their repos since this
>    template was created. Now, an /f/ has to be inserted into the
>    RELEASE_URL (the initial letter of the word fedora).

Fedora, after a certain release, changed their Packages directory to
include the first letter of the package as a subdirectory. So
fedora-release is in Packages/f/fedora-release-* and yum is in
Packages/y/yum-*. That's the Fedora convention now, as of a certain rev
of Fedora. There's an if check in there for it.

> Additional - Especially when connecting to repos of a distro
> different than the host OS, GPG authentication keys are not yet
> installed. I have been investigating whether it's possible to simply
> download them ahead of time and install them into the default key
> ring, or whether something more is required. If this approach is
> feasible, then this needs to be added early in the template script,
> but maybe a better method is for the user to be prompted for an
> interactive answer to confirm key installation.

Actually, this should now be largely resolved by using the LiveOS image
and minimizing the use of --nogpgcheck to yum, but I'll verify that.
There is a catch-22 in verifying the LiveOS image itself, but I'm not
sure I have a solution to that one.

> On Sat, Sep 14, 2013 at 3:35 PM, Tony Su wrote:
> > Cool. I'll block some significant time to look at what you built
> > over the next 3 days.
> > Tony

Later!

Regards,
Mike
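The repo-layout change Tony ran into (Fedora splitting Packages/ into per-initial-letter subdirectories) is easy to mishandle when building RELEASE_URL. A small sketch of the path construction (the function name is mine, not the template's; the real template uses an if check keyed on the release number):

```shell
#!/bin/sh
# Build the download path for a package under Fedora's newer repo
# layout, where Packages/ has one subdirectory per initial letter:
# Packages/f/fedora-release-*, Packages/y/yum-*, and so on.
pkg_path() {
    pkg="$1"
    first=$(printf '%s' "$pkg" | cut -c1)
    printf 'Packages/%s/%s' "$first" "$pkg"
}

pkg_path "fedora-release-19-2.noarch.rpm"
# -> Packages/f/fedora-release-19-2.noarch.rpm
```

Appending this to MIRROR_URL gives the full RELEASE_URL; older releases keep a flat Packages/ directory, hence the version check in the template.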
Re: [Lxc-users] [lxc-devel] Working LXC templates? EUREAKA! I think I've got it!
On Thu, 2013-09-12 at 15:23 -0400, Michael H. Warfield wrote:
> All - Especially Tony Su,
> A couple of people where I work thought you couldn't do what I was
> trying to do, that it was impossible. Oh well. Looks like they were
> wrong. :-P It may not be efficient, but it can be made to work.

Never fails, I swear, it never fails. Not minutes after I posted this
original message, one of my test runs blew an error that pointed out a
typo, which pointed out six other spots where I had environment
variables out of sync and, after fixing them, I realized I had missed
several update paths. And then there were the initialization checks I
missed. It would have worked fine building Fedora on Fedora, but that
wasn't my target.

Ok... So... I fixed all that. The attached template should actually
work if you're not on a Fedora host (I hope).

It does dawn on me that all this effort really is a one trick pony and,
once it has built the very first container, is no longer needed after
the first run. Reason? Each container built can be a run time
environment to build new containers. So... Before I submit an actual
patch back to the upstream project, I'll probably scope out the logic
to scan the cached containers for the most appropriate container, use
it for the RTE, and never use the cached install RTE again. This may
also be appropriate for other templates. Once you've bootstrapped the
template process, you may not need the bootstrap again. Just build and
cache your best shot and use it in the future.

I have tested this template and built containers from F14 through F19.
F15 failed, but I already know it would not have run as a container
anyway because of the horribly broken and incompatible systemd in that
distro. Prior to F14, some versions will build and some won't. That
shouldn't be a problem with Fedora on Fedora, and anything prior to F18
is past EOL anyway, so why should I care?

IAC... Please discard the previous template. Here is my current
template. And please test it. Thanks!
Regards, Mike

Way down below, in-line...

On Mon, 2013-09-09 at 07:28 -0400, Michael H. Warfield wrote: On Mon, 2013-09-09 at 08:58 +0200, Natanael Copa wrote: On Sun, 08 Sep 2013 20:33:16 -0400 Michael H. Warfield m...@wittsend.com wrote:

With all due respect...

On Sun, 2013-09-08 at 16:08 -0700, Tony Su wrote: After putting some thought into this, IMO LXC badly needs a universal tool with the following features - A single script should be used to run on any HostOS, creating any supported Container OS. Although this would make the script extremely large, IMO it would actually be easier to maintain in the long run.

Actually, no. From my experience (30+ years in software development), it would turn into a morass. The problem here is that the maintainer(s) would then need to understand how each and every distribution is installed and how it would be installed on each and every distribution. It would distill the worst of all the problems we have now in the templates into one great big dung pile. It would rapidly become unmaintainable. The "extremely large" is the red-letter warning that it will become unmaintainable as fewer and fewer people understand what this great big huge blob does.

I tend to agree with this. What I do think could be done is to use template APIs. Either by having a script library with helper functions (for things like: generate_mac_address etc.) or let the template scripts be plugins that must provide a set of pre-defined functions (e.g. install_distro, configure_distro, copy_configuration etc.) or maybe a combination of those two.

I like this idea, it's just that, in the short term, we have to get past this one gotcha. In the git-stage current Fedora template, the entire problem is embodied in the download_fedora function starting around line 201... The gotchas are three commands around line 272 after we've identified and downloaded the initial release rpm. We have this:

    mkdir -p $INSTALL_ROOT/var/lib/rpm
    rpm --root $INSTALL_ROOT --initdb
    rpm --root $INSTALL_ROOT -ivh ${INSTALL_ROOT}/${RELEASE_RPM}
    $YUM install $PKG_LIST

Ok... Those are running in the host's local run-time environment. Obviously, if the host does not have a (compatible) version of rpm and/or yum in the host local environment, you're screwed. That's the catch-22 situation and it's the same situation with zypper in the openSUSE template. That short space of code has to be recreated in a totally distro-agnostic manner so it runs on any distro to create our initial rootfs. After that, we can do whatever distro (Fedora) specific commands we want by chrooting into the target container first (including rebuilding the rpm database to the target version). That's only even needed if you don't already have a cached rootfs for that distro and version.
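To make the shape of that bootstrap easy to see without touching a real host, here is a sketch of those four commands wrapped in a function. The wrapper, the variable defaults, and the DRY_RUN guard are my own additions for illustration; only the four commands in the middle come from the template, and actually running them needs root and a compatible rpm/yum on the host:

```shell
#!/bin/sh
# Sketch of the host-side bootstrap from the Fedora template.
# Hypothetical defaults; DRY_RUN=1 prints each command instead of running it.
INSTALL_ROOT=${INSTALL_ROOT:-/var/cache/lxc/fedora/rootfs}
RELEASE_RPM=${RELEASE_RPM:-fedora-release-18-1.noarch.rpm}
YUM=${YUM:-"yum --installroot $INSTALL_ROOT -y --nogpgcheck"}
PKG_LIST=${PKG_LIST:-"yum initscripts passwd rsyslog vim-minimal openssh-server dhclient"}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

bootstrap_rootfs() {
    # the three rpm commands plus the yum install from the template
    run mkdir -p "$INSTALL_ROOT/var/lib/rpm"
    run rpm --root "$INSTALL_ROOT" --initdb
    run rpm --root "$INSTALL_ROOT" -ivh "$INSTALL_ROOT/$RELEASE_RPM"
    run $YUM install $PKG_LIST
}

bootstrap_rootfs
```

The catch-22 in the surrounding text is exactly the `else "$@"` branch: on a host without a compatible rpm/yum, those commands simply don't exist to be run.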
Re: [Lxc-users] [lxc-devel] Working LXC templates?
On Mon, 2013-09-09 at 08:58 +0200, Natanael Copa wrote: On Sun, 08 Sep 2013 20:33:16 -0400 Michael H. Warfield m...@wittsend.com wrote:

With all due respect...

On Sun, 2013-09-08 at 16:08 -0700, Tony Su wrote: After putting some thought into this, IMO LXC badly needs a universal tool with the following features - A single script should be used to run on any HostOS, creating any supported Container OS. Although this would make the script extremely large, IMO it would actually be easier to maintain in the long run.

Actually, no. From my experience (30+ years in software development), it would turn into a morass. The problem here is that the maintainer(s) would then need to understand how each and every distribution is installed and how it would be installed on each and every distribution. It would distill the worst of all the problems we have now in the templates into one great big dung pile. It would rapidly become unmaintainable. The "extremely large" is the red-letter warning that it will become unmaintainable as fewer and fewer people understand what this great big huge blob does.

I tend to agree with this. What I do think could be done is to use template APIs. Either by having a script library with helper functions (for things like: generate_mac_address etc.) or let the template scripts be plugins that must provide a set of pre-defined functions (e.g. install_distro, configure_distro, copy_configuration etc.) or maybe a combination of those two.

I like this idea, it's just that, in the short term, we have to get past this one gotcha. In the git-stage current Fedora template, the entire problem is embodied in the download_fedora function starting around line 201... The gotchas are three commands around line 272 after we've identified and downloaded the initial release rpm. We have this:

    mkdir -p $INSTALL_ROOT/var/lib/rpm
    rpm --root $INSTALL_ROOT --initdb
    rpm --root $INSTALL_ROOT -ivh ${INSTALL_ROOT}/${RELEASE_RPM}
    $YUM install $PKG_LIST

Ok...
Those are running in the host's local run-time environment. Obviously, if the host does not have a (compatible) version of rpm and/or yum in the host local environment, you're screwed. That's the catch-22 situation and it's the same situation with zypper in the openSUSE template. That short space of code has to be recreated in a totally distro-agnostic manner so it runs on any distro to create our initial rootfs. After that, we can do whatever distro (Fedora) specific commands we want by chrooting into the target container first (including rebuilding the rpm database to the target version). That's only even needed if you don't already have a cached rootfs for that distro and version.

I was SO close last week working on this while on vacation. Recent revs of Fedora have this downloadable LiveOS core that runs on the netinst CD and others. That's the 200MB - 300MB blob I was referring to. You just download it, mount it (requires squashfs) to a directory and then mount the ext3 image it contains on another directory. Then create and bind mount a few rw directories (etc var run), mount proc in the image, and bind mount your rootfs to run/install in the image. Then chroot into the image and voila, instant RTE. Yum and rpm are capable of installing other versions of Fedora, so I wouldn't (shouldn't) even need a version-specific instantiation of the RTE, just one that can install every version we might be interested in.

Except... It didn't work. Sigh... $#@$#@#@ Fedora came so close and then face-planted for what I wanted to do. Sure rpm is in there. But rpmdb is not. So, no --initdb and no --rebuilddb. They've got yum in there, but no yummain.py so yum hurls chunks immediately upon execution trying to import yummain. What the hell good does it do to have yum in there but no yummain?!?!
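For the record, the LiveOS dance described above looks roughly like the steps below. This is a sketch, not tested template code: all paths and the image layout (LiveOS/ext3fs.img inside the squashfs) are assumptions, it needs root and squashfs support to actually execute, and DRY_RUN just prints each step:

```shell
#!/bin/sh
# Sketch of building a chroot RTE from a Fedora LiveOS squashfs image.
# Hypothetical paths; DRY_RUN=1 prints the steps instead of executing them.
SQUASH_IMG=${SQUASH_IMG:-/var/cache/lxc/squashfs.img}  # the downloaded LiveOS blob
SQUASH_MNT=/mnt/squash   # where the squashfs gets mounted
RTE_MNT=/mnt/rte         # where the ext3 image inside it gets mounted
ROOTFS=${ROOTFS:-/var/cache/lxc/fedora/rootfs}
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

setup_rte() {
    run mkdir -p "$SQUASH_MNT" "$RTE_MNT"
    run mount -o loop -t squashfs "$SQUASH_IMG" "$SQUASH_MNT"
    run mount -o loop "$SQUASH_MNT/LiveOS/ext3fs.img" "$RTE_MNT"
    # rw scratch dirs the read-only image needs (hypothetical backing dir)
    for d in etc var run; do
        run mount --bind "/tmp/rte-rw/$d" "$RTE_MNT/$d"
    done
    run mount -t proc proc "$RTE_MNT/proc"
    # expose the target rootfs inside the RTE at run/install
    run mount --bind "$ROOTFS" "$RTE_MNT/run/install"
    run chroot "$RTE_MNT" /bin/sh   # instant RTE
}

setup_rte
```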
But, something, fixed up, like that from all the distros would be a perfect answer (albeit somewhat less than high performance thanks to that download, but it's a single download that could be cached). The only potential gotcha I see in there is requiring squashfs available for mounting the image. Anyone have heartburn over that?

That squashfs image has to be able to do an install or it would be useless on the netinst CD. I'm not sure if they still have anaconda on there or not but it has to be able to do a kickstart install, so all I need is a minimal.ks kickstart file to perform essentially an unattended install of a minimal build into the target rootfs. That's where I'm at now, trying to get that to work. But it's all that work just to replace those few lines of code above where you cannot perform a host-native install of the rootfs. That's all that's standing in the way and it's frustratingly close for the Fedora template.

We've got the following distro templates in the project, currently:

    lxc-alpine
    lxc-alt
    lxc-arch
    lxc-busybox (is this really a distro template
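For what it's worth, a minimal.ks for that kind of unattended minimal install might look something like the fragment below. This is a guess at the shape, not a tested file; the url line, release number, and partitioning would all need to match the actual target:

```
# minimal.ks -- hypothetical kickstart for an unattended minimal install
install
url --url=http://download.fedoraproject.org/pub/fedora/linux/releases/18/Fedora/x86_64/os/
lang en_US.UTF-8
keyboard us
timezone --utc UTC
rootpw --plaintext root
network --bootproto=dhcp
bootloader --location=none
zerombr
clearpart --all
part / --fstype=ext4 --size=2048
reboot

%packages --nobase
@core
%end
```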
Re: [Lxc-users] [lxc-devel] Working LXC templates?
On Mon, 2013-09-09 at 17:23 +0530, Shridhar Daithankar wrote: On Monday, September 09, 2013 07:28:43 AM Michael H. Warfield wrote:

In the git-stage current Fedora template, the entire problem is embodied in the download_fedora function starting around line 201... The gotchas are three commands around line 272 after we've identified and downloaded the initial release rpm. We have this:

    mkdir -p $INSTALL_ROOT/var/lib/rpm
    rpm --root $INSTALL_ROOT --initdb
    rpm --root $INSTALL_ROOT -ivh ${INSTALL_ROOT}/${RELEASE_RPM}
    $YUM install $PKG_LIST

Ok... Those are running in the host's local run-time environment. Obviously, if the host does not have a (compatible) version of rpm and/or yum in the host local environment, you're screwed. That's the catch-22 situation and it's the same situation with zypper in the openSUSE template. That short space of code has to be recreated in a totally distro-agnostic manner so it runs on any distro to create our initial rootfs. After that, we can do whatever distro (Fedora) specific commands we want by chrooting into the target container first (including rebuilding the rpm database to the target version). That's only even needed if you don't already have a cached rootfs for that distro and version.

Another approach could be to popularize the container downloads by the distros. If each distro could add a .tar.gz for a working container of a given release, one could just download and configure them, no? Then the lxc project upstream could just list those links or include them in a separate tool that just downloads and untars the same. That would completely sidestep the bootstrapping one-distro-on-another problem.

True, and that's been mentioned before in several different contexts. The problem is that we have to get the cooperation of ALL the distros we wish to support, which seems to be a bit of an intractable problem. In Fedora, it would require someone to take that on as a project and be the maintainer.
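The "separate tool that just downloads and untars" idea is easy to sketch. Everything here is hypothetical, including the URL scheme, since no distro actually publishes such tarballs today (which is exactly the intractable part); the function only prints the commands it would run:

```shell
#!/bin/sh
# Sketch of a download-and-untar container fetcher. The URL layout is
# invented for illustration -- no distro publishes tarballs like this.
BASE_URL=${BASE_URL:-http://containers.example.org}  # hypothetical mirror
LXC_PATH=${LXC_PATH:-/var/lib/lxc}

fetch_container() {
    distro=$1 release=$2 arch=$3 name=$4
    url="$BASE_URL/$distro/$release/rootfs-$arch.tar.gz"
    dest="$LXC_PATH/$name/rootfs"
    # dry-run: show what a real fetcher would do
    echo "would run: mkdir -p $dest"
    echo "would run: curl -fsSL $url | tar -xzf - -C $dest"
}

fetch_container fedora 18 x86_64 fed18
```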
It would also have to be architected into the build system so it would be automatically built when a release is cut. Since they already have the squashfs LiveOS system (which is 95% of what we need), I don't think it would be a major leap for them to add a parallel build to build an LXC LiveOS to live, say, in the same download directory. In fact, if they fixed some of the deficiencies in the LiveOS image, we could work directly from the squashfs image that's already there.

The problem is in getting someone interested (I'm not a Fedora maintainer) and getting them to do it. It would probably have to be filed as an enhancement request and go through the months-long vetting process at best. We might have a better shot (in this case) of filing a bug report in bugzilla for the busted components of the LiveOS image and getting them to fix it. Even there, though, it's likely impossible to get them to retroactively fix any of the previous images.

I agree it would work, I just don't think we can depend on getting everyone else to agree to it and implement it in their distros. Considering the direction Fedora has been taking in recent decisions (removing sendmail / any MTA as well as the syslog server from the base install) makes me seriously question if they would even care.

-- Regards Shridhar

Regards, Mike

-- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it!
Re: [Lxc-users] [lxc-devel] Working LXC templates?
where N is the number of authors (or lines) in a given file. To this end, I've done the initial baby steps towards architecting a new universal lxc-create template script that should have all the above features and likely within a week or so post an early alpha on my github for anyone to criticize/contribute/modify.

Right now, I'm proceeding down an avenue that may lead me to generating a template that should work everywhere. The effort is to create an intermediary RTE (Run Time Environment) based on the netinst (Network Install) CD images present in all Fedora releases and many of the others I have examined. I may even resort to using the RedHat / Fedora Anaconda kickstart system to create a minimal RTE rootfs and using that to create the container cache rootfs. The RTE could even be cached along with the individual rootfs for the containers. It's decidedly NOT efficient (about a 200MB-300MB download if you cannot install natively) but it should work (still working out the bugs and dependencies) and only has to be done once for all the containers of a given distro.

Is this applicable to other templates? Maybe yes, maybe no. Can others copy the technique of creating an intermediary RTE to facilitate the initial build of the rootfs? Hell yes (I hope so). But, it's up to the individual template maintainers. At this point, I'm not sure if Serge, Daniel, and Stéphane and others consider me a maintainer for the Fedora template or just a (very) annoying contributor (and agent provocateur) but I wouldn't be offended or annoyed either way.

If you think you can do this, I'm more than willing to test and critique. Maybe you're better at this than I am. Give it your best shot. If you can come up with a way to build the Fedora rootfs without recourse to rpm or yum or anaconda or the netinst CD (or prebuilt, non-distro-distributed tarballs), I would love to hear your thoughts.
And, no, just because you have rpm on openSUSE, it doesn't mean it will install Fedora, unfortunately. One of the things I'm concerned about is versioning in the rpm databases (on all rpm based distros) and whether even a distro RTE can build a true rootfs with the correct rpm database version. Sigh...

In the meantime, I would be interested to know if anyone is successfully doing what the title of this message thread describes... Creating a Container with a different GuestOS than the HostOS, and if you could either provide a link or description to where the template script can be found (plus, as Natanael Copa described, anything special and extraordinary you had to do).

I have done it manually down through the years for many distros on several distros under both OpenVZ and LXC with varying success. It's invariably a painful experience, often having to resort to the netinst CDs or other pre-created containers. It's interesting that even OpenVZ seems to have abandoned their original templating scheme and largely gone with a pile of pre-built template tarballs on their website. If we can avoid recourse to that situation (and it has already been suggested by more than one), we would be much better off. I don't want to go down the pre-built tarball road, myself.

It would be done a lot faster except that this is being done in my spare time. Now, if someone who deals with LXC a lot wanted to hire me to do this and maybe more... :)

Honestly (and no offense - EVERYONE STOP LAUGHING) if you think you can do it, have at it, knock yourself out. But I won't support it. In my mind, it will end up being one great big unsupportable hairball of functions and side effects and dependencies with too many fingers in the pie for each distro's piece of the pie. Too many years I've seen things like that turn into an exercise in autoerotism that is just not worth the frustration. If you want to take on the sole task of maintaining it for every distro - more power to you. I couldn't do it.
Who will maintain it? Keep it small and modularized into separate files with specific functionality with very tightly controlled interfaces to reduce interactions and side effects. That way, I can take responsibility for my files and you for yours. When I pass my pieces along, I hope I've commented things enough for someone else to take up the flame or flame me if I failed. If you can truly come up with a distro agnostic way of doing this without requiring significant new support from the (arbitrary) guest distros, my hat is off to you. I'd love to see it. Tony Apologies to anyone who feels I'm off base or too harsh here. Regards, Mike On Wed, Sep 4, 2013 at 10:52 AM, Natanael Copa nc...@alpinelinux.org wrote: On Wed, 04 Sep 2013 09:40:49 -0400 Michael H. Warfield m...@wittsend.com wrote: I do think it is an issue with the whole distribution agnostic template problem that may require some help from the distros or some innovative ideas of how we can bootstrap distros using distro agnostic tools (like stone knives
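The "separate files with tightly controlled interfaces" approach (the plugin idea Natanael floated earlier in the thread, with functions like install_distro and configure_distro) could look roughly like this. All the names, paths, and the contract itself are invented for illustration; the driver just sources one plugin per distro and checks that it defines the required functions:

```shell
#!/bin/sh
# Sketch of a per-distro template plugin contract (hypothetical names).
# Each plugin file must define: install_distro, configure_distro.
PLUGIN_DIR=${PLUGIN_DIR:-$(mktemp -d)}

# Write a toy "fedora" plugin so the sketch is self-contained.
cat > "$PLUGIN_DIR/fedora.sh" <<'EOF'
install_distro()   { echo "installing fedora $1 into $2"; }
configure_distro() { echo "configuring fedora rootfs $2"; }
EOF

run_template() {
    distro=$1 release=$2 rootfs=$3
    plugin="$PLUGIN_DIR/$distro.sh"
    [ -r "$plugin" ] || { echo "no plugin for $distro" >&2; return 1; }
    . "$plugin"
    # enforce the contract before doing anything
    for fn in install_distro configure_distro; do
        command -v "$fn" >/dev/null || { echo "$distro plugin lacks $fn" >&2; return 1; }
    done
    install_distro "$release" "$rootfs"
    configure_distro "$release" "$rootfs"
}

run_template fedora 18 /var/lib/lxc/fed18/rootfs
```

The point of the sketch is the modularity argument above: each plugin file can be owned by one maintainer, and the driver never needs to know how any particular distro installs itself.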
Re: [Lxc-users] Working LXC templates?
This issue really belongs on -devel since it's a template development issue that really impacts all the template writers.

On Tue, 2013-09-03 at 09:26 -0700, Tony Su wrote: Thx all for the replies, - lxc-version returns 0.8.0. Looking around, there might be a more current unstable, but AFAIK it's the most recently released stable.

This is not going to work. 0.8.0 will not support systemd in a container, which all recent supported versions of Fedora are going to require. 0.8.0 may be the most recently released stable FROM openSUSE but it is not the most recently released stable from LXC. The most recently released stable from LXC is 0.9.0 and even that doesn't have some of the necessary patches to the Fedora Template. Your best bet would be to build from the stage branch in git. You may need to wait until we release 1.0.0 and I'll take some of your thoughts into consideration for the Fedora template but I have no way to test them on openSUSE at this time. Once we release 1.0.0, you'll still have the problem with what openSUSE releases as their stable. We have no control over what they decide and do.

- I'd have to re-run to get the SSL error again but I think I've described its error accurately, no further explanation except that the identity of the remote server cannot be authenticated. This would lead me to guess that the server is not registered properly with a public CA (eg using a CA root that isn't in a bootstrap Fedora) so guessing that perhaps an option should be offered that allows overriding authentication? SSL encryption of course should still be implemented for security.

Well, I can give you an argument if the error was described accurately enough. I didn't see any site names I could test to see what the root CA is. Without that, I can't tell you why you're seeing that error. I understand fully what the error is (having my own private CA for private activities) but I can't determine the origin without knowing the source.
WAG - Wild Ass Guess

I suspect that their certs are properly signed by a CA particular to the Fedora Project and properly contained in the Fedora rpm root store, so it may really be yet another cross-distribution issue that depends on the distribution-peculiar packages and configurations. Since there's probably no need (I see no need) for the Fedora repositories to be registered properly with a public CA (and pay the extra expense), I would say the term properly is misused in this case. The rpms are all signed by the Fedora Project and their gpg key so integrity shouldn't be an issue unless someone intercepts the original rootfs build and provides a trojaned package with the gpg keys.

/WAG

Since this is an inter-distribution issue, I'm not sure what the proper solution would be (assuming my WAG is true) or what LXC can do to address it. I also don't know why Ubuntu / Debian is not experiencing this problem either. But, without some example names of specific sites exhibiting this problem (I don't run openSUSE) I have no way to investigate further. Yeah, we could probably add an option to the template to ignore the SSL check or to use an alternate root CA store (if we can avoid the catch-22) but it may be better to investigate a more generic, distribution-agnostic solution to these types of problems. I do think it is an issue with the whole distribution-agnostic template problem that may require some help from the distros or some innovative ideas of how we can bootstrap distros using distro-agnostic tools (like stone knives and bear skins style install of the rootfs using nothing more than tar, gzip, gpg, and curl or wget).

- The lxc-create issue is definitely there. At first, I encountered it using the openSUSE YaST LXC container applet, but then also when I invoked lxc-create from a console; the help verifies it supports few options and not these.
But, as I described, if the template requires parameters, it's also possible to simply provide them in the script instead of at runtime as command switches (though that might not be apparent at first to someone reading your script). Tony

Regards, Mike
Re: [Lxc-users] Routing additional public IP without exposing to host
On Thu, 2013-08-29 at 11:47 -0400, Robert Pendell wrote: Ok... so this might not even be possible so this will be theoretical speak only. I don't have a configuration at the moment as the progress I made before was wiped when I gave up before. I found out about some limitations from my host so I was wondering if this scheme was possible.

I think this is very possible. At first, I thought you were asking about vampire routing where machines share an IP address (or one machine is sitting on the path but acting as a vampire for an IP and MAC address) but you're really talking about two IP addresses on the same MAC address, sort of what cable/DSL modems do when you allocate a passthrough host while they maintain minimal admin access. Ok, so that, in and of itself, is actually pretty trivial. The splitting of the two IP addresses to two machines (virtual or otherwise) while sharing a common MAC address is what gets entertaining. In this case, I think you need to get really intimately up close and personal with ebtables. Specifically with MAC-level NAT in the brouting chain. I've never done this myself (but I have explored the possibilities for vampire routing) but I think that can provide you with the hooks that will do what you want to do.

Both IP1 and IP2 are on different subnets. Statically assigned by provider. Container1 will be a container that I want to expose to the world bypassing iptables. There is an additional issue here. The container's MAC address can't be leaked over the bridge. It must appear to be coming from the host. Reason is because switch security doesn't allow unauthorized MAC addresses to route. Host has IP1 on br0. Host routes IP2 to Container1 but it isn't assigned to the interface? (eg I don't want any services on the host to be able to bind to IP2 at all.) Container1 handles IP2 on virtual eth0. Container2 (and so forth) are NAT routed for testing. Can this be done at all? Any input will be extremely useful.
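To make the ebtables suggestion concrete, the MAC-level NAT might look roughly like the rules below. I have not tested this on real hardware; the IP is from the documentation range, the MACs are placeholders, and the script only prints the rules it would load rather than touching the bridge:

```shell
#!/bin/sh
# Sketch of ebtables MAC NAT for the scheme above: steer frames for IP2
# to the container's MAC on the way in, and hide the container's MAC
# behind the host's MAC on the way out. Untested illustration.
IP2=${IP2:-203.0.113.2}                  # the extra public IP (doc range)
CT_MAC=${CT_MAC:-00:16:3e:00:00:02}      # container veth MAC (placeholder)
HOST_MAC=${HOST_MAC:-00:16:3e:00:00:01}  # host NIC MAC (placeholder)
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

load_rules() {
    # inbound: rewrite the destination MAC so frames for IP2 reach the container
    run ebtables -t nat -A PREROUTING -p IPv4 --ip-dst "$IP2" \
        -j dnat --to-destination "$CT_MAC"
    # outbound: masquerade the container's MAC as the host's MAC
    run ebtables -t nat -A POSTROUTING -s "$CT_MAC" \
        -j snat --to-source "$HOST_MAC"
}

load_rules
```

Whether the switch-side security is satisfied by this depends on ARP handling too, which the sketch deliberately leaves out.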
Robert Pendell shi...@elite-systems.org A perfect world is one of chaos.

Regards, Mike
Re: [Lxc-users] Working LXC templates?
On Wed, 2013-08-28 at 10:31 -0700, Tony Su wrote: Was wondering if there is a source for working LXC templates for deploying non-native distros.

Ah... One of my major bugaboos. I keep pissing in people's corn flakes over making the template generic. This is really an issue for lxc-devel as opposed to lxc-users but that's another matter.

Although I'm working on openSUSE, it looks like the default provided templates are generic, for example the author of the fedora template is Daniel Lezcano and the ubuntu template is Serge Hallyn.

Daniel authored it and I've made a number of modifications / contributions. What version of lxc-tools are you running?

An example of current difficulties is that the Fedora repo system appears to have been restructured in a major way. I've partially made modifications to the fedora script (which was likely created sometime around fedora 14) by modifying the repo string to add an f letter to the path as follows: RELEASE_URL=$MIRROR_URL/Packages/f/fedora-release-$release-1.noarch.rpm

Oh? That's an area where I've already had my fingers in. Release ver, please? Things in git are likely to be working better there.

But after locating the package and downloading, a series of errors follow beginning with a pycurl error unable to verify the remote host using SSL.

That does NOT sound like something related to inserting anything in the path. That's related to an SSL cert not verifying to the FQDN of the originating site. That has nothing to do with the longer URL past the FQDN. Cut and paste the errors.

Although I can investigate how to disable the check, I thought I might first ask whether anyone knows of LXC scripts where you can be on a Host running one distro and run a different distro (preferably at least Fedora 18, but interested in others as well).

Within distro, F{X} to F{Y}, I have verified it working from F14 through F19 to F12 through F18. From Ubuntu to Fedora, it should be working.
It's not perfect and I'm not at all certain it would work from openSUSE or Arch or several others (but, if it doesn't, it never did) but that's a problem I keep bitching about and I'm not certain it has an answer. At least openSUSE has rpm and Ubuntu has febboot and yum support. But that's not the answer to the distro-agnostic templates.

BTW - If anyone else is following what I've done to this point on openSUSE, I also figured out how to pass the release parameter in the template since although the template describes passing that parameter from the command line, it's not supported by lxc-create on openSUSE.

It certainly works on Ubuntu and Fedora. You'll have to provide a few more details there since I don't know any reason why it shouldn't honor a -t fedora -- -R 18 option. What error do you get???

TIA, Tony

Regards, Mike
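On the -R question: the convention is that everything after a bare -- on the lxc-create command line is handed through to the template script untouched. A tiny self-contained illustration of that split (this is not lxc-create's actual parser, just the idea):

```shell
#!/bin/sh
# Illustration of the "--" split: lxc-create keeps the options before
# the "--" for itself and passes everything after it to the template.
split_args() {
    create_args="" template_args="" seen=0
    for a in "$@"; do
        if [ "$seen" = 1 ]; then
            template_args="$template_args $a"
        elif [ "$a" = "--" ]; then
            seen=1
        else
            create_args="$create_args $a"
        fi
    done
    echo "lxc-create gets:$create_args"
    echo "template gets:$template_args"
}

# e.g. lxc-create -n fed18 -t fedora -- -R 18
split_args -n fed18 -t fedora -- -R 18
```

So a template option like -R 18 never needs to be hard-coded in the script as long as the installed lxc-create honors the "--" convention, which is exactly what seems broken on the openSUSE build being discussed.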
Re: [Lxc-users] Linux Mint and cgroups on /sys/fs/cgroup or /dev/cgroup/cpu
On Mon, 2013-08-12 at 15:02 +0200, arjan wrote: Hi, This seems a list of lxc users only right? Anybody reading and replying here that actually knows about the nitty-gritty of lxc?

No, not quite. Some of us are devs. Originally, we only had the -devel list. Daniel created this list at my request years ago just so we could improve the signal-to-noise ratio for BOTH users and devs. The users are often not interested in the dev nitty gritty and the devs were starting to get overwhelmed and distracted by some of the simple user requests. So we split the lists. Some of us do follow both. Daniel gets pretty snowed under even for the devel list so he doesn't pay a lot of attention here. Serge pays a bit of attention and I haunt it when I can.

To report back on how I more or less solved this issue: I simply commented out the entry in /etc/bash.bashrc. Of course this is not the ideal approach, because the performance goes down as was shown in the link I mailed.

Sorry, I'm not a Linux Mint user. I do some Fedora, RHEL, CentOS, plus some Ubuntu and Debian. I'm a total neophyte wrt Mint and had not been following the thread.

Kind regards, Arjan Widlak. On 08/08/2013 12:15 AM, Arjan Widlak wrote: Hi, I'm using Linux Mint and have the same problem as described in this post: http://osdir.com/ml/lxc-chroot-linux-containers/2012-03/msg00130.html On Linux Mint this entry is in /etc/bash.bashrc:

    if [ $PS1 ] ; then
        mkdir -p -m 0700 /dev/cgroup/cpu/user/$$ > /dev/null 2>&1
        echo $$ > /dev/cgroup/cpu/user/$$/tasks
        echo 1 > /dev/cgroup/cpu/user/$$/notify_on_release
    fi

That's a very bad location for cgroups to be mounted. A decision was made some time ago, and not everyone agrees with it, but cgroups should be mounted on /sys/fs/cgroup. They created a mount point in the kernel sysfs (/sys) file system specifically for this. The reason for this entry in Linux Mint is probably this: http://www.webupd8.org/2010/11/alternative-to-200-lines-kernel-patch.html

Oh, that is very very old.
Even at 3 years old, however, you will notice they reference /sys/fs/cgroup. That is the standard now and has been for quite a while.

Of course I can mount cgroups on /sys/fs/cgroup. And after that I can start a container. However I get these errors on my terminal:

    bash: /dev/cgroup/cpu/user/3855/tasks: No such file or directory
    bash: /dev/cgroup/cpu/user/3855/notify_on_release: No such file or directory

What is the best approach here? The lxc-checkconfig script finds cgroups mounted when it's mounted on /dev/cgroup/cpu, but lxc-start does not.

What version of lxc? The latest version (0.9.0) certainly should as I'm using it and systemd on Fedora mandates where cgroups is going to be mounted. If you're still running on 0.7.5 or earlier, that's probably your real problem.

In linux-vserver the location of the cgroup mount can be configured. Can it with lxc?

Where cgroups are mounted is an external issue to LXC. Newer versions of LXC should adapt to where cgroups are mounted and whether they are mounted in a flat name space or a hierarchy (as established by libvirt and by systemd). The convention and the current gold standard is /sys/fs/cgroup in a hierarchy and you will need a recent version of the LXC utils.

I'm actually surprised linux-vserver is still around. I abandoned that project years ago after they broke IPv6 and went to OpenVZ for several years before migrating to LXC so I wasn't having to install out-of-date custom-patched kernels all the time. I can't even get the linux-vserver mailing list server to respond to my subscription or information requests, so I assumed it was just plain dead. I guess I was wrong.

Kind regards, Arjan Widlak.
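A quick way to see where (and whether) cgroups are mounted is to scan the mount table. The little parser below works on any mounts(5)-formatted text; it is shown against a canned sample so the sketch is self-contained, but on a live box you would feed it /proc/mounts:

```shell
#!/bin/sh
# List cgroup mount points from mounts(5)-formatted input.
# On a real system: list_cgroup_mounts < /proc/mounts
list_cgroup_mounts() {
    awk '$3 == "cgroup" || ($3 == "tmpfs" && $2 ~ /^\/sys\/fs\/cgroup/) { print $2, "(" $3 ")" }'
}

# Canned sample resembling a systemd-era hierarchy under /sys/fs/cgroup.
sample='tmpfs /sys/fs/cgroup tmpfs rw 0 0
cgroup /sys/fs/cgroup/cpu cgroup rw,cpu 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,memory 0 0
proc /proc proc rw 0 0'

printf '%s\n' "$sample" | list_cgroup_mounts
```

If the output shows the hierarchy rooted at /sys/fs/cgroup, recent LXC utilities should pick it up; an entry under /dev/cgroup instead is the legacy layout being discussed above.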
Met vriendelijke groet (With kind regards),

Arjan Widlak
United Knowledge, inhoud en techniek
Bilderdijkstraat 79N, 1053 KM Amsterdam
Visit our site at: http://www.unitedknowledge.nl
The rijkshuisstijl, also for tablet and iPhone: http://www.rijkshuisstijl.unitedknowledge.nl/
T +31 (0)20 737 1851 | F +31 (0)84 877 0399 | M +31 (0)6 2427 1444
bur...@unitedknowledge.nl | ar...@unitedknowledge.nl
We use WebGUI, the Open Source CMS: http://www.webgui.org/

Regards, Mike

-- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com
/\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/
NIC whois: MHW9 | An optimist believes we live in the best of all
PGP Key: 0x674627FF | possible worlds. A pessimist is sure of it!
Re: [Lxc-users] local subnet
On Sat, 2013-08-03 at 22:23 +0100, Bretton Woods wrote:

the answer is probably yes. is it possible to create a container without a network bridge that is on the same subnet as the host?

I believe that is what macvlan was supposed to be for, but I never had a good experience with it (an ongoing host-to-container issue that may or may not have been resolved in the kernel - I gave up long ago). I generally used bridging, one way or another.

In fact why do we always create a bridge and another subnet?

I don't understand this question. You have two parts which are orthogonal. Quite literally, the only differences between bridged mode, NAT mode, and routed mode are whether the host interface is a member of the bridge and your router/NAT configuration. If the host interface is a member of the common bridge, you are in fully bridged mode: you don't need another subnet, and your guests are part of the host's subnet. If it's not, you're generally (by default) assigning a private address to the bridge and using NAT (NAT mode) or (very rarely) assigning a global unicast IPv4 block to the bridge and using true routing for routed mode, with static routes on your host. The key to all three modes is that bridge, which acts as an internal etherswitch on the host (some literature even refers to it as a virtual LAN). So the "and another subnet" actually only applies to two of those three modes (and routed mode is so rare, I'm tempted to say it doesn't really count).

Also, if you really REALLY want to get bitching complex, you can use a hybrid mode with IPv4 and IPv6 where IPv4 is routed/NATed and IPv6 is bridged directly. Then your IPv4 networking is on separate subnets but your IPv6 routing is on a flat SLA (IPv6 subnet) and managed by the common router and its RAs (router advertisements). That requires creative use of MAC-level firewalling (ebtables) and is not recommended unless you're a real masochistic experimenter like I am.
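As a concrete illustration of fully bridged mode, the container side can be as simple as pointing the veth at the shared bridge (a sketch; br0 is a hypothetical bridge that already has the host's eth0 enslaved, using the pre-1.0 lxc.network.* syntax of the LXC versions discussed in these threads):

```
# /var/lib/lxc/guest1/config (excerpt)
lxc.network.type = veth
lxc.network.link = br0      # host bridge that also contains eth0
lxc.network.flags = up
# No extra subnet: the guest DHCPs or statically configures an address
# on the host's own subnet. NAT mode differs only in the bridge's
# membership (eth0 not enslaved) plus MASQUERADE rules on the host.
```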
bretton

Just one of those thoughts :)

Interesting thoughts, but you have other options. What you are referring to is merely the default.

Regards, Mike
Re: [Lxc-users] Issues in using lxc in Fedora 14
with an LXC 0.9.0 upgrade. I don't have your problem and can not possibly suggest what's wrong with your setup.

Regards, Mike
Re: [Lxc-users] Unable to create lxc CT on fedora 19
On Tue, 2013-07-09 at 20:25 -0700, jjs - mainphrame wrote:

I've installed the lxc-0.9 rpms and installed the attached fedora template. The container is created successfully and can be started, but without networking. The equivalent operation on my ubuntu 13.04 system produces a container with network connectivity.

Hmmm... Interesting. That's one area where my systems may be a bit different than most others. Since I have huge address space to play with (on IPv4 and IPv6), I generally set my networking up for pure bridging instead of NAT. That sounds like the default bridge may not be set up properly. I think the default uses virbr0 in the host, which is nominally handled by the libvirt stuff for things like dhcp.

I haven't been able to do an apples-to-apples comparison since I can't seem to install a debian CT on fedora nor a fedora CT on ubuntu.

Interesting. There may still be something else at play here then. Still getting the same chroot errors? I haven't been able to reproduce them at all.

Any hints as to where I should be looking?

1) Use lxc-console to log into the container and see if it got a network address for its networking.
2) Use brctl show in the host to display the bridge configuration and whether virbr0 shows any connected devices.
3) Check the address of the virbr0 interface in the host.
4) Check to see if libvirt and dhcpd are running.
5) Make sure that you have forwarding enabled in the host.

I'll test some of this out on another host of mine with NAT networking.

Joe

Regards, Mike

On Tue, Jul 9, 2013 at 11:18 AM, Michael H. Warfield m...@wittsend.com wrote: On Mon, 2013-07-08 at 12:54 -0400, Michael H. Warfield wrote: On Mon, 2013-07-08 at 08:51 -0700, jjs - mainphrame wrote:

Hi Michael, Yes, it is lxc-0.8 from the repos. I did notice that ubuntu shipped with lxc-0.9. You're right, f19 failed during the download phase. All the others failed at the end with the chroot error.

Good.
I just completed Ubuntu and Debian container installs without running into any errors (though the Ubuntu container install seemed to take FOREVER).

I'm working on a patch for the Fedora template and will submit it up to the devel list, hopefully later today (I have several things on my list). Once Serge and Stéphane accept that, it'll be in the git repo for the next release.

I have a new lxc-fedora template now. It's been tested running on Fedora 17 through 19 and has installed Fedora containers for 11, 12, 13, 14, 16, 17, 18, and 19. Fedora 15 container creation runs into a dependency on fedora-rawhide-release (WHY???) but is way past EOL so I really don't care. Containers built manually don't run, anyways, thanks to the version of systemd packaged with it. Fedora prior to 11 also has weird errors with the repos that it's not worth worrying about or trying to fix, they're so old. Those earlier ones were just built to test the template script anyways, and the legacy repos tend to be a bit flaky... Fedora 16 will build but will not run and, really, I expected that. It's a compatibility problem with, what else, the systemd version running in that container. It's also post EOL and not worth any effort (though I did try months ago - it's really a lost cause, it's so bad).

Thanks for the feedback, I now have a better sense of what's going on. If you'd care to share any of your packages or scripts, I'd be indebted.

I'll see what I can do. I've attached a new lxc-fedora template. Copy this to /usr/share/lxc/templates/lxc-fedora AFTER installing 0.9.0. I'll be posting a patch to -devel later today. There's more than just the fixes to the retry logic and the release download logic you need in there. It also incorporates other patches that are already in staging plus a few fixes for host version checking. You can try it out and see how it works for you.
Thomas Moschny posted a link to some prebuilt rpms here: http://thm.fedorapeople.org/lxc/ I haven't used them but I suspect they're not much different than the ones I build. You may need to manually remove the lxc-doc package from the stock repository as there's a conflict in the rpm builds and the 0.9.0
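The NAT-debugging checklist from earlier in this thread boils down to a handful of host-side commands (a sketch; guest1 is a hypothetical container name, and virbr0/libvirt/dnsmasq are the Fedora defaults discussed above):

```
lxc-console -n guest1               # 1) log in, then check the guest's address
brctl show                          # 2) does virbr0 exist, with a veth attached?
ip addr show virbr0                 # 3) host-side address of the bridge
pgrep dnsmasq                       # 4) libvirt's DHCP service running?
cat /proc/sys/net/ipv4/ip_forward   # 5) must be 1 for NAT/routed modes
```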
Re: [Lxc-users] Unable to create lxc CT on fedora 19
On Tue, 2013-07-09 at 10:02 -0400, Dwight Engen wrote: On Mon, 08 Jul 2013 10:05:49 -0400 Michael H. Warfield m...@wittsend.com wrote: On Sat, 2013-07-06 at 16:11 -0700, jjs - mainphrame wrote:

All, Noob question here. I've been testing lxc on ubuntu 13.04 and everything just works. However, all my attempts to create an lxc CT on fedora 19 have failed. The result is the same when attempting to create debian, ubuntu or fedora containers. What am I missing here?

As it so happens, I just upgraded one of my F18 workstations to F19 over the weekend and had not gotten around to testing this yet. So I just tested. Spotted an immediate and obvious problem before I even started and fixed that before even making the attempt.

1) What version of lxc are you running? The stock lxc rpms from the repos are 0.8.0.

Uh, oh. No, that's not gonna work at all. Version 0.8.0 is not compatible with the version of systemd that's shipped with F19 (or F18 for that matter). You need to upgrade lxc to at least 0.9.0 (current); that's been discussed in several other threads. There are some prebuilt rpms floating around, though I typically build my own since I've done some work on the binaries and the Fedora template. I would highly recommend installing from some prebuilt rpms (or building your own) rather than from a source compile and install.

Just wanted to point out that "make rpm" from the sources works quite well for the building-your-own case :)

As does "rpmbuild -ta lxc-0.9.0.tar.gz" directly on the tarball (which is generally what I do until I start making modifications). :-)

Now for testing...

2) The lxc-fedora template (even in 0.9.0) is busted, as I feared it would be, for Fedora 19, because the Fedora 19 release file is a -2 release and it's only looking for a -1 release. I saw that code a month ago and thought "that can't be right" but it hasn't busted until now. Nobody answered when I asked about that logic on the devel list back then, so I guess it's one more thing on my list to fix.
The whole retry logic in that template is wrong, IMNSHO.

3) The errors couldn't be the same because the template logic is different. Nowhere in the Fedora template do we do a "chroot .* mount.*proc". In fact, we don't even mount proc (which may be something else I should look into). That error on the "chroot ... mount -t proc" would never have shown up in a Fedora create (but you would have blown up on the release download).

4) After installing lxc-0.9.0 on my F19 system AND hacking the bloody lxc-fedora template for the release extension, I was able to successfully install an F19 container on an F19 host. I'll try Ubuntu and Debian next. AFAICT, almost none of the other templates have allowed for cross-distro container creation, which sucks. One more thing to work on. :-P

Regards, Mike

--- output follows ---

    [root@max ~]# lxc-create -n debian1 -t debian
    /usr/share/lxc/templates/lxc-debian is /usr/share/lxc/templates/lxc-debian
    debootstrap is /sbin/debootstrap
    Checking cache download in /var/cache/lxc/debian/rootfs-squeeze-amd64 ...
    Downloading debian minimal ...
    I: Retrieving Release
    W: Cannot check Release signature; keyring file not available /usr/share/keyrings/debian-archive-keyring.gpg
    I: Retrieving Packages
    I: Validating Packages
    I: Resolving dependencies of required packages...
    I: Resolving dependencies of base packages...
    I: Found additional required dependencies: insserv libbz2-1.0 libdb4.8 libslang2
    ... snipped ...
    I: Extracting mount...
    I: Extracting util-linux...
    I: Extracting liblzma2...
    I: Extracting xz-utils...
    I: Extracting zlib1g...
    W: Failure trying to run: chroot /var/cache/lxc/debian/partial-squeeze-amd64 mount -t proc proc /proc
    W: See /var/cache/lxc/debian/partial-squeeze-amd64/debootstrap/debootstrap.log for details
    Failed to download the rootfs, aborting.
    Failed to download 'debian base'
    failed to install debian
    lxc-create: failed to execute template 'debian'
    lxc-create: aborted
    [root@max ~]#

Regards, Mike
Re: [Lxc-users] Unable to create lxc CT on fedora 19
On Mon, 2013-07-08 at 12:54 -0400, Michael H. Warfield wrote: On Mon, 2013-07-08 at 08:51 -0700, jjs - mainphrame wrote:

Hi Michael, Yes, it is lxc-0.8 from the repos. I did notice that ubuntu shipped with lxc-0.9. You're right, f19 failed during the download phase. All the others failed at the end with the chroot error.

Good. I just completed Ubuntu and Debian container installs without running into any errors (though the Ubuntu container install seemed to take FOREVER).

I'm working on a patch for the Fedora template and will submit it up to the devel list, hopefully later today (I have several things on my list). Once Serge and Stéphane accept that, it'll be in the git repo for the next release.

I have a new lxc-fedora template now. It's been tested running on Fedora 17 through 19 and has installed Fedora containers for 11, 12, 13, 14, 16, 17, 18, and 19. Fedora 15 container creation runs into a dependency on fedora-rawhide-release (WHY???) but is way past EOL so I really don't care. Containers built manually don't run, anyways, thanks to the version of systemd packaged with it. Fedora prior to 11 also has weird errors with the repos that it's not worth worrying about or trying to fix, they're so old. Those earlier ones were just built to test the template script anyways, and the legacy repos tend to be a bit flaky... Fedora 16 will build but will not run and, really, I expected that. It's a compatibility problem with, what else, the systemd version running in that container. It's also post EOL and not worth any effort (though I did try months ago - it's really a lost cause, it's so bad).

Thanks for the feedback, I now have a better sense of what's going on. If you'd care to share any of your packages or scripts, I'd be indebted.

I'll see what I can do. I've attached a new lxc-fedora template. Copy this to /usr/share/lxc/templates/lxc-fedora AFTER installing 0.9.0. I'll be posting a patch to -devel later today.
There's more than just the fixes to the retry logic and the release download logic you need in there. It also incorporates other patches that are already in staging plus a few fixes for host version checking. You can try it out and see how it works for you.

Thomas Moschny posted a link to some prebuilt rpms here: http://thm.fedorapeople.org/lxc/ I haven't used them but I suspect they're not much different than the ones I build. You may need to manually remove the lxc-doc package from the stock repository as there's a conflict in the rpm builds and the 0.9.0 package is not obsoleting the Fedora 0.8.0 lxc-doc package.

Joe

Regards, Mike
Re: [Lxc-users] lxc-wait doesn't notice container shutdown
On Mon, 2013-07-08 at 18:27 +1000, Christoph Willing wrote: On 05/07/2013, at 9:53 PM, Serge Hallyn serge.hal...@ubuntu.com wrote: Quoting Christoph Willing (cwill...@users.sourceforge.net):

Since upgrading from lxc-0.7.5 to 0.9.0 I have a problem with lxc-wait. Previously, scripts containing an lxc-wait for the STOPPED state would continue as expected when the nominated container shut itself down, i.e. the script received the STOPPED state and lxc-wait exits. However with 0.9.0, lxc-wait doesn't seem to receive the STOPPED state when the container shuts itself down - the scripts just keep waiting. I can run lxc-stop manually, whereupon the waiting script then sees that the container gets the message and continues as before. On the other hand, the same scripts see the RUNNING state of a newly started container and continue execution as before. So although lxc-wait is working (receives states sent explicitly via lxc-start/stop), it no longer receives any indication from the container that it is shutting down. Is this new behaviour expected in 0.9.0?

No, it sounds unexpected. Would you be able to code the above into a little test script to reproduce? (something like

    sudo lxc-create -t ubuntu -n x1
    sudo lxc-start -n x1 -d
    sudo nohup lxc-wait -s STOPPED -n x1 > /tmp/outout 2>&1 &
    pid=$!
    sudo lxc-attach -n x1 -- poweroff
    tail /tmp/outout
    ps -p $pid && echo "lxc-wait still running - FAIL"
    ps -p $pid || echo "lxc-wait exited - PASS"

) Also please tell us which distro+release you're on and the exact package or upstream git version (there have been very recent changes...) Is lxc-wait a script or a program in yours?
(which lxc-wait; file `which lxc-wait`)

I found what fixed the problem but, for interest, the affected distro is Slackware current (the next release), using package lxc-0.9.0-x86_64-1, from which lxc-wait is a program:

    chris@current:~$ which lxc-wait; file `which lxc-wait`
    /usr/bin/lxc-wait
    /usr/bin/lxc-wait: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), stripped

Since there is no distributed lxc-slackware template, I've made my own which I install locally. I've been using the same basic template for about the last 3 Slackware releases and have had no problems. Part of the template's execution involves patching some of Slackware's default rc.d scripts for use in the container. For some reason, the patching removes the reboot/poweroff calls in rc.6 (the shutdown script). I don't recall now why that was done - whether it was mistaken or intentional - yet everything had all worked as expected until now. Anyway, assuming it was a mistake, changing my template to leave the reboot/poweroff calls intact has fixed the problem - lxc-wait receives the STOPPED state again.

Historically, removing the calls to reboot/poweroff was done, at the very least, to circumvent the read-only remounts of mounted file systems in the host. We've been plagued by that problem on and off again for some time and never really nailed it. I think the latest fixes for dealing with systemd based systems are ultimately incompatible with some of the host workarounds (bind mounts) which some of us were using to avoid that problem as well. There may have been other reasons, but that was, most emphatically, one of them.

chris

Regards, Mike
Re: [Lxc-users] Unable to create lxc CT on fedora 19
On Sat, 2013-07-06 at 16:11 -0700, jjs - mainphrame wrote:

All, Noob question here. I've been testing lxc on ubuntu 13.04 and everything just works. However, all my attempts to create an lxc CT on fedora 19 have failed. The result is the same when attempting to create debian, ubuntu or fedora containers. What am I missing here?

As it so happens, I just upgraded one of my F18 workstations to F19 over the weekend and had not gotten around to testing this yet. So I just tested. Spotted an immediate and obvious problem before I even started and fixed that before even making the attempt.

1) What version of lxc are you running? The stock lxc rpms from the repos are 0.8.0.

Uh, oh. No, that's not gonna work at all. Version 0.8.0 is not compatible with the version of systemd that's shipped with F19 (or F18 for that matter). You need to upgrade lxc to at least 0.9.0 (current); that's been discussed in several other threads. There are some prebuilt rpms floating around, though I typically build my own since I've done some work on the binaries and the Fedora template. I would highly recommend installing from some prebuilt rpms (or building your own) rather than from a source compile and install.

Now for testing...

2) The lxc-fedora template (even in 0.9.0) is busted, as I feared it would be, for Fedora 19, because the Fedora 19 release file is a -2 release and it's only looking for a -1 release. I saw that code a month ago and thought "that can't be right" but it hasn't busted until now. Nobody answered when I asked about that logic on the devel list back then, so I guess it's one more thing on my list to fix. The whole retry logic in that template is wrong, IMNSHO.

3) The errors couldn't be the same because the template logic is different. Nowhere in the Fedora template do we do a chroot .* mount.*proc. In fact, we don't even mount proc (which may be something else I should look into). That error on the chroot ...
mount -t proc would never have shown up in a Fedora create (but you would have blown up on the release download).

4) After installing lxc-0.9.0 on my F19 system AND hacking the bloody lxc-fedora template for the release extension, I was able to successfully install an F19 container on an F19 host. I'll try Ubuntu and Debian next. AFAICT, almost none of the other templates have allowed for cross-distro container creation, which sucks. One more thing to work on. :-P

Regards, Mike

--- output follows ---

    [root@max ~]# lxc-create -n debian1 -t debian
    /usr/share/lxc/templates/lxc-debian is /usr/share/lxc/templates/lxc-debian
    debootstrap is /sbin/debootstrap
    Checking cache download in /var/cache/lxc/debian/rootfs-squeeze-amd64 ...
    Downloading debian minimal ...
    I: Retrieving Release
    W: Cannot check Release signature; keyring file not available /usr/share/keyrings/debian-archive-keyring.gpg
    I: Retrieving Packages
    I: Validating Packages
    I: Resolving dependencies of required packages...
    I: Resolving dependencies of base packages...
    I: Found additional required dependencies: insserv libbz2-1.0 libdb4.8 libslang2
    ... snipped ...
    I: Extracting mount...
    I: Extracting util-linux...
    I: Extracting liblzma2...
    I: Extracting xz-utils...
    I: Extracting zlib1g...
    W: Failure trying to run: chroot /var/cache/lxc/debian/partial-squeeze-amd64 mount -t proc proc /proc
    W: See /var/cache/lxc/debian/partial-squeeze-amd64/debootstrap/debootstrap.log for details
    Failed to download the rootfs, aborting.
    Failed to download 'debian base'
    failed to install debian
    lxc-create: failed to execute template 'debian'
    lxc-create: aborted
    [root@max ~]#
Re: [Lxc-users] Unable to create lxc CT on fedora 19
On Mon, 2013-07-08 at 18:22 +0200, Thomas Moschny wrote: 2013/7/8 Michael H. Warfield m...@wittsend.com:

2) The lxc-fedora template (even in 0.9.0) is busted, as I feared it would be, for Fedora 19 because the Fedora 19 release file is a -2 release and it's only looking for a -1 release. I saw that code a month ago and thought "that can't be right" but it hasn't busted until now. Nobody answered when I asked about that logic on the devel list back then so I guess it's one more thing on my list to fix. The whole retry logic in that template is wrong, IMNSHO.

It is unclear to me why the fedora-release rpm is installed separately. Yum knows a --releasever option that could be used instead.

It's because of the catch-22 of trying to bootstrap on a foreign distro, and this is where most of the other templates fall flat on their faces (and Fedora, Debian, and Ubuntu do as well, just not as badly). You have to download a minimal file system assuming you only have distro-independent tools. Once that's built, you can use the distro-dependent tools (yum, apt, dpkg, zypper) in the chrooted container. But you can't use them till you have it set up, and you don't have them before. That makes for a challenge, for sure! In some cases, it may take a little help from the distro vendors there. I've just figured out some of what I need to do to get the SuSE template working on non-SuSE hosts. Seems that rpm is present on SuSE, so I should be able to eliminate zypper and use purely rpm to install the minimal system (I'm finding this out the hard way working on some SuSE zLinux s390 stuff). Debian and Fedora (and Red Hat based, et al.) have debootstrap and febootstrap to assist in cross-building this, but that's not universal. The template owners (and I'm just a contributor) really need to try and make their templates work in a more distro-independent manner. But that's why you will see these sorts of download peculiarities in some of the templates.
We're working around running on a distro different than what we're installing.

Independently, the retry logic is broken. It should fetch the mirrorlist only once and try mirrors on that list, from top to bottom.

Right (I'm working on that now), and download information to tell it what the correct release package is (pull down a directory of Packages/f/ and search for fedora-release-xx in this case). I'm working on it. That logic is broken for more than one reason and is going to need more than one loop.

1) It needs a couple of retries for network errors in getting the mirror list. That's one loop, and I'll probably just shorten the first loop and add a sleep for that one.

2) It then needs to process through the list as you say - that's the second loop. I'll probably make that a for loop over, say, the top 6 repo URLs (if you don't get it in 6, you've got bigger problems, so come back again...).

3) It needs to deal with multiple/variable/indeterminate release clicks. That should be handled by pulling the directory and finding the correct release package name from the directory.

Regards, Thomas

Regards, Mike
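A rough sketch of how those loops could fit together (hypothetical helper names; fetch_mirrorlist and fetch_release here are stand-ins that fake the network so the control flow is visible):

```shell
#!/bin/sh
# Stand-in for downloading the mirror list (loop 1 retries this).
fetch_mirrorlist() {
    printf '%s\n' \
        "http://mirror1.example/fedora" \
        "http://mirror2.example/fedora" \
        "http://mirror3.example/fedora"
}

# Stand-in for pulling the release package from one mirror; fakes a
# failure on the first mirror so the walk in loop 2 is exercised.
fetch_release() {
    [ "$1" != "http://mirror1.example/fedora" ]
}

download_release() {
    # Loop 1: retry the mirror-list fetch for transient network errors.
    tries=0
    while ! mirrors=$(fetch_mirrorlist); do
        tries=$((tries + 1))
        [ "$tries" -ge 3 ] && return 1
        sleep 1
    done
    # Loop 2: walk the top handful of mirrors; first success wins.
    for url in $(printf '%s\n' "$mirrors" | head -n 6); do
        # Step 3 would go here: list Packages/f/ on $url and grep out
        # the actual fedora-release-* name instead of assuming "-1".
        if fetch_release "$url"; then
            echo "$url"
            return 0
        fi
    done
    return 1
}

download_release
```

Running the sketch prints the first mirror that "worked" (the second one, given the faked failure above).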
Re: [Lxc-users] Rootfs as rw overlay on top of ro directory
On Tue, 2013-06-11 at 12:19 -0500, Rob Landley wrote: On 06/11/2013 04:07:27 AM, Ivan Vilata i Balaguer wrote:

Hi everyone, I'm doing some tests on containers having a union rootfs (using Aufs in Debian) consisting of a writable directory overlaid on top of a read-only mount coming from a Squashfs image file. The configuration described below seems to work pretty well with lxc 0.9.0.alpha3 and Linux 3.8.13-1 (on Debian Sid), at least when the writable directory is a plain one and not a mountpoint (see below).

Actually overlayfs is the one Linus told Al Viro to include: http://lkml.indiana.edu/hypermail/linux/kernel/1303.1/02476.html

Well... It's about time. We've been crying for an upstream unionfs / union mount / overlayfs for a couple of years now. They've been fiddling around over which one solves which problems and which fits the kernel semantics better. I saw some of the earlier discussions. Everyone has their sacred cows and their axes to grind, and nothing is perfect.

Might want to focus on the one that's actually going into the kernel, unless there's more recent developments I missed?

Has this one made it into 3.10? What I'm hearing is that the 3.10 release is going to be a mammoth cujo release from the size of the RCs and change sets. I'm going to have to go check that out. :-)=)

Rob

Regards, Mike
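For reference, the union-rootfs arrangement Ivan describes is assembled roughly like this (a sketch with hypothetical paths; aufs branch syntax as on Debian of that era, while overlayfs would take -o lowerdir=...,upperdir=... instead):

```
# read-only base layer from the squashfs image
mount -t squashfs /srv/base.squashfs /srv/base-ro
# writable per-container directory stacked on top via aufs
mount -t aufs -o br=/srv/guest1-rw=rw:/srv/base-ro=ro none /var/lib/lxc/guest1/rootfs
```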
Re: [Lxc-users] Sharing container rootfs
in a realistic test environment first. Thank you, Bogdan P.

Regards, Mike
Re: [Lxc-users] Sharing container rootfs
On Mon, 2013-06-10 at 08:48 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Fri, 2013-06-07 at 08:45 +, Purcareata Bogdan-B43198 wrote: ...

I used to do something similar a lot under the old linux-vservers project (now defunct for several years - the mailing list is now dead). They used a COW (Copy On Write) system to maintain a common READ ONLY root system and per-vserver modified layers of changes each server made while running. It was quite a nice feature. In theory, this is the idea of using a rootfs image with a unionfs rw layer on top of that for the running container. That way, you only have one copy of a binary on disk and only one copy of the shared executable code in memory, yet the containers all have unique modifiable root file systems. So it works in principle. Implementation can be another matter.

I think I recall having done this with OpenVZ (after linux-vserver's failure to keep up IPv6 support forced me over to OpenVZ) but that also would have been a long time ago. More recently (but still more than a year ago) I tried the same technique using unionfs with LXC, which failed horribly. Functionally, it should appear to be similar to a bind mount, but bind mounts are currently problematical with some of the hacks we've had to implement to work around systemd conventions. I haven't tried it in well over a year. I suppose I should try that again. Maybe it would work now...

This is (IIUC) what lxc-start-ephemeral is meant to do - and also what 'lxc-clone -B overlayfs -o containerbase -n containerA' is meant for, where containerbase is a canonical, directory-backed container which all other containers are based upon, and containerA becomes a usable container with an overlayfs or aufs write layer mounted over containerbase's readonly rootfs.

Oh you UC, all right. Now that's perfect. Maybe I misunderstood what ephemeral did.
I assumed that, after the container was stopped, all the ephemeral data would be lost (IOW a throw-away instantiation). If that's persistent across starts, that would be what I was doing. I haven't played with lxc-start-ephemeral or lxc-clone as yet. Sounds like I need to, and save myself some grief. That may even be the real answer to the OP's quest. Really sounds like lxc-clone with overlayfs is a match for what I had been doing.

It's how both docker and https://github.com/hallyn/lxc-snap provide incremental container image development. Nice. -serge

Regards, Mike
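A minimal sketch of that base-plus-overlay workflow with the lxc 0.9-era tools Serge mentions (run as root; the template choice and container names here are arbitrary, and some versions also want an explicit -s/--snapshot flag on lxc-clone):

```shell
# One canonical, directory-backed base container:
lxc-create -n containerbase -t fedora

# Clone it with an overlayfs write layer: containerbase's rootfs stays
# read-only and shared, while containerA gets its own copy-on-write delta:
lxc-clone -B overlayfs -o containerbase -n containerA

lxc-start -n containerA -d   # changes land in the overlay, not the base
```

Since only the delta is stored per clone, the base binaries exist once on disk (and once in page cache), which is exactly the linux-vserver-style sharing described above.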
Re: [Lxc-users] lxcbr0 MAC addr issue
operations are performed by brctl, based on some discussions I saw. Certainly shouldn't be a problem in a server hosting environment where the host MAC is used for the bridge MAC and unlikely to ever be removed from the bridge. AFAICT...

This should be fixed in the current releases and no longer a problem. -serge

Regards, Mike
Re: [Lxc-users] lxcbr0 MAC addr issue
On Wed, 2013-06-05 at 09:30 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Wed, 2013-06-05 at 07:40 -0500, Serge Hallyn wrote: Now my question, could not lxc (at boot) setup a fixed MAC addr for the host port? Yeah, given how bad this was for libvirt/qemu I'm surprised I've not seen this happen in lxc - but I haven't, and no one else has reported it.

Actually, this has come up on both the -devel list and here, the last time about a year and a half ago. On the -devel list... Subject: Set high byte of mac addresses for host veth devices to 0xfe. It had a patch and referenced an open bug associated with it: random veth device MAC addresses cause bridge problems - ID: 3411497 https://sourceforge.net/tracker/index.php?func=detail&aid=3411497&group_id=163076&atid=826303

Christian Seiler included a proposed patch with his original posting to the -devel list back in November of 2011, where he set the high order byte to FE for a private, locally administered MAC, and some discussion ensued. After a couple of bug fixes, it was Acked on 01/03/2012. It looks like it was applied. Right around line 3109 of src/lxc/conf.c:

static int setup_private_host_hw_addr(char *veth1)
...
	ifr.ifr_hwaddr.sa_data[0] = 0xfe;
	err = ioctl(sockfd, SIOCSIFHWADDR, &ifr);
	close(sockfd);

yes and it does this. The point is that lxcbr0 is not tied to any physical nic. So the first container you start, however high the macaddr is, lxcbr0 takes its mac. If the next container gets a lower macaddr, lxcbr0's macaddr drops. Right now I have:

lxcbr0     Link encap:Ethernet  HWaddr fe:02:72:77:79:ff
vethtdjU5K Link encap:Ethernet  HWaddr fe:02:72:77:79:ff

Oh, yeah... I'm always using bridged mode with the host ethernet address on the bridge, so that's never a problem. You're right. The NAT mode is problematical there because it's not anchored to a physical interface.
I think someone, at one point in those earlier threads, was suggesting that they were using a dummy interface or a dummy container as a placeholder to lock the interface address for that reason. I just happen to have enough IPv4 addresses, personally, that I bridge everything and never really need to NAT anything where I can avoid it. :-P Of course, in a server environment, where you are hosting a farm of virtual servers, you would almost always want global public addresses on the servers, I would imagine.

You're still going to have the problem that, if you shut down the container that the bridge is using for the address, the address is going to shift, static or not. The address of the bridge must be an address of an interface on the bridge or you are going to have routing problems. That was made clear in some discussion on some of the kernel mailing lists. How do you deal with that then? Do you designate a container that must never be shut down or the bridge hangs? You could load the dummy module and bridge a dummy interface to the bridge. That would guarantee a MAC address lower than the fe: private addresses of the bridge and would be cheaper than a dummy container. I've done that before, a long time ago.

(We should still close that old bug, however.) -serge

Regards, Mike
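As an illustration of why the dummy-interface trick works: the kernel bridge adopts the numerically lowest MAC among its attached ports, so any locally administered address below the fe:... veth range pins the bridge. A quick simulation of that selection rule (the port MACs are made up; lexicographic sort equals numeric order for same-length lowercase hex strings):

```shell
# Two container veth MACs (high byte 0xfe, as lxc assigns them) plus a
# dummy interface given a deliberately low, locally administered MAC:
port_macs="fe:02:72:77:79:ff
fe:01:16:21:9c:aa
02:00:00:00:00:01"

# The bridge takes the lowest attached MAC; emulate that choice with sort:
bridge_mac=$(printf '%s\n' "$port_macs" | sort | head -n 1)
echo "$bridge_mac"   # prints 02:00:00:00:00:01 - the dummy pins the bridge MAC
```

Because the dummy is never removed, starting and stopping containers can no longer drag the bridge MAC around.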
Re: [Lxc-users] lxcbr0 MAC addr issue
On Wed, 2013-06-05 at 15:17 +, Jäkel, Guido wrote: yes and it does this. The point is that lxcbr0 is not tied to any physical nic. So the first container you start, however high the macaddr is, lxcbr0 takes its mac. If the next container gets a lower macaddr, lxcbr0's macaddr drops. This lxcbr0 is special to Ubuntu, right? And if not to a physical NIC, to what is this bridge connected to on the host? Not to the best of my knowledge. It should be a simple bridge. What do you get for this command? brctl show A bridge doesn
Re: [Lxc-users] lxcbr0 MAC addr issue
Crap... Bumped the keyboard and this one got away from me prematurely.

On Wed, 2013-06-05 at 11:23 -0400, Michael H. Warfield wrote: On Wed, 2013-06-05 at 15:17 +, Jäkel, Guido wrote: yes and it does this. The point is that lxcbr0 is not tied to any physical nic. So the first container you start, however high the macaddr is, lxcbr0 takes its mac. If the next container gets a lower macaddr, lxcbr0's macaddr drops. This lxcbr0 is special to Ubuntu, right? And if not to a physical NIC, to what is this bridge connected to on the host? Not to the best of my knowledge. It should be a simple bridge. What do you get for this command? brctl show A bridge doesn

A bridge doesn't have to be attached to a device. A bridge is its own logical entity in the kernel to which you may attach devices. You cannot attach a bridge to something else. You can only attach something else to the bridge. There's a difference. In the case of a NATing configuration, you set up a bridge (name it whatever you want) and attach the containers to it. Then you use the NAT modules to route between the bridge and the external interface while NATing the addresses.

I use lxcbr0 on my Fedora hosts. It's just a bridge. I could see where Ubuntu might have some preconfigured setups for this purpose where I have to set them up by hand in Fedora. That's just a matter of the distro-specific support scripts.

Regards, Mike
Re: [Lxc-users] lxcbr0 MAC addr issue
On Wed, 2013-06-05 at 06:23 +, Hans Feldt wrote: It is a fact that the bridge takes the lowest MAC address from the attached ports for the host port. See for example http://backreference.org/2010/07/28/linux-bridge-mac-addresses-and-dynamic-ports/ Thus if a container is restarted, the host port can potentially change its MAC address and containers will have a stale ARP cache. This of course causes problems for container-host communication. Tested the workaround mentioned in the link but then I ran into problems with NetworkManager on a later Ubuntu version. Then I tried using a dummy container and reusing its MAC addr for the host port. Works but... Now my question, could not lxc (at boot) setup a fixed MAC addr for the host port?

There's a gotcha in there. You cannot set an arbitrary MAC address on a bridge. It can only be the MAC address of an attached interface. It has to do with how packets are routed down in the kernel and determining whether a packet is to be handled locally on the bridge or not. It also may have some ties into the spanning tree protocol logic (whether you are using STP or have it enabled or not). If you set it to a fixed MAC address of a container, you can't stop or reboot that container without losing that static assignment on the bridge.

A dummy container is one option, if you don't have a host hardware interface connected to the bridge. But you need one with a MAC address lower than any of the others. Another alternative is to use a dummy interface...

modprobe dummy
brctl addif lxcbr0 dummy0

The dummy0 interface doesn't even need to be up or have an IP address assigned to it. Since the container host-local addresses will all be private (fe:...) and the dummy0 interface will have something lower, you should be good to go.

Thanks, Hans

Regards, Mike
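The same dummy-interface trick can be done with iproute2 instead of brctl. A sketch assuming lxcbr0 already exists and you are root; the 02:... address is just an example of a locally administered MAC that sorts below the fe:... veth addresses:

```shell
modprobe dummy                                 # provides dummy0
ip link set dummy0 address 02:00:00:00:00:01   # low, locally administered MAC
ip link set dummy0 master lxcbr0               # attach it to the bridge
# dummy0 needs no IP address and need not even be up; from here on the
# bridge keeps the dummy's MAC no matter which containers come and go.
```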
Re: [Lxc-users] lxcbr0 MAC addr issue
On Wed, 2013-06-05 at 11:26 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): Crap... Bumped the keyboard and this one got away from me prematurely. On Wed, 2013-06-05 at 11:23 -0400, Michael H. Warfield wrote: On Wed, 2013-06-05 at 15:17 +, Jäkel, Guido wrote: yes and it does this. The point is that lxcbr0 is not tied to any physical nic. So the first container you start, however high the macaddr is, lxcbr0 takes its mac. If the next container gets a lower macaddr, lxcbr0's macaddr drops. This lxcbr0 is special to Ubuntu, right? And if not to a physical NIC, to what is this bridge connected to on the host? Not to the best of my knowledge. It should be a simple bridge. What do you get for this command? brctl show A bridge doesn A bridge doesn't have to be attach to a device. A bridge is its own logical entity in the kernel to which you may attach devices. You can not attach a bridge to something else. You can only attach something else to the bridge. There's a difference. In the case of a NATing configuration, you set up a bridge (name it whatever you want) and attach the containers to it. Then you use the NAT modules to route between the bridge and the external interface while NATing the addresses. I use lxcbr0 on my Fedora hosts. It's just a bridge. I could see where Ubuntu might have some preconfigured setups for this purpose where I have to set them up by hand in Fedora. That's just a matter of the distro specific support scripts. Right. And we *could* attach a dummy device with mac starting with something lower. BUT I just did some testing, and even as I watch lxcbr0's addr go down when starting a new container, my ssh to the container which had the higher macaddr doesn't hiccough. Hmmm... It would be interesting to see what you get from arp -a on the host and the container before and after that. 
It would also be interesting to see what happens if you ssh to a container with the higher address first and then bring up a container with a lower MAC address and see if it impacts the existing connection. It really all depends on how the host is managing the ARP table and, if the MAC address changes, how the bridge change impacts the ARP table. In the case of ssh'ing to a container from the host, the host would still have the correct ARP entry for the container, which would facilitate the delivery of the initial SYN. The container would have an incorrect entry but that should correct itself as soon as that new packet arrives connecting from the host, invalidating the container's ARP table entry, I would think. It should also correct the MAC table entries for the bridge (used by the STP) if it originates from the interface that changed.

It works the same way over an Ethernet switch externally. The moment a packet is sent with a new source MAC address, the switch (bridge) remembers what port (attachment) that MAC address was last seen on and enters it into its MAC table. What would be interesting to know is how the container sees it, since its ARP table entry for the host is then wrong if it initiates a packet first, after the change. It could be that the tap/bridge connection from container to host bridge would clear that up quickly.

Perhaps it'll be a problem when connected from an outside host (through port forwarding). In that case I'll happily do the dummy-interface hack. But it seems possible that it just isn't needed. (And since the iptables rule is --to-destination an ip address, I'm thinking it won't be.) Yeah, since the host's exposed IP and MAC are not changing, it shouldn't be, in the external case with NAT as you describe. The external systems will only be referencing the external IP address, for which the MAC doesn't change, and the ARP tables are perfectly coherent externally. So NAT w/ port forwarding to a container should be covered.
It's when the MAC address of the host interface changes (because it's attached to a bridge) that you get into trouble in the external case. But we have that case covered now with the fixes that have been in place since 0.8.0 (assuming the OP is on a recent enough version of the lxc tools). So both of those should be covered. That really just leaves the host <-> container case (container <-> container should be a trivial non-issue). Then I think it's an issue of which side of the bridge is issuing the first packet after the MAC address changes and what the bridge logic does about it. Seems like a corner case to me.

I also don't know what role STP has in this game. On my Fedora system, my lxcbr0 bridge has STP disabled but the virbr0 bridge (used by libvirt and set up by default) has it enabled, and I have no idea why. It's possible that these symptoms could vary depending on that setting. I've definitely heard it mentioned before. Oh, and this is likely to get really ugly with IPv6. With the host
Re: [Lxc-users] Routing issues
On Tue, 2013-06-04 at 11:21 +0100, Rory Campbell-Lange wrote: On 03/06/13, Serge Hallyn (serge.hal...@ubuntu.com) wrote: Quoting Rory Campbell-Lange (r...@campbell-lange.net): On 04/06/13, Papp Tamas (tom...@martos.bme.hu) wrote: What is the IP address of the container? The host is on aa.bb.cc.103 (a public net address) and the container is on aa.bb.cc.87. I can get from 87 to 103, but I can't ping the gateway from the container.

Hm, here's an idea. Lxc sets /proc/sys/net/ipv4/conf/$link/forwarding. Perhaps that isn't enough. You might echo 1 > /proc/sys/net/ipv4/conf/eth0/forwarding and > /proc/sys/net/ipv4/ip_forward. But, 1. what does 'route -n' in the container (and on the host) show? 2. when you ping the ip address of your router, what does traceroute (wireshark, whatever) on the host show?

Hi Serge, Thanks very much for your email. Going through the steps above showed me I had a firewall problem. Dropping the firewall allowed the container to hit the internet. Apologies for this beginner problem. I'd be grateful to know if anyone has some firewall (iptables) advice for allowing traffic to the container? I expect to run another firewall on the container itself.

That's probably your FORWARD chain there. Set that policy to ACCEPT and flush all the rules from the FORWARD chain like this:

iptables -P FORWARD ACCEPT
iptables -F FORWARD

The FORWARD chain is going to affect packets forwarded over the host's bridge to the containers. The INPUT and OUTPUT chains will affect the packets coming in and going out from the local host's OS interfaces. Depending on your distro, track down your persistent rule storage and make those changes permanent. Fedora prior to firewalld (here we go again), RedHat, and RH derivatives (CentOS et al) generally keep them in /etc/sysconfig/iptables unless you've also installed one of the sundry firewall toolkits. Ubuntu, I'm not so sure about.
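If a blanket ACCEPT policy is more open than you want, a narrower sketch that only opens forwarding for the container bridge (lxcbr0 is an assumption; substitute your bridge name):

```shell
# Let container-originated traffic out through the bridge:
iptables -A FORWARD -i lxcbr0 -j ACCEPT
# Let replies (and related traffic) back in to the containers:
iptables -A FORWARD -o lxcbr0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

Inbound connections to the containers would still need explicit rules (or DNAT port forwards), which keeps the host firewall as the outer layer while each container runs its own.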
Regards Rory -- Rory Campbell-Lange r...@campbell-lange.net
Re: [Lxc-users] Script to generate systemd unit file for containers
On Sun, 2013-06-02 at 11:18 +0530, Shridhar Daithankar wrote: Hi All, I looked around and didn't find a script that generates systemd unit files for containers. Here is the script. I hope people find it useful and it is included by default in lxc upstream.

That's an interesting suggestion that might be worth consideration over on the lxc-devel list, where we are currently considering autostart proposals. Not sure if you're doing this under Arch Linux or Fedora (or maybe CentOS) but we will need some suggestions for incorporating the proposed boot management into the systemd based hosts. The current proposal on the table is discussing having an autostart flag, boot delay, and group name contained in the container's configuration file. If a container is then transported from one host to another, the boot attributes are transported along with it.

It's also highly desirable to have a uniform autoboot convention such that you're not dependent on whether you're running systemd (CentOS, SL, Fedora, Arch) or upstart (Ubuntu and others) or the older sysvinit scripts. One highly desirable characteristic is to have the process more or less consistent regardless of the host platform, keeping it as platform agnostic as possible. The current Ubuntu convention of having links in /etc/lxc/boot/{container} to the container config in /var/lib/lxc/{container}/config may or may not end up being part of that proposal.

Under the proposed paradigm, we would not be using systemd unit files for autostarting the containers and would, rather, have a service that runs our startup process, which would then handle this. Doing this purely as systemd unit files would not be portable from, say, a Fedora host to an Ubuntu host. Even moving this from a Fedora host to another Fedora host (or Arch), we would also have to remove and rebuild unit files on each host, as they wouldn't have been automatically transported.
This also would be challenging to work with when prioritizing or ordering the boot order independent of the container name. Nice suggestion, however. I may play with it on some of my hosts (I run Fedora and I've been working on the Fedora lxc-create template). I don't think I would incorporate it into the container creation templates, however, because of the current discussions underway. This is one of the things we need to hash out before cutting 1.0.

Thanks

Thank you.

----------------------------------------------------------------------
#!/bin/bash

function usage() {
    echo "Usage: $0 <enable|disable> <container name>"
}

if [ $# -lt 2 ] ; then
    usage
    exit -1;
fi

export UNITFILE=/etc/systemd/system/multi-user.target.wants/$2_container.service

case $1 in
    enable)
        cat <<EOF > $UNITFILE
[Unit]
Description=Container $2 service

[Service]
Type=forking
ExecStart=/usr/bin/lxc-start -d -n $2
ExecStop=/usr/bin/lxc-stop -n $2
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
        ;;
    disable)
        rm $UNITFILE
        ;;
    *)
        usage
        ;;
esac
----------------------------------------------------------------------

--
Regards
Shridhar

Regards, Mike
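A hedged sketch of how the script above might be used; the filename lxc-unit.sh and the container name web1 are hypothetical, and systemctl daemon-reload is needed for systemd to notice a hand-written unit file:

```shell
./lxc-unit.sh enable web1        # writes web1_container.service into multi-user.target.wants
systemctl daemon-reload          # tell systemd to rescan unit files
systemctl start web1_container   # or just reboot; the unit is wanted by multi-user.target
./lxc-unit.sh disable web1       # removes the unit file again
```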
Re: [Lxc-users] frequent host machine kernel panic
On Sun, 2013-06-02 at 17:02 +0530, Kalyana sundaram wrote: lxc 0.8.0

I saw from your original post that you're running CentOS 6.2. WHY? CentOS is on a rolling release and you should be on 6.4. That tells me you are fairly out of date. What kernel version are you running? Seems like it's a kernel bug and there's always a possibility that it's been fixed. Have you tried updating to 6.4? What are the distributions and versions you are attempting to run in the containers?

How did you install lxc on CentOS? I don't see it in the CentOS repositories and 0.8.0 seems newer than I would expect, even if it were there. What were the instructions you followed to create your containers? The reason I'm asking is that the CentOS wiki page on creating lxc containers is referring to the libvirt lxc and not this lxc. If you followed the instructions on this page, then you are using the libvirt lxc, which is a different project and not this one! http://wiki.centos.org/HowTos/LXC-on-CentOS6 Yes, it's confusing. Not sure who's to blame for two projects with the same name for the same functionality.

There are some recent (from back in March) instructions out there that refer to setting up this lxc flavor on CentOS 6.2, which baffle the crap out of me - they didn't use or recommend the latest version of CentOS. They also recommended using the OpenVZ CentOS 5 templates. I'm not really sure I would follow those instructions. http://www.techrepublic.com/blog/opensource/how-to-create-lxc-system-containers-to-isolate-services/1299 CentOS 6 is still upstart based. You could probably get away with 0.7.5, which is what I would have expected in the repos. We haven't created an lxc-centos template (yet) for lxc-create but I've been considering it, based on the lxc-fedora template (the differences are rather ugly). The procedures for creating the container are significantly different between those two sets of instructions (the CentOS wiki and the set at TechRepublic).
I have used the OpenVZ CentOS 6 templates to create LXC containers but I'm running on a Fedora 17 host with very up-to-date kernels. At the very least, you should update those host systems to the latest CentOS release and kernels in order to get any help analyzing what the problem is.

On Sun, Jun 2, 2013 at 12:02 PM, Papp Tamas tom...@martos.bme.hu wrote: On 06/02/2013 06:06 AM, Kalyana sundaram wrote: No actually kernel panics happen in more than one host. Please keep the mailing list in the address list and don't top-post. What version do you use? tamas -- Kalyanasundaram http://blogs.eskratch.com/ https://github.com/kalyanceg/

Regards, Mike
[Lxc-users] Anyone with some prebuilt templates laying around they could share?
A recent thread on lxc-users reminded me that I'm lacking a few templates for testing under LXC. OpenVZ has a number of prebuilt containers (like CentOS) for download, which some people (myself included) have used to create containers where we don't have prebuilt templates or the lxc-templates we have do not function cross-distro.

For example... On a Fedora host, I can create an Ubuntu container, thanks to debootstrap. On an Ubuntu host, I can create a Fedora container using febootstrap. I don't see an equivalent for Arch on either. I can tell you that trying to create an Arch container on a Fedora host will fail miserably using lxc-create. That template seems to be strictly like-on-like, which is sad. There is no Arch template in OpenVZ to even give me a start, so I'm out of luck there. How do we bootstrap an Arch system with pacman on a system which does not have pacman? Anyone with an Arch container prebuilt template they could share?

Regards, Mike
Re: [Lxc-users] list admin
On Fri, 2013-05-24 at 08:31 +0200, Tamas Papp wrote: On 05/24/2013 02:06 AM, Daniel Lezcano wrote: Yes, sure. Sorry, I have been more and more busy with other stuff and flooded by emails, so I did not follow the discussion closely. Can you explain in a few words what you need? I'd like to be sure there is no SPOF in the listadmin position :) Personally I want to kick off the invalid email addresses, like Mike said. Also I think it would be a good idea to change the Reply-To: header to the list address.

There are, in my experience, always arguments for and against setting the Reply-To: header as you suggest. However, I was just reminded by a bounce catch over on the -devel list that the bounce catcher does work pretty well in the majority of cases when bounces go back to the list, and the bad addresses get automagically unsubscribed. So that idea of changing the Reply-To gets my endorsement. I would just wait a short while for anyone else to chime in with an opinion and then make it so.

+1 Cheers, tamas
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
On Sat, 2013-05-18 at 12:09 -0700, jjs - mainphrame wrote: Interesting. I didn't realize how spoiled I am and how easy I have it with lxc on ubuntu!

Don't get too, too comfortable. I don't know if Ubuntu is transitioning to systemd yet or not (or planning to, or creating a new alternative) but this was a teaser from a talk delivered by my friend and co-guru Marc Torres (formerly of Suse fame) to the Atlanta Unix Users Group a couple of months ago...

-- Marc will be discussing some of the myths, uses and advantages of this replacement of the Linux sysvinit package. Despite the best efforts of many (and a never ending supply of mailing list flames, holy water, silver bullets, wooden stakes, poisons and evil looks) it is coming soon to a Linux Distro near you. --

You may have to deal with the likes of systemd sooner or later (or some other variant in similar concept). This is not something peculiar to Fedora (OpenSuse and Arch Linux both have systemd, as will RHEL, CentOS, Scientific Linux, etc...). By the time it gets to Ubuntu, though, I suspect it will be much more stable. Seems like Fedora adopted it early (very much like Ubuntu with Unity) and Fedora has sort of been the proving ground for systemd (that's what Fedora is for). Personally, I'm in the holy water and wooden stakes category (and I have at least one flame war on the systemd list to back me up) with regards to systemd, but it is what it is. We've been dealing with it as it evolves.

It's worth noting that I've decided that I will never get Fedora 15 or Fedora 16 running in a container (they will work as a host, however) due to incompatible versions of systemd. But F15 and F16 are EOL and thus not worth any effort or stress. They (systemd) obviously found solutions for the havoc they caused in those versions and we have managed to get F17 and F18 working with the recent versions of systemd. It's not Fedora, it's systemd that creates these problems. OTOH... Unity caused me to totally abandon Ubuntu.
It totally broke freenx and the NX support and, try as I might, I could not get it to work after trying for 2 or 3 releases of Ubuntu. Even though the freenx package was still in the repos, several of the libraries upon which it depended were gone or deprecated. I tried all the manual patches and source compiles and everything either failed or created irreconcilable conflicts. If I can't use NX to get graphical desktops in a container or on a remote host system, Ubuntu is useless to me, and I had to rip it out everywhere I had it. Even the laptops for my grandkids (which were running Ubuntu) are now running Fedora for that reason (so grandpa can have a remote desktop on their laptop and do remote graphical administration). Maybe they have since fixed it, but they already lost me on that one point alone.

I do need to get some containers (at least one) running Ubuntu just so I can run Oregano (an electronics schematic CAD program with simulation capabilities) which I cannot get to work on Fedora 18 (Fedora 19 might work) due to some gtk3+ library issues. So maybe I'll revisit Ubuntu again soon. I have successfully built an Ubuntu container on a Fedora host. That's one of the nice things, that I can cross-build Ubuntu on a Fedora host. I think (but I haven't tried) the reverse is true. If not, I may have to fix that in the Fedora template. That's not true of many of the other templates. I can't build an Arch Linux container on a Fedora host. That's something the container owners have to address.

We've gotten it (systemd) working and functional (maybe not totally pretty) and now it's more a matter of getting the updated revs into the downstream releases to make it just as easy. That's the way it works. All the template maintainers (and I'm just a contributor) are shooting at moving targets. What's in the git repo right now for the Fedora template is pretty sweet (if I do say so myself) and will build Fedora containers out of the box. Which reminds me.
The dudes that do the Fedora Remix for the Raspberry Pi just renamed their distribution to Pidora (shades of Kubuntu, Edubuntu and the other remixes of Ubuntu). That's going to break the lxc-fedora template on the Raspberry Pi (yet again - pounding head on desk). Right now, AFAICT, the ARM support for Ubuntu is only for ARMv7 and up, which leaves out the RPi (ARMv6). Oh well. It never ends. Joe

Regards, Mike

On Sat, May 18, 2013 at 11:19 AM, Michael H. Warfield m...@wittsend.com wrote: On Sat, 2013-05-18 at 19:41 +0530, Ajith Adapa wrote: Hmm, sounds like one more roadblock for using lxc in fedora 17 because of systemd. It's not a roadblock. More like a mile long stretch of stingers (stop spike strips / tire deflators). We're getting there. It's just one more unnecessary puzzle to solve. Sigh... Currently there is no place where there is a guide for starting up with LXC for the latest fedora versions. I think a page in fedoraproject
Re: [Lxc-users] list admin
On Thu, 2013-05-23 at 22:33 +0200, Tamas Papp wrote: hi Guys, Please, someone! Who is the owner/admin of this list?

Last I knew, it was Daniel who created the list and most likely owns it. It was created at my suggestion to move user questions off the devel list ages and ages ago. Lately he's typically been so buried that he doesn't show up much even on the -devel list, other than to cut a release, much less this list. You're probably complaining about that bad E-Mail address that's been on the list for like forever. It's been mentioned before. It's a mailman list so we just need someone with owner privs and the password.

Thanks, tamas

Regards, Mike

-- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it!
Re: [Lxc-users] list admin
On Fri, 2013-05-24 at 02:06 +0200, Daniel Lezcano wrote: On 05/23/2013 11:44 PM, Tamas Papp wrote: On 05/23/2013 11:13 PM, Michael H. Warfield wrote: Last I knew, it was Daniel who created the list and most likely owns it. It was created at my suggestion to move user questions off the devel list ages and ages ago. Lately he's typically been so buried that he doesn't show up much even on the -devel list, other than to cut a release, much less this list. You're probably complaining about that bad E-Mail address that's been on the list for like forever. It's been mentioned before. It's a mailman list so we just need someone with owner privs and the password.

I already offered my service on the list a couple of months ago, but no answer was received:( I would be glad to kick off that address:) Also I think replies should be addressed to the list, though whose decision it is. Daniel, can you help in this?

Yes, sure. Sorry, I have been more and more busy with other stuff and flooded by emails, so I did not follow the discussion closely.

Hasn't been much discussion. Just a lot of random complaints about one particular E-mail address on the -users list which has not been valid and has been bouncing for ages.

Can you explain in a few words what you need?

If you can nuke off msklizman...@ebuddy.com, that would be great. Longer term, you might want to share list ownership or moderation with one of us, like Tamas, so we can handle some of the administrivia for you when you are up to your eyeballs in fecal matter. :-P I've managed numerous mailman, ezmlm (yuck), and majordomo lists down through the years, but Tamas has also volunteered to help out.

Thanks -- Daniel

Regards, Mike
Re: [Lxc-users] Is there a way to get the container name from inside the container?
On Mon, 2013-05-20 at 21:30 -0700, Masood Mortazavi wrote: I seem to recall that the host name is the same as the container name. Try to print/view the host name while in the container. You can also configure your .bash_profile to ensure the host name (i.e. the container name in this case) appears on your bash command prompt.

By default it is, but that's not required. If you create it with lxc-create, then the container name, the lxc.utsname in the config and the hostname of the system will be the same. After the container has been created, these can be changed independently. The question has also been raised (by me) over on the -devel list of setting the utsname and the host name to an FQDN (Fully Qualified Domain Name) based on the container name and the domain name of the host, or by an independent template creation option (--fqdn), since it's generally desirable for those two to be an FQDN while keeping the container name a simple name. I would not depend on the host name matching the container name unless this is a site convention you enforce locally.

Regards, Mike

Good luck, m.

On Thu, May 16, 2013 at 2:08 PM, Vallevand, Mark K mark.vallev...@unisys.com wrote: Is there a way to get the container name from inside the container? Regards. Mark K Vallevand mark.vallev...@unisys.com May you live in interesting times, may you come to the attention of important people and may all your wishes come true.
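Mike's point above (container name, lxc.utsname and the guest's hostname start out equal after lxc-create but can diverge later) can be checked mechanically. A minimal sketch, assuming the usual /var/lib/lxc/&lt;name&gt;/config layout; the config contents and the TEST name below are fabricated for illustration:

```shell
# Fabricated stand-in for /var/lib/lxc/TEST/config (hypothetical values).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
lxc.utsname = TEST
lxc.tty = 4
EOF

# The name lxc-start will set as the guest's utsname, per the config:
utsname=$(sed -n 's/^lxc\.utsname *= *//p' "$cfg")
echo "$utsname"    # → TEST

# Inside the guest you would compare this against `hostname`; a mismatch
# just means one of the two was changed after lxc-create ran.
rm -f "$cfg"
```

Treat a match as convention, not contract: nothing in lxc forces the container name, the config's utsname and the guest's hostname to stay in sync after creation.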
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
On Sun, 2013-05-19 at 09:26 +0530, Shridhar Daithankar wrote: On Saturday, May 18, 2013 04:06:30 PM Michael H. Warfield wrote: On Sun, 2013-05-19 at 00:02 +0530, Shridhar Daithankar wrote: I ran into the same lxc-console issue and managed to solve it by commenting out ConditionPathExists line in /etc/systemd/system/getty.target.wants/getty\@tty1.service first I had to start the container in foreground, login, comment out aforementioned line and restart it in daemon mode. After that lxc-console worked. If I changed the systemd unit from the chroot itself, it didn't reflect somehow. Also the line referred to /dev/tty0 but only /dev/tty and /dev/tty1 were available in the container. I don't understand systemd enough to comment on the reasons of it working or not-working. I just experimented around to get a working tty. Oh, I understand why it's working and that's VERY interesting. I hadn't dug into that realm of the systemd files as yet. They're using the mere existence of /dev/tty0 as a logic switch to determine if they are going to start a getty on %I (/dev/tty1 in this case). Which is yet another case of them burying a magic cookie switch in the logic. They conditionalize it on a hard coded /dev/tty0 but then start a getty /sbin/agetty --noclear %I 38400. the %I is the unit name. Cute. Cheeky bastards. So, it basically means that if we create a tty0 under autodev, that should then enable any listed vty consoles we set up using systemd links. That could be a key to making that work without modifying files in the container but would require a (relatively minor but non trivial) change to the lxc source code, as opposed to merely a template. I'll run this up the flagpole on the -devel list and with Serge and get their opinions. This is doable. I think I'll try this out in my sources first and then post a request for comments over on -devel. 
The problem is in the routine setup_tty in conf.c that's going to need some massaging to make it 0 based instead of 1 based and adjust for the additional tty count. :-/

digging a bit deeper in this and quoting from http://www.freedesktop.org/software/systemd/man/systemd-getty-generator.html and http://0pointer.de/blog/projects/serial-console.html It looks like systemd is generating that unit file on demand during boot (that's why editing it before starting the container did not work for me) and it is looking into /sys/class/tty/console/active which indeed reports tty0 inside a container. So the real question is how can lxc/kernel make the container report tty1 there so that systemd would work out of the box. or.. as you have proposed, lxc could expose tty0.

Or, after reading that first document, another idea that comes to mind. This is one that could be implemented in the lxc-fedora template. The idea is based on the section near the bottom of that document titled Serial Terminals, which mentions However, sometimes there's the need to manually configure a serial getty... That can be adapted and used with getty@.service like this...

cp /lib/systemd/system/getty@.service /etc/systemd/system

Now edit /etc/systemd/system/getty@.service and delete or comment out the ConditionPathExists=/dev/tty0 line. Finally...

cd /etc/systemd/system/getty.target.wants/
ln -sf ../getty@.service getty@tty1.service
ln -sf ../getty@.service getty@tty2.service
ln -sf ../getty@.service getty@tty3.service
ln -sf ../getty@.service getty@tty4.service

... for the number of vty logins you want, up to the number of ttys you have specified in lxc.tty. That overrides the getty@.service in /lib/systemd/system/. Tested out with a Fedora 17 container I have running here now and it works like a charm. You do end up with multiple agetty processes running that way. I can then use lxc-console to connect to those consoles. This would seem to be a cheap and dirty way to get it to work.
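The recipe above can be scripted. The sketch below stages it in a scratch directory standing in for the container's rootfs (and fabricates a minimal getty@.service, since the real one lives in the container's /lib/systemd/system), so the mechanics can be checked without touching a live guest; paths assume Fedora-17-era systemd:

```shell
# Scratch stand-in for the container's rootfs so nothing real is touched.
root=$(mktemp -d)
mkdir -p "$root/etc/systemd/system/getty.target.wants"

# 1. Place a copy of getty@.service in /etc/systemd/system so it masks the
#    stock unit (a minimal fabricated copy here), then drop the
#    ConditionPathExists=/dev/tty0 guard that suppresses gettys in containers.
cat > "$root/etc/systemd/system/getty@.service" <<'EOF'
[Unit]
ConditionPathExists=/dev/tty0
[Service]
ExecStart=-/sbin/agetty --noclear %I 38400
EOF
sed -i '/^ConditionPathExists=/d' "$root/etc/systemd/system/getty@.service"

# 2. One symlink per vty you want a login on, up to the lxc.tty count
#    (4 assumed here).
for n in 1 2 3 4; do
    ln -sf ../getty@.service \
        "$root/etc/systemd/system/getty.target.wants/getty@tty$n.service"
done

ls "$root/etc/systemd/system/getty.target.wants"
```

Against a real container you would run the same steps with root= pointing at the container's rootfs (or from inside the guest with root= empty) and then restart it.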
Looks like we could try with some virtual console devices they mention in that document as well but that recurses back to needing to modify lxc-start and the autodev logic. -- Regards Shridhar
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
] ip_rcv+0x24c/0x370 [1281973.370052] [c088d5db] __netif_receive_skb+0x5bb/0x740 [1281973.370052] [c088d8ce] netif_receive_skb+0x2e/0x90 [1281973.370052] [f7c36a49] virtnet_poll+0x449/0x6a0 [virtio_net] [1281973.370052] [c044d6aa] ? run_timer_softirq+0x1a/0x210 [1281973.370052] [c088decd] net_rx_action+0x11d/0x1f0 [1281973.370052] [c044695b] __do_softirq+0xab/0x1c0 [1281973.370052] [c04468b0] ? local_bh_enable_ip+0x90/0x90 [1281973.370052] IRQ [1281973.370052] [c0446bdd] ? irq_exit+0x9d/0xb0 [1281973.370052] [c04258ee] ? smp_apic_timer_interrupt+0x5e/0x90 [1281973.370052] [c097a440] ? apic_timer_interrupt+0x34/0x3c [1281973.370052] [c044007b] ? console_start+0xb/0x20 [1281973.370052] [c0979bbf] ? _raw_spin_unlock_irqrestore+0xf/0x20 [1281973.370052] [c07918d6] ? ata_scsi_queuecmd+0x96/0x250 [1281973.370052] [c076ad18] ? scsi_dispatch_cmd+0xb8/0x260 [1281973.370052] [c066007b] ? queue_store_random+0x4b/0x70 [1281973.370052] [c07711b3] ? scsi_request_fn+0x2c3/0x4b0 [1281973.370052] [c042f2b7] ? kvm_clock_read+0x17/0x20 [1281973.370052] [c0409448] ? sched_clock+0x8/0x10 [1281973.370052] [c065cace] ? __blk_run_queue+0x2e/0x40 [1281973.370052] [c066214a] ? blk_execute_rq_nowait+0x6a/0xd0 [1281973.370052] [c066221d] ? blk_execute_rq+0x6d/0xe0 [1281973.370052] [c06620b0] ? __raw_spin_unlock_irq+0x10/0x10 [1281973.370052] [c0446ba7] ? irq_exit+0x67/0xb0 [1281973.370052] [c04258ee] ? smp_apic_timer_interrupt+0x5e/0x90 [1281973.370052] [c097a440] ? apic_timer_interrupt+0x34/0x3c [1281973.370052] [c076ffa0] ? scsi_execute+0xb0/0x140 [1281973.370052] [c0771429] ? scsi_execute_req+0x89/0x100 [1281973.370052] [c077f3d5] ? sr_check_events+0xb5/0x2e0 [1281973.370052] [c07a64cd] ? cdrom_check_events+0x1d/0x40 [1281973.370052] [c077f856] ? sr_block_check_events+0x16/0x20 [1281973.370052] [c06663c5] ? disk_check_events+0x45/0xf0 [1281973.370052] [c0666485] ? disk_events_workfn+0x15/0x20 [1281973.370052] [c045788e] ? process_one_work+0x12e/0x3d0 [1281973.370052] [c097a440] ? 
apic_timer_interrupt+0x34/0x3c [1281973.370052] [c0459939] ? worker_thread+0x119/0x3b0 [1281973.370052] [c0459820] ? flush_delayed_work+0x50/0x50 [1281973.370052] [c045e2a4] ? kthread+0x94/0xa0 [1281973.370052] [c0980ef7] ? ret_from_kernel_thread+0x1b/0x28 [1281973.370052] [c045e210] ? kthread_create_on_node+0xc0/0xc0 [1281973.370052] Code: 5d c3 8d b4 26 00 00 00 00 89 02 c3 90 8d 74 26 00 81 fa ff ff 03 00 89 d1 77 2e 81 fa 00 00 01 00 76 0e 81 e2 ff ff 00 00 66 ef c3 90 8d 74 26 00 55 ba 2c 5a b2 c0 89 e5 89 c8 e8 01 ff ff ff [1281991.139165] ata2: lost interrupt (Status 0x58) [1281991.148055] ata2: drained 12 bytes to clear DRQ [1281991.165039] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen [1281991.172924] sr 1:0:0:0: CDB: [1281991.172932] Get event status notification: 4a 01 00 00 10 00 00 00 08 00 [1281991.497342] ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in [1281991.497342] res 40/00:02:00:04:00/00:00:00:00:00/a0 Emask 0x4 (timeout) [1281991.523767] ata2.00: status: { DRDY } [1281991.616161] ata2: soft resetting link [1281998.232648] ata2.01: qc timeout (cmd 0xec) [1281998.238559] ata2.01: failed to IDENTIFY (I/O error, err_mask=0x4) [1281998.247432] ata2: soft resetting link [1281998.575468] ata2.01: NODEV after polling detection [1281998.698009] ata2.00: configured for MWDMA2 [1281998.714460] ata2: EH complete Not sure what the deal is with that ATA error. That's a hard drive lost interrupt problem. Looks to be on your CD Rom drive? Looks like it recovered. 3. Last but not least after sometime my host kernel crashed as a result need to restart the VPC. I don't understand what you are saying here. You're saying your kernel crashed but I don't understand the as a result of... What did you do, why did you do it, and what happened? Regards, Ajith Regards, Mike On Thu, May 16, 2013 at 8:09 PM, Ajith Adapa ajith.ad...@gmail.com wrote: Thanks @thomas and @michael. 
I will try the RPMs and steps provided to start a container. Regards, Ajith

On Wed, May 15, 2013 at 2:01 PM, Thomas Moschny thomas.mosc...@gmail.com wrote: 2013/5/14 Michael H. Warfield m...@wittsend.com: What I would recommend as steps on Fedora 17... Download lxc-0.9.0 here: http://lxc.sourceforge.net/download/lxc/lxc-0.9.0.tar.gz You should have rpm-build and friends installed via yum on your system. Build the lxc rpms by running rpmbuild (as any user) as follows:

You could also try using the pre-built packages I put here: http://thm.fedorapeople.org/lxc/ . Regards, Thomas
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
On Sat, 2013-05-18 at 19:02 +0530, Ajith Adapa wrote: Sorry for the confusion. In case of issue 3, I felt the host kernel crashed because of the soft lock issue mentioned in issue 2. That's the reason I was saying as a result of ... Ideally speaking I haven't done anything other than creating the lxc-container at the time. Once I restarted the host machine after the crash I haven't observed any issues. Then I started the container using the below command and tried to connect to its shell using the lxc-console command but I ended up with the below message. Ideally I should see a prompt but it just hangs down there. Ctrl+a q works and nothing else.

[root@ipiblr ~]# lxc-start -n TEST -d
[root@ipiblr ~]# lxc-console -n TEST
Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself

Oh, crap... I keep forgetting about that (because I don't use it). That needs to be noted somewhere in the documentation. That's yet another BAD decision on the part of the systemd crowd; lxc-console is probably not going to work, at least for the time being. They (systemd) intentionally, with documented malice aforethought, disable gettys on the vtys in the container if systemd detects that it's in a container. However, /dev/console in the container is still active and is connected to lxc-start and I'm able to log in there, but I have never gotten lxc-console to work with a systemd container and I don't know of anything I can do about it. You would need some way to force the container to start gettys on the vtys. Maybe, if I (or someone else) can figure out a way to do that (force the gettys to start on the vtys), it could be integrated into the Fedora template. My patches for the autodev stuff (plus other stuff) have now been accepted and applied by Serge, so that's done. Maybe I can look deeper into this morass now.

Regards, Mike

Regards, Ajith

On Sat, May 18, 2013 at 5:55 PM, Michael H.
Warfield m...@wittsend.com wrote: Hello, On Sat, 2013-05-18 at 12:35 +0530, Ajith Adapa wrote: Hi, I have installed all the rpms created by @thomas and followed @michael steps to start a lxc container. I have a doubt. 1. When I give lxc-create command I came across huge download of various files. As per my understanding rootfs is created for new container (where can i get the steps for it ? ). Steps for what? It's in /var/lib/lxc/{Container}/rootfs/ But I see below log. Is there any issue ? Copy /var/cache/lxc/fedora/i686/17/rootfs to /var/lib/lxc/TEST/TEST/rootfs ... Copying rootfs to /var/lib/lxc/TEST/TEST/rootfs ...setting root passwd to root installing fedora-release package warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary vector, /proc not mounted? The warnings are perfectly normal and harmless. I ran into this with recent versions of yum and researched it. It's because /proc is not mounted in the container itself when the container is being created. You can ignore them. Package fedora-release-17-2.noarch already installed and latest version Nothing to do Again, normal. container rootfs and config created 'fedora' template installed 'TEST' created Looks like your container was created. I don't see a problem. 2.I see a SOFT LOCK issue with latest version kernel shown below. # uname -a Linux blr 3.8.8-100.fc17.i686 #1 SMP Wed Apr 17 17:26:59 UTC 2013 i686 i686 i386 GNU/Linux [1098069.351017] SELinux: initialized (dev binfmt_misc, type binfmt_misc), uses genfs_contexts [1281973.370052] BUG: soft lockup - CPU#0 stuck for 23s! 
[kworker/0:1:2201] I've seen that on my Dell 610's but they haven't caused any real failures. Not quite sure what that is. [1281973.370052] Modules linked in: binfmt_misc lockd sunrpc snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm i2c_piix4
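For what it's worth, the way systemd decides it is inside a container (and then suppresses the vty gettys, as described above) is chiefly by looking for a container= variable in PID 1's environment; lxc-start sets container=lxc. A small sketch of the check; since reading /proc/1/environ requires root, it demonstrates the parsing on a captured copy with fabricated contents:

```shell
# /proc/1/environ is a NUL-separated block; fake a captured copy for the demo.
env_copy=$(mktemp)
printf 'TERM=linux\0container=lxc\0' > "$env_copy"

# On a real guest, read /proc/1/environ (as root) instead of the copy.
detected=$(tr '\0' '\n' < "$env_copy" | sed -n 's/^container=//p')
echo "$detected"    # → lxc

rm -f "$env_copy"
```

An empty result means PID 1 was started without the marker, i.e. systemd should behave as on bare metal and start its gettys normally.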
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
On Sat, 2013-05-18 at 19:41 +0530, Ajith Adapa wrote: Hmm, sounds like one more roadblock for using lxc in fedora 17 because of systemd.

It's not a roadblock. More like a mile long stretch of stingers (stop spike strips / tire deflators). We're getting there. It's just one more unnecessary puzzle to solve. Sigh...

Currently there is no place where there is a guide for starting up with LXC for the latest fedora versions. I think a page in fedoraproject would be of great help with the known issues and steps for using lxc under various fedora versions.

First we get it working but, yeah, that would be incredibly nice, and then also add it to this project as well.

I am really thinking to start using LXC containers in fedora 14. Build and boot it up with the latest stable kernel version (might be 3.4) and LXC version (0.9) and try out using LXC containers :)

On Sat, May 18, 2013 at 7:28 PM, Michael H. Warfield m...@wittsend.com wrote: On Sat, 2013-05-18 at 19:02 +0530, Ajith Adapa wrote: Sorry for the confusion. In case of issue 3, I felt the host kernel crashed because of the soft lock issue mentioned in issue 2. That's the reason I was saying as a result of ... Ideally speaking I haven't done anything other than creating the lxc-container at the time. Once I restarted the host machine after the crash I haven't observed any issues. Then I started the container using the below command and tried to connect to its shell using the lxc-console command but I ended up with the below message. Ideally I should see a prompt but it just hangs down there. Ctrl+a q works and nothing else. [root@ipiblr ~]# lxc-start -n TEST -d [root@ipiblr ~]# lxc-console -n TEST Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself Oh, crap... I keep forgetting about that (because I don't use it). That needs to be noted somewhere in the documentation. That's yet another BAD decision on the part of the systemd crowd; lxc-console is probably not going to work, at least for the time being.

They (systemd) intentionally, with documented malice aforethought, disable gettys on the vtys in the container if systemd detects that it's in a container. However, /dev/console in the container is still active and is connected to lxc-start and I'm able to log in there, but I have never gotten lxc-console to work with a systemd container and I don't know of anything I can do about it. You would need some way to force the container to start gettys on the vtys. Maybe, if I (or someone else) can figure out a way to do that (force the gettys to start on the vtys), it could be integrated into the Fedora template. My patches for the autodev stuff (plus other stuff) have now been accepted and applied by Serge, so that's done. Maybe I can look deeper into this morass now. Regards, Mike Regards, Ajith On Sat, May 18, 2013 at 5:55 PM, Michael H. Warfield m...@wittsend.com wrote: Hello, On Sat, 2013-05-18 at 12:35 +0530, Ajith Adapa wrote: Hi, I have installed all the rpms created by @thomas and followed @michael's steps to start a lxc container. I have a doubt. 1. When I give the lxc-create command I came across a huge download of various files. As per my understanding a rootfs is created for the new container (where can I get the steps for it?). Steps for what? It's in /var/lib/lxc/{Container}/rootfs/ But I see the below log. Is there any issue? Copy /var/cache/lxc/fedora/i686/17/rootfs to /var/lib/lxc/TEST/TEST/rootfs ... Copying rootfs to /var/lib/lxc/TEST/TEST/rootfs ...setting root passwd to root installing fedora-release package warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary vector, /proc not mounted? warning: Failed to read auxiliary
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
On Sun, 2013-05-19 at 00:02 +0530, Shridhar Daithankar wrote: On Saturday, May 18, 2013 09:58:26 AM Michael H. Warfield wrote: On Sat, 2013-05-18 at 19:02 +0530, Ajith Adapa wrote: Then I started the container using the below command and tried to connect to its shell using the lxc-console command but I ended up with the below message. Ideally I should see a prompt but it just hangs down there. Ctrl+a q works and nothing else. [root@ipiblr ~]# lxc-start -n TEST -d [root@ipiblr ~]# lxc-console -n TEST Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself Oh, crap... I keep forgetting about that (because I don't use it). That needs to be noted somewhere in the documentation. That's yet another BAD decision on the part of the systemd crowd; lxc-console is probably not going to work, at least for the time being. They (systemd) intentionally, with documented malice aforethought, disable gettys on the vtys in the container if systemd detects that it's in a container. However, /dev/console in the container is still active and is connected to lxc-start and I'm able to log in there, but I have never gotten lxc-console to work with a systemd container and I don't know of anything I can do about it. You would need some way to force the container to start gettys on the vtys. Maybe, if I (or someone else) can figure out a way to do that (force the gettys to start on the vtys), it could be integrated into the Fedora template. My patches for the autodev stuff (plus other stuff) have now been accepted and applied by Serge, so that's done. Maybe I can look deeper into this morass now.

First of all, let me say thank you for your concise message upthread. It renewed my interest in lxc and I managed to get a working container for the first time. I will post a detailed HOWTO shortly. I ran into the same lxc-console issue and managed to solve it by commenting out the ConditionPathExists line in /etc/systemd/system/getty.target.wants/getty@tty1.service

NICE! I'm going to look at that and see what I can do with it. That gives me a thread to pick at. It should be possible to work that into the Fedora template. We've already got systemd related code in there and this would be a nice addition. Many thanks!

Regards, Mike

first I had to start the container in foreground, login, comment out the aforementioned line and restart it in daemon mode. After that lxc-console worked. If I changed the systemd unit from the chroot itself, it didn't reflect somehow. Also the line referred to /dev/tty0 but only /dev/tty and /dev/tty1 were available in the container. I don't understand systemd enough to comment on the reasons of it working or not-working. I just experimented around to get a working tty. My environment is arch linux and the details are as follows. kernel : 3.9.2-1 systemd : 204-1 lxc : 1:0.9.0-2 HTH -- Regards Shridhar
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
On Sun, 2013-05-19 at 00:02 +0530, Shridhar Daithankar wrote: On Saturday, May 18, 2013 09:58:26 AM Michael H. Warfield wrote: On Sat, 2013-05-18 at 19:02 +0530, Ajith Adapa wrote: Then I have started the container using below command and tried to connect to its shell using lxc-console command but I ended up with below message. Ideally I should see a prompt but its just hangs down there. Ctl+a q works and nothing else. [root@ipiblr ~]# lxc-start -n TEST -d [root@ipiblr ~]# lxc-console -n TEST Type Ctrl+a q to exit the console, Ctrl+a Ctrl+a to enter Ctrl+a itself Oh, crap... I keep forgetting about that (because I don't use it). That needs to be noted somewhere in the documentation. That's yet another BAD decision on the part of the systemd crowd, lxc-console is probably not going to work, at least for the time being. They (systemd) intentionally, with documented malice a forethought, disable gettys on the vtys in the container if systemd detects that it's in a container. However, /dev/console in the container is still active and is connected to lxc-start and I'm able to log in there but I have never gotten lxc-console to work with a systemd container and I don't know of anything I can do about it. You would need some way to force the container to start gettys on the vtys. Maybe, if I (or someone else) can figure out a way to do that (force the gettys to start on the vtys), it could be integrated into the Fedora template. My patches for the autodev stuff (plus other stuff) have now been accepted and applied by Serge, so that's done. Maybe I can look deeper into this morass now. First of all, let me say thank you for your concise message upthread. It renewed my interest in lxc and I managed to get a working container for the first time. I will post a detailed HOWTO shortly. 
I ran into the same lxc-console issue and managed to solve it by commenting out the ConditionPathExists line in /etc/systemd/system/getty.target.wants/getty@tty1.service. First I had to start the container in the foreground, log in, comment out the aforementioned line, and restart it in daemon mode. After that, lxc-console worked. If I changed the systemd unit from the chroot itself, it somehow didn't take effect. Also, the line referred to /dev/tty0, but only /dev/tty and /dev/tty1 were available in the container. I don't understand systemd well enough to comment on the reasons for it working or not working; I just experimented around to get a working tty.

Oh, I understand why it's working, and that's VERY interesting. I hadn't dug into that realm of the systemd files as yet. They're using the mere existence of /dev/tty0 as a logic switch to determine whether they are going to start a getty on %I (/dev/tty1 in this case), which is yet another case of them burying a magic-cookie switch in the logic. They conditionalize it on a hard-coded /dev/tty0 but then start a getty with "/sbin/agetty --noclear %I 38400", where %I is the unit instance name. Cute. Cheeky bastards.

So, it basically means that if we create a tty0 under autodev, that should then enable any listed vty consoles we set up using systemd links. That could be the key to making this work without modifying files in the container, but it would require a (relatively minor but non-trivial) change to the lxc source code, as opposed to merely a template. I'll run this up the flagpole on the -devel list and with Serge and get their opinions. This is doable. I think I'll try this out in my sources first and then post a request for comments over on -devel. The problem is in the routine setup_tty in conf.c; that's going to need some massaging to make it 0-based instead of 1-based and to adjust for the additional tty count. :-/

My environment is arch linux and the details are as follows.
kernel : 3.9.2-1
systemd : 204-1
lxc : 1:0.9.0-2

HTH
--
Regards
Shridhar
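Shridhar's workaround above can be sketched as a small helper. This is a hedged sketch, not part of the template: the function name and the container path in the example invocation are assumptions; the sed expression simply comments out the ConditionPathExists line so the getty starts even without /dev/tty0.

```shell
# Hypothetical helper: comment out the ConditionPathExists check in a
# getty unit file so lxc-console can attach (per Shridhar's workaround).
# Takes the path to the unit file (e.g. inside the container rootfs).
disable_tty0_condition() {
    # 's/^ConditionPathExists=/#&/' prefixes the matched line with '#',
    # preserving the original text after the comment marker.
    sed -i 's/^ConditionPathExists=/#&/' "$1"
}

# Example invocation against a container named TEST (path is an assumption):
# disable_tty0_condition /var/lib/lxc/TEST/rootfs/etc/systemd/system/getty.target.wants/getty@tty1.service
```

As noted above, editing the unit from inside the running container and restarting it in daemon mode was what actually took effect.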
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
On Wed, 2013-05-15 at 10:31 +0200, Thomas Moschny wrote: 2013/5/14 Michael H. Warfield m...@wittsend.com: What I would recommend as steps on Fedora 17... Download lxc-0.9.0 here: http://lxc.sourceforge.net/download/lxc/lxc-0.9.0.tar.gz You should have rpm-build and friends installed via yum on your system. Build the lxc rpms by running rpmbuild (as any user) as follows: You could also try using the pre-built packages I put here: http://thm.fedorapeople.org/lxc/ . Nice! I wasn't aware of them. Many thanks. Regards, Thomas Regards, Mike
Re: [Lxc-users] Regarding creating a LXC container in fedora 17
On Tue, 2013-05-14 at 11:58 +0530, Ajith Adapa wrote:

Hi, sorry for my basic question as I am new to LXC. I would like to know the steps to create an LXC container using lxc on Fedora 17. I have searched on Google but am not able to find any useful posts. It would be helpful if anyone could share the steps.

Reference other messages about lxc-create. Fedora 17 has lxc 0.7.5 in the repos, which is NOT going to work. You're going to run into multiple problems which, basically, you will not be able to resolve, thanks to systemd (init) running in the host and systemd running in the container. Fundamentally, 0.7.5 is not compatible with systemd. You will need at least 0.9.0 (current) to get this to work. The Fedora template is not (yet) fully updated to support systemd in a container. After creating the container with lxc-create, you have to edit the container config in /var/lib/lxc/{Container}/config and add an entry as follows:

lxc.autodev = 1

Steps: What I would recommend as steps on Fedora 17...

Download lxc-0.9.0 here: http://lxc.sourceforge.net/download/lxc/lxc-0.9.0.tar.gz

You should have rpm-build and friends installed via yum on your system. Build the lxc rpms by running rpmbuild (as any user) as follows:

rpmbuild -ta lxc-0.9.0.tar.gz

Resolve any dependencies until it builds and gives you a cluster of rpms under ~/rpmbuild/RPMS/{arch}/. Everything from here down should now be done as root...

Install the lxc and lxc-libs rpm files using yum localupdate (file names will be printed near the end of the rpmbuild command run above). [Optional - install the lxc-devel package as well.] It should look something like this:

yum localupdate ~user/rpmbuild/RPMS/x86_64/lxc-libs-0.9.0-1.fc17.x86_64.rpm ~user/rpmbuild/RPMS/x86_64/lxc-0.9.0-1.fc17.x86_64.rpm

Doing this with rpmbuild and yum will properly update the existing binary packages and documentation while retaining your ability to update and maintain once it hits the official repos.
Now create your container...

lxc-create -t fedora -n Fedora17 -- -R 17

(The -R 17 is the default if you're on Fedora 17 and you can leave it off.) That should run and create your container. Now edit the container config:

vi /var/lib/lxc/Fedora17/config

And add this line somewhere in the file (if it doesn't already exist):

lxc.autodev = 1

You should now be able to start the container:

lxc-start -n Fedora17

I still need to submit the patches to the Fedora template for the autodev support and the ARM / Raspberry Pi support. They'll eventually get in there. :-P

Regards, Ajith

Regards, Mike
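The "add lxc.autodev = 1 if it doesn't already exist" step lends itself to a small idempotent helper. This is a hedged sketch; the function name is made up, and the example path matches the container created above but is an assumption about your layout.

```shell
# Hypothetical helper: append "lxc.autodev = 1" to a container config
# only if no lxc.autodev line is already present, so repeated runs are safe.
ensure_autodev() {
    cfg="$1"
    grep -q '^lxc.autodev' "$cfg" || echo 'lxc.autodev = 1' >> "$cfg"
}

# Example (path assumed): ensure_autodev /var/lib/lxc/Fedora17/config
```

Running it twice leaves exactly one lxc.autodev line, which is what you want when re-running setup scripts.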
Re: [Lxc-users] How long the dhcp lease lasts
On Mon, 2013-04-22 at 21:59 +0200, Robin Monjo wrote:

Hello everyone, using the default config, containers will have their IPs assigned by the dnsmasq service. I'd like to write into the '/etc/hosts' file of the host system the hostname and corresponding IP address of each container I start.

Why? If you're using dnsmasq, it's in the DNS cache of the dnsmasq server. That's its purpose in life: to act as a DNS caching service. Why do you need it in /etc/hosts when you've already got it in DNS?

I will then use this hostname in iptables rules for port forwarding. But what if dnsmasq changes an IP at the end of the lease? So the question is: how long will an IP given by dnsmasq last?

What is your lease expiration configured for? I've used anything from a few hours (high-turnover large WiFi sites) to several days to carry over a weekend, along with static leases for servers and permanent devices. It depends on what you've configured it for.

May I see one of my containers have its IP changed?

Maybe. Depends on what you've configured it for. I'm not so sure about dnsmasq acting as the dhcp server (which is what you are talking about), but dhcpd will notify you of things like this. Most notably, you will get deletes on old names/addresses and adds for new names/addresses, both forward and reverse, but it's all configuration driven.

Any way to have the dnsmasq table synced with the /etc/hosts file (container's host side)?

Again... Why? Use the DNS cache in dnsmasq. You have no need to refer to /etc/hosts at all.

Kind regards

Regards, Mike
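Mike's point that lease length and static assignments are configuration choices can be made concrete with a dnsmasq fragment. The option names (dhcp-range, dhcp-host) are standard dnsmasq; the addresses, MAC, and lease times below are purely illustrative.

```
# /etc/dnsmasq.conf (illustrative values, not from the thread)
# Dynamic pool with a 12-hour lease time:
dhcp-range=10.0.3.50,10.0.3.200,12h
# Static lease for a container that must keep its IP (e.g. for iptables rules):
dhcp-host=00:16:3e:aa:bb:cc,mycontainer,10.0.3.10,infinite
```

With a static dhcp-host entry, the container's address never changes, so port-forwarding rules can safely reference it without syncing /etc/hosts.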
Re: [Lxc-users] Problem with: lxc.autodev=1
On Sat, 2013-04-20 at 22:01 +0200, Andreas Otto wrote: Hi, thanks for the fast answer,

I've been kind-of deeply involved in the whole systemd / autodev thing, so I'm a bit sensitive to some of the bizarre nuances of systemd and various versions. You got my attention with that subject line. They (systemd) really broke far too many things far too unnecessarily (with little or no benefit from what I can discern) and then tried to tell everyone else how they should be doing things. It's been a real mess. You can probably tell, I'm not a fan of systemd at the moment. Maybe when it grows up and matures a bit more...

I've got Fedora 14 (upstart - no systemd) working with and without autodev, Fedora 15 working with upstart (with systemd only after painful tweaking), and Fedora 16 not at all with systemd (major udev problems), but working on Fedora 17 with systemd following their recommendations. Problem is that their recommendations don't work for all versions of systemd. I've really given up on Fedora 15 and 16 in a container just because systemd is not stable or consistent in its behavior.

if lxc version is:
host# rpm -q lxc
lxc-0.8.0-3.5.1.x86_64
- this is the version from opensuse 12.3

First recommendation is to get on 0.9.0. We got a lot of fixes in there, so I can't tell if this was fixed or not.

my 'guest' has ...
guest# ps -eaf | grep systemd
root      24  1 0 19:48 ? 00:00:00 /usr/lib/systemd/systemd-logind
message+  28  1 0 19:48 ? 00:00:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root     955  1 0 19:48 ? 00:00:00 /usr/lib/systemd/systemd-journald

Any idea what version of systemd is running? I've found there are a number of versions that seem to be impossible to get to work, and when one version works, you find the next version doesn't. I find this particular problem rather disturbing. I have not seen the permissions problem crop up in a systemd container with autodev enabled. What I've seen was just the opposite. This should have worked.
Not sure what's causing it to be wrong, unless it's the lxc version, but even 0.8.0 worked if you had the right version of systemd. I did retest a fresh Fedora 14 (upstart) container, and it works with the proper permissions in the /dev directory for both autodev settings. But that's under 0.9.0, so that would be my first check.

Regards, Mike
Re: [Lxc-users] mknod inside systemd container
Hey John,

On Thu, 2013-04-04 at 09:07 +0100, John wrote: On 03/04/13 23:15, Michael H. Warfield wrote: On Wed, 2013-04-03 at 23:03 +0100, John wrote: On 02/04/13 23:59, Michael H. Warfield wrote: On Tue, 2013-04-02 at 16:02 +0100, John wrote:

If my understanding is correct, to stop systemd trying to launch udev and generally make a mess of everything inside a container, you need to remove the mknod capability from the container.

Ah... That's kind of old information and not really effective.

But what if I want (need) to be able to use mknod inside a container? How can I do that with a systemd container?

1) Get the latest lxc. lxc 0.8 might suffice for systemd in a container, but not with systemd in the host, and I wouldn't recommend it. 0.9.0 is being pulled and bundled now. It's not up yet, but 0.9.0.rc1 is.

2) You'll have to add lxc.autodev = 1 to your configuration file.

I already do that. I am running lxc version: 0.9.0.alpha3

That's strange. What stops systemd from mounting devtmpfs and firing up udev is having a tmpfs mounted on /dev. That's part of what autodev = 1 is doing.

I'm taking my understanding from here: http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface where it says "The udev unit files will check for CAP_SYS_MKNOD, and skip udev if that is not available." But it sounds like you're saying that lxc.autodev = 1 should prevent systemd from firing the udev systemd unit anyway.

Well, sort of. I may have been a little off there. I do find that systemd-udevd is running in my systemd containers but that the actual udevd process itself is not (as opposed to the host systems and other systems). I can run mknod in those containers. So it is possible to do this. But...
A hint may be in the lxc-fedora template, where there is specifically a configure_fedora_systemd function that does this:

configure_fedora_systemd() {
    unlink ${rootfs_path}/etc/systemd/system/default.target
    touch ${rootfs_path}/etc/fstab
    chroot ${rootfs_path} ln -s /dev/null //etc/systemd/system/udev.service
    chroot ${rootfs_path} ln -s /lib/systemd/system/multi-user.target /etc/systemd/system/default.target
    #dependency on a device unit fails it specially that we disabled udev
    sed -i 's/After=dev-%i.device/After=/' ${rootfs_path}/lib/systemd/system/getty\@.service
}

Something similar does exist in the lxc-archlinux template:

# disable services unavailable for container
ln -s /dev/null /etc/systemd/system/systemd-udevd.service
ln -s /dev/null /etc/systemd/system/systemd-udevd-control.socket
ln -s /dev/null /etc/systemd/system/systemd-udevd-kernel.socket
ln -s /dev/null /etc/systemd/system/proc-sys-fs-binfmt_misc.automount
# set default systemd target
ln -s /lib/systemd/system/multi-user.target /etc/systemd/system/default.target

The lxc-archlinux template script seems very badly broken for me, expecting a fixed bridge name of br0, not using the defaults from /etc/lxc/default.conf, and looking for things that are not present on my Fedora host. So I haven't been able to build an archlinux container on my host systems. Did you build yours from lxc-create or did you roll your own? Maybe you might want to check those /dev/null links in that container. It looks like udevd should not even start if those have been set correctly.

What distro is running in the container and what version of systemd? I've seen this with Fedora 16, but the latest systemd and Fedora 17 in the container are fine.
I am running Arch Linux on both host and container: Linux 3.7.10-1-ARCH #1 SMP PREEMPT Thu Feb 28 09:50:17 CET 2013 x86_64 GNU/Linux. On my host, systemctl --version reports: systemd 197 +PAM -LIBWRAP -AUDIT -SELINUX -IMA -SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ. And on the container, it's systemd 196.

Those versions are congruent with what I'm running.

I've just checked, and the latest version in the Arch repo is 198. I wonder if I should try updating to that?

Probably won't make a difference. This did give me some place to look over my headaches with Fedora 15 and Fedora 16 upgraded containers, though. :-)=)

I found that, without the removal of the mknod capability, everything went crazy. I have working containers with systemd both on the host and inside the container (I even run my full desktop inside a container). To get a systemd container working I found I needed three things:

lxc.autodev = 1
lxc.cap.drop = mknod

I'm not having to do that, but I'm avoiding F15 and F16 because they don't seem to play nice and start reliably. F17 is doing well for me.

lxc.pts = 1024

It's all working well except for the fact that I might need to allow a container to have the mknod capability. Are you saying that with 0.9.0 there are changes that negate the requirement for lxc.cap.drop = mknod? The way I understood it was that it was systemd that behaved differently based on the availability of that capability
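John's three settings, collected into one config fragment exactly as given in the message (the container name in the path is a placeholder):

```
# /var/lib/lxc/<name>/config -- settings John reports needing
# for a systemd container (from the message above)
lxc.autodev = 1
lxc.cap.drop = mknod
lxc.pts = 1024
```

Note the tension the thread is about: lxc.cap.drop = mknod keeps systemd's udev units quiet (they check for CAP_SYS_MKNOD), but it also prevents legitimate mknod use inside the container.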
Re: [Lxc-users] LXC- ARM6 RaspberryPI Fedora core 14
On Wed, 2013-04-03 at 11:29 +0200, Benito wrote:

Hi There, I've been working on a project on the Raspberry Pi, where we want to run a 32-bit / 64-bit Fedora 14 container on the RaspberryPi.

32-bit / 64-bit? Huh? What do you think you're running? You're going to be running on the host architecture and host kernel. The Raspberry Pi (RPi) is an ARMv6 (at least mine is) and that's 32-bit. That's going to be the container as well as the host. You're not thinking you'll be running i686 or x86_64 binaries in a container like that, are you?

I've had success with LXC on a Mint14 64-bit (which is Ubuntu 12 based, I believe) host with a Fedora 14 64-bit container (downloaded FC14 with the template -t parameter below). It was as easy as running:

apt-get install yum
lxc-create -t fedora -n fedora14

I've got a feeling that is not going to work for you (at least not yet, or at least not without some effort), for a number of reasons...

1) That command you gave won't install Fedora 14 unless you were running on a Fedora 14 host. Running that command on one of my F17 hosts installed F17 into a container named fedora14, but it obviously was NOT Fedora 14. Below you say you tried it from a Fedora ARM Remix. Was it the old F14 remix (no longer current or supported) or one of the current F17 or F18 ones?

2) Why are you wanting to resort to F14? That's been out of support and past EOL for over a year now. It was the last Fedora before they switched to systemd. If that's your concern, you should get lxc 0.9.0 (still not on the web site yet - still at 0.9.0.rc1 but should show up in a day or so) installed so you can support systemd in a container.

3) The current Fedora ARM Remix is F17 and F18, with versions of systemd that are going to require lxc 0.9.0 or higher.

4) The current version of lxc in the F18 Remix is only 0.7.5 and will not support running systemd in a container (F17 or F18) or running on a host system running recent versions of systemd. Sigh...
5) I don't know about the Raspbian flavor (I'm running Fedora ARM Remix F17 on my 4 Raspberry Pis), but the F17 ARM Remix has cgroups in place that should not require a recompiled kernel. It is going to need a newer version of lxc, though...

6) Sigh... Looking at what got built by that lxc-create command, I realize we also have a template problem for containers running systemd. The config files have to have autodev = 1 in them. Even if you managed to create an F17 or F18 container and had the latest lxc, those containers would not run properly without that option. Damn... I'll take that up on the development list.

Now I've been struggling with this for a few weeks. Compiled an LXC-friendly kernel on the Raspbian OS (Debian wheezy ARM), also tried with Fedora ARM remix. Is it even possible to run a fedora14 container on ARM (Raspberry Pi) architecture? If so, can anyone point me in the right direction?

I may see if I can pull this off on one of my F17 ARM Remix RPis after rebuilding lxc with the latest sources. I think you can forget about F14 (IMHO), though.

Regards Benito

Regards, Mike
Re: [Lxc-users] LXC- ARM6 RaspberryPI Fedora core 14
Cross-posting over to the developers list, since this is definitely a developer issue...

On Wed, 2013-04-03 at 11:29 +0200, Benito wrote: Hi There, I've been working on a project on the Raspberry Pi, where we want to run a 32-bit / 64-bit Fedora 14 container on the RaspberryPi. I've had success with LXC on a Mint14 64-bit (which is Ubuntu 12 based, I believe) host with a Fedora 14 64-bit container (downloaded FC14 with the template -t parameter below). It was as easy as running:

apt-get install yum
lxc-create -t fedora -n fedora14

Ok... Got the latest lxc compiled on my Raspberry Pi. There seem to be 4 things broken with the fedora template on the RPi, two of which are peculiar to running on the Fedora Remix, but one big one will bite Raspbian as well. One is a version/config issue with the Fedora container for supporting systemd in a container.

1) The architecture is reported by the OS as armv6l, but this fails. The arm processors should be mapped to arch = arm. That means adding an if check in the template. That applies to both Raspbian and Fedora Remix hosts. I don't know if other architectures are similarly affected.

2) Running under Fedora Remix 17, the template cannot find the release information and doesn't recognize it as a Fedora family. Instead of being in /etc/fedora-release, it's in /etc/raspberrypi-fedora-remix-release, but it could also be extracted from /etc/redhat-release, which is a symlink.

3) On Fedora Remix, if it finds the release (I added a symlink to test), it's extracting the wrong field for the version number from the release file. This is vanilla Fedora 17:

Fedora release 17 (Beefy Miracle)

This is the RPi Fedora Remix 17:

Fedora remix release 17 (Raspberrypi Fedora Remix)

The template script is extracting the third field (word) and is one off in this case. That also causes the yum downloads to bomb.
Both points 2 and 3 can be circumvented by passing the -R release option to the template like this:

lxc-create -t fedora -n Fedora17 -- -R 17

But that still leaves the bad architecture, which then causes the yum downloads to blow up. I added this to the template to get it to work, and it's building a container now:

  if [ $arch = i686 ]; then
      arch=i386
  fi
+ if [ $arch = armv6l ]; then
+     arch=arm
+ fi

That should probably be turned into a case statement. Detecting the correct release file is probably going to be a little ugly and, maybe, should fall back to /etc/redhat-release if fedora-release is not present, detect the keyword Fedora, and skip the optional word Remix.

4) Finally, there's going to need to be a version check in there to add autodev = 1 to container configs for versions greater than 14, or systemd in the container will cause problems for the host system.

Regards, Mike

Now I've been struggling with this for a few weeks. Compiled an LXC-friendly kernel on the Raspbian OS (Debian wheezy ARM), also tried with Fedora ARM remix. Is it even possible to run a fedora14 container on ARM (Raspberry Pi) architecture? If so, can anyone point me in the right direction?

Regards Benito
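The case statement Mike suggests could look like the sketch below. This is hypothetical, not the template's actual code: the function name is invented, armv6l and i686 come from the message, and armv7l is an added assumption about other ARM boards.

```shell
# Hypothetical sketch of the suggested case statement for mapping the
# host architecture to the name yum repos use (template context assumed).
map_arch() {
    case "$1" in
        i686)          echo i386 ;;   # from the template's existing check
        armv6l|armv7l) echo arm  ;;   # armv6l per the message; armv7l assumed
        *)             echo "$1" ;;   # pass anything else through unchanged
    esac
}
```

A case statement keeps the mapping in one place, so adding another architecture later is a one-line change instead of another if block.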
Re: [Lxc-users] mknod inside systemd container
On Wed, 2013-04-03 at 23:03 +0100, John wrote: On 02/04/13 23:59, Michael H. Warfield wrote: On Tue, 2013-04-02 at 16:02 +0100, John wrote:

If my understanding is correct, to stop systemd trying to launch udev and generally make a mess of everything inside a container, you need to remove the mknod capability from the container.

Ah... That's kind of old information and not really effective.

But what if I want (need) to be able to use mknod inside a container? How can I do that with a systemd container?

1) Get the latest lxc. lxc 0.8 might suffice for systemd in a container, but not with systemd in the host, and I wouldn't recommend it. 0.9.0 is being pulled and bundled now. It's not up yet, but 0.9.0.rc1 is.

2) You'll have to add lxc.autodev = 1 to your configuration file.

I already do that. I am running lxc version: 0.9.0.alpha3

That's strange. What stops systemd from mounting devtmpfs and firing up udev is having a tmpfs mounted on /dev. That's part of what autodev = 1 is doing. What distro is running in the container and what version of systemd? I've seen this with Fedora 16, but the latest systemd and Fedora 17 in the container are fine.

I found that, without the removal of the mknod capability, everything went crazy. I have working containers with systemd both on the host and inside the container (I even run my full desktop inside a container). To get a systemd container working I found I needed three things:

lxc.autodev = 1
lxc.cap.drop = mknod

I'm not having to do that, but I'm avoiding F15 and F16 because they don't seem to play nice and start reliably. F17 is doing well for me.

lxc.pts = 1024

It's all working well except for the fact that I might need to allow a container to have the mknod capability. Are you saying that with 0.9.0 there are changes that negate the requirement for lxc.cap.drop = mknod? The way I understood it was that it was systemd that behaved differently based on the availability of that capability...
I have found that this works to get recent systemd containers (Fedora 17) to work, but Fedora 15 and Fedora 16 (neither of which are supported any longer) don't work, due to udev / systemd interaction. I would recommend waiting a couple of days until 0.9.0 is up and then pulling it down and building it. That's your best shot with systemd.

I have this container that is a builder of system images for other nodes (containers and/or metal boxes). In order to correctly do this, it needs to execute mknod inside the image as it builds it. (Note: the device nodes created don't need to be usable in the context of the image being built - the builder just needs to be able to create them.) I've been doing this for ages under sysvinit and it's been fine. I have just migrated this builder container to systemd and hit this problem... Is there another way to keep systemd in line other than removing the mknod capability?

Thanks, John
Re: [Lxc-users] mknod inside systemd container
On Tue, 2013-04-02 at 16:02 +0100, John wrote:

If my understanding is correct, to stop systemd trying to launch udev and generally make a mess of everything inside a container, you need to remove the mknod capability from the container.

Ah... That's kind of old information and not really effective.

But what if I want (need) to be able to use mknod inside a container? How can I do that with a systemd container?

1) Get the latest lxc. lxc 0.8 might suffice for systemd in a container, but not with systemd in the host, and I wouldn't recommend it. 0.9.0 is being pulled and bundled now. It's not up yet, but 0.9.0.rc1 is.

2) You'll have to add lxc.autodev = 1 to your configuration file.

I have found that this works to get recent systemd containers (Fedora 17) to work, but Fedora 15 and Fedora 16 (neither of which are supported any longer) don't work, due to udev / systemd interaction. I would recommend waiting a couple of days until 0.9.0 is up and then pulling it down and building it. That's your best shot with systemd.

I have this container that is a builder of system images for other nodes (containers and/or metal boxes). In order to correctly do this, it needs to execute mknod inside the image as it builds it. (Note: the device nodes created don't need to be usable in the context of the image being built - the builder just needs to be able to create them.) I've been doing this for ages under sysvinit and it's been fine. I have just migrated this builder container to systemd and hit this problem... Is there another way to keep systemd in line other than removing the mknod capability?

Thanks, John
-- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it!
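The lxc.autodev = 1 step from the reply can be scripted along these lines. This is a sketch: the container name and paths are illustrative, and a scratch file stands in for the real /var/lib/lxc/<name>/config so the commands are safe to run anywhere.

```shell
# Append lxc.autodev = 1 to a container config if it isn't already there.
# A scratch file stands in for /var/lib/lxc/<name>/config (illustrative path).
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
lxc.utsname = builder
lxc.rootfs = /var/lib/lxc/builder/rootfs
EOF
# lxc.autodev = 1 tells LXC to mount a tmpfs on /dev and populate it itself,
# which keeps systemd/udev inside the container from making a mess of /dev.
grep -q '^lxc.autodev' "$CONFIG" || echo 'lxc.autodev = 1' >> "$CONFIG"
grep '^lxc.autodev' "$CONFIG"
```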
Re: [Lxc-users] installation problem
Hello... On Sat, 2013-01-26 at 22:09 +0800, Ryan Young wrote: Hello all, I am new to lxc and got some problems with the installation of lxc 0.8.0. My distribution is rhel5; when I run ./configure, it returns "please install libcap-devel", but I do have libcap-devel/libcap installed on my machine, its version is 1.10.26. Are there any known issues regarding this problem? RHEL 5? I don't think so. What kernel version are you running? I'm not aware of any flavor of RHEL 5 / CentOS 5 / SLS 5 that runs a kernel that will support LXC. From what I can tell, the latest kernel available on 5.9 is 2.6.18-348.1.1 and that's just not going to happen as it has no support for control groups (cgroups). Even if you solve the libcap problem, you won't get past this one without a more modern kernel. You would be better off upgrading to RHEL 6 / CentOS 6 than even attempting this. Regards, Mike
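The kernel check suggested in the reply takes two commands. A sketch; the echoed messages are my own wording, not LXC output:

```shell
# Is this kernel even a candidate for LXC? RHEL 5's 2.6.18-* kernels expose
# no /proc/cgroups because they predate control-group support entirely.
uname -r
if [ -f /proc/cgroups ]; then
    msg="kernel has cgroup support"
else
    msg="no cgroup support: LXC cannot work on this kernel"
fi
echo "$msg"
```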
Re: [Lxc-users] error starting lxc containers
On Sat, 2013-01-19 at 23:33 +0200, Ramez Hanna wrote: host: fedora 17 kernel: 3.6.11-5.fc17.x86_64 lxc: 0.9 alpha2 systemd: systemd-44-23.fc17.x86_64 selinux is disabled I have a Fedora 17 host (and several other hosts as well as Fedora 18 for testing)... The latest update to systemd broke lxc due to the pivot_root problem with their use of the MS_SHARED mount attribute that has been under active discussion for the last couple of weeks. No, it is not functional under 0.9.0 alpha2. It is, more or less, fixed under current staging (fixed with a lot of ugliness that we're trying to address). The errors you report below don't seem to exactly correspond to the errors I would expect but I would expect that, if you had recently upgraded that Fedora 17 host to the latest systemd, you are going to fail, period. Most likely, I would expect you to fail with a pivot_root failure, but anything is possible. It's broken and we know it. lxc-start -n build02 build02 is a wheezy container built with the debian template; it was working fine; both kernel and systemd were upgraded (can't tell which one broke it) Most likely, it's the systemd upgrade that caused the failure. Fedora 17 with the latest systemd from fedora-updates has broken lxc and even 0.9.0 alpha2 does not fix it. You have to use staging from git and build your own.
Regards, Mike

error is:
lxc-start 1358593239.324 INFO lxc_conf - cgroup has been setup
lxc-start 1358593239.324 INFO lxc_conf - console has been setup
lxc-start 1358593239.325 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty1
lxc-start 1358593239.329 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty2
lxc-start 1358593239.330 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty3
lxc-start 1358593239.332 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty4
lxc-start 1358593239.334 INFO lxc_conf - 4 tty(s) has been setup
lxc-start 1358593239.334 DEBUG lxc_conf - mountpoint for old rootfs is '/usr/lib64/lxc/rootfs/lxc_putold'
lxc-start 1358593239.334 ERROR lxc_conf - Invalid argument - pivot_root syscall failed
lxc-start 1358593239.336 ERROR lxc_conf - failed to setup pivot root
lxc-start 1358593239.337 ERROR lxc_conf - failed to set rootfs for 'f17'
lxc-start 1358593239.338 ERROR lxc_start - failed to setup the container
lxc-start 1358593239.339 ERROR lxc_sync - invalid sequence number 1. expected 2
lxc-start 1358593239.340 ERROR lxc_start - failed to spawn 'f17'
any pointers to where this comes from? -- BR RH http://informatiq.org
Re: [Lxc-users] CentOS init.d services not started upon start
On Thu, 2013-01-17 at 20:48 +0530, Shibashish wrote: Hi, My runlevel services, i.e. whatever is in /etc/init.d/, are not started when I start a container using /usr/bin/lxc-start -d -n myvm1. How do I start those automatically? I'm on CentOS 6.3, lxc built from git branch staging. Strange. I have a couple of CentOS 6.3 containers running just fine and all the services that are configured to autostart are starting properly. First question: what runlevel is your container starting in? Run the runlevel command and you should see something like N 3. The first number is the previous level (none in this case) and the second number is the current runlevel (3 in this case). The services in /etc/init.d are available to start but must be linked to startup links in the appropriate /etc/rc.d/rc[runlevel].d (/etc/rc.d/rc3.d in my example above). Links that begin with an S followed by two numbers (like S55sshd) are startup links when entering that runlevel. Services to be autostarted are normally managed by chkconfig. Just type chkconfig and look down the column of services for your runlevel and see what's on. You can use chkconfig to turn services on or off as well. If you're in runlevel 1 or S (single user), you'll need to figure out why and correct it. You might check /etc/inittab for the id (initdefault) line. Most likely you are going to want 3. If you are in runlevel 3 and you have services configured to start, which are not starting, then you should probably look in /var/log/messages to see if you're getting errors. Thanks. ShiB. while ( ! ( succeed = try() ) ); Regards, Mike
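The S##/K## link convention the reply describes can be demonstrated on a scratch tree. The real path inside a CentOS 6 container would be /etc/rc.d/rc3.d; the service names below are illustrative:

```shell
# The rc-directory convention described above, demonstrated on a scratch tree.
RCD=$(mktemp -d)/rc3.d
mkdir -p "$RCD"
ln -s ../init.d/sshd "$RCD/S55sshd"   # S## = start when entering the runlevel
ln -s ../init.d/cups "$RCD/K10cups"   # K## = stop when entering the runlevel
# init runs the S* links in numeric order on entering runlevel 3:
ls "$RCD" | grep '^S' | sort
```

On a real system, chkconfig manages these links for you; the point here is only what the link names encode.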
Re: [Lxc-users] start order
On Mon, 2012-12-10 at 08:10 -0600, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): There has been very little discussion in the main project over how to manage autobooting containers (or maybe I've missed it). Maybe it's time we had it. What I do now is specific to my constellations of 6 lxc hosts (about 4 dozen guest containers) at two sites. I would personally prefer this sort of information to be contained in the individual container config files, a la OpenVZ and Linux-VServers, without creating an entirely new, distribution-specific, configuration file. The primary argument for having it contained within the container config really concerns migration. When I migrate a container from one host to another host in the same cluster, if all that information is in the container config, it follows along. If you bury it in a common config, you then have to edit those configuration files on both the servers (the throwing and the catching) in order to properly complete the migration. I don't like that idea at all. Buried in a config, but it could be exposed by lxc-info so it wouldn't be so bad. Though I guess it could be an issue for boot-speed fanatics who want to minimize random disk access during boot. I won't say that I'm a boot speed fanatic but I can be (more than) a little OCD with regards to getting certain critical containers up as quickly as possible even where other delays dominate. A simple LXC_AUTOSTART=no default would probably make them happy (how many ppl really want autostart, and of those, on how many of the systems they boot every day do they want it enabled?) Systems of this sort should not be booted every day. My test systems - sure. My production systems - Hell NO! My monster host at my colo site, berserker-base, was recently booted after upgrading a root kernel in the host system and a distro upgrade. It had been up over 500 days.
It took hours to fsck the various file systems just to boot and some critical containers were down that entire time (I do have my authoritative name servers scattered between multiple hosts with nice long TTLs and I work with Hurricane Electric for slaves, so I had no disruption in service). But it had close to 3 dozen containers. It took only a minute or so to bring them up, once the host was up, so, against the hours of fsck, the minor difference between one machine starting before another was a drop in the bucket, I will admit. Still, I wanted those name servers up first. :-) They can slow the other containers down if the other containers need DNS services that these provide. There are dependencies in there for both DNS and routing. Here's my problem. The system takes 3 hours to fsck those file systems. I still have to remember to come back to it once it's fully booted and make sure those critical containers get booted up. Yes, I need autoboot badly there. I do get distracted and I do forget. It's happened more than once that someone has called me on the phone asking why their system is down (I have a dozen or so friends with playgrounds I host) and I realized I had gotten distracted and forgot to finish firing everything up. Boot order is less important to me than simply having raw autoboot. I have a script but it's currently run by hand. I always meant to turn it into an init script but hadn't. It'd be great to have distro-independent boot startup and ordering built in, so that distros don't have to roll their own. Sounds like you all have some great ideas so I'll look for patches :) What I think would be great is 1. lxc.conf updates to indicate autostart delay and order or dependencies This sounds reasonable for global configurations. I think providing a container-by-container delay may be a case where the precision is exceeding the accuracy and may be entirely too much overkill, so a global in lxc.conf is reasonable.
I still like having the autoboot option and/or priority in the individual container configs though, since it really is container specific. Maybe there's a way to adapt to both paradigms... Give one a preference. I think I could make that work. 2. an lxc command to spit out containers which should be started, in order (to loop over) Oh, I LIKE that idea. I hadn't thought of that! That makes a lot of sense. Something like lxc-boot where you pass it a level number (0 for manual and 1 for boot maybe) and it passes back to you a list of containers that need to be booted to meet the criterion (priority) and in the order they should be booted. That would simplify what I'm doing now... It's really almost a spin on the idea of lxc-ls, only you list them (the ones that need booting) out in boot order. 3. lxc-info update to optionally list that information 4. could package sysvinit, upstart, and systemd templates. I'll look at the Fedora / RHEL / CentOS / SLS angle. I need to get more
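Point 2 above, the "list containers to start, in order" helper, could be sketched roughly like this. Note the caveats: lxc.boot is a hypothetical key taken from this discussion, not a real LXC option, and a scratch tree stands in for /var/lib/lxc:

```shell
# Sketch of the proposed lxc-boot enumeration: scan per-container configs
# for a (hypothetical) "lxc.boot" priority and emit names in boot order.
LXCDIR=$(mktemp -d)
for c in dns01 router web; do mkdir -p "$LXCDIR/$c"; done
echo 'lxc.boot = 2' > "$LXCDIR/dns01/config"
echo 'lxc.boot = 1' > "$LXCDIR/router/config"
echo 'lxc.boot = 0' > "$LXCDIR/web/config"   # 0 = manual only, skip
# Emit names with priority >= 1, lowest (earliest) number first:
order=$(grep -H 'lxc.boot' "$LXCDIR"/*/config \
  | sed 's|.*/\([^/]*\)/config:lxc.boot = |\1 |' \
  | awk '$2 >= 1' | sort -k2 -n | cut -d' ' -f1)
echo "$order"
```

An init script would then just loop over that list calling lxc-start -d -n on each name.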
Re: [Lxc-users] start order
On Mon, 2012-12-10 at 11:35 -0600, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Mon, 2012-12-10 at 08:10 -0600, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): There has been very little discussion in the main project over how to manage autobooting containers (or maybe I've missed it). Maybe it's time we had it. What I do now is specific to my constellations of 6 lxc hosts (about 4 dozen guest containers) at two sites. I would personally prefer this sort of information to be contained in the individual container config files, a la OpenVZ and Linux-VServers, without creating an entirely new, distribution-specific, configuration file. The primary argument for having it contained within the container config really concerns migration. When I migrate a container from one host to another host in the same cluster, if all that information is in the container config, it follows along. If you bury it in a common config, you then have to edit those configuration files on both the servers (the throwing and the catching) in order to properly complete the migration. I don't like that idea at all. Buried in a config, but it could be exposed by lxc-info so it wouldn't be so bad. Though I guess it could be an issue for boot-speed fanatics who want to minimize random disk access during boot. I won't say that I'm a boot speed fanatic but I can be (more than) a little OCD with regards to getting certain critical containers up as quickly as possible even where other delays dominate. Sorry, I think you misunderstood what I meant by that :) By boot speed fanatics, I meant people who want to avoid having init scripts look under /etc/default/$pkg because that adds seeking which slows down boot. For those people, if a laptop that has some containers has to scan /var/lib/lxc/*/config at boot to find lxc.boot entries, that will be unacceptable.
And TBH since my main laptop has an SSD but doesn't suspend/resume, 5 sec boot time is precious to me. But I don't autostart containers on that laptop. With 'boot speed fanatics' I was not talking about people who want containers brought up at boot. Oh, no... I know EXACTLY who you were referring to. I was just making an orthogonal, oblique point. Maybe (probably) I was being too obscure. They're the ones that got us into this whole systemd "the system must boot in under 10 seconds" mess (which I don't TOTALLY disagree with). I totally agree with you there. If you autostart containers in that sort of environment you have no room to bitch about the boot time. That's a self-inflicted injury and I agree the autoboot option must default to off and you only autoboot those containers that are enabled. A fast grep on the config files is pretty small, time-wise, but an interesting point, even if you do it in the background. Even if you DON'T autostart containers, if you scan the directories with grep, that can take (a little) time. A global config for autostart might not be a bad idea. In RH/rpm-based land, this would probably be something in an /etc/sysconfig/lxc file with LXC_AUTOBOOT=0 and I would support that entirely. Don't even look at the configuration files if that's the case. If it's set to autoboot, then digest the configuration files to sort them out. But then it is something you have done yourself. Even the ls of /etc/lxc/auto is not without cost, but that may be the way Ubuntu goes with that paradigm. -serge Regards, Mike
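The global gate discussed above might look like this. LXC_AUTOBOOT and the /etc/sysconfig/lxc location are proposals from the thread, not a shipped interface, and a scratch file stands in for the real path:

```shell
# Sketch of the /etc/sysconfig/lxc gate: if autoboot is globally off,
# the init script never even touches the per-container configs.
SYSCONFIG=$(mktemp)
echo 'LXC_AUTOBOOT=0' > "$SYSCONFIG"
. "$SYSCONFIG"
if [ "${LXC_AUTOBOOT:-0}" -eq 0 ]; then
    msg="autoboot disabled: not touching container configs at boot"
else
    msg="autoboot enabled: scanning container configs"
fi
echo "$msg"
```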
Re: [Lxc-users] start order
) in order to properly complete the migration. I don't like that idea at all. Where numbers mean seconds, e.g. the dns server receives 15 seconds to start up properly. Essential services would be working before anything else and users would be happy:) What do you think about that? Thanks, tamas Regards, Mike
Re: [Lxc-users] start order
On Sat, 2012-12-08 at 19:51 +0100, Daniel Baumann wrote: On 12/08/2012 06:24 PM, Michael H. Warfield wrote: It's possible to drop a full configuration file into /etc/lxc/auto and not have it exist in /var/lib/lxc, and then -f config file would work while -n $c -d would fail. That would be my guess there. that was the intention when i wrote it in debian (ubuntu at some point took it over then). I've been imitating the way OpenVZ does this (on Fedora) by adding some parameters in the lxc configuration files like this: not convinced that a start order does actually matter (because your stuff within containers should a) be graceful wrt the existence of other services by other containers and b) be eventful), however, better and simpler imho is to support using numbered prefixes. I can see his desire for controlling start order. In my real-world case, where I have to reboot one of my major hosts, I have certain important containers I need up as soon as possible. One is a null router which is the /dev/null sink for my net-telescope (security related stuff similar to a honeypot). That's the first one that needs to be up. That one is followed by some authoritative name servers and recursive name servers. After that, there are processes which like to chew on MySQL databases as they slowly come up. The containers are independent of one another but their demand on the host resources can have a significant real-time impact. Not vital. Everything comes up anyway, but I can see his point. /etc/lxc/auto/0001-foo /etc/lxc/auto/0002-bar [...] I like having the containers named for the host names (the old OpenVZ paradigm really sucked) so listing gives me better readability and understandability. I thought of the prefix idea too but discounted it because it would then force you into a naming convention for the containers themselves. That's more of a hack than a solution in that case, where you have additional side effects like that.
-- Address: Daniel Baumann, Donnerbuehlweg 3, CH-3012 Bern Email: daniel.baum...@progress-technologies.net Internet: http://people.progress-technologies.net/~daniel.baumann/ Regards, Mike
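One possible middle ground between Daniel's numbered prefixes and Mike's objection that prefixes force a naming convention: keep the containers under their real (hostname-based) names and put the ordering only in symlink names. A sketch, with a scratch directory standing in for /etc/lxc/auto and illustrative container names:

```shell
# Ordered autostart without renaming containers: the symlink prefix carries
# the boot order, while each container keeps its own name in /var/lib/lxc.
AUTO=$(mktemp -d)
ln -s /var/lib/lxc/nullrouter/config "$AUTO/0001-nullrouter"
ln -s /var/lib/lxc/dns01/config      "$AUTO/0002-dns01"
ln -s /var/lib/lxc/mysql01/config    "$AUTO/0003-mysql01"
# Boot loop: glob order gives start order; recover each name from the link.
started=$(for f in "$AUTO"/*; do
    basename "$(dirname "$(readlink "$f")")"
done)
echo "$started"
```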
Re: [Lxc-users] Container sends login prompt to the system console
lxc-start 1354793173.811 DEBUG lxc_cgroup - cgroup /sys/fs/cgroup/devices has flags 0x2
lxc-start 1354793173.811 DEBUG lxc_cgroup - get_init_cgroup: found init cgroup for subsys devices at /
lxc-start 1354793173.811 DEBUG lxc_cgroup - using cgroup mounted at '/sys/fs/cgroup/devices//lxc'
lxc-start 1354793173.811 DEBUG lxc_cgroup - lxc_cgroup_path_get: returning /sys/fs/cgroup/devices//lxc/test1-18 for subsystem devices.allow
lxc-start 1354793173.811 DEBUG lxc_conf - cgroup 'devices.allow' set to 'c 254:0 rwm'
lxc-start 1354793173.811 INFO lxc_conf - cgroup has been setup
lxc-start 1354793173.811 INFO lxc_conf - console has been setup
lxc-start 1354793173.811 INFO lxc_conf - 4 tty(s) has been setup
lxc-start 1354793173.811 DEBUG lxc_conf - created '/usr/local/lib/lxc/rootfs/lxc_putold' directory
lxc-start 1354793173.811 DEBUG lxc_conf - mountpoint for old rootfs is '/usr/local/lib/lxc/rootfs/lxc_putold'
lxc-start 1354793173.811 DEBUG lxc_conf - pivot_root syscall to '/usr/local/lib/lxc/rootfs' successful
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/dev/shm'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/dev/pts'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/dev/mqueue'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/dev/hugepages'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/proc/sys/fs/binfmt_misc'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/security'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/selinux'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/systemd'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpuset'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/cpu,cpuacct'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/memory'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/devices'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/freezer'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/net_cls'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/blkio'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup/perf_event'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/debug'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/kernel/config'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/fuse/connections'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/run/user/1000/gvfs'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/tmp'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/boot/efi'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/home'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/dev'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys/fs/cgroup'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/run'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/boot'
lxc-start 1354793173.812 DEBUG lxc_conf - umounted '/lxc_putold/sys'
lxc-start 1354793173.812 INFO lxc_conf - lazy unmount of '/lxc_putold'
lxc-start 1354793173.812 WARN lxc_conf - failed to unmount '/lxc_putold/proc'
lxc-start 1354793173.813 INFO lxc_conf - created new pts instance
lxc-start 1354793173.813 DEBUG lxc_conf - capabilities has been setup
lxc-start 1354793173.813 NOTICE lxc_conf - 'test1-18' is setup.
lxc-start 1354793173.813 NOTICE lxc_start - exec'ing '/sbin/init'
lxc-start 1354793173.813 NOTICE lxc_start - '/sbin/init' started with pid '17431'
lxc-start 1354793173.813 WARN lxc_start - invalid pid for SIGCHLD
thanks Benoit
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Wed, 2012-12-05 at 13:00 +, John wrote: On 04/12/12 21:29, Michael H. Warfield wrote: I raised the question about LXC/systemd a while back and have been trying to follow the conversation but I have to admit it's going somewhat over my head. I've also been away on another piece of work but would now like to understand where things lie with LXC and systemd inside a container. Ok... I'll try to answer some of them... Thanks Mike, much appreciated. I have just updated my system to 0.8.0 and I can't see any changes to make a systemd container work. Are there changes in 0.8.0 ? There are very significant changes in 0.8.0 but, unfortunately, not the ones you need to get systemd to work in a container. We've been testing a lot of these and they are in git but they are not in a release yet. Hopefully soon, just not yet. If so, I'd be grateful for some guidance on what I need to do to my configuration to make it work. Right now, you'll have to build from git. I will go away and do a git build later today. I presume that would be from git://lxc.git.sourceforge.net/gitroot/lxc/lxc. I'm also happy to help test this if I can. If it helps I am on Arch Linux. There are two problems. One is systemd in an lxc container. I think we have a rope on this one and it's tied down. The other is the more recent (195+) versions of systemd in the host that throw the pivot root errors. That has not been addressed as yet. I use Fedora. Right now, I have Fedora 17 hosts with Fedora 17 containers. Fedora 18 (currently in beta) host (systemd 195) is going to be a train wreck until we sort the pivot root problem. I don't know what you have with Arch Linux. You'll have to tell us what versions of systemd you are running. Ah yes, the pivot root problem. I have worked around this for the time being by doing a mount --make-rprivate /. I created a systemd service on the host as an after dependency on systemd-remount-fs.service to do this.
I believe this is ok in the short term (it appears to work ok for me). Hmmm... I was thinking someone ran into some problems doing that, causing problems with the /dev/pts mounts or some such. Good to note that it worked for you. I'm about to start playing with Fedora 18 Beta where I expect problems. I'll try that out. If I rebuild lxc from git, should I then expect my existing systemd container to work or is there anything else that I need to do? Yeah, one other thing (in addition to following Serge's advice regarding git and #stage)... You have to add an option to the config file for your systemd containers. lxc.autodev = 1 My versions: lxc version: 0.8.0 Linux hydrogen 3.6.8-1-ARCH #1 SMP PREEMPT Mon Nov 26 22:10:40 CET 2012 x86_64 GNU/Linux systemd 196 many thanks everyone. John Mike Thanks, I really appreciate the help. Regards, Mike
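The host-side workaround John describes (a service ordered after systemd-remount-fs.service that runs mount --make-rprivate /) might be written like this. The unit name and [Install] target are my guesses, not from the thread, and a scratch file stands in for the real path under /etc/systemd/system:

```shell
# Sketch: a oneshot unit that makes the root mount tree private again after
# systemd has made it shared, so LXC's pivot_root can succeed.
UNIT=$(mktemp)   # stands in for /etc/systemd/system/lxc-make-rprivate.service
cat > "$UNIT" <<'EOF'
[Unit]
Description=Make / private so LXC pivot_root works
DefaultDependencies=no
After=systemd-remount-fs.service

[Service]
Type=oneshot
ExecStart=/usr/bin/mount --make-rprivate /

[Install]
WantedBy=local-fs.target
EOF
grep '^ExecStart' "$UNIT"
```

On a real host you would then systemctl enable the unit; this sketch only writes the file.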
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Wed, 2012-12-05 at 11:09 -0600, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): You have to add an option to the config file for your systemd containers. lxc.autodev = 1 Phrasing it this way makes me wonder, should lxc look for '$rootfs/dev/console' and automatically set lxc.autodev if that is not found? I'm of two minds here (which, in my case, is a reduction in force, and those two minds are going "where did everybody go?"). That might be an idea for jogging the default. If you don't have '$rootfs/dev/console' then set it. In that obvious case, you need it. If you do have it, obey what's in the config file? We have a bit of a chicken and egg situation. It's not just auto populating /dev but it's also mounting a ramfs partition on it. I'm not sure I'm comfortable with the level of random acts of terrorism that systemd has proven to be capable of if someone accidentally leaves a .../dev/console in their file system so we don't then mount ramfs on dev and we don't then auto populate dev. But... The same situation exists if the user doesn't manually provide the autodev option. But... I would not want us to switch based on the existence of systemd either. Maybe there is some other way we can autodetect this that doesn't depend on those static devices? Overall, I like that idea. It helps idiot-proof the configuration better. (Right now if lxc.autodev is 1 then the tmpfs /dev is mounted before all the lxc.mount.entries and /var/lib/lxc/$c/fstab entries, but I can't right now think of a reason why it has to stay that way. If we were to always set lxc.autodev if /dev is empty, we'd want to make sure any separate /dev has been mounted, of course.) Concur. I think this is a good idea, we just have to watch some of the corner cases. The one I fear the most is the one where someone (ME!) does a yum upgrade of a container that then becomes systemd (F14 - F15) where it used to be Upstart with a static /dev populated. Boom.
Flash of light, mushroom cloud on the horizon. But, again, if I don't fix the bloody config, I'm screwed as well. Self-inflicted injury. I see that. I don't see a downside to adding it. I'm just a little nudgey about relying on it. On the balance... I'm in favor, yeah. -serge Regards, Mike
Re: [Lxc-users] autodev (was Re: [lxc-devel] [GIT] lxc branch, master, updated. 7f99e339363d9f005c9386f60a1d8c0953c85053)
Serge... You need to go in for doing psychic readings or buy lots of lottery tickets because I think you must have been channeling me and reading my mind. I've been close to posting on this off and on for the last couple of days but kept going "I need to test this" and "what about this combination..." On Fri, 2012-11-23 at 12:30 -0600, Serge Hallyn wrote: Quoting Serge Hallyn (serge.hal...@canonical.com): Quoting Michael H. Warfield (m...@wittsend.com): ... [big snip] ... [root@forest Plover]# cat 2012-10-31-06:41:37.log lxc-start 1351680097.900 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty6 Still got the error creating /dev/tty6 but not 1-5. But wait. In the container itself... [mhw@canyon mhw]$ ssh plover.ip6.wittsend.com Last login: Tue Oct 30 18:40:12 2012 from canyon.ip6.wittsend.com [mhw@plover ~]$ sudo -s [root@plover mhw]# ls -l /dev/tty? crw-------. 1 root root 136, 36 Oct 31 06:41 /dev/tty1 crw-------. 1 root root 136, 37 Oct 31 06:41 /dev/tty2 crw-------. 1 root root 136, 38 Oct 31 06:41 /dev/tty3 crw-------. 1 root root 136, 39 Oct 31 06:41 /dev/tty4 crw-------. 1 root root 136, 40 Oct 31 06:41 /dev/tty5 crw-------. 1 root root 136, 41 Oct 31 06:41 /dev/tty6 So it gave an error but did actual work. What error is it giving? Does it list the errno? I meant to track that down to the line that was generating it but the error was what I gave you... Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty6 and I had not dug into it deeper. IAC, with your last change, I consider that error informative and not critical. It's just noise... Seems like it's an off-by-one error somewhere but I just don't see where... The error seems to be printed from the statement on line 643 of lxc/src/lxc/conf.c from that git branch you fed me...
if (access(path, F_OK)) {
	ret = creat(path, 0660);
	if (ret == -1) {
		SYSERROR("error creating %s\n", path);
		/* this isn't fatal, continue */
	} else
		close(ret);
}

So why does it get this error on the last one and not the previous 5? No clue. I've got this narrowed down... If devtty is defined you do one thing and, if not, you drop into the autodev logic. Is that correct? If I use devtty, then the INFO dump says 0 ttys created and, in fact, none are, so I guess I don't know how to use that parameter. devtty can be used with autodev = 1, and was working fine for me in ubuntu containers using autodev = 1. I just need to see if I can better consolidate those codepaths. If you say lxc.devtty = joe then lxc will create a directory /dev/joe in the container, create /dev/joe/console and /dev/joe/tty{1-N}, bind mount the host ptys to those, and then symlink those into /dev. The purpose of this sad hackery is to allow package upgrades which want to do

rm /dev/console
MAKEDEV /dev/console

to succeed. If lxc has bind-mounted the lxc consoles onto /dev/console directly, then 'rm /dev/console' fails because you can't delete the mount target. (This of course results in bad console until a restart with lxc, but lets the container upgrade succeed. Device namespaces, where are you?) Strange, I tried this for several of my containers and it didn't seem to actually do anything for me. The directory was never created and nothing obvious happened (tip of the hat to the old Colossal Caverns Adventure game after which my entire domain, WittsEnd, is named). Orthogonal issue at this point other than your comment about wanting to add a patch. Hi Michael, This seems to have stalled. Let's try and get this fixed in staging right now. Was the final patch (which I *believe* was the one in https://github.com/hallyn/lxc/tree/upstream.nov1.2012.autodev ) working for you? The patch is working well.
I have some strangeness in some containers which are NOT using systemd where, if I enable autodev when it is NOT needed, the resulting devices have wrong (bad, weird) permissions. I've looked right at the code where you do the makedev. It should work. It doesn't make sense. Maybe it's a 32 bit / 64 bit thing (this is a 64 bit host), I don't know. I don't see why but that's an orthogonal issue as well. It works in the containers where I need it and I don't need it in the containers where it doesn't work so I'm good with the patch as it stands. Next issue that's roaring up on us is going to be systemd 195+ and that whole MS_SHARED and pivot root issue but, again, that's all orthogonal to this issue. That's going to bite us in the Fedora 18 time frame and already in Arch Linux. But we need this first. thanks, -serge
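For reference, the devtty mechanism Serge describes earlier in this thread would be driven from the container config roughly like this. The option name is as used in the thread; the directory name is an arbitrary example, and this is a sketch of the behaviour being discussed, not settled syntax:

```
# container config -- hypothetical example of the option discussed above:
# lxc creates /dev/lxc-ttys/{console,tty1..ttyN}, bind mounts the host
# ptys onto them, and symlinks them into /dev, so that
#   rm /dev/console; MAKEDEV console
# can succeed during package upgrades inside the container
lxc.devtty = lxc-ttys
```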
Re: [Lxc-users] autodev (was Re: [lxc-devel] [GIT] lxc branch, master, updated. 7f99e339363d9f005c9386f60a1d8c0953c85053)
On Fri, 2012-11-23 at 14:15 -0600, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): Serge... You need to go in for doing psychic readings or buy lots of lottery tickets because I think you must have been channeling me and reading my mind. I've been close to posting on this off and on for the last couple of days but kept going "I need to test this" and "what about this combination..." All that says to me is that it takes me a few days longer to remember something :) ... The patch is working well. I have some strangeness in some containers which are NOT using systemd where, if I enable autodev when it is NOT needed, the resulting devices have wrong (bad, weird) permissions. I've looked right at the code where you do the makedev. It should work. It doesn't make sense. Maybe it's a 32 bit / 64 bit thing (this is a 64 bit host), I don't know. I don't see why but that's an orthogonal issue as well. It works in the containers where I need it and I don't need it in the containers where it doesn't work so I'm good with the patch as it stands. Ok, I'll go ahead and post the patch one more time for review, then on Monday or Tuesday will push it to the staging branch and to the ubuntu raring package. That should get it sufficient extra testing that if there are subtle issues, we'll shake them out better than we are with the patch sitting in an obscure branch. Concur. thanks, -serge Many thanks! Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Mon, 2012-10-22 at 16:11 +0200, Lennart Poettering wrote: Note that there are reports that LXC has issues with the fact that newer systemd enables shared mount propagation for all mounts by default (this should actually be beneficial for containers as this ensures that new mounts appear in the containers). LXC when run on such a system fails as soon as it tries to use pivot_root(), as that is incompatible with shared mount propagation. This needs fixing in LXC: it should use MS_MOVE or MS_BIND to place the new root dir in / instead. A short term work-around is to simply remount the root tree to private before invoking LXC. In another thread, Serge had some heartburn over this shared mount propagation which then rang a bell in my head about past problems we have seen. On Mon, 2012-11-05 at 08:51 -0600, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): ... This was from another thread with the systemd guys. On Mon, 2012-10-22 at 16:11 +0200, Lennart Poettering wrote: Note that there are reports that LXC has issues with the fact that newer systemd enables shared mount propagation for all mounts by default (this should actually be beneficial for containers as this ensures that new mounts appear in the containers). LXC when run on such a system fails... MS_SLAVE does this as well. MS_SHARED means container mounts also propagate into the host, which is less desirable in most cases. Here's where we've seen some problems in the past. It's not just mounts that are propagated but remounts as well. The problem arose that some of us had our containers on a separate partition. When we would shut a container down, that container tried to remount its file systems ro which then propagated back into the host causing the host's file system to be ro (doesn't happen if you are running on the host's root fs for the containers) and from there across into the other containers. Are you using MS_SHARED or MS_SLAVE for this?
If you are using MS_SHARED, don't you create a potential security problem where actions in the container can bleed into the state of the host and into other containers? That's highly undesirable. If a mount in a container propagates back into the host and is then reflected to another container sharing that same mount tree (I have shared partitions specific to that sort of thing), does that create an information disclosure situation where one container mounts a new file system and the other container sees the new mount? I don't know if the mount propagation would reflect back up the shared tree or not but I have certainly seen remounts do this. I don't see that as desirable. Maybe I'm misunderstanding how this is supposed to work but I intend to test out those scenarios when I have a chance. I do know that, when testing that ro problem, I was able to remount a partition ro in one container and it would switch in the host and the other container and I could then remount it rw in the other container and have it propagate back. Not good. Can you offer any clarity on this? as soon as it tries to use pivot_root(), as that is incompatible with shared mount propagation. This needs fixing in LXC: it should use MS_MOVE or MS_BIND to place the new root dir in / instead. A short term Actually not quite sure how this would work. It should be possible to set up a set of conditions to work around this, but the kernel checks at do_pivotroot are pretty harsh - mnt->mnt_parent of both the new root and current root have to be not shared. So perhaps we actually first chroot into a dir whose parent is non-shared, then pivot_root from there? :) (Simple chroot in place of pivot_root still does not suffice, not only because of chroot escapes, but also different results in /proc/pid/mountinfo and friends) Comments on Serge's points?
At this point, we see where this will become problematical in Fedora 18 but it appears to already be problematical in NixOS, which another user is running and which contains systemd 195 in the host. We've had problems with chroot in the past due to chroot escapes and other problems years ago, as Serge mentioned. Lennart -- Lennart Poettering - Red Hat, Inc. Regards, Mike
Re: [Lxc-users] [lxc-devel] [GIT] lxc branch, master, updated. 7f99e339363d9f005c9386f60a1d8c0953c85053
On Sun, 2012-11-04 at 23:09 +0100, Daniel Lezcano wrote: On 11/01/2012 09:41 PM, Michael H. Warfield wrote: On Thu, 2012-11-01 at 21:20 +0100, Daniel Baumann wrote: On 11/01/2012 09:08 PM, Michael H. Warfield wrote: I know, I KNOW this is an 11th hour request. Can we please get Serge's autodev stuff into this release? Please? release early, release often? just release current git as 0.8.0 now, and the one with the autofoo as 0.8.1 soon after that? That would be ideal but we've been sitting at 0.8.0rc2 for something like 3-1/2 months now. I know Daniel (the other Daniel, the Daniel) has been incredibly busy. I have no objection to getting this out the door as 0.8.0 with a fast bump to 0.8.1 for the systemd stuff, but another several months is not good. Can we get this fast bump? We'll be staring Fedora 18 in the face by then. The working versions of Fedora are no longer in support and we've got more distros adopting systemd. Yeah, I have to admit I have been a very bad maintainer the last months and I apologize for that. Thanks to Serge and Stephane who took the patches and consolidated the next version. I prefer to release a 0.8.0 right now and release a 0.8.1 in a couple of weeks. Would that be ok for you, Michael? PERFECT! Sounds like Serge and I still have some details to work out and I'm continuing to test like mad. Maybe we can get some more of this stuff figured out by then! That would make the next release even better. Regards, Mike
Re: [Lxc-users] pivot root failures when / is mounted as shared
On Sat, 2012-11-03 at 12:01 +0100, Peter Simons wrote: Hi guys, I've been using lxc for a while now, and it's a great tool. Thank you very much for the time and effort you have been dedicating to the development of that software! My Linux distribution (NixOS) is about to switch from upstart to systemd, and this switch in the host system is going to break all my containers. It appears this is a well known problem that's been reported at http://sourceforge.net/tracker/?func=detail&aid=3559833&group_id=163076&atid=826303 and https://github.com/lxc/lxc/issues/4. Now, I wonder what the status of this issue is. Is it clear how that problem can be remedied? Is there maybe a patch that fixes this problem? Does anyone know a work-around that I could use to keep my containers running when that switch to systemd occurs on the host system? Having JUST been up to my ears in this (and still am to some extent) working with Serge and hammering out some of these issues over the last couple of weeks, I think I can speak to some of this. The issue regarding pivot root is not your only problem and may not yet be a current problem. It's really, more or less, something that's going to become a problem (Fedora 19 time frame if I understand it properly from the systemd gang). Not sure which version of systemd this becomes a problem in, but I'm running Fedora 17 with systemd 44 in the host for some time with no problem. If you've tested that and you've seen how it breaks, could you post the version of systemd you are running and the error messages along with some config examples here so we can see them? A real problem is in the systemd based containers where it wants to mount devtmpfs on top of /dev and that breaks all sorts of things (console conflicts, restarts X in the host, all sorts of mess). Serge was finally able to whip together a patch this last week that I've been testing and I now have Fedora 17 containers using systemd running.
We've still got some minor gotchas like console ttys not working yet (systemd won't start them in a container). Currently lxc-console will not work with a systemd container because of the systemd behavior wrt starting getty processes on ttys when it detects that it's in a container. For your distro, the pivot root when / is mounted shared may or may not be a problem but, as of systemd 44, for sure it is not. I still need to do some retesting with systemd 195 (Fedora Rawhide) but my early indications are that it's not a problem or hasn't been for me (but then my container / may not be mounted shared). IAC, you're getting nowhere with it (if you have systemd in the container) until you have the devtmpfs fixes. Right now... To get this to work, you need Serge's autodev branch from git here: git://github.com/hallyn/lxc called upstream.nov1.2012.autodev Build lxc from that. Then, in your systemd based containers, you must add the parameter lxc.autodev = 1 to the config. That will cause tmpfs to be mounted on top of /dev and populated with the device entries which are needed. Adding that to your non-systemd containers should have no negative impact but I have yet to test that fully. Daniel is getting ready to cut the 0.8.0 release which will NOT have the autodev fixes but we're hoping to get a quicker turnaround to get 0.9.0 or something similar out with them and other systemd fixes. If you are having the pivot root problem, this may not help. That will probably have to be addressed sooner or later from what I've read from the systemd guys. Take care, Peter Regards, Mike
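Pulling the recipe above together, the per-container change amounts to one line in the container's config; the container name and path here are hypothetical examples:

```
# e.g. /var/lib/lxc/mycontainer/config -- with lxc built from the
# upstream.nov1.2012.autodev branch, this mounts a tmpfs on /dev and
# populates the device nodes a systemd-based container needs
lxc.autodev = 1
```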
Re: [Lxc-users] pivot root failures when / is mounted as shared
Additional comments specific to the pivot root issue... Adding the developers list as well, since this is a development issue... On Sat, 2012-11-03 at 12:01 +0100, Peter Simons wrote: Hi guys, I've been using lxc for a while now, and it's a great tool. Thank you very much for the time and effort you have been dedicating to the development of that software! My Linux distribution (NixOS) is about to switch from upstart to systemd, and this switch in the host system is going to break all my containers. It appears this is a well known problem that's been reported at http://sourceforge.net/tracker/?func=detail&aid=3559833&group_id=163076&atid=826303 and https://github.com/lxc/lxc/issues/4. Now, I wonder what the status of this issue is. Is it clear how that problem can be remedied? Is there maybe a patch that fixes this problem? Does anyone know a work-around that I could use to keep my containers running when that switch to systemd occurs on the host system? This was from another thread with the systemd guys. On Mon, 2012-10-22 at 16:11 +0200, Lennart Poettering wrote: Note that there are reports that LXC has issues with the fact that newer systemd enables shared mount propagation for all mounts by default (this should actually be beneficial for containers as this ensures that new mounts appear in the containers). LXC when run on such a system fails as soon as it tries to use pivot_root(), as that is incompatible with shared mount propagation. This needs fixing in LXC: it should use MS_MOVE or MS_BIND to place the new root dir in / instead. A short term work-around is to simply remount the root tree to private before invoking LXC. Lennart -- Lennart Poettering - Red Hat, Inc. So there you have a suggested workaround for the shared mount propagation problem, which is what you are referring to. ITMT... Daniel, Serge? Any thoughts on those comments vis-a-vis the pivot function and using MS_MOVE or MS_BIND instead?
IIRC, we switched to pivot_root() years ago to deal with some other issues that were plaguing us. Take care, Peter
Re: [Lxc-users] [lxc-devel] [GIT] lxc branch, master, updated. 7f99e339363d9f005c9386f60a1d8c0953c85053
On Thu, 2012-11-01 at 20:15 -0400, Michael H. Warfield wrote: On Thu, 2012-11-01 at 19:17 -0400, Michael H. Warfield wrote: On Thu, 2012-11-01 at 23:28 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-11-01 at 22:44 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-11-01 at 21:20 +0100, Daniel Baumann wrote: On 11/01/2012 09:08 PM, Michael H. Warfield wrote: I know, I KNOW this is an 11th hour request. Can we please get Serge's autodev stuff into this release? Please? release early, release often? just release current git as 0.8.0 now, and the one with the autofoo as 0.8.1 soon after that? That would be ideal but we've been sitting at 0.8.0rc2 for something like 3-1/2 months now. I know Daniel (the other Daniel, the Daniel) has been incredibly busy. I have no objection to getting this out the door as 0.8.0 with a fast bump to 0.8.1 for the systemd stuff, but another several months is not good. Can we get this fast bump? We'll be staring Fedora 18 in the face by then. The working versions of Fedora are no longer in support and we've got more distros adopting systemd. I think this will end up slated for 0.9.0 (which we're hoping will be soon), but in any case I went ahead and created a branch at git://github.com/hallyn/lxc called upstream.nov1.2012.autodev, with an autodev patch on top of Daniel's latest push. I quickly tried my hand at fixing the error you had with /dev/ttyN. I haven't tested that bit. I will not be able to be online at all from now until weekend or monday, so if it needs more tweaks please feel free to 'just fix it'. Problem. Works for the systemd containers but not for my older containers. I get this... 
[root@forest Plover]# cat 2012-10-30-18:17:46.log lxc-start 1351635466.998 ERROR lxc_conf - Operation not permitted - error 1 creating /usr/lib64/lxc/rootfs/dev/tty6 lxc-start 1351635466.999 ERROR lxc_conf - failed to setup the ttys for 'Plover' lxc-start 1351635466.999 ERROR lxc_start - failed to setup the container lxc-start 1351635466.999 ERROR lxc_sync - invalid sequence number 1. expected 2 lxc-start 1351635466.999 ERROR lxc_start - failed to spawn 'Plover' Alcove (the systemd container) was the first one started so it may be an ordinal thing or it may be a systemd thing. But it's a problem. Hm, perhaps the container doesn't have mknod? They all should have, but I will investigate. Those devices would have existed in the static file system with /dev. Could it be a problem with the device already existing in the /dev directory? Ok... Now this is just bloody weird. I do not understand this. Yes the containers come up. But... Here's what shows up in the detached container's log... [root@forest Audience]# cat 2012-10-30-18:52:41.log lxc-start 1351637562.011 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty6 Now wait a minute... What about 1, 2, 3, 4, and 5??? They succeeded but 6 failed? How does that make any sense? In the container... crw-rw-rw- 1 root root 5, 0 Apr 13 2006 tty crw--w---- 1 root tty 136, 16 Oct 30 2012 tty1 crw--w---- 1 root tty 136, 17 Oct 30 2012 tty2 crw--w---- 1 root tty 136, 18 Oct 30 2012 tty3 crw--w---- 1 root tty 136, 19 Oct 30 2012 tty4 crw--w---- 1 root tty 136, 20 Oct 30 2012 tty5 crw--w---- 1 root tty 136, 21 Oct 30 2012 tty6 Ok... That's probably from a couple of days ago. But no error messages for the others and they are not freshly made either... That was a CentOS 5 container. Trying it with another Fedora container but removed the tty? entries. No errors. Hmmm... Wait... Another problem... Container Plover...
[mhw@plover ~]$ who mhw pts/9 2012-10-30 19:47 (forest.ip6.wittsend.com) [mhw@plover ~]$ sudo -s sudo: sorry, you must have a tty to run sudo What? Ok... The problem with the container plover appears to have been an error on my part. I think I cleaned out too many tty devices. Carefully only removed tty? devices from the static /dev in that container and started it up... [root@forest Plover]# cat 2012-10-31-06:41:37.log lxc-start 1351680097.900 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty6 Still got the error creating /dev/tty6 but not 1-5. But wait. In the container itself... [mhw@canyon mhw]$ ssh plover.ip6.wittsend.com Last login: Tue Oct 30 18:40:12 2012 from canyon.ip6.wittsend.com [mhw@plover ~]$ sudo -s [root@plover mhw]# ls -l /dev/tty? crw-------. 1 root root 136, 36 Oct 31 06:41 /dev
Re: [Lxc-users] [lxc-devel] [GIT] lxc branch, master, updated. 7f99e339363d9f005c9386f60a1d8c0953c85053
On Thu, 2012-11-01 at 21:20 +0100, Daniel Baumann wrote: On 11/01/2012 09:08 PM, Michael H. Warfield wrote: I know, I KNOW this is an 11th hour request. Can we please get Serge's autodev stuff into this release? Please? release early, release often? just release current git as 0.8.0 now, and the one with the autofoo as 0.8.1 soon after that? That would be ideal but we've been sitting at 0.8.0rc2 for something like 3-1/2 months now. I know Daniel (the other Daniel, the Daniel) has been incredibly busy. I have no objection to getting this out the door as 0.8.0 with a fast bump to 0.8.1 for the systemd stuff, but another several months is not good. Can we get this fast bump? We'll be staring Fedora 18 in the face by then. The working versions of Fedora are no longer in support and we've got more distros adopting systemd. -- Address: Daniel Baumann, Donnerbuehlweg 3, CH-3012 Bern Email: daniel.baum...@progress-technologies.net Internet: http://people.progress-technologies.net/~daniel.baumann/ Regards, Mike
Re: [Lxc-users] [lxc-devel] [GIT] lxc branch, master, updated. 7f99e339363d9f005c9386f60a1d8c0953c85053
On Thu, 2012-11-01 at 22:44 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-11-01 at 21:20 +0100, Daniel Baumann wrote: On 11/01/2012 09:08 PM, Michael H. Warfield wrote: I know, I KNOW this is an 11th hour request. Can we please get Serge's autodev stuff into this release? Please? release early, release often? just release current git as 0.8.0 now, and the one with the autofoo as 0.8.1 soon after that? That would be ideal but we've been sitting at 0.8.0rc2 for something like 3-1/2 months now. I know Daniel (the other Daniel, the Daniel) has been incredibly busy. I have no objection to getting this out the door as 0.8.0 with a fast bump to 0.8.1 for the systemd stuff, but another several months is not good. Can we get this fast bump? We'll be staring Fedora 18 in the face by then. The working versions of Fedora are no longer in support and we've got more distros adopting systemd. I think this will end up slated for 0.9.0 (which we're hoping will be soon), but in any case I went ahead and created a branch at git://github.com/hallyn/lxc called upstream.nov1.2012.autodev, with an autodev patch on top of Daniel's latest push. I quickly tried my hand at fixing the error you had with /dev/ttyN. I haven't tested that bit. I will not be able to be online at all from now until weekend or monday, so if it needs more tweaks please feel free to 'just fix it'. Problem. Works for the systemd containers but not for my older containers. I get this... [root@forest Plover]# cat 2012-10-30-18:17:46.log lxc-start 1351635466.998 ERROR lxc_conf - Operation not permitted - error 1 creating /usr/lib64/lxc/rootfs/dev/tty6 lxc-start 1351635466.999 ERROR lxc_conf - failed to setup the ttys for 'Plover' lxc-start 1351635466.999 ERROR lxc_start - failed to setup the container lxc-start 1351635466.999 ERROR lxc_sync - invalid sequence number 1.
expected 2 lxc-start 1351635466.999 ERROR lxc_start - failed to spawn 'Plover' Alcove (the systemd container) was the first one started so it may be an ordinal thing or it may be a systemd thing. But it's a problem. (there are also some todos in the commit msg - if we're going to wait for 0.9.0 then I can handle those later, and port the patch on top of the 100 additional patches queued in github.com/lxc/lxc#staging) -serge Regards, Mike
Re: [Lxc-users] [lxc-devel] [GIT] lxc branch, master, updated. 7f99e339363d9f005c9386f60a1d8c0953c85053
On Thu, 2012-11-01 at 19:17 -0400, Michael H. Warfield wrote: On Thu, 2012-11-01 at 23:28 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-11-01 at 22:44 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-11-01 at 21:20 +0100, Daniel Baumann wrote: On 11/01/2012 09:08 PM, Michael H. Warfield wrote: I know, I KNOW this is an 11th hour request. Can we please get Serge's autodev stuff into this release? Please? release early, release often? just release current git as 0.8.0 now, and the one with the autofoo as 0.8.1 soon after that? That would be ideal but we've been sitting at 0.8.0rc2 for something like 3-1/2 months now. I know Daniel (the other Daniel, the Daniel) has been incredibly busy. I have no objection to getting this out the door as 0.8.0 with a fast bump to 0.8.1 for the systemd stuff, but another several months is not good. Can we get this fast bump? We'll be staring Fedora 18 in the face by then. The working versions of Fedora are no longer in support and we've got more distros adopting systemd. I think this will end up slated for 0.9.0 (which we're hoping will be soon), but in any case I went ahead and created a branch at git://github.com/hallyn/lxc called upstream.nov1.2012.autodev, with an autodev patch on top of Daniel's latest push. I quickly tried my hand at fixing the error you had with /dev/ttyN. I haven't tested that bit. I will not be able to be online at all from now until weekend or monday, so if it needs more tweaks please feel free to 'just fix it'. Problem. Works for the systemd containers but not for my older containers. I get this... 
[root@forest Plover]# cat 2012-10-30-18:17:46.log
lxc-start 1351635466.998 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty6
lxc-start 1351635466.999 ERROR lxc_conf - failed to setup the ttys for 'Plover'
lxc-start 1351635466.999 ERROR lxc_start - failed to setup the container
lxc-start 1351635466.999 ERROR lxc_sync - invalid sequence number 1. expected 2
lxc-start 1351635466.999 ERROR lxc_start - failed to spawn 'Plover'
Alcove (the systemd container) was the first one started so it may be an ordinal thing or it may be a systemd thing. But it's a problem. Hm, perhaps the container doesn't have mknod? They all should have, but I will investigate. Those devices would have existed in the static file system with /dev. Could it be a problem with the device already existing in the /dev directory? Ok... Now this is just bloody weird. I do not understand this. Yes, the containers come up. But... Here's what shows up in the detached container's log...
[root@forest Audience]# cat 2012-10-30-18:52:41.log
lxc-start 1351637562.011 ERROR lxc_conf - Operation not permitted - error creating /usr/lib64/lxc/rootfs/dev/tty6
Now wait a minute... What about 1, 2, 3, 4, and 5??? They succeeded but 6 failed? How does that make any sense? In the container...
crw-rw-rw- 1 root root 5, 0 Apr 13 2006 tty
crw--w---- 1 root tty 136, 16 Oct 30 2012 tty1
crw--w---- 1 root tty 136, 17 Oct 30 2012 tty2
crw--w---- 1 root tty 136, 18 Oct 30 2012 tty3
crw--w---- 1 root tty 136, 19 Oct 30 2012 tty4
crw--w---- 1 root tty 136, 20 Oct 30 2012 tty5
crw--w---- 1 root tty 136, 21 Oct 30 2012 tty6
Ok... That's probably from a couple of days ago. But no error messages for the others, and they are not freshly made either... That was a CentOS 5 container. Trying it with another Fedora container but removed the tty? entries. No errors. Hmmm... Wait... Another problem... Container Plover...
[mhw@plover ~]$ who
mhw pts/9 2012-10-30 19:47 (forest.ip6.wittsend.com)
[mhw@plover ~]$ sudo -s
sudo: sorry, you must have a tty to run sudo
What? Sigh... No problem in Alcove (F17):
[mhw@canyon mhw]$ ssh alcove.ip6.wittsend.com
Last login: Wed Oct 31 01:51:39 2012 from canyon.ip6.wittsend.com
[mhw@alcove ~]$ sudo -s
[root@alcove mhw]#
Back to Audience (CentOS 5) and removed /dev/tty?: No errors as seen before. Successfully created tty1-6. Sudo works. WTH? There's something wrong here. Audience and Plover do not have autodev enabled. Why has this changed? Shouldn't that be under the autodev switch as well? I've updated the git tree to not fail when mknod is denied. It should just spit out an error, and presumably another one when it next tries to bind mount the pty onto it, but that's ok. Does git://github.com/hallyn/lxc#upstream.nov1.2012.autodev work better now? (I'm logging off now, so if not I probably won't know for some time) Working now. All containers are up. When you get back
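An aside on the "device already existing" theory raised above: it is easy to check from an unprivileged shell that mknod refuses to create a node over an existing path, and that the errno in that case is EEXIST ("File exists") rather than the EPERM ("Operation not permitted") seen in the logs, which argues the failure is a capability issue, not a pre-existing node. A minimal sketch (using a FIFO, since creating character devices needs CAP_MKNOD; the path is illustrative):

```shell
# Sketch: mknod fails with EEXIST when the path is already populated,
# e.g. by a static /dev shipped inside the container rootfs.
tmp=$(mktemp -d)
touch "$tmp/tty6"                 # stands in for a pre-existing /dev/tty6
mknod "$tmp/tty6" p 2>&1 || echo "mknod refused: node already present"
rm -rf "$tmp"
```

If the tty6 failures were really about pre-existing nodes, the log would say "File exists" rather than "Operation not permitted".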
Re: [Lxc-users] [lxc-devel] [GIT] lxc branch, master, updated. 7f99e339363d9f005c9386f60a1d8c0953c85053
f1ccde27c038e7fb7e538913505248b36ddd9e65
Author: Serge Hallyn serge.ha...@ubuntu.com
Date: Tue Aug 21 09:56:03 2012 -0500

ubuntu and debian templates: Clean up cache if cache build is interrupted
Otherwise the next lxc-create may rsync a bad cache.
Signed-off-by: Serge Hallyn serge.hal...@ubuntu.com

commit 4a311c1241805dac5893918854fd40f77b2b6f49
Author: Serge Hallyn serge.ha...@ubuntu.com
Date: Thu Aug 16 21:11:50 2012 -0500

Cleanup partial container if -h was passed to template
If user calls 'lxc-create -t ubuntu -- -h' (as opposed to 'lxc-create -t ubuntu -h') then the ubuntu template will print its help then exit 0. Then lxc-create does not cleanup. So detect this in lxc-create.

commit 4d5fb23ad827eda17b64676f527c3f168cd56ebd
Author: Serge Hallyn serge@amd1.(none)
Date: Fri Jul 20 10:38:15 2012 -0500

lxc-clone: fix handling of lxc.mount entries
The 'lxc.mount =' entry can have more than one space, or tabs, before the =. We only need to disambiguate from 'lxc.mount.entry'. So just check for a space or tab after mount.
Signed-off-by: Serge Hallyn serge.hal...@ubuntu.com

commit 8b892c55b077d1716eb130e76f9c9725ecb0f73a
Author: Serge Hallyn serge.ha...@ubuntu.com
Date: Thu Jul 19 17:54:54 2012 -0500

lxc-clone: change uuid on xfs
Otherwise after cloning an lvm+xfs container you can't run the original and clone at the same time.
Based on a patch by Maurizio Sambati posted at https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1013549
Signed-off-by: Serge Hallyn serge.hal...@ubuntu.com
---
Summary of changes:
Makefile.am | 2 +-
configure.ac | 8 +-
doc/lxc-attach.sgml.in | 11 ++-
doc/lxc-cgroup.sgml.in | 7 +-
doc/lxc-checkpoint.sgml.in | 8 +-
doc/lxc-console.sgml.in | 8 +-
doc/lxc-create.sgml.in | 12 ++--
doc/lxc-destroy.sgml.in | 7 +-
doc/lxc-execute.sgml.in | 10 ++--
doc/lxc-freeze.sgml.in | 4 +-
doc/lxc-kill.sgml.in | 4 +-
doc/lxc-ls.sgml.in | 5 +-
doc/lxc-monitor.sgml.in | 4 +-
doc/lxc-ps.sgml.in | 14 ++--
doc/lxc-restart.sgml.in | 11 ++--
doc/lxc-shutdown.sgml.in | 7 +-
doc/lxc-start.sgml.in | 16 +++--
doc/lxc-stop.sgml.in | 4 +-
doc/lxc-unfreeze.sgml.in | 4 +-
doc/lxc-wait.sgml.in | 6 +-
lxc.spec.in | 16 +++--
src/lxc/cgroup.c | 9 ++-
src/lxc/conf.c | 51 +-
src/lxc/conf.h | 6 --
src/lxc/lxc-clone.in | 34 ++
src/lxc/lxc-create.in | 14 -
src/lxc/lxc-destroy.in | 17 --
src/lxc/lxc-ls.in | 5 +-
src/lxc/lxc-ps.in | 4 +-
src/lxc/lxc-setcap.in | 3 -
src/lxc/lxc-setuid.in | 3 -
src/lxc/lxc_start.c | 16 -
src/lxc/namespace.h | 4 +
src/lxc/start.c | 15 +++-
templates/lxc-altlinux.in | 4 +
templates/lxc-archlinux.in | 4 +
templates/lxc-busybox.in | 30 -
templates/lxc-debian.in | 19 +-
templates/lxc-fedora.in | 55 +++-
templates/lxc-lenny.in | 6 ++-
templates/lxc-opensuse.in | 3 +
templates/lxc-sshd.in | 48 --
templates/lxc-ubuntu-cloud.in | 144 -
templates/lxc-ubuntu.in | 43 +---
44 files changed, 465 insertions(+), 240 deletions(-)
hooks/post-receive
___ Lxc-devel mailing list lxc-de...@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/lxc-devel
Re: [Lxc-users] [lxc-devel] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Wed, 2012-10-31 at 18:15 +0100, Serge Hallyn wrote: Can you tell me the exact git tree and branch you are using? I'm using head. I'm not specifying a tree. The results you're getting don't make sense to me... Hoping I can find a simple answer. Me too. -serge Regards, Mike
Re: [Lxc-users] [lxc-devel] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Wed, 2012-10-31 at 18:30 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Wed, 2012-10-31 at 18:15 +0100, Serge Hallyn wrote: Can you tell me the exact git tree and branch you are using? I'm using head. I'm not specifying a tree. ? I'm not sure what you mean - are you using git://github.com/lxc/lxc, or the tree on lxc.sf.net? IOW, I'm not using a branch in the tree. I'm using the main trunk. Created my tree with - git clone git://github.com/lxc/lxc So the former, not the latter. The results you're getting don't make sense to me... Hoping I can find a simple answer. Me too. -serge Mike
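To answer the "which tree" question unambiguously, git itself can report what a checkout points at. A self-contained sketch (run against a throwaway repo here, with the same clone URL Mike quotes, so nothing is fetched):

```shell
# Inspect a checkout to see which remote tree it tracks.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git remote add origin git://github.com/lxc/lxc
git remote -v     # which tree this checkout came from
# In a real clone, `git rev-parse --abbrev-ref HEAD` names the branch
# and `git describe --always` pins the exact commit being built.
```

Quoting that output in a bug report removes the "head of what?" ambiguity entirely.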
Re: [Lxc-users] [lxc-devel] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sun, 2012-10-28 at 23:02 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): : I did see some errors setting up that dev...
--
[root@forest mhw]# lxc-start -n Alcove
lxc-start: No such file or directory - failed to mount '/dev/pts/59'->'/usr/lib64/lxc/rootfs/dev/tty1'
lxc-start: No such file or directory - failed to mount '/dev/pts/60'->'/usr/lib64/lxc/rootfs/dev/tty2'
lxc-start: No such file or directory - failed to mount '/dev/pts/61'->'/usr/lib64/lxc/rootfs/dev/tty3'
lxc-start: No such file or directory - failed to mount '/dev/pts/62'->'/usr/lib64/lxc/rootfs/dev/tty4'
lxc-start: No such file or directory - failed to mount '/dev/pts/63'->'/usr/lib64/lxc/rootfs/dev/tty5'
lxc-start: No such file or directory - failed to mount '/dev/pts/64'->'/usr/lib64/lxc/rootfs/dev/tty6'
systemd 44 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP; fedora)
Welcome to Fedora 17 (Beefy Miracle)!
--
Not sure what that's all about but, since systemd isn't going to start getty's on the tty? interfaces anyways, it probably doesn't make much difference. Oh, I see. Yeah, in the !lxc.ttydir case, when we created our own /dev we should create the tty files. I need to fix that. Well... I'm not sure I understand what you mean by that. The /dev/pts/* entries do exist in the host system. But the bind mount fails saying they do not exist. Not sure I understand what the problem is here but I would like them connected even if systemd doesn't start getty's on them. I've used them for other purposes in the past. Of course in your case since systemd isn't going to start getty's on them, you should not have the lxc.tty = 6 in your container config, which it looks like you still do? Actually, I've decided this is worthy of debugging and there may be other ways to start a getty (or something else) on that tty. It really should work. Regards, Mike
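For reference, the two knobs in play in this exchange, with illustrative values (a sketch, not a recommendation for any particular container):

```
# Number of /dev/ttyN devices lxc-start sets up ptys for; Serge's point
# is that a systemd container starting no gettys on them can use 0.
lxc.tty = 0

# Ubuntu's scheme (explained later in the thread): put the tty files in
# /dev/lxc and make /dev/ttyN a symlink, so package upgrades that delete
# /dev/ttyN don't fight the bind mount.
# lxc.ttydir = lxc
```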
Re: [Lxc-users] [lxc-devel] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Tue, 2012-10-30 at 19:35 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Sun, 2012-10-28 at 23:02 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): : I did see some errors setting up that dev...
--
[root@forest mhw]# lxc-start -n Alcove
lxc-start: No such file or directory - failed to mount '/dev/pts/59'->'/usr/lib64/lxc/rootfs/dev/tty1'
lxc-start: No such file or directory - failed to mount '/dev/pts/60'->'/usr/lib64/lxc/rootfs/dev/tty2'
lxc-start: No such file or directory - failed to mount '/dev/pts/61'->'/usr/lib64/lxc/rootfs/dev/tty3'
lxc-start: No such file or directory - failed to mount '/dev/pts/62'->'/usr/lib64/lxc/rootfs/dev/tty4'
lxc-start: No such file or directory - failed to mount '/dev/pts/63'->'/usr/lib64/lxc/rootfs/dev/tty5'
lxc-start: No such file or directory - failed to mount '/dev/pts/64'->'/usr/lib64/lxc/rootfs/dev/tty6'
systemd 44 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP; fedora)
Welcome to Fedora 17 (Beefy Miracle)!
--
Not sure what that's all about but, since systemd isn't going to start getty's on the tty? interfaces anyways, it probably doesn't make much difference. Oh, I see. Yeah, in the !lxc.ttydir case, when we created our own /dev we should create the tty files. I need to fix that. Well... I'm not sure I understand what you mean by that. The /dev/pts/* entries do exist in the host system. But the bind mount fails saying they do not exist. Not sure I understand what the In Ubuntu, we use lxc.ttydir = lxc. That means the actual /dev/ttyN in the container is a symlink to /dev/lxc/ttyN (to allow package upgrades which insist on removing /dev/ttyN to succeed - they will fail if /dev/ttyN is mounted over). When /dev was pre-populated, /dev/ttyN existed. But when we are populating it, it does not. So before we try to mount /dev/pts/NN from the host onto /dev/ttyN in the container, we have to create a file to bind mount over.
I didn't put that in the patch. Yet. Got it! You and I both did the same thing. I'm thinking from the view of Fedora and you're thinking from the view of Ubuntu. Got it. We'll get this right yet. :-)=) problem is here but I would like them connected even if systemd doesn't start getty's on them. I've used them for other purposes in the past. Of course in your case since systemd isn't going to start getty's on them, you should not have the lxc.tty = 6 in your container config, which it looks like you still do? Actually, I've decided this is worthy of debugging and there may be other ways to start a getty (or something else) on that tty. It really should work. Regards, Mike
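The lxc.ttydir layout Serge describes can be sketched with plain files. This is an illustration only, with paths standing in for the container's /dev; in real use LXC creates the symlinks and bind-mounts host ptys onto the files under /dev/lxc:

```shell
# Illustration of the lxc.ttydir = lxc layout: /dev/ttyN is a symlink
# into /dev/lxc, so a package manager can unlink /dev/ttyN freely while
# the pty stays mounted on /dev/lxc/ttyN.
devdir=$(mktemp -d)               # stands in for the container's /dev
mkdir -p "$devdir/lxc"
touch "$devdir/lxc/tty1"          # file the host pty gets bind-mounted over
ln -s lxc/tty1 "$devdir/tty1"     # what the container sees as /dev/tty1
readlink "$devdir/tty1"
```

The point of the indirection: removing and recreating the symlink /dev/tty1 never touches the mount on /dev/lxc/tty1.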
Re: [Lxc-users] [lxc-devel] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Mon, 2012-10-29 at 10:18 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): ... Yeah, I don't think I need to play a game like this anymore. I'd have to go back through some old old E-Mails to see why I did that before. I seem to recall we were playing with all sorts of bind mount options for some PRIVATE thing or another. It may not be necessary at all any longer. IAC, it's minor to switch it back. I seem to recall switching back and forth using bind mounts several times back when that got done that way. I may play with the pre-mount hook just for giggles and see how that might work as well. Any idea why I was experiencing the problem with the mount hook when trying to populate /dev? I know it wouldn't have The only idea I have is that perhaps your root is MS_SHARED by default? Can you post the script you were using and the container config? Another point on the curve... The documentation says that pre-mount takes place before the mount occurs and mount takes place after the mount occurs. Only problem is that pre-mount is not being recognized.
lxc-start 1351627853.032 ERROR lxc_confile - unknown key lxc.hook.pre-mount
This is the same binaries from git that recognize lxc.hook.mount, so I'm assuming the doco and the code don't match at this point. Even without my original bind mount, if I have a mount hook that does something in a newly mounted tmpfs directory, it doesn't show up in that directory; it shows up in the parent directory as if it ran before the mounts took place. I could put the mount in the hook.mount script and then do it, but it's seriously acting like the pre-mount hook isn't even there (parameter unknown) and the mount hook is running before the mounts are complete. Simple excerpts from some test scripts doing, really, nothing but testing sequencing... Ok... Let's try this. I won't post entire configs but... For machine Alcove (my testbed)...
--
lxc.tty = 6
lxc.pts = 64
lxc.rootfs = /srv/lxc/private/Alcove
lxc.mount.entry=none /srv/lxc/private/Alcove/dev.tmp tmpfs defaults 0 0
lxc.mount.entry=/home/shared /srv/lxc/private/Alcove/srv/shared none ro,bind 0 0
lxc.autodev = 1
# lxc.hook.pre-mount = /var/lib/lxc/Alcove/pre-mount
lxc.hook.mount = /var/lib/lxc/Alcove/mount
lxc.mount.entry=shmfs /var/lib/lxc/private/Alcove/dev/shm tmpfs mode=0644 0 0
--
Now /var/lib/lxc/Alcove/mount:
--
#!/bin/sh -
touch /srv/lxc/private/Alcove/dev.tmp/mounted
--
In that directory on the host fs I have this:
[root@forest mhw]# touch /srv/lxc/private/Alcove/dev.tmp/no-mounted
[root@forest mhw]# ls /srv/lxc/private/Alcove/dev.tmp/
no-mounted
Now, when I start the container, the tmpfs should get mounted on /dev.tmp in the container (relative to the container rootfs) and should have the single file "mounted" in it, while the parent file system back on the host should have the single file "no-mounted" in it. Let's see... lxc-start -n Alcove... In the container...
[mhw@alcove ~]$ mount | grep tmpfs
none on /dev type tmpfs (rw,relatime,seclabel,size=100k)
none on /dev.tmp type tmpfs (rw,relatime,seclabel)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,seclabel,mode=755)
tmpfs on /media type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel,mode=755)
Looks like the mount took place. I have tmpfs on /dev.tmp.
[mhw@alcove ~]$ ls /dev.tmp/
[mhw@alcove ~]$
Oops... Where did the file end up? Let's look on the host...
[mhw@forest ~]$ ls /srv/lxc/private/Alcove/dev.tmp/
mounted no-mounted
Arg... Wrong answer. It ended up in the parent file system before tmpfs was mounted. But the documentation says hook.mount runs after the mounts have completed. There's something wrong here or I am badly mistaken in my understanding... (Probably the latter, I'll admit.)
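The hook under test reduces to a script of the following shape. This is a sketch following the thread's own dev.template convention, not an LXC interface: populate_dev is a hypothetical helper, and the rootfs path is passed in explicitly since the hook environment in this tree is still unsettled. It is exercised here against a scratch directory so the logic can be checked without mounting anything:

```shell
# Sketch of a mount-hook body: copy a pre-built /dev template into the
# container's /dev once its tmpfs is (supposed to be) mounted.
populate_dev() {
    rootfs="$1"
    [ -d "$rootfs/dev.template" ] || return 0
    cp -a "$rootfs/dev.template/." "$rootfs/dev/"
}

# Exercise it against a scratch directory standing in for the rootfs.
tmp=$(mktemp -d)
mkdir -p "$tmp/dev.template" "$tmp/dev"
touch "$tmp/dev.template/console" "$tmp/dev.template/null"
populate_dev "$tmp"
ls "$tmp/dev"                     # console and null now present
```

The open question in the thread is not this script but when LXC runs it: if the hook fires before (or outside) the container's mount namespace, the copy lands in the parent filesystem exactly as observed above.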
Regards, Mike
worked because of the /dev/pts mount but I have more heartburn in that it looks like it ran too early and the mount on /dev had not even taken place at that time. I believe I can see why... You're doing the autodev populate prior to any of the mounts being performed, so that private root file system is not bound to the directory at that time. Drop that bind mount for the rootfs and this then worked like a charm:
--
lxc.rootfs = /srv/lxc/private/Alcove
lxc.mount.entry=/home/shared /srv/lxc/private/Alcove/srv/shared none ro,bind 0 0
lxc.autodev = 1
--
I think that rootfs directory bind was an effort to more fully match the OpenVZ behavior but also trying to deal with some of the read-only problems we were having in the past with shutdowns. If it won't work, it won't work and I won't miss it. I did see some errors setting up that dev...
--
[root@forest mhw]# lxc-start -n Alcove
lxc-start: No such file or directory
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sun, 2012-10-28 at 18:52 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Sat, 2012-10-27 at 13:51 -0400, Michael H. Warfield wrote: On Sat, 2012-10-27 at 13:40 -0400, Michael H. Warfield wrote: /me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added these lines to my config...
lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0
lxc.hook.mount = /var/lib/lxc/Plover/mount
In /var/lib/lxc/Plover/mount I have this:
--
rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/
--
(This is just testing out the concepts.) If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something is not right here... I changed it to lxc.hook.start = /srv/lxc/mount (where I put the script in the container) which then works, but that then requires the template and the command to be in the container. Suboptimal to say the least. But it gives me a way to test this tmpfs thing out. I also noticed that the .start hook runs, it appears, after caps are dropped and I see a lot of bitching about mknod on certain devices. I had to throw an exit 0 into that script so it would continue in spite of the errors but, now, I can refine my template based on what it could create. Crap. I've got a catch-22 here... This is going to take some work. Hey, I've got a rather minimal patch (appended below) to add the support for mounting and populating a minimal /dev working.
(A few hours were wasted due to my not knowing that upstart was going to issue mounted-dev even though /dev was mounted before upstart started - and the mounted-dev hook deletes and recreates all consoles. GAH) Yes, we can create the /dev directory with tmpfs from a template. Problem is that /dev/pts does not exist at the time we need to mount the devpts on /dev/pts for the pty's, so that hurls chunks and dies. We can't create the /dev/ directory contents prior to mounting in the pre-mount hook because we won't have tmpfs in place at the time. We have to get tmpfs mounted on /dev and then create /dev/pts and then mount the ptys in there. There has to be a mkdir in between those two mount actions. Simplest solution would seem to be to add some logic to the mount logic that says test if the directory exists and, if not, create it. I'm not sure of the consequences of that, though. I don't see a way to make this happen with hooks. It's almost like we need an on-mount per mount hook. Should be moot given my patch, which I intend to push this week, but why couldn't a lxc.hook.mount do the whole thing, mount /dev and populate it? I wasn't thinking a lxc.hook.start, for the reasons you encountered, but I assume you tried lxc.hook.mount and it failed? See my other comment about lxc.hook.mount. I tried using it to populate /dev but it showed up in the parent of the mount, underneath the tmpfs mount. It was like it ran pre-mount. I tried it for several different combinations and couldn't get it to go. Would still have the problem with mounting /dev/pts, which would take place before the mount hook had run to mount the file system and populate it. That actually MIGHT work (gotta think on this now) if I used lxc.hook.pre-mount and mounted tmpfs over /dev, and populated it, but then I run into a problem where I was using a bind-mount for the rootfs. Might still work. I'll test your patch out first though. Patch below: snip Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sun, 2012-10-28 at 18:52 +0100, Serge Hallyn wrote: Should be moot given my patch, which I intend to push this week, but why couldn't a lxc.hook.mount do the whole thing, mount /dev and populate it? I wasn't thinking a lxc.hook.start, for the reasons you encountered, but I assume you tried lxc.hook.mount and it failed? Patch below: Patch failed against 0.8.0rc2 and git root. Even with loose patching for whitespace, had failures... This was against git:
[mhw@forest lxc]$ patch -p1 -l < ../lxc-autodev.patch
patching file src/lxc/conf.c
Hunk #1 succeeded at 616 (offset -3 lines).
Hunk #2 succeeded at 633 (offset -3 lines).
Hunk #3 succeeded at 839 (offset -3 lines).
Hunk #4 succeeded at 2203 (offset -66 lines).
patching file src/lxc/conf.h
Hunk #1 FAILED at 227.
1 out of 1 hunk FAILED -- saving rejects to file src/lxc/conf.h.rej
patching file src/lxc/confile.c
Hunk #1 FAILED at 77.
Hunk #2 FAILED at 118.
Hunk #3 succeeded at 854 with fuzz 2 (offset 1 line).
2 out of 3 hunks FAILED -- saving rejects to file src/lxc/confile
Which version is the patch against? I'm trying to manually integrate those failed hunks now. Shouldn't be too difficult for only three. Regards, Mike
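The failed hunks above behave the same way in miniature. The file and patch below are synthetic stand-ins (not the real conf.h or autodev patch), just to show where patch leaves the pieces when a hunk's context doesn't match the tree being patched:

```shell
# Demonstrate a hunk failing against mismatched context: patch saves the
# unapplied hunk to <file>.rej for manual integration.
tmp=$(mktemp -d)
cd "$tmp"
printf '1\nX\n3\n4\n' > conf.h           # local file drifted from the base
cat > fix.patch <<'EOF'
--- conf.h
+++ conf.h
@@ -1,4 +1,4 @@
 1
-2
+two
 3
 4
EOF
patch conf.h < fix.patch || true         # hunk FAILED -- saving rejects
[ -f conf.h.rej ] && echo "reject saved for manual merge"
```

Each .rej file holds exactly the hunks that did not apply, so "manually integrating" them means reading the .rej and editing the target file by hand, as Mike describes.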
Re: [Lxc-users] [lxc-devel] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sun, 2012-10-28 at 18:52 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Sat, 2012-10-27 at 13:51 -0400, Michael H. Warfield wrote: On Sat, 2012-10-27 at 13:40 -0400, Michael H. Warfield wrote: /me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added these lines to my config...
lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0
lxc.hook.mount = /var/lib/lxc/Plover/mount
In /var/lib/lxc/Plover/mount I have this:
--
rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/
--
(This is just testing out the concepts.) If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something is not right here... I changed it to lxc.hook.start = /srv/lxc/mount (where I put the script in the container) which then works, but that then requires the template and the command to be in the container. Suboptimal to say the least. But it gives me a way to test this tmpfs thing out. I also noticed that the .start hook runs, it appears, after caps are dropped and I see a lot of bitching about mknod on certain devices. I had to throw an exit 0 into that script so it would continue in spite of the errors but, now, I can refine my template based on what it could create. Crap. I've got a catch-22 here... This is going to take some work. Hey, I've got a rather minimal patch (appended below) to add the support for mounting and populating a minimal /dev working.
(A few hours were wasted due to my not knowing that upstart was going to issue mounted-dev even though /dev was mounted before upstart started - and the mounted-dev hook deletes and recreates all consoles. GAH) I am happy to report that, after manually editing my git head branch to patch in the failed hunks, I was able to build it and test it, and my Fedora 17 systemd based container fired right up after adding the lxc.autodev = 1 parameter to the config file. Yeah, I did run into one gotcha, but one I can live with. I had been bind mounting the private root file system to another directory and then using that as the rootfs like this:
--
lxc.rootfs = /srv/lxc/rootfs
lxc.mount.entry=/srv/lxc/private/Alcove /srv/lxc/rootfs none bind,shared 0 0
lxc.mount.entry=/home/shared /srv/lxc/private/Alcove/srv/shared none ro,bind 0 0
lxc.autodev = 1
--
This did not work and I got the startup error that it can not mount to /dev because it doesn't exist. I believe I can see why... You're doing the autodev populate prior to any of the mounts being performed, so that private root file system is not bound to the directory at that time. Drop that bind mount for the rootfs and this then worked like a charm:
--
lxc.rootfs = /srv/lxc/private/Alcove
lxc.mount.entry=/home/shared /srv/lxc/private/Alcove/srv/shared none ro,bind 0 0
lxc.autodev = 1
--
I think that rootfs directory bind was an effort to more fully match the OpenVZ behavior but also trying to deal with some of the read-only problems we were having in the past with shutdowns. If it won't work, it won't work and I won't miss it. I did see some errors setting up that dev...
--
[root@forest mhw]# lxc-start -n Alcove
lxc-start: No such file or directory - failed to mount '/dev/pts/59'->'/usr/lib64/lxc/rootfs/dev/tty1'
lxc-start: No such file or directory - failed to mount '/dev/pts/60'->'/usr/lib64/lxc/rootfs/dev/tty2'
lxc-start: No such file or directory - failed to mount '/dev/pts/61'->'/usr/lib64/lxc/rootfs/dev/tty3'
lxc-start: No such file or directory - failed to mount '/dev/pts/62'->'/usr/lib64/lxc/rootfs/dev/tty4'
lxc-start: No such file or directory - failed to mount '/dev/pts/63'->'/usr/lib64/lxc/rootfs/dev/tty5'
lxc-start: No such file or directory - failed to mount '/dev/pts/64'->'/usr/lib64/lxc/rootfs/dev/tty6'
systemd 44 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP; fedora)
Welcome to Fedora 17 (Beefy Miracle)!
--
Not sure what that's all about but, since systemd isn't going to start getty's on the tty? interfaces anyways, it probably doesn't make much difference. Regards, Mike Yes, we can create the /dev directory with tmpfs from a template. Problem is that /dev/pts does not exist at the time we need to mount the devpts on /dev/pts for the pty's, so that hurls chunks and dies. We can't create the /dev/ directory contents prior to mounting in the pre-mount hook because we won't have tmpfs in place at the time. We have to get
Re: [Lxc-users] [lxc-devel] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sun, 2012-10-28 at 23:02 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Sun, 2012-10-28 at 18:52 +0100, Serge Hallyn wrote: : I've got a rather minimal patch (appended below) to add the support for mounting and populating a minimal /dev working. (A few hours were wasted due to my not knowing that upstart was going to issue mounted-dev even though /dev was mounted before upstart started - and the mounted-dev hook deletes and recreates all consoles. GAH) I am happy to report that, after manually editing my git head branch to Sorry, it was against the ubuntu quantal package. I've been in the air without onboard wifi, so working with what I had at hand. Oh, I figured it was a package mismatch. Wasn't too terribly difficult to patch those hunks in and kick out a diff against git. patch in the failed hunks, I was able to build it and test it and my Fedora 17 systemd based container fired right up after adding the lxc.autodev = 1 parameter to the config file. Yeah, I did run into one gotcha, but one I can live with. I had been bind mounting the private root file system to another directory and then using that as the rootfs like this:
--
lxc.rootfs = /srv/lxc/rootfs
lxc.mount.entry=/srv/lxc/private/Alcove /srv/lxc/rootfs none bind,shared 0 0
lxc.mount.entry=/home/shared /srv/lxc/private/Alcove/srv/shared none ro,bind 0 0
lxc.autodev = 1
--
This did not work and I got the startup error that it can not mount to /dev because it doesn't exist. Hm, yeah. If you do need to play a game like this, you might be best off using a pre-mount hook for that. Yeah, I don't think I need to play a game like this anymore. I'd have to go back through some old old E-Mails to see why I did that before. I seem to recall we were playing with all sorts of bind mount options for some PRIVATE thing or another. It may not be necessary at all any longer. IAC, it's minor to switch it back.
I seem to recall switching back and forth using bind mounts several times back when that got done that way. I may play with the pre-mount hook just for giggles and see how that might work as well. Any idea why I was experiencing the problem with the mount hook when trying to populate /dev? I know it wouldn't have worked because of the /dev/pts mount but I have more heartburn in that it looks like it ran too early and the mount on /dev had not even taken place at that time. I believe I can see why... You're doing the autodev populate prior to any of the mounts being performed, so that private root file system is not bound to the directory at that time. Drop that bind mount for the rootfs and this then worked like a charm: -- lxc.rootfs = /srv/lxc/private/Alcove lxc.mount.entry=/home/shared /srv/lxc/private/Alcove/srv/shared none ro,bind 0 0 lxc.autodev = 1 -- I think that rootfs directory bind was an effort to more fully match the OpenVZ behavior but also trying to deal with some of the read-only problems we were having in the past with shutdowns. If it won't work, it won't work and I won't miss it. I did see some errors setting up that dev... -- [root@forest mhw]# lxc-start -n Alcove lxc-start: No such file or directory - failed to mount '/dev/pts/59'-'/usr/lib64/lxc/rootfs/dev/tty1' lxc-start: No such file or directory - failed to mount '/dev/pts/60'-'/usr/lib64/lxc/rootfs/dev/tty2' lxc-start: No such file or directory - failed to mount '/dev/pts/61'-'/usr/lib64/lxc/rootfs/dev/tty3' lxc-start: No such file or directory - failed to mount '/dev/pts/62'-'/usr/lib64/lxc/rootfs/dev/tty4' lxc-start: No such file or directory - failed to mount '/dev/pts/63'-'/usr/lib64/lxc/rootfs/dev/tty5' lxc-start: No such file or directory - failed to mount '/dev/pts/64'-'/usr/lib64/lxc/rootfs/dev/tty6' systemd 44 running in system mode. (+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP; fedora) Welcome to Fedora 17 (Beefy Miracle)!
-- Not sure what that's all about but, since systemd isn't going to start getty's on the tty? interfaces anyways, it probably doesn't make much difference. Oh, I see. Yeah, in the !lxc.ttydir case, when we created our own /dev we should create the tty files. I need to fix that. Cool. Once again... Looks like we got some real progress here with this one. I've still got more testing to do, undoing some of my changes in the container itself and making sure it all still works. Also looks like I can stop and restart one of these containers now without the hung cgroup directory. I suspected it was some stray devices behind that. This is good. Of course in your case since systemd isn't going to start getty's on them, you should not have the lxc.tty = 6 in your container config, which it looks like you still do? Yeah. I was taking it one step at a time. I wish they WOULD fire up
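Serge's pre-mount hook suggestion above would look roughly like this in the config; this is only a sketch, with the hook script path made up (lxc.hook.pre-mount is the hook key that runs before the fstab / lxc.mount.entry processing):

```
# Sketch: the bind-mount config from above, with the rootfs bind moved
# into a pre-mount hook (hook script path is hypothetical).
lxc.rootfs = /srv/lxc/rootfs
lxc.hook.pre-mount = /var/lib/lxc/Alcove/pre-mount
lxc.mount.entry=/home/shared /srv/lxc/private/Alcove/srv/shared none ro,bind 0 0
lxc.autodev = 1
```

where the pre-mount script would perform the `mount --bind /srv/lxc/private/Alcove /srv/lxc/rootfs` that the dropped lxc.mount.entry used to do, so the bind is in place before lxc-start mounts /dev.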
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
/me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added these lines to my config... lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0 lxc.hook.mount = /var/lib/lxc/Plover/mount In /var/lib/lxc/Plover/mount I have this: -- rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/ -- (This is just testing out the concepts.) If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something not right here... Regards, Mike -- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it! signature.asc Description: This is a digitally signed message part -- WINDOWS 8 is here. Millions of people. Your app in 30 days. Visit The Windows 8 Center at Sourceforge for all your go to resources. http://windows8center.sourceforge.net/ join-generation-app-and-make-money-coding-fast/___ Lxc-users mailing list Lxc-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/lxc-users
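One source of confusion with mount hooks is which path the freshly mounted filesystem is actually visible at when the hook runs. Later LXC releases export LXC_ROOTFS_MOUNT (among other LXC_* variables) to hooks for exactly this reason; whether the git tree being tested in this thread already does so is an assumption. A minimal sketch of the hook's copy step, using the template layout from the thread:

```shell
#!/bin/sh
# Sketch of the mount hook's copy step. Paths follow the thread's layout;
# LXC_ROOTFS_MOUNT is only available in later LXC releases (assumption).
populate_from_template() {
    template="$1"   # e.g. /srv/lxc/private/Plover/dev.template
    target="$2"     # e.g. "$LXC_ROOTFS_MOUNT/dev.tmp" once the tmpfs is mounted
    # cp -a preserves modes and ownership; the thread's rsync -avAH
    # additionally preserves ACLs and hardlinks.
    cp -a "$template/." "$target/"
}
```

Called as `populate_from_template "$LXC_ROOTFS_MOUNT/dev.template" "$LXC_ROOTFS_MOUNT/dev.tmp"`, the copy lands in whatever is mounted at that path in the hook's mount context, which is precisely the point of dispute above.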
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sat, 2012-10-27 at 13:40 -0400, Michael H. Warfield wrote: /me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added these lines to my config... lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0 lxc.hook.mount = /var/lib/lxc/Plover/mount In /var/lib/lxc/Plover/mount I have this: -- rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/ -- (This is just testing out the concepts.) If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something not right here... I changed it to lxc.hook.start = /srv/lxc/mount (where I put the script in the container) which then works but that then requires the template and the command to be in the container. Suboptimal to say the least. But it gives me a way to test this tmpfs thing out. I also noticed that the .start hook runs, it appears, after caps are dropped and I see a lot of bitching about mknod on certain devices. I had to throw an exit 0 into that script so it would continue in spite of the errors but, now, I can refine my template based on what it could create. Regards, Mike
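The "throw an exit 0 into that script" workaround described above can be folded into the script itself, so each failing mknod is tolerated individually rather than aborting the whole hook. A sketch (device numbers are the standard Linux ones; the path layout is illustrative):

```shell
#!/bin/sh
# Sketch of a tolerant /dev population script for a start-time hook:
# capabilities may already be dropped, so mknod can fail; log and carry
# on instead of letting the failure kill the container start.
populate_dev() {
    dev="$1"
    mkdir -p "$dev/pts" "$dev/shm"
    # Standard char-device numbers; each may fail without CAP_MKNOD.
    mknod -m 666 "$dev/null"    c 1 3 2>/dev/null || echo "skipped $dev/null"
    mknod -m 666 "$dev/zero"    c 1 5 2>/dev/null || echo "skipped $dev/zero"
    mknod -m 666 "$dev/full"    c 1 7 2>/dev/null || echo "skipped $dev/full"
    mknod -m 666 "$dev/random"  c 1 8 2>/dev/null || echo "skipped $dev/random"
    mknod -m 666 "$dev/urandom" c 1 9 2>/dev/null || echo "skipped $dev/urandom"
    mknod -m 666 "$dev/tty"     c 5 0 2>/dev/null || echo "skipped $dev/tty"
    return 0    # the in-line equivalent of the 'exit 0' mentioned above
}
```

Running `populate_dev /dev` inside the container then refines the device set to whatever the remaining capabilities allow, which is what the template tuning above is doing by hand.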
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sat, 2012-10-27 at 13:51 -0400, Michael H. Warfield wrote: On Sat, 2012-10-27 at 13:40 -0400, Michael H. Warfield wrote: /me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added these lines to my config... lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0 lxc.hook.mount = /var/lib/lxc/Plover/mount In /var/lib/lxc/Plover/mount I have this: -- rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/ -- (This is just testing out the concepts.) If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something not right here... I changed it to lxc.hook.start = /srv/lxc/mount (where I put the script in the container) which then works but that then requires the template and the command to be in the container. Suboptimal to say the least. But it gives me a way to test this tmpfs thing out. I also noticed that the .start hook runs, it appears, after caps are dropped and I see a lot of bitching about mknod on certain devices. I had to throw an exit 0 into that script so it would continue in spite of the errors but, now, I can refine my template based on what it could create. Crap. I've got a catch-22 here... This is going to take some work. Yes, we can create the /dev directory with tmpfs from a template. Problem is that /dev/pts does not exist at the time we need to mount the devpts on /dev/pts for the pty's so that hurls chunks and dies. We can't create the /dev/ directory contents prior to mounting in the pre-mount hook because we won't have tmpfs in place at the time.
We have to get tmpfs mounted on /dev and then create /dev/pts and then mount the ptys in there. There has to be a mkdir in between those two mount actions. Simplest solution would seem to be to add some logic to the mount logic that says test if directory exists and, if not, create it. I'm not sure of the consequences of that, though. I don't see a way to make this happen with hooks. It's almost like we need an on-mount, per-mount hook. Regards, Mike
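The "create the mount point if it doesn't exist" logic proposed above is tiny; the hard part is wedging it between the two mounts inside lxc-start's mount path. A sketch of both, under the thread's assumptions (the lxc.autodev support discussed elsewhere in this thread performs this sequence internally):

```shell
#!/bin/sh
# Sketch of the proposed mount-point creation logic.
ensure_mountpoint() {
    [ -d "$1" ] || mkdir -p "$1"
}

# The ordering the thread needs, with the mkdir between the two mounts
# (illustration only; the real mounts require root and a container rootfs):
#   mount -t tmpfs  none   "$rootfs/dev"
#   ensure_mountpoint      "$rootfs/dev/pts"
#   mount -t devpts devpts "$rootfs/dev/pts"
```

No hook point exists between those two mounts, which is exactly why the message above wishes for a per-mount hook.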
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sat, 2012-10-27 at 19:44 +0100, Colin Guthrie wrote: 'Twas brillig, and Michael H. Warfield at 26/10/12 18:18 did gyre and gimble: What the hell is this? /var/run is symlinked to /run and is mounted with a tmpfs. Yup, that's how /var/run and /run are being handled these days. It provides a consistent space to pass info from the initrd over to the main system and has various other uses also. Interesting. I hadn't considered that aspect of it before. Very interesting. If you want to ensure files are created in this folder, just drop a config file into /usr/lib/tmpfiles.d/ in the package in question. See man systemd-tmpfiles for more info. NOW THAT is something else I needed to know about! Thank you very very much! Learned something new. This whole thing has been a massive learning experience getting this container kick started. Could be some packages are not fully upgraded to this concept in F17. As a non-fedora user, I can't really comment on that specifically. As it turns out, the kernel has had some of our patches applied that I wasn't aware of vis-a-vis reboot/halt and this should no longer be an issue. I'm still struggling with the tmpfs on /dev thing and have run into a catch-22 with regards to that. I can mount tmpfs on /dev just fine and can populate it just fine in a post mount hook but, then, we're trying to mount a devpts file system on /dev/pts before we've had a chance to populate it and it's then crashing on the mount. Sigh... I think that's going to now have to wait for Serge or Daniel to comment on. Col -- Colin Guthrie gmane(at)colin.guthr.ie http://colin.guthr.ie/ Day Job: Tribalogic Limited http://www.tribalogic.net/ Open Source: Mageia Contributor http://www.mageia.org/ PulseAudio Hacker http://www.pulseaudio.org/ Trac Hacker http://trac.edgewall.org/ Regards, Mike
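The tmpfiles.d mechanism Colin points to takes one-line declarations; a minimal sketch (the package and path names here are made up for illustration):

```
# /usr/lib/tmpfiles.d/myservice.conf  -- hypothetical package file
# type  path                 mode  uid   gid   age
d       /run/myservice       0755  root  root  -
f       /run/myservice/flag  0644  root  root  -
```

systemd-tmpfiles --create applies these at boot, recreating the entries after /run's tmpfs comes up empty; the full type list is in the systemd-tmpfiles man page mentioned above.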
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Adding in the lxc-devel list. On Thu, 2012-10-25 at 22:59 -0400, Michael H. Warfield wrote: On Thu, 2012-10-25 at 15:42 -0400, Michael H. Warfield wrote: On Thu, 2012-10-25 at 14:02 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 13:23 -0400, Michael H. Warfield wrote: Hey Serge, On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: ... Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ok... I've done some cursory search and turned up nothing but some comments about pre mount hooks. Where is the documentation about this feature and how I might use / implement it? Some examples would probably suffice. Is there a required release version of lxc-utils? I think I found what I needed in the changelog here: http://www.mail-archive.com/lxc-devel@lists.sourceforge.net/msg01490.html I'll play with it and report back. Also the Lifecycle management hooks section in https://help.ubuntu.com/12.10/serverguide/lxc.html This isn't working... Based on what was in both of those articles, I added this entry to another container (Plover) to test... lxc.hook.mount = /var/lib/lxc/Plover/mount When I run lxc-start -n Plover, I see this: [root@forest ~]# lxc-start -n Plover lxc-start: unknow key lxc.hook.mount lxc-start: failed to read configuration file I'm running the latest rc... [root@forest ~]# rpm -qa | grep lxc lxc-0.8.0.rc2-1.fc16.x86_64 lxc-libs-0.8.0.rc2-1.fc16.x86_64 lxc-doc-0.8.0.rc2-1.fc16.x86_64 Is it something in git that hasn't made it to a release yet? nm... I see it. It's in git and hasn't made it to a release. I'm working on a git build to test now. If this is something that solves some of this, we need to move things along here and get these things moved out. According to git, 0.8.0rc2 was 7 months ago? What are the show stoppers here?
While the git repo says 7 months ago, the date stamp on the lxc-0.8.0-rc2 tarball is from July 10, so about 3-1/2 months ago. Sounds like we've accumulated some features (like the hooks) we are going to need like months ago to deal with this systemd debacle. How close are we to either 0.8.0rc3 or 0.8.0? Any blockers or are we just waiting on some more features? Note that I'm thinking that having lxc-start guess how to fill in /dev is wrong, because different distros and even different releases of the same distros have different expectations. For instance ubuntu lucid wants /dev/shm to be a directory, while precise+ wants a symlink. So somehow the template should get involved, be it by adding a hook, or simply specifying a configuration file which lxc uses internally to decide how to create /dev. I agree this needs to be by some sort of convention or template that we can adjust. Personally I'd prefer if /dev were always populated by the templates, and containers (i.e. userspace) didn't mount a fresh tmpfs for /dev. But that does complicate userspace, and we've seen it in debian/ubuntu as well (i.e. at certain package upgrades which rely on /dev being cleared after a reboot). -serge Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Fri, 2012-10-26 at 12:11 -0400, Michael H. Warfield wrote: On Thu, 2012-10-25 at 23:38 +0200, Lennart Poettering wrote: On Thu, 25.10.12 11:59, Michael H. Warfield (m...@wittsend.com) wrote: I SUSPECT the hang condition is something to do with systemd trying to start an interactive console on /dev/console, which sysvinit and upstart do not do. Yes, this is documented, please see the link I already posted, and which I linked above a second time. This may have been my fault. I was using the -o option to lxc-start (output logfile) and failed to specify the -c (console output redirect) option. It seems to fire up nicely (albeit with other problems) with that additional option. Continuing my research. Confirming. Using the -c option for the console file works. Unfortunately, with no gettys on the ttys (so lxc-console does not work), no way to connect to that console redirect, and the network failing to start, I'm still trying to figure out just what is face planting in a container I cannot access. :-/=/ Punching out the punch list one PUNCH at a time here. I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run to which /var/run is symlinked. We're doing halt / restart detection by monitoring utmp in that directory but it looks like utmp isn't even in that directory anymore and mounting tmpfs on it was always problematical. We may have to have a more generic method to detect when a container has shut down or is restarting in that case. I can't parse this. The system call reboot() is virtualized for containers just fine and the container manager (i.e. LXC) can check for that easily. Apparently, in recent kernels, we can. Unfortunately, I'm still finding that I cannot restart a container I have previously halted. I have no problem with sysvinit and upstart systems on this host, so it is a container problem peculiar to systemd containers. Continuing to research that problem.
Lennart -- Lennart Poettering - Red Hat, Inc. Regards, Mike
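Putting the two flags together, the invocation that avoids the /dev/console hang would look like this (log and console file paths are made up; flags as described for the lxc-start of this era):

```
# start detached, logging lxc-start output and redirecting the container
# console so a systemd guest isn't left talking to a detached /dev/console:
lxc-start -n Alcove -d -o /var/log/lxc/Alcove.log -c /var/log/lxc/Alcove.console
```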
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Sorry for taking a few days to get back on this. I was delivering a guest lecture up at Fordham University last Tuesday so I was out of pocket a couple of days or I would have responded sooner... On Mon, 2012-10-22 at 16:59 -0400, Michael H. Warfield wrote: On Mon, 2012-10-22 at 22:50 +0200, Lennart Poettering wrote: On Mon, 22.10.12 11:48, Michael H. Warfield (m...@wittsend.com) wrote: To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the hosts console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE. Please initialize a minimal tmpfs on /dev. systemd will then work fine. My containers have a reasonable /dev that work with Upstart just fine but they are not on tmpfs. Is mounting tmpfs on /dev and recreating that minimal /dev required? Well, it can be any kind of mount really. Just needs to be a mount. And the idea is to use tmpfs for this. What /dev are you currently using? It's probably not a good idea to reuse the hosts' /dev, since it contains so many device nodes that should not be accessible/visible to the container. Got it. And that explains the problems we're seeing but also what I'm seeing in some libvirt-lxc related pages, which is a separate and distinct project in spite of the similarities in the name... http://wiki.1tux.org/wiki/Lxc/Installation#Additional_notes Unfortunately, in our case, merely getting a mount in there is a complication in that it also has to be populated but, at least, we understand the problem set now. Ok... 
Serge and I were corresponding on the lxc-users list and he had a suggestion that worked but which I consider to be a bit of a sub-optimal workaround. Ironically, it was to mount devtmpfs on /dev. We don't (currently) have a method to auto-populate a tmpfs mount with the needed devices and this provided it. It does have a problem that makes me uncomfortable in that the container now has visibility into the host's /dev system. I'm a security expert and I'm not comfortable with that solution even with the controls we have. We can control access but still, not happy with that. I now have a container that starts with systemd running more or less properly. We do have some problems with the convention that has been set up, however. When running in this mode, you run on the console and you don't spawn getty's on the tty's. There seems to be a problem with this. In this mode, if I manually start the container in a terminal window, that eventually results in a login prompt there. Under sysvinit and upstart I don't get that and can detach. If I run lxc-console (which attaches to one of the vtys) it gives me nothing. Under sysvinit and upstart I get vty login prompts because they have started getty on those vtys. This is important in case network access has not started for one reason or another and the container was started detached in the background. If I start lxc-start in detached mode (-d -o {logfile}) lxc-start redirects the system console to the log file and goes daemon. In this case, the systemd container hangs and never starts. I SUSPECT the hang condition is something to do with systemd trying to start an interactive console on /dev/console, which sysvinit and upstart do not do. Maybe we have to do something different with the redirects in this case, but it's not working consistent with the other packages. We should also start appropriate gettys on those vtys if they are configured. Maybe start the getty's if the tty?
exists up to a configured limit (and don't restart if they immediately fail) and obviously don't start them if they don't. It then gives up control over that process. Also don't start a login on /dev/console if you DO start a getty? That would make your behavior congruent with that of the other two systems. I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run to which /var/run is symlinked to. We're doing halt / restart detection by monitoring utmp in that directory but it looks like utmp isn't even in that directory anymore and mounting tmpfs on it was always problematical. We may have to have a more generic method to detect when a container has shut down or is restarting in that case. I'm also finding we end up with dangling resources where we can't remove to cgroup directories after a halt and that creates a serious problem I have
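One way to give lxc-console something to attach to in a systemd guest is to enable gettys on the vtys from the host side, by creating the wants-symlinks that systemctl enable would create. A sketch, assuming the guest lays its units out in the usual systemd locations (the rootfs path is illustrative):

```shell
#!/bin/sh
# Enable getty@ttyN inside a container rootfs by creating the
# getty.target.wants symlink that 'systemctl enable getty@ttyN' makes.
enable_getty() {
    rootfs="$1"; n="$2"
    wants="$rootfs/etc/systemd/system/getty.target.wants"
    mkdir -p "$wants"
    # Link target is the guest's unit path; dangling on the host is fine.
    ln -sf /usr/lib/systemd/system/getty@.service "$wants/getty@tty$n.service"
}
```

For example, `for n in 1 2 3 4; do enable_getty /srv/lxc/private/Alcove "$n"; done` would enable four vty gettys, matching the sysvinit/upstart behavior described above.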
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): Sorry for taking a few days to get back on this. I was delivering a guest lecture up at Fordham University last Tuesday so I was out of pocket a couple of days or I would have responded sooner... On Mon, 2012-10-22 at 16:59 -0400, Michael H. Warfield wrote: On Mon, 2012-10-22 at 22:50 +0200, Lennart Poettering wrote: On Mon, 22.10.12 11:48, Michael H. Warfield (m...@wittsend.com) wrote: To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the hosts console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE. Please initialize a minimal tmpfs on /dev. systemd will then work fine. My containers have a reasonable /dev that work with Upstart just fine but they are not on tmpfs. Is mounting tmpfs on /dev and recreating that minimal /dev required? Well, it can be any kind of mount really. Just needs to be a mount. And the idea is to use tmpfs for this. What /dev are you currently using? It's probably not a good idea to reuse the hosts' /dev, since it contains so many device nodes that should not be accessible/visible to the container. Got it. And that explains the problems we're seeing but also what I'm seeing in some libvirt-lxc related pages, which is a separate and distinct project in spite of the similarities in the name... 
http://wiki.1tux.org/wiki/Lxc/Installation#Additional_notes Unfortunately, in our case, merely getting a mount in there is a complication in that it also has to be populated but, at least, we understand the problem set now. Ok... Serge and I were corresponding on the lxc-users list and he had a suggestion that worked but I consider to be a bit of a sub-optimal workaround. Ironically, it was to mount devtmpfs on /dev. We don't Oh, sorry - I take back that suggestion :) Well, it worked (sort of) and reinforced what the problem was and where the solution lay so there's no need to be sorry for it. We learned and we know why it's not the right solution. This is good. We made a lot of progress on this just in the last week. This is very good. Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ah, now that is interesting. I haven't looked at that before. I need to explore that further. Or, if everyone is going to need it, we could just add a 'lxc.populatedevs = 1' option which does that without needing a hook. Eventually, with Fedora (and later RHEL / CentOS / SL), Arch Linux, and others going to systemd, I think this is going to be needed sooner than later. devtmpfs should not be used in containers :) Concur! -serge Regards, Mike
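The 'lxc.populatedevs' idea floated here shows up elsewhere in this thread as lxc.autodev, which mounts a tmpfs on /dev and populates a minimal device set without a hook. A minimal config sketch using the paths quoted in the thread:

```
# Sketch: the option discussed above, as it appears in the patched tree
# tested later in this thread.
lxc.rootfs = /srv/lxc/private/Alcove
lxc.autodev = 1
```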
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Hey Serge, On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): Sorry for taking a few days to get back on this. I was delivering a guest lecture up at Fordham University last Tuesday so I was out of pocket a couple of days or I would have responded sooner... On Mon, 2012-10-22 at 16:59 -0400, Michael H. Warfield wrote: On Mon, 2012-10-22 at 22:50 +0200, Lennart Poettering wrote: On Mon, 22.10.12 11:48, Michael H. Warfield (m...@wittsend.com) wrote: To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the hosts console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE. Please initialize a minimal tmpfs on /dev. systemd will then work fine. My containers have a reasonable /dev that work with Upstart just fine but they are not on tmpfs. Is mounting tmpfs on /dev and recreating that minimal /dev required? Well, it can be any kind of mount really. Just needs to be a mount. And the idea is to use tmpfs for this. What /dev are you currently using? It's probably not a good idea to reuse the hosts' /dev, since it contains so many device nodes that should not be accessible/visible to the container. Got it. And that explains the problems we're seeing but also what I'm seeing in some libvirt-lxc related pages, which is a separate and distinct project in spite of the similarities in the name... 
http://wiki.1tux.org/wiki/Lxc/Installation#Additional_notes Unfortunately, in our case, merely getting a mount in there is a complication in that it also has to be populated but, at least, we understand the problem set now. Ok... Serge and I were corresponding on the lxc-users list and he had a suggestion that worked but I consider to be a bit of a sub-optimal workaround. Ironically, it was to mount devtmpfs on /dev. We don't Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ok... I've done some cursory search and turned up nothing but some comments about pre mount hooks. Where is the documentation about this feature and how I might use / implement it? Some examples would probably suffice. Is there a required release version of lxc-utils? Or, if everyone is going to need it, we could just add a 'lxc.populatedevs = 1' option which does that without needing a hook. devtmpfs should not be used in containers :) -serge Regards, Mike