Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 07/12/12 00:48, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 06/12/12 20:06, Dan Kegel wrote: On Thu, Dec 6, 2012 at 12:00 PM, John l...@jelmail.com wrote: While on the subject, any reason for lxc-destroy now being destructive? Wait, isn't that the point? It's in the name and all. When was it ever nondestructive? It only destroyed the configuration in /var/lib and never deleted the root filesystem until very recently (0.8.0, I guess). Was your rootfs a symbolic link by chance? I'm guessing commit 55116c42e767ce795f796fc51cd2ef7d76cf18af is what you're seeing. Before that it did remove the rootfs, but if your rootfs was a symlink it happened to not do it. That wasn't by intent. Perhaps lxc-destroy should take a flag to not delete the rootfs? Not sure...

Ah, I can now see what is wrong. It isn't down to symlinks but because my rootfs isn't under /var/lib/lxc. Looking at that commit, I can see that the remove (on line 126) deletes $lxc_path/$lxc_name but does not explicitly remove $rootdev. The new code added at line 122 does indeed remove $rootdev. In my case I have my container rootfs in a directory called /srv/test.i686 (i.e. not underneath $lxc_path, /var/lib/lxc). I guess the design assumes that a template is used to create a container and that it would put the rootfs beneath /var/lib/lxc/test. So the commit fixes an anomaly but leaves me unsure of a couple of things:

1. What is the correct way to update a container config without removing the rootfs? I have always used destroy/create to do this but that, clearly, won't work if the destroy phase removes the rootfs. I like being able to separately manage the rootfs from its configuration.

2. Is it wrong to have the rootfs outside of /var/lib/lxc? I have a small /var but use a large dedicated partition for my root filesystem directories. I suspect I need to look at using per-container lvm volumes, something that makes sense but I haven't delved into yet.
I would value having options to preserve the rootfs when doing lxc-destroy and for lxc-create to use an existing rootfs (i.e. instead of a template). Thanks very much for the help. -- LogMeIn Rescue: Anywhere, Anytime Remote support for IT. Free Trial Remotely access PCs and mobile devices and provide instant support Improve your efficiency, and focus on delivering more value-add services Discover what IT Professionals Know. Rescue delivers http://p.sf.net/sfu/logmein_12329d2d ___ Lxc-users mailing list Lxc-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting Michael H. Warfield (m...@wittsend.com): You have to add an option to the config file for your systemd containers. lxc.autodev = 1

I would like to understand a bit more about what this option does and learn the correct way of creating devices inside a container. With autodev, if I understand correctly, LXC creates a 100KB tmpfs for /dev, overmounting any existing /dev. It creates a pts subdirectory plus the devices listed in struct lxc_devs (src/lxc/conf.c): null, zero, full, urandom, random, tty and console. What do I do if I need more than those devices in /dev? To date, I have manually used mknod to create devices during the process of creating a rootfs (i.e. I create them beforehand, on the host). I see the comment in the source about providing a file, so I guess this is being thought about? I like being able to do things in the main config file, so perhaps supporting options like lxc.dev = name mask type maj min?

Also, I can't work out what the autodev option does that allows systemd to work. It's a bit over my head but I'd like to understand if I can. What's the difference between a /dev that is on the rootfs and a /dev that is created and over-mounted?

systemd inside container = issues so far:
- creating devices in /dev
- no vty devices (cannot use lxc-console)

Regards, John
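John's manual approach of pre-creating extra device nodes on the host can be sketched as a small script. Everything here is illustrative: the helper name, the device list and the DRY_RUN toggle are mine, not an LXC feature. It defaults to just printing the mknod commands; actually creating the nodes requires root.

```shell
#!/bin/sh
# Sketch of pre-creating extra device nodes in a container rootfs with
# mknod, as described above. Hypothetical helper, not part of LXC.
# Defaults to a dry run that prints the commands; set DRY_RUN=0 and run
# as root to create the nodes for real.
ROOTFS=${ROOTFS:-/srv/test.i686}
DRY_RUN=${DRY_RUN:-1}

make_dev() {
    # $1=name $2=type(c|b) $3=major $4=minor $5=mode
    cmd="mknod -m $5 $ROOTFS/dev/$1 $2 $3 $4"
    if [ "$DRY_RUN" = 1 ]; then
        echo "$cmd"
    else
        $cmd
    fi
}

# Two example nodes beyond the seven that autodev provides
make_dev fuse c 10 229 666
make_dev loop0 b 7 0 660
```

Note that with lxc.autodev = 1 this only helps if the nodes are recreated after LXC mounts its tmpfs over /dev, since the over-mount hides anything pre-created on the rootfs, which is part of what the thread is discussing.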
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting John (l...@jelmail.com): On 07/12/12 00:48, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 06/12/12 20:06, Dan Kegel wrote: On Thu, Dec 6, 2012 at 12:00 PM, John l...@jelmail.com wrote: While on the subject, any reason for lxc-destroy now being destructive? Wait, isn't that the point? It's in the name and all. When was it ever nondestructive? It only destroyed the configuration in /var/lib and never deleted the root filesystem until very recently (0.8.0, I guess). Was your rootfs a symbolic link by chance? I'm guessing commit 55116c42e767ce795f796fc51cd2ef7d76cf18af is what you're seeing. Before that it did remove the rootfs, but if your rootfs was a symlink it happened to not do it. That wasn't by intent. Perhaps lxc-destroy should take a flag to not delete the rootfs? Not sure...

Ah, I can now see what is wrong. It isn't down to symlinks but because my rootfs isn't under /var/lib/lxc. Looking at that commit, I can see that the remove (on line 126) deletes $lxc_path/$lxc_name but does not explicitly remove $rootdev. The new code added at line 122 does indeed remove $rootdev. In my case I have my container rootfs in a directory called /srv/test.i686 (i.e. not underneath $lxc_path, /var/lib/lxc). I guess the design assumes that a template is used to create a container and that it would put the rootfs beneath /var/lib/lxc/test. So the commit fixes an anomaly but leaves me unsure of a couple of things:

1. What is the correct way to update a container config without removing the rootfs? I have always used destroy/create to do this but that, clearly, won't work if the destroy phase removes the rootfs. I like being able to separately manage the rootfs from its configuration.

This I don't really understand - I've always done it by hand. What exactly is made easier by doing destroy/create? Maybe we can reproduce that with an lxc-update or something...
Especially if we can then have lxc-update expand variables and take a list of containers to update to batch the operations. Though still right now I would just default to a bash loop calling sed...

2. Is it wrong to have the rootfs outside of /var/lib/lxc? I

Not if you're doing it right :) That's in fact why the --dir option was added. But that's exactly why the lxc-destroy behavior should be the same with the rootfs in or out of /var/lib/lxc/$container.

have a small /var but use a large dedicated partition for my root filesystem directories. I suspect I need to look at using per-container lvm volumes, something that makes sense but I haven't delved into yet.

If it's working for you and you don't need separate volumes for fast snapshotted clones, I wouldn't change it. We need to make sure to support your use case.

I would value having options to preserve the rootfs when doing lxc-destroy and for lxc-create to use an existing rootfs (i.e. instead of a template).

Ok, I don't *really* want to make lxc-destroy not delete the rootfs just if it is outside of /var/lib/lxc/$container... On the one hand I can see people could do that specifically in the hopes of making it outlive the container. On the other hand I could see people doing it only because they are short on disk space, ending up running out of disk space because they lost track of where the undeleted rootfs's were.

Maybe lxc-destroy -k -n p1 for --keep (don't delete the rootfs)?

-serge
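The "bash loop calling sed" Serge mentions can be sketched as follows. Container names and the particular edit are illustrative; lxc.network.link is the key for the host bridge in the config format of that era.

```shell
#!/bin/sh
# Batch-edit one key across several container configs under /var/lib/lxc,
# in the spirit of the "bash loop calling sed" suggested above. The
# container names and the br0->br1 change are made-up examples.
LXC_PATH=${LXC_PATH:-/var/lib/lxc}

update_bridge() {
    # switch each named container from bridge br0 to br1
    for name in "$@"; do
        cfg="$LXC_PATH/$name/config"
        [ -f "$cfg" ] || { echo "skipping $name: no config" >&2; continue; }
        sed -i 's/^lxc.network.link = br0$/lxc.network.link = br1/' "$cfg"
    done
}

update_bridge web1 web2 db1
```

A proper lxc-update would presumably validate the resulting config too, which plain sed cannot.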
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting John (l...@jelmail.com): Quoting Michael H. Warfield (m...@wittsend.com): You have to add an option to the config file for your systemd containers. lxc.autodev = 1

I would like to understand a bit more about what this option does and learn the correct way of creating devices inside a container. With autodev, if I understand correctly, LXC creates a 100KB tmpfs for /dev, overmounting any existing /dev. It creates a pts subdirectory plus the devices listed in struct lxc_devs (src/lxc/conf.c): null, zero, full, urandom, random, tty and console. What do I do if I need more than those devices in /dev? To date, I have manually used mknod to create devices during the process of creating a rootfs (i.e. I create them beforehand, on the host). I see the comment in the source about providing a file, so I guess this is being thought about? I like being able to do things in the main config file, so perhaps supporting options like lxc.dev = name mask type maj min?

Yup, in either the commit msg or the RFC email I suggested we would probably want to add that. I think it's a good idea. I just didn't do it :) Does someone want to write that patch?

Also, I can't work out what the autodev option does that allows systemd to work. It's a bit over my head but I'd like to understand

It's because systemd checks whether /dev is a separate filesystem from / or not. If it is not, then it mounts its own /dev, hiding the console which lxc has created, which is a unix98 pty which lxc-console will attach to. In fact it's more dangerous than that - systemd will (I'm pretty sure) mount /dev as devtmpfs type, which means it's a shared mount with the host, so changes made by the container to /dev will be reflected on the host's /dev.

if I can. What's the difference between /dev that is on the rootfs and a /dev that is created and over-mounted?

Create a fedora 14 container. Look at /dev/console and /dev/tty1 - /dev/tty4 in container and on the host. They're different.
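Putting the two config ideas from this exchange side by side: the first line is the real option being discussed, while the extended device syntax is only John's proposal, shown here for illustration.

```
# Real option: have LXC mount a tmpfs over /dev and populate the seven
# standard nodes (null, zero, full, urandom, random, tty, console)
lxc.autodev = 1

# Proposed-only syntax from this thread (name mask type major minor);
# NOT an option lxc actually parses at this point:
# lxc.dev = fuse 666 c 10 229
```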
systemd inside container = issues so far:
- creating devices in /dev
- no vty devices (cannot use lxc-console)

Regards, John
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 07/12/12 13:50, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 07/12/12 00:48, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 06/12/12 20:06, Dan Kegel wrote: On Thu, Dec 6, 2012 at 12:00 PM, John l...@jelmail.com wrote: While on the subject, any reason for lxc-destroy now being destructive? Wait, isn't that the point? It's in the name and all. When was it ever nondestructive? It only destroyed the configuration in /var/lib and never deleted the root filesystem until very recently (0.8.0, I guess). Was your rootfs a symbolic link by chance? I'm guessing commit 55116c42e767ce795f796fc51cd2ef7d76cf18af is what you're seeing. Before that it did remove the rootfs, but if your rootfs was a symlink it happened to not do it. That wasn't by intent. Perhaps lxc-destroy should take a flag to not delete the rootfs? Not sure...

Ah, I can now see what is wrong. It isn't down to symlinks but because my rootfs isn't under /var/lib/lxc. Looking at that commit, I can see that the remove (on line 126) deletes $lxc_path/$lxc_name but does not explicitly remove $rootdev. The new code added at line 122 does indeed remove $rootdev. In my case I have my container rootfs in a directory called /srv/test.i686 (i.e. not underneath $lxc_path, /var/lib/lxc). I guess the design assumes that a template is used to create a container and that it would put the rootfs beneath /var/lib/lxc/test. So the commit fixes an anomaly but leaves me unsure of a couple of things:

1. What is the correct way to update a container config without removing the rootfs? I have always used destroy/create to do this but that, clearly, won't work if the destroy phase removes the rootfs. I like being able to separately manage the rootfs from its configuration.

This I don't really understand - I've always done it by hand. What exactly is made easier by doing destroy/create? Maybe we can reproduce that with an lxc-update or something...
Especially if we can then have lxc-update expand variables and take a list of containers to update to batch the operations. Though still right now I would just default to a bash loop calling sed...

I always treated /var/lib/lxc as internal. From the early days, /etc/lxc was suggested as a configuration directory and where the original configuration would lie. Using lxc-create copied that config into /var/lib/lxc. This, in my mind, meant that I shouldn't mess with the config inside /var/lib/lxc but should instead modify /etc/lxc and then do a destroy/create. I may have been living on a mis-premise all this time but that's how I've been using it.

[...]

I would value having options to preserve the rootfs when doing lxc-destroy and for lxc-create to use an existing rootfs (i.e. instead of a template).

Ok, I don't *really* want to make lxc-destroy not delete the rootfs just if it is outside of /var/lib/lxc/$container... On the one hand I can see people could do that specifically in the hopes of making it outlive the container. On the other hand I could see people doing it only because they are short on disk space, ending up running out of disk space because they lost track of where the undeleted rootfs's were. Maybe lxc-destroy -k -n p1 for --keep (don't delete the rootfs)?

yes, that would work.

-serge
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting John (l...@jelmail.com): On 07/12/12 13:50, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 07/12/12 00:48, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 06/12/12 20:06, Dan Kegel wrote: On Thu, Dec 6, 2012 at 12:00 PM, John l...@jelmail.com wrote: While on the subject, any reason for lxc-destroy now being destructive? Wait, isn't that the point? It's in the name and all. When was it ever nondestructive? It only destroyed the configuration in /var/lib and never deleted the root filesystem until very recently (0.8.0, I guess). Was your rootfs a symbolic link by chance? I'm guessing commit 55116c42e767ce795f796fc51cd2ef7d76cf18af is what you're seeing. Before that it did remove the rootfs, but if your rootfs was a symlink it happened to not do it. That wasn't by intent. Perhaps lxc-destroy should take a flag to not delete the rootfs? Not sure...

Ah, I can now see what is wrong. It isn't down to symlinks but because my rootfs isn't under /var/lib/lxc. Looking at that commit, I can see that the remove (on line 126) deletes $lxc_path/$lxc_name but does not explicitly remove $rootdev. The new code added at line 122 does indeed remove $rootdev. In my case I have my container rootfs in a directory called /srv/test.i686 (i.e. not underneath $lxc_path, /var/lib/lxc). I guess the design assumes that a template is used to create a container and that it would put the rootfs beneath /var/lib/lxc/test. So the commit fixes an anomaly but leaves me unsure of a couple of things:

1. What is the correct way to update a container config without removing the rootfs? I have always used destroy/create to do this but that, clearly, won't work if the destroy phase removes the rootfs. I like being able to separately manage the rootfs from its configuration.

This I don't really understand - I've always done it by hand. What exactly is made easier by doing destroy/create? Maybe we can reproduce that with an lxc-update or something...
Especially if we can then have lxc-update expand variables and take a list of containers to update to batch the operations. Though still right now I would just default to a bash loop calling sed...

I always treated /var/lib/lxc as internal. From the early days, /etc/lxc was suggested as a configuration directory and where the original configuration would lie. Using lxc-create copied that config into /var/lib/lxc. This, in my mind, meant that I shouldn't mess with the config inside /var/lib/lxc but should instead modify /etc/lxc and then do a destroy/create. I may have been living on a mis-premise all this time but that's how I've been using it.

No actually I like that. I've not done it, but it seems sound.

I would value having options to preserve the rootfs when doing lxc-destroy and for lxc-create to use an existing rootfs (i.e. instead of a template).

Ok, I don't *really* want to make lxc-destroy not delete the rootfs just if it is outside of /var/lib/lxc/$container... On the one hand I can see people could do that specifically in the hopes of making it outlive the container. On the other hand I could see people doing it only because they are short on disk space, ending up running out of disk space because they lost track of where the undeleted rootfs's were. Maybe lxc-destroy -k -n p1 for --keep (don't delete the rootfs)?

yes, that would work.

Great - if someone gets to writing that patch before I do, +1.

-serge
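John's /etc/lxc-as-master workflow suggests a lighter alternative to the destroy/create cycle: re-copy the master config over the live one and leave the rootfs alone. A minimal sketch, under the assumption that for template-less containers like John's the live config is all that needs refreshing; the helper name and the /etc/lxc/<name>.conf layout follow the convention he describes, not any lxc tool.

```shell
#!/bin/sh
# Hypothetical stand-in for the "lxc-update" idea floated in the thread:
# refresh /var/lib/lxc/<name>/config from a master copy in /etc/lxc
# without touching the rootfs.
LXC_PATH=${LXC_PATH:-/var/lib/lxc}
ETC_LXC=${ETC_LXC:-/etc/lxc}

refresh_config() {
    name=$1
    src="$ETC_LXC/$name.conf"
    dst="$LXC_PATH/$name/config"
    [ -f "$src" ] || { echo "no master config for $name" >&2; return 1; }
    mkdir -p "$LXC_PATH/$name"
    cp "$src" "$dst" && echo "updated $dst"
}
```

Unlike lxc-create this does no validation or template work, which is exactly why a real lxc-update (or a --keep flag on lxc-destroy) would be preferable.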
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting John (l...@jelmail.com): On 05/12/12 21:59, Serge Hallyn wrote: You have to specify a template, i.e. '-t debian'.

Oh. I wasn't using a template. Up to now, I have an existing root filesystem, say /srv/lxc/mycontainer.x86_64, that is pointed to by my configuration file, say mycontainer.conf, by its lxc.rootfs entry. I have seen lxc-create as merely inserting the config from mycontainer.conf into /var/lib/lxc/mycontainer/config and nothing more. I haven't used a template script to create a container because I've got my own that I have been using ever since I first started using lxc (there were no templates back then, well not for arch anyway!). I've always done a destroy/create to update the LXC configuration for a container. This now seems to be the wrong way given destroy removes the rootfs and create expects a template. What's the new way? I've looked at the man page for lxc-create but am none the wiser. How do I now create a container (or just update the config) for an existing root filesystem?

Hm, I see. Yeah this behavior likely changed with the introduction of custom template paths. Perhaps we should allow '-t none' for exactly your use case. Stéphane?

-serge
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 06/12/12 17:10, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 05/12/12 21:59, Serge Hallyn wrote: You have to specify a template, i.e. '-t debian'.

Oh. I wasn't using a template. Up to now, I have an existing root filesystem, say /srv/lxc/mycontainer.x86_64, that is pointed to by my configuration file, say mycontainer.conf, by its lxc.rootfs entry. I have seen lxc-create as merely inserting the config from mycontainer.conf into /var/lib/lxc/mycontainer/config and nothing more. I haven't used a template script to create a container because I've got my own that I have been using ever since I first started using lxc (there were no templates back then, well not for arch anyway!). I've always done a destroy/create to update the LXC configuration for a container. This now seems to be the wrong way given destroy removes the rootfs and create expects a template. What's the new way? I've looked at the man page for lxc-create but am none the wiser. How do I now create a container (or just update the config) for an existing root filesystem?

Hm, I see. Yeah this behavior likely changed with the introduction of custom template paths. Perhaps we should allow '-t none' for exactly your use case. Stéphane? -serge

Or perhaps, allow leaving off the -t unless you want to work with a template? (kind of like it's been to date). Would that not work?
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 12/06/2012 02:45 PM, John wrote: On 06/12/12 17:10, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 05/12/12 21:59, Serge Hallyn wrote: You have to specify a template, i.e. '-t debian'.

Oh. I wasn't using a template. Up to now, I have an existing root filesystem, say /srv/lxc/mycontainer.x86_64, that is pointed to by my configuration file, say mycontainer.conf, by its lxc.rootfs entry. I have seen lxc-create as merely inserting the config from mycontainer.conf into /var/lib/lxc/mycontainer/config and nothing more. I haven't used a template script to create a container because I've got my own that I have been using ever since I first started using lxc (there were no templates back then, well not for arch anyway!). I've always done a destroy/create to update the LXC configuration for a container. This now seems to be the wrong way given destroy removes the rootfs and create expects a template. What's the new way? I've looked at the man page for lxc-create but am none the wiser. How do I now create a container (or just update the config) for an existing root filesystem?

Hm, I see. Yeah this behavior likely changed with the introduction of custom template paths. Perhaps we should allow '-t none' for exactly your use case. Stéphane? -serge

Or perhaps, allow leaving off the -t unless you want to work with a template? (kind of like it's been to date). Would that not work?

Yeah, that makes sense, I'll fix it. Basically allow for -t none and have it default to that when not specified; that should essentially revert to the previous behaviour.

-- Stéphane Graber Ubuntu developer http://www.ubuntu.com
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 06/12/12 19:48, Stéphane Graber wrote: On 12/06/2012 02:45 PM, John wrote: On 06/12/12 17:10, Serge Hallyn wrote: Quoting John (l...@jelmail.com): On 05/12/12 21:59, Serge Hallyn wrote: You have to specify a template, i.e. '-t debian'.

Oh. I wasn't using a template. Up to now, I have an existing root filesystem, say /srv/lxc/mycontainer.x86_64, that is pointed to by my configuration file, say mycontainer.conf, by its lxc.rootfs entry. I have seen lxc-create as merely inserting the config from mycontainer.conf into /var/lib/lxc/mycontainer/config and nothing more. I haven't used a template script to create a container because I've got my own that I have been using ever since I first started using lxc (there were no templates back then, well not for arch anyway!). I've always done a destroy/create to update the LXC configuration for a container. This now seems to be the wrong way given destroy removes the rootfs and create expects a template. What's the new way? I've looked at the man page for lxc-create but am none the wiser. How do I now create a container (or just update the config) for an existing root filesystem?

Hm, I see. Yeah this behavior likely changed with the introduction of custom template paths. Perhaps we should allow '-t none' for exactly your use case. Stéphane? -serge

Or perhaps, allow leaving off the -t unless you want to work with a template? (kind of like it's been to date). Would that not work?

Yeah, that makes sense, I'll fix it. Basically allow for -t none and have it default to that when not specified; that should essentially revert to the previous behaviour.

While on the subject, any reason for lxc-destroy now being destructive? This, in my opinion, is a significant behavioural change and I did actually unwittingly delete one of my containers last night. Luckily it was just a test one :) Can we make lxc-destroy work like it did before (or provide a cmdline option to make it so)?
I don't know how else to update lxc config without doing a destroy/create cycle (except for hand-editing /var/lib/lxc/mycontainer/config, but I expect that's verboten).

sorry - going off topic for the original thread. J
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Thu, Dec 6, 2012 at 12:00 PM, John l...@jelmail.com wrote: While on the subject, any reason for lxc-destroy now being destructive? Wait, isn't that the point? It's in the name and all. When was it ever nondestructive?
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 06/12/12 20:06, Dan Kegel wrote: On Thu, Dec 6, 2012 at 12:00 PM, John l...@jelmail.com wrote: While on the subject, any reason for lxc-destroy now being destructive? Wait, isn't that the point? It's in the name and all. When was it ever nondestructive? It only destroyed the configuration in /var/lib and never deleted the root filesystem until very recently (0.8.0, I guess).
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting John (l...@jelmail.com): On 06/12/12 20:06, Dan Kegel wrote: On Thu, Dec 6, 2012 at 12:00 PM, John l...@jelmail.com wrote: While on the subject, any reason for lxc-destroy now being destructive? Wait, isn't that the point? It's in the name and all. When was it ever nondestructive? It only destroyed the configuration in /var/lib and never deleted the root filesystem until very recently (0.8.0, I guess). Was your rootfs a symbolic link by chance? I'm guessing commit 55116c42e767ce795f796fc51cd2ef7d76cf18af is what you're seeing. Before that it did remove the rootfs, but if your rootfs was a symlink it happened to not do it. That wasn't by intent. Perhaps lxc-destroy should take a flag to not delete the rootfs? Not sure...
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 04/12/12 21:29, Michael H. Warfield wrote:

I raised the question about LXC/systemd a while back and have been trying to follow the conversation but I have to admit it's going somewhat over my head. I've also been away on another piece of work but would now like to understand where things lie with LXC and systemd inside a container.

Ok... I'll try to answer some of them...

Thanks Mike, much appreciated.

I have just updated my system to 0.8.0 and I can't see any changes to make a systemd container work. Are there changes in 0.8.0?

There are very significant changes in 0.8.0 but, unfortunately, not the ones you need to get systemd to work in a container. We've been testing a lot of these and they are in git but they are not in a release yet. Hopefully soon, just not yet.

If so, I'd be grateful for some guidance on what I need to do to my configuration to make it work.

Right now, you'll have to build from git.

I will go away and do a git build later today. I presume that would be from git://lxc.git.sourceforge.net/gitroot/lxc/lxc. I'm also happy to help test this if I can. If it helps I am on Arch Linux.

There are two problems. One is systemd in an lxc container. I think we have a rope on this one and it's tied down. The other is the more recent (195+) versions of systemd in the host that throw the pivot root errors. That has not been addressed as yet. I use Fedora. Right now, I have Fedora 17 hosts with Fedora 17 containers. Fedora 18 (currently in beta) host (systemd 195) is going to be a train wreck until we sort the pivot root problem. I don't know what you have with Arch Linux. You'll have to tell us what versions of systemd you are running.

Ah yes, the pivot root problem. I have worked around this for the time being by doing a mount --make-rprivate /. I created a systemd service on the host as an after dependency on systemd-remount-fs.service to do this. I believe this is ok in the short term (it appears to work ok for me).
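John's workaround can be reconstructed as a small oneshot unit. The unit file name and the install target are my guesses; he only says it is ordered after systemd-remount-fs.service and runs mount --make-rprivate /.

```
# /etc/systemd/system/make-rprivate.service  (file name illustrative)
[Unit]
Description=Mark / rprivate so lxc-start's pivot_root succeeds
After=systemd-remount-fs.service

[Service]
Type=oneshot
ExecStart=/bin/mount --make-rprivate /

[Install]
WantedBy=multi-user.target
```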
If I rebuild lxc from git, should I then expect my existing systemd container to work, or is there anything else that I need to do?

My versions:
lxc version: 0.8.0
Linux hydrogen 3.6.8-1-ARCH #1 SMP PREEMPT Mon Nov 26 22:10:40 CET 2012 x86_64 GNU/Linux
systemd 196

many thanks everyone. John

Mike

Thanks, I really appreciate the help.
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting John (l...@jelmail.com): If so, I'd be grateful for some guidance on what I need to do to my configuration to make it work. Right now, you'll have to build from git. I will go away and do a git build later today. I presume that would be from git://lxc.git.sourceforge.net/gitroot/lxc/lxc. No, git://github.com/lxc/lxc.git #staging -serge
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Wed, 2012-12-05 at 13:00 +, John wrote: [...] Ah yes, the pivot root problem. I have worked around this for the time being by doing a mount --make-rprivate /. I created a systemd service on the host as an after dependency on systemd-remount-fs.service to do this. 
I believe this is ok in the short term (it appears to work ok for me). Hmmm... I was thinking someone ran into some problems doing that and causing problems with the /dev/pts mounts or some such. Good to note if that worked for you. I'm about to start playing with Fedora 18 Beta where I expect problems. I'll try that out. If I rebuild lxc from git, should I then expect my existing systemd container to work or is there anything else that I need to do ? Yeah, one other thing (in addition to following Serge's advice regarding git and #staging)... You have to add an option to the config file for your systemd containers. lxc.autodev = 1 My versions: lxc version: 0.8.0 Linux hydrogen 3.6.8-1-ARCH #1 SMP PREEMPT Mon Nov 26 22:10:40 CET 2012 x86_64 GNU/Linux systemd 196 many thanks everyone. John Mike Thanks, I really appreciate the help. Regards, Mike -- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF | possible worlds. A pessimist is sure of it!
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting Michael H. Warfield (m...@wittsend.com): You have to add an option to the config file for your systemd containers. lxc.autodev = 1 Phrasing it this way makes me wonder, should lxc look for '$rootfs/dev/console' and automatically set lxc.autodev if that is not found? (Right now if lxc.autodev is 1 then the tmpfs /dev is mounted before all the lxc.mount.entries and /var/lib/lxc/$c/fstab entries, but I can't right now think of a reason why it has to stay that way. If we were to always set lxc.autodev if /dev is empty, we'd want to make sure any separate /dev has been mounted, of course.) -serge
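Serge's autodetection idea can be sketched as a small shell check. This is only an illustration of the proposed default, not LXC source; the helper name is invented:

```shell
# Sketch of the proposed default: if the container rootfs has no static
# /dev/console, assume the container wants lxc.autodev = 1; otherwise
# honour whatever the config file already says.  needs_autodev is a
# hypothetical helper, not an LXC function.
needs_autodev() {
    rootfs="$1"
    if [ -e "$rootfs/dev/console" ]; then
        return 1    # static /dev present: obey the config as written
    else
        return 0    # empty /dev: default to autodev
    fi
}
```

As Mike notes below, a check like this only makes sense after any separate /dev filesystem has been mounted, otherwise an empty mount point looks like a missing static /dev.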
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Wed, 2012-12-05 at 11:09 -0600, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): You have to add an option to the config file for your systemd containers. lxc.autodev = 1 Phrasing it this way makes me wonder, should lxc look for '$rootfs/dev/console' and automatically set lxc.autodev if that is not found? I'm of two minds here (which, in my case, is a reduction in force and those two minds are going where did everybody go?). That might be an idea for jogging the default. If you don't have '$rootfs/dev/console' then set it. In that obvious case, you need it. If you do have it, obey what's on the config file? We have a bit of a chicken and egg situation. It's not just auto populating /dev but it's also mounting a ramfs partition on it. I'm not sure I'm comfortable with the level of random acts of terrorism that systemd has proven to be capable of if someone accidentally leaves a .../dev/console in their file system so we don't then mount ramfs on dev and we don't then auto populate dev. But... The same situation exists if the user doesn't manually provide the autodev option. But... I would not want us to switch based on the existence of systemd either. Maybe there is some other way we can autodetect this that doesn't depend on those static devices? Overall, I like that idea. It helps idiot proof the configuration better. (Right now if lxc.autodev is 1 then the tmpfs /dev is mounted before all the lxc.mount.entries and /var/lib/lxc/$c/fstab entries, but I can't right now think of a reason why it has to stay that way. If we were to always set lxc.autodev if /dev is empty, we'd want to make sure any separate /dev has been mounted, of course.) Concur. I think this is a good idea, we just have to watch some of the corner cases. The one I fear the most is the one where someone (ME!) does a yum upgrade of a container that then becomes systemd (F14 - F15) where it used to be Upstart with a static /dev populated. Boom. 
Flash of light, mushroom cloud on the horizon. But, again, if I don't fix the bloody config, I'm screwed as well. Self inflicted injury. I see that. I don't see a downside to adding it. I'm just a little nudgey about relying on it. On the balance... I'm in favor, yeah. -serge Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 05/12/12 14:55, Michael H. Warfield wrote: [...] Ah yes, the pivot root problem. I have worked around this for the time being by doing a mount --make-rprivate /. I created a systemd service on the host as an after dependency on systemd-remount-fs.service to do this. I believe this is ok in the short term (it appears to work ok for me). Hmmm... I was thinking someone ran into some problems doing that and causing problems with the /dev/pts mounts or some such. Good to note if that worked for you. I'm about to start playing with Fedora 18 Beta where I expect problems. I'll try that out. If I rebuild lxc from git, should I then expect my existing systemd container to work or is there anything else that I need to do ? Yeah, one other thing (in addition to following Serge's advice regarding git and #stage)... You have to add an option to the config file for your systemd containers. lxc.autodev = 1 Ok got that. I used git://github.com/lxc/lxc.git #staging. Built and installed ok. Existing containers running. When I try to create a new one, with or without the autodev like you suggest, I get the below: # lxc-create -n test2 -f test2.conf lxc-create: unknown template '' lxc-create: aborted I checked and the above create does work with 0.8.0. I realise it's probably a glitch caused by something unrelated and which will probably be fixed quite quickly. I may try a re-build in the morning. Next, I manually edited /var/lib/lxc/test/config to add lxc.autodev to it but attempting to start the container gave me this: # lxc-start -n test2 lxc-start: No such file or directory - failed to mount 'devshm' on '/usr/lib/lxc/rootfs//dev/shm' I had an instruction in the config to mount devshm so I removed that and could then start the container up successfully. I got a login prompt and can log in. Lovely! I now need to run some more tests here but I can confirm that the staging build will allow a container to start on my Arch system. 
FYI (Arch - specific): I used a modified copy of the lxc-git PKGBUILD (https://aur.archlinux.org/packages/lx/lxc-git/PKGBUILD) to build lxc#staging. I only changed the git root to be git://github.com/lxc/lxc.git. ps. I just did an lxc-destroy while testing and it appears to now be destructive. That took me by surprise. Regards, John
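Putting John's findings together, the working container config changes amount to something like the fragment below. The rootfs path is illustrative; the two points grounded in this thread are adding lxc.autodev and dropping the stale devshm mount entry:

```
# illustrative config fragment for a systemd guest on the staging build
lxc.rootfs = /var/lib/lxc/test2/rootfs
lxc.autodev = 1
# drop any old "devshm"/tmpfs-on-/dev lxc.mount.entry lines: with
# autodev, lxc mounts and populates a tmpfs /dev itself, and a stale
# entry makes lxc-start fail ("failed to mount 'devshm'")
```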
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 26/10/12 22:02, Michael H. Warfield wrote: On Fri, 2012-10-26 at 12:11 -0400, Michael H. Warfield wrote: On Thu, 2012-10-25 at 23:38 +0200, Lennart Poettering wrote: On Thu, 25.10.12 11:59, Michael H. Warfield (m...@wittsend.com) wrote: I SUSPECT the hang condition is something to do with systemd trying to start an interactive console on /dev/console, which sysvinit and upstart do not do. Yes, this is documented, please see the link I already posted, and which I linked above a second time. This may have been my fault. I was using the -o option to lxc-start (output logfile) and failed to specify the -c (console output redirect) option. It seems to fire up nicely (albeit with other problems) with that additional option. Continuing my research. Confirming. Using the -c option for the console file works. Unfortunately, thanks to no gettys on the ttys (so lxc-console does not work), no way to connect to that console redirect, and the failure of the network to start, I'm still trying to figure out just what is face planting in a container I can not access. :-/=/ Punch out the punch list one PUNCH at a time here. I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run, to which /var/run is symlinked. We're doing halt / restart detection by monitoring utmp in that directory but it looks like utmp isn't even in that directory anymore and mounting tmpfs on it was always problematical. We may have to have a more generic method to detect when a container has shut down or is restarting in that case. I can't parse this. The system call reboot() is virtualized for containers just fine and the container manager (i.e. LXC) can check for that easily. Apparently, in recent kernels, we can. Unfortunately, I'm still finding that I can not restart a container I have previously halted. I have no problem with sysvinit and upstart systems on this host, so it is a container problem peculiar to systemd containers. 
Continuing to research that problem. Lennart -- Lennart Poettering - Red Hat, Inc. Regards, Mike -- WINDOWS 8 is here. Millions of people. Your app in 30 days. Visit The Windows 8 Center at Sourceforge for all your go to resources. http://windows8center.sourceforge.net/ join-generation-app-and-make-money-coding-fast/
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Tue, Dec 4, 2012 at 3:29 PM, Michael H. Warfield m...@wittsend.com wrote: On Tue, 2012-12-04 at 20:40 +, John wrote: [...] I'm also happy to help test this if I can. If it helps I am on Arch Linux. There are two problems. One is systemd in an lxc container. I think we have a rope on this one and it's tied down. The other is the more recent (195+) versions of systemd in the host that throw the pivot root errors. That has not been addressed as yet. I use Fedora. Right now, I have Fedora 17 hosts with Fedora 17 containers. Fedora 18 (currently in beta) host (systemd 195) is going to be a train wreck until we sort the pivot root problem. I don't know what you have with Arch Linux. You'll have to tell us what versions of systemd you are running. we (Arch) are currently running systemd 196... so i guess... :sadface: -- C Anthony
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Mon, 2012-10-22 at 16:11 +0200, Lennart Poettering wrote: Note that there are reports that LXC has issues with the fact that newer systemd enables shared mount propagation for all mounts by default (this should actually be beneficial for containers as this ensures that new mounts appear in the containers). LXC when run on such a system fails as soon as it tries to use pivot_root(), as that is incompatible with shared mount propagation. This needs fixing in LXC: it should use MS_MOVE or MS_BIND to place the new root dir in / instead. A short term work-around is to simply remount the root tree to private before invoking LXC. In another thread, Serge had some heartburn over this shared mount propagation which then rang a bell in my head about past problems we have seen. On Mon, 2012-11-05 at 08:51 -0600, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): ... This was from another thread with the systemd guys. On Mon, 2012-10-22 at 16:11 +0200, Lennart Poettering wrote: Note that there are reports that LXC has issues with the fact that newer systemd enables shared mount propagation for all mounts by default (this should actually be beneficial for containers as this ensures that new mounts appear in the containers). LXC when run on such a system fails MS_SLAVE does this as well. MS_SHARED means container mounts also propagate into the host, which is less desirable in most cases. Here's where we've seen some problems in the past. It's not just mounts that are propagated but remounts as well. The problem arose that some of us had our containers on a separate partition. When we would shut a container down, that container tried to remount its file systems ro, which then propagated back into the host, causing the host's file system to be ro (doesn't happen if you are running on the host's root fs for the containers), and from there across into the other containers. Are you using MS_SHARED or MS_SLAVE for this? 
If you are using MS_SHARED, do you create a potential security problem where actions in the container can bleed into the state of the host and into other containers? That's highly undesirable. If a mount in a container propagates back into the host and is then reflected to another container sharing that same mount tree (I have shared partitions specific to that sort of thing), does that create an information disclosure situation if one container mounts a new file system and the other container sees the new mount? I don't know if the mount propagation would reflect back up the shared tree or not but I have certainly seen remounts do this. I don't see that as desirable. Maybe I misunderstand how this is supposed to work but I intend to test out those scenarios when I have a chance. I do know that, when testing that ro problem, I was able to remount a partition ro in one container and it would switch in the host and the other container, and I could then remount it rw in the other container and have it propagate back. Not good. Can you offer any clarity on this? as soon as it tries to use pivot_root(), as that is incompatible with shared mount propagation. This needs fixing in LXC: it should use MS_MOVE or MS_BIND to place the new root dir in / instead. A short term Actually not quite sure how this would work. It should be possible to set up a set of conditions to work around this, but the kernel checks at do_pivotroot are pretty harsh - mnt->mnt_parent of both the new root and current root have to be not shared. So perhaps we actually first chroot into a dir whose parent is non-shared, then pivot_root from there? :) (Simple chroot in place of pivot_root still does not suffice, not only because of chroot escapes, but also different results in /proc/pid/mountinfo and friends) Comments on Serge's points? 
At this point, we see where this will become problematical in Fedora 18, but it appears to already be problematical in NixOS, which another user is running and which contains systemd 195 in the host. We've had problems with chroot in the past due to chroot escapes and other problems years ago, as Serge mentioned. Lennart -- Lennart Poettering - Red Hat, Inc. Regards, Mike
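Whether a given host actually needs the remount-private workaround can be checked without root by looking at the optional fields of the root mount in /proc/self/mountinfo. A sketch (the helper name is invented; field 5 is the mount point, and the optional fields before the "-" separator carry the propagation tags):

```shell
# Print the propagation tags of the root mount ("shared:N", "master:N",
# etc.).  Empty output means / is private, so pivot_root should be safe;
# "shared:N" output means the systemd-195-style default is in effect and
# a "mount --make-rprivate /" is needed before starting containers.
root_propagation() {
    awk '$5 == "/" {
        for (i = 7; i <= NF && $i != "-"; i++) printf "%s ", $i
        print ""
        exit
    }' /proc/self/mountinfo
}
```

Running root_propagation before and after the --make-rprivate workaround should show the shared tag disappearing.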
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sun, 2012-10-28 at 18:52 +0100, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Sat, 2012-10-27 at 13:51 -0400, Michael H. Warfield wrote: On Sat, 2012-10-27 at 13:40 -0400, Michael H. Warfield wrote: /me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added this line to my config... lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0 lxc.hook.mount = /var/lib/lxc/Plover/mount In /var/lib/lxc/Plover/mount I have this: -- rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/ -- (This is just testing out the concepts. If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something not right here... I changed it to lxc.hook.start = /srv/lxc/mount (where I put the script in the container) which then works but that then requires the template and the command to be in the container. Suboptimal to say the least. But it gives me a way to test this tmpfs thing out. I also noticed that the .start hook runs, it appears, after caps are dropped and I see a lot of bitching about mknod on certain devices. I had to throw an exit 0 into that script so it would continue in spite of the errors but, now, I can refine my template based on what it could create. Crap. I've got a catch-22 here... This is going to take some work. Hey, I've got a rather minimal patch (appended below) to get support for mounting and populating a minimal /dev working. 
(A few hours were wasted due to my not knowing that upstart was going to issue mounted-dev even though /dev was mounted before upstart started - and the mounted-dev hook deletes and recreates all consoles. GAH) Yes, we can create the /dev directory with tmpfs from a template. Problem is that /dev/pts does not exist at the time we need to mount the devpts on /dev/pts for the pty's so that hurls chunks and dies. We can't create the /dev/ directory contents prior to mounting in the pre-mount hook because we won't have tmpfs in place at the time. We have to get tmpfs mounted on /dev and then create /dev/pts and then mount the ptys in there. There has to be a mkdir in between those two mount actions. Simplest solution would seem to be to add some logic to the mount logic that says test if directory exists and, if not, create it. I'm not sure of the consequences of that, though. I don't see a way to make this happen with hooks. It's almost like we need an on-mount per mount hook. Should be moot given my patch, which I intend to push this week, but why couldn't a lxc.hook.mount do the whole thing, mount /dev and populate it? I wasn't thinking a lxc.hook.start, for the reasons you encountered, but I assume you tried lxc.hook.mount and it failed? See my other comment about lxc.hook.mount. I tried using it to populate /dev but it showed up in the parent of the mount underneath the tmpfs mount. It was like it ran pre-mount. I tried it for several different combinations and couldn't get it to go. Would still have the problem with mounting /dev/pts, which would take place before the mount hook is run to mount the file system and populate it. That actually MIGHT work (gotta think on this now) if I used lxc.hook.pre-mount and mounted tmpfs over /dev, and populated it but then I run into a problem where I was using a bind-mount for the rootfs. Might still work. I'll test your patch out first though. Patch below: snip Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sun, 2012-10-28 at 18:52 +0100, Serge Hallyn wrote: Should be moot given my patch, which I intend to push this week, but why couldn't a lxc.hook.mount do the whole thing, mount /dev and populate it? I wasn't thinking a lxc.hook.start, for the reasons you encountered, but I assume you tried lxc.hook.mount and it failed? Patch below: Patch failed against 0.8.0rc2 and git root. Even with loose patching for whitespace, had failures... This was against git: [mhw@forest lxc]$ patch -p1 -l ../lxc-autodev.patch patching file src/lxc/conf.c Hunk #1 succeeded at 616 (offset -3 lines). Hunk #2 succeeded at 633 (offset -3 lines). Hunk #3 succeeded at 839 (offset -3 lines). Hunk #4 succeeded at 2203 (offset -66 lines). patching file src/lxc/conf.h Hunk #1 FAILED at 227. 1 out of 1 hunk FAILED -- saving rejects to file src/lxc/conf.h.rej patching file src/lxc/confile.c Hunk #1 FAILED at 77. Hunk #2 FAILED at 118. Hunk #3 succeeded at 854 with fuzz 2 (offset 1 line). 2 out of 3 hunks FAILED -- saving rejects to file src/lxc/confile Version to patch to? I'm trying to manually integrate those failed hunks now. Shouldn't be too difficult for only three. Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
/me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added this line to my config... lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0 lxc.hook.mount = /var/lib/lxc/Plover/mount In /var/lib/lxc/Plover/mount I have this: -- rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/ -- (This is just testing out the concepts. If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something not right here... Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sat, 2012-10-27 at 13:40 -0400, Michael H. Warfield wrote: /me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added this line to my config... lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0 lxc.hook.mount = /var/lib/lxc/Plover/mount In /var/lib/lxc/Plover/mount I have this: -- rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/ -- (This is just testing out the concepts. If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something not right here... I changed it to lxc.hook.start = /srv/lxc/mount (where I put the script in the container) which then works but that then requires the template and the command to be in the container. Suboptimal to say the least. But it gives me a way to test this tmpfs thing out. I also noticed that the .start hook runs, it appears, after caps are dropped and I see a lot of bitching about mknod on certain devices. I had to throw an exit 0 into that script so it would continue in spite of the errors but, now, I can refine my template based on what it could create. Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sat, 2012-10-27 at 13:51 -0400, Michael H. Warfield wrote: On Sat, 2012-10-27 at 13:40 -0400, Michael H. Warfield wrote: /me erasing everything at this point and taking off the systemd crew, since this will have no relevance to them... Testing the hook feature out using git rev (finally got it built)... I added this line to my config... lxc.mount.entry=tmpfs /srv/lxc/private/Plover/dev.tmp tmpfs defaults 0 0 lxc.hook.mount = /var/lib/lxc/Plover/mount In /var/lib/lxc/Plover/mount I have this: -- rsync -avAH /srv/lxc/private/Plover/dev.template/. /srv/lxc/private/Plover/dev.tmp/ -- (This is just testing out the concepts. If I understand this correctly, lxc.hook.pre-mount runs BEFORE the mounting takes place and lxc.hook.mount takes place after the mount. Problem is, the result of that rsync is not showing up in the mounted tmpfs file system but is showing up in the underlying parent file system as if it were run pre-mount. Something not right here... I changed it to lxc.hook.start = /srv/lxc/mount (where I put the script in the container) which then works but that then requires the template and the command to be in the container. Suboptimal to say the least. But it gives me a way to test this tmpfs thing out. I also noticed that the .start hook runs, it appears, after caps are dropped and I see a lot of bitching about mknod on certain devices. I had to throw an exit 0 into that script so it would continue in spite of the errors but, now, I can refine my template based on what it could create. Crap. I've got a catch-22 here... This is going to take some work. Yes, we can create the /dev directory with tmpfs from a template. Problem is that /dev/pts does not exist at the time we need to mount the devpts on /dev/pts for the pty's so that hurls chunks and dies. We can't create the /dev/ directory contents prior to mounting in the pre-mount hook because we won't have tmpfs in place at the time. 
We have to get tmpfs mounted on /dev and then create /dev/pts and then mount devpts in there. There has to be a mkdir in between those two mount actions. Simplest solution would seem to be to add some logic to the mount logic that says test if the directory exists and, if not, create it. I'm not sure of the consequences of that, though. I don't see a way to make this happen with hooks. It's almost like we need an on-mount, per-mount hook. Regards, Mike -- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it! signature.asc Description: This is a digitally signed message part -- WINDOWS 8 is here. Millions of people. Your app in 30 days. Visit The Windows 8 Center at Sourceforge for all your go to resources. http://windows8center.sourceforge.net/ join-generation-app-and-make-money-coding-fast/___ Lxc-users mailing list Lxc-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/lxc-users
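The hook being felt for here can be sketched as a shell script. This is a hedged illustration, not lxc API: the paths and the dev.template skeleton are invented, the demo uses a temp directory instead of a real rootfs, and cp stands in for the rsync from the earlier message. The key step is the mkdir between populating /dev and the later devpts/shm mounts:

```shell
#!/bin/sh
# Sketch of an lxc.hook.mount-style script. A mount hook runs after
# lxc's fstab entries are mounted, inside the container's mount
# namespace, so writes land on the freshly mounted tmpfs /dev rather
# than the underlying filesystem. All paths here are illustrative.
ROOTFS=$(mktemp -d)                        # stand-in for the container rootfs
mkdir -p "$ROOTFS/dev.template" "$ROOTFS/dev"
: > "$ROOTFS/dev.template/console"         # pretend device node from the template

# 1) populate the tmpfs mounted on /dev from the template skeleton
cp -a "$ROOTFS/dev.template/." "$ROOTFS/dev/"

# 2) create the mount points that the later devpts and shm mounts
#    expect, sidestepping the mkdir-between-mounts catch-22
mkdir -p "$ROOTFS/dev/pts" "$ROOTFS/dev/shm"
```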
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Sat, 2012-10-27 at 19:44 +0100, Colin Guthrie wrote: 'Twas brillig, and Michael H. Warfield at 26/10/12 18:18 did gyre and gimble: What the hell is this? /var/run is symlinked to /run and is mounted with a tmpfs. Yup, that's how /var/run and /run are being handled these days. It provides a consistent space to pass info from the initrd over to the main system and has various other uses also. Interesting. I hadn't considered that aspect of it before. Very interesting. If you want to ensure files are created in this folder, just drop a config file into /usr/lib/tmpfiles.d/ in the package in question. See man systemd-tmpfiles for more info. NOW THAT is something else I needed to know about! Thank you very very much! Learned something new. This whole thing has been a massive learning experience getting this container kick started. Could be some packages are not fully upgraded to this concept in F17. As a non-fedora user, I can't really comment on that specifically. As it turns out, the kernel has had some of our patches applied that I wasn't aware of vis-a-vis reboot/halt and this should no longer be an issue. I'm still struggling with the tmpfs on /dev thing and have run into a catch-22 with regards to that. I can mount tmpfs on /dev just fine and can populate it just fine in a post mount hook but, then, we're trying to mount a devpts file system on /dev/pts before we've had a chance to populate it and it's then crashing on the mount. Sigh... I think that's going to now have to wait for Serge or Daniel to comment on. Col -- Colin Guthrie gmane(at)colin.guthr.ie http://colin.guthr.ie/ Day Job: Tribalogic Limited http://www.tribalogic.net/ Open Source: Mageia Contributor http://www.mageia.org/ PulseAudio Hacker http://www.pulseaudio.org/ Trac Hacker http://trac.edgewall.org/ Regards, Mike -- Michael H. 
Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it!
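The tmpfiles.d mechanism Colin points at takes one-line declarations; a minimal sketch (the file name and paths are hypothetical) that would have systemd-tmpfiles recreate a package's directory under the volatile /run at every boot:

```
# /usr/lib/tmpfiles.d/myservice.conf -- hypothetical example
# Type  Path                  Mode  UID   GID   Age  Argument
d       /run/myservice        0755  root  root  -
f       /run/myservice/state  0644  root  root  -
```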
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 20:30 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 23:38 +0200, Lennart Poettering wrote: On Thu, 25.10.12 11:59, Michael H. Warfield (m...@wittsend.com) wrote: I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run, to which /var/run is symlinked. We're doing halt / restart detection by monitoring utmp in that directory but it looks like utmp isn't even in that directory anymore and mounting tmpfs on it was always problematical. We may have to have a more generic method to detect when a container has shut down or is restarting in that case. I can't parse this. The system call reboot() is virtualized for containers just fine and the container manager (i.e. LXC) can check for that easily. The problem we have had was with differentiating between reboot and halt to either shut the container down cold or restart it. You say easily and yet we never came up with an easy solution and monitored utmp instead for the next runlevel change. What is your easy solution for that problem? I think you're on older kernels, where we had to resort to that. Pretty recently Daniel Lezcano's patch was finally accepted upstream, which lets a container call reboot() and lets the parent of init tell whether it called reboot or shutdown by looking at WTERMSIG(status). Now THAT is wonderful news! I hadn't realized that had been accepted. So we no longer need to rely on the old utmp kludge? Yup :) It was very liberating, in terms of what containers can do with mounting. -- Everyone hates slow websites. So do we. Make your web apps faster with AppDynamics Download AppDynamics Lite for free today: http://p.sf.net/sfu/appdyn_sfd2d_oct ___ Lxc-users mailing list Lxc-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/lxc-users
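lxc-start reads that in C via waitpid() and WTERMSIG(status). As a rough shell-level illustration of the same datum (hedged: which signal the kernel delivers for reboot versus halt is defined by the patch and not asserted here), POSIX shells report death-by-signal as 128 plus the signal number:

```shell
#!/bin/sh
# Recover the signal that terminated a child -- the same information
# lxc-start gets from WTERMSIG(status) after waitpid().
sh -c 'kill -TERM $$' &        # child terminates itself with SIGTERM (15)
wait $!
status=$?
signum=$((status - 128))       # POSIX: death-by-signal exits as 128+N
echo "child terminated by signal $signum"
```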
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Adding in the lxc-devel list. On Thu, 2012-10-25 at 22:59 -0400, Michael H. Warfield wrote: On Thu, 2012-10-25 at 15:42 -0400, Michael H. Warfield wrote: On Thu, 2012-10-25 at 14:02 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 13:23 -0400, Michael H. Warfield wrote: Hey Serge, On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: ... Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ok... I've done some cursory search and turned up nothing but some comments about pre mount hooks. Where is the documentation about this feature and how I might use / implement it? Some examples would probably suffice. Is there a required release version of lxc-utils? I think I found what I needed in the changelog here: http://www.mail-archive.com/lxc-devel@lists.sourceforge.net/msg01490.html I'll play with it and report back. Also the Lifecycle management hooks section in https://help.ubuntu.com/12.10/serverguide/lxc.html This isn't working... Based on what was in both of those articles, I added this entry to another container (Plover) to test... lxc.hook.mount = /var/lib/lxc/Plover/mount When I run lxc-start -n Plover, I see this: [root@forest ~]# lxc-start -n Plover lxc-start: unknow key lxc.hook.mount lxc-start: failed to read configuration file I'm running the latest rc... [root@forest ~]# rpm -qa | grep lxc lxc-0.8.0.rc2-1.fc16.x86_64 lxc-libs-0.8.0.rc2-1.fc16.x86_64 lxc-doc-0.8.0.rc2-1.fc16.x86_64 Is it something in git that hasn't made it to a release yet? nm... I see it. It's in git and hasn't made it to a release. I'm working on a git build to test now. If this is something that solves some of this, we need to move things along here and get these things moved out. According to git, 0.8.0rc2 was 7 months ago? What are the show stoppers here? 
While the git repo says 7 months ago, the date stamp on the lxc-0.8.0-rc2 tarball is from July 10, so about 3-1/2 months ago. Sounds like we've accumulated some features (like the hooks) that we have needed for months now to deal with this systemd debacle. How close are we to either 0.8.0rc3 or 0.8.0? Any blockers or are we just waiting on some more features? Note that I'm thinking that having lxc-start guess how to fill in /dev is wrong, because different distros and even different releases of the same distros have different expectations. For instance ubuntu lucid wants /dev/shm to be a directory, while precise+ wants a symlink. So somehow the template should get involved, be it by adding a hook, or simply specifying a configuration file which lxc uses internally to decide how to create /dev. I agree this needs to be by some sort of convention or template that we can adjust. Personally I'd prefer if /dev were always populated by the templates, and containers (i.e. userspace) didn't mount a fresh tmpfs for /dev. But that does complicate userspace, and we've seen it in debian/ubuntu as well (i.e. at certain package upgrades which rely on /dev being cleared after a reboot). -serge Regards, Mike -- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it!
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Fri, 2012-10-26 at 12:11 -0400, Michael H. Warfield wrote: On Thu, 2012-10-25 at 23:38 +0200, Lennart Poettering wrote: On Thu, 25.10.12 11:59, Michael H. Warfield (m...@wittsend.com) wrote: I SUSPECT the hang condition is something to do with systemd trying to start an interactive console on /dev/console, which sysvinit and upstart do not do. Yes, this is documented, please see the link I already posted, and which I linked above a second time. This may have been my fault. I was using the -o option to lxc-start (output logfile) and failed to specify the -c (console output redirect) option. It seems to fire up nicely (albeit with other problems) with that additional option. Continuing my research. Confirming. Using the -c option for the console file works. Unfortunately, with no gettys on the ttys, lxc-console does not work; with no way to connect to that console redirect, and with the network failing to start, I'm still trying to figure out just what is face-planting in a container I cannot access. :-/=/ Punch out the punch list one PUNCH at a time here. I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run, to which /var/run is symlinked. We're doing halt / restart detection by monitoring utmp in that directory but it looks like utmp isn't even in that directory anymore and mounting tmpfs on it was always problematical. We may have to have a more generic method to detect when a container has shut down or is restarting in that case. I can't parse this. The system call reboot() is virtualized for containers just fine and the container manager (i.e. LXC) can check for that easily. Apparently, in recent kernels, we can. Unfortunately, I'm still finding that I cannot restart a container I have previously halted. I have no problem with sysvinit and upstart systems on this host, so it is a container problem peculiar to systemd containers. Continuing to research that problem. 
Lennart -- Lennart Poettering - Red Hat, Inc. Regards, Mike -- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it!
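For reference, the working combination described in this message — a detached start with lxc's own log and the container's console each captured — would look something like this (the log paths are examples):

```
# -d: daemonize; -o: lxc's own log output; -c: container console redirect
lxc-start -n Plover -d -o /var/log/lxc/Plover.log -c /var/lib/lxc/Plover/console.log
```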
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Sorry for taking a few days to get back on this. I was delivering a guest lecture up at Fordham University last Tuesday so I was out of pocket a couple of days or I would have responded sooner... On Mon, 2012-10-22 at 16:59 -0400, Michael H. Warfield wrote: On Mon, 2012-10-22 at 22:50 +0200, Lennart Poettering wrote: On Mon, 22.10.12 11:48, Michael H. Warfield (m...@wittsend.com) wrote: To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the host's console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE. Please initialize a minimal tmpfs on /dev. systemd will then work fine. My containers have a reasonable /dev that works with Upstart just fine, but it is not on tmpfs. Is mounting tmpfs on /dev and recreating that minimal /dev required? Well, it can be any kind of mount really. Just needs to be a mount. And the idea is to use tmpfs for this. What /dev are you currently using? It's probably not a good idea to reuse the host's /dev, since it contains so many device nodes that should not be accessible/visible to the container. Got it. And that explains the problems we're seeing but also what I'm seeing in some libvirt-lxc related pages, which is a separate and distinct project in spite of the similarities in the name... http://wiki.1tux.org/wiki/Lxc/Installation#Additional_notes Unfortunately, in our case, merely getting a mount in there is a complication in that it also has to be populated but, at least, we understand the problem set now. Ok... 
Serge and I were corresponding on the lxc-users list and he had a suggestion that worked, but one I consider to be a bit of a sub-optimal workaround. Ironically, it was to mount devtmpfs on /dev. We don't (currently) have a method to auto-populate a tmpfs mount with the needed devices and this provided it. It does have a problem that makes me uncomfortable in that the container now has visibility into the host's /dev system. I'm a security expert and I'm not comfortable with that solution even with the controls we have. We can control access but still, not happy with that. I now have a container that starts with systemd running more or less properly. We do have some problems with the convention that has been set up, however. When running in this mode, you run on the console and you don't spawn gettys on the ttys. There seems to be a problem with this. In this mode, if I manually start the container in a terminal window, that eventually results in a login prompt there. Under sysvinit and upstart I don't get that and can detach. If I run lxc-console (which attaches to one of the vtys) it gives me nothing. Under sysvinit and upstart I get vty login prompts because they have started getty on those vtys. This is important in case network access has not started for one reason or another and the container was started detached in the background. If I start lxc-start in detached mode (-d -o {logfile}) lxc-start redirects the system console to the log file and goes daemon. In this case, the systemd container hangs and never starts. I SUSPECT the hang condition is something to do with systemd trying to start an interactive console on /dev/console, which sysvinit and upstart do not do. Maybe we have to do something different with the redirects in this case, but it's not working consistently with the other packages. We should also start appropriate gettys on those vtys if they are configured. Maybe start the gettys if the tty
exists, up to a configured limit (and don't restart if they immediately fail), and obviously don't start them if they don't. It then gives up control over that process. Also don't start a login on /dev/console if you DO start a getty? That would make your behavior congruent with that of the other two systems. I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run to which /var/run is symlinked. We're doing halt / restart detection by monitoring utmp in that directory but it looks like utmp isn't even in that directory anymore and mounting tmpfs on it was always problematical. We may have to have a more generic method to detect when a container has shut down or is restarting in that case. I'm also finding we end up with dangling resources where we can't remove the cgroup directories after a halt and that creates a serious problem I have to
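The minimal /dev that keeps coming up in this thread can be sketched as follows — a hedged fragment, not lxc's actual template code: it would have to run as root inside the container's rootfs, and the major/minor numbers are the standard Linux ones for these nodes:

```
cd "$ROOTFS/dev" || exit 1      # $ROOTFS: the container's root filesystem
mknod -m 666 null    c 1 3
mknod -m 666 zero    c 1 5
mknod -m 666 full    c 1 7
mknod -m 666 random  c 1 8
mknod -m 666 urandom c 1 9
mknod -m 666 tty     c 5 0
mknod -m 600 console c 5 1
ln -s /proc/self/fd fd          # fd symlink for stdio paths
mkdir -m 755 pts shm            # mount points for devpts and tmpfs
```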
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting Michael H. Warfield (m...@wittsend.com): Sorry for taking a few days to get back on this. I was delivering a guest lecture up at Fordham University last Tuesday so I was out of pocket a couple of days or I would have responded sooner... On Mon, 2012-10-22 at 16:59 -0400, Michael H. Warfield wrote: On Mon, 2012-10-22 at 22:50 +0200, Lennart Poettering wrote: On Mon, 22.10.12 11:48, Michael H. Warfield (m...@wittsend.com) wrote: To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the hosts console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE. Please initialize a minimal tmpfs on /dev. systemd will then work fine. My containers have a reasonable /dev that work with Upstart just fine but they are not on tmpfs. Is mounting tmpfs on /dev and recreating that minimal /dev required? Well, it can be any kind of mount really. Just needs to be a mount. And the idea is to use tmpfs for this. What /dev are you currently using? It's probably not a good idea to reuse the hosts' /dev, since it contains so many device nodes that should not be accessible/visible to the container. Got it. And that explains the problems we're seeing but also what I'm seeing in some libvirt-lxc related pages, which is a separate and distinct project in spite of the similarities in the name... 
http://wiki.1tux.org/wiki/Lxc/Installation#Additional_notes Unfortunately, in our case, merely getting a mount in there is a complication in that it also has to be populated but, at least, we understand the problem set now. Ok... Serge and I were corresponding on the lxc-users list and he had a suggestion that worked but I consider to be a bit of a sub-optimal workaround. Ironically, it was to mount devtmpfs on /dev. We don't Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Or, if everyone is going to need it, we could just add a 'lxc.populatedevs = 1' option which does that without needing a hook. devtmpfs should not be used in containers :) -serge
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): Sorry for taking a few days to get back on this. I was delivering a guest lecture up at Fordham University last Tuesday so I was out of pocket a couple of days or I would have responded sooner... On Mon, 2012-10-22 at 16:59 -0400, Michael H. Warfield wrote: On Mon, 2012-10-22 at 22:50 +0200, Lennart Poettering wrote: On Mon, 22.10.12 11:48, Michael H. Warfield (m...@wittsend.com) wrote: To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the hosts console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE. Please initialize a minimal tmpfs on /dev. systemd will then work fine. My containers have a reasonable /dev that work with Upstart just fine but they are not on tmpfs. Is mounting tmpfs on /dev and recreating that minimal /dev required? Well, it can be any kind of mount really. Just needs to be a mount. And the idea is to use tmpfs for this. What /dev are you currently using? It's probably not a good idea to reuse the hosts' /dev, since it contains so many device nodes that should not be accessible/visible to the container. Got it. And that explains the problems we're seeing but also what I'm seeing in some libvirt-lxc related pages, which is a separate and distinct project in spite of the similarities in the name... 
http://wiki.1tux.org/wiki/Lxc/Installation#Additional_notes Unfortunately, in our case, merely getting a mount in there is a complication in that it also has to be populated but, at least, we understand the problem set now. Ok... Serge and I were corresponding on the lxc-users list and he had a suggestion that worked but I consider to be a bit of a sub-optimal workaround. Ironically, it was to mount devtmpfs on /dev. We don't Oh, sorry - I take back that suggestion :) Well, it worked (sort of) and reinforced what the problem was and where the solution lay so there's no need to be sorry for it. We learned and we know why it's not the right solution. This is good. We made a lot of progress on this just in the last week. This is very good. Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ah, now that is interesting. I haven't looked at that before. I need to explore that further. Or, if everyone is going to need it, we could just add a 'lxc.populatedevs = 1' option which does that without needing a hook. Eventually, with Fedora (and later RHEL / CentOS / SL), Arch Linux, and others going to systemd, I think this is going to be needed sooner than later. devtmpfs should not be used in containers :) Concur! -serge Regards, Mike -- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it!
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Hey Serge, On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): Sorry for taking a few days to get back on this. I was delivering a guest lecture up at Fordham University last Tuesday so I was out of pocket a couple of days or I would have responded sooner... On Mon, 2012-10-22 at 16:59 -0400, Michael H. Warfield wrote: On Mon, 2012-10-22 at 22:50 +0200, Lennart Poettering wrote: On Mon, 22.10.12 11:48, Michael H. Warfield (m...@wittsend.com) wrote: To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the hosts console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE. Please initialize a minimal tmpfs on /dev. systemd will then work fine. My containers have a reasonable /dev that work with Upstart just fine but they are not on tmpfs. Is mounting tmpfs on /dev and recreating that minimal /dev required? Well, it can be any kind of mount really. Just needs to be a mount. And the idea is to use tmpfs for this. What /dev are you currently using? It's probably not a good idea to reuse the hosts' /dev, since it contains so many device nodes that should not be accessible/visible to the container. Got it. And that explains the problems we're seeing but also what I'm seeing in some libvirt-lxc related pages, which is a separate and distinct project in spite of the similarities in the name... 
http://wiki.1tux.org/wiki/Lxc/Installation#Additional_notes Unfortunately, in our case, merely getting a mount in there is a complication in that it also has to be populated but, at least, we understand the problem set now. Ok... Serge and I were corresponding on the lxc-users list and he had a suggestion that worked but I consider to be a bit of a sub-optimal workaround. Ironically, it was to mount devtmpfs on /dev. We don't Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ok... I've done some cursory search and turned up nothing but some comments about pre mount hooks. Where is the documentation about this feature and how I might use / implement it? Some examples would probably suffice. Is there a required release version of lxc-utils? Or, if everyone is going to need it, we could just add a 'lxc.populatedevs = 1' option which does that without needing a hook. devtmpfs should not be used in containers :) -serge Regards, Mike -- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it!
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Thu, 2012-10-25 at 13:23 -0400, Michael H. Warfield wrote: Hey Serge, On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: ... Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ok... I've done some cursory search and turned up nothing but some comments about pre mount hooks. Where is the documentation about this feature and how I might use / implement it? Some examples would probably suffice. Is there a require release version of lxc-utils? I think I found what I needed in the changelog here: http://www.mail-archive.com/lxc-devel@lists.sourceforge.net/msg01490.html I'll play with it and report back. Or, if everyone is going to need it, we could just add a 'lxc.populatedevs = 1' option which does that without needing a hook. devtmpfs should not be used in containers :) -serge Regards, Mike -- Everyone hates slow websites. So do we. Make your web apps faster with AppDynamics Download AppDynamics Lite for free today: http://p.sf.net/sfu/appdyn_sfd2d_oct ___ Lxc-users mailing list Lxc-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/lxc-users -- Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com /\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/ NIC whois: MHW9 | An optimist believes we live in the best of all PGP Key: 0x674627FF| possible worlds. A pessimist is sure of it! signature.asc Description: This is a digitally signed message part -- Everyone hates slow websites. So do we. Make your web apps faster with AppDynamics Download AppDynamics Lite for free today: http://p.sf.net/sfu/appdyn_sfd2d_oct___ Lxc-users mailing list Lxc-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/lxc-users
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 13:23 -0400, Michael H. Warfield wrote: Hey Serge, On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: ... Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ok... I've done some cursory search and turned up nothing but some comments about pre mount hooks. Where is the documentation about this feature and how I might use / implement it? Some examples would probably suffice. Is there a require release version of lxc-utils? I think I found what I needed in the changelog here: http://www.mail-archive.com/lxc-devel@lists.sourceforge.net/msg01490.html I'll play with it and report back. Also the Lifecycle management hooks section in https://help.ubuntu.com/12.10/serverguide/lxc.html Note that I'm thinking that having lxc-start guess how to fill in /dev is wrong, because different distros and even different releases of the same distros have different expectations. For instance ubuntu lucid wants /dev/shm to be a directory, while precise+ wants a symlink. So somehow the template should get involved, be it by adding a hook, or simply specifying a configuration file which lxc uses internally to decide how to create /dev. Personally I'd prefer if /dev were always populated by the templates, and containers (i.e. userspace) didn't mount a fresh tmpfs for /dev. But that does complicate userspace, and we've seen it in debian/ubuntu as well (i.e. at certain package upgrades which rely on /dev being cleared after a reboot). -serge
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Thu, 2012-10-25 at 14:02 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 13:23 -0400, Michael H. Warfield wrote: Hey Serge, On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: ... Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it. Ok... I've done some cursory search and turned up nothing but some comments about pre mount hooks. Where is the documentation about this feature and how I might use / implement it? Some examples would probably suffice. Is there a require release version of lxc-utils? I think I found what I needed in the changelog here: http://www.mail-archive.com/lxc-devel@lists.sourceforge.net/msg01490.html I'll play with it and report back. Also the Lifecycle management hooks section in https://help.ubuntu.com/12.10/serverguide/lxc.html This isn't working... Based on what was in both of those articles, I added this entry to another container (Plover) to test... lxc.hook.mount = /var/lib/lxc/Plover/mount When I run lxc-start -n Plover, I see this: [root@forest ~]# lxc-start -n Plover lxc-start: unknow key lxc.hook.mount lxc-start: failed to read configuration file I'm running the latest rc... [root@forest ~]# rpm -qa | grep lxc lxc-0.8.0.rc2-1.fc16.x86_64 lxc-libs-0.8.0.rc2-1.fc16.x86_64 lxc-doc-0.8.0.rc2-1.fc16.x86_64 Is it something in git that hasn't made it to a release yet? Note that I'm thinking that having lxc-start guess how to fill in /dev is wrong, because different distros and even different releases of the same distros have different expectations. For instance ubuntu lucid wants /dev/shm to be a directory, while precise+ wants a symlink. So somehow the template should get involved, be it by adding a hook, or simply specifying a configuration file which lxc uses internally to decide how to create /dev. 
I agree this needs to be done by some sort of convention or template that we can adjust.

Personally I'd prefer if /dev were always populated by the templates, and containers (i.e. userspace) didn't mount a fresh tmpfs for /dev. But that does complicate userspace, and we've seen it in debian/ubuntu as well (i.e. at certain package upgrades which rely on /dev being cleared after a reboot). -serge

Regards, Mike

--
Michael H. Warfield (AI4NB) | (770) 985-6132 | m...@wittsend.com
/\/\|=mhw=|\/\/ | (678) 463-0932 | http://www.wittsend.com/mhw/
NIC whois: MHW9 | An optimist believes we live in the best of all
PGP Key: 0x674627FF | possible worlds. A pessimist is sure of it!

--
Everyone hates slow websites. So do we. Make your web apps faster with AppDynamics. Download AppDynamics Lite for free today: http://p.sf.net/sfu/appdyn_sfd2d_oct
___
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users
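The mount-hook approach Serge suggests (a template-installed hook that mounts a tmpfs onto /dev and populates it) could look something like the sketch below. This is a hypothetical script, not one from the thread: the LXC_ROOTFS_MOUNT variable and the exact hook environment depend on the lxc version (at this point in the thread, hooks only exist in git), so treat the names and paths as assumptions and check lxc.conf(5) for your release. The script defaults to dry-run (printing the commands it would execute) so it is safe to inspect; clear DRYRUN and run it as root, as lxc-start would, to actually execute.

```shell
#!/bin/sh
# Hypothetical lxc.hook.mount script: mount a small tmpfs on the container's
# /dev and populate a minimal set of device nodes before init starts.
# DRYRUN defaults to 1 so the script only prints what it would do.
: "${DRYRUN:=1}"
CMDS=""
run() {
    CMDS="$CMDS $*"
    if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi
}

# LXC_ROOTFS_MOUNT is an assumed hook environment variable; the fallback
# path matches the Plover container used as the example in this thread.
rootfs="${LXC_ROOTFS_MOUNT:-/var/lib/lxc/Plover/rootfs}"

run mount -n -t tmpfs -o size=100k,mode=755 none "$rootfs/dev"
# Minimal device nodes: name, major, minor (char devices).
for spec in "null 1 3" "zero 1 5" "random 1 8" "urandom 1 9" "tty 5 0" "console 5 1"; do
    set -- $spec
    run mknod -m 666 "$rootfs/dev/$1" c "$2" "$3"
done
run mkdir -p "$rootfs/dev/pts" "$rootfs/dev/shm"
```

With an lxc release that understands the key, the container config would then point at the script with lxc.hook.mount, as Michael attempts above.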
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Thu, 2012-10-25 at 23:38 +0200, Lennart Poettering wrote: On Thu, 25.10.12 11:59, Michael H. Warfield (m...@wittsend.com) wrote:

I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run, to which /var/run is symlinked. We're doing halt / restart detection by monitoring utmp in that directory, but it looks like utmp isn't even in that directory anymore, and mounting tmpfs on it was always problematic. We may need a more generic method to detect when a container has shut down or is restarting in that case.

I can't parse this. The system call reboot() is virtualized for containers just fine and the container manager (i.e. LXC) can check for that easily.

The problem we have had was with differentiating between reboot and halt, to either shut the container down cold or restart it. You say easily, and yet we never came up with an easy solution and monitored utmp instead for the next runlevel change. What is your easy solution for that problem?

Lennart -- Lennart Poettering - Red Hat, Inc.

Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 23:38 +0200, Lennart Poettering wrote: On Thu, 25.10.12 11:59, Michael H. Warfield (m...@wittsend.com) wrote:

I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run, to which /var/run is symlinked. We're doing halt / restart detection by monitoring utmp in that directory, but it looks like utmp isn't even in that directory anymore, and mounting tmpfs on it was always problematic. We may need a more generic method to detect when a container has shut down or is restarting in that case.

I can't parse this. The system call reboot() is virtualized for containers just fine and the container manager (i.e. LXC) can check for that easily.

The problem we have had was with differentiating between reboot and halt, to either shut the container down cold or restart it. You say easily, and yet we never came up with an easy solution and monitored utmp instead for the next runlevel change. What is your easy solution for that problem?

I think you're on older kernels, where we had to resort to that. Pretty recently Daniel Lezcano's patch was finally accepted upstream, which lets a container call reboot() and lets the parent of init tell whether it called reboot or shutdown by looking at WTERMSIG(status). -serge
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Thu, 2012-10-25 at 20:30 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 23:38 +0200, Lennart Poettering wrote: On Thu, 25.10.12 11:59, Michael H. Warfield (m...@wittsend.com) wrote:

I've got some more problems relating to shutting down containers, some of which may be related to mounting tmpfs on /run, to which /var/run is symlinked. We're doing halt / restart detection by monitoring utmp in that directory, but it looks like utmp isn't even in that directory anymore, and mounting tmpfs on it was always problematic. We may need a more generic method to detect when a container has shut down or is restarting in that case.

I can't parse this. The system call reboot() is virtualized for containers just fine and the container manager (i.e. LXC) can check for that easily.

The problem we have had was with differentiating between reboot and halt, to either shut the container down cold or restart it. You say easily, and yet we never came up with an easy solution and monitored utmp instead for the next runlevel change. What is your easy solution for that problem?

I think you're on older kernels, where we had to resort to that. Pretty recently Daniel Lezcano's patch was finally accepted upstream, which lets a container call reboot() and lets the parent of init tell whether it called reboot or shutdown by looking at WTERMSIG(status). -serge

Now THAT is wonderful news! I hadn't realized that had been accepted. So we no longer need to rely on the old utmp kludge?

Regards, Mike
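The wTERMSIG(status) check Serge describes can be sketched in shell, where a signal-terminated child is reported to the parent as an exit status of 128 plus the signal number. The mapping of SIGHUP to reboot and SIGINT to halt/poweroff is an assumption about the kernel patch Serge mentions, not something stated in this thread; the demo below uses SIGTERM only because its default disposition is reliably fatal in any environment.

```shell
# A child that dies from a signal is reported to the shell as 128 + signum.
# An LXC-like supervisor can use the same information (WTERMSIG in C) to
# decide whether the container's init asked for a reboot or a halt.
sh -c 'kill -TERM $$' &
wait $!
status=$?
sig=$((status - 128))

# Assumed mapping from the reboot-in-a-pid-namespace patch:
#   SIGHUP (1) -> container called reboot()       -> restart it
#   SIGINT (2) -> container called halt/poweroff  -> leave it down
case $sig in
    1) action=restart ;;
    2) action=stop ;;
    *) action="other (signal $sig)" ;;
esac
echo "container init died from signal $sig -> $action"
```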
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Thu, 2012-10-25 at 15:42 -0400, Michael H. Warfield wrote: On Thu, 2012-10-25 at 14:02 -0500, Serge Hallyn wrote: Quoting Michael H. Warfield (m...@wittsend.com): On Thu, 2012-10-25 at 13:23 -0400, Michael H. Warfield wrote: Hey Serge, On Thu, 2012-10-25 at 11:19 -0500, Serge Hallyn wrote: ... Oh, sorry - I take back that suggestion :) Note that we have mount hooks, so templates could install a mount hook to mount a tmpfs onto /dev and populate it.

Ok... I've done some cursory searching and turned up nothing but some comments about pre-mount hooks. Where is the documentation for this feature, and how might I use / implement it? Some examples would probably suffice. Is there a required release version of lxc-utils?

I think I found what I needed in the changelog here: http://www.mail-archive.com/lxc-devel@lists.sourceforge.net/msg01490.html I'll play with it and report back. Also the Lifecycle management hooks section in https://help.ubuntu.com/12.10/serverguide/lxc.html

This isn't working... Based on what was in both of those articles, I added this entry to another container (Plover) to test...

lxc.hook.mount = /var/lib/lxc/Plover/mount

When I run lxc-start -n Plover, I see this:

[root@forest ~]# lxc-start -n Plover
lxc-start: unknow key lxc.hook.mount
lxc-start: failed to read configuration file

I'm running the latest rc...

[root@forest ~]# rpm -qa | grep lxc
lxc-0.8.0.rc2-1.fc16.x86_64
lxc-libs-0.8.0.rc2-1.fc16.x86_64
lxc-doc-0.8.0.rc2-1.fc16.x86_64

Is it something in git that hasn't made it to a release yet?

nm... I see it. It's in git and hasn't made it to a release. I'm working on a git build to test now. If this is something that solves some of this, we need to move things along here and get these things moved out. According to git, 0.8.0rc2 was 7 months ago? What are the show stoppers here?
Note that I'm thinking that having lxc-start guess how to fill in /dev is wrong, because different distros and even different releases of the same distro have different expectations. For instance, Ubuntu lucid wants /dev/shm to be a directory, while precise+ wants a symlink. So somehow the template should get involved, be it by adding a hook, or simply by specifying a configuration file which lxc uses internally to decide how to create /dev.

I agree this needs to be done by some sort of convention or template that we can adjust.

Personally I'd prefer if /dev were always populated by the templates, and containers (i.e. userspace) didn't mount a fresh tmpfs for /dev. But that does complicate userspace, and we've seen it in debian/ubuntu as well (i.e. at certain package upgrades which rely on /dev being cleared after a reboot). -serge

Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On 22/10/12 03:06, Michael H. Warfield wrote: On Mon, 2012-10-22 at 02:53 +0200, Kay Sievers wrote: On Sun, Oct 21, 2012 at 11:25 PM, Michael H. Warfield m...@wittsend.com wrote:

This is being directed to the systemd-devel community, but I'm cc'ing the lxc-users community and the Fedora community on this for their input as well. I know it's not always good to cross-post between multiple lists, but this is of interest to all three communities, who may have valuable input. I'm new to this particular list, having just joined after tracking a problem down to some systemd internals...

Several people over the last year or two on the lxc-users list have had discussions about trying to run certain distros (notably Fedora 16 and above, recent Arch Linux, and possibly others) in LXC containers, virtualizing entire servers this way. This is very similar to Virtuozzo / OpenVZ, only it's using the native Linux cgroups for the containers (the primary reason I dumped OpenVZ was to avoid their custom patched kernels). These recent distros have switched to systemd for the main init process, and this has proven to be disastrous for those of us using LXC and trying to install or update our containers. To put it bluntly, it doesn't work and causes all sorts of problems on the host.

To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly, and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't have them for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the host's console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE.

Yes!
I recognize that this problem with devtmpfs and lack of namespaces is a potential security problem anyways that could (and does) cause serious container-to-host problems. We're just not going to get that fixed right away in the linux cgroups and namespaces. How do we work around this problem in systemd where it has hard coded mounts in the binary that we can't override or configure? Or is it there and I'm just missing it trying to examine the sources? That's how I found where the problem lay. As a first step, this probably explains most of it: http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface A very long ways, yeah. That looks like it could be just what we've been looking for. Just gotta figure out how to set that environment variable but that's up to a couple of others to comment on in the lxc-users list. Then we'll see where we go from there. Many thanks! Kay Regards, Mike I've just performed a very quick check on my Arch Linux system here. on host (running systemd): # cat /proc/1/environ TERM=linuxRD_TIMESTAMP= In a container (running sysvinit): # cat /proc/1/environ STY=623.systemd-lithiumTERM=screenTERMCAP=SC|screen|VT 100/ANSI X3.64 virtual terminal:\ :DO=\E[%dB:LE=\E[%dD:RI=\E[%dC:UP=\E[%dA:bs:bt=\E[Z:\ :cd=\E[J:ce=\E[K:cl=\E[H\E[J:cm=\E[%i%d;%dH:ct=\E[3g:\ :do=^J:nd=\E[C:pt:rc=\E8:rs=\Ec:sc=\E7:st=\EH:up=\EM:\ :le=^H:bl=^G:cr=^M:it#8:ho=\E[H:nw=\EE:ta=^I:is=\E)0:\ :li#24:co#80:am:xn:xv:LP:sr=\EM:al=\E[L:AL=\E[%dL:\ :cs=\E[%i%d;%dr:dl=\E[M:DL=\E[%dM:dc=\E[P:DC=\E[%dP:\ :im=\E[4h:ei=\E[4l:mi:IC=\E[%d@:ks=\E[?1h\E=:\ :ke=\E[?1l\E:vi=\E[?25l:ve=\E[34h\E[?25h:vs=\E[34l:\ :ti=\E[?1049h:te=\E[?1049l:k0=\E[10~:k1=\EOP:k2=\EOQ:\ :k3=\EOR:k4=\EOS:k5=\E[15~:k6=\E[17~:k7=\E[18~:\ :k8=\E[19~:k9=\E[20~:k;=\E[21~:F1=\E[23~:F2=\E[24~:\ :kh=\E[1~:@1=\E[1~:kH=\E[4~:@7=\E[4~:kN=\E[6~:kP=\E[5~:\ :kI=\E[2~:kD=\E[3~:ku=\EOA:kd=\EOB:kr=\EOC:kl=\EOD:WINDOW=0SHELL=/bin/shPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binLANG=en_GB.UTF-8container=lxc 
So it looks like that container environment variable is already set on PID 1.

Regards, John
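John's check works because /proc/<pid>/environ is a NUL-separated list, which is why the variables run together when simply cat'ed. A small illustrative sketch (not from the thread): decode it with tr. Reading PID 1's environ requires root, so the example below reads the current shell's own environ, which works unprivileged; inside an lxc container, pointing it at /proc/1/environ as root should show container=lxc.

```shell
# /proc/<pid>/environ is NUL-separated; convert NULs to newlines to read it.
# Shown on the current process's own environ so it runs unprivileged;
# substitute /proc/1/environ (as root, inside the container) to check init.
decoded=$(tr '\0' '\n' < /proc/$$/environ)
echo "$decoded" | grep '^container=' || echo "container= not set in this environment"
```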
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Mon, 2012-10-22 at 22:50 +0200, Lennart Poettering wrote: On Mon, 22.10.12 11:48, Michael H. Warfield (m...@wittsend.com) wrote:

To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly, and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't have them for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the host's console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE.

Please initialize a minimal tmpfs on /dev. systemd will then work fine.

My containers have a reasonable /dev that works with Upstart just fine, but they are not on tmpfs. Is mounting tmpfs on /dev and recreating that minimal /dev required?

Well, it can be any kind of mount really. Just needs to be a mount. And the idea is to use tmpfs for this. What /dev are you currently using? It's probably not a good idea to reuse the host's /dev, since it contains so many device nodes that should not be accessible/visible to the container.

Got it. And that explains the problems we're seeing, but also what I'm seeing in some libvirt-lxc related pages, which is a separate and distinct project in spite of the similarities in the name... http://wiki.1tux.org/wiki/Lxc/Installation#Additional_notes Unfortunately, in our case, merely getting a mount in there is a complication in that it also has to be populated but, at least, we understand the problem set now.

systemd will make use of pre-existing mounts if they exist, and only mount something new if they don't exist.

So you're saying that, if we have something mounted on /dev, that's what prevents systemd from mounting devtmpfs on /dev?

Yes.
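Lennart's point that systemd reuses a pre-existing mount suggests a simple diagnostic (an illustrative sketch, not from the thread): from inside the container, check whether anything is already mounted on /dev before systemd's init runs.

```shell
# List mount entries whose mount point is exactly /dev. If this finds
# nothing, systemd will mount devtmpfs there itself, with the container
# breakage described above.
dev_mounts=$(awk '$2 == "/dev" { print $1, $3 }' /proc/self/mounts)
if [ -n "$dev_mounts" ]; then
    echo "/dev is already a mount: $dev_mounts"
else
    echo "nothing mounted on /dev; systemd would mount devtmpfs there itself"
fi
```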
But, I have systemd running on my host system (F17) and containers with sysvinit or upstart inits are all starting just fine. That sounds like it should impact all containers, as pivot_root() is issued before systemd in the container is started. Or am I missing something here? That sounds like a problem for Serge and others to investigate further. I'll see about trying that workaround though.

The shared issue is F18, and it's about running LXC on a systemd system, not about running systemd inside of LXC.

Whew! I'll deal with F18 when I need to deal with F18. That explains why my F17 hosts are running, and gives Serge and others a chance to address this, forewarned. Thanks for that info.

Lennart

Regards, Mike
Re: [Lxc-users] [systemd-devel] Unable to run systemd in an LXC / cgroup container.
On Mon, 2012-10-22 at 02:53 +0200, Kay Sievers wrote: On Sun, Oct 21, 2012 at 11:25 PM, Michael H. Warfield m...@wittsend.com wrote:

This is being directed to the systemd-devel community, but I'm cc'ing the lxc-users community and the Fedora community on this for their input as well. I know it's not always good to cross-post between multiple lists, but this is of interest to all three communities, who may have valuable input. I'm new to this particular list, having just joined after tracking a problem down to some systemd internals...

Several people over the last year or two on the lxc-users list have had discussions about trying to run certain distros (notably Fedora 16 and above, recent Arch Linux, and possibly others) in LXC containers, virtualizing entire servers this way. This is very similar to Virtuozzo / OpenVZ, only it's using the native Linux cgroups for the containers (the primary reason I dumped OpenVZ was to avoid their custom patched kernels). These recent distros have switched to systemd for the main init process, and this has proven to be disastrous for those of us using LXC and trying to install or update our containers. To put it bluntly, it doesn't work and causes all sorts of problems on the host.

To summarize the problem... The LXC startup binary sets up various things for /dev and /dev/pts for the container to run properly, and this works perfectly fine for SystemV start-up scripts and/or Upstart. Unfortunately, systemd has mounts of devtmpfs on /dev and devpts on /dev/pts which then break things horribly. This is because the kernel currently lacks namespaces for devices and won't have them for some time to come (in design). When devtmpfs gets mounted over top of /dev in the container, it then hijacks the host's console tty and several other devices which had been set up through bind mounts by LXC and should have been LEFT ALONE.

Yes!
I recognize that this problem with devtmpfs and the lack of namespaces is a potential security problem anyway, one that could (and does) cause serious container-to-host problems. We're just not going to get that fixed right away in the Linux cgroups and namespaces. How do we work around this problem in systemd, where it has hard-coded mounts in the binary that we can't override or configure? Or is it there and I'm just missing it while trying to examine the sources? That's how I found where the problem lay.

As a first step, this probably explains most of it: http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface

A very long ways, yeah. That looks like it could be just what we've been looking for. Just gotta figure out how to set that environment variable, but that's up to a couple of others to comment on in the lxc-users list. Then we'll see where we go from there. Many thanks!

Kay

Regards, Mike