[lxc-users] [OT] uidmap
Hi all,

Is there similar functionality to uidmap and its kin for the RedHat world?

___
lxc-users mailing list
lxc-users@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-users
Re: [lxc-users] Container fails to start with 'uid range not allowed'
Quoting Sean Templeton (seantemple...@outlook.com):
> I have been trying to create an unprivileged container for the past couple
> of days with no success. After having read the entire Internet, I'm about to
> give up and just create a privileged container. But maybe you all can figure
> out what I am doing wrong.
>
> I created a user 'zrw' on the host and am trying to map the uid and gid from
> the container to this user. I have created the container but have otherwise
> not touched it. My end goal is to install Samba in the container and mount a
> directory on the host to share out.
>
> When I create the user, /etc/subuid and /etc/subgid automatically have the
> following added:
> root@server:/# cat /etc/sub* | grep zrw
> zrw:689824:65536
> zrw:689824:65536
>
> but "id -u zrw" and "id -g zrw" both return 1000. Why would 689824
> automatically be put in the /etc/sub* files? From all of my reading I
> thought the uid and gid in the /etc/sub* files should be the same as the
> user and group ids?
>
> I changed the subuid and subgid files to
> zrw:689824:65536
> zrw:1000:1

No, subuid and subgid are specifically there to delegate new subids to you.
You can always, as uid 1000, map host uid 1000 to any id in a new user
namespace. The /etc/subuid and /etc/subgid entries allow you to also map
other ids into a new user namespace.
> I then put this mapping in the container's .conf file (along with many
> other different variations, like id_map = u 0 689824 65536)
> lxc.id_map = u 0 10 1000
> lxc.id_map = g 0 10 1000
> lxc.id_map = u 1000 1000 1
> lxc.id_map = g 1000 1000 1
> lxc.id_map = u 1001 10 64535
> lxc.id_map = g 1001 10 64535

If you really need files which you own on the host as uid 1000 to be shared
with the container, and owned by the container, then the easiest way, keeping
the original subuid and subgid entries of

zrw:689824:65536
zrw:689824:65536

would be to use:

lxc.id_map = u 0 689824 65536
lxc.id_map = g 0 689824 65536
lxc.id_map = u 10 1000 1
lxc.id_map = g 10 1000 1

Then any files owned by 1000 on the host will, in the container, appear to
belong to uid 10. You can add /etc/passwd and /etc/group entries to give them
a normal-looking name. The danger in this is that the container will then
have privilege over any files which your host user owns.

-serge
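For the archives, the arithmetic behind lxc.id_map lines can be sketched in a
few lines of Python (an illustration only, not LXC code; entries are taken as
first-match-wins, a simplification of the real kernel behavior, which rejects
overlapping ranges):

```python
# Each lxc.id_map entry "u <container_start> <host_start> <count>" is
# modeled here as a (container_start, host_start, count) tuple.

def host_to_container(host_id, id_map):
    """Return the id seen inside the container for host_id, or None if unmapped."""
    for container_start, host_start, count in id_map:
        if host_start <= host_id < host_start + count:
            return container_start + (host_id - host_start)
    return None

# The mapping suggested above: container 0-65535 backed by host 689824-755359,
# with container uid 10 additionally backed by host uid 1000 (zrw).
id_map = [(0, 689824, 65536), (10, 1000, 1)]

print(host_to_container(1000, id_map))    # → 10  (zrw's files appear as uid 10)
print(host_to_container(689824, id_map))  # → 0   (container root)
```

Any host id outside both ranges simply maps to nothing, which is why files
owned by undelegated uids show up as "nobody" inside the container.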
Re: [lxc-users] LXD 2.4.1 - Trouble with Cloud Init
On January 29, 2017 11:33:28 AM EST, "Serge E. Hallyn" wrote:
>On Fri, Jan 27, 2017 at 06:38:05AM -0600, Neil Bowers wrote:
>> Thank you so much - this has been bugging me for weeks.
>>
>> I do have a question, however, in regards to the 'write_files'
>directive -
>
>I'm sorry - I really should start using cloud-init more myself, after
>which I could be helpful, but for now I don't know the answers to this,
>so cc:ing Scott (which I should have done in my previous reply).
>
>> since this runs before users are created (and while I understand that
>> having it able to affect more of boot is useful, it's not documented
>> like that anywhere I can find), if I were to create the file in
>> `/etc/skel` instead, would any created users pick it up from there? Or
>> is that ignored when creating users with cloud-init?
>
>I should *think* cloud-init creates users the standard way, which would
>honor /etc/skel. Scott?

Yeah. That will work. That's a good idea.
[lxc-users] Storing output of lxc attach_wait in python variable
Hi all,

I'm using the Python bindings of LXC and trying to capture the output of

container.attach_wait(lxc.attach_run_command, ["ls"])

in a variable. I tried replacing sys.stdout and sys.stderr with StringIO,
but that captures only the output and errors printed by Python's print
statements; it does not capture the output of lxc attach. One approach is to
call the lxc-attach command through subprocess. Is there any other way I can
achieve this while still using the LXC Python bindings?

Thanks
Livingston
[lxc-users] Container fails to start with 'uid range not allowed'
I have been trying to create an unprivileged container for the past couple of
days with no success. After having read the entire Internet, I'm about to
give up and just create a privileged container. But maybe you all can figure
out what I am doing wrong.

I created a user 'zrw' on the host and am trying to map the uid and gid from
the container to this user. I have created the container but have otherwise
not touched it. My end goal is to install Samba in the container and mount a
directory on the host to share out.

When I create the user, /etc/subuid and /etc/subgid automatically have the
following added:

root@server:/# cat /etc/sub* | grep zrw
zrw:689824:65536
zrw:689824:65536

but "id -u zrw" and "id -g zrw" both return 1000. Why would 689824
automatically be put in the /etc/sub* files? From all of my reading I thought
the uid and gid in the /etc/sub* files should be the same as the user and
group ids?

I changed the subuid and subgid files to

zrw:689824:65536
zrw:1000:1

I then put this mapping in the container's .conf file (along with many other
different variations, like id_map = u 0 689824 65536):

lxc.id_map = u 0 10 1000
lxc.id_map = g 0 10 1000
lxc.id_map = u 1000 1000 1
lxc.id_map = g 1000 1000 1
lxc.id_map = u 1001 10 64535
lxc.id_map = g 1001 10 64535

When I start the container I get the following output:

lxc-start: cgroups/cgfsng.c: create_path_for_hierarchy: 1321 Path "/sys/fs/cgroup/systemd//lxc/100" already existed.
lxc-start: cgroups/cgfsng.c: cgfsng_create: 1385 No such file or directory - Failed to create /sys/fs/cgroup/systemd//lxc/100: No such file or directory
lxc-start: cgroups/cgfsng.c: create_path_for_hierarchy: 1321 Path "/sys/fs/cgroup/systemd//lxc/100-1" already existed.
lxc-start: cgroups/cgfsng.c: cgfsng_create: 1385 No such file or directory - Failed to create /sys/fs/cgroup/systemd//lxc/100-1: No such file or directory
...
(same output as above repeating up to systemd//lxc/100-33)

newuidmap: uid range [0-1000) -> [689824-690824) not allowed
lxc-start: start.c: lxc_spawn: 1164 Failed to set up id mapping.
lxc-start: start.c: __lxc_start: 1357 Failed to spawn container "100".
newuidmap: uid range [0-1000) -> [689824-690824) not allowed
lxc-start: conf.c: userns_exec_1: 4379 Error setting up child mappings
lxc-start: cgroups/cgfsng.c: recursive_destroy: 1276 Error destroying /sys/fs/cgroup/systemd//lxc/100-20
newuidmap: uid range [0-1000) -> [689824-690824) not allowed
lxc-start: conf.c: userns_exec_1: 4379 Error setting up child mappings
lxc-start: cgroups/cgfsng.c: recursive_destroy: 1276 Error destroying /sys/fs/cgroup/cpuset//lxc/100-20
lxc-start: conf.c: userns_exec_1: 4379 Error setting up child mappings
...

(same output as above repeating up to 100-33 for cgroup/cpu, cgroup/blkio,
cgroup/memory, cgroup/devices, etc.)

lxc-start: tools/lxc_start.c: main: 365 The container failed to start.

You can tell how many tries I've made by the fact that it creates a new 100-N
every time I try to start the container. Every variation of mapping I have
tried always ends with "uid range not allowed".

On another note, if I delete the container and then try to
rm -rf /sys/fs/cgroup/pids/lxc/100*, I get "Operation not permitted" on a ton
of files in those directories, and consequently the directories are not
deleted. To "solve" that a previous time, I reinstalled the operating system.
From other reading it does not appear there are any attributes set on these
files, and lsattr gives "lsattr: Inappropriate ioctl for device While reading
flags on ./cgroup.procs" for every file. Are these files created with a
special permission when creating the container, the container fails to start,
and somehow the error handling code can't delete them so I'm stuck with them
forever? (Unless I pull the nuclear option, of course.) I would appreciate
any help!
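For archive readers puzzling over the "uid range not allowed" message: the
rule newuidmap enforces is, roughly, that every requested host-side range
must lie entirely inside one of the ranges delegated to the calling user in
/etc/subuid. A simplified sketch of that check (an assumption-laden
approximation, not shadow-utils code):

```python
def range_allowed(host_start, count, delegated):
    """True if [host_start, host_start+count) fits entirely inside one
    delegated (start, count) range from the caller's /etc/subuid entries."""
    return any(d_start <= host_start and
               host_start + count <= d_start + d_count
               for d_start, d_count in delegated)

delegated = [(689824, 65536)]                 # zrw:689824:65536
print(range_allowed(689824, 1000, delegated)) # the range from the error above
print(range_allowed(1000, 1, delegated))      # host uid 1000 is NOT delegated
```

So a mapping like "lxc.id_map = u 1000 1000 1" fails for an unprivileged
caller unless /etc/subuid actually delegates uid 1000 to that user; the
delegated subuid entries only cover 689824 and up.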
Re: [lxc-users] Control groups list
Hi Serge,

My code is already based on lxc.container.conf. My question is about cgroups
(lxc.cgroup.foo), not container configuration. Is there a "most used cgroups"
list for container configuration?

Updated pastebin: http://pastebin.com/bMWmZ70U

Élie.

On Thu, Jan 26, 2017 at 18:24, Serge E. Hallyn wrote:
> Quoting Elie Deloumeau-Prigent (e...@deloumeau.fr):
> > Hi all,
> >
> > Is there a list of default cgroups that are used by a container (e.g.
> > lxc.cgroup.memory.limit_in_bytes)?
>
> man lxc.container.conf
>
> You can use any that you want; the filename is explicitly listed in
> the options. And in the pastebin you quote below.
>
> > I need this list for this piece of code: http://pastebin.com/a49pftA7
> > (related to LXC Web Panel)
> >
> > Élie.
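For readers searching the archives: there is no official "most used" list,
since any cgroup v1 file name is a valid key, but a hedged sample of commonly
tuned keys in a container config (illustrative values, adjust to taste) looks
like:

```
# Commonly set cgroup-v1 controllers in an lxc config:
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.memory.memsw.limit_in_bytes = 1G
lxc.cgroup.cpu.shares = 1024
lxc.cgroup.cpuset.cpus = 0-1
lxc.cgroup.blkio.weight = 500
lxc.cgroup.devices.deny = a
```

Each key after the lxc.cgroup. prefix is literally a file name under the
matching /sys/fs/cgroup/<controller>/ hierarchy, which is why the man page
says "any that you want".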
Re: [lxc-users] LXD 2.4.1 - Trouble with Cloud Init
On Fri, Jan 27, 2017 at 06:38:05AM -0600, Neil Bowers wrote:
> Thank you so much - this has been bugging me for weeks.
>
> I do have a question, however, in regards to the 'write_files' directive -

I'm sorry - I really should start using cloud-init more myself, after which
I could be helpful, but for now I don't know the answers to this, so cc:ing
Scott (which I should have done in my previous reply).

> since this runs before users are created (and while I understand that
> having it able to affect more of boot is useful, it's not documented
> like that anywhere I can find), if I were to create the file in `/etc/skel`
> instead, would any created users pick it up from there? Or is that ignored
> when creating users with cloud-init?

I should *think* cloud-init creates users the standard way, which would
honor /etc/skel. Scott?

> Essentially I'm just trying to set up a simple way to put up and tear down
> containers that will have all of my defaults in place from the start.
>
> Neil
>
> On Thu, Jan 26, 2017 at 2:00 PM, Serge E. Hallyn wrote:
>
> > Hi,
> >
> > Scott Moser was kind enough to provide this reply:
> >
> > (http://paste.ubuntu.com/23870807/)
> >
> > #!/bin/sh
> >
> > ##
> > ## This is Scott Moser in reply to
> > ## https://lists.linuxcontainers.org/pipermail/lxc-users/2017-January/012766.html
> > ## The user-data you have has some problems, and is stopping it from
> > ## working as you desire. This script can be executed to launch an
> > ## instance with the user-data included inside it, and will show it
> > ## functioning correctly.
> > ##
> > ## I did not test, but assume that updating profile accordingly will get
> > ## you the behavior you're after.
> > ##
> > ## Scott
> >
> > ## changes from your user-data
> > # 'sudo' is a string, you had it as a list.
> > # 'write_files':
> > #  - changed path to /root/ (there is no '~' in this scenario; cloud-init
> > #    could possibly interpret that as the default user, but it does not).
> > #  - files are created before users are added... write_files runs early.
> > #    That means it can affect more of boot, but means it can't write
> > #    files owned by users who do not yet exist.
> > #    There is a bug/feature request for this; we could add a
> > #    'write_files_late' module that ran later and could then populate
> > #    created users' directories.
> > #  - you had bad yaml in one part: the 'content' was as if it intended
> > #    to be included in the previous 'path', but was a new list entry.
> > #    Basically that 'content' had no 'path'.
> > #
> > # With regard to no errors, you can see the issues with
> > #   journalctl --unit=cloud-init.service
> > # Look for 'WARN'. Also /run/cloud-init/result.json should report errors.
> > #
> > # These should get written to /var/log/cloud-init.log, but in yakkety
> > # you won't see them there yet. (bug 1643990)
> >
> > udata=$(cat <<"EOF"
> > #cloud-config
> > users:
> >   - name: dood
> >     gecos: Mr Dood
> >     ssh_authorized_keys:
> >       - ssh-rsa B3NzaC1yc2EBIwAAAQEA3I7VUf2l5gSn5uavROsc5HRDpZdQueUq5ozemNSj8T7enqKHOEaFoU2VoPgGEWC9RyzSQVeyD6s7APMcE82EtmW4skVEgEGSbDc1pvxzxtchBj78hJP6Cf5TCMFSXw+Fz5rF1dR23QDbN1mkHs7adr8GW4kSWqU7Q7NDwfIrJJtO7Hi42GyXtvEONHbiRPOe8stqUly7MvUoN+5kfjBM8Qqpfl2+FNhTYWpMfYdPUnE7u536WqzFmsaqJctz3gBxH9Ex7dFtrxR4qiqEr9Qtlu3xGn7Bw07/+i1D+ey3ONkZLN+LQ714cgj8fRS4Hj29SCmXp5Kt5/82cD/VN3NtHw== smoser@brickies
> >     sudo: 'ALL=(ALL) NOPASSWD:ALL'
> >     groups: sudo
> >     shell: /bin/bash
> > write_files:
> >   - owner: root:root
> >     path: /root/.bash_aliases
> >     content: |
> >       alias dir='ls -Alph --color=auto'
> > apt_proxy: "http://192.168.1.2:8000"
> > EOF
> > )
> >
> > name=$1
> > rel=${2:-yakkety}
> > lxc launch "ubuntu-daily:$rel" "$name" "--config=user.user-data=$udata"
> >
> > -serge
Re: [lxc-users] Risk/benefit of enabling user namespaces in the kernel for running unprivileged containers
On Fri, Jan 13, 2017 at 08:52:14PM +0000, John wrote:
> - Original Message -
> > From: Serge E. Hallyn
> > To: LXC users mailing-list
> > Sent: Friday, January 13, 2017 11:20 AM
> > Subject: Re: [lxc-users] Risk/benefit of enabling user namespaces in the
> > kernel for running unprivileged containers
> >
> >> I'm unclear about several points:
> >> * Is it true that enabling CONFIG_USER_NS makes LXCs safer but at the
> >> cost of decreasing security on the host?
> >
> > "basically"
> >
> > "decreasing security on the host" implies there are known
> > vulnerabilities or shortcomings which you are enabling as a tradeoff.
> > That's not the case. Rather, there are so many interactions between
> > types of resources that we keep running into new ways in which
> > unanticipated interactions can lead to vulnerabilities when
> > unprivileged users gain the ability to create new namespaces.
> >
> > Some of the 'vulnerabilities' are pretty arguable, for instance the
> > ability for an unprivileged user to escape a negative acl by dropping a
> > group, or to see an overmounted file in a new namespace. But others are
> > very serious.
> >
> > When that will settle down, no one really knows.
>
> Sorry, this got lost in my backlog. Been bad about inbox 0.
> Again, thank you for the detailed reply. Is the nature of these sorts of
> interactions such that users require physical access or ssh access to the
> host machine in order to exploit, or can they originate from within the
> container?

It depends. The ACL one I mentioned above is only relevant for users who have
local (non-container) accounts, since the whole point is that they get around
an access restriction imposed on their local user.

One thing user namespaces do is allow root in a user namespace, which we call
untrusted, to run kernel code which previously was guarded so that only the
real, trusted root could ever run it. The classic example of this is
superblock readers, which run when you mount a filesystem to parse its
metadata. This in particular is still guarded because it's so unsafe, but
there are others. (Just look for any code inside an 'if ns_capable()' check.)
If such code has a bug, then root in a container can exercise that bug.

> If it's a physical/remote access thing, no big deal assuming we
> do not open the host up to ssh, right? If however the vector is the
> container itself, that's entirely different.

It can be the container itself - the question is who will exploit it. Do you
trust your local users? Do you allow containers to run services that are open
to the world? For instance, if a container exports a web app or a mysql
service, then an attacker might exploit a mysql bug to get access as the
mysql user in the container, then run one kernel exploit to become container
root, then exploit another kernel bug to do something which only the host
root should be able to do. But I think (I've not looked over the list of
known CVEs - I really need to) most of the bugs have been more of the sort
where a local non-root user does something in a new user namespace, which he
can create with absolutely no privilege.

-serge
Re: [lxc-users] LXC, unionfs and short lived containers
Hi Fajar, all,

Thanks for your reply.

On Sun, Jan 29, 2017 at 4:04 AM, Frans Meulenbroeks
<fransmeulenbroeks at gmail.com> wrote:
> > Hi,
> >
> > I'm working on migrating from LXC 1.x to LXC 2.
> > While doing so I bumped upon the following issue:
> >
> > My containers are short-lived (say an hour or so).
> > In LXC 1 we used an overlay filesystem in order to speed up the lxc
> > create. However, I understood LXC 2 does not have this capability.
>
> Where did you read that?

Here: https://github.com/lxc/lxd/issues/1878 - see the response of Stephane.
Of course this reply is almost 10 months old.

> > Any idea how to create containers quickly and efficiently in LXC 2?
> >
> > Complication is that at some times we have a fair amount of containers
> > alive (say around 50), so creating all containers and reverting to a
> > snapshot is probably not efficient
>
> Why is it not efficient?

I'm worried about disk storage and creation time, but I noticed your
suggestion below.

> > (apart from the space taken up by the 50 rootfs-es).
> >
> > Thanks in advance for any suggestions how to tackle this!
>
> I'm pretty sure you can still use overlayfs with lxc-2. My suggestion,
> though, is to go with lxd and zfs instead. You can have a "golden"
> container, keep it stopped, and simply create your other containers with
> "lxc copy". With zfs, the "copy" process will be instantaneous, and the
> "clone" will be its own filesystem (no lower/base directory restriction
> like in aufs/overlayfs).

Ah ok, currently we already have a golden container from which we derive new
containers using aufs. I've just tried the above and it works like a charm!

> If you need to modify the "golden" container (which will affect all NEW
> containers copied from it), simply start it and perform your changes like
> on a normal container (don't forget to stop it afterwards). Note that this
> is different from aufs/overlayfs, where generally you shouldn't touch the
> lower/base directory.

Yeah, I am aware of the latter.
We always have to take care in lxc-1 when updating the golden container that
no instances were running.

This is great info! Thanks again!
Frans
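The golden-container workflow described above (a stopped "golden" container
cloned per short-lived job) is easy to script. A minimal sketch, assuming the
lxd `lxc` client is on PATH and a container named `golden` exists (both
assumptions; `job-42` is a made-up name):

```python
import subprocess

def lxc_cmd(*args):
    """Build an lxd client command line (the 'lxc' tool, assumed installed)."""
    return ["lxc", *args]

def clone_and_start(golden, name, run=subprocess.check_call):
    """Clone a stopped golden container and start the copy.

    With a zfs storage backend the copy step is nearly instantaneous."""
    run(lxc_cmd("copy", golden, name))
    run(lxc_cmd("start", name))

def destroy(name, run=subprocess.check_call):
    """Tear down a short-lived container once its job is done."""
    run(lxc_cmd("delete", "--force", name))

# Example (needs a real lxd install):
#   clone_and_start("golden", "job-42")
#   ... run the job inside job-42 ...
#   destroy("job-42")
```

The `run` parameter is just a seam for testing; in real use the default
subprocess.check_call executes the commands and raises on failure.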