I am having a bit of difficulty with the user ID namespacing/mapping used by lxd.

Firstly, let me see if I understand this properly. Reading https://www.stgraber.org/2014/01/17/lxc-1-0-unprivileged-containers/ it seems each host user is allocated one mapped range of uids. For example:

$ cat /etc/subuid
lxd:100000:65536
root:100000:65536

Here, both these users get the same mapped range. Furthermore, presumably every container launched by those users has the *same* mapped range; that is, root in container foo and root in container bar both map to 100000 on the host, correct? That sounds like it doesn't give proper isolation from one container to another, but I'll assume that's not a problem in practice.
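
(If I've understood it right, you can see the mapping from inside a container via /proc/self/uid_map, where the three columns are container-side start, host-side start and length. With the default config above I'd expect something like the following, though I haven't double-checked the exact output:

[root@test ~]# cat /proc/self/uid_map
         0     100000      65536

i.e. container UID N corresponds to host UID 100000+N, for N from 0 to 65535.)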

Here's the actual issue I'm dealing with.

I am trying to deploy FreeIPA as a way to do centralised authentication for servers - and of course, with lots of lxd containers, central authentication would be extremely helpful!

The trouble is that FreeIPA allocates very high UIDs and GIDs by default. Example from a fresh install of FreeIPA inside a centos/7/amd64 container:

[root@test ~]# id admin
uid=1134400000(admin) gid=1134400000(admins) groups=1134400000(admins)

[root@test ~]# getent passwd admin
admin:*:1134400000:1134400000:Administrator:/home/admin:/bin/bash

But this ID isn't usable within the container (presumably because 1134400000 is far outside the 65536-wide mapped range, so the kernel has no mapping for it), as I find when I try to ssh in:

Oct 14 11:01:23 test sshd[3896]: Authorized to admin, krb5 principal ad...@ipa.example.com (ssh_gssapi_krb5_cmdok)
Oct 14 11:01:23 test sshd[3896]: Accepted gssapi-with-mic for admin from 10.15.6.253 port 49800 ssh2
*Oct 14 11:01:23 test sshd[3896]: fatal: initgroups: admin: Invalid argument*

Or even just use su within the container:

[root@test ~]# su - admin
*su: cannot set groups: Invalid argument*
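
(For what it's worth, I suspect this is a generic "unmapped ID" failure rather than anything FreeIPA-specific: any syscall handed an ID outside the mapped 0-65535 range should fail with EINVAL inside the container. A quick way to confirm, I think - untested on this box - would be:

[root@test ~]# touch /tmp/idtest
[root@test ~]# chown 1134400000:1134400000 /tmp/idtest

which I'd expect to fail with "Invalid argument", just like initgroups/setgroups above.)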

So I'm wondering about the best way to deal with this.

(1) I can try to configure FreeIPA to allocate uids in the "low" range, say 2000+.
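
(If I go that route, I believe ipa-server-install lets you pick the range at install time; from my reading of its man page the relevant options are --idstart and --idmax, so something along these lines, with the option names worth double-checking:

# hypothetical fresh install with a low, ~64K-wide ID range
ipa-server-install --idstart=2000 --idmax=60000 ...

I'm less sure what the options are for an already-installed server.)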

(2) I can try to get lxd to map a larger range, for example mapping container UID 1134400000 upwards to 2134400000 on the host. But are there problems with that approach? Is there a reason why the default lxd config only maps 64K worth of uids?
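
(The simplest version of this I can think of - purely a sketch, untested - would be to widen the default allocation in /etc/subuid and /etc/subgid so it covers the FreeIPA IDs, rather than adding a separate high range, and then restart lxd so it picks up the new range:

# /etc/subuid and /etc/subgid - hypothetical enlarged allocation
lxd:100000:1200000000
root:100000:1200000000

With that, container UID 1134400000 would land at host UID 100000 + 1134400000 = 1134500000, if my understanding of the offset mapping is right. I gather a separate extra range could instead be added per-container via raw.idmap, but I haven't dug into that yet.)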

Plus, I imagine this means all existing containers will need their filesystems changed because of the new mapped IDs?

# ls -l /var/lib/lxd/containers/ldap-1/rootfs/sbin/suexec
-r-x--x--- 1 100000 100048 15352 Jul 18 15:31 /var/lib/lxd/containers/ldap-1/rootfs/sbin/suexec

Thanks,

Brian.
