On Sat, May 18, 2019 at 12:44 PM Grant Taylor
<gtay...@gentoo.tnetconsulting.net> wrote:
>
> On 5/18/19 9:26 AM, Peter Humphrey wrote:
> > Hello list,
>
> Hi,
>
> > Can anyone answer this?
>
> I would think that containers could be made to do this.  But I'm not a
> fan of the containerization systems that I've seen.  They seem to be too
> large and try to control too many things and impose too many
> limitations.

I'd be interested to hear if there are other scripts people have put
out there, but I agree that most of the container solutions on Linux
are overly complex.

I personally use nspawn, which is actually pretty minimal, but it
depends on systemd, which I'm sure many would argue is overly complex.
:)  However, if you are running systemd you can basically do a
one-liner that requires zero setup to turn a chroot into a container.
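
For reference, the minimal version looks something like this (just a
sketch; /path/to/chroot is whatever directory holds your chroot):

  # boot the chroot as a container (its init runs as PID 1 inside)
  systemd-nspawn -bD /path/to/chroot

  # or just get a shell in it without booting anything
  systemd-nspawn -D /path/to/chroot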

On to the original questions about mounts:

In general you can mount stuff in containers without issue.  There are
two ways to go about it.  One is to mount something on the host and
bind-mount it into the container, typically at launch time.  The other
is to give the container the capabilities it needs to do its own
mounting (by default containers aren't granted those capabilities, so
mounting will fail even as root inside the container).
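
To make that concrete with nspawn (just a sketch, with made-up paths):
the first approach is a --bind= at launch, and the second is handing
the container CAP_SYS_ADMIN so that mount works inside it:

  # approach 1: bind-mount a host directory into the container at launch
  systemd-nspawn -bD /path/to/chroot --bind=/mnt/data:/mnt/data

  # approach 2: grant the capability so the container can mount things
  # itself (broader access than you usually want to hand out)
  systemd-nspawn -bD /path/to/chroot --capability=CAP_SYS_ADMIN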

I believe the reason the wiki says to be careful with mounts has more
to do with UID/GID mapping.  Since you're using NFS, this is an issue
you're probably already dealing with.  You're probably aware that
running NFS across multiple hosts with unsynchronized passwd/group
files can be tricky: Linux (and Unix in general) works with UIDs/GIDs,
not directly with names, so if the same user has one UID on one host
and a different UID on another you can get unexpected permissions
behavior.
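
A quick way to see the kind of mismatch I mean (hypothetical UIDs and
paths): check what the same numeric owner resolves to on each host:

  # on host A
  ls -ln /mnt/data/somefile   # owner shows up as UID 1000
  getent passwd 1000          # -> alice on this host

  # on host B, with a different passwd file
  getent passwd 1000          # -> bob, or no entry at all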

In a nutshell the same thing can happen with containers, or for that
matter with chroots.  If you have identical passwd/group files it
should be a non-issue.  However, if you want to do UID mapping with
unprivileged containers you have to be careful with mounts, as the IDs
might not get translated properly.  Using completely different UIDs in
a container is the wiki's suggested solution, which is fine as long as
the actual container filesystem isn't shared with anything else.  That
tends to be the case anyway when you're using container
implementations that do a lot of fancy image management.  If you're
doing something very minimal and just using a path/chroot on the host
as your container, then you need to be mindful of your UIDs/GIDs if
you go accessing anything from the host directly.
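
If you do want the shifted-UID approach with nspawn, user namespacing
is a flag away (again just a sketch; the exact options depend a bit on
your systemd version):

  # run the container in a user namespace with an automatically picked
  # UID/GID range, and chown the container tree to match
  systemd-nspawn -bD /path/to/chroot --private-users=pick \
      --private-users-chown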

The other thing I'd be careful with is mounting physical devices in
more than one place.  Since you're actually sharing a kernel I suspect
Linux will "do the right thing" if you mount an ext4 filesystem on
/dev/sda2 in two different containers, but I've never tried it (and
again, doing that requires giving the containers access to even see
sda2, because they probably won't see physical devices by default).
In a VM environment you definitely can't do this, because the VMs are
completely isolated at the kernel level, and two different kernels
holding dirty buffers on the same physical device is going to kill any
filesystem that isn't designed to be clustered.  In a container
environment the two containers aren't really isolated at the actual
physical filesystem level since they share the kernel, so I think
you'd be fine, but I'd really want to test or do some research before
relying on it.
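
For completeness, exposing a block device to an nspawn container so it
can even attempt this would look something like the following
(untested sketch, for the reasons above; device and paths are made up):

  # make the device node visible inside, allow the container's unit to
  # access it, and grant the capability needed to mount it
  systemd-nspawn -bD /path/to/chroot \
      --bind=/dev/sda2 \
      --property=DeviceAllow='/dev/sda2 rwm' \
      --capability=CAP_SYS_ADMIN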

In any case, the more typical solution is to just mount everything on
the host and then bind-mount it into the container.  So, you could
mount the NFS share under /mnt and then bind-mount that into your
container.  There is really no performance hit and it should work fine
without giving the container a bunch of capabilities.
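
Concretely, for the NFS case (server name and paths made up):

  # on the host
  mount -t nfs fileserver:/export/data /mnt/data

  # then launch the container with that path bound in, as above
  systemd-nspawn -bD /path/to/chroot --bind=/mnt/data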

-- 
Rich
