Quoting Stéphane Graber (stgra...@ubuntu.com):
> On Fri, Jul 19, 2013 at 02:26:47PM +0000, Serge Hallyn wrote:
> > With this patchset, I am able to create and start an ubuntu-cloud
> > container completely as an unprivileged user, on an ubuntu saucy
> > host with the kernel from ppa:ubuntu-lxc/kernel and the nsexec
> > package from ppa:serge-hallyn/userns-natty.
> 
> That's great! We're definitely getting close to having really useful
> unprivileged containers!
> 
> > 
> > The one thing still completely unimplemented is networking.  I am
> > creating containers with lxc.network.type=empty to work around this.
> > Once the rest of this settles down, I'll address that.
> > 
> > lxc-destroy has not yet been updated, so right now the easiest way
> > to delete these containers is as root.  lxc-console and lxc-stop do
> > work as expected.
> > 
> > ====================
> > Prerequisites:
> > ====================
> > 
> > 1. A privileged user or init script needs to create
> >     /run/lock/lxc/$HOME
> > and set perms so $USER can create locks there.
> 
> I have a feeling we already talked about this but my memory needs to be
> refreshed, why can't we use XDG_RUNTIME_DIR for that instead of
> requiring a privileged process to create a directory under
> /run/lock/lxc?

I forget (though I do recall you mentioning it before); I'll have to
read up on that again.

Right now lxclock.c defaults to using /run/lock/lxc/$lxcpath/

You're suggesting using XDG_RUNTIME_DIR, which is typically /run/user/$uid.

Perhaps I should simply check getuid() - if 0, use /run/lock/lxc,
otherwise use $XDG_RUNTIME_DIR/lxc/$lxcpath ?

> Was that only for the corner case where multiple users may have write
> access and uid/gid mapping to a shared container? Is that actually

Yes.

> likely to happen (you'd need the same uid/gid allocation for both users
> and have the container on a shared path for it to be a problem).

But it *is* likely to happen with root-owned containers.  The same
code handles both.  But geteuid() == 0 is probably a decent way to
guess what's going on.
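Roughly, the fallback I have in mind looks like this (a sketch only --
the real logic would live in lxclock.c, and the $lxcpath default here
is just for illustration):

```shell
#!/bin/sh
# Sketch of the proposed lock-directory selection: root keeps the
# current /run/lock/lxc/$lxcpath, while unprivileged users fall back
# to a directory under XDG_RUNTIME_DIR.
lxcpath=${1:-$HOME/.local/share/lxc}

if [ "$(id -u)" -eq 0 ]; then
    lockdir="/run/lock/lxc$lxcpath"
else
    # XDG_RUNTIME_DIR is normally /run/user/$uid
    lockdir="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/lxc$lxcpath"
fi
echo "$lockdir"
```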

> > 2. Before starting the container you'll need to be in a cgroup you
> > can manipulate.  I do this with:
> > 
> > #!/bin/sh
> > name=`whoami`
> > for d in /sys/fs/cgroup/*; do
> >     sudo mkdir $d/$name
> >     sudo chown -R $name $d/$name
> > done
> > echo 0 | sudo tee -a /sys/fs/cgroup/cpuset/$name/cpuset.cpus
> > echo 0 | sudo tee -a /sys/fs/cgroup/cpuset/$name/cpuset.mems
> > 
> > followed by:
> > 
> > cgroup_enter() {
> >     name=`whoami`
> >     for d in /sys/fs/cgroup/*; do
> >             echo $$  > $d/$name/cgroup.procs
> >     done
> > }
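For convenience, the two snippets above can be folded into a single
script (same commands as in the quoted version, just combined; still
needs sudo for the setup half):

```shell
#!/bin/sh
# Create per-user cgroups under every mounted controller (needs sudo),
# initialize cpuset, then move the current shell into them.
name=$(whoami)
for d in /sys/fs/cgroup/*; do
    sudo mkdir -p "$d/$name"
    sudo chown -R "$name" "$d/$name"
done
# cpuset requires cpus and mems to be set before tasks can join
echo 0 | sudo tee "/sys/fs/cgroup/cpuset/$name/cpuset.cpus"
echo 0 | sudo tee "/sys/fs/cgroup/cpuset/$name/cpuset.mems"
for d in /sys/fs/cgroup/*; do
    echo $$ > "$d/$name/cgroup.procs"
done
```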
> > 
> > 3. You need to give your user some subuids.  If you're creating a
> > new saucy system to use this on, then you already have some - check
> > /etc/subuids.  If not, then add some using "usermod -w 100000-299999
> > -v 100000-299999 $user"

> 
> I'm assuming you mean /etc/subuid and /etc/subgid?

yeah.

> On up to date saucy, those two files are empty but I guess we may be
> getting some allocation for new users or on new installs?

Yes, /etc/login.defs specifies default allocations for new users.  I
*thought* we were allocating some range by default, but maybe not.
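A quick way to check whether your user already has an allocation, and
to add one if not (the 100000-299999 range matches the example earlier
in this mail; any unused range works):

```shell
#!/bin/sh
# Check /etc/subuid for an existing subordinate-id allocation for the
# current user; print the usermod command to run if none exists.
user=$(whoami)
if grep -qs "^$user:" /etc/subuid; then
    echo "subuids already allocated for $user:"
    grep -s "^$user:" /etc/subuid /etc/subgid
else
    # -v/--add-subuids adds subordinate uids, -w/--add-subgids adds
    # subordinate gids; both require root
    echo "no allocation found; run: sudo usermod -v 100000-299999 -w 100000-299999 $user"
fi
```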

-serge

_______________________________________________
Lxc-devel mailing list
Lxc-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-devel
