On Mon, Sep 14, 2015 at 04:33:05PM +0000, Serge Hallyn wrote:
> Quoting Serge Hallyn (serge.hal...@ubuntu.com):
> > Quoting Christian Brauner (christianvanbrau...@gmail.com):
> > > On Mon, Sep 14, 2015 at 02:50:39PM +0000, Serge Hallyn wrote:
> > > > Quoting Christian Brauner (christianvanbrau...@gmail.com):
> > > > > When creating ephemeral containers that have the option
> > > > > lxc.ephemeral = 1 set in their config, they will be destroyed on
> > > > > shutdown. As they are simple overlay clones of an existing
> > > > > container, they should be registered in the lxc_snapshots file of
> > > > > the original container to stay consistent and to match what users
> > > > > expect. Most of all, it ensures that we cannot remove a container
> > > > > that has clones, even if they are just ephemeral snapshot-clones.
> > > > > The function adds further consistency because
> > > > > remove_snapshots_entry() ensures that ephemeral clone-snapshots
> > > > > deregister themselves from the lxc_snapshots file when they are
> > > > > destroyed.
> > > > > 
> > > > > POSSIBLE GLITCH:
> > > > > I was thinking hard about race conditions and concurrent access
> > > > > to the lxc_snapshots file when lxc-destroy is called on the
> > > > > container while we shut down the container from inside. Here is
> > > > > what my thoughts are so far:
> > > > > 
> > > > > There should be no race condition when lxc-destroy including all
> > > > > snapshots is
> > > > 
> > > > Note that lxcapi_destroy_with_snapshots() deletes the *snapshots*,
> > > > not the snapshot clones.  This is an unfortunate naming clash (which
> > > > we could try to correct henceforth, but we need good names :), but
> > > > they are different.  So anything under /var/lib/lxc/$container/snaps
> > > > will be deleted.  But if you've created an overlayfs clone, then
> > > > containers listed in /var/lib/lxc/$container/lxc_snapshots will not
> > > > be deleted.  There is no API call or program to automatically delete
> > > > those right now.
> > > 
> > > I think you are partially wrong here. I was not thinking about
> > > problems created by an API call but by the lxc-destroy executable. A
> > > quick walkthrough: With the
> > 
> > D'oh.  Yeah, you'll need to mutex that somehow.
> 
> If you want help up-front with the design, please let me know.  If you
> aren't sure what the current container_disk_lock() and container_mem_lock()
> do, please shout.  (they are explained in a LOCKING comment above
> lxc_container_free() in src/lxc/lxccontainer.c)
> 
> The easiest thing to do might be to disk_lock the container in
> lxc_destroy.c, then make the mod_rdep() helper which you use in
> lxc_destroy.c call a _locked variant (to avoid deadlock).  So mod_rdep()
> would turn into something like:
> 
> static bool mod_rdep(struct lxc_container *c0, struct lxc_container *c,
>                      bool inc)
> {
>         bool ret;
> 
>         /* Take the disk lock so concurrent modifications of the
>          * lxc_snapshots file cannot race. */
>         if (container_disk_lock(c0))
>                 return false;
>         ret = mod_rdep_locked(c0->name, c0->lxcpath, c->name, c->lxcpath,
>                               inc);
>         container_disk_unlock(c0);
>         return ret;
> }
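
Just so I'm clear on what the lock needs to protect: I picture the
mod_rdep_locked() body doing a read-modify-write on the lxc_snapshots
file, roughly like the sketch below. (The one-"clonepath clonename"-pair-
per-line file format and the helper's exact signature are only my
assumptions for illustration.)

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAXPATHLEN 4096  /* placeholder; LXC gets this elsewhere */

/* Sketch only: assumes the caller already holds the container disk
 * lock, and that lxc_snapshots stores one "clonepath clonename" line
 * per registered clone. */
static bool mod_rdep_locked(const char *lxcname, const char *lxcpath,
                            const char *clonename, const char *clonepath,
                            bool inc)
{
        char path[MAXPATHLEN], tmppath[MAXPATHLEN + 4];
        char entry[MAXPATHLEN], line[MAXPATHLEN];
        FILE *in, *out;

        snprintf(path, sizeof(path), "%s/%s/lxc_snapshots", lxcpath,
                 lxcname);
        snprintf(entry, sizeof(entry), "%s %s\n", clonepath, clonename);

        if (inc) {
                /* Register the clone: append a single line. */
                out = fopen(path, "a");
                if (!out)
                        return false;
                if (fputs(entry, out) == EOF) {
                        fclose(out);
                        return false;
                }
                return fclose(out) == 0;
        }

        /* Deregister the clone: copy every other line to a temp file,
         * then rename it over the original. */
        snprintf(tmppath, sizeof(tmppath), "%s.tmp", path);
        in = fopen(path, "r");
        if (!in)
                return false;
        out = fopen(tmppath, "w");
        if (!out) {
                fclose(in);
                return false;
        }
        while (fgets(line, sizeof(line), in))
                if (strcmp(line, entry) != 0)
                        fputs(line, out);
        fclose(in);
        return fclose(out) == 0 && rename(tmppath, path) == 0;
}

If that's about right, then both the append and the rewrite clearly have
to happen under the same disk lock.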
> 
> -serge

Thanks, this is really nice! One question:
- I take it that we want to make mod_rdep() public. mod_rdep() will be used
  in lxc_destroy.c, lxccontainer.c and start.c. The problem is that in
  start.c we do not have a container to pass into mod_rdep(). Do you want me
  to rewrite mod_rdep() to take an lxcpath and lxcname? If so, could we
  still use disk_lock() and mem_lock() by e.g. calling
  lxc_container_new(lxcname, lxcpath) and then calling disk_lock() or
  mem_lock() afterwards to protect the container? A rough sketch of what I
  mean is below.
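
Concretely, I imagine something like this (mod_rdep_by_name() is just a
made-up name, and mod_rdep_locked() is the variant you sketched above):

static bool mod_rdep_by_name(const char *lxcname, const char *lxcpath,
                             const char *clonename, const char *clonepath,
                             bool inc)
{
        struct lxc_container *c0;
        bool ret = false;

        /* Build a transient handle purely to reuse the existing
         * locking helpers; start.c only has the names and paths. */
        c0 = lxc_container_new(lxcname, lxcpath);
        if (!c0)
                return false;

        if (container_disk_lock(c0))
                goto out;
        ret = mod_rdep_locked(lxcname, lxcpath, clonename, clonepath, inc);
        container_disk_unlock(c0);
out:
        lxc_container_put(c0);
        return ret;
}

That would keep the locking semantics identical to your mod_rdep() above
while letting start.c pass only names and paths.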
