[lxc-devel] [PATCH 1/2] python3: Don't require a template name

2014-06-03 Thread Stéphane Graber
The template name isn't required; if it's not passed, create will
simply be asked to create a container without a rootfs.

Signed-off-by: Stéphane Graber stgra...@ubuntu.com
---
 src/python-lxc/lxc.c   |  2 +-
 src/python-lxc/lxc/__init__.py | 13 +
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/src/python-lxc/lxc.c b/src/python-lxc/lxc.c
index f7ab092..d436c28 100644
--- a/src/python-lxc/lxc.c
+++ b/src/python-lxc/lxc.c
@@ -733,7 +733,7 @@ Container_create(Container *self, PyObject *args, PyObject *kwds)
 int i = 0;
 static char *kwlist[] = {"template", "flags", "args", NULL};
 
-if (! PyArg_ParseTupleAndKeywords(args, kwds, "s|iO", kwlist,
+if (! PyArg_ParseTupleAndKeywords(args, kwds, "|siO", kwlist,
   &template_name, &flags, &vargs))
 return NULL;
 
diff --git a/src/python-lxc/lxc/__init__.py b/src/python-lxc/lxc/__init__.py
index 45d139d..47b25b8 100644
--- a/src/python-lxc/lxc/__init__.py
+++ b/src/python-lxc/lxc/__init__.py
@@ -201,11 +201,11 @@ class Container(_lxc.Container):
 
 return _lxc.Container.set_config_item(self, key, value)
 
-def create(self, template, flags=0, args=()):
+def create(self, template=None, flags=0, args=()):
 
 Create a new rootfs for the container.
 
-template must be a valid template name.
+template, if passed, must be a valid template name.
 
 flags (optional) is an integer representing the optional
 create flags to be passed.
@@ -222,8 +222,13 @@ class Container(_lxc.Container):
 else:
 template_args = args
 
-return _lxc.Container.create(self, template=template,
- flags=flags, args=tuple(template_args))
+if template:
+return _lxc.Container.create(self, template=template,
+ flags=flags,
+ args=tuple(template_args))
+else:
+return _lxc.Container.create(self, flags=flags,
+ args=tuple(template_args))
 
 def clone(self, newname, config_path=None, flags=0, bdevtype=None,
   bdevdata=None, newsize=0, hookargs=()):
-- 
1.9.1
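The wrapper's new dispatch is easy to model in plain Python. The sketch below mimics the optional-template behaviour of the patched `create()`; it is a mock that returns the keyword arguments it would forward, not the real `_lxc.Container.create` binding:

```python
def create(template=None, flags=0, args=()):
    """Mimic the patched python-lxc wrapper: 'template' is only
    forwarded to the C binding when the caller actually supplies it."""
    kwargs = {"flags": flags, "args": tuple(args)}
    if template:
        kwargs["template"] = template
    # stands in for _lxc.Container.create(self, **kwargs)
    return kwargs
```

With this change, `create()` with no arguments is valid (the container is created without a rootfs), while `create("ubuntu")` still forwards the template name as before.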

___
lxc-devel mailing list
lxc-devel@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-devel


[lxc-devel] [PATCH 2/2] python3: Handle invalid global config keys

2014-06-03 Thread Stéphane Graber
Signed-off-by: Stéphane Graber stgra...@ubuntu.com
---
 src/python-lxc/lxc.c | 10 +-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/src/python-lxc/lxc.c b/src/python-lxc/lxc.c
index d436c28..a8ab65b 100644
--- a/src/python-lxc/lxc.c
+++ b/src/python-lxc/lxc.c
@@ -329,12 +329,20 @@ LXC_get_global_config_item(PyObject *self, PyObject *args, PyObject *kwds)
{
 static char *kwlist[] = {"key", NULL};
 char* key = NULL;
+const char* value = NULL;
 
 if (! PyArg_ParseTupleAndKeywords(args, kwds, "s|", kwlist,
   &key))
 return NULL;
 
-return PyUnicode_FromString(lxc_get_global_config_item(key));
+value = lxc_get_global_config_item(key);
+
+if (!value) {
+PyErr_SetString(PyExc_KeyError, "Invalid configuration key");
+return NULL;
+}
+
+return PyUnicode_FromString(value);
 }
 
 static PyObject *
-- 
1.9.1
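The new error path maps a NULL return from `lxc_get_global_config_item()` to a Python `KeyError` instead of passing a NULL pointer to `PyUnicode_FromString`. A rough pure-Python equivalent (the sample key/value store is hypothetical, not the real LXC config):

```python
# Hypothetical stand-in for LXC's global config store.
_GLOBAL_CONFIG = {"lxc.lxcpath": "/var/lib/lxc"}

def get_global_config_item(key):
    """Sketch of the patched binding: unknown keys raise KeyError
    rather than crashing on a NULL C string."""
    value = _GLOBAL_CONFIG.get(key)
    if value is None:
        raise KeyError("Invalid configuration key")
    return value
```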



[lxc-devel] [lxc/lxc] 015f0d: lxc-autostart: rework boot and group handling

2014-06-03 Thread GitHub
  Branch: refs/heads/master
  Home:   https://github.com/lxc/lxc
  Commit: 015f0dd7924d27aeb2f16bb0c4d243f3fd93e94b
  https://github.com/lxc/lxc/commit/015f0dd7924d27aeb2f16bb0c4d243f3fd93e94b
  Author: Michael H. Warfield m...@wittsend.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M .gitignore
M config/init/systemd/Makefile.am
R config/init/systemd/lxc.service
A config/init/systemd/lxc.service.in
R config/init/sysvinit/lxc
A config/init/sysvinit/lxc.in
M config/init/upstart/lxc.conf
M configure.ac
M doc/lxc-autostart.sgml.in
M doc/lxc.container.conf.sgml.in
M lxc.spec.in
M src/lxc/lxc_autostart.c

  Log Message:
  ---
  lxc-autostart: rework boot and group handling

This adds new functionality to lxc-autostart.

*) The -g / --groups option may be given multiple times and is
cumulative.  It may be mixed freely with the previous comma-separated
group list convention.  Groups are processed in the
order they first appear in the aggregated group list.

*) The NULL group may be specified in the group list using either a
leading comma, a trailing comma, or an embedded comma.

*) Booting proceeds in order of the groups specified on the command line,
then ordered by lxc.start.order and name collating sequence.

*) Default host bootup is now specified as -g onboot, meaning that first
the onboot group is booted and then any remaining enabled
containers in the NULL group are booted.

*) Adds documentation to lxc-autostart for -g processing order and
combinations.

*) Parameterizes bootgroups, options, and shutdown delay in init scripts
and services.

*) Updates the various init scripts to use lxc-autostart in a similar way.

Reported-by: CDR vene...@gmail.com
Signed-off-by: Dwight Engen dwight.en...@oracle.com
Signed-off-by: Michael H. Warfield m...@wittsend.com
Acked-by: Stéphane Graber stgra...@ubuntu.com
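The group-list aggregation described in the commit message can be sketched as follows. This is a Python model of the stated semantics (repeated -g options, comma-separated lists, an empty element selecting the NULL group, order of first appearance), not the C implementation in lxc_autostart.c:

```python
def aggregate_groups(g_options):
    """Combine repeated -g option values into one ordered group list.
    Each value may itself be a comma-separated list; an empty element
    (leading, trailing, or embedded comma) selects the NULL (unnamed)
    group.  Groups are kept in order of first appearance."""
    ordered = []
    for opt in g_options:
        for name in opt.split(","):
            if name not in ordered:
                ordered.append(name)
    return ordered

# e.g. "-g onboot, -g web,db" aggregates to ["onboot", "", "web", "db"]:
# the onboot group first, then the NULL group, then web and db.
```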




Re: [lxc-devel] [PATCH 1/2] python3: Don't require a template name

2014-06-03 Thread Serge Hallyn
Quoting Stéphane Graber (stgra...@ubuntu.com):
 The template name isn't required, if it's not passed, then create will
 simply be asked to create a container without a rootfs.

With the command line lxc-create, we decided that this was too dangerous
and too easy for people to get wrong, so we required 'none' for this case.

Do you feel that API users are different and won't just start typing
'create()' and get confused?



Re: [lxc-devel] [PATCH 1/2] python3: Don't require a template name

2014-06-03 Thread Stéphane Graber
On Tue, Jun 03, 2014 at 03:11:02PM +, Serge Hallyn wrote:
 Quoting Stéphane Graber (stgra...@ubuntu.com):
  The template name isn't required, if it's not passed, then create will
  simply be asked to create a container without a rootfs.
 
 With the command line lxc-create, we decided that this was too dangerous
 and too easy for ppl to do wrong, so we required 'none' for this case.
 
 Do you feel that API users are different and won't just start typeing
 'create()' and get confused?

The binding is meant to follow the C API and in the C API we don't have
the special string, so that's why I did it that way.

While it may be reasonable for shell users to try and guess parameters,
I'd expect API users to actually read the function help before using it,
so I think that's fine.

 

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com




Re: [lxc-devel] [PATCH] lxc-plamo: fix for configuring network interface

2014-06-03 Thread Stéphane Graber
On Tue, Jun 03, 2014 at 12:20:23PM +0900, TAMUKI Shoichi wrote:
 Fix configure_plamo so as not to configure wireless network interface
 in containers even if the host uses wireless network interface.
 
 Signed-off-by: TAMUKI Shoichi tam...@linet.gr.jp

Acked-by: Stéphane Graber stgra...@ubuntu.com

 ---
  templates/lxc-plamo.in | 4 
  1 file changed, 4 insertions(+)
 
 diff --git a/templates/lxc-plamo.in b/templates/lxc-plamo.in
 index 644b8d0..24ecb7e 100644
 --- a/templates/lxc-plamo.in
 +++ b/templates/lxc-plamo.in
 @@ -237,6 +237,10 @@ configure_plamo() {
rm -f $rootfs/etc/rc.d/rc.inet1.tradnet
sh /tmp/netconfig.rconly
rm -f /tmp/netconfig.rconly
 +  ed - $rootfs/etc/rc.d/rc.inet1.tradnet <<- EOF
 + g/cmdline/s/if/ false \\/
 + w
 + EOF
return 0
  }
  
 -- 
 1.9.0

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com




[lxc-devel] [lxc/lxc] 0520c2: point user to updated man page in template boilerp...

2014-06-03 Thread GitHub
  Branch: refs/heads/master
  Home:   https://github.com/lxc/lxc
  Commit: 0520c252da5d99b611594b80aac810cb29895dc8
  https://github.com/lxc/lxc/commit/0520c252da5d99b611594b80aac810cb29895dc8
  Author: Dwight Engen dwight.en...@oracle.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M src/lxc/lxccontainer.c

  Log Message:
  ---
  point user to updated man page in template boilerplate

Signed-off-by: Dwight Engen dwight.en...@oracle.com
Acked-by: Serge E. Hallyn serge.hal...@ubuntu.com


  Commit: aadd458215c973af5c54c249482948e8e95b3edf
  https://github.com/lxc/lxc/commit/aadd458215c973af5c54c249482948e8e95b3edf
  Author: TAMUKI Shoichi tam...@linet.gr.jp
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M templates/lxc-plamo.in

  Log Message:
  ---
  lxc-plamo: fix for configuring network interface

Fix configure_plamo so as not to configure wireless network interface
in containers even if the host uses wireless network interface.

Signed-off-by: TAMUKI Shoichi tam...@linet.gr.jp
Acked-by: Stéphane Graber stgra...@ubuntu.com


Compare: https://github.com/lxc/lxc/compare/015f0dd7924d...aadd458215c9


Re: [lxc-devel] [PATCH 1/2] python3: Don't require a template name

2014-06-03 Thread Serge Hallyn
Quoting Stéphane Graber (stgra...@ubuntu.com):
 On Tue, Jun 03, 2014 at 03:11:02PM +, Serge Hallyn wrote:
  Quoting Stéphane Graber (stgra...@ubuntu.com):
   The template name isn't required, if it's not passed, then create will
   simply be asked to create a container without a rootfs.
  
  With the command line lxc-create, we decided that this was too dangerous
  and too easy for ppl to do wrong, so we required 'none' for this case.
  
  Do you feel that API users are different and won't just start typeing
  'create()' and get confused?
 
 The binding is meant to follow the C API and in the C API we don't have
 the special string, so that's why I did it that way.
 
 While it may be reasonable for shell users to try and guess parameters,
 I'd expect API users to actually read the function help before using it,
 so I think that's fine.

Acked-by: Serge E. Hallyn serge.hal...@ubuntu.com



Re: [lxc-devel] [PATCH 2/2] python3: Handle invalid global config keys

2014-06-03 Thread Serge Hallyn
Quoting Stéphane Graber (stgra...@ubuntu.com):
 Signed-off-by: Stéphane Graber stgra...@ubuntu.com

Acked-by: Serge E. Hallyn serge.hal...@ubuntu.com



[lxc-devel] [lxc/lxc] 825568: Corrected debug message

2014-06-03 Thread GitHub
  Branch: refs/heads/master
  Home:   https://github.com/lxc/lxc
  Commit: 8255688a655a330973e89d0dc0129f8ff698372d
  https://github.com/lxc/lxc/commit/8255688a655a330973e89d0dc0129f8ff698372d
  Author: bartekplus bartekp...@gmail.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M src/lxc/conf.c

  Log Message:
  ---
  Corrected debug message

Signed-off-by: Bartosz Tomczyk bartekp...@gmail.com
Acked-by: Stéphane Graber stgra...@ubuntu.com


  Commit: 0d6b9aea632955950db29e61072f8fbd09f9d8ee
  https://github.com/lxc/lxc/commit/0d6b9aea632955950db29e61072f8fbd09f9d8ee
  Author: bartekplus bartekp...@gmail.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M src/lxc/lxc_execute.c

  Log Message:
  ---
  Free lxc configuration structure

Signed-off-by: Bartosz Tomczyk bartekp...@gmail.com
Acked-by: Stéphane Graber stgra...@ubuntu.com


Compare: https://github.com/lxc/lxc/compare/aadd458215c9...0d6b9aea6329


[lxc-devel] [lxc/lxc] 8df684: python3: Don't require a template name

2014-06-03 Thread GitHub
  Branch: refs/heads/master
  Home:   https://github.com/lxc/lxc
  Commit: 8df68465f2e0425079aec7a4831acb8c04a97ae7
  https://github.com/lxc/lxc/commit/8df68465f2e0425079aec7a4831acb8c04a97ae7
  Author: Stéphane Graber stgra...@ubuntu.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M src/python-lxc/lxc.c
M src/python-lxc/lxc/__init__.py

  Log Message:
  ---
  python3: Don't require a template name

The template name isn't required; if it's not passed, create will
simply be asked to create a container without a rootfs.

Signed-off-by: Stéphane Graber stgra...@ubuntu.com
Acked-by: Serge E. Hallyn serge.hal...@ubuntu.com


  Commit: 1b03969c7ce75d7d88fe88d25fa74febb89bea98
  https://github.com/lxc/lxc/commit/1b03969c7ce75d7d88fe88d25fa74febb89bea98
  Author: Stéphane Graber stgra...@ubuntu.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M src/python-lxc/lxc.c

  Log Message:
  ---
  python3: Handle invalid global config keys

Signed-off-by: Stéphane Graber stgra...@ubuntu.com
Acked-by: Serge E. Hallyn serge.hal...@ubuntu.com


Compare: https://github.com/lxc/lxc/compare/0d6b9aea6329...1b03969c7ce7


[lxc-devel] [PATCH] lxc-fedora.in: Correct some systemd target setups.

2014-06-03 Thread Michael H. Warfield
lxc-fedora.in: Correct some systemd target setups.

Set the halt.target action to be sigpwr.target.  This allows
SIGPWR to properly shut the container down from lxc-stop.

Re-enable the systemd-journald.service.

Signed-off-by: Michael H. Warfield m...@wittsend.com
---
 templates/lxc-fedora.in | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/templates/lxc-fedora.in b/templates/lxc-fedora.in
index 2e14cc8..b9741ac 100644
--- a/templates/lxc-fedora.in
+++ b/templates/lxc-fedora.in
@@ -369,10 +369,9 @@ configure_fedora_systemd()
 rm -f ${rootfs_path}/etc/systemd/system/default.target
 touch ${rootfs_path}/etc/fstab
 chroot ${rootfs_path} ln -s /dev/null /etc/systemd/system/udev.service
-chroot ${rootfs_path} ln -s /dev/null /etc/systemd/system/systemd-journald.service
 chroot ${rootfs_path} ln -s /lib/systemd/system/multi-user.target /etc/systemd/system/default.target
 # Make systemd honor SIGPWR
-chroot ${rootfs_path} ln -s /usr/lib/systemd/system/halt.target /etc/systemd/system/
+chroot ${rootfs_path} ln -s /usr/lib/systemd/system/halt.target /etc/systemd/system/sigpwr.target
 #dependency on a device unit fails it specially that we disabled udev
 # sed -i 's/After=dev-%i.device/After=/' ${rootfs_path}/lib/systemd/system/getty\@.service
 #
-- 
1.9.3


-- 
Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
   /\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
   NIC whois: MHW9  | An optimist believes we live in the best of all
 PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!





Re: [lxc-devel] [RFC] Per-user namespace process accounting

2014-06-03 Thread Pavel Emelyanov
On 05/29/2014 07:32 PM, Serge Hallyn wrote:
 Quoting Marian Marinov (m...@1h.com):
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 05/29/2014 01:06 PM, Eric W. Biederman wrote:
 Marian Marinov m...@1h.com writes:

 Hello,

 I have the following proposition.

 Number of currently running processes is accounted at the root user 
 namespace. The problem I'm facing is that
 multiple containers in different user namespaces share the process 
 counters.

 That is deliberate.

 And I understand that very well ;)


 So if containerX runs 100 processes with UID 99, containerY should have an
 NPROC limit above 100 in order to execute any processes with its own UID 99.

 I know that some of you will tell me that I should not provision all of my 
 containers with the same UID/GID maps,
 but this brings another problem.

 We are provisioning the containers from a template. The template has a lot 
 of files 500k and more. And chowning
 these causes a lot of I/O and also slows down provisioning considerably.

 The other problem is that when we migrate one container from one host 
 machine to another the IDs may be already
 in use on the new machine and we need to chown all the files again.

 You should have the same uid allocations for all machines in your fleet as 
 much as possible.   That has been true
 ever since NFS was invented and is not new here.  You can avoid the cost of 
 chowning if you untar your files inside
 of your user namespace.  You can have different maps per machine if you are 
 crazy enough to do that.  You can even
 have shared uids that you use to share files between containers as long as 
 none of those files is setuid.  And map
 those shared files to some kind of nobody user in your user namespace.

 We are not using NFS. We are using a shared block storage that offers us 
 snapshots. So provisioning new containers is
 extremely cheap and fast. Comparing that with untar is comparing a race car
 with a Smart. Yes it can be done and no, I
 do not believe we should go backwards.

 We do not share filesystems between containers, we offer them block devices.
 
 Yes, this is a real nuisance for openstack style deployments.
 
 One nice solution to this imo would be a very thin stackable filesystem
 which does uid shifting, or, better yet, a non-stackable way of shifting
 uids at mount.

I vote for the non-stackable way too. Maybe at the generic VFS level, so that
filesystems don't bother with it. From what I've seen, even simple stacking is
quite a challenge.

Thanks,
Pavel


Re: [lxc-devel] [RFC] Per-user namespace process accounting

2014-06-03 Thread Serge Hallyn
Quoting Pavel Emelyanov (xe...@parallels.com):
 On 05/29/2014 07:32 PM, Serge Hallyn wrote:
  Yes, this is a real nuisance for openstack style deployments.
  
  One nice solution to this imo would be a very thin stackable filesystem
  which does uid shifting, or, better yet, a non-stackable way of shifting
  uids at mount.
 
 I vote for non-stackable way too. Maybe on generic VFS level so that 
 filesystems 
 don't bother with it. From what I've seen, even simple stacking is quite a 
 challenge.

Do you have any ideas for how to go about it?  It seems like we'd have
to have separate inodes per mapping for each file, which is why of
course stacking seems natural here.

Trying to catch the uid/gid at every kernel-userspace crossing seems
like a design regression from the current userns approach.  I suppose we
could continue in the kuid theme and introduce a iiud/igid for the
in-kernel inode uid/gid owners.  Then allow a user privileged in some
ns to create a new mount associated with a different mapping for any
ids over which he is privileged.
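The "simple shifting" under discussion amounts to plain arithmetic over a contiguous id range. The sketch below is illustrative only; the function name and the 65534 overflow id are assumptions for the example, not kernel code:

```python
OVERFLOW_ID = 65534  # conventional "nobody" id for unmapped owners

def shift_id(disk_id, shift, count):
    """Map an on-disk uid/gid into a container's id range by simple
    addition, as a hypothetical per-mount or per-sb shift would do.
    Ids outside the mapped range collapse to the overflow id."""
    if 0 <= disk_id < count:
        return shift + disk_id
    return OVERFLOW_ID
```

A stat on a file owned by disk uid 1000, under a mount shifted by 100000 over a 65536-id range, would then report owner 101000 inside the container.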


Re: [lxc-devel] [RFC] Per-user namespace process accounting

2014-06-03 Thread Serge Hallyn
Quoting Pavel Emelyanov (xe...@parallels.com):
 On 06/03/2014 09:26 PM, Serge Hallyn wrote:
  Quoting Pavel Emelyanov (xe...@parallels.com):
  On 05/29/2014 07:32 PM, Serge Hallyn wrote:
  Yes, this is a real nuisance for openstack style deployments.
 
  One nice solution to this imo would be a very thin stackable filesystem
  which does uid shifting, or, better yet, a non-stackable way of shifting
  uids at mount.
 
  I vote for non-stackable way too. Maybe on generic VFS level so that 
  filesystems 
  don't bother with it. From what I've seen, even simple stacking is quite a 
  challenge.
  
  Do you have any ideas for how to go about it?  It seems like we'd have
  to have separate inodes per mapping for each file, which is why of
  course stacking seems natural here.
 
 I was thinking about a lightweight mapping which is simple shifting. Since
 we're trying to make this co-work with user-ns mappings, a simple uid/gid shift
 should be enough. Please, correct me if I'm wrong.
 
 If I'm not, then it looks to be enough to have two per-sb or per-mnt values
 for the uid and gid shift. Per-mnt for now looks more promising, since a
 container's FS may be just a bind-mount from a shared disk.

per-sb would work.  per-mnt would as you say be nicer, but I don't see how it
can be done since parts of the vfs which get inodes but no mnt information
would not be able to figure out the shifts.

  Trying to catch the uid/gid at every kernel-userspace crossing seems
  like a design regression from the current userns approach.  I suppose we
  could continue in the kuid theme and introduce an iuid/igid for the
  in-kernel inode uid/gid owners.  Then allow a user privileged in some
  ns to create a new mount associated with a different mapping for any
  ids over which he is privileged.
 
 User-space crossing? From my point of view it would be enough if we just turn
 the uid/gid read from disk (well, from wherever the FS gets them) into uids
 that would match the user-ns's ones. This should cover the VFS layer and
 related syscalls only, which is, IIRC, the stat family and chown.
 
 Ouch, and the whole quota engine :\
 
 Thanks,
 Pavel
 ___
 Containers mailing list
 contain...@lists.linux-foundation.org
 https://lists.linuxfoundation.org/mailman/listinfo/containers
___
lxc-devel mailing list
lxc-devel@lists.linuxcontainers.org
http://lists.linuxcontainers.org/listinfo/lxc-devel


Re: [lxc-devel] [RFC] Per-user namespace process accounting

2014-06-03 Thread Pavel Emelyanov
On 06/03/2014 09:26 PM, Serge Hallyn wrote:
 Quoting Pavel Emelyanov (xe...@parallels.com):
 On 05/29/2014 07:32 PM, Serge Hallyn wrote:
 Quoting Marian Marinov (m...@1h.com):
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 05/29/2014 01:06 PM, Eric W. Biederman wrote:
 Marian Marinov m...@1h.com writes:

 Hello,

 I have the following proposition.

 Number of currently running processes is accounted at the root user 
 namespace. The problem I'm facing is that
 multiple containers in different user namespaces share the process 
 counters.

 That is deliberate.

 And I understand that very well ;)


 So if containerX runs 100 processes with UID 99, containerY should have an
 NPROC limit above 100 in order to execute any processes with its own UID 99.

 I know that some of you will tell me that I should not provision all of 
 my containers with the same UID/GID maps,
 but this brings another problem.

 We are provisioning the containers from a template. The template has a lot
 of files, 500k and more. And chowning these causes a lot of I/O and also
 slows down provisioning considerably.

 The other problem is that when we migrate one container from one host 
 machine to another the IDs may be already
 in use on the new machine and we need to chown all the files again.

 You should have the same uid allocations for all machines in your fleet 
 as much as possible.   That has been true
 ever since NFS was invented and is not new here.  You can avoid the cost 
 of chowning if you untar your files inside
 of your user namespace.  You can have different maps per machine if you 
 are crazy enough to do that.  You can even
 have shared uids that you use to share files between containers as long 
 as none of those files is setuid.  And map
 those shared files to some kind of nobody user in your user namespace.

 We are not using NFS. We are using a shared block storage that offers us
 snapshots. So provisioning new containers is extremely cheap and fast.
 Comparing that with untar is comparing a race car with a Smart. Yes, it can
 be done and no, I do not believe we should go backwards.

 We do not share filesystems between containers, we offer them block 
 devices.

 Yes, this is a real nuisance for openstack style deployments.

 One nice solution to this imo would be a very thin stackable filesystem
 which does uid shifting, or, better yet, a non-stackable way of shifting
 uids at mount.

 I vote for a non-stackable way too. Maybe at the generic VFS level, so that
 filesystems don't bother with it. From what I've seen, even simple stacking
 is quite a challenge.
 
 Do you have any ideas for how to go about it?  It seems like we'd have
 to have separate inodes per mapping for each file, which is why of
 course stacking seems natural here.

I was thinking about a lightweight mapping which is simple shifting. Since
we're trying to make this work together with user-ns mappings, a simple
uid/gid shift should be enough. Please correct me if I'm wrong.

If I'm not, then it looks to be enough to have two per-sb or per-mnt values
for uid and gid shift. Per-mnt for now looks more promising, since container's
FS may be just a bind-mount from shared disk.

 Trying to catch the uid/gid at every kernel-userspace crossing seems
 like a design regression from the current userns approach.  I suppose we
 could continue in the kuid theme and introduce an iuid/igid for the
 in-kernel inode uid/gid owners.  Then allow a user privileged in some
 ns to create a new mount associated with a different mapping for any
 ids over which he is privileged.

User-space crossing? From my point of view it would be enough if we just turn
the uid/gid read from disk (well, from wherever the FS gets them) into uids
that would match the user-ns's ones. This should cover the VFS layer and
related syscalls only, which is, IIRC, the stat family and chown.

Ouch, and the whole quota engine :\

Thanks,
Pavel


Re: [lxc-devel] [RFC] Per-user namespace process accounting

2014-06-03 Thread Eric W. Biederman
Serge Hallyn serge.hal...@ubuntu.com writes:

 Quoting Pavel Emelyanov (xe...@parallels.com):
 On 05/29/2014 07:32 PM, Serge Hallyn wrote:
  Quoting Marian Marinov (m...@1h.com):
  We are not using NFS. We are using a shared block storage that offers us
  snapshots. So provisioning new containers is extremely cheap and fast.
  Comparing that with untar is comparing a race car with a Smart. Yes, it can
  be done and no, I do not believe we should go backwards.
 
  We do not share filesystems between containers, we offer them block 
  devices.
  
  Yes, this is a real nuisance for openstack style deployments.
  
  One nice solution to this imo would be a very thin stackable filesystem
  which does uid shifting, or, better yet, a non-stackable way of shifting
  uids at mount.
 
 I vote for a non-stackable way too. Maybe at the generic VFS level, so that
 filesystems don't bother with it. From what I've seen, even simple stacking
 is quite a challenge.

 Do you have any ideas for how to go about it?  It seems like we'd have
 to have separate inodes per mapping for each file, which is why of
 course stacking seems natural here.

 Trying to catch the uid/gid at every kernel-userspace crossing seems
 like a design regression from the current userns approach.  I suppose we
 could continue in the kuid theme and introduce an iuid/igid for the
 in-kernel inode uid/gid owners.  Then allow a user privileged in some
 ns to create a new mount associated with a different mapping for any
 ids over which he is privileged.

There is a simple solution.

We pick the filesystems we choose to support.
We add privileged mounting in a user namespace.
We create the user and mount namespace.
Global root goes into the target mount namespace with setns and performs
the mounts.

90% of that work is already done.

As long as we don't plan to support XFS (as XFS likes to expose its
implementation details to userspace) it should be quite straightforward.

The permission check change would probably only need to be:


@@ -2180,6 +2245,10 @@ static int do_new_mount(struct path *path, const char *fstype, int flags,
 		return -ENODEV;
 
 	if (user_ns != init_user_ns) {
+		if (!(type->fs_flags & FS_UNPRIV_MOUNT) && !capable(CAP_SYS_ADMIN)) {
+			put_filesystem(type);
+			return -EPERM;
+		}
 		if (!(type->fs_flags & FS_USERNS_MOUNT)) {
 			put_filesystem(type);
 			return -EPERM;


There are also a few funnies with capturing the user namespace of the
filesystem when we perform the mount (in the superblock?), and not
allowing a mount of that same filesystem in a different user namespace.

But as long as the kuid conversions don't measurably slow down the
filesystem when mounted in the initial mount and user namespaces I don't
see how this would be a problem for anyone, and is very little code.


Eric


[lxc-devel] [PATCH 1/1] lxcapi_snapshot: check that c is defined

2014-06-03 Thread Serge Hallyn
before using it, like the other snapshot api methods do.

This will need to go into stable-1.0 as well.

Signed-off-by: Serge Hallyn serge.hal...@ubuntu.com
---
 src/lxc/lxccontainer.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/lxc/lxccontainer.c b/src/lxc/lxccontainer.c
index ac6de62..5cedb27 100644
--- a/src/lxc/lxccontainer.c
+++ b/src/lxc/lxccontainer.c
@@ -2852,6 +2852,9 @@ static int lxcapi_snapshot(struct lxc_container *c, const char *commentfile)
 	struct lxc_container *c2;
 	char snappath[MAXPATHLEN], newname[20];
 
+	if (!c || !lxcapi_is_defined(c))
+		return -1;
+
 	// /var/lib/lxc -> /var/lib/lxcsnaps \0
 	ret = snprintf(snappath, MAXPATHLEN, "%ssnaps/%s", c->config_path, c->name);
 	if (ret < 0 || ret >= MAXPATHLEN)
-- 
2.0.0



Re: [lxc-devel] [RFC] Per-user namespace process accounting

2014-06-03 Thread Eric W. Biederman
Pavel Emelyanov xe...@parallels.com writes:

 On 06/03/2014 09:26 PM, Serge Hallyn wrote:
 Quoting Pavel Emelyanov (xe...@parallels.com):
 On 05/29/2014 07:32 PM, Serge Hallyn wrote:
 Quoting Marian Marinov (m...@1h.com):
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 05/29/2014 01:06 PM, Eric W. Biederman wrote:
 Marian Marinov m...@1h.com writes:

 Hello,

 I have the following proposition.

 Number of currently running processes is accounted at the root user 
 namespace. The problem I'm facing is that
 multiple containers in different user namespaces share the process 
 counters.

 That is deliberate.

 And I understand that very well ;)


 So if containerX runs 100 processes with UID 99, containerY should have an
 NPROC limit above 100 in order to execute any processes with its own UID 99.

 I know that some of you will tell me that I should not provision all of 
 my containers with the same UID/GID maps,
 but this brings another problem.

 We are provisioning the containers from a template. The template has a lot
 of files, 500k and more. And chowning these causes a lot of I/O and also
 slows down provisioning considerably.

 The other problem is that when we migrate one container from one host 
 machine to another the IDs may be already
 in use on the new machine and we need to chown all the files again.

 You should have the same uid allocations for all machines in your fleet 
 as much as possible.   That has been true
 ever since NFS was invented and is not new here.  You can avoid the cost 
 of chowning if you untar your files inside
 of your user namespace.  You can have different maps per machine if you 
 are crazy enough to do that.  You can even
 have shared uids that you use to share files between containers as long 
 as none of those files is setuid.  And map
 those shared files to some kind of nobody user in your user namespace.

 We are not using NFS. We are using a shared block storage that offers us
 snapshots. So provisioning new containers is extremely cheap and fast.
 Comparing that with untar is comparing a race car with a Smart. Yes, it can
 be done and no, I do not believe we should go backwards.

 We do not share filesystems between containers, we offer them block 
 devices.

 Yes, this is a real nuisance for openstack style deployments.

 One nice solution to this imo would be a very thin stackable filesystem
 which does uid shifting, or, better yet, a non-stackable way of shifting
 uids at mount.

 I vote for a non-stackable way too. Maybe at the generic VFS level, so that
 filesystems don't bother with it. From what I've seen, even simple stacking
 is quite a challenge.
 
 Do you have any ideas for how to go about it?  It seems like we'd have
 to have separate inodes per mapping for each file, which is why of
 course stacking seems natural here.

 I was thinking about a lightweight mapping which is simple shifting. Since
 we're trying to make this work together with user-ns mappings, a simple
 uid/gid shift should be enough. Please correct me if I'm wrong.

 If I'm not, then it looks to be enough to have two per-sb or per-mnt values
 for uid and gid shift. Per-mnt for now looks more promising, since container's
 FS may be just a bind-mount from shared disk.

 Trying to catch the uid/gid at every kernel-userspace crossing seems
 like a design regression from the current userns approach.  I suppose we
 could continue in the kuid theme and introduce an iuid/igid for the
 in-kernel inode uid/gid owners.  Then allow a user privileged in some
 ns to create a new mount associated with a different mapping for any
 ids over which he is privileged.

 User-space crossing? From my point of view it would be enough if we just turn
 the uid/gid read from disk (well, from wherever the FS gets them) into uids
 that would match the user-ns's ones. This should cover the VFS layer and
 related syscalls only, which is, IIRC, the stat family and chown.

 Ouch, and the whole quota engine :\

And POSIX ACLs.

But all of this is 90% done already.  I think today we just have
conversions to the initial user namespace. We just need a few tweaks to
allow it and a per-superblock user namespace setting.

Eric



Re: [lxc-devel] [PATCH 1/1] lxcapi_snapshot: check that c is defined

2014-06-03 Thread Stéphane Graber
On Tue, Jun 03, 2014 at 01:16:03PM -0500, Serge Hallyn wrote:
 before using it, like the other snapshot api methods do.
 
 This will need to go into stable-1.0 as well.
 
 Signed-off-by: Serge Hallyn serge.hal...@ubuntu.com

Acked-by: Stéphane Graber stgra...@ubuntu.com

 ---
  src/lxc/lxccontainer.c | 3 +++
  1 file changed, 3 insertions(+)
 
 diff --git a/src/lxc/lxccontainer.c b/src/lxc/lxccontainer.c
 index ac6de62..5cedb27 100644
 --- a/src/lxc/lxccontainer.c
 +++ b/src/lxc/lxccontainer.c
 @@ -2852,6 +2852,9 @@ static int lxcapi_snapshot(struct lxc_container *c, const char *commentfile)
 	struct lxc_container *c2;
 	char snappath[MAXPATHLEN], newname[20];
 
 +	if (!c || !lxcapi_is_defined(c))
 +		return -1;
 +
 	// /var/lib/lxc -> /var/lib/lxcsnaps \0
 	ret = snprintf(snappath, MAXPATHLEN, "%ssnaps/%s", c->config_path, c->name);
 	if (ret < 0 || ret >= MAXPATHLEN)
 -- 
 2.0.0
 

-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com




Re: [lxc-devel] [PATCH] lxc-fedora.in: Correct some systemd target setups.

2014-06-03 Thread Stéphane Graber
On Tue, Jun 03, 2014 at 12:59:20PM -0400, Michael H. Warfield wrote:
 lxc-fedora.in: Correct some systemd target setups.
 
 Set the halt.target action to be sigpwr.target.  This allows
 SIGPWR to properly shut the container down from lxc-stop.
 
 Re-enable the systemd-journald.service.
 
 Signed-off-by: Michael H. Warfield m...@wittsend.com

Acked-by: Stéphane Graber stgra...@ubuntu.com

 ---
  templates/lxc-fedora.in | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)
 
 diff --git a/templates/lxc-fedora.in b/templates/lxc-fedora.in
 index 2e14cc8..b9741ac 100644
 --- a/templates/lxc-fedora.in
 +++ b/templates/lxc-fedora.in
 @@ -369,10 +369,9 @@ configure_fedora_systemd()
      rm -f ${rootfs_path}/etc/systemd/system/default.target
      touch ${rootfs_path}/etc/fstab
      chroot ${rootfs_path} ln -s /dev/null /etc/systemd/system/udev.service
 -    chroot ${rootfs_path} ln -s /dev/null /etc/systemd/system/systemd-journald.service
      chroot ${rootfs_path} ln -s /lib/systemd/system/multi-user.target /etc/systemd/system/default.target
      # Make systemd honor SIGPWR
 -    chroot ${rootfs_path} ln -s /usr/lib/systemd/system/halt.target /etc/systemd/system/
 +    chroot ${rootfs_path} ln -s /usr/lib/systemd/system/halt.target /etc/systemd/system/sigpwr.target
      #dependency on a device unit fails it specially that we disabled udev
      # sed -i 's/After=dev-%i.device/After=/' ${rootfs_path}/lib/systemd/system/getty\@.service
      #
 -- 
 1.9.3
 
 
 -- 
 Michael H. Warfield (AI4NB) | (770) 978-7061 |  m...@wittsend.com
/\/\|=mhw=|\/\/  | (678) 463-0932 |  http://www.wittsend.com/mhw/
NIC whois: MHW9  | An optimist believes we live in the best of all
  PGP Key: 0x674627FF| possible worlds.  A pessimist is sure of it!
 





-- 
Stéphane Graber
Ubuntu developer
http://www.ubuntu.com




[lxc-devel] [lxc/lxc] 840f05: lxcapi_snapshot: check that c is defined

2014-06-03 Thread GitHub
  Branch: refs/heads/master
  Home:   https://github.com/lxc/lxc
  Commit: 840f05df8ad3bb43e231c7ae9f8fbd7236469924
  https://github.com/lxc/lxc/commit/840f05df8ad3bb43e231c7ae9f8fbd7236469924
  Author: Serge Hallyn serge.hal...@ubuntu.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M src/lxc/lxccontainer.c

  Log Message:
  ---
  lxcapi_snapshot: check that c is defined

before using it, like the other snapshot api methods do.

This will need to go into stable-1.0 as well.

Signed-off-by: Serge Hallyn serge.hal...@ubuntu.com
Acked-by: Stéphane Graber stgra...@ubuntu.com


  Commit: e5469dadd9fa248fe9992c8323af115f78dbbb27
  https://github.com/lxc/lxc/commit/e5469dadd9fa248fe9992c8323af115f78dbbb27
  Author: Michael H. Warfield m...@wittsend.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M templates/lxc-fedora.in

  Log Message:
  ---
  lxc-fedora.in: Correct some systemd target setups.

Set the halt.target action to be sigpwr.target.  This allows
SIGPWR to properly shut the container down from lxc-stop.

Re-enable the systemd-journald.service.

Signed-off-by: Michael H. Warfield m...@wittsend.com
Acked-by: Stéphane Graber stgra...@ubuntu.com


Compare: https://github.com/lxc/lxc/compare/1b03969c7ce7...e5469dadd9fa


[lxc-devel] [PATCH] lxc-download: Attempt to get the GPG key 3 times

2014-06-03 Thread Stéphane Graber
This is to deal with the GPG pool occasionally yielding broken servers.

Signed-off-by: Stéphane Graber stgra...@ubuntu.com
---
 templates/lxc-download.in | 11 ++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/templates/lxc-download.in b/templates/lxc-download.in
index a06c0a4..31e0d27 100644
--- a/templates/lxc-download.in
+++ b/templates/lxc-download.in
@@ -116,8 +116,17 @@ gpg_setup() {
     mkdir -p $DOWNLOAD_TEMP/gpg
     chmod 700 $DOWNLOAD_TEMP/gpg
     export GNUPGHOME=$DOWNLOAD_TEMP/gpg
-    if ! gpg --keyserver $DOWNLOAD_KEYSERVER \
+
+    success=
+    for i in $(seq 3); do
+        if gpg --keyserver $DOWNLOAD_KEYSERVER \
             --recv-keys ${DOWNLOAD_KEYID} >/dev/null 2>&1; then
+            success=1
+            break
+        fi
+    done
+
+    if [ -z "$success" ]; then
         echo "ERROR: Unable to fetch GPG key from keyserver."
         exit 1
     fi
-- 
1.9.1
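The loop in the patch is the usual retry-N-times shell idiom; a standalone sketch of the same shape (the `retry` helper name is ours, not part of the template):

```shell
#!/bin/sh
# Generic retry helper: run a command up to 3 times and succeed on the
# first attempt that exits 0 -- the same shape as the gpg loop above.
retry() {
    for i in 1 2 3; do
        "$@" && return 0
    done
    return 1
}

retry true && echo "ok"        # succeeds on the first attempt
retry false || echo "failed"   # exhausts all three attempts
```

In the patch itself the `success=`/`break` flag plays the role of `retry`'s return value, so the existing error path can stay as an unchanged context line in the diff.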



Re: [lxc-devel] [PATCH] lxc-download: Attempt to get the GPG key 3 times

2014-06-03 Thread Serge Hallyn
Quoting Stéphane Graber (stgra...@ubuntu.com):
 This is to deal with the GPG pool occasionally yielding broken servers.
 
 Signed-off-by: Stéphane Graber stgra...@ubuntu.com

Acked-by: Serge E. Hallyn serge.hal...@ubuntu.com

 ---
  templates/lxc-download.in | 11 ++-
  1 file changed, 10 insertions(+), 1 deletion(-)
 
 diff --git a/templates/lxc-download.in b/templates/lxc-download.in
 index a06c0a4..31e0d27 100644
 --- a/templates/lxc-download.in
 +++ b/templates/lxc-download.in
 @@ -116,8 +116,17 @@ gpg_setup() {
      mkdir -p $DOWNLOAD_TEMP/gpg
      chmod 700 $DOWNLOAD_TEMP/gpg
      export GNUPGHOME=$DOWNLOAD_TEMP/gpg
 -    if ! gpg --keyserver $DOWNLOAD_KEYSERVER \
 +
 +    success=
 +    for i in $(seq 3); do
 +        if gpg --keyserver $DOWNLOAD_KEYSERVER \
              --recv-keys ${DOWNLOAD_KEYID} >/dev/null 2>&1; then
 +            success=1
 +            break
 +        fi
 +    done
 +
 +    if [ -z "$success" ]; then
          echo "ERROR: Unable to fetch GPG key from keyserver."
          exit 1
      fi
 -- 
 1.9.1
 


[lxc-devel] [lxc/lxc] 809a15: lxc-download: Attempt to get the GPG key 3 times

2014-06-03 Thread GitHub
  Branch: refs/heads/master
  Home:   https://github.com/lxc/lxc
  Commit: 809a1539a3d83d0ea6f277519e6d43e75ccf1013
  https://github.com/lxc/lxc/commit/809a1539a3d83d0ea6f277519e6d43e75ccf1013
  Author: Stéphane Graber stgra...@ubuntu.com
  Date:   2014-06-03 (Tue, 03 Jun 2014)

  Changed paths:
M templates/lxc-download.in

  Log Message:
  ---
  lxc-download: Attempt to get the GPG key 3 times

This is to deal with the GPG pool occasionally yielding broken servers.

Signed-off-by: Stéphane Graber stgra...@ubuntu.com
Acked-by: Serge E. Hallyn serge.hal...@ubuntu.com

