Re: [systemd-devel] Waiting for nspawn services

2014-10-27 Thread Lennart Poettering
On Sat, 25.10.14 05:39, Rich Freeman (r-syst...@thefreemanclan.net) wrote:

 One of the useful options in nspawn is the ability to boot the init
 within the container using -b, especially if that init happens to be
 systemd.
 
 However, I could not find any easy way to set a dependency on a
 service within a container.
 
 Example use case:
 Unit 1 boots an nspawn container that runs mysql
 Unit 2 launches a service that depends on mysql, or it might even be
 another container that depends on mysql.
 
 I could put together a script that pings mysql until it is up, but the
 original mysql unit already has to make the determination as to
 whether the service is ready, so this is redundant.  Also, that is a
 solution specific to a single daemon, while the problem is generic.
 
 I could think of a few possible ways to solve this.
 
 1.  Have a way to actually specify a dependency on a unit within a
 container.

As I see it, containers are really about separating things, and
integrating multiple systemd instances into a single dependency tree
looks like a slippery slope to me...

 2.  Have a generic wait program that can wait for any unit to start
 within a container, or perhaps even on a remote host.

Note that a tool like this would have to do the
try-wait-try-again-repeat dance as well, since an init system is not
immediately connectable during a container's early boot phase.

 3.  Have a way for nspawn to delay becoming online until all services
 inside have become online.
 
 Actually, being able to express unit dependencies across machines
 might be useful on many levels, but I'll be happy enough just to be
 able to handle containers on a single host for now.

I am not opposed to adding a scheme that makes this possible, but I am
not sure how we could do this in a nice way. The requirements I see:
it must fail gracefully on non-systemd guests, be race-free, and not
involve retry loops.

In general I think making use of socket activation here would be the
much better option, as it removes the entire need for ordering things
here. nspawn already supports socket activation just fine. If your
mysql container used this, then you could start the entire mysql
container at the same time as the mysql client without any further
complexity or synchronization, and it would just work.

Socket activation in mysql would be pretty useful in its own right, so
maybe that's the preferable option anyway?
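A rough sketch of what that could look like on the host side (the unit
names, machine path, and port here are made up for illustration, not
taken from this thread):

```ini
# mysql-container.socket -- bound by the host's systemd, in the host's
# network namespace, before the container even exists
[Unit]
Description=Socket for the mysql container

[Socket]
ListenStream=3306

[Install]
WantedBy=sockets.target

# mysql-container.service -- started on the first connection; nspawn
# receives the listening fd and can hand it on into the container
[Unit]
Description=mysql container

[Service]
ExecStart=/usr/bin/systemd-nspawn --boot --directory=/var/lib/machines/mysql
```

A client unit then needs no After=/Requires= on the container at all:
it just connects, and the kernel queues the connection until mysql
inside the container accepts it.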

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Waiting for nspawn services

2014-10-27 Thread Rich Freeman
On Mon, Oct 27, 2014 at 10:49 AM, Lennart Poettering
lenn...@poettering.net wrote:
 In general I think making use of socket notification here would be the
 much better option, as it removes the entire need for ordering things
 here. nspawn already supports socket activation just fine. If your
 mysql container would use this, then you could start the entire mysql
 container at the same time as the mysql client without any further
 complexity or synchronization, and it would just work.


Is socket activation supported for nspawn containers that use network
namespaces?  Incoming connections would not be pointed at the host IP,
but at the container's IP, which the host wouldn't otherwise be
listening on since the interface for it does not yet exist.

Or do I need to move everything to different port numbers and use the host IP?

--
Rich


Re: [systemd-devel] Waiting for nspawn services

2014-10-27 Thread Lennart Poettering
On Mon, 27.10.14 11:24, Rich Freeman (r-syst...@thefreemanclan.net) wrote:

 On Mon, Oct 27, 2014 at 10:49 AM, Lennart Poettering
 lenn...@poettering.net wrote:
  In general I think making use of socket notification here would be the
  much better option, as it removes the entire need for ordering things
  here. nspawn already supports socket activation just fine. If your
  mysql container would use this, then you could start the entire mysql
  container at the same time as the mysql client without any further
  complexity or synchronization, and it would just work.
 
 Is socket activation supported for nspawn containers that use network
 namespaces? 

Yes. The socket passed in doesn't have to be from the same namespace
the container runs in. It's kinda cool, as this allows locking down
containers pretty strictly while still granting them access to some
very specific listening socket.

(Note though that ymmv on this, because depending on the software you
use it might want to reverse-dns lookup incoming connections, and
that would fail if the container doesn't have network access to do
DNS... That said, if mysql did reverse-dns lookups of all incoming
connections, that would be really stupid...)

 Incoming connections would not be pointed at the host IP,
 but at the container's IP, which the host wouldn't otherwise be
 listening on since the interface for it does not yet exist.
 
 Or do I need to move everything to different port numbers and use the host IP?

Network namespaces are relevant for the process that originally binds
the sockets. In the case of socket-activated containers that would be
the host. If you then pass the fds into the containers and those are
locked into their own namespaces, then any sockets they create and
bind would be from their own namespace, but the one they got passed in
would still be from the original host namespace. If they then accept a
connection on that passed-in socket, that connection socket would also
be part of the same host namespace -- not of the container's.

Hence, two rules:

a) if you have a socket, then all sockets you derive from it via
   accept() stay part of the same namespace as that original socket.

b) any new sockets you generate via socket() are part of whatever
   network namespace your process is currently in.
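The receiving side of rule a) can be sketched in Python. This is a
minimal re-implementation of the fd-passing convention that
sd_listen_fds(3) uses (fds starting at 3, advertised via the
LISTEN_PID and LISTEN_FDS environment variables); it is an
illustration, not code from this thread or from libsystemd:

```python
import os

SD_LISTEN_FDS_START = 3  # first passed-in fd, per the sd_listen_fds(3) convention


def listen_fds(environ=os.environ):
    """Return the file descriptors passed in by the service manager.

    Sockets received this way were bound in the manager's (i.e. the
    host's) network namespace and stay there; any socket the process
    creates itself with socket() belongs to whatever namespace the
    process is currently in.
    """
    # LISTEN_PID guards against inheriting fds meant for another process.
    if environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(environ.get("LISTEN_FDS", "0"))
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + count))
```

A daemon inside the container would accept() on these fds (host
namespace) while its own freshly created sockets live in the
container's namespace.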

Hope that makes sense?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Waiting for nspawn services

2014-10-27 Thread Reindl Harald


Am 27.10.2014 um 16:32 schrieb Lennart Poettering:

(Note though that ymmv on this, because depending on the software you
use it might want to reverse-dns lookup incoming connections, and
that would fail if the container doesn't have network access to do
DNS... That said, if mysql would do reverse-dns of all incoming
connections it would be really stupid...)


it does, because you can set permissions on hostnames,
but you can add skip-name-resolve in my.cnf
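For reference, that option is a one-line configuration fragment
(shown here as a minimal example):

```ini
# my.cnf fragment: disable reverse-DNS lookups of incoming
# connections; host-based grants must then use IP addresses
[mysqld]
skip-name-resolve
```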





Re: [systemd-devel] Waiting for nspawn services

2014-10-27 Thread Rich Freeman
On Mon, Oct 27, 2014 at 11:32 AM, Lennart Poettering
lenn...@poettering.net wrote:
 Network namespaces are relevant for the process that originally binds
 the sockets. In the case of socket-activated containers that would be
 the host. If you then pass the fds into the containers and those are
 locked into their own namespaces, then any sockets they create and
 bind would be from their own namespace, but the one they got passed in
 would still be from the original host namespace. If they then accept a
 connection on that passed-in socket that connection socket would also
 be part of the same host namespace -- not of the containers.


In case it wasn't clear - I'm talking about network namespaces with
ethernet bridging - not just an isolated network namespace without any
network access at all.  That said, I could certainly see why the
latter would be useful.

So, if the host is 10.0.0.1, then mysql would normally listen on
10.0.0.2:3306.  One of my goals here was to keep everything running on
its native port and dedicated IP to minimize configuration.  For
example, I can run ssh on 10.0.0.2 and let it have port 22, and not
worry about the other 3 containers running ssh on port 22.

I suppose I could have systemd listen on 10.0.0.1:x and pass that
connection over to mysql.  However, then I need to point services to
10.0.0.1 and not 10.0.0.2.

This is why I alluded to it being useful to be able to depend on
services on remote hosts.  I completely agree that doing this in a
clean way without resorting to polling would involve a bit of work.
My own workaround in this case was basically going to amount to
polling.

--
Rich


Re: [systemd-devel] Waiting for nspawn services

2014-10-27 Thread Lennart Poettering
On Mon, 27.10.14 11:49, Rich Freeman (r-syst...@thefreemanclan.net) wrote:

 On Mon, Oct 27, 2014 at 11:32 AM, Lennart Poettering
 lenn...@poettering.net wrote:
  Network namespaces are relevant for the process that originally binds
  the sockets. In the case of socket-activated containers that would be
  the host. If you then pass the fds into the containers and those are
  locked into their own namespaces, then any sockets they create and
  bind would be from their own namespace, but the one they got passed in
  would still be from the original host namespace. If they then accept a
  connection on that passed-in socket that connection socket would also
  be part of the same host namespace -- not of the containers.
 
 
 In case it wasn't clear - I'm talking about network namespaces with
 ethernet bridging - not just an isolated network namespace without any
 network access at all.  That said, I could certainly see why the
 latter would be useful.
 
 So, if the host is 10.0.0.1, then mysql would normally listen on
 10.0.0.2:3306.  One of my goals here was to keep everything running on
 its native port and dedicated IP to minimize configuration.  For
 example, I can run ssh on 10.0.0.2 and let it have port 22, and not
 worry about the other 3 containers running ssh on port 22.
 
 I suppose I could have systemd listen on 10.0.0.1:x and pass that
 connection over to mysql.  However, then I need to point services to
 10.0.0.1 and not 10.0.0.2.

Correct.

 This is why I alluded to it being useful to be able to depend on
 services on remote hosts.  I completely agree that doing this in a
 clean way without resorting to polling would involve a bit of work.
 My own workaround in this case was basically going to amount to
 polling.

There has been a TODO list item for a while to allow .socket units to
be created within specific namespaces. I figure with that in place you
could actually make this work, by creating the mysql socket with
IP_FREEBIND in a new namespace, and then make nspawn ultimately take
possession of it, or so...

But dunno, maybe not; sounds like a ton of races still.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Waiting for nspawn services

2014-10-25 Thread Zbigniew Jędrzejewski-Szmek
On Sat, Oct 25, 2014 at 05:39:51AM -0400, Rich Freeman wrote:
 One of the useful options in nspawn is the ability to boot the init
 within the container using -b, especially if that init happens to be
 systemd.
 
 However, I could not find any easy way to set a dependency on a
 service within a container.
 
 Example use case:
 Unit 1 boots an nspawn container that runs mysql
 Unit 2 launches a service that depends on mysql, or it might even be
 another container that depends on mysql.
 
 I could put together a script that pings mysql until it is up, but the
 original mysql unit already has to make the determination as to
 whether the service is ready, so this is redundant.  Also, that is a
 solution specific to a single daemon, while the problem is generic.
 
 I could think of a few possible ways to solve this.
 
 1.  Have a way to actually specify a dependency on a unit within a container.
 2.  Have a generic wait program that can wait for any unit to start
 within a container, or perhaps even on a remote host.
 3.  Have a way for nspawn to delay becoming online until all services
 inside have become online.
That would be nice functionality. I don't think anything like this
exists currently, but systemd can send sd_notify() messages when the
appropriate environment variables are set (it already does that for
user sessions), and systemd-nspawn could be hooked up to pass them
along.
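For illustration, the sd_notify() protocol itself is just a single
datagram sent to the AF_UNIX socket named by $NOTIFY_SOCKET. A
minimal Python sketch of the sending side (not the libsystemd
implementation):

```python
import os
import socket


def sd_notify(message, notify_socket=None):
    """Send a state string (e.g. "READY=1") to the service manager.

    Minimal re-implementation of the sd_notify(3) protocol: one
    datagram on the AF_UNIX socket named by $NOTIFY_SOCKET. An
    abstract socket address starts with "@", which maps to a
    leading NUL byte.
    """
    addr = notify_socket or os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not started by a manager that is listening
    if addr.startswith("@"):
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.connect(addr)
        s.sendall(message.encode())
    return True
```

If nspawn forwarded such messages from the container's PID 1 to the
host's manager, the host-side unit could use Type=notify and only be
considered started once the container reports readiness.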

You could also:
4. Use socket activation for the whole container. If this is possible (i.e.
   all services inside of the container can be socket activated) then this
   is probably the best option.

 Actually, being able to express unit dependencies across machines
 might be useful on many levels, but I'll be happy enough just to be
 able to handle containers on a single host for now.

Zbyszek