Re: [systemd-devel] Container, private network and socket activation

2015-02-04 Thread Mikhail Morfikov
 That indicates that the systemd or apache inside the container do not
 correctly make use of the socket passed into them. You need to
 make sure that inside the container you have pretty much the same
 .socket unit running as on the host. The ListenStream lines must be
 identical, so that systemd inside the container recognizes the sockets
 passed in from the host as the ones to use for apache. The only
 difference for the socket units is that on the host they should
 activate the container, in the container they should activate apache.
 ...
 Well, because the socket wasn't passed on correctly, the connection on
 it will still be queued after the container exits. systemd will thus
 immediately spawn the container again.
 
 Basically, if you fix your issue #1, your issue #3 will be magically
 fixed too.

Now I understand the mechanism, at least I think so.

Unfortunately I have apache 2.4.x. I tried to apply the patches
Christian Seiler mentioned, but I was unable to build the package. I
think I have to wait a little bit longer to make it work.

Anyway, I tried to reproduce the ssh example (it can be found here:
http://0pointer.net/blog/projects/socket-activated-containers.html)
just for testing purposes, and I don't experience the rebooting issue
anymore, but there's another thing:

morfik:~$ ssh -p 23 192.168.10.10
^C
morfik:~$ ssh -p 23 192.168.10.10
ssh: connect to host 192.168.10.10 port 23: Connection refused

The container started when I tried to connect for the first time, but
I couldn't connect to this port after that, and I have no idea why. I
tried to figure out what went wrong, but I failed.

# machinectl status debian-tree -l --no-pager
debian-tree
   Since: Thu 2015-02-05 00:21:41 CET; 1min 16s ago
  Leader: 103953 (systemd)
 Service: nspawn; class container
Root: /media/Kabi/debian-tree
 Address: 192.168.10.10
  fe80::1474:8dff:fe79:6b44
  OS: Debian GNU/Linux 8 (jessie)
Unit: machine-debian\x2dtree.scope
  ├─103953 /lib/systemd/systemd 3
  └─system.slice
├─dbus.service
│ └─104069 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
├─cron.service
│ └─104043 /usr/sbin/cron -f
├─apache2.service
│ ├─104481 /usr/sbin/apache2 -k start
│ ├─104485 /usr/sbin/apache2 -k start
│ ├─104511 /usr/sbin/apache2 -k start
│ ├─104512 /usr/sbin/apache2 -k start
│ ├─104513 /usr/sbin/apache2 -k start
│ ├─104515 /usr/sbin/apache2 -k start
│ └─104516 /usr/sbin/apache2 -k start
├─system-sshd.slice
│ └─sshd@0-192.168.10.10:23-192.168.10.10:51767.service
│   ├─104041 sshd: [accepted]
│   └─104042 sshd: [net]
├─systemd-journald.service
│ └─103975 /lib/systemd/systemd-journald
├─systemd-logind.service
│ └─104046 /lib/systemd/systemd-logind
├─mysql.service
│ ├─104090 /bin/sh /usr/bin/mysqld_safe
│ └─104453 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/error.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=
├─console-getty.service
│ └─104208 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt102
└─rsyslog.service
  └─104088 /usr/sbin/rsyslogd -n

Then I logged into the container:

root:~# machinectl login debian-tree
  
...
root@www:/home/morfik# netstat -tupan
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address    Foreign Address  State    PID/Program name
tcp        0      0 192.168.10.10:   0.0.0.0:*        LISTEN   483/mysqld
tcp6       0      0 :::80            :::*             LISTEN   511/apache2
tcp6       0      0 :::22            :::*             LISTEN   1/systemd
tcp6       0      0 :::443           :::*             LISTEN   511/apache2

Nothing listens on port 23 -- why?

Still inside of the container:

root@www:/home/morfik#  tree /etc/systemd/system
/etc/systemd/system
|-- getty.target.wants
|   `-- getty@tty1.service -> /lib/systemd/system/getty@.service
|-- multi-user.target.wants
|   |-- cron.service -> /lib/systemd/system/cron.service
|   |-- remote-fs.target -> /lib/systemd/system/remote-fs.target
|   `-- rsyslog.service -> /lib/systemd/system/rsyslog.service

Re: [systemd-devel] Container, private network and socket activation

2015-02-04 Thread Lennart Poettering
On Wed, 04.02.15 04:40, Mikhail Morfikov (mmorfi...@gmail.com) wrote:


 1. When I try to connect for the very first time, I get a timeout, even
 though the container is working. I can cancel the connection
 immediately, and reconnect after 2-3 sec, and then the page shows up.
 All subsequent connections work without a problem; just the first one
 gets a timeout. Is there a way to fix this, so the first connection
 that boots the system could be somehow delayed, so after a while the
 page would show up?

That indicates that the systemd or apache inside the container do not
correctly make use of the socket passed into them. You need to
make sure that inside the container you have pretty much the same
.socket unit running as on the host. The ListenStream lines must be
identical, so that systemd inside the container recognizes the sockets
passed in from the host as the ones to use for apache. The only
difference for the socket units is that on the host they should
activate the container, in the container they should activate apache.
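
To make that concrete, the container-side counterpart could look roughly like this (the unit name apache2.socket and the [Install] section are placeholders, not taken from the thread, and they presuppose an apache that can consume the passed sockets):

```ini
# Inside the container: /etc/systemd/system/apache2.socket
# Same ListenStream lines as the host's socket unit, but this one
# activates apache2.service instead of the container itself.
[Unit]
Description=Apache sockets passed in from the host

[Socket]
ListenStream=192.168.10.10:80
ListenStream=192.168.10.10:443

[Install]
WantedBy=sockets.target
```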

 2. Is there a way to shut down the container automatically after some
 period of inactivity? Let's say there's no traffic for 30 min, and
 after this time the container goes down.

No, this is not available. It's hard to know from the outside when a
process is idle. While some strategies are conceivable, no code for
this exists.

 3. How to stop the container manually? I'm asking because when I try
 systemctl stop mycontainer.service, it stops, but:
 
 ...
 Feb 04 04:15:58 morfikownia systemd-nspawn[14346]: Halting system.
 Feb 04 04:15:58 morfikownia systemd-machined[14353]: Machine debian-tree terminated.
 Feb 04 04:15:58 morfikownia systemd-nspawn[14346]: Container debian-tree has been shut down.
 Feb 04 04:15:58 morfikownia systemd[1]: Starting My little container...

Well, because the socket wasn't passed on correctly, the connection on
it will still be queued after the container exits. systemd will thus
immediately spawn the container again.

Basically, if you fix your issue #1, your issue #3 will be magically
fixed too.

 4. Is there a way to persist the interfaces (veth0 and veth1)? Because
 after the container goes down, they're deleted, so I have to create
 them anew.

Hmm, good question. I don't think the kernel allows that... It
destroys veth links when either side's network namespace dies... Not
sure if we can do anything about this in a robust way...

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Container, private network and socket activation

2015-02-03 Thread Tomasz Torcz
On Tue, Feb 03, 2015 at 11:28:09PM +0100, Christian Seiler wrote:
 Am 03.02.2015 um 22:06 schrieb Lennart Poettering:
  Socket activation is something daemons need to support
  explicitly. Many do these days, but I don't think Apache is one of
  them.
 
 FYI: all released versions (i.e. up to 2.4.x) of Apache httpd don't
 support it yet, but the current development version does - at least if
 you compile it with the corresponding options (no module needs to be
 loaded for that, it's in the core).
 
 There's a proposal to backport that and sd_notify integration[1] to the
 stable 2.4.x branch, but nothing's happened so far:
 
 http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/STATUS?revision=1656674&view=markup#l138
 
 [1] Fedora apparently has included sd_notify integration for quite a
 while now, but no socket activation...

   Fedora allows socket activation for httpd, actually, since:
http://pkgs.fedoraproject.org/cgit/httpd.git/commit/?id=572a5df9ee47a39d346a4f6b7cd76f6a8804d63f

-- 
Tomasz Torcz               "God, root, what's the difference?"
xmpp: zdzich...@chrome.pl  "God is more forgiving."



Re: [systemd-devel] Container, private network and socket activation

2015-02-03 Thread Mikhail Morfikov
 Hmm, to implement something like this, I think the best option would be
 to set up the interface that you will later pass to the container on
 the host first, then listen on the container's IP address on the host.
 When a connection comes in, the container would be started via socket
 activation, and would then take over the container interface (with
 --network-interface=), so that all further connections are delivered
 directly to the container and the host is not involved anymore.

I managed to set this up. In short:

# ip link add type veth
# ip addr add 192.168.10.10/24 brd + dev veth1
# ip addr add 192.168.10.20/24 brd + dev veth0
# ip link set veth1 up
# ip link set veth0 up
# brctl addif br_lxc veth0

This sets two interfaces, one of which (veth1) goes to the container via
the following service file:

[Unit]
Description=My little container

[Service]
Type=simple
KillMode=process
ExecStart=/usr/bin/systemd-nspawn -jbD /media/Kabi/debian-tree/ \
--network-interface=veth1 \
--bind /media/Kabi/apache/:/apache/ \
--bind /media/Kabi/backup_packages/apt/archives/:/var/cache/apt/archives/ \
--bind /media/Kabi/repozytorium:/repozytorium \
3

In addition, I have my bridge interface set:

auto br_lxc
iface br_lxc inet static
address 192.168.10.100
netmask 255.255.255.0
broadcast 192.168.10.255
bridge_ports none
bridge_waitport 0
bridge_fd 0

The next thing is to socket activate the container through this file:

[Unit]
Description=The HTTP/HTTPS socket of my little container

[Socket]
ListenStream=192.168.10.10:80
ListenStream=192.168.10.10:443

When I start the socket, I get:

root:~# systemctl start mycontainer.socket
root:~# systemctl status mycontainer.socket
● mycontainer.socket - The HTTP/HTTPS socket of my little container
   Loaded: loaded (/etc/systemd/system/mycontainer.socket; static; vendor preset: enabled)
   Active: active (listening) since Wed 2015-02-04 04:00:51 CET; 1s ago
   Listen: 192.168.10.10:80 (Stream)
   192.168.10.10:443 (Stream)

Feb 04 04:00:51 morfikownia systemd[1]: Listening on The HTTP/HTTPS socket of my little container.

That's all for the host.

In the container I had to configure the passed interface via
/etc/network/interfaces:

auto veth1
iface veth1 inet static
address 192.168.10.10
netmask 255.255.255.0
broadcast 192.168.10.255
gateway 192.168.10.100

And that's it. This setup works. I mean, when I type
http://192.168.10.10 into Firefox, the container boots and I'm able to
browse the page.

Now I have some questions:

1. When I try to connect for the very first time, I get a timeout, even
though the container is working. I can cancel the connection
immediately, and reconnect after 2-3 sec, and then the page shows up.
All subsequent connections work without a problem; just the first one
gets a timeout. Is there a way to fix this, so the first connection
that boots the system could be somehow delayed, so after a while the
page would show up?
2. Is there a way to shut down the container automatically after some
period of inactivity? Let's say there's no traffic for 30 min, and
after this time the container goes down.
3. How to stop the container manually? I'm asking because when I try
systemctl stop mycontainer.service, it stops, but:

...
Feb 04 04:15:58 morfikownia systemd-nspawn[14346]: Halting system.
Feb 04 04:15:58 morfikownia systemd-machined[14353]: Machine debian-tree terminated.
Feb 04 04:15:58 morfikownia systemd-nspawn[14346]: Container debian-tree has been shut down.
Feb 04 04:15:58 morfikownia systemd[1]: Starting My little container...
Feb 04 04:15:58 morfikownia systemd[1]: Stopping Container debian-tree.
Feb 04 04:15:58 morfikownia systemd[1]: Stopped Container debian-tree.
Feb 04 04:15:58 morfikownia kernel: br_lxc: port 1(veth0) entered disabled state
Feb 04 04:15:58 morfikownia kernel: device veth0 left promiscuous mode
Feb 04 04:15:58 morfikownia kernel: br_lxc: port 1(veth0) entered disabled state
Feb 04 04:15:58 morfikownia systemd-nspawn[15325]: Spawning container debian-tree on /media/Kabi/debian-tree.
Feb 04 04:15:58 morfikownia systemd-nspawn[15325]: Press ^] three times within 1s to kill container.
Feb 04 04:15:58 morfikownia systemd[1]: mycontainer.service: main process exited, code=exited, status=237/n/a
Feb 04 04:15:58 morfikownia systemd[1]: Failed to start My little container.
Feb 04 04:15:58 morfikownia systemd[1]: Unit mycontainer.service entered failed state.
Feb 04 04:15:58 morfikownia systemd[1]: mycontainer.service failed.
Feb 04 04:15:58 morfikownia systemd[1]: Starting My little container...
Feb 04 04:15:58 morfikownia systemd[1]: mycontainer.service: main process exited, code=exited, status=237/n/a
Feb 04 04:15:58 morfikownia systemd[1]: Failed to start My little container.
Feb 04 04:15:58 morfikownia systemd[1]: Unit mycontainer.service entered failed state.
Feb 04 04:15:58 

Re: [systemd-devel] Container, private network and socket activation

2015-02-03 Thread Lennart Poettering
On Tue, 03.02.15 02:36, Mikhail Morfikov (mmorfi...@gmail.com) wrote:

 So, everything works pretty well. 
 
 Now there's a problem, how to add socket activation to this
 container?

Well, the sockets for socket activated containers are created in the
host's namespace, not the container's namespace. They are then passed
into the container namespace, but they still belong to the host's
namespace. This means that to connect to them you must connect to one
of the host's IP addresses, not the container's IP addresses.

If the container's and host's namespaces are identical (which is the
case if you don't use --private-network or any of the --network-xyz
switches), then the distinction goes away of course.

Also note that using socket activation for containers means that the
systemd instance inside the container also needs to have configuration
for the socket, to pass it on to the service that ultimately shall
answer for it. Are you sure that apache2 has support for that, and
that you set it up?

Lennart

-- 
Lennart Poettering, Red Hat
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] Container, private network and socket activation

2015-02-03 Thread Christian Seiler
Am 03.02.2015 um 22:06 schrieb Lennart Poettering:
Socket activation is something daemons need to support
explicitly. Many do these days, but I don't think Apache is one of
them.

FYI: all released versions (i.e. up to 2.4.x) of Apache httpd don't
support it yet, but the current development version does - at least if
you compile it with the corresponding options (no module needs to be
loaded for that, it's in the core).

There's a proposal to backport that and sd_notify integration[1] to the
stable 2.4.x branch, but nothing's happened so far:

http://svn.apache.org/viewvc/httpd/httpd/branches/2.4.x/STATUS?revision=1656674&view=markup#l138

[1] Fedora apparently has included sd_notify integration for quite a
while now, but no socket activation...

Christian



Re: [systemd-devel] Container, private network and socket activation

2015-02-03 Thread Mikhail Morfikov
 Also note that using socket activation for containers means that the
 systemd instance inside the container also needs to have configuration
 for the socket, to pass it on to the service that ultimately shall
 answer for it. Are you sure that apache2 has support for that, and
 that you set it up?

Actually, I just want to start the container when someone else tries to
connect to port 80 of the host, just using the container's IP
address. So, for instance, my host has IP 192.168.1.150, the container
has IP 192.168.10.10, and I want to type the second address in a web
browser so the system in the container could boot and start apache.
Then I could browse the page that is hosted by the apache server inside
of the container. I'm not sure if that's even possible, but apache
inside of the container starts at boot automatically, so I think
there's no need to set anything up in the container -- please correct
me if I'm wrong.




Re: [systemd-devel] Container, private network and socket activation

2015-02-03 Thread Lennart Poettering
On Tue, 03.02.15 20:45, Mikhail Morfikov (mmorfi...@gmail.com) wrote:

  Also note that using socket activation for containers means that the
  systemd instance inside the container also needs to have configuration
  for the socket, to pass it on to the service that ultimately shall
  answer for it. Are you sure that apache2 has support for that, and
  that you set it up?
 
 Actually, I just want to start the container when someone else tries to
 connect to port 80 of the host, just using the container's IP
 address. So, for instance, my host has IP 192.168.1.150, the container
 has IP 192.168.10.10, and I want to type the second address in a web
 browser so the system in the container could boot and start apache.

Hmm, to implement something like this, I think the best option would be
to set up the interface that you will later pass to the container on
the host first, then listen on the container's IP address on the host.
When a connection comes in, the container would be started via socket
activation, and would then take over the container interface (with
--network-interface=), so that all further connections are delivered
directly to the container and the host is not involved anymore.

This way you'd still have two separate network namespaces, but the
interface would change namespace during activation of the container,
so that first the host owns it and processes it, and then the
container.
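
Sketched as unit files, the scheme would look roughly like this (unit names are placeholders; the veth1/bridge setup discussed elsewhere in the thread is assumed):

```ini
# Host: mycontainer.socket -- listens on the container's IP while the
# host still owns the veth1 interface
[Socket]
ListenStream=192.168.10.10:80
ListenStream=192.168.10.10:443

# Host: mycontainer.service -- started on the first connection; nspawn
# then takes veth1 over into the container's namespace
[Service]
ExecStart=/usr/bin/systemd-nspawn -bD /media/Kabi/debian-tree/ --network-interface=veth1
```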

Of course, either way you'd need socket activation support in your
Apache. And I don't think Apache provides that right now out of the
box...

Also note that there's a slight security risk here: the socket that is
used for activation is from the host's namespace. Using the old BSD
netdev ioctls like SIOCGIFCONF will reveal the host's network setup,
not the container's setup.
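
To make the risk concrete: SIOCGIFCONF answers for the namespace the queried socket was created in, so a socket inherited from the host enumerates the host's interfaces even when the caller sits inside the container. A rough Python sketch of the ioctl (my own illustration; the struct sizes assume 64-bit Linux):

```python
import array
import fcntl
import socket
import struct

SIOCGIFCONF = 0x8912  # from <linux/sockios.h>
IFREQ_SIZE = 40       # sizeof(struct ifreq) on 64-bit Linux (32 on 32-bit)

def interface_names(sock):
    """List interface names visible in the namespace the *socket* belongs to."""
    buf = array.array("B", b"\0" * 4096)
    ptr, _ = buf.buffer_info()
    # struct ifconf { int ifc_len; char *ifc_buf; } (padded on LP64)
    ifconf = struct.pack("iL", len(buf), ptr)
    out_len = struct.unpack("iL", fcntl.ioctl(sock.fileno(), SIOCGIFCONF, ifconf))[0]
    names = []
    for off in range(0, out_len, IFREQ_SIZE):
        # the interface name is the first 16 bytes of each struct ifreq
        names.append(buf[off:off + 16].tobytes().split(b"\0", 1)[0].decode())
    return names
```

Run against a socket the process created itself, this lists the local namespace's interfaces; run against an activation fd passed in from the host, it would list the host's.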

 Then I could browse the page that is hosted by the apache server inside
 of the container. I'm not sure if that's even possible, but apache
 inside of the container starts at boot automatically, so I think there's
 no need for setting anything in the container -- please correct me if
 I'm wrong.

Socket activation is something daemons need to support
explicitly. Many do these days, but I don't think Apache is one of
them.

Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] Container, private network and socket activation

2015-02-02 Thread Mikhail Morfikov
I've set up a container via the systemd-nspawn tool, and I wanted to
use the private network feature. The line that launches the container
includes the --network-bridge= and --network-veth options. The whole
systemd .service file looks like this:

[Unit]
Description=My little container

[Service]
Type=simple
KillMode=process
ExecStart=/usr/bin/systemd-nspawn -jbD /media/Kabi/debian-tree/ \
--network-bridge=br_lxc \
--network-veth \
--bind /media/Kabi/apache/:/apache/ \
--bind /media/Kabi/backup_packages/apt/archives/:/var/cache/apt/archives/ \
--bind /media/Kabi/repozytorium:/repozytorium \
3

The bridge interface was created through the /etc/network/interfaces
file, and it looks as follows:

auto br_lxc
iface br_lxc inet static
address 192.168.10.100
netmask 255.255.255.0
broadcast 192.168.10.255
bridge_ports none
bridge_waitport 0
bridge_fd 0

The container is able to boot:

# systemctl status mycontainer.service
● mycontainer.service - My little container
   Loaded: loaded (/etc/systemd/system/mycontainer.service; static; vendor preset: enabled)
   Active: active (running) since Tue 2015-02-03 01:57:24 CET; 12s ago
 Main PID: 84905 (systemd-nspawn)
   CGroup: /system.slice/mycontainer.service
   └─84905 /usr/bin/systemd-nspawn -jbD /media/Kabi/debian-tree/ --network-bridge=br_lxc --network-veth --bind /media/Kabi/apache/:/apache/ --bind /media/Kabi/backup_packages/apt/arch...

Feb 03 01:57:25 morfikownia systemd-nspawn[84905]: [  OK  ] Started Console Getty.
Feb 03 01:57:25 morfikownia systemd-nspawn[84905]: [  OK  ] Reached target Login Prompts.
Feb 03 01:57:25 morfikownia systemd-nspawn[84905]: [  OK  ] Started System Logging Service.
Feb 03 01:57:25 morfikownia systemd-nspawn[84905]: [  OK  ] Started Cleanup of Temporary Directories.
Feb 03 01:57:27 morfikownia systemd-nspawn[84905]: [  OK  ] Started LSB: Start and stop the mysql database server daemon.
Feb 03 01:57:28 morfikownia systemd-nspawn[84905]: [  OK  ] Started LSB: Apache2 web server.
Feb 03 01:57:28 morfikownia systemd-nspawn[84905]: [  OK  ] Reached target Multi-User System.
Feb 03 01:57:28 morfikownia systemd-nspawn[84905]: Starting Update UTMP about System Runlevel Changes...
Feb 03 01:57:28 morfikownia systemd-nspawn[84905]: [  OK  ] Started Update UTMP about System Runlevel Changes.
Feb 03 01:57:29 morfikownia systemd-nspawn[84905]: Debian GNU/Linux 8 www console

# machinectl
MACHINE  CONTAINER SERVICE
debian-tree  container nspawn

1 machines listed.

# machinectl status debian-tree
debian-tree
   Since: Tue 2015-02-03 01:57:24 CET; 2min 54s ago
  Leader: 84906 (systemd)
 Service: nspawn; class container
Root: /media/Kabi/debian-tree
   Iface: br_lxc
 Address: 192.168.10.10
  fe80::541b:d0ff:febc:c38c%7
  OS: Debian GNU/Linux 8 (jessie)
Unit: machine-debian\x2dtree.scope
  ├─84906 /lib/systemd/systemd 3
  └─system.slice
├─dbus.service
│ └─85024 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
├─cron.service
│ └─85003 /usr/sbin/cron -f
├─apache2.service
│ ├─85427 /usr/sbin/apache2 -k start
│ ├─85454 /usr/sbin/apache2 -k start
│ ├─85485 /usr/sbin/apache2 -k start
│ ├─85486 /usr/sbin/apache2 -k start
│ ├─85488 /usr/sbin/apache2 -k start
│ ├─85489 /usr/sbin/apache2 -k start
│ └─85491 /usr/sbin/apache2 -k start
├─systemd-journald.service
│ └─84941 /lib/systemd/systemd-journald
├─systemd-logind.service
│ └─85006 /lib/systemd/systemd-logind
├─mysql.service
│ ├─85057 /bin/sh /usr/bin/mysqld_safe
│ └─85415 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --log-error=/var/log/mysql/error.log --pid-file=/var/run...
├─console-getty.service
│ └─85055 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt102
└─rsyslog.service
  └─85051 /usr/sbin/rsyslogd -n


Inside of the container I added the following configuration to its
network interface:

auto host0
iface host0 inet static
address 192.168.10.10
network 192.168.10.0/24
netmask 255.255.255.0
broadcast 192.168.10.255
gateway 192.168.10.100

Communication works (ping from the container):

root@www:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52