[systemd-devel] network interface down in container

2015-04-30 Thread arnaud gaboury
I have been using a container (Arch on Arch) for a while. I had two
distinct IPs and a working setup, thanks to good help from Tom Gundersen.

I am trying to replicate my network settings on a new setup (Fedora on
Arch). For now, I am just trying with DHCP.

Here is the setup on the host:


1- created a virtual bridge

$ cat /etc/systemd/network/Bridge.netdev

[NetDev]
Name=br0
Kind=bridge

2- bound my eth to the bridge

$ cat /etc/systemd/network/eth.network

[Match]
Name=en*

[Network]
Bridge=br0

3- created bridge network unit

$ cat /etc/systemd/network/bridge.network

[Match]
Name=br0

[Network]
DHCP=IPV4


Nothing else.

When the container is up:

$ ip a
2: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
master br0 state UP group default qlen 1000
link/ether 14:da:e9:b5:7a:88 brd ff:ff:ff:ff:ff:ff
inet6 fe80::16da:e9ff:feb5:7a88/64 scope link
   valid_lft forever preferred_lft forever
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
UP group default
link/ether b6:0c:00:22:f1:4a brd ff:ff:ff:ff:ff:ff
inet 192.168.1.87/24 brd 192.168.1.255 scope global br0
   valid_lft forever preferred_lft forever
inet6 fe80::b40c:ff:fe22:f14a/64 scope link
   valid_lft forever preferred_lft forever
9: vb-poppy: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc
pfifo_fast master br0 state DOWN group default qlen 1000
link/ether 0e:9a:d7:18:a3:59 brd ff:ff:ff:ff:ff:ff
$ ip route
default via 192.168.1.254 dev br0  proto static
192.168.1.0/24 dev br0  proto kernel  scope link  src 192.168.1.87
 % brctl show
bridge name     bridge id           STP enabled     interfaces
br0             8000.b60c0022f14a   no              enp7s0
                                                    vb-poppy
---

I used to boot the container this way :
# systemd-nspawn --network-bridge=br0 -bD /path_to/my_container

Is this correct?


  *
Now on the container side:

Nothing configured. NetworkManager enabled, systemd-networkd enabled
and started.

---
$ ip a
2: host0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether 0e:7f:c3:fb:25:b1 brd ff:ff:ff:ff:ff:ff
-
host0 is down
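For reference, networkd inside the container only brings host0 up if some .network file matches it; systemd normally ships 80-container-host0.network for exactly this. If that file is missing or masked, a minimal equivalent sketch (file name assumed) would be:

```ini
# /etc/systemd/network/host0.network (inside the container; name assumed)
[Match]
Name=host0

[Network]
DHCP=ipv4
```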

$ journalctl -x
..
-- Unit NetworkManager.service has begun starting up.
Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 ERROR:
ebtables not usable, disabling ethernet bridge firewall.
Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 FATAL ERROR:
No IPv4 and IPv6 firewall.
Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 ERROR:
Raising SystemExit in run_server
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  NetworkManager
(version 1.0.0-8.fc22) is starting...
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  Read config:
/etc/NetworkManager/NetworkManager.conf
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  WEXT support is enabled
Apr 27 13:18:01 poppy NetworkManager[67]: <warn>  Could not get
hostname: failed to read /etc/sysconfig/network
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  Acquired D-Bus
service com.redhat.ifcfgrh1
..

Obviously my old-fashioned way of assigning two IP addresses does not
work, and I can't find any other idea/way to do the setup.
Is this firewall story in journalctl the culprit? I do not want any
basic firewall, as hardening will be done with AppArmor (already built
into the kernel) and grsec in a second step.
Hint: I run a custom kernel. Maybe I missed some network settings?

Thank you for any hints.

-- 

google.com/+arnaudgabourygabx
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] [PATCH] test-socket-util: Fix tests on machines without ipv6 support

2015-04-30 Thread systemd github import bot
Patchset imported to github.
Pull request:
https://github.com/systemd-devs/systemd/compare/master...systemd-mailing-devs:1430381911-30460-1-git-send-email-sjoerd.simons%40collabora.co.uk

--
Generated by https://github.com/haraldh/mail2git


Re: [systemd-devel] sd-bus vs gdbus on dbus-daemon

2015-04-30 Thread Umut Tezduyar Lindskog
Hi Greg,

On Wed, Apr 29, 2015 at 5:49 PM, Greg KH gre...@linuxfoundation.org wrote:
 On Wed, Apr 29, 2015 at 04:08:50PM +0200, Umut Tezduyar Lindskog wrote:
 Hi,

 We [1] have noticed that there can be up to a 50% performance gain from
 using sd-bus over gdbus on dbus-daemon. For this reason, we have a high
 interest in using sd-bus. What are the plans in terms of making the sd-bus
 API public?

 Details of the test [2]:

 gdbus.c
   - g_dbus_proxy_new_for_bus_sync()
   - 50 x g_dbus_proxy_call_sync()

 sdbus.c
   - sd_bus_open_system()
   - 50 x sd_bus_get_property()

 Two applications are run with ltrace, perf stat -e cycles, time
 and the results are compared.

 I'll echo Simon's statement here, making a call to
 g_dbus_proxy_call_sync() seems like an odd thing to test.  Is this
 really how your application wants to work?  Is it the normal call path
 that you need optimized?  How many messages do you normally send, or
 want to send, and how big of the data blob are you wanting to
 transmit/receive here?

We have a variety of dbus clients sending small/big, slow/fast, sync/async
messages. But we have not focused on dbus's performance in this
experiment. We wanted to see the efficiency of the two user-space libraries
when it comes to a very simple use case (synchronously retrieving a
property).


 I ask as I'm trying to find how people would like to use D-Bus, if the
 existing dbus-daemon were sped up in various ways.

 Both of these traces show that userspace is sitting around for most of
 the time, I don't see a whole lot of actual CPU usage happening, do you?

Could you please explain how you came to that conclusion?
Granted, the top 10 calls are not in the g* libraries, but there are many
entries where the program spent time in the g* libraries. Also in pthread.

Umut


 thanks,

 greg k-h


Re: [systemd-devel] [PATCHv2] core: do not spawn jobs or touch other units during coldplugging

2015-04-30 Thread systemd github import bot
Patchset imported to github.
Pull request:
https://github.com/systemd-devs/systemd/compare/master...systemd-mailing-devs:1429895189.10988.22.camel%40gmail.com



Re: [systemd-devel] initrd mount inactive

2015-04-30 Thread Lennart Poettering
On Wed, 29.04.15 12:09, aaron_wri...@selinc.com (aaron_wri...@selinc.com) wrote:

 I applied those other commits you listed, and I took a look at the lvm2 
 package, which was being compile with --disable-udev_sync and 
 --disable-udev_rules. I enabled both of those and recompiled both lvm2 
 and systemd.
 
 Nothing changed. Sometimes var.mount is still bound to an inactive 
 /dev/mapper/name.

Well, it will be bound to it, but systemd should no longer act on it
and unmount it.

Also, the device should become active as soon as udev has run and
reprobed everything.

 Do I need the *.rules files from lvm2?

Well, you do need the DM ones at least. That's actually where the
interesting bits are, since they properly probe the LUKS device and
make it available for other components, including systemd, to pick up.

Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] [PATCH] treewide: fix typos

2015-04-30 Thread Torstein Husebø
---
 man/systemd.unit.xml| 2 +-
 src/import/export-tar.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/man/systemd.unit.xml b/man/systemd.unit.xml
index c2e374a94e..0aa1eeac77 100644
--- a/man/systemd.unit.xml
+++ b/man/systemd.unit.xml
@@ -1250,7 +1250,7 @@
   <row>
   <entry><literal>%H</literal></entry>
   <entry>Host name</entry>
-  <entry>The hostname of the running system at the point in time the unit configuation is loaded.</entry>
+  <entry>The hostname of the running system at the point in time the unit configuration is loaded.</entry>
   </row>
   <row>
   <entry><literal>%v</literal></entry>
diff --git a/src/import/export-tar.c b/src/import/export-tar.c
index 73e1faecf3..d31295745f 100644
--- a/src/import/export-tar.c
+++ b/src/import/export-tar.c
@@ -136,7 +136,7 @@ static void tar_export_report_progress(TarExport *e) {
         unsigned percent;
         assert(e);

-        /* Do we have any quota info? I fnot, we don't know anything about the progress */
+        /* Do we have any quota info? If not, we don't know anything about the progress */
         if (e->quota_referenced == (uint64_t) -1)
                 return;
 
-- 
2.3.7



Re: [systemd-devel] network interface down in container

2015-04-30 Thread arnaud gaboury
On Thu, Apr 30, 2015 at 11:44 AM, Lennart Poettering
lenn...@poettering.net wrote:
 On Thu, 30.04.15 10:01, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:

 I used to boot the container this way :
 # systemd-nspawn --network-bridge=br0 -bD /path_to/my_container

 Is this correct?

 Looks fine.



   *
 Now on the container side:

 Nothing configured. NetworkManager enabled, systemd-networkd enabled
 and started.

 NM doesn't really support being run in a container.

I want to disable it to avoid any potential conflict.

systemctl mask NetworkManager
systemctl mask NetworkManager-dispatcher

But after rebooting, it is enabled again. I guess I must write a custom
service file to mask it?


 ---
 $ ip a
 2: host0: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN group
 default qlen 1000
 link/ether 0e:7f:c3:fb:25:b1 brd ff:ff:ff:ff:ff:ff
 -
 host0 is down

 Please check what networkctl status -a in the container shows. It
 should tell you whether networkd is configured to do anything.
● 1: lo
   Link File: n/a
Network File: n/a
Type: loopback
   State: carrier (unmanaged)
 MTU: 65536
 Address: 127.0.0.1
  ::1

● 2: host0
   Link File: n/a
Network File: n/a
Type: ether
   State: off (unmanaged)
  HW Address: 0e:7f:c3:fb:25:b1
 MTU: 1500

Not really sane.


 Also, what does journalctl -u systemd-networkd -n 200 show in the
 container?
Apr 30 12:10:55 poppy systemd[1]: Starting Network Service...
Apr 30 12:10:56 poppy systemd-networkd[249]: Enumeration completed
Apr 30 12:10:56 poppy systemd[1]: Started Network Service.

Sounds OK.

As said, the only error when booting the container is:

Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 ERROR:
ebtables not usable, disabling ethernet bridge firewall.
Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 FATAL ERROR:
No IPv4 and IPv6 firewall.
Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 ERROR:
Raising SystemExit in run_server
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  NetworkManager
(version 1.0.0-8.fc22) is starting...
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  Read config:
/etc/NetworkManager/NetworkManager.conf
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  WEXT support is enabled
Apr 27 13:18:01 poppy NetworkManager[67]: <warn>  Could not get
hostname: failed to read /etc/sysconfig/network
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  Acquired D-Bus
service com.redhat.ifcfgrh1
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  Loaded plugin
ifcfg-rh: (c) 2007 - 2013 Red Hat, Inc.  To report bugs please use the
NetworkManager mailing list.
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  Loaded plugin
keyfile: (c) 2007 - 2013 Red Hat, Inc.  To report bugs please use the
NetworkManager mailing list.
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  parsing
/etc/sysconfig/network-scripts/ifcfg-lo ...
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  monitoring kernel
firmware directory '/lib/firmware'.
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  WiFi enabled by
radio killswitch; enabled by state file
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  WWAN enabled by
radio killswitch; enabled by state file
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  WiMAX enabled by
radio killswitch; enabled by state file
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  Networking is
enabled by state file
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  (br0): link connected
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  (br0): carrier is ON
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  (br0): new Bridge
device (driver: 'bridge' ifindex: 3)
Apr 27 13:18:01 poppy NetworkManager[67]: <info>  (br0): exported as
/org/freedesktop/NetworkManager/Devices/0


Not sure if it has any impact.

 Lennart

 --
 Lennart Poettering, Red Hat



-- 

google.com/+arnaudgabourygabx


Re: [systemd-devel] network interface down in container

2015-04-30 Thread Lennart Poettering
On Thu, 30.04.15 10:01, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:

 I used to boot the container this way :
 # systemd-nspawn --network-bridge=br0 -bD /path_to/my_container
 
 Is this correct?

Looks fine.

 
 
   *
 Now on the container side:
 
 Nothing configured. NetworkManager enabled, systemd-networkd enabled
 and started.

NM doesn't really support being run in a container. 

 ---
 $ ip a
 2: host0: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN group
 default qlen 1000
 link/ether 0e:7f:c3:fb:25:b1 brd ff:ff:ff:ff:ff:ff
 -
 host0 is down

Please check what networkctl status -a in the container shows. It
should tell you whether networkd is configured to do anything.

Also, what does journalctl -u systemd-networkd -n 200 show in the
container?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] [PATCH] treewide: fix typos

2015-04-30 Thread systemd github import bot
Patchset imported to github.
Pull request:
https://github.com/systemd-devs/systemd/compare/master...systemd-mailing-devs:1430387829-4227-1-git-send-email-torstein%40huseboe.net



Re: [systemd-devel] network interface down in container

2015-04-30 Thread arnaud gaboury
On Thu, Apr 30, 2015 at 12:18 PM, arnaud gaboury
arnaud.gabo...@gmail.com wrote:
 On Thu, Apr 30, 2015 at 11:44 AM, Lennart Poettering
 lenn...@poettering.net wrote:
 On Thu, 30.04.15 10:01, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:

 I used to boot the container this way :
 # systemd-nspawn --network-bridge=br0 -bD /path_to/my_container

 Is this correct?

 Looks fine.



   *
 Now on the container side:

 Nothing configured. NetworkManager enabled, systemd-networkd enabled
 and started.

 NM doesn't really support being run in a container.

 I want to disable it to avoid any potential conflict.

 systemctl mask NetworkManager
 systemctl mask NetworkManager-dispatcher

 But when rebooting, it is enabled again. I guess I must write a custom
 service file to mask it ?


 ---
 $ ip a
 2: host0: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN group
 default qlen 1000
 link/ether 0e:7f:c3:fb:25:b1 brd ff:ff:ff:ff:ff:ff
 -
 host0 is down

 Please check what networkctl status -a in the container shows. It
 should tell you whether networkd is configured to do anything.
 ● 1: lo
Link File: n/a
 Network File: n/a
 Type: loopback
State: carrier (unmanaged)
  MTU: 65536
  Address: 127.0.0.1
   ::1

 ● 2: host0
Link File: n/a
 Network File: n/a
 Type: ether
State: off (unmanaged)
   HW Address: 0e:7f:c3:fb:25:b1
  MTU: 1500

 Not really sane.


 Also, what does journalctl -u systemd-networkd -n 200 show in the
 container?
 Apr 30 12:10:55 poppy systemd[1]: Starting Network Service...
 Apr 30 12:10:56 poppy systemd-networkd[249]: Enumeration completed
 Apr 30 12:10:56 poppy systemd[1]: Started Network Service.

 sounds OK.

 As said, the only error when booting container is:

 Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 ERROR:
 ebtables not usable, disabling ethernet bridge firewall.
 Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 FATAL ERROR:
 No IPv4 and IPv6 firewall.
 Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 ERROR:
 Raising SystemExit in run_server
 Apr 27 13:18:01 poppy NetworkManager[67]: info  NetworkManager
 (version 1.0.0-8.fc22) is starting...
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Read config:
 /etc/NetworkManager/NetworkManager.conf
 Apr 27 13:18:01 poppy NetworkManager[67]: info  WEXT support is enabled
 Apr 27 13:18:01 poppy NetworkManager[67]: warn  Could not get
 hostname: failed to read /etc/sysconfig/network
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Acquired D-Bus
 service com.redhat.ifcfgrh1
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Loaded plugin
 ifcfg-rh: (c) 2007 - 2013 Red Hat, Inc.  To report bugs please use the
 NetworkManager mailing list.
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Loaded plugin
 keyfile: (c) 2007 - 2013 Red Hat, Inc.  To report bugs please use the
 NetworkManager mailing list.
 Apr 27 13:18:01 poppy NetworkManager[67]: info  parsing
 /etc/sysconfig/network-scripts/ifcfg-lo ...
 Apr 27 13:18:01 poppy NetworkManager[67]: info  monitoring kernel
 firmware directory '/lib/firmware'.
 Apr 27 13:18:01 poppy NetworkManager[67]: info  WiFi enabled by
 radio killswitch; enabled by state file
 Apr 27 13:18:01 poppy NetworkManager[67]: info  WWAN enabled by
 radio killswitch; enabled by state file
 Apr 27 13:18:01 poppy NetworkManager[67]: info  WiMAX enabled by
 radio killswitch; enabled by state file
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Networking is
 enabled by state file
 Apr 27 13:18:01 poppy NetworkManager[67]: info  (br0): link connected
 Apr 27 13:18:01 poppy NetworkManager[67]: info  (br0): carrier is ON
 Apr 27 13:18:01 poppy NetworkManager[67]: info  (br0): new Bridge
 device (driver: 'bridge' ifindex: 3)
 Apr 27 13:18:01 poppy NetworkManager[67]: info  (br0): exported as
 /org/freedesktop/NetworkManager/Devices/0


 Not sure if it has any impact

I do not know if it is a clean approach, but the issue is solved with a
static IP (which is what I want).


On host:

$ cat /etc/systemd/networkd/bridge.network

[Match]
Name=br0

[Network]
DNS=192.168.1.254

[Address]
Address=192.168.1.87/24

[Route]
Gateway=192.168.1.254

# ln -sf /dev/null /etc/systemd/network/80-container-host0.network

-

On container

$ cat /etc/systemd/networkd/poppy.network
[Match]
Name=host0

[Network]
DNS=192.168.1.254
Address=192.168.1.94/24
Gateway=192.168.1.254
-bash-4.3#

# ln -sf /dev/null /etc/systemd/network/80-container-host0.network



#  systemd-nspawn --network-bridge=br0 -bD /var/lib/machines/poppy

host:
$ ip a
7: vb-poppy: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
pfifo_fast master br0 state UP group default qlen 1000
link/ether 0e:9a:d7:18:a3:59 brd ff:ff:ff:ff:ff:ff
inet6 fe80::c9a:d7ff:fe18:a359/64 scope link
 

Re: [systemd-devel] network interface down in container

2015-04-30 Thread arnaud gaboury
On Thu, Apr 30, 2015 at 12:48 PM, arnaud gaboury
arnaud.gabo...@gmail.com wrote:
 On Thu, Apr 30, 2015 at 12:18 PM, arnaud gaboury
 arnaud.gabo...@gmail.com wrote:
 On Thu, Apr 30, 2015 at 11:44 AM, Lennart Poettering
 lenn...@poettering.net wrote:
 On Thu, 30.04.15 10:01, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:

 I used to boot the container this way :
 # systemd-nspawn --network-bridge=br0 -bD /path_to/my_container

 Is this correct?

 Looks fine.



   *
 Now on the container side:

 Nothing configured. NetworkManager enabled, systemd-networkd enabled
 and started.

 NM doesn't really support being run in a container.

 I want to disable it to avoid any potential conflict.

 systemctl mask NetworkManager
 systemctl mask NetworkManager-dispatcher

 But when rebooting, it is enabled again. I guess I must write a custom
 service file to mask it ?


 ---
 $ ip a
 2: host0: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN group
 default qlen 1000
 link/ether 0e:7f:c3:fb:25:b1 brd ff:ff:ff:ff:ff:ff
 -
 host0 is down

 Please check what networkctl status -a in the container shows. It
 should tell you whether networkd is configured to do anything.
 ● 1: lo
Link File: n/a
 Network File: n/a
 Type: loopback
State: carrier (unmanaged)
  MTU: 65536
  Address: 127.0.0.1
   ::1

 ● 2: host0
Link File: n/a
 Network File: n/a
 Type: ether
State: off (unmanaged)
   HW Address: 0e:7f:c3:fb:25:b1
  MTU: 1500

 Not really sane.


 Also, what does journalctl -u systemd-networkd -n 200 show in the
 container?
 Apr 30 12:10:55 poppy systemd[1]: Starting Network Service...
 Apr 30 12:10:56 poppy systemd-networkd[249]: Enumeration completed
 Apr 30 12:10:56 poppy systemd[1]: Started Network Service.

 sounds OK.

 As said, the only error when booting container is:

 Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 ERROR:
 ebtables not usable, disabling ethernet bridge firewall.
 Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 FATAL ERROR:
 No IPv4 and IPv6 firewall.
 Apr 27 13:18:01 poppy firewalld[35]: 2015-04-27 13:18:01 ERROR:
 Raising SystemExit in run_server
 Apr 27 13:18:01 poppy NetworkManager[67]: info  NetworkManager
 (version 1.0.0-8.fc22) is starting...
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Read config:
 /etc/NetworkManager/NetworkManager.conf
 Apr 27 13:18:01 poppy NetworkManager[67]: info  WEXT support is enabled
 Apr 27 13:18:01 poppy NetworkManager[67]: warn  Could not get
 hostname: failed to read /etc/sysconfig/network
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Acquired D-Bus
 service com.redhat.ifcfgrh1
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Loaded plugin
 ifcfg-rh: (c) 2007 - 2013 Red Hat, Inc.  To report bugs please use the
 NetworkManager mailing list.
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Loaded plugin
 keyfile: (c) 2007 - 2013 Red Hat, Inc.  To report bugs please use the
 NetworkManager mailing list.
 Apr 27 13:18:01 poppy NetworkManager[67]: info  parsing
 /etc/sysconfig/network-scripts/ifcfg-lo ...
 Apr 27 13:18:01 poppy NetworkManager[67]: info  monitoring kernel
 firmware directory '/lib/firmware'.
 Apr 27 13:18:01 poppy NetworkManager[67]: info  WiFi enabled by
 radio killswitch; enabled by state file
 Apr 27 13:18:01 poppy NetworkManager[67]: info  WWAN enabled by
 radio killswitch; enabled by state file
 Apr 27 13:18:01 poppy NetworkManager[67]: info  WiMAX enabled by
 radio killswitch; enabled by state file
 Apr 27 13:18:01 poppy NetworkManager[67]: info  Networking is
 enabled by state file
 Apr 27 13:18:01 poppy NetworkManager[67]: info  (br0): link connected
 Apr 27 13:18:01 poppy NetworkManager[67]: info  (br0): carrier is ON
 Apr 27 13:18:01 poppy NetworkManager[67]: info  (br0): new Bridge
 device (driver: 'bridge' ifindex: 3)
 Apr 27 13:18:01 poppy NetworkManager[67]: info  (br0): exported as
 /org/freedesktop/NetworkManager/Devices/0


 Not sure if it has any impact

 I do not know if it is a clean approach, but the issue is solved with a
 static IP (which is what I want).


 On host:

 $ cat /etc/systemd/networkd/bridge.network

 [Match]
 Name=br0

 [Network]
 DNS=192.168.1.254

 [Address]
 Address=192.168.1.87/24

 [Route]
 Gateway=192.168.1.254

 # ln -sf /dev/null /etc/systemd/network/80-container-host0.network
Useless. Not needed at all

 -

 On container

 $ cat /etc/systemd/networkd/poppy.network
 [Match]
 Name=host0

 [Network]
 DNS=192.168.1.254
 Address=192.168.1.94/24
 Gateway=192.168.1.254
 -bash-4.3#

 # ln -sf /dev/null /etc/systemd/network/80-container-host0.network

 

 #  systemd-nspawn --network-bridge=br0 -bD /var/lib/machines/poppy

 host:
 $ ip a
 7: vb-poppy: BROADCAST,MULTICAST,UP,LOWER_UP mtu 1500 qdisc
 pfifo_fast master br0 

Re: [systemd-devel] network interface down in container

2015-04-30 Thread Lennart Poettering
On Thu, 30.04.15 12:48, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:

  ● 2: host0
 Link File: n/a
  Network File: n/a
  Type: ether
 State: off (unmanaged)
HW Address: 0e:7f:c3:fb:25:b1
   MTU: 1500

So, as it appears, networkd doesn't consider itself responsible for the
interface and doesn't apply any .network file to it.

 $ cat /etc/systemd/networkd/bridge.network

Well, the directory is /etc/systemd/network/, not /etc/systemd/networkd/.

 $ cat /etc/systemd/networkd/poppy.network

Same here.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] pam_systemd.so indirectly calling pam_acct_mgmt

2015-04-30 Thread Stephen Gallagher
On Thu, 2015-04-30 at 15:01 +0200, Lennart Poettering wrote:
 On Thu, 30.04.15 08:54, Stephen Gallagher (sgall...@redhat.com) 
 wrote:
 
  Does set-linger persist across reboots? 
 
 Yes, it does. When the system is booted up with a user that has
 lingering enabled, this means their user@.service instance is invoked at
 boot, without waiting for any login.
 

One last question, Lennart: what is the primary use-case for the
linger feature? When is it expected that users would want to use it?
We (SSSD) need to make sure we have it tied to the right set of access
rights. At the moment, we're defaulting it to Allow. If this is
expected to be useful only for running automated tasks when the user
isn't logged in, we should probably associate it with that set of
access rights instead of blindly allowing it.



Re: [systemd-devel] network interface down in container

2015-04-30 Thread Lennart Poettering
On Thu, 30.04.15 12:18, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:

 On Thu, Apr 30, 2015 at 11:44 AM, Lennart Poettering
 lenn...@poettering.net wrote:
  On Thu, 30.04.15 10:01, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:
 
  I used to boot the container this way :
  # systemd-nspawn --network-bridge=br0 -bD /path_to/my_container
 
  Is this correct?
 
  Looks fine.
 
 
 
*
  Now on the container side:
 
  Nothing configured. NetworkManager enabled, systemd-networkd enabled
  and started.
 
  NM doesn't really support being run in a container.
 
 I want to disable it to avoid any potential conflict.
 
 systemctl mask NetworkManager
 systemctl mask NetworkManager-dispatcher
 
 But when rebooting, it is enabled again. I guess I must write a custom
 service file to mask it ?

I figure it gets activated via the
dbus-org.freedesktop.NetworkManager.service name; consider masking
that too.

Or better, just remove the RPM inside the container.
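Masking works by pointing the unit name at /dev/null in the unit search path, which is why the D-Bus activation alias can be masked the same way as the units themselves. A sketch of what `systemctl mask` records, using a scratch directory in place of the real /etc/systemd/system:

```shell
# Masking a unit means symlinking its name to /dev/null in the unit search
# path. The real directory is /etc/systemd/system; a scratch directory is
# used here purely to illustrate the mechanism.
unitdir=$(mktemp -d)
for u in NetworkManager.service NetworkManager-dispatcher.service \
         dbus-org.freedesktop.NetworkManager.service; do
    ln -sf /dev/null "$unitdir/$u"
done
readlink "$unitdir/dbus-org.freedesktop.NetworkManager.service"
```

On a real system the equivalent is `systemctl mask` on all three names, or, as suggested, simply removing the NetworkManager package from the container.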

Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] networkd: dbus API for networkd reconfiguration at run-time

2015-04-30 Thread Rauta, Alin
Hi Tom, Lennart,

I have some questions regarding the dbus API and run-time networkd
configuration. I would really appreciate your answers/suggestions.

First, when upstreaming BridgeFDB support in networkd, I originally had a
patch composed of two parts:
-  One part  for clearing existing configuration;

-  One part for setting new FDB entries;

Since networkd doesn't currently clear existing configuration, only the first 
part of the patch was accepted.
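For context, the FDB entries in question are configured per-link under a [BridgeFDB] section in a .network file; a minimal sketch, with a hypothetical port name and MAC address:

```ini
# /etc/systemd/network/sw0p1.network (hypothetical port)
[Match]
Name=sw0p1

[BridgeFDB]
MACAddress=00:11:22:33:44:55
```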

At that time you said that:

In the future we plan to get a dbus API where networkd can be reconfigured at 
run-time (i.e., change which .network file is applied to a link), and then it 
definitely would make sense to flush routes and addresses when removing the 
.network from the link, but currently we don't do that at all.

Do you have any updates or more information on the dbus API (how this would
actually be done, how it would work)?
What extensions to existing networkd functionality would the dbus API bring?

Second, regarding the BindCarrier= functionality, would the dbus API make it
possible to modify the string content or the bind-carrier behavior at run-time?

Moreover, we currently have the case where networkd is running and has some
ports involved in BindCarrier= dependencies. Then some of these ports are
added at run-time to a team (link aggregation) device (perhaps through the
command line). In this case the carrier dependencies affect the team device's
functionality, creating confusion at some point (the team tries to bring the
child ports up/down, but this is affected by the carrier dependencies between
the children, or between the children and other ports outside of the team
device).
Would the dbus API be of any help in this case? Or
do you have any suggestions on how to avoid these cases?

Thank you in advance,

Alin Rauta
Software Applications Engineer
+353 (0) 87 101 8449
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare



Re: [systemd-devel] network interface down in container

2015-04-30 Thread arnaud gaboury
 On Thu, Apr 30, 2015, 2:22 PM Lennart Poettering lenn...@poettering.net
wrote:

On Thu, 30.04.15 12:48, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:

  ● 2: host0
 Link File: n/a
  Network File: n/a
  Type: ether
 State: off (unmanaged)
HW Address: 0e:7f:c3:fb:25:b1
   MTU: 1500

So, as it appears, networkd doesn't consider itself responsible for the
interface and doesn't apply any .network file to it.

 $ cat /etc/systemd/networkd/bridge.network

Well, the directory is /etc/systemd/network/, not /etc/systemd/networkd/.

 $ cat /etc/systemd/networkd/poppy.network

Same here.

 Sorry for the typo.

Lennart

--
Lennart Poettering, Red Hat


Re: [systemd-devel] pam_systemd.so indirectly calling pam_acct_mgmt

2015-04-30 Thread Lennart Poettering
On Thu, 30.04.15 08:54, Stephen Gallagher (sgall...@redhat.com) wrote:

 Does set-linger persist across reboots? 

Yes, it does. When the system is booted up with a user that has
lingering enabled, this means their user@.service instance is invoked at
boot, without waiting for any login.
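The persistence is visible in the mechanism itself: lingering is recorded as an empty touch-file named after the user under /var/lib/systemd/linger/, and that file survives reboots. A sketch of the mechanism (the user name and scratch directory are illustrative):

```shell
# "loginctl enable-linger alice" effectively records this state file;
# the real directory is /var/lib/systemd/linger/.
lingerdir=$(mktemp -d)     # stands in for /var/lib/systemd/linger
user=alice                 # hypothetical user
touch "$lingerdir/$user"
ls "$lingerdir"
```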

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] pam_systemd.so indirectly calling pam_acct_mgmt

2015-04-30 Thread Stephen Gallagher
On Thu, 2015-04-30 at 11:36 +0200, Lennart Poettering wrote:
 On Wed, 29.04.15 22:24, Jakub Hrozek (jakub.hro...@posteo.se) wrote:
 
  ...why exactly does systemd-user need to call the account stack?
  Again, I totally understand session, but account?
 
 Well, if the user service is started without the user being logged in
 (because loginctl set-linger was used), then we still need to check
 if the account actually permits that.
 


Does set-linger persist across reboots? I thought it only amounted to
avoiding the deletion of the user session stuff when the last session
logged out.

If it can cause a user session to be started on boot, then having the
access check makes sense. If it can *only* be started on a particular
boot after the user has logged in at least once, then it seems
redundant at best.



Re: [systemd-devel] [Q] About supporting nested systemd daemon

2015-04-30 Thread Alban Crequy
On Wed, Feb 25, 2015 at 6:48 PM, Lennart Poettering
lenn...@poettering.net wrote:
 On Wed, 25.02.15 00:05, Cyrill Gorcunov (gorcu...@gmail.com) wrote:

 Hi all! I would really appreciate it if someone enlightened me as to
 whether there is some simple solution for the problem we met in OpenVZ:
 modern containers are mostly systemd based, so that once started up, the
 systemd daemon mounts its own instance of the systemd cgroup (if it has
 not previously been pre-mounted by container startup tools or whatever).
 To achieve strict isolation of the nested systemd cgroup (by nested I mean
 the systemd cgroup instance mounted inside the container) we've patched
 the kernel so that the container's systemd obtains its own cgroup instance
 that does not intersect in any way with the one present on the host system.

 And we would really love to get rid of this kind of kernel hack and be
 able to isolate the nested systemd with its own cgroup instance using
 solely userspace tools. Is there some way to achieve this?

 Not really. cgroupfs doesn't really allow that. First of all the root
 cgroup has a different set of attributes than child cgroups, hence you
 cannot mount an arbitrary child to the root cgroup and assume it
 works. But even worse, /proc/$PID/cgroup actually contains the full
 cgroup path, and hence mounting only a subtree would break the
 references from that file.

 systemd-nspawn nowadays mounts all hierarchies into the container, but
 mounts all controller hierarchies read-only, and of the name=systemd
 hierarchy mounts everything read-only, except the subtree the
 container is allowed to manage. That way only the cgroup tree the
 container needs access to is writable to it. That solution however
 does not hide the cgroup tree. A process running inside the container
 can still go and explore the tree and its attributes. However, all
 other groups will appear empty to it, since processes not in the
 container PID namespaces will be suppressed when reading the member
 process list.

To sum up what systemd-nspawn is currently mounting in the container:
- /sys/fs/cgroup/systemd/  --  mounted RO
- /sys/fs/cgroup/systemd/machine.slice/machine-xxx.scope/  -- mounted RW
- /sys/fs/cgroup/cpu,cpuacct/  --  mounted RO
- etc. for other cgroup hierarchies  --  mounted RO

In order to let systemd in the container restrict cpu, memory, etc. on
some of its services (see manpage systemd.resource-control(5)), rkt
would like systemd-nspawn to mount a subtree of some hierarchy
(cpu,cpuacct, memory) in read-write mode.
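For context, these are the kinds of per-service limits the container's systemd would then be able to enforce. A minimal sketch of such a unit (the unit and binary names are hypothetical; directive names as documented in systemd.resource-control(5) of that era):

```ini
# /etc/systemd/system/example-worker.service (hypothetical, inside the container)
[Service]
ExecStart=/usr/bin/example-worker
CPUShares=512
MemoryLimit=256M
BlockIOWeight=500
```

These directives only take effect if the corresponding controller hierarchy (cpu, memory, blkio) is writable for the container's subtree, which is exactly what the proposed read-write mounts would provide.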

Are there any issues with changing the systemd-nspawn mounts in the
following way?
- /sys/fs/cgroup/systemd/  --  mounted RO
- /sys/fs/cgroup/systemd/machine.slice/machine-xxx.scope/  -- mounted RW
- /sys/fs/cgroup/cpu,cpuacct/  --  mounted RO
- /sys/fs/cgroup/cpu,cpuacct/machine.slice/machine-xxx.scope/  -- mounted RW
- etc. for other cgroup hierarchies.

Iago wrote two experimental patches for systemd-nspawn to try that, and
it worked. Delegate=yes was enabled in systemd-nspawn in order to test
this:
https://github.com/endocode/systemd/commits/iaguis/delegate

But I would like to know what is missing to make this safe (or if it
is already safe to do).

 There have been proposals on LKML to add cgroup namespacings, but no
 idea where that went.

 LXC created a FUSE emulation of /proc and /sys, called lxcfs to solve
 this problem. Quite honestly I find this a pretty crazy idea however.

 If I understand correctly we can provide a separate slice to the
 container's systemd, leaving the rest of the host cgroup in ro mode, right?

 Yes.

 If so, maybe there is a way to hide the host cgroup completely from the
 container so it would see only its own cgroup in sysfs?

 I don't see how this could work. I mean, you could overmount all other
 cgroup siblings with empty directories in the containers, but not
 really scalable nor compatible with cgroups being added or removed
 later on...

 Lennart

 --
 Lennart Poettering, Red Hat


Re: [systemd-devel] network interface down in container

2015-04-30 Thread Dan Williams
On Thu, 2015-04-30 at 11:44 +0200, Lennart Poettering wrote:
 On Thu, 30.04.15 10:01, arnaud gaboury (arnaud.gabo...@gmail.com) wrote:
 
  I used to boot the container this way :
  # systemd-nspawn --network-bridge=br0 -bD /path_to/my_container
  
  Is this correct?
 
 Looks fine.
 
  
  
*
  Now on the container side:
  
  Nothing configured. NetworkManager enabled, systemd-networkd enabled
  and started.
 
 NM doesn't really support being run in a container. 

FYI, not really true: NM git master (upcoming 1.2) does support being run
without udev in a container...

Dan

  ---
  $ ip a
  2: host0: BROADCAST,MULTICAST mtu 1500 qdisc noop state DOWN group
  default qlen 1000
  link/ether 0e:7f:c3:fb:25:b1 brd ff:ff:ff:ff:ff:ff
  -
  host0 is down
 
 Please check what networkctl status -a in the container shows. It
 should tell you whether networkd is configured to do anything.
 
 Also, what does journalctl -u systemd-networkd -n 200 show in the
 container?
 
 Lennart
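One likely cause of host0 staying down: networkd inside the container only configures a link if a .network file matches it. Newer systemd versions ship a default 80-container-host0.network for exactly this case, but on versions without it, a minimal DHCP configuration along these lines should work (file name is hypothetical):

```ini
# /etc/systemd/network/host0.network (inside the container)
[Match]
Name=host0

[Network]
DHCP=yes
```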
 




[systemd-devel] [PATCH] shared/utmp-wtmp: fix copy/paste error

2015-04-30 Thread Michael Olbrich
---
 src/shared/utmp-wtmp.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/shared/utmp-wtmp.h b/src/shared/utmp-wtmp.h
index 6ac2c7b1c768..5d26ba6fb1d0 100644
--- a/src/shared/utmp-wtmp.h
+++ b/src/shared/utmp-wtmp.h
@@ -65,7 +65,7 @@ static inline int utmp_wall(
 const char *username,
 const char *origin_tty,
 bool (*match_tty)(const char *tty, void *userdata),
-void *userdata);
+void *userdata) {
 return 0;
 }
 
-- 
2.1.4



[systemd-devel] [PATCH] tmpfiles: remember errno before it might be overwritten

2015-04-30 Thread Michael Olbrich
---

I'm not sure if this is really necessary right now, but that might change
in the future. Saving errno before calling another function is always a
good idea.

Michael

 src/tmpfiles/tmpfiles.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/tmpfiles/tmpfiles.c b/src/tmpfiles/tmpfiles.c
index d574254e0fb8..218d55051410 100644
--- a/src/tmpfiles/tmpfiles.c
+++ b/src/tmpfiles/tmpfiles.c
@@ -1279,13 +1279,15 @@ static int create_item(Item *i) {
 
                 mac_selinux_create_file_prepare(i->path, S_IFLNK);
                 r = symlink(resolved, i->path);
+                if (r < 0)
+                        r = -errno;
                 mac_selinux_create_file_clear();
 
                 if (r < 0) {
                         _cleanup_free_ char *x = NULL;
 
-                        if (errno != EEXIST)
-                                return log_error_errno(errno, "symlink(%s, %s) failed: %m", resolved, i->path);
+                        if (r != -EEXIST)
+                                return log_error_errno(r, "symlink(%s, %s) failed: %m", resolved, i->path);
 
                         r = readlink_malloc(i->path, &x);
                         if (r < 0 || !streq(resolved, x)) {
-- 
2.1.4



Re: [systemd-devel] tmpfiles versus tmpwatch

2015-04-30 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Apr 29, 2015 at 10:28:26PM +1200, Roger Qiu wrote:
 Doesn't relatime still update the time if the file is 1 day old
 (regardless of modification time), and the current tmpfiles wipes
 files that are older than 10 days?
Yes, everything should work with relatime, unless you set tmpfiles cleanup
time to less than 1 day. On modern kernels, with the lazytime option,
it will work even if you set it to less than 1 day.

commit 0ae45f63d4ef8d8eeec49c7d8b44a1775fff13e8
Author: Theodore Ts'o ty...@mit.edu
Date:   Mon Feb 2 00:37:00 2015 -0500

vfs: add support for a lazytime mount option

Add a new mount option which enables a new lazytime mode.  This mode
causes atime, mtime, and ctime updates to only be made to the
in-memory version of the inode.  The on-disk times will only get
updated when (a) if the inode needs to be updated for some non-time
related change, (b) if userspace calls fsync(), syncfs() or sync(), or
(c) just before an undeleted inode is evicted from memory.

  Since Linux 2.6.30, the kernel defaults to the behavior provided
 by this option (unless noatime was specified), and the
 strictatime option is required to obtain traditional semantics. In
 addition, since Linux 2.6.30, the file's last access time is always
 updated if it is more than 1 day old.
 
  http://manpages.courier-mta.org/htmlman8/*mount*.8.html
 
 On 29/04/2015 10:22 PM, Lennart Poettering wrote:
 On Wed, 29.04.15 22:08, Roger Qiu (roger@polycademy.com) wrote:
 
 Hi Lennart,
 
 So there really isn't a fast way of just checking if a file has an open
 file descriptor on it?
 
 Sometimes atime is on relatime, so it only gets updated if modification is
 earlier.
 Using relatime is fine, except for /tmp and /var/tmp really. Setting
 the flag for those file systems is really a poor choice, since it
 breaks aging things there...

I think that relatime is still the default, but lazytime seems to be better
in all respects.

Zbyszek


[systemd-devel] journal-.gitignore-add-audit_type-from-name

2015-04-30 Thread Daniel Buch

From 785b1367fedb912e91074360c0961209ac5dc9f8 Mon Sep 17 00:00:00 2001
From: Daniel Buch boogiewasth...@gmail.com
Date: Thu, 30 Apr 2015 21:20:57 +0200
Subject: [PATCH] journal: .gitignore add audit_type-from-name*

---
 src/journal/.gitignore | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/journal/.gitignore b/src/journal/.gitignore
index 0f094f5..94adfb3 100644
--- a/src/journal/.gitignore
+++ b/src/journal/.gitignore
@@ -2,3 +2,5 @@
 /libsystemd-journal.pc
 /audit_type-list.txt
 /audit_type-to-name.h
+/audit_type-from-name.h
+/audit_type-from-name.gperf
-- 
2.3.7



Re: [systemd-devel] [PATCH] tmpfiles: try to handle read-only file systems gracefully

2015-04-30 Thread systemd github import bot
Patchset imported to github.
Pull request:
https://github.com/systemd-devs/systemd/compare/master...systemd-mailing-devs:1430419838-24907-1-git-send-email-m.olbrich%40pengutronix.de

--
Generated by https://github.com/haraldh/mail2git


Re: [systemd-devel] [PATCH] shared/utmp-wtmp: fix copy/paste error

2015-04-30 Thread systemd github import bot
Patchset imported to github.
Pull request:
https://github.com/systemd-devs/systemd/compare/master...systemd-mailing-devs:1430418517-18485-1-git-send-email-m.olbrich%40pengutronix.de

--
Generated by https://github.com/haraldh/mail2git


Re: [systemd-devel] [PATCH] tmpfiles: remember errno before it might be overwritten

2015-04-30 Thread systemd github import bot
Patchset imported to github.
Pull request:
https://github.com/systemd-devs/systemd/compare/master...systemd-mailing-devs:1430418896-20297-1-git-send-email-m.olbrich%40pengutronix.de

--
Generated by https://github.com/haraldh/mail2git


[systemd-devel] [PATCH] tmpfiles: try to handle read-only file systems gracefully

2015-04-30 Thread Michael Olbrich
On read-only filesystems, trying to create the target will not fail with
EEXIST but with EROFS. Handle EROFS by checking whether the target already
exists and, when truncating, whether it is already empty.
This avoids reporting errors when tmpfiles doesn't actually need to do
anything.
---

This is a rework of a patch I wrote some time ago[1]. This time reacting to
EROFS instead of preempting it.

Michael

[1] http://lists.freedesktop.org/archives/systemd-devel/2014-August/022158.html

 src/tmpfiles/tmpfiles.c | 12 +---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/src/tmpfiles/tmpfiles.c b/src/tmpfiles/tmpfiles.c
index 218d55051410..4473bf019911 100644
--- a/src/tmpfiles/tmpfiles.c
+++ b/src/tmpfiles/tmpfiles.c
@@ -983,9 +983,11 @@ static int write_one_file(Item *i, const char *path) {
                         log_debug_errno(errno, "Not writing \"%s\": %m", path);
                         return 0;
                 }
-
-                log_error_errno(errno, "Failed to create file %s: %m", path);
-                return -errno;
+                r = -errno;
+                if (i->argument || r != -EROFS || stat(path, &st) < 0 || (i->type == TRUNCATE_FILE && st.st_size > 0)) {
+                        log_error_errno(r, "Failed to create file %s: %m", path);
+                        return r;
+                }
         }
 
         if (i->argument) {
@@ -1154,6 +1156,10 @@ static int create_item(Item *i) {
 
                 log_debug("Copying tree \"%s\" to \"%s\".", resolved, i->path);
                 r = copy_tree(resolved, i->path, false);
+
+                if (r == -EROFS && stat(i->path, &st) == 0)
+                        r = -EEXIST;
+
                 if (r < 0) {
                         struct stat a, b;
-- 
2.1.4



Re: [systemd-devel] timers always run when time changes

2015-04-30 Thread Cameron Norman
On Thu, Apr 30, 2015 at 6:35 PM, Likang Wang labor...@gmail.com wrote:
 Hi all,

 The entire system is running on an embedded box, and the system time will
 be set to 2008-01-01 00:00:00 after every reboot. My app running on the box
 will get the real time from my server and update the time on the box after
 every boot. (I could not use NTP or systemd-timesyncd for some other reason.)

Does the service file for this custom time-syncing service have the
directive `Before=time-sync.target`? If it does, are you sure that by the
time systemd considers it running, the time has indeed been synced?
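A custom time-syncing service that participates in time-sync.target ordering might be shaped like this (a sketch; the unit and binary names are hypothetical):

```ini
# /etc/systemd/system/my-timesync.service (hypothetical)
[Unit]
Description=Fetch real time from server
Before=time-sync.target
Wants=time-sync.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/fetch-and-set-time
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target
```

Type=oneshot with RemainAfterExit=yes ensures the service is not considered "started" (and time-sync.target not reached) until the ExecStart command has actually finished setting the clock.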

Cheers,
--
Cameron Norman


[systemd-devel] timers always run when time changes

2015-04-30 Thread Likang Wang
Hi all,

I have a timer with the following setting:

# cat /lib/systemd/system/updateimage.timer

[Unit]
Description=Update image
DefaultDependencies=false

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=false

[Install]
WantedBy=multi-user.target

And I want the timer to call the same-named service only at 02:00:00 daily.

The entire system is running on an embedded box, and the system time will
be set to 2008-01-01 00:00:00 after every reboot. My app running on the box
will get the real time from my server and update the time on the box after
every boot. (I could not use NTP or systemd-timesyncd for some other
reason.)

Here is my problem.
When my app sets the system time to the real time, systemd will wake up the
updateimage.timer and run the updateimage.service no matter what time it is
now. The log looks like this:

Jan  1 08:14:00 systemd[1]: xx.service: cgroup is empty
Apr 30 21:04:00 systemd[1]: Time has been changed
Apr 30 21:04:00 systemd[1]: Set up TFD_TIMER_CANCEL_ON_SET timerfd.
Apr 30 21:04:00 systemd[1]: Timer elapsed on updateimage.timer
Apr 30 21:04:00 systemd[1]: Trying to enqueue job
updateimage.service/start/replace
Apr 30 21:04:00 systemd[1]: Installed new job updateimage.service/start as
9269


What I want is for the timer and the same-named service to run exactly at
02:00:00 daily, and not when the time changes. What should I do?


The systemd version : 215-17.


--
laborish


Re: [systemd-devel] timers always run when time changes

2015-04-30 Thread Likang Wang
The custom time-syncing service file does not have
Before=time-sync.target.

In fact, the timer and the same-named service run not only when the time is
changed by the custom time-syncing service, but also when I set the time
manually with date -s "2015-05-01 11:08:00".


Here is the custom time syncing service file:
#cat /lib/systemd/system/XX.service
[Unit]
Description=XXX Daemon
After=ppp.service

[Service]
Type=simple
ExecStart=/usr/lib/XX/xx/bin/xx
Restart=always


Here is the log when I set the time manually by date -s "2015-05-01
11:08:00":
#systemctl status updateimage.timer -l
● updateimage.timer - Update image
   Loaded: loaded (/lib/systemd/system/updateimage.timer; disabled)
   Active: active (waiting) since Fri 2010-01-01 08:00:04 CST; 5 years 3
months ago

May 01 11:08:00 systemd[1]: Timer elapsed on updateimage.timer
May 01 11:08:00 systemd[1]: updateimage.timer changed waiting -> running
May 01 11:08:00 systemd[1]: updateimage.timer got notified about unit
deactivation.
May 01 11:08:00 systemd[1]: updateimage.timer: Realtime timer elapses at
Sat 2015-05-02 02:00:00 CST.

#systemctl status updateimage.service -l
● updateimage.service - Update image
   Loaded: loaded (/lib/systemd/system/updateimage.service; static)
   Active: inactive (dead) since Fri 2015-05-01 11:08:00 CST; 18min ago
  Process: 579 ExecStart=/bin/updateImage.sh (code=exited, status=0/SUCCESS)
 Main PID: 579 (code=exited, status=0/SUCCESS)

May 01 11:08:00 systemd[1]: Enqueued job updateimage.service/start as 1019
May 01 11:08:00 systemd[1]: Forked /bin/updateImage.sh as 579
May 01 11:08:00 systemd[1]: updateimage.service changed dead -> start
May 01 11:08:00 systemd[579]: Executing: /bin/updateImage.sh
May 01 11:08:00 systemd[1]: Child 579 belongs to updateimage.service


--
laborish




2015-05-01 10:34 GMT+08:00 Cameron Norman camerontnor...@gmail.com:

 On Thu, Apr 30, 2015 at 6:35 PM, Likang Wang labor...@gmail.com wrote:
  Hi all,
 
  The entire system is running on an embedded box, and the system time
 will be
  set to 2008-01-01 00:00:00 after every reboot. My app running on the box
  will get the real time from my server and update time on the box after
 every
  booting.(I could not use NTP or systemd-timesyncd for some other reason)

 Does the service file for this custom time syncing service have the
 directive `Before=time-sync.target` ? If it does, are you sure that
 when it is considered running by systemd that the time has indeed been
 synced?

 Cheers,
 --
 Cameron Norman



Re: [systemd-devel] initrd mount inactive

2015-04-30 Thread Aaron_Wright
Lennart Poettering lenn...@poettering.net wrote on 04/30/2015 02:39:45 AM:
 On Wed, 29.04.15 12:09, aaron_wri...@selinc.com 
 (aaron_wri...@selinc.com) wrote:
 
  I applied those other commits you listed, and I took a look at the lvm2
  package, which was being compiled with --disable-udev_sync and
  --disable-udev_rules. I enabled both of those and recompiled both lvm2
  and systemd.
  
  Nothing changed. Sometimes var.mount is still bound to an inactive
  /dev/mapper/name.
 
 Well, it will be bound to it, but systemd should not act on it
 anymore and unmount it.
 
 Also, the device should become active as soon as udev has run and
 reprobed everything.
 
  Do I need the *.rules files from lvm2?
 
 Well, you do need the DM ones at least. That's actually where the
 interesting bits are, since they properly probe the LUKS device and
 make it available for other components, including systemd, to pick up.

I added a couple of udev rules that are present in the Ubuntu dmsetup
package for my distribution, and now I get a couple of errors from
systemd-udevd:

systemd-udevd[153]: conflicting device node '/dev/mapper/91caea2d-0e19-441e-9ea7-7be1ed345e96' found, link to '/dev/dm-1' will not be created
systemd-udevd[154]: conflicting device node '/dev/mapper/d8668b2e-3a40-46df-8c64-f369a1a7a09c' found, link to '/dev/dm-0' will not be created

With status as:

  dev-mapper-91caea2d\x2d0e19\x2d441e\x2d9ea7\x2d7be1ed345e96.device  loaded  activating  tentative  /dev/mapper/91caea2d-0e19-441e-9ea7-7be1ed345e96
  dev-mapper-d8668b2e\x2d3a40\x2d46df\x2d8c64\x2df369a1a7a09c.device  loaded  activating  tentative  /dev/mapper/d8668b2e-3a40-46df-8c64-f369a1a7a09c

The system seems to work just fine though, so I'm wondering if I should
ignore these errors and move on. I'm not sure what the impact is.