Re: [systemd-devel] Systemd and ceph-osd start

2015-12-09 Thread von Thadden, Joachim, SEVEN PRINCIPLES
This probably has nothing to do with systemd! Ceph finds its OSD devices 
automatically via udev, somewhere here:

  /usr/lib/udev/rules.d/95-ceph-osd.rules.

Did you ever try to manually activate the disks as in the rules? It should be 
something like:

for o in {0..3}; do
  i=$(echo $o | tr 0123456789 abcdefghij)
  echo "... activating disk /dev/ceph${n}-hd${i}1 on ceph$n"
  /usr/sbin/ceph-disk-activate /dev/ceph${n}-hd${i}1
  /usr/sbin/ceph-disk activate-journal /dev/ceph${n}-hd${i}2
done
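The `tr` call in the loop above does the index-to-letter mapping (0→a, 1→b, …). A standalone sanity check of just that mapping — pure shell, no Ceph needed:

```shell
# map OSD indices 0..3 to the drive letters used in the device names,
# the same digit-to-letter translation the activation loop relies on
for o in 0 1 2 3; do
  i=$(echo "$o" | tr 0123456789 abcdefghij)
  echo "osd $o -> drive letter $i"
done
```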

As this has nothing to do with systemd we should not discuss it further here, 
but you might contact me directly via mail or go directly to the appropriate 
Ceph lists.

Regards
Joachim

On 08.12.2015 at 19:35, Andrea Annoè wrote:
Hi,
I am trying to test a Ceph 9.2 cluster.

My lab has 1 mon and 2 OSD servers with 4 disks each.

Only 1 OSD server (with 4 disks) is online.
The disks of the second OSD server don't come up …

Some info about environment:
[ceph@OSD1 ~]$ sudo ceph osd tree
ID  WEIGHT  TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 8.0 root default
-4 8.0 datacenter dc1
-5 8.0 room room1
-6 8.0 row row1
-7 4.0 rack rack1
-2 4.0 host OSD1
  0 1.0 osd.0  up  1.0  1.0
  1 1.0 osd.1  up  1.0  1.0
  2 1.0 osd.2  up  1.0  1.0
  3 1.0 osd.3  up  1.0  1.0
-8 4.0 rack rack2
-3 4.0 host OSD2
  4 1.0 osd.4  down  1.0  1.0
  5 1.0 osd.5  down  1.0  1.0
  6 1.0 osd.6  down  1.0  1.0
  7 1.0 osd.7  down  1.0  1.0
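The down OSDs can be pulled out of output like the tree above mechanically; a small sketch with pure text tools (the sample lines are abridged from the tree, columns assumed to be id/weight/name/status):

```shell
# pick the down OSDs out of `ceph osd tree`-style output;
# sample lines abridged from the tree above
tree='0 1.0 osd.0 up
4 1.0 osd.4 down
5 1.0 osd.5 down'
echo "$tree" | awk '$4 == "down" { print $3 }'
```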

[ceph@OSD1 ceph-deploy]$ sudo ceph osd dump
epoch 411
fsid d17520de-0d1e-495b-90dc-f7044f7f165f
created 2015-11-14 06:56:36.017672
modified 2015-12-08 09:48:47.685050
flags nodown
pool 0 'rbd' replicated size 2 min_size 1 crush_ruleset 0 object_hash rjenkins 
pg_num 256 pgp_num 256 last_change 53 flags hashpspool 
min_write_recency_for_promote 1 stripe_width 0
max_osd 9
osd.0 up   in  weight 1 up_from 394 up_thru 104 down_at 393 last_clean_interval 
[388,392) 192.168.64.129:6800/4599 192.168.62.129:6800/4599 
192.168.62.129:6801/4599 192.168.64.129:6801/4599 exists,up 
499a3624-b2ba-455d-b35a-31d628e1a353
osd.1 up   in  weight 1 up_from 396 up_thru 136 down_at 395 last_clean_interval 
[390,392) 192.168.64.129:6802/4718 192.168.62.129:6802/4718 
192.168.62.129:6803/4718 192.168.64.129:6803/4718 exists,up 
d7933117-0056-4c3c-ac63-2ad300495e3f
osd.2 up   in  weight 1 up_from 400 up_thru 136 down_at 399 last_clean_interval 
[392,392) 192.168.64.129:6806/5109 192.168.62.129:6806/5109 
192.168.62.129:6807/5109 192.168.64.129:6807/5109 exists,up 
7d820897-8d49-4142-8c58-feda8bb04749
osd.3 up   in  weight 1 up_from 398 up_thru 136 down_at 397 last_clean_interval 
[386,392) 192.168.64.129:6804/4963 192.168.62.129:6804/4963 
192.168.62.129:6805/4963 192.168.64.129:6805/4963 exists,up 
96270d9d-ed95-40be-9ae4-7bf66aedd4d8
osd.4 down out weight 0 up_from 34 up_thru 53 down_at 58 last_clean_interval 
[0,0) 192.168.64.130:6800/3615 192.168.64.130:6801/3615 
192.168.64.130:6802/3615 192.168.64.130:6803/3615 autoout,exists 
6364d590-62fb-4348-b8fe-19b59cd2ceb3
osd.5 down out weight 0 up_from 145 up_thru 151 down_at 203 last_clean_interval 
[39,54) 192.168.64.130:6800/2784 192.168.62.130:6800/2784 
192.168.62.130:6801/2784 192.168.64.130:6801/2784 autoout,exists 
aa51cdcc-ca9c-436b-b9fc-7bddaef3226d
osd.6 down out weight 0 up_from 44 up_thru 53 down_at 58 last_clean_interval 
[0,0) 192.168.64.130:6808/4975 192.168.64.130:6809/4975 
192.168.64.130:6810/4975 192.168.64.130:6811/4975 autoout,exists 
36672496-3346-446a-a617-94c8596e1da2
osd.7 down out weight 0 up_from 155 up_thru 161 down_at 204 last_clean_interval 
[49,54) 192.168.64.130:6800/2434 192.168.62.130:6800/2434 
192.168.62.130:6801/2434 192.168.64.130:6801/2434 autoout,exists 
775065fa-8fa8-48ce-a4cc-b034a720fe93

All UUIDs are correct (the down state appeared after an upgrade).
Now I'm not able to create the OSDs with ceph-deploy.
I have removed all OSD disks from the cluster and deployed from scratch.
Now I have problems with the OSD services when starting them with systemd.

[ceph@OSD2 ~]$ sudo systemctl start ceph-osd@4.service
[ceph@OSD2 ~]$ sudo systemctl status ceph-osd@4.service -l
ceph-osd@4.service - Ceph object storage daemon
   Loaded: loaded (/etc/systemd/system/ceph.target.wants/ceph-osd@4.service)
   Active: active (running) since Tue 2015-12-08 10:31:38 PST; 9s ago
  Process: 6542 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster 
${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main 

Re: [systemd-devel] Systemd and ceph-osd start

2015-12-09 Thread von Thadden, Joachim, SEVEN PRINCIPLES
On 09.12.2015 at 11:03, Joachim von Thadden wrote:
This probably has nothing to do with systemd! Ceph finds its OSD devices 
automatically via udev, somewhere here:

  /usr/lib/udev/rules.d/95-ceph-osd.rules.

Did you ever try to manually activate the disks as in the rules? It should be 
something like:

for o in {0..3}; do

My devices are called /dev/-hd. So don't use my loop; use 
only the ceph-disk-activate lines with your devices... Just look into the udev 
rule.

Joachim

--
Joachim von Thadden
Lead Technical Architect

SEVEN PRINCIPLES AG
Ernst-Dietrich-Platz 2
40882 Ratingen
Mobil: +49 162 261 64 66
Tel:   +49 2102 557 100
Fax:   +49 2102 557 101

E-Mail: 
joachim.von-thad...@7p-group.com
Web: www.7p-group.com

Aufsichtsrat: Prof. Dr. h.c. Hans Albert Aukes
Vorstandsvorsitzender: Joseph Kronfli
Handelsregister: HRB 30660 | USt-ID-Nr.: DE197820124 | Steuer-Nr.: 218/5734/1640
Sitz der Gesellschaft: Köln | Registriergericht: Amtsgericht Köln


___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] Query regarding "EnvironmentFile"

2015-12-09 Thread Soumya Koduri

Hi,

I have created a systemd unit file (nfs-ganesha.service) as below:

[Unit]

After=nfs-ganesha-config.service
Requires=nfs-ganesha-config.service


[Service]
EnvironmentFile=-/run/sysconfig/ganesha
ExecStart=/usr/bin/ganesha.nfsd $OPTIONS ${EPOCH}

...



My intention is to execute/start nfs-ganesha-config.service always prior 
to running nfs-ganesha.service (even during restart).


nfs-ganesha-config.service writes certain configuration values to 
'/run/sysconfig/ganesha', which I would want nfs-ganesha.service to read 
before starting the ganesha.nfsd daemon.


But from my tests I see that nfs-ganesha.service picks up old 
configuration values defined in '/run/sysconfig/ganesha' rather than the 
ones generated by 'nfs-ganesha-config.service' at that point. So I am 
assuming 'EnvironmentFile' gets loaded prior to running any dependent 
services (which is 'nfs-ganesha-config.service' here).


Please confirm if that is the case. Also, is there any way to load 
'EnvironmentFile' only after executing all the dependent services?


Thanks,
Soumya


Re: [systemd-devel] Utility for persistent alternative driver binding

2015-12-09 Thread Greg KH
On Wed, Dec 09, 2015 at 08:43:05AM +0200, Panu Matilainen wrote:
> On 12/08/2015 01:47 PM, Greg KH wrote:
> >On Tue, Dec 08, 2015 at 11:34:36AM +0200, Panu Matilainen wrote:
> >>>As was mentioned recently PCI bus numbers may change between reboots, so
> >>
> >>Hmm, got a pointer? I don't think PCI slots change between reboots without
> >>physically swapping hardware; the "ethX problem" comes from the order of
> >>device discovery being unstable across boots, which is a different issue and
> >>not relevant for this case.
> >
> >PCI can renumber the bus ids any time it wants to between reboots; it's
> >not only if you add/remove new hardware.  Now luckily most BIOSes aren't
> >that broken and do keep things stable if the hardware doesn't
> >change, but not all do, so be careful about this.  I had a broken BIOS
> >that would renumber things about every 10 reboots just "for fun", which
> >was very good for testing system assumptions about static PCI
> >device ids.
> 
> Ugh :-/ I've clearly only seen well-behaved BIOSes. But even if the news is
> bad, it's good to know, thanks.

See the other messages in this thread about how Qemu will also give you
semi-random PCI addresses every other boot; that's a much more common
problem and something easy to test with.

> >>>you may want to start with something more stable from the very beginning.
> >>
> >>Such as? I don't see any other data that is there for all PCI (and USB)
> >>devices that allows differentiating between two otherwise identical devices.
> >
> >Again, it all depends on the device itself, they should provide
> >something that makes them unique (MAC address, serial number, topology
> >at the moment, etc.)  There is no one-thing-fits-all, which is why udev
> >provides so many different ways to name things (look at /dev/disk/* and
> >/dev/serial/* as examples of this.)
> 
> At the time driverctl runs, things like MAC address are not available since
> the normal driver is not loaded; it's just a "raw" device on a bus.

It's a PCI device on the bus, not a network device.  When you create the
network device, then you have a MAC address.  You never need to rename a
PCI device.

> So I guess it'll need to grow an alternative mode that allows override by
> PCI ID and operates on all devices by that ID, which loses a bit of control
> vs the slot number but for the cases where slot number isn't reliable...

What?  What are you trying to "rename" here?  I thought we were talking
about network devices, or something else that a user actually interacts
with.  Userspace never deals with a "raw" PCI device, you should never
care about those ids for anything "real".

thanks,

greg k-h


Re: [systemd-devel] Utility for persistent alternative driver binding

2015-12-09 Thread Panu Matilainen

On 12/09/2015 04:26 PM, Greg KH wrote:
> On Wed, Dec 09, 2015 at 08:43:05AM +0200, Panu Matilainen wrote:
>> On 12/08/2015 01:47 PM, Greg KH wrote:
>>> On Tue, Dec 08, 2015 at 11:34:36AM +0200, Panu Matilainen wrote:
>>>>> As was mentioned recently PCI bus numbers may change between reboots, so
>>>>
>>>> Hmm, got a pointer? I don't think PCI slots change between reboots without
>>>> physically swapping hardware; the "ethX problem" comes from the order of
>>>> device discovery being unstable across boots, which is a different issue and
>>>> not relevant for this case.
>>>
>>> PCI can renumber the bus ids any time it wants to between reboots; it's
>>> not only if you add/remove new hardware.  Now luckily most BIOSes aren't
>>> that broken and do keep things stable if the hardware doesn't
>>> change, but not all do, so be careful about this.  I had a broken BIOS
>>> that would renumber things about every 10 reboots just "for fun", which
>>> was very good for testing system assumptions about static PCI
>>> device ids.
>>
>> Ugh :-/ I've clearly only seen well-behaved BIOSes. But even if the news is
>> bad, it's good to know, thanks.
>
> See the other messages in this thread about how Qemu will also give you
> semi-random PCI addresses every other boot; that's a much more common
> problem and something easy to test with.
>
>>>>> you may want to start with something more stable from the very beginning.
>>>>
>>>> Such as? I don't see any other data that is there for all PCI (and USB)
>>>> devices that allows differentiating between two otherwise identical devices.
>>>
>>> Again, it all depends on the device itself, they should provide
>>> something that makes them unique (MAC address, serial number, topology
>>> at the moment, etc.)  There is no one-thing-fits-all, which is why udev
>>> provides so many different ways to name things (look at /dev/disk/* and
>>> /dev/serial/* as examples of this.)
>>
>> At the time driverctl runs, things like MAC address are not available since
>> the normal driver is not loaded; it's just a "raw" device on a bus.
>
> It's a PCI device on the bus, not a network device.  When you create the
> network device, then you have a MAC address.  You never need to rename a
> PCI device.
>
>> So I guess it'll need to grow an alternative mode that allows override by
>> PCI ID and operates on all devices by that ID, which loses a bit of control
>> vs the slot number but for the cases where slot number isn't reliable...
>
> What?  What are you trying to "rename" here?  I thought we were talking
> about network devices, or something else that a user actually interacts
> with.  Userspace never deals with a "raw" PCI device, you should never
> care about those ids for anything "real".

Um, this is not about renaming, never was.

driverctl is for overriding the default device driver binding. For 
example, to bind a certain device to vfio-pci instead of its dedicated 
driver from the go. Or to use an alternative dedicated driver. Or to 
prevent any driver from being bound to a device.

At that early stage there is very little to go with besides the PCI slot 
number and ID.

	- Panu -



Re: [systemd-devel] Systemd and ceph-osd start

2015-12-09 Thread Lennart Poettering
On Tue, 08.12.15 19:35, Andrea Annoè (andrea.an...@iks.it) wrote:

> Hi,
> I try to test ceph 9.2 cluster.

I have no idea about Ceph, and I don't really grok what the problem is
supposed to be, but it really looks like a Ceph problem. Please
contact the Ceph folks for help.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] logind's .PowerOff and .Reboot methods doesn't seem to respect inhibitors

2015-12-09 Thread Mantas Mikulėnas
On Wed, Dec 9, 2015 at 10:20 PM, Troels Mæhl Folke
<t.r.o.e.l.s@gmail.com> wrote:

> Hello,
>
> I've had problems getting systemd-logind to respect shutdown inhibitors
> when
> I ask it over DBus to power off or reboot.
>
> Here is what I've tried:
>
> In one gnome-terminal as non-root, I type:
>
> systemd-inhibit --what=shutdown --mode=block --who=unison unison-gtk2
>
> ...to set up the inhibitor.
> Then next, in another gnome-terminal - also as non-root - I type:
>
> busctl call org.freedesktop.login1 /org/freedesktop/login1 \
> org.freedesktop.login1.Manager PowerOff b true
>

AFAIK, inhibitors from the same user are ignored (which, to be honest, makes
them not very useful), and systemctl merely checks them manually.

(You might find gnome-session-inhibit useful; it tells GNOME itself to
avoid shutting down.)

-- 
Mantas Mikulėnas 


[systemd-devel] logind's .PowerOff and .Reboot methods doesn't seem to respect inhibitors

2015-12-09 Thread Troels Mæhl Folke
Hello,

I've had problems getting systemd-logind to respect shutdown inhibitors when
I ask it over DBus to power off or reboot.

Here is what I've tried:

In one gnome-terminal as non-root, I type:

systemd-inhibit --what=shutdown --mode=block --who=unison unison-gtk2

...to set up the inhibitor.
Then next, in another gnome-terminal - also as non-root - I type:

busctl call org.freedesktop.login1 /org/freedesktop/login1 \
org.freedesktop.login1.Manager PowerOff b true

...to start the shutdown process. This succeeds without any
polkit interaction and the shutdown process is started,
even though I haven't closed unison-gtk2. It also succeeds with false
as the parameter.
On the other hand, if I run 'systemctl poweroff' as non-root, the
inhibitor is respected.
I'm running systemd 228 on Arch Linux x86_64.

Best Regards,
Troels


Re: [systemd-devel] Query regarding "EnvironmentFile"

2015-12-09 Thread Reindl Harald



On 09.12.2015 at 20:46, Lennart Poettering wrote:

I probably should never have added EnvironmentFile= in the first
place. Packagers misunderstand that unit files are subject to admin
configuration and should be treated as such, and that splitting out
configuration of unit files into separate EnvironmentFiles= is a
really nonsensical game of unnecessary indirection


I strongly disagree.

It's the easiest way to adjust a simple variable without touching/copying 
the systemd unit *and* drop-in snippets - the point here is 
simple.

Copying units and/or adding your own snippets easily has two side effects:

* you don't get well-deserved updates for the units
* or the snippets don't play well with later dist-versions of the unit

An EnvironmentFile supported by the distribution's unit is much better for 
simple adaptations.






Re: [systemd-devel] Query regarding "EnvironmentFile"

2015-12-09 Thread Soumya Koduri



On 12/10/2015 01:16 AM, Lennart Poettering wrote:

On Wed, 09.12.15 18:27, Soumya Koduri (skod...@redhat.com) wrote:


Hi,

I have created a systemd unit file (nfs-ganesha.service) as below:

[Unit]

After=nfs-ganesha-config.service
Requires=nfs-ganesha-config.service


[Service]
EnvironmentFile=-/run/sysconfig/ganesha
ExecStart=/usr/bin/ganesha.nfsd $OPTIONS ${EPOCH}

...



My intention is to execute/start nfs-ganesha-config.service always prior to
running nfs-ganesha.service (even during restart).

nfs-ganesha-config.service writes certain configuration values to
'/run/sysconfig/ganesha' which I would want nfs-ganesha.service to read
before starting ganesha.nfsd daemon.

But from my tests I see that nfs-ganesha.service picks up old configuration
values defined in '/run/sysconfig/ganesha' rather than the ones generated by
'nfs-ganesha-config.service' at that point. So I am assuming
'EnvironmentFile' gets loaded prior to running any dependent services (which
is 'nfs-ganesha-config.service' here).

Please confirm if that is the case. Also, is there any way to load
'EnvironmentFile' only after executing all the dependent services?


EnvironmentFile= is processed immediately before forking off the
service process. The env vars the process will see are hence the
contents of that file after all deps with After= ran.

(But honestly, there's really no point in trying to dynamically
convert stuff into a file that is suitable for EnvironmentFile=. I
mean, if you want a shell script, then use a shell script, and invoke
that from the main daemon's ExecStart= line, and make it exec the real
daemon as last step. There's really no point in playing these
multi-service conversion games. Also /etc/sysconfig is a Redhatism
that should really go away, the whole concept is flawed. Adding a new
/run/sysconfig/ certainly makes that even worse.)

Thanks again for the clarification and your inputs. I wanted to 
understand the behavior before taking any other approach.


-Soumya


I probably should never have added EnvironmentFile= in the first
place. Packagers misunderstand that unit files are subject to admin
configuration and should be treated as such, and that splitting out
configuration of unit files into separate EnvironmentFiles= is a
really nonsensical game of unnecessary indirection.

Lennart




Re: [systemd-devel] Query regarding "EnvironmentFile"

2015-12-09 Thread Andrei Borzenkov
On 10.12.2015 at 03:08, Reindl Harald wrote:
> 
> 
> Am 09.12.2015 um 20:46 schrieb Lennart Poettering:
>> I probably should never have added EnvironmentFile= in the first
>> place. Packagers misunderstand that unit files are subject to admin
>> configuration and should be treated as such, and that splitting out
>> configuration of unit files into separate EnvironmentFiles= is a
>> really nonsensical game of unnecessary indirection
> 
> I strongly disagree.
> 
> It's the easiest way to adjust a simple variable without touching/copying
> the systemd unit *and* drop-in snippets - the point here is
> simple.
> 
> Copying units and/or adding your own snippets easily has two side effects:
> 
> * you don't get well-deserved updates for the units
> * or the snippets don't play well with later dist-versions of the unit
> 
> An EnvironmentFile supported by the distribution's unit is much better for
> simple adaptations.
> 

This may have been true in the past.

How is

[Service]
Environment=FOO=bar

snippet today worse than an environment file that does exactly the same?
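Such a snippet lives in a per-unit drop-in directory (this is what `systemctl edit` creates). A sketch of writing one by hand — emulated under a temp dir instead of the real /etc, so it is safe to run anywhere; `foo.service` is a placeholder unit name:

```shell
# drop-in directory for foo.service, emulated under $ROOT instead of /etc
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/systemd/system/foo.service.d"
cat > "$ROOT/etc/systemd/system/foo.service.d/override.conf" <<'EOF'
[Service]
Environment=FOO=bar
EOF
# systemd would merge this over the vendor unit file; show what we wrote
cat "$ROOT/etc/systemd/system/foo.service.d/override.conf"
```

On a real system the path would be /etc/systemd/system/foo.service.d/override.conf, followed by `systemctl daemon-reload`.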


Re: [systemd-devel] Query regarding "EnvironmentFile"

2015-12-09 Thread Lennart Poettering
On Wed, 09.12.15 18:27, Soumya Koduri (skod...@redhat.com) wrote:

> Hi,
> 
> I have created a systemd unit file (nfs-ganesha.service) as below:
> 
> [Unit]
> 
> After=nfs-ganesha-config.service
> Requires=nfs-ganesha-config.service
> 
> 
> [Service]
> EnvironmentFile=-/run/sysconfig/ganesha
> ExecStart=/usr/bin/ganesha.nfsd $OPTIONS ${EPOCH}
> 
> ...
> 
> 
> 
> My intention is to execute/start nfs-ganesha-config.service always prior to
> running nfs-ganesha.service (even during restart).
> 
> nfs-ganesha-config.service writes certain configuration values to
> '/run/sysconfig/ganesha' which I would want nfs-ganesha.service to read
> before starting ganesha.nfsd daemon.
> 
> But from my tests I see that nfs-ganesha.service picks up old configuration
> values defined in '/run/sysconfig/ganesha' rather than the ones generated by
> 'nfs-ganesha-config.service' at that point. So I am assuming
> 'EnvironmentFile' gets loaded prior to running any dependent services (which
> is 'nfs-ganesha-config.service' here).
> 
> Please confirm if that is the case. Also, is there any way to load
> 'EnvironmentFile' only after executing all the dependent services?

EnvironmentFile= is processed immediately before forking off the
service process. The env vars the process will see are hence the
contents of that file after all deps with After= ran.

(But honestly, there's really no point in trying to dynamically
convert stuff into a file that is suitable for EnvironmentFile=. I
mean, if you want a shell script, then use a shell script, and invoke
that from the main daemon's ExecStart= line, and make it exec the real
daemon as last step. There's really no point in playing these
multi-service conversion games. Also /etc/sysconfig is a Redhatism
that should really go away, the whole concept is flawed. Adding a new
/run/sysconfig/ certainly makes that even worse.)
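A minimal sketch of the wrapper approach described above: an ExecStart= script that computes its settings inline and then execs the real daemon as the last step, so no separate config service or EnvironmentFile= is needed. The option values are illustrative only, and /bin/echo stands in for the real ganesha.nfsd so the sketch is safe to run:

```shell
#!/bin/sh
# ExecStart= would point at this script; it computes the values the
# separate nfs-ganesha-config.service used to write out, then execs the
# daemon as the last step so systemd tracks the daemon's PID directly.
OPTIONS="-N NIV_EVENT"   # illustrative daemon options
EPOCH=42                 # illustrative dynamically computed value
exec /bin/echo ganesha.nfsd $OPTIONS "$EPOCH"
```

The final `exec` replaces the shell with the daemon, which is what makes this work cleanly under Type=simple or Type=forking alike.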

I probably should never have added EnvironmentFile= in the first
place. Packagers misunderstand that unit files are subject to admin
configuration and should be treated as such, and that splitting out
configuration of unit files into separate EnvironmentFiles= is a
really nonsensical game of unnecessary indirection.

Lennart

-- 
Lennart Poettering, Red Hat