Re: [systemd-devel] TTL for systemd -> EL7

2016-10-12 Thread Chris Bell

On 2016-10-12 11:26, Chris Bell wrote:

On 2016-10-12 09:28, Reindl Harald wrote:

Am 12.10.2016 um 15:08 schrieb Chris Bell:

Not sure if this is the right place to ask


no


Sorry


I'll unsub so this doesn't happen again. Sorry again for spamming.


Re: [systemd-devel] TTL for systemd -> EL7

2016-10-12 Thread Chris Bell

On 2016-10-12 09:28, Reindl Harald wrote:

Am 12.10.2016 um 15:08 schrieb Chris Bell:

Not sure if this is the right place to ask


no


Sorry

box runs 231, but our EL7 (RHEL7.2) boxes are only at 219, where it has
been (if I'm not mistaken) since the 7.0 release


not true; there were at least one, if not two, major version jumps within
the CentOS 7 lifetime, and it was released with 208


My mistake




Any idea on when they
may port an update?


when they think it's needed for very good reasons. You bought that
"no version jumps" policy implicitly with RHEL7, so choose another
distribution or live with it


Thanks :)

On 2016-10-12 09:51, Greg KH wrote:

Why not ask Red Hat?  You are paying for support, why wouldn't you be
able to get information like this directly from them?

good luck!

greg k-h


I don't have direct access; that's a couple levels above my paygrade :/


I apologize for spamming the list. Thank you for your responses.

Regards,
Chris


[systemd-devel] TTL for systemd -> EL7

2016-10-12 Thread Chris Bell

Hey everyone,

Not sure if this is the right place to ask, but I figured someone here 
or at RH would know. What's the TTL for systemd updates on EL7? My Arch 
box runs 231, but our EL7 (RHEL7.2) boxes are only at 219, where it has 
been (if I'm not mistaken) since the 7.0 release. Any idea on when they 
may port an update?


Thanks,
Chris


Re: [systemd-devel] How to deploy systemd-nspawn containers and use for deployment

2016-10-12 Thread Chris Bell

On 2016-10-11 22:29, Samuel Williams wrote:


For step 2, what would be the best practice? Rsync the local container
to the remote container?



That's worked fine for me so far. Just to state the obvious: make sure 
the container is stopped before using rsync.
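
For example, a minimal sketch of the copy (machine name and remote host 
are hypothetical):

machinectl poweroff mycontainer    # make sure the tree is quiescent
rsync -aHAXS --numeric-ids /var/lib/machines/mycontainer/ \
    remotehost:/var/lib/machines/mycontainer/
ssh remotehost machinectl start mycontainer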



[systemd-devel] Create complete dependency/boot organization map?

2015-10-23 Thread Chris Bell
Hi all,

I was wondering, is there any way to create a complete
dependency/ordering map for my entire systemd system? Basically, I'd
like to be able to see, relatively clearly, what targets are reached and
when, what services are wanted/required by the target, and some
dependencies between services. A way to visualize the chain from
switch-root.target to default.target.

I know various functions exist to extract portions of the information I
need, and they can be used together to get a complete picture, but I
find it difficult to keep everything organized into something ultimately
useful. Is there a facility I'm missing? Or is this particular ability
too complex to be worth implementing? 

Thanks!

--Chris


Re: [systemd-devel] Create complete dependency/boot organization map?

2015-10-23 Thread Chris Bell

On 2015-10-23 12:29, Tomasz Torcz wrote:

On Fri, Oct 23, 2015 at 12:02:18PM -0400, Chris Bell wrote:

Hi all,

I was wondering, is there any way to create a complete
dependency/ordering map for my entire systemd system? Basically, I'd
like to be able to see, relatively clearly, what targets are reached and
when, what services are wanted/required by the target, and some
dependencies between services. A way to visualize the chain from
switch-root.target to default.target.

I know various functions exist to extract portions of the information I
need, and they can be used together to get a complete picture, but I
find it difficult to keep everything organized into something ultimately
useful. Is there a facility I'm missing? Or is this particular ability
too complex to be worth implementing?



  Check "systemd-analyze dot".  If you pipe the output through "dot" 
program,

you will get graphical map similar to this:
http://dżogstaff.pipebreaker.pl/2012.03.08-targets.png


Wow. It's not the prettiest thing I've seen, but it certainly does the 
job! Thanks!
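
For the archives, the suggested pipeline is roughly this (graphviz 
provides the "dot" program; output file names are arbitrary):

$ systemd-analyze dot --order | dot -Tsvg > systemd-order.svg
$ systemd-analyze dot multi-user.target | dot -Tpng > multi-user.png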


--Chris


[systemd-devel] systemd-analyze verify errors

2015-10-23 Thread Chris Bell

Hi all,

First off, I apologize for starting so many threads here. I'm working on 
becoming intimately familiar with systemd and all its capabilities, and 
have run into numerous questions and issues. This list has been 
incredibly helpful, thank you all!!


Now, the issue. When I use `systemd-analyze verify` I intermittently get 
errors about some of my mount units. Specifically, it will list one of 
my mount units and say: "Unit is bound to inactive unit <unit>. 
Stopping, too."


I get at least one of these messages any time I verify a unit. If I run 
verify without a unit file name, I do not get the messages.


Examples: (same command each time)

# systemd-analyze verify rngd.service
dev-disk-by\x2duuid-c1de93ea\x2d51ca\x2d4498\x2db7f4\x2df32b3e2fcbd4.swap: Unit is bound to inactive unit dev-disk-by\x2duuid-c1de93ea\x2d51ca\x2d4498\x2db7f4\x2df32b3e2fcbd4.device. Stopping, too.


# systemd-analyze verify rngd.service
boot.mount: Unit is bound to inactive unit dev-sda2.device. Stopping, too.
dev-disk-by\x2duuid-c1de93ea\x2d51ca\x2d4498\x2db7f4\x2df32b3e2fcbd4.swap: Unit is bound to inactive unit dev-disk-by\x2duuid-c1de93ea\x2d51ca\x2d4498\x2db7f4\x2df32b3e2fcbd4.device. Stopping, too.
home.mount: Unit is bound to inactive unit dev-disk-by\x2duuid-e3c4306f\x2dda34\x2d4e17\x2d9572\x2d048733a2df52.device. Stopping, too.


# systemd-analyze verify rngd.service
home.mount: Unit is bound to inactive unit dev-disk-by\x2duuid-e3c4306f\x2dda34\x2d4e17\x2d9572\x2d048733a2df52.device. Stopping, too.
boot.mount: Unit is bound to inactive unit dev-sda2.device. Stopping, too.
boot-efi.mount: Unit is bound to inactive unit dev-sda1.device. Stopping, too.




Each time, I get messages that mounts are bound to inactive devices and 
that the mounts will be stopped. The mounts are never actually stopped; 
I've just been ignoring them for now. Is there anything I should do 
about these?


Thanks again!!
--Chris


Re: [systemd-devel] Create complete dependency/boot organization map?

2015-10-23 Thread Chris Bell

On 2015-10-23 12:23, Andrei Borzenkov wrote:

23.10.2015 19:02, Chris Bell writes:

Hi all,

I was wondering, is there any way to create a complete
dependency/ordering map for my entire systemd system?


For starting something, "systemd --test" shows all static dependencies
pulled in by starting a given unit. For stopping something, I do not
think anything like this exists.


This provides just a ton of information; far more than what I'm looking 
for.
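
(For reference, I ran something along these lines; the unit name is just 
an example:)

# systemd --test --system --unit=default.target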





Basically, I'd
like to be able to see, relatively clearly, what targets are reached and
when,


Not sure what "when" means. systemd --test dumps jobs that will be
queued together with dependencies.


Basically, in what order are the targets started, and what jobs must 
complete before the next target can be started?





what services are wanted/required by the target, and some


That is also shown by "systemctl list-dependencies".


I use this, but I've found that it's best for analyzing a single unit, 
not the overall system.
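
(For anyone searching the archives, the invocations in question look 
like this; unit names are just examples:)

$ systemctl list-dependencies default.target           # what a target pulls in
$ systemctl list-dependencies --reverse sshd.service   # what pulls a unit in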





dependencies between services. A way to visualize the chain from
switch-root.target to default.target.



switch-root.target is challenging because when you run it after boot
you have rather different unit definitions. It may be possible to
override it using SYSTEMD_UNIT_PATH, but I do not know how to emulate
/run and /etc. Also, results of generators that were running in the
initrd are no longer available.


Doesn't have to be from switch-root; I guess from the first 'real' 
target once it leaves the initrd.




I know various functions exist to extract portions of the information I
need, and they can be used together to get a complete picture, but I
find it difficult to keep everything organized into something ultimately
useful. Is there a facility I'm missing? Or is this particular ability
too complex to be worth implementing?


To clarify, I'd effectively like to be able to create a visualization of 
my boot/init process, in the spirit of how the bootup manpage portrays 
the systemd bootup process [0], only with the particulars of my system. 
I don't expect it to have the nice graphical flowchart representation, 
but some clear representation of that data.


Thanks again!

--Chris

[0] http://www.freedesktop.org/software/systemd/man/bootup.html


[systemd-devel] Setting Environment Configuration (Affinity) for Slices

2015-10-19 Thread Chris Bell

Hi all,

Is there a way to set an affinity for an entire slice? Say, for example, 
I have system-webhosted.slice, but I only want the services running 
within system-webhosted.slice to run on cores 5-8. I can set this 
individually per service (systemd.exec man page), but it does not 
indicate that I can do this for slices. Also, the systemd.slice man page 
says it only accepts resource control directives, not environment config 
directives. Is there any way I can set an environment config directive 
for an entire slice? Or do I need to do it per-service?
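
For concreteness, the per-service form I mean is an override like this 
(service name and CPU numbers are only examples; CPUAffinity= counts 
CPUs from 0):

# /etc/systemd/system/httpd.service.d/override.conf
[Service]
CPUAffinity=4 5 6 7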


Alternatively, is there a way (and this sounds way too hacky) to 
hierarchically order a slice under a service? So, basically I can start 
some dummy service with environment configs, and the slice will be a 
child of that service and all units in that slice will inherit the 
environment configs from the parent of the slice?


/
└─system.slice
  └─dummy-slice-wrapper.service
└─system-webhosted.slice
  ├─httpd.service
  └─postgresql.service

Where dummy-slice-wrapper.service has affinity set to 5-8 and has 
system-webhosted.slice as a child. Then the services inside the slice 
(httpd, postgresql) also have their affinity locked to 5-8, without 
having to specify it with an override for each service.


I've noticed that systemd-nspawn@.service has a child 
'system.slice', though I don't know if that setup can enforce what I'd 
like it to.


Is there any way to do this with the current setup?

Thanks in advance!!

--Chris


Re: [systemd-devel] Setting Environment Configuration (Affinity) for Slices

2015-10-19 Thread Chris Bell

On 2015-10-19 17:24, Lennart Poettering wrote:

On Mon, 19.10.15 17:16, Chris Bell (cwb...@narmos.org) wrote:


>However, I just had a long chat with Tejun Heo about this, and we came
>to the conclusion that it's probably safe to expose a minimal subset of
>cpuset now, and reuse the existing CPUAffinity= service setting for
>that: right now, it only affects the main process of a service at fork
>time (and all child processes forked off from that, recursively), by
>using sched_setaffinity(). Our idea would be to propagate it into the
>"cpuset.cpus" field too, so that the setting is first passed to
>sched_setaffinity(), and then also written to the cpuset
>hierarchy. This should be pretty safe, and allow us to make this
>available in slices too. It would result in a slight change of
>behaviour though, as making adjustments to cpuset would mean that
>daemons cannot extend their affinity with sched_setaffinity() above
>what was set with cpuset anymore. But I think this is OK.

So, there's a good chance of a subset of cpuset-related options at the
slice level relatively soon, but full capabilities will have to wait
until kernel cgroups are improved?


Well, I am not sure what "full capabilities" really means here. Much of
the cpuset functionality appears to be little more than help for
writing shell scripts. That part is certainly nothing we want to
expose.

The other part is NUMA memory node stuff, but supposedly that's stuff
that should be dealt with automatically by the kernel, and not need
user configuration. Hence it's nothing we really want to expose
anytime soon.


Ah, I misunderstood.




>I am not sure I understand what you want to do with the env vars
>precisely? What kind of env vars do you intend to set?

Basically, I have a number of services that may or may not be running at
any given time, based on the whims of the users. All of these services
are hosted services of some type, and occasionally they have been known
to eat all CPU cores, lagging everything else. I'm working on setting up
CPU shares and other resource controls to try and keep resources
available for immediate execution of system processes, services, etc.
I'd prefer to do this with affinity; assign critical processes to CPUs
0-1, and the rest limited to subsets of the available remaining CPUs. I
was hoping I could do this in one run by saying "everything in this
slice must run with this affinity." I can do it on a per-service basis,
but with a large number of services it gets tedious.


Well, sure, exposing the cpuset knobs as discussed above should make
this easy, and that's precisely what slices have been introduced for.


So I just have to wait for them to be introduced.



I was mostly wondering about the env var issue you raised...

I also think it would be convenient in some cases to have the 'Nice'
and 'Private{Network,Devices,etc}' directives apply to an entire
slice. That way I can use slices to control, manage, and group related
services. (Example: I'd like to manage postfix and dovecot together in
system-mail.slice. I'd like to be able to use the slice to set exec
options for both services. Then if I add another service to
system-mail.slice, it would also automatically be constrained by the
limits set in system-mail.slice.)


Use CPUShares= as per-slice/per-service/per-scope equivalent of
Nice=.

PrivateXYZ= otoh is very specific to what a daemon does, it's a
sandboxing feature, and sandboxes must always be adjusted to the
individual daemons. I doubt that this is something to support as
anything but a service-specific knob.

Lennart


Ok, so it seems like most of what I've been trying to implement is 
available in some form, just not how I was expecting. I'll take another 
look at the Resource Control directives and see how to adjust them for 
my needs. It's not as direct as I was hoping, but they seem like they'll 
do what I need.
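
As a rough sketch of where I'll start (slice name and values are mine, 
not a recommendation):

# /etc/systemd/system/system-webhosted.slice.d/override.conf
[Slice]
CPUShares=256

# and in each hosted service (or its override):
[Service]
Slice=system-webhosted.slice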


If I have a set of services that really need to be finely controlled I 
should probably just run them in a container, and set limits for the 
container. Will that work as I am expecting? Will a systemd-nspawn 
container respect CPUAffinity settings from the service override file?


Thanks again!!

--Chris



Re: [systemd-devel] Setting Environment Configuration (Affinity) for Slices

2015-10-19 Thread Chris Bell

On 2015-10-19 14:15, Lennart Poettering wrote:

On Mon, 19.10.15 17:37, Chris Bell (cwb...@narmos.org) wrote:


>I was mostly wondering about the env var issue you raised...
>
>>I also think it would be convenient in some cases to have the
>>'Nice' and 'Private{Network,Devices,etc}' directives apply to an entire
>>slice. That way I can use slices to control, manage, and group related
>>services. (Example: I'd like to manage postfix and dovecot together in
>>system-mail.slice. I'd like to be able to use the slice to set exec
>>options
>>for both services. Then if I add another service to system-mail.slice,
>>it
>>would also automatically be constrained by the limits set in
>>system-mail.slice.)
>
>Use CPUShares= as per-slice/per-service/per-scope equivalent of
>Nice=.
>
>PrivateXYZ= otoh is very specific to what a daemon does, it's a
>sandboxing feature, and sandboxes must always be adjusted to the
>individual daemons. I doubt that this is something to support as
>anything but a service-specific knob.
>
>Lennart

Ok, so it seems like most of what I've been trying to implement is
available in some form, just not how I was expecting. I'll take another
look at the Resource Control directives and see how to adjust them for
my needs. It's not as direct as I was hoping, but they seem like they'll
do what I need.


If I have a set of services that really need to be finely controlled I
should probably just run them in a container, and set limits for the
container. Will that work as I am expecting? Will a systemd-nspawn
container respect CPUAffinity settings from the service override file?


CPUAffinity= is generally inherited down the process tree. Hence yes,
this will work. But do note that processes may freely readjust their
own affinity using sched_setaffinity() at any time, and thus are free
to undo the setting. If we hook up cpuset with systemd as proposed,
this is not possible anymore. Also, if we hook up cpuset, then it's
easy to readjust the cpuset stuff dynamically at runtime.

Lennart


Ok, I think my best shot is to readjust my strategy with resource 
management. I'm sure I can implement a solution with CPU Shares and CPU 
usage limits. It doesn't seem like the Affinity option can be enforced 
the way I had hoped, anyway.


Thanks for all the help!!

--Chris


Re: [systemd-devel] Setting Environment Configuration (Affinity) for Slices

2015-10-19 Thread Chris Bell

On 2015-10-19 16:54, Lennart Poettering wrote:

On Mon, 19.10.15 11:58, Chris Bell (cwb...@narmos.org) wrote:


Is there a way to set an affinity for an entire slice?


No there is not. But it's definitely our intention to add this, and
expose the "cpuset" cgroup controller this way. Unfortunately the
"cpuset" controller of the kernel currently exposes really awful
behaviour, hence we have not exposed its functionality this way.

However, I just had a long chat with Tejun Heo about this, and we came
to the conclusion that it's probably safe to expose a minimal subset of
cpuset now, and reuse the existing CPUAffinity= service setting for
that: right now, it only affects the main process of a service at fork
time (and all child processes forked off from that, recursively), by
using sched_setaffinity(). Our idea would be to propagate it into the
"cpuset.cpus" field too, so that the setting is first passed to
sched_setaffinity(), and then also written to the cpuset
hierarchy. This should be pretty safe, and allow us to make this
available in slices too. It would result in a slight change of
behaviour though, as making adjustments to cpuset would mean that
daemons cannot extend their affinity with sched_setaffinity() above
what was set with cpuset anymore. But I think this is OK.


So, there's a good chance of a subset of cpuset-related options at the 
slice level relatively soon, but full capabilities will have to wait 
until kernel cgroups are improved?



Is there any way I can set an environment config directive for
an entire slice? Or do I need to do it per-service?


The latter. Slices are really about resource control, and an env var
really isn't a resource control knob.


Alternatively, is there a way (and this sounds way too hacky) to
hierarchically order a slice under a service?


Nope. Slices are the inner nodes of the resource control tree, and
services/scopes are the leaves. That's how they are defined.

I've noticed that systemd-nspawn@.service has a child 'system.slice',
though I don't know if that setup can enforce what I'd like it to.


Well, that's because nspawn is a delegation unit that encapsulates a
completely new cgroup hierarchy of its own, managed by a new systemd
instance.


Aha, that makes sense.


I am not sure I understand what you want to do with the env vars
precisely? What kind of env vars do you intend to set?


Basically, I have a number of services that may or may not be running at 
any given time, based on the whims of the users. All of these services 
are hosted services of some type, and occasionally they have been known 
to eat all CPU cores, lagging everything else. I'm working on setting up 
CPU shares and other resource controls to try and keep resources 
available for immediate execution of system processes, services, etc. 
I'd prefer to do this with affinity; assign critical processes to CPUs 
0-1, and the rest limited to subsets of the available remaining CPUs. I 
was hoping I could do this in one run by saying "everything in this 
slice must run with this affinity." I can do it on a per-service 
basis, but with a large number of services it gets tedious.


I also think it would be convenient in some cases to have the 
'Nice' and 'Private{Network,Devices,etc}' directives apply to an entire 
slice. That way I can use slices to control, manage, and group related 
services. (Example: I'd like to manage postfix and dovecot together in 
system-mail.slice. I'd like to be able to use the slice to set exec 
options for both services. Then if I add another service to 
system-mail.slice, it would also automatically be constrained by the 
limits set in system-mail.slice.)


Basically, I think this would be a useful hierarchy level for more 
coarse-grained service group management and configuration.


--Chris


Re: [systemd-devel] Setting Environment Configuration (Affinity) for Slices

2015-10-19 Thread Chris Bell

On 2015-10-19 17:16, Chris Bell wrote:

Basically, I have a number of services that may or may not be running
at any given time, based on the whims of the users. All of these
services are hosted services of some type, and occasionally they have
been known to eat all CPU cores, lagging everything else. I'm working
on setting up CPU shares and other resource controls to try and keep
resources available for immediate execution of system processes,
services, etc. I'd prefer to do this with affinity; assign critical
processes to CPUs 0-1, and the rest limited to subsets of the
available remaining CPUs. I was hoping I could do this in one run by
saying "everything in this slice can must run with this affinity." I
can do it on a per-service basis, but with a large number of services
it gets tedious.

I also think it would be convenient in some cases to have
the 'Nice' and 'Private{Network,Devices,etc}' directives apply to an
entire slice. That way I can use slices to control, manage, and group
related services. (Example: I'd like to manage postfix and dovecot
together in system-mail.slice. I'd like to be able to use the slice to
set exec options for both services. Then if I add another service to
system-mail.slice, it would also automatically be constrained by the
limits set in system-mail.slice.)

Basically, I think this would be a useful hierarchy level for more
coarse-grained service group management and configuration.


Wait... Is what I'm looking for here a container?


Re: [systemd-devel] Overriding WantedBy and Alias directives

2015-10-16 Thread Chris Bell

On 2015-10-16 18:30, Andrei Borzenkov wrote:

16.10.2015 17:41, Chris Bell writes:

[Install]
WantedBy=   # To clear out previous wantedby params,
though this doesn't seem to work like that. Documentation doesn't say it
should, so I'm not surprised.


Only selected directives can be cleared this way.


Is there a way I can have it only enable the alias of the unit? Or do
both have to be enabled?


The problem is that Alias is just a symlink to the "primary" unit file,
but in the case of an instantiated template no primary unit file exists
at all. So there would really be nothing to link to.

But it seems that even if I create a link foo@bar.service to
foo@.service, it still wants to enable the template, not the
instantiated unit.

Also, is there any way to specify a unit alias within an override.conf?


Seems to be ignored; at least the [Install] section is.


So, in short, systemd doesn't provide quite the functionality I am 
looking for here. I am able to enable only the alias by using:


[Install]
Alias=machines.target.wants/gitlab.service

and not using 'WantedBy.' This achieves part of what I'd like to 
accomplish. However, it doesn't end up being any more convenient from 
most points of view. I was hoping that, if I aliased 
'systemd-nspawn@gitlab.service' to 'gitlab.service', I could then 
use:


# systemctl <command> gitlab.service

and have it know that I'm talking about systemd-nspawn@gitlab.service. And I would 
really like to be able to do this with overrides. Why? Because I'd like 
to be able to have conveniently-named service identifiers that point to 
pre-defined services/templates/etc. I would like to manage 
'systemd-nspawn@gitlab.service' as 'gitlab.service' without having to 
copy the systemd-nspawn template to a brand new gitlab.service. If I 
create one-off copies, then if the template is updated those changes 
don't propagate to the one-off copy. I guess I could accomplish this 
with symlinks in the /etc/systemd dir (gitlab.service -> 
systemd-nspawn@gitlab.service) but that's not a nice, clean, or good way 
to do it. Are there any other solutions/workarounds? Or is systemd just 
not intended to be used like this?
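
(For completeness, the symlink workaround I mean would be the following, 
using the paths from the enable output above:)

# ln -s /etc/systemd/system/systemd-nspawn@gitlab.service /etc/systemd/system/gitlab.service
# systemctl daemon-reload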


Thanks again!

Chris


Re: [systemd-devel] Machinectl shell/login do not attach to console

2015-10-16 Thread Chris Bell

On 2015-10-16 13:55, Chris Bell wrote:

On 2015-10-14 15:58, Lennart Poettering wrote:

On Mon, 05.10.15 12:30, Chris Bell (cwb...@narmos.org) wrote:


Hi all,

I have an Arch machine with systemd 226, running an Arch container, also
with systemd 226. For whatever reason in 225, `machinectl login` stopped
working correctly, and in 226 `machinectl login` does not work properly.
It attaches to the machine, but does not seem to redirect stdin and
stdout to the machine. When I attempt to use login, the login prompt is
never printed to the command line:


There were some races when machinectl was too fast and the systemd
inside the container too slow. This should be fixed in systemd git,
specifically commit 40e1f4ea7458a0a80eaf1ef356e52bfe0835412e and 
related.


I've recompiled from git, and the problem has, indeed, been solved! 
Thank you!


Sorry, I was wrong. I was running 'machinectl shell' without a machine 
name, and it spawned a shell for my host machine. Guest machine still 
cannot be accessed with 'shell' or 'login' and stdin/out are still 
redirected to the journal.


I compiled commit 7a1e5abbc6e741e5b6995288c607522faa69c8b4 (master) from 
the GitHub repo.



Re: [systemd-devel] Machinectl shell/login do not attach to console

2015-10-16 Thread Chris Bell

On 2015-10-14 15:58, Lennart Poettering wrote:

On Mon, 05.10.15 12:30, Chris Bell (cwb...@narmos.org) wrote:


Hi all,

I have an Arch machine with systemd 226, running an Arch container, also
with systemd 226. For whatever reason in 225, `machinectl login` stopped
working correctly, and in 226 `machinectl login` does not work properly.
It attaches to the machine, but does not seem to redirect stdin and
stdout to the machine. When I attempt to use login, the login prompt is
never printed to the command line:


There were some races when machinectl was too fast and the systemd
inside the container too slow. This should be fixed in systemd git,
specifically commit 40e1f4ea7458a0a80eaf1ef356e52bfe0835412e and 
related.


I've recompiled from git, and the problem has, indeed, been solved! 
Thank you!




[systemd-devel] Overriding WantedBy and Alias directives

2015-10-16 Thread Chris Bell

Hello,

I currently have a systemd unit that I have to reference a lot and which 
has a rather long name. I would prefer to be able to reference this unit 
by a short alias.


Example: I have a container unit called 'systemd-nspawn@gitlab.service', 
and I would like to be able to refer to it as simply 'sd-gitlab.service'. 
I've added an override.conf with the following:


[Install]
WantedBy=   # To clear out previous wantedby params, though this 
            # doesn't seem to work like that. Documentation doesn't 
            # say it should, so I'm not surprised.
Alias=sd-gitlab.service   # I've also tried 
            # Alias=machines.target.wants/gitlab.service and omitted 
            # the following WantedBy decl.

WantedBy=machines.target

When I run enable, it does not create the alias symlink:

#systemctl enable systemd-nspawn@gitlab.service
Created symlink from 
/etc/systemd/system/machines.target.wants/systemd-nspawn@gitlab.service 
to /etc/systemd/system/systemd-nspawn@gitlab.service



However, if I edit the systemd-nspawn@gitlab.service base unit file 
(systemctl edit --full systemd-nspawn@gitlab.service) and change the 
Install section to:


[Install]
Alias=gitlab.service  # Only added this one line
WantedBy=machines.target

and enable the service:

#systemctl enable systemd-nspawn@gitlab.service
Created symlink from /etc/systemd/system/gitlab.service to 
/etc/systemd/system/systemd-nspawn@gitlab.service.
Created symlink from 
/etc/systemd/system/machines.target.wants/systemd-nspawn@gitlab.service 
to /etc/systemd/system/systemd-nspawn@gitlab.service.


Is there a way I can have it only enable the alias of the unit? Or do 
both have to be enabled?

Also, is there any way to specify a unit alias within an override.conf?

Thanks in advance!




[systemd-devel] systemctl edit via polkit results in access denied

2015-10-16 Thread Chris Bell

Hello,

I have configured polkit to allow my user to manage basically everything 
in systemd without requiring sudo or root. Enabling, disabling, 
reloading, etc. all work as expected. However, 'systemctl edit' does 
not. It does not deny permission for me to use the function, but it 
fails when trying to copy the file to a temporary directory:


$ systemctl edit rngd.service
Failed to create directories for 
"/etc/systemd/system/rngd.service.d/override.conf": Permission denied



Is there a way for polkit to correct or temporarily override these 
permissions? Or should I use ACLs to grant write permission to my user 
for those directories?


Thanks!

Chris


Re: [systemd-devel] systemctl edit via polkit results in access denied

2015-10-16 Thread Chris Bell

On 2015-10-16 15:41, Mantas Mikulėnas wrote:

On Fri, Oct 16, 2015 at 5:44 PM, Chris Bell <cwb...@narmos.org> wrote:

Is there a way for polkit to correct or temporarily override these
permissions? Or should I use ACLs to grant write permission to my
user for those directories?


The problem is that `systemctl edit` only uses D-Bus calls for
reloading systemd; it still manages the unit files directly. For now,
use directory ACLs.


Thanks, that's what I ended up doing. I created a new group, 
sd-managers, and gave the group rwX access to the systemd directories 
via ACLs. Now it works as expected, thanks!
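
Roughly what I ran, for the archives (my username is a placeholder; the 
second setfacl sets the default ACL so newly created files inherit it):

# groupadd sd-managers
# gpasswd -a chris sd-managers
# setfacl -R -m g:sd-managers:rwX /etc/systemd/system
# setfacl -R -m d:g:sd-managers:rwX /etc/systemd/system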


--Chris


Re: [systemd-devel] Machinectl shell/login do not attach to console

2015-10-14 Thread Chris Bell

On 2015-10-12 12:35, arnaud gaboury wrote:

On Mon, Oct 5, 2015 at 2:30 PM, Chris Bell <cwb...@narmos.org> wrote:

Hi all,

I have an Arch machine with systemd 226,

Which Arch version exactly? I had the same issue with 226. It is gone
with 226-3.
Setup: Arch host running a Fedora container.


226-3 on host and guest. 227 is in testing, maybe that will fix it? I 
don't have any other ideas at this point.



Re: [systemd-devel] Machinectl shell/login do not attach to console

2015-10-12 Thread Chris Bell

On 2015-10-05 12:30, Chris Bell wrote:

I have an Arch machine with systemd 226, running an Arch container,
also with systemd 226. For whatever reason in 225, `machinectl login`
stopped working correctly, and in 226 `machinectl login` does not work
properly. It attaches to the machine, but does not seem to redirect
stdin and stdout to the machine. When I attempt to use login, the
login prompt is never printed to the command line:


So I've done some more testing, and have not been able to come up with 
any solution. I have also simplified the scenario for the error:


First, I start the machine from a shell using:

# systemd-nspawn --boot --network-bridge=br0 --machine=gitlab

It boots and drops me to a login prompt:

Spawning container gitlab on /var/lib/machines/gitlab.
Press ^] three times within 1s to kill container.
systemd 226 running in system mode. (+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN)

Detected virtualization systemd-nspawn.
Detected architecture x86-64.

Welcome to Arch Linux!

[  OK  ] Reached target Multi-User System.

Arch Linux 4.1.6-1-ARCH (console)

gitlab login:

Where I can log in as root and use the machine as normal. If I attempt 
to spawn a `machinectl shell` from a second bash shell, it simply gives 
me the 'press ^] three times...to exit' message. The login prompt is 
then printed on the original bash shell, the one where systemd-nspawn is 
running. All stdin and stdout is redirected to the shell that started 
'systemd-nspawn'; the shell where I ran 'machinectl shell' doesn't 
respond to anything but the kill command.


I could really use some help here!

Thanks,
Chris


[systemd-devel] Machinectl shell/login do not attach to console

2015-10-05 Thread Chris Bell

Hi all,

I have an Arch machine with systemd 226, running an Arch container, also 
with systemd 226. For whatever reason in 225, `machinectl login` stopped 
working correctly, and in 226 `machinectl login` does not work properly. 
It attaches to the machine, but does not seem to redirect stdin and 
stdout to the machine. When I attempt to use login, the login prompt is 
never printed to the command line:


# machinectl login gitlab
Connected to machine gitlab. Press ^] three times within 1s to exit 
session.

<>^]^]
Connection to machine gitlab terminated.

And nothing of note is printed in the journal (relevant date is Oct 5, 
machine was last started on Sep 28):


# systemctl status systemd-nspawn@gitlab.service
● systemd-nspawn@gitlab.service - Container gitlab
   Loaded: loaded (/usr/lib/systemd/system/systemd-nspawn@.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/systemd-nspawn@gitlab.service.d
           └─override.conf
   Active: active (running) since Mon 2015-09-28 08:11:33 EDT; 1 weeks 0 days ago
     Docs: man:systemd-nspawn(1)
 Main PID: 18746 (systemd-nspawn)
   Status: "Container running."
   Memory: 1010.7M
      CPU: 37min 13.126s
   CGroup: /machine.slice/systemd-nspawn@gitlab.service
           ├─18746 /usr/bin/systemd-nspawn --quiet --keep-unit --boot --link-journal=try-guest --network-bridge=br0 --machine=gitlab
           ├─init.scope
           │ └─18753 /usr/lib/systemd/systemd
           └─system.slice
             ├─gitlab-sidekiq.service
             │ ├─18886 sh -c sidekiq -q post_receive -q mailer -q system_hook -q project_web_hook -q gitlab_shell -q common -q default -q archive_repo -e production -L /var/log/gitlab/sidekiq.log >> /var/log/gitlab/sidekiq.log 2>&1
             │ └─18904 sidekiq 3.3.0 gitlab [0 of 25 busy]
             ├─dbus.service
             │ └─18789 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
             ├─redis.service
             │ └─18797 /usr/bin/redis-server 127.0.0.1:6379
             ├─postfix.service
             │ ├─18881 /usr/lib/postfix/bin/master -w
             │ ├─18883 qmgr -l -t unix -u
             │ └─25044 pickup -l -t unix -u
             ├─systemd-journald.service
             │ └─18772 /usr/lib/systemd/systemd-journald
             ├─gitlab-unicorn.service
             │ ├─18887 unicorn_rails master -c /usr/share/webapps/gitlab/config/unicorn.rb -E production
             │ ├─25086 unicorn_rails worker[1] -c /usr/share/webapps/gitlab/config/unicorn.rb -E production
             │ ├─25184 unicorn_rails worker[2] -c /usr/share/webapps/gitlab/config/unicorn.rb -E production
             │ └─25355 unicorn_rails worker[0] -c /usr/share/webapps/gitlab/config/unicorn.rb -E production
             ├─systemd-logind.service
             │ └─18788 /usr/lib/systemd/systemd-logind
             ├─postgresql.service
             │ ├─18815 /usr/bin/postgres -D /var/lib/postgres/data
             │ ├─18854 postgres: checkpointer process
             │ ├─18855 postgres: writer process
             │ ├─18856 postgres: wal writer process
             │ ├─18857 postgres: autovacuum launcher process
             │ ├─18858 postgres: stats collector process
             │ ├─18945 postgres: gitlab_db gitlabhq_production [local] idle
             │ ├─21179 postgres: gitlab_db gitlabhq_production [local] idle
             │ ├─25090 postgres: gitlab_db gitlabhq_production [local] idle
             │ ├─25366 postgres: gitlab_db gitlabhq_production [local] idle
             │ └─25382 postgres: gitlab_db gitlabhq_production [local] idle
             └─console-getty.service
               └─19441 /sbin/agetty --noclear --keep-baud console 115200 38400 9600 vt220


Sep 28 08:11:35 zombie.narmos.org systemd-nspawn[18746]: zombie login: [  OK  ] Started PostgreSQL database server.
Sep 28 08:11:35 zombie.narmos.org systemd-nspawn[18746]: [  OK  ] Started GitLab Sidekiq Worker.
Sep 28 08:11:35 zombie.narmos.org systemd-nspawn[18746]: [  OK  ] Started GitLab Unicorn Server.
Sep 28 08:11:35 zombie.narmos.org systemd-nspawn[18746]: [  OK  ] Reached target Multi-User System.
Sep 28 08:11:36 zombie.narmos.org systemd-nspawn[18746]: Arch Linux 4.1.6-1-ARCH (console)
Sep 28 08:12:38 zombie.narmos.org systemd-nspawn[18746]: gitlab login:
Sep 28 08:12:38 zombie.narmos.org systemd-nspawn[18746]: Arch Linux 4.1.6-1-ARCH (console)
Sep 28 08:12:38 zombie.narmos.org systemd-nspawn[18746]: gitlab login: The Zombie, brought to you by Arch Linux 4.1.6-1-ARCH (pts/0)
Sep 28 08:12:55 zombie.narmos.org systemd-nspawn[18746]: zombie login:
Sep 28 08:12:55 zombie.narmos.org systemd-nspawn[18746]: The Zombie, brought to you by Arch Linux 4.1.6-1-ARCH (pts/0)


Note the login prompts as the last couple lines. This is interesting, 
too, as the guest is named gitlab, and the host is named zombie. Both 
login prompts somehow appear in the journal for the guest container.


Machinectl 

Re: [systemd-devel] bootctl: default mount point for the ESP partition.

2015-09-01 Thread Chris Bell

On 2015-09-01 14:23, Kay Sievers wrote:
On Tue, Sep 1, 2015 at 8:08 PM, Tomasz Torcz wrote:

On Tue, Sep 01, 2015 at 05:47:57PM +0100, Simon McVittie wrote:

On 01/09/15 17:21, Goffredo Baroncelli wrote:
AIUI, /boot/efi also makes it a bit easier to have the ESP remain
unmounted or read-only when not in active use, which is good for its own
robustness; a system crash corrupting an unmounted partition is less
likely than corrupting a mounted filesystem.


  That's why systemd's generator creates an automount unit (with timeout)
for /boot.


Right, the ESP at /boot is never mounted unless it is accessed.


So, properly, we shouldn't have separate boot and EFI partitions? I 
generally separate them so that I can have my boot partition on ext4 
(contains only kernels & initrds), but if I'm not mistaken, the EFI 
partition needs to be FAT32. Hence, two separate partitions. The other 
benefit I've seen is that it keeps other operating systems (in a 
multi-boot environment) from clobbering anything in /boot. Is this not 
the correct way to implement this? If not, how can the expected mount 
points be preserved while allowing for separate partitions? I prefer not 
to use FAT for anything if I can help it.


This is my current setup:

/ - root partition, btrfs
/boot - boot partition, ext4
/boot/efi - EFI partition, FAT32

(FWIW, I've been using rEFInd as my EFI bootloader, which looks for 
/boot/efi/EFI during installation.)
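
For reference, the matching fstab is roughly this (device names are 
placeholders for whatever UUID= entries you use):

/dev/sda3  /          btrfs  defaults                               0 0
/dev/sda2  /boot      ext4   defaults                               0 2
/dev/sda1  /boot/efi  vfat   noauto,x-systemd.automount,umask=0077  0 2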


Thanks,
Chris


Re: [systemd-devel] SHM parameters on nspawn containers

2015-08-28 Thread Chris Bell

On 2015-08-26 20:28, Florian Koch wrote:

Ohne way is to use an More recent Kernel, with 3.16+ the Kernel
defaults for These values where changed to  unlimeted


This worked. I booted the host with a 4.1 kernel, and the values were 
appropriately high. Thanks!


On 2015-08-27 18:18, Lennart Poettering wrote:

Would be happy to take a patch that automatically propagates these
values from the host into the container.


While it would add a limited-case convenience to patch in this 
functionality, I'm not sure how necessary/beneficial it will be overall, 
since newer kernels apparently make this a non-issue.
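
(A quick way to verify, using my container's path as an example; on a 
3.16+ host kernel a fresh container namespace reports a huge default 
instead of the old 32 MB limit:)

# systemd-nspawn -D /var/lib/machines/gitlab cat /proc/sys/kernel/shmmax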


Also, I ended up swapping out the CentOS7 guest with an Arch guest; my 
host systemd did not seem to like interfacing with the older (v208) 
systemd on CentOS. (And, 208 doesn't have networkd.) Everything's 
working swimmingly well, now.


Thanks!

--Chris


[systemd-devel] SHM parameters on nspawn containers

2015-08-26 Thread Chris Bell

Hello all,

I'm attempting to run GitLab (with postgresql) on a CentOS 7 container 
with systemd-nspawn. Postgres keeps failing, because it tries to 
allocate more shared memory than the container seems to allow. I cannot 
use sysctl to write the kernel.shmmax and kernel.shmall properties, 
since /sys isn't *real* (sysctl -w fails with 'read-only file system'). 
I have the values set correctly on the host machine, but they do not 
seem to propagate/be available to the container. Is there any way I can 
set (increase) the kernel.shmmax and kernel.shmall values in the 
container?


Host: systemd 224 on Arch LTS Kernel (3.14.51)
Guest: systemd 208 on CentOS 7 container

Thanks,

Chris


Re: [systemd-devel] Console screen blanks while long running service executes

2015-08-12 Thread Chris Bell

On 2015-08-12 20:19, Harry Goldschmitt wrote:

I just modified my grub kernel command line to add the consoleblank=0
parameter. That isn't the problem. First, consoleblank is the kernel
screensaver, and according to the documentation it kicks in after 15
minutes by default.

What I see is a few kernel driver start up messages. Then the console
screen blanks for about 10 seconds. Then multi-user boot completes.


The only other thing I can think of is modesetting kicking in and 
configuring your graphics adapter. You could try adding the 'nomodeset' 
kernel parameter to your boot, but keep in mind that you will lose 
kernel modesetting, which could potentially cause issues with your 
system. I'm not sure what else would cause the screen to 'blank' like 
you are describing at that point in the boot.


--Chris


Re: [systemd-devel] [systemd-commits] man/pam_systemd.xml

2014-10-24 Thread Chris Bell
On Fri, Oct 24, 2014 at 4:17 AM, Daniel Mack dan...@zonque.org wrote:
 Could you please send a patch that does that change?

Here you go!

From 517599692ed194156e8277e310270f4407d0d124 Mon Sep 17 00:00:00 2001
From: Chris Bell cwb...@mail.usf.edu
Date: Fri, 24 Oct 2014 05:22:36 -0400
Subject: [PATCH] sidestepped gender-neutral debate

---
 man/pam_systemd.xml | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/man/pam_systemd.xml b/man/pam_systemd.xml
index 40709f7..2090e73 100644
--- a/man/pam_systemd.xml
+++ b/man/pam_systemd.xml
@@ -98,10 +98,9 @@
 <citerefentry><refentrytitle>logind.conf</refentrytitle>
 <manvolnum>5</manvolnum></citerefentry>,
 all processes of the session are terminated. If
-the last concurrent session of a user ends, his
-user systemd instance will be terminated too,
-and so will the user's slice
-unit.</para></listitem>
+the last concurrent session of a user ends,
+that user's systemd instance and slice unit
+will both be terminated.</para></listitem>
 
 <listitem><para>If the last concurrent session
 of a user ends, the
@@ -201,8 +200,9 @@
 user-writable directory that is bound
 to the user login time on the
 machine. It is automatically created
-the first time a user logs in and
-removed on his final logout. If a user
+the first time a user logs in and is
+removed when the user logs out of
+the last active session. If a user
 logs in twice at the same time, both
 sessions will see the same
 <varname>$XDG_RUNTIME_DIR</varname>
--

Chris Bell

Ph.D. Student
University of South Florida
College of Engineering
Department of Computer Science and Engineering


Re: [systemd-devel] Stop Job for User Manager

2014-10-23 Thread Chris Bell
It looks like Fedora recently implemented changes in the user@.service
unit file to address this issue. They use:

ExecStop=/bin/kill -TERM ${MAINPID}
KillSignal=SIGCONT

as opposed to KillMode=mixed. I think this is why I haven't been able
to reproduce this on my Fedora 20 box recently. The discussion that
led to this change is at
http://thread.gmane.org/gmane.comp.sysutils.systemd.devel/16363

The patch in the Fedora git repo:
http://pkgs.fedoraproject.org/cgit/systemd.git/tree/0266-Temporary-work-around-for-slow-shutdown-due-to-unter.patch?h=f20

The changes are live in Fedora 20; my F19 server and Arch box don't
have these modifications. The patch was submitted mid-September.

Is this a 'valid' workaround? Is there a better way? Any comments from
those in the know?
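
For anyone who wants to try it without waiting for a package update, a
drop-in mirroring those directives would look like this (an untested
sketch on my part):

# /etc/systemd/system/user@.service.d/stop-workaround.conf
[Service]
ExecStop=/bin/kill -TERM ${MAINPID}
KillSignal=SIGCONT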

I apologize if I'm asking this in the wrong place; I've had a very
hard time getting assistance with this. If there's a better place to
have this discussion, please let me know.
Chris Bell

Ph.D. Student
University of South Florida
College of Engineering
Department of Computer Science and Engineering
NarMOS Research Team


On Wed, Oct 22, 2014 at 3:45 PM, Chris Bell cwb...@mail.usf.edu wrote:
 On Wed, Oct 22, 2014 at 10:49 AM, Chris Bell cwb...@mail.usf.edu wrote:
 Why 90 seconds? Can this duration be changed?

 Could I accomplish this with the `JobTimeoutSec' systemd parameter in
 the `user@.service' unit file? I can't seem to force my system to get
 stuck on a stop job at the moment to test it. Would changing this to
 e.g. 15 affect the hold timer in this case?

 On Wed, Oct 22, 2014 at 10:54 AM, Reindl Harald h.rei...@thelounge.net wrote:
 there are more rough edges in that context

 Next time it hangs, I'll have to examine this from the emergency
 console. Thanks for the info.

 Chris Bell

 Ph.D. Student
 University of South Florida
 College of Engineering
 Department of Computer Science and Engineering
 NarMOS Research Team


Re: [systemd-devel] [systemd-commits] man/pam_systemd.xml

2014-10-23 Thread Chris Bell
On Thu, Oct 23, 2014 at 8:53 PM, Lennart Poettering
lenn...@poettering.net wrote:
 Well, the sentence is complicated enough as it is.

On Thu, Oct 23, 2014 at 8:55 PM, Alex Gaynor alex.gay...@gmail.com wrote:
 Switching to her/his would be a definite improvement.

What if we reworded it to avoid the use of "them", "his", or "her"
altogether? Something like:

-all processes of the session are terminated. If
-the last concurrent session of a user ends, his
-user systemd instance will be terminated too,
-and so will the user's slice
-unit.</para></listitem>
+all processes of the session are terminated. If
+the last concurrent session of a user ends,
+that user's systemd instance and slice unit
+will both be terminated.</para></listitem>
...
-removed on his final logout. If a user
+ removed when the user logs out of
+ the last active session. If a user
or
+ removed when the user has logged
+ out of all active sessions. If a user

It's a bit more verbose, but I think it's more precise, and it
completely sidesteps the 'singular they' debate (which I'll refrain
from weighing in on).

Chris Bell

Ph.D. Student
University of South Florida
College of Engineering
Department of Computer Science and Engineering


[systemd-devel] Stop Job for User Manager

2014-10-22 Thread Chris Bell
Hi all,

I'm running into an annoying issue. I use systemd 216 on an Arch box,
and systemd 208 on a Fedora box; the issue exists on both. Logins are
handled through getty; there is no desktop manager involved.
Occasionally, on shutdown, I get a 90-second hold while waiting for a
'Stop Job for User 1000'. Logging out before shutdown reduces, but
does not eliminate, the occurrence.

The issue is documented in far more detail at
https://bugs.freedesktop.org/show_bug.cgi?id=70593 ; I'm the Chris in
the comments (starting with #15). I was led here for assistance with
the following questions related to this issue:

What, exactly, is this stop job? What's it waiting for? Why 90
seconds? Can this duration be changed? Where should I start bug hunting
here?

The bugzilla hasn't had much activity from devs recently; the last
official-looking comments deal with systemd-208, last December.

I'm not even sure this is actually a 'bug'. For all I know, it's
actually, legitimately, waiting for something. The problem is, I have
no idea - and don't know how to find out - what it thinks it's waiting
for. Any assistance to that end would also be greatly appreciated.

Thanks!

Chris Bell

Ph.D. Student
University of South Florida
College of Engineering
Department of Computer Science and Engineering
NarMOS Research Team