[systemd-devel] Restarting a socket starts the associated service

2014-03-24 Thread Tim Cuthbertson
Hi all,

I've got a socket-activated service, consisting of `myapp-main.service`
and `myapp-main.socket`. Both are part of `myapp.target`:

$ cat myapp.target
[Install]
WantedBy=multi-user.target

$ cat myapp-main.service
[Unit]
After=local-fs.target
After=network.target
Requires=myapp-main.socket

[Service]
ExecStart= .. uninteresting .. 

$ cat user/myapp-main.socket
[Socket]
ListenStream=9776

[Unit]
PartOf=myapp.target

[Install]
WantedBy=myapp.target

--

When I install the units, I run:

$ systemctl reenable myapp-main.service myapp-main.socket myapp.target
$ systemctl daemon-reload
$ systemctl reload-or-try-restart myapp-main.service myapp-main.socket myapp.target
$ systemctl start myapp.target

This ensures that the required units are started, and that any of them
which happened to be running already is restarted.

On the first install, myapp-main.socket becomes active but
myapp-main.service does not (since it's socket-activated and nothing has
connected to it yet). However, after the second install the service is
running.
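
For concreteness, here's roughly how I'm checking the state after each
install (`is-active` prints one state per line; this is what the first
install looks like, matching the above):

$ systemctl is-active myapp-main.socket myapp-main.service
active
inactive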

It seems that if you run `systemctl restart` on a socket that is
currently listening, it starts the associated service even if the service
was not previously running. Is this intentional? Is there some different
approach I should use to ensure that:

a) all running units in my list of units that I've installed are
reloaded / restarted
b) no services are unnecessarily started
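
The closest workaround I can think of is to check each unit before
restarting it, something like (untested):

$ for u in myapp-main.service myapp-main.socket myapp.target; do systemctl is-active --quiet "$u" && systemctl try-restart "$u"; done

...though if restarting an active socket is itself what starts the
service, that still doesn't help for the socket case.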

Cheers,
 - Tim.


Re: [systemd-devel] Restarting a socket starts the associated service

2014-03-24 Thread Tim Cuthbertson
On Tue, Mar 25, 2014 at 3:05 PM, Tim Cuthbertson t...@gfxmonk.net wrote:
> [...]

I forgot to mention: Fedora 20, with systemd 208.


[systemd-devel] systemd-logind retrying constantly when operation is denied by selinux

2014-01-02 Thread Tim Cuthbertson
I recently noticed loud and sustained disk noise, and iotop reported that
jbd2 was going full throttle on /dev/sda1 (my root partition). I ran
`journalctl -f` to see if anything obvious was wrong, and was greeted with
the following messages:

Jan 03 10:23:04 meep systemd[1]: SELinux policy denies access.
Jan 03 10:23:04 meep systemd-logind[447]: Failed to query ActiveState:
Access denied

These two messages were appearing constantly - more than 200x per second
each. I quickly ran `setenforce 0`, and everything went quiet.

I think this is due to something I did yesterday - I used `audit2allow` to
allow system-wide systemd unit files to live in a home directory[0]. This
rule added:

allow systemd_logind_t user_home_t:service start;

When I run audit2allow again now (after the errors), it wants to add:

allow systemd_logind_t user_home_t:service { status stop };

I have now changed this to:

allow systemd_logind_t user_home_t:service *;

which seems to compile, and hopefully won't cause the problem to recur
whenever systemd performs a new operation on this service. But I thought
I'd report my observations here anyway, since it seems pretty drastic for
systemd-logind to be retrying this failed operation 200+ times a second
when the error is "access denied" (something that is unlikely to be fixed
in the next few milliseconds).

Of course, I don't know if this failure case is distinct from other errors
that *do* benefit from immediate-and-furious-retry, so I'll leave it to the
developers to determine whether something better can / should be done here.
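
For reference, here's roughly how I compiled and loaded the hand-edited
rule (the module name is just what I picked; treat it as a sketch):

$ checkmodule -M -m -o local-logind.mod local-logind.te
$ semodule_package -o local-logind.pp -m local-logind.mod
$ semodule -i local-logind.pp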

Cheers,
 - Tim.

[0] I have a modified user@.service unit file managed in my home partition,
because I want to run `systemd --user` via a wrapper that picks up
additional user config. I symlink it under /etc/systemd/system/ rather than
keeping the real file there, because / is wiped on OS upgrades, while /home
is a separate partition that I keep across upgrades.
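
Concretely, the symlink is along these lines (the home-directory path is
made up for illustration):

$ ln -s /home/tim/config/systemd/user@.service /etc/systemd/system/user@.service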


[systemd-devel] Stop all uninstalled units

2013-07-17 Thread Tim Cuthbertson
Hi Folks,

I've posted a serverfault question [0], but it got no answers so I thought
I'd bring it to the mailing list:

I have some systemd units installed and running. Let's say I manually
uninstall foo.service by

 - removing its .service (and .socket) files
 - removing all symlinks (e.g. from default.target.wants/)

I can run systemctl daemon-reload, and then I see:

# systemctl status foo.service
foo.service
   Loaded: error (Reason: No such file or directory)
   Active: active (running) since Mon 2013-07-08 13:50:29 EST; 48s ago
 Main PID: 1642 (node)

So systemd knows that it's not installed (i.e. not backed by a unit file),
and that it is still running. Is there some command I can use to stop all
running services which no longer have a unit file?

I do not want to have to somehow know what I've uninstalled, or for the
command to work only in some circumstances - I want something that I can
run at any point, with no additional knowledge, that will stop all units
not backed by a unit file.

I've currently got a hacky script (sketched below) that:
 - Runs `list-units` to get all unit names
 - Then runs `show -p ActiveState -p UnitFileState $unit` on each
 - Then runs `systemctl stop` on each unit with a UnitFileState of ""
   (the empty string) and an ActiveState of anything but "failed".
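
Roughly, it amounts to this (a reconstruction, not the exact script):

#!/bin/sh
# stop any unit that has no backing unit file and hasn't already failed
systemctl --no-legend list-units | awk '{ print $1 }' | \
while read -r unit; do
    ufs=$(systemctl show -p UnitFileState "$unit" | cut -d= -f2)
    act=$(systemctl show -p ActiveState "$unit" | cut -d= -f2)
    if [ -z "$ufs" ] && [ "$act" != "failed" ]; then
        systemctl stop "$unit"
    fi
done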

This is almost certainly missing some edge case, as I couldn't really find
any documentation of these properties; it just seemed to be what occurred
with the examples I encountered.

I'm hoping there's a better way, can anyone point me in the right direction?

[0]:
http://serverfault.com/questions/521504/systemd-stop-all-uninstalled-units

Thanks,
- Tim Cuthbertson.

(apologies if this email arrives on the list twice, I think the first time
didn't make it through because I wasn't subscribed to the list)


Re: [systemd-devel] Stop all uninstalled units

2013-07-17 Thread Tim Cuthbertson
On Thu, Jul 18, 2013 at 9:47 AM, Lennart Poettering
lenn...@poettering.net wrote:

> On Wed, 17.07.13 22:14, Tim Cuthbertson (t...@gfxmonk.net) wrote:
>
> > [...]

> So, the correct way to handle this is to make sure the packages in
> question contain the right scriptlets that terminate all units they
> include before deinstallation. Of course, there'll always be broken
> packages like this, I fear, hence I can totally see your usecase.
>
> There's currently no nice way to handle this, but what you can do is this:
>
> systemctl --all --type=not-found --type=error
>
> This will list all units where the load state is either 'error' or
> 'not-found' (note that 'not-found' is available only in very recent
> versions, and in older systemd versions was just a special case of
> 'error'; the command line above works for all versions). The --type=
> switch is used to filter unit types, but can actually also be used to
> filter for the load state.
>
> Then filter out the first column:
>
> systemctl --no-legend --all --type=not-found --type=error | awk '{ print $1 }'
>
> This will give you a list of units which got referenced or started but
> have no unit file installed. Then use this to stop them:
>
> systemctl stop `systemctl --no-legend --all --type=not-found --type=error | awk '{ print $1 }'`
>
> And there you go.
>
> Lennart
>
> --
> Lennart Poettering - Red Hat, Inc.


Thanks, Lennart (and Colin), that looks like a much better approach,
although --type=not-found is rejected by old versions:

$ systemctl --all --type=not-found --type=error
Unknown unit type or load state 'not-found'.
Use -t help to see a list of allowed values.



$ systemctl --version
systemd 204
+PAM +LIBWRAP +AUDIT +SELINUX +IMA +SYSVINIT +LIBCRYPTSETUP +GCRYPT +ACL +XZ

But I can probably try both, and fall back to just --type=error if
not-found is rejected.
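
Something like this sketch (it relies on older versions failing with a
nonzero exit status, which seems to be the case above):

units=$(systemctl --no-legend --all --type=not-found --type=error 2>/dev/null) \
    || units=$(systemctl --no-legend --all --type=error)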

I am not too concerned about systemd not knowing how best to kill the units
- they don't contain any special stop actions, so it would be using the
default action to kill the service anyway. But point taken, in the general
case this is probably not advisable.

For background on why this might happen (if you or Colin are still
curious), I'm installing systemd units not as part of any package -
basically, the user has some application-specific config that they can
change at any moment. They run:

$ foo install my-config-file

And (regardless of any previous state), the units generated by that config
file (and no units defined by previous invocations) should be installed /
running. I put an X-Generated-By-MyApp=true line in each of the generated
unit files, so I can scan the /etc/systemd/system folder for previously
installed units. This should mean that I can stop them immediately, but I
wanted a failsafe way of doing it just in case things get left in an
unknown state (because of bugs or unanticipated failure modes in my code).
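
Scanning for the marker is simple enough - something along these lines
(the exact invocation here is illustrative):

$ grep -l '^X-Generated-By-MyApp=true' /etc/systemd/system/*.service /etc/systemd/system/*.socket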

I also had the notion of being able to affect one system from another
that has access to its filesystem, e.g. `foo install config-file
--root=/srv/container1/etc/systemd/system`, or by running the unit
generation once and rsync'ing the installed units to multiple identical
VMs, and then running a routine task in the container to fix up the state
to reflect the installed units. But this is probably not worth the
weirdness ;)

Cheers, and sorry for the double-post