Re: [systemd-devel] Idea: adding WantsFileBefore= and WantsFileAfter=?

2018-03-23 Thread Ryan Gonzalez
I know everyone here is super busy, but I just wanted to bump this a 
sec before letting it die to make sure it didn't just get lost or 
something. (If someone agrees that it should be a feature, I'd happily 
try to work on it.)


On Tue, Mar 20, 2018 at 4:08 PM, Ryan Gonzalez  wrote:

Hello!!

Recently, I was trying to help someone on IRC move some sysvinit 
scripts over to systemd units, and one interesting issue came up. 
Many older daemons create their sockets at some unspecified point in 
their startup sequence, with no indication of when this occurs. In 
this case, the socket appeared a bit after the PID file, so systemd 
started units that required the socket before it was actually ready.


Using socket activation here would be great, but again, this is an 
older daemon, and AFAIK socket activation *always* requires the 
daemon to be modified to accept the listening sockets that systemd 
passes in as file descriptors.
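(For reference, the protocol behind socket activation is documented in sd_listen_fds(3): systemd passes the already-bound sockets to the daemon as inherited file descriptors starting at fd 3 and announces them through the LISTEN_PID and LISTEN_FDS environment variables. A minimal shell sketch of the check an activated daemon performs:)

```shell
# Receiving end of socket activation, per sd_listen_fds(3): systemd passes
# already-bound sockets as inherited fds starting at fd 3 (SD_LISTEN_FDS_START)
# and sets LISTEN_PID/LISTEN_FDS so the daemon can find them.
have_activated_socket() {
    # LISTEN_PID must name *this* process, and at least one fd must be passed.
    [ "${LISTEN_PID:-0}" = "$$" ] && [ "${LISTEN_FDS:-0}" -ge 1 ]
}
```

An unmodified daemon performs no such check, which is why retrofitting socket activation usually means patching the daemon.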


Here's my idea: what if there were WantsFileBefore= and 
WantsFileAfter= options, that could be used like this:


[Service]
Type=oneshot
ExecStart=/usr/bin/my-service
WantsFileBefore=this-file-should-be-existant-before-running-service
WantsFileAfter=systemd-should-wait-until-this-file-exists-before-continuing

In short, WantsFileBefore=file would be roughly equivalent to 
ExecStartPre=wait-for-file file, and WantsFileAfter=file would be 
roughly equivalent to ExecStartPost=wait-for-file file, except that 
there would be no need for extra shell commands.
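Today's workaround can be sketched as a tiny polling helper used from ExecStartPre=/ExecStartPost= (the helper name wait_for_file and its one-second polling interval are illustrative, not an existing systemd tool):

```shell
# Poll until a path exists, giving up after a timeout in seconds.
# This is the kind of helper WantsFileBefore=/WantsFileAfter= would replace.
wait_for_file() {
    path="$1"
    timeout="${2:-30}"   # seconds to wait before giving up
    elapsed=0
    until [ -e "$path" ]; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out waiting for $path" >&2
            return 1
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
}
```

With that installed as, say, /usr/local/bin/wait-for-file (path hypothetical), the WantsFileAfter= line above corresponds to ExecStartPost=/usr/local/bin/wait-for-file /run/mydaemon.sock.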


Thoughts?

--
Ryan (ライアン)
Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone 
else

https://refi64.com/




___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


[systemd-devel] leftover machines

2018-03-23 Thread Johannes Ernst
After running a bunch of systemd-nspawn containers, I am left with a few that 
seem to be empty, running nothing, but refuse to die or be killed after doing 
their useful work (so they did run correctly; the problem seems to occur on 
poweroff). What might be going on here?

This is:
* x86_64
* systemd 238.0-3 (from Arch)
* btrfs

% machinectl
MACHINE                CLASS     SERVICE        OS VERSION ADDRESSES
t-23-235311-webapptest container systemd-nspawn -  -   -
t-23-235806-webapptest container systemd-nspawn -  -   -
t-24-000300-webapptest container systemd-nspawn -  -   -
t-24-001033-webapptest container systemd-nspawn -  -   -
t-24-002455-webapptest container systemd-nspawn -  -   -

% machinectl status t-24-002455-webapptest
t-24-002455-webapptest(c844728734f243cfb99ffed0346bc9df)
   Since: Sat 2018-03-24 00:24:55 UTC; 20min ago
  Leader: 22947
 Service: systemd-nspawn; class container
Root: /build/x86_64/workarea/repository/dev/x86_64/uncompressed-images/.#machine.ubos_dev_x86_64-container_20180323-234823.tardir53dc09a3ef78c32c
   Iface: 13
Unit: machine-t\x2d24\x2d002455\x2dwebapptest.scope

Mar 24 00:24:55 ubos-pc systemd[1]: Started Container t-24-002455-webapptest.

The path to root does not exist (any more). All that’s left is the sibling lock 
file. btrfs does not list a relevant subvol.

Is Leader supposed to be a process ID? If so, such a process does not exist 
either.

% machinectl poweroff t-24-002455-webapptest
Could not kill machine: No such process

% machinectl terminate t-24-002455-webapptest
(says nothing, but does not do anything either)

% systemd-cgls
(does not show anything in the machine.slice, for none of the leftover machines)

Reboot makes them go away, but when I run the same scripts again, sooner or 
later I have leftover containers again.

There is nothing relevant in the journal I can see either.

This exact setup used to work just fine, and I don't believe I made any 
meaningful changes other than OS upgrades (including systemd).

Life cycle:
1. systemd-nspawn --boot --ephemeral --network-veth --machine=t-xxx-webapptest \
      --directory '/build/x86_64/workarea/repository/dev/x86_64/uncompressed-images/ubos_dev_x86_64-container_LATEST.tardir' \
      --bind '/tmp/da6G8OYszu:/UBOS-STAFF' --system-call-filter=set_tls
2. System boots, I ssh in, and do stuff, and when done:
3. machinectl poweroff 't-xxx-webapptest'

Any ideas?

Thanks,



Johannes.





Re: [systemd-devel] Filtering logs of a single execution of a (transient) service

2018-03-23 Thread Lennart Poettering
On Fr, 23.03.18 12:52, Filipe Brandenburger (filbran...@google.com) wrote:

> Hi!
> 
> So I'm testing a program repeatedly and using `systemd-run` to start a
> service with it, passing it a specific unit name.
> 
> When the test finishes and I bring down the service, I want to be able to
> collect the journald logs for that execution of the test alone.
> 
> Right now what I'm doing is naming the service differently every time,
> including a random number, so I can collect the logs for that service alone
> at the end. Such as:

What you are looking for is the "invocation ID". It's a random 128-bit
UUID unique to each invocation cycle of a service (i.e. it's generated
fresh whenever a unit enters the STARTING phase). It's included in the
service's environment block (as well as in the kernel keyring, and
attached to the service's cgroup). The journal also attaches it to
every log message of the service.

systemd-run currently doesn't show you the invocation ID, however;
I figure that's something we should really fix.
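In the meantime, the invocation ID of a running unit can be read back and used as a journal match (a command-transcript sketch; "myservice.service" is a placeholder, and the InvocationID property and _SYSTEMD_INVOCATION_ID journal field require systemd >= 232):

```shell
# Read the current invocation ID of the unit...
iid=$(systemctl show --value -p InvocationID myservice.service)
# ...and show only the journal entries tagged with it:
journalctl _SYSTEMD_INVOCATION_ID="$iid"
```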

> I guess what I'm looking for is a way to get systemd to inject a journal
> field to every message logged by my unit. Something like an environment
> variable perhaps? Or some other field I can pass to systemd-run using -p.
> Or something that systemd itself generates, that's unique for each
> execution of the service and that I can query somehow (perhaps `systemd
> show` while the service is up.) Is there any such thing?

You can also use "-p LogExtraFields=QUUX=miep" to attach arbitrary
additional journal fields to all log messages of the service you run
that way.
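As a sketch of that approach with systemd-run (the field name TESTRUN and the unit/binary names are placeholders; journal field names must be uppercase):

```shell
# Tag every journal entry of this transient unit with TESTRUN=run42...
systemd-run --unit=mytest.service -p LogExtraFields=TESTRUN=run42 mybin --myarg1
# ...then filter the journal on that field alone:
journalctl TESTRUN=run42
```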

Lennart

-- 
Lennart Poettering, Red Hat


[systemd-devel] Filtering logs of a single execution of a (transient) service

2018-03-23 Thread Filipe Brandenburger
Hi!

So I'm testing a program repeatedly and using `systemd-run` to start a
service with it, passing it a specific unit name.

When the test finishes and I bring down the service, I want to be able to
collect the journald logs for that execution of the test alone.

Right now what I'm doing is naming the service differently every time,
including a random number, so I can collect the logs for that service alone
at the end. Such as:

  # myservice_name=myservice-${RANDOM}.service
  # systemd-run --unit="${myservice_name}" --remain-after-exit mybin --myarg1 --myarg2 ...

And then collecting the logs using:

  # journalctl -u "${myservice_name}"

One disadvantage of this approach is that the units pile up as I keep
running tests...

  # systemctl status myservice-*.service

And that it's hard to find which one is the latest one, from an unrelated
session (this works only while active):

  # systemctl list-units --state running myservice-*.service

I would like to run these tests all under a single unit name,
myservice.service. I'm fine with not having more than one of them at the
same time (in fact, that's a feature.)

But I wonder how I can get the logs for a single execution...

The best I could come up with was using a cursor to get the logs for the
last execution:

  # journalctl -u myservice MESSAGE_ID=39f53479d3a045ac8e11786248231fbf --show-cursor
  -- Logs begin at Thu 2018-03-22 22:57:32 UTC, end at Fri 2018-03-23 19:17:01 UTC. --
  Mar 23 16:40:00 mymachine systemd[1]: Started mybin --myarg1 --myarg2
  Mar 23 16:45:00 mymachine systemd[1]: Started mybin --myarg1 --myarg2b
  Mar 23 16:50:00 mymachine systemd[1]: Started mybin --myarg1 --myarg2 --myarg3
  -- cursor: s=abcde12345...;i=123f45;b=12345abcd...;m=f123f123;t=123456...;x=...

And then use the cursor to query journald and get the logs from the last
execution:

  # journalctl -u myservice --after-cursor 's=abcde12345...;i=123f45;...'

That works to query the last execution of the service, but not a random
one...

I guess what I'm looking for is a way to get systemd to inject a journal
field to every message logged by my unit. Something like an environment
variable perhaps? Or some other field I can pass to systemd-run using -p.
Or something that systemd itself generates, that's unique for each
execution of the service and that I can query somehow (perhaps `systemd
show` while the service is up.) Is there any such thing?

Any other suggestions of how I should accomplish something like this?

Thanks!
Filipe




Re: [systemd-devel] question about Wants and unit start-up order

2018-03-23 Thread Mantas Mikulėnas
On Fri, Mar 23, 2018 at 5:41 PM, Brian J. Murrell wrote:

> If I have:
>
> Wants=system.slice nss-lookup.target named-setup-rndc.service
>
> in named-pkcs11.service, shouldn't that mean that named-pkcs11.service
> will be started up before nss-lookup.target is reached/started?
>

No, dependencies do not imply any specific ordering. (The only exception is
when a .target wants/requires another unit.)

In other words, you will need to additionally list the same units in
After=, or in certain cases in Before=. (For example, named is an nss-lookup
provider, so it should have "Before=nss-lookup.target", but
"After=named-setup-rndc.service".)

On another note, Wants=system.slice is *very* redundant – all system
services go into that slice anyway.

-- 
Mantas Mikulėnas


[systemd-devel] question about Wants and unit start-up order

2018-03-23 Thread Brian J. Murrell
If I have:

Wants=system.slice nss-lookup.target named-setup-rndc.service

in named-pkcs11.service, shouldn't that mean that named-pkcs11.service
will be started up before nss-lookup.target is reached/started?

That doesn't seem to be the case on my system:

Mar 23 09:44:03 server.interlinx.bc.ca systemd[1]: Reached target User and Group Name Lookups.
Mar 23 09:44:03 server.interlinx.bc.ca systemd[1]: Starting User and Group Name Lookups.
Mar 23 09:47:35 server.interlinx.bc.ca systemd[1]: Starting Berkeley Internet Name Domain (DNS) with native PKCS#11...
Mar 23 09:47:42 server.interlinx.bc.ca systemd[1]: Started Berkeley Internet Name Domain (DNS) with native PKCS#11.

Am I misunderstanding what systemd is trying to tell me above?  Is it
not saying that the nss-lookup.target was reached (and therefore
services that depend on it can go ahead) before the DNS service was
actually even started?

If I am misunderstanding, is there a better way to understand the order
that units were started in than looking at the journal?

Cheers,
b.

