[systemd-devel] Antw: [EXT] Immutable Images: Single Data Partition

2023-03-21 Thread Ulrich Windl
>>> Adrian Vovk  schrieb am 21.02.2023 um 22:50 in 
>>> Nachricht
:

[...]
> Part of the A/B approach involves two classes of user data partitions:
> ones that are encrypted (/var, /etc) and ones that are not (/home).

I don't know the OS, nor the concepts behind it, but it surprises me that the data 
in /var seems to be treated as more sensitive than the users' home directories.

[...]

Ulrich





[systemd-devel] Antw: [EXT] Launch a mount unit from udev rule via ENV{SYSTEMD_WANTS}

2023-02-20 Thread Ulrich Windl
>>> Jacopo  schrieb am 15.02.2023 um 17:59 in Nachricht
:
> I'be been having issues lately trying to automatically remount an external
> USB drive that is mounted at boot from an fstab entry:
> LABEL=data-ssd  /opt/data-ssd  ext4  defaults,nofail,users  0  2
> 
> I'm using systemd 244 and after some investigation I learned about the
> possibility to launch a systemd unit from a udev rule via
> ENV{SYSTEMD_WANTS} (see here
> https://github.com/systemd/systemd/issues/22589#issuecomment-1047940003 and
> https://github.com/systemd/systemd/pull/11373#issuecomment-594014841)
> 
> In particular the second suggestion is exactly what I need, but the only
> way I could make it work was launching a dummy.service that "Wants" the
> corresponding mount unit:
> 
> udev rule:
> ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd*", SUBSYSTEMS=="usb",

And is your SSD really a "sd*" device?

> ENV{DEVTYPE}=="partition", IMPORT{builtin}="blkid",
> ENV{ID_FS_TYPE}=="ext4", ENV{ID_FS_LABEL_ENC}=="data-ssd",
> ENV{SYSTEMD_WANTS}+="dummy.service"
> 
> dummy.service:
> [Unit]
> Description=Dummy service
> Wants=opt-data\x2dssd.mount
> 
> [Service]
> Type=simple
> ExecStart=/bin/echo "I'm dummy"
> 
> [Install]
> WantedBy=local-fs.target
> 
> I also tested using ACTION!="remove" in the udev rule, but same result.
> 
> As I mentioned in the github issue before being redirected here, if I query
> with udevadm, ENV{SYSTEMD_WANTS} is printed out only with in the
> non-working case (.mount unit launched directly from the udev rule):
> 
> # udevadm info --query=property --path=/sys/class/block/sda1
> [...]
> SYSTEMD_WANTS=opt-data\x2dssd.mount
> [...]
> 
> whereas it disappears if launching the dummy.service first.
> 
> Thanks
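
For reference, the escaped unit name for a mount point can be derived with
systemd-escape; a quick sketch using the path from the fstab line above:

$ systemd-escape -p --suffix=mount /opt/data-ssd
opt-data\x2dssd.mount

A mismatch between the name referenced in SYSTEMD_WANTS and the name systemd
derives this way can be one reason such rules appear to do nothing.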






[systemd-devel] Antw: [EXT] Re: Extend systemd-resolved service to override DNS response

2023-02-20 Thread Ulrich Windl
>>> Barry Scott  schrieb am 15.02.2023 um 15:29 in
Nachricht <9ce7d348-327c-4c22-9a54-c57fef4df...@barrys-emacs.org>:

> 
>> On 15 Feb 2023, at 10:31, Aditya Sharma  wrote:
>> 
>> Hi Kevin,
>> 
>> When the TTLs expire, those records in the cache become 'stale', and are 
> normally purged. Your request is to have an option in systemd-resolved to 
> *not* purge those records, but to continue serving them in case it is unable 
> to communicate with the configured recursive resolver(s).
>> 
>> Sorry for the ambiguity.
>> What I meant was to keep serving the record after the resolvers are not 
> operational or during some outage.
>> We were thinking of an approach where we keep on serving the last known good 
> FQDNs even after the TTL has expired (only when we are unable to communicate 
> with the resolvers). For that, we need to intercept the DNS calls and 
> maintain some kind of local cache. So, wanted to understand how we can extend 
> systemd-resolved service to serve the purpose.
> 
> I would be worried that breaking the TTL caching rules will create more 
> problems then it solves.
> Isn't the underlying issue that you have unreliable DNS servers that are 
> important to your application?

I also thought it might be an "x y problem"; so what problem are you actually 
trying to fix?

> 
> Barry
> 
> 
> 
>> 
>> Thanks
>> Aditya
>> 
>> 
>> On Tue, 14 Feb 2023 at 16:46, Kevin P. Fleming 
>  > 
> wrote:
>>> On Tue, Feb 14, 2023, at 04:04, Aditya Sharma wrote:
 Hi Kevin,
 
 If what you mean is that you want to serve 'stale' records from a cache 
 when 
> their TTLs have expired and the authoritative servers which provided them are 
> not reachable, that's something that a number of existing recursive resolvers 
> are able to do and it could be logical for systemd-resolved to offer it too.
 We are looking to prepare a solution similar to this, to serve back 
 records 
> for FQDNs in case of timeout from the DNS server.
 We want to understand how we can extend systemd-resolved to override 
> response from DNS server in case of timeouts/failures.
>>> 
>>> Again, you need to be very specific in your request.
>>> 
>>> systemd-resolved communicates with one or more recursive resolvers (what 
>>> you 
> are calling "DNS server", but that term is ambiguous). If those resolvers are 
> not operational, systemd-resolved will continue serving records from its 
> cache (if the cache is enabled), until their TTLs expire.
>>> 
>>> When the TTLs expire, those records in the cache become 'stale', and are 
> normally purged. Your request is to have an option in systemd-resolved to 
> *not* purge those records, but to continue serving them in case it is unable 
> to communicate with the configured recursive resolver(s).
>>> 
>>> In your original message you referred to a 'negative response' from the 
>>> "DNS 
> server", but that's a completely different situation.
>>> 
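
For comparison, recursive resolvers that already implement this behaviour expose
it as a cache option; a sketch of the Unbound equivalent (option names from
memory, see unbound.conf(5)):

server:
    serve-expired: yes
    serve-expired-ttl: 86400    # keep serving records for up to a day past their TTL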






[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Using IPAddressAllow/IPAddressDeny on ‑‑user scopes

2022-12-17 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 14.12.2022 um 18:34
in
Nachricht :
> On Di, 13.12.22 22:34, Farblos (akfkqu.9df...@vodafonemail.de) wrote:
> 
>> I can imagine that the latter scenario is not supported or requires
>> additional configuration (which?), but I have not found any hints on that,
>> neither in systemd.resource‑control(5) nor in [1.] or [8.] from that man
>> page.
> 
> The relevant mechanisms are implemented via eBPF, which the kernel
> restricts to privileged processes, which means ‑‑user systemd will
> have a hard time.
> 
> There were discussions and some work done to allow signed eBPF
> programs which the kernel would then allow even from unpriv userspace,
> but this didn't materialize so far. I think it would be great solution
> for us.
> 
> Most of our sandboxing settings degrade gracefully if the backing
> kernel concept is not available in the kernel, or not accessible due
> to privs. We generally value portability of service files more than
> the sandboxing settings, currently.

BUT: Shouldn't there be an error message for the --user case?

> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin
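
For completeness, a sketch of what the directives look like on a privileged
(system) unit, where the required eBPF programs can be installed; the unit name
and addresses are purely illustrative:

# /etc/systemd/system/myservice.service.d/ipfilter.conf
[Service]
IPAddressDeny=any
IPAddressAllow=localhost 192.0.2.0/24

As the quoted reply explains, the same setting under a --user instance is
currently not enforced, since unprivileged processes cannot install the eBPF
filters.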





[systemd-devel] Antw: Re: Re: Antw: [EXT] [systemd-devel] starting networking from within single user mode?

2022-11-14 Thread Ulrich Windl
>>> Mantas Mikulenas  schrieb am 14.11.2022 um 09:17 in
Nachricht
:
> On Mon, Nov 14, 2022 at 9:00 AM Ulrich Windl <
> ulrich.wi...@rz.uni-regensburg.de> wrote:
> 
>> >>> Mantas Mikulenas  schrieb am 11.11.2022 um 15:49 in
>> Nachricht
>> :
>> > On Fri, Nov 11, 2022 at 4:19 PM Brian Reichert 
>> wrote:
>> >
>> >> On Fri, Nov 11, 2022 at 08:02:00AM +0100, Ulrich Windl wrote:
>> >> > >>> Brian Reichert  schrieb am 10.11.2022 um
>> >> 23:04 in
>> >> > Nachricht <20221110220426.ga17...@numachi.com>:
>> >> > > I've managed to hose a SLES12 SP5 host; it starts to boot, then
>> hangs.
>> >> >
>> >> > And what did you do to mess it up? And what do the boot messages say?
>> >>
>> >> A good question, and not specific to systemd, so I don't want to
>> >> pollute the list archives too much on this matter.
>> >>
>> >> 'All' I did was remove many RPMs that I arbitrarily deemed
>> >> unnecessary.
>> >>
>> >> I came up with a heavily trimmed-down list of SLES RPM for my SLES12
>> >> Sp5 environment.
>> >>
>> >> I successfully installed a server using just that trimmed-down list;
>> >> yay me!
>> >>
>> >> I then explored 'upgrading' a running (slight older) SP5 box, using
>> >> this trimmed-down list.  A purposeful side effect was to uninstall
>> >> RPMs not in that trimmed-down list.
>> >>
>> >> This latter box begins to boot, and gets at least as far as loading
>> >> the initrd image, before hanging.
>> >>
>> >
>> > Boot with "systemd.debug-shell" and use tty9 to investigate from the
>> inside.
>>
>> Wow! never heard of that option. Is that a kind of target, or what is the
>> mechanism?
>> Which of the 196 (man -k systemd | wc -l) systemd-related manual pages
>> would describe it? ;-)
>>
> 
> The more I read your smartass sarcastic comments here, the less I feel like
> staying on this list and helping *other* people with finding stuff in those
> 196 systemd-related manual pages. But I suppose that's what you want to
> achieve, so that you can snark even more about how "systemd is so complex
> that nobody's bothering to reply to the list anymore"?
> 
> For those who have *actually* never heard of that option, it is documented
> in systemd-debug-generator(8).

Thank you for the nice words (which people like you seem to like). After Michael's 
message I was able to locate the documentation.
Please save your nice words for other people seeking help here.

Regards,
Ulrich




[systemd-devel] Antw: Re: [systemd‑devel] Antw: Re: Antw: [EXT] [systemd-devel] starting networking from within single user mode?

2022-11-14 Thread Ulrich Windl
>>> Michael Chapman  schrieb am 14.11.2022 um 09:03 in
Nachricht <2888d487-984a-b071-fa79-b18f662ef...@very.puzzling.org>:
> On Mon, 14 Nov 2022, Ulrich Windl wrote:
> [...]
>> > Boot with "systemd.debug‑shell" and use tty9 to investigate from the
inside.
>> 
>> Wow! never heard of that option. Is that a kind of target, or what is the 
> mechanism?
>> Which of the 196 (man ‑k systemd | wc ‑l) systemd‑related manual pages
would 
> describe it? ;‑)
> 
> All of systemd's kernel command‑line options are documented under "KERNEL 
> COMMAND LINE" in the systemd(1) man page.

OK, that explains it: In my version of the manual page (systemd-249.12) that
parameter does not exist.
No surprise that I've never heard of it.

However, when reading systemd.directives(7), it says the parameter is explained
in kernel-command-line(7). Looking there, it refers to systemd-debug-generator(8),
which in turn refers to debug-shell.service.
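
For reference, a sketch of the two usual ways to get that root shell on tty9
(both go through systemd-debug-generator / debug-shell.service):

# one-off, added to the kernel command line at the boot loader:
systemd.debug_shell    # spelling as in kernel-command-line(7); the hyphenated form quoted above may work as well
# or persistently, on a running system:
systemctl enable --now debug-shell.service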

> 
> When you're in doubt where something might be documented, look at 
> systemd.directives(7). You'll find them there too.

OK! Thanks!

Regards,
Ulrich






[systemd-devel] Antw: Re: Antw: [EXT] [systemd-devel] starting networking from within single user mode?

2022-11-13 Thread Ulrich Windl
>>> Mantas Mikulenas  schrieb am 11.11.2022 um 15:49 in
Nachricht
:
> On Fri, Nov 11, 2022 at 4:19 PM Brian Reichert  wrote:
> 
>> On Fri, Nov 11, 2022 at 08:02:00AM +0100, Ulrich Windl wrote:
>> > >>> Brian Reichert  schrieb am 10.11.2022 um
>> 23:04 in
>> > Nachricht <20221110220426.ga17...@numachi.com>:
>> > > I've managed to hose a SLES12 SP5 host; it starts to boot, then hangs.
>> >
>> > And what did you do to mess it up? And what do the boot messages say?
>>
>> A good question, and not specific to systemd, so I don't want to
>> pollute the list archives too much on this matter.
>>
>> 'All' I did was remove many RPMs that I arbitrarily deemed
>> unnecessary.
>>
>> I came up with a heavily trimmed-down list of SLES RPM for my SLES12
>> Sp5 environment.
>>
>> I successfully installed a server using just that trimmed-down list;
>> yay me!
>>
>> I then explored 'upgrading' a running (slight older) SP5 box, using
>> this trimmed-down list.  A purposeful side effect was to uninstall
>> RPMs not in that trimmed-down list.
>>
>> This latter box begins to boot, and gets at least as far as loading
>> the initrd image, before hanging.
>>
> 
> Boot with "systemd.debug-shell" and use tty9 to investigate from the inside.

Wow! never heard of that option. Is that a kind of target, or what is the 
mechanism?
Which of the 196 (man -k systemd | wc -l) systemd-related manual pages would 
describe it? ;-)

Regards,
Ulrich






Re: [systemd-devel] Antw: [EXT] [systemd-devel] starting networking from within single user mode?

2022-11-13 Thread Ulrich Windl
>>> Brian Reichert  schrieb am 11.11.2022 um 15:19 in
Nachricht <2022141913.gc17...@numachi.com>:
> On Fri, Nov 11, 2022 at 08:02:00AM +0100, Ulrich Windl wrote:
>> >>> Brian Reichert  schrieb am 10.11.2022 um 23:04
in
>> Nachricht <20221110220426.ga17...@numachi.com>:
>> > I've managed to hose a SLES12 SP5 host; it starts to boot, then hangs.
>> 
>> And what did you do to mess it up? And what do the boot messages say?
> 
> A good question, and not specific to systemd, so I don't want to
> pollute the list archives too much on this matter.
> 
> 'All' I did was remove many RPMs that I arbitrarily deemed
> unnecessary.

Unless you used options to ignore dependencies, that would mean that either the
dependencies in the RPM packages were not correct, or some uninstall scripts were
not. Both would be bugs.
However: if you used SUSE's standard installation with Btrfs, you should have
been able to boot a recent snapshot.
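
A sketch of that rollback path, assuming the default SUSE Btrfs/snapper setup:

# reboot and pick an older entry from the GRUB snapshots menu, then:
snapper list        # confirm which snapshot you booted
snapper rollback    # make the currently booted snapshot the new default
reboot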

> 
> I came up with a heavily trimmed‑down list of SLES RPM for my SLES12
> Sp5 environment.
> 
> I successfully installed a server using just that trimmed‑down list;
> yay me!

So that would mean incorrect uninstall scripts most likely.

> 
> I then explored 'upgrading' a running (slight older) SP5 box, using
> this trimmed‑down list.  A purposeful side effect was to uninstall
> RPMs not in that trimmed‑down list.
> 
> This latter box begins to boot, and gets at least as far as loading
> the initrd image, before hanging.

Well, what does "hanging" mean exactly? systemd waiting for something?
I have one Fedora-based installation that hangs on some LVM monitor
indefinitely (while the VG is fine) for unknown reasons. I just had to add a
kernel boot option to block that monitor.


> 
> I'm pretty certain there's something mismanaged with replacing the
> kernel, but not properly managing all of the related boot files
> (kdump? device probing? etc.)

If the initrd loaded, it's most likely an incorrect initrd. Did you install
current patches?
There were some MD, LVM, dracut-related patches recently...

> 
> Anyway, that's my mess.  Not at all related to systemd, near as I
> can tell.  I just have to methodically narrow down on where my
> process jumps the tracks.

Yes, but sometimes knowing what you did or what the messages are helps to
diagnose and eventually help ;-)

Regards,
Ulrich

> 
> ‑‑ 
> Brian Reichert
> BSD admin/developer at large  





[systemd-devel] Antw: [EXT] [systemd‑devel] starting networking from within single user mode?

2022-11-10 Thread Ulrich Windl
>>> Brian Reichert  schrieb am 10.11.2022 um 23:04 in
Nachricht <20221110220426.ga17...@numachi.com>:
> I've managed to hose a SLES12 SP5 host; it starts to boot, then hangs.

And what did you do to mess it up? And what do the boot messages say?

> 
> If I get it into single‑user mode (getting into the grub menu, and adding
> init=/bin/bash) I can at least review the file system.
> 
> What I want to do is get networking running, so that I can at least gather
> logs, etc.
> 
> When I try to start networking with 'systemctl', I see this error:
> 
> systemd "failed to connect to bus; No such file or directory"
> 
> What can I do to minimally bring up the networking service? I don't even
> have any network devices at this point...
> 
> ‑‑ 
> Brian Reichert
> BSD admin/developer at large  
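
With init=/bin/bash there is no systemd (and hence no D-Bus) running at all,
which explains the "failed to connect to bus" error. A sketch of an alternative
that keeps systemd as PID 1 so systemctl works (the service name depends on the
setup):

# on the kernel command line, instead of init=/bin/bash:
systemd.unit=emergency.target
# then, after entering the root password on the console:
systemctl start network.service    # or wicked.service / systemd-networkd.service, whichever the system uses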





[systemd-devel] Antw: [EXT] Re: Support for unmerged-usr systems will be REMOVED in the second half of 2023

2022-11-06 Thread Ulrich Windl
>>> Luca Boccassi  schrieb am 05.11.2022 um 12:32 in
Nachricht
:
> On Sat, 5 Nov 2022, 10:53 TJ,  wrote:
> 
>> On 05/11/2022 10:36, Mantas Mikulėnas wrote:
>> > On Sat, Nov 5, 2022 at 12:06 PM TJ  wrote:
>> >
>> >> Just seen this announcement in the v252 changelog:
>> >>
>> >> "We intend to remove support for split-usr (/usr mounted separately
>> >> during boot) ..."
>> >>
>> >> How does this align with support for separate /usr/ with dm-verity ?
>> >>
>> >> For example, this will affect nspawn. See "man 1 systemd-nspawn" and
>> >> "--root-hash=" where in respect of /usr/ it says:
>> >>
>> >> "Note that this configures the root hash for the root file system. Disk
>> >> images may also contain separate file systems for the /usr/ hierarchy,
>> >> which may be Verity protected as well. The root hash for this
protection
>> >> may be configured via the "user.verity.usrhash" extended file attribute
>> >> or via a .usrhash file adjacent to the disk image, following the same
>> >> format and logic as for the root hash for the root file system
described
>> >> here."
>> >>
>> >
>> > /usr can remain on a separate partition as long as it's mounted *by the
>> > initrd* (the same way initrd currently mounts your rootfs), so that by
>> the
>> > time systemd starts it already has the full filesystem.
>>
>> How does this work when systemd is used inside the initrd, as
>> "recommended" / discussed at, for example "Using systemd inside an initrd"
>> :
>>
>> https://systemd.io/INITRD_INTERFACE/ 
>>
>> > What's finally being removed is support for having the rootfs itself
>> mount
>> > /usr halfway through, which requires many things that normally are on
>> > /usr/lib to be split between it and /lib instead (such as on Debian).
>> >
>> > Using the initrd to mount /usr isn't new.
>> > <
>> 
>
> https://web.archive.org/web/20150906203654if_/https://www.gentoo.org/support/news-items/2013-09-27-initramfs-required.html
>> >
>> >
>>
>> Does it also affect the command-line options "mount.usr=,
>> mount.usrfstype=, mount.usrflags=, usrhash=, systemd.verity_usr_data=,
>> systemd.verity_usr_hash=, systemd.verity_usr_options=" as per "man 7
>> kernel-command-line" ?
>>
> 
> No, that is unrelated. This is about the ancient notion (that no initrd
> tools support anymore) that you can boot userspace with /bin /lib /sbin and
> no /usr, with the latter being set up late at boot. This is what is no
> longer going to be supported.


...by systemd developers.

And so the Linux OS world must change...

> 
>>





[systemd-devel] Antw: [EXT] Re: Support for unmerged-usr systems will be REMOVED in the second half of 2023

2022-11-06 Thread Ulrich Windl
>>> TJ  schrieb am 05.11.2022 um 10:59 in Nachricht
:
> Just seen this announcement in the v252 changelog:
> 
> "We intend to remove support for split-usr (/usr mounted separately 
> during boot) ..."

Actually I think this is because systemd is anything but a small boot
environment (hence wanting the "big /usr").

> 
> How does this align with support for separate /usr/ with dm-verity ?
> 
> For example, this will affect nspawn. See "man 1 systemd-nspawn" and 
> "--root-hash=" where in respect of /usr/ it says:
> 
> "Note that this configures the root hash for the root file system. Disk 
> images may also contain separate file systems for the /usr/ hierarchy, 
> which may be Verity protected as well. The root hash for this protection 
> may be configured via the "user.verity.usrhash" extended file attribute 
> or via a .usrhash file adjacent to the disk image, following the same 
> format and logic as for the root hash for the root file system described 
> here."






[systemd-devel] Antw: Re: Antw: [EXT] Re: SOLVED: daemon-reload does not pick up changes to /etc/systemd/system during boot

2022-10-24 Thread Ulrich Windl
>>> Andrei Borzenkov  schrieb am 24.10.2022 um 10:26 in
Nachricht
:
> On Mon, Oct 24, 2022 at 9:48 AM Ulrich Windl
>  wrote:
>>
>> >>> Alex Aminoff  schrieb am 21.10.2022 um 18:11 in 
>> >>> Nachricht
>> :
>>
>> ...
>> > Just to close out this thread, I am happy to report that
>> >
>> > ExecStart=systemctl start --no-block multi-user.target
>> >
>> > worked great.
>>
>> Makes me wonder: How does systemd handle indirect recursive starts (like the 
> one shown)?
>>
> 
> What do you call a "recursive start"? "systemctl start" simply tells

Starting multi-user.target via ExecStart=systemctl start starts all dependent
units, and probably one of those starts multi-user.target again.
That's what I call recursive.

> systemd to queue the start job. If this job is already queued, nothing
> happens. If this job has already been completed (successfully),
> nothing happens.

So I wonder why the command "ExecStart=systemctl start --no-block 
multi-user.target" has any effect then.

> 
> Where recursion come from?

See above.





[systemd-devel] Antw: [EXT] Re: SOLVED: daemon-reload does not pick up changes to /etc/systemd/system during boot

2022-10-24 Thread Ulrich Windl
>>> Alex Aminoff  schrieb am 21.10.2022 um 18:11 in Nachricht
:

...
> Just to close out this thread, I am happy to report that
> 
> ExecStart=systemctl start --no-block multi-user.target
> 
> worked great.

Makes me wonder: How does systemd handle indirect recursive starts (like the 
one shown)?






[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Finding network interface name in different distro

2022-10-19 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 19.10.2022 um 12:21
in
Nachricht :

[...]
> Uninstall biosdevname. It's 2022.

Make Lennart happy: Uninstall everything except systemd ;-)

(Sorry, I couldn't resist)

> 
> It's a bit contradictory to install it explicitly and then turn it off
> via biosdevname=0...
> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





[systemd-devel] Antw: Re: [systemd‑devel] [EXT] Finding network interface name in different distro

2022-10-19 Thread Ulrich Windl
>>> Brian Reichert  schrieb am 18.10.2022 um 19:39 in
Nachricht <20221018173901.ga5...@numachi.com>:
>> > :
>> > > Hi All,
>> > >
>> > > When changing distro or distro major versions, network interfaces'
>> > > names sometimes change.
>> > > For example on some Dell server running CentOS 7 the interface is
>> > > named em1 and running Alma 8 it's eno1.
> 
> This doesn't answer the OP's question, but my trick for enumerating
> network devices was to use something like:
> 
>   egrep ‑v ‑e "lo:" /proc/net/dev | grep ':' | cut ‑d: ‑f

Trying your command here I get an error ("cut: option requires an argument --
'f'"); the field number after -f seems to have been lost. I also tried an awk
variant that even seems to perform faster:

# time awk '$1 ~ /:$/ { sub(":", "", $1); if ($1 != "lo") print $1 }'
/proc/net/dev
em1
em2
bond1
p4p1
bond0
p4p2

real    0m0.001s
user    0m0.001s
sys     0m0.000s
# time egrep -v -e "lo:" /proc/net/dev | grep ':' | cut -d: -f1
   em1
   em2
 bond1
  p4p1
 bond0
  p4p2

real    0m0.002s
user    0m0.002s
sys     0m0.002s
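
For comparison, sysfs already has one entry per interface, so a plain listing
works as well (a sketch):

# ls /sys/class/net | grep -v '^lo$'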

> 
> to get a list of non‑loopback interfaces.
> 
> In my case, I went on to bury everything under a single bond0
> interface, so a) no software had to guess a NIC name, and b) in the
> case of physical cabling, they would all Just Work.
> 
> This was work done in my kickstart file, and worked through many
> releases of Red Hat and CentOS.
> 
> I adopted this tactic as Dell kept switching up how they would
> probe/name devices...
> 
> ‑‑ 
> Brian Reichert
> BSD admin/developer at large  





[systemd-devel] Antw: Re: [EXT] Finding network interface name in different distro

2022-10-19 Thread Ulrich Windl
>>> Etienne Champetier  schrieb am 18.10.2022 um
17:15 in Nachricht
:
> Le mar. 18 oct. 2022 à 10:04, Ulrich Windl
>  a écrit :
>>
>> >>> Etienne Champetier  schrieb am 15.10.2022
um
>> 02:41 in Nachricht
>> :
>> > Hi All,
>> >
>> > When changing distro or distro major versions, network interfaces'
>> > names sometimes change.
>> > For example on some Dell server running CentOS 7 the interface is
>> > named em1 and running Alma 8 it's eno1.
>>
>> Wasn't the idea of "BIOS device name" that the interface's name matches the

> label printed on the chassis?
> 
> Some HPE Gen10 servers have the first port as eno5, on some recent
> Dell servers the first port is eno8303.
> I would love to use eno1 everywhere, but it's a mess.

A Dell PowerEdge R7415 (from 2018 or so) uses eno1 and eno2 for the embedded
NIC, ens4f0np0 and ens4f1np1 for add-on 10Gb NICs (as well as ens5f0np0 and
ens5f1np1 for an additional card). In Linux (SLES 15) the names became em1,
em2, p4p1, p4p2, p5p1, p5p2.
I'm surprised about the "eno8303".

Also I think using "p" twice (pp) was a bad choice.

> 
>> > I'm looking for a way to find the new interface name in advance
>> > without booting the new OS.
>> > One way I found is to unpack the initramfs, mount bind /sys, chroot,
>> > and then run
>> > udevadm test-builtin net_id /sys/class/net/INTF
>> > Problem is that it doesn't give me right away the name according to
>> > the NamePolicy in 99-default.link
>> >
>> > Is there a command to get the future name right away ?
>> >
>> > Thanks
>> > Etienne
>>
>>
>>
>>





[systemd-devel] Antw: [EXT] Finding network interface name in different distro

2022-10-18 Thread Ulrich Windl
>>> Etienne Champetier  schrieb am 15.10.2022 um
02:41 in Nachricht
:
> Hi All,
> 
> When changing distro or distro major versions, network interfaces'
> names sometimes change.
> For example on some Dell server running CentOS 7 the interface is
> named em1 and running Alma 8 it's eno1.

Wasn't the idea of "BIOS device name" that the interface's name matches the 
label printed on the chassis?

> 
> I'm looking for a way to find the new interface name in advance
> without booting the new OS.
> One way I found is to unpack the initramfs, mount bind /sys, chroot,
> and then run
> udevadm test-builtin net_id /sys/class/net/INTF
> Problem is that it doesn't give me right away the name according to
> the NamePolicy in 99-default.link
> 
> Is there a command to get the future name right away ?
> 
> Thanks
> Etienne






[systemd-devel] Antw: [EXT] [systemd‑devel] Is it possible to let systemd create a listening socket and yet be able to have that socket activate nothing, at least temporarily?

2022-10-18 Thread Ulrich Windl
Wouldn't the most logical solution be to integrate that backup facility into
your daemon (e.g. run it before terminating)?

>>> Klaus Ebbe Grue  schrieb am 07.10.2022 um 09:24 in
Nachricht
<91a90e97f41a4da3b7f716727262d...@di.ku.dk>:
> Hi systemd‑devel,
> 
> I have a user question which I take the liberty to send here since "about 
> systemd‑devel" says "... it's also OK to direct user questions to this
mailing 
> list ...".
> 
> I have a daemon, /usr/bin/mydaemon, which listens on one and only one TCP 
> port, say , and which does no more than communicating over  and 
> creating, reading, writing and deleting files in /home/me/mydaemon/.
> 
> Mydaemon leaves it to systemd to create a socket which listens at .
> 
> It is unimportant whether or not mydaemon is started at boot and it is also

> unimportant whether or not mydaemon is socket activated. As long as it is at

> least one of the two.
> 
> Now I want to upgrade mydaemon to a new version using a script, without race

> conditions and without closing the listening socket. I want the listening 
> socket to stay open since otherwise there can be a one minute interval
during 
> which it is impossible to reopen .
> 
> If it is just a clean upgrade, the script could replace /usr/bin/mydaemon, 
> then stop mydaemon. If the daemon is socket activated there is no more to
do. 
> If the daemon is activated only on boot then the script must end up 
> restarting mydaemon.
> 
> But now I want to do some more while mydaemon is not running. It could be 
> that my script should take a backup of /home/me/mydaemon/ in case things go

> wrong. It could be the script should translate some file in 
> /home/me/mydaemon/ to some new format required by the new mydaemon or 
> whatever.
> 
> So I need to stop mydaemon in such a way that mydaemon cannot wake up while

> my script fiddles with /home/me/mydaemon/.
> 
> According to https://0pointer.de/blog/projects/three‑levels‑of‑off it seems

> that that was possible in 2011: just do "systemctl disable
mydaemon.service". 
> But when I try that, mydaemon still wakes up if I connect to  using eg 
> netcat.
> 
> I have also tried to mask mydaemon. But if I then connect to  using 
> netcat, then netcat gets kicked of. And if I try again then  is no
longer 
> listening.
> 
> QUESTION: Is it possible to let systemd create a listening socket and yet be

> able to have that socket activate nothing, at least temporarily?
> 
> Cheers,
> Klaus
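
For context, a minimal sketch of the socket-activation pair the question
assumes; the unit names and the port are hypothetical, since the real ones were
elided above:

# mydaemon.socket
[Socket]
ListenStream=12345

[Install]
WantedBy=sockets.target

# mydaemon.service
[Service]
ExecStart=/usr/bin/mydaemon

With such a pair, stopping (or masking) mydaemon.service does not prevent a new
connection from activating it again, which matches the behaviour described
above.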





[systemd-devel] Antw: Re: [EXT] Re: Q: handling generator-like dependency: target won't start on boot

2022-09-29 Thread Ulrich Windl
>>> Andrei Borzenkov  schrieb am 29.09.2022 um 13:57 in
Nachricht
:
...
>> I don't quite understand what an "initial transaction" is,
> 
> The set of (start) jobs starting with default.targtet (or whatever
> target was given to systemd as "initial target") and following
> dependency chain (Wants and Requires). It is computed once when
> systemd is started and it can only include units that are available
> when systemd computes it. Anything added later (even if it has
> dependency with units that are already part of the transaction) will
> not be seen and used by systemd.

Would using daemon-reexec instead make a difference?

...

Regards,
Ulrich




[systemd-devel] Antw: [EXT] Re: [systemd‑devel] jailrooting services with RootDirectory ‑ how ?

2022-09-29 Thread Ulrich Windl
>>> Branko  schrieb am 29.09.2022 um 01:01 in Nachricht
<20220928230155.783c1a69@\040none\041brane_wrks>:

...
> It's hard to sift through all those piles of manpages without missing
> something.

I agree: It's all very complex.





[systemd-devel] Antw: [EXT] Re: Q: handling generator-like dependency: target won't start on boot

2022-09-29 Thread Ulrich Windl
>>> Andrei Borzenkov  schrieb am 28.09.2022 um 20:34 in
Nachricht :
> On 28.09.2022 09:25, Ulrich Windl wrote:
>> Hi!
>> 
>> I'm trying to establish a mechanism that uses a generator-like mechanism as 
> described below. Unfortunately it starts when triggering the target manually, 
> but it never starts on system boot. I could need some advice how to make it 
> work.
>> 
>> Basically I have a generator-like unit, say "g.service", that creates other 
> instance-like services like i@.service.
>> Finally I have a target, say "t.target", that wants (among others) those 
> instance-like services and is wanted by default.target.
>> 
>> As said in the beginning: When booting the target does not start (and I 
> don't see any errors logged), but when I "systemctl start t.target", 
> everything starts up fine.
>> 
>> More details:
>> 
>> generator-like services:
>> WantedBy default.target and t.target, and it "Wants=nss-user-lookup.target 
> time-sync.target  paths.target" (the Before= list is identical). In addition 
> it has "Before=default.target t.target".
>> It starts a "oneshot" script that creates the instance-like services with 
> RemainAfterExit=true.
>> 
>> instance-like services:
>> PartOf=t.target, Requires generator-like.service (also After that service). 
> In addition it "Wants=nss-user-lookup.target time-sync.target paths.target" 
> (After= uses the same list). The service is Type=forking, and the unit is 
> WantedBy=t.target
>> 
>> The script used in the generator-like service creates the unit files in 
> /run/systemd/system, and it runs "/usr/bin/systemctl daemon-reload" whenever 
> a unit file had been created or changed.
>> 
> 
> daemon-reload does not re-evaluate initial "transaction" and your new
> units are not used because they did not exist when this transaction was
> computed.

Hi!

I don't quite understand what an "initial transaction" is, but it sounds like a 
design deficit:
The "real generators" (initially I wanted to use those) are too limited (i.e. 
started too early in the boot process) to be useful. I also noticed that 
daemon-reload seems to re-run the generators (like systemd-sysv-generator), so I 
wonder what use that is if such generated units are being ignored.

At least what you write seems to explain what I see so far.

Would it work to write yet another unit that starts t.target? If that worked, it 
would clearly demonstrate the design problem in systemd.
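
For what it's worth, a sketch of what starting the newly created units manually
(as suggested in the reply quoted below) could look like at the end of the
oneshot script; the instance names are purely illustrative:

/usr/bin/systemctl daemon-reload
/usr/bin/systemctl start --no-block 'i@foo.service' 'i@bar.service'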

Regards,
Ulrich


> 
> So the only way to squeeze it into your scheme is to manually start
> newly created units.
> 
>> Could the problem be a race condition, caused by daemon-reload being run 
> asynchronously, i.e.: The generator-like service unit ends while the actual 
> daemon-reload is still in progress?
>> 
>> systemd version is from SLES12 SP5 (systemd-228-157.40.1.x86_64).
>> 
>> Regards,
>> Ulrich
>> 
>> 
>> 






[systemd-devel] Q: handling generator-like dependency: target won't start on boot

2022-09-28 Thread Ulrich Windl
Hi!

I'm trying to establish a mechanism that uses a generator-like approach as 
described below. Unfortunately it starts when triggering the target manually, 
but it never starts on system boot. I could use some advice on how to make it 
work.

Basically I have a generator-like unit, say "g.service", that creates other 
instance-like services like i@.service.
Finally I have a target, say "t.target", that wants (among others) those 
instance-like services and is wanted by default.target.

As said in the beginning: When booting the target does not start (and I don't 
see any errors logged), but when I "systemctl start t.target", everything 
starts up fine.

More details:

generator-like services:
WantedBy default.target and t.target, and it "Wants=nss-user-lookup.target 
time-sync.target  paths.target" (the Before= list is identical). In addition it 
has "Before=default.target t.target".
It starts a "oneshot" script that creates the instance-like services with 
RemainAfterExit=true.

instance-like services:
PartOf=t.target, Requires generator-like.service (also After that service). In 
addition it "Wants=nss-user-lookup.target time-sync.target paths.target" 
(After= uses the same list). The service is Type=forking, and the unit is 
WantedBy=t.target

The script used in the generator-like service creates the unit files in 
/run/systemd/system, and it runs "/usr/bin/systemctl daemon-reload" whenever a 
unit file had been created or changed.

Could the problem be a race condition, caused by daemon-reload being run 
asynchronously, i.e. the generator-like service unit ends while the actual 
daemon-reload is still in progress?

systemd version is from SLES12 SP5 (systemd-228-157.40.1.x86_64).

Regards,
Ulrich





[systemd-devel] Antw: [EXT] [systemd‑devel] jailrooting services with RootDirectory ‑ how ?

2022-09-28 Thread Ulrich Windl
And WHAT EXACTLY does not work?

>>>  schrieb am 28.09.2022 um 05:35 in Nachricht
<20220928033517.3ffbcce4@\040none\041brane_wrks>:
> I'm trying to start services within controlled jailroot. So I tried
> using RootDirectory directive as described in systemd‑exec man page.
> 
> It should be simple, but I never managed to make it work. 
> I tried to
> start simple minimalistic, statically compiled program that just prints
> "Hello world". It has no library dependencies etc.
> 
> This should be simple, but it doesn't work. Even when I bind mount just
> about every main directory in "/" into my RootDirectory=/usr/my_chroot.
> 
> I tried grepping the all available service files on my machines for
> RootDirectory to find an example that I could learn from, but I
> couldn't find any.
> 
> So i grepped the internet and couldn't find even a single example that
> uses it. But I did find some remark that its usage can screw some cases
> ( at least service types of Type=notify) due to some boondongle with
> systemd's listening socket or something. 
> But my example is totally simple of the "oneshot" type. It works great
> without RootDirectory directive.
> 
> What gives ? Has anyone tried actually using this ? Or is this one of
> of those silently obsoleted things ?
> 
> It would be great if one could use it to jail each service into its own
> private view of the filesystem on the machine in economic way, using
> not much more than dozen of bind‑mounts...
> 
> Is there a simple demo example that uses it that I could try ?
> 
> TIA
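
For reference, a minimal sketch of a working RootDirectory= setup for a
statically linked binary; the paths are hypothetical, and the binary has to
exist inside the new root (here /srv/jail/hello), because the path in ExecStart=
is resolved after the chroot:

[Unit]
Description=Hello from a chroot

[Service]
Type=oneshot
RootDirectory=/srv/jail
ExecStart=/hello

Newer systemd versions also offer MountAPIVFS= and BindReadOnlyPaths= to
populate /proc, /sys, /dev and shared directories inside the root without manual
bind mounts.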





[systemd-devel] Antw: [EXT] Re: Q: "Loaded: not-found (Reason: No such file or directory)"

2022-09-22 Thread Ulrich Windl
>>> Michael Chapman  schrieb am 22.09.2022 um 08:50 in
Nachricht :
> On Thu, 22 Sep 2022, Ulrich Windl wrote:
>> Hi!
>> 
>> I wonder:
>> # systemctl status i*
>> ● inst-sys.service
>>Loaded: not-found (Reason: No such file or directory)
>>Active: inactive (dead)
>> 
>> ● iptables.service
>>Loaded: not-found (Reason: No such file or directory)
>>Active: inactive (dead)
>> 
>> So I tried to find out what's the problem with these services, but they don't
>> seem to actually exist:
>> # systemctl cat iptables.service
>> No files found for iptables.service.
>> 
>> This is SLES12 SP5 (systemd-228).
>> 
>> What can I do to resolve this (remove obsolete files, maybe)?
>> 
>> Regards,
>> Ulrich
> 
> There is no problem here. systemd knows about those units because they are 
> referenced by other units, such as in Wants= directives.

Actually it turned out to be a stupid mistake of my own:
I had two directories in the CWD, inst-sys and iptables, so the shell expanded
the glob to those names. The correct command to use would have been 
# systemctl status i\*

;-)

Ulrich
> 
> - Michael





[systemd-devel] Antw: [EXT] networkd D-Bus API for link up/down?

2022-09-22 Thread Ulrich Windl
>>> "Kevin P. Fleming"  schrieb am 21.09.2022 um 12:48 in 
>>> Nachricht
:
> When the D-Bus API for systemd-networkd was added there were
> indications that it could be used for bringing links up and down.
> However, when I review the API docs at:
> 
> https://www.freedesktop.org/software/systemd/man/org.freedesktop.network1.html#
> 
> I don't see any methods for doing those operations. networkctl uses
> netlink messages for these operations as well.
> 
> I want to create a cluster resource agent for Pacemaker which can
> manage networkd links, and using D-Bus would be easier than using
> netlink since there is already D-Bus support in the resource agent for
> systemd units.

Personally I think it's preferable to write a true OCF resource agent rather 
than using a systemd unit in Pacemaker.
Pacemaker adds a lot of complexity, and things are much harder to debug.

Regards,
Ulrich






[systemd-devel] Q: "Loaded: not-found (Reason: No such file or directory)"

2022-09-22 Thread Ulrich Windl
Hi!

I wonder:
# systemctl status i*
● inst-sys.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

● iptables.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

So I tried to find out what's the problem with these services, but they don't
actually seem to exist:
# systemctl cat iptables.service
No files found for iptables.service.

This is SLES12 SP5 (systemd-228).

What can I do to resolve this (remove obsolete files, maybe)?

Regards,
Ulrich




[systemd-devel] Antw: [EXT] Re: systemd service causing bash to miss signals?

2022-09-20 Thread Ulrich Windl
>>> Mantas Mikulenas  schrieb am 19.09.2022 um 19:25 in
Nachricht
:
> Pipelines somewhat rely on the kernel delivering SIGPIPE to the writer as
> soon as the read end is closed. So if you have `foo | head -1`, then as
> soon as head reads enough and exits, foo gets killed via SIGPIPE. But as
> most systemd-managed services aren't shell interpreters, systemd marks
> SIGPIPE as "ignored" when starting the service process, so that if the
> service is somehow tricked into opening a pipe that a user has mkfifo'd, at
> least the kernel can't be tricked into killing the service. You can opt out
> of this using IgnoreSIGPIPE=.
> 
> (Though even if there's no signal, I believe  the writer should also get an
> -EPIPE out of every write attempt, but not all tools pay attention to it –
> some just completely ignore the write() result, like apparently `fold` does
> in your case...)

Out of curiosity I tried an strace; maybe try that for diagnosing, too:
:~> strace -e trace=process -f perl /tmp/junk.pl
execve("/usr/bin/perl", ["perl", "/tmp/junk.pl"], [/* 72 vars */]) = 0
arch_prctl(ARCH_SET_FS, 0x7fbf22d8d700) = 0
/tmp/junk.pl start 1663658370
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x7fbf22d8d9d0) = 13875
Process 13875 attached
[pid 13875] execve("/bin/sh", ["sh", "-c", "cat /dev/urandom|tr -dc
\"a-zA-Z0"...], [/* 72 vars */]) = 0
[pid 13875] arch_prctl(ARCH_SET_FS, 0x7f0f782c4700) = 0
[pid 13875] clone(child_stack=0,
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x7f0f782c49d0) = 13876
Process 13876 attached
[pid 13875] clone(Process 13877 attached
child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x7f0f782c49d0) = 13877
[pid 13875] clone(child_stack=0,
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x7f0f782c49d0) = 13878
Process 13878 attached
[pid 13875] clone(child_stack=0,
flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD,
child_tidptr=0x7f0f782c49d0) = 13879
[pid 13875] wait4(-1,  
[pid 13876] execve("/usr/bin/cat", ["cat", "/dev/urandom"], [/* 72 vars */]) =
0
[pid 13877] execve("/usr/bin/tr", ["tr", "-dc", "a-zA-Z0-9"], [/* 72 vars */])
= 0
[pid 13878] execve("/usr/bin/fold", ["fold", "-w", "64"], [/* 72 vars */]) =
0
[pid 13876] arch_prctl(ARCH_SET_FS, 0x7fcbd4cff700) = 0
[pid 13877] arch_prctl(ARCH_SET_FS, 0x7f48217fa700) = 0
[pid 13878] arch_prctl(ARCH_SET_FS, 0x7f3d58365700) = 0
Process 13879 attached
[pid 13879] execve("/usr/bin/head", ["head", "-1"], [/* 72 vars */]) = 0
[pid 13879] arch_prctl(ARCH_SET_FS, 0x7f0e2bc01700) = 0
[pid 13878] --- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=13878,
si_uid=1025} ---
[pid 13878] +++ killed by SIGPIPE +++
[pid 13875] <... wait4 resumed> [{WIFSIGNALED(s) && WTERMSIG(s) == SIGPIPE}],
0, NULL) = 13878
[pid 13875] wait4(-1,  
[pid 13879] exit_group(0)   = ?
[pid 13877] --- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=13877,
si_uid=1025} ---
[pid 13879] +++ exited with 0 +++
[pid 13877] +++ killed by SIGPIPE +++
[pid 13875] <... wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0,
NULL) = 13879
[pid 13875] wait4(-1, [{WIFSIGNALED(s) && WTERMSIG(s) == SIGPIPE}], 0, NULL) =
13877
[pid 13875] wait4(-1,  
[pid 13876] --- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=13876,
si_uid=1025} ---
[pid 13876] +++ killed by SIGPIPE +++
[pid 13875] <... wait4 resumed> [{WIFSIGNALED(s) && WTERMSIG(s) == SIGPIPE}],
0, NULL) = 13876
[pid 13875] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=13878,
si_uid=1025, si_status=SIGPIPE, si_utime=0, si_stime=0} ---
[pid 13875] wait4(-1, 0x7fff15c17600, WNOHANG, NULL) = -1 ECHILD (No child
processes)
[pid 13875] exit_group(0)   = ?
[pid 13875] +++ exited with 0 +++
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=13875, si_uid=1025,
si_status=0, si_utime=0, si_stime=0} ---
wait4(13875, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 13875
/tmp/junk.pl end 1663658370
exit_group(0)   = ?
+++ exited with 0 +++
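
If the pipeline is really needed inside the service, a drop-in along the lines
of the IgnoreSIGPIPE= hint quoted above might be worth a try (a sketch; the
drop-in path and service name are hypothetical):

# /etc/systemd/system/myservice.service.d/sigpipe.conf
[Service]
IgnoreSIGPIPE=no

followed by "systemctl daemon-reload" and a restart of the service; with SIGPIPE
delivered again, the tr/fold stages should terminate just as they do on an
interactive shell.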

Regards,
Ulrich

> 
> On Mon, Sep 19, 2022, 20:18 Brian Reichert  wrote:
> 
>> I apologize for the vague subject.
>>
>> The background: I've inherited some legacy software to manage.
>>
>> This is on SLES12 SP5, running:
>>
>> systemd-228-157.40.1.x86_64
>>
>> One element is a systemd-managed service, written in Perl, that in
>> turn, is using bash to generate random numbers (don't ask me why
>> this tactic was adopted).
>>
>> Here's an isolation of that logic:
>>
>>   pheonix:~ # cat /root/random_str.pl
>>   #!/usr/bin/perl
>>   print "$0 start ".time."\n";
>>   my $randStr = `cat /dev/urandom|tr -dc "a-zA-Z0-9"|fold -w 64|head -1`;
>>   print "$0 end ".time."\n";
>>
>> You can run this from the command-line, to see how quickly it
>> nominally operates.
>>
>> What I can reproduce in my environment, very reliably, is that when
>> this is invoked as a service:
>>
>> - the 'head' command exits very quickly (to be 

[systemd-devel] Antw: [EXT] [systemd‑devel] systemd service causing bash to miss signals?

2022-09-20 Thread Ulrich Windl
>>> Brian Reichert  schrieb am 19.09.2022 um 19:18 in
Nachricht <20220919171812.gf74...@numachi.com>:
> I apologize for the vague subject.
> 
> The background: I've inherited some legacy software to manage.
> 
> This is on SLES12 SP5, running:
> 
>   systemd‑228‑157.40.1.x86_64
> 
> One element is a systemd‑managed service, written in Perl, that in
> turn, is using bash to generate random numbers (don't ask me why
> this tactic was adopted).
> 
> Here's an isolation of that logic:
> 
>   pheonix:~ # cat /root/random_str.pl
>   #!/usr/bin/perl
>   print "$0 start ".time."\n";
>   my $randStr = `cat /dev/urandom|tr ‑dc "a‑zA‑Z0‑9"|fold ‑w 64|head ‑1`;
>   print "$0 end ".time."\n";
> 
> You can run this from the command‑line, to see how quickly it
> nominally operates.
> 
> What I can reproduce in my environment, very reliably, is that when
> this is invoked as a service:
> 
> ‑ the 'head' command exits very quickly (to be expected)
> ‑ the shell does not exit (maybe missed a SIGCHILD?)
> ‑ 'fold' chews a CPU core
> ‑ A kernel trace shows that 'fold' is spinning on SIGPIPEs, as it's
>   STDOUT is no longer connected to another process.

When I run the command "cat /dev/urandom|tr -dc "a-zA-Z0-9"|fold -w 64|head
-1" outside systemd and Perl in SLES15 SP5 there is no problem, so I guess none
of the tools used is broken.
...

Regards,
Ulrich




[systemd-devel] Antw: [EXT] [systemd‑devel] xen shutdown question

2022-09-14 Thread Ulrich Windl
>>> Richard Kojedzinszky  schrieb am 13.09.2022 um 17:53 in
Nachricht <9c1b2b00d69ee9e996e96102a9771...@kojedz.in>:
> Dear systemd users/developers,
> 
> I am using a shutdown inhibitor (kubelet), which seems to be working 
> when issuing `systemctl reboot` or `systemctl poweroff`, but when the vm 
> is shut down from the outside (xl shutdown), the inhibitor seems not to 
> be waited on.

Isn't the point: who issues 'xl shutdown'?
Likewise: can option -w, --wait be used?

Regards,
Ulrich


> 
> Systemd version is:
> 
> # systemctl ‑‑version
> systemd 247 (247.3‑7)
> +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP 
> +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +ZSTD +SECCOMP +BLKID 
> +ELFUTILS +KMOD +IDN2 ‑IDN +PCRE2 default‑hierarchy=unified
> 
> os is:
> # cat /etc/os‑release
> PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
> NAME="Debian GNU/Linux"
> VERSION_ID="11"
> VERSION="11 (bullseye)"
> VERSION_CODENAME=bullseye
> ID=debian
> HOME_URL="https://www.debian.org/;
> SUPPORT_URL="https://www.debian.org/support;
> BUG_REPORT_URL="https://bugs.debian.org/;
> 
> I would like to achieve that if a shutdown request arrives from xen, the 
> inhibitor has a chance to respond.
> 
> What configuration is needed to achieve this?
> 
> In detail, I am running a kubernetes node, and would like to use 
> kubernetes node graceful shutdown feature.
> 
> Thanks in advance,
> Richard
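
For what it's worth, the knobs involved on the systemd side are the
delay-inhibitor timeout in logind and the list of registered inhibitors; a
sketch (the timeout value is illustrative):

# /etc/systemd/logind.conf
[Login]
InhibitDelayMaxSec=300

# show who holds which inhibitor locks and in which mode:
systemd-inhibit --list

Whether a shutdown initiated by the hypervisor goes through logind at all (and
thus honours delay inhibitors) depends on how the guest reacts to the Xen
shutdown request, which is probably the part to investigate.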





[systemd-devel] Antw: Re: Antw: Re: Re: [EXT] Re: Q: Querying units for "what provides" a target

2022-09-13 Thread Ulrich Windl
 >>> THomas HUMMEL  schrieb am 12.09.2022 um 18:22 in
Nachricht <5e07de0a-c365-ce03-987a-b974ef522...@pasteur.fr>:
>>  On 9/9/22 18:09, Andrei Borzenkov wrote:
> 
> Hello,
> 
> maybe referring to 
> https://lists.freedesktop.org/archives/systemd-devel/2022-January/047342.html
> would help clarify ?
> 

At least I'm glad I'm not the only one having trouble understanding things as 
described in the docs ;-)

> --
> TH






[systemd-devel] Antw: Re: Re: Antw: Re: Re: [EXT] Re: Q: Querying units for "what provides" a target

2022-09-09 Thread Ulrich Windl
>>> Michael Biebl  schrieb am 09.09.2022 um 14:15 in Nachricht
:
> Am Fr., 9. Sept. 2022 um 14:12 Uhr schrieb Michael Biebl :
>>
>> Am Fr., 9. Sept. 2022 um 14:01 Uhr schrieb Ulrich Windl
>> :
>> >
>> > >>> Andrei Borzenkov  schrieb am 09.09.2022 um 13:41 
>> > >>> in
>> > Nachricht
>> > :
>> > > On Fri, Sep 9, 2022 at 2:13 PM Ulrich Windl
>> > >  wrote:
>> > > ...
>> > >> >
>> > >> > If you are interested in services that pull in e.g. time-sync.target
>> > >> > via Wants (or Requires) and order themselves before the target, you
>> > >> > can use something like
>> > >> > $ systemctl show time-sync.target -p WantedBy -p RequiredBy -p After
>> > >> > RequiredBy=
>> > >> > WantedBy=chrony.service
>> > >> > After=chrony.service time-set.target
>> > >>
>> > >> It seems what I wanted to know is output by
>> > >> # systemctl show -p After time-set.target
>> > >> After=systemd-timesyncd.service
>> > >> # systemctl show -p After time-sync.target
>> > >> After=time-set.target ntp-wait.service
>> > >>
>> > >> However the "After=" is somewhat unexpected.
>> > >
>> > > This is exactly what targets are about. The only usage for targets is
>> > > to wait until something else becomes active and to do it they must
>> > > come After something.
>> > >
>> > >> And "-p WantedBy" is definitely
>> > >> wrong (it will output units that "require the target", not the units
>> > > "providing
>> > >> the target").
>> > >>
>> > >
>> > > There is no such thing as "units providing the target". Systemd
>> > > documentation makes distinction between targets that Want other units
>> > > ("active") and targets that are WantedBy other units ("passive"). It
>> > > is expected that services "providing" passive targets have
>> > > WantedBy=this-passive.target, otherwise passive targets will not be
>> > > activated at all. So WantedBy is exactly correct in this case.
>> >
>> > Hi!
>> >
>> > But when I include it (as suggested) I get:
>> > # systemctl show  -p WantedBy -p RequiredBy -p After time-sync.target
>> > RequiredBy=
>> > WantedBy=iotwatch@ROOT.service iotwatch@VPM.service 
> iotwatch-generator.service ntp-wait.service systemd-timesyncd.service
>> > After=time-set.target ntp-wait.service
>> > ---
>> >
>> > Those iotwatch* units use "Before="; so is the WantedBy= incorrect for 
> those?
>> >
>> > Those units use:
>> > Wants=nss-user-lookup.target time-sync.target paths.target
>> > After=nss-user-lookup.target time-sync.target paths.target
>>
>>
>> See man systemd.special, passive and active targets (as Andrei already
>> hinted at).
>> Quoting the relevant parts
>> "   Special Passive System Units
>>A number of special system targets are defined that can be used
>> to properly order boot-up of optional services. These targets are
>> generally not part of the initial boot transaction, unless they are
>> explicitly pulled in by one of the implementing
>>services. Note specifically that these passive target units are
>> generally not pulled in by the consumer of a service, but by the
>> provider of the service.
>> "
>>  iotwatch* does appear to be a consumer of the time-sync.target, so it
>> should merely have an ordering but not pull in the target.
> 
> Quoting the rest of the man page section:
> "
> This means: a consuming service should order itself after these
> targets (as appropriate), but not
>  pull it in. A providing service should order itself before these
> targets (as appropriate) and pull it in (via a Wants= type
> dependency).
> "

Thanks for the explanation! For me part of the problem is: where is "pull in" 
defined? ;-)
The phrase is used for Wants, but also for Requires. So is "pull in" a synonym 
for "wants or requires"?
(IMHO the first part is an "action", while the last part is a "desire"; hard to 
treat them as synonyms ;-))
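
Put as a sketch, the two roles from that man-page section look like this in
unit-file terms:

# provider of time synchronisation (e.g. an NTP daemon):
[Unit]
Wants=time-sync.target
Before=time-sync.target

# consumer that merely needs synchronised time:
[Unit]
After=time-sync.target

So "pull in" here means the Wants= (or Requires=) line that the provider
carries; the consumer only orders itself after the target.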

Regards,
Ulrich



[systemd-devel] Antw: Re: Antw: Re: Re: [EXT] Re: Q: Querying units for "what provides" a target

2022-09-09 Thread Ulrich Windl
>>> Andrei Borzenkov  schrieb am 09.09.2022 um 13:41 in
Nachricht
:
> On Fri, Sep 9, 2022 at 2:13 PM Ulrich Windl
>  wrote:
> ...
>> >
>> > If you are interested in services that pull in e.g. time-sync.target
>> > via Wants (or Requires) and order themselves before the target, you
>> > can use something like
>> > $ systemctl show time-sync.target -p WantedBy -p RequiredBy -p After
>> > RequiredBy=
>> > WantedBy=chrony.service
>> > After=chrony.service time-set.target
>>
>> It seems what I wanted to know is output by
>> # systemctl show -p After time-set.target
>> After=systemd-timesyncd.service
>> # systemctl show -p After time-sync.target
>> After=time-set.target ntp-wait.service
>>
>> However the "After=" is somewhat unexpected.
> 
> This is exactly what targets are about. The only usage for targets is
> to wait until something else becomes active and to do it they must
> come After something.
> 
>> And "-p WantedBy" is definitely
>> wrong (it will output units that "require the target", not the units 
> "providing
>> the target").
>>
> 
> There is no such thing as "units providing the target". Systemd
> documentation makes distinction between targets that Want other units
> ("active") and targets that are WantedBy other units ("passive"). It
> is expected that services "providing" passive targets have
> WantedBy=this-passive.target, otherwise passive targets will not be
> activated at all. So WantedBy is exactly correct in this case.

Hi!

But when I include it (as suggested) I get:
# systemctl show  -p WantedBy -p RequiredBy -p After time-sync.target
RequiredBy=
WantedBy=iotwatch@ROOT.service iotwatch@VPM.service iotwatch-generator.service 
ntp-wait.service systemd-timesyncd.service
After=time-set.target ntp-wait.service
---

Those iotwatch* units use "Before="; so is the WantedBy= incorrect for those?

Those units use:
Wants=nss-user-lookup.target time-sync.target paths.target
After=nss-user-lookup.target time-sync.target paths.target

Regards,
Ulrich





[systemd-devel] Antw: Re: Re: [EXT] Re: Q: Querying units for "what provides" a target

2022-09-09 Thread Ulrich Windl
>>> Michael Biebl  schrieb am 09.09.2022 um 12:54 in
Nachricht
:
> Am Fr., 9. Sept. 2022 um 12:31 Uhr schrieb Michael Biebl
:
>>
>> Am Fr., 9. Sept. 2022 um 12:08 Uhr schrieb Ulrich Windl
>> :
>> >
>> > >>> Michael Biebl  schrieb am 09.09.2022 um 10:55 in
>> > Nachricht
>> > :
>> > > Example: syslog.service
>> > >
>> > > $ systemctl status syslog.service
>> > > ● rsyslog.service - System Logging Service
>> > >  Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled;
>> > > preset: enabled)
>> > >  Active: active (running) since Thu 2022-09-08 08:55:45 CEST; 1 day
1h
>> > > ago
>> > > TriggeredBy: ● syslog.socket
>> > >Docs: man:rsyslogd(8)
>> > >  man:rsyslog.conf(5)
>> > >  https://www.rsyslog.com/doc/ 
>> > >Main PID: 624 (rsyslogd)
>> > >   Tasks: 4 (limit: 19002)
>> > >  Memory: 3.8M
>> > > CPU: 1.341s
>> > >  CGroup: /system.slice/rsyslog.service
>> > >  └─624 /usr/sbin/rsyslogd -n -iNONE
>> > >
>> > > You'll see that syslog.service is provided by  provided by
>> > > rsyslog.service (and the actual name of the file on the disk)
>> > > Isn't this what you wanted? If not, I must have misunderstood what you
>> > > are looking for.
>> >
>> > Hi!
>> >
>> > I'm afraid that does not help:
>> > # systemctl status time-set.target
>> > ● time-set.target - System Time Set
>> >  Loaded: loaded (/usr/lib/systemd/system/time-set.target; static)
>> >  Active: active since Mon 2022-09-05 14:30:42 CEST; 3 days ago
>> >Docs: man:systemd.special(7)
>> >
>> > Now what is actually providing "time-set" (if any)?
>> > Does that mean "nothing provides time-set"?
>> >
>> > Likewise:
>> > # systemctl status time-sync.target
>> > ● time-sync.target - System Time Synchronized
>> >  Loaded: loaded (/usr/lib/systemd/system/time-sync.target; static)
>> >  Active: active since Mon 2022-09-05 14:32:00 CEST; 3 days ago
>> >Docs: man:systemd.special(7)
>> >
>> > Sep 05 14:32:00 host16 systemd[1]: Reached target System Time
Synchronized.
>> >
>> > Clear now?
>>
>> Not really.
>> Are you interested in what services hook into time-sync.target (and
>> are ordered before it)?
> 
> If you are interested in services that pull in e.g. time-sync.target
> via Wants (or Requires) and order themselves before the target, you
> can use something like
> $ systemctl show time-sync.target -p WantedBy -p RequiredBy -p After
> RequiredBy=
> WantedBy=chrony.service
> After=chrony.service time-set.target

It seems what I wanted to know is output by
# systemctl show -p After time-set.target
After=systemd-timesyncd.service
# systemctl show -p After time-sync.target
After=time-set.target ntp-wait.service

However the "After=" is somewhat unexpected. And "-p WantedBy" is definitely
wrong (it will output units that "require the target", not the units "providing
the target").

Regards,
Ulrich




[systemd-devel] Antw: Re: Re: [EXT] Re: Q: Querying units for "what provides" a target

2022-09-09 Thread Ulrich Windl
>>> Michael Biebl  schrieb am 09.09.2022 um 12:31 in
Nachricht
:
> Am Fr., 9. Sept. 2022 um 12:08 Uhr schrieb Ulrich Windl
> :
>>
>> >>> Michael Biebl  schrieb am 09.09.2022 um 10:55 in
>> Nachricht
>> :
>> > Example: syslog.service
>> >
>> > $ systemctl status syslog.service
>> > ● rsyslog.service - System Logging Service
>> >  Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled;
>> > preset: enabled)
>> >  Active: active (running) since Thu 2022-09-08 08:55:45 CEST; 1 day
1h
>> > ago
>> > TriggeredBy: ● syslog.socket
>> >Docs: man:rsyslogd(8)
>> >  man:rsyslog.conf(5)
>> >  https://www.rsyslog.com/doc/ 
>> >Main PID: 624 (rsyslogd)
>> >   Tasks: 4 (limit: 19002)
>> >  Memory: 3.8M
>> > CPU: 1.341s
>> >  CGroup: /system.slice/rsyslog.service
>> >  └─624 /usr/sbin/rsyslogd -n -iNONE
>> >
>> > You'll see that syslog.service is provided by
>> > rsyslog.service (and the actual name of the file on the disk)
>> > Isn't this what you wanted? If not, I must have misunderstood what you
>> > are looking for.
>>
>> Hi!
>>
>> I'm afraid that does not help:
>> # systemctl status time-set.target
>> ● time-set.target - System Time Set
>>  Loaded: loaded (/usr/lib/systemd/system/time-set.target; static)
>>  Active: active since Mon 2022-09-05 14:30:42 CEST; 3 days ago
>>Docs: man:systemd.special(7)
>>
>> Now what is actually providing "time-set" (if any)?
>> Does that mean "nothing provides time-set"?
>>
>> Likewise:
>> # systemctl status time-sync.target
>> ● time-sync.target - System Time Synchronized
>>  Loaded: loaded (/usr/lib/systemd/system/time-sync.target; static)
>>  Active: active since Mon 2022-09-05 14:32:00 CEST; 3 days ago
>>Docs: man:systemd.special(7)
>>
>> Sep 05 14:32:00 host16 systemd[1]: Reached target System Time
Synchronized.
>>
>> Clear now?
> 
> Not really.
> Are you interested in what services hook into time-sync.target (and
> are ordered before it)?

Yes. I call it "providing time set/sync".

Regards,
Ulrich




[systemd-devel] Antw: Re: [EXT] Re: Q: Querying units for "what provides" a target

2022-09-09 Thread Ulrich Windl
>>> Michael Biebl  schrieb am 09.09.2022 um 10:55 in
Nachricht
:
> Example: syslog.service
> 
> $ systemctl status syslog.service
> ● rsyslog.service - System Logging Service
>  Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled;
> preset: enabled)
>  Active: active (running) since Thu 2022-09-08 08:55:45 CEST; 1 day 1h 
> ago
> TriggeredBy: ● syslog.socket
>Docs: man:rsyslogd(8)
>  man:rsyslog.conf(5)
>  https://www.rsyslog.com/doc/ 
>Main PID: 624 (rsyslogd)
>   Tasks: 4 (limit: 19002)
>  Memory: 3.8M
> CPU: 1.341s
>  CGroup: /system.slice/rsyslog.service
>  └─624 /usr/sbin/rsyslogd -n -iNONE
> 
> You'll see that syslog.service is provided by
> rsyslog.service (and the actual name of the file on the disk)
> Isn't this what you wanted? If not, I must have misunderstood what you
> are looking for.

Hi!

I'm afraid that does not help:
# systemctl status time-set.target
● time-set.target - System Time Set
 Loaded: loaded (/usr/lib/systemd/system/time-set.target; static)
 Active: active since Mon 2022-09-05 14:30:42 CEST; 3 days ago
   Docs: man:systemd.special(7)

Now what is actually providing "time-set" (if any)?
Does that mean "nothing provides time-set"?

Likewise:
# systemctl status time-sync.target
● time-sync.target - System Time Synchronized
 Loaded: loaded (/usr/lib/systemd/system/time-sync.target; static)
 Active: active since Mon 2022-09-05 14:32:00 CEST; 3 days ago
   Docs: man:systemd.special(7)

Sep 05 14:32:00 host16 systemd[1]: Reached target System Time Synchronized.

Clear now?

Regards,
Ulrich

> 
> Am Fr., 9. Sept. 2022 um 10:52 Uhr schrieb Ulrich Windl
> :
>>
>> >>> Michael Biebl  schrieb am 09.09.2022 um 10:30 in
Nachricht
>> :
>> > I'd probably just use `systemctl status`
>>
>> Can you give some details? I don't see what I'm expecting to see.
>>
>> Regards,
>> Ulrich
>>
>>
>> >
>> > Am Fr., 9. Sept. 2022 um 10:18 Uhr schrieb Ulrich Windl
>> > :
>> >>
>> >> Hi!
>> >>
>> >> I'm wondering: having some specific target, e.g. time-set.target, how
can I
>> > find out what actually "provides" that target?
>> >> I see that I can query what "requires" the given target, but how do I
get
>> > the other direction?
>> >> I mean by using a tool like systemctl, not by finding and grepping some
>> > directories for symbolic links.
>> >>
>> >> Sorry if that turns out to be a stupid question where I should have
known
>> > the answer...
>> >>
>> >> Regards,
>> >> Ulrich
>> >>
>> >>
>> >>
>>
>>
>>
>>





[systemd-devel] Antw: [EXT] Re: Q: Querying units for "what provides" a target

2022-09-09 Thread Ulrich Windl
>>> Michael Biebl  schrieb am 09.09.2022 um 10:30 in Nachricht
:
> I'd probably just use `systemctl status`

Can you give some details? I don't see what I'm expecting to see.

Regards,
Ulrich


> 
> Am Fr., 9. Sept. 2022 um 10:18 Uhr schrieb Ulrich Windl
> :
>>
>> Hi!
>>
>> I'm wondering: having some specific target, e.g. time-set.target, how can I 
> find out what actually "provides" that target?
>> I see that I can query what "requires" the given target, but how do I get 
> the other direction?
>> I mean by using a tool like systemctl, not by finding and grepping some 
> directories for symbolic links.
>>
>> Sorry if that turns out to be a stupid question where I should have known 
> the answer...
>>
>> Regards,
>> Ulrich
>>
>>
>>






[systemd-devel] Q: Querying units for "what provides" a target

2022-09-09 Thread Ulrich Windl
Hi!

I'm wondering: having some specific target, e.g. time-set.target, how can I 
find out what actually "provides" that target?
I see that I can query what "requires" the given target, but how do I get the 
other direction?
I mean by using a tool like systemctl, not by finding and grepping some 
directories for symbolic links.

Sorry if that turns out to be a stupid question where I should have known the 
answer...

Regards,
Ulrich





[systemd-devel] Antw: Antw: [EXT] SRe: VirtualBox VM as a unit failures

2022-09-02 Thread Ulrich Windl
>>> "Ulrich Windl"  schrieb am 02.09.2022
um
10:12 in Nachricht <6311bafe02a10004d...@gwsmtp.uni-regensburg.de>:
> Hi!
> 
> The other thing that came to my mind was the ASCI power button: The system

Time for the week-end: s/ASCI/ACPI/

> might actually block or ignore it (or asking for the admin password). So I
> think it would be a good idea to have a "second line of defense" causing a 
> hard
> stop (power off) if the first method fails within a reasonable timeout.
> That's how cluster VM resources are managed, typically.
> 
> Regards,
> Ulrich
> 
>>>> Colin Guthrie  schrieb am 01.09.2022 um 21:30 in
> Nachricht :
>> Hi,
>> 
>> When you have an ExecStop, it's meant to synchronously stop all 
>> processes. Anything left once it exits, systemd considers fair game for 
>> killing!
>> 
>> So chances are, the ExecStop runs and tells the machine the power button 
>> was pressed. This command exits almost immediately and before the 
>> machine gets much of a chance to process it, systemd goes on a killing 
>> spree to tidy up what's left.
>> 
>> So you may need some kind of synchronous version of the stop command 
>> which waits a bit with it's own timeout. You can likely knock up a 
>> script in bash pretty easily. Here's a snippet you can adapt from a 
>> script (non-systemd) I use for managing VMs which would allow you to 
>> implement a controlled shutdown with a timeout.
>> 
>> There may be other issues at play but this is definitely one of them!!
>> 
>> (other problems may include some processes being are user wide if you 
>> and used by multiple VMs but will get lumped in the cgroup with the 
>> first one you start and thus may be killed when it is powered off even 
>> if other VMs are still running!)
>> 
>> HTHs
>> 
>> Col
>> 
>> 
>> VM_NAME="RHEL7"
>> 
>> function isrunning()
>> {
>>VBoxManage list runningvms | grep -q "$VM_NAME" && echo yes || echo no
>> }
>> function stop()
>> {
>>if [ "yes" != $(isrunning) ]; then
>>  echo "Not running"
>>else
>>  VBoxManage controlvm "$VM_NAME" acpipowerbutton
>>  TIMEOUT=60
>>  echo -n "Waiting for shutdown"
>>  while [ "yes" = $(isrunning) ]; do
>>echo -n "."
>>sleep 1
>>if [ 0 -eq $TIMEOUT ]; then
>>  echo
>>  echo "Timeout waiting for shutdown :-(" >&2
>>  echo "Continuing with forced poweroff"
>>  VBoxManage controlvm "$VM_NAME" poweroff
>>  TIMEOUT=10
>>fi
>>TIMEOUT=$(( $TIMEOUT - 1 ))
>>  done
>>  echo
>>fi
>> }
>> 
>> 
>> Sergio Belkin wrote on 01/09/2022 18:59:
>>> I'm trying to configure a user-level unit for a VirtualBox VM, but it 
>>> does not work well; when I stop it, it complains:
>>> 
>>> systemctl --user status  vbox_vm_start@RHEL7.service 
>>> ○ vbox_vm_start@RHEL7.service - VirtualBox VM RHEL7
>>>   Loaded: loaded 
>>> (/home/sergio/.config/systemd/user/vbox_vm_start@.service; enabled; 
>>> vendor preset: disabled)
>>>   Active: inactive (dead) since Thu 2022-09-01 14:21:57 -03; 5s ago
>>>  Process: 378373 ExecStart=/usr/bin/VBoxManage startvm RHEL7 --type 
>>> headless (code=exited, status=0/SUCCESS)
>>>  Process: 378581 ExecStop=/usr/bin/VBoxManage controlvm RHEL7 
>>> acpipowerbutton (code=exited, status=0/SUCCESS)
>>>Tasks: 40 (limit: 38236)
>>>   Memory: 23.6M
>>>  CPU: 8.114s
>>>   CGroup: 
>>> /user.slice/user-1000.slice/user@1000.service/app.slice/app-vbox_vm_start.slice/vbox_vm_start@RHEL7.service 
>>>   ├─ 378386 /usr/lib/virtualbox/VBoxXPCOMIPCD
>>>   ├─ 378392 /usr/lib/virtualbox/VBoxSVC --auto-shutdown
>>>   └─ 378442 /usr/lib/virtualbox/VBoxHeadless --comment RHEL7 
>>> --startvm f02a9f08-2ff2-4a92-b3cd-a8dfb17513c6 --vrde config
>>> 
>>> sep 01 14:21:51 munster.belkin.home systemd[3452]: Starting 
>>> vbox_vm_start@RHEL7.service - VirtualBox VM RHEL7...
>>> sep 01 14:21:51 munster.belkin.home VBoxManage[378373]: Waiting for VM 
>>> "RHEL7" to power on...
>>> sep 01 14:21:51 munster.belkin.home VBoxManage[378373]: VM "RHEL7" has 
>>> been successfully sta

[systemd-devel] Antw: [EXT] SRe: VirtualBox VM as a unit failures

2022-09-02 Thread Ulrich Windl
Hi!

The other thing that came to my mind was the ASCI power button: The system
might actually block or ignore it (or asking for the admin password). So I
think it would be a good idea to have a "second line of defense" causing a hard
stop (power off) if the first method fails within a reasonable timeout.
That's how cluster VM resources are managed, typically.
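In unit terms that could look roughly like this (a sketch only; the
wait-for-vm-shutdown helper is hypothetical and would poll like Colin's script
quoted below, option names per systemd.service(5)):

[Service]
ExecStop=/usr/bin/VBoxManage controlvm RHEL7 acpipowerbutton
ExecStop=/usr/local/bin/wait-for-vm-shutdown RHEL7
TimeoutStopSec=90

Multiple ExecStop= lines run in sequence, so the forced poweroff can live in the
helper once its own timeout expires.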

Regards,
Ulrich

>>> Colin Guthrie  schrieb am 01.09.2022 um 21:30 in
Nachricht :
> Hi,
> 
> When you have an ExecStop, it's meant to synchronously stop all 
> processes. Anything left once it exits, systemd considers fair game for 
> killing!
> 
> So chances are, the ExecStop runs and tells the machine the power button 
> was pressed. This command exits almost immediately and before the 
> machine gets much of a chance to process it, systemd goes on a killing 
> spree to tidy up what's left.
> 
> So you may need some kind of synchronous version of the stop command 
> which waits a bit with it's own timeout. You can likely knock up a 
> script in bash pretty easily. Here's a snippet you can adapt from a 
> script (non-systemd) I use for managing VMs which would allow you to 
> implement a controlled shutdown with a timeout.
> 
> There may be other issues at play but this is definitely one of them!!
> 
> (other problems may include some processes being are user wide if you 
> and used by multiple VMs but will get lumped in the cgroup with the 
> first one you start and thus may be killed when it is powered off even 
> if other VMs are still running!)
> 
> HTHs
> 
> Col
> 
> 
> VM_NAME="RHEL7"
> 
> function isrunning()
> {
>VBoxManage list runningvms | grep -q "$VM_NAME" && echo yes || echo no
> }
> function stop()
> {
>if [ "yes" != $(isrunning) ]; then
>  echo "Not running"
>else
>  VBoxManage controlvm "$VM_NAME" acpipowerbutton
>  TIMEOUT=60
>  echo -n "Waiting for shutdown"
>  while [ "yes" = $(isrunning) ]; do
>echo -n "."
>sleep 1
>if [ 0 -eq $TIMEOUT ]; then
>  echo
>  echo "Timeout waiting for shutdown :-(" >&2
>  echo "Continuing with forced poweroff"
>  VBoxManage controlvm "$VM_NAME" poweroff
>  TIMEOUT=10
>fi
>TIMEOUT=$(( $TIMEOUT - 1 ))
>  done
>  echo
>fi
> }
> 
> 
> Sergio Belkin wrote on 01/09/2022 18:59:
>> I'm trying to configure a user-level unit for a VirtualBox VM, but it 
>> does not work well; when I stop it, it complains:
>> 
>> systemctl --user status  vbox_vm_start@RHEL7.service 
>> ○ vbox_vm_start@RHEL7.service - VirtualBox VM RHEL7
>>   Loaded: loaded 
>> (/home/sergio/.config/systemd/user/vbox_vm_start@.service; enabled; 
>> vendor preset: disabled)
>>   Active: inactive (dead) since Thu 2022-09-01 14:21:57 -03; 5s ago
>>  Process: 378373 ExecStart=/usr/bin/VBoxManage startvm RHEL7 --type 
>> headless (code=exited, status=0/SUCCESS)
>>  Process: 378581 ExecStop=/usr/bin/VBoxManage controlvm RHEL7 
>> acpipowerbutton (code=exited, status=0/SUCCESS)
>>Tasks: 40 (limit: 38236)
>>   Memory: 23.6M
>>  CPU: 8.114s
>>   CGroup: 
>> /user.slice/user-1000.slice/user@1000.service/app.slice/app-vbox_vm_start.slice/vbox_vm_start@RHEL7.service 
>>   ├─ 378386 /usr/lib/virtualbox/VBoxXPCOMIPCD
>>   ├─ 378392 /usr/lib/virtualbox/VBoxSVC --auto-shutdown
>>   └─ 378442 /usr/lib/virtualbox/VBoxHeadless --comment RHEL7 
>> --startvm f02a9f08-2ff2-4a92-b3cd-a8dfb17513c6 --vrde config
>> 
>> sep 01 14:21:51 munster.belkin.home systemd[3452]: Starting 
>> vbox_vm_start@RHEL7.service - VirtualBox VM RHEL7...
>> sep 01 14:21:51 munster.belkin.home VBoxManage[378373]: Waiting for VM 
>> "RHEL7" to power on...
>> sep 01 14:21:51 munster.belkin.home VBoxManage[378373]: VM "RHEL7" has 
>> been successfully started.
>> sep 01 14:21:51 munster.belkin.home systemd[3452]: Started 
>> vbox_vm_start@RHEL7.service - VirtualBox VM RHEL7.
>> sep 01 14:21:56 munster.belkin.home systemd[3452]: Stopping 
>> vbox_vm_start@RHEL7.service - VirtualBox VM RHEL7...
>> sep 01 14:21:57 munster.belkin.home systemd[3452]: 
>> vbox_vm_start@RHEL7.service: Unit process 378386 (VBoxXPCOMIPCD) remains 
>> running after unit stopped.
>> sep 01 14:21:57 munster.belkin.home systemd[3452]: 
>> vbox_vm_start@RHEL7.service: Unit process 378392 (VBoxSVC) remains 
>> running after unit stopped.
>> sep 01 14:21:57 munster.belkin.home systemd[3452]: 
>> vbox_vm_start@RHEL7.service: Unit process 378442 (VBoxHeadless) remains 
>> running after unit stopped.
>> sep 01 14:21:57 munster.belkin.home systemd[3452]: Stopped 
>> vbox_vm_start@RHEL7.service - VirtualBox VM RHEL7.
>> sep 01 14:21:57 munster.belkin.home systemd[3452]: 
>> vbox_vm_start@RHEL7.service: Consumed 3.386s CPU time.
>> 
>> If I try to start, these are the errors:
>> 
>> × vbox_vm_start@RHEL7.service - VirtualBox VM RHEL7
>>   Loaded: loaded 
>> (/home/sergio/.config/systemd/user/vbox_vm_start@.service; enabled; 

[systemd-devel] Antw: Re: Antw: [EXT] [systemd‑devel] systemd‑tmpfiles: behavior of ‑‑clean

2022-08-31 Thread Ulrich Windl
>>> Wols Lists  schrieb am 31.08.2022 um 09:25 in
Nachricht <6c25970f-c7cb-2728-a827-1452e1937...@youngman.org.uk>:
> On 31/08/2022 07:16, Ulrich Windl wrote:
>>> 'tomcat.<>' with other associated files (or directories below this).
> 
>> I think with the guideline "clean up your own dirt" there wouldn't be so 
> much
>> need for external cleanup if the applications would do a better job. The
>> application always knows best what, how, and when to clean up. (MHO)
>> 
> Always? Sometimes the application doesn't have a clue!

If you write an app and you don't know what temporary data you create or need, 
you have a big problem!

> 
> Simple example, your system crashes (or a thread exits with a crash, 
> security violation, out-of-mem, whatever). Why on earth should the 
> *application* have a clue about what that crashed process was doing? If 

35 years ago vi knew.

> (as is the case with tomcat) it was carrying out a user request, all 
> state has gone.

Then put "volatile" files in a separate directory and wipe that before start.
Other data should be recovered from their location.
It's all a design issue.
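As a sketch of what I mean (the /var/lib/myapp/volatile path is made up), the
unit itself could wipe its scratch area on every start:

ExecStartPre=/usr/bin/rm -rf /var/lib/myapp/volatile
ExecStartPre=/usr/bin/mkdir -p /var/lib/myapp/volatile

Then nothing outside the application has to guess what is safe to delete.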

> 
> And even within the application you can't just come along and clean up 
> stuff that doesn't belong to you because another thread might own it and 

Who said the application should clean up "foreign" stuff?

> be most upset when it disappears. Truly, the sysadmin often *does* know 
> best.

But the sysadmin is not an automatic job ;-)

> 
> Cheers,
> Wol






Re: [systemd-devel] Antw: [EXT] Re: Help required for configuring a blocking service during shutdown

2022-08-31 Thread Ulrich Windl
>>> Henning Moll  schrieb am 31.08.2022 um 08:13 in
Nachricht
:
> Just tried "shutdown -r now" and unfortunately it also doesn't care about 
> systemd-inhibit.

Is the systemd shutdown target triggered anyway? I guess it is. If so, the
problem must be deeper "under the hood".

> Gesendet: Mittwoch, 31. August 2022 um 08:07 Uhr
> Von: "Ulrich Windl" 
> An: "Michael Biebl" , newssc...@gmx.de
> Cc: "systemd-devel@lists.freedesktop.org" 
> Betreff: Antw: [EXT] Re: [systemd-devel] Help required for configuring a blocking service during shutdown
> >>> Henning Moll  schrieb am 30.08.2022 um 16:24 in Nachricht -gmx-bs58>:
> > Hi,
> >
> > This is an interesting mechanism, but unfortunately it is to weak for my use
> > case: It does only delay reboots which are initiated via "systemctl ...". A
> > "sudo reboot" is not delayed
> At least in the past there was a difference between "reboot" and "shutdown -r now":
> The former wouldn't care much about how the reboot is done, while the latter
> would try an orderly shutdown before reboot.
> > Best regards
> > Henning
> > Gesendet: Dienstag, 30. August 2022 um 10:14 Uhr
> > Von: "Michael Biebl" 
> > An: "Mantas Mikulėnas" 
> > Cc: "Henning Moll" , "Systemd" 
> > Betreff: Re: [systemd-devel] Help required for configuring a blocking service during shutdown
> > Would the systemd inhibit interface be an option?
> > https://www.freedesktop.org/wiki/Software/systemd/inhibit/
> > It was designed for that use case after all.




[systemd-devel] Antw: [EXT] [systemd‑devel] systemd‑tmpfiles: behavior of ‑‑clean

2022-08-31 Thread Ulrich Windl
>>> SCOTT FIELDS  schrieb am 30.08.2022 um 21:15 in
Nachricht


> I'm currently running systemd 219‑78.
> 
> I realize there's significant behavior differences in the current release, 
> but per the documentation, I take it the behavior of '‑clean' hasn't changed

> in regards to directly specifications only applying to the directory entry 
> and the top level files. AKA, it still won't do recursive evaluation.
> 
> I also assume that the '‑clean' behavior also still doesn't provide any 
> 'glob' behavior specifications.
> 
> Primary issue for my side is managing tomcat tmp files.
> 
> These create a file in its temp location with a directory named of the form

> 'tomcat.<>' with other associated files (or directories below this).

I think with the guideline "clean up your own dirt" there wouldn't be so much
need for external cleanup if the applications would do a better job. The
application always knows best what, how, and when to clean up. (MHO)

> 
> As such, we can't use 'systemd‑tmpfiles' to purge these, though it's 
> desirable to do so.
> 
> Is the current status for "system‑tmpfiles" still such that the above can't

> be achieved with this subsystem?





[systemd-devel] Antw: [EXT] Re: Help required for configuring a blocking service during shutdown

2022-08-31 Thread Ulrich Windl
>>> Henning Moll  schrieb am 30.08.2022 um 16:24 in
Nachricht
:
> Hi,
> 
> This is an interesting mechanism, but unfortunately it is too weak for my use
> case: It does only delay reboots which are initiated via "systemctl ...". A
> "sudo reboot" is not delayed

At least in the past there was a difference between "reboot" and "shutdown -r
now":
The former wouldn't care much about how the reboot is done, while the latter
would try an orderly shutdown before reboot.

> 
> Best regards
> Henning
> Gesendet: Dienstag, 30. August 2022 um 10:14 UhrVon: "Michael Biebl" 
> An: "Mantas Mikulėnas" Cc: "Henning
Moll" 
> , "Systemd" Betreff:
Re: 
> [systemd-devel] Help required for configuring a blocking service during 
> shutdown
> Would the systemd inhibit interface be an 
> option?https://www.freedesktop.org/wiki/Software/systemd/inhibit/It was 
> designed for that use case after all.




[systemd-devel] Antw: [EXT] Re: [systemd‑devel] logind: discontinuous TTYs?

2022-08-30 Thread Ulrich Windl
>>> Toomas Rosin  schrieb am 26.08.2022 um 19:22 in Nachricht
<6751.1661534553@toomas>:
> Hi,
> 
> Arseny Maslennikov  wrote:
> 
>> $ TTYID=tty2 # for example
>> $ ln ‑s /dev/null /etc/systemd/system/autovt@$TTYID.service
> 
> This works, thank you!

Would the command above be the same as "systemctl mask
autovt@$TTYID.service"?
If so, that command would probably be preferable to messing with the symlinks manually, I
guess.
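For comparison, both variants side by side; masking creates the same /dev/null
symlink under /etc/systemd/system, so they should end up equivalent:

# ln -s /dev/null /etc/systemd/system/autovt@tty2.service
# systemctl mask autovt@tty2.service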

Regards,
Ulrich





[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Ordering units and targets with devices

2022-08-26 Thread Ulrich Windl
>>> Michael Cassaniti  schrieb am 26.08.2022 um 06:46 
>>> in
Nachricht
<01000182d8797b39-375650cc-485b-43ec-84e0-9be3a66f22f4-000...@email.amazonses.co
 
>:
> On 25/8/22 22:22, Lennart Poettering wrote:
>> On Do, 25.08.22 10:50, Michael Cassaniti (mich...@cassaniti.id.au) wrote:
>>
>>> It seems to be somewhat more complicated than that, and perhaps it has more
>>> to do with my setup. Here's my /etc/crypttab which just might explain a bit:
>>>
>>>  # Mount root and swap
>>>  # These will initially have an empty password
>>>  root /dev/disk/by-partlabel/root - 
> fido2-device=/dev/yubico-fido2,token-timeout=0,try-empty-password=true,x-init
> rd.attach
>>>  swap /dev/disk/by-partlabel/swap - 
> fido2-device=/dev/yubico-fido2,token-timeout=0,try-empty-password=true,x-init
> rd.attach
>>>
>>> I think the fact that both of these get setup at boot and will concurrently
>>> try to access the FIDO2 token is causing issues. That crypttab is included
>>> in the initrd.
>> There was an issue with concurrent access to FIDO2 devices conflicting
>> with each other. This was addressed in libfido2 though, it will now
>> take a BSD lock on the device while talking to it, thus synchronizing
>> access properly.
>>
>> See this bug:
>>
>> https://github.com/systemd/systemd/issues/23889 
>>
>> Maybe it's sufficient to update libfido2 on your system?
>>
>>
>> Lennart
>>
>> --
>> Lennart Poettering, Berlin
> Hi Lennart,
> Thanks for the fast response. I've got version 1.11 of libfido2 and it 
> seems I'd need 1.12 (to be released) to fix it [1]. It terrifies me to 
> think what I might break on my system by upgrading libfido2. On Gentoo 

Or "Use the source, Luke": Try to "patch in" just that missing lock into your 
current version.

> there is revdep-rebuild but Ubuntu doesn't have anything like that. I'm 
> on Ubuntu 22.10 which is the latest development version so I can use 
> some shiny new systemd features.
> 
> For now I've written a rather dodgy generator that will scan through the 
> generated units for both cryptsetup and resume, then add in some 
> ordering. Currently it will make the cryptsetup units run serially. I am 
> yet to test it though.
> 
> [1]: https://github.com/Yubico/libfido2/pull/604#issuecomment-1178637796 
> 
> Thanks,
> Michael Cassaniti, Australia






[systemd-devel] Antw: Re: [EXT] Re: org.freedesktop.timedate1.NTPSynchronized not signaled: rationale?

2022-08-18 Thread Ulrich Windl
>>> Mantas Mikulenas  schrieb am 18.08.2022 um 10:20 in
Nachricht
:
> On Thu, Aug 18, 2022 at 10:47 AM Ulrich Windl <
> ulrich.wi...@rz.uni-regensburg.de> wrote:
> 
>> >>> Mantas Mikulenas  schrieb am 17.08.2022 um 15:17 in
>> Nachricht
>> :
>> > On Wed, Aug 17, 2022 at 1:59 PM Etienne Doms 
>> wrote:
>> >
>> >> Hi,
>> >>
>> >> I'm developing an application for an embedded system that needs to
>> >> wait for proper NTP synchronization. systemd-timesyncd is running and
>> >> I can read NTPSynchronized from /org/freedesktop/timedate1 using
>> >> D-Bus. I read in the manual that this property is not signaled, and
>> >> that I need to do some weird magic with timerfd's
>> >> TFD_TIMER_CANCEL_ON_SET flag.
>> >>
>> >> It works, but having the ECANCELLED on the read() means that something
>> >> somewhere did clock_settime(CLOCK_REALTIME, <...>), not especially
>> >> that I got a proper NTP synchronization. Then, I still need to query
>> >> NTPSynchronized after, and retry the timerfd thing if it didn't switch
>> >> to "true", which is still some kind of polling (but very unlikely,
>> >> sure).
>> >>
>> >> As a result, I'm a bit curious, what was the rationale of not simply
>> >> signaling NTPSynchronized?
>> >>
>> >
>> > timedated itself doesn't have knowledge of that event, because it isn't
>> the
>> > daemon that performs actual synchronization (that's timesyncd), so all
>> that
>> > the D-Bus property does is report you the status of adjtimex() –
>> > specifically it returns whether ".maxerror < 1600". Timedated would
>> > still need to poll and/or do timerfd tricks in order to see that state
>> > being reached. (Currently timedated is not a continuously running daemon
>> –
>> > it starts up only whenever properties are queried and exits when idle.)
>> >
>> > A better question is why the timesyncd daemon does not have such a D-Bus
>> > signal; looks like it *almost* does
>> > (org.freedesktop.timesync1.Manager.NTPMessage) but it looks like it only
>> > emits the raw messages and not whether they resulted in a successful
>> sync.
>>
>> Maybe because a "successful sync" is actually not sharply defined.
>> There can be very interesting scenarios (like requiring three "surviving
>> clocks", but only two were found)
>>
> 
> It's an SNTP client, it only deals with one timeserver at a time. And it
> already has a specific definition of "synced" in the code because it sets a
> flag file on the filesystem when that happens, just doesn't do the same via
> D-Bus.

So strictly speaking the event is not "time synced", but "time set".
The difference will be obvious after a week or so ;-)

Regards,
Ulrich

> 
> -- 
> Mantas Mikulėnas





[systemd-devel] Antw: [EXT] Re: org.freedesktop.timedate1.NTPSynchronized not signaled: rationale?

2022-08-18 Thread Ulrich Windl
>>> Mantas Mikulenas  schrieb am 17.08.2022 um 15:17 in
Nachricht
:
> On Wed, Aug 17, 2022 at 1:59 PM Etienne Doms 
wrote:
> 
>> Hi,
>>
>> I'm developing an application for an embedded system that needs to
>> wait for proper NTP synchronization. systemd-timesyncd is running and
>> I can read NTPSynchronized from /org/freedesktop/timedate1 using
>> D-Bus. I read in the manual that this property is not signaled, and
>> that I need to do some weird magic with timerfd's
>> TFD_TIMER_CANCEL_ON_SET flag.
>>
>> It works, but having the ECANCELLED on the read() means that something
>> somewhere did clock_settime(CLOCK_REALTIME, <...>), not especially
>> that I got a proper NTP synchronization. Then, I still need to query
>> NTPSynchronized after, and retry the timerfd thing if it didn't switch
>> to "true", which is still some kind of polling (but very unlikely,
>> sure).
>>
>> As a result, I'm a bit curious, what was the rationale of not simply
>> signaling NTPSynchronized?
>>
> 
> timedated itself doesn't have knowledge of that event, because it isn't the
> daemon that performs actual synchronization (that's timesyncd), so all that
> the D-Bus property does is report you the status of adjtimex() –
> specifically it returns whether ".maxerror < 1600". Timedated would
> still need to poll and/or do timerfd tricks in order to see that state
> being reached. (Currently timedated is not a continuously running daemon –
> it starts up only whenever properties are queried and exits when idle.)
> 
> A better question is why the timesyncd daemon does not have such a D-Bus
> signal; looks like it *almost* does
> (org.freedesktop.timesync1.Manager.NTPMessage) but it looks like it only
> emits the raw messages and not whether they resulted in a successful sync.

Maybe because a "successful sync" is actually not sharply defined.
There can be very interesting scenarios (like requiring three "surviving
clocks", but only two were found)
when the status output of NTP looks as if synchronized, but in fact the time
is "running away".
Actually "NTP sync" is not a Boolean, but more like an indicator ranging from 0
(unsynced) to 1 (well-synced).
(I tried to implement such an indicator in the past, but only very few servers
have a score of "1.0")

Regards,
Ulrich

> 
> For now, if you're using timesyncd you can use inotify to watch
> /run/systemd/timesync/synchronized, which is touched after a sync.
> 
> -- 
> Mantas Mikulėnas
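For reference, a minimal sketch of waiting on that flag file from a shell script
(assumes the inotify-tools package for inotifywait):

until [ -e /run/systemd/timesync/synchronized ]; do
    inotifywait -qq -t 30 -e create -e attrib /run/systemd/timesync/ || true
done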





[systemd-devel] Antw: Re: [EXT] org.freedesktop.timedate1.NTPSynchronized not signaled: rationale?

2022-08-18 Thread Ulrich Windl
>>> Etienne Doms  schrieb am 17.08.2022 um 14:49 in
Nachricht
:
> Oh, should have added a bit more context, indeed.
> 
> The piece of software is bringing a specific interface up, checking
> that something is connected (link going up or not), then issues a DHCP
> request onto it to fetch an IP and NTP servers, and then waits for
> proper NTP synchronization to keep on. If the remote is not present,
> or if it fails to answer us within a given time frame, we give up and
> do something else.
> 
> Maybe we can play with dedicated .service/.target using Requires/After
> and OnFailure to drive all that mecanic with different oneshot
> processes, but using libsystemd internally in a single process sounded
> just more KISS for our usage.
> 
> I'm just confused that I cannot have a proper "NTP sync'ed" event, the
> documentation explicitly states "we don't signal it, use some timerfd
> things" and I'm curious about the rationale of this choice.

Hi!

I suspect such an event wasn't implemented, because you would typically get at
most one per host boot.
(For completeness the opposite event should be there, too)
What will your software do if NTP sync is lost?

Regards,
Ulrich


> 
> Le mer. 17 août 2022 à 14:01, Ulrich Windl
>  a écrit :
>>
>> >>> Etienne Doms  schrieb am 17.08.2022 um 12:58
in
>> Nachricht
>> :
>> > Hi,
>> >
>> > I'm developing an application for an embedded system that needs to
>> > wait for proper NTP synchronization. systemd-timesyncd is running and
>>
>> What's wrong with time-sync.target? Or maybe even time-set.target?
>>
>> > I can read NTPSynchronized from /org/freedesktop/timedate1 using
>> > D-Bus. I read in the manual that this property is not signaled, and
>> > that I need to do some weird magic with timerfd's
>> > TFD_TIMER_CANCEL_ON_SET flag.
>> >
>> > It works, but having the ECANCELLED on the read() means that something
>> > somewhere did clock_settime(CLOCK_REALTIME, <...>), not especially
>> > that I got a proper NTP synchronization. Then, I still need to query
>> > NTPSynchronized after, and retry the timerfd thing if it didn't switch
>> > to "true", which is still some kind of polling (but very unlikely,
>> > sure).
>> >
>> > As a result, I'm a bit curious, what was the rationale of not simply
>> > signaling NTPSynchronized?
>> >
>> > Thanks,
>> > Etienne
>>
>>
>>
>>





[systemd-devel] Antw: [EXT] org.freedesktop.timedate1.NTPSynchronized not signaled: rationale?

2022-08-17 Thread Ulrich Windl
>>> Etienne Doms  schrieb am 17.08.2022 um 12:58 in
Nachricht
:
> Hi,
> 
> I'm developing an application for an embedded system that needs to
> wait for proper NTP synchronization. systemd-timesyncd is running and

What's wrong with time-sync.target? Or maybe even time-set.target?
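A minimal sketch of what I mean for the waiting service (with
systemd-time-wait-sync.service enabled, where available, the target should only
be reached after an actual sync):

[Unit]
Wants=time-sync.target
After=time-sync.target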

> I can read NTPSynchronized from /org/freedesktop/timedate1 using
> D-Bus. I read in the manual that this property is not signaled, and
> that I need to do some weird magic with timerfd's
> TFD_TIMER_CANCEL_ON_SET flag.
> 
> It works, but having the ECANCELLED on the read() means that something
> somewhere did clock_settime(CLOCK_REALTIME, <...>), not especially
> that I got a proper NTP synchronization. Then, I still need to query
> NTPSynchronized after, and retry the timerfd thing if it didn't switch
> to "true", which is still some kind of polling (but very unlikely,
> sure).
> 
> As a result, I'm a bit curious, what was the rationale of not simply
> signaling NTPSynchronized?
> 
> Thanks,
> Etienne






[systemd-devel] Antw: [EXT] [systemd‑devel] Q: Activating persistent Journal in SLES15

2022-08-11 Thread Ulrich Windl
>>> "Ulrich Windl"  schrieb am 11.08.2022
um
09:29 in Nachricht <62f4afee02a10004c...@gwsmtp.uni-regensburg.de>:
> Hi!
> 
> I had activated the persistent journal in SLES15 SP2 using these commands:
> # mkdir /var/log/journal
> # systemctl restart systemd‑journald.service
> 
> A directory had been created automatically:
> systemd‑journald[4273]: System journal 
> (/var/log/journal/e766c8d06f144b1588487221640f55b5) is 8.0M, max 4.0G, 3.9G

> free.
> 
> However when I tried the same with SLES15 SP4, it did not work:
> systemd‑journald[12614]: Runtime Journal 
> (/run/log/journal/1b5c6954ed9447fabb805f9da0419e15) is 8.0M, max 635.7M, 
> 627.7M free.
> 
> Meanwhile the first host is at SP3, but when I list the directories I see a

> difference:
> h16:~ # ll ‑d /var/log/journal/
> drwxr‑sr‑x 1 root systemd‑journal 64 Nov 30  2020 /var/log/journal/
> 
> v02:~ # ll ‑d /var/log/journal/
> drwxr‑xr‑x 1 root root 0 Aug 11 09:16 /var/log/journal/
> 
> I cannot remember having changed the group or permission of 
> /var/log/journal/ after creating it.
> My guess is that some package update did that change automatically, but now

> the procedure to set it up is different.
> 
> Can anybody confirm?
> 
> Both systems use:
> [Journal]
> #Storage=auto
> 
> The SP3 host uses systemd‑246.16‑150300.7.48.1.x86_64, while the SP4 host
uses 
> systemd‑249.11‑150400.8.5.1.x86_64.
> 
> Regards,
> Ulrich

Hi!

It seems I found the solution, but I still wonder: Is it a bug?
After reading the manual, I tried:
v02:~ # journalctl --flush

Then I saw:
Aug 11 09:41:24 v02 systemd-journald[12614]: Time spent on flushing to
/var/log/journal/1b5c6954ed9447fabb805f9da0419e15 is 29.811ms for 2696
entries.
Aug 11 09:41:24 v02 systemd-journald[12614]: System Journal
(/var/log/journal/1b5c6954ed9447fabb805f9da0419e15) is 8.0M, max 4.0G, 3.9G
free.

Why didn't the service restart work?

The permissions are unchanged, BTW:
v02:~ # ll -d /var/log/journal/
drwxr-xr-x 1 root root 64 Aug 11 09:41 /var/log/journal/
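For the archive, the complete sequence that worked here; the systemd-tmpfiles
call is what the journald documentation suggests for getting the group and ACLs
right, so I assume it also explains the permission difference above:

# mkdir -p /var/log/journal
# systemd-tmpfiles --create --prefix /var/log/journal
# journalctl --flush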

Regards,
Ulrich






[systemd-devel] Antw: Re: Antw: [EXT] Re: [systemd‑devel] systemd‑nspawn container not starting on RHEL9.0

2022-08-11 Thread Ulrich Windl
>>> Neal Gompa  schrieb am 11.08.2022 um 09:22 in
Nachricht
:
> On Thu, Aug 11, 2022 at 3:15 AM Ulrich Windl
>  wrote:
>>
>> >>> Lennart Poettering  schrieb am 10.08.2022 um
22:09
>> in
>> Nachricht :
>> > On Mi, 10.08.22 10:13, Thomas Archambault (t...@tparchambault.com)
wrote:
>> >
>> >> Thank you again Lennart, and thx Kevin.
>> >>
>> >> That makes total sense, and accounts for the application's high level
>> >> start‑up delay which appears to be what we are stuck with if we are
over
>> >> xfs. Unfortunately, it's difficult to dictate to the client to change
>> their
>> >> fs type, consequently we can't develop / ship a tool with that baseline
>> >> latency on our primary target platform (RHEL xx.)
>> >>
>> >> So the next obvious question would be, is XFS reflink support on the
>> >> systemd‑nspawn roadmap or actually, (and even better) has support been
>> >> incorporated already in the latest and greatest src and I'm just behind
>> the
>> >> curve working with the older version of nspawn as shipped in RHEL90?
>> >>
>> >> I'm asking because according to the RHEL 9 docs
>> >> (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/managing_file_systems/index#the-xfs-file-system_assembly_overview-of-available-file-systems)
>> >> it's the current default fs and is configured for "Reflink-based file
>> >> copies."
>> >
>> > We issue copy_file_range() syscall, which should do reflinks on xfs,
>> > if it supports that. Question is if your kernel supports that too. I
>> > have no experience with xfs though, no idea how xfs hooked up reflink
>> > initially. And we never tested that really. I don't think outside RHEL
>> > many people use xfs.
>>
>> Not true: For SUSE /home is typically using XFS, and we use it with SLES
for
>> (huge) database filesystems.
>>
> 
> In openSUSE, this hasn't been the default behavior for a while. SLES
> will catch up here eventually.

Accidentally I created some filesystems using "yast2 disk" in SLES15 SP4
(latest updates) today:
There the default for any "data" filesystems is (still) XFS (while the OS uses
BtrFS).
Agreed, if you don't have a separate filesystem for /home, it'll be a BtrFS
subvolume ("fill one, you fill all", I don't like the subvolume concept, or I
didn't understand the benefits)

Regards,
Ulrich

> 
> 
> -- 
> 真実はいつも一つ!/ Always, there's only one truth!





[systemd-devel] Q: Activating persistent Journal in SLES15

2022-08-11 Thread Ulrich Windl
Hi!

I had activated the persistent journal in SLES15 SP2 using these commands:
# mkdir /var/log/journal
# systemctl restart systemd-journald.service

A directory had been created automatically:
systemd-journald[4273]: System journal 
(/var/log/journal/e766c8d06f144b1588487221640f55b5) is 8.0M, max 4.0G, 3.9G 
free.

However when I tried the same with SLES15 SP4, it did not work:
systemd-journald[12614]: Runtime Journal 
(/run/log/journal/1b5c6954ed9447fabb805f9da0419e15) is 8.0M, max 635.7M, 627.7M 
free.

Meanwhile the first host is at SP3, but when I list the directories I see a 
difference:
h16:~ # ll -d /var/log/journal/
drwxr-sr-x 1 root systemd-journal 64 Nov 30  2020 /var/log/journal/

v02:~ # ll -d /var/log/journal/
drwxr-xr-x 1 root root 0 Aug 11 09:16 /var/log/journal/

I cannot remember having changed the group or permission of /var/log/journal/ 
after creating it.
My guess is that some package update did that change automatically, but now the 
procedure to set it up is different.

Can anybody confirm?

Both systems use:
[Journal]
#Storage=auto

The SP3 host uses systemd-246.16-150300.7.48.1.x86_64, while the SP4 host uses 
systemd-249.11-150400.8.5.1.x86_64.

Regards,
Ulrich




[systemd-devel] Antw: [EXT] Re: [systemd‑devel] systemd‑nspawn container not starting on RHEL9.0

2022-08-11 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 10.08.2022 um 22:09
in
Nachricht :
> On Mi, 10.08.22 10:13, Thomas Archambault (t...@tparchambault.com) wrote:
> 
>> Thank you again Lennart, and thx Kevin.
>>
>> That makes total sense, and accounts for the application's high level
>> start‑up delay which appears to be what we are stuck with if we are over
>> xfs. Unfortunately, it's difficult to dictate to the client to change
their
>> fs type, consequently we can't develop / ship a tool with that baseline
>> latency on our primary target platform (RHEL xx.)
>>
>> So the next obvious question would be, is XFS reflink support on the
>> systemd‑nspawn roadmap or actually, (and even better) has support been
>> incorporated already in the latest and greatest src and I'm just behind
the
>> curve working with the older version of nspawn as shipped in RHEL90?
>>
>> I'm asking because according to the RHEL 9 docs 
>> (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/managing_file_systems/index#the-xfs-file-system_assembly_overview-of-available-file-systems)
>> it's the current default fs and is configured for "Reflink-based file
>> copies."
> 
> We issue copy_file_range() syscall, which should do reflinks on xfs,
> if it supports that. Question is if your kernel supports that too. I
> have no experience with xfs though, no idea how xfs hooked up reflink
> initially. And we never tested that really. I don't think outside RHEL
> many people use xfs.

Not true: For SUSE /home is typically using XFS, and we use it with SLES for
(huge) database filesystems.

> 
> If you provide a more complete strace output, you should see the
> copy_file_range() stuff there.
> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





[systemd-devel] Antw: [systemd‑devel] Antw: [EXT] What is the shutdown sequence with systemd and dracut?

2022-08-08 Thread Ulrich Windl
>>> "Ulrich Windl"  schrieb am 08.08.2022
um
14:50 in Nachricht <62f1069002a10004c...@gwsmtp.uni-regensburg.de>:
>>>> Patrick Schleizer  schrieb am 08.08.2022
um
> 14:24 in Nachricht <7abb7852‑c097‑34d6‑c4ea‑f2101fc5d...@whonix.org>:
>> Hi!
>> 
>> This is what I think but please correct me if I am wrong.
>> 
>> 1. systemd runs systemd units for systemd shutdown.target
>> 
>> 2. /lib/systemd/system‑shutdown (shutdown.c) runs
>> 
>> 3. /lib/systemd/system‑shutdown executes /run/initramfs/shutdown (which
>> is dracut)
>> 
>> 4. dracut shutdown.sh performs various cleanup tasks (such as kill all
>> remaining processes and unmount root disk)
> 
> If dracut unmounts the root disk, the following /usr and /lib mist the in 
> initrd, right?

Sorry: s/mist the in/must be in the/

> 
>> 
>> 5. /lib/systemd/system‑shutdown runs scripts in the
>> /usr/lib/systemd/system‑shutdown/ folder
>> 
>> 6. /lib/systemd/system‑shutdown performs further cleanup (similar to
>> dracut, probably some functionality duplicated with dracut, includes
>> kill all remaining processes, unmount the root risk) and eventually
>> halt/reboot/poweroff/kexec.
>> 
>> Cheers,
>> Patrick





[systemd-devel] Antw: [EXT] What is the shutdown sequence with systemd and dracut?

2022-08-08 Thread Ulrich Windl
>>> Patrick Schleizer  schrieb am 08.08.2022 um
14:24 in Nachricht <7abb7852-c097-34d6-c4ea-f2101fc5d...@whonix.org>:
> Hi!
> 
> This is what I think but please correct me if I am wrong.
> 
> 1. systemd runs systemd units for systemd shutdown.target
> 
> 2. /lib/systemd/system-shutdown (shutdown.c) runs
> 
> 3. /lib/systemd/system-shutdown executes /run/initramfs/shutdown (which
> is dracut)
> 
> 4. dracut shutdown.sh performs various cleanup tasks (such as kill all
> remaining processes and unmount root disk)

If dracut unmounts the root disk, the following /usr and /lib mist the in 
initrd, right?

> 
> 5. /lib/systemd/system-shutdown runs scripts in the
> /usr/lib/systemd/system-shutdown/ folder
> 
> 6. /lib/systemd/system-shutdown performs further cleanup (similar to
> dracut, probably some functionality duplicated with dracut, includes
> kill all remaining processes, unmount the root risk) and eventually
> halt/reboot/poweroff/kexec.
> 
> Cheers,
> Patrick






[systemd-devel] Antw: Re: Antw: [EXT] Re: Q: Change a kernel setting

2022-07-29 Thread Ulrich Windl
>>> Thomas HUMMEL  schrieb am 29.07.2022 um 12:13 in
Nachricht <1a6092bd-e540-1642-87a4-811ca9413...@pasteur.fr>:

> 
> On 29/07/2022 11:41, Ulrich Windl wrote:
> 
>>>You can use tmpfiles. In the manpage
> 
> Hello, well it seems to depend on the subsystem. I tried the tmpfiles 
> way but still encountered some unexplained race condition as explained here
> 
> https://lists.freedesktop.org/archives/systemd-devel/2022-July/048100.html 
> 
> So I rolled back to a service unit and even so I did have to order it 
> After= a late (custom) target

Did you try ConditionPathExists= in the Unit?

> 
> None of this was satisfactory but I did not manage to find out what 
> happened.
> 
> Thanks
> 
> --
> Thomas HUMMEL






[systemd-devel] Antw: [EXT] Re: Q: Change a kernel setting

2022-07-29 Thread Ulrich Windl
>>> Tomasz Torcz  schrieb am 29.07.2022 um 11:01 in
Nachricht
:
> On Fri, Jul 29, 2022 at 08:45:51AM +0200, Ulrich Windl wrote:
>> Hi!
>> 
>> I wonder: What is the recommended way to do this with systemd?:
>> ---
>> Add the following command to a script executed on system boot, such as 
> /etc/init.d/boot.local:
>> 
>> # echo 0 > /sys/kernel/mm/ksm/run
>> ---
>> Do I have to write a unit for it, or is there some generic mechanism 
> already?
> 
>   You can use tmpfiles. In the manpage
> (https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html)
> there's a following example, which you can adapt:
> 
> # Modify sysfs but don't fail if we are in a container with a read-only 
> /proc
> w- /proc/sys/vm/swappiness - - - - 10

Great, thanks!
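Adapted for the ksm case from my original question, the line would presumably
be just:

w- /sys/kernel/mm/ksm/run - - - - 0

dropped into e.g. /etc/tmpfiles.d/ksm.conf.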

> 
> 
> -- 
> Tomasz Torcz   “(…) today's high-end is tomorrow's embedded 
> processor.”
> to...@pipebreaker.pl  — Mitchell Blank on LKML





[systemd-devel] Q: Change a kernel setting

2022-07-29 Thread Ulrich Windl
Hi!

I wonder: What is the recommended way to do this with systemd?:
---
Add the following command to a script executed on system boot, such as 
/etc/init.d/boot.local:

# echo 0 > /sys/kernel/mm/ksm/run
---

Do I have to write a unit for it, or is there some generic mechanism already?

Regards,
Ulrich





[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 22.07.2022 um 17:35
in
Nachricht :
> On Fr, 22.07.22 12:15, Lennart Poettering (mzerq...@0pointer.de) wrote:
> 
>> > I guess that would mean holding on to cgroup1 support until EOY 2023
>> > or thereabout?
>>
>> That does sound OK to me. We can mark it deprecated before though,
>> i.e. generate warnings, and remove it from docs, as long as the actual
>> code stays around until then.

I would not remove it from the docs, but declare it obsolete/deprecated
instead.
I think "undocumented" features are a bad thing.

> 
> So I prepped a PR now that documents the EOY 2023 date:
> 
> https://github.com/systemd/systemd/pull/24086 
> 
> That way we shoudn't forget about this, and remind us that we still
> actually need to do it then.
> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 22.07.2022 um 12:15
in
Nachricht :
> On Do, 21.07.22 16:24, Stéphane Graber (stgra...@ubuntu.com) wrote:
> 
>> Hey there,
>>
>> I believe Christian may have relayed some of this already but on my
>> side, as much as I can sympathize with the annoyance of having to
>> support both cgroup1 and cgroup2 side by side, I feel that we're sadly
>> nowhere near the cut off point.
>>
>> >From what I can gather from various stats we have, over 90% of LXD
>> users are still on distributions relying on CGroup1.
>> That's because most of them are using LTS releases of server
>> distributions and those only somewhat recently made the jump to
>> cgroup2:
>>  ‑ RHEL 9 in May 2022
>>  ‑ Ubuntu 22.04 LTS in April 2022
>>  ‑ Debian 11 in August 2021
>>
>> OpenSUSE is still on cgroup1 by default in 15.4 for some reason.
>> All this is also excluding our two largest users, Chromebooks and QNAP
>> NASes, neither of them made the switch yet.
> 
> At some point I feel no sympathy there. If google/qnap/suse still are
> stuck in cgroupv1 land, then that's on them, we shouldn't allow
> ourselves to be held hostage by that.
> 
> I mean, that Google isn't forward looking in these things is well
> known, but I am a bit surprised SUSE is still so far back.

Well, openSUSE actually is rather equivalent to SLES15 (which has existed for some
years now).
I guess they didn't want to switch within a major release.
Everybody is free to file an "enhancement" request at openSUSE's Bugzilla,
however.
...

Regards,
Ulrich




[systemd-devel] Antw: [EXT] Re: Feedback sought: can we drop cgroupv1 support soon?

2022-07-28 Thread Ulrich Windl
Hi!

What about making cgroup1 support _configurable_ as a first step?
So maybe people could try how well things work when there is no cgroups v1
support in systemd.

Regards,
Ulrich

>>> Stéphane Graber  schrieb am 21.07.2022 um 22:24 in
Nachricht
:
> Hey there,
> 
> I believe Christian may have relayed some of this already but on my
> side, as much as I can sympathize with the annoyance of having to
> support both cgroup1 and cgroup2 side by side, I feel that we're sadly
> nowhere near the cut off point.
> 
> From what I can gather from various stats we have, over 90% of LXD
> users are still on distributions relying on CGroup1.
> That's because most of them are using LTS releases of server
> distributions and those only somewhat recently made the jump to
> cgroup2:
>  - RHEL 9 in May 2022
>  - Ubuntu 22.04 LTS in April 2022
>  - Debian 11 in August 2021
> 
> OpenSUSE is still on cgroup1 by default in 15.4 for some reason.
> All this is also excluding our two largest users, Chromebooks and QNAP
> NASes, neither of them made the switch yet.
> 
> I honestly wouldn't be holding deprecating cgroup1 on waiting for
> those few to wake up and transition.
> Both ChromeOS and QNAP can very quickly roll it out to all their users
> should they want to.
> It's a bit trickier for OpenSUSE as it's used as the basis for SLES
> and so those enterprise users are unlikely to see cgroup2 any time
> soon.
> 
> Now all of this is a problem because:
>  - Our users are slow to upgrade. It's common for them to skip an
> entire LTS release and those that upgrade every time will usually wait
> 6 months to a year prior to upgrading to a new release.
>  - This deprecation would prevent users of anything but the most
> recent release from running any newer containers. As it's common to
> switch to newer containers before upgrading the host, this would cause
> some issues.
>  - Unfortunately the reverse is a problem too. RHEL 7 and derivatives
> are still very common as a container workload, as is Ubuntu 16.04 LTS.
> Unfortunately those releases ship with a systemd version that does not
> boot under cgroup2.
> 
> That last issue has been biting us a bit recently but it's something
> that one can currently workaround by forcing systemd back into hybrid
> mode on the host.
> With the deprecation of cgroup1, this won't be possible anymore. You
> simply won't be able to have both CentOS7 and Fedora XYZ running in
> containers on the same system as one will only work on cgroup1 and the
> other only on cgroup2.
> 
> Now this doesn't bother me at all for anything that's end of life, but
> RHEL 7 is only reaching EOL in June 2024 and while Ubuntu 16.04 is
> officially EOL, Canonical provides extended support (ESM) on it until
> April 2026.
> 
> 
> So given all that, my 2 cents would be that ideally systemd should
> keep supporting cgroup1 until June 2024 or shortly before that given
> the usual leg between releasing systemd and it being adopted by Linux
> distros. This would allow for most distros to have made it through two
> long term releases shipping with cgroup2, making sure the vast
> majority of users will finally be on cgroup2 and will also allow for
> those cgroup1-only workloads to have gone away.
> 
> I guess that would mean holding on to cgroup1 support until EOY 2023
> or thereabout?
> 
> Stéphane
> 
> On Thu, Jul 21, 2022 at 5:55 AM Christian Brauner 
wrote:
>>
>> [Cc Stéphane and Serge]
>>
>> On Thu, Jul 21, 2022 at 11:03:49AM +0200, Lennart Poettering wrote:
>> > Heya!
>> >
>> > It's currently a terrible mess having to support both cgroupsv1 and
>> > cgroupsv2 in our codebase.
>> >
>> > cgroupsv2 first entered the kernel in 2014, i.e. *eight* years ago
>> > (kernel 3.16). We soon intend to raise the baseline for systemd to
>> > kernel 4.3 (because we want to be able to rely on the existance of
>> > ambient capabilities), but that also means, that all kernels we intend
>> > to support have a well-enough working cgroupv2 implementation.
>> >
>> > hence, i'd love to drop the cgroupv1 support from our tree entirely,
>> > and simplify and modernize our codebase to go cgroupv2-only. Before we
>> > do that I'd like to seek feedback on this though, given this is not
>> > purely a thing between the kernel and systemd — this does leak into
>> > some userspace, that operates on cgroups directly.
>> >
>> > Specifically, legacy container infra (i.e. docker/moby) for the
>> > longest time was cgroupsv1-only. But as I understand it has since been
>> > updated, to cgroupsv2 too.
>> >
>> > Hence my question: is there a strong community of people who insist on
>> > using newest systemd while using legacy container infra? Anyone else
>> > has a good reason to stick with cgroupsv1 but really wants newest
>> > systemd?
>> >
>> > The time where we'll drop cgroupv1 support *will* come eventually
>> > either way, but what's still up for discussion is to determine
>> > precisely when. hence, please let us know!
>>
>> In general, I wouldn't mind 

[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Running actual systemd‑based distribution image in systemd‑nspawn

2022-07-11 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 30.06.2022 um 22:23
in
Nachricht :
> On Do, 16.06.22 09:27, Colin Guthrie (gm...@colin.guthr.ie) wrote:
> 
>> Andrei Borzenkov wrote on 15/06/2022 16:56:
>> > I tried it (loop mounting qemu image):
>> >
>> > systemd‑nspawn ‑D ./hd0 ‑b
>> >
>> > and it failed miserably with "Timeout waiting for device
>> > dev‑disk‑by...". Which is not surprising as there are no device units
>> > inside of container (it stops in single user allowing me to use sysctl
>> > ‑t device).
>> >
>> > Is it supposed to work at all? Even if I bind mount /dev/disk it does
>> > not help as systemd does not care whether device is actually present or 
> not.
>>
>> I've not tried "booting" a real install inside nspawn before (just images
>> installed by mkosi mostly), but could this just be a by‑product of it
trying
>> to do what /etc/fstab (or other mount units) say to do?
>>
>> Can you try something like:
>>
>> touch blank
>> systemd‑nspawn ‑‑bind‑ro=./blank:/etc/fstab ‑D ./hd0 ‑b
> 
> This should not be necessary, as systemd‑fstab‑generator actually
> ignores all /etc/fstab entries referencing block devices.  See this:
> 
>
> https://github.com/systemd/systemd/blob/main/src/fstab-generator/fstab-generator.c#L602
> 
> (i.e. container managers such as nspawn should mount /sys/ read‑only,
> which is indication to container payloads that device management
> should not be done by them but is done somewhere else. This is used as
> check whether to ignore the fstab entries that look like device patjs,
> i.e. start with /dev).
> 
> How precisely does the offending fstab line look like for you?
> Normally it should be ignored just like that. If it is not ignored,
> this looks like a bug.
> 
>> to override the /etc/fstab (there may be other more elegant ways to
disable
>> fstab processing!) and see if that helps?
> 
> No need. Should happen automatically.
> 
> That said: I strongly recommend that distros ship empty /etc/fstab by
> default, and rely on GPT partition auto discovery
> (i.e. systemd‑gpt‑auto‑generator) to mount everything, and only depart
> from that if there's a strong reason to, i.e. default mount options
> don't work, or external block device referenced or so.

What if you have multiple operating systems in various partitions on one
disk?
/etc/fstab absolutely makes sense there.

Regards,
Ulrich





[systemd-devel] Antw: Re: Antw: Re: Antw: [EXT] Re: Q: Start network in chroot?

2022-06-14 Thread Ulrich Windl
>>> Andrei Borzenkov  schrieb am 14.06.2022 um 10:01 in
Nachricht
:
> On Tue, Jun 14, 2022 at 10:57 AM Wols Lists  wrote:
>>
>> On 14/06/2022 06:57, Ulrich Windl wrote:
>> >> So you're not running an init system but you want the (not-running) init
>> >> system to run something for you?
>>
>> > I don't understand:
>> > The rescue system I'm using (SLES 14 SP3) uses systemd, and the system 
>> > that 

Sorry, my finger was "off by one"; SLES 15 SP3, of course!

> won't boot is also using systemd (SLES15 SP3).
>> >
>> If it's SLES, what does "boot system on disk" do? HOPEFULLY it will
>> transition into the on-disk system using the boot media kernel.
>>
>> But I find it weird that you are using an OLD rescue disk on a NEW`
> s/OLD/non existent/
> 
> SLES14 never existed.






[systemd-devel] Antw: Re: Antw: Re: Antw: [EXT] Re: Q: Start network in chroot?

2022-06-14 Thread Ulrich Windl
>>> Wols Lists  schrieb am 14.06.2022 um 08:34 in
Nachricht :
> On 14/06/2022 06:57, Ulrich Windl wrote:
>>> So you're not running an init system but you want the (not-running) init
>>> system to run something for you?
> 
>> I don't understand:
>> The rescue system I'm using (SLES 14 SP3) uses systemd, and the system that 
> won't boot is also using systemd (SLES15 SP3).
>> 
> If it's SLES, what does "boot system on disk" do? HOPEFULLY it will 

You caught me ;-)
Actually I did not try that, because in most cases you need a real rescue system
(Somehow it's like shopping: You buy the same things even when you don't need 
those urgently)

> transition into the on-disk system using the boot media kernel.

True!

> 
> But I find it weird that you are using an OLD rescue disk on a NEW 
> system. Surely thats *asking* for problems? With raid, if you don't use 

OLD and NEW: Naturally after an online update the installed system will always 
be newer than the rescue image; or are you talking about something completely 
different?

> a "latest and greatest" as your rescue system, as far as we're 
> concerned, you're inviting trouble ...

I'm getting confused: What are you talking about?

Regards,
Ulrich

> 
> Cheers,
> Wol






[systemd-devel] Antw: Re: Antw: Re: Antw: [EXT] Re: Q: Start network in chroot?

2022-06-14 Thread Ulrich Windl
>>> Michal Zegan  schrieb am 14.06.2022 um 09:25 in 
>>> Nachricht


...
>>> Sure when "init" was just a bundle of scripts, you could run one of the
>>> scripts it runs and hope for the best. You can generally still do that,
>>> but just don't expect asking a non-running program to do it for you to work!
>> Still I don't understand: systemd is running.
> 
> on the host. daemons usually read configuration, including service 
> files, from the place they run from. systemd is not running from chroot 
> so it will read services from outside of chroot, doing otherwise would 
> be extremely weird behavior.

Thank you for this explanation; it makes sense. However (as written a moment 
ago) the original error message is not really helpful for understanding the 
root cause of the issue.
But still I guess I cannot have a second systemd in chroot.

> 
> note contrary to sysvinit you are not running service scripts, but you 
> communicate with an already running systemd instance to start a service, 
> so because systemd runs from outside of chroot it cannot start a service 
> as if it was in a chroot, nor can this service read config files from 
> chroot.

OK, the problem seems to be that systemctl does not "pass" the units to 
systemd, but systemd "ate" (and digested) them all before.

> 
> You would literally need running systemd copy related to the chroot 
> which you cannot do without namespacing, and you would need network 
> interface in that ns.

Namespaces are quite new to me; I have no experience with them.

Regards,
Ulrich

> 
> would be an interesting experiment to do without container software tbh.
> 
>>
>> Regards,
>> Ulrich
>>
>>> Col
>>
>>
>>
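
For reference, a minimal sketch of the namespacing approach mentioned above,
using systemd-nspawn (the mount point /mnt is just an example): it boots the
installed tree with its own systemd as PID 1, so that tree's units, including
the networking ones, can be started inside it.

systemd-nspawn -D /mnt --boot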






[systemd-devel] Antw: Re: Antw: Re: Antw: [EXT] Re: Q: Start network in chroot?

2022-06-14 Thread Ulrich Windl
>>> Andrei Borzenkov  schrieb am 14.06.2022 um 08:36 in
Nachricht <41ccac7a-bae8-47a9-f14e-6c9452478...@gmail.com>:
> On 14.06.2022 08:57, Ulrich Windl wrote:
>>>>> Colin Guthrie  schrieb am 13.06.2022 um 16:34 in
>> Nachricht :
>> 
>>> Ulrich Windl wrote on 13/06/2022 14:42:
>>>>>>> Colin Guthrie  schrieb am 13.06.2022 um 14:58 in
>>>> Nachricht :
>>>>> Ulrich Windl wrote on 13/06/2022 09:09:
>>>>>> Hi!
>>>>>>
>>>>>> Two questions:
>>>>>> 1) Why can't I use "systemctl start network" in a chroot environment 
>>>>>> (e.g.
>>>>> mounting the system from a rescue medium to fix a defective kernel)? When 
>>>>> I
>>>>> try I get: "Running in chroot, ignoring command 'start'"
>>>>>> 2) How can I start the network using systemd?
>>>>>
>>>>> You may wish to consider "booting" the container rather than just 
>>>>> chrooting.
>>>>
>>>> No container involved; an unbootable system instead, and I'd like to have 
>>> networking available for repair.
>>>> So obviously I cannot boot. Without systemd it wouldn't be a problem.
>>>
>>> So you're not running an init system but you want the (not-running) init 
>>> system to run something for you?
>> 
>> I don't understand:
>> The rescue system I'm using (SLES 14 SP3) uses systemd, and the system that 
> won't boot is also using systemd (SLES15 SP3).
>> 
>>>
>>> If you're wanting to repair a system, and you need networking then bring 
>>> up the network in the repair image before chrooting surely? (i.e. what 
>>> Mantas said)
>> 
>> Well the configuration files are not in the generic rescue system, but in 
> the system that won't boot (I think I had explained that).
>> Also things became much more complicated with systemd and wickedd and all 
> that stuff.
>> 
>>>
>>> If you want to run the network inside the (broken) system you're trying 
>>> to repair, then just run the networking scripts or program manually. 
>>> i.e. run whatever /etc/init.d/network says or whatever ExecStart= says 
>>> in /usr/lib/systemd/network.service says (paths may vary).
>> 
>> There are no files inside /etc/init.d/.
>> 
>>>
>>> There will be loads of other stuff that the init system does that won't 
>>> be in place (e.g. tmpfiles, etc.) which you may or may not need to setup 
>>> manually too, but you can likely get it running.
>>>
>>>  > Without systemd it wouldn't be a problem.
>>>
> 
> Of course it would for anything that goes beyond simple "ip address add".
> 
>>> Sure when "init" was just a bundle of scripts, you could run one of the 
>>> scripts it runs and hope for the best. You can generally still do that, 
>>> but just don't expect asking a non-running program to do it for you to work!
>> 
>> Still I don't understand: systemd is running.
>> 
> 
> I do not understand what you do not understand. systemd is running in
> your rescue system, not in chroot. You want to tell this systemd to use
> configuration inside chroot, but systemd is rescue system is not aware
> of it. It cannot be done unless you start systemd inside of chroot as
> service manager.

Honestly I never had to think about how systemctl communicates with 
systemd.
Also the message "Running in chroot, ignoring command 'start'" is a very poor 
one if what it really means is:
"There is no systemd to communicate with". To me the original message sounds 
like systemctl refuses to start the unit for some obscure reason.

> 
> And wicked, systemd-networkd, NetwormManager all need functional D-Bus.
> D-Bus is again outside of your chroot with the same caveats. Even
> without systemd in the mix it would be a problem.

Oh, "how many IBM engineers you need to change a light bulb?"
Simplicity has true elegance (Note I'm not talking about being "primitive")

> 
> You have already been told several times - start your networking in
> rescue system before doing chroot. Or use something like systemd-nspawn
> instead of chroot.

I did not even know that systemd-nspawn exists, and actually I don't know what 
a namespace container is.
I guess life has to be that complicated to keep all those engineers busy...

Regards,
Ulrich
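
For illustration, the "start networking before chroot" route can be as small as
a static setup done in the rescue system (interface name and addresses below are
made-up examples; a bonding/bridge setup stored in the broken system would of
course need more):

ip link set eth0 up
ip addr add 192.0.2.10/24 dev eth0
ip route add default via 192.0.2.1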





[systemd-devel] Antw: [EXT] Re: Q: Start network in chroot?

2022-06-14 Thread Ulrich Windl
>>> Topi Miettinen  schrieb am 13.06.2022 um 18:45 in
Nachricht
<70cd12d1-6372-8d8c-90bd-144d1f7b3...@gmail.com>:
> On 13.6.2022 11.09, Ulrich Windl wrote:
>> Hi!
>> 
>> Two questions:
>> 1) Why can't I use "systemctl start network" in a chroot environment (e.g.

> mounting the system from a rescue medium to fix a defective kernel)? When I

> try I get: "Running in chroot, ignoring command 'start'"
> 
>  From docs/ENVIRONMENT.md:
> * `$SYSTEMD_IGNORE_CHROOT=1` — if set, don't check whether being invoked 
> in a
>`chroot()` environment. This is particularly relevant for systemctl, 
> as it
>will not alter its behaviour for `chroot()` environments if set. 
> Normally it
>refrains from talking to PID 1 in such a case; turning most 
> operations such
>as `start` into no-ops.  If that's what's explicitly desired, you might
>consider setting `$SYSTEMD_OFFLINE=1`.
> 
> -Topi

That sounds interesting, but I really don't understand "This is particularly
relevant for systemctl, as it will not alter its behaviour for `chroot()`
environments if set."; does that mean the setting is being ignored?
If so, why is it "relevant" then? Or is it just poor wording, actually meaning:
"it DOES alter the behavior for chroot() environments"?
And the "SYSTEMD_OFFLINE" does the same (bad) thing in non-chroot
environments?

Regards,
Ulrich
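
If I read the quoted documentation correctly, the variable would be used like
this inside the chroot (note it only stops systemctl from downgrading 'start'
to a no-op; there still has to be a service manager it can actually reach):

SYSTEMD_IGNORE_CHROOT=1 systemctl start network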





[systemd-devel] Antw: Re: Antw: [EXT] Re: Q: Start network in chroot?

2022-06-13 Thread Ulrich Windl
>>> Colin Guthrie  schrieb am 13.06.2022 um 16:34 in
Nachricht :

> Ulrich Windl wrote on 13/06/2022 14:42:
>>>>> Colin Guthrie  schrieb am 13.06.2022 um 14:58 in
>> Nachricht :
>>> Ulrich Windl wrote on 13/06/2022 09:09:
>>>> Hi!
>>>>
>>>> Two questions:
>>>> 1) Why can't I use "systemctl start network" in a chroot environment (e.g.
>>> mounting the system from a rescue medium to fix a defective kernel)? When I
>>> try I get: "Running in chroot, ignoring command 'start'"
>>>> 2) How can I start the network using systemd?
>>>
>>> You may wish to consider "booting" the container rather than just chrooting.
>> 
>> No container involved; an unbootable system instead, and I'd like to have 
> networking available for repair.
>> So obviously I cannot boot. Without systemd it wouldn't be a problem.
> 
> So you're not running an init system but you want the (not-running) init 
> system to run something for you?

I don't understand:
The rescue system I'm using (SLES 14 SP3) uses systemd, and the system that 
won't boot is also using systemd (SLES15 SP3).

> 
> If you're wanting to repair a system, and you need networking then bring 
> up the network in the repair image before chrooting surely? (i.e. what 
> Mantas said)

Well the configuration files are not in the generic rescue system, but in the 
system that won't boot (I think I had explained that).
Also things became much more complicated with systemd and wickedd and all that 
stuff.

> 
> If you want to run the network inside the (broken) system you're trying 
> to repair, then just run the networking scripts or program manually. 
> i.e. run whatever /etc/init.d/network says or whatever ExecStart= says 
> in /usr/lib/systemd/network.service says (paths may vary).

There are no files inside /etc/init.d/.

> 
> There will be loads of other stuff that the init system does that won't 
> be in place (e.g. tmpfiles, etc.) which you may or may not need to setup 
> manually too, but you can likely get it running.
> 
>  > Without systemd it wouldn't be a problem.
> 
> Sure when "init" was just a bundle of scripts, you could run one of the 
> scripts it runs and hope for the best. You can generally still do that, 
> but just don't expect asking a non-running program to do it for you to work!

Still I don't understand: systemd is running.

Regards,
Ulrich

> 
> Col






[systemd-devel] Antw: [EXT] Re: Q: Start network in chroot?

2022-06-13 Thread Ulrich Windl
>>> Colin Guthrie  schrieb am 13.06.2022 um 14:58 in
Nachricht :
> Ulrich Windl wrote on 13/06/2022 09:09:
>> Hi!
>> 
>> Two questions:
>> 1) Why can't I use "systemctl start network" in a chroot environment (e.g. 
> mounting the system from a rescue medium to fix a defective kernel)? When I 
> try I get: "Running in chroot, ignoring command 'start'"
>> 2) How can I start the network using systemd?
> 
> You may wish to consider "booting" the container rather than just chrooting.

No container involved; an unbootable system instead, and I'd like to have 
networking available for repair.
So obviously I cannot boot. Without systemd it wouldn't be a problem.

Regards,
Ulrich

> 
> Combined with IPVLAN or similar (which systemd-nspawn makes easy) you 
> can bring up a namespaced network interface inside the container 
> completely isolated from the host.
> 
> I do this for various setups and it works pretty well.
> 
> Col
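
A sketch of what Colin describes (mount point and interface name are examples
only): booting the image under systemd-nspawn with an ipvlan interface derived
from a host NIC, so the container gets its own, isolated network setup.

systemd-nspawn -D /mnt --boot --network-ipvlan=eth0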






[systemd-devel] Wtrlt: Antw: [EXT] Re: Q: Start network in chroot?

2022-06-13 Thread Ulrich Windl
Forgot the list when replying...

>>> Ulrich Windl  schrieb am 13.06.2022 um
10:44
in Nachricht <62a7152a.ed38.00a...@rz.uni-regensburg.de>:
>>>> Mantas Mikulenas  schrieb am 13.06.2022 um 10:13 in
> Nachricht
> :
> > On Mon, Jun 13, 2022 at 11:09 AM Ulrich Windl <
> > ulrich.wi...@rz.uni-regensburg.de> wrote:
> > 
> >> Hi!
> >>
> >> Two questions:
> >> 1) Why can't I use "systemctl start network" in a chroot environment
(e.g.
> >> mounting the system from a rescue medium to fix a defective kernel)?
> > 
> > 
> > Because you don't have systemd as init while inside a chroot.
> > 
> > 2) How can I start the network using systemd?
> >>
> > 
> > Start it outside the chroot.
> 
> That'll be tricky when the configuration (bonding/bridge setup) is stored 
> inside the chroot.
> 
> > 
> > -- 
> > Mantas Mikulėnas
> 
> 
> 
> 





[systemd-devel] Q: Start network in chroot?

2022-06-13 Thread Ulrich Windl
Hi!

Two questions:
1) Why can't I use "systemctl start network" in a chroot environment (e.g. 
mounting the system from a rescue medium to fix a defective kernel)? When I try 
I get: "Running in chroot, ignoring command 'start'"
2) How can I start the network using systemd?

(systemd-246-16 of SLES15 SP3)

Regards,
Ulrich





[systemd-devel] Antw: [EXT] Re: [systemd‑devel] cgroupsv2 and realtime processes

2022-06-07 Thread Ulrich Windl
>>> Michal Koutný  schrieb am 06.06.2022 um 18:29 in
Nachricht
<20220606162925.gh6...@blackbody.suse.cz>:
> On Mon, Jun 06, 2022 at 05:59:32PM +0200, Michał Zegan 

> wrote:
>> I assume if it would be on it would break any and all realtime
>> usage...?
> 
> Most likely (you'd not be able either: turn on RT policy, migrate the
> process or enable CPU controller, i.e. a step that'd lead to an invalid
> state).
> 
> I'm curious, what would be your use case for turning RT group
> schedulling on?

Odd experiments? ;-)

> 
> Thanks,
> Michal





[systemd-devel] Antw: [EXT] Re: [systemd‑devel] MaxRetentionSec does not delete entries older than the specified time

2022-05-30 Thread Ulrich Windl
>>> Barry Scott  schrieb am 29.05.2022 um 14:22 in
Nachricht <038e8fce-5820-4188-91e4-e42b6b448...@barrys-emacs.org>:

> 
>> On 29 May 2022, at 12:36, baba  wrote:
>> 
>> ‑ Mail original ‑
>> De: "Andrei Borzenkov" 
>> 
>>> In which case? Current active journal file gets archived when it is
>>> full. Retention policies apply only to these archived files.
>> 
>> In the case where MaxRetentionSec=3day.
>> And what do you mean by full?.

From the header (in your original message) it was obvious that all the dates
displayed were covered by a single file.
So that file cannot be deleted without deleting everything. Obvious?


>> 
> 
> There are size limits on the journal files. I think that is what is being 
> referred to.
> See man journald.conf and the SystemMaxFileSize for example.
> 
> Barry
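
For reference, a journald.conf sketch combining the two settings mentioned above
(the values are arbitrary examples): smaller journal files get archived sooner,
and only archived files are subject to the retention time.

[Journal]
SystemMaxFileSize=64M
MaxRetentionSec=3day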





[systemd-devel] https://github.com/QubesOS/qubes-issues/issues/7335

2022-05-30 Thread Ulrich Windl
Hi!


Just in case: Does anybody have any idea what might be causing this effect 
(https://github.com/QubesOS/qubes-issues/issues/7335)?


Regards,
Ulrich




[systemd-devel] Antw: [EXT] Re: Q: logger: "invalid structured data parameter: 'fo\o="b\"a\"r"'"

2022-05-02 Thread Ulrich Windl
>>> Mantas Mikulenas  schrieb am 02.05.2022 um 10:50 in
Nachricht
:
> On Mon, May 2, 2022 at 11:40 AM Ulrich Windl <
> ulrich.wi...@rz.uni-regensburg.de> wrote:
> 
>> Hi!
>>
>> If I understand it correctly the logger from
>> util-linux-systemd-2.33.2-4.18.1.x86_64 of SLES12 is part of systemd; if
>> not,
>>
> 
> It's not. It's part of util-linux.

Sorry, I was confused by the "-systemd" suffix.

> 
> -- 
> Mantas Mikulėnas





[systemd-devel] Q: logger: "invalid structured data parameter: 'fo\o="b\"a\"r"'"

2022-05-02 Thread Ulrich Windl
Hi!

If I understand it correctly the logger from
util-linux-systemd-2.33.2-4.18.1.x86_64 of SLES12 is part of systemd; if not,
then please ignore.

I'm testing my implementation of an RFC 5424 syslog daemon, and I tried this
command:

logger -d --id --msgid=test1 -n 127.0.0.1 --port 10514 \
--rfc5424 --sd-id=test1-id@32473 --sd-param 'fo\o="b\"a\"r"' \
--sd-param 'jack="jill"' -t test-tag -p user.notice "${1:-test message}"

While --sd-param 'fo\o="bar"' worked, --sd-param 'fo\o="b\"a\"r"' does not.
Instead I get the message:
logger: invalid structured data parameter: 'fo\o="b\"a\"r"'

RFC 5424 states on page 16 (6.3.3 SD-PARAM):
   Inside PARAM-VALUE, the characters '"' (ABNF %d34), '\' (ABNF %d92),
   and ']' (ABNF %d93) MUST be escaped.  This is necessary to avoid
   parsing errors.

So my interpretation is that when " must be escaped, it's allowed to use \".

Am I wrong?
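
If that interpretation is right, the structured data element on the wire would
look like this (the decoded value of fo\o being b"a"r):

[test1-id@32473 fo\o="b\"a\"r" jack="jill"]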

Regards,
Ulrich




[systemd-devel] Antw: [EXT] Re: Splitting sd-boot from systemd/bootctl for enabling sd-boot in Fedora

2022-05-02 Thread Ulrich Windl
>>> Jóhann B. Guðmundsson  schrieb am 30.04.2022 um 12:03
in
Nachricht :
> On 30.4.2022 07:53, Jóhann B. Guðmundsson wrote:
>> On 30.4.2022 05:08, Andrei Borzenkov wrote:
>>> On 28.04.2022 10:54, Lennart Poettering wrote:
> * systemd-boot is an additional bootloader, rather than replacing
>an existing one, thus increasing the attack surface.
 Hmm, what? "additional bootloader"? Are they suggesting you use grub
 to start sd-boot? I mean, you certainly could do that, but the only
 people I know who do that do that to patch around the gatekeeping that
 the shim people are doing. Technically the boot chain should either be
 [firmware → sd-boot → kernel] or [firmware → shim → sd-boot → kernel]
 (if you buy into the shim thing), and nothing else.

>>> I guess "additional bootloader" in this context means that distribution
>>> cannot use sd-boot as the only bootloader for obvious reason - it is EFI
>>> only. So distribution would need to keep currently used bootloader
>>> anyway.
>>
>>
>> Distributions most certainly can become EFI-only if they choose to do 
>> so; there is nothing technical that stands in that way.
>>
>>
>>> If current bootloader already works on platforms supported by
>>> distribution, what is gained by adding yet another one?
>>
>> Freedom of *choice*
>>
>> If the distribution allows users the freedom to choose from a set of 
>> components that the OS "made of" or runs, to fit the user use cases or 
>> has targeted use cases ( which bootloaders such as syslinux, u-boot, 
>> redboot etc. are aimed at ) then drawing the line at bootloaders makes 
>> no sense.
>>
>> If the distribution does not allow users the freedom to choose, then 
>> it makes no sense to support multiple variants of components that 
>> provide same/similar function in the distribution.
>>
> 
> On that note if you take the bug report [1] that has been cited in this 
> thread then it's quite evident that Debian is not about the freedom of 
> choice.
> 
> "We do not consider it valid to have a choice of boot loaders"
> 
> which immediately excludes ca 20+ Linux/(F)OSS boot loader projects and 
> thus discriminates against the person or group of persons behind those 
> projects and even the person trying to contribute to Debian itself

Well, I think "freedom of choice" versus "support nightmare" is a valid issue
for the bootloader.
Probably you can install any bootloader for Debian, too, but you are (rather)
alone if something does not work as expected.

Reminds me of the classic IBM joke: "How many bootloaders do you need to boot
Linux?"
(Original was something like: "How many IBM engineers do you need to screw in
a light bulb?"; the answer was "100", BTW)

> 
> "Hi
> 
> I'm rescinding this request. I've got a working prototype, but I don't 
> know where this would go."
> 
> 
> The distribution is not even about freedom of information, which 
> prevents individuals from having the ability to seek and receive and 
> impart information effectively. ( to understand the how and thus the why 
> the conclusion was reached which for in this particular case *all* 
> bootloaders projects could look at the dialog, learn from it and fix 
> anything if it affected them or correct any misunderstanding that might 
> be happening. )
> 
> 
> "> Is this discussion public? Can you share it?
> 
> We unfortunately do not have a written record of it."
> 
> ...
> 
> 
> JBG





[systemd-devel] Antw: [EXT] Re: Starting transient services securely from other service without root

2022-04-29 Thread Ulrich Windl
>>> Vašek Šraier  schrieb am 28.04.2022 um 19:47 in
Nachricht
:
> On Thu, 2022-04-28 at 19:53 +0300, Mantas Mikulėnas wrote:
...
> 
> Interesting. The issue also seems to be quite old meaning it's probably
> not a problem in practise.
...

That's a cool definition! So for any bug just sit and wait ;-)

...



[systemd-devel] Antw: [EXT] [systemd‑devel] Query degraded state

2022-04-29 Thread Ulrich Windl
>>> Barry Scott  schrieb am 28.04.2022 um 17:03 in
Nachricht :
> Is there a command I can use to test for the degraded state?
> 
> I could parse the output of systemctl --failed or
> systemctl status but was looking for something less fragile.

Do you mean "systemctl --state=failed list-units"?
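
In a script that could look like this (just a sketch):

if [ -z "$(systemctl --state=failed --no-legend list-units)" ]; then
    echo "not degraded"
fi

Depending on the systemd version, "systemctl is-system-running" may also report
"degraded" directly.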

> 
> Barry





[systemd-devel] Antw: Re: Antw: [EXT] Re: [systemd‑devel] Splitting sd‑boot from systemd/bootctl for enabling sd‑boot in Fedora

2022-04-29 Thread Ulrich Windl
>>> Ian Pilcher  schrieb am 28.04.2022 um 16:40 in 
>>> Nachricht
:
> On 4/28/22 05:30, Ulrich Windl wrote:
>> So are there any distros that have /etc/fstab in initrd?
>> Having to start mount units manually is just terrible when a simple "mount
>> /var" would do.
> 
> Putting /etc/fstab in the initrd would mean that it would need to be
> rebuilt every time that file (or a plugin in /etc/fstab.d, or a mount
> unit) changed.

The environments I have here have fstabs that don't change for years, while 
kernel updates happen monthly at least.
More importantly, it's not necessary that the fstab in the initrd is always up 
to date; it only needs the entries for the base OS (sometimes called the TCB).
Once you have an environment that consistently allows running "mkinitrd", you can 
fix all the other problems.

Regards,
Ulrich Windl

> 
> -- 
> 
> Google  Where SkyNet meets Idiocracy
> 






[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Waiting for all jobs to finish

2022-04-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 28.04.2022 um 10:47
in
Nachricht :
> On Mi, 27.04.22 18:29, Barry Scott (barry@barrys‑emacs.org) wrote:
> 
>> I have two target files that I use.
>>
>> prod.target is used to start up all the production services.
>> When I use systemctl start prod.target it blocks until all the
>> services are running.
>>
>> I also have prod‑upgrade.target that is used to stop service
>> via Conflicts=. When I use systemctl start prod‑upgrade.target
>> it returns immediately but there are stop jobs running.
>>
>> I believe that this is expected as all the services that need to
>> be started have been.
>>
>> Is there a systemctl command that will wait for all the stop jobs
>> without the need to poll for the systemctl list‑jobs to be empty?
> 
> There is not.
> 
> But the correct way to solve this is by combining Conflicts= with an
> order dep (After= or Before=).
> 
> In systemd the ordering deps After=/Before= control three things:
> 
> 1. They literally define the start‑up order if two units are started,
>i.e. this is the obvious case: if b.service has After=a.service
>this means a.service has to finish startup first, before b.service
>is started.
> 
> 2. If two units are stopped they define the order too, but in
>reverse. if b.service has After=a.service this hence means that if
>both are shutdown, then b.service has to stop *before* a.service is
>stopped, i.e. the opposite order of the start‑up order.
> 
> 3. If one unit is started and one is stopped then the existence of an
>ordering dep means the stopped unit must complete stopping first,
>before the starting of the other is initiated. Note that it doesn't
>matter if you set After= or Before= here, that doesn't matter. What
>matters is that you ordered the unit at all, regardless in which
>direction.
> 
> Now, this third rule is what matters here: if your target unit has
> Conflicts= on some service, then the target unit should not enter
> active state until the service fully shutdown. Thus you can place
> After= *or* Before= between the two (your choice) and get the desired
> behaviour.
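
For illustration, a minimal sketch of that arrangement (the service name is made
up); prod-upgrade.target then only becomes active once the service has fully
stopped:

# prod-upgrade.target
[Unit]
Description=Stop production services for upgrade
Conflicts=prod-app.service
After=prod-app.service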

Is there a tool to simulate actions? I mean: list the actions to be started
(not just "start", of course; a "stop" is also "started") and the events to wait
for (events to trigger)?

> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





Re: [systemd-devel] Antw: [EXT] Re: [systemd‑devel] Splitting sd‑boot from systemd/bootctl for enabling sd‑boot in Fedora

2022-04-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 28.04.2022 um 10:33
in
Nachricht :
> On Do, 28.04.22 10:25, Ulrich Windl (ulrich.wi...@rz.uni‑regensburg.de)
wrote:
> 
>> > Well, it sounds backwards to focus on the boot loader UI side of
>> > "recovery" so much if you don't even have any reasonably thing you
>> > could do in case of recovery better than a login prompt/shell...
>>
>> Well, not the shell, the tools are important:
>> Before systemd I could easily recover a system that failed booting (at
some
>> init stage), because I could easily mount the root filesystem and the
tools
>> were there.
>> With systemd I have a crippled minimum emergency environment where almost 
> all
>> required tools are absent (just as the real fstab is). That's one of the 
> first
>> and biggest frustrations with systemd.
> 
> That's a totally bogus claim. systemd has no control on the set of
> packages your distro installs or not. If you are missing some tool in
> your "emergency environment" (for whatever that is, systemd doesn't
> have a concept like that), then bring that up to your distro.
> 
> my educated guess is that your distro is providing some emergency
> kernel for you that comes with a minimized initrd? If that's the case
> it's purely the decision of your distro what to put in there and what
> not.

So are there any distros that have /etc/fstab in initrd?
Having to start mount units manually is just terrible when a simple "mount
/var" would do.

> 
> So you are barking up the very very wrong tree here. Go, complain to
> your distro instead, we have nothing to do with that.

OK.

> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





[systemd-devel] Antw: Re: [systemd‑devel] Antw: [EXT] Re: Q: non‑ASCII in syslog

2022-04-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 28.04.2022 um 10:27
in
Nachricht :
> On Do, 28.04.22 09:32, Ulrich Windl (ulrich.wi...@rz.uni‑regensburg.de)
wrote:
> 
>> Actually I wasn't quite sure about the default config in SLES12.
>> It seems the flow is journald ‑> local rsyslogd ‑> remote syslogd
>>
>> > rsyslogd already knows if messages are UTF‑8 because the system's $LANG
>> > (well, nl_langinfo) says so. And if rsyslog can't trust that for some
>> > reason (e.g. because a user might have a different locale), then
>> > systemd‑journald won't be able to trust it either, so it won't know
whether
>> > it could add the BOM.
>>
>> How could a remote syslog server know what the locale on the sending
system
>> is?
> 
> Your local rsyslogd could add the BOM when it transforms journal
> messages to syslog datagrams.
> 
>> > RFC 3164 over the network to a remote server? Outside the scope for
>> > systemd, since it doesn't generate the network packets; your local
rsyslogd
>> > forwarder does. (Also, why RFC 3164 and not 5425?)
>>
>> If you look outside the world of systemd, about 99% of systems create the 
> RFC
>> 3164 type of messages.
> 
> That's a wild claim, and simply wrong actually.

Well, actually, as systemd cannot send syslog messages to a remote host, which
systems do you know that send RFC 5424 messages?
Actually I know of none here.

> 
> I am pretty sure that more than 50% of syslog messages generated on
> this earth probably are synthesized by glibc's syslog() API. And that
> turns out to be neither conformant to RFC 3164 nor to RFC 5425.

No idea. Can you give an example?

> 
> What glibc sends is close to RFC 3164 but omits one key field that
> isn't really optional according to RFC 3164: the 'HOSTNAME' field.

Maybe the API is not used correctly. RFC 3164 says:
"A relay will add a TIMESTAMP and SHOULD add a HOSTNAME as follows (...)"
So when sending to any remote syslog a HOSTNAME should be there.
(It's like an MTA adding a Message-ID (and other fields) if none is present.)

Most notably, the RFC seems to allow a missing hostname initially.

> 
> systemd is focussed on reality: we generate and process the same
> format glibc generates.

I'm wondering which API all those programs use that create correct syslog
entries.
I tried with my own program:
It sends:
connect(1, {sa_family=AF_LOCAL, sun_path="/dev/log"}, 110) = 0
sendto(1, "<31>Apr 28 11:08:32 iotwatch[239"..., 56, MSG_NOSIGNAL, NULL, 0) =
56

What's logged is:
Apr 28 11:08:32 host-name iotwatch[239...

Also from the syntax being sent by the application, one cannot really say
whether the hostname is missing.
Maybe the trick is that /dev/log is specified as source for _local_ syslog
messages (so that there's no reason or sense in supplying the local hostname).
Also I'm not sure whether the messages in /dev/log are covered by the RFC.

Regards,
Ulrich Windl

> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





[systemd-devel] Antw: Re: [EXT] Re: Q: non-ASCII in syslog

2022-04-28 Thread Ulrich Windl
>>> Mantas Mikulenas  schrieb am 28.04.2022 um 09:39 in
Nachricht
:
> On Thu, Apr 28, 2022 at 10:32 AM Ulrich Windl <
> ulrich.wi...@rz.uni-regensburg.de> wrote:
> 
>> >>> Mantas Mikulenas  schrieb am 27.04.2022 um 12:03 in
>> Nachricht
>> :
>> > On Wed, Apr 27, 2022 at 10:09 AM Ulrich Windl <
>> > ulrich.wi...@rz.uni-regensburg.de> wrote:
>> >
>> >> Hi!
>> >>
>> >> Having written an RFC 3164 compatible syslog daemon, I noticed that
>> systemd
>> >> created syslog messages with non-ASCII characters.
>> >> The problem is that a remote syslogd can hardly guess the correct
>> character
>> >> set (I'm using rsyslog to forward local messages to a remote server).
>> >>
>> >> Example of such message:
>> >> systemd-tmpfiles[3311]: [/usr/lib/tmpfiles.d/svnserve.conf:1] Line
>> >> references
>> >> path below legacy directory /var/run/, updating /var/run/svnserve →
>> >> /run/svnserve; please update the tmpfiles.d/ drop-in file accordingly.
>> >>
>> >> (The arrow is encoded as three bytes (\xe2\x86\x92))
>> >>
>> >> RFC 5425 syslog messages require the use of a BOM (%xEF.BB.BF) at the
>> >> beginning of a message if the message used UTF-8:
>> >>
>> >>   MSG = MSG-ANY / MSG-UTF8
>> >>   MSG-ANY = *OCTET ; not starting with BOM
>> >>   MSG-UTF8= BOM UTF-8-STRING
>> >>   BOM = %xEF.BB.BF
>> >>
>> >> Wouldn't it make sense to add such a BOM for RFC 3164 syslog messages
>> also
>> >> if
>> >> non-ASCII (i.e.: UTF-8) encoded characters are used?
>> >>
>> >
>> > RFC 3164 over a local socket from journald to local rsyslogd? The local
>>
>> Actually I wasn't quite sure about the default config in SLES12.
>> It seems the flow is journald -> local rsyslogd -> remote syslogd
>>
>> > rsyslogd already knows if messages are UTF-8 because the system's $LANG
>> > (well, nl_langinfo) says so. And if rsyslog can't trust that for some
>> > reason (e.g. because a user might have a different locale), then
>> > systemd-journald won't be able to trust it either, so it won't know
>> whether
>> > it could add the BOM.
>>
>> How could a remote syslog server know what the locale on the sending
system
>> is?
>>
> 
> It's not remote, it's local. I'm talking about the one that's receiving
> messages from journald on the same machine.
> 
> 
>>
>> >
>> > RFC 3164 over the network to a remote server? Outside the scope for
>> > systemd, since it doesn't generate the network packets; your local
>> rsyslogd
>> > forwarder does. (Also, why RFC 3164 and not 5425?)
>>
>> If you look outside the world of systemd, about 99% of systems create the
>> RFC
>> 3164 type of messages.
>> Some may send non-ASCII too, however.
>>
> 
> Still outside the scope of systemd. Systemd doesn't send RFC 3164 messages
> over the network, either.

Correct: It does not send, because it's unable to do so. That's why I used
rsyslogd.

> 
> 
>>
>> >
>> > Generally, if a message successfully decodes as UTF-8 then it's most
>> likely
>> > actual UTF-8 (and if UTF-8 decode fails then you fall back to
ISO8859-1).
>> > Various old systems get away with this without needing a UTF-8 BOM.
>>
>> Yes, you can just output what you received, hoping the messages will be
>> presented correctly.
>> It's just like sending 8-bit e-mail without a coding system or charset in
>> the
>> past.

What I meant to say was: Guessing the encoding is a bad concept.

>>
> 
> Which is not what I was saying, but sure, whatever.
> 
> -- 
> Mantas Mikulėnas





[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Splitting sd‑boot from systemd/bootctl for enabling sd‑boot in Fedora

2022-04-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 27.04.2022 um 18:04
in
Nachricht :
> On Mi, 27.04.22 11:10, Neal Gompa (ngomp...@gmail.com) wrote:
> 
>> > Rebooting from the DE has advantages: nice UI without much work, l10n,
>> > accessibility, help, integration with normal auth mechanisms (e.g.
polkit
>> > auth for non‑default boot entries or firmware setup), no need to
>> > fiddle with pressing keys at the exactly right time.
>>
>> It also has a major downside that in the event the OS doesn't boot,
>> you don't have a friendly way to do recovery.
> 
> What does "recovery" precisely mean for you? I mean, on Linux this
> usually means you'll be dumped at a login prompt/shell in one way or
> another. How does it matter whether you first showed a graphical icon
> in that case?
> 
>> Nowadays both Windows and macOS provide graphical boot managers and
>> graphical tools/environments for recovery. These are both things I
>> want in Fedora as well.
> 
> Well, it sounds backwards to focus on the boot loader UI side of
> "recovery" so much if you don't even have any reasonably thing you
> could do in case of recovery better than a login prompt/shell...

Well, not the shell, the tools are important:
Before systemd I could easily recover a system that failed booting (at some
init stage), because I could easily mount the root filesystem and the tools
were there.
With systemd I have a crippled minimum emergency environment where almost all
required tools are absent (just as the real fstab is). That's one of the first
and biggest frustrations with systemd.

> 
> Quite frankly, I think we should actually focus on real improvements
> to recovery stuff, i.e. boot counting/automatic fallback on failed
> boots. which sd‑boot all implements btw, in conjunction with systemd

At the current state of AI, I'd prefer manual recovery over any "automatic".
Last time I had permitted Windows to try automatic recovery, it messed up the
system so severely that I had to restore from backup.
(Only the AHCI mode was lost after a drained BIOS battery).

> userspace. That kind of stuff makes whole sets of problems go away
> entirely, and is *actually* helpful. Whether we first show a graphical
> icon or just a text before we dump you in a shell prompt once all is
> lost anyway is kinda a pointless discussion if you ask me.

fsck, only trying to fix obvious, non-controversial issues automatically but
requiring manual mode otherwise, proved to be a very successful approach over the
years.
Still, users could run it with the "-y" option to get "something" that might work,
but probably losing some data that could otherwise be recovered.

> 
> For me recovery means something very different than graphical icons I
> must say.

Sadly, today many users judge from the look of the icons, not from the tools
behind.
(If you ever followed Android's syslog, you know what I mean... ;-)

Regards,
Ulrich





[systemd-devel] Antw: [EXT] Re: Splitting sd-boot from systemd/bootctl for enabling sd-boot in Fedora

2022-04-28 Thread Ulrich Windl
>>> Neal Gompa  schrieb am 27.04.2022 um 17:26 in Nachricht
:

...
>> E.g. the biggest development in how the boot looks in recent years
>> in Fedora has been hiding on the boot menus and boot messages by
>> default. I.e. the _design_ is that you start with the logo of the
>> manufacturer which is seamlessly replaced by the gdm login screen.
>> How the boot menu looks never factors into any of this.
>>
> 
> Hiding them by default doesn't mean making them scary and semi-useless
> when you access them. Most people don't access UEFI menus very often,
> if at all, and yet a huge amount of investment went into making that
> UX better than it was in the past. Is it spectacular? No. Is it less
> scary? Absolutely.

Actually without systemd it's good, but with systemd you just experience 
minutes of delay (while systemd waits for something that isn't working) while 
you don't see what's going on.
So I disable all those quiet and silent options. The boot looks really messy 
then (as nobody ever considered having boot messages on the console), but I'm 
less bored while waiting for the network cards to configure, disks being 
discovered and multipaths being collected...

...

Maybe those systems should log boot messages on tty1, but switch to tty2 (and 
continue from there) for users that don't care about details.
Traditionally the "system console" was not intended for "users" (but for 
administrators).

Regards,
Ulrich





[systemd-devel] Antw: [EXT] Re: Q: non-ASCII in syslog

2022-04-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 27.04.2022 um 13:10
in
Nachricht :
> On Mi, 27.04.22 09:09, Ulrich Windl (ulrich.wi...@rz.uni-regensburg.de) 
> wrote:
> 
>> Hi!
>>
>> Having written an RFC 3164 compatible syslog daemon, I noticed that
systemd
>> created syslog messages with non-ASCII characters.
>> The problem is that a remote syslogd can hardly guess the correct
character
>> set (I'm using rsyslog to forward local messages to a remote
>> server).
> 
> It's 2022. I think at this point, software should always assume the
> charset is UTF-8 if it doesn't have a reason to believe otherwise.
> 
> It's kinda what we started to do all across our codebase really. We'll
> use UTF-8 for everything by default. For some things where people
> complain sufficiently loudly we'll conditionalize them so that we have
> some fallback in place if we know for sure UTF-8 is not OK, but the
> default we do is always and everywhere UTF-8.
> 
>> Example of such message:
>> systemd-tmpfiles[3311]: [/usr/lib/tmpfiles.d/svnserve.conf:1] Line 
> references
>> path below legacy directory /var/run/, updating /var/run/svnserve →
>> /run/svnserve; please update the tmpfiles.d/ drop-in file accordingly.
>>
>> (The arrow is encoded as three bytes (\xe2\x86\x92))
>>
>> RFC 5425 syslog messages require the use of a BOM (%xEF.BB.BF) at the
>> beginning of a message if the message used UTF-8:
> 
> We do not implement RFC 5425, as glibc doesn't support that. In fact
> we don't even implement RFC 3164 in full, since glibc generates the
> messages in a very specific format only.
> 
>>
>>   MSG = MSG-ANY / MSG-UTF8
>>   MSG-ANY = *OCTET ; not starting with BOM
>>   MSG-UTF8= BOM UTF-8-STRING
>>   BOM = %xEF.BB.BF
>>
>> Wouldn't it make sense to add such a BOM for RFC 3164 syslog messages also

> if
>> non-ASCII (i.e.: UTF-8) encoded characters are used?
> 
> There's plenty software that doesn't support RFC 5425, and putting a
> BOM first is certainly not implemented in any of those. I think BOM is
> hideous and defaulting to UTF-8 generally safe. If we'd put BOM first,
> these messages would likely not be compatible with a large variety of
> consumers anymore, because they can't handle BOM. This would be worse

That's a non-argument:
You say you don't adhere to any of the standards, and claim that if you did,
things would break. ???

> than the status quo I am sure, since if we just send UTF-8 things
> should generally just work fine for any software that either a) also
> defaults to UTF-8 when encountering an 8bit char or b) is agnostic to
> charsets and just passes data thorugh.

Yes, put your head in the sand, hoping the problems are gone when you look up
again... ;-)

> 
> So, yeah, we might be stretching standards and tradition a bit, but
> it actually works out quite well so far.

A good argument for driving without a safety belt, BTW.

Regards,
Ulrich

> 
> Lennart
> 
> --
> Lennart Poettering, Berlin





[systemd-devel] Antw: [EXT] Re: Q: non-ASCII in syslog

2022-04-28 Thread Ulrich Windl
>>> Mantas Mikulenas  schrieb am 27.04.2022 um 12:03 in
Nachricht
:
> On Wed, Apr 27, 2022 at 10:09 AM Ulrich Windl <
> ulrich.wi...@rz.uni-regensburg.de> wrote:
> 
>> Hi!
>>
>> Having written an RFC 3164 compatible syslog daemon, I noticed that
systemd
>> created syslog messages with non-ASCII characters.
>> The problem is that a remote syslogd can hardly guess the correct
character
>> set (I'm using rsyslog to forward local messages to a remote server).
>>
>> Example of such message:
>> systemd-tmpfiles[3311]: [/usr/lib/tmpfiles.d/svnserve.conf:1] Line
>> references
>> path below legacy directory /var/run/, updating /var/run/svnserve →
>> /run/svnserve; please update the tmpfiles.d/ drop-in file accordingly.
>>
>> (The arrow is encoded as three bytes (\xe2\x86\x92))
>>
>> RFC 5425 syslog messages require the use of a BOM (%xEF.BB.BF) at the
>> beginning of a message if the message used UTF-8:
>>
>>   MSG = MSG-ANY / MSG-UTF8
>>   MSG-ANY = *OCTET ; not starting with BOM
>>   MSG-UTF8= BOM UTF-8-STRING
>>   BOM = %xEF.BB.BF
>>
>> Wouldn't it make sense to add such a BOM for RFC 3164 syslog messages also
>> if
>> non-ASCII (i.e.: UTF-8) encoded characters are used?
>>
> 
> RFC 3164 over a local socket from journald to local rsyslogd? The local

Actually I wasn't quite sure about the default config in SLES12.
It seems the flow is journald -> local rsyslogd -> remote syslogd

> rsyslogd already knows if messages are UTF-8 because the system's $LANG
> (well, nl_langinfo) says so. And if rsyslog can't trust that for some
> reason (e.g. because a user might have a different locale), then
> systemd-journald won't be able to trust it either, so it won't know whether
> it could add the BOM.

How could a remote syslog server know what the locale on the sending system
is?

> 
> RFC 3164 over the network to a remote server? Outside the scope for
> systemd, since it doesn't generate the network packets; your local rsyslogd
> forwarder does. (Also, why RFC 3164 and not 5425?)

If you look outside the world of systemd, about 99% of systems create the RFC
3164 type of messages.
Some may send non-ASCII too, however.

> 
> Generally, if a message successfully decodes as UTF-8 then it's most likely
> actual UTF-8 (and if UTF-8 decode fails then you fall back to ISO8859-1).
> Various old systems get away with this without needing a UTF-8 BOM.

Yes, you can just output what you received, hoping the messages will be
presented correctly.
It's just like sending 8-bit e-mail without a coding system or charset in the
past.

Regards,
Ulrich



[systemd-devel] Q: non-ASCII in syslog

2022-04-27 Thread Ulrich Windl
Hi!

Having written an RFC 3164 compatible syslog daemon, I noticed that systemd
created syslog messages with non-ASCII characters.
The problem is that a remote syslogd can hardly guess the correct character
set (I'm using rsyslog to forward local messages to a remote server).

Example of such message:
systemd-tmpfiles[3311]: [/usr/lib/tmpfiles.d/svnserve.conf:1] Line references
path below legacy directory /var/run/, updating /var/run/svnserve →
/run/svnserve; please update the tmpfiles.d/ drop-in file accordingly.

(The arrow is encoded as three bytes (\xe2\x86\x92))

RFC 5425 syslog messages require the use of a BOM (%xEF.BB.BF) at the
beginning of a message if the message used UTF-8:

  MSG = MSG-ANY / MSG-UTF8
  MSG-ANY = *OCTET ; not starting with BOM
  MSG-UTF8= BOM UTF-8-STRING
  BOM = %xEF.BB.BF

Wouldn't it make sense to add such a BOM for RFC 3164 syslog messages also if
non-ASCII (i.e.: UTF-8) encoded characters are used?
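
As an illustration of the proposal, the forwarded message above would then start
like this on the wire (PRI and timestamp are made-up examples, the BOM shown as
escaped bytes):

<13>Apr 27 09:09:00 myhost systemd-tmpfiles[3311]: \xEF\xBB\xBF[/usr/lib/tmpfiles.d/svnserve.conf:1] Line references ...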

systemd in use is systemd-228-157.38.4 of SLES12 SP5...

Regards,
Ulrich





Re: [systemd-devel] Antw: [EXT] Re: Disallowing fingerprint authentication if pam_systemd_home.so needs a password

2022-04-26 Thread Ulrich Windl
>>> juice  schrieb am 26.04.2022 um 09:11 in Nachricht
<2a780de8-efb2-749b-de43-62978958f...@swagman.org>:
> On 4/26/22 09:41, Ulrich Windl wrote:
>>>
>>> Using fingerprint for *authentication* is totally broken concept which
>>> should never be allowed.
>> Why? Is a PIN any better?
> 
> PIN is much better. You will not be leaving your PIN to any drinking 
> glass you handle or to doorhandles that you open. People leave 
> fingerprints all around the place and it has been repeatedly 
> demonstrated that fingerprints can be easily extracted and replicated to 
> silicone fingers which can be used to fool fingerprint readers.
> 
> 
>>> We leave our fingerprints lying around all the time, and given malicious
>>> enough adversaries they might as well take our fingers too. (I sure would
>>> like to avoid that possibility!!)
>> So you are saying users leave themselves lying around everywhere? ;-)
> 
> People leave fingerprints. Fingerprints can be used to open devices 
> locked by fingerprint. There is also a risk that someone may kill you 
> and cut off your finger.
> 
> 
>>> Fingerprints can be used on place of username, that is OK and does not
>>> present similar risks.
>> Fingerprints are mote than a userid IMHO.
> 
> Fingerprint is exactly that, it is user identification. The police have 
> been using fingerprints now 130 years for identifying people. Some 
> misguided fools have been trying to use fingerprints as substitute for 
> phone unlock PIN for maybe 10 years or so.

Actually I think using a fingerprint to unlock the phone is much safer than 
using a short PIN or some swipe pattern:
If someone watches me unlock my phone with my finger on public transport, he'll 
have trouble unlocking it if the phone is stolen, but you can easily watch short 
PINs or swipe patterns from a distance.

Regards,
Ulrich

> 
>- juice -






[systemd-devel] Antw: [systemd‑devel] Antw: [EXT] Re: Disallowing fingerprint authentication if pam_systemd_home.so needs a password

2022-04-26 Thread Ulrich Windl
>>> "Ulrich Windl"  schrieb am 26.04.2022 um
08:41 in Nachricht <6267942302a100049...@gwsmtp.uni-regensburg.de>:
>>>> juice  schrieb am 25.04.2022 um 17:03 in Nachricht

...
>> Fingerprints can be used on place of username, that is OK and does not 
>> present similar risks.
> 
> Fingerprints are mote than a userid IMHO.

s/mote/more/ # sorry





[systemd-devel] Antw: [EXT] Re: Disallowing fingerprint authentication if pam_systemd_home.so needs a password

2022-04-26 Thread Ulrich Windl
>>> juice  schrieb am 25.04.2022 um 17:03 in Nachricht
<4cbf03ca-7a0a-4dbe-ad00-c6f3938ff...@swagman.org>:

> 
> 25. huhtikuuta 2022 16.39.56 GMT+03:00 Benjamin Berg 
> kirjoitti:
>>On Mon, 2022-04-25 at 13:28 +0200, Lennart Poettering wrote:
>>> 
>>> Hmm, not sure I follow? I don't know how fingerprint flow of control
>>> is. Is this about authentication-by-fingerprint? Or already about
>>> user-selection-by-fingerprint?
>>
>>I was just thinking of authentication-by-fingerprint. Though I don't
>>think it makes a big difference here.
>>
> 
> Using fingerprint for *authentication* is totally broken concept which 
> should never be allowed.

Why? Is a PIN any better?

> Fingerprints are *userid*, never *password*.
> 
> We leave our fingerprints lying around all the time, and given malicious 
> enough adversaries they might as well take our fingers too. (I sure would 
> like to avoid that possibility!!)

So you are saying users leave themselves lying around everywhere? ;-)

> 
> Fingerprints can be used on place of username, that is OK and does not 
> present similar risks.

Fingerprints are mote than a userid IMHO.

> 
>   - Juice -






[systemd-devel] Antw: [EXT] Re: Waiting for (transient) hostname configuration

2022-04-21 Thread Ulrich Windl
>>> Alessio Igor Bogani  schrieb am 20.04.2022 um 
>>> 22:09
in Nachricht
:

...
> The %H specifier in the commented ExecStart always returns
> "localhost". The following ExecStart is my workaround to have the

I wonder whether a future version could interpret "\%H" as "substitute %H when 
it's used, not when it was parsed the first time" ("%%" is in use already).

> hostname provided by DHCP (in the first version it was a while loop
> but a sleep works anyway and make things less convoluted).
> 
> Ciao,
> Alessio






[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Waiting for (transient) hostname configuration

2022-04-20 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 19.04.2022 um 11:41
in
Nachricht :
> On Di, 19.04.22 11:14, Alessio Igor Bogani (alessio.bog...@elettra.eu)
wrote:
> 
>> Dear systemd developers,
>>
>> Sorry for my very bad english.
> 
> Not bad at all.
> 
>> We use the hostname provided by DHCP to start the right subset of our
>> applications on each of our machines (in other words the transient
>> hostname is used as predictable machine identifier).
>>
>> Making it work on the ancient SysV is straightforward but I can't find
>> a (reasonable) way to achieve the same goal with systemd. We already
>> use both "Requires" and "After" with "network‑online.target
>> nss‑lookup.target" into our units but it isn't enough. When our units
>> start they obtain "localhost" from %H specifier. We don't find a way
>> to wait for (transient) hostname to be configured. The only solution
>> that it seems to work is to replace our binary in ExecStart with a
>> script which make a busy‑loop until `hostname` output doesn't differ
>> from "localhost".
>>
>> Is there a better way?
> 
> First of all: % specifiers (including %H) are resolved at the moment
> the unit files are *loaded*, i.e. typically at earliest boot, long

That's quite a surprise: I thought they were evaluated when the unit is
executed (as in shell scripts).
Is there a way to "reload" a specific unit file?

> before they are actually *started*. Thus, if you change the hostname
> and want %H to resolve to that, you need to issue a reload at the
> right moment, i.e. issue `systemctl daemon‑reload` once the hostname
> is acquired.
> 
> If your hostname is set via DHCP you need to wait for DHCP to be
> acquired. How that's done, depends on the networking solution you
> use. If you use systemd‑networkd, then the
> sytemd‑network‑wait‑online.service is what you want to use. If you
> enable that then network‑online.target should be the point where DHCP
> is acquired and thus also the hostname in effect.
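
For illustration, the sequence described above, e.g. run from a hook once DHCP
has set the transient hostname (the unit name is made up):

systemctl daemon-reload
systemctl restart my-app.service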
> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





Re: [systemd-devel] Antw: [EXT] Re: [systemd‑devel] device unit files

2022-04-14 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 14.04.2022 um 09:45
in
Nachricht :
> On Do, 14.04.22 08:00, Ulrich Windl (ulrich.wi...@rz.uni‑regensburg.de)
wrote:
> 
>> >>> Lennart Poettering  schrieb am 13.04.2022 um
17:38
>> in
>> Nachricht :
>> > On Di, 12.04.22 14:38, Elbek Mamajonov (emm.boxin...@gmail.com) wrote:
>> >
>> >> On graph I have mmcblk.device taking 1.624s. From the point that
>> >> this partition is where my rootfs lies, and systems does remounting
>> >> of rootfs, I did look up for systemd‑remount‑fs.service, it took
>> >> 231ms, and systemd‑udev‑trigger.service, as you suggested, it took
>> >> 517ms to become active. But even after systemd‑udev‑trigger.service
>> >> becomes active there is about 800ms for mmcblk.device to become
>> >> active. Are those services related to the activation of the
>> >> mmcblk.device? Can I consider those 231ms and 517ms as a part of the
>> >> activation time of the mmcblk.device? How can I debug udev in this
>> >> case?
>> >
>> > "systemd‑udev‑trigger.service" might take a while before it completes,
>> > since it triggers basically all devices in the system.
>> >
>> > It might be worth triggering block devices first. With upcoming v251
>>
>> What is the expected benefit? On bigger servers with hundreds of disks
this
>> may take longest.
> 
> There are a myriad of devices on current systems. Traditionally, we
> trigger them at boot in alphabetical order by their sysfs path (more
> or less that is). Only once triggered subsystems waiting for them will
> see the devices. Since at boot typically the most waited for devices
> are block devices it's thus beneficial to trigger them first, as this
> unblocks a major part of the rest of the boot process.
> 
> Or in other words: nothing really "waits" for your mouse to show up in
> the device table. Everything waits for your root block device to show
> up. Hence trigger the root block device first, and the mouse later.

Hi!

I agree, but (how) can you trigger only the root block device?
Apr 01 08:46:25 h16 kernel: sd 0:2:0:0: [sda] 467664896 512-byte logical
blocks: (239 GB/223 GiB)
...
Apr 01 08:46:33 h16 kernel: sd 3:0:7:2: [sdda] 524288 512-byte logical blocks:
(268 MB/256 MiB)
...
That's 8 seconds to discover the devices

Apr 01 08:47:04 h16 kernel: sd 2:0:5:2: alua: port group 01 state A preferred
supports tolusnA

And another 30 seconds until multipath has settled.

Apr 01 08:47:04 h16 systemd[1]: Reached target System Initialization.

Regards,
Ulrich

> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





[systemd-devel] Antw: [EXT] Re: device unit files

2022-04-14 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 13.04.2022 um 17:38
in
Nachricht :
> On Di, 12.04.22 14:38, Elbek Mamajonov (emm.boxin...@gmail.com) wrote:
> 
>> On graph I have mmcblk.device taking 1.624s. From the point that
>> this partition is where my rootfs lies, and systems does remounting
>> of rootfs, I did look up for systemd-remount-fs.service, it took
>> 231ms, and systemd-udev-trigger.service, as you suggested, it took
>> 517ms to become active. But even after systemd-udev-trigger.service
>> becomes active there is about 800ms for mmcblk.device to become
>> active. Are those services related to the activation of the
>> mmcblk.device? Can I consider those 231ms and 517ms as a part of the
>> activation time of the mmcblk.device? How can I debug udev in this
>> case?
> 
> "systemd-udev-trigger.service" might take a while before it completes,
> since it triggers basically all devices in the system.
> 
> It might be worth triggering block devices first. With upcoming v251

What is the expected benefit? On bigger servers with hundreds of disks this
may take longest.

> we actually will do that by default. But until then you could extend
> the service by first issuing "udevadm trigger -sblock", before the
> rest.
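
A sketch of such an extension as a drop-in (file name and udevadm path may
differ per distro):

# /etc/systemd/system/systemd-udev-trigger.service.d/block-first.conf
[Service]
ExecStartPre=/usr/bin/udevadm trigger --subsystem-match=block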
> 
> udev will announce devices to the system (and thus also PID1) once it
> probed the device. it does this based on rules, and the default rules
> will run blkid on the device, to see what's on it (i.e. to extract fs
> label/uuid, …). maybe that's just terribly slow on your device?
> 
> Lennart
> 
> --
> Lennart Poettering, Berlin





[systemd-devel] Q: Forwarding journal to "remote" syslogd?

2022-04-13 Thread Ulrich Windl
Hi!

When trying to configure forwarding of journal messages to a remote syslogd, I 
realized that it does not seem to be possible (systemd 228 of SLES12 SP5).
Is that true?
What I want is forwarding to "host:port" via UDP.

Forwarding to a local syslogd just to be able to forward to a remote syslogd 
seems to be a terrible concept (MHO).
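
For reference, the rsyslog-based detour referred to here is a single
legacy-format forwarding line (host and port are placeholders; a single "@"
means UDP):

*.*   @loghost:514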

Regards,
Ulrich





[systemd-devel] Antw: [systemd‑devel] Antw: [EXT] Re: Samba Config Reload

2022-04-11 Thread Ulrich Windl
>>> "Ulrich Windl"  schrieb am 11.04.2022
um
08:26 in Nachricht <6253ca1802a100049...@gwsmtp.uni-regensburg.de>:
> Hi!
> 

Sorry for the typos:

> I thin Lennart had pointed it out: If the sapplication being reloaded does 
s/thin/think/
> not provide any feedback when the reloading is complete, you can never be 
> sure what it did complete.

s/what/when/

> Adding some sleep may catch a grat number of cases whule waiting too long in


s/grat/great/; s/whule/while/

> most cases.
> 
> So before discussing systemd meachnisms: How do you know when reload is 
> complete?
> 
> Regards,
> Ulrich
> 
>>>> Wols Lists  schrieb am 09.04.2022 um 17:10 in
> Nachricht :
>> On 09/04/2022 09:00, Yolo von BNANA wrote:
>>> Can you please explain this in more Detail?
>>> 
>>> What does this mean: " "systemctl reload" will basically return
>>> immediately without the reload being complete"?
>>> 
>>> And what is an Example for an synchronous command for ExecReload=
>>> 
>> Do you understand the difference between "synchronous" and 
>> "asynchronous"? The words basically mean "aligned in time" and "without 
>> timed alignment".
>> 
>> Think of writing to files. In the old days of MS‑DOS et al, when your 
>> program called "write", the CPU went off, saved the data to disk, and 
>> returned to your program. That's "synchronous", all nicely ordered in 
>> time, and your program knew the data was safe.
>> 
>> Now, when your linux program calls write, linux itself replies "got it", 
>> and your program goes off knowing that something else is going to take 
>> care of actually saving the data to disk ‑ that's "asynchronous". Except 
>> that sometimes the program needs to know that the data HAS been safely 
>> squirreled away (hence all these fsync calls).
>> 
>> So when systemd calls ExecReload *A*synchronously, it goes off and fires 
>> off a load more stuff, knowing that the ExecReload IS GOING (future 
>> tense) to happen. What the previous poster wanted was a synchronous 
>> ExecReload, so that when systemd goes off do the next thing, the 
>> ExecReload HAS ALREADY HAPPENED (past tense). (Which in general is a bad 
>> thing because it *seriously* knackers performance).
>> 
>> Cheers,
>> Wol





[systemd-devel] Antw: [EXT] Re: Samba Config Reload

2022-04-11 Thread Ulrich Windl
Hi!

I thin Lennart had pointed it out: If the sapplication being reloaded does not 
provide any feedback when the reloading is complete, you can never be sure what 
it did complete.
Adding some sleep may catch a grat number of cases whule waiting too long in 
most cases.

So before discussing systemd meachnisms: How do you know when reload is 
complete?
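
For context, the usual asynchronous pattern looks like this (a common example,
not necessarily what the Samba unit ships): the signal is sent and "systemctl
reload" returns, regardless of whether the daemon has finished re-reading its
configuration.

ExecReload=/bin/kill -HUP $MAINPID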

Regards,
Ulrich

>>> Wols Lists  schrieb am 09.04.2022 um 17:10 in
Nachricht :
> On 09/04/2022 09:00, Yolo von BNANA wrote:
>> Can you please explain this in more Detail?
>> 
>> What does this mean: " "systemctl reload" will basically return
>> immediately without the reload being complete"?
>> 
>> And what is an Example for an synchronous command for ExecReload=
>> 
> Do you understand the difference between "synchronous" and 
> "asynchronous"? The words basically mean "aligned in time" and "without 
> timed alignment".
> 
> Think of writing to files. In the old days of MS-DOS et al, when your 
> program called "write", the CPU went off, saved the data to disk, and 
> returned to your program. That's "synchronous", all nicely ordered in 
> time, and your program knew the data was safe.
> 
> Now, when your linux program calls write, linux itself replies "got it", 
> and your program goes off knowing that something else is going to take 
> care of actually saving the data to disk - that's "asynchronous". Except 
> that sometimes the program needs to know that the data HAS been safely 
> squirreled away (hence all these fsync calls).
> 
> So when systemd calls ExecReload *A*synchronously, it goes off and fires 
> off a load more stuff, knowing that the ExecReload IS GOING (future 
> tense) to happen. What the previous poster wanted was a synchronous 
> ExecReload, so that when systemd goes off do the next thing, the 
> ExecReload HAS ALREADY HAPPENED (past tense). (Which in general is a bad 
> thing because it *seriously* knackers performance).
> 
> Cheers,
> Wol






[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Intercepting/Delaying the boot process

2022-04-10 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 08.04.2022 um 15:14 
>>> in
Nachricht :

...
> This reminds me of an RFE we have had for a while, and which I think
> would make sense to add directly to systemd: a generator that detects
> whether the battery is nearly empty at boot, and if so redirects boot
> to some special target showing a nice message for a bit and then
> powering off.

Why does it have to be a generator; can't a normal unit detect and handle that?

...

Regards,
Ulrich





