Re: [systemd-devel] Storing package metadata in ELF objects

2021-04-10 Thread Zbigniew Jędrzejewski-Szmek
[I'm forwarding the mail from Luca who is not subscribed to fedora-devel]

On Sat, Apr 10, 2021 at 01:38:31PM +0100, Luca Boccassi wrote:

Hello,

Cross-posting to the mailing lists of a few relevant projects.

After an initial discussion [0], recently we have been working on a new
specification [0] to encode rich package-level metadata inside ELF
objects, so that it can be included automatically in generated coredump
files. The prototype that parses this in systemd-coredump and stores the
information in the journal has been merged upstream and is ready for
testing. We are now seeking further comments/opinions/suggestions, as
we have a few months before the next release and thus there's plenty of
time to make incompatible changes to the format and implementation, if
required.

A proposal to use this by default for all packages built in Fedora 35
has been submitted [1].

The Fedora Wiki and the systemd.io document have more details, but to
make a long story short, a new .note.package section with a JSON
payload will be included in ELF objects, encoding various package-
build-time information like distro name, package name,
etc.
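
For illustration only (the authoritative field list is in the systemd.io
document, and some of the names below are my own guesses), the payload is
a small, flat JSON object along these lines:

{
  "type": "rpm",
  "name": "systemd",
  "version": "247.2-2.fc34",
  "architecture": "x86_64",
  "os": "fedora",
  "osVersion": "34"
}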

To summarize from the discussion, the main reasons why we believe this
is useful are as follows:

1) minimal containers: the rpm database is not installed in the
containers. The information about build-ids needs to be stored
externally, so package name information is not available immediately,
but only after offline processing. The new note doesn't depend on the
rpm db in any way.

2) handling of a core from a container, where the container and host
have different distros

3) self-built and external packages: unless a lot of care is taken to
keep access to the debuginfo packages, this information may be lost.
The new note is available even if the repository metadata gets lost.
Users can easily provide equivalent information in a format that makes
sense in their own environment. It should work even when rpms and debs
and other formats are mixed, e.g. during container image creation.

Beyond Fedora, we are already making the required code changes
at Microsoft to use the same format for internally-built
binaries, and in the tools that parse core files and logs.

Tools for RPM and DEB (debhelper) integration are also available [3].
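
To check whether a given binary already carries the note, something like
this should work (readelf's pretty-printing of the payload depends on the
binutils version, so the objcopy line simply dumps the raw section):

$ readelf --notes /usr/bin/foo | grep -A4 package
$ objcopy -O binary --only-section=.note.package /usr/bin/foo /dev/stdout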

> -- 
> Kind regards,
> Luca Boccassi




Re: [systemd-devel] How should Wayland compositors handle logind restarts?

2021-02-08 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Feb 08, 2021 at 11:43:35AM +0200, Vlad Zahorodnii wrote:
> Hi,


> My question is - should Wayland compositors handle logind restarts
> in any way?
> 
> At the moment, many Wayland compositors don't take any precautions
> against the case where logind is restarted. They assume that the DRM
> file descriptors will remain valid and the session will be restored
> auto-magically.

Yes, that is what they should be doing.

> Currently, a lot of Wayland compositors can't recover from logind
> restarts. For example, that's the case with weston, sway, kwin, and
> perhaps other compositors.
> 
> The culprit seems to be that atomic commits fail with the
> "Permission denied" error.

That's because of a bug in logind. I started working on a fix last
year [1], but doing this properly requires restructuring how the code
handles cleanup. It's on the TODO list.

[1] https://github.com/systemd/systemd/pull/17558

Zbyszek


Re: [systemd-devel] Is LTO worth it?

2021-01-13 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Jan 11, 2021 at 06:05:26PM +0100, Michael Biebl wrote:
> Am Mo., 11. Jan. 2021 um 16:39 Uhr schrieb Lennart Poettering
> :
> > https://fedoraproject.org/wiki/LTOByDefault
> 
> Interestingly, that wiki page says, that LTO should produce smaller
> binaries, which clearly isn't the case here.

I think LTO should be understood as "work in progress".
Making it the default in Fedora surfaced a number of issues in the
compiler, and some packages needed to opt out temporarily. The compiler
maintainers worked on those, and I think all or almost all of them have
been resolved. But since the focus was on just making things work, they
most likely didn't have much time left for optimizations.
The initial reports spoke of a few percent savings on average, with
significantly larger savings for some packages, especially C++ programs.
Hopefully we will see bigger savings in the future.

> I wonder whether the wiki is incorrect or whether this is a toolchain
> issue or if this is specific to systemd
> Or maybe this is Debian specific. Would be interested to see numbers
> from other distros.

This is a rebuild of the systemd package with standard settings (LTO, -O2) and
without LTO (-O2):

$ ls -l *rpm no-lto/*rpm
9932144 Jan 13 13:03 systemd-247.2-2.fc34.src.rpm (with lto)
9932165 Jan 13 13:28 systemd-247.2-2.fc34.src.rpm (the second number is always without lto)
4411100 Jan 13 13:07 systemd-247.2-2.fc34.x86_64.rpm
4412181 Jan 13 13:31 systemd-247.2-2.fc34.x86_64.rpm
512945 Jan 13 13:07 systemd-container-247.2-2.fc34.x86_64.rpm   <--
548286 Jan 13 13:31 systemd-container-247.2-2.fc34.x86_64.rpm   <--
1105667 Jan 13 13:07 systemd-container-debuginfo-247.2-2.fc34.x86_64.rpm
1584398 Jan 13 13:31 systemd-container-debuginfo-247.2-2.fc34.x86_64.rpm
9608297 Jan 13 13:08 systemd-debuginfo-247.2-2.fc34.x86_64.rpm
9779602 Jan 13 13:31 systemd-debuginfo-247.2-2.fc34.x86_64.rpm
3012544 Jan 13 13:08 systemd-debugsource-247.2-2.fc34.x86_64.rpm
3010956 Jan 13 13:31 systemd-debugsource-247.2-2.fc34.x86_64.rpm
441654 Jan 13 13:07 systemd-devel-247.2-2.fc34.x86_64.rpm
441622 Jan 13 13:31 systemd-devel-247.2-2.fc34.x86_64.rpm
104294 Jan 13 13:07 systemd-journal-remote-247.2-2.fc34.x86_64.rpm
105182 Jan 13 13:31 systemd-journal-remote-247.2-2.fc34.x86_64.rpm
157884 Jan 13 13:07 systemd-journal-remote-debuginfo-247.2-2.fc34.x86_64.rpm
157852 Jan 13 13:31 systemd-journal-remote-debuginfo-247.2-2.fc34.x86_64.rpm
558758 Jan 13 13:07 systemd-libs-247.2-2.fc34.x86_64.rpm  <--
610528 Jan 13 13:31 systemd-libs-247.2-2.fc34.x86_64.rpm  <--
1572056 Jan 13 13:07 systemd-libs-debuginfo-247.2-2.fc34.x86_64.rpm   <--
2939352 Jan 13 13:31 systemd-libs-debuginfo-247.2-2.fc34.x86_64.rpm   <--
486239 Jan 13 13:07 systemd-networkd-247.2-2.fc34.x86_64.rpm
501935 Jan 13 13:31 systemd-networkd-247.2-2.fc34.x86_64.rpm
1144027 Jan 13 13:07 systemd-networkd-debuginfo-247.2-2.fc34.x86_64.rpm
1263367 Jan 13 13:31 systemd-networkd-debuginfo-247.2-2.fc34.x86_64.rpm
319473 Jan 13 13:07 systemd-pam-247.2-2.fc34.x86_64.rpm
362518 Jan 13 13:31 systemd-pam-247.2-2.fc34.x86_64.rpm
 979482 Jan 13 13:07 systemd-pam-debuginfo-247.2-2.fc34.x86_64.rpm
1726838 Jan 13 13:31 systemd-pam-debuginfo-247.2-2.fc34.x86_64.rpm
26582 Jan 13 13:07 systemd-rpm-macros-247.2-2.fc34.noarch.rpm
26553 Jan 13 13:31 systemd-rpm-macros-247.2-2.fc34.noarch.rpm
110101 Jan 13 13:07 systemd-standalone-sysusers-247.2-2.fc34.x86_64.rpm  <--
139709 Jan 13 13:31 systemd-standalone-sysusers-247.2-2.fc34.x86_64.rpm  <--
284825 Jan 13 13:07 systemd-standalone-sysusers-debuginfo-247.2-2.fc34.x86_64.rpm  <--
759506 Jan 13 13:31 systemd-standalone-sysusers-debuginfo-247.2-2.fc34.x86_64.rpm  <--
153608 Jan 13 13:07 systemd-standalone-tmpfiles-247.2-2.fc34.x86_64.rpm  <--
184424 Jan 13 13:31 systemd-standalone-tmpfiles-247.2-2.fc34.x86_64.rpm  <--
389470 Jan 13 13:07 systemd-standalone-tmpfiles-debuginfo-247.2-2.fc34.x86_64.rpm
727117 Jan 13 13:31 systemd-standalone-tmpfiles-debuginfo-247.2-2.fc34.x86_64.rpm
4601160 Jan 13 13:08 systemd-tests-247.2-2.fc34.x86_64.rpm
4635619 Jan 13 13:31 systemd-tests-247.2-2.fc34.x86_64.rpm
15261886 Jan 13 13:08 systemd-tests-debuginfo-247.2-2.fc34.x86_64.rpm  <--
21949592 Jan 13 13:32 systemd-tests-debuginfo-247.2-2.fc34.x86_64.rpm  <--
1568229 Jan 13 13:07 systemd-udev-247.2-2.fc34.x86_64.rpm
1571073 Jan 13 13:31 systemd-udev-247.2-2.fc34.x86_64.rpm
934456 Jan 13 13:07 systemd-udev-debuginfo-247.2-2.fc34.x86_64.rpm
986681 Jan 13 13:31 systemd-udev-debuginfo-247.2-2.fc34.x86_64.rpm

So we get some small savings in package size, with huge savings in
-debuginfo packages.

> Concerning the build speed: I wonder whether at least disabling LTO on
> our CI would make sense. We don't really care for fast/small
> executables there.

We could disable it on some CIs. I agree that disabling it everywhere
would be detrimental.
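
For a CI configuration that is a one-line switch with meson's b_lto base
option, e.g.:

$ meson setup build -Db_lto=false --buildtype=release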

Zbyszek

Re: [systemd-devel] Creating executable device nodes in /dev?

2020-12-11 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Dec 09, 2020 at 10:58:59PM +0100, Ben Hutchings wrote:
> I'm convinced.  I've committed a change to initramfs-tools that removes
> the noexec mount option again.

Systemd counterpart: https://github.com/systemd/systemd/pull/17940.

Zbyszek


Re: [systemd-devel] Creating executable device nodes in /dev?

2020-11-19 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Nov 19, 2020 at 08:17:08AM -0800, Andy Lutomirski wrote:
> Hi udev people-
> 
> The upcoming Linux SGX driver has a device node /dev/sgx.  User code
> opens it, does various setup things, mmaps it, and needs to be able to
> create PROT_EXEC mappings.  This gets quite awkward if /dev is mounted
> noexec.
> 
> Can udev arrange to make a device node executable on distros that make
> /dev noexec?  This could be done by bind-mounting from an exec tmpfs.
> Alternatively, the kernel could probably learn to ignore noexec on
> /dev/sgx, but that seems a little bit evil.

I'd be inclined to simply drop noexec from /dev by default.
We don't do noexec on either /tmp or /dev/shm (because that causes immediate
problems with stuff like Java and cffi). And if you have those two at your
disposal anyway, having noexec on /dev doesn't seem important.

Afaik, the kernel would refuse execve() on a character or block device
anyway. Thus noexec on /dev matters only for actual binaries copied to
/dev, which requires root privileges in the first place.
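
For checking or experimenting on a running system (only for testing, the
real change belongs in the mount setup):

$ findmnt -no OPTIONS /dev          # look for "noexec" in the option list
$ sudo mount -o remount,exec /dev   # drop it at runtime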

Zbyszek


Re: [systemd-devel] systemd prerelease 247-rc2

2020-11-12 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Nov 12, 2020 at 04:36:01PM +0100, Michael Biebl wrote:
> Hi there
> 
> Am Do., 12. Nov. 2020 um 11:58 Uhr schrieb systemd tag bot
> :
> >
> > A new systemd ☠️ pre-release ☠️ has just been tagged. Please download the 
> > tarball here:
> >
> > https://github.com/systemd/systemd/archive/v247-rc2.tar.gz
> 
> Congrats to the new release!
> 
> > Changes since the previous release:
> >
> > * KERNEL API INCOMPATIBILITY: Linux 4.12 introduced two new uevents
> >   "bind" and "unbind" to the Linux device model. When this kernel
> >   change was made, systemd-udevd was only minimally updated to 
> > handle
> >   and propagate these new event types. The introduction of these new
> >   uevents (which are typically generated for USB devices and devices
> >   needing a firmware upload before being functional) resulted in a
> >   number of issues which we so far didn't address. We hoped the 
> > kernel
> >   maintainers would themselves address these issues in some form, 
> > but
> >   that did not happen. To handle them properly, many (if not most) 
> > udev
> >   rules files shipped in various packages need updating, and so do 
> > many
> >   programs that monitor or enumerate devices with libudev or 
> > sd-device,
> >   or otherwise process uevents. Please note that this 
> > incompatibility
> >   is not fault of systemd or udev, but caused by an incompatible 
> > kernel
> >   change that happened back in Linux 4.12, but is becoming more and
> >   more visible as the new uevents are generated by more kernel 
> > drivers.
> >
> >   To minimize issues resulting from this kernel change (but not 
> > avoid
> >   them entirely) starting with systemd-udevd 247 the udev "tags"
> >   concept (which is a concept for marking and filtering devices 
> > during
> >   enumeration and monitoring) has been reworked: udev tags are now
> >   "sticky", meaning that once a tag is assigned to a device it will 
> > not
> >   be removed from the device again until the device itself is 
> > removed
> >   (i.e. unplugged). This makes sure that any application monitoring
> >   devices that match a specific tag is guaranteed to both see 
> > uevents
> >   where the device starts being relevant, and those where it stops
> >   being relevant (the latter now regularly happening due to the new
> >   "unbind" uevent type). The udev tags concept is hence now a 
> > concept
> >   tied to a *device* instead of a device *event* — unlike for 
> > example
> >   udev properties whose lifecycle (as before) is generally tied to a
> >   device event, meaning that the previously determined properties 
> > are
> >   forgotten whenever a new uevent is processed.
> >
> >   With the newly redefined udev tags concept, sometimes it's 
> > necessary
> >   to determine which tags are the ones applied by the most recent
> >   uevent/database update, in order to discern them from those
> >   originating from earlier uevents/database updates of the same
> >   device. To accommodate for this a new automatic property 
> > CURRENT_TAGS
> >   has been added that works similar to the existing TAGS property 
> > but
> >   only lists tags set by the most recent uevent/database
> >   update. Similarly, the libudev/sd-device API has been updated with
> >   new functions to enumerate these 'current' tags, in addition to 
> > the
> >   existing APIs that now enumerate the 'sticky' ones.
> >
> >   To properly handle "bind"/"unbind" on Linux 4.12 and newer it is
> >   essential that all udev rules files and applications are updated 
> > to
> >   handle the new events. Specifically:
> >
> >   • All rule files that currently use a header guard similar to
> > ACTION!="add|change",GOTO="xyz_end" should be updated to use
> > ACTION=="remove",GOTO="xyz_end" instead, so that the
> > properties/tags they add are also applied whenever "bind" (or
> > "unbind") is seen. (This is most important for all physical 
> > device
> > types — those for which "bind" and "unbind" are currently
> > generated, for all other device types this change is still
> > recommended but not as important — but certainly prepares for
> > future kernel uevent type additions).
> >
> >   • Similarly, all code monitoring devices that contains an 'if' 
> > branch
> > discerning the "add" + "change" uevent actions from all other
> > uevents actions (i.e. considering devices only relevant after 
> > "add"
> > or "change", and irrelevant on all other events) should be 
> > reworked
> > to instead negatively check 

Re: [systemd-devel] Sponsoring systemd

2020-10-15 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Oct 15, 2020 at 10:57:18AM +0200, Jan Keller wrote:
> Hey systemd project,
> 
> I am trying to get in touch with someone regarding a potential sponsorship
> (details at
> https://security.googleblog.com/2019/12/announcing-updates-to-our-patch-rewards.html).
> Who is best to talk to?

In the past financial stuff was handled by Lennart and me.
You can just reply to this email (maybe with the mailing list removed).

Zbyszek


Re: [systemd-devel] Ensuring that a unit starts before any networking

2020-06-27 Thread Zbigniew Jędrzejewski-Szmek
On Sat, Jun 27, 2020 at 11:42:00AM +0100, Mark Rogers wrote:
> On Sat, 27 Jun 2020 at 11:06, Zbigniew Jędrzejewski-Szmek
>  wrote:
> > You should use Before=network-pre.target, Wants=network-pre.target.
> 
> Thanks, tried that but still not working:
> 
> $ journalctl -b | grep -Ei '(db2config|dhcpcd)'
> Feb 14 10:12:03 localhost systemd[1]: Starting dhcpcd on all interfaces...
> Feb 14 10:12:03 localhost dhcpcd[341]: read_config: fopen
> `/etc/dhcpcd.conf': No such file or directory
> [...]
> Jun 27 10:19:39 localhost dhcpcd[341]: wlan0: /etc/wpa_supplicant.conf
> does not exist
> [...]
> Jun 27 10:19:39 localhost dhcpcd[341]: read_config: fopen
> `/etc/dhcpcd.conf': No such file or directory
> [...]
> Jun 27 10:19:40 localhost dhcpcd[341]: eth0: soliciting an IPv6 router
> Jun 27 10:19:40 localhost dhcpcd[341]: eth0: soliciting a DHCP lease
> Jun 27 10:19:41 mypi db2config.py[325]: 2020-06-27 10:19:41 db2config
> Creating /tmp/sys//etc/dhcpcd.conf
> Jun 27 10:19:41 mypi db2config.py[325]: 2020-06-27 10:19:41 db2config
> Creating /tmp/sys//etc/wpa_supplicant/wpa_supplicant.conf
> 
> (Comments about that extract: the jump from Feb to Jun I assume is the
> clock getting updated from RTC, it's all from the same boot obviously;
> also note my db2config script doesn't run until after hostname is set
> which I would assume is set by the network startup?)
> 
> Unit file is currently:
> 
> [Unit]
> Description=Config generation from DB
> Before=network-pre.target
> Wants=network-pre.target
> 
> [Service]
> Type=oneshot
> ExecStart=/home/mark/bin/db2config.py
> 
> [Install]
> RequiredBy=network.target

And what does the unit that runs dhcpcd look like?
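For example (adjust the unit name to whatever actually starts dhcpcd on
your system):

$ systemctl cat dhcpcd.service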

Zbyszek

> After any changes I'm using
> $ sudo systemctl daemon-reload
> $ sudo systemctl reenable db2config.service
> 
> ... although that's another area I'm not entirely clear about what
> exactly is required after a unit file change.
> 
> PS: Is list etiquette around here to CC on reply? Some love it, some
> hate it, others don't care...
reply-all is fine.

Zbyszek


Re: [systemd-devel] Ensuring that a unit starts before any networking

2020-06-27 Thread Zbigniew Jędrzejewski-Szmek
On Sat, Jun 27, 2020 at 09:34:00AM +0100, Mark Rogers wrote:
> I have tried multiple approaches so far but by current service file
> looks like this:
> 
> [Unit]
> Description=Config generation from DB
> Before=networking.service

You should use Before=network-pre.target, Wants=network-pre.target.
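
I.e. roughly this in the [Unit] section, with the rest of the file kept
as you have it:

[Unit]
Description=Config generation from DB
Wants=network-pre.target
Before=network-pre.target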

> [Service]
> Type=oneshot
> ExecStart=/home/mark/bin/db2config.py
> 
> [Install]
> RequiredBy=network.target
> 
> [1] 
> https://stackoverflow.com/questions/62574482/ensuring-that-a-systemd-unit-starts-before-any-networking
> - no responses there to date, feel free to respond there for
> reputation or else I'll update it when I solve it.

Yes, please do. Also feel free to submit a PR for our man pages
if you find the time. network-pre.target is briefly explained in
systemd.special(7), but maybe the network targets deserve a separate
section in bootup(7)?

Zbyszek


Re: [systemd-devel] [systemd-level]: how to config and enable systemd help to create coredump file when the process crashes ?

2020-05-20 Thread Zbigniew Jędrzejewski-Szmek
On Wed, May 20, 2020 at 04:22:35PM +0800, www wrote:
> Dear all,
> 
> 
> how to config  and enable systemd help to create coredump file when the 
> process crashes ?

See http://www.freedesktop.org/software/systemd/man/coredumpctl.html.
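
The short version, assuming systemd-coredump is installed and registered
as the kernel's core_pattern handler (the default on most distributions):

$ cat /proc/sys/kernel/core_pattern     # should point at systemd-coredump
$ coredumpctl list                      # crashes recorded so far
$ coredumpctl info <PID or executable>  # details for one crash
$ coredumpctl debug <PID or executable> # open the core file in gdb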

Zbyszek


Re: [systemd-devel] systemd-hostnamed/hostnamectl and transient hostname change

2020-04-30 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Apr 27, 2020 at 12:00:36PM +0200, Thomas HUMMEL wrote:
> On 4/27/20 11:51 AM, Mantas Mikulėnas wrote:
> 
> Hello, thanks for your answer.
> 
> >On Mon, Apr 20, 2020 at 6:17 PM Thomas HUMMEL
> >mailto:thomas.hum...@pasteur.fr>>
> >wrote:
> >
> >1. why does the transient hostname change while I stated --static only
> >while running hostnamectl ?
> >
> >2. why does the change take some time to appear on dbus ?
> >
> >
> 
> >Hostnamed does not implement receiving hostname change
> >notifications from the kernel, so it always reports you the same
> >hostname that it has seen on startup.
> 
> That was my understanding as well.

Lennart opened a PR to remove the caching:
https://github.com/systemd/systemd/pull/15624.

> >You're only seeing changes because hostnamed /exits when idle/ --
> >the next time you're actually talking to a brand new instance of
> >hostnamed, which has seen the new hostname.
> 
> But this does not explain why the transient hostname is changed as I
> only changed the static one, does it ? Unless this new instance sets
> it from the static one when it starts ? I mean something has to call
> sethostname(2) to set the transient to the new static one, right ?

The documentation is wrong. The code in hostnamed sets the kernel
hostname when setting the static one. This was changed in
https://github.com/systemd/systemd/commit/c779a44222:

commit c779a44222161155c039a7fd2fd304c006590ac7
Author: Stef Walter 
Date:   Wed Feb 12 09:46:31 2014 +0100

   hostnamed: Fix the way that static and transient host names interact
   
   It is almost always incorrect to allow DHCP or other sources of
   transient host names to override an explicitly configured static host
   name.
   
   This commit changes things so that if a static host name is set, this
   will override the transient host name (eg: provided via DHCP). Transient
   host names can still be used to provide host names for machines that have
   not been explicitly configured with a static host name.
   
   The exception to this rule is if the static host name is set to
   "localhost". In those cases we act as if no
   static host name has been explicitly set.

We need to reconcile the code and the docs. I'd go for updating the docs
to match the code, because this is a long-standing behaviour and people
haven't been complaining about it. (I'm assuming you're not unhappy, just
confused by the unexpected results...). Opinions?
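
For reference, the behaviour in question is easy to see (hostname is just
an example):

$ hostnamectl set-hostname --static myhost
$ hostname    # the kernel (transient) hostname now also reports "myhost"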

Zbyszek


Re: [systemd-devel] systemd vulnerability detection

2020-04-29 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Apr 29, 2020 at 08:53:23AM +0530, Amish wrote:
> 
> On 29/04/20 1:00 am, Lennart Poettering wrote:
> >Please see:
> >
> >https://systemd.io/SECURITY/
> >
> >...
> >
> >Lennart
> 
> On a side note, phrasing on the site needs to be changed.

https://github.com/systemd/systemd/pull/15632 ?

Zbyszek


Re: [systemd-devel] user service conflict and confusion

2020-04-10 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Apr 10, 2020 at 10:53:36AM -0500, Matt Zagrabelny wrote:
> Greetings,
> 
> I am hitting a confusing scenario with my system. I am running 245.4-2
> (Debian).
> 
> I have a user service, mpd, which is failing to start. It is enabled:
> 
> $ systemctl --user is-enabled mpd
> enabled
> 
> And now that I look for the enabled unit within the filesystem, I don't see
> it.
> 
> I'm expecting to see something in ~/.config/systemd, but that directory
> doesn't exist.
> 
> $ stat ~/.config/systemd
> stat: cannot stat '/home/z/.config/systemd': No such file or directory
> 
> I have other systems with user services and ~/.config/systemd is where all
> the details are.
> 
> First question, where should I be looking (in the filesystem) for user
> enabled services?

Try 'systemctl --user cat mpd'.

> After that I look to see why the user service isn't starting:
> 
> $ systemctl --user status mpd
> [...]
> Apr 10 10:00:29 zipper mpd[16231]: exception: Failed to bind to '
> 192.168.0.254:6600'
> Apr 10 10:00:29 zipper mpd[16231]: exception: nested: Failed to bind
> socket: Address already in use
> Apr 10 10:00:29 zipper systemd[1982]: mpd.service: Main process exited,
> code=exited, status=1/FAILURE
> 
> Okay. Something is using that port.
> 
> $ sudo fuser 6600/tcp
> 6600/tcp: 1795
> 
> $ ps -f -q 1795
> UID          PID    PPID  C STIME TTY          TIME CMD
> root        1795       1  0 08:24 ?        00:00:00 /lib/systemd/systemd --user
> 
> Is that "systemd --user" command running for the root user? or is that the
> system level systemd?
> 
> My system level mpd.* units are disabled and inactive:
> 
> # systemctl is-active mpd.service
> inactive
> 
> # systemctl is-active mpd.socket
> inactive

Maybe it's running under user@0.service, i.e. the root's user manager?
You can drill down from 'systemctl status 1795'.

Zbyszek


Re: [systemd-devel] [RFC] pstore: options to enable kernel writing into the pstore

2020-03-26 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Mar 26, 2020 at 01:05:22PM -0500, Eric DeVolder wrote:
> >Below is a proposal for adding a couple of settings to the systemd pstore
> >service so that it can enable the kernel parameters that allow the
> >kernel to write into the pstore.

Hi,

please submit this as a PR.

> > From 837d716c6e7ed02518a399356df95bf7c47e1772 Mon Sep 17 00:00:00 2001
> >From: Eric DeVolder 
> >Date: Wed, 11 Mar 2020 14:11:03 -0500
> >Subject: [RFC] pstore: options to enable kernel writing into the pstore
> >
> >The systemd pstore service archives the contents of /sys/fs/pstore
> >upon boot so that there is room for a subsequent dump. The pstore is
> >usually backed by flash memory typically in the vicinity of 64KB.  The
> >pstore can contain post-mortem debug information even if kdump fails
> >or is not enabld.
> >
> >The issue is that while the service is present, the kernel still needs
> >to be configured to write data into the pstore. It has two parameters,
> >crash_kexec_post_notifiers and printk.always_kmsg_dump, that control
> >writes into pstore.
> >
> >The crash_kexec_post_notifiers parameter enables the kernel to write
> >dmesg (including stack trace) into pstore upon a panic, and
> >printk.always_kmsg_dump parameter enables the kernel to write dmesg upon
> >a shutdown (shutdown, reboot, halt).
> >
> >As it stands today, these parameters are not managed/manipulated by the
> >systemd pstore service, and are solely reliant upon the user [to have
> >the foresight] to set them on the kernel command line at boot, or post
> >boot via sysfs. Furthermore, the user would need to set these parameters
> >in a persistent fashion so that that they are enabled on subsequent
> >reboots.
> >
> >This patch allows the user to set these parameters via the systemd
> >pstore service, and forget about it. This patch introduces two new
> >settings in the pstore.conf, 'kmsg' and 'crash'. If either of these
> >is set to true, then the corresponding parameter is enabled in the
> >kernel. If the setting is false, then the parameter is not touched,
> >thus preserving whatever behavior the user may have previously
> >chosen.
> >---
> >  src/pstore/pstore.c | 36 +++-
> >  1 file changed, 35 insertions(+), 1 deletion(-)
> >
> >diff --git a/src/pstore/pstore.c b/src/pstore/pstore.c
> >index 5c812b5d5b..02bd94751f 100644
> >--- a/src/pstore/pstore.c
> >+++ b/src/pstore/pstore.c
> >@@ -68,6 +68,10 @@ static 
> >DEFINE_CONFIG_PARSE_ENUM(config_parse_pstore_storage, pstore_storage, PSt
> >  static PStoreStorage arg_storage = PSTORE_STORAGE_EXTERNAL;
> >
> >  static bool arg_unlink = true;
> >+static bool arg_kmsg = false;
> >+static bool arg_crash = false;
> >+static const char *arg_kmsg_path = 
> >"/sys/module/printk/parameters/always_kmsg_dump";
> >+static const char *arg_crash_path = 
> >"/sys/module/kernel/parameters/crash_kexec_post_notifiers";
> >  static const char *arg_sourcedir = "/sys/fs/pstore";
> >  static const char *arg_archivedir = "/var/lib/systemd/pstore";
> >
> >@@ -75,6 +79,8 @@ static int parse_config(void) {
> >  static const ConfigTableItem items[] = {
> >  { "PStore", "Unlink",  config_parse_bool,   0, 
> > _unlink },
> >  { "PStore", "Storage", config_parse_pstore_storage, 0, 
> > _storage },
> >+    { "PStore", "kmsg",    config_parse_bool,   0, 
> >_kmsg },
> >+    { "PStore", "crash",   config_parse_bool,   0, 
> >_crash },

This also needs a man page update.

Config keys are generally capitalized. But they should be named in a
way that indicates what they're actually configuring.
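
I.e. something along these lines in pstore.conf (the names are only a
strawman, not a decision):

[PStore]
Unlink=yes
Storage=external
# hypothetical replacements for "kmsg" and "crash":
AlwaysKmsgDump=yes
CrashKexecPostNotifiers=yes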

> >  {}
> >  };
> >
> >@@ -363,7 +369,7 @@ static int list_files(PStoreList *list, const char 
> >*sourcepath) {
> >
> >  static int run(int argc, char *argv[]) {
> >  _cleanup_(pstore_entries_reset) PStoreList list = {};
> >-    int r;
> >+    int fd, r;
> >
> >  log_setup_service();
> >
> >@@ -380,6 +386,34 @@ static int run(int argc, char *argv[]) {
> >  log_debug("Selected storage: %s.", 
> > pstore_storage_to_string(arg_storage));
> >  log_debug("Selected unlink: %s.", yes_no(arg_unlink));
> >
> >+    if (arg_kmsg) {
> >+    /* Only enable if requested; otherwise do not touch the 
> >parameter */
> >+    /* NOTE: These errors are not fatal */
> >+    fd = open(arg_kmsg_path, O_WRONLY|O_CLOEXEC);
> >+    if (fd < 0)
> >+    log_error_errno(r, "Failed to open %s: %m", 
> >arg_kmsg_path);
> >+    r = write(fd, "Y", 1);
> >+    if (r != 1)
> >+    log_error_errno(r, "Failed to write: %m");
> >+    else
> >+    log_debug("Set printk.always_kmsg_dump.");
> >+    close(fd);
> >+    }
> >+
> >+    if (arg_crash) {
> >+    /* Only enable if requested; otherwise do not touch the 
> 

Re: [systemd-devel] systemd prerelease 245-rc2

2020-03-03 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Mar 03, 2020 at 07:43:22AM +, systemd tag bot wrote:
> A new systemd ☠️ pre-release ☠️ has just been tagged. Please download the 
> tarball here:
> 
> https://github.com/systemd/systemd/archive/v245-rc2.tar.gz

It's mostly bugfixes, but still 146 commits since -rc1.
The plan is to release the final version at the end of the week.
Regressions and bugs are still being reported, so expect more patches
to go in.

Zbyszek


Re: [systemd-devel] sd-daemon documentation clarification

2020-03-03 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Mar 02, 2020 at 01:38:54PM +0100, Łukasz Niemier wrote:
> > AFAIK both stdout and stderr even get attached to the same journal pipe by 
> > default, so they should also be interpreted in the same way.
> > 
> > The description of SyslogLevelPrefix= in systemd.exec(5) also says: "This 
> > only applies to log messages written to stdout or stderr.”
> 
> THX, I must have missed that. This mean that the `sd-daemon` documentation 
> page should be updated to contain that information as well.

Please file a PR!

Zbyszek


Re: [systemd-devel] Minimize systemd for kdump's initramfs

2020-02-24 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Feb 25, 2020 at 01:12:08PM +0800, Kairui Song wrote:
> On Fri, Jan 3, 2020 at 3:23 PM Zbigniew Jędrzejewski-Szmek
>  wrote:
> > On Fri, Jan 03, 2020 at 11:48:53AM +0800, Dave Young wrote:
> > > On 01/03/20 at 11:45am, Dave Young wrote:
> > > > On 01/02/20 at 09:02am, Zbigniew Jędrzejewski-Szmek wrote:
> > > > > On Thu, Jan 02, 2020 at 12:21:26AM +0800, Kairui Song wrote:
> > > > > > Some component, like Systemd, have grown by a lot, here is a list of
> > > > > > the size of part of binaries along with the binaries they required 
> > > > > > in
> > > > > > F31:
> > > > > > /root/image/bin/systemctl
> > > > > > 20M .
> > > > > > /root/image/usr/bin/systemctl
> > > > > > 20M .
> > > > > > /root/image/usr/bin/systemd-cgls
> > > > > > 20M .
> > > > > > /root/image/usr/bin/systemd-escape
> > > > > > 20M .
> > > > > > /root/image/usr/bin/systemd-run
> > > > > > 20M .
> > > > > > ...
> > > > > >
> > > > > > There are overlays between the libraries they used so when installed
> > > > > > into the initramfs, the total size didn't go too big yet. But we can
> > > > > > see the size of systemd binary and libraries it used is much bigger
> > > > > > than others.
> > > > >
> > > > > All systemd binaries will mostly link to the same libraries (because
> > > > > they link to a private shared library, which links to various other
> > > > > shared libraries). So this "20M" will be repeated over and over, but
> > > > > it's the same dependencies.
> > > > >
> > > > > While we'd all prefer for this to be smaller, 20M is actually
> > > > > not that much...
> > > > >
> > > > > > And as a compare, from version 219 to 243, systemd's library
> > > > > > dependency increased a lot:
> > > > > > (v219 is 5M in total, v243 is 20M in total)
> > > > >
> > > > > This is slightly misleading. Code was moved from individual binaries
> > > > > to libsystemd-shared-nnn.so, so if you look at the deps of just a 
> > > > > single
> > > > > binary, you'll see many more deps (because libsystemd-shared-nnn.so 
> > > > > has
> > > > > more deps). But the total number of deps when summed over all binaries
> > > > > grew much less. A more useful measure would be the size with deps 
> > > > > summed
> > > > > over all systemd binaries that are installed into your image in v219 
> > > > > and
> > > > > v243.
> > > > >
> > > >
> > > > I vaguely remember the size increased before due to linking with libidn2
> > > > previously, so those libraries contribute a lot.
> > > >
> > > > Does every systemd binary depend on all libraries? Or each of the
> > > > systemd binary only depends on those libs when really needed?
> > >
> > > For example if I do not need journalctl, then I can drop journalctl and
> > > those libraries specific for journalctl?
> >
> > It's using standard shared object linking, so yeah, for anything which
> > libsystemd-shared-nnn.so links to, "every systemd binary depend[s] on
> > all libraries", in the sense that the runtime linker will fail to start
> > the executable if any of the libraries are missing.
> >
> 
> Hi, it has been a while since last discuss update, but a second
> thought about the libsystemd-shared-nnn.so problem:
> 
> Now each systemd executable depends on libsystemd-shared-nnn.so, which
> then depend on a lot of things. In older version (eg. version 219),
> each individual systemd executable have it's own dependency, that make
> thing much cleaner for special usages like kdump.
> 
> I'm not sure what is the purpose of this change, could there be any
> work be done to minimize the lib dependency of each systemd binary?

libsystemd-shared-nnn.so holds code used in multiple executables. This
means that if the full suite is installed, shared code is present in
just one copy, and the total footprint of the installation is minimized
(disk, loading time, rss). OTOH, the footprint of installing just a
single executable is unfortunately increased. Our thinking was that
installing many executables is much more common (even a pretty minimal
system with systemd has at least ~30 systemd binaries), so it makes
sense to prioritize that.

See https://github.com/systemd/systemd/pull/3516 for the discussion
of the space savings back when this was originally done. Now we have
many more binaries (and even more shared code since integration of
various components is increasing...), so I expect the savings to
be even bigger.
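
The effect is easy to see directly; the paths and version number below are
from a Fedora system and will differ elsewhere:

$ ldd /usr/lib/systemd/systemd-journald | grep libsystemd-shared
$ ldd /usr/lib/systemd/libsystemd-shared-247.so | wc -l   # the long dependency list lives here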

Zbyszek


Re: [systemd-devel] lunch before devconf

2020-01-16 Thread Zbigniew Jędrzejewski-Szmek
Update:

The location is now known: Purkyňova 99.

We're crowdsourcing some topics to discuss so we're not bored ;)
Please add to the list:
https://docs.google.com/document/d/1y4NU60q0oCHGAo3qi0NkK5KVLdQRY-ZLSUroqrUT-XE/edit?usp=sharing

Zbyszek


On Thu, Jan 09, 2020 at 04:06:27PM +, Zbigniew Jędrzejewski-Szmek wrote:
> Dear all,
> 
> we'll be having an open meeting in Brno on Thursday, Jan 23rd, 2020,
> before the DevConf.cz conference. Everyone interested in systemd
> development is invited. The plan is to meet for lunch around 1 PM, and
> then go the Red Hat office afterwards for discussion and planning.
> We should be at the office from 2PM and it is of course also possible
> to go directly there.
> 
> When: 23.1.2020, Thursday, 13:00 – 16:00.
> Location: Red Hat offices in Brno,
>   Purkyňova 99 or 111 depending on availability (14:00–16:00)
>   and somewhere close for lunch (13:00—14:00).
> 
> If you wish to join, please let me know (off-list), so I can keep tabs.
> Once we know the exact location, I'll reply to this email with additional
> details.
> 
> Zbyszek


[systemd-devel] lunch before devconf

2020-01-09 Thread Zbigniew Jędrzejewski-Szmek
Dear all,

we'll be having an open meeting in Brno on Thursday, Jan 23rd, 2020,
before the DevConf.cz conference. Everyone interested in systemd
development is invited. The plan is to meet for lunch around 1 PM, and
then go the Red Hat office afterwards for discussion and planning.
We should be at the office from 2PM and it is of course also possible
to go directly there.

When: 23.1.2020, Thursday, 13:00 – 16:00.
Location: Red Hat offices in Brno,
  Purkyňova 99 or 111 depending on availability (14:00–16:00)
  and somewhere close for lunch (13:00—14:00).

If you wish to join, please let me know (off-list), so I can keep tabs.
Once we know the exact location, I'll reply to this email with additional
details.

Zbyszek


Re: [systemd-devel] Minimize systemd for kdump's initramfs

2020-01-02 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Jan 03, 2020 at 11:48:53AM +0800, Dave Young wrote:
> On 01/03/20 at 11:45am, Dave Young wrote:
> > On 01/02/20 at 09:02am, Zbigniew Jędrzejewski-Szmek wrote:
> > > On Thu, Jan 02, 2020 at 12:21:26AM +0800, Kairui Song wrote:
> > > > Some component, like Systemd, have grown by a lot, here is a list of
> > > > the size of part of binaries along with the binaries they required in
> > > > F31:
> > > > /root/image/bin/systemctl
> > > > 20M .
> > > > /root/image/usr/bin/systemctl
> > > > 20M .
> > > > /root/image/usr/bin/systemd-cgls
> > > > 20M .
> > > > /root/image/usr/bin/systemd-escape
> > > > 20M .
> > > > /root/image/usr/bin/systemd-run
> > > > 20M .
> > > > ...
> > > > 
> > > > There are overlays between the libraries they used so when installed
> > > > into the initramfs, the total size didn't go too big yet. But we can
> > > > see the size of systemd binary and libraries it used is much bigger
> > > > than others.
> > > 
> > > All systemd binaries will mostly link to the same libraries (because
> > > they link to a private shared library, which links to various other
> > > shared libraries). So this "20M" will be repeated over and over, but
> > > it's the same dependencies.
> > > 
> > > While we'd all prefer for this to be smaller, 20M is actually
> > > not that much...
> > > 
> > > > And as a compare, from version 219 to 243, systemd's library
> > > > dependency increased a lot:
> > > > (v219 is 5M in total, v243 is 20M in total)
> > > 
> > > This is slightly misleading. Code was moved from individual binaries
> > > to libsystemd-shared-nnn.so, so if you look at the deps of just a single
> > > binary, you'll see many more deps (because libsystemd-shared-nnn.so has
> > > more deps). But the total number of deps when summed over all binaries
> > > grew much less. A more useful measure would be the size with deps summed
> > > over all systemd binaries that are installed into your image in v219 and
> > > v243.
> > > 
> > 
> > I vaguely remember the size increased before due to linking with libidn2
> > previously, so those libraries contribute a lot.
> > 
> > Does every systemd binary depend on all libraries? Or each of the
> > systemd binary only depends on those libs when really needed?
> 
> For example if I do not need journalctl, then I can drop journalctl and
> those libraries specific for journalctl?

It's using standard shared object linking, so yeah, for anything which
libsystemd-shared-nnn.so links to, "every systemd binary depend[s] on
all libraries", in the sense that the runtime linker will fail to start
the executable if any of the libraries are missing.

Zbyszek


Re: [systemd-devel] Minimize systemd for kdump's initramfs

2020-01-02 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Jan 02, 2020 at 03:29:26PM -0500, Robbie Harwood wrote:
> Kairui Song  writes:
> 
> > What I'm trying to do is reduce the initramfs size used for kdump.
> > Kdump loads a crash kernel and kdump initramfs image in a prereseved
> > memory region, which get booted when current kernel crashed and
> > perform crash dump. The prereserved memory is limited, so initramfs
> > shouldn't go too big.
> >
> > Kdump in Fedora use Dracut to create bootable initramfs, just hook the
> > final step to do kdump things instead of switch root. And to reduce
> > the size only the binaries and drivers required to boot and perform
> > kdump on current machine is installed. So long it have been working
> > very well.
> >
> > But problem is Dracut works by reusing binaries and libraries from the
> > currently running system, and many userspace binaries and libraries is
> > keep growing and using more space. So the initramfs is also growing.
> >
> > /root/image/bin/bash
> > 4.8M.
> > /root/image/bin/sh
> > 4.8M.
> 
> So it's not a direct comparison, but:
> 
> $ du -sh /bin/bash /bin/dash
> 1.2M /bin/bash
> 132K /bin/dash
> 
> This suggests to me that 1-3 MB could be reduced by using dash as the
> shell.  (dash's library dependencies are also smaller since it drops
> requirements on libtinfo (200K) and libdl (36K); whether this matters I
> don't know.)

dash doesn't support various bash extensions and syntaxes. The problem
is that many scripts which use #!/bin/sh really require #!/bin/bash.
So after switching to dash as the provider of /bin/sh various scripts
would suddenly behave differently, and those bugs would only be detected
at runtime.

Debian went through a long process of switching to dash as the default
init shell and fixing various scripts to remove bashisms so things
would run on dash (or any other /bin/sh). This was way more work than
anyone expected and took a long time. IMO the gain of 1 MB that we
would get is not nearly enough to offset the work required and the
destabilization.
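
Debian's transition leaned on tooling such as checkbashisms (from
devscripts) to find affected scripts; anyone wanting to attempt the same
for an initramfs would start roughly there:

$ checkbashisms /usr/lib/dracut/modules.d/*/*.sh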

(In Debian the motivation was speed, rather than installation footprint.
So that work was mostly wasted because of the switch from sysvinit to systemd
and ensuing avoidance of shell during boot. Instead of trying to switch
shells, we should try to avoid shell during boot as much as possible.)

Zbyszek


Re: [systemd-devel] Minimize systemd for kdump's initramfs

2020-01-02 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Jan 02, 2020 at 12:21:26AM +0800, Kairui Song wrote:
> Some component, like Systemd, have grown by a lot, here is a list of
> the size of part of binaries along with the binaries they required in
> F31:
> /root/image/bin/systemctl
> 20M .
> /root/image/usr/bin/systemctl
> 20M .
> /root/image/usr/bin/systemd-cgls
> 20M .
> /root/image/usr/bin/systemd-escape
> 20M .
> /root/image/usr/bin/systemd-run
> 20M .
> ...
> 
> There are overlays between the libraries they used so when installed
> into the initramfs, the total size didn't go too big yet. But we can
> see the size of systemd binary and libraries it used is much bigger
> than others.

All systemd binaries will mostly link to the same libraries (because
they link to a private shared library, which links to various other
shared libraries). So this "20M" will be repeated over and over, but
it's the same dependencies.

While we'd all prefer for this to be smaller, 20M is actually
not that much...

> And as a compare, from version 219 to 243, systemd's library
> dependency increased a lot:
> (v219 is 5M in total, v243 is 20M in total)

This is slightly misleading. Code was moved from individual binaries
to libsystemd-shared-nnn.so, so if you look at the deps of just a single
binary, you'll see many more deps (because libsystemd-shared-nnn.so has
more deps). But the total number of deps when summed over all binaries
grew much less. A more useful measure would be the size with deps summed
over all systemd binaries that are installed into your image in v219 and
v243.

I don't have a link at hand, but there's work being done to use openssl
for all crypto, which would reduce the dependency list nicely.

> Is there any way to have a smaller systemd binary that is just enough
> to boot the initramfs into the stage before switch-root?

We don't have anything like this now, sorry!

Zbyszek


Re: [systemd-devel] Antw: Re: Why does initrd-parse-etc.service re-start initrd-fs.target?

2019-12-14 Thread Zbigniew Jędrzejewski-Szmek
On Sat, Dec 14, 2019 at 11:58:37AM +0300, Andrei Borzenkov wrote:
> 09.12.2019 10:06, Ulrich Windl пишет:
> >>
> >> After real root is mounted daemon-reload re-runs fstab generator which
> >> parses real root /etc/fstab and may pull mount points from it.
> > 
> > I wonder: Are there realistic cases when the fstab in initrd is newer than
> > the fstab in the root file system?
> 
> It has nothing to do with being "newer". It allows managing initrd
> filesystems in one place and avoids need to re-create initrd every time
> you need additional filesystem.

I'd go even further: initramfses should not need to be rebuilt all the time.
I.e. they should be what dracut calls hostonly=no. Having to propagate
configuration files from the host to the initramfs is very costly.

So yeah, in general I think we need to think about mechanisms which pull
all possible information from the host. This is never going to be as simple
as encoding all configuration statically in the initramfs, but that's the
price we have to pay for usability.
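
In dracut terms that is the hostonly setting, e.g. a drop-in like:

# /etc/dracut.conf.d/generic.conf
hostonly="no"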

Zbyszek

> > That would be the case where detecting a "new"
> > fstab would fail. I didn't dig into the generators, but an alternative method
> > to detect a file change would be to compare the size as well (as cheap as
> > stat()), or to compare some checksum (requires some more extra code). I feel
> > the generators should fix the issue, not the user (i.e. restart).
> > 
> >> Restarting initrd-fs.target will propagate start request to its (newly
> >> created) dependent mount units. Otherwise there is no obvious way to
> >> start them (without explicitly starting each).
> > 
> > I never liked the idea of generators and /etc/fstab.
> > 
> 
> It fits perfectly into systemd design goal - start services *on boot*
> once. Most problems with systemd stem from attempt to use it as "dynamic
> service manager" which it is not. This discussion is example of it.


Re: [systemd-devel] No error even a Required= service does not exist

2019-11-25 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Nov 25, 2019 at 06:13:17PM +0200, Uoti Urpala wrote:
> On Mon, 2019-11-25 at 15:19 +0200, Mantas Mikulėnas wrote:
> > > Requires=xyz.service 
> > > 
> > > produces no complaint and starts the service even if there is no 
> > > xyz.service
> > > Is this the normal behavior or can I configure systemd to throw an error 
> > > in this case?
> > 
> > The docs say you can get this behavior if you also have After=xyz.service. 
> > (Not entirely sure why.)
> 
> No when there IS NOT an "After=xyz.service".
> 
> Without "After=", there is no ordering dependency - it just tells that
> anything starting this unit will effectively order the start of the
> other as well. Without ordering, this unit can be the one to start
> first. If the other one fails to actually start later, that doesn't
> make systemd go back to stop this one (note that this is consistent
> with ordering dependencies - if a depended-on service fails later
> during runtime, that does not automatically force a stop of already
> running depending services). I guess this logic extends to failures of
> the "does not exist at all" type where there was never a chance of
> successfully starting the unit.

Sounds like a bug. I'd expect the transaction to fail if the Required
unit cannot be found.
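
A minimal way to see the current behaviour (unit and path made up):

# test-req.service
[Unit]
Requires=does-not-exist.service

[Service]
ExecStart=/bin/true

$ systemctl start test-req.service   # currently succeeds without complaint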

Zbyszek

Re: [systemd-devel] Please reopen issue #12506

2019-11-18 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Nov 18, 2019 at 11:41:20AM +, Marcos Mello wrote:
> Although util-linux's fstab.d work has stalled, there is still systemd code 
> that needs porting to libmount. See Karel's last comment:
> 
> https://github.com/systemd/systemd/issues/12506

Reopened.

Zbyszek

Re: [systemd-devel] Dead link in the documentation of the Journal File Format on freedesktop.org

2019-11-07 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Nov 07, 2019 at 02:12:55PM +, kein name wrote:
> Hi,
> 
> I'd like to report a dead link in the documentation of the Journal File 
> Format on freedesktop.org[1].
> The link[2] goes to gmane.org which is dead, it should probably use your own 
> mailing list archive[3].

Thanks, updated.

Zbyszek

Re: [systemd-devel] is the watchdog useful?

2019-10-31 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Oct 31, 2019 at 06:30:33PM +0100, Lennart Poettering wrote:
> On Mo, 21.10.19 17:50, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:
> 
> > In principle, the watchdog for services is nice. But in practice it seems
> > be bring only grief. The Fedora bugtracker is full of automated reports of 
> > ABRTs,
> > and of those that were fired by the watchdog, pretty much 100% are bogus, in
> > the sense that the machine was resource starved and the watchdog fired.
> >
> > There a few downsides to the watchdog killing the service:
> > 1. if it is something like logind, it is possible that it will cause 
> > user-visible
> > failure of other services
> > 2. restarting of the service causes additional load on the machine
> > 3. coredump handling causes additional load on the machine, quite 
> > significant
> > 4. those failures are reported in bugtrackers and waste everyone's time.
> >
> > I had the following ideas:
> > 1. disable coredumps for watchdog abrts: systemd could set some flag
> > on the unit or otherwise notify systemd-coredump about this, and it could 
> > just
> > log the occurence but not dump the core file.
> > 2. generally disable watchdogs and make them opt in. We have 
> > 'systemd-analyze service-watchdogs',
> > and we could make the default configurable to "yes|no".
> >
> > What do you think?
> 
> Isn't this more a reason to substantially increase the watchdog
> interval by default? i.e. 30min if needed?

Yep, there was a proposal like that. I want to make it 1h in Fedora.
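
In the meantime the timeout can be raised for a given service with a
drop-in, e.g.:

# /etc/systemd/system/systemd-logind.service.d/watchdog.conf
[Service]
WatchdogSec=30min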

Zbyszek


[systemd-devel] getting ready for v244

2019-10-29 Thread Zbigniew Jędrzejewski-Szmek
Hi everyone,

I think it's time to get ready for a v244-prerelease.
Currently, 8 issues are open on 
https://github.com/systemd/systemd/milestones/v244
without a PR. I'll be working on the remaining issues and hope to release an 
-rc1
late this week or early next.

If there is stuff I should review or look into with higher priority, let me 
know..

Zbyszek

Re: [systemd-devel] is the watchdog useful?

2019-10-25 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Oct 24, 2019 at 02:56:55PM -0700, Vito Caputo wrote:
> On Thu, Oct 24, 2019 at 10:45:32AM +0000, Zbigniew Jędrzejewski-Szmek wrote:
> > On Tue, Oct 22, 2019 at 04:35:13AM -0700, Vito Caputo wrote:
> > > On Tue, Oct 22, 2019 at 10:51:49AM +, Zbigniew Jędrzejewski-Szmek 
> > > wrote:
> > > > On Tue, Oct 22, 2019 at 12:34:45PM +0200, Umut Tezduyar Lindskog wrote:
> > > > > I am curious Zbigniew of how you find out if the coredump was on a 
> > > > > starved
> > > > > process?
> > > > 
> > > > A very common case is systemd-journald which gets SIGABRT when in a
> > > > read() or write() or similar syscall. Another case is when
> > > > systemd-udevd workers get ABRT when doing open() on a device.
> > > > 
> > > 
> > > In the case of journald, is it really in read()/write() syscalls you're
> > > seeing the SIGABRTs?
> > 
> > I was sloppy here — it's not read/write, but various other syscalls.
> > In particular clone(), which makes sense, because it involves memory
> > allocation.
> > 
> 
> That's interesting, it's not like journald calls clone() a lot. 

Hm, maybe it was udevd that was calling clone(), not journald.
All the reports are available here:
https://bugzilla.redhat.com/show_bug.cgi?id=1300212

I opened a pull request to make the watchdog setting configurable
for our own internal services: https://github.com/systemd/systemd/pull/13843.

Zbyszek

Re: [systemd-devel] is the watchdog useful?

2019-10-24 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Oct 22, 2019 at 04:35:13AM -0700, Vito Caputo wrote:
> On Tue, Oct 22, 2019 at 10:51:49AM +0000, Zbigniew Jędrzejewski-Szmek wrote:
> > On Tue, Oct 22, 2019 at 12:34:45PM +0200, Umut Tezduyar Lindskog wrote:
> > > I am curious Zbigniew of how you find out if the coredump was on a starved
> > > process?
> > 
> > A very common case is systemd-journald which gets SIGABRT when in a
> > read() or write() or similar syscall. Another case is when
> > systemd-udevd workers get ABRT when doing open() on a device.
> > 
> 
> In the case of journald, is it really in read()/write() syscalls you're
> seeing the SIGABRTs?

I was sloppy here — it's not read/write, but various other syscalls.
In particular clone(), which makes sense, because it involves memory
allocation.

Zbyszek

Re: [systemd-devel] is the watchdog useful?

2019-10-22 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Oct 22, 2019 at 12:34:45PM +0200, Umut Tezduyar Lindskog wrote:
> I am curious Zbigniew of how you find out if the coredump was on a starved
> process?

A very common case is systemd-journald which gets SIGABRT when in a
read() or write() or similar syscall. Another case is when
systemd-udevd workers get ABRT when doing open() on a device.

> This is common for our embedded devices. I didn't think it is common for
> desktop too.


> It is really useful for getting coredumps on deadlocked applications. For
> that reason I don't think it is good to remove this functionality
> completely.

Yes, I never suggested removing it completely. I'm just saying that for
the type of systems that Fedora targets, I don't recall any actual deadlock.
For more specialized systems, where the workload is more predictable,
it makes sense to have the watchdog.

There might be cases where the kernel is dead-locked internally, and e.g.
open() or modprobe() never returns. For those cases it might be useful to
get the backtrace, but actually killing the process and/or storing the
coredump is not that useful.

Zbyszek

> 
> Umut
> 
> On Mon, Oct 21, 2019 at 7:51 PM Zbigniew Jędrzejewski-Szmek <
> zbys...@in.waw.pl> wrote:
> 
> > In principle, the watchdog for services is nice. But in practice it seems
> > be bring only grief. The Fedora bugtracker is full of automated reports of
> > ABRTs,
> > and of those that were fired by the watchdog, pretty much 100% are bogus,
> > in
> > the sense that the machine was resource starved and the watchdog fired.
> >
> > There are a few downsides to the watchdog killing the service:
> > 1. if it is something like logind, it is possible that it will cause
> > user-visible
> > failure of other services
> > 2. restarting of the service causes additional load on the machine
> > 3. coredump handling causes additional load on the machine, quite
> > significant
> > 4. those failures are reported in bugtrackers and waste everyone's time.
> >
> > I had the following ideas:
> > 1. disable coredumps for watchdog abrts: systemd could set some flag
> > on the unit or otherwise notify systemd-coredump about this, and it could
> > just
> > log the occurrence but not dump the core file.
> > 2. generally disable watchdogs and make them opt in. We have
> > 'systemd-analyze service-watchdogs',
> > and we could make the default configurable to "yes|no".
> >
> > What do you think?
> > Zbyszek
> > ___
> > systemd-devel mailing list
> > systemd-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/systemd-devel
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] is the watchdog useful?

2019-10-22 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Oct 21, 2019 at 02:32:08PM -0700, Vito Caputo wrote:
> On Mon, Oct 21, 2019 at 05:50:44PM +0000, Zbigniew Jędrzejewski-Szmek wrote:
> > In principle, the watchdog for services is nice. But in practice it seems
> > to bring only grief. The Fedora bugtracker is full of automated reports of
> > ABRTs,
> > and of those that were fired by the watchdog, pretty much 100% are bogus, 
> > in 
> > the sense that the machine was resource starved and the watchdog fired.
> > 
> > There are a few downsides to the watchdog killing the service:
> > 1. if it is something like logind, it is possible that it will cause 
> > user-visible
> > failure of other services
> > 2. restarting of the service causes additional load on the machine
> > 3. coredump handling causes additional load on the machine, quite 
> > significant
> > 4. those failures are reported in bugtrackers and waste everyone's time.
> > 
> > I had the following ideas:
> > 1. disable coredumps for watchdog abrts: systemd could set some flag
> > on the unit or otherwise notify systemd-coredump about this, and it could 
> > just
> > log the occurrence but not dump the core file.
> > 2. generally disable watchdogs and make them opt in. We have 
> > 'systemd-analyze service-watchdogs',
> > and we could make the default configurable to "yes|no".
> > 
> > What do you think?
> > Zbyszek
> 
> 
> I think the main issue is the watchdog timeout hasn't been tuned
> appropriately for the environment it's being applied.
> 
> It's as if the timeouts are somewhere near the hard real-time
> expectations end of the spectrum, while being applied to
> non-deterministically delayed and scheduled normal priority userspace
> processes.  It's a sort of impedance mismatch.
> 
> I /think/ the purpose of the watchdog is to detect when processes are
> permanently wedged, capture their state for debugging, and forcefully
> unwedge them.
> 
> That seems perfectly reasonable, but the timeout heuristic being used,
> given our non-deterministic scheduling, should be incredibly long by
> default.  It's not the kind of thing you want false positives on, folks
> can always shrink the timeout if they find it's desirable.

It is now 3 minutes in all systemd units. Dunno, maybe we should make
that 30 minutes.

Zbyszek

> Without having spent much time thinking about this, I'd lean towards
> retaining the watchdogs but making their default timeouts so long a
> program would have to be wedged for an hour+ before it triggered.
> 
> At least that way we preserve the passive information gathering of
> serious bugs which might otherwise go unnoticed with background/idle
> services, improving debugging substantially, but eliminate the problems
> you describe resulting from false positives.
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] is the watchdog useful?

2019-10-22 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Oct 22, 2019 at 09:54:31AM +0300, Pekka Paalanen wrote:
> On Mon, 21 Oct 2019 17:50:44 +
> Zbigniew Jędrzejewski-Szmek  wrote:
> 
> > In principle, the watchdog for services is nice. But in practice it seems
> > to bring only grief. The Fedora bugtracker is full of automated reports of
> > ABRTs,
> > and of those that were fired by the watchdog, pretty much 100% are bogus, 
> > in 
> > the sense that the machine was resource starved and the watchdog fired.
> 
> Hi,
> 
> just curious, is that resource starvation caused by something big, e.g.
> a browser, using too much memory which leads to the kernel reclaiming
> also pages of program text sections because they can be reloaded from
> disk at any time, however those pages are needed again immediately
> after when some CPU core switches process context, leading to something
> that looks like a hard freeze to a user, while the kernel is furiously
> loading pages from disk just to drop them again, and can take from
> minutes to hours before any progress is visible?

I don't really know. Unfortunately, abrt in Fedora does not collect log
messages. In the old syslog days, a snippet of /var/log/messages for the
last 20 minutes or something like that before a crash would be copied
into the bug report, and this would include kernel messages about disk
errors, or kernel stalls, or other interesting hints. Unfortunately
nowadays, because of privacy concerns (?) and an effort to make things
more efficient (?), just some heavily-filtered journalctl output is
attached. In practice, usually this is at most a few lines and
completely useless. In particular, it does not give any hints to the
overall state of the system.
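
(For comparison, the kind of context that used to be attached can still be
pulled from the journal by hand; a minimal sketch, with an arbitrary window:

  journalctl -b --since '-20min' --no-pager > crash-context.log

which at least includes kernel messages about disk errors, memory pressure,
and the like.)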

I have spoken to abrt maintainers about this, but it seems that this
problem is specific to systemd, and for most other applications it is
OK to get a backtrace without any system-wide context. So I don't see
this changing any time soon ;(

Sometimes I ask people for logs, and sometimes I get them, and in those
cases it seems that hardware issues (e.g. a failing disk) or memory
exhaustion are often involved. In some cases there is no clear reason.
And since in the great majority we don't have any logs, it is hard to
say anything.

> It has happened to me on Fedora in the past. I could probably dig up
> discussions about the problem in general if you want, they explain it
> better than I ever could.
> 
> Does Fedora prevent that situation by tuning some kernel knobs nowadays
> for desktops?

I don't think so.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

[systemd-devel] is the watchdog useful?

2019-10-21 Thread Zbigniew Jędrzejewski-Szmek
In principle, the watchdog for services is nice. But in practice it seems
to bring only grief. The Fedora bugtracker is full of automated reports of
ABRTs,
and of those that were fired by the watchdog, pretty much 100% are bogus, in 
the sense that the machine was resource starved and the watchdog fired.

There are a few downsides to the watchdog killing the service:
1. if it is something like logind, it is possible that it will cause 
user-visible
failure of other services
2. restarting of the service causes additional load on the machine
3. coredump handling causes additional load on the machine, quite significant
4. those failures are reported in bugtrackers and waste everyone's time.

I had the following ideas:
1. disable coredumps for watchdog abrts: systemd could set some flag
on the unit or otherwise notify systemd-coredump about this, and it could just
log the occurrence but not dump the core file.
2. generally disable watchdogs and make them opt in. We have 'systemd-analyze 
service-watchdogs',
and we could make the default configurable to "yes|no".
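
(For reference, the runtime switch mentioned in 2. already behaves like this;
a sketch of the existing interface:

  systemd-analyze service-watchdogs        # query, prints "yes" or "no"
  systemd-analyze service-watchdogs no     # stop acting on service watchdog timeouts

so the change would mostly be about the built-in default.)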

What do you think?
Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] cannot unsubscribe from this list

2019-10-16 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Oct 15, 2019 at 04:08:24PM -0400, Brian Reichert wrote:
> I initiated an unsubscribe from this web page:
> 
>   https://lists.freedesktop.org/mailman/options/systemd-devel
> 
> That created a confirmation email, that I replied to.

Yeah, that doesn't work. Use the web interface:
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Replacement for JoinControllers

2019-10-05 Thread Zbigniew Jędrzejewski-Szmek
On Sat, Oct 05, 2019 at 09:34:09AM +0200, azu...@pobox.sk wrote:
> Hi guys.
> 
> recently, we upgraded one of our servers from Debian Stretch
> (systemd version 232-25+deb9u12) to Debian Buster (systemd version
> 241-7~deb10u1). Soon after, we found out that setting
> 'JoinControllers' is not longer available. Is there any replacement
> or workaround? Our software is depended on that setting.

The replacement is to use cgroups v2, where all controllers are
always joined, so this setting is not useful.

JoinControllers= was removed in 143fadf369a18449464956206226761e49be1928
because of a) cgroupsv2, b) it was broken, c) it appears almost
nobody was using it (which is also confirmed by the fact that nobody
reported that it doesn't work as documented...).

Sorry if that's disappointing, but you'll need to switch and/or adjust.
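
If your software really needs all controllers in a single hierarchy, the
closest equivalent is booting with the unified hierarchy (cgroups v2), where
that is always the case. A minimal sketch, appended to the kernel command
line (how you edit it depends on your boot loader):

  systemd.unified_cgroup_hierarchy=1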

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Unload disabled units

2019-09-16 Thread Zbigniew Jędrzejewski-Szmek
On Sun, Sep 15, 2019 at 03:12:22AM +, Daniel Duong wrote:
> Hi,
> 
> I have a 2 template units: 1 for a service and 1 for a socket. Each
> instance is a version of my web application.
> 
> After a successful deploy, I stop and disable the old version and I
> enable the new one:
>   systemctl start belleshop@0.2.socket
>   # Test that everything is fine
>   systemctl enable belleshop@0.2.socket
>   systemctl stop belleshop@0.1.socket
>   systemctl stop belleshop@0.1.service
>   systemctl disable belleshop@0.1.socket
> 
> I've done that for a few versions now, and it seemed to work OK. There
> is a little problem though. The old versions are still loaded:
> 
>   $ systemctl --no-legend --all list-units belleshop@*
>   belleshop@0.110.service loaded active   running Belleshop server
>   belleshop@0.34.service  loaded inactive deadBelleshop server
>   belleshop@0.36.service  loaded inactive deadBelleshop server
>   belleshop@0.37.service  loaded inactive deadBelleshop server
>   [...]
>   belleshop@0.110.socket  loaded active   running Belleshop socket
>   belleshop@0.34.socket   loaded inactive deadBelleshop socket
>   belleshop@0.36.socket   loaded inactive deadBelleshop socket
>   belleshop@0.37.socket   loaded inactive deadBelleshop socket
>   [...]
> 
> Is there any way I can unload these old versions?

Normally units should be unloaded immediately if they are stopped
and didn't fail. What systemd version are you using? (One possibility
to consider is that the glob matches *files*, and you are simply loading
the units at the time the systemctl query is made. Use 'belleshop@*'
instead.)

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] systemd243rc2, sysd-coredump is not triggered on segfaults

2019-09-03 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Sep 02, 2019 at 07:55:20PM -0600, Chris Murphy wrote:
> Maybe it's something unique to gnome-shell segfaults, that's the only
> thing I have crashing right now. But I've got a pretty good reproducer
> to get it to crash and I never have any listings with coredumpctl.
> 
> process segfaults but systemd-coredump does not capture it
> https://bugzilla.redhat.com/show_bug.cgi?id=1748145

Thanks for the report. It looks to be a bug/feature in gnome-shell.
I replied in the bug.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] When will my timer next run?

2019-08-31 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Aug 30, 2019 at 06:04:22PM -0700, Kenneth Porter wrote:
> I've created my service timer with the following:
> 
> [Timer]
> # wait a bit after boot to let our victim catch up with its work
> OnBoot=13m

This needs to be OnBootSec=13m. (systemd-analyze verify is your friend
in cases like this.)

> # let the victim get some work done between backups
> # we use inactive to prevent back-to-back backups if they run long
> OnUnitInactiveSec=1h
> 
> I then run list-timers but all the time columns for my service are
> n/a. I want my backup to run with an hour between backups, and with
> a pause after boot to let all the machines come up and finish
> overdue work from any long power outage. I started the timer unit
> and then see this:
> 
> # systemctl  list-timers
> NEXT  LEFT  LAST  PASSED  UNIT                 ACTIVATES
> n/a   n/a   n/a   n/a     rsync-Saruman.timer  rsync-Saruman.service

So... your timer has (after the invalid "OnBoot=" is ignored) only
OnUnitInactiveSec=1h. After this timer unit is started, it is never
scheduled. I would expect it to be scheduled 1h after the unit was
started. So this seems to be a bug to me. If you agree, please open
an issue on https://github.com/systemd/systemd/issues/ so we can
track this.
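
In the meantime, something along these lines should behave the way you
intended (a minimal sketch of the corrected [Timer] section, plus the verify
step mentioned above):

  [Timer]
  # wait a bit after boot
  OnBootSec=13min
  # re-run one hour after the service last became inactive
  OnUnitInactiveSec=1h

  $ systemd-analyze verify rsync-Saruman.timer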

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] logind: 242~rc2 break VT/tty switching on Fedora 31

2019-08-30 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Aug 30, 2019 at 04:08:39PM +0200, Hans de Goede wrote:
> Hi All,
> 
> I already filed a github issue for $subject:
> https://github.com/systemd/systemd/issues/13437
>
> But I'm not sure how closely github issues are watched, hence this email. It 
> would be nice if we can get this fixed for F31 beta, or if some more time is 
> needed, at least get this regression fixed for F31 gold.

Yes, they are watched. No need to post to the mailing list about bugs.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] sd-boot on Fedora 30?

2019-08-24 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Aug 23, 2019 at 11:43:43AM -0700, Filipe Brandenburger wrote:
> Hi,
> 
> I've been trying to get sd-boot to work on Fedora 30, made some progress
> but not fully there yet...
> 
> First I found my partition GPT type in /boot was incorrect and bootctl was
> trying to use /boot/efi instead. Ok, that fixed, now I get a list of
> kernels.

What fstype is /boot? If it's ext4, then it will not be read by sd-boot.
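
A quick way to check what sd-boot will actually see (a minimal sketch):

  findmnt -no FSTYPE /boot
  findmnt -no FSTYPE /boot/efi
  bootctl status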

> But whenever I boot, I only get the "Reboot Into Firmware Interface" menu
> entry and nothing else...
> 
> I imagine this might be related to the Grub entries:
> 
> $ sudo bootctl list
> /boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf:7:
> Unknown line "id", ignoring.
> /boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf:8:
> Unknown line "grub_users", ignoring.
> /boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf:9:
> Unknown line "grub_arg", ignoring.
> /boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf:10:
> Unknown line "grub_class", ignoring.
> title: Fedora (5.2.8-200.fc30.x86_64) 30 (Workstation Edition)
> (default)
>id: 4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64
>source:
> /boot/loader/entries/4d3fcddc096748c4a398037699515189-5.2.8-200.fc30.x86_64.conf
>   version: 5.2.8-200.fc30.x86_64
> linux: /vmlinuz-5.2.8-200.fc30.x86_64
>initrd: /initramfs-5.2.8-200.fc30.x86_64.img
>   options: $kernelopts
> 
> I tried to at least fix the $kernelopts one, with grubby --args="..."
> adding a dummy argument just to deduplicate it from the grubenv contents,
> but still couldn't boot from there...
> 
> Even if I fix that, looks like new kernels installed would trigger
> /usr/lib/kernel/install.d/20-grub.install and probably mess up that setup
> (do I have to mask or remove it completely?)
> 
> Fedora's BLS document unfortunately doesn't mention sd-boot at all :-(
> https://fedoraproject.org/wiki/Changes/BootLoaderSpecByDefault
> 
> Anyways, if anyone has hints of what I could try next, I'd be quite
> interested to know. (Perhaps adding some docs to Fedora wiki would be
> pretty helpful too!) I thought I'd ask here first... If I don't hear back,
> I might try to ask on Fedora lists instead.

Unfortunately what the grub people call BLS is not the real thing.
For reasons I still don't quite understand, they decided to make their
file format incompatible. (That they chose different file paths, that
is understandable, because they wanted to use /boot and retain compatibility
with old installations. But this is not what causes incompatibilities with
sd-boot. It's the gratuitous syntax changes in entry files.)

To get sd-boot working on Fedora, the easiest option is to get rid of
boot loader entries in /boot, and make sure kernels are installed to
/boot/efi.
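
For reference, an entry in the format sd-boot understands, e.g.
/boot/efi/loader/entries/fedora-5.2.8-200.fc30.x86_64.conf (a minimal
sketch; paths and options are placeholders), looks roughly like:

  title    Fedora 30 (Workstation Edition)
  version  5.2.8-200.fc30.x86_64
  linux    /vmlinuz-5.2.8-200.fc30.x86_64
  initrd   /initramfs-5.2.8-200.fc30.x86_64.img
  options  root=UUID=... ro

i.e. no id/grub_users/grub_arg/grub_class lines and no $kernelopts
indirection, with the linux/initrd paths relative to the ESP.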

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] nspawn blocks sync_file_range on arm

2019-08-19 Thread Zbigniew Jędrzejewski-Szmek
On Sun, Aug 18, 2019 at 05:02:35PM +0100, Steve Dodd wrote:
> ARM has two sync_file_range syscalls, sync_file_range and sync_file_range2.
> The former is apparently not used, and glibc calls the latter whenever a
> userspace program calls sync_file_range. I'm guessing systemd-nspawn
> doesn't know this, because the following code consistently fails in an nspawn
> container on ARM:
> 
> #define _GNU_SOURCE
> #include <fcntl.h>
> #include <stdio.h>
> #include <unistd.h>
> #include <errno.h>
> 
> void main()
> {
> int f = open("/tmp/syncrange.test",O_CREAT|O_RDWR,0666);
> int r=sync_file_range(f, 0, 0, 0);
> if (r)
> perror("sync_file_range");
> close(f);
> }
> 
> This seems to be causing problems specifically for borg(backup) and
> postgres:
> https://github.com/borgbackup/borg/issues/4710
> https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BydOUT4zjxb6QmJWy8U9WbC-q%2BJWV7wLsEY9Df%3Dmw0Mw%40mail.gmail.com#ac8f14897647dc7eae3c7e7cbed36d93
> 
> I will test the obvious fix when I can, unless someone beats me to it :)

Please test https://github.com/systemd/systemd/pull/13352.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] lto issues

2019-08-06 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Aug 06, 2019 at 09:34:36AM +0200, Michael Biebl wrote:
> Am Di., 6. Aug. 2019 um 09:26 Uhr schrieb Zbigniew Jędrzejewski-Szmek
> :
> >
> > On Sat, Aug 03, 2019 at 07:03:47PM +0200, Michael Biebl wrote:
> > > Hi,
> > >
> > > today I tried compiling systemd v242 (on Debian sid) once using lto
> > > (-Db_lto=true) and once without lto (-Db_lto=false).
> > >
> > > The lto build took approximately twice as long on my laptop (using
> > > dpkg-buildpackage, which introduces a bit of overhead):
> > >
> > > lto:
> > > real 11m22,605s
> > > user 37m9,675s
> > > sys 2m51,041s
> > >
> > > nolto:
> > > real 6m35,615s
> > > user 18m51,782s
> > > sys 2m12,934s
> > >
> > > That's kinda expected. What suprised me though is that using lto
> > > produced larger binaries:
> >
> > I built systemd in F31 (-Doptimization=2 -Db_lto=true/false), and I saw
> > a big increase in binary sizes *before stripping*. After stripping,
> > binaries with lto=true are smaller:
> >
> > $ ls -l build-rawhide{,-lto}/{systemd,src/shared/libsystemd-shared-243.so}
> >   7116384 Aug  6 09:08 build-rawhide/systemd*
> >  11951256 Aug  6 09:07 build-rawhide/src/shared/libsystemd-shared-243.so*
> >   1594912 Aug  6 09:12 build-rawhide-lto/systemd*
> >   3167096 Aug  6 09:11 
> > build-rawhide-lto/src/shared/libsystemd-shared-243.so*
> > $ strip build-rawhide{,-lto}/{systemd,src/shared/libsystemd-shared-243.so}
> > $ ls -l build-rawhide{,-lto}/{systemd,src/shared/libsystemd-shared-243.so}
> >   1439640 Aug  6 09:19 build-rawhide/systemd*
> >   2806456 Aug  6 09:19 build-rawhide/src/shared/libsystemd-shared-243.so*
> >   1370008 Aug  6 09:19 build-rawhide-lto/systemd*
> >   2806288 Aug  6 09:19 
> > build-rawhide-lto/src/shared/libsystemd-shared-243.so*
> 
> 
> The sizes I posted i.e. the debdiff is after stripping.
> 
> gcc --version
> gcc (Debian 8.3.0-19) 8.3.0
> Copyright (C) 2018 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> 
> ld --version
> GNU ld (GNU Binutils for Debian) 2.32.51.20190727
> Copyright (C) 2019 Free Software Foundation, Inc.
> This program is free software; you may redistribute it under the terms of
> the GNU General Public License version 3 or (at your option) a later version.
> 
> So with the toolchain I have, lto mostly has downsides. The only benefit
> it seems to have is that it optimizes unnecessary library dependencies
> away (see how the udev subpackage does not depend on libcap2 (>=
> 1:2.10), libidn2-0 (>= 0.6)

In Fedora, lto seems to have good returns. The final package size was
~10% smaller. We disabled it for a while because there were linking
failures, but I think it's all resolved now.

I forgot to mention. In my test:
gcc-9.1.1-2.fc31.1.x86_64
binutils-2.32-19.fc31.x86_64

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] lto issues

2019-08-06 Thread Zbigniew Jędrzejewski-Szmek
On Sat, Aug 03, 2019 at 07:03:47PM +0200, Michael Biebl wrote:
> Hi,
> 
> today I tried compiling systemd v242 (on Debian sid) once using lto
> (-Db_lto=true) and once without lto (-Db_lto=false).
> 
> The lto build took approximately twice as long on my laptop (using
> dpkg-buildpackage, which introduces a bit of overhead):
> 
> lto:
> real 11m22,605s
> user 37m9,675s
> sys 2m51,041s
> 
> nolto:
> real 6m35,615s
> user 18m51,782s
> sys 2m12,934s
> 
> That's kinda expected. What suprised me though is that using lto
> produced larger binaries:

I built systemd in F31 (-Doptimization=2 -Db_lto=true/false), and I saw
a big increase in binary sizes *before stripping*. After stripping,
binaries with lto=true are smaller:

$ ls -l build-rawhide{,-lto}/{systemd,src/shared/libsystemd-shared-243.so}
  7116384 Aug  6 09:08 build-rawhide/systemd*
 11951256 Aug  6 09:07 build-rawhide/src/shared/libsystemd-shared-243.so*
  1594912 Aug  6 09:12 build-rawhide-lto/systemd*
  3167096 Aug  6 09:11 build-rawhide-lto/src/shared/libsystemd-shared-243.so*
$ strip build-rawhide{,-lto}/{systemd,src/shared/libsystemd-shared-243.so}
$ ls -l build-rawhide{,-lto}/{systemd,src/shared/libsystemd-shared-243.so}  
  1439640 Aug  6 09:19 build-rawhide/systemd*
  2806456 Aug  6 09:19 build-rawhide/src/shared/libsystemd-shared-243.so*
  1370008 Aug  6 09:19 build-rawhide-lto/systemd*
  2806288 Aug  6 09:19 build-rawhide-lto/src/shared/libsystemd-shared-243.so*

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] systemd's connections to /run/systemd/private ?

2019-08-01 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Aug 01, 2019 at 08:46:58AM -0400, Brian Reichert wrote:
> On Thu, Aug 01, 2019 at 08:17:01AM +, Zbigniew J??drzejewski-Szmek wrote:
> > The kernel will use the lower-numbered available fd, so there's lot of
> > "reuse" of the same numbers happening. This strace means that between
> > each of those close()s here, some other function call returned fd 19.
> > Until we know what those calls are, we cannot say why fd19 remains
> > open. (In fact, the only thing we can say for sure, is that the
> > accept4() call shown above is not relevant.)
> 
> So, what I propose at this step:
> 
> - Restart my strace, this time using '-e trace=desc' (Trace all
>   file descriptor related system calls.)
> 
> - Choose to focus on a single descriptor; when I passively notice
>   that '19' has been reused a couple of time, stop the trace.

Well, no. If you notice that '19' has STOPPED being reused, then
stop the trace. If it is being reused, then it's not "leaked".

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Backup the current boot logs in raw format

2019-08-01 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Aug 01, 2019 at 10:52:07AM +0200, Francis Moreau wrote:
> On Thu, Aug 1, 2019 at 10:36 AM Zbigniew Jędrzejewski-Szmek
>  wrote:
> >
> > On Thu, Aug 01, 2019 at 10:26:50AM +0200, Francis Moreau wrote:
> > > On Thu, Aug 1, 2019 at 9:45 AM Zbigniew Jędrzejewski-Szmek
> > >  wrote:
> > > >
> > > > On Thu, Aug 01, 2019 at 09:11:19AM +0200, Francis Moreau wrote:
> > > > > On Wed, Jul 24, 2019 at 4:08 PM Zbigniew Jędrzejewski-Szmek
> > > > >  wrote:
> > > > > > you can export and write to a journal file with:
> > > > > >   journalctl -o export ... | 
> > > > > > /usr/lib/systemd/systemd-journal-remote -o /tmp/foo.journal -
> > > > > > This has the advantage that you can apply any journalctl filter 
> > > > > > where
> > > > > > the dots are, e.g. '-b'.
> > > > >
> > > > > This doesn't look to work correctly:
> > > > >
> > > > > $ journalctl -b | head
> > > > > -- Logs begin at Thu 2017-04-13 14:05:51 CEST, end at Thu 2019-08-01
> > > > > 08:51:39 CEST. --
> > > > > Mar 25 06:51:35 crapovo kernel: microcode: microcode updated early to
> > > > > revision 0x25, date = 2018-04-02
> > > > >
> > > > > $ journalctl -o export -b | /usr/lib/systemd/systemd-journal-remote -o
> > > > > /tmp/foo.journal -
> > > > > $ journalctl -b --file=/tmp/foo.journal | head
> > > > > -- Logs begin at Sat 2019-06-22 18:32:31 CEST, end at Thu 2019-08-01
> > > > > 08:45:45 CEST. --
> > > > > Jun 22 18:32:31 crapovo polkitd[1278]: Unregistered Authentication
> > > > > Agent for unix-process:7300:772806437 (system bus name :1.4562, object
> > > > > path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale
> > > > > en_US.UTF-8) (disconnected from bus)
> > > > >
> > > > > As you can see, the start is not the same.
> > > > >
> > > > > Also are foo.journal data compressed ?
> > > >
> > > > What does "ls /tmp/foo*" say?
> > > >
> > >
> > > /tmp/foo@4e9f5da4aaac4433bc8744fe49e25b5a-0001-000584e4cc1ef61f.journal
> > >  /tmp/foo.journal
> >
> > systemd-journal-remote will "rotate" files when they grow above a certain
> > size.
> > (The same as systemd-journald). 'journalctl --file=/tmp/foo*.journal' should
> > do the trick.
> 
> Indeed that did the trick, thanks !
> 
> It's a bit counterintuitive because I asked the journal to be saved in
> /tmp/foo.journal only. Can this "rotation" be disabled somehow ?

Unfortunately not. That's because of the heritage of those programs, which
were written with continuous reception of logs in mind; "conversions" like
the one above are just a fortunate side-effect.
I guess it'd be nice to make it possible to disable rotation.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Backup the current boot logs in raw format

2019-08-01 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Aug 01, 2019 at 10:26:50AM +0200, Francis Moreau wrote:
> On Thu, Aug 1, 2019 at 9:45 AM Zbigniew Jędrzejewski-Szmek
>  wrote:
> >
> > On Thu, Aug 01, 2019 at 09:11:19AM +0200, Francis Moreau wrote:
> > > On Wed, Jul 24, 2019 at 4:08 PM Zbigniew Jędrzejewski-Szmek
> > >  wrote:
> > > > you can export and write to a journal file with:
> > > >   journalctl -o export ... | /usr/lib/systemd/systemd-journal-remote -o 
> > > > /tmp/foo.journal -
> > > > This has the advantage that you can apply any journalctl filter where
> > > > the dots are, e.g. '-b'.
> > >
> > > This doesn't look to work correctly:
> > >
> > > $ journalctl -b | head
> > > -- Logs begin at Thu 2017-04-13 14:05:51 CEST, end at Thu 2019-08-01
> > > 08:51:39 CEST. --
> > > Mar 25 06:51:35 crapovo kernel: microcode: microcode updated early to
> > > revision 0x25, date = 2018-04-02
> > >
> > > $ journalctl -o export -b | /usr/lib/systemd/systemd-journal-remote -o
> > > /tmp/foo.journal -
> > > $ journalctl -b --file=/tmp/foo.journal | head
> > > -- Logs begin at Sat 2019-06-22 18:32:31 CEST, end at Thu 2019-08-01
> > > 08:45:45 CEST. --
> > > Jun 22 18:32:31 crapovo polkitd[1278]: Unregistered Authentication
> > > Agent for unix-process:7300:772806437 (system bus name :1.4562, object
> > > path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale
> > > en_US.UTF-8) (disconnected from bus)
> > >
> > > As you can see, the start is not the same.
> > >
> > > Also are foo.journal data compressed ?
> >
> > What does "ls /tmp/foo*" say?
> >
> 
> /tmp/foo@4e9f5da4aaac4433bc8744fe49e25b5a-0001-000584e4cc1ef61f.journal
>  /tmp/foo.journal

systemd-journal-remote will "rotate" files when they grow above a certain size.
(The same as systemd-journald). 'journalctl --file=/tmp/foo*.journal' should
do the trick.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] systemd's connections to /run/systemd/private ?

2019-08-01 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Jul 31, 2019 at 11:37:31AM -0400, Brian Reichert wrote:
> On Wed, Jul 31, 2019 at 12:36:41AM +0300, Uoti Urpala wrote:
> > On Tue, 2019-07-30 at 14:56 -0400, Brian Reichert wrote:
> > > I see, between 13:49:30 and 13:50:01, I see 25 'successful' calls
> > > for close(), e.g.:
> > > 
> > >   13:50:01 close(19)  = 0
> > > 
> > > Followed by getsockopt(), and a received message on the supposedly-closed
> > > file descriptor:
> > > 
> > >   13:50:01 getsockopt(19, SOL_SOCKET, SO_PEERCRED, {pid=3323, uid=0, 
> > > gid=0}, [12]) = 0
> > 
> > Are you sure it's the same file descriptor? You don't explicitly say
> > anything about there not being any relevant lines between those. Does
> > systemd really just call getsockopt() on fd 19 after closing it, with
> > nothing to trigger that? Obvious candidates to check in the strace
> > would be an accept call returning a new fd 19, or epoll indicating
> > activity on the fd (though I'd expect systemd to remove the fd from the
> > epoll set after closing it).
> 
> My analysis is naive.
> 
> There was an earlier suggestion to use strace, limiting it to a
> limited number of system calls.
> 
> I then used a simple RE to look for the string '(19', to see calls where
> '19' was used as an initial argument to system calls.  That's way too
> simplistic.
> 
> To address some of your questions/points.
> 
> - No, I don't know if it's the same file descriptor.  I could not
>   start strace early enough to catch the creation of several dozen
>   file descriptors.

This shouldn't really matter. We care about descriptors which are
created while the process is running, so it is not a problem if we
"miss" some that are created early.

>   13:50:01 accept4(13, 0, NULL, SOCK_CLOEXEC|SOCK_NONBLOCK) = 19
>   13:50:01 getsockopt(19, SOL_SOCKET, SO_PEERCRED, {pid=3323, uid=0, gid=0}, 
> [12]) = 0
>   13:50:01 getsockopt(19, SOL_SOCKET, SO_RCVBUF, [4194304], [4]) = 0
>   13:50:01 getsockopt(19, SOL_SOCKET, SO_SNDBUF, [262144], [4]) = 0
>   13:50:01 getsockopt(19, SOL_SOCKET, SO_PEERCRED, {pid=3323, uid=0, gid=0}, 
> [12]) = 0
>   13:50:01 getsockopt(19, SOL_SOCKET, SO_ACCEPTCONN, [0], [4]) = 0 13:50:01 
> getsockname(19, {sa_family=AF_LOCAL, sun_path="/run/systemd/private"}, [23]) 
> = 0 13:50:01 recvmsg(19, {msg_name(0)=NULL, msg_iov(1)=[{"\0AUTH EXTERNAL 
> 30\r\nNEGOTIATE_UNIX_FD\r\nBEGIN\r\n", 256}], msg_controllen=0, 
> msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_NOSIGNAL|MSG_CMSG_CLOEXEC) = 45
>   13:50:01 sendmsg(19, {msg_name(0)=NULL, msg_iov(3)=[{"OK 
> 9fcf621ece0a4fe897586e28058cd2fb\r\nAGREE_UNIX_FD\r\n", 52}, {NULL, 0}, 
> {NULL, 0}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 52 
> 13:50:01 sendmsg(19, {msg_name(0)=NULL, 
> msg_iov(2)=[{"l\4\1\1P\0\0\0\1\0\0\0p\0\0\0\1\1o\0\31\0\0\0/org/freedesktop/systemd1\0\0\0\0\0\0\0\2\1s\0
>  
> \0\0\0org.freedesktop.systemd1.Manager\0\0\0\0\0\0\0\0\3\1s\0\7\0\0\0UnitNew\0\10\1g\0\2so\0",
>  128}, 
> {"\20\0\0\0session-11.scope\0\0\0\0003\0\0\0/org/freedesktop/systemd1/unit/session_2d11_2escope\0",
>  80}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = -1 EPIPE 
> (Broken pipe)
>   13:50:01 close(19)  = 0
>   13:50:01 close(19)  = 0
>   13:50:01 close(19)  = 0
>   13:50:01 close(19)  = 0
>   13:50:01 close(19)  = 0
>   13:50:01 close(19)  = 0
>   13:50:01 close(19)  = 0

The kernel will use the lower-numbered available fd, so there's lot of
"reuse" of the same numbers happening. This strace means that between
each of those close()s here, some other function call returned fd 19.
Until we know what those calls are, we cannot say why fd19 remains
open. (In fact, the only thing we can say for sure, is that the
accept4() call shown above is not relevant.)

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Backup the current boot logs in raw format

2019-08-01 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Aug 01, 2019 at 09:11:19AM +0200, Francis Moreau wrote:
> On Wed, Jul 24, 2019 at 4:08 PM Zbigniew Jędrzejewski-Szmek
>  wrote:
> > you can export and write to a journal file with:
> >   journalctl -o export ... | /usr/lib/systemd/systemd-journal-remote -o 
> > /tmp/foo.journal -
> > This has the advantage that you can apply any journalctl filter where
> > the dots are, e.g. '-b'.
> 
> This doesn't look to work correctly:
> 
> $ journalctl -b | head
> -- Logs begin at Thu 2017-04-13 14:05:51 CEST, end at Thu 2019-08-01
> 08:51:39 CEST. --
> Mar 25 06:51:35 crapovo kernel: microcode: microcode updated early to
> revision 0x25, date = 2018-04-02
> 
> $ journalctl -o export -b | /usr/lib/systemd/systemd-journal-remote -o
> /tmp/foo.journal -
> $ journalctl -b --file=/tmp/foo.journal | head
> -- Logs begin at Sat 2019-06-22 18:32:31 CEST, end at Thu 2019-08-01
> 08:45:45 CEST. --
> Jun 22 18:32:31 crapovo polkitd[1278]: Unregistered Authentication
> Agent for unix-process:7300:772806437 (system bus name :1.4562, object
> path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale
> en_US.UTF-8) (disconnected from bus)
> 
> As you can see, the start is not the same.
> 
> Also are foo.journal data compressed ?

What does "ls /tmp/foo*" say?

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] KExecWatchdogSec NEWS entry needs work

2019-07-30 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jul 30, 2019 at 08:32:44AM +1000, Clinton Roy wrote:
> Particularly the following sentence:
> 
> This option defaults to off, since it depends on drivers and
> software setup whether the watchdog is correctly reset again after
> the kexec completed, and thus for the general case not clear if safe
> (since it might cause unwanted watchdog reboots after the kexec
> completed otherwise).
> 
> I can't quite work out what intent is, otherwise I'd take a stab myself.

https://github.com/systemd/systemd/pull/13227

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Backup the current boot logs in raw format

2019-07-24 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Jul 24, 2019 at 03:48:40PM +0200, Francis Moreau wrote:
> Hi,
> 
> I would like to backup the journal logs for the current boot in a
> "raw" format so I can reuse it later with "journalctl
> --file=my-backup".
> 
> But looking at the different values for "-o" option I can't find the answer.
> 
> Could anybody give me some clues ?

One option is to simply copy some of the files in /var/log/journal
to a different location. You can then read them with 'journalctl -D'.
If you want to be more granular and only select specific log entries,
you can export and write to a journal file with:
  journalctl -o export ... | /usr/lib/systemd/systemd-journal-remote -o 
/tmp/foo.journal -
This has the advantage that you can apply any journalctl filter where
the dots are, e.g. '-b'.
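
For example, to snapshot just the current boot and read it back later
(a minimal sketch; the output name is arbitrary, and the glob accounts for
the file rotation that systemd-journal-remote may do):

  journalctl -b -o export | \
    /usr/lib/systemd/systemd-journal-remote -o /tmp/current-boot.journal -
  journalctl --file '/tmp/current-boot*.journal'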

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] "Unknown lvalue '' in section 'Service'"

2019-07-18 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Jul 18, 2019 at 11:46:59AM +0300, Mantas Mikulėnas wrote:
> On Thu, Jul 18, 2019 at 11:34 AM Ulrich Windl <
> ulrich.wi...@rz.uni-regensburg.de> wrote:
> 
> > Hi!
> >
> > I noticed that a line of "===" in "[Service]" causes the message "
> > Unknown lvalue '' in section 'Service'".
> > (systemd 228)
> >
> > Shouldn't that be "Parse error at '===' in section 'Service'"?
> >
> 
> Arguably it isn't a parse error – the keyfile parser successfully
> recognizes the line as assigning the value "==" to the key "". It's
> only later when the parsed results are interpreted that each key is matched
> to an internal handler.
> 
> The error message *could* be clearer if all such errors had a common "Parse
> error:" prefix, I guess. (And what's the point of calling it an 'lvalue'
> anyway?...)

https://github.com/systemd/systemd/pull/13107

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] systemd's connections to /run/systemd/private ?

2019-07-11 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Jul 11, 2019 at 10:08:43AM -0400, Brian Reichert wrote:
> On Wed, Jul 10, 2019 at 10:44:14PM +, Zbigniew J??drzejewski-Szmek wrote:
> > That's ancient... 228 was released almost four years ago.
> 
> That's the joy of using a commercial Linux distribution; they tend
> to be conservative about updates.  SLES may very well have backported
> fixes to the packaged version they maintain.
> 
> They may also have a newer version of a systemd RPM for us to take.
> 
> I'm looking for an efficient way to repro the symptoms, as to confirm
> whether a newer RPM solves this for us.
> 
> > > > > When we first spin up a new SLES12 host with our custom services,
> > > > > the number of connections to /run/systemd/private numbers in the
> > > > > mere hundreds. 
> > > 
> > > > That sounds wrong already. Please figure out what those connections
> > > > are. I'm afraid that you might have to do some debugging on your
> > > > own, since this issue doesn't seem easily reproducible.
> 
> Above, I cite a desire for reproducing the symptoms.  If you're
> confident that a newly-spun-up idle host should not hover at hundreds
> of connections, then hypothetically I could update the vendor-provided
> systemd RPM (if there is one), reboot, and see if the connection
> count is reduced.
> 
> > strace -p1 -e recvmsg,close,accept4,getsockname,getsockopt,sendmsg -s999
> >
> > yields the relevant info. In particular, the pid, uid, and guid of the
> > remote is shown. My approach would be to log this to some file, and
> > then see which fds remain, and then look up this fd in the log.
> > The recvmsg calls contain the serialized dbus calls, a bit messy but
> > understandable. E.g. 'systemctl show systemd-udevd' gives something
> > like this:
> 
> Thanks for such succinct feedback; I'll see what I can get from this.
> 
> In my prior email, I showed how some of the connections were
> hours/days old, even with no connecting peer.
> 
> Does that sound like expected behavior?

No, this shouldn't happen.

What I was trying to say is that if you have the strace log, you
can figure out what created the stale connection and what the dbus
call was, and from all that info it should be fairly simple to figure
out what the calling command was. Once you have that, it'll be much
easier to reproduce the issue in controlled setting and look for the
fix.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] systemd's connections to /run/systemd/private ?

2019-07-10 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Jul 10, 2019 at 09:51:36AM -0400, Brian Reichert wrote:
> On Wed, Jul 10, 2019 at 07:37:19AM +, Zbigniew J??drzejewski-Szmek wrote:
> 
> > It's a bug report as any other. Writing a meaningful reply takes time
> > and effort. Lack of time is a much better explanation than ressentiments.
> 
> I wasn't expressing resentment; I apologize if it came off that way.
> 
> > Please always specify the systemd version in use. We're not all SLES
> > users, and even if we were, I assume that there might be different
> > package versions over time.
> 
> Quite reasonable:
> 
>   localhost:/var/tmp # cat /etc/os-release
>   NAME="SLES"
>   VERSION="12-SP3"
>   VERSION_ID="12.3"
>   PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
>   ID="sles"
>   ANSI_COLOR="0;32"
>   CPE_NAME="cpe:/o:suse:sles:12:sp3"
> 
>   localhost:/var/tmp # rpm -q systemd
>   systemd-228-142.1.x86_64

That's ancient... 228 was released almost four years ago.

> > > When we first spin up a new SLES12 host with our custom services,
> > > the number of connections to /run/systemd/private numbers in the
> > > mere hundreds. 
> 
> > That sounds wrong already. Please figure out what those connections
> > are. I'm afraid that you might have to do some debugging on your
> > own, since this issue doesn't seem easily reproducible.
> 
> What tactics should I employ?  All of those file handles to
> /run/systemd/private are owned by PID 1, and 'ss' implies there are
> no peers.
> 
> 'strace' in pid shows messages are flowing, but that doesn't reveal
> the logic about how the connections get created or culled, nor who
> initiated them.

strace -p1 -e recvmsg,close,accept4,getsockname,getsockopt,sendmsg -s999

yields the relevant info. In particular, the pid, uid, and guid of the
remote is shown. My approach would be to log this to some file, and
then see which fds remain, and then look up this fd in the log.
The recvmsg calls contain the serialized dbus calls, a bit messy but
understandable. E.g. 'systemctl show systemd-udevd' gives something
like this:

recvmsg(20, {msg_name=NULL, msg_namelen=0, 
msg_iov=[{iov_base="l\1\4\1\5\0\0\0\1\0\0\0\257\0\0\0\1\1o\08\0\0\0", 
iov_len=24}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, 
MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 24
recvmsg(20, {msg_name=NULL, msg_namelen=0, 
msg_iov=[{iov_base="/org/freedesktop/systemd1/unit/systemd_2dudevd_2eservice\0\0\0\0\0\0\0\0\3\1s\0\6\0\0\0GetAll\0\0\2\1s\0\37\0\0\0org.freedesktop.DBus.Properties\0\6\1s\0\30\0\0\0org.freedesktop.systemd1\0\0\0\0\0\0\0\0\10\1g\0\1s\0\0\0\0\0\0\0",
 ...

HTH,
Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] OFFLIST Re: systemd's connections to /run/systemd/private ?

2019-07-10 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jul 09, 2019 at 11:05:50PM +0300, Mantas Mikulėnas wrote:
> On Tue, Jul 9, 2019 at 4:28 PM Brian Reichert  wrote:
> 
> > On Tue, Jul 09, 2019 at 11:21:13AM +0100,
> > systemd-devel@lists.freedesktop.org wrote:
> > > Hi Brian
> > >
> > > I feel embarrassed at having recommended you to join the systemd-devel
> > > list :( I don't understand why nobody is responding to you, and I'm not
> > > qualified to help!
> >
> > I appreciate the private feedback.  I recognize this is an all-volunteer
> > ecosystem, but I'm not used to radio silence. :/
> >
> > > There is a bit of anti-SUSE feeling for some reason
> > > that I don't really understand, but Lennart in particular normally
> > > seems to be very helpful, as does Zbigniew.
> >
> 
> It seems that Lennart tends to process his mailing-list inbox only every
> couple of weeks. He's a bit more active on GitHub however.
> 
> The rest of us are probably either waiting for a dev to make a comment,
> and/or wondering why such massive numbers of `systemctl` are being run on
> your system in the first place.
> >
> > I'm new to this list, so haven't seen any anti-SLES sentiments as
> > of yet.  But, based on the original symptoms I reported, this occurs
> > on many distributions.

It's a bug report like any other. Writing a meaningful reply takes time
and effort. Lack of time is a much better explanation than resentment.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] systemd's connections to /run/systemd/private ?

2019-07-10 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jul 02, 2019 at 09:57:44AM -0400, Brian Reichert wrote:
> At $JOB, on some of our SLES12 boxes, our logs are getting swamped
> with messages saying:
> 
>   "Too many concurrent connections, refusing"

Please always specify the systemd version in use. We're not all SLES
users, and even if we were, I assume that there might be different
package versions over time.

>   # ss -x | grep /run/systemd/private | wc -l
>   4015

/run/systemd/private is used by systemctl and other systemd utilities
when running as root. Those connections are expected to be short-lived.
Generally, on a normal machine "ss -x | grep /run/systemd/private | wc -l"
is expected to yield 0 or a very low number transiently.

> But, despite the almost 4k connections, 'ss' shows that there are
> no connected peers:
> 
>   # ss -x | grep /run/systemd/private | grep -v -e '* 0' | wc -l
>   0

Interesting. ss output is not documented at all from what I can see,
but indeed '* 0' seems to indicate that. It is possible that systemd
has a leak and is not closing the private bus connections properly.

> When we first spin up a new SLES12 host with our custom services,
> the number of connections to /run/systemd/private numbers in the
> mere hundreds. 

That sounds wrong already. Please figure out what those connections
are. I'm afraid that you might have to do some debugging on your
own, since this issue doesn't seem easily reproducible.

(I installed systemd with CONNECTIONS_MAX set to 10, and I can easily
saturate the number of available connections with
  for i in {1..11}; do systemctl status '*' & sleep 0.5; kill -STOP $!;done
As soon as I allow the processes to continue or kill them, the connection
count goes down. They never show up with '* 0'.)

> Is my guess about CONNECTIONS_MAX's relationship to /run/systemd/private
> correct?

Yes. The number is hardcoded because it's expected to be "large
enough". The connection count shouldn't be more than "a few" or maybe
a dozen at any time.

> I have a hypothesis that this may be some resource leak in systemd,
> but I've not found a way to test that.

Once you figure out what is creating the connection, it would be useful
to attach strace to pid 1 and see what is happening there.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Antw: Re: Is "systemctl status --state=failed" expected to fail silently?

2019-07-09 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jul 09, 2019 at 10:23:47AM +0200, Ulrich Windl wrote:
> >>> Zbigniew Jedrzejewski-Szmek  wrote on 09.07.2019 at 10:05
> in message <20190709080527.gk17...@in.waw.pl>:
> > On Tue, Jul 09, 2019 at 08:49:32AM +0200, Ulrich Windl wrote:
> >> Hi!
> >> 
> >> It seems "--state=failed" is being ignored silently for "systemctl status" (in
> >> version 228). Is this by design?
> > 
> > Nope. In 242-1092+ it seems to work fine.
> 
> In v228 it is effective for "list-units", but not for "status"...

Oh, right. I checked "list-units", but not "status".
"systemctl status 'systemd*' --state=running" and
"systemctl status 'systemd*' --state=failed" both seem to do the
right thing here.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] Is "systemctl status --state=failed" expected to fail silently?

2019-07-09 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jul 09, 2019 at 08:49:32AM +0200, Ulrich Windl wrote:
> Hi!
> 
> It seems "--state=failed" is being ignored silently for "systemctl status" 
> (in version 228). Is this by design?

Nope. In 242-1092+ it seems to work fine.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] connection failure

2019-07-02 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jul 02, 2019 at 03:39:04PM +0200, ABDUL MAJITH wrote:
> Hi all,
> 
> I am trying to use the Docker in GNS3, when I try to launch it show the
> error as follows,
> 
> -- The start-up result is done.
> Jul 02 15:21:55 reccon.irisa.fr systemd[1]: docker.service: Start request
> repeated too quickly.
> Jul 02 15:21:55 reccon.irisa.fr systemd[1]: docker.service: Failed with
> result 'exit-code'.
> Jul 02 15:21:55 reccon.irisa.fr systemd[1]: Failed to start Docker
> Application Container Engine.
> -- Subject: Unit docker.service has failed
> -- Defined-By: systemd
> -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
> -- 
> -- Unit docker.service has failed.
> -- 
> -- The result is failed.
> Jul 02 15:21:55 reccon.irisa.fr systemd[1]: docker.socket: Failed with
> result 'service-start-limit-hit'.
> Jul 02 15:24:06 reccon.irisa.fr systemd[1]: Starting dnf makecache...
> -- Subject: Unit dnf-makecache.service has begun start-up
> -- Defined-By: systemd
> -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
> -- 
> -- Unit dnf-makecache.service has begun starting up.
> Jul 02 15:24:07 reccon.irisa.fr dnf[18029]: Metadata timer caching disabled.
> Jul 02 15:24:07 reccon.irisa.fr systemd[1]: Started dnf makecache.
> -- Subject: Unit dnf-makecache.service has finished start-up
> -- Defined-By: systemd
> -- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
> -- 
> -- Unit dnf-makecache.service has finished starting up.
> -- 
> -- The start-up result is done.
> 
> How to rectify this failure to start the docker application

This is a problem with docker. Nothing to do with the systemd-devel list.
Please ask in a forum appropriate for docker issues.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] swap on zram service unit, using Conflicts=umount

2019-06-25 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jun 25, 2019 at 10:55:27AM +0200, Lennart Poettering wrote:
> On Mo, 24.06.19 13:16, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:
> 
> > > So for tmpfs mounts that don't turn off DefaultDependencies= we
> > > implicit add in an After=swap.target ordering dep. The thinking was
> > > that there's no point in swapping in all data of a tmpfs because we
> > > want to detach the swap device when we are going to flush it all out
> > > right after anyway. This made quite a difference to some folks.
> >
> > But we add Conflicts=umount.target, Before=umount.target, so we do
> > swapoff on all swap devices, which means that we swap in the data after all.
> > Maybe that's an error, and we should remove this, at least for
> > normal swap partitions (not files)?
> 
> We never know what kind of weird storage swap might be on, I'd
> probably leave that in, as it's really hard to figure out correctly
> when leaving swap on would be safe and when not.
> 
> Or to say this differently: if people want to micro-optimize that,
> they by all means should, but in that case they should probably drop
> in their manually crafted .swap unit with DefaultDependencies=no and
> all the ordering in place they need, and nothing else. i.e. I believe
> this kind of optimization is nothing we need to cater for in the
> generic case when swap is configured with /etc/fstab or through GPT
> enumeration.

Not swapping off would make a nice optimization. Maybe we should
invert this, and "drive" this from the other side: if we get a stop
job for the storage device, then do the swapoff. Then if there are
devices which don't need to stop, we wouldn't swapoff. This would cover
the common case of swap on partition.

I haven't really thought about the details, but in principle this
should already work, if all the dependencies are declared correctly.
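
For the zram case specifically, the "manually crafted" unit mentioned above
might look like this (a minimal sketch; device name assumed, and note the
absence of Conflicts=umount.target):

  # dev-zram0.swap
  [Unit]
  Description=Compressed swap on /dev/zram0
  DefaultDependencies=no
  Before=swap.target

  [Swap]
  What=/dev/zram0

  [Install]
  WantedBy=swap.target

(The zram device itself still has to be set up separately, as the existing
service does.)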

> zswap is different: we know exactly that the swap data is located in
> RAM, not on complex storage, hence it's entirely safe to not
> disassemble it at all, iiuc.

Agreed. It seems that any Conflicts= (including the one I proposed) are
unnecessary/harmful.

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] swap on zram service unit, using Conflicts=umount

2019-06-24 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Jun 24, 2019 at 02:11:03PM +0200, Lennart Poettering wrote:
> On Sa, 22.06.19 10:42, Chris Murphy (li...@colorremedies.com) wrote:
> 
> > Hi,
> >
> > I've got a commit to add 'Conflicts=umount.target' to this zram
> > service based on a bug comment I cited in the comment. But I'm not
> > certain I understand if it's a good idea or necessary.
> >
> > https://src.fedoraproject.org/fork/chrismurphy/rpms/zram/c/63900c455e8a53827aed697b9f602709b7897eb2?branch=devel
> >
> > I figure it's plausible at shutdown time that something is swapped
> > out, and a umount before swapoff could hang (briefly or indefinitely I
> > don't know), and therefore it's probably better to cause swapoff to
> > happen before umount.
> 
> So for tmpfs mounts that don't turn off DefaultDependencies= we
> implicit add in an After=swap.target ordering dep. The thinking was
> that there's no point in swapping in all data of a tmpfs because we
> want to detach the swap device when we are going to flush it all out
> right after anyway. This made quite a difference to some folks.

But we add Conflicts=umount.target, Before=umount.target, so we do
swapoff on all swap devices, which means that we swap in the data after all.
Maybe that's an error, and we should remove this, at least for
normal swap partitions (not files)?

> That said, I don't really grok zram, and not sure why there's any need
> to detach it at all. I mean, if at shutdown we lose compressed RAM
> or lose uncompressed RAM shouldn't really matter. Hence from my
> perspective there's no need for Conflicts= at all, but maybe I am
> missing something?
> 
> Zbigniew, any particular reason why you added the Conflicts= line?

It's been a while since I wrote that comment... Most likely I did it
because that's the default combination that we use for units, and I didn't
think that something different should be used for a swap service. Don't
read too much into it.

Zbyszek

Re: [systemd-devel] Building systemd

2019-06-10 Thread Zbigniew Jędrzejewski-Szmek
On Sun, Jun 09, 2019 at 05:58:16PM -0300, Leonardo Akel Daher wrote:
> Hi,
> 
> I am trying to make a Pull Request for systemd, but I am struggling running
> the tests locally (before making any code changes). I get the error of the
> attached image when running "ninja -C build/ test".

Hi,

please don't attach error logs as messages. Just copy & paste the error
text into your mail.

You are hitting a bug in meson. Since you installed meson using pip, you
have a very new meson, which you are using with a rather old python.
It's likely that this combination is not tested very well.
But it *should* work. Maybe you can get more help from Ubuntu users.

> I then decided to try using mkosi to create an image, but then I get this
> other error (attached on this other image). I currently use Ubuntu, so
> should I change the file mkosi.default so it includes Ubuntu's file instead
> of Fedora's?

You are using an old mkosi version that doesn't know about newer Fedora.
Either update, or switch to Ubuntu images as you say.

> Another problem I faced, but have already fixed was while installing Meson.
> In Ubuntu's package manager, the latest Meson version is < 0.46, so when
> running "meson build", it renders an error stating that my Meson version
> should be >= 0.46. I fixed this by installing Meson through Python's
> package manager, pip. But, I think this would be a problem when trying to
> build the Ubuntu image with mkosi. How would I install the pip's version of
> Meson instead of Ubuntu package manager's?

You can run arbitrary scripts during image creation, so in principle you
could do 'pip install' then. But it quickly becomes very messy and complicated.
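(If you really want to go that route: mkosi can run scripts from the source
tree during image creation, so a hook along these lines would work. Treat it
as a sketch, the exact hook file name depends on the mkosi version you use:

  # mkosi.prepare
  #!/bin/sh
  set -e
  # assumes python3-pip is installed in the image
  pip3 install 'meson>=0.46'

But as said, it gets messy quickly.)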

Zbyszek

Re: [systemd-devel] Systemd spends much time about 30 seconds for every unit

2019-06-07 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Jun 07, 2019 at 03:48:58AM +, 程 书意 wrote:
> Hi all,
> 
> I am working on virtualization on armv8 platform and encountered a complex 
> problem. From the below boot log, we can find:
> 
>   1.  Every unit init need about 30 seconds.
>   2.  status_printf function doesn't work. [no status_welcome print and no 
> manager_status_printf print]
>   3.  I'm trying to use systemd.log_level=debug systemd.log_target=console 
> console=ttymxc0,115200 earlycon=ec_imx6q,0x3086,115200 to show more debug 
> info, however the boot stopped at systemd[1]: System time before build time, 
> advancing clock.
> 
> [2.478912] systemd[1]: System time before build time, advancing clock.
> [2.499714] systemd[1]: systemd 237 running in system mode. (+PAM -AUDIT 
> -SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS 
> +ACL +XZ -LZ4 -SECCOMP +BLKID -ELFUTILS +KMOD -ID)

Please try with something newer. There have been many changes to random
number handling since then.

Zbyszek

Re: [systemd-devel] systemd and chroot()

2019-06-04 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jun 04, 2019 at 12:42:35PM -0400, Steve Dickson wrote:
> Hello,
> 
> We are adding some new functionality to the NFS server that 
> will make it a bit more container friendly... 
> 
> This new functionality needs to do a chroot(2) system call. 
> This systemcall is failing with EPERM due to the
> following AVC error:
> 
> AVC avc:  denied  { sys_chroot } for  pid=2919 comm="rpc.mountd" 
> capability=18  scontext=system_u:system_r:nfsd_t:s0 
> tcontext=system_u:system_r:nfsd_t:s0 tclass=capability permissive=0

It doesn't sound right to do any kind of chrooting yourself.
Why can't you use the systemd builtins for this?
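For example (rough sketch, the path is made up), a drop-in like

  # /etc/systemd/system/nfs-mountd.service.d/root.conf
  [Service]
  RootDirectory=/srv/nfs-root

lets systemd set up the root directory before the daemon is started, so
rpc.mountd itself doesn't need CAP_SYS_CHROOT at all.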

Zbyszek

> The entry in the /var/log/audit.log
> type=AVC msg=audit(1559659652.217:250): avc:  denied  { sys_chroot } for  
> pid=2412 comm="rpc.mountd" capability=18  
> scontext=system_u:system_r:nfsd_t:s0 tcontext=system_u:system_r:nfsd_t:s0 
> tclass=capability permissive=0
> 
> It definitely is something with systemd, since I can
> start the daemon by hand... 
> 
> It was suggested I make the following change to the service unit
> # diff -u nfs-mountd.service.orig nfs-mountd.service
> --- nfs-mountd.service.orig   2019-06-04 10:38:57.0 -0400
> +++ nfs-mountd.service2019-06-04 12:29:34.339621802 -0400
> @@ -11,3 +11,4 @@
>  [Service]
>  Type=forking
>  ExecStart=/usr/sbin/rpc.mountd
> +AmbientCapabilities=CAP_SYS_CHROOT
> 
> which did not work. 
> 
> Any ideas on how to tell systemd its ok for a daemon
> to do a chroot(2) system call?
> 
> tia,
> 
> steved.

Re: [systemd-devel] How to get hardware watchdog status when using systemd

2019-05-28 Thread Zbigniew Jędrzejewski-Szmek
On Tue, May 28, 2019 at 01:50:53PM +0200, Wiktor Kwapisiewicz wrote:
> Hi Zbyszek,
> 
> On 28.05.2019 13:43, Zbigniew Jędrzejewski-Szmek wrote:
> >What kind of information are you after?
> 
> One interesting statistic I'd like to see changing is the time when
> the watchdog was notified last.
> 
> For example, there is Timeleft in this wdctl output [0]:
> 
>   # wdctl
>   Identity:  iTCO_wdt [version 0]
>   Timeout:   30 seconds
>   Timeleft:   2 seconds

This currently isn't exported by systemd, and there's even no log
message at debug level. I guess this could be exposed, but I don't
think it'd be very useful. If the watchdog ping works, most people
don't need to look at it. If it doesn't, the machine should reboot...

If this is just for debugging, you can do something like

  sudo strace -e ioctl -p 1

and look for WDIOC_KEEPALIVE.

Zbyszek

Re: [systemd-devel] How to get hardware watchdog status when using systemd

2019-05-28 Thread Zbigniew Jędrzejewski-Szmek
On Tue, May 28, 2019 at 12:59:27PM +0200, Wiktor Kwapisiewicz wrote:
> Hello,
> 
> I've enabled "RuntimeWatchdogSec=30" in /etc/systemd/system.conf
> (after reading excellent "systemd for Administrators" series [0]).
> 
> Before enabling that "wdctl" printed nice statistics but now it only
> informs that the "watchdog already in use, terminating." I guess
> this is obvious as systemd is using /dev/watchdog now but is there a
> way to get more statistics about watchdog from systemd?
> 
> Journal has only basic info that the setting is enabled:
> 
> $ journalctl | grep watchdog

journalctl --grep watchdog

> kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
> systemd[1]: Hardware watchdog 'iTCO_wdt', version 0
> systemd[1]: Set hardware watchdog to 30s.

What kind of information are you after?
It is possible to query the systemd state:
$ systemctl show |grep Watchdog
RuntimeWatchdogUSec=0
ShutdownWatchdogUSec=10min
ServiceWatchdogs=yes

... but that's essentially the same information that you got from the logs
already.

Zbyszek

Re: [systemd-devel] Node.js sd-notify module visibility

2019-05-23 Thread Zbigniew Jędrzejewski-Szmek
On Thu, May 23, 2019 at 12:43:27PM +0100, Rory Bradford wrote:
> Yup, go for it.

Done.

Zbyszek

> On 23/05/2019 12:36, Zbigniew Jędrzejewski-Szmek wrote:
> > On Thu, May 23, 2019 at 12:27:56PM +0100, Rory Bradford wrote:
> >> Yeah of course, didn't even think. I'm happy with that but I've lost the
> >> rights to change this.
> > Is "node-sd-notify" ok?
> >
> > Zbyszek
> >
> >> On 23/05/2019 12:25, Mantas Mikulėnas wrote:
> >>> On Thu, May 23, 2019 at 12:05 PM Zbigniew Jędrzejewski-Szmek
> >>> <zbys...@in.waw.pl> wrote:
> >>>
> >>> On Thu, May 23, 2019 at 09:25:02AM +0100, Rory Bradford wrote:
> >>> > Hi,
> >>> >
> >>> > I actively maintain a Node.js module "sd-notify" available on GitHub
> >>> > (https://github.com/roryrjb/sd-notify) and npm
> >>> > (https://www.npmjs.com/package/sd-notify) that has been
> >>> contributed to
> >>> > and is being used by others. As the name suggests it binds to
> >>> > sd_notify_* functions.
> >>> >
> >>> > It would be nice to be listed alongside other language bindings
> >>> on this
> >>> > page https://www.freedesktop.org/wiki/Software/systemd/ if possible,
> >>> > especially as it is actively maintained.
> >>> Links have been added.
> >>>
> >>> > Also I don't know if the projects listed in the GitHub
> >>> organisation only
> >>> > reflect those of official systemd contributors/developers but
> >>> moving my
> >>> > repo out into that namespace would be awesome and provide even more
> >>> > community scrutiny.
> >>> I sent you an invitation for a new @systemd/nodejs team. Once you
> >>> accept
> >>> that, you should be able to move your project under the systemd
> >>> umbrella.
> >>>
> >>>
> >>> Hmm, now that it's under systemd/, maybe it should be renamed to
> >>> "node-sd-notify" or something similarly less-generic?
> >>>
> >>> -- 
> >>> Mantas Mikulėnas


Re: [systemd-devel] Node.js sd-notify module visibility

2019-05-23 Thread Zbigniew Jędrzejewski-Szmek
On Thu, May 23, 2019 at 12:27:56PM +0100, Rory Bradford wrote:
> Yeah of course, didn't even think. I'm happy with that but I've lost the
> rights to change this.

Is "node-sd-notify" ok?

Zbyszek

> 
> On 23/05/2019 12:25, Mantas Mikulėnas wrote:
> > On Thu, May 23, 2019 at 12:05 PM Zbigniew Jędrzejewski-Szmek
> > <zbys...@in.waw.pl> wrote:
> >
> > On Thu, May 23, 2019 at 09:25:02AM +0100, Rory Bradford wrote:
> > > Hi,
> > >
> > > I actively maintain a Node.js module "sd-notify" available on GitHub
> > > (https://github.com/roryrjb/sd-notify) and npm
> > > (https://www.npmjs.com/package/sd-notify) that has been
> > contributed to
> > > and is being used by others. As the name suggests it binds to
> > > sd_notify_* functions.
> > >
> > > It would be nice to be listed alongside other language bindings
> > on this
> > > page https://www.freedesktop.org/wiki/Software/systemd/ if possible,
> > > especially as it is actively maintained.
> > Links have been added.
> >
> > > Also I don't know if the projects listed in the GitHub
> > organisation only
> > > reflect those of official systemd contributors/developers but
> > moving my
> > > repo out into that namespace would be awesome and provide even more
> > > community scrutiny.
> > I sent you an invitation for a new @systemd/nodejs team. Once you
> > accept
> > that, you should be able to move your project under the systemd
> > umbrella.
> >
> >
> > Hmm, now that it's under systemd/, maybe it should be renamed to
> > "node-sd-notify" or something similarly less-generic?
> >
> > -- 
> > Mantas Mikulėnas



Re: [systemd-devel] Node.js sd-notify module visibility

2019-05-23 Thread Zbigniew Jędrzejewski-Szmek
On Thu, May 23, 2019 at 10:11:32AM +0100, Rory Bradford wrote:
> Many thanks!
> 
> The repo has been moved.

Cool, welcome ;)
I updated https://www.freedesktop.org/wiki/Software/systemd/ to match.
Let us know if you want to add more maintainers to the nodejs group.

Zbyszek

> 
> On 23/05/2019 10:04, Zbigniew Jędrzejewski-Szmek wrote:
> > On Thu, May 23, 2019 at 09:25:02AM +0100, Rory Bradford wrote:
> >> Hi,
> >>
> >> I actively maintain a Node.js module "sd-notify" available on GitHub
> >> (https://github.com/roryrjb/sd-notify) and npm
> >> (https://www.npmjs.com/package/sd-notify) that has been contributed to
> >> and is being used by others. As the name suggests it binds to
> >> sd_notify_* functions.
> >>
> >> It would be nice to be listed alongside other language bindings on this
> >> page https://www.freedesktop.org/wiki/Software/systemd/ if possible,
> >> especially as it is actively maintained.
> > Links have been added.
> >
> >> Also I don't know if the projects listed in the GitHub organisation only
> >> reflect those of official systemd contributors/developers but moving my
> >> repo out into that namespace would be awesome and provide even more
> >> community scrutiny.
> > I sent you an invitation for a new @systemd/nodejs team. Once you accept
> > that, you should be able to move your project under the systemd umbrella.
> >
> > Zbyszek



Re: [systemd-devel] Node.js sd-notify module visibility

2019-05-23 Thread Zbigniew Jędrzejewski-Szmek
On Thu, May 23, 2019 at 09:25:02AM +0100, Rory Bradford wrote:
> Hi,
> 
> I actively maintain a Node.js module "sd-notify" available on GitHub
> (https://github.com/roryrjb/sd-notify) and npm
> (https://www.npmjs.com/package/sd-notify) that has been contributed to
> and is being used by others. As the name suggests it binds to
> sd_notify_* functions.
> 
> It would be nice to be listed alongside other language bindings on this
> page https://www.freedesktop.org/wiki/Software/systemd/ if possible,
> especially as it is actively maintained.
Links have been added.

> Also I don't know if the projects listed in the GitHub organisation only
> reflect those of official systemd contributors/developers but moving my
> repo out into that namespace would be awesome and provide even more
> community scrutiny.
I sent you an invitation for a new @systemd/nodejs team. Once you accept
that, you should be able to move your project under the systemd umbrella.

Zbyszek

Re: [systemd-devel] Does "systemctl daemon-reload" discard service information?

2019-05-21 Thread Zbigniew Jędrzejewski-Szmek
On Mon, May 20, 2019 at 03:36:28PM +0200, Ulrich Windl wrote:
> Hi!
> 
> I have had the effect that a "systectl status" before and after a
> "daemon-reload" is different, while the service in question wasn't restarted:

Whenever making a report like this, always include the exact systemd
version you are running. A link to the definition of the service would
also be useful, so people can try to reproduce locally.

Zbyszek

Re: [systemd-devel] Antw: Re: Antw: Re: Re: "bad" status for genersated target; why?

2019-05-16 Thread Zbigniew Jędrzejewski-Szmek
On Thu, May 16, 2019 at 12:04:28PM +0200, Ulrich Windl wrote:
> >>> Lennart Poettering  wrote on 16.05.2019 at 10:29 in
> message <20190516082910.GA24042@gardel-login>:
> > On Do, 16.05.19 08:55, Ulrich Windl (ulrich.wi...@rz.uni‑regensburg.de)
> wrote:
> > 
> >> Hi!
> >>
> >> After having read the page again, it's not more clear than
> >> before. Even I have some more questions:
> >>
> >> Why do generators receive three directory paths: Should the
> >> generator decide where at those three paths to add a unit?
> > 
> > Yes.
> > 
> > This is explained in the documentation btw:
> > https://www.freedesktop.org/software/systemd/man/systemd.generator.html 
> > 
> > Long story short: it's about unit file precedence.
> 
> Sorry I don't get it: So the idea is to have different generators, depending
> on the precedence?

Please, this is just wasting everyone's time. Take the advice that
Reindl gave you: generators are an advanced tool that you don't need
when starting with systemd.

Zbyszek

Re: [systemd-devel] systemctl list all possible (including unloaded) services

2019-05-10 Thread Zbigniew Jędrzejewski-Szmek
On Fri, May 10, 2019 at 09:27:17AM -0600, Roger Pack wrote:
> Hello, I'm trying to answer this question:
> 
> https://unix.stackexchange.com/questions/517872/systemctl-list-all-possible-including-disabled-services
> 
> Basically, I have a my_service.service that is disabled, I wish I
> could "see that it is an option to start" by running systemctl, but it
> doesn't seem to show up in any incantation of queries.  The reason
> being I wanted to check that some /etc/init.d/XX scripts "had an
> autogenerated service equivalent or not" and some were showing up in
> the systemctl lists and some weren't (the disabled ones weren't, even
> though still controllable by systemctl).  It was some newbie confusion
> but still...confusing.
> 
> My hunch is that since it isn't auto started it is never "loaded" and
> then doesn't appear in any query (if this is the case I really wish a
> new command could be created to "list all units installed on the
> system").
> 
> systemd 219 in this case.

Seems to work here. I have just one sysvinit script:
$ ls /etc/init.d/
functions  network  README
$ systemctl list-unit-files 'network*'
UNIT FILE         STATE
network.service   generated
...
$ systemctl cat network.service
# /run/systemd/generator.late/network.service
# Automatically generated by systemd-sysv-generator
...
$ systemctl start network # autocompletes to network.service

$ systemctl --version
systemd 241 (v241-7.gita2eaa1c.fc30)

Zbyszek

Re: [systemd-devel] systemd-coredump[2529]: Process 2326 (xfwm4) of user 1000 dumped core maybe because of lightdm[2482]: gkr-pam: unable to locate daemon control file

2019-04-29 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Apr 29, 2019 at 09:32:57PM +0200, linuxfr...@gmx.at wrote:
> Hi Zbyszek,
> 
> 
> i've already did this.
> 
> It started after desktop freezes here:
> 
> https://forum.manjaro.org/t/xfce-desktop-freezes-after-automated-screen-lock-manjaro-18-0-4-illyria/84717/12
> 
> which were solved, but there are still systemd-coredump's

systemd-coredump is a component that reports abnormal program termination
and logs the backtrace. Thus, systemd-coredump is just saying that
xfwm4 made a boo boo.

Zbyszek

Re: [systemd-devel] systemd-coredump[2529]: Process 2326 (xfwm4) of user 1000 dumped core maybe because of lightdm[2482]: gkr-pam: unable to locate daemon control file

2019-04-29 Thread Zbigniew Jędrzejewski-Szmek
On Mon, Apr 29, 2019 at 09:22:30PM +0200, linuxfr...@gmx.at wrote:
> Hi,
> 
> i do not know who is the correct person / mailing list, to ask for this one.
> 
> 1) screen-lock locks the screen  (OK)
> 
> 2) screensaver comes and screen is going to be black.  (OK)
> 
> 3) Everytime i unlock my locked screen (after the screensaver kicked
> in) i got this:*systemd-coredump*   (NOT OK)
> 
> 
> What are the next steps to fix this?
> 
> regards LF
> 
> 
> 
> 
> *Apr 29 21:08:37  lightdm[2482]: gkr-pam: unable to locate daemon
> control file**
> **Apr 29 21:08:40  systemd-coredump[2529]: Process 2326 (xfwm4) of
> user 1000 dumped core.*

Please take this up with your distribution, component xfwm4.
Most likely this has nothing to do with systemd.

Zbyszek

Re: [systemd-devel] systemd prerelease 242-rc1

2019-04-03 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Apr 03, 2019 at 07:55:42AM +, systemd tag bot wrote:
> A new systemd ☠️ pre-release ☠️ has just been tagged. Please download the 
> tarball here:

Please ignore this one. The version number was not bumped properly.
v242-rc2 has already been tagged.

Zbyszek

Re: [systemd-devel] WantedBy=default.target

2019-03-07 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Mar 07, 2019 at 11:31:18AM +0100, Michael Biebl wrote:
> On Thu, 7 March 2019 at 11:24, Lennart Poettering wrote:
> >
> > On Do, 07.03.19 10:30, Michael Biebl (mbi...@gmail.com) wrote:
> >
> > > Looks like quite a few services use
> > > WantedBy=default.target
> > > https://codesearch.debian.net/search?q=WantedBy%3Ddefault.target
> > >
> > > Some of them are user services, but I wonder if there is
> > > recommendation regarding system services.
> > > Should they use multi-user.target or graphical.target instead or is it
> > > ok for system services to hook into default.target?
> >
> > Usually doing this for system services is wrong, but there are
> > exceptions.
> 
> [..]
> 
> > It might make sense to extend "systemd-analyze" in some form to warn
> > about this, maybe file an RFE bug about that?
> 
> I was considering filing a lintian bug, which would check debian
> packages automatically for this.
> Do we have documentation for this where I could point the lintian maintainer 
> at?

https://www.freedesktop.org/software/systemd/man/systemd.special.html#multi-user.target

Zbyszek

[systemd-devel] planned v241 release

2019-01-15 Thread Zbigniew Jędrzejewski-Szmek
Hi everyone,

241 will be released soon. Current list of outstanding tickets [1]:

"23 wireguard peers hang systemd-networkd"
#11404 opened 3 days ago by darkk
network: wait for kernel to reply ipv6 peer address
#11428
(needs review)

"Bump numbers for v241"
#11387
(release mechanics)

"Add 'udevadm control --ping'"
#11349
I haven't looked at this one in a while, but since it was a solution
for the udev problems that got solved in the meantime in a different way,
I think it may be postponed for v242.

"systemd-240 fails to mount squashfs again, waiting for loop0 till timeout bug 
bug pid1"
#11342

"systemd-resolved: TCP connection is prematurely closed when multiple requests 
are sent on same connection"
#11332

"nss-resolve: PROTECT_ERRNO macro interfers with glibc logic to retry queries 
with a larger buffer"
#11321
There were CI troubles. Might get postponed.

"tmpfiles.d: allow "C" lines to copy stuff into pre-existing empty destination 
dirs"
#11287
Can be merged if review passes and there are no issues.

"USB device still active even after physically unplugged"
#7587
We don't have a PR for this one, so it'll get bumped.

So there's a few bugs, but small ones (the big ones are left for later).
As soon as we merge fixes for them, I plan to make a release candidate and then
release v241 a day or two later.

[1] https://github.com/systemd/systemd/milestones/v241

Zbyszek


Re: [systemd-devel] Bugfix release(s)

2019-01-15 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Jan 15, 2019 at 01:09:16PM +0100, Lennart Poettering wrote:
> On Mo, 14.01.19 11:26, Dave Reisner (d...@falconindy.com) wrote:
> > >
> > > Well, that sounds as if you want to volunteer as release engineer? ;-)
> > >
> > > Thing is, we are understaffed. I too have a wishlist of things I'd
> > > like to see done, but with only two paid full-time upstream engineers
> > > there's only so much we can do.
> >
> > Then, IMO, you have a fundamental misalignment. You're prioritizing
> > feature work over stability, to the detriment of your customers. I
> 
> Hu? By my "customers" I figure you mean RHEL customers? They don't use
> the newest, hottest upstream systemd versions, but a stabilized
> version that made its way through Fedora and the RHEL QA
> processes. Production distros generally should handle it that way I
> guess, not only for systemd, but any other project.
> 
> Yes, we do feature work upstream, where else?
> 
> > really don't think this would be a full time job. There's already a
> 
> Well, fixing bugs is hard work. Release engineering means fixing
> bugs. That's a lot of work, and I do a good chunk of it. Please, by
> all means, join in, but don't claim it wouldn't be that much work. It
> is! A lot! (And thankless work on top)
> 
> > pretty good effort around tagging open issues which provides visibility
> > to someone who might be in charge of cutting releases. And, much of what
> > I'm suggesting is about *not* merging things after a certain point.
> 
> Well, if you follow our commit history you'll see that quite often we
> delay stuff until after a release. For example, right now #9762,
> #10495 have been delayed from before the last release that way...
> 
> Again. Step up if you want to have more systematic relase
> management. You are invited! I'd very much appreciate if you do.
> 
> > > If you (or anyone else in the community) would like to step up and
> > > maybe take a position of release engineer, working towards a clearer
> > > release cadence I think everybody would love you for that! I know I
> > > certainly would.
> > >
> > > But additional work is not going to be done without anyone doing it...
> >
> > Like I said, it's a tradeoff. You currently have someone maintaining a
> > stable branch in lieu of making your release snapshots more stable.
> 
> It's not "me" who has that really. It's a group of volunteers doing
> that, like a lot in Open Source. They scratched their own itches. If
> you want a more strict cadence, then scratch your own itches, too,
> please step up, like the folks doing the stable series stepped up!
> 
> > > >   Jun: 86
> > > >   Jul: 276
> > > >   Aug: 241
> > > >   Sep: 317
> > > >   Oct: 812
> > > >   Nov: 882
> > > >   Dec: 560
> > >
> > > That it slightly misleading though, as a good chunk of those PRs that
> > > good merged late where posted much earlier on, but only entered the
> > > master branch late. So, most development work is definitely done at
> > > the beginning of the cycle than in the end, but it's hard to see that
> > > due to rebases/merges...
> >
> > This is all based on commit date, not merge date. It's not
> > misleading.
> 
> Well, we rebase all the time, it's not that easy...
> 
> > > > Please, let's make all future systemd release better, not just the next
> > > > 1 or 2.
> > >
> > > I certainly see the problem, but quite frankly, I don't see the
> > > additional workforce for implementing a tighter cadence appear
> > > anywhere... :-(
> >
> > It's really unfortunate that you're not willing to make any tradeoffs
> > here.
> 
> Well, I can tell you I will wholeheartedly support anyone stepping up,
> and taking over release management. Otherwise we'll just keep trying
> our best like we currently do, which isn't good enough for you.
> 
> You know, I try to split my time between new work and bugfixing. I
> simply don't want to get sucked into just the latter.
> 
> We can certainly reprioritize things and more often declassify bugs
> hitting more exotic cases as release-critical, in order to come to a
> more stringent release cadence, i.e. more aggressively ignore bugs with
> exotic archs, non-redhat distros, exotic desktops, exotic libcs, weird
> drivers, yadda yadda, and leave them to be fixed by community
> patches. But I doubt that is in your interest either, is it?

I agree with Lennart here. We have a constant stream of bug reports
coming in (yay, successful software, used in ever wider context), and if
we decided to fix all bugs before doing feature work, we'd _never_ do
any feature work. All of the top contributors already spend a significant
chunk of their time doing fixes and cleanups and refactorings and
adapting to changes in other components. If you look at the list of
patches between v239 and v240, the majority are bugfixes and refactorings.
We could do this a bit more and then 100% of time would be spent on
this, effectively switching to maintenance-only mode. This would
be detrimental to our users, because those new features 

Re: [systemd-devel] Requires and After

2019-01-02 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Jan 02, 2019 at 07:14:10PM +1100, Michael Chapman wrote:
> On Wed, 2 Jan 2019, Olaf van der Spek wrote:
> > On Wed, Jan 2, 2019 at 4:22 AM James Feeney  wrote:
> > > systemd has two different classes of "dependencies": 1) "activation" 
> > > dependencies, and 2) "ordering" dependencies.
> > >
> > > An activation dependency does not, a priori, have to obey any rules 
> > > about ordering.  There are not, automatically, any promises or 
> > > guarantees about in what order service units, for instance, might be 
> > > queued for execution, based upon a Requires= dependency.
> > >
> > > "Ordering" is an independent characteristic from "Activation".  
> > > "Activation" only promises to enqueue a unit, and then, only if the 
> > > unit is some kind of unit that can be "executed", such as a timer or 
> > > service unit.  In contrast, for instance, systemd is only a "passive 
> > > observer" of a device unit.  "enqueuing" a device unit for 
> > > "activation" would make no sense in this context.  A *service* unit 
> > > that *creates* a device unit could be enqueued for activation, but not 
> > > the device unit itself.
> > >
> > > If "A Requires B", and you don't like that "A" *might* get enqueued, 
> > > or get executed, before "B", then add an "ordering" dependency.  
> > > "Ordering dependencies", then, create guarantees about unit activation 
> > > *ordering*.
> > 
> > What good is an activation dependency without an ordering dependency?
> 
> The problem is that it's not necessarily clear _which_ ordering dependency 
> is required. systemd can't just assume one way or the other.
> 
> I have two services on my system, A.service and B.service, where A.service 
> Wants=B.service but is ordered Before=B.service. The reason for this is 
> that when I start A I want B to be automatically started too, but B cannot 
> function without A being active.

But this really means that B.service should have After=A.service +
Requires=A.service. Having A.service/start automatically imply B.service/start
is just unnecessary magic.
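I.e. (using the unit names from your example):

  # B.service
  [Unit]
  Requires=A.service
  After=A.service

and no Wants=B.service in A.service at all.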

> So here's an example where the activation dependency is essentially 
> "opposite" that of the ordering dependency.
> 
> As you've pointed out, Requires= a bit of a strange case. If I change the 
> above situation to use Requires= instead, and if B subsequently exits or 
> fails, A would be stopped. I don't have an immediate use for that, but I 
> think it's a bit presumptuous to assume that no use could possibly exist.
> 
> I think there's use in having Wants= and Requires= work similarly to each 
> other, in that they are both orthogonal to ordering dependencies. It would 
> be odd to have only one imply an ordering dependency.

If we made Requires= imply After=, it'd be a "default" dependency, so an
explicit Before= would prevent the After= from being added. (This is the same
as for .wants/ and .requires/ directories: an ordering dependency is added, if
DefaultDependencies=yes, and if Before= was not added to invert the ordering.)
So a "reverse" orderding like you describe would still be possible.

Zbyszek


Re: [systemd-devel] Requires and After

2019-01-02 Thread Zbigniew Jędrzejewski-Szmek
On Sun, Dec 30, 2018 at 12:05:46PM +0100, Olaf van der Spek wrote:
> Hi,
> 
> Evverx suggested I ask here @ https://github.com/systemd/systemd/issues/11284
> It's about Requires and After. I think a unit in Requires should imply
> that unit in After too, otherwise the requirement isn't really met.
> Is there a use case for Requires but not After?
> If not, would it make sense to change semantics to have Requires imply After?

The short answer is that Requires without After is mostly meaningless,
because it's impossible for systemd to actually implement the check,
so effectively Requires downgrades to Wants.

Two considerations though:
- For Wants=, it is OK to not have an ordering dependency.

  So if we made Requires= imply After=, there'd be an inconsistency
  between Requires= and Wants=.

- When .requires/ is used (and .wants/ too), an After= dependency is
  added automatically.

I think we could consider making Requires= imply After=, with the
necessary notices in NEWS, if somebody looks through units (e.g. all
the ones packaged in Fedora), to verify that this is unlikely to break
existing units. Elsewhere in the thread, somebody mentioned openstack,
so that'd be the first thing to check.
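(A very rough first pass to find candidates could be something like

  grep -rl '^Requires=' /usr/lib/systemd/system/ | xargs grep -L '^After='

which only shows units that have Requires= but no After= line at all, so
the actual verification would still have to be done by hand.)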

Zbyszek


[systemd-devel] systemd 240 released

2018-12-21 Thread Zbigniew Jędrzejewski-Szmek
ed that way cannot be opened, and attempts to
  open them result in EPERM. This breaks the "graceful fallback" logic
  in systemd's PrivateDevices= sand-boxing option. This option is
  implemented defensively, so that when systemd detects it runs in a
  restricted environment (such as a user namespace, or an environment
  where mknod() is blocked through seccomp or absence of CAP_SYS_MKNOD)
  where device nodes cannot be created the effect of PrivateDevices= is
  bypassed (following the logic that 2nd-level sand-boxing is not
  essential if the system systemd runs in is itself already sand-boxed
  as a whole). This logic breaks with 4.18 in container managers where
  user namespacing is used: suddenly PrivateDevices= succeeds setting
  up a private /dev/ file system containing device nodes — but when
  these are opened they don't work.

  At this point it is recommended that container managers utilizing
  user namespaces that intend to run systemd in the payload explicitly
  block mknod() with seccomp or similar, so that the graceful fallback
  logic works again.
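  (For systemd-nspawn specifically, something along these lines should do,
  assuming a version new enough to have --system-call-filter=:

    systemd-nspawn --system-call-filter='~mknod mknodat' ...

  other container managers would use their own seccomp-profile mechanism.)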

  We are very sorry for the breakage and the requirement to change
  container configurations for newer kernels. It's purely caused by an
  incompatible kernel change. The relevant kernel developers have been
  notified about this userspace breakage quickly, but they chose to
  ignore it.

Contributions from: afg, Alan Jenkins, Aleksei Timofeyev, Alexander
Filippov, Alexander Kurtz, Alexey Bogdanenko, Andreas Henriksson,
Andrew Jorgensen, Anita Zhang, apnix-uk, Arkan49, Arseny Maslennikov,
asavah, Asbjørn Apeland, aszlig, Bastien Nocera, Ben Boeckel, Benedikt
Morbach, Benjamin Berg, Bruce Zhang, Carlo Caione, Cedric Viou, Chen
Qi, Chris Chiu, Chris Down, Chris Morin, Christian Rebischke, Claudius
Ellsel, Colin Guthrie, dana, Daniel, Daniele Medri, Daniel Kahn
Gillmor, Daniel Rusek, Daniel van Vugt, Dariusz Gadomski, Dave Reisner,
David Anderson, Davide Cavalca, David Leeds, David Malcolm, David
Strauss, David Tardon, Dimitri John Ledkov, Dmitry Torokhov, dj-kaktus,
Dongsu Park, Elias Probst, Emil Soleyman, Erik Kooistra, Ervin Peters,
Evgeni Golov, Evgeny Vereshchagin, Fabrice Fontaine, Faheel Ahmad,
Faizal Luthfi, Felix Yan, Filipe Brandenburger, Franck Bui, Frank
Schaefer, Frantisek Sumsal, Gautier Husson, Gianluca Boiano, Giuseppe
Scrivano, glitsj16, Hans de Goede, Harald Hoyer, Harry Mallon, Harshit
Jain, Helmut Grohne, Henry Tung, Hui Yiqun, imayoda, Insun Pyo, Iwan
Timmer, Jan Janssen, Jan Pokorný, Jan Synacek, Jason A. Donenfeld,
javitoom, Jérémy Nouhaud, Jeremy Su, Jiuyang Liu, João Paulo Rechi
Vita, Joe Hershberger, Joe Rayhawk, Joerg Behrmann, Joerg Steffens,
Jonas Dorel, Jon Ringle, Josh Soref, Julian Andres Klode, Jun Bo Bi,
Jürg Billeter, Keith Busch, Khem Raj, Kirill Marinushkin, Larry
Bernstone, Lennart Poettering, Lion Yang, Li Song, Lorenz
Hübschle-Schneider, Lubomir Rintel, Lucas Werkmeister, Ludwin Janvier,
Lukáš Nykrýn, Luke Shumaker, mal, Marc-Antoine Perennou, Marcin
Skarbek, Marco Trevisan (Treviño), Marian Cepok, Mario Hros, Marko
Myllynen, Markus Grimm, Martin Pitt, Martin Sobotka, Martin Wilck,
Mathieu Trudel-Lapierre, Matthew Leeds, Michael Biebl, Michael Olbrich,
Michael 'pbone' Pobega, Michael Scherer, Michal Koutný, Michal
Sekletar, Michal Soltys, Mike Gilbert, Mike Palmer, Muhammet Kara, Neal
Gompa, Neil Brown, Network Silence, Niklas Tibbling, Nikolas Nyby,
Nogisaka Sadata, Oliver Smith, Patrik Flykt, Pavel Hrdina, Paweł
Szewczyk, Peter Hutterer, Piotr Drąg, Ray Strode, Reinhold Mueller,
Renaud Métrich, Roman Gushchin, Ronny Chevalier, Rubén Suárez Alvarez,
Ruixin Bao, RussianNeuroMancer, Ryutaroh Matsumoto, Saleem Rashid, Sam
Morris, Samuel Morris, Sandy Carter, scootergrisen, Sébastien Bacher,
Sergey Ptashnick, Shawn Landden, Shengyao Xue, Shih-Yuan Lee
(FourDollars), Silvio Knizek, Sjoerd Simons, Stasiek Michalski, Stephen
Gallagher, Steven Allen, Steve Ramage, Susant Sahani, Sven Joachim,
Sylvain Plantefève, Tanu Kaskinen, Tejun Heo, Thiago Macieira, Thomas
Blume, Thomas Haller, Thomas H. P. Andersen, Tim Ruffing, TJ, Tobias
Jungel, Todd Walton, Tommi Rantala, Tomsod M, Tony Novak, Tore
Anderson, Trevonn, Victor Laskurain, Victor Tapia, Violet Halo, Vojtech
Trefny, welaq, William A. Kennington III, William Douglas, Wyatt Ward,
Xiang Fan, Xi Ruoyao, Xuanwo, Yann E. Morin, YmrDtnJu, Yu Watanabe,
Zbigniew Jędrzejewski-Szmek, Zhang Xianwei, Zsolt Dollenstein

— Warsaw, 2018-12-21

Enjoy!

Zbyszek


[systemd-devel] systemd hackfest talking points

2018-10-03 Thread Zbigniew Jędrzejewski-Szmek
Hi,

we had a systemd hackfest/talkfest last Sunday in Berlin as part of 
All Systems Go! 2018.

Here is a copy of the doc we used to discuss the technical &
documentations topics:
https://docs.google.com/document/d/12mWXZem7IOc9u-Db04Cy-NPi39LijiiMlTqX-0DYO98/edit

(I tried to convert this to text, but there's a lot of markup.)

Zbyszek


Topics for the systemd hackfest_talkfest_BoF_Miniconf @ All Systems Go! 2018-1.odt
Description: application/vnd.oasis.opendocument.text


Re: [systemd-devel] [PATCH v2] meson: use the host architecture compiler/linker for src/boot/efi

2018-09-28 Thread Zbigniew Jędrzejewski-Szmek
On Fri, Sep 28, 2018 at 01:19:27PM +0100, Simon McVittie wrote:
> On Fri, 28 Sep 2018 at 10:40:28 +0200, Lennart Poettering wrote:
> > On Do, 27.09.18 17:17, Helmut Grohne (hel...@subdivi.de) wrote:
> > 
> > > cross building systemd to arm64 presently fails, because the build
> > > system uses plain gcc and plain ld (build architecture compiler and
> > > linker respectively) for building src/boot/efi. These values come from
> > > the efi-cc and efi-ld options respectively. It rather should be using
> > > host tools here.
> > > 
> > > Fixes: b710072da441 ("add support for building efi modules")
> > 
> > Hmm, any chance you could submit this through github please?
> 
> This is an updated version for
>  which is owned by mbiebl
> (presumably Helmut isn't sending his own pull requests for whatever
> reason).

I already submitted a new PR.

Zbyszek


Re: [systemd-devel] systemd behavior during shutdown

2018-09-19 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Sep 19, 2018 at 05:01:33AM +, Tiwari, Hari Sahaya wrote:
> Hi,
> 
> I am facing one issue with systemd where a systemd socket is not opening a new 
> connection during shutdown.
> I get below logs,
> 
> Sep 12 20:01:32 jara2 systemd[1]: mytestX.socket: Incoming traffic
> Sep 12 20:01:32 jara2 systemd[1]: mytestX.socket: Suppressing connection 
> request since unit stop is scheduled.
> 
> I have one systemd service which is trying to open a new connection during 
> shutdown sequence but looks like systemd sockets stop accepting new 
> connections as soon as they are marked for stop.
> I tried putting various combination of dependencies but that didn't help. 
> Everytime getting the above message.
> 
> Is there any parameter which can be set in unit files to resolve this issue? 
> Any pointers will be appreciated.

Hi,

yes, this is intentional. It was added to avoid the situation where
services are stopped and subsequently restarted during shutdown because
something opens a connection, leading to loops.

If you absolutely need to open a connection to a socket activated
unit, then you could try making the .socket and .service units have
DefaultDependencies=false, so that they will not conflict with
shutdown.target and the start jobs will not be created for them. But
then you need to make sure that they actually *are* stopped at the
right time, by issuing a 'systemctl stop' request for them.
This can be done, but will be messy, so I'd use a different approach
if possible.
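Roughly (sketch only, reusing your unit names):

  # mytestX.socket, and similarly mytestX.service
  [Unit]
  DefaultDependencies=no

plus something late in your shutdown sequence that explicitly runs
'systemctl stop mytestX.socket mytestX.service' once the socket really
isn't needed anymore.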

Zbyszek


Re: [systemd-devel] When is a unit "loaded"?

2018-07-12 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Jul 11, 2018 at 06:52:12PM -0700, Filipe Brandenburger wrote:
> Hey Daniel!
> 
> On Wed, Jul 11, 2018 at 5:16 PM Daniel Wang  wrote:
> > I have a unit, say foo.service, on my system that's in 
> > /usr/lib/systemd/system, but disabled by preset.
> 
> Not that it matters, but presets don't really matter here. The unit is
> disabled, period.
> 
> > On system boot, it doesn't show as "loaded" per `systemctl --all | grep 
> > foo`.
> 
> Because there's no reference to it in units systemd sees, so systemd
> doesn't need to load it.
> 
> At bootup, systemd will simply start with default.target and then
> recursively load the units it needs.
> 
> See this link for more details:
> https://www.freedesktop.org/software/systemd/man/bootup.html#System%20Manager%20Bootup
> 
> `systemctl --all` will only show the units in memory, so foo.service
> won't be listed since it's not loaded.
> 
> > So if I override it with a file with the same name but under 
> > /etc/systemd/system, `systemctl cat foo.service` will show the one under 
> > /etc without the need for a `systemctl daemon-reload`.
> 
> Yes, because it's not loaded.
> 
> > If I create another service unit, bar.service, which has a After= 
> > dependency on foo.service, and start bar, foo.service will show as loaded. 
> > And then if I try to override it, `systemctl cat foo.service` will print a 
> > warning saying a daemon-reload is needed.
> 
> Yes. If systemd sees a reference for that unit (even if it's an
> After=), it will need to load it, since it needs to record the
> dependency between the units in the internal memory structures, so it
> needs a reference to the unit, and it loads it to have a complete
> reference to it...
> 
> > Nothings seems incorrect, but I have a few questions:
> > - Which units are loaded on-boot and which are not?
> 
> Only default.target and recursively any unit referred to by the loaded units.

What you are saying is all correct, with two very minor nitpicks:
- the "default" unit can be overridden with systemd.unit= (or the 1/2/3/...
  shortcuts) on the kernel command line, so it's not always default.target,
- also, there are other units that'll be loaded, for example anything
  pulled in by SYSTEMD_WANTS= in udev rules (if the hardware is present,
  see the example below), and .device/.mount/.swap units can also pull
  other stuff in, and such units may be created based on the state of the
  system. So it's not just the tree starting at the "default" unit, but
  also a bunch of other subtrees.
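  For example, a udev rule along these lines (names are made up):

    ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="backup", ENV{SYSTEMD_WANTS}+="backup-sync.service"

  will cause backup-sync.service to be loaded (and started) as soon as a
  matching device shows up, even though nothing under the default unit
  refers to it.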

> > - Is the After= dependency alone enough to have systemd load a unit? Are 
> > there any other dependency directives that will result in the same effect?
> 
> Yes, I believe any reference to units will trigger the unit to be
> loaded. As I mentioned, systemd wants to keep the state in memory, so
> loading the unit will allow it to keep complete state.
> 
> An exception (haven't checked it, but I expect it) is references in
> the [Install] section (such as Also=) since those are only used by
> `systemctl enable` and are not really loaded into memory as far as I
> can tell (but I might be wrong here, and these might trigger the unit
> to load as well.)
Yes, anything in [Install] doesn't matter (except that after the
installation is done, the symlinks that are created do matter).

Zbyszek


[systemd-devel] upcoming systemd-239 release

2018-06-18 Thread Zbigniew Jędrzejewski-Szmek
Hi all,

we plan to release systemd-239 wednesday-ish. On Friday we merged the
last batch of new features, and until the release, only bug fixes,
cleanups, and documentation improvements should be merged. Please test
and report any regressions.

I'll be making builds for Fedora in copr (*):
https://copr.fedorainfracloud.org/coprs/zbyszek/systemd/build/768345/

Zbyszek

(*) It seems building from dist-git directly, which was a super nifty
feature, doesn't work anymore. In my latest attempt I uploaded the
srpm old-style…


Re: [systemd-devel] Running a service *just* before unmounting filesystems

2018-06-13 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Jun 13, 2018 at 04:55:27PM +0200, Hans de Goede wrote:
> Hi,
> 
> On 12-06-18 19:11, Lennart Poettering wrote:
> >On Di, 12.06.18 11:33, Hans de Goede (hdego...@redhat.com) wrote:
> >>AFAIK the service actually doing the updates is supposed to call
> >>systemctl reboot --force when it is done, so any targets after
> >>system-update.target won't get started ?
> >
> >True, the service in question could split the reboot call of course,
> >if it wanted, so that you can plug things in between.
> 
> Since in this case we want to increment a boot_indeterminate counter
> to indicate the last boot was not a normal boot, so no clear
> success status is available I'm fine with the service doing the
> increment before the updates run.
> 
> So I was thinking about adding a system-update-pre.target
> and then in system-update.target add:
> 
> Wants=system-update-pre.target
> After=system-update-pre.target
Yep, that sounds reasonable.
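A very rough sketch of what the new target could look like (names and
details not settled at all):

  # system-update-pre.target
  [Unit]
  Description=Offline System Update (early hooks)
  DefaultDependencies=no
  Conflicts=shutdown.target

The counter-bumping service would then use WantedBy=system-update-pre.target
plus Before=system-update-pre.target, so that it runs before the actual
update services.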

> I would rather not modify the existing offline-updates
> services because there are 3 of them and I believe it
> would be better to do this in a single place.
Yes, let's better avoid that. If those services are
to be modified, I'd rather work on converting them to start
a new target like system-update-post.target, and move the reboot
there, so that they don't have to be modified again after that.

Zbyszek


Re: [systemd-devel] [PATCH] detect-virt: do not return exit failure code when the state is none

2018-05-25 Thread Zbigniew Jędrzejewski-Szmek
On Fri, May 25, 2018 at 10:25:08AM +0200, Franck Bui wrote:
> Hi Joey,
> 
> On 05/25/2018 09:33 AM, joeyli wrote:
> > 
> > Do you have a good idea how to inhibit the exit failure so that the
> > subsequent activity isn't blocked?
> > 
> 
> For the context the actual rule is:
> 
> SUBSYSTEM=="memory", ACTION=="add",
> PROGRAM=="/usr/bin/systemd-detect-virt", RESULT!="zvm", ...
> 
> You can use PROGRAM=="/bin/sh -c '/usr/bin/systemd-detect-virt || :'" so
> the exit status of systemd-detect-virt is explicitly ignored.
> 
> I assume that if systemd-detect-virt fails then the value of RESULT will
> be the empty string (you should check).

It will be empty if -q is passed, and "none" without it.
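So with that variant the rule would end up roughly as (untested):

  SUBSYSTEM=="memory", ACTION=="add", PROGRAM=="/bin/sh -c '/usr/bin/systemd-detect-virt || :'", RESULT!="zvm", ...

and on bare metal RESULT would be "none", which still satisfies RESULT!="zvm".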

Zbyszek


Re: [systemd-devel] option to wait for pid file to appear

2018-05-17 Thread Zbigniew Jędrzejewski-Szmek
On Thu, May 17, 2018 at 12:12:10PM +0200, Igor Bukanov wrote:
> On 17 May 2018 at 11:58, Mantas Mikulėnas  wrote:
> > Have you tried without the PIDFile= setting at all?
> 
> As far as I can see that breaks live updates that nginx supports where
> it starts a new process and workers and then gracefully terminates the
> old main.

FTR, it *is* possible to do live updates like that. We added support
for that with FD_STORE in pid1, and systemd-journald and systemd-logind
now support restart without losing connections. It would be great if
nginx did that too.
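The rough shape of it (sketch, not an actual nginx patch): the unit opts in
with

  [Service]
  FileDescriptorStoreMax=16
  NotifyAccess=main

and before re-executing, the daemon pushes its listening sockets into the
per-service fd store, e.g. (using libsystemd's sd-daemon.h)

  sd_pid_notify_with_fds(0, 0, "FDSTORE=1", fds, n_fds);

so that the new instance can pick them up again with sd_listen_fds_with_names().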

Zbyszek


Re: [systemd-devel] GPL and unit/conf files

2018-05-11 Thread Zbigniew Jędrzejewski-Szmek
On Fri, May 11, 2018 at 02:21:30PM +0100, Paul Jakma wrote:
> Hi,
> 
> logind.conf has a GPL header, as do things like getty@.service.

An LGPL header actually, *library*.

All you need to do, is to keep the possibility to modify/destribute
that .conf file.
 
> If I needed to make changes to logind.conf, and wanted to bundle a
> modified logind.conf with a GPL-incompatible application, is that
> allowed?
Yes.

> Similarlty, if I wanted to replace getty@.service with my
> own console handler, can I start with a modified getty@.service unit
> file to launch it and bundle that with said application?
Yes.

> Is each setting in those files copyrighted? Or is there some
> threshold, where if I re-use enough of those settings in my file,
> that the GPL applies? Are certain combinations of settings in
> logind.conf or such unit files copyrighted?
That's debatable. I'd say that even the whole file is not
copyrightable, but don't quote me on that.

> Does it even make sense to put these settings/config files under the
> GPL? If a copyright notice is required, might the FSF all-permissive
> licence be better for such files?
It's easier for us to reuse the same header and license everywhere.
Since it's just LGPL, it does not really restrict use.

Zbyszek


[systemd-devel] github merge rules update

2018-05-10 Thread Zbigniew Jędrzejewski-Szmek
To systemd repo committers:

an investigation in https://github.com/systemd/systemd/issues/8665
done by filbranden showed that github's "rebase & merge" button works
better than the "squash & merge" version. The authorship and original
timestamps on the commit are not mangled. Hence, let's mostly use
"rebase & merge" for single commits instead of "squash & merge".

New rules:
— single commit → rebase
— a bunch of fixup commits that should be one → squash
— more than one commit → merge

Zbyszek


Re: [systemd-devel] systemd-tmpfiles duplicate line for path, ignoring

2018-05-10 Thread Zbigniew Jędrzejewski-Szmek
On Wed, May 09, 2018 at 09:01:06PM -0600, Chris Murphy wrote:
> Hi,
> 
> I see this in the journal
> 
> May 09 14:36:06 f28h.local systemd-tmpfiles[3273]:
> [/etc/tmpfiles.d/suspendfix.conf:7] Duplicate line for path
> "/proc/acpi/wakeup", ignoring.
> 
> This file contains:
> 
> w /proc/acpi/wakeup - - - - PWRB
> w /proc/acpi/wakeup - - - - XHC
> 
> 
> They aren't really duplicate lines, as the lines themselves differ.
> The paths involved are duplicates. I guess the obvious work around is
> to use separate .conf files?
No, it'll still warn about duplicate files.
tmpfiles is not smart enough to understand that multiple values can
be meaningfully written to a file. Something to fix on our side.

Zbyszek


Re: [systemd-devel] Implementing networkd d-bus interface

2018-05-09 Thread Zbigniew Jędrzejewski-Szmek
On Tue, May 08, 2018 at 01:34:58PM -0500, Thad Phetteplace wrote:
> Hi systemd developers,
> 
> I'm wondering what the state of development is on the d-bus interface
> for systemd-networkd. Currently the interface appears to be minimal,
> but I've also seen comments that a full featured API is planned. Has
> there been any discussion on what that interface might look like? Just
> to be clear, what I'm thinking of is a d-bus interface that allows
> manipulation of network settings without having to manually edit the
> /etc/systemd/network/* config files... not just something that
> interrogates the current settings. I'm willing to work on
> implementation, but I don't want to strike off in a crazy direction
> that fails to be useful to the rest of the community.

Hi,
I already replied in the ticket... so just repeating this briefly for
the sake of archives: there is no plan and the person implementing this
would have to do the design too. Let's continue in the ticket.

Zbyszek


Re: [systemd-devel] [PATCH bluez] hid2hci: Fix udev rules for linux-4.14+

2018-05-07 Thread Zbigniew Jędrzejewski-Szmek
On Mon, May 07, 2018 at 04:06:38PM +0300, Ville Syrjala wrote:
> From: Ville Syrjälä 
> 
> Since commit 1455cf8dbfd0 ("driver core: emit uevents when
> device is bound to a driver") the kernel started emitting
> "bound" and "unbound" uevents which confuse the hid2hci
> udev rules.
> 
> The symptoms on an affected machine (Dell E5400 in my case)
> include bluetooth devices not appearing and udev hogging
> the cpu as it's busy processing a constant stream of these
> "bound"+"unbound" uevents.
> 
> Change the udev rules to only kick in for an "add" event.
> This seems to cure my machine at least.
> 
> Cc: Dmitry Torokhov 
> Cc: Greg Kroah-Hartman 
> Cc: Marcel Holtmann 
> Cc: Kay Sievers 
> Cc: systemd-devel@lists.freedesktop.org
> Cc: linux-ker...@vger.kernel.org
> Cc: linux-blueto...@vger.kernel.org
> Signed-off-by: Ville Syrjälä 
> ---
>  tools/hid2hci.rules | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/hid2hci.rules b/tools/hid2hci.rules
> index db6bb03d2ef3..daa381d77387 100644
> --- a/tools/hid2hci.rules
> +++ b/tools/hid2hci.rules
> @@ -1,6 +1,6 @@
>  # do not edit this file, it will be overwritten on update
>  
> -ACTION=="remove", GOTO="hid2hci_end"
> +ACTION!="add", GOTO="hid2hci_end"
>  SUBSYSTEM!="usb*", GOTO="hid2hci_end"

This will skip over lines 22-23. Is the rule there supposed to
work for ACTION==add only (in which case your patch would be OK),
or also for ACTION==change? Maybe it'd be safer to just add the GOTO
for bind/unbind.
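I.e. something like (untested):

  ACTION=="remove", GOTO="hid2hci_end"
  ACTION=="bind", GOTO="hid2hci_end"
  ACTION=="unbind", GOTO="hid2hci_end"

which would leave the behaviour for "add" and "change" exactly as it was.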

Zbyszek


Re: [systemd-devel] [ty...@mit.edu: Re: Linux messages full of `random: get_random_u32 called from`]

2018-05-07 Thread Zbigniew Jędrzejewski-Szmek
On Wed, May 02, 2018 at 12:23:33PM -0300, Cristian Rodríguez wrote:
> On 02-05-2018 at 6:25, Lennart Poettering wrote:
> >On Di, 01.05.18 18:08, Vito Caputo (vcap...@pengaru.com) wrote:
> 
> >Or maybe this confusion is just another iteration of the stuff
> >dicussed here? https://github.com/systemd/systemd/issues/4167

That bug was closed after some improvements, but based on the comments
there we can conclude that systemd *does* consume a lot of random bytes
from /dev/urandom and even though we are using the kernel APIs as documented,
it would be nice if we didn't read all this random data, because
that impacts other processes that need random data.

But to change how many random bytes we use, we'd need to refactor
the code, because right now by the time we get to the part that
actually reads the bytes, we're far from the caller who knows if we
really need proper random bytes or would be fine with some fluff.

I wasn't aware that this is still a problem. If it is, it'd probably
be worth looking into.

> On modern x86 hardware we could fallback to rdrand but only when
> getrandom returns EAGAIN.
> 
> For other non-cryptographic uses maybe implementing xoroshiro128+ or
> Mersenne Twister would suffice, until libc gets a sane random
> interface if ever.
Yeah, that's something to look into, too. But that'd still probably
need the refactoring to pass down more information about how those
numbers will be used.
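
Just to illustrate the shape of the fallback Cristian describes (this is
not systemd code, the function names are made up, and it assumes glibc
2.25+ for <sys/random.h>; xoroshiro128+ is of course only acceptable for
the non-cryptographic "fluff" case):

  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/random.h>

  /* xoroshiro128+ -- fast, NOT cryptographically secure; real code
   * would seed this from somewhere instead of using constants. */
  static uint64_t s[2] = { 0x9E3779B97F4A7C15ULL, 0xBF58476D1CE4E5B9ULL };

  static uint64_t rotl(uint64_t x, int k) { return (x << k) | (x >> (64 - k)); }

  static uint64_t xoroshiro128plus(void) {
          uint64_t s0 = s[0], s1 = s[1], r = s0 + s1;
          s1 ^= s0;
          s[0] = rotl(s0, 55) ^ s1 ^ (s1 << 14);
          s[1] = rotl(s1, 36);
          return r;
  }

  /* Fill buf with n random bytes. Only insist on the kernel pool when
   * high_quality is set; otherwise degrade to the PRNG on EAGAIN. */
  static int get_bytes(void *buf, size_t n, int high_quality) {
          ssize_t k = getrandom(buf, n, high_quality ? 0 : GRND_NONBLOCK);
          if (k == (ssize_t) n)
                  return 0;
          if (high_quality)
                  return k < 0 ? -errno : -EIO;
          for (size_t i = 0; i < n; i++)
                  ((uint8_t *) buf)[i] = (uint8_t) (xoroshiro128plus() >> 56);
          return 0;
  }

  int main(void) {
          uint64_t v;
          if (get_bytes(&v, sizeof v, 0) == 0)
                  printf("%016llx\n", (unsigned long long) v);
          return 0;
  }

But as said above, the interesting part is the plumbing needed to tell
the low-level helper which of the two cases the caller actually is.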

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] automount EFI system partition to /efi fails

2018-04-25 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Apr 25, 2018 at 11:10:54AM +0200, Lennart Poettering wrote:
> On Mi, 25.04.18 07:48, Zbigniew Jędrzejewski-Szmek (zbys...@in.waw.pl) wrote:
> 
> > > [6.291607] f28h.local systemd[715]: Followed symlinks /efi → /efi.
> > > [6.291643] f28h.local systemd[715]: Applying namespace mount on /efi
> > > [6.291671] f28h.local systemd[715]: Successfully mounted /efi to /efi
> > > [6.294820] f28h.local systemd[715]: Remounted /efi read-only.
> > > [6.314602] f28h.local systemd[715]: Remounted 
> > > /sys/firmware/efi/efivars
> > > read-only.
> > 
> > It looks like /efi does get mounted. What mounted it?
> 
> That's misleading I figure. That message is probably caused by
> ProtectSystem=yes or ProtectSystem=full being set for some system
> service. In that case systemd will mount /efi and /boot read-only for
> the specific service, but leave / writable. And for that to work it
> synthesizes a bind mount point for /efi and /boot within the service's
> mount namespace, the logging about which you see above. It hence
> doesn't mean /efi or /boot is a mount point on the host.

Even if /efi is empty? We should probably skip the mount point in that
case as a minor optimization.
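
For anyone trying to reproduce this: a hypothetical unit along these
lines should be enough to trigger the messages quoted above, even though
nothing gets mounted on the host:

  [Service]
  ExecStart=/usr/bin/some-daemon
  # Mounts /usr and the boot loader directories (/boot, /efi) read-only
  # inside this service's mount namespace only; "full" additionally
  # covers /etc. No mount point is created on the host.
  ProtectSystem=full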

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] automount EFI system partition to /efi fails

2018-04-25 Thread Zbigniew Jędrzejewski-Szmek
On Tue, Apr 24, 2018 at 06:47:24PM -0600, Chris Murphy wrote:
> https://www.freedesktop.org/wiki/Specifications/DiscoverablePartitionsSpec/
> *The ESP used for the current boot is automatically mounted to /efi (or
> /boot as fallback),*
> 
> systemd-238-7.fc28.1.x86_64
> 
> I've commented out the /boot/efi entry in /etc/fstab and reboot but the ESP
> doesn't get mounted to /efi or /boot or /boot/efi.

gpt-auto-generator cannot mount anything to a nested directory,
because when it's running the outer mount (/boot in this case) will
not be done yet, hence it cannot check if /boot/efi is a directory. So
by design it will only consider /efi and /boot as the targets.
 
> Full journal with systemd.log_level=debug.
> https://drive.google.com/open?id=1b4Lfd0HurX4Z68T1jAHYMC0wy51tQwtk
> 
> I get a couple of confusing items:
> 
> [3.971099] f28h.local systemd-gpt-auto-generator[476]: /efi already
> populated, ignoring.
> [4.102022] f28h.local audit[476]: AVC avc:  denied  { read } for
>  pid=476 comm="systemd-gpt-aut" name="efi" dev="nvme0n1p9" ino=3999777
> scontext=system_u:system_r:systemd_gpt_generator_t:s0
> tcontext=unconfined_u:object_r:default_t:s0 tclass=dir permissive=0
> 
> 
> It's definitely empty. But maybe it's due to the avc. chcon fails to set
> the type to systemd_gpt_generator_t which itself gives me an avc.

It's possible that this is the cause. From the AVC we don't see the name,
but we know it's a directory and read() fails on it. Failure to list the
directory will cause gpt-auto-generator to consider the directory busy,
which would lead to the "/efi already populated" message.
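
If that's what is happening, I'd try putting the directory back to its
policy-defined label instead of chcon'ing it to a process domain like
systemd_gpt_generator_t, e.g. (untested here):

  restorecon -v /efi
  ls -Zd /efi

and then check whether the generator still logs "already populated" on
the next boot.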

> Apr 24 18:42:49 f28h.local audit[4486]: AVC avc:  denied  { relabelto } for
>  pid=4486 comm="chcon" name="efi" dev="nvme0n1p9" ino=3999777
> scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
> tcontext=system_u:object_r:systemd_gpt_generator_t:s0 tclass=dir
> permissive=0
> 
> 
> Next
> 
> 
> [6.291607] f28h.local systemd[715]: Followed symlinks /efi → /efi.
> [6.291643] f28h.local systemd[715]: Applying namespace mount on /efi
> [6.291671] f28h.local systemd[715]: Successfully mounted /efi to /efi
> [6.294820] f28h.local systemd[715]: Remounted /efi read-only.
> [6.314602] f28h.local systemd[715]: Remounted /sys/firmware/efi/efivars
> read-only.

It looks like /efi does get mounted. What mounted it?

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] automount EFI system partition to /efi fails

2018-04-25 Thread Zbigniew Jędrzejewski-Szmek
On Wed, Apr 25, 2018 at 07:48:13AM +, Zbigniew Jędrzejewski-Szmek wrote:
> On Tue, Apr 24, 2018 at 06:47:24PM -0600, Chris Murphy wrote:
> > https://www.freedesktop.org/wiki/Specifications/DiscoverablePartitionsSpec/
> > *The ESP used for the current boot is automatically mounted to /efi (or
> > /boot as fallback),*
> > 
> > systemd-238-7.fc28.1.x86_64
> > 
> > I've commented out the /boot/efi entry in /etc/fstab and reboot but the ESP
> > doesn't get mounted to /efi or /boot or /boot/efi.
> 
> gpt-auto-generator cannot mount anything to a nested directory,
> because when it's running the outer mount (/boot in this case) will
> not be done yet, hence it cannot check if /boot/efi is a directory. So
> by design it will only consider /efi and /boot as the targets.
>  
> > Full journal with systemd.log_level=debug.
> > https://drive.google.com/open?id=1b4Lfd0HurX4Z68T1jAHYMC0wy51tQwtk
> > 
> > I get a couple of confusing items:
> > 
> > [3.971099] f28h.local systemd-gpt-auto-generator[476]: /efi already
> > populated, ignoring.
> > [4.102022] f28h.local audit[476]: AVC avc:  denied  { read } for
> >  pid=476 comm="systemd-gpt-aut" name="efi" dev="nvme0n1p9" ino=3999777
> > scontext=system_u:system_r:systemd_gpt_generator_t:s0
> > tcontext=unconfined_u:object_r:default_t:s0 tclass=dir permissive=0
> > 
> > 
> > It's definitely empty. But maybe it's due to the avc. chcon fails to set
> > the type to systemd_gpt_generator_t which itself gives me an avc.
> 
> It's possible that this is the cause. From the AVC we don't see the name,
> but we know it's a directory and read() fails on it. Failure to list the
> directory will cause gpt-auto-generator to consider the directory busy,
> which would lead to the "/efi already populated" message.

https://github.com/systemd/systemd/pull/8812 has a patch to improve
logging in this case.

Zbyszek

> > Apr 24 18:42:49 f28h.local audit[4486]: AVC avc:  denied  { relabelto } for
> >  pid=4486 comm="chcon" name="efi" dev="nvme0n1p9" ino=3999777
> > scontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
> > tcontext=system_u:object_r:systemd_gpt_generator_t:s0 tclass=dir
> > permissive=0
> > 
> > 
> > Next
> > 
> > 
> > [6.291607] f28h.local systemd[715]: Followed symlinks /efi → /efi.
> > [6.291643] f28h.local systemd[715]: Applying namespace mount on /efi
> > [6.291671] f28h.local systemd[715]: Successfully mounted /efi to /efi
> > [6.294820] f28h.local systemd[715]: Remounted /efi read-only.
> > [6.314602] f28h.local systemd[715]: Remounted /sys/firmware/efi/efivars
> > read-only.
> 
> It looks like /efi does get mounted. What mounted it?
> 
> Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] About supporting relative values (e.g. 50%) in configuration

2018-03-09 Thread Zbigniew Jędrzejewski-Szmek
On Thu, Mar 08, 2018 at 03:48:54PM +0800, ChenQi wrote:
> Hi All,
> 
> I'd like to know if systemd community thinks that adding support for
> relative values in configuration is a good idea.
> 
> I found a related patch and discussion in 
> https://lists.freedesktop.org/archives/systemd-devel/2015-May/032191.html.
> 
> Quoting from the comments:
> """
> 
> I'd always keep our basic structures as generic as possible, and as
> close to the way CPUs work. Hence: store things as fixed-point
> 32bit/32bit internally, but make it configurable in percent as
> user-facing UI.
> 
> """
> 
> According to the comments, it seems that adding such support is acceptable?
> 
> I'm sending out this email because I want to check with the community
> before I start to investigate more.

Hi,

yes, a patch to allow relative values would be very much welcome.
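
To make the quoted comment a bit more concrete: the user writes "50%",
internally it becomes a 32.32 fixed-point value, i.e. the upper 32 bits
hold the integer part and the lower 32 bits the fraction, so 50% of
"one whole" ends up as 0x80000000. A rough sketch of the parsing side
(hypothetical names, not an existing systemd API):

  #include <errno.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Parse "NN%" into a 32.32 fixed-point fraction of 1.0. */
  static int parse_relative_value(const char *s, uint64_t *ret) {
          char *end;
          double percent = strtod(s, &end);

          if (end == s || end[0] != '%' || end[1] != '\0')
                  return -EINVAL;
          if (percent < 0 || percent > 100)
                  return -ERANGE;

          *ret = (uint64_t) ((percent / 100.0) * 4294967296.0); /* 2^32 */
          return 0;
  }

  int main(void) {
          uint64_t v;
          if (parse_relative_value("50%", &v) == 0)
                  printf("50%% -> %#llx\n", (unsigned long long) v);
          return 0;
  }

The fixed-point value is what the internal structures would store;
turning it back into a percentage for display is just the reverse
scaling.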

Zbyszek
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

