Re: [systemd-devel] Why do passive target units have to exist?

2017-10-24 Thread 林自均
Hi Lennart,

Yes, it makes sense to me. Thank you for your explanation.

However, I still have a question about it. If the general goal is to
minimize synchronization points, why don't we convert more active targets
into passive targets? For example, not all machines have swap, so could
swap.target be a passive target to avoid an unnecessary sync point? I guess
there must be a reason not to do so, but I don't know what it is.

John

Lennart Poettering wrote on Tue, 2017-10-24 at 4:45 PM:

> On Mo, 16.10.17 15:15, 林自均 (johnl...@gmail.com) wrote:
>
> > Hi folks,
> >
> > I am reading systemd documents, and I find passive target units a little
> > bit confusing.
> >
> > Take "network.target" for example:
> >
> > "systemd-networkd.service" specifies "Wants=network.target" and
> > "Before=network.target". That effectively makes starting
> > "systemd-networkd.service" bring up both "systemd-networkd.service" and
> > "network.target", and makes sure that "network.target" is active after
> > "systemd-networkd.service" is active. It also implies that the
> shutdown
> > order is correct: "network.target" will be stopped before
> > "systemd-networkd.service".  Everything is fine.
> >
> > What if we use an active target unit to achieve all this? Can we specify
> a
> > "WantedBy=network.target" in "systemd-networkd.service"? So that we can
> > enable "systemd-networkd.service" (which makes a symbolic link in the
> > "network.target.wants" directory) and start "network.target" to pull in
> > "systemd-networkd.service". That also makes sure "network.target" is
> active
> > after "systemd-networkd.service" because of the target unit default
> > dependencies. And shutdown order will be correct too.
> >
> > The only difference I can tell is the units to start. With a passive
> > "network.target", we start "systemd-networkd.service". With an active
> > "network.target", we start "network.target".
> >
> > Are there any benefits of passive target units over active target units?
>
> We generally try to minimize synchronization points. "network.target"
> is one, and we don't want it around if there's no reason to have
> it. The more synchronization points you have the more serial your boot
> becomes.
>
> The logic hence is: if there's no networking stack then there's no
> reason for services to synchronize before or after it, and thus it's
> the networking stack that pulls in network.target and not its
> consumer, if you so will.
>
> Does that make sense?
>
> Lennart
>
> --
> Lennart Poettering, Red Hat
>
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel


Re: [systemd-devel] systemd-coredump gets terminated during shutdown

2017-10-24 Thread Joel Holdsworth

On 24/10/17 02:38, Lennart Poettering wrote:

On Do, 12.10.17 12:23, Joel Holdsworth (joel.holdswo...@vcatechnology.com) 
wrote:


Hi All,

I have an issue with the standard unit file:
./units/systemd-cored...@.service.in

In my use case, if the main application crashes twice within 2 minutes, the
system will reboot into a recovery environment. I'm using systemd-coredump
to capture the coredump files, but the problem is that if the reboot is
triggered, then the coredump process is killed during shutdown before the
coredump has been written to disk.

First of all, I'm having trouble correcting this behaviour. The
systemd-coredump@.service should have no Conflicts=shutdown.target, and it
must have Before=shutdown.target. I tried making similar changes to the
corresponding .slice and .socket - but for some reason the coredump process
is still getting killed. Is there any way to make systemd log the reason why
a process was chosen for termination?

Also, the coredump process needs to complete before the relevant
partition is unmounted. Is there a way to do that?

These are all systemd n00b questions. But the bigger question is about
whether this is a bug in the standard unit files.

Quite frankly you hit a misdesign in the coredump service there. To
make this safe I figure it needs to become a Type=oneshot service
(i.e. instead of being a long-running service that shall be terminated
at shutdown it would then be a short-running service that we'll wait
for before shutting down).

Could you please file a bug about this in the github issue tracker, so
that we fix this for good upstream?

Lennart


Hi Lennart, thanks for the response.

I filed a bug: https://github.com/systemd/systemd/issues/7176

For the time being, I found a hack: setting 
KillSignal= gives me the behavior I want, but 
obviously it would be better to have a proper solution to prevent this.
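
The actual value Joel set is elided in the archived mail and cannot be recovered from it. Purely as an illustration of the shape of such a workaround (the drop-in path and the signal below are guesses, not from the original message), a drop-in override might look like:

```ini
# Hypothetical drop-in: /etc/systemd/system/systemd-coredump@.service.d/override.conf
# The signal choice here is illustrative only — the value from Joel's mail was lost.
[Service]
KillSignal=SIGCONT
```

After placing such a drop-in, `systemctl daemon-reload` would be needed for the manager to pick it up.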


Joel



[systemd-devel] Two conditional mount units for the same mount point

2017-10-24 Thread Jan Kundrát
I would like to mount one of two partitions to a common mount point based 
on an option passed on the kernel's command line. I tried to implement this 
with two cfg-A.mount and cfg-B.mount units with appropriate 
ConditionKernelCommandLine=... stanzas, but this did not work because a 
.mount unit apparently needs to have a name which exactly matches its 
mountpoint. I don't know how I can have two units with a different content 
sharing the same unit name, though.


Here's my cfg-A.mount; the other one is very similar:


[Unit]
Description=Persistent configuration: slot A
ConditionKernelCommandLine=rauc.slot=A
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target
After=swap.target

[Mount]
What=/dev/mmcblk0p2
Where=/cfg
Type=auto
Options=relatime,nosuid,nodev


In the end, I kinda-solved this with a custom oneshot service which simply 
calls `mount` explicitly, but that's probably not a very systemd-ish 
solution.
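
For reference, such a oneshot workaround might be sketched like this (unit name and details are illustrative; Jan's actual unit was not posted):

```ini
# cfg-slot-a.service — sketch of the explicit-mount oneshot workaround
# (hypothetical reconstruction, mirroring the cfg-A.mount shown above)
[Unit]
Description=Persistent configuration: slot A (explicit mount)
ConditionKernelCommandLine=rauc.slot=A
DefaultDependencies=no
Conflicts=umount.target
Before=local-fs.target umount.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/mount -o relatime,nosuid,nodev /dev/mmcblk0p2 /cfg
ExecStop=/bin/umount /cfg

[Install]
WantedBy=local-fs.target
```

A second cfg-slot-b.service with `ConditionKernelCommandLine=rauc.slot=B` and the other partition would cover the B slot; only one condition holds on any given boot.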


How should I solve this elegantly?

With kind regards,
Jan


[systemd-devel] Later activation of the HW watchdog

2017-10-24 Thread Jan Kundrát

Hi,
is it possible to change systemd's global settings for RuntimeWatchdogSec 
at runtime? I would like to have the early boot "guarded" by the HW 
watchdog started by my platform code, and for systemd to take over only 
after a certain target has been reached. I was thinking about an extra unit 
which simply writes an appropriate config file, but the docs for `systemctl 
daemon-reload` or `daemon-reexec` do not talk about these top-level 
settings. How do I tell systemd to notice a new value?
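
For context, the setting in question lives in systemd's manager configuration (excerpt shape only; the value is an example, not Jan's):

```ini
# /etc/systemd/system.conf (excerpt)
[Manager]
RuntimeWatchdogSec=30s
```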


Context: I'm using systemd on an embedded ARM box with reliable network 
connectivity. The system has two fully separate rootfs/kernel/devicetree 
instances, A and B. The bootloader starts a HW watchdog timer, and the 
bootloader keeps a counter tracking how many times a particular A/B 
"boot slot" attempted to boot. The kernel ignores the watchdog, and once 
systemd gets launched and checks its system.conf file, it proceeds to 
re-start the WD timer periodically. Finally, a unit which is pulled in by 
my default target updates the bootloader's environment, resetting the boot 
counter.


My goal is to be able to boot a possibly broken image (but not a malicious 
one, of course) without fearing that it's going to lock me out of my 
device. If the new image "fails" for some reason, I expect the HW watchdog 
to reset the system, the boot attempt counter to eventually reach zero, and 
the whole system to roll back to the previous image, eventually. In my 
scenario, it's preferable to make the decision to reboot rather than wait 
for human interaction to solve the actual problem. The once-failed slot 
can be re-flashed very cheaply, and an updated version can be re-tried 
during the next update attempt.


During my testing, I was able to unplug the system's SD card at a "wrong" 
moment which resulted in systemd trying to boot into emergency.target and 
ultimately failing due to a missing rootfs. I ended up with an unusable 
system which did not reboot automatically because systemd was periodically 
pinging the HW watchdog timer. [1]


I got a suggestion to adjust the important units so that they specify a 
FailureAction. I do not like that solution because it is additional work 
(identifying which units might fail, coming up with various possible 
failing scenarios, being hard to test and get "right" in the face of future 
systemd updates, etc.). It also feels like I am attacking the wrong problem. 
I already *have* a watchdog which will shoot the system in the head if 
something goes wrong. Wouldn't it make more sense to rely on this piece 
of infrastructure and start telling the watchdog "hey, I'm OK" only after 
the system has fully booted and my ultimate target has been *reached*?


Suggestions which offer additional possibilities are welcome. I like 
systemd's feature set, and I won't pretend that I know all of it :).


With kind regards,
Jan

[1] https://github.com/systemd/systemd/issues/7063


Re: [systemd-devel] journal: Receiving messages from the future

2017-10-24 Thread Lennart Poettering
On Fr, 13.10.17 12:00, Michael Littlejohn (mlitt...@live.co.uk) wrote:

> Hi,
> 
> TL;DR: Server is receiving log messages via systemd-journal-remote
> from 14s in the future (sender clock is 14s fast). This causes journal
> files to close, rotate, and start anew on every message/batch. The check
> at journal-file.c:621 looks for messages "from the future" from the
> perspective of the local system, rather than checking whether messages
> are in or out of order within the context of that message
> stream. Discuss.

Sounds like a bug. Rotate-on-time-jumps is a feature journald should
enforce, but not journal-remote. Please file a bug about this in the
github issue tracker, or even better prep a PR fixing this!

Thanks,

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] systemd-coredump gets terminated during shutdown

2017-10-24 Thread Lennart Poettering
On Do, 12.10.17 12:23, Joel Holdsworth (joel.holdswo...@vcatechnology.com) 
wrote:

> Hi All,
> 
> I have an issue with the standard unit file:
> ./units/systemd-cored...@.service.in
> 
> In my use case, if the main application crashes twice within 2 minutes, the
> system will reboot into a recovery environment. I'm using systemd-coredump
> to capture the coredump files, but the problem is that if the reboot is
> triggered, then the coredump process is killed during shutdown before the
> coredump has been written to disk.
> 
> First of all, I'm having trouble correcting this behaviour. The
> systemd-coredump@.service should have no Conflicts=shutdown.target, and it
> must have Before=shutdown.target. I tried making similar changes to the
> corresponding .slice and .socket - but for some reason the coredump process
> is still getting killed. Is there any way to make systemd log the reason why
> a process was chosen for termination?
> 
> Also, the coredump process needs to complete before the relevant
> partition is unmounted. Is there a way to do that?
> 
> These are all systemd n00b questions. But the bigger question is about
> whether this is a bug in the standard unit files.

Quite frankly you hit a misdesign in the coredump service there. To
make this safe I figure it needs to become a Type=oneshot service
(i.e. instead of being a long-running service that shall be terminated
at shutdown it would then be a short-running service that we'll wait
for before shutting down).
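
A rough sketch of that direction (illustrative only — not the actual unit that later shipped upstream):

```ini
# Sketch of a shutdown-safe coredump unit, per the redesign described above.
[Unit]
DefaultDependencies=no
Before=shutdown.target

[Service]
# A oneshot service is waited for at shutdown rather than terminated,
# so an in-flight coredump can finish writing before units are stopped.
Type=oneshot
ExecStart=/usr/lib/systemd/systemd-coredump
```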

Could you please file a bug about this in the github issue tracker, so
that we fix this for good upstream?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] How to stop systemd-udevd reading a device after dd

2017-10-24 Thread Lennart Poettering
On Fr, 13.10.17 08:06, Mantas Mikulėnas (graw...@gmail.com) wrote:

> gparted seems to achieve this by masking all .rules files it can find (by
> creating 0-byte versions under /run/udev/rules.d).

Urks.

Somebody should tell them about BSD file locks.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Why do passive target units have to exist?

2017-10-24 Thread Lennart Poettering
On Do, 19.10.17 09:46, 林自均 (johnl...@gmail.com) wrote:

> Hi Andrei,
> 
> Thank you for your reply!
> Just to confirm: there are about 10 passive system targets in
> systemd.special(5):
> 
> - getty-pre.target
> - cryptsetup-pre.target
> - local-fs-pre.target
> - network.target
> - network-pre.target
> - nss-lookup.target
> - nss-user-lookup.target
> - remote-fs-pre.target
> - rpcbind.target
> - time-sync.target
> 
> Are they all related to the early initscripts days?

No, they have nothing to do with that, really. They are all
synchronization points that we don't want in the boot transaction if
possible, but that we pull in if we have to.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] Why do passive target units have to exist?

2017-10-24 Thread Lennart Poettering
On Mo, 16.10.17 15:15, 林自均 (johnl...@gmail.com) wrote:

> Hi folks,
> 
> I am reading systemd documents, and I find passive target units a little
> bit confusing.
> 
> Take "network.target" for example:
> 
> "systemd-networkd.service" specifies "Wants=network.target" and
> "Before=network.target". That effectively makes starting
> "systemd-networkd.service" bring up both "systemd-networkd.service" and
> "network.target", and makes sure that "network.target" is active after
> "systemd-networkd.service" is active. It also implies that the shutdown
> order is correct: "network.target" will be stopped before
> "systemd-networkd.service".  Everything is fine.
> 
> What if we use an active target unit to achieve all this? Can we specify a
> "WantedBy=network.target" in "systemd-networkd.service"? So that we can
> enable "systemd-networkd.service" (which makes a symbolic link in the
> "network.target.wants" directory) and start "network.target" to pull in
> "systemd-networkd.service". That also makes sure "network.target" is active
> after "systemd-networkd.service" because of the target unit default
> dependencies. And shutdown order will be correct too.
> 
> The only difference I can tell is the units to start. With a passive
> "network.target", we start "systemd-networkd.service". With an active
> "network.target", we start "network.target".
> 
> Are there any benefits of passive target units over active target units?

We generally try to minimize synchronization points. "network.target"
is one, and we don't want it around if there's no reason to have
it. The more synchronization points you have the more serial your boot
becomes.

The logic hence is: if there's no networking stack then there's no
reason for services to synchronize before or after it, and thus it's
the networking stack that pulls in network.target and not its
consumer, if you so will.
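
Concretely, the provider side of this pattern looks roughly like this in a unit file (simplified from the shipped systemd-networkd.service, not a verbatim copy):

```ini
# The provider of networking pulls the passive target in
# and orders itself before it:
[Unit]
Description=Network Service
Wants=network.target
Before=network.target
```

Consumers then merely order themselves `After=network.target` without pulling it in, so the synchronization point only exists on systems where a networking stack is actually enabled.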

Does that make sense?

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] How to stop systemd-udevd reading a device after dd

2017-10-24 Thread Lennart Poettering
On Fr, 13.10.17 01:01, Akira Hayakawa (ruby.w...@gmail.com) wrote:

> I have a device /dev/sdb1 and let's trace the block request by blktrace
> 
> $ sudo blktrace -d /dev/sdb1
> 
> When I write 4KB using dd
> $ sudo dd if=/dev/zero of=/dev/sdb1 oflag=direct bs=4k count=1
> 
> The block trace (after blkparse) shows a write request, as expected:
>   8,17   22 0.03171  5930  Q  WS 2048 + 8 [dd]
> 
> followed by an unexpected read from systemd-udevd:
>   8,17   72 0.001755563  5931  Q   R 2048 + 8 [systemd-udevd]
> 
> My first question is: what is this read request?
> 
> And I want to stop the read request because it makes it difficult to test 
> kernel code.
> So the second question is: how can I stop the read request?

If you want exclusive access to a block device and don't want udev to
step in, then simply take a BSD file lock on it (i.e. flock(2)), and
udev won't probe it.

This is not particularly well documented (one could say: not at all),
but it's the official way really, and very UNIXish I'd claim.
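
As a sketch of the mechanism (using a temporary regular file as a stand-in for /dev/sdb1, since the BSD lock semantics are the same; `oflag=direct` is dropped because tmpfs may not support it):

```shell
# Stand-in demo: on the real device, holding the exclusive BSD lock
# while dd runs is what keeps udev from probing mid-test.
dev=$(mktemp)

# flock(1) opens $dev, takes an exclusive flock(2) lock, runs dd,
# and releases the lock when dd exits.
flock "$dev" dd if=/dev/zero of="$dev" bs=4k count=1 status=none

rm -f "$dev"
```

On an actual block device, any program can equally call `flock(fd, LOCK_EX)` on its own open file descriptor before writing.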

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] restricting resource of a process within a service

2017-10-24 Thread Lennart Poettering
On Di, 17.10.17 18:21, Armen Baloyan (armen.balo...@daqri.com) wrote:

> Hi,
> 
> I need to restrict CPU usage for a process that runs in a service. The
> service has over 10 processes running but I need to put restrictions only
> on 1 of those processes.

As Colin already pointed out, the usual way is to split up that
service into multiple services that can be individually managed with
regard to resource consumption.

Note that if you want to limit the total CPU time consumption you can use
classic UNIX resource limits for this, i.e. RLIMIT_CPU. But this puts a hard
limit on the total CPU runtime; it's not a way to say "this process
gets only 10% of the CPU time of each second of runtime"...
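
For the hard-cap variant, the rlimit can be set directly in the unit (a sketch; 60 is an example value):

```ini
[Service]
# RLIMIT_CPU: after roughly 60 seconds of accumulated CPU time,
# the process receives SIGXCPU.
LimitCPU=60
```

The percentage-per-interval style of limit would instead be `CPUQuota=` (cgroup CPU bandwidth control) on a unit of its own.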

> Is it possible to move this process to a separate cgroup, or can it not be
> removed from its service's cgroup?

You can open a scope unit for a specific process, programmatically via D-Bus.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] code=exited, status=226/NAMESPACE

2017-10-24 Thread Lennart Poettering
On Sa, 30.09.17 16:08, Reindl Harald (h.rei...@thelounge.net) wrote:

> nevermind, it's still https://bugzilla.redhat.com/show_bug.cgi?id=131
> because systemd in Fedora is abandonware and /home is on that VM a symlink
> to the storage-lvm

More recent systemd versions should no longer have problems mixing
namespaces and symlinked directories.

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] How to give users permissions to /dev/kfd

2017-10-24 Thread Lennart Poettering
On Mo, 16.10.17 12:32, Simon McVittie (s...@collabora.com) wrote:

> On Sat, 14 Oct 2017 at 17:50:33 +0300, Mantas Mikulėnas wrote:
> > No, it's only available for local sessions (ones which systemd-logind 
> > considers
> > "local" + "active"). I think the idea is that console users automatically 
> > get
> > more privileges in general.
> 
> Specifically, the idea is that console users should have access to
> devices that are the machine representation of things they can physically
> access anyway. The classic example is audio. If Alice is sitting at a
> desktop/laptop computer and Bob is ssh'd in to the same computer, it's
> fine for Alice to be able to record the same audio that she can hear
> already; but it is usually not OK for Bob to be able to record audio
> because that would let him spy on Alice.
> 
> Similarly, logind defaults to allowing local active users to shut down
> the machine (because they are likely to be in a position to pull the
> plug or remove the battery anyway), but not remote users (to prevent
> them from causing denial-of-service for local users or other remote users).
> 
> > For SSH-only usage, use traditional groups (e.g. add yourself to the "video"
> > group). To assign group ownership to /dev/kfd, use GROUP="foo" in udev 
> > rules.
> 
> And, yes, the way to bypass the "only local users" bit is to add your uid
> to an appropriate group, which is a way of saying: this user has special
> privileges, and can access something (in your case video) even when not
> physically present.

For the sake of the archives this discussion more or less moved to:

https://github.com/systemd/systemd/pull/7112
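
Also for the archives, the GROUP= approach Mantas mentions would take roughly this shape (the rule file name, match, group, and mode below are illustrative, not from the thread):

```ini
# Hypothetical /etc/udev/rules.d/70-kfd.rules — give the "video" group
# access to /dev/kfd regardless of logind session state.
KERNEL=="kfd", GROUP="video", MODE="0660"
```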

Lennart

-- 
Lennart Poettering, Red Hat