>>> Lennart Poettering wrote on 02.03.2022 at 17:22 in message :
> On Wed, 02.03.22 13:02, Arian van Putten (arian.vanput...@gmail.com) wrote:
>
>> I've seen this a lot with docker/containerd. It seems as if for some reason
>> systemd doesn't wait for their cgroups to be cleaned up on shutdown.
Hi!
In SLES15 SP3 (systemd-246.16-7.33.1.x86_64) I have this effect, wondering
whether it is a bug or a feature:
When using "journalctl -b -g raid" I see that _ome_ matches are highlighted in
red, but others aren't. For example:
Mar 01 01:37:09 h16 kernel: mega*raid*_sas :c1:00.0: BAR:0x1 B
Ah, right, I forgot – since this is done in the service child (right before
exec) and not in the main process, you probably need to add the -f option
to make strace follow forks...
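For example (a sketch only; attaching to PID 1 and the output path are my
assumptions about the setup, not something stated above):

    strace -f -o /tmp/pid1.trace -p 1

With -f the fork into the service child is followed, so the syscalls made
right before the exec show up in the trace as well, prefixed with the
child's PID.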
On Thu, Mar 3, 2022, 22:08 Christopher Obbard wrote:
> Hi Mantas,
>
> On 03/03/2022 19:18, Mantas Mikulėnas wrote:
On Thu, Mar 3, 2022 at 9:09 PM Christopher Obbard <chris.obb...@collabora.com> wrote:
> Hi systemd experts!
>
> I am using systemd-247 and systemd-250 on a Debian system, which is
> running a minimal downstream 5.4 kernel for a Qualcomm board.
>
> systemd 241 in Debian buster works fine, but system
Hi systemd experts!
I am using systemd-247 and systemd-250 on a Debian system, which is
running a minimal downstream 5.4 kernel for a Qualcomm board.
systemd 241 in Debian buster works fine, but systemd 247 (Debian
bullseye) and systemd 250 (Debian unstable) seem to get upset about file
descri
On Wed, 02.03.22 17:50, Lennart Poettering (lenn...@poettering.net) wrote:
> That said, we could certainly show both the comm field and the PID of
> the offending processes. I am prepping a patch for that.
See: https://github.com/systemd/systemd/pull/22655
Lennart
--
Lennart Poettering, Berlin
On Thu, 03.03.22 18:35, Felip Moll (fe...@schedmd.com) wrote:
> I have read and studied all your suggestions and I understand them.
> I also did some performance tests in which I fork+executed a systemd-run to
> launch a service for every step and I got bad performance overall.
> One of our QA test
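For illustration, one such per-step launch could look roughly like this (a
sketch; the unit name, slice and command are hypothetical, not Slurm's
actual invocation):

    systemd-run --unit=step-42 --slice=jobs.slice --scope -- /usr/bin/my_step

Every step then costs an extra fork+exec of systemd-run plus a synchronous
D-Bus round-trip to PID 1, which is the overhead being measured above.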
Hi folks, I wanted to keep the case as generic as possible, but I think it
is important at this point to comment on what we're talking about, so let
me clarify a little the case I am dealing with at the moment.
At SchedMD, we want Slurm to support 'Cgroup v2'. As you may know, Slurm is
an HPC res
On Mon, 21.02.22 22:16, Felip Moll (lip...@gmail.com) wrote:
> Silvio,
>
> As I commented in my previous post, creating every single job in a separate
> slice is an overhead I cannot afford.
> An HTC system could run thousands of jobs per second, and doing extra
> fork+execs plus waiting for system
On Mon, 21.02.22 18:07, Felip Moll (lip...@gmail.com) wrote:
> > That's a bad idea typically, and a generally a hack: the unit should
> > probably be split up differently, i.e. the processes that shall stick
> > around on restart should probably be in their own unit, i.e. another
> > service or sco