Thanks for the details.

On Mon, Nov 16, 2020 at 09:30:20PM +0300, Andrei Enshin <b...@bk.ru> wrote:
> I see the kubelet crash with error: «Failed to start ContainerManager failed 
> to initialize top level QOS containers: root container [kubepods] doesn't 
> exist»
> details:  https://github.com/kubernetes/kubernetes/issues/95488
I skimmed the issue and noticed that your setup uses the 'cgroupfs'
cgroup driver. As explained in the other messages in this thread, that
driver conflicts with systemd's management of the root cgroup tree.
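
If you want to move away from it, the usual remedy is to have both the
kubelet and the container runtime use the systemd driver. A rough
sketch, assuming a kubeadm-style layout (paths and keys may differ on
your distro, so treat this as illustrative only):

  # /var/lib/kubelet/config.yaml (KubeletConfiguration)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd

and, if the runtime is containerd, the matching runc option:

  # /etc/containerd/config.toml
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

followed by restarting the runtime and the kubelet.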

> I can see the same two mounts of the named systemd hierarchy from a shell
> on the same node, simply via `$ cat /proc/self/mountinfo`.
> I think the kubelet is running in the «main» mount namespace, which has
> the weird named systemd mount.
I assume so as well. It may be residue left in the kubelet's context
from when the environment was being prepared for a container spawned
from within that context.
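
One way to check where the stray mount lives is to compare the
kubelet's mount namespace with PID 1's, e.g. (adjust the pid lookup to
your setup):

  # same mount namespace as systemd?
  readlink /proc/1/ns/mnt
  readlink /proc/$(pidof kubelet)/ns/mnt

  # where do the name=systemd hierarchies show up?
  grep 'name=systemd' /proc/1/mountinfo
  grep 'name=systemd' /proc/$(pidof kubelet)/mountinfo

If both readlinks point at the same namespace, the duplicate
name=systemd entry really is in the host's mount table and was most
likely left behind by container setup, as suggested above.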

> I would like to reproduce such a weird mount to understand the full
> situation and make sure I can avoid it in the future.
I'm afraid you may be seeing the results of races between systemd
service (de)activation and container spawning under the "shared" root
(both of which involve cgroup creation, removal and migration). That is
exactly why cgroup subtree delegation exists.
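
Just to illustrate what delegation buys you: a unit with Delegate= set
gets a subtree that systemd promises not to touch, roughly

  # drop-in, e.g. /etc/systemd/system/kubelet.service.d/10-delegate.conf
  # (hypothetical example, not a recommendation for your setup)
  [Service]
  Delegate=yes

and everything the service creates below its own cgroup is then its own
business, while the rest of the tree stays under systemd's control. A
manager that instead writes directly into the top of the hierarchy, as
the 'cgroupfs' driver does, inevitably races with systemd.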

So I'd say there's not much to be done from the systemd side here.


Michal

