Re: [systemd-devel] systemd-nspawn and cgroup hybrid mode

2019-05-20 Thread Lennart Poettering
On Mo, 13.05.19 11:07, Antoine Pietri (antoine.piet...@gmail.com) wrote:

> On Mon, May 13, 2019 at 10:42 AM Lennart Poettering
>  wrote:
> > you can use it to lock up the machine, hence we generally don't do it.
>
> Thanks, got it. For my use case though, security isn't much of a
> concern and I don't necessarily have the time/bandwidth to migrate the
> software to cgroupsv2 upstream right now. Is there an option "this
> will void your warranty" that I can enable to force delegation in the
> hybrid setup? :-)

Nope, there currently is not, sorry. And it's unlikely we'll add this
now given that cgroupv1 is kinda on its way out, and it's genuinely
unsafe to do this...

Lennart

--
Lennart Poettering, Berlin
___
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/systemd-devel

Re: [systemd-devel] systemd-nspawn and cgroup hybrid mode

2019-05-13 Thread Antoine Pietri
On Mon, May 13, 2019 at 10:42 AM Lennart Poettering
 wrote:
> you can use it to lock up the machine, hence we generally don't do it.

Thanks, got it. For my use case though, security isn't much of a
concern and I don't necessarily have the time/bandwidth to migrate the
software to cgroupsv2 upstream right now. Is there an option "this
will void your warranty" that I can enable to force delegation in the
hybrid setup? :-)

Thanks,

-- 
Antoine Pietri

Re: [systemd-devel] systemd-nspawn and cgroup hybrid mode

2019-05-13 Thread Lennart Poettering
On So, 12.05.19 14:09, Antoine Pietri (antoine.piet...@gmail.com) wrote:

> Hi,
>
> I have a probably dumb question for which I couldn't find an answer in
> the docs. I'm trying to make a program that uses the cgroupv1 API run
> inside a systemd-nspawn container. In the host, I know that I can just
> look at /proc/self/cgroup to see the path of my cgroup and write stuff
> there. The legacy cgroup tree is properly created on the host:
>
> $ systemd-run --pty find /sys/fs/cgroup/ | grep 'run-u[0-9]' | grep group/memory/
> /sys/fs/cgroup/memory/system.slice/run-u4161.service/cgroup.procs
> /sys/fs/cgroup/memory/system.slice/run-u4161.service/memory.use_hierarchy
> /sys/fs/cgroup/memory/system.slice/run-u4161.service/memory.kmem.tcp.usage_in_bytes
> /sys/fs/cgroup/memory/system.slice/run-u4161.service/memory.soft_limit_in_bytes
> [...]
>
> But when I'm in the container, it doesn't work anymore. Running the
> same command returns no results. Now, this is most certainly due to
> the fact that in the container, /sys/fs/cgroup is mounted read-only
> so systemd can't create anything in /sys/fs/cgroup/memory... but then,
> what is the proper way to write into the legacy cgroups? I also tried
> to turn on Delegate=true to see if it would change something, but it
> doesn't.

Delegation of the various controllers to less privileged environments
(i.e. from host to container) is not safe on cgroupsv1, you can use it
to lock up the machine, hence we generally don't do it.

Delegation of the various controllers to less privileged environments
is safe on cgroupsv2, and there we do it. That's why you'll find
controllers such as "memory" delegated to nspawn containers and
systemd --user by default on such systems.
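[Editorial sketch: one way to see this delegation from inside a container on a cgroupsv2 system is to read cgroup.controllers in the container's own subtree. The path below assumes the unified hierarchy is mounted at its default location; on legacy/hybrid systems the file simply does not exist.]

```shell
#!/bin/sh
# List the controllers delegated to this environment on cgroupsv2.
# Assumes the unified hierarchy is mounted at /sys/fs/cgroup (the default);
# falls back with a note on legacy/hybrid setups, where this file is absent.
ctrl_file=/sys/fs/cgroup/cgroup.controllers
if [ -r "$ctrl_file" ]; then
    echo "delegated controllers: $(cat "$ctrl_file")"
else
    echo "no cgroup.controllers here: legacy or hybrid mode, nothing delegated"
fi
```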

The "hybrid" setup is in most ways like cgroupsv1, i.e. all
controllers are still accessed the cgroupsv1 way, and hence delegation
is not done there either.
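[Editorial sketch: a quick way to tell which of the three modes a system is in is to check the filesystem type mounted on /sys/fs/cgroup. "cgroup2fs" means unified; "tmpfs" means legacy or hybrid, and hybrid additionally mounts a cgroup2 hierarchy at /sys/fs/cgroup/unified.]

```shell
#!/bin/sh
# Identify the cgroup setup from the fs type at /sys/fs/cgroup:
#   cgroup2fs -> unified (pure cgroupsv2)
#   tmpfs     -> legacy or hybrid; hybrid also mounts cgroup2 at
#                /sys/fs/cgroup/unified
fstype=$(stat -f -c %T /sys/fs/cgroup/ 2>/dev/null || echo unknown)
case "$fstype" in
    cgroup2fs) echo "unified (cgroupsv2)" ;;
    tmpfs)
        if [ -d /sys/fs/cgroup/unified ]; then
            echo "hybrid"
        else
            echo "legacy (cgroupsv1)"
        fi ;;
    *) echo "unrecognized: $fstype" ;;
esac
```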

It's a major reason why people really should switch to cgroupsv2 now.

Lennart

--
Lennart Poettering, Berlin

[systemd-devel] systemd-nspawn and cgroup hybrid mode

2019-05-12 Thread Antoine Pietri
Hi,

I have a probably dumb question for which I couldn't find an answer in
the docs. I'm trying to make a program that uses the cgroupv1 API run
inside a systemd-nspawn container. In the host, I know that I can just
look at /proc/self/cgroup to see the path of my cgroup and write stuff
there. The legacy cgroup tree is properly created on the host:

$ systemd-run --pty find /sys/fs/cgroup/ | grep 'run-u[0-9]' | grep group/memory/
/sys/fs/cgroup/memory/system.slice/run-u4161.service/cgroup.procs
/sys/fs/cgroup/memory/system.slice/run-u4161.service/memory.use_hierarchy
/sys/fs/cgroup/memory/system.slice/run-u4161.service/memory.kmem.tcp.usage_in_bytes
/sys/fs/cgroup/memory/system.slice/run-u4161.service/memory.soft_limit_in_bytes
[...]

But when I'm in the container, it doesn't work anymore. Running the
same command returns no results. Now, this is most certainly due to
the fact that in the container, /sys/fs/cgroup is mounted read-only
so systemd can't create anything in /sys/fs/cgroup/memory... but then,
what is the proper way to write into the legacy cgroups? I also tried
to turn on Delegate=true to see if it would change something, but it
doesn't.
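
[Editorial sketch of the v1 lookup described above: take the memory controller's entry from /proc/self/cgroup and build the matching sysfs path. The sample line below is constructed from the paths shown in the transcript, not captured live.]

```shell
#!/bin/sh
# Build the v1 memory-controller path for the current cgroup.
# A sample line is used so the snippet is self-contained; on a real v1
# host you would obtain it with: grep ':memory:' /proc/self/cgroup
line='4:memory:/system.slice/run-u4161.service'
rel=$(echo "$line" | cut -d: -f3)
echo "/sys/fs/cgroup/memory${rel}/cgroup.procs"
# prints /sys/fs/cgroup/memory/system.slice/run-u4161.service/cgroup.procs
```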

Thanks,

-- 
Antoine Pietri