Re: [systemd-devel] running systemd in a cgroup: future compatibility

2018-02-14 Thread Josh Snyder
On Wed, Feb 14, 2018 at 1:38 AM, Lennart Poettering wrote:
>
> That all said, I think we should really try to make systemd work with
> your usecase directly and natively, i.e. add enough flexibility to
> systemd so that you don't have to wrap it in such a "foreign" prefix
> cgroup to do what you want... For example, if PID 1 knew a
> "DefaultSlice=" option in /etc/systemd/system.conf, and
> logind/machined knew similar options in their respective configuration
> files, you could do what you are trying to do without leaving systemd
> territory relatively easily. Just set those options to some slice
> further down the tree, and you could let machined run inside of
> systemd just fine without having to arrange anything outside of it...

If the feature you describe existed, I'd throw away my initramfs script in a
heartbeat.

Josh


Re: [systemd-devel] running systemd in a cgroup: future compatibility

2018-02-14 Thread Lennart Poettering
On Tue, 13.02.18 17:06, Josh Snyder (jo...@netflix.com) wrote:

> I've tested against both legacy and unified cgroup hierarchies. The
> functionality to detect the current cgroups and nest processes underneath
> them appears to be in manager_setup_cgroup (src/core/cgroup.c:2033). My
> question for the list is what motivated adding this awesome feature to
> systemd in the first place, and (more importantly to me) is it likely to
> continue to exist in the future?

This logic exists for two reasons:

1. "systemd --user" needs this so that it can manage the cgroup
   subtree it is started in. It gets a delegated, unprivileged cgroup
   subtree and couldn't even run outside of its subtree, even if it
   wanted to.

2. Before cgroup namespaces existed, this kind of behaviour was
   necessary to make sure that systemd running inside containers works
   correctly: it would only make use of its own subtree and leave the
   rest alone.

And besides that, cgroups are supposed to be neatly composable, hence
it's philosophically really the right thing to do. And yes, we'll
continue to support that.
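
(As a concrete illustration of point 1, the subtree delegated to a per-user
manager can be inspected with systemd-cgls; the UID-1000 paths below are only
an example and will differ from system to system.)

# Show the cgroup subtree delegated to the systemd --user instance of UID 1000
systemd-cgls /user.slice/user-1000.slice/user@1000.service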

That all said, I think we should really try to make systemd work with
your usecase directly and natively, i.e. add enough flexibility to
systemd so that you don't have to wrap it in such a "foreign" prefix
cgroup to do what you want... For example, if PID 1 knew a
"DefaultSlice=" option in /etc/systemd/system.conf, and
logind/machined knew similar options in their respective configuration
files, you could do what you are trying to do without leaving systemd
territory relatively easily. Just set those options to some slice
further down the tree, and you could let machined run inside of
systemd just fine without having to arrange anything outside of it...
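
(Purely as a sketch of that proposal: if such a knob existed, the
configuration might look roughly like the following. DefaultSlice= is
hypothetical and does not exist in systemd today; the option name, section
and value are assumptions based on the paragraph above.)

# /etc/systemd/system.conf -- HYPOTHETICAL sketch, this option does not exist
[Manager]
DefaultSlice=root.slice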

Lennart

-- 
Lennart Poettering, Red Hat


Re: [systemd-devel] running systemd in a cgroup: future compatibility

2018-02-13 Thread Mantas Mikulėnas
On Wed, Feb 14, 2018 at 3:06 AM, Josh Snyder wrote:

> Hi all,
>
> I'm working on a use case where I want to impose memory limits on the
> system in aggregate, with one process (e.g. memcached) opted out of the
> limit. With a typical distribution's setup (Ubuntu, in my case), I would
> be able to impose a memory limit on each systemd slice (viz. user,
> machine, system) separately, but not in aggregate.
>
> I've found a solution for this by launching systemd in a cgroup of its
> own. In the initramfs, I have a script (snippet included below) which
> mounts the cgroupfs, puts all of the extant processes into a new cgroup,
> and then unmounts the cgroupfs.
>
> This works for my needs: all of the processes in the system live in one
> cgroup, except memcached in a separate cgroup. systemd seems perfectly
> happy to boot in this configuration, correctly sensing that it's
> operating in a cgroup and nesting the processes it is responsible for
> under the existing cgroup. With memcached running separately, the
> resulting hierarchy looks like:
>
> /sys/fs/cgroup/
> ├── memcached
> └── root
>     ├── init.scope
>     ├── system.slice
>     ...
>
> And /proc/1/cgroup shows that systemd (and, more importantly, its
> descendants) lives in the memory cgroup:
>
> 11:net_cls,net_prio:/
> 10:cpuset:/
> 9:freezer:/
> 8:hugetlb:/
> 7:perf_event:/
> 6:pids:/root/init.scope
> 5:devices:/root/init.scope
> 4:cpu,cpuacct:/
> 3:blkio:/
> 2:memory:/root
> 1:name=systemd:/root/init.scope
>
> I've tested against both legacy and unified cgroup hierarchies. The
> functionality to detect the current cgroups and nest processes underneath
> them appears to be in manager_setup_cgroup (src/core/cgroup.c:2033). My
> question for the list is what motivated adding this awesome feature to
> systemd in the first place, and (more importantly to me) is it likely to
> continue to exist in the future?
>

I'm assuming it happens to work because the unprivileged `systemd --user`
instances require the same kind of autodetection.

-- 
Mantas Mikulėnas


[systemd-devel] running systemd in a cgroup: future compatibility

2018-02-13 Thread Josh Snyder
Hi all,

I'm working on a use case where I want to impose memory limits on the system in
aggregate, with one process (e.g. memcached) opted out of the limit. With a
typical distribution's setup (Ubuntu, in my case), I would be able to impose a
memory limit on each systemd slice (viz. user, machine, system) separately, but
not in aggregate.
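
(For comparison, the per-slice limits that are possible today can be set with
something like the following; this is only a sketch, the 4G values are
arbitrary, and on a pure cgroup-v2 setup MemoryMax= would be used instead of
MemoryLimit=.)

# Cap each top-level slice individually -- there is no single knob
# covering their sum, which is exactly the problem described above.
systemctl set-property system.slice MemoryLimit=4G
systemctl set-property user.slice MemoryLimit=4G
systemctl set-property machine.slice MemoryLimit=4G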

I've found a solution for this by launching systemd in a cgroup of its own. In
the initramfs, I have a script (snippet included below) which mounts the
cgroupfs, puts all of the extant processes into a new cgroup, and then unmounts
the cgroupfs.

This works for my needs: all of the processes in the system live in one cgroup,
except memcached in a separate cgroup. systemd seems perfectly happy to boot in
this configuration, correctly sensing that it's operating in a cgroup and
nesting the processes it is responsible for under the existing cgroup. With
memcached running separately, the resulting hierarchy looks like:

/sys/fs/cgroup/
├── memcached
└── root
    ├── init.scope
    ├── system.slice
    ...

And /proc/1/cgroup shows that systemd (and, more importantly, its
descendants) lives in the memory cgroup:

11:net_cls,net_prio:/
10:cpuset:/
9:freezer:/
8:hugetlb:/
7:perf_event:/
6:pids:/root/init.scope
5:devices:/root/init.scope
4:cpu,cpuacct:/
3:blkio:/
2:memory:/root
1:name=systemd:/root/init.scope
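
(Given that layout, the aggregate limit itself can then be applied to the
prefix cgroup directly; a sketch assuming the v1 memory hierarchy is mounted
at /sys/fs/cgroup/memory as on a stock Ubuntu system, with 100G as a
placeholder value.)

# Everything systemd manages now falls under memory:/root, so one limit
# covers it all, while the sibling memcached cgroup stays unconstrained.
echo 100G > /sys/fs/cgroup/memory/root/memory.limit_in_bytes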

I've tested against both legacy and unified cgroup hierarchies. The
functionality to detect the current cgroups and nest processes underneath them
appears to be in manager_setup_cgroup (src/core/cgroup.c:2033). My question for
the list is what motivated adding this awesome feature to systemd in the first
place, and (more importantly to me) is it likely to continue to exist in the
future?

Josh

---
#!/bin/sh

# Move every task on the hierarchy currently mounted at ${MOUNTPOINT}
# into a new "root" child cgroup, then unmount the hierarchy again.
move_tasks()
{
  cd "${MOUNTPOINT}"
  mkdir root
  exec 3< "${CGROUP_FILE}"
  # Some PIDs (e.g. kernel threads) cannot be moved; ignore those errors.
  set +e
  while read task; do
    echo "$task" > "root/${CGROUP_FILE}" 2>/dev/null
  done <&3
  exec 3<&-
  set -e
  cd /
  umount "${MOUNTPOINT}"
}

MOUNTPOINT=/sys/fs/cgroup
CGROUP_FILE=tasks

# Legacy name=systemd hierarchy
mount -t cgroup -o none,name=systemd none "${MOUNTPOINT}"
move_tasks

# Legacy memory controller, with hierarchical accounting enabled
mount -t cgroup -o memory none "${MOUNTPOINT}"
cd "${MOUNTPOINT}"
echo 1 > memory.use_hierarchy
move_tasks

# Unified (cgroup2) hierarchy
mount -t cgroup2 none "${MOUNTPOINT}"
CGROUP_FILE=cgroup.procs
move_tasks
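
(The memcached side is not part of the snippet above; a minimal sketch of the
companion step that would run later, once memcached is up. The paths and the
use of pidof are assumptions, not part of the original setup.)

# Hypothetical companion step: give memcached its own sibling cgroup so
# it escapes the aggregate limit applied to "root".
mkdir -p /sys/fs/cgroup/memory/memcached
for pid in $(pidof memcached); do
  echo "$pid" > /sys/fs/cgroup/memory/memcached/cgroup.procs
done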