Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-08-18 Thread Dimitri John Ledkov
On Mon, 7 Aug 2023 at 17:26, Lennart Poettering  wrote:
>
> On Do, 20.07.23 01:59, Dimitri John Ledkov (dimitri.led...@canonical.com) 
> wrote:
>
> > Some deployments that switch back their modern v2 host to hybrid or v1, are
> > the ones that need to run old workloads that contain old systemd. Said old
> > systemd only has experimental incomplete v2 support that doesn't work with
> > v2-only (the one before current stable magick mount value).
>
> What's stopping you from mounting a private "named" cgroup v1
> hierarchy to such containers (i.e. no controllers). systemd will then
> use that when taking over and not bother with mounting anything on its
> own, such as a cgroupv2 tree.
>
> that should be enough to make old systemd happy.
>

Let me see if I can create all the right config files to attempt that.
I have some other constraints which may prevent this, but hopefully I
can provide sufficient workarounds to get this over the line.
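For the record, a minimal sketch of what that pre-setup might look like, following the suggestion of a named v1 hierarchy with no controllers (the mount path and image tag here are placeholders, and the exact flags depend on the container manager in use):

```shell
# Host side: mount a "named" cgroup v1 hierarchy with no controllers attached.
sudo mkdir -p /tmp/cgv1-systemd
sudo mount -t cgroup -o none,name=systemd cgroup /tmp/cgv1-systemd

# Container side: bind it to the path old systemd expects, e.g. with docker:
sudo docker run -d \
  -v /tmp/cgv1-systemd:/sys/fs/cgroup/systemd \
  ubuntu:16.04 /sbin/init
```

With that in place, the container's systemd should find a writable name=systemd hierarchy and not attempt to mount a cgroupv2 tree of its own.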


--
okurrr,

Dimitri


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-08-18 Thread Lewis Gaul
> What's stopping you from mounting a private "named" cgroup v1
> hierarchy to such containers (i.e. no controllers). systemd will then
> use that when taking over and not bother with mounting anything on its
> own, such as a cgroupv2 tree.

We specifically want to be able to make use of cgroup controllers within
the container. One example of this would be to use "MemoryLimit" (cgroupv1)
for a systemd unit (I understand this is deprecated in the latest versions
of systemd, but as far as I can see we wouldn't be able to use the cgroupv2
"MemoryMax" config in this scenario anyway).

> You are doing something half broken and
> outside of the intended model already, I am not sure we need to go the
> extra mile to support this for longer.

I'm slightly surprised and disheartened by this viewpoint. I have paid
close attention to https://systemd.io/CONTAINER_INTERFACE/ and
https://systemd.io/CGROUP_DELEGATION/, and I'd interpreted them as
saying that running systemd in a container should be fully supported (not
only on cgroupsv2, at least for recent-but-not-latest systemd versions).

In particular, the following:

"Note that it is our intention to make systemd systems work flawlessly and
out-of-the-box in containers. In fact, we are interested to ensure that the
same OS image can be booted on a bare system, in a VM and in a container,
and behave correctly each time. If you notice that some component in
systemd does not work in a container as it should, even though the
container manager implements everything documented above, please contact
us."

"When systemd runs as container payload it will make use of all hierarchies
it has write access to. For legacy mode you need to make at least
/sys/fs/cgroup/systemd/ available, all other hierarchies are optional."

I note that point 6 under "Some Don'ts" does correlate with what you're
saying:
"Think twice before delegating cgroup v1 controllers to less privileged
containers. It’s not safe, you basically allow your containers to freeze
the system with that and worse."
However, in our case we're talking about a privileged container, so this
doesn't really apply.

I think there's a definite use-case here, and unfortunately when systemd
drops support for cgroupsv1 I think this will just mean we'll be unable to
upgrade the container's systemd version until all relevant hosts use
cgroupsv2 by default (probably a couple of years away).

Thanks for your time,
Lewis


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-08-07 Thread Lennart Poettering
On Do, 20.07.23 01:59, Dimitri John Ledkov (dimitri.led...@canonical.com) wrote:

> Some deployments that switch back their modern v2 host to hybrid or v1, are
> the ones that need to run old workloads that contain old systemd. Said old
> systemd only has experimental incomplete v2 support that doesn't work with
> v2-only (the one before current stable magick mount value).

What's stopping you from mounting a private "named" cgroup v1
hierarchy to such containers (i.e. no controllers)? systemd will then
use that when taking over and not bother with mounting anything on its
own, such as a cgroupv2 tree.

That should be enough to make old systemd happy.

Lennart

--
Lennart Poettering, Berlin


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-08-07 Thread Lennart Poettering
On Mi, 19.07.23 10:23, Lewis Gaul (lewis.g...@gmail.com) wrote:

> Hi Lennart, all,
>
> TL;DR: A container making use of cgroup controllers must use the same
> cgroup version as the host,

Controllers on cgroupv1 are not safely delegatable. If you did, then
this is highly problematic anyway, as you give containers the ability to
hang the whole system. Moreover, many controllers are not actually
recursive on cgroupsv1 (cpuset, …), hence totally wrong to delegate.

The kernel never supported that and we explicitly never supported that
in systemd, and documented as much. If you ignore that and delegate anyway,
then this leaves me kinda indifferent to your situation...

You can safely delegate named hierarchies (i.e. not controller
hierarchies) on cgroupsv1, hence that is what I'd recommend you do.

> Does this make sense as a use-case and motivation for wanting new systemd
> versions to continue supporting cgroups v1? Of course not forever, but
> until there are fewer hosts out there using cgroups v1.

I am not too impressed tbh. You are doing something half broken and
outside of the intended model already, I am not sure we need to go the
extra mile to support this for longer.

Lennart

--
Lennart Poettering, Berlin


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Dimitri John Ledkov
Some deployments that switch back their modern v2 host to hybrid or v1, are
the ones that need to run old workloads that contain old systemd. Said old
systemd only has experimental incomplete v2 support that doesn't work with
v2-only (the one before current stable magick mount value).

Specifically, that is trying to run systemd v229 in a container:

https://bugs.launchpad.net/ubuntu/xenial/+source/systemd/+bug/1962332

When cgroupsv2 was added to the kernel matters less than when systemd
started to correctly support cgroupsv2.
https://github.com/systemd/systemd/commit/099619957a0/

This shipped in v230 in May 2016. I tried to backport this to v229 and
make it work in a container on an otherwise v2-only host, but it still
failed to start for me.

v230 was one month too late, and hence v229 shipped in Xenial Ubuntu 16.04
LTS, which will be supported through to 2026, including as a container on
newer hosts. For now that only works if the host is in hybrid or v1 mode.

To me, six years of support is too short for the case of an old container
on a new host.

I wish to resolve the inability of v229 to start as a container on a
v2-only host, and am open to shipping any minimal backport fix to unblock this.

The inverse problem of running newer containers on older systems also
exists, but usually such deployments find a way to also get newer hosts
easily.

Has anyone else managed to run v229 in a container on a v2-only host?





Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Daniel Walsh

On 7/19/23 08:59, Neal Gompa wrote:
> If it's not mentioned, it's probably not supported. And especially
> with managed Kubernetes, it's pretty rare to allow such kind of
> configuration changes.

I believe it is definitely time to remove support for it. Cgroup v1 never
worked well, and this is a chance to move forward with a well-supported
solution.




Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Neal Gompa
On Wed, Jul 19, 2023 at 8:52 AM Luca Boccassi  wrote:
>
>
> Are you sure that in all those cases it's really not supported at all,
> vs simply not being the default configuration that can be changed with
> a toggle?

If it's not mentioned, it's probably not supported. And especially
with managed Kubernetes, it's pretty rare to allow that kind of
configuration change.




--
真実はいつも一つ!/ Always, there's only one truth!


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Luca Boccassi
On Wed, 19 Jul 2023 at 13:45, Neal Gompa  wrote:
>
>
> The main concern I have about cgroup v1 removal is that some major
> Kubernetes distributions don't support cgroup v2 yet. Upstream
> Kubernetes only started fully supporting cgroup v2 with Kubernetes
> 1.25, as noted in their documentation:
> https://kubernetes.io/docs/concepts/architecture/cgroups/
>
> OpenShift just added support in 4.13 (but didn't enable it by default
> yet): https://cloud.redhat.com/blog/cgroups-v2-goes-ga-in-openshift-4.13
>
> AKS seems to only support cgroup v2 with Ubuntu 22.04:
> https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli#aks-components-breaking-changes-by-version
>
> (No mention of Azure Linux? I'm pretty sure CBL-Mariner is cgroup v2 only)
>
> It is unclear whether EKS supports cgroup v2 at all (I suspect not,
> since EKS doesn't yet run on Amazon Linux 2023):
> https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html
>
> It is similarly unclear with GKE:
> https://cloud.google.com/kubernetes-engine/docs/concepts/node-images
>
> > (The version of Ubuntu is not mentioned in the documentation; if it's
> > not new enough, it's still cgroup v1)
>
> DigitalOcean Kubernetes Service (DOKS) is still cgroup v1:
> https://docs.digitalocean.com/products/kubernetes/details/changelog/
>
> Linode Kubernetes Engine (LKE) is still cgroup v1:
> https://www.linode.com/docs/products/compute/kubernetes/release-notes/
>
> It is possible that systemd's deprecation will push things over the
> edge, but I wanted to make sure people are aware of this.

Are you sure that in all those cases it's really not supported at all,
vs simply not being the default configuration that can be changed with
a toggle?


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Neal Gompa
On Thu, Jul 21, 2022 at 6:15 AM Lennart Poettering
 wrote:
>
> Heya!
>
> It's currently a terrible mess having to support both cgroupsv1 and
> cgroupsv2 in our codebase.
>
> cgroupsv2 first entered the kernel in 2014, i.e. *eight* years ago
> (kernel 3.16). We soon intend to raise the baseline for systemd to
> kernel 4.3 (because we want to be able to rely on the existence of
> ambient capabilities), but that also means, that all kernels we intend
> to support have a well-enough working cgroupv2 implementation.
>
> hence, i'd love to drop the cgroupv1 support from our tree entirely,
> and simplify and modernize our codebase to go cgroupv2-only. Before we
> do that I'd like to seek feedback on this though, given this is not
> purely a thing between the kernel and systemd — this does leak into
> some userspace, that operates on cgroups directly.
>
> Specifically, legacy container infra (i.e. docker/moby) for the
> longest time was cgroupsv1-only. But as I understand it has since been
> updated, to cgroupsv2 too.
>
> Hence my question: is there a strong community of people who insist on
> using newest systemd while using legacy container infra? Anyone else
> has a good reason to stick with cgroupsv1 but really wants newest
> systemd?
>
> The time where we'll drop cgroupv1 support *will* come eventually
> either way, but what's still up for discussion is to determine
> precisely when. hence, please let us know!
>

The main concern I have about cgroup v1 removal is that some major
Kubernetes distributions don't support cgroup v2 yet. Upstream
Kubernetes only started fully supporting cgroup v2 with Kubernetes
1.25, as noted in their documentation:
https://kubernetes.io/docs/concepts/architecture/cgroups/

OpenShift just added support in 4.13 (but didn't enable it by default
yet): https://cloud.redhat.com/blog/cgroups-v2-goes-ga-in-openshift-4.13

AKS seems to only support cgroup v2 with Ubuntu 22.04:
https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli#aks-components-breaking-changes-by-version

(No mention of Azure Linux? I'm pretty sure CBL-Mariner is cgroup v2 only)

It is unclear whether EKS supports cgroup v2 at all (I suspect not,
since EKS doesn't yet run on Amazon Linux 2023):
https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html

It is similarly unclear with GKE:
https://cloud.google.com/kubernetes-engine/docs/concepts/node-images

(The version of Ubuntu is not mentioned in the documentation; if it's
not new enough, it's still cgroup v1)

DigitalOcean Kubernetes Service (DOKS) is still cgroup v1:
https://docs.digitalocean.com/products/kubernetes/details/changelog/

Linode Kubernetes Engine (LKE) is still cgroup v1:
https://www.linode.com/docs/products/compute/kubernetes/release-notes/

It is possible that systemd's deprecation will push things over the
edge, but I wanted to make sure people are aware of this.





--
真実はいつも一つ!/ Always, there's only one truth!


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Luca Boccassi
On Wed, 19 Jul 2023 at 11:46, Lewis Gaul  wrote:
>
> Hi Luca,
>
> > All the distributions you quoted above support cgroupv2 to the best of
> > my knowledge, it simply has to be enabled at boot. Why isn't that
> > sufficient?
>
> As I said in my previous email:
>
> > in the case of it being a systemd container on an arbitrary host then a 
> > lack of cgroup v1 support from systemd would place a cgroup v2 requirement 
> > on the host, which is an undesirable property of a container.
>
> and
>
> > we are not in a position to require the end-user to reconfigure their host 
> > to enable running our container.

What's the problem with that? You will already have _some_
requirements; just add a new one. It's just a configuration change.


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Lewis Gaul
Hi Luca,

> All the distributions you quoted above support cgroupv2 to the best of
> my knowledge, it simply has to be enabled at boot. Why isn't that
> sufficient?

As I said in my previous email:

> in the case of it being a systemd container on an arbitrary host then a
lack of cgroup v1 support from systemd would place a cgroup v2 requirement
on the host, which is an undesirable property of a container.

and

> we are not in a position to require the end-user to reconfigure their
host to enable running our container.

Regards,
Lewis



Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Luca Boccassi
On Wed, 19 Jul 2023 at 11:30, Lewis Gaul  wrote:
>
> Hi Lennart, all,
>
> TL;DR: A container making use of cgroup controllers must use the same cgroup 
> version as the host, and in the case of it being a systemd container on an 
> arbitrary host then a lack of cgroup v1 support from systemd would place a 
> cgroup v2 requirement on the host, which is an undesirable property of a 
> container.
>
> I can totally understand the desire to simplify the codebase/support matrix, 
> and appreciate this response is coming quite late (almost a year since 
> cgroups v1 was noted as a future deprecation in systemd). However, I wanted 
> to share a use-case/argument for keeping cgroups v1 support a little longer 
> in case it may impact the decision at all.
>
> At my $work we provide a container image to customers, where the container 
> runs using systemd as the init system. The end-user has some freedom on 
> how/where to run this container, e.g. using docker/podman on a host of their 
> choice, or in Kubernetes (e.g. EKS in AWS).
>
> Of course there are bounds on what we officially support, but generally we 
> would like to support recent LTS releases of major distros, currently 
> including Ubuntu 20.04, Ubuntu 22.04, RHEL 8, RHEL 9, Amazon Linux 2 (EKS 
> doesn’t yet support Amazon Linux 2023). Of these, only Ubuntu 22.04 and RHEL 
> 9 have switched to using cgroups v2 by default, and we are not in a position 
> to require the end-user to reconfigure their host to enable running our 
> container. What’s more, since we make use of cgroup controllers inside the 
> container, we cannot have cgroup v1 controllers enabled on the host while 
> attempting to use cgroups v2 inside the container.
>
> > Because of that I see no reason why old systemd cgroupv1 payloads
> > shouldn't just work on cgroupv2 hosts: as long as you give them a
> > pre-set-up cgroupv1 environment, and nothing stops you from doing
> > that. In fact, this is something we even documented somewhere: what to
> > do if the host only does a subset of the cgroup stuff you want, and
> > what you have to do to set up the other stuff (i.e. if host doesn't
> > manage your hierarchy of choice, but only others, just follow the same
> > structure in the other hierarchy, and clean up after yourself). This
> > is what nspawn does: if host is cgroupv2 only it will set up
> > name=systemd hierarchy in cgroupv1 itself, and pass that to the
> > container.
>
> I don't think this works for us since we need the full cgroup (v1/v2) 
> filesystem available in the container, with controllers enabled.
>
> This means that we must, for now, continue to support cgroups v1 in our 
> container image. If systemd were to drop support for cgroups v1 then we may 
> find ourselves in an awkward position of not being able to upgrade to this 
> new systemd version, or be forced to pass this restriction on to end-users. 
> The reason we’re uncomfortable about insisting on the use of cgroups v2 is 
> that as a container app we ideally wouldn’t place such requirements on the 
> host.
>
> So, while it's true that the container ecosystem does now largely support 
> cgroups v2, there is still an aspect of caring about what the host is 
> running, which from our perspective should be assumed to be the default 
> configuration for the chosen distro. With this in mind, we’d ideally like to 
> have systemd support cgroups v1 a little longer than the end of this year.
>
> Does this make sense as a use-case and motivation for wanting new systemd 
> versions to continue supporting cgroups v1? Of course not forever, but until 
> there are fewer hosts out there using cgroups v1.

All the distributions you quoted above support cgroupv2 to the best of
my knowledge, it simply has to be enabled at boot. Why isn't that
sufficient?

Kind regards,
Luca Boccassi


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-07-19 Thread Lewis Gaul
Hi Lennart, all,

TL;DR: A container making use of cgroup controllers must use the same
cgroup version as the host, and in the case of it being a systemd container
on an arbitrary host then a lack of cgroup v1 support from systemd would
place a cgroup v2 requirement on the host, which is an undesirable property
of a container.

I can totally understand the desire to simplify the codebase/support
matrix, and appreciate this response is coming quite late (almost a year
since cgroups v1 was noted as a future deprecation in systemd). However, I
wanted to share a use-case/argument for keeping cgroups v1 support a little
longer in case it may impact the decision at all.

At my $work we provide a container image to customers, where the container
runs using systemd as the init system. The end-user has some freedom on
how/where to run this container, e.g. using docker/podman on a host of
their choice, or in Kubernetes (e.g. EKS in AWS).

Of course there are bounds on what we officially support, but generally we
would like to support recent LTS releases of major distros, currently
including Ubuntu 20.04, Ubuntu 22.04, RHEL 8, RHEL 9, Amazon Linux 2 (EKS
doesn’t yet support Amazon Linux 2023). Of these, only Ubuntu 22.04 and
RHEL 9 have switched to using cgroups v2 by default, and we are not in a
position to require the end-user to reconfigure their host to enable
running our container. What’s more, since we make use of cgroup controllers
inside the container, we cannot have cgroup v1 controllers enabled on the
host while attempting to use cgroups v2 inside the container.
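To illustrate the coupling: the host's cgroup setup can be classified from its mount table, and a systemd container payload has to match it. A rough sketch (illustrative only, not code from our product — systemd itself probes via statfs() magic numbers rather than parsing /proc/mounts):

```python
def cgroup_mode(mounts_text):
    """Classify a cgroup setup as "unified" (v2-only), "hybrid"
    (v1 controllers with v2 mounted alongside) or "legacy" (v1-only)
    from the contents of /proc/mounts."""
    fstype = {}
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) >= 3:
            # /proc/mounts fields: device, mountpoint, fstype, options, ...
            fstype.setdefault(parts[1], parts[2])
    if fstype.get("/sys/fs/cgroup") == "cgroup2":
        return "unified"
    if fstype.get("/sys/fs/cgroup/unified") == "cgroup2":
        return "hybrid"
    return "legacy"

# Abridged example mount tables:
V2_ONLY = "cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid 0 0"
HYBRID = ("tmpfs /sys/fs/cgroup tmpfs rw 0 0\n"
          "cgroup2 /sys/fs/cgroup/unified cgroup2 rw 0 0\n"
          "cgroup /sys/fs/cgroup/memory cgroup rw,memory 0 0")

print(cgroup_mode(V2_ONLY))  # unified
print(cgroup_mode(HYBRID))   # hybrid
```

On a live host the same classification can be run against `open("/proc/mounts").read()`.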

> Because of that I see no reason why old systemd cgroupv1 payloads
> shouldn't just work on cgroupv2 hosts: as long as you give them a
> pre-set-up cgroupv1 environment, and nothing stops you from doing
> that. In fact, this is something we even documented somewhere: what to
> do if the host only does a subset of the cgroup stuff you want, and
> what you have to do to set up the other stuff (i.e. if host doesn't
> manage your hierarchy of choice, but only others, just follow the same
> structure in the other hierarchy, and clean up after yourself). This
> is what nspawn does: if host is cgroupv2 only it will set up
> name=systemd hierarchy in cgroupv1 itself, and pass that to the
> container.

I don't think this works for us since we need the full cgroup
(v1/v2) filesystem available in the container, with controllers enabled.
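For concreteness on what "with controllers enabled" means on the v2 side: a manager has to write "+controller" tokens into a cgroup's cgroup.subtree_control before children can use them. A hedged sketch (the helper names and paths are illustrative, not from our actual tooling):

```python
def subtree_control_payload(controllers):
    """Build the string written into a cgroup v2 directory's
    cgroup.subtree_control file to enable controllers for its children."""
    return " ".join(f"+{c}" for c in controllers)

def enable_controllers(cgroup_dir, controllers=("cpu", "memory", "pids")):
    """Illustrative only: needs root and a mounted v2 hierarchy, and the
    controllers must already be enabled in every ancestor cgroup."""
    with open(f"{cgroup_dir}/cgroup.subtree_control", "w") as f:
        f.write(subtree_control_payload(controllers))

print(subtree_control_payload(["cpu", "memory"]))  # +cpu +memory
```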

This means that we must, for now, continue to support cgroups v1 in our
container image. If systemd were to drop support for cgroups v1 then we may
find ourselves in an awkward position of not being able to upgrade to this
new systemd version, or be forced to pass this restriction on to end-users.
The reason we’re uncomfortable about insisting on the use of cgroups v2 is
that as a container app we ideally wouldn’t place such requirements on the
host.

So, while it's true that the container ecosystem does now largely support
cgroups v2, there is still an aspect of caring about what the host is
running, which from our perspective should be assumed to be the
default configuration for the chosen distro. With this in mind, we’d
ideally like to have systemd support cgroups v1 a little longer than the
end of this year.

Does this make sense as a use-case and motivation for wanting new systemd
versions to continue supporting cgroups v1? Of course not forever, but
until there are fewer hosts out there using cgroups v1.

Best wishes,
Lewis

On Fri, 22 Jul 2022 at 11:15, Lennart Poettering 
wrote:

> On Do, 21.07.22 16:24, Stéphane Graber (stgra...@ubuntu.com) wrote:
>
> > Hey there,
> >
> > I believe Christian may have relayed some of this already but on my
> > side, as much as I can sympathize with the annoyance of having to
> > support both cgroup1 and cgroup2 side by side, I feel that we're sadly
> > nowhere near the cut off point.
> >
> > From what I can gather from various stats we have, over 90% of LXD
> > users are still on distributions relying on CGroup1.
> > That's because most of them are using LTS releases of server
> > distributions and those only somewhat recently made the jump to
> > cgroup2:
> >  - RHEL 9 in May 2022
> >  - Ubuntu 22.04 LTS in April 2022
> >  - Debian 11 in August 2021
> >
> > OpenSUSE is still on cgroup1 by default in 15.4 for some reason.
> > All this is also excluding our two largest users, Chromebooks and QNAP
> > NASes, neither of them made the switch yet.
>
> At some point I feel no sympathy there. If google/qnap/suse still are
> stuck in cgroupv1 land, then that's on them, we shouldn't allow
> ourselves to be held hostage by that.
>
> I mean, that Google isn't forward looking in these things is well
> known, but I am a bit surprised SUSE is still so far back.
>
> > I honestly wouldn't hold off deprecating cgroup1 waiting for
> > those few to wake up and transition.
> > Both ChromeOS and QNAP can very quickly roll it out to all their users
> > should they want to.
> > It's a bit 

[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 22.07.2022 um 17:35
in
Nachricht :
> On Fr, 22.07.22 12:15, Lennart Poettering (mzerq...@0pointer.de) wrote:
> 
>> > I guess that would mean holding on to cgroup1 support until EOY 2023
>> > or thereabout?
>>
>> That does sound OK to me. We can mark it deprecated before though,
>> i.e. generate warnings, and remove it from docs, as long as the actual
>> code stays around until then.

I would not remove it from the docs, but declare it obsolete/deprecated
instead.
I think "undocumented" features are a bad thing.

> 
> So I prepped a PR now that documents the EOY 2023 date:
> 
> https://github.com/systemd/systemd/pull/24086 
> 
> That way we shouldn't forget about this, and it will remind us that we
> still actually need to do it then.
> 
> Lennart
> 
> ‑‑
> Lennart Poettering, Berlin





[systemd-devel] Antw: [EXT] Re: [systemd‑devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-28 Thread Ulrich Windl
>>> Lennart Poettering  schrieb am 22.07.2022 um 12:15
in
Nachricht :
> On Do, 21.07.22 16:24, Stéphane Graber (stgra...@ubuntu.com) wrote:
> 
>> Hey there,
>>
>> I believe Christian may have relayed some of this already but on my
>> side, as much as I can sympathize with the annoyance of having to
>> support both cgroup1 and cgroup2 side by side, I feel that we're sadly
>> nowhere near the cut off point.
>>
>> From what I can gather from various stats we have, over 90% of LXD
>> users are still on distributions relying on CGroup1.
>> That's because most of them are using LTS releases of server
>> distributions and those only somewhat recently made the jump to
>> cgroup2:
>>  ‑ RHEL 9 in May 2022
>>  ‑ Ubuntu 22.04 LTS in April 2022
>>  ‑ Debian 11 in August 2021
>>
>> OpenSUSE is still on cgroup1 by default in 15.4 for some reason.
>> All this is also excluding our two largest users, Chromebooks and QNAP
>> NASes, neither of them made the switch yet.
> 
> At some point I feel no sympathy there. If google/qnap/suse still are
> stuck in cgroupv1 land, then that's on them, we shouldn't allow
> ourselves to be held hostage by that.
> 
> I mean, that Google isn't forward looking in these things is well
> known, but I am a bit surprised SUSE is still so far back.

Well, openSUSE actually is rather equivalent to SLES15 (which exists for some
years now).
I guess they didn't want to switch within a major release.
Everybody is free to file an "enhancement" request, at opensuse's bugzilla,
however.
...

Regards,
Ulrich




Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-22 Thread Lennart Poettering
On Fr, 22.07.22 12:15, Lennart Poettering (mzerq...@0pointer.de) wrote:

> > I guess that would mean holding on to cgroup1 support until EOY 2023
> > or thereabout?
>
> That does sound OK to me. We can mark it deprecated before though,
> i.e. generate warnings, and remove it from docs, as long as the actual
> code stays around until then.

So I prepped a PR now that documents the EOY 2023 date:

https://github.com/systemd/systemd/pull/24086

That way we shouldn't forget about this, and it will remind us that we
still actually need to do it then.

Lennart

--
Lennart Poettering, Berlin


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-22 Thread Lennart Poettering
On Fr, 22.07.22 12:37, Wols Lists (antli...@youngman.org.uk) wrote:

> On 22/07/2022 11:15, Lennart Poettering wrote:
> > > I guess that would mean holding on to cgroup1 support until EOY 2023
> > > or thereabout?
>
> > That does sound OK to me. We can mark it deprecated before though,
> > i.e. generate warnings, and remove it from docs, as long as the actual
> > code stays around until then.
>
> You've probably thought of this sort of thing already, but can you wrap all
> v1-specific code in #ifdefs? Especially if it's inside an if block, the
> compiler can then optimise the test away if you compile with that set to
> false.
>
> Upstream can then set the default to false, while continuing to support it,
> but it will then become more and more a conscious effort on the part of
> downstream to keep it working.
>
> Once it's visibly bit-rotting you can dump it :-)

The goal really is to reduce code size, not to increase it further by
having to maintain a ton of ifdeffery all over the place.

we generally frown on ifdeffery in "main" code already, i.e. we try to
isolate ifdeffery into "library" calls that hide it internally, and then
return EOPNOTSUPP if something is compiled out. That way the "main"
code can then treat compiled out stuff via usual error handling,
greatly simplifying conditionalizations and the combinatorial
explosion from having many optional deps.
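(A transliteration of that pattern into a short Python sketch, names hypothetical, just to make the shape concrete: the build-time switch lives inside one "library" call, and callers treat an absent feature as an ordinary error code rather than ifdef-ing the call site.)

```python
import errno

HAVE_CGROUPV1 = False  # stand-in for a build-time option, e.g. -Dcgroupv1=false

def cg_v1_attach(hierarchy, pid):
    """Library call hiding the conditional: returns 0 on success or a
    negative errno; -EOPNOTSUPP when the feature is compiled out."""
    if not HAVE_CGROUPV1:
        return -errno.EOPNOTSUPP
    # ... real v1 attach logic would live here ...
    return 0

# "Main" code: plain error handling, no conditional compilation in sight.
r = cg_v1_attach("name=systemd", 1)
if r == -errno.EOPNOTSUPP:
    print("cgroup v1 support compiled out; falling back")
```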

ifdeffery comes at a price, and is very hard to test for (because CIs
do not test in all combinations of present and absent optional deps),
hence the goal should be to minimize, isolate it, not emphasize it and
sprinkle it over the whole codebase as if it was candy.

Lennart

--
Lennart Poettering, Berlin


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-22 Thread Wols Lists

On 22/07/2022 11:15, Lennart Poettering wrote:

> > I guess that would mean holding on to cgroup1 support until EOY 2023
> > or thereabout?
>
> That does sound OK to me. We can mark it deprecated before though,
> i.e. generate warnings, and remove it from docs, as long as the actual
> code stays around until then.


You've probably thought of this sort of thing already, but can you wrap 
all v1-specific code in #ifdefs? Especially if it's inside an if block, 
the compiler can then optimise the test away if you compile with that 
set to false.


Upstream can then set the default to false, while continuing to support 
it, but it will then become more and more a conscious effort on the part 
of downstream to keep it working.


Once it's visibly bit-rotting you can dump it :-)

Cheers,
Wol


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-22 Thread Lennart Poettering
On Do, 21.07.22 16:24, Stéphane Graber (stgra...@ubuntu.com) wrote:

> Hey there,
>
> I believe Christian may have relayed some of this already but on my
> side, as much as I can sympathize with the annoyance of having to
> support both cgroup1 and cgroup2 side by side, I feel that we're sadly
> nowhere near the cut off point.
>
> From what I can gather from various stats we have, over 90% of LXD
> users are still on distributions relying on CGroup1.
> That's because most of them are using LTS releases of server
> distributions and those only somewhat recently made the jump to
> cgroup2:
>  - RHEL 9 in May 2022
>  - Ubuntu 22.04 LTS in April 2022
>  - Debian 11 in August 2021
>
> OpenSUSE is still on cgroup1 by default in 15.4 for some reason.
> All this is also excluding our two largest users, Chromebooks and QNAP
> NASes, neither of them made the switch yet.

At some point I feel no sympathy there. If google/qnap/suse still are
stuck in cgroupv1 land, then that's on them, we shouldn't allow
ourselves to be held hostage by that.

I mean, that Google isn't forward looking in these things is well
known, but I am a bit surprised SUSE is still so far back.

> I honestly wouldn't hold off deprecating cgroup1 waiting for
> those few to wake up and transition.
> Both ChromeOS and QNAP can very quickly roll it out to all their users
> should they want to.
> It's a bit trickier for OpenSUSE as it's used as the basis for SLES
> and so those enterprise users are unlikely to see cgroup2 any time
> soon.
>
> Now all of this is a problem because:
>  - Our users are slow to upgrade. It's common for them to skip an
> entire LTS release and those that upgrade every time will usually wait
> 6 months to a year prior to upgrading to a new release.
>  - This deprecation would prevent users of anything but the most
> recent release from running any newer containers. As it's common to
> switch to newer containers before upgrading the host, this would cause
> some issues.
>  - Unfortunately the reverse is a problem too. RHEL 7 and derivatives
> are still very common as a container workload, as is Ubuntu 16.04 LTS.
> Unfortunately those releases ship with a systemd version that does not
> boot under cgroup2.

Hmm, cgroupv1 named hierarchies should still be available even on
cgroupv2 hosts. I am pretty sure nspawn at least should have no
problem with running old cgroupv1 payloads on a cgroupv2 host.

Isn't this issue just an artifact of the fact that LXD doesn't
pre-mount cgroupfs? Or does it do so these days? because systemd's
PID1 since time began would just use the cgroup setup it finds itself
in if it's already mounted/set up. And only mount and make a choice
between cgroup1 or cgroupv2 if there's really nothing set up so far.

Because of that I see no reason why old systemd cgroupv1 payloads
shouldn't just work on cgroupv2 hosts: as long as you give them a
pre-set-up cgroupv1 environment, and nothing stops you from doing
that. In fact, this is something we even documented somewhere: what to
do if the host only does a subset of the cgroup stuff you want, and
what you have to do to set up the other stuff (i.e. if host doesn't
manage your hierarchy of choice, but only others, just follow the same
structure in the other hierarchy, and clean up after yourself). This
is what nspawn does: if host is cgroupv2 only it will set up
name=systemd hierarchy in cgroupv1 itself, and pass that to the
container.
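For the archives, the named-hierarchy setup described above boils down to a single controller-less mount. A sketch that builds the equivalent mount(8) invocation (running it needs root and a kernel with cgroup v1 still enabled; the helper is illustrative, not nspawn's actual code):

```python
def named_v1_mount_cmd(name="systemd", root="/sys/fs/cgroup"):
    """Build the mount(8) command line for a controller-less ("named")
    cgroup v1 hierarchy: "none" attaches no controllers, and "name="
    labels the hierarchy so an old systemd payload recognises it."""
    return ["mount", "-t", "cgroup",
            "-o", f"none,name={name}", "cgroup", f"{root}/{name}"]

print(" ".join(named_v1_mount_cmd()))
# mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
```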

(I mean, we might have regressed on this, since i guess this kind of
setup is not as well tested with nspawn, but I distinctly remember
that I wrote that stuff once upon a time, and it worked fine then.)

> That last issue has been biting us a bit recently but it's something
> that one can currently workaround by forcing systemd back into hybrid
> mode on the host.

This should not be necessary, if LXD would do minimal cgroup setup on
its own.

> With the deprecation of cgroup1, this won't be possible anymore. You
> simply won't be able to have both CentOS7 and Fedora XYZ running in
> containers on the same system as one will only work on cgroup1 and the
> other only on cgroup2.

I am pretty sure this works fine with nspawn...

> I guess that would mean holding on to cgroup1 support until EOY 2023
> or thereabout?

That does sound OK to me. We can mark it deprecated before though,
i.e. generate warnings, and remove it from docs, as long as the actual
code stays around until then.

Thank you, for the input,

Lennart

--
Lennart Poettering, Berlin


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-22 Thread Lennart Poettering
On Do, 21.07.22 11:55, Christian Brauner (brau...@kernel.org) wrote:

> In general, I wouldn't mind dropping cgroup1 support in the future.
>
> The only thing I immediately kept thinking about is what happens to
> workloads that have a v1 cgroup layout on the host possibly with an
> older systemd running container workloads using a newer distro with a
> systemd version without cgroup1 support.
>
> Think Ubuntu 18.04 host running a really new Ubuntu LTS that has a
> version of systemd with cgroup1 support already dropped. People do
> actually do stuff like that. Stéphane and Serge might know more about
> actual use-cases in that area.

The question is though how much can we get away with at that
front. i.e. I think we can all agree that if you attempt to run an
extremely new container on an extremely old host is something we
really don't have to support, once the age difference is beyond some
boundary. The question is where that boundary is.

Much the same way as we have a baseline on kernel versions systemd
supports (currently 3.15, soon 4.5), we probably should start to
define a baseline of what to expect from a container manager.

Lennart

--
Lennart Poettering, Berlin


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-21 Thread Stéphane Graber
Hey there,

I believe Christian may have relayed some of this already but on my
side, as much as I can sympathize with the annoyance of having to
support both cgroup1 and cgroup2 side by side, I feel that we're sadly
nowhere near the cut off point.

From what I can gather from various stats we have, over 90% of LXD
users are still on distributions relying on CGroup1.
That's because most of them are using LTS releases of server
distributions and those only somewhat recently made the jump to
cgroup2:
 - RHEL 9 in May 2022
 - Ubuntu 22.04 LTS in April 2022
 - Debian 11 in August 2021

OpenSUSE is still on cgroup1 by default in 15.4 for some reason.
All this is also excluding our two largest users, Chromebooks and QNAP
NASes, neither of them made the switch yet.

I honestly wouldn't hold off deprecating cgroup1 waiting for
those few to wake up and transition.
Both ChromeOS and QNAP can very quickly roll it out to all their users
should they want to.
It's a bit trickier for OpenSUSE as it's used as the basis for SLES
and so those enterprise users are unlikely to see cgroup2 any time
soon.

Now all of this is a problem because:
 - Our users are slow to upgrade. It's common for them to skip an
entire LTS release and those that upgrade every time will usually wait
6 months to a year prior to upgrading to a new release.
 - This deprecation would prevent users of anything but the most
recent release from running any newer containers. As it's common to
switch to newer containers before upgrading the host, this would cause
some issues.
 - Unfortunately the reverse is a problem too. RHEL 7 and derivatives
are still very common as a container workload, as is Ubuntu 16.04 LTS.
Unfortunately those releases ship with a systemd version that does not
boot under cgroup2.

That last issue has been biting us a bit recently but it's something
that one can currently workaround by forcing systemd back into hybrid
mode on the host.
With the deprecation of cgroup1, this won't be possible anymore. You
simply won't be able to have both CentOS7 and Fedora XYZ running in
containers on the same system as one will only work on cgroup1 and the
other only on cgroup2.

Now this doesn't bother me at all for anything that's end of life, but
RHEL 7 is only reaching EOL in June 2024 and while Ubuntu 16.04 is
officially EOL, Canonical provides extended support (ESM) on it until
April 2026.


So given all that, my 2 cents would be that ideally systemd should
keep supporting cgroup1 until June 2024 or shortly before that given
the usual lag between releasing systemd and it being adopted by Linux
distros. This would allow for most distros to have made it through two
long term releases shipping with cgroup2, making sure the vast
majority of users will finally be on cgroup2 and will also allow for
those cgroup1-only workloads to have gone away.

I guess that would mean holding on to cgroup1 support until EOY 2023
or thereabout?

Stéphane

On Thu, Jul 21, 2022 at 5:55 AM Christian Brauner  wrote:
>
> [Cc Stéphane and Serge]
>
> On Thu, Jul 21, 2022 at 11:03:49AM +0200, Lennart Poettering wrote:
> > Heya!
> >
> > It's currently a terrible mess having to support both cgroupsv1 and
> > cgroupsv2 in our codebase.
> >
> > cgroupsv2 first entered the kernel in 2014, i.e. *eight* years ago
> > (kernel 3.16). We soon intend to raise the baseline for systemd to
> > kernel 4.3 (because we want to be able to rely on the existence of
> > ambient capabilities), but that also means, that all kernels we intend
> > to support have a well-enough working cgroupv2 implementation.
> >
> > hence, i'd love to drop the cgroupv1 support from our tree entirely,
> > and simplify and modernize our codebase to go cgroupv2-only. Before we
> > do that I'd like to seek feedback on this though, given this is not
> > purely a thing between the kernel and systemd — this does leak into
> > some userspace that operates on cgroups directly.
> >
> > Specifically, legacy container infra (i.e. docker/moby) for the
> > longest time was cgroupsv1-only. But as I understand it has since been
> > updated, to cgroupsv2 too.
> >
> > Hence my question: is there a strong community of people who insist on
> > using newest systemd while using legacy container infra? Anyone else
> > has a good reason to stick with cgroupsv1 but really wants newest
> > systemd?
> >
> > The time where we'll drop cgroupv1 support *will* come eventually
> > either way, but what's still up for discussion is to determine
> > precisely when. hence, please let us know!
>
> In general, I wouldn't mind dropping cgroup1 support in the future.
>
> The only thing I immediately kept thinking about is what happens to
> workloads that have a v1 cgroup layout on the host possibly with an
> older systemd running container workloads using a newer distro with a
> systemd version without cgroup1 support.
>
> Think Ubuntu 18.04 host running a really new Ubuntu LTS that has a
> version of systemd with cgroup1 support already dropped. 

Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-21 Thread Christian Brauner
[Cc Stéphane and Serge]

On Thu, Jul 21, 2022 at 11:03:49AM +0200, Lennart Poettering wrote:
> Heya!
> 
> It's currently a terrible mess having to support both cgroupsv1 and
> cgroupsv2 in our codebase.
> 
> cgroupsv2 first entered the kernel in 2014, i.e. *eight* years ago
> (kernel 3.16). We soon intend to raise the baseline for systemd to
> kernel 4.3 (because we want to be able to rely on the existence of
> ambient capabilities), but that also means, that all kernels we intend
> to support have a well-enough working cgroupv2 implementation.
> 
> hence, i'd love to drop the cgroupv1 support from our tree entirely,
> and simplify and modernize our codebase to go cgroupv2-only. Before we
> do that I'd like to seek feedback on this though, given this is not
> purely a thing between the kernel and systemd — this does leak into
> some userspace that operates on cgroups directly.
> 
> Specifically, legacy container infra (i.e. docker/moby) for the
> longest time was cgroupsv1-only. But as I understand it has since been
> updated, to cgroupsv2 too.
> 
> Hence my question: is there a strong community of people who insist on
> using newest systemd while using legacy container infra? Anyone else
> has a good reason to stick with cgroupsv1 but really wants newest
> systemd?
> 
> The time where we'll drop cgroupv1 support *will* come eventually
> either way, but what's still up for discussion is to determine
> precisely when. hence, please let us know!

In general, I wouldn't mind dropping cgroup1 support in the future.

The only thing I immediately kept thinking about is what happens to
workloads that have a v1 cgroup layout on the host possibly with an
older systemd running container workloads using a newer distro with a
systemd version without cgroup1 support.

Think Ubuntu 18.04 host running a really new Ubuntu LTS that has a
version of systemd with cgroup1 support already dropped. People do
actually do stuff like that. Stéphane and Serge might know more about
actual use-cases in that area.

But fwiw, we did have people show up with this and related problems for
the last 5 years or so at conferences.

Christian


[systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2022-07-21 Thread Lennart Poettering
Heya!

It's currently a terrible mess having to support both cgroupsv1 and
cgroupsv2 in our codebase.

cgroupsv2 first entered the kernel in 2014, i.e. *eight* years ago
(kernel 3.16). We soon intend to raise the baseline for systemd to
kernel 4.3 (because we want to be able to rely on the existence of
ambient capabilities), but that also means, that all kernels we intend
to support have a well-enough working cgroupv2 implementation.

hence, i'd love to drop the cgroupv1 support from our tree entirely,
and simplify and modernize our codebase to go cgroupv2-only. Before we
do that I'd like to seek feedback on this though, given this is not
purely a thing between the kernel and systemd — this does leak into
some userspace that operates on cgroups directly.

Specifically, legacy container infra (i.e. docker/moby) for the
longest time was cgroupsv1-only. But as I understand it has since been
updated, to cgroupsv2 too.

Hence my question: is there a strong community of people who insist on
using newest systemd while using legacy container infra? Anyone else
has a good reason to stick with cgroupsv1 but really wants newest
systemd?

The time where we'll drop cgroupv1 support *will* come eventually
either way, but what's still up for discussion is to determine
precisely when. hence, please let us know!

Thanks,

Lennart

--
Lennart Poettering, Berlin