Re: [systemd-devel] USB installer for mkosi

2023-08-18 Thread Daan De Meyer
Unfortunately there's no ready-made answer here yet. We're busy designing
and implementing a solution for these problems.
https://github.com/systemd/systemd/pull/27792 has more details.

Cheers,

Daan

On Fri, 18 Aug 2023, 19:44 Nils Kattenbeck,  wrote:

> Hi,
>
> Currently I am building a minimalistic Linux image using mkosi which
> should be installed on bare-metal hardware.
> For the installation I am trying to create a USB-stick installer which
> simply installs the resulting image onto the hardware.
>
> First and foremost:
> Does anyone know of an existing tool that generates such a USB
> installer?
> So far I have found the installer script[1] from Yocto, and FAI
> (Fully Automatic Installation)[2].
> I would like to avoid using Yocto, and the script also seems to perform
> partitioning etc., which I do not need as mkosi already generates a
> ready-to-use raw disk image with the partitions set up.
> FAI, on the other hand, seems to focus on network installs and prefers
> to build its own images instead of using arbitrary .raw/ISO images.
>
> So I fear that I will have to write my own installer...
> I do not require fancy GUI shenanigans; a simple CLI application
> prompting for the destination disk should suffice.
>
> Based on my understanding, the primary steps are `cp /dev/usb-stick
> /dev/target-disk` (or dd for the old-fashioned), followed by `parted
> --script --fix /dev/target-disk print` to resolve the GPT warnings caused
> by the backup GPT header not being at the end of the disk when the disk
> is larger than the USB stick.
> Is it possible to replace the second step with `systemd-repart`?
> Especially given that mkosi v15 now uses it itself, this would
> likely be a lot cleaner than invoking parted.
>
> The major problem I am facing with that approach is how do I know
> whether I am booting from a USB stick or already from the final disk
> drive.
> One technique which comes to mind would be to create two images, one
> of which will be placed into the mkosi.extra/ directory of the other.
> This way I could have one auto-start the install script whereas the
> other image would be completely free of that logic.
> Am I missing a more obvious way to perform this?
>
> Any help would be greatly appreciated!
> Kind regards, Nils
>
> [1]
> https://github.com/yoctoproject/poky/blob/13734bb520732882a95da7ee6efe1e5b98568acc/meta/recipes-core/initrdscripts/initramfs-module-install-efi_1.0.bb
> [2] https://fai-project.org/
>


[systemd-devel] USB installer for mkosi

2023-08-18 Thread Nils Kattenbeck
Hi,

Currently I am building a minimalistic Linux image using mkosi which
should be installed on bare-metal hardware.
For the installation I am trying to create a USB-stick installer which
simply installs the resulting image onto the hardware.

First and foremost:
Does anyone know of an existing tool that generates such a USB
installer?
So far I have found the installer script[1] from Yocto, and FAI
(Fully Automatic Installation)[2].
I would like to avoid using Yocto, and the script also seems to perform
partitioning etc., which I do not need as mkosi already generates a
ready-to-use raw disk image with the partitions set up.
FAI, on the other hand, seems to focus on network installs and prefers
to build its own images instead of using arbitrary .raw/ISO images.

So I fear that I will have to write my own installer...
I do not require fancy GUI shenanigans; a simple CLI application
prompting for the destination disk should suffice.

Based on my understanding, the primary steps are `cp /dev/usb-stick
/dev/target-disk` (or dd for the old-fashioned), followed by `parted
--script --fix /dev/target-disk print` to resolve the GPT warnings caused
by the backup GPT header not being at the end of the disk when the disk
is larger than the USB stick.
Is it possible to replace the second step with `systemd-repart`?
Especially given that mkosi v15 now uses it itself, this would
likely be a lot cleaner than invoking parted.
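For what it's worth, those two steps might be sketched roughly like this
(device paths are placeholders a real installer would prompt for; this
assumes GNU dd and parted are present in the image):

```shell
#!/bin/sh
# Rough sketch of the install step; src/dst are placeholder devices that
# a real installer would prompt the user for.
set -eu

install_image() {
    src="$1"   # e.g. /dev/usb-stick (the medium we booted from)
    dst="$2"   # e.g. /dev/target-disk
    # 1. Byte-for-byte copy of the boot medium onto the target disk.
    dd if="$src" of="$dst" bs=4M conv=fsync status=none
    # 2. Let parted relocate the backup GPT header to the end of the
    #    (larger) target disk; --fix answers the repair prompt for us.
    parted --script --fix "$dst" print
}
```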

The major problem I am facing with that approach is how do I know
whether I am booting from a USB stick or already from the final disk
drive.
One technique which comes to mind would be to create two images, one
of which will be placed into the mkosi.extra/ directory of the other.
This way I could have one auto-start the install script whereas the
other image would be completely free of that logic.
Am I missing a more obvious way to perform this?
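As a data point for the detection problem: one sketch, assuming udev is
running in the image, is to ask udev whether the disk backing the root
filesystem is USB-attached:

```shell
#!/bin/sh
# Sketch: decide "installer mode" by checking whether a disk is attached
# over USB (udev sets ID_BUS=usb on USB block devices).
set -eu

boot_is_usb() {
    dev="$1"  # whole-disk node, e.g. /dev/sda (placeholder)
    udevadm info -q property "$dev" 2>/dev/null | grep -q '^ID_BUS=usb$'
}

# One way to resolve the disk behind / (untested assumption):
#   disk=/dev/$(lsblk -n -o PKNAME "$(findmnt -n -o SOURCE /)")
#   boot_is_usb "$disk" && exec /usr/local/bin/run-installer
```

The commented usage and the `run-installer` path are hypothetical; the
point is only that the same image could branch on the bus type at boot.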

Any help would be greatly appreciated!
Kind regard, Nils

[1] 
https://github.com/yoctoproject/poky/blob/13734bb520732882a95da7ee6efe1e5b98568acc/meta/recipes-core/initrdscripts/initramfs-module-install-efi_1.0.bb
[2] https://fai-project.org/


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-08-18 Thread Dimitri John Ledkov
On Mon, 7 Aug 2023 at 17:26, Lennart Poettering  wrote:
>
> On Do, 20.07.23 01:59, Dimitri John Ledkov (dimitri.led...@canonical.com) 
> wrote:
>
> > Some deployments that switch back their modern v2 host to hybrid or v1, are
> > the ones that need to run old workloads that contain old systemd. Said old
> > systemd only has experimental incomplete v2 support that doesn't work with
> > v2-only (the one before current stable magick mount value).
>
> What's stopping you from mounting a private "named" cgroup v1
> hierarchy to such containers (i.e. no controllers). systemd will then
> use that when taking over and not bother with mounting anything on its
> own, such as a cgroupv2 tree.
>
> that should be enough to make old systemd happy.
>

Let me see if I can create all the right config files to attempt that.
I have some other constraints which may prevent this, but hopefully I
can provide sufficient workarounds to get this over the barrier.
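For my own notes, a rough sketch of the suggested mount as I understand it
(run as root by the container setup before the old systemd starts; the
mount point and hierarchy name are my assumptions):

```shell
#!/bin/sh
# Sketch: mount a "named" cgroup v1 hierarchy with no controllers
# attached; an old systemd payload should then take it over instead of
# mounting its own cgroup tree.
set -eu

mount_named_systemd_hierarchy() {
    dir=/sys/fs/cgroup/systemd
    mkdir -p "$dir"
    # "none,name=systemd": a v1 hierarchy named "systemd", zero controllers.
    mount -t cgroup -o none,name=systemd cgroup "$dir"
}
```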


--
okurrr,

Dimitri


Re: [systemd-devel] Feedback sought: can we drop cgroupv1 support soon?

2023-08-18 Thread Lewis Gaul
> What's stopping you from mounting a private "named" cgroup v1
> hierarchy to such containers (i.e. no controllers). systemd will then
> use that when taking over and not bother with mounting anything on its
> own, such as a cgroupv2 tree.

We specifically want to be able to make use of cgroup controllers within
the container. One example of this would be to use "MemoryLimit" (cgroupv1)
for a systemd unit (I understand this is deprecated in the latest versions
of systemd, but as far as I can see we wouldn't be able to use the cgroupv2
"MemoryMax" config in this scenario anyway).

> You are doing something half broken and
> outside of the intended model already, I am not sure we need to go the
> extra mile to support this for longer.

I'm slightly surprised and disheartened by this viewpoint. I have paid
close attention to https://systemd.io/CONTAINER_INTERFACE/ and
https://systemd.io/CGROUP_DELEGATION/, and I'd interpreted them as saying
that running systemd in a container should be fully supported (not only on
cgroupsv2, at least when using recent-but-not-latest systemd versions).

In particular, the following:

"Note that it is our intention to make systemd systems work flawlessly and
out-of-the-box in containers. In fact, we are interested to ensure that the
same OS image can be booted on a bare system, in a VM and in a container,
and behave correctly each time. If you notice that some component in
systemd does not work in a container as it should, even though the
container manager implements everything documented above, please contact
us."

"When systemd runs as container payload it will make use of all hierarchies
it has write access to. For legacy mode you need to make at least
/sys/fs/cgroup/systemd/ available, all other hierarchies are optional."

I note that point 6 under "Some Don'ts" does correlate with what you're
saying:
"Think twice before delegating cgroup v1 controllers to less privileged
containers. It’s not safe, you basically allow your containers to freeze
the system with that and worse."
However, in our case we're talking about a privileged container, so this
doesn't really apply.

I think there's a definite use-case here, and unfortunately when systemd
drops support for cgroupsv1 I think this will just mean we'll be unable to
upgrade the container's systemd version until all relevant hosts use
cgroupsv2 by default (probably a couple of years away).

Thanks for your time,
Lewis

On Mon, 7 Aug 2023 at 17:26, Lennart Poettering 
wrote:

> On Do, 20.07.23 01:59, Dimitri John Ledkov (dimitri.led...@canonical.com)
> wrote:
>
> > Some deployments that switch back their modern v2 host to hybrid or v1,
> are
> > the ones that need to run old workloads that contain old systemd. Said
> old
> > systemd only has experimental incomplete v2 support that doesn't work
> with
> > v2-only (the one before current stable magick mount value).
>
> What's stopping you from mounting a private "named" cgroup v1
> hierarchy to such containers (i.e. no controllers). systemd will then
> use that when taking over and not bother with mounting anything on its
> own, such as a cgroupv2 tree.
>
> that should be enough to make old systemd happy.
>
> Lennart
>
> --
> Lennart Poettering, Berlin
>


[systemd-devel] Can AppArmor be used with NoNewPrivileges=true enabled

2023-08-18 Thread 嵩智
Hi all,

I have a program which is launched by systemd, with NoNewPrivileges=true in
the service file. This program uses a GIO subprocess to execute a second
program (program2). Program2 fails to run if an AppArmor profile is applied
to it, but if I comment NoNewPrivileges=true out, everything works fine. Can
NoNewPrivileges=true work together with AppArmor?
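For reference, a minimal sketch of the unit in question (the names and
paths are placeholders):

```ini
[Service]
NoNewPrivileges=true
# program1 spawns program2 via GIO's GSubprocess; program2 is the one
# confined by an AppArmor profile.
ExecStart=/usr/bin/program1
```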

Regards,
Dirk