To what extent a machine is locked down is a policy choice. There are
already plenty of tools available to manage policy, so this really doesn't
belong here, and if you want to ensure that your fleet of machines is locked
down through something like PREFER_HARDENED_CONFIG=1, you're going to need
Hello,
Just out of curiosity, I commented out DeviceAllow=/dev/net/tun rwm in
systemd-nspawn@.service and tried running it. I expected a failure, but
none occurred.
copy_devnodes() in src/nspawn/nspawn.c calls mknod() on /dev/net/tun, and
EPERM is expected because DeviceAllow=/dev/net/tun rwm does
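For anyone wanting to reproduce this, the usual way to experiment with the
device policy of the template unit is a drop-in override rather than editing
the shipped unit file; the drop-in below is a sketch (the file name tun.conf
is arbitrary, the DeviceAllow= line mirrors the stock unit):

```shell
# Illustrative drop-in for systemd-nspawn@.service; comment the
# DeviceAllow= line in or out to test the device access policy.
mkdir -p /etc/systemd/system/systemd-nspawn@.service.d
cat > /etc/systemd/system/systemd-nspawn@.service.d/tun.conf <<'EOF'
[Service]
DeviceAllow=/dev/net/tun rwm
EOF
systemctl daemon-reload
```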
On Monday, 2022-02-21 at 22:16 +0100, Felip Moll wrote:
> Silvio,
>
> As I commented in my previous post, creating every single job in a
> separate slice is an overhead I cannot afford.
> An HTC system could run thousands of jobs per second, and doing extra
> fork+execs plus waiting for
> On 21 Feb 2022, at 21:16, Felip Moll wrote:
>
>
>
>> You could invoke a man:systemd-run for each new process. Then you can
>> put every single job in a separate .slice with its own
>> man:systemd.resource-control applied.
>> This would also mean that you don't need to compile against
You could invoke a man:systemd-run for each new process. Then you can
put every single job in a separate .slice with its own
man:systemd.resource-control applied.
This would also mean that you don't need to compile against libsystemd.
Just exec() accordingly if a systemd system is
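The suggestion above could look roughly like this; the slice name, resource
properties, and job command below are all made-up examples, not anything the
thread prescribes:

```shell
# Launch one job as a transient scope in its own slice, with per-job
# limits applied via systemd.resource-control properties.
systemd-run --scope \
    --slice=job-1234.slice \
    -p MemoryMax=1G \
    -p CPUQuota=50% \
    /usr/bin/my-job --input data.txt
```

This way the service manager creates and cleans up the cgroup, and the
submitting process does not need to link against libsystemd.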
On Monday, 2022-02-21 at 18:07 +0100, Felip Moll wrote:
> The hard requirement that my project has is that processes need to
> live even if the daemon that forked them dies.
> Roughly, it is how a batch scheduler works: one controller sends a
> request to my daemon for launching a process in
>
> Hmm? Hard requirement of what? Not following?
>
>
The hard requirement that my project has is that processes need to live
even if the daemon that forked them dies.
Roughly, it is how a batch scheduler works: one controller sends a request
to my daemon for launching a process in the name of a
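One way to satisfy that survival requirement under systemd (an assumption on
my part, not something the thread settles) is to run each job as a transient
service, so it becomes a child of the service manager rather than of the
submitting daemon; the unit name and worker binary here are placeholders:

```shell
# The job is parented to PID 1's service manager, so killing the
# daemon that submitted it does not take the job down with it.
# --collect garbage-collects the transient unit even if it fails.
systemd-run --unit=job-42 --collect /usr/bin/worker
```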
On Mon, 21.02.22 14:14, Felip Moll (lip...@gmail.com) wrote:
> Hello,
>
> I am creating software that consists of one daemon which forks several
> processes in response to user requests.
> It basically acts like a job scheduler.
>
> The daemon is started using a unit file and with Delegate=yes
Hello,
I am creating software that consists of one daemon which forks several
processes in response to user requests.
It basically acts like a job scheduler.
The daemon is started via a unit file with the Delegate=yes option,
because every process must be constrained differently. I manage my
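For context, a minimal unit for such a daemon might look like the sketch
below; the paths and description are invented, only the Delegate=yes line
reflects what the post describes:

```ini
[Unit]
Description=Example job-scheduler daemon (illustrative)

[Service]
ExecStart=/usr/local/bin/scheduler-daemon
# Hand this unit's cgroup subtree over to the daemon so it can create
# and configure sub-cgroups for its jobs itself.
Delegate=yes

[Install]
WantedBy=multi-user.target
```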