On 27.10.25 18:12, Laurent Bercot wrote:

 Hi Benny,

 Indeed s6-rc isn't reentrant. There are several solutions to work with
that.

 The cleanest one would be to rebuild your database when you change
your configuration from static to DHCP or vice-versa. Because when you
change that configuration, you change the dependency graph, and an s6-rc
database has a static dependency graph.
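For reference, a rebuild-and-switch might look roughly like this (a sketch only: the source and compiled paths are assumptions, not a real layout, and it assumes a live s6-rc instance at the default location):

```shell
# Compile a fresh database from the edited service definitions
# (the destination directory must not already exist):
s6-rc-compile /etc/s6-rc/compiled-dhcp /etc/s6-rc/source

# Atomically switch the live state over to the new database:
s6-rc-update /etc/s6-rc/compiled-dhcp
```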

 But it's likely you're reading your network configuration dynamically
from some location like /etc/network/interfaces, so your service
database cannot be easily rebuilt. In that case, a working solution
would be the one you suggested (and don't like): putting a marker in
the filesystem so the udhcpc script knows whether it's in a static or
dynamic configuration, and either execs into udhcpc or into a longrun
that does nothing, such as s6-pause or sleep infinity.
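A marker-based run script for the udhcpc longrun could be sketched like this (the marker path, interface name, and lease-script path are all assumptions):

```shell
#!/bin/sh
# run script for the udhcpc service directory (sketch)
if test -e /run/net.static; then
  # Static configuration: hold the service slot without doing anything.
  exec s6-pause
else
  # Dynamic configuration: run udhcpc in the foreground, supervised.
  exec udhcpc -f -i eth0 -s /etc/udhcpc.script
fi
```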

 But you could also ask yourself: since your udhcpc service is
conditional and can only be activated from inside another service, why
invoke s6-rc to start and stop it at all? You could start it with
s6-svc.
 - Keep your udhcpc service in the s6-rc database, and don't add it to
the default bundle, so the supervisor is spawned at boot time but the
service is down by default.
 - In your oneshot network script, use "s6-svc -U /run/service/udhcpc"
instead of "s6-rc start udhcpc". ("-U" instead of "-u" will ensure the
down file is removed.)
 - In the down script for your network oneshot, use "s6-svc -D
/run/service/udhcpc".
 You should get the same effects as with s6-rc, without deadlocking.
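Put together, the relevant fragments of the oneshot's up and down scripts could look like this (a sketch; the way you detect a DHCP configuration is a stand-in, and the scan directory path is an assumption):

```shell
# In the network oneshot's up script:
if grep -q dhcp /etc/network/interfaces; then
  # -U brings the service up AND removes any down file.
  s6-svc -U /run/service/udhcpc
fi

# In the network oneshot's down script:
# -D brings the service down and creates a down file.
s6-svc -D /run/service/udhcpc
```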

 If you need readiness notification, e.g. so that your oneshot can
exit when udhcpc is ready, you can achieve that via the script given
to udhcpc's -s option. Have that script write a byte to fd 3 when it
succeeds, etc. Then your oneshot can instead invoke
"s6-svc -UwU /run/service/udhcpc" and the command will not exit until
udhcpc is ready.
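udhcpc invokes the -s script with the event name as its first argument, so a fragment signaling readiness could look like this (a sketch; it assumes the service directory contains a notification-fd file holding "3", so s6 listens on that descriptor):

```shell
#!/bin/sh
# udhcpc lease script (fragment, sketch)
case "$1" in
  bound)
    # ...configure the interface, routes, resolv.conf here...
    # Readiness: write one newline to fd 3. fd 3 is only open the
    # first time, so ignore failures on later events.
    echo >&3 2>/dev/null || :
    ;;
esac
```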

 I'm currently working on a frontend for s6 and s6-rc that should make
all this feel significantly less hacky, less cumbersome, and more
officially supported. 🙂

The annoying, but somewhat clean, solution I've used in the past is to have one global s6-rc instance and one s6-rc instance per network configuration, each with a different prefix in the same run directory. The global configuration then had a bundle containing the active network configuration. You can change a bundle without recompiling the global database, so it works, but it's still a horrible user experience.
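If memory serves, that bundle swap can be done on the compiled database with s6-rc-bundle, along these lines (a sketch; the bundle name, service name, and compiled path are all assumptions):

```shell
# Point the network-conf bundle at the DHCP configuration
# without recompiling the database.
s6-rc-bundle -c /etc/s6-rc/compiled delete network-conf
s6-rc-bundle -c /etc/s6-rc/compiled add network-conf net-dhcp
```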
