#!/usr/local/bin/execlineb -P
If you're a distributor: don't do that. Distributions should not install anything in /usr/local. Also, if you're providing s6 as the base init system, you should have the execline binaries in the root filesystem, which means /bin, because the user may choose to have /usr on a different filesystem.
foreground { s6-hiercopy /etc/s6/user-serv /run/user }
Do you need to perform that step every time your service restarts? It's idempotent, so you should put it in a separate oneshot instead of in a run script.
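For example, with s6-rc, the copy could become its own oneshot. This is only a sketch: the source directory name sv/user-serv-setup is made up, adapt it to your own database layout (the # lines are annotations, not file contents):

```
# sv/user-serv-setup/type contains the single line:
oneshot

# sv/user-serv-setup/up is an execline command line, run once at startup:
s6-hiercopy /etc/s6/user-serv /run/user
```

That way the copy happens exactly once per boot, and restarting the longrun doesn't redo it.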
if {
  if { s6-mkdir -p -m 0755 /run/user-run/ }
  forbacktickx -p -0 i { s6-ls -0 /run/user }
  import -u i
  foreground { s6-ln -s -- /run/user/${i} /run/user-run/ }
}
Same thing: that's idempotent, so put it in a separate oneshot. Besides, all that accomplishes is adding another level of indirection. Why do you need /run/user-run with symlinks when you could just use /run/user? /run/user is already a working copy of your user data from /etc/s6/user-serv; why do you need an additional directory where you're not adding anything meaningful?
s6-envdir -I /run/user-run/service/.s6-svscan
I don't think that does what you think it does. Nothing in .s6-svscan is meant to be read and put into environment variables.
Then I launch s6-rc-init from a oneshot just after, like this:
Hmmm. This means that your user s6-svscan should notify readiness. It's difficult to have a s6-svscan service notify readiness, because how do you define readiness?
- If it's just that the s6-svscan process has entered its main loop and can now receive signals and commands and process the scandir: that's easy to do, but doesn't mean much.
- If it's that the whole supervision tree at the time of launch, i.e. all the early services, are ready: that's a bit harder to implement; it implies running s6-svwait -a on all early services. Which is doable, but requires the run script to be a bit more complex than the one you wrote.
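Such a run script could look something like the following. This is only a sketch: svc-a and svc-b are hypothetical names for your early services, it assumes the longrun's notification-fd file contains 3, and note that s6-svwait can only subscribe once the supervise directories exist, so there is a small startup race you would need to handle:

```
#!/usr/local/bin/execlineb -P
# Sketch: notify readiness once all early services are up and ready.
background
{
  # Wait until every early service (hypothetical names) reports readiness...
  s6-svwait -a -U -- /run/user-run/service/svc-a /run/user-run/service/svc-b
  # ...then write a newline to the notification fd.
  fdmove 1 3
  echo ""
}
s6-svscan -t0 /run/user-run/service
```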
#!/usr/local/bin/execlineb -S0
s6-svscanctl -q -- /run/user-run/service
That's not going to work. Your s6-svscan service is a longrun, so your supposed "down" file will be ignored; it's not doing anything, you should delete it. What is happening is that your user s6-svscan is sent a SIGTERM when s6-rc decides it's time to bring it down. Which is the right thing to do. Note that you launched s6-svscan with -s, so what SIGTERM does depends on the contents of /run/user-run/service/.s6-svscan/SIGTERM. That's probably not a good idea for a s6-svscan instance that is not the main one. My suggestion would be to just remove the -s option in your run script.
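If you were to keep -s for some reason, you would at least need a /run/user-run/service/.s6-svscan/SIGTERM script that does something sensible. A sketch that roughly mimics the default SIGTERM behaviour:

```
#!/usr/local/bin/execlineb -P
# Tell this s6-svscan instance to bring everything down and exit.
s6-svscanctl -t /run/user-run/service
```

But for a secondary s6-svscan instance, simply dropping -s and letting the default signal handling apply is less error-prone.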
The rc database correctly brings down the services declared in it, but the scandir does not want to stop.
I suspect you don't have a script for SIGTERM, so s6-svscan is happily ignoring it. Just remove the -s option to s6-svscan and everything will work fine.
However, I have another problem: when I try to launch some classic services with the second scandir (or the first, it's the same), I get this error in the log:

@40000000575cf0bb05fd2e24 s6-supervise syslogd-linux/log: warning: unable to spawn ./run - waiting 10 seconds
@40000000575cf0c504c9de44 s6-supervise syslogd-linux: warning: unable to spawn ./run - waiting 10 seconds
@40000000575cf0c504ca5f2c s6-supervise (child): fatal: unable to exec run: No such file or directory
@40000000575cf0c50660c63c s6-supervise (child): fatal: unable to exec run: No such file or directory
You have an invalid "syslogd-linux" service directory, with no run script. Same thing with syslogd-linux/log. Check what is happening to your run scripts.

--
Laurent
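P.S. A quick way to spot such broken service directories (a sketch, not from the thread; it walks a scandir and flags every service or log directory lacking an executable ./run):

```sh
# check_run_scripts SCANDIR: print every service directory under SCANDIR
# (and every */log subdirectory) that has no executable ./run script.
check_run_scripts() {
  for dir in "$1"/* "$1"/*/log ; do
    [ -d "$dir" ] || continue
    [ -x "$dir/run" ] || echo "missing or non-executable run script: $dir"
  done
}

# Hypothetical usage, with the scandir path from this thread:
# check_run_scripts /run/user-run/service
```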