Folks, I have decided to have another go at getting an s6 system going. At
the moment I am frustrated by my stage 1 kernel-panicking just after the
point where any diagnostic output is possible, so I post in the hope that
some suggestion may make sense to my ancient grey cells.
I have built a fresh LFS; it is on an LVM2 partition, so I need an
initramfs, and this enters the stage 1 script with a devtmpfs somewhat
populated, and /sys and /proc already mounted. My thinking is that these
mounts are fine, and I don't see any advantage in unmounting them just to
remake /dev in a tmpfs.
So I should need much less than Laurent has in his example. (Did I mention
the ancient grey cells?)
########################################
cd /
umask 022
# close stdin
fdclose 0
if { s6-echo "*s6 init stage 1 starts *" }
# close stdout and stderr
fdclose 1 fdclose 2
# somewhere to mount /tmp and /dev/service, but not /dev
if { s6-mount -wt tmpfs -o mode=0755,size=67108864 tmpfs /mnt/tmpfs }
# copy the image of the tmpfs onto it.
if { s6-hiercopy /img/tmpfs /mnt/tmpfs }
# connect stdin to /dev/null
redirfd -r 0 /dev/null
# connect stdout to a dangling fifo which will be picked up by the logger
redirfd -wnb 1 /service/s6-svscan-log/fifo # (black magic: doesn't block)
# connect stderr
fdmove -c 2 1
# load the environment variables
s6-envdir /etc/s6-init/env
background
{
s6-setsid
redirfd -w 1 /service/s6-svscan-log/fifo # (blocks until the logger reads)
/etc/s6-init/init-stage2
}
unexport !
# Start stage 2.
s6-svscan -t0 /service
########################################
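For anyone puzzled by the "black magic" comment: opening a fifo for writing in
non-blocking mode fails with ENXIO as long as no reader has it open, which is
exactly the situation before the logger starts; redirfd's -n/-b combination is
documented as working around that. A minimal sketch of the underlying fifo
behaviour (illustrative Python, assuming ordinary POSIX semantics; nothing here
is part of s6 itself):

```python
# Shows why a plain non-blocking open-for-write on a fifo cannot be
# used before the logger (the fifo's reader) is running.
import errno
import os
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(fifo)

# No reader yet: O_WRONLY | O_NONBLOCK fails with ENXIO.
try:
    os.open(fifo, os.O_WRONLY | os.O_NONBLOCK)
except OSError as e:
    assert e.errno == errno.ENXIO

# Once a reader exists, the same open succeeds immediately.
r = os.open(fifo, os.O_RDONLY | os.O_NONBLOCK)
w = os.open(fifo, os.O_WRONLY | os.O_NONBLOCK)
os.close(w)
os.close(r)
print("fifo semantics as expected")
```

That is why stage 2 later uses a plain blocking redirfd -w on the same fifo:
by then the logger under s6-svscan is expected to be reading it.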
But somewhere along the line this panics ("attempted to kill init", i.e.
PID 1 exited), and I am at a loss to know how to debug it. The best
suggestion is probably 'use systemd', but please refrain.
How do you debug this sort of thing?
TOF.