Re: runit SIGPWR support

2020-02-28 Thread Alex Suykov
Fri, Feb 28, 2020 at 07:39:47AM +0100, Jan Braun wrote:

> The Linux kernel uses SIGPWR to tell init "there's a power failure
> imminent". That's different from "please shut down the system", even if
> (current versions of) busybox and openrc choose to shut down the system
> when told about an imminent power failure.

On most platforms, the Linux kernel never sends SIGPWR to init at all.

Grepping the source, there are only a couple of places where SIGPWR can
be sent to the init process from the kernel: a machine check exception
on s390, and a power-related hardware failure indication on SGI
workstations. I'm not familiar with either of those, but I suspect in
both cases the signal is expected to initiate an immediate shutdown.
Otherwise, why would they signal init?

Just for reference, I checked the apcupsd sources. They do include an
example of sending SIGPWR to init when some UPS event arrives, but it's
just an example, and in that setting spawning a script would be a better
approach anyway. The same applies to pretty much anything else in
userspace that might be monitoring the power supply: it should not be
signalling init, it should have some sort of configurable event handler.

Unless it's a matter of principle, hard-coding SIGPWR as a shutdown
signal sounds like a decent idea to me, at least on Linux. LXD and
similar container managers might be the only class of applications for
which signalling init like that actually makes sense. The distinction
between SIGINT for reboot and SIGPWR for shutdown does not sound awful
either.
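
For illustration only, a minimal compilable sketch of what that
hard-coded mapping could look like, not taken from runit or any real
init (a real init would stop services and call reboot(2) with
RB_AUTOBOOT or RB_POWER_OFF instead of printing):

  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>

  static volatile sig_atomic_t caught;

  static void handler(int sig)
  {
      caught = sig;
  }

  int main(void)
  {
      struct sigaction sa = { .sa_handler = handler };

      sigaction(SIGINT, &sa, NULL);   /* treat as a reboot request */
      sigaction(SIGPWR, &sa, NULL);   /* treat as a shutdown request */

      while(!caught)
          pause();                    /* a real init would reap children here */

      if(caught == SIGINT)
          printf("would reboot\n");
      else if(caught == SIGPWR)
          printf("would power off\n");

      return 0;
  }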


Re: The "Unix Philosophy 2020" document

2019-12-29 Thread Alex Suykov
Sat, Dec 28, 2019 at 06:41:56PM +0100, Oliver Schad wrote:

> > The reason I think it's mostly useless is that the only use case
> > for cgroup supervision is supervising double-forking daemons, which
> > is not a very smart thing to do. A much better approach is to get rid
> > of double-forking and then just directly supervise the resulting
> > long-running process.
> > I can't think of any other case where it would be useful.
> 
> I definitely have to correct you: cgroups are *NOT* designed to catch
> wild forking processes. This is just a side effect of them.

Er, that whole quoted part, including the last sentence, is about using
cgroups to supervise processes, not about the use of cgroups in general.
And I can't think of any use case where cgroup supervision would be
useful, other than for double-forking daemons.

Also, wrt process supervision, calling it a side effect is a bit
misleading. The interfaces are not really made for that kind of use at
all. Strictly speaking, anything doing kill `cat .../cgroup.procs` is
racy and unreliable, including the runcg tool that I wrote. In practice
the race is pretty much irrelevant, but it's still there, inherent to
the interfaces.
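
To make the race concrete: the pid list has to be read first and
signalled afterwards, so anything that forks between the two steps is
missed. A sketch of the pattern in C (the cgroup path is a made-up
example):

  #include <signal.h>
  #include <stdio.h>

  int main(void)
  {
      const char* path = "/sys/fs/cgroup/test/cgroup.procs"; /* made up */
      FILE* f = fopen(path, "r");
      int pid;

      if(!f)
          return 1;

      /* Anything that forks after this read, but before the kill()
         below, escapes unsignalled. The race is in the interface. */
      while(fscanf(f, "%d", &pid) == 1)
          kill(pid, SIGTERM);

      fclose(f);
      return 0;
  }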


Re: The "Unix Philosophy 2020" document

2019-12-28 Thread Alex Suykov
Sat, Dec 28, 2019 at 01:57:25PM +1100, eric vidal wrote:

> Well, for me, cgroups are clearly lacking in s6. Using them for every
> process like systemd does makes no sense, but in some cases they can
> be useful to protect against e.g. runaway memory allocation by a
> program (in particular on a server).

A minor note on this: resource limiting with cgroups does not require
any explicit support from s6, or any external tools for that matter.
Literally just `echo $$ > $cg/cgroup.procs` in the startup script is
enough, assuming the group has been created (mkdir) and the limits have
been set (a bunch of echo's).
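
For illustration, the same idea as a minimal chain-loading tool in C,
mirroring the mkdir/echo sequence. The mount point, the memory.max
knob and the tool name are all made-up examples assuming a cgroup2
hierarchy:

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Hypothetical usage: enter-cgroup foo 100M daemon --options */

  static void echoto(const char* path, const char* data)
  {
      int fd = open(path, O_WRONLY);

      if(fd < 0)
          return;

      write(fd, data, strlen(data));
      close(fd);
  }

  int main(int argc, char** argv)
  {
      char path[256], pid[32];

      if(argc < 4)
          return 1;

      snprintf(path, sizeof(path), "/sys/fs/cgroup/%s", argv[1]);
      mkdir(path, 0755);                 /* the mkdir */

      snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/memory.max", argv[1]);
      echoto(path, argv[2]);             /* one of the echo's */

      snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/cgroup.procs", argv[1]);
      snprintf(pid, sizeof(pid), "%d", (int)getpid());
      echoto(path, pid);                 /* echo $$ > $cg/cgroup.procs */

      execvp(argv[3], argv + 3);         /* chain into the service */
      return 127;
  }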

The whole thing regarding cgroups in systemd is really about a very
different problem: supervising broken services that exit early and
leave orphaned children behind.

If you only want to implement cgroup-based resource limiting, it can
be done with current s6 just fine. Also with runit, bare sysvinit,
busybox init and pretty much anything else that can run shell scripts.
Even suckless tools, probably.


Re: The "Unix Philosophy 2020" document

2019-12-28 Thread Alex Suykov
Sat, Dec 28, 2019 at 01:46:08AM -0500, Steve Litt wrote:

> * No, if any code within the s6 stack must be changed. So much good
>   software has gone bad trying to incorporate features only for the
>   purpose of getting new users.

It does not require any changes to s6. That's a major point I'd like
to demonstrate with this tool: an external tool of about 200 lines of C
is enough to let any process supervisor become a cgroup supervisor, on
a case-by-case basis, just by chain-execing the tool with the
application being supervised.

What the tool itself does is fork-spawn the chained application and
then wait until the cgroup is empty, while also proxying signals. To
the supervisor, it looks and behaves like a regular long-running
process; the supervisor does not need to know anything about cgroups.
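
In outline, the approach looks roughly like this. This is a simplified
sketch, not the actual runcg.c; the hard-coded cgroup path, the
polling and the minimal error handling are all stand-ins:

  #include <signal.h>
  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  static const char* procs = "/sys/fs/cgroup/app/cgroup.procs"; /* made up */
  static volatile sig_atomic_t gotsig;

  static void handler(int sig)
  {
      gotsig = sig;
  }

  static void forward(int sig) /* proxy a signal to every pid in the group */
  {
      FILE* f = fopen(procs, "r");
      int pid;

      if(!f) return;

      while(fscanf(f, "%d", &pid) == 1)
          kill(pid, sig);

      fclose(f);
  }

  static int empty(void)       /* true once cgroup.procs lists no pids */
  {
      FILE* f = fopen(procs, "r");
      int pid, ret;

      if(!f) return 1;

      ret = (fscanf(f, "%d", &pid) != 1);
      fclose(f);
      return ret;
  }

  int main(int argc, char** argv)
  {
      struct sigaction sa = { .sa_handler = handler };

      if(argc < 2)
          return 1;

      sigaction(SIGTERM, &sa, NULL);
      sigaction(SIGINT, &sa, NULL);

      if(!fork()) {            /* child: would join the cgroup, then exec */
          execvp(argv[1], argv + 1);
          _exit(127);
      }

      do {                     /* parent: look like one long-running process */
          sleep(1);            /* the real tool does not poll like this */
          while(waitpid(-1, NULL, WNOHANG) > 0)
              ;                /* reap, but keep waiting for the group */
          if(gotsig) {
              forward(gotsig);
              gotsig = 0;
          }
      } while(!empty());

      return 0;
  }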

The reason I think it's mostly useless is that the only use case for
cgroup supervision is supervising double-forking daemons, which is not
a very smart thing to do. A much better approach is to get rid of
double-forking and then just directly supervise the resulting
long-running process.
I can't think of any other case where it would be useful.


Re: The "Unix Philosophy 2020" document

2019-12-27 Thread Alex Suykov
Fri, Dec 27, 2019 at 07:23:09PM +0800, Casper Ti. Vector wrote:

> I also wonder if someone on this mailing list is interested in actually
> implementing a cgroup-based babysitter as is outlined in the post,
> perhaps packaged together with standalone workalikes of the cgroup
> chainloaders (`create-control-group' etc) from nosh?  I am still quite
> a newbie to actual system programming in C...

https://github.com/arsv/minibase/blob/master/src/misc/runcg.c

It's really simple.

I don't think a tool like this has any actual uses, other than in
arguments with systemd fans, but I guess that alone could justify its
existence.


Re: The "Unix Philosophy 2020" document

2019-12-27 Thread Alex Suykov
Fri, Dec 27, 2019 at 04:29:13PM -0500, Steve Litt wrote:

> What is slew?

I'd guess this thing: https://gitlab.com/CasperVector/slew


Re: [Announce] s6.rc: a distribution-friendly init/rc framework

2018-03-23 Thread Alex Suykov
Fri, Mar 23, 2018 at 10:51:57AM +, Laurent Bercot wrote:

>  Bear in mind that - this is a simplified, but descriptive enough view
> of the political landscape of the current Linux ecosystem - distribution
> maintainers are *lazy*. They already know systemd, or openrc, or
> sysvinit; they don't want to put in more effort. They don't want to have
> to significantly modify their packaging scripts to accommodate a wildly
> different format.

In their defence, I don't think any mainstream distribution makes this
kind of modification easy. IMO it's safe to assume a new init system
means a new distribution (possibly derived from something larger).

> For instance, instead of having to provide s6-rc source definition
> directories for services, they would provide a descriptive file,
> similar to systemd unit files, that would then be automatically
> processed and turned into a set of s6-rc definition directories.

Extra layers generally make things harder to work with, not easier.

Whoever may be building something new with s6 would probably benefit
more from a reference system that's simple enough to understand and
that could be used as a base for extending into a full-blown
distribution. Having examples at hand would likely matter much more
than the format of the service descriptions.

The reaction to the slew manual, I'm afraid, will likely be along the
lines of "that's all cool and stuff, but how do I actually run this?".


announcing minibase

2017-10-31 Thread Alex Suykov
Hi everyone,

A project I've been working on for quite some time is approaching its
first public release, and while it's not exactly there yet, I'd like
to make an early announcement for the supervision crowd:

  https://github.com/arsv/minibase

The init part is a further development of sninit (which was also
announced here) and has been completely reworked. Some features of the
project:

  * Static linkage, bundled base library, no external dependencies.

  * Split-stage init system similar to s6, but with a unitary
    supervisor (same parent process for all children).

  * Shell-like interpreter for boot and startup scripts,
similar to execline in role but quite different in design.

  * Bus-less service control using local sockets.

The project also includes some stuff that in my opinion gets overlooked
in relation to non-mainstream init systems, namely VT/DRM/KMS switching,
disk encryption and networking.

Bootable buildroot-based images for qemu, along with the build scripts,
are available here:

  https://github.com/arsv/miniroot

For a quick start, try running pre-built images (from Releases) with qemu.

Please note that while the general structure is already in place, lots
of details are still missing and some parts are merely placeholders
that will be rewritten later.


Re: s6 talk at FOSDEM 2017

2017-01-06 Thread Alex Suykov
Thu, Jan 05, 2017 at 09:33:41PM -0800, Colin Booth wrote:

> >  I didn't look too hard at the source code because things like this
> > _should_ be documented no matter what. I remember early experiments
> > with old RedHats or Debians where the gettys were actually started in
> > parallel, but things may have changed since. My experience with Alpine
> > matches yours: at least getty1 gets started after "openrc default" - but
> > that's a busybox init implementation choice.
> 
> As I have learned with Debian, init waits until each line is done
> before going on to the next.

It can be either way actually; check "wait" vs "once" in inittab(5).
Most setups that do stuff like /etc/rc2.d run those as "wait", but it's
not strictly necessary if the rc.d scripts are known not to interfere
with gettys/logins/etc.
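
For reference, a made-up inittab fragment showing the difference: with
"wait", init blocks on the rc line before moving on to the respawn
entries, so the gettys only start once rc finishes; changing "wait" to
"once" would let them start in parallel:

  # illustrative /etc/inittab fragment
  rc:2:wait:/etc/init.d/rc 2
  1:2345:respawn:/sbin/getty 38400 tty1
  2:23:respawn:/sbin/getty 38400 tty2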

Sources, in case anyone's willing to look: in sysvinit it's startup()
and the loop from start_if_needed(); in busybox it's run_actions()
calls in init_main(), and run_actions() itself.


Re: [announce] buildroot-s6 0.4.0

2016-01-29 Thread Alex Suykov
Wed, Jan 27, 2016 at 01:16:05PM +0100, Laurent Bercot wrote:

>  And so, daemon packages need to be separated into "mechanism" (the
> daemon itself) and "policy" (the way to start it), with as many
> "policy" packages as there are supported service managers.

This got me thinking of possible implementations.

I don't like the idea of introducing dozens of tiny packages, one per
daemon per service manager, which is probably the only way of doing
this with common package managers.

But in this particular case there is an alternative: one policy package
per manager providing startup files for all supported daemons.

>  I plan to do this work for Alpine Linux towards the end of this year;
> I think most of it will be reusable for other distributions, including
> Buildroot.

Buildroot is very unusual in that each package gets to see the whole
system configuration at build time. This (mis)feature makes it really
easy to implement the policy package.

https://github.com/arsv/sninit/commit/879b6bf2f17b8b68bddcdb15344f3d3741c07cd2
https://github.com/arsv/sninit/tree/master/misc/service

I did it with sninit because I already have startup files there, but
the same can be done with buildroot-s6 just as well.

This trick probably won't work with proper distributions, but Buildroot
may still be useful as a sort of staging and testing environment.



Re: [announce] buildroot-s6 0.2.0

2015-11-12 Thread Alex Suykov
Thu, Nov 12, 2015 at 09:29:37PM +0100, Eric Le Bihan wrote:

> Comments welcomed!

The `depends on BR2_TOOLCHAIN_USES_GLIBC || BR2_TOOLCHAIN_USES_MUSL`
looks really strange; uClibc is typically the least problematic libc.

I can definitely build it with a uClibc toolchain. All it needs is -lrt
in the skalibs configure when testing for posix_spawn*, and later
whenever -lskalibs is used.

It does not work though. What I have so far is this:

execve("/usr/sbin/s6-echo", ["s6-echo", "--", "Starting system."], ...)
execve("s6-echo", ["s6-echo", "--", "Starting system."], ...)
execve("s6-echo", ["s6-echo", "--", "Starting system."], ...)
execve("s6-echo", ["s6-echo", "--", "Starting system."], ...)
execve("s6-echo", ["s6-echo", "--", "Starting system."], ...)
...

and it keeps repeating that execve in an infinite loop.

That's execline running /sbin/init, at the first `if` and the first
occurrence of a { } block in that script. Strange, but I have not
looked into it further yet.

> buildroot-s6 is a sandbox to play with the s6 stack and see how it could be
> integrated into the official Buildroot. It is still a work-in-progress.

The worst part will be dealing with per-package service files.
I did try doing a similar thing, and ended up with all my stuff in
BR2_EXTERNAL; this way it's much easier to keep Buildroot proper up
to date.


s6 hangs at shutdown

2015-11-12 Thread Alex Suykov
Hi,

I'm trying to reboot an s6-based qemu system [1] and it seems to hang
at around the `s6-rc -ad change` stage. Could anyone please point out
why it can't proceed to reboot? Maybe I'm doing something wrong.

The system runs with s6-svscan as pid 1.
I log in and initiate reboot with

s6-svscanctl -i /run/service

This results in

s6-svc: fatal: unable to control /run/service/s6-svscan-log: supervisor not listening
Shutting down system.
Syncing disks.
Syncing disks.

but I still have my sh, and I can see with ps that all the s6-supervise
processes have been killed except for `s6-supervise getty`. By this
point, ps shows pid 1 as

foreground s6-rc -ad change foreground s6-echo Performing reboot. s6-reboot

and it does not seem to proceed any further.
Killing sh, getty or its supervisor does not help.
Same with halt and poweroff.

The init script in that system is
  custom/board/common/overlay/s6-init/sbin/init
and reboot script is
  custom/board/common/overlay/s6-init/etc/rc.shutdown


[1] https://github.com/elebihan/buildroot-s6