Re: Readiness notification exemplars

2020-04-07 Thread Brett Neumeier
On Sat, Apr 4, 2020 at 5:18 PM Serge E. Hallyn wrote:

> >  On the daemon side, you can use any option you like to tell the daemon
> > what fd it should write to. It has nothing to do with s6, and I have no
> > recommended policy for daemons.
>

It's maybe a little heavy-handed, but is there any technical reason not to
pick an arbitrary FD, say 3, and always use it for a particular daemon?
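
For context, the supervision side already supports exactly that: s6's
notification-fd servicedir file names the fd the daemon writes to. Here is
a sketch of a run script for a daemon hardcoded to fd 3 (the daemon name
and its --ready-fd option are hypothetical):

  #!/bin/sh
  # the servicedir also contains a file "notification-fd" whose contents
  # are the single line "3"; s6-supervise then provides the readiness
  # pipe to the child on that fd
  exec mydaemon --ready-fd 3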

Cheers,

Brett

-- 
Brett Neumeier (bneume...@gmail.com)


Readiness notification exemplars

2020-04-01 Thread Brett Neumeier
I read on http://skarnet.org/software/s6/notifywhenup.html:

"...daemons can simply write a line to a file descriptor of their choice,
then close that file descriptor, when they're ready to serve. This is a
generic mechanism that some daemons already implement."

I am curious: does anyone on this list know of examples of such daemons? I
am considering writing and submitting patches for some daemon programs I
use that do *not* yet support this mechanism, and I wonder whether it is as
simple as it looks.
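
From what I can tell, the daemon side really is just one write and one
close. A toy sketch in shell, assuming the daemon was told to use fd 3:

  #!/bin/sh
  # ... open listening sockets, load configuration, etc. ...
  echo >&3     # write a newline on the notification fd: "ready to serve"
  exec 3>&-    # then close that fd
  # ... main service loop continues here ...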

Cheers!

Brett

-- 
Brett Neumeier (bneume...@gmail.com)


Re: s6-linux-init: Actions after unmounting filesystems

2019-08-18 Thread Brett Neumeier
On Sat, Aug 17, 2019 at 2:20 PM Guillermo wrote:

> There are certain setups that require doing something after
> filesystems are unmounted. Two examples are LVM logical volumes and
> LUKS encrypted volumes, but I suppose there must be more cases. In any
> such setup, the shutdown sequence would include a 'vgchange -a n'
> command or 'cryptsetup close' command after filesystems are unmounted.
>

How important are these cases?

I have not inspected the code to see what 'cryptsetup close' does, aside
from removing the device mapping for the encrypted volume; nor have I
looked at the LVM code to see whether 'vgchange -a n' does anything other
than remove device mappings. But in either case, what I generally do on my
systems is simply unmount volumes (or, in the case of the root filesystem,
remount it read-only), sync to make sure all dirty buffers are written, and
then just shut down, reboot, or whatever.
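
Concretely, the tail end of the shutdown sequence I describe is just a few
commands; a sketch (flags are util-linux; adjust to taste):

  #!/bin/sh
  umount -a -r           # unmount what we can; -r remounts read-only
                         # anything that refuses to unmount
  mount -o remount,ro /  # make sure the root filesystem is read-only
  sync                   # flush remaining dirty buffers to disk
  reboot -f              # or poweroff -f, as appropriate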

Leaving LVM and encrypted volumes active has never caused any problems for
me.

It seems worthwhile to ask whether those examples are real problems that
need to be addressed, since all of the proposed solutions have some level
of hairiness about them. And if those examples are *not* real issues, it
might be worthwhile to ask if there are other examples that *are*.

-- 
Brett Neumeier (bneume...@gmail.com)


Re: A better method than daisy-chaining logging files?

2019-05-31 Thread Brett Neumeier
On Fri, May 31, 2019 at 4:21 AM Laurent Bercot wrote:

> >I just attempted to link an apache24 instance to its log files via a
> >bundle, which isn't acceptable to s6-rc-compile.
> My advice is to use s6-rc's producer/consumer mechanism for one
> of the log streams, and use a named pipe for the other one, without
> cramming it into the s6-rc mechanism. That would typically mean:
>
> - configure apache24 to output its access log to stdout
> - declare apache24 as a producer for apache24-access-log and
> apache24-access-log as a consumer for apache24
> - apache24-access-log is a simple s6-log invocation, reading
> from its stdin
> - mkfifo /var/run/apache24/error-fifo (with appropriate rights)
> - declare that apache24 outputs its error log to
> /var/run/apache24/error-fifo
> - apache24-error-log has its run script doing something like:
> redirfd -r 0 /var/run/apache24/error-fifo s6-log your-logging-script
> - manually list apache24-error-log in apache24's dependencies, so
> apache24 doesn't start before apache24-error-log. (The pipeline
> mechanism automatically adds apache24-access-log to apache24's deps.)
> - manually define any bundles you want.
>

For what it's worth, I use approximately this setup on my s6- and
s6-rc-managed nginx server. The only difference is that I have nginx using
/dev/stdout as its _error_ stream; and then I have a service that creates a
separate fifo for each site defined in the nginx configuration. Nginx
writes each access log to the appropriate fifo, and there's a separate
s6-log process consuming from each of the fifos. I have had no problems
whatsoever with that setup; it works like a charm and was really pretty
straightforward to set up.

In fact, I find that there are a lot of services I want to run that can
either log to syslog or write to a specific filesystem location, and the
same "service writes to a fifo, s6-log reads from the fifo" mechanism works
fine for all of them. Since I use that pattern so frequently, I create a
`/run/log-fifos` directory to contain all the fifos. I think that makes the
entire mechanism pretty obvious and transparent, which is my general goal
with system administration.
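
To make the pattern concrete, one of those logger run scripts looks roughly
like this (the fifo name, logdir, and rotation settings are just examples):

  #!/bin/sh
  # the fifo is created at boot, e.g.:
  #   mkfifo -m 0600 /run/log-fifos/nginx-access
  # redirfd -rnb opens the fifo without blocking on a missing writer,
  # then switches it back to blocking mode for s6-log
  exec redirfd -rnb 0 /run/log-fifos/nginx-access \
    s6-log t s1000000 n20 /var/log/nginx/access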

Cheers,

Brett

-- 
Brett Neumeier (bneume...@gmail.com)


Re: How best to ensure s6-managed services are shut down cleanly?

2019-02-02 Thread Brett Neumeier
On Fri, Feb 1, 2019 at 1:46 PM Laurent Bercot wrote:

> The question is, how does systemd decide to proceed with the rest of
> the shutdown? If it's just waiting for s6-svscan to die, then it's
> easy: don't allow s6-svscan to die before all your services are
> properly shut down. That can be done by a single s6-svwait invocation
> in .s6-svscan/finish:
>
> #!/bin/sh
> exec s6-svwait -D -t 60000 /scandir/*
>
> and s6-svscan's death won't be reported to systemd before all your
> services are really down, or one minute, whichever happens sooner.
>

Perfect! I figured there would be something. Thanks as always for your help.

-- 
Brett Neumeier (bneume...@gmail.com)


How best to ensure s6-managed services are shut down cleanly?

2019-02-01 Thread Brett Neumeier
I use s6 to supervise userspace services like RabbitMQ and PostgreSQL. The
s6-svscan process is launched and managed by systemd (because it's a CentOS
7 system).

What I would like to do is ensure that PostgreSQL is shut down cleanly when
the system is being powered down or rebooted. Because of the way that
PostgreSQL handles signals, the best way to do that is to send it a SIGINT
and then wait for the main server process to terminate.

I _think_ that with my naive current setup, what actually happens is:

- systemd sends a SIGTERM to s6-svscan;
- s6-svscan sends a SIGTERM or SIGHUP to all s6-supervise processes,
depending on what they are supervising, and then runs the finish program;
- the s6-supervise for postgresql sends a SIGTERM and a SIGCONT to the main
database process. It then waits for the postgresql process to terminate,
runs its finish program if there is one, and exits;
- because PostgreSQL responds to SIGTERM ("smart shutdown") by disallowing
new connections but letting existing sessions keep running, it continues
doing that until it is killed.

Reviewing the current docs for s6, I see that I can improve this situation
a bit by using a "down-signal" file to tell s6-supervise to send a SIGINT
instead of a SIGTERM. That's cool! But what I would really _like_ to do is
wait for up to a minute to allow the database to shut down cleanly before
the system shutdown proceeds -- something more like...

s6-svc -Oic -wd -T60000 /path/to/svcdir || s6-svc -Oq -wd /path/to/svcdir

Is there an elegant way to get that to happen?
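
As an aside, the down-signal bit is just a file in the servicedir (the path
here is a placeholder):

  # tell s6-supervise to send SIGINT instead of SIGTERM on "s6-svc -d"
  echo SIGINT > /path/to/svcdir/down-signal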

It seems like maybe I could do that by running s6-svscan with the -s
option, and writing a .s6-svscan/SIGTERM handler, or by putting the
commands I want to run in the s6-svscan finish script, but if there's a
better option I am really curious to know it!
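
For instance, a minimal sketch of the finish-script variant (the scandir
path is a placeholder; s6-svwait's -t is in milliseconds):

  #!/bin/sh
  # .s6-svscan/finish: don't let s6-svscan's death be reported to
  # systemd until every service is really down, or a minute has passed
  exec s6-svwait -D -t 60000 /var/s6scandir/*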

Cheers,

Brett

-- 
Brett Neumeier (bneume...@gmail.com)


Re: Can s6 be enough?: was s6-ps

2019-01-06 Thread Brett Neumeier
On Sat, Jan 5, 2019 at 2:30 PM Steve Litt wrote:

> So what do you all think? Is s6 a useful init system without s6-rc?
>

My 0.02 USD -- based on my experience of setting up a simple GNU/Linux
distribution from the ground up using s6, s6-rc, and s6-linux-init...

- s6-rc provides useful functionality: when defining how the system should
start up, it is really handy to have bundles and oneshots, and to be able
to start up or shut down groups of processes via bundles.
- The cost of using s6-rc is negligible. As installed on my x86_64 system
with documentation, it consumes around 576 *kilobytes* of storage space. It
compiles and installs in substantially less than a minute. Learning how to
craft s6-rc service definition directories is no more difficult than
learning how to craft s6 servicedirs.
- You don't lose any capability provided by s6 if you also use s6-rc. You
can still send whatever signals you want to the supervised processes by
using s6-svc directly (see the example below).
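
For example (the scandir path and service name are placeholders):

  # send SIGHUP directly to an s6-rc-managed service to make it reload
  s6-svc -h /run/service/nginx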

So: costs ~= 0, benefits > 0. To me, the question of whether s6 is useful
_without_ s6-rc is kind of pointless.

I'm inclined to turn the question around: what leads you to want to avoid
s6-rc? Is there some other system that provides more benefits at lower cost?

Cheers!

Brett

-- 
Brett Neumeier (bneume...@gmail.com)


Re: s6-svscanboot, how to exit?

2017-07-14 Thread Brett Neumeier
On Fri, Jul 14, 2017 at 10:58 AM, Laurent Bercot
<ska-supervis...@skarnet.org> wrote:

> > Do you have example settings for systemd to start s6-svscan?
>
> [...]
>
> But I'm sure somebody somewhere has such an example, and if not, it
> really shouldn't be too hard, because it's a very simple service.
>

I do! -- because I'm using s6 for process supervision on CentOS 7, which
uses systemd as init. (I'm also using s6/s6-rc/s6-linux-init on my own
built-from-source distribution, and prefer it *enormously*, btw.)

What I have is:

----- cut here -----
[Unit]
Description=s6-svscan

[Service]
Type=simple
Environment=PATH=/opt/s6/bin:/sbin:/bin:/usr/sbin:/usr/bin
ExecStart=/var/s6scandir/.s6-svscanboot
Restart=always

[Install]
WantedBy=multi-user.target
----- cut here -----
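
The .s6-svscanboot script itself isn't shown here; a minimal sketch of what
such a script could contain (the scan directory matches the unit above):

  #!/bin/sh
  # exec so that s6-svscan replaces the shell and systemd supervises
  # it directly
  exec s6-svscan /var/s6scandir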


-- 
Brett Neumeier (bneume...@gmail.com)


Re: possible to transform signals sent by s6-svc?

2016-10-23 Thread Brett Neumeier
On Sat, Oct 15, 2016 at 12:09 AM, Laurent Bercot
<ska-supervis...@skarnet.org> wrote:

> > So what I'd like to do is have "s6-svc -d" result in the supervised
> > process receiving SIGINT and SIGCONT, rather than SIGTERM and SIGCONT.
>
> The problem with custom control is [...] A bad script could make
> s6-supervise unresponsive and useless. I thought it was too much of a
> risk for the benefit it brings.

(Hunh, I tried sending this almost a week ago and it bounced. Strange.)

I concur. And it's not particularly important, as far as that goes -- as
you pointed out, using -Oic is arguably the right thing to do, because it's
more transparent about what is going on. The only reason I prefer "-d" is
that it's more symmetric with other managed services, and if the database
needs to be bounced in a hurry it's likely because of a crisis, and I hate
to increase the cognitive load on the operator who's trying to deal with
the hypothetical future crisis.

On the other hand, I can write a *really simple* control script that just
takes sysvinit-style "start", "stop", "reload", etc. arguments and runs the
appropriate s6-svc command (see the sketch at the end of this message). Or,
alternatively:

> But there's a way to trap and convert signals in your run script itself:
> http://skarnet.org/software/execline/trap.html


I had totally missed the trap command! That provides a nice workaround.
Thank you! I'll try modifying the run script to use this approach and see
whether the result makes me happier than the control-script approach.
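
For completeness, here is the control-script sketch I had in mind (the
service directory path is a placeholder):

  #!/bin/sh
  svcdir=/service/postgresql
  case "$1" in
    start)  exec s6-svc -u "$svcdir" ;;
    stop)   exec s6-svc -Oic "$svcdir" ;;  # SIGINT+SIGCONT, no restart
    reload) exec s6-svc -h "$svcdir" ;;    # SIGHUP
    *)      echo "usage: $0 start|stop|reload" >&2; exit 1 ;;
  esac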

Cheers,

Brett


-- 
Brett Neumeier (bneume...@gmail.com)