Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Avery Payne


On 1/19/2015 2:31 PM, Jonathan de Boyne Pollard wrote:

Avery Payne:
> * implement a ./wants directory.  [...]
> * implement a ./needs directory.  [...]
> * implement a ./conflicts directory.  [...]

Well this looks familiar.


I ducked out of ./needs and ./conflicts for the time being; if I spend 
too much time making those features, the scripts won't move forward.  
I'm already several weeks "behind" the schedule I have set for the 
scripts.




Before you read further, including to my next message, get yourself a 
copy of nosh and read the manual pages therein for service-manager(1) 
and system-control(1), paying particular attention in the latter to 
the section entitled "Service bundles".


Then grab the nosh Guide and read the new interfaces chapter.  On a 
Debian system this would be:


xdg-open /usr/local/share/doc/nosh/new-interfaces.html


Sounds like I have my homework cut out for me.  I will do so as soon as 
I can, although I warn you that it joins an already-long list of 
material to read and think about.




Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

Laurent Bercot:

I'm of the opinion that packagers will naturally go towards what gives
them the less work, and the reason why supervision frameworks have
trouble getting in is that they require different scripting and
organization, so supporting them would give packagers a lot of work;
whereas sticking to the SysV way allows them not to change what they
already have. Especially with systemd around, which they already kinda
have to convert services to, I don't see them bothering with another
out of the way packaging design.

So, to me, the really important work that you are doing is the run
script collection and standardization. If you provide packagers with
already made run scripts, you are helping them tremendously by reducing
the amount of work that supervision support needs, and they'll be more
likely to adopt it.

nosh 1.12 comes with a collection of some 177 pre-built service bundles. 
As I said to the FreeBSD people, I have a goal of making the 155 service 
bundles that should replace most of FreeBSD's rc.d.  (There are a load 
of non-FreeBSD bundles in there, including ones for VirtualBox services, 
OpenStack, RabbitMQ, and so forth.  This is why I haven't reached 155 
even though I've made 177.)


It also comes with a tool for importing system service and socket units 
into service bundles.  And the nosh Guide chapter on creating service 
bundles has pointers to the run file collections by Gerrit Pape, Wayne 
Marshall, Kevin J. DeGraaf, and Glenn Strauss.


xdg-open /usr/local/share/doc/nosh/creating-bundles.html

Incidental note: I just added another service bundle, for nagios, to 
version 1.13 because of this:


http://unix.stackexchange.com/questions/179798#179798

Enjoy this, too:


http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/run-scripts-and-service-units-side-by-side.html



RE: thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

Avery Payne:

Finally, with regard to the up vs actually running issue, I'm not even
going to try and address it due to the race conditions involved.


RabbitMQ is the canonical example of the real world problems here. With 
durable queues containing a lot of persistent messages, it can be tens 
of minutes (on one of my machines) before a freshly started RabbitMQ 
daemon opens its server socket and allows TCP connections.
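One generic way to bridge that gap is to poll for readiness rather than treat "up" as "accepting connections". A minimal sketch in plain shell; the function name and interface are mine, purely illustrative, and not part of any of the tools discussed here:

```shell
#!/bin/sh
# wait_ready: run a caller-supplied readiness check until it succeeds
# or the timeout (in seconds) expires.  The check command is whatever
# actually probes the service, e.g. a TCP connect attempt against the
# daemon's server port.
wait_ready() {
    t=$1; shift
    until "$@"; do
        t=$((t - 1))
        [ "$t" -le 0 ] && return 1
        sleep 1
    done
}
```

For a daemon like the RabbitMQ case above, a dependent service would call something like `wait_ready 1200 <port-probe-command>` before proceeding, while the supervisor simply keeps the process running.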




Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

John Albietz:
 > I wonder if this will help address a common situation for me where I
 > install a package and realize that at the end of the installation the
 > daemon is started using upstart or sysv.
 >
 > At that point, to 'supervise' the app, I first have to stop the current
 > daemon and then start it up using runit or another process manager.

On Debian, for one, they aren't started using upstart or sysv (whatever 
that is).  Maintainer scripts enable them with update-rc.d and start 
them with invoke-rc.d.  You are expected to have update-rc.d and 
invoke-rc.d tools that are appropriate to your init system, as well as 
the respective control files of course.


openrc comes with update-rc.d and invoke-rc.d that understand openrc 
scripts.  The sysv-rc package comes with update-rc.d and invoke-rc.d 
that understand systemd units, upstart jobs, and System 5 rc.d scripts. 
 Ironically, the systemd and upstart packages do not come with their 
own update-rc.d and invoke-rc.d commands, relying instead upon the 
sysv-rc package to supply them.


This is all a bit shaky and rickety, though.  One well-known fly in the 
ointment is that what may be a single init.d script for System 5 rc may 
be multiple service and socket units for systemd, and stopping a 
socket-activated service for a package upgrade might not do the right 
thing, as socket activation might start the service unit back up mid-upgrade.


Last year, I gave Debian Developer Russ Allbery a patch for an improved 
version of the Debian Policy Manual that sets this out more clearly than 
the current one.  You might want to get it off him. The sections (of the 
patched document) that you are interested in are 9.3.1.2, 9.3.6, 
9.3.6.1, 9.3.6.2, and 9.3.6.3.




Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

Avery Payne:

But from a practical perspective there isn't anything right now that

handles

dependencies at a global level.


Now you know that there is.

The nosh design is, as you have seen, one that separates policy and 
mechanism.  The service-manager provides a way of loading and unloading 
services, and starting and stopping them.  The dependency system is 
layered entirely on top of that.  Whilst one can use 
svscan/service-dt-scanner and svc/service-control and have things run 
the daemontools way, one can alternatively use the 
start/stop/enable/disable subcommands of system-control and have service 
dependency management instead.


The trick is that dependency management is calculated by the 
system-control program.  When you tell it "systemctl start 
workstation.target" it follows all of the wants/, required-by/, and 
conflicts/ symbolic links recursively to construct a set of start/stop 
jobs for the relevant services.  Then it follows the after/ and before/ 
symbolic links to turn that set into an ordered graph of jobs.  Finally, 
it iterates through the graph repeatedly, sending start and stop 
commands to the service manager for the relevant services and polling 
their statuses, until all jobs have been enacted.


conflicts/ is actually easy, although it took me two tries.  If 
"A/wants/B" exists, then a start job for A creates a start job for B. 
If "A/conflicts/B" exists then a start job for A creates a stop job for B.
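That rule can be sketched in a few lines of shell.  The function below is purely illustrative (the name, output format, and traversal are mine, not how system-control is actually implemented); it walks wants/ and conflicts/ symlinks and prints the resulting job set:

```shell
#!/bin/sh
# add_jobs: given a service directory, emit a "start" job for it,
# recurse into its wants/ links, and emit "stop" jobs for its
# conflicts/ links.  SEEN guards against cycles in the wants/ graph.
add_jobs() {
    case " $SEEN " in *" $1 "*) return 0 ;; esac
    SEEN="$SEEN $1"
    echo "start $1"
    for link in "$1"/wants/*; do
        [ -e "$link" ] && add_jobs "$(readlink -f "$link")"
    done
    for link in "$1"/conflicts/*; do
        [ -e "$link" ] && echo "stop $(readlink -f "$link")"
    done
    return 0
}

if [ $# -ge 1 ]; then
    SEEN=""
    add_jobs "$(readlink -f "$1")"
fi
```

The real tool additionally orders the job set with after/ and before/ links and polls the service manager until every job has been enacted; this sketch only shows the wants/conflicts expansion.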


The important point is that the service manager is entirely ignorant of 
this.  It is just told to start and stop individual services, and it 
knows nothing at all of dependencies or orderings.  (A dependency is not 
an ordering relationship, of course, but the manual page that I said to 
read has already explained that. (-:)  It's all worked out outwith the 
service manager.


Which means, of course, that it is alterable without changing the 
service manager.


And indeed, nosh demonstrates such alteration in action.  The 
dependency-based system that one gets with system-control is one of two 
alternative policies that come in the box; the other being an old 
daemontools-style system with a /service directory and ./log 
relationships, as implemented by the service-dt-scanner (a.k.a. svscan) 
daemon.


Again, the trick is that the daemontools-style stuff is all worked out 
in svscan, and the service manager proper has no dealings in it.  The 
service manager provides a flexible plumbing layer.  Higher layers 
decide how they want that plumbing put together.


Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Luke Diamand

On 08/01/15 17:53, Avery Payne wrote:

The use of hidden directories was done for administrative and aesthetic
reasons.  The rationale was that the various templates and scripts and
utilities shouldn't be mixed in while looking at a display of the various
definitions.


Why shouldn't they be mixed in? Surely better to see everything clearly 
and plainly, than to hide some parts away where people won't expect to 
find them. I think this may confuse people, especially if they use tools 
that ignore hidden directories.


At least on my Debian system, there are hardly any other hidden files or 
directories of note under /etc, so this would be setting a bit of a 
precedent to have quite so many non-trivial items present.



The other rationale was that the entire set of definitions
could be moved or copied using a single directory, although it doesn't work
that way in practice, because a separate cp is needed to move the
dot-directories.


Move everything down one level then?

sv
   - templates
   - run-scripts
   - other-things-not-yet-thought-of

FWIW, I started trying to make a debian package, and dpkg got very upset 
about all those dot files.






Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Avery Payne
On Thu, Jan 8, 2015 at 9:23 AM, Steve Litt 
wrote:
>
> I'm having trouble understanding exactly what you're saying. You mean
> the executable being daemonized fails, by itself, because a service it
> needs isn't there, right? You *don't* mean that the init itself fails,
> right?
>

Both correct.


> I'm not sure what you're saying. Are you saying that the dependency
> code is in the runscript, but within an IF statement that checks
> for ../env/NEEDS_ENABLED?
>

Correct.  If the switch, which is a data value in a file, is zero, it
simply skips all of the dependency stuff with a giant if-then wrapper.  At
least, that's the plan.  I won't know until I can get to it.
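A sketch of what such a wrapper could look like; the helper name is mine and the paths merely follow the layout described elsewhere in the thread, so treat the details as assumptions:

```shell
#!/bin/sh
# needs_enabled: true only when the switch file exists and contains "1".
# ENVDIR defaults to the ../.env directory the templates use.
ENVDIR=${ENVDIR:-../.env}

needs_enabled() {
    [ -r "$ENVDIR/NEEDS_ENABLED" ] &&
        [ "$(cat "$ENVDIR/NEEDS_ENABLED")" = "1" ]
}

if needs_enabled; then
    echo "dependency handling: enabled"
    # ...bring up ./needs entries here before exec'ing the daemon...
else
    echo "dependency handling: skipped (naive launch)"
fi
```

With the flag unset or zero, the whole dependency branch is bypassed and the launch continues "as normal".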


> > Like I
> > said, this will be a fall-back feature, and it will have minor
> > annoyances or issues.
>
> Yes. If I'm understanding you correctly, you're only going so far in
> determining "really up", because otherwise writing a one size fits all
> services thing starts getting way too complicated.
>

Correct.  I'm taking an approach that has "the minimum needed to make
things work correctly."


>
> I was looking at runit docs yesterday before my Init System
> presentation, and learned that I'm supposed to put my own "Really Up"
> code in a script called ./check.
>

Also correct, although I'm trying to only do ./check scripts where
absolutely needed, such as the ypbind situation.  Otherwise, the check
usually looks at "is the child PID still around".
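For that common case, a minimal ./check might look like the sketch below.  The supervise/pid path follows runit's supervise directory layout; the function name is mine, so treat both as assumptions:

```shell
#!/bin/sh
# check_pid: succeed only while the supervised child's PID is alive.
# $1 is the service directory; runit records the PID in supervise/pid,
# and kill -0 tests for existence without sending a signal.
check_pid() {
    pid=$(cat "$1/supervise/pid" 2>/dev/null) || return 1
    [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null
}

if [ $# -ge 1 ]; then
    check_pid "$1"
fi
```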


> If I read the preceding correctly, you're making service tool calls for
> runit, s6, perp and nosh grammatically identical.


Correct.


> Are you doing that so
> that your run scripts can invoke the init-agnostic commands, so you
> just have one version of your scripts?
>

Exactly correct.  This is how I am able to turn the bulk of the definitions
into templates.  ./run files in the definition directories are little more
than symlinks back to a script in ../.run, which means...write once, use a
whole lot. :)  It's also the reason that features are slow in coming - I
have to be very, very careful about interactions.


>
> However you end up doing the preceding, I think it's essential to
> thoroughly document it, complete with examples. I think that the
> additional layer of indirection might be skipped by those not fully
> aware of the purpose.
>

I just haven't gotten around to this part, sorry.


>
> I can help with the documentation.
>

https://bitbucket.org/avery_payne/supervision-scripts
or
https://github.com/apayne/supervision-scripts

Feel free to clone, change, and send a pull request.


Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Avery Payne
The use of hidden directories was done for administrative and aesthetic
reasons.  The rationale was that the various templates and scripts and
utilities shouldn't be mixed in while looking at a display of the various
definitions.  The other rationale was that the entire set of definitions
could be moved or copied using a single directory, although it doesn't work
that way in practice, because a separate cp is needed to move the
dot-directories.

The basic directory structure is as follows:

sv
-> .bin
-> .env
-> .finish
-> .log
-> .run

Where:

sv is the container of all definitions and utilities.  Best-case, the
entire structure, including dot directories, could be set in place with mv,
although this is something that a package maintainer would be likely to
do.  People initially switching over will probably want to use cp while the
project develops.  That way, you can pull new definitions and bugfixes with
git or mercurial, and copy them into place.  Or you could download it as a
tarball off of the website(s) and simply expand-in-place.  So there's a few
different ways to get this done.

.bin is meant to store any supporting programs.  At the moment this is a
bit of a misnomer because it really only stores the framework shunts and
the supporting scripts for switching those shunts.  It may have actual
binaries in the future, such as usersv, or other independent utilities.
When you run use-* to switch frameworks, it changes a set of symlinks to
point to what should be the tools of your installed framework; this makes
it "portable" between all frameworks, a key feature.

.env is an environment variable directory meant to be loaded with the
envdir tool.  It represents system-wide settings, like PATH, and some of
the settings that are global to all of the definitions.  It is used within
the templates.

.finish will hold ./finish scripts.  Right now, it's pretty much a stub.
Eventually it will hold a basic finish script that alerts the administrator
to issues with definitions not launching, as well as handling other
non-standard terminations.

.log will hold ./log scripts.  It currently has a single symlink, ./run,
that points to whatever logging system is the default.  At the moment it's
svlogd only because I haven't finished logging for s6 and daemontools.
Eventually .log/run will be a symlink to whatever logging arrangement you
need.  In this fashion, the entire set of scripts can be switched by simply
switching the one symlink.

.run will hold the ./run scripts.  It has a few different ones in them, but
the main one at this time is run-envdir, which loads daemon specific
settings from the definition's env directory and uses them to launch the
daemon.  Others include an optional feature for user-defined services, and
basic support for one of three gettys.  I may or may not make a new one for
the optional dependency feature; I'm going to see if it can be standardized
within run-envdir first.
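The envdir-style loading that run-envdir relies on can be approximated in pure shell.  The sketch below is a simplification (the function name and the DAEMON/DAEMON_OPTS variables are mine; real envdir also removes a variable when its file is empty and translates nulls to newlines): it exports one variable per file in a directory, named after the file:

```shell
#!/bin/sh
# load_envdir: for each regular file in the given directory, export a
# variable named after the file whose value is the file's contents
# (trailing newline stripped by the command substitution).
load_envdir() {
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        name=$(basename "$f")
        value=$(cat "$f")
        export "$name=$value"
    done
}
# A run script could then do:
#     load_envdir ./env && exec "$DAEMON" $DAEMON_OPTS
```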

I can always remove the dots, but then you would have these mixed in with
all of the definitions, and I think it will add to the confusion more than
having them hidden.  As it stands, the only time you need to mess with the
dot directories is (a) when setting them up for the first time, or (b) when
you are switching your logging around.  Otherwise there's really no need to
be in them, and when you use "ls /etc/sv" to see what is available, they
stay out of your way.

If there is a better arrangement that keeps everything in one base
directory for easy management but eliminates the dots, I'll listen.
Although I think this arrangement actually makes a bit more sense, and the
install instructions are careful to include the dots, so you only need to
mess around with them at install time.

On Thu, Jan 8, 2015 at 8:20 AM, Luke Diamand  wrote:

> Is it possible to avoid using hidden files (.env) as it makes it quite a
> lot harder for people who don't know what's going on to, um, work out
> what's going on.
>
> Thanks!
> Luke
>
>


Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Steve Litt
On Wed, 7 Jan 2015 14:25:28 -0800
Avery Payne  wrote:

> On Wed, Jan 7, 2015 at 7:23 AM, Steve Litt 
>  wrote:
> >
> > I'm pretty sure this conforms to James' preference (and mine
> > probably) that it be done in the config and not in the init program.
> >
> > To satisfy Laurent's preference, everything but the exec cron -f
> > could be commented out, and if the user wants to use this, he/she
> > can uncomment all the rest. Or your run script writing program
> > could have an option to write the dependencies, or not.
> >
> 
> I've pretty much settled on a system-wide switch in sv/.env (which in
> the scripts will show up as ../.env).  The switch will, by default,
follow Laurent's behavior of "naive launching", i.e. no dependencies
> are up, missing dependencies cause failures,

I'm having trouble understanding exactly what you're saying. You mean
the executable being daemonized fails, by itself, because a service it
needs isn't there, right? You *don't* mean that the init itself fails,
right?

This is all pretty cool, Avery! Currently, my best guess is that
eventually my daily driver computer will be initted by runit (with
Epoch as a backup, should I bork my runit). It sounds like what you're
doing will ease runit config by an order of magnitude.

>  and the admin must check
> logging for notifications. Enabling the feature would be as simple as
> 
> echo "1" > /etc/sv/.env/NEEDS_ENABLED
> 
> ...and every new service launch would receive it.  You could also
> force-reload with a restart command.  Without the flag, the entire
> chunk of dependency code is bypassed and the launch continues "as
> normal".

I'm not sure what you're saying. Are you saying that the dependency
code is in the runscript, but within an IF statement that checks
for ../env/NEEDS_ENABLED?

> 
> The goal is the same but the emphasis has changed.  This will be
> considered a fall-back feature for those systems that do not have
> such a tool available, or have constraints that force the continued
> use of a shell launcher.  It is the option of last resort, and while
> I think I can make it work fairly consistently, it will come with
> some warnings in the wiki.  For Laurent, he wouldn't even need to
> lift a finger - it fully complies with his desires out of the box. ;-)
> 
> As new tools emerge in the future, I will be able to write a shunt
> into the script that detects the tool and uses it instead of the
> built-in scripted support.  This will allow Laurent's work to be
> integrated without messing anything up, so the behavior will be the
> same, but implemented differently.

Now THAT'S Unix at work!

> 
> Finally, with regard to the up vs actually running issue, I'm not even
> going to try and address it due to the race conditions involved.  The
> best I will manage is to first issue the up, then do a service check
> to confirm that it didn't die upon launch, which for a majority (but
> not all) cases should suffice.  Yes, there are still race conditions,
> but that is fine - I'm falling back to the original model of "service
> fails continually until it succeeds", which means a silently-failed
> "child" dependency that was missed by the "check" command will still
> cause the "parent" script to fail, because the daemon itself will
> fail.  It is a crude form of graceful failure.  So the supervisor
> starts the parent again...and again...until the truant dependency is
> up and running, at which point it will bring the parent up.  Like I
> said, this will be a fall-back feature, and it will have minor
> annoyances or issues.

Yes. If I'm understanding you correctly, you're only going so far in
determining "really up", because otherwise writing a one size fits all
services thing starts getting way too complicated.

I was looking at runit docs yesterday before my Init System
presentation, and learned that I'm supposed to put my own "Really Up"
code in a script called ./check.

> 
> Right now the biggest problem is handling all of the service tool
> calls. They all have the same grammar, (tool) (command) (service
> name), so I can script that easily.  Getting the tools to show up as
> the correct command and command option is something else, and I'm
> working on a way to wedge it into the use-* scripts so that the tools
> are set up out of the box all at the same time.  This will create
> $SVCTOOL, and a set of $CMDDOWN, $CMDUP, $CMDCHECK, etc. that will be
> used in the scripts.  **Once that is done I can fully test the rest
> of the dependency concept and get it fleshed out.** If anyone wants
> to see it, email me directly and I'll pass it along, but there's not
> much to look at.

If I read the preceding correctly, you're making service tool calls for
runit, s6, perp and nosh grammatically identical. Are you doing that so
that your run scripts can invoke the init-agnostic commands, so you
just have one version of your scripts?

However you end up doing the preceding, I think it's essential to
thoroughly document it, complete with examp

Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Luke Diamand
Is it possible to avoid using hidden files (.env) as it makes it quite a
lot harder for people who don't know what's going on to, um, work out
what's going on.

Thanks!
Luke


On 7 January 2015 at 22:25, Avery Payne  wrote:

> On Wed, Jan 7, 2015 at 7:23 AM, Steve Litt 
>  wrote:
> >
> > I'm pretty sure this conforms to James' preference (and mine probably)
> > that it be done in the config and not in the init program.
> >
> > To satisfy Laurent's preference, everything but the exec cron -f could
> > be commented out, and if the user wants to use this, he/she can
> > uncomment all the rest. Or your run script writing program could have an
> > option to write the dependencies, or not.
> >
>
> I've pretty much settled on a system-wide switch in sv/.env (which in the
> scripts will show up as ../.env).  The switch will, by default, follow
> Laurent's behavior of "naive launching", i.e. no dependencies are up,
> missing dependencies cause failures, and the admin must check logging for
> notifications. Enabling the feature would be as simple as
>
> echo "1" > /etc/sv/.env/NEEDS_ENABLED
>
> ...and every new service launch would receive it.  You could also
> force-reload with a restart command.  Without the flag, the entire chunk of
> dependency code is bypassed and the launch continues "as normal".
>
> The goal is the same but the emphasis has changed.  This will be considered
> a fall-back feature for those systems that do not have such a tool
> available, or have constraints that force the continued use of a shell
> launcher.  It is the option of last resort, and while I think I can make it
> work fairly consistently, it will come with some warnings in the wiki.  For
> Laurent, he wouldn't even need to lift a finger - it fully complies with
> his desires out of the box. ;-)
>
> As new tools emerge in the future, I will be able to write a shunt into the
> script that detects the tool and uses it instead of the built-in scripted
> support.  This will allow Laurent's work to be integrated without messing
> anything up, so the behavior will be the same, but implemented differently.
>
> Finally, with regard to the up vs actually running issue, I'm not even
> going to try and address it due to the race conditions involved.  The best
> I will manage is to first issue the up, then do a service check to confirm
> that it didn't die upon launch, which for a majority (but not all) cases
> should suffice.  Yes, there are still race conditions, but that is fine -
> I'm falling back to the original model of "service fails continually until
> it succeeds", which means a silently-failed "child" dependency that was
> missed by the "check" command will still cause the "parent" script to fail,
> because the daemon itself will fail.  It is a crude form of graceful
> failure.  So the supervisor starts the parent again...and again...until the
> truant dependency is up and running, at which point it will bring the
> parent up.  Like I said, this will be a fall-back feature, and it will have
> minor annoyances or issues.
>
> Right now the biggest problem is handling all of the service tool calls.
> They all have the same grammar, (tool) (command) (service name), so I can
> script that easily.  Getting the tools to show up as the correct command
> and command option is something else, and I'm working on a way to wedge it
> into the use-* scripts so that the tools are set up out of the box all at
> the same time.  This will create $SVCTOOL, and a set of $CMDDOWN, $CMDUP,
> $CMDCHECK, etc. that will be used in the scripts.  **Once that is done I
> can fully test the rest of the dependency concept and get it fleshed out.**
>  If anyone wants to see it, email me directly and I'll pass it along, but
> there's not much to look at.
>
> Unfortunately, the envdir tool, which I use to abstract away the daemons
> and settings, only chain-loads; it would be nice if it had a persistence
> mechanism, so that I could "load once" for the scope of the shell script.
> Because of that, there will be some odd scripting in there that pulls the
> values, i.e.
>
> [ -f ../.env/CMDUP ] || { echo "$(basename $0): fatal error: unable to load
> CMDUP"; exit 99; }
> CMDUP=$(cat ../.env/CMDUP)
>
> with an entry for each command.
>
>
> > In my 5 minute thought process, the last remaining challenge, and it's
> > a big one, is to get the right service names for the dependencies, and
> > that requires a standardized list, because, as far as I know, the
> > daemontools-inspired inits don't have "provides". Such a list would be
> > hard enough to develop and have accepted, but I expect our
> > friends at Red Hat to start changing the names in order to mess us
> > up.
> >
>
> Using a "./provides" as a rendezvous or advertisement mechanism I think is
> nice-in-concept but difficult-in-practice. Give it a bit more thought and
> you'll see that we're not just talking about the *service* but also any
> *protocol* to speak with it and one or more *data transport* needed to talk

Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Laurent Bercot

  Here's an ugly hack that allows you to do that using envdir:
set -a
eval $({ env; envdir ../.env env; } | grep -vF -e _= -e SHLVL= | sort | uniq -u)
set +a


 Ugh, in the morning (almost) light it's even uglier than I thought,
because it won't work for values you *change* either, which could
be important for things like PATH.  Or if a value contains a _=
or such.
 Try this instead:

set -a
eval $(envdir ../.env env | grep -v -e ^_= -e ^SHLVL=)
set +a

 It will reexport all the variables you already have, which is
also ugly, and it still won't remove anything, but at least it
will pick up changes to existing variables.

 Ah, the shell.  Every time I doubt whether execline was worth
writing, it only takes a few minutes of /bin/sh scripting to
renew my confidence.



It will work fine. I'm attempting to pre-load values that will remain
constant inside the scope of the script, so there isn't a need to change
them at runtime.


 My point is, if you are using this construct in run scripts, remember
the scripts will inherit some environment from init - that could contain
system-wide settings such as PATH or TZ. If for some reason you need to
remove or modify those variables in a specific script, it is important
that it works as expected. :)

--
 Laurent



RE: thoughts on rudimentary dependency handling

2015-01-08 Thread James Powell
I'll be following this intently as I have a project I'm working on that will 
use s6 heavily even discretely.

Sent from my Windows Phone

From: Avery Payne<mailto:avery.p.pa...@gmail.com>
Sent: 1/7/2015 11:58 PM
To: supervision@list.skarnet.org<mailto:supervision@list.skarnet.org>
Subject: Re: thoughts on rudimentary dependency handling



Re: thoughts on rudimentary dependency handling

2015-01-07 Thread Avery Payne
On Wed, Jan 7, 2015 at 6:53 PM, Laurent Bercot 
wrote:
>
>  Unfortunately, the envdir tool, which I use to abstract away the daemons
>> and settings, only chain-loads; it would be nice if it had a persistence
>> mechanism, so that I could "load once" for the scope of the shell script.
>>
>
>  Here's an ugly hack that allows you do that using envdir:
> set -a
> eval $({ env; envdir ../.env env; } | grep -vF -e _= -e SHLVL= | sort |
> uniq -u)
> set +a
>

Thanks!  When I can carve out a bit of time this week I'll put it in and
finish up the few bits needed.  Most of the dependency loop is already
written, I just didn't have a somewhat clean way of pulling in the
$CMDWHATEVER settings without repeatedly reading ./env over and over.


>  It only works for variables you add, though, not for variables you remove.


It will work fine. I'm attempting to pre-load values that will remain
constant inside the scope of the script, so there isn't a need to change
them at runtime.


Re: thoughts on rudimentary dependency handling

2015-01-07 Thread Laurent Bercot

On 07/01/2015 23:25, Avery Payne wrote:

The goal is the same but the emphasis has changed.  This will be considered
a fall-back feature for those systems that do not have such a tool
available, or have constraints that force the continued use of a shell
launcher.  It is the option of last resort, and while I think I can make it
work fairly consistently, it will come with some warnings in the wiki.  For
Laurent, he wouldn't even need to lift a finger - it fully complies with
his desires out of the box. ;-)


 Eh, you don't have to. It's not for me, it's for the users. I honestly
believe it's better this way, but you may disagree, and as long as you're
the one writing the code, you're the one having the say.



Unfortunately, the envdir tool, which I use to abstract away the daemons
and settings, only chain-loads; it would be nice if it had a persistence
mechanism, so that I could "load once" for the scope of the shell script.


 Here's an ugly hack that allows you do that using envdir:
set -a
eval $({ env; envdir ../.env env; } | grep -vF -e _= -e SHLVL= | sort | uniq -u)
set +a

 It only works for variables you add, though, not for variables you remove.
 Also, be aware that it considerably lengthens the code path... but not
significantly more than dependency management in shell already does. :P
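The merge can be tried without envdir itself by letting a plain `env` invocation inject the extra variable. This is only a toy illustration of the `sort | uniq -u` trick above; `NEWVAR` is a made-up name, and `env NEWVAR=hello env` merely stands in for `envdir ../.env env`:

```shell
#!/bin/sh
# Toy version of the uniq -u merge: variables present in both listings
# appear twice and are dropped by `uniq -u`, so only the newly added
# NEWVAR=hello line survives and gets eval'd under `set -a`.
set -a
eval "$({ env; env NEWVAR=hello env; } | grep -vF -e _= -e SHLVL= | sort | uniq -u)"
set +a
echo "NEWVAR is now: $NEWVAR"
```

As the caveats above note, a variable that is *changed* (not added) would survive the filter twice, with both values fed to eval, which is why this stays firmly in "ugly hack" territory.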



Sshh!  Don't talk too loudly...you'll expose my secret plan! :-)  But
thanks for saying that, and you are right.  The drive of the project is to
increase accessibility of the frameworks, and in doing so, increase
adoption.  Adoption will lead to feedback, which leads to improvements,
which drives development forward, increasing accessibility, etc. in a
continuous feedback loop.


 This is a very laudable goal, if you think the way to getting into distros
is this kind of accessibility.

 I'm of the opinion that packagers will naturally go towards what gives
them the least work, and the reason why supervision frameworks have trouble
getting in is that they require different scripting and organization, so
supporting them would give packagers a lot of work; whereas sticking to
the SysV way allows them not to change what they already have.
 Especially with systemd around, which they already kinda have to convert
services to, I don't see them bothering with another out of the way
packaging design.

 So, to me, the really important work that you are doing is the run script
collection and standardization. If you provide packagers with already made
run scripts, you are helping them tremendously by reducing the amount of
work that supervision support needs, and they'll be more likely to adopt it.
 It's a thankless job, and I understand that you want to have some fun by
playing with a dependency manager ;)

--
 Laurent



RE: thoughts on rudimentary dependency handling

2015-01-07 Thread Avery Payne
On Wed, Jan 7, 2015 at 7:23 AM, Steve Litt 
 wrote:
>
> I'm pretty sure this conforms to James' preference (and mine probably)
> that it be done in the config and not in the init program.
>
> To satisfy Laurent's preference, everything but the exec cron -f could
> be commented out, and if the user wants to use this, he/she can
> uncomment all the rest. Or your run script writing program could have an
> option to write the dependencies, or not.
>

I've pretty much settled on a system-wide switch in sv/.env (which in the
scripts will show up as ../.env).  The switch will, by default, follow
Laurent's behavior of "naive launching", i.e. no dependencies are brought
up, missing dependencies cause failures, and the admin must check logging
for notifications. Enabling the feature would be as simple as

echo "1" > /etc/sv/.env/NEEDS_ENABLED

...and every new service launch would receive it.  You could also
force-reload with a restart command.  Without the flag, the entire chunk of
dependency code is bypassed and the launch continues "as normal".
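As a sketch, such a flag-gated block in a /bin/sh run script might look like the following. The `../.env/NEEDS_ENABLED` switch and the `$SVCTOOL`/`$CMDUP` names come from this thread, but the exact shape is an assumption, not the project's actual template:

```shell
#!/bin/sh
# Sketch of a flag-gated dependency block for a run script.
# NEEDS_ENABLED, $SVCTOOL and $CMDUP are names from this thread;
# the directory layout here is an assumption.
start_needs() {
  env_dir=$1      # e.g. ../.env
  needs_dir=$2    # e.g. ./needs
  # feature switch absent: bypass the dependency code entirely
  [ -f "$env_dir/NEEDS_ENABLED" ] || return 0
  for dep in "$needs_dir"/*; do
    [ -e "$dep" ] || continue   # empty directory: the glob did not expand
    "$SVCTOOL" "$CMDUP" "$(basename "$dep")" || return 1
  done
}

# A real run script would then continue with something like:
#   start_needs ../.env ./needs || exit 1
#   exec mydaemon -f
```

With the switch file absent, `start_needs` returns immediately and the launch proceeds "as normal", matching the naive-launching default.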

The goal is the same but the emphasis has changed.  This will be considered
a fall-back feature for those systems that do not have such a tool
available, or have constraints that force the continued use of a shell
launcher.  It is the option of last resort, and while I think I can make it
work fairly consistently, it will come with some warnings in the wiki.  For
Laurent, he wouldn't even need to lift a finger - it fully complies with
his desires out of the box. ;-)

As new tools emerge in the future, I will be able to write a shunt into the
script that detects the tool and uses it instead of the built-in scripted
support.  This will allow Laurent's work to be integrated without messing
anything up, so the behavior will be the same, but implemented differently.

Finally, with regard to the up vs actually running issue, I'm not even
going to try and address it due to the race conditions involved.  The best
I will manage is to first issue the up, then do a service check to confirm
that it didn't die upon launch, which for a majority (but not all) cases
should suffice.  Yes, there are still race conditions, but that is fine -
I'm falling back to the original model of "service fails continually until
it succeeds", which means a silently-failed "child" dependency that was
missed by the "check" command will still cause the "parent" script to fail,
because the daemon itself will fail.  It is a crude form of graceful
failure.  So the supervisor starts the parent again...and again...until the
truant dependency is up and running, at which point it will bring the
parent up.  Like I said, this will be a fall-back feature, and it will have
minor annoyances or issues.

Right now the biggest problem is handling all of the service tool calls.
They all have the same grammar, (tool) (command) (service name), so I can
script that easily.  Getting the tools to show up as the correct command
and command option is something else, and I'm working on a way to wedge it
into the use-* scripts so that the tools are set up out of the box all at
the same time.  This will create $SVCTOOL, and a set of $CMDDOWN, $CMDUP,
$CMDCHECK, etc. that will be used in the scripts.  **Once that is done I
can fully test the rest of the dependency concept and get it fleshed out.**
 If anyone wants to see it, email me directly and I'll pass it along, but
there's not much to look at.

Unfortunately, the envdir tool, which I use to abstract away the daemons
and settings, only chain-loads; it would be nice if it had a persistence
mechanism, so that I could "load once" for the scope of the shell script.
Because of that, there will be some odd scripting in there that pulls the
values, i.e.

[ -f ../.env/CMDUP ] || { echo "$(basename $0): fatal error: unable to load
CMDUP"; exit 99; }
CMDUP=$(cat ../.env/CMDUP)

with an entry for each command.
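Since that test-and-cat pair repeats for every command, it could be folded into a small helper. This is just a sketch of the pattern; the `read_env` name and its argument order are invented here:

```shell
#!/bin/sh
# Hypothetical helper around the "check the file, then cat it" pattern;
# exit code 99 matches the fragment above.
read_env() {
  dir=$1
  name=$2
  if [ ! -f "$dir/$name" ]; then
    echo "$(basename "$0"): fatal error: unable to load $name" >&2
    exit 99
  fi
  cat "$dir/$name"
}

# One line per command instead of two, e.g.:
#   CMDUP=$(read_env ../.env CMDUP)
#   CMDDOWN=$(read_env ../.env CMDDOWN)
```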


> In my 5 minute thought process, the last remaining challenge, and it's
> a big one, is to get the right service names for the dependencies, and
> that requires a standardized list, because, as far as I know, the
> daemontools-inspired inits don't have "provides". Such a list would be
> hard enough to develop and have accepted, but I expect our
> friends at Red Hat to start changing the names in order to mess us
> up.
>

Using a "./provides" as a rendezvous or advertisement mechanism I think is
nice-in-concept but difficult-in-practice. Give it a bit more thought and
you'll see that we're not just talking about the *service* but also any
*protocol* to speak with it and one or more *data transport* needed to talk
to the service.  Example: MySQL using a port number bound to 127.0.0.1, vs
MySQL using a file socket.  Both provide a MySQL database and MySQL's
binary client protocol, but the transport is entirely different.  Another
example: exim4 vs postfix vs qmail vs (insert favorite SMTP server here).
All speak SMTP - but some do LMTP at the same time (in either sockets or
ports), so there are a

Re: thoughts on rudimentary dependency handling

2015-01-07 Thread Steve Litt
Howbout the following, exampled for cron and socklog-unix as per
http://smarden.org/runit/faq.html#depends:

==
#!/bin/sh
# underscores: POSIX sh function names may not contain "-"
truly_useful_socklog_unix(){
return 0
}

sv start socklog-unix || exit 1
truly_useful_socklog_unix || exit 1
exec cron -f
==

The user writes truly_useful_socklog_unix as a function in the run
script. It comes with a stub that returns 0 so that it's a NO-OP
by default.

I'm pretty sure this conforms to James' preference (and mine probably)
that it be done in the config and not in the init program.

To satisfy Laurent's preference, everything but the exec cron -f could
be commented out, and if the user wants to use this, he/she can
uncomment all the rest. Or your run script writing program could have an
option to write the dependencies, or not.

In my 5 minute thought process, the last remaining challenge, and it's
a big one, is to get the right service names for the dependencies, and
that requires a standardized list, because, as far as I know, the
daemontools-inspired inits don't have "provides". Such a list would be
hard enough to develop and have accepted, but I expect our
friends at Red Hat to start changing the names in order to mess us
up.

In spite of the challenges, what you're doing here is so valuable that
it *must* be done. I believe that what you're contemplating busts the
whole situation wide open, relieves "upstreams" of all init
responsibility except to declare their dependencies by their official
names, makes it almost trivial for any distro to adopt a
daemontools-inspired init, and makes it doable for a modestly smart
user to jumper around their existing init system, if that's what they
want to do.

SteveT

Steve Litt  *  http://www.troubleshooters.com/
Troubleshooting Training  *  Human Performance




On Wed, 7 Jan 2015 11:04:51 +
Luke Diamand  wrote:

> The problem is more to do with finding a nice way to structure things
> without it turning into a maze of cobbled-together bits of shell
> script for doing random hacks to get things to come up in the right
> order.
> 
> On 7 January 2015 at 10:59, James Powell 
> wrote:
> 
> > You probably could have it initiate a ping or similar callback to
> > the NIS with an if/else switch as well for the checks. Target a
> > file, directory, IP signal, anything really and if found, start the
> > service, else it exits and gets retried.
> >
> > Sent from my Windows Phone
> > 
> > From: Luke Diamand<mailto:l...@diamand.org>
> > Sent: ‎1/‎7/‎2015 12:17 AM
> > To:
> > supervision@list.skarnet.org<mailto:supervision@list.skarnet.org>
> > Subject: Re: thoughts on rudimentary dependency handling
> >
> > On 07/01/15 07:05, James Powell wrote:
> > > The way I see it is this... Either you have high level dependency
> > handling within the service supervision system itself, or you have
> > low level dependency handling within the service execution files.
> > >
> > > Keeping the system as simplified as possible lowers the
> > > probability and
> > possibility of issues. That's one of the basic rules of UNIX
> > programming.
> > >
> > > I, personally, don't see a need other than low level handling.
> > Systemd/uselessd even does this within the unit files, as does some
> > instances of sysvinit/bsdinit by using numbered symlinks for
> > execution order or using a master control script.
> >
> > systemd also has 'wants' directories which add dependencies to a
> > unit in much the way that's being suggested here.
> >
> > sysvinit obviously has all the LSB headers in the init.d files to
> > try to express dependencies.
> >
> > Avery - your suggestion sounds like it ought to be simple to
> > implement and reasonably easy to understand for a sys-admin.
> > Perhaps the way to go is to suck it and see - it'll perhaps then be
> > more obvious with a real implementation if it's worth pursuing.
> >
> > One small concern would be that it's not enough to simply signal a
> > dependency to be "up" - it needs to be actually running and working
> > before you can start the dependent service successfully. To take my
> > [least-]favourite example, you can't start autofs until ypbind has
> > actually contacted the NIS server.
> >
> >
> >
> > >
> > > Sent from my Windows Phone
> > > 
> > > From: Avery Payne<mailto:avery.p.pa...@gmail.com>
> > > Sent: ‎1/‎6/‎2015 10:35 PM
> > > To: Steve Litt<mail

Re: thoughts on rudimentary dependency handling

2015-01-07 Thread Laurent Bercot

On 07/01/2015 09:16, Luke Diamand wrote:

One small concern would be that it's not enough to simply signal a
dependency to be "up" - it needs to be actually running and working
before you can start the dependent service successfully. To take my
[least-]favourite example, you can't start autofs until ypbind has
actually contacted the NIS server.


 Readiness notification is hard: it needs support from the daemon
itself, and most daemons are not written that way.

 Of course, systemd jumped on the opportunity to encourage daemon
authors to use systemd-specific interfaces to provide readiness
notification. Embrace and extend.

 Fortunately, it's not the only way. There's a very simple way
for a daemon to notify the outside world of its readiness: write
something to a descriptor.

 See http://skarnet.org/software/s6/notifywhenup.html
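The general shape of that mechanism can be demonstrated with an ordinary pipe: the daemon writes a line to a descriptor when it is actually ready, and whoever is waiting simply blocks on a read. This toy uses a named FIFO and only illustrates the idea; the actual s6 protocol is specified at the URL above:

```shell
#!/bin/sh
# Toy fd-based readiness: the "daemon" signals readiness by writing a
# line to a descriptor; the "supervisor" blocks until that happens.
tmpdir=$(mktemp -d)
ready_pipe=$tmpdir/ready
mkfifo "$ready_pipe"

# "daemon": do initialisation work, then announce readiness
(
  : stand-in for binding sockets, loading config, etc.
  echo ready > "$ready_pipe"
  # ...the main service loop would follow here...
) &

# "supervisor": wait for the readiness line before starting dependents
read status < "$ready_pipe"
wait
rm -rf "$tmpdir"
echo "daemon reported: $status"
```

The point is that no library or supervisor-specific API is needed on the daemon side: writing a newline to an inherited descriptor is enough.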

--
 Laurent


Re: thoughts on rudimentary dependency handling

2015-01-07 Thread Luke Diamand
The problem is more to do with finding a nice way to structure things
without it turning into a maze of cobbled-together bits of shell script for
doing random hacks to get things to come up in the right order.

On 7 January 2015 at 10:59, James Powell  wrote:

> You probably could have it initiate a ping or similar callback to the NIS
> with an if/else switch as well for the checks. Target a file, directory, IP
> signal, anything really and if found, start the service, else it exits and
> gets retried.
>
> Sent from my Windows Phone
> 
> From: Luke Diamand<mailto:l...@diamand.org>
> Sent: ‎1/‎7/‎2015 12:17 AM
> To: supervision@list.skarnet.org<mailto:supervision@list.skarnet.org>
> Subject: Re: thoughts on rudimentary dependency handling
>
> On 07/01/15 07:05, James Powell wrote:
> > The way I see it is this... Either you have high level dependency
> handling within the service supervision system itself, or you have low
> level dependency handling within the service execution files.
> >
> > Keeping the system as simplified as possible lowers the probability and
> possibility of issues. That's one of the basic rules of UNIX programming.
> >
> > I, personally, don't see a need other than low level handling.
> Systemd/uselessd even does this within the unit files, as does some
> instances of sysvinit/bsdinit by using numbered symlinks for execution
> order or using a master control script.
>
> systemd also has 'wants' directories which add dependencies to a unit in
> much the way that's being suggested here.
>
> sysvinit obviously has all the LSB headers in the init.d files to try to
> express dependencies.
>
> Avery - your suggestion sounds like it ought to be simple to implement
> and reasonably easy to understand for a sys-admin. Perhaps the way to go
> is to suck it and see - it'll perhaps then be more obvious with a real
> implementation if it's worth pursuing.
>
> One small concern would be that it's not enough to simply signal a
> dependency to be "up" - it needs to be actually running and working
> before you can start the dependent service successfully. To take my
> [least-]favourite example, you can't start autofs until ypbind has
> actually contacted the NIS server.
>
>
>
> >
> > Sent from my Windows Phone
> > ________
> > From: Avery Payne<mailto:avery.p.pa...@gmail.com>
> > Sent: ‎1/‎6/‎2015 10:35 PM
> > To: Steve Litt<mailto:sl...@troubleshooters.com>
> > Cc: supervision@list.skarnet.org<mailto:supervision@list.skarnet.org>
> > Subject: Re: thoughts on rudimentary dependency handling
> >
> >
> >
> >
> >
> >> On Jan 6, 2015, at 4:56 PM, Steve Litt 
> wrote:
> >>
> >> On Tue, 6 Jan 2015 13:17:39 -0800
> >> Avery Payne  wrote:
> >>
> >>> On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot
> >>>  >>>> wrote:
> >>>>
> >>>>
> >>>> I firmly believe that a tool, no matter what it is, should do what
> >>>> the user wants, even if it's wrong or can't possibly work. If you
> >>>> cannot do what the user wants, don't try to be smart; yell at the
> >>>> user, spam the logs if necessary, and fail. But don't do anything
> >>>> the user has not explicitly told you to do.
> >>>
> >>> And there's the rub.  I'm at a crossroad with regard to this because:
> >>>
> >>> 1. The user wants service A to run.
> >>> 2. Service A needs B (and possibly C) running, or it will fail.
> >>>
> >>> Should the service fail because of B and C, even though the user
> >>> wants A up,
> >>>
> >>> or
> >>>
> >>> Should the service start B and C because the user requested A be
> >>> running?
> >>
> >> I thought the way to do the latter was like this:
> >>
> >> http://smarden.org/runit/faq.html#depends
> >>
> >> If every "upstream" simply declared that his program needs B and C
> >> running before his program runs, it's easy to translate that into an sv
> >> start command in the run script.
> >
> > Normally, with hand written scripts, that would be the case.   You
> cobble together what is needed and go on your way.  But this time things
> are a little different.  The project I'm doing uses templates - pre-written
> scripts that turn the various launch issues into variables while using the
> sa

RE: thoughts on rudimentary dependency handling

2015-01-07 Thread James Powell
You probably could have it initiate a ping or similar callback to the NIS with 
an if/else switch as well for the checks. Target a file, directory, IP signal, 
anything really and if found, start the service, else it exits and gets retried.

Sent from my Windows Phone

From: Luke Diamand<mailto:l...@diamand.org>
Sent: ‎1/‎7/‎2015 12:17 AM
To: supervision@list.skarnet.org<mailto:supervision@list.skarnet.org>
Subject: Re: thoughts on rudimentary dependency handling

On 07/01/15 07:05, James Powell wrote:
> The way I see it is this... Either you have high level dependency handling 
> within the service supervision system itself, or you have low level 
> dependency handling within the service execution files.
>
> Keeping the system as simplified as possible lowers the probability and 
> possibility of issues. That's one of the basic rules of UNIX programming.
>
> I, personally, don't see a need other than low level handling. 
> Systemd/uselessd even does this within the unit files, as does some instances 
> of sysvinit/bsdinit by using numbered symlinks for execution order or using a 
> master control script.

systemd also has 'wants' directories which add dependencies to a unit in
much the way that's being suggested here.

sysvinit obviously has all the LSB headers in the init.d files to try to
express dependencies.

Avery - your suggestion sounds like it ought to be simple to implement
and reasonably easy to understand for a sys-admin. Perhaps the way to go
is to suck it and see - it'll perhaps then be more obvious with a real
implementation if it's worth pursuing.

One small concern would be that it's not enough to simply signal a
dependency to be "up" - it needs to be actually running and working
before you can start the dependent service successfully. To take my
[least-]favourite example, you can't start autofs until ypbind has
actually contacted the NIS server.



>
> Sent from my Windows Phone
> 
> From: Avery Payne<mailto:avery.p.pa...@gmail.com>
> Sent: ‎1/‎6/‎2015 10:35 PM
> To: Steve Litt<mailto:sl...@troubleshooters.com>
> Cc: supervision@list.skarnet.org<mailto:supervision@list.skarnet.org>
> Subject: Re: thoughts on rudimentary dependency handling
>
>
>
>
>
>> On Jan 6, 2015, at 4:56 PM, Steve Litt  wrote:
>>
>> On Tue, 6 Jan 2015 13:17:39 -0800
>> Avery Payne  wrote:
>>
>>> On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot
>>> >>> wrote:
>>>>
>>>>
>>>> I firmly believe that a tool, no matter what it is, should do what
>>>> the user wants, even if it's wrong or can't possibly work. If you
>>>> cannot do what the user wants, don't try to be smart; yell at the
>>>> user, spam the logs if necessary, and fail. But don't do anything
>>>> the user has not explicitly told you to do.
>>>
>>> And there's the rub.  I'm at a crossroad with regard to this because:
>>>
>>> 1. The user wants service A to run.
>>> 2. Service A needs B (and possibly C) running, or it will fail.
>>>
>>> Should the service fail because of B and C, even though the user
>>> wants A up,
>>>
>>> or
>>>
>>> Should the service start B and C because the user requested A be
>>> running?
>>
>> I thought the way to do the latter was like this:
>>
>> http://smarden.org/runit/faq.html#depends
>>
>> If every "upstream" simply declared that his program needs B and C
>> running before his program runs, it's easy to translate that into an sv
>> start command in the run script.
>
> Normally, with hand written scripts, that would be the case.   You cobble 
> together what is needed and go on your way.  But this time things are a 
> little different.  The project I'm doing uses templates - pre-written scripts 
> that turn the various launch issues into variables while using the same code. 
>  This reduces development time and bugs - write the template once, debug it 
> once, and reuse it over and over.
>
> The idea for dependencies is that I could write something that looks at 
> symlinks in a directory and if the template finds anything, it starts the 
> dependency.  Otherwise it remains blissfully unaware.
>
> The issue that Laurent is arguing for is that by creating such a framework - 
> even one as thin and as carefully planned - it shuts out future possibilities 
> that would handle this correctly at a very high level, without the need to 
> write or maintain this layout.  To accommodate this possibility and to 
> maximize comp

Re: thoughts on rudimentary dependency handling

2015-01-07 Thread Luke Diamand

On 07/01/15 07:05, James Powell wrote:

The way I see it is this... Either you have high level dependency handling 
within the service supervision system itself, or you have low level dependency 
handling within the service execution files.

Keeping the system as simplified as possible lowers the probability and 
possibility of issues. That's one of the basic rules of UNIX programming.

I, personally, don't see a need other than low level handling. Systemd/uselessd 
even does this within the unit files, as does some instances of 
sysvinit/bsdinit by using numbered symlinks for execution order or using a 
master control script.


systemd also has 'wants' directories which add dependencies to a unit in 
much the way that's being suggested here.


sysvinit obviously has all the LSB headers in the init.d files to try to 
express dependencies.


Avery - your suggestion sounds like it ought to be simple to implement 
and reasonably easy to understand for a sys-admin. Perhaps the way to go 
is to suck it and see - it'll perhaps then be more obvious with a real 
implementation if it's worth pursuing.


One small concern would be that it's not enough to simply signal a 
dependency to be "up" - it needs to be actually running and working 
before you can start the dependent service successfully. To take my 
[least-]favourite example, you can't start autofs until ypbind has 
actually contacted the NIS server.






Sent from my Windows Phone

From: Avery Payne<mailto:avery.p.pa...@gmail.com>
Sent: ‎1/‎6/‎2015 10:35 PM
To: Steve Litt<mailto:sl...@troubleshooters.com>
Cc: supervision@list.skarnet.org<mailto:supervision@list.skarnet.org>
Subject: Re: thoughts on rudimentary dependency handling






On Jan 6, 2015, at 4:56 PM, Steve Litt  wrote:

On Tue, 6 Jan 2015 13:17:39 -0800
Avery Payne  wrote:


On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot

wrote:


I firmly believe that a tool, no matter what it is, should do what
the user wants, even if it's wrong or can't possibly work. If you
cannot do what the user wants, don't try to be smart; yell at the
user, spam the logs if necessary, and fail. But don't do anything
the user has not explicitly told you to do.


And there's the rub.  I'm at a crossroad with regard to this because:

1. The user wants service A to run.
2. Service A needs B (and possibly C) running, or it will fail.

Should the service fail because of B and C, even though the user
wants A up,

or

Should the service start B and C because the user requested A be
running?


I thought the way to do the latter was like this:

http://smarden.org/runit/faq.html#depends

If every "upstream" simply declared that his program needs B and C
running before his program runs, it's easy to translate that into an sv
start command in the run script.


Normally, with hand written scripts, that would be the case.   You cobble 
together what is needed and go on your way.  But this time things are a little 
different.  The project I'm doing uses templates - pre-written scripts that 
turn the various launch issues into variables while using the same code.  This 
reduces development time and bugs - write the template once, debug it once, and 
reuse it over and over.

The idea for dependencies is that I could write something that looks at 
symlinks in a directory and if the template finds anything, it starts the 
dependency.  Otherwise it remains blissfully unaware.

The issue that Laurent is arguing for is that by creating such a framework - even one as thin and 
as carefully planned - it shuts out future possibilities that would handle this correctly at a very 
high level, without the need to write or maintain this layout.  To accommodate this possibility and 
to maximize compatibility, he is arguing to stay with the same "service is unaware of its 
surroundings" that have been a part of the design of all the frameworks - let things fail and 
make the admin fix it.  There is merit to this, not just because of future expansions, but also 
because it allows the end user (read: SysAdmin) the choice of running things.  Or long story short, 
"don't set policy, let the end user decide".  And I agree; I'm a bit old school about 
these things and I picked up Linux over a decade ago because I wanted choices.

But from a practical perspective there isn't anything right now that handles dependencies 
at a global level.  The approach of "minimum knowledge needed and best effort 
separation of duty" would give a minimal environment for the time being.  The design 
is very decentralized and works at a peer level; no service definition knows anything 
other than what is linked in its ./needs directory, and side effects are minimized by 
trying hard to keep separation of duty.  Because the scripts are distributed and meant to 
be installed as a

RE: thoughts on rudimentary dependency handling

2015-01-06 Thread James Powell
The way I see it is this... Either you have high level dependency handling 
within the service supervision system itself, or you have low level dependency 
handling within the service execution files.

Keeping the system as simplified as possible lowers the probability and 
possibility of issues. That's one of the basic rules of UNIX programming.

I, personally, don't see a need other than low level handling. Systemd/uselessd 
even does this within the unit files, as does some instances of 
sysvinit/bsdinit by using numbered symlinks for execution order or using a 
master control script.

Sent from my Windows Phone

From: Avery Payne<mailto:avery.p.pa...@gmail.com>
Sent: ‎1/‎6/‎2015 10:35 PM
To: Steve Litt<mailto:sl...@troubleshooters.com>
Cc: supervision@list.skarnet.org<mailto:supervision@list.skarnet.org>
Subject: Re: thoughts on rudimentary dependency handling





> On Jan 6, 2015, at 4:56 PM, Steve Litt  wrote:
>
> On Tue, 6 Jan 2015 13:17:39 -0800
> Avery Payne  wrote:
>
>> On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot
>> >> wrote:
>>>
>>>
>>> I firmly believe that a tool, no matter what it is, should do what
>>> the user wants, even if it's wrong or can't possibly work. If you
>>> cannot do what the user wants, don't try to be smart; yell at the
>>> user, spam the logs if necessary, and fail. But don't do anything
>>> the user has not explicitly told you to do.
>>
>> And there's the rub.  I'm at a crossroad with regard to this because:
>>
>> 1. The user wants service A to run.
>> 2. Service A needs B (and possibly C) running, or it will fail.
>>
>> Should the service fail because of B and C, even though the user
>> wants A up,
>>
>> or
>>
>> Should the service start B and C because the user requested A be
Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Avery Payne




> On Jan 6, 2015, at 4:56 PM, Steve Litt  wrote:
> 
> On Tue, 6 Jan 2015 13:17:39 -0800
> Avery Payne  wrote:
> 
>> On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot wrote:
>>> 
>>> 
>>> I firmly believe that a tool, no matter what it is, should do what
>>> the user wants, even if it's wrong or can't possibly work. If you
>>> cannot do what the user wants, don't try to be smart; yell at the
>>> user, spam the logs if necessary, and fail. But don't do anything
>>> the user has not explicitly told you to do.
>> 
>> And there's the rub.  I'm at a crossroad with regard to this because:
>> 
>> 1. The user wants service A to run.
>> 2. Service A needs B (and possibly C) running, or it will fail.
>> 
>> Should the service fail because of B and C, even though the user
>> wants A up,
>> 
>> or
>> 
>> Should the service start B and C because the user requested A be
>> running?
> 
> I thought the way to do the latter was like this:
> 
> http://smarden.org/runit/faq.html#depends
> 
> If every "upstream" simply declared that his program needs B and C
> running before his program runs, it's easy to translate that into an sv
> start command in the run script.

Normally, with hand-written scripts, that would be the case.  You cobble 
together what is needed and go on your way.  But this time things are a little 
different.  The project I'm doing uses templates - pre-written scripts that 
factor the various launch details out into variables while sharing the same 
code.  This reduces development time and bugs: write the template once, debug 
it once, and reuse it over and over.

The idea for dependencies is that I could write something that looks at 
symlinks in a directory and if the template finds anything, it starts the 
dependency.  Otherwise it remains blissfully unaware. 
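To make that concrete, here is a minimal sketch of what such a template fragment might look like.  It assumes a runit-style layout where ./needs holds symlinks to sibling service directories; the sv() stub stands in for runit's real sv(8) so the sketch runs anywhere, and the directory names are invented for illustration:

```shell
#!/bin/sh
sv() { echo "sv $*"; }   # stub standing in for runit's real sv(8)

# Template fragment: start every service linked into $1/needs.
# If ./needs is empty, the loop does nothing and the service stays
# blissfully unaware of its surroundings.
start_needs() {
  for dep in "$1"/needs/*; do
    [ -e "$dep" ] || continue
    sv start "$(basename "$dep")"
  done
}

# Demo: a throwaway service directory with two hypothetical dependencies.
tmp=$(mktemp -d)
mkdir -p "$tmp/A/needs" "$tmp/B" "$tmp/C"
ln -s ../../B "$tmp/A/needs/B"
ln -s ../../C "$tmp/A/needs/C"

out=$(start_needs "$tmp/A")
echo "$out"   # prints: sv start B / sv start C
rm -rf "$tmp"
```

In a real run script the loop would sit just before the final exec of the daemon, pointed at the service's actual supervision directory rather than a temp dir.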

The point Laurent is arguing is that by creating such a framework - even one 
this thin and carefully planned - I shut out future possibilities that would 
handle dependencies correctly at a much higher level, without the need to 
write or maintain this layout.  To accommodate that possibility and to maximize 
compatibility, he argues for staying with the same "service is unaware of its 
surroundings" design that has been part of all the frameworks - let things 
fail and make the admin fix it.  There is merit to this, not just because of 
future expansion, but also because it leaves the end user (read: sysadmin) 
the choice of what runs.  Long story short: "don't set policy, let the end 
user decide".  And I agree; I'm a bit old-school about these things, and I 
picked up Linux over a decade ago because I wanted choices.

But from a practical perspective there isn't anything right now that handles 
dependencies at a global level.  The approach of "minimum knowledge needed and 
best-effort separation of duty" would give a minimal environment for the time 
being.  The design is very decentralized and works at a peer level; no service 
definition knows anything other than what is linked in its ./needs directory, 
and side effects are minimized by trying hard to keep that separation of duty.  
Because the scripts are distributed and meant to be installed as a whole, I can 
kinda get away with this: all of the dependencies are hard-coded out of the 
box, and the assumption is that you won't break something by tinkering with 
the links.  But of course this runs against the setup that Laurent was 
discussing.  It's also brittle, and that means something is wrong with the 
design.  It should be flexible. 

So for now, I will look at having a minimum implementation that starts things 
as needed - but only if the user sets a flag to use that feature.  This keeps 
the scripts open for the future while giving minimum functionality today that 
is relatively "safe".  So you don't get dependency resolution unless you 
specifically turn it on, and turning it on comes with caveats.  It's not a 
perfect solution, it has potential issues, and the user abdicates/delegates 
some of their decisions to my scripts.  A no-no, to be sure.  But again, 
that's the user's choice to flip the switch on...

Also keep in mind that I'm bouncing ideas off of him, and he is looking at it 
from a perspective much different from mine.  I'm taking a pragmatic approach 
that helps deal with an old situation - the frameworks would be more 
usable/adoptable if there were a baseline set of service definitions that 
allowed for a wholesale switch to using them from $(whatever you're using 
now).  So it's very pragmatic and legacy and "old school" for desktops and 
servers, and that's still needed because those are in use everywhere.  It's 
basically a way to grandfather the frameworks in with current technology by 
providing all of the missing "glue" in the form of service definitions.  And 
unfortunately dependency resolution is an old issue that has to be worked out 
as part of one of my goals, which is to increase ease of use so that people w

Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Steve Litt
On Tue, 6 Jan 2015 13:17:39 -0800
Avery Payne  wrote:

> On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot wrote:
> >
> >
> >  I firmly believe that a tool, no matter what it is, should do what
> > the user wants, even if it's wrong or can't possibly work. If you
> > cannot do what the user wants, don't try to be smart; yell at the
> > user, spam the logs if necessary, and fail. But don't do anything
> > the user has not explicitly told you to do.
> >
> 
> And there's the rub.  I'm at a crossroad with regard to this because:
> 
> 1. The user wants service A to run.
> 2. Service A needs B (and possibly C) running, or it will fail.
> 
> Should the service fail because of B and C, even though the user
> wants A up,
> 
>  or
> 
> Should the service start B and C because the user requested A be
> running?

I thought the way to do the latter was like this:

http://smarden.org/runit/faq.html#depends

If every "upstream" simply declared that his program needs B and C
running before his program runs, it's easy to translate that into an sv
start command in the run script.

If "upstreams" declared this stuff, it would also be pretty easy to
write a script, with or without the make command, to do the whole
dependency tree. Given that not every init has "provides" ability,
perhaps a standard list of service names could be distributed. There's
already an /etc/services for port numbers: Maybe there could be
an /etc/servicenames for standard names for services.
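Purely to illustrate the idea - the file name and format below are hypothetical, loosely modeled on the /etc/services layout, not an existing standard:

```shell
#!/bin/sh
# A hypothetical /etc/servicenames: one canonical service name per line,
# followed by a short description; comments start with '#'.
cat > servicenames.sample <<'EOF'
# name        description
dbus          D-Bus system message bus
lightdm       LightDM display manager
sshd          OpenSSH server daemon
EOF

# Look up one canonical name, skipping comment lines.
name=$(awk '!/^#/ && $1 == "dbus" { print $1 }' servicenames.sample)
echo "$name"   # prints: dbus
rm servicenames.sample
```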

SteveT

Steve Litt  *  http://www.troubleshooters.com/
Troubleshooting Training  *  Human Performance



Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Laurent Bercot

On 06/01/2015 22:17, Avery Payne wrote:

And there's the rub.  I'm at a crossroad with regard to this because:

1. The user wants service A to run.
2. Service A needs B (and possibly C) running, or it will fail.

Should the service fail because of B and C, even though the user wants A up,

  or

Should the service start B and C because the user requested A be running?


 My answer is: there are two layers, and what to do depends on what exactly
the user is asking and whom it is asking.

 If the user is asking *A* to run, as in "s6-svc -u /service/A", and A needs B
and B is down, then A should fail.
 If the user is asking *the global state manager*, i.e. the upper layer, to
change the current state to the current state + A, then the global state
manager should look up its dependency database, see that in order to bring up
A it also needs to bring up B and C, and do it.
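That dependency lookup could be sketched like this - the needs_of table is a made-up stand-in for the state manager's real dependency database, and the service names are hypothetical:

```shell
#!/bin/sh
# Hypothetical dependency database: service -> space-separated needs.
needs_of() {
  case $1 in
    A) echo "B C" ;;
    B) echo "C"   ;;
    *) echo ""    ;;
  esac
}

# Transitive closure: everything that must be up before $1 can run,
# emitted dependencies-first.  (No cycle detection - a sketch only.)
closure() {
  for d in $(needs_of "$1"); do
    closure "$d"
  done
  echo "$1"
}

plan=$(closure A | awk '!seen[$0]++')   # dedupe, keep first occurrence
echo "$plan"   # prints: C B A (bring up C, then B, then A)
```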

 If you have a global state manager, that is the entity the user should
communicate with to change the state of services. It is the only entity
deciding which individual service goes up or down; if you type
"s6-svc -u /service/A", the state manager should notice this and go
"nope, this is not the state I've been asked to enforce" and bring down
A immediately. But if you tell it to bring A up itself, then it should do
whatever it takes to fulfill your request, including bringing up B and C.

 This is Unix: for every shared resource, there should be one daemon which
centralizes access to that resource to avoid conflicts. If you want service
dependencies, then the set of service states becomes a resource, and you
need that centralization.

 I'll get to writing such a daemon at some point, but it won't be
tomorrow, so feel free to implement whatever fulfills your needs:
whoever writes the code is right. I just wanted to make sure you
don't start an underspecified kitchen sink that will end up being a
maintenance nightmare, and I would really advise you to stick to the naive,
dumb approach until you can commit to a full centralized state manager.

--
 Laurent



Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Avery Payne
On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot  wrote:
>
>
>  I firmly believe that a tool, no matter what it is, should do what the
> user wants, even if it's wrong or can't possibly work. If you cannot do
> what the user wants, don't try to be smart; yell at the user, spam the
> logs if necessary, and fail. But don't do anything the user has not
> explicitly told you to do.
>

And there's the rub.  I'm at a crossroad with regard to this because:

1. The user wants service A to run.
2. Service A needs B (and possibly C) running, or it will fail.

Should the service fail because of B and C, even though the user wants A up,

 or

Should the service start B and C because the user requested A be running?

For some, the first choice, which is to immediately fail, is perfectly
fine.  I can agree to that, and I understand the "why" of it, and it makes
sense.  But in other use cases, you'll have users that aren't looking at
this chain of details.  They asked for A to be up, why do they need to
bring up B, oh look there's C too...things suddenly look "broken", even
though they aren't.  I'm caught between making sure the script comes up,
and doing the right thing consistently.

I can certainly make the scripts "naive" of each other and not start
anything at all...and leave everything up to the administrator to figure
out how to get things working.  Currently this is how the majority of them
are done, and it wouldn't take much to change the rest to match this
behavior.

It's also occurred to me that instead of making the "dependency feature" a
requirement, I can make it optional.  It could be a feature that you choose
to activate by setting a file or environment variable.  Without the
setting, you would get the default behavior you are wanting to see, with no
dependency support; this would be the default "out of the box" experience.
With the setting, you get the automatic start-up that I think people will
want.  So the choice is back with the user, and they can decide.  That
actually might be the way to handle this, and both parties - the ones that
want full control and visibility, and the ones after ease of use - will get
what they want.  On the one hand, I can ensure that you get working scripts,
because scripts with dependencies can be made to start what they need.  On
the other hand, if you want strict behavior, that is assured as well.
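A sketch of that opt-in switch might look like the following - the ENABLE_NEEDS variable name and the layout are invented for illustration, and sv() is a stub for the real supervisor tool:

```shell
#!/bin/sh
sv() { echo "sv $*"; }   # stub standing in for the real supervisor tool

# Only touch dependencies when the admin has explicitly opted in.
maybe_start_needs() {
  if [ "${ENABLE_NEEDS:-0}" != 1 ]; then
    echo "dependency handling disabled; doing nothing"
    return 0
  fi
  for dep in "$1"/needs/*; do
    [ -e "$dep" ] && sv start "$(basename "$dep")"
  done
}

tmp=$(mktemp -d)
mkdir -p "$tmp/A/needs" "$tmp/B"
ln -s ../../B "$tmp/A/needs/B"

off=$(maybe_start_needs "$tmp/A")   # default: strict, no dependencies
ENABLE_NEEDS=1
on=$(maybe_start_needs "$tmp/A")    # opt-in: dependencies get started
echo "$off"
echo "$on"   # prints: sv start B
rm -rf "$tmp"
```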

The only drawback is you can't get both because of the limitations of the
environment that I am in.


Re: thoughts on rudimentary dependency handling

2015-01-06 Thread John Albietz
"Down by default" makes sense to me and will be a great feature.
I think that having it will require all services to have a 'down' script
that defines how to make sure that a service is actually down.

I wonder if this will help address a common situation for me where I
install a package and realize that at the end of the installation the
daemon is started using upstart or sysv.

At that point, to 'supervise' the app, I first have to stop the current
daemon and then start it up using runit or another process manager.

Otherwise I end up with two copies of the app running, with only one of them
being supervised.

John Albietz
m: 516-592-2372
e: inthecloud...@gmail.com
l'in: www.linkedin.com/in/inthecloud247

On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot  wrote:

> On 06/01/2015 18:46, Avery Payne wrote:
>
>> 1. A service can ask another service to start.
>> 2. A service can only signal itself to go down.  It can never ask another
>> service to go down.
>> 3. A service can only mark itself with a ./down file.  It can never mark
>> another service with a ./down file.
>>
>> That's it.  Numbers 2 and 3 are the only times I would go against what you
>> are saying.
>>
>
>  So does number 1.
>  When you ask another service to start, you change the global state. If
> it is done automatically by any tool or service, this means the global
> state changes without the admin having requested it. This is trying to
> be smarter than the user, which is almost always a bad thing.
>
>  Number 2 is in the same boat. If the admin wants you to be up,
> then you can't just quit and decide that you will be down. You can't
> change the global state behind the admin's back.
>
>  Number 3 is different. It doesn't change the global state - unless there's
> a serious incident. But it breaks resiliency against that incident: it
> kills the guarantee the supervision tree offers you.
>
>  I firmly believe that a tool, no matter what it is, should do what the
> user wants, even if it's wrong or can't possibly work. If you cannot do
> what the user wants, don't try to be smart; yell at the user, spam the
> logs if necessary, and fail. But don't do anything the user has not
> explicitly told you to do.
>
>  Maybe a dependency manager needs to be smarter than that. In which case
> I would call it a "global state manager". There would be the "current
> state", which starts at "everything down" when the machine boots, and
> the "wanted state", which starts at "everything up"; as long as those
> states are not matched, the global state manager is running, implementing
> retry policies and such, and may change the global state at any time,
> bringing individual services up and down as it sees fit, with the
> ultimate goal of matching the global current state with the global wanted
> state. It could do stuff like exponential backoff so failing services
> would not be "wanted up" all the time; but it would never, ever change
> the global wanted state without an order to do so from the admin.
>
>  If you want a dependency manager with online properties, I think this
> is the way to do it.
>
> --
>  Laurent
>
>


Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Laurent Bercot

On 06/01/2015 18:46, Avery Payne wrote:

1. A service can ask another service to start.
2. A service can only signal itself to go down.  It can never ask another
service to go down.
3. A service can only mark itself with a ./down file.  It can never mark
another service with a ./down file.

That's it.  Numbers 2 and 3 are the only times I would go against what you
are saying.


 So does number 1.
 When you ask another service to start, you change the global state. If
it is done automatically by any tool or service, this means the global
state changes without the admin having requested it. This is trying to
be smarter than the user, which is almost always a bad thing.

 Number 2 is in the same boat. If the admin wants you to be up,
then you can't just quit and decide that you will be down. You can't
change the global state behind the admin's back.

 Number 3 is different. It doesn't change the global state - unless there's
a serious incident. But it breaks resiliency against that incident: it
kills the guarantee the supervision tree offers you.

 I firmly believe that a tool, no matter what it is, should do what the
user wants, even if it's wrong or can't possibly work. If you cannot do
what the user wants, don't try to be smart; yell at the user, spam the
logs if necessary, and fail. But don't do anything the user has not
explicitly told you to do.

 Maybe a dependency manager needs to be smarter than that. In which case
I would call it a "global state manager". There would be the "current
state", which starts at "everything down" when the machine boots, and
the "wanted state", which starts at "everything up"; as long as those
states are not matched, the global state manager is running, implementing
retry policies and such, and may change the global state at any time,
bringing individual services up and down as it sees fit, with the
ultimate goal of matching the global current state with the global wanted
state. It could do stuff like exponential backoff so failing services
would not be "wanted up" all the time; but it would never, ever change
the global wanted state without an order to do so from the admin.
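One reconciliation pass of such a manager can be sketched in a few lines of shell - the wanted/current sets and the sv() stub are invented, and a real implementation would loop, watch for failures, and apply the backoff policy described above:

```shell
#!/bin/sh
sv() { echo "sv $*"; }   # stub for the real supervisor interface

wanted="A B C"   # the state the admin asked for
current="A D"    # what is actually running right now

# Bring up anything wanted but not running, bring down anything
# running but not wanted; never modify the wanted state itself.
actions=$(
  for s in $wanted; do
    case " $current " in *" $s "*) ;; *) sv up "$s" ;; esac
  done
  for s in $current; do
    case " $wanted " in *" $s "*) ;; *) sv down "$s" ;; esac
  done
)
echo "$actions"   # prints: sv up B / sv up C / sv down D
```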

 If you want a dependency manager with online properties, I think this
is the way to do it.

--
 Laurent



Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Avery Payne
On Tue, Jan 6, 2015 at 8:52 AM, Laurent Bercot wrote:

>
>  I'm not sure exactly in what context your message needs to be taken
> - is that about a tool you have written or are writing, or something
> else ? - but if you're going to work on dependency management, it's
> important that you get it right. It's complex stuff that needs
> planning and thought.


This is in the context of "service definition A needs service definition B
to be up".


>
>  * implement a ./needs directory.  This would have symlinks to any
>> definitions that would be required to run before the main definition can
>> run.  For instance, Debian's version of lightdm requires that dbus be
>> running, or it will abort.  Should a ./needs not be met, the current
>> definition will receive a ./down file, write out a message indicating what
>> service blocked it from starting, and then will send a "down service" to
>> itself.
>>
>
>  For instance, I'm convinced that the approach you're taking here actually
> takes away from reliability. Down files are dangerous: they break the
> supervision chain guarantee. If the supervisor dies and is respawned by
> its parent, it *will not* restart the service if there's a down file.
> You want down files to be very temporary, for debugging or something,
> you don't want them to be a part of your normal operation.
>
>  If your dependency manager works online, you *will* bring services down
> when you don't want to. You *will* have more headaches making things work
> than if you had no dependency manager at all. I guarantee it.


I should have added some clarifications.  There are some basic rules I'm
using with regard to starting/stopping services:

1. A service can ask another service to start.
2. A service can only signal itself to go down.  It can never ask another
service to go down.
3. A service can only mark itself with a ./down file.  It can never mark
another service with a ./down file.

That's it.  Numbers 2 and 3 are the only times I would go against what you
are saying, and since the only reason I would do either is some unexpected
failure, there would be good cause.  In all cases, there would be a message
output explaining why the service signaled itself back down, or why it
marked itself with a ./down file.  The ./down file is, I think, being used
correctly - I'm trying to flag to the sysadmin that "something is wrong with
this service, and it shouldn't restart until you fix it".
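Under rule 3, the self-marking might look roughly like this - fail_self and its messages are invented for illustration, and sv() is a stub:

```shell
#!/bin/sh
sv() { echo "sv $*"; }   # stub standing in for runit's real sv(8)

# Rule 3 sketch: on an unexpected failure, a service marks *itself*
# with ./down, says why, then signals *itself* down.  It never touches
# another service's directory.
fail_self() {
  svcdir=$1; reason=$2
  touch "$svcdir/down"
  echo "marking $svcdir down: $reason"
  sv down "$svcdir"
}

tmp=$(mktemp -d)
msg=$(fail_self "$tmp" "needed service dbus is not running")
echo "$msg"
have_down=no
[ -e "$tmp/down" ] && have_down=yes   # the ./down file persists for the admin
rm -rf "$tmp"
```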

I'm sorry if the posting was confusing.  Hopefully the rules clarify when
and how I would be using these features.  I believe it should be safe if
they are confined within the context of the service definition itself, and
not other dependencies.  If there is something to the contrary that I'm
missing in those three rules, I'm listening.


Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Laurent Bercot


 I'm not sure exactly in what context your message needs to be taken
- is that about a tool you have written or are writing, or something
else ? - but if you're going to work on dependency management, it's
important that you get it right. It's complex stuff that needs
planning and thought.



* implement a ./needs directory.  This would have symlinks to any
definitions that would be required to run before the main definition can
run.  For instance, Debian's version of lightdm requires that dbus be
running, or it will abort.  Should a ./needs not be met, the current
definition will receive a ./down file, write out a message indicating what
service blocked it from starting, and then will send a "down service" to
itself.


 For instance, I'm convinced that the approach you're taking here actually
takes away from reliability. Down files are dangerous: they break the
supervision chain guarantee. If the supervisor dies and is respawned by
its parent, it *will not* restart the service if there's a down file.
You want down files to be very temporary, for debugging or something,
you don't want them to be a part of your normal operation.

 I firmly believe that in order to keep boot and shutdown procedures fast
and simple, and avoid reinventing the kitchen sink, any dependency
management on top of a supervision system should work *offline*. Keep the
dependency manager out of the supervisor's way in normal operation; just
use it to generate state change scripts.
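A toy version of that offline approach: walk the ./needs links once, feed the edges to tsort(1), and generate an ordered start script ahead of time, instead of resolving anything at runtime.  The layout and service names here are hypothetical:

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir -p "$tmp/A/needs" "$tmp/B/needs" "$tmp/C"
ln -s ../../B "$tmp/A/needs/B"   # A needs B
ln -s ../../C "$tmp/B/needs/C"   # B needs C

# Emit "dependency service" pairs and topologically sort them, so
# dependencies come first in the generated state-change script.
order=$(
  for link in "$tmp"/*/needs/*; do
    [ -e "$link" ] || continue
    svc=$(basename "$(dirname "$(dirname "$link")")")
    echo "$(basename "$link") $svc"
  done | tsort
)
echo "$order"   # prints: C / B / A, one per line

# Generate the start script offline; at boot, only this script runs.
printf 'sv start %s\n' $order > "$tmp/start-all"
rm -rf "$tmp"
```

The supervisor never sees any of this machinery at runtime; it only ever runs the generated script.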

 If your dependency manager works online, you *will* bring services down
when you don't want to. You *will* have more headaches making things work
than if you had no dependency manager at all. I guarantee it.

 I don't know how to design such a beast. I'm not there yet, I haven't
given it any thought. But a general principle applies: don't do more, do
less. If something is unnecessary, don't do it. What a supervision
framework needs is a partial order on how to bring services up or down
at boot time and shutdown time, and other global state changes; not
instructions on what to do in normal operation. Stay out of the way of
the supervisor outside of a global state change.

--
 Laurent