Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Luke Diamand

On 08/01/15 17:53, Avery Payne wrote:

The use of hidden directories was done for administrative and aesthetic
reasons.  The rationale was that the various templates and scripts and
utilities shouldn't be mixed in while looking at a display of the various
definitions.


Why shouldn't they be mixed in? Surely it's better to see everything clearly 
and plainly than to hide some parts away where people won't expect to 
find them. I think this may confuse people, especially if they use tools 
that ignore hidden directories.


At least on my Debian system, there are hardly any other hidden files or 
directories of note under /etc, so this would be setting a bit of a 
precedent to have quite so many non-trivial items present.



The other rationale was that the entire set of definitions
could be moved or copied using a single directory, although it doesn't work
that way in practice, because a separate cp is needed to move the
dot-directories.


Move everything down one level then?

sv
   - templates
   - run-scripts
   - other-things-not-yet-thought-of

FWIW, I started trying to make a Debian package, and dpkg got very upset 
about all those dot files.





Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Laurent Bercot

  Here's an ugly hack that allows you to do that using envdir:
set -a
eval $({ env; envdir ../.env env; } | grep -vF -e _= -e SHLVL= | sort | uniq -u)
set +a


 Ugh, in the morning (almost) light it's even uglier than I thought,
because it won't work for values you *change* either, which could
be important for things like PATH, or if values contain a _=
or such.
 Try this instead:

set -a
eval $(envdir ../.env env | grep -v -e ^_= -e ^SHLVL=)
set +a

 It will reexport all the variables you already have, which is
also ugly, and it still won't remove anything, but at least it
will pick up changes to existing variables.

 Ah, the shell. Every time I doubt whether execline was worth
writing, it only takes a few minutes of /bin/sh scripting to
renew my confidence.



It will work fine. I'm attempting to pre-load values that will remain
constant inside the scope of the script, so there isn't a need to change
them at runtime.


 My point is, if you are using this construct in run scripts, remember
the scripts will inherit some environment from init - that could contain
system-wide settings such as PATH or TZ. If for some reason you need to
remove or modify those variables in a specific script, it is important
that it works as expected. :)
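
 For a script that genuinely needs a clean slate, one illustrative way out
is to drop the inherited environment entirely and rebuild only what you
want (the PATH value and "run-daemon" below are placeholders, not taken
from anything above):

#!/bin/sh
# start from an empty environment, set the few variables this script needs,
# then layer ../.env on top with envdir and exec the real daemon script.
exec env -i PATH=/usr/bin:/bin envdir ../.env ./run-daemon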

--
 Laurent



RE: thoughts on rudimentary dependency handling

2015-01-08 Thread James Powell
I'll be following this intently, as I have a project I'm working on that will
use s6 heavily, even discretely.

Sent from my Windows Phone

From: Avery Payne avery.p.pa...@gmail.com
Sent: 1/7/2015 11:58 PM
To: supervision@list.skarnet.org
Subject: Re: thoughts on rudimentary dependency handling

On Wed, Jan 7, 2015 at 6:53 PM, Laurent Bercot ska-supervis...@skarnet.org
wrote:

  Unfortunately, the envdir tool, which I use to abstract away the daemons
 and settings, only chain-loads; it would be nice if it had a persistence
 mechanism, so that I could load once for the scope of the shell script.


  Here's an ugly hack that allows you do that using envdir:
 set -a
 eval $({ env; envdir ../.env env; } | grep -vF -e _= -e SHLVL= | sort |
 uniq -u)
 set +a


Thanks!  When I can carve out a bit of time this week I'll put it in and
finish up the few bits needed.  Most of the dependency loop is already
written; I just didn't have a reasonably clean way of pulling in the
$CMDWHATEVER settings without reading ./env over and over.
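
As a sketch of where that lands in a run script (using Laurent's construct;
$CMDUP and $CMDCHECK are the abstracted command variables mentioned
elsewhere in this thread):

#!/bin/sh
# pull the shared settings from ../.env into the shell once, instead of
# re-reading them for every dependency.  Note this is fragile for values
# containing spaces, as Laurent points out.
set -a
eval $(envdir ../.env env | grep -v -e ^_= -e ^SHLVL=)
set +a
# ...later, the dependency loop can simply call:  $CMDUP some-service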


  It only works for variables you add, though, not for variables you remove.


It will work fine. I'm attempting to pre-load values that will remain
constant inside the scope of the script, so there isn't a need to change
them at runtime.


Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Steve Litt
On Wed, 7 Jan 2015 14:25:28 -0800
Avery Payne avery.p.pa...@gmail.com wrote:

 On Wed, Jan 7, 2015 at 7:23 AM, Steve Litt sl...@troubleshooters.com
  wrote:
 
  I'm pretty sure this conforms to James' preference (and mine
  probably) that it be done in the config and not in the init program.
 
  To satisfy Laurent's preference, everything but the exec cron -f
  could be commented out, and if the user wants to use this, he/she
  can uncomment all the rest. Or your run script writing program
  could have an option to write the dependencies, or not.
 
 
 I've pretty much settled on a system-wide switch in sv/.env (which in
 the scripts will show up as ../.env).  The switch will, by default,
 follow Laurent's behavior of naive launching, i.e. no dependencies
 are up, missing dependencies cause failures,

I'm having trouble understanding exactly what you're saying. You mean
the executable being daemonized fails, by itself, because a service it
needs isn't there, right? You *don't* mean that the init itself fails,
right?

This is all pretty cool, Avery! Currently, my best guess is that
eventually my daily driver computer will be initted by runit (with
Epoch as a backup, should I bork my runit). It sounds like what you're
doing will ease runit config by an order of magnitude.

  and the admin must check
 logging for notifications. Enabling the feature would be as simple as
 
 echo 1 > /etc/sv/.env/NEEDS_ENABLED
 
 ...and every new service launch would receive it.  You could also
 force-reload with a restart command.  Without the flag, the entire
 chunk of dependency code is bypassed and the launch continues as
 normal.

I'm not sure what you're saying. Are you saying that the dependency
code is in the runscript, but within an IF statement that checks
for ../.env/NEEDS_ENABLED?

 
 The goal is the same but the emphasis has changed.  This will be
 considered a fall-back feature for those systems that do not have
 such a tool available, or have constraints that force the continued
 use of a shell launcher.  It is the option of last resort, and while
 I think I can make it work fairly consistently, it will come with
 some warnings in the wiki.  For Laurent, he wouldn't even need to
 lift a finger - it fully complies with his desires out of the box. ;-)
 
 As new tools emerge in the future, I will be able to write a shunt
 into the script that detects the tool and uses it instead of the
 built-in scripted support.  This will allow Laurent's work to be
 integrated without messing anything up, so the behavior will be the
 same, but implemented differently.

Now THAT'S Unix at work!

 
 Finally, with regard to the up vs actually running issue, I'm not even
 going to try and address it due to the race conditions involved.  The
 best I will manage is to first issue the up, then do a service check
 to confirm that it didn't die upon launch, which for a majority of
 cases (but not all) should suffice.  Yes, there are still race conditions,
 but that is fine - I'm falling back to the original model of service
 fails continually until it succeeds, which means a silently-failed
 child dependency that was missed by the check command will still
 cause the parent script to fail, because the daemon itself will
 fail.  It is a crude form of graceful failure.  So the supervisor
 starts the parent again...and again...until the truant dependency is
 up and running, at which point it will bring the parent up.  Like I
 said, this will be a fall-back feature, and it will have minor
 annoyances or issues.

Yes. If I'm understanding you correctly, you're only going so far in
determining "really up", because otherwise writing a one-size-fits-all
services thing starts getting way too complicated.

I was looking at runit docs yesterday before my Init System
presentation, and learned that I'm supposed to put my own Really Up
code in a script called ./check.

 
 Right now the biggest problem is handling all of the service tool
 calls. They all have the same grammar, (tool) (command) (service
 name), so I can script that easily.  Getting the tools to show up as
 the correct command and command option is something else, and I'm
 working on a way to wedge it into the use-* scripts so that the tools
 are set up out of the box all at the same time.  This will create
 $SVCTOOL, and a set of $CMDDOWN, $CMDUP, $CMDCHECK, etc. that will be
 used in the scripts.  **Once that is done I can fully test the rest
 of the dependency concept and get it fleshed out.** If anyone wants
 to see it, email me directly and I'll pass it along, but there's not
 much to look at.

If I read the preceding correctly, you're making service tool calls for
runit, s6, perp and nosh grammatically identical. Are you doing that so
that your run scripts can invoke the init-agnostic commands, so you
just have one version of your scripts?

However you end up doing the preceding, I think it's essential to
thoroughly document it, complete with examples. I think that the
additional layer of indirection might be skipped by those not fully
aware of the purpose.

I can help with the documentation.

Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Avery Payne
The use of hidden directories was done for administrative and aesthetic
reasons.  The rationale was that the various templates and scripts and
utilities shouldn't be mixed in while looking at a display of the various
definitions.  The other rationale was that the entire set of definitions
could be moved or copied using a single directory, although it doesn't work
that way in practice, because a separate cp is needed to move the
dot-directories.

The basic directory structure is as follows:

sv
- .bin
- .env
- .finish
- .log
- .run

Where:

sv is the container of all definitions and utilities.  Best-case, the
entire structure, including dot directories, could be set in place with mv,
although this is something that a package maintainer would be likely to
do.  People initially switching over will probably want to use cp while the
project develops.  That way, you can pull new definitions and bugfixes with
git or mercurial, and copy them into place.  Or you could download it as a
tarball off of the website(s) and simply expand it in place.  So there are a few
different ways to get this done.
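
For example, copying into place could look like the following (the paths
are only illustrative; giving the source as sv/. is one way to let a
single cp pick up the dot-directories as well):

# copy the whole tree, dot-directories included, into /etc/sv
cp -a ./sv/. /etc/sv/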

.bin is meant to store any supporting programs.  At the moment this is a
bit of a misnomer because it really only stores the framework shunts and
the supporting scripts for switching those shunts.  It may have actual
binaries in the future, such as usersv, or other independent utilities.
When you run use-* to switch frameworks, it changes a set of symlinks to
point to what should be the tools of your installed framework; this makes
it portable between all frameworks, a key feature.
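
As a sketch of what a use-* shunt amounts to (the generic link names below
are invented for illustration; s6-svc and s6-svstat are the real s6 tools):

#!/bin/sh
# repoint the generic tool names in .bin at the installed framework's tools
cd /etc/sv/.bin || exit 1
ln -sf "$(command -v s6-svc)"    svc-control
ln -sf "$(command -v s6-svstat)" svc-status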

.env is an environment variable directory meant to be loaded with the
envdir tool.  It represents system-wide settings, like PATH, and some of
the settings that are global to all of the definitions.  It is used within
the templates.
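
For reference, envdir's convention is one file per variable: the file name
is the variable name and its first line is the value.  Seeding a global
setting could look like this (the PATH value is only an example):

mkdir -p /etc/sv/.env
printf '%s\n' '/usr/local/bin:/usr/bin:/bin' > /etc/sv/.env/PATH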

.finish will hold ./finish scripts.  Right now, it's pretty much a stub.
Eventually it will hold a basic finish script that alerts the administrator
to issues with definitions not launching, as well as handling other
non-standard terminations.
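
A minimal sketch of the kind of finish script intended (not the real thing;
runit and s6 pass details of how ./run exited as $1 and $2, with slightly
different semantics between frameworks):

#!/bin/sh
# complain to syslog whenever ./run did not exit cleanly
[ "$1" = 0 ] || logger -p daemon.warning \
  "service ${PWD##*/}: run exited abnormally (code=$1 status=$2)"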

.log will hold ./log scripts.  It currently has a single symlink, ./run,
that points to whatever logging system is the default.  At the moment it's
svlogd only because I haven't finished logging for s6 and daemontools.
Eventually .log/run will be a symlink to whatever logging arrangement you
need.  In this fashion, the entire set of scripts can be switched by simply
switching the one symlink.

.run will hold the ./run scripts.  It has a few different ones in it, but
the main one at this time is run-envdir, which loads daemon-specific
settings from the definition's env directory and uses them to launch the
daemon.  Others include an optional feature for user-defined services, and
basic support for one of three getty variants.  I may or may not make a new one for
the optional dependency feature; I'm going to see if it can be standardized
within run-envdir first.
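
In spirit, that template boils down to something like this sketch (DAEMON
and DAEMON_OPTS are invented file names, used here only for illustration):

#!/bin/sh
# envdir loads one variable per file from ./env, then the daemon is exec'd
exec envdir ./env sh -c 'exec "$DAEMON" $DAEMON_OPTS'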

I can always remove the dots, but then you would have these mixed in with
all of the definitions, and I think it will add to the confusion more than
having them hidden.  As it stands, the only time you need to mess with the
dot directories is (a) when setting them up for the first time, or (b) when
you are switching your logging around.  Otherwise there's really no need to
be in them, and when you use ls /etc/sv to see what is available, they
stay out of your way.
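
Concretely:

ls /etc/sv      # shows only the service definitions
ls -A /etc/sv   # also shows .bin .env .finish .log .run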

If there is a better arrangement that keeps everything in one base
directory for easy management but eliminates the dots, I'll listen,
although I think this arrangement actually makes a bit more sense; the
install instructions are careful to include the dots, so you only need to
mess around with them at install time.

On Thu, Jan 8, 2015 at 8:20 AM, Luke Diamand l...@diamand.org wrote:

 Is it possible to avoid using hidden files (.env) as it makes it quite a
 lot harder for people who don't know what's going on to, um, work out
 what's going on?

 Thanks!
 Luke




Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Avery Payne
On Thu, Jan 8, 2015 at 9:23 AM, Steve Litt sl...@troubleshooters.com
wrote:

 I'm having trouble understanding exactly what you're saying. You mean
 the executable being daemonized fails, by itself, because a service it
 needs isn't there, right? You *don't* mean that the init itself fails,
 right?


Both correct.


 I'm not sure what you're saying. Are you saying that the dependency
 code is in the runscript, but within an IF statement that checks
 for ../.env/NEEDS_ENABLED?


Correct.  If the switch, which is a data value in a file, is zero, it
simply skips all of the dependency stuff with a giant if-then wrapper.  At
least, that's the plan.  I won't know until I can get to it.
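
A rough sketch of what that wrapper could look like (./needs and
./run-daemon are placeholders invented here; $CMDUP and $CMDCHECK are the
abstracted commands mentioned earlier in the thread):

#!/bin/sh
if [ "$(cat ../.env/NEEDS_ENABLED 2>/dev/null)" = 1 ] && [ -r ./needs ]; then
  while read -r dep; do
    $CMDUP "$dep"                 # left unquoted: may expand to "tool command"
    $CMDCHECK "$dep" || exit 1    # fail fast; the supervisor retries us later
  done < ./needs
fi
exec envdir ./env ./run-daemon    # without the flag, the naive launch runs as-is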


  Like I
  said, this will be a fall-back feature, and it will have minor
  annoyances or issues.

 Yes. If I'm understanding you correctly, you're only going so far in
 determining "really up", because otherwise writing a one-size-fits-all
 services thing starts getting way too complicated.


Correct.  I'm taking an approach that has the minimum needed to make
things work correctly.



 I was looking at runit docs yesterday before my Init System
 presentation, and learned that I'm supposed to put my own Really Up
 code in a script called ./check.


Also correct, although I'm trying to only do ./check scripts where
absolutely needed, such as the ypbind situation.  Otherwise, the check
usually just looks at whether the child PID is still around.
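
For the ypbind case, the ./check could be as small as this sketch (treat it
as illustrative; ypwhich exits non-zero while the domain is not yet bound):

#!/bin/sh
exec ypwhich >/dev/null 2>&1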


 If I read the preceding correctly, you're making service tool calls for
 runit, s6, perp and nosh grammatically identical.


Correct.


 Are you doing that so
 that your run scripts can invoke the init-agnostic commands, so you
 just have one version of your scripts?


Exactly correct.  This is how I am able to turn the bulk of the definitions
into templates.  ./run files in the definition directories are little more
than symlinks back to a script in ../.run, which means...write once, use a
whole lot. :)  It's also the reason that features are slow in coming - I
have to be very, very careful about interactions.
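
As an illustration of that arrangement (the cron definition and the paths
are only examples):

#   /etc/sv/.run/run-envdir        the shared template script
#   /etc/sv/cron/run -> ../.run/run-envdir
#   /etc/sv/cron/env/              per-definition settings read by the template
ln -s ../.run/run-envdir /etc/sv/cron/run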



 However you end up doing the preceding, I think it's essential to
 thoroughly document it, complete with examples. I think that the
 additional layer of indirection might be skipped by those not fully
 aware of the purpose.


I just haven't gotten around to this part, sorry.



 I can help with the documentation.


https://bitbucket.org/avery_payne/supervision-scripts
or
https://github.com/apayne/supervision-scripts

Feel free to clone, change, and send a pull request.