Re: The "Unix Philosophy 2020" document

2019-10-28 Thread Avery Payne
For those readers who meet the following criteria:



   - Are unfortunate enough to only speak a single language, English;
   - And simply want to read an English version of the document;
   - Are (un)fortunately running a current Debian installation with missing
   LaTeX dependencies;



Do the following:


mkdir -p ~/Projects/ ; cd ~/Projects

git clone https://gitlab.com/CasperVector/up2020.git

cd ./up2020 ; sudo aptitude install latexmk


Edit the first line of the latexmkrc file in the up2020 directory so it
looks like:


@default_files = ('up2020-en');


Then in the up2020 directory, execute this command:


latexmk


The resulting document, while incomplete, will be created as an
English-language PDF named up2020-en.pdf.


Sorry if this added noise to the mailing list.  It was just frustrating to
not have the document build because of missing software dependencies on my
system; doing this tweak allowed me to at least read it.


Re: [Announce] s6.rc: a distribution-friendly init/rc framework (long, off-topic)

2018-03-23 Thread Avery Payne
>
>  I see that s6.rc comes with a lot of pre-written scripts, from acpid
> to wpa_supplicant. Like Avery's supervision-scripts package, this is
> something that I think goes above and beyond simple "policy": this is
> seriously the beginning of a distribution initiative. I have no wish
> at all to do this outside of a distribution, and am convinced that
> the software base must come first, and then service-specific
> scripts must be written in coordination with distributions that use
> it; that is what I plan to do for s6-frontend in coordination with
> Adélie or Alpine (which are the most likely to use it first). But there
> is a large gray area here: what is "reusable policy" (RP) and what is
> "distribution-specific" (DS)? For instance, look at the way the
> network is brought up - in s6.rc, in OpenRC, in sysvinit scripts,
> in systemd. Is bringing up the network RP or DS? If it involves
> choosing between several external software packages that provide
> equivalent functionality, is it okay to hardcode a dependency, or
> should we provide flexibility (with a complexity cost)?
>
>  This is very much the kind of discussion that I think is important
> to have, that I would like to have in the relatively near future, and
> since more and more people are getting experience with semi-packaging
> external software, and flirting with the line between software
> development and distro integration - be it Olivier, Avery, Casper,
> Jonathan, or others - I think we're now in a good position to have it.
>
>
I'm still thinking this over, especially the distribution-specific
dependencies.  The tl;dr version is, we are really dealing with the
intersection of settings specific to the supervisor, the distribution's
policy (in the form of naming-by-path, environment settings, file
locations, etc), and the options needed for the version of the daemon
used.  If you can account for all three, you should be able to get
consistent run scripts.

The launch of a simple longrun process is nearly (but not entirely)
universal.  What I typically see in > 90% of cases are:

1. designation of the scripting environment in the shebang, to enforce
consistency.
2. clearing and resetting the environment state.
3. if needed, capture STDERR for in-line logging.
4. if needed, running any pre-start programs to create conditions (example:
uuid generation prior to launching udev)
5. if needed, the creation of a run directory at the distribution-approved
location
6. if needed, ownership changes to the run directory
7. if needed, permission changes to the run directory
8. as needed, chain loading helper programs, with dependencies on path
9. as needed, chain loading environment variables
10. specification of the daemon to run, with dependencies on path
11. settings as appropriate for the version of daemon used, with
dependencies on path

The few processes that can't do this typically have either a design flaw
or a very elaborate launch process.  Either of those requires a "special"
run file anyway, so they are already exceptions.
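
As a rough illustration only - every name below (setuidgid, dbus-uuidgen,
mydaemon, and the paths) is a stand-in rather than a recommendation - a run
script that walks through those steps might look like this:

#!/bin/sh                                        # 1. pin the scripting environment
PATH=/usr/sbin:/usr/bin:/sbin:/bin ; export PATH # 2. reset the environment state
exec 2>&1                                        # 3. fold STDERR into the log stream
dbus-uuidgen --ensure || exit 1                  # 4. pre-start preparation (example)
[ -d /run/mydaemon ] || mkdir -p /run/mydaemon   # 5. run directory, approved location
chown mydaemon: /run/mydaemon                    # 6. ownership of the run directory
chmod 0755 /run/mydaemon                         # 7. permissions on the run directory
exec setuidgid mydaemon \                        # 8. chain-load a helper program
  env MYDAEMON_OPTS=value \                      # 9. chain-load environment variables
  /usr/sbin/mydaemon --foreground                # 10-11. daemon plus version-appropriate flags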

The following issues arise from distribution differences that force policy decisions:

* The type of logging used, which can vary quite a bit
* The various names of the supervisory programs
* The path of the daemon itself
* The path of the daemon's settings file and/or directory
* Naming conventions for devices, especially network devices
* How to deal with "instances" of a service
* Handling of failures in the finish file
* Changes in valid option flags between different versions of a daemon

Notice that the first 4 could easily be turned into parameters.  Device
names I don't have an answer for - yet.  Instances are going to be
dependent on the supervisor and system-state mechanism used, and frankly, I
think are beyond the scope of the project.  I don't have an answer for the
finish file at this time because that is a behavior dictated by
distribution policy; it too is outside of scope.  The last one, option
flags, can be overcome by making them a parameter.

The idea that we could turn the bulk of policy into parametric settings
that are isolated away from code is why I have not been as concerned about
separation of policy.  I've been messing around with using two parametric
files + simple macro expansion of the longrun steps listed above to
build the run files as needed.  You would use it like this (a rough sketch
of the build step follows the list):

1. You download the source code.
2. You specify a supervisor in a settings file, which in turn provides all
of the correct names for various programs
3. You specify a distribution "environment" in a settings file, which
provides path information, device naming, etc.
4. You run a build process to create all of the run files, which
incorporate the correct values based on the settings from the prior two
files.
5. You run a final build step that installs the files into a "svcdef"
directory which contains all of the definitions ready-to-use; this would
correspond with the s6 approach of a definition directory that does not
contain live state.
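
A rough sketch of what step 4 could look like - the file names, variable
names, and the @MARKER@ convention are invented for illustration, not the
project's actual layout:

#!/bin/sh
# build-run.sh (illustrative): expand a run.template into a concrete ./run.
# supervisor.conf might define, e.g., SETUIDGID="s6-setuidgid" or "chpst -u";
# distro.conf might define, e.g., SBIN_PATH=/usr/sbin and RUNDIR_BASE=/run.
. ./supervisor.conf
. ./distro.conf
sed -e "s|@SETUIDGID@|$SETUIDGID|g" \
    -e "s|@SBIN_PATH@|$SBIN_PATH|g" \
    -e "s|@RUNDIR_BASE@|$RUNDIR_BASE|g" \
    run.template > run
chmod +x run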

re: Incompatibilities between runit and s6?

2018-01-10 Thread Avery Payne
I am guessing the differences will be subtle, and most of the general
behavior you desire will remain the same.  You may be able to get away
with a "sed 's/sv\ /s6-sv\ /' new-script-name" on some of
your scripts; give it a try, what could it hurt?

Also, for those systems not running CentOS, what are you currently using
for init + service management?


runit: is it still maintained and does it have a CVS repository?

2018-01-09 Thread Avery Payne
I have a slightly older (version 2.1.2) mercurial repository at this
address: https://bitbucket.org/avery_payne/runit

It should not be far behind whatever is "current".  That being said, unless
you have a specific need for runit, s6 & friends are probably the way to go.


[announce] release 0.3.0 of rc-shim

2017-10-18 Thread Avery Payne
I'm announcing the release of rc-shim v0.3.0, a small script that is useful
for adding supervision to existing installations already using SysV-styled
rc scripts.  The following has changed:

* there is a testing script that ensures the shim will work with your shell.

* confirmation that the shim works with several different shells, outlined
in the README file.

* there is now a "fancy" print variant that uses the colorized printing,
etc. that comes with your OS installation.

The script is in regular use and I have not had any major defects to be
concerned about.  There are a few known issues with output, but these
defects will not affect the operation of the shim.

The quality is now "beta" level and I have decided to continue to make
minor improvements; I retract my former statement that I am headed to a
maintenance release.

Source can be obtained with mercurial or git at
https://bitbucket.org/avery_payne/rc-shim. Incremental improvements will
continue with each release and feedback is always appreciated.


Re: A dumb question

2017-05-03 Thread Avery Payne

On 5/1/2017 2:11 PM, Francisco Gómez wrote:

And during the process, someone recently told me
something like this.

 "It's old software. Its last version is from 2014. If I have to
 choose between a dynamic, bug-filled init like Systemd and a barely
 maintained init like Runit, I'd rather use Systemd."

That sounds bad, doesn't it?


No, it doesn't sound bad at all, but the person you talked with does 
sound misinformed.


To quote an (in)famous but well-meaning internet troll, "Functionality 
is an asset, but code is a liability."


Runit is simple enough, and small enough, that it does not require a VCS 
to maintain it.  The same goes for s6, and many other supervision suites.


Now contrast those with systemd: just how big is systemd, including all 
of its pieces?  How many people are needed to maintain it?


Ponder those three sentences, together, for a little while.


[announce] release 0.2.5 of rc-shim

2017-04-11 Thread Avery Payne
I'm announcing the release of rc-shim v0.2.5, a small script that is useful
for adding supervision to existing installations already using SysV-styled
rc scripts.

This is a very minor release.  The following has changed.

* now includes a primitive test script, useful for debugging shim behavior.

* now confirmed to work on 7 different shells, using the test script.

* fixed a typo with a variable name.

Source can be obtained with mercurial or git at
https://bitbucket.org/avery_payne/rc-shim.  Feedback is always appreciated.


[announce] release 0.2.3 of rc-shim

2017-01-01 Thread Avery Payne
I'm announcing the release of rc-shim v0.2.3, a small script that is 
useful for adding supervision to existing installations already using 
SysV-styled rc scripts.  At this point the shim appears to be fairly 
stable.  The following has changed:


* README has been expanded.

* supervisor settings have been consolidated.

I am working toward the final release, after which the project will 
enter "maintenance".


Source can be obtained with mercurial or git at 
https://bitbucket.org/avery_payne/rc-shim. Feedback is always appreciated.


[announce] release 0.2.2 of rc-shim

2016-12-11 Thread Avery Payne
I'm announcing the release of rc-shim v0.2.2, a small script that is 
useful for adding supervision to existing installations already using 
SysV-styled rc scripts.  I was going to withhold this announcement until 
the 0.3 release, but there were some bugs that needed to be addressed.  
The following has changed:


*  Debugging statements have been re-ordered so they fail immediately 
upon detecting a missing setting.


*  The STARTWAIT setting was moved in the script to fix a 
reference-before-declaration bug.


More importantly, the script is no longer considered experimental. It 
appears to be performing its functions consistently and I will be 
extending its testing.  I would rate it as "alpha" quality, meaning that 
some of the settings may be subject to change, but the function of it 
will continue as-is.


Source can be obtained with mercurial or git at 
https://bitbucket.org/avery_payne/rc-shim.  Incremental improvements 
will continue with each release and feedback is always appreciated.


Re: [announce] Release 0.2 of rc-shim

2016-11-13 Thread Avery Payne

Seems like I'm always forgetting something...

https://bitbucket.org/avery_payne/rc-shim

I need to rewrite the README.  You'll want to edit the settings in 
os-settings and supervisor-settings.


Feedback is always appreciated.  If you have questions, contact me 
outside of the mailing list.


On 11/13/2016 12:52 PM, Jean Louis wrote:

Did you forget the link?

And s6 scripts, just taught me to think myself, so I have adapted
everything to work nicely on my system, and rc-shim should help me
then not to think... :-)

Jean





[announce] Release 0.2 of rc-shim

2016-11-13 Thread Avery Payne
I'm pleased to announce the release of rc-shim v0.2, a small script that 
is useful for adding supervision to existing installations using 
SysV-styled rc scripts.  The script replaces existing /etc/init.d 
scripts with a shim that interfaces to a supervisor of your choice.  It 
should support any daemontools-alike supervisor.
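
For readers unfamiliar with the idea, a stripped-down illustration of what a
shim does is shown below.  This is not the actual rc-shim script - the real
one also verifies that the supervisor has picked the service up, reads its
settings files, and handles more cases - but it conveys the shape of it:

#!/bin/sh
# /etc/init.d/foo becomes a thin translator to a runit-style supervisor.
SVC=foo
SVCDIR=/etc/sv/$SVC              # definition directory
SVDIR=/service ; export SVDIR    # scan directory watched by runsvdir
case "$1" in
  start)
    [ -L "$SVDIR/$SVC" ] || ln -s "$SVCDIR" "$SVDIR/$SVC"
    # (the real shim waits here for the supervisor to notice the new service)
    sv start "$SVC"
    ;;
  stop)
    sv stop "$SVC"
    rm -f "$SVDIR/$SVC"
    ;;
  status) sv status "$SVC" ;;
  force-reload) : ;;             # deliberate no-op stub (see the note below)
  *) echo "usage: $0 {start|stop|status|force-reload}" >&2 ; exit 2 ;;
esac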



Since the 0.1 announcement, the following has changed:

* Fixed several bugs in the 0.1 version that affected starting, 
stopping, and reporting status.


* The "reload" option has been removed as it was not compliant with the 
LSB 3.1 standard for arguments accepted by rc scripts.  It has been 
replaced with a stub for "force-reload".  The "force-reload" option 
requires customization to be used correctly, and currently performs a 
no-op.  This is by design.


* The shim header was altered to make it minimally compliant with LSB 
3.1 specifications.  It should allow the shim to work with tools that 
alter runlevel settings.  So far it has been successfully tested with 
Debian's update-rc.d program.


* The shim now correctly sets up and tears down symlinks in the service 
scan directory with each start/stop.


* The shim now has the option to use asynchronous start.  This is a 
trade-off between verification that the supervisor has started, and the 
speed at which the shim processes a start request.  It is disabled by 
default, but can be controlled per-script or system-wide.  Enabling the 
option skips verification in return for speeding up a start request, 
making the assumption that the service scan process will take care of it.


* Added debugging output, which is disabled by default.  This is useful 
during the installation process to confirm that the shim is working 
correctly with your supervisor and daemon.  It is set on a per-script level.



The following limitations still apply:

*  You will need to supply your own supervisor and run scripts for this 
to work.


* The run scripts must be organized into a set of definitions, a set of 
live run directories, and a set of symlinks in a service scan directory.


* The shim only supports starting a single daemon.  If you are replacing 
an rc script that starts multiple daemons, you will need to create a 
custom service scan directory and start that to emulate the behavior.


This script should still be considered experimental.  It continues to 
receive minor testing with a live system.  If you decide to test it, it 
is recommended that you simply rename your existing init.d scripts that 
you are replacing to allow for a rollback, should the shim not function 
correctly for your installation.  Future releases will have additional 
testing and incremental improvements. Suggestions are welcome.


[announce] Release 0.1 of rc-shim

2016-10-16 Thread Avery Payne
I'm announcing the initial release of rc-shim, a small script that is 
useful for adding supervision to existing installations using 
SysV-styled rc scripts.


The script replaces existing scripts in /etc/init.d with a shim that 
interfaces to an existing supervisor.  It should support any 
daemontools-alike supervisor.  You will need to supply your own run 
scripts for this to work.


The first release should be considered experimental.  It has had only 
minor testing with a live system, and the command set is limited.  
Future releases will have additional testing and incremental 
improvements.  Suggestions are welcome.


The script can be found here:
https://bitbucket.org/avery_payne/rc-shim/src


Re: Runit questions

2016-10-16 Thread Avery Payne
I hate using webmail.  It always eats your formatting.

The script shows up as a block of text in the mailing list because of this;
just click the link for the project and look for the run-svlogd script.
You'll find what you need there.

On Sun, Oct 16, 2016 at 1:16 PM, Avery Payne <avery.p.pa...@gmail.com>
wrote:

>
>
> On Sat, Oct 15, 2016 at 3:28 PM, Andy Mender <andymenderu...@gmail.com>
> wrote:
>
>> Hello everyone,
>>
>> I managed to solve some of my problems. It turned out that my terminal was
>> being spammed
>> with erroneous output, because I didn't add the "exec 2&>1" redirection to
>> my ./run files.
>>
>
> This behavior is fairly consistent among supervisors.  The use of STDERR
> typically is left to the administrator to decide; more commonly, it's
> simply tied off into the STDOUT stream for ease of use.
>
>
>>
>> Another problem popped up on my Devuan box, though. I was trying to add a
>> log directory with a ./run script inside to log dhcpcd per runit's design.
>> However, runsvdir complaints that "exec: not found".
>>
>
> This is probably a problem with your shell.  The shebang specified might
> point to an actual sh and not something like bash, ash, dash, etc.  I don't
> have a reference handy but I suspect that you are seeing failure messages
> for "exec" because the shell doesn't support that built-in.
>
> If you encounter this, change the first line to /bin/bash or /bin/ash or
> /bin/dash and see if that resolves the issue.
>
>
>> The same run scripts work on Gentoo, which makes it even more surprising.
>>
>
> Gentoo may be mapping /bin/sh to bash/ash/something  else, but I don't
> know.  I would suggest finding out.
>
>
>> Below the log/run script:
>> #!/bin/sh
>> exec chpst -u log svlogd -tt ./
>>
>
> I'll post a companion log script in a moment.
>
>
>> PS Earlier runsvdir was complaining that "log: user/group not found". I
>> created the "log" group, thinking it might help somehow.
>>
>
> You shouldn't need to create the group normally, which may indicate a
> separate issue.
>
> Try this run script in your ./svcdef/dbus/log directory.  It's lifted
> straight from the source at http://bitbucket.org/avery_payne/supervision-scripts
> as the ./svcdef/.log/run-svlogd script.  Please
> note that you may need to change the group name after chown, and the shell
> in the first line to meet your needs:
>
> #!/bin/sh
> exec 2>&1
> # determine our parent's name
> SVNAME=$( basename $( echo `pwd` | sed 's/log//' ) )
> # bring up svlogd logging into a subdirectory that has the parent's name.
> [ -d /var/log/$SVNAME ] || mkdir -p /var/log/$SVNAME ; chown :adm /var/log/$SVNAME
> [ -d main ] || ln -s /var/log/$SVNAME main
> exec /usr/bin/svlogd -tt main
>
> If you are using Gentoo, I would recommend looking at Toki Clover's
> implementation at https://github.com/tokiclover/supervision.
>
> I hope this helps.
>


Re: Runit questions

2016-10-16 Thread Avery Payne
On Sat, Oct 15, 2016 at 3:28 PM, Andy Mender 
wrote:

> Hello everyone,
>
> I managed to solve some of my problems. It turned out that my terminal was
> being spammed
> with erroneous output, because I didn't add the "exec 2&>1" redirection to
> my ./run files.
>

This behavior is fairly consistent among supervisors.  The use of STDERR
typically is left to the administrator to decide; more commonly, it's
simply tied off into the STDOUT stream for ease of use.


>
> Another problem popped up on my Devuan box, though. I was trying to add a
> log directory with a ./run script inside to log dhcpcd per runit's design.
> However, runsvdir complaints that "exec: not found".
>

This is probably a problem with your shell.  The shebang specified might
point to an actual sh and not something like bash, ash, dash, etc.  I don't
have a reference handy but I suspect that you are seeing failure messages
for "exec" because the shell doesn't support that built-in.

If you encounter this, change the first line to /bin/bash or /bin/ash or
/bin/dash and see if that resolves the issue.


> The same run scripts work on Gentoo, which makes it even more surprising.
>

Gentoo may be mapping /bin/sh to bash/ash/something  else, but I don't
know.  I would suggest finding out.


> Below the log/run script:
> #!/bin/sh
> exec chpst -u log svlogd -tt ./
>

I'll post a companion log script in a moment.


> PS Earlier runsvdir was complaining that "log: user/group not found". I
> created the "log" group, thinking it might help somehow.
>

You shouldn't need to create the group normally, which may indicate a
separate issue.

Try this run script in your ./svcdef/dbus/log directory.  It's lifted
straight from the source at
http://bitbucket.org/avery_payne/supervision-scripts as the
./svcdef/.log/run-svlogd script.  Please note that you may need to change
the group name after chown, and the shell in the first line to meet your
needs:

#!/bin/sh
exec 2>&1
# determine our parent's name
SVNAME=$( basename $( echo `pwd` | sed 's/log//' ) )
# bring up svlogd logging into a subdirectory that has the parent's name.
[ -d /var/log/$SVNAME ] || mkdir -p /var/log/$SVNAME ; chown :adm /var/log/$SVNAME
[ -d main ] || ln -s /var/log/$SVNAME main
exec /usr/bin/svlogd -tt main

If you are using Gentoo, I would recommend looking at Toki Clover's
implementation at https://github.com/tokiclover/supervision.

I hope this helps.


Re: Runit questions

2016-10-13 Thread Avery Payne
On Tue, Oct 11, 2016 at 3:09 PM, Andy Mender 
wrote:

> Hello again,
>
> I'm rewriting some of the standard sysvinit and openrc scripts to ./run
> scripts
>

I would look around a bit.  There are little pockets of pre-written scripts
out there, you just need to dig them up.

Some of the scripts on smarden.org may have minor issues with the daemon
flags they use, so if it doesn't work, go read the man page and compare the
flags in the script to the flags for your installed daemon.


> and I have some problems with dbus. I took the ./run script from Void Linux
> as the original runit documentation doesn't have an exemplary dbus script.
> Whenever I check the status of dbus via "sv status dbus", I get the
> following
> error: "warning: dbus: unable to open supervise/ok: file does not exist".
> This
> makes no sense, as both /etc/sv/dbus/supervise/ and
> /var/service/dbus/supervise/
> contain the "ok" file. Below the run script from Void Linux:
> #!/bin/sh
> [ ! -d /run/dbus ] && install -m755 -g 22 -o 22 -d /run/dbus
> exec dbus-daemon --system --nofork --nopidfile
>

Here is a hacked-up copy of my ./run script.  Be sure to change the
"messagebus" user name after the setuidgid to the proper daemon account for
your installation.  Sorry for the backslash, the word-wrap in the posting
would otherwise kill any formatting.


#!/bin/sh
exec 2>&1
PATH=/sbin:/bin:/usr/sbin:/usr/bin

# must have a valid procfs
mountpoint -q /proc/ || exit 1

# create a unique identifier on each startup
dbus-uuidgen --ensure || exit 1

# start the service
exec pgrphack setuidgid messagebus \
   dbus-daemon --nofork --system --nopidfile



>
> Not sure what's wrong and why this run script needs to contain so many
> operands.
>

The daemon's runtime directory needs to exist before it is launched.  The
first line after the shebang basically does that.


Re: listen(1): proposed addition to the runit suite

2016-08-23 Thread Avery Payne

On 8/22/2016 6:57 AM, Gerrit Pape wrote:

But beware, directory layout, build and release processes are from
around 2001, and might not be as modern as expected ;).

With just a minor hack, I was able to get a clean build without
slashpackage interference.  It might not need that much cleanup after all...




Re[2]: error on logging and how to correctly implement svlogd?

2016-06-22 Thread Avery Payne



Hi,

Thanks for replying. I don't use symlink, instead I put everything
directly on /etc/service/test, then sv start test


Try this:

mkdir /etc/svcdef
mkdir /etc/svcdef/test
mkdir /etc/svcdef/test/log

Put a copy of your test service ./run file into the new directory:

cp /etc/service/test/run /etc/svcdef/test/run

Now open an editor like this:

vi /etc/svcdef/test/log/run

and put this into it:

#!/bin/sh
exec 2>&1
# extract the service name
SVNAME=$( basename $( echo `pwd` | sed 's/log//' ) )
# create a logging directory if one isn't present
[ -d /var/log/$SVNAME ] || mkdir -p /var/log/$SVNAME ; chown :adm /var/log/$SVNAME

# create a hard-coded path name to reference
[ -d main ] || ln -s /var/log/$SVNAME main
# launch the logger
exec /usr/bin/svlogd -tt main

after that, save the file and exit the editor, and do the following:

mkdir /etc/sv
cp -Rav /etc/svcdef/* /etc/sv/
ln -s /etc/sv /service

Now start your supervision and make sure it's pointing at /service 
instead of /etc/service.  Type


ps fax

...and you should see a supervision "tree" complete with your test 
service and logger.  You don't have to use /etc/svcdef or /etc/sv or 
even /service, I'm just giving these as suggestions.  For that matter 
the logger could even be switched out, the logging done elsewhere, etc.


The logging needs to start using a subdirectory of the service.  In this 
case, the service is /etc/sv/test and the logger would be 
/etc/sv/test/log.  A ./run file needs to be present in the log directory 
to launch the logger, which is the script we just created at 
/etc/svcdef/test/log/run.


Hope this helps.



Re: runit kill runsv

2016-06-22 Thread Avery Payne


I am try to reproduce situation when runsv under some catastrophic failure,
when runsv got killed, it will restart, but my test daemon "memcached"
still running on background, eventually it will start memcached twice.
How could I avoid this from happening? Seems fault handling isn't that
great on this matter.

It almost sounds like you need to chain-load memcached using chpst.  If 
memcached has internal code to change its process group then it is 
"escaping" supervision, which means that runsv is not in direct control 
of it.  To fix this, your ./run script would be similar to:


#!/bin/sh
exec 2>&1
exec chpst -P memcached

See http://smarden.org/runit/chpst.8.html for details.  This would cause 
memcached to be "captive" to the runsv process.  Try the change with 
chpst and see what happens.  You may find other issues you're not seeing 
after you make this change; check the log with tail -f /path/to/log/file 
and see if it is restarting over and over (a "restart loop").




Re: [DNG] Supervision scripts (was Re: OpenRC and Devuan)

2016-05-06 Thread Avery Payne
Regarding the use of supervision-scripts as "glue" in distributions, yes,
the project was meant for that. Most - but not all of - the scripts are in
working order, as I use them at home on my personal server.  If you are
willing to take the time to remap the names (as needed), the scripts should
work out-of-the-box.  If you have questions, need help, or get stuck, write
to me and I'll do my best to give you answers.

Currently, these design problems remain:

* account name mappings that correspond to how the names are set up in the
installation,
* hard coded file pathing, which is inconsistent between distributions,
* handling of device names, which are inconsistent between kernels,
* handling of "instances", where one service will be "reused" over and over
(think multiple web servers of the same type on different ports),
* the "versioning problem", which I have (inadequately) described elsewhere
on the mailing list.  The current design addresses this.

My personal life has been very busy, and I needed a break, so there hasn't
been much movement.  Now that things are slowing, I can turn my attention
to it again this summer.  I have a plan to revitalize supervision-scripts
which addresses all (or most) of the listed design problems. The current
plan is:

* come up with clear definitions of the problems,
* a proposal, detailing solutions step by step, which will become a design
document,
* peer review of the document for inaccuracies and errors,
* close out the existing design and archive it,
* announce the old design as version 0.1, for historical interest,
* conversion of the existing design's data into new data structures, and
* finally, writing the code needed to generate the proper ./run files based
on the data provided.

The first step is mostly done.  The second one is just starting.


On Wed, May 4, 2016 at 9:39 AM, Steve Litt 
wrote:

> On Tue, 3 May 2016 22:41:48 -1000
> Joel Roth  wrote:
>
> > We're not the first people to think about supporting
> > alternative init systems. There are collections of the
> > init scripts already available.
> >
> > https://bitbucket.org/avery_payne/supervision-scripts
> > https://github.com/tokiclover/supervision
>
> Those can serve as references and starting points, but I don't think
> either one is complete, and in Avery's case, that can mean you don't
> know whether a given daemon's run script and environment was complete
> or not. In tokiclover's case, that github page implies that the only run
> scripts he had were for the gettys, and that's pretty straightforward
> (and well known) anyway.
>
> As I remember, before he had to put it aside for awhile, Avery was
> working on new ways of testing whether needed daemons (like the
> network) were really functional. That would have been huge.
>
> Another source of daemon startup scripts is here:
>
> https://universe2.us/collector/epoch.conf
>
> SteveT
>
> Steve Litt
> April 2016 featured book: Rapid Learning for the 21st Century
> http://www.troubleshooters.com/rl21
>


unit2run.py, a Python script for converting systemd units

2016-04-02 Thread Avery Payne
I'm announcing the 0.1 release of unit2run.py, which is a small
brain-damaged hack that attempts to create a ./run file from a systemd
unit.  It is released under the ISC license.

The following features are available:

* It will pump out a horrifically formatted shell script.  It might even be
legible.
* Sometimes the shell scripts actually work, regardless of the supervisor.
Sometimes.

It has the following limitations:

* It only understands about 4 statements, 2 of which do *absolutely nothing*
.
* It laughs at [sections].  Same goes for dependency handling, socket
activation, and other systemd features.  Ha-ha!
* It will eat your kittens and overwrite your existing run files in the
same directory if given the chance, so um, don't do that.
* It currently suffers from "procedural programming".
* Not if but when it breaks, you get to keep all the pieces.

Feel free to download it and have a laugh or two.  Yes, this was written in
my spare time, and it shows.

http://bitbucket.org/avery_payne/unit2run.py

P.S. make sure that there isn't already a file named "run" in the directory
you are running the script in.  You wouldn't like the results, trust me.


Re: The plans for Alpine Linux

2016-03-03 Thread Avery Payne


On 2/3/2016 9:30 AM, Steve Litt wrote:


Hi Laurent,

The situation you describe, with the maintainer of a distro's
maintainer for a specific daemon packaging every "policy" for every
init system and service manager, is certainly something we're working
toward. But there are obstacles:

* Daemon maintainer might not have the necessary time.

This is probably true.



* Daemon maintainer might not have the necessary knowledge of the init
   system or service manager.
Also true.  To them it's probably "just glue that needs to be there to 
support that thing-a-ma-jig".  Look at some of the Debian init.d scripts 
sometime and notice just how butchered some are.  Some are blatant 
cut-and-paste jobs with the bare minimum to make the launch happen...


I personally feel that the first two points made represent the 
"majority" of cases, although I don't have any proof to back that assertion.




* Daemon maintainer might be one of these guys who figures that the
   only way to start up a system is sysvinit or systemd, and that any
   accommodation, by him, for any other daemon startup, would be giving
   aid and comfort to the enemy.

Not entirely sure that would be the case, but hey, anything is possible.



* Daemon maintainer might be one of these guys who tries to (example)
   Debianize the run script to the point where the beauty and simplicity
   and unique qualities of the service manager are discarded.
Given how Debian works (examples: alternatives, package and repository 
licensing, etc.) I would be surprised if this DIDN'T happen.  Debian 
software really wants one way to deal with multiple choices, so there 
would probably be some kind of glue scripts that would make the 
appropriate calls, etc.


Just my .05 cents.



[announce] supervision-scripts to be retired

2015-11-21 Thread Avery Payne
Since I began what amounts to my first open source project - ever - I 
have learned a lot in the process, met several interesting characters, 
and hopefully provided some insights to a few others as well.  To 
everyone over the last year and a half who has put up with me, thank you
for giving me an ad-hoc education, being patient with my silly and often 
inane questions, and tolerating some of my goofy responses.


When I started supervision-scripts, I had a vision of a set of ./run 
files that could be used with many different supervisors, in different 
environments.  It was, at the time, an admitted reactive approach to 
dealing with unpleasant actions in the Linux community. I have since 
changed my view and approach to this.


Along the way, I also found that there were issues that would prevent 
both the completion and use of the current scripts on other 
distributions.  One of those problems was the use of different user 
account names; different distributions use their own naming schemes, and none
of them easily aligns with its peers.  Another problem was
the use of daemons that required "settings after installation", which I 
do not have a current provision for.  To make matters worse, much has 
happened in my personal life that has obstructed me from continuing work 
on the current version of supervision-scripts.  I have given some 
thought to this, and between the current lack of time and the 
constructive criticism received from various parties, I will be unable 
to continue adding new definitions as I had planned.


The existing arrangement I came up with, using indirection and program 
renaming, is viable for installations that use a supervisor only.  At 
first I thought I could incorporate it into system or state management, 
but I now see that will not be possible - yet another design flaw that 
prevents me from reaching the project's goals.  The other issue is the 
embedded user accounts used by various daemons, which currently are all 
Debian-mapped, making the project still intimately tied to Debian when I 
do not want it to be so.


Despite all of those limitations, it has been easy for me to create new 
definitions quickly, and use them for my own purposes at home. In the 
sense that this shows a proof-of-concept, it validates some of my 
assumptions about making portable ./run files.


So, the current project is entering "maintenance".  By this I mean that 
I may occasionally add new definitions to the project but overall, there 
will be no further code changes, and no changes to the structure of how 
it works.  The documentation will be adjusted to reflect this, along 
with the current design flaws.  Once the documentation is complete, the 
project will "retire", rarely updating.


I still believe that process supervision has a future; that for that 
future to become a reality, there needs to be an easy way for 
distributions, packagers, and end-users to incorporate supervision into
existing arrangements; and that the current, easiest method for doing so 
is to have pre-written definitions that can be quickly installed and 
used.  I am not fully admitting defeat in this process.  This specific 
battle was lost, but this isn't over.


As time goes on, I will take what little spare time I have left and put 
it towards a new design.  The design will be fully implemented "on 
paper" first, and I will ask for peer review in the vain hope that more 
experienced eyes besides my own will be able to discern problems on 
paper before they solidify in code.  This new design will incorporate 
what I have learned from supervision-scripts, but it will take an 
entirely different approach, one that I hope achieves the original 
objective of a portable set of ./run files for supervision.


Until then, I will stay in the background, content to observe.


Re: machine api to supervise

2015-10-16 Thread Avery Payne
It would be nice to develop a common "grammar" that describes whatever 
is being probed.  If the grammar was universal enough, you could fill in 
empty strings or zeros for things that don't apply, but the interpreter 
for the state data need only be written once.


The following pipe-dream passes for valid json according to 
http://jsoneditoronline.org but some of the data might not even be 
available, so a lot of it is just wishful thinking...


{
  "servicelist": {
    "myservice": {
      "definition": {
        "type": "s6",
        "path": "/path/to/service/definition",
        "run": "myservice"
      },
      "state": {
        "uptime": "54321 seconds",
        "last-start": "2015-10-16 18:27:32 PST",
        "last-state": "unknown",
        "number-restarts": 0,
        "wanted": "up",
        "currently": "up"
      },
      "env-vars": {
        "A_STRAY_ENV_VARIABLE": "option-blarg",
        "DAEMONSETTING": "--some-daemon-setting",
        "OTHEROPTION": "myobscureoptionhere",
        "PATH": "/sbin:/usr/sbin:/bin:/usr/bin:/path/to/service/definition"
      },
      "supervisor": {
        "name": "s6-supervise",
        "pid": 123,
        "uid": 0,
        "gid": 0,
        "stdin": 0,
        "stdout": 1,
        "stderr": 1
      },
      "daemon": {
        "name": "my-service-daemon",
        "pid": 124,
        "uid": 101,
        "gid": 102,
        "stdin": 0,
        "stdout": 1,
        "stderr": 1
      },
      "logger": {
        "name": "logwriterprocess",
        "pid": 125,
        "uid": 99,
        "gid": 999,
        "stdin": 0,
        "stdout": 1,
        "stderr": 1
      }
    }
  }
}

There should be enough there for the admin to diagnose most issues 
quickly.  I wrapped it in a "servicelist" so multiple definitions could 
be returned.  stdin/stdout/stderr, along with the env settings are 
probably not feasible, but I threw them in anyways.


On 10/16/2015 2:11 PM, Buck Evan wrote:

If you'll specify how you want it to look / work I'll (possibly) attempt to
provide a patch.

`s6-svstat --json` is my initial bid.

I'll try to match the nosh output as closely as is reasonable.

On Thu, Sep 24, 2015 at 4:27 AM, Laurent Bercot 

Re: Built-in ordering

2015-09-20 Thread Avery Payne
Regarding the use of ordering during "stage 1", couldn't you just have a
single definition for stage 1, run through it to set up whatever is needed,
then transition to a new system state that doesn't include that definition
with (insert system management here)?  What I am trying to ask is if the
down files are really necessary.


Re: Some suggestions about s6 and s6-rc

2015-09-19 Thread Avery Payne
With regard to having scripted placement of down files, if it was in a
template or compiled as such, then the entire process of writing it into
the definition becomes trivial or moot.  While there should always be a
manual option to override a script, or the option to write one directly, I
think the days of writing all of the definitions needed by hand have long
since passed.

But there is an issue that would keep this idea from easily occurring. You
would need to find a way to signal the daemon that the system is going
down, vs. merely the daemon going down.  I suppose you could argue that the
down and stop commands should be semantically different, and use those to
send the signal, but that's not how they are used today.  Beyond that, if
the placement of the down file was baked into all of the scripts, either by
compiling or templating, then there isn't an issue of repeatedly typing in
support for this.


Re: [ale] systemd talk from July has slide deck online now

2015-09-09 Thread Avery Payne

On 9/9/2015 9:57 AM, Laurent Bercot wrote:

 Quoting the document: "This article is not meant to impart any technical
judgments, but to simply document what has been done".

 I don't think the author of the document is condoning XML configuration
any more than you and I are. (Context: VR is also the author of uselessd
and he eventually gave up on it because systemd was just too complex to
redesign.)



With regard to the redesign, I've noticed an ongoing method of derailing 
systemd discussion.  I'm only going to digress from the mailing list for 
a brief moment to point out this recurring format:


User1: "I don't like systemd that much because of reason X" (where 
reason X may or may not be a legitimate reason to dislike it)

User2: "Go design a better systemd"

Notice how the discussion is turned back to systemd directly.  It's 
always "go re-invent it", not "make something different".  I just wanted 
to point that out.


P.S. because of the flammable nature of this topic, please, do NOT 
respond to my posting on the mailing list; if you want to respond, send 
an email.


supervision-scripts 2015-08

2015-09-09 Thread Avery Payne

Done:
- - - -

+ New definitions: clamd, cpufreqd.

+ Definitions are now versioned.  The ./envdir directory is now a 
symlink to another directory that contains the proper definitions for a 
given version of software.  This solves a long standing problem of 
"version 1 is different from version 2, so using the wrong definition 
breaks deployment", is minimally intrusive, only requires one additional 
symlink and directory in the definition's directory, allows for 
deployment on older systems (with older daemons), doesn't conflict with 
anything that I am aware of (at the moment), and keeps with the 
"filesystem is the database" concept.


+ Cleaned up some old references to /sv that should be /svcdef

+ Revised documentation.  Many concepts are being cleaned up and 
reorganized, including integration into state management systems like 
anopa and s6-rc.
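
As a concrete (and purely hypothetical) picture of the versioned-envdir
arrangement above, a definition might now be laid out like this:

svcdef/mydaemon/
  run                       the master run script, unchanged
  envdir -> envdir-2.x      symlink selecting settings for the installed version
  envdir-1.x/OPTIONS        flags valid for the 1.x series of the daemon
  envdir-2.x/OPTIONS        flags valid for the 2.x series of the daemon

Deploying against the older daemon is then just repointing the symlink,
e.g. "ln -sfn envdir-1.x envdir".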



In Progress:
- - - -

+ Renaming some of the internal environment variables to be SIG* instead 
of CMD* to better reflect their nature.


+ Create a test environment with various supervisors on it.  The idea is 
to have a minimal image with several boot options; each boot option 
brings up some combination of supervision + state management, i.e. 
OpenRC + (something), runit, s6 + s6-rc, etc.  So I would select an 
option and boot...and see the results.


+ More definitions.  I've noticed that I've reduced the flow of new 
entries to a trickle since I have had other commitments.  I need to make 
more; the project is nearly at a standstill but a lot of work is still 
needed.  At a minimum, one definition a week should be obtainable; 
ideally, I would like 4 a week.  To the few people who have been 
watching, yes, the project continues, no, it isn't dead.



To Do:
- - - -
Everything.


Re: runit maintenance - Re: patch: sv check should wait when svrun is not ready

2015-06-22 Thread Avery Payne



On 6/20/2015 3:58 AM, Lasse Kliemann wrote:

Gerrit Pape p...@smarden.org writes:


First is moving away from slashpackage.  Doing it similar to what
Laurant does in his current projects sounds good to me, but I didn't
look into details.  This even might include moving away from the djblib
(Public Domain files from daemontools included in runit) to skalibs.

Sorry to interrupt. Why moving away from slashpackage?

As far as I see, Laurent's build system is a super-set up slashpackage
functionality.

I think the whole idea of registered namespaces for software packages
is a good idea and should be supported. There are of course different
ways of implementing it.
Why not keep slashpackage as an alternative installation method? Is 
there any reason you can't package both traditional and slashpackage 
methods together?


Re: runit maintenance - Re: patch: sv check should wait when svrun is not ready

2015-06-22 Thread Avery Payne


On 6/18/2015 6:24 PM, Buck Evan wrote:

Thanks Gerrit.

How would you like to see the layout, build process look? Maybe there's an
example project you like?
If it's easy enough for me to try, I'd like to.



I pulled Gerrit's stuff into bitbucket a few days ago.  The first step I
would humbly suggest would be to remove the tarballs embedded into the
repository.  Cleaning those out would reduce visual noise when looking
at the file layout.  Because the entire source history is available, 
there is no reason you can't go back and recreate them on-demand; roll 
back the repo to a prior version and run a makefile outside of it to 
build a tarball.


A general sequence of events for init

2015-06-22 Thread Avery Payne
I have this crazy dream.  I dream that, for supervision-styled 
frameworks, there will be a unified init sequence.


*  It will not matter what supervision framework you use.  All of them 
will start properly after the init sequence completes.


*  It will not matter how sophisticated your supervision is.  It is 
independent of the features that are provided by the framework.


*  It will not matter if you only have process supervision, or if you 
have something that manages system state fully.  They are independent of 
the init start-up/shutdown.


* It will be scripted in a minimal fashion.  Each stage of the init 
would be a plugin called by a master script.  The plugins would be 
straight-forward, so you could debug it easily.


* It will not matter if you are on Linux or *BSD anymore; the proper 
low-level initialization will take place.  All that would happen is a 
different plugin would be called.


* It would have a system-specific plugin for handling emergencies, so if 
the init fails, you drop into a shell, or reboot, or hang, or do 
whatever it is your heart desires.


I'm really trying to figure out why this can't exist.  What am I missing 
(beyond the shutdown portion)?  I know there will be the whole BSD 
rc-scripts / SysV rc-scripts / OpenRC debate, I'm trying to avoid any of 
those.  I've used BSD-styled scripts years ago on Slackware, and have 
dealt with SysV's crufty stuff recently.  I haven't tried OpenRC yet.


Re: A general sequence of events for init

2015-06-22 Thread Avery Payne

On 6/22/2015 6:42 PM, post-sysv wrote:
Handling stages 1 and 3 may need some additions to conditional logic, 
however.

The idea would be that different plugins would represent some abstract
notion at some stage in the boot process, i.e. "mount the root
filesystem" would be abstracted away to a script that was
correct/specific for the platform it was on, and the init would simply 
call the program/script/symlink at a pre-arranged location.  No 
conditional logic needed, just point to the right plugins. :)


Introducing shared objects in this would be overkill though, unless 
you're using the word plugin to mean a script using a common interface.


Yes, plugin in this context is an externally callable script or program 
of some kind, called by the init in the correct ordering/sequence.   A 
control script or program would do the coordination and call each 
plugin.
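
A minimal sketch of what that controller could look like - all of the paths
and plugin names here are invented for illustration:

#!/bin/sh
# Stage-1 controller: run each plugin in lexical order, fall back to the
# platform-specific emergency handler if any plugin fails, then hand off
# to whichever supervision framework is installed.
PLUGINDIR=/etc/init/plugins.d        # e.g. 10-mount-root, 20-proc, 30-hostname ...
EMERGENCY=/etc/init/emergency        # drop to a shell, reboot, hang - site policy
for plugin in "$PLUGINDIR"/*; do
  [ -x "$plugin" ] || continue
  "$plugin" || exec "$EMERGENCY"
done
exec /etc/init/start-supervision     # runit, s6, or anything else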


You said there are others with a similar approach, I guess I need to do 
some additional homework.


Re: comparison

2015-06-16 Thread Avery Payne

On 6/15/2015 9:00 PM, Colin Booth wrote:

I only know s6 and runit well enough to comment on for the most part but
filling in some blanks on your matrix:

Updated, thanks for the help.  As I said, it's a start.  It'll need some 
time to improve.  I mostly needed it for the project, to help me keep 
the mapping of what tool does what action straight so I can move 
forward.  I'd like to add some of the missing specialty tools that s6 
and nosh provides, and see if there are equivalent mappings elsewhere.  
Also, as new scripting frameworks are discovered, I'll add them as well.


Re: Readiness notification for systemd

2015-06-16 Thread Avery Payne

On 6/13/2015 11:48 AM, Laurent Bercot wrote:

It's
a wrapper for daemons using the simple write a newline readiness
notification mechanism advertised by s6, which converts that
notification to the sd_notify format.


This had me tossing around some ideas yesterday while I was headed home.

Most (but not all) daemons provide logging.

Logging generally (but not always) implies calling printf() with a 
newline at some point.


What if we could come up with a simple standard that extends your 
newline concept into the logging output?  A newline itself may be 
emitted as part of a startup string sent to a log, so I can't be assured 
that a daemon is really ready.  But what if we could ensure that a 
universally agreed pattern were present in the log? Something like
"Ready.\n" when the daemon is up.  We literally could watch the
stdout/stderr for this pattern and would solve the entire readiness 
notification problem in one go.
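
As a rough sketch of how that watching could be done - the daemon name, the
"Ready." pattern, and the choice of file descriptor 3 (what an s6
./notification-fd file containing "3" would designate) are all assumptions
for illustration:

#!/bin/sh
# Hypothetical wrapper: mirror the daemon's output and turn an agreed-upon
# "Ready." log line into a single newline written to fd 3.  This ignores
# the signal-handling caveat of putting the daemon on the left of a pipe.
mydaemon --foreground 2>&1 | {
  notified=0
  while IFS= read -r line; do
    printf '%s\n' "$line"              # pass every log line through unchanged
    if [ "$notified" -eq 0 ] && [ "$line" = "Ready." ]; then
      echo >&3                         # readiness: one newline on the fd
      notified=1
    fi
  done
}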


It would be an enormous problem politically because most daemon authors 
aren't going to add the 3-4 lines of code needed to support this.


But I could dream...


Re: comparison

2015-06-16 Thread Avery Payne

On 6/16/2015 5:22 AM, James Powell wrote:

Very true, but something always seems to say something along the lines of "if we had
done #2 years ago, we might have avoided a huge mess that now exists."

Agreed.

The same applies to init systems. If there are ready to use feet wetting, taste 
testing scripts ready to go, the job of importing things just gets easier on 
the distribution.
Also agreed.  Actually, there's some discussion on the mailing list from 
a few months back about this.


From: Steve Litt <sl...@troubleshooters.com>
Sent: 6/16/2015 4:45 AM
To: supervision@list.skarnet.org
Subject: Re: comparison

On Tue, 16 Jun 2015 04:05:29 -0700
James Powell james4...@hotmail.com wrote:


I agree Laurent. Though, even though complete init+supervision
systems like Runit exist, it's been nearly impossible to get a
foothold with any alternatives to sysvinit and systemd effectively. I
think one of the major setbacks has been the lack of ready-to-use
script sets, like those included with OpenRC, various rehashes of
sysvinit and bsdinit scripts, and systemd units just aren't there
ready to go.
The true problem is that each daemon needs its own special environment 
variables, command flags, and other gobbledygook that is specific to 
getting it up and running, and a master catalog of all settings doesn't 
exist.  Compounding that is the normal and inevitable need for each 
supervision author to do their own thing, in their own way, so tools get 
renamed, flags get mapped, return codes aren't consistent.  That's just 
the framework, we haven't talked about run scripts yet.  Who wants to 
write hundreds of scripts?  Each hand-cobbled script is an error-prone 
task, and that implies the potential for hundreds of errors, bugs, 
strange behaviors, etc.


This is the _entire_ reason for supervision-scripts.  It was meant to be 
a generic "one size fits most" solution to providing prefabricated run
scripts, easing or removing the burden for package maintainers, system 
builders, etc.  All of the renaming and flags and options and 
environment settings and other things are abstracted away as variables 
that are correctly set for whatever environment you have.  With all of 
that out of the way, it becomes much easier to actually write scripts to 
launch things under multiple environments.  A single master script 
handles it all, reduces debugging, and can be easily swapped out to 
support chainload launchers from s6 and nosh.


The opposite end of this is Laurent's proposal to compile the scripts so 
they are built into existence.  If I'm understanding / imagining this 
correctly, this would take all of the settings and, with a makefile,
"bake" each script into existence with all of the steps and settings
needed.  It would in effect provide the same thing I am doing but it 
would make it static to the environment. There's nothing wrong with the 
approach, and the end result is the same.


The only difference between Laurent's approach and mine is that
Laurent's would need to re-bake your scripts if your framework 
changes; in my project, you simply run a single script and all of the 
needed settings change on the fly.  I'm not sure of the pros/cons to 
either approach, as I would hazard a guess that any system switching 
between frameworks may also require a reboot if a new init is desired.


Here's the rub: in both cases, the settings for each 
service/daemon/whatever are key to getting things running.  Again, we 
come back to the idea of a master catalog of settings.  If it existed, 
then half of the problem would be resolved.  There are lots of examples 
out there, but, they're not all in one place.


So I try to toil over supervision-scripts when I get time, and make that 
catalog.  Even if people don't like what I'm doing with the master run 
script itself, that doesn't matter.  *What matters is that I've managed 
to capture the settings for the various daemons, along with some 
annotations*.  Because I took the time to support envdir, and the 
settings for each daemon are stored in this format, those settings can 
be extracted and used elsewhere.  I'm slowly creating that master 
catalog in a plaintext format that can be read and processed easily.  
This is the real, hidden value of supervision-scripts.
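
To make that concrete - every name below is invented for the example - the
catalog side of a definition is nothing more than one small file per setting:

svcdef/mydaemon/env/DAEMON       contains:  /usr/sbin/mydaemon
svcdef/mydaemon/env/OPTIONS      contains:  --foreground
svcdef/mydaemon/env/USER         contains:  mydaemon

and a generic run script only has to read them back, for example:

#!/bin/sh
exec 2>&1
exec envdir ./env sh -c 'exec chpst -u "$USER" "$DAEMON" $OPTIONS'

Any other tool chain can consume the same files, because they are plain text.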


By the way, I'm going to bite the bullet and switch off of MPL 2.0 soon, 
probably by month-end.


Towards a clearinghouse

2015-06-16 Thread Avery Payne
On Jun 16, 2015 2:39 PM, Steve Litt sl...@troubleshooters.com wrote:

 On Tue, 16 Jun 2015 14:12:48 -0700
 Avery Payne avery.p.pa...@gmail.com wrote:
 
  In my not very humble opinion, we really need a single point of
  reference, driven by the community, shared and sharable, and publicly
  visible.  I could envision something like a single website which
  would collect settings and store them, and if you needed settings, it
  would build all of the envdir files and download them in one giant
  dollop, probably a tarball.  Unpack the tarball and all of the envdir
  settings are there, waiting to be used.  You could even be fancy
  and track option flags through various daemon revisions, so that if
  you have an older service running, you tell it I have older version
  x.y.z and you get the correct flags and not the current ones.

 I must be too cynical. I see that after the above described collection,
 website, and envdirs (does that mean service directories) is somewhat
 operational, a well funded Open Source vendor will flood it with
 wheelbarrows of cash and a parade of developers, and that nice, simple
 collection and web app becomes unfathomable (and has a 20K word terms
 and conditions).

I think this concept could be made to fly on a shoestring budget.  While
part of me enjoys the illusion of "build it and they will come," the more
rational part of me is well aware that it will probably see less than 1,000
hits a month, 98% from crawlers updating their cache.

With regard to the perennial issue of buckets of money swaying and
manipulating projects, the answer is simple: Don't Do That (tm) and it will
be fine.
This would be a community project with public visibility, and any vendor
attempts to strong-arm it would be spotted fairly quickly.

We need to set the cadence and tempo here and I think having a central
resource would help.  I'm about *this* close to buying a cheap domain
name and then putting it up myself.  Except I know zero about website
design and anything I put up would be ugly.

I'm going to give it some more thought.  I can't promise anything beyond
that.


Re: patch: sv check should wait when svrun is not ready

2015-06-16 Thread Avery Payne
I'm not the maintainer of any C code, anywhere.  While I do host a 
mirror or two on bitbucket, I only do humble scripts, sorry.  Gerrit is 
around, he's just a bit elusive.


On 6/16/2015 9:37 AM, Buck Evan wrote:

I'd still like to get this merged.

Avery: are you the current maintainer?
I haven't seen Gerrit Pape on the list.

On Tue, Feb 17, 2015 at 4:49 PM, Buck Evan b...@yelp.com wrote:


On Tue, Feb 17, 2015 at 4:20 PM, Avery Payne
avery.p.pa...@gmail.com wrote:

 On 2/17/2015 11:02 AM, Buck Evan wrote:

 I think there's only three cases here:

  1. Users that would have gotten immediate failure, and no
amount of
 spinning would help. These users will see their error delayed
by $SVWAIT
 seconds, but no other difference.
  2. Users that would have gotten immediate failure, but could
have gotten
 a success within $SVWAIT seconds. All of these users will of
course be glad
 of the change.
  3. Users that would not have gotten immediate failure. None of
these
 users will see the slightest change in behavior.

 Do you have a particular scenario in mind when you mention
breaking lots
 of existing installations elsewhere due to a default behavior
change? I
 don't see that there is any case this change would break.
snip

Thanks for the thoughtful reply Avery. My background is also
maintaining business software, although putting it in those terms
gives me horrific visions of java servlets and soap protocols.

 I have to look at it from a viewpoint of what is everything
else in the system expecting when this code is called.  This
means thinking in terms of code-as-API, so that calls elsewhere
don't break.

As a matter of API, sv-check does sometimes take up to $SVWAIT
seconds to fail.
Any caller to sv-check will be expecting this (strictly limited)
delay, in the exceptional case.
My patch just extends this existing, documented behavior to the
special case of "unable to open supervise/ok".
The API is unchanged, just the amount of time to return the result
is changed.

 This happens because the use of sv check (child) follows the
convention of check, and either "succeed fast or fail fast", ...

Either you're confused about what sv-check does, or I'm confused about
what you're saying.
sv-check generally doesn't fail fast (except in the special case I'm
trying to make no longer fail fast -- svrun is not started).
Generally it will spin for $SVWAIT seconds before failing.

 Without that fast-fail, the logged hint never occurs; the
sysadmin now has to figure out which of three possible services in
a dependency chain are causing the hang.

Even if I put the above issue aside, you wouldn't get a hang,
you'd get the failure message you're familiar with, just several
seconds (default: 7) later. The sysadmin wouldn't search any more than
previously. He would however find that the system fails less often,
since it has that 7 seconds of tolerance now. This is how sv-check
behaves already when a ./check script exits nonzero.
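
For illustration (the service name here is made up), a caller that leans on
that documented tolerance might look like this, with SVWAIT being the same
timeout variable discussed above:

SVWAIT=10 sv check mydaemon || echo "mydaemon still not ready after 10s" >&2
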


 While this is
 implemented differently from other installations, there are
known cases
 similar to what I am doing, where people have ./run scripts like
this:

 #!/bin/sh
 sv check child-service || exit 1
 exec parent-service

This would still work just fine, just strictly more often.






Re: comparison

2015-06-15 Thread Avery Payne
I'm working on something similar, but you're asking for capabilities, 
and most of what I have is a mapping.  I've tried to include a few 
critical links in the comparison for the various homepages, licenses, 
source code, etc.  It's incomplete for now, but it's a start.


https://bitbucket.org/avery_payne/supervision-scripts/wiki/comparison



On 6/15/2015 5:37 PM, Buck Evan wrote:

Is there any resource that compares the capabilities of daemontools,
daemontools-encore, runit, s6, and friends?





Re: dependant services

2015-06-08 Thread Avery Payne

On 6/8/2015 10:44 AM, Steve Litt wrote:

Just so we're all on the same page, am I correct that the subject of
your response here is *not* socket activation, the awesome and
wonderful feature of systemd.

You're simply talking about a service opening its socket before it's
ready to exchange information, right?
That is my understanding, yes.  We are discussing using UCSPI to hold a 
socket for clients to connect to, then launching the service and 
connecting the socket on demand; as a by-product, the assumption is the 
client will block on the socket while the launch is occurring.  Of 
course, to make this work, there is an implicit assumption that the 
launch includes handling of "service is up" vs. "service is ready".



Isn't this all controlled by the service? sshd decides when to open its
socket: The admin has nothing to do with it.
UCSPI is basically the inetd concept re-done daemontools style.  It can 
be a local socket, a network socket, etc.  So the UCSPI program would 
create and hold the socket; upon connection, the service spawns.
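
As a rough sketch (the handler name and port are illustrative, and it assumes
djb's ucspi-tcp is installed), the run script for such a service is just the
listener chained in front of the per-connection program:

#!/bin/sh
# tcpserver holds the listening socket; my-handler is spawned once
# per connection and talks to the client on stdin/stdout
exec tcpserver -R -H 0 7979 /usr/local/bin/my-handler
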





[Snip 2 paragraphs discussing the complexity of sockets used in a
certain context]


If I were to write support for sockets in, I would guess that it
would probably augment the existing ./needs approach by checking for
a socket first (when the feature is enabled), and then failing to
find one proceed to peer-level dependency management (when it is
enabled).

Man, is all this brouhaha about dependencies?
Sequencing actually; I'm just mixing a metaphor here, in that my 
version of dependencies is sequential, self-organizing, but not 
manually ordered.  Order is obtained by sequentially walking the tree, 
so while you have a little control by organizing the relationships, you 
don't have any control over which relationship launches first at a given 
level.


=
if /usr/local/bin/networkisdown; then
   sleep 5
   exit 1
fi
exec /usr/sbin/sshd -d -q
=

Is this all about using the existence of a socket to decide whether to
exec your service or not? If it is, personally I think it's too
generic, for the reasons you said: On an arbitrary service,
perhaps written by a genius, perhaps written by a poodle, having a
socket running is no proof of anything. I know you're trying to write
generic run scripts, but at some point, especially with dependencies on
specific but arbitrary processes, you need to know about how the
process works and about the specific environment in which it's working.
And it's not all that difficult, if you allow a human to do it. I think
that such edge case dependencies are much easier for humans to do than
for algorithms to do.
Oh, don't get me wrong, I'm saying that the human should not only be 
involved but also have a choice.  Yes, I will have explicit assumptions 
about "X needs Y" but there's still a human around that can decide if 
they want to flip the switch on to get that behavior.




If this really is about recognizing when a process is fully functional,
because the process being spawned depends on it, I'd start collecting a
bunch of best-practices, portable scripts called ServiceXIsDown and
ServiceXIsUp.
This is of passing interest to me, because a lot of that accumulated 
knowledge can be re-implemented to support run scripts.  I may write 
about that separately in a little bit.

Sorry for the DP101 shellscript grammar: Shellscripts are a second
language for me.

The project is currently written in shell, so you're in good company.


Anyway, each possible dependent program could have one or more
best-practice "is it up?" type test shellscripts. Some would involve
sockets, some wouldn't. I don't think this is something you can code
into the actual process manager, without a kudzu field of if statements.
It wouldn't be any more difficult than the existing peer code.  Yes, I 
know you peeked at that once and found it a bit baroque but if you take 
the time to walk through it, it's not all that bad, and I'm trying hard 
to make sure each line is clear about its intention and use.


Regarding an older comment that was made about relocating peer 
dependencies into a separate script, I'm about 80% convinced to do it, 
if only to make things a little more modular internally.


[snip a couple paragraphs that were way above my head]


Of course, there are no immediate plans to support UCSPI, although
I've already made the mistake of baking in some support with a bcron
definition.  I think I need to go back and revisit that entry...

I'm a big fan of parsimonious scope and parsimonious dependencies, so
IMHO the less that's baked in, the better.
The minimum dependencies are there.  If anything, my dependencies are 
probably lighter than most - there isn't anything in shell that is baked 
in (i.e. explicit "service X start" statements in the script outright), 
and the dependencies themselves are simply symlinks that can be changed.





As a side note, I'm beginning to suspect that the 

Re: dependant services

2015-06-08 Thread Avery Payne

On 6/8/2015 2:15 PM, Steve Litt wrote:
I'm not familiar with inetd. Using sockets to activate what? In what 
manner? Whose socket?


~ ~ ~
Let's go back in time a little bit.  The year is 1996, I'm downstairs 
literally in my basement with my creaky old 486 with 16Mb of RAM and I'm 
trying to squeeze as much as I can into my Slackware 3.6 install that I 
made with 12 floppy disks.  There are some of these service-thingys that 
I'm learning about and they all take up gobs of expen$ive RAM, and while 
I can swap to disk and deal with that, swapping is a slooow 
affair because a drive that pushes 10 megaBYTES per second is speedy.  
Heck, my drive isn't even IDE, it's ESDI, and being a full-height 5 1/4 
drive, it's actually larger than a brick.  But I digress.  It would be cool 
if there was a way to reduce the RAM consumption...

~ ~ ~

Me: There's got to be something that can free up some RAM...time to dig 
around documentation and articles online with my uber-kool 14.4 dialup 
modem!  Let's see here... what's this?  Inetd?  Whoa, it frees up RAM 
while providing services!  Now I just need RAM to run inetd and all the 
RAM I save from not running other things can be used for mischief!


~ ~ ~
What inetd does is:

1. Have a giant list of port numbers defined, with a program that pairs
   with each port number (/etc/inetd.conf)
2. Opens port numbers out of that list when the inetd daemon is run and
   listens to all of them.
3. When someone talks to the port, the corresponding program is
   launched and the port connected to the program.  If the program
   fails to launch, the connection is closed.
4. You only need RAM for inetd + any services that launch.
5. ...
6. Profit!
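
To make item 1 concrete, a classic /etc/inetd.conf entry pairs a service
(and therefore a port, via /etc/services), a socket type, a protocol, and
the program to launch, roughly like this (the ftp daemon path is only an
example):

ftp  stream  tcp  nowait  root  /usr/sbin/in.ftpd  in.ftpd -l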

Meanwhile, in the same year, halfway across the country in Illinois, in 
a dark lab...

~ ~ ~

DJB: (swiveling around in a dramatic swivel chair, but no cat, because 
cats would shed hair on his cool looking sweater) I shall take the old 
inetd concept, and make it generic and decoupled and streamlined and 
secure.  I shall gift this to you, the Internet, so that you may all be 
secure, unlike Sendmail's Security Exploit of the Month Club which keeps 
arriving in my inbox when I didn't ask for it.  Go forth, and provide 
much joy to sysadmins everywhere! (cue dramatic music)


~ ~ ~
...and thus, UCSPI was born.  Fast forward to 2014... while surfing 
various Linux news articles, I stumble into something that sounds like 
an infomercial...

~ ~ ~

 ...Systemd will now do socket activation with not only file sockets 
but also network sockets too!  NETWORK SOCKETS!  It's like an Armed Bear 
riding a Shark with Frickin' Laser Beams while singing the National 
Anthem with an Exploding Background!!  Get your copy today for THREE 
easy payments!!!  Order Now While Supplies Last... OPERATORS ARE 
STANDING BY!!!


~ ~ ~
Yes, that juicy sound is the sound of my eyes rolling up into their 
sockets as I read that article, attempting to retreat to the relative 
safety of my skull as I Cannot Un-see What I Have Seen...as you can 
tell, this isn't exactly a new concept, and it's been done before, many 
many times, in various ways (inetd, xinetd, various flavors of UCSPI, 
and now systemd's flavor).


Re: dependant services

2015-06-08 Thread Avery Payne

On 5/14/2015 3:25 PM, Jonathan de Boyne Pollard wrote:
The most widespread general purpose practice for breaking (i.e. 
avoiding) this kind of ordering is of course opening server sockets 
early.  Client and server then don't need to be so strongly ordered. 
This is where I've resisted using sockets.  Not because they are bad - 
they are not.  I've resisted because they are difficult to make 100% 
portable between environments.  Let me explain.


First, there is the question of "what environment am I running in?"  This 
can break down into several sub-questions of "what variable settings do 
I have", "what does my directory structure look like", and "what tools 
are available".  That last one - what tools are installed - is what 
kills me.  Because while I can be assured that the bulk of a framework 
will be present, there is no guarantee that I will have UCSPI sockets 
around.


Let's say I decide to only support frameworks that package UCSPI out of 
the box, so I am assured that the possibility of socket activation is 100% 
guaranteed, ignoring the fact that I just jettisoned several other 
frameworks in the process simply to support this one feature. So we 
press on with the design assumption it is safe to assume that UCSPI is 
installed and therefore can be encoded into run scripts. Now we have 
another problem - integration.  Using sockets means I need to have a 
well-defined namespace to locate the sockets themselves, and that means 
a well-known area in the filesystem because the filesystem is what 
organizes the namespace.  So where do the sockets live?  /var/run?  
/run?  /var/sockets? /insert-my-own-flavor-here?


Let's take it a step further and I decide on some name - I'll pull one 
out of a hat and simply call it /var/run/ucspi-sockets - and ignore all 
of the toes I'm stepping on in the process, including the possibility 
that some distribution already has that name reserved. Now I have (a) 
the assurance that UCSPI is supported and (b) a place for UCSPI to get 
its groove on, then we have the next problem, getting all of the 
services to play nice within this context.  Do I write everything to 
depend on UCSPI sockets so that I get automatic blocking?  Do I make it 
entirely the choice of the administrator to activate this feature via a 
switch that can be thrown?  Or is it used for edge cases only?  
Getting consistency out of it would be great, but then I back the admin 
into a corner with "this is design policy and you get it, like it or 
not".  If I go with admin-controlled, that means yet another code path 
in an already bloaty ./run.sh script that may or may not activate, and 
the admin has their day with it, but the number of potential problem 
vectors grows.  Or I can hybridize it and do it for edge cases only, but 
now the admin is left scratching their head asking "why is it here, but 
not there?  It's not consistent, what were they thinking??"


Personally, I would do the following:

* Create a socket directory in whatever passes for /var/run, and name it 
/var/run/ucspi-sockets.


* For each service definition that has active sockets, there would be 
/var/run/ucspi-sockets/{directory} where {directory} is the name of the 
service, and inside of that is a socket file named 
/var/run/ucspi-sockets/{directory}/socket.  That is about as generic and 
safe as I can get, given that /var/run on Linux is a symlink that 
points to /run in some cases.  It is consistent - the admin knows where 
to find the socket every single time, and is assured that the socket 
inside of the directory is the one that connects to a service.  It is a 
reasonable name - the odds of /var/run/ucspi-sockets being taken for 
anything else but that is fairly low, and the odds of me stepping on top 
of some other construct in that directory are low as well, because any 
existing sub-directory in that location is probably there for the same 
reason.


* Make socket activation an admin-controlled feature that is disabled by 
default.  You want socket activation, you ask for it first.  The admin 
gets control, I get more headache, and mostly everyone can be happy.
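
Under that layout, and assuming an s6-flavored UCSPI implementation is
available, the listening half of a hypothetical definition named foo could
be as small as this (the daemon name food is purely illustrative):

#!/bin/sh
# hold the well-known socket for 'foo'; the real daemon is only
# spawned when a client actually connects
mkdir -p /var/run/ucspi-sockets/foo
exec s6-ipcserver /var/run/ucspi-sockets/foo/socket /usr/local/bin/food
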


We've answered the "where" and the "when", now we are left with the 
"how".  I suspect that you and Laurent would argue that I shouldn't be 
using sockets inside of ./run as it is, that it should be in the layer 
above in service management proper, meaning that the entire construct 
shouldn't exist at that level.  Which means I shouldn't even support it 
inside of ./run.  Which means I can't package this feature in my 
scripts.  And we're back to square one.


Let's say I ignore this advice (at my own peril) and provide support for 
those frameworks that don't have external management layers on top of 
them.  This was the entire reason I wrote my silly peer-level dependency 
support to begin with, so that other folks would have one or two of 
these features available to them, even though they don't have external 
management like nosh or s6-rc or anopa.  It's a poor man's 

Re: Arch Linux derivative using s6?

2015-05-15 Thread Avery Payne


On 5/14/2015 3:47 PM, Jonathan de Boyne Pollard wrote:
There are even more than that.  I mentioned back in January that the 
nosh Guide chapter on creating service bundles has pointers to the run 
file collections by Gerrit Pape, Wayne Marshall, Kevin J. DeGraaf, and 
Glenn Strauss.  I also pointed out that nosh came with some 177 
pre-built service bundles.  That figure has since risen to some 
230-odd (not including log services).


We are supervision-scripts. Lower your firewalls and surrender your 
source. We will add your definitions and technological distinctiveness 
to our own. Your framework will adapt to service us. Resistance is futile.


Oops, sorry, don't know what came over 
me... http://en.wikipedia.org/wiki/Borg_%28Star_Trek%29#cite_note-4


I will most assuredly pursue those 'service bundles' from *all* of the 
above authors when time permits... believe me, I've already scoured out 
most of github and bitbucket.  I've also done a few off of runit's 
definitions.


Nosh is still on my to-do list.  Near as I can tell, it shouldn't be too 
hard to include support for it, but I won't really know until I get a 
full VM cooked.  I think the quickest way to get this accomplished - for 
both nosh and s6 - is to install Debian 8 sans systemd into a VM image.  
From there I can add your new Debian packages to get nosh installed, and 
I will finally have GNU make 4.0 for building s6.



Re: Thoughts on First Class Services

2015-04-29 Thread Avery Payne
Note: this re-post is due to an error I made earlier today.  I've gutted 
out a bunch of stuff as well.  My apologies for the duplication.


On 4/28/2015 11:34 AM, Laurent Bercot wrote:


I'm also interested in Avery's experience with dependency handling.


Hm.  Today isn't the best day to write this (having been up since 4am) 
but I'll try to digest all the little bits and pieces into something.  
Here we go...


First, I will qualify a few things.  The project's scope is, compared to 
a lot of the discussion on the mailing list, very narrow.  There are 
several goals but the primary thrust of the project is to create a 
generic, universal set of service definitions that could be plugged into 
many init, distribution, and supervision framework arrangements. That's 
a tall order in itself, but there are ways around a lot of this.  So 
while the next three paragraphs are off-topic, they are there to address 
those three concerns mentioned.


With regard to init work, I don't touch it.  Trying to describe a proper 
init sequence is already beyond the scope of the project. I'm leaving 
that to other implementers.


With regard to distributions, well, I'm trying to make it as generic as 
possible.  Development is done on a Debian 7 box but I have made efforts 
to avoid any Debian-isms in the actual project itself.  In theory, you 
should be able to use the scripts on any distribution.


With regard to the supervision programs used, the difference in command 
names have been abstracted away.  I'm not entirely wild about how it is 
currently done, but creating definitions is a higher priority than 
revisiting this at the moment.  In the future, I will probably 
restructure it.


~ ~ ~ ~ ~ ~ ~ ~

What
---
The dependency handling in supervision-scripts is meant to be used in 
installations that don't have access to it.  Put another way, it's a 
Poor Man's Solution to the problem and functions as a convenience.  
The feature is turned off by default, and this will cause any service 
definition that requires other services to run-loop repeatedly until 
someone starts them manually.  This could be said to be the default 
behavior of most installations that don't have dependency handling, so 
I'm not introducing a disruptive behavior with this feature.


Why
---
I could have hard-coded many of the dependencies into the various run 
scripts, but this would have created a number of problems for other areas.


1. Hard-coding prevents switching from shell to execline in the future, 
by necessitating a re-write.  There will be an estimated 1,000+ scripts 
when the project is complete, so this is a major concern.


2. We are already using the filesystem as an ad-hoc database, so it 
makes sense to continue with this concept.  The dependencies should be 
stored on the filesystem and not inside of the script.


With this in mind, I picked sv/(service)/needs as a directory to hold 
the definitions to be used.  Because I can't envision what every init 
and future dependency management framework would look like, I'll simply 
make it as generic as I can, leaving things as open as possible to 
additional changes.  A side note: it is by fortuitous circumstance 
that anopa uses a ./needs directory that has the same functionality and 
behavior.  I use soft links just because.  Anopa uses named files.  
The net effect is the same.


Each dependency is simply a named soft link that points to a service 
that needs to be started, typically something like 
sv/(service)/needs/foobar points to /service/foobar.  In this case, a 
soft link is made with the name of the service, pointing to the service 
definition in /service.  This also allows me to ensure that the 
dependency is actually available, and not just assume that it is there.


A single rule determines what goes into ./needs: you can only have the 
names of other services that are explicitly needed.  You can say "foo 
needs baz" and "baz needs bar" but NEVER would you say "foo needs baz, 
foo needs bar".  This is intentional because it's not the job of the 
starting service to handle the entire chain.  It simplifies the list of 
dependencies because a service will only worry about its immediate 
needs, and not the needs of dependent services it launches.  It also has 
the desirable property of making dependency chains self-organizing, 
which is an important decision with hundreds of services having 
potentially hundreds of dependencies.   Setup is straightforward and you 
can easily extend a service need by adding one soft link to the new 
dependency.  This also fits with my current use of a single launch 
script; I don't have to change the script, just the parameters that the 
script uses.  The new soft link becomes just another parameter.  You 
could call this peer-level dependency resolution if you like.
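
For a concrete (and purely illustrative) example, using the forked-daapd /
avahi-daemon / dbus chain mentioned elsewhere in the project:

ln -s /service/avahi-daemon  sv/forked-daapd/needs/avahi-daemon
ln -s /service/dbus          sv/avahi-daemon/needs/dbus

forked-daapd only declares avahi-daemon; it never declares dbus, because
dbus is avahi-daemon's immediate need, not forked-daapd's.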



How
---
Enabling this behavior requires that you set sv/.env/NEEDS_ENABLED to 
the single character "1".  It is normally set to "0".  With the setting 
disabled (zero), the entire 

Re: Thoughts on First Class Services

2015-04-28 Thread Avery Payne

Dang it.  Hit the send button.

It will be a bit, I'll follow up with the completed email.  Sorry for 
the half-baked posting.


Re: Thoughts on First Class Services

2015-04-28 Thread Avery Payne



On 4/28/2015 10:50 AM, bougyman wrote:
Well at least we're talking the same language now, though reversing 
parent/child is disconcerting to my OCD. 


Sorry if the terminology is reversed.


Here's the current version of run.sh, with dependency support baked
in:
https://bitbucket.org/avery_payne/supervision-scripts/src/b8383ed5aaa1f6d848c1a85e6216e59ba98c3440/sv/.run/run.sh?at=default


That's a gnarly run script. It's as big as a lot of sysvinit or OpenRC
scripts I've seen. One of the reasons I like daemontools style package
management is my run scripts are usually less than 10 lines.


This was my thought, as well. It adds a level of complexity we try to
avoid in our run scripts.
It also seems to me that there is less typing involved in individual
run scripts than the
individual things that have to be configured for this script. If one
goal of this
abstraction is to minimize mistakes, adding more moving parts to edit
doesn't seem to
work towards that goal.


Currently there are the following sections, in sequence:

1. shunt stderr to stdout for logging purposes
2. shunt supporting symlinks into the $PATH so that tools are called 
correctly.  This is critical to supporting more than just a single 
framework; all of the programs referenced in .bin are actually symlinks 
that point to the correct program to run.  See the .bin/use-* scripts 
for details.
3. if a definition is broken in some way, then immediately write a 
message to the log and abort the run.
4. if dependency handling is enabled, then process dependencies. 
Otherwise, just skip the entire thing.  By default, dependencies are 
disabled; this means ./run scripts behave as if they have no dependency 
support.
4a. should dependency handling fail, log the failing child in the 
parent's log, and abort the run.

5. figure out if user or group IDs are in use, and define them.
6. figure out if a run state directory is needed.  If so, set it up.
7. start the daemon.
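
Condensed to a sketch (the svc-up shunt, setuidgid, mydaemon, and the
SVCNAME/SVCUSER variables are placeholders standing in for whatever the
selected framework and definition provide, not the actual run.sh), the
sequence above reads roughly as:

#!/bin/sh
exec 2>&1                                  # 1. fold stderr into stdout for the logger
PATH="../.bin:$PATH"                       # 2. framework shunt symlinks resolve first
if test -e ./broken; then                  # 3. refuse to run a broken definition
  echo "definition is marked broken"; exit 1
fi
if test "$(cat ../.env/NEEDS_ENABLED)" -gt 0; then   # 4. optional dependency pass
  for dep in ./needs/*; do
    test -e "$dep" || continue
    svc-up "$(basename "$dep")" || { echo "dependency failed: $dep"; exit 1; }  # 4a.
  done
fi
mkdir -p "/var/run/$SVCNAME"               # 6. run-state directory, if one is needed
exec setuidgid "$SVCUSER" mydaemon         # 5. + 7. apply the IDs and launch
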



Re: Thoughts on First Class Services

2015-04-28 Thread Avery Payne

On 4/28/2015 11:34 AM, Laurent Bercot wrote:


 If a lot of people would like to participate but don't want to
subscribe to the skaware mailing-list, I'll move the thread here.

Good point, I'm going to stop discussion here and go over there, where 
the discussion belongs.


Re: Thoughts on First Class Services

2015-04-28 Thread Avery Payne



On 4/28/2015 10:31 AM, Steve Litt wrote:

Good! I was about to ask the definitions of parent and child, but the
preceding makes it clear.


I'm taking it from the viewpoint that says the service that the user 
wishes to start is the parent of all other service dependencies that 
must start.



So what you're doing here is minimizing polling, right? Instead of
saying "whoops, child not running yet, continue the runit loop", you
actually start the child, the hope being that no service will ever be
skipped and have to wait for the next iteration. Do I have that right?
Kinda.  A failure of a single child still causes a run loop, but the 
next time around, some of the children are already started, and a start 
of the child will quickly return a success, allowing the script to skip 
over it quickly until it is looking at the same problem child from the 
last time.  The time lost is only on failed starts, and child starts 
typically don't take that long.  If they are, well, it's not the 
parent's fault...


  

Here's the current version of run.sh, with dependency support baked
in:
https://bitbucket.org/avery_payne/supervision-scripts/src/b8383ed5aaa1f6d848c1a85e6216e59ba98c3440/sv/.run/run.sh?at=default


That's a gnarly run script.


Yup.  For the moment.


If I'm not mistaken, everything inside the if test
$( cat ../.env/NEEDS_ENABLED ) -gt 0; then block is boilerplate that
could be put inside a shellscript callable from any ./run.


True, and that idea has merit.


  That would
hack off 45 lines right there. I think you could do something similar
with everything between lines 83 and 110. The person who is truly
interested in the low level details could look at the called
shellscripts (perhaps called with the dot operator). I'm thinking you
could knock this ./run down to less than 35 lines of shellscript by
putting boilerplate in shellscripts.
I've seen this done in other projects, and for the sake of simplicity 
(and reducing subshell spawns) I've tried to avoid it. But that doesn't 
mean I'm against the idea.  Certainly, all of these are improvements 
with merit, provided that they don't interfere with some of the other 
project goals.  If I can get the time to look at all of it, I'll 
re-write it by segmenting out the various components.
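
For illustration, the dot-operator refactor would leave each ./run with
little more than this (file and variable names are invented for the example):

#!/bin/sh
exec 2>&1
. ../.run/lib/depends.sh || exit 1    # all of the NEEDS_ENABLED boilerplate
. ../.run/lib/rundir.sh  || exit 1    # run-state directory creation
exec setuidgid "$SVCUSER" mydaemon

Because the files are sourced rather than executed, no extra subshells are
spawned, which keeps the cost concern mentioned above in check.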


In fact, you may have given me an idea to solve an existing problem I'm 
having with certain daemons...




You're doing more of a recursive start. No doubt, when there are two or
three levels of dependency and services take a non-trivial amount of
time to start (seconds), yours results in the quicker boot. But for
typical stuff, I'd imagine the old "wait til next time if your ducks
aren't in line" will be almost as fast, will be conceptually
simpler, and more codeable by the end user. Not because your method is
any harder, but because you're applying it against a program whose
native behavior is "wait til next cycle".


Actually, I was looking for the lowest-cost solution to "how do I keep 
track of dependency trees between multiple services".  The result was a 
self-organizing set of data and scripts.  I don't manage *anything* 
beyond "service A must have service B".  It doesn't matter how deep that 
dependency tree goes, or even if there are common leaf nodes at the 
end of the tree, because it self-organizes.  This reduces my cognitive 
workload; as the project grows to hundreds of scripts, the number of 
possible combinations reaches a point where it would be unmanageable 
otherwise.  Using this approach means I don't care how many there are, I 
only care about what is needed for a specific service.



And, as you said in a past email, having a run-once capability without
insane kludges would be nice, and as you said in another past email,
it's not enough to test for the child service to be up according to
runit, but it must pass a test to indicate the process itself is
functional. I've been doing that ever since you mentioned it.


At some point I have to go back and start writing ./check scripts. :(


Re: Another attempt at S6 init

2015-04-21 Thread Avery Payne



On 4/21/2015 7:34 AM, TheOldFellow wrote:


So I should need much less than Laurent has in his example.  (did I mention
the ancient grey cells?)
I'm no expert at execline, so I'm taking wild guesses here based on the 
little bits that I know from reading about it.



#close stdout and stderr
fdclose 1 fdclose 2
...would it hurt anything to move the fdclose 1 and 2 near the "connect 
stdin to /dev/null" step?  Yes, your console will be noisy but you at least 
get to see if anything is hurt or wounded prior to re-plumbing everything.


At Laurent's website, it mentions setting up an early getty so that you 
have access to something during the boot.

   The best suggestion is probably 'use
systemd', but please refrain.

(saying this while I cover my ears) You said the "s" word.


Re: dependant services

2015-04-21 Thread Avery Payne

On 4/21/2015 2:19 PM, Buck Evan wrote:

Does s6 (or friends) have first-class support for dependant services?
I know that runit and daemontools do not.  I do know that nosh has 
direct support for this. I believe s6 supports it through various 
intermediary tools, i.e. using socket activation to bring services up, 
so you could say that while it supports it directly and provides a full 
guarantee, it's not first class in the sense that you can simply 
provide a list of bring these up first and it will do it out of the 
box.  The recently announced anopa init system fills in this gap and 
makes it first class, in the sense that you can simply provide the 
names of definitions that need to start and everything else is handled 
for you.



Alternatively, are there general-purpose practices for breaking this kind
of dependency?
Strange as it sounds, renaming the child definition of a dependency 
chain (which typically translates into the directory name of the 
definition) seems to be a regular issue.  Changing the name of the 
definition typically causes various links to break, causing the parent 
service to be unable to locate its children by name at start-up.


Re: dependant services

2015-04-21 Thread Avery Payne

On 4/21/2015 2:56 PM, Buck Evan wrote:
My understanding of s6 socket activation is that services should open, 
hold onto their listening socket when they're up, and s6 relies on the 
OS for swapping out inactive services. It's not socket activation in 
the usual sense. http://skarnet.org/software/s6/socket-activation.html


I apologize, I was a bit hasty and I think I need more sleep.  I'm 
confusing socket activation with some other s6 feature, perhaps I was 
confusing it with how s6-notifywhenup is used... 
http://skarnet.org/software/s6/s6-notifywhenup.html
So I wonder what the "full guarantee" provided by s6 that you 
mentioned looks like.
It seems like in such a world all services would race and the 
determinism of the race would depend on each service's implementation.
This I do understand, having gone through it with supervision-scripts.  
The basic problem is that a running service does not mean a service is 
"ready", it only means it's "up".


Dependency handling with guarantee means there is some means by which 
the child service itself signals "I'm fully up and running", vs. "I'm 
started but not ready".  Because there is no polling going on, this 
allows the start-up of the parent daemon to sleep until it either is 
notified or times out.  And you get a clean start-up of the parent 
because the children have directly signaled that "we're all ready".


Dependency handling without guarantee is what my project does as an 
optional feature - it brings up the child process and then calls the 
child's ./check script to see if everything is OK, which is polling the 
child (and wasting CPU cycles).  This is fine for light use because 
most child processes will start quickly and the parent won't time out 
while waiting.  There are trade-offs for using this feature.  First, 
./check scripts may have unintended bugs, behaviors, or issues that you 
can't see or resolve, unlike the child directly signalling that it is 
ready for use.  Second, the polling approach adds to CPU overhead, 
making it less than ideal for mobile computing - it will draw more power 
over time.  Third, there are edge cases where it can make a bad 
situation worse - picture a heavily loaded system that takes 20+ minutes 
to start a child process, and the result being the parent spawn-loops 
repeatedly, which just adds even more load.  That's just the three I can 
think of off the top of my head - I'm sure there's more.  It's also why 
it's not enabled by default.
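
For what it's worth, a polling-style ./check of the kind described above can
be very small; this sketch (the address, the port, and the reliance on an nc
that supports -z are all assumptions) succeeds only once the daemon actually
answers, which is exactly where the polling cost comes from:

#!/bin/sh
# readiness probe: exit 0 only when the daemon accepts a connection,
# so "up" gets promoted to "ready" by repeated polling
exec nc -z 127.0.0.1 3689
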


Re: dependant services

2015-04-21 Thread Avery Payne

On 4/21/2015 3:08 PM, Buck Evan wrote:



On Tue, Apr 21, 2015 at 2:46 PM, Avery Payne avery.p.pa...@gmail.com 
mailto:avery.p.pa...@gmail.com wrote:


Alternatively, are there general-purpose practices for
breaking this kind
of dependency?

Strange as it sounds, renaming the child definition of a
dependency chain (which typically translates into the directory
name of the defintion) seems to be a regular issue.  Changing the
name of the definition typically causes various links to break,
causing the parent service to be unable to locate its children by
name at start-up.


Ah, I just realized you misunderstood me. You understood "breaking 
dependencies" to mean "the dependent system no longer works", where 
what I meant was "the dependency is no longer relevant".
With regard to practice or policy, I can only speak to my own project.  
I try to stick with "minimum feasible assumption" when designing 
things.  In the case of the run script handling dependencies, it only 
assumes that the child failed for reasons known only to the child, and 
therefore the parent will abort out and eventually spawn-loop.  Prior to 
exiting the script, a message is left for the systems administrator 
about which child failed, so that they can at least see why the parent 
refused to start.  Beyond that, I try not to assume too much.


If the dependency is no longer relevant, then that is a small issue - 
the ./needs directory holds the names of all the child processes that 
are needed, and if the child will fail because it's broken / moved / 
uninstalled / picked up its marbles and went home, then the parent will 
simply continue to fail to start, until the child's name is removed from 
the ./needs directory.  Again, you'll see a recorded message in the 
parent log about the child causing the failure, but not much more than 
that.  It can be easily fixed by simply removing the child symlink in 
./needs, which will cause the parent to forget about the child.  This 
possibility should be documented somewhere in the project, and I know I 
haven't done so yet.  Thanks for bringing it up, I'll try to get to it soon.


Re: Arch Linux derivative using s6?

2015-04-19 Thread Avery Payne

On 4/19/2015 7:03 AM, John Regan wrote:

It's not quite the same, but I think Alpine linux is pretty close to what 
you're looking for. They'd probably love to get more people involved, writing 
documentation, making packages, etc. It doesn't use s6, but I've submitted the 
s6 packages to the project. Maybe you could work on adding s6 init scripts to 
packages?
There's already a project for adding definitions for various daemons.   
http://bitbucket.org/avery_payne/supervision-scripts




Re: anopa: init system/service manager built around s6

2015-04-10 Thread Avery Payne

On 4/10/2015 6:41 PM, Aristomenis Pikeas wrote:

Laurent (s6-rc), Olivier (anopa), Toki (supervision framework), and Gorka 
(s6-overlay),

I'm having a lot of trouble figuring out the differences between your projects. 
The s6 suite of utils can be considered building blocks for a full init system, 
but what does each of your projects do on top of s6?


A breakdown of differences:

+ s6-rc: Laurent's version of an init system based on s6, meant to bring 
a machine up/down.


+ anopa: Olivier's version of an init system based on s6, meant to bring 
a machine up/down.


+ supervision framework: Toki's version of a complete framework-agnostic 
init system, partially geared towards OpenRC. Last I checked he had 
progressed into supporting a complete init (with OpenRC support).


+ s6-overlay: this is meant for Docker containers, and is most likely 
the one you want.


Yes, there is duplication of effort between Laurent and Olivier, that's 
ok though - I personally argue that choice is a good thing. :)  There 
are actually many, many projects out there, if you know what to look 
for, that might provide clues or insights.  Ignite on github I believe 
has some rudimentary init stuff in it, although it's runit based.



For a bit of context, my goal is the simplest init system that could possibly 
work, to be run inside of a docker container. I need to start services and 
gracefully handle SIGTERM/SIGKILL, with everything logged to standard out. 
That's about it. But this is proving to be difficult with s6. I've been 
chipping away at things, but it's slow going between understanding all of the 
tricky bash-isms and learning about all of the relevant s6 components.

If by "tricky bash-isms" you mean the shell redirections and exec and 
all of that, well...once you can visualize it, it's not that bad 
really.  I don't believe any of the projects use bash directly. Toki's 
project (as well as my own) assume /bin/sh, which at this time usually 
means an ash variant.  Laurent and Olivier have *nothing* done in bash 
(beyond the build process).  If anything, I think all of the projects 
are trying hard to avoid bash-specific implementations; believe me 
when I say, I've looked at *a lot* of shell scripting in the last 5 
months, and I can say that a lot of projects with shell scripts are 
actually fairly clean.  (Yes, I sound a little surprised when I say that)


POLL RESULTS: what installations would you use process supervision in?

2015-04-01 Thread Avery Payne

There were 8 respondents.

[ 4 ] A hand-made / hand-customized Linux installation

[ 1 ] A commercial installation (HP-UX, AIX, Pre-Oracle Solaris)

[ 2 ] an installation made with LFS

[ 2 ] an installation made with Gentoo

[ 0 ] an installation made with Arch

[ 3 ] an installation made with Debian / Ubuntu

[ 2 ] an installation made with Fedora / Red Hat

[ 4 ] an installation made with NetBSD/OpenBSD/FreeBSD

[ 1 ] an installation made with DragonflyBSD

[ 0 ] an installation made with Android Open Source Project

[ 4 ] an installation not listed here (please give name and/or details)
for this category, responses are broken down as:
+ 1  condensed summary: runit within a larger project that uses Docker
+ 1 Illumos-derived distros (e.g. SmartOS, OmniOS)
+ 1 Docker Images, Using Gorka's s6-overlay
+ 1 Ubuntu and Alpine Linux, but both inside docker :-)

Method:
An open invitation to the supervision mailing list, with multiple choice 
responses being sent to my email address.  Responses were tallied on 
April 1 (no joke).  Each respondent is allowed +1 vote for a category, 
although multiple categories are allowed.


Summary:
The poll was meant to provide a broad picture of how process supervision 
is used by platform, and give a general feeling to how people are using 
it.  Even with just 8 respondents, this is informative.  From this, some 
(personally biased) observations:


* Docker, which is not an OS but a container solution, has a surprising 
amount of interest.  I haven't had time to play with Docker yet so I can 
only guess as to why - perhaps the low overhead and/or storage 
requirements that come with this model lend to making for slim containers?


* People have a very keen interest in using supervision with *BSD, with 
4 responders using it in some fashion.  Perhaps some outreach to those 
communities is in order...


* I was surprised to see Fedora/Red Hat listed, as these are 
traditionally systemd based, and systemd provides a superset of process 
supervision features.


* Some of the design decisions in my project came from the idea that the 
definition directories should be as portable as possible, because 
process supervision is a concept that extends to a large number of 
systems, and not just the one.  As a result of that decision, 
development has been very slow and deliberate, probably slower than I 
would like.  Because I'm seeing a strong showing by non-Linux systems, I 
think it hints strongly that this was the right decision to make.


A big Thank You to everyone for your time, votes, and comments.


POLL: what installations would you use process supervision in?

2015-03-20 Thread Avery Payne
This is a simple straw poll.  Please do *not* reply to the mailing list 
- I don't want to clog it with answers.  Send the replies directly to my 
personal email address instead.  The poll will remain open until March 
31, and I will publish results after that time.


POLL: what installations would you use process supervision in?

[ ] A hand-made / hand-customized Linux installation
[ ] A commercial installation (HP-UX, AIX, Pre-Oracle Solaris)
[ ] an installation made with LFS
[ ] an installation made with Gentoo
[ ] an installation made with Arch
[ ] an installation made with Debian / Ubuntu
[ ] an installation made with Fedora / Red Hat
[ ] an installation made with NetBSD/OpenBSD/FreeBSD
[ ] an installation made with DragonflyBSD
[ ] an installation made with Android Open Source Project
[ ] an installation not listed here (please give name and/or details)


supervision scripts, 2015-01

2015-02-03 Thread Avery Payne
Lots of changes behind the scenes.  Not as many new definitions, 
although things will return to normal in the next month.



Done:
- - - - - - - -
+ New! Internal framework-neutral command grammar.  Upon selecting a 
given framework, all scripts automatically use the correct start and 
check commands.


+ New! Broken daemon support.  Daemons that cannot run properly in the 
foreground now have a warning written to the daemon's log, and the run 
script aborts.  If you want to continue with the daemon, remove the 
./broken file from the definition and the script will launch.  Typically 
this is used for services that don't support foreground operation.


+ New! Optional framework-neutral peer-level dependency handling.  All 
dependencies will be included out-of-the-box. Dependencies are defined 
as any external program required to be running before the daemon defined 
can launch. It is recommended that it only be used when no dependency 
manager is available (i.e. older frameworks), and you desire automatic start-up 
of dependent definitions.  It is disabled by default, because it comes 
with certain caveats, notably: (a) you cannot change definition names 
and expect it to work, and (b) it provides a weak guarantee that 
dependencies are running, so it is possible to have race conditions or 
other failure states.


+ New! Dynamic run state directory creation.  Definitions will now 
create /var/run/{daemon-name} upon startup with the correct permissions.


+ New definitions: ypbind, autofs, rsyslog, mini-httpd, rpcbind, 
shellinabox, slpd, gdomap, rarpd


+ Reworked definitions: forked-daapd, avahi-daemon, dbus, mpd, mouseemu, 
ntpd, munin-node (broken)


+ Several tweaks to the README file, including a compatibility chart for 
user-land tools.



In Progress:
- - - - - - - -
+ Merged all run templates that utilize /bin/sh into a single, master 
template.  This was done after much deliberation - why spend time 
figuring out which one of a half-dozen different template scripts will 
have the feature you need, when you could just link one script and get 
on with life?


+ Use double indirection for ./run files to allow switching of template 
environments, i.e. it makes it possible to switch between /bin/sh and 
another arrangement.  The ./run file in the definition will point to 
../.run/run, which itself points to the actual template.


+ Look at supporting execline in parallel, although features like 
script-based dependency management will stop working.


+ Revisit all fgetty/agetty/mingetty and re-code them to support ./env 
settings.


+ Revisit all existing one-off scripts and see if they now qualify to be 
used with a template instead, and re-write them if possible.


+ Examine nosh to see if it can be supported as well.

+ Final push for 0.1 release, which will include 10% of defined 
definitions in Debian 7, all run templates stabilized, and all logging 
stabilized.



To-Do / Experimental:
- - - - - - - -
+ Think about ways to incorporate perp, which uses a different format of 
./run file.


... plus all of the To-Do stuff from last month

As always, suggestions, comments, and ideas are welcome.



Re: Could s6-scscan ignore non-servicedir folders?

2015-01-21 Thread Avery Payne


On 1/21/2015 7:19 PM, post-sysv wrote:


I'm not sure what effective and worthwhile ways there are to express 
service *relationships*,
however, or what that would exactly entail. I think service conflicts 
and service bindings might
be flimsy to express without a formal system, though I don't think 
it's anything that pre-start
conditional checks and finish checks can't emulate, perhaps less 
elegantly?


This brings to mind the discussion from Jan. 8 about ./provides, where 
defining a daemon implies:


* the service that it actually provides (SMTP, IMAP, database, etc.); 
think of it as the doing, the piece that performs work


* a data transport (pipe, file, fifo, socket, IPv4, etc.); think of it 
as how you connect to it


* a protocol (HTTP, etc.); think of it as a grammar for conversing with 
the service, with vertical/specific applications like MySQL having their 
own grammars, i.e. MySQL-3, MySQL-4, MySQL-5, etc. for each generation 
that the grammar changes.


I'm sure there are other bits and pieces missing.  With regard to 
relationships, if you had a mapping of these, it would be a start 
towards a set of formal (although incomplete) definitions.  From that 
you could say "I need a database that speaks MySQL-4 over a file socket" 
and you could, in theory, have a separate program bring up MySQL 4.01 
over a file socket when needed.
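
If that mapping were ever written down, the project's existing envdir-style
convention would be one plausible on-disk shape for it (the names below are
purely illustrative, not an implemented feature):

sv/(service)/provides/service     # e.g. "database"
sv/(service)/provides/transport   # e.g. "unix-socket"
sv/(service)/provides/protocol    # e.g. "MySQL-4"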


But do we really need this?


Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Avery Payne


On 1/19/2015 2:31 PM, Jonathan de Boyne Pollard wrote:

Avery Payne:
 * implement a ./wants directory.  [...]
 * implement a ./needs directory.  [...]
 * implement a ./conflicts directory.  [...]

Well this looks familiar.


I ducked out of ./needs and ./conflicts for the time being; if I spend 
too much time with making those features then the scripts won't move 
forward.  I'm already several weeks behind in my own schedule that I 
have set for scripts.




Before you read further, including to my next message, get yourself a 
copy of nosh and read the manual pages therein for service-manager(1) 
and system-control(1), paying particular attention in the latter to 
the section entitled Service bundles.


Then grab the nosh Guide and read the new interfaces chapter.  On a 
Debian system this would be:


xdg-open /usr/local/share/doc/nosh/new-interfaces.html


Sounds like I have my homework cut out.  I will do so as soon as I can, 
although I warn you that it joins an already-long list of material to 
read and think about.




first round of optional dependency support

2015-01-15 Thread Avery Payne
Ok, admittedly I'm excited because it works.

The High Points:

+ It works (ok, yeah, it took me long enough.)
+ Framework-neutral grammar for bringing services up and checking them, no
case-switches needed
+ Uses symlinks (of course) to declare dependencies in a tidy ./needs
directory
+ Can do chain dependencies, where A needs B, B needs C, C starts as a
consequence of starting A
+ Chain dependencies are naive, having no concept of each other beyond what
is in their ./needs directory, so you do NOT need to declare the kitchen
sink when setting up ./needs
+ Is entirely optional, it is off by default, so you get the existing
behavior until enabled
+ Simple activation, you enable it by writing a 1 to a file
+ Smart enough to notice missing definitions or /service entries, a script
will fail until fixed

The So-So Points:

~ Framework grammar makes the working assumption that it follows a
tool-command-service format.  This might be a problem for future frameworks
that require tool-service-command or other grammars.
~ Some distro maintainers may have situations where they compile out
something that will be defined in a ./needs, or may compile in something
that is missing from ./needs; this mismatch will bring tears, but for now,
I am assuming that things are sane enough that these inter-dependencies
will remain intact.
~ I'm not happy with handling of ./env settings, it could have been cleaner
~ Oversight of dependencies is based on the assumption that the supervisor
for the dependency will keep the service propped up and running.
~ Once enabled, you need to start or restart services.  It doesn't affect
running services.
~ Currently starting a script sends the commands up, then check.  Maybe it
should do check, then up, then check?  That feels wrong - at what point
does it turn into "turtles all the way down"?

The Low Points:

- Not true dependency management.  It only tackles start-up, not shut-down,
and won't monitor a chain of dependencies for failures or restarts.
- Enormous code bloat.  By the time I finished with the bulk of exception
handling, I felt like I ran a marathon...twice.  The resulting script is
*multiple* times the size of the others.
- The number of dependent commands needed in user-space to run the script
has gone up... way up.  Every additional user-space tool included is
another "does your install have X?" that ultimately limits things -
especially embedded devices.  Did I mention bloat earlier?
- Way too many failure tests, which means...way too many failure paths.
This makes testing much harder.
- There's a bug (or two) lurking in there, my gut tells me so
- Relative pathing is fine for a static install inside of /etc, but what
happens when users try to spawn off their own user-controlled services?  I
smell a security hole in the making...

The Plan:

This will become a part of avahi-daemon and forked-daapd definitions, but
disabled by default.  From everyone else's perspectives, it will function
like it always did, until enabled.  With sv/.env/ENABLE_NEEDS set to 1, for
example, a launch of forked-daapd will bring up avahi-daemon, and
avahi-daemon will bring up dbus.
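
In other words (paths and the use of runit's sv are illustrative), turning it
on and exercising the chain is just:

echo 1 > sv/.env/ENABLE_NEEDS
sv start forked-daapd     # brings up avahi-daemon, which brings up dbus, first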

Constructive criticism welcome.  I ask that Laurent leaves his flamethrower
at home - the urge to burn it with fire to purify the project may be
strong. ;)


Re: first round of optional dependency support

2015-01-15 Thread Avery Payne
Depends on what you call "fail-safe".

If you mean "will only run a service if all dependencies are met", then yes.
The current mode is to log the dependency failure and exit the script.  The
service will of course run-loop until fixed; this isn't far from the
current non-dependency behavior anyways.  The emitted warnings in the log
should be descriptive enough to diagnose the problem and remedy it.

If you mean "attempt to run regardless of dependencies", then no.  In my eyes,
that would be a ./wants directory, and I debated doing that as well.  I
figured, ./wants is a variant of ./needs, so I should get ./needs right
first.  Once I know ./needs is stable, I can adapt it, removing a lot of
the failure code, and it turns into ./wants, i.e. "attempt to start the
dependency, but if it doesn't start, don't worry about it, keep going"


On Thu, Jan 15, 2015 at 9:16 PM, James Powell james4...@hotmail.com wrote:

  Service scripts often need a lot of setup code before the actual daemon
 is executed. My question is, does it provide a fail-safe solution to
 dependency trees?

 Shutdown is only an issue if you need a finish script, otherwise the
 service supervisor will execute the kill signal and bring things down.

 Sent from my Windows Phone
  --
 From: Avery Payne avery.p.pa...@gmail.com
 Sent: ‎1/‎15/‎2015 9:11 PM
 To: supervision@list.skarnet.org
 Subject: first round of optional dependency support

  Ok, admittedly I'm excited because it works.

 The High Points:

 + It works (ok, yeah, it took me long enough.)
 + Framework-neutral grammar for bringing services up and checking them, no
 case-switches needed
 + Uses symlinks (of course) to declare dependencies in a tidy ./needs
 directory
 + Can do chain dependencies, where A needs B, B needs C, C starts as a
 consequence of starting A
 + Chain dependencies are naive, having no concept of each other beyond what
 is in their ./needs directory, so you do NOT need to declare the kitchen
 sink when setting up ./needs
 + Is entirely optional, it is off by default, so you get the existing
 behavior until enabled
 + Simple activation, you enable it by writing a 1 to a file
 + Smart enough to notice missing definitions or /service entries, a script
 will fail until fixed

 The So-So Points:

 ~ Framework grammar makes the working assumption that it follows a
 tool-command-service format.  This might be a problem for future frameworks
 that require tool-service-command or other grammars.
 ~ Some distro maintainers may have situations where they compile out
 something that will be defined in a ./needs, or may compile in something
 that is missing from ./needs; this mismatch will bring tears, but for now,
 I am assuming that things are sane enough that these inter-dependencies
 will remain intact.
 ~ I'm not happy with handling of ./env settings, it could have been cleaner
 ~ Oversight of dependencies is based on the assumption that the supervisor
 for the dependency will keep the service propped up and running.
 ~ Once enabled, you need to start or restart services.  It doesn't affect
 running services.
 ~ Currently starting a script sends the commands up, then check.  Maybe it
 should do check, then up, then check?  That feels wrong - at what point
 does it turn into "turtles all the way down"?

 The Low Points:

 - Not true dependency management.  It only tackles start-up, not shut-down,
 and won't monitor a chain of dependencies for failures or restarts.
 - Enormous code bloat.  By the time I finished with the bulk of exception
 handling, I felt like I ran a marathon...twice.  The resulting script is
 *multiple* times the size of the others.
 - The number of dependent commands needed in user-space to run the script
 has gone up... way up.  Every additional user-space tool included is
 another "does your install have X?" that ultimately limits things -
 especially embedded devices.  Did I mention bloat earlier?
 - Way too many failure tests, which means...way too many failure paths.
 This makes testing much harder.
 - There's a bug (or two) lurking in there, my gut tells me so
 - Relative pathing is fine for a static install inside of /etc, but what
 happens when users try to spawn off their own user-controlled services?  I
 smell a security hole in the making...

 The Plan:

 This will become a part of avahi-daemon and forked-daapd definitions, but
 disabled by default.  From everyone else's perspectives, it will function
 like it always did, until enabled.  With sv/.env/ENABLE_NEEDS set to 1, for
 example, a launch of forked-daapd will bring up avahi-daemon, and
 avahi-daemon will bring up dbus.

 Constructive criticism welcome.  I ask that Laurent leaves his flamethrower
 at home - the urge to burn it with fire to purify the project may be
 strong. ;)



redoing the layout of things

2015-01-09 Thread Avery Payne
On Thu, Jan 8, 2015 at 3:08 PM, Luke Diamand l...@diamand.org wrote:

 On 08/01/15 17:53, Avery Payne wrote:

 The use of hidden directories was done for administrative and aesthetic
 reasons.  The rationale was that the various templates and scripts and
 utilities shouldn't be mixed in while looking at a display of the various
 definitions.


 Why shouldn't they be mixed in? Surely better to see everything clearly
 and plainly, than to hide some parts away where people won't expect to find
 them. I think this may confuse people, especially if they use tools that
 ignore hidden directories.


Ok, I'll take this as part of the consideration.


 Move everything down one level then?


I've given it a bit of thought.  I would be willing to remove the dots.
However, the current naming convention would create confusion if you were
to eliminate the support directories altogether.  Keep in mind the purpose
of the directories was to separate out functionality and clearly define
what a group of things does; a service template is vastly different from a
logging template.  The script names were meant as a reminder to how they
are used, along with the directories.  This is why there is a run-svlogd,
and not a log-svlogd.  However, I suppose I could rename things to better
match their intended use.  And while I don't want to drop the prefix (for
reasons of clarity when writing the script) as long as the directories
remain, I'm willing to drop those as well.  The proposal would be, inside
of sv/, something like:

/bin
/bin/use-daemontools
/bin/use-runit
/bin/use-s6
/env
/env/PATH
/env/FRAMEWORK
/env/ENABLE_DEPENDS
/finish
/finish/clean
/finish/notify
/finish/force
/log
/log/multilogd
/log/svlogd
/log/s6-log
/log/logger
/log/socklog
/run
/run/envdir
/run/getty
/run/user-service
/(definition 1)
/(definition 2)

...and so on, without the dots.  I'm not wild about the messy appearance
it will give but if it makes adoption easier, then I'll do it.  That, and
we now have five words that are reserved and can never be used by any
service (although I doubt that a service would use any of the above),
because the names exist alongside the definitions.  That was another reason
I wanted dot-files - it was one less thing to worry about, one less issue
that needed attention.

Good thing the bulk of the definitions are symlinks...makes it easy to
switch the directory name. ;)


Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Avery Payne
The use of hidden directories was done for administrative and aesthetic
reasons.  The rationale was that the various templates and scripts and
utilities shouldn't be mixed in while looking at a display of the various
definitions.  The other rationale was that the entire set of definitions
could be moved or copied using a single directory, although it doesn't work
that way in practice, because a separate cp is needed to move the
dot-directories.

The basic directory structure is as follows:

sv
- .bin
- .env
- .finish
- .log
- .run

Where:

sv is the container of all definitions and utilities.  Best-case, the
entire structure, including dot directories, could be set in place with mv,
although this is something that a package maintainer would be likely to
do.  People initially switching over will probably want to use cp while the
project develops.  That way, you can pull new definitions and bugfixes with
git or mercurial, and copy them into place.  Or you could download it as a
tarball off of the website(s) and simply expand-in-place.  So there's a few
different ways to get this done.

.bin is meant to store any supporting programs.  At the moment this is a
bit of a misnomer because it really only stores the framework shunts and
the supporting scripts for switching those shunts.  It may have actual
binaries in the future, such as usersv, or other independent utilities.
When you run use-* to switch frameworks, it changes a set of symlinks to
point to what should be the tools of your installed framework; this makes
it portable between all frameworks, a key feature.

.env is an environmental variable directory meant to be loaded with the
envdir tool.  It represents system-wide settings, like PATH, and some of
the settings that are global to all of the definitions.  It is used within
the templates.

.finish will hold ./finish scripts.  Right now, it's pretty much a stub.
Eventually it will hold a basic finish script that alerts the administrator
to issues with definitions not launching, as well as handling other
non-standard terminations.

.log will hold ./log scripts.  It currently has a single symlink, ./run,
that points to whatever logging system is the default.  At the moment it's
svlogd only because I haven't finished logging for s6 and daemontools.
Eventually .log/run will be a symlink to whatever logging arrangement you
need.  In this fashion, the entire set of scripts can be switched by simply
switching the one symlink.

.run will hold the ./run scripts.  It has a few different ones in them, but
the main one at this time is run-envdir, which loads daemon specific
settings from the definition's env directory and uses them to launch the
daemon.  Others include an optional feature for user-defined services, and
basic support for one of three gettys.  I may or may not make a new one for
the optional dependency feature; I'm going to see if it can be standardized
within run-envdir first.
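
To make that concrete, here is a stripped-down sketch of what a run-envdir
style launcher could look like.  This is an illustration only, not the actual
script; DAEMON, DAEMONUSER and DAEMONOPTS are hypothetical variable names that
would live in the definition's env directory:

#!/bin/sh
# the supervisor starts us with the definition directory as the working dir
exec 2>&1
# envdir loads ./env/* into the environment; the inner shell then expands them
exec envdir ./env sh -c 'exec chpst -u "$DAEMONUSER" $DAEMON $DAEMONOPTS'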

I can always remove the dots, but then you would have these mixed in with
all of the definitions, and I think it will add to the confusion more than
having them hidden.  As it stands, the only time you need to mess with the
dot directories is (a) when setting them up for the first time, or (b) when
you are switching your logging around.  Otherwise there's really no need to
be in them, and when you use ls /etc/sv to see what is available, they
stay out of your way.

If there is a better arrangement that keeps everything in one base
directory for easy management but eliminates the dots, I'll listen.
Although I think this arrangement actually makes a bit more sense, and the
install instructions are careful to include the dots, so you only need to
mess around with them at install time.

On Thu, Jan 8, 2015 at 8:20 AM, Luke Diamand l...@diamand.org wrote:

 Is it possible to avoid using hidden files (.env) as it makes it quite a
 lot harder for people who don't know what's going on to, um, work out
 what's going on.

 Thanks!
 Luke




Re: thoughts on rudimentary dependency handling

2015-01-08 Thread Avery Payne
On Thu, Jan 8, 2015 at 9:23 AM, Steve Litt sl...@troubleshooters.com
wrote:

 I'm having trouble understanding exactly what you're saying. You mean
 the executable being daemonized fails, by itself, because a service it
 needs isn't there, right? You *don't* mean that the init itself fails,
 right?


Both correct.


 I'm not sure what you're saying. Are you saying that the dependency
 code is in the runscript, but within an IF statement that checks
 for ../env/NEEDS_ENABLED?


Correct.  If the switch, which is a data value in a file, is zero, it
simply skips all of the dependency stuff with a giant if-then wrapper.  At
least, that's the plan.  I won't know until I can get to it.
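
A minimal sketch of that gate, assuming the switch lives in
../.env/NEEDS_ENABLED (illustration only, not the actual script):

# inside ./run, before the daemon is launched
if [ "$(cat ../.env/NEEDS_ENABLED 2>/dev/null)" = "1" ]; then
    # dependency handling goes here: bring up each ./needs entry,
    # then verify it is still running before continuing
    :
fi
# ...normal launch continues below...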


  Like I
  said, this will be a fall-back feature, and it will have minor
  annoyances or issues.

 Yes. If I'm understanding you correctly, you're only going so far in
 determining "really up", because otherwise writing a one-size-fits-all
 services thing starts getting way too complicated.


Correct.  I'm taking an approach that has the minimum needed to make
things work correctly.



 I was looking at runit docs yesterday before my Init System
 presentation, and learned that I'm supposed to put my own Really Up
 code in a script called ./check.


Also correct, although I'm trying to only do ./check scripts where
absolutely needed, such as the ypbind situation.  Otherwise, the check
usually just looks at whether the child PID is still around.


 If I read the preceding correctly, you're making service tool calls for
 runit, s6, perp and nosh grammatically identical.


Correct.


 Are you doing that so
 that your run scripts can invoke the init-agnostic commands, so you
 just have one version of your scripts?


Exactly correct.  This is how I am able to turn the bulk of the definitions
into templates.  ./run files in the definition directories are little more
than symlinks back to a script in ../.run, which means...write once, use a
whole lot. :)  It's also the reason that features are slow in coming - I
have to be very, very careful about interactions.
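
As a small illustration (the service name here is made up), a definition ends
up looking roughly like this on disk:

# /etc/sv/cron/run is not a script of its own, just a pointer to the template
ls -l /etc/sv/cron/run
# lrwxrwxrwx ... /etc/sv/cron/run -> ../.run/run-envdir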



 However you end up doing the preceding, I think it's essential to
 thoroughly document it, complete with examples. I think that the
 additional layer of indirection might be skipped by those not fully
 aware of the purpose.


I just haven't gotten around to this part, sorry.



 I can help with the documentation.


https://bitbucket.org/avery_payne/supervision-scripts
or
https://github.com/apayne/supervision-scripts

Feel free to clone, change, and send a pull request.


Re: thoughts on rudimentary dependency handling

2015-01-07 Thread Avery Payne
On Wed, Jan 7, 2015 at 6:53 PM, Laurent Bercot ska-supervis...@skarnet.org
wrote:

  Unfortunately, the envdir tool, which I use to abstract away the daemons
 and settings, only chain-loads; it would be nice if it had a persistence
 mechanism, so that I could load once for the scope of the shell script.


  Here's an ugly hack that allows you do that using envdir:
 set -a
 eval $({ env; envdir ../.env env; } | grep -vF -e _= -e SHLVL= | sort |
 uniq -u)
 set +a


Thanks!  When I can carve out a bit of time this week I'll put it in and
finish up the few bits needed.  Most of the dependency loop is already
written; I just didn't have a somewhat clean way of pulling in the
$CMDWHATEVER settings without repeatedly reading ./env over and over.


  It only works for variables you add, though, not for variables you remove.


It will work fine. I'm attempting to pre-load values that will remain
constant inside the scope of the script, so there isn't a need to change
them at runtime.


RE: thoughts on rudimentary dependency handling

2015-01-07 Thread Avery Payne
On Wed, Jan 7, 2015 at 7:23 AM, Steve Litt sl...@troubleshooters.com
 wrote:

 I'm pretty sure this conforms to James' preference (and mine probably)
 that it be done in the config and not in the init program.

 To satisfy Laurent's preference, everything but the exec cron -f could
 be commented out, and if the user wants to use this, he/she can
 uncomment all the rest. Or your run script writing program could have an
 option to write the dependencies, or not.


I've pretty much settled on a system-wide switch in sv/.env (which in the
scripts will show up as ../.env).  The switch will, by default, follow
Laurent's behavior of naive launching, i.e. no dependencies are brought up,
missing dependencies cause failures, and the admin must check logging for
notifications.  Enabling the feature would be as simple as

echo 1 > /etc/sv/.env/NEEDS_ENABLED

...and every new service launch would receive it.  You could also
force-reload with a restart command.  Without the flag, the entire chunk of
dependency code is bypassed and the launch continues as normal.

The goal is the same but the emphasis has changed.  This will be considered
a fall-back feature for those systems that do not have such a tool
available, or have constraints that force the continued use of a shell
launcher.  It is the option of last resort, and while I think I can make it
work fairly consistently, it will come with some warnings in the wiki.  For
Laurent, he wouldn't even need to lift a finger - it fully complies with
his desires out of the box. ;-)

As new tools emerge in the future, I will be able to write a shunt into the
script that detects the tool and uses it instead of the built-in scripted
support.  This will allow Laurent's work to be integrated without messing
anything up, so the behavior will be the same, but implemented differently.

Finally, with regard to the "up" vs "actually running" issue, I'm not even
going to try and address it due to the race conditions involved.  The best
I will manage is to first issue the up, then do a service check to confirm
that it didn't die upon launch, which for a majority (but not all) of cases
should suffice.  Yes, there are still race conditions, but that is fine -
I'm falling back to the original model of "service fails continually until
it succeeds", which means a silently-failed child dependency that was
missed by the check command will still cause the parent script to fail,
because the daemon itself will fail.  It is a crude form of graceful
failure.  So the supervisor starts the parent again...and again...until the
truant dependency is up and running, at which point it will bring the
parent up.  Like I said, this will be a fall-back feature, and it will have
minor annoyances or issues.

Right now the biggest problem is handling all of the service tool calls.
They all have the same grammar, (tool) (command) (service name), so I can
script that easily.  Getting the tools to show up as the correct command
and command option is something else, and I'm working on a way to wedge it
into the use-* scripts so that the tools are set up out of the box all at
the same time.  This will create $SVCTOOL, and a set of $CMDDOWN, $CMDUP,
$CMDCHECK, etc. that will be used in the scripts.  **Once that is done I
can fully test the rest of the dependency concept and get it fleshed out.**
 If anyone wants to see it, email me directly and I'll pass it along, but
there's not much to look at.
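
To illustrate the intent (the values here are hypothetical - under runit
$SVCTOOL might resolve to sv, under s6 to s6-svc, and the $CMD* values to the
matching subcommands or options), a dependency check inside a run script could
then be written once, framework-agnostically:

$SVCTOOL $CMDUP dbus      # e.g. could expand to "sv up dbus" on a runit system
$SVCTOOL $CMDCHECK dbus   # e.g. "sv check dbus"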

Unfortunately, the envdir tool, which I use to abstract away the daemons
and settings, only chain-loads; it would be nice if it had a persistence
mechanism, so that I could load once for the scope of the shell script.
Because of that, there will be some odd scripting in there that pulls the
values, i.e.

[ -f ../.env/CMDUP ] || { echo "$(basename $0): fatal error: unable to load CMDUP"; exit 99; }
CMDUP=$(cat ../.env/CMDUP)

with an entry for each command.


 In my 5 minute thought process, the last remaining challenge, and it's
 a big one, is to get the right service names for the dependencies, and
 that requires a standardized list, because, as far as I know, the
 daemontools-inspired inits don't have provides. Such a list would be
 hard enough to develop and have accepted, but <tinfoil_hat>I expect our
 friends at Red Hat to start changing the names in order to mess us
 up</tinfoil_hat>.


Using a ./provides as a rendezvous or advertisement mechanism I think is
nice-in-concept but difficult-in-practice. Give it a bit more thought and
you'll see that we're not just talking about the *service* but also any
*protocol* to speak with it and one or more *data transport* needed to talk
to the service.  Example: MySQL using a port number bound to 127.0.0.1, vs
MySQL using a file socket.  Both provide a MySQL database and MySQL's
binary client protocol, but the transport is entirely different.  Another
example: exim4 vs postfix vs qmail vs (insert favorite SMTP server here).
All speak SMTP - but some do LMTP at the same time (in either sockets or
ports), so 

Re: s6 init-stage1

2015-01-06 Thread Avery Payne
On Tue, Jan 6, 2015 at 4:02 AM, Laurent Bercot ska-supervis...@skarnet.org
wrote:

  I very much dislike having / read-write. In desktops or other systems
 where /etc is not really static, it is unfortunately unavoidable
 (unless symlinks to /var are made, for instance /etc/resolv.conf should
 be a symlink to /var/etc/resolv.conf or something, but you cannot store,
 for instance, /etc/passwd on /var...)


What if /etc were a mount overlay?  I don't know if other *nix systems
support the concept, but under Linux, mounting a file system onto an
existing directory simply blocks the original directory contents
underneath, exposing only the file system on top, and all writes go to
the top filesystem.  This would allow you to cook up a minimalist /etc
that could be left read-only, but when the system comes up, /etc is
remounted as read-write with a different filesystem to capture read-write
data.  Dismounting /etc would occur along with all the other dismounts at
the tail-end of shutdown.  The only issue I could see is /etc/passwd having
a password set for root, which would be needed to secure the console in the
event that the startup failed somehow and /etc isn't mounted yet. This
implies a possible de-sync between the read-only /etc/passwd and the
read-write /etc/passwd; the former is fixed in stone, the latter can change.
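
On Linux this could be sketched with overlayfs; a rough example, assuming a
kernel with overlay support and a writable area prepared under /var (all paths
here are hypothetical):

mount -t overlay overlay \
  -o lowerdir=/etc,upperdir=/var/volatile/etc/upper,workdir=/var/volatile/etc/work \
  /etc

Writes then land in the upper directory while the original /etc underneath
stays read-only.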

 But on servers and embedded systems, / should definitely be read-only.
 Having it read-write makes it susceptible to filesystem corruption,
 which kills the guarantee that your machine will boot to at least a
 debuggable state. A read-only / saves you the hassle of having a
 recovery system.


Interesting concept.


Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Avery Payne
On Tue, Jan 6, 2015 at 10:20 AM, Laurent Bercot ska-supervis...@skarnet.org
 wrote:


  I firmly believe that a tool, no matter what it is, should do what the
 user wants, even if it's wrong or can't possibly work. If you cannot do
 what the user wants, don't try to be smart; yell at the user, spam the
 logs if necessary, and fail. But don't do anything the user has not
 explicitly told you to do.


And there's the rub.  I'm at a crossroad with regard to this because:

1. The user wants service A to run.
2. Service A needs B (and possibly C) running, or it will fail.

Should the service fail because of B and C, even though the user wants A up,

 or

Should the service start B and C because the user requested A be running?

For some, the first choice, which is to immediately fail, is perfectly
fine.  I can agree to that, and I understand the why of it, and it makes
sense.  But in other use cases, you'll have users that aren't looking at
this chain of details.  They asked for A to be up, why do they need to
bring up B, oh look there's C too...things suddenly look broken, even
though they aren't.  I'm caught between making sure the script comes up,
and doing the right thing consistently.

I can certainly make the scripts naive of each other and not start
anything at all...and leave everything up to the administrator to figure
out how to get things working.  Currently this is how the majority of them
are done, and it wouldn't take much to change the rest to match this
behavior.

It's also occurred to me that instead of making the dependency feature a
requirement, I can make it optional.  It could be a feature that you choose
to activate by setting a file or environment variable.  Without the
setting, you would get the default behavior you are wanting to see, with no
dependency support; this would be the default out of the box experience.
With the setting, you get the automatic start-up that I think people will
want.  So the choice is back with the user, and they can decide.  That
actually might be the way to handle this, and both parties - the ones that
want full control and visibility, and the ones after ease of use - will get
what they want.  On the one hand I can assure that you will get working
scripts, because scripts that have dependencies can be made to work that
way.  On the other hand, if you want strict behavior, that is assured as
well.

The only drawback is you can't get both because of the limitations of the
environment that I am in.


Re: thoughts on rudimentary dependency handling

2015-01-06 Thread Avery Payne
On Tue, Jan 6, 2015 at 8:52 AM, Laurent Bercot ska-supervis...@skarnet.org
wrote:


  I'm not sure exactly in what context your message needs to be taken
 - is that about a tool you have written or are writing, or something
 else ? - but if you're going to work on dependency management, it's
 important that you get it right. It's complex stuff that needs
 planning and thought.


This is in the context of service definition A needs service definition B
to be up.



  * implement a ./needs directory.  This would have symlinks to any
 definitions that would be required to run before the main definition can
 run.  For instance, Debian's version of lightdm requires that dbus be
 running, or it will abort.  Should a ./needs not be met, the current
 definition will receive a ./down file, write out a message indicating what
 service blocked it from starting, and then will send a down service to
 itself.


  For instance, I'm convinced that the approach you're taking here actually
 takes away from reliability. Down files are dangerous: they break the
 supervision chain guarantee. If the supervisor dies and is respawned by
 its parent, it *will not* restart the service if there's a down file.
 You want down files to be very temporary, for debugging or something,
 you don't want them to be a part of your normal operation.

  If your dependency manager works online, you *will* bring services down
 when you don't want to. You *will* have more headaches making things work
 than if you had no dependency manager at all. I guarantee it.


I should have added some clarifications.  There are some basic rules I'm
using with regard to starting/stopping services:

1. A service can ask another service to start.
2. A service can only signal itself to go down.  It can never ask another
service to go down.
3. A service can only mark itself with a ./down file.  It can never mark
another service with a ./down file.

That's it.  Numbers 2 and 3 are the only times I would go against what you
are saying.  And since the only reason I would do that is because of some
failure that was unexpected, there would be a good reason to do so.  And in
all cases, there would be a message output as to why it signaled itself
back down, or why it marked itself with a ./down file.  The ./down file is,
I think, being used correctly - I'm trying to flag to the sysadmin that
something is wrong with this service, and it shouldn't restart until you
fix it.

I'm sorry if the posting was confusing.  Hopefully the rules clarify when
and how I would be using these features.  I believe it should be safe if
they are confined within the context of the service definition itself, and
not other dependencies.  If there is something to the contrary that I'm
missing in those three rules, I'm listening.


Re: Using runit-init on debian/Jessie in place of sysvinit/systemd

2015-01-02 Thread Avery Payne
On Fri, Jan 2, 2015 at 6:51 AM, Luke Diamand l...@diamand.org wrote:

 On 02/01/15 10:40, Avery Payne wrote:

 On Thu, Jan 1, 2015 at 3:39 PM, Luke Diamand l...@diamand.org wrote:

 Caution, a shameless plug follows:

 If you are willing to share the contents of your scripts with a very
 permissive license, I would like to see them and possibly incorporate the
 ideas into my current project.


 https://github.com/luked99/supervision-scripts


Thanks!  I'll incorporate them as soon as possible, although a pull request
would work too. :)


 The instructions should just tell you to install the debian package
 (runit-initscripts ?). Is that possible? I might be able to write such a
 package if it was the right way to go.

While the scripts received their start in life from runit, they are now
meant to be framework-agnostic, supporting daemontools and s6 as well.
That is why I changed the name to supervision-scripts, although you could
make a virtual package runit-scripts that pulls supervision-scripts I
suppose.



 Sorry, I didn't record the errors at the time (I assumed it was just
 user-error on my part). runit is 2.1.2-3.

I'm on runit 2.1.1-6.2 at the moment.  Perhaps a change caused the issue?
Gerrit is probably lurking around the mailing list somewhere.


 That's quite a hard problem to solve...!

Indeed.  To make things worse, I think/guess the reason Gerrit didn't make
the package install runit-init as default, is because of the sudden
transition without any scripts in place.  Too many things depending on SysV
would break, including a working console to fix things (remember, there is
no inittab to spawn a getty), and now with the transition to systemd it
would be all kinds of messy.  I believe in Jessie (aka Debian 8) there is
a sysv-core(?) package that, along with systemd-shim, would continue to
make transitioning to runit easier by adhering to the old SysV method.
Nevermind that SysV is ignorant of /sbin/init and overwrites it when it
updates, killing runit-init in the process.  After becoming tired of doing
"mv init init.sysv; ln -s runit-init init" many times, I now edit the Linux
kernel parameters at startup with init=/sbin/runit-init and make sure I
use init 6 to shut down, allowing both systems to be present.

Perhaps Gerrit could change the existing package to do the following:

+ Address the issue of /service vs. /etc/service via ln -s /etc/service
/service, although this may be an issue with Debian's use of the FHS,
which I think forbids adding things in /, but I'm not sure.  At least with
the symlink, removal of the package would remove just /service but keep
/etc/service intact, preserving your choices.  This would help resolve some
of the instruction issues on the webpage.
+ Put in a requires sysv-core and systemd-shim for Jessie.  Trying to
accommodate systemd while Jessie transitions would be too much trouble I
think; that could be addressed in Debian 9, when SysV is scheduled to go
away completely.
+ Allow /etc/runit to be created and populated.  Hunting down the
installation files in /usr/share/doc/runit/debian isn't what I was
expecting.  The instructions for installation are probably meant to be
fairly cross-platform, so you can understand what is involved (and
therefore, what you need to do to recover in an emergency).  They can be
kept as-is with a simple addition: "If you are running Debian, the
/etc/runit directory has been created for you already, and you can go to
step blah-blah next.  Otherwise, the following steps need to be taken..."


runit-scripts gone, supervision-scripts progress

2015-01-02 Thread Avery Payne
Happy belated New Year!

As discussed elsewhere, the runit-scripts repository has been removed.  A
link has been left that redirects to the supervision-scripts project.  The
new project should be a 100% compatible replacement.

I did not achieve my personal goal of a 0.1 release by January 1.  I feel
badly about this, but it has been a hectic holiday for my family.  Currently,
the project is short about 50 definitions needed for the release, which
would put it at 10% coverage, or ~120 definitions.  Here's what
little has been done so far:

Done:
- - - - - - - -
+ getty support is via a template, and supports 3 different types

+ socklog is now via a template for its three different modes

+ user-controlled services are now via a template, in pure shell script for
all three frameworks (although it's not fully tested)

+ Incorporate pgrphack, envdir, and setuidgid regardless of framework used

+ system-wide environment PATH in .env

+ Migrate environment variables off of the ./options shell file and onto
envdir for service-specific settings

+ Retired run-simple completely in favor of run-envdir, making it possible
to have non-shell ./run launchers

+ Removed the dependency of the directory name matching the program

+ Service definition directories can now be named arbitrarily vs. the
actual name of the daemon, meaning it may be possible to support runit's
SysV shim mode again!


In Progress:
- - - - - - - -
+ hunt down the last vestiges of any runit-specific scripting, and replace
it with generic framework scripting for all three frameworks

+ hunt down ./run scripts in the wild, gather them, and give the authors
attribution.  Goal: accelerate development

+ Re-organize the definition creation sequence around Debian's popcon
data, with the most common services being written first.  Goal: increase
the project's usefulness by making common things accessible

+ Experimental service dependencies in 100% shell script.  Goal: No
compiling required upon install!

+ Experimental one-shot definitions that don't need a pause(1) or a
signal.  Goal: No PIDs or sleep(y) programs

+ Reach that 0.1 release!!!


To-Do / Experimental:
- - - - - - - -
+ The ./finish concept needs development and refinement.

+ Need to incorporate some kind of alerting or reporting mechanism into
./finish, so that the sysadmin receives notifications

+ service definition names may be changed in the future to better support
SysV shimming, but this is not a definite plan, and may be cancelled.

+ replace the user-controlled service template with an active program that
seeks out service directories and starts them up as needed; there is a
Github project to this effect, but I have not been able to contact the
author.

+ Look at re-writing the project in execline(!), although several features
may stop working

+ Refine logging to support all three frameworks.  Currently it assumes
that (service)/log/run is sane, when in fact it's just a pointer to
something else.

+ Refine the logging mechanism closer to Laurent's logging chain concept,
if possible for all three

+  Not everything needs per-service logging.  At the moment, all service
definitions receive this, regardless of whether it is needed or not.  This blanket
logging ensures nothing is lost but it's inefficient.  I plan on
backtracking through in the future and cleaning this up as part of the
logging re-structure.


Re: runit-scripts gone, supervision-scripts progress

2015-01-02 Thread Avery Payne
On Fri, Jan 2, 2015 at 3:42 PM, James Powell james4...@hotmail.com wrote:


 Anyways, I'll be posting more frequently about getting init-stage-1/2/3
 drafted correctly and in execline script language. Avery maybe you can
 share your notes as well on this with me, if possible.


I'll provide what little I know.  There's a lot of ground to cover.


Re: runit-scripts gone, supervision-scripts progress

2015-01-02 Thread Avery Payne

  One way or the other, ./finish should only be used scarcely, for clean-up
 duties that absolutely need to happen when the long-lived process has died:
 removing stale or temporary files, for instance. Those should be brief
 operations and absolutely cannot block.


I'm thinking "spawn to background and exit" just after that.


  So, if you're implementing reporting in ./finish, make sure you are using
 fast, non-blocking commands that just fail (possibly logging an error
 message) if they have trouble doing their job.

  The way I would implement reporting wouldn't be based on ./finish, but on
 an external set of processes listening to down/up/ready notifications in
 /service/foobar/event. It would only work with s6, though.


Unfortunately I don't have a firm plan for supporting framework
enhancements just yet.  Although every little note and suggestion you give
will certainly be remembered, and when the time comes, I'll see what I can
do to incorporate them.

Right now I'm having an internal dialog about whether I should have an
environment variable that hints the framework to the scripts, which in
turn would allow me to support framework-specific features.  I like the
idea but I'm concerned that it will be unmaintainable without templates.








Re: Missing files /etc/init.d/rcS and rmnologin

2014-12-16 Thread Avery Payne
On Tue, Dec 16, 2014 at 8:54 AM, Steve Litt sl...@troubleshooters.com
wrote:


 Thanks Avery,

 First of all, I eventually overcame these problems, so this email is
 pretty much confirmation of what you wrote.


Cool!

So now I've removed both systemd and sysvinit from the equation. My
 next task is to start moving stuff from openrc's sysinit and boot
 to /etc/runit/1 (and I don't know how to do this yet), and then start
 switching services from openrc's default to linked /service
 subdirectories (you know what I mean).


I may have a few helpful tips (please excuse the long post):

If you have a stable definition for a getty (and it sounds like you do),
then clone it and set up others, or use a tty multiplexer like tmux or
screen.  That way, if you experiment on tty5 and have problems, you can
jump to another console and you're not locked out.

Enable handling of ctrl-alt-del in runit.  Using this feature you'll get a
clean(er) shutdown of the file system, instead of pressing the hardware
reset button.  Without it, in a lock-out situation you'll need a hard
reset, and a file system that needs a fsck.  You can always disable it
later if you don't want this behavior.

When writing up services that could lock you out, go to the service
definition directory and do a "touch ./down", so if you reboot, the
system will come up with the service down, and you're not stuck fixing a
run loop.  You can always start it manually with "sv start (service)" and
it will ignore the down file.  If the service is faulty or locks you out,
you can ctrl-alt-del (as the above suggestion) and recover gracefully.
When the service is running smoothly, remove the down file, and it will
come up normally after reboot.

Doing a "touch ./down" only implies that the service isn't started **when
runsvdir starts**.  It does not imply that the service will stay down if
you tell it to go down, and then ask it to come back up - the down file
will not block it from starting again.
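
For example (the service name is made up, and this assumes the definition is
linked into the scan directory that sv looks at):

touch /etc/sv/newthing/down   # keep it down across a reboot while testing
sv start newthing             # start it by hand; the down file is ignored here
rm /etc/sv/newthing/down      # let it come up on its own from now on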

You'll find that udev doesn't fit neatly into this arrangement because it
is a service that needs to be up before supervision starts, and down after
it stops.  This is "the udev question" - how do we handle this? - and it's
vexing for runit, because it needs to be handled in stage 1 and stage 3,
and the supervision tree is only up in stage 2.  Some people don't bother
with it and just let it run separate of supervision - it doesn't hurt to do
this.

There are a few pre-made definitions for processes on github and
bitbucket.  If you get stumped, you can see what others have done; many are
happy to share their efforts, so contact the authors and ask.

If a service keeps detaching from the process tree, it probably needs an
option passed to it that tells it not to background itself.  The SysV
approach of "background the service at all costs" has pervaded the design
of just about all *nix services, so you'll encounter this a lot.

Lastly, the mailing list is fairly low-noise/high-content, and has lots of
older posts that may cover questions or issues you'll encounter.  It's
worth looking back through and seeing the discussion that has been posted.
You can always see it here:
http://www.mail-archive.com/supervision@list.skarnet.org/

Good luck with your switch-over!


supervision-scripts is now licensed

2014-12-10 Thread Avery Payne
The project is now assigned the Mozilla Public License 2.0.  If you have
questions about this, please contact me directly instead of asking/posting
on the mailing list, as per the list owner's request.


Re: Transition to supervision-scripts

2014-11-07 Thread Avery Payne
On Fri, Nov 7, 2014 at 1:09 PM, John Albietz inthecloud...@gmail.com
wrote:

 I'd love to help with this. Any way we can move the runit scripts to
 github instead of svn / googlecode?


True, runit-for-lfs is svn hosted, and supervision-scripts is mercurial.
But git should push/pull both with little to no effort.  I pull from git
with mercurial all the time and can push back changes.  Just pulled the
runit-for-lfs stuff onto a Windows box using TortoiseHg today, certainly a
cross-platform, cross-protocol, cross-repository adventure, and everything
survived.


 On Nov 7, 2014, at 12:31 PM, Colin Booth cathe...@gmail.com wrote:

  Not sure about winbindd or rsyncd but the problem with sshd is that
  you need a reader for /dev/log. I'm pretty sure this isn't
  configurable without rebuilding openssh. The easiest is to run a
  supervised copy of rsyslog since it'll handle all the syslog logging
  channels correctly, plus enough other programs are going to expect a
  reader on the /dev/log socket that it's going to end up a requirement.


I suspect you are right about /dev/log.  Some of my installs have socklog,
some are still running rsyslog.  So, yeah, in all cases, there is something
listening to /dev/log on the other end...which would explain why my sshd
works fine for me.

The migration to a framework-agnostic set of scripts has exposed all kinds
of warts, and logging is one of them. There's lots of stuff that is
baked-in to use /dev/log, so there's a ton of development inertia that
propels us to provide some kind of remedial support for it.



  I've been running xdm under supervision as my display manager for over
  a year, so my guess is that kdm is missing some supports.


I guarantee that the kdm/gdm/lightdm display issues are all related to
missing libraries and services.  Lightdm uses dbus, but if dbus isn't up,
the restart until it works kicks in and it will flicker the display so
badly you'll want to punch the reset button.  Even after I got lightdm up
and stable, there are missing bits and pieces that are just implied to be
there, but if they aren't then all kinds of options are simply
non-functional.  Yeah, fun times.


Transition to supervision-scripts

2014-11-03 Thread Avery Payne
The transition is complete and all framework-specific dependencies are
being replaced with generic redirects.  The runit-scripts repository will
be deleted January 1st.


use of envdir

2014-11-03 Thread Avery Payne
Just a quick poll, anyone here using the envdir feature?  Is it widely
supported, or do people even bother?


Re: use of envdir

2014-11-03 Thread Avery Payne
That's three "yes, I do!" responses in less than an hour.  I'd say it's
somewhat popular. :)  Thank you for your responses.

On Mon, Nov 3, 2014 at 10:41 AM, Avery Payne avery.p.pa...@gmail.com
wrote:

 Just a quick poll, anyone here using the envdir feature?  Is it widely
 supported, or do people even bother?



Fwd: Process Dependency?

2014-10-31 Thread Avery Payne
A message was dropped...passing it along as part of the discussion
-- Forwarded message --
From: Casper Ti. Vector caspervec...@gmail.com
Date: Fri, Oct 31, 2014 at 3:37 AM
Subject: Re: Process Dependency?
To: Avery Payne avery.p.pa...@gmail.com


Sorry, but I just found that I did not list-reply your original mail, so
this practically became a private message.  Nevertheless, you may forward
this message to the mail list if you consider it favourable :)

On Fri, Oct 31, 2014 at 08:15:03AM +0800, Casper Ti. Vector wrote:
 For one already implemented way of dependency interface in
 daemontools-like service managers, you can have a look at how nosh does
 it.

--
My current OpenPGP key:
4096R/0xE18262B5D9BF213A (expires: 2017.1.1)
D69C 1828 2BF2 755D C383 D7B2 E182 62B5 D9BF 213A


Re: Process Dependency?

2014-10-31 Thread Avery Payne
Part of the temptation I've been fighting with templates is to write the
"grand unified template" that does it all.  It sounds horrible and barely
feasible, but the more I poke at it, the more I realize that there is a
specific, constrained set of requirements that could be met in a single
script...under the right circumstances.  The reality is there will be more
than one template in this arrangement.  One that is "simple service", one
that covers the unique needs of getties, one that needs a
"var/run-and-pid" (which is just simple-service with extras), and one that
I haven't done yet that I call "swiss army knife", the script of scripts.
There are still lots of one-offs that will be needed, and that shows the
limits of what can be done.  All of these solve the current issues with
process management, and as Laurent has pointed out, *none* of them address
service management.  As a stop-gap, until service management is really
ready, the plan is to temporarily patch over the issue by having smaller
processes manage the state of services, and then controlling it through
process management (and again, Laurent has pointed out this is
sub-optimal). An example is using per-interface instances of dhcpcd (no,
not *dhcpd*) to manage each interface.  This is heavy and bloats up the
process tree for larger systems because a single process is needed for each
instance of state to manage, when the kernel itself is already
using/managing that state.

With regard to coming up with something akin to a domain-specific language
in the form of a specification for services, this is ideal and solves
plenty. I would love, love, LOVE to see a JSON specification that addressed
the 9+ specific needs of starting processes as a base point, and then
extend it to provide full service coverage, becoming a domain-specific
language that encompasses what is needed.  This would be backwards
compatible with daemontools/runit/s6 (within the limitations of the
environment), forwards compatible with future service management, and would
completely supplant the need for templates.  I'd like to hear more.
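
Purely as a thought experiment, an entry in such a specification might look
something like this (the field names are invented here for illustration, not
an existing format):

{
  "name": "dhcpcd-eth0",
  "exec": "/sbin/dhcpcd",
  "arguments": ["--nobackground", "eth0"],
  "user": "root",
  "environment": { "PATH": "/usr/sbin:/usr/bin:/sbin:/bin" },
  "needs": ["udev"],
  "logging": { "type": "stderr" }
}

A generator could read one such file and emit a daemontools, runit, or s6 run
script - or feed a future service manager directly - from the same data.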

On Fri, Oct 31, 2014 at 2:40 AM, John Albietz inthecloud...@gmail.com
wrote:

 Script generators are the way I've been leaning.

 It's really convenient to have one or more services defined in some kind
 of structured data format like yaml or json and to then generate and
 install the service scripts.

 I wish there was a standard format to define services so the generator
 could take one input file and output appropriate service scripts for
 different process supervisor systems.

 Anyone seen any efforts in this direction? Most upstart and sysv scripts
 have standard boilerplate, so it looks like there are common standards that
 could be derived.

 - John

  On Oct 31, 2014, at 1:05 AM, Laurent Bercot ska-supervis...@skarnet.org
 wrote:
 
 
  First, I need to apologize, because I spend too much time talking and
  dismissing ideas, and not enough time coding and releasing stuff. The
  thing is, coding requires sizeable amounts of uninterrupted time - which
  I definitely do not have at the moment and won't until December or so -
  while writing to the mailing-list is a much lighter commitment. Please
  don't see me as the guy who criticizes initiatives and isn't helping or
  doing anything productive. That's not who I am. (Usually.)
 
  On to your idea.
  What you're suggesting is implementing the dependency tree in the
  filesystem and having the supervisor consult it.
  The thing is, it's indeed a good thing (for code simplicity etc.) to
  implement the dependency tree in the filesystem, but that does not make
  it a good idea to make the process supervision tree handle dependencies
  itself!
 
  (Additionally, the implementation should be slightly different, because
  your ./needs directory only addresses service startup, and you would also
  need a reverse dependency graph for service shutdown. This is an
  implementation detail - it needs to be solved, but it's not the main
  problem I see with your proposal. The symlink idea itself is sound.)
 
  The design issues I see are:
 
  * First and foremost, as always, services can be more than processes.
  Your design would only handle dependencies between long-lived processes;
  those are easy to solve and are not the annoying part of service
  management, iow I don't think process dependency is worth tackling
  per se. Dependencies between long-lived processes and machine state that
  is *not* represented by long-lived processes is the critical part of
  dependency management, and what supervision frameworks are really lacking
  today.
 
  * Let's focus on process dependency for a bit. Current process
  supervision roughly handles 4 states: the wanted up/down state x the
  actual up/down state. This is a simple model that works well for
  maintaining daemons, but what happens when you establish dependencies
  across daemons ? What does it mean for the supervisor that A needs B ?
 
- Does that just 

Re: runit-scripts transitions to supervision-scripts

2014-10-31 Thread Avery Payne
Just curious, but does anyone else have issues seeing some messages on the
mailing list?  My last message won't load through the web interface on
skarnet.

On Fri, Oct 31, 2014 at 3:06 PM, Avery Payne avery.p.pa...@gmail.com
wrote:

 The work on runit-scripts will officially cease December 31, 2014.
 However, all of the scripts are migrating to supervision-scripts effective
 immediately.  All future efforts will be concentrated on
 supervision-scripts and runit-scripts will be deprecated.  You can find
 supervision-scripts here:

 https://bitbucket.org/avery_payne/supervision-scripts

 Feel free to clone the repository with either git or mercurial.

 The change was made as part of a reflection that other frameworks besides
 runit could be supported with a minimal amount of effort.  The target
 environment remains Debian and Debian-alike systems (where possible), and
 most of the original runit-script goals remain as well.  Additional goals
 will be added to support s6 out of the box in the very near future.
 Daemontools will be investigated as a potential target as well.  If you
 have any questions, please feel free to ask.  A secondary announcement will
 go out once the transition is 100% complete.



Process Dependency?

2014-10-30 Thread Avery Payne
I know that most process management tools look at having the script do the
heavy lifting (or at least carry the load by calling other tools) when
trying to bring up dependencies.  Couldn't we just have a (service)/needs
directory?

The idea is that the launch program (s6-supervise or runsv) would see the
./needs directory the same way it would see the ./log directory.  Each
entry in ./needs is a symlink to an existing (service) directory, so
everything needed to start (dependency) is already available.  The
(service) launcher would notify its *parent* that it wants those launched,
and it would be the parent's responsibility to bring up each process
entry.  For s6 the parent would be s6-svscan, for runit it would be
runsvdir.  During this time the launcher simply waits until it is signaled
to either proceed, or to abort and clean up.  Once all dependency entries
are up, the parent would signal that the launcher can proceed to start
./run.  There isn't much in the way of state-tracking beyond the signals,
and the symlinks reduce the requirement for more memory.  The existing
mechanisms for checking processes remain in place, and can be re-used to
ensure that a dependent process didn't die before the ./run script turns
over control to (service).  Just about all ./run scripts remain as-is and
even if they launch a dependency they continue to work (because it's
already launched).
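
As a small illustration of the proposed layout (service names are only
examples), the symlinks would be created like so:

mkdir -p /etc/sv/forked-daapd/needs /etc/sv/avahi-daemon/needs
ln -s /etc/sv/avahi-daemon /etc/sv/forked-daapd/needs/avahi-daemon
ln -s /etc/sv/dbus /etc/sv/avahi-daemon/needs/dbus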

What are the hidden issues that I'm not aware of?


License selection for process scripts

2014-10-23 Thread Avery Payne
This may sound a bit off-topic but it has a practical purpose.  Currently I
am working under the assumption that a BSD 3-Clause license may be
sufficient to provide process control scripts for daemontools, runit, and
s6.  However, each has a different license.  I need to find a license that
provides maximum compatibility without placing the entire work in the
public domain (which is roughly the equivalent of saying "I abandon this.")
 I'm more than open to doing multiple licenses if that is necessary.
Suggestions are welcome.


Re: another distro using runit...

2014-10-20 Thread Avery Payne
On Mon, Oct 20, 2014 at 10:52 AM, John Albietz inthecloud...@gmail.com
wrote:


 Re: runit, a few things I've run into:
 - I haven't found a way to lower the default wait time on init start from
 the current default 1 second. Also not sure if there's a way to lower the
 runsvdir service poll timer below 1 second?


Just curious - why under 1 second?


 - With nearly all of my services, I create enable scripts that check for,
 and if necessary set up directories and perhaps even default passwords or
 databases.


Most (process)/run scripts will support the pre-run environment setup, such
as creating /var/run/(process) directories, or setting up special files.
Usually this takes the form of a check for the directory/file, and if it
isn't there, it creates it.
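
A typical shape for that check-and-create step, sketched here with made-up
names and paths:

#!/bin/sh
exec 2>&1
# create the runtime directory the daemon expects, if it isn't there yet
[ -d /var/run/mydaemon ] || {
  mkdir -p /var/run/mydaemon
  chown mydaemon:mydaemon /var/run/mydaemon
}
exec chpst -u mydaemon /usr/sbin/mydaemon --foreground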


 And I haven't found an elegant way yet to integrate this into
 runit. I think it would be useful to separate out a command for 'enable'
 that would run successfully only once for a service.


This is entirely possible, provided your file(s) don't conflict with other
packagers or arrangements.  Keep in mind that runit is a re-imagined
version of djb daemontools.  There are others like s6 that also qualify in
this arrangement.  It wouldn't take much to make your efforts portable
between these frameworks.


 Or I guess I can
 create some idempotent test that runs before each service run command.


Exactly.  This could be as easy as doing '[ -f ./foo ] || { do-something &&
touch ./foo; }' in your startup script.


 But
 that doesn't seem as elegant.


I guess we should define elegant?  For some people, seeing a file is
elegant. The runit approach is file-centric and it would seem likely (to me
at least) to keep with the same file-and-symlink approach to dealing with
the service definition.


 Runit already has a concept of a 'check' file
 and a 'finish' file. What do you think about adding support for a 'enable',
 'start' or 'pre' file that only gets run on service start?


We have a post-exit hook in the form of (service)/finish, which runs after the
service terminates; a pre-start hook may or may not be needed, given that the
(service)/run script typically handles all of the pre-start requirements.

You could also make an arrangement where you issue a run-once, which does
the preliminary setup, and then run it normally.  Upon exit the first time,
a setup flag is placed in the service directory and it will know to skip
the preliminary setup.   Not a great solution, but it would fit within the
framework.

sv once (process) ; sv start (process).


Re: init.d script names

2014-10-02 Thread Avery Payne
On Thu, Oct 2, 2014 at 7:01 PM, Charlie Brady 
charlieb-supervis...@budge.apana.org.au wrote:


 On Thu, 2 Oct 2014, Avery Payne wrote:

  On Thu, Oct 2, 2014 at 3:55 PM, Laurent Bercot ska-supervis...@skarnet.org
  wrote:

   Yeah, unfortunately that's not going to happen. You might get close for
   some services, maybe even a majority, but in the general case there's no
   direct mapping from a System V script to a supervision run script.
 
  Won't stop me from trying.  Even if only 10% are direct maps, that's
  approximately 100+ scripts I don't need to write.

 What about 0%? No System V script execs the service so that it runs in the
 foreground. Startup would hang if it did.


The scripts in question are /etc/sv/(service)/run scripts, not the actual
init.d scripts.  The plan is to use a set of templates, coupled with
environment variables, to handle all of that.  The template has just the
bare necessities in it, with the last lines looking like

[ -f ./options ] && . ./options
exec chpst -P $DAEMONNAME $DAEMONOPTS

... where $DAEMONOPTS would have the necessary flags to cause the program
to run in the foreground. The environment variables are already
partially-set in /etc/default/(service) on Debian, and some of them do pass
the equivalent of $DAEMONOPTS in the init.d versions of the script.  So I'm
not exactly re-inventing the wheel here.  Worst case, if I need to, I can
place a /etc/sv/(service)/options file that does the same thing as
/etc/default/(service), and have the flags stored in there...I don't know
if Debian allows or encourages alteration of the /etc/default/(service)
stuff at the package level, but in case they don't, the options file would
be the backup plan to deal with that.
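
For instance, a hypothetical /etc/sv/sshd/options for OpenSSH might contain
nothing more than:

DAEMONNAME=/usr/sbin/sshd
DAEMONOPTS="-D -e"

so the template above ends up exec'ing sshd in the foreground (-D) with its
log output sent to stderr (-e), where the supervisor's logger can catch it.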

Of course, this only applies for simpler services that don't have special
needs.  I've already bumped into ejabberd wanting to do its own thing.


Re: init.d script names

2014-10-02 Thread Avery Payne
  It's a harder problem than it looks, really, and all the easy
 way outs are hackish and will fail horribly when you least expect it.


So, should I stop work on runit-scripts and s6-scripts?  If this is all for
naught, then why put the effort in?