Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Avery Payne


On 1/19/2015 2:31 PM, Jonathan de Boyne Pollard wrote:

Avery Payne:
> * implement a ./wants directory.  [...]
> * implement a ./needs directory.  [...]
> * implement a ./conflicts directory.  [...]

Well, this looks familiar.


I ducked out of ./needs and ./conflicts for the time being; if I spend 
too much time on those features, the scripts won't move forward.  I'm 
already several weeks "behind" the schedule that I have set for the 
scripts.




Before you read further, including to my next message, get yourself a 
copy of nosh and read the manual pages therein for service-manager(1) 
and system-control(1), paying particular attention in the latter to 
the section entitled "Service bundles".


Then grab the nosh Guide and read the new interfaces chapter.  On a 
Debian system this would be:


xdg-open /usr/local/share/doc/nosh/new-interfaces.html


Sounds like I have my homework cut out for me.  I will do so as soon as 
I can, although I warn you that it joins an already long list of 
material to read and think about.




Good news for BSD

2015-01-19 Thread Jonathan de Boyne Pollard

Laurent Bercot:

Readiness notification is hard: it needs support from the daemon
itself, and most daemons are not written that way.



Ah yes.  That reminds me.  I said a while back that I have to give some 
of you some good news.


https://lists.debian.org/debian-devel/2014/10/msg00709.html

You might be interested in the readiness discussion, too.


Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

Laurent Bercot:

I'm of the opinion that packagers will naturally go towards what gives
them the least work, and the reason why supervision frameworks have
trouble getting in is that they require different scripting and
organization, so supporting them would give packagers a lot of work;
whereas sticking to the SysV way allows them not to change what they
already have.  Especially with systemd around, which they already kinda
have to convert services to, I don't see them bothering with another
out-of-the-way packaging design.

So, to me, the really important work that you are doing is the run
script collection and standardization.  If you provide packagers with
ready-made run scripts, you are helping them tremendously by reducing
the amount of work that supervision support needs, and they'll be more
likely to adopt it.

nosh 1.12 comes with a collection of some 177 pre-built service 
bundles.  As I said to the FreeBSD people, my goal is the 155 service 
bundles that should replace most of FreeBSD's rc.d.  (There are a load 
of non-FreeBSD bundles in there, including ones for VirtualBox 
services, OpenStack, RabbitMQ, and so forth, which is why I haven't 
reached 155 even though I've made 177.)


It also comes with a tool for importing systemd service and socket 
units into service bundles.  And the nosh Guide chapter on creating 
service bundles has pointers to the run file collections by Gerrit 
Pape, Wayne Marshall, Kevin J. DeGraaf, and Glenn Strauss.


xdg-open /usr/local/share/doc/nosh/creating-bundles.html

Incidental note: I just added another service bundle, for nagios, to 
version 1.13 because of this:


http://unix.stackexchange.com/questions/179798#179798

Enjoy this, too:


http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/run-scripts-and-service-units-side-by-side.html



RE: thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

Avery Payne:

Finally, with regard to the up vs actually running issue, I'm not even
going to try and address it due to the race conditions involved.


RabbitMQ is the canonical example of the real-world problems here.  With 
durable queues containing a lot of persistent messages, it can be tens 
of minutes (on one of my machines) before a freshly started RabbitMQ 
daemon opens its server socket and allows TCP connections.
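The race Avery mentions can at least be narrowed by polling the daemon's socket rather than trusting process-up status.  A minimal sketch in Python (the host, port, and timeout values are illustrative, not taken from this thread):

```python
import socket
import time

def wait_tcp_ready(host, port, timeout=30.0, interval=0.5):
    """Poll until the daemon accepts TCP connections, or give up.

    "Process is up" is not "service is ready": a freshly started
    RabbitMQ can take many minutes to open its server socket.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A completed TCP handshake is the readiness test here.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

This only papers over the problem from the outside; real readiness notification from the daemon itself, as discussed elsewhere in this thread, is the proper fix.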




Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

John Albietz:
> I wonder if this will help address a common situation for me where I
> install a package and realize that at the end of the installation the
> daemon is started using upstart or sysv.
>
> At that point, to 'supervise' the app, I first have to stop the current
> daemon and then start it up using runit or another process manager.

On Debian, for one, they aren't started using upstart or sysv (whatever 
that is).  Maintainer scripts enable them with update-rc.d and start 
them with invoke-rc.d.  You are expected to have update-rc.d and 
invoke-rc.d tools that are appropriate to your init system, as well as 
the respective control files of course.


The openrc package comes with update-rc.d and invoke-rc.d commands that 
understand openrc scripts.  The sysv-rc package comes with update-rc.d 
and invoke-rc.d that understand systemd units, upstart jobs, and System 
5 rc.d scripts.  Ironically, the systemd and upstart packages do not 
come with their own update-rc.d and invoke-rc.d commands, relying 
instead upon the sysv-rc package to supply them.


This is all a bit shaky and rickety, though.  One well-known fly in the 
ointment is that what may be a single init.d script for System 5 rc may 
be multiple service and socket units for systemd, and stopping a 
socket-activated service for a package upgrade might not do the right 
thing, as socket activation might start the service unit again 
mid-upgrade.


Last year, I gave Debian Developer Russ Allbery a patch for an improved 
version of the Debian Policy Manual that sets this out more clearly than 
the current one.  You might want to get it off him. The sections (of the 
patched document) that you are interested in are 9.3.1.2, 9.3.6, 
9.3.6.1, 9.3.6.2, and 9.3.6.3.




Re: Using runit-init on debian/Jessie in place of sysvinit/systemd

2015-01-19 Thread Jonathan de Boyne Pollard

Avery Payne:

I have been giving the "one shot" question considerable thought.  I
have a number of scripts that require this kind of behavior, so it's of
great interest to me.  The current methods that I have encountered are:

+ use a pause(1) command that simply sleeps forever on a subset of
signals, then terminates; this effectively holds the script
+ as per discussion elsewhere on the mailing list, have the script send
a signal to itself to make it go to sleep (untested)


nosh has a built-in pause(1) command, created for this very purpose.  It 
is used in run scripts created by convert-systemd-units if a systemd 
service unit has Type=oneshot and RemainAfterExit=false.  However, nosh 
has a third option that you haven't listed.


If a systemd service unit has RemainAfterExit=true, irrespective of 
type, then instead convert-systemd-units makes use of the "run_on_empty" 
mechanism in service-manager(1), where a file named "remain" in the 
service/ directory causes the service manager not to automatically leave 
the RUNNING state when the daemon process exits.  One sees this with the 
pre-supplied standard targets, which are in the running state but have 
no process ID:


JdeBP % systemctl status {basic,local-fs,multi-user,normal,server,sysinit,workstation,virtualbox}.target sshd.service

/etc/system-manager/targets/basic: running 3d 4h 55m 12s ago
/etc/system-manager/targets/local-fs: running 3d 4h 55m 17s ago
/etc/system-manager/targets/multi-user: running 3d 4h 55m 11s ago
/etc/system-manager/targets/normal: running 3d 4h 55m 11s ago
/etc/system-manager/targets/server: running 3d 4h 55m 11s ago
/etc/system-manager/targets/sysinit: running 3d 4h 55m 19s ago
/etc/system-manager/targets/workstation: running 3d 4h 55m 11s ago
/etc/system-manager/targets/virtualbox: running 3d 4h 55m 10s ago
/var/sv/sshd: running (pid 220) 3d 4h 55m 49s ago
JdeBP %
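The "remain" behaviour can be simulated in a few lines.  This is a sketch of the semantics only, not nosh code; the file name "remain" is the one detail taken from the description above:

```python
import os

def state_after_exit(service_dir):
    """What a (simulated) service manager reports once the daemon exits.

    A file named "remain" in the service directory keeps the service in
    the RUNNING state with no process ID, which is how target bundles
    can report "running" without a pid.
    """
    if os.path.exists(os.path.join(service_dir, "remain")):
        return "running"   # the RemainAfterExit=true equivalent
    return "stopped"
```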


Re: thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

Avery Payne:

But from a practical perspective there isn't anything right now that
handles dependencies at a global level.


Now you know that there is.

The nosh design is, as you have seen, one that separates policy and 
mechanism.  The service-manager provides a way of loading and unloading 
services, and starting and stopping them.  The dependency system is 
layered entirely on top of that.  Whilst one can use 
svscan/service-dt-scanner and svc/service-control and have things run 
the daemontools way, one can alternatively use the 
start/stop/enable/disable subcommands of system-control and have service 
dependency management instead.


The trick is that dependency management is calculated by the 
system-control program.  When you tell it "systemctl start 
workstation.target" it follows all of the wants/, required-by/, and 
conflicts/ symbolic links recursively to construct a set of start/stop 
jobs for the relevant services.  Then it follows the after/ and before/ 
symbolic links to turn that set into an ordered graph of jobs.  Finally, 
it iterates through the graph repeatedly, sending start and stop 
commands to the service manager for the relevant services and polling 
their statuses, until all jobs have been enacted.


conflicts/ is actually easy, although it took me two tries.  If 
"A/wants/B" exists, then a start job for A creates a start job for B. 
If "A/conflicts/B" exists then a start job for A creates a stop job for B.
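The wants/ and conflicts/ rules just described reduce to a small recursive walk.  A sketch in Python over plain dictionaries (the after/ and before/ ordering pass is omitted, and the service names are illustrative):

```python
def collect_jobs(service, wants, conflicts):
    """Build the start/stop job set for a start job on `service`.

    A start job for A creates a start job for every A/wants/B and a
    stop job for every A/conflicts/B; start jobs recurse, stop jobs
    do not.
    """
    jobs = {}

    def start(name):
        if jobs.get(name) == "start":
            return                      # already visited
        jobs[name] = "start"
        for dep in wants.get(name, ()):
            start(dep)
        for rival in conflicts.get(name, ()):
            jobs[rival] = "stop"

    start(service)
    return jobs
```

For example, a hypothetical workstation target that wants basic, which in turn wants sysinit and conflicts with rescue, yields start jobs for all three and a stop job for rescue.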


The important point is that the service manager is entirely ignorant of 
this.  It is just told to start and stop individual services, and it 
knows nothing at all of dependencies or orderings.  (A dependency is not 
an ordering relationship, of course, but the manual page that I said to 
read has already explained that. (-:)  It's all worked out outwith the 
service manager.


Which means, of course, that it is alterable without changing the 
service manager.


And indeed, nosh demonstrates such alteration in action.  The 
dependency-based system that one gets with system-control is one of two 
alternative policies that come in the box; the other being an old 
daemontools-style system with a /service directory and ./log 
relationships, as implemented by the service-dt-scanner (a.k.a. svscan) 
daemon.


Again, the trick is that the daemontools-style stuff is all worked out 
in svscan, and the service manager proper has no dealings in it.  The 
service manager provides a flexible plumbing layer.  Higher layers 
decide how they want that plumbing put together.


thoughts on rudimentary dependency handling

2015-01-19 Thread Jonathan de Boyne Pollard

Avery Payne:
> * implement a ./wants directory.  [...]
> * implement a ./needs directory.  [...]
> * implement a ./conflicts directory.  [...]

Well, this looks familiar.

Before you read further, including to my next message, get yourself a 
copy of nosh and read the manual pages therein for service-manager(1) 
and system-control(1), paying particular attention in the latter to the 
section entitled "Service bundles".


Then grab the nosh Guide and read the new interfaces chapter.  On a 
Debian system this would be:


xdg-open /usr/local/share/doc/nosh/new-interfaces.html

Have you done those?  Good.  That saves me writing it all out again.  (-:

Now hit your next message button.



Re: [PATCH 0/4] Add info on why process is down to statusfile

2015-01-19 Thread Laurent Bercot

On 19/01/2015 21:55, Olivier Brunel wrote:

Side question: I see you haven't added support of the file "ready" into
s6-svstat, any reason?


 I hesitated, then decided against it, but that's not a strong decision.

 The argument was that s6-svstat shows the state of the service as it is
seen by s6-supervise, i.e. purely the process up/down state. Introducing
the notion of readiness into it would change things. For instance,
s6-svstat currently prints the number of seconds that the process has
been up; users may be more interested in knowing the number of
seconds that the process has been ready, and that needs storing another
timestamp. The supervise/ready file could be used for that, but I'm
not comfortable enough yet with tightly integrating readiness into the
whole series of tools. In the beginning, I thought I could get away
with s6-notifywhenup only, but it appears that modifications are
spreading across several binaries - I need time to convince myself that
it's not gratuitous feature creep and the utility is worth the increase
in complexity. I'm paranoid like that.
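The extra timestamp Laurent describes could look something like this.  The file name and the use of mtime as the timestamp are assumptions for illustration, not s6's actual on-disk layout:

```python
import os
import time

def seconds_ready(ready_file):
    """Seconds since the readiness marker was written, or None.

    Assumes the supervisor touches `ready_file` (e.g. supervise/ready)
    when the daemon signals readiness, so its mtime is the moment the
    service became ready, as distinct from the moment it started.
    """
    try:
        return time.time() - os.stat(ready_file).st_mtime
    except FileNotFoundError:
        return None
```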

--
 Laurent



Using runit-init on debian/Jessie in place of sysvinit/systemd

2015-01-19 Thread Jonathan de Boyne Pollard

Luke Diamand:
> Are there any plans to create a debian package that would just put all
> the bits in the right place, so you can simply install the package and
> have it then use runit-init as /sbin/init ?

It's called runit-run, created over a decade ago, and it was a Debian 
package for some years until other people filed bugs that discouraged M. 
Pape and led him to believe that it wasn't wanted.


https://tracker.debian.org/pkg/runit-run

Note that people are now filing bugs against runit, some of which have 
been causing M. Pape grief, because they involve things like the 
disappearance of /etc/inittab pulling the rug out from underneath the 
maintainer scripts.


https://bugs.debian.org/cgi-bin/pkgreport.cgi?src=runit



Re: [PATCH 0/4] Add info on why process is down to statusfile

2015-01-19 Thread Olivier Brunel
On 01/19/15 17:41, Laurent Bercot wrote:
> On 19/01/2015 01:04, Olivier Brunel wrote:
> 
>> Sure; though I'm totally fine if you want to discuss them, or would like
>> me to fix/make changes and re-send, as you see fit. That's what I
>> expected really.
> 
>  I have implemented your suggestions, with minor changes.
> 
>  For instance, I removed the "felldown" flag: the only case where it would
> be false is at boot time, when the service hasn't been launched yet.
> Using s6-svstat at this point is very unlikely and would return a transient
> result, identified with "want up" in the output, unless the service has a
> down file, in which case the admin is supposed to know what he's doing -
> in other words, it's a corner case not worthy of ad-hoc code IMO.
> 
>  Please try the latest skalibs and s6 git snapshots and tell me if they're
> working for you.

Yep, works great. Thanks a lot!

Side question: I see you haven't added support of the file "ready" into
s6-svstat, any reason?



Re: [PATCH 0/4] Add info on why process is down to statusfile

2015-01-19 Thread Laurent Bercot

On 19/01/2015 01:04, Olivier Brunel wrote:


Sure; though I'm totally fine if you want to discuss them, or would like
me to fix/make changes and re-send, as you see fit. That's what I
expected really.


 I have implemented your suggestions, with minor changes.

 For instance, I removed the "felldown" flag: the only case where it would
be false is at boot time, when the service hasn't been launched yet.
Using s6-svstat at this point is very unlikely and would return a transient
result, identified with "want up" in the output, unless the service has a
down file, in which case the admin is supposed to know what he's doing -
in other words, it's a corner case not worthy of ad-hoc code IMO.

 Please try the latest skalibs and s6 git snapshots and tell me if they're
working for you.

--
 Laurent