Re: [announce] mdevd-0.0.1.0 - a mdev-compatible uevent manager

2018-01-08 Thread Colin Booth
On Mon, Jan 08, 2018 at 07:31:17PM +, Laurent Bercot wrote:
> > #!/usr/bin/execlineb -P
> > pipeline { find /sys -type f -name uevent -print0 }
> > forstdin -0 -p f importas -u f f redirfd -w 1 $f echo add
>  The problem with your script is that it's getting a lot of
> duplicates, since multiple symlinks in /sys point to the same
> directory describing your device. I'm still hoping to avoid scanning
> the entirety of /sys, so I'd like to find a correct pattern, but if
> there's no other way, I'll just scan everything discarding symlinks.
> 
'-type f' should only find regular files, not symlinks. I don't believe
find will even recurse into symlinked directories by default, so
deduplication shouldn't be necessary.
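
For comparison, a plain-sh sketch of the same coldplug trigger (assuming
a POSIX find with -exec; -type f already skips the symlinks):

#!/bin/sh
find /sys -type f -name uevent -exec sh -c 'echo add > "$1"' sh {} \;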

-- 
Colin Booth


Re: s6-svscan - controlling terminal semantics and stdin use

2018-01-01 Thread Colin Booth
On Mon, Jan 01, 2018 at 07:53:39PM -0800, Earl Chew via skaware wrote:
> Thanks for the s6-* family of programs.
> 
> I've just started using s6-svscan for some deployments.
> 
> In one of the scenarios, I was prototyping the behaviour of s6-svscan
> over a supervision tree directly at the interactive terminal for use in
> a cron-based scenario.
> ...
> Has this scenario (ie starting s6-svscan from an interactive terminal)
> been considered previously?
I believe this particular failure mode has been considered and decided
an edge-enough case to not be worried about. Laurent will have to say
for sure though.
> 
> My second observation is that stdin of s6-svscan is inherited by all its
> s6-supervise children. I'm wondering if there is anything to be gained
> by that, and whether it would be less surprising to set stdin to
> /dev/null after fork() since having a herd of processes attempting to
> read from the same stdin does not seem to lead anywhere useful.
stdin, stdout, and stderr are inherited when a process is forked;
there's nothing special going on here. Since s6-svscan and s6-svscanctl
ignore stdin, keeping it open shouldn't impact anything. If you want to
close stdin, do it before execing into s6-svscan (or as part of the call
if using shell).
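
For example, a sketch of both approaches (the scandir path is just
illustrative):

# from a shell:
exec s6-svscan /run/service < /dev/null
# or in execline:
redirfd -r 0 /dev/null s6-svscan /run/service
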
> 
> What do you think?
> 
> Earl

-- 
Colin Booth


Re: Where does /dev get mounted?

2017-10-30 Thread Colin Booth
On Mon, Oct 30, 2017 at 12:14:50PM -0500, Brett Neumeier wrote:
> Hi all,
> 
> I have a system working well with the combination of s6/s6-rc/s6-linux-init
> -- all of which work for me exactly as documented. So I don't have any
> problems, really!
> 
> But I do have a question: as documented, I find that when stage1's child
> process execs into stage2, there is a devtmpfs mounted at /dev. I don't
> understand where this happens! I see where the stage 1 script mounts a
> tmpfs at /run, but I don't see anything there, or in the initial s6
> scandir, that mounts /dev. What am I missing?
Assuming a distro, the initramfs handles that. It also handles mounting
/sys and running an initial udev coldplug cycle.
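
If you're building without an initramfs, your stage 1 script has to do
the equivalent itself. A rough sketch of the mount part (the coldplug
step needs a running udevd, so it usually comes later):

mount -t devtmpfs dev /dev
mount -t sysfs sys /sys
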
> 
> Cheers!
> 
> Brett
> 
> -- 
> Brett Neumeier (bneume...@gmail.com)

-- 
Colin Booth


Re: .env file handling

2017-10-24 Thread Colin Booth
On Tue, Oct 24, 2017 at 06:22:46AM +, Laurent Bercot wrote:
> > I tried rewriting the whole thing in execline and while I'm pretty sure
> > it's doable it's not easy.
> 
>  A direct translation of Casper's script to execline could be:
> 
> #!/command/execlineb -P
> backtick LIST { cat /path/to/xyz.env }
> importas -nsd"\n" LIST LIST
> env -i ${LIST}
> /path/to/xyz
> 
>  It's not as idiomatic as other ways to handle variables in execline,
> but it should work.
I regularly forget that env does its work by chainloading as opposed to
being a built-in that manipulates the shell's environment directly.
> 
>  The underlying difficulty with Monty's question is that execline tries
> to avoid parsing as much as possible, and a file full of key=value lines,
> as simple as it is, still requires some parsing. execline wasn't made
> for this; the idiomatic way to store key-value pairs in the filesystem
> for use by an execline script is, as Monty found out, s6-envdir.
> If you really want a shell-like syntax, the best way to handle it is
> with tools that already understand this syntax, such as a shell, or as
> Casper suggested, env.
> 
I would say that is one of the two difficulties. The other one is that
execline also tries hard not to carry any overhead, which means you can
often end up in situations where its aggressive scoping makes things
challenging at best (assuming you're trying to stay within the spirit of
the language).

-- 
Colin Booth


Re: .env file handling

2017-10-23 Thread Colin Booth
On Tue, Oct 24, 2017 at 09:25:37AM +0800, Casper Ti. Vector wrote:
> > #!/bin/sh
> > exec env -i $(cat /path/to/xyz.env) /path/to/xyz
> And of course you should be careful with the contents in `xyz.env'.
> 
If you don't want to use cat, you can do the same with:
#!/bin/sh
while read A ; do
  export "$A"
done < "$1"
exec prog...

I tried rewriting the whole thing in execline and while I'm pretty sure
it's doable it's not easy. The problem is that scope in execline doesn't
extend past the execution context of a given program, so while the
following program looks like it should work, it doesn't:

#!/command/execlineb
elgetpositionals
redirfd -r 0 $1
withstdinas -n VAR
prog...

since prog will get run with the environment variable VAR set to the
entire contents of your file.

Changing it to the following to unwrap VAR ends up with you setting the
variables correctly each loop, but not getting everything pulled into
the environment at once due to the aforementioned scoping:

#!/command/execlineb
elgetpositionals
redirfd -r 0 $1
foreground { 
  forstdin VAR
  importas -usd= VAR VAR
  export ${VAR}
  s6-echo ""
}
prog...

You can test that it's setting the variables and then losing them by
changing `s6-echo ""' to `env', and by setting prog... to
`foreground { s6-echo "" } env'.

All hope is not lost, however. You can do it with execline and s6 as
long as you have a tmpfs lying around somewhere:

#!/command/execlineb 
elgetpositionals
foreground { mkdir -p /run/envdir }
redirfd -r 0 $1
foreground {
  forstdin -Cd"\n" VAR
  importas -u VAR VAR
  multidefine -d= ${VAR} { K V }
  redirfd -w 1 /run/envdir/${K}
  s6-echo ${V}
}
s6-envdir /run/envdir
prog...

That does create a directory somewhere, but it parses a multi-line K=V
style file into something that s6-envdir can handle.

Oh, and it should go without saying, but all these script snippets
assume that you're calling them as `script /path/to/envfile'.

Cheers!

--
Colin Booth


Re: difference between bundles and dependencies in s6-rc

2017-05-26 Thread Colin Booth
On Fri, May 26, 2017 at 03:58:15PM -0500, Brett Neumeier wrote:
> Hello Skaware!
> 
> I'm in the process of setting up s6 and s6-rc as the init and service
> management systems for my linux system, and am curious: is there a
> significant functional difference between a bundle service, and a oneshot
> atomic service that does nothing but declares a bunch of other services as
> dependencies?
> 
> If there is a functional difference -- what is it?
> 
Hi Brett,

Getting a few things out of the way first, just so we're on the same
page.

Dependencies are ordering constraints: "A depends on B, which depends
on C" means that to bring up A, you first need to start B, which in turn
first requires C.

Bundles are groupings of services: Bundle 1, containing A, B, and C, can
be used as a shorthand to address all of those but internally there
aren't any ordering guarantees (unless A, B, and C also have
interdependencies).

The functional difference then is this: telling a bundle to go up or go
down (especially important in the down case) will signal all contents of
that bundle to change state (in addition to anything that depends on
anything within the bundle or the bundle itself), whereas telling a
synchronization oneshot to go down will leave the stuff that it depends
on untouched.

In my own setups, I do both: I have essentially run-level bundles
(bndl-init, bndl-local, bndl-lan, bndl-all) that contain all the things
needed to make that level work. I then have a synchronization oneshot
that depends on those bundles and is a member of the next bundle up the
chain (so ok-init depends on init, and lives in local). That way, I can
go to a minimally functional state (init only) without having to mess
around with s6-rc shutting down udev or trying to remount half my disks.
Services that have specific requirements take a dependency on the atomic
in question, whereas things with wider-sweeping requirements like xdm or
a web server take a dependency on the synchronization point.
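
In s6-rc source-format terms, that arrangement looks roughly like this
(a sketch; the names are the ones from my setup above):

bndl-init/type           # contains: bundle
bndl-init/contents       # one atomic service name per line
ok-init/type             # contains: oneshot
ok-init/up               # can be empty
ok-init/dependencies     # contains: bndl-init
bndl-local/contents      # lists ok-init along with everything else at that level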

Cheers!
-Colin


Re: Building execline on Ubuntu 17.04

2017-04-15 Thread Colin Booth
On Sat, Apr 15, 2017 at 03:12:11PM +0200, En Nu wrote:
> I tried:
> 
> sudo apt-get install skalibs-dev
> git clone git://git.skarnet.org/execline
> cd execline
> ./configure
> 
> I get this error running the above:
> 
> ./configure: error: /usr/lib/skalibs/sysdeps is not a valid sysdeps directory
> 
Hi Vincent,

The version currently packaged in Debian (and so Ubuntu as well) is
about seven years out of date. I am in the process of getting packages
together for Debian and at some point in the future will be taking
maintainership of skalibs, as well as adding execline, s6, and a few
others. This will probably entail also doing merges from Debian to
Ubuntu when bug fixes happen, but since I'm not an Ubuntu user I haven't
quite figured out that part of the plan yet.

Regardless though, until that happens my suggestion is to clone or
download skalibs as well, build that first, and then build execline.
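
Roughly (a sketch; assuming the skalibs repo lives at the analogous URL
and you're fine with the default install paths):

git clone git://git.skarnet.org/skalibs
git clone git://git.skarnet.org/execline
( cd skalibs  && ./configure && make && sudo make install )
( cd execline && ./configure && make && sudo make install )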

Cheers!
-Colin


Re: Man pages

2017-03-31 Thread Colin Booth
On Fri, Mar 31, 2017 at 6:02 AM, Guillaume Perréal  wrote:
> Pandoc (http://pandoc.org/) might be useful. The out-of-the-box template is
> ugly, but nonetheless usable.
>
That's what I was using but the post-processing editing overhead was
too high without digging in and adjusting the conversion template.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: Man pages

2017-03-26 Thread Colin Booth
On Mar 25, 2017 10:01 PM, "S. Gilles"  wrote:
>
> This is a rather silly question, but the other day I wanted to look
> up the syntax of some command or other, but had no internet connection.
> I had always assumed that the online documentation was generated
> from manpages, but I don't see any in the source.
>
> Am I overlooking a repository, or are the docs HTML only?  (And if
> the latter, would hypothetical patches to add manpages be accepted?)

Docs are HTML only but are shipped with the source
(component_name/docs/*.html) and should be present on all systems that
build skaware stuff. This has been mentioned before in the #s6 IRC channel,
and yes, patches to add manpages would be accepted. I've threatened to do
the same, that or to rewrite the docs into something that trivially
compiles into both html and man-style troff, but a lack of time (and the
presence of a copy of lynx on all of my systems that run s6) has kept me
from really digging in to the project.

Cheers!


Re: s6-linux-init: /etc/rc.tini not executed

2017-02-02 Thread Colin Booth
On Thu, Feb 2, 2017 at 4:43 AM, Guillaume Perréal  wrote:
> Hello,
>
> I finally have a working initramfs and know the system is happily booting. I
> am slowly adding one-shots and services to build a functional server
>
> However, it seems there is an issue with poweroff :
>
> If I manually launch /run/s6/services/.s6-svscan/SIGUSR1, everything is fine.
> /etc/rc.tini is executed first, shutting down s6-rc, then /etc/rc.shutdown.
>
> But when I use s6-poweroff, it seems s6-svscan skips directly to
> /etc/rc.shutdown, omitting /etc/rc.tini.
>
> Have I overlooked/misunderstood something ?
Make sure you are running s6-svscan with the -s option.
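
In other words, your stage 1 script should end by execing into something
like the following (a sketch; as I understand it, -s is what diverts
SIGUSR1 and friends to the .s6-svscan/SIGNAL scripts, and -t0 just
disables the periodic rescan):

s6-svscan -st0 /run/s6/services
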
>
> The source of my scripts are there, if one would like to check them:
> https://github.com/Adirelle/s6-alpy/blob/master/rootfs/etc
>
> --
>
> Guillaume.
>



-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: bash < execline > bash

2016-05-01 Thread Colin Booth
On Sat, Apr 30, 2016 at 10:45 PM, Eric Vidal  wrote:
> It is possible to have a zsh/bash and execlineb syntax on the same file?
With some work, yes. By default any execline script plays nice with
interstitial programs that only do chain loading, so programs like
nice, time, etc. are all fine. It does not play nice with programs that
terminate, but execline provides the foreground and background programs
to handle that.
>
> For example :
> #!/bin/execlineb -P
>
> if [ -d /run/example ]; then
> s6-mkdir -m 0755 /run/example
> fi
This is not an execline script; it's pure shell with an alternate mkdir
implementation. Using the execline interpreter is wrong here.
>
> or
>
> #!/bin/execlineb -P
>
> if { s6-test -d /run/example }
> mkdir -m 0755 /run/example
This is fine. The execline if program chain loads into the next
program if its test passes, and while mkdir (and s6-mkdir) are not
chain loading programs, as long as it is the last program in the
script it doesn't matter. If you wanted to have the script do more
work after the mkdir, you would need to rewrite your script as
follows:
#!/bin/execlineb -P
if { s6-test -d /run/example }
foreground { mkdir -m 0755 /run/example }
rest_of_program

foreground is used to wrap a normal, terminating program with one that
understands how to exec and is commonly used when a discrete one-shot
program is needed within an execline program.

Cheers!
-Colin

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: rc-init misunderstanding dependencies files

2016-05-01 Thread Colin Booth
On Sat, Apr 30, 2016 at 10:39 PM, Eric Vidal  wrote:
>
> Hello,
>
> Can you explain me what means the 00 in some dependencies files on your 
> examples providen by s6 packages, please?
Within the examples, 00 is the root service that all other services
depend on, either through an explicit dependency (as with mount-proc) or
through a bundle dependency (anything depending on ok-local will
transitively depend on 00, since an explicit dependency on a bundle is
an implicit dependency on every member of that bundle).
>
> If i understand correctly the first line/name is read AND executed before 
> read AND execute the second line.
> I mean if i have a file dependencies like this :
> mount-proc
> mount-sys
>
> when rc-init read the file dependencies, it launch the service mount-proc 
> first, wait for the exit code then launch the second service mount-sys, right?
A slight correction: rc-init doesn't run anything; it handles
preparing the service tree. s6-rc change $SERVICE does the actual
work. When s6-rc-compile packs the compiled form of the service
directory it creates a stable ordering based on the dependency
callouts. In the case of two services with equal weight (like if
longrunA depends on oneshot1 and oneshot2, and both oneshots have no
dependencies), s6-rc will run both nominally in parallel when it comes
time to bring up the supervision tree. The only way to get the
ordering that you describe, where s6-rc launches one service and then
waits for the exit code before launching the second, is to have an
explicit dependency called out in the second service.
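
For example, taking your two services, a mount-sys definition like this
(a sketch of the source format) gets you that ordering:

mount-sys/type            # contains: oneshot
mount-sys/up              # the actual mount command
mount-sys/dependencies    # contains the single line: mount-proc

With that in place, `s6-rc -u change mount-sys' will bring up mount-proc
and wait for it to succeed before running mount-sys.
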
>
> With this principle i can decide what service need to start first an another 
> or after an another, right?
For the most part yes. You can't have a service that says that it
needs to run before another, but a dependency callout will guarantee
that the listed services are started before the service defining those
dependencies.
>
> Eric Vidal
Cheers!


-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: emptyenv and default path

2016-02-06 Thread Colin Booth
On Sat, Feb 6, 2016 at 10:41 AM, Brett Neumeier  wrote:
> Hi,
>
> I'm trying to get a handle on execline and have found some behavior that
> perplexes me. I'm hoping someone can clarify what's going on!
>
You came to the right place!
>
> If I change the first foreground command (line #1) so that it *also* has a
> full path /opt/skar/bin/foreground, then the script works just as it does
> if /opt/skar/bin is in the PATH when I run the script -- so after emptyenv
> exec's into the next program, the default path is definitely being used.
> Why isn't it used when emptyenv is running?
>
Due to the mechanism of exec, emptyenv doesn't modify its own
environment; it modifies the environment that the next command
receives. Because of this, emptyenv has a $PATH that it attempts to
find foreground on. foreground, however, does not receive a PATH and so
falls back to using the default.
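
One workaround is to re-seed PATH immediately after emptyenv, using a
full path for the one program that has to be found without it. A sketch,
assuming everything lives under /opt/skar/bin as in your example:

#!/opt/skar/bin/execlineb -P
emptyenv
/opt/skar/bin/export PATH /opt/skar/bin:/usr/bin:/bin
foreground { echo "everything from here on sees the restored PATH" }
echo done
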
>
> Cheers!
>
> Brett
Cheers!
-Colin


-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: Rewriting a shell script

2015-11-30 Thread Colin Booth
On Mon, Nov 30, 2015 at 12:14 AM, Laurent Bercot
 wrote:
>  Hm ? I don't think awk understand delimiters at all. It only
> takes a single argument (in this case). The quotes are a shell
> thing.
>
Delimiters was the wrong word. I was poorly trying to say that it
wanted its script as a single element.
>
>> define -sn B ${A} does essentially the same thing as awk '{print $1}'.
>
>  Not at all, if ${A} contains whitespace! ${B} will then expand to
> several words, but you only want the first one.

Eh what? define -sn splits ${A} into N words, of which the first is
put into ${B} and the rest dropped. Actually, on further testing, it
looks like what's happening is that -n throws away the last item in
non-newline terminated values. For example:
---
$ execlineb -c 'define B "a b c" define -sn A ${B} echo ${A}'
a b
---
Not sure if that's intentional, but it's definitely not what the
documentation says is supposed to happen. My confusion was because -n
was throwing away the last delimiter, along with the hostname after it.
That also explains why chomping doesn't do that on newline-terminated
values: it still has something legal to toss.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: Rewriting a shell script

2015-11-29 Thread Colin Booth
On Sun, Nov 29, 2015 at 5:04 PM, Scott Mebberson
 wrote:
>
> Sorry for the noob question. But how can I pass the value to awk without it
> complaining?
>
Hi Scott,

I'm pretty sure that your single quotes are confusing the execline
parser. Since awk accepts either single or double quotes as
delimiters, switching to double quotes should fix it. Your script also
has a few issues that you'd have come across once you fixed the awk
issue. Try the following out instead:

#!/command/execlineb -P
s6-envdir -fn env
importas -un HOSTNAME HOSTNAME
backtick -in A {
pipeline { getent hosts ${HOSTNAME} }
awk "{print $1}"
}
import -u A
...

Specifically, you need the -n option for importas, otherwise you'll
end up sending a newline to awk and getting back a blank line. You can
also swap out importas HOSTNAME HOSTNAME for import HOSTNAME but
that's stylistic.

All that said, you don't need to call awk in a pipeline to get your ip
out of your environment. The following will do the same:
#!/command/execlineb -P
s6-envdir -fn env
importas -un HOSTNAME HOSTNAME
backtick -in A { /usr/bin/getent hosts ${HOSTNAME} }
import -u A
define -sn B ${A}
...

define -sn B ${A} does essentially the same thing as awk '{print $1}'.
You just need to bounce the value in and out of the environment a few
times.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: [announce] s6-rc: a s6-based service manager for Unix systems

2015-09-29 Thread Colin Booth
On Sep 24, 2015 9:20 AM, "Colin Booth" <cathe...@gmail.com> wrote:
>
> Here are the few issues that I've noticed so far:
> * Acpi pm-suspend issue...
Solved. It turns out that the power button scripts are run by acpid and
acpid's path wasn't searching /usr/sbin. Fixed in the ./run script.

> * mountall.sh shutdown...
Solved in a hacky way. mountall.sh tests for a fifo named /run/initctl, so
I create one in the ./up script and remove it after mountall.sh runs.
Pretty? No. Functional? Yes. It (annoyingly) didn't solve the /run
permissions issue, so I need to keep digging on that.

> * bootclean.sh paving /run if run a second time
Solved by having the bootmisc ./down file re-seed .clean and .tmpfs files.
They should be harmless since bootmisc.sh removes them as part of the
system bootstrap.

> * The above mentioned issue where `/etc/init.d/udev' script isn't
> suitable as a oneshot (it starts a daemon) but also isn't suitable as
> a direct translation into a longrun since it does a pile of work after
> udev is up. I'm sure that a longrun or a longrun+follow-on oneshot can
> be written to do this right, I haven't had time to do it.
>
Still an open question on correct ordering. I want to do this:
1) udev longrun
2) udev support oneshot

But I also don't want to maintain a patch set for init.d/udev to remove the
start call to udev itself since udev is perfectly happy to have multiples
running. So currently I'm sticking with the (not terribly pretty):
1) udev-init oneshot
  a) init.d/udev start
  b) udevadm control --exit (could be replaced with init.d/udev stop)
2) udev longrun

Functional but not at all elegant.
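
For reference, the udev-init ./up file is basically just this (a sketch;
the up file is parsed by execlineb, so no shebang is needed):

foreground { /etc/init.d/udev start }
/bin/udevadm control --exit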

Pardon the overlong lines, sent from my phone due to a lack of real
internet in my house.

Cheers!

--
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
--  William Blake


Re: [announce] s6-rc: a s6-based service manager for Unix systems

2015-09-29 Thread Colin Booth
On Tue, Sep 29, 2015 at 10:40 AM, Laurent Bercot
 wrote:
>
>  Ah, OK, I understand.
>  I'd argue that you can convert more or less painlessly by making all your
> services oneshots that call the appropriate init.d/foo scripts and
> forgetting about supervision entirely. As soon as you try digging into
> stuff and actually taking advantage of the supervision tree, it becomes
> work that needs brain involvement, I'm afraid.
>
With the exception of udev, everything that runs in single-user mode
(scripts in rcS.d) is either a oneshot in the classic sense or easily
rewritable as a supervised service. I'm not worried about the stuff
I've rewritten changing definitions on me because all the movement
happens in either config files or in the sanitization or daemonization
areas of the init script (iow, all the stuff you stop worrying about
when supervising stuff). The various oneshot scripts in rcS.d I'm
calling as oneshots from s6-rc, so again, no worries on movement.
udev, being a freedesktop special child, has both oneshot (cleanup,
sanitization, prep) and longrun (daemonization) operations intermixed
in the same script which makes things suck. Hence my current stupid
hack of calling the init script, stopping the daemonized udev, and
then starting a properly supervised udev right after.

I'll probably end up starting a supervised udev and then calling a
hand-rolled oneshot prep script, but it's not ideal from a distro
cutover perspective. I'm ok doing that work, but folks who swing by
wanting to try out the s6-rc/s6-init hotness will either end up with a
non-supervised udev (non-ideal) or a hack of equal-or-worse ugliness
to the one I've got currently. Either way, testing in odd (or not so
odd as the case may be) circumstances is a good way to find out where
the shims are needed and what pain points folks with enthusiasm but no
experience are going to run into.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: [announce] s6-rc: a s6-based service manager for Unix systems

2015-09-24 Thread Colin Booth
On Thu, Sep 24, 2015 at 2:23 AM, Laurent Bercot <ska-skaw...@skarnet.org> wrote:
> On 24/09/2015 02:55, Colin Booth wrote:
>> rc scripts are easy peasy: figure out dependency order, write
>> oneshots. The only one that was an issue is the udev script, and I
>> solved that by running udev as a oneshot, then running udev stop, then
>> firing off the longrun udev.
>
>
>  Gah. One of the main objectives of s6-rc is precisely to cleanly
> handle early services such as udev. You should be able to run udevd
> as a longrun as soon as you need it; what problems did you encounter?
>
udev was easy; the other stuff that the Debian udev init script does
after starting udev was less so. At least, it wasn't suitable as a
longrun, and making it a oneshot explicitly removes the benefit of a
process supervisor. Udev is still the first daemon that runs, it just
runs twice on my system: once in a oneshot that calls the Debian init
script, and once in a longrun. Like I said, it's purely from a desire
to not re-invent the wheel and to use the packaged scripts as much as
possible.
>
>> I can send you my (almost) 100% functional Debian rcS stuff
>> replacement stuff that I use to run one of my systems. If you can
>> figure out why the acpid keyboard hooks don't work after start, that'd
>> be great.
>
>
>  I'm very interested in any examples of scripts that work with a
> "traditional" rc system and don't with s6-rc. It's the exact kind of
> data I need to polish the system and make it suitable as a drop-in
> replacement.
>
Here are the few issues that I've noticed so far:
* The acpi translation stuff doesn't do some step properly and breaks
the sleep button on my laptop. It definitely makes it into the sleep
script but doesn't have the correct privileges so nothing happens.
Manually running pm-suspend as root works, hence the confusion. I got
it to work once after re-running some of the oneshots, so it's
probably an ordering problem where either Debian got it wrong but
there's a race that they happen to get lucky with (unlikely) or I
mis-read something and have a dependency order backwards (more
likely).
* The above mentioned issue where `/etc/init.d/udev' script isn't
suitable as a oneshot (it starts a daemon) but also isn't suitable as
a direct translation into a longrun since it does a pile of work after
udev is up. I'm sure that a longrun or a longrun+follow-on oneshot can
be written to do this right, I haven't had time to do it.
* One of the Debian initscripts (mountall.sh) causes a Debian system
with sysvinit installed but running with s6-svscan as pid 1 to power
itself down! It tests to see if a pipe called initctl is not present in
/run (which it won't be, since we're not running sysvinit) and tests
to see if `update-rc.d' is present (which it will be, since
sysvinit-core is installed). Assuming both tests pass, it fires
SIGUSR1 at pid 1. USR1 tells sysvinit to close and re-open the control
pipe, but tells s6-svscan to terminate in poweroff mode. Avoiding
running this script leaves a few mounts untouched (/boot), and a few
mounts with incorrect permissions (/run is 1640 or something, and
owned by nobody:nogroup).
* If you tear your system down past a certain point and then bring it
back, you end up deleting the s6-* control files. Fixable by making a
bootmisc.sh ./down file that re-seeds the flag files or by updating
the bootclean.sh script to never try to clear /run, regardless of flag
files. Avoidable if you simply don't stop your lowest-level
initialization bundle. There are merits and flaws to both solutions.

I think that's it. They are all solvable with different levels of
annoyance. My goal originally had been to do the minimum of work
needed to run s6-rc as a drop-in replacement and barring those four
things it was just a case of writing a bunch of up and down scripts.
Once I'm finished moving and hopefully have a little more time in the
evenings, I'm going to try and solve those in a minimally invasive
way, but that won't be until next week at the earliest.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: [announce] s6-rc: a s6-based service manager for Unix systems

2015-09-23 Thread Colin Booth
On Wed, Sep 23, 2015 at 7:54 PM, Avery Payne  wrote:
>> I can send you my (almost) 100% functional Debian rcS stuff replacement
>> stuff that I use to run one of my systems. If you can figure out why the
>> acpid keyboard hooks don't work after start, that'd be great. Cheers!
>
> I probably misunderstood, or misrepresented what I was after.  I thought
> that we were talking about something that would take a
> /etc/init.d/(whatever) script and churn out the appropriate ./run file.
Most of the scripts under rcS.d fit the oneshot model much better than
the longrun model. I've written oneshot definitions for s6-rc to
handle the boot ordering correctly, primarily wrapping
`/etc/init.d/service start' (and stop where appropriate, I skipped
making down scripts for init scripts that no-op stop). I mostly did it
that way (instead of re-doing the work into execline) because the
called script might change and I'd rather not have to manage my own
fork of initscripts.
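
So most of the definitions end up being nothing more than this kind of
pair (hypothetical service name, one line per file):

some-service/up      # contains: /etc/init.d/some-service start
some-service/down    # contains: /etc/init.d/some-service stop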

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: [announce] s6-rc: a s6-based service manager for Unix systems

2015-09-23 Thread Colin Booth
On Wed, Sep 23, 2015 at 3:58 PM, Avery Payne  wrote:
>
> * Write a script to translate the ./needs directory of a definition into a
> dependencies file.  Which is more or less a one-liner that pipes the
> directory listing into a text file one line at a time. The script will be
> placed into svcdef/.bin (along with all of the other tool scripts).  Should
> be easy.
Yup. Your ./needs are the same as a s6-rc ./dependencies file.
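
So the translator really can be a one-liner, something like:

ls -1 ./needs > ./dependencies
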
> * Write a script that, for a given definition, calls the ./needs translator
> (when required), then copies the definition + (optional) dependencies file
> into the source directory format, making any name changes as needed.  Not
> rocket science.
The only issue you'll run into there is that you'll need to make that a
two-pass system: pass one generates a ./dependencies file for s6-rc's
source files, and pass two calls s6-rc-compile. I'd suggest seeing
about doing a one-time conversion to the ./dependencies naming, and
then updating your scripts to use that name instead of ./needs in
order to not break folks who don't use s6-rc.
> * Figure out how to place the .bin, .run, .log, and .finish directories into
> the live system.  This is probably done with another simple support script
> that simply does a "cp -Rav" of them to where the compiled definitions live.
> Not exactly rocket science either.
There might be a chance that s6-rc-compile dereferences symlinks into
real files, or just bulk copies those files without massaging them. If
either is the case it should be safe to just put those into place with
the target pointing in the right place (the real .bin directory in the
dereference case, dangling into space in the bulk copy case). You'll
need to experiment but it shouldn't be too much of an issue.
> Beyond that, the rest of the definitions should be plug and play.
>
> With regard to converting rc scripts to source format...I'm all ears, this
> would of course accelerate my project by an order of magnitude, because I
> would be able to simply convert the giant blob of rc definitions that I
> extracted from Debian 7...
>
rc scripts are easy peasy: figure out dependency order, write
oneshots. The only one that was an issue was the udev script, and I
solved that by running udev as a oneshot, then running udev stop, then
firing off the longrun udev.

I can send you my (almost) 100% functional Debian rcS stuff
replacement stuff that I use to run one of my systems. If you can
figure out why the acpid keyboard hooks don't work after start, that'd
be great.

Cheers!


Re: s6-rc-update initial findings

2015-09-17 Thread Colin Booth
On Thu, Sep 17, 2015 at 3:40 AM, Laurent Bercot  wrote:
>  I could theoretically add a control command to s6-supervise to
> make it delay the execution of ./finish. But I don't think it would
> be worth it: it adds significant risks (what if a process sends a
> "block" command, then dies or otherwise fails to send an "unblock"
> command?), and complexity, for an extreme corner case that will
> probably never happen. If a ./finish failure is critical, the user
> should simply tell s6-rc-update to restart the service, which is
> 100% safe because the service directory will then be updated offline
> instead of live.
>
Makes sense. And no, the above isn't worth it. Actually, the corner
case is even more extreme than that. The failure doesn't rely on the
rare chance that a service terminates while it's getting updated, it
relies on the rare chance that the service terminates while it's
getting updated AND ./finish relies on stuff in ./data or ./env.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: s6-rc-update initial findings

2015-09-14 Thread Colin Booth
On Sun, Sep 13, 2015 at 11:25 PM, Colin Booth <cathe...@gmail.com> wrote:
> Things it didn't do right:
> Put the links back into /run/service
>
> That last one was a bit surprising, and is totally fine until the next
> time I (or something else) issues `s6-svscanctl -an /run/service'. I'm
> going to go manually fix that since an 80% empty supervision root is a
> bit uncomfortable. My guess though is that's undesirable behavior
> since unless I'm mistaken adding longruns require triggering a rescan
> of service/.
>
Ok, did some more testing and it looks like the contents of $SVCDIR
end up being the additive delta between current and new. When
initializing, there are no s6-rc-managed services in $SVCDIR, so of
course the delta will be all new services. When adding a new longrun,
your contents of $SVCDIR will only be the new service. It's probably
safe since giving s6-svscan SIGALRM only adds services (never
removes), and s6-rc brings down services by directly sending s6-svc
-wD -dx to the service. Not sure if this was a design decision, but I
still prefer having $SVCDIR be representative of my run state. At
least I now know what's going on.

Cheers!

-Colin

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: s6-rc-update initial findings

2015-09-14 Thread Colin Booth
On Mon, Sep 14, 2015 at 4:44 PM, Laurent Bercot  wrote:
>  Yeah, that's not normal. s6-rc-update should remove the links when it
> brings the old services down, and should also add the links when it
> brings the new services up. I don't have an exact picture of what is
> actually happening in all cases; I didn't have the time today, but I'll
> do more testing on that tomorrow.
>
No worries. I just wanted to get that stuff reported while it was
still fresh in my head.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


s6-rc-update initial findings

2015-09-14 Thread Colin Booth
Just tested out s6-rc-update and it works! Mostly. This is with the
current git head at commit 930c7fb.

Things it did right:
Computed my bundle name change between compiled and compile-new.
Built a new /run/s6/rc directory and re-linked /run/s6/rc to it
Moved all the files for the various s6-supervise processes to use the
new directories (that's a clever trick by the way)
Purged old symlinks from /run/service (s6-svscan root)

Things it didn't do right:
Put the links back into /run/service

That last one was a bit surprising, and is totally fine until the next
time I (or something else) issues `s6-svscanctl -an /run/service'. I'm
going to go manually fix that since an 80% empty supervision root is a
bit uncomfortable. My guess though is that's undesirable behavior
since, unless I'm mistaken, adding longruns requires triggering a rescan
of service/.

Debug output for s6-rc-update dry-runs is somewhat unhelpful when
adding services. Not terribly surprising since `s6-rc -ua change' (dry
run or no) requires an up-to-date live/ to work against.

There's a documentation oversight that should get corrected at some
point. The docs should mention that this doesn't touch the original
compiled database and that it's on the user to update their call to
s6-rc-init before the next reboot.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


s6-rc shutdown timing issue

2015-09-13 Thread Colin Booth
Hey there,

I've been digging into managing a system completely under s6 and I
can't seem to find the right time to run `s6-rc -da change'. Run it
before sending s6-svscan the shutdown/reboot/halt commands and you can
end up with a situation where your read/write drive has been set read-only
before your services have come down. Run it after telling s6-svscan to
start taking the system down, and s6rc-oneshot-runner is stopped by
the time s6-rc tries to disassemble the system.

There are a few solutions that I've come up with, none of them terribly great.

1) Have s6-rc handle setup but not teardown and just ignore oneshots
on shutdown by not calling s6-rc from within the stage3 script.
2) Ignore s6-rc entirely during shutdown, letting s6-svscan's native
signaling to s6-supervise handle longruns and manually trigger a list
of oneshots to destroy after the process table is cleared.
3) Call s6-rc before signaling s6-svscan but make sure that s6-rc
doesn't know about system-critical shutdown routines, and instead call
those directly from stage3 after s6-svscan has destroyed the
supervision tree.
4) Give s6-ipcserverd a flag to ignore SIGTERM like s6-log has, then
call s6-rc -d in stage3 before firing off s6-nuke.

Like I said, all those solutions aren't great. The first limits the
use of down scripts for oneshots and might leave a system in an
undesirable state before shutdown. The second and third both require
that the oneshot list (at least those that need to be fired on
shutdown) be maintained in two places - both s6-rc and an
in-sequence triggered set. The last requires changes to s6-ipcserverd
and makes a third nominally unkillable service.

All four end up with oneshots and longruns being decoupled from each
other, though in the grand scheme of things that isn't the end of the
world. My current solution is number two, though I'd like to be able
to write a handful of ./down scripts for those oneshots that I need to
worry about and let a late run of `s6-rc -da change' take care of it.

One other question that doesn't really belong here but doesn't need
its own thread. If I have a oneshot that only does any work on
shutdown, can I get away with having the required ./up script be
empty, or do I need to write something like
#!/command/execlineb
exit 0

to satisfy the requirement of ./up existing?

Cheers!
-Colin

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: s6-rc shutdown timing issue

2015-09-13 Thread Colin Booth
>  This is the right way to proceed:
>  * First s6-rc -da change
>  * Then s6-svscanctl -t /run/service
>
>  I don't understand the issue you're having: why would your rw filesystem
> be set read-only before the services come down ?
>  - Your non-root filesystems should be unmounted via the corresponding
> "down" scripts of the oneshot services you're using to mount them, so
> they will be unmounted in order.
>  - Your root filesystem will be remounted read-only in stage 3, i.e.
> when everything is down.
>
I was a bit unclear last night. I'm not concerned about things that
are under the control of s6-rc; those of course should be shut down in
the correct order as per s6-rc's design. I'm concerned about stray
backgrounded processes, supervised things that s6-rc doesn't know
about, etc.

My current issue is that I'm initially remounting my root filesystem
as r/w as one of the first steps for s6-rc, which means that if I'm
doing everything correctly, s6-rc attempts to remount root as
read-only as part of its shutdown. That should be safe, since if any
program has a file open for writing when you attempt to remount it,
the operation will fail and get re-attempted in stage 3. I'd like to
account for a situation where there are no open write handles, some
program has escaped the control of s6-rc, and that program will
attempt to write state to disk when signalled. I'm currently getting
around that by having s6-rc not do read-only remounting and instead
solely doing it in stage 3 after the last sync call but I was
wondering if there was a better way.
>
>  If your dependencies are correct, there should be no problem. What
> sequence of events is happening to you ?
>
Mentioned above, but explicitly the sequence that I'm concerned about is:
  s6-rc shuts down all s6-supervised services
  s6-rc successfully remounts all devices read-only
  s6-svscan receives terminate signal and execs into finish, which
execs into stage3 teardown script
  s6-nuke -t catches an orphaned process which attempts to open a file
for writing out persist state

I probably shouldn't be worrying about that particular scenario since
it's pretty rare but I've been thinking about it.

>
>  Yes, an empty ./up script will work. (Oneshot scripts are interpreted
> at compile time by the execlineb parser, which treats an empty script
> as "exit 0".)
>
Cool, thanks. That's what I thought but I wasn't sure to what degree
execlineb cared about script validity and I don't have a terribly
great test methodology for oneshots figured out yet.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: s6-rc shutdown timing issue

2015-09-13 Thread Colin Booth
On Sun, Sep 13, 2015 at 2:34 PM, Laurent Bercot  wrote:
>
>  I'm afraid there's no real solution to the stragglers problem, and the
> only safe approach is to keep everything mounted all the time and only
> perform the unmounts in stage 3 after everything else has been done and
> the processes have been killed.
>
Yeah, the asymmetry is annoying, but all the other options are worse.
Sounds like current best practices should be to keep rw->ro remounts
out of s6-rc (at least / and /var), call `s6-rc -da change' before
signalling s6-svscan, and then explicitly do those remounts in stage
3. Like I said, not great but better than the alternatives, and a lot
less magic than it could be.
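
Concretely, the tail end of stage 3 then looks something like this
(a rough sketch, not my exact script):

foreground { s6-nuke -th }    # TERM and HUP anything still alive
foreground { sleep 2 }
foreground { s6-nuke -k }     # then KILL the stragglers
wait { }
foreground { sync }
mount -o remount,ro /
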
>
>  In other news, I'm now in the process of testing s6-rc-update. I've
> finished brooming the obvious bugs, now is the most annoying part: so
> many test cases, so many things that can go wrong, and I have to try
> them one by one with specifically crafted evil databases. Ugh. I said
> I'd release that thing in September, but that won't happen if I die of
> boredom first.
>  If you're totally crazy, you can try running it, but unless you're
> doing something trivial such as switching to the exact same database,
> chances are that something will blow up in your face - in which case
> please let me analyze the smoke and ashes.
>
When I've got some time I'll spin up a VM and let you know what the
disaster looks like. What do you want in terms of debugging? I'm
assuming old and new service databases at the minimum. Odds are they
won't be nearly as terrible as your custom-built databases, but who
knows.

Cheers!


-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: s6-svstat "want up"

2015-09-08 Thread Colin Booth
On Tue, Sep 8, 2015 at 1:43 PM, Colin Booth <cathe...@gmail.com> wrote:
> On Tue, Sep 8, 2015 at 1:36 PM, Buck Evan <b...@yelp.com> wrote:
>> Below is a silly toy service that I've used to prove out some of the s6
>> behavior.
>>
>> $ cat date/run
>> #!/bin/bash
>> exec date > now.date
>>
>> $ s6-supervise date &
>> [5] 3916351
>>
>> $ cat date/now.date
>> Tue Sep  8 13:32:21 PDT 2015
>> $ s6-svstat date/
>> down (exitcode 0) 0 seconds, normally up, want up, ready 0 seconds
>>
>> $ cat date/now.date
>> Tue Sep  8 13:32:23 PDT 2015
>> $ s6-svstat date/
>> down (exitcode 0) 0 seconds, normally up, want up, ready 0 seconds
>>
>> $ cat date/now.date
>> Tue Sep  8 13:32:24 PDT 2015
>> $ s6-svstat date/
>> down (exitcode 0) 0 seconds, normally up, want up, ready 0 seconds
>>
>> That's all fine and sensical.
>>
>> $ s6-svc  -dx date/
>> [5]   Dones6-supervise date
>> $ s6-svc  -dx date/
>> s6-svc: fatal: unable to control date/: supervisor not listening
>>
>> $ s6-svstat date/
>> down (exitcode 0) 6 seconds, normally up, want up, ready 6 seconds
>> $ cat date/now.date
>> Tue Sep  8 13:33:07 PDT 2015
>>
>> $ s6-svstat date/
>> down (exitcode 0) 10 seconds, normally up, want up, ready 10 seconds
>> $ cat date/now.date
>> Tue Sep  8 13:33:07 PDT 2015
>>
>> The "want up" here seems patently false.
>> The last command sent was "-dx", which tells the thing to go down.
>> The "normally up" is fine, since there's no `down` file.
>>
>> Is this a bug or a feature?
> Neither, but it is totally expected. -dx tells s6-supervise to bring
> the service down and exit. Which causes s6-svscan to restart
> s6-supervise. Which brings up the service.
>
> Runit does the same thing, as does daemontools.
>
> Cheers!
Oops, missed that you weren't running a full supervision tree. I think
it's an oversight and s6-supervise is handling the exit before it
updates the status files.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: s6-svstat "want up"

2015-09-08 Thread Colin Booth
On Tue, Sep 8, 2015 at 1:36 PM, Buck Evan  wrote:
> Below is a silly toy service that I've used to prove out some of the s6
> behavior.
>
> $ cat date/run
> #!/bin/bash
> exec date > now.date
>
> $ s6-supervise date &
> [5] 3916351
>
> $ cat date/now.date
> Tue Sep  8 13:32:21 PDT 2015
> $ s6-svstat date/
> down (exitcode 0) 0 seconds, normally up, want up, ready 0 seconds
>
> $ cat date/now.date
> Tue Sep  8 13:32:23 PDT 2015
> $ s6-svstat date/
> down (exitcode 0) 0 seconds, normally up, want up, ready 0 seconds
>
> $ cat date/now.date
> Tue Sep  8 13:32:24 PDT 2015
> $ s6-svstat date/
> down (exitcode 0) 0 seconds, normally up, want up, ready 0 seconds
>
> That's all fine and sensical.
>
> $ s6-svc  -dx date/
> [5]   Dones6-supervise date
> $ s6-svc  -dx date/
> s6-svc: fatal: unable to control date/: supervisor not listening
>
> $ s6-svstat date/
> down (exitcode 0) 6 seconds, normally up, want up, ready 6 seconds
> $ cat date/now.date
> Tue Sep  8 13:33:07 PDT 2015
>
> $ s6-svstat date/
> down (exitcode 0) 10 seconds, normally up, want up, ready 10 seconds
> $ cat date/now.date
> Tue Sep  8 13:33:07 PDT 2015
>
> The "want up" here seems patently false.
> The last command sent was "-dx", which tells the thing to go down.
> The "normally up" is fine, since there's no `down` file.
>
> Is this a bug or a feature?
Neither, but it is totally expected. -dx tells s6-supervise to bring
the service down and exit, which causes s6-svscan to restart
s6-supervise, which brings the service back up.

Runit does the same thing, as does daemontools.

Cheers!

-- 
"If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern."
  --  William Blake


Re: s6-rc - odd warn logging and a best practices question

2015-08-28 Thread Colin Booth
On Fri, Aug 21, 2015 at 2:11 AM, Laurent Bercot ska-skaw...@skarnet.org wrote:
> Wow. Is it a mount -o remount, or a umount followed by a mount ?
> If a -o remount has this effect on file handles, then it's probably
> worth reporting to the kernel guys, because it's insane.
>
> Even if the script does something nonsensical such as remounting
> everything read-only, which hardly makes any sense for a tmpfs,
> this is not normal behaviour: when I remount a partition in read-only
> mode, and there are still open descriptors for writing, the mount()
> call fails with EBUSY; it does not silently invalidate all the writing
> descriptors!

First reboot in a while so I spent some time tracking this down. It
was caused by some really cute interactions between a few of the
Debian single-user mode system prep scripts. checkfs-bootclean.sh is
safe to run against tmpfs mounts right until you run bootmisc.sh,
which removes the flag files that the clean_all function uses to
identify a tmpfs. So that's been fixed.


> Last time I looked at a mainstream distro's boot cycle, i.e. almost
> 10 years ago, it was already unnecessarily complex and convoluted; and
> Debian was far from the worst. I doubt it has become simpler since.

It probably doesn't help that I'm working against the hardest target
too: laptops. Thankfully, the only place where I really need to
interact with the sysvinit stuff is in the collection of oneshots that
are emulating the single-user portion of the boot cycle. I did find a
script in there that will halt an s6-init system if you run it. That
was fun. It's the only place that I found that actually cares about
what init you're running under. In the case where you have sysvinit
but no initctl control pipe (such as can happen if you mount a new
/run over the old one) it recreates that and then fires off SIGUSR1 at
whatever happens to be init at the time.

The only things left to fix are some file permissions and mounts that
the aforementioned script fixes up, and that ACPI sleep handler
weirdness that I mentioned earlier. Plus, you know, not running a
pre-alpha rc system ;)

> systemd will probably make scripting simpler, by moving a lot of the
> complexity into the C code. Which is obviously the worst possible
> solution.

Probably. I almost want to build out a systemd machine to see what the
early boot land looks like. Depending on what the system prep stuff
looks like it might be easier to gut. Like I said though, almost.

Cheers!



-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Preliminary version of s6-rc available

2015-08-22 Thread Colin Booth
On Fri, Aug 21, 2015 at 6:36 PM, Guillermo gdiazhartu...@gmail.com wrote:
> Hello,
>
> I have the following issues with the current s6-rc git head (last
> commit 8bdcc09f699a919b500885f00db15cd0764cebe1):
(snip)


I run my s6 stuff in slashpackage configuration so I missed the
s6-fdholder-filler issue. The slashpackage puts full paths in for all
generated run scripts so I'm a little surprised it isn't doing that
for standard FHS layouts.

All the uid/gid stuff I've verified as failing in the same ways. I'd
also expect the gid directories to either not be symlinks but their
own directories, or for there to be a single access directory that both
the uid and gid entries are links to. I also don't know s6-fdholder's
rules well enough, but does it treat uid 0 as special, or if you specify
a non-root uid do you also need to specify root?

Lastly, I appear to have never run `s6-rc-db pipeline longrun'. From
the source it's failing in the if (buffer_flush(buffer_1)) call. I may
be wrong, but I think removing the if test and just forcing out the
flush is what you want.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: skaware manpages?

2015-08-22 Thread Colin Booth
On Fri, Aug 21, 2015 at 4:33 PM, Laurent Bercot ska-skaw...@skarnet.org wrote:
> On 22/08/2015 01:03, Buck Evan wrote:
>> Actually, apparently HTML is the preferred format, so we're good.
>> https://www.debian.org/doc/debian-policy/ch-docs.html#s12.4
>
> Yeah, in the 12.1 section they still say that they consider the lack
> of a manpage for a binary as a bug.
> If they'll take stubs, that's probably your best option.

I'm going to take this moment to say that I'm really happy that, for
all their Makefile style guides and build location stuff, the FreeBSD
ports guidelines don't care at all about how documentation is offered.

Speaking of which, I've managed to get caught up with the current
release versions. There was a period where I was lagging about 1-2
release versions behind due to slowness on the part of the port
integrators, but they seem to have picked up the pace now that 10.2 is
out.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Bug in ucspilogd v2.2.0.0

2015-08-21 Thread Colin Booth
On Fri, Aug 21, 2015 at 6:34 PM, Guillermo gdiazhartu...@gmail.com wrote:
> 2015-08-12 2:54 GMT-03:00 Laurent Bercot:
>
> Oh. Then logger version 2.26.2 should work fine adding the -T option,
> and does for me using ucspilogd from s6-2.1.6.0 (which I believe
> didn't change for 2.2.0.0).

Awesome! I don't use logger at home, but we use it at work, so there's
a non-zero chance that at some point we'll switch to a
lighter-than-rsyslog log writer for local messages.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


s6-rc - odd warn logging and a best practices question

2015-08-20 Thread Colin Booth
Hey all,

I've reconfigured one of my Debian systems to boot with s6-init/s6-rc,
and I've been trying to debug a timing issue that I think was my own
fault (my "all services" bundle didn't contain my ersatz single-user
bundle). That mucked up a bunch of ordering, since half of the
initialization stuff wasn't running until I tried to start syslogd.
Anyway, as part of the debugging I found some garbage in the verbose
logging that might just be a logging issue and might be something more
serious.

When running s6-rc -v2 change $bundle, the s6rc-fdholder "pre addition
of notification-fd" message has some garbage characters in it:
s6-svc: warning: /run/s6/rc/scandir/s6rc-fdholderà¯'þpre addition of
notification-fd
s6-svc: warning: /run/s6/rc/scandir/s6rc-fdholder/notification-fdpost
addition of notification-fd

The garbage changes each time, but it's always there.

Also, I know it's aesthetic, but it'd be nice to have a space between
the service name and the text "pre addition" or "post addition".

As for the best-practices question: what's the right way to fake
service notification for daemons that don't support it? My udev run
script is the following, which, while I'm pretty sure it works, strikes
me as not the best:

#!/command/execlineb -P
fdmove -c 2 1
if {
foreground { /etc/init.d/udev start }
foreground { /bin/udevadm control --exit }
}
fdmove 1 3
foreground { s6-echo "" }
/lib/systemd/systemd-udevd

Firstly, that seems to be leaving me with a pipe to nowhere on fd 1
that never closes unless I re-fdmove fd2 back onto fd1 (not sure if
that matters mind you, it probably depends on if the service chats
over stdout at all). Secondly, that seems really hacky to me.

Now, I'm pretty sure that the cleanest method would be to break it up
into two atomics:
oneshot udev-init - runs `udev start' and `udevadm control --exit'
longrun udev-svc - normal run script handling the maintenance of systemd-udevd

The general question though is: what's the best way to handle
readiness notification for services that run a prep script before
starting the daemon proper? Assuming daemon availability is relatively
instant, is foregrounding your initialization script and then moving
the notification fd onto stdout right before sending a blank message
the best method?
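
To make the two-atomic idea above concrete, here's a rough sketch in
s6-rc source form - the udev-init/udev-svc names are the ones from the
list above, the scripts are untested, and the exact file names are per
the s6-rc-compile docs, so treat it as an illustration only:

udev-init/type:
  oneshot

udev-init/up:
  if { /etc/init.d/udev start }
  /bin/udevadm control --exit

udev-svc/type:
  longrun

udev-svc/dependencies:
  udev-init

udev-svc/run:
  #!/command/execlineb -P
  fdmove -c 2 1
  /lib/systemd/systemd-udevd

That keeps the heavyweight prep work in the oneshot and leaves the
longrun's run script trivial.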

I'll do some more testing on the potential timing issues that I've
(hopefully) fixed, but so far it's been an interesting experience. I'm
hoping I can sort out the remaining issues without having to force
ordering to the same level as Debian's rcX.d/S0Xservice scripts or
resorting to check loops inside of run scripts. Also, it'll be nice to
have s6-rc-update, I've been rebooting... a lot.

Once I've got my laptop booting correctly all the way into X with the
wireless running and a few things like the sleep button working
(`/usr/bin/pm-suspend' works, Fn+F4... not so much, until I run
/etc/init.d/acpid start once, even if I turn it off afterwards.. why?
beats me) I'll post some comments, bundle up my init stuff, and see
about making it available for folks who want to go full crazy.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: s6-rc - odd warn logging and a best practices question

2015-08-20 Thread Colin Booth
On Thu, Aug 20, 2015 at 1:16 PM, Colin Booth cathe...@gmail.com wrote:
 By the way, I've found a maybe-bug that, if real, is pretty severe.
 `s6-rc -d change all ; some stuff ; s6-rc -u change all' has caused my
 s6-init + s6-rc testbed system to remove the control pipe for my pid 1
 s6-svscan. I need to make sure it wasn't something I did between
 things, and to make sure it wasn't mucked up handling in various
 scripts that I was running. I'm at work right now so I can't test it
 out, but sometime in the next day or so I should have the cycles to
 test it out.

Not a bug in s6-rc or s6 but in some Debian script somewhere. Some
single-user script appears to re-mount all mount points, which has the
net result of causing all file handles into tmpfs mounts to go stale.
That's what's breaking s6-svscan. Once I isolate it, I'll see if I can
avoid calling that script, and if I have to I'll see about moving its
execution somewhere safe.

I am learning way more about the complexities of the distro boot cycle
than I'd ever expected to this week.

Cheers!


-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: s6-rc - odd warn logging and a best practices question

2015-08-20 Thread Colin Booth
On Thu, Aug 20, 2015 at 2:35 AM, Laurent Bercot ska-skaw...@skarnet.org wrote:

  I can't grep the word addition in my current git, either s6 or s6-rc.
 Are you sure it's not a message you wrote? Can you please give me the
 exact line you're running and the exact output you're getting?
  Thanks,

Ugh, it was something I'd hacked into s6-svc early on in the life of
s6-rc to track down some issue I was having with something. I never
committed it and sort of assumed that the next git pull I made would
have complained and forced me to back it out. Apparently git merge got
smarter about unstaged non-conflicts recently.

Mystery solved! I'll reply to the other stuff in the other mail fork.


-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: s6-rc - odd warn logging and a best practices question

2015-08-20 Thread Colin Booth
On Thu, Aug 20, 2015 at 1:57 AM, Laurent Bercot ska-skaw...@skarnet.org wrote:
  Just don't have a notification-fd file. s6-rc will assume your daemon
 is ready as soon as the run script is started. It may spam you with a
 warning on high verbosity levels, but that's it. :)

Yeah, this is for the special case where you have a daemon that
doesn't do readiness notification but also has a non-trivial amount of
initialization work before it starts. For most things doing the below
talked about oneshot/longrun split is best, but sometimes you need to
run that initialization every time (data validators are the most
obvious example).

  If your daemon doesn't support readiness notification, I'd generally
 advise not to pretend it does: even if daemon availability is fast,
 the scheduler can always screw you. So yes, if at all possible, having
 the init in a oneshot and the daemon in a longrun depending on the
 init oneshot is the best way to go, without declaring a notification-fd
 for the longrun. If it's not possible, foregrounding the init, then
 sending a blank notification message, then execing the daemon, is
 probably the least ugly way to proceed.

I was using the readiness signal to enforce the timing between udev's
heavyweight system prep scripts and everything that depends on udev.
Starting udev itself is trivial, and I'm pretty sure that udev doesn't
need to be guaranteed running for other things to start; it just
needs to be running for the preparation steps. Hence that "run the
sysvinit udev script, immediately afterwards stop udev, start it again
supervised" dance. Breaking it out into a pair of atomics is a lot more
elegant. Why didn't I think of that until last night, when I've been
experimenting with this since Monday? Dunno.

  I'm surprised that systemd-udevd doesn't provide notification: that's
 one of the least bad reasons for daemons to integrate with systemd.
 Doesn't it use sd_notify() ?

It does provide notification, but only if you're running under
systemd. At least according to the sd_notify() docs. I'll see about
faking up the environment so sd_notify() is happy and report back.

 Also, it'll be nice to have s6-rc-update, I've been rebooting...
  a lot.


  No need to reboot:

  s6-rc -da change
  for i in `ls -1 $live/servicedirs` ; do rm $scandir/$i ; done
  s6-svscanctl -an $scandir
  rm -rf `basedir $live`/`s6-linkname $live`
  rm $live
  s6-rc-init -l $live -c $newcompiled $scandir
  s6-rc -u change $everythingbundle

  That's more or less what s6-rc-update will do, of course with
 optimizations to avoid restarting everything.

Actually, the more I think about it, the less s6-rc-update will help
me avoid reboots in the short term since part of what I need to get
back is a pristine post-boot environment.

  Power management on Linux laptops is high-level demonology, and mere
 mortals should not dabble in it, lest their souls be consumed. I had a
 friend who tried and came back shaking and drooling... it took him a
 long while to recover. Fortunately, there's *almost* no permanent
 damage to his mind.

HA! I'm pretty sure the failure is in some acpi policy handling glue
code that isn't getting set right. The init.d/acpid script isn't
terribly complicated, I simply need to capture the system state before
and after the init script is run.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: s6-rc - odd warn logging and a best practices question

2015-08-20 Thread Colin Booth
On Thu, Aug 20, 2015 at 10:24 AM, Laurent Bercot
ska-skaw...@skarnet.org wrote:

  Oh, the protocol is complicated too. If I start to implement it,
 there's no stopping, and I'll be running behind systemd every time
 they add something to the protocol, which is exactly what I don't
 want to do.

Sure. And I bet that listening for any message on the socket isn't
good enough since things might be chattery.

  You can enforce a non-race by synchronizing both processes, i.e.
 making the notification listener notify the notification sender
 that it is ready to receive a message. I'm not even joking.
 Notifiception is a thing with the wonderful systemd APIs.

NOW we're talking!

  I see. You could pull those out of the set of services managed by s6-rc
 and just run them sequentially at boot time, until s6-rc-update is out.

Yeah, but then you get into that question of what you do with oneshots
that depend on longruns which are required for initialization... Like
I said, it's a bit of a mess but isn't any more of a mess than someone
who is doing early boot optimization in any other init. Once I've
sorted out all the timing issues (and I think I'm close) it should be
fine.

By the way, I've found a maybe-bug that, if real, is pretty severe.
`s6-rc -d change all ; some stuff ; s6-rc -u change all' has caused my
s6-init + s6-rc testbed system to remove the control pipe for my pid 1
s6-svscan. I need to make sure it wasn't something I did between
things, and to make sure it wasn't mucked up handling in various
scripts that I was running. I'm at work right now so I can't test it
out, but sometime in the next day or so I should have the cycles to
test it out.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: s6-rc - odd warn logging and a best practices question

2015-08-20 Thread Colin Booth
On Thu, Aug 20, 2015 at 8:44 AM, Laurent Bercot ska-skaw...@skarnet.org wrote:
  In that case, yes,
  if { init } if { notification } daemon is probably the best. It
 represents service readiness almost correctly, if service includes
 the initialization.

Cool. Not the most elegant but good to know I was on the right track.
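
For the archives, a minimal sketch of that "if { init } if { notification }
daemon" shape - the paths are placeholders, and it assumes a
notification-fd file containing 3 next to the run script:

#!/command/execlineb -P
# run the initialization first; bail out if it fails
if { /path/to/init-script }
# write a newline to the notification fd (3 here) to claim readiness
if { fdmove -c 1 3 s6-echo "" }
# then exec into the daemon with the original descriptors
fdclose 3
/path/to/daemon

It still declares readiness before the daemon has actually exec'd, which
is exactly the compromise being discussed here.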

 It does provide notification, but only if you're running under
 systemd. At least according to the sd_notify() docs. I'll see about
 faking up the environment so sd_notify() is happy and report back.


  systemd's notification API is a pain. It forces you to have a daemon
 listening on a Unix socket. So basically you'd have to have a
 notification receiver service, communicating with the supervisors -
 which eventually makes it a lot simpler to integrate everything into
 a single binary.
  This API was made to make systemd look like the only possible design
 for a service manager. That's political design to the utmost, and I
 hate that with a passion.

I think only the socket part is fancy systemd-centric design, so
presumably a stupid subscript that takes socket messages and emits
s6-ftrig events could do the reverse of sdnotify_wrapper. I'm thinking
something like s6-ipcserver-socketbinder execing into a backgrounded
puller to s6-ftrig-notify chain. The puller would be something like
s6-ftrig-wait but for generic file descriptors instead of fifo dirs
(this probably exists and if not should be reasonably easy), and
s6-ftrig-notify would handle the actual readiness alarm.

The API is definitely more complicated than the s6 notification one,
but it doesn't seem insurmountable. My solution is a bit racy, though
I'd hope a socket puller would start faster than a daemon, scheduler
whims or no.


  What do you have in that post-boot environment that would be different
 from what you have after shutting down all your s6-rc services and
 wiping the live directory ?

Adjustments to modules, locale and hostname setting, re-seeding the
random device. Basically everything that happens in the single-user
boot stage on distro systems. For example, the udev init script does a
lot of work that can't easily be un-done without a reboot.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: s6-rc plans (was: Build break in s6-rc)

2015-08-13 Thread Colin Booth
On Thu, Aug 13, 2015 at 9:46 AM, Laurent Bercot ska-skaw...@skarnet.org wrote:

  Oh, and btw, I'll have to change s6-rc-init and go back to the
 the directory must not exist model, and you won't be able to
 use a tmpfs as live directory - you'll have to use a subdirectory
 of your tmpfs.

Ah bummer. It was fun while it lasted.

  The reason: as it is now, it's too hard to handle all the failure
 cases when updating live. It's much easier to build another live
 directory, and atomically change what live points to - by renaming
 a symlink. And that can't be done if live is a mount point.

Makes sense. In this case can we get a --livedir=DIR buildtime option
so us suckers using a noexec-mounted /run can relocate things easily
without having to type -l livepath every time we want to interact
with s6-rc?

Cheers!


-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Build Break in s6-rc

2015-08-13 Thread Colin Booth
On Thu, Aug 13, 2015 at 9:37 AM, Laurent Bercot ska-skaw...@skarnet.org wrote:

  Eh... keep a backup of your current source, if you're using it in
 a half-serious environment. The current version uses automatically
 generated services, and the scripts haven't been tested yet, it's
 the first draft.

I run s6-rc on my laptop, which is about as half-serious an
environment as you get.

  I went back and changed the way s6-rc-compile handled producer/logger
 pairs, because logged services were too much of a pain to correctly
 manage in s6-rc-update: it became insanely complex to compute when
 a service directory can be kept and when the service has to be
 stopped and restarted. Doing away with logged services makes the
 update procedure more straightforward.

I may be missing something, but the auto-generated log service doesn't
seem to be a thing yet. Or is that all under-the-hood changes, with the
filesystem interface unchanged?


  But since a pipeline now includes autogenerated services, there
 needs to be a bundle containing everything, for easy takedown. So,
 autogenerated bundle, too. It's beautiful and scary at the same
 time - I feel like complexity can get out of control fast.

Gotcha, makes sense. I'm still glad to get rid of my explicit
service-logger bundle directories. I'm assuming though that an
explicit bundle can call out an autogenerated bundle as a requirement?
So for example an autogenerated sshd bundle (containing sshd-srv and
sshd-log) can be called out in an explicit lan-services bundle. I'd
test it out, but I don't have time right now to re-do all my logger
producer/consumer files so a recompile will break.
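
(For reference, the declaration in the source format that ended up
shipping is pretty small; something like the following, reusing the
hypothetical sshd-srv/sshd-log names from above - check the
s6-rc-compile documentation for the exact file names:

sshd-srv/type:          longrun
sshd-srv/producer-for:  sshd-log

sshd-log/type:          longrun
sshd-log/consumer-for:  sshd-srv
sshd-log/pipeline-name: sshd

s6-rc-compile then generates the sshd bundle containing both halves of
the pipeline.)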

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Build Break in s6-rc

2015-08-13 Thread Colin Booth
On Thu, Aug 13, 2015 at 11:40 AM, Laurent Bercot
ska-skaw...@skarnet.org wrote:
  You like to play with fire. :)
  Until it's released, it's not production-ready by any means.
 Just making sure you're very much aware of that.

It's how I roll. Plus the backout path to a functional system takes a
few minutes in a getty. I'd feel a lot more nervous if I didn't have
physical access to the hardware.

  But now that I have s6-fdholder-daemon and an infrastructure to
 handle dependencies between oneshots and longruns, I don't need
 to use the pipes provided by s6-svscan anymore.

  - Now: the user declares a producer/consumer pair, or more
 generally, a pipeline of services: a service can have both a
 producer and a consumer, and that just means it will be in the
 middle of a pipeline. s6-rc-compile identifies pipelines, adds
 autogenerated oneshots to create pipes and store them into a
 fd-holder, and sneakily modifies the longruns so they retrieve
 the pipes from the fd-holder. It crafts the correct dependencies
 and wraps everything in a bundle, so all the shenanigans are
 invisible from the user. There will still be an indestructible
 pipe between a producer and its consumer; it just won't be held
 by s6-svscan. And there aren't any foobar/log service directories,
 which should make s6-rc-update a lot easier to write.

I'm not sure how I feel about having the indestructibility guarantee
residing in a service that isn't the root of the supervision tree. I
haven't done much with s6-fdholderd but unless there's some extra
magic going on in s6rc-fdholderd, if it goes down it won't be able to
re-establish its control over the overall communications state due to
it creating a fresh socket. I know, I know, it should be fine, but
accidents happen.

In the simple world where s6-svscan was the pipe holder, an accident
that hosed it would also hose the supervision tree, so the logging
guarantee stayed intact. It seems now that if s6-fdholderd gets restarted
and then a full logger process gets restarted (s6-svc -dx $svc-log),
there will be nothing to re-establish the pipe between $svc-svc and
$svc-log since s6-fdholderd has forgotten about it.

It may be indestructible still, but it's a lesser guarantee of
indestructibility.

  The autogenerated bundles are there for the user's convenience,
 they simply represent a whole pipeline, so yes. You can choose how
 they're called, and you can have dependencies to them.
  The only thing you can't do is have an explicit dependency to an
 autogenerated atomic service, with a name that starts with s6rc-.
 Those are part of the s6-rc-compile mechanism, they're not visible
 from the source.

That's fine, I was mostly just making sure that from the compiler's
standpoint an autogenerated service and a manually defined service are
treated the same.

  This is all subject to change before the first release, but I
 think the design is pretty sound, so it probably won't move much.

Cool.



-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Build Break in s6-rc

2015-08-13 Thread Colin Booth
Hi Laurent,

Your commit 979046fdee76d70792750f5a1a9afd2bba5f127f introduced a
build failure in
src/s6-rc/s6-rc-compile.c.

It'll probably get fixed today, but since the git repo is advertised
as a reasonable alternative to downloading tarballs you might want to
start doing work on side branches and merging into master only when
you have stable code. Or, barring that, only pushing to the
git.skarnet.org remote when you're ready.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Bug in ucspilogd v2.2.0.0

2015-08-09 Thread Colin Booth
Hi Laurent,

I'm pretty sure some change in skalibs v2.3.6.0 broke some types of
message handling in ucspilogd. Specifically, a computer I have running
skalibs v2.3.5.1 and s6 v2.1.6.0 is able to read messages sent via the
logger command, whereas a computer I have running skalibs v2.3.6.0 and
s6 v2.2.0.0 cannot.

ucspilogd v2.2.0.0 gives the following when I send a message via logger:
root@radon:~# logger woo
@400055c6fe44103957dc ucspilogd: fatal: unable to read from stdin:
Broken pipe

ucspilogd v2.1.6.0:
root@heliocat:~# logger woo
@400055c6fe7401034cdc 0: 0: user.notice: Aug  9 07:16:32 heliocat: woo

I initially thought it was in the handoff code between s6-ipcserverd
and ucspilogd but after stracing the s6-ipcserverd process and
following forked off children, I'm pretty sure the issue is in
ucspilogd (or, more correctly, the skalibs functions that ucspilogd
wraps since the main program hasn't changed in months).

I haven't yet dug into the skalibs code to see what changed between
those tags, or started bisecting it to find out which commit broke.
However, the difference between a functional and non-functional log
write in strace is the following:

Functional:
[pid 19388] readv(0, [{"<13>Aug  9 07:26:07 cathexis: wo"..., 8191},
{NULL, 0}], 2) = 34
[pid 19388] writev(1, [{"1000: 1000: user.notice: Aug  9 "..., 55},
{NULL, 0}], 2) = 55
[pid 19388] readv(0,
[{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
8158}, {"<13>Aug  9 07:26:07 cathexis: wo"..., 33}], 2) = 0

Dysfunctional:
[pid 31983] readv(0, [{"<13>Aug  8 23:46:57 cathexis: wo"..., 8191},
{NULL, 0}], 2) = 33
[pid 31983] readv(0,
[{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,
8159}, {"<13>Aug  8 23:46:57 cathexis: wo", 32}], 2) = 0
[pid 31983] writev(2, [{"ucspilogd: fatal: unable to read"..., 57},
{NULL, 0}], 2) = 57

Some syslog messages generated via non-logger sources work, such as
auth messages. Those have the same readv, writev, readv pattern.

Let me know if you need more.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Bug in ucspilogd v2.2.0.0

2015-08-09 Thread Colin Booth
On Sun, Aug 9, 2015 at 10:23 AM, Laurent Bercot ska-skaw...@skarnet.org wrote:
 On 09/08/2015 19:12, Colin Booth wrote:

 I haven't experimented with it yet, but I think the messages from
 long-running logger processes are null-separated, just not the last
 line. I'll take a look later today when I have time.


  Ah, that's easy enough to fix. Please try with the latest s6 git
 and tell me if it works for you.

 --
  Laurent

Ok, I was wrong. I set up a little netcat /dev/log reader and there's
no separator at all between messages. At least not one that made it to
netcat. It also looks like the new logger stops reading after the
first \0, and strips all newlines.

The ucspilogd fix works for the single message case which should be
good enough for handling script output. Using logger as a cheap stdout
syslog injector in supervised services seems like a no-go for now, at
least when ucspilogd is handling reception. I'll have to take a look
at how rsyslog decides what the message end is since my test service
logged correctly when rsyslog was pulling on /dev/log.

Cheers!
-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Bug in ucspilogd v2.2.0.0

2015-08-09 Thread Colin Booth
On Sun, Aug 9, 2015 at 6:20 AM, Olivier Brunel j...@jjacky.com wrote:
 On 08/09/15 10:44, Laurent Bercot wrote:
  The path leading to the first invocation of readv() hasn't changed,
 but readv() gives different results. My first suspicion is that logger
 isn't sending the last character (newline or \0) in the second case
 before exiting, which skagetlnsep() interprets as I was unable to
 read a full line before EOF happened and reports as EPIPE.
  Are you using the same version of logger on both machines ?

That would do it. heliocat is running Debian 8, whereas radon is
running Debian unstable. Debian 8 is on bsdutils 2.25.2 and unstable
on 2.26.2.
  Grrr.  If logger starts sending incomplete lines, I may have to change
 the ucspilogd code to accommodate it.

 Had a quick look at this (procrastination  stuff :p) and it seems to me
 this is probably a bug in logger actually. At some point[1] they started
 not to use syslog(3) anymore but implementing things on their own instead.
 However, there's a difference with glibc's implementation, specifically
 when using a SOCK_STREAM the later adds a NUL byte as record terminator,
 which the former does not. Hence there's never a terminating NUL byte
 from logger anymore and ucspilogd fails w/ EPIPE.

 HTH,
 -j

 [1]
 https://github.com/karelzak/util-linux/commit/1d57503378bdcd838365d625f6d2d0a09da9c29d


Thanks, checking logger versions was next on my list, right below going to bed.

Ok, looks like the bsdutils upgrade back in May broke it, the timing
lines up perfectly. Though it's my own damn fault for not noticing
until now.

It's unlikely that logger will get fixed, since rsyslog is able to
properly parse the messages that logger is sending, so from the maintainers'
perspective this probably isn't going to get classified as a bug.
Entertainingly, switching ucspilogd for xargs -0 -- works for
single-shot log messages. Using logger as a supervised log/run doesn't
work, xargs won't print anything until it's told to terminate at which
point it execs into echo and dumps its output.

I haven't experimented with it yet, but I think the messages from
long-running logger processes are null-separated, just not the last
line. I'll take a look later today when I have time.

Cheers!
-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Bug in ucspilogd v2.2.0.0

2015-08-09 Thread Colin Booth
On Sun, Aug 9, 2015 at 12:17 PM, Laurent Bercot ska-skaw...@skarnet.org wrote:
 On 09/08/2015 20:13, Colin Booth wrote:

 Ok, I was wrong. I set up a little netcat /dev/log reader and there's
 no separator at all between messages. At least not one that made it to
 netcat. It also looks like the new logger stops reading after the
 first \0, and strips all newlines.


  ... wat.

I know, right?
cathexis@radon:~/tmp/log/util-linux-2.27-rc1$ printf "a\0a\0b\0c" | logger
@400055c7a8e32ce72bac 1000: 1000: user.notice: Aug  9 12:23:43 cathexis: a

cathexis@radon:~/tmp/log/util-linux-2.27-rc1$ printf "a\na\nb\nc" | logger
@400055c7a9001e5d5e6c 1000: 1000: user.notice: Aug  9 12:24:12
cathexis: a<13>Aug  9 12:24:12 cathexis: a<13>Aug  9 12:24:12
cathexis: b<13>Aug  9 12:24:12 cathexis: c

Actually, with a bit more experimenting it's even weirder than I thought:
# printf "a\nb\0c\nd\n" | logger
@400055c81eaf2d4fdc74 0: 0: user.notice: Aug  9 20:46:19 root:
a<13>Aug  9 20:46:19 root: b<13>Aug  9 20:46:19 root: d

# printf "a\nb\n\0c\nd\n" | logger
@400055c81ee61b5b7c94 0: 0: user.notice: Aug  9 20:47:14 root:
a<13>Aug  9 20:47:14 root: b<13>Aug  9 20:47:14 root: <13>Aug  9
20:47:14 root: d

rsyslog outputs similar, but doesn't drop the newlines:
# printf "a\nb\n\0c\nd\n" | logger
Aug  9 20:43:23 radon root: a
Aug  9 20:43:23 radon root: b
Aug  9 20:43:23 radon root:
Aug  9 20:43:23 radon root: d

So a null stops logger from sending anything until the next newline.
But ucspilogd is what's dropping the newlines.


 I'll have to take a look
 at how rsyslog decides what the message end is since my test service
 logged correctly when rsyslog was pulling on /dev/log.


  Wild guess: they changed logger/rsyslog to send/receive datagrams
 by default. By Linux magic, it still works with ucspilogd, that reads
 a stream, but doesn't know boundaries.

I'd be unsurprised if rsyslog has done datagrams for a while.
omuxsock, the rsyslog log sender module, only does datagrams so I'd be
surprised if imuxsock didn't handle them natively. Hell, they might
have always been sending datagrams but not removing the stream markers
until recently.

Like I said earlier, the ucspilogd change in the s6 HEAD solves all
the ad-hoc logger use issues. Any setup using logger as a
long-running log sender but reading with ucspilogd (not likely) is
still broken. Hope this helps!

Cheers!
-Colin

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Preliminary version of s6-rc available

2015-07-19 Thread Colin Booth
On Fri, Jul 17, 2015 at 10:13 AM, Claes Wallin (韋嘉誠)
skar...@clacke.user.lysator.liu.se wrote:
 On 17-Jul-2015 12:49 am, Colin Booth cathe...@gmail.com wrote:

 Depending on your cron, users might be able to simply put an @reboot
 s6-svscan in their user crontab. I don't see many drawbacks with that.

There's nothing managing the per-user s6-svscan if it dies during
normal system runtime, which defeats the entire purpose of using a
supervision framework in the first place. With process supervision, at
some point your supervision tree must have PID 1 bringing the tree
back up (be it an inittab entry, s6-svscan running as init, runit
managing runsvdir, and so on); otherwise you're only playing tricks with
daemonization. Using @reboot crontab entries is a clever way around
the reboot case, but like I said above, it doesn't protect the
supervision root process outside of that event.
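
(For the system-level case, rooting the tree at PID 1 can be as little
as one inittab line; a sketch assuming a sysvinit-style /etc/inittab and
a /service scandir:

SV:123456:respawn:/command/s6-svscan /service

so init itself respawns the scanner if it ever dies.)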

I actually think that systemd-based systems can have a correctly
supervised non-privileged supervision tree through the use of loginctl
enable-linger and daemon-ish unit files. So you could bring up your
supervision tree that way, or just forego the process supervisor and
write directly against systemd. However, I don't have any systemd hosts
lying around to test that on, and even if I did, s6-rc and systemd
both cover the same operational space.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: Preliminary version of s6-rc available

2015-07-16 Thread Colin Booth
On Thu, Jul 16, 2015 at 1:40 AM, Laurent Bercot ska-skaw...@skarnet.org wrote:
 On 14/07/2015 18:23, Colin Booth wrote:

... bunch of fixes ...
Looks good.


  Well, that's the fundamental asymmetry of run and finish scripts.
 The service is considered up as long as ./run is alive, but that's all:
 as soon as ./run is dead, the service is considered down, whether or
 not ./finish exists and no matter how long it takes to run.

  It may be useful for s6-supervise to report a ./finish exited event,
 and to have an option for s6-svc to wait for that event, but I believe
 this should be different from the 'd' event - 'd' should definitely be
 for when ./run dies.  What do you think ?

You're right, ./run is up, and being in ./finish doesn't count as up.
At work we use a lot of runit and have a lot more services that do
cleanup in their ./finish scripts so I'm more used to the runit
handling of down statuses (up for ./run, finish for ./finish, and down
for not running). My personal setup, which is pretty much all on s6
(though migrated from runit), only has informational logging in the
./finish scripts so it's rare for my services to ever be in that
interim state for long enough for anything to notice.

As for notification, maybe 'd' for when ./run dies, and 'D' for when
./finish ends. Though since s6-supervise SIGKILLs long-running
./finish scripts, it encourages people to do their cleanup elsewhere
and as such removes the main reason why you'd want to be notified on
when your service is really down. If the s6-supervise timer wasn't
there, I'd definitely suggest sending some message when ./finish went
away.


  s6-rc passes the timeout-up or timeout-down value to the forked
 s6-svc. But yes, when there's no service-specific timeout, it would
 probably be a good idea to pass along the global timeout value. Or
 to pass along the min in every case.

Ah, gotcha. I was sending explicit timeout values in my s6-rc commands,
not using timeout-up and timeout-down files. Assuming -tN is the
global value, then passing that along definitely makes sense, if for
no other reason than to bring its behavior in line with the behavior of
timeout-up and timeout-down.


  Those are actually the same. :)
  s6-svc has no timeout management itself. When called with a -U|-D
 option, it rewrites itself into a s6-svlisten1 command that calls
 s6-svc without the option. This is what you're seeing.

Cool. I did my close reading on s6 commands before s6-svlisten was a
thing so I missed (well, forgot) the bit where s6-svc execs into
s6-svlisten1.


  It's contrary to getopt() to allow an option to either be argless or
 take an arg. Think of the default dry-run option as -n0, not -n. :)

Noted. Not too big a deal since it IS only a one-character difference after all.


  I don't think removing the uid 0 restriction on s6-rc-init would
 accomplish what you want. It would mean that some user has access to his
 own private supervision tree along with his own complete service
 database, and manages his own sets of services with s6-rc, including
 a private instance of s6rc-oneshot-runner - in short, duplicating the
 whole s6-rc infrastructure at the user level. It's possible, but
 expensive, and I'm not convinced it would be useful.

That's actually pretty much what I was talking about; I'll expand
on it a bit. You have a custom service that you want to run under
supervision, that gets regular updates from developers, and for
various reasons your setup has an application user with a very
limited scope of allowed interaction (primarily
limited to putting code on a host and then running it). It's trivially
easy to run a setuidgid sub-tree as a service of the main tree which
allows your application user the ability to make changes to their
services without leaking privileges for the system at large. What
isn't easy is all the stuff that supervision is historically bad at,
and that s6-rc (especially the one-shot stuff) is working on fixing.

At work we have that above setup under runit, with a collection of
mid-weight shell scripts to handle interaction between deploy scripts
and runsv. The notification and listen parts of s6 duplicates about
80% of what we have, and s6-rc provides both a convenient wrapper
around s6-svc and an in-supervisor method of dealing with oneshots.


  Users can write their own source directories for service definitions,
 and the admin can take them into account by including them in the
 s6-rc-compile command line. It's not very flexible, but it's secure;
 is there some more flexible functionality that you would like to see ?

Part of my job entails dealing with development servers where
automatic deploys happen pretty frequently but service definitions
don't change too often. So having non-privileged access to a subsection
of the supervision tree is more important than having non-privileged
access to the pre- and post-compiled offline stuff.

By the way, that's less secure than running a full non-privileged

Re: Preliminary version of s6-rc available

2015-07-14 Thread Colin Booth
On Mon, Jul 13, 2015 at 3:20 PM, Laurent Bercot ska-skaw...@skarnet.org wrote:
  Ah, so that's why you didn't like the must not exist yet requirement.
 OK, got it.
  Yeah, mounting another tmpfs inside the noexec tmpfs can work, thanks
 for the idea. It's still ugly, but a bit less ugly than the other choices.
 I don't see anything inherently bad in nesting tmpfses either, it's just a
 small waste of resources - and distros that insist on having /run noexec
 are probably not the ones that care about thrifty resource management.

It's the (least) ugly option that I can think of. Like I said, not
great but better than the alternative. It does give some nice per-user
isolation as well if you're running multiple sub-trees.

  s6-rc obviously won't mount a tmpfs itself, since the operation is
 system-specific. I will simply document that some distros like to have
 /run noexec and suggest that workaround.

And s6-rc shouldn't be responsible for handling the creation and
mounting of its tmpfs, system specific or not. That's the
responsibility of the system administrator or the package maintainer.


  Yes, I'm going to change that. absent was to ensure that s6-rc-init
 was really called early at boot time in a clean tmpfs, but absent|empty
 should be fine too.

A fresh, empty tmpfs is probably cleaner than a freshly created
directory in a dirty tmpfs (like /run can be), at least if you're
running s6-svscan in non-pid1 mode.

  Landmines indeed. Services aren't guaranteed to keep the same numbers
 from one compiled to another, so you may well have shuffled the live
 state without noticing, and your next s6-rc change could have very
 unexpected results.

Everything seemed to work out ok but live-updating stuff without
adjusting the state file seemed dicey.

  But yes, bundle and dependency changes are easy. The hard part is when
 atomic services change, and that's when I need a whiteboard with tables
 and flowcharts everywhere to keep track of what to do in every case.

Yeah, that'll be a bit harder. Good luck with your whiteboarding.


  Please mention them. If you're having trouble with the tools, so will
 other people.

Most of the stuff has been handled with my closer reading of s6-rc -a,
plus the changes to s6-rc list. Plus simply familiarizing myself with
the tools and their output has helped a lot. I did find a few bugs,
documentation or otherwise:

s6-rc-db: [-d] dependencies servicename exits 1 if you pass it a
bundle. Interestingly, all-dependencies servicename shows the full
dependency tree if you pass it a bundle and the docs makes no special
mention of bundles so I'm guessing that the failure when checking
dependencies of bundles is a bug and that the docs are correct.

s6-rc-init.html: Typical usage could be misread by someone who
hasn't been working with s6 for a while as saying that s6-rc-init
should be run before the catch-all logger is set up.

index.html: the Discussion location is listed twice.

s6-rc.html: longrun transitions for non-notification supporting
services should say that the service is considered to be up as soon as
s6-supervise is forked and ./run is executed. This deals with an
ambiguity case for non-supervision experts who may not think of the
run script as part of the service. This might be talked about in the
s6 docs, but it's important and should be repeated if that is the
case.

s6-rc.html: note that s6-rc will block indefinitely when starting
services with notification support unless a timeout is set. Similar to
the above, dry-running commands will tell you what's going on under
the hood, but otherwise it's a bit of a black box.

s6-rc: if you run `s6-rc -utN change service' and the timeout occurs,
s6-rc -da list still reports the service down (as per the docs) but
subsequent runs of `s6-rc -u change service' complain about not being
able to remove the down file. I'd expect a service that timed out on
startup to have the down file since s6-rc-compile.html notes that down
files are used to mark services that s6-rc considers to be down. Maybe
make the removal of the down file the last thing the startup routine
does instead of the first since I'd consider interrupting or killing a
call to s6-rc the same as timing out (and as such shouldn't change the
reported state). -dtN has the same behavior (putting the down file in
place before calling s6-svc) but in that case erring on the side of
down feels correct.

s6-svc: -Dd doesn't seem to take finish scripts into account. Not a
bug per se, but somewhat surprising since a run script is considered
to be part of the service. Initially I thought this was an s6-rc
timeout bug, which is why I noticed it here originally.

s6-rc: Unless there's a really good reason not to, -tN should pass
along its timeout value to the forked s6-svc and s6-svlisten1
processes. If for no other reason than it'll keep impatient
administrators with misbehaving processes and too-low shutdown
timeouts from spawning tons and tons of orphaned s6-svlisten1
processes.


Re: Preliminary version of s6-rc available

2015-07-12 Thread Colin Booth
On Sat, Jul 11, 2015 at 10:59 PM, Laurent Bercot
ska-skaw...@skarnet.org wrote:


  So I decided to publish what's already there, so you can test it and
 give feedback while I'm working on the rest. You can compile service
 definition directories, look into the compiled database, and run the
 service manager. It works on my machines, all that's missing is the
 live update capability.

The requirement for the s6-rc-init live directory to not exist is
awkward if trying to go with the defaults on a distro system since
/run is mounted noexec. It's pretty easy to work around but then the
defaults are broken on distro systems.

It'd be nice if s6-rc-db contents printed a newline after the last
service for single-item bundles:
root@radon:/run/s6# s6-rc-db -l /run/s6/s6-rc/ contents 1
getty-5root@radon:/run/s6# s6-rc-db -l /run/s6/s6-rc/ contents 2
getty-6root@radon:/run/s6# s6-rc-db -l /run/s6/s6-rc/ contents 3
getty-5
getty-6
root@radon:/run/s6#

  Bug-reports more than welcome: they are in demand!

s6-rc -da list doesn't seem to work right. At least, it doesn't work
like I'd expect, which is to show all services that are down. Given
two longruns getty-5 and getty-6, with getty-5 up and getty-6 down,
I'd expect s6-rc -da list to show getty-6 (and s6-rc -ua list to only
show getty-5). Currently -da list and -ua list show the same thing:
root@radon:/run/s6# s6-rc -l /run/s6/s6-rc -da list
getty-5
root@radon:/run/s6# s6-rc -l /run/s6/s6-rc -ua list
getty-5

s6-svstat shows the correct status of the world:
root@radon:/run/s6# s6-svstat service/getty-5/
up (pid 8944) 301 seconds
root@radon:/run/s6# s6-svstat service/getty-6/
down (signal SIGTERM) 206 seconds

`s6-rc -ua change' also doesn't seem to do what I'd expect. `s6-rc -da
change' brings down all running services, `s6-rc -pda change' brings
down all running services and then starts all stopped services.
Following that logic I'd expect `s6-rc -ua change' to start all
stopped services, however it instead appears to do nothing. My guess
is that it's related to the issues with -a above and that -a is only
ever returning the things in the up group.

Not exactly a bug but the docs are wrong: the index page points to
s6-rc-upgrade when it should point to s6-rc-update.

 --
  Laurent

Lastly, I know you're working on it but s6-rc-update will be much
appreciated. Having to tear down the entire supervision tree, delete
the compiled and live directories, and then re-initialize everything
with s6-rc-init is awkward to say the least. Especially with the above
issues involving s6-rc-init and not being able to overwrite the
contents of directories if they exist.

That's everything I've found in an hour or two of messing around. I
haven't done anything with oneshots or larger dependency trees yet, so
far it's just been a few getty processes and some wrapper bundles.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: s6-rc design ; comparison with anopa

2015-04-26 Thread Colin Booth
Apologies to Laurent who is about to get this one twice.

On Mon, Apr 27, 2015 at 12:48:58AM +0200, Laurent Bercot wrote:
  I'll probably add an automatic bundling feature for a daemon and its
 logger; however, a oneshot to create a chroot directory? That's too
 specific. That's not even guaranteed portable :) (chroot isn't in Single
 Unix.) If you want a change from the default daemon+logger configuration,
 you'll have to manually set up your own bundle.
 
OpenSSH, at least on Linux and *BSD, chroots into an empty directory
after forking for your login. That was an example but I think the
question is still valid: if you have a logical grouping of longrun foo,
longrun foo-log, and a oneshot helper bar, where foo depends on foo-log 
and bar, does s6-rc automatically create a bundle contaning foo, 
foo-log, and bar? In the grand scheme of things it doesn't matter since
regardless of the method used to start foo, s6-rc will start foo-log and
bar due to the dependency graph. However, I still think it's a
reasonable question since it sheds light into the expected method of
grouping longruns and oneshots. 

In hindsight, this question could probably have been better asked as
follows: does s6-rc automatically create bundles for each complete
dependency chain? If it doesn't that's totally fine, I'm just trying 
to suss out implementation and interaction details well before the ship
date. Actually, it's probably better if automatic bundle creation
doesn't happen beyond daemons and their loggers, since otherwise we'll 
end up with totally dumb bundle names like:
nginx+nginx-log+php-fpm+php-fpm-log+syslogd+syslogd-log+mysql+mysql-
log+net-enable\(boot\)_bundle :)
 
  What do you mean by user-supplied dependencies ? Every dependency is
 basically user-supplied - in the case of a distro, the user will be the
 packager, but s6-rc won't deduce dependencies from a piece of software
 or anything: the source will be entirely made by people. So I don't
 understand the distinction here.
 
It's mostly a distinction of "does service foo start but maybe not
do the right thing if bar isn't running" (httpd and php-fpm) vs. "does
service foo crash if bar isn't running" (basically everything that
depends on dbus). And yes, in the end everything is user supplied, be it
at the programmer, distro, or end user level, but some people feel like
those differences necessitate using different terminology as well. For 
the record, I don't particularly feel like those distinctions warrent 
different terminology and handling, but I'm sure folks who don't want 
to write their own dependency definitions feel otherwise.

Also, by no means was I trying to imply that s6-rc should deduce 
anything. If anything I was saying that, as an SA, being able to take
implicit dependencies that mostly exist in the form of polling loops in
run scripts and other such hackery (such as my wireless setup) and 
rewrite them as explicit dependencies for the state manager to manage
sounds great and is probably the part of s6-rc that I'm most excited 
about.
 
  The s6-rc tool includes switches to print resolved bundles and resolved
 dependency graphs. I will make it evolve depending on what is actually
 needed. What kind of functionality would you like to see ?
  (There is also a library to load the compiled form into memory, so writing
 a specific binary dumper shouldn't be too hard.)
 
Low-surprise interoperability with standard unix tools mostly. Assuming
the compiled format isn't human readable, having the functionality to do
a human-readable dump to stdout (so people can diff, etc) is totally
fine. If we can hit the compiled db directly with the same tools and get
meaningful results, then all the better.
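
(For what it's worth, the s6-rc-db tool that shipped covers this kind of
dump; something along the lines of:

s6-rc-db -c /etc/s6-rc/compiled list all
s6-rc-db -c /etc/s6-rc/compiled all-dependencies sshd

where the -c argument is whatever compiled directory you gave
s6-rc-compile, and sshd is a placeholder service name.)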

Cheers!
-Colin


Re: s6-rc design ; comparison with anopa

2015-04-25 Thread Colin Booth
On Sat, Apr 25, 2015 at 12:36:24PM +0200, Laurent Bercot wrote:
 On 25/04/2015 09:35, Colin Booth wrote:
 I've been having a hard time thinking about bundles the right way. At
 first they seemed like first-class services along with longruns and
 oneshots, but it sounds more like they are more of a shorthand to
 reference a collection of atomic services than a service in their own
 right. Especially since bundles can't depend on anything it clarifies
 things greatly to think of them as a shorthand for a collection of
 atomic services instead of a service itself.
 
  Yes, that's exactly what a bundle is: a shorthand for a collection of
 atomic services. Sorry if I didn't make it clear enough.
 
Sounds good. I was having problems reconciling a bundle as a full
service and the ability to bring down bundle A and have it change bundle
B if there was overlap in member services. In more concrete terms, if we
have bundle web {httpd, php, sql} and bundle java {tomcat, sql} and we
ask to shut down java, if bundles were first-class services then I'd
expect to have s6-rc bring down tomcat and keep sql running since it's a
component of another bundle that still needs to be up. 
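
(For reference, a bundle declaration in s6-rc source form really is just
that shorthand; a sketch using the hypothetical web and java names
above, with the type/contents layout described in the s6-rc-compile
documentation:

web/type:      bundle
web/contents:  httpd
               php
               sql

java/type:     bundle
java/contents: tomcat
               sql

There's no per-bundle state, so overlapping members like sql only matter
through the atomic services themselves.)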

This actually brings up another question: is there any provision for
automatic bundling? If sshd requires sshd-log and a oneshot to create a
chroot directory, does s6-rc-compile also create a bundle to represent that
relationship, or do we need to define those names ourselves? This is
the inverse of my question about loggers being implicit dependencies of
services.
 
 As long as A depends on B depends on C, if you ask s6-rc (or whatever)
 to shutdown A, the dependency manager should be able to walk the graph,
 find C as a terminal point, and then unwind C to B then finally A. While
 packagers will screw up their dependency graph, they'll screw it up (and
 fix it) in the instantiation direction.
 
  If A depends on B depends on C, and you ask s6-rc to shutdown A, then it
 will *only* shutdown A. Hoever, if you ask it to shutdown C, then it will
 shutdown A first, then B, then C. For shutdowns, s6-rc uses the graph of
 reverse dependencies (which is computed at compile time).
 
That's actually what I meant wrt the order. Mostly it was a comment
about how packagers will screw up dependencies but that it'll get
screwed up in the startup direction.
 
 
 If you have a wireless router running hostapd (to handle monitor mode on
 your radios) and dnsmasq (for dhcp and dns) you're going to want an
 ordering where dnsmasq starts before hostapd is allowed to run. There's
 isn't anything in hostapd what explicitly requires dnsmasq to be running
 (so no dependency in the classic sense) but you do need those started in
 the right order to avoid a race between a wireless connection and the
 ability to get an ip address.
 
  Hmmm.
  If hostapd dies and is restarted while dnsmasq is down, the race condition
 will also occur, right ?
  Since hostapd, like any longrun process, may die at any time, I would
 argue that there's a real dependency from hostapd to dnsmasq. If dnsmasq
 is down, then hostapd is at risk of being nonfunctional. I still doubt
 ordering without dependency is a thing.
 
It depends on the person's setup. In my case it's a user-supplied
dependency, since there's nothing intrinsic to dnsmasq or hostapd that
requires them to run together, which I currently get around by polling for
dnsmasq's status from within the hostapd run script. All in all it's a
semantic difference between dependencies that are needed to start
(dependencies), and dependencies that are needed for correct functioning
but are not needed to run (orderings). The nice part is that while there
is a slim difference between the two, all the mechanisms for handling
dependencies can also handle orderings as long as the dependency tracker
handles user-supplied dependencies. Handling user supplied dependencies
also simplifies the system conceptually since people won't have to track
multiple types of requirements.

One last thing and I'm not sure if this has been mentioned earlier, but
how easy is it to introspect the compiled form without using the
supplied tools? A binary dumper is probably good enough, but I'd hate to 
be in a situation where the only way to debug a dependency ordering 
issue that the compiler introduced is from within the confines of the 
s6-rc-* ecosystem.

Cheers!
-Colin


Re: [PATCH] examples: Fix syslog LOGSCRIPT

2015-03-03 Thread Colin Booth
On Mar 3, 2015 2:35 PM, Olivier Brunel j...@jjacky.com wrote:

 Log lines are actually prefixed with uids from $IPCREMOTEEUID 
$IPCREMOTEEGID,
 so they should be acocunted for in the regexs.

Damn, you're right.

 Also note the need to use \s because, AFAIK, there's no way to use spaces
in the
 regex then, as space is a delimiter for splitting. This is probably
important
 enough to be noted alongside in some README in fact, since that's why I've
 personally kept them in the run file.

I need to double-check the word splitting but there might be a way around
that while keeping the logger args separate (and readable) from the run
script.

Cheers!


Re: s6, execline, skalibs in FreeBSD

2015-02-23 Thread Colin Booth
The s6 port was accepted this morning so the ports collection now has:

skalibs-2.3.0.0
execline-2.0.2.1
s6-2.1.1.1

I'm unlikely to port any of the other skaware packages in the near
future since these are the only three that I run with any regularity
on bsd.

Laurent, I forwent anything related to init-replacement with this
port. Not that it isn't possible but I feel like that's outside the
bailiwick of ports on a bsd system. Especially with the base system
contract that freebsd gives you by default.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


s6, execline, skalibs in FreeBSD

2015-01-31 Thread Colin Booth
Hi all,

I just submitted s6 to the ports collection in Freebsd. I also added
patches to update execline to 2.0.2.0 from 1.08, update skalibs to
2.2.1.0 from 0.47, and to take maintainership of both of them. All
three bugs are in the request pipeline and might take some time to get
through.

It did necessitate breaking the Freebsd port of Paul Jarc's runwhen.
To be fair though, the version of runwhen in ports is 2003.10.31 and
the latest available version doesn't seem compatible with the 2.0.0.0
refactor of skalibs.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


Re: execline: use of define, export, unexport, import

2014-08-25 Thread Colin Booth
On Mon, Aug 25, 2014 at 7:59 PM, John Vogel jvog...@stny.rr.com wrote:
 If I define a variable to some value at the top level of an execline script,
 there seems no way to redefine it.

 Stripped down example:

 #!/command/execlineb
 define var 0
 foreground { echo $var }
 define var 1
 foreground { echo $var }
 export var 2
 import var
 foreground { echo $var }
 unexport var
 export var 3
 import var
 echo $var
 # end of script

 All output is 0 from the first define. If I change the first define, then
 all output will be what ever I set that to.

Define is weird (I've just been playing around with it recently, so someone
should correct me if I'm wrong, please). As Patrick said, define overwrites
all instances of a key with a value. So in the case of your script you've
overwritten var with 0 everywhere it appears.

For example the following script displays 0 then 1 then 0 again:
#!/command/execlineb
define var 0
foreground { echo $var }
define 0 1
foreground { echo $0 }
echo $var

 Am I missing something basic here? If no state is maintained, should the
 definition persist? The perplexing part is that I would have thought that the
 lines that unexported, exported, then imported would force the redefinition.
(un)export is for setting and unsetting environment variables, not injecting
variables into your script. Once you've exported a variable into the environment
you should import it to be used:

#!/command/execlineb
emptyenv
export var "some text"
foreground { env }
foreground { echo $var }
import var
echo $var

Note that import appears to have the same no-clobber powers as define though
export doesn't:
#!/command/execlineb
emptyenv
export var "some text"
export var "other text"
import var
echo $var

outputs "other text", whereas:

#!/command/execlineb
emptyenv
export var "some text"
import var
export var "other text"
import var
echo $var

outputs "some text"

Presumably this is similar to the define issue above. I can't find a good way
to override the imported variable by name. The closest I've found is to use
`importas var2 var ; echo $var2' to pull in the (current) value of var
to the local variable
var2, and then echo that.
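
A minimal sketch of that importas approach, using the same emptyenv/export
setup as above (var and var2 are just placeholder names):

#!/command/execlineb
emptyenv
export var "first value"
export var "second value"
importas var2 var
echo $var2

outputs "second value": importas reads the environment at the moment it
runs and substitutes whatever $var2 tokens remain, so using a fresh name
sidesteps the earlier substitution problem.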


 I will work on redesigning the script I was trying to redefine a variable in.
 Maybe I am making it too complex.

 --
   John

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to
man as it is, infinite. For man has closed himself up, till he sees
all things thru' narrow chinks of his cavern.
  --  William Blake


skalibs and related 2.0

2014-08-20 Thread Colin Booth
For those of us who use (and don't mind) slashpackage conventions for our
non-packaged code, what changes are we going to need once the new s6 stuff
lands? Based on prior comments I'm assuming it'll still be a viable option,
but it'd be nice to know ahead of update day.

Cheers!

-- 
If the doors of perception were cleansed every thing would appear to man
as it is, infinite. For man has closed himself up, till he sees all things
thru' narrow chinks of his cavern.
  --  William Blake