On 6/8/2015 10:44 AM, Steve Litt wrote:
Just so we're all on the same page, am I correct that the subject of
your response here is *not* "socket activation", the awesome and
wonderful feature of systemd.
You're simply talking about a service opening its socket before it's
ready to exchange information, right?
That is my understanding, yes. We are discussing using UCSPI to hold a
socket for clients to connect to, then launching the service and
connecting the socket on demand; as a by-product, the client is assumed
to block on the socket while the launch is occurring. Of course, to
make this work, there is an implicit assumption that the launch
distinguishes "service is up" from "service is ready".
Isn't this all controlled by the service? sshd decides when to open its
socket: The admin has nothing to do with it.
UCSPI is basically the inetd concept re-done daemontools style. It can
be a local socket, a network socket, etc. So the UCSPI program would
create and hold the socket; upon connection, the service spawns.
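To make that concrete, a run-script sketch of the hold-the-socket
arrangement, assuming djb's ucspi-tcp package supplies tcpserver; the
port and the inetd-mode sshd invocation are illustrative, not anything
the project currently ships:

```shell
#!/bin/sh
# Sketch only: tcpserver (from ucspi-tcp) binds and holds the listening
# socket; sshd is spawned only when a client actually connects.
# "-i" puts sshd in inetd mode, serving one connection on stdin/stdout.
exec tcpserver -v 0 22 /usr/sbin/sshd -i
```

Until the first connection arrives, nothing but tcpserver is running;
clients that connect early simply block until sshd answers.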
[Snip 2 paragraphs discussing the complexity of sockets used in a
certain context]
If I were to write support for sockets in, I would guess that it
would probably augment the existing ./needs approach by checking for
a socket first (when the feature is enabled), and then, failing to
find one, proceeding to peer-level dependency management (when that
is enabled).
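Guessing at the shape of that check: look for a UCSPI-held socket
first, and only fall back to the peer-level ./needs handling when none
exists. The socket path and the peer_check helper below are
hypothetical illustrations, not the project's real interface:

```shell
#!/bin/sh
# has_socket SERVICE_DIR: true if the service exposes a unix socket
# at an assumed, illustrative location.
has_socket() {
    [ -S "$1/socket" ]
}

check_dep() {
    if has_socket "$1"; then
        # A held socket exists: a connecting client simply blocks
        # until the service behind it is ready, so we are done here.
        return 0
    fi
    # No socket: fall back to peer-level dependency management.
    peer_check "$1"   # hypothetical stand-in for the existing peer code
}
```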
Maaaaannnnn, is all this bo-ha-ha about dependencies?
Sequencing, actually; I'm just mixing a metaphor here, in that "my
version" of dependencies is sequential and self-organizing, but not
manually ordered. Order is obtained by sequentially walking the tree,
so while you have a little control by organizing the relationships, you
don't have any control over which relationship launches first at a given
level.
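A minimal sketch of that sequential tree walk, with a stubbed-out
start_one and an assumed ./needs layout (both hypothetical):

```shell
#!/bin/sh
# Sketch only: walk a service's ./needs tree depth-first, starting
# dependencies before dependents. Directory order breaks ties, so
# nothing at a given level is manually ordered.

start_one() {
    # Stub for illustration; a real version would hand the directory
    # to the supervisor and skip services that are already running.
    echo "start $(basename "$1")"
}

walk_tree() {
    if [ -d "$1/needs" ]; then
        for dep in "$1"/needs/*; do
            [ -e "$dep" ] || continue
            # entries are symlinks to other service directories
            walk_tree "$(readlink -f "$dep")"
        done
    fi
    start_one "$1"
}
```

Calling walk_tree on a service directory emits its dependencies
leaf-first; note this sketch does not guard against dependency cycles.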
=====================================
if /usr/local/bin/networkisdown; then
sleep 5
exit 1
fi
exec /usr/sbin/sshd -d -q
=====================================
Is this all about using the existence of a socket to decide whether to
exec your service or not? If it is, personally I think it's too
generic, for the reasons you said: on an arbitrary service,
perhaps written by a genius, perhaps written by a poodle, having a
socket open is no proof of anything. I know you're trying to write
generic run scripts, but at some point, especially with dependencies on
specific but arbitrary processes, you need to know how the process
works and the specific environment in which it's working.
And it's not all that difficult, if you allow a human to do it. I think
that such edge case dependencies are much easier for humans to do than
for algorithms to do.
Oh, don't get me wrong, I'm saying that the human should not only be
involved but also have a choice. Yes, I will have explicit assumptions
about "X needs Y" but there's still a human around that can decide if
they want to flip the switch "on" to get that behavior.
If this really is about recognizing when a process is fully functional,
because the process being spawned depends on it, I'd start collecting a
bunch of best-practices, portable scripts called ServiceXIsDown and
ServiceXIsUp.
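Sketching what one of those scripts might look like: a hypothetical
"SshdIsUp" probe that checks a pidfile and signals the process. The
pidfile path is an assumption, and a stronger probe would also attempt
a TCP connect to the sshd port:

```shell
#!/bin/sh
# Hypothetical "SshdIsUp" along the lines suggested above: returns 0
# if sshd appears up, 1 if down. Pidfile path is an assumed default,
# overridable for testing.
sshd_is_up() {
    pidfile="${1:-/var/run/sshd.pid}"
    [ -r "$pidfile" ] || return 1           # no pidfile: treat as down
    pid=$(cat "$pidfile") || return 1
    kill -0 "$pid" 2>/dev/null || return 1  # signal 0: existence check only
    return 0
}
```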
This is of passing interest to me, because a lot of that accumulated
knowledge can be re-implemented to support run scripts. I may write
about that separately in a little bit.
Sorry for the DP101 shellscript grammar: Shellscripts are a second
language for me.
The project is currently written in shell, so you're in good company.
Anyway, each possible dependent program could have one or more
best-practice "is it up" type test shellscripts. Some would involve
sockets, some wouldn't. I don't think this is something you can code
into the actual process manager, without a kudzu field of if statements.
It wouldn't be any more difficult than the existing peer code. Yes, I
know you peeked at that once and found it a bit baroque but if you take
the time to walk through it, it's not all that bad, and I'm trying hard
to make sure each line is clear about its intention and use.
Regarding an older comment that was made about relocating peer
dependencies into a separate script, I'm about 80% convinced to do it,
if only to make things a little more modular internally.
[snip a couple paragraphs that were way above my head]
Of course, there are no immediate plans to support UCSPI, although
I've already made the mistake of baking in some support with a bcron
definition. I think I need to go back and revisit that entry...
I'm a big fan of parsimonious scope and parsimonious dependencies, so
IMHO the less that's baked in, the better.
The minimum dependencies are there. If anything, my dependencies are
probably lighter than most - nothing is baked into the shell scripts
(i.e. no explicit "start service X" statements written into them
outright), and the dependencies themselves are simply symlinks that can
be changed.
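As an illustration of the symlinks-as-dependencies point; the $SVDIR
location and the service names here are made up, not the project's
actual layout:

```shell
#!/bin/sh
# Sketch only: each entry in a service's ./needs directory is a
# symlink to another service directory, so rewiring a dependency
# never means editing a script.
SVDIR="${SVDIR:-/tmp/sv-demo}"   # illustrative location

mkdir -p "$SVDIR/network" "$SVDIR/sshd/needs"

# "sshd needs network" is one relative symlink:
ln -sf ../../network "$SVDIR/sshd/needs/network"

# dropping the dependency later is just: rm "$SVDIR/sshd/needs/network"
```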
As a side note, I'm beginning to suspect that the desire for "true
parallel startup" is more of a "mirage caused by desire" than a
result of design. What I'm saying is that it may be more of an ideal
we aspire to than a design that was thought through. If you have
sequenced dependencies, can you truly gain a lot of time by
attempting parallel startup? Is the gain really worth the effort?
Can we even speed things up when fsck is deemed mandatory
by the admin for a given situation? Questions like these make me
wonder if this is really a feasible feature at all.
Avery, I'm nowhere near your knowledge level on init systems, but I've
wondered that myself. A 2 second boot would be nice, but at what cost?
For that matter, if there is a requirement to boot in less than 2
seconds, you (a) are probably doing something wrong, since redundant
services/servers elsewhere could be used, or (b) have a special edge
case, such as embedded hardware on an orbital space station, where the
next "service call" costs several million dollars and is planned months
in advance. Both situations are well beyond normal.
Plus there's this: Even original daemontools is nowhere near serial:
Correct me if I'm wrong, but I believe that with daemontools, svscan
keeps spinning, testing and trying, while supervise modules are
instantiating their daemons.
Correct, that is my understanding as well.
Sometimes, in my more cynical moods, I wonder whether "parallel
instantiation" is less of a helpful feature than it is a marketing
bulletpoint, much like "magnesium paddle shifter."
"Shiny paddle shifters, for great win."