On 6/8/2015 11:55 AM, Laurent Bercot wrote:
> On 08/06/2015 16:00, Avery Payne wrote:
>> This is where I've resisted using sockets.  Not because they are bad
>> - they are not.  I've resisted because they are difficult to make
>> 100% portable between environments.  Let me explain.

>  I have trouble understanding several points of your message.
>
>  - You've resisted using sockets. What does that mean? A daemon
> will, or will not, use a socket; as an integrator, you don't have
> much say on the matter.
I'm not specifically speaking about a socket required by a daemon. I'm talking about using sockets for activation via UCSPI, similar to the old inetd concept.
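As a concrete illustration of the inetd-style UCSPI activation being discussed (not from the thread; it assumes djb's ucspi-tcp is installed, and the handler name is made up), a run script can hand the listening socket entirely to the UCSPI server:

```shell
#!/bin/sh
# Run script sketch: tcpserver owns the listening socket on port 25
# and spawns one instance of the handler per connection, inetd-style,
# so the daemon itself never has to manage the socket.
# -R: no IDENT lookup, -H: no DNS lookup, -l0: skip local-name lookup.
exec tcpserver -RHl0 0 25 my-smtp-handler
```

The daemon only ever sees a connected socket on stdin/stdout, which is exactly what makes the choice of UCSPI implementation an integrator's problem rather than the daemon's.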

>  - What tools are available. What does that have to do with
> daemons using sockets? UCSPI tools will, or will not, be available,
> but daemons will do as they please. If your scripts rely on UCSPI
> tools to ease socket management, then add a package dependency -
> your scripts need UCSPI tools installed, end of story.
Until now, I have been able to avoid adding outside requirements. Each external tool requirement looks "lightweight and harmless", but you know as well as I do that it isn't: every added dependency increases complexity and reduces the chance that the code can be ported.

I've tried, really, really hard to avoid this when possible. It probably doesn't seem that way from the outside looking in, but that has been the intent. Some of my decisions will appear silly; this is a learning process, and as I go, silly decisions have been made and much has been learned. I don't claim to be an expert in any sense, just someone who took the time to work with something.

> Dependencies are not a bad thing per se, they just need to be
> controlled and justified.
>  "UCSPI sockets" does not make sense. You'll have Unix sockets and
> INET sockets, and maybe one or two esoteric things such as netlink.
> UCSPI is a framework that helps manipulate sockets with command-line
> utilities. Use the tools or don't use them, but I don't understand
> what your actual problem is.
The problem is that there is no assurance that UCSPI tools are /available/ at the point of installation, which complicates writing a shell script around them: I can't simply hard-code a tool name and expect things to work out of the box. Compounding that, UCSPI tools are not installed as a standard part of most software packages, so while something like runit ships a "standardized set" of tools I can count on being there, Gerrit's version of UCSPI may not be present.

I got around this problem with the framework tools by abstracting away the toolset with symlinks and fall-back behavior, and to support UCSPI properly I would most likely do the same again. I'm not happy with that decision, but it works across the widest range of tools and has the least impact. It also makes it easier to extend support to other frameworks via the same abstraction.
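The fall-back idea described above can be sketched as a small POSIX-sh helper (my sketch, not code from the project; the candidate tool names in the usage line are illustrative): scan a caller-supplied preference list and use the first tool actually on PATH, so scripts never hard-code a single UCSPI implementation.

```shell
# pick_tool: print the first of its arguments that exists on PATH.
# Returns nonzero if none of the candidates is installed.
pick_tool() {
  for t in "$@"; do
    if command -v "$t" >/dev/null 2>&1; then
      printf '%s\n' "$t"
      return 0
    fi
  done
  return 1
}

# Usage sketch: prefer s6, fall back to ucspi-tcp or ipsvd.
# server=$(pick_tool s6-tcpserver tcpserver tcpsvd) || exit 1
```

Symlinking the winner into place once at install time, as the project does for its other framework tools, avoids paying the lookup cost on every service start.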


>>  So where do the sockets live?  /var/run? /run?  /var/sockets?
>> /insert-my-own-flavor-here?

>  How about the service directory of the daemon using the socket?
> That's what a service directory is for.
Also true. I was just pointing out that it's yet another decision that has to be made. And as I pointed out elsewhere, things like anopa require some custom work to wedge the definitions in, which complicates the process. Adding UCSPI support complicates it further. Call me whiny for not wanting to put in more effort, if you like; I'll admit to it on this specific topic.
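For what it's worth, Laurent's suggestion is a one-liner in practice. A run script sketch (assuming s6 is installed; the daemon name is made up) that keeps the Unix socket inside the service directory itself, so no system-wide location ever has to be chosen:

```shell
#!/bin/sh
# Run script sketch: s6-ipcserver creates and owns ./socket, a Unix
# socket living in this service directory, and runs one instance of
# the daemon per connection. Removing the service directory removes
# the socket with it - no /run vs /var/run decision needed.
exec s6-ipcserver ./socket my-daemon
```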


>> * Make socket activation an admin-controlled feature that is disabled
>> by default.  You want socket activation, you ask for it first. The
>> admin gets control, I get more headache, and mostly everyone can be
>> happy.

>  If all this fuss is about socket activation, then you can simply
> forget it altogether.
Already did. :)

>> As a side note, I'm beginning to suspect that the desire for "true
>> parallel startup" is more of a "mirage caused by desire" rather than
>> by design.
>  At least, if by "parallel startup" you mean "start things as soon as
> they can be started without risk, without needless waiting times".

The question was along the lines of "sure, it's something we can do, but should we do it in the first place?" There is value in starting things quickly when possible, but the parallel start I'm talking about is the "launch two or more processes at once and get a massive speed gain from multiple cores" line of thinking. That kind of magic makes sense when you're running a farm of 1,000 web servers; it makes zero sense when you have a large NAS/SAN and you're forcing an fsck-on-reboot as the default behavior because the data is too valuable to leave to chance. Put more briefly: if you have to wait on external factors anyway, what's the point of multi-core parallel launching?
