On 17/03/2015 19:56, Harald Becker wrote:
Hi Didier,

On 17.03.2015 19:00, Didier Kryn wrote:
     The common practice of daemons to put themselves in the background
and orphan themselves is starting to be disapproved of by many
designers. I tend to share this opinion. If such behaviour is desired,
it may well be done in the script (nohup), and the "go to background"
feature be completely removed from the daemon proper. The idea behind
this change is to allow the supervisor not to be process #1.

Ack, for the case where the daemon does not allow being used with an external supervisor.

Invoking a daemon from scripts is no problem, but have you ever been in a situation where you needed to maintain a system by hand? Therefore I personally vote for a simple command that auto-backgrounds the daemon, while allowing it to run under a supervisor via a simple extra parameter (e.g. "-n"). This is usually no problem, as the supervisor needs some kind of configuration anyway, where you should be able to add the arguments the daemon is started with. So you enter that parameter just once for use with the supervisor, but save extra parameters for manual invocation.

Long-lived daemons should have both startup methods, selectable by a parameter, so you make nobody's work more difficult than required.

Dropping the auto-background feature would mean saving a single call to fork and maybe an exit. This results in a saving of roughly 10 to 40 bytes in the binary (typical 32-bit x86). Too high a cost to allow both usages?

OK, I think you are right, because it is a little more than a fork: you also want to detach from the controlling terminal and start a new session. I agree that it is a pain to do this by hand, and it is OK if there is a command-line switch to avoid all of it. But that switch must exist.
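To make the trade-off concrete, here is a minimal sketch of both startup methods behind one switch. Names ("-n", `want_foreground`, `daemonize`) are illustrative, not taken from any existing daemon: the default path does the classic fork + setsid + stdio redirection, while "-n" skips all of it for a supervisor.

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: decide from argv whether to stay in the
   foreground ("-n") for a supervisor, or background ourselves. */
static int want_foreground(int argc, char **argv)
{
    for (int i = 1; i < argc; i++)
        if (strcmp(argv[i], "-n") == 0)
            return 1;
    return 0;
}

/* Classic self-backgrounding: fork so the parent can return to the
   shell, setsid() to leave the controlling terminal, and point
   stdio at /dev/null. */
static void daemonize(void)
{
    pid_t pid = fork();
    if (pid < 0)
        exit(1);
    if (pid > 0)
        exit(0);                 /* parent: report success to the shell */
    setsid();                    /* new session, no controlling tty */
    int fd = open("/dev/null", O_RDWR);
    if (fd >= 0) {
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        if (fd > 2)
            close(fd);
    }
}
```

A daemon's main() would then be just `if (!want_foreground(argc, argv)) daemonize();` before its event loop, which is the "few dozen bytes" being discussed.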



     Could you clarify, please: do you mean implementing in netlink the
logic to restart fifosvd? Previously you described it as just a data
funnel.

No, restart is not required: netlink dies when fifosvd dies (or later on, when the handler dies). The supervisor watching netlink may then fire up a new netlink reader (possibly after failure management), and this startup is always done through a central startup command (e.g. xdev).

The supervisor never starts the netlink reader directly, but watches the process it started for xdev. xdev does its initial action (startup code), then chains (exec) to the netlink reader. This may look ugly and unnecessarily complicated at first glance, but it is a well-known practical trick to drop memory resources that are required by the startup code but not by the long-lived daemon. To the supervisor this looks like a single process, which it started and may watch until it exits. So from that view it looks as if netlink created the pipe and started fifosvd, but in fact this is done by the startup code (the difference between the flow of operation and the technical placement of the code).
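The chain trick above can be sketched in a few lines; here /bin/true stands in for the long-lived netlink reader, and the fork stands in for the supervisor's spawn (assumptions, since the real xdev/nldev binaries are not shown in the thread). The key point is that exec() replaces the process image without changing the PID, so the supervisor sees one process from startup to exit:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch of the chain trick: the spawned process runs its startup
   code, then exec() replaces the image, so to the supervisor it is
   still one process from start to exit. */
static int chain_demo(void)
{
    pid_t pid = fork();                   /* the supervisor's spawn */
    if (pid == 0) {
        /* "xdev" startup work (create the pipe, start fifosvd)
           would happen here, then: */
        execl("/bin/true", "true", (char *)NULL);
        _exit(127);                       /* reached only if exec failed */
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The memory benefit comes for free: everything the startup code allocated is discarded when exec() loads the new image.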

I didn't notice this trick in your description. It is making more and more sense :-).

Now look: since nldev (let's call it by its name) is exec'ed by xdev, it remains the parent of fifosvd, and therefore it receives SIGCHLD if fifosvd dies. This is the best way for nldev to watch fifosvd. Otherwise it would have to wait until it receives an event from netlink and tries to write it to the pipe, thereby losing the event and the possible burst following it. nldev must die on SIGCHLD (after piping any available events, though); this is the only "supervision" logic it must implement, but I think it is critical. And the same holds if nldev is launched with a long-lived mdev-i without a fifosvd.



     Well, this is what I thought, but the manual says an empty end
causes end-of-file, not mentioning the pipe being empty.

End-of-file always implies the pipe being empty. Consider a pipe which still has some data in it when the writer closes the write end. If the reader received EOF before all data had been consumed, it would lose data. That would be absolutely unreliable. Therefore EOF is only delivered to the read end once the pipe is empty.
I agree that the other way wouldn't work. Just noticing the manual is wrong/unclear on that point.
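This behaviour is easy to demonstrate: queue data, close the write end, and observe that the reader drains the data first and only then sees EOF (read() returning 0). A minimal sketch:

```c
#include <unistd.h>

/* Closing the write end does not discard queued data: the reader
   first drains the pipe, and only then sees EOF (read() == 0). */
static int pipe_eof_demo(void)
{
    int fds[2];
    if (pipe(fds) < 0)
        return -1;
    if (write(fds[1], "ab", 2) != 2)
        return -1;
    close(fds[1]);                        /* writer gone, data still queued */

    char buf[8];
    ssize_t n1 = read(fds[0], buf, sizeof buf);  /* the queued 2 bytes */
    ssize_t n2 = read(fds[0], buf, sizeof buf);  /* now EOF: returns 0 */
    close(fds[0]);
    return n1 == 2 && n2 == 0;
}
```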


*Does anybody know the exact specification of poll behavior in this
case?*
     My experience with select(), which is roughly the same, is that it
does not detect EOF. And, since fifosvd must not read the pipe, how does
it detect that it is broken?

Not detect? Are you sure you closed all open file descriptors for the write end (a common caveat)? I have never been hit by such a case, except when someone forgot to close all file descriptors of the write end.
You notice that something happened on input (AFAIR), but I'm sure you don't know what; it may be data as well. You must read() to know.

Anyway, you don't want to poll() the pipe unless mdev-i is dead, because you don't want to wake fifosvd for every event. The only way I can see for fifosvd to monitor nldev is to read() the pipe when mdev-i is dead. But if what it reads is not an EOF but an event, that event would be lost for mdev-i, unless it is possible to invoke mdev-i with the first event passed on the command line.

fifosvd should not poll() the pipe; it just does not help. But it should read() it when, and only when, mdev-i is not running.
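The "poll() wakes you but only read() tells you" point can be shown directly. In this sketch the write end is closed before polling; poll() reports an event on the read end (POLLHUP and/or POLLIN depending on the system), but only the subsequent read() returning 0 proves it really is EOF rather than data:

```c
#include <poll.h>
#include <unistd.h>

/* poll() signals activity on a pipe with no writers left, but the
   event could equally be data; read() disambiguates. */
static int poll_then_read_demo(void)
{
    int fds[2];
    if (pipe(fds) < 0)
        return -1;
    close(fds[1]);                        /* no writers left */

    struct pollfd p = { .fd = fds[0], .events = POLLIN };
    poll(&p, 1, 0);                       /* wakes up, but why? */

    char c;
    ssize_t n = read(fds[0], &c, 1);      /* 0 here means EOF */
    close(fds[0]);
    return (int)n;
}
```

Note that this is exactly why fifosvd must not use poll() while mdev-i is alive: the wake-up alone cannot distinguish "writer died" from "event arrived", and reading to find out would steal the event.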

    Didier

_______________________________________________
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox
