On 13.03.2015 12:41, Michael Conrad wrote:
On 3/13/2015 3:25 AM, Harald Becker wrote:
This splits the work of one big process into separate parts, connected by an
interprocess communication method. Using a named pipe (fifo) is the proven
Unix way to do this ... and it allows #2 without blocking #1 or #0.
Multiple processes writing into the same fifo is not a valid design.
Who told you that? It is *the* proven N-to-1 IPC mechanism in Unix.
Stream-writes are not atomic, and your message can theoretically get
cut in half and interleaved with another process writing the same
fifo. (in practice, this is unlikely, but still an invalid design)
This is not completely correct:
picked out of Linux pipe manual page (man 7 pipe):
---snip---
O_NONBLOCK disabled, n <= PIPE_BUF
All n bytes are written atomically; write(2) may block if there is not
room for n bytes to be written immediately
---snip---
As long as the message written to the pipe/fifo is no larger than PIPE_BUF,
the kernel guarantees the write is atomic. Message mixing only happens when a
single message is larger than PIPE_BUF, or when it is written in several
pieces (e.g. fprintf without switching to line-buffered mode). POSIX requires
PIPE_BUF to be at least 512 bytes; on Linux it is 4096 (the total pipe
capacity on modern kernels is 64k, but atomicity is only guaranteed up to
PIPE_BUF).
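Just to illustrate the point (a rough sketch only, not mdev code; the fifo
path and helper name are made up): a writer that puts the whole event into
one write() of at most PIPE_BUF bytes can never have its message interleaved
with another writer's:
---snip---
/* sketch: emit one event message to a shared fifo with a single write().
 * As long as the message fits into PIPE_BUF, the kernel keeps it atomic,
 * so concurrent writers cannot interleave inside one message. */
#include <limits.h>     /* PIPE_BUF */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

/* example path only, not a fixed interface */
#define EVENT_FIFO "/dev/mdev.fifo"

static int send_event(const char *action, const char *devpath, unsigned long seq)
{
    char msg[PIPE_BUF];
    int len = snprintf(msg, sizeof(msg), "%lu %s@%s\n", seq, action, devpath);

    if (len < 0 || len >= (int)sizeof(msg))
        return -1;                      /* message would not be atomic */

    int fd = open(EVENT_FIFO, O_WRONLY);
    if (fd < 0)
        return -1;

    /* one write() for the whole message -> kernel keeps it in one piece */
    int ret = (write(fd, msg, len) == len) ? 0 : -1;
    close(fd);
    return ret;
}
---snip---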
If you want to do this you need a unix datagram socket, like they use
for syslog.
Socket overhead is higher than writing to a pipe, not only in code size but
even more in the CPU cost of passing the messages.
It is also a broken approximation of netlink because you don't
preserve the ordering that netlink would give you, which according to
the kernel documentation was one of the driving factors to invent
it.
Sure. You say netlink is the better solution; I say so too, but next door you
may find someone who dislikes using netlink. We are not living in a perfect
world.
Ordering is handled differently in mdev, and that shall stay as it is. My
approach can't solve every single problem with this method, but that is up to
those who want to stick with it; they should still gain from the speed
improvement and suffer fewer race conditions (each device operation is
handled without being mixed with other device operations, as happens with
pure parallelism). In addition, the hotplug helper gets faster and exits
really early compared to current mdev (or your approach). This should reduce
system pressure and event reordering, but will indeed not avoid it completely
(it still needs to be synchronized) ... but I got a different idea: I heard
the kernel provides a sequence number, which mdev uses for synchronization.
Maybe we should just send the messages to the pipe as fast as possible, but
prefix them with the event sequence number. The parser reads a message,
checks the sequence number, and pushes out-of-order messages onto a holding
list until the expected message arrives (or some timeout expires, as done in
mdev, but without reading/writing a file).
Oh, I think the sequence number info is in the docs/mdev.txt description,
including how this is done in mdev.
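Roughly what I have in mind on the parser side (only a sketch with made-up
names, nothing of this exists in mdev today): park events that arrive with a
too-high sequence number and release them once the expected one shows up:
---snip---
/* sketch: reorder events by the kernel sequence number on the consumer
 * side.  Too-early events are parked in a small list and released once
 * the expected number arrives. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pending {
    unsigned long seq;
    char *msg;
    struct pending *next;
};

static struct pending *backlog;   /* events that arrived too early */
static unsigned long next_seq;    /* next sequence number we expect */

/* stand-in for the real device handling */
static void handle_event(const char *msg)
{
    printf("handle: %s\n", msg);
}

static void flush_backlog(void)
{
    for (;;) {
        struct pending **pp = &backlog, *p;
        while ((p = *pp) != NULL && p->seq != next_seq)
            pp = &p->next;
        if (!p)
            return;               /* expected event not buffered yet */
        *pp = p->next;
        handle_event(p->msg);
        free(p->msg);
        free(p);
        next_seq++;
    }
}

void dispatch(unsigned long seq, const char *msg)
{
    if (seq == next_seq) {        /* in order: handle right away */
        handle_event(msg);
        next_seq++;
        flush_backlog();          /* later events may already be parked */
    } else if (seq > next_seq) {  /* too early: park it */
        struct pending *p = malloc(sizeof(*p));
        if (!p)
            return;
        p->seq = seq;
        p->msg = strdup(msg);
        if (!p->msg) {
            free(p);
            return;
        }
        p->next = backlog;
        backlog = p;
    }
    /* seq < next_seq: duplicate or already skipped, drop it.  A real
     * version would also bump next_seq after a timeout so one lost
     * event cannot stall the queue forever. */
}
---snip---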
If someone really wants a netlink solution they will not be happy
with a fifo approximation of one.
You missed the fact that my approach allows free selection of the mechanism.
Choosing netlink means using netlink, as it should be. The event listener
part is as small as possible and writes to the pipe, which fires up a
parser/handler to consume the event messages.
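Something like this is about all the netlink listener part would have to do
(again only a sketch under my assumptions: error handling trimmed, the fifo
path is just an example, and the raw uevent still contains NUL-separated
fields the parser side has to cope with):
---snip---
/* sketch: tiny listener that receives kernel uevents over netlink and
 * forwards each one as a single message into the fifo; all parsing and
 * handling happens in the separate consumer process. */
#include <linux/netlink.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    struct sockaddr_nl addr;
    int nl = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
    int out = open("/dev/mdev.fifo", O_WRONLY);  /* example path only */

    if (nl < 0 || out < 0)
        return 1;

    memset(&addr, 0, sizeof(addr));
    addr.nl_family = AF_NETLINK;
    addr.nl_groups = 1;                          /* kernel uevent group */
    if (bind(nl, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return 1;

    for (;;) {
        char buf[4096];                          /* <= PIPE_BUF on Linux */
        ssize_t len = recv(nl, buf, sizeof(buf), 0);
        if (len <= 0)
            continue;
        /* forward the raw uevent in a single write() so it stays atomic */
        write(out, buf, len);
    }
}
---snip---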
Where is the approximation there? The kernel hotplug helper mechanism is a
different method, but it is also available for those who like to use it.
Either way there will only be some unused code (unless it is opted out in the
config).
The difference is that the default config can include both mechanisms in
pre-built binaries. The user can choose and test the mechanism he wants, and
then possibly build a specific version with the unwanted stuff opted out.
--
Harald