On Thu, Mar 22, 2018 at 05:10:58PM +0000, Laurent Bercot wrote:
> Having one stream per syslog client is a good thing per se because
> it obsoletes the need to identify the client in every log record;
> but the killer advantage would be to do away with system-wide
> regular expressions for log filtering, and that's not something
> we have yet. Even when you pipe "s6-ipcserver ucspilogd" into
> s6-log, your s6-log script is the same for every client, so the
> regexes need to be global. A real improvement would be to have
> a different log script for every client connection, so the log
> filtering would really be local; but I haven't yet thought about a way
> to design such an architecture. That's the main reason why I haven't
> much pushed for SOCK_STREAM syslog() in musl; if we can come up with a
> syslogd scheme that works without any global regexes, then we'll
> have a real case for SOCK_STREAM adoption. Until then, socklog works.

Assuming the number of distinct syslog logging scripts is fairly small
(a few for daemons on an anticipated list, plus perhaps one catch-all
for everything else; I think most syslog users already run a scheme
like this in practice), what about setting up a group of s6-log
consumer services, and using a chainloading program (akin to
s6-tcpserver-access) under s6-ipcserver to dynamically decide which
consumer each client connection gets routed to (by interacting with
s6rc-fdholder)?
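For concreteness, the server side might look roughly like the sketch
below. "syslogd-access" is an imagined dispatcher (it does not exist
yet; it is the s6-ipcserver analogue of s6-tcpserver-access suggested
above), and the socket path is arbitrary:

  #!/bin/execlineb -P
  # Accept syslog connections on a UNIX socket, one child per client.
  s6-ipcserver -- /dev/log
  # Imagined chainloader: inspect the client credentials provided by
  # s6-ipcserver, fetch the matching consumer pipe from s6rc-fdholder,
  # make it this process's stdout, then exec into the next program.
  syslogd-access
  # Decode the syslog line format and write into the chosen consumer.
  ucspilogd

Each s6-log consumer would then be an ordinary longrun reading from
its own pipe, so its filtering directives stay entirely local to that
consumer, with no global regexes anywhere.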
(I think this scheme, with some variations, could also be usable with
services that produce multiple output streams, like Apache, which was
recently discussed on this mailing list. This would be particularly
easy if such services can be configured to log to /proc/self/fd/[num]:
then the user can simply use s6-fdholder-retrieve when chainloading
into the daemon.)

-- 
My current OpenPGP key:
RSA4096/0x227E8CAAB7AA186C (expires: 2020.10.19)
7077 7781 B859 5166 AE07 0286 227E 8CAA B7AA 186C
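P.S. A rough sketch of a run script for such a multi-stream daemon,
to make the /proc/self/fd idea concrete. The fd-holder socket path,
the pipe identifier, the daemon name and the descriptor number are
all assumptions; the read end of the pipe would be held by the
matching s6-log consumer:

  #!/bin/execlineb -P
  # Retrieve the write end of the extra log pipe from the fd holder.
  s6-fdholder-retrieve /run/fdholder/s "pipe:httpd-error-w"
  # Hypothetical: assuming the retrieved pipe ends up on descriptor 6,
  # the daemon can then be configured to log to /proc/self/fd/6.
  httpd-foreground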