However, without any control directive, the result is: s6-log: usage: s6-log [ -d notif ] [ -q | -v ] [ -b ] [ -p ] [ -t ] [ -e ] [ -l linelimit ] logging_script Though running s6-log without a control directive is probably a little silly, the requirement to have one may be worth mentioning in the doc.
Again, I cannot reproduce that, either on Linux or on FreeBSD. Running s6-log without a control directive works as intended for me. Can you please paste the exact command line you're running that causes the issue for you?
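For the record, here is a minimal invocation with no control directives at all that works here (a sketch only; it assumes s6-log is installed, and the /tmp/test-logdir path is hypothetical). The logging script is just the logdir, which is an action directive, so the control-directive defaults apply:

```shell
# Create a logdir, then feed one line to s6-log with an empty set of
# control directives: the script consists of a single action directive.
mkdir -p /tmp/test-logdir
printf 'hello\n' | s6-log /tmp/test-logdir
# The line should end up in /tmp/test-logdir/current.
cat /tmp/test-logdir/current
```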
Aside: I had originally placed ErrorLog "|/usr/local/bin/s6-log -b n32 s50000 S7000000 /var/log/httpd-error T !'/usr/bin/xz -7q' /var/log/httpd-error" into apache24, which worked well in testing (one httpd), but of course in production there are lots of httpd processes that do NOT use the parent for logging errors, so locking is a problem.
Locking won't be a problem unless your services are logging lines that are longer than (at least) 4 kB. For lines that are shorter than 4 kB, writing/reading a line through a pipe will be done atomically. In a normal Apache logging configuration, lines won't be too long, so you'll be fine.
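That 4 kB figure is the PIPE_BUF limit: POSIX guarantees that a single write() of up to PIPE_BUF bytes to a pipe is atomic, and only requires PIPE_BUF to be at least 512 bytes; Linux defines it as 4096. You can check the value on your own system:

```shell
# Print the pipe atomicity limit for the filesystem hosting /.
# On Linux this prints 4096, matching the ~4 kB limit mentioned above.
getconf PIPE_BUF /
```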
Because I have three websites (3x error files, 3x access files), I was looking at using 6 pipelines into two s6-log processes and regexes to route the content (hence my original example). Is this a good use of resources, or is it better to pipeline (funnel) each stream to its own s6-log?
It's entirely your choice. The s6-log process doesn't take a lot of resources on its own, so my default choice would be to use a s6-log process per log stream - because it's always easier to merge logs than it is to separate them. If your priority is to use the least amount of CPU, or if you're not sure, definitely use more s6-log processes and less regex matching. But if your priority is to use as little RAM as possible, you'll probably get slightly better results by funneling several log streams into one s6-log process and using some regex matching. I have not profiled this, though. -- Laurent
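For what it's worth, the funneling variant could look something like this (a sketch only; the site1:/site2: prefixes, the logdirs, and the rotation settings are hypothetical, and it assumes each stream tags its lines with a distinguishing prefix before they are merged):

```shell
# One s6-log process reading the merged stream on stdin and routing
# lines by prefix. Selection state persists across action directives,
# so each logdir is preceded by "deselect all" (-.*) plus a +regex
# that reselects only the lines belonging to that logdir.
s6-log -b n20 s1000000 \
  -.* +^site1: /var/log/site1 \
  -.* +^site2: /var/log/site2
```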
