I just attempted to link an apache24 instance to its log files via a
bundle, which s6-rc-compile rejects.

The approach attempted was to chain:
1. apache24 (longrun), declared producer-for apache24-log
2. apache24-log (bundle), declared consumer-for apache24, whose
contents are the following two longruns for logging:
3. apache24-access-log (longrun) and apache24-error-log (longrun)
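In s6-rc source-directory terms, the rejected setup could be sketched
like this (the directory layout and temp location are illustrative
assumptions, not taken from the original post):

```shell
#!/bin/sh
# Hypothetical sketch of the rejected source tree; all paths are assumptions.
SRC=$(mktemp -d)

# 1. apache24, a longrun producing for the bundle
mkdir -p "$SRC/apache24"
echo longrun      > "$SRC/apache24/type"
echo apache24-log > "$SRC/apache24/producer-for"

# 2. apache24-log, a bundle declared as consumer - this is what
#    s6-rc-compile rejects: only atomic services can be consumers.
mkdir -p "$SRC/apache24-log"
echo bundle   > "$SRC/apache24-log/type"
echo apache24 > "$SRC/apache24-log/consumer-for"
printf '%s\n' apache24-access-log apache24-error-log \
    > "$SRC/apache24-log/contents"
```

Running s6-rc-compile on a tree like this fails, since consumer-for is
only meaningful for atomic (longrun) services.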

Producers and consumers have to be atomic services. You cannot
declare that a bundle is a consumer.


Is it envisaged that s6-rc will enable something like this in the
future, or will the following method remain the way to do it:

1. apache24 (longrun), producer-for apache24-access-log
-. apache24-log (bundle) [ only for admin, though largely redundant ]
2. apache24-access-log (longrun), consumer-for apache24, producer-for
apache24-error-log
3. apache24-error-log (longrun), consumer-for apache24-access-log

The link between items 2 and 3 is fictional, as is the absence of a
connection between items 1 and 3.
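As a sketch, the chained workaround would amount to source definitions
like these (names and the temp location are illustrative assumptions):

```shell
#!/bin/sh
# Sketch of the chained workaround; paths are illustrative assumptions.
SRC=$(mktemp -d)

mkdir -p "$SRC/apache24"
echo longrun             > "$SRC/apache24/type"
echo apache24-access-log > "$SRC/apache24/producer-for"

mkdir -p "$SRC/apache24-access-log"
echo longrun             > "$SRC/apache24-access-log/type"
echo apache24            > "$SRC/apache24-access-log/consumer-for"
echo apache24-error-log  > "$SRC/apache24-access-log/producer-for"

# s6-rc-compile accepts this chain, but the pipe it maintains between
# the two loggers would never carry any data.
mkdir -p "$SRC/apache24-error-log"
echo longrun             > "$SRC/apache24-error-log/type"
echo apache24-access-log > "$SRC/apache24-error-log/consumer-for"
```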

That would work, but it would declare the following chain of data:
apache24 -> apache24-access-log -> apache24-error-log
which doesn't reflect your real setup. It would essentially
create and maintain an anonymous pipe between apache24-access-log
and apache24-error-log that would not be used - unless you
manually retrieved it from s6rc-fdholder and used it with apache24.
That would work, but it would be a hack that breaks abstractions.


Ideally, having apache24 as producer-for both apache24-access-log
and apache24-error-log would be another option, as it reflects
reality. But this isn't accepted by s6-rc-compile either.

This is a very simplified example (I have 6 sites to manage), and it
seems wrong to complicate the setup with artificial links in s6-rc.

I'm very interested to understand the reasoning.

The point of declaring producers and consumers in s6-rc is to
automatically manage pipelines: the stdout of a producer is
automatically redirected to the stdin of the corresponding consumer.
The feature uses Unix pipes, and one of the limitations of pipes is
that you can have several writers to one reader (which is supported
by s6-rc: you can have several producers for a unique consumer), but
you cannot have more than one reader on a pipe. So you cannot
declare several consumers for the same producer, because it would
mean having several processes reading from the producer's stdout.
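The asymmetry is easy to see with a plain named pipe (a throwaway
sketch, not s6-rc-specific): two writers can share one reader, but the
single read end is all there is.

```shell
#!/bin/sh
# Two writers, one reader on a single pipe - the direction pipes allow.
DIR=$(mktemp -d)
FIFO="$DIR/fifo"; mkfifo "$FIFO"
OUT="$DIR/out"

cat "$FIFO" > "$OUT" &   # the single reader, like a unique consumer
exec 3> "$FIFO"          # hold a write end open so the reader only
                         # sees EOF when we decide
echo "producer one" > "$FIFO"   # first writer
echo "producer two" > "$FIFO"   # second writer
exec 3>&-                # close the held write end: reader gets EOF
wait                     # let the reader finish
```

Both messages arrive on the reader's single stdin; there is no way to
hand each writer its own reader on the same pipe.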

The model is limited in that it's only usable for stdin and stdout;
when a service, like apache24, produces several streams of data,
you can only fit one in the model (if you can get it to output one
of those streams to stdout), and the others have to be handled
differently.
Changing this is not planned at the moment, because it would make the model
significantly more complex (require the producer to list which fds
it's writing to, etc.) for a feature that would only be useful for
a few services - apache httpd unfortunately being one of them.

My advice is to use s6-rc's producer/consumer mechanism for one
of the log streams, and use a named pipe for the other one, without
cramming it into the s6-rc mechanism. That would typically mean:

- configure apache24 to output its access log to stdout
- declare apache24 as a producer for apache24-access-log and
apache24-access-log as a consumer for apache24
- apache24-access-log is a simple s6-log invocation, reading
from its stdin
- mkfifo /var/run/apache24/error-fifo (with appropriate rights)
- declare that apache24 outputs its error log to
/var/run/apache24/error-fifo
- apache24-error-log has its run script doing something like:
redirfd -r 0 /var/run/apache24/error-fifo s6-log your-logging-script
- manually list apache24-error-log in apache24's dependencies, so
apache24 doesn't start before apache24-error-log. (The pipeline
mechanism automatically adds apache24-access-log to apache24's deps.)
- manually define any bundles you want.
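Put together, the advised setup could be sketched as below. The log
directories, the /command/execlineb shebang location, and the s6-log
arguments are assumptions for illustration; adapt them to your layout:

```shell
#!/bin/sh
# Sketch of the advised source tree; paths and logging args are assumptions.
SRC=$(mktemp -d)   # stand-in for your s6-rc source directory

# apache24: its stdout (the access log) feeds the declared pipeline,
# and it depends on the error logger so the fifo has a reader first.
mkdir -p "$SRC/apache24"
echo longrun             > "$SRC/apache24/type"
echo apache24-access-log > "$SRC/apache24/producer-for"
echo apache24-error-log  > "$SRC/apache24/dependencies"

# apache24-access-log: a plain s6-log invocation reading its stdin
mkdir -p "$SRC/apache24-access-log"
echo longrun  > "$SRC/apache24-access-log/type"
echo apache24 > "$SRC/apache24-access-log/consumer-for"
cat > "$SRC/apache24-access-log/run" <<'EOF'
#!/command/execlineb -P
s6-log t /var/log/apache24/access
EOF

# apache24-error-log: reads the named pipe apache24 writes errors to
mkdir -p "$SRC/apache24-error-log"
echo longrun > "$SRC/apache24-error-log/type"
cat > "$SRC/apache24-error-log/run" <<'EOF'
#!/command/execlineb -P
redirfd -r 0 /var/run/apache24/error-fifo
s6-log t /var/log/apache24/error
EOF
```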

HTH,

--
Laurent
