On Tue, Dec 19, 2017 at 10:01:53AM -0500, Wietse Venema wrote:
> I suppose one approach is to make a Postfix container disposable,
> i.e. a container is never updated with a new Postfix version, but
> it is replaced with a newer one

That is the common Docker approach.  Images are immutable.
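
For illustration, something like this (the volume and image names are made
up; the point is only that the queue lives on a named volume and the
container itself is thrown away on upgrade):

  docker volume create postfix-spool
  docker run -d --name postfix \
      -v postfix-spool:/var/spool/postfix \
      example/postfix:3.2

  # "Upgrading" means replacing the container, not patching it:
  docker rm -f postfix
  docker run -d --name postfix \
      -v postfix-spool:/var/spool/postfix \
      example/postfix:3.3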

> and it imports its queue and data directories from the host. These
> directories must of course not be imported into multiple containers. I
> don't know how to prevent that.

That is a problem for a different layer of the stack.  The sysadmin is
supposed to provide persistent storage and make sure that multiple
containers do not write to the same directory.  It should not be our job
to babysit the infra.  Inform and bail out if the deal is broken?
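
To illustrate "inform and bail out", a hypothetical entrypoint could take
an advisory lock on a file inside the mounted queue volume and refuse to
start if another container on the same host already holds it (containers
sharing the volume share the kernel, so flock(1) sees the conflict; the
lock-file name and entrypoint are my own sketch, nothing Postfix provides):

  #!/bin/sh
  # Hypothetical entrypoint check: hold an exclusive advisory lock on a
  # file inside the mounted queue volume for the container's lifetime.
  exec 9> /var/spool/postfix/.container-lock
  if ! flock -n 9; then
      echo "queue directory already in use by another instance, bailing out" >&2
      exit 1
  fi
  exec "$@"   # hand over to whatever command actually starts Postfix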

> Also, a Postfix container would import the logging sockets from the
> host (www.projectatomic.io/blog/2016/10/playing-with-docker-logging)
> and would set 'syslog_name = $myhostname/postfix' in the container's
> main.cf file to make logging from different containers distinct.
> Of course the logging sockets may be imported into as many containers
> as needed.

Uhm, systemd (or any other init system) as pid 1 is not the "docker
way".  It is better for Docker to know when the service has stopped,
crashed, etc. so it can take appropriate action.  So consider dropping
the separate pid 1 daemon option.

Usually, sending log output to the console is the preferred approach.  If
we cannot do that natively, having a syslog daemon inside the container
write to the console (in addition to a local log file?) looks like a
better option than importing sockets from the host.
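
A rough sketch of that, assuming a busybox syslogd is available in the
image (-n keeps it in the foreground, -O redirects its output to a file)
and that the Postfix master can be kept in the foreground with its -d
option; the master path varies per distro:

  #!/bin/sh
  # Sketch: a local syslogd writes to the container console instead of
  # a socket imported from the host.
  busybox syslogd -n -O /proc/1/fd/1 &     # /proc/1/fd/1 = container stdout
  # Keep log lines from different containers distinguishable, as suggested:
  postconf -e 'syslog_name = $myhostname/postfix'
  # Run the master in the foreground so Docker notices when it exits.
  exec /usr/libexec/postfix/master -d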

> If one wants multiple Postfix instances in a single container, then
> that will require a 'minder' program that runs in the foreground and
> that plays nice with higher-level orchestration systems. I won't
> sabotage that approach.

Do we really need that?  Too many layers all trying to do a similar job?

-- 
Eray
