On Sat, Dec 24, 2022 at 06:28:29AM +0400, Samer Afach <samer.af...@msn.com> 
wrote:

> On 24/12/2022 5:30 AM, raf wrote:
> > On Fri, Dec 23, 2022 at 04:35:03PM +0400, Samer Afach <samer.af...@msn.com> 
> > wrote:
> > 
> > >     About your great loud thought, my containers are versioned but there's
> > >     no CI in there, and every launch for them recreates them. They're all
> > >     based on either Debian or Ubuntu (depending on support for my
> > >     applications), which means they'll be updated automatically. I don't
> > >     use random images from untrusted sources. There's even a plan to run apt
> > >     update/upgrade on every launch to ensure everything is up to date even
> > >     if I forget to recreate a container for any reason, and I'm planning
> > >     cron jobs that'll restart the containers daily. I really appreciate
> > >     your loud thoughts, keep 'em coming, and I hope I have that one
> > >     covered with my plan.
> > One thing to consider, rather than restarting the
> > containers daily, is to install the unattended-upgrades
> > package in the container and a configuration for it
> > that automatically installs at least all security
> > upgrades. That way, the container can stay running for
> > long periods of time without the need to restart it
> > daily which presumably introduces tiny regular outages.
> > 
> > cheers,
> > raf
> 
> Dear Raf:
> 
> That's actually what I do on all the bare-metal machines, but from my
> understanding of how Docker works, every container is meant to run exactly
> one service, and somehow the default Linux images disable system services.
> They can be re-enabled, but that's not the way it's meant to work, and
> given that I'm just a beginner in this whole Docker thing, I'm trying not
> to get ahead of myself until some time has passed, I feel comfortable with
> everything I've done so far, and I've built the confidence of "it worked
> for a while, now let's try changing that one thing".

Ah, I didn't realise that. Thanks. It makes sense I
suppose. A container can have any number of processes
in it, but the default assumption is going to be
immutable infrastructure, and it won't include any
processes that you don't put in there explicitly.

However, you could maybe have a cronjob outside the
container that starts a process inside the container to
perform security updates. But it sounds like a hassle.
If the mail volume isn't huge, the tiny outages when
restarting might not be a problem, and so they don't
need to be eliminated.
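
If you ever do want to try it, a sketch of what I mean
is a single crontab entry on the host (the container
name and schedule below are made up, and I haven't
tested it):

    # Host crontab sketch ("mail-postfix" is a stand-in for your
    # container's name). This applies all pending upgrades, not just
    # security ones; if unattended-upgrades is installed in the image,
    # you could run "unattended-upgrade" here instead.
    17 3 * * *  docker exec mail-postfix sh -c 'apt-get update -qq && apt-get upgrade -y -qq'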

> This can get much worse for beginners, and it took me a while to get email
> working properly. If you look at my setup, you'll see that postfix,
> dovecot and OpenDKIM each run in their own container (and they all must
> run in foreground mode so that their logs are accessible).

> Luckily, sharing socket
> files among containers is allowed in Linux, and the reasoning there, if I
> understand correctly, is that all these containers use the same kernel, and
> that's the only required condition. This simplified my setup a lot. Over
> time I'll have to move everything to inet sockets and stop using socket
> files because it sounds dirty.

I wouldn't be too keen to do that. UNIX domain sockets
are faster than TCP. There's nothing dirty about them.
It's just another network address family. And they have
some nice benefits.
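
If what feels dirty is the sharing itself, a named
volume mounted into both containers is a perfectly
ordinary way to do it. Something like this (the names,
images and socket directory below are my guesses, not
your actual setup):

    # Sketch only: one named volume holds the socket directory, so both
    # containers see the same socket file. That directory is a common
    # choice because postfix's smtpd often runs chrooted under
    # /var/spool/postfix.
    docker volume create milter-socket
    docker run -d --name postfix \
        -v milter-socket:/var/spool/postfix/opendkim \
        my-postfix-image
    docker run -d --name opendkim \
        -v milter-socket:/var/spool/postfix/opendkim \
        my-opendkim-image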

> The worst part in all this is OpenDKIM. It doesn't support stdout logging,
> which means I have to force the rsyslog service to run to see any errors,
> but given that its container should start with exactly one program in the
> foreground, I don't know how to print the logs with something like tail
> since OpenDKIM is the foreground process. Another problem to look into
> soon, once I'm done with the more pressing issues that are piling up.
> Too much unsolicited information, apologies, but I wanted to make the
> situation clear, because this is a typical problem in Docker.

I'd be tempted to treat all of these related processes
as a single service (i.e. mail) and put them in the
same container along with rsyslog. :-)
But that's probably silly.

The OpenDKIM authors would probably accept a patch for
an option that logs to stdout rather than via syslog()
for use in Docker. It should be easy enough to do. If
not, at least raise an issue with them. They'd probably
be happy to make their software easier to use in Docker.
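
In the meantime, one workaround might be a small
entrypoint script that runs a minimal syslogd next to
OpenDKIM and copies its log file to stdout. A rough
sketch, assuming busybox is in the image (the paths
and flags are from memory, so double-check them):

    #!/bin/sh
    # Entrypoint sketch: give OpenDKIM a syslog to talk to, and copy
    # the result to the container's stdout.
    busybox syslogd -n -O /var/log/opendkim.log &  # minimal syslogd, logs to a file
    tail -F /var/log/opendkim.log &                # copy the file to stdout
    exec opendkim -f -x /etc/opendkim.conf         # -f keeps it in the foreground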

> Cheers,
> Sam

cheers,
raf
