Added the Kolla tag, as together we might want to do something about the systemd that ends up in containers via *multiple* package dependencies, like [0]. Ideally, that would mean properly re-packaging all/some of the dependents (like those named in [1]) so they stop requiring systemd, now that it's the containers era. As a temporary security band-aid I was thinking of removing systemd via footers [1] as an extra layer added on top, but I'm not sure that buys us anything long-term.
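To illustrate the band-aid idea, a minimal sketch of such a "footer" (the exact package list is an assumption; note the caveat, which is why I doubt the long-term value):

    # Hypothetical footer appended to each image's Dockerfile.
    # Caveat: because image layers are additive, this only masks the
    # files in the top layer -- the bytes stay in the base layer and
    # are still pulled over the wire, so download size does not shrink.
    RUN rpm -e --nodeps systemd systemd-libs || true

So a footer removal reduces the attack surface of the running container, but not the image size or the CVE-update churn of the base layer.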


On 11/28/18 12:45 PM, Bogdan Dobrelya wrote:
To follow up and explain the patches for code review:

The "header" patch -> (requires), and also -> (which in turn requires) -> (Kolla change, the 1st to go)

Please also read the commit messages; I tried to explain all the "whys" carefully. To sum it up here as well:

The current self-contained (config and runtime bits) architecture of containers badly affects:

* the size of the base layer and of all container images, as an
   additional 300MB (an extra ~30% of size).
* edge cases, where container images have to be distributed, at
   least once to seed local registries, over high-latency,
   limited-bandwidth, highly unreliable WAN connections.
* the number of packages to update in CI for all containers for all
   services (CI jobs do not rebuild containers, so each container gets
   updated for those 300MB of extra size).
* security and the attack surface, by introducing systemd et al as
   additional subjects for CVE fixes to maintain for all containers.
* services' uptime, through additional restarts of services caused by
   security maintenance of components irrelevant to OpenStack, sitting
   as dead weight in container images forever.

On 11/27/18 4:08 PM, Bogdan Dobrelya wrote:
Changing the topic to follow the subject.

[tl;dr] it's time to rearchitect container images to stop including config-time-only bits (puppet et al), which are not needed at runtime and pose security issues, like CVEs, to maintain daily.

Background: 1) For the Distributed Compute Node edge case, there are potentially tens of thousands of single-compute-node remote edge sites connected over WAN to a single control plane, with high latency (around 100ms or so) and limited bandwidth.
2) For a generic security case, every extra package in an image is an additional subject for CVE fixes to maintain.
3) TripleO CI updates all packages in all containers for all services, since CI jobs do not rebuild container images.


Here is a related bug [1] and implementation [1] for that. PTAL folks!


Let's also think of removing puppet-tripleo from the base container.
It really brings the world in (and yum updates in CI!) for each job and each container! So if we did that, we would then either have to install puppet-tripleo and co on the host and bind-mount it for the docker-puppet deployment task steps (a bad idea IMO), OR use the magical --volumes-from <a-side-car-container> option to mount volumes from some "puppet-config" sidecar container into each of the containers being launched by the docker-puppet tooling.
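A rough sketch of that sidecar idea (all container, image, and path names below are made up for illustration, not actual TripleO artifacts):

    # Build a data-only sidecar that carries the puppet modules.
    docker create --name puppet-config \
        -v /usr/share/openstack-puppet/modules \
        tripleo/puppet-modules:latest /bin/true

    # docker-puppet would then launch each service container with the
    # modules mounted from the sidecar instead of baked into the image:
    docker run --rm --volumes-from puppet-config:ro \
        tripleo/nova-base:latest puppet apply ...

That would keep puppet-tripleo in exactly one image instead of every one, at the cost of wiring the sidecar into the docker-puppet launch path.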

On Wed, Oct 31, 2018 at 11:16 AM Harald Jensås <hjensas at> wrote:
We add this to all images:

  /bin/sh -c 'yum -y install iproute iscsi-initiator-utils lvm2 python \
      socat sudo which openstack-tripleo-common-container-base rsync cronie \
      crudini openstack-selinux ansible python-shade puppet-tripleo \
      python2-kubernetes && yum clean all && rm -rf /var/cache/yum'

That layer weighs in at 276 MB. Is the additional 276 MB reasonable here?
openstack-selinux <- This package runs relabeling; does that kind of
touching of the filesystem impact the size due to docker layers?

Also: python2-kubernetes is a fairly large package (18007990 bytes, ~18 MB); do we use
that in every image? I don't see any tripleo-related repos importing
from it when searching on Hound. The original commit message [1]
adding it states it is for future convenience.

On my undercloud we have 101 images; if we are downloading that 18 MB
for every image, that's almost 1.8 GB for a package we don't use? (I hope it's
not like this? With docker layers, we only download that 276 MB
transaction once? Or?)
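A quick back-of-the-envelope check of that worst case (assuming no layer sharing at all, the pessimistic scenario; images built from the same base should share that layer and pull it only once):

```shell
# 101 images x ~18 MB for python2-kubernetes, if no layer were shared:
echo "$((101 * 18)) MB"   # prints "1818 MB", i.e. ~1.8 GB
```

So even the pessimistic math lines up with the ~1.8 GB figure above.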


Best regards,
Bogdan Dobrelya,
Irc #bogdando


OpenStack Development Mailing List (not for usage questions)
