On Thu, Mar 8, 2018 at 11:44 AM, R0b0t1 <r03...@gmail.com> wrote:

> On Wed, Mar 7, 2018 at 3:06 PM, Rich Freeman <ri...@gentoo.org> wrote:
> > On Wed, Mar 7, 2018 at 3:55 PM, R0b0t1 <r03...@gmail.com> wrote:
> >> On Wed, Mar 7, 2018 at 1:15 PM, Alec Warner <anta...@gentoo.org> wrote:
> >>>
> >>> Because containers are awesome and are way easier to use.
> >>>
> >>
> >> I think you missed my point: Why are they easier to use?
> >>
> >
> > I suspect that he was equating "containers" with "Docker," which both
> > runs containers and also is an image management solution (one that I
> > don't particularly like, but probably because I don't have heavy needs
> > so I find myself fighting it more than anything else - it doesn't hurt
> > that for the life of me I can't get it to connect containers to my
> > bridge, and it seems like it is completely opposed to just using DHCP
> > to obtain IPs).
> >
> > But, if you're using Docker, sure, you can run whatever the command is
> > to download an image of a container with some software pre-installed
> > and run it.
> >
> > If you're using something other than Docker to manage your containers
> > then you still have to get the software you want to use installed in
> > your container image.
> >
>
> I think I was equating containers to Docker as well. My point was
> instead of trying to manage dependencies, containers allow people to
> shove everything into an empty root with no conflicts. The
> enthusiastic blog post seems to restate this.
>
>
I think the premise I disagree with is that portage (and package managers
in general) somehow allow you to keep a system 'clean'. 'Clean' here is a
generic sort of idea like:

1) All system files are owned by a package (see the sketch just after this
list for one way to check).
2) Only the packages that need to be installed are installed.
3) As the administrator, I fully understand the state of the system at all
times.
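
(A concrete, Gentoo-specific sketch of what checking point 1 looks like; it
assumes app-portage/gentoolkit and app-portage/portage-utils are installed,
and the file path and package are just examples.)

  # Ask the package manager which package, if any, owns a given file:
  equery belongs /usr/bin/some-binary

  # Verify that the files a package installed haven't been changed behind
  # the PM's back:
  qcheck www-servers/apache

  # Files that no package claims (e.g. things dropped in by hand as root)
  # are exactly the kind of divergence I talk about below.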

This is a useful abstraction (because it's how many people do systems
administration), but I don't like it for a few reasons:

1) People log in and run stuff as root. Stuff running as root doesn't have
to follow the rules of the PM, and so the system state can diverge (even by
accident).
  a) (Note that 'people' is often 'me'; I'm not immune!)
2) The PM itself (or the code it runs, such as pre / post installs) can
have bugs, and those bugs can also cause state to diverge.
3) In general, the state of the machine evolves organically over time.
  a) Start with a clean stage3.
  b) You install your configuration management software (puppet / cfengine
/ chef / whatever your CM is).
  c) You commit some code into your CM, but it has a bug, so you
accidentally mutate some unexpected state.
  d) Your CM is really great at doing things like "deploying a web node"
but it's bad at "undeploying the web node" (perhaps you never implemented
this recipe).
    1) You deploy a web node CM recipe on your server. It works great and
runs as a web node for six months.
    2) The web node is getting busy, so you decide to move servers. You run
your web node recipe on another server and decide to repurpose the server
from step 1.
    3) You deploy an LDAP node CM recipe on the server from step 1. Maybe
it's also still running the web node; maybe it has the web node files but
not apache (because you uninstalled it).

To me this was systems administration in the '90s / 2000s. Case 3 (organic
machine growth over N years) was rampant. On physical machines and VMs we
commonly addressed this problem by having a machine + service lifecycle,
making reinstalls automated, and reinstalling machines when they switched
roles. Reinstalls were 'clean' and we built new installer images weekly. It
also meant I could test my procedures and be confident that I could, say,
deploy a new webnode or slave, rather than worrying about automation that
never gets exercised; it's common for automation to go stale if unused.

I am reminded a bit of the security mindset: once a machine is compromised,
it's really hard to prove that it's no longer compromised (which is why
backup + re-image is a common remediation in security-land).
I tend to apply the same thinking to machines in general. If I logged in
and did stuff as root, it's probable that it's not in configuration
management, the package manager may not know about it, and the machine is
now 'unclean.'

It's also possible that you write CM recipes really well (or distribute all
software in packages, maybe). You don't have bugs very often, or you just
don't care about this idiom of "clean" machines. The intent of this text
was merely to provide context for my experience.

In contrast, with disposable containers:

   1) Automated build process for my containers (a minimal sketch follows
   this list).
      a) If there is a bug in the build, I can throw my buggy containers
      away and build new ones.
   2) Containers are encouraged to be stateless, so logging in to the
   container as root is unlikely to scale well; the containers are likely to
   remain 'clean.'
      a) If my containers are dirty, I can just throw them away and build
      new ones.
   3) If I need to change roles, I can just destroy the webnode container
   and deploy an LDAP node container.
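
(For point 1, a minimal sketch of what I mean, using Docker as the example
runtime; the image name, tag, and site path are made up for illustration,
and the base is the official httpd image from Docker Hub.)

  # Dockerfile (contents) -- everything the webnode needs is baked in at
  # build time:
  FROM httpd:2.4
  COPY ./site/ /usr/local/apache2/htdocs/

  # Build and run it:
  docker build -t webnode:v1 .
  docker run -d --name webnode -p 80:80 webnode:v1

  # Buggy build or 'dirty' container?  Throw it away and rebuild:
  docker rm -f webnode
  docker build -t webnode:v2 .
  docker run -d --name webnode -p 80:80 webnode:v2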

The containers are nominally stateless, so there is less chance of 'gunk'
building up and surprising me later. It also makes the lifecycle simpler.

Obviously it's somewhat harder for stateful services (databases, etc.), but
I suspect things like SANs (or Ceph) can provide the storage backing for
the database.
(Database "schema" cleanliness is perhaps a separate issue that I'll defer
for another time ;p)
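
(A hedged sketch of that, again using Docker: the mount point, container
name, and password below are all made up, and the assumption is that
/mnt/ceph is externally managed storage, e.g. a Ceph RBD or SAN volume
already mounted on the host.)

  # The database container stays disposable; its state lives on the
  # externally managed mount:
  docker run -d --name db \
    -v /mnt/ceph/pgdata:/var/lib/postgresql/data \
    -e POSTGRES_PASSWORD=changeme \
    postgres:10

  # Throwing the container away does not throw the data away:
  docker rm -f db
  docker run -d --name db \
    -v /mnt/ceph/pgdata:/var/lib/postgresql/data \
    -e POSTGRES_PASSWORD=changeme \
    postgres:10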

-A
