On Tue, Sep 16, 2014 at 08:53:09PM +0200, Anton Lundin wrote:
> > Docker is as high tech as a sledgehammer compared to Chef or Puppet.
>
> They kinda aim for different targets.
>
> Docker is probably a way simpler way to go for you.

Yes. Chef and Puppet aim at automating large data centers. I am running a
few services on a single server (with maybe a vague idea of adding a
failover in a different geo at some point). Docker helps me separate the
services and makes them easier to reproduce if things fall down.

> > The biggest challenge with Docker is that it's not really designed for
> > the type of services I'm running... you cannot really do the "one app,
> > one container" thing Docker wants you to do. Trac requires a web server,
> > a git server, a mail server, and it's entirely non-trivial and
> > counter-productive to spread those out across multiple containers - at
> > least as far as I can tell...
>
> You can do some trickery with volume containers and intra-container
> routing, but it's probably simpler for you to just lump a slew of
> services into one container.
>
> I would recommend you at least use volumes for data, and containers for
> code.

I have a container for MySQL. A ton of other data actually comes as
volumes from the 'real' host. This makes it so much easier to back things
up and to quickly tear a container down, fix something, and bring it back
up.
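
For the MySQL container that boils down to something like the following
(image tag, host path and password are placeholders, not the actual
setup):

    # MySQL in its own container; the actual data stays on the host
    docker run -d --name mysql \
        -v /srv/mysql-data:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=changeme \
        mysql:5.6

    # rebuilding the container doesn't touch the data on the host
    docker stop mysql && docker rm mysql
    docker run -d --name mysql -v /srv/mysql-data:/var/lib/mysql mysql:5.6

The same -v bind mounts work for the rest of the data that lives on the
'real' host - backups only need to care about the host directories, and
the containers themselves stay disposable.
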
> > So I'm using Docker/baseimage to run multiple services in one container
> > and basically use Docker as a set of tools to be able to encapsulate
> > larger logical blocks. E.g. the MySQL server is its own container. As is
> > the WordPress site (that one had been hacked before). I'm still in the
> > experimentation phase regarding the separation of the other services -
> > especially the trac/git server will likely be one single container...
> >
> > This means that multiple containers will be running apache and there
> > needs to be a reverse proxy in front of that (also apache), which means
> > that I have a lot of independent apache processes running. I'll have to
> > monitor how much that increases system resource load. I did switch to a
> > 16 core Xeon server with 24GB of memory, so this should be big enough
> > for a few years (famous last words).
>
> 16 cores and 24GB ram is a huge machine for such a task. You could have
> used VMs for everything and still had ram left with such a machine.

I know. This machine was much cheaper than a more reasonably sized one :-)

> ( And you don't need to run an apache for each of them. You could as
> easily just run tracd in your container and let apache proxy to that )

That I'm still figuring out. Trac is a pain in the rear... (two rough
sketches of what this could look like are at the bottom of this mail)

> > My biggest problem is time. I just don't have enough. This day job keeps
> > distracting me from working on Subsurface infrastructure :-)
>
> Pity that someone has nabbed https://github.com/subsurface , that could
> be a simple place to just use as our infrastructure.

I HATE the github infrastructure. What a pile of... no, let's not go
there.

I have control issues. As long as I maintain Subsurface it will run on my
own servers. It's that simple.

/D
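
For the "multiple services in one container" part, the Docker/baseimage
approach comes down to runit services. A minimal sketch (the baseimage
tag, package list and file names are illustrative, not the actual
Dockerfile):

    # Dockerfile
    FROM phusion/baseimage:0.9.15
    RUN apt-get update && apt-get install -y apache2 trac git

    # each service gets a runit 'run' script under /etc/service/<name>/
    RUN mkdir -p /etc/service/apache2
    COPY apache2-run.sh /etc/service/apache2/run
    RUN chmod +x /etc/service/apache2/run

    # baseimage's init starts and supervises everything under /etc/service
    CMD ["/sbin/my_init"]

with apache2-run.sh being nothing more than

    #!/bin/sh
    # run in the foreground so runit can supervise it
    exec /usr/sbin/apache2ctl -D FOREGROUND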
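
For the front end, the reverse proxy is plain mod_proxy; whether the
backend is another apache or tracd only changes the port and the command
running in the container. A sketch (hostname, port and trac environment
path are made up):

    # front-end apache, with mod_proxy/mod_proxy_http enabled
    # (a2enmod proxy proxy_http)
    <VirtualHost *:80>
        ServerName trac.example.org
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8100/
        ProxyPassReverse / http://127.0.0.1:8100/
    </VirtualHost>

    # inside the container, Anton's tracd suggestion would be simply
    # (-s serves a single trac environment at the root URL)
    tracd -s --port 8100 /var/lib/trac/subsurface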
