This is a hugely appreciated answer. Sorry the question was so generic!
For now, we'll choose separate clusters - understanding the sub-optimal side
effects of that decision. We'll revisit the single-or-multiple cluster question
in a year or so, once we've had a reasonable amount of experience running
everything on it. I'll bet the work required to run our infrastructure in a
single cluster won't seem so daunting then :-).
FWIW, we're huge fans of k8s so far, and needing to run a few separate
clusters is a small price to pay compared to the gains.
On Friday, August 11, 2017 at 4:55:34 PM UTC-7, Tim Hockin wrote:
> This is not an easy question to answer without opining. I'll try.
> Kubernetes was designed to model Borg. This assumes a smaller number
> of larger clusters, shared across applications and users and
> environments. This design decision is visible in a number of places
> in the system - Namespaces, ip-per-pod, Services, PersistentVolumes.
> We really emphasized the idea that sharing is important, and the
> consumption of global resources (such as ports on a node) is rare and
> carefully guarded. The benefits of this are myriad: amortization of
> overhead, good HA properties, efficient bin-packing, higher
> utilization, and centralization of cluster administration, just to
> name a few.
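> To make the sharing point concrete, a minimal sketch (the names are
> hypothetical): two teams share one cluster, each in its own Namespace,
> and because of ip-per-pod both can expose a "web" Service on port 80
> without fighting over ports on any node:
>
>   apiVersion: v1
>   kind: Namespace
>   metadata:
>     name: team-a
>   ---
>   apiVersion: v1
>   kind: Namespace
>   metadata:
>     name: team-b
>   ---
>   # Each Service gets its own cluster IP and each pod its own IP,
>   # so the same name and port can exist in both namespaces.
>   apiVersion: v1
>   kind: Service
>   metadata:
>     name: web
>     namespace: team-a
>   spec:
>     selector:
>       app: web
>     ports:
>     - port: 80
>       targetPort: 8080
>   ---
>   apiVersion: v1
>   kind: Service
>   metadata:
>     name: web
>     namespace: team-b
>   spec:
>     selector:
>       app: web
>     ports:
>     - port: 80
>       targetPort: 8080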
> What we see most often, though, is people using one cluster for one
> application (typically a number of Deployments, Services, etc. but one
> logical app). This means that any user of non-trivial size is going
> to have multiple clusters. This reduces the opportunities for
> efficiency and overhead amortization, and increases the administrative
> burden (and likely decreases the depth of understanding any one admin
> can reach).
> So, why?
> First, Kubernetes is, frankly, missing a few things that make the Borg
> model truly viable. Most clouds do not have sub-VM billing abilities.
> Container security is not well trusted yet (though getting better).
> Linux's isolation primitives are not perfect (Google has hundreds or
> thousands of patches) but they are catching up. The story around
> identity, policy, authorization, and security is not where we
> want it to be.
> Second, it's still pretty early in the overall life of this system.
> Best practices are still being developed / discovered. Books are
> still being written.
> Third, the system is still evolving rapidly. People are not sure how
> things like multi-tenancy are going to look as they emerge. Siloing
> is a hedge against uncertainty.
> Fourth, upgrades-in-place are not as easy or robust as we want them to
> be. It's sometimes easier to just bring up a new cluster and flip the
> workload over. That is easier when the workload is more contained.
> All that said, I still believe the right eventual model is shared. I
> fully understand why people are not doing that yet, but I think that
> in a couple years time we will look back on this era as Kubernetes'
> awkward adolescence.
> On Fri, Aug 11, 2017 at 4:38 PM, wrote:
> > First, I'm sorry if this question has already been asked & answered. My
> > search-fu may have failed me.
> > We're in the process of moving to k8s and I'm not confident about how many
> > clusters I should setup. I know there are many possible options, but I'd
> > really appreciate feedback from people running k8s throughout their company.
> > Nearly everything we run is containerized, and that includes our
> > company-wide internal services like FreeIPA, Gitlab, Jenkins, etc. We also
> > have multiple, completely separate, applications with varying
> > security/auditing needs.
> > Today, we schedule all of our containers via Salt, which can only map
> > containers to systems in a fixed way (not great). We have a
> > group of systems for each application environment and one group for
> > internal services. Each group of systems may be subject to different
> > network restrictions, depending on what they're running.
> > The seemingly obvious answer to replace our setup with k8s clusters is the
> > following configuration:
> > - Create one cluster for internal services
> > - Create one cluster per application, with environments managed by
> > namespaces whenever possible (see the sketch just below)
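> > As a rough sketch of what "environments managed by namespaces" could
> > look like inside one of those per-application clusters (names here
> > are invented for illustration):
> >
> >   apiVersion: v1
> >   kind: Namespace
> >   metadata:
> >     name: myapp-staging
> >     labels:
> >       env: staging
> >   ---
> >   apiVersion: v1
> >   kind: Namespace
> >   metadata:
> >     name: myapp-production
> >     labels:
> >       env: production
> >
> > The same manifests could then be applied to either environment, e.g.
> > "kubectl apply -n myapp-staging -f ." versus "-n myapp-production".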
> > Great, that leaves us with several clusters, though fewer than our
> > previous "system groups". And our network rules will mostly
> > remain as-is.
> > However, there is another option. It seems that a mix of Calico
> > ingress/egress rules, namespaces, RBAC, and carefully crafted pod resource
> > definitions would allow us to have a single large cluster. Maybe it's just
> > my inexperience, but that path seems daunting.
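> > To illustrate why: each tenant namespace in that single cluster would
> > need roughly the following, and this is only a hedged sketch with
> > invented names, not a complete recipe:
> >
> >   # Deny all ingress to pods in this namespace by default; Calico
> >   # (or any NetworkPolicy-enforcing network plugin) enforces this,
> >   # and traffic is then whitelisted with more specific policies.
> >   apiVersion: networking.k8s.io/v1
> >   kind: NetworkPolicy
> >   metadata:
> >     name: default-deny-ingress
> >     namespace: app-one
> >   spec:
> >     podSelector: {}
> >     policyTypes:
> >     - Ingress
> >   ---
> >   # RBAC: bind the built-in "edit" ClusterRole to the app's team,
> >   # scoped to this namespace only.
> >   apiVersion: rbac.authorization.k8s.io/v1
> >   kind: RoleBinding
> >   metadata:
> >     name: app-one-team-edit
> >     namespace: app-one
> >   subjects:
> >   - kind: Group
> >     name: app-one-team
> >     apiGroup: rbac.authorization.k8s.io
> >   roleRef:
> >     kind: ClusterRole
> >     name: edit
> >     apiGroup: rbac.authorization.k8s.io
> >   ---
> >   # Cap the namespace's total resource consumption so one app
> >   # can't starve the others (the "carefully crafted pod resource
> >   # definitions" part); the numbers are placeholders.
> >   apiVersion: v1
> >   kind: ResourceQuota
> >   metadata:
> >     name: app-one-quota
> >     namespace: app-one
> >   spec:
> >     hard:
> >       requests.cpu: "10"
> >       requests.memory: 20Gi
> >       limits.cpu: "20"
> >       limits.memory: 40Gi
> >
> > Multiply that by every application, plus the cross-namespace allow
> > rules, and the "daunting" part becomes clearer.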
> > So, all that background leads me to the simple question: In general, do you
> > create one cluster per application? If not, do you have some other general
> > rule that's not just "when latency or redundancy require it, make a new
> > cluster"?
> > Thanks in advance!
> > Terence