We've been discussing this a little internally (at uSwitch) as we start to
pull more workloads over to Kubernetes, so I'm glad it's come up on the list
also!

To add to the discussion: we currently run two clusters (called red and
black) with separate namespaces per team and RBAC to control access. Teams
decide individually how they want their systems to run; some have staging
builds, others prefer a single environment with feature toggles, etc.
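
A minimal sketch of that layout (the team, group and binding names are
invented for illustration, not our actual config): one namespace per team,
plus a RoleBinding that grants the team's group the built-in "edit" role
inside its own namespace:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: team-foo        # one namespace per team (illustrative name)
  ---
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: RoleBinding
  metadata:
    name: team-foo-edit
    namespace: team-foo
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: edit            # built-in role: read/write most namespaced objects
  subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: team-foo        # group name as issued by your authn provider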

The two clusters have the same RBAC and namespace configuration and are
mostly in sync on OS (CoreOS) and K8s releases too. Red is the cluster we
upgrade first; we tell users of red to expect this, but we aim for both
clusters to be available to users without disruption.

We're trying hard to consolidate onto a few large k8s clusters (irrespective
of team, environment, etc.). Currently I'd guess around 50-75% of our
workloads run across 30+ different ECS clusters (team*environment*other),
most of which sit at single-digit utilisation :)

On Thu, Apr 20, 2017 at 9:12 AM, Matthias Rampke <m...@soundcloud.com> wrote:

> At SoundCloud, we use multiple environments in one cluster, using
> namespaces and different configuration.
>
> We have a weaker notion of environments – there is no global "staging".
> Therefore, the "env" dimension is grouped *under* the system (~ app). One
> system's staging and production may be another system's blue and green.
> Some systems even have ephemeral environments that are automatically
> created for each PR, but that is rare.
>
> It is up to the owners of a system to decide, for each env, which other
> system/env combo it should talk to. For example, foo-staging may talk to
> bar-green, baz-testing and bam-production.
>
> We bake multiple configurations into each artifact, and select which one
> to load when templating out the podspec per env. By convention, the name of
> the configuration is the name of the env in which it should be loaded, but
> that is not a hard rule – the aforementioned ephemeral staging envs all
> share the same configuration.
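>
> As a sketch of what that might look like (names invented, not our actual
> config): the same image ships every configuration, and the templated
> podspec only chooses which one to load:
>
>   apiVersion: v1
>   kind: Pod
>   metadata:
>     name: foo-staging
>   spec:
>     containers:
>     - name: foo
>       image: registry.example.com/foo:1.2.3  # same artifact in every env
>       env:
>       - name: CONFIG_NAME   # hypothetical variable set by the template;
>         value: staging      # selects one of the baked-in configurations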
>
> We also have test clusters, but these are mostly for staging and testing
> changes to Kubernetes itself.
>
> /MR
>
> On Thu, Apr 20, 2017 at 6:47 AM 'EJ Campbell' via Kubernetes user
> discussion and Q&A <kubernetes-users@googlegroups.com> wrote:
>
>> Definitely an interesting question!
>>
>> At Yahoo, we've generally separated the underlying infrastructure based
>> on whether our CI/CD infrastructure is performing the deployment or a
>> developer is manually making changes. Mapping to Box's definitions, "dev"
>> is one very locked-down K8s environment, while stage/prod share a single
>> large K8s cluster (or a small number of them) per data center.
>>
>> As for the issue of routing requests from foo-stage to bar-stage: our
>> CI/CD infrastructure injects into the pod the environment the application
>> is running in. The application's configuration
>> <https://github.com/yahoo/ycb> uses the injected environment variable to
>> select the appropriate service to hit (e.g. an app running in "stage"
>> *may* want to hit a "stage" version of its API).
>>
>> For example:
>>   [{
>>     "settings": [ "master" ],
>>     "bar": "api-bar.example.com"
>>   }, {
>>     "settings": [ "environment:stage" ],
>>     "bar": "api-stage-bar.example.com"
>>   }]
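>>
>> For illustration, the injection side can be as small as one variable in
>> the podspec, set per environment at deploy time (the names here are
>> placeholders, not our actual setup):
>>
>>   apiVersion: v1
>>   kind: Pod
>>   metadata:
>>     name: foo-stage
>>   spec:
>>     containers:
>>     - name: foo
>>       image: example.com/foo:1.0  # placeholder image
>>       env:
>>       - name: ENVIRONMENT         # hypothetical name; matched against the
>>         value: stage              # "environment:stage" setting above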
>>
>> So, K8s itself does not know whether an app is the stage, canary, QA, or
>> prod version. Those are application-specific concepts that are separate
>> from the underlying infrastructure hosting the application.
>>
>> -EJ
>>
>> On Wednesday, April 19, 2017, 10:30:30 PM PDT, 'Tim Hockin' via
>> Kubernetes user discussion and Q&A <kubernetes-users@googlegroups.com>
>> wrote:
>> Sam,
>>
>> I don't have a clean answer for you.  What you really want (it seems)
>> is nested Namespaces.  If only our foresight were better...
>>
>> The way we end up doing it internally is that foo-prod and foo-test
>> get baked into the templates that produce the final configs that are
>> sent to the master.
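>>
>> A trivial sketch of that pattern (the placeholder syntax is illustrative,
>> e.g. something rendered with envsubst): one template, stamped out once
>> per environment before being sent to the master:
>>
>>   # service.yaml.tmpl - render with ENV=prod, ENV=test, ...
>>   apiVersion: v1
>>   kind: Service
>>   metadata:
>>     name: foo-${ENV}        # bakes the environment into the object name
>>     namespace: foo-${ENV}
>>   spec:
>>     selector:
>>       app: foo
>>       env: ${ENV}
>>     ports:
>>     - port: 80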
>>
>> Kubernetes really is designed to empower large, shared clusters - Borg
>> style.  A lot of people are using it for small clusters today - one
>> per app or one per environment.  That works, but it is not really
>> capturing the idea we wanted to express.  There are lots of good,
>> valid, reasons why people can't use mega-clusters yet - authn, authz,
>> billing, etc.  We're working on a lot of these things now (and RBAC
>> has landed :)
>>
>> I'd love to hear more people's thoughts - this is a very interesting
>> topic.
>>
>> On Wed, Apr 19, 2017 at 10:23 PM, Nikhil Jindal
>> <nikhiljinda...@gmail.com> wrote:
>> > That's a great question!
>> >
>> > I would let others comment on what the Kubernetes vision is, but to me,
>> > colocating multiple environments in a single cluster seems better than
>> > having different clusters for each environment, for the reasons you
>> > mentioned (better utilization and efficiency). There is work going on
>> > for hard multitenancy to support better isolation.
>> >
>> > For service discovery within Kubernetes, we look for the service in the
>> > same namespace by default. So if services foo and baz are in the same
>> > namespace, foo in the foobaz-dev namespace will reach baz in the
>> > foobaz-dev namespace, and foo in foobaz-prod will reach baz in the
>> > foobaz-prod namespace.
>> > For service discovery in federation, you have to explicitly request the
>> > federated service, which includes providing the namespace as well (e.g.
>> > baz.foobaz-dev.federation.svc.<domain>).
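>> >
>> > Concretely (assuming the default cluster.local cluster domain), from a
>> > pod in the foobaz-dev namespace all of these names resolve to the same
>> > service:
>> >
>> >   baz                                   # same-namespace short name
>> >   baz.foobaz-dev                        # <service>.<namespace>
>> >   baz.foobaz-dev.svc.cluster.local      # fully qualified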
>> >
>> > On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user
>> > discussion and Q&A <kubernetes-users@googlegroups.com> wrote:
>> >>
>> >> Hello,
>> >>
>> >> We're struggling internally at Box with a question that we were hoping
>> >> the community could help shed some light on.
>> >>
>> >> At Box we have three main environments: development, staging, and
>> >> production. The definition of 'environment' here is primarily a logical
>> >> service discovery domain - which instances of other services should
>> >> access me, and which instances of other services should I access? (Some
>> >> configuration can vary as well.) The development environment is the
>> >> collection of instances where changes are pushed first, and it is the
>> >> most frequently changing environment. Staging is where changes go right
>> >> before they're released and where any manual testing is done.
>> >> Production is the set of instances serving customer traffic.
>> >>
>> >> Currently, we have four bare-metal datacenters: one is for
>> >> non-production workloads (let's call it NP), and the other three are
>> >> for production workloads (let's call them A, B, C). Each one has a
>> >> single large Kubernetes cluster named after the datacenter it's in.
>> >> Initially, we considered equating namespaces to environments, and
>> >> having the dev and staging namespaces in the NP cluster and the
>> >> production namespace in the A, B and C clusters. But we could not get
>> >> good isolation between different teams and microservices for
>> >> authentication, quota management, etc.
>> >>
>> >> So instead, for a service 'foo', each cluster uses namespaces like
>> >> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
>> >> namespaces existing only in the NP cluster and the production namespace
>> >> existing only in clusters A, B and C. The foo team only has access to
>> >> the foo namespaces (through ABAC, soon RBAC), and the foo namespaces
>> >> can have quotas put on them to ensure they do not overrun a cluster or
>> >> environment.
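>> >>
>> >> For illustration (the numbers are arbitrary), the quota on each foo
>> >> namespace might look like:
>> >>
>> >>   apiVersion: v1
>> >>   kind: ResourceQuota
>> >>   metadata:
>> >>     name: compute-quota
>> >>     namespace: foo-production
>> >>   spec:
>> >>     hard:
>> >>       requests.cpu: "100"     # arbitrary caps, scaled per team/env
>> >>       requests.memory: 200Gi
>> >>       pods: "500"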
>> >>
>> >> However, we've started to wonder whether colocating multiple
>> >> environments in a single cluster like this is a good idea. The first
>> >> thing that gave us pause was federation and service discovery - as the
>> >> foo service, I'd love to be able to deploy to a cluster, then indicate
>> >> that I want to talk to the 'baz' service, have it automatically find
>> >> the baz service in my cluster, and fall back to a secondary cluster if
>> >> it's not there. Having multiple environments in a single cluster means
>> >> every app in a cluster not only needs to know that it talks to a 'baz'
>> >> service, it needs to know to specifically reach out to
>> >> 'baz-dev|staging|prod' etc., which pollutes everyone's configs. This is
>> >> specifically because there's no first-class concept of "environment" in
>> >> Kubernetes at the moment - only what we've clobbered into namespaces,
>> >> configs and service names. (Something like hierarchical namespaces
>> >> might help with this.)
>> >>
>> >> The alternative we're considering is to have each cluster contain only
>> >> a single environment. Having one environment per cluster simplifies a
>> >> lot of configuration and object definitions across the board, because
>> >> there's only one axis to worry about (cluster) instead of two (cluster
>> >> + environment). We can know implicitly that everything in a given
>> >> cluster belongs to a specific environment, potentially simplifying
>> >> configuration more broadly. It also feels like it might be a much more
>> >> natural fit for Kubernetes' federation plans, but we haven't looked
>> >> into this in as much depth.
>> >>
>> >> But on the flip side, I've always understood Kubernetes' ultimate goal
>> >> to be a lot more like Borg or an AWS availability zone or region, where
>> >> the software operates at the infrastructure layer rather than the
>> >> application layer, because this dramatically improves hardware
>> >> utilization and efficiency and minimizes the number of clusters to
>> >> operate, scale, etc.
>> >>
>> >> An extreme alternative we've heard is to actually bootstrap a cluster
>> >> per team, but that feels pretty far from the Kubernetes vision, though
>> >> we might be wrong about that as well.
>> >>
>> >> So, we'd love to hear opinions on not only what's recommended or
>> >> possible today with Kubernetes, but what the vision is - should
>> >> Kubernetes clusters exist at an application/environment layer or at the
>> >> infrastructure layer?
>> >>
>> >> Thank you!
>> >> Sam
>> >>