Sam,

I don't have a clean answer for you.  What you really want (it seems)
is nested Namespaces.  If only our foresight were better...

The way we end up doing it internally is that foo-prod and foo-test
get baked into the templates that produce the final configs that are
sent to the master.
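
To be clear, there's no magic mechanism there - it's just templating.  A
made-up Python sketch of the shape of it (not our actual tooling):

    # The environment is a plain template parameter; the rendered dict
    # stands in for the final config that gets sent to the master.
    def render_config(service: str, env: str) -> dict:
        return {
            "name": f"{service}-{env}",  # e.g. foo-prod or foo-test
            "environment": env,
        }

    final_configs = [render_config("foo", env) for env in ("prod", "test")]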

Kubernetes really is designed to empower large, shared clusters - Borg
style.  A lot of people are using it for small clusters today - one
per app or one per environment.  That works, but it doesn't really
capture the idea we wanted to express.  There are lots of good,
valid reasons why people can't use mega-clusters yet - authn, authz,
billing, etc.  We're working on a lot of these things now (and RBAC
has landed :)

I'd love to hear more people's thoughts - this is a very interesting topic.

On Wed, Apr 19, 2017 at 10:23 PM, Nikhil Jindal
<nikhiljinda...@gmail.com> wrote:
> That's a great question!
>
> I'll let others comment on what the Kubernetes vision is, but to me
> colocating multiple environments in a single cluster seems better than
> having a different cluster for each environment, for the reasons you mentioned
> (better utilization and efficiency). There is ongoing work on hard
> multi-tenancy to support better isolation.
>
> For service discovery within Kubernetes, we look for the service in the same
> namespace by default. So if services foo and baz are in the same namespace,
> then foo in the foobaz-dev namespace will reach out to baz in the foobaz-dev
> namespace, and foo in foobaz-prod will reach out to baz in the foobaz-prod
> namespace.
> For service discovery in federation, you have to explicitly request the
> federated service, which includes providing the namespace as well (for
> example: baz.foobaz-dev.federation.svc.<domain>).
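>
> To illustrate the difference (the federation zone "example.com" here is
> just a stand-in for <domain>, and this only resolves from inside a cluster):
>
>     import socket
>
>     # A bare service name resolves relative to the caller's own namespace,
>     # so foo in foobaz-dev reaches baz in foobaz-dev without naming the env.
>     baz_local = socket.gethostbyname("baz")
>
>     # A federated service has to be asked for explicitly, namespace included.
>     baz_federated = socket.gethostbyname(
>         "baz.foobaz-dev.federation.svc.example.com")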
>
> On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user discussion
> and Q&A <kubernetes-users@googlegroups.com> wrote:
>>
>> Hello,
>>
>> We're struggling internally at Box with a question that we were hoping the
>> community could help shed some light on.
>>
>> At Box we have three main environments: development, staging, and
>> production. The definition of 'environment' here is primarily a logical
>> service discovery domain - what instances of other services should access me
>> and what instances of other services should I access? (Some configuration
>> can vary as well.) The development environment is the collection of instances
>> where changes are pushed first; it is the most frequently changing environment.
>> Staging is where changes go right before they're released and where any
>> manual testing is done. Production is the set of instances serving customer
>> traffic.
>>
>> Currently, we have four bare-metal datacenters: one for non-production
>> workloads (let's call it NP) and three others for production workloads
>> (let's call them A, B, and C). Each one has a single large Kubernetes cluster
>> named after the datacenter it's in. Initially, we considered equating
>> namespaces to environments, and having the dev and staging namespaces in the
>> NP cluster and the production namespace in the A, B and C clusters. But we
>> could not get good isolation between different teams and microservices for
>> authentication, quota management, etc.
>>
>> So instead, for a service 'foo', each cluster uses namespaces like
>> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
>> namespaces existing only in the NP cluster and the production namespace
>> existing only in clusters A, B, and C. The foo team has access only to the
>> foo namespaces (through ABAC, soon RBAC), and the foo namespaces can have
>> quotas put on them to ensure they do not overrun a cluster or environment.
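>>
>> As a rough illustration (not our real policy - names and limits are made
>> up, and this assumes the official Kubernetes Python client):
>>
>>     from kubernetes import client, config
>>
>>     config.load_kube_config()
>>
>>     # Cap what the foo team can consume in its dev namespace so it
>>     # can't overrun the shared NP cluster.
>>     quota = client.V1ResourceQuota(
>>         metadata=client.V1ObjectMeta(name="foo-quota", namespace="foo-dev"),
>>         spec=client.V1ResourceQuotaSpec(
>>             hard={"requests.cpu": "20", "requests.memory": "64Gi", "pods": "100"}),
>>     )
>>     client.CoreV1Api().create_namespaced_resource_quota("foo-dev", quota)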
>>
>> However, we've started to wonder whether colocating multiple environments
>> in a single cluster like this is a good idea. The first thing that gave us
>> pause was federation and service discovery - as the foo service, I'd love to
>> be able to deploy to a cluster, then indicate that I want to talk to the
>> 'baz' service, and have it automatically find the baz service in my cluster,
>> and fall back to a secondary cluster if it's not there. Having multiple
>> environments in a single cluster means every app in a cluster needs not
>> only to know that it reaches a 'baz' service, but also to know to reach
>> out specifically to 'baz-dev|staging|prod', etc., which pollutes
>> everyone's configs. This is specifically because there's no first-class
>> concept for "environment" in Kubernetes at the moment - only what we've
>> clobbered into namespaces, configs and service names. (Something like
>> hierarchical namespaces may be able to help with this.)
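>>
>> To show what we mean by polluted configs, a hypothetical snippet (the
>> exact names aren't ours):
>>
>>     # With environments colocated in one cluster, every caller has to
>>     # spell out which environment's baz it means:
>>     UPSTREAM_BAZ = "baz.baz-dev.svc.cluster.local"  # or baz-staging / baz-prod
>>
>>     # With one environment per cluster, a bare "baz" would do, and the
>>     # cluster itself would imply the environment:
>>     # UPSTREAM_BAZ = "baz"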
>>
>> The alternative we're considering is to have each cluster contain only a
>> single environment. Having one environment per cluster simplifies a lot of
>> configuration and object definitions across the board, because there's only
>> one axis to worry about (cluster) instead of two (cluster + environment). We
>> can know implicitly that everything in a given cluster belongs to a specific
>> environment, potentially simplifying configuration more broadly. It also
>> feels like it might be a much more natural fit for Kubernetes' federation
>> plans, but we haven't looked into this in as much depth.
>>
>> But on the flip side, I've always understood Kubernetes' ultimate goal to
>> be a lot more like Borg or an AWS availability zone or region, where the
>> software operates more at an infrastructure layer than the application
>> layer, because this dramatically improves hardware utilization and
>> efficiency and minimizes the number of clusters to operate, scale, etc.
>>
>> An extreme alternative we've heard is to actually bootstrap a cluster per
>> team, but that feels pretty far from the Kubernetes vision, though we might
>> be wrong about that as well.
>>
>> So, we'd love to hear opinions on not only what's recommended or possible
>> today with Kubernetes, but what is the vision - should Kubernetes clusters
>> exist at an application/environment layer or at the infrastructure layer?
>>
>> Thank you!
>> Sam
>>
