Re: [kubernetes-users] One environment or many per cluster?

2017-06-05 Thread
We don't have any strict requirements, and the behavior would probably depend
largely on the service discovery method being used.  Our service-loadbalancer
and smartstack-derived setups would probably return a 503, whereas
kube-dns/kube-proxy would simply fail to look up the DNS entry.




Re: [kubernetes-users] One environment or many per cluster?

2017-06-04 Thread
On Tue, May 9, 2017 at 10:50 AM, 'John Huffaker' via Kubernetes user
discussion and Q&A  wrote:

> Yeah.
>
> In the end we would like:
> 1. "Simple" service discovery per env.  foo in dev talks to bar in dev.
>  foo in staging talks to bar in staging, and ideally the code and config
> for foo would just reference "bar" instead of "bar-dev" (or
> "bar.dev.cluster.local").
>

Do you have any particular requirement about what happens if a pod in
environment "dev" tries to connect to service "foo" and there is no service
"foo" running in environment "dev" ?


> 2. Dev and staging on the same cluster.
> 3. Quotas per app (i.e. foo can only have 20 cpu cores) or per app-env.
> 4. Access control per app (i.e. only foo owners can exec into foo pods).
>
> Right now 2,3,4 are achievable by sacrificing #1.  Or we could also
> achieve 1,3,4 by sacrificing #2.  Or if quotas and abac/rbac worked via
> selectors within a namespace we could achieve 1,2,3,4.  Or if we had
> hierarchical namespaces we could also achieve 1,2,3,4.
>
> I was somewhat worried about sacrificing #1 in the face of federated
> clusters, but Sam has convinced me that the "foo-dev" naming, while ugly,
> will work just fine in that context.
>
> If this is confusing or if anyone wants further clarification I'm happy to
> chat as @huggsboson on the kubernetes chat.
>
>
> On Tue, May 9, 2017 at 8:44 AM, 'Tim Hockin' via Kubernetes user
> discussion and Q&A  wrote:
>
>> If I read correctly, they want quota to apply to a subset of pods in a
>> Namespace (by selector) not the whole namespace (so multiple teams can
>> share a namespace), or else they need to pollute names with
>> env-specific decorations.
>>
>> On Tue, May 9, 2017 at 12:44 AM, 'David Oppenheimer' via Kubernetes
>> user discussion and Q&A  wrote:
>> >> we'd need to make functionality like access control, quotas and other
>> >> things work based on labels and selectors within the namespace
>> >
>> > So you want access control and quotas to be per service per env, not
>> just
>> > per env?
>> >
>> >
>> > On Mon, May 8, 2017 at 10:20 AM, 'John Huffaker' via Kubernetes user
>> > discussion and Q&A  wrote:
>> >>
>> >> So that's our current setup (see Sam's message): We have namespaces
>> >> following the convention "serviceName-envName".  This gives us quotas
>> and
>> >> ABAC, but we lose "semantic naming", as in our conf generation needs
>> to pass
>> >> dev, staging or prod everywhere and services need to hit
>> >> "https://serviceName-envName/";.
>> >>
>> >> In my mind there are two ways to make this clean:
>> >> 1. Hierarchical namespaces like Tim suggested.  Where top level
>> namespace
>> >> is env and sub-namespace is service.
>> >> 2. Or the env namespaces like you suggest.  Where the namespace is the
>> >> env.  Service discovery happens in the clean way you describe, but
>> we'd
>> >> need to make functionality like access control, quotas and other
>> things work
>> >> based on labels and selectors within the namespace.  We'd probably
>> also need
>> >> to be more careful about linking configMaps to their deployments via
>> >> references instead of the "just kill the namespace" approach we take
>> today.
>> >>
>> >> #2 feels both practical and right to me at this point, but it'd
>> obviously
>> >> require some work.
>> >>
>> >>
>> >>
>> >> On Sat, May 6, 2017 at 10:34 PM, 'David Oppenheimer' via Kubernetes
>> user
>> >> discussion and Q&A  wrote:
>> >>>
>> >>> On Sat, May 6, 2017 at 7:45 PM, 'John Huffaker' via Kubernetes user
>> >>> discussion and Q&A  wrote:
>> 
>>  So our "dev" env would be composed of N different services: foo, bar, and
>>  baz, for example. Three different teams maintain the three services and
>>  their related deployments.  We would like to limit operations like apply,
>>  delete, exec, and logs on each team's services and deployments to people
>>  on that team.  The only way we found to get ABAC working the way we
>>  wanted in 1.2 was to put each service/deployment in its own namespace
>>  (+ "-env").  Additionally, for each service's deployment we'd like to set
>>  a quota on how many CPUs/RAM they can reserve.  As of right now it looks
>>  like that is per-namespace as well.
>> >>>
>> >>>
>> >>> Is there some reason you don't want the three services to be in three
>> >>> different namespaces? If you put them in three different namespaces,
>> you can
>> >>> do everything you described with RBAC and quota.
>> >>>
>> 
>> 
>>  I've been worried about this conflict between service discovery and
>>  abac/quota's interpretation of how namespaces should be used for a
>> while.
>> 
>>  On May 6, 2017 7:31 PM, "'David Oppenheimer' via Kubernetes user
>>  discussion and Q&A"  wrote:
>> >
>> >
>> >
>> > On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user
>> > discussion and Q&A  wrote:
>> >>
>> >> At this point it's the lac

Re: [kubernetes-users] One environment or many per cluster?

2017-05-10 Thread Tim St. Clair


On Tuesday, May 9, 2017 at 10:44:54 AM UTC-5, Tim Hockin wrote:
>
> If I read correctly, they want quota to apply to a subset of pods in a 
> Namespace (by selector) not the whole namespace (so multiple teams can 
> share a namespace), or else they need to pollute names with 
> env-specific decorations. 
>
>  
The quota part sounds similar to DHQ:
https://docs.google.com/document/d/1UEw4NASc3bIFV9q9E628ORlOiATA6HxTcBjkPKyND0I/edit?usp=sharing

> On Tue, May 9, 2017 at 12:44 AM, 'David Oppenheimer' via Kubernetes 
> user discussion and Q&A > 
> wrote: 
> >> we'd need to make functionality like access control, quotas and other 
> >> things work based on labels and selectors within the namespace 
> > 
> > So you want access control and quotas to be per service per env, not 
> just 
> > per env? 
> > 
> > 
> > On Mon, May 8, 2017 at 10:20 AM, 'John Huffaker' via Kubernetes user 
> > discussion and Q&A > wrote: 
> >> 
> >> So that's our current setup (see Sam's message): We have namespaces
> >> following the convention "serviceName-envName".  This gives us quotas 
> and 
> >> ABAC, but we lose "semantic naming", as in our conf generation needs to 
> pass 
> >> dev, staging or prod everywhere and services need to hit 
> >> "https://serviceName-envName/";. 
> >> 
> >> In my mind there are two ways to make this clean: 
> >> 1. Hierarchical namespaces like Tim suggested.  Where top level 
> namespace 
> >> is env and sub-namespace is service. 
> >> 2. Or the env namespaces like you suggest.  Where the namespace is the 
> >> env.  Service discovery happens in the clean way you describe, but
> we'd 
> >> need to make functionality like access control, quotas and other things 
> work 
> >> based on labels and selectors within the namespace.  We'd probably also 
> need 
> >> to be more careful about linking configMaps to their deployments via 
> >> references instead of the "just kill the namespace" approach we take 
> today. 
> >> 
> >> #2 feels both practical and right to me at this point, but it'd 
> obviously 
> >> require some work. 
> >> 
> >> 
> >> 
> >> On Sat, May 6, 2017 at 10:34 PM, 'David Oppenheimer' via Kubernetes 
> user 
> >> discussion and Q&A > wrote: 
> >>> 
> >>> On Sat, May 6, 2017 at 7:45 PM, 'John Huffaker' via Kubernetes user 
> >>> discussion and Q&A > 
> wrote: 
>  
>  So our "dev" env would be composed of N different services: foo, bar, and
>  baz, for example. Three different teams maintain the three services and
>  their related deployments.  We would like to limit operations like apply,
>  delete, exec, and logs on each team's services and deployments to people
>  on that team.  The only way we found to get ABAC working the way we
>  wanted in 1.2 was to put each service/deployment in its own namespace
>  (+ "-env").  Additionally, for each service's deployment we'd like to set
>  a quota on how many CPUs/RAM they can reserve.  As of right now it looks
>  like that is per-namespace as well.
> >>> 
> >>> 
> >>> Is there some reason you don't want the three services to be in three 
> >>> different namespaces? If you put them in three different namespaces, 
> you can 
> >>> do everything you described with RBAC and quota. 
> >>> 
>  
>  
>  I've been worried about this conflict between service discovery and 
>  abac/quota's interpretation of how namespaces should be used for a 
> while. 
>  
>  On May 6, 2017 7:31 PM, "'David Oppenheimer' via Kubernetes user 
>  discussion and Q&A" > 
> wrote: 
> > 
> > 
> > 
> > On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user 
> > discussion and Q&A > 
> wrote: 
> >> 
> >> At this point it's the lack of quotas and abac associated with 
> >> selectors instead of namespaces. 
> > 
> > 
> > Can you say more about what you mean? What are scenarios where you'd 
> > like to restrict use of selectors? (and on what objects?) 
> > 
> >> 
> >>   I haven't looked closely enough at rbac to see if it gives us 
> what 
> >> we need within a namespace-per-env setup. 
> >> 
> >> The other side benefit, which we can tool around, is that namespaces
> >> make a good "packaging" mechanism for deployments and their related
> >> configMaps/secrets: to delete a deployment, just delete its
> >> namespace.
> >> 
> >> On May 6, 2017 6:53 PM, "'David Oppenheimer' via Kubernetes user 
> >> discussion and Q&A" > 
> wrote: 
> >> 
> >> Re-reading this thread, I'm wondering why the existing service name 
> >> resolution procedure that Nikhil mentioned doesn't solve Sam's 
> problem 
> >> (without the need for hierarchical namespaces). Use one namespace 
> per 
> >> environment, and use the unqualified service name for lookup to 
> find the 
> >> desired service in the same environment. 
> >> 
> >> 
> >> 

Re: [kubernetes-users] One environment or many per cluster?

2017-05-09 Thread
Yeah.

In the end we would like:
1. "Simple" service discovery per env.  foo in dev talks to bar in dev.
 foo in staging talks to bar in staging, and ideally the code and config
for foo would just reference "bar" instead of "bar-dev" (or
"bar.dev.cluster.local").
2. Dev and staging on the same cluster.
3. Quotas per app (i.e. foo can only have 20 cpu cores) or per app-env.
4. Access control per app (i.e. only foo owners can exec into foo pods).

Right now 2,3,4 are achievable by sacrificing #1.  Or we could also
achieve 1,3,4 by sacrificing #2.  Or if quotas and abac/rbac worked via
selectors within a namespace we could achieve 1,2,3,4.  Or if we had
hierarchical namespaces we could also achieve 1,2,3,4.

I was somewhat worried about sacrificing #1 in the face of federated
clusters, but Sam has convinced me that the "foo-dev" naming, while ugly,
will work just fine in that context.

If this is confusing or if anyone wants further clarification I'm happy to
chat as @huggsboson on the kubernetes chat.
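
For goal #1, a minimal sketch of the env-as-namespace idea (not from this
thread; all names hypothetical). The same Service manifest is applied in each
environment's namespace, and cluster DNS resolves the unqualified name to the
copy in the caller's own namespace:

  # Applied once per environment namespace ("dev" shown; repeat for "staging").
  apiVersion: v1
  kind: Service
  metadata:
    name: bar
    namespace: dev
  spec:
    selector:
      app: bar
    ports:
    - port: 443
  # A pod in namespace "dev" that dials "bar" resolves
  # bar.dev.svc.cluster.local via its DNS search path; the same code
  # deployed into "staging" resolves bar.staging.svc.cluster.local.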


On Tue, May 9, 2017 at 8:44 AM, 'Tim Hockin' via Kubernetes user discussion
and Q&A  wrote:

> If I read correctly, they want quota to apply to a subset of pods in a
> Namespace (by selector) not the whole namespace (so multiple teams can
> share a namespace), or else they need to pollute names with
> env-specific decorations.
>
> On Tue, May 9, 2017 at 12:44 AM, 'David Oppenheimer' via Kubernetes
> user discussion and Q&A  wrote:
> >> we'd need to make functionality like access control, quotas and other
> >> things work based on labels and selectors within the namespace
> >
> > So you want access control and quotas to be per service per env, not just
> > per env?
> >
> >
> > On Mon, May 8, 2017 at 10:20 AM, 'John Huffaker' via Kubernetes user
> > discussion and Q&A  wrote:
> >>
> >> So that's our current setup (see Sam's message): We have namespaces
> >> following the convention "serviceName-envName".  This gives us quotas
> and
> >> ABAC, but we lose "semantic naming", as in our conf generation needs to
> pass
> >> dev, staging or prod everywhere and services need to hit
> >> "https://serviceName-envName/";.
> >>
> >> In my mind there are two ways to make this clean:
> >> 1. Hierarchical namespaces like Tim suggested.  Where top level
> namespace
> >> is env and sub-namespace is service.
> >> 2. Or the env namespaces like you suggest.  Where the namespace is the
> >> env.  Service discovery happens in the clean way you describe, but
> we'd
> >> need to make functionality like access control, quotas and other things
> work
> >> based on labels and selectors within the namespace.  We'd probably also
> need
> >> to be more careful about linking configMaps to their deployments via
> >> references instead of the "just kill the namespace" approach we take
> today.
> >>
> >> #2 feels both practical and right to me at this point, but it'd
> obviously
> >> require some work.
> >>
> >>
> >>
> >> On Sat, May 6, 2017 at 10:34 PM, 'David Oppenheimer' via Kubernetes user
> >> discussion and Q&A  wrote:
> >>>
> >>> On Sat, May 6, 2017 at 7:45 PM, 'John Huffaker' via Kubernetes user
> >>> discussion and Q&A  wrote:
> 
>  So our "dev" env would be composed of N different services: foo, bar, and
>  baz, for example. Three different teams maintain the three services and
>  their related deployments.  We would like to limit operations like apply,
>  delete, exec, and logs on each team's services and deployments to people
>  on that team.  The only way we found to get ABAC working the way we
>  wanted in 1.2 was to put each service/deployment in its own namespace
>  (+ "-env").  Additionally, for each service's deployment we'd like to set
>  a quota on how many CPUs/RAM they can reserve.  As of right now it looks
>  like that is per-namespace as well.
> >>>
> >>>
> >>> Is there some reason you don't want the three services to be in three
> >>> different namespaces? If you put them in three different namespaces,
> you can
> >>> do everything you described with RBAC and quota.
> >>>
> 
> 
>  I've been worried about this conflict between service discovery and
>  abac/quota's interpretation of how namespaces should be used for a
> while.
> 
>  On May 6, 2017 7:31 PM, "'David Oppenheimer' via Kubernetes user
>  discussion and Q&A"  wrote:
> >
> >
> >
> > On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user
> > discussion and Q&A  wrote:
> >>
> >> At this point it's the lack of quotas and abac associated with
> >> selectors instead of namespaces.
> >
> >
> > Can you say more about what you mean? What are scenarios where you'd
> > like to restrict use of selectors? (and on what objects?)
> >
> >>
> >>   I haven't looked closely enough at rbac to see if it gives us what
> >> we need within a namespace-per-env setup.
> >>
> >> The other side benefit tha

Re: [kubernetes-users] One environment or many per cluster?

2017-05-09 Thread
If I read correctly, they want quota to apply to a subset of pods in a
Namespace (by selector) not the whole namespace (so multiple teams can
share a namespace), or else they need to pollute names with
env-specific decorations.

On Tue, May 9, 2017 at 12:44 AM, 'David Oppenheimer' via Kubernetes
user discussion and Q&A  wrote:
>> we'd need to make functionality like access control, quotas and other
>> things work based on labels and selectors within the namespace
>
> So you want access control and quotas to be per service per env, not just
> per env?
>
>
> On Mon, May 8, 2017 at 10:20 AM, 'John Huffaker' via Kubernetes user
> discussion and Q&A  wrote:
>>
>> So that's our current setup (see Sam's message): We have namespaces
>> following the convention "serviceName-envName".  This gives us quotas and
>> ABAC, but we lose "semantic naming", as in our conf generation needs to pass
>> dev, staging or prod everywhere and services need to hit
>> "https://serviceName-envName/";.
>>
>> In my mind there are two ways to make this clean:
>> 1. Hierarchical namespaces like Tim suggested.  Where top level namespace
>> is env and sub-namespace is service.
>> 2. Or the env namespaces like you suggest.  Where the namespace is the
>> env.  Service discovery happens in the clean way you describe, but we'd
>> need to make functionality like access control, quotas and other things work
>> based on labels and selectors within the namespace.  We'd probably also need
>> to be more careful about linking configMaps to their deployments via
>> references instead of the "just kill the namespace" approach we take today.
>>
>> #2 feels both practical and right to me at this point, but it'd obviously
>> require some work.
>>
>>
>>
>> On Sat, May 6, 2017 at 10:34 PM, 'David Oppenheimer' via Kubernetes user
>> discussion and Q&A  wrote:
>>>
>>> On Sat, May 6, 2017 at 7:45 PM, 'John Huffaker' via Kubernetes user
>>> discussion and Q&A  wrote:

 So our "dev" env would be composed of N different services: foo, bar, and
 baz, for example. Three different teams maintain the three services and
 their related deployments.  We would like to limit operations like apply,
 delete, exec, and logs on each team's services and deployments to people on
 that team.  The only way we found to get ABAC working the way we wanted in
 1.2 was to put each service/deployment in its own namespace (+ "-env").
 Additionally, for each service's deployment we'd like to set a quota on how
 many CPUs/RAM they can reserve.  As of right now it looks like that is
 per-namespace as well.
>>>
>>>
>>> Is there some reason you don't want the three services to be in three
>>> different namespaces? If you put them in three different namespaces, you can
>>> do everything you described with RBAC and quota.
>>>


 I've been worried about this conflict between service discovery and
 abac/quota's interpretation of how namespaces should be used for a while.

 On May 6, 2017 7:31 PM, "'David Oppenheimer' via Kubernetes user
 discussion and Q&A"  wrote:
>
>
>
> On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user
> discussion and Q&A  wrote:
>>
>> At this point it's the lack of quotas and abac associated with
>> selectors instead of namespaces.
>
>
> Can you say more about what you mean? What are scenarios where you'd
> like to restrict use of selectors? (and on what objects?)
>
>>
>>   I haven't looked closely enough at rbac to see if it gives us what
>> we need within a namespace-per-env setup.
>>
>> The other side benefit, which we can tool around, is that namespaces make
>> a good "packaging" mechanism for deployments and their related
>> configMaps/secrets: to delete a deployment, just delete its
>> namespace.
>>
>> On May 6, 2017 6:53 PM, "'David Oppenheimer' via Kubernetes user
>> discussion and Q&A"  wrote:
>>
>> Re-reading this thread, I'm wondering why the existing service name
>> resolution procedure that Nikhil mentioned doesn't solve Sam's problem
>> (without the need for hierarchical namespaces). Use one namespace per
>> environment, and use the unqualified service name for lookup to find the
>> desired service in the same environment.
>>
>>
>>
>>
>> On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user
>> discussion and Q&A  wrote:
>>>
>>> Hello,
>>>
>>> We're struggling internally at Box with a question that we were
>>> hoping the community could help shed some light on.
>>>
>>> At Box we have three main environments: development, staging, and
>>> production. The definition of 'environment' here is primarily a logical
>>> service discovery domain - what instances of other services should 
>>> access me
>>> and what instances of other services should I access? (Some 

Re: [kubernetes-users] One environment or many per cluster?

2017-05-09 Thread
> we'd need to make functionality like access control, quotas and other
things work based on labels and selectors within the namespace

So you want access control and quotas to be per service per env, not just
per env?


On Mon, May 8, 2017 at 10:20 AM, 'John Huffaker' via Kubernetes user
discussion and Q&A  wrote:

> So that's our current setup (see Sam's message): We have namespaces
> following the convention "serviceName-envName".  This gives us quotas and
> ABAC, but we lose "semantic naming", as in our conf generation needs to
> pass dev, staging or prod everywhere and services need to hit "
> https://serviceName-envName/".
>
> In my mind there are two ways to make this clean:
> 1. Hierarchical namespaces like Tim suggested.  Where top level namespace
> is env and sub-namespace is service.
> 2. Or the env namespaces like you suggest.  Where the namespace is the
> env.  Service discovery happens in the clean way you describe, but we'd
> need to make functionality like access control, quotas and other things
> work based on labels and selectors within the namespace.  We'd probably
> also need to be more careful about linking configMaps to their deployments
> via references instead of the "just kill the namespace" approach we take
> today.
>
> #2 feels both practical and right to me at this point, but it'd obviously
> require some work.
>
>
>
> On Sat, May 6, 2017 at 10:34 PM, 'David Oppenheimer' via Kubernetes user
> discussion and Q&A  wrote:
>
>> On Sat, May 6, 2017 at 7:45 PM, 'John Huffaker' via Kubernetes user
>> discussion and Q&A  wrote:
>>
>>> So our "dev" env would be composed of N different services: foo, bar, and
>>> baz, for example. Three different teams maintain the three services and
>>> their related deployments.  We would like to limit operations like apply,
>>> delete, exec, and logs on each team's services and deployments to people
>>> on that team.  The only way we found to get ABAC working the way we
>>> wanted in 1.2 was to put each service/deployment in its own namespace
>>> (+ "-env").  Additionally, for each service's deployment we'd like to set
>>> a quota on how many CPUs/RAM they can reserve.  As of right now it looks
>>> like that is per-namespace as well.
>>>
>>
>> Is there some reason you don't want the three services to be in three
>> different namespaces? If you put them in three different namespaces, you
>> can do everything you described with RBAC and quota.
>>
>>
>>>
>>> I've been worried about this conflict between service discovery and
>>> abac/quota's interpretation of how namespaces should be used for a while.
>>>
>>> On May 6, 2017 7:31 PM, "'David Oppenheimer' via Kubernetes user
>>> discussion and Q&A"  wrote:
>>>


 On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user
 discussion and Q&A  wrote:

> At this point it's the lack of quotas and abac associated with
> selectors instead of namespaces.
>

 Can you say more about what you mean? What are scenarios where you'd
 like to restrict use of selectors? (and on what objects?)


>   I haven't looked closely enough at rbac to see if it gives us what
> we need within a namespace-per-env setup.
>
> The other side benefit, which we can tool around, is that namespaces make
> a good "packaging" mechanism for deployments and their related
> configMaps/secrets: to delete a deployment, just delete its
> namespace.
>
> On May 6, 2017 6:53 PM, "'David Oppenheimer' via Kubernetes user
> discussion and Q&A"  wrote:
>
> Re-reading this thread, I'm wondering why the existing service name
> resolution procedure
> 
> that Nikhil mentioned doesn't solve Sam's problem (without the need for
> hierarchical namespaces). Use one namespace per environment, and use the
> unqualified service name for lookup to find the desired service in the 
> same
> environment.
>
>
>
>
> On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user
> discussion and Q&A  wrote:
>
>> Hello,
>>
>> We're struggling internally at Box with a question that we were
>> hoping the community could help shed some light on.
>>
>> At Box we have three main environments: development, staging, and
>> production. The definition of 'environment' here is primarily a logical
>> service discovery domain - what instances of other services should access
>> me and what instances of other services should I access? (Some
>> configuration can vary as well.) The development environment is a
>> collection of instances where changes are pushed first and the most
>> frequently changing environment. Staging is where changes go right before
>> they're released and where any manual testing is done. Production is the
>> set of instances serving customer traffic.

Re: [kubernetes-users] One environment or many per cluster?

2017-05-08 Thread
So that's our current setup (see Sam's message): We have namespaces
following the convention "serviceName-envName".  This gives us quotas and
ABAC, but we lose "semantic naming": our conf generation needs to
pass dev, staging, or prod everywhere, and services need to hit
"https://serviceName-envName/".

In my mind there are two ways to make this clean:
1. Hierarchical namespaces like Tim suggested.  Where top level namespace
is env and sub-namespace is service.
2. Or the env namespaces like you suggest.  Where the namespace is the
env.  Service discovery happens in the clean way you describe, but we'd
need to make functionality like access control, quotas and other things
work based on labels and selectors within the namespace.  We'd probably
also need to be more careful about linking configMaps to their deployments
via references instead of the "just kill the namespace" approach we take
today.

#2 feels both practical and right to me at this point, but it'd obviously
require some work.
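
For contrast, a minimal sketch of the naming pollution under the current
serviceName-envName convention, assuming service bar runs in its own
namespace bar-dev (all names hypothetical):

  apiVersion: v1
  kind: Service
  metadata:
    name: bar
    namespace: bar-dev        # one namespace per service per env
  spec:
    selector:
      app: bar
    ports:
    - port: 443
  # A caller in namespace foo-dev can't use the short name "bar"; its
  # config has to carry the env-qualified name, e.g.
  #   https://bar.bar-dev.svc.cluster.local/
  # which is exactly the loss of "semantic naming" described above.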



On Sat, May 6, 2017 at 10:34 PM, 'David Oppenheimer' via Kubernetes user
discussion and Q&A  wrote:

> On Sat, May 6, 2017 at 7:45 PM, 'John Huffaker' via Kubernetes user
> discussion and Q&A  wrote:
>
>> So our "dev" env would be composed of N different services: foo, bar, and
>> baz, for example. Three different teams maintain the three services and
>> their related deployments.  We would like to limit operations like apply,
>> delete, exec, and logs on each team's services and deployments to people
>> on that team.  The only way we found to get ABAC working the way we
>> wanted in 1.2 was to put each service/deployment in its own namespace
>> (+ "-env").  Additionally, for each service's deployment we'd like to set
>> a quota on how many CPUs/RAM they can reserve.  As of right now it looks
>> like that is per-namespace as well.
>>
>
> Is there some reason you don't want the three services to be in three
> different namespaces? If you put them in three different namespaces, you
> can do everything you described with RBAC and quota.
>
>
>>
>> I've been worried about this conflict between service discovery and
>> abac/quota's interpretation of how namespaces should be used for a while.
>>
>> On May 6, 2017 7:31 PM, "'David Oppenheimer' via Kubernetes user
>> discussion and Q&A"  wrote:
>>
>>>
>>>
>>> On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user
>>> discussion and Q&A  wrote:
>>>
 At this point it's the lack of quotas and abac associated with
 selectors instead of namespaces.

>>>
>>> Can you say more about what you mean? What are scenarios where you'd
>>> like to restrict use of selectors? (and on what objects?)
>>>
>>>
   I haven't looked closely enough at rbac to see if it gives us what we
 need within a namespace-per-env setup.

 The other side benefit, which we can tool around, is that namespaces make
 a good "packaging" mechanism for deployments and their related
 configMaps/secrets: to delete a deployment, just delete its
 namespace.

 On May 6, 2017 6:53 PM, "'David Oppenheimer' via Kubernetes user
 discussion and Q&A"  wrote:

 Re-reading this thread, I'm wondering why the existing service name
 resolution procedure
 
 that Nikhil mentioned doesn't solve Sam's problem (without the need for
 hierarchical namespaces). Use one namespace per environment, and use the
 unqualified service name for lookup to find the desired service in the same
 environment.




 On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user
 discussion and Q&A  wrote:

> Hello,
>
> We're struggling internally at Box with a question that we were hoping
> the community could help shed some light on.
>
> At Box we have three main environments: development, staging, and
> production. The definition of 'environment' here is primarily a logical
> service discovery domain - what instances of other services should access
> me and what instances of other services should I access? (Some
> configuration can vary as well.) The development environment is a
> collection of instances where changes are pushed first and the most
> frequently changing environment. Staging is where changes go right before
> they're released and where any manual testing is done. Production is the
> set of instances serving customer traffic.
>
> Currently, we have four bare-metal datacenters, one is for
> non-production workloads (let's call it NP); the other three are for
> production workloads (let's call them A, B, C). Each one has a single 
> large
> Kubernetes cluster named after the datacenter it's in. Initially, we
> considered equating namespaces to environments, and having the dev and
> staging namespaces in the NP cluster and the production na

Re: [kubernetes-users] One environment or many per cluster?

2017-05-06 Thread
On Sat, May 6, 2017 at 7:45 PM, 'John Huffaker' via Kubernetes user
discussion and Q&A  wrote:

> So our "dev" env would be composed of N different services: foo, bar, and
> baz, for example. Three different teams maintain the three services and
> their related deployments.  We would like to limit operations like apply,
> delete, exec, and logs on each team's services and deployments to people
> on that team.  The only way we found to get ABAC working the way we
> wanted in 1.2 was to put each service/deployment in its own namespace
> (+ "-env").  Additionally, for each service's deployment we'd like to set
> a quota on how many CPUs/RAM they can reserve.  As of right now it looks
> like that is per-namespace as well.
>

Is there some reason you don't want the three services to be in three
different namespaces? If you put them in three different namespaces, you
can do everything you described with RBAC and quota.
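
A minimal sketch of that per-namespace RBAC, assuming a namespace foo-dev and
a group foo-team (both names hypothetical). exec and logs are pod
subresources, which is what makes them grantable per namespace:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: foo-owners
    namespace: foo-dev              # scoped to this namespace only
  rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]        # kubectl exec needs create on pods/exec
    verbs: ["create"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: foo-owners
    namespace: foo-dev
  subjects:
  - kind: Group
    name: foo-team                  # hypothetical group
    apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: foo-owners
    apiGroup: rbac.authorization.k8s.io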


>
> I've been worried about this conflict between service discovery and
> abac/quota's interpretation of how namespaces should be used for a while.
>
> On May 6, 2017 7:31 PM, "'David Oppenheimer' via Kubernetes user
> discussion and Q&A"  wrote:
>
>>
>>
>> On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user
>> discussion and Q&A  wrote:
>>
>>> At this point it's the lack of quotas and abac associated with selectors
>>> instead of namespaces.
>>>
>>
>> Can you say more about what you mean? What are scenarios where you'd like
>> to restrict use of selectors? (and on what objects?)
>>
>>
>>>   I haven't looked closely enough at rbac to see if it gives us what we
>>> need within a namespace-per-env setup.
>>>
>>> The other side benefit, which we can tool around, is that namespaces make
>>> a good "packaging" mechanism for deployments and their related
>>> configMaps/secrets: to delete a deployment, just delete its
>>> namespace.
>>>
>>> On May 6, 2017 6:53 PM, "'David Oppenheimer' via Kubernetes user
>>> discussion and Q&A"  wrote:
>>>
>>> Re-reading this thread, I'm wondering why the existing service name
>>> resolution procedure
>>> 
>>> that Nikhil mentioned doesn't solve Sam's problem (without the need for
>>> hierarchical namespaces). Use one namespace per environment, and use the
>>> unqualified service name for lookup to find the desired service in the same
>>> environment.
>>>
>>>
>>>
>>>
>>> On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user
>>> discussion and Q&A  wrote:
>>>
 Hello,

 We're struggling internally at Box with a question that we were hoping
 the community could help shed some light on.

 At Box we have three main environments: development, staging, and
 production. The definition of 'environment' here is primarily a logical
 service discovery domain - what instances of other services should access
 me and what instances of other services should I access? (Some
 configuration can vary as well.) The development environment is a
 collection of instances where changes are pushed first and the most
 frequently changing environment. Staging is where changes go right before
 they're released and where any manual testing is done. Production is the
 set of instances serving customer traffic.

 Currently, we have four bare-metal datacenters, one is for
 non-production workloads (let's call it NP); the other three are for
 production workloads (let's call them A, B, C). Each one has a single large
 Kubernetes cluster named after the datacenter it's in. Initially, we
 considered equating namespaces to environments, and having the dev and
 staging namespaces in the NP cluster and the production namespace in the A,
 B and C clusters. But we could not get good isolation between different
 teams and microservices for authentication, quota management, etc.

 So instead, for a service 'foo,' each cluster uses namespaces like
 'foo-dev', 'foo-staging', and 'foo-production', with the first two
 namespaces only existing in the NP cluster, but the production namespace
 only existing in clusters A, B and C. The foo team only has access to the
 foo namespaces (through ABAC, soon RBAC) and the foo namespaces can have
 quotas put on them to ensure they do not overrun a cluster or environment.

 However, we've started to wonder whether colocating multiple
 environments in a single cluster like this is a good idea. The first thing
 that gave us pause was federation and service discovery - as the foo
 service, I'd love to be able to deploy to a cluster, then indicate that I
 want to talk to the 'baz' service, and have it automatically find the baz
 service in my cluster, and fall back to a secondary cluster if it's not
 there. Having multiple environments in a single cluster means every app in
 a cluster needs to not only know that

Re: [kubernetes-users] One environment or many per cluster?

2017-05-06 Thread
So our "dev" env would be composed of N-different services foo, bar and baz
for example. 3 different teams maintain the 3 different services and their
related deployments.  We would like to limit operations like apply, delete,
exec and logs to only people on those teams to their respective services
and deployments.  the only way we found to get ABAC working in the way we
wanted in 1.2 was to put each service/deployment in their own namespace (+
"-env").  Additionally for each service's deployment we'd like to set a
quota on how many CPUs/ram they can reserve.  As of right now it looks like
that is per-namespace as well.
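
A minimal sketch of that per-namespace quota (namespace name and memory
numbers hypothetical):

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: compute-quota
    namespace: foo-dev        # quota is scoped to the whole namespace
  spec:
    hard:
      requests.cpu: "20"      # e.g. "foo can only have 20 cpu cores"
      requests.memory: 64Gi
      limits.cpu: "20"
      limits.memory: 64Gi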

I've been worried about this conflict between service discovery and
abac/quota's interpretation of how namespaces should be used for a while.

On May 6, 2017 7:31 PM, "'David Oppenheimer' via Kubernetes user discussion
and Q&A"  wrote:

>
>
> On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user
> discussion and Q&A  wrote:
>
>> At this point it's the lack of quotas and abac associated with selectors
>> instead of namespaces.
>>
>
> Can you say more about what you mean? What are scenarios where you'd like
> to restrict use of selectors? (and on what objects?)
>
>
>>   I haven't looked closely enough at rbac to see if it gives us what we
>> need within a namespace-per-env setup.
>>
>> The other side benefit, which we can tool around, is that namespaces make
>> a good "packaging" mechanism for deployments and their related
>> configMaps/secrets: to delete a deployment, just delete its
>> namespace.
>>
>> On May 6, 2017 6:53 PM, "'David Oppenheimer' via Kubernetes user
>> discussion and Q&A"  wrote:
>>
>> Re-reading this thread, I'm wondering why the existing service name
>> resolution procedure
>> 
>> that Nikhil mentioned doesn't solve Sam's problem (without the need for
>> hierarchical namespaces). Use one namespace per environment, and use the
>> unqualified service name for lookup to find the desired service in the same
>> environment.
>>
>>
>>
>>
>> On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user
>> discussion and Q&A  wrote:
>>
>>> Hello,
>>>
>>> We're struggling internally at Box with a question that we were hoping
>>> the community could help shed some light on.
>>>
>>> At Box we have three main environments: development, staging, and
>>> production. The definition of 'environment' here is primarily a logical
>>> service discovery domain - what instances of other services should access
>>> me and what instances of other services should I access? (Some
>>> configuration can vary as well.) The development environment is a
>>> collection of instances where changes are pushed first and the most
>>> frequently changing environment. Staging is where changes go right before
>>> they're released and where any manual testing is done. Production is the
>>> set of instances serving customer traffic.
>>>
>>> Currently, we have four bare-metal datacenters, one is for
>>> non-production workloads (let's call it NP); the other three are for
>>> production workloads (let's call them A, B, C). Each one has a single large
>>> Kubernetes cluster named after the datacenter it's in. Initially, we
>>> considered equating namespaces to environments, and having the dev and
>>> staging namespaces in the NP cluster and the production namespace in the A,
>>> B and C clusters. But we could not get good isolation between different
>>> teams and microservices for authentication, quota management, etc.
>>>
>>> So instead, for a service 'foo,' each cluster uses namespaces like
>>> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
>>> namespaces only existing in the NP cluster, but the production namespace
>>> only existing in clusters A, B and C. The foo team only has access to the
>>> foo namespaces (through ABAC, soon RBAC) and the foo namespaces can have
>>> quotas put on them to ensure they do not overrun a cluster or environment.
>>>
>>> However, we've started to wonder whether colocating multiple
>>> environments in a single cluster like this is a good idea. The first thing
>>> that gave us pause was federation and service discovery - as the foo
>>> service, I'd love to be able to deploy to a cluster, then indicate that I
>>> want to talk to the 'baz' service, and have it automatically find the baz
>>> service in my cluster, and fall back to a secondary cluster if it's not
>>> there. Having multiple environments in a single cluster means every app in
>>> a cluster needs to not only know that it reaches a 'baz' service, but it
>>> needs to know to specifically reach out to 'baz-dev|staging|prod' etc.,
>>> which pollutes everyone's configs. This is specifically because there's no
>>> first class concept for "environment" in Kubernetes at the moment - only
>>> what we've clobbered into namespaces, configs and service names. (Something
>>> like hierarchical namespaces may be able to help wi

Re: [kubernetes-users] One environment or many per cluster?

2017-05-06 Thread
On Sat, May 6, 2017 at 7:18 PM, 'John Huffaker' via Kubernetes user
discussion and Q&A  wrote:

> At this point it's the lack of quotas and abac associated with selectors
> instead of namespaces.
>

Can you say more about what you mean? What are scenarios where you'd like
to restrict use of selectors? (and on what objects?)


>   I haven't looked closely enough at rbac to see if it gives us what we
> need within a namespace-per-env setup.
>
> The other side benefit, which we can tool around, is that namespaces make
> a good "packaging" mechanism for deployments and their related
> configMaps/secrets: to delete a deployment, just delete its
> namespace.
>
> On May 6, 2017 6:53 PM, "'David Oppenheimer' via Kubernetes user
> discussion and Q&A"  wrote:
>
> Re-reading this thread, I'm wondering why the existing service name
> resolution procedure
> 
> that Nikhil mentioned doesn't solve Sam's problem (without the need for
> hierarchical namespaces). Use one namespace per environment, and use the
> unqualified service name for lookup to find the desired service in the same
> environment.
>
>
>
>
> On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user
> discussion and Q&A  wrote:
>
>> Hello,
>>
>> We're struggling internally at Box with a question that we were hoping
>> the community could help shed some light on.
>>
>> At Box we have three main environments: development, staging, and
>> production. The definition of 'environment' here is primarily a logical
>> service discovery domain - what instances of other services should access
>> me and what instances of other services should I access? (Some
>> configuration can vary as well.) The development environment is a
>> collection of instances where changes are pushed first and the most
>> frequently changing environment. Staging is where changes go right before
>> they're released and where any manual testing is done. Production is the
>> set of instances serving customer traffic.
>>
>> Currently, we have four bare-metal datacenters, one is for non-production
>> workloads (let's call it NP); the other three are for production workloads
>> (let's call them A, B, C). Each one has a single large Kubernetes cluster
>> named after the datacenter it's in. Initially, we considered equating
>> namespaces to environments, and having the dev and staging namespaces in
>> the NP cluster and the production namespace in the A, B and C clusters. But
>> we could not get good isolation between different teams and microservices
>> for authentication, quota management, etc.
>>
>> So instead, for a service 'foo,' each cluster uses namespaces like
>> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
>> namespaces only existing in the NP cluster, but the production namespace
>> only existing in clusters A, B and C. The foo team only has access to the
>> foo namespaces (through ABAC, soon RBAC) and the foo namespaces can have
>> quotas put on them to ensure they do not overrun a cluster or environment.
>>
>> However, we've started to wonder whether colocating multiple environments
>> in a single cluster like this is a good idea. The first thing that gave us
>> pause was federation and service discovery - as the foo service, I'd love
>> to be able to deploy to a cluster, then indicate that I want to talk to the
>> 'baz' service, and have it automatically find the baz service in my
>> cluster, and fall back to a secondary cluster if it's not there. Having
>> multiple environments in a single cluster means every app in a cluster
>> needs to not only know that it reaches a 'baz' service, but it needs to
>> know to specifically reach out to 'baz-dev|staging|prod' etc., which
>> pollutes everyone's configs. This is specifically because there's no first
>> class concept for "environment" in Kubernetes at the moment - only what
>> we've clobbered into namespaces, configs and service names. (Something like
>> hierarchical namespaces may be able to help with this.)
>>
>> The alternative we're considering is to have each cluster contain only a
>> single environment. Having one environment per cluster simplifies a lot of
>> configuration and object definitions across the board, because there's only
>> one axis to worry about (cluster) instead of two (cluster + environment).
>> We can know implicitly that everything in a given cluster belongs to a
>> specific environment, potentially simplifying configuration more broadly.
>> It also feels like it might be a lot more natural of a fit to Kubernetes'
>> federation plans, but we haven't looked into this in as much depth.
>>
>> But on the flip side, I've always understood Kubernetes' ultimate goal to
>> be a lot more like Borg or an AWS availability zone or region, where the
>> software operates more at an infrastructure layer than the application
>> layer, because this dramatically improves hardware utilization and
>> efficiency and minimizes the 

Re: [kubernetes-users] One environment or many per cluster?

2017-05-06 Thread
At this point it's the lack of quotas and abac associated with selectors
instead of namespaces.  I haven't looked closely enough at rbac to see if
it gives us what we need within a namespace-per-env setup.

The other side benefit, which we can tool around, is that namespaces make
a good "packaging" mechanism for deployments and their related
configMaps/secrets: to delete a deployment, just delete its
namespace.
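
A minimal sketch of that packaging idea (all names hypothetical): the
Deployment and the ConfigMap it references live in one namespace, so deleting
the namespace removes both in one step.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: foo-config
    namespace: foo-dev
  data:
    ENV: dev
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: foo
    namespace: foo-dev
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: foo
    template:
      metadata:
        labels:
          app: foo
      spec:
        containers:
        - name: foo
          image: example.com/foo:dev   # hypothetical image
          envFrom:
          - configMapRef:
              name: foo-config
  # "kubectl delete namespace foo-dev" tears all of this down at once.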

On May 6, 2017 6:53 PM, "'David Oppenheimer' via Kubernetes user discussion
and Q&A"  wrote:

Re-reading this thread, I'm wondering why the existing service name
resolution procedure
 that
Nikhil mentioned doesn't solve Sam's problem (without the need for
hierarchical namespaces). Use one namespace per environment, and use the
unqualified service name for lookup to find the desired service in the same
environment.




On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user discussion
and Q&A  wrote:

> Hello,
>
> We're struggling internally at Box with a question that we were hoping the
> community could help shed some light on.
>
> At Box we have three main environments: development, staging, and
> production. The definition of 'environment' here is primarily a logical
> service discovery domain - what instances of other services should access
> me and what instances of other services should I access? (Some
> configuration can vary as well.) The development environment is a
> collection of instances where changes are pushed first and the most
> frequently changing environment. Staging is where changes go right before
> they're released and where any manual testing is done. Production is the
> set of instances serving customer traffic.
>
> Currently, we have four bare-metal datacenters, one is for non-production
> workloads (let's call it NP); the other three are for production workloads
> (let's call them A, B, C). Each one has a single large Kubernetes cluster
> named after the datacenter it's in. Initially, we considered equating
> namespaces to environments, and having the dev and staging namespaces in
> the NP cluster and the production namespace in the A, B and C clusters. But
> we could not get good isolation between different teams and microservices
> for authentication, quota management, etc.
>
> So instead, for a service 'foo,' each cluster uses namespaces like
> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
> namespaces only existing in the NP cluster, but the production namespace
> only existing in clusters A, B and C. The foo team only has access to the
> foo namespaces (through ABAC, soon RBAC) and the foo namespaces can have
> quotas put on them to ensure they do not overrun a cluster or environment.
>
> However, we've started to wonder whether colocating multiple environments
> in a single cluster like this is a good idea. The first thing that gave us
> pause was federation and service discovery - as the foo service, I'd love
> to be able to deploy to a cluster, then indicate that I want to talk to the
> 'baz' service, and have it automatically find the baz service in my
> cluster, and fall back to a secondary cluster if it's not there. Having
> multiple environments in a single cluster means every app in a cluster
> needs to not only know that it reaches a 'baz' service, but it needs to
> know to specifically reach out to 'baz-dev|staging|prod' etc., which
> pollutes everyone's configs. This is specifically because there's no first
> class concept for "environment" in Kubernetes at the moment - only what
> we've clobbered into namespaces, configs and service names. (Something like
> hierarchical namespaces may be able to help with this.)
>
> The alternative we're considering is to have each cluster contain only a
> single environment. Having one environment per cluster simplifies a lot of
> configuration and object definitions across the board, because there's only
> one axis to worry about (cluster) instead of two (cluster + environment).
> We can know implicitly that everything in a given cluster belongs to a
> specific environment, potentially simplifying configuration more broadly.
> It also feels like it might be a lot more natural of a fit to Kubernetes'
> federation plans, but we haven't looked into this in as much depth.
>
> But on the flip side, I've always understood Kubernetes' ultimate goal to
> be a lot more like Borg or an AWS availability zone or region, where the
> software operates more at an infrastructure layer than the application
> layer, because this dramatically improves hardware utilization and
> efficiency and minimizes the number of clusters to operate, scale, etc.
>
> An extreme alternative we've heard is to actually bootstrap a cluster per
> team, but that feels pretty far from the Kubernetes vision, though we might
> be wrong about that as well.
>
> So, we'd love to hear opinions on not only what's recommended or possible
> today with Kubernetes, 

Re: [kubernetes-users] One environment or many per cluster?

2017-05-06 Thread
Re-reading this thread, I'm wondering why the existing service name
resolution procedure
 that
Nikhil mentioned doesn't solve Sam's problem (without the need for
hierarchical namespaces). Use one namespace per environment, and use the
unqualified service name for lookup to find the desired service in the same
environment.




On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user discussion
and Q&A  wrote:

> Hello,
>
> We're struggling internally at Box with a question that we were hoping the
> community could help shed some light on.
>
> At Box we have three main environments: development, staging, and
> production. The definition of 'environment' here is primarily a logical
> service discovery domain - what instances of other services should access
> me and what instances of other services should I access? (Some
> configuration can vary as well.) The development environment is a
> collection of instances where changes are pushed first and the most
> frequently changing environment. Staging is where changes go right before
> they're released and where any manual testing is done. Production is the
> set of instances serving customer traffic.
>
> Currently, we have four bare-metal datacenters, one is for non-production
> workloads (let's call it NP); the other three are for production workloads
> (let's call them A, B, C). Each one has a single large Kubernetes cluster
> named after the datacenter it's in. Initially, we considered equating
> namespaces to environments, and having the dev and staging namespaces in
> the NP cluster and the production namespace in the A, B and C clusters. But
> we could not get good isolation between different teams and microservices
> for authentication, quota management, etc.
>
> So instead, for a service 'foo,' each cluster uses namespaces like
> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
> namespaces only existing in the NP cluster, but the production namespace
> only existing in clusters A, B and C. The foo team only has access to the
> foo namespaces (through ABAC, soon RBAC) and the foo namespaces can have
> quotas put on them to ensure they do not overrun a cluster or environment.
>
> However, we've started to wonder whether colocating multiple environments
> in a single cluster like this is a good idea. The first thing that gave us
> pause was federation and service discovery - as the foo service, I'd love
> to be able to deploy to a cluster, then indicate that I want to talk to the
> 'baz' service, and have it automatically find the baz service in my
> cluster, and fall back to a secondary cluster if it's not there. Having
> multiple environments in a single cluster means every app in a cluster
> needs to not only know that it reaches a 'baz' service, but it needs to
> know to specifically reach out to 'baz-dev|staging|prod' etc., which
> pollutes everyone's configs. This is specifically because there's no first
> class concept for "environment" in Kubernetes at the moment - only what
> we've clobbered into namespaces, configs and service names. (Something like
> hierarchical namespaces may be able to help with this.)
>
> The alternative we're considering is to have each cluster contain only a
> single environment. Having one environment per cluster simplifies a lot of
> configuration and object definitions across the board, because there's only
> one axis to worry about (cluster) instead of two (cluster + environment).
> We can know implicitly that everything in a given cluster belongs to a
> specific environment, potentially simplifying configuration more broadly.
> It also feels like it might be a lot more natural of a fit to Kubernetes'
> federation plans, but we haven't looked into this in as much depth.
>
> But on the flip side, I've always understood Kubernetes' ultimate goal to
> be a lot more like Borg or an AWS availability zone or region, where the
> software operates more at an infrastructure layer than the application
> layer, because this dramatically improves hardware utilization and
> efficiency and minimizes the number of clusters to operate, scale, etc.
>
> An extreme alternative we've heard is to actually bootstrap a cluster per
> team, but that feels pretty far from the Kubernetes vision, though we might
> be wrong about that as well.
>
> So, we'd love to hear opinions on not only what's recommended or possible
> today with Kubernetes, but what is the vision - should Kubernetes clusters
> exist at an application/environment layer or at the infrastructure layer?
>
> Thank you!
> Sam
>

Re: [kubernetes-users] One environment or many per cluster?

2017-04-20 Thread Paul Ingles
We've been discussing this a little internally (at uSwitch) as we start to
pull more workloads over to Kubernetes so I'm glad it's come up on the list
also!

To add to the discussion: we currently run two clusters (called red and
black) with separate namespaces per team and RBAC to control access. Teams
decide individually how they want their systems to run- some have staging
builds others prefer to have single environment but with feature toggles
etc.

The different clusters have the same RBAC and namespace configuration and
are mostly in-sync on OS (CoreOS)/K8s releases also. Red is the cluster we
would upgrade first; we tell users of red to expect it, but aim for both
clusters to be available to users without disruption.

We're trying hard to consolidate on a few large k8s clusters (irrespective
of team, environment etc.). Currently I'd guess around 50%-75% of our
workloads run across 30+ different ECS clusters (team*environment*other)
which are mostly at single-digit utilisation :)

On Thu, Apr 20, 2017 at 9:12 AM, Matthias Rampke  wrote:

> At SoundCloud, we use multiple environments in one cluster, using
> namespaces and different configuration.
>
> We have a weaker notion of environments – there is no global "staging".
> Therefore, the "env" dimension is grouped *under* the system (~ app). One
> system's staging and production may be another system's blue and green.
> Some systems even have ephemeral environments that are automatically
> created for each PR, but that is rare.
>
> It is up to the owners of a system to decide, for each env, which other
> system/env combo it should talk to. For example, foo-staging may talk to
> bar-green, baz-testing and bam-production.
>
> We bake multiple configurations into each artifact, and select which one
> to load when templating out the podspec per env. By convention, the name of
> the configuration is the name of the env in which it should be loaded, but
> that is not a hard rule – the aforementioned ephemeral staging envs all
> share the same configuration.
>
> We also have test clusters, but these are mostly for staging and testing
> changes to Kubernetes itself.
>
> /MR
>
> On Thu, Apr 20, 2017 at 6:47 AM 'EJ Campbell' via Kubernetes user
> discussion and Q&A  wrote:
>
>> Definitely an interesting question!
>>
>> At Yahoo, we've generally separated the underlying infrastructure based
>> on whether our CI/CD infrastructure is performing the deployment versus a
>> developer making changes manually. Mapping to Box's definition, "dev"
>> is one very locked down K8s environment, while stage/prod would share a
>> single (or a small number) of large K8s clusters, per data center.
>>
>> As for the issue of routing requests from foo-stage to bar-stage, our
>> CI/CD infrastructure injects the environment an application is running in
>> into the pod. This is used by the application's configuration
>>  to select the appropriate service to hit
>> based on the injected environment variable (e.g. if an app is running in
>> "stage", it *may* want to hit a "stage" version of its API).
>>
>> For example:
>>   [{
>> "settings": [ "master" ],
>> "bar": "api-bar.example.com"
>>   }, {
>> "settings": [ "environment:stage" ],
>> "bar": "api-stage-bar.example.com"
>>   }]
>>
>> So, K8s itself does not know whether an app is the stage, canary, qa,
>> prod, etc. version of that app. Those are application-specific concepts,
>> separate from the underlying infrastructure hosting the application.
>>
>> -EJ
>>
>> On Wednesday, April 19, 2017, 10:30:30 PM PDT, 'Tim Hockin' via
>> Kubernetes user discussion and Q&A 
>> wrote:
>> Sam,
>>
>> I don't have a clean answer for you.  What you really want (it seems)
>> is nested Namespaces.  If only our foresight were better...
>>
>> The way we end up doing it internally is that foo-prod and foo-test
>> get baked into the templates that produce the final configs that are
>> sent to the master.
>>
>> Kubernetes really is designed to empower large, shared clusters - Borg
>> style.  A lot of people are using it for small clusters today - one
>> per app or one per environment.  That works, but it is not really
>> capturing the idea we wanted to express.  There are lots of good,
>> valid reasons why people can't use mega-clusters yet - authn, authz,
>> billing, etc.  We're working on a lot of these things now (and RBAC
>> has landed :)
>>
>> I'd love to hear more people's thoughts - this is a very interesting
>> topic.
>>
>> On Wed, Apr 19, 2017 at 10:23 PM, Nikhil Jindal
>>  wrote:
>> > That's a great question!
>> >
>> > I would let others comment on what the kubernetes vision is but to me
>> > colocating multiple environments in a single cluster seems better than
>> > having different clusters for each environment for the reasons you
>> mentioned
>> > (better utilization and efficiency). There is work going on for hard
>> > multitenancy to support better isolation.
>> >
>> > For service discovery within kubernetes, we look for services in the same
>> > namespace by default.

Re: [kubernetes-users] One environment or many per cluster?

2017-04-20 Thread Matthias Rampke
At SoundCloud, we use multiple environments in one cluster, using
namespaces and different configuration.

We have a weaker notion of environments – there is no global "staging".
Therefore, the "env" dimension is grouped *under* the system (~ app). One
system's staging and production may be another system's blue and green.
Some systems even have ephemeral environments that are automatically
created for each PR, but that is rare.

It is up to the owners of a system to decide, for each env, which other
system/env combo it should talk to. For example, foo-staging may talk to
bar-green, baz-testing and bam-production.

We bake multiple configurations into each artifact, and select which one to
load when templating out the podspec per env. By convention, the name of
the configuration is the name of the env in which it should be loaded, but
that is not a hard rule – the aforementioned ephemeral staging envs all
share the same configuration.
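
As a minimal sketch of that selection step (the ENVIRONMENT variable and the
configs/ directory layout are assumptions for illustration; the real mechanism
is whatever the podspec template injects):

  # Sketch: pick the baked-in configuration that matches this pod's env.
  # ENVIRONMENT and the configs/ layout are illustrative assumptions.
  import json
  import os

  def load_config(config_dir="configs"):
      env = os.environ.get("ENVIRONMENT", "production")
      # By convention the file name matches the env name; ephemeral
      # per-PR envs can all point at one shared file instead.
      path = os.path.join(config_dir, env + ".json")
      if not os.path.exists(path):
          path = os.path.join(config_dir, "shared.json")
      with open(path) as f:
          return json.load(f)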

We also have test clusters, but these are mostly for staging and testing
changes to Kubernetes itself.

/MR

On Thu, Apr 20, 2017 at 6:47 AM 'EJ Campbell' via Kubernetes user
discussion and Q&A  wrote:

> Definitely an interesting question!
>
> At Yahoo, we've generally separated the underlying infrastructure based on
> whether our CI/CD infrastructure is performing the deployment versus a
> developer making changes manually. Mapping to Box's definition, "dev" is
> one very locked down K8s environment, while stage/prod would share a single
> (or a small number) of large K8s clusters, per data center.
>
> As for the issue of routing requests from foo-stage to bar-stage, our
> CI/CD infrastructure injects the environment an application is running in
> into the pod. This is used by the application's configuration
>  to select the appropriate service to hit
> based on the injected environment variable (e.g. if an app is running in
> "stage", it *may* want to hit a "stage" version of its API).
>
> For example:
>   [{
> "settings": [ "master" ],
> "bar": "api-bar.example.com"
>   }, {
> "settings": [ "environment:stage" ],
> "bar": "api-stage-bar.example.com"
>   }]
>
> So, K8s itself does not know whether an app is the stage, canary, qa,
> prod, etc. version of that app. Those are application-specific concepts,
> separate from the underlying infrastructure hosting the application.
>
> -EJ
>
> On Wednesday, April 19, 2017, 10:30:30 PM PDT, 'Tim Hockin' via Kubernetes
> user discussion and Q&A  wrote:
> Sam,
>
> I don't have a clean answer for you.  What you really want (it seems)
> is nested Namespaces.  If only our foresight were better...
>
> The way we end up doing it internally is that foo-prod and foo-test
> get baked into the templates that produce the final configs that are
> sent to the master.
>
> Kubernetes really is designed to empower large, shared clusters - Borg
> style.  A lot of people are using it for small clusters today - one
> per app or one per environment.  That works, but it is not really
> capturing the idea we wanted to express.  There are lots of good,
> valid reasons why people can't use mega-clusters yet - authn, authz,
> billing, etc.  We're working on a lot of these things now (and RBAC
> has landed :)
>
> I'd love to hear more people's thoughts - this is a very interesting topic.
>
> On Wed, Apr 19, 2017 at 10:23 PM, Nikhil Jindal
>  wrote:
> > That's a great question!
> >
> > I would let others comment on what the kubernetes vision is but to me
> > colocating multiple environments in a single cluster seems better than
> > having different clusters for each environment for the reasons you
> mentioned
> > (better utilization and efficiency). There is work going on for hard
> > multitenancy to support better isolation.
> >
> > For service discovery within kubernetes, we look for services in the same
> > namespace by default. So if services foo and baz are in the same namespace,
> > then foo in the foobaz-dev namespace will reach out to baz in foobaz-dev,
> > and foo in foobaz-prod will reach out to baz in foobaz-prod.
> > For service discovery in federation, you have to explicitly request the
> > federated service, which includes providing the namespace as well (for ex:
> > baz.foobaz-dev.federation.svc.)
> >
> >
> >
> >
> > On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user
> discussion
> > and Q&A  wrote:
> >>
> >> Hello,
> >>
> >> We're struggling internally at Box with a question that we were hoping
> the
> >> community could help shed some light on.
> >>
> >> At Box we have three main environments: development, staging, and
> >> production. The definition of 'environment' here is primarily a logical
> >> service discovery domain - what instances of other services should
> access me
> >> and what instances of other services should I access? (Some
> configuration
> >> can vary as well.) The development environment is a collection of
> instances
>> where changes are pushed first and the most frequently changing environment.

Re: [kubernetes-users] One environment or many per cluster?

2017-04-19 Thread
Definitely an interesting question!
At Yahoo, we've generally separated the underlying infrastructure based on
whether our CI/CD infrastructure is performing the deployment versus a
developer making changes manually. Mapping to Box's definition, "dev" is one
very locked down K8s environment, while stage/prod would share a single (or a
small number) of large K8s clusters per data center.
As for the issue of routing requests from foo-stage to bar-stage, our CI/CD
infrastructure injects the environment an application is running in into the
pod. The application's configuration uses the injected environment variable to
select the appropriate service to hit (e.g. if an app is running in "stage",
it may want to hit a "stage" version of its API).
For example:

  [{
    "settings": [ "master" ],
    "bar": "api-bar.example.com"
  }, {
    "settings": [ "environment:stage" ],
    "bar": "api-stage-bar.example.com"
  }]
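
One plausible way to resolve such a list against the injected environment;
treating the "master" entry as the base and env-specific entries as overrides
is an assumption here, not the documented behaviour:

  # Sketch: resolve the settings list against the pod's injected env.
  # "master" as base plus env-specific overrides is an assumption.
  import os

  def resolve(entries):
      env_key = "environment:" + os.environ.get("ENVIRONMENT", "prod")
      config = {}
      for entry in entries:
          settings = entry.get("settings", [])
          if "master" in settings or env_key in settings:
              config.update({k: v for k, v in entry.items()
                             if k != "settings"})
      return config

  entries = [
      {"settings": ["master"], "bar": "api-bar.example.com"},
      {"settings": ["environment:stage"], "bar": "api-stage-bar.example.com"},
  ]
  # With ENVIRONMENT=stage this prints api-stage-bar.example.com;
  # otherwise it falls back to the master value, api-bar.example.com.
  print(resolve(entries)["bar"])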

So, K8s itself does not know whether an app is the stage, canary, qa, prod,
etc. version of that app. Those are application-specific concepts, separate
from the underlying infrastructure hosting the application.
-EJ
On Wednesday, April 19, 2017, 10:30:30 PM PDT, 'Tim Hockin' via Kubernetes user
discussion and Q&A  wrote:

Sam,

I don't have a clean answer for you.  What you really want (it seems)
is nested Namespaces.  If only our foresight were better...

The way we end up doing it internally is that foo-prod and foo-test
get baked into the templates that produce the final configs that are
sent to the master.

Kubernetes really is designed to empower large, shared clusters - Borg
style.  A lot of people are using it for small clusters today - one
per app or one per environment.  That works, but it is not really
capturing the idea we wanted to express.  There are lots of good,
valid reasons why people can't use mega-clusters yet - authn, authz,
billing, etc.  We're working on a lot of these things now (and RBAC
has landed :)

I'd love to hear more people's thoughts - this is a very interesting topic.

On Wed, Apr 19, 2017 at 10:23 PM, Nikhil Jindal
 wrote:
> That's a great question!
>
> I would let others comment on what the kubernetes vision is but to me
> colocating multiple environments in a single cluster seems better than
> having different clusters for each environment for the reasons you mentioned
> (better utilization and efficiency). There is work going on for hard
> multitenancy to support better isolation.
>
> For service discovery within kubernetes, we look for services in the same
> namespace by default. So if services foo and baz are in the same namespace,
> then foo in the foobaz-dev namespace will reach out to baz in foobaz-dev,
> and foo in foobaz-prod will reach out to baz in foobaz-prod.
> For service discovery in federation, you have to explicitly request the
> federated service, which includes providing the namespace as well (for ex:
> baz.foobaz-dev.federation.svc.)
>
>
>
>
> On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user discussion
> and Q&A  wrote:
>>
>> Hello,
>>
>> We're struggling internally at Box with a question that we were hoping the
>> community could help shed some light on.
>>
>> At Box we have three main environments: development, staging, and
>> production. The definition of 'environment' here is primarily a logical
>> service discovery domain - what instances of other services should access me
>> and what instances of other services should I access? (Some configuration
>> can vary as well.) The development environment is a collection of instances
>> where changes are pushed first and the most frequently changing environment.
>> Staging is where changes go right before they're released and where any
>> manual testing is done. Production is the set of instances serving customer
>> traffic.
>>
>> Currently, we have four bare-metal datacenters: one is for non-production
>> workloads (let's call it NP); the other three are for production workloads
>> (let's call them A, B, C). Each one has a single large Kubernetes cluster
>> named after the datacenter it's in. Initially, we considered equating
>> namespaces to environments, and having the dev and staging namespaces in the
>> NP cluster and the production namespace in the A, B and C clusters. But we
>> could not get good isolation between different teams and microservices for
>> authentication, quota management, etc.
>>
>> So instead, for a service 'foo,' each cluster uses namespaces like
>> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
>> namespaces existing only in the NP cluster and the production namespace
>> existing only in clusters A, B and C. The foo team only has access to the
>> foo namespaces (through ABAC, soon RBAC) and the foo namespaces can have
>> quotas put on them to ensure they do not overrun a cluster or environment.
>>
>> However, we've started to wonder whether colocating multiple environments
>> in a single cluster like this is a good idea.

Re: [kubernetes-users] One environment or many per cluster?

2017-04-19 Thread
Sam,

I don't have a clean answer for you.  What you really want (it seems)
is nested Namespaces.  If only our foresight were better...

The way we end up doing it internally is that foo-prod and foo-test
get baked into the templates that produce the final configs that are
sent to the master.
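
A rough sketch of that kind of templating (the object shape and helper name
are illustrative, not the actual internal tooling):

  # Sketch: bake the env-decorated name (foo-prod, foo-test) into the
  # final object before sending it to the master. Shape is illustrative.
  import json

  def render_service(app, env):
      name = "{}-{}".format(app, env)  # e.g. foo-prod, foo-test
      return {
          "apiVersion": "v1",
          "kind": "Service",
          "metadata": {"name": name, "labels": {"app": app, "env": env}},
          "spec": {"selector": {"app": app, "env": env},
                   "ports": [{"port": 80}]},
      }

  for env in ("prod", "test"):
      print(json.dumps(render_service("foo", env)))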

Kubernetes really is designed to empower large, shared clusters - Borg
style.  A lot of people are using it for small clusters today - one
per app or one per environment.  That works, but it is not really
capturing the idea we wanted to express.  There are lots of good,
valid reasons why people can't use mega-clusters yet - authn, authz,
billing, etc.  We're working on a lot of these things now (and RBAC
has landed :)

I'd love to hear more people's thoughts - this is a very interesting topic.

On Wed, Apr 19, 2017 at 10:23 PM, Nikhil Jindal
 wrote:
> That's a great question!
>
> I would let others comment on what the kubernetes vision is but to me
> colocating multiple environments in a single cluster seems better than
> having different clusters for each environment for the reasons you mentioned
> (better utilization and efficiency). There is work going on for hard
> multitenancy to support better isolation.
>
> For service discovery within kubernetes, we look for services in the same
> namespace by default. So if services foo and baz are in the same namespace,
> then foo in the foobaz-dev namespace will reach out to baz in foobaz-dev,
> and foo in foobaz-prod will reach out to baz in foobaz-prod.
> For service discovery in federation, you have to explicitly request the
> federated service, which includes providing the namespace as well (for ex:
> baz.foobaz-dev.federation.svc.)
>
>
>
>
> On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user discussion
> and Q&A  wrote:
>>
>> Hello,
>>
>> We're struggling internally at Box with a question that we were hoping the
>> community could help shed some light on.
>>
>> At Box we have three main environments: development, staging, and
>> production. The definition of 'environment' here is primarily a logical
>> service discovery domain - what instances of other services should access me
>> and what instances of other services should I access? (Some configuration
>> can vary as well.) The development environment is a collection of instances
>> where changes are pushed first and the most frequently changing environment.
>> Staging is where changes go right before they're released and where any
>> manual testing is done. Production is the set of instances serving customer
>> traffic.
>>
>> Currently, we have four bare-metal datacenters: one is for non-production
>> workloads (let's call it NP); the other three are for production workloads
>> (let's call them A, B, C). Each one has a single large Kubernetes cluster
>> named after the datacenter it's in. Initially, we considered equating
>> namespaces to environments, and having the dev and staging namespaces in the
>> NP cluster and the production namespace in the A, B and C clusters. But we
>> could not get good isolation between different teams and microservices for
>> authentication, quota management, etc.
>>
>> So instead, for a service 'foo,' each cluster uses namespaces like
>> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
>> namespaces existing only in the NP cluster and the production namespace
>> existing only in clusters A, B and C. The foo team only has access to the
>> foo namespaces (through ABAC, soon RBAC) and the foo namespaces can have
>> quotas put on them to ensure they do not overrun a cluster or environment.
>>
>> However, we've started to wonder whether colocating multiple environments
>> in a single cluster like this is a good idea. The first thing that gave us
>> pause was federation and service discovery - as the foo service, I'd love to
>> be able to deploy to a cluster, then indicate that I want to talk to the
>> 'baz' service, and have it automatically find the baz service in my cluster,
>> and fall back to a secondary cluster if it's not there. Having multiple
>> environments in a single cluster means every app in a cluster needs to not
>> only know that it reaches a 'baz' service, but it needs to know to
>> specifically reach out to 'baz-dev|staging|prod' etc., which pollutes
>> everyone's configs. This is specifically because there's no first class
>> concept for "environment" in Kubernetes at the moment - only what we've
>> clobbered into namespaces, configs and service names. (Something like
>> hierarchical namespaces may be able to help with this.)
>>
>> The alternative we're considering is to have each cluster contain only a
>> single environment. Having one environment per cluster simplifies a lot of
>> configuration and object definitions across the board, because there's only
>> one axis to worry about (cluster) instead of two (cluster + environment). We
>> can know implicitly that everything in a given cluster belongs to a specific
>> environment, potentially simplifying configuration more broadly.

Re: [kubernetes-users] One environment or many per cluster?

2017-04-19 Thread Nikhil Jindal
That's a great question!

I would let others comment on what the kubernetes vision is but to me
colocating multiple environments in a single cluster seems better than
having different clusters for each environment for the reasons you
mentioned (better utilization and efficiency). There is work going on for
hard multitenancy to support better isolation.

For service discovery within kubernetes, we look for services in the same
namespace by default. So if services foo and baz are in the same namespace,
then foo in the foobaz-dev namespace will reach out to baz in foobaz-dev,
and foo in foobaz-prod will reach out to baz in foobaz-prod.
For service discovery in federation, you have to explicitly request the
federated service, which includes providing the namespace as well (for ex:
baz.foobaz-dev.federation.svc.)
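
Concretely, the names a pod in namespace foobaz-dev would resolve look like
this (the helper is illustrative; "cluster.local" is the common default
cluster domain):

  # Sketch: DNS names a pod in namespace foobaz-dev would use.
  def service_dns(service, namespace, federated=False,
                  cluster_domain="cluster.local"):
      if federated:
          # Federated lookups must name the namespace explicitly.
          return "{}.{}.federation.svc.".format(service, namespace)
      # In-cluster, the short name (just "baz") already resolves within
      # the pod's own namespace; this is the fully qualified form.
      return "{}.{}.svc.{}".format(service, namespace, cluster_domain)

  print(service_dns("baz", "foobaz-dev"))                  # baz.foobaz-dev.svc.cluster.local
  print(service_dns("baz", "foobaz-dev", federated=True))  # baz.foobaz-dev.federation.svc.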




On Tue, Apr 18, 2017 at 4:14 PM, 'Sam Ghods' via Kubernetes user discussion
and Q&A  wrote:

> Hello,
>
> We're struggling internally at Box with a question that we were hoping the
> community could help shed some light on.
>
> At Box we have three main environments: development, staging, and
> production. The definition of 'environment' here is primarily a logical
> service discovery domain - what instances of other services should access
> me and what instances of other services should I access? (Some
> configuration can vary as well.) The development environment is a
> collection of instances where changes are pushed first and the most
> frequently changing environment. Staging is where changes go right before
> they're released and where any manual testing is done. Production is the
> set of instances serving customer traffic.
>
> Currently, we have four bare-metal datacenters: one is for non-production
> workloads (let's call it NP); the other three are for production workloads
> (let's call them A, B, C). Each one has a single large Kubernetes cluster
> named after the datacenter it's in. Initially, we considered equating
> namespaces to environments, and having the dev and staging namespaces in
> the NP cluster and the production namespace in the A, B and C clusters. But
> we could not get good isolation between different teams and microservices
> for authentication, quota management, etc.
>
> So instead, for a service 'foo,' each cluster uses namespaces like
> 'foo-dev', 'foo-staging', and 'foo-production', with the first two
> namespaces existing only in the NP cluster and the production namespace
> existing only in clusters A, B and C. The foo team only has access to the
> foo namespaces (through ABAC, soon RBAC) and the foo namespaces can have
> quotas put on them to ensure they do not overrun a cluster or environment.
>
> However, we've started to wonder whether colocating multiple environments
> in a single cluster like this is a good idea. The first thing that gave us
> pause was federation and service discovery - as the foo service, I'd love
> to be able to deploy to a cluster, then indicate that I want to talk to the
> 'baz' service, and have it automatically find the baz service in my
> cluster, and fall back to a secondary cluster if it's not there. Having
> multiple environments in a single cluster means every app in a cluster
> needs to not only know that it reaches a 'baz' service, but it needs to
> know to specifically reach out to 'baz-dev|staging|prod' etc., which
> pollutes everyone's configs. This is specifically because there's no first
> class concept for "environment" in Kubernetes at the moment - only what
> we've clobbered into namespaces, configs and service names. (Something like
> hierarchical namespaces may be able to help with this.)
>
> The alternative we're considering is to have each cluster contain only a
> single environment. Having one environment per cluster simplifies a lot of
> configuration and object definitions across the board, because there's only
> one axis to worry about (cluster) instead of two (cluster + environment).
> We can know implicitly that everything in a given cluster belongs to a
> specific environment, potentially simplifying configuration more broadly.
> It also feels like it might be a much more natural fit for Kubernetes'
> federation plans, but we haven't looked into this in as much depth.
>
> But on the flip side, I've always understood Kubernetes' ultimate goal to
> be a lot more like Borg or an AWS availability zone or region, where the
> software operates more at an infrastructure layer than the application
> layer, because this dramatically improves hardware utilization and
> efficiency and minimizes the number of clusters to operate, scale, etc.
>
> An extreme alternative we've heard is to actually bootstrap a cluster per
> team, but that feels pretty far from the Kubernetes vision, though we might
> be wrong about that as well.
>
> So, we'd love to hear opinions on not only what's recommended or possible
> today with Kubernetes, but what is the vision - should Kubernetes clusters
> exist at an application/environment layer or at the infrastructure layer?