Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-15 Thread 王华
I think master nodes should be controlled by Magnum, so that we can do the
operational work for users. AWS and GCE use this mode. Master nodes are also
resource-consuming. If master nodes are not controlled by users, we can do
some optimization to reduce the cost, which is invisible to users. For
example, we can combine several master nodes into one with correct isolation.

Regards,
Wanghua

On Tue, Feb 16, 2016 at 1:52 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

> Regarding the COE mode, it seems there are three options:
>
> 1.   Place both master nodes and worker nodes to user’s tenant
> (current implementation).
>
> 2.   Place only worker nodes to user’s tenant.
>
> 3.   Hide both master nodes and worker nodes from user’s tenant.
>
>
>
> Frankly, I don’t know which one will succeed/fail in the future. Each mode
> seems to have use cases. Maybe magnum could support multiple modes?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Corey O'Brien [mailto:coreypobr...@gmail.com]
> *Sent:* February-15-16 8:43 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
>
>
> Hi all,
>
>
>
> A few thoughts to add:
>
>
>
> I like the idea of isolating the masters so that they are not
> tenant-controllable, but I don't think the Magnum control plane is the
> right place for them. They still need to be running on tenant-owned
> resources so that they have access to things like isolated tenant networks
> or that any bandwidth they consume can still be attributed and billed to
> tenants.
>
>
>
> I think we should extend that concept a little to include worker nodes as
> well. While they should live in the tenant like the masters, they shouldn't
> be controllable by the tenant through anything other than the COE API. The
> main use case that Magnum should be addressing is providing a managed COE
> environment. Like Hongbin mentioned, Magnum users won't have the domain
> knowledge to properly maintain the swarm/k8s/mesos infrastructure the same
> way that Nova users aren't expected to know how to manage a hypervisor.
>
>
>
> I agree with Egor that trying to have Magnum schedule containers is going
> to be a losing battle. Swarm/K8s/Mesos are always going to have better
> scheduling for their containers. We don't have the resources to try to be
> yet another container orchestration engine. Besides that, as a developer, I
> don't want to learn another set of orchestration semantics when I already
> know swarm or k8s or mesos.
>
>
>
> @Kris, I appreciate the real use case you outlined. In your idea of having
> multiple projects use the same masters, how would you intend to isolate
> them? As far as I can tell none of the COEs would have any way to isolate
> those teams from each other if they share a master. I think this is a big
> problem with the idea of sharing masters even within a single tenant. As an
> operator, I definitely want to know that users can isolate their resources
> from other users and tenants can isolate their resources from other tenants.
>
>
>
> Corey
>
>
>
> On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao <p...@hyper.sh> wrote:
>
> Hi,
>
>
>
> I wanted to give some thoughts to the thread.
>
>
>
> There are various perspectives around “Hosted vs Self-managed COE”, but if
> you stand in the developer's position, it basically comes down to “Ops vs
> Flexibility”.
>
>
>
> For those who want more control of the stack, so as to customize it in any way
> they see fit, self-managed is a more appealing option. However, one may
> argue that the same job can be done with a heat template + some patchwork of
> cinder/neutron, and that the heat template is more customizable than magnum,
> which probably introduces some requirements on the COE configuration.
>
>
>
> For people who don't want to manage the COE, hosted is a no-brainer. The
> question here is which one is the core compute engine of the stack: nova or
> the COE? Unless you are running a public, multi-tenant OpenStack
> deployment, it is highly likely that you are sticking with only one COE.
> Supposing k8s is what your team deals with every day, why do you need
> nova sitting under k8s, whose job is just launching some VMs? After all, it
> is the COE that orchestrates cinder/neutron.
>
>
>
> One idea is to put the COE at the same layer as nova. Instead of
> running atop nova, these two run side by side. So you get two compute
> engines: nova for IaaS workloads, k8s for CaaS workloads. If you go this way,
> hypernetes <https://github.com/hyperhq/hypernetes> is probably what you are
> looking for.

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-15 Thread Hongbin Lu
Regarding the COE mode, it seems there are three options:

1.   Place both master nodes and worker nodes to user’s tenant (current 
implementation).

2.   Place only worker nodes to user’s tenant.

3.   Hide both master nodes and worker nodes from user’s tenant.

Frankly, I don’t know which one will succeed/fail in the future. Each mode 
seems to have use cases. Maybe magnum could support multiple modes?
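
For illustration only, one way multiple modes could be expressed is a per-baymodel
placement attribute. The sketch below is hypothetical Python, not an existing Magnum
API: the MasterPlacement enum, the master_placement field and the tenant_for_masters
helper are all invented for this example.

    # Hypothetical sketch only: Magnum has no such attribute today.
    from dataclasses import dataclass
    from enum import Enum

    class MasterPlacement(Enum):
        """The three COE placement modes discussed in this thread."""
        USER_TENANT = 1   # masters and workers in the user's tenant (current behaviour)
        WORKERS_ONLY = 2  # only workers in the user's tenant; masters hosted by Magnum
        FULLY_HOSTED = 3  # both masters and workers hidden from the user's tenant

    @dataclass
    class BayModel:
        name: str
        coe: str  # e.g. "kubernetes", "swarm", "mesos"
        master_placement: MasterPlacement = MasterPlacement.USER_TENANT

    def tenant_for_masters(baymodel, user_tenant, service_tenant):
        """Decide which tenant owns the master nodes for a given mode."""
        if baymodel.master_placement is MasterPlacement.USER_TENANT:
            return user_tenant
        return service_tenant  # modes 2 and 3 host the masters on Magnum's side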

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: February-15-16 8:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi all,

A few thoughts to add:

I like the idea of isolating the masters so that they are not 
tenant-controllable, but I don't think the Magnum control plane is the right 
place for them. They still need to be running on tenant-owned resources so that 
they have access to things like isolated tenant networks or that any bandwidth 
they consume can still be attributed and billed to tenants.

I think we should extend that concept a little to include worker nodes as well. 
While they should live in the tenant like the masters, they shouldn't be 
controllable by the tenant through anything other than the COE API. The main 
use case that Magnum should be addressing is providing a managed COE 
environment. Like Hongbin mentioned, Magnum users won't have the domain 
knowledge to properly maintain the swarm/k8s/mesos infrastructure the same way 
that Nova users aren't expected to know how to manage a hypervisor.

I agree with Egor that trying to have Magnum schedule containers is going to be 
a losing battle. Swarm/K8s/Mesos are always going to have better scheduling for 
their containers. We don't have the resources to try to be yet another 
container orchestration engine. Besides that, as a developer, I don't want to 
learn another set of orchestration semantics when I already know swarm or k8s 
or mesos.

@Kris, I appreciate the real use case you outlined. In your idea of having 
multiple projects use the same masters, how would you intend to isolate them? 
As far as I can tell none of the COEs would have any way to isolate those teams 
from each other if they share a master. I think this is a big problem with the 
idea of sharing masters even within a single tenant. As an operator, I 
definitely want to know that users can isolate their resources from other users 
and tenants can isolate their resources from other tenants.

Corey

On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao <p...@hyper.sh<mailto:p...@hyper.sh>> 
wrote:
Hi,

I wanted to give some thoughts to the thread.

There are various perspective around “Hosted vs Self-managed COE”, But if you 
stand at the developer's position, it basically comes down to “Ops vs 
Flexibility”.

For those who want more control of the stack, so as to customize in anyway they 
see fit, self-managed is a more appealing option. However, one may argue that 
the same job can be done with a heat template+some patchwork of cinder/neutron. 
And the heat template is more customizable than magnum, which probably 
introduces some requirements on the COE configuration.

For people who don't want to manage the COE, hosted is a no-brainer. The 
question here is that which one is the core compute engine is the stack, nova 
or COE? Unless you are running a public, multi-tenant OpenStack deployment, it 
is highly likely that you are sticking with only one COE. Supposing k8s is what 
your team is dealing with everyday, then why you need nova sitting under k8s, 
whose job is just launching some VMs. After all, it is the COE that 
orchestrates cinder/neutron.

One idea of this is to put COE at the same layer of nova. Instead of running 
atop nova, these two run side by side. So you got two compute engines: nova for 
IaaS workload, k8s for CaaS workload. If you go this way, hypernetes 
<https://github.com/hyperhq/hypernetes> is probably what you are looking for.

Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with Docker 
registry, and use nova to launch Docker images. But this is not done by 
nova-docker, simply because it is hard to integrate things like cinder/neutron 
with lxc. The idea is a nova-hyper 
driver<https://openstack.nimeyo.com/49570/openstack-dev-proposal-of-nova-hyper-driver>.
 Since Hyper is hypervisor-based, it is much easier to make it work with 
others. SHAMELESS PROMOTION: if you are interested in this idea, we've 
submitted a proposal at the Austin summit: 
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211.

Peng

Disclaimer: I maintain Hyper.

-
Hyper - Make VM run like Container



On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
My replies are inline.

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>]

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-15 Thread Corey O'Brien
Hi all,

A few thoughts to add:

I like the idea of isolating the masters so that they are not
tenant-controllable, but I don't think the Magnum control plane is the
right place for them. They still need to be running on tenant-owned
resources so that they have access to things like isolated tenant networks
or that any bandwidth they consume can still be attributed and billed to
tenants.

I think we should extend that concept a little to include worker nodes as
well. While they should live in the tenant like the masters, they shouldn't
be controllable by the tenant through anything other than the COE API. The
main use case that Magnum should be addressing is providing a managed COE
environment. Like Hongbin mentioned, Magnum users won't have the domain
knowledge to properly maintain the swarm/k8s/mesos infrastructure the same
way that Nova users aren't expected to know how to manage a hypervisor.

I agree with Egor that trying to have Magnum schedule containers is going
to be a losing battle. Swarm/K8s/Mesos are always going to have better
scheduling for their containers. We don't have the resources to try to be
yet another container orchestration engine. Besides that, as a developer, I
don't want to learn another set of orchestration semantics when I already
know swarm or k8s or mesos.

@Kris, I appreciate the real use case you outlined. In your idea of having
multiple projects use the same masters, how would you intend to isolate
them? As far as I can tell none of the COEs would have any way to isolate
those teams from each other if they share a master. I think this is a big
problem with the idea of sharing masters even within a single tenant. As an
operator, I definitely want to know that users can isolate their resources
from other users and tenants can isolate their resources from other tenants.

Corey

On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao <p...@hyper.sh> wrote:

> Hi,
>
> I wanted to give some thoughts to the thread.
>
> There are various perspective around “Hosted vs Self-managed COE”, But if
> you stand at the developer's position, it basically comes down to “Ops vs
> Flexibility”.
>
> For those who want more control of the stack, so as to customize in anyway
> they see fit, self-managed is a more appealing option. However, one may
> argue that the same job can be done with a heat template+some patchwork of
> cinder/neutron. And the heat template is more customizable than magnum,
> which probably introduces some requirements on the COE configuration.
>
> For people who don't want to manage the COE, hosted is a no-brainer. The
> question here is that which one is the core compute engine is the stack,
> nova or COE? Unless you are running a public, multi-tenant OpenStack
> deployment, it is highly likely that you are sticking with only one COE.
> Supposing k8s is what your team is dealing with everyday, then why you need
> nova sitting under k8s, whose job is just launching some VMs. After all, it
> is the COE that orchestrates cinder/neutron.
>
> One idea of this is to put COE at the same layer of nova. Instead of
> running atop nova, these two run side by side. So you got two compute
> engines: nova for IaaS workload, k8s for CaaS workload. If you go this way, 
> hypernetes
> <https://github.com/hyperhq/hypernetes>is probably what you are looking
> for.
>
> Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with
> Docker registry, and use nova to launch Docker images. But this is not done
> by nova-docker, simply because it is hard to integrate things like
> cinder/neutron with lxc. The idea is a nova-hyper driver
> <https://openstack.nimeyo.com/49570/openstack-dev-proposal-of-nova-hyper-driver>.
> Since Hyper is hypervisor-based, it is much easier to make it work with
> others. SHAMELESS PROMOTION: if you are interested in this idea, we've
> submitted a proposal at the Austin summit:
> https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211
> .
>
> Peng
>
> Disclaimer: I maintain Hyper.
>
> -
> Hyper - Make VM run like Container
>
>
>
> On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
>
>> My replies are inline.
>>
>>
>>
>> *From:* Kai Qiang Wu [mailto:wk...@cn.ibm.com]
>> *Sent:* February-14-16 7:17 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>
>>
>>
>> HongBin,
>>
>> See my replies and questions in line. >>
>>
>>
>> Thanks
>>
>> Best Wishes,
>>
>> -

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-14 Thread Peng Zhao
Hi,
I wanted to give some thoughts to the thread.
There are various perspectives around “Hosted vs Self-managed COE”, but if you
stand in the developer's position, it basically comes down to “Ops vs
Flexibility”.
For those who want more control of the stack, so as to customize it in any way they
see fit, self-managed is a more appealing option. However, one may argue that
the same job can be done with a heat template + some patchwork of cinder/neutron,
and that the heat template is more customizable than magnum, which probably
introduces some requirements on the COE configuration.
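
As a rough sketch of that self-managed path ("heat template + some patchwork of
cinder/neutron"), assuming python-heatclient and a keystone session are available;
the credentials, image, flavor and network names below are placeholders, and the
HOT fragment only boots a single node rather than a full COE.

    # Sketch of the self-managed path: drive Heat directly instead of Magnum.
    # Credentials and resource names are placeholders; the template is a fragment.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from heatclient import client as heat_client

    auth = v3.Password(auth_url="http://controller:5000/v3",
                       username="demo", password="secret", project_name="demo",
                       user_domain_id="default", project_domain_id="default")
    heat = heat_client.Client("1", session=session.Session(auth=auth))

    # A tiny HOT fragment booting one node that a COE bootstrap script would configure.
    template = {
        "heat_template_version": "2015-10-15",
        "resources": {
            "coe_node": {
                "type": "OS::Nova::Server",
                "properties": {
                    "image": "fedora-atomic",              # placeholder image
                    "flavor": "m1.small",                  # placeholder flavor
                    "networks": [{"network": "private"}],  # placeholder network
                },
            }
        },
    }

    heat.stacks.create(stack_name="self-managed-coe", template=template)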
For people who don't want to manage the COE, hosted is a no-brainer. The
question here is which one is the core compute engine of the stack: nova or the
COE? Unless you are running a public, multi-tenant OpenStack deployment, it is
highly likely that you are sticking with only one COE. Supposing k8s is what
your team deals with every day, why do you need nova sitting under k8s,
whose job is just launching some VMs? After all, it is the COE that orchestrates
cinder/neutron.
One idea is to put the COE at the same layer as nova. Instead of running
atop nova, these two run side by side. So you get two compute engines: nova for
IaaS workloads, k8s for CaaS workloads. If you go this way, hypernetes
<https://github.com/hyperhq/hypernetes> is probably what you are looking for.
Another idea is “Dockerized (Immutable) IaaS”, e.g. replacing Glance with a Docker
registry and using nova to launch Docker images. But this is not done by
nova-docker, simply because it is hard to integrate things like cinder/neutron
with LXC. The idea is a nova-hyper driver
<https://openstack.nimeyo.com/49570/openstack-dev-proposal-of-nova-hyper-driver>.
Since Hyper is hypervisor-based, it is much easier to make it work with
others. SHAMELESS PROMOTION: if you are interested in this idea, we've submitted
a proposal at the Austin summit:
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211
Peng
Disclaimer: I maintain Hyper.
-
Hyper - Make VM run like Container


On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu < hongbin...@huawei.com > wrote:
My replies are inline.



From: Kai Qiang Wu [mailto: wk...@cn.ibm.com ]
Sent: February-14-16 7:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?



HongBin,

See my replies and questions in line. >>


Thanks

Best Wishes,
-- -- 

Kai Qiang Wu ( 吴开强 Kennan )
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
-- -- 

Follow your heart. You are miracle!


From: Hongbin Lu < hongbin...@huawei.com >
To: “OpenStack Development Mailing List (not for usage questions)“ < 
openstack-dev@lists. openstack.org >
Date: 15/02/2016 01:26 am
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?







Kai Qiang,

A major benefit is to have Magnum manage the COEs for end-users. Currently,
Magnum basically has its end-users manage the COEs by themselves after a
successful deployment. This might work well for domain users, but it is a pain
for non-domain users to manage their COEs. By moving master nodes out of users’
tenants, Magnum could offer users a COE management service. For example, Magnum
could offer to monitor the etcd/swarm-manage clusters and recover them on
failure. Again, the pattern of managing COEs for end-users is what Google
container service and AWS container service offer. I guess it is fair to
conclude that there are use cases out there?

>> I am not sure, when you talked about domain here, whether it is a keystone
>> domain or some other case. What's the non-domain users' case for managing the COEs?

Reply: I mean domain experts, someone who is an expert in kubernetes/swarm/mesos.



If we decide to offer a COE management service, we could discuss further on how
to consolidate the IaaS resources for improving utilization. Solutions could be
(i) introducing a centralized control services for all tenants/clusters, or (ii)
keeping the control services separated but isolating them by containers (instead
of VMs). A typical use case is what Kris mentioned below.

>> for (i), it is more complicated than (ii), and I did not see much benefit
>> gained
for the utilization case with (i); instead it could introduce much burden for
the upgrade case and service interference for all tenants/clusters

Reply: Definitely we could discuss it further. I don’t have a preference in mind
right now.




Best regards,
Hongbin

From: Kai Qiang Wu 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-14 Thread Kai Qiang Wu
HongBin,

See my replies and questions in line. >>


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   15/02/2016 01:26 am
Subject:    Re: [openstack-dev] [magnum]swarm + compose = k8s?



Kai Qiang,

A major benefit is to have Magnum manage the COEs for end-users. Currently,
Magnum basically have its end-users manage the COEs by themselves after a
successful deployment. This might work well for domain users, but it is a
pain for non-domain users to manage their COEs. By moving master nodes out
of users’ tenants, Magnum could offer users a COE management service. For
example, Magnum could offer to monitor the etcd/swarm-manage clusters and
recover them on failure. Again, the pattern of managing COEs for end-users
is what Google container service and AWS container service offer. I guess
it is fair to conclude that there are use cases out there?

>> I am not sure, when you talked about domain here, whether it is a keystone domain
or some other case.  What's the non-domain users' case for managing the COEs?

If we decide to offer a COE management service, we could discuss further on
how to consolidate the IaaS resources for improving utilization. Solutions
could be (i) introducing a centralized control services for all
tenants/clusters, or (ii) keeping the control services separated but
isolating them by containers (instead of VMs). A typical use case is what
Kris mentioned below.

>> for (i), it is more complicated than (ii), and I did not see much
benefit gained for the utilization case with (i); instead it could introduce
much burden for the upgrade case and service interference for all
tenants/clusters


Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-13-16 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?



Hi HongBin and Egor,
I went through what you talked about, and thinking what's the great
benefits for utilisation here.
For user cases, looks like following:

user A want to have a COE provision.
user B want to have a separate COE. (different tenant, non-share)
user C want to use existed COE (same tenant as User A, share)

When you talked about utilisation case, it seems you mentioned:
different tenant users want to use same control node to manage different
nodes, it seems that try to make COE openstack tenant aware, it also means
you want to introduce another control schedule layer above the COEs, we
need to think about the if it is typical user case, and what's the benefit
compared with containerisation.


And finally, it is a topic can be discussed in middle cycle meeting.


Thanks

Best Wishes,


Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193


Follow your heart. You are miracle!


From: Hongbin Lu <hongbin...@huawei.com>
To: Guz Egor <guz_e...@yahoo.com>, "OpenStack Development Mailing List (not
for usage questions)" <openstack-dev@lists.openstack.org>
Date: 13/02/2016 11:02 am
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Egor,

Thanks for sharing your insights. I gave it more thoughts. Maybe the goal
can be achieved without implementing a shared COE. We could move all the
master nodes out of user tenants, containerize them, and consolidate them
into a set of VMs/Physical servers.

I think we could separate the discussion into two:
1. Should Magnum introduce a new bay type, in which master
nodes are managed by Magnum (not users themselves)? Like what
GCE [1] or ECS [2] does.
2. How to consolidate the control services that originally runs
on master nodes of each cluster?

Note that the proposal is for adding a new COE (not for chan

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-14 Thread Hongbin Lu
Kai Qiang,

A major benefit is to have Magnum manage the COEs for end-users. Currently, 
Magnum basically has its end-users manage the COEs by themselves after a 
successful deployment. This might work well for domain users, but it is a pain 
for non-domain users to manage their COEs. By moving master nodes out of users’ 
tenants, Magnum could offer users a COE management service. For example, Magnum 
could offer to monitor the etcd/swarm-manager clusters and recover them on 
failure. Again, the pattern of managing COEs for end-users is what the Google 
container service and the AWS container service offer. I guess it is fair to 
conclude that there are use cases out there?

If we decide to offer a COE management service, we could discuss further how 
to consolidate the IaaS resources to improve utilization. Solutions could be 
(i) introducing centralized control services for all tenants/clusters, or 
(ii) keeping the control services separated but isolating them with containers 
(instead of VMs). A typical use case is what Kris mentioned below.
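
To make option (ii) a bit more concrete, here is a minimal sketch that packs one
cluster's control services into containers on a shared host with the Docker SDK
for Python; the image names, tags and discovery URL are placeholders, and this is
not Magnum code.

    # Illustrative sketch of option (ii): per-cluster control services packed as
    # containers on a shared host instead of dedicated master VMs.
    import docker

    client = docker.from_env()

    def start_control_plane(cluster_id):
        """Run one tenant cluster's etcd and swarm manager as containers."""
        etcd = client.containers.run(
            "quay.io/coreos/etcd:v2.2.5",                  # placeholder image/tag
            name="etcd-%s" % cluster_id,
            detach=True,
        )
        manager = client.containers.run(
            "swarm:1.1",                                   # placeholder image/tag
            command="manage etcd://etcd:2379",             # discovery URL is illustrative
            name="swarm-manager-%s" % cluster_id,
            links={"etcd-%s" % cluster_id: "etcd"},        # alias the etcd container
            detach=True,
        )
        return [etcd, manager]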

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-13-16 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


Hi HongBin and Egor,
I went through what you talked about, and thinking what's the great benefits 
for utilisation here.
For user cases, looks like following:

user A want to have a COE provision.
user B want to have a separate COE. (different tenant, non-share)
user C want to use existed COE (same tenant as User A, share)

When you talked about utilisation case, it seems you mentioned:
different tenant users want to use same control node to manage different nodes, 
it seems that try to make COE openstack tenant aware, it also means you want to 
introduce another control schedule layer above the COEs, we need to think about 
the if it is typical user case, and what's the benefit compared with 
containerisation.


And finally, it is a topic can be discussed in middle cycle meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
To: Guz Egor <guz_e...@yahoo.com<mailto:guz_e...@yahoo.com>>, "OpenStack 
Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: 13/02/2016 11:02 am
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Egor,

Thanks for sharing your insights. I gave it more thoughts. Maybe the goal can 
be achieved without implementing a shared COE. We could move all the master 
nodes out of user tenants, containerize them, and consolidate them into a set 
of VMs/Physical servers.

I think we could separate the discussion into two:
1. Should Magnum introduce a new bay type, in which master nodes are managed by 
Magnum (not users themselves)? Like what GCE [1] or ECS [2] does.
2. How to consolidate the control services that originally runs on master nodes 
of each cluster?

Note that the proposal is for adding a new COE (not for changing the existing 
COEs). That means users will continue to provision existing self-managed COE 
(k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's good idea, it looks you propose Magnum enter to 
"schedulers war" (personally I tired from these debates Mesos vs Kub vs Swarm).
If your concern is just utilization you can always run control plane at 
"agent/slave" nodes, there main reason why operators (at least in our case) 
keep them
separate because they need different attention (e.g. I almost don't care 
why/when "agent/slave" node died, but always double check that master node was
repaired or replaced).

One use case I see for shared COE (at 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-13 Thread Kai Qiang Wu
Hi HongBin and Egor,
I went through what you talked about, and I am thinking about what the great
benefits for utilisation are here.
For the user cases, it looks like the following:

 user A wants to have a COE provisioned.
 user B wants to have a separate COE.  (different tenant, non-shared)
 user C wants to use an existing COE (same tenant as user A, shared)

When you talked about the utilisation case, it seems you mentioned that
different tenant users want to use the same control node to manage different
nodes. That seems to try to make the COE openstack tenant aware, and it also means
you want to introduce another control/schedule layer above the COEs. We
need to think about whether it is a typical user case, and what the benefit is
compared with containerisation.


And finally, it is a topic that can be discussed in the midcycle meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu <hongbin...@huawei.com>
To: Guz Egor <guz_e...@yahoo.com>, "OpenStack Development Mailing
List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   13/02/2016 11:02 am
Subject:    Re: [openstack-dev] [magnum]swarm + compose = k8s?



Egor,

Thanks for sharing your insights. I gave it more thoughts. Maybe the goal
can be achieved without implementing a shared COE. We could move all the
master nodes out of user tenants, containerize them, and consolidate them
into a set of VMs/Physical servers.

I think we could separate the discussion into two:
  1.   Should Magnum introduce a new bay type, in which master
  nodes are managed by Magnum (not users themselves)? Like what GCE [1]
  or ECS [2] does.
  2.   How to consolidate the control services that originally runs
  on master nodes of each cluster?

Note that the proposal is for adding a new COE (not for changing the
existing COEs). That means users will continue to provision existing
self-managed COE (k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's good idea, it looks you propose Magnum enter to
"schedulers war" (personally I tired from these debates Mesos vs Kub vs
Swarm).
If your  concern is just utilization you can always run control plane at
"agent/slave" nodes, there main reason why operators (at least in our case)
keep them
separate because they need different attention (e.g. I almost don't care
why/when "agent/slave" node died, but always double check that master node
was
repaired or replaced).

One use case I see for shared COE (at least in our environment), when
developers want run just docker container without installing anything
locally
(e.g docker-machine). But in most cases it's just examples from internet or
there own experiments ):

But we definitely should discuss it during midcycle next week.

---
Egor


From: Hongbin Lu <hongbin...@huawei.com>
To: OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi team,

Sorry for bringing up this old thread, but a recent debate on container
resource [1] reminded me of the use case Kris mentioned below. I am going to
propose a preliminary idea to address the use case. Of course, we could
continue the discussion in the team meeting or midcycle.

Idea: Introduce a docker-native COE, which consists of only
minion/worker/slave nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips,
floating ips, etc.)
Details: Traditional COEs (k8s/swarm/mesos) consist of master nodes and
worker nodes. In these COEs, control services (i.e. the scheduler) run on
master nodes, and containers run on worker nodes. If we can port the COE
control services to the Magnum control plane and share them with all tenants,
we eliminate the need for master nodes, thus improving resource utilization.
In the new COE, users create/manage containers through Magnum API
endpoints. Magnum is responsible for spinning up tenant VMs, scheduling containers to
the VMs, and managing the life-cycle of those containers. Unlike other COEs,
containers created by this COE are considered as Op

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-12 Thread Guz Egor
Hongbin,
I am not sure that it's a good idea; it looks like you propose that Magnum enter the 
"schedulers war" (personally I am tired of these debates: Mesos vs Kube vs 
Swarm). If your concern is just utilization, you can always run the control plane on 
"agent/slave" nodes. The main reason why operators (at least in our case) 
keep them separate is that they need different attention (e.g. I almost don't 
care why/when an "agent/slave" node died, but I always double check that a master node 
was repaired or replaced).
One use case I see for a shared COE (at least in our environment) is when 
developers want to run just a docker container without installing anything locally 
(e.g. docker-machine). But in most cases it's just examples from the internet or 
their own experiments ):
But we definitely should discuss it during the midcycle next week.
--- Egor
  From: Hongbin Lu <hongbin...@huawei.com>
 To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org> 
 Sent: Thursday, February 11, 2016 8:50 PM
 Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
   
Hi team,

Sorry for bringing up this old thread, but a recent debate on container resource [1] 
reminded me of the use case Kris mentioned below. I am going to propose a 
preliminary idea to address the use case. Of course, we could continue the 
discussion in the team meeting or midcycle.

Idea: Introduce a docker-native COE, which consists of only minion/worker/slave 
nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips, floating ips, 
etc.)
Details: Traditional COEs (k8s/swarm/mesos) consist of master nodes and worker 
nodes. In these COEs, control services (i.e. the scheduler) run on master nodes, and 
containers run on worker nodes. If we can port the COE control services to the 
Magnum control plane and share them with all tenants, we eliminate the need for 
master nodes, thus improving resource utilization. In the new COE, users 
create/manage containers through Magnum API endpoints. Magnum is responsible for 
spinning up tenant VMs, scheduling containers to the VMs, and managing the 
life-cycle of those containers. Unlike other COEs, containers created by this COE 
are considered OpenStack-managed resources. That means they will be tracked in the 
Magnum DB, and accessible by other OpenStack services (i.e. Horizon, Heat, etc.).

What do you feel about this proposal? Let’s discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin
  From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?    We are looking 
at deploying magnum as an answer for how do we do containers company wide at 
Godaddy.  I am going to agree with both you and josh.    I agree that managing 
one large system is going to be a pain and past experience tells me this won't be 
practical/scale, however from experience I also know exactly the pain Josh is 
talking about.    We currently have ~4k projects in our internal openstack 
cloud, about 1/4 of the projects are currently doing some form of containers on 
their own, with more joining every day.  If all of these projects were to 
convert of to the current magnum configuration we would suddenly be attempting 
to support/configure ~1k magnum clusters.  Considering that everyone will want 
it HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spinup in projects where people maybe running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%,  250 still sucks.    From my point of view an ideal use case for companies 
like ours (yahoo/godaddy) would be able to support hierarchical projects in 
magnum.  That way we could create a project for each department, and then the 
subteams of those departments can have their own projects.  We create a a bay 
per department.  Sub-projects if they want to can support creation of their own 
bays (but support of the kube cluster would then fall to that team).  When a 
sub-project spins up a pod on a bay, minions get created inside that teams sub 
projects and the containers in that pod run on the capacity that was spun up  
under that project, the minions for each pod would be a in a scaling group and 
as such grow/shrink as dictated by load.    The above would make it so where we 
support a minimal, yet imho reasonable, number of kube clusters, give people 
who can't/don’t want to fall inline with the provided resource a way to make 
their own and still offer a "good enough for a single company" level of 
multi-tenancy. >Joshua, >   >If you share resources, you give up multi-tenancy. 
 No COE system

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-12 Thread Hongbin Lu
Egor,

Thanks for sharing your insights. I gave it more thought. Maybe the goal can 
be achieved without implementing a shared COE. We could move all the master 
nodes out of user tenants, containerize them, and consolidate them into a set 
of VMs/physical servers.

I think we could separate the discussion into two questions:

1.   Should Magnum introduce a new bay type, in which master nodes are 
managed by Magnum (not by users themselves)? Like what GCE [1] or ECS [2] does.

2.   How to consolidate the control services that originally run on the master 
nodes of each cluster?

Note that the proposal is for adding a new COE (not for changing the existing 
COEs). That means users can continue to provision the existing self-managed COEs 
(k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's good idea, it looks you propose Magnum enter to 
"schedulers war" (personally I tired from these debates Mesos vs Kub vs Swarm).
If your  concern is just utilization you can always run control plane at 
"agent/slave" nodes, there main reason why operators (at least in our case) 
keep them
separate because they need different attention (e.g. I almost don't care 
why/when "agent/slave" node died, but always double check that master node was
repaired or replaced).

One use case I see for shared COE (at least in our environment), when 
developers want run just docker container without installing anything locally
(e.g docker-machine). But in most cases it's just examples from internet or 
there own experiments ):

But we definitely should discuss it during midcycle next week.

---
Egor


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi team,

Sorry for bringing up this old thread, but a recent debate on container 
resource [1] reminded me of the use case Kris mentioned below. I am going to 
propose a preliminary idea to address the use case. Of course, we could 
continue the discussion in the team meeting or midcycle.

Idea: Introduce a docker-native COE, which consists of only minion/worker/slave 
nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips, 
floating ips, etc.)
Details: Traditional COEs (k8s/swarm/mesos) consist of master nodes and worker 
nodes. In these COEs, control services (i.e. the scheduler) run on master nodes, 
and containers run on worker nodes. If we can port the COE control services to 
the Magnum control plane and share them with all tenants, we eliminate the need for 
master nodes, thus improving resource utilization. In the new COE, users 
create/manage containers through Magnum API endpoints. Magnum is responsible for 
spinning up tenant VMs, scheduling containers to the VMs, and managing the life-cycle of 
those containers. Unlike other COEs, containers created by this COE are 
considered OpenStack-managed resources. That means they will be tracked in the 
Magnum DB, and accessible by other OpenStack services (i.e. Horizon, Heat, 
etc.).
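
To make the proposed flow a little more concrete, here is a rough sketch of what a
client interaction with such a docker-native COE might look like; the endpoint,
payload fields and response shape below are hypothetical and only illustrate the
proposal, they are not the existing Magnum API.

    # Hypothetical illustration of the proposed flow; not the existing Magnum API.
    import requests

    MAGNUM_ENDPOINT = "http://controller:9511/v1"   # placeholder service URL
    HEADERS = {"X-Auth-Token": "<keystone token>",  # placeholder token
               "Content-Type": "application/json"}

    # The user talks only to Magnum; Magnum would spin up tenant VMs, schedule the
    # container onto one of them, and record it as an OpenStack-managed resource.
    resp = requests.post(
        MAGNUM_ENDPOINT + "/containers",
        headers=HEADERS,
        json={"name": "web-1", "image": "nginx:latest", "memory": "256m"},
    )
    print(resp.json().get("status"))  # hypothetical status tracked in the Magnum DB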

What do you feel about this proposal? Let’s discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain and pas experience 
tells me this wont be practical/scale, however from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert of to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spinup in projects where people maybe running 
10–20 containers per project.  From an o

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-03 Thread Egor Guz
Kris,

We are facing similar challenges/questions, and here are some thoughts. We 
cannot ignore scalability limits: Kub ~ 100 nodes (there are plans to support 
1K next year); Swarm ~ ??? (I have never heard
even about 100 nodes; it is definitely not ready for production yet (happy to be 
wrong ;))); Mesos ~ 100K nodes, but it has scalability issues with many 
schedulers (e.g. each team develops/uses their
own framework (Marathon/Aurora)). It looks like small clusters are the better/safer 
option today (even if you need to pay for the additional control plane), but I 
believe the situation will change in the next twelve months.

—
Egor

From: "Kris G. Lindgren" <klindg...@godaddy.com<mailto:klindg...@godaddy.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, September 30, 2015 at 16:26
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain and pas experience 
tells me this wont be practical/scale, however from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert of to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spinup in projects where people maybe running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%,  250 still sucks.

From my point of view, an ideal use case for companies like ours 
(yahoo/godaddy) would be to be able to support hierarchical projects in magnum.  
That way we could create a project for each department, and then the subteams 
of those departments can have their own projects.  We create a bay per 
department.  Sub-projects, if they want to, can support creation of their own 
bays (but support of the kube cluster would then fall to that team).  When a 
sub-project spins up a pod on a bay, minions get created inside that team's 
sub-projects and the containers in that pod run on the capacity that was spun up 
under that project; the minions for each pod would be in a scaling group and 
as such grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don’t want to fall in line with 
the provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.

>

>I understand that at first glance a company like Yahoo may not want >separate 
>bays for their various applications because of the perceived >administrative 
>overhead. I would then challenge Yahoo to go deploy a COE >like kubernetes 
>(which has no multi-tenancy or a very basic implementation >of such) and get 
>it to work with hundreds of different competing >applications. I would 
>speculate the administrative overhead of getting >all that to work would be 
>greater then the administrative overhead of >simply doing a bay create for the 
>various tenants.

>

>Placing tenancy inside a COE seems interesting, but no COE does that >today. 
>Maybe in the future they will. Magnum was designed to present an >integration 
>point between COEs and OpenStack today, not five years down >the road. Its not 
>as if we took shortcuts to get to where we are.

>

>I will grant you that density is lower with the current design of Magnum >vs a 
>full on integration with OpenStack within the COE itself. However, >that model 
>which is what I believe you proposed is a huge design change to >each COE 
>which would overly c

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Hongbin Lu
Kris,

I think the proposal of hierarchical projects is out of the scope of magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you within the existing tenancy 
model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

· Project A

· Project A-1

· Project A-2

Then you can assign users to projects in the following ways:

· Assign team 1 members to both Project A and Project A-1

· Assign team 2 members to both Project A and Project A-2

Then you can create a bay at project A, which is shared by the whole 
department. In addition, each subteam can create their own bays at project A-X 
if they want. Does it address your use cases?
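
To make the workaround concrete, here is a minimal sketch using python-keystoneclient
(v3); the credentials, project names, user names and the "member" role are
placeholders, and an admin account is assumed.

    # Minimal sketch of the workaround above with python-keystoneclient (v3).
    # Credentials, project/user names, and the role name are placeholders.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client as ks_client

    auth = v3.Password(auth_url="http://controller:5000/v3",
                       username="admin", password="secret", project_name="admin",
                       user_domain_id="default", project_domain_id="default")
    keystone = ks_client.Client(session=session.Session(auth=auth))

    # One project for the department, one per subteam.
    proj_a = keystone.projects.create(name="project-a", domain="default")
    proj_a1 = keystone.projects.create(name="project-a-1", domain="default")
    proj_a2 = keystone.projects.create(name="project-a-2", domain="default")

    member = keystone.roles.find(name="member")  # role name depends on the deployment
    alice = keystone.users.find(name="alice")    # a team 1 member
    bob = keystone.users.find(name="bob")        # a team 2 member

    # Team 1 gets the department project plus its own project; likewise for team 2.
    keystone.roles.grant(member, user=alice, project=proj_a)
    keystone.roles.grant(member, user=alice, project=proj_a1)
    keystone.roles.grant(member, user=bob, project=proj_a)
    keystone.roles.grant(member, user=bob, project=proj_a2)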

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers 
company-wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past experience 
tells me this won't be practical/scale; however, from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud; about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%, 250 still sucks.

From my point of view, an ideal use case for companies like ours (yahoo/godaddy) 
would be to be able to support hierarchical projects in magnum.  That way we could 
create a project for each department, and then the subteams of those 
departments can have their own projects.  We create a bay per department.  
Sub-projects, if they want to, can support creation of their own bays (but 
support of the kube cluster would then fall to that team).  When a sub-project 
spins up a pod on a bay, minions get created inside that team's sub-projects and 
the containers in that pod run on the capacity that was spun up under that 
project; the minions for each pod would be in a scaling group and as such 
grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don’t want to fall in line with 
the provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that
>today.  Maybe in the future they will.  Magnum was designed to present an
>integration point between COEs and OpenStack today, not five years down
>the road.  It's not as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum
>vs a full on integration with OpenStack within the COE itself.  However,
>that model which is what I believe you proposed is a huge design change to
>each COE which would overly complicate the COE at the gain of increased
>density.  I personally don’t feel that pain is worth the gain.



___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Fox, Kevin M
I believe keystone already supports hierarchical projects

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Kris,

I think the proposal of hierarchical projects is out of the scope of magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you within the existing tenancy 
model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

· Project A

· Project A-1

· Project A-2

Then you can assign users to projects in the following ways:

· Assign team 1 members to both Project A and Project A-1

· Assign team 2 members to both Project A and Project A-2

Then you can create a bay at project A, which is shared by the whole 
department. In addition, each subteam can create their own bays at project A-X 
if they want. Does it address your use cases?

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain and pas experience 
tells me this wont be practical/scale, however from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert of to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spinup in projects where people maybe running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%,  250 still sucks.

From my point of view, an ideal use case for companies like ours 
(yahoo/godaddy) would be to be able to support hierarchical projects in magnum.  
That way we could create a project for each department, and then the subteams 
of those departments can have their own projects.  We create a bay per 
department.  Sub-projects, if they want to, can support creation of their own 
bays (but support of the kube cluster would then fall to that team).  When a 
sub-project spins up a pod on a bay, minions get created inside that team's 
sub-projects and the containers in that pod run on the capacity that was spun up 
under that project; the minions for each pod would be in a scaling group and 
as such grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don’t want to fall in line with 
the provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that
>today.  Maybe in the future they will.  Magnum was designed to present an
>integration point between COEs and OpenStack today, not five years down
>the road.  It’s not as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum
>vs a full-on integration with OpenStack within the COE itself.  However,
>that model, which is what I believe you proposed

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Hongbin Lu
Do you mean this proposal 
http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html
 ? It looks like support for hierarchical roles/privileges, and I couldn't find 
anything related to resource sharing. I am not sure it can address the use 
cases Kris mentioned.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: October-01-15 11:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

I believe keystone already supports hierarchical projects

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
Kris,

I think the proposal of hierarchical projects is out of scope for magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you within the existing 
tenancy model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

* Project A

* Project A-1

* Project A-2

Then you can assign users to projects in the following ways:

* Assign team 1 members to both Project A and Project A-1

* Assign team 2 members to both Project A and Project A-2

Then you can create a bay in project A, which is shared by the whole 
department. In addition, each subteam can create its own bays in project A-1 or 
A-2 if it wants. Does that address your use cases?

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers 
company-wide at Godaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past experience 
tells me this won't be practical or scale, however from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10-20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 clusters still sucks.

From my point of view an ideal use case for companies like ours 
(yahoo/godaddy) would be to be able to support hierarchical projects in magnum.  
That way we could create a project for each department, and then the subteams 
of those departments can have their own projects.  We create a bay per 
department.  Sub-projects, if they want to, can support creation of their own 
bays (but support of the kube cluster would then fall to that team).  When a 
sub-project spins up a pod on a bay, minions get created inside that team's 
sub-project and the containers in that pod run on the capacity that was spun up 
under that project; the minions for each pod would be in a scaling group and 
as such grow/shrink as dictated by load.

The above would make it so we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don't want to fall in line with 
the provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to "look like" it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative ove

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
+1

Pretty please don't make it a deployment project, because really some
other project that just specializes in deployment (ansible, chef,
puppet...) can do that better. I do get how public clouds can find a
deployment project useful (it allows customers to try out these new
~fancy~ COE things), but I also tend to think it's short-term thinking
to believe that such a project will last.

Now an integrated COE <-> openstack (keystone, cinder, neutron...)
project I think really does provide value and has some really neat
possibilities to provide a unique value add to openstack; a project that
can deploy some other software, meh, not so much IMHO. An
integrated COE <-> openstack project will of course be much harder,
especially as the COE projects are not openstack 'native', but nothing
worth doing is easy. I hope it was known that COE projects are a
new (and rapidly shifting) landscape and the going wasn't going to be
easy when magnum was created; don't lose hope! (I'm cheering for you
guys/gals).

My 2 cents,

Josh

On Wed, 30 Sep 2015 00:00:17 -0400
Monty Taylor <mord...@inaugust.com> wrote:

> *waving hands wildly at details* ...
> 
> I believe that the real win is if Magnum's control plane can integrate 
> the network and storage fabrics that exist in an OpenStack with 
> kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's
> not interesting ... an ansible playbook can do that in 5 minutes.
> OTOH - deploying some kube into a cloud in such a way that it shares
> a tenant network with some VMs that are there - that's good stuff and
> I think actually provides significant value.
> 
> On 09/29/2015 10:57 PM, Jay Lau wrote:
> > +1 to Egor, I think that the final goal of Magnum is container as a
> > service but not coe deployment as a service. ;-)
> >
> > Especially we are also working on Magnum UI, the Magnum UI should
> > export some interfaces to enable end user can create container
> > applications but not only coe deployment.
> >
> > I hope that the Magnum can be treated as another "Nova" which is
> > focusing on container service. I know it is difficult to unify all
> > of the concepts in different coe (k8s has pod, service, rc, swarm
> > only has container, nova only has VM, PM with different
> > hypervisors), but this deserve some deep dive and thinking to see
> > how can move forward.
> >
> > On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com
> > <mailto:e...@walmartlabs.com>> wrote:
> >
> > definitely ;), but the are some thoughts to Tom’s email.
> >
> > I agree that we shouldn't reinvent apis, but I don’t think
> > Magnum should only focus at deployment (I feel we will become
> > another Puppet/Chef/Ansible module if we do it ):)
> > I belive our goal should be seamlessly integrate
> > Kub/Mesos/Swarm to OpenStack ecosystem
> > (Neutron/Cinder/Barbican/etc) even if we need to step in to
> > Kub/Mesos/Swarm communities for that.
> >
> > —
> > Egor
> >
> > From: Adrian Otto <adrian.o...@rackspace.com>
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)" <openstack-dev@lists.openstack.org>
> > Date: Tuesday, September 29, 2015 at 08:44
> > To: "OpenStack Development Mailing List (not for usage
> > questions)" <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >
> > This is definitely a topic we should cover in Tokyo.
> >
> > On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
> > <daneh...@cisco.com> wrote:
> >
> > +1
> >
> > From: Tom Cammann <tom.camm...@hpe.com>
> > Reply-To: "openstack-dev@lists.openstack.org

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Peng Zhao
Echo with Monty:

> I believe that the real win is if Magnum's control plane can integrate the
> network and storage fabrics that exist in an OpenStack with kube/mesos/swarm.

We are working on the Cinder (ceph), Neutron, Keystone integration in HyperStack
[1] and love to contribute. Another TODO is the multi-tenancy support in
k8s/swarm/mesos. A global scheduler/orchestrator for all tenants yields a higher
utilization rate than separate schedulers for each.

[1] https://launchpad.net/hyperstack - Hyper - Make VM run like Container


On Wed, Sep 30, 2015 at 12:00 PM, Monty Taylor < mord...@inaugust.com > wrote:
*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate the
network and storage fabrics that exist in an OpenStack with kube/mesos/swarm.
Just deploying is VERY meh. I do not care - it's not interesting ... an ansible
playbook can do that in 5 minutes. OTOH - deploying some kube into a cloud in
such a way that it shares a tenant network with some VMs that are there - that's
good stuff and I think actually provides significant value.

On 09/29/2015 10:57 PM, Jay Lau wrote:
+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export
some interfaces to enable end user can create container applications but
not only coe deployment.

I hope that the Magnum can be treated as another “Nova” which is
focusing on container service. I know it is difficult to unify all of
the concepts in different coe (k8s has pod, service, rc, swarm only has
container, nova only has VM, PM with different hypervisors), but this
deserve some deep dive and thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz < e...@walmartlabs.com
> wrote:

definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus at deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to
OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to
step in to Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: “OpenStack Development Mailing List (not for usage
questions)“ <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: “OpenStack Development Mailing List (not for usage questions)“
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
< daneh...@cisco.com
>> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: “openstack-dev@lists.openstack.org”
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: “openstack-dev@lists.openstack.org”
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be
very difficult and probably a wasted effort trying to consolidate
their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question:
should pod/service/rc be deprecated if the user can easily get to
the k8s api?
Even if we want to orchestrate these in a Heat template, the
corresponding heat resources can just interface with k8s instead of
Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive
docker compose is just command line tool which doesn’t have any api
or scheduling feat

From: Egor Guz <e...@walmartlabs.com>
To: “openstack-dev@lists.openstack.org”
<openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?



Also I believe docker compose is just a command line tool which doesn’t
have any api or scheduling features.
But during the last Docker Conf hackathon PayPal folks implemented a
docker compose executor for Mesos
(https://github.com/mohitsoni/compose-executor)
which can give you a pod-like experience.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: “OpenStack Development Mailing List (not for usage
questions)“ <openstack-dev@lists.openstack.org>
Date: Monday, September 28, 2015 at 22:03
To: “OpenStack Development Mailing List (not for us

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
 other COE specific disjoint system).

Thoughts?

This one becomes especially hard if said COE(s) don't even have a
tenancy model in the first place :-/


Thanks,

Adrian


On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni<devdatta.kulka...@rackspace.com
<mailto:devdatta.kulka...@rackspace.com>> wrote:

+1 Hongbin.

From perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu<hongbin...@huawei.com
<mailto:hongbin...@huawei.com>> Sent: Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau@gmail.com] Sent: September-29-15
10:57 PM To: OpenStack Development Mailing List (not for usage
questions) Subject: Re: [openstack-dev] [magnum]swarm + compose =
k8s?



+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should
export some interfaces to enable end user can create container
applications but not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all
of the concepts in different coe (k8s has pod, service, rc, swarm
only has container, nova only has VM, PM with different
hypervisors), but this deserve some deep dive and thinking to see
how can move forward.





On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz<e...@walmartlabs.com
<mailto:e...@walmartlabs.com>>
wrote: definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus at deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):) I belive our goal should
be seamlessly integrate Kub/Mesos/Swarm to OpenStack ecosystem
(Neutron/Cinder/Barbican/etc) even if we need to step in to
Kub/Mesos/Swarm communities for that.

— Egor

From: Adrian Otto<adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)"<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage
questions)"<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen
(danehans)<daneh...@cisco.com> wrote:


+1

From: Tom Cammann<tom.camm...@hpe.com>
Reply-To:
"openstack-dev@lists.openstack.org"<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To:
"openstack-dev@lists.openstack.org"<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


This has been my thinking in the last couple of months to
completely deprecate the COE specific APIs such as pod/service/rc
and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to
be very difficult and probably a wasted effort trying to
consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote: Would it make sense to ask the
opposite of Wanghua's question: should pod/service/

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow

Adrian Otto wrote:

Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.



So an interesting question, but how is tenancy going to work, will there 
be a keystone tenancy <-> COE tenancy adapter? From my understanding a 
whole bay (COE?) is owned by a tenant, which is great for tenants that 
want to ~experiment~ with a COE but seems disjoint from the end goal of 
an integrated COE where the tenancy model of both keystone and the COE 
is either the same or is adapted via some adapter layer.


For example:

1) Bay that is connected to uber-tenant 'yahoo'

   1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
   1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
   ...

All of that tenancy information is in keystone, not replicated/synced into 
the COE (or into some other COE-specific disjoint system).
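
Purely as an illustration of what such an adapter layer might do for a COE that
at least has namespaces to hang per-team isolation on, something along these
lines could mirror keystone projects into kubernetes namespaces. Every endpoint,
credential and the naming scheme below are assumptions of mine; nothing like
this exists in Magnum today:

    # Hypothetical keystone -> kubernetes tenancy adapter sketch: one namespace
    # per keystone project, labelled with the project id it came from.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client as ks_client
    from kubernetes import client as k8s_client, config as k8s_config

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='admin', password='secret', project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    keystone = ks_client.Client(session=session.Session(auth=auth))

    k8s_config.load_kube_config()      # credentials for the bay's k8s API
    core = k8s_client.CoreV1Api()

    existing = {ns.metadata.name for ns in core.list_namespace().items}
    for project in keystone.projects.list():
        # e.g. keystone project 'yahoo-mail.us' -> namespace 'yahoo-mail-us'
        ns_name = project.name.lower().replace('.', '-')
        if ns_name not in existing:
            core.create_namespace(k8s_client.V1Namespace(
                metadata=k8s_client.V1ObjectMeta(
                    name=ns_name,
                    labels={'keystone-project-id': project.id})))

Of course namespaces only give naming/quota separation, not the hard isolation
being asked for here, which is exactly the gap being discussed.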


Thoughts?

This one becomes especially hard if said COE(s) don't even have a 
tenancy model in the first place :-/



Thanks,

Adrian


On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni<devdatta.kulka...@rackspace.com>  wrote:

+1 Hongbin.

From perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu<hongbin...@huawei.com> Sent: Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau@gmail.com] Sent: September-29-15
10:57 PM To: OpenStack Development Mailing List (not for usage
questions) Subject: Re: [openstack-dev] [magnum]swarm + compose =
k8s?



+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should
export some interfaces to enable end user can create container
applications but not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all
of the concepts in different coe (k8s has pod, service, rc, swarm
only has container, nova only has VM,  PM with different
hypervisors), but this deserve some deep dive and thinking to see
how can move forward.





On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz<e...@walmartlabs.com>
wrote: definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinv

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Joshua,

The tenancy boundary in Magnum is the bay. You can place whatever single-tenant 
COE you want into the bay (Kubernetes, Mesos, Docker Swarm). This allows you to 
use native tools to interact with the COE in that bay, rather than using an 
OpenStack specific client. If you want to use the OpenStack client to create 
bays, pods, and containers, you can do that today. You also have the 
choice, for example, to run kubectl against your Kubernetes bay, if you so 
desire.

Bays offer both a management and security isolation between multiple tenants. 
There is no intent to share a single bay between multiple tenants. In your use 
case, you would simply create two bays, one for each of the yahoo-mail.XX 
tenants. I am not convinced that having an uber-tenant makes sense.

Adrian

On Sep 30, 2015, at 1:13 PM, Joshua Harlow 
<harlo...@outlook.com<mailto:harlo...@outlook.com>> wrote:

Adrian Otto wrote:
Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.


So an interesting question, but how is tenancy going to work, will there be a 
keystone tenancy <-> COE tenancy adapter? From my understanding a whole bay 
(COE?) is owned by a tenant, which is great for tenants that want to 
~experiment~ with a COE but seems disjoint from the end goal of an integrated 
COE where the tenancy model of both keystone and the COE is either the same or 
is adapted via some adapter layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

  1.1) Pod inside bay that is connected to tenant 
'yahoo-mail.us<http://yahoo-mail.us/>'
  1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
  ...

All those tenancy information is in keystone, not replicated/synced into the 
COE (or in some other COE specific disjoint system).

Thoughts?

This one becomes especially hard if said COE(s) don't even have a tenancy model 
in the first place :-/

Thanks,

Adrian

On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni<devdatta.kulka...@rackspace.com<mailto:devdatta.kulka...@rackspace.com>>
  wrote:

+1 Hongbin.

From perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> Sent: 
Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


F

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Steven Dake (stdake)
expect from all OpenStack services, and simplified integration
>>>> with a wealth of existing OpenStack services (Identity,
>>>> Orchestration, Images, Networks, Storage, etc.).
>>>>
>>>> The areas we have disagreement are whether the features offered for
>>>> the k8s COE should be mirrored in other COE’s. We have not attempted
>>>> to do that yet, and my suggestion is to continue resisting that
>>>> temptation because it is not aligned with our vision. We are not here
>>>> to re-invent container management as a hosted service. Instead, we
>>>> aim to integrate prevailing technology, and make it work great with
>>>> OpenStack. For example, adding docker-compose capability to Magnum is
>>>> currently out-of-scope, and I think it should stay that way. With
>>>> that said, I’m willing to have a discussion about this with the
>>>> community at our upcoming Summit.
>>>>
>>>> An argument could be made for feature consistency among various COE
>>>> options (Bay Types). I see this as a relatively low value pursuit.
>>>> Basic features like integration with OpenStack Networking and
>>>> OpenStack Storage services should be universal. Whether you can
>>>> present a YAML file for a bay to perform internal orchestration is
>>>> not important in my view, as long as there is a prevailing way of
>>>> addressing that need. In the case of Docker Bays, you can simply
>>>> point a docker-compose client at it, and that will work fine.
>>>>
>>>
>>> So an interesting question, but how is tenancy going to work, will
>>> there be a keystone tenancy <-> COE tenancy adapter? From my
>>> understanding a whole bay (COE?) is owned by a tenant, which is great
>>> for tenants that want to ~experiment~ with a COE but seems disjoint
>>> from the end goal of an integrated COE where the tenancy model of both
>>> keystone and the COE is either the same or is adapted via some adapter
>>> layer.
>>>
>>> For example:
>>>
>>> 1) Bay that is connected to uber-tenant 'yahoo'
>>>
>>> 1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us
>>> <http://yahoo-mail.us/>'
>>> 1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
>>> ...
>>>
>>> All those tenancy information is in keystone, not replicated/synced
>>> into the COE (or in some other COE specific disjoint system).
>>>
>>> Thoughts?
>>>
>>> This one becomes especially hard if said COE(s) don't even have a
>>> tenancy model in the first place :-/
>>>
>>>> Thanks,
>>>>
>>>> Adrian
>>>>
>>>>> On Sep 30, 2015, at 8:58 AM, Devdatta
>>>>> Kulkarni<devdatta.kulka...@rackspace.com
>>>>> <mailto:devdatta.kulka...@rackspace.com>> wrote:
>>>>>
>>>>> +1 Hongbin.
>>>>>
>>>>> From perspective of Solum, which hopes to use Magnum for its
>>>>> application container scheduling requirements, deep integration of
>>>>> COEs with OpenStack services like Keystone will be useful.
>>>>> Specifically, I am thinking that it will be good if Solum can
>>>>> depend on Keystone tokens to deploy and schedule containers on the
>>>>> Bay nodes instead of having to use COE specific credentials. That
>>>>> way, container resources will become first class components that
>>>>> can be monitored using Ceilometer, access controlled using
>>>>> Keystone, and managed from within Horizon.
>>>>>
>>>>> Regards, Devdatta
>>>>>
>>>>>
>>>>> From: Hongbin Lu<hongbin...@huawei.com
>>>>> <mailto:hongbin...@huawei.com>> Sent: Wednesday, September
>>>>> 30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
>>>>> usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
>>>>> compose = k8s?
>>>>>
>>>>>
>>>>> +1 from me as well.
>>>>>
>>>>> I think what makes Magnum appealing is the promise to provide
>>>>> container-as-a-service. I see coe deployment as a helper to achieve
>>>>> the promise, instead of the main goal.
>>>>>
>>>>> Best regards, Hongbin
>>>>>
>>>>>
>>>>> From: Jay Lau [mailto:jay.l

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
>>> <mailto:harlo...@outlook.com>>  wrote:
>>>>
>>>> Adrian Otto wrote:
>>>>> Thanks everyone who has provided feedback on this thread. The good
>>>>> news is that most of what has been asked for from Magnum is actually
>>>>> in scope already, and some of it has already been implemented. We
>>>>> never aimed to be a COE deployment service. That happens to be a
>>>>> necessity to achieve our more ambitious goal: We want to provide a
>>>>> compelling Containers-as-a-Service solution for OpenStack clouds in a
>>>>> way that offers maximum leverage of what’s already in OpenStack,
>>>>> while giving end users the ability to use their favorite tools to
>>>>> interact with their COE of choice, with the multi-tenancy capability
>>>>> we expect from all OpenStack services, and simplified integration
>>>>> with a wealth of existing OpenStack services (Identity,
>>>>> Orchestration, Images, Networks, Storage, etc.).
>>>>>
>>>>> The areas we have disagreement are whether the features offered for
>>>>> the k8s COE should be mirrored in other COE’s. We have not attempted
>>>>> to do that yet, and my suggestion is to continue resisting that
>>>>> temptation because it is not aligned with our vision. We are not here
>>>>> to re-invent container management as a hosted service. Instead, we
>>>>> aim to integrate prevailing technology, and make it work great with
>>>>> OpenStack. For example, adding docker-compose capability to Magnum is
>>>>> currently out-of-scope, and I think it should stay that way. With
>>>>> that said, I’m willing to have a discussion about this with the
>>>>> community at our upcoming Summit.
>>>>>
>>>>> An argument could be made for feature consistency among various COE
>>>>> options (Bay Types). I see this as a relatively low value pursuit.
>>>>> Basic features like integration with OpenStack Networking and
>>>>> OpenStack Storage services should be universal. Whether you can
>>>>> present a YAML file for a bay to perform internal orchestration is
>>>>> not important in my view, as long as there is a prevailing way of
>>>>> addressing that need. In the case of Docker Bays, you can simply
>>>>> point a docker-compose client at it, and that will work fine.
>>>>>
>>>> So an interesting question, but how is tenancy going to work, will
>>>> there be a keystone tenancy<->  COE tenancy adapter? From my
>>>> understanding a whole bay (COE?) is owned by a tenant, which is great
>>>> for tenants that want to ~experiment~ with a COE but seems disjoint
>>>> from the end goal of an integrated COE where the tenancy model of both
>>>> keystone and the COE is either the same or is adapted via some adapter
>>>> layer.
>>>>
>>>> For example:
>>>>
>>>> 1) Bay that is connected to uber-tenant 'yahoo'
>>>>
>>>> 1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us
>>>> <http://yahoo-mail.us/>'
>>>> 1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
>>>> ...
>>>>
>>>> All those tenancy information is in keystone, not replicated/synced
>>>> into the COE (or in some other COE specific disjoint system).
>>>>
>>>> Thoughts?
>>>>
>>>> This one becomes especially hard if said COE(s) don't even have a
>>>> tenancy model in the first place :-/
>>>>
>>>>> Thanks,
>>>>>
>>>>> Adrian
>>>>>
>>>>>> On Sep 30, 2015, at 8:58 AM, Devdatta
>>>>>> Kulkarni<devdatta.kulka...@rackspace.com
>>>>>> <mailto:devdatta.kulka...@rackspace.com>>  wrote:
>>>>>>
>>>>>> +1 Hongbin.
>>>>>>
>>>>>>  From perspective of Solum, which hopes to use Magnum for its
>>>>>> application container scheduling requirements, deep integration of
>>>>>> COEs with OpenStack services like Keystone will be useful.
>>>>>> Specifically, I am thinking that it will be good if Solum can
>>>>>> depend on Keystone tokens to deploy and schedule containers on the
>>>>>> Bay nodes instead of having to use COE specific credentials. That
>&g

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Kris G. Lindgren
We are looking at deploying magnum as an answer for how we do containers 
company-wide at Godaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past experience 
tells me this won't be practical or scale, however from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10-20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 clusters still sucks.

From my point of view an ideal use case for companies like ours (yahoo/godaddy) 
would be to be able to support hierarchical projects in magnum.  That way we 
could create a project for each department, and then the subteams of those 
departments can have their own projects.  We create a bay per department.  
Sub-projects, if they want to, can support creation of their own bays (but 
support of the kube cluster would then fall to that team).  When a sub-project 
spins up a pod on a bay, minions get created inside that team's sub-project and 
the containers in that pod run on the capacity that was spun up under that 
project; the minions for each pod would be in a scaling group and as such 
grow/shrink as dictated by load.

The above would make it so we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don't want to fall in line with 
the provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.

>
>I understand that at first glance a company like Yahoo may not want separate
>bays for their various applications because of the perceived administrative
>overhead. I would then challenge Yahoo to go deploy a COE like kubernetes
>(which has no multi-tenancy or a very basic implementation of such) and get
>it to work with hundreds of different competing applications. I would
>speculate the administrative overhead of getting all that to work would be
>greater than the administrative overhead of simply doing a bay create for the
>various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that today.
>Maybe in the future they will. Magnum was designed to present an integration
>point between COEs and OpenStack today, not five years down the road. It’s not
>as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum vs a
>full-on integration with OpenStack within the COE itself. However, that model,
>which is what I believe you proposed, is a huge design change to each COE
>which would overly complicate the COE at the gain of increased density. I
>personally don’t feel that pain is worth the gain.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Kris,

On Sep 30, 2015, at 4:26 PM, Kris G. Lindgren 
<klindg...@godaddy.com> wrote:

We are looking at deploying magnum as an answer for how we do containers 
company-wide at Godaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past experience 
tells me this won't be practical or scale, however from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10-20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 clusters still sucks.

Keep in mind that your magnum bays can use the same floating ip addresses that 
your containers do, and the container hosts are shared between the COE nodes 
and the containers that make up the applications running in the bay. It is 
possible to use private address space for that, and to proxy public-facing access 
through a proxy layer that uses names to route connections to the appropriate 
magnum bay. That’s how you can escape the problem of public IP addresses as a 
scarce resource.

Also, if you use Magnum to start all those bays, they can all look the same, 
rather than the ~1000 container environments you have today that probably don’t 
look very similar, one to the next. Upgrading becomes much more achievable when 
you have wider consistency. There is a new feature currently in review called 
public baymodel that allows the cloud operator to define the bay model, but 
individual tenants can start bays based on that one common “template”. This is 
a way of centralizing most of your configuration. This balances a lot of the 
operational concern.

From my point of view an ideal use case for companies like ours (yahoo/godaddy) 
would be to be able to support hierarchical projects in magnum.  That way we 
could create a project for each department, and then the subteams of those 
departments can have their own projects.  We create a bay per department.  
Sub-projects, if they want to, can support creation of their own bays (but 
support of the kube cluster would then fall to that team).  When a sub-project 
spins up a pod on a bay, minions get created inside that team's sub-project and 
the containers in that pod run on the capacity that was spun up under that 
project; the minions for each pod would be in a scaling group and as such 
grow/shrink as dictated by load.

You can do this today by sharing your TLS certs. In fact, you could make the 
cert signing a bit more sophisticated than it is today, and allow each subteam 
to have a unique TLS cert that can auth against a common bay.
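
As a sketch of that per-subteam cert idea: the key and CSR generation below uses
the cryptography library and is runnable as-is, but the final signing step
against the bay's CA is only indicated in a comment, because the exact magnum
command/workflow for it is an assumption on my part:

    # Generate a key and CSR for one subteam; the CSR would then be signed by
    # the shared bay's CA so the team can auth against the common COE endpoint.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                   backend=default_backend())
    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(x509.Name(
               [x509.NameAttribute(NameOID.COMMON_NAME, u'team-a-1')]))
           .sign(key, hashes.SHA256(), default_backend()))

    with open('team-a-1.key', 'wb') as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))
    with open('team-a-1.csr', 'wb') as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))

    # Assumed signing step against the shared bay's CA, e.g. something like:
    #   magnum ca-sign --bay shared-bay --csr team-a-1.csr
    # after which team A-1 uses its own cert/key pair against the common bay.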

The above would make it so we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don't want to fall in line with 
the provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

This is different than what Joshua was asking for with identities in keystone, 
because today’s COE’s themselves don’t have modular identity solutions that are 
implemented with multi-tenancy.

Imagine for a moment that you don’t need to run your bays on Nova instances 
that are virtual machines. What if you had an additional host aggregate that 
could produce libvirt/lxc guests that you can use to form bays? They can 
actually be composed of nodes that are sourced from BOTH your libvirt/lxc host 
aggregate (for hosting your COEs) and your normal KVM (or other hypervisor) host 
aggregate for your apps to use. Then the nodes that make up your bays (what you 
referred to as an “excessive amount of duplicated infrastructure”) become 
processes running on a much smaller number of compute nodes, and the effective 
consolidation ratio improves. You could do this by specifying a different 
master_flavor_id and flavor_id such that these fall on different host 
aggregates. As long as you are “all one company” and are not concerned 
primarily with security isolation between neighboring COE master nodes, that 
approach may actually be the right balance, and would not require an 
architectural shift or figuring out how to accomplish nested tenants.
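
Purely as a sketch of that flavor split (the python-magnumclient calls and
keyword arguments here are my assumptions, as are the flavor and image names;
treat it as an illustration rather than a tested recipe):

    # Sketch: a baymodel whose master flavor is scheduled onto a libvirt/lxc
    # host aggregate (via that flavor's extra_specs) while the worker flavor
    # lands on the ordinary KVM aggregate.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from magnumclient.v1 import client as magnum_client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='admin', password='secret', project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    magnum = magnum_client.Client(session=session.Session(auth=auth))

    bm = magnum.baymodels.create(name='shared-k8s',
                                 coe='kubernetes',
                                 image_id='fedora-atomic',        # placeholder
                                 keypair_id='default',
                                 external_network_id='public',
                                 master_flavor_id='lxc-small',    # lxc aggregate
                                 flavor_id='kvm-medium')          # kvm aggregate
    magnum.bays.create(name='dept-a-bay', baymodel_id=bm.uuid, node_count=2)

Combined with the public baymodel idea above, the operator defines this once and
each department simply creates bays from it.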

Adrian

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Ryan Rossiter


On 9/29/2015 11:00 PM, Monty Taylor wrote:

*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate 
the network and storage fabrics that exist in an OpenStack with 
kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's not 
interesting ... an ansible playbook can do that in 5 minutes. OTOH - 
deploying some kube into a cloud in such a way that it shares a tenant 
network with some VMs that are there - that's good stuff and I think 
actually provides significant value.

+1 on sharing the tenant network with VMs.

When I look at Magnum being an OpenStack project, I see it winning by 
integrating itself with the other projects, and by making containers just 
work in your cloud. Here's the scenario I would want a cloud with Magnum 
to handle (though it may be very pie-in-the-sky):


I want to take my container, replicate it across 3 container host VMs 
(each of which lives on a different compute host), stick a Neutron LB in 
front of it, and hook it up to the same network as my 5 other VMs.


This way, it handles my containers in a service, and integrates 
beautifully with my existing OpenStack cloud.


On 09/29/2015 10:57 PM, Jay Lau wrote:

+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export
some interfaces to enable end user can create container applications but
not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all of
the concepts in different coe (k8s has pod, service, rc, swarm only has
container, nova only has VM, PM with different hypervisors), but this
deserve some deep dive and thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com
<mailto:e...@walmartlabs.com>> wrote:

definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus at deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to
OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to
step in to Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
<daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and 
container.


As we now support Mesos, Kubernetes and Docker Swarm its going to be
very difficult and probably a wasted effort trying to consolidate
their separate APIs under a single Magnum API.

I'

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Hongbin Lu
+1 from me as well.

I think what makes Magnum appealing is the promise to provide 
container-as-a-service. I see coe deployment as a helper to achieve the 
promise, instead of the main goal.

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

+1 to Egor, I think that the final goal of Magnum is container as a service but 
not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export some 
interfaces to enable end user can create container applications but not only 
coe deployment.
I hope that the Magnum can be treated as another "Nova" which is focusing on 
container service. I know it is difficult to unify all of the concepts in 
different coe (k8s has pod, service, rc, swarm only has container, nova only 
has VM, PM with different hypervisors), but this deserve some deep dive and 
thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz 
<e...@walmartlabs.com<mailto:e...@walmartlabs.com>> wrote:
definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum should only 
focus at deployment (I feel we will become another Puppet/Chef/Ansible module 
if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to OpenStack 
ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step in to 
Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be very 
difficult and probably a wasted effort trying to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive docker compose 
is just command line tool which doesn’t have any api or scheduling feat

From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Thanks everyone who has provided feedback on this thread. The good news is that 
most of what has been asked for from Magnum is actually in scope already, and 
some of it has already been implemented. We never aimed to be a COE deployment 
service. That happens to be a necessity to achieve our more ambitious goal: We 
want to provide a compelling Containers-as-a-Service solution for OpenStack 
clouds in a way that offers maximum leverage of what’s already in OpenStack, 
while giving end users the ability to use their favorite tools to interact with 
their COE of choice, with the multi-tenancy capability we expect from all 
OpenStack services, and simplified integration with a wealth of existing 
OpenStack services (Identity, Orchestration, Images, Networks, Storage, etc.).

The area where we have disagreement is whether the features offered for the k8s 
COE should be mirrored in the other COEs. We have not attempted to do that yet, and my 
suggestion is to continue resisting that temptation because it is not aligned 
with our vision. We are not here to re-invent container management as a hosted 
service. Instead, we aim to integrate prevailing technology, and make it work 
great with OpenStack. For example, adding docker-compose capability to Magnum 
is currently out-of-scope, and I think it should stay that way. With that said, 
I’m willing to have a discussion about this with the community at our upcoming 
Summit.

An argument could be made for feature consistency among various COE options 
(Bay Types). I see this as a relatively low-value pursuit. Basic features like 
integration with OpenStack Networking and OpenStack Storage services should be 
universal. Whether you can present a YAML file for a bay to perform internal 
orchestration is not important in my view, as long as there is a prevailing way 
of addressing that need. In the case of Docker Bays, you can simply point a 
docker-compose client at it, and that will work fine.
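
(As a concrete illustration of "pointing a docker-compose client at it": any 
Docker API client only needs the bay's API endpoint and TLS credentials. The 
minimal sketch below uses docker-py against a placeholder Swarm endpoint; the 
address and certificate paths are assumptions for illustration, not Magnum 
output.)

    # Minimal sketch: connect a Docker API client to a Swarm bay's endpoint.
    # docker-compose drives this same Docker API once DOCKER_HOST and
    # DOCKER_CERT_PATH point at the bay. Endpoint and file names are placeholders.
    from docker import Client
    from docker.tls import TLSConfig

    tls = TLSConfig(client_cert=('cert.pem', 'key.pem'),
                    ca_cert='ca.pem', verify=True)
    swarm = Client(base_url='tcp://203.0.113.10:2376', tls=tls)
    print(swarm.containers(all=True))  # containers scheduled by Swarm in the bay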

Thanks,

Adrian

> On Sep 30, 2015, at 8:58 AM, Devdatta Kulkarni 
> <devdatta.kulka...@rackspace.com> wrote:
> 
> +1 Hongbin.
> 
> From the perspective of Solum, which hopes to use Magnum for its application 
> container scheduling
> requirements, deep integration of COEs with OpenStack services like Keystone 
> will be useful.
> Specifically, I am thinking that it will be good if Solum can depend on 
> Keystone tokens to deploy 
> and schedule containers on the Bay nodes instead of having to use COE 
> specific credentials. 
> That way, container resources will become first class components that can be 
> monitored 
> using Ceilometer, access controlled using Keystone, and managed from within 
> Horizon.
> 
> Regards,
> Devdatta
> 
> 
> From: Hongbin Lu <hongbin...@huawei.com>
> Sent: Wednesday, September 30, 2015 9:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>   
> 
> +1 from me as well.
>  
> I think what makes Magnum appealing is the promise to provide 
> container-as-a-service. I see coe deployment as a helper to achieve the 
> promise, instead of  the main goal.
>  
> Best regards,
> Hongbin
>  
> 
> From: Jay Lau [mailto:jay.lau....@gmail.com]
> Sent: September-29-15 10:57 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>  
> 
> 
> +1 to Egor, I think that the final goal of Magnum is container as a service 
> but not coe deployment as a service. ;-)
> 
> Especially we are also working on Magnum UI, the Magnum UI should export some 
> interfaces to enable end user can create container applications but not only 
> coe deployment.
> 
> I hope that the Magnum can be treated as another "Nova" which is focusing on 
> container service. I know it is difficult to unify all of the concepts in 
> different coe (k8s has pod, service, rc, swarm only has container, nova only 
> has VM,  PM with different hypervisors), but this deserve some deep dive and 
> thinking to see how can move forward. 
> 
> 
> 
>  
> 
> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:
> definitely ;), but the are some thoughts to Tom’s email.
> 
> I agree that we shouldn't reinvent apis, but I don’t think Magnum should only 
> focus at deployment (I feel we will become another Puppet/Chef/Ansible module 
> if we do it ):)
> I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to OpenStack 
> ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step in to 
> Kub/Mesos/Swarm communities for that.
> 
> —
> Egor
> 
> From: Adrian Otto 
> <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
> Reply-To: "OpenStack Devel

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Devdatta Kulkarni
+1 Hongbin.

From the perspective of Solum, which hopes to use Magnum for its application 
container scheduling requirements, deep integration of COEs with OpenStack 
services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can depend on 
Keystone tokens to deploy and schedule containers on the Bay nodes instead of 
having to use COE-specific credentials. 
That way, container resources will become first class components that can be 
monitored 
using Ceilometer, access controlled using Keystone, and managed from within 
Horizon.

Regards,
Devdatta
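
(A minimal sketch of the integration described above, assuming the bay's k8s 
apiserver were configured to accept Keystone-issued tokens as bearer tokens; 
that wiring, plus the URLs and credentials below, are assumptions for 
illustration only.)

    # Sketch: obtain a Keystone token and present it to a bay's k8s API as a
    # bearer token. Assumes the apiserver trusts Keystone tokens; URLs and
    # credentials are placeholders.
    import requests
    from keystoneclient.v2_0 import client as ksclient

    ks = ksclient.Client(username='solum', password='secret',
                         tenant_name='demo',
                         auth_url='http://keystone.example.com:5000/v2.0')
    headers = {'Authorization': 'Bearer ' + ks.auth_token}
    resp = requests.get('https://k8s-bay.example.com:6443'
                        '/api/v1/namespaces/default/pods',
                        headers=headers, verify='ca.pem')
    print(resp.json())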


From: Hongbin Lu <hongbin...@huawei.com>
Sent: Wednesday, September 30, 2015 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
  

+1 from me as well.
 
I think what makes Magnum appealing is the promise to provide 
container-as-a-service. I see coe deployment as a helper to achieve the 
promise, instead of  the main goal.
 
Best regards,
Hongbin
 

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
  


+1 to Egor, I think that the final goal of Magnum is container as a service but 
not coe deployment as a service. ;-)

Especially since we are also working on the Magnum UI, it should export some 
interfaces that enable end users to create container applications, not only 
COE deployments.

I hope that Magnum can be treated as another "Nova" which is focusing on 
container service. I know it is difficult to unify all of the concepts in 
different COEs (k8s has pod, service, rc; swarm only has container; nova only 
has VM and PM with different hypervisors), but this deserves some deep dive and 
thinking to see how we can move forward. 
 


 

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:
Definitely ;), but there are some thoughts on Tom's email.

I agree that we shouldn't reinvent APIs, but I don't think Magnum should only 
focus on deployment (I feel we will become another Puppet/Chef/Ansible module 
if we do that) :)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the 
OpenStack ecosystem (Neutron/Cinder/Barbican/etc), even if we need to step into 
the Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com<mailto:daneh...@cisco.com>> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking over the last couple of months: completely deprecate 
the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very 
difficult, and probably a wasted effort, to try to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive docker compose 
is just command line tool which doesn’t have any api or scheduling feat

From: Egor Guz <e...@walmartlabs.com><mailto:e...@walmartlabs.com>
To: 
"openstack-dev@lists.openstack.org"<mailto:openstack-dev@lists.openstack.org> 
<openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also I belive

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread 王华
I agree with Tom in seeing Magnum as COEDaaS. k8s, swarm, and mesos are so
different in their architectures that Magnum cannot provide a unified API to
users. So I think we should focus on deployment.

Regards,
Wanghua

On Tue, Sep 29, 2015 at 5:22 PM, Tom Cammann <tom.camm...@hpe.com> wrote:

> This has been my thinking in the last couple of months to completely
> deprecate the COE specific APIs such as pod/service/rc and container.
>
> As we now support Mesos, Kubernetes and Docker Swarm its going to be very
> difficult and probably a wasted effort trying to consolidate their separate
> APIs under a single Magnum API.
>
> I'm starting to see Magnum as COEDaaS - Container Orchestration Engine
> Deployment as a Service.
>
>
> On 29/09/15 06:30, Ton Ngo wrote:
>
> Would it make sense to ask the opposite of Wanghua's question: should
> pod/service/rc be deprecated if the user can easily get to the k8s api?
> Even if we want to orchestrate these in a Heat template, the corresponding
> heat resources can just interface with k8s instead of Magnum.
> Ton Ngo,
>
> Egor Guz wrote on 09/28/2015 10:20 PM:
>
> From: Egor Guz <e...@walmartlabs.com> <e...@walmartlabs.com>
> To: "openstack-dev@lists.openstack.org"
> <openstack-dev@lists.openstack.org> <openstack-dev@lists.openstack.org>
> <openstack-dev@lists.openstack.org>
> Date: 09/28/2015 10:20 PM
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> --
>
>
>
> Also I belive docker compose is just command line tool which doesn’t have
> any api or scheduling features.
> But during last Docker Conf hackathon PayPal folks implemented docker
> compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
> which can give you pod like experience.
>
> —
> Egor
>
> From: Adrian Otto <adrian.o...@rackspace.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Date: Monday, September 28, 2015 at 22:03
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Wanghua,
>
> I do follow your logic, but docker-compose only needs the docker API to
> operate. We are intentionally avoiding re-inventing the wheel. Our goal is
> not to replace docker swarm (or other existing systems), but to complement
> it/them. We want to offer users of Docker the richness of native APIs and
> supporting tools. This way they will not need to compromise features or
> wait longer for us to implement each new feature as it is added. Keep in
> mind that our pod, service, and replication controller resources pre-date
> this philosophy. If we started out with the current approach, those would
> not exist in Magnum.
>
> Thanks,
>
> Adrian
>
> On Sep 28, 2015, at 8:32 PM, 王华 <wanghua.hum...@gmail.com> wrote:
>
> Hi folks,
>
> Magnum now exposes service, pod, etc to users in kubernetes coe, but
> exposes container in swarm coe. As I know, swarm is only a scheduler of
> container, which is like nova in openstack. Docker compose is a
> orchestration program which is like heat in openstack. k8s is the
> combination of scheduler and orchestration. So I think it is better to
> expose the apis in compose to users which are at the same level as k8s.
>
>
> Regards
> Wanghua
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Tom Cammann
This has been my thinking over the last couple of months: completely 
deprecate the COE-specific APIs such as pod/service/rc and container.


As we now support Mesos, Kubernetes and Docker Swarm, it's going to be 
very difficult, and probably a wasted effort, to try to consolidate their 
separate APIs under a single Magnum API.


I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.


On 29/09/15 06:30, Ton Ngo wrote:


Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the 
corresponding heat resources can just interface with k8s instead of 
Magnum.

Ton Ngo,

Egor Guz wrote on 09/28/2015 10:20 PM:


From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>

Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Also I believe docker compose is just a command line tool which doesn’t 
have any API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker 
compose executor for Mesos (https://github.com/mohitsoni/compose-executor)

which can give you pod like experience.

—
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage 
questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>

Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>

Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API 
to operate. We are intentionally avoiding re-inventing the wheel. Our 
goal is not to replace docker swarm (or other existing systems), but 
to complement it/them. We want to offer users of Docker the richness 
of native APIs and supporting tools. This way they will not need to 
compromise features or wait longer for us to implement each new 
feature as it is added. Keep in mind that our pod, service, and 
replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.


Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>> wrote:


Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but 
exposes container in the swarm coe. As far as I know, swarm is only a scheduler 
of containers, which is like nova in openstack. Docker compose is an 
orchestration program which is like heat in openstack. k8s is the 
combination of scheduler and orchestration. So I think it is better to 
expose the APIs in compose, which are at the same level as k8s, to users.



Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread 王华
@Egor, docker compose is just a command line tool now, but I think it will
change its architecture to client/server in the future; otherwise it
cannot do some complicated jobs.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Daneyon Hansen (danehans)

+1

From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking over the last couple of months: completely deprecate 
the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very 
difficult, and probably a wasted effort, to try to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:

Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz wrote on 09/28/2015 10:20 PM:

From: Egor Guz <e...@walmartlabs.com><mailto:e...@walmartlabs.com>
To: 
"openstack-dev@lists.openstack.org"<mailto:openstack-dev@lists.openstack.org> 
<openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Also I believe docker compose is just a command line tool which doesn’t have any 
API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com><mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to complement it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com><mailto:wanghua.hum...@gmail.com>>
 wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but exposes 
container in the swarm coe. As far as I know, swarm is only a scheduler of 
containers, which is like nova in openstack. Docker compose is an orchestration 
program which is like heat in openstack. k8s is the combination of scheduler and 
orchestration. So I think it is better to expose the APIs in compose, which 
are at the same level as k8s, to users.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org><mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Adrian Otto
This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com<mailto:daneh...@cisco.com>> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking over the last couple of months: completely deprecate 
the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very 
difficult, and probably a wasted effort, to try to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive docker compose 
is just command line tool which doesn’t have any api or scheduling feat

From: Egor Guz <e...@walmartlabs.com><mailto:e...@walmartlabs.com>
To: 
"openstack-dev@lists.openstack.org"<mailto:openstack-dev@lists.openstack.org> 
<openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also I believe docker compose is just a command line tool which doesn’t have any 
API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com><mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to complement it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com><mailto:wanghua.hum...@gmail.com>>
 wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but exposes 
container in the swarm coe. As far as I know, swarm is only a scheduler of 
containers, which is like nova in openstack. Docker compose is an orchestration 
program which is like heat in openstack. k8s is the combination of scheduler and 
orchestration. So I think it is better to expose the APIs in compose, which 
are at the same level as k8s, to users.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org><mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Monty Taylor

*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate 
the network and storage fabrics that exist in an OpenStack with 
kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's not 
interesting ... an ansible playbook can do that in 5 minutes. OTOH - 
deploying some kube into a cloud in such a way that it shares a tenant 
network with some VMs that are there - that's good stuff and I think 
actually provides significant value.
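
(A rough sketch of the "shared tenant network" idea: resolve the tenant's 
existing network with Neutron, then hand it to whatever creates the bay so the 
kube nodes land on the same network as the tenant's VMs. Only the Neutron 
lookup below is real API; the bay-creation call is a hypothetical placeholder, 
and the credentials and network name are assumptions.)

    # Sketch: look up the tenant network the VMs already use, then pass it to
    # the (hypothetical) bay-creation step so nodes get ports on that network.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://keystone.example.com:5000/v2.0')
    nets = neutron.list_networks(name='app-tier')['networks']
    tenant_net_id = nets[0]['id']

    # create_bay_on(network_id=tenant_net_id, node_count=3)  # hypothetical helper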


On 09/29/2015 10:57 PM, Jay Lau wrote:

+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export
some interfaces to enable end user can create container applications but
not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all of
the concepts in different coe (k8s has pod, service, rc, swarm only has
container, nova only has VM, PM with different hypervisors), but this
deserve some deep dive and thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com
<mailto:e...@walmartlabs.com>> wrote:

definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus at deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to
OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to
step in to Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
<daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be
very difficult and probably a wasted effort trying to consolidate
their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question:
should pod/service/rc be deprecated if the user can easily get to
the k8s api?
Even if we want to orchestrate these in a Heat template, the
corresponding heat resources can just interface with k8s instead of
Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive
docker compose is just command line tool which doesn’t have any api
or scheduling feat

From: Egor Guz <e...@wal

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Jay Lau
+1 to Egor, I think that the final goal of Magnum is container as a service
but not coe deployment as a service. ;-)

Especially since we are also working on the Magnum UI, it should export
some interfaces that enable end users to create container applications,
not only COE deployments.

I hope that Magnum can be treated as another "Nova" which is focusing
on container service. I know it is difficult to unify all of the concepts
in different COEs (k8s has pod, service, rc; swarm only has container; nova
only has VM and PM with different hypervisors), but this deserves some deep
dive and thinking to see how we can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:

> definitely ;), but the are some thoughts to Tom’s email.
>
> I agree that we shouldn't reinvent apis, but I don’t think Magnum should
> only focus at deployment (I feel we will become another Puppet/Chef/Ansible
> module if we do it ):)
> I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to
> OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step
> in to Kub/Mesos/Swarm communities for that.
>
> —
> Egor
>
> From: Adrian Otto <adrian.o...@rackspace.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Date: Tuesday, September 29, 2015 at 08:44
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> This is definitely a topic we should cover in Tokyo.
>
> On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
> <daneh...@cisco.com> wrote:
>
>
> +1
>
> From: Tom Cammann <tom.camm...@hpe.com>
> Reply-To: "openstack-dev@lists.openstack.org" 
> <openstack-dev@lists.openstack.org>
> Date: Tuesday, September 29, 2015 at 2:22 AM
> To: "openstack-dev@lists.openstack.org" 
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> This has been my thinking in the last couple of months to completely
> deprecate the COE specific APIs such as pod/service/rc and container.
>
> As we now support Mesos, Kubernetes and Docker Swarm its going to be very
> difficult and probably a wasted effort trying to consolidate their separate
> APIs under a single Magnum API.
>
> I'm starting to see Magnum as COEDaaS - Container Orchestration Engine
> Deployment as a Service.
>
> On 29/09/15 06:30, Ton Ngo wrote:
> Would it make sense to ask the opposite of Wanghua's question: should
> pod/service/rc be deprecated if the user can easily get to the k8s api?
> Even if we want to orchestrate these in a Heat template, the corresponding
> heat resources can just interface with k8s instead of Magnum.
> Ton Ngo,
>
> Egor Guz ---09/28/2015 10:20:02 PM---Also I belive docker
> compose is just command line tool which doesn’t have any api or scheduling
> feat
>
> From: Egor Guz <e...@walmartlabs.com>
> To: "openstack-dev@lists.openstack.org" 
> <openstack-dev@lists.openstack.org>
> Date: 09/28/2015 10:20 PM
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> 
>
>
>
> Also I belive docker compose is just command line tool which doesn’t have
> any api or scheduling features.
> But during last Docker Conf hackathon PayPal folks implemented docker
> compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
> which can give you pod like experience.
>
> —
> Egor
>
> From: Adrian Otto <adrian.o...@rackspace.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Date: Monday, September 28, 2015 at 22:03
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Egor Guz
Definitely ;), but there are some thoughts on Tom's email.

I agree that we shouldn't reinvent APIs, but I don't think Magnum should only 
focus on deployment (I feel we will become another Puppet/Chef/Ansible module 
if we do that) :)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the 
OpenStack ecosystem (Neutron/Cinder/Barbican/etc), even if we need to step into 
the Kub/Mesos/Swarm communities for that.

―
Egor

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com<mailto:daneh...@cisco.com>> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking over the last couple of months: completely deprecate 
the COE-specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to be very 
difficult, and probably a wasted effort, to try to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive docker compose 
is just command line tool which doesn’t have any api or scheduling feat

From: Egor Guz <e...@walmartlabs.com><mailto:e...@walmartlabs.com>
To: 
"openstack-dev@lists.openstack.org"<mailto:openstack-dev@lists.openstack.org> 
<openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also I believe docker compose is just a command line tool which doesn’t have any 
API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com><mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to complement it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com><mailto:wanghua.hum...@gmail.com>>
 wrote:

Hi folks,

Magnum now exposes service, pod, etc to users in kubernetes coe, but exposes 
container in swarm coe. As I know, swarm is only a scheduler of container, 
which is like 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread Mike Spreitzer
> From: 王华 <wanghua.hum...@gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Date: 09/28/2015 11:34 PM
> Subject: [openstack-dev] [magnum]swarm + compose = k8s?
> 
> Hi folks,
> 
> Magnum now exposes service, pod, etc to users in kubernetes coe, but
> exposes container in swarm coe. As I know, swarm is only a scheduler
> of container, which is like nova in openstack. Docker compose is a 
> orchestration program which is like heat in openstack. k8s is the 
> combination of scheduler and orchestration. So I think it is better 
> to expose the apis in compose to users which are at the same level as 
k8s.
> 

Why should the users be deprived of direct access to the Swarm API when it 
is there?

Note also that Compose addresses more general, and differently focused, 
orchestration than Kubernetes; the latter only offers homogeneous scaling 
groups --- which a docker-compose.yaml file cannot even describe.

Regards,
Mike



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread 王华
Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but
exposes container in the swarm coe. As far as I know, swarm is only a scheduler
of containers, which is like nova in openstack. Docker compose is an
orchestration program which is like heat in openstack. k8s is the
combination of scheduler and orchestration. So I think it is better to
expose the APIs in compose, which are at the same level as k8s, to users.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread Adrian Otto
Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to complement it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but exposes 
container in the swarm coe. As far as I know, swarm is only a scheduler of 
containers, which is like nova in openstack. Docker compose is an orchestration 
program which is like heat in openstack. k8s is the combination of scheduler and 
orchestration. So I think it is better to expose the APIs in compose, which 
are at the same level as k8s, to users.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread Ton Ngo
Would it make sense to ask the opposite of Wanghua's question: should
pod/service/rc be deprecated if the user can easily get to the k8s API?
Even if we want to orchestrate these in a Heat template, the corresponding
Heat resources can just interface with k8s instead of Magnum.
Ton Ngo,
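
(A rough sketch of "the corresponding Heat resources can just interface with 
k8s": a create handler that posts a pod manifest straight to a bay's apiserver. 
The class shape only loosely mirrors a Heat resource plugin; the property names, 
endpoint, and auth handling are illustrative assumptions, not an existing 
plugin.)

    # Sketch: a resource-like object whose create handler talks directly to k8s
    # rather than to Magnum's pod/service/rc objects. Values are placeholders.
    import json
    import requests

    class K8sPod(object):  # a real plugin would subclass heat.engine.resource.Resource
        def __init__(self, k8s_endpoint, pod_manifest, token):
            self.k8s_endpoint = k8s_endpoint
            self.pod_manifest = pod_manifest
            self.token = token

        def handle_create(self):
            # POST the pod manifest to the bay's apiserver.
            resp = requests.post(
                self.k8s_endpoint + '/api/v1/namespaces/default/pods',
                data=json.dumps(self.pod_manifest),
                headers={'Authorization': 'Bearer ' + self.token,
                         'Content-Type': 'application/json'},
                verify='ca.pem')
            resp.raise_for_status()
            return resp.json()['metadata']['name']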



From:   Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date:   09/28/2015 10:20 PM
Subject:        Re: [openstack-dev] [magnum]swarm + compose = k8s?



Also I believe docker compose is just a command line tool which doesn’t have
any API or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker
compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to
operate. We are intentionally avoiding re-inventing the wheel. Our goal is
not to replace docker swarm (or other existing systems), but to complement
it/them. We want to offer users of Docker the richness of native APIs and
supporting tools. This way they will not need to compromise features or
wait longer for us to implement each new feature as it is added. Keep in
mind that our pod, service, and replication controller resources pre-date
this philosophy. If we started out with the current approach, those would
not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 <wanghua.hum...@gmail.com> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but
exposes container in the swarm coe. As far as I know, swarm is only a scheduler
of containers, which is like nova in openstack. Docker compose is an
orchestration program which is like heat in openstack. k8s is the
combination of scheduler and orchestration. So I think it is better to
expose the APIs in compose, which are at the same level as k8s, to users.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread Egor Guz
Also I believe docker compose is just a command line tool which doesn’t have any 
API or scheduling features.
But during the last Docker Conf hackathon, PayPal folks implemented a docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you a pod-like experience.

―
Egor

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to complement it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes coe, but exposes 
container in the swarm coe. As far as I know, swarm is only a scheduler of 
containers, which is like nova in openstack. Docker compose is an orchestration 
program which is like heat in openstack. k8s is the combination of scheduler and 
orchestration. So I think it is better to expose the APIs in compose, which 
are at the same level as k8s, to users.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev