Re: [openstack-dev] [magnum] Mesos Conductor

2015-11-18 Thread Jay Lau
+1.

One problem I want to mention is that for Mesos integration, we cannot limit
ourselves to Marathon + Mesos, as there are many frameworks that can run on top
of Mesos, such as Chronos, Kubernetes, etc. We may need to consider more for
Mesos integration, as there is a huge ecosystem built on top of Mesos.
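
(For context on the wrapper question discussed below: creating an app against
the native Marathon API is already a one-liner. A rough sketch, assuming
Marathon is reachable on the bay master at its default port 8080; the app
definition fields are illustrative:)

  curl -X POST http://<bay-master>:8080/v2/apps \
    -H "Content-Type: application/json" \
    -d '{
          "id": "/web",
          "instances": 1,
          "cpus": 0.5,
          "mem": 128,
          "container": {
            "type": "DOCKER",
            "docker": {"image": "nginx"}
          }
        }'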

On Thu, Nov 19, 2015 at 8:26 AM, Adrian Otto <adrian.o...@rackspace.com>
wrote:

> Bharath,
>
> I agree with Hongbin on this. Let’s not expand magnum to deal with apps or
> appgroups in the near term. If there is a strong desire to add these
> things, we could allow it by having a plugin/extensions interface for the
> Magnum API to allow additional COE specific features. Honestly, it’s just
> going to be a nuisance to keep up with the various upstreams until they
> become completely stable from an API perspective, and no additional changes
> are likely. All of our COE’s still have plenty of maturation ahead of them,
> so this is the wrong time to wrap them.
>
> If someone really wants apps and appgroups, (s)he could add that to an
> experimental branch of the magnum client, and have it interact with the
> marathon API directly rather than trying to represent those resources in
> Magnum. If that tool became popular, then we could revisit this topic for
> further consideration.
>
> Adrian
>
> > On Nov 18, 2015, at 3:21 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> >
> > Hi Bharath,
> >
> > I agree with the “container” part. We can implement “magnum container-create
> > ...” for a mesos bay in the way you mentioned. Personally, I don’t like to
> > introduce “apps” and “appgroups” resources to Magnum, because they are
> > already provided by the native tool [1]. I can’t see the benefit of
> > implementing a wrapper API to offer what the native tool already offers.
> > However, if you can point out a valid use case to wrap the API, I will give
> > it more thought.
> >
> > Best regards,
> > Hongbin
> >
> > [1] https://docs.mesosphere.com/using/cli/marathonsyntax/
> >
> > From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
> > Sent: November-18-15 1:20 PM
> > To: openstack-dev@lists.openstack.org
> > Subject: [openstack-dev] [magnum] Mesos Conductor
> >
> > Hi all,
> >
> > I am working on the blueprint [1]. As per my understanding, we have two
> > resources/objects in mesos+marathon:
> >
> > 1) Apps: a combination of instances/containers running on multiple hosts,
> > representing a service. [2]
> > 2) Application Groups: a group of apps; for example, we can have a database
> > application group which consists of a MongoDB app and a MySQL app. [3]
> >
> > So I think we need two resources, 'apps' and 'appgroups', in the mesos
> > conductor, like we have pod and rc for k8s. Regarding the 'magnum container'
> > command, we can create, delete and retrieve container details as part of a
> > mesos app itself (container = app with 1 instance), though I think in the
> > mesos case 'magnum app-create ...' and 'magnum container-create ...' will
> > use the same REST API.
> >
> > Let me know your opinion/comments on this, and correct me if I am wrong.
> >
> > [1]https://blueprints.launchpad.net/magnum/+spec/mesos-conductor.
> > [2]https://mesosphere.github.io/marathon/docs/application-basics.html
> > [3]https://mesosphere.github.io/marathon/docs/application-groups.html
> >
> >
> > Regards
> > Bharath T
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-18 Thread Ton Ngo
The slides for the Tokyo talk are available on slideshare:
http://www.slideshare.net/huengo965921/exploring-magnum-and-senlin-integration-for-autoscaling-containers

Ton,




From:   Jay Lau <jay.lau@gmail.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   11/17/2015 10:05 PM
Subject:        Re: [openstack-dev] [magnum] Autoscaling both clusters and
containers



It's great that we are discussing this on the mailing list. I filed a bp here
https://blueprints.launchpad.net/magnum/+spec/two-level-auto-scaling and am
planning a spec for this. You can get some early ideas from what Ton
pointed to here:
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers


@Ton, is it possible to publish the slides to slideshare? ;-)

Our thinking was to introduce an autoscaler service to Magnum, just like what
GCE does now. I will keep you updated when a spec is ready for review.

On Wed, Nov 18, 2015 at 1:22 PM, Egor Guz <e...@walmartlabs.com> wrote:
  Ryan

  I haven’t seen any proposals/implementations from Mesos/Swarm (but I am
  not following the Mesos and Swarm communities very closely these days).
  But Kubernetes 1.1 has pod autoscaling (
  https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md
  ), which should cover container auto-scaling. Also, there is a PR for cluster
  auto-scaling (https://github.com/kubernetes/kubernetes/pull/15304), which
  has an implementation for GCE, but OpenStack support can be added as well.

  —
  Egor

  From: Ton Ngo <t...@us.ibm.com<mailto:t...@us.ibm.com>>
  Reply-To: "OpenStack Development Mailing List (not for usage questions)"
  <openstack-dev@lists.openstack.org>
  Date: Tuesday, November 17, 2015 at 16:58
  To: "OpenStack Development Mailing List (not for usage questions)" <
  openstack-dev@lists.openstack.org>
  Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and
  containers


  Hi Ryan,
  There was a talk in the last Summit on this topic to explore the options
  with Magnum, Senlin, Heat, Kubernetes:
  
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers

  A demo was shown with Senlin interfacing to Magnum to autoscale.
  There was also a Magnum design session to discuss this same topic. The
  use cases are similar to what you describe. Because the subject is
  complex, there are many moving parts, and multiple teams/projects are
  involved, one outcome of the design session is that we will write a spec
  on autoscaling containers and clusters. A patch should be coming soon, so
  it would be great to have your input on the spec.
  Ton,

  [Inactive hide details for Ryan Rossiter ---11/17/2015 02:05:48 PM---Hi
  all, I was having a discussion with a teammate with resp]Ryan Rossiter
  ---11/17/2015 02:05:48 PM---Hi all, I was having a discussion with a
  teammate with respect to container

  From: Ryan Rossiter <rlros...@linux.vnet.ibm.com>
  To: openstack-dev@lists.openstack.org
  Date: 11/17/2015 02:05 PM
  Subject: [openstack-dev] [magnum] Autoscaling both clusters and
  containers

  



  Hi all,

  I was having a discussion with a teammate with respect to container
  scaling. He likes the aspect of nova-docker that allows you to scale
  (essentially) infinitely almost instantly, assuming you are using a
  large pool of compute hosts. In the case of Magnum, if I'm a container
  user, I don't want to be paying for a ton of vms that just sit idle, but
  I also want to have enough vms to handle my scale when I infrequently
  need it. But above all, when I need scale, I don't want to suddenly have
  to go boot vms and wait for them to start up when I really need it.

  I saw [1] which discusses container scaling, but I'm thinking we can
  take this one step further. If I don't want to pay for a lot of vms when
  I'm not using them, could I set up an autoscale policy that allows my
  cluster to expand when my container concentration gets too high on my
  existing cluster? It's kind of a case of nested autoscaling. The
  containers are scaled based on request demand, and the cluster vms are
  scaled based on container count.

  I'm unsure of the details of Senlin, but at least looking at Heat
  autoscaling [2], this would not be very hard to add to the Magnum
  templates, and we would forward those on through the bay API. (I figure
  we would do this through the bay, not baymodel, because I can see
  similar clusters that would want to be scaled differently).

  Let me know if I'm totally crazy or if this is a good idea (or if you
  guys have already talked about this before). I would be interested in
  your feedback.

  [1]
  http://lists.openstack.org/pipermail/openstack-dev/201

[openstack-dev] [magnum] Issue on history of renamed file/folder

2015-11-18 Thread Hongbin Lu
Hi team,

I would like to start this ML thread to discuss the git rename issue. Here is
the problem. In Git, it is handy to retrieve the commit history of a
file/folder. There are several ways to do that. In the CLI, you can run "git
log ..." to show the history. In GitHub, you can click the "History" button at
the top of the file. The history of a file is traced back to the commit in
which the file was created or renamed. In other words, renaming a file will cut
the commit history of the file. If you want to trace the full history of a
renamed file, in the CLI you can use "git log --follow ...". However, this
feature is not supported in GitHub.

A way to mitigate the issue is to avoid renaming files/folders when the rename
does not fix a functional defect (e.g. when it only improves the naming style).
If we do that, we sacrifice the quality of file/folder names to get a more
traceable history. On the other hand, if we don't do that, we have to tolerate
the history disconnection in GitHub. I want to discuss which solution is
preferred, or whether there is a better way to handle it.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-17 Thread Ton Ngo

Hi Ryan,
 There was a talk in the last Summit on this topic to explore the
options with Magnum, Senlin, Heat, Kubernetes:
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers
A demo was shown with Senlin interfacing to Magnum to autoscale.
There was also a Magnum design session to discuss this same topic.
The use cases are similar to what you describe.  Because the subject is
complex, there are many moving parts, and multiple teams/projects are
involved, one outcome of the design session is that we will write a spec on
autoscaling containers and clusters.  A patch should be coming soon, so it
would be great to have your input on the spec.
Ton,



From:   Ryan Rossiter <rlros...@linux.vnet.ibm.com>
To: openstack-dev@lists.openstack.org
Date:   11/17/2015 02:05 PM
Subject:    [openstack-dev] [magnum] Autoscaling both clusters and
containers



Hi all,

I was having a discussion with a teammate with respect to container
scaling. He likes the aspect of nova-docker that allows you to scale
(essentially) infinitely almost instantly, assuming you are using a
large pool of compute hosts. In the case of Magnum, if I'm a container
user, I don't want to be paying for a ton of vms that just sit idle, but
I also want to have enough vms to handle my scale when I infrequently
need it. But above all, when I need scale, I don't want to suddenly have
to go boot vms and wait for them to start up when I really need it.

I saw [1] which discusses container scaling, but I'm thinking we can
take this one step further. If I don't want to pay for a lot of vms when
I'm not using them, could I set up an autoscale policy that allows my
cluster to expand when my container concentration gets too high on my
existing cluster? It's kind of a case of nested autoscaling. The
containers are scaled based on request demand, and the cluster vms are
scaled based on container count.

I'm unsure of the details of Senlin, but at least looking at Heat
autoscaling [2], this would not be very hard to add to the Magnum
templates, and we would forward those on through the bay API. (I figure
we would do this through the bay, not baymodel, because I can see
similar clusters that would want to be scaled differently).

Let me know if I'm totally crazy or if this is a good idea (or if you
guys have already talked about this before). I would be interested in
your feedback.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078628.html

[2] https://wiki.openstack.org/wiki/Heat/AutoScaling#AutoScaling_API

--
Thanks,

Ryan Rossiter (rlrossit)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-17 Thread Egor Guz
Ryan

I haven’t seen any proposals/implementations from Mesos/Swarm (but I am not
following the Mesos and Swarm communities very closely these days).
But Kubernetes 1.1 has pod autoscaling
(https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md),
which should cover container auto-scaling. Also, there is a PR for cluster
auto-scaling (https://github.com/kubernetes/kubernetes/pull/15304), which
has an implementation for GCE, but OpenStack support can be added as well.

—
Egor

From: Ton Ngo <t...@us.ibm.com<mailto:t...@us.ibm.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, November 17, 2015 at 16:58
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and containers


Hi Ryan,
There was a talk in the last Summit on this topic to explore the options with 
Magnum, Senlin, Heat, Kubernetes:
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers
A demo was shown with Senlin interfacing to Magnum to autoscale.
There was also a Magnum design session to discuss this same topic. The use 
cases are similar to what you describe. Because the subject is complex, there 
are many moving parts, and multiple teams/projects are involved, one outcome of 
the design session is that we will write a spec on autoscaling containers and 
clusters. A patch should be coming soon, so it would be great to have your input 
on the spec.
Ton,

[Inactive hide details for Ryan Rossiter ---11/17/2015 02:05:48 PM---Hi all, I 
was having a discussion with a teammate with resp]Ryan Rossiter ---11/17/2015 
02:05:48 PM---Hi all, I was having a discussion with a teammate with respect to 
container

From: Ryan Rossiter 
<rlros...@linux.vnet.ibm.com<mailto:rlros...@linux.vnet.ibm.com>>
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Date: 11/17/2015 02:05 PM
Subject: [openstack-dev] [magnum] Autoscaling both clusters and containers





Hi all,

I was having a discussion with a teammate with respect to container
scaling. He likes the aspect of nova-docker that allows you to scale
(essentially) infinitely almost instantly, assuming you are using a
large pool of compute hosts. In the case of Magnum, if I'm a container
user, I don't want to be paying for a ton of vms that just sit idle, but
I also want to have enough vms to handle my scale when I infrequently
need it. But above all, when I need scale, I don't want to suddenly have
to go boot vms and wait for them to start up when I really need it.

I saw [1] which discusses container scaling, but I'm thinking we can
take this one step further. If I don't want to pay for a lot of vms when
I'm not using them, could I set up an autoscale policy that allows my
cluster to expand when my container concentration gets too high on my
existing cluster? It's kind of a case of nested autoscaling. The
containers are scaled based on request demand, and the cluster vms are
scaled based on container count.

I'm unsure of the details of Senlin, but at least looking at Heat
autoscaling [2], this would not be very hard to add to the Magnum
templates, and we would forward those on through the bay API. (I figure
we would do this through the bay, not baymodel, because I can see
similar clusters that would want to be scaled differently).

Let me know if I'm totally crazy or if this is a good idea (or if you
guys have already talked about this before). I would be interested in
your feedback.

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078628.html
[2] https://wiki.openstack.org/wiki/Heat/AutoScaling#AutoScaling_API

--
Thanks,

Ryan Rossiter (rlrossit)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-17 Thread Jay Lau
It's great that we are discussing this on the mailing list. I filed a bp here
https://blueprints.launchpad.net/magnum/+spec/two-level-auto-scaling and am
planning a spec for this. You can get some early ideas from what Ton
pointed to here:
https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers

@Ton, is it possible to publish the slides to slideshare? ;-)

Our thinking was to introduce an autoscaler service to Magnum, just like what
GCE does now. I will keep you updated when a spec is ready for review.

On Wed, Nov 18, 2015 at 1:22 PM, Egor Guz <e...@walmartlabs.com> wrote:

> Ryan
>
> I haven’t seen any proposals/implementations from Mesos/Swarm (but I am
> not following the Mesos and Swarm communities very closely these days).
> But Kubernetes 1.1 has pod autoscaling (
> https://github.com/kubernetes/kubernetes/blob/master/docs/design/horizontal-pod-autoscaler.md
> ), which should cover container auto-scaling. Also, there is a PR for cluster
> auto-scaling (https://github.com/kubernetes/kubernetes/pull/15304), which
> has an implementation for GCE, but OpenStack support can be added as well.
>
> —
> Egor
>
> From: Ton Ngo <t...@us.ibm.com<mailto:t...@us.ibm.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
> >>
> Date: Tuesday, November 17, 2015 at 16:58
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
> >>
> Subject: Re: [openstack-dev] [magnum] Autoscaling both clusters and
> containers
>
>
> Hi Ryan,
> There was a talk in the last Summit on this topic to explore the options
> with Magnum, Senlin, Heat, Kubernetes:
>
> https://www.openstack.org/summit/tokyo-2015/videos/presentation/exploring-magnum-and-senlin-integration-for-autoscaling-containers
> A demo was shown with Senlin interfacing to Magnum to autoscale.
> There was also a Magnum design session to discuss this same topic. The
> use cases are similar to what you describe. Because the subject is complex,
> there are many moving parts, and multiple teams/projects are involved, one
> outcome of the design session is that we will write a spec on autoscaling
> containers and clusters. A patch should be coming soon, so it would be great
> to have your input on the spec.
> Ton,
>
> [Inactive hide details for Ryan Rossiter ---11/17/2015 02:05:48 PM---Hi
> all, I was having a discussion with a teammate with resp]Ryan Rossiter
> ---11/17/2015 02:05:48 PM---Hi all, I was having a discussion with a
> teammate with respect to container
>
> From: Ryan Rossiter <rlros...@linux.vnet.ibm.com>
> To: openstack-dev@lists.openstack.org
> Date: 11/17/2015 02:05 PM
> Subject: [openstack-dev] [magnum] Autoscaling both clusters and containers
>
> 
>
>
>
> Hi all,
>
> I was having a discussion with a teammate with respect to container
> scaling. He likes the aspect of nova-docker that allows you to scale
> (essentially) infinitely almost instantly, assuming you are using a
> large pool of compute hosts. In the case of Magnum, if I'm a container
> user, I don't want to be paying for a ton of vms that just sit idle, but
> I also want to have enough vms to handle my scale when I infrequently
> need it. But above all, when I need scale, I don't want to suddenly have
> to go boot vms and wait for them to start up when I really need it.
>
> I saw [1] which discusses container scaling, but I'm thinking we can
> take this one step further. If I don't want to pay for a lot of vms when
> I'm not using them, could I set up an autoscale policy that allows my
> cluster to expand when my container concentration gets too high on my
> existing cluster? It's kind of a case of nested autoscaling. The
> containers are scaled based on request demand, and the cluster vms are
> scaled based on container count.
>
> I'm unsure of the details of Senlin, but at least looking at Heat
> autoscaling [2], this would not be very hard to add to the Magnum
> templates, and we would forward those on through the bay API. (I figure
> we would do this through the bay, not baymodel, because I can see
> similar clusters that would want to be scaled differently).
>
> Let me know if I'm totally crazy or if this is a good idea (or if you
> guys have already talked about this before). I would be interested in
> your feedback.
>
> [1]
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078628.html
>

[openstack-dev] [magnum] Autoscaling both clusters and containers

2015-11-17 Thread Ryan Rossiter

Hi all,

I was having a discussion with a teammate with respect to container 
scaling. He likes the aspect of nova-docker that allows you to scale 
(essentially) infinitely almost instantly, assuming you are using a 
large pool of compute hosts. In the case of Magnum, if I'm a container 
user, I don't want to be paying for a ton of vms that just sit idle, but 
I also want to have enough vms to handle my scale when I infrequently 
need it. But above all, when I need scale, I don't want to suddenly have 
to go boot vms and wait for them to start up when I really need it.


I saw [1] which discusses container scaling, but I'm thinking we can 
take this one step further. If I don't want to pay for a lot of vms when 
I'm not using them, could I set up an autoscale policy that allows my 
cluster to expand when my container concentration gets too high on my 
existing cluster? It's kind of a case of nested autoscaling. The 
containers are scaled based on request demand, and the cluster vms are 
scaled based on container count.


I'm unsure of the details of Senlin, but at least looking at Heat 
autoscaling [2], this would not be very hard to add to the Magnum 
templates, and we would forward those on through the bay API. (I figure 
we would do this through the bay, not baymodel, because I can see 
similar clusters that would want to be scaled differently).
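
(To make the Heat side a bit more concrete, here is a rough sketch of the kind
of snippet [2] points at; everything below is illustrative rather than taken
from the actual Magnum templates, and "bay-node.yaml" is a hypothetical nested
template describing one VM.)

  # Sketch only: an autoscaling group of bay nodes plus a scale-out policy.
  # A container-count alarm (or an external autoscaler) would POST to the
  # policy's alarm_url to grow the cluster.
  cat > bay-autoscaling-sketch.yaml <<'EOF'
  heat_template_version: 2014-10-16
  resources:
    node_group:
      type: OS::Heat::AutoScalingGroup
      properties:
        min_size: 1
        max_size: 10
        resource:
          type: bay-node.yaml
    scale_out_policy:
      type: OS::Heat::ScalingPolicy
      properties:
        adjustment_type: change_in_capacity
        auto_scaling_group_id: {get_resource: node_group}
        cooldown: 60
        scaling_adjustment: 1
  EOF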


Let me know if I'm totally crazy or if this is a good idea (or if you 
guys have already talked about this before). I would be interested in 
your feedback.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078628.html

[2] https://wiki.openstack.org/wiki/Heat/AutoScaling#AutoScaling_API

--
Thanks,

Ryan Rossiter (rlrossit)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-13 Thread Hongbin Lu
I am going to share something that might be off the topic a bit.

Yesterday, I was pulled into the #openstack-infra channel to participate in a
discussion related to the atomic image download in Magnum. It looks like the
infra team is not satisfied with the large image size. In particular, they
needed to double the timeout to accommodate the job [1] [2], which made them
unhappy. Is there a way to reduce the image size? Or even better, is it
possible to build the image locally instead of downloading it?

[1] https://review.openstack.org/#/c/242742/
[2] https://review.openstack.org/#/c/244847/

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: November-13-15 12:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.


Right now, it seems we cannot reduce the devstack runtime. And @Ton, yes, the
image download time seems OK in the Jenkins job; it took about 4-5 mins.

But bay-creation time is an interesting topic; it seems related to Heat or VM
setup time consumption, and needs some investigation.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: "Ton Ngo" <t...@us.ibm.com<mailto:t...@us.ibm.com>>
To: "OpenStack Development Mailing List \(not for usage questions\)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: 13/11/2015 01:13 pm
Subject: Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on 
gate.





Thanks Eli for the analysis. I notice that the time to download the image is 
only around 1:15 mins out of some 21 mins to set up devstack. So it seems 
trying to reduce the size of the image won't make a significant improvement in 
the devstack time. I wonder how the image size affects the VM creation time for 
the cluster. If we can look at the Heat event stream, we might get an idea.
Ton,



From: Egor Guz <e...@walmartlabs.com<mailto:e...@walmartlabs.com>>
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: 11/12/2015 05:25 PM
Subject: Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on 
gate.




Eli,

First of all I would like to say thank you for your effort (I have never seen
so many patch sets ;)), but I don’t think we should remove “tls_disabled=True”
tests from the gates now (maybe in L).
It’s still a very commonly used feature and a backup plan if TLS doesn’t work
for some reason.

I think it’s a good idea to group tests per pipeline; we should definitely
follow it.

―
Egor

From: "Qiao,Liyong" <liyong.q...@intel.com<mailto:liyong.q...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 11, 2015 at 23:02
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

hello all:

I will update some Magnum functional testing status, functional/integration 
testing
is important to us, since we change/modify the Heat template rapidly, we need to
verify the modification is correct, so we need to cover all templates Magnum 
has.
and currently we only has k8s testing(only test with atomic image), we need to
add more, like swarm(WIP), mesos(under plan), also , we may need to support COS 
image.
lots of work need to be done.

for the functional testing time costing, we discussed during the Tokyo summit,
Adrian expected that we can reduce the timing cost to 20min.

I did some analyse

Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-13 Thread Gregory Haynes
Excerpts from Hongbin Lu's message of 2015-11-13 16:05:24 +:
> I am going to share something that might be off the topic a bit.
> 
> Yesterday, I was pulled to the #openstack-infra channel to participant a 
> discussion, which is related to the atomic image download in Magnum. It looks 
> the infra team is not satisfied with the large image size. In particular, 
> they need to double the timeout to accommodate the job [1] [2], which made 
> them unhappy. Is there a way to reduce the image size? Or even better, is it 
> possible to build the image locally instead of downloading it?
> 
> [1] https://review.openstack.org/#/c/242742/
> [2] https://review.openstack.org/#/c/244847/
> 
> Best regards,
> Hongbin

I am not sure how much of the current job is related to image
downloading (a previous message suggested maybe it isn't much?). If it
is an issue though - we have a tool for making images (DIB[1]) which is
already used by many OpenStack projects and it would be great if support
was added for it to make images that are useful to Magnum. DIB is also
pretty good at making images which are as small as possible, so it might
be a good fit.

I looked at doing this a while ago, and IIRC the atomic images were just
an lvm with a partition for a rootfs and a partition for a docker
overlay fs. The docs look like more options could be supported, but
regardless this seems like something DIB could do if someone was willing
to invest the effort.
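
(For anyone who has not used DIB, the workflow is roughly the following; the
element list is illustrative, and a Magnum-specific element that installs
docker/kubernetes would still have to be written:)

  pip install diskimage-builder
  # Build a qcow2 from a base distro element plus the "vm" element
  # (partitioning/bootloader); a hypothetical "magnum-node" element would
  # add docker and kubernetes on top.
  disk-image-create -o magnum-node fedora vm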

Cheers,
Greg

1: http://docs.openstack.org/developer/diskimage-builder/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-13 Thread Ton Ngo

We discussed this topic at the last design summit, and a BP has been opened
to track the effort:
https://blueprints.launchpad.net/magnum/+spec/ubuntu-image-build
Ton,



From:   Gregory Haynes <g...@greghaynes.net>
To: openstack-dev <openstack-dev@lists.openstack.org>
Date:   11/13/2015 10:03 AM
Subject:    Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing
on gate.



Excerpts from Hongbin Lu's message of 2015-11-13 16:05:24 +:
> I am going to share something that might be off the topic a bit.
>
> Yesterday, I was pulled to the #openstack-infra channel to participant a
discussion, which is related to the atomic image download in Magnum. It
looks the infra team is not satisfied with the large image size. In
particular, they need to double the timeout to accommodate the job [1] [2],
which made them unhappy. Is there a way to reduce the image size? Or even
better, is it possible to build the image locally instead of downloading
it?
>
> [1] https://review.openstack.org/#/c/242742/
> [2] https://review.openstack.org/#/c/244847/
>
> Best regards,
> Hongbin

I am not sure how much of the current job is related to image
downloading (a previous message suggested maybe it isn't much?). If it
is an issue though - we have a tool for making images (DIB[1]) which is
already used by many OpenStack projects and it would be great if support
was added for it to make images that are useful to Magnum. DIB is also
pretty good at making images which are as small as possible, so it might
be a good fit.

I looked at doing this a while ago, and IIRC the atomic images were just
an lvm with a partition for a rootfs and a partition for a docker
overlay fs. The docs look like more options could be supported, but
regardless this seems like something DIB could do if someone was willing
to invest the effort.

Cheers,
Greg

1: http://docs.openstack.org/developer/diskimage-builder/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-12 Thread Egor Guz
Eli,

First of all I would like to say thank you for your effort (I have never seen
so many patch sets ;)), but I don’t think we should remove “tls_disabled=True”
tests from the gates now (maybe in L).
It’s still a very commonly used feature and a backup plan if TLS doesn’t work
for some reason.

I think it’s a good idea to group tests per pipeline; we should definitely
follow it.

—
Egor

From: "Qiao,Liyong" <liyong.q...@intel.com<mailto:liyong.q...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 11, 2015 at 23:02
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

hello all:

I will update some Magnum functional testing status, functional/integration 
testing
is important to us, since we change/modify the Heat template rapidly, we need to
verify the modification is correct, so we need to cover all templates Magnum 
has.
and currently we only has k8s testing(only test with atomic image), we need to
add more, like swarm(WIP), mesos(under plan), also , we may need to support COS 
image.
lots of work need to be done.

for the functional testing time costing, we discussed during the Tokyo summit,
Adrian expected that we can reduce the timing cost to 20min.

I did some analyses on the functional/integrated testing on gate pipeline.
the stages will be follows:
take k8s functional testing for example, we did follow testing case:

1) baymodel creation
2) bay(tls_disabled=True) creation/deletion
3) bay(tls_disabled=False) creation to testing k8s api and delete it after 
testing.

for each stage, the time costing is follows:

  *   devstack prepare: 5-6 mins
  *   Running devstack: 15 mins(include downloading atomic image)
  *   1) and 2) 15 mins
  *   3) 15 +3 mins

totally about 60 mins currently a example is 1h 05m 57s
see 
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html
for all time stamps.

I don't think it is possible to reduce time to 20 mins, since devstack setup 
will take 20 mins already.

To reduce time, I suggest to only create 1 bay each pipeline and do vary kinds 
of testing
on this bay, if want to test some specify bay (for example, network_driver 
etc), create
a new pipeline .

So, I think we can delete 2), since 3) will do similar things(create/delete), 
the different is
3) use tls_disabled=False. what do you think ?
see https://review.openstack.org/244378 for the time costing, will reduce to 45 
min (48m 50s in the example.)

=
For other related functional testing works:
I 'v done the split of functional testing per COE, we have pipeline as:

  *   gate-functional-dsvm-magnum-api 30 mins
  *   gate-functional-dsvm-magnum-k8s 60 mins

And for swam pipeline, patches is done, under reviewing now(works fine on gate)
https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-12 Thread Ton Ngo

Thanks Eli for the analysis.  I notice that the time to download the image
is only around 1:15 mins out of some 21 mins to set up devstack.  So it
seems trying to reduce the size of the image won't make a significant
improvement in the devstack time.   I wonder how the image size affects the
VM creation time for the cluster.  If we can look at the Heat event stream,
we might get an idea.
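
(For example, on a devstack node something like the following shows where the
time goes during bay creation; the stack name here is illustrative:)

  # Timestamped events for every resource in the bay's Heat stack:
  heat event-list my-bay-stack
  # Per-resource status summary for the same stack:
  heat resource-list my-bay-stack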
Ton,




From:   Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date:   11/12/2015 05:25 PM
Subject:        Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing
on gate.



Eli,

First of all I would like to say thank you for your effort (I have never seen
so many patch sets ;)), but I don’t think we should remove “tls_disabled=True”
tests from the gates now (maybe in L).
It’s still a very commonly used feature and a backup plan if TLS doesn’t work
for some reason.

I think it’s a good idea to group tests per pipeline; we should definitely
follow it.

—
Egor

From: "Qiao,Liyong" <liyong.q...@intel.com<mailto:liyong.q...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
>>
Date: Wednesday, November 11, 2015 at 23:02
To: "openstack-dev@lists.openstack.org<
mailto:openstack-dev@lists.openstack.org>"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
>>
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on
gate.

hello all:

I will update some Magnum functional testing status, functional/integration
testing
is important to us, since we change/modify the Heat template rapidly, we
need to
verify the modification is correct, so we need to cover all templates
Magnum has.
and currently we only has k8s testing(only test with atomic image), we need
to
add more, like swarm(WIP), mesos(under plan), also , we may need to support
COS image.
lots of work need to be done.

for the functional testing time costing, we discussed during the Tokyo
summit,
Adrian expected that we can reduce the timing cost to 20min.

I did some analyses on the functional/integrated testing on gate pipeline.
the stages will be follows:
take k8s functional testing for example, we did follow testing case:

1) baymodel creation
2) bay(tls_disabled=True) creation/deletion
3) bay(tls_disabled=False) creation to testing k8s api and delete it after
testing.

for each stage, the time costing is follows:

  *   devstack prepare: 5-6 mins
  *   Running devstack: 15 mins(include downloading atomic image)
  *   1) and 2) 15 mins
  *   3) 15 +3 mins

totally about 60 mins currently a example is 1h 05m 57s
see
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html

for all time stamps.

I don't think it is possible to reduce time to 20 mins, since devstack
setup will take 20 mins already.

To reduce time, I suggest to only create 1 bay each pipeline and do vary
kinds of testing
on this bay, if want to test some specify bay (for example, network_driver
etc), create
a new pipeline .

So, I think we can delete 2), since 3) will do similar things
(create/delete), the different is
3) use tls_disabled=False. what do you think ?
see https://review.openstack.org/244378 for the time costing, will reduce
to 45 min (48m 50s in the example.)

=
For other related functional testing works:
I 'v done the split of functional testing per COE, we have pipeline as:

  *   gate-functional-dsvm-magnum-api 30 mins
  *   gate-functional-dsvm-magnum-k8s 60 mins

And for swam pipeline, patches is done, under reviewing now(works fine on
gate)
https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-12 Thread Kai Qiang Wu
Right now, it seems we cannot reduce the devstack runtime. And @Ton, yes,
the image download time seems OK in the Jenkins job; it took about 4-5 mins.

But bay-creation time is an interesting topic; it seems related to Heat or VM
setup time consumption, and needs some investigation.



Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Ton Ngo" <t...@us.ibm.com>
To: "OpenStack Development Mailing List \(not for usage questions
\)" <openstack-dev@lists.openstack.org>
Date:   13/11/2015 01:13 pm
Subject:    Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing
on  gate.



Thanks Eli for the analysis. I notice that the time to download the image
is only around 1:15 mins out of some 21 mins to set up devstack. So it
seems trying to reduce the size of the image won't make a significant
improvement in the devstack time. I wonder how the image size affects the
VM creation time for the cluster. If we can look at the Heat event stream,
we might get an idea.
Ton,



From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: 11/12/2015 05:25 PM
Subject: Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on
gate.



Eli,

First of all I would like to say thank you for your effort (I have never seen
so many patch sets ;)), but I don’t think we should remove “tls_disabled=True”
tests from the gates now (maybe in L).
It’s still a very commonly used feature and a backup plan if TLS doesn’t work
for some reason.

I think it’s a good idea to group tests per pipeline; we should definitely
follow it.

―
Egor

From: "Qiao,Liyong" <liyong.q...@intel.com<mailto:liyong.q...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
>>
Date: Wednesday, November 11, 2015 at 23:02
To: "openstack-dev@lists.openstack.org<
mailto:openstack-dev@lists.openstack.org>"
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
>>
Subject: [openstack-dev] [Magnum][Testing] Reduce Functional testing on
gate.

hello all:

I will update some Magnum functional testing status, functional/integration
testing
is important to us, since we change/modify the Heat template rapidly, we
need to
verify the modification is correct, so we need to cover all templates
Magnum has.
and currently we only has k8s testing(only test with atomic image), we need
to
add more, like swarm(WIP), mesos(under plan), also , we may need to support
COS image.
lots of work need to be done.

for the functional testing time costing, we discussed during the Tokyo
summit,
Adrian expected that we can reduce the timing cost to 20min.

I did some analyses on the functional/integrated testing on gate pipeline.
the stages will be follows:
take k8s functional testing for example, we did follow testing case:

1) baymodel creation
2) bay(tls_disabled=True) creation/deletion
3) bay(tls_disabled=False) creation to testing k8s api and delete it after
testing.

for each stage, the time costing is follows:

 *   devstack prepare: 5-6 mins
 *   Running devstack: 15 mins(include downloading atomic image)
 *   1) and 2) 15 mins
 *   3) 15 +3 mins

totally about 60 mins currently a example is 1h 05m 57s
see
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html

for all time stamps.

I don't think it is possible to reduce time to 20 mins, since devstack
setup will take 20 mins already.

To reduce time, I suggest to only create 1 bay each pipeline and do vary
kinds of testing
on this bay, if want to test some specify bay (for example, network_driver
etc), create
a new pipeline .

So, I think we can delete 2), since 3) will do similar things
(create/delete), the different is
3) use tls_disabled=False. what do you think ?
see https://review.openstack.org/244378 for the time costing, will reduce
to 45 min (48m 50s in the example.)

=
For other related functional testing works:
I 'v done the split of functional testing per COE, we have pipeline as:

 *   gate-functional-dsvm-magnum-api 30 min

[openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-11 Thread Qiao,Liyong

hello all:

I will give an update on Magnum functional testing status. Functional/integration
testing is important to us: since we change/modify the Heat templates rapidly,
we need to verify that the modifications are correct, so we need to cover all
the templates Magnum has. Currently we only have k8s testing (only tested with
the atomic image); we need to add more, like swarm (WIP) and mesos (under
plan), and we may also need to support a COS image.

Lots of work needs to be done.

Regarding the functional testing time cost, we discussed this during the Tokyo
summit, and Adrian expects that we can reduce the time cost to 20 min.

I did some analysis of the functional/integration testing in the gate pipeline.
The stages are as follows. Taking k8s functional testing as an example, we run
the following test cases:

1) baymodel creation
2) bay (tls_disabled=True) creation/deletion
3) bay (tls_disabled=False) creation to test the k8s API, deleting it after
testing.


For each stage, the time cost is as follows:

 * devstack prepare: 5-6 mins
 * running devstack: 15 mins (includes downloading the atomic image)
 * 1) and 2): 15 mins
 * 3): 15 + 3 mins

In total, about 60 mins; a current example is 1h 05m 57s, see
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html
for all the timestamps.

I don't think it is possible to reduce the time to 20 mins, since devstack
setup alone will already take 20 mins.


To reduce time, I suggest creating only 1 bay per pipeline and doing various
kinds of testing on this bay; if we want to test some specific bay (for
example, with a particular network_driver etc.), we create a new pipeline.

So, I think we can delete 2), since 3) will do similar things (create/delete);
the difference is that 3) uses tls_disabled=False. What do you think?
See https://review.openstack.org/244378 for the time cost; it will reduce the
run to 45 min (48m 50s in the example).


=
For other related functional testing work:
I've done the split of functional testing per COE; we have the pipelines:

 * gate-functional-dsvm-magnum-api: 30 mins
 * gate-functional-dsvm-magnum-k8s: 60 mins

And for the swarm pipeline, the patches are done and under review now (they
work fine on the gate):

https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum][Testing] Reduce Functional testing on gate.

2015-11-11 Thread Adrian Otto
Eli,

I like this proposed approach. We did have a discussion with a few Stackers
from openstack-infra in Tokyo to express our interest in using bare metal for
gate testing. That’s still a way out, but that may be another way to speed this
up further. A third idea would be to adjust the nova virt driver in our
devstack image to use libvirt/lxc by default (instead of libvirt/kvm), which
would allow bays to be created more rapidly. This would potentially allow us
to perform repeated bay creations in the same pipeline in a reasonable
timeframe.
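
(For the third idea, a rough sketch of the devstack knob involved; this is
untested and assumes the standard local.conf layout:)

  # Append to the devstack local.conf used for the gate job (sketch only):
  cat >> local.conf <<'EOF'
  [[local|localrc]]
  VIRT_DRIVER=libvirt
  LIBVIRT_TYPE=lxc
  EOF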

Adrian

On Nov 11, 2015, at 11:02 PM, Qiao,Liyong <liyong.q...@intel.com> wrote:

hello all:

I will update some Magnum functional testing status, functional/integration 
testing
is important to us, since we change/modify the Heat template rapidly, we need to
verify the modification is correct, so we need to cover all templates Magnum 
has.
and currently we only has k8s testing(only test with atomic image), we need to
add more, like swarm(WIP), mesos(under plan), also , we may need to support COS 
image.
lots of work need to be done.

for the functional testing time costing, we discussed during the Tokyo summit,
Adrian expected that we can reduce the timing cost to 20min.

I did some analyses on the functional/integrated testing on gate pipeline.
the stages will be follows:
take k8s functional testing for example, we did follow testing case:

1) baymodel creation
2) bay(tls_disabled=True) creation/deletion
3) bay(tls_disabled=False) creation to testing k8s api and delete it after 
testing.

for each stage, the time costing is follows:

  *   devstack prepare: 5-6 mins
  *   Running devstack: 15 mins(include downloading atomic image)
  *   1) and 2) 15 mins
  *   3) 15 +3 mins

totally about 60 mins currently a example is 1h 05m 57s
see 
http://logs.openstack.org/10/243910/1/check/gate-functional-dsvm-magnum-k8s/5e61039/console.html
for all time stamps.

I don't think it is possible to reduce time to 20 mins, since devstack setup 
will take 20 mins already.

To reduce time, I suggest to only create 1 bay each pipeline and do vary kinds 
of testing
on this bay, if want to test some specify bay (for example, network_driver 
etc), create
a new pipeline .

So, I think we can delete 2), since 3) will do similar things(create/delete), 
the different is
3) use tls_disabled=False. what do you think ?
see https://review.openstack.org/244378 for the time costing, will reduce to 45 
min (48m 50s in the example.)

=
For other related functional testing works:
I 'v done the split of functional testing per COE, we have pipeline as:

  *   gate-functional-dsvm-magnum-api 30 mins
  *   gate-functional-dsvm-magnum-k8s 60 mins

And for swam pipeline, patches is done, under reviewing now(works fine on gate)
https://review.openstack.org/244391
https://review.openstack.org/226125



--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Networking Subteam Meetings

2015-11-05 Thread Daneyon Hansen (danehans)
All,

I apologize for issues with today's meeting. My calendar was updated to reflect 
daylight savings and displayed an incorrect meeting start time. This issue is 
now resolved. We will meet on 11/12 at 18:30 UTC. The meeting has been pushed 
back 30 minutes from our usual start time. This is because Docker is hosting a 
Meetup [1] to discuss the new 1.9 networking features. I encourage everyone to 
attend the Meetup.

[1] http://www.meetup.com/Docker-Online-Meetup/events/226522306/
[2] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Networking Subteam Meetings

2015-11-05 Thread Lee Calcote
It seems integration of the SocketPlane acquisition has come to fruition in 1.9…

Lee

> On Nov 5, 2015, at 1:18 PM, Daneyon Hansen (danehans) wrote:
> 
> All,
> 
> I apologize for issues with today's meeting. My calendar was updated to 
> reflect daylight savings and displayed an incorrect meeting start time. This 
> issue is now resolved. We will meet on 11/12 at 18:30 UTC. The meeting has 
> been pushed back 30 minutes from our usual start time. This is because Docker 
> is hosting a Meetup [1] to discuss the new 1.9 networking features. I 
> encourage everyone to attend the Meetup.
> 
> [1] http://www.meetup.com/Docker-Online-Meetup/events/226522306/
> [2] https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting
> 
> 
> Regards,
> Daneyon Hansen
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] [RFC] split pipeline of functional testing

2015-11-03 Thread Qiao,Liyong

hi Magnum hackers:

Currently there is a pipeline on project-config to do Magnum functional
testing [1].

At the summit, we discussed that we need to split it per COE [2]. We can
do this by adding a new pipeline for testing:

- '{pipeline}-functional-dsvm-magnum{coe}{job-suffix}':

where coe could be swarm/mesos/k8s, and then passing coe into our
post_test_hook.sh [3]. Is this a good idea?
I still have other questions that need to be addressed before splitting
functional testing per COE:
1) How can we pass the COE parameter to tox in [4]? Or should we add some new
envs like [testenv:functional-swarm], [testenv:functional-k8s], etc.? (Or is
that a stupid idea? A sketch of this option is below.)
2) There are also some common test cases; should we run them in all COEs?
(I think so.) But how do we construct the source code tree?

/functional/swarm
/functional/k8s
/functional/common ...
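
(For question 1, one possible shape; this is only a sketch, and the way the
COE name reaches the hook as an argument is a hypothetical wiring, not
existing code:)

  # In magnum/tests/contrib/post_test_hook.sh (sketch):
  coe="${1:-k8s}"          # hypothetical: COE name passed in by the job definition

  case "$coe" in
    k8s|swarm|mesos) ;;
    *) echo "unknown COE: $coe" >&2; exit 1 ;;
  esac

  # Run only that COE's functional tests, assuming per-COE tox envs such as
  # [testenv:functional-k8s] pointing at magnum/tests/functional/k8s:
  tox -e "functional-${coe}"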


[1]https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/projects.yaml#L2288
[2]https://etherpad.openstack.org/p/mitaka-magnum-functional-testing
[3]https://github.com/openstack/magnum/blob/master/magnum/tests/contrib/post_test_hook.sh#L100
[4]https://github.com/openstack/magnum/blob/master/tox.ini#L19

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] [RFC] split pipeline of functional testing

2015-11-03 Thread Kai Qiang Wu
Hi eliqiao,

1) I think many OpenStack projects have constructed multiple pipelines for
different purposes, for example to test different OS distro pipelines.
It is good to refer to them.

2) If we construct new envs for different COEs, I think that is easy to
maintain.

3) Yes, for the code restructuring, sorting the different tests is a good idea:

functional/swarm
functional/mesos
functional/k8s
functional/common




Thanks

Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Qiao,Liyong" <liyong.q...@intel.com>
To: openstack-dev@lists.openstack.org, "Qiao, Liyong"
<liyong.q...@intel.com>
Date:   03/11/2015 06:13 pm
Subject:[openstack-dev] [Magnum] [RFC] split pipeline of functional
testing



hi Magnum hackers:

Currently there is a pip line on project-config to do magnum functional
testing [1]

on summit, we've discussed that we need to split it per COE[2], we can do
this by adding new pip line to testing.
- '{pipeline}-functional-dsvm-magnum{coe}{job-suffix}':
coe could be swarm/mesos/k8s,
then passing coe in our post_test_hook.sh [3], is this a good idea?
and I still have others questions need to be addressed before split
functional testing per COE:
1 how can we pass COE parameter to tox in [4], or add some new envs like
[testenv:functional-swarm] [testenv:functional-k8s] etc?
stupid?
2 also there are some common testing cases, should we run them in all
COE ?(I think so)
but how to construct the source code tree?

/functional/swarm
/functional/k8s
/functional/common ..


[1]
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/projects.yaml#L2288

[2]https://etherpad.openstack.org/p/mitaka-magnum-functional-testing
[3]
https://github.com/openstack/magnum/blob/master/magnum/tests/contrib/post_test_hook.sh#L100

[4]https://github.com/openstack/magnum/blob/master/tox.ini#L19
--
BR, Eli(Li Yong)Qiao[attachment "liyong_qiao.vcf" deleted by Kai Qiang
Wu/China/IBM]
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]generate config files by magnum

2015-11-02 Thread Steven Dake (stdake)
The reason we don't rely on cloud-init more than we already do (sed is run via 
cloud-init) is because many modern distros like CentOS and Fedora Atomic have 
many parts of the host OS as read-only.

I prefer the structure as it is.

Regards
-steve


From: 王华 <wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, November 2, 2015 at 12:41 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum]generate config files by magnum

Hi forks,

Magnum needs to prepare config files for k8s and docker and add these services 
to systemd. Now we use "sed" to replace some parameters in config files. The 
method has a disadvantage. Magnum code  depends on a specific image. Users may 
want to create images by themselves. The config files in their images may be 
different from ours. I think magnum shouldn't depends on the config files in 
the image. These config files should be generated by magnum. What magnum needs 
should be just the installation of k8s, docker, etc. Maybe we can use 
cloud-init to install the softwares automatically, so that we don't need to 
create images and what we needs is just a image with cloud-init.

Regards,
Wang Hua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]generate config files by magnum

2015-11-02 Thread Egor Guz
Steve, actually Kubernetes is moving to a fully containerized model where you 
need only the kubelet running on the host 
(https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html) 
and all other services will come in containers (e.g. the UI, 
http://kubernetes.io/v1.0/docs/user-guide/ui.html). So we will have only etcd, 
flannel and the kubelet preinstalled, and the kubelet will start all the 
necessary containers (e.g. https://review.openstack.org/#/c/240818/).

Wanghua, we discussed concerns about the current Fedora Atomic images during the 
summit and there are some action points:
1. Fix the CoreOS template. I started working on it, but it will take some time 
because we need to coordinate it with the template refactoring 
(https://review.openstack.org/#/c/211771/)
2. Try to minimize the Fedora Atomic image (Ton will take a look at it)
3. Build an Ubuntu image/template (Ton or I will pick it up, feel free to join ;))

―
Egor

From: "Steven Dake (stdake)" <std...@cisco.com<mailto:std...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, November 2, 2015 at 06:37
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]generate config files by magnum

The reason we don't rely on cloud-init more than we already do (sed is run via 
cloud-init) is because many modern distros like CentOS and Fedora Atomic have 
many parts of the host OS as read-only.

I prefer the structure as it is.

Regards
-steve


From: 王华 <wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, November 2, 2015 at 12:41 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum]generate config files by magnum

Hi forks,

Magnum needs to prepare config files for k8s and docker and add these services 
to systemd. Now we use "sed" to replace some parameters in config files. The 
method has a disadvantage. Magnum code  depends on a specific image. Users may 
want to create images by themselves. The config files in their images may be 
different from ours. I think magnum shouldn't depends on the config files in 
the image. These config files should be generated by magnum. What magnum needs 
should be just the installation of k8s, docker, etc. Maybe we can use 
cloud-init to install the softwares automatically, so that we don't need to 
create images and what we needs is just a image with cloud-init.

Regards,
Wang Hua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] magnum on OpenStack Kilo

2015-11-02 Thread Bruce D'Amora
Hi,
Does anyone have any guidance for configuring Magnum on OpenStack Kilo? This is 
outside of devstack. I thought I had it configured, and when I log into Horizon 
I see the magnum service is started, but when I execute CLI commands such as 
magnum service-list or magnum container-list I get errors:
ERROR: publicURL endpoint for container service not found

I added an endpoint:
openstack endpoint create \
  --publicurl http://9.2.132.246:9511/v1 \
  --internalurl http://9.2.132.246:9511/v1 \
  --adminurl http://9.2.132.246:9511/v1 \
  --region RegionOne \
  magnum


but still get an error. Any ideas?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] magnum on OpenStack Kilo

2015-11-02 Thread Adrian Otto
Bruce,

That sounds like this bug to me:

https://bugs.launchpad.net/magnum/+bug/1411333

Resolved by:

https://review.openstack.org/148059

I think you need this:


keystone service-create --name=magnum \
--type=container \
--description="magnum Container Service"
keystone endpoint-create --service=magnum \
 --publicurl=http://127.0.0.1:9511/v1 \
 --internalurl=http://127.0.0.1:9511/v1 \
 --adminurl=http://127.0.0.1:9511/v1 \
 --region RegionOne
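
A quick way to confirm that both records exist afterwards (a sketch using the 
Kilo-era keystone CLI, run with admin credentials sourced):

keystone service-list | grep magnum     # should show a service of type "container"
keystone endpoint-list | grep 9511      # should show the three 9511 URLs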

Any chance you missed the first of these two? Also, be sure you are using the 
latest Magnum, either from the master branch or from the Downloads section of:

https://wiki.openstack.org/wiki/Magnum

Thanks,

Adrian


On Nov 2, 2015, at 2:25 PM, Bruce D'Amora wrote:

Does anyone have any guidance for configuring magnum on OpenStack kilo? this is 
outside of devstack. I thought I had it configured and when I log into horizon, 
I see the magnum service is started, but when I execute cli commands such as:
magnum service-list or magnum container-list I get ERRORs:
ERROR: publicURL endpoint for container service not found

I added an endpoint:
openstack endpoint create \
  --publicurl http://9.2.132.246:9511/v1 \
  --internalurl http://9.2.132.246:9511/v1 \
  --adminurl http://9.2.132.246:9511/v1 \
  --region RegionOne \
  magnum

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] magnum on OpenStack Kilo

2015-11-02 Thread Adrian Otto
Bruce,

Another suggestion for your consideration:

The region the client is using needs to match the region the endpoint is set to 
use in the service catalog. Check that OS_REGION_NAME in the environment 
running the client is set to ‘RegionOne’ rather than ‘regionOne’. That has 
snagged others in the past as well.
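
For example (a sketch; the catalog entry appears under whatever name and type 
the service was registered with):

env | grep OS_REGION_NAME
openstack catalog list    # the magnum/container entry should list RegionOne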

Adrian

On Nov 2, 2015, at 4:22 PM, Adrian Otto wrote:

Bruce,

That sounds like this bug to me:

https://bugs.launchpad.net/magnum/+bug/1411333

Resolved by:

https://review.openstack.org/148059

I think you need this:


keystone service-create --name=magnum \
--type=container \
--description="magnum Container Service"
keystone endpoint-create --service=magnum \
 --publicurl=http://127.0.0.1:9511/v1 \
 --internalurl=http://127.0.0.1:9511/v1 \
 --adminurl=http://127.0.0.1:9511/v1 \
 --region RegionOne

Any chance you missed the first of these two? Also, be sure you are using the 
latest Magnum, either from the master branch or from the Downloads section of:

https://wiki.openstack.org/wiki/Magnum

Thanks,

Adrain


On Nov 2, 2015, at 2:25 PM, Bruce D'Amora wrote:

Does anyone have any guidance for configuring magnum on OpenStack kilo? this is 
outside of devstack. I thought I had it configured and when I log into horizon, 
I see the magnum service is started, but when I execute cli commands such as:
magnum service-list or magnum container-list I get ERRORs:
ERROR: publicURL endpoint for container service not found

I added an endpoint:
openstack endpoint create \
  --publicurl http://9.2.132.246:9511/v1 \
  --internalurl http://9.2.132.246:9511/v1 \
  --adminurl http://9.2.132.246:9511/v1 \
  --region RegionOne \
  magnum

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum]generate config files by magnum

2015-11-01 Thread 王华
Hi folks,

Magnum needs to prepare config files for k8s and docker and add these
services to systemd. Now we use "sed" to replace some parameters in config
files. This method has a disadvantage: Magnum code depends on a specific
image. Users may want to create images by themselves, and the config files in
their images may be different from ours. I think Magnum shouldn't depend
on the config files in the image. These config files should be generated by
Magnum. What Magnum needs should be just the installation of k8s, docker,
etc. Maybe we can use cloud-init to install the software automatically, so
that we don't need to create images; what we need is just an image with
cloud-init.
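
As a rough illustration of the idea only (the path and values below are 
illustrative, not Magnum's actual templates, and the read-only paths on Atomic 
hosts mentioned in the replies may rule this approach out):

#!/bin/sh
# sketch: cloud-init user-data that writes a config file wholesale
# instead of sed-editing one shipped inside the image
cat > /etc/kubernetes/apiserver <<'EOF'
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
EOF
systemctl enable kube-apiserver
systemctl restart kube-apiserver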

Regards,
Wang Hua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-29 Thread Jay Lau
Hi Steve,

It is really a big loss to Magnum and thanks very much for your help in my
Magnum journey. Wish you good luck in Kolla!


On Tue, Oct 27, 2015 at 2:29 PM, 大塚元央  wrote:

> Hi Steve,
>
> I'm very sad about your stepping down from Magnum core. Without your help,
> I couldn't contribute to magnum project.
> But kolla is also fantastic project.
> I wish you the best of luck in kolla.
>
> Best regards.
> - Yuanying Otsuka
>
> On Tue, Oct 27, 2015 at 00:39 Baohua Yang  wrote:
>
>> Really a pity!
>>
>> We need more resources on the container part in OpenStack indeed, as so
>> many new projects are just initiated.
>>
>> Community is not only about putting technologies together, but also
>> putting technical guys together.
>>
>> Happy to see so many guys in the Tokyo Summit this afternoon.
>>
>> Let's take care of the opportunities to make good communications with
>> each other.
>>
>> On Mon, Oct 26, 2015 at 8:17 AM, Steven Dake (stdake) wrote:
>>
>>> Hey folks,
>>>
>>> It is with sadness that I find myself under the situation to have to
>>> write this message.  I have the privilege of being involved in two of the
>>> most successful and growing projects (Magnum, Kolla) in OpenStack.  I chose
>>> getting involved in two major initiatives on purpose, to see if I could do
>>> the job; to see if  I could deliver two major initiatives at the same
>>> time.  I also wanted it to be a length of time that was significant – 1+
>>> year.  I found indeed I was able to deliver both Magnum and Kolla, however,
>>> the impact on my personal life has not been ideal.
>>>
>>> The Magnum engineering team is truly a world class example of how an
>>> Open Source project should be constructed and organized.  I hope some young
>>> academic writes a case study on it some day but until then, my gratitude to
>>> the Magnum core reviewer team is warranted by the level of  their sheer
>>> commitment.
>>>
>>> I am officially focusing all of my energy on Kolla going forward.  The
>>> Kolla core team elected me as PTL (or more accurately didn’t elect anyone
>>> else;) and I really want to be effective for them, especially in what I
>>> feel is Kolla’s most critical phase of growth.
>>>
>>> I will continue to fight  for engineering resources for Magnum
>>> internally in Cisco.  Some of these have born fruit already including the
>>> Heat resources, the Horizon plugin, and of course the Networking plugin
>>> system.  I will also continue to support Magnum from a resources POV where
>>> I can do so (like the fedora image storage for example).  What I won’t be
>>> doing is reviewing Magnum code (serving as a gate), or likely making much
>>> technical contribution to Magnum in the future.  On the plus side I’ve
>>> replaced myself with many many more engineers from Cisco who should be much
>>> more productive combined then I could have been alone ;)
>>>
>>> Just to be clear, I am not abandoning Magnum because I dislike the
>>> people or the technology.  I think the people are fantastic! And the
>>> technology – well I helped design the entire architecture!  I am letting
>>> Magnum grow up without me as I have other children that need more direct
>>> attention.  I think this viewpoint shows trust in the core reviewer team,
>>> but feel free to make your own judgements ;)
>>>
>>> Finally I want to thank Perry Myers for influencing me to excel at
>>> multiple disciplines at once.  Without Perry as a role model, Magnum may
>>> have never happened (or would certainly be much different then it is
>>> today). Being a solid hybrid engineer has a long ramp up time and is really
>>> difficult, but also very rewarding.  The community has Perry to blame for
>>> that ;)
>>>
>>> Regards
>>> -steve
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-27 Thread 大塚元央
Hi Steve,

I'm very sad about your stepping down from Magnum core. Without your help,
I couldn't contribute to the magnum project.
But kolla is also a fantastic project.
I wish you the best of luck in kolla.

Best regards.
- Yuanying Otsuka

On Tue, Oct 27, 2015 at 00:39 Baohua Yang  wrote:

> Really a pity!
>
> We need more resources on the container part in OpenStack indeed, as so
> many new projects are just initiated.
>
> Community is not only about putting technologies together, but also
> putting technical guys together.
>
> Happy to see so many guys in the Tokyo Summit this afternoon.
>
> Let's take care of the opportunities to make good communications with each
> other.
>
> On Mon, Oct 26, 2015 at 8:17 AM, Steven Dake (stdake) wrote:
>
>> Hey folks,
>>
>> It is with sadness that I find myself under the situation to have to
>> write this message.  I have the privilege of being involved in two of the
>> most successful and growing projects (Magnum, Kolla) in OpenStack.  I chose
>> getting involved in two major initiatives on purpose, to see if I could do
>> the job; to see if  I could deliver two major initiatives at the same
>> time.  I also wanted it to be a length of time that was significant – 1+
>> year.  I found indeed I was able to deliver both Magnum and Kolla, however,
>> the impact on my personal life has not been ideal.
>>
>> The Magnum engineering team is truly a world class example of how an Open
>> Source project should be constructed and organized.  I hope some young
>> academic writes a case study on it some day but until then, my gratitude to
>> the Magnum core reviewer team is warranted by the level of  their sheer
>> commitment.
>>
>> I am officially focusing all of my energy on Kolla going forward.  The
>> Kolla core team elected me as PTL (or more accurately didn’t elect anyone
>> else;) and I really want to be effective for them, especially in what I
>> feel is Kolla’s most critical phase of growth.
>>
>> I will continue to fight  for engineering resources for Magnum internally
>> in Cisco.  Some of these have born fruit already including the Heat
>> resources, the Horizon plugin, and of course the Networking plugin system.
>> I will also continue to support Magnum from a resources POV where I can do
>> so (like the fedora image storage for example).  What I won’t be doing is
>> reviewing Magnum code (serving as a gate), or likely making much technical
>> contribution to Magnum in the future.  On the plus side I’ve replaced
>> myself with many many more engineers from Cisco who should be much more
>> productive combined then I could have been alone ;)
>>
>> Just to be clear, I am not abandoning Magnum because I dislike the people
>> or the technology.  I think the people are fantastic! And the technology –
>> well I helped design the entire architecture!  I am letting Magnum grow up
>> without me as I have other children that need more direct attention.  I
>> think this viewpoint shows trust in the core reviewer team, but feel free
>> to make your own judgements ;)
>>
>> Finally I want to thank Perry Myers for influencing me to excel at
>> multiple disciplines at once.  Without Perry as a role model, Magnum may
>> have never happened (or would certainly be much different then it is
>> today). Being a solid hybrid engineer has a long ramp up time and is really
>> difficult, but also very rewarding.  The community has Perry to blame for
>> that ;)
>>
>> Regards
>> -steve
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-26 Thread Kai Qiang Wu
Hi Stdake,

You really did a fantastic job on Magnum, and your bright ideas and
thoughts helped Magnum grow. It was sad at first to hear about your stepping
down from Magnum core, but after reading your message, I think you are
following your heart. I wish you new success in more areas (including Kolla
and many new coming projects :).



I want to thank you for your help to me while in Magnum.  Thanks very
much :)


Wishing you a bigger and brighter future! Looking forward to receiving any
thoughts of yours on Magnum.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   "Steven Dake (stdake)" <std...@cisco.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   26/10/2015 09:22 am
Subject:[openstack-dev] [magnum][kolla] Stepping down as a Magnum core
reviewer



Hey folks,

It is with sadness that I find myself under the situation to have to write
this message.  I have the privilege of being involved in two of the most
successful and growing projects (Magnum, Kolla) in OpenStack.  I chose
getting involved in two major initiatives on purpose, to see if I could do
the job; to see if  I could deliver two major initiatives at the same time.
I also wanted it to be a length of time that was significant - 1+ year.  I
found indeed I was able to deliver both Magnum and Kolla, however, the
impact on my personal life has not been ideal.

The Magnum engineering team is truly a world class example of how an Open
Source project should be constructed and organized.  I hope some young
academic writes a case study on it some day but until then, my gratitude to
the Magnum core reviewer team is warranted by the level of  their sheer
commitment.

I am officially focusing all of my energy on Kolla going forward.  The
Kolla core team elected me as PTL (or more accurately didn’t elect anyone
else;) and I really want to be effective for them, especially in what I
feel is Kolla’s most critical phase of growth.

I will continue to fight  for engineering resources for Magnum internally
in Cisco.  Some of these have born fruit already including the Heat
resources, the Horizon plugin, and of course the Networking plugin system.
I will also continue to support Magnum from a resources POV where I can do
so (like the fedora image storage for example).  What I won’t be doing is
reviewing Magnum code (serving as a gate), or likely making much technical
contribution to Magnum in the future.  On the plus side I’ve replaced
myself with many many more engineers from Cisco who should be much more
productive combined then I could have been alone ;)

Just to be clear, I am not abandoning Magnum because I dislike the people
or the technology.  I think the people are fantastic! And the technology -
well I helped design the entire architecture!  I am letting Magnum grow up
without me as I have other children that need more direct attention.  I
think this viewpoint shows trust in the core reviewer team, but feel free
to make your own judgements ;)

Finally I want to thank Perry Myers for influencing me to excel at multiple
disciplines at once.  Without Perry as a role model, Magnum may have never
happened (or would certainly be much different then it is today). Being a
solid hybrid engineer has a long ramp up time and is really difficult, but
also very rewarding.  The community has Perry to blame for that ;)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-26 Thread Ton Ngo
Hi Steve,
 It will certainly be a loss to not have your constant presence for
guidance.  Your sensible approach to solving hard problems has many times
given clarity to the solution.  I am sure many in the team take you as a
role model, so I think from time to time we will likely approach you for
ideas.
Ton,



From:   "Steven Dake (stdake)" <std...@cisco.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   10/25/2015 05:22 PM
Subject:[openstack-dev] [magnum][kolla] Stepping down as a Magnum core
reviewer



Hey folks,

It is with sadness that I find myself under the situation to have to write
this message.  I have the privilege of being involved in two of the most
successful and growing projects (Magnum, Kolla) in OpenStack.  I chose
getting involved in two major initiatives on purpose, to see if I could do
the job; to see if  I could deliver two major initiatives at the same time.
I also wanted it to be a length of time that was significant – 1+ year.  I
found indeed I was able to deliver both Magnum and Kolla, however, the
impact on my personal life has not been ideal.

The Magnum engineering team is truly a world class example of how an Open
Source project should be constructed and organized.  I hope some young
academic writes a case study on it some day but until then, my gratitude to
the Magnum core reviewer team is warranted by the level of  their sheer
commitment.

I am officially focusing all of my energy on Kolla going forward.  The
Kolla core team elected me as PTL (or more accurately didn’t elect anyone
else;) and I really want to be effective for them, especially in what I
feel is Kolla’s most critical phase of growth.

I will continue to fight  for engineering resources for Magnum internally
in Cisco.  Some of these have born fruit already including the Heat
resources, the Horizon plugin, and of course the Networking plugin system.
I will also continue to support Magnum from a resources POV where I can do
so (like the fedora image storage for example).  What I won’t be doing is
reviewing Magnum code (serving as a gate), or likely making much technical
contribution to Magnum in the future.  On the plus side I’ve replaced
myself with many many more engineers from Cisco who should be much more
productive combined then I could have been alone ;)

Just to be clear, I am not abandoning Magnum because I dislike the people
or the technology.  I think the people are fantastic! And the technology –
well I helped design the entire architecture!  I am letting Magnum grow up
without me as I have other children that need more direct attention.  I
think this viewpoint shows trust in the core reviewer team, but feel free
to make your own judgements ;)

Finally I want to thank Perry Myers for influencing me to excel at multiple
disciplines at once.  Without Perry as a role model, Magnum may have never
happened (or would certainly be much different then it is today). Being a
solid hybrid engineer has a long ramp up time and is really difficult, but
also very rewarding.  The community has Perry to blame for that ;)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-26 Thread Baohua Yang
Really a pity!

We need more resources on the container part in OpenStack indeed, as so
many new projects are just initiated.

Community is not only about putting technologies together, but also putting
technical guys together.

Happy to see so many guys in the Tokyo Summit this afternoon.

Let's take care of the opportunities to make good communications with each
other.

On Mon, Oct 26, 2015 at 8:17 AM, Steven Dake (stdake) wrote:

> Hey folks,
>
> It is with sadness that I find myself under the situation to have to write
> this message.  I have the privilege of being involved in two of the most
> successful and growing projects (Magnum, Kolla) in OpenStack.  I chose
> getting involved in two major initiatives on purpose, to see if I could do
> the job; to see if  I could deliver two major initiatives at the same
> time.  I also wanted it to be a length of time that was significant – 1+
> year.  I found indeed I was able to deliver both Magnum and Kolla, however,
> the impact on my personal life has not been ideal.
>
> The Magnum engineering team is truly a world class example of how an Open
> Source project should be constructed and organized.  I hope some young
> academic writes a case study on it some day but until then, my gratitude to
> the Magnum core reviewer team is warranted by the level of  their sheer
> commitment.
>
> I am officially focusing all of my energy on Kolla going forward.  The
> Kolla core team elected me as PTL (or more accurately didn’t elect anyone
> else;) and I really want to be effective for them, especially in what I
> feel is Kolla’s most critical phase of growth.
>
> I will continue to fight  for engineering resources for Magnum internally
> in Cisco.  Some of these have born fruit already including the Heat
> resources, the Horizon plugin, and of course the Networking plugin system.
> I will also continue to support Magnum from a resources POV where I can do
> so (like the fedora image storage for example).  What I won’t be doing is
> reviewing Magnum code (serving as a gate), or likely making much technical
> contribution to Magnum in the future.  On the plus side I’ve replaced
> myself with many many more engineers from Cisco who should be much more
> productive combined then I could have been alone ;)
>
> Just to be clear, I am not abandoning Magnum because I dislike the people
> or the technology.  I think the people are fantastic! And the technology –
> well I helped design the entire architecture!  I am letting Magnum grow up
> without me as I have other children that need more direct attention.  I
> think this viewpoint shows trust in the core reviewer team, but feel free
> to make your own judgements ;)
>
> Finally I want to thank Perry Myers for influencing me to excel at
> multiple disciplines at once.  Without Perry as a role model, Magnum may
> have never happened (or would certainly be much different then it is
> today). Being a solid hybrid engineer has a long ramp up time and is really
> difficult, but also very rewarding.  The community has Perry to blame for
> that ;)
>
> Regards
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best wishes!
Baohua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-25 Thread Steven Dake (stdake)
Hey folks,

It is with sadness that I find myself under the situation to have to write this 
message.  I have the privilege of being involved in two of the most successful 
and growing projects (Magnum, Kolla) in OpenStack.  I chose getting involved in 
two major initiatives on purpose, to see if I could do the job; to see if  I 
could deliver two major initiatives at the same time.  I also wanted it to be a 
length of time that was significant - 1+ year.  I found indeed I was able to 
deliver both Magnum and Kolla, however, the impact on my personal life has not 
been ideal.

The Magnum engineering team is truly a world class example of how an Open 
Source project should be constructed and organized.  I hope some young academic 
writes a case study on it some day but until then, my gratitude to the Magnum 
core reviewer team is warranted by the level of  their sheer commitment.

I am officially focusing all of my energy on Kolla going forward.  The Kolla 
core team elected me as PTL (or more accurately didn't elect anyone else;) and 
I really want to be effective for them, especially in what I feel is Kolla's 
most critical phase of growth.

I will continue to fight  for engineering resources for Magnum internally in 
Cisco.  Some of these have borne fruit already, including the Heat resources, the 
Horizon plugin, and of course the Networking plugin system.  I will also 
continue to support Magnum from a resources POV where I can do so (like the 
fedora image storage for example).  What I won't be doing is reviewing Magnum 
code (serving as a gate), or likely making much technical contribution to 
Magnum in the future.  On the plus side I've replaced myself with many many 
more engineers from Cisco who should be much more productive combined than I 
could have been alone ;)

Just to be clear, I am not abandoning Magnum because I dislike the people or 
the technology.  I think the people are fantastic! And the technology - well I 
helped design the entire architecture!  I am letting Magnum grow up without me 
as I have other children that need more direct attention.  I think this 
viewpoint shows trust in the core reviewer team, but feel free to make your own 
judgements ;)

Finally I want to thank Perry Myers for influencing me to excel at multiple 
disciplines at once.  Without Perry as a role model, Magnum may have never 
happened (or would certainly be much different than it is today). Being a solid 
hybrid engineer has a long ramp up time and is really difficult, but also very 
rewarding.  The community has Perry to blame for that ;)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] k8s api tls_enabled mode testing

2015-10-25 Thread OTSUKA , Motohiro
Hi, Eli Qiao

If the CA or client cert is wrong, I think the client will get an error before the 
`client hello`.
I tested a broken CA cert and client cert in my local environment.
See the logs below.

yuanying@devstack:~/temp$ curl https://192.168.19.92:6443 --tlsv1.0 -v  --key 
./client.key --cert ./client.crt --cacert ./ca.crt
* Rebuilt URL to: https://192.168.19.92:6443/
* Hostname was NOT found in DNS cache
*   Trying 192.168.19.92...
* Connected to 192.168.19.92 (192.168.19.92) port 6443 (#0)
* unable to use client certificate (no key found or wrong pass phrase?)
* Closing connection 0
curl: (58) unable to use client certificate (no key found or wrong pass phrase?)
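
To check the certificate material itself (a sketch, reusing the file names from 
the curl command above):

# the two modulus digests should match if the client cert and key belong together
openssl x509 -noout -modulus -in client.crt | openssl md5
openssl rsa -noout -modulus -in client.key | openssl md5
# and the client cert should verify against the CA
openssl verify -CAfile ca.crt client.crt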



--  
OTSUKA, Motohiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Wednesday, October 21, 2015 at 20:34, Qiao, Liyong wrote:

> Hello,
> I need your help on k8s api tls_enabled mode.
> Here’s my patch https://review.openstack.org/232421
>   
> It is always failed on gate, but it works in my setup.
> Debug more I found that the ca cert return api return length with difference:
>   
> On my setup:
> 10.238.157.49 - - [21/Oct/2015 19:16:17] "POST /v1/certificates HTTP/1.1" 201 
> 3360
> …
> 10.238.157.49 - - [21/Oct/2015 19:16:17] "GET 
> /v1/certificates/d4bf6135-a3d0-4980-a785-e3f2900ca315 HTTP/1.1" 200 1357
>   
> On gate:
>   
> 127.0.0.1 - - [21/Oct/2015 10:59:40] "POST /v1/certificates HTTP/1.1" 201 3352
> 127.0.0.1 - - [21/Oct/2015 10:59:40] "GET 
> /v1/certificates/a9aa1bbd-d624-4791-a4b9-e7a076c8bf58 HTTP/1.1" 200 1349
>   
> Misses 8 Bit.
>   
> I also print out the cert file content, but the length of both on gate and my 
> setup are same.
> But failed on gate due to SSL exception.
> Does anyone know what will be the root cause?
>   
>   
>   
> BR, Eli(Li Yong)Qiao
>   
>  
>  
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
>  


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-25 Thread Adrian Otto
Steve,

Thanks so much for your contributions to Magnum. You have played a critically 
important role for us, and we are saddened to hear the news of your departure. 
You are welcome to return at any time.

Cheers,

Adrian

On Oct 26, 2015, at 9:17 AM, Steven Dake (stdake) wrote:

Hey folks,

It is with sadness that I find myself under the situation to have to write this 
message.  I have the privilege of being involved in two of the most successful 
and growing projects (Magnum, Kolla) in OpenStack.  I chose getting involved in 
two major initiatives on purpose, to see if I could do the job; to see if  I 
could deliver two major initiatives at the same time.  I also wanted it to be a 
length of time that was significant – 1+ year.  I found indeed I was able to 
deliver both Magnum and Kolla, however, the impact on my personal life has not 
been ideal.

The Magnum engineering team is truly a world class example of how an Open 
Source project should be constructed and organized.  I hope some young academic 
writes a case study on it some day but until then, my gratitude to the Magnum 
core reviewer team is warranted by the level of  their sheer commitment.

I am officially focusing all of my energy on Kolla going forward.  The Kolla 
core team elected me as PTL (or more accurately didn’t elect anyone else;) and 
I really want to be effective for them, especially in what I feel is Kolla’s 
most critical phase of growth.

I will continue to fight  for engineering resources for Magnum internally in 
Cisco.  Some of these have born fruit already including the Heat resources, the 
Horizon plugin, and of course the Networking plugin system.  I will also 
continue to support Magnum from a resources POV where I can do so (like the 
fedora image storage for example).  What I won’t be doing is reviewing Magnum 
code (serving as a gate), or likely making much technical contribution to 
Magnum in the future.  On the plus side I’ve replaced myself with many many 
more engineers from Cisco who should be much more productive combined then I 
could have been alone ;)

Just to be clear, I am not abandoning Magnum because I dislike the people or 
the technology.  I think the people are fantastic! And the technology – well I 
helped design the entire architecture!  I am letting Magnum grow up without me 
as I have other children that need more direct attention.  I think this 
viewpoint shows trust in the core reviewer team, but feel free to make your own 
judgements ;)

Finally I want to thank Perry Myers for influencing me to excel at multiple 
disciplines at once.  Without Perry as a role model, Magnum may have never 
happened (or would certainly be much different then it is today). Being a solid 
hybrid engineer has a long ramp up time and is really difficult, but also very 
rewarding.  The community has Perry to blame for that ;)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

2015-10-25 Thread Hongbin Lu
Hi Steve,

Thanks for your contributions. Personally, I would like to thank you for your 
mentorship and guidance when I was new to Magnum. It helped me a lot to pick up 
everything. Best wishes for your adventure in Kolla.

Best regards,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: October-25-15 8:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum][kolla] Stepping down as a Magnum core reviewer

Hey folks,

It is with sadness that I find myself under the situation to have to write this 
message.  I have the privilege of being involved in two of the most successful 
and growing projects (Magnum, Kolla) in OpenStack.  I chose getting involved in 
two major initiatives on purpose, to see if I could do the job; to see if  I 
could deliver two major initiatives at the same time.  I also wanted it to be a 
length of time that was significant - 1+ year.  I found indeed I was able to 
deliver both Magnum and Kolla, however, the impact on my personal life has not 
been ideal.

The Magnum engineering team is truly a world class example of how an Open 
Source project should be constructed and organized.  I hope some young academic 
writes a case study on it some day but until then, my gratitude to the Magnum 
core reviewer team is warranted by the level of  their sheer commitment.

I am officially focusing all of my energy on Kolla going forward.  The Kolla 
core team elected me as PTL (or more accurately didn't elect anyone else;) and 
I really want to be effective for them, especially in what I feel is Kolla's 
most critical phase of growth.

I will continue to fight  for engineering resources for Magnum internally in 
Cisco.  Some of these have born fruit already including the Heat resources, the 
Horizon plugin, and of course the Networking plugin system.  I will also 
continue to support Magnum from a resources POV where I can do so (like the 
fedora image storage for example).  What I won't be doing is reviewing Magnum 
code (serving as a gate), or likely making much technical contribution to 
Magnum in the future.  On the plus side I've replaced myself with many many 
more engineers from Cisco who should be much more productive combined then I 
could have been alone ;)

Just to be clear, I am not abandoning Magnum because I dislike the people or 
the technology.  I think the people are fantastic! And the technology - well I 
helped design the entire architecture!  I am letting Magnum grow up without me 
as I have other children that need more direct attention.  I think this 
viewpoint shows trust in the core reviewer team, but feel free to make your own 
judgements ;)

Finally I want to thank Perry Myers for influencing me to excel at multiple 
disciplines at once.  Without Perry as a role model, Magnum may have never 
happened (or would certainly be much different then it is today). Being a solid 
hybrid engineer has a long ramp up time and is really difficult, but also very 
rewarding.  The community has Perry to blame for that ;)

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Coe components status

2015-10-21 Thread Egor Guz
Vikas,

Could you clarify what you mean by 'status'? I don't see this command in 
kubectl, so I assume it is get or describe?
Also, for Docker, is it info, inspect or stats? We can get app/container details 
through the Marathon API in Mesos, but it very much depends on what information 
we are looking for ;)

My two cents: I think we should implement/find common ground between 'kubectl 
describe', 'docker inspect' and 'curl http://${MASTER_IP}:8080/v2/tasks' first. 
These commands are very useful for troubleshooting.

About a 'magnum container' command for all COEs, we should definitely discuss 
this topic during the summit. But the challenge here is that the Marathon/Mesos 
app/container definition is very different from the Kubernetes model.

—
Egor

From: Vikas Choudhary 
<choudharyvika...@gmail.com<mailto:choudharyvika...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, October 20, 2015 at 20:56
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Magnum] Coe components status

@ Eli ,

I will look into how to support this feature for other COEs also(mesos and 
swarm). But anyways Magnum's goal is to provide users *atleast* what other coes 
are providing (if not something extra). All coes dont have common features, so 
we cant be very strict on providing common interface apis for all coes. For 
example "magnum container" commands work only with swarm not k8s or mesos.
It will not be justified if k8s is providing a way to monitor at more granular 
level but magnum will not allow user to use it just beacuse other coes does not 
provide this feature.

Agree that it will be nice if could support this feature for all. I will prefer 
to start with k8s first and if similar feature is supported by mesos and swarm 
also, incrementally will implement that also.

Regards
Vikas Choudhary

On Wed, Oct 21, 2015 at 6:50 AM, Qiao,Liyong 
<liyong.q...@intel.com<mailto:liyong.q...@intel.com>> wrote:
hi Vikas,
thanks for propose this changes, I wonder if you can show some examples for 
other coes we currently supported:
swarm, mesos ?

if we propose a public api like you proposed, we'd better to support all coes 
instead of coe specific.

thanks
Eli.


On 2015年10月20日 18:14, Vikas Choudhary wrote:
Hi Team,

I would appreciate any opinion/concern regarding "coe-component-status" feature 
implementation [1].

For example in k8s, using API api/v1/namespaces/{namespace}/componentstatuses, 
status of each k8s component can be queried. My approach would be to provide a 
command in magnum like "magnum coe-component-status" leveraging coe provided 
rest api and result will be shown to user.

[1] https://blueprints.launchpad.net/magnum/+spec/coe-component-status



-Vikas Choudhary



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] k8s api tls_enabled mode testing

2015-10-21 Thread Qiao, Liyong
Hello,
I need your help on k8s api tls_enabled mode.
Here’s my patch https://review.openstack.org/232421

It is always failed on gate, but it works in my setup.
Debug more I found that the ca cert return api return length with difference:

On my setup:
10.238.157.49 - - [21/Oct/2015 19:16:17] "POST /v1/certificates HTTP/1.1" 201 
3360
…
10.238.157.49 - - [21/Oct/2015 19:16:17] "GET 
/v1/certificates/d4bf6135-a3d0-4980-a785-e3f2900ca315 HTTP/1.1" 200 1357

On gate:

127.0.0.1 - - [21/Oct/2015 10:59:40] "POST /v1/certificates HTTP/1.1" 201 3352

127.0.0.1 - - [21/Oct/2015 10:59:40] "GET 
/v1/certificates/a9aa1bbd-d624-4791-a4b9-e7a076c8bf58 HTTP/1.1" 200 1349



The gate responses are 8 bytes shorter.



I also printed out the cert file content, and the lengths on the gate and in my 
setup are the same.

But it fails on the gate due to an SSL exception.

Does anyone know what the root cause might be?
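
One way to see exactly where the extra 8 bytes live would be to dump both 
response bodies byte-for-byte (a sketch; <cert-uuid> stands in for the UUID 
logged above):

curl -s http://127.0.0.1:9511/v1/certificates/<cert-uuid> -o cert.json
wc -c cert.json          # compare the sizes from both environments
od -c cert.json | tail   # look for trailing newlines or \r\n differences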




BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Mitaka Design Summit Sessions in Tokyo

2015-10-21 Thread Adrian Otto
Magnum Team,

For those of you attending the Mitaka Design Summit in Tokyo next week, here 
are the sessions that I respectfully request you attend:

2015-10-28, 11:15-11:55 http://sched.co/4Qck Heat Template Refactoring
2015-10-28, 12:05-12:45 http://sched.co/4Qdc Magnum Scalability (Work Session)
2015-10-28, 14:00-14:40 http://sched.co/4Qaf Magnum Security (Work Session)
2015-10-28, 16:40-17:20 http://sched.co/49xE OpenStack Magnum - 
Containers-as-a-Service (Summit Session)
2015-10-29, 09:00-09:40 http://sched.co/4QdW Magnum: Storage Features (Fishbowl)
2015-10-29, 09:50-10:30 http://sched.co/4Qel Networking: Past, Present, and 
Future (Fishbowl)
2015-10-29, 11:00-11:40 http://sched.co/4Qeo Magnum Auto-scaling (Workroom)
2015-10-29, 11:50-12:30 http://sched.co/4QeX Functional Testing (Work Session)
2015-10-29, 13:50-14:30 http://sched.co/4Qem Magnum: Bare Metal Bays (Fishbowl)
2015-10-29, 14:40-15:20 http://sched.co/4Qcw Magnum: Getting Started for 
Developers and Sysadmins (Fishbowl)
2015-10-29, 15:30-16:10 http://sched.co/4Qdj Magnum: Scope of Magnum (Fishbowl)
2015-10-30, 09:00-12:30 http://sched.co/4Qdh Magnum contributors meetup
2015-10-30, 14:00-17:30 http://sched.co/4Qbc Magnum contributors meetup

Note that the contributors meetups have a shared open agenda. Find the etherpad 
linked to the sched entry. Please put your topics in the list, and we will sort 
them as a group together so we can cover anything that was not already 
addressed in the previous sessions.

Also note that you can star these sessions in Sched, and then subscribe to the 
iCal feed, labeled “Mobile App + iCal” so all of your selected sessions show in 
your calendar if you subscribe to that iCal feed.

I have also published this same information on our Wiki:
https://wiki.openstack.org/wiki/Magnum/Summit

See you at the summit!

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Coe components status

2015-10-20 Thread Vikas Choudhary
Hi Team,

I would appreciate any opinion/concern regarding "coe-component-status"
feature implementation [1].

For example, in k8s, using the API api/v1/namespaces/{namespace}/componentstatuses,
the status of each k8s component can be queried. My approach would be to
provide a command in Magnum like "magnum coe-component-status", leveraging the
COE-provided REST API, with the result shown to the user (see the sketch after
the link below).

[1] https://blueprints.launchpad.net/magnum/+spec/coe-component-status
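
For reference, what the proposed command would surface can already be queried 
natively on the master (a sketch; the cluster-scoped curl path is shown, and 
${MASTER_IP}:8080 is illustrative for an insecure API endpoint):

kubectl get componentstatuses    # health of scheduler, controller-manager, etcd
curl -s http://${MASTER_IP}:8080/api/v1/componentstatuses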



-Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Coe components status

2015-10-20 Thread Qiao,Liyong

hi Vikas,
thanks for proposing this change. I wonder if you can show some examples 
for the other COEs we currently support:

swarm, mesos?

if we propose a public API like the one you proposed, we had better support all 
COEs instead of being COE-specific.


thanks
Eli.

On October 20, 2015 18:14, Vikas Choudhary wrote:

Hi Team,

I would appreciate any opinion/concern regarding 
"coe-component-status" feature implementation [1].


For example in k8s, using 
API api/v1/namespaces/{namespace}/componentstatuses, status of each k8s 
component can be queried. My approach would be to provide a command in 
magnum like "magnum coe-component-status" leveraging coe provided rest 
api and result will be shown to user.


[1] https://blueprints.launchpad.net/magnum/+spec/coe-component-status



-Vikas Choudhary


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Coe components status

2015-10-20 Thread Vikas Choudhary
@ Eli ,

I will look into how to support this feature for other COEs also(mesos and
swarm). But anyways Magnum's goal is to provide users *atleast* what other
coes are providing (if not something extra). All coes dont have common
features, so we cant be very strict on providing common interface apis for
all coes. For example "magnum container" commands work only with swarm not
k8s or mesos.
It will not be justified if k8s is providing a way to monitor at more
granular level but magnum will not allow user to use it just beacuse other
coes does not provide this feature.

Agree that it will be nice if could support this feature for all. I will
prefer to start with k8s first and if similar feature is supported by mesos
and swarm also, incrementally will implement that also.

Regards
Vikas Choudhary

On Wed, Oct 21, 2015 at 6:50 AM, Qiao,Liyong  wrote:

> hi Vikas,
> thanks for propose this changes, I wonder if you can show some examples
> for other coes we currently supported:
> swarm, mesos ?
>
> if we propose a public api like you proposed, we'd better to support all
> coes instead of coe specific.
>
> thanks
> Eli.
>
>
> On October 20, 2015 18:14, Vikas Choudhary wrote:
>
> Hi Team,
>
> I would appreciate any opinion/concern regarding "coe-component-status"
> feature implementation [1].
>
> For example in k8s, using API api/v1/namespaces/{namespace}/componentstatuses,
> status of each k8s component can be queried. My approach would be to
> provide a command in magnum like "magnum
> coe-component-status" leveraging coe provided rest api and result will be
> shown to user.
>
> [1] https://blueprints.launchpad.net/magnum/+spec/coe-component-status
>
>
>
> -Vikas Choudhary
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> BR, Eli(Li Yong)Qiao
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Coe components status

2015-10-20 Thread Bharath Thiruveedula
+1 to Vikas.

We have the monitoring framework only for the Docker Swarm COE at present, and
we are pushing other COE drivers in the future. So it is better to have
component status for one COE first and add support for the other COEs later.
Correct me if I am wrong.

Regards
Bharath T

On Wed, Oct 21, 2015 at 9:26 AM, Vikas Choudhary  wrote:

> @ Eli ,
>
> I will look into how to support this feature for other COEs also(mesos and
> swarm). But anyways Magnum's goal is to provide users *atleast* what other
> coes are providing (if not something extra). All coes dont have common
> features, so we cant be very strict on providing common interface apis for
> all coes. For example "magnum container" commands work only with swarm not
> k8s or mesos.
> It will not be justified if k8s is providing a way to monitor at more
> granular level but magnum will not allow user to use it just beacuse other
> coes does not provide this feature.
>
> Agree that it will be nice if could support this feature for all. I will
> prefer to start with k8s first and if similar feature is supported by mesos
> and swarm also, incrementally will implement that also.
>
> Regards
> Vikas Choudhary
>
> On Wed, Oct 21, 2015 at 6:50 AM, Qiao,Liyong 
> wrote:
>
>> hi Vikas,
>> thanks for propose this changes, I wonder if you can show some examples
>> for other coes we currently supported:
>> swarm, mesos ?
>>
>> if we propose a public api like you proposed, we'd better to support all
>> coes instead of coe specific.
>>
>> thanks
>> Eli.
>>
>>
>> On 2015年10月20日 18:14, Vikas Choudhary wrote:
>>
>> Hi Team,
>>
>> I would appreciate any opinion/concern regarding "coe-component-status"
>> feature implementation [1].
>>
>> For example in k8s, using API api/v1/namespaces/{namespace}
>> /componentstatuses, status of each k8s component can be queried. My
>> approach would be to provide a command in magnum like "magnum
>> coe-component-status" leveraging coe provided rest api and result will be
>> shown to user.
>>
>> [1] https://blueprints.launchpad.net/magnum/+spec/coe-component-status
>>
>>
>>
>> -Vikas Choudhary
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> --
>> BR, Eli(Li Yong)Qiao
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Failed to create swarm bay with fedora-21-atomic-5-d181.qcow2

2015-10-18 Thread Mars Ma
Hi Ton,

"docker --help" command works ok, but
[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ sudo
/usr/bin/docker -d -H fd:// -H tcp://0.0.0.0:2375 --tlsverify
--tlscacert="/etc/docker/ca.crt" --tlskey="/etc/docker/server.key"
--tlscert="/etc/docker/server.crt" --selinux-enabled --storage-driver
devicemapper --storage-opt dm.fs=xfs --storage-opt
dm.datadev=/dev/mapper/atomicos-docker--data --storage-opt
dm.metadatadev=/dev/mapper/atomicos-docker--meta
Warning: '-d' is deprecated, it will be removed soon. See usage.
WARN[] please use 'docker daemon' instead.
ERRO[] ServeAPI error: No sockets found
WARN[0001] --storage-opt dm.thinpooldev is preferred over --storage-opt
dm.datadev or dm.metadatadev

[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ sudo
/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 --tlsverify
--tlscacert="/etc/docker/ca.crt" --tlskey="/etc/docker/server.key"
--tlscert="/etc/docker/server.crt" --selinux-enabled --storage-driver
devicemapper --storage-opt dm.fs=xfs --storage-opt
dm.thinpooldev=/dev/mapper/atomicos-docker--data --storage-opt
dm.metadatadev=/dev/mapper/atomicos-docker--meta
ERRO[] ServeAPI error: No sockets found
FATA[0001] Error starting daemon: error initializing graphdriver: EOF

[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ sudo
systemctl status docker.socket
● docker.socket - Docker Socket for the API
   Loaded: loaded (/etc/systemd/system/docker.socket; enabled)
   Active: active (listening) since Fri 2015-10-16 05:55:04 UTC; 2 days ago
   Listen: /var/run/docker.sock (Stream)

Oct 16 05:55:04
sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7.novalocal systemd[1]:
Listening on Docker Socket for the API.

This is strange; even after logging into the swarm node, I cannot start the docker
service manually either.
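
For reference, a few more things I plan to check on that node (the device paths
below are the ones from the daemon command line above):

    # whatever the docker unit logged beyond the console output
    sudo journalctl -u docker.service -n 50 --no-pager
    # confirm the devicemapper volumes the daemon points at actually exist
    sudo lvs
    ls -l /dev/mapper/atomicos-docker--data /dev/mapper/atomicos-docker--meta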


Thanks & Best regards !
Mars Ma

On Mon, Oct 19, 2015 at 1:11 PM, Mars Ma  wrote:

> Hi Hongbin,
>
> I can ssh into the swarm node, and curl cmd works ok:
> [fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ curl
> openstack.org
> 
> (HTML response: "301 Moved Permanently" page served by nginx)
> 
> 
>
> On Fri, Oct 16, 2015 at 2:36 PM, Mars Ma  wrote:
>
>> Hi,
>>
>> I used image fedora-21-atomic-5-d181.qcow2 to create swarm bay , but the
>> bay went to failed status with status reason: Resource CREATE failed:
>> WaitConditionFailure:
>> resources.swarm_nodes.resources[0].resources.node_agent_wait_condition:
>> swarm-agent service failed to start.
>> debug inside swarm node, found that docker failed to start, lead to
>> swarm-agent and swarm-manager services failed to start.
>> [fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ docker
>> -v
>> Docker version 1.8.1.fc21, build 32b8b25/1.8.1
>>
>> detailed debug log, I pasted here :
>> http://paste.openstack.org/show/476450/
>>
>>
>>
>>
>> Thanks & Best regards !
>> Mars Ma
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Failed to create swarm bay with fedora-21-atomic-5-d181.qcow2

2015-10-18 Thread Mars Ma
Hi Hongbin,

I can ssh into the swarm node, and the curl command works OK:
[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ curl openstack.org
(HTML response: "301 Moved Permanently" page served by nginx)



On Fri, Oct 16, 2015 at 2:36 PM, Mars Ma  wrote:

> Hi,
>
> I used image fedora-21-atomic-5-d181.qcow2 to create swarm bay , but the
> bay went to failed status with status reason: Resource CREATE failed:
> WaitConditionFailure:
> resources.swarm_nodes.resources[0].resources.node_agent_wait_condition:
> swarm-agent service failed to start.
> debug inside swarm node, found that docker failed to start, lead to
> swarm-agent and swarm-manager services failed to start.
> [fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ docker
> -v
> Docker version 1.8.1.fc21, build 32b8b25/1.8.1
>
> detailed debug log, I pasted here :
> http://paste.openstack.org/show/476450/
>
>
>
>
> Thanks & Best regards !
> Mars Ma
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Failed to create swarm bay with fedora-21-atomic-5-d181.qcow2

2015-10-18 Thread Qiao,Liyong
I worked with Mars on this last week, but I cannot reproduce the issue in my
environment.

I've shared my docker daemon command line with Mars, but he still gets the same errors.

One thing I forgot to ask, along the same lines as Hongbin's question:
if you need a proxy to reach the internet, please add the proxy settings to the baymodel:

taget@taget-ThinkStation-P300:~/devstack$ magnum baymodel-create ...
--dns-nameserver 10.248.2.5 --coe swarm --fixed-network 192.168.0.0/24 
--http-proxy http://myhttpproxy:port/ --https-proxy 
https://myhttpsproxy:port/ --no-proxy 
192.168.0.1,192.168.0.2,192.168.0.3,192.168.0.4,192.168.0.5



On 2015年10月17日 05:18, Ton Ngo wrote:


Hi Mars,
Your paste shows that the docker service is not starting, and all the 
following services like swarm-agent fails because of the dependency. 
The error message INVALIDARGUMENT seems odd, I have seen elsewhere but 
not with docker. If you log into the node, you can check the docker 
command itself, like:

docker --help
Or manually run the full command as done in the service:
/usr/bin/docker -d -H fd:// -H tcp://0.0.0.0:2375 --tlsverify 
--tlscacert="/etc/docker/ca.crt" --tlskey="/etc/docker/server.key" 
--tlscert="/etc/docker/server.crt" --selinux-enabled --storage-driver 
devicemapper --storage-opt dm.fs=xfs --storage-opt 
dm.datadev=/dev/mapper/atomicos-docker--data --storage-opt 
dm.metadatadev=/dev/mapper/atomicos-docker--meta


Ton,



From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>

Date: 10/16/2015 01:05 PM
Subject: Re: [openstack-dev] [magnum] Failed to create swarm bay with 
fedora-21-atomic-5-d181.qcow2






Hi Mars,

I cannot reproduce the error. My best guess is that your VMs don’t 
have external internet access (Could you verify it by ssh into one of 
your VM and type “curl openstack.org” ?). If not, please create a bug 
to report the error (_https://bugs.launchpad.net/magnum_).


Thanks,
Hongbin

*From:*Mars Ma [mailto:wenc...@gmail.com] *
Sent:*October-16-15 2:37 AM*
To:*openstack-dev@lists.openstack.org*
Subject:*[openstack-dev] [magnum] Failed to create swarm bay with 
fedora-21-atomic-5-d181.qcow2


Hi,

I used image fedora-21-atomic-5-d181.qcow2 to create swarm bay , but 
the bay went to failed status with status reason: Resource CREATE 
failed: WaitConditionFailure: 
resources.swarm_nodes.resources[0].resources.node_agent_wait_condition: swarm-agent 
service failed to start.
debug inside swarm node, found that docker failed to start, lead to 
swarm-agent and swarm-manager services failed to start.
[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ 
docker -v

Docker version 1.8.1.fc21, build 32b8b25/1.8.1

detailed debug log, I pasted here :
_http://paste.openstack.org/show/476450/_




Thanks & Best regards !
Mars 
Ma__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Failed to create swarm bay with fedora-21-atomic-5-d181.qcow2

2015-10-16 Thread Ton Ngo
Hi Mars,
Your paste shows that the docker service is not starting, and all the
following services like swarm-agent fail because of the dependency.  The
error message INVALIDARGUMENT seems odd; I have seen it elsewhere but not with
docker.  If you log into the node, you can check the docker command itself,
like:
docker --help
Or manually run the full command as done in the service:
/usr/bin/docker -d -H fd:// -H tcp://0.0.0.0:2375 --tlsverify
--tlscacert="/etc/docker/ca.crt" --tlskey="/etc/docker/server.key"
--tlscert="/etc/docker/server.crt" --selinux-enabled --storage-driver
devicemapper --storage-opt dm.fs=xfs --storage-opt
dm.datadev=/dev/mapper/atomicos-docker--data --storage-opt
dm.metadatadev=/dev/mapper/atomicos-docker--meta

Ton,



From:   Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date:   10/16/2015 01:05 PM
Subject:Re: [openstack-dev] [magnum] Failed to create swarm bay
withfedora-21-atomic-5-d181.qcow2



Hi Mars,

I cannot reproduce the error. My best guess is that your VMs don’t have
external internet access (Could you verify it by ssh into one of your VM
and type “curl openstack.org” ?). If not, please create a bug to report the
error (https://bugs.launchpad.net/magnum).

Thanks,
Hongbin

From: Mars Ma [mailto:wenc...@gmail.com]
Sent: October-16-15 2:37 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Failed to create swarm bay with
fedora-21-atomic-5-d181.qcow2

Hi,

I used image fedora-21-atomic-5-d181.qcow2 to create swarm bay , but the
bay went to failed status with status reason: Resource CREATE failed:
WaitConditionFailure: resources.swarm_nodes.resources
[0].resources.node_agent_wait_condition: swarm-agent service failed to
start.
debug inside swarm node, found that docker failed to start, lead to
swarm-agent and swarm-manager services failed to start.
[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ docker -v
Docker version 1.8.1.fc21, build 32b8b25/1.8.1

detailed debug log, I pasted here :
http://paste.openstack.org/show/476450/




Thanks & Best regards !
Mars Ma
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Failed to create swarm bay with fedora-21-atomic-5-d181.qcow2

2015-10-16 Thread Hongbin Lu
Hi Mars,

I cannot reproduce the error. My best guess is that your VMs don’t have 
external internet access (could you verify it by sshing into one of your VMs and 
typing “curl openstack.org”?). If that is not the problem, please create a bug to 
report the error (https://bugs.launchpad.net/magnum).

Thanks,
Hongbin

From: Mars Ma [mailto:wenc...@gmail.com]
Sent: October-16-15 2:37 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Failed to create swarm bay with 
fedora-21-atomic-5-d181.qcow2

Hi,

I used image fedora-21-atomic-5-d181.qcow2 to create swarm bay , but the bay 
went to failed status with status reason: Resource CREATE failed: 
WaitConditionFailure: 
resources.swarm_nodes.resources[0].resources.node_agent_wait_condition: 
swarm-agent service failed to start.
debug inside swarm node, found that docker failed to start, lead to swarm-agent 
and swarm-manager services failed to start.
[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ docker -v
Docker version 1.8.1.fc21, build 32b8b25/1.8.1

detailed debug log, I pasted here :
http://paste.openstack.org/show/476450/




Thanks & Best regards !
Mars Ma
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Failed to create swarm bay with fedora-21-atomic-5-d181.qcow2

2015-10-16 Thread Mars Ma
Hi,

I used the image fedora-21-atomic-5-d181.qcow2 to create a swarm bay, but the
bay went to failed status with status reason: Resource CREATE failed:
WaitConditionFailure:
resources.swarm_nodes.resources[0].resources.node_agent_wait_condition:
swarm-agent service failed to start.
Debugging inside the swarm node, I found that docker failed to start, which caused
the swarm-agent and swarm-manager services to fail to start.
[fedora@sw-d7cum4a5z5a-0-dx4eksy72u4q-swarm-node-3d7bwzm7fso7 ~]$ docker -v
Docker version 1.8.1.fc21, build 32b8b25/1.8.1

I pasted the detailed debug log here:
http://paste.openstack.org/show/476450/




Thanks & Best regards !
Mars Ma
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Creating pods results in "EOF occurred in violation of protocol" exception

2015-10-14 Thread Hongbin Lu
Hi Bertrand,

Thanks for reporting the error. I confirmed that this error was consistently 
reproducible. A bug ticket was created for that.

https://bugs.launchpad.net/magnum/+bug/1506226
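
In the meantime, a quick manual check of the bay's API endpoint may help narrow it
down; the SSLError suggests the TLS handshake between the conductor and the k8s API
is being dropped. The address and ports below are placeholders, take the real
api_address from "magnum bay-show":

    # does the API negotiate TLS at all?
    openssl s_client -connect 172.24.4.5:6443 </dev/null
    # and, for comparison, a plain-HTTP probe of the insecure port
    curl -v http://172.24.4.5:8080/api/v1

If the TLS probe dies immediately, the problem is likely on the bay side rather
than in the conductor.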

Best regards,
Hongbin

-Original Message-
From: Bertrand NOEL [mailto:bertrand.n...@cern.ch] 
Sent: October-14-15 8:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Creating pods results in "EOF occurred in 
violation of protocol" exception

Hi,
I try Magnum, following instructions on the quickstart page [1]. I successfully 
create the baymodel and the bay. When I run the command to create redis pods 
(_magnum pod-create --manifest ./redis-master.yaml --bay k8sbay_), client side, 
it timeouts. And server side (m-cond.log), I get the following stack trace. It 
also happens with other Kubernetes examples.
I try with Ubuntu 14.04, with Magnum at commit 
fc8f412c87ea0f9dc0fc1c24963013e6d6209f27.


2015-10-14 12:16:40.877 ERROR oslo_messaging.rpc.dispatcher
[req-960570cf-17b2-489f-9376-81890e2bf2d8 admin admin] Exception during message 
handling: [Errno 8] _ssl.c:510: EOF occurred in violation of protocol
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher Traceback (most 
recent call last):
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 142, in _dispatch_and_reply
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
executor_callback))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 186, in _dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
executor_callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py",
line 129, in _do_dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher result = func(ctxt, 
**new_args)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/conductor/handlers/k8s_conductor.py", line 89, in 
pod_create
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
namespace='default')
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/apis/apiv_api.py",
line 3596, in create_namespaced_pod
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
callback=params.get('callback'))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 320, in call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher response_type, 
auth_settings, callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 148, in __call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
post_params=post_params, body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py",
line 350, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
265, in POST
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.IMPL.POST(*n, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
187, in POST
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
133, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher headers=headers)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 72, in request
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher **urlopen_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 149, in 
request_encode_body
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.urlopen(method, url, **extra_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 161, in 
urlopen
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher response = 
conn.urlopen(method, u.request_uri, **kw)
2015-10-14 12:16:40.877 TRACE o

[openstack-dev] [magnum] Creating pods results in "EOF occurred in violation of protocol" exception

2015-10-14 Thread Bertrand NOEL

Hi,
I am trying Magnum, following the instructions on the quickstart page [1]. I 
successfully create the baymodel and the bay. When I run the command to 
create the redis pods (_magnum pod-create --manifest ./redis-master.yaml 
--bay k8sbay_), it times out on the client side, and on the server side 
(m-cond.log) I get the stack trace below. It also happens with the other 
Kubernetes examples.
I am on Ubuntu 14.04, with Magnum at commit 
fc8f412c87ea0f9dc0fc1c24963013e6d6209f27.



2015-10-14 12:16:40.877 ERROR oslo_messaging.rpc.dispatcher 
[req-960570cf-17b2-489f-9376-81890e2bf2d8 admin admin] Exception during 
message handling: [Errno 8] _ssl.c:510: EOF occurred in violation of 
protocol
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 142, in _dispatch_and_reply
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 186, in _dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 129, in _do_dispatch
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/conductor/handlers/k8s_conductor.py", line 89, 
in pod_create
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
namespace='default')
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/apis/apiv_api.py", 
line 3596, in create_namespaced_pod
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
callback=params.get('callback'))
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py", 
line 320, in call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
response_type, auth_settings, callback)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py", 
line 148, in __call_api
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher 
post_params=post_params, body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/api_client.py", 
line 350, in request

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
265, in POST
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.IMPL.POST(*n, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
187, in POST

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher body=body)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/opt/stack/magnum/magnum/common/pythonk8sclient/swagger_client/rest.py", line 
133, in request

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher headers=headers)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 72, in 
request

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher **urlopen_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/request.py", line 149, 
in request_encode_body
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher return 
self.urlopen(method, url, **extra_kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/poolmanager.py", line 
161, in urlopen
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher response = 
conn.urlopen(method, u.request_uri, **kw)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 
588, in urlopen
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher raise 
SSLError(e)
2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher SSLError: 
[Errno 8] _ssl.c:510: EOF occurred in violation of protocol

2015-10-14 12:16:40.877 TRACE oslo_messaging.rpc.dispatcher
2015-10-14 12:16:40.879 ERROR oslo_messaging._drivers.common 
[req-960570cf-17b2-489f-9376-81890e2bf2d8 admin admin] Returning 
exception [Errno 8] _ssl.c:510: EOF occurred in violation of protocol to 
caller
2015-10-14 12:16:40.879 

[openstack-dev] [magnum] Networking Subteam Meeting Cancelations

2015-10-13 Thread Daneyon Hansen (danehans)
All,

I have a conflict this week and will be unable to chair the weekly irc meeting 
[1]. Therefore, we will not meet this week. 10/22 and 10/29 meetings will also 
be canceled due to the Mitaka Design Summit. We will resume our regularly 
scheduled meetings on 11/5.

[1] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-11 Thread Neil Jerram
Please note that you jumped there from developer focus to user focus. Of 
course some users are also developers, and vice versa, but I would expect doc 
focused on development to be quite different from doc focused on use.

For development doc, I think the Neutron devref is a great example, so you 
might want to be inspired by that.

Regards,
 Neil


From: Adrian Otto
Sent: Thursday, 8 October 2015 21:07
To: OpenStack Development Mailing List (not for usage questions)
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Document adding --memory option to create 
containers


Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn’t to teach people best practices around Magnum.  It is to 
get a developer operational to give them that sense of feeling that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, magnum currently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won’t 
reserve any memory to the containers with unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below is the positions of both sides:

Pros:
· It is a good practice to always specifying the memory size, because 
containers with unspecified memory size won’t have QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and triggers scaling 
accordingly. Containers with unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-10 Thread Steven Dake (stdake)
Adrian,

What I suggest is to commit a table of contents for a new comprehensive user 
guide.  Then new contributors can put stuff there instead of the quick start 
guide.  Any complexity in the quick start guide can also be transitioned into 
the comprehensive user guide.

So who is going to do the work? :)

Regards
-steve


From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 8, 2015 at 1:04 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn’t to teach people best practices around Magnum.  It is to 
get a developer operational to give them that sense of feeling that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, magnum currently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won’t 
reserve any memory to the containers with unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below is the positions of both sides:

Pros:
· It is a good practice to always specifying the memory size, because 
containers with unspecified memory size won’t have QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and triggers scaling 
accordingly. Containers with unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum] Testing result of new atomic-6 image

2015-10-09 Thread Qiao,Liyong

Testing results for the new atomic-6 image [1] built by Tango.
The atomic-5 image has an issue starting a container instance (its docker version 
is 1.7.1), so Tango built a new atomic-6 image with docker 1.8.1.

eghobo and I (eliqiao) did some testing (eghobo_ did most of it).

Here is the summary:

 * coe=swarm

1.  cannot pull swarm:0.2.0; using 0.4.0 or latest works
2.  when creating a container with the magnum CLI, the image name
   should be the full name, like "docker.io/cirros"

examples for 2:

   taget@taget-ThinkStation-P300:~/kubernetes/examples/redis$ magnum
   container-create --name testcontainer --image cirros --bay swarmbay6
   --command "echo hello"
   ERROR: Docker internal Error: 404 Client Error: Not Found ("No
   such image: cirros") (HTTP 500)
   taget@taget-ThinkStation-P300:~/kubernetes/examples/redis$ magnum
   container-create --name testcontainer --image docker.io/cirros --bay
   swarmbay6 --command "echo hello"

 * coe=k8s (tls_disabled=True)

kube-apiserver.service cannot start up, but the server can be started with the 
command line in [2]. I tried to run kubectl get pod, but it failed:


   [minion@k8-5qx66ie62f-0-vaucgvagirv4-kube-master-oemtlcotgak6 ~]$
   kubectl get pod
   error: couldn't read version from server: Get
   http://localhost:8080/api: dial tcp 127.0.0.1:8080: connection refused


netstat shows that nothing is listening on 8080; I am not sure why (I am not 
familiar with k8s).


   [minion@k8-5qx66ie62f-0-vaucgvagirv4-kube-master-oemtlcotgak6 ~]$
   ps aux | grep kub
   kube   805  0.5  1.0  30232 21436 ?Ssl  08:12 0:29
   /usr/bin/kube-controller-manager --logtostderr=true --v=0
   --master=http://127.0.0.1:8080
   kube   806  0.1  0.6  17332 13048 ?Ssl  08:12 0:09
   /usr/bin/kube-scheduler --logtostderr=true --v=0
   --master=http://127.0.0.1:8080
   root  1246  0.0  1.0  33656 22300 pts/0Sl+  09:33 0:00
   /usr/bin/kube-apiserver --logtostderr=true --v=0
   --etcd_servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0
   --insecure-port=8080 --kubelet_port=10250 --allow_privileged=true
   --service-cluster-ip-range=10.254.0.0/16 --runtime_config=api/all=true
   minion1276  0.0  0.0  11140  1632 pts/1S+   09:46 0:00 grep
   --color=auto kub


[1] https://fedorapeople.org/groups/magnum/fedora-21-atomic-6-d181.qcow2
[2] http://paste.openstack.org/show/475824/
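
A couple of commands I still need to run on the master to see why
kube-apiserver.service refuses to start under systemd while the hand-run command
line works (assuming /etc/kubernetes/apiserver is where this image keeps the
unit's options; the path may differ on atomic):

   sudo systemctl status kube-apiserver -l
   sudo journalctl -u kube-apiserver --no-pager -n 50
   # compare the unit's option file with the command line that works by hand
   cat /etc/kubernetes/apiserver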

-- BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-10-09 Thread 王华
Thanks everyone!
It is my pleasure to be a magnum core reviewer. Let's make magnum better
together.

Thanks
Wanghua

On Wed, Oct 7, 2015 at 1:19 AM, Vilobh Meshram <
vilobhmeshram.openst...@gmail.com> wrote:

> Thanks everyone!
>
> I really appreciate this. Happy to join Magnum-Core  :)
>
> We have a great team, very diverse and very dedicated. It's pleasure to
> work with all of you.
>
> Thanks,
> Vilobh
>
> On Mon, Oct 5, 2015 at 5:26 PM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
>
>> Team,
>>
>> In accordance with our consensus and the current date/time, I hereby
>> welcome Vilobh and Hua as new core reviewers, and have added them to the
>> magnum-core group. I will announce this addition at tomorrow’s team meeting
>> at our new time of 1600 UTC (no more alternating schedule, remember?).
>>
>> Thanks,
>>
>> Adrian
>>
>> On Oct 1, 2015, at 7:33 PM, Jay Lau <jay.lau@gmail.com> wrote:
>>
>> +1 for both! Welcome!
>>
>> On Thu, Oct 1, 2015 at 7:07 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
>>
>>> +1 for both. Welcome!
>>>
>>>
>>>
>>> *From:* Davanum Srinivas [mailto:dava...@gmail.com]
>>> *Sent:* September-30-15 7:00 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [magnum] New Core Reviewers
>>>
>>>
>>>
>>> +1 from me for both Vilobh and Hua.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Dims
>>>
>>>
>>>
>>> On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto <adrian.o...@rackspace.com>
>>> wrote:
>>>
>>> Core Reviewers,
>>>
>>> I propose the following additions to magnum-core:
>>>
>>> +Vilobh Meshram (vilobhmm)
>>> +Hua Wang (humble00)
>>>
>>> Please respond with +1 to agree or -1 to veto. This will be decided by
>>> either a simple majority of existing core reviewers, or by lazy consensus
>>> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>>>
>>> Thanks,
>>>
>>> Adrian Otto
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>>
>>> --
>>>
>>> Davanum Srinivas :: https://twitter.com/dims
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Thanks,
>>
>> Jay Lau (Guangya Liu)
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] How to verify a service is correctly setup in heat template

2015-10-08 Thread Qiao,Liyong

hi Magnum hackers:

Recently we upgraded to the fedora atomic-5 image, but the docker (1.7.1) in 
that image doesn't work well; see [1].

When I use that image to create a swarm bay, magnum tells me the bay is usable, 
but actually the swarm-master and swarm-agent services are not running correctly, 
so the bay is not usable.
I proposed a fix [2] to check every service's status (using systemctl 
status) before triggering a signal.

Andrew Melton feels that this check is not reliable, so he proposed fix [3], 
but fix [3] does not work, because additional signals will be ignored: in the 
heat template the default signal count is 1. Please see [4] for more information.

So my questions are: why can't [2] work well? Is my understanding wrong in 
https://bugs.launchpad.net/magnum/+bug/1502329/comments/5?

And is there a better way to get an asynchronous signal?

[1]https://bugs.launchpad.net/magnum/+bug/1499607
[2]https://review.openstack.org/#/c/228762/
[3]https://review.openstack.org/#/c/230639/
[4]https://bugs.launchpad.net/magnum/+bug/1502329
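
To make the question concrete, here is a rough sketch of the kind of check [2]
proposes (not the actual patch), written as the shell fragment a template could
run before signalling; $WAIT_HANDLE_URL stands in for however the template exposes
the wait condition handle (wc_notify and cfn-signal differ):

    #!/bin/sh
    # signal FAILURE as soon as any required service is not active,
    # otherwise signal SUCCESS exactly once
    for svc in docker swarm-agent swarm-manager; do
        if ! systemctl is-active --quiet "$svc"; then
            curl -s -X POST -H 'Content-Type: application/json' \
                 -d "{\"status\": \"FAILURE\", \"reason\": \"$svc is not running\"}" \
                 "$WAIT_HANDLE_URL"
            exit 1
        fi
    done
    curl -s -X POST -H 'Content-Type: application/json' \
         -d '{"status": "SUCCESS"}' "$WAIT_HANDLE_URL"

With something like this only one signal is ever sent, so the count=1 limitation
described in [4] would not be hit.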

Thanks.

--
BR, Eli(Li Yong)Qiao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Hongbin Lu
Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, magnum recently added support for specifying the memory size of 
containers. Specifying the memory size is optional, and the COE won't reserve any 
memory for containers with an unspecified memory size. The debate is whether we 
should document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:

* It is good practice to always specify the memory size, because containers 
with an unspecified memory size won't have a QoS guarantee.

* The in-development autoscaling feature [1] will query the memory size of each 
container to estimate the residual capacity and trigger scaling accordingly. 
Containers with an unspecified memory size will be treated as taking 0 memory, 
which negatively affects the scaling decision.

Cons:

* The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
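
For context, the debate boils down to whether the guide shows the first or the
second form below. The container, image, and bay names are made up, and the
accepted value syntax for --memory is defined in the review above, not here:

    # without a memory limit; the COE reserves nothing for the container
    magnum container-create --name web --image docker.io/nginx --bay swarmbay \
        --command "echo hello"
    # with an explicit memory limit, as the review proposes to document
    magnum container-create --name web --image docker.io/nginx --bay swarmbay \
        --command "echo hello" --memory 512m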
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Adrian Otto
Thanks Hongbin, for raising this for discussion. There is a middle ground that 
we can reach. We can collect a set of “best practices”, and place them together 
in a document. Some of them will be operational best practices for cloud 
operators, and some of them will be for end users. We can make callouts to them 
in the quick start, so our newcomers know where to look for them, but this will 
help us to keep the quickstart concise. The practice of selecting a memory 
limit would be one of the best practices that we can call out to.

Adrian

On Oct 8, 2015, at 9:00 AM, Hongbin Lu 
> wrote:

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, magnum currently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won’t 
reserve any memory to the containers with unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below is the positions of both sides:

Pros:
• It is a good practice to always specifying the memory size, because 
containers with unspecified memory size won’t have QoS guarantee.
• The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and triggers scaling 
accordingly. Containers with unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
• The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Vikas Choudhary
In my opinion, there should be a more detailed document explaining the
importance of the commands and options.
Although --memory is an important attribute, since the objective of the
quickstart is to get the user a minimal working system in minimal time, it
seems better to skip this option in the quickstart.


-Vikas

On Fri, Oct 9, 2015 at 1:47 AM, Egor Guz <e...@walmartlabs.com> wrote:

> Adrian,
>
> I agree with Steve, otherwise it’s hard to find balance what should go to
> quick start guide (e.g. many operators worry about cpu or I/O instead of
> memory).
> Also, I believe auto-scaling deserves its own detailed document.
>
> —
> Egor
>
> From: Adrian Otto <adrian.o...@rackspace.com adrian.o...@rackspace.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
> >>
> Date: Thursday, October 8, 2015 at 13:04
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
> >>
> Subject: Re: [openstack-dev] [magnum] Document adding --memory option to
> create containers
>
> Steve,
>
> I agree with the concept of a simple quickstart doc, but there also needs
> to be a comprehensive user guide, which does not yet exist. In the absence
> of the user guide, the quick start is the void where this stuff is starting
> to land. We simply need to put together a magnum reference document, and
> start moving content into that.
>
> Adrian
>
> On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) <std...@cisco.com
> <mailto:std...@cisco.com>> wrote:
>
> Quickstart guide should be dead dead dead dead simple.  The goal of the
> quickstart guide isn’t to teach people best practices around Magnum.  It is
> to get a developer operational to give them that sense of feeling that
> Magnum can be worked on.  The goal of any quickstart guide should be to
> encourage the thinking that a person involving themselves with the project
> the quickstart guide represents is a good use of the person’s limited time
> on the planet.
>
> Regards
> -steve
>
>
> From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
> >>
> Date: Thursday, October 8, 2015 at 9:00 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
> >>
> Subject: [openstack-dev] [magnum] Document adding --memory option to
> create containers
>
> Hi team,
>
> I want to move the discussion in the review below to here, so that we can
> get more feedback
>
> https://review.openstack.org/#/c/232175/
>
> In summary, magnum currently added support for specifying the memory size
> of containers. The specification of the memory size is optional, and the
> COE won’t reserve any memory to the containers with unspecified memory
> size. The debate is whether we should document this optional parameter in
> the quickstart guide. Below is the positions of both sides:
>
> Pros:
> · It is a good practice to always specifying the memory size,
> because containers with unspecified memory size won’t have QoS guarantee.
> · The in-development autoscaling feature [1] will query the memory
> size of each container to estimate the residual capacity and triggers
> scaling accordingly. Containers with unspecified memory size will be
> treated as taking 0 memory, which negatively affects the scaling decision.
> Cons:
> · The quickstart guide should be kept as simple as possible, so it
> is not a good idea to have the optional parameter in the guide.
>
> Thoughts?
>
> [1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Adrian Otto
Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn’t to teach people best practices around Magnum.  It is to 
get a developer operational to give them that sense of feeling that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, magnum currently added support for specifying the memory size of 
containers. The specification of the memory size is optional, and the COE won’t 
reserve any memory to the containers with unspecified memory size. The debate 
is whether we should document this optional parameter in the quickstart guide. 
Below is the positions of both sides:

Pros:
· It is a good practice to always specifying the memory size, because 
containers with unspecified memory size won’t have QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and triggers scaling 
accordingly. Containers with unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Egor Guz
Adrian,

I agree with Steve; otherwise it’s hard to find the balance of what should go into the 
quick start guide (e.g. many operators worry about CPU or I/O instead of memory).
Also, I believe auto-scaling deserves its own detailed document.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 8, 2015 at 13:04
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Steve,

I agree with the concept of a simple quickstart doc, but there also needs to be 
a comprehensive user guide, which does not yet exist. In the absence of the 
user guide, the quick start is the void where this stuff is starting to land. 
We simply need to put together a magnum reference document, and start moving 
content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn’t to teach people best practices around Magnum.  It is to 
get a developer operational to give them that sense of feeling that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person’s limited time on the planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the memory size of 
containers. The memory size is optional, and the COE won't reserve any memory 
for containers with an unspecified memory size. The debate is whether we should 
document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:
· It is good practice to always specify the memory size, because 
containers with an unspecified memory size won't have a QoS guarantee.
· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and trigger scaling 
accordingly. Containers with an unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
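
For illustration only, the command under discussion might look roughly like the
following. This is a sketch: the image, bay name, and command value are
placeholders, and the exact flag syntax depends on the python-magnumclient
version that carries the change under review (check "magnum help
container-create" for your client):

    # --memory is the optional flag discussed above; value format per the client docs
    magnum container-create --name test-container \
                            --image docker.io/cirros:latest \
                            --bay swarmbay \
                            --memory 512m \
                            --command "ping -c 4 8.8.8.8"

Omitting --memory is still valid; the container is then created without any
memory reservation, which is exactly the case the autoscaling work above would
treat as 0 memory.
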
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Ton Ngo
We should reserve time at the next summit to discuss putting together a
detailed user guide, laying down a skeleton so contributors can start
filling in different parts.
Otherwise, as we have observed, everything falls into the quick start guide.
Ton Ngo,



From:   "Qiao,Liyong" <liyong.q...@intel.com>
To: openstack-dev@lists.openstack.org
Date:   10/08/2015 06:32 PM
Subject:    Re: [openstack-dev] [magnum] Document adding --memory option to
create containers



+1, we can add a more detailed explanation of --memory in the magnum
CLI documentation instead of the quick start.

Eli.

On 2015年10月09日 07:45, Vikas Choudhary wrote:
  In my opinion, there should be a more detailed document explaining the
  importance of commands and options.
  Though --memory is an important attribute, since the objective of the
  quickstart is to get the user a minimum working system in minimum
  time, it seems better to skip this option in the quickstart.


  -Vikas

  On Fri, Oct 9, 2015 at 1:47 AM, Egor Guz <e...@walmartlabs.com>
  wrote:
Adrian,

I agree with Steve; otherwise it's hard to find the balance of what should
go into the quick start guide (e.g. many operators worry about CPU or I/O
instead of memory).
Also, I believe auto-scaling deserves its own detailed document.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, October 8, 2015 at 13:04
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Document adding --memory
option to create containers

Steve,

I agree with the concept of a simple quickstart doc, but there also
needs to be a comprehensive user guide, which does not yet exist.
In the absence of the user guide, the quick start is the void where
this stuff is starting to land. We simply need to put together a
magnum reference document, and start moving content into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake) <std...@cisco.com
<mailto:std...@cisco.com>> wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of
the quickstart guide isn't to teach people best practices around
Magnum.  It is to get a developer operational, to give them the
sense that Magnum can be worked on.  The goal of any
quickstart guide should be to encourage the thinking that a person
involving themselves with the project the quickstart guide
represents is a good use of the person's limited time on the
planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Document adding --memory option
to create containers

Hi team,

I want to move the discussion in the review below to here, so that
we can get more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the
memory size of containers. The memory size is optional, and the COE
won't reserve any memory for containers with an unspecified memory
size. The debate is whether we should document this optional
parameter in the quickstart guide. Below are the positions of both
sides:

Pros:
· It is good practice to always specify the memory size,
because containers with an unspecified memory size won't have
a QoS guarantee.
· The in-development autoscaling feature [1] will query the
memory size of each container to estimate the residual capacity and
trigger scaling accordingly. Containers with an unspecified memory
size will be treated as taking 0 memory, which negatively affects
the scaling decision.
Cons:
· The quickstart guide should be kept as simple as
possible, so it is not a good idea to have the optional parameter
in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay

__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lis

Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Qiao,Liyong
+1, we can add a more detailed explanation of --memory in the magnum
CLI documentation instead of the quick start.


Eli.

On 2015年10月09日 07:45, Vikas Choudhary wrote:
In my opinion, there should be a more detailed document explaining the
importance of commands and options.
Though --memory is an important attribute, since the objective of the
quickstart is to get the user a minimum working system in minimum
time, it seems better to skip this option in the quickstart.



-Vikas

On Fri, Oct 9, 2015 at 1:47 AM, Egor Guz <e...@walmartlabs.com 
<mailto:e...@walmartlabs.com>> wrote:


Adrian,

I agree with Steve; otherwise it's hard to find the balance of what
should go into the quick start guide (e.g. many operators worry about
CPU or I/O instead of memory).
Also, I believe auto-scaling deserves its own detailed document.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, October 8, 2015 at 13:04
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Document adding --memory
option to create containers

Steve,

I agree with the concept of a simple quickstart doc, but there
also needs to be a comprehensive user guide, which does not yet
exist. In the absence of the user guide, the quick start is the
void where this stuff is starting to land. We simply need to put
together a magnum reference document, and start moving content
into that.

Adrian

On Oct 8, 2015, at 12:54 PM, Steven Dake (stdake)
<std...@cisco.com
<mailto:std...@cisco.com><mailto:std...@cisco.com
<mailto:std...@cisco.com>>> wrote:

Quickstart guide should be dead dead dead dead simple.  The goal of
the quickstart guide isn't to teach people best practices around
Magnum.  It is to get a developer operational, to give them the
sense that Magnum can be worked on.  The goal of any
quickstart guide should be to encourage the thinking that a person
involving themselves with the project the quickstart guide
represents is a good use of the person's limited time on the planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Document adding --memory option
to create containers

Hi team,

I want to move the discussion in the review below to here, so that
we can get more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the
memory size of containers. The memory size is optional, and the COE
won't reserve any memory for containers with an unspecified memory
size. The debate is whether we should document this optional
parameter in the quickstart guide. Below are the positions of both
sides:

Pros:
· It is good practice to always specify the memory
size, because containers with an unspecified memory size won't have
a QoS guarantee.
· The in-development autoscaling feature [1] will query
the memory size of each container to estimate the residual
capacity and trigger scaling accordingly. Containers with an
unspecified memory size will be treated as taking 0 memory, which
negatively affects the scaling decision.
Cons:
· The quickstart guide should be kept as simple as
possible, so it is not a good idea to have the optional parameter
in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
_

Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Steven Dake (stdake)
Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn't to teach people best practices around Magnum.  It is to 
get a developer operational, to give them the sense that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person's limited time on the planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the memory size of 
containers. The memory size is optional, and the COE won't reserve any memory 
for containers with an unspecified memory size. The debate is whether we should 
document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:

· It is good practice to always specify the memory size, because 
containers with an unspecified memory size won't have a QoS guarantee.

· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and trigger scaling 
accordingly. Containers with an unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:

· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Document adding --memory option to create containers

2015-10-08 Thread Murali Allada
+1


Anything with default values should be ignored in the quickstart guide.


-Murali




From: Steven Dake (stdake) <std...@cisco.com>
Sent: Thursday, October 8, 2015 2:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Quickstart guide should be dead dead dead dead simple.  The goal of the 
quickstart guide isn't to teach people best practices around Magnum.  It is to 
get a developer operational, to give them the sense that Magnum can 
be worked on.  The goal of any quickstart guide should be to encourage the 
thinking that a person involving themselves with the project the quickstart 
guide represents is a good use of the person's limited time on the planet.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 8, 2015 at 9:00 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Document adding --memory option to create 
containers

Hi team,

I want to move the discussion in the review below to here, so that we can get 
more feedback

https://review.openstack.org/#/c/232175/

In summary, Magnum recently added support for specifying the memory size of 
containers. The memory size is optional, and the COE won't reserve any memory 
for containers with an unspecified memory size. The debate is whether we should 
document this optional parameter in the quickstart guide. 
Below are the positions of both sides:

Pros:

· It is good practice to always specify the memory size, because 
containers with an unspecified memory size won't have a QoS guarantee.

· The in-development autoscaling feature [1] will query the memory size 
of each container to estimate the residual capacity and trigger scaling 
accordingly. Containers with an unspecified memory size will be treated as taking 
0 memory, which negatively affects the scaling decision.
Cons:

· The quickstart guide should be kept as simple as possible, so it is 
not a good idea to have the optional parameter in the guide.

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/autoscale-bay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-10-06 Thread Vilobh Meshram
Thanks everyone!

I really appreciate this. Happy to join Magnum-Core  :)

We have a great team, very diverse and very dedicated. It's a pleasure to
work with all of you.

Thanks,
Vilobh

On Mon, Oct 5, 2015 at 5:26 PM, Adrian Otto <adrian.o...@rackspace.com>
wrote:

> Team,
>
> In accordance with our consensus and the current date/time, I hereby
> welcome Vilobh and Hua as new core reviewers, and have added them to the
> magnum-core group. I will announce this addition at tomorrow’s team meeting
> at our new time of 1600 UTC (no more alternating schedule, remember?).
>
> Thanks,
>
> Adrian
>
> On Oct 1, 2015, at 7:33 PM, Jay Lau <jay.lau@gmail.com> wrote:
>
> +1 for both! Welcome!
>
> On Thu, Oct 1, 2015 at 7:07 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
>
>> +1 for both. Welcome!
>>
>>
>>
>> *From:* Davanum Srinivas [mailto:dava...@gmail.com]
>> *Sent:* September-30-15 7:00 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum] New Core Reviewers
>>
>>
>>
>> +1 from me for both Vilobh and Hua.
>>
>>
>>
>> Thanks,
>>
>> Dims
>>
>>
>>
>> On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto <adrian.o...@rackspace.com>
>> wrote:
>>
>> Core Reviewers,
>>
>> I propose the following additions to magnum-core:
>>
>> +Vilobh Meshram (vilobhmm)
>> +Hua Wang (humble00)
>>
>> Please respond with +1 to agree or -1 to veto. This will be decided by
>> either a simple majority of existing core reviewers, or by lazy consensus
>> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>>
>> Thanks,
>>
>> Adrian Otto
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>>
>> --
>>
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Thanks,
>
> Jay Lau (Guangya Liu)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-10-05 Thread Adrian Otto
Team,

In accordance with our consensus and the current date/time, I hereby welcome 
Vilobh and Hua as new core reviewers, and have added them to the magnum-core 
group. I will announce this addition at tomorrow’s team meeting at our new time 
of 1600 UTC (no more alternating schedule, remember?).

Thanks,

Adrian

On Oct 1, 2015, at 7:33 PM, Jay Lau 
<jay.lau@gmail.com<mailto:jay.lau@gmail.com>> wrote:

+1 for both! Welcome!

On Thu, Oct 1, 2015 at 7:07 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
+1 for both. Welcome!

From: Davanum Srinivas [mailto:dava...@gmail.com<mailto:dava...@gmail.com>]
Sent: September-30-15 7:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] New Core Reviewers

+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a 
simple majority of existing core reviewers, or by lazy consensus concluding on 
2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-03 Thread Egor Guz
Kris,

We are facing similar challenges/questions, and here are some thoughts. We 
cannot ignore scalability limits: Kub ~ 100 nodes (there are plans to support 
1K next year), Swarm ~ ??? (I have never heard of
even 100 nodes; it is definitely not ready for production yet (happy to be 
wrong ;))), Mesos ~ 100K nodes, but it has scalability issues with many 
schedulers (e.g. each team develops/uses its
own framework (Marathon/Aurora)). It looks like small clusters are the better/safer 
option today (even if you need to pay for the additional control plane), but I 
believe the situation will change in the next twelve months.

—
Egor

From: "Kris G. Lindgren" <klindg...@godaddy.com<mailto:klindg...@godaddy.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, September 30, 2015 at 16:26
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain, and past experience 
tells me this won't be practical or scale; however, from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 still sucks.

From my point of view an ideal use case for companies like ours
(yahoo/godaddy) would be to be able to support hierarchical projects in magnum.
That way we could create a project for each department, and then the subteams
of those departments can have their own projects.  We create a bay per
department.  Sub-projects, if they want to, can support creation of their own
bays (but support of the kube cluster would then fall to that team).  When a
sub-project spins up a pod on a bay, minions get created inside that team's sub
projects and the containers in that pod run on the capacity that was spun up
under that project; the minions for each pod would be in a scaling group and
as such grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don't want to fall in line with 
the provided resource a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.

>

>I understand that at first glance a company like Yahoo may not want >separate 
>bays for their various applications because of the perceived >administrative 
>overhead. I would then challenge Yahoo to go deploy a COE >like kubernetes 
>(which has no multi-tenancy or a very basic implementation >of such) and get 
>it to work with hundreds of different competing >applications. I would 
>speculate the administrative overhead of getting >all that to work would be 
>greater then the administrative overhead of >simply doing a bay create for the 
>various tenants.

>

>Placing tenancy inside a COE seems interesting, but no COE does that >today. 
>Maybe in the future they will. Magnum was designed to present an >integration 
>point between COEs and OpenStack today, not five years down >the road. Its not 
>as if we took shortcuts to get to where we are.

>

>I will grant you that density is lower with the current design of Magnum >vs a 
>full on integration with OpenStack within the COE itself. However, >that model 
>which is what I believe you proposed is a huge design change to >each COE 
>which would overly c

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Hongbin Lu
Kris,

I think the proposal of hierarchical projects is out of the scope of magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you under the existing tenancy 
model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

· Project A

· Project A-1

· Project A-2

Then you can assign users to projects in the following ways:

· Assign team 1 members to both Project A and Project A-1

· Assign team 2 members to both Project A and Project A-2

Then you can create a bay at project A, which is shared by the whole 
department. In addition, each subteam can create their own bays at project A-X 
if they want. Does it address your use cases?
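
As a rough sketch of the workaround above (project names, user names, and the
member role name are placeholders; the role name in particular varies by
deployment), the Keystone side could be set up with the standard OpenStack
client:

    # one project for the department and one per subteam
    openstack project create dept-a
    openstack project create dept-a-team-1
    openstack project create dept-a-team-2

    # team 1 members get the shared project plus their own project
    openstack role add --user alice --project dept-a _member_
    openstack role add --user alice --project dept-a-team-1 _member_

    # team 2 members likewise
    openstack role add --user bob --project dept-a _member_
    openstack role add --user bob --project dept-a-team-2 _member_

The shared department bay would then be created while scoped to dept-a, and any
subteam-specific bays while scoped to dept-a-team-X.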

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain, and past experience 
tells me this won't be practical or scale; however, from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 still sucks.

From my point of view an ideal use case for companies like ours (yahoo/godaddy) 
would be to be able to support hierarchical projects in magnum.  That way we could 
create a project for each department, and then the subteams of those 
departments can have their own projects.  We create a bay per department.  
Sub-projects, if they want to, can support creation of their own bays (but 
support of the kube cluster would then fall to that team).  When a sub-project 
spins up a pod on a bay, minions get created inside that team's sub projects and 
the containers in that pod run on the capacity that was spun up under that 
project; the minions for each pod would be in a scaling group and as such 
grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don't want to fall in line with 
the provided resource a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,

>

>If you share resources, you give up multi-tenancy.  No COE system has the

>concept of multi-tenancy (kubernetes has some basic implementation but it

>is totally insecure).  Not only does multi-tenancy have to “look like” it

>offers multiple tenants isolation, but it actually has to deliver the

>goods.


>

>I understand that at first glance a company like Yahoo may not want

>separate bays for their various applications because of the perceived

>administrative overhead.  I would then challenge Yahoo to go deploy a COE

>like kubernetes (which has no multi-tenancy or a very basic implementation

>of such) and get it to work with hundreds of different competing

>applications.  I would speculate the administrative overhead of getting

>all that to work would be greater then the administrative overhead of

>simply doing a bay create for the various tenants.

>

>Placing tenancy inside a COE seems interesting, but no COE does that

>today.  Maybe in the future they will.  Magnum was designed to present an

>integration point between COEs and OpenStack today, not five years down

>the road.  Its not as if we took shortcuts to get to where we are.

>

>I will grant you that density is lower with the current design of Magnum

>vs a full on integration with OpenStack within the COE itself.  However,

>that model which is what I believe you proposed is a huge design change to

>each COE which would overly complicate the COE at the gain of increased

>density.  I personally don’t feel that pain is worth the gain.



___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Fox, Kevin M
I believe keystone already supports hierarchical projects

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Kris,

I think the proposal of hierarchical projects is out of the scope of magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you under the existing tenancy 
model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

· Project A

· Project A-1

· Project A-2

Then you can assign users to projects in the following ways:

· Assign team 1 members to both Project A and Project A-1

· Assign team 2 members to both Project A and Project A-2

Then you can create a bay at project A, which is shared by the whole 
department. In addition, each subteam can create their own bays at project A-X 
if they want. Does it address your use cases?

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain, and past experience 
tells me this won't be practical or scale; however, from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 still sucks.

From my point of view an ideal use case for companies like ours
(yahoo/godaddy) would be to be able to support hierarchical projects in magnum.
That way we could create a project for each department, and then the subteams
of those departments can have their own projects.  We create a bay per
department.  Sub-projects, if they want to, can support creation of their own
bays (but support of the kube cluster would then fall to that team).  When a
sub-project spins up a pod on a bay, minions get created inside that team's sub
projects and the containers in that pod run on the capacity that was spun up
under that project; the minions for each pod would be in a scaling group and
as such grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don't want to fall in line with 
the provided resource a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,

>

>If you share resources, you give up multi-tenancy.  No COE system has the

>concept of multi-tenancy (kubernetes has some basic implementation but it

>is totally insecure).  Not only does multi-tenancy have to “look like” it

>offers multiple tenants isolation, but it actually has to deliver the

>goods.


>

>I understand that at first glance a company like Yahoo may not want

>separate bays for their various applications because of the perceived

>administrative overhead.  I would then challenge Yahoo to go deploy a COE

>like kubernetes (which has no multi-tenancy or a very basic implementation

>of such) and get it to work with hundreds of different competing

>applications.  I would speculate the administrative overhead of getting

>all that to work would be greater then the administrative overhead of

>simply doing a bay create for the various tenants.

>

>Placing tenancy inside a COE seems interesting, but no COE does that

>today.  Maybe in the future they will.  Magnum was designed to present an

>integration point between COEs and OpenStack today, not five years down

>the road.  Its not as if we took shortcuts to get to where we are.

>

>I will grant you that density is lower with the current design of Magnum

>vs a full on integration with OpenStack within the COE itself.  However,

>that model which is what I believe you proposed

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-10-01 Thread Hongbin Lu
Do you mean this proposal 
http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html
? It looks like support for hierarchical roles/privileges, and I couldn't find 
anything related to resource sharing. I am not sure whether it can address the use 
cases Kris mentioned.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: October-01-15 11:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

I believe keystone already supports hierarchical projects

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
Kris,

I think the proposal of hierarchical projects is out of the scope of magnum, 
and you might need to bring it up at a keystone or cross-project meeting. I am 
going to propose a workaround that might work for you under the existing tenancy 
model.

Suppose there is a department (department A) with two subteams (team 1 and team 
2). You can create three projects:

* Project A

* Project A-1

* Project A-2

Then you can assign users to projects in the following ways:

* Assign team 1 members to both Project A and Project A-1

* Assign team 2 members to both Project A and Project A-2

Then you can create a bay at project A, which is shared by the whole 
department. In addition, each subteam can create their own bays at project A-X 
if they want. Does it address your use cases?

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain, and past experience 
tells me this won't be practical or scale; however, from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10-20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 still sucks.

From my point of view an ideal use case for companies like ours
(yahoo/godaddy) would be to be able to support hierarchical projects in magnum.
That way we could create a project for each department, and then the subteams
of those departments can have their own projects.  We create a bay per
department.  Sub-projects, if they want to, can support creation of their own
bays (but support of the kube cluster would then fall to that team).  When a
sub-project spins up a pod on a bay, minions get created inside that team's sub
projects and the containers in that pod run on the capacity that was spun up
under that project; the minions for each pod would be in a scaling group and
as such grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don't want to fall in line with 
the provided resource a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,

>

>If you share resources, you give up multi-tenancy.  No COE system has the

>concept of multi-tenancy (kubernetes has some basic implementation but it

>is totally insecure).  Not only does multi-tenancy have to "look like" it

>offers multiple tenants isolation, but it actually has to deliver the

>goods.

>

>I understand that at first glance a company like Yahoo may not want

>separate bays for their various applications because of the perceived

>administrative overhead.  I would then challenge Yahoo to go deploy a COE

>like kubernetes (which has no multi-tenancy or a very basic implementation

>of such) and get it to work with hundreds of different competing

>applications.  I would speculate the administrative overhead of getting

>all that to work would be greater then the administrative ove

[openstack-dev] [Magnum]: PTL Voting is now open

2015-10-01 Thread Tony Breeds
If you are a Foundation individual member and had a commit in one of Magnum's
projects[0] over the Kilo-Liberty timeframe (September 18, 2014 06:00 UTC to
September 18, 2015 05:59 UTC) then you are eligible to vote. You should find
an email with a link to the Condorcet page, where you can cast your vote, in
the inbox of your gerrit preferred email address[1].

What to do if you don't see the email and have a commit in at least one of the
programs having an election:
  * check the trash or spam folders of your gerrit Preferred Email address, in
case it went into trash or spam
  * wait a bit and check again, in case your email server is a bit slow
  * find the sha of at least one commit from the program project repos[0] (see
the example below) and
  * email myself and Tristan[2] at the below email addresses. If we can confirm
that you are entitled to vote, we will add you to the voters list for the
appropriate election.
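
For example, one way to find such a sha is to clone one of the eligible repos
and list your own commits in the election window (the author address below is a
placeholder for your gerrit preferred email):

    git clone https://git.openstack.org/openstack/magnum
    cd magnum
    git log --author="you@example.com" \
            --since="2014-09-18 06:00" --until="2015-09-18 05:59" \
            --format="%H %s"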

Our democratic process is important to the health of OpenStack, please exercise
your right to vote.

Candidate statements/platforms can be found linked to Candidate names on this
page:
https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates

Happy voting,

[0] The list of the program projects eligible for electoral status:
https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections#n1151

[1] Sign into review.openstack.org:
Go to Settings > Contact Information.
Look at the email listed as your Preferred Email.
That is where the ballot has been sent.

[2] Tony's email: tony at bakeyournoodle dot com
Tristan's email: tdecacqu at redhat dot com

Yours Tony.


pgpLf5EA7maeQ.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum]: PTL Voting is now open

2015-10-01 Thread Anne Gentle
I want to thank personally and publicly our election officials for their
willingness to take care of this election. Nice work, Tony and Tristan.
Much appreciated.

Anne

On Thu, Oct 1, 2015 at 5:58 PM, Tony Breeds  wrote:

> If you are a Foundation individual member and had a commit in one of
> Magnum's
> projects[0] over the Kilo-Liberty timeframe (September 18, 2014 06:00 UTC
> to
> September 18, 2015 05:59 UTC) then you are eligible to vote. You should
> find
> your email with a link to the Condorcet page to cast your vote in the
> inbox of
> your gerrit preferred email[1].
>
> What to do if you don't see the email and have a commit in at least one of
> the
> programs having an election:
>   * check the trash or spam folders of your gerrit Preferred Email
> address, in
> case it went into trash or spam
>   * wait a bit and check again, in case your email server is a bit slow
>   * find the sha of at least one commit from the program project repos[0]
> and
>   * email myself and Tristan[2] at the below email addresses. If we can
> confirm
> that you are entitled to vote, we will add you to the voters list for
> the
> appropriate election.
>
> Our democratic process is important to the health of OpenStack, please
> exercise
> your right to vote.
>
> Candidate statements/platforms can be found linked to Candidate names on
> this
> page:
>
> https://wiki.openstack.org/wiki/PTL_Elections_September_2015#Confirmed_Candidates
>
> Happy voting,
>
> [0] The list of the program projects eligible for electoral status:
>
> https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=sept-2015-elections#n1151
>
> [1] Sign into review.openstack.org:
> Go to Settings > Contact Information.
> Look at the email listed as your Preferred Email.
> That is where the ballot has been sent.
>
> [2] Tony's email: tony at bakeyournoodle dot com
> Tristan's email: tdecacqu at redhat dot com
>
> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-10-01 Thread Jay Lau
+1 for both! Welcome!

On Thu, Oct 1, 2015 at 7:07 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

> +1 for both. Welcome!
>
>
>
> *From:* Davanum Srinivas [mailto:dava...@gmail.com]
> *Sent:* September-30-15 7:00 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] New Core Reviewers
>
>
>
> +1 from me for both Vilobh and Hua.
>
>
>
> Thanks,
>
> Dims
>
>
>
> On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
>
> Core Reviewers,
>
> I propose the following additions to magnum-core:
>
> +Vilobh Meshram (vilobhmm)
> +Hua Wang (humble00)
>
> Please respond with +1 to agree or -1 to veto. This will be decided by
> either a simple majority of existing core reviewers, or by lazy consensus
> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>
> Thanks,
>
> Adrian Otto
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
+1

Pretty please don't make it a deployment project, because really some
other project that just specializes in deployment (ansible, chef,
puppet...) can do that better. I do get how public clouds can find a
deployment project useful (it allows customers to try out these new
~fancy~ COE things), but I also tend to think it's short-term thinking
to believe that such a project will last.

Now an integrated COE <-> openstack (keystone, cinder, neutron...)
project I think really does provide value and has some really neat
possibilities to provide a unique value add to openstack; a project that
can deploy some other software, meh, not so much IMHO. Of course an
integrated COE <-> openstack project will be much harder,
especially as the COE projects are not openstack 'native', but nothing
worth doing is easy. I hope that it was known that COE projects are a
new (and rapidly shifting) landscape and the going wasn't going to be
easy when magnum was created; don't lose hope! (I'm cheering for you
guys/gals).

My 2 cents,

Josh

On Wed, 30 Sep 2015 00:00:17 -0400
Monty Taylor <mord...@inaugust.com> wrote:

> *waving hands wildly at details* ...
> 
> I believe that the real win is if Magnum's control plane can integrate 
> the network and storage fabrics that exist in an OpenStack with 
> kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's
> not interesting ... an ansible playbook can do that in 5 minutes.
> OTOH - deploying some kube into a cloud in such a way that it shares
> a tenant network with some VMs that are there - that's good stuff and
> I think actually provides significant value.
> 
> On 09/29/2015 10:57 PM, Jay Lau wrote:
> > +1 to Egor, I think that the final goal of Magnum is container as a
> > service but not coe deployment as a service. ;-)
> >
> > Especially since we are also working on Magnum UI, the Magnum UI should
> > export some interfaces to enable end users to create container
> > applications, not only coe deployment.
> >
> > I hope that the Magnum can be treated as another "Nova" which is
> > focusing on container service. I know it is difficult to unify all
> > of the concepts in different coe (k8s has pod, service, rc, swarm
> > only has container, nova only has VM, PM with different
> > hypervisors), but this deserve some deep dive and thinking to see
> > how can move forward.
> >
> > On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com
> > <mailto:e...@walmartlabs.com>> wrote:
> >
> > definitely ;), but there are some thoughts on Tom's email.
> >
> > I agree that we shouldn't reinvent apis, but I don't think
> > Magnum should only focus on deployment (I feel we will become
> > another Puppet/Chef/Ansible module if we do it ):)
> > I believe our goal should be to seamlessly integrate
> > Kub/Mesos/Swarm into the OpenStack ecosystem
> > (Neutron/Cinder/Barbican/etc) even if we need to step in to
> > Kub/Mesos/Swarm communities for that.
> >
> > —
> > Egor
> >
> > From: Adrian Otto <adrian.o...@rackspace.com
> > <mailto:adrian.o...@rackspace.com><mailto:adrian.o...@rackspace.com
> > <mailto:adrian.o...@rackspace.com>>>
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)" <openstack-dev@lists.openstack.org
> > 
> > <mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org
> > <mailto:openstack-dev@lists.openstack.org>>>
> > Date: Tuesday, September 29, 2015 at 08:44
> > To: "OpenStack Development Mailing List (not for usage
> > questions)" <openstack-dev@lists.openstack.org
> > 
> > <mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org
> > <mailto:openstack-dev@lists.openstack.org>>>
> > Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >
> > This is definitely a topic we should cover in Tokyo.
> >
> > On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
> > <daneh...@cisco.com
> > <mailto:daneh...@cisco.com><mailto:daneh...@cisco.com
> > <mailto:daneh...@cisco.com>>> wrote:
> >
> >
> > +1
> >
> > From: Tom Cammann <tom.camm...@hpe.com
> > <mailto:tom.camm...@hpe.com><mailto:tom.camm...@hpe.com
> > <mailto:tom.camm...@hpe.com>>>
> > Reply-To: "openstack-dev@lists.openstack.org
> > 
> > <mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstac

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Peng Zhao
Echoing Monty:

> I believe that the real win is if Magnum's control plane can integrate the
> network and storage fabrics that exist in an OpenStack with kube/mesos/swarm.

We are working on the Cinder (ceph), Neutron, Keystone integration in HyperStack
[1] and would love to contribute. Another TODO is the multi-tenancy support in
k8s/swarm/mesos. A global scheduler/orchestrator for all tenants yields a higher
utilization rate than separate schedulers for each.

[1] https://launchpad.net/hyperstack - Hyper - Make VM run like Container


On Wed, Sep 30, 2015 at 12:00 PM, Monty Taylor < mord...@inaugust.com > wrote:
*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate the
network and storage fabrics that exist in an OpenStack with kube/mesos/swarm.
Just deploying is VERY meh. I do not care - it's not interesting ... an ansible
playbook can do that in 5 minutes. OTOH - deploying some kube into a cloud in
such a way that it shares a tenant network with some VMs that are there - that's
good stuff and I think actually provides significant value.

On 09/29/2015 10:57 PM, Jay Lau wrote:
+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially since we are also working on the Magnum UI, the Magnum UI should export
some interfaces to enable end users to create container applications, not
only coe deployment.

I hope that the Magnum can be treated as another “Nova” which is
focusing on container service. I know it is difficult to unify all of
the concepts in different coe (k8s has pod, service, rc, swarm only has
container, nova only has VM, PM with different hypervisors), but this
deserve some deep dive and thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz < e...@walmartlabs.com
> wrote:

definitely ;), but there are some thoughts on Tom's email.

I agree that we shouldn't reinvent apis, but I don't think Magnum
should only focus on deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into
the OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to
step in to Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
< daneh...@cisco.com
>> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be
very difficult and probably a wasted effort trying to consolidate
their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question:
should pod/service/rc be deprecated if the user can easily get to
the k8s api?
Even if we want to orchestrate these in a Heat template, the
corresponding heat resources can just interface with k8s instead of
Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive
docker compose is just command line tool which doesn’t have any api
or scheduling feat

From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
__ __



Also I belive docker compose is just command line tool which doesn’t
have any api or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented
docker compose executor for Mesos
( https://github.com/mohitsoni/compose-executor )
which can give you pod like experience.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Monday, September 28, 2015 at 22:03
To: “OpenStack Development Mailing List (not for us

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
 other COE specific disjoint system).

Thoughts?

This one becomes especially hard if said COE(s) don't even have a
tenancy model in the first place :-/


Thanks,

Adrian


On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni<devdatta.kulka...@rackspace.com
<mailto:devdatta.kulka...@rackspace.com>> wrote:

+1 Hongbin.

From the perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu<hongbin...@huawei.com
<mailto:hongbin...@huawei.com>> Sent: Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau@gmail.com] Sent: September-29-15
10:57 PM To: OpenStack Development Mailing List (not for usage
questions) Subject: Re: [openstack-dev] [magnum]swarm + compose =
k8s?



+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially since we are also working on the Magnum UI, the Magnum UI should
export some interfaces to enable end users to create container
applications, not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all
of the concepts in different coe (k8s has pod, service, rc, swarm
only has container, nova only has VM, PM with different
hypervisors), but this deserve some deep dive and thinking to see
how can move forward.





On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com>
wrote: definitely ;), but there are some thoughts on Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should focus only on deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):) I believe our goal should
be to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem
(Neutron/Cinder/Barbican/etc) even if we need to step in to the
Kub/Mesos/Swarm communities for that.

— Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>



Date: Tuesday, September 29, 2015 at 08:44

To: "OpenStack Development Mailing List (not for usage
questions)"<openstack-dev@lists.openstack.org
<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>



Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>



Date: Tuesday, September 29, 2015 at 2:22 AM

To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>



Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


This has been my thinking in the last couple of months to
completely deprecate the COE specific APIs such as pod/service/rc
and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to
be very difficult and probably a wasted effort trying to
consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote: Would it make sense to ask the
opposite of Wanghua's question: should pod/service/

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow

Adrian Otto wrote:

Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.



So an interesting question, but how is tenancy going to work, will there 
be a keystone tenancy <-> COE tenancy adapter? From my understanding a 
whole bay (COE?) is owned by a tenant, which is great for tenants that 
want to ~experiment~ with a COE but seems disjoint from the end goal of 
an integrated COE where the tenancy model of both keystone and the COE 
is either the same or is adapted via some adapter layer.


For example:

1) Bay that is connected to uber-tenant 'yahoo'

   1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
   1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
   ...

All of that tenancy information is in keystone, not replicated/synced into 
the COE (or in some other COE specific disjoint system).


Thoughts?

This one becomes especially hard if said COE(s) don't even have a 
tenancy model in the first place :-/



Thanks,

Adrian


On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni<devdatta.kulka...@rackspace.com>  wrote:

+1 Hongbin.

From perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu<hongbin...@huawei.com> Sent: Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau@gmail.com] Sent: September-29-15
10:57 PM To: OpenStack Development Mailing List (not for usage
questions) Subject: Re: [openstack-dev] [magnum]swarm + compose =
k8s?



+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should
export some interfaces to enable end user can create container
applications but not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all
of the concepts in different coe (k8s has pod, service, rc, swarm
only has container, nova only has VM,  PM with different
hypervisors), but this deserve some deep dive and thinking to see
how can move forward.





On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz<e...@walmartlabs.com>
wrote: definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinv

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Joshua,

The tenancy boundary in Magnum is the bay. You can place whatever single-tenant 
COE you want into the bay (Kubernetes, Mesos, Docker Swarm). This allows you to 
use native tools to interact with the COE in that bay, rather than using an 
OpenStack specific client. If you want to use the OpenStack client to create 
bays, pods, and containers, you can do that today. You also have the 
choice, for example, to run kubectl against your Kubernetes bay, if you so 
desire.
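
For illustration, the native-tool flow looks roughly like this (a sketch only; exact flags, image names, and the API port vary by release and by whether TLS is enabled on the bay):

    # create a Kubernetes bay from an existing baymodel, then talk to it natively
    magnum bay-create --name k8sbay --baymodel k8sbaymodel --node-count 2
    magnum bay-show k8sbay                      # note the api_address field
    kubectl --server=https://<api_address>:6443 \
        --certificate-authority=ca.crt \
        --client-certificate=client.crt \
        --client-key=client.key \
        get pods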

Bays offer both a management and security isolation between multiple tenants. 
There is no intent to share a single bay between multiple tenants. In your use 
case, you would simply create two bays, one for each of the yahoo-mail.XX 
tenants. I am not convinced that having an uber-tenant makes sense.

Adrian

On Sep 30, 2015, at 1:13 PM, Joshua Harlow <harlo...@outlook.com> wrote:

Adrian Otto wrote:
Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.


So an interesting question, but how is tenancy going to work, will there be a 
keystone tenancy <-> COE tenancy adapter? From my understanding a whole bay 
(COE?) is owned by a tenant, which is great for tenants that want to 
~experiment~ with a COE but seems disjoint from the end goal of an integrated 
COE where the tenancy model of both keystone and the COE is either the same or 
is adapted via some adapter layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

  1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
  1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
  ...

All those tenancy information is in keystone, not replicated/synced into the 
COE (or in some other COE specific disjoint system).

Thoughts?

This one becomes especially hard if said COE(s) don't even have a tenancy model 
in the first place :-/

Thanks,

Adrian

On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni <devdatta.kulka...@rackspace.com> wrote:

+1 Hongbin.

From perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu <hongbin...@huawei.com> Sent: 
Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


F

Re: [openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Hongbin Lu
+1 for both. Welcome!

From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: September-30-15 7:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] New Core Reviewers

+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:
Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a 
simple majority of existing core reviewers, or by lazy consensus concluding on 
2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Steven Dake (stdake)
expect from all OpenStack services, and simplified integration
>>>> with a wealth of existing OpenStack services (Identity,
>>>> Orchestration, Images, Networks, Storage, etc.).
>>>>
>>>> The areas we have disagreement are whether the features offered for
>>>> the k8s COE should be mirrored in other COE’s. We have not attempted
>>>> to do that yet, and my suggestion is to continue resisting that
>>>> temptation because it is not aligned with our vision. We are not here
>>>> to re-invent container management as a hosted service. Instead, we
>>>> aim to integrate prevailing technology, and make it work great with
>>>> OpenStack. For example, adding docker-compose capability to Magnum is
>>>> currently out-of-scope, and I think it should stay that way. With
>>>> that said, I’m willing to have a discussion about this with the
>>>> community at our upcoming Summit.
>>>>
>>>> An argument could be made for feature consistency among various COE
>>>> options (Bay Types). I see this as a relatively low value pursuit.
>>>> Basic features like integration with OpenStack Networking and
>>>> OpenStack Storage services should be universal. Whether you can
>>>> present a YAML file for a bay to perform internal orchestration is
>>>> not important in my view, as long as there is a prevailing way of
>>>> addressing that need. In the case of Docker Bays, you can simply
>>>> point a docker-compose client at it, and that will work fine.
>>>>
>>>
>>> So an interesting question, but how is tenancy going to work, will
>>> there be a keystone tenancy <-> COE tenancy adapter? From my
>>> understanding a whole bay (COE?) is owned by a tenant, which is great
>>> for tenants that want to ~experiment~ with a COE but seems disjoint
>>> from the end goal of an integrated COE where the tenancy model of both
>>> keystone and the COE is either the same or is adapted via some adapter
>>> layer.
>>>
>>> For example:
>>>
>>> 1) Bay that is connected to uber-tenant 'yahoo'
>>>
>>> 1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us
>>> <http://yahoo-mail.us/>'
>>> 1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
>>> ...
>>>
>>> All those tenancy information is in keystone, not replicated/synced
>>> into the COE (or in some other COE specific disjoint system).
>>>
>>> Thoughts?
>>>
>>> This one becomes especially hard if said COE(s) don't even have a
>>> tenancy model in the first place :-/
>>>
>>>> Thanks,
>>>>
>>>> Adrian
>>>>
>>>>> On Sep 30, 2015, at 8:58 AM, Devdatta
>>>>> Kulkarni<devdatta.kulka...@rackspace.com
>>>>> <mailto:devdatta.kulka...@rackspace.com>> wrote:
>>>>>
>>>>> +1 Hongbin.
>>>>>
>>>>> From perspective of Solum, which hopes to use Magnum for its
>>>>> application container scheduling requirements, deep integration of
>>>>> COEs with OpenStack services like Keystone will be useful.
>>>>> Specifically, I am thinking that it will be good if Solum can
>>>>> depend on Keystone tokens to deploy and schedule containers on the
>>>>> Bay nodes instead of having to use COE specific credentials. That
>>>>> way, container resources will become first class components that
>>>>> can be monitored using Ceilometer, access controlled using
>>>>> Keystone, and managed from within Horizon.
>>>>>
>>>>> Regards, Devdatta
>>>>>
>>>>>
>>>>> From: Hongbin Lu<hongbin...@huawei.com
>>>>> <mailto:hongbin...@huawei.com>> Sent: Wednesday, September
>>>>> 30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
>>>>> usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
>>>>> compose = k8s?
>>>>>
>>>>>
>>>>> +1 from me as well.
>>>>>
>>>>> I think what makes Magnum appealing is the promise to provide
>>>>> container-as-a-service. I see coe deployment as a helper to achieve
>>>>> the promise, instead of the main goal.
>>>>>
>>>>> Best regards, Hongbin
>>>>>
>>>>>
>>>>> From: Jay Lau [mailto:jay.l

Re: [openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Davanum Srinivas
+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto 
wrote:

> Core Reviewers,
>
> I propose the following additions to magnum-core:
>
> +Vilobh Meshram (vilobhmm)
> +Hua Wang (humble00)
>
> Please respond with +1 to agree or -1 to veto. This will be decided by
> either a simple majority of existing core reviewers, or by lazy consensus
> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>
> Thanks,
>
> Adrian Otto
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
>>> <mailto:harlo...@outlook.com>>  wrote:
>>>>
>>>> Adrian Otto wrote:
>>>>> Thanks everyone who has provided feedback on this thread. The good
>>>>> news is that most of what has been asked for from Magnum is actually
>>>>> in scope already, and some of it has already been implemented. We
>>>>> never aimed to be a COE deployment service. That happens to be a
>>>>> necessity to achieve our more ambitious goal: We want to provide a
>>>>> compelling Containers-as-a-Service solution for OpenStack clouds in a
>>>>> way that offers maximum leverage of what’s already in OpenStack,
>>>>> while giving end users the ability to use their favorite tools to
>>>>> interact with their COE of choice, with the multi-tenancy capability
>>>>> we expect from all OpenStack services, and simplified integration
>>>>> with a wealth of existing OpenStack services (Identity,
>>>>> Orchestration, Images, Networks, Storage, etc.).
>>>>>
>>>>> The areas we have disagreement are whether the features offered for
>>>>> the k8s COE should be mirrored in other COE’s. We have not attempted
>>>>> to do that yet, and my suggestion is to continue resisting that
>>>>> temptation because it is not aligned with our vision. We are not here
>>>>> to re-invent container management as a hosted service. Instead, we
>>>>> aim to integrate prevailing technology, and make it work great with
>>>>> OpenStack. For example, adding docker-compose capability to Magnum is
>>>>> currently out-of-scope, and I think it should stay that way. With
>>>>> that said, I’m willing to have a discussion about this with the
>>>>> community at our upcoming Summit.
>>>>>
>>>>> An argument could be made for feature consistency among various COE
>>>>> options (Bay Types). I see this as a relatively low value pursuit.
>>>>> Basic features like integration with OpenStack Networking and
>>>>> OpenStack Storage services should be universal. Whether you can
>>>>> present a YAML file for a bay to perform internal orchestration is
>>>>> not important in my view, as long as there is a prevailing way of
>>>>> addressing that need. In the case of Docker Bays, you can simply
>>>>> point a docker-compose client at it, and that will work fine.
>>>>>
>>>> So an interesting question, but how is tenancy going to work, will
>>>> there be a keystone tenancy<->  COE tenancy adapter? From my
>>>> understanding a whole bay (COE?) is owned by a tenant, which is great
>>>> for tenants that want to ~experiment~ with a COE but seems disjoint
>>>> from the end goal of an integrated COE where the tenancy model of both
>>>> keystone and the COE is either the same or is adapted via some adapter
>>>> layer.
>>>>
>>>> For example:
>>>>
>>>> 1) Bay that is connected to uber-tenant 'yahoo'
>>>>
>>>> 1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us
>>>> <http://yahoo-mail.us/>'
>>>> 1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
>>>> ...
>>>>
>>>> All those tenancy information is in keystone, not replicated/synced
>>>> into the COE (or in some other COE specific disjoint system).
>>>>
>>>> Thoughts?
>>>>
>>>> This one becomes especially hard if said COE(s) don't even have a
>>>> tenancy model in the first place :-/
>>>>
>>>>> Thanks,
>>>>>
>>>>> Adrian
>>>>>
>>>>>> On Sep 30, 2015, at 8:58 AM, Devdatta
>>>>>> Kulkarni<devdatta.kulka...@rackspace.com
>>>>>> <mailto:devdatta.kulka...@rackspace.com>>  wrote:
>>>>>>
>>>>>> +1 Hongbin.
>>>>>>
>>>>>>  From perspective of Solum, which hopes to use Magnum for its
>>>>>> application container scheduling requirements, deep integration of
>>>>>> COEs with OpenStack services like Keystone will be useful.
>>>>>> Specifically, I am thinking that it will be good if Solum can
>>>>>> depend on Keystone tokens to deploy and schedule containers on the
>>>>>> Bay nodes instead of having to use COE specific credentials. That
>&g

[openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Adrian Otto
Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a 
simple majority of existing core reviewers, or by lazy consensus concluding on 
2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Kris G. Lindgren
We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain, and past experience 
tells me this won't be practical or scale; however, from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud; about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert over to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spin up in projects where people may be running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.  Even if I am off by 
75%, 250 still sucks.

From my point of view, an ideal use case for companies like ours (yahoo/godaddy) 
would be to support hierarchical projects in magnum.  That way we could 
create a project for each department, and then the subteams of those 
departments can have their own projects.  We create a bay per department.  
Sub-projects, if they want to, can support creation of their own bays (but 
support of the kube cluster would then fall to that team).  When a sub-project 
spins up a pod on a bay, minions get created inside that team's sub-projects and 
the containers in that pod run on the capacity that was spun up under that 
project; the minions for each pod would be in a scaling group and as such 
grow/shrink as dictated by load.

The above would make it so that we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don’t want to fall in line with 
the provided resources a way to make their own, and still offer a "good enough 
for a single company" level of multi-tenancy.

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look like” it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.

>

>I understand that at first glance a company like Yahoo may not want >separate 
>bays for their various applications because of the perceived >administrative 
>overhead. I would then challenge Yahoo to go deploy a COE >like kubernetes 
>(which has no multi-tenancy or a very basic implementation >of such) and get 
>it to work with hundreds of different competing >applications. I would 
>speculate the administrative overhead of getting >all that to work would be 
>greater then the administrative overhead of >simply doing a bay create for the 
>various tenants.

>

>Placing tenancy inside a COE seems interesting, but no COE does that >today. 
>Maybe in the future they will. Magnum was designed to present an >integration 
>point between COEs and OpenStack today, not five years down >the road. Its not 
>as if we took shortcuts to get to where we are.

>

>I will grant you that density is lower with the current design of Magnum >vs a 
>full on integration with OpenStack within the COE itself. However, >that model 
>which is what I believe you proposed is a huge design change to >each COE 
>which would overly complicate the COE at the gain of increased >density. I 
>personally don’t feel that pain is worth the gain.


___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Kris,

On Sep 30, 2015, at 4:26 PM, Kris G. Lindgren 
> wrote:

We are looking at deploying magnum as an answer for how do we do containers 
company wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain and pas experience 
tells me this wont be practical/scale, however from experience I also know 
exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of 
the projects are currently doing some form of containers on their own, with 
more joining every day.  If all of these projects were to convert of to the 
current magnum configuration we would suddenly be attempting to 
support/configure ~1k magnum clusters.  Considering that everyone will want it 
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + 
floating ips.  From a capacity standpoint this is an excessive amount of 
duplicated infrastructure to spinup in projects where people maybe running 
10–20 containers per project.  From an operator support perspective this is a 
special level of hell that I do not want to get into.   Even if I am off by 
75%,  250 still sucks.

Keep in mind that your magnum bays can use the same floating ip addresses that 
your containers do, and the container hosts are shared between the COE nodes 
and the containers that make up the applications running in the bay. It is 
possible to use private address space for that, and proxy public facing access 
through a proxy layer that uses names to route connections to the appropriate 
magnum bay. That’s how you can escape the problem of public IP addresses as a 
scarce resource.

Also, if you use Magnum to start all those bays, they can all look the same, 
rather than the ~1000 container environments you have today that probably don’t 
look very similar, one to the next. Upgrading becomes much more achievable when 
you have wider consistency. There is a new feature currently in review called 
public baymodel that allows the cloud operator to define the bay model, but 
individual tenants can start bays based on that one common “template”. This is 
a way of centralizing most of your configuration. This balances a lot of the 
operational concern.

From my point of view an ideal use case for companies like ours (yahoo/godaddy) 
would be able to support hierarchical projects in magnum.  That way we could 
create a project for each department, and then the subteams of those 
departments can have their own projects.  We create a a bay per department.  
Sub-projects if they want to can support creation of their own bays (but 
support of the kube cluster would then fall to that team).  When a sub-project 
spins up a pod on a bay, minions get created inside that teams sub projects and 
the containers in that pod run on the capacity that was spun up  under that 
project, the minions for each pod would be a in a scaling group and as such 
grow/shrink as dictated by load.

You can do this today by sharing your TLS certs. In fact, you could make the 
cert signing a bit more sophisticated than it is today, and allow each subteam 
to have a unique TLS cert that can auth against a common bay.
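
A rough sketch of that flow (the magnum ca-sign/ca-show commands are from memory and their exact flags and output format may differ by release; the bay and subteam names are made up):

    # each subteam generates its own key and CSR
    openssl genrsa -out subteam-a.key 4096
    openssl req -new -key subteam-a.key -out subteam-a.csr -subj "/CN=subteam-a"

    # someone with access to the departmental bay signs the CSR against the bay CA
    magnum ca-sign --bay dept-bay --csr subteam-a.csr > subteam-a.crt
    magnum ca-show --bay dept-bay > ca.crt

    # subteam-a then authenticates to the shared bay with its own cert/key pair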

The above would make it so where we support a minimal, yet imho reasonable, 
number of kube clusters, give people who can't/don’t want to fall inline with 
the provided resource a way to make their own and still offer a "good enough 
for a single company" level of multi-tenancy.

This is different than what Joshua was asking for with identities in keystone, 
because today’s COE’s themselves don’t have modular identity solutions that are 
implemented with multi-tenancy.

Imagine for a moment that you don’t need to run your bays on Nova instances 
that are virtual machines. What if you had an additional host aggregate that 
could produce libvirt/lxc guests that you can use to form bays. They can 
actually be composed of nodes that are sourced from BOTH your libvirt/lxc host 
aggregate (for hosting your COE’s) and your normal KVM (or the hypervisor) host 
aggregate for your apps to use. Then your 
bays (what you referred to as an “excessive amount of duplicated infrastructure”) 
become processes running on a much smaller number of compute nodes, and the 
effective consolidation ratio improves. You could 
do this by specifying a different master_flavor_id and flavor_id such that 
these fall on different host aggregates. As long as you are “all one company” 
and are not concerned primarily with security isolation between neighboring COE 
master nodes, that approach may actually be the right balance, and would not 
require an architectural shift or figuring out how to accomplish nested tenants.
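
As a concrete sketch (hypothetical flavor names; it assumes 'lxc.small' maps to the libvirt/lxc host aggregate, 'm1.medium' to the KVM aggregate, and that the baymodel accepts a separate master flavor):

    magnum baymodel-create --name shared-k8s \
        --coe kubernetes \
        --image-id fedora-atomic \
        --keypair-id default \
        --external-network-id public \
        --master-flavor-id lxc.small \
        --flavor-id m1.medium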

Adrian

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to “look 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Ryan Rossiter


On 9/29/2015 11:00 PM, Monty Taylor wrote:

*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate 
the network and storage fabrics that exist in an OpenStack with 
kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's not 
interesting ... an ansible playbook can do that in 5 minutes. OTOH - 
deploying some kube into a cloud in such a way that it shares a tenant 
network with some VMs that are there - that's good stuff and I think 
actually provides significant value.

+1 on sharing the tenant network with VMs.

When I look at Magnum being an OpenStack project, I see it winning by 
integrating itself with the other projects and making containers just 
work in your cloud. Here's the scenario I would want a cloud with Magnum 
to handle (though it may be very pie-in-the-sky):


I want to take my container, replicate it across 3 container host VMs 
(each of which lives on a different compute host), stick a Neutron LB in 
front of it, and hook it up to the same network as my 5 other VMs.


This way, it handles my containers in a service, and integrates 
beautifully with my existing OpenStack cloud.
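
In 2015-era kubectl terms, that scenario is roughly the following (a sketch; it assumes a Kubernetes bay whose minions already sit on the shared tenant network and an OpenStack cloud-provider integration that backs a LoadBalancer service with a Neutron LB; the image name is made up):

    kubectl run myapp --image=myorg/myapp:1.0 --replicas=3 --port=8080
    kubectl expose rc myapp --port=80 --target-port=8080 --type=LoadBalancer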


On 09/29/2015 10:57 PM, Jay Lau wrote:

+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export
some interfaces to enable end user can create container applications but
not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all of
the concepts in different coe (k8s has pod, service, rc, swarm only has
container, nova only has VM, PM with different hypervisors), but this
deserve some deep dive and thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com
<mailto:e...@walmartlabs.com>> wrote:

definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus at deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to
OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to
step in to Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and 
container.


As we now support Mesos, Kubernetes and Docker Swarm its going to be
very difficult and probably a wasted effort trying to consolidate
their separate APIs under a single Magnum API.

I'

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Hongbin Lu
+1 from me as well.

I think what makes Magnum appealing is the promise to provide 
container-as-a-service. I see coe deployment as a helper to achieve the 
promise, instead of the main goal.

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

+1 to Egor, I think that the final goal of Magnum is container as a service but 
not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export some 
interfaces to enable end user can create container applications but not only 
coe deployment.
I hope that the Magnum can be treated as another "Nova" which is focusing on 
container service. I know it is difficult to unify all of the concepts in 
different coe (k8s has pod, service, rc, swarm only has container, nova only 
has VM, PM with different hypervisors), but this deserve some deep dive and 
thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:
definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum should only 
focus at deployment (I feel we will become another Puppet/Chef/Ansible module 
if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to OpenStack 
ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step in to 
Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be very 
difficult and probably a wasted effort trying to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org<mailto:

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Thanks everyone who has provided feedback on this thread. The good news is that 
most of what has been asked for from Magnum is actually in scope already, and 
some of it has already been implemented. We never aimed to be a COE deployment 
service. That happens to be a necessity to achieve our more ambitious goal: We 
want to provide a compelling Containers-as-a-Service solution for OpenStack 
clouds in a way that offers maximum leverage of what’s already in OpenStack, 
while giving end users the ability to use their favorite tools to interact with 
their COE of choice, with the multi-tenancy capability we expect from all 
OpenStack services, and simplified integration with a wealth of existing 
OpenStack services (Identity, Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for the k8s COE 
should be mirrored in other COE’s. We have not attempted to do that yet, and my 
suggestion is to continue resisting that temptation because it is not aligned 
with our vision. We are not here to re-invent container management as a hosted 
service. Instead, we aim to integrate prevailing technology, and make it work 
great with OpenStack. For example, adding docker-compose capability to Magnum 
is currently out-of-scope, and I think it should stay that way. With that said, 
I’m willing to have a discussion about this with the community at our upcoming 
Summit.

An argument could be made for feature consistency among various COE options 
(Bay Types). I see this as a relatively low value pursuit. Basic features like 
integration with OpenStack Networking and OpenStack Storage services should be 
universal. Whether you can present a YAML file for a bay to perform internal 
orchestration is not important in my view, as long as there is a prevailing way 
of addressing that need. In the case of Docker Bays, you can simply point a 
docker-compose client at it, and that will work fine.
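
For the Docker bay case, that amounts to pointing the standard Docker client environment variables at the bay endpoint and running docker-compose as usual (a sketch; the port number and cert paths are illustrative):

    magnum bay-show swarmbay                    # note the api_address field
    export DOCKER_HOST=tcp://<api_address>:2376
    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH=~/certs/swarmbay    # ca.pem, cert.pem, key.pem signed by the bay CA
    docker-compose up -d                        # runs an ordinary docker-compose.yml against the bay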

Thanks,

Adrian

> On Sep 30, 2015, at 8:58 AM, Devdatta Kulkarni 
> <devdatta.kulka...@rackspace.com> wrote:
> 
> +1 Hongbin.
> 
> From perspective of Solum, which hopes to use Magnum for its application 
> container scheduling
> requirements, deep integration of COEs with OpenStack services like Keystone 
> will be useful.
> Specifically, I am thinking that it will be good if Solum can depend on 
> Keystone tokens to deploy 
> and schedule containers on the Bay nodes instead of having to use COE 
> specific credentials. 
> That way, container resources will become first class components that can be 
> monitored 
> using Ceilometer, access controlled using Keystone, and managed from within 
> Horizon.
> 
> Regards,
> Devdatta
> 
> 
> From: Hongbin Lu <hongbin...@huawei.com>
> Sent: Wednesday, September 30, 2015 9:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>   
> 
> +1 from me as well.
>  
> I think what makes Magnum appealing is the promise to provide 
> container-as-a-service. I see coe deployment as a helper to achieve the 
> promise, instead of  the main goal.
>  
> Best regards,
> Hongbin
>  
> 
> From: Jay Lau [mailto:jay.lau....@gmail.com]
> Sent: September-29-15 10:57 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>  
> 
> 
> +1 to Egor, I think that the final goal of Magnum is container as a service 
> but not coe deployment as a service. ;-)
> 
> Especially we are also working on Magnum UI, the Magnum UI should export some 
> interfaces to enable end user can create container applications but not only 
> coe deployment.
> 
> I hope that the Magnum can be treated as another "Nova" which is focusing on 
> container service. I know it is difficult to unify all of the concepts in 
> different coe (k8s has pod, service, rc, swarm only has container, nova only 
> has VM,  PM with different hypervisors), but this deserve some deep dive and 
> thinking to see how can move forward. 
> 
> 
> 
>  
> 
> On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:
> definitely ;), but the are some thoughts to Tom’s email.
> 
> I agree that we shouldn't reinvent apis, but I don’t think Magnum should only 
> focus at deployment (I feel we will become another Puppet/Chef/Ansible module 
> if we do it ):)
> I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to OpenStack 
> ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step in to 
> Kub/Mesos/Swarm communities for that.
> 
> —
> Egor
> 
> From: Adrian Otto <adrian.o...@rackspace.com>
> Reply-To: "OpenStack Devel

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Devdatta Kulkarni
+1 Hongbin.

From perspective of Solum, which hopes to use Magnum for its application 
container scheduling
requirements, deep integration of COEs with OpenStack services like Keystone 
will be useful.
Specifically, I am thinking that it will be good if Solum can depend on 
Keystone tokens to deploy 
and schedule containers on the Bay nodes instead of having to use COE specific 
credentials. 
That way, container resources will become first class components that can be 
monitored 
using Ceilometer, access controlled using Keystone, and managed from within 
Horizon.

Regards,
Devdatta


From: Hongbin Lu <hongbin...@huawei.com>
Sent: Wednesday, September 30, 2015 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
  

+1 from me as well.
 
I think what makes Magnum appealing is the promise to provide 
container-as-a-service. I see coe deployment as a helper to achieve the 
promise, instead of  the main goal.
 
Best regards,
Hongbin
 

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
  


+1 to Egor, I think that the final goal of Magnum is container as a service but 
not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export some 
interfaces to enable end user can create container applications but not only 
coe deployment.
 
I hope that the Magnum can be treated as another "Nova" which is focusing on 
container service. I know it is difficult to unify all of the concepts in 
different coe (k8s has pod, service, rc, swarm only has container, nova only 
has VM,  PM with different hypervisors), but this deserve some deep dive and 
thinking to see how can move forward. 
 


 

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:
definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum should only 
focus at deployment (I feel we will become another Puppet/Chef/Ansible module 
if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to OpenStack 
ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step in to 
Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be very 
difficult and probably a wasted effort trying to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also I belive

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread 王华
I agree with Tom to see Magnum as COEDaaS. k8s, swarm, and mesos are so
different in their architectures that magnum cannot provide a unified API to
users. So I think we should focus on deployment.

Regards,
Wanghua

On Tue, Sep 29, 2015 at 5:22 PM, Tom Cammann <tom.camm...@hpe.com> wrote:

> This has been my thinking in the last couple of months to completely
> deprecate the COE specific APIs such as pod/service/rc and container.
>
> As we now support Mesos, Kubernetes and Docker Swarm its going to be very
> difficult and probably a wasted effort trying to consolidate their separate
> APIs under a single Magnum API.
>
> I'm starting to see Magnum as COEDaaS - Container Orchestration Engine
> Deployment as a Service.
>
>
> On 29/09/15 06:30, Ton Ngo wrote:
>
> Would it make sense to ask the opposite of Wanghua's question: should
> pod/service/rc be deprecated if the user can easily get to the k8s api?
> Even if we want to orchestrate these in a Heat template, the corresponding
> heat resources can just interface with k8s instead of Magnum.
> Ton Ngo,
>
>
> From: Egor Guz <e...@walmartlabs.com>
> To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
> Date: 09/28/2015 10:20 PM
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> --
>
>
>
> Also I belive docker compose is just command line tool which doesn’t have
> any api or scheduling features.
> But during last Docker Conf hackathon PayPal folks implemented docker
> compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
> which can give you pod like experience.
>
> —
> Egor
>
> From: Adrian Otto <adrian.o...@rackspace.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Date: Monday, September 28, 2015 at 22:03
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org
> <openstack-dev@lists.openstack.org>>>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Wanghua,
>
> I do follow your logic, but docker-compose only needs the docker API to
> operate. We are intentionally avoiding re-inventing the wheel. Our goal is
> not to replace docker swarm (or other existing systems), but to complement
> it/them. We want to offer users of Docker the richness of native APIs and
> supporting tools. This way they will not need to compromise features or
> wait longer for us to implement each new feature as it is added. Keep in
> mind that our pod, service, and replication controller resources pre-date
> this philosophy. If we started out with the current approach, those would
> not exist in Magnum.
>
> Thanks,
>
> Adrian
>
> On Sep 28, 2015, at 8:32 PM, 王华 <wanghua.hum...@gmail.com> wrote:
>
> Hi folks,
>
> Magnum now exposes service, pod, etc to users in kubernetes coe, but
> exposes container in swarm coe. As I know, swarm is only a scheduler of
> container, which is like nova in openstack. Docker compose is an
> orchestration program which is like heat in openstack. k8s is the
> combination of scheduler and orchestration. So I think it is better to
> expose the apis in compose to users which are at the same level as k8s.
>
>
> Regards
> Wanghua
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
&g

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Tom Cammann
This has been my thinking over the last couple of months: completely
deprecate the COE-specific APIs such as pod/service/rc and container.


As we now support Mesos, Kubernetes and Docker Swarm, it's going to be
very difficult, and probably a wasted effort, to try to consolidate their
separate APIs under a single Magnum API.


I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.


On 29/09/15 06:30, Ton Ngo wrote:


Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the 
corresponding heat resources can just interface with k8s instead of 
Magnum.

Ton Ngo,
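
For reference, "easily get to the k8s api" really is a small amount of client
code once the bay's API endpoint and TLS credentials are in hand. A minimal
sketch (the endpoint, certificate paths and manifest below are placeholders,
not anything Magnum emits) might look like:

    # Sketch: create a pod by talking to the bay's native Kubernetes API,
    # bypassing the Magnum pod resource entirely. Endpoint and credentials
    # are placeholders for whatever the bay actually exposes.
    import requests

    K8S_API = "https://10.0.0.5:6443"   # assumed kube-apiserver endpoint
    pod_manifest = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "nginx-demo"},
        "spec": {"containers": [{"name": "nginx", "image": "nginx"}]},
    }

    resp = requests.post(
        K8S_API + "/api/v1/namespaces/default/pods",
        json=pod_manifest,
        cert=("cert.pem", "key.pem"),   # client cert issued for the bay
        verify="ca.pem",
    )
    resp.raise_for_status()
    print(resp.json()["metadata"]["name"], "created")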



From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" 
<openstack-dev@lists.openstack.org>

Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Also I belive docker compose is just command line tool which doesn’t 
have any api or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker 
compose executor for Mesos (https://github.com/mohitsoni/compose-executor)

which can give you pod like experience.

—
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage 
questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>

Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>

Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API 
to operate. We are intentionally avoiding re-inventing the wheel. Our 
goal is not to replace docker swarm (or other existing systems), but 
to compliment it/them. We want to offer users of Docker the richness 
of native APIs and supporting tools. This way they will not need to 
compromise features or wait longer for us to implement each new 
feature as it is added. Keep in mind that our pod, service, and 
replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.


Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>> wrote:


Hi folks,

Magnum now exposes service, pod, etc to users in kubernetes coe, but 
exposes container in swarm coe. As I know, swarm is only a scheduler 
of container, which is like nova in openstack. Docker compose is a 
orchestration program which is like heat in openstack. k8s is the 
combination of scheduler and orchestration. So I think it is better to 
expose the apis in compose to users which are at the same level as k8s.



Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread 王华
@Egor, docker compose is just a command-line tool now, but I think it will
change its architecture to a client/server model in the future; otherwise it
cannot do some of the more complicated jobs.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Daneyon Hansen (danehans)

+1

From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be very 
difficult and probably a wasted effort trying to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:

Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Also I belive docker compose is just command line tool which doesn’t have any 
api or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com><mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to compliment it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com><mailto:wanghua.hum...@gmail.com>>
 wrote:

Hi folks,

Magnum now exposes service, pod, etc to users in kubernetes coe, but exposes 
container in swarm coe. As I know, swarm is only a scheduler of container, 
which is like nova in openstack. Docker compose is a orchestration program 
which is like heat in openstack. k8s is the combination of scheduler and 
orchestration. So I think it is better to expose the apis in compose to users 
which are at the same level as k8s.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org><mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Adrian Otto
This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com<mailto:daneh...@cisco.com>> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be very 
difficult and probably a wasted effort trying to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also I belive docker compose is just command line tool which doesn’t have any 
api or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com><mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to compliment it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com><mailto:wanghua.hum...@gmail.com>>
 wrote:

Hi folks,

Magnum now exposes service, pod, etc to users in kubernetes coe, but exposes 
container in swarm coe. As I know, swarm is only a scheduler of container, 
which is like nova in openstack. Docker compose is a orchestration program 
which is like heat in openstack. k8s is the combination of scheduler and 
orchestration. So I think it is better to expose the apis in compose to users 
which are at the same level as k8s.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org><mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Monty Taylor

*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate
the network and storage fabrics that exist in an OpenStack cloud with
kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's not
interesting ... an ansible playbook can do that in 5 minutes. OTOH -
deploying some kube into a cloud in such a way that it shares a tenant
network with some VMs that are already there - that's good stuff and I think
it actually provides significant value.
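
To make the tenant-network case concrete, most of the work is looking up the
existing Neutron network and handing its UUID to whatever deploys the COE
nodes. A rough sketch, where the credentials, the network name, and the idea
that the bay definition accepts a fixed network are all assumptions:

    # Sketch: find the tenant network the existing VMs already use so the
    # kube/mesos/swarm nodes can be attached to the same network.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username="demo", password="secret",   # placeholder credentials
        tenant_name="demo", auth_url="http://controller:5000/v2.0",
    )

    nets = neutron.list_networks(name="private")["networks"]
    net_id = nets[0]["id"]

    # The network UUID would then be fed into the bay/baymodel definition (or
    # the Heat template behind it) so the COE nodes land on the same tenant
    # network as the VMs -- assuming such a knob is exposed.
    print("attach COE nodes to network", net_id)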


On 09/29/2015 10:57 PM, Jay Lau wrote:

+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export
some interfaces to enable end user can create container applications but
not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all of
the concepts in different coe (k8s has pod, service, rc, swarm only has
container, nova only has VM, PM with different hypervisors), but this
deserve some deep dive and thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com
<mailto:e...@walmartlabs.com>> wrote:

definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus at deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to
OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to
step in to Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
<daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: "openstack-dev@lists.openstack.org"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be
very difficult and probably a wasted effort trying to consolidate
their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question:
should pod/service/rc be deprecated if the user can easily get to
the k8s api?
Even if we want to orchestrate these in a Heat template, the
corresponding heat resources can just interface with k8s instead of
Magnum.
Ton Ngo,


From: Egor Guz <e...@wal

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Jay Lau
+1 to Egor, I think that the final goal of Magnum is container as a service,
not COE deployment as a service. ;-)

Especially since we are also working on the Magnum UI, the Magnum UI should
expose some interfaces that enable end users to create container applications,
not only COE deployments.

I hope that Magnum can be treated as another "Nova" which focuses
on container service. I know it is difficult to unify all of the concepts
in the different COEs (k8s has pod, service, rc; swarm only has container;
nova only has VM/PM with different hypervisors), but this deserves some deep
dive and thinking to see how we can move forward.
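
To illustrate why the unification is hard, a generic "create a container
application" call would have to be translated into a different native concept
per COE. A purely illustrative sketch of that mapping (none of this is
existing Magnum code):

    # Sketch: what a unified "container application" request would have to be
    # turned into for each COE. Purely illustrative, not Magnum code.
    COE_UNIT = {
        "kubernetes": "pod (plus service/rc for exposure and scaling)",
        "swarm": "container (a native docker API object)",
        "mesos": "app (Marathon) or task (framework-specific)",
    }

    def create_application(coe, spec):
        """Pretend unified entry point; each branch needs different fields."""
        if coe == "swarm":
            return {"call": "POST /containers/create", "body": spec}
        if coe == "kubernetes":
            return {"call": "POST /api/v1/namespaces/default/pods",
                    "body": {"kind": "Pod", "spec": spec}}
        if coe == "mesos":
            return {"call": "POST /v2/apps (Marathon)", "body": spec}
        raise ValueError("unknown COE: %s" % coe)

    print(create_application("swarm", {"Image": "nginx"}))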

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:

> definitely ;), but the are some thoughts to Tom’s email.
>
> I agree that we shouldn't reinvent apis, but I don’t think Magnum should
> only focus at deployment (I feel we will become another Puppet/Chef/Ansible
> module if we do it ):)
> I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to
> OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to step
> in to Kub/Mesos/Swarm communities for that.
>
> —
> Egor
>
> From: Adrian Otto <adrian.o...@rackspace.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Date: Tuesday, September 29, 2015 at 08:44
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> This is definitely a topic we should cover in Tokyo.
>
> On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) <daneh...@cisco.com
> <mailto:daneh...@cisco.com>> wrote:
>
>
> +1
>
> From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
> Reply-To: "openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org>" <openstack-dev@lists.openstack.org
> <mailto:openstack-dev@lists.openstack.org>>
> Date: Tuesday, September 29, 2015 at 2:22 AM
> To: "openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org>" <openstack-dev@lists.openstack.org
> <mailto:openstack-dev@lists.openstack.org>>
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> This has been my thinking in the last couple of months to completely
> deprecate the COE specific APIs such as pod/service/rc and container.
>
> As we now support Mesos, Kubernetes and Docker Swarm its going to be very
> difficult and probably a wasted effort trying to consolidate their separate
> APIs under a single Magnum API.
>
> I'm starting to see Magnum as COEDaaS - Container Orchestration Engine
> Deployment as a Service.
>
> On 29/09/15 06:30, Ton Ngo wrote:
> Would it make sense to ask the opposite of Wanghua's question: should
> pod/service/rc be deprecated if the user can easily get to the k8s api?
> Even if we want to orchestrate these in a Heat template, the corresponding
> heat resources can just interface with k8s instead of Magnum.
> Ton Ngo,
>
>
> From: Egor Guz <e...@walmartlabs.com>
> To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
> Date: 09/28/2015 10:20 PM
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> 
>
>
>
> Also I belive docker compose is just command line tool which doesn’t have
> any api or scheduling features.
> But during last Docker Conf hackathon PayPal folks implemented docker
> compose executor for Mesos (https://github.com/mohitsoni/compose-executor)
> which can give you pod like experience.
>
> —
> Egor
>
> From: Adrian Otto <adrian.o...@rackspace.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
> Date: Monday, September 28, 2015 at 22:03
> To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-29 Thread Egor Guz
definitely ;), but here are some thoughts on Tom's email.

I agree that we shouldn't reinvent APIs, but I don't think Magnum should only
focus on deployment (I feel we will become another Puppet/Chef/Ansible module
if we do it ):)
I believe our goal should be to seamlessly integrate Kub/Mesos/Swarm into the
OpenStack ecosystem (Neutron/Cinder/Barbican/etc), even if we need to step
into the Kub/Mesos/Swarm communities for that.

―
Egor
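
As one example of the kind of integration meant here, a pod in a bay could
mount a Cinder volume through the Kubernetes cinder volume plugin. A hedged
sketch, assuming the bay's Kubernetes build ships that plugin; the volume ID,
endpoint and certificate paths are placeholders:

    # Sketch: pod manifest that mounts an existing Cinder volume, posted to
    # the bay's native Kubernetes API.
    import requests

    pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "mysql-with-cinder"},
        "spec": {
            "containers": [{
                "name": "mysql",
                "image": "mysql:5.6",
                "env": [{"name": "MYSQL_ROOT_PASSWORD", "value": "secret"}],
                "volumeMounts": [{"name": "data",
                                  "mountPath": "/var/lib/mysql"}],
            }],
            "volumes": [{
                "name": "data",
                "cinder": {"volumeID": "<cinder-volume-uuid>",
                           "fsType": "ext4"},
            }],
        },
    }

    requests.post("https://10.0.0.5:6443/api/v1/namespaces/default/pods",
                  json=pod, cert=("cert.pem", "key.pem"), verify="ca.pem")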

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans) 
<daneh...@cisco.com<mailto:daneh...@cisco.com>> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com<mailto:tom.camm...@hpe.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely deprecate 
the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be very 
difficult and probably a wasted effort trying to consolidate their separate 
APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration Engine 
Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question: should 
pod/service/rc be deprecated if the user can easily get to the k8s api?
Even if we want to orchestrate these in a Heat template, the corresponding heat 
resources can just interface with k8s instead of Magnum.
Ton Ngo,


From: Egor Guz <e...@walmartlabs.com>
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Also I belive docker compose is just command line tool which doesn’t have any 
api or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented docker compose 
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you pod like experience.

―
Egor

From: Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com><mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker API to 
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not 
to replace docker swarm (or other existing systems), but to compliment it/them. 
We want to offer users of Docker the richness of native APIs and supporting 
tools. This way they will not need to compromise features or wait longer for us 
to implement each new feature as it is added. Keep in mind that our pod, 
service, and replication controller resources pre-date this philosophy. If we 
started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华 
<wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com><mailto:wanghua.hum...@gmail.com>>
 wrote:

Hi folks,

Magnum now exposes service, pod, etc to users in kubernetes coe, but exposes 
container in swarm coe. As I know, swarm is only a scheduler of container, 
which is like 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread Mike Spreitzer
> From: 王华 <wanghua.hum...@gmail.com>
> To: "OpenStack Development Mailing List (not for usage questions)" 
> <openstack-dev@lists.openstack.org>
> Date: 09/28/2015 11:34 PM
> Subject: [openstack-dev] [magnum]swarm + compose = k8s?
> 
> Hi folks,
> 
> Magnum now exposes service, pod, etc to users in kubernetes coe, but
> exposes container in swarm coe. As I know, swarm is only a scheduler
> of container, which is like nova in openstack. Docker compose is a 
> orchestration program which is like heat in openstack. k8s is the 
> combination of scheduler and orchestration. So I think it is better 
> to expose the apis in compose to users which are at the same level as 
k8s.
> 

Why should the users be deprived of direct access to the Swarm API when it 
is there?

Note also that Compose addresses more general, and differently focused,
orchestration than Kubernetes; the latter only offers homogeneous scaling
groups --- which a docker-compose.yaml file cannot even describe.
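
For what it's worth, the heterogeneity point is visible in even a tiny
two-service description. A sketch (service names and images are arbitrary)
of the kind of structure a Compose file carries, next to what a replication
controller expresses:

    # Sketch of a docker-compose style description with two *different*
    # services wired together -- something a single homogeneous scaling
    # group does not express by itself.
    compose_services = {
        "web": {
            "image": "nginx",
            "ports": ["80:80"],
            "links": ["db"],   # depends on a differently shaped service
        },
        "db": {
            "image": "mysql:5.6",
            "environment": {"MYSQL_ROOT_PASSWORD": "secret"},
        },
    }

    # A replication controller, by contrast, scales N identical replicas of
    # one pod template.
    rc_equivalent = {"replicas": 3, "template": "one pod spec, repeated"}

    print(sorted(compose_services))   # ['db', 'web']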

Regards,
Mike



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread 王华
Hi folks,

Magnum now exposes service, pod, etc. to users in the kubernetes COE, but
exposes container in the swarm COE. As far as I know, swarm is only a
scheduler of containers, which is like nova in openstack. Docker compose is
an orchestration program, which is like heat in openstack. k8s is the
combination of a scheduler and orchestration. So I think it is better to
expose the APIs in compose to users, which are at the same level as k8s.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-28 Thread Adrian Otto
Wanghua,

I do follow your logic, but docker-compose only needs the docker API to
operate. We are intentionally avoiding re-inventing the wheel. Our goal is not
to replace docker swarm (or other existing systems), but to complement it/them.
We want to offer users of Docker the richness of native APIs and supporting
tools. This way they will not need to compromise on features or wait longer
for us to implement each new feature as it is added. Keep in mind that our
pod, service, and replication controller resources pre-date this philosophy.
If we had started out with the current approach, those would not exist in Magnum.

Thanks,

Adrian
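
As a concrete illustration of the "native APIs" point above, docker-compose
(or any Docker client library) can simply be pointed at a swarm bay's
endpoint. A minimal docker-py sketch, with the endpoint and TLS paths as
placeholders:

    # Sketch: drive a swarm bay through the plain docker remote API -- the
    # same API docker-compose itself uses. Endpoint/TLS material are
    # placeholders for what the bay actually provides.
    import docker
    from docker import tls

    tls_config = tls.TLSConfig(client_cert=("cert.pem", "key.pem"),
                               ca_cert="ca.pem")
    client = docker.Client(base_url="https://10.0.0.6:2376", tls=tls_config)

    container = client.create_container(image="nginx", name="nginx-demo")
    client.start(container=container.get("Id"))
    print(client.containers(filters={"name": "nginx-demo"}))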

On Sep 28, 2015, at 8:32 PM, 王华 <wanghua.hum...@gmail.com> wrote:

Hi folks,

Magnum now exposes service, pod, etc to users in kubernetes coe, but exposes 
container in swarm coe. As I know, swarm is only a scheduler of container, 
which is like nova in openstack. Docker compose is a orchestration program 
which is like heat in openstack. k8s is the combination of scheduler and 
orchestration. So I think it is better to expose the apis in compose to users 
which are at the same level as k8s.


Regards
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

