Re: [openstack-dev] [Zun] Add Deepak Mourya to the core team

2018-05-14 Thread Kumari, Madhuri
Welcome to the team, Deepak!

Regards,
Madhuri

From: Hongbin Lu [mailto:hongbin...@gmail.com]
Sent: Monday, May 14, 2018 10:00 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>; deepak.mou...@nectechnologies.in
Subject: [openstack-dev] [Zun] Add Deepak Mourya to the core team

Hi all,

This is an announcement of the following change on the Zun core reviewers team:

+ Deepak Mourya (mourya007)

Deepak has been actively involved in Zun for several months. He has submitted 
several code patches to Zun, all of which are useful features or bug fixes. In 
particular, I would like to highlight that he has implemented the availability 
zone API, which is a significant contribution to the Zun feature set. Based on 
this significant contribution, I would like to propose him as a core 
reviewer of Zun.

This proposal has been voted within the existing core team and is unanimously 
approved. Welcome to the core team Deepak.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Announce change of Zun core reviewer team

2018-05-02 Thread Kumari, Madhuri
Welcome to the team, Ji Wei ☺

Regards,
Madhuri

From: Hongbin Lu [mailto:hongbin...@gmail.com]
Sent: Thursday, May 3, 2018 2:10 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Zun] Announce change of Zun core reviewer team

Hi all,

I would like to announce the following change on the Zun core reviewers team:

+ Ji Wei

Ji Wei has been working on Zun for a while. His contributions include 
blueprints, bug fixes, code reviews, etc. In particular, I would like to 
highlight that he has implemented two blueprints [1][2], both of which are not 
easy to implement. Based on his high-quality work in the past, I believe he 
will serve the core reviewer role very well.

This proposal had been voted within the existing core team and was unanimously 
approved. Welcome to the core team Ji Wei.

[1] https://blueprints.launchpad.net/zun/+spec/glance-support-tag
[2] https://blueprints.launchpad.net/zun/+spec/zun-rebuild-on-local-node

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun

2018-04-30 Thread Kumari, Madhuri
Thank you Hongbin. The article is very helpful.

Regards,
Madhuri

From: Hongbin Lu [mailto:hongbin...@gmail.com]
Sent: Sunday, April 29, 2018 5:16 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun

Hi folks,

FYI. I wrote a blog post about a comparison between AWS Fargate and OpenStack 
Zun. It mainly covers the following:

* The basic concepts of OpenStack Zun and AWS Fargate
* The Kubernetes integration plan

Here is the link: 
https://www.linkedin.com/pulse/aws-fargate-openstack-zun-comparing-serverless-container-hongbin-lu/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Ironic][API] Service Management API Design

2018-01-16 Thread Kumari, Madhuri
Hi Nova Developers,

I am working on adding a service management API in Ironic [1][2]. This spec 
adds a new API, /conductors, to list and enable/disable ironic-conductor services.
I am struggling to understand the difference between shutting down a service 
manually and disabling it.

So my question is: what happens to the VMs and the current operation (if any) 
running on the nova-compute service we disable? What is the difference 
between shutting down the service and disabling it?
I understand that both actions disable scheduling of requests to the compute 
service, and the workloads are taken over by other nova-compute services.
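
For context, here is a minimal sketch of how a compute service is disabled from 
the CLI today (the host name "compute-01" is just a placeholder):

  # list compute services and their current state
  openstack compute service list --service nova-compute
  # stop scheduling new instances to one host
  openstack compute service set --disable --disable-reason "maintenance" compute-01 nova-compute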

Please help me understand the design in Nova.

[1] https://review.openstack.org/#/c/471217/
[2] https://bugs.launchpad.net/ironic/+bug/1526759


Regards,
Madhuri

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] PTL on vacation for 3 weeks

2017-12-11 Thread Kumari, Madhuri
Hello,

@Hongbin, Enjoy your vacation!

Team, please feel free to reach me at mkrai on #openstack-zun. I am happy to 
serve as Zun’s temporary PTL.

Regards,
Madhuri

From: Hongbin Lu [mailto:hongbin...@gmail.com]
Sent: Friday, December 8, 2017 10:00 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Cc: Kumari, Madhuri <madhuri.kum...@intel.com>
Subject: [Zun] PTL on vacation for 3 weeks

Hi team,

I will be on vacation during Dec 11 - Jan 2. Madhuri Kumari (cc-ed) kindly 
agreed to serve in the PTL role while I am away. I wish everyone a happy 
holiday.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Nova docker replaced by zun?

2017-10-03 Thread Kumari, Madhuri
Hi,

nova-docker was discontinued because the lifecycles of containers and VMs are 
different and the Nova APIs don't satisfy container use cases.
Zun is a project independent of Nova, with its own set of APIs for 
managing containers on top of OpenStack.

For more information, you can read the FAQ section of Zun wiki page [1].

[1] https://wiki.openstack.org/wiki/Zun

Regards,
Madhuri


From: ADAMS, STEVEN E [mailto:sa2...@att.com]
Sent: Friday, September 29, 2017 8:18 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Nova docker replaced by zun?

Can anyone point me to some background on why nova docker was discontinued and 
how zun is the heir?
Thx,
Steve Adams
AT&T
https://github.com/openstack/nova-docker/blob/master/README.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Propose change of the core team

2017-09-17 Thread Kumari, Madhuri
+1 for both. Thanks Kien for all your work in Zun, especially multi-node CI.

Regards,
Madhuri

From: Pradeep Singh [mailto:ps4openst...@gmail.com]
Sent: Sunday, September 17, 2017 8:35 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Zun] Propose change of the core team

+1 for both

On Sun, Sep 17, 2017 at 4:26 AM, Kevin Zhao 
<kevin.z...@linaro.org<mailto:kevin.z...@linaro.org>> wrote:
+1 on both

On 15 September 2017 at 08:40, Qiming Teng 
<teng...@linux.vnet.ibm.com<mailto:teng...@linux.vnet.ibm.com>> wrote:
+1 on both.

On Thu, Sep 14, 2017 at 01:58:48PM +, Hongbin Lu wrote:
> Hi all,
>
> I propose the following change of the Zun core reviewer team.
>
> + Kien Nguyen (kiennt2609)
> - Aditi Sharma (adi-sky17)
>
> Kien has been contributing to the Zun project for a few months. His 
> contributions include proposing high-quality code, providing helpful code 
> reviews, participating in team discussions at the weekly team meeting and on IRC, etc. 
> He is the one who set up the multi-node job in the CI, and the job is up and 
> running now. I think his contribution is significant, which qualifies him to 
> be a core reviewer. Aditi is a member of the initial core team but has become 
> inactive for a while.
>
> Core reviewers, please cast your vote on this proposal.
>
> Best regards,
> Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] sandbox and clearcontainers

2017-07-11 Thread Kumari, Madhuri
Hi Surya,

Please see my response inline.
Currently Zun has two drivers for managing containers: Docker and NovaDocker. 
The sandbox was initially implemented for the NovaDocker driver, which we are going 
to deprecate soon.
We are also working on making the sandbox optional for the Docker driver. See 
patch [1] for the code.

[1] https://review.openstack.org/#/c/471634/

Regards,
Madhuri


From: surya.prabha...@dell.com [mailto:surya.prabha...@dell.com]
Sent: Wednesday, July 12, 2017 4:44 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun] sandbox and clearcontainers

Hi Folks,
I am just trying to wrap my head around Zun's sandboxing and clear 
containers. From what Hongbin presented in Barcelona (see the attached picture, which 
I scraped from his video):

[attached image: Zun sandbox architecture diagram]

The current implementation in Zun is: the sandbox is the outer container and the real 
user container is nested inside the sandbox. I am trying to figure out how 
this is going to play out when we have clear containers.
[Kumari, Madhuri] The sandbox container is just an infra container that manages 
the IaaS resources associated with a container or a group of containers. The real 
container only uses the resources attached to the infra container; it does not 
run inside the infra container, so no other virtualization layer is involved 
here.

I envision the following scenarios:


1)  Scenario 1: the sandbox itself is a clear container and the user nests 
another clear container inside the sandbox. This is like nested 
virtualization.

But I am not sure how this is going to work, since the nested containers won't 
get VT-d CPU flags.

2)  Scenario 2: the outer sandbox is just a standard Docker 
container without VT-d and the inner container is the real clear 
container with VT-d. Now this might work well, but we might be losing the 
isolation features for the network and storage, which lie open in the sandbox. 
Won't this defeat the whole purpose of using clear containers?

[Kumari, Madhuri] I have tried running the infra container as a Docker container and 
the real container as a Clear Container, and it seems to work well. But I agree 
with your point that we might lose the advantage of using clear containers.

Once the sandbox is made optional, we can run a clear container directly 
without any sandbox, thus solving the issue.



I am just wondering what the thought process was for this design inside Zun. If 
this is trivial and I am missing something, please shed some light :).

Thanks
Surya ( spn )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-20 Thread Kumari, Madhuri
+1 from me as well.

Thanks Dims and Yanyan for your contribution to Zun ☺

Regards,
Madhuri

From: Kevin Zhao [mailto:kevin.z...@linaro.org]
Sent: Wednesday, June 21, 2017 6:37 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team and 
removal notice

+1 for me.
Thx!

On 20 June 2017 at 13:50, Pradeep Singh 
<ps4openst...@gmail.com<mailto:ps4openst...@gmail.com>> wrote:
+1 from me,
Thanks Shunli for your great work :)

On Tue, Jun 20, 2017 at 10:02 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Hi all,

I would like to propose the following change to the Zun core team:

+ Shunli Zhou (shunliz)

Shunli has been contributing to Zun for a while and has done a lot of work. He has 
completed the BP for supporting resource claims and is close to finishing the 
filter scheduler BP. He has shown a good understanding of Zun's code base and 
expertise in other OpenStack projects. The quantity [1] and quality of his 
submitted code also show his qualification. Therefore, I think he will be a 
good addition to the core team.

In addition, I have a removal notice. Davanum Srinivas (Dims) and Yanyan Hu 
requested to be removed from the core team. Dims had been helping us since the 
inception of the project. I treated him as a mentor and his guidance has always been 
helpful for the whole team. As the project becomes mature and stable, I agree 
with him that it is time to relieve him of the core reviewer responsibility, 
because he has many other important responsibilities in the OpenStack 
community. Yanyan is leaving because he has been relocated and is now focused on an 
area outside OpenStack. I would like to take this chance to thank Dims and 
Yanyan for their contribution to Zun.

Core reviewers, please cast your vote on this proposal.

Best regards,
Hongbin




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators][all][Ironic][Nova] The /service API endpoint

2017-06-20 Thread Kumari, Madhuri
Hi All,

I am working on a bug [1] in Ironic which talks about exposing the state of the 
conductor services running in an OpenStack environment.
There are two ways to do this:

1) Create a new API endpoint, e.g. 'v1/service', that can report which conductor 
is managing a given node. Additionally, it can also report the aliveness of all Ironic 
conductors and on which hosts they are running (similar to nova service-list).

2) Expose conductor_affinity in node-show (but resolve it to a hostname first).

Option #2 is probably quicker to implement, but option #1 has more benefits for 
operators.
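
To make option #1 a bit more concrete, here is a purely illustrative sketch (the 
command name and output are my assumptions, not an agreed design) of what an 
operator-facing view built on such an endpoint could look like, analogous to 
nova service-list:

  # hypothetical CLI on top of a new GET /v1/service (or /v1/conductors) endpoint
  openstack baremetal conductor list
  # would report, per conductor: its hostname, whether it is alive,
  # and which driver(s)/nodes it is responsible for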

So I would like to know from the OpenStack operators and the project teams that have 
such an API:

1. What are the other use cases of this API?
2. Which option is better to implement? Is it worth adding a new API endpoint 
for the purpose?
3. Also, why do such APIs only expose the state of the RPC servers and not the API 
servers in the environment?

[1] https://bugs.launchpad.net/ironic/+bug/1616878

Regards,
Madhuri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-01 Thread Kumari, Madhuri
+1 for both.
Well deserved Feng!

Thanks,
Madhuri

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Saturday, April 29, 2017 9:35 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Zun] Proposal a change of Zun core team

Hi all,

I propose a change of Zun's core team membership as below:

+ Feng Shengqin (feng-shengqin)
- Wang Feilong (flwang)

Feng Shengqin has contributed a lot to the Zun project. Her contribution 
includes BPs, bug fixes, and reviews. In particular, she completed an essential 
BP and had a lot of accepted commits in Zun's repositories. I think she is 
qualified for the core reviewer position. I would like to thank Wang Feilong 
for his interest in joining the team when the project was founded. I believe we are 
always friends regardless of his core membership.

By convention, we require a minimum of 4 +1 votes from Zun core reviewers 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, this proposal is rejected.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Kumari, Madhuri
It seems COE is a valid term now. I am in favor of having “openstack 
coe cluster” or “openstack container cluster”.
Using the command “infra” is too generic and doesn’t relate to what Magnum is 
doing exactly.

Regards,
Madhuri

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: Tuesday, March 21, 2017 7:25 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum commands 
in osc?

IMO, coe is a little confusing. It is a term used by people related somehow
to the magnum community. When I describe to users how to use magnum,
I spend a few moments explaining what we call coe.

I prefer one of the following:
* openstack magnum cluster create|delete|...
* openstack mcluster create|delete|...
* both of the above

It is very intuitive for users, because they will be using an OpenStack cloud
and they will want to use the Magnum service. So it only makes sense
to type openstack magnum cluster, or mcluster, which is shorter.


On 21 March 2017 at 02:24, Qiming Teng 
<teng...@linux.vnet.ibm.com<mailto:teng...@linux.vnet.ibm.com>> wrote:
On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> >Team,
> >
> >Stephen Watson has been working on an magnum feature to add magnum commands 
> >to the openstack client by implementing a plugin:
> >
> >https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> >
> >In review of this work, a question has resurfaced, as to what the client 
> >command name should be for magnum related commands. Naturally, we’d like to 
> >have the name “cluster” but that word is already in use by Senlin.
>
> Unfortunately, the Senlin API uses a whole bunch of generic terms as
> top-level REST resources, including "cluster", "event", "action",
> "profile", "policy", and "node". :( I've warned before that use of
> these generic terms in OpenStack APIs without a central group
> responsible for curating the API would lead to problems like this.
> This is why, IMHO, we need the API working group to be ultimately
> responsible for preventing this type of thing from happening.
> Otherwise, there ends up being a whole bunch of duplication and same
> terms being used for entirely different things.
>

Well, I believe the name and namespaces used by Senlin are very clean.
Please see the following outputs. All commands are contained in the
cluster namespace to avoid any conflicts with other projects.

On the other hand, is there any document stating that Magnum is about
providing a clustering service? Why does Magnum care so much about the top-level
noun if it is not its business?

From magnum's wiki page [1]:
"Magnum uses Heat to orchestrate an OS image which contains Docker
and Kubernetes and runs that image in either virtual machines or bare
metal in a cluster configuration."

Many services may offer clusters indirectly. Clusters are NOT magnum's focus,
but we can't refer to a collection of virtual machines or physical servers by
another name. Bay proved to be confusing to users. I don't think that magnum
should reserve the cluster noun, even if it were available.

[1] https://wiki.openstack.org/wiki/Magnum



$ openstack --help | grep cluster

  --os-clustering-api-version 

  cluster action list  List actions.
  cluster action show  Show detailed info about the specified action.
  cluster build info  Retrieve build information.
  cluster check  Check the cluster(s).
  cluster collect  Collect attributes across a cluster.
  cluster create  Create the cluster.
  cluster delete  Delete the cluster(s).
  cluster event list  List events.
  cluster event show  Describe the event.
  cluster expand  Scale out a cluster by the specified number of nodes.
  cluster list   List the user's clusters.
  cluster members add  Add specified nodes to cluster.
  cluster members del  Delete specified nodes from cluster.
  cluster members list  List nodes from cluster.
  cluster members replace  Replace the nodes in a cluster with
  specified nodes.
  cluster node check  Check the node(s).
  cluster node create  Create the node.
  cluster node delete  Delete the node(s).
  cluster node list  Show list of nodes.
  cluster node recover  Recover the node(s).
  cluster node show  Show detailed info about the specified node.
  cluster node update  Update the node.
  cluster policy attach  Attach policy to cluster.
  cluster policy binding list  List policies from cluster.
  cluster policy binding show  Show a specific policy that is bound to
  the specified cluster.
  cluster policy binding update  Update a policy's properties on a
  cluster.
  cluster policy create  Create a policy.
  cluster policy delete  Delet

Re: [openstack-dev] [Zun] Propose a change of the Zun core team membership

2017-01-24 Thread Kumari, Madhuri
+1 for both.

-Original Message-
From: Wenzhi Yu [mailto:wenzhi...@163.com] 
Sent: Tuesday, January 24, 2017 7:02 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Zun] Propose a change of the Zun core team 
membership

+1 from me! 
Kevin has made significant contribution to Zun, he deserves this!

Best Regards,
Wenzhi Yu(yuywz)
> On Jan 24, 2017, at 5:12 PM, Shuu Mutou wrote:
> 
> +1 for both.
> 
> regards
> Shu
> 
> 
>> -Original Message-
>> From: Pradeep Singh [mailto:ps4openst...@gmail.com]
>> Sent: Tuesday, January 24, 2017 11:58 AM
>> To: OpenStack Development Mailing List (not for usage questions) 
>> 
>> Subject: Re: [openstack-dev] [Zun] Propose a change of the Zun core 
>> team membership
>> 
>> +1, welcome Kevin. I appreciate your work.
>> 
>> On Tuesday, January 24, 2017, Yanyan Hu wrote:
>> 
>> 
>>  +1 for the change.
>> 
>> 
>>  On 2017-01-24 at 6:56 GMT+08:00, Hongbin Lu wrote:
>> 
>> 
>>  Hi Zun cores,
>> 
>> 
>> 
>>  I proposed a change of Zun core team membership as below:
>> 
>> 
>> 
>>  + Kevin Zhao (kevin-zhao)
>> 
>>  - Haiwei Xu (xu-haiwei)
>> 
>> 
>> 
>>  Kevin has been working on Zun for a while, and has made significant 
>> contributions. He submitted several non-trivial patches with high 
>> quality. One of his challenging tasks is adding support for container 
>> interactive mode, and it looks like he is capable of handling this 
>> challenging task (his patches are under review now). I think he is a 
>> good addition to the core team. Haiwei is a member of the initial 
>> core team. Unfortunately, his activity has dropped in the past few months.
>> 
>> 
>> 
>>  According to the OpenStack Governance process [1], we require a 
>> minimum of 4 +1 votes from Zun core reviewers within a 1 week voting 
>> window (consider this proposal as a +1 vote from me). A vote of -1 is 
>> a veto. If we cannot get enough votes or there is a veto vote prior 
>> to the end of the voting window, this proposal is rejected.
>> 
>> 
>> 
>>  [1]
>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>> 
>> 
>> 
>> 
>>  Best regards,
>> 
>>  Hongbin
>> 
>> 
>> 
>> 
>> 
>>  __
>> 
>>  OpenStack Development Mailing List (not for usage
>> questions)
>>  Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> 
>>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
>> dev 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>  --
>> 
>>  Best regards,
>> 
>>  Yanyan
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-11-08 Thread Kumari, Madhuri
+1 for both.

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: Tuesday, November 8, 2016 12:36 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Magnum] New Core Reviewers

Magnum Core Team,

I propose Jaycen Grant (jvgrant) and Yatin Karel (yatin) as new Magnum Core 
Reviewers. Please respond with your votes.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Propose a change of Zun core team

2016-10-19 Thread Kumari, Madhuri
+1 for both. Shubham will be a great addition to the team.

Thanks!

Madhuri

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Thursday, October 20, 2016 2:49 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Zun] Propose a change of Zun core team

Hi team,

I am going to propose an exchange of the core team membership as below:

+ Shubham Kumar Sharma (shubham)
- Chandan Kumar (chandankumar)

Shubham contributed a lot to the container image feature and is active on reviews 
and IRC. I think he is a good addition to the core team. Chandan has been 
inactive for a long period of time, so he no longer meets the expectations of a core 
reviewer. However, thanks for his interest in joining the core team when 
the team was founded. He is welcome to re-join the core team if he becomes active 
in the future.

According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from Zun core reviewers within a 1 week voting window (consider this 
proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get enough 
votes or there is a veto vote prior to the end of the voting window, this 
proposal is rejected and Shubham is not able to join the core team and needs to 
wait 30 days to reapply.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun][Higgins] Proposing Sudipta Biswas and Wenzhi Yu for Zun core reviewer team

2016-08-10 Thread Kumari, Madhuri
+1 from me for both.

From: Hongbin Lu [mailto:hongbin...@gmail.com]
Sent: Wednesday, August 10, 2016 8:40 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Zun][Higgins] Proposing Sudipta Biswas and Wenzhi Yu 
for Zun core reviewer team

Hi all,

Both Sudipta and Wenzhi have been actively contributing to the Zun project for 
a while. Sudipta provided helpful advice for the project roadmap and 
architecture design. Wenzhi consistently contributed high quality patches and 
insightful reviews. I think both of them are qualified to join the core team.

I am happy to propose Sudipta and Wenzhi to be core reviewers of Zun team. 
According to the OpenStack Governance process [1], we require a minimum of 4 +1 
votes from Zun core reviewers within a 1 week voting window (consider this 
proposal as a +1 vote from me). A vote of -1 is a veto. If we cannot get enough 
votes or there is a veto vote prior to the end of the voting window, Sudipta 
and Wenzhi are not able to join the core team and need to wait 30 days to 
reapply.

The voting is open until Wednesday, August 17th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core reviewer team

2016-07-25 Thread Kumari, Madhuri
+1 for Sypros.

Regards,
Madhuri

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: Saturday, July 23, 2016 1:57 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core 
reviewer team

Hi all,

Spyros has consistently contributed to Magnum for a while. In my opinion, what 
differentiates him from others is the significance of his contribution, which 
adds concrete value to the project. For example, the operator-oriented install 
guide he delivered attracts a significant number of users to install Magnum, 
which facilitates the adoption of the project. I would like to emphasize that 
the Magnum team has been working hard but struggling to increase adoption, 
and Spyros's contribution means a lot in this regard. He also completed 
several essential and challenging tasks, such as adding support for OverlayFS, 
adding a Rally job for Magnum, etc. Overall, I am impressed by the amount of 
high-quality patches he submitted. He is also helpful in code reviews, and his 
comments often help us identify pitfalls that are not easy to spot. He is 
also very active on IRC and the ML. Based on his contribution and expertise, I 
think he is qualified to be a Magnum core reviewer.

I am happy to propose Spyros to be a core reviewer of Magnum team. According to 
the OpenStack Governance process [1], we require a minimum of 4 +1 votes from 
Magnum core reviewers within a 1 week voting window (consider this proposal as 
a +1 vote from me). A vote of -1 is a veto. If we cannot get enough votes or 
there is a veto vote prior to the end of the voting window, Spyros is not able 
to join the core team and needs to wait 30 days to reapply.

The voting is open until Thursday, July 29th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Following Magnum QuickStart page, how do i use Docker CLI directly to manage containers

2016-07-18 Thread Kumari, Madhuri
Hi Greg,

Yes, we do have a document that explains the steps to configure your Docker client 
to be able to interact with bays.
Please follow this link: 
https://github.com/openstack/magnum/blob/master/doc/source/dev/quickstart.rst#building-and-using-a-swarm-bay
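
For convenience, the flow in that document is roughly as follows (a condensed 
sketch assuming a TLS-enabled bay named swarmbay; treat the linked quickstart 
as the authoritative reference):

  # create a client key and CSR, then have Magnum's bay CA sign it
  openssl genrsa -out client.key 4096
  openssl req -new -key client.key -out client.csr -subj "/CN=client"
  magnum ca-sign --bay swarmbay --csr client.csr > cert.pem
  magnum ca-show --bay swarmbay > ca.pem

  # point the Docker CLI at the bay's API address (see "magnum bay-show swarmbay")
  export DOCKER_HOST=tcp://<api_address>:2376
  docker --tlsverify --tlscacert ca.pem --tlscert cert.pem --tlskey client.key ps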

Regards,
Madhuri

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Monday, July 18, 2016 6:53 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Magnum] Following Magnum QuickStart page, how do i 
use Docker CLI directly to manage containers

Was trying out Magnum in Devstack, following the instructions at 
http://docs.openstack.org/developer/magnum/dev/dev-quickstart.html

I successfully created a swarmbay.
test -f ~/.ssh/id_rsa.pub || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
nova keypair-add --pub-key ~/.ssh/id_rsa.pub testkey
magnum baymodel-create --name swarmbaymodel \
   --image-id fedora-21-atomic-5 \
   --keypair-id testkey \
   --external-network-id public \
   --dns-nameserver 8.8.8.8 \
   --flavor-id m1.small \
   --docker-volume-size 5 \
   --coe swarm
magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 2

Given that the ‘magnum container-… …’ APIs are no longer supported,
I wanted to use the Docker CLI directly to manage containers.

Is there an updated dev-quickstart guide on how to do this ?

- I tried ssh-ing to the swarm-master VM using the key … but could not connect.

- ???

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Is LBAAS mandatory for MAGNUM ?

2016-07-18 Thread Kumari, Madhuri
Hi Greg,

Now it is not mandatory to have LBaaS in Magnum. Here is a blueprint in Magnum 
that aims to decouple LBaaS from Magnum: 
https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas.
You can use the flag --master-lb-enabled in the baymodel to specify whether you want 
a load balancer or not. However, it only allows you to disable LBaaS when the master 
count is 1.
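
As a rough illustration (a minimal sketch assuming the flag behaves as described 
above; check "magnum help baymodel-create" in your deployment), a bay that does 
not require LBaaS could be created like this:

  # single-master baymodel, no --master-lb-enabled, so no load balancer is requested
  magnum baymodel-create --name k8sbaymodel \
      --image-id fedora-21-atomic-5 \
      --keypair-id testkey \
      --external-network-id public \
      --flavor-id m1.small \
      --coe kubernetes
  magnum bay-create --name k8sbay --baymodel k8sbaymodel --master-count 1 --node-count 1
  # adding --master-lb-enabled to the baymodel would request a load balancer
  # in front of the masters, which does require LBaaS in the cloud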

Regards,
Madhuri

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Monday, July 18, 2016 5:11 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Magnum] Is LBAAS mandatory for MAGNUM ?

I’m relatively new to looking at Magnum.
Just recently played with Magnum in devstack on Newton.
I noticed that the HEAT Stack used by Magnum created Load Balancer Pool and 
Load Balancer HealthMonitor.

QUESTION … Is LBAAS support mandatory for MAGNUM ?  or can it be used 
(configured) without it ?

i.e. if the OpenStack distribution being used does NOT support LBAAS, will 
MAGNUM work ?   will it still be useful ?

( … thinking that it could still be used, although would not support the load 
balancing across scaled or multiple instances of a container … )

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Kumari, Madhuri
Hi Ricardo,

Thanks for sharing it. The results look great, and we will surely try to fix the 
issues.

Cheers!

Regards,
Madhuri

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: Friday, June 17, 2016 8:44 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

Hi.

Just thought the Magnum team would be happy to hear :)

We had access to some hardware the last couple days, and tried some tests with 
Magnum and Kubernetes - following an original blog post from the kubernetes 
team.

Got a 200 node kubernetes bay (800 cores) reaching 2 million requests / sec.

Check here for some details:
https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html

We'll try bigger in a couple weeks, also using the Rally work from Winnie, Ton 
and Spyros to see where it breaks. Already identified a couple issues, will add 
bugs or push patches for those. If you have ideas or suggestions for the next 
tests let us know.

Magnum is looking pretty good!

Cheers,
Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-06-16 Thread Kumari, Madhuri
Hi,

We have created an etherpad page for the API design: 
https://etherpad.openstack.org/p/zun-containers-service-api
Please have a look and add your suggestions.

Regards,
Madhuri

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: Tuesday, June 14, 2016 9:39 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>; Sheel Rana Insaan <ranasheel2...@gmail.com>
Cc: adit...@nectechnologies.in; yanya...@cn.ibm.com; flw...@catalyst.net.nz; Qi 
Ming Teng <teng...@cn.ibm.com>; sitlani.namr...@yahoo.in; Yuanying 
<yuany...@fraction.jp>; Chandan Kumar <chku...@redhat.com>
Subject: Re: [openstack-dev] [Higgins] Call for contribution for Higgins API 
design

Hi, Hongbin,

Yes, those URLs are just background information for our work.
We will create an etherpad page to collaborate.




On Sat, June 11, 2016 at 7:38, Hongbin Lu <hongbin...@huawei.com> wrote:
Yuanying,

The etherpads you pointed to are from a few years ago and the information looks a 
bit outdated. I think we can collaborate on a similar etherpad with updated 
information (i.e. remove container runtimes that we don't care about, add container 
runtimes that we do care about). The existing etherpad can be used as a starting point. 
What do you think?

Best regards,
Hongbin

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: June-01-16 12:43 AM
To: OpenStack Development Mailing List (not for usage questions); Sheel Rana 
Insaan
Cc: adit...@nectechnologies.in; yanya...@cn.ibm.com; flw...@catalyst.net.nz; Qi 
Ming Teng; sitlani.namr...@yahoo.in; Yuanying; Chandan Kumar
Subject: Re: [openstack-dev] [Higgins] Call for contribution for Higgins API 
design

Just F.Y.I.

When Magnum wanted to become “Container as a Service”,
there were some discussions about API design:

* https://etherpad.openstack.org/p/containers-service-api
* https://etherpad.openstack.org/p/openstack-containers-service-api



On Wed, June 1, 2016 at 12:09, Hongbin Lu <hongbin...@huawei.com> wrote:
Sheel,

Thanks for taking the responsibility. Assigned the BP to you. As discussed, 
please submit a spec for the API design. Feel free to let us know if you need 
any help.

Best regards,
Hongbin

From: Sheel Rana Insaan [mailto:ranasheel2...@gmail.com]
Sent: May-31-16 9:23 PM
To: Hongbin Lu
Cc: adit...@nectechnologies.in; vivek.jain.openst...@gmail.com; 
flw...@catalyst.net.nz; Shuu Mutou; Davanum Srinivas; OpenStack Development 
Mailing List (not for usage questions); Chandan Kumar; hai...@xr.jp.nec.com; 
Qi Ming Teng; sitlani.namr...@yahoo.in; Yuanying; Kumari, Madhuri; 
yanya...@cn.ibm.com
Subject: Re: [Higgins] Call for contribution for Higgins API design


Dear Hongbin,

I am interested in this.
Thanks!!

Best Regards,
Sheel Rana
On Jun 1, 2016 3:53 AM, "Hongbin Lu" 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner of the 
blueprint, and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of time to 
work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-01 Thread Kumari, Madhuri
Hi Hongbin,

I also like the idea of having a heterogeneous set of nodes, but IMO such 
features should not be implemented in Magnum, as that would again deviate Magnum from 
its roadmap. Instead, we should leverage Heat (or maybe Senlin) APIs for the 
same.

I vote +1 for this feature.

Regards,
Madhuri

-Original Message-
From: Hongbin Lu [mailto:hongbin...@huawei.com] 
Sent: Thursday, June 2, 2016 3:33 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

Personally, I think this is a good idea, since it can address a set of similar 
use cases like the ones below:
* I want to deploy a k8s cluster to 2 availability zones (in future, 2 
regions/clouds).
* I want to spin up N nodes in AZ1 and M nodes in AZ2.
* I want to scale the number of nodes in a specific AZ/region/cloud. For example, 
add/remove K nodes from AZ1 (with AZ2 untouched).

The use cases above should be very common and universal everywhere. To address 
them, Magnum needs to support provisioning a heterogeneous set of nodes 
at deploy time and managing them at runtime. It looks like the proposed idea 
(manually managing individual nodes or individual groups of nodes) can address 
this requirement very well. Besides the proposed idea, I cannot think of an 
alternative solution.

Therefore, I vote to support the proposed idea.

Best regards,
Hongbin

> -Original Message-
> From: Hongbin Lu
> Sent: June-01-16 11:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually 
> managing the bay nodes
> 
> Hi team,
> 
> A blueprint was created for tracking this idea:
> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> nodes . I won't approve the BP until there is a team decision on 
> accepting/rejecting the idea.
> 
> From the discussion in design summit, it looks everyone is OK with the 
> idea in general (with some disagreements in the API style). However, 
> from the last team meeting, it looks some people disagree with the 
> idea fundamentally. so I re-raised this ML to re-discuss.
> 
> If you agree or disagree with the idea of manually managing the Heat 
> stacks (that contains individual bay nodes), please write down your 
> arguments here. Then, we can start debating on that.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > Sent: May-16-16 5:28 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually 
> > managing the bay nodes
> >
> > The discussion at the summit was very positive around this
> requirement
> > but as this change will make a large impact to Magnum it will need a 
> > spec.
> >
> > On the API of things, I was thinking a slightly more generic 
> > approach to incorporate other lifecycle operations into the same API.
> > Eg:
> > magnum bay-manage <bay> <operation>
> >
> > magnum bay-manage <bay> reset --hard
> > magnum bay-manage <bay> rebuild
> > magnum bay-manage <bay> node-delete <node>
> > magnum bay-manage <bay> node-add --flavor <flavor>
> > magnum bay-manage <bay> node-reset <node>
> > magnum bay-manage <bay> node-list
> >
> > Tom
> >
> > From: Yuanying OTSUKA <yuany...@oeilvert.org>
> > Reply-To: "OpenStack Development Mailing List (not for usage 
> > questions)" <openstack-dev@lists.openstack.org>
> > Date: Monday, 16 May 2016 at 01:07
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually 
> > managing the bay nodes
> >
> > Hi,
> >
> > I think users also want to specify which node to delete.
> > So we should manage “node” individually.
> >
> > For example:
> > $ magnum node-create --bay …
> > $ magnum node-list --bay
> > $ magnum node-delete $NODE_UUID
> >
> > Anyway, if magnum wants to manage the lifecycle of container 
> > infrastructure, this feature is necessary.
> >
> > Thanks
> > -yuanying
> >
> >
> > 2016年5月16日(月) 7:50 Hongbin Lu
> > <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>:
> > Hi all,
> >
> > This is a continued discussion from the design summit. For recap, 
> > Magnum manages bay nodes by using ResourceGroup from Heat. This 
> > approach works but it is infeasible to manage the heterogeneity across 
> > bay nodes, which is a frequently demanded feature. As an example,

Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-01 Thread Kumari, Madhuri
Thanks Shu for providing suggestions.

I wanted the new name to be related to containers, as Magnum is also a synonym for 
containers. So I have a few options here.

1. Casket
2. Canister
3. Cistern
4. Hutch

All above options are free to be taken on pypi and Launchpad.
Thoughts?

Regards
Madhuri

-Original Message-
From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com] 
Sent: Wednesday, June 1, 2016 11:11 AM
To: openstack-dev@lists.openstack.org
Cc: Haruhiko Katou <har-ka...@ts.jp.nec.com>
Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?

I found some container-related names and checked whether other projects use them.

https://en.wikipedia.org/wiki/Straddle_carrier
https://en.wikipedia.org/wiki/Suezmax
https://en.wikipedia.org/wiki/Twistlock

These words are not used by other projects on PyPI or Launchpad.

ex.)
https://pypi.python.org/pypi/straddle
https://launchpad.net/straddle


However, since the renaming opportunity in the N cycle will be handled by the Infra 
team this Friday, we would not meet the deadline. So:

1. use 'Higgins' ('python-higgins' for the package name)
2. consider another name for the next renaming chance (after half a year)

Thoughts?


Regards,
Shu


> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Wednesday, June 01, 2016 11:37 AM
> To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
> Shu,
> 
> According to the feedback from the last team meeting, Gatling doesn't 
> seem to be a suitable name. Are you able to find an alternative name?
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> > Sent: May-24-16 4:30 AM
> > To: openstack-dev@lists.openstack.org
> > Cc: Haruhiko Katou
> > Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> >
> > Hi all,
> >
> > Unfortunately "higgins" is used by media server project on Launchpad 
> > and CI software on PYPI. Now, we use "python-higgins" for our 
> > project on Launchpad.
> >
> > IMO, we should rename project to prevent increasing points to patch.
> >
> > How about "Gatling"? It's only association from Magnum. It's not 
> > used on both Launchpad and PYPI.
> > Is there any idea?
> >
> > Renaming opportunity will come (it seems only twice in a year) on 
> > Friday, June 3rd. Few projects will rename on this date.
> > http://markmail.org/thread/ia3o3vz7mzmjxmcx
> >
> > And if project name issue will be fixed, I'd like to propose UI 
> > subproject.
> >
> > Thanks,
> > Shu
> >
> >
> >
> __
> _
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

2016-05-31 Thread Kumari, Madhuri
+1 for Eli. Great addition to the team.

Regards,
Madhuri

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: Wednesday, June 1, 2016 8:11 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Higgins] Proposing Eli Qiao to be a Higgins core

+1 Welcome Eli

-- Dims

On Tue, May 31, 2016 at 9:22 PM, Yanyan Hu <huyanya...@gmail.com> wrote:
> +1, welcome, Eli :)
>
> 2016-06-01 7:07 GMT+08:00 Yuanying OTSUKA <yuany...@oeilvert.org>:
>>
>> +1, He will become a good contributor!
>>
>>
>>
>>> On Wed, June 1, 2016 at 7:14, Fei Long Wang <feil...@catalyst.net.nz> wrote:
>>>
>>> +1
>>>
>>>
>>> On 01/06/16 09:39, Hongbin Lu wrote:
>>>
>>> Hi team,
>>>
>>>
>>>
>>> I wrote this email to propose Eli Qiao (taget-9) to be a Higgins core.
>>> Normally, the requirement to join the core team is to consistently 
>>> contribute to the project for a certain period of time. However, 
>>> given the fact that the project is new and the initial core team was 
>>> formed based on a commitment, I am fine with proposing a new core based 
>>> on a strong commitment to contribute plus a few useful 
>>> patches/reviews. In addition, Eli Qiao is currently a Magnum core 
>>> and I believe his expertise will be an asset to the Higgins team.
>>>
>>>
>>>
>>> According to the OpenStack Governance process [1], we require a 
>>> minimum of 4 +1 votes from existing Higgins core team within a 1 
>>> week voting window (consider this proposal as a +1 vote from me). A 
>>> vote of -1 is a veto. If we cannot get enough votes or there is a 
>>> veto vote prior to the end of the voting window, Eli is not able to 
>>> join the core team and needs to wait 30 days to reapply.
>>>
>>>
>>>
>>> The voting is open until Tuesday, June 7th.
>>>
>>>
>>>
>>> [1] 
>>> https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>>>
>>>
>>>
>>> Best regards,
>>>
>>> Hongbin
>>>
>>>
>>>
>>>
>>> 
>>> __ OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> --
>>> Cheers & Best regards,
>>> Fei Long Wang (王飞龙)
>>>
>>> 
>>> --
>>> Senior Cloud Software Engineer
>>> Tel: +64-48032246
>>> Email: flw...@catalyst.net.nz
>>> Catalyst IT Limited
>>> Level 6, Catalyst House, 150 Willis Street, Wellington
>>>
>>> 
>>> --
>>>
>>>
>>> 
>>> __ OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> _
>> _ OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Best regards,
>
> Yanyan
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-11 Thread Kumari, Madhuri
Hi,

+1 for option #2. It's OK to fetch the help message from the API server, but a drawback 
is that help messages are supposed to work even if the actual services are 
not running.
Thoughts?

Regards,
Madhuri

-Original Message-
From: taget [mailto:qiaoliy...@gmail.com] 
Sent: Thursday, May 12, 2016 7:01 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] How to document 'labels'

+1 for #2, to keep the help message in the Magnum API server, since we have a
validation list in the API server; see
https://review.openstack.org/#/c/312990/5/magnum/api/attr_validator.py@23 .

But if we choose #2, the CLI should add support for retrieving the help message from 
the API server.

On 2016年05月12日 09:10, Shuu Mutou wrote:
> Hi all,
>
> +1 for option #2.
>
> Yuanying drafted following blueprint. And I will follow up this.
> https://blueprints.launchpad.net/magnum/+spec/magnum-api-swagger-suppo
> rt
>
> I think, this will be satisfy Tom's comments.
>
> regards,
>
> Shu Muto
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Best Regards, Eli Qiao (乔立勇)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Kumari, Madhuri
+1 from me. Thanks Eli for your contribution.

Regards,
Madhuri

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: Friday, April 1, 2016 8:13 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core 
reviewer team

+1 for Eli

Thanks
-Yuanying

On Fri, April 1, 2016 at 10:59, 王华 <wanghua.hum...@gmail.com> wrote:
+1 for Eli.

Best Regards,
Wanghua

On Fri, Apr 1, 2016 at 9:51 AM, Duan, Li-Gong (Gary, HPServers-Core-OE-PSC) 
<li-gong.d...@hpe.com<mailto:li-gong.d...@hpe.com>> wrote:
+1 for Eli.

Regards,
Gary Duan

From: Hongbin Lu [mailto:hongbin...@huawei.com<mailto:hongbin...@huawei.com>]
Sent: Friday, April 01, 2016 2:18 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer 
team

Hi all,

Eli Qiao has consistently contributed to Magnum for a while. His 
contribution started about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contribution covers 
various aspects (i.e. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli is not able to join the core team 
and needs to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] [FFE] Support for Magnum Plugin

2016-03-15 Thread Madhuri
Thank you all for the acceptance.

Regards,
Madhuri

On Mon, Mar 14, 2016 at 10:09 PM, Serg Melikyan <smelik...@mirantis.com>
wrote:

> Granted
>
> On Mon, Mar 14, 2016 at 4:09 AM, Nikolay Starodubtsev <
> nstarodubt...@mirantis.com> wrote:
>
>> Hi,
>> I don't like it when we need to break rules, but in this case I agree with
>> Kirill. Since the plugin is not something that can break general
>> murano functionality, I'm for the FFE.
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>> Skype: dark_harlequine1
>>
>> 2016-03-14 11:58 GMT+03:00 Kirill Zaitsev <kzait...@mirantis.com>:
>>
>>> I’m going to advocate for this FFE. Even though it’s very late to ask
>>> for FFE, I believe, that this commit is very low-risk/high reward (the
>>> plugin is not enabled by default). I believe that code is in good shape (I
>>> remember +2 it at some point) and would very much like to see this in.
>>>
>>> Serg, do you have any objections?
>>>
>>> --
>>> Kirill Zaitsev
>>> Software Engineer
>>> Mirantis, Inc
>>>
>>> On 14 March 2016 at 11:55:46, Madhuri (madhuri.ra...@gmail.com) wrote:
>>>
>>> Hi All,
>>>
>>> I would like to request a feature freeze exception for "Magnum/Murano
>>> rationalization" [1], Magnum app to deploy Kubernetes/Mesos/Swarm cluster.
>>> The patch is on review[2].
>>> I am looking for your decision about considering this change for a FFE.
>>>
>>> [1]
>>> https://blueprints.launchpad.net/murano/+spec/magnum-murano-rationalization
>>> [2] https://review.openstack.org/#/c/269250/
>>>
>>> Regards,
>>> Madhuri
>>> __
>>>
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com | +1 (650) 440-8979
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] [FFE] Support for Magnum Plugin

2016-03-14 Thread Madhuri
Hi All,

I would like to request a feature freeze exception for "Magnum/Murano
rationalization" [1], a Magnum app to deploy Kubernetes/Mesos/Swarm clusters.
The patch is under review [2].
I am awaiting your decision on considering this change for an FFE.

[1]
https://blueprints.launchpad.net/murano/+spec/magnum-murano-rationalization
[2] https://review.openstack.org/#/c/269250/

Regards,
Madhuri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] API service won't work if conductor down?

2016-02-03 Thread Kumari, Madhuri
Corey, the one you are talking about has been renamed to coe-service-*.

Eli, IMO we should display a proper error message. The m-api service should only have 
read permission.

Regards,
Madhuri

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: Wednesday, February 3, 2016 6:50 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] API service won't work if conductor down?

The service-* commands aren't related to the magnum services (e.g. 
magnum-conductor). The service-* commands are for services on the bay that the 
user creates and deletes.

Corey

On Wed, Feb 3, 2016 at 2:25 AM Eli Qiao 
<liyong.q...@intel.com<mailto:liyong.q...@intel.com>> wrote:
hi
When I try to run magnum service-list to list all services (it seems we only 
have the m-cond service now), and m-cond is down (which means no conductor at all),
the API won't respond and will return a timeout error.

taget@taget-ThinkStation-P300:~/devstack$ magnum service-list
ERROR: Timed out waiting for a reply to message ID 
fd1e9529f60f42bf8db903bbf75bbade (HTTP 500)

I debugged further and compared with nova service-list: nova responds 
and reports that the conductor is down.

Digging deeper, I see this happening at magnum-api startup:

# Enable object backporting via the conductor
base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()

So this call in the magnum_service API code

return objects.MagnumService.list(context, limit, marker, sort_key,
  sort_dir)

requires magnum-conductor to access the DB, but there is no magnum-conductor at 
all, so we get a 500 error.
(nova-api doesn't set indirection_api, so nova-api can access the DB directly.)
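
To make this easier to follow, here is a much-simplified sketch (illustrative
only, not the actual oslo.versionedobjects or magnum code) of what setting
indirection_api effectively does: remotable calls are routed to the conductor
over RPC instead of hitting the DB in the API process, so with no conductor
running the call fails.

# Simplified illustration only -- not the real oslo.versionedobjects/magnum code.
class FakeIndirectionAPI(object):
    """Stands in for the conductor RPC API in this sketch."""
    def object_class_action(self, context, objname, objmethod, args, kwargs):
        # In the real service this is an RPC call to magnum-conductor; with no
        # conductor running it times out and the API returns a 500.
        raise RuntimeError("timed out waiting for a reply from the conductor")

def remotable_classmethod(fn):
    def wrapper(cls, context, *args, **kwargs):
        if cls.indirection_api is not None:
            # Indirection enabled: route the call through the conductor.
            return cls.indirection_api.object_class_action(
                context, cls.__name__, fn.__name__, args, kwargs)
        # No indirection (the nova-api style): touch the DB directly.
        return fn(cls, context, *args, **kwargs)
    return classmethod(wrapper)

class MagnumService(object):
    indirection_api = FakeIndirectionAPI()  # set at magnum-api startup

    @remotable_classmethod
    def list(cls, context, limit=None):
        return ["... rows read straight from the DB ..."]

try:
    MagnumService.list(context=None)
except RuntimeError as e:
    print("service-list failed: %s" % e)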

My question is:

1) Is it by design that we don't allow magnum-api to access the DB directly?
2) If so, then `magnum service-list` won't work when the conductor is down, and the 
error message should be improved, e.g. "magnum service is down, please check 
that magnum-conductor is alive".

What do you think?

P.S. I tested commenting out this line:
# base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()
With that change, magnum-api responds, but creating a bay fails, which means the API 
service has read access but cannot write at all (all DB writes happen in the 
conductor layer).



--

Best Regards, Eli(Li Yong)Qiao

Intel OTC China
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-01 Thread Kumari, Madhuri
+1 for both. Welcome!

From: 大塚元央 [mailto:yuany...@oeilvert.org]
Sent: Tuesday, February 2, 2016 9:39 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Magnum] New Core Reviewers

+1 welcome!!

On Tue, Feb 2, 2016 at 10:14, Eli Qiao wrote:
+1 +1, thanks to both for their hard work reviewing.

On 2016-02-01 23:58, Adrian Otto wrote:
> Magnum Core Team,
>
> I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Reviewers. 
> Please respond with your votes.
>
> Thanks,
>
> Adrian Otto
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

--
Best Regards, Eli(Li Yong)Qiao
Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][barbican] Setting a live debug session time

2015-07-20 Thread Madhuri
Hi Alee,

Thank you for offering to help.

The proposed timing suits me. It would be 10:30 am JST for me.

I am madhuri on #freenode.
Will we be discussing on #openstack-containers?

Sdake,
Thank you for setting this up.

Regards,
Madhuri

On Mon, Jul 20, 2015 at 11:26 PM, Ade Lee a...@redhat.com wrote:

 Madhuri,

 I understand that you are somewhere in APAC.  Perhaps it would be best
 to set up a debugging session on Tuesday night  -- at 9:30 pm EST

 This would correspond to 01:30:00 a.m. GMT (Wednesday), which should
 correspond to sometime in the morning for you.

 We can start with the initial goal of getting the snake oil plugin
 working for you, and then see where things are going wrong in the Dogtag
 install.

 Will that work for you?  What is your IRC nick?
 Ade

 (ps. I am alee on #freenode and can be found on either
 #openstack-barbican or #dogtag-pki)

 On Fri, 2015-07-17 at 14:39 +, Steven Dake (stdake) wrote:
  Madhuri,
 
 
  Alee is in EST timezone (gmt-5 IIRC).  Alee will help you get barbican
  rolling.  Can you two folks set up a time to chat on irc on Monday or
  tuesday?
 
 
  Thanks
  -steve
 
 
 



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Tom Cammann for core

2015-07-09 Thread Madhuri
+1 to Tom for his great reviews.

Regards,
Madhuri

On Fri, Jul 10, 2015 at 11:20 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

 Team,

 Tom Cammann (tcammann) has become a valued Magnum contributor, and
 consistent reviewer helping us to shape the direction and quality of our
 new contributions. I nominate Tom to join the magnum-core team as our
 newest core reviewer. Please respond with a +1 vote if you agree.
 Alternatively, vote -1 to disagree, and include your rationale for
 consideration.

 Thanks,

 Adrian
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

2015-06-29 Thread Madhuri
I agree with Tom's comment about not maintaining a separate repo for the
heat templates when they can't be reused by others.

Regards,
Madhuri

On Tue, Jun 30, 2015 at 10:56 AM, Angus Salkeld asalk...@mirantis.com
wrote:

 On Tue, Jun 30, 2015 at 8:23 AM, Fox, Kevin M kevin@pnnl.gov wrote:

 Needing to fork templates to tweak things is a very common problem.

 Adding conditionals to Heat was discussed at the Summit. (
 https://etherpad.openstack.org/p/YVR-heat-liberty-template-format). I
 want to say, someone was going to prototype it using YAQL, but I don't
 remember who.


 I was going to do that, but I would not expect that ready in a very short
 time frame. It needs
 some investigation and agreement from others. I'd suggest making you
 decision based
 on what we have now.

 -Angus



 Would it be reasonable to keep if conditionals worked?

 Thanks,
 Kevin
 
 From: Hongbin Lu [hongbin...@huawei.com]
 Sent: Monday, June 29, 2015 3:01 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

 Agree. The motivation for pulling the templates out of the Magnum tree was the hope
 that these templates could be leveraged by a larger community and get more
 feedback. However, that is unlikely to be the case in practice, because
 different people have their own versions of the templates for addressing
 different use cases. It has proven hard to consolidate different
 templates even when they share a large amount of duplicated code
 (recall that we had to copy-and-paste the original template to add support
 for Ironic and CoreOS). So, +1 for stopping usage of heat-coe-templates.

 Best regards,
 Hongbin

 -Original Message-
 From: Tom Cammann [mailto:tom.camm...@hp.com]
 Sent: June-29-15 11:16 AM
 To: openstack Development Mailing List (not for usage questions)
 Subject: [openstack-dev] [Magnum] Continuing with heat-coe-templates

 Hello team,

 I've been doing work in Magnum recently to align our templates with the
 upstream templates from larsks/heat-kubernetes[1]. I've also been porting
 these changes to the stackforge/heat-coe-templates[2] repo.

 I'm currently not convinced that maintaining a separate repo for Magnum
 templates (stackforge/heat-coe-templates) is beneficial for Magnum or the
 community.

 Firstly it is very difficult to draw a line on what should be allowed
 into the heat-coe-templates. We are currently taking out changes[3] that
 introduced useful autoscaling capabilities in the templates but that
 didn't fit the Magnum plan. If we are going to treat the heat-coe-templates
 in that way then this extra repo will not allow organic development of new
 and old container engine templates that are not tied into Magnum.
 Another recent change[4] in development is smart autoscaling of bays
 which introduces parameters that don't make a lot of sense outside of
 Magnum.

 There are also difficult interdependency problems between the templates
 and the Magnum project such as the parameter fields. If a required
 parameter is added into the template the Magnum code must be also updated
 in the same commit to avoid functional test failures. This can be avoided
 using Depends-On:
 #xx
 feature of gerrit, but it is an additional overhead and will require some
 CI setup.

 Additionally we would have to version the templates, which I assume would
 be necessary to allow for packaging. This brings with it is own problems.

 As far as I am aware, there are no other people using the
 heat-coe-templates beyond the Magnum team. If we want independent growth of
 this repo, it will need to be adopted by other people rather than just Magnum
 committers.

 I don't see the heat templates as a dependency of Magnum, I see them as a
 truly fundamental part of Magnum which is going to be very difficult to cut
 out and make reusable without compromising Magnum's development process.

 I would propose to delete/deprecate the usage of heat-coe-templates and
 continue with the usage of the templates in the Magnum repo. How does the
 team feel about that?

 If we do continue with the large effort required to try and pull out the
 templates as a dependency, then we will need to increase the visibility of the repo
 and greatly increase the reviews/commits on it. We also have a fairly
 significant backlog of work to align the heat-coe-templates with the
 templates in the Magnum tree.

 Thanks,
 Tom

 [1] https://github.com/larsks/heat-kubernetes
 [2] https://github.com/stackforge/heat-coe-templates
 [3] https://review.openstack.org/#/c/184687/
 [4] https://review.openstack.org/#/c/196505/

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri
Adrian,

On Tue, Jun 16, 2015 at 2:39 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

  Madhuri,

  On Jun 15, 2015, at 12:47 AM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

  Hi,

 Thanks Adrian for the quick response. Please find my response inline.

 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com
 wrote:

 Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in 
 discussion.
 I have been trying to figure out what could be the possible change area
 to support this feature in Magnum. Below is just a rough idea on how to
 proceed further on it.

 This task can be further broken in smaller pieces.

 *1. Add support for TLS in python-k8sclient.*
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 *2. Add support for Barbican in Magnum.*
  Barbican will be used to store all the keys and certificates.


  Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.


 +1, I agree. One question here, we are trying to secure the communication
 between magnum-conductor and kube-apiserver. Right?


  We need API services that are on public networks to be secured with TLS,
 or another approach that will allow us to implement access control so that
 these API’s can only be accessed by those with the correct keys. This need
 extends to all places in Magnum where we are exposing native API’s.


Ok, I understand.


  If both methods were supported, the Barbican method should be the
 default, and we should put warning messages in the config file so that when
 the administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.


  In non-Barbican support, client will generate the keys and pass the
 location of the key to the magnum services. Then again heat template will
 copy and configure the kubernetes services on master node. Same as the
 below step.


  Good!

  My suggestion is to completely implement the Barbican support first,
 and follow up that implementation with a non-Barbican option as a second
 iteration for the feature.


  How about implementing the non-Barbican support first as this would be
 easy to implement, so that we can first concentrate on Point 1 and 3. And
 then after it, we can work on Barbican support with more insights.


  Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.


 In my opinion, installation of Barbican should be independent of Magnum.
 My idea here is, if user wants to store his/her keys in Barbican then
 he/she will install it.
 We will have a config paramter like store_secure when True means we have
 to store the keys in Barbican or else not.
  What do you think?


*3. Add support of TLS in Magnum.*
  This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

 The user generates the keys, certificates and store them in Barbican. Now
 there is two way to access these keys while creating a bay.


  Rather than the user generates the keys…, perhaps it might be better
 to word that as the magnum client library code generates the keys for the
 user…”.


 It is user here. In my opinion, there could be users who don't want to
 use magnum client rather the APIs directly, in that case the user will
 generate the key themselves.


  Good point.

In our first implementation, we can support the user generating the
 keys and then later client generating the keys.


  Users should not require any knowledge of how TLS works, or related
 certificate management tools in order to use Magnum. Let’s aim for this.

  I do agree that’s a good logical first step, but I am reluctant to agree
 to it without confidence that we will add the additional security later. I
 want to achieve a secure-by-default configuration in Magnum. I’m happy to
 take measured forward progress toward this, but I don’t want the less
 secure option(s) to be the default once more secure options come along. By
 doing the more secure one first, and making it the default, we allow other
 options only when the administrator makes a conscious action to relax
 security to meet their constraints.


Barbican will be the default option.


  So

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri
+1 Kevin. We will make Barbican a dependency to make it the default option
to secure keys.

Regards,
Madhuri

On Tue, Jun 16, 2015 at 12:48 AM, Fox, Kevin M kevin@pnnl.gov wrote:

  If your asking the cloud provider to go through the effort to install
 Magnum, its not that much extra effort to install Barbican at the same
 time. Making it a dependency isn't too bad then IMHO.

 Thanks,
 Kevin
  --
 *From:* Adrian Otto [adrian.o...@rackspace.com]
 *Sent:* Sunday, June 14, 2015 11:09 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [Magnum] TLS Support in Magnum

  Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in 
 discussion.
 I have been trying to figure out what could be the possible change area
 to support this feature in Magnum. Below is just a rough idea on how to
 proceed further on it.

 This task can be further broken in smaller pieces.

 *1. Add support for TLS in python-k8sclient.*
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 *2. Add support for Barbican in Magnum.*
  Barbican will be used to store all the keys and certificates.


  Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.

  If both methods were supported, the Barbican method should be the
 default, and we should put warning messages in the config file so that when
 the administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.

  My suggestion is to completely implement the Barbican support first, and
 follow up that implementation with a non-Barbican option as a second
 iteration for the feature.

  Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.

*3. Add support of TLS in Magnum.*
  This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

 The user generates the keys, certificates and store them in Barbican. Now
 there is two way to access these keys while creating a bay.


  Rather than the user generates the keys…, perhaps it might be better to
 word that as the magnum client library code generates the keys for the
 user…”.

 1. Heat will access Barbican directly.
 While creating bay, the user will provide this key and heat templates will
 fetch this key from Barbican.


  I think you mean that Heat will use the Barbican key to fetch the TLS key
 for accessing the native API service running on the Bay.

 2. Magnum-conductor access Barbican.
 While creating bay, the user will provide this key and then
 Magnum-conductor will fetch this key from Barbican and provide this key to
 heat.

 Then heat will copy this files on kubernetes master node. Then bay will
 use this key to start a Kubernetes services signed with these keys.


  Make sure that the Barbican keys used by Heat and magnum-conductor to
 store the various TLS certificates/keys are unique per tenant and per bay,
 and are not shared among multiple tenants. We don’t want it to ever be
 possible to trick Magnum into revealing secrets belonging to other tenants.

 After discussion when we all come to same point, I will create
 separate blueprints for each task.
 I am currently working on configuring Kubernetes services with TLS keys.

  Please provide your suggestions if any.


  Thanks for kicking off this discussion.

  Regards,

  Adrian



  Regards,
  Madhuri
  __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri
Thanks Egor.

On Tue, Jun 16, 2015 at 1:52 AM, Egor Guz e...@walmartlabs.com wrote:

 +1 for non-Barbican support first; unfortunately Barbican is not very well
 adopted in existing installations.

 Madhuri, also please keep in mind we should come up with a solution which
 should work with Swarm and Mesos as well in the future.


Good point. It will be the same; the only difference will be configuring
the respective services with the signed certs and keys.


 —
 Egor

 From: Madhuri Rai madhuri.ra...@gmail.commailto:madhuri.ra...@gmail.com
 
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 
 Date: Monday, June 15, 2015 at 0:47
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org
 
 Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

 Hi,

 Thanks Adrian for the quick response. Please find my response inline.

 On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com
 mailto:adrian.o...@rackspace.com wrote:
 Madhuri,

 On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.commailto:
 madhuri.ra...@gmail.com wrote:

 Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in
 discussion. I have been trying to figure out what could be the possible
 change area to support this feature in Magnum. Below is just a rough idea
 on how to proceed further on it.

 This task can be further broken in smaller pieces.

 1. Add support for TLS in python-k8sclient.
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 2. Add support for Barbican in Magnum.
 Barbican will be used to store all the keys and certificates.

 Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.

 +1, I agree. One question here, we are trying to secure the communication
 between magnum-conductor and kube-apiserver. Right?


 If both methods were supported, the Barbican method should be the default,
 and we should put warning messages in the config file so that when the
 administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.

 In non-Barbican support, client will generate the keys and pass the
 location of the key to the magnum services. Then again heat template will
 copy and configure the kubernetes services on master node. Same as the
 below step.


 My suggestion is to completely implement the Barbican support first, and
 follow up that implementation with a non-Barbican option as a second
 iteration for the feature.

 How about implementing the non-Barbican support first as this would be
 easy to implement, so that we can first concentrate on Point 1 and 3. And
 then after it, we can work on Barbican support with more insights.

 Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.

 In my opinion, installation of Barbican should be independent of Magnum.
 My idea here is, if user wants to store his/her keys in Barbican then
 he/she will install it.
 We will have a config paramter like store_secure when True means we have
 to store the keys in Barbican or else not.
 What do you think?

 3. Add support of TLS in Magnum.
 This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

 The user generates the keys, certificates and store them in Barbican. Now
 there is two way to access these keys while creating a bay.

 Rather than the user generates the keys…, perhaps it might be better to
 word that as the magnum client library code generates the keys for the
 user…”.

 It is user here. In my opinion, there could be users who don't want to
 use magnum client rather the APIs directly, in that case the user will
 generate the key themselves.

 In our first implementation, we can support the user generating the keys
 and then later client generating the keys.

 1. Heat will access Barbican directly.
 While creating bay, the user will provide this key and heat templates will
 fetch this key from Barbican.

 I think you mean that Heat will use the Barbican key to fetch the TLS key
 for accessing the native API service running on the Bay.
 Yes.

 2. Magnum-conductor access Barbican

Re: [openstack-dev] [Magnum] TLS Support in Magnum

2015-06-15 Thread Madhuri Rai
Hi,

Thanks Adrian for the quick response. Please find my response inline.

On Mon, Jun 15, 2015 at 3:09 PM, Adrian Otto adrian.o...@rackspace.com
wrote:

  Madhuri,

  On Jun 14, 2015, at 10:30 PM, Madhuri Rai madhuri.ra...@gmail.com
 wrote:

Hi All,

 This is to bring the blueprint  secure-kubernetes
 https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in 
 discussion.
 I have been trying to figure out what could be the possible change area
 to support this feature in Magnum. Below is just a rough idea on how to
 proceed further on it.

 This task can be further broken in smaller pieces.

 *1. Add support for TLS in python-k8sclient.*
 The current auto-generated code doesn't support TLS. So this work will be
 to add TLS support in kubernetes python APIs.

 *2. Add support for Barbican in Magnum.*
  Barbican will be used to store all the keys and certificates.


  Keep in mind that not all clouds will support Barbican yet, so this
 approach could impair adoption of Magnum until Barbican is universally
 supported. It might be worth considering a solution that would generate all
 keys on the client, and copy them to the Bay master for communication with
 other Bay nodes. This is less secure than using Barbican, but would allow
 for use of Magnum before Barbican is adopted.


+1, I agree. One question here: are we trying to secure the communication
between magnum-conductor and kube-apiserver?


  If both methods were supported, the Barbican method should be the
 default, and we should put warning messages in the config file so that when
 the administrator relaxes the setting to use the non-Barbican configuration
 he/she is made aware that it requires a less secure mode of operation.


With non-Barbican support, the client will generate the keys and pass the
location of the keys to the magnum services. Then, as before, the heat template will
copy them and configure the kubernetes services on the master node, the same as the
step below.


  My suggestion is to completely implement the Barbican support first, and
 follow up that implementation with a non-Barbican option as a second
 iteration for the feature.


How about implementing the non-Barbican support first, as it would be easier
to implement, so that we can concentrate on Points 1 and 3 first? After that,
we can work on Barbican support with more insight.


  Another possibility would be for Magnum to use its own private
 installation of Barbican in cases where it is not available in the service
 catalog. I dislike this option because it creates an operational burden for
 maintaining the private Barbican service, and additional complexities with
 securing it.


In my opinion, the installation of Barbican should be independent of Magnum. My
idea here is: if a user wants to store his/her keys in Barbican, then he/she
will install it.
We will have a config parameter like store_secure; when True, it means we
store the keys in Barbican, otherwise not.
What do you think?
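
For what it's worth, here is a minimal sketch of what the Barbican-backed path
could look like, using python-barbicanclient (the auth details, secret name and
PEM payload are placeholders; this is not existing Magnum code):

# Rough sketch only: store and retrieve a bay's TLS material in Barbican.
from keystoneauth1 import identity, session
from barbicanclient import client

auth = identity.Password(auth_url="http://controller:5000/v3",
                         username="magnum", password="secret",
                         project_name="service",
                         user_domain_id="default", project_domain_id="default")
barbican = client.Client(session=session.Session(auth=auth))

cert_pem = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

# Store the certificate; Barbican returns an opaque secret reference (a URL).
secret = barbican.secrets.create(name="bay-123-tls-cert",
                                 payload=cert_pem,
                                 payload_content_type="text/plain")
secret_ref = secret.store()

# Later, when the conductor or heat needs it, fetch it back by reference.
retrieved = barbican.secrets.get(secret_ref)
print(retrieved.payload)

Keeping one such reference per bay would also help keep secrets separated per
tenant and per bay.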


*3. Add support of TLS in Magnum.*
  This work mainly involves supporting the use of key and certificates in
 magnum to support TLS.

 The user generates the keys, certificates and store them in Barbican. Now
 there is two way to access these keys while creating a bay.


  Rather than the user generates the keys…, perhaps it might be better to
 word that as the magnum client library code generates the keys for the
 user…”.


It is the user here. In my opinion, there could be users who don't want to
use the magnum client but rather the APIs directly; in that case, the user will
generate the keys themselves.

In our first implementation, we can support the user generating the keys,
and later add support for the client generating them.


 1. Heat will access Barbican directly.
 While creating bay, the user will provide this key and heat templates will
 fetch this key from Barbican.


  I think you mean that Heat will use the Barbican key to fetch the TLS key
 for accessing the native API service running on the Bay.

Yes.


 2. Magnum-conductor access Barbican.
 While creating bay, the user will provide this key and then
 Magnum-conductor will fetch this key from Barbican and provide this key to
 heat.

 Then heat will copy this files on kubernetes master node. Then bay will
 use this key to start a Kubernetes services signed with these keys.


  Make sure that the Barbican keys used by Heat and magnum-conductor to
 store the various TLS certificates/keys are unique per tenant and per bay,
 and are not shared among multiple tenants. We don’t want it to ever be
 possible to trick Magnum into revealing secrets belonging to other tenants.


Yes, I will take care of it.

   After discussion when we all come to same point, I will create separate
blueprints for each task.
I am currently working on configuring Kubernetes services with TLS keys.

 Please provide your suggestions if any.


 Thanks for kicking off this discussion.


  Regards,

  Adrian



  Regards,
  Madhuri

[openstack-dev] [Magnum] TLS Support in Magnum

2015-06-14 Thread Madhuri Rai
Hi All,

This is to bring the blueprint  secure-kubernetes
https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes in
discussion.
I have been trying to figure out the possible areas of change needed to
support this feature in Magnum. Below is a rough idea of how to proceed.

This task can be further broken into smaller pieces.

*1. Add support for TLS in python-k8sclient.*
The current auto-generated code doesn't support TLS. So this work will be
to add TLS support in kubernetes python APIs.

*2. Add support for Barbican in Magnum.*
Barbican will be used to store all the keys and certificates.

*3. Add support of TLS in Magnum.*
This work mainly involves supporting the use of key and certificates in
magnum to support TLS.

The user generates the keys and certificates and stores them in Barbican. Now
there are two ways to access these keys while creating a bay.

1. Heat will access Barbican directly.
While creating a bay, the user will provide this key, and the heat templates will
fetch it from Barbican.


2. Magnum-conductor accesses Barbican.
While creating a bay, the user will provide this key, and then
magnum-conductor will fetch it from Barbican and provide it to
heat.

Then heat will copy these files onto the kubernetes master node, and the bay will use
them to start the Kubernetes services signed with these keys.
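
To make the key material side of this concrete, here is a minimal sketch using
the Python cryptography library (illustrative only, not existing Magnum code;
it assumes a reasonably recent cryptography release) of generating the kind of
CA key/certificate that would then be stored in Barbican and pushed to the bay
master:

# Illustrative only: generate a CA key and a self-signed CA certificate.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"magnum-bay-ca")])
now = datetime.datetime.utcnow()

cert = (x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)                       # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256()))

ca_key_pem = key.private_bytes(serialization.Encoding.PEM,
                               serialization.PrivateFormat.TraditionalOpenSSL,
                               serialization.NoEncryption())
ca_cert_pem = cert.public_bytes(serialization.Encoding.PEM)
print(ca_cert_pem.decode())

The per-node server and client certificates would be produced in the same way by
signing CSRs with this CA before the resulting files are handed over as
described above.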


After discussion, once we all come to the same point, I will create separate
blueprints for each task.
I am currently working on configuring Kubernetes services with TLS keys.

Please provide your suggestions if any.


Regards,
Madhuri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for Core for Magnum

2015-05-31 Thread Madhuri Kumari
+1 for Kennan.

Thanks & Regards
Madhuri Kumari


From: Steven Dake (stdake) [std...@cisco.com]
Sent: Sunday, May 31, 2015 11:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Proposing Kai Qiang Wu (Kennan) for Core for 
Magnum

Hi core team,

Kennan (Kai Qiang Wu’s nickname) has really done a nice job in Magnum 
contributions.  I would like to propose Kennan for the core reviewer team.  I 
don’t think we necessarily need more core reviewers on Magnum, but Kennan has 
demonstrated a big commitment to Magnum and is a welcome addition in my opinion.

For the lifetime of the project, Kennan has contributed 8% of the reviews, and 
8% of the commits.  Kennan is also active in IRC.  He meets my definition of 
what a core developer for Magnum should be.

Consider my proposal to be a +1 vote.

Please vote +1 to approve or vote –1 to veto.  A single –1 vote acts as a veto, 
meaning Kennan would not be approved for the core team.  I believe we require 3 
+1 core votes presently, but since our core team is larger now then when we 
started, I’d like to propose at our next team meeting we formally define the 
process by which we accept new cores post this proposal for Magnum into the 
magnum-core group and document it in our wiki.

I’ll leave voting open for 1 week until June 6th to permit appropriate time for 
the core team to vote.  If there is a unanimous decision or veto prior to that 
date, I’ll close voting and make the appropriate changes in gerrit as 
appropriate.

Thanks
-steve




__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposal for Madhuri Kumari to join Core Team

2015-05-01 Thread Madhuri Rai
Hi All,

Thank you for adding me to this team.

It will be my pleasure to work with you all.
So far, everyone has helped me in one way or another.

Thank you for all the support and this honor.

Looking forward to contributing more.


Thanks & Regards
Madhuri Kumari

On Fri, May 1, 2015 at 10:25 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

  Team,

  Madhuri has been added to the magnum-core group [1]. Thanks everyone for
 your votes.

  Regards,

  Adrian

  [1] https://review.openstack.org/#/admin/groups/473,members

  On Apr 30, 2015, at 8:48 PM, Hongbin Lu hongbin...@gmail.com wrote:

  +1!

 On Apr 28, 2015, at 11:14 PM, Steven Dake (stdake) std...@cisco.com
 wrote:

  Hi folks,

  I would like to nominate Madhuri Kumari  to the core team for Magnum.
 Please remember a +1 vote indicates your acceptance.  A –1 vote acts as a
 complete veto.

  Why Madhuri for core?

1. She participates on IRC heavily
2. She has been heavily involved in a really difficult project  to
remove Kubernetes kubectl and replace it with a native python language
binding which is really close to be done (TM)
3. She provides helpful reviews and her reviews are of good quality

 Some of Madhuri’s stats, where she performs in the pack with the rest of
 the core team:

  reviews: http://stackalytics.com/?release=kilomodule=magnum-group
 commits:
 http://stackalytics.com/?release=kilomodule=magnum-groupmetric=commits

  Please feel free to vote if your a Magnum core contributor.

  Regards
 -steve


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New fedora 21 Atomic images available for testing

2015-04-24 Thread Madhuri Rai
Hi Steve,

I tried to boot a VM with the new image, but it doesn't work.
The VM state was ACTIVE but I can't ping or ssh to it.

If anyone has tested it, please let me know.

Also, I would request folks to test the image so that we can pass it, as we
need this image for the Kubernetes client.

Regards,
Madhuri


On Fri, Apr 24, 2015 at 2:00 PM, Madhuri Rai madhuri.ra...@gmail.com
wrote:

 Hi Steve,

 Thank you for working on this.

 It will be really good for us to remove dependency on external projects.

 Regards,
 Madhuri

 On Fri, Apr 24, 2015 at 8:27 AM, Steven Dake (stdake) std...@cisco.com
 wrote:

  Hi folks,

  I have spent the last couple of days trying to bring some sanity to the
 image building process for Magnum.

  I have found a tool which the Atomic upstream produces which allows a
 simple repeatable building process for Fedora Atomic images using any
 upstream repos of our choosing.

  I put in a kubernetes 0.15 COPR repo in this build.  Please test and
 get back to me either on irc or the ML.

  The image is available for download from here:
 https://fedorapeople.org/groups/magnum/fedora-21-atomic-3.qcow2.xz
 https://fedorapeople.org/groups/magnum/

  Regards,
 -steve


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New fedora 21 Atomic images available for testing

2015-04-23 Thread Madhuri Rai
Hi Steve,

Thank you for working on this.

It will be really good for us to remove dependency on external projects.

Regards,
Madhuri

On Fri, Apr 24, 2015 at 8:27 AM, Steven Dake (stdake) std...@cisco.com
wrote:

  Hi folks,

  I have spent the last couple of days trying to bring some sanity to the
 image building process for Magnum.

  I have found a tool which the Atomic upstream produces which allows a
 simple repeatable building process for Fedora Atomic images using any
 upstream repos of our choosing.

  I put in a kubernetes 0.15 COPR repo in this build.  Please test and get
 back to me either on irc or the ML.

  The image is available for download from here:
 https://fedorapeople.org/groups/magnum/fedora-21-atomic-3.qcow2.xz
 https://fedorapeople.org/groups/magnum/

  Regards,
 -steve


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] swagger-codegen generated code for python-k8sclient

2015-04-22 Thread Madhuri Rai
Hi All,

As we are using the fedora-21-atomic-2 image, which has Kubernetes v0.11.0, I
tried to run the v1beta3 APIs against it. Some of the APIs failed.
The Kubernetes developers said v1beta3 wasn't fully supported until the
0.15.0 release, which is why some of these APIs fail.

Below are the failures:

1. The service-create API fails (422 status) with the v1beta3 request format.
The request format has changed from v1beta1 to v1beta3 (see the sketch after this list).

https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api.md#v1beta3-conversion-tips

I have logged an issue for the same at GoogleCloudPlatform/Kubernetes:
https://github.com/GoogleCloudPlatform/kubernetes/issues/7157

2. The pod-create API fails (500 status) with an invalid request format.
While doing negative testing, I found that the pod-create API fails with a
500 status when it should actually fail with 400.

I have logged an issue for the same at GoogleCloudPlatform/Kubernetes:
https://github.com/GoogleCloudPlatform/kubernetes/issues/7087

3. The pod-update API fails (404).
While trying to update a pod, it failed with status 404 even though the
pod exists. This is due to a duplicate replacePod API in the Kubernetes client
code.

I have logged an issue for the same at GoogleCloudPlatform/Kubernetes:
https://github.com/GoogleCloudPlatform/kubernetes/issues/7100

4. All APIs fail with a JSON manifest.
All Kubernetes resources (pod, rc, service) now fail with a JSON-format
manifest due to an issue in the swagger-codegen-generated Kubernetes client code:
it doesn't support unicode strings.
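
To give a feel for point 1, the shape of a Service create request changed
roughly as follows between the two versions (an illustrative sketch based on
the v1beta3 conversion tips linked above, trimmed to the relevant fields):

# Illustrative only, based on the v1beta3 conversion tips linked above.
service_v1beta1 = {
    "kind": "Service",
    "apiVersion": "v1beta1",
    "id": "frontend",                  # the resource name lived at the top level
    "port": 80,
    "containerPort": 8080,
    "selector": {"name": "frontend"},
}

service_v1beta3 = {
    "kind": "Service",
    "apiVersion": "v1beta3",
    "metadata": {"name": "frontend"},  # name and labels moved under metadata
    "spec": {                          # everything else moved under spec
        "ports": [{"port": 80, "targetPort": 8080}],
        "selector": {"name": "frontend"},
    },
}
# A server older than 0.15.0 does not fully accept the new shape, which is
# consistent with the 422 described above.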

Given all these issues, can we really switch to the Kubernetes client in this
release, or should we wait for a Fedora image with the Kubernetes 0.15.0
release, which fully supports v1beta3?

Please provide your suggestions on this so that I can proceed further.

Thanks & Regards
Madhuri Kumari


On Tue, Mar 24, 2015 at 10:37 AM, Madhuri Rai madhuri.ra...@gmail.com
wrote:

 Hi Hongbin,


 On Tue, Mar 24, 2015 at 12:37 AM, Hongbin Lu hongbin...@gmail.com wrote:

 Hi Madhuri,

 Amazing work! I wouldn't concern the code duplication and modularity
 issue since the codes are generated. However, there is another concern
 here: if we find a bug/improvement of the generated code, we probably need
 to modify the generator. The question is if the upstream will accept the
 modifications? If yes, how fast the patch will go through.

 I would prefer to maintain a folk of the generator. By this way, we would
 have full control of the generated code. Thoughts?


 I agree that's a concern. I will try to fix the pep8 error upstream to
 look how it take to push a change upstream.


 Thanks,
 Hongbin

 On Mon, Mar 23, 2015 at 10:11 AM, Steven Dake (stdake) std...@cisco.com
 wrote:



   From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage
 questions) openstack-dev@lists.openstack.org
 Date: Monday, March 23, 2015 at 1:53 AM
 To: openstack-dev@lists.openstack.org 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [magnum] swagger-codegen generated code for
 python-k8sclient

   Hi All,

 This is to have a discussion on the blueprint for implementing
 python-k8client for magnum.

 https://blueprints.launchpad.net/magnum/+spec/python-k8sclient

 I have committed the code generated by swagger-codegen at
 https://review.openstack.org/#/c/166720/.
 But I feel the quality of the code generated by swagger-codegen is not
 good.

 Some of the points:
 1) There is lot of code duplication. If we want to generate code for two
 or more versions, same code is duplicated for each API version.
 2) There is no modularity. CLI code for all the APIs are written in same
 file.

 So, I would like your opinion on this. How should we proceed further?


  Madhuri,

  First off, spectacular that you figured out how to do this!  Great
 great job!  I suspected the swagger code would be a bunch of garbage.  Just
 looking over the review, the output isn’t too terribly bad.  It has some
 serious pep8 problems.

  Now that we have seen the swagger code generator works, we need to see
 if it produces useable output.  In other words, can the API be used by the
 magnum backend.  Google is “all-in” on swagger for their API model.
 Realistically maintaining a python binding would be a huge job.  If we
 could just use swagger for the short term, even though its less then ideal,
 that would be my preference.  Even if its suboptimal.  We can put a readme
 in the TLD saying the code was generated by a a code generator and explain
 how to generate the API.

  One last question.  I didn’t see immediately by looking at the api,
 but does it support TLS auth?  We will need that.

  Super impressed!

  Regards
 -steve



 Regards,
 Madhuri Kumari



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org

Re: [openstack-dev] Regarding openstack swift implementation

2015-04-20 Thread Madhuri Rai
Hi,

You can go through the link below to get started.

http://docs.openstack.org/developer/swift/development_saio.html


Thanks
Madhuri Kumari

On Tue, Apr 21, 2015 at 2:09 PM, Subbulakshmi Subha 
subbulakshmisubh...@gmail.com wrote:

 Hi,
 I am trying to simulate OpenStack Swift. Could you please help me with
 getting started? What are all the details I should know to store
 objects on Swift?

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] swagger-codegen generated code for python-k8sclient

2015-03-23 Thread Madhuri Rai
Hi All,

This is to have a discussion on the blueprint for implementing
python-k8sclient for magnum.

https://blueprints.launchpad.net/magnum/+spec/python-k8sclient

I have committed the code generated by swagger-codegen at
https://review.openstack.org/#/c/166720/.
But I feel the quality of the code generated by swagger-codegen is not good.

Some of the points:
1) There is a lot of code duplication. If we want to generate code for two or
more versions, the same code is duplicated for each API version.
2) There is no modularity. The CLI code for all the APIs is written in the same
file.

So, I would like your opinion on this. How should we proceed further?

Regards,
Madhuri Kumari
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] swagger-codegen generated code for python-k8sclient

2015-03-23 Thread Madhuri Rai
Hi Steven,


On Mon, Mar 23, 2015 at 11:11 PM, Steven Dake (stdake) std...@cisco.com
wrote:



   From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Date: Monday, March 23, 2015 at 1:53 AM
 To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
 
 Subject: [openstack-dev] [magnum] swagger-codegen generated code for
 python-k8sclient

   Hi All,

 This is to have a discussion on the blueprint for implementing
 python-k8client for magnum.

 https://blueprints.launchpad.net/magnum/+spec/python-k8sclient

 I have committed the code generated by swagger-codegen at
 https://review.openstack.org/#/c/166720/.
 But I feel the quality of the code generated by swagger-codegen is not
 good.

 Some of the points:
 1) There is lot of code duplication. If we want to generate code for two
 or more versions, same code is duplicated for each API version.
 2) There is no modularity. CLI code for all the APIs are written in same
 file.

 So, I would like your opinion on this. How should we proceed further?


  Madhuri,

  First off, spectacular that you figured out how to do this!  Great great
 job!  I suspected the swagger code would be a bunch of garbage.  Just
 looking over the review, the output isn’t too terribly bad.  It has some
 serious pep8 problems.

  Now that we have seen the swagger code generator works, we need to see
 if it produces useable output.  In other words, can the API be used by the
 magnum backend.  Google is “all-in” on swagger for their API model.
 Realistically maintaining a python binding would be a huge job.  If we
 could just use swagger for the short term, even though its less then ideal,
 that would be my preference.  Even if its suboptimal.  We can put a readme
 in the TLD saying the code was generated by a a code generator and explain
 how to generate the API.


I have started working on it and will look into whether some improvements
can be made. I will also try to use it in magnum.



  One last question.  I didn’t see immediately by looking at the api, but
 does it support TLS auth?  We will need that.


I am not sure about it. I will check and let you know.
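
(As a rough illustration of what TLS auth would mean on the client side,
independent of whatever the swagger-generated code ends up exposing: each call
needs to present a client certificate/key and verify the server against a CA,
the way the requests library does below. The host and file names are
placeholders.)

# Illustration only: TLS client auth with a plain HTTP client; the generated
# k8s client would need equivalent knobs. Paths and host are placeholders.
import requests

resp = requests.get(
    "https://kube-master:6443/api/v1beta3/namespaces/default/pods",
    verify="ca.crt",                    # verify the API server against our CA
    cert=("client.crt", "client.key"))  # present our client certificate and key
print(resp.status_code)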


  Super impressed!

  Regards
 -steve



 Regards,
 Madhuri Kumari


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Regards,
Madhuri Kumari
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] swagger-codegen generated code for python-k8sclient

2015-03-23 Thread Madhuri Rai
Hi Hongbin,


On Tue, Mar 24, 2015 at 12:37 AM, Hongbin Lu hongbin...@gmail.com wrote:

 Hi Madhuri,

 Amazing work! I wouldn't concern the code duplication and modularity issue
 since the codes are generated. However, there is another concern here: if
 we find a bug/improvement of the generated code, we probably need to modify
 the generator. The question is if the upstream will accept the
 modifications? If yes, how fast the patch will go through.

 I would prefer to maintain a folk of the generator. By this way, we would
 have full control of the generated code. Thoughts?


I agree that's a concern. I will try to fix the pep8 errors upstream to see
how long it takes to push a change upstream.


 Thanks,
 Hongbin

 On Mon, Mar 23, 2015 at 10:11 AM, Steven Dake (stdake) std...@cisco.com
 wrote:



   From: Madhuri Rai madhuri.ra...@gmail.com
 Reply-To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Date: Monday, March 23, 2015 at 1:53 AM
 To: openstack-dev@lists.openstack.org 
 openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [magnum] swagger-codegen generated code for
 python-k8sclient

   Hi All,

 This is to have a discussion on the blueprint for implementing
 python-k8client for magnum.

 https://blueprints.launchpad.net/magnum/+spec/python-k8sclient

 I have committed the code generated by swagger-codegen at
 https://review.openstack.org/#/c/166720/.
 But I feel the quality of the code generated by swagger-codegen is not
 good.

 Some of the points:
 1) There is lot of code duplication. If we want to generate code for two
 or more versions, same code is duplicated for each API version.
 2) There is no modularity. CLI code for all the APIs are written in same
 file.

 So, I would like your opinion on this. How should we proceed further?


  Madhuri,

  First off, spectacular that you figured out how to do this!  Great
 great job!  I suspected the swagger code would be a bunch of garbage.  Just
 looking over the review, the output isn’t too terribly bad.  It has some
 serious pep8 problems.

  Now that we have seen the swagger code generator works, we need to see
 if it produces useable output.  In other words, can the API be used by the
 magnum backend.  Google is “all-in” on swagger for their API model.
 Realistically maintaining a python binding would be a huge job.  If we
 could just use swagger for the short term, even though its less then ideal,
 that would be my preference.  Even if its suboptimal.  We can put a readme
 in the TLD saying the code was generated by a a code generator and explain
 how to generate the API.

  One last question.  I didn’t see immediately by looking at the api, but
 does it support TLS auth?  We will need that.

  Super impressed!

  Regards
 -steve



 Regards,
 Madhuri Kumari


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 Regards,
Madhuri Kumari
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev