Re: [openstack-dev] [Magnum] Add periodic task threading for conductor server

2015-06-14 Thread Hongbin Lu
I think option #3 is the most desirable choice from a performance point of view, because Magnum is going to support multiple conductors and all conductors share the same DB. However, if each conductor runs its own thread for periodic tasks, we will end up with multiple instances of the tasks doing the same job (syncing Heat's state to Magnum's DB). I think Magnum should have only one instance of the periodic task, since replicated instances of the tasks would stress computing and networking resources.

Best regards,
Hongbin

From: Qiao,Liyong [mailto:liyong.q...@intel.com]
Sent: June-14-15 9:38 PM
To: openstack-dev@lists.openstack.org
Cc: qiaoliy...@gmail.com
Subject: [openstack-dev] [Magnum] Add periodic task threading for conductor 
server

hi magnum team,

I am planning to add a periodic task to the Magnum conductor service; it will be useful for syncing task status with Heat and the container service. I already have a WIP patch [1], and I'd like to start a discussion on the implementation.

Currently, the conductor service is an RPC server, and it has several handlers:

endpoints = [
    docker_conductor.Handler(),
    k8s_conductor.Handler(),
    bay_conductor.Handler(),
    conductor_listener.Handler(),
]

All of these handlers run in the RPC server.

1. My patch [1] adds periodic task functions to each handler (if it requires such tasks) and sets them up when the RPC server starts, adding them to a thread group. So, for example:

if we have tasks in bay_conductor.Handler() and docker_conductor.Handler(), then we add 2 threads to the current service's thread group, and each thread runs its own periodic tasks.

The advantage is that each handler's task job runs in a separate thread, but Hongbin's concern is that it may have some impact on horizontal scalability.

2. Another implementation is to put all tasks in a single thread; this thread will run all tasks (for bay, k8s, docker, etc.), just like Sahara does, see [2].

3. The last one is to start a new service in a separate process to run the tasks (I think this will be too heavy/wasteful).
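
For reference, below is a rough, untested sketch of how option #2 might look with oslo.service: one class collects all the periodic jobs, and a single thread in the service's thread group runs them. The class and method names are illustrative only, not taken from my WIP patch.

from oslo_config import cfg
from oslo_service import periodic_task
from oslo_service import threadgroup

CONF = cfg.CONF


class MagnumPeriodicTasks(periodic_task.PeriodicTasks):
    """All periodic jobs (bay sync, docker sync, ...) collected in one place."""

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def sync_bay_status(self, context):
        # Placeholder: query Heat for stack states and update Magnum's DB.
        pass


def setup_periodic_tasks():
    # Run all the periodic tasks in a single thread of a thread group.
    tasks = MagnumPeriodicTasks(CONF)
    tg = threadgroup.ThreadGroup()
    tg.add_dynamic_timer(tasks.run_periodic_tasks,
                         periodic_interval_max=60,
                         context=None)
    return tg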

I'd like to get your suggestions. Thanks in advance.

[1] https://review.openstack.org/#/c/187090/4
[2] 
https://github.com/openstack/sahara/blob/master/sahara/service/periodic.py#L118


--

BR, Eli(Li Yong)Qiao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Question about retrieving resource_list from ResourceGroup

2015-06-26 Thread Hongbin Lu
Hi team,

I would like to start my question by using a sample template:

heat_template_version: 2014-10-16
parameters:
  count:
    type: number
    default: 5
  removal_list:
    type: comma_delimited_list
    default: []
resources:
  sample_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: {get_param: count}
      removal_policies: [{resource_list: {get_param: removal_list}}]
      resource_def:
        type: testnested.yaml
outputs:
  resource_list:
    value: # How to output a list of resources of sample_group? Like "resource_list: ['0', '1', '2', '3', '4']"?

As shown above, this template has a resource group that contains resources defined in a nested template. First, I am going to use this template to create a stack. Then, I am going to update the stack to scale down the resource group by specifying (through parameters) a subset of resources that I want to remove. For example,

$  heat stack-create -f test.yaml test

$ heat stack-show test

$ heat stack-update -f test.yaml -P "count=3;removal_list=1,3" test

I want to know if it is possible to output a "resource_list" that lists all the removal candidates, so that I can programmatically process the list to compile another list (the "removal_list") which will be passed back to the template as a parameter. Any help will be appreciated.
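
For illustration, the kind of processing I have in mind looks roughly like the following untested sketch using python-heatclient (the client/session setup is omitted, and it assumes the "resource_list" output exists):

def scale_down(heat, stack_name, new_count, to_remove):
    # Read the hoped-for resource_list output from the stack.
    stack = heat.stacks.get(stack_name)
    outputs = {o['output_key']: o['output_value'] for o in stack.outputs}
    candidates = outputs.get('resource_list') or []

    # Keep only the members we actually want Heat to remove.
    removal_list = [r for r in candidates if r in to_remove]

    # Pass the removal candidates back through the template parameters.
    heat.stacks.update(stack.id,
                       template=open('test.yaml').read(),
                       parameters={'count': new_count,
                                   'removal_list': ','.join(removal_list)})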

Thanks,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

2015-06-29 Thread Hongbin Lu
Agree. The motivation for pulling the templates out of the Magnum tree was the hope that these templates could be leveraged by a larger community and get more feedback. However, that is unlikely to be the case in practice, because different people have their own versions of the templates for addressing different use cases. It has proven hard to consolidate different templates even when they share a large amount of duplicated code (recall that we had to copy-and-paste the original template to add support for Ironic and CoreOS). So, +1 for stopping usage of heat-coe-templates.

Best regards,
Hongbin

-Original Message-
From: Tom Cammann [mailto:tom.camm...@hp.com] 
Sent: June-29-15 11:16 AM
To: openstack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] Continuing with heat-coe-templates

Hello team,

I've been doing work in Magnum recently to align our templates with the 
"upstream" templates from larsks/heat-kubernetes[1]. I've also been porting 
these changes to the stackforge/heat-coe-templates[2] repo.

I'm currently not convinced that maintaining a separate repo for Magnum 
templates (stackforge/heat-coe-templates) is beneficial for Magnum or the 
community.

Firstly it is very difficult to draw a line on what should be allowed into the 
heat-coe-templates. We are currently taking out changes[3] that introduced 
"useful" autoscaling capabilities in the templates but that didn't fit the 
Magnum plan. If we are going to treat the heat-coe-templates in that way then 
this extra repo will not allow organic development of new and old container 
engine templates that are not tied into Magnum.
Another recent change[4] in development is smart autoscaling of bays which 
introduces parameters that don't make a lot of sense outside of Magnum.

There are also difficult interdependency problems between the templates and the Magnum project, such as the parameter fields. If a required parameter is added to the template, the Magnum code must also be updated in the same commit to avoid functional test failures. This can be avoided using the "Depends-On: #xx" feature of Gerrit, but it is additional overhead and will require some CI setup.

Additionally, we would have to version the templates, which I assume would be necessary to allow for packaging. This brings with it its own problems.

As far as I am aware, there are no other people using the heat-coe-templates beyond the Magnum team. If we want independent growth of this repo, it will need to be adopted by people other than Magnum committers.

I don't see the heat templates as a dependency of Magnum; I see them as a truly fundamental part of Magnum, which is going to be very difficult to cut out and make reusable without compromising Magnum's development process.

I would propose to delete/deprecate the usage of heat-coe-templates and 
continue with the usage of the templates in the Magnum repo. How does the team 
feel about that?

If we do continue with the large effort required to try and pull out the templates as a dependency, then we will need to increase the visibility of the repo and greatly increase the reviews/commits on it. We also have a fairly significant backlog of work to align the heat-coe-templates with the templates in the Magnum tree.

Thanks,
Tom

[1] https://github.com/larsks/heat-kubernetes
[2] https://github.com/stackforge/heat-coe-templates
[3] https://review.openstack.org/#/c/184687/
[4] https://review.openstack.org/#/c/196505/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-12 Thread Hongbin Lu
Egor,

Thanks for sharing your insights. I gave it more thought. Maybe the goal can be achieved without implementing a shared COE. We could move all the master nodes out of user tenants, containerize them, and consolidate them onto a set of VMs/physical servers.

I think we could separate the discussion into two:

1.   Should Magnum introduce a new bay type, in which master nodes are managed by Magnum (not by users themselves)? Like what GCE [1] or ECS [2] does.

2.   How to consolidate the control services that originally run on the master nodes of each cluster?

Note that the proposal is for adding a new COE (not for changing the existing COEs). That means users will continue to provision the existing self-managed COEs (k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's a good idea; it looks like you are proposing that Magnum enter the "schedulers war" (personally I am tired of these debates: Mesos vs Kube vs Swarm).
If your concern is just utilization, you can always run the control plane on the "agent/slave" nodes. The main reason why operators (at least in our case) keep them separate is that they need different attention (e.g. I almost don't care why/when an "agent/slave" node died, but I always double-check that a master node was repaired or replaced).

One use case I see for a shared COE (at least in our environment) is when developers want to run just a docker container without installing anything locally (e.g. docker-machine). But in most cases that is just examples from the internet or their own experiments ):

But we definitely should discuss it during the midcycle next week.

---
Egor

____
From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi team,

Sorry for bringing up this old thread, but a recent debate on container resource [1] reminded me of the use case Kris mentioned below. I am going to propose a preliminary idea to address the use case. Of course, we could continue the discussion in the team meeting or midcycle.

Idea: Introduce a docker-native COE, which consists of only minion/worker/slave nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips, floating ips, etc.)
Details: A traditional COE (k8s/swarm/mesos) consists of master nodes and worker nodes. In these COEs, control services (e.g. the scheduler) run on master nodes, and containers run on worker nodes. If we can port the COE control services to the Magnum control plane and share them with all tenants, we eliminate the need for master nodes, thus improving resource utilization. In the new COE, users create/manage containers through Magnum API endpoints. Magnum is responsible for spinning up tenant VMs, scheduling containers onto those VMs, and managing the life-cycle of those containers. Unlike other COEs, containers created by this COE are considered OpenStack-managed resources. That means they will be tracked in the Magnum DB and accessible by other OpenStack services (e.g. Horizon, Heat, etc.).

How do you feel about this proposal? Let’s discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers company-wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain and past experience tells me this won't be practical/scale; however, from experience I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of the projects are currently doing some form of containers on their own, with more joining every day.  If all of these projects were to convert over to the current magnum configuration we would suddenly be attempting to support/configure ~1k magnum clusters.  Considering that everyone will want it HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + floating ips.  From a capacity standpoint this is an excessive amount of duplicated infrastructure to spin up in projects where people may be running 10–20 containers per project.  From an operator support perspective this is a special level of

Re: [openstack-dev] [magnum] Re: Assistance with Magnum Setup

2016-02-14 Thread Hongbin Lu
Steve,

Thanks for directing Shiva here. BTW, most of your code on objects and db is still here :).

Shiva,

Please do join the #openstack-containers channel (it is hard to do troubleshooting on the ML). I believe contributors in the channel will be happy to help you. For the Magnum team, it looks like we should have an installation guide. Do we have a BP for that? If not, I think we should create one and give it a high priority.

Best regards,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: February-14-16 10:54 AM
To: Shiva Ramdeen
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Re: Assistance with Magnum Setup

Shiva,

First off, welcome to OpenStack :)  Feel free to call me Steve.

CCing openstack-dev, which is typically for development questions rather than usage questions, but you might have found some kind of bug.

I am not sure what the state of Magnum and Keystone is with OpenStack.  I 
recall at our Liberty midcycle we were planning to implement trusts.  Perhaps 
some of that work broke?

I would highly recommend obtaining yourself an IRC client, joining a freenode 
server, and joining the #openstack-containers channel.  Here you can meet with 
the core reviewers and many users who may have seen your problem in the past 
and have pointers for resolution.

Another option is to search the IRC archives for the channel here:
http://eavesdrop.openstack.org/irclogs/%23openstack-containers/

Finally, my detailed knowledge of Magnum is a bit dated, not having written any 
code for Magnum for over 6 months.  Although I wrote a lot of the initial code, 
most of it has been replaced ;) by the rockin Magnum core review team.  They 
can definitely get you going - just find them on irc.

Regards
-steve

From: Shiva Ramdeen 
mailto:shiva.ramd...@outlook.com>>
Date: Sunday, February 14, 2016 at 6:33 AM
To: Steven Dake mailto:std...@cisco.com>>
Subject: Assistance with Magnum Setup


Hello Mr. Dake,



Firstly, let me introduce myself. My name is Shiva Ramdeen. I am a final year student at the University of the West Indies studying for my degree in Electrical and Computer Engineering. I am currently working on my final year project, which deals with the performance of Magnum and Nova-Docker. I have been attempting to install Magnum on a Liberty install of OpenStack. However, I have so far been unable to get Magnum to authenticate with keystone and thus cannot create swarm bays. I fear that I have exhausted all of the online resources that explain the setup of Magnum, and as a last resort I am seeking any assistance that you may be able to provide that may help me resolve this issue.  I would be available to provide any further details at your convenience. Thank you in advance.



Kindest Regards,

Shiva Ramdeen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-14 Thread Hongbin Lu
Kai Qiang,

A major benefit is to have Magnum manage the COEs for end-users. Currently, Magnum basically has its end-users manage the COEs by themselves after a successful deployment. This might work well for domain users, but it is a pain for non-domain users to manage their COEs. By moving master nodes out of users’ tenants, Magnum could offer users a COE management service. For example, Magnum could offer to monitor the etcd/swarm-manage clusters and recover them on failure. Again, the pattern of managing COEs for end-users is what the Google container service and the AWS container service offer. I guess it is fair to conclude that there are use cases out there?

If we decide to offer a COE management service, we could discuss further how to consolidate the IaaS resources to improve utilization. Solutions could be (i) introducing centralized control services for all tenants/clusters, or (ii) keeping the control services separate but isolating them with containers (instead of VMs). A typical use case is what Kris mentioned below.

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-13-16 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


Hi HongBin and Egor,
I went through what you talked about, and I am thinking about what the great benefit for utilisation is here.
The user cases look like the following:

user A wants to have a COE provisioned.
user B wants to have a separate COE (different tenant, non-shared).
user C wants to use an existing COE (same tenant as user A, shared).

When you talked about the utilisation case, it seems you mentioned that different tenant users want to use the same control node to manage different nodes. That seems to try to make the COE OpenStack tenant-aware, and it also means you want to introduce another control/schedule layer above the COEs. We need to think about whether this is a typical user case, and what the benefit is compared with containerisation.


And finally, it is a topic that can be discussed in the midcycle meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: Guz Egor mailto:guz_e...@yahoo.com>>, "OpenStack 
Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 13/02/2016 11:02 am
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Egor,

Thanks for sharing your insights. I gave it more thoughts. Maybe the goal can 
be achieved without implementing a shared COE. We could move all the master 
nodes out of user tenants, containerize them, and consolidate them into a set 
of VMs/Physical servers.

I think we could separate the discussion into two:
1. Should Magnum introduce a new bay type, in which master nodes are managed by 
Magnum (not users themselves)? Like what GCE [1] or ECS [2] does.
2. How to consolidate the control services that originally runs on master nodes 
of each cluster?

Note that the proposal is for adding a new COE (not for changing the existing 
COEs). That means users will continue to provision existing self-managed COE 
(k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's good idea, it looks you propose Magnum enter to 
"schedulers war" (personally I tired from these debates Mesos vs Kub vs Swarm).
If your concern is just utilization you can always run control plane at 
"agent/slave" nodes, there main reason why operators (at least in our case) 
keep them
separate because they need different attention (e.g. I almost don't care 
why/when "agent/slave" node died, but always double check that master node was
repaired or replaced).

One use case I see for shared COE (at least in our environment), when 
developers want run just docker container w

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-14 Thread Hongbin Lu
My replies are inline.

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-14-16 7:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


HongBin,

See my replies and questions in line. >>


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 15/02/2016 01:26 am
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Kai Qiang,

A major benefit is to have Magnum manage the COEs for end-users. Currently, 
Magnum basically have its end-users manage the COEs by themselves after a 
successful deployment. This might work well for domain users, but it is a pain 
for non-domain users to manage their COEs. By moving master nodes out of users’ 
tenants, Magnum could offer users a COE management service. For example, Magnum 
could offer to monitor the etcd/swarm-manage clusters and recover them on 
failure. Again, the pattern of managing COEs for end-users is what Google 
container service and AWS container service offer. I guess it is fair to 
conclude that there are use cases out there?

>> I am not sure when you talked about domain here, is it keystone domain or 
>> other case ? What's the non-domain users case to manage the COEs?
Reply: I mean domain experts, someone who is an expert in kubernetes/swarm/mesos.


If we decide to offer a COE management service, we could discuss further on how 
to consolidate the IaaS resources for improving utilization. Solutions could be 
(i) introducing a centralized control services for all tenants/clusters, or 
(ii) keeping the control services separated but isolating them by containers 
(instead of VMs). A typical use case is what Kris mentioned below.

>> for (i) it is more complicated than (ii), and I did not see much benefits 
>> gain for utilization case here for (i), instead it could introduce much 
>> burden for upgrade case and service interference for all tenants/clusters
Reply: Definitely, we could discuss it further. I don’t have a preference in mind right now.



Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-13-16 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi HongBin and Egor,
I went through what you talked about, and thinking what's the great benefits 
for utilisation here.
For user cases, looks like following:

user A want to have a COE provision.
user B want to have a separate COE. (different tenant, non-share)
user C want to use existed COE (same tenant as User A, share)

When you talked about utilisation case, it seems you mentioned:
different tenant users want to use same control node to manage different nodes, 
it seems that try to make COE openstack tenant aware, it also means you want to 
introduce another control schedule layer above the COEs, we need to think about 
the if it is typical user case, and what's the benefit compared with 
containerisation.


And finally, it is a topic can be discussed in middle cycle meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
--------
Follow your heart. You are miracle!


From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: Guz Egor mailto:guz_e...@yahoo.com>>, "OpenStack 
Development Mailing List (not for usage questions)"

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-15 Thread Hongbin Lu
Regarding the COE mode, it seems there are three options:

1.   Place both master nodes and worker nodes in the user’s tenant (the current implementation).

2.   Place only worker nodes in the user’s tenant.

3.   Hide both master nodes and worker nodes from the user’s tenant.

Frankly, I don’t know which one will succeed or fail in the future. Each mode seems to have use cases. Maybe Magnum could support multiple modes?

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: February-15-16 8:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi all,

A few thoughts to add:

I like the idea of isolating the masters so that they are not 
tenant-controllable, but I don't think the Magnum control plane is the right 
place for them. They still need to be running on tenant-owned resources so that 
they have access to things like isolated tenant networks or that any bandwidth 
they consume can still be attributed and billed to tenants.

I think we should extend that concept a little to include worker nodes as well. 
While they should live in the tenant like the masters, they shouldn't be 
controllable by the tenant through anything other than the COE API. The main 
use case that Magnum should be addressing is providing a managed COE 
environment. Like Hongbin mentioned, Magnum users won't have the domain 
knowledge to properly maintain the swarm/k8s/mesos infrastructure the same way 
that Nova users aren't expected to know how to manage a hypervisor.

I agree with Egor that trying to have Magnum schedule containers is going to be 
a losing battle. Swarm/K8s/Mesos are always going to have better scheduling for 
their containers. We don't have the resources to try to be yet another 
container orchestration engine. Besides that, as a developer, I don't want to 
learn another set of orchestration semantics when I already know swarm or k8s 
or mesos.

@Kris, I appreciate the real use case you outlined. In your idea of having 
multiple projects use the same masters, how would you intend to isolate them? 
As far as I can tell none of the COEs would have any way to isolate those teams 
from each other if they share a master. I think this is a big problem with the 
idea of sharing masters even within a single tenant. As an operator, I 
definitely want to know that users can isolate their resources from other users 
and tenants can isolate their resources from other tenants.

Corey

On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao mailto:p...@hyper.sh>> 
wrote:
Hi,

I wanted to give some thoughts to the thread.

There are various perspectives around “Hosted vs Self-managed COE”, but if you stand in the developer's position, it basically comes down to “Ops vs Flexibility”.

For those who want more control of the stack, so as to customize it in any way they see fit, self-managed is a more appealing option. However, one may argue that the same job can be done with a heat template + some patchwork of cinder/neutron, and the heat template is more customizable than magnum, which probably introduces some requirements on the COE configuration.

For people who don't want to manage the COE, hosted is a no-brainer. The question here is which one is the core compute engine in the stack, nova or the COE? Unless you are running a public, multi-tenant OpenStack deployment, it is highly likely that you are sticking with only one COE. Supposing k8s is what your team deals with every day, then why do you need nova sitting under k8s, whose job is just launching some VMs? After all, it is the COE that orchestrates cinder/neutron.

One idea is to put the COE at the same layer as nova. Instead of running atop nova, the two run side by side. So you get two compute engines: nova for IaaS workloads, k8s for CaaS workloads. If you go this way, hypernetes <https://github.com/hyperhq/hypernetes> is probably what you are looking for.

Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with a Docker registry, and use nova to launch Docker images. But this is not done with nova-docker, simply because it is hard to integrate things like cinder/neutron with lxc. The idea is a nova-hyper driver<https://openstack.nimeyo.com/49570/openstack-dev-proposal-of-nova-hyper-driver>. Since Hyper is hypervisor-based, it is much easier to make it work with the others. SHAMELESS PROMOTION: if you are interested in this idea, we've submitted a proposal for the Austin summit: https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211.

Peng

Disclaimer: I maintain Hyper.

-
Hyper - Make VM run like Container



On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
My replies are inline.

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>]
Se

Re: [openstack-dev] [openstack][Magnum] Operation for COE

2016-02-16 Thread Hongbin Lu
Wanghua,

Please add your requests to the midcycle agenda [1], or bring them up in the team meeting under open discussion. We can discuss it if the agenda allows.

[1] https://etherpad.openstack.org/p/magnum-mitaka-midcycle-topics

Best regards,
Hongbin

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: February-16-16 1:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack][Magnum] Operation for COE

Hi all,

Should we add some operational functions for COEs in Magnum? For example, collecting logs, upgrading the COE, and modifying the COE configuration. I think these features are very important in production.

Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][magnum] Magnum gate issue

2016-02-22 Thread Hongbin Lu
Hi Heat team,

It looks like the Magnum gate broke after this patch landed: https://review.openstack.org/#/c/273631/ . I would appreciate it if anyone could help troubleshoot the issue. If the issue is confirmed, I would prefer a quick fix or a revert, since we want to unblock the gate ASAP. Thanks.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] containers across availability zones

2016-02-23 Thread Hongbin Lu
Hi Ricardo,

+1 from me. I like this feature.

Best regards,
Hongbin

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: February-23-16 5:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] containers across availability zones

Hi.

Has anyone looked into having magnum bay nodes deployed in different 
availability zones? The goal would be to have multiple instances of a container 
running on nodes across multiple AZs.

Looking at docker swarm this could be achieved using (for example) affinity 
filters based on labels. Something like:

docker run -it -d -p 80:80 --label nova.availability-zone=my-zone-a nginx 
https://docs.docker.com/swarm/scheduler/filter/#use-an-affinity-filter

We can do this if we change the templates/config scripts to add to the docker 
daemon params some labels exposing availability zone or other metadata (taken 
from the nova metadata).
https://docs.docker.com/engine/userguide/labels-custom-metadata/#daemon-labels

It's a bit less clear how we would get heat to launch nodes across availability zones using ResourceGroup(s), but there are other heat resources that support it (I'm sure this can be done).

Does this make sense? Any thoughts or alternatives?

If it makes sense I'm happy to submit a blueprint.

Cheers,
  Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Failed to create trustee %(username) in domain $(domain_id)

2016-02-25 Thread Hongbin Lu
Hi team,

FYI, you might encounter the following error if you pull from master recently:

magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 1
Create for bay swarmbay failed: Failed to create trustee %(username) in domain 
$(domain_id) (HTTP 500)"

This is due to a recent commit that added support for trust. In case you don't 
know, this error can be resolved by running the following steps:

# 1. create the necessary domain and user:
export OS_TOKEN=password
export OS_URL=http://127.0.0.1:5000/v3
export OS_IDENTITY_API_VERSION=3
openstack domain create magnum
openstack user create trustee_domain_admin --password=secret --domain=magnum
openstack role add --user=trustee_domain_admin --domain=magnum admin

# 2. populate configs
source /opt/stack/devstack/functions
export MAGNUM_CONF=/etc/magnum/magnum.conf
iniset $MAGNUM_CONF trust trustee_domain_id $(openstack domain show magnum | awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_id $(openstack user show trustee_domain_admin | awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_password secret

# 3. screen -r stack, and restart m-api and m-cond

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Failed to create trustee %(username) in domain $(domain_id)

2016-02-26 Thread Hongbin Lu
Agreed.

Every new feature should be introduced in a backward-compatible way if possible. If a new change will break an existing version, it should be properly versioned and/or follow the corresponding deprecation process. Please feel free to ask for clarification if the procedure is unclear.

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-25-16 8:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Failed to create trustee %(username) in 
domain $(domain_id)


Thanks Hongbin for your info.

I really think this is not a good way to introduce a new feature, as a new feature introduced this way often breaks old work. It is better when the added feature is a plus and the old work still functions.

Or at least, the error should say "swarm bay now requires trust to work, please use trust-related access information before deploying a new swarm bay".



Thanks


Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 26/02/2016 08:02 am
Subject: [openstack-dev] [magnum] Failed to create trustee %(username) in 
domain $(domain_id)





Hi team,

FYI, you might encounter the following error if you pull from master recently:

magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 1
Create for bay swarmbay failed: Failed to create trustee %(username) in domain 
$(domain_id) (HTTP 500)"

This is due to a recent commit that added support for trust. In case you don’t 
know, this error can be resolved by running the following steps:

# 1. create the necessary domain and user:
export OS_TOKEN=password
export OS_URL=http://127.0.0.1:5000/v3
export OS_IDENTITY_API_VERSION=3
openstack domain create magnum
openstack user create trustee_domain_admin --password=secret --domain=magnum
openstack role add --user=trustee_domain_admin --domain=magnum admin

# 2. populate configs
source /opt/stack/devstack/functions
export MAGNUM_CONF=/etc/magnum/magnum.conf
iniset $MAGNUM_CONF trust trustee_domain_id $(openstack domain show magnum | awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_id $(openstack user show trustee_domain_admin | awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_password secret

# 3. screen -r stack, and restart m-api and m-cond
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-26 Thread Hongbin Lu


-Original Message-
From: James Bottomley [mailto:james.bottom...@hansenpartnership.com] 
Sent: February-26-16 12:38 PM
To: Daniel P. Berrange
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] A proposal to separate the design summit

On Fri, 2016-02-26 at 17:24 +, Daniel P. Berrange wrote:
> On Fri, Feb 26, 2016 at 08:55:52AM -0800, James Bottomley wrote:
> > On Fri, 2016-02-26 at 16:03 +, Daniel P. Berrange wrote:
> > > On Fri, Feb 26, 2016 at 10:39:08AM -0500, Rich Bowen wrote:
> > > > 
> > > > 
> > > > On 02/22/2016 10:14 AM, Thierry Carrez wrote:
> > > > > Hi everyone,
> > > > > 
> > > > > TL;DR: Let's split the events, starting after Barcelona.
> > > > > 
> > > > > 
> > > > > 
> > > > > Comments, thoughts ?
> > > > 
> > > > Thierry (and Jay, who wrote a similar note much earlier in 
> > > > February, and Lauren, who added more clarity over on the 
> > > > marketing list, and the many, many of you who have spoken up in 
> > > > this thread ...),
> > > > 
> > > > as a community guy, I have grave concerns about what the long 
> > > > -term effect of this move would be. I agree with your reasons, 
> > > > and the problems, but I worry that this is not the way to solve 
> > > > it.
> > > > 
> > > > Summit is one time when we have an opportunity to hold community 
> > > > up to the folks that think only product - to show them how 
> > > > critical it is that the people that are on this mailing list are 
> > > > doing the awesome things that they're doing, in the upstream, in 
> > > > cooperation and collaboration with their competitors.
> > > > 
> > > > I worry that splitting the two events would remove the community 
> > > > aspect from the conference. The conference would become more 
> > > > corporate, more product, and less project.
> > > > 
> > > > My initial response was "crap, now I have to go to four events 
> > > > instead of two", but as I thought about it, it became clear that 
> > > > that wouldn't happen. I, and everyone else, would end up picking 
> > > > one event or the other, and the division between product and 
> > > > project would deepen.
> > > > 
> > > > Summit, for me specifically, has frequently been at least as 
> > > > much about showing the community to the sales/marketing folks in 
> > > > my own company, as showing our wares to the customer.
> > > 
> > > I think what you describe is a prime reason for why separating the 
> > > events would be *beneficial* for the community contributors. The 
> > > conference has long ago become so corporate focused that its 
> > > session offers little to no value to me as a project contributor. 
> > > What you describe as a benefit of being able to put community 
> > > people infront of business people is in fact a significant 
> > > negative for the design summit productivity. It causes key 
> > > community contributors to be pulled out of important design 
> > > sessions to go talk to business people, making the design sessions 
> > > significantly less productive.
> > 
> > It's naïve to think that something is so sacrosanct that it will be 
> > protected come what may.  Everything eventually has to justify 
> > itself to the funders.  Providing quid pro quo to sales and 
> > marketing helps enormously with that justification and it can be 
> > managed so it's not a huge drain on productive time.  OpenStack may 
> > be the new shiny now, but one day it won't be and then you'll need 
> > the support of the people you're currently disdaining.
> > 
> > I've said this before in the abstract, but let me try to make it 
> > specific and personal: once the kernel was the new shiny and money 
> > was poured all over us; we were pure and banned management types 
> > from the kernel summit and other events, but that all changed when 
> > the dot com bust came.  You're from Red Hat, if you ask the old 
> > timers about the Ottawa Linux Symposium and allied Kernel Summit I 
> > believe they'll recall that in 2005(or 6) the Red Hat answer to a 
> > plea to fund travel was here's $25 a head, go and find a floor to 
> > crash on.  As the wrangler for the new Linux Plumbers Conference I 
> > had to come up with all sorts of convoluted schemes for getting Red 
> > Hat to fund developer travel most of which involved embarrassing 
> > Brian Stevens into approving it over the objections of his managers.  
> > I don't want to go into detail about how Red Hat reached this 
> > situation; I just want to remind you that it happened before and it 
> > could happen again.
> 
> The proposal to split the design summit off actually aims to reduce 
> the travel cost burden. Currently we have a conference+design summit 
> at the wrong time, which is fairly unproductive due to people being 
> pulled out of the design summit for other tasks. So  we "fixed" that 
> by introducing mid-cycles to get real design work done. IOW 
> contributors end up with 4 events to travel to each year. With the 
> proposed 

[openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

2016-02-29 Thread Hongbin Lu
Hi team,

FYI, the I18n team needs liaisons from magnum-ui. Please contact the I18n team if you are interested in this role.

Best regards,
Hongbin

From: Ying Chun Guo [mailto:guoyi...@cn.ibm.com]
Sent: February-29-16 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all][i18n] Liaisons for I18n

Hello,

Mitaka translation will start soon, from this week.
In the Mitaka translation, IBM full-time translators will join the translation team and work with community translators.
With their help, the I18n team is able to cover more projects.
So I need liaisons from dev projects who can help the I18n team work closely with the development team during the release cycle.

I especially need liaisons in the below projects, which are in the Mitaka translation plan:
nova, glance, keystone, cinder, swift, neutron, heat, horizon, ceilometer.

I also need liaisons from the Horizon plugin projects, which are ready on the translation website:
trove-dashboard, sahara-dashboard, designate-dashboard, magnum-ui,
monasca-ui, murano-dashboard and senlin-dashboard.
I need liaisons to tell us whether these projects are ready for translation from the project's point of view.

As for other projects, liaisons are welcome too.

Here is a description of the I18n liaison role:
- The liaison should be a core reviewer for the project and understand the i18n status of the project.
- The liaison should understand the project release schedule very well.
- The liaison should notify the I18n team of important moments in the project release in a timely manner,
for example, the soft string freeze, the hard string freeze, and the RC1 cut.
- The liaison should take care of translation patches to the project, and make sure the patches are
successfully merged into the final release version. If a translation patch fails, the liaison
should notify the I18n team.

If you are interested in being a liaison and helping translators,
input your information here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n .

Thank you for your support.
Best regards
Ying Chun Guo (Daisy)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-02-29 Thread Hongbin Lu
Hi team,

This is a continuation of a discussion from a review [1]. Corey O'Brien suggested that Magnum support a single OS distro (Atomic). I disagreed. I think we should bring the discussion here to get a broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 different versions of the k8s template. Instead, we were going to maintain the Fedora Atomic version of k8s and remove the coreos templates from the tree. I don't think we should continue to develop features for coreos k8s if that is true.
In addition, I don't think we should break the coreos template by adding the trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS support. Why do you want to remove the CoreOS templates from the tree? Please note that this is a very big decision; please discuss it with the team thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro; the selected distro could die in the future, who knows. Why make Magnum take this huge risk? Again, the decision to support a single distro is a very big decision. Please bring it up to the team and have it discussed thoughtfully before making any decision. Also, Magnum doesn't have to support every distro and every version for every COE, but it should support *more than one* popular distro for some COEs (especially the popular COEs).

Corey O'Brien
The discussion at the midcycle started from the idea of adding support for RHEL 
and CentOS. We all discussed and decided that we wouldn't try to support 
everything in tree. Magnum would provide support in-tree for 1 per COE and the 
COE driver interface would allow others to add support for their preferred 
distro out of tree.

Hongbin Lu
I agreed with the part that "we wouldn't try to support everything in tree". That doesn't imply a decision to support a single distro. Again, supporting a single distro is a huge risk. Why make Magnum take this huge risk?

[1] https://review.openstack.org/#/c/277284/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-02-29 Thread Hongbin Lu


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: February-29-16 1:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Consider this: Which OS runs on the bay nodes is not important to end users. 
What matters to users is the environments their containers execute in, which 
has only one thing in common with the bay
The bay nodes are in the user’s tenant. That means end users can SSH to the nodes and play with the containers. Therefore, the choice of OS is important to end users.

node OS: the kernel. The linux syscall interface is stable enough that the 
various linux distributions can all run concurrently in neighboring containers 
sharing same kernel. There is really no material reason why the bay OS choice 
must match what distro the container is based on. Although I’m persuaded by 
Hongbin’s concern to mitigate risk of future changes WRT whatever OS distro is 
the prevailing one for bay nodes, there are a few items of concern about 
duality I’d like to zero in on:

1) Participation from Magnum contributors to support the CoreOS specific 
template features has been weak in recent months. By comparison, participation 
relating to Fedora/Atomic have been much stronger.
I have been fixing the CoreOS templates recently. If other contributors are willing to work with me on this effort, it is reasonable to expect the CoreOS contribution to become stronger.

2) Properly testing multiple bay node OS distros (would) significantly increase 
the run time and complexity of our functional tests.
Technically, this is not true. We can re-run the Atomic tests on CoreOS by changing a single field (the image). What needs to be done is to move the common modules into a base class and let the OS-specific modules inherit from them, as in the sketch below.
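A rough sketch of the structure I mean (the class, attribute and client-call names are made up for illustration, not taken from the actual functional tests):

class K8sBayTestBase(object):
    # OS-independent functional checks; subclasses only pick the image.
    image_id = None  # overridden by the OS-specific subclasses below

    def create_baymodel_and_bay(self, client):
        # The client calls are placeholders for whatever the functional
        # tests already do; only image_id differs per OS.
        baymodel = client.baymodels.create(name='k8s', coe='kubernetes',
                                           image_id=self.image_id)
        return client.bays.create(name='k8sbay',
                                  baymodel_id=baymodel.uuid,
                                  node_count=1)


class AtomicK8sBayTest(K8sBayTestBase):
    image_id = 'fedora-atomic-latest'


class CoreOSK8sBayTest(K8sBayTestBase):
    image_id = 'coreos-latest'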

3) Having support for multiple bay node OS choices requires more extensive 
documentation, and more comprehensive troubleshooting details.
This might be true, but we could point to the troubleshooting documentation of the specific OS. If the selected OS provides comprehensive troubleshooting documentation, this problem is resolved.

If we proceed with just one supported disto for bay nodes, and offer 
extensibility points to allow alternates to be used in place of it, we should 
be able to address the risk concern of the chosen distro by selecting an 
alternate when that change is needed, by using those extensibility points. 
These include the ability to specify your own bay image, and the ability to use 
your own associated Heat template.

I see value in risk mitigation, it may make sense to simplify in the short term 
and address that need when it becomes necessary. My point of view might be 
different if we had contributors willing
I think it is necessary now. I have been working on Magnum since the early stage of the project; probably, I am the most senior active contributor. Based on my experience, there are a lot of problems with locking into a single OS. Basically, all the issues from the OS upstream are propagated to Magnum (e.g. we experienced various known/unknown bugs, pain with image building, lack of documentation, lack of upstream support, etc.). All these experiences remind me not to rely on a single OS, because you never know what the next obstacle will be.

and ready to address the variety of drawbacks that accompany the strategy of 
supporting multiple bay node OS choices. In absence of such a community 
interest, my preference is to simplify to increase our velocity. This seems to 
me to be a relatively easy way to reduce complexity around heat template 
versioning. What do you think?

Thanks,

Adrian

On Feb 29, 2016, at 8:40 AM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was on the midcycle and I don't remember any decision to remove CoreOS 
support. Why you want to remove CoreOS templates from the tree. Please note 
that this is a very big decision and please discuss it with the team 
thoughtfully and make sure everyone agree.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision wa

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Hongbin Lu
 the variety of 
drawbacks that accompany the strategy of supporting multiple bay node OS 
choices. In absence of such a community interest, my preference is to simplify 
to increase our velocity. This seems to me to be a relatively easy way to 
reduce complexity around heat template versioning. What do you think?

Thanks,

Adrian

On Feb 29, 2016, at 8:40 AM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
>From the midcycle, we decided we weren't going to continue to support 2 
>different versions of the k8s template. Instead, we were going to maintain the 
>Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
>don't think we should continue to develop features for coreos k8s if that is 
>true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was on the midcycle and I don't remember any decision to remove CoreOS 
support. Why you want to remove CoreOS templates from the tree. Please note 
that this is a very big decision and please discuss it with the team 
thoughtfully and make sure everyone agree.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected distro could die in the future; who knows. Why make Magnum take this huge risk? Again, supporting a single distro is a very big decision. Please bring it up to the team and have it discussed thoughtfully before making any decision. Also, Magnum doesn't have to support every distro and every version for every COE, but it should support *more than one* popular distro for some COEs (especially for the popular COEs).

Corey O'Brien
The discussion at the midcycle started from the idea of adding support for RHEL 
and CentOS. We all discussed and decided that we wouldn't try to support 
everything in tree. Magnum would provide support in-tree for 1 per COE and the 
COE driver interface would allow others to add support for their preferred 
distro out of tree.

Hongbin Lu
I agreed with the part that "we wouldn't try to support everything in tree". That doesn't imply a decision to support a single distro. Again, supporting a single distro is a huge risk. Why make Magnum take this huge risk?

[1] https://review.openstack.org/#/c/277284/

Best regards,
Hongbin
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

2016-03-01 Thread Hongbin Lu
+1. Shu Muto contributed a lot to magnum-ui. Highly recommended.

Best regards,
Hongbin

From: 大塚元央 [mailto:yuany...@oeilvert.org]
Sent: March-01-16 9:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

Hi team,

Shu Muto is interested in becoming the liaison for magnum-ui.
He has put great effort into translating English to Japanese in magnum-ui and horizon.
I recommend him as the liaison.

Thanks
-yuanying
On Mon, Feb 29, 2016 at 23:56, Hongbin Lu <hongbin...@huawei.com> wrote:
Hi team,

FYI, the I18n team needs a liaison from magnum-ui. Please contact the i18n team if you are interested in this role.

Best regards,
Hongbin

From: Ying Chun Guo [mailto:guoyi...@cn.ibm.com]
Sent: February-29-16 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all][i18n] Liaisons for I18n

Hello,

Mitaka translation will start soon, from this week.
For the Mitaka translation, IBM full-time translators will join the
translation team and work with community translators.
With their help, the I18n team is able to cover more projects.
So I need liaisons from dev projects who can help the I18n team work
in step with the development teams during the release cycle.

I especially need liaisons in the projects below, which are in the Mitaka
translation plan:
nova, glance, keystone, cinder, swift, neutron, heat, horizon, ceilometer.

I also need liaisons from Horizon plugin projects, which are ready on the
translation website:
trove-dashboard, sahara-dashboard, designate-dashboard, magnum-ui,
monasca-ui, murano-dashboard and senlin-dashboard.
I need liaisons to tell us whether they are ready for translation from the
project's point of view.

As for other projects, liaisons are welcome too.

Here are the descriptions of I18n liaisons:
- The liaison should be a core reviewer for the project and understand the i18n 
status of this project.
- The liaison should understand the project release schedule very well.
- The liaison should notify the I18n team of important moments in the
project release in a timely manner,
for example, the soft string freeze, the hard string freeze, and the RC1 cut.
- The liaison should take care of translation patches to the project, and make
sure the patches are
successfully merged into the final release version. When a translation patch
fails, the liaison should notify the I18n team.

If you are interested in being a liaison and helping translators,
add your information here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n .

Thank you for your support.
Best regards
Ying Chun Guo (Daisy)
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Hongbin Lu
I don't think there is any consensus on supporting a single distro. There are multiple disagreements in this thread, including from several senior team members and a project co-founder. This topic should be re-discussed (possibly at the design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at the midcycle was about what we should gate on and ensure feature parity for as a team.
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) <std...@cisco.com> wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you learn it inside and out.  This means you don't want to relearn a new distro, especially if you're an RPM user going to DEB or a DEB user going to RPM.  These are non-starter options for operators, and as a result mean that distro choice is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros running in VMs if they match the host kernel, which makes total sense to me.  This means on an Ubuntu host, if I want support, I need to run Ubuntu VMs, and on a RHEL host I want to run RHEL VMs, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove multi-distro support from Magnum.  All I've heard in this thread so far is "it's too hard".  It's not too hard, especially with Heat conditionals making their way into Mitaka.

Regards
-steve

From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Monday, February 29, 2016 at 9:40 AM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS support. Why do you want to remove the CoreOS templates from the tree? Please note that this is a very big decision; please discuss it with the team thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected distro could die in the future; who knows. Why make Magnum take this huge risk? Again, supporting a single distro is a very big decision. Please bring it up to the team and have it discussed thoughtfully before making any decision. Also, Magnum doesn't have to support every distro and every version for every COE, but it should support *m

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Hongbin Lu
 is appropriate. We will continue our efforts to make them 
reasonably efficient.

Thanks,

Adrian


Regards
-steve


Note that it will take a thoughtful approach (subject to discussion) to balance these interests. Please take a moment to review the interests above. Do you or others disagree with these? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

I don't think there is any consensus on supporting a single distro. There are multiple disagreements in this thread, including from several senior team members and a project co-founder. This topic should be re-discussed (possibly at the design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at the midcycle was about what we should gate on and ensure feature parity for as a team.
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) <std...@cisco.com> wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you learn it inside and out.  This means you don't want to relearn a new distro, especially if you're an RPM user going to DEB or a DEB user going to RPM.  These are non-starter options for operators, and as a result mean that distro choice is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros running in VMs if they match the host kernel, which makes total sense to me.  This means on an Ubuntu host, if I want support, I need to run Ubuntu VMs, and on a RHEL host I want to run RHEL VMs, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove multi-distro support from Magnum.  All I've heard in this thread so far is "it's too hard".  It's not too hard, especially with Heat conditionals making their way into Mitaka.

Regards
-steve

From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Monday, February 29, 2016 at 9:40 AM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 different versions of the k8s template. Instead, we were going to maintain the Fedora Atomic version of k8s and remove the coreos templates from the tree. I don't think we should continue to develop features for coreos k8s if that is true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS support. Why do you want to remove the CoreOS templates from the tree? Please note that this is a very big decision; please discuss it with the team thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongl

Re: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

2016-03-05 Thread Hongbin Lu
+1

BTW, I am a magnum core, not a magnum-ui core. Not sure if my vote counts.

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: March-04-16 7:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

Magnum UI Cores,

I propose the following changes to the magnum-ui core group [1]:

+ Shu Muto
- Dims (Davanum Srinivas), by request - justified by reduced activity level.

Please respond with your +1 votes to approve this change or -1 votes to oppose.

Thanks,

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [magnum][magnum-ui] Liaisons for I18n

2016-03-05 Thread Hongbin Lu
Adrian,

I think Shu Muto was originally proposed to be a magnum-ui liaison, not a magnum liaison.

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: March-04-16 7:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][magnum-ui] Liaisons for I18n

Kato,

I have confirmed with Shu Muto, who will be assuming our I18n Liaison role for 
Magnum until further notice. Thanks for raising this important request.

Regards,

Adrian

> On Mar 3, 2016, at 6:53 AM, KATO Tomoyuki  wrote:
> 
> I added Magnum to the list... Feel free to add your name and IRC nick, Shu.
> 
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n
> 
>> One thing to note.
>> 
>> The role of i18n liaison is not to keep it well translated.
>> The main role is on the project side,
>> for example, to encourage i18n-related reviews and fixes, or to 
>> suggest what kind of coding is recommended from an i18n point of view.
> 
> Yep, that is a reason why a core reviewer is preferred for liaison.
> We sometimes have various requirements:
> word ordering (block trans), n-plural form, and so on.
> Some of them may not be important for Japanese.
> 
> Regards,
> KATO Tomoyuki
> 
>> 
>> Akihiro
>> 
>> 2016-03-02 12:17 GMT+09:00 Shuu Mutou :
>>> Hi Hongbin, Yuanying and team,
>>> 
>>> Thank you for your recommendation.
>>> I'm keeping the EN to JP translation of Magnum-UI at 100% every day.
>>> I'll do my best if I become a liaison.
>>> 
>>> Since translation has become another point of review for Magnum-UI, I hope 
>>> that members translate Magnum-UI into their native languages.
>>> 
>>> Best regards,
>>> Shu Muto
> 
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-07 Thread Hongbin Lu


From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-07-16 8:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin, I think the offer to support different OS options is a perfect example 
both of what we want and what we don't want. We definitely want to allow for 
someone like yourself to maintain templates for whatever OS they want and to 
have that option be easily integrated in to a Magnum deployment. However, when 
developing features or bug fixes, we can't wait for you to have time to add it 
for whatever OS you are promising to maintain.
It might be true that supporting an additional OS could slow down the development speed, but the key question is how big the impact will be. Does it outweigh the benefits? IMO, the impact doesn't seem to be significant, given the fact that most features and bug fixes are OS agnostic. Also, keep in mind that every feature we introduce (variety of COEs, variety of Nova virt-drivers, variety of network drivers, variety of volume drivers, and so on) incurs a maintenance overhead. If we want optimal development speed, we will be limited to supporting a single COE/virt driver/network driver/volume driver. I guess that is not the direction we would like to go?

Instead, we would all be forced to develop the feature for that OS as well. If 
every member of the team had a special OS like that we'd all have to maintain 
all of them.
To be clear, I don't have a special OS, and I guess neither do the others who disagreed in this thread.

Alternatively, what was agreed on by most at the midcycle was that if someone 
like yourself wanted to support a specific OS option, we would have an easy 
place for those contributions to go without impacting the rest of the team. The 
team as a whole would agree to develop all features for at least the reference 
OS.
Could we re-confirm that this is a team agreement? There is no harm in re-confirming it at the design summit/on the ML/in a team meeting. Frankly, it doesn't seem to be one.

Then individuals or companies who are passionate about an alternative OS can 
develop the features for that OS.

Corey

On Sat, Mar 5, 2016 at 12:30 AM Hongbin Lu <hongbin...@huawei.com> wrote:


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-04-16 6:31 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Steve,

On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake) <std...@cisco.com> wrote:

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for,
This is easy. Once we build comprehensive tests for the first OS, just re-run them for the other OS(s).

and the implications that has on our pace of feature development. My guidance 
here is that we resist the temptation to create a system with more permutations 
than we can possibly support. The relation between bay node OS, Heat Template, 
Heat Template parameters, COE, and COE dependencies (could-init, docker, 
flannel, etcd, etc.) are multiplicative in nature. From the mid cycle, it was 
clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.

A COE shouldn’t have a default necessarily that locks out other defaults.  
Magnum devs are the experts in how these systems operate, and as such need to 
take on the responsibility of the implementation for multi-os support.

3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

I disagree with this point, but I don't have the bandwidth available to prove 
it ;)

That’s exactly my point. It takes a chunk of human bandwidth to carry that 
responsibility. If we had a system engineer assigned from each of the various 
upstream OS distros working with Magnum, this would not be a big deal. 
Expecting our current contributors to support a variety of OS variants is not 
realistic.
You have my promise to support an additional OS 

[openstack-dev] [magnum] SELinux is temporarily disabled due to bug 1551648

2016-03-08 Thread Hongbin Lu
Hi team,

FYI. In short, we have to temporarily disable SELinux [1] due to bug 1551648 
[2].

SELinux is an important security feature of the Linux kernel. It improves isolation between neighboring containers on the same host. Previously, Magnum had it turned on on each bay node. However, we have to turn it off for now because the k8s bay does not function if it is turned on. The details are described in the bug report [2]. We will turn SELinux back on once the issue is resolved (you are welcome to contribute a fix). Thanks.

[1] https://review.openstack.org/#/c/289626/
[2] https://bugs.launchpad.net/magnum/+bug/1551648
Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [api] Reminder: WSME is not being actively maintained

2016-03-11 Thread Hongbin Lu
I think we'd better have clear guidance here.

For projects that are currently using WSME, should they have a plan to migrate to other tools? If yes, are there any suggestions for replacement tools? I think it would be clearer to have an official guideline on this matter.

Best regards,
Hongbin

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: March-08-16 10:51 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all] [api] Reminder: WSME is not being actively 
maintained

Excerpts from Chris Dent's message of 2016-03-08 11:25:48 +:
> 
> Last summer Lucas Gomes and I were press ganged into becoming core on 
> WSME. Since then we've piecemeal been verifying bug fixes and 
> generally trying to keep things moving. However, from the beginning we 
> both agreed that WSME is _not_ a web framework that we should be 
> encouraging. Though it looks like it started with very good 
> intentions, it never really reached a state where any of the following are 
> true:
> 
> * The WSME code is easy to understand and maintain.
> * WSME provides correct handling of HTTP (notably response status
>and headers).
> * WSME has an architecture that is suitable for creating modern
>Python-based web applications.
> 
> Last summer we naively suggested that projects that are using it move 
> to using something else. That suggestion did not take into account the 
> realities of OpenStack.
> 
> So we need to come up with a new plan. Lucas and I can continue to 
> merge bug fixes as people provide them (and we become aware of them) 
> and we can continue to hassle Doug Hellman to make a release when one 
> is necessary but this does little to address the three gaps above nor 
> the continued use of the framework in existing projects.
> 
> Ideas?

One big reason for choosing WSME early on was that it had support for both XML 
and JSON APIs without the application code needing to do anything explicitly. 
In the time since projects started using WSME, the community has decided to 
stop providing XML API support and some other tools have been picked up 
(JSONSchema, Voluptuous,
etc.) that provide parsing and validation features similar to WSME.
It seems natural that we build new APIs using those tools instead of WSME. For 
existing functioning API endpoints, we can leave them alone (using WSME) or 
change them one at a time as they are extended with new features. I don't see 
any reason to rewrite anything just to change tools.
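
As a rough illustration only of the kind of request validation those tools provide (the schema and payload below are made up for this example, not any project's real API definition):

# Hypothetical request-body validation with jsonschema; the schema is
# illustrative only and does not reflect any project's actual API.
import jsonschema

BAY_CREATE_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "minLength": 1},
        "node_count": {"type": "integer", "minimum": 1},
    },
    "required": ["name"],
    "additionalProperties": False,
}

def validate_bay_create(payload):
    """Raise jsonschema.exceptions.ValidationError if the payload is malformed."""
    jsonschema.validate(payload, BAY_CREATE_SCHEMA)
    return payload

validate_bay_create({"name": "k8s-bay", "node_count": 3})   # passes
# validate_bay_create({"node_count": "three"})              # would raise ValidationError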

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-14 Thread Hongbin Lu


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-14-16 4:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Steve,

I think you may have misunderstood our intent here. We are not seeking to lock 
in to a single OS vendor. Each COE driver can have a different OS. We can have 
multiple drivers per COE. The point is that drivers should be simple, and 
therefore should support one Bay node OS each. That would mean taking what we 
have today in our Kubernetes Bay type implementation and breaking it down into 
two drivers: one for CoreOS and another for Fedora/Atomic. New drivers would 
start out in a contrib directory where complete functional testing would not be 
required. In order to graduate one out of contrib and into the realm of support 
of the Magnum dev team, it would need to have a full set of tests, and someone 
actively maintaining it.
OK. It sounds like the proposal allows more than one OS to be in-tree, as long 
as the second OS goes through an incubation process. If that is what you mean, 
it sounds reasonable to me.

Multi-personality drivers would be relatively complex. That approach would slow 
down COE specific feature development, and complicate maintenance that is 
needed as new versions of the dependency chain are bundled in (docker, k8s, 
etcd, etc.). We have all agreed that having integration points that allow for 
alternate OS selection is still our direction. This follows the pattern that we 
set previously when deciding what networking options to support. We will have 
one that's included as a default, and a way to plug in alternates.

Here is what I expect to see when COE drivers are implemented:

Docker Swarm:
Default driver: Fedora/Atomic
Alternate driver: TBD

Kubernetes:
Default driver: Fedora/Atomic
Alternate driver: CoreOS

Apache Mesos/Marathon:
Default driver: Ubuntu
Alternate driver: TBD

We can allow an arbitrary number of alternates. Those TBD items can be 
initially added to the contrib directory, and with the right level of community 
support can be advanced to defaults if shown to work better, be more 
straightforward to maintain, be more secure, or whatever criteria is important 
to us when presented with the choice. Such criteria will be subject to 
community consensus. This should allow for free experimentation with alternates 
to allow for innovation. See how this is not locking in a single OS vendor?
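
As a purely illustrative sketch of the pluggable-driver idea (not an agreed design; the entry-point namespace and driver names below are hypothetical), out-of-tree drivers could be discovered via setuptools entry points, for example with stevedore:

# Hypothetical sketch: loading a COE driver selected by (COE, distro) via
# stevedore. The entry-point namespace and names are illustrative only.
from stevedore import driver


def load_bay_driver(coe, distro):
    """Return a driver instance such as 'kubernetes_fedora_atomic'."""
    mgr = driver.DriverManager(
        namespace='magnum.bay.drivers',        # hypothetical namespace
        name='%s_%s' % (coe, distro),
        invoke_on_load=True,
    )
    return mgr.driver

# An out-of-tree CoreOS driver would only need an entry point in its setup.cfg:
#   [entry_points]
#   magnum.bay.drivers =
#       kubernetes_coreos = vendor_pkg.drivers:KubernetesCoreOSDriver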

Adrian

On Mar 14, 2016, at 12:41 PM, Steven Dake (stdake) <std...@cisco.com> wrote:

Hongbin,

When we have a disagreement in the Kolla core team, we have the Kolla core reviewers vote on the matter. This is standard OpenStack best practice.

I think the vote would be something like
"Implement one OS/COE/network/storage prototype, or implement many."

I don't have a horse in this race, but I think it would be seriously damaging 
to Magnum to lock in to a single vendor.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Monday, March 7, 2016 at 10:06 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro



From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-07-16 8:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin, I think the offer to support different OS options is a perfect example 
both of what we want and what we don't want. We definitely want to allow for 
someone like yourself to maintain templates for whatever OS they want and to 
have that option be easily integrated in to a Magnum deployment. However, when 
developing features or bug fixes, we can't wait for you to have time to add it 
for whatever OS you are promising to maintain.
It might be true that supporting an additional OS could slow down the development speed, but the key question is how big the impact will be. Does it outweigh the benefits? IMO, the impact doesn't seem to be significant, given the fact that most features and bug fixes are OS agnostic. Also, keep in mind that every feature we introduce (variety of COEs, variety of Nova virt-drivers, variety of network drivers, variety of volume drivers, and so on) incurs a maintenance overhead. If we want optimal development speed, we will be limited to supporting a single COE/virt driver/network driver/volume driver. I guess that is not the direction we would like to go?

Instead, we would all be forced to develop the feature for that OS as well. If 
every member of the team had a special OS like that we&

Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
OK. If using Keystone is not acceptable, I am going to propose a new approach:

· Store data in Magnum DB

· Encrypt data before writing it to DB

· Decrypt data after loading it from DB

· Have the encryption/decryption key stored in config file

· Use encryption/decryption algorithm provided by a library

The approach above is the exact approach used by Heat to protect hidden parameters [1]. Compared to the Barbican option, this approach is much lighter and simpler, and provides a basic level of data protection. This option is a good supplement to the Barbican option, which is heavy but provides an advanced level of protection. It fits the use case where users don't want to install Barbican but still want basic protection.

If you disagree, I would ask you to justify why this approach works for Heat but not for Magnum. I also wonder whether Heat has a plan to set a hard dependency on Barbican just for protecting the hidden parameters.

If you don't like code duplication between Magnum and Heat, I would suggest moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html
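
As a rough, non-authoritative sketch of the proposal (assuming the `cryptography` library; the option name and key handling below are illustrative, not Heat's or Magnum's actual implementation):

# Minimal sketch of config-key encryption for DB-stored certificates.
# The key would normally come from a config option (e.g. a hypothetical
# [certificates] encryption_key in magnum.conf) rather than being
# generated at import time.
from cryptography.fernet import Fernet

ENCRYPTION_KEY = Fernet.generate_key()   # stand-in for the configured key


def encrypt(plaintext):
    """Encrypt certificate bytes before writing them to the Magnum DB."""
    return Fernet(ENCRYPTION_KEY).encrypt(plaintext)


def decrypt(ciphertext):
    """Decrypt certificate bytes after loading them from the Magnum DB."""
    return Fernet(ENCRYPTION_KEY).decrypt(ciphertext)


cert_pem = b"-----BEGIN CERTIFICATE-----\n...example...\n-----END CERTIFICATE-----\n"
stored = encrypt(cert_pem)               # this ciphertext goes into the DB column
assert decrypt(stored) == cert_pem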

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal <douglas.mendiza...@rackspace.com> wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data so magnum would be on the hook 
to do it. So that means that if security is a requirement you'd have to 
duplicate more than just code. magnum would start having a larger security 
burden. Since we have a system designed to securely store data I think that's 
the best place for data that needs to be secure.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
Douglas,

I am not opposed to adopting Barbican in Magnum (in fact, we already adopted Barbican). What I am opposed to is a Barbican lock-in, which already has a negative impact on Magnum adoption based on our feedback. I would also like to see Barbican adoption increase in the future, so that all our users have Barbican installed in their clouds. If that happens, I have no problem with a hard dependency on Barbican.

Best regards,
Hongbin

-Original Message-
From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com] 
Sent: March-18-16 9:45 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I think Adrian makes some excellent points regarding the adoption of Barbican.  
As the PTL for Barbican, it's frustrating to me to constantly hear from other 
projects that securing their sensitive data is a requirement but then turn 
around and say that deploying Barbican is a problem.

I guess I'm having a hard time understanding the operator persona that is 
willing to deploy new services with security features but unwilling to also 
deploy the service that is meant to secure sensitive data across all of 
OpenStack.

I understand one barrier to entry for Barbican is the high cost of Hardware 
Security Modules, which we recommend as the best option for the Storage and 
Crypto backends for Barbican.  But there are also other options for securing 
Barbican using open source software like DogTag or SoftHSM.

I also expect Barbican adoption to increase in the future, and I was hoping 
that Magnum would help drive that adoption.  There are also other projects that 
are actively developing security features like Swift Encryption, and DNSSEC 
support in Designate. Eventually these features will also require Barbican, so 
I agree with Adrian that we as a community should be encouraging deployers to 
adopt the best security practices.

Regarding the Keystone solution, I'd like to hear the Keystone team's feedback 
on that.  It definitely sounds to me like you're trying to put a square peg in 
a round hole.

- Doug

On 3/17/16 8:45 PM, Hongbin Lu wrote:
> Thanks Adrian,
> 
>  
> 
> I think the Keystone approach will work. For others, please speak up 
> if it doesn't work for you.
> 
>  
> 
> Best regards,
> 
> Hongbin
> 
>  
> 
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: March-17-16 9:28 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
>  
> 
> Hongbin,
> 
>  
> 
> I tweaked the blueprint in accordance with this approach, and approved 
> it for Newton:
> 
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
> 
>  
> 
> I think this is something we can all agree on as a middle ground. If 
> not, I'm open to revisiting the discussion.
> 
>  
> 
> Thanks,
> 
>  
> 
> Adrian
> 
>  
> 
> On Mar 17, 2016, at 6:13 PM, Adrian Otto  <mailto:adrian.o...@rackspace.com>> wrote:
> 
>  
> 
> Hongbin,
> 
> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
> 
> Keystone credentials store:
> 
> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
> 
> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key
> to decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount
> of code in Magnum would be small, as the API already exists. We
> would need a library function to encrypt and decrypt the data, and
> ideally a way to select different encryption algorithms in case one
> is judged weak at some point in the future, justifying the use of an
> alternate.
> 
> Adrian
> 
> 
> On Mar 17, 2016, at 4:55 PM, Adrian Otto  <mailto:adrian.o...@rackspace.com>> wrote:
> 
> Hongbin,
> 
> 
> On Mar 17, 2016, at 2:25 PM, Hongbin Lu  <mailto:hongbin...@huawei.com>> wrote:
> 
> Adrian,
> 
> I think we need a broader set of inputs on this matter, so I moved
> the discussion from the whiteboard back to here. Please check my replies
> inline.
> 
> 
> I would like to get a clear problem statement written for this.
> As I see it, the problem is that there is no safe place to put
> cer

Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
Thanks Adrian,

I think the Keystone approach will work. For others, please speak up if it 
doesn’t work for you.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 9:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I tweaked the blueprint in accordance with this approach, and approved it for 
Newton:
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

I think this is something we can all agree on as a middle ground. If not, I'm open to revisiting the discussion.

Thanks,

Adrian

On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:

Hongbin,

One alternative we could discuss as an option for operators that have a good 
reason not to use Barbican, is to use Keystone.

Keystone credentials store: 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials

The contents are stored in plain text in the Keystone DB, so we would want to 
generate an encryption key per bay, encrypt the certificate and store it in 
keystone. We would then use the same key to decrypt it upon reading the key 
back. This might be an acceptable middle ground for clouds that will not or can 
not run Barbican. This should work for any OpenStack cloud since Grizzly. The 
total amount of code in Magnum would be small, as the API already exists. We 
would need a library function to encrypt and decrypt the data, and ideally a 
way to select different encryption algorithms in case one is judged weak at 
some point in the future, justifying the use of an alternate.
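
As a rough sketch of what that could look like with python-keystoneclient's v3 credentials API (the credential type, blob layout, and key handling below are assumptions for illustration, not a worked-out design):

# Illustrative sketch only. `keystone` is assumed to be an authenticated
# keystoneclient.v3.client.Client; 'magnum_cert' is a made-up credential type.
import json

from cryptography.fernet import Fernet


def store_bay_cert(keystone, user_id, project_id, bay_uuid, cert_pem):
    """Encrypt cert_pem with a fresh per-bay key and save it as a credential."""
    key = Fernet.generate_key()
    blob = json.dumps({
        'bay_uuid': bay_uuid,
        'certificate': Fernet(key).encrypt(cert_pem).decode(),
    })
    cred = keystone.credentials.create(user=user_id, type='magnum_cert',
                                       blob=blob, project=project_id)
    return cred.id, key      # the per-bay key itself still needs a home


def load_bay_cert(keystone, credential_id, key):
    """Read the credential back and decrypt the certificate."""
    blob = json.loads(keystone.credentials.get(credential_id).blob)
    return Fernet(key).decrypt(blob['certificate'].encode())
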

Adrian


On Mar 17, 2016, at 4:55 PM, Adrian Otto <adrian.o...@rackspace.com> wrote:

Hongbin,


On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com> wrote:

Adrian,

I think we need a broader set of inputs on this matter, so I moved the discussion from the whiteboard back to here. Please check my replies inline.


I would like to get a clear problem statement written for this.
As I see it, the problem is that there is no safe place to put certificates in 
clouds that do not run Barbican.
It seems the solution is to make it easy to add Barbican such that it's 
included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution for storing certificates securely.

I am seeking more clarity about why a non-Barbican solution is desired. Why is 
there resistance to adopting both Magnum and Barbican together? I think the 
answer is that people think they can make Magnum work with really old clouds 
that were set up before Barbican was introduced. That expectation is simply not 
reasonable. If there were a way to easily add Barbican to older clouds, perhaps 
this reluctance would melt away.


Magnum should not be in the business of credential storage when there is an 
existing service focused on that need.

Is there an issue with running Barbican on older clouds?
Anyone can choose to use the builtin option with Magnum if they don't have Barbican.
A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates on the local file system. A few of us had concerns about this approach (in particular, Tom Cammann gave a -2 on the review [1]) because it cannot scale beyond a single conductor. Finally, we made a compromise to land this option and use it for testing/debugging only. In other words, this option is not for production. As a result, Barbican becomes the only option for production, which is the root of the problem. It basically forces everyone to install Barbican in order to use Magnum.

[1] https://review.openstack.org/#/c/212395/


It's probably a bad idea to replicate them.
That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all agreed to have two phases of implementation, and the statement was made by you [2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

The context there is important. Barbican was considered for two purposes: (1) 
CA signing capability, and (2) certificate storage. My willingness to implement 
an alternative was based on our need to get a certificate generation and 
signing solution that actually worked, as Barbican did not work for that at the 
time. I have always view

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Hongbin Lu
The problem of missing Barbican alternative implementation has been raised 
several times by different people. IMO, this is a very serious issue that will 
hurt Magnum adoption. I created a blueprint for that [1] and set the PTL as 
approver. It will be picked up by a contributor once it is approved.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store 

Best regards,
Hongbin

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: March-17-16 2:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hi.

We're on the way, the API is using haproxy load balancing in the same way all 
openstack services do here - this part seems to work fine.

For the conductor we're stopped due to bay certificates - we don't currently 
have barbican so local was the only option. To get them accessible on all nodes 
we're considering two options:
- store bay certs in a shared filesystem, meaning a new set of credentials in 
the boxes (and a process to renew fs tokens)
- deploy barbican (some bits of puppet missing we're sorting out)

More news next week.

Cheers,
Ricardo

On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans)  
wrote:
> All,
>
> Does anyone have experience deploying Magnum in a highly-available fashion?
> If so, I’m interested in learning from your experience. My biggest 
> unknown is the Conductor service. Any insight you can provide is 
> greatly appreciated.
>
> Regards,
> Daneyon Hansen
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Hongbin Lu
Adrian,

I think we need a broader set of inputs on this matter, so I moved the discussion from the whiteboard back to here. Please check my replies inline.

> I would like to get a clear problem statement written for this.
> As I see it, the problem is that there is no safe place to put certificates 
> in clouds that do not run Barbican.
> It seems the solution is to make it easy to add Barbican such that it's 
> included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution for storing certificates securely.

> Magnum should not be in the business of credential storage when there is an 
> existing service focused on that need.
>
> Is there an issue with running Barbican on older clouds?
> Anyone can choose to use the builtin option with Magnum if they don't have 
> Barbican.
> A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates on the local file system. A few of us had concerns about this approach (in particular, Tom Cammann gave a -2 on the review [1]) because it cannot scale beyond a single conductor. Finally, we made a compromise to land this option and use it for testing/debugging only. In other words, this option is not for production. As a result, Barbican becomes the only option for production, which is the root of the problem. It basically forces everyone to install Barbican in order to use Magnum.

[1] https://review.openstack.org/#/c/212395/ 

> It's probably a bad idea to replicate them.
> That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all agreed to have two phases of implementation, and the statement was made by you [2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

I have trouble understanding that blueprint. I will put some remarks on the 
whiteboard. Duplicating Barbican sounds like a mistake to me.

--
Adrian

> On Mar 17, 2016, at 12:01 PM, Hongbin Lu  wrote:
> 
> The problem of missing Barbican alternative implementation has been raised 
> several times by different people. IMO, this is a very serious issue that 
> will hurt Magnum adoption. I created a blueprint for that [1] and set the PTL 
> as approver. It will be picked up by a contributor once it is approved.
> 
> [1] 
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same way all 
> openstack services do here - this part seems to work fine.
> 
> For the conductor we're stopped due to bay certificates - we don't currently 
> have barbican so local was the only option. To get them accessible on all 
> nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of 
> credentials in the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>>  wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I'm interested in learning from your experience. My biggest 
>> unknown is the Conductor service. Any insight you can provide is 
>> greatly appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> _
>> _  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mai

[openstack-dev] [magnum] PTL candidacy

2016-03-19 Thread Hongbin Lu
Hi,

I would like to announce my candidacy for the PTL position of Magnum.

To introduce myself, my involvement in Magnum began in December 2014, in which 
the project was at a very early stage. Since then, I have been working with the 
team to explore the roadmap, implement and refine individual components, and 
gradually grow the feature set. Along the way, I've developed comprehensive 
knowledge of the architecture and has led me to take more leadership 
responsibilities. In the past release cycle, I started taking some of the PTL 
responsibilities when the current PTL was unavailable. I believe my past 
experience shows that I am qualified for the Magnum PTL position.

In my opinion, Magnum's key objective is to pursue tight integration between 
OpenStack and the various Container Orchestration Engines (COE) such as 
Kubernetes, Docker Swarm, and Apache Mesos. Therefore, I would suggest to give 
priority to the features that will improve the integration in this regard. In 
particular, I would emphasize the following features:

* Neutron integration: Currently, Flannel is the only supported network driver for providing connectivity between containers on different hosts. Flannel is mostly used for overlay networking, and it has significant performance overhead. In the Newton cycle, I would suggest we collaborate with the Kuryr team to develop a non-overlay network driver.
* Cinder integration: Magnum supports using Cinder volumes for storing container images. We should add support for mounting Cinder volumes to containers as data volumes as well.
* Ironic integration: Add support for the Ironic virt-driver to enable high-performance containers on baremetal servers. We identified this as a key feature a few release cycles ago, but unfortunately it hasn't been fully implemented yet.

In addition, I believe the items below are important and need attention in the 
Newton cycle:

* Pluggable architecture: Refine the architecture to make it extensible. As a result, third-party vendors can plug in their own flavors of COE.
* Quality assurance: Improve coverage of integration and unit tests.
* Documentation: Add missing documents and enhance existing documents.
* Remove hard dependency: Eliminate the hard dependency on Barbican by implementing a functionally equivalent replacement. Note that this is technical debt [1] and should be cleaned up in the Newton cycle.
* Horizon UI: Enhance our Horizon plugin.
* Grow the community: Attract new contributors to Magnum.

In the long term, I hope to work towards the goal of making OpenStack a compelling platform for hosting containerized applications. To achieve this goal, we need to identify and develop unique capabilities that could differentiate Magnum from its competitors, thus attracting users to move their container workloads to OpenStack. As a start, below is a list of features that I believe we could explore. Please don't consider these final decisions; we will definitely debate each of them. Also, you are always welcome to contribute your own list of requirements:

* Resource interconnection and orchestration: Support dynamically connecting COE-managed resources (e.g. a container) to OpenStack-managed resources (e.g. a Neutron network), thus providing the capability to link containerized applications to existing OpenStack infrastructure. By doing that, we enable orchestration across COE-managed and OpenStack-managed resources through a Heat template.
* Integrated authentication system: Integrate the COE authentication systems with Keystone, thus eliminating the pain of handling multiple authentication mechanisms.
* Standard APIs: Hide the heterogeneity of various COEs and expose a unified interface to manage resources of various kinds.

Thank you for considering my PTL candidacy.

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-20 Thread Hongbin Lu
Thanks for your inputs. It sounds like we have no other option besides Barbican 
as long as we need to store credentials in Magnum. Then I have a new proposal: 
switch to an alternative authentication mechanism that doesn't require storing credentials in Magnum. For example, the following options are available in 
Kubernetes [1]:

· Client certificate authentication

· Token File

· OpenID Connect ID Token

· Basic authentication

· Keystone authentication

Could we pick one of those?

[1] http://kubernetes.io/docs/admin/authentication/
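
For context, here is a minimal client-side sketch of the first option in the list above (client certificate authentication), which is what the bay TLS certificates discussed in this thread are used for; the endpoint and file paths are placeholders:

# Minimal sketch of client-certificate authentication against a Kubernetes
# API server, using `requests`. Endpoint and paths are placeholders.
import requests

K8S_API = 'https://k8s-bay.example.com:6443'        # placeholder bay endpoint

resp = requests.get(
    K8S_API + '/api/v1/namespaces/default/pods',
    cert=('/etc/magnum/bay-client.crt',             # client certificate
          '/etc/magnum/bay-client.key'),            # client private key
    verify='/etc/magnum/bay-ca.crt',                # bay CA certificate
)
resp.raise_for_status()
print([item['metadata']['name'] for item in resp.json()['items']])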

Best regards,
Hongbin

From: Dave McCowan (dmccowan) [mailto:dmcco...@cisco.com]
Sent: March-19-16 10:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


The most basic requirement here for Magnum is that it needs a safe place to store credentials.  A safe place cannot be provided by just a library or even by just a daemon.  Secure storage is provided by either a hardware solution (an HSM) or a software solution (SoftHSM, DogTag, IPA, IdM).  A project should give the user a variety of secure storage options.

On this, we have competing requirements.  Devs need a turnkey option for easy 
testing locally or in the gate.  Users kicking the tires want a realistic 
solution they can try out easily with DevStack.  Operators who already have secure 
storage deployed for their cloud want an option that plugs into their existing 
HSMs.

Any roll-your-own option is not going to meet all of these requirements.

A good example, that does meet all of these requirements, is the key manager 
implementation in Nova and Cinder. [1] [2]

Nova and Cinder work together to provide volume encryption, and like Magnum, 
have a need to store and share keys securely.  Using a plugin architecture, and 
the Barbican API, they implement a variety of key storage options:
- Fixed key allows for insecure stand-alone operation, running only Nova and 
Cinder.
- Barbican with a static key allows for easy deployment that can be started 
within DevStack with a few lines of config.
- Barbican with a secure backend allows for production-grade secure storage of 
keys and has been tested on a variety of HSMs and software options.

Barbican's adoption is growing.  Nova, Cinder, Neutron LBaaS, Sahara, and 
Magnum all have implementations using Barbican.  Swift and DNSSec also have use 
cases.  There are both RPM and Debian packages available for Barbican.  There 
are (at least tech preview)  versions of puppet modules, Ansible playbooks, and 
DevStack plugins to deploy Barbican.

In summary, I think using Barbican absorbs the complexity of doing secure 
storage correctly.  It gives operators production grade secure storage options, 
while giving devs easier options.

--Dave McCowan

[1] https://github.com/openstack/nova/tree/master/nova/keymgr
[2] https://github.com/openstack/cinder/tree/master/cinder/keymgr

From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Friday, March 18, 2016 at 10:52 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] High Availability

OK. If using Keystone is not acceptable, I am going to propose a new approach:

* Store data in the Magnum DB
* Encrypt data before writing it to the DB
* Decrypt data after loading it from the DB
* Have the encryption/decryption key stored in a config file
* Use an encryption/decryption algorithm provided by a library

The approach above is exactly the approach used by Heat to protect hidden 
parameters [1]. Compared to the Barbican option, this approach is much lighter 
and simpler, and provides a basic level of data protection. This option is a 
good supplement to the Barbican option, which is heavier but provides an advanced 
level of protection. It fits the use case where users don't want to 
install Barbican but still want basic protection.
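
To make the data flow concrete, here is a minimal sketch using the openssl CLI. 
It is illustrative only (the actual implementation would use a Python crypto 
library in-process rather than shelling out), and the key file path 
/etc/magnum/encryption_key is a hypothetical example:

# one-time setup: generate a static key and point the config file at it
$ openssl rand -hex 32 | sudo tee /etc/magnum/encryption_key

# encrypt before writing to the DB, decrypt after loading from the DB
$ openssl enc -aes-256-cbc -salt -in secret.txt -out secret.enc \
    -pass file:/etc/magnum/encryption_key
$ openssl enc -aes-256-cbc -d -in secret.enc -out secret.txt \
    -pass file:/etc/magnum/encryption_key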

If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum. I also wonder whether Heat plans to set a hard 
dependency on Barbican just for protecting the hidden parameters.

If you don't like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal 
<douglas.mendiza...@rackspace.com> wrote:
[snip]
>
>

Re: [openstack-dev] [magnum] High Availability

2016-03-20 Thread Hongbin Lu
The Magnum team discussed Anchor several times (at the design summit/midcycle). 
As far as I remember, the conclusion was to leverage Anchor through 
Barbican (presumably there is an Anchor backend for Barbican). Is Anchor 
support in Barbican still on the roadmap?

Best regards,
Hongbin

> -Original Message-
> From: Clark, Robert Graham [mailto:robert.cl...@hpe.com]
> Sent: March-20-16 1:57 AM
> To: maishsk+openst...@maishsk.com; OpenStack Development Mailing List
> (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> At the risk of muddying the waters further, I recently chatted with
> some of you about Anchor, it's an ephemeral PKI system setup to provide
> private community PKI - certificate services for internal systems, a
> lot like k8 pods.
> 
> An overview of why revocation doesn't work very well in many cases and
> how ephemeral PKI helps: https://openstack-
> security.github.io/tooling/2016/01/20/ephemeral-pki.html
> 
> First half of a threat analysis on Anchor, the Security Project's
> implementation of ephemeral PKI: https://openstack-
> security.github.io/threatanalysis/2016/02/07/anchorTA.html
> 
> This might not solve your problem; it's certainly not a direct drop-in
> for Barbican (and it never will be) but if your primary concern is
> Certificate Management for internal systems (not presenting
> certificates over the edge of the cloud) you might find some of its
> properties valuable. Not least, it's trivial to make HA, being stateless, and
> it's trivial to deploy, being a single Pecan service.
> 
> There's a reasonably complete deck on Anchor here:
> https://docs.google.com/presentation/d/1HDyEiSA5zp6HNdDZcRAYMT5GtxqkHrx
> brqDRzITuSTc/edit?usp=sharing
> 
> And of course, code over here:
> http://git.openstack.org/cgit/openstack/anchor
> 
> Cheers
> -Rob
> 
> > -Original Message-
> > From: Maish Saidel-Keesing [mailto:mais...@maishsk.com]
> > Sent: 19 March 2016 18:10
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] High Availability
> >
> > Forgive me for the top post and also for asking the obvious (with my
> > Operator hat on)
> >
> > Relying on an external service for certificate store - is the best
> > option - assuming of course that the certificate store is actually
> > also highly available.
> >
> > Is that the case today with Barbican?
> >
> > According to the architecture docs [1] I see that they are using a
> > relational database. MySQL? PostgreSQL? Does that now mean we have an
> > additional database to maintain, backup, provide HA for as an
> Operator?
> >
> > The only real reference I can see to anything remotely HA is this [2]
> > and this [3]
> >
> > An overall solution is highly available *only* if all of the parts it
> > relies are also highly available as well.
> >
> >
> > [1]
> >
> http://docs.openstack.org/developer/barbican/contribute/architecture.h
> > tml#overall-architecture [2]
> > https://github.com/cloudkeep-ops/barbican-vagrant-zero
> > [3]
> > http://lists.openstack.org/pipermail/openstack/2014-March/006100.html
> >
> > Some food for thought
> >
> > --
> > Best Regards,
> > Maish Saidel-Keesing
> >
> >
> > On 03/18/16 17:18, Hongbin Lu wrote:
> > > Douglas,
> > >
> > > I am not opposed to adopt Barbican in Magnum (In fact, we already
> > > adopted Barbican). What I am opposed to is a Barbican lock-in,
> which
> > already has a negative impact on Magnum adoption based on our
> > feedback. I also want to see an increase of Barbican adoption in the
> future, and all our users have Barbican installed in their clouds. If
> that happens, I have no problem to have a hard dependency on Barbican.
> > >
> > > Best regards,
> > > Hongbin
> > >
> > > -Original Message-
> > > From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com]
> > > Sent: March-18-16 9:45 AM
> > > To: openstack-dev@lists.openstack.org
> > > Subject: Re: [openstack-dev] [magnum] High Availability
> > >
> > > Hongbin,
> > >
> > > I think Adrian makes some excellent points regarding the adoption
> of
> > > Barbican.  As the PTL for Barbican, it's frustrating to me to
> > constantly hear from other projects that securing their sensitive
> data
> > is a requirement but then turn around and say that deploying Barbican
> is a problem.
> > >
> > > I guess 

Re: [openstack-dev] [magnum] High Availability

2016-03-21 Thread Hongbin Lu
Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fit. However, for the sake of 
Magnum, I think we have to decouple from Barbican at the current stage. The 
coupling of Magnum and Barbican will double the size of the system (1 
project -> 2 projects), which will significantly increase the overall complexity.

* For developers, it incurs significant overhead on development, quality 
assurance, and maintenance.
* For operators, it doubles the amount of effort needed to deploy and monitor 
the system.
* For users, a large system is likely to be unstable and fragile, which hurts 
the user experience.
From my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the maintenance overhead and provide a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, with which I respectfully disagree. Magnum is a young project and we 
are struggling to increase the adoption rate. I think we need to be nice to our 
users; otherwise, they will choose our competitors (there are container services 
everywhere). Please understand that we are not a mature project, like Nova, which 
has thousands of users. We really don’t have the power to force our users to do 
what they don’t want to do.

I also recognize there are several disagreements from the Barbican team. Per 
my understanding, most of the complaints are about the re-invention of 
Barbican-equivalent functionality in Magnum. To address that, I am going to propose an 
idea to achieve the goal without duplicating Barbican. In particular, I suggest 
adding support for an additional authentication system (Keystone in particular) 
for our Kubernetes bays (and potentially for swarm/mesos). As a result, users can 
specify how to secure their bay’s API endpoint:

* TLS: This option requires Barbican to be installed for storing the TLS 
certificates.
* Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes (a rough sketch of this flow is below).
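
To illustrate the Keystone option from the client side, here is a minimal 
sketch. The bay endpoint below is a made-up example, and it assumes the bay's 
kube-apiserver has been configured to validate Keystone-issued tokens as 
proposed above:

$ TOKEN=$(openstack token issue -f value -c id)
$ curl -k -H "Authorization: Bearer $TOKEN" \
    https://<bay-api-address>:6443/api/v1/namespaces/default/pods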

I am going to send another ML post to describe the details. You are welcome to 
provide your input. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would request you to justify why this approach works for 
Heat but not for Magnum. Also, I also wonder if Heat has a plan to set a hard 
dependency on Barbican for just protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, so Barbican may not have been an option (or was a high-risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software/releases/liberty/components/barbican). 
There are some areas that could be improved (packaging and puppet modules 
often need some more investment).

I think it is worth a go to try it out and have concrete areas to improve if 
there are problems.

Tim

If you don’t like code duplication between Magnum and Heat, I would suggest to 
move the implementation to a oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal 
<douglas.mendiza...@rackspace.com> wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data so magnum would be on the hook 
to do it. So that means that if security is a requirement you'd have to 
duplicate more than just code. magnum would start having a larger security 
burden. Since we have a system designed to s

Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are weready?

2016-03-24 Thread Hongbin Lu


> -Original Message-
> From: Assaf Muller [mailto:as...@redhat.com]
> Sent: March-24-16 9:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 -
> are weready?
> 
> On Thu, Mar 24, 2016 at 1:48 AM, Takashi Yamamoto
>  wrote:
> > On Thu, Mar 24, 2016 at 6:17 AM, Doug Wiegley
> >  wrote:
> >> Migration script has been submitted, v1 is not going anywhere from
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> >>
> >> I’m thinking in this order:
> >>
> >> - remove jenkins jobs
> >> - wait for heat to remove their jenkins jobs ([heat] added to this
> >> thread, so they see this coming before the job breaks)
> >
> > magnum is relying on lbaasv1.  (with heat)
> 
> Is there anything blocking you from moving to v2?

A ticket was created for that: 
https://blueprints.launchpad.net/magnum/+spec/migrate-to-lbaas-v2 . It will be 
picked up by contributors once it is approved. Please give us some time to 
finish the work.

> 
> >
> >> - remove q-lbaas from devstack, and any references to lbaas v1 in
> devstack-gate or infra defaults.
> >> - remove v1 code from neutron-lbaas
> >>
> >> Since newton is now open for commits, this process is going to get
> started.
> >>
> >> Thanks,
> >> doug
> >>
> >>
> >>
> >>> On Mar 8, 2016, at 11:36 AM, Eichberger, German
>  wrote:
> >>>
> >>> Yes, it’s Database only — though we changed the agent driver in the
> DB from V1 to V2 — so if you bring up a V2 with that database it should
> reschedule all your load balancers on the V2 agent driver.
> >>>
> >>> German
> >>>
> >>>
> >>>
> >>>
> >>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
> >>>
>  So this looks like only a database migration, right?
> 
>  -Original Message-
>  From: Eichberger, German [mailto:german.eichber...@hpe.com]
>  Sent: Tuesday, March 08, 2016 12:28 AM
>  To: OpenStack Development Mailing List (not for usage questions)
>  Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
> are weready?
> 
>  Ok, for what it’s worth we have contributed our migration script:
>  https://review.openstack.org/#/c/289595/ — please look at this as
> a
>  starting point and feel free to fix potential problems…
> 
>  Thanks,
>  German
> 
> 
> 
> 
>  On 3/7/16, 11:00 AM, "Samuel Bercovici" 
> wrote:
> 
> > As far as I recall, you can specify the VIP in creating the LB so
> you will end up with same IPs.
> >
> > -Original Message-
> > From: Eichberger, German [mailto:german.eichber...@hpe.com]
> > Sent: Monday, March 07, 2016 8:30 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
> are weready?
> >
> > Hi Sam,
> >
> > So if you have some 3rd party hardware you only need to change
> the
> > database (your steps 1-5) since the 3rd party hardware will just
> > keep load balancing…
> >
> > Now for Kevin’s case with the namespace driver:
> > You would need a 6th step to reschedule the loadbalancers with
> the V2 namespace driver — which can be done.
> >
> > If we want to migrate to Octavia or (from one LB provider to
> another) it might be better to use the following steps:
> >
> > 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
> > Health Monitors , Members) into some JSON format file(s) 2.
> Delete LBaaS v1 3.
> > Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON
> > format file into some scripts which recreate the load balancers
> > with your provider of choice —
> >
> > 6. Run those scripts
> >
> > The problem I see is that we will probably end up with different
> > VIPs so the end user would need to change their IPs…
> >
> > Thanks,
> > German
> >
> >
> >
> > On 3/6/16, 5:35 AM, "Samuel Bercovici" 
> wrote:
> >
> >> As for a migration tool.
> >> Due to model changes and deployment changes between LBaaS v1 and
> LBaaS v2, I am in favor for the following process:
> >>
> >> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
> >> Health Monitors , Members) into some JSON format file(s) 2.
> Delete LBaaS v1 3.
> >> Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1
> >> back over LBaaS v2 (need to allow moving from falvor1-->flavor2,
> >> need to make room to some custom modification for mapping
> between
> >> v1 and v2
> >> models)
> >>
> >> What do you think?
> >>
> >> -Sam.
> >>
> >>
> >>
> >>
> >> -Original Message-
> >> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> >> Sent: Friday, March 04, 2016 2:06 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [Neutron][

Re: [openstack-dev] [Kuryr][Magnum] Clarification of expanded mission statement

2016-03-27 Thread Hongbin Lu
Gal,

Thanks for clarifying the initiative. I added “[Magnum]” to the title so that 
Magnum team members can add their input to this thread (if any).

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: March-19-16 6:04 AM
To: Fox, Kevin M
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Kuryr] Clarification of expanded mission statement

Hi Russell,

Thanks for starting this thread, i have been wanting to do so myself.

First, to me Kuryr is much more than just providing a "libnetwork driver" or a 
"CNI driver" in the networking part.

Kuryr goal (to me at least) is to simplify orchestration, management and 
performance
and avoid vendor lock-in by providing these drivers but also being able to 
expose and enhance additional
policy level features that OpenStack has but are lacking in COEs, We are also 
looking at easier
deployment and packaging and providing additional value with features that make 
things more efficient and address issues
operators/users are facing (like attaching to existing Neutron networks).

We see ourselves operating both in OpenStack projects, helping with features 
needed for this integration, but 
also in any other project (like Kubernetes / Docker) if this makes more 
sense and shows better value.

The plan is to continue this with storage, we will have to examine things and 
decide where is the best
place to locate them the pros and cons.
I personally don't want to run and start implementing things at other 
communities and under other
governance model unless they make much more sense and show better value for the 
overall solution.
So my initial reaction is that we can show a lot of value in the storage part 
as part of OpenStack Kuryr and hence
the mission statement change.

There are many features that i believe we can work in that are currently 
lacking and we will
need to examine them one by one and keep doing the design and spec process open 
with the community
so everyone can review and judge the value.
The last thing i am going to do is drive to re-implement things that are 
already there and in good enough shape,
none of us have the need or time to do that :)

In the storage area i see the plugins (and not just for Kubernetes), i see the 
persistent and re-using of storage
parts as being interesting to start with.
Another area that i included as storage is mostly disaster recovery and backup, 
i think we can bring a lot of value
to containers deployments by leveraging projects like Smaug and Freezer which 
offer application backups
and recovery.
I really prefer we do this thinking process together as a community and i 
already talked with some people that showed
interest in some of these features.

My intention was to first get the TC approval to explore this area and make 
sure it doesn't conflict, and 
only then start working on defining the details again with the broad community, 
openly, just like we do 
everything else.


On Fri, Mar 18, 2016 at 10:12 PM, Fox, Kevin M 
<kevin@pnnl.gov> wrote:
I'd assume a volume plugin for cinder support and/or a volume plugin for manila 
support?

Either would be useful.

Thanks,
Kevin

From: Russell Bryant [rbry...@redhat.com]
Sent: Friday, March 18, 2016 4:59 AM
To: OpenStack Development Mailing List (not for usage questions); 
gal.sa...@gmail.com
Subject: [openstack-dev] [Kuryr] Clarification of expanded mission statement
The Kuryr project proposed an update to its mission statement and I agreed to 
start a ML thread seeking clarification on the update.

https://review.openstack.org/#/c/289993

The change expands the current networking focus to also include storage 
integration.

I was interested to learn more about what work you expect to be doing.  On the 
networking side, it's clear to me: a libnetwork plugin, and now perhaps a CNI 
plugin.  What specific code do you expect to deliver as a part of your expanded 
scope?  Will that code be in Kuryr, or be in upstream projects?

If you don't know yet, that's fine.  I was just curious what you had in mind.  
We don't really have OpenStack projects that are organizing around contributing 
to other upstreams, but I think this case is fine.

--
Russell Bryant



--
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] multicloud support for ec2

2015-01-28 Thread Hongbin Lu
Hi,

I would appreciate it if someone could reply to the email below. Thanks.

Best regards,
Hongbin

On Sun, Jan 25, 2015 at 12:03 AM, Hongbin Lu  wrote:

> Hi Heat team,
>
> I am looking for a solution to bridge between OpenStack and EC2. According
> to the documents, it seems that Heat has multicloud support but the remote
> cloud(s) must be OpenStack. I wonder if Heat supports multicloud in the
> context of supporting a remote EC2 cloud. For example, does Heat support a
> remote stack that contains resources from an EC2 cloud? As a result, creating
> a stack would provision local OpenStack resources along with remote EC2
> resources.
>
> If this feature is not supported, will the dev team accept a blueprint
> and/or contributions for that?
>
> Thanks,
> Hongbin
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Propose removing Dmitry Guryanov from magnum-core

2015-02-17 Thread Hongbin Lu
-1

On Mon, Feb 16, 2015 at 10:20 PM, Steven Dake (stdake) 
wrote:

>  The initial magnum core team was founded at a meeting where several
> people committed to being active in reviews and writing code for Magnum.
> Nearly all of the folks that made that initial commitment have been active
> in IRC, on the mailing lists, or participating in code reviews or code
> development.
>
>  Out of our core team of 9 members [1], everyone has been active in some
> way except for Dmitry.  I propose removing him from the core team.  Dmitry
> is welcome to participate in the future if he chooses and be held to the
> same high standards we have held our last 4 new core members to that didn’t
> get an initial opt-in but were voted in by their peers.
>
>  Please vote (-1 remove, abstain, +1 keep in core team) - a vote of +1
> from any core acts as a veto meaning Dmitry will remain in the core team.
>
>  [1] https://review.openstack.org/#/admin/groups/473,members
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-21 Thread Hongbin Lu
Hi all,

I tried to go through the new redis example in the quickstart guide [1],
but was not able to complete it. I was blocked when connecting to the redis
slave container:

*$ docker exec -i -t $REDIS_ID redis-cli*
*Could not connect to Redis at 127.0.0.1:6379 :
Connection refused*

Here is the container log:

*$ docker logs $REDIS_ID*
*Error: Server closed the connection*
*Failed to find master.*

It looks like the redis master disappeared at some point. I tried to check
the status about every minute. Below is the output.

*$ kubectl get pod*
*NAME   IMAGE(S)  HOST
   LABELS  STATUS*
*51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/

name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending*
*redis-master   kubernetes/redis:v1   10.0.0.4/
   name=redis,redis-sentinel=true,role=master
 Pending*
*   kubernetes/redis:v1*
*512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
   name=redis
 Pending*

*$ kubectl get pod*
*NAME   IMAGE(S)  HOST
   LABELS  STATUS*
*512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
   name=redis
 Running*
*51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/

name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
*redis-master   kubernetes/redis:v1   10.0.0.4/
   name=redis,redis-sentinel=true,role=master
 Running*
*   kubernetes/redis:v1*

*$ kubectl get pod*
*NAME   IMAGE(S)  HOST
   LABELS  STATUS*
*redis-master   kubernetes/redis:v1   10.0.0.4/
   name=redis,redis-sentinel=true,role=master
 Running*
*   kubernetes/redis:v1*
*512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
   name=redis
 Failed*
*51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/

name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
*233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
   name=redis
 Running*

*$ kubectl get pod*
*NAME   IMAGE(S)  HOST
   LABELS  STATUS*
*512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
   name=redis
 Running*
*51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/

name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
*233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
   name=redis
 Running*
*3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/

name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending*

Is anyone able to reproduce the problem above? If yes, I am going to file a
bug.

Thanks,
Hongbin

[1]
https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst#exercising-the-services-using-devstack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Hongbin Lu
Thanks Jay,

I checked the kubelet log. There are a lot of "Watch closed" errors like the ones
below. Here is the full log: http://fpaste.org/188964/46261561/ .

*Status:"Failure", Message:"unexpected end of JSON input", Reason:""*
*Status:"Failure", Message:"501: All the given peers are not reachable*

Please note that my environment was set up by following the quickstart
guide. It seems that all the kube components were running (checked by using
the systemctl status command), and all nodes can ping each other. Any further
suggestions?

Thanks,
Hongbin


On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau  wrote:

> Can you check the kubelet log on your minions? Seems the container failed
> to start, there might be something wrong for your minions node. Thanks.
>
> 2015-02-22 15:08 GMT+08:00 Hongbin Lu :
>
>> Hi all,
>>
>> I tried to go through the new redis example at the quickstart guide [1],
>> but was not able to go through. I was blocked by connecting to the redis
>> slave container:
>>
>> *$ docker exec -i -t $REDIS_ID redis-cli*
>> *Could not connect to Redis at 127.0.0.1:6379 <http://127.0.0.1:6379>:
>> Connection refused*
>>
>> Here is the container log:
>>
>> *$ docker logs $REDIS_ID*
>> *Error: Server closed the connection*
>> *Failed to find master.*
>>
>> It looks like the redis master disappeared at some point. I tried to
>> check the status in about every one minute. Below is the output.
>>
>> *$ kubectl get pod*
>> *NAME   IMAGE(S)  HOST
>>  LABELS  STATUS*
>> *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>
>> name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending*
>> *redis-master   kubernetes/redis:v1   10.0.0.4/
>> <http://10.0.0.4/>   name=redis,redis-sentinel=true,role=master
>>  Pending*
>> *   kubernetes/redis:v1*
>> *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>   name=redis
>>  Pending*
>>
>> *$ kubectl get pod*
>> *NAME   IMAGE(S)  HOST
>>  LABELS  STATUS*
>> *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>   name=redis
>>  Running*
>> *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>
>> name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
>> *redis-master   kubernetes/redis:v1   10.0.0.4/
>> <http://10.0.0.4/>   name=redis,redis-sentinel=true,role=master
>>  Running*
>> *   kubernetes/redis:v1*
>>
>> *$ kubectl get pod*
>> *NAME   IMAGE(S)  HOST
>>  LABELS  STATUS*
>> *redis-master   kubernetes/redis:v1   10.0.0.4/
>> <http://10.0.0.4/>   name=redis,redis-sentinel=true,role=master
>>  Running*
>> *   kubernetes/redis:v1*
>> *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>   name=redis
>>  Failed*
>> *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>
>> name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
>> *233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>   name=redis
>>  Running*
>>
>> *$ kubectl get pod*
>> *NAME   IMAGE(S)  HOST
>>  LABELS  STATUS*
>> *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>   name=redis
>>  Running*
>> *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>
>> name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
>> *233fa7d1-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>> <http://10.0.0.5/>   name=redis
>>  Running*
>> *3b164230-ba21-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.4/
>> <http://10.0.0.4

Re: [openstack-dev] [magnum] Issue on going through the quickstart guide

2015-02-22 Thread Hongbin Lu
Hi Jay,

I tried the native k8s commands (in a fresh bay):

kubectl create -s http://192.168.1.249:8080 -f ./redis-master.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-sentinel-service.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-controller.yaml
kubectl create -s http://192.168.1.249:8080 -f ./redis-sentinel-controller.yaml

It still didn't work (same symptom as before). I cannot spot any difference
between the original yaml file and the parsed yaml file. Any other ideas?

Thanks,
Hongbin

On Sun, Feb 22, 2015 at 8:38 PM, Jay Lau  wrote:

> I suspect that there is some error after the pod/services are parsed. Can you
> please try the native k8s commands first, then debug the k8s API part
> to check the difference between the original json file and the parsed json file?
> Thanks!
>
> kubectl create -f xxxx.json xxx
>
>
>
> 2015-02-23 1:40 GMT+08:00 Hongbin Lu :
>
>> Thanks Jay,
>>
>> I checked the kubelet log. There are a lot of Watch closed error like
>> below. Here is the full log http://fpaste.org/188964/46261561/ .
>>
>> *Status:"Failure", Message:"unexpected end of JSON input", Reason:""*
>> *Status:"Failure", Message:"501: All the given peers are not reachable*
>>
>> Please note that my environment was setup by following the quickstart
>> guide. It seems that all the kube components were running (checked by using
>> systemctl status command), and all nodes can ping each other. Any further
>> suggestion?
>>
>> Thanks,
>> Hongbin
>>
>>
>> On Sun, Feb 22, 2015 at 3:58 AM, Jay Lau  wrote:
>>
>>> Can you check the kubelet log on your minions? Seems the container
>>> failed to start, there might be something wrong for your minions node.
>>> Thanks.
>>>
>>> 2015-02-22 15:08 GMT+08:00 Hongbin Lu :
>>>
>>>> Hi all,
>>>>
>>>> I tried to go through the new redis example at the quickstart guide
>>>> [1], but was not able to go through. I was blocked by connecting to the
>>>> redis slave container:
>>>>
>>>> *$ docker exec -i -t $REDIS_ID redis-cli*
>>>> *Could not connect to Redis at 127.0.0.1:6379 <http://127.0.0.1:6379>:
>>>> Connection refused*
>>>>
>>>> Here is the container log:
>>>>
>>>> *$ docker logs $REDIS_ID*
>>>> *Error: Server closed the connection*
>>>> *Failed to find master.*
>>>>
>>>> It looks like the redis master disappeared at some point. I tried to
>>>> check the status in about every one minute. Below is the output.
>>>>
>>>> *$ kubectl get pod*
>>>> *NAME   IMAGE(S)  HOST
>>>>LABELS  STATUS*
>>>> *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>>>> <http://10.0.0.5/>
>>>> name=redis-sentinel,redis-sentinel=true,role=sentinel   Pending*
>>>> *redis-master   kubernetes/redis:v1   10.0.0.4/
>>>> <http://10.0.0.4/>   name=redis,redis-sentinel=true,role=master
>>>>  Pending*
>>>> *   kubernetes/redis:v1*
>>>> *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>>>> <http://10.0.0.5/>   name=redis
>>>>  Pending*
>>>>
>>>> *$ kubectl get pod*
>>>> *NAME   IMAGE(S)  HOST
>>>>LABELS  STATUS*
>>>> *512cf350-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>>>> <http://10.0.0.5/>   name=redis
>>>>  Running*
>>>> *51c68981-ba20-11e4-84dc-fa163e318555   kubernetes/redis:v1   10.0.0.5/
>>>> <http://10.0.0.5/>
>>>> name=redis-sentinel,redis-sentinel=true,role=sentinel   Running*
>>>> *redis-master   kubernetes/redis:v1   10.0.0.4/
>>>> <http://10.0.0.4/>   name=redis,redis-sentinel=true,role=master
>>>>  Running*
>>>> *   kubernetes/redis:v1*
>>>>
>>>> *$ kubectl get pod*
>>>> *NAME   IMAGE(

Re: [openstack-dev] [Magnum][Heat] Expression of Bay Status

2015-03-10 Thread Hongbin Lu
Hi Adrian,

On Mon, Mar 9, 2015 at 6:53 PM, Adrian Otto 
wrote:

> Magnum Team,
>
> In the following review, we have the start of a discussion about how to
> tackle bay status:
>
> https://review.openstack.org/159546
>
> I think a key issue here is that we are not subscribing to an event feed
> from Heat to tell us about each state transition, so we have a low degree
> of confidence that our state will match the actual state of the stack in
> real-time. At best, we have an eventually consistent state for Bay
> following a bay creation.
>
> Here are some options for us to consider to solve this:
>
> 1) Propose enhancements to Heat (or learn about existing features) to emit
> a set of notifications upon state changes to stack resources so the state
> can be mirrored in the Bay resource.
>

A drawback of this option is that it increases the difficulty of
troubleshooting. In my experience of using Heat (SoftwareDeployments in
particular), Ironic and Trove, one of the most frequent errors I
encountered is that the provisioning resources stayed in the deploying state
(never went to completed). The reason is that they were waiting for a callback
signal from the provisioning resource to indicate its completion, but the
callback signal was blocked for various reasons (e.g. incorrect firewall
rules, incorrect configs, etc.). Troubleshooting such problems is
generally harder.


>
> 2) Spawn a task to poll the Heat stack resource for state changes, and
> express them in the Bay status, and allow that task to exit once the stack
> reaches its terminal (completed) state.
>
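
For what it is worth, here is a rough sketch of what the polling in option 2 
could look like, expressed as CLI commands for brevity (the stack name and 
interval are examples only; the real task would call python-heatclient 
in-process rather than shelling out):

while true; do
    status=$(openstack stack show my-bay-stack -c stack_status -f value)
    echo "mirror to bay status: ${status}"
    case "$status" in *_COMPLETE|*_FAILED) break;; esac
    sleep 10
done
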
> 3) Don’t store any state in the Bay object, and simply query the heat
> stack for status as needed.


> Are each of these options viable? Are there other options to consider?
> What are the pro/con arguments for each?
>
> Thanks,
>
> Adrian
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Memory recommendation for running magnum with devstack

2015-03-19 Thread Hongbin Lu
Hi Surojit,

I think 8G of RAM and 80G of disk should be considered the minimum. The
guide will create 3 m1.small VMs (each with 2G of RAM and 20G of disk), and
2 volumes (5G each).

In your case, I am not sure why you get the memory error. Probably, you
could work around it by creating a flavor with fewer computing resources,
then use the new flavor to create the cluster:

# create a new flavor with 1G of RAM and 10G of disk
$ nova flavor-create m2.small 1234 1024 10 1

$ magnum baymodel-create --name testbaymodel --image-id fedora-21-atomic \
   --keypair-id testkey \
   --external-network-id $NIC_ID \
   --dns-nameserver 8.8.8.8 --flavor-id m2.small \
   --docker-volume-size 5

Thanks,
Hongbin

On Thu, Mar 19, 2015 at 11:06 PM, Surojit Pathak 
wrote:

> Team,
>
> Do we have a ballpark amount for the memory of the devstack machine to run
> magnum? I am running devstack as a VM with (4 VCPU/50G-Disk/8G-Mem) and
> running magnum on it as per[1].
>
> I am observing the kube-Nodes goes often in "SHUTOFF" state. If I do 'nova
> reset-state', the instance goes into ERROR state with message indicating
> that it has run out of memory[2].
>
> Do we have any recommendation on the size of the RAM for the deployment
> described in[1]?
>
> --
> Regards,
> SURO
>
> [1] -https://github.com/stackforge/magnum/blob/master/
> doc/source/dev/dev-quickstart.rst
> [2] - "internal error: process exited while connecting to monitor: Cannot
> set up guest memory 'pc.ram'
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] swagger-codegen generated code for python-k8sclient

2015-03-23 Thread Hongbin Lu
Hi Madhuri,

Amazing work! I wouldn't be concerned about the code duplication and modularity issues
since the code is generated. However, there is another concern here: if
we find a bug/improvement in the generated code, we probably need to modify
the generator. The question is whether upstream will accept the
modifications, and if yes, how fast the patch will go through.

I would prefer to maintain a fork of the generator. That way, we would
have full control of the generated code. Thoughts?

Thanks,
Hongbin

On Mon, Mar 23, 2015 at 10:11 AM, Steven Dake (stdake) 
wrote:

>
>
>   From: Madhuri Rai 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Monday, March 23, 2015 at 1:53 AM
> To: "openstack-dev@lists.openstack.org"  >
> Subject: [openstack-dev] [magnum] swagger-codegen generated code for
> python-k8sclient
>
>   Hi All,
>
> This is to have a discussion on the blueprint for implementing
> python-k8client for magnum.
>
> https://blueprints.launchpad.net/magnum/+spec/python-k8sclient
>
> I have committed the code generated by swagger-codegen at
> https://review.openstack.org/#/c/166720/.
> But I feel the quality of the code generated by swagger-codegen is not
> good.
>
> Some of the points:
> 1) There is lot of code duplication. If we want to generate code for two
> or more versions, same code is duplicated for each API version.
> 2) There is no modularity. CLI code for all the APIs are written in same
> file.
>
> So, I would like your opinion on this. How should we proceed further?
>
>
>  Madhuri,
>
>  First off, spectacular that you figured out how to do this!  Great great
> job!  I suspected the swagger code would be a bunch of garbage.  Just
> looking over the review, the output isn’t too terribly bad.  It has some
> serious pep8 problems.
>
>  Now that we have seen the swagger code generator works, we need to see
> if it produces useable output.  In other words, can the API be used by the
> magnum backend.  Google is “all-in” on swagger for their API model.
> Realistically maintaining a python binding would be a huge job.  If we
> could just use swagger for the short term, even though it's less than ideal,
> that would be my preference.  Even if it's suboptimal.  We can put a readme
> in the TLD saying the code was generated by a a code generator and explain
> how to generate the API.
>
>  One last question.  I didn’t see immediately by looking at the api, but
> does it support TLS auth?  We will need that.
>
>  Super impressed!
>
>  Regards
> -steve
>
>
>
> Regards,
> Madhuri Kumari
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposal for Madhuri Kumari to join Core Team

2015-04-30 Thread Hongbin Lu
+1!

On Apr 28, 2015, at 11:14 PM, "Steven Dake (stdake)"  wrote:

> Hi folks,
> 
> I would like to nominate Madhuri Kumari  to the core team for Magnum.  Please 
> remember a +1 vote indicates your acceptance.  A –1 vote acts as a complete 
> veto.
> 
> Why Madhuri for core?
> She participates on IRC heavily
> She has been heavily involved in a really difficult project  to remove 
> Kubernetes kubectl and replace it with a native python language binding which 
> is really close to be done (TM)
> She provides helpful reviews and her reviews are of good quality
> Some of Madhuri’s stats, where she performs in the pack with the rest of the 
> core team:
> 
> reviews: http://stackalytics.com/?release=kilo&module=magnum-group
> commits: 
> http://stackalytics.com/?release=kilo&module=magnum-group&metric=commits
> 
> Please feel free to vote if your a Magnum core contributor.
> 
> Regards
> -steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Choose a project mascot

2017-02-27 Thread Hongbin Lu
Hi all,

We discussed the mascot choice a few times. At the last team meeting, we 
decided to choose dolphins as Zun’s mascot. Thanks to Pradeep for proposing 
this mascot and thanks to all for providing feedback.

Best regards,
Hongbin

From: Pradeep Singh [mailto:ps4openst...@gmail.com]
Sent: February-16-17 10:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Choose a project mascot

I was thinking about falcon(light, powerful and fast), or dolphins or tiger.

On Wed, Feb 15, 2017 at 12:29 AM, Hongbin Lu 
<hongbin...@huawei.com> wrote:
Hi Zun team,

OpenStack has a mascot program [1]. Basically, if we like, we can choose a 
mascot to represent our team. The process is as follows:
* We choose a mascot from the natural world, which can be an animal (e.g. fish, 
bird), natural feature (e.g. waterfall) or other natural element (e.g. flame).
* Once we choose a mascot, I communicate the choice to OpenStack foundation 
staff.
* Someone will work on a draft based on the style of the family of logos.
* The draft will be sent back to us for approval.

The final mascot will be used to present our team. All, any idea for the mascot 
choice?

[1] https://www.openstack.org/project-mascots/

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-16 Thread Hongbin Lu
The Zun team could squeeze the session into 45 minutes and give the other 45 
minutes to another team if anyone is interested.

Best regards,
Hongbin

From: Kendall Nelson [mailto:kennelso...@gmail.com]
Sent: March-16-17 11:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ptls] Project On-Boarding Rooms

Hello All!
I am pleased to see how much interest there is in these onboarding rooms. As of 
right now I can accommodate all the official projects (sorry Cyborg) that have 
requested a room. To make all the requests fit, I have combined docs and i18n 
and taken Thierry's suggestion to combine Infra/QA/RelMgmt/Regs/Stable.
These are the projects that have requested a slot:
Solum
Tricircle
Karbor
Freezer
Kuryr
Mistral
Dragonflow
Coudkitty
Designate
Trove
Watcher
Magnum
Barbican
Charms
Tacker
Zun
Swift
Watcher
Kolla
Horizon
Keystone
Nova
Cinder
Telemetry
Infra/QA/RelMgmt/Regs/Stable
Docs/i18n
If there are any other projects willing to share a slot together please let me 
know!
-Kendall Nelson (diablo_rojo)

On Thu, Mar 16, 2017 at 8:49 AM Jeremy Stanley 
<fu...@yuggoth.org> wrote:
On 2017-03-16 10:31:49 +0100 (+0100), Thierry Carrez wrote:
[...]
> I think we could share a 90-min slot between a number of the supporting
> teams:
>
> Infrastructure, QA, Release Management, Requirements, Stable maint
>
> Those teams are all under-staffed and wanting to grow new members, but
> 90 min is both too long and too short for them. I feel like regrouping
> them in a single slot and give each of those teams ~15 min to explain
> what they do, their process and tooling, and a pointer to next steps /
> mentors would be immensely useful.

I can see this working okay for the Infra team. Pretty sure I can't
come up with anything useful (to our team) we could get through in a
90-minute slot given our new contributor learning curve, so would
feel bad wasting a full session. A "this is who we are and what we
do, if you're interested in these sorts of things and want to find
out more on getting involved go here, thank you for your time" over
10 minutes with an additional 5 for questions could at least be
minimally valuable for us, on the other hand.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu
Zun had a similar issue of colliding on the keyword "container", and we chose 
to use the alternative term "appcontainer", which is not perfect but acceptable. 
IMHO, this kind of top-level name collision issue would be better resolved by 
introducing a namespace per project, which is the approach adopted by AWS.

Best regards,
Hongbin

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: March-20-17 3:35 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
> commands in osc?
> 
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> > Team,
> >
> > Stephen Watson has been working on an magnum feature to add magnum
> commands to the openstack client by implementing a plugin:
> >
> >
> https://review.openstack.org/#/q/status:open+project:openstack/python-
> > magnumclient+osc
> >
> > In review of this work, a question has resurfaced, as to what the
> client command name should be for magnum related commands. Naturally,
> we’d like to have the name “cluster” but that word is already in use by
> Senlin.
> 
> Unfortunately, the Senlin API uses a whole bunch of generic terms as
> top-level REST resources, including "cluster", "event", "action",
> "profile", "policy", and "node". :( I've warned before that use of
> these generic terms in OpenStack APIs without a central group
> responsible for curating the API would lead to problems like this. This
> is why, IMHO, we need the API working group to be ultimately
> responsible for preventing this type of thing from happening. Otherwise,
> there ends up being a whole bunch of duplication and same terms being
> used for entirely different things.
> 
>  >Stephen opened a discussion with Dean Troyer about this, and found
> that “infra” might be a suitable name and began using that, and
> multiple team members are not satisfied with it.
> 
> Yeah, not sure about "infra". That is both too generic and not an
> actual "thing" that Magnum provides.
> 
>  > The name “magnum” was excluded from consideration because OSC aims
> to be project name agnostic. We know that no matter what word we pick,
> it’s not going to be ideal. I’ve added an agenda on our upcoming team
> meeting to judge community consensus about which alternative we should
> select:
> >
> > https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-
> 03
> > -21_1600_UTC
> >
> > Current choices on the table are:
> >
> >   * c_cluster (possible abbreviation alias for
> container_infra_cluster)
> >   * coe_cluster
> >   * mcluster
> >   * infra
> >
> > For example, our selected name would appear in “openstack …” commands.
> Such as:
> >
> > $ openstack c_cluster create …
> >
> > If you have input to share, I encourage you to reply to this thread,
> or come to the team meeting so we can consider your input before the
> team makes a selection.
> 
> What is Magnum's service-types-authority service_type?
> 
> Best,
> -jay
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu


> -Original Message-
> From: Dean Troyer [mailto:dtro...@gmail.com]
> Sent: March-20-17 5:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
> commands in osc?
> 
> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto 
> wrote:
> > the  argument is actually the service name, such as “ec2”.
> This is the same way the openstack cli works. Perhaps there is another
> tool that you are referring to. Have I misunderstood something?
> 
> I am going to jump in here and clarify one thing.  OSC does not do
> project namespacing, or any other sort of namespacing for its resource
> names.  It uses qualified resource names (fully-qualified even?).  In
> some cases this results in something that looks a lot like namespacing,
> but it isn't. The Volume API commands are one example of this, nearly
> every resource there includes the word 'volume' but not because that is
> the API name, it is because that is the correct name for those
> resources ('volume backup', etc).

[Hongbin Lu] I might provide a minority point of view here. What confused me is 
the inconsistent style of the resource names. For example, there is a "container" 
resource for a swift container, and there is a "secret container" resource for a 
barbican container. I just found it odd to have both an un-qualified resource name 
(i.e. container) and a qualified resource name (i.e. secret container) in the 
same CLI. It appears to me that some resources are namespaced and others are 
not, and this kind of style provides a suboptimal user experience from my 
point of view.

I think the style would be more consistent if all the resources were either 
qualified or un-qualified, not a mix of both.

> 
> > We could so the same thing and use the text “container_infra”, but we
> felt that might be burdensome for interactive use and wanted to find
> something shorter that would still make sense.
> 
> Naming resources is hard to get right.  Here's my throught process:
> 
> For OSC, start with how to describe the specific 'thing' being
> manipulated.  In this case, it is some kind of cluster.  In the list
> you posted in the first email, 'coe cluster' seems to be the best
> option.  I think 'coe' is acceptable as an abbreviation (we usually do
> not use them) because that is a specific term used in the field and
> satisfies the 'what kind of cluster?' question.  No underscores please,
> and in fact no dash here, resource names have spaces in them.
> 
> dt
> 
> --
> 
> Dean Troyer
> dtro...@gmail.com
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Document about how to deploy Zun with multiple hosts

2017-03-22 Thread Hongbin Lu
Kevin,

I don’t think there is any such document right now. I submitted a ticket for 
creating one:

https://bugs.launchpad.net/zun/+bug/1675245

There is a guide for setting up a multi-host devstack environment: 
https://docs.openstack.org/developer/devstack/guides/multinode-lab.html . You 
could possibly use it as a starting point and inject Zun-specific configuration 
there. The guide divides nodes into two kinds: cluster controller and compute 
node. In the case of Zun, zun-api and zun-compute can run on the cluster controller, 
and zun-compute can run on each compute node. Hope it helps.
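
To make that concrete, below is a rough local.conf sketch for the Zun-specific 
part. The enable_plugin/enable_service/disable_service lines follow standard 
devstack usage, but the exact service names are defined by Zun's devstack plugin 
and may differ by release, so treat them as assumptions; the other multi-node 
settings from the guide above (database, rabbit, SERVICE_HOST, etc.) still apply.

# cluster controller local.conf
[[local|localrc]]
enable_plugin zun https://git.openstack.org/openstack/zun
enable_service zun-api zun-compute

# compute node local.conf
[[local|localrc]]
SERVICE_HOST=<controller-ip>
enable_plugin zun https://git.openstack.org/openstack/zun
disable_service zun-api
enable_service zun-compute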

Best regards,
Hongbin

From: Kevin Zhao [mailto:kevin.z...@linaro.org]
Sent: March-22-17 10:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Zun] Document about how to deploy Zun with multiple 
hosts

Hi guys,
Nowadays I want to try Zun in multiple hosts. But I didn't find the doc 
about how to deploy it.
I wonder where is document to show the users about how to deploy zun with 
multiple hosts? That will be easy for development.
Thanks  :-)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][devstack][kuryr][fuxi][zun] Consolidate docker installation

2017-04-02 Thread Hongbin Lu
Hi devstack team,

Please find my proposal about consolidating docker installation into one place 
that is devstack tree:

https://review.openstack.org/#/c/452575/

Currently, several projects install docker in their devstack 
plugins in various different ways. This potentially introduces issues if more 
than one such service is enabled in devstack, because the same software 
package will be installed and configured multiple times. To resolve the 
problem, one option is to consolidate the docker installation script into one 
place so that all projects can leverage it. Before continuing this effort, I 
wanted to get early feedback to confirm whether this kind of work will be accepted. 
BTW, etcd installation might have a similar problem, and I would be happy to 
contribute another patch to consolidate it if that would be accepted as 
well.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [daisycloud-core] [requirements] [magnum] [oslo] Do we really need to upgrade pbr, docker-py and oslo.utils

2017-04-19 Thread Hongbin Lu
Zun required docker-py to be 1.8 or higher because older versions of
docker-py didn't have the APIs we need. Sorry if this caused difficulties on
your side, but I don't think it is feasible to downgrade the version for now
since it would affect a ton of other projects.

Best regards,
Hongbin

On Thu, Apr 20, 2017 at 12:15 AM, Steven Dake (stdake) 
wrote:

> Hu,
>
> Kolla does not manage the global requirements process as it is global to
> OpenStack.  The Kolla core reviewers essentially rubber stamp changes from
> the global requirements bot assuming they pass our gating.  If they don’t
> pass our gating, we work with the committer to sort out a working solution.
>
> Taking a look at the specific issues you raised:
>
> Pbr: https://github.com/openstack/requirements/blame/stable/ocata/global-requirements.txt#L158
> Here is the change: https://github.com/openstack/requirements/commit/74a8e159e3eda7c702a39e38ab96327ba85ced3c
> (from the infrastructure team)
>
> Docker-py: https://github.com/openstack/requirements/blame/stable/ocata/global-requirements.txt#L338
> Here is the change: https://github.com/openstack/requirements/commit/330139835347a26f435ab1262f16cf9e559f32a6
> (from the magnum team)
>
> oslo-utils: https://github.com/openstack/requirements/blame/62383acc175b77fe7f723979cefaaca65a8d12fe/global-requirements.txt#L136
> https://github.com/openstack/requirements/commit/510c4092f48a3a9ac7518decc5d3724df8088eb7
> (I am not sure which team this is – the oslo team perhaps?)
>
> I would recommend taking the changes up with the requirements team or the
> direct authors.
>
> Regards
>
> -steve
>
> *From: *"hu.zhiji...@zte.com.cn"
> *Reply-To: *"OpenStack Development Mailing List (not for usage questions)"
> *Date: *Wednesday, April 19, 2017 at 8:45 PM
> *To: *"openstack-dev@lists.openstack.org"
> *Subject: *[openstack-dev] [kolla] [daisycloud-core] Do we really need to
> upgrade pbr, docker-py and oslo.utils
>
>
>
> Hello,
>
> As global requirements changed in Ocata, Kolla upgrades to pbr>=1.8 [1] and
> docker-py>=1.8.1 [2]. Besides, Kolla also starts depending on
> oslo.utils>=3.18.0 in order to use uuidutils.generate_uuid() instead of
> uuid.uuid4() to generate UUIDs.
>
> IMHO, the upgrades in [1] and [2] are not something Kolla really needs, and
> uuidutils.generate_uuid() is also supported by oslo.utils 3.16. I mean, if
> we keep Kolla's requirements in Ocata as they were in Newton, upper-layer
> users of Kolla like the daisycloud-core project can keep everything else
> unchanged while upgrading Kolla from stable/newton to stable/ocata.
> Otherwise, we have to upgrade from centos-release-openstack-newton to
> centos-release-openstack-ocata (we do not use pip since it conflicts with
> yum on files installed by the same packages). But that kind of upgrade may
> be too invasive and may impact other applications.
>
> I know there were some discussions about global requirements updates these
> days. So if these upgrades are not really needed by Kolla itself, can we
> just keep the requirements unchanged as long as possible?
>
> My 2c.
>
> [1] https://github.com/openstack/kolla/commit/2f50beb452918e37dec6edd25c53e407c6e47f53
> [2] https://github.com/openstack/kolla/commit/85abee13ba284bb087af587b673f4e44187142da
> [3] https://github.com/openstack/kolla/commit/cee89ee8bef92914036189d02745c08894a9955b
>
> B. R.,
> Zhijiang
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Zun Mascot

2017-04-21 Thread Hongbin Lu
Hi team,

Please review the mascot below and let me know your feedback if any. We will 
discuss/approve the mascot at the next team meeting.

Best regards,
Hongbin

From: Heidi Joy Tretheway [mailto:heidi...@openstack.org]
Sent: April-21-17 6:16 PM
To: Hongbin Lu
Subject: Re: Zun mascot follow-up

Hi Hongbin,
Our designers came up with a great mascot (dolphin) for your team that looks 
substantially different than Magnum’s shark (which was my concern). Would you 
please let me know what your team thinks?

[Inline image: the proposed Zun mascot (dolphin)]
On Feb 21, 2017, at 10:28 AM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:

Heidi,

Thanks for following this up and the advice. No problem for the website things. 
I will have the Zun team to choose another mascot and let you know.

Best regards,
Hongbin

From: Heidi Joy Tretheway [mailto:heidi...@openstack.org]
Sent: February-21-17 1:19 PM
To: Hongbin Lu
Subject: Zun mascot follow-up


Hi Hongbin,

I wanted to follow up to ensure you got my note to the Zun dev team list. I 
apologize that your mascot choice was listed wrong on 
openstack.org/project-mascots<http://openstack.org/project-mascots>. It should 
have shown as Zun (mascot not chosen) but instead showed up as Tricircle’s 
chosen mascot, the panda.

The error is entirely my fault, and we’ll get it fixed on the website shortly. 
Thanks for your patience, and please carry on with your debate over the best 
Zun mascot!

Below, your choices can work except for the barrel, because there are no 
human-made objects allowed. Also you are correct that it could be confusing to 
have both a Hawk (Winstackers) and a Falcon, so I would advise the team to look 
at the stork, dolphin, or tiger.

Thank you!



Thanks for the inputs. By aggregating feedback from different sources, the 
choices are as below:

* Barrel

* Storks

* Falcon (I am not sure this one since another team already chose Hawk)

* Dolphins

* Tiger

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] project on-board schedule

2017-04-27 Thread Hongbin Lu
Hi all,

There is a recent schedule change for the Zun new-contributor on-boarding 
session at the Boston Summit. The new time is Monday, May 8, 2:00pm-3:30pm [1]. 
Please feel free to let me know if the new time doesn't work for you. I look 
forward to seeing you all there.

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18693/zun-project-onboarding

Best regards,
Hongbin

From: joehuang
Sent: April-26-17 2:31 AM
To: Kendall Nelson; OpenStack Development Mailing List (not for usage 
questions); Hongbin Lu
Subject: RE: [openstack-dev] project on-board schedule

Thank you very much, Hongbin and Kendall.

Best Regards
Chaoyi Huang (joehuang)

From: Kendall Nelson [kennelso...@gmail.com]
Sent: 26 April 2017 11:19
To: joehuang; OpenStack Development Mailing List (not for usage questions); 
Hongbin Lu
Subject: Re: [openstack-dev] project on-board schedule

Yes, I should be able to make that happen :)

- Kendall

On Tue, Apr 25, 2017, 10:03 PM joehuang 
mailto:joehu...@huawei.com>> wrote:
Hello, Kendall,

Thank you very much for the slot you provided, but considering that it is lunch 
time, I am afraid the audience needs to have lunch too.

I just discussed with Hongbin, the PTL of Zun, he said it's OK to exchange the 
project on-boarding time slot between Zun[1] and Tricircle[2].

After the exchange, Tricircle will share with Sahara and use the first half (45 
minutes) of this time slot, just like Zun did. Zun's on-boarding session will be 
moved to Monday 2:00pm~3:30pm.

Is this exchange feasible?

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18701/zunsahara-project-onboarding
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18693/tricircle-project-onboarding

Best Regards
Chaoyi Huang (joehuang)

From: Kendall Nelson [kennelso...@gmail.com<mailto:kennelso...@gmail.com>]
Sent: 26 April 2017 4:07
To: OpenStack Development Mailing List (not for usage questions); joehuang

Subject: Re: [openstack-dev] project on-board schedule
Hello Joe,

I can offer TriCircle a lunch slot on Wednesday from 12:30-1:50?
-Kendall


On Tue, Apr 25, 2017 at 4:08 AM joehuang 
mailto:joehu...@huawei.com>> wrote:
Hi,

Thank you Tom. I found that the on-boarding session of Tricircle [1] overlaps 
with my talk [2]:

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18693/tricircle-project-onboarding
[2] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18076/when-one-cloud-is-not-enough-an-overview-of-sites-regions-edges-distributed-clouds-and-more

Is there any other project that could help us exchange on-boarding sessions? 
Thanks a lot; I just found the issue.

Best Regards
Chaoyi Huang (joehuang)


From: Tom Fifield [t...@openstack.org<mailto:t...@openstack.org>]
Sent: 25 April 2017 16:50
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] project on-board schedule

On 25/04/17 16:35, joehuang wrote:
> Hello,
>
> Where can I find the project on-board schedule in OpenStack Boston
> summit? I haven't found it yet, and maybe I missed some mail. Thanks a lot.

It's listed on the main summit schedule, under the Forum :)

Here's a direct link to the Forum category:

https://www.openstack.org/summit/boston-2017/summit-schedule/#track=146


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Proposal a change of Zun core team

2017-04-28 Thread Hongbin Lu
Hi all,

I propose a change to Zun's core team membership as below:

+ Feng Shengqin (feng-shengqin)
- Wang Feilong (flwang)

Feng Shengqin has contributed a lot to the Zun projects. Her contributions 
include BPs, bug fixes, and reviews. In particular, she completed an essential 
BP and has a lot of accepted commits in Zun's repositories. I think she is 
qualified for the core reviewer position. I would like to thank Wang Feilong 
for his interest in joining the team when the project was founded. I believe we 
are always friends regardless of his core membership.

By convention, we require a minimum of 4 +1 votes from Zun core reviewers 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, this proposal is rejected.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Hongbin Lu
Hi Sean,

I tried the new systemd devstack and frankly I don't like it. There are several 
handy operations in screen that seem to be impossible after switching to 
systemd, for example freezing a process with "Ctrl + a + [". In addition, 
navigating through the logs seems difficult (perhaps I am just not familiar with 
journalctl).
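
For reference, the closest equivalents I have found so far, assuming the 
devstack@<service> unit naming (the service name below is just an example):

  # follow one service's output, similar to watching a screen window
  $ sudo journalctl -f --unit devstack@zun-api.service
  # scroll and search older output for the same service
  $ sudo journalctl --unit devstack@zun-api.service --since "1 hour ago" | less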

From my understanding, the plan is to drop screen from devstack entirely? I 
would argue that it is better to keep both screen and systemd, and let users 
choose one of them based on their preference.

Best regards,
Hongbin

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: May-03-17 6:10 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> default
> 
> On 05/02/2017 08:30 AM, Sean Dague wrote:
> > We started running systemd for devstack in the gate yesterday, so far
> > so good.
> >
> > The following patch (which will hopefully land soon), will convert
> the
> > default local use of devstack to systemd as well -
> > https://review.openstack.org/#/c/461716/. It also includes
> > substantially updated documentation.
> >
> > Once you take this patch, a "./clean.sh" is recommended. Flipping
> > modes can cause some cruft to build up, and ./clean.sh should be
> > pretty good at eliminating them.
> >
> > https://review.openstack.org/#/c/461716/2/doc/source/development.rst
> > is probably specifically interesting / useful for people to read, as
> > it shows how the standard development workflows will change (for the
> > better) with systemd.
> >
> > -Sean
> 
> As a follow up, there are definitely a few edge conditions we've hit
> with some jobs, so the following is provided as information in case you
> have a job that seems to fail in one of these ways.
> 
> Doing process stop / start
> ==
> 
> The nova live migration job is special, it was restarting services
> manually, however it was doing so with some copy / pasted devstack code,
> which means it didn't evolve with the rest of devstack. So the stop
> code stopped working (and wasn't robust enough to make it clear that
> was the issue).
> 
> https://review.openstack.org/#/c/461803/ is the fix (merged)
> 
> run_process limitations
> ===
> 
> When doing the systemd conversion I looked for a path forward which was
> going to make 90% of everything just work. The key trick here was that
> services start as the "stack" user, and aren't daemonizing away from
> the console. We can take the run_process command and make that the
> ExecStart in a unit file.
> 
> *Except* that only works if the command is specified by an *absolute
> path*.
> 
> So things like this in kuryr-libnetwork become an issue
> https://github.com/openstack/kuryr-
> libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugi
> n.sh#L148
> 
> There is also a second issue there, which is calling sudo in the
> run_process line. If you need to run as a user/group different than the
> default, you need to specify that directly.
> 
> The run_process command now supports that -
> https://github.com/openstack-
> dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-
> common#L1531-L1535
> 
> And lastly, run_process really always did expect that the thing you
> started remained attached to the console. These are run as "simple"
> services in systemd. If you are running a thing which already
> daemonizes systemd is going to assume (correctly in this simple mode)
> the fact that the process detached from it means it died, and kill and
> clean it up.
> 
> This is the issue the OpenDaylight plugin ran into.
> https://review.openstack.org/#/c/461889/ is the proposed fix.
> 
> 
> 
> If you run into any other issues please pop into #openstack-qa (or
> respond to this email) and we'll try to work through them.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][devstack][kuryr][fuxi][zun] Consolidate docker installation

2017-05-04 Thread Hongbin Lu
Hi all,

Just want to give a little update about this. After discussing with the QA 
team, we agreed to create a dedicated repo for this purpose: 
https://github.com/openstack/devstack-plugin-container . In addition, a few 
patches [1][2][3] were proposed to different projects for switching to this 
common devstack plugin. I hope more teams will be interested in using this 
plugin and will help improve and maintain it.
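
For teams that want to try it, consuming the plugin should just be a matter of 
enabling it in local.conf instead of carrying project-specific docker 
installation code (the URL below is shown for illustration):

  [[local|localrc]]
  enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container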

[1] https://review.openstack.org/#/c/457348/
[2] https://review.openstack.org/#/c/461210/
[3] https://review.openstack.org/#/c/461212/

Best regards,
Hongbin

> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com]
> Sent: April-02-17 8:17 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [qa][devstack][kuryr][fuxi][zun]
> Consolidate docker installation
> 
> Hongbin,
> 
> Nice. +1 in theory :) the etcd one i have a WIP for the etcd/DLM,
> please see here https://review.openstack.org/#/c/445432/
> 
> -- Dims
> 
> On Sun, Apr 2, 2017 at 8:13 PM, Hongbin Lu 
> wrote:
> > Hi devstack team,
> >
> >
> >
> > Please find my proposal about consolidating docker installation into
> > one place that is devstack tree:
> >
> >
> >
> > https://review.openstack.org/#/c/452575/
> >
> >
> >
> > Currently, there are several projects that installed docker in their
> > devstack plugins in various different ways. This potentially
> introduce
> > issues if more than one such services were enabled in devstack
> because
> > the same software package will be installed and configured multiple
> > times. To resolve the problem, an option is to consolidate the docker
> > installation script into one place so that all projects will leverage
> > it. Before continuing this effort, I wanted to get early feedback to
> > confirm if this kind of work will be accepted. BTW, etcd installation
> > might have a similar problem and I would be happy to contribute
> > another patch to consolidate it if that is what will be accepted as
> well.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Proposal a change of Zun core team

2017-05-05 Thread Hongbin Lu
Hi all,

Thanks for your votes. Based on the feedback, I have adjusted the core team 
membership accordingly. Welcome Feng Shengqin to the core team.

https://review.openstack.org/#/admin/groups/1382,members

Best regards,
Hongbin

From: shubham sharma [mailto:shubham@gmail.com]
Sent: May-02-17 1:03 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Zun] Proposal a change of Zun core team

+1

Regards
Shubham

On Tue, May 2, 2017 at 6:33 AM, Qiming Teng 
mailto:teng...@linux.vnet.ibm.com>> wrote:
+1

Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Create subnetpool on dynamic credentials

2017-05-20 Thread Hongbin Lu
Hi QA team,

I have a proposal to create a subnetpool/subnet pair on dynamic credentials: 
https://review.openstack.org/#/c/466440/ . We (the Zun team) have use cases for 
using subnets with subnetpools. I wanted to get some early feedback on this 
proposal. Will this proposal be accepted? If not, I would appreciate alternative 
suggestions. Thanks in advance.
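
For concreteness, the kind of pair we need per test looks like this in CLI form 
(the names and prefixes below are just examples):

  $ openstack subnet pool create --pool-prefix 10.8.0.0/16 \
      --default-prefix-length 26 zun-test-pool
  $ openstack subnet create --network private --subnet-pool zun-test-pool \
      --prefix-length 26 zun-test-subnet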

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuxi][stackube][kuryr] IRC meeting

2017-05-22 Thread Hongbin Lu
Hi all,

We will have an IRC meeting at UTC 1400-1500 on Tuesday (2017-05-23). At the 
meeting, we will discuss k8s storage integration with OpenStack. This effort 
might span more than one team (i.e. kuryr and stackube). You are more than 
welcome to join us at #openstack-meeting-cp tomorrow.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Create subnetpool on dynamic credentials

2017-05-24 Thread Hongbin Lu
Hi Andrea,

Sorry, I just got a chance to get back to this. Yes, an advantage is creating 
and deleting the subnetpool once instead of per test. It seems neutron doesn’t 
support setting subnetpool_id after a subnet is created. If this is true, it 
means we cannot leverage the pre-created subnet from the credential provider, 
because we want to test against a subnet with a subnetpool. Eventually, we would 
need to create a subnet/subnetpool pair for each test and take care of the 
configuration of these resources. This looks complex, especially for our 
contributors, most of whom don’t have a strong networking background.

Another motivation for this proposal is that we want to run all the tests 
against a subnet with a subnetpool. We currently run tests without a subnetpool, 
but that doesn’t work well in some dev environments [1]. The issue was tracked 
down to a limitation of the docker networking model that makes it hard for the 
plugin to identify the correct subnet (unless the subnet has a subnetpool, 
because libnetwork will record its uuid). This is why I prefer to run tests 
against a pre-created subnet/subnetpool pair. Ideally, Tempest could provide a 
feasible solution to address our use cases.

[1] https://bugs.launchpad.net/zun/+bug/1690284

Best regards,
Hongbin

From: Andrea Frittoli [mailto:andrea.fritt...@gmail.com]
Sent: May-22-17 9:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] Create subnetpool on dynamic credentials

Hi Hongbin,

If several of your test cases require a subnet pool, I think the simplest 
solution would be creating one in the resource creation step of the tests.
As I understand it, subnet pools can be created by regular projects (they do 
not require admin credentials).

The main advantage that I can think of for having subnet pools provisioned as 
part of the credential provider code is that - in case of pre-provisioned 
credentials - the subnet pool would be created and delete once per test user as 
opposed to once per test class.

That said I'm not opposed to the proposal in general, but if possible I would 
prefer to avoid adding complexity to an already complex part of the code.

andrea

On Sun, May 21, 2017 at 2:54 AM Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Hi QA team,

I have a proposal to create subnetpool/subnet pair on dynamic credentials: 
https://review.openstack.org/#/c/466440/ . We (Zun team) have use cases for 
using subnets with subnetpools. I wanted to get some early feedback on this 
proposal. Will this proposal be accepted? If not, would appreciate alternative 
suggestion if any. Thanks in advance.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-30 Thread Hongbin Lu
Please consider leveraging Fuxi instead. The Kuryr/Fuxi team is working very 
hard to deliver the docker network/storage plugins. I hope you will work with us 
to get them integrated with Magnum-provisioned clusters. Currently, COE clusters 
provisioned by Magnum are far from enterprise-ready. I think the Magnum project 
will be better off if it adopts Kuryr/Fuxi, which will give you better OpenStack 
integration.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 7:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [3], it could support manila too. 
Rexray
also supports the popular cloud providers.

Magnum's docker swarm cluster driver, already leverages rexray for cinder 
integration. [2]

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata

On 27 May 2017 at 12:15, zengchen 
mailto:chenzeng...@163.com>> wrote:
Hi John & Ben:
 I have committed a patch[1] to add a new repository to Openstack. Please take 
a look at it. Thanks very much!

 [1]: https://review.openstack.org/#/c/468635

Best Wishes!
zengchen




在 2017-05-26 21:30:48,"John Griffith" 
mailto:john.griffi...@gmail.com>> 写道:



On Thu, May 25, 2017 at 10:01 PM, zengchen 
mailto:chenzeng...@163.com>> wrote:

Hi john:
I have seen your updates on the bp. I agree with your plan on how to 
develop the codes.
However, there is one issue I have to remind you that at present, Fuxi not 
only can convert
 Cinder volume to Docker, but also Manila file. So, do you consider to involve 
Manila part of codes
 in the new Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben 
Swartzlander

is interested, we can check with him and make sure but I certainly hope that 
Manila would be interested.
Besides, IMO, It is better to create a repository for Fuxi-golang, because
 Fuxi is the project of Openstack,
Yeah, that seems fine; I just didn't know if there needed to be any more 
conversation with other folks on any of this before charing ahead on new repos 
etc.  Doesn't matter much to me though.


   Thanks very much!

Best Wishes!
zengchen



At 2017-05-25 22:47:29, "John Griffith" 
mailto:john.griffi...@gmail.com>> wrote:



On Thu, May 25, 2017 at 5:50 AM, zengchen 
mailto:chenzeng...@163.com>> wrote:
Very sorry to foget attaching the link for bp of rewriting Fuxi with go 
language.
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang

At 2017-05-25 19:46:54, "zengchen" 
mailto:chenzeng...@163.com>> wrote:

Hi guys:
hongbin had committed a bp of rewriting Fuxi with go language[1]. My 
question is where to commit codes for it.
We have two choice, 1. create a new repository, 2. create a new branch.  IMO, 
the first one is much better. Because
there are many differences in the layer of infrastructure, such as CI.  What's 
your opinion? Thanks very much

Best Wishes
zengchen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Hi Zengchen,

For now I was thinking just use Github and PR's outside of the OpenStack 
projects to bootstrap things and see how far we can get.  I'll update the BP 
this morning with what I believe to be the key tasks to work through.

Thanks,
John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-31 Thread Hongbin Lu
Please find my replies inline.

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 9:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang



On 30 May 2017 at 15:26, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Please consider leveraging Fuxi instead.

Is there a missing functionality from rexray?

[Hongbin Lu] From my understanding, Rexray targets the overcloud use case and 
assumes that containers are running on top of nova instances. You mentioned 
Magnum is leveraging Rexray for Cinder integration. Actually, I am the core 
reviewer who reviewed and approved those Rexray patches. From what I observed, 
the functionality provided by Rexray is minimal. What it does is simply call the 
Cinder API to search for an existing volume, attach the volume to the Nova 
instance, and let docker bind-mount the volume into the container. At the time I 
was testing it, it seemed to have some mystery bugs that prevented me from 
getting the cluster to work. It was packaged as a large container image, which 
might take more than 5 minutes to pull down. With that said, Rexray might be a 
choice for someone who is looking for a cross-cloud-provider solution. Fuxi will 
focus on OpenStack and targets both overcloud and undercloud use cases. That 
means Fuxi can work with Nova+Cinder or a standalone Cinder. As John pointed out 
in another reply, another benefit of Fuxi is resolving the fragmentation problem 
of existing solutions. Those are the differentiators of Fuxi.
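
To give a concrete picture of the consumption model, using a Cinder volume from 
docker through Fuxi looks roughly like the following (the size option reflects 
my understanding of the current driver and may change):

  $ docker volume create --driver fuxi --name db-data -o size=1
  $ docker run -d -v db-data:/var/lib/mysql mysql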

Kuryr/Fuxi team is working very hard to deliver the docker network/storage 
plugins. I wish you will work with us to get them integrated with 
Magnum-provisioned cluster.

Patches are welcome to support fuxi as an *option* instead of rexray, so users 
can choose.

Currently, COE clusters provisioned by Magnum is far away from 
enterprise-ready. I think the Magnum project will be better off if it can adopt 
Kuryr/Fuxi which will give you a better OpenStack integration.

Best regards,
Hongbin

fuxi feature request: Add authentication using a trustee and a trustID.

[Hongbin Lu] I believe this is already supported.

Cheers,
Spyros


From: Spyros Trigazis [mailto:strig...@gmail.com<mailto:strig...@gmail.com>]
Sent: May-30-17 7:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [3], it could support manila too. 
Rexray
also supports the popular cloud providers.

Magnum's docker swarm cluster driver, already leverages rexray for cinder 
integration. [2]

Cheers,
Spyros

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0
[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0
[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata

On 27 May 2017 at 12:15, zengchen 
mailto:chenzeng...@163.com>> wrote:
Hi John & Ben:
 I have committed a patch[1] to add a new repository to Openstack. Please take 
a look at it. Thanks very much!

 [1]: https://review.openstack.org/#/c/468635

Best Wishes!
zengchen



在 2017-05-26 21:30:48,"John Griffith" 
mailto:john.griffi...@gmail.com>> 写道:


On Thu, May 25, 2017 at 10:01 PM, zengchen 
mailto:chenzeng...@163.com>> wrote:

Hi john:
I have seen your updates on the bp. I agree with your plan on how to 
develop the codes.
However, there is one issue I have to remind you that at present, Fuxi not 
only can convert
 Cinder volume to Docker, but also Manila file. So, do you consider to involve 
Manila part of codes
 in the new Fuxi-golang?
Agreed, that's a really good and important point.  Yes, I believe Ben 
Swartzlander

is interested, we can check with him and make sure but I certainly hope that 
Manila would be interested.
Besides, IMO, It is better to create a repository for Fuxi-golang, because
 Fuxi is the project of Openstack,
Yeah, that seems fine; I just didn't know if there needed to be any more 
conversation with other folks on any of this before charing ahead on new repos 
etc.  Doesn't matter much to me though.


   Thanks very much!

Best Wishes!
zengchen


At 2017-05-25 22:47:29, "John Griffith" 
mailto:john.griffi...@gmail.com>> wrote:


On Thu, May 25, 2017 at 5:50 AM, zengchen 
mailto:chenzeng...@163.com>> wrote:
Very sorry to foget attaching the link for bp of rewriting Fuxi with go 
language.
https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang

At 2017-05-25 19:46:54, "zengchen" 
mailto:chenzeng...@163.com>> wrote:
Hi guys:
hongbin had committed a bp of rewriting Fuxi with go language[1]. My 
question i

[openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-19 Thread Hongbin Lu
Hi all,

I would like to propose the following change to the Zun core team:

+ Shunli Zhou (shunliz)

Shunli has been contributing to Zun for a while and has done a lot of work. He 
has completed the BP for supporting resource claims and is close to finishing 
the filter scheduler BP. He has shown a good understanding of Zun's code base 
and expertise in other OpenStack projects. The quantity [1] and quality of his 
submitted code also show his qualification. Therefore, I think he will be a good 
addition to the core team.

In addition, I have a removal notice. Davanum Srinivas (Dims) and Yanyan Hu 
requested to be removed from the core team. Dims had been helping us since the 
inception of the project. I treated him as a mentor, and his guidance has always 
been helpful for the whole team. As the project becomes mature and stable, I 
agree with him that it is time to relieve him of the core reviewer 
responsibility, because he has many other important responsibilities in the 
OpenStack community. Yanyan is leaving because he has been relocated and is now 
focused on an area outside OpenStack. I would like to take this chance to thank 
Dims and Yanyan for their contributions to Zun.

Core reviewers, please cast your vote on this proposal.

Best regards,
Hongbin



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Hongbin Lu
> Incidentally, the reason that discussions always come back to that is
> because OpenStack isn't very good at it, which is a huge problem not
> only for the *aaS projects but for user applications in general running
> on OpenStack.
> 
> If we had fine-grained authorisation and ubiquitous multi-tenant
> asynchronous messaging in OpenStack then I firmly believe that we, and
> application developers, would be in much better shape.
> 
> > If you create these projects as applications that run on cloud
> > infrastructure (OpenStack, k8s or otherwise),
> 
> I'm convinced there's an interesting idea here, but the terminology
> you're using doesn't really capture it. When you say 'as applications
> that run on cloud infrastructure', it sounds like you mean they should
> run in a Nova VM, or in a Kubernetes cluster somewhere, rather than on
> the OpenStack control plane. I don't think that's what you mean though,
> because you can (and IIUC Rackspace does) deploy OpenStack services
> that way already, and it has no real effect on the architecture of
> those services.
> 
> > then the discussions focus
> > instead on how the real end-users -- the ones that actually call the
> > APIs and utilize the service -- would interact with the APIs and not
> > the underlying infrastructure itself.
> >
> > Here's an example to think about...
> >
> > What if a provider of this DBaaS service wanted to jam 100 database
> > instances on a single VM and provide connectivity to those database
> > instances to 100 different tenants?
> >
> > Would those tenants know if those databases were all serviced from a
> > single database server process running on the VM?
> 
> You bet they would when one (or all) of the other 99 decided to run a
> really expensive query at an inopportune moment :)
> 
> > Or 100 contains each
> > running a separate database server process? Or 10 containers running
> > 10 database server processes each?
> >
> > No, of course not. And the tenant wouldn't care at all, because the
> 
> Well, if they had any kind of regulatory (or even performance)
> requirements then the tenant might care really quite a lot. But I take
> your point that many might not and it would be good to be able to offer
> them lower cost options.
> 
> > point of the DBaaS service is to get a database. It isn't to get one
> > or more VMs/containers/baremetal servers.
> 
> I'm not sure I entirely agree here. There are two kinds of DBaaS. One
> is a data API: a multitenant database a la DynamoDB. Those are very
> cool, and I'm excited about the potential to reduce the granularity of
> billing to a minimum, in much the same way Swift does for storage, and
> I'm sad that OpenStack's attempt in this space (MagnetoDB) didn't work
> out. But Trove is not that.
> 
> People use Trove because they want to use a *particular* database, but
> still have all the upgrades, backups, &c. handled for them. Given that
> the choice of database is explicitly *not* abstracted away from them,
> things like how many different VMs/containers/baremetal servers the
> database is running on are very much relevant IMHO, because what you
> want depends on both the database and how you're trying to use it. And
> because (afaik) none of them have native multitenancy, it's necessary
> that no tenant should have to share with any other.
> 
> Essentially Trove operates at a moderate level of abstraction -
> somewhere between managing the database + the infrastructure it runs on
> yourself and just an API endpoint you poke data into. It also operates
> at the coarse end of a granularity spectrum running from
> VMs->Containers->pay as you go.
> 
> It's reasonable to want to move closer to the middle of the granularity
> spectrum. But you can't go all the way to the high abstraction/fine
> grained ends of the spectra (which turn out to be equivalent) without
> becoming something qualitatively different.
> 
> > At the end of the day, I think Trove is best implemented as a hosted
> > application that exposes an API to its users that is entirely
> separate
> > from the underlying infrastructure APIs like Cinder/Nova/Neutron.
> >
> > This is similar to Kevin's k8s Operator idea, which I support but in
> a
> > generic fashion that isn't specific to k8s.
> >
> > In the same way that k8s abstracts the underlying infrastructure (via
> > its "cloud provider" concept), I think that Trove and similar
> projects
> > need to use a similar abstraction and focus on providing a different
> > API t

Re: [openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-24 Thread Hongbin Lu
Hi all,

Thanks for your votes. According to the feedback, I added Shunli to the core 
team [1].

Best regards,
Hongbin

[1] https://review.openstack.org/#/admin/groups/1382,members

> -Original Message-
> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> Sent: June-21-17 8:56 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Haruhiko Katou
> Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team
> and removal notice
> 
> +1 to all from me.
> 
> Welcome Shunli! And greate thanks to Dims and Yanyan!!.
> 
> Best regards,
> Shu
> 
> > -Original Message-
> > From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> > Sent: Wednesday, June 21, 2017 12:30 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team
> > and removal notice
> >
> > +1 from me as well.
> >
> >
> >
> > Thanks Dims and Yanyan for you contribution to Zun :)
> >
> >
> >
> > Regards,
> >
> > Madhuri
> >
> >
> >
> > From: Kevin Zhao [mailto:kevin.z...@linaro.org]
> > Sent: Wednesday, June 21, 2017 6:37 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team
> > and removal notice
> >
> >
> >
> > +1 for me.
> >
> > Thx!
> >
> >
> >
> > On 20 June 2017 at 13:50, Pradeep Singh  > <mailto:ps4openst...@gmail.com> > wrote:
> >
> > +1 from me,
> >
> > Thanks Shunli for your great work :)
> >
> >
> >
> > On Tue, Jun 20, 2017 at 10:02 AM, Hongbin Lu
>  > <mailto:hongbin...@huawei.com> > wrote:
> >
> > Hi all,
> >
> >
> >
> > I would like to propose the following change to the Zun
> core team:
> >
> >
> >
> > + Shunli Zhou (shunliz)
> >
> >
> >
> > Shunli has been contributing to Zun for a while and did a
> lot of
> > work. He has completed the BP for supporting resource claim and be
> > closed to finish the filter scheduler BP. He showed a good
> > understanding of the Zun’s code base and expertise on other OpenStack
> > projects. The quantity [1] and quality of his submitted code also
> shows his qualification.
> > Therefore, I think he will be a good addition to the core team.
> >
> >
> >
> > In addition, I have a removal notice. Davanum Srinivas
> > (Dims) and Yanyan Hu requested to be removed from the core team. Dims
> > had been helping us since the inception of the project. I treated him
> > as mentor and his guidance is always helpful for the whole team. As
> > the project becomes mature and stable, I agree with him that it is
> > time to relieve him from the core reviewer responsibility because he
> > has many other important responsibilities for the OpenStack community.
> > Yanyan’s leaving is because he has been relocated and focused on an
> > out-of-OpenStack area. I would like to take this chance to thank Dims
> and Yanyan for their contribution to Zun.
> >
> >
> >
> > Core reviewers, please cast your vote on this proposal.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > __
> > 
> > OpenStack Development Mailing List (not for usage
> > questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> >
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
> > dev
> >
> >
> >
> >
> > __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > <http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-
> > dev
> >
> >
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Upgrade from 'docker-py' to 'docker'

2017-06-24 Thread Hongbin Lu
Hi team,

We have recently finished the upgrade from 'docker-py' to 'docker'. If your 
devstack environment runs into errors due to the incompatibility between the old 
and new docker python packages (such as [1]), you could try the commands below:

  $ sudo pip uninstall docker-py docker-pycreds
  $ sudo pip install -c /opt/stack/requirements/upper-constraints.txt \
  -e /opt/stack/zun
  $ sudo systemctl restart devstack@kuryr*
  $ sudo systemctl restart devstack@zun*

For context, 'docker-py' is the old python binding library for consuming the 
docker REST API. It has been renamed to 'docker', and the old package will be 
dropped eventually. In the last few days, there have been several reports of 
errors caused by the double installation of both the 'docker-py' and 'docker' 
packages in the development environment, so we needed to migrate from 
'docker-py' to 'docker' to resolve the issue.
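
If you are not sure whether your environment has hit the double-installation 
problem, the following is a quick way to check (the version numbers are just 
examples):

  $ pip freeze | grep -i docker
  docker==2.4.2
  docker-py==1.10.6    # if both show up, run the uninstall/reinstall commands above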

Right now, all of Zun's components and dependencies have finished the upgrade 
[2][3][4], and there is another proposed patch to drop 'docker-py' from global 
requirements [5]. The package conflict issue will be entirely resolved when the 
upgrade is finished globally.

[1] https://bugs.launchpad.net/zun/+bug/1693425
[2] https://review.openstack.org/#/c/475526/
[3] https://review.openstack.org/#/c/475863/
[4] https://review.openstack.org/#/c/475893/
[5] https://review.openstack.org/#/c/475962/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-06 Thread Hongbin Lu
Hi Greg,

Please find my replies inline.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-06-17 11:49 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Apologies, I have some ‘newbie’ questions on zun.
I have looked a bit at zun ... a few slide decks and a few summit presentation 
videos.
I am somewhat familiar with old container orchestration attempts in openstack 
... nova and heat.
And somewhat familiar with Magnum for COEs on VMs.


Question 1:

-  in long term, will ZUN manage containers hosted by OpenStack VMs or 
OpenStack Hosts or both ?

o  I think the answer is both, and

o  I think technically ZUN will manage the containers in OpenStack VM(s) or 
OpenStack Host(s), thru a COE

•  where the COE is kubernetes, swarm, mesos ... or, initially, some very 
simple default COE provided by ZUN itself.

[Hongbin Lu] Yes. Zun aims to support containers in VMs, on baremetal, or in 
COEs in the long term. One clarification: Zun doesn’t aim to become a COE 
itself, but it could be used together with Heat [1] to achieve some equivalent 
container orchestration functionality.

[1] https://review.openstack.org/#/c/437810/
Question 2:
-  what is currently supported in MASTER ?

[Hongbin Lu] What is currently supported is the container-in-baremetal scenario. 
The next release might introduce container-in-vm. COE integration might be a 
longer-term pursuit.
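
To give a feel for what works on master today, a quick smoke test against a 
devstack deployment looks roughly like this (the image and command are 
arbitrary; the CLI names follow the current python-zunclient and may change):

  $ zun run --name test cirros ping -c 4 8.8.8.8
  $ zun list
  $ zun logs test
  $ zun delete test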


Question 3:
-  in the scenario where ZUN is managing containers thru Kubernetes 
directly on OpenStack Host(s)
o  I believe the intent is that,
at the same time, and on the same OpenStack Host(s),
NOVA is managing VMs on the OpenStack Host(s)
o  ??? Has anyone started to look at the Resource Management / Arbitration of 
the OpenStack Host’s Resources,
   between ZUN and NOVA ???
[Hongbin Lu] No, it hasn’t. We started with the assumption that Zun and Nova 
manage disjoint sets of resources (i.e. compute hosts), so there is no resource 
contention. The ability to share compute resources across multiple OpenStack 
services for VMs and containers is appealing, but it would require discussions 
across multiple teams to build consensus on that pursuit.
Question 4:
-  again, in the scenario where ZUN is managing containers thru 
Kubernetes directly on OpenStack Host(s)
-  what are the Technical Pros / Cons of this approach, relative to 
using OpenStack VM(s) ?
o  PROs
•  ??? does this really use less resources than the VM Scenario ???
• is there an example you can walk me thru ?
•  I suppose that instead of pre-allocating resources to a fairly large VM for 
hosting containers,
you would only use the resources for the containers that are actually launched,
o  CONs
•  for application containers, you are restricted by the OS running on the 
OpenStack Host,

[Hongbin Lu] Yes, there are pros and cons to either approach, and Zun is not 
biased toward either of them. Instead, Zun aims to support both where feasible.


Greg.
WIND RIVER
Titanium Cloud
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-07 Thread Hongbin Lu
Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern in which containers run on physical compute nodes (not on an 
instance provided by Nova/Ironic). I think your second interpretation is right 
if “OpenStack Hosts” means a compute node. I think a diagram [1] explains the 
current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus, but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised, but I encourage you to bring your use cases to the team and keep the 
discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

o  is this done by leveraging the Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

o  OR

o  do you mean that, in this scenario, OpenStack Hosts are 
deployed & configured with docker software,
and Zun expects docker to be running on each OpenStack Host, in order to launch 
its containers ?


• In the future, when Zun supports the container-in-coe scenario

o  is the idea that the COE (Kubernetes or Swarm) will abstract from Zun 
whether the COE’s minion nodes are OpenStack VMs or OpenStack Baremetal 
Instances (or OpenStack Hosts) ?

o  is the idea that Magnum will support launching COEs with VM minion nodes 
and/or Baremetal minion nodes ?


Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, July 6, 2017 at 2:39 PM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Please find my replies inline.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-06-17 11:49 AM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Apologize I have some ‘newbie’ questions on zun.
I have looked a bit at zun ... a few slide decks and a few summit presentation 
videos.
I am somewhat familiar with old container orchestration attempts in openstack 
... nova and heat.
And somewhat familiar with Magnum for COEs on VMs.


Question 1:

-  in long term, will ZUN manage containers hosted by OpenStack VMs or 
OpenStack Hosts or both ?

oI think the answer is both, and

oI think technically ZUN will manage the containers in OpenStack VM(s) or 
OpenStack Host(s), thru a COE

•  where the COE is kubernetes, swarm, mesos ... or, initially, some very 
simple default COE provided by ZUN itself.

[Hongbin Lu] Yes. Zun aims to support containers in VMs, baremetal, or COEs in 
long term. A clarification is Zun doesn’t aim to become a COE, but it could be 
used together with Heat [1] to achieve some container orchestration equivalent 
functionalities.

[1] https://review.openstack.org/#/c/437810/
Question 2:
-  what is currently supported in MASTER ?

[Hongbin Lu] What currently supported is container-in-baremetal scenario. The 
next release might introduce container-in-vm. COE integration might be the long 
term pursue.


Question 3:
-  in the scenario where ZUN is managing containers thru Kubernetes 
directly on OpenStack Host(s)
oI believe the intent is that,
at the same time, and on the same OpenStack Host(s),
NOVA is managing VMs on the OpenStack Host(s)
o??? Has anyone started to look at the Resource Management / Arbitration of 
the OpenStack Host’s Resources,
   between ZUN and NOVA ???
[Hongbin Lu] No, it hasn’t. We started with an assumption that Zun and Nova are 
managing disjoined set of resources (i.e. compute hosts) so there is not 
resource contention. The ability to share compute resources across multiple 
OpenStack services for VMs and containers is cool and it might require 
discussions across multiple teams to build consensus of this pursue.
Question 4:
-  again, in the scenario where ZUN is managing containers thru 
Kubernetes directly on OpenStack Host(s)
-  what are the Technical Pros / Cons of this approach, relative to 
using OpenStack VM(s) ?
oPROs
•  ??? does this really use less reso

Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-07 Thread Hongbin Lu
Hi Greg,

Zun currently leverages the “--memory”, “--cpu-period”, and “--cpu-quota” docker 
options to limit CPU and memory. Zun does its own resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is split out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html
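
To make that concrete, the request and what it maps to look roughly like the 
following (values are illustrative, and the client flags may change):

  # request through the Zun CLI (flags per the current python-zunclient)
  $ zun run --cpu 0.5 --memory 512 nginx
  # roughly what Zun passes down to docker under the hood
  $ docker run -d --memory 512m --cpu-period 100000 --cpu-quota 50000 nginx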

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

o  but does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes it's using ?

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern where containers run on physical compute nodes (not on 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

- is this done by leveraging the Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

- OR

- do you mean that, in this scenario, OpenStack Hosts are 
deployed & configured with docker software, 
and Zun expects docker to be running on each OpenStack Host, in order to launch 
its containers ?


• In the future, when Zun supports the container-in-coe scenario

- is the idea that the COE (Kubernetes or Swarm) will abstract from Zun 
whether the COE’s minion nodes are OpenStack VMs or OpenStack Baremetal 
Instances (or OpenStack Hosts) ?

- is the idea that Magnum will support launching COEs with VM minion nodes 
and/or Baremetal minion nodes ?


Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, July 6, 2017 at 2:39 PM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Please find my replies inline.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-06-17 11:49 AM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Apologies, I have some ‘newbie’ questions on zun.
I have looked a bit at zun ... a few slide decks and a few summit presentation 
videos.
I am somewhat familiar with old container orchestration attempts in openstack 
... nova and heat.
And somewhat familiar with Magnum for COEs on VMs.


Question 1:

-  in long term, will ZUN manage containers hosted by OpenStack VMs or 
OpenStack Hosts or both ?

oI think the answer is both,

Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-11 Thread Hongbin Lu
Hi Greg,

There is no such API in Zun. I created a BP for this feature request: 
https://blueprints.launchpad.net/zun/+spec/show-container-engine-info . 
Hopefully, the implementation will be available in the next release or two.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 10:24 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hey Hongbin,

is there a way to display ZUN’s resource usage ?
i.e. analogous to nova’s “nova hypervisor-show ”
e.g. memory usages, cpu usage, etc .

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 2:08 PM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Zun currently leverages the “--memory", “--cpu-period”, and “--cpu-quota” 
options to limit the CPU and memory. Zun does do resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is spited out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

obut does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes its using ?

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern that containers are running on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of follow up, clarifying questions ...


• You mentioned that currently Zun supports the container-in-baremetal 
scenario

ois this done by leveraging Ironic baremetal service ?

•  e.g. does Zun launch an Ironic baremetal instance (running docker) in order 
to host containers being launched by Zun ?

oOR

odo you must mean that, in this scenario, OpenStack Hosts are 
deployed&configured with docker software,
and Zun expects docker to be running on each OpenStack Host, in order to launch 
its containers ?


• In the future, when Zun supports the container-in-coe scenario

ois the idea that the COE (Kubernetes or Swarm) will abstract from Zun 
whether the COE’s minion nodes are OpenStack VMs or OpenStack Baremetal 
Instances (or OpenStack Hosts) ?

ois the idea that Magnum will support launching COEs with VM minion nodes 
and/or Baremetal minion nodes ?


Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@li

Re: [openstack-dev] [zun] Some general ZUN use case / drivers type questions

2017-07-11 Thread Hongbin Lu
Greg,

No, it isn’t. We are working hard to integrate with Cinder (either via Fuxi or 
direct integration). Perhaps this design spec can provide some information 
about where we are heading to: https://review.openstack.org/#/c/468658/ .

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 2:13 PM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin,

another quick question,
is ZUN integrated with FUXI for Container mounting of Cinder Volumes yet ?

( my guess is no ... don’t see any options for that in the zun cli for create 
or run )

Greg.

From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Tuesday, July 11, 2017 at 2:04 PM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

There is no such API in Zun. I created a BP for this feature request: 
https://blueprints.launchpad.net/zun/+spec/show-container-engine-info . 
Hopefully, the implementation will be available at the next release or two.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-11-17 10:24 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hey Hongbin,

is there a way to display ZUN’s resource usage ?
i.e. analogous to nova’s “nova hypervisor-show ”
e.g. memory usages, cpu usage, etc .

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 2:08 PM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Zun currently leverages the “--memory", “--cpu-period”, and “--cpu-quota” 
options to limit the CPU and memory. Zun does do resource tracking and 
scheduling right now, but this is temporary. The long-term plan is to switch to 
the Placement API [1] after it is spited out from Nova.

[1] https://docs.openstack.org/nova/latest/placement.html

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 11:00 AM
To: Hongbin Lu; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Thanks Hongbin.

I’ve got zun setup in devstack now, so will play with it a bit to better 
understand.

Although a couple more questions (sorry)

• in the current zun implementation of containers directly on compute 
nodes,
does zun leverage any of the docker capabilities to restrict the amount of 
resources used by a container ?
e.g. the amount and which cpu cores the container’s processes are allowed to 
use,
 how much memory the container is allowed to access/use, etc.

e.g. see https://docs.docker.com/engine/admin/resource_constraints/

• and then,
I know you mentioned that the assumption is that there are separate 
availability zones for zun and nova.

obut does zun do Resource Tracking and Scheduling based on that Resource 
Tracking for the nodes its using ?

Greg.


From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: Friday, July 7, 2017 at 10:42 AM
To: Greg Waines mailto:greg.wai...@windriver.com>>, 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
mailto:openstack-dev@lists.openstack.org>>
Subject: RE: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hi Greg,

Sorry for the confusion. I used the term “container-in-baremetal” to refer to a 
deployment pattern that containers are running on physical compute nodes (not 
an instance provided by Nova/Ironic). I think your second interpretation is 
right if “OpenStack Hosts” means a compute node. I think a diagram [1] could 
explain the current deployment scenario better.

For the container-in-coe scenario, it is out of the current focus but the team 
is exploring ideas on it. I don’t have specific answers for the two questions 
you raised but I encourage you to bring up your use cases to the team and keep 
the discussion open.

[1] https://www.slideshare.net/hongbin034/clipboards/zun-deployment

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-07-17 7:05 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [zun] Some general ZUN use case / drivers type 
questions

Hongbin,
Thanks for the responses.
A couple of

Re: [openstack-dev] [zun] sandbox and clearcontainers

2017-07-11 Thread Hongbin Lu
Hi Surya,

First, I would like to provide some context for folks who are not familiar with 
the sandbox concept in Zun. The "sandbox" provides an isolated environment 
for one or multiple containers. In the docker driver, we use it as a placeholder 
for a set of Linux namespaces (i.e. network, ipc, etc.) in which the "real" 
container(s) will run. For example, if an end-user runs "zun run nginx", Zun 
will first create an infra container (sandbox) and leverage the set of Linux 
namespaces it creates, then Zun will create the "real" (nginx) container by 
using the Linux namespaces of the infra container. Strictly speaking, this is 
not a container inside a container, but a container inside a set of 
pre-existing Linux namespaces.
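
For illustration, the pattern is similar to what you could do manually with 
Docker, where a second container joins the namespaces of an already-running 
one (the image names below are just examples):

$ docker run -d --name sandbox kubernetes/pause
$ docker run -d --name web --net=container:sandbox --ipc=container:sandbox nginx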

Second, we are working on making the sandbox optional [1]. After this feature is 
implemented (targeted for Pike), operators can configure Zun into one of two 
modes: "container-in-sandbox" and "standalone container". Each container driver 
will have a choice to support either mode or both. For clear 
containers, I assume they can be integrated with Zun via a clear container driver. 
Then, the driver can implement the "standalone" mode, in which there is only a 
bare clear container. An alternative is to implement the "container-in-sandbox" 
mode. In this scenario, the sandbox itself is a clear container as you 
mentioned. Inside the clear container, I guess there is a kernel that can be 
used to boot the user's container image(s) (like how hypercontainer is run as a pod 
[2]). However, I am not exactly sure if this scenario is possible.

Hope this answers your question.

[1] https://blueprints.launchpad.net/zun/+spec/make-sandbox-optional
[2] 
http://blog.kubernetes.io/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes.html

Best regards,
Hongbin

From: surya.prabha...@dell.com [mailto:surya.prabha...@dell.com]
Sent: July-11-17 7:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun] sandbox and clearcontainers

Dell - Internal Use - Confidential
Hi Folks,
I am just trying to wrap my head around zun's sandboxing and clear 
containers.   From what Hongbin told in Barcelona ( see the attached pic which 
I scrapped from his video)

[inline image: slide showing Zun's sandbox as the outer container with the user container nested inside]

the current implementation in Zun is that the sandbox is the outer container and the real 
user container is nested inside the sandbox. I am trying to figure out how 
this is going to play out 
when we have clear containers.

I envision the following scenarios:


1)  Scenario 1: where the sandbox itself is a clear container and user will 
nest another clear container inside the sandbox. This is like nested 
virtualization.

But I am not sure how this is going to work since the nested containers won't 
get VT-D cpu flags.

2)  Scenario 2: the outer sandbox is just going to be a standard docker 
container without vt-d and the inside container is going to be the real clear 
container with vt-d.  Now this

might work well but we might be losing the isolation features for the network 
and storage, which lie open in the sandbox. Won't this defeat the whole purpose 
of using clear containers?

I am just wondering what is the thought process for this design inside zun.  If 
this is trivial and if I am missing something please shed some light :).

Thanks
Surya ( spn )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] "--image-driver glance" doesn't seem to work @ master

2017-07-12 Thread Hongbin Lu
Hi Greg,

I created a bug to record the issue: 
https://bugs.launchpad.net/zun/+bug/1703955 . Due to this bug, Zun couldn’t 
find the docker image if the image was uploaded to glance under a different 
name. I think it will work if you can upload the image to glance with name 
“cirros”. For example:

$ docker pull cirros
$ docker save cirros | glance image-create --visibility public 
--container-format=docker --disk-format=raw --name cirros
$ zun run -i --name ctn-ping --image-driver glance cirros ping 8.8.8.8

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-12-17 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [zun] "--image-driver glance" doesn't seem to work @ 
master

Just tried this, this morning.
I can not launch a container when I specify to pull the container image from 
glance (instead of docker hub).
I get an error back from docker saying the “:latest” can not be 
found.
I tried renaming the glance image to “:latest” ... but that didn’t 
work either.


stack@devstack-zun:~/devstack$ glance image-list

+--------------------------------------+--------------------------+
| ID                                   | Name                     |
+--------------------------------------+--------------------------+
| 6483d319-69d8-4c58-b0fb-7338a1aff85f | cirros-0.3.5-x86_64-disk |
| 3055d450-d780-4699-bc7d-3b83f3391fe9 | gregos                   |  <-- it is of container format docker
| e8f3cab8-056c-4851-9f67-141dda91b9a2 | kubernetes/pause         |
+--------------------------------------+--------------------------+

stack@devstack-zun:~/devstack$ docker images

REPOSITORY         TAG      IMAGE ID       CREATED         SIZE
scratch            latest   019a481dc9ea   5 days ago      0B
kuryr/busybox      latest   a3bb6046b119   5 days ago      1.21MB
cirros             latest   f8ce316a37a7   18 months ago   7.74MB
kubernetes/pause   latest   f9d5de079539   2 years ago     240kB

stack@devstack-zun:~/devstack$ zun run --name ctn-ping --image-driver glance 
gregos ping 8.8.8.8

...

...
stack@devstack-zun:~/devstack$ zun show ctn-ping
+---------------+-----------------------------------------------------------------------+
| Property      | Value                                                                 |
+---------------+-----------------------------------------------------------------------+
| addresses     | 10.0.0.6, fdac:1365:7242:0:f816:3eff:fea4:fb65                        |
| links         | ["{u'href': u'http://10.10.10.17:9517/v1/containers/cb83a98c-776c-4ea8-83a7-ef3430f5e6d2', u'rel': u'self'}", "{u'href': u'http://10.10.10.17:9517/containers/cb83a98c-776c-4ea8-83a7-ef3430f5e6d2', u'rel': u'bookmark'}"] |
| image         | gregos                                                                |
| status        | Error                                                                 |
| status_reason | Docker internal error: 404 Client Error: Not Found ("No such image: gregos:latest"). |
+---------------+-----------------------------------------------------------------------+
stack@devstack-zun:~/devstack$



Am I doing something wrong ?

Greg.





FULL logs below


stack@devstack-zun:~/devstack$ source openrc admin demo

WARNING: setting legacy OS_TENANT_NAME to support cli tools.

stack@devstack-zun:~/devstack$ docker images

REPOSITORY         TAG      IMAGE ID       CREATED      SIZE
kuryr/busybox      latest   a3bb6046b119   5 days ago   1.21MB
scratch            latest   019a481dc9ea   5 days ago   0B
kubernetes/pause   latest

Re: [openstack-dev] [zun] "--nets network=..." usage question

2017-07-12 Thread Hongbin Lu
Hi Greg,

This parameter has just been added to the CLI and it hasn’t been fully 
implemented yet. Sorry for the confusion. Here is how I expect this parameter 
to work:

1. Create from neutron network name:
$ zun run --name ctn-ping --nets network=private …

2. Create from neutron network uuid:
$ zun run --name ctn-ping --nets network=c59455d9-c103-4c05-b28c-a1f5d041d804 …

3. Create from neutron port uuid/name:
$ zun run --name ctn-ping --nets port= …

4. Give me a network:
$ zun run --name ctn-ping --nets auto …

For now, please simply ignore this parameter. Zun will find a usable network 
under your tenant to boot the container.

Best regards,
Hongbin

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: July-12-17 1:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [zun] "--nets network=..." usage question

What is expected for the “--nets network=...” parameter on zun run or create ?
Is it the network name, the subnet name, the network uuid, the subnet uuid, ... 
I think I’ve tried them all and none work.

Full logs:

stack@devstack-zun:~/devstack$ neutron net-list

neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.

+--------------------------------------+---------+----------------------------------+----------------------------------------------------------+
| id                                   | name    | tenant_id                        | subnets                                                  |
+--------------------------------------+---------+----------------------------------+----------------------------------------------------------+
| c1731d77-c849-4b6b-b5e9-85030c8c6b52 | public  | dcea3cea809f40c1a53b85ec3522de36 | aec0bc66-fb6a-453b-93c7-d04537a6bb05 2001:db8::/64       |
|                                      |         |                                  | 8c881229-982e-417b-bbaa-e86d6192afa6 172.24.4.0/24       |
| c59455d9-c103-4c05-b28c-a1f5d041d804 | private | c8398b3154094049960e86b3caba1a4a | e12679b1-87e6-42cf-a2fe-e0f954dbd15f fdac:1365:7242::/64 |
|                                      |         |                                  | a1fc0a84-8cae-4193-8d33-711b612529b7 10.0.0.0/26         |
+--------------------------------------+---------+----------------------------------+----------------------------------------------------------+

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$ zun run --name ctn-ping --nets network=private 
cirros ping 8.8.8.8
...
stack@devstack-zun:~/devstack$ zun list
+--------------------------------------+----------+--------+--------+------------+-----------+-------+
| uuid                                 | name     | image  | status | task_state | addresses | ports |
+--------------------------------------+----------+--------+--------+------------+-----------+-------+
| 649724f6-2ccd-4b21-8684-8f6616228d86 | ctn-ping | cirros | Error  | None       |           | []    |
+--------------------------------------+----------+--------+--------+------------+-----------+-------+
stack@devstack-zun:~/devstack$ zun show ctn-ping | fgrep reason
| status_reason | Docker internal error: 404 Client Error: Not Found ("network private not found"). |
stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$ zun delete ctn-ping

Request to delete container ctn-ping has been accepted.

stack@devstack-zun:~/devstack$

stack@devstack-zun:~/devstack$ zun run --name ctn-ping --nets 
network=c59455d9-c103-4c05-b28c-a1f5d041d804 cirros ping 8.8.8.8

...
stack@devstack-zun:~/devstack$ zun list
+--------------------------------------+----------+--------+--------+------------+-----------+-------+
| uuid                                 | name     | image  | status | task_state | addresses | ports |
+--------------------------------------+----------+--------+--------+------------+-----------+-------+
| 6093bdc2-d288-4ea9-a98b-3ca055318c9e | ctn-ping | cirros | Error  | None       |           | []    |
+--------------------------------------+----------+--------+--------+------------+-----------+-------+
stack@devstack-zun:~/devstack$ zun show ctn-ping | fgrep reason
| status_reason | Docker internal error: 404 Client Error: Not Found ("network c59455d9-c103-4c05-b28c-a1f5d041d804 not found"). |
stack@devstack-zun:~/devstack$



Any ideas ?

Greg.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/ope

Re: [openstack-dev] [zun][api version]Does anyone know the idea of default api version in the versioned API?

2017-07-26 Thread Hongbin Lu
Hi all,

Here is a bit of context. Zun has introduced API microversions in the 
server [1] and the client [2]. The microversion needs to be bumped on the server 
side [3] whenever a backward-incompatible change is made. On the client side, we 
currently hard-code the default version. The client will pick the default 
version unless a version is explicitly specified.

As far as I know, the openstack community doesn’t have consensus on the 
specification of the default API version. Some projects picked a stable version 
as default, and other projects picked the latest version. How to bump the 
default version is also controversial. If the default version is hard-coded, it 
might need to be bumped every time a change is made. Alternatively, there are 
some workarounds to avoid hard-coding the default version. Each approach has pros 
and cons.

For Zun, I think the following options are available (refer to this spec [4] if 
you are interested in more details):
1. Negotiate the default version between client and server, and pick the 
maximum version that both client and server are supporting.
2. Hard-code the default version and bump it manually or periodically (how to 
bump it periodically?)
3. Hard-code the default version and keep it unchanged.
4. Pick the latest version as default.

Thoughts on this?
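
As a concrete illustration, option 1 is just a range intersection; a rough 
sketch (the function and variable names are made up, and the server's min/max 
would come from its version discovery document):

def negotiate_version(server_min, server_max, client_min, client_max):
    """Pick the highest microversion supported by both sides (option 1)."""
    def as_tuple(version):
        # "1.12" -> (1, 12)
        major, minor = version.split('.')
        return int(major), int(minor)

    highest = min(as_tuple(server_max), as_tuple(client_max))
    lowest = max(as_tuple(server_min), as_tuple(client_min))
    if highest < lowest:
        raise ValueError('no microversion supported by both client and server')
    return '%d.%d' % highest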

[1] https://blueprints.launchpad.net/zun/+spec/api-microversion
[2] https://blueprints.launchpad.net/zun/+spec/api-microversion-cli
[3] 
https://docs.openstack.org/zun/latest/contributor/api-microversion.html#when-do-i-need-a-new-microversion
[4] 
https://specs.openstack.org/openstack/ironic-specs/specs/approved/cli-default-api-version.html

Best regards,
Hongbin

From: Shunli Zhou [mailto:shunli6...@gmail.com]
Sent: July-25-17 9:29 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun][api version]Does anyone know the idea of default 
api version in the versioned API?

Does anyone know the idea behind the default api version in a versioned api?
I'm not sure if we should bump the default api version every time the api 
version is bumped. Could anyone explain the policy of how to bump the default api 
version?

Thanks.
B.R.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun][unit test] Could anyone help one the unittest fail?

2017-07-27 Thread Hongbin Lu
Hi Shunli,

Sorry for the late reply. I saw you uploaded a revision of the patch and got 
the gate to pass. I guess you have resolved this issue?

Best regards,
Hongbin

From: Shunli Zhou [mailto:shunli6...@gmail.com]
Sent: July-25-17 10:20 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun][unit test] Could anyone help one the unittest 
fail?

Could anyone help on the unittest fail about the pecan api, refer to 
http://logs.openstack.org/31/486931/1/check/gate-zun-python27-ubuntu-xenial/c329b47/console.html#_2017-07-25_08_13_05_180414

I have two apis, added in two patches. The first is 
HostController:get_all, which lists all the zun hosts. The second is 
HostController:get_one. The get_all version is restricted to 1.4 and the get_one 
version is restricted to 1.5.

I don't know why pecan calls get_one when testing get_all. I debugged the 
code: pecan first calls get_all with version 1.4, and everything is ok, but 
after that pecan will also route the request to get_one, which requires 
version 1.5. And then the test fails. The code works fine in devstack.

Could anyone help me understand why the test failed and what's wrong with the test code?


Thanks.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Queens PTL candidacy

2017-08-01 Thread Hongbin Lu
Hi all,

I nominated myself to be a candidate of Zun PTL for Queens. As the founder of 
this
project, it is my honor to work with all of you to build an innovative
OpenStack container service.

OpenStack provides a full-featured data center management solution which
includes multi-tenant security, networking, storage, management and monitoring,
and more. All these services are needed regardless of whether containers,
virtual machines, or baremetal servers are being used [1]. In this context,
Zun's role is to bring prevailing container technologies to OpenStack and
enable the reuse of existing infrastructure services for containers.
Eventually, different container technologies should be easily accessible by
cloud consumers, which is a goal Zun is contributing to.

Since April 2016, when the project was founded, the Zun team has been
working hard to achieve its mission. We managed to deliver most of the
important features, including:
* A full-featured container API.
* A docker driver that serves as reference implementation.
* Neutron integration via Kuryr-libnetwork.
* Two image drivers: Docker Registry (i.e. Docker Hub) and Glance.
* Multi-tenancy: Containers are isolated by Keystone projects.
* Horizon integration.
* OpenStack Client integration.
* Heat integration.

Looking ahead to Queens, I would suggest the Zun team focus on the
following:
* NFV: Containerized NFV workloads are emerging and we want to adapt to this trend.
* Containers-on-VMs: Provide an option to auto-provision VMs for containers.
  This is for use cases that containers need to be strongly isolated by VMs.
* Cinder integration: Leverage Cinder for providing data volume for containers.
* Alternative container runtime: Introduce a second container runtime as a
  Docker alternative.
* Capsule API: Pack multiple containers into a managed unit.

Beyond Pike, I expect Zun to move in the following directions:
* Kubernetes: Kubernetes is probably the most popular container orchestration
  tool, but there are still some gaps that prevent Kubernetes from working well with
  OpenStack. I think Zun might be able to help to reduce the gaps. We could
  explore integration options for Kubernetes to make OpenStack more appealing
  for cloud-native users.
* Placement API: Nova team is working to split its scheduler out and Zun would
  like to leverage this new service if appropriate.

[1] https://www.openstack.org/assets/pdf-downloads/Containers-and-OpenStack.pdf

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Propose change of the core team

2017-09-14 Thread Hongbin Lu
Hi all,

I propose the following change of the Zun core reviewer team.

+ Kien Nguyen (kiennt2609)
- Aditi Sharma (adi-sky17)

Kien has been contributing to the Zun project for a few months. His 
contributions include proposing high-quality code, providing helpful code 
reviews, participating in team discussions at the weekly team meeting and on IRC, etc. He 
is the one who set up the multi-node job in the CI, and the job is up and running 
now. I think his contribution is significant, which qualifies him to be a core 
reviewer. Aditi is a member of the initial core team but has been inactive for a 
while.

Core reviewers, please cast your vote on this proposal.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun][unit test] Any python utils can collect pci info?

2017-09-17 Thread Hongbin Lu
Hi Shunli,

I am not aware of any prevailing python utils for this. An alternative is to 
shell out to Linux commands to collect the information. After a quick search, it 
looks like xenapi [1] uses “lspci -vmmnk” to collect detailed PCI device info and “ls 
/sys/bus/pci/devices/<address>/” to detect the PCI device type (PF or VF). 
FWIW, you might find it helpful to refer to the implementation of Nova’s xenapi 
driver for getting PCI resources [2]. Hope it helps.

[1] 
https://github.com/openstack/os-xenapi/blob/master/os_xenapi/dom0/etc/xapi.d/plugins/xenhost.py#L593
[2] https://github.com/openstack/nova/blob/master/nova/virt/xenapi/host.py#L154
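
As a rough sketch of that sysfs-based approach in Python (assuming the standard 
Linux sysfs layout; vendor/product ids and capabilities would still need lspci 
or a parse of the device's config space):

import glob
import os

def pci_addresses():
    """All PCI addresses known to sysfs, e.g. '0000:00:1f.2'."""
    return [os.path.basename(path) for path in glob.glob('/sys/bus/pci/devices/*')]

def pci_device_type(addr):
    """Classify a PCI device as 'VF', 'PF' or 'standard' from its sysfs entries."""
    dev = '/sys/bus/pci/devices/%s' % addr
    if os.path.exists(os.path.join(dev, 'physfn')):
        return 'VF'    # a virtual function links back to its physical function
    if glob.glob(os.path.join(dev, 'virtfn*')):
        return 'PF'    # a physical function with SR-IOV VFs enabled
    return 'standard'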

Best regards,
Hongbin

From: Shunli Zhou [mailto:shunli6...@gmail.com]
Sent: September-17-17 9:35 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [zun][unit test] Any python utils can collect pci info?

Hi all,

For this BP, https://blueprints.launchpad.net/zun/+spec/support-pcipassthroughfilter , 
Nova uses libvirt to collect the PCI device info. But for zun, 
libvirt seems to be a heavy dependency. Is there a python util that can be used 
to collect detailed PCI device info? Such as whether a network 
pci device is a PF or a VF, the device capabilities, etc.

Note: with 'lspci -D -nnmm', some of this info cannot be obtained.


Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [k8s][deployment][kolla-kubernetes][magnum][kuryr][zun][qa][api] Proposal for SIG-K8s

2017-09-18 Thread Hongbin Lu
Hi Chris,

Sorry I missed the meeting since I was not at the PTG last week. After quickly 
researching the mission of SIG-K8s, I think we (the OpenStack Zun team) have an 
item that fits well into this SIG, which is the k8s connector feature:

  https://blueprints.launchpad.net/zun/+spec/zun-connector-for-k8s

I added it to the etherpad and hope it will be well accepted by the SIG.

Best regards,
Hongbin

From: Chris Hoge [mailto:ch...@openstack.org]
Sent: September-15-17 12:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] 
[k8s][deployment][kolla-kubernetes][magnum][kuryr][qa][api] Proposal for SIG-K8s

Link to the etherpad for the upcoming meeting.

https://etherpad.openstack.org/p/queens-ptg-sig-k8s


On Sep 14, 2017, at 10:23 AM, Chris Hoge 
mailto:ch...@openstack.org>> wrote:

This Friday, September 15 at the PTG we will be hosting an organizational
meeting for SIG-K8s. More information on the proposal, meeting time, and
remote attendance is in the openstack-sigs mailing list [1].

Thanks,
Chris Hoge
Interop Engineer
OpenStack Foundation

[1] 
http://lists.openstack.org/pipermail/openstack-sigs/2017-September/51.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun][unit test] Any python utils can collect pci info?

2017-09-20 Thread Hongbin Lu
Hi Eric,

Thanks for pointing this out. This BP 
(https://blueprints.launchpad.net/zun/+spec/use-privsep) was created to track 
the introduction of privsep.

Best regards,
Hongbin

> -Original Message-
> From: Eric Fried [mailto:openst...@fried.cc]
> Sent: September-18-17 10:51 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [zun][unit test] Any python utils can
> collect pci info?
> 
> You may get a little help from the methods in nova.pci.utils.
> 
> If you're calling out to lspci or accessing sysfs, be aware of this
> series [1] and do it via the new privsep mechanisms.
> 
> [1]
> https://review.openstack.org/#/q/status:open+project:openstack/nova+bra
> nch:master+topic:hurrah-for-privsep
> 
> On 09/17/2017 09:41 PM, Hongbin Lu wrote:
> > Hi Shunli,
> >
> >
> >
> > I am not aware of any prevailing python utils for this. An
> alternative
> > is to shell out Linux commands to collect the information. After a
> > quick search, it looks xenapi [1] uses “lspci -vmmnk” to collect PCI
> > device detail info and “ls /sys/bus/pci/devices//” to
> > detect the PCI device type (PF or VF). FWIW, you might find it
> helpful
> > to refer the implementation of Nova’s xenapi driver for gettiing PCI
> resources [2].
> > Hope it helps.
> >
> >
> >
> > [1]
> > https://github.com/openstack/os-
> xenapi/blob/master/os_xenapi/dom0/etc/
> > xapi.d/plugins/xenhost.py#L593
> >
> > [2]
> >
> https://github.com/openstack/nova/blob/master/nova/virt/xenapi/host.py
> > #L154
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> > *From:*Shunli Zhou [mailto:shunli6...@gmail.com]
> > *Sent:* September-17-17 9:35 PM
> > *To:* openstack-dev@lists.openstack.org
> > *Subject:* [openstack-dev] [zun][unit test] Any python utils can
> > collect pci info?
> >
> >
> >
> > Hi all,
> >
> >
> >
> > For
> > https://blueprints.launchpad.net/zun/+spec/support-
> pcipassthroughfilte
> > r this BP, Nova use the libvirt to collect the PCI device info. But
> > for zun, libvirt seems is a heavy dependecies. Is there a python
> utils
> > that can be used to collect the PCI device detail info? Such as the
> > whether it's a PF of network pci device of VF, the device
> > capabilities, etc.
> >
> >
> >
> > Note: For 'lspci -D -nnmm' , there are some info can not get.
> >
> >
> >
> >
> >
> > Thanks
> >
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Unable to add member to Zun core team

2017-09-20 Thread Hongbin Lu
Hi Infra team,

I tried to add Kien Nguyen kie...@vn.fujitsu.com 
to the Zun core team [1], but gerrit prevented me from doing that. The attached file 
shows the error. Could anyone provide a suggestion for this?

Best regards,
Hongbin

[1] https://review.openstack.org/#/admin/groups/1382,members
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] vPTG schedule

2017-10-01 Thread Hongbin Lu
Hi Toni,

The times of a few proposed sessions look inconsistent with the etherpad.
Could you double check?

On Thu, Sep 28, 2017 at 5:48 AM, Antoni Segura Puimedon 
wrote:

> Hi fellow Kuryrs!
>
> It's that time of the cycle again where we hold our virtual project team
> gathering[0]. The dates this time are:
>
> October 2nd, 3rd and 4th
>
> The proposed sessions are:
>
> October 2nd 13:00utc: Scale discussion
> In this session we'll talk about the recent scale testing we have
> performed
> in a 112 node cluster. From this starting point. We'll work on
> identifying
> and prioritizing several initiatives to improve the performance of the
> pod-in-VM and the baremetal scenarios.
>
> October 2nd 14:00utc: Scenario testing
> The September 27th's release of zuulv3 opens the gates for better
> scenario
> testing, specially regarding multinode scenarios. We'll discuss the
> tasks
> and outstanding challenges to achieve good scenario testing coverage
> and
> document well how to write these tests in our tempest plugin.
>
> October 3rd 13:00utc: Multi networks
> As the Kubernetes community Network SIG draws near to having a
> consensus on
> multi network implementations, we must elaborate a plan on a PoC that
> takes
> the upstream Kubernetes consensus and implements it with
> Kuryr-Kubernetes
> in a way that we can serve normal overlay and accelerated networking.
>
> October 4th 14:00utc: Network Policy
> Each cycle we aim to narrow the gap between Kubernetes networking
> entities
> and our translations. In this cycle, apart from the Loadbalancer
> service
> type support, we'll be tackling how we map Network Policy to Neutron
> networking. This session will first lay out Network Policy and its use
> and
> then discuss about one or more mappings.
>
> October 5th 13:00utc: Kuryr-libnetwork
>
This session is Oct 4th in the etherpad.

> We'll do the cycle planing for Kuryr-libnetwork. Blueprints and bugs
> and
> general discussion.
>
> October 6th 14:00utc: Fuxi
>
This session is Oct 4th in the etherpad.

> In this session we'll discuss everything related to storage, both in
> the
> Docker and in the Kubernetes worlds.
>
>
> I'll put the links to the bluejeans sessions in the etherpad[0].
>
>
> [0] https://etherpad.openstack.org/p/kuryr-queens-vPTG
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zun][neutron] About Notifier

2017-10-01 Thread Hongbin Lu
Hi Neutron team,

I saw neutron has a Nova notifier [1] that is able to notify Nova via REST API 
when a certain set of events happen. I think Zun would like to be notified in the 
same way Nova is. For example, we would like to receive a notification whenever a port 
assigned to a container has been associated with a floating IP. If I propose a 
Zun notifier (preferably out-of-tree) for that, will you accept the patch? Or 
does anyone have an alternative suggestion to satisfy our use case?

[1] https://github.com/openstack/neutron/blob/master/neutron/notifiers/nova.py

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova docker replaced by zun?

2017-10-03 Thread Hongbin Lu


> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: October-03-17 5:44 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Nova docker replaced by zun?
> 
> On 09/29/2017 10:48 AM, ADAMS, STEVEN E wrote:
> > Can anyone point me to some background on why nova docker was
> > discontinued and how zun is the heir?
> >
> > Thx,
> >
> > Steve Adams
> >
> > AT&T
> >
> > https://github.com/openstack/nova-docker/blob/master/README.rst
> 
> The nova-docker driver discontinued because it was not maintained. In
> the entire OpenStack community we could not find a second person to
> help with the maintenance of it (it was only Dims doing any needed
> fixes).
> This was even though the driver was known to be running in multiple
> production clouds.
> 
> The project was shut down for that reason so that no one would
> mistakenly assume there was any maintenance or support on it. If you or
> others want to revive the project, that would be fine, as long as we
> can identify 2 individuals who will step up as maintainers.
> 
>   -Sean

[Hongbin Lu] A possibility is to revive nova-docker and engineer it as a thin 
layer on top of Zun. Zun has implemented several important functionalities, such as 
container lifecycle management, container networking with neutron, 
bind-mounting cinder volumes, etc. If nova-docker is engineered as a proxy to 
Zun, the burden of maintenance would be significantly reduced. I believe the Zun team 
would be happy to help get the virt driver working well with Nova.

Best regards,
Hongbin

> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [zun] Client long description on PyPi

2017-10-11 Thread Hongbin Lu
Hi Zigo,

According to https://github.com/pypa/warehouse/issues/2170 , it is impossible 
to update the description manually. I will release a new version of 
python-zunclient to get the description updated.

Best regards,
Hongbin 

> -Original Message-
> From: Thomas Goirand [mailto:z...@debian.org]
> Sent: October-11-17 4:50 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [zun] Client long description on PyPi
> 
> Hi,
> 
> Could someone write something relevant, instead of the current
> placeholder? See here:
> 
> https://pypi.python.org/pypi/python-zunclient
> 
> and see that "this is a hard requirement".
> 
> Cheers,
> 
> Thomas Goirand (zigo)
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Change in Zun core team

2017-11-21 Thread Hongbin Lu
Hi all,

I would like to announce the following change to the Zun core reviewers
team.

+ miaohb (miao-hongbao)
- Sheel Rana (ranasheel2000)

Miaohb has been consistently contributing to Zun for a few months. So far,
he has 60 commits in Zun, which ranks in the top 3 in the commit metric [1]. I
think his hard work justifies his qualification as a core reviewer in Zun.

This change was approved unanimously by the existing core team. Below are
the core team members who supported this change:

Hongbin Lu
Shunli Zhou
Kien Nguyen
Kevin Zhao
Madhuri Kumari
Namrata Sitlani
Shubham Sharma

Best regards,
Hongbin

[1] http://stackalytics.com/?metric=commits&module=zun-group
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Hongbin Lu
For the record, if the magnum team is not interested in maintaining the CoreOS driver,
it is an indication that this driver should be split out and maintained
by another team. CoreOS is one of the prevailing container OSes. I believe
there will be a lot of interest after the split.

Disclaimer: I am an author of the CoreOS driver

Best regards,
Hongbin

On Wed, Nov 22, 2017 at 3:29 AM, Spyros Trigazis  wrote:

> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña 
> wrote:
> > I'm using Openstack Ocata and trying Magnum.
> >
> > I encountered a lot of problems but I been able to solved many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve
> them
> for everyone else?
>
> >
> > Now I'm curious about some aspects of Magnum:
> >
> > ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> > create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27 kubernetes etcd and flannel are removed from the
> base image so running them in containers is the only way.
>
> For RBAC, you need 1.8 and with Pike you can get it. just by changing
> one parameter.
>
> >
> > ¿Any one here using Magnum on daily basis? If yes, What version are you
> > using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are
> running
> Pike and we use only the fedora atomic drivers.
> http://openstack-in-production.blogspot.ch/2017/
> 01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
> >
> > ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> > upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
> >
> > ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option
> and can
> keep up with kubernetes easily.
>
> >
> > ¿Where I can found updated articles about the state of Magnum and it's
> > future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Chees,
> Spyros
>
> >
> > Cheers
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] PTL on vacation for 3 weeks

2017-12-08 Thread Hongbin Lu
Hi team,

I will be on vacation during Dec 11 - Jan 2. Madhuri Kumari (cc-ed) kindly
agreed to serve in the PTL role while I am away. I wish everyone a happy
holiday.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Containers in privileged mode

2018-01-02 Thread Hongbin Lu
Hi Joao,

Right now, it is impossible to create containers with escalated privileges,
such as setting privileged mode or adding additional caps. This is
intentional for security reasons. Basically, what Zun currently provides is
"serverless" containers, which means Zun is not using VMs to isolate
containers (people who want isolation as strong as VMs can choose a
secure container runtime such as Clear Containers). Therefore, it is
insecure to give users control of any kind of privilege escalation.
However, if you want this feature, I would love to learn more about the use
cases.

Best regards,
Hongbin

On Tue, Jan 2, 2018 at 10:20 AM, João Paulo Sá da Silva <
joao-sa-si...@alticelabs.com> wrote:

> Hello!
>
> Is it possible to create containers in privileged mode or to add caps as
> NET_ADMIN?
>
>
>
> Kind regards,
>
> João
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Containers in privileged mode

2018-01-02 Thread Hongbin Lu
Please find my reply inline.

Best regards,
Hongbin

On Tue, Jan 2, 2018 at 2:06 PM, João Paulo Sá da Silva <
joao-sa-si...@alticelabs.com> wrote:

> Thanks for your answer, Hongbin, it is very appreciated.
>
>
>
> The use case is to use Virtualized Network Functions in containers instead
> of virtual machines. The rationale for using containers instead of VMs is
> better VNF density on resource-constrained hosts.
>
> The goal is to have several VNFs (DHCP, FW, etc) running on a severely
> resource-constrained Openstack compute node.  But without the NET_ADMIN cap I
> can’t even start dnsmasq.
>
Makes sense. Would you help write a blueprint for this feature:
https://blueprints.launchpad.net/zun ? We use blueprints to track all
requested features.


>
>
> Is it possible to use clear container with zun/openstack?
>
Yes, it is possible. We are adding documentation about that:
https://review.openstack.org/#/c/527611/ .

>
>
> From checking gerrit it seems that this point was already addressed and
> dropped? Regarding the security concerns I disagree; if users choose to
> allow such a situation they should be allowed.
>
> It is the user's responsibility to recognize the dangers and act
> accordingly.
>
>
>
> In Neutron you can go as far as fully disabling  port security, this was
> implemented again with VNFs in mind.
>
Makes sense as well. IMHO, we should disallow privilege escalation by
default, but I am open to introducing a configurable option to allow it. I
can see this is necessary for some use cases. Cloud administrators should
be reminded of the security implications of doing that.
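
If we go down that road, I would imagine a deployment-level toggle in zun.conf,
something along these lines (the section and option name below are purely
hypothetical; nothing like this exists today):

[compute]
# Hypothetical option: allow users to request privileged containers or
# extra capabilities such as NET_ADMIN. Disabled by default for the
# security reasons discussed above.
allow_privileged_containers = false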


>
>
> Kind regards,
>
> João
>
>
>
>
>
> >Hi Joao,
>
> >
>
> >Right now, it is impossible to create containers with escalated
> privileged,
>
> >such as setting privileged mode or adding additional caps. This is
>
> >intentional for security reasons. Basically, what Zun currently provides
> is
>
> >"serverless" containers, which means Zun is not using VMs to isolate
>
> >containers (for people who wanted strong isolation as VMs, they can choose
>
> >secure container runtime such as Clear Container). Therefore, it is
>
> >insecure to give users control of any kind of privilege escalation.
>
> >However, if you want this feature, I would love to learn more about the
> use
>
> >cases.
>
> >
>
> >Best regards,
>
> >Hongbin
>
> >
>
> >On Tue, Jan 2, 2018 at 10:20 AM, João Paulo Sá da Silva <
>
> >joao-sa-silva at alticelabs.com> wrote:
>
> >
>
> >> Hello!
>
> >>
>
> >> Is it possible to create containers in privileged mode or to add caps as
>
> >> NET_ADMIN?
>
> >>
>
> >>
>
> >>
>
> >> Kind regards,
>
> >>
>
> >> João
>
> >>
>
> >>
>
> >>
>
> >> 
> __
>
> >> OpenStack Development Mailing List (not for usage questions)
>
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:
> unsubscribe
>
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> >>
>
> >>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Containers in privileged mode

2018-01-03 Thread Hongbin Lu
On Wed, Jan 3, 2018 at 10:41 AM, João Paulo Sá da Silva <
joao-sa-si...@alticelabs.com> wrote:

> Hello,
>
>
>
> I created the BP: https://blueprints.launchpad.
> net/zun/+spec/add-capacities-to-containers .
>
Thanks for creating the BP.


>
>
> About the clear containers, I’m not quite sure how using them solves my
> capabilities situation. Can you elaborate on that?
>
What I was trying to say is that Zun offers a choice of container runtime:
runc or clear container. I am not sure how clear containers deal with
capabilities and privilege escalation. I will leave this question to others.


>
>
> Will zun ever be able to launch LXD containers?
>
Not for now. Only Docker is supported.


>
>
> Kind regards,
>
> João
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url

2018-01-19 Thread Hongbin Lu
I remember there were several discussions about action APIs in the past. This 
is one discussion I can find: 
http://lists.openstack.org/pipermail/openstack-dev/2016-December/109136.html . 
An obvious alternative is to expose each action as an independent API 
endpoint. For example:

* POST /servers/{server_id}/start:   Start a server
* POST /servers/{server_id}/stop:    Stop a server
* POST /servers/{server_id}/reboot:  Reboot a server
* POST /servers/{server_id}/pause:   Pause a server

Several people pointed out the pros and cons of each approach and other 
alternatives [1] [2] [3]. Eventually, we (the OpenStack Zun team) adopted the 
alternative approach [4] above and it works very well from my perspective. 
However, I understand that there is no consensus on this approach within the 
OpenStack community.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109178.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109208.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-December/109248.html
[4] 
https://developer.openstack.org/api-ref/application-container/#manage-containers
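
To make the contrast concrete, here is a rough client-side sketch of the two
styles (using python-requests; the URLs, ports, token and IDs below are
placeholders rather than real values):

# Rough sketch of the two API styles from a client's point of view.
# NOVA_URL, ZUN_URL, TOKEN and the IDs are placeholders.
import requests

NOVA_URL = "http://controller:8774/v2.1"
ZUN_URL = "http://controller:9517/v1"
TOKEN = "..."              # placeholder auth token
headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}
server_id = "9f1b..."      # placeholder server UUID
container_id = "3c8a..."   # placeholder container UUID

# Style 1: one generic action endpoint; the action name lives only in the
# body, so a gateway that routes on the URL cannot tell these apart.
requests.post("%s/servers/%s/action" % (NOVA_URL, server_id),
              json={"os-start": None}, headers=headers)
requests.post("%s/servers/%s/action" % (NOVA_URL, server_id),
              json={"reboot": {"type": "SOFT"}}, headers=headers)

# Style 2: one endpoint per action (the approach Zun adopted); the URL alone
# identifies the action, so a gateway can register and rate-limit each one.
requests.post("%s/containers/%s/start" % (ZUN_URL, container_id),
              headers=headers)
requests.post("%s/containers/%s/reboot" % (ZUN_URL, container_id),
              headers=headers)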

Best regards,
Hongbin

From: TommyLike Hu [mailto:tommylik...@gmail.com]
Sent: January-18-18 5:07 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action 
name in request url

Hey all,
   Recently we found an issue related to the OpenStack action APIs. We usually
expose our OpenStack APIs by registering them in an API gateway (for instance
Kong [1]), but this becomes very difficult for action APIs. We cannot register
and control them separately, because they all share the same request URL,
which is used as the identity in the gateway service, not to mention rate
limiting and other advanced gateway features. Take a look at the basic
resources in OpenStack:

   1. Server: "/servers/{server_id}/action"  35+ APIs are included.
   2. Volume: "/volumes/{volume_id}/action"  14 APIs are included.
   3. Other resources

We have tried to register different interfaces with the same upstream URL,
such as:

   api gateway: /version/resource_one/action/action1 => upstream: 
/version/resource_one/action
   api gateway: /version/resource_one/action/action2 => upstream: 
/version/resource_one/action

But it's not secure enough, because we can still pass action2 in the request
body while invoking /action/action1. Also, reading the full request body for
routing is not supported by most API gateways (except perhaps via plugins) and
would have a performance impact when proxying. So my question is: do we have
any solution or suggestion for this case? Could we support specifying the
action name both in the request body and in the URL, such as:

URL:/volumes/{volume_id}/action
BODY:{'extend':{}}

and:

URL:/volumes/{volume_id}/action/extend
BODY: {'extend':{}}

Thanks
Tommy

[1]: https://github.com/Kong/kong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr][libnetwork] Release kuryr-libnetwork 1.x for Queens

2018-01-19 Thread Hongbin Lu
Hi Kuryr team,

I think Kuryr-libnetwork is ready to move out of beta status. I propose to
make the first 1.x release of Kuryr-libnetwork for Queens and cut a stable
branch on it. What do you think about this proposal?

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Extend instance IP filter for floating IP

2018-01-24 Thread Hongbin Lu
Hi all,

Nova currently allows us to filter instances by fixed IP address(es). This
feature is known to be useful in an operational scenario where cloud
administrators detect abnormal traffic from an IP address and want to trace it
down to the instance that the IP address belongs to. The feature works well
except for one limitation: it only supports fixed IP addresses. In real
operational scenarios, cloud administrators might find that the abused IP
address is a floating IP and want to do the filtering in the same way as for a
fixed IP.

Right now, unfortunately, the experience diverges between these two classes of
IP address. Cloud administrators need to implement the logic themselves: (i)
detect the class of IP address (fixed or floating), (ii) use Nova's IP filter
if the address is a fixed IP, and (iii) do the filtering manually if the
address is a floating IP. I wonder if the Nova team is willing to accept an
enhancement that makes the IP filter support both. Ideally, cloud
administrators could simply pass the abused IP address to Nova and Nova would
handle the heterogeneity.

In terms of implementation, I expect the change to be small. After this patch
[1], Nova queries Neutron to compile a list of ports' device_ids (device_id is
equal to the UUID of the instance to which the port is bound) and uses the
device_ids to query the instances. If Neutron returns an empty list, Nova can
make a second query to Neutron for floating IPs. There are an RFE [2] and a PoC
[3] proposing to add a device_id attribute to the floating IP API resource.
Nova can leverage this attribute to compile a list of instance UUIDs and use it
as a filter when listing instances.
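
As a rough illustration of that flow, here is a client-side sketch using
openstacksdk (Nova's internal code would talk to Neutron through its own
client, but the sequence of queries would be similar; the cloud name is a
placeholder):

# Client-side sketch of the proposed lookup flow, using openstacksdk.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

def servers_for_ip(ip_address):
    # Step 1: treat the address as a fixed IP and find the ports bound to it.
    ports = list(conn.network.ports(fixed_ips="ip_address=%s" % ip_address))

    # Step 2: if nothing matched, fall back to treating it as a floating IP
    # and resolve the port it is associated with.
    if not ports:
        fips = conn.network.ips(floating_ip_address=ip_address)
        ports = [conn.network.get_port(fip.port_id)
                 for fip in fips if fip.port_id]

    # Step 3: the ports' device_ids are the instance UUIDs to filter on.
    device_ids = {port.device_id for port in ports if port.device_id}
    return [conn.compute.get_server(device_id) for device_id in device_ids]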

If this feature is implemented, will it benefit the general community? Finally,
I also wonder how others are tackling similar problems. I would appreciate your
feedback.

[1] https://review.openstack.org/#/c/525505/
[2] https://bugs.launchpad.net/neutron/+bug/1723026
[3] https://review.openstack.org/#/c/534882/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins for all)

2016-07-15 Thread Hongbin Lu
No, Magnum still uses Barbican as an optional dependency, and I believe nobody
has proposed to remove Barbican entirely. I have no position on the big tent,
but using Magnum as an example of "projects not working together" is
inappropriate.

Best regards,
Hongbin

> -Original Message-
> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> Sent: July-15-16 2:37 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins
> for all)
> 
> Some specific things:
> 
> Magnum trying to not use Barbican as it adds an additional dependency.
> See the discussion on the devel mailing list for details.
> 
> Horizon discussions at the summit around wanting to use Zaqar for
> dynamic ui updates instead of polling, but couldn't depend on a non
> widely deployed subsystem.
> 
> Each Advanced OpenStack Service implements a guest-to-controller
> communication channel, and these channels are incompatible with each other
> and work around communications issues differently. This creates a lot more
> pain for Ops to debug or architect a viable solution. For example:
>  * Sahara uses ssh from the controllers to the vms. This does not play
> well with tenant networks. They have tried to work around this several
> ways:
> * require every vm to have a floating ip. (Unnecessary attack
> surface)
> * require the controller to be on the one and only network node
> (Uses ip netns exec to tunnel. Doesn't work for large clouds)
> * Double tunnel ssh via the controller vm's, so some vms have fips,
> some don't. Better than all, but still not good.
>   * Trove uses Rabbit for the guest agent to talk back to the
> controllers. This has traffic going the right direction to work well
> with tenant networks.
> * But Rabbit is not multitenant so a security risk if any user can
> get into any one of the database vm's.
> Really, I believe the right solution is to use a multitenant aware
> message queue so that the guest agent can pull in the right direction
> for tenant networks, and not have the security risk. We have such a
> system already, Zaqar, but its not widely deployed and projects don't
> want to depend on other projects that aren't widely deployed.
> 
> The lack of Instance Users has caused lots of projects to try to work
> around it. I know for sure that Magnum, Heat, and Trove work
> around the lack. I'm positive others have too. As an operator, it makes
> me have to very carefully consider all the tradeoffs each project made,
> and decide if I can accept the same risk they assumed. Since each is
> different, that's much harder.
> 
> I'm sure there are more examples, but I hope you get that I'm not just
> trying to troll.
> 
> The TC did apply inconsistent rules on letting projects in. That was
> for sure a negative before the big tent. But it also provided a way to
> apply pressure to projects to fix some of the issues that multiple
> projects face, and that plague user/operators and raise the whole
> community up, and that has fallen to the wayside since. Which is a big
> negative now. Maybe that could be bolted on top of the Big Tent I don't
> know.
> 
> I could write a very long description about the state of being an
> OpenStack App developer too that touches on all the problems with
> getting a consistent target and all the cross project communication
> issues thereof. But that's probably for some other time.
> 
> Thanks,
> Kevin
> 
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Friday, July 15, 2016 8:17 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc][all] Big tent? (Related to Plugins
> for all)
> 
> Kevin, can you please be *specific* about your complaints below? Saying
> things like "less project communication" and "projects not working
> together because of fear of adding dependencies" and "worse user
> experience" are your personal opinions. Please back those opinions up
> with specific examples of what you are talking about so that we may
> address specific things and not vague ideas.
> 
> Also, the overall goal of the Big Tent, as I've said repeatedly and
> people keep willfully ignoring, was *not* to "make the community more
> inclusive". It was to replace the inconsistently-applied-by-the-TC
> *subjective* criteria for project applications to OpenStack with an
> *objective* list of application requirements that could be
> *consistently* reviewed by the TC.
> 
> Thanks,
> -jay
> 
> On 07/14/2016 01:30 PM, Fox, Kevin M wrote:
> > I'm going to go ahead and ask the difficult question now as the
> > answer is relevant to the attached proposal...
> >
> > Should we reconsider whether the big tent is the right approach going
> > forward?
> >
> > There have been some major downsides I think to the Big Tent approach,
> > such as:
> >   * Projects not working together because of fear of adding extra
> > dependencies Ops don't already have
> >   * Reimplementing features, badly

[openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core reviewer team

2016-07-22 Thread Hongbin Lu
Hi all,

Spyros has consistently contributed to Magnum for a while. In my opinion, what
differentiates him from others is the significance of his contributions, which
add concrete value to the project. For example, the operator-oriented install
guide he delivered attracts a significant number of users to install Magnum,
which facilitates the adoption of the project. I would like to emphasize that
the Magnum team has been working hard but struggling to increase adoption, and
Spyros's contribution means a lot in this regard. He has also completed several
essential and challenging tasks, such as adding support for OverlayFS and
adding a Rally job for Magnum. Overall, I am impressed by the number of
high-quality patches he has submitted. He is also helpful in code reviews, and
his comments often help us identify pitfalls that are not easy to spot. He is
also very active on IRC and the ML. Based on his contributions and expertise, I
think he is qualified to be a Magnum core reviewer.

I am happy to propose Spyros to be a core reviewer of the Magnum team. According to
the OpenStack Governance process [1], we require a minimum of 4 +1 votes from 
Magnum core reviewers within a 1 week voting window (consider this proposal as 
a +1 vote from me). A vote of -1 is a veto. If we cannot get enough votes or 
there is a veto vote prior to the end of the voting window, Spyros is not able 
to join the core team and needs to wait 30 days to reapply.

The voting is open until Thursday, July 29th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

