Re: [openstack-dev] [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a k8s orchestrator

2016-05-29 Thread Hongbin Lu


> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: May-29-16 3:29 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev]
> [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr] Gap analysis: Heat as a
> k8s orchestrator
> 
> Quick question below.
> 
> On 5/28/16, 1:16 PM, "Hongbin Lu" <hongbin...@huawei.com> wrote:
> 
> >
> >
> >> -Original Message-
> >> From: Zane Bitter [mailto:zbit...@redhat.com]
> >> Sent: May-27-16 6:31 PM
> >> To: OpenStack Development Mailing List
> >> Subject: [openstack-dev]
> >> [TripleO][Kolla][Heat][Higgins][Magnum][Kuryr]
> >> Gap analysis: Heat as a k8s orchestrator
> >>
> >> I spent a bit of time exploring the idea of using Heat as an
> external
> >> orchestration layer on top of Kubernetes - specifically in the case
> >> of TripleO controller nodes but I think it could be more generally
> >> useful too - but eventually came to the conclusion it doesn't work
> >> yet, and probably won't for a while. Nevertheless, I think it's
> >> helpful to document a bit to help other people avoid going down the
> >> same path, and also to help us focus on working toward the point
> >> where it _is_ possible, since I think there are other contexts where
> >> it would be useful too.
> >>
> >> We tend to refer to Kubernetes as a "Container Orchestration Engine"
> >> but it does not actually do any orchestration, unless you count just
> >> starting everything at roughly the same time as 'orchestration'.
> >> Which I wouldn't. You generally handle any orchestration
> requirements
> >> between services within the containers themselves, possibly using
> >> external services like etcd to co-ordinate. (The Kubernetes project
> >> refer to this as "choreography", and explicitly disclaim any attempt
> >> at
> >> orchestration.)
> >>
> >> What Kubernetes *does* do is more like an actively-managed version
> of
> >> Heat's SoftwareDeploymentGroup (emphasis on the _Group_). Brief
> recap:
> >> SoftwareDeploymentGroup is a type of ResourceGroup; you give it a
> map
> >> of resource names to server UUIDs and it creates a
> SoftwareDeployment
> >> for each server. You have to generate the list of servers somehow to
> >> give it (the easiest way is to obtain it from the output of another
> >> ResourceGroup containing the servers). If e.g. a server goes down
> you
> >> have to detect that externally, and trigger a Heat update that
> >> removes it from the templates, redeploys a replacement server, and
> >> regenerates the server list before a replacement SoftwareDeployment
> >> is created. In contrast, Kubernetes is running on a cluster of
> >> servers, can use rules to determine where to run containers, and can
> >> very quickly redeploy without external intervention in response to a
> >> server or container falling over. (It also does rolling updates,
> >> which Heat can also do albeit in a somewhat hacky way when it comes
> >> to SoftwareDeployments - which we're planning to fix.)
> >>
> >> So this seems like an opportunity: if the dependencies between
> >> services could be encoded in Heat templates rather than baked into
> >> the containers then we could use Heat as the orchestration layer
> >> following the dependency-based style I outlined in [1]. (TripleO is
> >> already moving in this direction with the way that composable-roles
> >> uses
> >> SoftwareDeploymentGroups.) One caveat is that fully using this style
> >> likely rules out for all practical purposes the current
> >> Pacemaker-based HA solution. We'd need to move to a lighter-weight
> HA
> >> solution, but I know that TripleO is considering that anyway.
> >>
> >> What's more though, assuming this could be made to work for a
> >> Kubernetes cluster, a couple of remappings in the Heat environment
> >> file should get you an otherwise-equivalent single-node non-HA
> >> deployment basically for free. That's particularly exciting to me
> >> because there are definitely deployments of TripleO that need HA
> >> clustering and deployments that don't and which wouldn't want to pay
> >> the complexity cost of running Kubernetes when they don't make any
> real use of it.
> >>
> >> So you'd have a Heat resource type for the controller cluster that
> >> maps to eit

Re: [openstack-dev] [Kuryr][Magnum] - Kuryr nested containers and Magnum Integration

2016-06-21 Thread Hongbin Lu
Gal,

Thanks for starting this ML thread. Since the work involves both teams, I think it is a
good idea to start by splitting the task first. Then, we can see which items go
to which team. Vikas, do you mind updating this ML thread once the task is split?
Thanks in advance.

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: June-21-16 2:14 AM
To: OpenStack Development Mailing List (not for usage questions); Vikas 
Choudhary; Antoni Segura Puimedon; Irena Berezovsky; Fawad Khaliq; Omer Anson; 
Hongbin Lu
Subject: [Kuryr][Magnum] - Kuryr nested containers and Magnum Integration

Hello all,

I am writing this out to provide awareness and hopefully get some work started
on the above topic.

We merged a spec about supporting nested containers and integration with Magnum
some time ago [1]; Fawad (CCed) led this spec.

We are now seeking volunteers to start implementing this, covering both Kuryr and
the needed parts in Magnum.

Vikas (CCed) volunteered in the last IRC meeting [2] to start by splitting this
work into sub-tasks so it will be easier to share. Anyone else who is interested
in joining this effort is more than welcome to join in and contact Vikas.
I know several other people showed interest in working on this, so I hope we can
pull everyone together in this thread, or online on IRC.

Thanks
Gal.

[1] https://review.openstack.org/#/c/269039/
[2] 
http://eavesdrop.openstack.org/meetings/kuryr/2016/kuryr.2016-06-20-14.00.html


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-06-17 Thread Hongbin Lu
Ricardo,

Thanks for sharing. It is good to hear that Magnum works well with a 200-node
cluster.

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: June-17-16 11:14 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of
> nodes
> 
> Hi.
> 
> Just thought the Magnum team would be happy to hear :)
> 
> We had access to some hardware the last couple days, and tried some
> tests with Magnum and Kubernetes - following an original blog post from
> the kubernetes team.
> 
> Got a 200 node kubernetes bay (800 cores) reaching 2 million requests /
> sec.
> 
> Check here for some details:
> https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-
> kubernetes-2-million.html
> 
> We'll try bigger in a couple weeks, also using the Rally work from
> Winnie, Ton and Spyros to see where it breaks. Already identified a
> couple issues, will add bugs or push patches for those. If you have
> ideas or suggestions for the next tests let us know.
> 
> Magnum is looking pretty good!
> 
> Cheers,
> Ricardo
> 



[openstack-dev] [Magnum] Midcycle location and date

2016-06-20 Thread Hongbin Lu
Hi all,

This is a reminder that there are doodle polls for midcycle participants to
select the location and time:

Location: http://doodle.com/poll/2x9utspir7vk8ter
Date: http://doodle.com/poll/5tbcyc37yb7ckiec

If you are able to attend the midcycle, I encourage you to vote for your 
preferred location and date. We will try to finalize everything in the team 
meeting tomorrow.

Best regards,
Hongbin


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-16 Thread Hongbin Lu
Welcome! Please feel free to ping us in IRC (#openstack-zun) or join our weekly 
meeting (https://wiki.openstack.org/wiki/Zun#Meetings). I am happy to discuss 
how to collaborate further.

Best regards,
Hongbin

From: Pengfei Ni [mailto:feisk...@gmail.com]
Sent: June-16-16 6:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qi Ming Teng; yanya...@cn.ibm.com; flw...@catalyst.net.nz; 
adit...@nectechnologies.in; sitlani.namr...@yahoo.in; Chandan Kumar; Sheel Rana 
Insaan; Yuanying
Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap

Hello, everyone,

Hypernetes has done some work similar to this project, that is:

- Leverage Neutron for container networking
- Leverage Cinder for storage
- Leverage Keystone for auth
- Leverage HyperContainer for a hypervisor-based container runtime

We could help to provide hypervisor-based container runtime (HyperContainer) 
integration for Zun.

See https://github.com/hyperhq/hypernetes and 
http://blog.kubernetes.io/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes.html
 for more information about Hypernetes, and see 
https://github.com/hyperhq/hyperd for more information about HyperContainer.


Best regards.


---
Pengfei Ni
Software Engineer @Hyper

2016-06-13 6:10 GMT+08:00 Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>>:
Hi team,

During the team meetings over these weeks, we collaborated on the initial project
roadmap. I have summarized it below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD).
* Focus on non-nested containers use cases (running containers on physical 
hosts), and revisit nested containers use cases (running containers on VMs) 
later.
* Provide two sets of APIs to access containers: the Nova APIs and the
Zun-native APIs. In particular, the Zun-native APIs will expose the full container
capabilities, and the Nova APIs will expose capabilities that are shared between
containers and VMs.
* Leverage Neutron (via Kuryr) for container networking.
* Leverage Cinder for container data volume.
* Leverage Glance for storing container images. If necessary, contribute to
Glance for missing features (e.g. support for layers of container images).
* Support enforcing multi-tenancy by doing the following:
** Add configurable options for the scheduler to enforce that neighboring containers
belong to the same tenant.
** Support hypervisor-based container runtimes.

The following topics have been discussed, but the team could not reach consensus
on including them in the short-term project scope. We skipped them for now
and might revisit them later.
* Support proxying API calls to COEs.
* Advanced container operations (e.g. keeping containers alive, load balancer
setup, rolling upgrades).
* Nested container use cases (e.g. provisioning container hosts).
* Container composition (e.g. support for a docker-compose-like DSL).

NOTE: I might have forgotten or misunderstood something. Please feel free to point
out anything that is wrong or missing.

Best regards,
Hongbin




Re: [openstack-dev] [Kuryr][Magnum] - Kuryr nested containers and Magnum Integration

2016-06-24 Thread Hongbin Lu


From: Vikas Choudhary [mailto:choudharyvika...@gmail.com]
Sent: June-22-16 3:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Kuryr][Magnum] - Kuryr nested containers and 
Magnum Integration

Hi Eli Qiao,

Please find my responses inline.

On Wed, Jun 22, 2016 at 12:47 PM, taget 
<qiaoliy...@gmail.com<mailto:qiaoliy...@gmail.com>> wrote:
hi Vikas,
thanks for your clarification, replied inline.
On 2016-06-22 14:36, Vikas Choudhary wrote:

Magnum:

  1.  Support passing Neutron network names to container creation APIs such as
pod-create in the k8s case.
Hmm. Magnum has deleted all the wrapper APIs for container creation and pod-create.
Oh, I was referring to the older design then. In that case, what would be the
corresponding alternative now?
Is this related to Zun somehow?
[Hongbin Lu] The alternative is the native CLI tools (e.g. kubectl, docker). This is
unrelated to Zun. Zun is a totally independent service, regardless of whether Magnum
exists or not.




  1.  If Kuryr is used as the network driver at bay creation, update the Heat template
creation logic to provision the kuryr-agent on all the bay nodes. This also
includes passing the required configuration and credentials.

In this case I am confused: we need to install the kuryr-agent on all bay nodes,
and since the kuryr-agent is for binding Neutron ports and container ports, will we
need to install the neutron-agent on the bay nodes too?
The neutron-agent will not be required on bay nodes. The kuryr-agent alone will be
sufficient to plumb the VIFs and handle the VLAN tagging.





--

Best Regards,

Eli Qiao (乔立勇), Intel OTC.




[openstack-dev] [Magnum]Midcycle

2016-06-24 Thread Hongbin Lu
Hi all,

The Magnum midcycle will be held on Aug 4 - 5 in Austin. Below is the link to
register. Hope to see you all there.

https://www.eventbrite.com/e/magnum-midcycle-tickets-26245489967

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-13 Thread Hongbin Lu
Hi Jamie,

I would like to clarify several things.

First, a container uuid is intended to be unique globally (not just within an
individual cluster). If you create a container with a duplicated uuid, the
creation will fail regardless of its bay. Second, you are in control of the
uuid of the container that you are going to create. In the REST API, you can set
the “uuid” field in the JSON request body (this is not supported in the CLI, but it
is an easy add). If a uuid is provided, Magnum will use it as the uuid of the
container (instead of generating a new uuid).
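
For illustration, here is a minimal sketch of such a create request using
python-requests; the endpoint path, the port and every field other than "uuid"
are assumptions made for the sake of the example, not a definitive description
of Magnum's API:

    import json
    import uuid

    import requests

    body = {
        "uuid": str(uuid.uuid4()),   # caller-chosen uuid, as described above
        "name": "web-1",             # illustrative values only
        "image": "nginx",
        "bay_uuid": "<bay uuid>",
    }
    resp = requests.post(
        "http://<magnum-api>:9511/v1/containers",
        headers={"X-Auth-Token": "<token>",
                 "Content-Type": "application/json"},
        data=json.dumps(body),
    )
    print(resp.status_code)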

For the idea of nesting the container resource, I prefer not to do that if there
are alternatives or it can be worked around. IMO, it sets a limitation that a
container must have a bay, which might not be the case in the future. For example,
we might add a feature where creating a container automatically creates a
bay. If a container must have a bay on creation, such a feature is impossible.

Best regards,
Hongbin

From: Jamie Hannaford [mailto:jamie.hannaf...@rackspace.com]
Sent: January-13-16 4:43 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Nesting /containers resource under /bays


I've recently been gathering feedback about the Magnum API, and one of the
things that people commented on was the global /containers endpoints. One
person highlighted the danger of UUID collisions:



"""

It takes a container ID which is intended to be unique within that individual 
cluster. Perhaps this doesn't matter, considering the surface for hash 
collisions. You're running a 1% risk of collision on the shorthand container 
IDs:



In [14]: n = lambda p,H: math.sqrt(2*H * math.log(1/(1-p)))
In [15]: n(.01, 0x1)
Out[15]: 2378620.6298183016



(this comes from the Birthday Attack - 
https://en.wikipedia.org/wiki/Birthday_attack)



The main reason I questioned this is that we're not in control of how the 
hashes are created whereas each Docker node or Swarm cluster will pick a new ID 
under collisions. We don't have that guarantee when aggregating across.



The use case that was outlined appears to be aggregation and reporting. That 
can be done in a different manner than programmatic access to single 
containers.

"""



Representing a resource without reference to its parent resource also goes 
against the convention of many other OpenStack APIs.



Nesting a container resource under its parent bay would mitigate both of these 
issues:



/bays/{uuid}/containers/{uuid}



I'd like to get feedback from folks in the Magnum team and see if anybody has 
differing opinions about this.



Jamie








Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-15 Thread Hongbin Lu
One reason is that the container abstraction brings containers into OpenStack: Keystone
for authentication, Heat for orchestration, Horizon for UI, etc.

From: Kyle Kelley [mailto:rgb...@gmail.com]
Sent: January-15-16 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

What are the reasons for keeping /containers?

On Fri, Jan 15, 2016 at 9:14 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Disagree.

If the container management part is removed, Magnum is just a COE deployment
tool. This is really a scope mismatch IMO. The middle ground I can see is to
have a flag that allows operators to turn off the container management part. If
it is turned off, COEs are not managed by Magnum and requests sent to the
/containers endpoint will return a reasonable error code. Thoughts?

Best regards,
Hongbin

From: Mike Metral 
[mailto:mike.met...@rackspace.com<mailto:mike.met...@rackspace.com>]
Sent: January-15-16 6:24 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>

Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

I too believe that the /containers endpoint is obstructive to the overall goal 
of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container 
Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.
Anything further regarding Magnum interfacing or interacting with containers 
starts to get into a gray area that could easily evolve into:

  *   Potential race conditions between Magnum and the designated COE, and
  *   Design & implementation overhead and debt that could bite us in the long run,
seeing how all COEs operate & are based on various different paradigms for
describing & managing containers, and this divergence will only continue to grow
with time.
  *   Not to mention, the recreation of functionality around managing 
containers in Magnum seems redundant in nature as this is the very reason to 
want to use a COE in the first place – because it’s a more suited tool for the 
task
If there is low-hanging fruit in terms of common functionality across all
COEs, then those generic capabilities could be abstracted and integrated into
Magnum, but these have to be carefully examined beforehand to ensure true
parity exists for the capability across all COEs.

However, I still worry that going down this route toes the line that Magnum 
should and could be a part of the managing container story to some degree – 
which again should be the sole responsibility of the COE, not Magnum.

I’m in favor of doing away with the /containers endpoint – continuing with it 
just looks like a snowball of scope-mismatch and management issues just waiting 
to happen.

Mike Metral
Product Architect – Private Cloud R - Rackspace
____________
From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Sent: Thursday, January 14, 2016 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

In short, the container IDs assigned by Magnum are independent of the container
IDs assigned by the Docker daemon. Magnum does the ID mapping before making a native
API call. In particular, here is how it works.

If users create a container through the Magnum endpoint, Magnum will do the
following:
1.   Generate a uuid (if not provided).
2.   Call the Docker Swarm API to create a container, with its hostname equal
to the generated uuid.
3.   Persist the container to the DB with the generated uuid.

If users perform an operation on an existing container, they must provide the
uuid (or the name) of the container (if a name is provided, it will be used to
look up the uuid). Magnum will do the following:
1.   Call the Docker Swarm API to list all containers.
2.   Find the container whose hostname is equal to the provided uuid, and
record its “docker_id”, which is the ID assigned by the native tool.
3.   Call the Docker Swarm API with the “docker_id” to perform the operation.

Magnum doesn’t assume all operations to be routed through Magnum endpoints. 
Alternatively, users can directly call the native APIs. In this case, the 
created resources are not managed by Magnum and won’t be accessible through 
Magnum’s endpoints.

Hope it is clear.

Best regards,
Hongbin

From: Kyle Kelley [mailto:kyle.kel...@rackspace.com]
Sent: January-14-16 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

This presumes a model where Magnum is in complete control of the IDs of 
individual containers. How does this work with the Docker daemon?

> In Rest API, yo

Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-15 Thread Hongbin Lu
Disagree.

If the container management part is removed, Magnum is just a COE deployment
tool. This is really a scope mismatch IMO. The middle ground I can see is to
have a flag that allows operators to turn off the container management part. If
it is turned off, COEs are not managed by Magnum and requests sent to the
/containers endpoint will return a reasonable error code. Thoughts?

Best regards,
Hongbin

From: Mike Metral [mailto:mike.met...@rackspace.com]
Sent: January-15-16 6:24 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

I too believe that the /containers endpoint is obstructive to the overall goal 
of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container 
Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.
Anything further regarding Magnum interfacing or interacting with containers 
starts to get into a gray area that could easily evolve into:

  *   Potential race conditions between Magnum and the designated COE, and
  *   Design & implementation overhead and debt that could bite us in the long run,
seeing how all COEs operate & are based on various different paradigms for
describing & managing containers, and this divergence will only continue to grow
with time.
  *   Not to mention, the recreation of functionality around managing 
containers in Magnum seems redundant in nature as this is the very reason to 
want to use a COE in the first place – because it’s a more suited tool for the 
task
If there is low-hanging fruit in terms of common functionality across all
COEs, then those generic capabilities could be abstracted and integrated into
Magnum, but these have to be carefully examined beforehand to ensure true
parity exists for the capability across all COEs.

However, I still worry that going down this route toes the line that Magnum 
should and could be a part of the managing container story to some degree – 
which again should be the sole responsibility of the COE, not Magnum.

I’m in favor of doing away with the /containers endpoint – continuing with it 
just looks like a snowball of scope-mismatch and management issues just waiting 
to happen.

Mike Metral
Product Architect – Private Cloud R - Rackspace
____
From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Sent: Thursday, January 14, 2016 1:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

In short, the container IDs assigned by Magnum are independent of the container
IDs assigned by the Docker daemon. Magnum does the ID mapping before making a native
API call. In particular, here is how it works.

If users create a container through the Magnum endpoint, Magnum will do the
following:
1.   Generate a uuid (if not provided).
2.   Call the Docker Swarm API to create a container, with its hostname equal
to the generated uuid.
3.   Persist the container to the DB with the generated uuid.

If users perform an operation on an existing container, they must provide the
uuid (or the name) of the container (if a name is provided, it will be used to
look up the uuid). Magnum will do the following:
1.   Call the Docker Swarm API to list all containers.
2.   Find the container whose hostname is equal to the provided uuid, and
record its “docker_id”, which is the ID assigned by the native tool.
3.   Call the Docker Swarm API with the “docker_id” to perform the operation.

Magnum doesn’t assume all operations to be routed through Magnum endpoints. 
Alternatively, users can directly call the native APIs. In this case, the 
created resources are not managed by Magnum and won’t be accessible through 
Magnum’s endpoints.

Hope it is clear.

Best regards,
Hongbin

From: Kyle Kelley [mailto:kyle.kel...@rackspace.com]
Sent: January-14-16 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

This presumes a model where Magnum is in complete control of the IDs of 
individual containers. How does this work with the Docker daemon?

> In the REST API, you can set the “uuid” field in the JSON request body (this is
> not supported in the CLI, but it is an easy add).

In the Rest API for Magnum or Docker? Has Magnum completely broken away from 
exposing native tooling - are all container operations assumed to be routed 
through Magnum endpoints?

> For the idea of nesting the container resource, I prefer not to do that if there
> are alternatives or it can be worked around. IMO, it sets a limitation that a
> container must have a bay, which might not be the case in the future. For
> example, we might add a feature where creating a container automatically
>

Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-18 Thread Hongbin Lu
Hi Egor,

Thanks for investigating the issue. I will review the patch. Agreed, we can
definitely enable the swarm tests if everything works fine.

Best regards,
Hongbin

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com] 
Sent: January-18-16 2:42 PM
To: OpenStack Development Mailing List
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I did some digging and found that the docker storage driver wasn’t configured
correctly on the agent nodes.
Also, it looks like the Atomic folks recommend using dedicated volumes for DeviceMapper
(http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/).
So I added a Cinder volume for the master as well (I tried creating volumes on local
storage, but there is not even enough space for a 1G volume).

Please take a look at https://review.openstack.org/#/c/267996. I did around ~12
gate runs and got only 2 failures (tests cannot connect to the master, but all the
container logs look alright, e.g.
http://logs.openstack.org/96/267996/3/check/gate-functional-dsvm-magnum-swarm/d8d855b/console.html#_2016-01-18_04_31_17_312);
 we have similar error rates with Kubernetes. So after merging this code we can try to
enable voting for the Swarm tests. Thoughts?

—
Egor

On Jan 8, 2016, at 12:01, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

There are other symptoms as well, which I have no idea about without a deep dive.

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-08-16 2:14 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I believe most failures are related to the container tests. Maybe we should comment
only those out and keep the Swarm cluster provisioning.
Thoughts?

—
Egor

On Jan 8, 2016, at 06:37, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com><mailto:hongbin...@huawei.com>>
 wrote:

Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

On Jan 7, 2016, at 3:34 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com><mailto:hongbin...@huawei.com>>
 wrote:

Clark,

That is true. The check pipeline must pass in order to enter the gate pipeline.
Here is the problem we are facing: a patch that was able to pass the check
pipeline is blocked in the gate pipeline due to the instability of the test. The
removal of the unstable test from the gate pipeline aims to unblock the patches that
have already passed the check.

An alternative is to remove the unstable test from the check pipeline as well, or
mark it as a non-voting test. If that is what the team prefers, I will adjust the
review accordingly.

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: 
openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org><mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
Hi folks,

It looks like the swarm func test is currently unstable, which negatively impacts
the patch submission workflow. I proposed to remove it from the Jenkins gate (but
keep it in the Jenkins check) until it becomes stable.
Please find the details in the review
(https://review.openstack.org/#/c/264998/) and let me know if you have any
concerns.

Removing it from gate but not from check doesn't necessarily help much because 
you can only enter the gate pipeline once the change has a +1 from Jenkins. 
Jenkins applies the +1 after check tests pass.

Clark




Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-14 Thread Hongbin Lu
In short, the container IDs assigned by Magnum are independent of the container
IDs assigned by the Docker daemon. Magnum does the ID mapping before making a native
API call. In particular, here is how it works.

If users create a container through the Magnum endpoint, Magnum will do the
following:

1.   Generate a uuid (if not provided).

2.   Call the Docker Swarm API to create a container, with its hostname equal
to the generated uuid.

3.   Persist the container to the DB with the generated uuid.

If users perform an operation on an existing container, they must provide the
uuid (or the name) of the container (if a name is provided, it will be used to
look up the uuid). Magnum will do the following (a rough sketch appears after the list):

1.   Call the Docker Swarm API to list all containers.

2.   Find the container whose hostname is equal to the provided uuid, and
record its “docker_id”, which is the ID assigned by the native tool.

3.   Call the Docker Swarm API with the “docker_id” to perform the operation.
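
A rough sketch of that lookup, assuming the docker-py 1.x client; this is
illustrative only, not Magnum's actual code:

    import docker

    def find_docker_id(swarm_url, container_uuid):
        client = docker.Client(base_url=swarm_url)
        # Step 1: list all containers known to the Swarm manager.
        for summary in client.containers(all=True):
            info = client.inspect_container(summary['Id'])
            # Step 2: match on hostname, which Magnum set to the container uuid.
            if info['Config']['Hostname'] == container_uuid:
                # Step 3: this docker_id is used for the native API call.
                return summary['Id']
        return None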

Magnum doesn’t assume all operations to be routed through Magnum endpoints. 
Alternatively, users can directly call the native APIs. In this case, the 
created resources are not managed by Magnum and won’t be accessible through 
Magnum’s endpoints.

Hope it is clear.

Best regards,
Hongbin

From: Kyle Kelley [mailto:kyle.kel...@rackspace.com]
Sent: January-14-16 11:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays


This presumes a model where Magnum is in complete control of the IDs of 
individual containers. How does this work with the Docker daemon?



> In the REST API, you can set the “uuid” field in the JSON request body (this is
> not supported in the CLI, but it is an easy add).



In the Rest API for Magnum or Docker? Has Magnum completely broken away from 
exposing native tooling - are all container operations assumed to be routed 
through Magnum endpoints?



> For the idea of nesting the container resource, I prefer not to do that if there
> are alternatives or it can be worked around. IMO, it sets a limitation that a
> container must have a bay, which might not be the case in the future. For
> example, we might add a feature where creating a container automatically
> creates a bay. If a container must have a bay on creation, such a feature is
> impossible.



If that's *really* a feature you need and are fully involved in designing for, 
this seems like a case where creating a container via these endpoints would 
create a bay and return the full resource+subresource.



Personally, I think these COE endpoints need to not be in the main spec, to 
reduce the surface area until these are put into further use.







____________
From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Sent: Wednesday, January 13, 2016 5:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Hi Jamie,

I would like to clarify several things.

First, a container uuid is intended to be unique globally (not just within an
individual cluster). If you create a container with a duplicated uuid, the
creation will fail regardless of its bay. Second, you are in control of the
uuid of the container that you are going to create. In the REST API, you can set
the “uuid” field in the JSON request body (this is not supported in the CLI, but it
is an easy add). If a uuid is provided, Magnum will use it as the uuid of the
container (instead of generating a new uuid).

For the idea of nesting the container resource, I prefer not to do that if there
are alternatives or it can be worked around. IMO, it sets a limitation that a
container must have a bay, which might not be the case in the future. For example,
we might add a feature where creating a container automatically creates a
bay. If a container must have a bay on creation, such a feature is impossible.

Best regards,
Hongbin

From: Jamie Hannaford [mailto:jamie.hannaf...@rackspace.com]
Sent: January-13-16 4:43 AM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Nesting /containers resource under /bays


I've recently been gathering feedback about the Magnum API, and one of the
things that people commented on was the global /containers endpoints. One
person highlighted the danger of UUID collisions:



"""

It takes a container ID which is intended to be unique within that individual 
cluster. Perhaps this doesn't matter, considering the surface for hash 
collisions. You're running a 1% risk of collision on the shorthand container 
IDs:



In [14]: n = lambda p,H: math.sqrt(2*H * math.log(1/(1-p)))
In [15]: n(.01, 0x1)
Out[15]: 2378620.6298183016



(this comes from the Birthday Attack - 
https://en.wikipedia.org/wiki/Birthday_attack)

[openstack-dev] [magnum][heat] Bug 1544227

2016-02-10 Thread Hongbin Lu
Hi Heat team,

As mentioned in IRC, the magnum gate broke with bug 1544227. Rabi submitted a
fix (https://review.openstack.org/#/c/278576/), but it doesn't seem to be
enough to unlock the broken gate. In particular, it seems templates with a
SoftwareDeploymentGroup resource failed to complete (I have commented on the
review above on how to reproduce).

Right now, I prefer to merge the reverted patch 
(https://review.openstack.org/#/c/278575/) to unlock our gate immediately, 
unless someone can work on a quick fix. We appreciate the help.

Best regards,
Hongbin



Re: [openstack-dev] [Magnum] gate issues

2016-02-05 Thread Hongbin Lu
Corey,

Thanks for investigating the gate issues and summarizing them. It looks like there are
multiple problems to solve, and tickets were created for each one.


1.   https://bugs.launchpad.net/magnum/+bug/1542384

2.   https://bugs.launchpad.net/magnum/+bug/1541964

3.   https://bugs.launchpad.net/magnum/+bug/1542386

4.   https://bugs.launchpad.net/magnum/+bug/1536739

I gave #3 the highest priority because, without this issue being resolved, the
gate takes several hours to run a single job. It would be tedious to test
patches and troubleshoot other issues in such an environment. Any kind of help
with this issue is greatly appreciated.

Egor, thanks for the advice. A ticket was created to track the logs missing 
issue you mentioned: https://bugs.launchpad.net/magnum/+bug/1542390

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-05-16 2:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] gate issues

Corey,

I think we should do more investigation before applying any "hot" patches. E.g.
I looked at several failures today and honestly there is no way to find out the
reasons.
I believe we are not copying logs
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/python_client_base.py#L163)
 during test failures: we register the handler in setUp
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/python_client_base.py#L244),
 but the Swarm tests create the
bay in setUpClass
(https://github.com/openstack/magnum/blob/master/magnum/tests/functional/swarm/test_swarm_python_client.py#L48),
 which is called before setUp.
So there is no way to see any logs from the VM.

Sorry, I cannot submit a patch or debug it myself because I will only get my laptop
back on Tue ):

---
 Egor


From: Corey O'Brien
To: OpenStack Development Mailing List (not for usage questions)
Sent: Thursday, February 4, 2016 9:03 PM
Subject: [openstack-dev] [Magnum] gate issues

So as we're all aware, the gate is a mess right now. I wanted to sum up some of 
the issues so we can figure out solutions.

1. The functional-api job sometimes fails because bays timeout building after 1 
hour. The logs look something like this:
magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays
 [3733.626171s] ... FAILED
I can reproduce this hang on my devstack with etcdctl 2.0.10 as described in 
this bug (https://bugs.launchpad.net/magnum/+bug/1541105), but apparently 
either my fix with using 2.2.5 (https://review.openstack.org/#/c/275994/) is 
incomplete or there is another intermittent problem because it happened again 
even with that fix: 
(http://logs.openstack.org/94/275994/1/check/gate-functional-dsvm-magnum-api/32aacb1/console.html)

2. The k8s job has some sort of intermittent hang as well that causes a similar 
symptom as with swarm. https://bugs.launchpad.net/magnum/+bug/1541964

3. When the functional-api job runs, it frequently destroys the VM causing the 
jenkins slave agent to die. Example: 
http://logs.openstack.org/03/275003/6/check/gate-functional-dsvm-magnum-api/a9a0eb9//console.html
When this happens, zuul re-queues a new build from the start on a new VM. This 
can happen many times in a row before the job completes.
I chatted with openstack-infra about this and after taking a look at one of the 
VMs, it looks like memory over consumption leading to thrashing was a possible 
culprit. The sshd daemon was also dead but the console showed things like 
"INFO: task kswapd0:77 blocked for more than 120 seconds". A cursory glance and 
following some of the jobs seems to indicate that this doesn't happen on RAX 
VMs which have swap devices unlike the OVH VMs as well.

4. In general, even when things work, the gate is really slow. The sequential 
master-then-node build process in combination with underpowered VMs makes bay 
builds take 25-30 minutes when they do succeed. Since we're already close to 
tipping over a VM, we run functional tests with concurrency=1, so 2 bay builds 
means almost the entire allotted devstack testing time (generally 75 minutes of 
actual test time available it seems).

Corey



Re: [openstack-dev] [Magnum] gate issues

2016-02-08 Thread Hongbin Lu
Hi Team,

In order to resolve issue #3, it looks like we have to significantly reduce the
memory consumption of the gate tests. Details can be found in this patch:
https://review.openstack.org/#/c/276958/ . For the core team, a fast review and
approval of that patch would be greatly appreciated, since it is hard to work
with a gate that takes several hours to complete. Thanks.

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: February-05-16 12:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Magnum] gate issues

So as we're all aware, the gate is a mess right now. I wanted to sum up some of 
the issues so we can figure out solutions.

1. The functional-api job sometimes fails because bays timeout building after 1 
hour. The logs look something like this:
magnum.tests.functional.api.v1.test_bay.BayTest.test_create_list_and_delete_bays
 [3733.626171s] ... FAILED
I can reproduce this hang on my devstack with etcdctl 2.0.10 as described in 
this bug (https://bugs.launchpad.net/magnum/+bug/1541105), but apparently 
either my fix with using 2.2.5 (https://review.openstack.org/#/c/275994/) is 
incomplete or there is another intermittent problem because it happened again 
even with that fix: 
(http://logs.openstack.org/94/275994/1/check/gate-functional-dsvm-magnum-api/32aacb1/console.html)

2. The k8s job has some sort of intermittent hang as well that causes a similar 
symptom as with swarm. https://bugs.launchpad.net/magnum/+bug/1541964

3. When the functional-api job runs, it frequently destroys the VM causing the 
jenkins slave agent to die. Example: 
http://logs.openstack.org/03/275003/6/check/gate-functional-dsvm-magnum-api/a9a0eb9//console.html
When this happens, zuul re-queues a new build from the start on a new VM. This 
can happen many times in a row before the job completes.
I chatted with openstack-infra about this and after taking a look at one of the 
VMs, it looks like memory over consumption leading to thrashing was a possible 
culprit. The sshd daemon was also dead but the console showed things like 
"INFO: task kswapd0:77 blocked for more than 120 seconds". A cursory glance and 
following some of the jobs seems to indicate that this doesn't happen on RAX 
VMs which have swap devices unlike the OVH VMs as well.

4. In general, even when things work, the gate is really slow. The sequential 
master-then-node build process in combination with underpowered VMs makes bay 
builds take 25-30 minutes when they do succeed. Since we're already close to 
tipping over a VM, we run functional tests with concurrency=1, so 2 bay builds 
means almost the entire allotted devstack testing time (generally 75 minutes of 
actual test time available it seems).

Corey


Re: [openstack-dev] [magnum][heat] Bug 1544227

2016-02-11 Thread Hongbin Lu
Rabi,

As you observed, I have uploaded two testing patches [1][2] that depend on
your fix patch [3] and the reverted patch [4] respectively. An observation is
that the test "gate-functional-dsvm-magnum-mesos" failed in [1] but passed in
[2]. That implies the reverted patch does resolve an issue (although I am not
sure exactly how).

I did notice there are several 404 errors from Neutron, but those errors exist 
in successful tests as well so I don't think they are the root cause.

[1] https://review.openstack.org/#/c/278578/
[2] https://review.openstack.org/#/c/278778/
[3] https://review.openstack.org/#/c/278576/
[4] https://review.openstack.org/#/c/278575/

Best regards,
Hongbin

-Original Message-
From: Rabi Mishra [mailto:ramis...@redhat.com] 
Sent: February-11-16 12:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][heat] Bug 1544227

Hi,

We did some analysis of the issue you are facing.

One of the issues on the heat side is that we convert None (singleton) resource
references to 'None' (string) and the translation logic is not ignoring them,
though we don't apply translation rules to resource references [1]. We don't see
this issue after this patch [2].

The issue you mentioned below with respect to SD and SDG does not look like it has
anything to do with this patch. I also see similar issues when you tested
with the reverted patch [3].

I also noticed that there are some 404s from neutron in the engine logs [4] for
the test patch.
I did not notice them when I tested locally with the templates you had provided.


Having said that, we can still revert the patch, if that resolves your issue. 

[1] 
https://github.com/openstack/heat/blob/master/heat/engine/translation.py#L234
[2] https://review.openstack.org/#/c/278576/
[3]http://logs.openstack.org/78/278778/1/check/gate-functional-dsvm-magnum-k8s/ea48ba2/console.html#_2016-02-11_03_07_49_039
[4] 
http://logs.openstack.org/78/278578/1/check/gate-functional-dsvm-magnum-swarm/51eeb3b/logs/screen-h-eng.txt


Regards,
Rabi

> Hi Heat team,
> 
> As mentioned in IRC, the magnum gate broke with bug 1544227. Rabi
> submitted a fix (https://review.openstack.org/#/c/278576/), but it
> doesn't seem to be enough to unlock the broken gate. In particular, it 
> seems templates with SoftwareDeploymentGroup resource failed to 
> complete (I have commented on the review above for how to reproduce).
> 
> Right now, I prefer to merge the reverted patch
> (https://review.openstack.org/#/c/278575/) to unlock our gate 
> immediately, unless someone can work on a quick fix. We appreciate the help.
> 
> Best regards,
> Hongbin
> 
> 




[openstack-dev] [Magnum] Kuryr-Magnum integration spec

2016-02-01 Thread Hongbin Lu
Hi Magnum team,

FYI, you might be interested in reviewing the Magnum integration spec from the Kuryr
team: https://review.openstack.org/#/c/269039/

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: January-31-16 2:57 AM
To: OpenStack Development Mailing List (not for usage questions); Antoni Segura 
Puimedon; Kyle Mestery; Irena Berezovsky; Taku Fukushima; Mohammad Banikazemi; 
Fawad Khaliq; Vikas Choudhary; Eran Gampel; Adrian Otto; Daneyon Hansen
Subject: [openstack-dev] [Kuryr] IRC Meeting - Monday (2/1) 1500 UTC 
(#openstack-meeting-4)


Hello All,

We are going to have an IRC Meeting tomorrow (2/1) at 1500 UTC
in #openstack-meeting-4

The meeting agenda can be seen here [1].
We are going to focus most of the meeting on the Kubernetes-Kuryr integration.
You can view the logs from our specific Kuryr-Kubernetes integration IRC 
meeting [2]

Please come with some modeling ideas; I think this topic will take most of the
time.

I would also like us to discuss Fawad's spec about Magnum-Kuryr integration [3]
and nested containers support.

I have CCed Adrian/Daneyon from the Magnum team here; hopefully you guys
can provide some feedback as well.

Thanks and see you there!
Gal.

[1] https://wiki.openstack.org/wiki/Meetings/Kuryr
[2] 
http://eavesdrop.openstack.org/meetings/kuryr/2016/kuryr.2016-01-26-15.03.log.html
[3] https://review.openstack.org/#/c/269039/3



Re: [openstack-dev] [Magnum] API service won't work if conductor down?

2016-02-03 Thread Hongbin Lu
I can clarify Eli’s question further.

1) is this by design that we don't allow magnum-api to access the DB directly?
Yes, that is by design. Actually, magnum-api was allowed to access the DB
directly before. After the indirection API patch landed [1], magnum-api
started using magnum-conductor as a proxy to access the DB. According to the input
from the oslo team, this design allows operators to take down either magnum-api or
magnum-conductor to upgrade. This is not the same as Nova, because
nova-api, nova-scheduler, and nova-conductor are assumed to be shut down all
together as an atomic unit.

I think we should make our own decision here. If we can pair magnum-api with
magnum-conductor as a unit, we can remove the indirection API and allow both
binaries to access the DB. This could mitigate the potential performance bottleneck
of the message queue. On the other hand, if we stay with the current design, we
would allow magnum-api and magnum-conductor to scale independently. Thoughts?
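
To make the trade-off concrete, here is a toy sketch (plain Python, no oslo
machinery, not Magnum code) of the proxying that the indirection API introduces;
the point is that the API process has no DB fallback when the conductor is
unreachable, which is why service-list simply times out:

    class Conductor(object):
        """Owns the only DB handle; in reality it is reached over RPC."""
        def __init__(self, db):
            self._db = db

        def object_class_action(self, objname, method, *args):
            return getattr(self._db, method)(objname, *args)

    class APIService(object):
        """Holds no DB handle; every read/write is forwarded to the conductor."""
        def __init__(self, conductor):
            self._conductor = conductor

        def service_list(self):
            # If the conductor is down, this call just waits for the RPC
            # timeout, matching the HTTP 500 / timeout Eli observed.
            return self._conductor.object_class_action('MagnumService', 'list')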

[1] https://review.openstack.org/#/c/184791/

Best regards,
Hongbin

From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
Sent: February-03-16 10:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] API service won't work if conductor down?

Corey, the one you are talking about has changed to coe-service-*.

Eli, IMO we should display a proper error message. The m-api service should only have
read permission.

Regards,
Madhuri

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: Wednesday, February 3, 2016 6:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] API service won't work if conductor down?

The service-* commands aren't related to the magnum services (e.g. 
magnum-conductor). The service-* commands are for services on the bay that the 
user creates and deletes.

Corey

On Wed, Feb 3, 2016 at 2:25 AM Eli Qiao wrote:
Hi,
When I try to run magnum service-list to list all services (it seems now we only
have the m-cond service), if m-cond is down (which means no conductor at all),
the API won't respond and will return a timeout error.

taget@taget-ThinkStation-P300:~/devstack$ magnum service-list
ERROR: Timed out waiting for a reply to message ID 
fd1e9529f60f42bf8db903bbf75bbade (HTTP 500)

I debugged more and compared with nova service-list; nova will give a response
and will tell you the conductor is down.

Digging deeper, I found this in the magnum-api boot up:

# Enable object backporting via the conductor
base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()

so in the magnum_service API code,

return objects.MagnumService.list(context, limit, marker, sort_key,
  sort_dir)

requires magnum-conductor to access the DB, but there is no magnum-conductor at
all, so we get a 500 error.
(nova-api doesn't specify indirection_api, so nova-api can access the DB)

My questions are:

1) is this by design that we don't allow magnum-api to access the DB directly?
2) if 1) is by design, then `magnum service-list` won't work, and the error
message should be improved, such as "magnum service is down, please check that the
magnum conductor is alive"

What do you think?

P.S. I tested commenting out this line:
# base.MagnumObject.indirection_api = base.MagnumObjectIndirectionAPI()
magnum-api will respond but fails to create a bay, which means the API service
has read access but cannot write at all (all DB writes happen in the
conductor layer).


--

Best Regards, Eli(Li Yong)Qiao

Intel OTC China


Re: [openstack-dev] [Magnum] Bug 1541105 options

2016-02-03 Thread Hongbin Lu
I would vote for a quick fix + a blueprint.

BTW, I think there is a general consensus that we should move away from Atomic for
various reasons (painful image building, lack of documentation, hard to use, etc.).
We are working on fixing the CoreOS templates, which could replace Atomic in the
future.

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: February-03-16 2:53 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Bug 1541105 options

As long as configurations for 2.2 and 2.0 are compatible, we shouldn't have an
issue, I would think. I just don't know enough about etcd deployment to be
sure about that.

If we want to quickly improve the gate, I can patch the problematic areas in 
the templates and then we can make a blueprint for upgrading to Atomic 23.

Corey

On Wed, Feb 3, 2016 at 1:47 PM Vilobh Meshram wrote:
Hi Corey,

This is slowing down our merge rate and needs to be fixed IMHO.

What risk are you talking about when using a newer version of etcd? Is it
documented somewhere for the team to have a look?

-Vilobh

On Wed, Feb 3, 2016 at 8:11 AM, Corey O'Brien wrote:
Hey team,

I've been looking into https://bugs.launchpad.net/magnum/+bug/1541105 which 
covers a bug with etcdctl, and I wanted opinions on how best to fix it.

Should we update the image to include the latest version of etcd? Or, should we 
temporarily install the latest version as a part of notify-heat (see bug for 
patch)?

I'm personally in favor of updating the image, but there is presumably some 
small risk with using a newer version of etcd.

Thanks,
Corey O'Brien

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-01 Thread Hongbin Lu
+1

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: February-01-16 10:59 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Magnum] New Core Reviewers

Magnum Core Team,

I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Reviewers. 
Please respond with your votes.

Thanks,

Adrian Otto

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Use Liberty Magnum bits with Kilo/ Icehouse Openstack ?

2016-01-28 Thread Hongbin Lu
As Kai Qiang mentioned, in terms of OpenStack projects, Magnum depends on
Keystone, Nova, Glance, Heat, and Cinder. If you are looking for the exact set
of dependencies, you can find it here:
https://github.com/openstack/magnum/blob/stable/liberty/requirements.txt .

If you want to run Magnum with an older version of OpenStack, I would suggest
running Magnum in its own virtual environment [1], or on a separate box. This
will ensure the dependencies of different OpenStack versions don't interfere
with each other.

[1] http://docs.python-guide.org/en/latest/dev/virtualenvs/

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: January-28-16 3:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Use Liberty Magnum bits with Kilo/ 
Icehouse Openstack ?


Hi,

For Magnum, the community has kept up upstream development since Kilo.

Magnum has dependencies on Keystone, Nova, Glance, Heat, and Cinder.


If you try to use Liberty Magnum against an Icehouse or Kilo OpenStack, it
depends on whether the Heat templates include any resources that are not
available in Icehouse or Kilo.


For example, according to
http://docs.openstack.org/developer/heat/template_guide/openstack.html,
Magnum's Mesos support uses OS::Heat::SoftwareDeploymentGroup, which has been
available in Heat since Liberty.

That means you could not use Magnum to deploy a Mesos cluster on an Icehouse or
Kilo OpenStack.

I suggest two ways:

1> Use a Liberty OpenStack if possible (if you want Liberty Magnum)

Or

2> Check the Heat templates used in Magnum, and compare them against the Heat
template guide. If all the resources are supported, it could run, but you need
to test that (a rough helper sketch follows below).
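
Something along these lines could help with the comparison in 2>. It is only a
rough helper sketch (not part of Magnum), and it assumes the templates are
plain HOT-format YAML files:

import glob
import sys

import yaml  # HOT templates are plain YAML, so safe_load is sufficient


def resource_types(template_path):
    # Collect the "type" of every resource declared in one template.
    with open(template_path) as f:
        doc = yaml.safe_load(f) or {}
    resources = doc.get('resources') or {}
    return {r['type'] for r in resources.values()
            if isinstance(r, dict) and 'type' in r}


if __name__ == '__main__':
    pattern = sys.argv[1] if len(sys.argv) > 1 else '*.yaml'
    for path in sorted(glob.glob(pattern)):
        for rtype in sorted(resource_types(path)):
            print('%s uses %s' % (path, rtype))

You could then check each printed resource type against the template guide of
the target (Icehouse or Kilo) Heat.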


Hope it can help !

Thanks




Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!

[Inactive hide details for "Sanjeev Rampal (srampal)" ---28/01/2016 04:10:26 
pm---A newbie question ... Is it described somewher]"Sanjeev Rampal (srampal)" 
---28/01/2016 04:10:26 pm---A newbie question ... Is it described somewhere 
what exactly is the set of Liberty dependencies for

From: "Sanjeev Rampal (srampal)" >
To: 
"openstack-dev@lists.openstack.org" 
>
Date: 28/01/2016 04:10 pm
Subject: [openstack-dev] [Magnum] Use Liberty Magnum bits with Kilo/ Icehouse 
Openstack ?





A newbie question …

Is it described somewhere what exactly the set of Liberty dependencies for
Magnum is? Since a significant fraction of it is orchestration templates, one
would expect it should be possible to run Liberty Magnum bits along with an
Icehouse or Kilo version of OpenStack.

Can someone help clarify Magnum's dependencies on having a Liberty version of
OpenStack?


Rgds,
Sanjeev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] containers across availability zones

2016-02-23 Thread Hongbin Lu
Hi Ricardo,

+1 from me. I like this feature.

Best regards,
Hongbin

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: February-23-16 5:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] containers across availability zones

Hi.

Has anyone looked into having magnum bay nodes deployed in different 
availability zones? The goal would be to have multiple instances of a container 
running on nodes across multiple AZs.

Looking at docker swarm this could be achieved using (for example) affinity 
filters based on labels. Something like:

docker run -it -d -p 80:80 --label nova.availability-zone=my-zone-a nginx 
https://docs.docker.com/swarm/scheduler/filter/#use-an-affinity-filter

We can do this if we change the templates/config scripts to add some labels to
the docker daemon params exposing the availability zone or other metadata
(taken from the nova metadata); a rough sketch follows below the link.
https://docs.docker.com/engine/userguide/labels-custom-metadata/#daemon-labels
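
Something like this is what I have in mind for the node side. It is purely an
illustrative sketch (not an existing Magnum script), assuming the Nova metadata
service is reachable from the node and exposes availability_zone in
meta_data.json:

import requests

METADATA_URL = 'http://169.254.169.254/openstack/latest/meta_data.json'


def availability_zone():
    # The OpenStack metadata JSON includes the instance's availability zone.
    metadata = requests.get(METADATA_URL, timeout=10).json()
    return metadata.get('availability_zone', 'unknown')


if __name__ == '__main__':
    # Print the extra flag the config scripts would append to the docker
    # daemon parameters.
    print('--label nova.availability-zone=%s' % availability_zone())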

It's a bit less clear how we would get heat to launch nodes across availability 
zones using ResourceGroup(s), but there are other heat resources that support 
it (i'm sure this can be done).

Does this make sense? Any thoughts or alternatives?

If it makes sense i'm happy to submit a blueprint.

Cheers,
  Ricardo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][magnum] Magnum gate issue

2016-02-22 Thread Hongbin Lu
Hi Heat team,

It looks like the Magnum gate broke after this patch landed:
https://review.openstack.org/#/c/273631/ . I would appreciate it if anyone could
help troubleshoot the issue. If the issue is confirmed, I would prefer a quick
fix or a revert, since we want to unblock the gate ASAP. Thanks.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-02-29 Thread Hongbin Lu
Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested
having Magnum support a single OS distro (Atomic). I disagreed. I think we
should bring the discussion here to get a broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2
different versions of the k8s template. Instead, we were going to maintain the
Fedora Atomic version of k8s and remove the coreos templates from the tree. I
don't think we should continue to develop features for coreos k8s if that is
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS
support. Why do you want to remove the CoreOS templates from the tree? Please
note that this is a very big decision; please discuss it with the team
thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected
distro could die in the future. Who knows. Why make Magnum take this huge risk?
Again, supporting a single distro is a very big decision. Please bring it up to
the team and have it discussed thoughtfully before making any decision. Also,
Magnum doesn't have to support every distro and every version for every COE,
but it should support *more than one* popular distro for some COEs (especially
for the popular COEs).

Corey O'Brien
The discussion at the midcycle started from the idea of adding support for RHEL 
and CentOS. We all discussed and decided that we wouldn't try to support 
everything in tree. Magnum would provide support in-tree for 1 per COE and the 
COE driver interface would allow others to add support for their preferred 
distro out of tree.

Hongbin Lu
I agreed the part that "we wouldn't try to support everything in tree". That 
doesn't imply the decision to support single distro. Again, support single 
distro is a huge risk. Why make Magnum take this huge risk?

[1] https://review.openstack.org/#/c/277284/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

2016-02-29 Thread Hongbin Lu
Hi team,

FYI, the I18n team needs liaisons from magnum-ui. Please contact the I18n team
if you are interested in this role.

Best regards,
Hongbin

From: Ying Chun Guo [mailto:guoyi...@cn.ibm.com]
Sent: February-29-16 3:48 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all][i18n] Liaisons for I18n

Hello,

Mitaka translation will start soon, from this week.
In the Mitaka translation, IBM full-time translators will join the
translation team and work with community translators.
With their help, the I18n team is able to cover more projects.
So I need liaisons from dev projects who can help the I18n team work
smoothly with the development team in the release cycle.

I especially need liaisons in the projects below, which are in the Mitaka
translation plan:
nova, glance, keystone, cinder, swift, neutron, heat, horizon, ceilometer.

I also need liaisons from the Horizon plugin projects, which are ready on the
translation website:
trove-dashboard, sahara-dashboard, designate-dashboard, magnum-ui,
monasca-ui, murano-dashboard and senlin-dashboard.
I need liaisons to tell us whether they are ready for translation from the
project's point of view.

As to other projects, liaisons are welcomed too.

Here are the descriptions of the I18n liaison role:
- The liaison should be a core reviewer for the project and understand the i18n
status of this project.
- The liaison should understand the project release schedule very well.
- The liaison should notify the I18n team of important moments in the project
release in a timely manner,
for example the soft string freeze, the hard string freeze, and the RC1 cut.
- The liaison should take care of translation patches to the project, and make
sure the patches are
successfully merged to the final release version. When a translation patch
fails, the liaison
should notify the I18n team.

If you are interested in being a liaison and helping translators,
add your information here:
https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n .

Thank you for your support.
Best regards
Ying Chun Guo (Daisy)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-02-29 Thread Hongbin Lu


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: February-29-16 1:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Consider this: Which OS runs on the bay nodes is not important to end users. 
What matters to users is the environments their containers execute in, which 
has only one thing in common with the bay
The bay nodes are under the user’s tenant. That means end users can SSH to the
nodes and play with the containers. Therefore, the choice of OS is important to
end users.

node OS: the kernel. The linux syscall interface is stable enough that the 
various linux distributions can all run concurrently in neighboring containers 
sharing same kernel. There is really no material reason why the bay OS choice 
must match what distro the container is based on. Although I’m persuaded by 
Hongbin’s concern to mitigate risk of future changes WRT whatever OS distro is 
the prevailing one for bay nodes, there are a few items of concern about 
duality I’d like to zero in on:

1) Participation from Magnum contributors to support the CoreOS specific 
template features has been weak in recent months. By comparison, participation 
relating to Fedora/Atomic have been much stronger.
I have been fixing the CoreOS templates recently. If other contributors are
willing to work with me on this effort, it is reasonable to expect the CoreOS
contribution to become stronger.

2) Properly testing multiple bay node OS distros (would) significantly increase 
the run time and complexity of our functional tests.
Technically, this is not true. We can re-run the Atomic tests on CoreOS by
changing a single field (the image). What needs to be done is moving common
modules into a base class and letting OS-specific modules inherit from them (a
rough sketch follows below).
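
To make that concrete, here is a minimal sketch of the kind of refactoring I
mean. The class names and the image IDs are hypothetical, not actual Magnum
test code:

class BaseK8sBayTest(object):
    # Common test logic shared by every OS-specific subclass (sketch only).
    image_id = None  # the single field an OS-specific subclass overrides

    def bay_create_args(self):
        return {'coe': 'kubernetes', 'image_id': self.image_id}


class AtomicK8sBayTest(BaseK8sBayTest):
    image_id = 'fedora-atomic-latest'


class CoreOSK8sBayTest(BaseK8sBayTest):
    image_id = 'coreos-latest'


if __name__ == '__main__':
    for cls in (AtomicK8sBayTest, CoreOSK8sBayTest):
        print('%s -> %s' % (cls.__name__, cls().bay_create_args()))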

3) Having support for multiple bay node OS choices requires more extensive 
documentation, and more comprehensive troubleshooting details.
This might be true, but we could point to the troubleshooting documentation of
the specific OS. If the selected OS delivers comprehensive troubleshooting
documentation, this problem is resolved.

If we proceed with just one supported disto for bay nodes, and offer 
extensibility points to allow alternates to be used in place of it, we should 
be able to address the risk concern of the chosen distro by selecting an 
alternate when that change is needed, by using those extensibility points. 
These include the ability to specify your own bay image, and the ability to use 
your own associated Heat template.

I see value in risk mitigation, it may make sense to simplify in the short term 
and address that need when it becomes necessary. My point of view might be 
different if we had contributors willing
I think it becomes necessary now. I have been working on Magnum since the early
stage of the project. Probably, I am the most senior active contributor. Based
on my experience, there are a lot of problems with locking in a single OS.
Basically, all the issues from the OS upstream propagate to Magnum (e.g. we
experienced various known/unknown bugs, pain with image building, lack of
documentation, lack of upstream support, etc.). All these experiences remind me
not to rely on a single OS, because you never know what the next obstacle will
be.

and ready to address the variety of drawbacks that accompany the strategy of 
supporting multiple bay node OS choices. In absence of such a community 
interest, my preference is to simplify to increase our velocity. This seems to 
me to be a relatively easy way to reduce complexity around heat template 
versioning. What do you think?

Thanks,

Adrian

On Feb 29, 2016, at 8:40 AM, Hongbin Lu <hongbin...@huawei.com> wrote:

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was on the midcycle and I don't remember any decision to remove CoreOS 
support. Why you want to remove CoreOS templates from the tree. Please note 
that this is a very big decision and please discuss it with the team 
thoughtfully and make sure everyone agree.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was 

Re: [openstack-dev] [magnum] Failed to create trustee %(username) in domain $(domain_id)

2016-02-26 Thread Hongbin Lu
Agreed.

Every new feature should be introduced in a backward-compatible way if possible.
If a new change will break an existing version, it should be properly versioned
and/or follow the corresponding deprecation process. Please feel free to ask for
clarification if the procedure is unclear.

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-25-16 8:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Failed to create trustee %(username) in 
domain $(domain_id)


Thanks Hongbin for your info.

I really think this is not a good way to introduce a new feature, as a new
feature introduced this way often breaks old workflows. It is better when a new
feature is purely additive and the old workflows still function.

Or at least, the error should say "swarm bay now requires trust to work, please
provide the trust-related access information before deploying a new swarm bay"



Thanks


Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: 26/02/2016 08:02 am
Subject: [openstack-dev] [magnum] Failed to create trustee %(username) in 
domain $(domain_id)





Hi team,

FYI, you might encounter the following error if you pull from master recently:

magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 1
Create for bay swarmbay failed: Failed to create trustee %(username) in domain 
$(domain_id) (HTTP 500)"

This is due to a recent commit that added support for trust. In case you don’t 
know, this error can be resolved by running the following steps:

# 1. create the necessary domain and user:
export OS_TOKEN=password
export OS_URL=http://127.0.0.1:5000/v3
export OS_IDENTITY_API_VERSION=3
openstack domain create magnum
openstack user create trustee_domain_admin --password=secret --domain=magnum
openstack role add --user=trustee_domain_admin --domain=magnum admin

# 2. populate configs
source /opt/stack/devstack/functions
export MAGNUM_CONF=/etc/magnum/magnum.conf
iniset $MAGNUM_CONF trust trustee_domain_id $(openstack domain show magnum | 
awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_id $(openstack user show 
trustee_domain_admin | awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_password secret

# 3. screen -r stack, and restart m-api and m-cond


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-26 Thread Hongbin Lu


-Original Message-
From: James Bottomley [mailto:james.bottom...@hansenpartnership.com] 
Sent: February-26-16 12:38 PM
To: Daniel P. Berrange
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] A proposal to separate the design summit

On Fri, 2016-02-26 at 17:24 +, Daniel P. Berrange wrote:
> On Fri, Feb 26, 2016 at 08:55:52AM -0800, James Bottomley wrote:
> > On Fri, 2016-02-26 at 16:03 +, Daniel P. Berrange wrote:
> > > On Fri, Feb 26, 2016 at 10:39:08AM -0500, Rich Bowen wrote:
> > > > 
> > > > 
> > > > On 02/22/2016 10:14 AM, Thierry Carrez wrote:
> > > > > Hi everyone,
> > > > > 
> > > > > TL;DR: Let's split the events, starting after Barcelona.
> > > > > 
> > > > > 
> > > > > 
> > > > > Comments, thoughts ?
> > > > 
> > > > Thierry (and Jay, who wrote a similar note much earlier in 
> > > > February, and Lauren, who added more clarity over on the 
> > > > marketing list, and the many, many of you who have spoken up in 
> > > > this thread ...),
> > > > 
> > > > as a community guy, I have grave concerns about what the long 
> > > > -term effect of this move would be. I agree with your reasons, 
> > > > and the problems, but I worry that this is not the way to solve 
> > > > it.
> > > > 
> > > > Summit is one time when we have an opportunity to hold community 
> > > > up to the folks that think only product - to show them how 
> > > > critical it is that the people that are on this mailing list are 
> > > > doing the awesome things that they're doing, in the upstream, in 
> > > > cooperation and collaboration with their competitors.
> > > > 
> > > > I worry that splitting the two events would remove the community 
> > > > aspect from the conference. The conference would become more 
> > > > corporate, more product, and less project.
> > > > 
> > > > My initial response was "crap, now I have to go to four events 
> > > > instead of two", but as I thought about it, it became clear that 
> > > > that wouldn't happen. I, and everyone else, would end up picking 
> > > > one event or the other, and the division between product and 
> > > > project would deepen.
> > > > 
> > > > Summit, for me specifically, has frequently been at least as 
> > > > much about showing the community to the sales/marketing folks in 
> > > > my own company, as showing our wares to the customer.
> > > 
> > > I think what you describe is a prime reason for why separating the 
> > > events would be *beneficial* for the community contributors. The 
> > > conference has long ago become so corporate focused that its 
> > > session offers little to no value to me as a project contributor. 
> > > What you describe as a benefit of being able to put community 
> > > people infront of business people is in fact a significant 
> > > negative for the design summit productivity. It causes key 
> > > community contributors to be pulled out of important design 
> > > sessions to go talk to business people, making the design sessions 
> > > significantly less productive.
> > 
> > It's Naïve to think that something is so sacrosanct that it will be 
> > protected come what may.  Everything eventually has to justify 
> > itself to the funders.  Providing quid pro quo to sales and 
> > marketing helps enormously with that justification and it can be 
> > managed so it's not a huge drain on productive time.  OpenStack may 
> > be the new shiny now, but one day it won't be and then you'll need 
> > the support of the people you're currently disdaining.
> > 
> > I've said this before in the abstract, but let me try to make it 
> > specific and personal: once the kernel was the new shiny and money 
> > was poured all over us; we were pure and banned management types 
> > from the kernel summit and other events, but that all changed when 
> > the dot com bust came.  You're from Red Hat, if you ask the old 
> > timers about the Ottawa Linux Symposium and allied Kernel Summit I 
> > believe they'll recall that in 2005(or 6) the Red Hat answer to a 
> > plea to fund travel was here's $25 a head, go and find a floor to 
> > crash on.  As the wrangler for the new Linux Plumbers Conference I 
> > had to come up with all sorts of convoluted schemes for getting Red 
> > Hat to fund developer travel most of which involved embarrassing 
> > Brian Stevens into approving it over the objections of his managers.  
> > I don't want to go into detail about how Red Hat reached this 
> > situation; I just want to remind you that it happened before and it 
> > could happen again.
> 
> The proposal to split the design summit off actually aims to reduce 
> the travel cost burden. Currently we have a conference+design summit 
> at the wrong time, which is fairly unproductive due to people being 
> pulled out of the design summit for other tasks. So  we "fixed" that 
> by introducing mid-cycles to get real design work done. IOW 
> contributors end up with 4 events to travel to each year. With the 
> proposed 

[openstack-dev] [magnum] Failed to create trustee %(username) in domain $(domain_id)

2016-02-25 Thread Hongbin Lu
Hi team,

FYI, you might encounter the following error if you pull from master recently:

magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 1
Create for bay swarmbay failed: Failed to create trustee %(username) in domain 
$(domain_id) (HTTP 500)"

This is due to a recent commit that added support for trust. In case you don't 
know, this error can be resolved by running the following steps:

# 1. create the necessary domain and user:
export OS_TOKEN=password
export OS_URL=http://127.0.0.1:5000/v3
export OS_IDENTITY_API_VERSION=3
openstack domain create magnum
openstack user create trustee_domain_admin --password=secret --domain=magnum
openstack role add --user=trustee_domain_admin --domain=magnum admin

# 2. populate configs
source /opt/stack/devstack/functions
export MAGNUM_CONF=/etc/magnum/magnum.conf
iniset $MAGNUM_CONF trust trustee_domain_id $(openstack domain show magnum | 
awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_id $(openstack user show 
trustee_domain_admin | awk '/ id /{print $4}')
iniset $MAGNUM_CONF trust trustee_domain_admin_password secret

# 3. screen -r stack, and restart m-api and m-cond

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-19 Thread Hongbin Lu
I don't see why the existence of the /containers endpoint blocks your workflow.
However, with /containers gone, the alternative workflows are blocked.

As a counterexample, some users want to manage containers through an OpenStack 
API for various reasons (i.e. single integration point, lack of domain 
knowledge of COEs, orchestration with other OpenStack resources: VMs, networks, 
volumes, etc.):

* Deployment of a cluster
* Management of that cluster
* Creation of a container
* Management of that container

As another counterexample, some users just want a container:

* Creation of a container
* Management of that container

Then, should we remove the /bays endpoint as well? Magnum is currently at an
early stage, so workflows are diverse, non-static, and hypothetical. It is a
risk to have Magnum overfit a specific workflow by removing the others.

For your analogy, Cinder is a block storage service, so it doesn't abstract
filesystems. Magnum is a container service [1], so it is reasonable for it to
abstract containers. Again, if your logic is applied, should Nova have an
endpoint that lets you work with an individual hypervisor? Probably not, because
Nova is a compute service.

[1] https://github.com/openstack/magnum/blob/master/specs/containers-service.rst

Best regards,
Hongbin

-Original Message-
From: Kyle Kelley [mailto:kyle.kel...@rackspace.com] 
Sent: January-19-16 2:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

With /containers gone, what Magnum offers is a workflow for consuming container 
orchestration engines:

* Deployment of a cluster
* Management of that cluster
* Key handling (creation, upload, revocation, etc.)

The first two are handled underneath by Nova + Heat, the last is in the purview 
of Barbican. That doesn't matter though.

What users care about is getting access to these resources without having to 
write their own heat template, create a backing key store, etc. They'd like to 
get started immediately with container technologies that are proven.

If you're looking for analogies Hongbin, this would be more like saying that 
Cinder shouldn't have an endpoint that let you work with individual files on a 
volume. It would be unreasonable to try to abstract across filesystems in a 
meaningful and sustainable way.


From: Hongbin Lu <hongbin...@huawei.com>
Sent: Tuesday, January 19, 2016 9:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Assume your logic is applied. Should Nova remove the endpoint of managing VMs? 
Should Cinder remove the endpoint of managing volumes?

I think the best way to deal with the heterogeneity is to introduce a common 
abstraction layer, not to decouple from it. The real critical functionality 
Magnum could offer to OpenStack is to provide a Container-as-a-Service. If 
Magnum is a Deployment-as-a-service, it will be less useful and won't bring too 
much value to the OpenStack ecosystem.

Best regards,
Hongbin

-Original Message-
From: Clark, Robert Graham [mailto:robert.cl...@hpe.com]
Sent: January-19-16 5:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

+1

Doing this, and doing this well, provides critical functionality to OpenStack 
while keeping said functionality reasonably decoupled from the COE API vagaries 
that would inevitably encumber a solution that sought to provide ‘one api to 
control them all’.

-Rob

From: Mike Metral
Reply-To: OpenStack List
Date: Saturday, 16 January 2016 02:24
To: OpenStack List
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Running a fully containerized application optimally & effectively requires the
use of a dedicated COE tool such as Swarm, Kubernetes or Marathon+Mesos.

OpenStack is better suited for managing the underlying infrastructure.

Mike Metral
Product Architect – Private Cloud R
email: mike.met...@rackspace.com
cell: +1-305-282-7606

From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, January 15, 2016 at 8:02 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

A reason is the container abstraction brings containers to OpenStack: Keystone 
for authentication, Heat for orchestration, Horizon for UI, etc.

From: Kyle Kelley [mailto:rgb...@gmail.com]
Sent: January-15-16 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /ba

Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

2016-01-19 Thread Hongbin Lu
Assume your logic is applied. Should Nova remove the endpoint of managing VMs? 
Should Cinder remove the endpoint of managing volumes?

I think the best way to deal with the heterogeneity is to introduce a common 
abstraction layer, not to decouple from it. The real critical functionality 
Magnum could offer to OpenStack is to provide a Container-as-a-Service. If 
Magnum is a Deployment-as-a-service, it will be less useful and won't bring too 
much value to the OpenStack ecosystem.

Best regards,
Hongbin 

-Original Message-
From: Clark, Robert Graham [mailto:robert.cl...@hpe.com] 
Sent: January-19-16 5:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

+1

Doing this, and doing this well, provides critical functionality to OpenStack 
while keeping said functionality reasonably decoupled from the COE API vagaries 
that would inevitably encumber a solution that sought to provide ‘one api to 
control them all’.

-Rob

From: Mike Metral
Reply-To: OpenStack List
Date: Saturday, 16 January 2016 02:24
To: OpenStack List
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

Running a fully containerized application optimally & effectively requires the
use of a dedicated COE tool such as Swarm, Kubernetes or Marathon+Mesos.

OpenStack is better suited for managing the underlying infrastructure.

Mike Metral
Product Architect – Private Cloud R
email: mike.met...@rackspace.com
cell: +1-305-282-7606

From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, January 15, 2016 at 8:02 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

A reason is the container abstraction brings containers to OpenStack: Keystone 
for authentication, Heat for orchestration, Horizon for UI, etc.

From: Kyle Kelley [mailto:rgb...@gmail.com]
Sent: January-15-16 10:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

What are the reasons for keeping /containers?

On Fri, Jan 15, 2016 at 9:14 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
Disagree.

If the container managing part is removed, Magnum is just a COE deployment
tool. This is really a scope mismatch IMO. The middle ground I can see is to
have a flag that allows operators to turn off the container managing part. If
it is turned off, COEs are not managed by Magnum and requests sent to the
/containers endpoint will return a reasonable error code. Thoughts?

Best regards,
Hongbin

From: Mike Metral [mailto:mike.met...@rackspace.com]
Sent: January-15-16 6:24 PM
To: openstack-dev@lists.openstack.org

Subject: Re: [openstack-dev] [magnum] Nesting /containers resource under /bays

I too believe that the /containers endpoint is obstructive to the overall goal 
of Magnum.

IMO, Magnum’s scope should only be concerned with:

  1.  Provisioning the underlying infrastructure required by the Container 
Orchestration Engine (COE) and
  2.  Instantiating the COE itself on top of said infrastructure from step #1.
Anything further regarding Magnum interfacing or interacting with containers 
starts to get into a gray area that could easily evolve into:

  *   Potential race conditions between Magnum and the designated COE and
  *   Would create design & implementation overhead and debt that could bite us 
in the long run seeing how all COE’s operate & are based off various different 
paradigms in terms of describing & managing containers, and this divergence 
will only continue to grow with time.
  *   Not to mention, the recreation of functionality around managing 
containers in Magnum seems redundant in nature as this is the very reason to 
want to use a COE in the first place – because it’s a more suited tool for the 
task
If there is low-hanging fruit in terms of common functionality across all 
COE’s, then those generic capabilities could be abstracted and integrated into 
Magnum, but these have to be carefully examined beforehand to ensure true 
parity exists for the capability across all COE’s.

However, I still worry that going down this route toes the line that Magnum 
should and could be a part of the managing container story to some degree – 
which again should be the sole responsibility of the COE, not Magnum.

I’m in favor of doing away with the /containers endpoint – continuing with it 
just looks like a snowball of scope-mismatch and management issues just waiting 
to happen.

Mike Metral
Product Architect – Private C

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-12 Thread Hongbin Lu
Egor,

Thanks for sharing your insights. I gave it more thought. Maybe the goal can be
achieved without implementing a shared COE. We could move all the master nodes
out of user tenants, containerize them, and consolidate them onto a set of
VMs/physical servers.

I think we could separate the discussion into two:

1.   Should Magnum introduce a new bay type, in which master nodes are 
managed by Magnum (not users themselves)? Like what GCE [1] or ECS [2] does.

2.   How to consolidate the control services that originally runs on master 
nodes of each cluster?

Note that the proposal is for adding a new COE (not for changing the existing 
COEs). That means users will continue to provision existing self-managed COE 
(k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's a good idea; it looks like you propose that Magnum enter
the "schedulers war" (personally I am tired of these debates: Mesos vs Kube vs
Swarm). If your concern is just utilization, you can always run the control
plane on the "agent/slave" nodes. The main reason why operators (at least in our
case) keep them separate is that they need different attention (e.g. I almost
don't care why/when an "agent/slave" node died, but I always double-check that a
master node was repaired or replaced).

One use case I see for a shared COE (at least in our environment) is when
developers want to run just a docker container without installing anything
locally (e.g. docker-machine). But in most cases it's just examples from the
internet or their own experiments ):

But we definitely should discuss it during midcycle next week.

---
Egor

________
From: Hongbin Lu <hongbin...@huawei.com>
To: OpenStack Development Mailing List (not for usage questions)
<openstack-dev@lists.openstack.org>
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi team,

Sorry for bringing up this old thread, but a recent debate on the container
resource [1] reminded me of the use case Kris mentioned below. I am going to
propose a preliminary idea to address the use case. Of course, we could continue
the discussion in the team meeting or at the midcycle.

Idea: Introduce a docker-native COE, which consists of only minion/worker/slave
nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, lbaas vips, floating
ips, etc.)
Details: A traditional COE (k8s/swarm/mesos) consists of master nodes and worker
nodes. In these COEs, control services (i.e. the scheduler) run on master nodes,
and containers run on worker nodes. If we can port the COE control services to
the Magnum control plane and share them with all tenants, we eliminate the need
for master nodes, thus improving resource utilization. In the new COE, users
create/manage containers through Magnum API endpoints. Magnum is responsible for
spinning up tenant VMs, scheduling containers onto the VMs, and managing the
life-cycle of those containers. Unlike other COEs, containers created by this
COE are considered OpenStack-managed resources. That means they will be tracked
in the Magnum DB and accessible by other OpenStack services (i.e. Horizon, Heat,
etc.).
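
As a purely illustrative sketch of the "schedule containers onto the VMs" part
(not a proposed implementation; the data structures are hypothetical stand-ins
for what Magnum would track in its DB), the simplest possible placement policy
could look like this:

def pick_node(nodes):
    # Naive policy: choose the tenant VM currently running the fewest
    # containers.
    return min(nodes, key=lambda node: len(node['containers']))


def schedule(container_name, nodes):
    node = pick_node(nodes)
    node['containers'].append(container_name)
    return node['name']


if __name__ == '__main__':
    tenant_vms = [{'name': 'vm-1', 'containers': ['web-1', 'web-2']},
                  {'name': 'vm-2', 'containers': ['db-1']}]
    print(schedule('web-3', tenant_vms))  # lands on vm-2

In practice the policy would obviously need to account for memory/CPU requests,
affinity, and failures; the point is only that the scheduling decision moves
into Magnum.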

What do you feel about this proposal? Let’s discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindg...@godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying magnum as an answer for how we do containers
company-wide at Godaddy.  I am going to agree with both you and josh.

I agree that managing one large system is going to be a pain, and past
experience tells me this won't be practical/scale; however, from experience I
also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal openstack cloud, about 1/4 of
the projects are currently doing some form of containers on their own, with
more joining every day.  If all of these projects were to convert over to the
current magnum configuration we would suddenly be attempting to
support/configure ~1k magnum clusters.  Considering that everyone will want it
HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips +
floating ips.  From a capacity standpoint this is an excessive amount of
duplicated infrastructure to spin up in projects where people may be running
10–20 containers per project.  From an o

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-15 Thread Hongbin Lu
Regarding the COE mode, it seems there are three options:

1.   Place both master nodes and worker nodes to user’s tenant (current 
implementation).

2.   Place only worker nodes to user’s tenant.

3.   Hide both master nodes and worker nodes from user’s tenant.

Frankly, I don’t know which one will succeed/fail in the future. Each mode 
seems to have use cases. Maybe magnum could support multiple modes?

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: February-15-16 8:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi all,

A few thoughts to add:

I like the idea of isolating the masters so that they are not 
tenant-controllable, but I don't think the Magnum control plane is the right 
place for them. They still need to be running on tenant-owned resources so that 
they have access to things like isolated tenant networks or that any bandwidth 
they consume can still be attributed and billed to tenants.

I think we should extend that concept a little to include worker nodes as well. 
While they should live in the tenant like the masters, they shouldn't be 
controllable by the tenant through anything other than the COE API. The main 
use case that Magnum should be addressing is providing a managed COE 
environment. Like Hongbin mentioned, Magnum users won't have the domain 
knowledge to properly maintain the swarm/k8s/mesos infrastructure the same way 
that Nova users aren't expected to know how to manage a hypervisor.

I agree with Egor that trying to have Magnum schedule containers is going to be 
a losing battle. Swarm/K8s/Mesos are always going to have better scheduling for 
their containers. We don't have the resources to try to be yet another 
container orchestration engine. Besides that, as a developer, I don't want to 
learn another set of orchestration semantics when I already know swarm or k8s 
or mesos.

@Kris, I appreciate the real use case you outlined. In your idea of having 
multiple projects use the same masters, how would you intend to isolate them? 
As far as I can tell none of the COEs would have any way to isolate those teams 
from each other if they share a master. I think this is a big problem with the 
idea of sharing masters even within a single tenant. As an operator, I 
definitely want to know that users can isolate their resources from other users 
and tenants can isolate their resources from other tenants.

Corey

On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao <p...@hyper.sh> wrote:
Hi,

I wanted to give some thoughts to the thread.

There are various perspectives around “Hosted vs Self-managed COE”, but if you
stand in the developer's position, it basically comes down to “Ops vs
Flexibility”.

For those who want more control of the stack, so as to customize it in any way
they see fit, self-managed is a more appealing option. However, one may argue
that the same job can be done with a heat template plus some patchwork of
cinder/neutron, and the heat template is more customizable than magnum, which
probably introduces some requirements on the COE configuration.

For people who don't want to manage the COE, hosted is a no-brainer. The
question here is which one is the core compute engine in the stack, nova or the
COE? Unless you are running a public, multi-tenant OpenStack deployment, it is
highly likely that you are sticking with only one COE. Supposing k8s is what
your team is dealing with every day, then why do you need nova sitting under
k8s, whose job is just launching some VMs? After all, it is the COE that
orchestrates cinder/neutron.

One idea is to put the COE at the same layer as nova. Instead of running atop
nova, the two run side by side. So you get two compute engines: nova for IaaS
workloads, k8s for CaaS workloads. If you go this way, hypernetes
<https://github.com/hyperhq/hypernetes> is probably what you are looking for.

Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with Docker 
registry, and use nova to launch Docker images. But this is not done by 
nova-docker, simply because it is hard to integrate things like cinder/neutron 
with lxc. The idea is a nova-hyper 
driver<https://openstack.nimeyo.com/49570/openstack-dev-proposal-of-nova-hyper-driver>.
 Since Hyper is hypervisor-based, it is much easier to make it work with 
others. SHAMELESS PROMOTION: if you are interested in this idea, we've 
submitted a proposal at the Austin summit: 
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211.

Peng

Disclaimer: I maintain Hyper.

-
Hyper - Make VM run like Container



On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu <hongbin...@huawei.com> wrote:
My replies are inline.

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]

Re: [openstack-dev] [openstack][Magnum] Operation for COE

2016-02-16 Thread Hongbin Lu
Wanghua,

Please add your requests to the midcycle agenda [1], or bring them up in the
team meeting under open discussion. We can discuss them if the agenda allows.

[1] https://etherpad.openstack.org/p/magnum-mitaka-midcycle-topics

Best regards,
Hongbin

From: 王华 [mailto:wanghua.hum...@gmail.com]
Sent: February-16-16 1:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [openstack][Magnum] Operation for COE

Hi all,

Should we add some operational functions for COEs in Magnum? For example,
collecting logs, upgrading the COE, and modifying the COE configuration. I think
these features are very important in production.

Regards,
Wanghua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-14 Thread Hongbin Lu
Kai Qiang,

A major benefit is to have Magnum manage the COEs for end-users. Currently,
Magnum basically has its end-users manage the COEs by themselves after a
successful deployment. This might work well for domain users, but it is a pain
for non-domain users to manage their COEs. By moving master nodes out of users’
tenants, Magnum could offer users a COE management service. For example, Magnum
could offer to monitor the etcd/swarm-manage clusters and recover them on
failure. Again, the pattern of managing COEs for end-users is what the Google
container service and the AWS container service offer. I guess it is fair to
conclude that there are use cases out there?

If we decide to offer a COE management service, we could discuss further how to
consolidate the IaaS resources to improve utilization. Solutions could be
(i) introducing centralized control services for all tenants/clusters, or
(ii) keeping the control services separate but isolating them with containers
(instead of VMs). A typical use case is what Kris mentioned below.

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-13-16 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


Hi HongBin and Egor,
I went through what you talked about, and I am thinking about what the great
benefit for utilisation is here.
The user cases look like the following:

user A wants to have a COE provisioned.
user B wants to have a separate COE (different tenant, non-shared).
user C wants to use an existing COE (same tenant as user A, shared).

When you talk about the utilisation case, it seems you mean that different
tenant users want to use the same control node to manage different nodes. That
seems to try to make the COE OpenStack-tenant aware, and it also means you want
to introduce another control/schedule layer above the COEs. We need to think
about whether it is a typical user case, and what the benefit is compared with
containerisation.


And finally, it is a topic that can be discussed at the midcycle meeting.


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu <hongbin...@huawei.com>
To: Guz Egor <guz_e...@yahoo.com>, "OpenStack Development Mailing List (not for
usage questions)" <openstack-dev@lists.openstack.org>
Date: 13/02/2016 11:02 am
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?





Egor,

Thanks for sharing your insights. I gave it more thoughts. Maybe the goal can 
be achieved without implementing a shared COE. We could move all the master 
nodes out of user tenants, containerize them, and consolidate them into a set 
of VMs/Physical servers.

I think we could separate the discussion into two:
1. Should Magnum introduce a new bay type, in which master nodes are managed by 
Magnum (not users themselves)? Like what GCE [1] or ECS [2] does.
2. How to consolidate the control services that originally runs on master nodes 
of each cluster?

Note that the proposal is for adding a new COE (not for changing the existing 
COEs). That means users will continue to provision existing self-managed COE 
(k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's good idea, it looks you propose Magnum enter to 
"schedulers war" (personally I tired from these debates Mesos vs Kub vs Swarm).
If your concern is just utilization you can always run control plane at 
"agent/slave" nodes, there main reason why operators (at least in our case) 
keep them
separate because they need different attention (e.g. I almost don't care 
why/when "agent/slave" node died, but always double check that master node was
repaired or replaced).

One use case I see for shared COE (at 

Re: [openstack-dev] [magnum] Re: Assistance with Magnum Setup

2016-02-14 Thread Hongbin Lu
Steve,

Thanks for directing Shiva here. BTW, most of your code on objects and db is
still here :).

Shiva,

Please do join the #openstack-containers channel (it is hard to do
troubleshooting on the ML). I believe contributors in the channel are happy to
help you. For the Magnum team, it looks like we should have an installation
guide. Do we have a BP for that? If not, I think we should create one and give
it a high priority.

Best regards,
Hongbin

From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: February-14-16 10:54 AM
To: Shiva Ramdeen
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Re: Assistance with Magnum Setup

Shiva,

First off, welcome to OpenStack :)  Feel free to call me Steve.

Ccing openstack-dev which is typically about development questions not usage 
questions, but you might have found some kind of bug.

I am not sure what the state of Magnum and Keystone is with OpenStack.  I 
recall at our Liberty midcycle we were planning to implement trusts.  Perhaps 
some of that work broke?

I would highly recommend obtaining yourself an IRC client, joining a freenode 
server, and joining the #openstack-containers channel.  Here you can meet with 
the core reviewers and many users who may have seen your problem in the past 
and have pointers for resolution.

Another option is to search the IRC archives for the channel here:
http://eavesdrop.openstack.org/irclogs/%23openstack-containers/

Finally, my detailed knowledge of Magnum is a bit dated, not having written any 
code for Magnum for over 6 months.  Although I wrote a lot of the initial code, 
most of it has been replaced ;) by the rockin Magnum core review team.  They 
can definitely get you going - just find them on irc.

Regards
-steve

From: Shiva Ramdeen
Date: Sunday, February 14, 2016 at 6:33 AM
To: Steven Dake
Subject: Assistance with Magnum Setup


Hello Mr. Dake,



Firstly, let me introduce myself. My name is Shiva Ramdeen; I am a final-year
student at the University of the West Indies studying for my degree in
Electrical and Computer Engineering. I am currently working on my final year
project, which deals with the performance of Magnum and Nova-Docker. I have been
attempting to install Magnum on a Liberty install of OpenStack. However, I have
so far been unable to get Magnum to authenticate with keystone and thus cannot
create swarm bays. I fear that I have exhausted all of the online resources that
explain the setup of Magnum, and as a last resort I am seeking any assistance
that you may be able to provide that may help me resolve this issue.  I would
be available to provide any further details at your convenience. Thank you
in advance.



Kindest Regards,

Shiva Ramdeen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-01 Thread Hongbin Lu
erent if we had contributors willing and ready to address the variety of 
drawbacks that accompany the strategy of supporting multiple bay node OS 
choices. In absence of such a community interest, my preference is to simplify 
to increase our velocity. This seems to me to be a relatively easy way to 
reduce complexity around heat template versioning. What do you think?

Thanks,

Adrian

On Feb 29, 2016, at 8:40 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested 
having Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion here to get a broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS 
support. Why do you want to remove the CoreOS templates from the tree? Please note 
that this is a very big decision; please discuss it with the team 
thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe, we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/k8s/mesos). Since we 
are already going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected 
distro could die in the future; who knows. Why make Magnum take this huge risk? 
Again, supporting a single distro is a very big decision. Please 
bring it up to the team and have it discussed thoughtfully before making any 
decision. Also, Magnum doesn't have to support every distro and every version 
for every coe, but it should support *more than one* popular distro for some COEs 
(especially for the popular COEs).

Corey O'Brien
The discussion at the midcycle started from the idea of adding support for RHEL 
and CentOS. We all discussed and decided that we wouldn't try to support 
everything in tree. Magnum would provide support in-tree for 1 per COE and the 
COE driver interface would allow others to add support for their preferred 
distro out of tree.

Hongbin Lu
I agreed with the part that "we wouldn't try to support everything in tree". That 
doesn't imply a decision to support a single distro. Again, supporting a single 
distro is a huge risk. Why make Magnum take this huge risk?

[1] https://review.openstack.org/#/c/277284/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
OK. If using Keystone is not acceptable, I am going to propose a new approach:

· Store data in Magnum DB

· Encrypt data before writing it to DB

· Decrypt data after loading it from DB

· Have the encryption/decryption key stored in config file

· Use encryption/decryption algorithm provided by a library

The approach above is exactly the approach used by Heat to protect hidden 
parameters [1]. Compared to the Barbican option, this approach is much lighter 
and simpler, and provides a basic level of data protection. This option is a 
good supplement to the Barbican option, which is heavier but provides an advanced 
level of protection. It fits the use case where users don’t want to 
install Barbican but still want basic protection.
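
For illustration, a minimal sketch of the encrypt-before-write / decrypt-after-read 
flow described above (this is not Heat's or Magnum's actual code; it assumes the 
cryptography library's Fernet recipe, with the key normally read from the service 
config):

    from cryptography.fernet import Fernet

    # In a real deployment the key would come from a config option;
    # generating one here just keeps the sketch self-contained.
    kek = Fernet.generate_key()
    cipher = Fernet(kek)

    def encrypt(plaintext):
        # Encrypt data before writing it to the DB.
        return cipher.encrypt(plaintext)

    def decrypt(ciphertext):
        # Decrypt data after loading it from the DB.
        return cipher.decrypt(ciphertext)

    stored = encrypt(b"-----BEGIN CERTIFICATE-----...")
    assert decrypt(stored) == b"-----BEGIN CERTIFICATE-----..."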

If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum. I also wonder whether Heat has a plan to set a hard 
dependency on Barbican just for protecting the hidden parameters.

If you don’t like code duplication between Magnum and Heat, I would suggest moving 
the implementation to an Oslo library to keep it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data, so Magnum would be on the hook 
to do it. That means that if security is a requirement, you'd have to 
duplicate more than just code; Magnum would start carrying a larger security 
burden. Since we have a system designed to securely store data, I think that's 
the best place for data that needs to be secure.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
Thanks Adrian,

I think the Keystone approach will work. For others, please speak up if it 
doesn’t work for you.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-17-16 9:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I tweaked the blueprint in accordance with this approach, and approved it for 
Newton:
https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

I think this is something we can all agree on as a middle ground. If not, I’m 
open to revisiting the discussion.

Thanks,

Adrian

On Mar 17, 2016, at 6:13 PM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:

Hongbin,

One alternative we could discuss as an option for operators that have a good 
reason not to use Barbican, is to use Keystone.

Keystone credentials store: 
http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials

The contents are stored in plain text in the Keystone DB, so we would want to 
generate an encryption key per bay, encrypt the certificate, and store it in 
Keystone. We would then use the same key to decrypt it upon reading it back. This 
might be an acceptable middle ground for clouds that will not or cannot 
run Barbican. This should work for any OpenStack cloud since Grizzly. The 
total amount of code in Magnum would be small, as the API already exists. We 
would need a library function to encrypt and decrypt the data, and ideally a 
way to select different encryption algorithms in case one is judged weak at 
some point in the future, justifying the use of an alternate.
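
For illustration only, a rough sketch of that flow: generate a per-bay key, 
encrypt the certificate, and park the result in the Keystone credentials store. 
The keystoneclient call shape and the 'magnum_cert' type name below are 
assumptions made for the sketch, not settled design:

    import json

    from cryptography.fernet import Fernet
    from keystoneclient.v3 import client as keystone_client

    def store_bay_cert(session, user_id, project_id, bay_uuid, cert_pem):
        bay_key = Fernet.generate_key()                  # per-bay encryption key
        encrypted = Fernet(bay_key).encrypt(cert_pem).decode()
        ks = keystone_client.Client(session=session)
        ks.credentials.create(user=user_id,
                              project=project_id,
                              type="magnum_cert",        # illustrative type name
                              blob=json.dumps({"bay_uuid": bay_uuid,
                                               "cert": encrypted}))
        return bay_key  # the per-bay key itself still needs a safe home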

Adrian


On Mar 17, 2016, at 4:55 PM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:

Hongbin,


On Mar 17, 2016, at 2:25 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

Adrian,

I think we need a broader set of inputs in this matter, so I moved the 
discussion from the whiteboard back to here. Please check my replies inline.


I would like to get a clear problem statement written for this.
As I see it, the problem is that there is no safe place to put certificates in 
clouds that do not run Barbican.
It seems the solution is to make it easy to add Barbican such that it's 
included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates 
securely.

I am seeking more clarity about why a non-Barbican solution is desired. Why is 
there resistance to adopting both Magnum and Barbican together? I think the 
answer is that people think they can make Magnum work with really old clouds 
that were set up before Barbican was introduced. That expectation is simply not 
reasonable. If there were a way to easily add Barbican to older clouds, perhaps 
this reluctance would melt away.


Magnum should not be in the business of credential storage when there is an 
existing service focused on that need.

Is there an issue with running Barbican on older clouds?
Anyone can choose to use the builtin option with Magnum if they don't have 
Barbican.
A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates on 
the local file system. A few of us had concerns about this approach (in particular, 
Tom Cammann gave a -2 on the review [1]) because it cannot scale beyond a 
single conductor. Finally, we made a compromise to land this option and use it 
for testing/debugging only. In other words, this option is not for production. 
As a result, Barbican becomes the only option for production, which is the root 
of the problem. It basically forces everyone to install Barbican in order to 
use Magnum.

[1] https://review.openstack.org/#/c/212395/


It's probably a bad idea to replicate them.
That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
agreed to have two phases of implementation, and the statement was made by you 
[2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

The context there is important. Barbican was considered for two purposes: (1) 
CA signing capability, and (2) certificate storage. My willingness to implement 
an alternative was based on our need to get a certificate generation and 
signing solution that actua

Re: [openstack-dev] [magnum] High Availability

2016-03-18 Thread Hongbin Lu
Douglas,

I am not opposed to adopting Barbican in Magnum (in fact, we already adopted 
Barbican). What I am opposed to is a Barbican lock-in, which already has a 
negative impact on Magnum adoption based on our feedback. I also want to see 
Barbican adoption increase in the future, with all our users having Barbican 
installed in their clouds. If that happens, I have no problem with a hard 
dependency on Barbican.

Best regards,
Hongbin

-Original Message-
From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com] 
Sent: March-18-16 9:45 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] High Availability

Hongbin,

I think Adrian makes some excellent points regarding the adoption of Barbican.  
As the PTL for Barbican, it's frustrating to me to constantly hear from other 
projects that securing their sensitive data is a requirement but then turn 
around and say that deploying Barbican is a problem.

I guess I'm having a hard time understanding the operator persona that is 
willing to deploy new services with security features but unwilling to also 
deploy the service that is meant to secure sensitive data across all of 
OpenStack.

I understand one barrier to entry for Barbican is the high cost of Hardware 
Security Modules, which we recommend as the best option for the Storage and 
Crypto backends for Barbican.  But there are also other options for securing 
Barbican using open source software like DogTag or SoftHSM.

I also expect Barbican adoption to increase in the future, and I was hoping 
that Magnum would help drive that adoption.  There are also other projects that 
are actively developing security features like Swift Encryption, and DNSSEC 
support in Designate.  Eventually these features will also require Barbican, so 
I agree with Adrian that we as a community should be encouraging deployers to 
adopt the best security practices.

Regarding the Keystone solution, I'd like to hear the Keystone team's feedback 
on that.  It definitely sounds to me like you're trying to put a square peg in 
a round hole.

- Doug

On 3/17/16 8:45 PM, Hongbin Lu wrote:
> Thanks Adrian,
> 
>  
> 
> I think the Keystone approach will work. For others, please speak up 
> if it doesn't work for you.
> 
>  
> 
> Best regards,
> 
> Hongbin
> 
>  
> 
> *From:*Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* March-17-16 9:28 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] High Availability
> 
>  
> 
> Hongbin,
> 
>  
> 
> I tweaked the blueprint in accordance with this approach, and approved 
> it for Newton:
> 
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
> re
> 
>  
> 
> I think this is something we can all agree on as a middle ground, If 
> not, I'm open to revisiting the discussion.
> 
>  
> 
> Thanks,
> 
>  
> 
> Adrian
> 
>  
> 
> On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.o...@rackspace.com
> <mailto:adrian.o...@rackspace.com>> wrote:
> 
>  
> 
> Hongbin,
> 
> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
> 
> Keystone credentials store:
> 
> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-ap
> i-v3.html#credentials-v3-credentials
> 
> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key
> to decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount
> of code in Magnum would be small, as the API already exists. We
> would need a library function to encrypt and decrypt the data, and
> ideally a way to select different encryption algorithms in case one
> is judged weak at some point in the future, justifying the use of an
> alternate.
> 
> Adrian
> 
> 
> On Mar 17, 2016, at 4:55 PM, Adrian Otto <adrian.o...@rackspace.com
> <mailto:adrian.o...@rackspace.com>> wrote:
> 
> Hongbin,
> 
> 
> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com
> <mailto:hongbin...@huawei.com>> wrote:
> 
> Adrian,
> 
> I think we need a boarder set of inputs in this matter, so I moved
> the discussion from whiteboard back to here. Please check my replies
> inline.
> 
> 
> I would like to get a clear problem statement written for this.
> As I see it, the

Re: [openstack-dev] [all] [api] Reminder: WSME is not being actively maintained

2016-03-11 Thread Hongbin Lu
I think we'd better have clear guidance here.

For projects that are currently using WSME, should they have a plan to migrate 
to other tools? If yes, are there any suggestions for replacement tools? I 
think it would be clearer to have an official guideline on this matter.

Best regards,
Hongbin

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: March-08-16 10:51 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all] [api] Reminder: WSME is not being actively 
maintained

Excerpts from Chris Dent's message of 2016-03-08 11:25:48 +:
> 
> Last summer Lucas Gomes and I were press ganged into becoming core on 
> WSME. Since then we've piecemeal been verifying bug fixes and 
> generally trying to keep things moving. However, from the beginning we 
> both agreed that WSME is _not_ a web framework that we should be 
> encouraging. Though it looks like it started with very good 
> intentions, it never really reached a state where any of the following are 
> true:
> 
> * The WSME code is easy to understand and maintain.
> * WSME provides correct handling of HTTP (notably response status
>and headers).
> * WSME has an architecture that is suitable for creating modern
>Python-based web applications.
> 
> Last summer we naively suggested that projects that are using it move 
> to using something else. That suggestion did not take into account the 
> realities of OpenStack.
> 
> So we need to come up with a new plan. Lucas and I can continue to 
> merge bug fixes as people provide them (and we become aware of them) 
> and we can continue to hassle Doug Hellman to make a release when one 
> is necessary but this does little to address the three gaps above nor 
> the continued use of the framework in existing projects.
> 
> Ideas?

One big reason for choosing WSME early on was that it had support for both XML 
and JSON APIs without the application code needing to do anything explicitly. 
In the time since projects started using WSME, the community has decided to 
stop providing XML API support and some other tools have been picked up 
(JSONSchema, Voluptuous,
etc.) that provide parsing and validation features similar to WSME.
It seems natural that we build new APIs using those tools instead of WSME. For 
existing functioning API endpoints, we can leave them alone (using WSME) or 
change them one at a time as they are extended with new features. I don't see 
any reason to rewrite anything just to change tools.
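
As a rough illustration of that direction (the schema and payload below are made 
up for the example, not taken from any project), request-body validation with the 
jsonschema library looks like this:

    import jsonschema

    bay_schema = {
        "type": "object",
        "properties": {
            "name": {"type": "string", "minLength": 1},
            "node_count": {"type": "integer", "minimum": 1},
        },
        "required": ["name"],
        "additionalProperties": False,
    }

    # Raises jsonschema.exceptions.ValidationError if the body is malformed.
    jsonschema.validate({"name": "my-bay", "node_count": 3}, bay_schema)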

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Hongbin Lu
Adrian,

I think we need a broader set of inputs in this matter, so I moved the 
discussion from the whiteboard back to here. Please check my replies inline.

> I would like to get a clear problem statement written for this.
> As I see it, the problem is that there is no safe place to put certificates 
> in clouds that do not run Barbican.
> It seems the solution is to make it easy to add Barbican such that it's 
> included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates 
securely.

> Magnum should not be in the business of credential storage when there is an 
> existing service focused on that need.
>
> Is there an issue with running Barbican on older clouds?
> Anyone can choose to use the builtin option with Magnum if they don't have 
> Barbican.
> A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates on 
the local file system. A few of us had concerns about this approach (in particular, 
Tom Cammann gave a -2 on the review [1]) because it cannot scale beyond a 
single conductor. Finally, we made a compromise to land this option and use it 
for testing/debugging only. In other words, this option is not for production. 
As a result, Barbican becomes the only option for production, which is the root 
of the problem. It basically forces everyone to install Barbican in order to 
use Magnum.

[1] https://review.openstack.org/#/c/212395/ 

> It's probably a bad idea to replicate them.
> That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all 
agreed to have two phases of implementation, and the statement was made by you 
[2].


#agreed Magnum will use Barbican for an initial implementation for certificate 
generation and secure storage/retrieval.  We will commit to a second phase of 
development to eliminating the hard requirement on Barbican with an alternate 
implementation that implements the functional equivalent implemented in Magnum, 
which may depend on libraries, but not Barbican.


[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

I have trouble understanding that blueprint. I will put some remarks on the 
whiteboard. Duplicating Barbican sounds like a mistake to me.

--
Adrian

> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> The problem of missing Barbican alternative implementation has been raised 
> several times by different people. IMO, this is a very serious issue that 
> will hurt Magnum adoption. I created a blueprint for that [1] and set the PTL 
> as approver. It will be picked up by a contributor once it is approved.
> 
> [1] 
> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-sto
> re
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: March-17-16 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> We're on the way, the API is using haproxy load balancing in the same way all 
> openstack services do here - this part seems to work fine.
> 
> For the conductor we're stopped due to bay certificates - we don't currently 
> have barbican so local was the only option. To get them accessible on all 
> nodes we're considering two options:
> - store bay certs in a shared filesystem, meaning a new set of 
> credentials in the boxes (and a process to renew fs tokens)
> - deploy barbican (some bits of puppet missing we're sorting out)
> 
> More news next week.
> 
> Cheers,
> Ricardo
> 
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) 
>> <daneh...@cisco.com> wrote:
>> All,
>> 
>> Does anyone have experience deploying Magnum in a highly-available fashion?
>> If so, I'm interested in learning from your experience. My biggest 
>> unknown is the Conductor service. Any insight you can provide is 
>> greatly appreciated.
>> 
>> Regards,
>> Daneyon Hansen
>> 
>> _
>> _  OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cg

[openstack-dev] [magnum] PTL candidacy

2016-03-19 Thread Hongbin Lu
Hi,

I would like to announce my candidacy for the PTL position of Magnum.

To introduce myself, my involvement in Magnum began in December 2014, when 
the project was at a very early stage. Since then, I have been working with the 
team to explore the roadmap, implement and refine individual components, and 
gradually grow the feature set. Along the way, I've developed comprehensive 
knowledge of the architecture, which has led me to take on more leadership 
responsibilities. In the past release cycle, I started taking on some of the PTL 
responsibilities when the current PTL was unavailable. I believe my past 
experience shows that I am qualified for the Magnum PTL position.

In my opinion, Magnum's key objective is to pursue tight integration between 
OpenStack and the various Container Orchestration Engines (COEs) such as 
Kubernetes, Docker Swarm, and Apache Mesos. Therefore, I would suggest giving 
priority to the features that will improve the integration in this regard. In 
particular, I would emphasize the following features:

* Neutron integration: Currently, Flannel is the only supported network driver 
for providing connectivity between containers on different hosts. Flannel is 
mostly used for overlay networking, and it has significant performance 
overhead. In the Newton cycle, I would suggest we collaborate with the Kuryr 
team to develop a non-overlay network driver.
* Cinder integration: Magnum supports using Cinder volumes for storing container 
images. We should add support for mounting Cinder volumes to containers as 
data volumes as well.
* Ironic integration: Add support for the Ironic virt-driver to enable 
high-performance containers on baremetal servers. We identified this as a key 
feature in a few previous release cycles, but unfortunately it 
hasn't been fully implemented yet.

In addition, I believe the items below are important and need attention in the 
Newton cycle:

* Pluggable architecture: Refine the architecture to make it extensible. As a 
result, third-party vendors can plugin their own flavor of COEs.
* Quality assurance: Improve coverage of integration and unit tests.
* Documentation: Add missing documents and enhance existing documents.
* Remove hard dependency: Eliminate the hard dependency on Barbican by implementing 
a functionally equivalent replacement. Note that this is technical debt [1] and 
should be cleaned up in the Newton cycle.
* Horizon UI: Enhance our Horizon plugin.
* Grow the community: Attract new contributors to Magnum.

In the long term, I hope to work towards the goal of making OpenStack a 
compelling platform for hosting containerized applications. To achieve this 
goal, we need to identify and develop unique capabilities that could 
differentiate Magnum from its competitors, thus attracting users to move their 
container workloads to OpenStack. As a starting point, below is a list of features 
that I believe we could explore. Please don't consider these final decisions; 
we will definitely debate each of them. Also, you are always welcome to 
contribute your own list of requirements:

* Resource interconnection and orchestration: Support dynamically connecting 
COE-managed resources (e.g. a container) to OpenStack-managed resources (e.g. a 
Neutron network), thus providing the capability to link containerized 
applications to existing OpenStack infrastructure. By doing that, we enable 
orchestration across COE-managed resources and OpenStack-managed resources 
through a Heat template.
* Integrated authentication system: Integrate the COE authentication systems with 
Keystone, thus eliminating the pain of handling multiple authentication 
mechanisms.
* Standard APIs: Hide the heterogeneity of the various COEs and expose a unified 
interface to manage resources of various kinds.

Thank you for considering my PTL candidacy.

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-14 Thread Hongbin Lu


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: March-14-16 4:49 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Steve,

I think you may have misunderstood our intent here. We are not seeking to lock 
in to a single OS vendor. Each COE driver can have a different OS. We can have 
multiple drivers per COE. The point is that drivers should be simple, and 
therefore should support one Bay node OS each. That would mean taking what we 
have today in our Kubernetes Bay type implementation and breaking it down into 
two drivers: one for CoreOS and another for Fedora/Atomic. New drivers would 
start out in a contrib directory where complete functional testing would not be 
required. In order to graduate one out of contrib and into the realm of support 
of the Magnum dev team, it would need to have a full set of tests, and someone 
actively maintaining it.
OK. It sounds like the proposal allows more than one OS to be in-tree, as long 
as the second OS goes through an incubation process. If that is what you mean, 
it sounds reasonable to me.

Multi-personality drivers would be relatively complex. That approach would slow 
down COE-specific feature development, and complicate maintenance that is 
needed as new versions of the dependency chain are bundled in (docker, k8s, 
etcd, etc.). We have all agreed that having integration points that allow for 
alternate OS selection is still our direction. This follows the pattern that we 
set previously when deciding what networking options to support. We will have 
one that's included as a default, and a way to plug in alternates.

Here is what I expect to see when COE drivers are implemented:

Docker Swarm:
Default driver Fedora/Atomic
Alternate driver: TBD

Kubernetes:
Default driver Fedora/Atomic
Alternate driver: CoreOS

Apache Mesos/Marathon:
Default driver: Ubuntu
Alternate driver: TBD

We can allow an arbitrary number of alternates. Those TBD items can be 
initially added to the contrib directory, and with the right level of community 
support can be advanced to defaults if shown to work better, be more 
straightforward to maintain, be more secure, or whatever criteria is important 
to us when presented with the choice. Such criteria will be subject to 
community consensus. This should allow for free experimentation with alternates 
to allow for innovation. See how this is not locking in a single OS vendor?

Adrian

On Mar 14, 2016, at 12:41 PM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:

Hongbin,

When we are at a disagreement in the Kolla core team, we have the Kolla core 
reviewers vote on the matter. This is typical standard OpenStack best practice.

I think the vote would be something like
"Implement one OS/COE/network/storage prototype, or implement many."

I don't have a horse in this race, but I think it would be seriously damaging 
to Magnum to lock in to a single vendor.

Regards
-steve


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, March 7, 2016 at 10:06 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro



From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-07-16 8:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin, I think the offer to support different OS options is a perfect example 
both of what we want and what we don't want. We definitely want to allow for 
someone like yourself to maintain templates for whatever OS they want and to 
have that option be easily integrated in to a Magnum deployment. However, when 
developing features or bug fixes, we can't wait for you to have time to add it 
for whatever OS you are promising to maintain.
It might be true that supporting additional OSes could slow down the development 
speed, but the key question is how big the impact will be. Does it outweigh 
the benefits? IMO, the impact doesn't seem to be significant, given the fact 
that most features and bug fixes are OS agnostic. Also, keep in mind that every 
feature we introduce (a variety of COEs, Nova virt-drivers, network drivers, 
volume drivers, ...) incurs a 
maintenance overhead. If we want optimal development speed, we will be 
limited to supporting a single COE/virt driver/network driver/volume driver. I 
guess that is not the direction we want to go in?
Instead, we would all be force

[openstack-dev] [magnum] Agenda for tomorrow team meeting

2016-03-28 Thread Hongbin Lu
Hi team,

Please review the agenda for our team meeting tomorrow: 
https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-03-29_1600_UTC
 . Please feel free to add items to the agenda if you like.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Austin Design Summit track layout

2016-03-30 Thread Hongbin Lu
Hi Thierry,

After discussing with the Kuryr PTL (Gal), we agreed to have a shared fishbowl 
session between Magnum and Kuryr. I would like to schedule it for Thursday 11:50 
- 12:30 for now (using the original Magnum fishbowl slot). We might adjust the 
time later if needed. Thanks.

Best regards,
Hongbin

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: March-30-16 5:25 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] Austin Design Summit track layout

See new version attached. This will be pushed to the official schedule 
tomorrow, so please reply here today if you see any major issue with it.

Changes:
- swapped the release and stable slots to accommodate mriedem
- moved Astara fishbowl to Thu morning to avoid conflict with Tacker
- moved OpenStackClient, Stable and Release to a larger fishbowl room

Cheers,

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Containers lifecycle management

2016-04-06 Thread Hongbin Lu


> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: April-06-16 12:16 PM
> To: Hongbin Lu
> Cc: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Containers lifecycle management
> 
> On 06/04/16 15:54 +, Hongbin Lu wrote:
> >
> >
> >> -Original Message-
> >> From: Flavio Percoco [mailto:fla...@redhat.com]
> >> Sent: April-06-16 9:14 AM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: [openstack-dev] [magnum] Containers lifecycle management
> >>
> >>
> >> Greetings,
> >>
> >> I'm fairly new to Magnum and I hope my comments below are accurate.
> >>
> >> After reading some docs, links and other references, I seem to
> >> understand the Magnum team has a debate on whether providing
> >> abstraction for containers lifecycle is something the project should
> >> do or not. There's a patch that attempts to remove PODs and some
> >> debates on whether `container-*` commands are actually useful or not.
> >
> >FYI, according to the latest decision [1][2], below is what it will be:
> >* The k8s abstractions (pod/service/replication controller) will be
> removed. Users will need to use native tool (i.e. kubectl) to consume
> the k8s service.
> >* The docker swarm abstraction (container) will be moved to a
> separated driver. In particular, there will be two drivers for
> operators to select. The first driver will have minimum functionality
> (i.e. provision/manage/delete the swarm cluster). The second driver
> will have additional APIs to manage container resources in the swarm
> bay.
> >
> >[1] https://wiki.openstack.org/wiki/Magnum/NativeAPI
> >[2] https://etherpad.openstack.org/p/magnum-native-api
> >
> >>
> >> Based on the above, I wanted to understand what would be the
> >> recommended way for services willing to consume magnum to run
> >> containers? I've been digging a bit into what would be required for
> >> Trove to consume Magnum and based on the above, it seems the answer
> >> is that it should support either docker, k8s or mesos instead.
> >>
> >> - Is the above correct?
> >
> >I think it is correct. At current stage, Trove needs to select a bay
> type (docker swarm, k8s or mesos). If the use case is to manage a
> single container, it is recommended to choose the docker swarm bay type.
> >
> >> - Is there a way to create a container, transparently, on whatever
> >> backend using
> >>   Magnum's API?
> >
> >At current stage, it is impossible. There is a blueprint [3] for
> proposing to unify the heterogeneity of different bay types, but we are
> in disagreement on whether Magnum should provide such functionality.
> You are welcome to contribute your use cases if you prefer to have it
> implemented.
> >
> >[3] https://blueprints.launchpad.net/magnum/+spec/unified-containers
> 
> Thanks for the clarifications Hongbin.
> 
> Would it make sense to have the containers abstraction do this for
> other bays too?

This is a controversial topic. The Magnum team has discussed it before, and we 
are in disagreement. I have proposed re-discussing it at the design summit 
(requested topic #16).

[1] https://etherpad.openstack.org/p/magnum-newton-design-summit-topics

> 
> Flavio
> 
> --
> @flaper87
> Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][neutron] AttributeError: 'str' object has no attribute 'strftime'

2016-04-07 Thread Hongbin Lu
Hi all,
The Magnum gate recently broke with the error "AttributeError: 'str' object has no 
attribute 'strftime'" (here is a full log [1]). I would like to confirm whether a 
recent commit in Neutron caused the breakage. If so, a quick 
fix would be greatly appreciated.

[1] 
http://logs.openstack.org/91/301891/1/check/gate-functional-dsvm-magnum-api/ea0d4ba/logs/screen-q-lbaas.txt.gz
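
For context, the failure class is easy to reproduce: a timestamp that is left as a 
string somewhere instead of being parsed into a datetime. The snippet below is 
only an illustration of the error, not the actual neutron-lbaas code:

    from datetime import datetime

    created_at = "2016-04-07T12:00:00"   # value left as a plain string
    # created_at.strftime("%Y-%m-%d")    # AttributeError: 'str' object has no
    #                                    # attribute 'strftime'
    parsed = datetime.strptime(created_at, "%Y-%m-%dT%H:%M:%S")
    print(parsed.strftime("%Y-%m-%d"))   # works once parsed to a datetime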

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-04-07 Thread Hongbin Lu
Hi all,
Thanks for your feedback. The vote is unanimous. Eli Qiao has been added to the 
core team [1].

[1] https://review.openstack.org/#/admin/groups/473,members

Best regards,
Hongbin

From: Vilobh Meshram [mailto:vilobhmeshram.openst...@gmail.com]
Sent: April-04-16 10:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core 
reviewer team

+1

On Sat, Apr 2, 2016 at 7:24 AM, Kai Qiang Wu 
<wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>> wrote:

+ 1 for Eli :)


Thanks

Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com<mailto:wk...@cn.ibm.com>
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: 01/04/2016 02:20 am
Subject: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer 
team





Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His 
contribution started about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contribution covers 
various aspects (e.g. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of the Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1-week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli will not be able to join the core team 
and will need to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-04-05 Thread Hongbin Lu
Hi Monty,

Thanks for your guidance. I have appended your inputs to the blueprint [1].

[1] https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips

Best regards,
Hongbin

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: March-31-16 1:18 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

A few things:

Public IPs and Floating IPs are not the same thing.
Some clouds have public IPs. Some have floating ips. Some have both.

I think it's important to be able to have magnum work with all of the above.

If the cloud does not require using a floating IP (as most do not) to get 
externally routable network access, magnum should work with that.

If the cloud does require using a floating IP (as some do) to get externally 
routable network access, magnum should be able to work with that.

In either case, it's also possible the user will not desire the thing they are 
deploying in magnum to be assigned an IP on a network that routes off of the 
cloud. That should also be supported.

Shade has code to properly detect most of those situations that you can look at 
for all of the edge cases - however, since magnum is installed by the operator, 
I'd suggest making a config value for it that allows the operator to express 
whether or not the cloud in question requires floating ips as it's 
EXCEPTIONALLY hard to accurately detect.
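
A minimal sketch of such an operator knob, assuming oslo.config (the option name 
and group below are placeholders, not Magnum's actual configuration):

    from oslo_config import cfg

    CONF = cfg.CONF

    cluster_opts = [
        cfg.BoolOpt('floating_ip_enabled',
                    default=True,
                    help='Whether this cloud needs a floating IP to give bay '
                         'nodes externally routable network access.'),
    ]
    CONF.register_opts(cluster_opts, group='cluster')

    # Templates/drivers would then branch on CONF.cluster.floating_ip_enabled.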

On 03/31/2016 12:42 PM, Guz Egor wrote:
> Hongbin,
> It's correct, I was involved in two big OpenStack private cloud 
> deployments and we never had public ips.
> In such case Magnum shouldn't create any private networks, operator 
> need to provide network id/name or it should use default  (we used to 
> have networking selection logic in
> scheduler) .
>
> ---
> Egor
>
> ----------
> --
> *From:* Hongbin Lu <hongbin...@huawei.com>
> *To:* Guz Egor <guz_e...@yahoo.com>; OpenStack Development Mailing 
> List (not for usage questions) <openstack-dev@lists.openstack.org>
> *Sent:* Thursday, March 31, 2016 7:29 AM
> *Subject:* RE: [openstack-dev] [magnum] Are Floating IPs really needed 
> for all nodes?
>
> Egor,
> I agree with what you said, but I think we need to address the problem 
> that some clouds are lack of public IP addresses. It is not uncommon 
> that a private cloud is running without public IP addresses, and they 
> already figured out how to route traffics in and out. In such case, a 
> bay doesn’t need to have floating IPs and the NodePort feature seems 
> to work with the private IP address.
> Generally speaking, I think it is useful to have a feature that allows 
> bays to work without public IP addresses. I don’t want to end up in a 
> situation that Magnum is unusable because the clouds don’t have enough 
> public IP addresses.
> Best regards,
> Hongbin
> *From:*Guz Egor [mailto:guz_e...@yahoo.com]
> *Sent:* March-31-16 12:08 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Are Floating IPs really needed 
> for all nodes?
> -1
> who is going to run/support this proxy? also keep in mind that 
> Kubernetes Service/NodePort
> (http://kubernetes.io/docs/user-guide/services/#type-nodeport)
> functionality is not going to work without public ip and this is very 
> handy feature.
> ---
> Egor
> --
> --
> *From:*王华<wanghua.hum...@gmail.com <mailto:wanghua.hum...@gmail.com>>
> *To:* OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org
> <mailto:openstack-dev@lists.openstack.org>>
> *Sent:* Wednesday, March 30, 2016 8:41 PM
> *Subject:* Re: [openstack-dev] [magnum] Are Floating IPs really needed 
> for all nodes?
> Hi yuanying,
> I agree to reduce the usage of floating IP. But as far as I know, if 
> we need to pull docker images from docker hub in nodes floating ips 
> are needed. To reduce the usage of floating ip, we can use proxy. Only 
> some nodes have floating ips, and other nodes can access docker hub by 
> proxy.
> Best Regards,
> Wanghua
> On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao <liyong.q...@intel.com 
> <mailto:liyong.q...@intel.com>> wrote:
>
> Hi Yuanying,
> +1
> I think we can add option on whether to using floating ip address 
> since IP address are kinds of resource which not wise to waste.
> On 2016年03月31日10:40, 大塚元央wrote:
>
> Hi team,
> Previously, we had a reason why all nodes should have floating ips [1].
> But now we have a LoadBalancer features for masters [2] and minions [3].
> And also minions do not n

Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-04-05 Thread Hongbin Lu
Hi all,
Thanks for your inputs. We discussed this proposal in our team meeting [1], and 
we all agreed to support an option to remove the need for floating IPs. A 
blueprint was created for implementing this feature: 
https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips . Please 
feel free to assign it to yourself if you are interested in implementing it. Thanks.
[1] 
http://eavesdrop.openstack.org/meetings/containers/2016/containers.2016-04-05-16.00.txt

Best regards,
Hongbin

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: April-01-16 1:57 AM
To: Guz Egor; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi Egor,

I know some people still want to use floating IPs to connect to nodes,
so I will not remove the floating IP feature completely.
I will only introduce an option that disables assigning floating IPs to nodes,
because some people don't want floating IPs assigned to their nodes.

Thanks
-yuanying
On Thu, Mar 31, 2016 at 13:11, Guz Egor wrote:
-1

who is going to run/support this proxy? also keep in mind that Kubernetes 
Service/NodePort (http://kubernetes.io/docs/user-guide/services/#type-nodeport)
functionality is not going to work without public ip and this is very handy 
feature.

---
Egor


From: 王华
To: OpenStack Development Mailing List (not for usage questions)
Sent: Wednesday, March 30, 2016 8:41 PM

Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi yuanying,

I agree to reduce the usage of floating IP. But as far as I know, if we need to 
pull
docker images from docker hub in nodes floating ips are needed. To reduce the
usage of floating ip, we can use proxy. Only some nodes have floating ips, and
other nodes can access docker hub by proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao wrote:
Hi Yuanying,
+1
I think we can add option on whether to using floating ip address since IP 
address are
kinds of resource which not wise to waste.

On 2016-03-31 10:40, 大塚元央 wrote:
Hi team,

Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3],
and minions do not necessarily need to have floating IPs [4].
I think it's time to remove floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files,
so it's not a good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying




--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-04-05 Thread Hongbin Lu
Egor,
I agree with you. I think Magnum should support another option to connect a bay 
to an existing Neutron private network instead of creating one. If you like, we 
can discuss it separately in our next team meeting or at the design summit.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-31-16 12:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hongbin,

That's correct. I was involved in two big OpenStack private cloud deployments, and 
we never had public IPs.
In such a case Magnum shouldn't create any private networks; the operator needs to 
provide a network id/name, or it should use a default (we used to have network 
selection logic in the scheduler).

---
Egor


From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
To: Guz Egor <guz_e...@yahoo.com<mailto:guz_e...@yahoo.com>>; OpenStack 
Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Sent: Thursday, March 31, 2016 7:29 AM
Subject: RE: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Egor,

I agree with what you said, but I think we need to address the problem that 
some clouds lack public IP addresses. It is not uncommon for a private cloud to 
run without public IP addresses, having already figured out how to route traffic 
in and out. In such a case, a bay doesn’t need to have floating IPs, and the 
NodePort feature seems to work with the private IP address.

Generally speaking, I think it is useful to have a feature that allows bays to 
work without public IP addresses. I don’t want to end up in a situation where 
Magnum is unusable because the clouds don’t have enough public IP addresses.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-31-16 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

-1

who is going to run/support this proxy? also keep in mind that Kubernetes 
Service/NodePort (http://kubernetes.io/docs/user-guide/services/#type-nodeport)
functionality is not going to work without public ip and this is very handy 
feature.

---
Egor


From: 王华 <wanghua.hum...@gmail.com<mailto:wanghua.hum...@gmail.com>>
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Sent: Wednesday, March 30, 2016 8:41 PM
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi yuanying,

I agree to reduce the usage of floating IP. But as far as I know, if we need to 
pull
docker images from docker hub in nodes floating ips are needed. To reduce the
usage of floating ip, we can use proxy. Only some nodes have floating ips, and
other nodes can access docker hub by proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao 
<liyong.q...@intel.com<mailto:liyong.q...@intel.com>> wrote:
Hi Yuanying,
+1
I think we can add option on whether to using floating ip address since IP 
address are
kinds of resource which not wise to waste.

On 2016年03月31日 10:40, 大塚元央 wrote:
Hi team,

Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3],
and minions do not necessarily need to have floating IPs [4].
I think it's time to remove floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files,
so it's not a good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter to the bay model.

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying


--
Best Regards, Eli Qiao (乔立勇)
Intel OTC China

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-12 Thread Hongbin Lu
Hi all,

We discussed this in our last team meeting, and we were in disagreement. Some 
of us preferred option #1, others preferred option #2. I would suggest leaving 
this topic to the design summit so that our team members have more time to 
research each option. If we are in disagreement again, I will let the core 
team vote (hopefully we will have the whole core team at the design summit).

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: April-11-16 4:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

+1 for "#1: Mesos and Marathon". Most deployments that I am aware of have this 
setup. Also, we can provide a few lines of instructions on how to run Chronos 
on top of Marathon.
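
For illustration, those instructions could boil down to POSTing a single app definition to Marathon; this is a hedged sketch, with the Marathon URL, image tag, ZooKeeper/Mesos addresses and resource sizes all being assumptions rather than values from this thread:

    import json
    import requests

    MARATHON_URL = "http://marathon.example.com:8080"   # hypothetical endpoint

    chronos_app = {
        "id": "/chronos",
        "cpus": 0.5,
        "mem": 512,
        "instances": 1,
        "container": {
            "type": "DOCKER",
            "docker": {
                "image": "mesosphere/chronos",   # image tag left to the deployer
                "network": "HOST",
            },
        },
        # Chronos needs to know where ZooKeeper and the Mesos master live.
        "args": ["--zk_hosts", "zk1.example.com:2181",
                 "--master", "zk://zk1.example.com:2181/mesos",
                 "--http_port", "4400"],
    }

    resp = requests.post(MARATHON_URL + "/v2/apps",
                         headers={"Content-Type": "application/json"},
                         data=json.dumps(chronos_app))
    resp.raise_for_status()
    print("Submitted Chronos app:", resp.json().get("id"))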

Honestly, I don't see how #2 will work, because the Marathon installation is 
different from the Aurora installation.

---
Egor


From: Kai Qiang Wu <wk...@cn.ibm.com>
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Sent: Sunday, April 10, 2016 6:59 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

#2 seems more flexible, and if it can be proven to "make the SAME mesos bay 
apply multiple frameworks", it would be great. That means one mesos bay should 
support multiple frameworks.




Thanks


Best Wishes,

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
--------
Follow your heart. You are miracle!


From: Hongbin Lu <hongbin...@huawei.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 11/04/2016 12:06 am
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay




My preference is #1, but I don’t feel strongly about excluding #2. I would agree 
to go with #2 for now and switch back to #1 if there is demand from users. As for 
Ton’s suggestion to push Marathon into the introduced configuration hook, I 
think it is a good idea.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay
I would agree that #2 is the most flexible option, providing a well defined 
path for additional frameworks such as Kubernetes and Swarm.
I would suggest that the current Marathon framework be refactored to use this 
new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other 
frameworks but not Marathon.
Ton,


From: Adrian Otto <adrian.o...@rackspace.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay


On Apr 8, 2016, at 3:15 PM, Hongbin Lu <hongbin...@huawei.com> wrote:

Hi team,
I would like to give an update on this thread. In the last team meeting, we 
discussed several options to introduce Chronos to our mesos bay:
1. Add Chronos to the mesos bay. With this option, the mesos bay will have two 
mesos frameworks by default (Marathon and Chronos).
2. Add a configuration hook for users to configure additional mesos frameworks, 
such as Chronos. With this option, Magnum team doesn’t need to maintain extra 
framework configuration. However, users need to do it themselves.
This is my preference.

Adrian
3. Create a dedicated bay type for Chronos. With this option, we separate 
Marathon and Chronos into two different bay types. As a result, each bay type 
becomes easier to maintain, but the two mesos frameworks cannot share resources 
(a key feature of mesos is to have different frameworks running on the same 
cluster to increase resource utilization). Which option do you prefer? Or do 
you have other suggestions? Advice is welcome.

[openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-12 Thread Hongbin Lu
Hi all,

In short, some Magnum team members proposed storing TLS certificates in the 
Keystone credential store. As Magnum PTL, I want to get agreement (or 
non-disagreement) from the OpenStack community in general, and the Keystone 
community in particular, before approving this direction.

In detail, Magnum leverages TLS to secure the API endpoints of 
kubernetes/docker swarm. The usage of TLS requires a secure store for the 
TLS certificates. Currently, we leverage Barbican for this purpose, but we 
constantly receive requests to decouple Magnum from Barbican (because users 
normally don't have Barbican installed in their clouds). Some Magnum team 
members proposed leveraging the Keystone credential store as a Barbican 
alternative [1]. Therefore, I want to confirm the Keystone team's position 
on this proposal (I remember someone from Keystone mentioned this is an 
inappropriate use of Keystone; may I ask for further clarification?). Thanks 
in advance.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
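
For reference, here is a hedged sketch of what the proposal amounts to from Python: the /v3/credentials API simply stores an arbitrary blob against a user. The endpoint, credentials and certificate contents below are placeholders, and note that Keystone keeps the blob as-is rather than acting as a purpose-built secret store, which is the core of the concern:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as ks_client

    auth = v3.Password(auth_url="http://controller:5000/v3",
                       username="magnum", password="secret",
                       project_name="service",
                       user_domain_id="default", project_domain_id="default")
    sess = session.Session(auth=auth)
    keystone = ks_client.Client(session=sess)

    cert_pem = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

    # Store the certificate blob as a credential of a free-form type.
    cred = keystone.credentials.create(user=sess.get_user_id(),
                                       type="magnum-tls-cert",
                                       blob=cert_pem)
    print("Stored credential", cred.id)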

Best regards,
Hongbin


Re: [openstack-dev] [magnum][neutron] AttributeError: 'str' object has no attribute 'strftime'

2016-04-08 Thread Hongbin Lu
Thanks Ihar & neutron team for the quick fix.

Best regards,
Hongbin

> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: April-08-16 7:56 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][neutron] AttributeError: 'str'
> object has no attribute 'strftime'
> 
> Kevin's patch landed. I hope it solves the issue for Magnum and other
> projects that could be affected thru LBaaS. If not, please ping me in
> #openstack-neutron channel.
> 
> Ihar
> 
> Hirofumi Ichihara <ichihara.hirof...@lab.ntt.co.jp> wrote:
> 
> >
> >
> > On 2016/04/08 12:10, Kevin Benton wrote:
> >> Try depending on I2a10a8f15cdd5a144b172ee44fc3efd9b95d5b7e
> > I tried. Let's wait for the result.
> >
> >
> >> On Thu, Apr 7, 2016 at 8:02 PM, Hongbin Lu <hongbin...@huawei.com>
> wrote:
> >>
> >>
> >> > -Original Message-
> >> > From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> >> > Sent: April-07-16 12:04 PM
> >> > To: OpenStack Development Mailing List (not for usage questions)
> >> > Subject: Re: [openstack-dev] [magnum][neutron] AttributeError:
> 'str'
> >> > object has no attribute 'strftime'
> >> >
> >> > Hongbin Lu <hongbin...@huawei.com> wrote:
> >> >
> >> > > Hi all,
> >> > > Magnum gate recently broke with error: "AttributeError: 'str'
> >> > > object has no attribute 'strftime'" (here is a full log [1]). I
> >> > > would like
> >> > to
> >> > > confirm if there is a recent commit in Neutron that causes the
> >> > breakage.
> >> > > If yes, a quick fix is greatly appreciated.
> >> > >
> >> > > [1]
> >> > > http://logs.openstack.org/91/301891/1/check/gate-functional-dsvm-magnum-api/ea0d4ba/logs/screen-q-lbaas.txt.gz
> >> > >
> >> >
> >> > The fix should be: https://review.openstack.org/#/c/302904/
> >>
> >> This patch doesn't resolve the problem. I added a dependency on the patch
> >> and re-ran the tests [1], but the tests still failed with the same error
> >> [2].
> >>
> >> [1] https://review.openstack.org/#/c/303179/
> >> [2]
> >> http://logs.openstack.org/79/303179/1/check/gate-functional-dsvm-magnum-k8s/711813d/logs/screen-q-lbaas.txt.gz#_2016-04-08_02_19_30_027
> >>
> >> >
> >> > Ihar
> >> >
> >> >


Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-08 Thread Hongbin Lu
Hi team,
I would like to give an update on this thread. In the last team meeting, we 
discussed several options to introduce Chronos to our mesos bay:

1.   Add Chronos to the mesos bay. With this option, the mesos bay will 
have two mesos frameworks by default (Marathon and Chronos).

2.   Add a configuration hook for users to configure additional mesos 
frameworks, such as Chronos. With this option, Magnum team doesn’t need to 
maintain extra framework configuration. However, users need to do it themselves.

3.   Create a dedicated bay type for Chronos. With this option, we separate 
Marathon and Chronos into two different bay types. As a result, each bay type 
becomes easier to maintain, but the two mesos frameworks cannot share 
resources (a key feature of mesos is to have different frameworks running on 
the same cluster to increase resource utilization).
Which option do you prefer? Or do you have other suggestions? Advice is welcome.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-28-16 12:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Jay,

just keep in mind that Chronos can be run by Marathon.

---
Egor


From: Jay Lau
To: OpenStack Development Mailing List (not for usage questions)
Sent: Friday, March 25, 2016 7:01 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Yes, that's exactly what I want to do: adding the dcos cli and also adding 
Chronos to the Mesos Bay so that it can handle both long-running services and 
batch jobs.

Thanks,

On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki wrote:

On 03/25/2016 07:57 AM, Jay Lau wrote:

Hi Magnum,

The current mesos bay only includes mesos and marathon; it would be better to
enhance the mesos bay with more components and eventually evolve it into a
DCOS focused on container services based on mesos.

For more detail, please refer to
https://docs.mesosphere.com/getting-started/installing/installing-enterprise-edition/

Mesosphere now has a template which can help customers deploy a DCOS on AWS;
it would be great if Magnum could also support this based on OpenStack.

I filed a bp here
https://blueprints.launchpad.net/magnum/+spec/mesos-dcos , please show
your comments if any.

--
Thanks,

Jay Lau (Guangya Liu)

Sorry if I'm missing something, but isn't DCOS closed-source software?

However, the "DCOS cli" [1] seems to work perfectly with Marathon and Mesos 
installed in any way, if you configure it well. I think that what can be done 
in Magnum is to make the experience with the "DCOS" tools as easy as possible 
by using open source components from Mesosphere.

Cheers,
Michal

[1] https://github.com/mesosphere/dcos-cli





--
Thanks,
Jay Lau (Guangya Liu)



Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

2016-04-10 Thread Hongbin Lu
My preference is #1, but I don’t feel strongly about excluding #2. I would agree 
to go with #2 for now and switch back to #1 if there is demand from users. As for 
Ton’s suggestion to push Marathon into the introduced configuration hook, I 
think it is a good idea.

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: April-10-16 11:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay


I would agree that #2 is the most flexible option, providing a well defined 
path for additional frameworks such as Kubernetes and Swarm.
I would suggest that the current Marathon framework be refactored to use this 
new hook, to serve as an example and to be the supported
framework in Magnum. This will also be useful to users who want other 
frameworks but not Marathon.
Ton,


From: Adrian Otto <adrian.o...@rackspace.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 04/08/2016 08:49 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

________


On Apr 8, 2016, at 3:15 PM, Hongbin Lu <hongbin...@huawei.com> wrote:

Hi team,
I would like to give an update on this thread. In the last team meeting, we 
discussed several options to introduce Chronos to our mesos bay:
1. Add Chronos to the mesos bay. With this option, the mesos bay will have two 
mesos frameworks by default (Marathon and Chronos).
2. Add a configuration hook for users to configure additional mesos frameworks, 
such as Chronos. With this option, Magnum team doesn’t need to maintain extra 
framework configuration. However, users need to do it themselves.

This is my preference.

Adrian
3. Create a dedicated bay type for Chronos. With this option, we separate 
Marathon and Chronos into two different bay types. As a result, each bay type 
becomes easier to maintain, but the two mesos frameworks cannot share 
resources (a key feature of mesos is to have different frameworks running on 
the same cluster to increase resource utilization).
Which option do you prefer? Or do you have other suggestions? Advice is welcome.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-28-16 12:19 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Jay,

just keep in mind that Chronos can be run by Marathon.

---
Egor

From: Jay Lau <jay.lau@gmail.com>
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Sent: Friday, March 25, 2016 7:01 PM
Subject: Re: [openstack-dev] [magnum] Enhance Mesos bay to a DCOS bay

Yes, that's exactly what I want to do: adding the dcos cli and also adding 
Chronos to the Mesos Bay so that it can handle both long-running services and 
batch jobs.

Thanks,

On Fri, Mar 25, 2016 at 5:25 PM, Michal Rostecki 
<michal.roste...@gmail.com> wrote:

On 03/25/2016 07:57 AM, Jay Lau wrote:

Hi Magnum,

The current mesos bay only includes mesos and marathon; it would be better to
enhance the mesos bay with more components and eventually evolve it into a
DCOS focused on container services based on mesos.

For more detail, please refer to
https://docs.mesosphere.com/getting-started/installing/installing-enterprise-edition/

Mesosphere now has a template which can help customers deploy a DCOS on AWS;
it would be great if Magnum could also support this based on OpenStack.

I filed a bp here
https://blueprints.launchpad.net/magnum/+spec/mesos-dcos , please show
your comments if any.

--
Thanks,

Jay Lau (Guangya Liu)

Sorry if I'm missing something, but isn't DCOS closed-source software?

However, the "DCOS cli" [1] seems to work perfectly with Marathon and Mesos 
installed in any way, if you configure it well. I think that what can be done 
in Magnum is to make the experience with the "DCOS" tools as easy as possible 
by using open source components from Mesosphere.

Cheers,
Michal

[1] https://github.com/mesosphere/dcos-cli


[openstack-dev] [magnum][requirements][release] The introduction of package py2-ipaddress

2016-04-10 Thread Hongbin Lu
Hi requirements team,

In short, the recently introduced package py2-ipaddress [1] seems to break 
Magnum. In detail, the Magnum gate recently broke with the error: "'\xac\x18\x05\x07' 
does not appear to be an IPv4 or IPv6 address" [2] (the gate breakage has been 
temporarily fixed, but we are looking for a permanent fix [3]). After 
investigation, I opened a ticket in Cryptography for help [4]. According to the 
feedback from the Cryptography community, the problem comes from py2-ipaddress, 
which was introduced to OpenStack recently [1].

I wonder if we can get any advice from the requirements team in this regard. In 
particular, what is the proper way to handle the problematic package?

[1] https://review.openstack.org/#/c/302539/
[2] https://bugs.launchpad.net/magnum/+bug/1568212
[3] https://bugs.launchpad.net/magnum/+bug/1568427
[4] https://github.com/pyca/cryptography/issues/2870

Best regards,
Hongbin


[openstack-dev] [magnum] Collaborating topics for Magnum design summit

2016-04-05 Thread Hongbin Lu
Hi team,
As mentioned in the team meeting, the Magnum team is using an etherpad [1] to 
collaborate on topics for the design summit. If you are interested in joining us 
at the Newton design summit, I would request your input in the etherpad. In 
particular, you can do the following:
* Propose new topics that you want to discuss.
* Vote on existing topics (+1 on the topics you like).
Magnum has 5 fishbowl and 5 workroom sessions, so I will select 10 topics based 
on the feedback (the rest will be placed in the Friday meetup session). Your 
input is greatly appreciated. Thanks.
[1] https://etherpad.openstack.org/p/magnum-newton-design-summit-topics

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Containers lifecycle management

2016-04-06 Thread Hongbin Lu


> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: April-06-16 9:14 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [magnum] Containers lifecycle management
> 
> 
> Greetings,
> 
> I'm fairly new to Magnum and I hope my comments below are accurate.
> 
> After reading some docs, links and other references, I seem to
> understand the Magnum team has a debate on whether providing
> abstraction for containers lifecycle is something the project should do
> or not. There's a patch that attempts to remove PODs and some debates
> on whether `container-*` commands are actually useful or not. 

FYI, according to the latest decision [1][2], below is what it will be:
* The k8s abstractions (pod/service/replication controller) will be removed. 
Users will need to use the native tool (i.e. kubectl) to consume the k8s service.
* The docker swarm abstraction (container) will be moved to a separate driver. 
In particular, there will be two drivers for operators to select from. The first 
driver will have minimum functionality (i.e. provision/manage/delete the swarm 
cluster). The second driver will have additional APIs to manage container 
resources in the swarm bay.

[1] https://wiki.openstack.org/wiki/Magnum/NativeAPI
[2] https://etherpad.openstack.org/p/magnum-native-api

> 
> Based on the above, I wanted to understand what would be the
> recommended way for services willing to consume magnum to run
> containers? I've been digging a bit into what would be required for
> Trove to consume Magnum and based on the above, it seems the answer is
> that it should support either docker, k8s or mesos instead.
> 
> - Is the above correct?

I think it is correct. At the current stage, Trove needs to select a bay type 
(docker swarm, k8s or mesos). If the use case is to manage a single container, 
it is recommended to choose the docker swarm bay type.
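
To make that concrete, consuming a swarm bay from a service like Trove could look roughly like the sketch below, which points docker-py at the bay's Docker API endpoint with the bay's TLS material; the endpoint address, certificate paths and image are assumptions, not Trove or Magnum code:

    import docker
    import docker.tls

    # TLS material obtained for the bay (paths are assumptions).
    tls_config = docker.tls.TLSConfig(
        client_cert=("/etc/magnum/bay/cert.pem", "/etc/magnum/bay/key.pem"),
        ca_cert="/etc/magnum/bay/ca.pem",
        verify=True,
    )

    # docker-py 1.x exposes docker.Client; point it at the bay's API endpoint.
    client = docker.Client(base_url="tcp://bay-api.example.com:2376",
                           tls=tls_config)

    container = client.create_container(
        image="mysql:5.7",
        environment={"MYSQL_ROOT_PASSWORD": "secret"})
    client.start(container=container.get("Id"))
    print("Started container", container.get("Id"))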

> - Is there a way to create a container, transparently, on whatever
> backend using
>   Magnum's API?

At the current stage, it is not possible. There is a blueprint [3] proposing to 
unify the heterogeneity of the different bay types, but we are in disagreement 
on whether Magnum should provide such functionality. You are welcome to 
contribute your use cases if you would like to have it implemented.

[3] https://blueprints.launchpad.net/magnum/+spec/unified-containers

> 
> Sorry if I got something wrong,
> Flavio
> 
> --
> @flaper87
> Flavio Percoco


Re: [openstack-dev] [magnum][neutron] AttributeError: 'str' object has no attribute 'strftime'

2016-04-07 Thread Hongbin Lu


> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: April-07-16 12:04 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][neutron] AttributeError: 'str'
> object has no attribute 'strftime'
> 
> Hongbin Lu <hongbin...@huawei.com> wrote:
> 
> > Hi all,
> > Magnum gate recently broke with error: “AttributeError: 'str' object
> > has no attribute 'strftime'” (here is a full log [1]). I would like
> to
> > confirm if there is a recent commit in Neutron that causes the
> breakage.
> > If yes, a quick fix is greatly appreciated.
> >
> > [1]
> > http://logs.openstack.org/91/301891/1/check/gate-functional-dsvm-magnum-api/ea0d4ba/logs/screen-q-lbaas.txt.gz
> >
> 
> The fix should be: https://review.openstack.org/#/c/302904/

This patch doesn't resolve the problem. I added a dependency on the patch and 
re-ran the tests [1], but the tests still failed with the same error [2].

[1] https://review.openstack.org/#/c/303179/
[2] 
http://logs.openstack.org/79/303179/1/check/gate-functional-dsvm-magnum-k8s/711813d/logs/screen-q-lbaas.txt.gz#_2016-04-08_02_19_30_027
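
For anyone hitting the same trace: this is not the Neutron code itself, just a minimal illustration of how this class of error arises, namely a timestamp that stayed a string being handed to code that expects a datetime object.

    import datetime

    def format_heartbeat(timestamp):
        # Works only if `timestamp` is a datetime, not its string representation.
        return timestamp.strftime("%Y-%m-%d %H:%M:%S")

    good = datetime.datetime(2016, 4, 8, 2, 19, 30)
    print(format_heartbeat(good))            # fine

    bad = "2016-04-08 02:19:30"              # a string sneaked through instead
    try:
        format_heartbeat(bad)
    except AttributeError as exc:
        print("AttributeError:", exc)        # 'str' object has no attribute 'strftime'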

> 
> Ihar
> 


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Hongbin Lu
Sorry, I disagree.

The Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technologies. On the contrary, the idea of unified Container 
APIs has been repeatedly proposed by different people in the past. I will try to 
allocate a session at the design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented is that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Containers APIs 
have limited similarities, so when you try to make a universal interface to all 
of them, you end up with a really complicated mess. It would be even worse if 
we tried to accommodate all the unique aspects of BM and VM as well. Magnum’s 
approach is to offer the upstream native APIs for the different container 
orchestration engines (COE), and compose Bays for them to run on that are built 
from the compute types that OpenStack supports. We do this by using different 
Heat orchestration templates (and conditional templates) to arrange a COE on 
the compute type of your choice. With that said, there are still gaps where not 
all storage or network drivers work with Ironic, and there are non-trivial 
security hurdles to clear to safely use Bays composed of libvirt-lxc instances 
in a multi-tenant environment.

My suggestion to get what you want for Trove is to see if the cloud has Magnum, 
and if it does, create a bay with the flavor type specified for whatever 
compute type you want, and then use the native API for the COE you selected for 
that bay. Start your instance on the COE, just like you use Nova today. This 
way, you have low complexity in Trove, and you can scale both the number of 
instances of your data nodes (containers), and the infrastructure on which they 
run (Nova instances).

Regards,

Adrian



On Apr 11, 2016, at 8:47 AM, Amrith Kumar wrote:

Monty, Dims,

I read the notes and was similarly intrigued about the idea. In particular, 
from the perspective of projects like Trove, having a common Compute API is 
very valuable. It would allow the projects to have a single view of 
provisioning compute, as we can today with Nova and get the benefit of bare 
metal through Ironic, VM's through Nova VM's, and containers through 
nova-docker.

With this in place, a project like Trove can offer database-as-a-service on a 
spectrum of compute infrastructures as any end-user would expect. Databases 
don't always make sense in VM's, and while containers are great for quick and 
dirty prototyping, and VM's are great for much more, there are databases that 
will in production only be meaningful on bare-metal.

Therefore, if there is a move towards offering a common API for VM's, 
bare-metal and containers, that would be huge.

Without such a mechanism, consuming containers in Trove adds considerable 
complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a working 
prototype of Trove leveraging Ironic, VM's, and nova-docker to provision 

Re: [openstack-dev] [magnum][requirements][release] The introduction of package py2-ipaddress

2016-04-11 Thread Hongbin Lu
Hi Thierry,

Thanks for your advice. I submitted a patch [1] to downgrade docker-py to 
1.7.2. In the long term, we will negotiate with the upstream maintainers to 
resolve the module conflict.

[1] https://review.openstack.org/#/c/304296/

Best regards,
Hongbin

> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: April-11-16 5:28 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][requirements][release] The
> introduction of package py2-ipaddress
> 
> Hongbin Lu wrote:
> > Hi requirements team,
> >
> > In short, the recently introduced package py2-ipaddress [1] seems to
> > break Magnum. In details, Magnum gate recently broke by an error:
> > "'\xac\x18\x05\x07' does not appear to be an IPv4 or IPv6 address" [2]
> > (the gate breakage has been temporarily fixed but we are looking for
> a
> > permanent fix [3]). After investigation, I opened a ticket in
> > Cryptography for help [4]. According to the feedback from
> Cryptography
> > community, the problem is from py2-ipaddress, which was introduced to
> > OpenStack recently [1].
> >
> > I wonder if we can get any advice from requirements team in this
> > regards. In particular, what is the proper way to handle the
> > problematic package?
> >
> > [1] https://review.openstack.org/#/c/302539/
> > [2] https://bugs.launchpad.net/magnum/+bug/1568212
> > [3] https://bugs.launchpad.net/magnum/+bug/1568427
> > [4] https://github.com/pyca/cryptography/issues/2870
> 
> py2-ipaddress was introduced as a dependency by docker-py 1.8.0.
> Short-term solution would be to cap <1.8.0 in global-requirements
> (which will make us fallback to 1.7.2 and remove py2-ipaddress).
> 
> If the two modules are conflicting we should determine which one is the
> best and converge to it. ipaddress seems a lot more used and pulled by
> a lot of packages. So long-term solution would be to make docker-py
> upstream depend on ipaddress instead...
> 
> --
> Thierry Carrez (ttx)
> 


Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Hongbin Lu


From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: April-11-16 2:52 PM
To: OpenStack Development Mailing List (not for usage questions); Adrian Otto
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Yeah, I think there are two places where it may make sense.

1. Ironic's nova plugin is a lowest common denominator for treating a physical 
host like a vm. Ironic's API is much richer, but sometimes all you need is 
the lowest common denominator and you don't want to rewrite a bunch of code. In 
this case, it may make sense to have a nova plugin that talks to magnum to 
launch a heavyweight container, to make the use case easy.
If I understand correctly, you were proposing a Magnum virt-driver for Nova, 
which would be used to provision containers in Magnum bays? Magnum has different 
bay types (i.e. kubernetes, swarm, mesos), so the proposed driver needs to 
understand the APIs of the different container orchestration engines (COEs). I 
think it will work only if Magnum provides unified Container APIs, so that 
the introduced Nova virt-driver can call those unified APIs to launch 
containers.


2. Basic abstraction of Orchestration systems. Most (all?) docker orchestration 
systems work with a yaml file. What's in it differs, but shipping it from point 
A to point B using an authenticated channel can probably be nicely abstracted. 
I think this would be a big usability gain as well. Things like the 
applications catalog could much more easily hook into it then. The catalog 
would provide the yaml, and a tag to know which orchestrator type it is, and 
just pass that info along to magnum.
I am open to discussing that, but inventing a standard DSL for all COEs is a 
significant amount of work. We need to evaluate the benefits and costs before 
proceeding in this direction. In comparison, the proposal of unifying Container 
APIs [1] looks easier to implement and maintain.
[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers


Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Monday, April 11, 2016 11:10 AM
To: Adrian Otto; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
Sorry, I disagree.

The Magnum team doesn’t have consensus to reject the idea of unifying APIs from 
different container technologies. On the contrary, the idea of unified Container 
APIs has been repeatedly proposed by different people in the past. I will try to 
allocate a session at the design summit to discuss it as a team.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-11-16 12:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [OpenStack Foundation] [openstack-dev] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Amrith,

I respect your point of view, and agree that the idea of a common compute API 
is attractive… until you think a bit deeper about what that would mean. We 
seriously considered a “global” compute API at the time we were first 
contemplating Magnum. However, what we came to learn through the journey of 
understanding the details of how such a thing would be implemented is that such 
an API would either be (1) the lowest common denominator (LCD) of all compute 
types, or (2) an exceedingly complex interface.

You expressed a sentiment below that trying to offer choices for VM, Bare Metal 
(BM), and Containers for Trove instances “adds considerable complexity”. 
Roughly the same complexity would accompany the use of a comprehensive compute 
API. I suppose you were imagining an LCD approach. If that’s what you want, 
just use the existing Nova API, and load different compute drivers on different 
host aggregates. A single Nova client can produce VM, BM (Ironic), and 
Container (libvirt-lxc) instances all with a common API (Nova) if it’s 
configured in this way. That’s what we do. Flavors determine which compute type 
you get.

If what you meant is that you could tap into the power of all the unique 
characteristics of each of the various compute types (through some modular 
extensibility framework) you’ll likely end up with complexity in Trove that is 
comparable to integrating with the native upstream APIs, along with the 
disadvantage of waiting for OpenStack to continually catch up to the pace of 
change of the various upstream systems on which it depends. This is a recipe 
for disappointment.

We concluded that wrapping native APIs is a mistake, particularly when they are 
sufficiently different than what the Nova API already offers. Container

Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Hongbin Lu
The problem of the missing Barbican alternative implementation has been raised 
several times by different people. IMO, this is a very serious issue that will 
hurt Magnum adoption. I created a blueprint for that [1] and set the PTL as 
approver. It will be picked up by a contributor once it is approved.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store 
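
For context, the Barbican-backed path this blueprint is looking for an alternative to boils down to a few client calls; here is a hedged sketch using python-barbicanclient, with the auth details and certificate payload as placeholders rather than anything from this thread:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from barbicanclient import client as barbican_client

    auth = v3.Password(auth_url="http://controller:5000/v3",
                       username="magnum", password="secret",
                       project_name="service",
                       user_domain_id="default", project_domain_id="default")
    barbican = barbican_client.Client(session=session.Session(auth=auth))

    # Store the bay certificate as a secret and keep only its reference.
    secret = barbican.secrets.create(name="bay-1-server-cert",
                                     payload="-----BEGIN CERTIFICATE-----\n...\n")
    secret_ref = secret.store()

    # Any conductor replica holding the reference (and valid credentials)
    # can fetch the payload later, which is what makes scaling out easier.
    print(barbican.secrets.get(secret_ref).payload)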

Best regards,
Hongbin

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: March-17-16 2:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hi.

We're on the way, the API is using haproxy load balancing in the same way all 
openstack services do here - this part seems to work fine.

For the conductor we're blocked on bay certificates - we don't currently 
have barbican, so local was the only option. To get them accessible on all nodes 
we're considering two options:
- store bay certs in a shared filesystem, meaning a new set of credentials in 
the boxes (and a process to renew fs tokens)
- deploy barbican (some bits of puppet missing we're sorting out)

More news next week.

Cheers,
Ricardo

On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) wrote:
> All,
>
> Does anyone have experience deploying Magnum in a highly-available fashion?
> If so, I’m interested in learning from your experience. My biggest 
> unknown is the Conductor service. Any insight you can provide is 
> greatly appreciated.
>
> Regards,
> Daneyon Hansen
>



Re: [openstack-dev] [magnum] High Availability

2016-03-21 Thread Hongbin Lu
Tim,

Thanks for your advice. I respect your point of view and we will definitely 
encourage our users to try Barbican if they see fit. However, for the sake of 
Magnum, I think we have to decouple from Barbican at the current stage. The 
coupling of Magnum and Barbican will double the size of the system (1 
project -> 2 projects), which will significantly increase the overall complexity.

- For developers, it incurs significant overhead for development, 
quality assurance, and maintenance.

- For operators, it doubles the amount of effort needed to deploy and 
monitor the system.

- For users, a large system is likely to be unstable and fragile, which 
affects the user experience.

From my point of view, I would like to minimize the system we are going to ship, 
so that we can reduce the maintenance overhead and provide a stable system 
to our users.

I noticed that there are several suggestions to “force” our users to install 
Barbican, with which I respectfully disagree. Magnum is a young project and we 
are struggling to increase the adoption rate. I think we need to be nice to our 
users; otherwise, they will choose our competitors (there are container services 
everywhere). Please understand that we are not a mature project like Nova, which 
has thousands of users. We really don’t have the power to force our users to do 
what they don’t want to do.

I also recognized that there are several disagreements from the Barbican team. 
Per my understanding, most of the complaints are about the re-invention of 
Barbican-equivalent functionality in Magnum. To address that, I am going to 
propose an idea to achieve the goal without duplicating Barbican. In particular, 
I suggest adding support for an additional authentication system (Keystone in 
particular) for our Kubernetes bay (and potentially for swarm/mesos). As a 
result, users can specify how to secure their bay’s API endpoint:

- TLS: This option requires Barbican to be installed for storing the 
TLS certificates.

- Keystone: This option doesn’t require Barbican. Users will use their 
OpenStack credentials to log into Kubernetes.

I am going to send another ML to describe the details. You are welcome to 
provide your inputs. Thanks.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: March-19-16 5:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum. I also wonder whether Heat plans to set a hard 
dependency on Barbican just for protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, thus Barbican may not have been an option (or would have been a high-risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software/releases/liberty/components/barbican). 
There are some areas that could be improved (packaging and puppet modules 
often need some more investment).

I think it is worth a go to try it out and have concrete areas to improve if 
there are problems.

Tim

If you don’t like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal <douglas.mendiza...@rackspace.com> wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feedback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data so magnum would be on the hook 
to do it. So that means that if security is a requirement you'd have to 
duplicate more than just code. magnum would sta

Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are we ready?

2016-03-24 Thread Hongbin Lu


> -Original Message-
> From: Assaf Muller [mailto:as...@redhat.com]
> Sent: March-24-16 9:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 -
> are weready?
> 
> On Thu, Mar 24, 2016 at 1:48 AM, Takashi Yamamoto
>  wrote:
> > On Thu, Mar 24, 2016 at 6:17 AM, Doug Wiegley
> >  wrote:
> >> Migration script has been submitted, v1 is not going anywhere from
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> >>
> >> I’m thinking in this order:
> >>
> >> - remove jenkins jobs
> >> - wait for heat to remove their jenkins jobs ([heat] added to this
> >> thread, so they see this coming before the job breaks)
> >
> > magnum is relying on lbaasv1.  (with heat)
> 
> Is there anything blocking you from moving to v2?

A ticket was created for that: 
https://blueprints.launchpad.net/magnum/+spec/migrate-to-lbaas-v2 . It will be 
picked up by contributors once it is approved. Please give us some time to 
finish the work.

> 
> >
> >> - remove q-lbaas from devstack, and any references to lbaas v1 in
> devstack-gate or infra defaults.
> >> - remove v1 code from neutron-lbaas
> >>
> >> Since newton is now open for commits, this process is going to get
> started.
> >>
> >> Thanks,
> >> doug
> >>
> >>
> >>
> >>> On Mar 8, 2016, at 11:36 AM, Eichberger, German
>  wrote:
> >>>
> >>> Yes, it’s Database only — though we changed the agent driver in the
> DB from V1 to V2 — so if you bring up a V2 with that database it should
> reschedule all your load balancers on the V2 agent driver.
> >>>
> >>> German
> >>>
> >>>
> >>>
> >>>
> >>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
> >>>
>  So this looks like only a database migration, right?
> 
>  -Original Message-
>  From: Eichberger, German [mailto:german.eichber...@hpe.com]
>  Sent: Tuesday, March 08, 2016 12:28 AM
>  To: OpenStack Development Mailing List (not for usage questions)
>  Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
> are weready?
> 
>  Ok, for what it’s worth we have contributed our migration script:
>  https://review.openstack.org/#/c/289595/ — please look at this as
> a
>  starting point and feel free to fix potential problems…
> 
>  Thanks,
>  German
> 
> 
> 
> 
>  On 3/7/16, 11:00 AM, "Samuel Bercovici" 
> wrote:
> 
> > As far as I recall, you can specify the VIP in creating the LB so
> you will end up with same IPs.
> >
> > -Original Message-
> > From: Eichberger, German [mailto:german.eichber...@hpe.com]
> > Sent: Monday, March 07, 2016 8:30 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 -
> are weready?
> >
> > Hi Sam,
> >
> > So if you have some 3rd party hardware you only need to change
> the
> > database (your steps 1-5) since the 3rd party hardware will just
> > keep load balancing…
> >
> > Now for Kevin’s case with the namespace driver:
> > You would need a 6th step to reschedule the loadbalancers with
> the V2 namespace driver — which can be done.
> >
> > If we want to migrate to Octavia or (from one LB provider to
> another) it might be better to use the following steps:
> >
> > 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
> > Health Monitors , Members) into some JSON format file(s) 2.
> Delete LBaaS v1 3.
> > Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON
> > format file into some scripts which recreate the load balancers
> > with your provider of choice —
> >
> > 6. Run those scripts
> >
> > The problem I see is that we will probably end up with different
> > VIPs so the end user would need to change their IPs…
> >
> > Thanks,
> > German
> >
> >
> >
> > On 3/6/16, 5:35 AM, "Samuel Bercovici" 
> wrote:
> >
> >> As for a migration tool.
> >> Due to model changes and deployment changes between LBaaS v1 and
> LBaaS v2, I am in favor for the following process:
> >>
> >> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
> >> Health Monitors , Members) into some JSON format file(s) 2.
> Delete LBaaS v1 3.
> >> Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1
> >> back over LBaaS v2 (need to allow moving from falvor1-->flavor2,
> >> need to make room to some custom modification for mapping
> between
> >> v1 and v2
> >> models)
> >>
> >> What do you think?
> >>
> >> -Sam.
> >>
> >>
> >>
> >>
> >> -Original Message-
> >> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> >> Sent: Friday, 

Re: [openstack-dev] [Kuryr][Magnum] Clarification of expanded mission statement

2016-03-27 Thread Hongbin Lu
Gal,

Thanks for clarifying the initiative. I added “[Magnum]” to the title so that 
Magnum team members can add their input to this thread (if any).

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: March-19-16 6:04 AM
To: Fox, Kevin M
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Kuryr] Clarification of expanded mission statement

Hi Russell,

Thanks for starting this thread, I have been wanting to do so myself.

First, to me Kuryr is much more than just providing a "libnetwork driver" or a 
"CNI driver" on the networking side.

Kuryr's goal (to me at least) is to simplify orchestration and management, 
improve performance, and avoid vendor lock-in by providing these drivers, but 
also to expose and enhance additional policy-level features that OpenStack has 
but that are lacking in COEs. We are also looking at easier deployment and 
packaging, and at providing additional value with features that make things 
more efficient and address issues operators/users are facing (like attaching to 
existing Neutron networks).

We see ourselves operating both on OpenStack projects, helping with features 
needed for this integration, but also in any other project (like Kubernetes / 
Docker) if that makes more sense and shows better value.

The plan is to continue this with storage; we will have to examine things and 
decide where the best place to locate them is, weighing the pros and cons.
I personally don't want to run off and start implementing things in other 
communities and under other governance models unless they make much more sense 
and show better value for the overall solution.
So my initial reaction is that we can show a lot of value in the storage part 
as part of OpenStack Kuryr, and hence the mission statement change.

There are many features that I believe we can work on that are currently 
lacking, and we will need to examine them one by one and keep the design and 
spec process open with the community so everyone can review and judge the value.
The last thing I am going to do is drive to re-implement things that are 
already there and in good enough shape; none of us have the need or time to do 
that :)

In the storage area I see the plugins (and not just for Kubernetes), and I see 
the persistence and re-use of storage as interesting to start with.
Another area that I included under storage is mostly disaster recovery and 
backup; I think we can bring a lot of value to container deployments by 
leveraging projects like Smaug and Freezer, which offer application backup 
and recovery.
I really prefer that we do this thinking process together as a community, and I 
already talked with some people who showed interest in some of these features.

My intention was to first get the TC approval to explore this area and make 
sure it doesn't conflict, and only then start working on defining the details, 
again with the broad community, openly just like we do everything else.


On Fri, Mar 18, 2016 at 10:12 PM, Fox, Kevin M wrote:
I'd assume a volume plugin for cinder support and/or a volume plugin for manila 
support?

Either would be useful.

Thanks,
Kevin

From: Russell Bryant [rbry...@redhat.com]
Sent: Friday, March 18, 2016 4:59 AM
To: OpenStack Development Mailing List (not for usage questions); 
gal.sa...@gmail.com
Subject: [openstack-dev] [Kuryr] Clarification of expanded mission statement
The Kuryr project proposed an update to its mission statement and I agreed to 
start a ML thread seeking clarification on the update.

https://review.openstack.org/#/c/289993

The change expands the current networking focus to also include storage 
integration.

I was interested to learn more about what work you expect to be doing.  On the 
networking side, it's clear to me: a libnetwork plugin, and now perhaps a CNI 
plugin.  What specific code do you expect to deliver as a part of your expanded 
scope?  Will that code be in Kuryr, or be in upstream projects?

If you don't know yet, that's fine.  I was just curious what you had in mind.  
We don't really have OpenStack projects that are organizing around contributing 
to other upstreams, but I think this case is fine.

--
Russell Bryant



--
Best Regards ,

The G.


Re: [openstack-dev] [magnum] High Availability

2016-03-20 Thread Hongbin Lu
The Magnum team discussed Anchor several times (in the design summit/midcycle). 
According to what I remember, the conclusion was to leverage Anchor through 
Barbican (presumably there is an Anchor backend for Barbican). Is Anchor 
support in Barbican still on the roadmap?

Best regards,
Hongbin

> -Original Message-
> From: Clark, Robert Graham [mailto:robert.cl...@hpe.com]
> Sent: March-20-16 1:57 AM
> To: maishsk+openst...@maishsk.com; OpenStack Development Mailing List
> (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> At the risk of muddying the waters further, I recently chatted with
> some of you about Anchor; it's an ephemeral PKI system set up to provide
> private community PKI - certificate services for internal systems, a
> lot like k8s pods.
> 
> An overview of why revocation doesn't work very well in many cases and
> how ephemeral PKI helps: https://openstack-
> security.github.io/tooling/2016/01/20/ephemeral-pki.html
> 
> First half of a threat analysis on Anchor, the Security Project's
> implementation of ephemeral PKI: https://openstack-
> security.github.io/threatanalysis/2016/02/07/anchorTA.html
> 
> This might not solve your problem; it's certainly not a direct drop-in
> for Barbican (and it never will be), but if your primary concern is
> Certificate Management for internal systems (not presenting
> certificates over the edge of the cloud) you might find some of its
> properties valuable. Not least, it's trivial to make HA, being stateless,
> and trivial to deploy, being a single Pecan service.
> 
> There's a reasonably complete deck on Anchor here:
> https://docs.google.com/presentation/d/1HDyEiSA5zp6HNdDZcRAYMT5GtxqkHrx
> brqDRzITuSTc/edit?usp=sharing
> 
> And of course, code over here:
> http://git.openstack.org/cgit/openstack/anchor
> 
> Cheers
> -Rob
> 
> > -Original Message-
> > From: Maish Saidel-Keesing [mailto:mais...@maishsk.com]
> > Sent: 19 March 2016 18:10
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] High Availability
> >
> > Forgive me for the top post and also for asking the obvious (with my
> > Operator hat on)
> >
> > Relying on an external service for certificate storage is the best
> > option - assuming of course that the certificate store is actually
> > also highly available.
> >
> > Is that the case today with Barbican?
> >
> > According to the architecture docs [1] I see that they are using a
> > relational database. MySQL? PostgreSQL? Does that now mean we have an
> > additional database to maintain, backup, provide HA for as an
> Operator?
> >
> > The only real reference I can see to anything remotely HA is this [2]
> > and this [3]
> >
> > An overall solution is highly available *only* if all of the parts it
> > relies on are also highly available.
> >
> >
> > [1]
> >
> > http://docs.openstack.org/developer/barbican/contribute/architecture.html#overall-architecture
> > [2] https://github.com/cloudkeep-ops/barbican-vagrant-zero
> > [3]
> > http://lists.openstack.org/pipermail/openstack/2014-March/006100.html
> >
> > Some food for thought
> >
> > --
> > Best Regards,
> > Maish Saidel-Keesing
> >
> >
> > On 03/18/16 17:18, Hongbin Lu wrote:
> > > Douglas,
> > >
> > > I am not opposed to adopting Barbican in Magnum (in fact, we already
> > > adopted Barbican). What I am opposed to is a Barbican lock-in, which
> > > already has a negative impact on Magnum adoption based on our
> > > feedback. I also want to see an increase in Barbican adoption in the
> > > future, with all our users having Barbican installed in their clouds. If
> > > that happens, I have no problem having a hard dependency on Barbican.
> > >
> > > Best regards,
> > > Hongbin
> > >
> > > -Original Message-
> > > From: Douglas Mendizábal [mailto:douglas.mendiza...@rackspace.com]
> > > Sent: March-18-16 9:45 AM
> > > To: openstack-dev@lists.openstack.org
> > > Subject: Re: [openstack-dev] [magnum] High Availability
> > >
> > > Hongbin,
> > >
> > > I think Adrian makes some excellent points regarding the adoption
> of
> > > Barbican.  As the PTL for Barbican, it's frustrating to me to
> > constantly hear from other projects that securing their sensitive
> data
> > is a requirement but then turn around and say that deploying Barbican
> is a problem.
> > >
> > > I guess I'm having a hard time understanding the

Re: [openstack-dev] [magnum] High Availability

2016-03-20 Thread Hongbin Lu
Thanks for your input. It sounds like we have no option other than Barbican 
as long as we need to store credentials in Magnum. So I have a new proposal: 
switch to an alternative authentication mechanism that doesn't require storing 
credentials in Magnum. For example, the following options are available in 
Kubernetes [1]:

· Client certificate authentication

· Token File

· OpenID Connect ID Token

· Basic authentication

· Keystone authentication

Could we pick one of those?

[1] http://kubernetes.io/docs/admin/authentication/
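
As a rough illustration of the first option (client certificate authentication), 
here is a minimal sketch - not Magnum code, and the endpoint and file names are 
placeholders - showing that the caller presents its own cert/key pair to the 
Kubernetes API, so nothing secret has to be stored on the Magnum side:

# Illustrative sketch only: client-certificate authentication against the
# Kubernetes API. The endpoint and file paths are made-up placeholders.
import requests

KUBE_API = "https://kube-master.example.com:6443"

def list_pods(namespace="default"):
    resp = requests.get(
        "%s/api/v1/namespaces/%s/pods" % (KUBE_API, namespace),
        cert=("client.crt", "client.key"),  # client certificate + private key
        verify="ca.crt",                    # CA bundle to verify the API server
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for pod in list_pods().get("items", []):
        print(pod["metadata"]["name"])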

Best regards,
Hongbin

From: Dave McCowan (dmccowan) [mailto:dmcco...@cisco.com]
Sent: March-19-16 10:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


The most basic requirement here for Magnum is that it needs a safe place to 
store credentials.  A safe place cannot be provided by just a library or even 
by just a daemon.  Secure storage is provided by either a hardware solution (an 
HSM) or a software solution (SoftHSM, DogTag, IPA, IdM).  A project should give 
the user a variety of secure storage options.

On this, we have competing requirements.  Devs need a turnkey option for easy 
testing locally or in the gate.  Users kicking the tires want a realistic 
solution they can try out easily with DevStack.  Operators who already have secure 
storage deployed for their cloud want an option that plugs into their existing 
HSMs.

Any roll-your-own option is not going to meet all of these requirements.

A good example, that does meet all of these requirements, is the key manager 
implementation in Nova and Cinder. [1] [2]

Nova and Cinder work together to provide volume encryption, and like Magnum, 
have a need to store and share keys securely.  Using a plugin architecture, and 
the Barbican API, they implement a variety of key storage options:
- Fixed key allows for insecure stand-alone operation, running only Nova and 
Cinder
- Barbican with a static key allows for easy deployment that can be started 
within DevStack with a few lines of config.
- Barbican with a secure backend, allows for production grade secure storage of 
keys that has been tested on a variety of HSMs and software options.

Barbican's adoption is growing.  Nova, Cinder, Neutron LBaaS, Sahara, and 
Magnum all have implementations using Barbican.  Swift and DNSSec also have use 
cases.  There are both RPM and Debian packages available for Barbican.  There 
are (at least tech preview)  versions of puppet modules, Ansible playbooks, and 
DevStack plugins to deploy Barbican.

In summary, I think using Barbican absorbs the complexity of doing secure 
storage correctly.  It gives operators production grade secure storage options, 
while giving devs easier options.

--Dave McCowan

[1] https://github.com/openstack/nova/tree/master/nova/keymgr
[2] https://github.com/openstack/cinder/tree/master/cinder/keymgr

From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 18, 2016 at 10:52 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] High Availability

OK. If using Keystone is not acceptable, I am going to propose a new approach:

· Store data in Magnum DB

· Encrypt data before writing it to DB

· Decrypt data after loading it from DB

· Have the encryption/decryption key stored in config file

· Use encryption/decryption algorithm provided by a library

The approach above is the exact approach used by Heat to protect hidden 
parameters [1]. Compared to the Barbican option, this approach is much lighter 
and simpler, and provides a basic level of data protection. This option is a 
good supplement to the Barbican option, which is heavier but provides an advanced 
level of protection. It fits the use case where users don't want to 
install Barbican but still want basic protection.
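
As a rough sketch of that approach (illustrative only; it uses the cryptography 
library's Fernet primitive, which is my assumption rather than the algorithm Heat 
actually uses, and the key handling is simplified):

# Sketch of the encrypt-before-store idea. Assumes the `cryptography`
# package; key management and config plumbing are intentionally simplified.
from cryptography.fernet import Fernet

# In a real deployment the key would come from the service config file,
# e.g. generated once with Fernet.generate_key() and kept out of the DB.
SECRET_KEY = Fernet.generate_key()
cipher = Fernet(SECRET_KEY)

def encrypt_for_db(plaintext):
    """Encrypt a credential before writing it to the database."""
    return cipher.encrypt(plaintext.encode("utf-8"))

def decrypt_from_db(ciphertext):
    """Decrypt a credential after loading it from the database."""
    return cipher.decrypt(ciphertext).decode("utf-8")

stored = encrypt_for_db("-----BEGIN CERTIFICATE-----...")
assert decrypt_from_db(stored).startswith("-----BEGIN CERTIFICATE-----")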

If you disagree, I would ask you to justify why this approach works for 
Heat but not for Magnum. I also wonder whether Heat plans to set a hard 
dependency on Barbican just for protecting the hidden parameters.

If you don't like code duplication between Magnum and Heat, I would suggest 
moving the implementation to an oslo library to keep it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 

Re: [openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

2016-03-01 Thread Hongbin Lu
+1. Shu Muto contributed a lot to magnum-ui. Highly recommended.

Best regards,
Hongbin

From: 大塚元央 [mailto:yuany...@oeilvert.org]
Sent: March-01-16 9:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] FW: [magnum][magnum-ui] Liaisons for I18n

Hi team,

Shu Muto is interested in becoming the liaison for magnum-ui.
He put great effort into translating English to Japanese in magnum-ui and 
horizon.
I recommend him as the liaison.

Thanks
-yuanying
2016年2月29日(月) 23:56 Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>>:
Hi team,

FYI, the I18n team needs liaisons from magnum-ui. Please contact the i18n team if 
you are interested in this role.

Best regards,
Hongbin

From: Ying Chun Guo [mailto:guoyi...@cn.ibm.com<mailto:guoyi...@cn.ibm.com>]
Sent: February-29-16 3:48 AM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [all][i18n] Liaisons for I18n

Hello,

Mitaka translation will start soon, from this week.
In the Mitaka translation, IBM full-time translators will join the
translation team and work with community translators.
With their help, the I18n team is able to cover more projects.
So I need liaisons from dev projects who can help the I18n team work
smoothly with the development teams during the release cycle.

I especially need liaisons in the projects below, which are in the Mitaka 
translation plan:
nova, glance, keystone, cinder, swift, neutron, heat, horizon, ceilometer.

I also need liaisons from Horizon plugin projects, which are ready on the 
translation website:
trove-dashboard, sahara-dashboard, designate-dashboard, magnum-ui,
monasca-ui, murano-dashboard and senlin-dashboard.
I need liaisons to tell us whether their projects are ready for translation 
from the project's point of view.

As to other projects, liaisons are welcomed too.

Here are the descriptions of I18n liaisons:
- The liaison should be a core reviewer for the project and understand the i18n 
status of this project.
- The liaison should understand the project release schedule very well.
- The liaison should notify the I18n team of important moments in the 
project release in a timely manner,
for example the soft string freeze, the hard string freeze, and 
the RC1 cut.
- The liaison should take care of translation patches to the project, and make 
sure the patches are
successfully merged into the final release version. When a translation patch 
fails, the liaison
should notify the I18n team.

If you are interested in being a liaison and helping translators,
add your information here: 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n .

Thank you for your support.
Best regards
Ying Chun Guo (Daisy)


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-07 Thread Hongbin Lu


From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-07-16 8:11 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin, I think the offer to support different OS options is a perfect example 
both of what we want and what we don't want. We definitely want to allow for 
someone like yourself to maintain templates for whatever OS they want and to 
have that option be easily integrated in to a Magnum deployment. However, when 
developing features or bug fixes, we can't wait for you to have time to add it 
for whatever OS you are promising to maintain.
It might be true that supporting additional OSes could slow down the development 
speed, but the key question is how big the impact would be. Does it outweigh 
the benefits? IMO, the impact doesn’t seem to be significant, given that most 
features and bug fixes are OS agnostic. Also, keep in mind that every 
feature we introduce (variety of COEs, variety of Nova virt-drivers, variety 
of network drivers, variety of volume drivers, variety of …) incurs a maintenance 
overhead. If we want optimal development speed, we would be limited to 
supporting a single COE/virt driver/network driver/volume driver. I guess that is 
not the direction we want to go in?

Instead, we would all be forced to develop the feature for that OS as well. If 
every member of the team had a special OS like that we'd all have to maintain 
all of them.
To be clear, I don’t have a special OS, and I guess neither do the others who 
disagreed in this thread.

Alternatively, what was agreed on by most at the midcycle was that if someone 
like yourself wanted to support a specific OS option, we would have an easy 
place for those contributions to go without impacting the rest of the team. The 
team as a whole would agree to develop all features for at least the reference 
OS.
Could we re-confirm that this is a team agreement? There is no harm in 
re-confirming it at the design summit/ML/team meeting. Frankly, it doesn’t seem to 
be.

Then individuals or companies who are passionate about an alternative OS can 
develop the features for that OS.

Corey

On Sat, Mar 5, 2016 at 12:30 AM Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:


From: Adrian Otto 
[mailto:adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>]
Sent: March-04-16 6:31 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Steve,

On Mar 4, 2016, at 2:41 PM, Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:

From: Adrian Otto <adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, March 4, 2016 at 12:48 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

Hongbin,

To be clear, this pursuit is not about what OS options cloud operators can 
select. We will be offering a method of choice. It has to do with what we plan 
to build comprehensive testing for,
This is easy. Once we build comprehensive tests for the first OS, we can just 
re-run them for the other OS(es).

and the implications that has on our pace of feature development. My guidance 
here is that we resist the temptation to create a system with more permutations 
than we can possibly support. The relations between bay node OS, Heat template, 
Heat template parameters, COE, and COE dependencies (cloud-init, docker, 
flannel, etcd, etc.) are multiplicative in nature. From the midcycle, it was 
clear to me that:

1) We want to test at least one OS per COE from end-to-end with comprehensive 
functional tests.
2) We want to offer clear and precise integration points to allow cloud 
operators to substitute their own OS in place of whatever one is the default 
for the given COE.

A COE shouldn’t necessarily have a default that locks out other options.  
Magnum devs are the experts in how these systems operate, and as such need to 
take on the responsibility of implementing multi-OS support.

3) We want to control the total number of configuration permutations to 
simplify our efforts as a project. We agreed that gate testing all possible 
permutations is intractable.

I disagree with this point, but I don't have the bandwidth available to prove 
it ;)

That’s exactly my point. It takes a chunk of human bandwidth to carry that 
responsibility. If we had a system engineer assigned from each of the various 
upstream OS distros working with Magnum, this would not 

Re: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

2016-03-05 Thread Hongbin Lu
+1

BTW, I am magnum core, not magnum-ui core. Not sure if my vote is counted.

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: March-04-16 7:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum-ui] Proposed Core addition, and removal notice

Magnum UI Cores,

I propose the following changes to the magnum-ui core group [1]:

+ Shu Muto
- Dims (Davanum Srinivas), by request - justified by reduced activity level.

Please respond with your +1 votes to approve this change or -1 votes to oppose.

Thanks,

Adrian


Re: [openstack-dev] [magnum][magnum-ui] Liaisons for I18n

2016-03-05 Thread Hongbin Lu
Adrian,

I think Shu Muto was originally proposed to be a magnum-ui liaison, not magnum 
liaison.

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: March-04-16 7:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][magnum-ui] Liaisons for I18n

Kato,

I have confirmed with Shu Muto, who will be assuming our I18n Liaison role for 
Magnum until further notice. Thanks for raising this important request.

Regards,

Adrian

> On Mar 3, 2016, at 6:53 AM, KATO Tomoyuki  wrote:
> 
> I added Magnum to the list... Feel free to add your name and IRC nick, Shu.
> 
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n
> 
>> One thing to note.
>> 
>> The role of the i18n liaison is not to keep the project well translated.
>> The main role is on the project side,
>> for example, to encourage i18n-related reviews and fixes, or to 
>> suggest what kind of coding is recommended from an i18n point of view.
> 
> Yep, that is a reason why a core reviewer is preferred for liaison.
> We sometimes have various requirements:
> word ordering (block trans), n-plural form, and so on.
> Some of them may not be important for Japanese.
> 
> Regards,
> KATO Tomoyuki
> 
>> 
>> Akihiro
>> 
>> 2016-03-02 12:17 GMT+09:00 Shuu Mutou :
>>> Hi Hongbin, Yuanying and team,
>>> 
>>> Thank you for your recommendation.
>>> I'm keeping the EN to JP translation of Magnum-UI at 100% every day.
>>> I'll do my best, if I become a liaison.
>>> 
>>> Since translation has become another point of review for Magnum-UI, I hope 
>>> that members translate Magnum-UI into their native languages.
>>> 
>>> Best regards,
>>> Shu Muto
> 


Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Hongbin Lu
-kvm instance to help alleviate this. Until then, limiting the scope of 
our gate tests is appropriate. We will continue our efforts to make them 
reasonably efficient.

Thanks,

Adrian


Regards
-steve


Note that it will take a thoughtful approach (subject to discussion) to balance 
these interests. Please take a moment to review the interests above. Do you or 
others disagree with them? If so, why?

Adrian

On Mar 4, 2016, at 9:09 AM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:

I don't think there is any consensus on supporting single distro. There are 
multiple disagreements on this thread, including several senior team members 
and a project co-founder. This topic should be re-discussed (possibly at the 
design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at the midcycle was about what we should gate 
on and ensure feature parity for as a team. Ideally, we'd like to get support 
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in 
the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you 
learn it inside and out.  This means you don't want to relearn a new distro, 
especially if you're an RPM user going to DEB or a DEB user going to RPM.  These 
are non-starter options for operators, and as a result, mean that distro choice 
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros 
running in VMs if they match the host kernel, which makes total sense to me.  
This means on an Ubuntu host if I want support I need to run Ubuntu VMs, on a 
RHEL host I want to run RHEL VMs, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove 
multi-distro support from Magnum.  All I've heard in this thread so far is 
"it's too hard".  It's not too hard, especially with Heat conditionals making 
their way into Mitaka.

Regards
-steve

From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, February 29, 2016 at 9:40 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
>From the midcycle, we decided we weren't going to continue to support 2 
>different versions of the k8s template. Instead, we were going to maintain the 
>Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
>don't think we should continue to develop features for coreos k8s if that is 
>true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS 
support. Why do you want to remove the CoreOS templates from the tree? Please 
note that this is a very big decision; please discuss it with the team 
thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro fo

Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-03-04 Thread Hongbin Lu
I don’t think there is any consensus on supporting single distro. There are 
multiple disagreements on this thread, including several senior team members 
and a project co-founder. This topic should be re-discussed (possibly at the 
design summit).

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: March-04-16 11:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discussion of supporting single/multiple 
OS distro

I don't think anyone is saying that code should somehow block support for 
multiple distros. The discussion at the midcycle was about what we should gate 
on and ensure feature parity for as a team. Ideally, we'd like to get support 
for every distro, I think, but no one wants to have that many gates. Instead, 
the consensus at the midcycle was to have 1 reference distro for each COE, gate 
on those and develop features there, and then have any other distros be 
maintained by those in the community that are passionate about them.

The issue also isn't about how difficult or not it is. The problem we want to 
avoid is spending precious time guaranteeing that new features and bug fixes 
make it through multiple distros.

Corey

On Fri, Mar 4, 2016 at 11:18 AM Steven Dake (stdake) 
<std...@cisco.com<mailto:std...@cisco.com>> wrote:
My position on this is simple.

Operators are used to using specific distros because that is what they used in 
the 90s, and the 00s, and the 10s.  Yes, 25 years of using a distro, and you 
learn it inside and out.  This means you don't want to relearn a new distro, 
especially if you're an RPM user going to DEB or a DEB user going to RPM.  These 
are non-starter options for operators, and as a result, mean that distro choice 
is a must.  Since CoreOS is a new OS in the marketplace, it may make sense to 
consider placing it in "third" position in terms of support.

Besides that problem, various distribution companies will only support distros 
running in VMs if they match the host kernel, which makes total sense to me.  
This means on an Ubuntu host if I want support I need to run Ubuntu VMs, on a 
RHEL host I want to run RHEL VMs, because, hey, I want my issues supported.

For these reasons and these reasons alone, there is no good rationale to remove 
multi-distro support from Magnum.  All I've heard in this thread so far is 
"it's too hard".  It's not too hard, especially with Heat conditionals making 
their way into Mitaka.

Regards
-steve

From: Hongbin Lu <hongbin...@huawei.com<mailto:hongbin...@huawei.com>>
Reply-To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, February 29, 2016 at 9:40 AM
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS 
distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested to 
have Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion to here to get broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS 
support. Why do you want to remove the CoreOS templates from the tree? Please 
note that this is a very big decision; please discuss it with the team 
thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was a part of the COE drivers decision. Since 
each COE driver will only support 1 distro+version+coe we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every coe, the magnum tree would only have support 
for 1 version of 1 distro for each of the 3 COEs (swarm/docker/mesos). Since we 
already are going to support Atomic for swarm, removing coreos and keeping 
Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected 
distro could die in the future. Who knows. Why make Magnum take this huge risk? 
Again, the decision of supporting single distro is a very big decision. Please 
bring it up to the team and have it discuss thoughtfully before making any 
decision. Also, Magnum doesn't have to supp

[openstack-dev] [magnum] SELinux is temporarily disabled due to bug 1551648

2016-03-08 Thread Hongbin Lu
Hi team,

FYI. In short, we have to temporarily disable SELinux [1] due to bug 1551648 
[2].

SELinux is an important security feature of the Linux kernel. It improves 
isolation between neighboring containers on the same host. Previously, Magnum 
had it turned on in each bay node. However, we have to turn it off for now 
because k8s bays do not function if it is turned on. The details are 
described in the bug report [2]. We will turn SELinux back on once the issue is 
resolved (you are welcome to contribute a fix). Thanks.

[1] https://review.openstack.org/#/c/289626/
[2] https://bugs.launchpad.net/magnum/+bug/1551648
Best regards,
Hongbin


Re: [openstack-dev] [magnum] Split k8s-client in separate python package

2016-04-04 Thread Hongbin Lu
Thanks Dims.

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: April-02-16 8:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Split k8s-client in separate python 
package

Hongbin,

Here's what i came up with based on your feedback:
https://review.openstack.org/#/c/300729/

Thanks,
Dims

On Fri, Apr 1, 2016 at 6:19 PM, Hongbin Lu <hongbin...@huawei.com> wrote:
> Dims,
>
> Thanks for splitting the k8sclient code out. Below is my answer to your 
> question.
>
> - Should this be a new openstack/ repo?
> Yes, I think so. We need it in openstack namespace in order to leverage the 
> openstack infra for testing and packaging.
>
> - Would the Magnum team own the repo and use the new python package?
> If this library is mainly used by Magnum, I think the Magnum team has no problem 
> owning it. If it is also used by other projects, I am afraid the Magnum team 
> won't have enough bandwidth to support it. In that case, it would be better to 
> have other teams share the ownership.
>
> Best regards,
> Hongbin
>
> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com]
> Sent: April-01-16 3:36 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum] Split k8s-client in separate python 
> package
>
> Team,
>
> I've been meaning to do this for a while. Short version, see repo:
> https://github.com/dims/python-k8sclient/
>
> Long version:
> - Started from the magnum repo.
> - pulled out just ./magnum/common/pythonk8sclient
> - preserved entire history in case we had to go back
> - reorganized directory structure
> - ran openstack cookie cutter and added generated files
> - added a test that actually works against a live k8s :)
>   
> https://github.com/dims/python-k8sclient/blob/master/k8sclient/tests/t
> est_k8sclient.py
>
> Question:
> - Should this be a new openstack/ repo?
> - Would the Magnum team own the repo and use the new python package?
>
> Thanks,
> Dims
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>



--
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [magnum][kuryr] Shared session in design summit

2016-03-29 Thread Hongbin Lu
Hi all,

As discussed before, our team members want to establish a shared session 
between Magnum and Kuryr. We expect a lot of attendees in the session, so we 
need a large room (fishbowl). Currently, Kuryr has only 1 fishbowl session, and 
they possibly need it for other purposes. A solution is to promote one of the 
Magnum fishbowl sessions to be the shared session, or to leverage one of the 
free fishbowl slots. The schedule is below.

Please vote your favorite time slot: http://doodle.com/poll/zuwercgnw2uecs5y .

Magnum fishbowl session:

* 11:00 - 11:40 (Thursday)

* 11:50 - 12:30

* 1:30 - 2:10

* 2:20 - 3:00

* 3:10 - 3:50

Free fishbowl slots:

* 9:00 - 9:40 (Thursday)

* 9:50 - 10:30

* 3:10 - 3:50 (conflict with Magnum session)

* 4:10 - 4:50 (conflict with Magnum session)

* 5:00 - 5:40 (conflict with Magnum session)

Best regards,
Hongbin


Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

2016-03-30 Thread Hongbin Lu
Gal,

Thursday 4:10 – 4:50 conflicts with a Magnum workroom session, but we can 
choose from:

· 11:00 – 11:40

· 11:50 – 12:30

· 3:10 – 3:50

Please let us know if some of the slots don’t work well with your schedule.

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: March-30-16 2:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

Anything you pick is fine with me. The Kuryr fishbowl session is on Thursday 4:10 - 
4:50, and I personally
think the Magnum integration is important enough that I don't mind using this 
time for the session as well.

Either way, I am also OK with the 11:00-11:40 and the 11:50-12:30 sessions, or the 
3:10-3:50 one.

On Tue, Mar 29, 2016 at 11:32 PM, Hongbin Lu 
<hongbin...@huawei.com<mailto:hongbin...@huawei.com>> wrote:
Hi all,

As discussed before, our team members want to establish a shared session 
between Magnum and Kuryr. We expect a lot of attendees in the session, so we 
need a large room (fishbowl). Currently, Kuryr has only 1 fishbowl session, and 
they possibly need it for other purposes. A solution is to promote one of the 
Magnum fishbowl sessions to be the shared session, or to leverage one of the 
free fishbowl slots. The schedule is below.

Please vote your favorite time slot: http://doodle.com/poll/zuwercgnw2uecs5y .

Magnum fishbowl session:

• 11:00 - 11:40 (Thursday)

• 11:50 - 12:30

• 1:30 - 2:10

• 2:20 - 3:00

• 3:10 - 3:50

Free fishbowl slots:

• 9:00 – 9:40 (Thursday)

• 9:50 – 10:30

• 3:10 – 3:50 (conflict with Magnum session)

• 4:10 – 4:50 (conflict with Magnum session)

• 5:00 – 5:40 (conflict with Magnum session)

Best regards,
Hongbin




--
Best Regards ,

The G.


[openstack-dev] [magnum][magnum-ui] Have magnum jobs respect upper-constraints.txt

2016-03-30 Thread Hongbin Lu
Hi team,

After a quick check, it seems python-magnumclient and magnum-ui don't use upper 
constraints. Magnum (the main repo) uses upper constraints in the integration 
tests (gate-functional-*), but doesn't use them in the other jobs (e.g. py27, 
py34, pep8, docs, coverage). The missing upper constraints could be problematic. 
Tickets were created to fix that: https://bugs.launchpad.net/trove/+bug/1563038 .
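
To make it concrete what respecting upper constraints means in practice, here is 
an illustrative check (not an official tool) that compares the locally installed 
packages against the pins in an upper-constraints.txt file:

# Illustrative only, not an official OpenStack tool: report installed
# packages whose versions differ from the pins in upper-constraints.txt.
import pkg_resources

def check_constraints(path="upper-constraints.txt"):
    mismatches = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            name, _, pinned = line.partition("===")
            pinned = pinned.split(";")[0].strip()  # drop environment markers
            if not pinned:
                continue
            try:
                installed = pkg_resources.get_distribution(name).version
            except pkg_resources.DistributionNotFound:
                continue  # this project does not use the package
            if installed != pinned:
                mismatches.append((name, installed, pinned))
    return mismatches

if __name__ == "__main__":
    for name, installed, pinned in check_constraints():
        print("%s: installed %s, constrained to %s" % (name, installed, pinned))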

Best regards,
Hongbin

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: March-30-16 8:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [release][all] What is upper-constraints.txt?

Folks,

Quick primer/refresher because of some gate/CI issues we saw in the last few 
days with Routes===2.3

upper-constraints.txt is the current set of all the global libraries that 
should be used by all the CI jobs.

This file is in the openstack/requirements repo:
http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt
http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt?h=stable/mitaka

Anyone working on a project, please ensure that all CI jobs respect 
constraints, example from trove below. If jobs don't respect constraints then 
they are more likely to break:
https://review.openstack.org/#/c/298850/

Anyone deploying openstack, please consult this file as it's the one
*sane* set of libraries that we test with.

Yes, global-requirements.txt has the ranges that end up in project requirements 
files. However, upper-constraints.txt is what we test for sure in OpenStack CI.

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [magnum] Generate atomic images using diskimage-builder

2016-03-30 Thread Hongbin Lu
Another use case I can think of is to cache the required docker images in the 
Glance image.

This is an important use case because we have containerized most of the COE 
components (e.g. kube-scheduler, swarm-manager, etc.). As a result, each bay 
needs to pull docker images over the Internet at the provisioning or scaling stage. 
If a large number of bays pull docker images at the same time, it will generate 
a lot of traffic. Therefore, it is desirable to have all the required docker 
images pre-downloaded into the Glance image. I expect we can leverage 
diskimage-builder to achieve the goal.
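
A rough sketch of what that pre-caching step could look like (illustrative only; 
it assumes the docker CLI is available during the image build, and the image 
list is a placeholder, not Magnum's actual requirements):

# Sketch of pre-caching COE images during image build (e.g. from a
# diskimage-builder element). The image list below is a placeholder.
import subprocess

REQUIRED_IMAGES = [
    "gcr.io/google_containers/hyperkube:v1.2.0",   # placeholder tags
    "quay.io/coreos/etcd:v2.2.5",
]

def precache(images, archive="/opt/magnum/docker-images.tar"):
    for image in images:
        subprocess.check_call(["docker", "pull", image])
    # Save everything into one tarball; on first boot the node can run
    # `docker load -i <archive>` instead of pulling over the Internet.
    subprocess.check_call(["docker", "save", "-o", archive] + images)

if __name__ == "__main__":
    precache(REQUIRED_IMAGES)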

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: March-29-16 4:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Generate atomic images using 
diskimage-builder


On multiple occasions in the past, we have had to use a version of some software 
that's not available yet
in the upstream image for bug fixes or new features (Kubernetes, Docker, 
Flannel, ...). Eventually the upstream
image would catch up, but having the tool to customize lets us push forward with 
development, and gate tests
if it makes sense.

Ton Ngo,



From: Yolanda Robla Mota 
>
To: 
>
Date: 03/29/2016 01:35 PM
Subject: Re: [openstack-dev] [magnum] Generate atomic images using 
diskimage-builder





So the advantages I can see with diskimage-builder are:
- we reuse the same tooling that is present in other openstack projects
to generate images, rather than relying on an external image
- it improves the control we have on the contents of the image, instead
of seeing that as a black box. At the moment we can rely on the default
tree for fedora 23, but this can be updated per magnum needs
- reusability: we have atomic 23 now, but why not create magnum images
with dib for ubuntu, or any other distro? Relying on
diskimage-builder makes it easy and flexible, because it's a matter of
adding the right elements.

Best
Yolanda

El 29/03/16 a las 21:54, Steven Dake (stdake) escribió:
> Adrian,
>
> Makes sense.  Do the images have to be built to be mirrored though?  Can't
> they just be put on the mirror sites from upstream?
>
> Thanks
> -steve
>
> On 3/29/16, 11:02 AM, "Adrian Otto" 
> > wrote:
>
>> Steve,
>>
>> I'm very interested in having an image locally cached in glance in each
>> of the clouds used by OpenStack infra. The local caching of the glance
>> images will produce much faster gate testing times. I don't care about
>> how the images are built, but we really do care about the performance
>> outcome.
>>
>> Adrian
>>
>>> On Mar 29, 2016, at 10:38 AM, Steven Dake (stdake) 
>>> >
>>> wrote:
>>>
>>> Yolanda,
>>>
>>> That is a fantastic objective.  Matthieu asked why build our own images
>>> if
>>> the upstream images work and need no further customization?
>>>
>>> Regards
>>> -steve
>>>
>>> On 3/29/16, 1:57 AM, "Yolanda Robla Mota" 
>>> >
>>> wrote:
>>>
 Hi
 The idea is to build our own images using diskimage-builder, rather than
 downloading the image from external sources. That way, the image can
 live in our mirrors, and is built using the same pattern as other
 images
 used in OpenStack.
 It also opens the door to customizing the images, using custom trees, if
 there is a need for it. Currently we rely on the official tree for Fedora 23
 Atomic (https://dl.fedoraproject.org/pub/fedora/linux/atomic/23/) as
 the default.

 Best,
 Yolanda

 El 29/03/16 a las 10:17, Mathieu Velten escribió:
> Hi,
>
> We are using the official Fedora Atomic 23 images here (on Mitaka M1
> however) and it seems to work fine with at least Kubernetes and Docker
> Swarm.
> Any reason to continue building specific Magnum image ?
>
> Regards,
>
> Mathieu
>
> Le mercredi 23 mars 2016 à 12:09 +0100, Yolanda Robla Mota a écrit :
>> Hi
>> I wanted to start a discussion on how Fedora Atomic images are being
>> built. Currently the process for generating the atomic images used
>> on
>> Magnum is described here:
>> http://docs.openstack.org/developer/magnum/dev/build-atomic-image.html .
>> The image needs to be built manually, uploaded to fedorapeople, and
>> then
>> consumed from there in the magnum tests.
>> I have been working on a feature to allow 

[openstack-dev] [magnum] Discuss the blueprint "support-private-registry"

2016-03-29 Thread Hongbin Lu
Hi team,

This is the item we didn't have time to discuss in our team meeting, so I am 
starting the discussion here.

Here is the blueprint: 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry . Per my 
understanding, the goal of the BP is to allow users to specify the URL of their 
private docker registry from which the bays pull the kube/swarm images (if they 
are not able to access Docker Hub or another public registry). An assumption is 
that users need to pre-install their own private registry and upload all the 
required images there. There are several potential issues with this proposal:

* Is the private registry secure or insecure? If secure, how to handle 
the authentication secrets. If insecure, is it OK to connect a secure bay to an 
insecure registry?

* Should we provide an instruction for users to pre-install the private 
registry? If not, how to verify the correctness of this feature?

Thoughts?

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Discuss the blueprint"support-private-registry"

2016-03-31 Thread Hongbin Lu
th read-only access everywhere, but it means you need
to always authenticate at least with this account)
https://github.com/docker/docker/issues/17317
- swarm 1.1 and kub8s 1.0 allow authentication to the registry from
the client (which was good news, and it works fine), handy if you want
to push/pull with authentication.

Cheers,
 Ricardo

>
>
>
> On 2016年03月30日 07:23, Hongbin Lu wrote:
>
> Hi team,
>
>
>
> This is the item we didn’t have time to discuss in our team meeting, so I
> started the discussion in here.
>
>
>
> Here is the blueprint:
> https://blueprints.launchpad.net/magnum/+spec/support-private-registry . Per
> my understanding, the goal of the BP is to allow users to specify the url of
> their private docker registry where the bays pull the kube/swarm images (if
> they are not able to access docker hub or other public registry). An
> assumption is that users need to pre-install their own private registry and
> upload all the required images to there. There are several potential issues
> of this proposal:
>
> · Is the private registry secure or insecure? If secure, how to
> handle the authentication secrets. If insecure, is it OK to connect a secure
> bay to an insecure registry?
>
> · Should we provide an instruction for users to pre-install the
> private registry? If not, how to verify the correctness of this feature?
>
>
>
> Thoughts?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
>
>
> --
> Best Regards, Eli Qiao (乔立勇)
> Intel OTC China
>
>


Re: [openstack-dev] [magnum] Are Floating IPs really needed for all nodes?

2016-03-31 Thread Hongbin Lu
Egor,

I agree with what you said, but I think we need to address the problem that 
some clouds lack public IP addresses. It is not uncommon for a private 
cloud to run without public IP addresses, having already figured out how 
to route traffic in and out. In such cases, a bay doesn’t need to have floating 
IPs and the NodePort feature seems to work with the private IP addresses.

Generally speaking, I think it is useful to have a feature that allows bays to 
work without public IP addresses. I don’t want to end up in a situation where 
Magnum is unusable because the clouds don’t have enough public IP addresses.

Best regards,
Hongbin

From: Guz Egor [mailto:guz_e...@yahoo.com]
Sent: March-31-16 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

-1

Who is going to run/support this proxy? Also keep in mind that the Kubernetes 
Service/NodePort (http://kubernetes.io/docs/user-guide/services/#type-nodeport)
functionality is not going to work without public IPs, and this is a very handy 
feature.

---
Egor


From: 王华 >
To: OpenStack Development Mailing List (not for usage questions) 
>
Sent: Wednesday, March 30, 2016 8:41 PM
Subject: Re: [openstack-dev] [magnum] Are Floating IPs really needed for all 
nodes?

Hi yuanying,

I agree with reducing the usage of floating IPs. But as far as I know, if we need 
to pull
docker images from Docker Hub on the nodes, floating IPs are needed. To reduce the
usage of floating IPs, we can use a proxy: only some nodes would have floating 
IPs, and
the other nodes could access Docker Hub through the proxy.

Best Regards,
Wanghua

On Thu, Mar 31, 2016 at 11:19 AM, Eli Qiao 
> wrote:

Hi Yuanying,
+1
I think we can add an option for whether to use floating IP addresses, since IP 
addresses are
the kind of resource that is not wise to waste.

On 2016年03月31日 10:40, 大塚元央 wrote:
Hi team,

Previously, we had a reason why all nodes should have floating IPs [1].
But now we have LoadBalancer features for masters [2] and minions [3].
Also, minions do not necessarily need to have floating IPs [4].
I think it’s time to remove floating IPs from all nodes.

I know we are using floating IPs in the gate to get log files,
so it’s not a good idea to remove floating IPs entirely.

I want to introduce a `disable-floating-ips-to-nodes` parameter in the bay model.

Thoughts?

[1]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067213.html
[2]: https://blueprints.launchpad.net/magnum/+spec/make-master-ha
[3]: https://blueprints.launchpad.net/magnum/+spec/external-lb
[4]: http://lists.openstack.org/pipermail/openstack-dev/2015-June/067280.html

Thanks
-yuanying






--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China



[openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-03-31 Thread Hongbin Lu
Hi all,

Eli Qiao has been consistently contributing to Magnum for a while. His 
contribution started about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contribution covers 
various aspects (e.g. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao as a core reviewer on the Magnum team. According 
to the OpenStack governance process [1], we require a minimum of 4 +1 votes 
within a 1-week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli will not be able to join the core 
team and will need to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin




Re: [openstack-dev] [magnum] Split k8s-client in separate python package

2016-04-01 Thread Hongbin Lu
Dims,

Thanks for splitting the k8sclient code out. Below is my answer to your 
question.

- Should this be a new openstack/ repo?
Yes, I think so. We need it in openstack namespace in order to leverage the 
openstack infra for testing and packaging.

- Would the Magnum team own the repo and use the new python package?
If this library is mainly used by Magnum, I think the Magnum team has no problem 
owning it. If it is also used by other projects, I am afraid the Magnum team 
won't have enough bandwidth to support it. In that case, it would be better to 
have other teams share the ownership.

Best regards,
Hongbin

-Original Message-
From: Davanum Srinivas [mailto:dava...@gmail.com] 
Sent: April-01-16 3:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Split k8s-client in separate python package

Team,

I've been meaning to do this for a while. Short version, see repo:
https://github.com/dims/python-k8sclient/

Long version:
- Started from the magnum repo.
- pulled out just ./magnum/common/pythonk8sclient
- preserved entire history in case we had to go back
- reorganized directory structure
- ran openstack cookie cutter and added generated files
- added a test that actually works against a live k8s :)
  
https://github.com/dims/python-k8sclient/blob/master/k8sclient/tests/test_k8sclient.py

Question:
- Should this be a new openstack/ repo?
- Would the Magnum team own the repo and use the new python package?

Thanks,
Dims


-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [magnum][keystone][all] Using Keystone /v3/credentials to store TLS certificates

2016-04-13 Thread Hongbin Lu
I think there are two questions here:

1.   Should Magnum decouple from Barbican?

2.   Which option should Magnum use to achieve #1 (leverage the Keystone 
credential store, or other alternatives [1])?

For question #1, the Magnum team has discussed it thoroughly. I think we all 
agreed that Magnum should decouple from Barbican for now (I didn’t hear any 
disagreement from any of our team members). What we are currently debating is 
question #2, that is, which approach we should use to achieve the goal. The 
first option is to store TLS credentials in Keystone. The second option is to 
store the credentials in the Magnum DB. The third option is to eliminate the need 
to store TLS credentials (e.g. switch to another non-TLS authentication 
mechanism). What we want to know is whether the Keystone team allows us to pursue 
the first option. If it is disallowed, I will suggest the Magnum team pursue 
other options.
So, for the original question, does the Keystone team allow us to store encrypted 
data in Keystone? One point of view is that if the data to be stored is already 
encrypted, there will be no disagreement from the Keystone side (so far, all the 
concerns are about the security implications of storing un-encrypted data). 
Could I confirm whether the Keystone team agrees (or doesn’t disagree) with this 
point of view?

[1] https://etherpad.openstack.org/p/magnum-barbican-alternative
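
For concreteness, here is a minimal sketch of what the first option could look 
like against the Identity v3 credentials API (illustrative only; the endpoint, 
token, and IDs are placeholders, and the blob is encrypted client-side before 
it is ever sent to Keystone):

# Sketch only: store a client-side-encrypted TLS blob via Keystone's
# /v3/credentials API. Endpoint, token, and user/project IDs are placeholders.
import base64
import requests

KEYSTONE = "https://keystone.example.com:5000"
TOKEN = "a-valid-keystone-token"
USER_ID = "user-uuid"
PROJECT_ID = "project-uuid"

def store_encrypted_cert(encrypted_bytes):
    body = {
        "credential": {
            "type": "certificate",
            "user_id": USER_ID,
            "project_id": PROJECT_ID,
            # Only ciphertext is sent; Keystone never sees the plaintext cert.
            "blob": base64.b64encode(encrypted_bytes).decode("ascii"),
        }
    }
    resp = requests.post(
        "%s/v3/credentials" % KEYSTONE,
        json=body,
        headers={"X-Auth-Token": TOKEN},
    )
    resp.raise_for_status()
    return resp.json()["credential"]["id"]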

Best regards,
Hongbin

From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com]
Sent: April-13-16 12:08 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][keystone][all] Using Keystone 
/v3/credentials to store TLS certificates



On Tue, Apr 12, 2016 at 8:06 PM, Adrian Otto 
<adrian.o...@rackspace.com<mailto:adrian.o...@rackspace.com>> wrote:
Please don't miss the point here. We are seeking a solution that provides a 
location to place a client-side encrypted blob of data (a TLS cert) that 
multiple magnum-conductor processes on different hosts can reach over the 
network.

We *already* support using Barbican for this purpose, as well as storage in 
flat files (not as secure as Barbican, and only works with a single conductor) 
and are seeking a second alternative for clouds that have not yet adopted 
Barbican, and want to use multiple conductors. Once Barbican is common in 
OpenStack clouds, both alternatives are redundant and can be deprecated. If 
Keystone depends on Barbican, then we have no reason to keep using it. That 
will mean that Barbican is core to OpenStack.

Our alternative to using Keystone is storing the encrypted blobs in the Magnum 
database which would cause us to add an API feature in magnum that is the exact 
functional equivalent of the credential store in Keystone. That is something we 
are trying to avoid by leveraging existing OpenStack APIs.

Is it really unreasonable to make Magnum depend on Barbican? I know I discussed 
this with you previously, but I would like to know how much pushback you're 
really seeing on saying "Barbican is important for these security reasons in a 
scaled-up environment and here is why we made this choice to depend on it". 
Secure by default is far better than an option that is significantly 
sub-optimal.

So, is Barbican support really hampering Magnum in significant ways? If so, 
what can we do to improve the story to make Barbican compelling instead of 
needing this alternative?

+1 to Dolph's comment on Barbican being more mature *and* another +1 for the 
comment that credentials being un-encrypted in keystone makes storing secure 
credentials in keystone significantly less desirable.

These questions are intended to just fill in some blanks I am seeing so we have 
a complete story and can look at prioritizing work/specs/etc.

--
Adrian

On Apr 12, 2016, at 3:44 PM, Dolph Mathews 
<dolph.math...@gmail.com> wrote:

On Tue, Apr 12, 2016 at 3:27 PM, Lance Bragstad 
<lbrags...@gmail.com> wrote:
Keystone's credential API pre-dates barbican. We started talking about having
the credential API backed by barbican after barbican became a thing. I'm not sure
if any work has been done to move the credential API in this direction. From a
security perspective, I think it would make sense for keystone to be backed by
barbican.

+1

And regarding the "inappropriate use of keystone," I'd agree... without this 
spec, keystone is entirely useless as any sort of alternative to Barbican:

  https://review.openstack.org/#/c/284950/

I suspect Barbican will forever be a much more mature choice for Magnum.


On Tue, Apr 12, 2016 at 2:43 PM, Hongbin Lu 
<hongbin...@huawei.com> wrote:
Hi all,

In short, some Magnum team members proposed to store TLS certificates in 
Keystone credential store. As Magnum PTL, I want to get agreements (or 
non-disagreement) from OpenStack community in general, Keystone community in 
particula

Re: [openstack-dev] [Magnum]Cache docker images

2016-04-20 Thread Hongbin Lu


From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: April-20-16 9:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum]Cache docker images

Hongbin,

Both of the approaches you suggested may only work for one binary format. If you
try to use docker on a different system architecture, the pre-cache of images
makes it even more difficult to get the correct images built and loaded.

I assume there are ways to detect the system architecture and kernel
information when we are using diskimage-builder to build the image? If yes, we
can catch a mismatch of system architecture and/or other kernel compatibility
issues at an early stage.
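For illustration, a rough sketch of such an early check, assuming the cached
images are exported as `docker save` tarballs; the x86_64-to-amd64 mapping and
the error handling are assumptions, not an agreed design:

# Sketch: fail the image build early if a pre-cached docker image tarball
# was built for a different architecture than the build host.
import json
import platform
import tarfile

ARCH_MAP = {'x86_64': 'amd64', 'aarch64': 'arm64'}  # assumption: common cases only

def check_image_arch(tar_path):
    host_arch = ARCH_MAP.get(platform.machine(), platform.machine())
    with tarfile.open(tar_path) as tar:
        manifest = json.loads(tar.extractfile('manifest.json').read().decode())
        # The image config JSON named in the manifest carries an
        # "architecture" field (e.g. "amd64").
        config_name = manifest[0]['Config']
        config = json.loads(tar.extractfile(config_name).read().decode())
    if config['architecture'] != host_arch:
        raise RuntimeError('%s was built for %s, but the build host is %s'
                           % (tar_path, config['architecture'], host_arch))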

I suggest we take an approach that allows the Baymodel creator to specify a 
docker registry and/or prefix that will determine where docker images are 
pulled from if they are not found in the local cache. That would give cloud 
operators the option to set up such a registry locally and populate it with the 
right images. This approach would also make it easier to customize the Magnum 
setup by tweaking the container images prior to use.

Works for me.


Thanks,

Adrian

On Apr 19, 2016, at 11:58 AM, Hongbin Lu 
<hongbin...@huawei.com> wrote:
Eli,

The approach of pre-pulling docker images has a problem: it only works for a
specific docker storage driver. In comparison, the tar file approach is
portable across different storage drivers.
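For reference, a minimal sketch of the load step in that tar file approach,
assuming the tarballs are baked into the image under a known directory (the
path is an assumption) and the script is invoked from cloud-init once docker
is running:

# Sketch: load pre-baked docker image tarballs at boot (e.g. from a
# cloud-init runcmd entry). The directory is illustrative, not an agreed layout.
import glob
import subprocess

IMAGE_DIR = '/opt/magnum/docker-images'

def load_cached_images():
    for tarball in sorted(glob.glob(IMAGE_DIR + '/*.tar')):
        # 'docker load' is storage-driver agnostic, which is why the tar
        # approach stays portable across drivers.
        subprocess.check_call(['docker', 'load', '-i', tarball])

if __name__ == '__main__':
    load_cached_images()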

Best regards,
Hongbin

From: taget [mailto:qiaoliy...@gmail.com]
Sent: April-19-16 4:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Magnum]Cache docker images

hi, hello again

I believe you are talking about this bp:
https://blueprints.launchpad.net/magnum/+spec/cache-docker-images
If so, please ignore my previous reply; that may be another topic (solving the
limited-network problem).

I think you are on the right track with building the docker images into the
cloud image, but that image can only bootstrap via cloud-init; without
cloud-init the container image tar files are not loaded at all. So this may not
be the best way.

I'd suggest that the best way may be to pull the docker images while building
the atomic image. Per my understanding, the image build process mounts the
image in read/write mode on some tmp directory and chroots into that directory,
so we can do some custom operations there.

I can give it a try in the build process (I guess rpm-ostree should support some
hook scripts).


On 2016-04-19 11:41, Eli Qiao wrote:
@wanghua

I think there were some discussion already , check 
https://blueprints.launchpad.net/magnum/+spec/support-private-registry
and https://blueprints.launchpad.net/magnum/+spec/allow-user-softwareconfig
On 2016-04-19 10:57, 王华 (Wanghua) wrote:
Hi all,

We want to eliminate pulling docker images over the Internet during bay
provisioning. There are two problems with that approach:
1. Pulling docker images over the Internet is slow and fragile.
2. Some clouds don't have external Internet access.

It is suggested to build all the required images into the cloud images to
resolve the issue.

Here is a solution:
We export the docker images as tar files and put the tar files into a directory
in the image when we build it. Then we add scripts to load the tar files via
cloud-init, so that we don't need to download the docker images.

Any advice for this solution or any better solution?

Regards,
Wanghua





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Best Regards, Eli Qiao (乔立勇)

Intel OTC China





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-04-21 Thread Hongbin Lu
Ricardo,

That is great! It is good to hear Magnum works well on your side.

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: April-21-16 1:48 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> The thread is a month old, but I sent a shorter version of this to
> Daneyon before with some info on the things we dealt with to get Magnum
> deployed successfully. We wrapped it up in a post (there's a video
> linked there with some demos at the end):
> 
> http://openstack-in-production.blogspot.ch/2016/04/containers-and-cern-
> cloud.html
> 
> Hopefully the pointers to the relevant blueprints for some of the
> issues we found will be useful for others.
> 
> Cheers,
>   Ricardo
> 
> On Fri, Mar 18, 2016 at 3:42 PM, Ricardo Rocha <rocha.po...@gmail.com>
> wrote:
> > Hi.
> >
> > We're running a Magnum pilot service - which means it's being
> > maintained just like all other OpenStack services and running on the
> > production infrastructure, but only available to a subset of tenants
> > for a start.
> >
> > We're learning a lot in the process and will happily report on this
> in
> > the next couple weeks.
> >
> > The quick summary is that it's looking good and stable with a few
> > hiccups in the setup, which are handled by patches already under review.
> > The one we need the most is the trustee user (USER_TOKEN in the bay
> > heat params is preventing scaling after the token expires), but with
> > the review in good shape we look forward to try it very soon.
> >
> > Regarding barbican we'll keep you posted, we're working on the
> missing
> > puppet bits.
> >
> > Ricardo
> >
> > On Fri, Mar 18, 2016 at 2:30 AM, Daneyon Hansen (danehans)
> > <daneh...@cisco.com> wrote:
> >> Adrian/Hongbin,
> >>
> >> Thanks for taking the time to provide your input on this matter.
> After reviewing your feedback, my takeaway is that Magnum is not ready
> for production without implementing Barbican or some other future
> feature such as the Keystone option Adrian provided.
> >>
> >> All,
> >>
> >> Is anyone using Magnum in production? If so, I would appreciate your
> input.
> >>
> >> -Daneyon Hansen
> >>
> >>> On Mar 17, 2016, at 6:16 PM, Adrian Otto <adrian.o...@rackspace.com>
> wrote:
> >>>
> >>> Hongbin,
> >>>
> >>> One alternative we could discuss as an option for operators that
> have a good reason not to use Barbican, is to use Keystone.
> >>>
> >>> Keystone credentials store:
> >>> http://specs.openstack.org/openstack/keystone-
> specs/api/v3/identity-
> >>> api-v3.html#credentials-v3-credentials
> >>>
> >>> The contents are stored in plain text in the Keystone DB, so we
> would want to generate an encryption key per bay, encrypt the
> certificate and store it in keystone. We would then use the same key to
> decrypt it upon reading the key back. This might be an acceptable
> middle ground for clouds that will not or can not run Barbican. This
> should work for any OpenStack cloud since Grizzly. The total amount of
> code in Magnum would be small, as the API already exists. We would need
> a library function to encrypt and decrypt the data, and ideally a way
> to select different encryption algorithms in case one is judged weak at
> some point in the future, justifying the use of an alternate.
> >>>
> >>> Adrian
> >>>
> >>>> On Mar 17, 2016, at 4:55 PM, Adrian Otto
> <adrian.o...@rackspace.com> wrote:
> >>>>
> >>>> Hongbin,
> >>>>
> >>>>> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin...@huawei.com>
> wrote:
> >>>>>
> >>>>> Adrian,
> >>>>>
> >>>>> I think we need a broader set of inputs in this matter, so I
> moved the discussion from whiteboard back to here. Please check my
> replies inline.
> >>>>>
> >>>>>> I would like to get a clear problem statement written for this.
> >>>>>> As I see it, the problem is that there is no safe place to put
> certificates in clouds that do not run Barbican.
> >>>>>> It seems the solution is to make it easy to add Barbican such
> that it's included in the setup for Magnum.
> >>>>> No, the solution is to explore an non-Barbican solut
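For illustration, here is a minimal sketch of the small encrypt/decrypt library
function with a selectable algorithm that Adrian describes above, assuming the
cryptography library; the registry, algorithm names, and key handling are
assumptions, not an agreed Magnum design:

# Sketch: pluggable encryption for the per-bay certificate blob, so a weak
# algorithm can be retired later without touching already-stored data (the
# algorithm name would be stored alongside the blob).
import base64
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def _aes_gcm_encrypt(key, plaintext):
    nonce = os.urandom(12)
    return base64.b64encode(nonce + AESGCM(key).encrypt(nonce, plaintext, None))

def _aes_gcm_decrypt(key, token):
    raw = base64.b64decode(token)
    return AESGCM(key).decrypt(raw[:12], raw[12:], None)

# Registry keyed by algorithm name; new algorithms are added here.
CIPHERS = {'aes-gcm': (_aes_gcm_encrypt, _aes_gcm_decrypt)}

def encrypt_cert(algorithm, key, cert_pem):
    encrypt, _ = CIPHERS[algorithm]
    return algorithm, encrypt(key, cert_pem.encode())

def decrypt_cert(algorithm, key, blob):
    _, decrypt = CIPHERS[algorithm]
    return decrypt(key, blob).decode()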

Re: [openstack-dev] [magnum] Seek advices for a licence issue

2016-04-23 Thread Hongbin Lu
Jay,

I will discuss the proposal [1] at the design summit. Do you plan to contribute
to this effort, or is someone from the DCOS community interested in contributing?

[1] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos

Best regards,
Hongbin

From: Jay Lau [mailto:jay.lau@gmail.com]
Sent: April-22-16 12:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Seek advices for a licence issue

I got confirmation from Mesosphere that we can use the open source DC/OS in
Magnum now, so it is a good time to enhance the Mesos bay to the open source DC/OS.
From Mesosphere
DC/OS software is licensed under the Apache License, so you should feel free to 
use it within the terms of that license.
---
Thanks.

On Thu, Apr 21, 2016 at 5:35 AM, Hongbin Lu 
<hongbin...@huawei.com> wrote:
Hi Mark,

I have gone through the announcement in detail. From my point of view, it seems
to resolve the license issue that was blocking us before. I have included the
Magnum team in the ML to see if our team members have any comments.

Thanks for the support from the Foundation.

Best regards,
Hongbin

From: Mark Collier [mailto:m...@openstack.org]
Sent: April-19-16 12:36 PM
To: Hongbin Lu
Cc: foundat...@lists.openstack.org; 
Guang Ya GY Liu
Subject: Re: [OpenStack Foundation] [magnum] Seek advices for a licence issue

Hopefully today’s news that Mesosphere is open sourcing major components of
DCOS under an Apache 2.0 license will make things easier:

https://mesosphere.com/blog/2016/04/19/open-source-dcos/

I’ll be interested to hear your take after you have time to look at it in more 
detail, Hongbin.

Mark



On Apr 9, 2016, at 10:02 AM, Hongbin Lu 
<hongbin...@huawei.com> wrote:

Hi all,

A brief introduction to myself: I am the Magnum Project Team Lead (PTL). Magnum
is the OpenStack container service. I wrote this email because the Magnum team
is seeking clarification on a licence issue around shipping third-party software
(DCOS [1] in particular), and I was advised to consult the OpenStack Board of
Directors in this regard.

Before getting into the question, I think it is better to provide some
background information. A feature provided by Magnum is to provision a container
management tool on top of a set of Nova instances. One of the container
management tools Magnum supports is Apache Mesos [2]. Generally speaking, Magnum
ships Mesos by providing a custom cloud image with the necessary packages
pre-installed. So far, all the shipped components are open source with
appropriate licenses, so we are good.

Recently, one of our contributors suggested extending the Mesos support to DCOS
[3]. The Magnum team is unclear whether there is a license issue with shipping
DCOS, which looks like a closed-source product but has a community edition in
Amazon Web Services [4]. I want to know what appropriate actions the Magnum team
should take in this pursuit, or whether we should stop pursuing this direction
further. Advice is greatly appreciated. Please let us know if we need to provide
further information. Thanks.

[1] https://docs.mesosphere.com/
[2] http://mesos.apache.org/
[3] https://blueprints.launchpad.net/magnum/+spec/mesos-dcos
[4] 
https://docs.mesosphere.com/administration/installing/installing-community-edition/

Best regards,
Hongbin



___
Foundation mailing list
foundat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Thanks,
Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-23 Thread Hongbin Lu
I do not necessarily agree with the viewpoint below, but it was the majority
viewpoint when I was trying to sell Magnum. There are people who are interested
in adopting Magnum, but they ran away after they figured out that what Magnum
actually offers is a COE deployment service. My takeaway is that COE deployment
is not the real pain point, and there are several alternatives available (Heat,
Ansible, Chef, Puppet, Juju, etc.). Limiting Magnum to be a COE deployment
service might prolong the existing adoption problem.

Best regards,
Hongbin

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: April-20-16 6:51 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified 
abstraction for all COEs

If Magnum is focused on installation and management of COEs, it will be unclear
how much it differs from Heat and other generic orchestration tools. It looks
like most of the current Magnum functionality is provided by Heat. A Magnum
focus on deployment will potentially lead to another Heat-like API.
Unless Magnum is really focused on containers, its value will be minimal for
OpenStack users who already use Heat/Orchestration.


On Wed, Apr 20, 2016 at 3:12 PM, Keith Bray wrote:
Magnum doesn't have to preclude tight integration for single COEs you
speak of.  The heavy lifting of tight integration of the COE in to
OpenStack (so that it performs optimally with the infra) can be modular
(where the work is performed by plug-in models to Magnum, not performed by
Magnum itself. The tight integration can be done by leveraging existing
technologies (Heat and/or choose your DevOps tool of choice:
Chef/Ansible/etc). This allows interested community members to focus on
tight integration of whatever COE they want, focusing specifically on the
COE integration part, contributing that integration focus to Magnum via
plug-ins, without having to actually know much about Magnum, but instead
contribute to the COE plug-in using DevOps tools of choice.   Pegging
Magnum to one-and-only one COE means there will be a Magnum2, Magnum3,
etc. project for every COE of interest, all with different ways of kicking
off COE management.  Magnum could unify that experience for users and
operators, without picking a winner in the COE space ‹ this is just like
Nova not picking a winner between VM flavors or OS types.  It just
facilitates instantiation and management of thins.  Opinion here:  The
value of Magnum is in being a light-weight/thin API, providing modular
choice and plug-ability to COE provisioning and management, thereby
providing operators and users choice of COE instantiation and management
(via the bay concept), where each COE can be as tightly or loosely
integrated as desired by different plug-ins contributed to perform the COE
setup and configurations.  So, Magnum could have two or more swarm plug-in
options contributed to the community.. One overlays generic swarm on VMs.
The other swarm plug-in could instantiate swarm tightly integrated to
neutron, keystone, etc on to bare metal.  Magnum just facilitates a plug-in
model with a thin API to offer choice of COE instantiation and management.
The plug-in does the heavy lifting using whatever methods desired by the
curator.

That's my $0.02.

-Keith

On 4/20/16, 4:49 PM, "Joshua Harlow" wrote:

>Thierry Carrez wrote:
>> Adrian Otto wrote:
>>> This pursuit is a trap. Magnum should focus on making native container
>>> APIs available. We should not wrap APIs with leaky abstractions. The
>>> lowest common denominator of all COEs is an remarkably low value API
>>> that adds considerable complexity to Magnum that will not
>>> strategically advance OpenStack. If we instead focus our effort on
>>> making the COEs work better on OpenStack, that would be a winning
>>> strategy. Support and compliment our various COE ecosystems.
>
>So I'm all for avoiding 'wrap APIs with leaky abstractions' and 'making
>COEs work better on OpenStack' but I do dislike the part about COEs
>(plural) because it is once again the old non-opinionated problem that
>we (as a community) suffer from.
>
>Just my 2 cents, but I'd almost rather we pick one COE and integrate
>that deeply/tightly with openstack, and yes if this causes some part of
>the openstack community to be annoyed, meh, to bad. Sadly I have a
>feeling we are hurting ourselves by continuing to try to be everything
>and not picking anything (it's a general thing we, as a group, seem to
>be good at, lol). I mean I get the reason to just support all the
>things, but it feels like we as a community could just pick something,
>work together on figuring out how to pick one, using all these bright
>leaders we have to help make that possible (and yes this might piss some
>people off, to bad). Then work toward making that something great and
>move on...
>
>>
>> 

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-23 Thread Hongbin Lu
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build
> unified
> >> abstraction for all COEs
> >>
> >> On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> >> > Here's where we disagree.
> >>
> >> We may have to agree to disagree.
> >>
> >> > Your speaking for everyone in the world now, and all you need is
> >> > one counter example. I'll be that guy. Me. I want a common
> >> > abstraction for some common LCD stuff.
> >>
> >> We also disagree on this. Just because one human wants something
> does
> >> not make implementing that feature a good idea. In fact, good design
> >> is largely about appropriately and selectively saying no.
> >>
> >> Now I'm not going to pretend that we're good at design around here...
> >> we seem to very easily fall into the trap that your assertion
> >> presents. But in almost every one of those cases, having done so
> >> winds up having been a mistake.
> >>
> >> > Both Sahara and Trove have LCD abstractions for very common things.
> >> > Magnum should too.
> >> >
> >> > You are falsely assuming that if an LCD abstraction is provided,
> >> > then users cant use the raw api directly. This is false. There is
> >> > no either/or. You can have both. I would be against it too if they
> >> > were mutually exclusive. They are not.
> >>
> >> I'm not assuming that at all. I'm quite clearly asserting that the
> >> existence of an OpenStack LCD is a Bad Idea. This is a thing we
> >> disagree about.
> >>
> >> I think it's unfriendly to the upstreams in question. I think it
> does
> >> not provide significant enough value to the world to justify that
> >> unfriendliness. And also, https://xkcd.com/927/
> >>
> >> > Thanks, Kevin  From: Monty
> >> > Taylor [mord...@inaugust.com] Sent: Thursday, April 21, 2016 10:22
> >> > AM
> >> > To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
> >> > [magnum][app-catalog][all] Build unified abstraction for all COEs
> >> >
> >> > On 04/21/2016 11:03 AM, Tim Bell wrote:
> >> >>
> >> >>
> >> >> On 21/04/16 17:38, "Hongbin Lu" <hongbin...@huawei.com> wrote:
> >> >>
> >> >>>
> >> >>>
> >> >>>> -Original Message- From: Adrian Otto
> >> >>>> [mailto:adrian.o...@rackspace.com] Sent: April-21-16 10:32 AM
> >> >>>> To: OpenStack Development Mailing List (not for usage
> >> >>>> questions) Subject: Re: [openstack-dev]
> >> >>>> [magnum][app-catalog][all] Build unified abstraction for all
> >> >>>> COEs
> >> >>>>
> >> >>>>
> >> >>>>> On Apr 20, 2016, at 2:49 PM, Joshua Harlow
> >> >>>>> <harlo...@fastmail.com>
> >> >>>> wrote:
> >> >>>>>
> >> >>>>> Thierry Carrez wrote:
> >> >>>>>> Adrian Otto wrote:
> >> >>>>>>> This pursuit is a trap. Magnum should focus on making native
> >> >>>>>>> container APIs available. We should not wrap APIs with leaky
> >> >>>>>>> abstractions. The lowest common denominator of all COEs is
> an
> >> >>>>>>> remarkably low value API that adds considerable complexity
> to
> >> >>>> Magnum
> >> >>>>>>> that will not strategically advance OpenStack. If we instead
> >> >>>>>>> focus our effort on making the COEs work better on OpenStack,
> >> >>>>>>> that would be a winning strategy. Support and compliment our
> >> >>>>>>> various COE
> >> >>>> ecosystems.
> >> >>>>>
> >> >>>>> So I'm all for avoiding 'wrap APIs with leaky abstractions'
> >> >>>>> and 'making COEs work better on OpenStack' but I do dislike
> the
> >> >>>>> part
> >> >>>> about COEs (plural) because it is once again the old
> >> >>>> non-opinionated problem that we (as a community) suffer from.
> >> >>>>>
> >> >>>>> Just my 2 cents, but I'd almost rather we pick one COE and
>

Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Hongbin Lu
Hi Monty,

I respect your position, but I want to point out that there is not only one
human who wants this. There is a group of people who want this. I have been
working on Magnum for about a year and a half. Along the way, I have been
researching how to attract users to Magnum. My observation is that there are two
groups of potential users. The first group of users are generally in the domain
of individual COEs and they want to use the native COE APIs. The second group of
users are generally outside that domain and they want an OpenStack way to manage
containers. Below are the specific use cases:
* Some people want to migrate the workload from VM to container
* Some people want to support hybrid deployment (VMs & containers) of their 
application
* Some people want to bring containers (in Magnum bays) to a Heat template, and 
enable connections between containers and other OpenStack resources
* Some people want to bring containers to Horizon
* Some people want to send container metrics to Ceilometer
* Some people want a portable experience across COEs
* Some people just want a container and don't want the complexities of others 
(COEs, bays, baymodels, etc.)

I think we need to research how large the second group of users is. Then, based 
on the data, we can decide if the LCD APIs should be part of Magnum, a Magnum 
plugin, or it should not exist. Thoughts?

Best regards,
Hongbin 

> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: April-21-16 4:42 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
> abstraction for all COEs
> 
> On 04/21/2016 03:18 PM, Fox, Kevin M wrote:
> > Here's where we disagree.
> 
> We may have to agree to disagree.
> 
> > Your speaking for everyone in the world now, and all you need is one
> > counter example. I'll be that guy. Me. I want a common abstraction
> for
> > some common LCD stuff.
> 
> We also disagree on this. Just because one human wants something does
> not make implementing that feature a good idea. In fact, good design is
> largely about appropriately and selectively saying no.
> 
> Now I'm not going to pretend that we're good at design around here...
> we seem to very easily fall into the trap that your assertion presents.
> But in almost every one of those cases, having done so winds up having
> been a mistake.
> 
> > Both Sahara and Trove have LCD abstractions for very common things.
> > Magnum should too.
> >
> > You are falsely assuming that if an LCD abstraction is provided, then
> > users cant use the raw api directly. This is false. There is no
> > either/or. You can have both. I would be against it too if they were
> > mutually exclusive. They are not.
> 
> I'm not assuming that at all. I'm quite clearly asserting that the
> existence of an OpenStack LCD is a Bad Idea. This is a thing we
> disagree about.
> 
> I think it's unfriendly to the upstreams in question. I think it does
> not provide significant enough value to the world to justify that
> unfriendliness. And also, https://xkcd.com/927/
> 
> > Thanks, Kevin  From: Monty
> > Taylor [mord...@inaugust.com] Sent: Thursday, April 21, 2016 10:22 AM
> > To: openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
> > [magnum][app-catalog][all] Build unified abstraction for all COEs
> >
> > On 04/21/2016 11:03 AM, Tim Bell wrote:
> >>
> >>
> >> On 21/04/16 17:38, "Hongbin Lu" <hongbin...@huawei.com> wrote:
> >>
> >>>
> >>>
> >>>> -Original Message- From: Adrian Otto
> >>>> [mailto:adrian.o...@rackspace.com] Sent: April-21-16 10:32 AM
> >>>> To: OpenStack Development Mailing List (not for usage
> >>>> questions) Subject: Re: [openstack-dev] [magnum][app-catalog][all]
> >>>> Build unified abstraction for all COEs
> >>>>
> >>>>
> >>>>> On Apr 20, 2016, at 2:49 PM, Joshua Harlow <harlo...@fastmail.com>
> >>>> wrote:
> >>>>>
> >>>>> Thierry Carrez wrote:
> >>>>>> Adrian Otto wrote:
> >>>>>>> This pursuit is a trap. Magnum should focus on making
> >>>>>>> native container APIs available. We should not wrap APIs
> >>>>>>> with leaky abstractions. The lowest common denominator of
> >>>>>>> all COEs is an remarkably low value API that adds
> >>>>>>> considerable complexity to
> >>>> Magnum
> >>>>>>> that will not strategically advance 

[openstack-dev] [magnum] Notes for Magnum design summit

2016-04-29 Thread Hongbin Lu
Hi team,

For reference, below is a summary of the discussions/decisions in Austin design 
summit. Please feel free to point out if anything is incorrect or incomplete. 
Thanks.

1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
- Refactor existing code into bay drivers
- Each bay driver will be versioned
- Individual bay drivers can have API extensions, and the magnum CLI could load
the extensions dynamically
- Work incrementally and support same API before and after the driver change

2. Bay lifecycle operations: 
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
- Support the following operations: reset the bay, rebuild the bay, rotate TLS 
certificates in the bay, adjust storage of the bay, scale the bay.

3. Scalability: https://etherpad.openstack.org/p/newton-magnum-scalability
- Implement Magnum plugin for Rally
- Implement the spec to address the scalability of deploying multiple bays 
concurrently: https://review.openstack.org/#/c/275003/

4. Container storage: 
https://etherpad.openstack.org/p/newton-magnum-container-storage
- Allow choice of storage driver
- Allow choice of data volume driver
- Work with Kuryr/Fuxi team to have data volume driver available in COEs 
upstream

5. Container network: 
https://etherpad.openstack.org/p/newton-magnum-container-network
- Discuss how to scope/pass/store OpenStack credential in bays (needed by Kuryr 
to communicate with Neutron).
- Several options were explored. No perfect solution was identified.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

8. Unified abstraction for COEs: 
https://etherpad.openstack.org/p/newton-magnum-unified-abstraction
- Create a new project for this efforts
- Alter Magnum mission statement to clarify its goal (Magnum is not a container 
service, it is sort of a COE management service)

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

11. Others: https://etherpad.openstack.org/p/newton-magnum-meetup
- Clear Container support: Clear Container needs to integrate with COEs first. 
After the integration is done, Magnum team will revisit bringing the Clear 
Container COE to Magnum.
- Enhance mesos bay to DCOS bay: Need to do it step-by-step: First, create a 
new DCOS bay type. Then, deprecate and delete the mesos bay type.
- Start enforcing API deprecation policy: 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
- Freeze API v1 after some patches are merged.
- Multi-tenancy within a bay: not the priority in Newton cycle
- Manually manage bay nodes (instead of being managed by Heat ResourceGroup): 
It can address the use case of heterogeneity of bay nodes (i.e. different 
availability zones, flavors), but need to elaborate the details further.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to provision minion nodes

2016-04-26 Thread Hongbin Lu


> -Original Message-
> From: Ma, Wen-Tao (Mike, HP Servers-PSC-BJ) [mailto:wentao...@hpe.com]
> Sent: April-26-16 3:01 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
> provision minion nodes
> 
> Hi Hongbin, Ricardo
> This is Mike; I am working with Gary now.
> Thanks for Ricardo's good suggestion. I have tried the "map/index"
> method; we can use it to pass the minion_flavor_map and the index
> into the minion cluster stack. It works well.
> I think we can update magnum baymodel-create to set the N minion
> flavors in the minion_flavor_map and assign minion counts for each
> flavor.
> For example :
> magnum baymodel-create --name k8s-bay-model  --flavor-id minion-flavor-
> 0:3,minion-flavor-1:5, minion-flavor-2:2. It will create 3 types flavor

The suggested approach seems to break the existing behaviour. I think it is 
better to support this feature in a backward-compatible way. How about using 
labels:

$ magnum baymodel-create --name k8sbaymodel --flavor-id minion-flavor-0 
--node-count 3 --labels extra-flavor-ids=minions-flavor-1:5,minion-flavor-2:2
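To illustrate the labels idea, here is a tiny sketch of how such a label value
could be parsed on the Magnum side; the label name extra-flavor-ids follows the
suggestion above, and the default-count behaviour is an assumption:

# Sketch: parse a baymodel label like
#   extra-flavor-ids=minion-flavor-1:5,minion-flavor-2:2
# into a mapping of flavor name -> node count.
def parse_extra_flavor_ids(label_value):
    flavors = {}
    for entry in label_value.split(','):
        flavor, _, count = entry.strip().partition(':')
        # assumption: a missing count defaults to one node of that flavor
        flavors[flavor] = int(count) if count else 1
    return flavors

# Example:
# parse_extra_flavor_ids('minion-flavor-1:5,minion-flavor-2:2')
# -> {'minion-flavor-1': 5, 'minion-flavor-2': 2}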

> minion node and total minion nodes count is 10. The magnum baymodel.py
> will parse this dictionary and pass them to the heat template
> parameters minion_flavor_map, minion_flavor_count_map. Then the heat
> stack will work well.
> 
> kubecluster-fedora-ironic.yaml
> parameters:
>   minion_flavor_map:
> type: json
> default:
>   '0': minion-flavor-0
>   '1': minion-flavor-1
>   '2': minion-flavor-2
> 
>   minion_flavor_count_map:
> type: json
> default:
>   '0': 3
>   '1': 5
>   '2': 2
> 
> resources:
> kube_minions_flavors:
> type: OS::Heat::ResourceGroup
> properties:
>   count: 3  # one inner stack per entry in minion_flavor_map
>   resource_def:
> type: kubecluster-minion-fedora-ironic.yaml
> properties:
>   minion_flavor_map: {get_param: minion_flavor_map}
>   minion_flavor_count_map: {get_param: minion_flavor_count_map}
>   minion_flavor_index: '%index%'
> 
> What do you think about this interface in the magnum baymodel to support N
> flavors for provisioning minion nodes? Do you have any comments about this
> design for this feature?
> 
> Thanks && Regards
> Mike Ma
> HP Servers Core Platform Software China Email wentao...@hpe.com
> 
> -Original Message-
> From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
> Sent: Monday, April 25, 2016 3:37 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: Ma, Wen-Tao (Mike, HP Servers-PSC-BJ) <wentao...@hpe.com>
> Subject: RE: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
> provision minion nodes
> 
> Hi Ricardo,
> 
> This is really good suggestion. I'd like to see whether we can use
> "foreach"/"repeat" in ResourceGroup in Heat.
> 
> Regards,
> Gary Duan
> 
> -Original Message-
> From: Ricardo Rocha [mailto:rocha.po...@gmail.com]
> Sent: Thursday, April 21, 2016 3:49 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
> provision minion nodes
> 
> Hi Hongbin.
> 
> On Wed, Apr 20, 2016 at 8:13 PM, Hongbin Lu <hongbin...@huawei.com>
> wrote:
> >
> >
> >
> >
> > From: Duan, Li-Gong (Gary, HPServers-Core-OE-PSC)
> > [mailto:li-gong.d...@hpe.com]
> > Sent: April-20-16 3:39 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: [openstack-dev] [Magnum] Magnum supports 2 Nova flavor to
> > provision minion nodes
> >
> >
> >
> > Hi Folks,
> >
> >
> >
> > We are considering whether Magnum can support 2 Nova flavors to
> > provision Kubernetes and other COE minion nodes.
> >
> > This requirement comes from the below use cases:
> >
> > -  There are 2 kinds of baremetal machines at the customer site: one is
> > legacy machines which don't support UEFI secure boot, and the others are
> > new machines which support UEFI secure boot. The user wants to use Magnum
> > to provision a Magnum bay of Kubernetes from these 2 kinds of
> > baremetal machines, and for the machines supporting secure boot, the user
> > wants to use UEFI secure boot to boot them up. And 2 Kubernetes
> > labels (secure-booted and
> > non-secure-booted) are created, and the user can deploy their
> > data-sensitive/critical workload/containers/pods on the bar

[openstack-dev] [magnum] Link to the latest atomic image

2016-04-21 Thread Hongbin Lu
Hi team,

Based on a request, I created a link to the latest atomic image that Magnum is 
using: https://fedorapeople.org/groups/magnum/fedora-atomic-latest.qcow2 . We 
plan to keep this link pointing to the newest atomic image so that we can avoid 
updating the name of the image for every image upgrade. A ticket was created 
for updating the docs accordingly: 
https://bugs.launchpad.net/magnum/+bug/1573361 .
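For anyone refreshing an existing cloud against the new link, here is a rough
sketch of downloading and registering the image with python-glanceclient; the
auth values are placeholders, and the os_distro property is the one Magnum
expects on atomic images:

# Sketch: fetch the "latest" atomic image and register it in Glance.
import requests
from glanceclient import Client as GlanceClient
from keystoneauth1 import loading, session

IMAGE_URL = 'https://fedorapeople.org/groups/magnum/fedora-atomic-latest.qcow2'
IMAGE_FILE = '/tmp/fedora-atomic-latest.qcow2'

# Download the image (streamed, since the qcow2 is large).
with requests.get(IMAGE_URL, stream=True) as resp, open(IMAGE_FILE, 'wb') as f:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=1024 * 1024):
        f.write(chunk)

# Authenticate with Keystone (credentials/URLs here are placeholders).
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://controller:5000/v3',
                                username='admin', password='secret',
                                project_name='admin',
                                user_domain_id='default',
                                project_domain_id='default')
sess = session.Session(auth=auth)

# Register and upload the image with the os_distro property set for Magnum.
glance = GlanceClient('2', session=sess)
image = glance.images.create(name='fedora-atomic-latest',
                             disk_format='qcow2',
                             container_format='bare',
                             visibility='public',
                             os_distro='fedora-atomic')
with open(IMAGE_FILE, 'rb') as f:
    glance.images.upload(image.id, f)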

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][kuryr] Nested containers networking

2016-05-23 Thread Hongbin Lu
Hi Kuryr team,

I want to start this ML thread to sync up on the latest status of the nested
container networking implementation. Could I know who is implementing this
feature on the Kuryr side and how the Magnum team could help in this effort? In
addition, I wonder if it makes sense to establish cross-project liaisons between
Kuryr and Magnum. Magnum relies on Kuryr to implement several important features,
so I think it is helpful to set up a communication channel between both teams.
Thoughts?

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] FW: Mentors needed in specific technical areas

2016-05-23 Thread Hongbin Lu
FYI,

If you are interested in being a mentor for containers or other areas, check below...

Best regards,
Hongbin

From: Emily K Hugenbruch [mailto:ekhugenbr...@us.ibm.com]
Sent: May-23-16 10:25 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [PTLs][all][mentoring] Mentors needed in specific 
technical areas


Hi,
The lightweight mentoring program sponsored by the Women of OpenStack has 
really taken off, and we have about 35 mentees looking for technical help that 
we don't have mentors for. We're asking for help from the PTLs to announce the 
mentoring program in team meetings then direct people to the guidelines 
(here
 and 
here)
 and signup form if 
they're interested.

Mentors should be regular contributors to a project, with an interest in 
helping new people and about 4 hours a month for mentoring. They do not have to 
be women; the program is just sponsored by WoO, we welcome all mentees and 
mentors.

These are the projects/areas where we especially need mentors:

 *   Cinder
 *   Containers
 *   Documentation
 *   Glance
 *   Keystone
 *   Murano
 *   Neutron
 *   Nova
 *   Ops
 *   Searchlight
 *   Telemetry
 *   TripleO
 *   Trove
If you have any questions you can contact me, or ask on openstack-women where 
the mentoring committee hangs out.
Thanks!
Emily Hugenbruch
IRC: ekhugen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [higgins] Meeting reminder

2016-05-22 Thread Hongbin Lu
Hi all,

This is a reminder that we are going to have the second Higgins team meeting
tomorrow. Hope to see you all there.

https://wiki.openstack.org/wiki/Higgins#Agenda_for_2016-05-24_0300_UTC

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

