Re: [openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core reviewer team

2016-07-25 Thread Cammann, Tom
+1 great addition to the team

From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, 22 July 2016 at 21:27
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum] Proposing Spyros Trigazis for Magnum core reviewer team

Hi all,

Spyros has consistently contributed to Magnum for a while. In my opinion, what 
differentiates him from others is the significance of his contribution, which 
adds concrete value to the project. For example, the operator-oriented install 
guide he delivered has attracted a significant number of users to install Magnum, 
which facilitates the adoption of the project. I would like to emphasize that 
the Magnum team has been working hard but struggling to increase adoption, 
and Spyros’s contribution means a lot in this regard. He also completed 
several essential and challenging tasks, such as adding support for OverlayFS, 
adding a Rally job for Magnum, etc. Overall, I am impressed by the number of 
high-quality patches he has submitted. He is also helpful in code reviews, and his 
comments often help us identify pitfalls that are not easy to spot. He is 
also very active on IRC and the ML. Based on his contribution and expertise, I 
think he is qualified to be a Magnum core reviewer.

I am happy to propose Spyros to be a core reviewer of Magnum team. According to 
the OpenStack Governance process [1], we require a minimum of 4 +1 votes from 
Magnum core reviewers within a 1 week voting window (consider this proposal as 
a +1 vote from me). A vote of -1 is a veto. If we cannot get enough votes or 
there is a veto vote prior to the end of the voting window, Spyros is not able 
to join the core team and needs to wait 30 days to reapply.

The voting is open until Friday, July 29th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-05-16 Thread Cammann, Tom
The discussion at the summit was very positive around this requirement, but as 
this change will have a large impact on Magnum it will need a spec.

On the API of things, I was thinking a slightly more generic approach to 
incorporate other lifecycle operations into the same API.
E.g.:
magnum bay-manage <bay> <command>

magnum bay-manage <bay> reset --hard
magnum bay-manage <bay> rebuild
magnum bay-manage <bay> node-delete <node>
magnum bay-manage <bay> node-add --flavor <flavor>
magnum bay-manage <bay> node-reset <node>
magnum bay-manage <bay> node-list

Tom

From: Yuanying OTSUKA
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, 16 May 2016 at 01:07
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

Hi,

I think users also want to specify which node to delete, 
so we should manage each “node” individually.

For example:
$ magnum node-create --bay …
$ magnum node-list --bay
$ magnum node-delete $NODE_UUID

Anyway, if Magnum wants to manage the lifecycle of container infrastructure, 
this feature is necessary.

Thanks
-yuanying


On Mon, 16 May 2016 at 7:50, Hongbin Lu <hongbin...@huawei.com> wrote:
Hi all,

This is a continued discussion from the design summit. For recap, Magnum 
manages bay nodes by using ResourceGroup from Heat. This approach works, but it 
is infeasible to manage heterogeneity across bay nodes, which is a 
frequently demanded feature. As an example, there is a request to provision bay 
nodes across availability zones [1]. There is another request to provision bay 
nodes with different sets of flavors [2]. For the requested features above, 
ResourceGroup won’t work very well.

The proposal is to remove the usage of ResourceGroup and manually create a Heat 
stack for each bay node. For example, for creating a cluster with 2 masters 
and 3 minions, Magnum is going to manage 6 Heat stacks (instead of 1 big Heat 
stack as it is now):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes

The proposal might require an additional API endpoint to manage nodes or a 
group of nodes. For example:
$ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 
--availability-zone us-east-1 ….
$ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 
--availability-zone us-east-2 …
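
For illustration, a minimal sketch of what one per-minion stack could look like 
under this proposal (hypothetical parameter names; each stack owns a single 
server, so flavor and availability zone can vary node by node):

  heat_template_version: 2014-10-16
  parameters:
    server_image: {type: string}
    flavor: {type: string}
    availability_zone: {type: string}
  resources:
    kube_minion:
      type: OS::Nova::Server
      properties:
        image: {get_param: server_image}
        flavor: {get_param: flavor}
        availability_zone: {get_param: availability_zone}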

Thoughts?

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
[2] https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor

Best regards,
Hongbin


Re: [openstack-dev] [magnum] Jinja2 for Heat template

2016-05-12 Thread Cammann, Tom
I’m in broad agreement with Hongbin. Having tried a patch to use Jinja2 in the 
templates, it certainly adds complexity. I am in favor of using conditionals 
and consuming the latest version of Heat. If we intend to support older 
versions of OpenStack, this should be a clearly defined goal and needs to be 
tested. An aspiration to work with older versions isn’t a good policy.

I would like to understand a bit better the “chaos” option 3 would cause.

Tom

From: Hongbin Lu [mailto:hongbin...@huawei.com]
Sent: 12 May 2016 16:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Jinja2 for Heat template

We discussed the management of Heat templates several times. It seems the 
consensus is to leverage the *conditionals* feature from Heat (option #1). From 
past discussions, it sounds like option #2 or #3 would significantly 
complicate our Heat templates, thus increasing the maintenance burden.

However, I agree with Yuanying that option #1 will make the Newton (or newer) 
version of Magnum incompatible with the Mitaka (or older) version of OpenStack. A 
solution I can think of is to have a Jinja2 version of the Heat templates in the 
contrib folder, so that operators can swap the Heat templates if they want to 
run a newer version of Magnum with an older version of OpenStack. Thoughts?

Best regards,
Hongbin

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: May-12-16 6:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Jinja2 for Heat template

Hi,
Thanks for your helpful comment.

I didn’t know about the pattern you suggested.
We often want “if” or “for” constructs, etc.

For example:
* if a private network is supplied as a parameter, disable creating the network 
resource.
* if the https parameter is enabled, TCP port 6443 should be opened instead of 
8080 at “OS::Neutron::SecurityGroup".
* if the https parameter is enabled, the load-balancing protocol should be TCP 
instead of HTTP

and so on.
So, I want to use a Jinja2 template to manage this.
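
For example, a minimal sketch of the security group case with Jinja2 (assuming 
an "https_enabled" variable is supplied when the template is rendered; the 
resource and property names follow the standard Neutron security group 
resource):

  secgroup_kube_api:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
  {% if https_enabled %}
          port_range_min: 6443
          port_range_max: 6443
  {% else %}
          port_range_min: 8080
          port_range_max: 8080
  {% endif %}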

I’ll try to use the composition model above,
and also test the limited use of jinja2 templating.


Thanks
- OTSUKA, Yuanying



On Thu, 12 May 2016 at 17:46, Steven Hardy <sha...@redhat.com> wrote:
On Thu, May 12, 2016 at 11:08:02AM +0300, Pavlo Shchelokovskyy wrote:
>Hi,
>
>not sure why 3 will bring chaos when implemented properly.

I agree - Heat is designed with composition in mind, and e.g. in TripleO
we're making heavy use of it for optional configurations and it works
pretty well:

http://docs.openstack.org/developer/heat/template_guide/composition.html

https://www.youtube.com/watch?v=fw0JhywwA1E

http://hardysteven.blogspot.co.uk/2015/05/tripleo-heat-templates-part-1-roles-and.html

https://github.com/openstack/tripleo-heat-templates/tree/master/environments

>Can you abstract the "thing" (sorry, not quite familiar with Magnum) that
>needs FP + FP itself into a custom resource/nested stack? Then you could
>use single master template plus two environments (one with FP, one
>without), and choose which one to use right where you have this logic
>split in your code.

Yes, this is exactly the model we make heavy use of in TripleO, it works
pretty well.

Note there's now an OS::Heat::None resource in heat, which makes it easy to
conditionally disable things (without the need for a noop.yaml template
that contains matching parameters):

http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Heat::None

So you'd have two environment files like:

cat enable_floating.yaml:
resource_registry:
  OS::Magnum::FloatingIP: templates/the_floating_config.yaml

cat disable_floating.yaml:
resource_registry:
  OS::Magnum::FloatingIP: OS::Heat::None

Again, this pattern is well proven and works pretty well.

Conditionals may provide an alternative way to do this, but at the expense
of some additional complexity inside the templates.

>Option 2 is not so bad either IMO (AFAIK Trove was doing that at some point,
>not sure of current status), but the above would be nicer.

Yes, in the past [1] I've commented that the composition model above may be
preferable to Jinja templating, but recently I've realized there are pros
and cons to each approach.

The Heat composition model works pretty well when you want to combine
multiple pieces (nested stacks) which contain some mixture of different
resources, but it doesn't work so well when you want to iterate over a
common pattern and build a template (e.g. based on a loop).

You can use ResourceGroups in some cases, but that adds to the stack depth
(number of nested stacks) and may not be workable for upgrades, so TripleO
is now looking at some limited use of jinja2 templating also. I agree it's
not so bad, provided the interfaces presented to the user are carefully
constrained.
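
For illustration, the kind of loop that is awkward with pure composition but
straightforward with jinja2 (a sketch; names are illustrative):

{% for i in range(number_of_minions) %}
  kube_minion_{{ i }}:
    type: kubeminion.yaml
{% endfor %}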

Steve

[1] https://review.openstack.org/#/c/211771/


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-12 Thread Cammann, Tom
The canonical place for the help/documentation must be the online 
documentation. We must provide information in the docs first; additional 
content can then be added to the CLI help. The labels field contains arbitrary 
metadata which can be consumed and used by the COE/bay; it has no set structure 
or format. Having an exhaustive list of possible metadata values should not be 
an aim of the command-line help. Prior art in this area includes Glance image 
properties [1], for which the CLI provides no reference to the values that can 
be passed, and Nova scheduler hints [2], which also provide no information on 
the values that can be passed.

There is a middle ground we should be treading. Users should not be using the 
CLI as a reference for working out which label keys and values should be 
passed in to elicit a certain response from the bay; that should be found in the 
docs. I am in favor of adding a short list of commonly used labels to the CLI 
help to jog the memory of the user.
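
For context, labels are passed as free-form key/value pairs at baymodel 
creation time, along the lines of (the label key shown is illustrative):

  magnum baymodel-create --name k8s-model --coe kubernetes \
    --labels flannel_network_cidr=10.101.0.0/16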

I’m all for an additional man page which lists the labels documentation; however, 
this change is specifically about the CLI “--help”.

[1] 
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v1/shell.py#L218
[2] 
https://github.com/openstack/python-novaclient/blob/master/novaclient/v2/shell.py#L526

Tom

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: 12 May 2016 17:58
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: Re: [openstack-dev] [magnum] How to document 'labels'

I’d be in favor of 1.

At the end of the man page or full help text, a URL could be useful for more 
information. But since most people using the CLI will have to do a context 
switch to access the docs, it is not a simple click but a copy/paste/find-the-
browser-window exercise, which is not so friendly.

Tim

From: Jamie Hannaford <jamie.hannaf...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Thursday, 12 May 2016 at 16:04
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Cc: Qun XK Wang <bjw...@cn.ibm.com>
Subject: Re: [openstack-dev] [magnum] How to document 'labels'


+1 for 1 and 3.

I'm not sure maintainability should discourage us from exposing information to 
the user through the client - we'll face the same maintenance burden as we 
currently do, and IMO it's our job as a team to ensure our docs are up-to-date. 
Any kind of input which touches the API should also live in the API docs, 
because that's in line with every other OpenStack service.

I don't think I've seen documentation exposed via the API before (#2). I think 
it's a lot of work too, and I don't see what benefit it provides.

Jamie




From: Hongbin Lu <hongbin...@huawei.com>
Sent: 11 May 2016 21:52
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: [openstack-dev] [magnum] How to document 'labels'

Hi all,

This is a continued discussion from the last team meeting. For recap, ‘labels’ 
is a property of baymodel and is used by users to input additional key-value 
pairs to configure the bay. In the last team meeting, we discussed what is the 
best way to document ‘labels’. In general, I heard three options:

1.   Place the documentation in Magnum CLI as help text (as Wangqun 
proposed [1][2]).

2.   Place the documentation in Magnum server and expose them via the REST 
API. Then, have the CLI to load help text of individual properties from Magnum 
server.

3.   Place the documentation in a documentation server (like 
developer.openstack.org/…), and add the doc link to the CLI help text.

For option #1, I think an advantage is that it is close to end users, thus 
providing a better user experience. In contrast, Tom Cammann pointed out a 
disadvantage: the CLI help text might more easily become out of date. For 
option #2, it should work but incurs a lot of extra work. For option #3, the 
disadvantage is the user experience (since users need to click the link to see 
the documents), but it is easier for us to maintain. I am wondering if it is 
possible to have a combination of #1 and #3. Thoughts?

[1] https://review.openstack.org/#/c/307631/
[2] https://review.openstack.org/#/c/307642/

Best regards,
Hongbin



Re: [openstack-dev] [magnum] Notes for Magnum design summit

2016-05-02 Thread Cammann, Tom
Thanks for the write-up, Hongbin, and thanks to all those who contributed to the 
design summit. A few comments on the summaries below.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

We need to implement a bay template that can use a flat networking model, as 
this is the only networking model Ironic currently supports; multi-tenant 
networking is imminent. This should be done before work on an Ironic template 
starts.

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

Ideally we need to turn this list into a set of actions which we can implement 
over the cycle, e.g. create a BP to remove the requirement for LBaaS.

9. Magnum Heat template version: 
https://etherpad.openstack.org/p/newton-magnum-heat-template-versioning
- In each bay driver, version the template and template definition.
- Bump template version for minor changes, and bump bay driver version for 
major changes.

We decided only bay driver versioning was required. The template and template 
definition do not need versioning because we can get Heat to return the 
template it used to create the bay.
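
For reference, the template Heat used for a stack can be recovered with the 
standard client, e.g.:

  heat template-show <stack-id>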

10. Monitoring: https://etherpad.openstack.org/p/newton-magnum-monitoring
- Add support for sending notifications to Ceilometer
- Revisit bay monitoring and self-healing later
- Container monitoring should not be done by Magnum, but it can be done by 
cAdvisor, Heapster, etc.

We split this topic into 3 parts - bay telemetry, bay monitoring, container 
monitoring.
Bay telemetry concerns actions such as bay/baymodel CRUD operations. This 
is implemented using Ceilometer notifications.
Bay monitoring is around monitoring the health of individual nodes in the bay 
cluster; we decided to postpone this work as more investigation is required on 
what it should look like and what users actually need.
Container monitoring focuses on what containers are running in the bay and 
general usage of the bay COE. We decided this will not be done by Magnum 
itself but by cAdvisor, Heapster, etc., with access to cAdvisor baked in by 
default.

- Manually manage bay nodes (instead of having them managed by a Heat 
ResourceGroup): it can address the use case of heterogeneous bay nodes (e.g. 
different availability zones or flavors), but the details need further 
elaboration.

The idea revolves around creating a Heat stack for each node in the bay. This 
idea shows a lot of promise but needs more investigation and isn’t a current 
priority.

Tom


From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Saturday, 30 April 2016 at 05:05
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum] Notes for Magnum design summit

Hi team,

For reference, below is a summary of the discussions/decisions from the Austin 
design summit. Please feel free to point out if anything is incorrect or 
incomplete. Thanks.

1. Bay driver: https://etherpad.openstack.org/p/newton-magnum-bay-driver
- Refactor existing code into bay drivers
- Each bay driver will be versioned
- Individual bay drivers can have API extensions, and the Magnum CLI could load 
the extensions dynamically
- Work incrementally and support same API before and after the driver change

2. Bay lifecycle operations: 
https://etherpad.openstack.org/p/newton-magnum-bays-lifecycle-operations
- Support the following operations: reset the bay, rebuild the bay, rotate TLS 
certificates in the bay, adjust storage of the bay, scale the bay.

3. Scalability: https://etherpad.openstack.org/p/newton-magnum-scalability
- Implement Magnum plugin for Rally
- Implement the spec to address the scalability of deploying multiple bays 
concurrently: https://review.openstack.org/#/c/275003/

4. Container storage: 
https://etherpad.openstack.org/p/newton-magnum-container-storage
- Allow choice of storage driver
- Allow choice of data volume driver
- Work with Kuryr/Fuxi team to have data volume driver available in COEs 
upstream

5. Container network: 
https://etherpad.openstack.org/p/newton-magnum-container-network
- Discuss how to scope/pass/store OpenStack credentials in bays (needed by Kuryr 
to communicate with Neutron).
- Several options were explored. No perfect solution was identified.

6. Ironic Integration: 
https://etherpad.openstack.org/p/newton-magnum-ironic-integration
- Start the implementation immediately
- Prefer quick work-around for identified issues (cinder volume attachment, 
variation of number of ports, etc.)

7. Magnum adoption challenges: 
https://etherpad.openstack.org/p/newton-magnum-adoption-challenges
- The challenges are listed in the etherpad above

8. Unified abstraction for COEs: 
https://etherpad.openstack.org/p/newton-magnum-unifie

Re: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

2016-04-01 Thread Cammann, Tom
+1

From: Hongbin Lu
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 31 March 2016 at 19:18
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [magnum] Proposing Eli Qiao for Magnum core reviewer team

Hi all,

Eli Qiao has consistently contributed to Magnum for a while. He started 
contributing about 10 months ago. Along the way, he implemented 
several important blueprints and fixed a lot of bugs. His contribution covers 
various aspects (i.e. APIs, conductor, unit/functional tests, all the COE 
templates, etc.), which shows that he has a good understanding of almost every 
piece of the system. The feature set he contributed to has proven to be 
beneficial to the project. For example, the gate testing framework he heavily 
contributed to is what we rely on every day. His code reviews are also 
consistent and useful.

I am happy to propose Eli Qiao to be a core reviewer of Magnum team. According 
to the OpenStack Governance process [1], we require a minimum of 4 +1 votes 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, Eli is not able to join the core team 
and needs to wait 30 days to reapply.

The voting is open until Thursday, April 7th.

[1] https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

Best regards,
Hongbin




Re: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

2016-02-29 Thread Cammann, Tom
One of the main reasons for moving to a "bay driver" model was to allow us to
focus our efforts. We talked about focusing our support on the distros with a
religion around them, e.g. Ubuntu and a Red Hat derivative.

Being frank, I do not see much benefit in supporting a small distro if we have
support for a big one. We have seen the templates for these various distros
languish in the past and become quickly outdated. I would much rather have a
concerted effort around a single distro.

The way we could support multiple distros in Magnum would be to create a new
"bay driver" for that distro+template+template_definition. This set of items
would be self-contained and would not interact with another bay driver that
used the same COE. This would allow each bay driver to move at the pace of the
team working on it and to explicitly list the features it supports. As it
currently stands, it is difficult to understand the parity between two distros
using the same COE.

I raised the concern that this could lead to a duplication of code, but we felt
that this refactor had more benefits and we could easily work around this
duplication.
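
To make this concrete, one possible in-tree layout for such a self-contained
bay driver (hypothetical paths; the exact structure would be settled in the
spec/refactor):

  magnum/drivers/k8s_fedora_atomic_v1/
      driver.py
      template_def.py
      templates/
          kubecluster.yaml
          kubemaster.yaml
          kubeminion.yaml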

Tom


From: Hongbin Lu <hongbin...@huawei.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Monday, 29 February 2016 at 16:40
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [magnum] Discussion of supporting single/multiple OS distro

Hi team,

This is a continued discussion from a review [1]. Corey O'Brien suggested 
having Magnum support a single OS distro (Atomic). I disagreed. I think we should 
bring the discussion here to get a broader set of inputs.

Corey O'Brien
From the midcycle, we decided we weren't going to continue to support 2 
different versions of the k8s template. Instead, we were going to maintain the 
Fedora Atomic version of k8s and remove the coreos templates from the tree. I 
don't think we should continue to develop features for coreos k8s if that is 
true.
In addition, I don't think we should break the coreos template by adding the 
trust token as a heat parameter.

Hongbin Lu
I was at the midcycle and I don't remember any decision to remove CoreOS 
support. Why do you want to remove the CoreOS templates from the tree? Please 
note that this is a very big decision; please discuss it with the team 
thoughtfully and make sure everyone agrees.

Corey O'Brien
Removing the coreos templates was part of the COE drivers decision. Since 
each COE driver will only support one distro+version+COE, we discussed which ones 
to support in tree. The decision was that instead of trying to support every 
distro and every version for every COE, the magnum tree would only have support 
for one version of one distro for each of the 3 COEs (swarm/kubernetes/mesos). 
Since we already are going to support Atomic for swarm, removing coreos and 
keeping Atomic for kubernetes was the favored choice.

Hongbin Lu
Strongly disagree. It is a huge risk to support a single distro. The selected 
distro could die in the future - who knows. Why make Magnum take this huge risk? 
Again, the decision to support a single distro is a very big decision. Please 
bring it up to the team and have it discussed thoughtfully before making any 
decision. Also, Magnum doesn't have to support every distro and every version 
for every COE, but it should support *more than one* popular distro for some 
COEs (especially for the popular COEs).

Corey O'Brien
The discussion at the midcycle started from the idea of adding support for RHEL 
and CentOS. We all discussed and decided that we wouldn't try to support 
everything in tree. Magnum would provide in-tree support for one distro per COE, 
and the COE driver interface would allow others to add support for their 
preferred distro out of tree.

Hongbin Lu
I agree with the part that "we wouldn't try to support everything in tree". That 
doesn't imply a decision to support a single distro. Again, supporting a single 
distro is a huge risk. Why make Magnum take this huge risk?

[1] https://review.openstack.org/#/c/277284/

Best regards,
Hongbin


Re: [openstack-dev] [Magnum] New Core Reviewers

2016-02-01 Thread Cammann, Tom
+1+1

Well deserved!



On 01/02/2016, 15:58, "Adrian Otto"  wrote:

>Magnum Core Team,
>
>I propose Ton Ngo (Tango) and Egor Guz (eghobo) as new Magnum Core Reviewers. 
>Please respond with your votes.
>
>Thanks,
>
>Adrian Otto


Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Cammann, Tom
I don’t see a benefit from supporting the old API through a microversion 
when the same functionality will be available through the native API. We 
are still early enough in Magnum to make significant API changes; no one 
is using Magnum as a whole in production.

Have we had any discussion on adding a v2 API and what changes (beyond 
removing pod, rc, service) we would include in that change? What sort of 
timeframe would we expect for removing the v1 API? I would like to move to a 
v2 in this cycle; then we can think about removing v1 in N.

Tom



On 16/12/2015, 15:57, "Hongbin Lu"  wrote:

>Hi Tom,
>
>If I remember correctly, the decision is to drop the COE-specific APIs 
>(Pod, Service, Replication Controller) in the next API version. I think a 
>good way to do that is to put a deprecation warning in the current API version 
>(v1) for the removed resources, and remove them in the next API version 
>(v2).
>
>An alternative is to drop them in the current API version. If we decide to do 
>that, we need to bump the micro-version [1] and ask users to specify the 
>microversion as part of their requests when they want the removed APIs.
>
>[1] http://docs.openstack.org/developer/nova/api_microversions.html#removing-an-api-method
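>
>For illustration, a microversioned request opts in by sending a version
>header with each call, e.g. Nova's (Magnum would need to define an
>equivalent header of its own):
>
>  curl -H "X-OpenStack-Nova-API-Version: 2.12" …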
>
>Best regards,
>Hongbin
>
>-Original Message-
>From: Cammann, Tom [mailto:tom.camm...@hpe.com] 
>Sent: December-16-15 8:21 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
>
>I have been noticing a fair amount of redundant work going on in magnum, 
>python-magnumclient and magnum-ui with regards to APIs we have been 
>intending to drop support for. During the Tokyo summit it was decided 
>that we should support only COE APIs that all COEs can support, which 
>means dropping support for Kubernetes-specific APIs for Pod, Service and 
>Replication Controller.
>
>Egor has submitted a blueprint[1] “Unify container actions between all 
>COEs” which has been approved to cover this work and I have submitted the 
>first of many patches that will be needed to unify the APIs.
>
>The controversial patches are here: 
>https://review.openstack.org/#/c/258485/ and 
>https://review.openstack.org/#/c/258454/
>
>These patches are more a forcing function for our team to decide how to 
>correctly deprecate these APIs; as I mention, there is a lot of redundant 
>work going on around these APIs. Please let me know your thoughts.
>
>Tom
>
>[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers


[openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-16 Thread Cammann, Tom
I have been noticing a fair amount of redundant work going on in magnum, 
python-magnumclient and magnum-ui with regards to APIs we have been intending 
to drop support for. During the Tokyo summit it was decided that we should 
support only COE APIs that all COEs can support, which means dropping 
support for Kubernetes-specific APIs for Pod, Service and Replication 
Controller.

Egor has submitted a blueprint[1] “Unify container actions between all COEs” 
which has been approved to cover this work and I have submitted the first of 
many patches that will be needed to unify the APIs.

The controversial patches are here: https://review.openstack.org/#/c/258485/ 
and https://review.openstack.org/#/c/258454/

These patches are more a forcing function for our team to decide how to 
correctly deprecate these APIs; as I mention, there is a lot of redundant work 
going on around these APIs. Please let me know your thoughts.

Tom

[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers