Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-02 Thread John McDowall
Juno

Whatever gets it done faster- let me get the three repos aligned. I need to get 
the ovs/ovn work done so networking-ovn can call it, and the networking-sfc can 
call networking-ovn.

Hopefully I will have it done tomorrow or over the weekend - let's touch base 
Monday or Sunday night.

Regards

John

Sent from my iPhone

On Jun 2, 2016, at 6:30 PM, Na Zhu > 
wrote:

Hi John,

I agree with submitting WIP patches to community, because you already did many 
works on networking-sfc and networking-ovn, it is better that you submit the 
initial patches about networking-sfc and networking-ovn, then me and Srilatha 
take over the patches. Do you have time to do it? if not, me and Srilatha can 
help to do it and you are always the co-author.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:John McDowall 
>
To:Na Zhu/China/IBM@IBMCN
Cc:"disc...@openvswitch.org" 
>, "OpenStack 
Development Mailing List" 
>, 
Ryan Moats >, Srilatha Tangirala 
>
Date:2016/06/03 00:08
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN




Juno,

Sure make sense. I will have ovs/ovn in rough shape by end of week (hopefully) 
that will allow you to call the interfaces from networking-ovn. Ryan has asked 
that we submit WIP patches etc so hopefully that will kickstart the review 
process.
Also, hopefully some of the networking-sfc team will also be able to help – I 
will let them speak for themselves.

Regards

John

From: Na Zhu >
Date: Wednesday, June 1, 2016 at 7:02 PM
To: John McDowall 
>
Cc: "disc...@openvswitch.org" 
>, OpenStack 
Development Mailing List 
>, 
Ryan Moats >, Srilatha Tangirala 
>
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

Hi John,

Thanks your reply.

Seems you have covered everything :)
The development work can be broken down in 3 parts:
1, add ovn driver to networking-sfc
2, provide APIs in networking-ovn for networking-sfc
3, implement the sfc in ovn

So what about we take part 1 and part 2, and you take part 3? because we are 
familiar with networking-sfc and networking-ovn and we can do it faster:)





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:John McDowall 
>
To:Na Zhu/China/IBM@IBMCN
Cc:Ryan Moats >, OpenStack 
Development Mailing List 
>, 
"disc...@openvswitch.org" 
>, Srilatha Tangirala 
>
Date:2016/06/01 23:26
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN




Na/Srilatha,

Great, I am working from three repos:

https://github.com/doonhammer/networking-sfc
https://github.com/doonhammer/networking-ovn
https://github.com/doonhammer/ovs

I had an original 

[openstack-dev] What's Up, Doc? 3 June 2016

2016-06-02 Thread Lana Brindley
Hi everyone,

I'm very pleased to be able to announce the results of our Install Guide naming 
poll this week. We ended up with 31 responses, and a very clear winner in 
"OpenStack Installation Tutorial". Thank you to everyone who voted! Also, just 
a note that I'm still very much in need of representatives from the various 
project teams to pitch in and help us get things running. Please make sure your 
project is represented by coming along to meetings, or at least contacting us 
through the mailing list.

This week I've also spent some time with the Upstream training team, 
identifying gaps in the current material, and preparing for Barcelona.

== Progress towards Newton ==

124 days to go!

Bugs closed so far: 134

Newton deliverables 
https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
Feel free to add more detail and cross things off as they are achieved 
throughout the release.

== Speciality Team Reports ==

'''HA Guide: Bogdan Dobrelya'''
No report this week.

'''Install Guide: Lana Brindley'''
Poll concluded, winner is "OpenStack Installation Tutorial". Connected with 
CPLs this week to encourage more participation. Next meeting: Tue 7 June 0600 
UTC.

'''Networking Guide: Edgar Magana'''
Moved DHCP HA chapter to Networking Guide for better maintenance and updates. 
Planning to move more sections and focus on DVR HA. Suffering of very low 
attendance for the IRC meeting.

'''Security Guide: Nathaniel Dillon'''
No report this week.

'''User Guides: Joseph Robinson'''
Some contact with the Magnum team. Held meetings, brought up the link changes, 
which is something I most need some assistance with. IA plan still forthcoming.

'''Ops Guide: Shilla Saebi'''
No report this week.

'''API Guide: Anne Gentle'''
New layout for API reference docs from additions to openstackdocs theme. Also 
already added four-color scheme for GET/PUT/POST/DELETE Thanks Graham Hayes! 
Work in progress here: https://api.os.gra.ham.ie/compute/
Reviews:  
https://review.openstack.org/#/q/project:openstack/openstackdocstheme+status:open
 and https://review.openstack.org/#/q/project:openstack/os-api-ref+status:open
Discussion on SDKs and FirstApp audience happening on user-committee list: 
http://lists.openstack.org/pipermail/user-committee/2016-May/000889.html

'''Config/CLI Ref: Tomoyuki Kato'''
Closed some bugs with Mitaka backport. Fixed the incorrect RST markup at  the 
new options section, with generation tool update.

'''Training labs: Pranav Salunke, Roger Luethi'''
No report this week.

'''Training Guides: Matjaz Pancur'''
Upstream training updates, Barcelona schedule.

'''Hypervisor Tuning Guide: Blair Bethwaite
No report this week.

'''UX/UI Guidelines: Michael Tullis, Stephen Ballard'''
The first prototype of content was presented and source information gaps were 
identified. A first draft will be complete by June 9.

== Site Stats ==

In the Install Guide naming poll, "OpenStack Installation Tutorial" finished 
with 29% of the vote, well ahead of "OpenStack Evaluation Setup Guide" at 19%, 
and "Basic Install Guide" at 16%. And, because charts are fun, here are the 
final results: 
https://docs.google.com/spreadsheets/d/1VlNFebI_KFobs-XIT5oRhULRymwrqON8SV9pH4uiYIQ/pubchart?oid=2070399201=image

I personally thought "OpenStack from Scratch" was the most creative title. And 
the potentially confusing "Manual Install Guide" ("Guided Install Manual"?) had 
a small following at just under 10% of the vote. Thanks for all the great 
suggestions :)

== Doc team meeting ==

Next meetings:

The APAC meeting was held this week, you can read the minutes here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-06-01

Next meetings:
US: Wednesday 8 June, 19:00 UTC
APAC: Wednesday 15 June, 00:30 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#3_June_2016

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra][tricircle] patch not able to be merged

2016-06-02 Thread joehuang
Hello,

There is one quite strange issue in Tricircle stable/mitaka branch 
(https://github.com/openstack/tricircle/tree/stable/mitaka) . Even the patch ( 
https://review.openstack.org/#/c/324209/ ) were given Code-Review +2 and 
Workflow +1, the gating job not started, and the patch was not merged. 

This also happen even we cherry pick a patch from the master branch to the 
stable/mitaka branch, for example, https://review.openstack.org/#/c/307627/.

Is there configuration missing for the stable branch after tagging, or some 
issue in infra?

Thanks for help.

Best Regards
Chaoyi Huang ( Joe Huang )

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] new stable-maint cores for Horizon

2016-06-02 Thread Tony Breeds
On Thu, Jun 02, 2016 at 08:27:29AM +0200, Matthias Runge wrote:
> Horizoners,
> 
> please join me to welcome
> 
> * Richard Jones
> * Rob Cresswell
> * Thai Tran
> 
> as new Horizon stable core reviewers.
> 
> Thank you guys for stepping up and thank you tonyb for pulling stats and
> pushing this.

Welcome!

I've added the newest members to the group :)

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack Unauthorized HTTP 401 'Could not find user' when sahara call heat

2016-06-02 Thread 阮鹏飞





Hi, Friends,


I used Openstack-Juno heat, keystone and Mitaka sahara in CentOS7. Sahara is 
installed in docker container using host network.
When sahara wants to call heat to create a hadoop cluster, the below error is 
happened.
Could you help to check this issue? I guess, the heat couldn't create user in 
keystone. The attachment is the conf file and log file. Please reference them.
Thanks for your help in advance. Hoping for your answer.


2016-05-31 11:22:45.625 41759 INFO urllib3.connectionpool [-] Starting new HTTP 
connection (1): 10.252.100.4
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource Traceback (most recent 
call last):
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 435, in 
_action_recorder
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource yield
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 505, in 
_do_action
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource yield 
self.action_handler_task(action, args=handler_args)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/scheduler.py", line 286, in 
wrapper
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource step = 
next(subtask)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 476, in 
action_handler_task
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource handler_data = 
handler(*args)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/resources/wait_condition.py", 
line 143, in handle_create
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource token = 
self._user_token()
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/engine/stack_user.py", line 75, in 
_user_token
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource 
project_id=project_id, password=password)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/heat/common/heat_keystoneclient.py", line 
410, in stack_domain_user_token
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource 
authenticated=False)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/keystoneclient/session.py", line 430, in post
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource return 
self.request(url, 'POST', **kwargs)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/keystoneclient/utils.py", line 318, in inner
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource return func(*args, 
**kwargs)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource   File 
"/usr/lib/python2.6/site-packages/keystoneclient/session.py", line 346, in 
request
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource raise 
exceptions.from_response(resp, method, url)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource Unauthorized: Could 
not find user: 
haddp45018380-test-master-ajdlwfudliu2-0-hnnojqmbzkbr-test-master-wc-handle-dp4cqhkmtykr
 (Disable debug mode to suppress these details.) (HTTP 401)
2016-05-31 11:22:45.663 41759 TRACE heat.engine.resource


Fred Ruan






 

log
Description: Binary data
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-02 Thread Na Zhu
Hi John,

I agree with submitting WIP patches to community, because you already did 
many works on networking-sfc and networking-ovn, it is better that you 
submit the initial patches about networking-sfc and networking-ovn, then 
me and Srilatha take over the patches. Do you have time to do it? if not, 
me and Srilatha can help to do it and you are always the co-author.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org" , "OpenStack 
Development Mailing List" , Ryan Moats 
, Srilatha Tangirala 
Date:   2016/06/03 00:08
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Juno,

Sure make sense. I will have ovs/ovn in rough shape by end of week 
(hopefully) that will allow you to call the interfaces from 
networking-ovn. Ryan has asked that we submit WIP patches etc so hopefully 
that will kickstart the review process.
Also, hopefully some of the networking-sfc team will also be able to help 
�C I will let them speak for themselves.

Regards

John

From: Na Zhu 
Date: Wednesday, June 1, 2016 at 7:02 PM
To: John McDowall 
Cc: "disc...@openvswitch.org" , OpenStack 
Development Mailing List , Ryan Moats <
rmo...@us.ibm.com>, Srilatha Tangirala 
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN

Hi John,

Thanks your reply.

Seems you have covered everything :)
The development work can be broken down in 3 parts:
1, add ovn driver to networking-sfc
2, provide APIs in networking-ovn for networking-sfc 
3, implement the sfc in ovn

So what about we take part 1 and part 2, and you take part 3? because we 
are familiar with networking-sfc and networking-ovn and we can do it 
faster:)





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:Ryan Moats , OpenStack Development Mailing 
List , "disc...@openvswitch.org" <
disc...@openvswitch.org>, Srilatha Tangirala 
Date:2016/06/01 23:26
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Na/Srilatha,

Great, I am working from three repos:

https://github.com/doonhammer/networking-sfc
https://github.com/doonhammer/networking-ovn
https://github.com/doonhammer/ovs

I had an original prototype working that used an API I created. Since 
then, based on feedback from everyone I have been moving the API to the 
networking-sfc model and then supporting that API in networking-ovn and 
ovs/ovn. I have created a new driver in networking-sfc for ovn.

I am in the process of moving networking-ovn and ovs to support the sfc 
model. Basically I am intending to pass a deep copy of the port-chain 
(sample attached, sfc_dict.py) from the ovn driver in networking-sfc to 
networking-ovn.  This , as Ryan pointed out will minimize the dependancies 
between networking-sfc and networking-ovn. I have created additional 
schema for ovs/ovn (attached) that will provide the linkage between 
networking-ovn and ovs/ovn. I have the schema in ovs/ovn and I am in the 
process of  updating my code to support it.

Not sure where you guys want to jump in �C but I can help in any way you 
need.

Regards

John

From: Na Zhu 
Date: Tuesday, May 31, 2016 at 9:02 PM
To: John McDowall 
Cc: Ryan Moats , OpenStack Development Mailing List <
openstack-dev@lists.openstack.org>, "disc...@openvswitch.org" <
disc...@openvswitch.org>, Srilatha Tangirala 
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN

+ Add Srilatha.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM
To:John McDowall 
Cc:Ryan Moats , OpenStack Development Mailing 
List , "disc...@openvswitch.org" <
disc...@openvswitch.org>
Date:2016/06/01 12:01
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN


John,

Thanks.

Me and Srilatha (srila...@us.ibm.com) want to working together with 

[openstack-dev] [kolla] Google Hangouts discussion for dueling specifications for Dockerfile customization

2016-06-02 Thread Steven Dake (stdake)
Hey folks,

IRC and mailing list were going far too slow for us to make progress on the 
competing specifications for handling Dockerfile customization.  Instead we 
held a hangout, which I don't like because it isn't recorded, but it is high 
bandwidth and permitted us to work through the problem in 1 hour instead of 1 
month.

The essence of the discussion:

  1.  I will use inc0's patch as a starting point and will do the following:
 *   Prototype base with  operations using the specification items 
in the elemental DSL
 *   Prototype mariadb with  operations using the specification 
items in the elemental DSL
 *   I will create a document assuming these two prototypes work that 
describe how to use the jinja2  operations to replace or merge sections 
of Dockerfile.j2 files.
 *   We will stop specification development as it has served its purpose 
(of defining the requirements) assuming the prototypes meet people's taste test
  2.  We believe the Jinja2  operation will meet the requirements set 
forth in the original elemental DSL specification
  3.  We as a community will need to modify our 115 dockerfiles, of which I'd 
like people to take 1 or 2 container sets each (40 in total), in a distributed 
fashion to implement the documentation described in section 1.3
  4.  We will produce an optional DSL parser (based upon the prototyping work) 
that outputs the proper  Dockerfile.j2 files or alternatively operators 
can create their own block syntax files
  5.  All customization will be done in one master block replacement file
  6.  Original dockerfile.j2 files will stay intact with the addition of a 
bunch of block operations
  7.  Some RUN layer compression will be lost (the && in our Dockerfiles)
  8.  There are 8 DSL operations but we will need twice as many to handle both 
override and merging in a worst case scenario.  That means 16 blocks will need 
to be added to each Dockerfile.
  9.  Operators that have already customized their Dockerfile.j2 files can 
carry those changes or migrate to this new customization technique when this 
feature hits Newton, up to them
  10. If the prototypes don't work, back to the drawing board - that said I am 
keen to have any solution that meets the requirements so I will do a thorough 
job on the prototypes of inc0's work

If you have questions, or I missed key points, please feel fee to ask or speak 
up.

Regards
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][SR-IOV/PCI] SR-IOV/PCI CI for mutli node

2016-06-02 Thread Moshe Levi
Hi, 

As part of the  Performance VMs CI and technical debt session we had in  Ausin 
summit,  we decide to focus on fixing pci resize and migration bugs.
Currently the pci resize patch [1]  and migration patch [2] are up for review.

The resize patch is tested with Intel PCI CI 
http://52.27.155.124/pci/307124/21/testr_experimental_result.html.gz - which is 
great.
We also would like to test the pci passthrough and SR-IOV  migration patches  
as well. For that we need SR-IOV/PCI CI for mutli node. 
For some reason I remember that Intel CI engineers volunteer to do that in 
Austin. If that is the case please join the SR-IOV meeting so we can track 
progress [3]
If not we are looking for someone to setup and maintain mutli node  PCI and 
SR-IOV CI for migration testing.

Another feature that is related to pci CI is the direct passthrough. Does 
anyone work on CI for this feature?  Does this feature even works?

[1] - https://review.openstack.org/#/c/307124/
[2] - https://review.openstack.org/#/c/242573/ 
[3] - http://eavesdrop.openstack.org/#SR-IOV/PCI_Passthrough_Meeting 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][security] Item #5 of the VMT

2016-06-02 Thread Steven Dake (stdake)
Hi folks,

I think we are nearly done with Item #5 [1] of the VMT.  One question remains.

We need to know which repo the analysis documentation will land in .  There is 
security-doc we could use for this purpose, but we could also create a new 
repository called "security-analysis" (or open to other names).  I'll create 
the repo, get reno integrated with it, get sphinx integrated with it, and get a 
basic documentation index.rst in place using cookiecutter + extra git reviews.  
I'll also set up project-config for you.  After that, I don't think there is 
much I can do as my plate is pretty full :)

Regards
-steve

[1] https://review.openstack.org/#/c/300698/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-02 Thread Adam Young

On 06/02/2016 07:22 PM, Henry Nash wrote:

Hi

As you know, I have been working on specs that change the way we 
handle the uniqueness of project names in Newton. The goal of this is 
to better support project hierarchies, which as they stand today are 
restrictive in that all project names within a domain must be unique, 
irrespective of where in the hierarchy that projects sits (unlike, 
say, the unix directory structure where a node name only has to be 
unique within its parent). Such a restriction is particularly 
problematic when enterprise start modelling things like test, QA and 
production as branches of a project hierarchy, e.g.:


/mydivsion/projectA/dev
/mydivsion/projectA/QA
/mydivsion/projectA/prod
/mydivsion/projectB/dev
/mydivsion/projectB/QA
/mydivsion/projectB/prod

Obviously the idea of a project name (née tenant) being unique has 
been around since near the beginning of (OpenStack) time, so we must 
be cautions. There are two alternative specs proposed:


1) Relax project name constraints: 
https://review.openstack.org/#/c/310048/

2) Hierarchical project naming: https://review.openstack.org/#/c/318605/

First, here’s what they have in common:

a) They both solve the above problem
b) They both allow an authorization scope to use a path rather than 
just a simple name, hence allowing you to address a project anywhere 
in the hierarchy
c) Neither have any impact if you are NOT using a hierarchy - i.e. if 
you just have a flat layer of projects in a domain, then they have no 
API or semantic impact (since both ensure that a project’s name must 
still be unique within a parent)


Here’s how the differ:

- Relax project name constraints (1), keeps the meaning of the ‘name’ 
attribute of a project to be its node-name in the hierarchy, but 
formally relaxes the uniqueness constraint to say that it only has to 
be unique within its parent. In other words, let’s really model this a 
bit like a unix directory tree.
- Hierarchical project naming (2), formally changes the meaning of the 
‘name’ attribute to include the path to the node as well as the node 
name, and hence ensures that the (new) value of the name attribute 
remains unique.


While whichever approach we chose would only be included in a new 
microversion (3.7) of the Identity API, although some relevant APIs 
can remain unaffected for a client talking 3.6 to a Newton server, not 
all can be. As pointed out be jamielennox, this is a data modelling 
problem - if a Newton server has created multiple projects called 
“dev” in the hierarchy, a 3.6 client trying to scope a token simply to 
“dev” cannot be answered correctly (and it is proposed we would have 
to return an HTTP 409 Conflict error if multiple nodes with the same 
name were detected). This is true for both approaches.


Other comments on the approaches:

- Having a full path as the name seems duplicative with the current 
project entity - since we already return the parent_id (hence 
parent_id + name is, today, sufficient to place a project in the 
hierarchy).


The one thing I like is the ability to specify just the full path for 
the OS_PROJECT_NAME env var, but we could make that a separate 
variable.  Just as DOMAIN_ID + PROJECT_NAME is unique today, 
OS_PROJECT_PATH should be able to fully specify a project 
unambiguously.  I'm not sure which would have a larger impact on users.



- In the past, we have been concerned about the issue of what we do if 
there is a project further up the tree that we do not have any roles 
on. In such cases, APIs like list project parents will not display 
anything other than the project ID for such projects. In the case of 
making the name the full path, we would be effectively exposing the 
name of all projects above us, irrespective of whether we had roles on 
them. Maybe this is OK, maybe it isn’t.


I think it is OK.  If this info needs to be hidden from a user, the 
project should probably be in a different domain.


- While making the name the path keeps it unique, this is fine if 
clients blindly use this attribute to plug back into another API to 
call. However if, for example, you are Horizon and are displaying them 
in a UI then you need to start breaking down the path into its 
components, where you don’t today.
- One area where names as the hierarchical path DOES look right is 
calling the /auth/projects API - where what the caller wants is a list 
of projects they can scope to - so you WANT this to be the path you 
can put in an auth request.


Given that neither can fully protect a 3.6 client, my personal 
preference is to go with the cleaner logical approach which I believe 
is the Relax project name constraints (1), with the addition of 
changing GET /auth/projects to return the path (since this is a 
specialised API that happens before authentication) - but I am open to 
persuasion (as the song goes).


There are those that might say that perhaps we just can’t change this. 
I would argue that since this ONLY affects 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Fox, Kevin M
As an operator that has clouds that are partitioned into different host 
aggregates with different flavors targeting them, I totally believe we will 
have users that want to have a single k8s cluster span multiple different 
flavor types. I'm sure once I deploy magnum, I will want it too. You could have 
some special hardware on some nodes, not on others. but you can still have 
cattle, if you have enough of them and the labels are set appropriately. Labels 
allow you to continue to partition things when you need to, and ignore it when 
you dont, making administration significantly easier.

Say I have a tenant with 5 gpu nodes, and 10 regular nodes allocated into a k8s 
cluster. I may want 30 instances of container x that doesn't care where they 
land, and prefer 5 instances that need cuda. The former can be deployed with a 
k8s deployment. The latter can be deployed with a daemonset. All should work 
well and very non pet'ish. The whole tenant could be viewed with a single pane 
of glass, making it easy to manage.

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Thursday, June 02, 2016 4:24 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the 
bay nodes

I am really struggling to accept the idea of heterogeneous clusters. My 
experience causes me to question whether a heterogeneus cluster makes sense for 
Magnum. I will try to explain why I have this hesitation:

1) If you have a heterogeneous cluster, it suggests that you are using external 
intelligence to manage the cluster, rather than relying on it to be 
self-managing. This is an anti-pattern that I refer to as “pets" rather than 
“cattle”. The anti-pattern results in brittle deployments that rely on external 
intelligence to manage (upgrade, diagnose, and repair) the cluster. The 
automation of the management is much harder when a cluster is heterogeneous.

2) If you have a heterogeneous cluster, it can fall out of balance. This means 
that if one of your “important” or “large” members fail, there may not be 
adequate remaining members in the cluster to continue operating properly in the 
degraded state. The logic of how to track and deal with this needs to be 
handled. It’s much simpler in the heterogeneous case.

3) Heterogeneous clusters are complex compared to homogeneous clusters. They 
are harder to work with, and that usually means that unplanned outages are more 
frequent, and last longer than they with a homogeneous cluster.

Summary:

Heterogeneous:
  - Complex
  - Prone to imbalance upon node failure
  - Less reliable

Heterogeneous:
  - Simple
  - Don’t get imbalanced when a min_members concept is supported by the cluster 
controller
  - More reliable

My bias is to assert that applications that want a heterogeneous mix of system 
capacities at a node level should be deployed on multiple homogeneous bays, not 
a single heterogeneous one. That way you end up with a composition of simple 
systems rather than a larger complex one.

Adrian


> On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
>
> Personally, I think this is a good idea, since it can address a set of 
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2 
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For 
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
>
> The use case above should be very common and universal everywhere. To address 
> the use case, Magnum needs to support provisioning heterogeneous set of nodes 
> at deploy time and managing them at runtime. It looks the proposed idea 
> (manually managing individual nodes or individual group of nodes) can address 
> this requirement very well. Besides the proposed idea, I cannot think of an 
> alternative solution.
>
> Therefore, I vote to support the proposed idea.
>
> Best regards,
> Hongbin
>
>> -Original Message-
>> From: Hongbin Lu
>> Sent: June-01-16 11:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>>
>> Hi team,
>>
>> A blueprint was created for tracking this idea:
>> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> nodes . I won't approve the BP until there is a team decision on
>> accepting/rejecting the idea.
>>
>> From the discussion in design summit, it looks everyone is OK with the
>> idea in general (with some disagreements in the API style). However,
>> from the last team meeting, it looks some people disagree with the idea
>> fundamentally. so I re-raised this ML to re-discuss.
>>
>> If you agree or disagree with the idea of manually managing the Heat
>> stacks (that contains individual bay nodes), please write down your
>> 

Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Steven Dake (stdake)
Hongbin,

Have you considered a workflow engine?

FWIW I agree with Adrian about the difficulties of heterogenous systems.
Much better to operate, and in reality the world has moved entirely to
x86_64 + Linux.  I could see a future in which ARM breaks into the server
space, but that is multiple years away if at ever.

Regards
-steve


On 6/2/16, 7:42 AM, "Hongbin Lu"  wrote:

>Madhuri,
>
>It looks both of us agree the idea of having heterogeneous set of nodes.
>For the implementation, I am open to alternative (I supported the
>work-around idea because I cannot think of a feasible implementation by
>purely using Heat, unless Heat support "for" logic which is very unlikely
>to happen. However, if anyone can think of a pure Heat implementation, I
>am totally fine with that).
>
>Best regards,
>Hongbin
>
>> -Original Message-
>> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
>> Sent: June-02-16 12:24 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi Hongbin,
>> 
>> I also liked the idea of having heterogeneous set of nodes but IMO such
>> features should not be implemented in Magnum, thus deviating Magnum
>> again from its roadmap. Whereas we should leverage Heat(or may be
>> Senlin) APIs for the same.
>> 
>> I vote +1 for this feature.
>> 
>> Regards,
>> Madhuri
>> 
>> -Original Message-
>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
>> Sent: Thursday, June 2, 2016 3:33 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Personally, I think this is a good idea, since it can address a set of
>> similar use cases like below:
>> * I want to deploy a k8s cluster to 2 availability zone (in future 2
>> regions/clouds).
>> * I want to spin up N nodes in AZ1, M nodes in AZ2.
>> * I want to scale the number of nodes in specific AZ/region/cloud. For
>> example, add/remove K nodes from AZ1 (with AZ2 untouched).
>> 
>> The use case above should be very common and universal everywhere. To
>> address the use case, Magnum needs to support provisioning
>> heterogeneous set of nodes at deploy time and managing them at runtime.
>> It looks the proposed idea (manually managing individual nodes or
>> individual group of nodes) can address this requirement very well.
>> Besides the proposed idea, I cannot think of an alternative solution.
>> 
>> Therefore, I vote to support the proposed idea.
>> 
>> Best regards,
>> Hongbin
>> 
>> > -Original Message-
>> > From: Hongbin Lu
>> > Sent: June-01-16 11:44 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> > managing the bay nodes
>> >
>> > Hi team,
>> >
>> > A blueprint was created for tracking this idea:
>> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> > nodes . I won't approve the BP until there is a team decision on
>> > accepting/rejecting the idea.
>> >
>> > From the discussion in design summit, it looks everyone is OK with
>> the
>> > idea in general (with some disagreements in the API style). However,
>> > from the last team meeting, it looks some people disagree with the
>> > idea fundamentally. so I re-raised this ML to re-discuss.
>> >
>> > If you agree or disagree with the idea of manually managing the Heat
>> > stacks (that contains individual bay nodes), please write down your
>> > arguments here. Then, we can start debating on that.
>> >
>> > Best regards,
>> > Hongbin
>> >
>> > > -Original Message-
>> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>> > > Sent: May-16-16 5:28 AM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> > > managing the bay nodes
>> > >
>> > > The discussion at the summit was very positive around this
>> > requirement
>> > > but as this change will make a large impact to Magnum it will need
>> a
>> > > spec.
>> > >
>> > > On the API of things, I was thinking a slightly more generic
>> > > approach to incorporate other lifecycle operations into the same
>> API.
>> > > Eg:
>> > > magnum bay-manage  
>> > >
>> > > magnum bay-manage  reset –hard
>> > > magnum bay-manage  rebuild
>> > > magnum bay-manage  node-delete  magnum bay-manage
>> > >  node-add –flavor  magnum bay-manage  node-reset
>> > >  magnum bay-manage  node-list
>> > >
>> > > Tom
>> > >
>> > > From: Yuanying OTSUKA 
>> > > Reply-To: "OpenStack Development Mailing List (not for usage
>> > > questions)" 
>> > > Date: Monday, 16 May 2016 at 01:07
>> > > To: "OpenStack Development Mailing List (not for usage questions)"
>> > > 

Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread Tony Breeds
On Thu, Jun 02, 2016 at 08:41:40AM -0700, John Dickinson wrote:
> open swift/swiftclient patches to stable/kilo have been abandoned

Thanks John

Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread Tony Breeds
On Thu, Jun 02, 2016 at 07:10:23PM -0400, Emilien Macchi wrote:

> I think that all openstack/puppet-* projects that have stable/kilo can
> be kilo-EOLed.
> Let me know if it's ok and I'll abandon all open reviews.

Totally fine with me.

I've added them.  Feel free to abanond the reviews.  Any you don't get to by
2016-06-09 00:00 UTC  I'll take care of.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread Tony Breeds
On Thu, Jun 02, 2016 at 12:38:15PM +0200, Ihar Hrachyshka wrote:
> I think all networking-* repos should EOL too, since they are plugins to
> neutron which is already EOL. I struggle to find a way that could maintain
> their gate without neutron.

Thanks I've added them.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Adrian Otto
I am really struggling to accept the idea of heterogeneous clusters. My 
experience causes me to question whether a heterogeneus cluster makes sense for 
Magnum. I will try to explain why I have this hesitation:

1) If you have a heterogeneous cluster, it suggests that you are using external 
intelligence to manage the cluster, rather than relying on it to be 
self-managing. This is an anti-pattern that I refer to as “pets" rather than 
“cattle”. The anti-pattern results in brittle deployments that rely on external 
intelligence to manage (upgrade, diagnose, and repair) the cluster. The 
automation of the management is much harder when a cluster is heterogeneous.

2) If you have a heterogeneous cluster, it can fall out of balance. This means 
that if one of your “important” or “large” members fail, there may not be 
adequate remaining members in the cluster to continue operating properly in the 
degraded state. The logic of how to track and deal with this needs to be 
handled. It’s much simpler in the heterogeneous case.

3) Heterogeneous clusters are complex compared to homogeneous clusters. They 
are harder to work with, and that usually means that unplanned outages are more 
frequent, and last longer than they with a homogeneous cluster.

Summary:

Heterogeneous:
  - Complex
  - Prone to imbalance upon node failure
  - Less reliable

Heterogeneous:
  - Simple
  - Don’t get imbalanced when a min_members concept is supported by the cluster 
controller
  - More reliable

My bias is to assert that applications that want a heterogeneous mix of system 
capacities at a node level should be deployed on multiple homogeneous bays, not 
a single heterogeneous one. That way you end up with a composition of simple 
systems rather than a larger complex one.

Adrian


> On Jun 1, 2016, at 3:02 PM, Hongbin Lu  wrote:
> 
> Personally, I think this is a good idea, since it can address a set of 
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2 
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For 
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
> 
> The use case above should be very common and universal everywhere. To address 
> the use case, Magnum needs to support provisioning heterogeneous set of nodes 
> at deploy time and managing them at runtime. It looks the proposed idea 
> (manually managing individual nodes or individual group of nodes) can address 
> this requirement very well. Besides the proposed idea, I cannot think of an 
> alternative solution.
> 
> Therefore, I vote to support the proposed idea.
> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Hongbin Lu
>> Sent: June-01-16 11:44 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi team,
>> 
>> A blueprint was created for tracking this idea:
>> https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> nodes . I won't approve the BP until there is a team decision on
>> accepting/rejecting the idea.
>> 
>> From the discussion in design summit, it looks everyone is OK with the
>> idea in general (with some disagreements in the API style). However,
>> from the last team meeting, it looks some people disagree with the idea
>> fundamentally. so I re-raised this ML to re-discuss.
>> 
>> If you agree or disagree with the idea of manually managing the Heat
>> stacks (that contains individual bay nodes), please write down your
>> arguments here. Then, we can start debating on that.
>> 
>> Best regards,
>> Hongbin
>> 
>>> -Original Message-
>>> From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>>> Sent: May-16-16 5:28 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>>> managing the bay nodes
>>> 
>>> The discussion at the summit was very positive around this
>> requirement
>>> but as this change will make a large impact to Magnum it will need a
>>> spec.
>>> 
>>> On the API of things, I was thinking a slightly more generic approach
>>> to incorporate other lifecycle operations into the same API.
>>> Eg:
>>> magnum bay-manage  
>>> 
>>> magnum bay-manage  reset –hard
>>> magnum bay-manage  rebuild
>>> magnum bay-manage  node-delete  magnum bay-manage
>>>  node-add –flavor  magnum bay-manage  node-reset
>>>  magnum bay-manage  node-list
>>> 
>>> Tom
>>> 
>>> From: Yuanying OTSUKA 
>>> Reply-To: "OpenStack Development Mailing List (not for usage
>>> questions)" 
>>> Date: Monday, 16 May 2016 at 01:07
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of 

[openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-02 Thread Henry Nash
Hi

As you know, I have been working on specs that change the way we handle the 
uniqueness of project names in Newton. The goal of this is to better support 
project hierarchies, which as they stand today are restrictive in that all 
project names within a domain must be unique, irrespective of where in the 
hierarchy that projects sits (unlike, say, the unix directory structure where a 
node name only has to be unique within its parent). Such a restriction is 
particularly problematic when enterprise start modelling things like test, QA 
and production as branches of a project hierarchy, e.g.:

/mydivsion/projectA/dev
/mydivsion/projectA/QA
/mydivsion/projectA/prod
/mydivsion/projectB/dev
/mydivsion/projectB/QA
/mydivsion/projectB/prod

Obviously the idea of a project name (née tenant) being unique has been around 
since near the beginning of (OpenStack) time, so we must be cautions. There are 
two alternative specs proposed:

1) Relax project name constraints: https://review.openstack.org/#/c/310048/ 
 
2) Hierarchical project naming: https://review.openstack.org/#/c/318605/ 


First, here’s what they have in common:

a) They both solve the above problem
b) They both allow an authorization scope to use a path rather than just a 
simple name, hence allowing you to address a project anywhere in the hierarchy
c) Neither have any impact if you are NOT using a hierarchy - i.e. if you just 
have a flat layer of projects in a domain, then they have no API or semantic 
impact (since both ensure that a project’s name must still be unique within a 
parent)

Here’s how the differ:

- Relax project name constraints (1), keeps the meaning of the ‘name’ attribute 
of a project to be its node-name in the hierarchy, but formally relaxes the 
uniqueness constraint to say that it only has to be unique within its parent. 
In other words, let’s really model this a bit like a unix directory tree.
- Hierarchical project naming (2), formally changes the meaning of the ‘name’ 
attribute to include the path to the node as well as the node name, and hence 
ensures that the (new) value of the name attribute remains unique.

While whichever approach we chose would only be included in a new microversion 
(3.7) of the Identity API, although some relevant APIs can remain unaffected 
for a client talking 3.6 to a Newton server, not all can be. As pointed out be 
jamielennox, this is a data modelling problem - if a Newton server has created 
multiple projects called “dev” in the hierarchy, a 3.6 client trying to scope a 
token simply to “dev” cannot be answered correctly (and it is proposed we would 
have to return an HTTP 409 Conflict error if multiple nodes with the same name 
were detected). This is true for both approaches.

Other comments on the approaches:

- Having a full path as the name seems duplicative with the current project 
entity - since we already return the parent_id (hence parent_id + name is, 
today, sufficient to place a project in the hierarchy).
- In the past, we have been concerned about the issue of what we do if there is 
a project further up the tree that we do not have any roles on. In such cases, 
APIs like list project parents will not display anything other than the project 
ID for such projects. In the case of making the name the full path, we would be 
effectively exposing the name of all projects above us, irrespective of whether 
we had roles on them. Maybe this is OK, maybe it isn’t.
- While making the name the path keeps it unique, this is fine if clients 
blindly use this attribute to plug back into another API to call. However if, 
for example, you are Horizon and are displaying them in a UI then you need to 
start breaking down the path into its components, where you don’t today.
- One area where names as the hierarchical path DOES look right is calling the 
/auth/projects API - where what the caller wants is a list of projects they can 
scope to - so you WANT this to be the path you can put in an auth request.

Given that neither can fully protect a 3.6 client, my personal preference is to 
go with the cleaner logical approach which I believe is the Relax project name 
constraints (1), with the addition of changing GET /auth/projects to return the 
path (since this is a specialised API that happens before authentication) - but 
I am open to persuasion (as the song goes).

There are those that might say that perhaps we just can’t change this. I would 
argue that since this ONLY affects people who actually create hierarchies and 
that today such hierarchical use is in its infancy, then now IS the time to 
change this. If we leave it too long, then it will become really hard to change 
what will by then have become a tough restriction.

Henry


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread Emilien Macchi
On Thu, Jun 2, 2016 at 6:31 AM, Tony Breeds  wrote:
> Hi all,
> In early May we tagged/EOL'd several (13) projects.  We'd like to do a
> final round for a more complete set.  We looked for projects meet one or more
> of the following criteria:
> - The project is openstack-dev/devstack, openstack-dev/grenade or
>   openstack/requirements
> - The project has the 'check-requirements' job listed as a template in
>   project-config:zuul/layout.yaml
> - The project is listed in governance:reference/projects.yaml and is tagged
>   with 'release:managed' or 'stable:follows-policy' (or both).
>
> The list of 171 projects that match above is at [1].  There are another 68
> projects at [2] that have kilo branches but do NOT match the criteria above.
>
> Please look over both lists by 2016-06-09 00:00 UTC and let me know if:
> - A project is in list 1 and *really* *really* wants to opt *OUT* of EOLing 
> and
>   why.
> - A project is in list 2 that would like to opt *IN* to tagging/EOLing

I think that all openstack/puppet-* projects that have stable/kilo can
be kilo-EOLed.
Let me know if it's ok and I'll abandon all open reviews.

Thanks,

> Any projects that will be EOL'd will need all open reviews abandoned before it
> can be processed.  I'm very happy to do this.
>
> I'd like to hand over the list of ready to EOL repos to the infra team on
> 2016-09-10 (UTC)
>
> Yours Tony.
> [1] http://paste.openstack.org/show/507233/
> [2] http://paste.openstack.org/show/507232/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] enabling new topologies

2016-06-02 Thread Sergey Guenender
Stephen, Michael, thank you for having a look.

I'll respond to every issue you mentioned when I get to work on Sunday.

Until then, in case you don't mind inspecting a small diff, just to 
clarify my point, please have a look at a rather straightforward change, 
which
1. exemplifies pretty much all I'm currently proposing (just splitting 
amphorae into semantic sub-clusters to facilitate code-reuse)
2. I'm hoping should provide everything needed (and thus frictionless 
review) for the virtual non-shared distributor of active active topology
3. is quite transparent for other topologies, including future 
active-active shared, hardware, what-have-you, just because it's fully 
compliant with existing code

https://github.com/sgserg/octavia/commit/030e786ce4966bbf24e73c00364f167596aef004

Needless to say, I wouldn't expect anything like this to be merged until 
we see an end-to-end working (virtual-private-d'tor) AA N+1 create-lb 
proof of concept (not destroying existing topologies).

I'm not married to this idea, it's just something I came up with having 
spent a few weeks in front of the code, trying to imagine how the simplest 
active-active use-case would go around performing the same tasks (vrrp, 
vip plugging, etc.).

-Sergey.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][lbaas] Operator-facing installation guide

2016-06-02 Thread Adrian Otto
Brandon,

Magnum uses neutron’s LBaaS service to allow for multi-master bays. We can 
balance connections between multiple kubernetes masters, for example. It’s not 
needed for single master bays, which are much more common. We have a blueprint 
that is in design stage for de-coupling magnum from neutron LBaaS for use cases 
that don’t require it:

https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas

Adrian

> On Jun 2, 2016, at 2:48 PM, Brandon Logan  wrote:
> 
> Call me ignorance, but I'm surprised at neutron-lbaas being a dependency
> of magnum.  Why is this?  Sorry if it has been asked before and I've
> just missed that answer?
> 
> Thanks,
> Brandon
> On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
>> Hi lbaas team,
>> 
>> 
>> 
>> I wonder if there is an operator-facing installation guide for
>> neutron-lbaas. I asked that because Magnum is working on an
>> installation guide [1] and neutron-lbaas is a dependency of Magnum. We
>> want to link to an official lbaas guide so that our users will have a
>> completed instruction. Any pointer?
>> 
>> 
>> 
>> [1] https://review.openstack.org/#/c/319399/
>> 
>> 
>> 
>> Best regards,
>> 
>> Hongbin
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] About kolla-ansible reconfigure

2016-06-02 Thread Steven Dake (stdake)
Hu,

Reconfigure was not designed to handle changes to globals.yml.  I think its a 
good goal that it should be able to do so, but it does not today.

Reconfigure was designed to handle changes to /etc/kolla/config/* (where custom 
config for services live).  Reconfigure in its current incarnation in all our 
branches and master is really a misnomer - it should be service-reconfgiure - 
but that is wordy and we plan to make globals.yml reconfigurable if feasible - 
but probably not anytime soon.

Regards
-steve


From: "hu.zhiji...@zte.com.cn" 
>
Reply-To: 
"openstack-dev@lists.openstack.org" 
>
Date: Wednesday, June 1, 2016 at 6:24 PM
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: [openstack-dev] [Kolla] About kolla-ansible reconfigure

Hi

After modifying the kolla_internal_vip_address in /etc/kolla/global.yml , I use 
kolla-ansible reconfigure to reconfigure OpenStack. But I got the following 
error.

TASK: [mariadb | Restart containers] **
skipping: [localhost] => (item=[{'group': 'mariadb', 'name': 'mariadb'}, 
{'KOLLA_BASE_DISTRO': 'centos', 'PS1': '$(tput bold)($(printenv 
KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ ', 
'KOLLA_INSTALL_TYPE': 'binary', 'changed': False, 'item': {'group': 'mariadb', 
'name': 'mariadb'}, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'PATH': 
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'invocation': 
{'module_name': u'kolla_docker', 'module_complex_args': {'action': 
'get_container_env', 'name': u'mariadb'}, 'module_args': ''}, 
'KOLLA_SERVICE_NAME': 'mariadb', 'KOLLA_INSTALL_METATYPE': 'rdo'}, {'cmd': 
['docker', 'exec', 'mariadb', '/usr/local/bin/kolla_set_configs', '--check'], 
'end': '2016-06-02 11:32:18.866276', 'stderr': 'INFO:__main__:Loading config 
file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating 
config file\nINFO:__main__:The config files are in the expected state', 
'stdout': u'', 'item': {'group': 'mariadb', 'name': 'mariadb'}, 'changed': 
False, 'rc': 0, 'failed': False, 'warnings': [], 'delta': '0:00:00.075316', 
'invocation': {'module_name': u'command', 'module_complex_args': {}, 
'module_args': u'docker exec mariadb /usr/local/bin/kolla_set_configs 
--check'}, 'stdout_lines': [], 'failed_when_result': False, 'start': 
'2016-06-02 11:32:18.790960'}])

TASK: [mariadb | Waiting for MariaDB service to be ready through VIP] *
failed: [localhost] => {"attempts": 6, "changed": false, "cmd": ["docker", 
"exec", "mariadb", "mysql", "-h", "10.43.114.148/24", "-u", "haproxy", "-e", 
"show databases;"], "delta": "0:00:03.924516", "end": "2016-06-02 
11:33:57.928139", "failed": true, "rc": 1, "start": "2016-06-02 
11:33:54.003623", "stdout_lines": [], "warnings": []}
stderr: ERROR 2005 (HY000): Unknown MySQL server host '10.43.114.148/24' (-2)
msg: Task failed as maximum retries was encountered

FATAL: all hosts have already failed -- aborting


It seems that mariadb was not restart as expected.




ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] Changes to Ramdisk and iPXE defaults in Devstack and many gate jobs

2016-06-02 Thread Jay Faulkner
These changes have all merged and taken effect. Ironic and IPA gate jobs 
are now operating as mentioned below, with one change; during review it 
was decided to lower the amount of ram per node to 384mb instead of 
512mb of RAM. This will ensure that we don't add additional bloat to 
TinyIPA ramdisks without gate jobs indicating it via failure.



Thanks to everyone's hard work on getting TinyIPA support working. This 
is a big step towards making our CI work faster and use less resources.


Thanks,
Jay Faulkner (JayF)
OSIC

On 5/12/16 8:54 AM, Jay Faulkner wrote:


Hi all,


A change (https://review.openstack.org/#/c/313035/) to Ironic 
devstack is in the gate, changing the default ironic-python-agent 
(IPA) ramdisk from CoreOS to TinyIPA, and changing iPXE to default 
enabled.



As part of the work to improve and speed up gate jobs, we determined 
that using iPXE speeds up deployments and makes them more reliable by 
using http to transfer ramdisks instead of tftp. Additionally, 
the TinyIPA image, in development over the last few months, uses less 
ram and is smaller, allowing faster transfers and more simultaneous 
VMs to run in the gate.



In addition to changing the devstack default, there's also a patch up: 
https://review.openstack.org/#/c/313800/ to change most Ironic jobs to 
use iPXE and TinyIPA. This change will make IPA have voting check jobs 
and tarball publishing jobs for supported ramdisks (CoreOS and 
TinyIPA). Ironic (and any other projects other than IPA) will use the 
publicly published tinyipa image.



In summary:

*- Devstack changes (merging now):*

**- Defaults to TinyIPA ramdisk

  - Defaults to iPXE enabled

*- Gate changes (needs review at: 
https://review.openstack.org/#/c/313800/ )*


  - Ironic-Python-Agent

- Voting CoreOS + TinyIPA source (ramdisk built on the fly jobs)

  - Ironic

- Change all jobs (except bash ramdisk pxe_ssh job) to TinyIPA

- Change all jobs but one to use iPXE

- Change all gate jobs to use 512mb of ram


If there are any questions or concerns, feel free to ask here or in 
#openstack-ironic.



P.S. I welcome users of the DIB ramdisk to help make a job to run 
against IPA. All supported ramdisks should be checked in IPA's gate to 
avoid breakage as IPA is inherently dependent on its environment.




Thanks,

Jay Faulkner (JayF)

OSIC



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Completed: Move to TinyIPA Ramdisk and iPXE defaults in gate jobs

2016-06-02 Thread Jay Faulkner
These changes have all merged and taken effect. Ironic and IPA gate jobs 
are now operating as mentioned below, with one change; during review it 
was decided to lower the amount of ram per node to 384mb instead of 
512mb of RAM. This will ensure that we don't add additional bloat to 
TinyIPA ramdisks without gate jobs indicating it via failure. If anyone 
sees any strange behavior in the gate as a result, feel free to ping me 
on IRC.



The remaining pending change is https://review.openstack.org/#/c/323994/ 
in order to enable similar changes for ironic-inspector jobs. Your 
review attention on this is appreciated; it's a simple one word change.


Thanks to everyone's hard work on getting TinyIPA support working. This 
is a big step towards making our CI work faster and use less resources.


Thanks,
Jay Faulkner (JayF)
OSIC

On 5/12/16 8:54 AM, Jay Faulkner wrote:


Hi all,


A change (https://review.openstack.org/#/c/313035/) to Ironic 
devstack is in the gate, changing the default ironic-python-agent 
(IPA) ramdisk from CoreOS to TinyIPA, and changing iPXE to default 
enabled.



As part of the work to improve and speed up gate jobs, we determined 
that using iPXE speeds up deployments and makes them more reliable by 
using http to transfer ramdisks instead of tftp. Additionally, 
the TinyIPA image, in development over the last few months, uses less 
ram and is smaller, allowing faster transfers and more simultaneous 
VMs to run in the gate.



In addition to changing the devstack default, there's also a patch up: 
https://review.openstack.org/#/c/313800/ to change most Ironic jobs to 
use iPXE and TinyIPA. This change will make IPA have voting check jobs 
and tarball publishing jobs for supported ramdisks (CoreOS and 
TinyIPA). Ironic (and any other projects other than IPA) will use the 
publicly published tinyipa image.



In summary:

*- Devstack changes (merging now):*

**- Defaults to TinyIPA ramdisk

  - Defaults to iPXE enabled

*- Gate changes (needs review at: 
https://review.openstack.org/#/c/313800/ )*


  - Ironic-Python-Agent

- Voting CoreOS + TinyIPA source (ramdisk built on the fly jobs)

  - Ironic

- Change all jobs (except bash ramdisk pxe_ssh job) to TinyIPA

- Change all jobs but one to use iPXE

- Change all gate jobs to use 512mb of ram


If there are any questions or concerns, feel free to ask here or in 
#openstack-ironic.



P.S. I welcome users of the DIB ramdisk to help make a job to run 
against IPA. All supported ramdisks should be checked in IPA's gate to 
avoid breakage as IPA is inherently dependent on its environment.




Thanks,

Jay Faulkner (JayF)

OSIC



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][lbaas] Operator-facing installation guide

2016-06-02 Thread Brandon Logan
Call me ignorance, but I'm surprised at neutron-lbaas being a dependency
of magnum.  Why is this?  Sorry if it has been asked before and I've
just missed that answer?

Thanks,
Brandon
On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
> Hi lbaas team,
> 
>  
> 
> I wonder if there is an operator-facing installation guide for
> neutron-lbaas. I asked that because Magnum is working on an
> installation guide [1] and neutron-lbaas is a dependency of Magnum. We
> want to link to an official lbaas guide so that our users will have a
> completed instruction. Any pointer?
> 
>  
> 
> [1] https://review.openstack.org/#/c/319399/
> 
>  
> 
> Best regards,
> 
> Hongbin
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] v1 API and nova-net support removal in newton

2016-06-02 Thread Ben Swartzlander
We have made the decision to remove the v1 API from Manila in Newton (it 
was deprecated in Mitaka). Only v2.0+ will be supported. For those that 
don't know, v2.0 is exactly the same as v1 but it has microversion 
support. You need a client library from Liberty or Mitaka to get 
microversion support, but scripts should work exactly the same and 
software that imports the library should work find with the new library.


We also made the decision to drop the nova-net plugin from Manila 
because nova-net has been deprecated since before we added it. This 
won't affect anyone unless they're using one of the few drivers that 
support shares servers (not including the Generic driver) AND they're 
still using nova-net instead of neutron. The recommended workaround for 
those users is to switch to neutron.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Newton deadlines (reminder)

2016-06-02 Thread Ben Swartzlander
At the start of the Newton release we agreed to keep the same deadlines 
we had for Mitaka. I thought everyone knew what those were but there is 
some confusion so I'll remind everyone.


As always, we will enforce a Feature Freeze on the N-3 milestone date: 
September 1st [1]. Only bugfixes and documentation changes are allowed 
to merge after that date without an explicit feature freeze exception (FFE).


Also like before, we will enforce a feature proposal freeze 2 weeks 
before the feature freeze, on Aug 18th. New feature patches must be 
submitted to gerrit, with complete test coverage, and passing Jenkins by 
this date.


New drivers must be submitted 3 weeks before the feature freeze, by Aug 
11th, with the same requirements as above, and working CI. Additionally 
driver refactor patches (any patch that significantly reworks existing 
driver code) will be subject to the same deadline, because these patches 
tend to take as much resources to review as a whole new driver.


"Large" new features must be submitted to gerrit 6 weeks before the 
feature freeze, by Jul 21 (a week after the N-2 milestone [1]). The 
definition of a "large" feature is the same as we defined it for Mitaka [2].


The manila specs repo is a new thing for Newton and for now there are no 
deadlines for specs.


Also I want to remind everyone that changes to our "library" project 
including python-manilaclient and manila-ui have the same deadlines. We 
can't sneak features in the libraries after the core manila patches 
land, they need to go in together.


-Ben Swartzlander


[1] http://releases.openstack.org/newton/schedule.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/079901.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] versioning of IPA, it is time or is it?

2016-06-02 Thread Dmitry Tantsur
2 июня 2016 г. 10:19 PM пользователь "Loo, Ruby" 
написал:
>
> Hi,
>
> I recently reviewed a patch [1] that is trying to address an issue with
ironic (master) talking to a ramdisk that has a mitaka IPA lurking around.
>
> It made me think that IPA may no longer be a teenager (yay, boo). IPA now
has a stable branch. I think it is time it grows up and acts responsibly.
Ironic needs to know which era of IPA it is talking to. Or conversely, does
ironic want to specify which microversion of IPA it wants to use? (Sorry,
Dmitry, I realize you are cringing.)

With versioning in place we'll have to fix one IPA version in ironic.
Meaning, as soon as we introduce a new feature, we have to explicitly break
compatibility with old ramdisk by requesting a version it does not support.
Even if the feature itself is optional. Or we have to wait some long time
before using new IPA features in ironic. I hate both options.

Well, or we can use some different versioning procedure :)

>
> Has anyone thought more than I have about this (i.e., more than 2ish
minutes)?
>
> If the solution (whatever it is) is going to take a long time to
implement, is there anything we can do in the short term (ie, in this
cycle)?
>
> --ruby
>
> [1] https://review.openstack.org/#/c/319183/
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Midcycle meetup

2016-06-02 Thread Ben Swartzlander
As of now we're planning to hold our midcycle meetup in virtually on 
June 28, 29, and possibly June 30 (depending on agenda).


If any core reviewers or significant contributors can't attend those 
days please let me know.


Also if anyone wants to travel to RTP to join those of us based here I 
also need to know so I can get a space reserved. Given the geographic 
spread of the team I'm prioritizing remote participation though.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][upgrades] Bi-weekly upgrades work status. 6/2/2016

2016-06-02 Thread Darek Smigiel
Thanks Artur for this summarize.

> On Jun 2, 2016, at 3:29 PM, Korzeniewski, Artur 
>  wrote:
> 
> Hi Neutrinos,
> I would like to start the first bi-weekly upgrades work report.
>  
> TLDR:
> In order to inform community what is going on in upgrades field, we would 
> like to start bi-weekly reporting. We would like to show progress in database 
> resource transition to Oslo VersionedObjects. Also list of code refactoring 
> places will be provided. Community members can take a look at the list and 
> see if their work is not conflicting with upgrades effort.
>  
> General approach:
> During Mitaka release cycle, we have started the journey to port database 
> resources to Oslo VersionedObject (OVO). As first, we have chosen the Port, 
> Subnet and Network resources. As the process is very complicated, we have 
> divided the work to first define interface object, and then work on 
> integration patches in core Neutron code. The NeutronDbObject base class is 
> still evolving, and we have spent the Mitaka release cycle on working on 
> solid base, in order to reuse the code for all derived classes.
> We are still working on basic OVO integration, so any help in getting the 
> work done would be appreciated. For detailed info, please take a look at 
> below list.
> We would like to finish already started patches, priority has the Port, 
> Subnet and Network objects. If you want to contribute and port new objects to 
> OVO, please prepare object implementation and some usage in core Neutron 
> code. In order to see which object is already covered, please take a look at 
> below list of existing patches.
>  
>  
> I would like to remind that agreed approach at Design Summit in Austin was, 
> that every new resource added to neutron should have OVO implemented. Please 
> comply, and core reviewers please take care of this requirements in patches 
> you review.
>  
>  
> The effort to move all database resources to Oslo VersionedObject will 
> contribute to block offline contracting migration in Ocata release. In Newton 
> cycle we would like to have our last offline data migration.
>  
>  
> Objects merged:
> Subnetpool https://review.openstack.org/275789 
> 
> Subnet https://review.openstack.org/264273 
> 
> Port extension: Allowed address pairs https://review.openstack.org/268274/ 
> 
> Port extension: Extra DHCP opt https://review.openstack.org/273072/ 
> 
> Port extension: Port allowed address pairs 
> https://review.openstack.org/268274 
> Port extension: Port security https://review.openstack.org/292178 
> 
> OVO for VLAN aware VMs: https://review.openstack.org/310410 
> 
>  
>  
> Objects under review:
> Network https://review.openstack.org/269658 
> 
> Port https://review.openstack.org/253641 
> Port extension: security groups https://review.openstack.org/284738 
> 
> Agent  https://review.openstack.org/297887 
> 
> Route, RoutePort and RouterRoute https://review.openstack.org/307964 
> 
> DistributedVirtualRouter mac address https://review.openstack.org/304873/ 
> 
> Service Type: https://review.openstack.org/304322 
> 
> Flavor and Service Profile https://review.openstack.org/306685 
> 
>  
>  
> Integration patches merged:
> Integrate the port allowed address pairs VersionedObject in 
> Neutronhttps://review.openstack.org/287756 
> 
> Integrate the Extra Dhcp Opt VersionedObject in Neutron 
> https://review.openstack.org/285397 
>  
> Integration patches Under development:
> Subnet OVO https://review.openstack.org/321001 
> 
> Identified usages of Subnet:
> · main integration with db_base_plugin and ml2 plugin
> · DHCP RPC usage
> · IPAM usage
> · dvr_mac_db.py
> · l3_db.py
> · extraroute_db.py
> Subnetpool usage: https://review.openstack.org/300056 
> 
> Replace plugin class for address scope ovo 
> https://review.openstack.org/308005 
>  
>  
> Testing:
> The API tests for sorting/pagination has been added for port and network
> https://review.openstack.org/306272 
> https://review.openstack.org/320980 
> More tests are needed for resources that use sorting/pagination on API level.
> Testing in gate has been covered 

[openstack-dev] [neutron][upgrades] Bi-weekly upgrades work status. 6/2/2016

2016-06-02 Thread Korzeniewski, Artur
Hi Neutrinos,

I would like to start the first bi-weekly upgrades work report.



TLDR:

In order to inform community what is going on in upgrades field, we would like 
to start bi-weekly reporting. We would like to show progress in database 
resource transition to Oslo VersionedObjects. Also list of code refactoring 
places will be provided. Community members can take a look at the list and see 
if their work is not conflicting with upgrades effort.



General approach:

During Mitaka release cycle, we have started the journey to port database 
resources to Oslo VersionedObject (OVO). As first, we have chosen the Port, 
Subnet and Network resources. As the process is very complicated, we have 
divided the work to first define interface object, and then work on integration 
patches in core Neutron code. The NeutronDbObject base class is still evolving, 
and we have spent the Mitaka release cycle on working on solid base, in order 
to reuse the code for all derived classes.

We are still working on basic OVO integration, so any help in getting the work 
done would be appreciated. For detailed info, please take a look at below list.

We would like to finish already started patches, priority has the Port, Subnet 
and Network objects. If you want to contribute and port new objects to OVO, 
please prepare object implementation and some usage in core Neutron code. In 
order to see which object is already covered, please take a look at below list 
of existing patches.





I would like to remind that agreed approach at Design Summit in Austin was, 
that every new resource added to neutron should have OVO implemented. Please 
comply, and core reviewers please take care of this requirements in patches you 
review.





The effort to move all database resources to Oslo VersionedObject will 
contribute to block offline contracting migration in Ocata release. In Newton 
cycle we would like to have our last offline data migration.





Objects merged:

Subnetpool https://review.openstack.org/275789

Subnet https://review.openstack.org/264273

Port extension: Allowed address pairs https://review.openstack.org/268274/

Port extension: Extra DHCP opt https://review.openstack.org/273072/

Port extension: Port allowed address pairs https://review.openstack.org/268274

Port extension: Port security https://review.openstack.org/292178

OVO for VLAN aware VMs: https://review.openstack.org/310410





Objects under review:

Network https://review.openstack.org/269658

Port https://review.openstack.org/253641

Port extension: security groups https://review.openstack.org/284738

Agent  https://review.openstack.org/297887

Route, RoutePort and RouterRoute https://review.openstack.org/307964

DistributedVirtualRouter mac address https://review.openstack.org/304873/

Service Type: https://review.openstack.org/304322

Flavor and Service Profile https://review.openstack.org/306685





Integration patches merged:

Integrate the port allowed address pairs VersionedObject in Neutron 
https://review.openstack.org/287756

Integrate the Extra Dhcp Opt VersionedObject in Neutron 
https://review.openstack.org/285397



Integration patches Under development:

Subnet OVO https://review.openstack.org/321001

Identified usages of Subnet:
* main integration with db_base_plugin and ml2 plugin
* DHCP RPC usage
* IPAM usage
* dvr_mac_db.py
* l3_db.py
* extraroute_db.py

Subnetpool usage: https://review.openstack.org/300056

Replace plugin class for address scope ovo https://review.openstack.org/308005





Testing:

The API tests for sorting/pagination has been added for port and network

https://review.openstack.org/306272

https://review.openstack.org/320980

More tests are needed for resources that use sorting/pagination on API level.

Testing in gate has been covered in multinode Grenade jobs, one for legacy 
scenario, and the other DVR.





Improvements for NeutronDbObject:
* Merged: objects: support advanced criteria for get_objects: 
https://review.openstack.org/300055
* Merged: Standard attributes are automagically added to all relevant 
neutron resources in object's base class
* objects: stop using internal _context attribute 
https://review.openstack.org/283616
* Allow unique keys to be used with get_object 
https://review.openstack.org/322024
* qos: support advanced sorting/pagination criteria 
https://review.openstack.org/318251/





TODOs:
1.   Help in review and code development for already existing patches.
2.   Add integration patches for objects: network, port, router, agent, 
service type...
3.   Add more API tests for resources: subnets, routers, all the resource 
supporting sorting/pagination on API level
4.   Add usages of SQLAlchemy decorator classes (MAC Address, IP Address, 
CIDR) in SQL schema.
5.   Improve Grenade coverage for DHCP, L3 and DVR upgrade tests.
6.   Add objects for not yet covered database 

[openstack-dev] [ironic] versioning of IPA, it is time or is it?

2016-06-02 Thread Loo, Ruby
Hi,

I recently reviewed a patch [1] that is trying to address an issue with ironic 
(master) talking to a ramdisk that has a mitaka IPA lurking around.

It made me think that IPA may no longer be a teenager (yay, boo). IPA now has a 
stable branch. I think it is time it grows up and acts responsibly. Ironic 
needs to know which era of IPA it is talking to. Or conversely, does ironic 
want to specify which microversion of IPA it wants to use? (Sorry, Dmitry, I 
realize you are cringing.)

Has anyone thought more than I have about this (i.e., more than 2ish minutes)?

If the solution (whatever it is) is going to take a long time to implement, is 
there anything we can do in the short term (ie, in this cycle)?

--ruby 

[1] https://review.openstack.org/#/c/319183/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] enabling new topologies

2016-06-02 Thread Michael Johnson
Hi Sergey,  Welcome to working on Octavia!

I'm not sure I fully understand your proposals, but I can give my
thoughts/opinion on the challenge for Active/Active.

In general I agree with Stephen.

The intention of using TaskFlow is to facilitate code reuse across
similar but different code flows.

For an Active/Active provisioning request I envision it as a new flow
that is loaded as opposed to the current standalone and Active/Standby
flow.  I would expect it would include many existing tasks (example,
plug_network) that may be required for the requested action.  This new
flow will likely include a number of concurrent sub-flows using these
existing tasks.

I do expect that the "distributor" will need to be a new "element".
Because the various stakeholders are considering implementing this
function in different ways, we agreed that an API and driver would be
developed for interactions with the distributor.  This should also
take into account that there may be some deployments where
distributors are not shared.

I still need to review the latest version of the Act/Act spec to
understand where that was left after my first round of comments and
our mid-cycle discussions.

Michael


On Wed, Jun 1, 2016 at 10:23 AM, Stephen Balukoff  wrote:
> Hey Sergey--
>
> Apologies for the delay in my response. I'm still wrapping my head around
> your option 2 suggestion and the implications it might have for the code
> base moving forward. I think, though, that I'm against your option 2
> proposal and in favor of option 1 (which, yes, is more work initially) for
> the following reasons:
>
> A. We have a precedent in the code tree with how the stand-alone and
> active-standby topologies are currently being handled. Yes, this does entail
> various conditionals and branches in tasks and flows-- which is not really
> that ideal, as it means the controller worker needs to have more specific
> information on how topologies work than I think any of us would like, and
> this adds some rigidity to the implementation (meaning 3rd party vendors may
> have more trouble interfacing at that level)...  but it's actually "not that
> bad" in many ways, especially given we don't anticipate supporting a large
> or variable number of topologies. (stand-alone, active-standby,
> active-active... and then what? We've been doing this for a number of years
> and nobody has mentioned any radically new topologies they would like in
> their load balancing. Things like auto-scale are just a specific case of
> active-active).
>
> B. If anything Option 2 builds more less-obvious rigidity into the
> implementation than option 1. For example, it makes the assumption that the
> distributor is necessarily an amphora or service VM, whereas we have already
> heard that some will implement the distributor as a pure network routing
> function that isn't going to be managed the same way other amphorae are.
>
> C. Option 2 seems like it's going to have a lot more permutations that would
> need testing to ensure that code changes don't break existing / potentially
> supported functionality. Option 1 keeps the distributor and amphorae
> management code separate, which means tests should be more straight-forward,
> and any breaking changes which slip through potentially break less stuff.
> Make sense?
>
> Stephen
>
>
> On Sun, May 29, 2016 at 7:12 AM, Sergey Guenender  wrote:
>>
>> I'm working with the IBM team implementing the Active-Active N+1 topology
>> [1].
>>
>> I've been commissioned with the task to help integrate the code supporting
>> the new topology while a) making as few code changes and b) reusing as much
>> code as possible.
>>
>> To make sure the changes to existing code are future-proof, I'd like to
>> implement them outside AA N+1, submit them on their own and let the AA N+1
>> base itself on top of it.
>>
>> --TL;DR--
>>
>> what follows is a description of the challenges I'm facing and the way I
>> propose to solve them. Please skip down to the end of the email to see the
>> actual questions.
>>
>> --The details--
>>
>> I've been studying the code for a few weeks now to see where the best
>> places for minimal changes might be.
>>
>> Currently I see two options:
>>
>>1. introduce a new kind of entity (the distributor) and make sure it's
>> being handled on any of the 6 levels of controller worker code (endpoint,
>> controller worker, *_flows, *_tasks, *_driver)
>>
>>2. leave most of the code layers intact by building on the fact that
>> distributor will inherit most of the controller worker logic of amphora
>>
>>
>> In Active-Active topology, very much like in Active/StandBy:
>> * top level of distributors will have to run VRRP
>> * the distributors will have a Neutron port made on the VIP network
>> * the distributors' neutron ports on VIP network will need the same
>> security groups
>> * the amphorae facing the pool member networks still require
>> * ports on the pool member networks
>>

Re: [openstack-dev] [octavia] is l7 rule using routed L3 connectivity

2016-06-02 Thread Michael Johnson
I agree that if this occurred it is a bug.  Please open a bug for us
in launchpad and include your controller worker logs and amphora-agent
log from the impacted amphora.

Thanks,
Michael


On Wed, Jun 1, 2016 at 9:55 AM, Stephen Balukoff  wrote:
> Hello Yong Sheng Gong!
>
> Apologies for the lateness of my reply (I've been intermittently available
> over the last month and am now just catching up on ML threads). Anyway, did
> you get your question answered elsewhere? It looks like you've discovered a
> bug in the behavior here-- when you created the member on server_subnet2, an
> interface should have been added to the amphora in the amphora-haproxy
> namespace. If you haven't yet filed a bug report on this, could you?
>
> In any case, the behavior you're seeing (which is not correct in this case)
> is that if the amphora doesn't have a directly-connected interface is that
> it will use its default route to attempt to reach the member.
>
> Stephen
>
> On Tue, May 10, 2016 at 12:01 AM, 龚永生  wrote:
>>
>> Hi, Stephen,
>>
>> By running following commands:
>> neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --loadbalancer
>> 85800051-31fb-4ca0-962d-8835656a61ef --protocol HTTP --name pool2
>>
>> neutron net-create server_net2
>>
>> neutron subnet-create server_net2 10.20.2.0/24 --name server_subnet2
>>
>> neutron lbaas-member-create --subnet server_subnet2 --address 10.20.2.10
>> --protocol-port 8080 pool2
>>
>> neutron lbaas-l7policy-create --name policy1  --action REDIRECT_TO_POOL
>> --redirect-pool pool2 --listener 8ec3a2e5-8cb5-472e-a12c-f067eefa4b7a
>>
>> neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH --value
>> "/api" policy1
>>
>>
>> I found there is no interface on server_subnet2 in namespace
>> amphora-haproxy on amphora:
>> ubuntu@amphora-86359e7c-f473-41c3-9531-c8bf129ec6b7:~$ sudo ip netns exec
>> amphora-haproxy ip a
>> 1: lo:  mtu 65536 qdisc noop state DOWN group default
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> 3: eth1:  mtu 1450 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether fa:16:3e:5f:4c:c1 brd ff:ff:ff:ff:ff:ff
>> inet 10.20.0.33/24 brd 10.20.0.255 scope global eth1
>>valid_lft forever preferred_lft forever
>> inet 10.20.0.32/24 brd 10.20.0.255 scope global secondary eth1:0
>>valid_lft forever preferred_lft forever
>> inet6 fe80::f816:3eff:fe5f:4cc1/64 scope link
>>valid_lft forever preferred_lft forever
>> 4: eth2:  mtu 1450 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether fa:16:3e:8a:3d:f5 brd ff:ff:ff:ff:ff:ff
>> inet 10.20.1.5/24 brd 10.20.1.255 scope global eth2
>>valid_lft forever preferred_lft forever
>> inet6 fe80::f816:3eff:fe8a:3df5/64 scope link
>>valid_lft forever preferred_lft forever
>>
>> Is L7 policy using "Routed (layer-3) connectivity"?
>>
>> Thanks
>> yong sheng gong
>> --
>> 龚永生
>> 九州云信息科技有限公司 99CLOUD Co. Ltd.
>> 邮箱(Email):gong.yongsh...@99cloud.net
>> 地址:北京市海淀区上地三街嘉华大厦B座806
>> Addr : Room 806, Tower B, Jiahua Building, No. 9 Shangdi 3rd Street,
>> Haidian District, Beijing, China
>> 手机(Mobile):+86-18618199879
>> 公司网址(WebSite):http://99cloud.net
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Keith Bray
Has an email been posted to the [heat] community for their input?  Maybe I
missed it.

Thanks,
-Keith

On 6/2/16, 9:42 AM, "Hongbin Lu"  wrote:

>Madhuri,
>
>It looks both of us agree the idea of having heterogeneous set of nodes.
>For the implementation, I am open to alternative (I supported the
>work-around idea because I cannot think of a feasible implementation by
>purely using Heat, unless Heat support "for" logic which is very unlikely
>to happen. However, if anyone can think of a pure Heat implementation, I
>am totally fine with that).
>
>Best regards,
>Hongbin
>
>> -Original Message-
>> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
>> Sent: June-02-16 12:24 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Hi Hongbin,
>> 
>> I also liked the idea of having heterogeneous set of nodes but IMO such
>> features should not be implemented in Magnum, thus deviating Magnum
>> again from its roadmap. Whereas we should leverage Heat(or may be
>> Senlin) APIs for the same.
>> 
>> I vote +1 for this feature.
>> 
>> Regards,
>> Madhuri
>> 
>> -Original Message-
>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
>> Sent: Thursday, June 2, 2016 3:33 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> managing the bay nodes
>> 
>> Personally, I think this is a good idea, since it can address a set of
>> similar use cases like below:
>> * I want to deploy a k8s cluster to 2 availability zone (in future 2
>> regions/clouds).
>> * I want to spin up N nodes in AZ1, M nodes in AZ2.
>> * I want to scale the number of nodes in specific AZ/region/cloud. For
>> example, add/remove K nodes from AZ1 (with AZ2 untouched).
>> 
>> The use case above should be very common and universal everywhere. To
>> address the use case, Magnum needs to support provisioning
>> heterogeneous set of nodes at deploy time and managing them at runtime.
>> It looks the proposed idea (manually managing individual nodes or
>> individual group of nodes) can address this requirement very well.
>> Besides the proposed idea, I cannot think of an alternative solution.
>> 
>> Therefore, I vote to support the proposed idea.
>> 
>> Best regards,
>> Hongbin
>> 
>> > -Original Message-
>> > From: Hongbin Lu
>> > Sent: June-01-16 11:44 AM
>> > To: OpenStack Development Mailing List (not for usage questions)
>> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
>> > managing the bay nodes
>> >
>> > Hi team,
>> >
>> > A blueprint was created for tracking this idea:
>> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
>> > nodes . I won't approve the BP until there is a team decision on
>> > accepting/rejecting the idea.
>> >
>> > From the discussion in design summit, it looks everyone is OK with
>> the
>> > idea in general (with some disagreements in the API style). However,
>> > from the last team meeting, it looks some people disagree with the
>> > idea fundamentally. so I re-raised this ML to re-discuss.
>> >
>> > If you agree or disagree with the idea of manually managing the Heat
>> > stacks (that contains individual bay nodes), please write down your
>> > arguments here. Then, we can start debating on that.
>> >
>> > Best regards,
>> > Hongbin
>> >
>> > > -Original Message-
>> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
>> > > Sent: May-16-16 5:28 AM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> > > managing the bay nodes
>> > >
>> > > The discussion at the summit was very positive around this
>> > requirement
>> > > but as this change will make a large impact to Magnum it will need
>> a
>> > > spec.
>> > >
>> > > On the API of things, I was thinking a slightly more generic
>> > > approach to incorporate other lifecycle operations into the same
>> API.
>> > > Eg:
>> > > magnum bay-manage  
>> > >
>> > > magnum bay-manage  reset –hard
>> > > magnum bay-manage  rebuild
>> > > magnum bay-manage  node-delete  magnum bay-manage
>> > >  node-add –flavor  magnum bay-manage  node-reset
>> > >  magnum bay-manage  node-list
>> > >
>> > > Tom
>> > >
>> > > From: Yuanying OTSUKA 
>> > > Reply-To: "OpenStack Development Mailing List (not for usage
>> > > questions)" 
>> > > Date: Monday, 16 May 2016 at 01:07
>> > > To: "OpenStack Development Mailing List (not for usage questions)"
>> > > 
>> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
>> > > managing the bay nodes
>> > >
>> > > Hi,
>> > >
>> > > I think, user also want to specify the deleting node.
>> > > So we should manage “node” 

Re: [openstack-dev] [freezer] Addition to the core team

2016-06-02 Thread fausto.marzi


+1


Sent from my Samsung device

 Original message 
From: "Ramirez Garcia, Guillermo"  
Date: 02/06/2016  17:40  (GMT+01:00) 
To: "Mathieu, Pierre-Arthur" , 
openstack-dev@lists.openstack.org 
Cc: freezer-eskimos  
Subject: Re: [openstack-dev] [freezer] Addition to the core team 

+1

-Original Message-
From: Mathieu, Pierre-Arthur 
Sent: Thursday 2 June 2016 16:29
To: openstack-dev@lists.openstack.org
Cc: freezer-eskimos 
Subject: [openstack-dev][freezer] Addition to the core team

Hello,

I would like to propose that we make Deklan Dieterly (ddieterly) core on 
freezer.
He has been a highly valuable developper for the past few month, mainly working 
on integration testing for Freezer components.
He has also been helping a lot with features and Ux testing.


His work can be found here: [1]
And his stackalitics profile here: [2]

Unless there is a disagreement I plan to make Saad core by the end of the week.


Thanks
- Pierre, Freezer PTL

[1] https://review.openstack.org/#/q/owner:%22Deklan+Dieterly%22
[2] http://stackalytics.com/?user_id=deklan=all=loc

















__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [Tempest] Abondoned old code reviews

2016-06-02 Thread Ken'ichi Ohmichi
Thanks for feedback here.
There are not any objections, so I will drop old code reviews on Tempest queue.

Thanks
Ken Ohmichi

---


2016-05-31 23:55 GMT-07:00 Masayuki Igawa :
> On Wed, Jun 1, 2016 at 3:05 AM, Andrea Frittoli
>  wrote:
>> On Mon, 30 May 2016, 6:25 p.m. Ken'ichi Ohmichi, 
>> wrote:
>>>
>>> Hi,
>>>
>>> There are many patches which are not updated in Tempest review queue
>>> even if having gotten negative feedback from reviewers or jenkins.
>>> Nova team is abandoning such patches like [1].
>>> I feel it would be nice to abandone such patches which are not updated
>>> since the end of 2015.
>>> Any thoughts?
>
> +1
> I think 5 months is enough to abandon :)
>
> Best Regards,
> -- Masayuki
>
>
>>
>>
>> I don't mind either way, if you prefer abandoning them it's ok with me.
>> I rely on gerrit dashboards and IRC communication to decide which patches I
>> should review; but I understand it would be nice to remove some clutter.
>>
>> Andrea
>>
>>>
>>> [1]:
>>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/096112.html
>>>
>>> Thanks
>>> Ken Ohmichi
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] Spec for congress.conf

2016-06-02 Thread Tim Hinrichs
Bryan,

I spent some time looking into the proper way to document configuration
options in OpenStack.  There's a configuration reference that's part of the
openstack-manuals project.  It'll take some time to figure out the best way
of contributing to that.

But for now I added a section to our docs detailing the most important
options for Congress.  All the rest of the config options are common to
most OpenStack projects.  We should document those too, but for the time
being I think we've got the most important part in place.

Here's the change in review, if you want to comment.
https://review.openstack.org/#/c/324732/

If you'd rather just have the Congress-specific config options, here's the
text from that change.
As Masahito said, everything has a default value, so all of these are
optional for you to include in the configuration file.  In practice,
though, you should always provide the ``drivers`` option, since otherwise
you won't be able to create any datasources.

Quick request: Could you give us feedback around whether an empty default
value for ``drivers`` makes sense, or whether it would be (much) better for
the default to be all of the drivers that exist in Congress?



The options most important to Congress are described below, all of which
appear under the [DEFAULT] section of the configuration file.

``drivers``
The list of permitted datasource drivers.  Default is the empty list.
The list is a comma separated list of Python class paths. For example:
drivers =
congress.datasources.neutronv2_driver.NeutronV2Driver,congress.datasources.glancev2_driver.GlanceV2Driver

``datasource_sync_period``
The number of seconds to wait between synchronizing datasource config
from the database.  Default is 0.

``enable_execute_action``
Whether or not congress will execute actions.  If false, Congress will
never execute any actions to do manual reactive enforcement, even if
there
are policy statements that say actions should be executed and the
conditions of those actions become true.  Default is True.

One of Congress's new experimental features is distributing its services
across multiple services and even hosts.  Here are the options for using
that feature.

``distributed_architecture``
Whether to enable the distributed architecture.  Don't set it to true in
before Newton release since the new architecture is still under
development as of Newton.  Default is false.

``node_id``
Unique ID of this Congress instance.  Can be any string.  Useful if
you want to create multiple, distributed instances of Congress.

Here are the most often-used, but standard OpenStack options.  These
are specified in the [DEFAULT] section of the configuration file.

``auth_strategy``
Method for authenticating Congress users.
Can be assigned to either 'keystone' meaning that the user must provide
Keystone credentials or to 'noauth' meaning that no authentication is
required.  Default is 'keystone'.

``verbose``
Controls whether the INFO-level of logging is enabled.  If false,
logging
level will be set to WARNING.  Default is false.  Deprecated.

``debug``
Whether or not the DEBUG-level of logging is enabled. Default is false.




Tim

On Tue, May 31, 2016 at 10:18 AM Tim Hinrichs  wrote:

> We should add a section to our docs that details the config option names,
> their descriptions, and which ones are required.  We should backport that
> to mitaka and maybe liberty.
>
> Tim
>
> On Mon, May 30, 2016 at 12:49 AM Masahito MUROI <
> muroi.masah...@lab.ntt.co.jp> wrote:
>
>> Hi Bryan,
>>
>>
>> On 2016/05/28 2:52, Bryan Sullivan wrote:
>> > Masahito,
>> >
>> > Sorry, I'm not quite clear on the guidance. Sounds like you're saying
>> > all options will be defaulted by Oslo.config if not set in the
>> > congress.conf file. That's OK, if I understood.
>> you're right.
>>
>> >
>> > It's clear to me that some will be deployment-specific.
>> >
>> > But what I am asking is where is the spec for:
>> > - what congress.conf fields are supported i.e. defined for possible
>> > setting in a release
>> Your generated congress.conf has a list of all supported config fields.
>>
>> > - which fields are mandatory to be set (or Congress will simply not
>> work)
>> > - which fields are not mandatory, but must be set for some specific
>> > purpose, which right now is unclear
>> Without deployment-specific configs, IIRC what you need to change from
>> default only is "drivers" fields to run Congress with default setting.
>>
>> >
>> > I'm hoping the answer isn't "go look at the code"! That won't work for
>> > end-users, who are looking to use Congress but not decipher the
>> > meaning/importance of specific fields from the code.
>> I guess your generated config has the purpose of each config fields.
>>
>> If you expect the spec means documents like [1], unfortunately Congress
>> doesn't have these kind of document now.
>>
>> [1] 

Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-06-02 Thread Sean Dague
On 06/02/2016 12:53 PM, Everett Toews wrote:
> 
>> On Jun 1, 2016, at 2:01 PM, Matt Riedemann  
>> wrote:
>>
>> Agree with Sean, I'd prefer separate microversions since it makes getting 
>> these in easier since they are easier to review (and remember we make 
>> changes to python-novaclient for each of these also).
>>
>> Also agree with using a single spec in the future, like Sean did with the 
>> API deprecation spec - deprecating multiple APIs but a single spec since the 
>> changes are the same.
> 
> I appreciate that Nova has a long and storied history around it's API. 
> Nonetheless, since it seems you're considering moving to  a new microversion, 
> we'd appreciate it if you would consider adhering to the Sorting guideline 
> [1] and helping drive consensus into the Pagination guideline [2].

Everett,

Could you be more specific as to what your complaints are? This response
is extremely vague, and mildly passive aggressive, so I don't even know
where to start on responses.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] New filter: AggregateInstanceAffinityFilter

2016-06-02 Thread Mooney, Sean K


> -Original Message-
> From: Alonso Hernandez, Rodolfo
> [mailto:rodolfo.alonso.hernan...@intel.com]
> Sent: Thursday, June 2, 2016 6:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [nova] [scheduler] New filter:
> AggregateInstanceAffinityFilter
> 
> Hello:
> 
> For the last two cycles we have tried to introduce a new filter to be
> able to interact better with the aggregates, using the metadata to
> accept or reject an instance depending on the flavor:
>   https://review.openstack.org/#/c/189279/
> 
> This filter was reverted and we agreed to present a new one, being
> backwards compatible with AggregateInstanceExtraSpecsFilter and adding
> more flexibility to the original filter. We have this proposal and we
> ask you to review it:
>   https://review.openstack.org/#/c/314097/
> 
> Regards.
> 
> PD: I know the non-priority feature spec freeze is today and that's why
> I'm asking you to take a look at it.
> 

[Mooney, Sean K] looks like you forgot to disable the automatic footer 
now that you have your new laptop. 
For the res of the list please ignore  the footer.
But reviews would be welcome

> --
> Intel Research and Development Ireland Limited Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
> 
> 
> This e-mail and any attachments may contain confidential material for
> the sole use of the intended recipient(s). Any review or distribution by
> others is strictly prohibited. If you are not the intended recipient,
> please contact the sender and delete all copies.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-02 Thread Shawn McKinney

> On Jun 2, 2016, at 10:58 AM, Adam Young  wrote:
> 
> Any senseible RBAC setup would support this, but we are not using a sensible 
> one, we are using a hand rolled one. Replacing everything with Fortress 
> implies a complete rewrite of what we do now.  Nuke it from orbit type stuff.
> 
> What I would rather focus on is the splitting of the current policy into two 
> parts:
> 
> 1. Scope check done in code
> 2. Role check done in middleware
> 
> Role check should be donebased on URL, not on the policy key like 
> identity:create_user
> 
> 
> Then, yes, a Fortress style query could be done, or it could be done by 
> asking the service itself.

Mostly in agreement.  I prefer to focus on the model (RBAC) rather than a 
specific impl like Fortress. That is to say support the model and allow the 
impl to remain pluggable.  That way you enable many vendors to participate in 
your ecosystem and more important, one isn’t tied to a specific backend 
(ldapv3, sql, …)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [scheduler] New filter: AggregateInstanceAffinityFilter

2016-06-02 Thread Alonso Hernandez, Rodolfo
Hello:

For the last two cycles we have tried to introduce a new filter to be able to 
interact better with the aggregates, using the metadata to accept or reject an 
instance depending on the flavor:
https://review.openstack.org/#/c/189279/

This filter was reverted and we agreed to present a new one, being backwards 
compatible with AggregateInstanceExtraSpecsFilter and adding more flexibility 
to the original filter. We have this proposal and we ask you to review it:
https://review.openstack.org/#/c/314097/

Regards.

PD: I know the non-priority feature spec freeze is today and that's why I'm 
asking you to take a look at it.

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-06-02 Thread Everett Toews

> On Jun 1, 2016, at 2:01 PM, Matt Riedemann  wrote:
> 
> Agree with Sean, I'd prefer separate microversions since it makes getting 
> these in easier since they are easier to review (and remember we make changes 
> to python-novaclient for each of these also).
> 
> Also agree with using a single spec in the future, like Sean did with the API 
> deprecation spec - deprecating multiple APIs but a single spec since the 
> changes are the same.

I appreciate that Nova has a long and storied history around it's API. 
Nonetheless, since it seems you're considering moving to  a new microversion, 
we'd appreciate it if you would consider adhering to the Sorting guideline [1] 
and helping drive consensus into the Pagination guideline [2].

Thanks,
Everett

[1] 
http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html#sorting
[2] 
https://review.openstack.org/#/c/190743/21/guidelines/pagination_filter_sort.rst
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-02 Thread Ryan Moats
John McDowall  wrote on 06/02/2016 11:03:28
AM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff , "disc...@openvswitch.org"
> , Justin Pettit ,
> "OpenStack Development Mailing List"  d...@lists.openstack.org>, Russell Bryant 
> Date: 06/02/2016 11:03 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Sure – may need some help and it will probably be next week before Iget
to it.
>
> Regards
>
> John

John-

Let me see what I can do to push the ball along...

Ryan

>
> From: Ryan Moats 
> Date: Wednesday, June 1, 2016 at 1:25 PM
> To: John McDowall 
> Cc: Ben Pfaff , "disc...@openvswitch.org" <
> disc...@openvswitch.org>, Justin Pettit , OpenStack
> Development Mailing List , Russell
Bryant <
> russ...@ovn.org>
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall  wrote on 05/31/2016
> 07:57:02 PM:
>
> > From: John McDowall 
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: Ben Pfaff , "disc...@openvswitch.org"
> > , Justin Pettit ,
> > "OpenStack Development Mailing List"  > d...@lists.openstack.org>, Russell Bryant 
> > Date: 05/31/2016 07:57 PM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > More help is always great :-). As far as who to collaborate, what
> > ever Is easiest for everyone – I am pretty flexible.
> >
> > Regards
> >
> > John
>
> Ok, then I'll ask that we go the submit WIP patches to each of
> networking-sfc and networking-ovn and an RFC patch to d...@openvswitch.org
> and iterate through review.openstack.org and patchworks.
>
> Could you submit the initial patches today or tomorrow? I'd rather
> go that route since you have the lion's share of the work so far
> and credit where credit is due...
>
> Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2016-06-02 Thread michael mccune

Greetings OpenStack community,

This week there are no new merged guidelines nor guidelines proposed for 
freeze but there is a new guideline discussing ways to ensure that URIs 
are semantically consistent: https://review.openstack.org/#/c/322194/


# Recently merged guidelines

These guidelines have been recently merged by the group.

* Delete multiple metadata items with a single request
  https://review.openstack.org/281511
* Description of how to use etags to avoid lost updates
  https://review.openstack.org/301846

# API guidelines proposed for freeze

The following guidelines are available for broader review by interested 
parties. These will be merged in one week if there is no further feedback.


* None this week

# Guidelines currently under review

These are guidelines that the working group are debating and working on 
for consistency and language. We encourage any interested parties to 
join in the conversation.


* Add the beginning of a set of guidlines for URIs
  https://review.openstack.org/#/c/322194/
* Add description of pagination parameters
  https://review.openstack.org/190743
* Add guideline for Experimental APIs
  https://review.openstack.org/273158
* Add version discovery guideline
  https://review.openstack.org/254895

Note that some of these guidelines were introduced quite a long time ago 
and need to either be refreshed by their original authors, or adopted by 
new interested parties.


# API Impact reviews currently open

Reviews marked as APIImpact [1] are meant to help inform the working 
group about changes which would benefit from wider inspection by group 
members and liaisons. While the working group will attempt to address 
these reviews whenever possible, it is highly recommended that 
interested parties attend the API-WG meetings [2] to promote 
communication surrounding their reviews.


Thanks for reading and see you next week!

[1] 
https://review.openstack.org/#/q/status:open+AND+(message:ApiImpact+OR+message:APIImpact),n,z

[2] https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-02 Thread John McDowall
Juno,

Sure make sense. I will have ovs/ovn in rough shape by end of week (hopefully) 
that will allow you to call the interfaces from networking-ovn. Ryan has asked 
that we submit WIP patches etc so hopefully that will kickstart the review 
process.
Also, hopefully some of the networking-sfc team will also be able to help – I 
will let them speak for themselves.

Regards

John

From: Na Zhu >
Date: Wednesday, June 1, 2016 at 7:02 PM
To: John McDowall 
>
Cc: "disc...@openvswitch.org" 
>, OpenStack 
Development Mailing List 
>, 
Ryan Moats >, Srilatha Tangirala 
>
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

Hi John,

Thanks your reply.

Seems you have covered everything :)
The development work can be broken down in 3 parts:
1, add ovn driver to networking-sfc
2, provide APIs in networking-ovn for networking-sfc
3, implement the sfc in ovn

So what about we take part 1 and part 2, and you take part 3? because we are 
familiar with networking-sfc and networking-ovn and we can do it faster:)





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:John McDowall 
>
To:Na Zhu/China/IBM@IBMCN
Cc:Ryan Moats >, OpenStack 
Development Mailing List 
>, 
"disc...@openvswitch.org" 
>, Srilatha Tangirala 
>
Date:2016/06/01 23:26
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN




Na/Srilatha,

Great, I am working from three repos:

https://github.com/doonhammer/networking-sfc
https://github.com/doonhammer/networking-ovn
https://github.com/doonhammer/ovs

I had an original prototype working that used an API I created. Since then, 
based on feedback from everyone I have been moving the API to the 
networking-sfc model and then supporting that API in networking-ovn and 
ovs/ovn. I have created a new driver in networking-sfc for ovn.

I am in the process of moving networking-ovn and ovs to support the sfc model. 
Basically I am intending to pass a deep copy of the port-chain (sample 
attached, sfc_dict.py) from the ovn driver in networking-sfc to networking-ovn. 
 This , as Ryan pointed out will minimize the dependancies between 
networking-sfc and networking-ovn. I have created additional schema for ovs/ovn 
(attached) that will provide the linkage between networking-ovn and ovs/ovn. I 
have the schema in ovs/ovn and I am in the process of  updating my code to 
support it.

Not sure where you guys want to jump in – but I can help in any way you need.

Regards

John

From: Na Zhu >
Date: Tuesday, May 31, 2016 at 9:02 PM
To: John McDowall 
>
Cc: Ryan Moats >, OpenStack 
Development Mailing List 
>, 
"disc...@openvswitch.org" 
>, Srilatha Tangirala 
>
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

+ Add Srilatha.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 

Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-02 Thread John McDowall
Ryan,

Sure - may need some help and it will probably be next week before I get to it.

Regards

John

From: Ryan Moats >
Date: Wednesday, June 1, 2016 at 1:25 PM
To: John McDowall 
>
Cc: Ben Pfaff >, 
"disc...@openvswitch.org" 
>, Justin Pettit 
>, OpenStack Development Mailing List 
>, 
Russell Bryant >
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall 
> wrote 
on 05/31/2016 07:57:02 PM:

> From: John McDowall 
> >
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: Ben Pfaff >, 
> "disc...@openvswitch.org"
> >, Justin Pettit 
> >,
> "OpenStack Development Mailing List"  d...@lists.openstack.org>, Russell Bryant 
> >
> Date: 05/31/2016 07:57 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> More help is always great :-). As far as who to collaborate, what
> ever Is easiest for everyone - I am pretty flexible.
>
> Regards
>
> John

Ok, then I'll ask that we go the submit WIP patches to each of
networking-sfc and networking-ovn and an RFC patch to 
d...@openvswitch.org
and iterate through review.openstack.org and patchworks.

Could you submit the initial patches today or tomorrow? I'd rather
go that route since you have the lion's share of the work so far
and credit where credit is due...

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-02 Thread Adam Young

On 06/02/2016 11:36 AM, Shawn McKinney wrote:

On Jun 2, 2016, at 10:03 AM, Adam Young  wrote:

To do all of this right, however, requires a degree of introspection that we do not have 
in OpenStack.  Trove needs to ask Nova "I want to do X, what role do I need?"  
and there is no where in the system today that this information lives.

So, while we could make something that works for service users as the problem 
is defined by Nova today, that would be, in a word, bad.  We need something 
that works for the larger OpenStack ecosystem, to include less trusted third 
party services, and still deal with the long running tasks.

Hello,

If openstack supported RBAC (ANSI INCITS 359) you would be able to call 
(something like) this API:

List permissionRoles(Permission  perm) throws SecurityException

Return a list of type String of all roles that have granted a particular 
permission.

RBAC Review APIs:
http://directory.apache.org/fortress/gen-docs/latest/apidocs/org/apache/directory/fortress/core/ReviewMgr.html

One of the advantages of pursuing published standards, you enjoy support for 
requirements across a broad spectrum of requirements, and perhaps for things 
you didn’t know was needed (at design time).


Any senseible RBAC setup would support this, but we are not using a 
sensible one, we are using a hand rolled one.  Replacing everything with 
Fortress implies a complete rewrite of what we do now.  Nuke it from 
orbit type stuff.


What I would rather focus on is the splitting of the current policy into 
two parts:


1. Scope check done in code
2. Role check done in middleware

Role check should be donebased on URL, not on the policy key like 
identity:create_user



Then, yes, a Fortress style query could be done, or it could be done by 
asking the service itself.






Hope this helps,

Shawn
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca][Gnocchi] influxDB clustering and HA will be "commercial option".

2016-06-02 Thread Fox, Kevin M
Has anyone talked with the gnocchi folks? It seems like a good time to. :)

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, June 02, 2016 4:55 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Monasca] influxDB clustering and HA will be 
"commercial option".

On 06/02/2016 04:02 AM, Monty Taylor wrote:
> On 06/02/2016 10:06 AM, Hochmuth, Roland M wrote:
>> Hi Jaesuk, The change in InfluxDB licensing was announced in the blog at, 
>> https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/.
>>  Up until that announcement, InfluxDB was planning on supporting all their 
>> clustering and HA capabilities in the open-source version, which is one of 
>> the reasons we had added it to Monasca.
>>
>> There has been some discussion on supporting other databases in Monasca. Due 
>> to performance and reliability concerns with InfluxDB, we had started 
>> looking at Cassandra as an alternative. There are several reviews to look at 
>> if you are interested at, 
>> https://review.openstack.org/#/q/monasca+cassandra. Shinya Kawabata has been 
>> looking into Cassandra most recently.
>
> I'm sad that InfluxDB has decided to turn Open Core - but I'm glad that
> work was already underway to look at Cassandra. Well done.

Seems to me that a database that doesn't support aggregate/grouping
operations isn't particularly appropriate for time-series metric
structured data. Am I missing something basic here?

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer] Addition to the core team

2016-06-02 Thread Ramirez Garcia, Guillermo
+1

-Original Message-
From: Mathieu, Pierre-Arthur 
Sent: Thursday 2 June 2016 16:29
To: openstack-dev@lists.openstack.org
Cc: freezer-eskimos 
Subject: [openstack-dev][freezer] Addition to the core team

Hello,

I would like to propose that we make Deklan Dieterly (ddieterly) core on 
freezer.
He has been a highly valuable developper for the past few month, mainly working 
on integration testing for Freezer components.
He has also been helping a lot with features and Ux testing.


His work can be found here: [1]
And his stackalitics profile here: [2]

Unless there is a disagreement I plan to make Saad core by the end of the week.


Thanks
- Pierre, Freezer PTL

[1] https://review.openstack.org/#/q/owner:%22Deklan+Dieterly%22
[2] http://stackalytics.com/?user_id=deklan=all=loc

















__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread John Dickinson
open swift/swiftclient patches to stable/kilo have been abandoned

--John



On 2 Jun 2016, at 4:45, Jesse Pretorius wrote:

> Hi Tony,
>
> OpenStack-Ansible is just waiting for the requirements repository and the
> swift repository kilo-eol tags. Once they're done we'd like to bump the
> SHA's for our 'kilo' to the EOL tags of those two repositories, tag a
> release, then do our own kilo-eol tag.
>
> Thanks,
>
> Jesse
> IRC: odyssey4me
>
> On 2 June 2016 at 11:31, Tony Breeds  wrote:
>
>> Hi all,
>> In early May we tagged/EOL'd several (13) projects.  We'd like to do a
>> final round for a more complete set.  We looked for projects meet one or
>> more
>> of the following criteria:
>> - The project is openstack-dev/devstack, openstack-dev/grenade or
>>   openstack/requirements
>> - The project has the 'check-requirements' job listed as a template in
>>   project-config:zuul/layout.yaml
>> - The project is listed in governance:reference/projects.yaml and is tagged
>>   with 'release:managed' or 'stable:follows-policy' (or both).
>>
>> The list of 171 projects that match above is at [1].  There are another 68
>> projects at [2] that have kilo branches but do NOT match the criteria
>> above.
>>
>> Please look over both lists by 2016-06-09 00:00 UTC and let me know if:
>> - A project is in list 1 and *really* *really* wants to opt *OUT* of
>> EOLing and
>>   why.
>> - A project is in list 2 that would like to opt *IN* to tagging/EOLing
>>
>> Any projects that will be EOL'd will need all open reviews abandoned
>> before it
>> can be processed.  I'm very happy to do this.
>>
>> I'd like to hand over the list of ready to EOL repos to the infra team on
>> 2016-09-10 (UTC)
>>
>> Yours Tony.
>> [1] http://paste.openstack.org/show/507233/
>> [2] http://paste.openstack.org/show/507232/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> -- 
> Jesse Pretorius
> mobile: +44 7586 906045
> email: jesse.pretor...@gmail.com
> skype: jesse.pretorius
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-sfc] how to install networking-sfc on compute node

2016-06-02 Thread Na Zhu
Hi,

>From this link 
https://github.com/openstack/networking-sfc/tree/master/devstack, it is 
about installing networking-sfc together with neutron-server,
I want to install networking-sfc on compute node, can anyone tell me how 
to set the local.conf? 



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Collecting our wiki use cases

2016-06-02 Thread Anita Kuno
On 06/02/2016 10:59 AM, Thierry Carrez wrote:
> Thanks to everyone who helped collecting wiki use cases on that etherpad.
> 
> I tried to categorize the various use cases and I think they fit in 4
> categories:
> 
> 1/ Things that are already in the process of being moved to reference
> websites or documentation
> 
> That would be the main "portal" page with its links to all the other
> websites, the 'How To Contribute' stuff, the information about
> elections, release naming, User committee governance...
> 
> 2/ Things that should probably be published elsewhere
> 
> Sprints, IRC channels, Mailing lists, Board meeting information,
> Successbot & Statusbot logging pages...
> 
> 3/ "Cheap websites" for teams, working groups and some events
> 
> That is the bulk of the remaining use cases. The wiki makes for an easy
> and immediate way to publish information about a specific team or
> working group, including specific processes, self-service team signup,
> additional meeting information... They also work well as quick-and-basic
> websites for community-led events like the Design Summit or Ops Meetups.
> 
> 4/ "Etherpad on steroids" - ephemeral slightly richer documents
> 
> ...where the wiki is used for building very ephemeral documents like
> meeting agendas, newsletter drafts, sharing pictures
> 
> 
> While I think we should continue the effort on (1) and (2), we need a
> long-term wiki-like solution for (3).
> 
> One interesting aspect of (3) is that all of the content there is
> clearly linked to a team of people. So it could easily be that team's
> duty to keep those pages limited in number and updated, reducing the
> nasty side-effects of stale pages. If the tool supports it, teams could
> use ACLs and/or have to vet the creation of new pages under their
> ownership, reducing the spam aspect. One issue with MediaWiki (compared
> to some other wikis or light publication platforms) is that it's
> essentially flat, so this "ownership" concept (which helps with keeping
> spam and staleness under control) is not really baked in.
> 
> That leaves (4), where using the wiki leads to stale pages with no real
> author or maintainer being returned in Google searches. I'd argue that
> unless the document is clearly owned by a team, permanent and maintained
> up to date, the wiki should not be used. We have etherpads, we have
> pastebins, we could add others similar tools if those are not sufficient
> as ephemeral collaborative scratchpads. If we keep that category under a
> wiki-like platform, that should at least be under some /tmp special
> category that we would clean up aggressively.

I'm interpreting "clean up aggressively" as bulk delete via a cron job,
timing to be determined.

> Another learning of this exercise is that while some teams definitely
> rely on the wiki, we have a finite number of cases to handle. So a rip
> and replace approach is not completely out of question, if we find a
> better tool and decide that selective content-copy is cleaner and faster
> than general cleanup + bulk migration.
> 
> For the immediate future (Newton) we'll likely focus on completing (1),
> find solutions for (2), and research potential tools for (3) and (4).
> 
> Thanks again for the feedback!
>

Thanks for collecting and analyzing Thierry,
Anita.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer] Addition to the core team

2016-06-02 Thread Mathieu, Pierre-Arthur
Small correction for the final line of the last email.
I am proposing Deklan and not Saad as core.

- Pierre


From: Mathieu, Pierre-Arthur
Sent: Thursday, June 2, 2016 4:29:29 PM
To: openstack-dev@lists.openstack.org
Cc: freezer-eskimos
Subject: [openstack-dev][freezer] Addition to the core team

Hello,

I would like to propose that we make Deklan Dieterly (ddieterly) core on
freezer.
He has been a highly valuable developper for the past few month, mainly working
on integration testing for Freezer components.
He has also been helping a lot with features and Ux testing.


His work can be found here: [1]
And his stackalitics profile here: [2]

Unless there is a disagreement I plan to make Saad core by the end of the week.


Thanks
- Pierre, Freezer PTL

[1] https://review.openstack.org/#/q/owner:%22Deklan+Dieterly%22
[2] http://stackalytics.com/?user_id=deklan=all=loc

















__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-02 Thread Shawn McKinney

> On Jun 2, 2016, at 10:03 AM, Adam Young  wrote:
> 
> To do all of this right, however, requires a degree of introspection that we 
> do not have in OpenStack.  Trove needs to ask Nova "I want to do X, what role 
> do I need?"  and there is no where in the system today that this information 
> lives.
> 
> So, while we could make something that works for service users as the problem 
> is defined by Nova today, that would be, in a word, bad.  We need something 
> that works for the larger OpenStack ecosystem, to include less trusted third 
> party services, and still deal with the long running tasks.

Hello,

If openstack supported RBAC (ANSI INCITS 359) you would be able to call 
(something like) this API:

List permissionRoles(Permission  perm) throws SecurityException

Return a list of type String of all roles that have granted a particular 
permission.

RBAC Review APIs:
http://directory.apache.org/fortress/gen-docs/latest/apidocs/org/apache/directory/fortress/core/ReviewMgr.html

One of the advantages of pursuing published standards, you enjoy support for 
requirements across a broad spectrum of requirements, and perhaps for things 
you didn’t know was needed (at design time).

Hope this helps,

Shawn
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] State machines in Nova

2016-06-02 Thread Miles Gould

On 01/06/16 13:50, Andrew Laski wrote:

This is a great point. I think most people have an implicit assumption
that the state machine will be exposed to end users via the API. I would
like to avoid that for exactly the reason you've mentioned. Of course
we'll want to expose something to users but whatever that is should be
loosely coupled with the internal states that actually drive the system.


That would probably help, but think about how you'll handle things when 
you have to make changes to the client-visible representation :-)


Miles

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] Addition to the core team

2016-06-02 Thread Mathieu, Pierre-Arthur
Hello, 

I would like to propose that we make Deklan Dieterly (ddieterly) core on
freezer.
He has been a highly valuable developper for the past few month, mainly working
on integration testing for Freezer components.
He has also been helping a lot with features and Ux testing.


His work can be found here: [1]
And his stackalitics profile here: [2]

Unless there is a disagreement I plan to make Saad core by the end of the week.


Thanks
- Pierre, Freezer PTL

[1] https://review.openstack.org/#/q/owner:%22Deklan+Dieterly%22
[2] http://stackalytics.com/?user_id=deklan=all=loc

















__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr] Wasting so many external network IPs in DVR mode?

2016-06-02 Thread Carl Baldwin
On Thu, Jun 2, 2016 at 12:04 AM, zhi  wrote:
> The reason putting the routers namespaces behind the fip namespace is
> saving mac address tables in switches. In Centralized Virtual Router,  there
> are many "qg" interfaces in the external bridge. Every "qg" interface may
> contains one or more floating ips. I think this is a problem. The mac
> address tables in switches will learn many mac items from different "qg"
> interfaces, like this:
>
> |MAC address | Port|
> |mac of qg1 | 2 |
> |mac of qg2 | 2 |
> |mac of qg3 | 2 |
> |mac of qg4 | 2 |
> |mac of qgN | 2 |
>
>  In DVR, I think there is no problems about that I mentioned above.
> Because physical switches can learn all the fips's mac address from the same
> port —— "fg" interface. I think the mac address tables in physical switches
> like this:
>
> |MAC address | Port|
> |mac of fg| 2 |
>
> In this situation,  just one relationship between Port and MAC address
> can be learned by the physical switches.

This is mostly correct, you've got the right idea.  If it is a compute
host on port 2 then you're correct.  But, remember that a DVR also has
a central component just like CVR.  So, the network nodes still look
like they do with CVR.

The big problem with DVR is that the qg port is distributed in many
different places, one time for each compute host with a DVR
serviceable port behind the router, and once for the central part of
the router.  If these were all directly connected to the external
network, each would need its own mac address.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread 乔立勇
Hongbin,

for the implementation of heterogeneous, I think we should avoid to talking
with nova or other service directly, which will bring lots of coding.
maybe the best way is to refactor our heat template, and let a bay support
several heat template when we scale-out new node or delete additional node.

Eli.

2016-06-02 22:42 GMT+08:00 Hongbin Lu :

> Madhuri,
>
> It looks both of us agree the idea of having heterogeneous set of nodes.
> For the implementation, I am open to alternative (I supported the
> work-around idea because I cannot think of a feasible implementation by
> purely using Heat, unless Heat support "for" logic which is very unlikely
> to happen. However, if anyone can think of a pure Heat implementation, I am
> totally fine with that).
>
> Best regards,
> Hongbin
>
> > -Original Message-
> > From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> > Sent: June-02-16 12:24 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi Hongbin,
> >
> > I also liked the idea of having heterogeneous set of nodes but IMO such
> > features should not be implemented in Magnum, thus deviating Magnum
> > again from its roadmap. Whereas we should leverage Heat(or may be
> > Senlin) APIs for the same.
> >
> > I vote +1 for this feature.
> >
> > Regards,
> > Madhuri
> >
> > -Original Message-
> > From: Hongbin Lu [mailto:hongbin...@huawei.com]
> > Sent: Thursday, June 2, 2016 3:33 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Personally, I think this is a good idea, since it can address a set of
> > similar use cases like below:
> > * I want to deploy a k8s cluster to 2 availability zone (in future 2
> > regions/clouds).
> > * I want to spin up N nodes in AZ1, M nodes in AZ2.
> > * I want to scale the number of nodes in specific AZ/region/cloud. For
> > example, add/remove K nodes from AZ1 (with AZ2 untouched).
> >
> > The use case above should be very common and universal everywhere. To
> > address the use case, Magnum needs to support provisioning
> > heterogeneous set of nodes at deploy time and managing them at runtime.
> > It looks the proposed idea (manually managing individual nodes or
> > individual group of nodes) can address this requirement very well.
> > Besides the proposed idea, I cannot think of an alternative solution.
> >
> > Therefore, I vote to support the proposed idea.
> >
> > Best regards,
> > Hongbin
> >
> > > -Original Message-
> > > From: Hongbin Lu
> > > Sent: June-01-16 11:44 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > Hi team,
> > >
> > > A blueprint was created for tracking this idea:
> > > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> > > nodes . I won't approve the BP until there is a team decision on
> > > accepting/rejecting the idea.
> > >
> > > From the discussion in design summit, it looks everyone is OK with
> > the
> > > idea in general (with some disagreements in the API style). However,
> > > from the last team meeting, it looks some people disagree with the
> > > idea fundamentally. so I re-raised this ML to re-discuss.
> > >
> > > If you agree or disagree with the idea of manually managing the Heat
> > > stacks (that contains individual bay nodes), please write down your
> > > arguments here. Then, we can start debating on that.
> > >
> > > Best regards,
> > > Hongbin
> > >
> > > > -Original Message-
> > > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > > > Sent: May-16-16 5:28 AM
> > > > To: OpenStack Development Mailing List (not for usage questions)
> > > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > > managing the bay nodes
> > > >
> > > > The discussion at the summit was very positive around this
> > > requirement
> > > > but as this change will make a large impact to Magnum it will need
> > a
> > > > spec.
> > > >
> > > > On the API of things, I was thinking a slightly more generic
> > > > approach to incorporate other lifecycle operations into the same
> > API.
> > > > Eg:
> > > > magnum bay-manage  
> > > >
> > > > magnum bay-manage  reset –hard
> > > > magnum bay-manage  rebuild
> > > > magnum bay-manage  node-delete  magnum bay-manage
> > > >  node-add –flavor  magnum bay-manage  node-reset
> > > >  magnum bay-manage  node-list
> > > >
> > > > Tom
> > > >
> > > > From: Yuanying OTSUKA 
> > > > Reply-To: "OpenStack Development Mailing List (not for usage
> > > > questions)" 
> > > > Date: Monday, 16 May 2016 at 01:07
> > > > To: "OpenStack Development 

Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-02 Thread Adam Young

On 06/02/2016 01:23 AM, Jamie Lennox wrote:

Hi All,

I'd like to bring to the attention of the wider security groups and 
OpenStack users the Service Users Permissions [1] spec currently 
proposed against keystonemiddleware.


To summarize quickly OpenStack has long had the problem of token 
expiry happening in the middle of a long running operation and failing 
service to service requests and there have been a number of ways 
proposed around this including trusts and using the service users to 
perform operations.


Ideally in a big system like this we only want to validate a token and 
policy once on a user's first entry to the system, however all 
services only communicate via the public interfaces so we cannot tell 
at validation time whether this is the first, second, or twentieth 
time we are validating a token. (If we ever do OpenStack 2.0 we should 
change this)


Validating the "token" happens only once.

Validating the "user" permissions can happen multiple times, assuming 
that nothing changes, the operation goes through.



The part I have trouble with is not Validating the delegation from the 
end user to the service user.  This is a CVE waiting to happen.


A users token should be short (5 minutes) and just kick off the 
workflow.  But that should then be used to create a delegation for the 
remote service.  THat delegation can last longer than the duration of 
the token, to cover the long running tasks, but should not last forever.


While the usual discussion centers aroun Nova based tasks, think about 
all the *aaService endpoint that are going to have this same need.  If I 
kick off a workflow via Trove or Sahara, that endpoint should only be 
able to do what I ask it to do:  spin up the appropriate number of  vms 
in the corresponding projects, and so on.


The delegation mechanism needs to be lighter weight than trusts, but 
should have the sma constraints (redelegation and so on).



To do all of this right, however, requires a degree of introspection 
that we do not have in OpenStack.  Trove needs to ask Nova "I want to do 
X, what role do I need?"  and there is no where in the system today that 
this information lives.


So, while we could make something that works for service users as the 
problem is defined by Nova today, that would be, in a word, bad.  We 
need something that works for the larger OpenStack ecosystem, to include 
less trusted third party services, and still deal with the long running 
tasks.


S4U2Proxy from the Kerberos world is a decent approximation of what we 
need.  A user with a service ticket goes to a remote service and askes 
for an operation.  That service then gets its own proxy service ticket, 
based on its own identity and the service ticket of the requesting 
user.  This Proxy  service ticket is then used for operations on behalf 
of the real user.  The proxy ticket can have a reduced degree of 
authorization, but does not require a deliberate delegation agreement 
between each user and the service.






The proposed spec provides a way to simulate the at-edge validation 
for service to service communication. If a request has an 
X-Service-Token header (an existing concept) then instead of 
validating the user's token we should trust all the headers sent with 
that request (X_USER_ID, X_PROJECT_ID etc). We would still validate 
the X-Service-Token header. This has the effect that one service 
asserts to another that it has already validated this token and the 
receiving service shouldn't validate it again and bypass the expiry 
problem.


The glaring security issue here is that a user with the service role 
can now emulate any request on behalf of any user by sending the 
expected authenticated headers. This will place an extreme level of 
trust on accounts that up to now have generally only been able to 
validate a token. There is both the concern here that a malicious 
service could craft new requests with bogus credentials as well as 
services deciding that this provides them the ability to do 
non-expiring trusts from a user where it can simply replay the headers 
it received on previous requests to perform future operations on 
behalf of a user. This is _absolutely not_ the intended use case but 
something I expect to come up.


There is a variation of this mentioned in the spec where we pass only 
the user-id, project-id and audit information from service to service 
and then middleware can recreate the token from this information 
similar to how fernet tokens work today. There is additional 
processing here which in the standard case will simply reproduce the 
same headers that the last service already knew and it still allows a 
large amount of emulation from the service.


There are possibly ways we can secure this header bundle via signing 
however the practical result is essentially a secondary expiry time 
and an operational complexity that will make PKI tokens and rotating 
fernet keys appear trivial for the benefit of securing a 

Re: [openstack-dev] Collecting our wiki use cases

2016-06-02 Thread Thierry Carrez

Thanks to everyone who helped collecting wiki use cases on that etherpad.

I tried to categorize the various use cases and I think they fit in 4 
categories:


1/ Things that are already in the process of being moved to reference 
websites or documentation


That would be the main "portal" page with its links to all the other 
websites, the 'How To Contribute' stuff, the information about 
elections, release naming, User committee governance...


2/ Things that should probably be published elsewhere

Sprints, IRC channels, Mailing lists, Board meeting information, 
Successbot & Statusbot logging pages...


3/ "Cheap websites" for teams, working groups and some events

That is the bulk of the remaining use cases. The wiki makes for an easy 
and immediate way to publish information about a specific team or 
working group, including specific processes, self-service team signup, 
additional meeting information... They also work well as quick-and-basic 
websites for community-led events like the Design Summit or Ops Meetups.


4/ "Etherpad on steroids" - ephemeral slightly richer documents

...where the wiki is used for building very ephemeral documents like 
meeting agendas, newsletter drafts, sharing pictures



While I think we should continue the effort on (1) and (2), we need a 
long-term wiki-like solution for (3).


One interesting aspect of (3) is that all of the content there is 
clearly linked to a team of people. So it could easily be that team's 
duty to keep those pages limited in number and updated, reducing the 
nasty side-effects of stale pages. If the tool supports it, teams could 
use ACLs and/or have to vet the creation of new pages under their 
ownership, reducing the spam aspect. One issue with MediaWiki (compared 
to some other wikis or light publication platforms) is that it's 
essentially flat, so this "ownership" concept (which helps with keeping 
spam and staleness under control) is not really baked in.


That leaves (4), where using the wiki leads to stale pages with no real 
author or maintainer being returned in Google searches. I'd argue that 
unless the document is clearly owned by a team, permanent and maintained 
up to date, the wiki should not be used. We have etherpads, we have 
pastebins, we could add others similar tools if those are not sufficient 
as ephemeral collaborative scratchpads. If we keep that category under a 
wiki-like platform, that should at least be under some /tmp special 
category that we would clean up aggressively.



Another learning of this exercise is that while some teams definitely 
rely on the wiki, we have a finite number of cases to handle. So a rip 
and replace approach is not completely out of question, if we find a 
better tool and decide that selective content-copy is cleaner and faster 
than general cleanup + bulk migration.


For the immediate future (Newton) we'll likely focus on completing (1), 
find solutions for (2), and research potential tools for (3) and (4).


Thanks again for the feedback!

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] [Fuel-Astute] Core reviewers changes

2016-06-02 Thread Vladimir Sharshov
Hi all,
Evgeny Li, one of two core reviewers of fuel-astute project [1], could not
participate on this project anymore because of his high load on another
project.
I want to express my sincere gratitude for his help and great advices and
wish
good luck in his new project.

For safe and better availability i added our PTL Vladimir Kozhukalov as
core-reviewer.
He has experience in Fuel Astute [2]. This is technical decision which give
us ability to review and apply patches to one of the core fuel project if i
will not
available for some reason.

New list of core reviewers you can check here [3].

[1] https://github.com/openstack/fuel-astute
[2] https://github.com/openstack/fuel-astute/graphs/contributors
[3] https://review.openstack.org/#/admin/groups/655,members
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][openstack] os-client-config 1.18.0 release (newton)

2016-06-02 Thread no-reply
We are psyched to announce the release of:

os-client-config 1.18.0: OpenStack Client Configuation Library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-client-config

With package available at:

https://pypi.python.org/pypi/os-client-config

Please report issues through launchpad:

http://bugs.launchpad.net/os-client-config

For more details, please see below.

1.18.0
^^


New Features


* Added helper method for constructing OpenStack SDK Connection
  objects.

* Added helper method for constructing shade OpenStackCloud objects.


Deprecation Notes
*

* Renamed session_client to make_rest_client. session_client will
  continue to be supported for backwards compatability.

Changes in os-client-config 1.17.0..1.18.0
--

4f36eca Reword the entries in the README a bit
7d63f12 Add shade constructor helper method
6a83406 Rename session_client to make_rest_client
41ac156 Add helper method for OpenStack SDK constructor
fbe1b38 Add missing "cloud" argument to _validate_auth_ksc
090a265 Workaround bad required params in troveclient
44efe9c Trivial: Remove 'MANIFEST.in'
189a604 Trivial: remove openstack/common from flake8 exclude list
b0fa438 drop python3.3 support in classifier
700ab6f Fix formatting in readme file
1028f5a Remove discover from test-requirements.txt
d9f9c05 Add version string

Diffstat (except docs and test files)
-

MANIFEST.in|   6 --
README.rst | 107 +
os_client_config/__init__.py   |  36 ++-
os_client_config/cloud_config.py   |   7 ++
os_client_config/config.py |   4 +-
.../notes/make-rest-client-dd3d365632a26fa0.yaml   |   4 +
.../notes/sdk-helper-41f8d815cfbcfb00.yaml |   4 +
.../notes/shade-helper-568f8cb372eef6d9.yaml   |   4 +
setup.cfg  |   1 -
test-requirements.txt  |   1 -
tox.ini|   2 +-
11 files changed, 146 insertions(+), 30 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 5e4c304..0138f13 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -10 +9,0 @@ fixtures>=0.3.14
-discover



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes

2016-06-02 Thread Hongbin Lu
Madhuri,

It looks both of us agree the idea of having heterogeneous set of nodes. For 
the implementation, I am open to alternative (I supported the work-around idea 
because I cannot think of a feasible implementation by purely using Heat, 
unless Heat support "for" logic which is very unlikely to happen. However, if 
anyone can think of a pure Heat implementation, I am totally fine with that).

Best regards,
Hongbin

> -Original Message-
> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> Sent: June-02-16 12:24 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Hi Hongbin,
> 
> I also liked the idea of having heterogeneous set of nodes but IMO such
> features should not be implemented in Magnum, thus deviating Magnum
> again from its roadmap. Whereas we should leverage Heat(or may be
> Senlin) APIs for the same.
> 
> I vote +1 for this feature.
> 
> Regards,
> Madhuri
> 
> -Original Message-
> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> Sent: Thursday, June 2, 2016 3:33 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> managing the bay nodes
> 
> Personally, I think this is a good idea, since it can address a set of
> similar use cases like below:
> * I want to deploy a k8s cluster to 2 availability zone (in future 2
> regions/clouds).
> * I want to spin up N nodes in AZ1, M nodes in AZ2.
> * I want to scale the number of nodes in specific AZ/region/cloud. For
> example, add/remove K nodes from AZ1 (with AZ2 untouched).
> 
> The use case above should be very common and universal everywhere. To
> address the use case, Magnum needs to support provisioning
> heterogeneous set of nodes at deploy time and managing them at runtime.
> It looks the proposed idea (manually managing individual nodes or
> individual group of nodes) can address this requirement very well.
> Besides the proposed idea, I cannot think of an alternative solution.
> 
> Therefore, I vote to support the proposed idea.
> 
> Best regards,
> Hongbin
> 
> > -Original Message-
> > From: Hongbin Lu
> > Sent: June-01-16 11:44 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually
> > managing the bay nodes
> >
> > Hi team,
> >
> > A blueprint was created for tracking this idea:
> > https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-
> > nodes . I won't approve the BP until there is a team decision on
> > accepting/rejecting the idea.
> >
> > From the discussion in design summit, it looks everyone is OK with
> the
> > idea in general (with some disagreements in the API style). However,
> > from the last team meeting, it looks some people disagree with the
> > idea fundamentally. so I re-raised this ML to re-discuss.
> >
> > If you agree or disagree with the idea of manually managing the Heat
> > stacks (that contains individual bay nodes), please write down your
> > arguments here. Then, we can start debating on that.
> >
> > Best regards,
> > Hongbin
> >
> > > -Original Message-
> > > From: Cammann, Tom [mailto:tom.camm...@hpe.com]
> > > Sent: May-16-16 5:28 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > The discussion at the summit was very positive around this
> > requirement
> > > but as this change will make a large impact to Magnum it will need
> a
> > > spec.
> > >
> > > On the API of things, I was thinking a slightly more generic
> > > approach to incorporate other lifecycle operations into the same
> API.
> > > Eg:
> > > magnum bay-manage  
> > >
> > > magnum bay-manage  reset –hard
> > > magnum bay-manage  rebuild
> > > magnum bay-manage  node-delete  magnum bay-manage
> > >  node-add –flavor  magnum bay-manage  node-reset
> > >  magnum bay-manage  node-list
> > >
> > > Tom
> > >
> > > From: Yuanying OTSUKA 
> > > Reply-To: "OpenStack Development Mailing List (not for usage
> > > questions)" 
> > > Date: Monday, 16 May 2016 at 01:07
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > 
> > > Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually
> > > managing the bay nodes
> > >
> > > Hi,
> > >
> > > I think, user also want to specify the deleting node.
> > > So we should manage “node” individually.
> > >
> > > For example:
> > > $ magnum node-create —bay …
> > > $ magnum node-list —bay
> > > $ magnum node-delete $NODE_UUID
> > >
> > > Anyway, if magnum want to manage a lifecycle of container
> > > infrastructure.
> > > This feature is necessary.
> > >
> > > Thanks
> > > 

Re: [openstack-dev] [TripleO][diskimage-builder] Proposing Stephane Miller to dib-core

2016-06-02 Thread Clint Byrum
Excerpts from Gregory Haynes's message of 2016-06-01 12:50:19 -0500:
> Hello everyone,
> 
> I'd like to propose adding Stephane Miller (cinerama) to the
> diskimage-builder core team. She has been a huge help with our reviews
> for some time now and I think she would make a great addition to our
> core team. I know I have benefited a lot from her bash expertise in many
> of my reviews and I am sure others have as well :).
> 
> I've spoken with many of the active cores privately and only received
> positive feedback on this, so rather than use this as an all out vote
> (although feel free to add your ++'s) I'd like to use this as a final
> call out in case any objections are wanting to be made. If none have
> been made by next Wednesday (6/8) I'll go ahead and add her to dib-core.

Sorry but I won't deny Stephanie her parade of +1's. ;)

+1. Stephanie has been doing a great job, and getting more reviews done
than me lately. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][stable] stable/liberty in phase II

2016-06-02 Thread Ihar Hrachyshka
Hi neutron-stable-maint members,

yesterday we released neutron 7.1.0 tarballs. Those tarballs are the last ones 
that allowed patches that did not fulfil requirements of Phase II: "Phase II 
(6-12 months): Only critical bugfixes and security patches are acceptable” [1] 
For Neutron’s sake, at this point we may consider High+ bug fixes only.

Please make sure that no patches that don’t conform to this requirement land in 
the branch.

Thanks
Ihar

[1] 
http://docs.openstack.org/project-team-guide/stable-branches.html#support-phases


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] [infra] Conflict between publishing jobs

2016-06-02 Thread Julien Danjou
Hi,

While importing Panko¹ into OpenStack, Andreas informed me that the jobs
"openstack-server-release-jobs" and "publish-to-pypi" were incompatible
and that the release team would know that. We actually want to publish
Panko as an OpenStack server and also to PyPI.

We already have both these jobs for Gnocchi without any problem.

Could the infra team enlighten us about the possible issue here?

Thanks!

¹  https://review.openstack.org/#/c/318677/

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila]: questions on update-access() changes

2016-06-02 Thread Ramana Raja
Hi,

There are a few changes that seem to be lined up for Newton to make manila's
share access control, update_access(), workflow better [1] --
reduce races in DB updates, avoid non-atomic state transitions, and
possibly enable the workflow fit in a HA active-active manila
configuration (if not already possible).

The proposed changes ...

a) Switch back to per rule access state (from per share access state) to
   avoid non-atomic state transition.
   
   Understood problem, but no spec or BP yet.


b) Use Tooz [2] (with Zookeeper?) for distributed lock management [3]
   in the access control workflow. 

   Still under investigation and for now fits the share replication workflow 
[4]. 


c) Allow drivers to update DB models in a restricted manner (only certain
   fields can be updated by a driver API).

   This topic is being actively discussed in the community, and there should be
   a consensus soon on figuring out the right approach, following which there
   might be a BP/spec targeted for Newton. 


Besides these changes, there's a update_access() change that I'd like to revive
(started in Mitaka), storing access keys (auth secrets) generated by a storage
backend when providing share access, i.e.  during update_access(), in the
``share_access_map`` table [5]. This change as you might have figured is a
smaller and a simpler change than the rest, but seems to depend on the 
approaches
that might be adopted by a) and c).

For now, I'm thinking of allowing a driver's update access()  to return a
dictionary of {access_id: access_key, ...} to (ShareManager)access_helper's
update_access(), which would then update the DB iteratively with access_key
per access_id. Would this approach be valid with changes a) and c) in 
Newton? change a) would make the driver report access status per rule via
the access_helper, during which an 'access_key' can also be returned, 
change c) might allow the driver to directly update the `access_key` in the
DB.

For now, should I proceed with implementing the approach currently outlined
in my spec [5], have the driver's update_access() return a dictionary of 
{access_id: access_key, ...} or wait for approaches for changes a) and c)
to be outlined better?

Thanks,
Ramana

[1] https://etherpad.openstack.org/p/newton-manila-update-access

[2] 
https://blueprints.launchpad.net/openstack/?searchtext=distributed-locking-with-tooz

[3] https://review.openstack.org/#/c/209661/38/specs/chronicles-of-a-dlm.rst

[4] https://review.openstack.org/#/c/318336/

[5] https://review.openstack.org/#/c/322971/
http://lists.openstack.org/pipermail/openstack-dev/2015-October/077602.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] prototype of a DSL for generating Dockerfiles

2016-06-02 Thread Ryan Hallisey
Looking at some of the capabilities jinja2 has, it's hard to justify changing 
the method already in place.
I think jinja2 can provide a clear and operational way for operators to 
customize the dockerfiles as needed.
Kolla just hasn't applied them yet.

I'm extremely hesitant to agree on changing this because I think kolla can 
solve these issues without having
the overhead that will come with this change.

-Ryan

- Original Message -
From: "Michał Jastrzębski" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, June 1, 2016 4:55:50 PM
Subject: Re: [openstack-dev] [kolla] prototype of a DSL for generating  
Dockerfiles

Aaaand correct link: https://review.openstack.org/#/c/323589/ sorry
for pastefail.

On 1 June 2016 at 15:55, Michał Jastrzębski  wrote:
> So this is prototype of working template overrides:
> https://review.openstack.org/#/c/323612/
>
> Pass --template-overrides=path-to-file to build.py
> in file override you can add any custom code/logic/dockerfile stuff to
> any of hooks we provide in Dockerfiles, and we'll provide a lot of
> them as it's free and non breaking operation. With enough block you'll
> be able to do virtually anything with any of the containers.
>
> This one is already working. Only work needed is to provide more
> hooks/continue with refactoring of dockerfiles.
>
> Cheers,
> Michal
>
> On 31 May 2016 at 19:36, Steven Dake (stdake)  wrote:
>>
>>
>> On 5/31/16, 1:42 PM, "Michał Jastrzębski"  wrote:
>>
>>>I am opposed to this idea as I don't think we need this. We can solve
>>>many problems by using jinja2 to greater extend. I'll publish demo of
>>>few improvements soon, please bear with me before we make a
>>>arch-changing call.
>>
>> Can you make a specification please as you have asked me to do?
>>
>>>
>>>On 29 May 2016 at 14:41, Steven Dake (stdake)  wrote:

>On 5/27/16, 1:58 AM, "Steven Dake (stdake)"  wrote:
>
>>
>>
>>On 5/26/16, 8:45 PM, "Swapnil Kulkarni (coolsvap)" 
>>wrote:
>>
>>>On Fri, May 27, 2016 at 8:35 AM, Steven Dake (stdake)
>>>
>>>wrote:
 Hey folks,

 While Swapnil has been busy churning the dockerfile.j2 files to all
match
 the same style, and we also had summit where we declared we would
solve
the
 plugin problem, I have decided to begin work on a DSL prototype.

 Here are the problems I want to solve in order of importance by this
work:

 Build CentOS, Ubuntu, Oracle Linux, Debian, Fedora containers
 Provide a programmatic way to manage Dockerfile construction rather
then a
 manual (with vi or emacs or the like) mechanism
 Allow complete overrides of every facet of Dockerfile construction,
most
 especially repositories per container (rather than in the base
container) to
 permit the use case of dependencies from one version with
dependencies
in
 another version of a different service
 Get out of the business of maintaining 100+ dockerfiles but instead
maintain
 one master file which defines the data that needs to be used to
construct
 Dockerfiles
 Permit different types of optimizations or Dockerfile building by
changing
 around the parser implementation ­ to allow layering of each
operation,
or
 alternatively to merge layers as we do today

 I don't believe we can proceed with both binary and source plugins
given our
 current implementation of Dockerfiles in any sane way.

 I further don't believe it is possible to customize repositories &
installed
 files per container, which I receive increasing requests for
offline.

 To that end, I've created a very very rough prototype which builds
the
base
 container as well as a mariadb container.  The mariadb container
builds
and
 I suspect would work.

 An example of the DSL usage is here:
 https://review.openstack.org/#/c/321468/4/dockerdsl/dsl.yml

 A very poorly written parser is here:
 https://review.openstack.org/#/c/321468/4/dockerdsl/load.py

 I played around with INI as a format, to take advantage of
oslo.config
and
 kolla-build.conf, but that didn't work out.  YML is the way to go.

 I'd appreciate reviews on the YML implementation especially.

 How I see this work progressing is as follows:

 A yml file describing all docker containers for all distros is
placed
in
 

Re: [openstack-dev] [Nova] State machines in Nova

2016-06-02 Thread Murray, Paul (HP Cloud)


> -Original Message-
> From: Monty Taylor [mailto:mord...@inaugust.com]
> Sent: 01 June 2016 13:54
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] State machines in Nova
> 
> On 06/01/2016 03:50 PM, Andrew Laski wrote:
> >
> >
> > On Wed, Jun 1, 2016, at 05:51 AM, Miles Gould wrote:
> >> On 31/05/16 21:03, Timofei Durakov wrote:
> >>> there is blueprint[1] that was approved during Liberty and
> >>> resubmitted to Newton(with spec[2]).
> >>> The idea is to define state machines for operations as
> >>> live-migration, resize, etc. and to deal with them operation states.
> >>
> >> +1 to introducing an explicit state machine - IME they make complex
> >> logic much easier to reason about. However, think carefully about how
> >> you'll make changes to that state machine later. In Ironic, this is
> >> an ongoing problem: every time we change the state machine, we have
> >> to decide whether to lie to older clients (and if so, what lie to
> >> tell them), or whether to present them with the truth (and if so, how
> >> badly they'll break). AIUI this would be a much smaller problem if
> >> we'd considered this possibility carefully at the beginning.
> >
> > This is a great point. I think most people have an implicit assumption
> > that the state machine will be exposed to end users via the API. I
> > would like to avoid that for exactly the reason you've mentioned. Of
> > course we'll want to expose something to users but whatever that is
> > should be loosely coupled with the internal states that actually drive the
> system.
> 
> +1billion
> 

I think this raises an interesting point.

tl;dr: I am starting to think we should not do the migration state machine spec 
being proposed before the tasks. But we should at least make the states we 
assign something other than arbitrary strings (e.g. constants defined in a 
particular place) and we should use the state names consistently.

Transitions can come from two places: 1) the user invokes the API to change the 
state of an instance, this is a good place to check that the instance is in a 
state to do the externally visible transition, 2) the state of the instance 
changes due to an internal event (host crash, deliberate operation...) this 
implies a change in the externally visible state of the instance, but cannot be 
prevented just because the state machine says this shouldn't happen (usually 
this is captured by the error state, but we can do better sometimes).

I think the state machines that are being defined in these changes are actually 
high level phases of the migration process that are currently observed by the 
user. I'm not sure they are particularly useful for coordinating the migration 
process itself and so are maybe not the right place to enforce internal 
transitions.

Live migration is an oddity in nova. Usually an instance is a single entity 
running on a single host (ignoring clustered hypervisors for the moment). There 
is a host manager responsible for that host that has the best view of the 
actual state of the instance or operations being performed on it. Generally the 
host manager is the natural place to coordinate operations on the instance.

In the case of live migration there are actually two VMs running on different 
hosts at a same time. The migration process involves coordinating transitions 
of those two VMs (attaching disks, plugging networks, starting the target VM, 
starting the migration, rebinding ports, stopping the source VM.). The two 
VMs and their own individual states in this process are not represented 
explicitly. We only have an overall process coordinated by a distributed  
sequence of rpcs. There is a current spec moving that coordination to the 
conductor. When that sequence is interrupted or even completely lost (e.g. by a 
conductor failing or being restarted) we get into trouble. I think this is 
where our real problem lies.

We should sort out the internal process. The external view given to the user 
can be a true reflection the current state of the instance. The transitions of 
the instance should be internally coordinated.

Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [heat] [murano] [app-catalog] OpenStack Apps Community, several suggestions how to improve collaboration

2016-06-02 Thread Jeremy Stanley
On 2016-06-02 11:08:14 +0300 (+0300), Sergey Kraynev wrote:
[...]
> I am happy to hear, that you don't mind about it. We suppose, that
> the number of such repositories should not be more then 10.
> Probably the number will be about 5-10 repos. I suppose, that it's
> extremely big number.
[...]

Yes, 10 is not a lot. That won't even get you into the top-ten list
of (official, TC-recognized) teams with the most Git repositories:

141: Infrastructure
40: OpenStackAnsible
39: oslo
39: Puppet OpenStack
27: horizon
25: Packaging-deb
23: fuel
21: neutron
20: tripleo
19: Chef OpenStack

(parsed from the current state of reference/projects.yaml in the
openstack/governance repo)

> Jeremy, it's some kind of repetition previous words, like: "if we will
> decide to move some apps to separate repositories - we will ask infra team
> to confirm, that it's ok." So if you agree with this whole idea (sometimes
> create new repos for some applications with number mentioned earlier (no
> more then 10 per year)), current question may be ignored.

Okay, thanks--I couldn't tell whether there was something else you
were needing help with.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cue] Removing the Cue project team from OpenStack official projects

2016-06-02 Thread Thierry Carrez

Hi there,

Due to obvious inactivity I proposed the removal of the Cue project team 
from the "Big Tent" list of official OpenStack projects under Technical 
Committee governance:


https://review.openstack.org/324412

Please comment on that review if you think it's the wrong call.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Gerrit downtime on Friday 2016-06-03 at 20:00 UTC

2016-06-02 Thread Elizabeth K. Joseph
On Mon, May 23, 2016 at 10:58 AM, Elizabeth K. Joseph
 wrote:
> Hi everyone,
>
> On Friday, June 3, from approximately 20:00 through 24:00 UTC Gerrit
> will be unavailable while we rename some projects.
>
> Currently, we plan on renaming the following projects:
>
> openstack/openstack-ansible-ironic -> openstack/openstack-ansible-os_ironic
>
> openstack-infra/ansible-puppet -> openstack-infra/ansible-role-puppet
>
> This list is subject to change. If you need a rename, please be sure
> to get your change in soon so we can review it and add it to
> https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Upcoming_Project_Renames
>
> Existing reviews, project watches, etc, should all be carried over.
>
> If you have any questions about the maintenance, please reply here or
> contact us in #openstack-infra on freenode.

Just a quick reminder that this downtime window is coming up at the
end of Friday, June 3rd, UTC time.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-06-02 Thread Jay Pipes

On 06/02/2016 04:02 AM, Monty Taylor wrote:

On 06/02/2016 10:06 AM, Hochmuth, Roland M wrote:

Hi Jaesuk, The change in InfluxDB licensing was announced in the blog at, 
https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/.
 Up until that announcement, InfluxDB was planning on supporting all their 
clustering and HA capabilities in the open-source version, which is one of the 
reasons we had added it to Monasca.

There has been some discussion on supporting other databases in Monasca. Due to 
performance and reliability concerns with InfluxDB, we had started looking at 
Cassandra as an alternative. There are several reviews to look at if you are 
interested at, https://review.openstack.org/#/q/monasca+cassandra. Shinya 
Kawabata has been looking into Cassandra most recently.


I'm sad that InfluxDB has decided to turn Open Core - but I'm glad that
work was already underway to look at Cassandra. Well done.


Seems to me that a database that doesn't support aggregate/grouping 
operations isn't particularly appropriate for time-series metric 
structured data. Am I missing something basic here?


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-06-02 Thread Jay Pipes

On 06/02/2016 04:02 AM, Monty Taylor wrote:

On 06/02/2016 10:06 AM, Hochmuth, Roland M wrote:

Hi Jaesuk, The change in InfluxDB licensing was announced in the blog at, 
https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/.
 Up until that announcement, InfluxDB was planning on supporting all their 
clustering and HA capabilities in the open-source version, which is one of the 
reasons we had added it to Monasca.

There has been some discussion on supporting other databases in Monasca. Due to 
performance and reliability concerns with InfluxDB, we had started looking at 
Cassandra as an alternative. There are several reviews to look at if you are 
interested at, https://review.openstack.org/#/q/monasca+cassandra. Shinya 
Kawabata has been looking into Cassandra most recently.


I'm sad that InfluxDB has decided to turn Open Core - but I'm glad that
work was already underway to look at Cassandra. Well done.


I looked at OpenTSDB several years ago. There are several concerns with 
OpenTSDB, but the more significant one for us has been around deployment, as it 
requires HBase which is built on HDFS. If you already have Hadoop, HDFS and 
Hbase deployed then OpenTSDB is an incremental addition, but if you don't, it 
is a significant investment. At the time that I had evaluated OpenTSDB 
performance was not on-par with the other alternatives I considered.

Regards --Roland

From: Jaesuk Ahn >
Reply-To: OpenStack List 
>
Date: Monday, May 30, 2016 at 9:59 AM
To: OpenStack List 
>
Subject: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial 
option".

Hi, Monasca developers and users,

https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/
"For our current and future customers, we’ll be offering clustering and high 
availability through Influx Cloud, our managed hosting offering, and Influx 
Enterprise, our on-premise offering, in the coming months.”


It seems like “clustering” and “high availablity” of influxDB will be available 
only in commercial version.
Monasca is currently leveraging influxDB as a metrics and alarm database. 
Beside vertical, influxDB is currently only an open source option to use.

With this update stating “influxDB open source sw version will not have 
clustering / ha feature”,
I would like to know if there has been any discussion among monasca community 
to add more database backend rather than influxDB, especially OpenTSDB.


Thank you.





--
Jaesuk Ahn, Ph.D.
Software Defined Infra Tech. Lab.
SKT
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread Jesse Pretorius
Hi Tony,

OpenStack-Ansible is just waiting for the requirements repository and the
swift repository kilo-eol tags. Once they're done we'd like to bump the
SHA's for our 'kilo' to the EOL tags of those two repositories, tag a
release, then do our own kilo-eol tag.

Thanks,

Jesse
IRC: odyssey4me

On 2 June 2016 at 11:31, Tony Breeds  wrote:

> Hi all,
> In early May we tagged/EOL'd several (13) projects.  We'd like to do a
> final round for a more complete set.  We looked for projects meet one or
> more
> of the following criteria:
> - The project is openstack-dev/devstack, openstack-dev/grenade or
>   openstack/requirements
> - The project has the 'check-requirements' job listed as a template in
>   project-config:zuul/layout.yaml
> - The project is listed in governance:reference/projects.yaml and is tagged
>   with 'release:managed' or 'stable:follows-policy' (or both).
>
> The list of 171 projects that match above is at [1].  There are another 68
> projects at [2] that have kilo branches but do NOT match the criteria
> above.
>
> Please look over both lists by 2016-06-09 00:00 UTC and let me know if:
> - A project is in list 1 and *really* *really* wants to opt *OUT* of
> EOLing and
>   why.
> - A project is in list 2 that would like to opt *IN* to tagging/EOLing
>
> Any projects that will be EOL'd will need all open reviews abandoned
> before it
> can be processed.  I'm very happy to do this.
>
> I'd like to hand over the list of ready to EOL repos to the infra team on
> 2016-09-10 (UTC)
>
> Yours Tony.
> [1] http://paste.openstack.org/show/507233/
> [2] http://paste.openstack.org/show/507232/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Jesse Pretorius
mobile: +44 7586 906045
email: jesse.pretor...@gmail.com
skype: jesse.pretorius
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] State machines in Nova

2016-06-02 Thread Miles Gould

On 01/06/16 16:45, Joshua Harlow wrote:

Do u have any more details (perhaps an 'real-life' example that you can
walk us through) of this and how it played out. It'd be interesting to
hear (I believe it has happened a few times but I've never heard how it
was resolved or the details of it).


The most recent example was IIRC the proposed addition of an ADOPTING state:

https://review.openstack.org/#/c/275766/

Here's the log of the meeting where we argued about how to deal with it:

http://eavesdrop.openstack.org/meetings/ironic/2016/ironic.2016-05-16-17.00.log.html#l-66

Eventually we put it to a vote, and the "tell the client the truth, even 
if they're too old to handle it properly" side won.


Miles

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread Ihar Hrachyshka
I think all networking-* repos should EOL too, since they are plugins to 
neutron which is already EOL. I struggle to find a way that could maintain 
their gate without neutron.

> On 02 Jun 2016, at 12:31, Tony Breeds  wrote:
> 
> Hi all,
>In early May we tagged/EOL'd several (13) projects.  We'd like to do a
> final round for a more complete set.  We looked for projects meet one or more
> of the following criteria:
> - The project is openstack-dev/devstack, openstack-dev/grenade or
>  openstack/requirements
> - The project has the 'check-requirements' job listed as a template in
>  project-config:zuul/layout.yaml
> - The project is listed in governance:reference/projects.yaml and is tagged
>  with 'release:managed' or 'stable:follows-policy' (or both).
> 
> The list of 171 projects that match above is at [1].  There are another 68
> projects at [2] that have kilo branches but do NOT match the criteria above.
> 
> Please look over both lists by 2016-06-09 00:00 UTC and let me know if:
> - A project is in list 1 and *really* *really* wants to opt *OUT* of EOLing 
> and
>  why.
> - A project is in list 2 that would like to opt *IN* to tagging/EOLing
> 
> Any projects that will be EOL'd will need all open reviews abandoned before it
> can be processed.  I'm very happy to do this.
> 
> I'd like to hand over the list of ready to EOL repos to the infra team on
> 2016-09-10 (UTC)
> 
> Yours Tony.
> [1] http://paste.openstack.org/show/507233/
> [2] http://paste.openstack.org/show/507232/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread Tony Breeds
Hi all,
In early May we tagged/EOL'd several (13) projects.  We'd like to do a
final round for a more complete set.  We looked for projects meet one or more
of the following criteria:
- The project is openstack-dev/devstack, openstack-dev/grenade or
  openstack/requirements
- The project has the 'check-requirements' job listed as a template in
  project-config:zuul/layout.yaml
- The project is listed in governance:reference/projects.yaml and is tagged
  with 'release:managed' or 'stable:follows-policy' (or both).

The list of 171 projects that match above is at [1].  There are another 68
projects at [2] that have kilo branches but do NOT match the criteria above.

Please look over both lists by 2016-06-09 00:00 UTC and let me know if:
- A project is in list 1 and *really* *really* wants to opt *OUT* of EOLing and
  why.
- A project is in list 2 that would like to opt *IN* to tagging/EOLing

Any projects that will be EOL'd will need all open reviews abandoned before it
can be processed.  I'm very happy to do this.

I'd like to hand over the list of ready to EOL repos to the infra team on
2016-09-10 (UTC)

Yours Tony.
[1] http://paste.openstack.org/show/507233/
[2] http://paste.openstack.org/show/507232/


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-02 Thread David Chadwick
Hi Jamie

In my opinion no security token should have the potential to last
forever. This is a bad idea and can lead to all sorts of security
vulnerabilities, some of which you highlight below. I thus take issue
with your statement 'Ideally in a big system like this we only want to
validate a token and policy once on a user's first entry to the system'.
Whilst this is true for many situations, I think it should be qualified
with a statement about how long this validation should hold for, such as
'but only for a configurable amount of time, or a certain amount of
resource usage'.

If a security token does not have an expiry time or condition, then
there are two common ways of solving this: polling or notifications.
Various techniques have been deployed for these e.g. CRLs,
publish/subscribe etc. This will require a security layer in OpenStack
that will allow interprocess communications. Is there any ongoing work
along these lines, or plans to introduce it? I believe that various
publish/subscribe mechanisms are available for OpenStack e.g. Marconi.
If so why not leverage one of these, rather than going down the path you
suggest?

At least one research project (e.g. Coco Cloud) has built U-Con into
OpenStack and this provides the functionality you require, but because
this implementation relies on XACML and obligations, it is probably too
heavyweight for you. However the conceptual model is sound and could be
adapted to solve your requirements.

Hope this is helpful to you

regards

David

On 02/06/2016 06:23, Jamie Lennox wrote:
> Hi All,
> 
> I'd like to bring to the attention of the wider security groups and
> OpenStack users the Service Users Permissions [1] spec currently
> proposed against keystonemiddleware.
> 
> To summarize quickly OpenStack has long had the problem of token expiry
> happening in the middle of a long running operation and failing service
> to service requests and there have been a number of ways proposed around
> this including trusts and using the service users to perform operations.
> 
> Ideally in a big system like this we only want to validate a token and
> policy once on a user's first entry to the system, however all services
> only communicate via the public interfaces so we cannot tell at
> validation time whether this is the first, second, or twentieth time we
> are validating a token. (If we ever do OpenStack 2.0 we should change this)
> 
> The proposed spec provides a way to simulate the at-edge validation for
> service to service communication. If a request has an X-Service-Token
> header (an existing concept) then instead of validating the user's token
> we should trust all the headers sent with that request (X_USER_ID,
> X_PROJECT_ID etc). We would still validate the X-Service-Token header.
> This has the effect that one service asserts to another that it has
> already validated this token and the receiving service shouldn't
> validate it again and bypass the expiry problem.
> 
> The glaring security issue here is that a user with the service role can
> now emulate any request on behalf of any user by sending the expected
> authenticated headers. This will place an extreme level of trust on
> accounts that up to now have generally only been able to validate a
> token. There is both the concern here that a malicious service could
> craft new requests with bogus credentials as well as services deciding
> that this provides them the ability to do non-expiring trusts from a
> user where it can simply replay the headers it received on previous
> requests to perform future operations on behalf of a user. This is
> _absolutely not_ the intended use case but something I expect to come up.
> 
> There is a variation of this mentioned in the spec where we pass only
> the user-id, project-id and audit information from service to service
> and then middleware can recreate the token from this information similar
> to how fernet tokens work today. There is additional processing here
> which in the standard case will simply reproduce the same headers that
> the last service already knew and it still allows a large amount of
> emulation from the service.
> 
> There are possibly ways we can secure this header bundle via signing
> however the practical result is essentially a secondary expiry time and
> an operational complexity that will make PKI tokens and rotating fernet
> keys appear trivial for the benefit of securing a service that we
> already trust with our tokens.
> 
> As this has such far reaching implications throughout openstack i would
> like outside input on whether the risks are worth the reward in this
> case, and what we would need to do to secure a deployment like this.
> 
> Please comment here and on the spec.
> 
> 
> 
> Thanks,
> 
> Jamie
> 
> 
> 
> [1] https://review.openstack.org/#/c/317266/
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

[openstack-dev] [vitrage] multi-tenancy in Vitrage

2016-06-02 Thread Afek, Ifat (Nokia - IL)
Hi,

During the discussion about accepting vitrage into the big tent[1], there was a 
concern raised by a few people regarding whether Vitrage is suitable for a 
public cloud. They were bothered that allowing each user to see the entire 
cloud topology is not suitable in such environments. 

It is true that currently Vitrage does not support such a tenant-centric view. 
However, it is in our Newton roadmap to solve it. We are currently in the 
process of working on multi-tenancy support design.

The basic idea we are currently considering is that the APIs for admin and 
tenants will be the same: topology show, alarms list, etc. However, the output 
of each API method call will depend on the caller. Admin will get all the data, 
while specific users/accounts will get only resources and alarms that are 
related to their tenant resources, and those directly impacting them.

Hope this clarifies the issue,
Ifat.

[1] https://review.openstack.org/#/c/320296


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-02 Thread Thierry Carrez

Yanyan Hu wrote:

Aha, it's pretty interesting, I vote for Zun as well :)


I don't get to vote, but since I was the one to suggest Higgins in the 
first place, I must admit that Zun sounds like a good alternative.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Release] Changing release model for *-aas services

2016-06-02 Thread Thierry Carrez

Mark Voelker wrote:

On Jun 1, 2016, at 12:27 PM, Armando M.  wrote:
[...]
To the best of my knowledge none of the *-aas projects are part of defcore, and 
since [1] has no presence of vpn, fw, lb, nor planned, I thought I was on the 
safe side.


Thanks for checking.  You are correct: LBaaS, VPNaaS, and FWaaS capabilities 
are not present in existing Board-approved DefCore Guidelines, nor have they 
been proposed for the next one. [2]

[2] http://git.openstack.org/cgit/openstack/defcore/tree/next.json


Thanks for the quick check! So this is clear from a Defcore perspective. 
Let's continue discussing on the review, and sorry for the noise :)


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [heat] [murano] [app-catalog] OpenStack Apps Community, several suggestions how to improve collaboration

2016-06-02 Thread Sergey Kraynev
Hi Jeremy,
Please see my answers below:

On 31 May 2016 at 19:38, Jeremy Stanley  wrote:

> On 2016-05-31 19:20:22 +0300 (+0300), Sergey Kraynev wrote:
> [...]
> > * *Second part related with changes with future repositories and*
> > important for Openstack Infra team *
> > JFYI, what we plan to do as next steps.
> >
> > Murano team will re-create some applications in their repositories using
> > name murano-examples, as reference implementation of some of the
> > applications which Murano team decides to keep in their project for
> > reference. This can be done by Murano team, no external help needed.
> >
> > Some of the applications (complicated and big applications like CI/CD
> > pipeline or Kubernetes cluster) will have their own repositories in the
> > future under openstack/. Actually CI/CD pipeline already lives in
> separated
> > repository, probably Kubernetes should be also moved to separated repo
> > going forward. Hopefully this shouldn't be a big deal for OpenStack Infra
> > team.
> > *However* we would like to get confirmation, that *Infra team* is ok with
> > it?
>
> Infra hasn't balked in the past at project teams having however many
> Git repositories they need to be able to effectively maintain their
> software (see configuration management projects for examples of
> fairly large sets of repos). Do you have any guesses as to how many
> you're talking about creating in, say, the next year?
>

>>>  I am happy to hear, that you don't mind about it. We suppose, that the
number of such repositories should not be more then 10. Probably the number
will be about 5-10 repos. I suppose, that it's extremely big number.

>
> > Suggestion is to use common template for names of repositories with
> Murano
> > applications in the future, namely openstack/murano-app-...
> > (openstack/murano-app-kubernetes, openstack/murano-app-docker, ...).
> We'll
> > describe overall approach in more details using
> > https://launchpad.net/murano-apps as entry point.
> >
> > Simple applications or applications where there is no active development
> > will keep being stored in murano-apps until there is a demand to move
> some
> > of them to separated repository. At that point we'll ask OpenStack Infra
> > team to do it.
> [...]
>
> Can you clarify what it is you're going to ask Infra to do? I think
> the things you're describing can be done entirely through
> configuration (you just need some project-config-core reviewers to
> approve the changes you propose), but that might mean I'm
> misunderstanding you.
>

>>> Jeremy, it's some kind of repetition previous words, like: "if we will
decide to move some apps to separate repositories - we will ask infra team
to confirm, that it's ok." So if you agree with this whole idea (sometimes
create new repos for some applications with number mentioned earlier (no
more then 10 per year)), current question may be ignored.

> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Sergey.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-06-02 Thread Monty Taylor
On 06/02/2016 10:06 AM, Hochmuth, Roland M wrote:
> Hi Jaesuk, The change in InfluxDB licensing was announced in the blog at, 
> https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/.
>  Up until that announcement, InfluxDB was planning on supporting all their 
> clustering and HA capabilities in the open-source version, which is one of 
> the reasons we had added it to Monasca.
> 
> There has been some discussion on supporting other databases in Monasca. Due 
> to performance and reliability concerns with InfluxDB, we had started looking 
> at Cassandra as an alternative. There are several reviews to look at if you 
> are interested at, https://review.openstack.org/#/q/monasca+cassandra. Shinya 
> Kawabata has been looking into Cassandra most recently.

I'm sad that InfluxDB has decided to turn Open Core - but I'm glad that
work was already underway to look at Cassandra. Well done.

> I looked at OpenTSDB several years ago. There are several concerns with 
> OpenTSDB, but the more significant one for us has been around deployment, as 
> it requires HBase which is built on HDFS. If you already have Hadoop, HDFS 
> and Hbase deployed then OpenTSDB is an incremental addition, but if you 
> don't, it is a significant investment. At the time that I had evaluated 
> OpenTSDB performance was not on-par with the other alternatives I considered.
> 
> Regards --Roland
> 
> From: Jaesuk Ahn >
> Reply-To: OpenStack List 
> >
> Date: Monday, May 30, 2016 at 9:59 AM
> To: OpenStack List 
> >
> Subject: [openstack-dev] [Monasca] influxDB clustering and HA will be 
> "commercial option".
> 
> Hi, Monasca developers and users,
> 
> https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/
> "For our current and future customers, we’ll be offering clustering and high 
> availability through Influx Cloud, our managed hosting offering, and Influx 
> Enterprise, our on-premise offering, in the coming months.”
> 
> 
> It seems like “clustering” and “high availablity” of influxDB will be 
> available only in commercial version.
> Monasca is currently leveraging influxDB as a metrics and alarm database. 
> Beside vertical, influxDB is currently only an open source option to use.
> 
> With this update stating “influxDB open source sw version will not have 
> clustering / ha feature”,
> I would like to know if there has been any discussion among monasca community 
> to add more database backend rather than influxDB, especially OpenTSDB.
> 
> 
> Thank you.
> 
> 
> 
> 
> 
> --
> Jaesuk Ahn, Ph.D.
> Software Defined Infra Tech. Lab.
> SKT
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-06-02 Thread Hochmuth, Roland M
Hi László, as another alternative you could achieve something similar in 
Monasca, without using the InfluxDB Relay project, by configuring multiple 
Monasca Persisters each in a different consumer group, and with it's own 
independent InfluxDB server instance. Not sure which is the better approach. I 
believe the answer to your question is that multiple instances of a metrics 
database in Monasca is already supported. Regards --Roland

From: László Hegedüs 
>
Organization: Ericsson AB
Reply-To: OpenStack List 
>
Date: Thursday, June 2, 2016 at 12:51 AM
To: OpenStack List 
>
Subject: Re: [openstack-dev] [Monasca] influxDB clustering and HA will be 
"commercial option".

The blog post also states that:

"For our users looking for free open source options, we’ll be releasing the 
open source InfluxDB Relay project along with a landing page how to achieve 
high availability using pure open source and subscription options with the 
0.12.0 releases and beyond. From that point forward our clustering efforts will 
be focused on the closed source Influx Enterprise offering."

https://github.com/influxdata/influxdb-relay/blob/master/README.md

So there is still an option to have it HA.

Of course it would be nice if multiple databases were supported by Monasca.

On 05/31/2016 09:30 AM, Julien Danjou wrote:

On Mon, May 30 2016, Jaesuk Ahn wrote:



It seems like “clustering” and “high availablity” of influxDB will be
available only in commercial version.
Monasca is currently leveraging influxDB as a metrics and alarm database.
Beside vertical, influxDB is currently only an open source option to use.


Indeed, it's a shame than there's nobody developing an opensource TSDB
based on open technologies that is used in OpenStack, which supports
high availability, clustering, and a ton of other features…

Wait… what about OpenStack Gnocchi?

  http://gnocchi.xyz/

:)





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [higgins] Should we rename "Higgins"?

2016-06-02 Thread Yanyan Hu
Aha, it's pretty interesting, I vote for Zun as well :)

2016-06-02 12:56 GMT+08:00 Fei Long Wang :

> +1 for Zun, I love it and it's definitely a good container :)
>
>
> On 02/06/16 15:46, Monty Taylor wrote:
> > On 06/02/2016 06:29 AM, 秀才 wrote:
> >> i suggest a name Zun :)
> >> please see the reference: https://en.wikipedia.org/wiki/Zun
> > It's available on pypi and launchpad. I especially love that one of the
> > important examples is the "Four-goat square Zun"
> >
> > https://en.wikipedia.org/wiki/Zun#Four-goat_square_Zun
> >
> > I don't get a vote - but I vote for this one.
> >
> >> -- Original --
> >> *From: * "Rochelle Grober";;
> >> *Date: * Thu, Jun 2, 2016 09:47 AM
> >> *To: * "OpenStack Development Mailing List (not for usage
> >> questions)";
> >> *Cc: * "Haruhiko Katou";
> >> *Subject: * Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> >>
> >> Well, you could stick with the wine bottle analogy  and go with a bigger
> >> size:
> >>
> >> Jeroboam
> >> Methuselah
> >> Salmanazar
> >> Balthazar
> >> Nabuchadnezzar
> >>
> >> --Rocky
> >>
> >> -Original Message-
> >> From: Kumari, Madhuri [mailto:madhuri.kum...@intel.com]
> >> Sent: Wednesday, June 01, 2016 3:44 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Cc: Haruhiko Katou
> >> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> >>
> >> Thanks Shu for providing suggestions.
> >>
> >> I wanted the new name to be related to containers as Magnum is also
> >> synonym for containers. So I have few options here.
> >>
> >> 1. Casket
> >> 2. Canister
> >> 3. Cistern
> >> 4. Hutch
> >>
> >> All above options are free to be taken on pypi and Launchpad.
> >> Thoughts?
> >>
> >> Regards
> >> Madhuri
> >>
> >> -Original Message-
> >> From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
> >> Sent: Wednesday, June 1, 2016 11:11 AM
> >> To: openstack-dev@lists.openstack.org
> >> Cc: Haruhiko Katou 
> >> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> >>
> >> I found container related names and checked whether other project uses.
> >>
> >> https://en.wikipedia.org/wiki/Straddle_carrier
> >> https://en.wikipedia.org/wiki/Suezmax
> >> https://en.wikipedia.org/wiki/Twistlock
> >>
> >> These words are not used by other project on PYPI and Launchpad.
> >>
> >> ex.)
> >> https://pypi.python.org/pypi/straddle
> >> https://launchpad.net/straddle
> >>
> >>
> >> However the chance of renaming in N cycle will be done by Infra-team on
> >> this Friday, we would not meet the deadline. So
> >>
> >> 1. use 'Higgins' ('python-higgins' for package name) 2. consider other
> >> name for next renaming chance (after a half year)
> >>
> >> Thoughts?
> >>
> >>
> >> Regards,
> >> Shu
> >>
> >>
> >>> -Original Message-
> >>> From: Hongbin Lu [mailto:hongbin...@huawei.com]
> >>> Sent: Wednesday, June 01, 2016 11:37 AM
> >>> To: OpenStack Development Mailing List (not for usage questions)
> >>> 
> >>> Subject: Re: [openstack-dev] [higgins] Should we rename "Higgins"?
> >>>
> >>> Shu,
> >>>
> >>> According to the feedback from the last team meeting, Gatling doesn't
> >>> seem to be a suitable name. Are you able to find an alternative name?
> >>>
> >>> Best regards,
> >>> Hongbin
> >>>
>  -Original Message-
>  From: Shuu Mutou [mailto:shu-mu...@rf.jp.nec.com]
>  Sent: May-24-16 4:30 AM
>  To: openstack-dev@lists.openstack.org
>  Cc: Haruhiko Katou
>  Subject: [openstack-dev] [higgins] Should we rename "Higgins"?
> 
>  Hi all,
> 
>  Unfortunately "higgins" is used by media server project on Launchpad
>  and CI software on PYPI. Now, we use "python-higgins" for our
>  project on Launchpad.
> 
>  IMO, we should rename project to prevent increasing points to patch.
> 
>  How about "Gatling"? It's only association from Magnum. It's not
>  used on both Launchpad and PYPI.
>  Is there any idea?
> 
>  Renaming opportunity will come (it seems only twice in a year) on
>  Friday, June 3rd. Few projects will rename on this date.
>  http://markmail.org/thread/ia3o3vz7mzmjxmcx
> 
>  And if project name issue will be fixed, I'd like to propose UI
>  subproject.
> 
>  Thanks,
>  Shu
> 
> 
> 
> >>> __
> >>> _
>  ___
>  OpenStack Development Mailing List (not for usage questions)
>  Unsubscribe: OpenStack-dev-
>  requ...@lists.openstack.org?subject:unsubscribe
>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> __
> >>> 
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> 

Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-06-02 Thread Hochmuth, Roland M
My understanding of Prometheus is that it doesn't support HA, fault-tolerant 
clustering either.

The recommendation from the Prometheus developers for HA and 
fault-tolerance/reliability is to run multiple Prometheus servers with one 
server scraping metrics from another server.

To do something similar in Monasca you could run multiple instances of InfluxDB 
using the Kafka metrics topic and multiple consumer groups to replicate all 
metrics to each InfluxDB server, or use the InfluxDB Relay project at, 
https://github.com/influxdata/influxdb-relay.

The non-clustered version of InfluxDB remains free and open-source. It is only 
the clustered version of InfluxDB that has now moved to a closed source license.


From: Martinx - ジェームズ 
>
Reply-To: OpenStack List 
>
Date: Monday, May 30, 2016 at 7:20 PM
To: OpenStack List 
>
Subject: Re: [openstack-dev] [Monasca] influxDB clustering and HA will be 
"commercial option".



On 30 May 2016 at 11:59, Jaesuk Ahn 
> wrote:
Hi, Monasca developers and users,

https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/
"For our current and future customers, we’ll be offering clustering and high 
availability through Influx Cloud, our managed hosting offering, and Influx 
Enterprise, our on-premise offering, in the coming months.”


It seems like “clustering” and “high availablity” of influxDB will be available 
only in commercial version.
Monasca is currently leveraging influxDB as a metrics and alarm database. 
Beside vertical, influxDB is currently only an open source option to use.

With this update stating “influxDB open source sw version will not have 
clustering / ha feature”,
I would like to know if there has been any discussion among monasca community 
to add more database backend rather than influxDB, especially OpenTSDB.


Thank you.





--
Jaesuk Ahn, Ph.D.
Software Defined Infra Tech. Lab.
SKT


What about Prometheus?

https://prometheus.io/

https://prometheus.io/docs/introduction/comparison/

Cheers!
Thiago
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][keystone] keystoneauth1 2.8.0 release (newton)

2016-06-02 Thread no-reply
We are delighted to announce the release of:

keystoneauth1 2.8.0: Authentication Library for OpenStack Identity

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystoneauth

With package available at:

https://pypi.python.org/pypi/keystoneauth1

Please report issues through launchpad:

http://bugs.launchpad.net/keystoneauth

For more details, please see below.

2.8.0
^


New Features


* Added a new OidcAccessToken plugin, accessible via the
  'v3oidcaccesstoken' entry point, making possible to authenticate
  using an existing OpenID Connect Access token.


Bug Fixes
*

* [bug 1583780
  (https://bugs.launchpad.net/keystoneauth/+bug/1583780)] OpenID
  connect support should include authenticating using directly an
  access token.

Changes in keystoneauth1 2.7.0..2.8.0
-

3562dd4 Updated from global requirements
924e04c Updated from global requirements
156f340 Updated from global requirements
cdec6c9 Updated from global requirements
4343ce5 Let Oidc* auth plugins accept scope parameters as kwargs
8d6c22b Updated from global requirements
797865c Update keystoneauth fixture to support v3
cc49aaa Check that all defined entry points can be loaded
aae4612 Use betamax hooks to mask fixture results
5623c01 oidc: fix typo on class name
2469c61 oidc: fix option definition
553a523 oidc: add OidcAccessToken class to authenticate reusing an access token
1c07cdd oidc: fix typo in docstring
fe773c9 oidc: DRY when obtaining an access token
f678ecd oidc: DRY when obtaining a keystone token
356f5e3 oidc: Remove unused parameters in _OidcBase
b1f1e50 Add is_domain to keystoneauth token

Diffstat (except docs and test files)
-

keystoneauth1/access/access.py |  16 ++
keystoneauth1/fixture/hooks.py |  58 +++
keystoneauth1/fixture/keystoneauth_betamax.py  |   8 +-
keystoneauth1/fixture/v3.py|  18 ++-
keystoneauth1/identity/__init__.py |   6 +-
keystoneauth1/identity/v3/oidc.py  | 116 +-
keystoneauth1/loading/_plugins/identity/v3.py  |  16 ++
.../notes/bug-1582774-49af731b6dfc6f2f.yaml|   4 +
.../unit/data/keystone_v2_sample_request.json  |   1 +
.../unit/data/keystone_v2_sample_response.json |  49 ++
.../unit/data/keystone_v3_sample_request.json  |  13 ++
.../unit/data/keystone_v3_sample_response.json |  15 ++
releasenotes/notes/1583780-700f99713e06324e.yaml   |   9 ++
setup.cfg  |   3 +-
test-requirements.txt  |   6 +-
21 files changed, 614 insertions(+), 71 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index ae1ba37..6d08bf7 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12 +12 @@ fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
-mock>=1.2 # BSD
+mock>=2.0 # BSD
@@ -15 +15 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
@@ -17 +17 @@ oslotest>=1.10.0 # Apache-2.0
-os-testr>=0.4.1 # Apache-2.0
+os-testr>=0.7.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-06-02 Thread Hochmuth, Roland M
Hi Jaesuk, The change in InfluxDB licensing was announced in the blog at, 
https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/.
 Up until that announcement, InfluxDB was planning on supporting all their 
clustering and HA capabilities in the open-source version, which is one of the 
reasons we had added it to Monasca.

There has been some discussion on supporting other databases in Monasca. Due to 
performance and reliability concerns with InfluxDB, we had started looking at 
Cassandra as an alternative. There are several reviews to look at if you are 
interested at, https://review.openstack.org/#/q/monasca+cassandra. Shinya 
Kawabata has been looking into Cassandra most recently.

I looked at OpenTSDB several years ago. There are several concerns with 
OpenTSDB, but the more significant one for us has been around deployment, as it 
requires HBase which is built on HDFS. If you already have Hadoop, HDFS and 
Hbase deployed then OpenTSDB is an incremental addition, but if you don't, it 
is a significant investment. At the time that I had evaluated OpenTSDB 
performance was not on-par with the other alternatives I considered.

Regards --Roland

From: Jaesuk Ahn >
Reply-To: OpenStack List 
>
Date: Monday, May 30, 2016 at 9:59 AM
To: OpenStack List 
>
Subject: [openstack-dev] [Monasca] influxDB clustering and HA will be 
"commercial option".

Hi, Monasca developers and users,

https://influxdata.com/blog/update-on-influxdb-clustering-high-availability-and-monetization/
"For our current and future customers, we’ll be offering clustering and high 
availability through Influx Cloud, our managed hosting offering, and Influx 
Enterprise, our on-premise offering, in the coming months.”


It seems like “clustering” and “high availablity” of influxDB will be available 
only in commercial version.
Monasca is currently leveraging influxDB as a metrics and alarm database. 
Beside vertical, influxDB is currently only an open source option to use.

With this update stating “influxDB open source sw version will not have 
clustering / ha feature”,
I would like to know if there has been any discussion among monasca community 
to add more database backend rather than influxDB, especially OpenTSDB.


Thank you.





--
Jaesuk Ahn, Ph.D.
Software Defined Infra Tech. Lab.
SKT
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Monasca] influxDB clustering and HA will be "commercial option".

2016-06-02 Thread László Hegedüs

The blog post also states that:

"For our users looking for free open source options, we’ll be releasing 
the open source InfluxDB Relay project along with a landing page how to 
achieve high availability using pure open source and subscription 
options with the 0.12.0 releases and beyond. From that point forward our 
clustering efforts will be focused on the closed source Influx 
Enterprise offering."


https://github.com/influxdata/influxdb-relay/blob/master/README.md

So there is still an option to have it HA.

Of course it would be nice if multiple databases were supported by Monasca.

On 05/31/2016 09:30 AM, Julien Danjou wrote:

On Mon, May 30 2016, Jaesuk Ahn wrote:


It seems like “clustering” and “high availablity” of influxDB will be
available only in commercial version.
Monasca is currently leveraging influxDB as a metrics and alarm database.
Beside vertical, influxDB is currently only an open source option to use.

Indeed, it's a shame than there's nobody developing an opensource TSDB
based on open technologies that is used in OpenStack, which supports
high availability, clustering, and a ton of other features…

Wait… what about OpenStack Gnocchi?

   http://gnocchi.xyz/

:)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [diskimage-builder] Howto refactor?

2016-06-02 Thread Andre Florath
Hi!

> ++, but one clarification: We do have a spec process which is to use the
> tripleo-specs repo. Since this is obviously not super clear and there is
> a SnR issue for folks who are only dib core maybe we should move specs
> in to the dib repo?

Good to know!
I'd really love to use the dib repo for this: IMHO the spec, requirements,
design and source code should go together.


> Splitting these out would help a lot. This whole set of features is
> going to take a while to iterate on (sorry! - reviewer capacity is
> limited and there are big changes here) and a few of these are pretty
> straightforward things I think we really want (such as the cleanup
> phase). There's also a lot of risk to us in merging large changes since
> we are the poster child for how having external dependencies makes
> testing hard. Making smaller changes lets us release / debug /
> potentially revert them individually which is a huge win.

Of course I tried smaller parts; but for some changes the source code
other parts must be changed which again needs changes.
Is like sticky spaghetti: you start with a single noodle polling out
but when you finished you have the whole pot.
Will have a detailed look again.


> As for what to do about the existing and potentially conflicting changes
> -  that's harder to answer. I think there's a very valid concern from
> the original authors about scope creep of their original goal. We also,
> obviously, don't want to land something that will make it more difficult
> for us to enhance later on.

A general remark here: I do not want to block features.
I just try to give my opinion - which might be wrong.
If you think, the patches are fine: let them go.


> I think with the LVM patch there actually isn't much of risk to making
> your work more difficult - the proposed change is pretty small and has a
> small input surface area - it should be easy to preserve its behavior
> while also supporting a more general solution.

If you are really talking about 'preserving the behavior' and not
'preserving the user experience' (like how to use and configure LVM)
I'm on your side.

I'm somewhat careful with this patch:
For me the use case (the WHY behind) is still completely unclear -
and questions about this are not answered [1].
The only small hint (building docker images) make completely no sense
to me: docker files are (more or less) tar files; LVM just cannot
be used. [2]
I'm missing the 'Big Picture' here ;-)


> For the EFI change there
> are some issues you've hit on that need to be fixed, but I am not sure
> they are going to require basing the change off  a more general fix. It
> might be as easy as copying the element contents in to a new dir when a
> more general solution is completed in which case getting the changes
> completed in smaller portions is more beneficial IMO.

My opinion here is that this patch adds to the stickiness of the
spaghetti :-)
It adds more places where assumptions are made about names of block device
or partition names. Of course it's possible to clean up afterwards -
maybe this is a strange attitude of me - I try to clean up things
before doing the work.

One more technical detail here: the partitions are currently organized
in the way, that the boot partition is the second one. Did you ever
saw a system like this?
I asked me, why this was done in this way and came to the conclusion:
because other parts of the source code assume, that the first
partition is the root partition.
So you get a running EFI system, but it is not longer possible to
enlarge the root partition...


But again: these are only my opinions - it's you who decide.


> I also wanted to say thanks a ton for the work and reviews - it is
> extremely useful stuff and we desperately need the help. :)

Thank you all for your support and explanations!
I have not that much time - because I do everything in my spare time here.
Unfortunately looks like I need to spend more time writing mails and
documentation instead of source code  :-)

Kind regards

Andre



[1] https://review.openstack.org/#/c/252041/
[2] https://docs.docker.com/engine/userguide/eng-image/baseimages/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] new stable-maint cores for Horizon

2016-06-02 Thread Matthias Runge
Horizoners,

please join me to welcome

* Richard Jones
* Rob Cresswell
* Thai Tran

as new Horizon stable core reviewers.

Thank you guys for stepping up and thank you tonyb for pulling stats and
pushing this.

Best,
Matthias
-- 
Matthias Runge 

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Michael Cunningham,
Michael O'Neill, Eric Shander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr] Wasting so many external network IPs in DVR mode?

2016-06-02 Thread zhi
Hi, Carl

Thanks for your reply! ;-)

I have some ideas about your explanation. Please review it. :-)

The reason putting the routers namespaces behind the fip namespace is
saving mac address tables in switches. In Centralized Virtual Router,
 there are many "qg" interfaces in the external bridge. Every "qg"
interface may contains one or more floating ips. I think this is a problem.
The mac address tables in switches will learn many mac items from different
"qg" interfaces, like this:

|MAC address | Port|
|mac of qg1 | 2 |
|mac of qg2 | 2 |
|mac of qg3 | 2 |
|mac of qg4 | 2 |
|mac of qgN | 2 |

 In DVR, I think there is no problems about that I mentioned above.
Because physical switches can learn all the fips's mac address from the
same port —— "fg" interface. I think the mac address tables in physical
switches like this:

|MAC address | Port|
|mac of fg| 2 |

In this situation,  just one relationship between Port and MAC address
can be learned by the physical switches.


Does my thought was right?


Thanks
Zhi Chang

2016-06-02 0:18 GMT+08:00 Carl Baldwin :

> On Wed, Jun 1, 2016 at 9:48 AM, zhi  wrote:
> > hi, all
> >
> > I have some questions about north/south traffic in DVR mode.
> >
> > As we all know, packets will be sent to instance's  default gateway
> (qr
> > interface) when an instance want to communicate to the external network.
> > Next, these packets will be sent from rfp interface(qrouter interface) to
> > the fpr interface(fip namespace) after NAT by iptables rules in qrouter
> > namespace, Finally, packets will be forwarded by fg interface which
> exists
> > in the fip namespace.
> >
> > I was so confused by the "fg" interface.
> >
> > The device owner of "fg" interface is
> "network:floatingip_agent_gateway"
> > in Neutron. It is a special port which allocated from the external
> network.
> > I think, in this way, we have to wasted many IP addresses from the
> external
> > network. Because we need a dedicated router IP per compute node, didn't
> we?
>
> Yes, this is correct.  We have a simple spec [1] in review to solve
> this problem in Newton.  It will still require the same fg ports but
> will allow you to pull the IP addresses for these ports from a private
> address space so that your public IPs are not wasted on them.
>
> > In DVR mode, why not we use "qg" interface in qrouter namespace? Just
> > like the "Legacy L3 agent mode" !  We can also setup "qg" interface and
> "qr"
> > interfaces in qrouter namespaces in DVR mode.
>
> The main reason behind putting the routers behind the fip namespace,
> was the number of mac addresses that you would need.  Each port needs
> a unique mac address and some calculations showed that in some large
> environments, the number of mac addresses flying around could stretch
> the limits of mac address tables in switches and routers and cause
> degraded performance.
>
> Another thing is that it was not trivial to create a port without a
> permanent IP address to host floating ips which can come and go at any
> time.  It is also nice to have a permanent IP address on each port to
> allow debugging.  A number of ideas were thrown around for how to
> accomplish this but none ever came to fruition.  The spec I mentioned
> [1] will help with this by allowing a permanent IP for each port from
> a private pool of plentiful IP addresses to avoid wasting the public
> ones.
>
> > Maybe my thought was wrong, but I want to know what can we benefit
> from
> > the "fip" namespace and the reason why we do not use "qg" interfaces in
> DVR
> > mode just like Legacy L3 agent mode.
> >
> >
> > Hope for your reply.  ;-)
>
> Glad to help,
> Carl
>
> [1] https://review.openstack.org/#/c/300207/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev