Re: [openstack-dev] [nova] Austin summit priorities session recap

2016-06-14 Thread Bias, Randy
We understand.  We're willing, ready, and able to assist with all of the
upstream items that need to happen in order to get our submission in, and
more.  We just need to know what is needed so we can help.

Best,


--Randy




On 6/8/16, 6:09 PM, "Matt Riedemann"  wrote:

>That blueprint is high priority for a single vendor but low
>priority when compared to the very large backlog of items that Nova has
>for the release as a whole.
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-14 Thread Na Zhu
Ryan,

Thanks for your really helpful comments.
If the lswitch is determined by the flow classifier, I think there is no need 
to record it in the logical router. OVN creates a patch port pair for the 
router interface: one patch port connects to the logical switch, the other 
connects to the logical router. The one that connects to the logical switch is 
the Neutron router interface. We can still record the port chain on the 
logical switch when the logical-source-port is a router interface, right?
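
For reference, a rough ovn-nbctl sketch of that router-interface patch-port 
pairing (placeholder names; exact command names can vary between OVN versions):

  # router side of the pair: a Logical_Router_Port on the logical router
  ovn-nbctl lrp-add lr0 lrp-ls0 00:00:00:00:00:01 192.168.1.1/24
  # switch side of the pair: a Logical_Switch_Port of type "router"
  ovn-nbctl lsp-add ls0 lsp-lr0
  ovn-nbctl lsp-set-type lsp-lr0 router
  ovn-nbctl lsp-set-addresses lsp-lr0 00:00:00:00:00:01
  ovn-nbctl lsp-set-options lsp-lr0 router-port=lrp-ls0

The lsp-lr0 port above is the one that maps to the Neutron router interface.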



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   Ryan Moats/Omaha/IBM
To: John McDowall 
Cc: Na Zhu , Srilatha Tangirala/San 
Francisco/IBM@IBMUS, "OpenStack Development Mailing List \(not for usage 
questions\)" , discuss 

Date:   2016/06/15 12:42
Subject:Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN


"discuss"  wrote on 06/14/2016 10:31:40 
PM:

> From: John McDowall 
> To: Na Zhu 
> Cc: Srilatha Tangirala/San Francisco/IBM@IBMUS, "OpenStack 
> Development Mailing List \(not for usage questions\)"  d...@lists.openstack.org>, discuss 
> Date: 06/14/2016 10:48 PM
> Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn] 
> [networking-sfc] SFC andOVN
> Sent by: "discuss" 
> 
> Juno,
> 
> It is a container for port-pair-groups and flow-classifier. I 
> imagine there could be more than one port-chain per switch. Also
> we may want to extend the model beyond a single lswitch 

I agree that there could be more than one port-chain per switch, 
determined
by the flow classifier. 

What I'm confused by is:

1. Why are items only recorded in logical switches?  I would think
that I could also attach an SFC to a logical router - although I admit
that the current neutron model for ports doesn't really allow that
easily.  Couple that with the change of name from Logical_Port to
Logical_Switch_Port, and I'm left wondering if we aren't better off
with the following "weak" links instead: 
-the Port_Chain includes the logical switch as an external_id
-each Port_Pair_Group includes the Port_Chain as an external_id
-each Port_Pair includes the PPG as an external_id
-each Logical_Switch_Port includes the PP as an external_id

I *think* that *might* allow me (in the future) to attach a port chain
to a logical router by setting the logical router as an external_id and
using Logical_Router_Ports to make up the PPs...


2. I still don't see what Logical_Flow_Classifier is buying me that
ACL doesn't - I can codify all of the classifiers given in the match
criteria of an ACL entry and codify the first PPG of the SFC as
the action of the ACL entry...

Ryan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] the project specific config option not generated together with tempest.conf.sample

2016-06-14 Thread joehuang
Hello, 

A tempest plugin was written for Kingbird 
(https://review.openstack.org/#/c/328683/). The plugin and test cases can be 
discovered by tempest, and the configuration works if we add the 
configuration items into tempest.conf manually, but if we run tox 
-egenconfig in the tempest folder, these configuration items are not generated in 
the tempest.conf.sample.

How can we make the plugin's customized configuration items also be generated in 
the tempest.conf.sample?
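
For context: tox -egenconfig drives oslo-config-generator, which only picks up 
options exposed through oslo.config.opts entry points (or namespaces listed in 
its generator config). One quick check is to run the generator directly -- a 
sketch only, and the plugin namespace below is a made-up example that would 
have to match an entry point in the plugin's setup.cfg:

  oslo-config-generator --namespace tempest.config \
      --namespace kingbird_tempest_plugin.config \
      --output-file tempest.conf.sample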

And the service_available group should already be there in the config, 
shouldn't it?

Thanks in advance.

Best Regards
Chaoyi Huang ( Joe Huang )

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-14 Thread Ryan Moats
"discuss"  wrote on 06/14/2016 10:31:40
PM:

> From: John McDowall 
> To: Na Zhu 
> Cc: Srilatha Tangirala/San Francisco/IBM@IBMUS, "OpenStack
> Development Mailing List \(not for usage questions\)"  d...@lists.openstack.org>, discuss 
> Date: 06/14/2016 10:48 PM
> Subject: Re: [ovs-discuss] [openstack-dev] [OVN] [networking-ovn]
> [networking-sfc] SFC andOVN
> Sent by: "discuss" 
>
> Juno,
>
> It is a container for port-pair-groups and flow-classifier. I
> imagine there could be more than one port-chain per switch. Also
> we may want to extend the model beyond a single lswitch

I agree that there could be more than one port-chain per switch, determined
by the flow classifier.

What I'm confused by is:

1. Why are items only recorded in logical switches?  I would think
that I could also attach an SFC to a logical router - although I admit
that the current neutron model for ports doesn't really allow that
easily.  Couple that with the change of name from Logical_Port to
Logical_Switch_Port, and I'm left wondering if we aren't better off
with the following "weak" links instead:
-the Port_Chain includes the logical switch as an external_id
-each Port_Pair_Group includes the Port_Chain as an external_id
-each Port_Pair includes the PPG as an external_id
-each Logical_Switch_Port includes the PP as an external_id

I *think* that *might* allow me (in the future) to attach a port chain
to a logical router by setting the logical router as an external_id and
using Logical_Router_Ports to make up the PPs...
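
Expressed with ovn-nbctl's generic database commands, those weak links would 
look roughly like this (illustration only: the Port_Chain, Port_Pair_Group and 
Port_Pair tables exist only in the prototype schema under discussion, and the 
external_ids key names are made up):

  ovn-nbctl set Port_Chain pc1 external_ids:lswitch=ls0
  ovn-nbctl set Port_Pair_Group ppg1 external_ids:port-chain=pc1
  ovn-nbctl set Port_Pair pp1 external_ids:port-pair-group=ppg1
  # Logical_Switch_Port is a real NB table; the key name is still just an example
  ovn-nbctl set Logical_Switch_Port sf1-port external_ids:port-pair=pp1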


2. I still don't see what Logical_Flow_Classifier is buying me that
ACL doesn't - I can codify all of the classifiers given in the match
criteria of an ACL entry and codify the first PPG of the SFC as
the action of the ACL entry...
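
As a purely illustrative example of that: the classifier side maps directly 
onto today's ACL match grammar, and only the action would be new, since the 
current ACL verdicts are limited to allow/allow-related/drop/reject:

  # standard ACL today
  ovn-nbctl acl-add ls0 from-lport 1000 \
      'inport == "p1" && ip4.src == 10.0.0.5' allow
  # an SFC variant would replace "allow" with something like "sfc(ppg1)",
  # which is a hypothetical action, not one ovn-nbctl accepts today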

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Mark Voelker
On Jun 14, 2016, at 7:28 PM, Monty Taylor  wrote:
> 
> On 06/14/2016 05:42 PM, Doug Hellmann wrote:
>> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
>>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
 Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
>> Last year, in response to Nova micro-versioning and extension updates[1],
>> the QA team added strict API schema checking to Tempest to ensure that
>> no additional properties were added to Nova API responses[2][3]. In the
>> last year, at least three vendors participating in the OpenStack Powered
>> Trademark program have been impacted by this change, two of which
>> reported this to the DefCore Working Group mailing list earlier this 
>> year[4].
>> 
>> The DefCore Working Group determines guidelines for the OpenStack Powered
>> program, which includes capabilities with associated functional tests
>> from Tempest that must be passed, and designated sections with associated
>> upstream code [5][6]. In determining these guidelines, the working group
>> attempts to balance the future direction of development with lagging
>> indicators of deployments and user adoption.
>> 
>> After a tremendous amount of consideration, I believe that the DefCore
>> Working Group needs to implement a temporary waiver for the strict API
>> checking requirements that were introduced last year, to give downstream
>> deployers more time to catch up with the strict micro-versioning
>> requirements determined by the Nova/Compute team and enforced by the
>> Tempest/QA team.
> 
> I'm very much opposed to this being done. If we're actually concerned with
> interoperability and verifying that things behave in the same manner between 
> multiple
> clouds then doing this would be a big step backwards. The fundamental 
> disconnect
> here is that the vendors who have implemented out of band extensions or 
> were
> taking advantage of previously available places to inject extra attributes
> believe that doing so means they're interoperable, which is quite far from
> reality. **The API is not a place for vendor differentiation.**
 
 This is a temporary measure to address the fact that a large number
 of existing tests changed their behavior, rather than having new
 tests added to enforce this new requirement. The result is deployments
 that previously passed these tests may no longer pass, and in fact
 we have several cases where that's true with deployers who are
 trying to maintain their own standard of backwards-compatibility
 for their end users.
>>> 
>>> That's not what happened though. The API hasn't changed and the tests 
>>> haven't
>>> really changed either. We made our enforcement on Nova's APIs a bit 
>>> stricter to
>>> ensure nothing unexpected appeared. For the most part these tests work on any 
>>> version
>>> of OpenStack. (we only test it in the gate on supported stable releases, 
>>> but I
>>> don't expect things to have drastically shifted on older releases) It also
>>> doesn't matter which version of the API you run, v2.0 or v2.1. Literally, 
>>> the
>>> only case it ever fails is when you run something extra, not from the 
>>> community,
>>> either as an extension (which themselves are going away [1]) or another 
>>> service
>>> that wraps nova or imitates nova. I'm personally not comfortable saying 
>>> those
>>> extras are ever part of the OpenStack APIs.
>>> 
 We have basically three options.
 
 1. Tell deployers who are trying to do the right thing for their immediate
   users that they can't use the trademark.
 
 2. Flag the related tests or remove them from the DefCore enforcement
   suite entirely.
 
 3. Be flexible about giving consumers of Tempest time to meet the
   new requirement by providing a way to disable the checks.
 
 Option 1 goes against our own backwards compatibility policies.
>>> 
>>> I don't think backwards compatibility policies really apply to what we 
>>> define
>>> as the set of tests that as a community we are saying a vendor has to pass 
>>> to
>>> say they're OpenStack. From my perspective as a community we either take a 
>>> hard
>>> stance on this and say to be considered an interoperable cloud (and to get 
>>> the
>>> trademark) you have to actually have an interoperable product. We slowly 
>>> ratchet
>>> up the requirements every 6 months, there isn't any implied backwards
>>> compatibility in doing that. You passed in the past but not in the newer 
>>> stricter
>>> guidelines.
>>> 
>>> Also, even if I did think it applied, we're not talking about a change which
>>> would fall into breaking that. The change was introduced a year and a half ago
>>> during kilo and landed a year ago during 

[openstack-dev] [Congress] Progress on Congress support in OPNFV Colorado

2016-06-14 Thread SULLIVAN, BRYAN L
Hi Congress team,

A quick update on progress at OPNFV on integrating Congress into the OPNFV 
Colorado release (Mitaka-based). With the help of RedHat (Dan Radez) and 
Canonical (Narinder Gupta, Liam Young) we are getting very close to upstreaming 
two key things to OpenStack:

-  Congress Puppet module

o   https://github.com/radez/puppet-congress has been tested successfully for 
the OPNFV Colorado release on Centos 7 hosts.

o   This will be used in the base OPNFV deploy for the Apex project (RDO-based 
installer).

o   Dan is in the process of creating the official Puppet repo at 
https://github.com/openstack/puppet-congress

-  Congress JuJu Charm

o   https://github.com/gnuoy/charm-congress has been tested successfully for 
the OPNFV Colorado release on Ubuntu Trusty and Xenial hosts.

o   This will be used in the base OPNFV deploy for the JOID project 
(MAAS/JuJu-based installer).

o   Canonical has asked me to initiate the creation of the 
https://github.com/openstack/charm-congress repo to host this. I'd appreciate 
any help/pointers as to how to get that started.

This module and charm will help other OPNFV projects (e.g. Doctor) as Congress 
will be installed by default on the OPNFV platform, with the necessary 
datasource drivers and configuration ready to go.

I have three policy test cases that are working reliably for Mitaka (see 
https://git.opnfv.org/cgit/copper/tree/tests), soon to be integrated into the 
OPNFV CI/CD program, and will demo them at the upcoming OPNFV Summit in Berlin 
next week. I'll be adding other tests and developing a tested policy library 
once I get over the hurdles of completing the installer support (including for 
FUEL and Compass, the other two OPNFV installer projects).

Feels like it's been a long road, but also that we are nearing a major 
milestone!

Thanks,
Bryan Sullivan | AT&T

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-14 Thread Na Zhu
John,

Here are the steps to configure SFC:
1, create flow-classifier
2, create port-pairs
3, create port-pair-group with port-pairs
4, create port-chain with flow-classifier and port-pair-groups

You can see that the port-chain is not related to a network, so my question is 
how to get the lswitch for networking-sfc and write it to the database? 
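
For reference, a rough sketch of those steps with the networking-sfc CLI 
(names, ports and prefixes are placeholders; check the networking-sfc docs for 
the exact flags):

  neutron flow-classifier-create --logical-source-port p1 \
      --source-ip-prefix 10.0.0.5/32 --destination-ip-prefix 10.0.1.5/32 FC1
  neutron port-pair-create --ingress p2 --egress p3 PP1
  neutron port-pair-group-create --port-pair PP1 PPG1
  neutron port-chain-create --port-pair-group PPG1 --flow-classifier FC1 PC1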



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: discuss , "OpenStack Development Mailing 
List (not for usage questions)" , 
"Srilatha Tangirala" 
Date:   2016/06/15 11:31
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

It is a container for port-pair-groups and flow-classifier. I imagine 
there could be more than one port-chain per switch. Also we may want 
to extend the model beyond a single lswitch 

Regards

John

Sent from my iPhone

On Jun 14, 2016, at 8:09 PM, Na Zhu  wrote:

Hi John,

Another question: I think the port-chain is unrelated to the lswitch. One 
port-chain includes multiple port-pair-groups and one flow-classifier, so how 
do we get the lswitch from the port-chain?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:discuss , "OpenStack Development 
Mailing List (not for usage questions)" , "Srilatha Tangirala" 
Date:2016/06/15 11:04
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



The reason I did that was to be able to create reusable VNFs

Regards

John

Sent from my iPhone

On Jun 14, 2016, at 7:15 PM, Na Zhu  wrote:

John,

Since you have port-chain as a child of lswitch, do you still need port-pairs 
as a child of lswitch?





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM
To:John McDowall 
Cc:discuss , "OpenStack Development 
Mailing List (not for usage questions)" , "Srilatha Tangirala" 
Date:2016/06/15 09:11
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN


John,

OK, I will change networking-ovn IDL to align with the new schema.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)




From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:discuss , "OpenStack Development 
Mailing List (not for usage questions)" , "Srilatha Tangirala" 
Date:2016/06/15 08:30
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

I worked on ovs/ovn today and re-structured the schema. I could not figure 
out how to make this work without having lport-chains as a child of lswitch. 
So I have got the basics working -- attached is a simple shell script that 
creates and shows the port-chains. I tried to merge with the upstream 
master, but there are a bunch of changes that, while minor, would have taken 
some time to merge in, so I skipped it for now.

The new schema will break the networking-ovn IDL, apologies.  The areas I 
can think of are:

Port-chain is now a child of lswitch, so it needs that as a parameter.
Flow-classifier is now a child of port-chain only, so it needs to change from 
lswitch to lport-chain.

If you can work on the changes to networking-ovn, great (I promise not to 
change the schema again until we have had a wider review). If not, I will 
get to it tomorrow. 

Regards

John

From: Na Zhu 
Date: Monday, June 13, 2016 at 9:57 PM
To: John McDowall 
Cc: discuss , "OpenStack Development Mailing List 
(not for usage questions)" , Srilatha 
Tangirala 
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

OK, I also find the column "port_pairs" and "flow_classifiers" can not be 

Re: [openstack-dev] [murano] Nominating Alexander Tivelkov and Zhu Rong for murano cores

2016-06-14 Thread Yang, Lin A
+1 both for Alexander Tivelkov and Zhu Rong. Well deserved.

Regards,
Lin Yang

On Jun 15, 2016, at 3:17 AM, Kirill Zaitsev 
> wrote:

Hello team, I want to announce the following changes to the murano core team:

1) I’d like to nominate Alexander Tivelkov for murano core. He has been part of 
the project for a very long time and has contributed to almost every part of 
murano. He was fully committed to murano during the Mitaka cycle and continues 
to be during Newton [1]. His work on the scalable framework architecture is 
one of the most notable features scheduled for the N release.

2) I’d like to nominate Zhu Rong for murano core. Last time he was nominated I 
-1’ed the proposal, because I believed he needed to start making more 
substantial contributions. I’m now sure that Zhu Rong has shown his commitment [2] 
to the murano project and I’m happy to nominate him myself. His work on separating 
cfapi from the murano api and his contributions aimed at addressing murano’s 
technical debt are much appreciated.

3) Finally, I would like to remove Steve McLellan [3] from the murano core team. 
Steve has been part of murano from its very early stages. However, his focus 
has since shifted and he hasn’t been active in murano during the last couple of 
cycles. I want to thank Steve for his contributions and express the hope to see 
him back in the project in the future.


Murano team, please respond with +1/-1 to the proposed changes.

[1] http://stackalytics.com/?user_id=ativelkov&metric=marks
[2] http://stackalytics.com/?metric=marks&user_id=zhu-rong
[3] http://stackalytics.com/?user_id=sjmc7
--
Kirill Zaitsev
Software Engineer
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Stability and reliability of gate jobs

2016-06-14 Thread David Moreau Simard
Hi Kolla o/

I'm writing to you because I'm concerned.

In case you didn't already know, the RDO community collaborates with
upstream deployment and installation projects to test its packaging.

This relationship is beneficial in a lot of ways for both parties, in summary:
- RDO has improved test coverage (because it's otherwise hard to test
different ways of installing, configuring and deploying OpenStack by
ourselves)
- The RDO community works with upstream projects (deployment or core
projects) to fix issues that we find
- In return, the collaborating deployment project can feel more
confident that the RDO packages it consumes have already been tested
using its platform and should work

To make a long story short, we do this with a project called WeIRDO
[1] which essentially runs gate jobs outside of the gate.

I tried to get Kolla in our testing pipeline during the Mitaka cycle.
I really did.
I contributed the necessary features I needed in Kolla in order to
make this work, like the configurable Yum repositories for example.

However, in the end, I had to put off the initiative because the gate
jobs were very flappy and unreliable.
We cannot afford to have a job that is *expected* to flap in our
testing pipeline, it leads to a lot of wasted time, effort and
resources.

I think there's been a lot of improvements since my last attempt but
to get a sample of data, I looked at ~30 recently merged reviews.
Of 260 total build/deploy jobs, 55 (or over 20%) failed -- and I
didn't account for rechecks, just the last known status of the check
jobs.
I put up the results of those jobs here [2].

In the case that interests me most, CentOS binary jobs, it's 5
failures out of 50 jobs, so 10%. Not as bad but still a concern for
me.

Other deployment projects like Puppet-OpenStack, OpenStack Ansible,
Packstack and TripleO have quite a bit of *voting* integration testing
jobs.
Why are Kolla's jobs non-voting and so unreliable?

Thanks,

[1]: https://github.com/rdo-infra/weirdo
[2]: 
https://docs.google.com/spreadsheets/d/1NYyMIDaUnlOD2wWuioAEOhjeVmZe7Q8_zdFfuLjquG4/edit#gid=0

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-14 Thread Na Zhu
Hi John,

Another question: I think the port-chain is unrelated to the lswitch. One 
port-chain includes multiple port-pair-groups and one flow-classifier, so how 
do we get the lswitch from the port-chain?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: discuss , "OpenStack Development Mailing 
List (not for usage questions)" , 
"Srilatha Tangirala" 
Date:   2016/06/15 11:04
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



The reason I did that was to be able to create reusable VNFs

Regards

John

Sent from my iPhone

On Jun 14, 2016, at 7:15 PM, Na Zhu  wrote:

John,

Since you have port-chain as a child of lswitch, do you still need port-pairs 
as a child of lswitch?





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM
To:John McDowall 
Cc:discuss , "OpenStack Development 
Mailing List (not for usage questions)" , "Srilatha Tangirala" 
Date:2016/06/15 09:11
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN


John,

OK, I will change networking-ovn IDL to align with the new schema.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)




From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:discuss , "OpenStack Development 
Mailing List (not for usage questions)" , "Srilatha Tangirala" 
Date:2016/06/15 08:30
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

I worked on ovs/ovn today and re-structured the schema. I could not figure 
out how to make this work without having lport-chains as a child of lswitch. 
So I have got the basics working -- attached is a simple shell script that 
creates and shows the port-chains. I tried to merge with the upstream 
master, but there are a bunch of changes that, while minor, would have taken 
some time to merge in, so I skipped it for now.

The new schema will break the networking-ovn IDL, apologies.  The areas I 
can think of are:

Port-chain is now a child of lswitch, so it needs that as a parameter.
Flow-classifier is now a child of port-chain only, so it needs to change from 
lswitch to lport-chain.

If you can work on the changes to networking-ovn, great (I promise not to 
change the schema again until we have had a wider review). If not, I will 
get to it tomorrow. 

Regards

John

From: Na Zhu 
Date: Monday, June 13, 2016 at 9:57 PM
To: John McDowall 
Cc: discuss , "OpenStack Development Mailing List 
(not for usage questions)" , Srilatha 
Tangirala 
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

OK, I also find that the columns "port_pairs" and "flow_classifiers" cannot be 
written by the IDL APIs; I will try to fix it.
If there is any update, I will send you an email.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"OpenStack Development Mailing List (not for usage questions)" 
, discuss , 
Srilatha Tangirala 
Date:2016/06/14 12:17
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Trying to implement this today showed that this will not work for OVN. I 
am going back to RussellB's original model with port-chain as a child of 
lswitch. 

I can make this work and then we can evolve from there. It will require 
some re-write of the idl code - hopefully I will get it done tomorrow.

Regards

John

Sent from my iPhone

On Jun 13, 2016, at 8:41 PM, Na Zhu  wrote:

Hi John,

I see you add column "port_pairs" and "flow_classifiers" to table 
Logical_Switch, I am not clear about it, the port-pair ingress port and 

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-14 Thread Na Zhu
John,

Since you have port-chain as a child of lswitch, do you still need port-pairs 
as a child of lswitch?





Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   Na Zhu/China/IBM
To: John McDowall 
Cc: discuss , "OpenStack Development Mailing 
List (not for usage questions)" , 
"Srilatha Tangirala" 
Date:   2016/06/15 09:11
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN


John,

OK, I will change networking-ovn IDL to align with the new schema.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)




From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: discuss , "OpenStack Development Mailing 
List (not for usage questions)" , 
"Srilatha Tangirala" 
Date:   2016/06/15 08:30
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

I worked on ovs/ovn today and re-structured the schema. I could not figure 
out how to make this work without having lport-chains as a child of lswitch. 
So I have got the basics working -- attached is a simple shell script that 
creates and shows the port-chains. I tried to merge with the upstream 
master, but there are a bunch of changes that, while minor, would have taken 
some time to merge in, so I skipped it for now.

The new schema will break the networking-ovn IDL, apologies.  The areas I 
can think of are:

Port-chain is now a child of lswitch, so it needs that as a parameter.
Flow-classifier is now a child of port-chain only, so it needs to change from 
lswitch to lport-chain.

If you can work on the changes to networking-ovn, great (I promise not to 
change the schema again until we have had a wider review). If not, I will 
get to it tomorrow. 

Regards

John

From: Na Zhu 
Date: Monday, June 13, 2016 at 9:57 PM
To: John McDowall 
Cc: discuss , "OpenStack Development Mailing List 
(not for usage questions)" , Srilatha 
Tangirala 
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

OK, I also find that the columns "port_pairs" and "flow_classifiers" cannot be 
written by the IDL APIs; I will try to fix it.
If there is any update, I will send you an email.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"OpenStack Development Mailing List (not for usage questions)" 
, discuss , 
Srilatha Tangirala 
Date:2016/06/14 12:17
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Trying to implement this today showed that this will not work for OVN. I 
am going back to RussellB's original model with port-chain as a child of 
lswitch. 

I can make this work and then we can evolve from there. It will require 
some re-write of the idl code - hopefully I will get it done tomorrow.

Regards

John

Sent from my iPhone

On Jun 13, 2016, at 8:41 PM, Na Zhu  wrote:

Hi John,

I see you added the columns "port_pairs" and "flow_classifiers" to the 
Logical_Switch table, but I am not clear about it: the port-pair ingress port 
and egress port can be the same, or they can be different and in the same or 
different networks, and the flow classifier is not per network either. Can 
you explain why you did that?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM@IBMCN
To:John McDowall 
Cc:Srilatha Tangirala , "OpenStack 
Development Mailing List \(not for usage questions\)" <
openstack-dev@lists.openstack.org>, discuss 
Date:2016/06/14 10:44
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Hi John,

My GitHub account is JunoZhu; please add me as a member of your private repo.
If you submit the WIP patch today, then I can update your WIP patch, with no 
need to update your private repo.
If not, i 

Re: [openstack-dev] [DIB] Adding GPT support

2016-06-14 Thread Gregory Haynes
On Tue, Jun 14, 2016, at 07:36 PM, Tony Breeds wrote:
> Hi All,
> I'd like to add GPT partitioning support to DIB.  My current
> understanding is that the only partitioning in DIB currently is
> provided by partitioning-sfdisk, which will not support GPT.
> 

This isn't made very clear by looking at the elements, but there are
actually two ways to partition images. There is the partitioning-sfdisk
element (which I am guessing is what you found) that partitions with
sfdisk. There is also the vm element which is the way most users
partition / create a bootloader for their images. The vm element uses
parted. There is also a patch up which adds GPT/UEFI support[1].

> My proposed solution is:
> 
> 1. Create a new element called partitioning-parted  that (surprise
> surprise)
>    uses parted to create the disk label and partitions.  This would live
>    alongside partitioning-sfdisk but provide a somewhat compatible way

I'd still like to see this - it would be great to break the partitioning
bits out of the vm element and in to a partitioning-parted element which
the vm element depends on.

> 2. Teach partitioning-parted how to parse DIB_PARTITIONING_SFDISK_SCHEMA
> and
>use parted to produce msdos disklabeled partition tables
> 3. Deprecate partitioning-sfdisk
> 4. Remove partitioning-sfdisk in line with the std. deprecation process.

From my cursory reading it seems like parted is the thing to use for
this, and there's really no reason to choose sfdisk over parted? I don't
have a ton of knowledge about this, but if that is the case then I like
this plan. I definitely want to make sure that there's no reason a user
would prefer to use sfdisk over parted, though...

> 
> Does this sound like a reasonable plan?
> Yours Tony.

Something else worth mentioning is that Andreas has been working on some
refactoring of our block-device.d phase[2]. I think the changes you're
looking for are mostly addressed in [1], but if you're hoping to do
something larger it's probably good to make sure they align with Andreas'
goals.

1:
https://review.openstack.org/#/c/287784/22/elements/vm/block-device.d/10-partition
2: https://review.openstack.org/#/c/319591/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] [stable] Re: [Openstack-stable-maint] Stable check of openstack/ceilometer failed

2016-06-14 Thread gordon chung


On 14/06/2016 12:10 PM, Ian Cordasco wrote:
> I wonder why more projects aren't seeing this in stable/liberty. Perhaps, 
> ceilometer stable/liberty isn't using upper-constraints? I think oslo.utils 
> 3.2.0 
> (https://github.com/openstack/requirements/blob/stable/liberty/upper-constraints.txt#L202)
>  is low enough to avoid this if you're using constraints. (It looks as if the 
> total_seconds removal was first released in 3.12.0 
> https://github.com/openstack/oslo.utils/commit/8f5e65cae3aaf8d0a89d16d8932c266151de44f7)
>
that's strange, i tried to see if it's just Ceilometer. the 
periodic-neutron-python27-liberty job seems to be capped appropriately 
to oslo.utils 3.2.0. maybe it's a dependency? the 
periodic-nova-python27-liberty job doesn't seem to have any entries 
since March so i can't verify that.
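
For anyone trying to reproduce this locally, one quick check of whether the 
liberty constraints are actually being applied (a sketch; the constraints URL 
and branch may need adjusting):

  pip install -c "https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/liberty" \
      -r requirements.txt -r test-requirements.txt
  pip freeze | grep oslo.utils    # liberty's upper constraint is 3.2.0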

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Discussion on to enable "Citrix XenServer CI" to vote openstack/tempest

2016-06-14 Thread Jianghua Wang
Thanks Masayuki for the comments and thanks Bob for the explanation.

All, 
  Please let us know if there are more comments/concerns or if we're good to 
proceed with voting. Thanks very much.
Regards,
Jianghua

-Original Message-
From: Bob Ball 
Sent: Tuesday, June 14, 2016 6:23 PM
To: OpenStack Development Mailing List (not for usage questions); Jianghua Wang
Subject: RE: [openstack-dev] [tempest] Discussion on to enable "Citrix 
XenServer CI" to vote openstack/tempest

Hi Masayuki,

We have been running against Tempest and commenting for many months (years?) 
and run in the Rackspace public cloud so have no capacity issues.

Indeed the execution time is longer than other jobs, because we actually have 
to use double nesting (Devstack running on Ubuntu in a VM under XenServer 
running on a VM under XenServer) to run these jobs in the Rackspace cloud.
We are not asking to block Jenkins reporting on these jobs, so taking a little 
longer than Jenkins to report shouldn't be a problem.

We're currently re-processing a backlog of tests from over the weekend, due to 
the broken Tempest change, so for the next 24 hours I expect the reporting 
times to be delayed while we process the backlog.  Looking at previous changes, 
such as https://review.openstack.org/#/c/325243/ or 
https://review.openstack.org/#/c/218355/ you can see that the Citrix XenServer 
CI consistently reports before Jenkins.

Thanks,

Bob

-Original Message-
From: Masayuki Igawa [mailto:masayuki.ig...@gmail.com]
Sent: 14 June 2016 11:07
To: Jianghua Wang 
Cc: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [tempest] Discussion on to enable "Citrix 
XenServer CI" to vote openstack/tempest

Hi Jianghua and QA team,

Thank you for bringing this up.

IMO, it's good to make it voting in openstack/tempest, because it's already a 
voting job in Nova and seems to have enough stability.
And we should keep test stability for projects.

My concern is the resources behind the "Citrix XenServer CI". Do you have 
enough resources for it?
Also, it seems like the "Citrix XenServer CI" job execution time is the longest 
of the Tempest jobs.

Best regards,
-- Masayuki

On Tue, Jun 14, 2016 at 2:36 PM, Jianghua Wang  wrote:
> Added project prefix in the subject and loop in Masayuki and Ghanshyam 
> who know the background as well. Thanks.
>
>
>
> Jianghua
>
>
>
> From: Jianghua Wang
> Sent: Tuesday, June 14, 2016 12:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Jianghua Wang
> Subject: Discussion on to enable "Citrix XenServer CI" to vote 
> openstack/tempest
>
>
>
> Hi all,
>
> Recently the "Citrix XenServer CI" was broken due to a bad 
> commit[1] to openstack/tempest. As the commit was merged on Friday, 
> which is a holiday here, it had been failing for more than three 
> days before we noticed and fixed[2] this problem. As this CI votes for 
> openstack/nova, it kept voting -1 until voting was disabled.
>
> So I suggest we also enable this XenServer CI to vote on tempest 
> changes to avoid similar cases in the future. In this case, the 
> tempest commit didn't consider the different behaviour of type-1 
> hypervisors, so it broke the XenServer test. "Citrix XenServer 
> CI" actually verified that patch set with a failure result, but that got 
> ignored because it was non-voting. So let's enable the voting to make life 
> easier :)
>
> Currently we have this CI voting for openstack/nova. Based on past 
> experience, it has been a stable CI (more stable than the Jenkins 
> check) as long as no bad commit breaks it.
>
> Thanks for any comments.
>
>
>
> [1] https://review.openstack.org/#/c/316672
>
> [2] https://review.openstack.org/#/c/328836/
>
>
>
> Regards,
>
> Jianghua
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] xenial or trusty

2016-06-14 Thread Jeffrey Zhang
It is hard for Kolla to support both releases of Ubuntu.

Because Docker separates the image build from the deploy, we have no idea about
the Ubuntu version in a Dockerfile
except by running a command to test it. We build the image using a Dockerfile.
If we want to support multiple releases of
Ubuntu, the Dockerfile will look like:

RUN if [ "$(awk -F= '/^VERSION_ID/{gsub(/"/,""); print $2}' /etc/os-release)" = "14.04" ]; then \
        apt-get install -y openjdk-7-jre; \
    elif [ "$(awk -F= '/^VERSION_ID/{gsub(/"/,""); print $2}' /etc/os-release)" = "16.04" ]; then \
        apt-get install -y openjdk-8-jre; \
    fi

This is still possible. But think about how to use COPY in the Dockerfile to
support multiple Ubuntu releases, like:

COPY sources.list /etc/apt/sources.list

The best I can come up with is:

COPY sources.list /etc/apt/sources.list
RUN if [ "$(awk -F= '/^VERSION_ID/{gsub(/"/,""); print $2}' /etc/os-release)" = "14.04" ]; then \
        sed -i 's/UBUNTU_RELEASE/trusty/' /etc/apt/sources.list; \
    elif [ "$(awk -F= '/^VERSION_ID/{gsub(/"/,""); print $2}' /etc/os-release)" = "16.04" ]; then \
        sed -i 's/UBUNTU_RELEASE/xenial/' /etc/apt/sources.list; \
    fi

On Sat, May 7, 2016 at 12:51 AM, Jesse Pretorius 
wrote:

> On 6 May 2016 at 16:27, Jeffrey Zhang  wrote:
>
>>
>> On Fri, May 6, 2016 at 9:09 PM, Jesse Pretorius <
>> jesse.pretor...@gmail.com> wrote:
>>
>>> FWIW OpenStack-Ansible is choosing to support deployment on both Ubuntu
>>> 14.04 LTS and Ubuntu 16.04 LTS for both the Newton and Ocata cycles, with
>>> the current proposal to drop it in P. The intent is to provide our
>>> deployers the opportunity to transition with a mixed deployment.
>>
>>
>> Are you meaning the host/baremetal OS? openstack-ansible deploys
>> OpenStack in LXC.
>> So it really does not care about the host machine's OS. Kolla does not care
>> about it either.
>> I think openstack-ansible uses a specific LXC image and does not support
>> multiple base images.
>>
>> If not, could you provide any proof of this?
>>
>
> OSA supports the implementation of OpenStack on bare metal or in LXC
> machine containers, so we need to cater for both. When an LXC machine
> container is deployed we've chosen to use the strategy of always
> implementing the same OS in the container as is implemented on the host.
> This simplifies our testing greatly.
>
> For the sake of background information, seeing as you asked, the base LXC
> image we're using comes from https://images.linuxcontainers.org/ giving
> us the ability to support multiple versions, multiple distributions and
> multiple architectures, and it's especially nifty that the entire image
> build process is open source and therefore can be implemented and
> customised by our deployers.
>
> I guess this is similar for Kolla in a different way because the image
> pipeline is defined by the project and implemented through the docker image
> building processes.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-14 Thread Anita Kuno
On 06/13/2016 04:06 AM, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date will be 
> around July 11 (probably July 6 - 8, but to be determined very soon).
> 
> The China teams will still focus on the Neutron, Nova, Cinder, Heat, Magnum, 
> Rally, Ironic, Dragonflow and Watcher, etc. projects, so we need developers to 
> join and fix as many bugs as possible, and cores to be on site to moderate 
> the code changes and merges. Welcome to the bug smash at Hangzhou - 
> http://www.chinahighlights.com/hangzhou/attraction/.
> 
> The good news is that for the first two cores from the above 
> projects who respond to this invitation in my email inbox and copy the CC 
> list, the sponsors are pleased to sponsor your international travel, 
> including flight and hotel. Please simply reply to me.
> 
> Best regards,
> --
> China OpenStack Bug Smash Team
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

I'll reply in private first because I am a core reviewer on the
project-config repo, which was not mentioned in your list but which you might
find useful at the bug smash nonetheless.

Let me know if you would like me to attend and I'll reply in public, if
not no worries.

Thank you,
Anita

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] Invitation to join Hangzhou Bug Smash

2016-06-14 Thread Rochelle Grober
Perhaps the right way to schedule these bug smashes is to do it when the 
release schedule is determined.  Decide on a fixed point within 
the release cycle (it's been just after M3/feature freeze a few times), and when 
the schedule is put together, the bug smash is part of the schedule.

By having the release schedule determine the week of the bug smash, we have a 
long timeline to get the planning done and don't have to worry about 
development schedule conflicts.

--Rocky

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Monday, June 13, 2016 2:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Zhuangzhen; Anni Lai; Liang, Maggie
Subject: Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

On Mon, Jun 13, 2016 at 08:06:50AM +, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date will
> be around July 11 (probably July 6 - 8, but to be determined very soon).

The newton-2 milestone release date is July 15th, so you certainly *don't*
want the event during that week. IOW, the 8th July is the latest you should
schedule it - don't let it slip into the next week starting July 11th, as
during the week of the n-2 milestone focus of the teams will be almost
exclusively on prep for that release, to the detriment of any bug smash
event.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-06-14 Thread Sean Dague

On 06/14/2016 06:11 PM, Angus Lees wrote:

Yep (3) is quite possible, and the only reason it doesn't just do this
already is because there's no way to find the name of the rootwrap
command to use (from any library, privsep or os-brick) - and I was never
very happy with the current need to specify a command line in
oslo.config purely for this lame reason.

As Sean points out, all the others involve some sort of configuration
change preceding the code.  I had imagined rollouts would work by
pushing out the harmless conf or sudoers change first, but hadn't
appreciated the strict change phases imposed by grenade (and ourselves).

If all "end-application" devs are happy calling something like (3)
before the first privileged operation occurs, then we should be good.  I
might even take the opportunity to phrase it as a general privsep.init()
function, and then we can use it for any other top-of-main()
privilege-setup steps that need to be taken in the future.


That sounds promising. It would be fine to emit a warning if it only was 
using the default, asking people to make a configuration change to make 
it go away. We're totally good with things functioning with warnings 
after transitions, that ops can adjust during their timetable.


-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Sean Dague

On 06/14/2016 07:28 PM, Monty Taylor wrote:

On 06/14/2016 05:42 PM, Doug Hellmann wrote:



I think this is the most important thing to me as it relates to this.
I'm obviously a huge proponent of clouds behaving more samely. But I
also think that, as Doug nicely describes above, we've sort of backed in
to removing something without a deprecation window ... largely because
of the complexities involved with the system here - and I'd like to make
sure that when we are being clear about behavior changes that we give
the warning period so that people can adapt.


I also think that going from "pass" to "pass *" is a useful social incentive. While 
I think communication of this new direction has happened pretty broadly, 
organizations are complex places, and it didn't filter everywhere it 
needed to with the urgency that was probably needed.


"pass *"  * - with a greylist which goes away in 6 months

Will hopefully be a reasonable enough push to get the behavior we want, 
which is everyone publishing the same interface.


-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-14 Thread Na Zhu
John,

OK, I will change networking-ovn IDL to align with the new schema.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: discuss , "OpenStack Development Mailing 
List (not for usage questions)" , 
"Srilatha Tangirala" 
Date:   2016/06/15 08:30
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

I worked on ovs/ovn today and re-structured the schema. I could not figure 
out how to make this work without having lport-chains as a child of lswitch. 
So I have got the basics working -- attached is a simple shell script that 
creates and shows the port-chains. I tried to merge with the upstream 
master, but there are a bunch of changes that, while minor, would have taken 
some time to merge in, so I skipped it for now.

The new schema will break the networking-ovn IDL, apologies.  The areas I 
can think of are:

Port-chain is now a child of lswitch, so it needs that as a parameter.
Flow-classifier is now a child of port-chain only, so it needs to change from 
lswitch to lport-chain.

If you can work on the changes to networking-ovn, great (I promise not to 
change the schema again until we have had a wider review). If not, I will 
get to it tomorrow. 

Regards

John

From: Na Zhu 
Date: Monday, June 13, 2016 at 9:57 PM
To: John McDowall 
Cc: discuss , "OpenStack Development Mailing List 
(not for usage questions)" , Srilatha 
Tangirala 
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

OK, I also find that the columns "port_pairs" and "flow_classifiers" cannot be 
written by the IDL APIs; I will try to fix it.
If there is any update, I will send you an email.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"OpenStack Development Mailing List (not for usage questions)" 
, discuss , 
Srilatha Tangirala 
Date:2016/06/14 12:17
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Trying to implement this today showed that this will not work for OVN. I 
am going back to RussellB's original model with port-chain as a child of 
lswitch. 

I can make this work and then we can evolve from there. It will require 
some re-write of the idl code - hopefully I will get it done tomorrow.

Regards

John

Sent from my iPhone

On Jun 13, 2016, at 8:41 PM, Na Zhu  wrote:

Hi John,

I see you added the columns "port_pairs" and "flow_classifiers" to the 
Logical_Switch table, but I am not clear about it: the port-pair ingress port 
and egress port can be the same, or they can be different and in the same or 
different networks, and the flow classifier is not per network either. Can 
you explain why you did that?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:Na Zhu/China/IBM@IBMCN
To:John McDowall 
Cc:Srilatha Tangirala , "OpenStack 
Development Mailing List \(not for usage questions\)" <
openstack-dev@lists.openstack.org>, discuss 
Date:2016/06/14 10:44
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Hi John,

My GitHub account is JunoZhu; please add me as a member of your private repo.
If you submit the WIP patch today, then I can update your WIP patch, with no 
need to update your private repo.
If not, I will update your private repo.

Thanks.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:discuss , Srilatha Tangirala <
srila...@us.ibm.com>, "OpenStack Development Mailing List (not for usage 
questions)" 
Date:2016/06/13 23:55
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



Juno,

Whatever is easiest for you -- I can 

Re: [openstack-dev] networking-sfc: unable to use SFC (ovs driver) with multiple networks

2016-06-14 Thread Cathy Zhang
Hi Banszel,

Please see inline. 

Thanks,
Cathy 

-Original Message-
From: Banszel, MartinX [mailto:martinx.bans...@intel.com] 
Sent: Tuesday, June 14, 2016 9:07 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] networking-sfc: unable to use SFC (ovs driver) with 
multiple networks

Hello,

I'd need some help with using the SFC implementation in openstack.

I use liberty version of devstack + liberty branch of networking-sfc.

It's not clear to me if the SFC instance and its networks should be separated 
from the remaining virtual network topology or if it should be connected to it.

E.g. consider the following topology, where the SFC and its networks net2 and net3 
(one for the ingress port, one for the egress port) are connected to the tenant's 
networks. I know that all three instances can share one network, but the use case 
I am trying to implement requires that every instance has its own separate 
network and that there is a different network for the ingress and egress port of the SF.

 +---+ +-+ +---+
 | VMSRC | |  VMSFC  | | VMDST |
 +---+---+ +--+---+--+ +---+---+
 | p1 (1.1.1.1) p2|   |p3  |p4 (4.4.4.4)
 ||   ||
-++--- net1   |   |  --+---+- net4
  |   |   ||
  |  ---+-+---) net2   |
  |  ---)--+--+ net3   |
  | |  |   |
  |  +--+--+--+|
  +--+ ROUTER ++
 ++


All networks are connected to a single router ROUTER. I created a flow 
classifier that matches all traffic going from VMSRC to VMDST 
(--logical-source-port p1 --source-ip-prefix=1.1.1.1/32 
--destination-ip-prefix=4.4.4.4/32), port pair p2,p3, a port pair group 
containing this port pair and a port chain containing this port pair group and 
flow classifier.

If I try to ping the 5.4.4.4 address from VMSRC, it is correctly steered 
through the VMSFC (where just ip_forwarding is set to 1) and forwarded back 
through the p3 port to the ROUTER.  The router finds that there are packets 
with source address 1.1.1.1 coming from a port where they should not (the router 
expects those packets from the net1 interface), they don't pass the reverse 
path filter, and the router drops them.

It works when I set the rp_filter off via sysctl command in the router 
namespace on the controller. But I don't want to do this -- I expect the sfc to 
work without such changes.
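
Concretely, the workaround amounts to something like this in the router's 
namespace on the controller (the namespace name is illustrative):

    sudo ip netns exec qrouter-<router-uuid> sysctl -w net.ipv4.conf.all.rp_filter=0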

Is such topology supported? What should the topology look like?

Cathy> No, current networking-sfc does not support this topology. 

I have noticed that when I disconnect net2 and net3 from the ROUTER, and 
add new routers ROUTER2 and ROUTER3 to the net2 and net3 networks respectively, 
without connecting them in any way to the ROUTER or the rest of the topology, 
OVS is able to send the traffic to the p2 port on the ingress side. However, on 
the egress side the packet is routed to ROUTER3, which drops it as it 
doesn't have any route for it.
Cathy> Router3 will not have any rule to forward the packet since 
networking-sfc does not install any new forwarding rules into a router; that is 
why it is dropped. In any case, the current implementation does not have proper 
support for chains across multiple subnets. If you put VMSRC and VMSFC in the 
same subnet and VMDST in another subnet, it should work. 

Thanks for any hints!

Best regards
Martin Banszel
--
Intel Research and Development Ireland Limited Registered in Ireland Registered 
Office: Collinstown Industrial Park, Leixlip, County Kildare Registered Number: 
308263


This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DIB] Adding GPT support

2016-06-14 Thread Tony Breeds
Hi All,
I'd like to add GPT partitioning support to DIB.  My current understanding 
is that the only partitioning in DIB currently is provided by 
partitioning-sfdisk, which does not support GPT.

My proposed solution is:

1. Create a new element called partitioning-parted that (surprise surprise)
   uses parted to create the disk label and partitions.  This would live along
   side partitioning-sfdisk but provide a somewhat compatible alternative
   (rough sketch below).
2. Teach partitioning-parted how to parse DIB_PARTITIONING_SFDISK_SCHEMA and
   use parted to produce msdos-disklabeled partition tables.
3. Deprecate partitioning-sfdisk.
4. Remove partitioning-sfdisk in line with the std. deprecation process.
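
For illustration, the guts of the new element would boil down to something 
like this (assuming it gets the target device the same way partitioning-sfdisk 
does; the layout is only an example):

    parted -s "$IMAGE_BLOCK_DEVICE" mklabel gpt
    parted -s "$IMAGE_BLOCK_DEVICE" mkpart ESP fat32 1MiB 513MiB
    parted -s "$IMAGE_BLOCK_DEVICE" set 1 boot on
    parted -s "$IMAGE_BLOCK_DEVICE" mkpart root ext4 513MiB 100%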

Does this sound like a reasonable plan?
Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-14 Thread John McDowall
Juno,

I worked on ovs/ovn today and re-structured the schema. I could not figure out 
how to make this work without having lport-chains as a child of lswitch. So I 
have got the basics working – attached is a simple shell script that creates and 
shows the port-chains. I tried to merge with the upstream master but there are 
a bunch of changes that, while minor, would have taken some time to merge in, so I 
skipped it for now.

The new schema will break the networking-ovn IDL, apologies.  The areas I can 
think of are:

Port-chain is now a child of lswitch, so it needs that as a parameter.
Flow-classifier is now a child of port-chain only, so it needs to change from 
lswitch to lport-chain.

If you can work on the changes to networking-ovn great (I promise not to change 
the schema again until we have had a wider review). If not I will get to it 
tomorrow.

Regards

John

From: Na Zhu
Date: Monday, June 13, 2016 at 9:57 PM
To: John McDowall
Cc: discuss, "OpenStack Development Mailing List (not for usage questions)", Srilatha Tangirala
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

Hi John,

OK, I also find that the columns "port_pairs" and "flow_classifiers" cannot be 
written by the idl APIs; I will try to fix it.
If there is any update, I will send you an email.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall
To: Na Zhu/China/IBM@IBMCN
Cc: "OpenStack Development Mailing List (not for usage questions)", discuss, Srilatha Tangirala
Date: 2016/06/14 12:17
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN




Juno,

Trying to implement this today showed that this will not work for OVN. I am 
going back to RussellB's original model with port-chain as a child of lswitch.

I can make this work and then we can evolve from there. It will require some 
re-write of the idl code - hopefully I will get it done tomorrow.

Regards

John

Sent from my iPhone

On Jun 13, 2016, at 8:41 PM, Na Zhu wrote:

Hi John,

I see you added the columns "port_pairs" and "flow_classifiers" to the table 
Logical_Switch. I am not clear about it: the port-pair ingress and egress 
ports can be the same, or they can be different and in the same or different 
networks, and the flow classifier is not per-network either. Can you explain 
why you do that?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: Na Zhu/China/IBM@IBMCN
To: John McDowall
Cc: Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)", discuss
Date: 2016/06/14 10:44
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN




Hi John,

My GitHub account is JunoZhu, please add me as a member of your private repo.
If you submit the WIP patch today, then I can update your WIP patch, no need to 
update your private repo.
If not, I will update your private repo.

Thanks.



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall
To: Na Zhu/China/IBM@IBMCN
Cc: discuss, Srilatha Tangirala, "OpenStack Development Mailing List (not for usage questions)"
Date: 2016/06/13 23:55
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 

Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-14 Thread Kevin Benton
>which generates an arbitrary name

I'm not a fan of this approach because it requires coordinated assumptions.
With the OVS hybrid plug strategy we have to make guesses on the agent side
about the presence of bridges with specific names that we never explicitly
requested and that we were never explicitly told about. So we end up with
code like [1] that is looking for a particular end of a veth pair it just
hopes is there so the rules have an effect.

>it seems that the LinuxBridge implementation can simply use an L2 agent
extension for creating the vlan interfaces for the subports

LinuxBridge implementation is the same regardless of the strategy for OVS.
The whole reason we have to come up with these alternative approaches for
OVS is because we can't use the obvious architecture of letting it plug
into the integration bridge due to VLANs already being used for network
isolation. I'm not sure pushing complexity out to os-vif to deal with this
is a great long-term strategy.

>Also, we didn’t make the OVS agent monitor for new linux bridges in the
hybrid_ovs strategy so that Neutron could be responsible for creating the
veth pair.

Linux Bridges are outside of the domain of OVS and even its agent. The L2
agent doesn't actually do anything with the bridge itself, it just needs a
veth device it can put iptables rules on. That's in contrast to these new
OVS bridges that we will be managing rules for, creating additional patch
ports, etc.
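
(For context, the per-port hybrid plug wiring today is roughly the following; 
the qbr/qvb/qvo names are the usual conventions and the details are simplified:)

    brctl addbr qbrXXX                         # per-port linux bridge
    ip link add qvbXXX type veth peer name qvoXXX
    brctl addif qbrXXX qvbXXX                  # bridge end of the veth pair
    ovs-vsctl --may-exist add-port br-int qvoXXX \
        -- set Interface qvoXXX external-ids:iface-id=<neutron-port-uuid>
    # the VM tap is attached to qbrXXX, and the security group iptables
    # rules are applied against the devices plugged into that bridge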

>Why shouldn't we use the tools that are already available to us?

Because we're trying to build a house and all we have are paint brushes. :)


1.
https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923

On Tue, Jun 14, 2016 at 9:49 AM, Peters, Rawlin 
wrote:

> On Tuesday, June 14, 2016 3:43 AM, Daniel P. Berrange (berra...@redhat.com)
> wrote:
> >
> > On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote:
> > > In strategy 2 we just pass 1 bridge name to Nova. That's the one that
> > > it ensures is created and plumbs the VM to. Since it's not responsible
> > > for patch ports it doesn't need to know anything about the other
> bridge.
> >
> > Ok, so we're already passing that bridge name - all we need to change is to
> > make sure it is actually created if it doesn't already exist? If so that
> > sounds simple enough to add to os-vif - we already have exactly the same
> > logic for the linux_bridge plugin
>
> Neutron doesn't actually pass the bridge name in the vif_details today,
> but Nova will use that bridge rather than br-int if it's passed in the
> vif_details.
>
> In terms of strategy 1, I was still only envisioning one bridge name
> getting passed in the vif_details (br-int). The "plug" action is only a
> variation of the hybrid_ovs strategy I mentioned earlier, which generates
> an arbitrary name for the linux bridge, uses that bridge in the instance's
> libvert XML config file, then creates a veth pair between the linux bridge
> and br-int. Like hybrid_ovs, the only bridge Nova/os-vif needs to know
> about is br-int for Strategy 1.
>
> In terms of architecture, we get KISS with Strategy 1 (W.R.T. the OVS
> agent, which is the most complex piece of this IMO). Using an L2 agent
> extension, we will also get DRY as well because it seems that the
> LinuxBridge implementation can simply use an L2 agent extension for
> creating the vlan interfaces for the subports. Similar to how QoS has
> different drivers for its L2 agent extension, we could have different
> drivers for OVS and LinuxBridge within the 'trunk' L2 agent extension. Each
> driver will want to make use of the same RPC calls/push mechanisms for
> subport creation/deletion.
>
> Also, we didn’t make the OVS agent monitor for new linux bridges in the
> hybrid_ovs strategy so that Neutron could be responsible for creating the
> veth pair. Was that a mistake or just an instance of KISS? Why shouldn't we
> use the tools that are already available to us?
>
> Regards,
> Rawlin
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Tempest pre-provisioned credentials in the gate

2016-06-14 Thread Andrea Frittoli
Dear all,

TL;DR: I'd like to propose to start running some of the existing dsvm
check/gate jobs using Tempest pre-provisioned credentials.

Full Text:
Tempest provides tests with two mechanisms to acquire test credentials [0]:
dynamic credentials and pre-provisioned ones.

The current check and gate jobs only use the dynamic credentials provider.

The pre-provisioned credentials provider has been introduced to support
running tests in parallel without the need to have admin
credentials in the tempest configuration file - which is a valid use case
especially when testing public clouds or, in general, a deployment that is
not owned by whoever runs the tests.

As a small extra, since pre-provisioned credentials are re-used to run many
tests during a CI test run, they give an opportunity to discover issues
related to cleanup of test resources.
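
For anyone who hasn't used it, the mechanism is driven by an accounts file 
plus a couple of tempest.conf options, roughly like this (names and paths are 
only an example):

    # accounts.yaml - one entry per concurrent test worker, plus some spare
    - username: 'tempest-account-1'
      tenant_name: 'tempest-tenant-1'
      password: 'secretpass'
    - username: 'tempest-account-2'
      tenant_name: 'tempest-tenant-2'
      password: 'secretpass'

    # tempest.conf
    [auth]
    use_dynamic_credentials = False
    test_accounts_file = /etc/tempest/accounts.yaml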

Pre-provisioned credentials are currently used in periodic jobs [1][2] - as
well as an experimental job defined for tempest changes. This means that
even if we are careful, there is a good chance for them to be inadvertently
broken by a change.

Until recently the periodic job suffered from a racy failure on object-storage
tests. A recent refactor [3] of the tool that pre-provisions the
accounts has fixed the issue: the past 8 runs of the periodic jobs have not
encountered that race anymore [4][5].

Specifically, I'd like to propose that we start by changing the neutron
jobs [6].

Andrea Frittoli

[0]
http://docs.openstack.org/developer/tempest/configuration.html#credential-provider-mechanisms
[1]
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n220
[2]
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n253
[3] https://review.openstack.org/#/c/317105/
[4]
http://status.openstack.org/openstack-health/#/job/periodic-tempest-dsvm-full-test-accounts-master
[5]
http://status.openstack.org/openstack-health/#/job/periodic-tempest-dsvm-neutron-full-test-accounts-master
[6] https://review.openstack.org/329723
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Doug Hellmann
Excerpts from Chris Hoge's message of 2016-06-14 16:19:11 -0700:
> 
> > On Jun 14, 2016, at 3:59 PM, Edward Leafe  wrote:
> > 
> > On Jun 14, 2016, at 5:50 PM, Matthew Treinish  wrote:
> > 
> >> But, if we add another possible state on the defcore side like conditional 
> >> pass,
> >> warning, yellow, etc. (the name doesn't matter) which is used to indicate 
> >> that
> >> things on product X could only pass when strict validation was disabled 
> >> (and
> >> be clear about where and why) then my concerns would be alleviated. I just 
> >> do
> >> not want this to end up not being visible to end users trying to evaluate
> >> interoperability of different clouds using the test results.
> > 
> > +1
> > 
> > Don't fail them, but don't cover up their incompatibility, either.
> > -- Ed Leafe
> 
> That’s not my proposal. My requirement is that vendors who want to do this
> state exactly which APIs are sending back additional data, and that this
> information be published.

I am reading the responses here as very much supporting that.

Doug

> 
> There are different levels of incompatibility. A response with additional data
> that can be safely ignored is different from a changed response that would
> cause a client to fail.
> 
> One would hope that micro-versions would be able to address this exact
> issue for vendors by giving them a means to propose optional but 
> well-defined API response additions (not extensions) that are defined
> upstream and usable by all vendors. If it’s not too off topic, can someone
> from the Nova team explain how something like that would work (if it
> would at all)?
> 
> -Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Hoge
Top posting one note, with direct comments inline. I’m proposing
this as a member of the DefCore working group, but this
proposal itself has not been accepted as the forward course of
action by the working group. These are my own views as the
administrator of the program and not those of the working group
itself, which may independently reject the idea regardless of the
response from the upstream devs.

I posted a link to this thread to the DefCore mailing list to make
that working group aware of the outstanding issues.

> On Jun 14, 2016, at 3:50 PM, Matthew Treinish  wrote:
> 
> On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
>> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
>>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
 Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
>> Last year, in response to Nova micro-versioning and extension updates[1],
>> the QA team added strict API schema checking to Tempest to ensure that
>> no additional properties were added to Nova API responses[2][3]. In the
>> last year, at least three vendors participating in the OpenStack Powered
>> Trademark program have been impacted by this change, two of which
>> reported this to the DefCore Working Group mailing list earlier this 
>> year[4].
>> 
>> The DefCore Working Group determines guidelines for the OpenStack Powered
>> program, which includes capabilities with associated functional tests
>> from Tempest that must be passed, and designated sections with associated
>> upstream code [5][6]. In determining these guidelines, the working group
>> attempts to balance the future direction of development with lagging
>> indicators of deployments and user adoption.
>> 
>> After a tremendous amount of consideration, I believe that the DefCore
>> Working Group needs to implement a temporary waiver for the strict API
>> checking requirements that were introduced last year, to give downstream
>> deployers more time to catch up with the strict micro-versioning
>> requirements determined by the Nova/Compute team and enforced by the
>> Tempest/QA team.
> 
> I'm very much opposed to this being done. If we're actually concerned with
> interoperability and verify that things behave in the same manner between 
> multiple
> clouds then doing this would be a big step backwards. The fundamental 
> disconnect
> here is that the vendors who have implemented out of band extensions or 
> were
> taking advantage of previously available places to inject extra attributes
> believe that doing so means they're interoperable, which is quite far from
> reality. **The API is not a place for vendor differentiation.**
 
 This is a temporary measure to address the fact that a large number
 of existing tests changed their behavior, rather than having new
 tests added to enforce this new requirement. The result is deployments
 that previously passed these tests may no longer pass, and in fact
 we have several cases where that's true with deployers who are
 trying to maintain their own standard of backwards-compatibility
 for their end users.
>>> 
>>> That's not what happened though. The API hasn't changed and the tests 
>>> haven't
>>> really changed either. We made our enforcement on Nova's APIs a bit 
>>> stricter to
>>> ensure nothing unexpected appeared. For the most part these tests work on any 
>>> version
>>> of OpenStack. (we only test it in the gate on supported stable releases, 
>>> but I
>>> don't expect things to have drastically shifted on older releases) It also
>>> doesn't matter which version of the API you run, v2.0 or v2.1. Literally, 
>>> the
>>> only case it ever fails is when you run something extra, not from the 
>>> community,
>>> either as an extension (which themselves are going away [1]) or another 
>>> service
>>> that wraps nova or imitates nova. I'm personally not comfortable saying 
>>> those
>>> extras are ever part of the OpenStack APIs.
>>> 
 We have basically three options.
 
 1. Tell deployers who are trying to do the right for their immediate
   users that they can't use the trademark.
 
 2. Flag the related tests or remove them from the DefCore enforcement
   suite entirely.
 
 3. Be flexible about giving consumers of Tempest time to meet the
   new requirement by providing a way to disable the checks.
 
 Option 1 goes against our own backwards compatibility policies.
>>> 
>>> I don't think backwards compatibility policies really apply to what we 
>>> define
>>> as the set of tests that as a community we are saying a vendor has to pass 
>>> to
>>> say they're OpenStack. From my perspective as a community we either take a 
>>> hard
>>> stance on this and say to 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Monty Taylor
On 06/14/2016 05:42 PM, Doug Hellmann wrote:
> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
>>> Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
 On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> Last year, in response to Nova micro-versioning and extension updates[1],
> the QA team added strict API schema checking to Tempest to ensure that
> no additional properties were added to Nova API responses[2][3]. In the
> last year, at least three vendors participating in the OpenStack Powered
> Trademark program have been impacted by this change, two of which
> reported this to the DefCore Working Group mailing list earlier this 
> year[4].
>
> The DefCore Working Group determines guidelines for the OpenStack Powered
> program, which includes capabilities with associated functional tests
> from Tempest that must be passed, and designated sections with associated
> upstream code [5][6]. In determining these guidelines, the working group
> attempts to balance the future direction of development with lagging
> indicators of deployments and user adoption.
>
> After a tremendous amount of consideration, I believe that the DefCore
> Working Group needs to implement a temporary waiver for the strict API
> checking requirements that were introduced last year, to give downstream
> deployers more time to catch up with the strict micro-versioning
> requirements determined by the Nova/Compute team and enforced by the
> Tempest/QA team.

 I'm very much opposed to this being done. If we're actually concerned with
 interoperability and verify that things behave in the same manner between 
 multiple
 clouds then doing this would be a big step backwards. The fundamental 
 disconnect
 here is that the vendors who have implemented out of band extensions or 
 were
 taking advantage of previously available places to inject extra attributes
 believe that doing so means they're interoperable, which is quite far from
 reality. **The API is not a place for vendor differentiation.**
>>>
>>> This is a temporary measure to address the fact that a large number
>>> of existing tests changed their behavior, rather than having new
>>> tests added to enforce this new requirement. The result is deployments
>>> that previously passed these tests may no longer pass, and in fact
>>> we have several cases where that's true with deployers who are
>>> trying to maintain their own standard of backwards-compatibility
>>> for their end users.
>>
>> That's not what happened though. The API hasn't changed and the tests haven't
>> really changed either. We made our enforcement on Nova's APIs a bit stricter 
>> to
>> ensure nothing unexpected appeared. For the most part these tests work on any 
>> version
>> of OpenStack. (we only test it in the gate on supported stable releases, but 
>> I
>> don't expect things to have drastically shifted on older releases) It also
>> doesn't matter which version of the API you run, v2.0 or v2.1. Literally, the
>> only case it ever fails is when you run something extra, not from the 
>> community,
>> either as an extension (which themselves are going away [1]) or another 
>> service
>> that wraps nova or imitates nova. I'm personally not comfortable saying those
>> extras are ever part of the OpenStack APIs.
>>
>>> We have basically three options.
>>>
>>> 1. Tell deployers who are trying to do the right for their immediate
>>>users that they can't use the trademark.
>>>
>>> 2. Flag the related tests or remove them from the DefCore enforcement
>>>suite entirely.
>>>
>>> 3. Be flexible about giving consumers of Tempest time to meet the
>>>new requirement by providing a way to disable the checks.
>>>
>>> Option 1 goes against our own backwards compatibility policies.
>>
>> I don't think backwards compatibility policies really apply to what we 
>> define
>> as the set of tests that as a community we are saying a vendor has to pass to
>> say they're OpenStack. From my perspective as a community we either take a 
>> hard
>> stance on this and say to be considered an interoperable cloud (and to get 
>> the
>> trademark) you have to actually have an interoperable product. We slowly 
>> ratchet
>> up the requirements every 6 months, there isn't any implied backwards
>> compatibility in doing that. You passed in the past but not in the newer 
>> stricter
>> guidelines.
>>
>> Also, even if I did think it applied, we're not talking about a change which
>> would fall into breaking that. The change was introduced a year and half ago
>> during kilo and landed a year ago during liberty:
>>
>> https://review.openstack.org/#/c/156130/
>>
>> That's way longer than our normal deprecation period of 3 months and a 
>> release
>> boundary.
>>
>>>
>>> Option 2 gives us no winners and 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Hoge

> On Jun 14, 2016, at 3:59 PM, Edward Leafe  wrote:
> 
> On Jun 14, 2016, at 5:50 PM, Matthew Treinish  wrote:
> 
>> But, if we add another possible state on the defcore side like conditional 
>> pass,
>> warning, yellow, etc. (the name doesn't matter) which is used to indicate 
>> that
>> things on product X could only pass when strict validation was disabled (and
>> be clear about where and why) then my concerns would be alleviated. I just do
>> not want this to end up not being visible to end users trying to evaluate
>> interoperability of different clouds using the test results.
> 
> +1
> 
> Don't fail them, but don't cover up their incompatibility, either.
> -- Ed Leafe

That’s not my proposal. My requirement is that vendors who want to do this
state exactly which APIs are sending back additional data, and that this
information be published.

There are different levels of incompatibility. A response with additional data
that can be safely ignored is different from a changed response that would
cause a client to fail.
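
To make that concrete, the kind of thing we are talking about is a (made-up) 
vendor key tacked onto an otherwise normal server response:

    {"server": {"id": "...", "name": "vm-1", "status": "ACTIVE",
                "VENDOR:custom_field": "foo"}}

A client that ignores unknown keys keeps working, but Tempest now rejects the 
response because the schema sets "additionalProperties": false.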

One would hope that micro-versions would be able to address this exact
issue for vendors by giving them a means to propose optional but 
well-defined API response additions (not extensions) that are defined
upstream and usable by all vendors. If it’s not too off topic, can someone
from the Nova team explain how something like that would work (if it
would at all)?

-Chris
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Edward Leafe
On Jun 14, 2016, at 5:50 PM, Matthew Treinish  wrote:

> But, if we add another possible state on the defcore side like conditional 
> pass,
> warning, yellow, etc. (the name doesn't matter) which is used to indicate that
> things on product X could only pass when strict validation was disabled (and
> be clear about where and why) then my concerns would be alleviated. I just do
> not want this to end up not being visible to end users trying to evaluate
> interoperability of different clouds using the test results.

+1

Don't fail them, but don't cover up their incompatibility, either.

-- Ed Leafe







signature.asc
Description: Message signed with OpenPGP using GPGMail
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Matthew Treinish
On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
> > On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> > > Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> > > > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > > > Last year, in response to Nova micro-versioning and extension 
> > > > > updates[1],
> > > > > the QA team added strict API schema checking to Tempest to ensure that
> > > > > no additional properties were added to Nova API responses[2][3]. In 
> > > > > the
> > > > > last year, at least three vendors participating the the OpenStack 
> > > > > Powered
> > > > > Trademark program have been impacted by this change, two of which
> > > > > reported this to the DefCore Working Group mailing list earlier this 
> > > > > year[4].
> > > > > 
> > > > > The DefCore Working Group determines guidelines for the OpenStack 
> > > > > Powered
> > > > > program, which includes capabilities with associated functional tests
> > > > > from Tempest that must be passed, and designated sections with 
> > > > > associated
> > > > > upstream code [5][6]. In determining these guidelines, the working 
> > > > > group
> > > > > attempts to balance the future direction of development with lagging
> > > > > indicators of deployments and user adoption.
> > > > > 
> > > > > After a tremendous amount of consideration, I believe that the DefCore
> > > > > Working Group needs to implement a temporary waiver for the strict API
> > > > > checking requirements that were introduced last year, to give 
> > > > > downstream
> > > > > deployers more time to catch up with the strict micro-versioning
> > > > > requirements determined by the Nova/Compute team and enforced by the
> > > > > Tempest/QA team.
> > > > 
> > > > I'm very much opposed to this being done. If we're actually concerned 
> > > > with
> > > > interoperability and verify that things behave in the same manner 
> > > > between multiple
> > > > clouds then doing this would be a big step backwards. The fundamental 
> > > > disconnect
> > > > here is that the vendors who have implemented out of band extensions or 
> > > > were
> > > > taking advantage of previously available places to inject extra 
> > > > attributes
> > > > believe that doing so means they're interoperable, which is quite far 
> > > > from
> > > > reality. **The API is not a place for vendor differentiation.**
> > > 
> > > This is a temporary measure to address the fact that a large number
> > > of existing tests changed their behavior, rather than having new
> > > tests added to enforce this new requirement. The result is deployments
> > > that previously passed these tests may no longer pass, and in fact
> > > we have several cases where that's true with deployers who are
> > > trying to maintain their own standard of backwards-compatibility
> > > for their end users.
> > 
> > That's not what happened though. The API hasn't changed and the tests 
> > haven't
> > really changed either. We made our enforcement on Nova's APIs a bit 
> > stricter to
> > ensure nothing unexpected appeared. For the most part these tests work on any 
> > version
> > of OpenStack. (we only test it in the gate on supported stable releases, 
> > but I
> > don't expect things to have drastically shifted on older releases) It also
> > doesn't matter which version of the API you run, v2.0 or v2.1. Literally, 
> > the
> > only case it ever fails is when you run something extra, not from the 
> > community,
> > either as an extension (which themselves are going away [1]) or another 
> > service
> > that wraps nova or imitates nova. I'm personally not comfortable saying 
> > those
> > extras are ever part of the OpenStack APIs.
> >
> > > We have basically three options.
> > > 
> > > 1. Tell deployers who are trying to do the right for their immediate
> > >users that they can't use the trademark.
> > > 
> > > 2. Flag the related tests or remove them from the DefCore enforcement
> > >suite entirely.
> > > 
> > > 3. Be flexible about giving consumers of Tempest time to meet the
> > >new requirement by providing a way to disable the checks.
> > > 
> > > Option 1 goes against our own backwards compatibility policies.
> > 
> > I don't think backwards compatibility policies really apply to what we 
> > define
> > as the set of tests that as a community we are saying a vendor has to pass 
> > to
> > say they're OpenStack. From my perspective as a community we either take a 
> > hard
> > stance on this and say to be considered an interoperable cloud (and to get 
> > the
> > trademark) you have to actually have an interoperable product. We slowly 
> > ratchet
> > up the requirements every 6 months, there isn't any implied backwards
> > compatibility in doing that. You passed in the past but not in the newer 
> > stricter
> > guidelines.
> > 
> > Also, even if I did think it applied, we're not 

Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation guide

2016-06-14 Thread Hongbin Lu
Thanks.

Actually, we were looking for lbaas v1 and the linked document seems to mainly 
talk about v2, but we are migrating to v2 so I am satisfied.

Best regards,
Hongbin

From: Anne Gentle [mailto:annegen...@justwriteclick.com]
Sent: June-14-16 6:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation 
guide

Let us know if this is what you're looking for:

http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html

Thanks Major Hayden for writing it up.
Anne

On Tue, Jun 14, 2016 at 3:54 PM, Hongbin Lu wrote:
Hi neutron-lbaas team,

Could anyone confirm if there is an operator-facing install guide for 
neutron-lbaas. So far, the closest one we could find is: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun , which doesn't seem to 
be a comprehensive install guide. I asked that because there are several users 
who want to install Magnum but couldn't find an instruction to install 
neutron-lbaas. Although we are working on decoupling from neutron-lbaas, we 
still need to provide instruction for users who want a load balancer. If the 
install guide is missing, any plan to create one?

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto 
> [mailto:adrian.o...@rackspace.com]
> Sent: June-02-16 6:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][lbaas] Operator-facing
> installation guide
>
> Brandon,
>
> Magnum uses neutron’s LBaaS service to allow for multi-master bays. We
> can balance connections between multiple kubernetes masters, for
> example. It’s not needed for single master bays, which are much more
> common. We have a blueprint that is in design stage for de-coupling
> magnum from neutron LBaaS for use cases that don’t require it:
>
> https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas
>
> Adrian
>
> > On Jun 2, 2016, at 2:48 PM, Brandon Logan
> > wrote:
> >
> > > Call me ignorant, but I'm surprised at neutron-lbaas being a
> > dependency of magnum.  Why is this?  Sorry if it has been asked
> before
> > and I've just missed that answer?
> >
> > Thanks,
> > Brandon
> > On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
> >> Hi lbaas team,
> >>
> >>
> >>
> >> I wonder if there is an operator-facing installation guide for
> >> neutron-lbaas. I asked that because Magnum is working on an
> >> installation guide [1] and neutron-lbaas is a dependency of Magnum.
> >> We want to link to an official lbaas guide so that our users will
> >> have a completed instruction. Any pointer?
> >>
> >>
> >>
> >> [1] https://review.openstack.org/#/c/319399/
> >>
> >>
> >>
> >> Best regards,
> >>
> >> Hongbin
> >>
> >>
> >>
> _
> >> _ OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Anne Gentle
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-06-14 17:29:06 -0400:
> On 06/14/2016 01:57 PM, Chris Hoge wrote:
> 
> > 
> > My proposal for addressing this problem approaches it at two levels:
> > 
> > * For the short term, I will submit a blueprint and patch to tempest that
> >   allows configuration of a grey-list of Nova APIs where strict response
> >   checking on additional properties will be disabled. So, for example,
> >   if the 'create  servers' API call returned extra properties on that call,
> >   the strict checking on this line[8] would be disabled at runtime.
> >   Use of this code path will emit a deprecation warning, and the
> >   code will be scheduled for removal in 2017 directly after the release
> >   of the 2017.01 guideline. Vendors would be required so submit the
> >   grey-list of APIs with additional response data that would be
> >   published to their marketplace entry.
> 
> To understand more. Will there be a visible asterisk with their
> registration that says they require a grey-list?
> 
> > * Longer term, vendors will be expected to work with upstream to update
> >   the API for returning additional data that is compatible with
> >   API micro-versioning as defined by the Nova team, and the waiver would
> >   no longer be allowed after the release of the 2017.01 guideline.
> > 
> > For the next half-year, I feel that this approach strengthens 
> > interoperability
> > by accurately capturing the current state of OpenStack deployments and
> > client tools. Before this change, additional properties on responses
> > weren't explicitly disallowed, and vendors and deployers took advantage
> > of this in production. While this is behavior that the Nova and QA teams
> > want to stop, it will take a bit more time to reach downstream. Also, as
> > of right now, as far as I know the only client that does strict response
> > checking for Nova responses is the Tempest client. Currently, additional
> > properties in responses are ignored and do not break existing client
> > functionality. There is currently little to no harm done to downstream
> > users by temporarily allowing additional data to be returned in responses.
> 
> In general I'm ok with this, as long as three things are true:
> 
> 1) registrations that need the grey list are visually indicated quite
> clearly and publicly that they needed it to pass.

I like that. Chris' proposal was that the information would need to be
submitted with the application, and I think publishing it makes sense.
I'd like to see the whole list, either which APIs had to be flagged or
at least which tests, whichever we can do.

> 
> 2) 2017.01 is a firm cutoff.
> 
> 3) We have evidence that folks that are having challenges with the
> strict enforcement have made getting compliant a top priority.
> 
> 
> 3 is the one where I don't have any data either way. But I didn't see
> any specs submissions (which are required for API changes in Nova) for
> Newton that would indicate anyone is working on this. For 2017 to be a
> hard stop, that means folks are either deleting this from their
> interface, or proposing in Ocata. Which is a really short runway if this
> stuff isn't super straight forward and already upstream agreed.
> 
> So I'm provisionally ok with this, if folks in the know feel like 3 is
> covered.
> 
> -Sean
> 
> P.S. The Tempest changes pretty much just anticipate the Nova changes
> which are deleting all these facilities in Newton -
> https://specs.openstack.org/openstack/nova-specs/specs/newton/approved/api-no-more-extensions.html
> - so in some ways we aren't doing folks a ton of favors letting them
> delay too far because they are about to hit a brick wall on the code side.
> 
> -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-06-14 Thread Angus Lees
On Tue, 14 Jun 2016 at 23:04 Daniel P. Berrange  wrote:

> On Tue, Jun 14, 2016 at 07:49:54AM -0400, Sean Dague wrote:
>
> [snip]
>

Urgh, thanks for the in-depth analysis :/

> The crux of the problem is that os-brick 1.4 and privsep can't be used
> > without a config file change during the upgrade. Which violates our
> > policy, because it breaks rolling upgrades.
>
> os-vif support is going to face exactly the same problem. We just followed
> os-brick's lead by adding a change to devstack to explicitly set the
> required config options in nova.conf to change privsep to use rootwrap
> instead of plain sudo.
>
> Basically every single user of privsep is likely to face the same
> problem.
>
> > So... we have a few options:
> >
> > 1) make an exception here with release notes, because it's the only way
> > to move forward.
>
> That's quite user hostile I think.
>
> > 2) have some way for os-brick to use either mode for a transition period
> > (depending on whether privsep is configured to work)
>
> I'm not sure that's viable - at least for os-vif we started from
> a clean slate to assume use of privsep, so we won't be able to have
> any optional fallback to non-privsep mode.
>
> > 3) Something else ?
>
> 3) Add an API to oslo.privsep that lets us configure the default
>command to launch the helper. Nova would invoke this on startup
>
>   privsep.set_default_helper("sudo nova-rootwrap ")
>
> 4) Have oslo.privsep install a sudo rule that grants permission
>to run privsep-helper, without needing rootwrap.
>
> 5) Have each user of privsep install a sudo rule to grants
>permission to run privsep-helper with just their specific
>entry point context, without needing rootwrap
>
> Any of 3/4/5 work out of the box, but I'm probably favouring
> option 4, then 5, then 3.
>
>
Yep (3) is quite possible, and the only reason it doesn't just do this
already is because there's no way to find the name of the rootwrap command
to use (from any library, privsep or os-brick) - and I was never very happy
with the current need to specify a command line in oslo.config purely for
this lame reason.

As Sean points out, all the others involve some sort of configuration
change preceding the code.  I had imagined rollouts would work by pushing
out the harmless conf or sudoers change first, but hadn't appreciated the
strict change phases imposed by grenade (and ourselves).

If all "end-application" devs are happy calling something like (3) before
the first privileged operation occurs, then we should be good.  I might
even take the opportunity to phrase it as a general privsep.init()
function, and then we can use it for any other top-of-main()
privilege-setup steps that need to be taken in the future.
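
Concretely, I'm imagining something like this near the top of main(), before 
the first privileged call - to be clear, the init() entry point below is only 
a strawman and does not exist in oslo.privsep today:

    # strawman only: no such oslo.privsep API exists yet
    import oslo_privsep as privsep   # import name assumed for illustration

    def main():
        parse_config_and_cli_args()  # placeholder for the usual startup code
        # tell privsep how to escalate before anything privileged runs
        privsep.init(helper_command="sudo nova-rootwrap /etc/nova/rootwrap.conf")
        run_service()                # placeholder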

 - Gus
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation guide

2016-06-14 Thread Anne Gentle
Let us know if this is what you're looking for:

http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html

Thanks Major Hayden for writing it up.
Anne

On Tue, Jun 14, 2016 at 3:54 PM, Hongbin Lu  wrote:

> Hi neutron-lbaas team,
>
> Could anyone confirm if there is an operator-facing install guide for
> neutron-lbaas. So far, the closest one we could find is:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun , which doesn't
> seem to be a comprehensive install guide. I asked that because there are
> several users who want to install Magnum but couldn't find an instruction
> to install neutron-lbaas. Although we are working on decoupling from
> neutron-lbaas, we still need to provide instruction for users who want a
> load balancer. If the install guide is missing, any plan to create one?
>
> Best regards,
> Hongbin
>
> > -Original Message-
> > From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> > Sent: June-02-16 6:50 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [magnum][lbaas] Operator-facing
> > installation guide
> >
> > Brandon,
> >
> > Magnum uses neutron’s LBaaS service to allow for multi-master bays. We
> > can balance connections between multiple kubernetes masters, for
> > example. It’s not needed for single master bays, which are much more
> > common. We have a blueprint that is in design stage for de-coupling
> > magnum from neutron LBaaS for use cases that don’t require it:
> >
> > https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas
> >
> > Adrian
> >
> > > On Jun 2, 2016, at 2:48 PM, Brandon Logan
> >  wrote:
> > >
> > > Call me ignorant, but I'm surprised at neutron-lbaas being a
> > > dependency of magnum.  Why is this?  Sorry if it has been asked
> > before
> > > and I've just missed that answer?
> > >
> > > Thanks,
> > > Brandon
> > > On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
> > >> Hi lbaas team,
> > >>
> > >>
> > >>
> > >> I wonder if there is an operator-facing installation guide for
> > >> neutron-lbaas. I asked that because Magnum is working on an
> > >> installation guide [1] and neutron-lbaas is a dependency of Magnum.
> > >> We want to link to an official lbaas guide so that our users will
> > >> have a completed instruction. Any pointer?
> > >>
> > >>
> > >>
> > >> [1] https://review.openstack.org/#/c/319399/
> > >>
> > >>
> > >>
> > >> Best regards,
> > >>
> > >> Hongbin
> > >>
> > >>
> > >>
> > _
> > >> _ OpenStack Development Mailing List (not for usage questions)
> > >> Unsubscribe:
> > >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > >
> > __
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Virtuozzo (Compute) CI is incorrectly patching for resize support

2016-06-14 Thread Matt Riedemann
It was pointed out today in IRC that the Virtuozzo CI has been failing 
on this change for the libvirt imagebackend refactor:


https://review.openstack.org/#/c/282580/

Diana was having a hard time sorting out the line numbers in the stack 
trace though from the logs, because they didn't exist in her series.


Long story short, that's because the job checks to see if the change is 
the change that adds resize support for virtuozzo:


https://review.openstack.org/#/c/182257/

And if not, it fetches that change:

23:48:46 2016-06-10 23:48:58.863 | + cd /opt/stack/new/nova
23:48:46 2016-06-10 23:48:58.872 | + [[ 282580 -ne 182257 ]]
23:48:46 2016-06-10 23:48:58.875 | + git fetch 
https://review.openstack.org/p/openstack/nova refs/changes/57/182257/37
23:48:59 2016-06-10 23:49:11.357 | From 
https://review.openstack.org/p/openstack/nova
23:48:59 2016-06-10 23:49:11.359 |  * branch 
refs/changes/57/182257/37 -> FETCH_HEAD

23:48:59 2016-06-10 23:49:11.366 | + git cherry-pick FETCH_HEAD
23:48:59 2016-06-10 23:49:11.689 | [detached HEAD 44b6772] libvirt: 
virtuozzo instance resize support


It's not valid to patch Nova in your CI when testing other changes; it 
breaks the whole point of CI testing if you have to patch things into it 
that aren't in the actual dependency change or repo - because when it 
fails, like in this case, one doesn't know whether it's the actual change 
that's broken or the patch in the CI job.


I'm assuming this was done because I asked for the Virtuozzo CI to run 
the resize tests in tempest against 
https://review.openstack.org/#/c/182257/ - which it is, but that didn't 
mean also do it for all other changes in Nova. The CI job should 
conditionally enable those tests when testing change 182257 but not 
anything else until that's merged.
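
Something along these lines in the job script would accomplish that without 
patching the tree (variable names and the regex are only illustrative):

    # only widen the tempest run when the change under test IS the resize
    # change; never cherry-pick anything on top of other changes
    if [[ "$ZUUL_CHANGE" == "182257" ]]; then
        export DEVSTACK_GATE_TEMPEST_REGEX="${DEVSTACK_GATE_TEMPEST_REGEX}|.*resize.*"
    fi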


As a side note, the job isn't fetching the latest patch set of the 
resize change anyway, at least not as of last week, it's fetching patch 
set 37 but 39 is the latest.


Anyway, this isn't meant to shame, but to inform and correct. No one 
from the Virtuozzo team was in the nova IRC channel when we discovered 
this, so I needed to get it into the dev list.


But please get this fixed ASAP since it's invalidating the Virtuozzo CI 
results.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Doug Hellmann
Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> > Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> > > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > > Last year, in response to Nova micro-versioning and extension 
> > > > updates[1],
> > > > the QA team added strict API schema checking to Tempest to ensure that
> > > > no additional properties were added to Nova API responses[2][3]. In the
> > > > last year, at least three vendors participating in the OpenStack 
> > > > Powered
> > > > Trademark program have been impacted by this change, two of which
> > > > reported this to the DefCore Working Group mailing list earlier this 
> > > > year[4].
> > > > 
> > > > The DefCore Working Group determines guidelines for the OpenStack 
> > > > Powered
> > > > program, which includes capabilities with associated functional tests
> > > > from Tempest that must be passed, and designated sections with 
> > > > associated
> > > > upstream code [5][6]. In determining these guidelines, the working group
> > > > attempts to balance the future direction of development with lagging
> > > > indicators of deployments and user adoption.
> > > > 
> > > > After a tremendous amount of consideration, I believe that the DefCore
> > > > Working Group needs to implement a temporary waiver for the strict API
> > > > checking requirements that were introduced last year, to give downstream
> > > > deployers more time to catch up with the strict micro-versioning
> > > > requirements determined by the Nova/Compute team and enforced by the
> > > > Tempest/QA team.
> > > 
> > > I'm very much opposed to this being done. If we're actually concerned with
> > > interoperability and verify that things behave in the same manner between 
> > > multiple
> > > clouds then doing this would be a big step backwards. The fundamental 
> > > disconnect
> > > here is that the vendors who have implemented out of band extensions or 
> > > were
> > > taking advantage of previously available places to inject extra attributes
> > > believe that doing so means they're interoperable, which is quite far from
> > > reality. **The API is not a place for vendor differentiation.**
> > 
> > This is a temporary measure to address the fact that a large number
> > of existing tests changed their behavior, rather than having new
> > tests added to enforce this new requirement. The result is deployments
> > that previously passed these tests may no longer pass, and in fact
> > we have several cases where that's true with deployers who are
> > trying to maintain their own standard of backwards-compatibility
> > for their end users.
> 
> That's not what happened though. The API hasn't changed and the tests haven't
> really changed either. We made our enforcement on Nova's APIs a bit stricter 
> to
> ensure nothing unexpected appeared. For the most part these tests work on any 
> version
> of OpenStack. (we only test it in the gate on supported stable releases, but I
> don't expect things to have drastically shifted on older releases) It also
> doesn't matter which version of the API you run, v2.0 or v2.1. Literally, the
> only case it ever fails is when you run something extra, not from the 
> community,
> either as an extension (which themselves are going away [1]) or another 
> service
> that wraps nova or imitates nova. I'm personally not comfortable saying those
> extras are ever part of the OpenStack APIs.
>
> > We have basically three options.
> > 
> > 1. Tell deployers who are trying to do the right for their immediate
> >users that they can't use the trademark.
> > 
> > 2. Flag the related tests or remove them from the DefCore enforcement
> >suite entirely.
> > 
> > 3. Be flexible about giving consumers of Tempest time to meet the
> >new requirement by providing a way to disable the checks.
> > 
> > Option 1 goes against our own backwards compatibility policies.
> 
> I don't think backwards compatibility policies really apply to what we 
> define
> as the set of tests that as a community we are saying a vendor has to pass to
> say they're OpenStack. From my perspective as a community we either take a 
> hard
> stance on this and say to be considered an interoperable cloud (and to get the
> trademark) you have to actually have an interoperable product. We slowly 
> ratchet
> up the requirements every 6 months, there isn't any implied backwards
> compatibility in doing that. You passed in the past but not in the newer 
> stricter
> guidelines.
> 
> Also, even if I did think it applied, we're not talking about a change which
> would fall into breaking that. The change was introduced a year and half ago
> during kilo and landed a year ago during liberty:
> 
> https://review.openstack.org/#/c/156130/
> 
> That's way longer than our normal deprecation period of 3 months and a release
> boundary.
> 
> > 
> > Option 2 gives us no 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Sean Dague
On 06/14/2016 01:57 PM, Chris Hoge wrote:

> 
> My proposal for addressing this problem approaches it at two levels:
> 
> * For the short term, I will submit a blueprint and patch to tempest that
>   allows configuration of a grey-list of Nova APIs where strict response
>   checking on additional properties will be disabled. So, for example,
>   if the 'create  servers' API call returned extra properties on that call,
>   the strict checking on this line[8] would be disabled at runtime.
>   Use of this code path will emit a deprecation warning, and the
>   code will be scheduled for removal in 2017 directly after the release
>   of the 2017.01 guideline. Vendors would be required to submit the
>   grey-list of APIs with additional response data that would be
>   published to their marketplace entry.

To understand more: will there be a visible asterisk on their
registration that says they require a grey-list?

> * Longer term, vendors will be expected to work with upstream to update
>   the API for returning additional data that is compatible with
>   API micro-versioning as defined by the Nova team, and the waiver would
>   no longer be allowed after the release of the 2017.01 guideline.
> 
> For the next half-year, I feel that this approach strengthens interoperability
> by accurately capturing the current state of OpenStack deployments and
> client tools. Before this change, additional properties on responses
> weren't explicitly disallowed, and vendors and deployers took advantage
> of this in production. While this is behavior that the Nova and QA teams
> want to stop, it will take a bit more time to reach downstream. Also, as
> of right now, as far as I know the only client that does strict response
> checking for Nova responses is the Tempest client. Currently, additional
> properties in responses are ignored and do not break existing client
> functionality. There is currently little to no harm done to downstream
> users by temporarily allowing additional data to be returned in responses.

In general I'm ok with this, as long as three things are true:

1) registrations that need the grey list are clearly and publicly marked as
having needed it to pass.

2) 2017.01 is a firm cutoff.

3) We have evidence that folks that are having challenges with the
strict enforcement have made getting compliant a top priority.


3 is the one where I don't have any data either way. But I didn't see
any specs submissions (which are required for API changes in Nova) for
Newton that would indicate anyone is working on this. For 2017 to be a
hard stop, that means folks are either deleting this from their
interface, or proposing in Ocata. Which is a really short runway if this
stuff isn't super straightforward and already agreed upstream.

So I'm provisionally ok with this, if folks in the know feel like 3 is
covered.

-Sean

P.S. The Tempest changes pretty much just anticipate the Nova changes
which are deleting all these facilities in Newton -
https://specs.openstack.org/openstack/nova-specs/specs/newton/approved/api-no-more-extensions.html
- so in some ways we aren't doing folks a ton of favors letting them
delay too far because they are about to hit a brick wall on the code side.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Matthew Treinish
On Tue, Jun 14, 2016 at 12:19:54PM -0700, Chris Hoge wrote:
> 
> > On Jun 14, 2016, at 11:21 AM, Matthew Treinish  wrote:
> > 
> > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> >> Last year, in response to Nova micro-versioning and extension updates[1],
> >> the QA team added strict API schema checking to Tempest to ensure that
> >> no additional properties were added to Nova API responses[2][3]. In the
> >> last year, at least three vendors participating in the OpenStack Powered
> >> Trademark program have been impacted by this change, two of which
> >> reported this to the DefCore Working Group mailing list earlier this 
> >> year[4].
> >> 
> >> The DefCore Working Group determines guidelines for the OpenStack Powered
> >> program, which includes capabilities with associated functional tests
> >> from Tempest that must be passed, and designated sections with associated
> >> upstream code [5][6]. In determining these guidelines, the working group
> >> attempts to balance the future direction of development with lagging
> >> indicators of deployments and user adoption.
> >> 
> >> After a tremendous amount of consideration, I believe that the DefCore
> >> Working Group needs to implement a temporary waiver for the strict API
> >> checking requirements that were introduced last year, to give downstream
> >> deployers more time to catch up with the strict micro-versioning
> >> requirements determined by the Nova/Compute team and enforced by the
> >> Tempest/QA team.
> > 
> > I'm very much opposed to this being done. If we're actually concerned with
> > interoperability and verify that things behave in the same manner between 
> > multiple
> > clouds then doing this would be a big step backwards. The fundamental 
> > disconnect
> > here is that the vendors who have implemented out of band extensions or were
> > taking advantage of previously available places to inject extra attributes
> > believe that doing so means they're interoperable, which is quite far from
> > reality. **The API is not a place for vendor differentiation.**
> 
> Yes, it’s bad practice, but it’s also a reality, and I honestly believe that
> vendors have received the message and are working on changing.

They might be working on this, but this change was coming for quite some
time; it shouldn't be a surprise to anyone at this point. I mean seriously, it's
been in tempest for 1 year, and it took 6 months to land. Also, let's say we set
a hard deadline on this new option to disable the enforcement and enforce it.
Then, if we implement a similar change on keystone, are we going to have to do
the same thing again when vendors who have custom things running there fail?

> 
> > As a user of several clouds myself I can say that having random gorp in a
> > response makes it much more difficult to use my code against multiple 
> > clouds. I
> > have to determine which properties being returned are specific to that 
> > vendor's
> > cloud and if I actually need to depend on them for anything it makes 
> > whatever
> > code I'm writing incompatible for using against any other cloud. (unless I
> > special case that block for each cloud) Sean Dague wrote a good post where 
> > a lot
> > of this was covered a year ago when microversions was starting to pick up 
> > steam:
> > 
> > https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2 
> > 
> > 
> > I'd recommend giving it a read, he explains the user first perspective more
> > clearly there.
> > 
> > I believe Tempest in this case is doing the right thing from an 
> > interoperability
> > perspective and ensuring that the API is actually the API. Not an API with 
> > extra
> > bits a vendor decided to add.
> 
> A few points on this, though. Right now, Nova is the only API that is
> enforcing this, and the clients. While this may change in the
> future, I don’t think it accurately represents the reality of what’s
> happening in the ecosystem.

This in itself doesn't make a difference. There is a disparity in the level of
testing across all the projects. Nova happens to be further along in regards
to api stability and testing things compared to a lot of projects, it's not
really a surprise that they're the first for this to come up on. It's only a
matter of time for other projects to follow nova's example and implement similar
enforcement.

> 
> As mentioned before, we also need to balance the lagging nature of
> DefCore as an interoperability guideline with the needs of testing
> upstream changes. I’m not asking for a permanent change that
> undermines the goals of Tempest for QA, rather a temporary
> upstream modification that recognizes the challenges faced by
> vendors in the market right now, and gives them room to continue
> to align themselves with upstream. Without this, the two other 
> alternatives are to:
> 
> * Have some vendors leave the Powered program unnecessarily,
>   weakening it.
> * 

Re: [openstack-dev] [magnum][lbaas][docs] Operator-facing installation guide

2016-06-14 Thread Hongbin Lu
Hi neutron-lbaas team,

Could anyone confirm whether there is an operator-facing install guide for 
neutron-lbaas? So far, the closest one we could find is: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/HowToRun , which doesn't seem to 
be a comprehensive install guide. I asked that because there are several users 
who want to install Magnum but couldn't find instructions for installing 
neutron-lbaas. Although we are working on decoupling from neutron-lbaas, we 
still need to provide instructions for users who want a load balancer. If the 
install guide is missing, is there any plan to create one?

Best regards,
Hongbin

> -Original Message-
> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
> Sent: June-02-16 6:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][lbaas] Operator-facing
> installation guide
> 
> Brandon,
> 
> Magnum uses neutron’s LBaaS service to allow for multi-master bays. We
> can balance connections between multiple kubernetes masters, for
> example. It’s not needed for single master bays, which are much more
> common. We have a blueprint that is in design stage for de-coupling
> magnum from neutron LBaaS for use cases that don’t require it:
> 
> https://blueprints.launchpad.net/magnum/+spec/decouple-lbaas
> 
> Adrian
> 
> > On Jun 2, 2016, at 2:48 PM, Brandon Logan
>  wrote:
> >
> > Call me ignorant, but I'm surprised at neutron-lbaas being a
> > dependency of magnum.  Why is this?  Sorry if it has been asked
> before
> > and I've just missed that answer?
> >
> > Thanks,
> > Brandon
> > On Wed, 2016-06-01 at 14:39 +, Hongbin Lu wrote:
> >> Hi lbaas team,
> >>
> >>
> >>
> >> I wonder if there is an operator-facing installation guide for
> >> neutron-lbaas. I asked that because Magnum is working on an
> >> installation guide [1] and neutron-lbaas is a dependency of Magnum.
> >> We want to link to an official lbaas guide so that our users will
> >> have complete instructions. Any pointer?
> >>
> >>
> >>
> >> [1] https://review.openstack.org/#/c/319399/
> >>
> >>
> >>
> >> Best regards,
> >>
> >> Hongbin
> >>
> >>
> >>
> _
> >> _ OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-14 Thread Anita Kuno
On 06/09/2016 02:27 PM, Tim Bell wrote:
> If we can confirm the dates and location, there is a reasonable chance we 
> could also offer remote conferencing using Vidyo at CERN. While it is not the 
> same as an F2F experience, it would provide the possibility for remote 
> participation for those who could not make it to Geneva.
> 
> We may also be able to organize tours, such as to the anti-matter factory and 
> superconducting magnet test labs prior or afterwards if anyone is interested…
> 
> Tim

Shame Magnum said no, I was trying to figure out how to join the mid-cycle.

What other projects would make sense for you to host mid-cycles for
should the opportunity arise? I can keep my ears open.

Thanks Tim,
Anita.

> 
> From: Spyros Trigazis 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Wednesday 8 June 2016 at 16:43
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
> 
> Hi Hongbin.
> 
> CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2
> 
> Cheers,
> Spyros
> 
> 
> On 8 June 2016 at 16:01, Hongbin Lu 
> > wrote:
> Ricardo,
> 
> Thanks for the offer. Could you let me know the exact location?
> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Ricardo Rocha 
>> [mailto:rocha.po...@gmail.com]
>> Sent: June-08-16 5:43 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>>
>> Hi Hongbin.
>>
>> Not sure how this fits everyone, but we would be happy to host it at
>> CERN. How do people feel about it? We can add a nice tour of the place
>> as a bonus :)
>>
>> Let us know.
>>
>> Ricardo
>>
>>
>>
>> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
>> >
>> wrote:
>>> Hi all,
>>>
>>>
>>>
>>> Please find the Doodle poll below for selecting the Magnum midcycle
>> date.
>>> Presumably, it will be a 2 days event. The location is undecided for
>> now.
>>> The previous midcycles were hosted in bay area so I guess we will
>> stay
>>> there at this time.
>>>
>>>
>>>
>>> http://doodle.com/poll/5tbcyc37yb7ckiec
>>>
>>>
>>>
>>> In addition, the Magnum team is looking for a host for the midcycle.
>>> Please let us know if you are interested in hosting us.
>>>
>>>
>>>
>>> Best regards,
>>>
>>> Hongbin
>>>
>>>
>>>
>> __
>>>  OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-14 Thread Hongbin Lu
Hi team,

As discussed in the team meeting, we are going to choose between Austin and San 
Francisco. A Doodle poll was created to select the location: 
http://doodle.com/poll/2x9utspir7vk8ter . Please cast your vote there. On 
behalf of the Magnum team, thanks to Rackspace for offering to host.

Best regards,
Hongbin

From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: June-09-16 3:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Rackspace is willing to host in Austin, TX or San Antonio, TX, or San 
Francisco, CA.

--
Adrian

On Jun 7, 2016, at 1:35 PM, Hongbin Lu 
> wrote:
Hi all,

Please find the Doodle poll below for selecting the Magnum midcycle date. 
Presumably, it will be a 2 days event. The location is undecided for now. The 
previous midcycles were hosted in bay area so I guess we will stay there at 
this time.

http://doodle.com/poll/5tbcyc37yb7ckiec

In addition, the Magnum team is looking for a host for the midcycle. Please let us 
know if you are interested in hosting us.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-14 Thread Hongbin Lu
Hi Tim,

Thanks for offering to host. We discussed the midcycle location at the last 
team meeting. It looks like a significant number of Magnum team members have 
difficulty traveling to Geneva, so we are not able to hold the midcycle at 
CERN. Thanks again for the willingness to host us.

Best regards,
Hongbin

From: Tim Bell [mailto:tim.b...@cern.ch]
Sent: June-09-16 2:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

If we can confirm the dates and location, there is a reasonable chance we could 
also offer remote conferencing using Vidyo at CERN. While it is not the same as 
an F2F experience, it would provide the possibility for remote participation 
for those who could not make it to Geneva.

We may also be able to organize tours, such as to the anti-matter factory and 
superconducting magnet test labs prior or afterwards if anyone is interested…

Tim

From: Spyros Trigazis >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday 8 June 2016 at 16:43
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu 
> wrote:
Ricardo,

Thanks for the offer. Could you let me know the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha 
> [mailto:rocha.po...@gmail.com]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>
> Hi Hongbin.
>
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
>
> Let us know.
>
> Ricardo
>
>
>
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> >
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle poll below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is looking for a host for the midcycle.
> > Please let us know if you are interested in hosting us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-14 Thread Jay Pipes

On 06/14/2016 12:30 PM, Paul Michali wrote:

Well, looks like we figured out what is going on - maybe folks have some
ideas on how we could handle this issue.

What I see is that for each VM created (small flavor), 1024 huge pages
are used and NUMA node 0 is used. It appears that, when there are no longer
enough huge pages on that NUMA node, Nova will then schedule to the
other NUMA node and use those huge pages.

In our case, we happen to have a special container running on the
compute nodes that uses 512 huge pages. As a result, when there are 768
huge pages left, Nova thinks there are 1280 pages left and thinks one
more VM can be created. It tries, but the create fails.

Some questions...

1) Is there some way to "reserve" huge pages in Nova?


Yes. Code merged recently from Sahid does this:

https://review.openstack.org/#/c/277422/

Best,
-jay


2) If the create fails, should Nova try the other NUMA node (or is this
because it doesn't know why it failed)?
3) Any ideas on how we can deal with this - without changing Nova?

Thanks!

PCM



On Tue, Jun 14, 2016 at 1:09 PM Paul Michali > wrote:

Great info Chris and thanks for confirming the assignment of blocks
of pages to a numa node.

I'm still struggling with why each VM is being assigned to NUMA node
0. Any ideas on where I should look to see why Nova is not using
NUMA id 1?

Thanks!


PCM


On Tue, Jun 14, 2016 at 10:29 AM Chris Friesen
>
wrote:

On 06/13/2016 02:17 PM, Paul Michali wrote:
 > Hmm... I tried Friday and again today, and I'm not seeing the VMs being evenly
 > created on the NUMA nodes. Every Cirros VM is created on nodeid 0.
 >
 > I have the m1.small flavor (2 GB) selected and am using hw:numa_nodes=1 and
 > hw:mem_page_size=2048 flavor-key settings. Each VM is consuming 1024 huge pages
 > (of size 2MB), but is on nodeid 0 always. Also, it seems that when I reach 1/2
 > of the total number of huge pages used, libvirt gives an error saying there is
 > not enough memory to create the VM. Is it expected that the huge pages are
 > "allocated" to each NUMA node?

Yes, any given memory page exists on one NUMA node, and a single-NUMA-node VM
will be constrained to a single host NUMA node and will use memory from that
host NUMA node.

You can see and/or adjust how many hugepages are available on each NUMA node via
/sys/devices/system/node/nodeX/hugepages/hugepages-2048kB/* where X is the host
NUMA node number.

Chris
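
As a concrete illustration of the counters Chris describes, a minimal Python
sketch (assuming the standard sysfs layout and 2 MB pages; not part of Nova
itself) that reports per-node hugepage usage could look like this:

    import glob
    import os

    # Read one hugepage counter (e.g. nr_hugepages, free_hugepages) for a node.
    def read_counter(node_path, name):
        counter = os.path.join(node_path, "hugepages", "hugepages-2048kB", name)
        with open(counter) as f:
            return int(f.read())

    for node_path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        total = read_counter(node_path, "nr_hugepages")
        free = read_counter(node_path, "free_hugepages")
        print("%s: total=%d free=%d used=%d"
              % (os.path.basename(node_path), total, free, total - free))

Watching free_hugepages per node as guests boot makes it obvious which host
NUMA node each one landed on.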



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-14 Thread Hongbin Lu


> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: June-14-16 3:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> 
> On 13/06/16 18:46 +, Hongbin Lu wrote:
> >
> >
> >> -Original Message-
> >> From: Sudipto Biswas [mailto:sbisw...@linux.vnet.ibm.com]
> >> Sent: June-13-16 1:43 PM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
> >>
> >>
> >>
> >> On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:
> >> > On 12/06/16 22:10 +, Hongbin Lu wrote:
> >> >> Hi team,
> >> >>
> >> >> During the team meetings these past weeks, we collaborated on the initial
> >> >> project roadmap. I summarized it as below. Please review.
> >> >>
> >> >> * Implement a common container abstraction for different
> container
> >> >> runtimes. The initial implementation will focus on supporting
> >> >> basic container operations (i.e. CRUD).
> >> >
> >> > What COE's are being considered for the first implementation? Just
> >> > docker and kubernetes?
> >[Hongbin Lu] Container runtimes, docker in particular, are being
> considered for the first implementation. We discussed how to support
> COEs in Zun but cannot reach an agreement on the direction. I will
> leave it for further discussion.
> >
> >> >
> >> >> * Focus on non-nested containers use cases (running containers on
> >> >> physical hosts), and revisit nested containers use cases (running
> >> >> containers on VMs) later.
> >> >> * Provide two set of APIs to access containers: The Nova APIs and
> >> the
> >> >> Zun-native APIs. In particular, the Zun-native APIs will expose
> >> >> full container capabilities, and Nova APIs will expose
> >> >> capabilities that are shared between containers and VMs.
> >> >
> >> > - Is the nova side going to be implemented in the form of a Nova
> >> > driver (like ironic's?)? What do you mean by APIs here?
> >[Hongbin Lu] Yes, the plan is to implement a Zun virt-driver for Nova.
> The idea is similar to Ironic.
> >
> >> >
> >> > - What operations are we expecting this to support (just CRUD
> >> > operations on containers?)?
> >[Hongbin Lu] We are working on finding the list of operations to
> support. There is a BP for tracking this effort:
> https://blueprints.launchpad.net/zun/+spec/api-design .
> >
> >> >
> >> > I can see this driver being useful for specialized services like
> >> Trove
> >> > but I'm curious/concerned about how this will be used by end users
> >> > (assuming that's the goal).
> >[Hongbin Lu] I agree that end users might not be satisfied by basic
> container operations like CRUD. We will discuss how to offer more to
> make the service useful in production.
> 
> I'd probably leave this out for now but this is just my opinion.
> Personally, I think that users, if presented with both APIs - nova's and
> Zun's - will prefer Zun's.
> 
> Specifically, you don't interact with a container the same way you
> interact with a VM (but I'm sure you know all of this way better than me).
> I guess my concern is that I don't see too much value in this other
> than allowing specialized services to run containers through Nova.

ACK

> 
> 
> >> >
> >> >
> >> >> * Leverage Neutron (via Kuryr) for container networking.
> >> >> * Leverage Cinder for container data volume.
> >> >> * Leverage Glance for storing container images. If necessary,
> >> >> contribute to Glance for missing features (i.e. support for layered
> >> >> container images).
> >> >
> >> > Are you aware of https://review.openstack.org/#/c/249282/ ?
> >> This support is very minimalistic in nature, since it doesn't do
> >> anything beyond just storing a docker FS tar ball.
> >> I think it was felt that, further support for docker FS was needed.
> >> While there were suggestions of a private docker registry, having
> >> something in band (w.r.t. openstack) may be desirable.
> >[Hongbin Lu] Yes, Glance doesn't support layered container images,
> which is a missing feature.
> 
> Yup, I didn't mean to imply that would do it all for you rather that
> there's been some progress there. As far as layered containers goes,
> you might want to look into Glare.

Thanks for the advice.

> 
> Flavio
> 
> >> >> * Support enforcing multi-tenancy by doing the following:
> >> >> ** Add configurable options for scheduler to enforce neighboring
> >> >> containers belonging to the same tenant.
> >> >> ** Support hypervisor-based container runtimes.
> >> >>
> >> >> The following topics have been discussed, but the team cannot
> >> >> reach consensus on including them into the short-term project
> >> >> scope. We skipped them for now and might revisit them later.
> >> >> * Support proxying API calls to COEs.
> >> >
> >> > Any link to what this proxy will do and what service it'll talk to?
> >> > I'd generally advice against having proxy calls in services. We've
> >> > just done work in Nova to deprecate the Nova Image 

Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-14 Thread Paul Michali
Well, looks like we figured out what is going on - maybe folks have some
ideas on how we could handle this issue.

What I see is that for each VM created (small flavor), 1024 huge pages are
used and NUMA node 0 is used. It appears that, when there are no longer enough
huge pages on that NUMA node, Nova will then schedule to the other NUMA
node and use those huge pages.

In our case, we happen to have a special container running on the compute
nodes that uses 512 huge pages. As a result, when there are 768 huge pages
left, Nova thinks there are 1280 pages left and thinks one more VM can be
created. It tries, but the create fails.
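
To make the accounting mismatch concrete, here is a tiny sketch using the
numbers above (illustrative values only; the per-node total doesn't matter,
only the deltas do):

    # Sketch of the accounting mismatch described above (illustrative numbers).
    pages_per_vm = 1024        # 2 GB flavor backed by 2 MB pages
    out_of_band_pages = 512    # hugepages consumed by the special container

    nova_view_free = 1280                              # what Nova believes is left
    actual_free = nova_view_free - out_of_band_pages   # 768 pages really available

    print("Nova schedules another VM: %s" % (nova_view_free >= pages_per_vm))  # True
    print("Host can actually fit it:  %s" % (actual_free >= pages_per_vm))     # False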

Some questions...

1) Is there some way to "reserve" huge pages in Nova?
2) If the create fails, should Nova try the other NUMA node (or is this
because it doesn't know why it failed)?
3) Any ideas on how we can deal with this - without changing Nova?

Thanks!

PCM



On Tue, Jun 14, 2016 at 1:09 PM Paul Michali  wrote:

> Great info Chris and thanks for confirming the assignment of blocks of
> pages to a numa node.
>
> I'm still struggling with why each VM is being assigned to NUMA node 0.
> Any ideas on where I should look to see why Nova is not using NUMA id 1?
>
> Thanks!
>
>
> PCM
>
>
> On Tue, Jun 14, 2016 at 10:29 AM Chris Friesen <
> chris.frie...@windriver.com> wrote:
>
>> On 06/13/2016 02:17 PM, Paul Michali wrote:
>> > Hmm... I tried Friday and again today, and I'm not seeing the VMs being
>> evenly
>> > created on the NUMA nodes. Every Cirros VM is created on nodeid 0.
>> >
>> > I have the m1.small flavor (2 GB) selected and am using hw:numa_nodes=1
>> and
>> > hw:mem_page_size=2048 flavor-key settings. Each VM is consuming 1024
>> huge pages
>> > (of size 2MB), but is on nodeid 0 always. Also, it seems that when I
>> reach 1/2
>> > of the total number of huge pages used, libvirt gives an error saying
>> there is
>> > not enough memory to create the VM. Is it expected that the huge pages
>> are
>> > "allocated" to each NUMA node?
>>
>> Yes, any given memory page exists on one NUMA node, and a
>> single-NUMA-node VM
>> will be constrained to a single host NUMA node and will use memory from
>> that
>> host NUMA node.
>>
>> You can see and/or adjust how many hugepages are available on each NUMA
>> node via
>> /sys/devices/system/node/nodeX/hugepages/hugepages-2048kB/* where X is
>> the host
>> NUMA node number.
>>
>> Chris
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Dent

On Tue, 14 Jun 2016, Matthew Treinish wrote:


By doing this, even as a temporary measure, we're saying it's ok to call things
an OpenStack API when you add random gorp to the responses. Which is something 
we've
very clearly said as a community is the exact opposite of the case, which the
testing reflects. I still contend just because some vendors were running old
versions of tempest and old versions of openstack where their incompatible API
changes weren't caught doesn't mean they should be given a pass now.


Yes. Thanks.

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Hoge

> On Jun 14, 2016, at 11:21 AM, Matthew Treinish  wrote:
> 
> On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
>> Last year, in response to Nova micro-versioning and extension updates[1],
>> the QA team added strict API schema checking to Tempest to ensure that
>> no additional properties were added to Nova API responses[2][3]. In the
>> last year, at least three vendors participating in the OpenStack Powered
>> Trademark program have been impacted by this change, two of which
>> reported this to the DefCore Working Group mailing list earlier this year[4].
>> 
>> The DefCore Working Group determines guidelines for the OpenStack Powered
>> program, which includes capabilities with associated functional tests
>> from Tempest that must be passed, and designated sections with associated
>> upstream code [5][6]. In determining these guidelines, the working group
>> attempts to balance the future direction of development with lagging
>> indicators of deployments and user adoption.
>> 
>> After a tremendous amount of consideration, I believe that the DefCore
>> Working Group needs to implement a temporary waiver for the strict API
>> checking requirements that were introduced last year, to give downstream
>> deployers more time to catch up with the strict micro-versioning
>> requirements determined by the Nova/Compute team and enforced by the
>> Tempest/QA team.
> 
> I'm very much opposed to this being done. If we're actually concerned with
> interoperability and verify that things behave in the same manner between 
> multiple
> clouds then doing this would be a big step backwards. The fundamental 
> disconnect
> here is that the vendors who have implemented out of band extensions or were
> taking advantage of previously available places to inject extra attributes
> believe that doing so means they're interoperable, which is quite far from
> reality. **The API is not a place for vendor differentiation.**

Yes, it’s bad practice, but it’s also a reality, and I honestly believe that
vendors have received the message and are working on changing.

> As a user of several clouds myself I can say that having random gorp in a
> response makes it much more difficult to use my code against multiple clouds. 
> I
> have to determine which properties being returned are specific to that 
> vendor's
> cloud and if I actually need to depend on them for anything it makes whatever
> code I'm writing incompatible for using against any other cloud. (unless I
> special case that block for each cloud) Sean Dague wrote a good post where a 
> lot
> of this was covered a year ago when microversions was starting to pick up 
> steam:
> 
> https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2 
> 
> 
> I'd recommend giving it a read, he explains the user first perspective more
> clearly there.
> 
> I believe Tempest in this case is doing the right thing from an 
> interoperability
> perspective and ensuring that the API is actually the API. Not an API with 
> extra
> bits a vendor decided to add.

A few points on this, though. Right now, Nova is the only API that is
enforcing this, and the clients. While this may change in the
future, I don’t think it accurately represents the reality of what’s
happening in the ecosystem.

As mentioned before, we also need to balance the lagging nature of
DefCore as an interoperability guideline with the needs of testing
upstream changes. I’m not asking for a permanent change that
undermines the goals of Tempest for QA, rather a temporary
upstream modification that recognizes the challenges faced by
vendors in the market right now, and gives them room to continue
to align themselves with upstream. Without this, the two other 
alternatives are to:

* Have some vendors leave the Powered program unnecessarily,
  weakening it.
* Force DefCore to adopt non-upstream testing, either as a fork
  or an independent test suite.

Neither seem ideal to me.

One of my goals is to transparently strengthen the ties between
upstream and downstream development. There is a deadline
built into this proposal, and my intention is to enforce it.

> I don't think a cloud or product that does this
> to the api should be considered an interoperable OpenStack cloud and failing 
> the
> tests is the correct behavior.

I think it’s more nuanced than this, especially right now.
Only additions to responses will be considered, not changes.
These additions will be clearly labelled as variations,
signaling the differences to users. Existing clients in use
will not break. Correct behavior will eventually be enforced,
and this would be clearly signaled by both the test tool and
through the administrative program.
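
As a rough sketch of what the short-term waiver could look like mechanically
(the option and API names below are purely illustrative, not the actual
Tempest patch), the response validator would consult a configured grey-list
and warn whenever it skips the strict check:

    import warnings

    # Hypothetical grey-list of Nova API calls, as it might be loaded from a
    # tempest.conf option; the names here are illustrative only.
    NOVA_RESPONSE_GREYLIST = {"create server", "list servers detailed"}

    def enforce_additional_properties(api_name, greylist=NOVA_RESPONSE_GREYLIST):
        """Return False (and warn) if strict checking is waived for this API."""
        if api_name in greylist:
            warnings.warn(
                "Strict response checking disabled for '%s'; this waiver is "
                "scheduled for removal after the 2017.01 guideline." % api_name,
                DeprecationWarning)
            return False
        return True

Anything not on the grey-list keeps the current strict behaviour, so the
waiver is opt-in per API and easy to drop once vendors have caught up.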



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [murano] Nominating Alexander Tivelkov and Zhu Rong for murano cores

2016-06-14 Thread Kirill Zaitsev
Hello team, I want to announce the following changes to the murano core team:

1) I’d like to nominate Alexander Tivelkov for murano core. He has been part of 
the project for a very long time and has contributed to almost every part of 
murano. He has been fully committed to murano during the mitaka cycle and continues 
doing so during newton [1]. His work on the scalable framework architecture is 
one of the most notable features scheduled for the N release.

2) I’d like to nominate Zhu Rong for murano core. Last time he was nominated I 
-1’ed the proposal, because I believed he needed to start making more 
substantial contributions. I’m sure that Zhu Rong has shown his commitment [2] to 
the murano project and I’m happy to nominate him myself. His work on separating 
cfapi from the murano api and his contributions aimed at addressing murano’s technical 
debt are much appreciated.

3) Finally, I would like to remove Steve McLellan [3] from the murano core team. 
Steve has been part of murano from its very early stages. However, his focus 
has since shifted and he hasn’t been active in murano during the last couple of 
cycles. I want to thank Steve for his contributions and express hope to see him 
back in the project in the future.


Murano team, please respond with +1/-1 to the proposed changes.

[1] http://stackalytics.com/?user_id=ativelkov&metric=marks
[2] http://stackalytics.com/?metric=marks&user_id=zhu-rong
[3] http://stackalytics.com/?user_id=sjmc7
-- 
Kirill Zaitsev
Software Engineer
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Midcycle call for topics

2016-06-14 Thread Ben Swartzlander

Our midcycle meetup is 2 weeks away! Please propose topics on the etherpad:

https://etherpad.openstack.org/p/newton-manila-midcycle

Depending on how much material we need to cover I'll decide if we need 
the third day or not.


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Matthew Treinish
On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > Last year, in response to Nova micro-versioning and extension updates[1],
> > > the QA team added strict API schema checking to Tempest to ensure that
> > > no additional properties were added to Nova API responses[2][3]. In the
> > > last year, at least three vendors participating in the OpenStack Powered
> > > Trademark program have been impacted by this change, two of which
> > > reported this to the DefCore Working Group mailing list earlier this 
> > > year[4].
> > > 
> > > The DefCore Working Group determines guidelines for the OpenStack Powered
> > > program, which includes capabilities with associated functional tests
> > > from Tempest that must be passed, and designated sections with associated
> > > upstream code [5][6]. In determining these guidelines, the working group
> > > attempts to balance the future direction of development with lagging
> > > indicators of deployments and user adoption.
> > > 
> > > After a tremendous amount of consideration, I believe that the DefCore
> > > Working Group needs to implement a temporary waiver for the strict API
> > > checking requirements that were introduced last year, to give downstream
> > > deployers more time to catch up with the strict micro-versioning
> > > requirements determined by the Nova/Compute team and enforced by the
> > > Tempest/QA team.
> > 
> > I'm very much opposed to this being done. If we're actually concerned with
> > interoperability and verify that things behave in the same manner between 
> > multiple
> > clouds then doing this would be a big step backwards. The fundamental 
> > disconnect
> > here is that the vendors who have implemented out of band extensions or were
> > taking advantage of previously available places to inject extra attributes
> > believe that doing so means they're interoperable, which is quite far from
> > reality. **The API is not a place for vendor differentiation.**
> 
> This is a temporary measure to address the fact that a large number
> of existing tests changed their behavior, rather than having new
> tests added to enforce this new requirement. The result is deployments
> that previously passed these tests may no longer pass, and in fact
> we have several cases where that's true with deployers who are
> trying to maintain their own standard of backwards-compatibility
> for their end users.

That's not what happened though. The API hasn't changed and the tests haven't
really changed either. We made our enforcement on Nova's APIs a bit stricter to
ensure nothing unexpected appeared. For the most part these tests work on any version
of OpenStack. (we only test it in the gate on supported stable releases, but I
don't expect things to have drastically shifted on older releases) It also
doesn't matter which version of the API you run, v2.0 or v2.1. Literally, the
only case it ever fails is when you run something extra, not from the community,
either as an extension (which themselves are going away [1]) or another service
that wraps nova or imitates nova. I'm personally not comfortable saying those
extras are ever part of the OpenStack APIs.

> 
> We have basically three options.
> 
> 1. Tell deployers who are trying to do the right thing for their immediate
>users that they can't use the trademark.
> 
> 2. Flag the related tests or remove them from the DefCore enforcement
>suite entirely.
> 
> 3. Be flexible about giving consumers of Tempest time to meet the
>new requirement by providing a way to disable the checks.
> 
> Option 1 goes against our own backwards compatibility policies.

I don't think backwards compatibility policies really apply to what we define
as the set of tests that as a community we are saying a vendor has to pass to
say they're OpenStack. From my perspective as a community we either take a hard
stance on this and say to be considered an interoperable cloud (and to get the
trademark) you have to actually have an interoperable product. We slowly ratchet
up the requirements every 6 months, there isn't any implied backwards
compatibility in doing that. You passed in the past but not in the newer 
stricter
guidelines.

Also, even if I did think it applied, we're not talking about a change which
would fall into breaking that. The change was introduced a year and half ago
during kilo and landed a year ago during liberty:

https://review.openstack.org/#/c/156130/

That's way longer than our normal deprecation period of 3 months and a release
boundary.

> 
> Option 2 gives us no winners and actually reduces the interoperability
> guarantees we already have in place.
> 
> Option 3 applies our usual community standard of slowly rolling
> forward while maintaining compatibility as broadly as possible.

Except in this case there isn't actually any compatibility being maintained.
We're 

Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-14 Thread Matthew Thode
On 06/14/2016 09:52 AM, Chris Friesen wrote:
> On 06/13/2016 01:11 PM, Doug Hellmann wrote:
>> I'm trying to pull together some information about contributions
>> that OpenStack community members have made *upstream* of OpenStack,
>> via code, docs, bug reports, or anything else to dependencies that
>> we have.
>>
>> If you've made a contribution of that sort, I would appreciate a
>> quick note.  Please reply off-list, there's no need to spam everyone,
>> and I'll post the summary if folks want to see it.
> 
> Linux kernel.  For the second of these I was particularly surprised that
> nobody in OpenStack had stumbled over it.
> 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d6d5e999e5df67f8ec20b6be45e2229455ee3699
> 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f9c904b7613b8b4c85b10cd6b33ad41b2843fa9d
> 
> 
> 
> Distro packaging:  I reported a dependency bug in the iptables-services
> package for Fedora.
> 
> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I'll second both of these.

There was a neutron bug I was working on that triggered this on the kernel side.

http://lists.openwall.net/netdev/2015/02/18/2

I also package openstack for gentoo, which triggers things all over on
that side.

-- 
-- Matthew Thode (prometheanfire)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Doug Hellmann
Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > Last year, in response to Nova micro-versioning and extension updates[1],
> > the QA team added strict API schema checking to Tempest to ensure that
> > no additional properties were added to Nova API responses[2][3]. In the
> > last year, at least three vendors participating in the OpenStack Powered
> > Trademark program have been impacted by this change, two of which
> > reported this to the DefCore Working Group mailing list earlier this 
> > year[4].
> > 
> > The DefCore Working Group determines guidelines for the OpenStack Powered
> > program, which includes capabilities with associated functional tests
> > from Tempest that must be passed, and designated sections with associated
> > upstream code [5][6]. In determining these guidelines, the working group
> > attempts to balance the future direction of development with lagging
> > indicators of deployments and user adoption.
> > 
> > After a tremendous amount of consideration, I believe that the DefCore
> > Working Group needs to implement a temporary waiver for the strict API
> > checking requirements that were introduced last year, to give downstream
> > deployers more time to catch up with the strict micro-versioning
> > requirements determined by the Nova/Compute team and enforced by the
> > Tempest/QA team.
> 
> I'm very much opposed to this being done. If we're actually concerned with
> interoperability and verify that things behave in the same manner between 
> multiple
> clouds then doing this would be a big step backwards. The fundamental 
> disconnect
> here is that the vendors who have implemented out of band extensions or were
> taking advantage of previously available places to inject extra attributes
> believe that doing so means they're interoperable, which is quite far from
> reality. **The API is not a place for vendor differentiation.**

This is a temporary measure to address the fact that a large number
of existing tests changed their behavior, rather than having new
tests added to enforce this new requirement. The result is deployments
that previously passed these tests may no longer pass, and in fact
we have several cases where that's true with deployers who are
trying to maintain their own standard of backwards-compatibility
for their end users.

We have basically three options.

1. Tell deployers who are trying to do the right thing for their immediate
   users that they can't use the trademark.

2. Flag the related tests or remove them from the DefCore enforcement
   suite entirely.

3. Be flexible about giving consumers of Tempest time to meet the
   new requirement by providing a way to disable the checks.

Option 1 goes against our own backwards compatibility policies.

Option 2 gives us no winners and actually reduces the interoperability
guarantees we already have in place.

Option 3 applies our usual community standard of slowly rolling
forward while maintaining compatibility as broadly as possible.

No one is suggesting that a permanent, or even open-ended, exception
be granted.

Doug

> 
> As a user of several clouds myself I can say that having random gorp in a
> response makes it much more difficult to use my code against multiple clouds. 
> I
> have to determine which properties being returned are specific to that 
> vendor's
> cloud and if I actually need to depend on them for anything it makes whatever
> code I'm writing incompatible for using against any other cloud. (unless I
> special case that block for each cloud) Sean Dague wrote a good post where a 
> lot
> of this was covered a year ago when microversions was starting to pick up 
> steam:
> 
> https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2
> 
> I'd recommend giving it a read, he explains the user first perspective more
> clearly there.
> 
> I believe Tempest in this case is doing the right thing from an 
> interoperability
> perspective and ensuring that the API is actually the API. Not an API with 
> extra
> bits a vendor decided to add. I don't think a cloud or product that does this
> to the api should be considered an interoperable OpenStack cloud and failing 
> the
> tests is the correct behavior.
> 
> -Matt Treinish
> 
> > 
> > My reasoning behind this is that while the change that enabled strict
> > checking was discussed publicly in the developer community and took
> > some time to be implemented, it still landed quickly and broke several
> > existing deployments overnight. As Tempest has moved forward with
> > bug and UX fixes (some in part to support the interoperability testing
> > efforts of the DefCore Working Group), using an older version of Tempest
> > where this strict checking is not enforced is no longer a viable solution
> > for downstream deployers. The TC has passed a resolution to advise
> > DefCore to use Tempest as the single source of capability testing[7],
> > but this naturally 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Matthew Treinish
On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> Last year, in response to Nova micro-versioning and extension updates[1],
> the QA team added strict API schema checking to Tempest to ensure that
> no additional properties were added to Nova API responses[2][3]. In the
> last year, at least three vendors participating in the OpenStack Powered
> Trademark program have been impacted by this change, two of which
> reported this to the DefCore Working Group mailing list earlier this year[4].
> 
> The DefCore Working Group determines guidelines for the OpenStack Powered
> program, which includes capabilities with associated functional tests
> from Tempest that must be passed, and designated sections with associated
> upstream code [5][6]. In determining these guidelines, the working group
> attempts to balance the future direction of development with lagging
> indicators of deployments and user adoption.
> 
> After a tremendous amount of consideration, I believe that the DefCore
> Working Group needs to implement a temporary waiver for the strict API
> checking requirements that were introduced last year, to give downstream
> deployers more time to catch up with the strict micro-versioning
> requirements determined by the Nova/Compute team and enforced by the
> Tempest/QA team.

I'm very much opposed to this being done. If we're actually concerned with
interoperability and verify that things behave in the same manner between 
multiple
clouds then doing this would be a big step backwards. The fundamental disconnect
here is that the vendors who have implemented out of band extensions or were
taking advantage of previously available places to inject extra attributes
believe that doing so means they're interoperable, which is quite far from
reality. **The API is not a place for vendor differentiation.**

As a user of several clouds myself I can say that having random gorp in a
response makes it much more difficult to use my code against multiple clouds. I
have to determine which properties being returned are specific to that vendor's
cloud and if I actually need to depend on them for anything it makes whatever
code I'm writing incompatible for using against any other cloud. (unless I
special case that block for each cloud) Sean Dague wrote a good post where a lot
of this was covered a year ago when microversions was starting to pick up steam:

https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2

I'd recommend giving it a read, he explains the user first perspective more
clearly there.

I believe Tempest in this case is doing the right thing from an interoperability
perspective and ensuring that the API is actually the API. Not an API with extra
bits a vendor decided to add. I don't think a cloud or product that does this
to the api should be considered an interoperable OpenStack cloud and failing the
tests is the correct behavior.
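
For anyone who has not looked at what the strict checking actually does, it is
plain JSON Schema validation with additional properties disallowed; a
standalone sketch using the generic jsonschema library (a toy schema, not one
of Tempest's real Nova schemas) shows how a vendor-added field trips it:

    from jsonschema import ValidationError, validate

    # Toy response schema in the spirit of Tempest's Nova response checks.
    server_schema = {
        "type": "object",
        "properties": {
            "id": {"type": "string"},
            "status": {"type": "string"},
        },
        "required": ["id", "status"],
        "additionalProperties": False,
    }

    response = {"id": "abc123", "status": "ACTIVE", "vendor:rack_id": "r42"}

    try:
        validate(response, server_schema)
    except ValidationError as exc:
        # 'vendor:rack_id' is not part of the documented API, so this fails.
        print("strict check failed: %s" % exc.message)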

-Matt Treinish

> 
> My reasoning behind this is that while the change that enabled strict
> checking was discussed publicly in the developer community and took
> some time to be implemented, it still landed quickly and broke several
> existing deployments overnight. As Tempest has moved forward with
> bug and UX fixes (some in part to support the interoperability testing
> efforts of the DefCore Working Group), using an older version of Tempest
> where this strict checking is not enforced is no longer a viable solution
> for downstream deployers. The TC has passed a resolution to advise
> DefCore to use Tempest as the single source of capability testing[7],
> but this naturally introduces tension between the competing goals of
> maintaining upstream functional testing and also tracking lagging
> indicators.
> 
> My proposal for addressing this problem approaches it at two levels:
> 
> * For the short term, I will submit a blueprint and patch to tempest that
>   allows configuration of a grey-list of Nova APIs where strict response
>   checking on additional properties will be disabled. So, for example,
>   if the 'create servers' API call returned extra properties on that call,
>   the strict checking on this line[8] would be disabled at runtime.
>   Use of this code path will emit a deprecation warning, and the
>   code will be scheduled for removal in 2017 directly after the release
>   of the 2017.01 guideline. Vendors would be required to submit the
>   grey-list of APIs with additional response data that would be
>   published to their marketplace entry.
> 
> * Longer term, vendors will be expected to work with upstream to update
>   the API for returning additional data that is compatible with
>   API micro-versioning as defined by the Nova team, and the waiver would
>   no longer be allowed after the release of the 2017.01 guideline.
> 
> For the next half-year, I feel that this approach strengthens interoperability
> by accurately capturing the current state of OpenStack deployments and
> client tools. Before this change, additional properties on 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Doug Hellmann
Excerpts from Chris Hoge's message of 2016-06-14 10:57:05 -0700:
> Last year, in response to Nova micro-versioning and extension updates[1],
> the QA team added strict API schema checking to Tempest to ensure that
> no additional properties were added to Nova API responses[2][3]. In the
> last year, at least three vendors participating in the OpenStack Powered
> Trademark program have been impacted by this change, two of which
> reported this to the DefCore Working Group mailing list earlier this year[4].
> 
> The DefCore Working Group determines guidelines for the OpenStack Powered
> program, which includes capabilities with associated functional tests
> from Tempest that must be passed, and designated sections with associated
> upstream code [5][6]. In determining these guidelines, the working group
> attempts to balance the future direction of development with lagging
> indicators of deployments and user adoption.
> 
> After a tremendous amount of consideration, I believe that the DefCore
> Working Group needs to implement a temporary waiver for the strict API
> checking requirements that were introduced last year, to give downstream
> deployers more time to catch up with the strict micro-versioning
> requirements determined by the Nova/Compute team and enforced by the
> Tempest/QA team.
> 
> My reasoning behind this is that while the change that enabled strict
> checking was discussed publicly in the developer community and took
> some time to be implemented, it still landed quickly and broke several
> existing deployments overnight. As Tempest has moved forward with
> bug and UX fixes (some in part to support the interoperability testing
> efforts of the DefCore Working Group), using older versions of Tempest
> where this strict checking is not enforced is no longer a viable solution
> for downstream deployers. The TC has passed a resolution to advise
> DefCore to use Tempest as the single source of capability testing[7],
> but this naturally introduces tension between the competing goals of
> maintaining upstream functional testing and also tracking lagging
> indicators.
> 
> My proposal for addressing this problem approaches it at two levels:
> 
> * For the short term, I will submit a blueprint and patch to tempest that
>   allows configuration of a grey-list of Nova APIs where strict response
>   checking on additional properties will be disabled. So, for example,
>   if the 'create  servers' API call returned extra properties on that call,
>   the strict checking on this line[8] would be disabled at runtime.
>   Use of this code path will emit a deprecation warning, and the
>   code will be scheduled for removal in 2017 directly after the release
>   of the 2017.01 guideline. Vendors would be required to submit the
>   grey-list of APIs with additional response data that would be
>   published to their marketplace entry.
> 
> * Longer term, vendors will be expected to work with upstream to update
>   the API for returning additional data that is compatible with
>   API micro-versioning as defined by the Nova team, and the waiver would
>   no longer be allowed after the release of the 2017.01 guideline.
> 
> For the next half-year, I feel that this approach strengthens interoperability
> by accurately capturing the current state of OpenStack deployments and
> client tools. Before this change, additional properties on responses
> weren't explicitly disallowed, and vendors and deployers took advantage
> of this in production. While this is behavior that the Nova and QA teams
> want to stop, it will take a bit more time to reach downstream. Also, as
> of right now, as far as I know the only client that does strict response
> checking for Nova responses is the Tempest client. Currently, additional
> properties in responses are ignored and do not break existing client
> functionality. There is currently little to no harm done to downstream
> users by temporarily allowing additional data to be returned in responses.

Thanks for putting this proposal together, Chris. The configuration
option you describe makes sense as a temporary solution to the
issue, and the timeline you propose (combined with the past year
since the change went in) should be plenty of time to handle upgrades.

Doug

> 
> Thanks,
> 
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
> 
> [1] 
> https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
> [2] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-February/057613.html
> [3] https://review.openstack.org/#/c/156130
> [4] 
> http://lists.openstack.org/pipermail/defcore-committee/2016-January/000986.html
> [5] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json
> [6] http://git.openstack.org/cgit/openstack/defcore/tree/2016.01.json
> [7] 
> http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20160504-defcore-test-location.rst
> [8] 
> 

Re: [openstack-dev] [VPNaaS] Support for Stronger hashes and combined mode ciphers

2016-06-14 Thread Paul Michali
I think Kyle polled operators and a few mentioned using VPNaaS for
site-to-site IPSec - do a search in this ML for VPNaaS. AFAIK, no one so
far is stepping up to work on VPNaaS.

Regards,

PCM


On Tue, Jun 14, 2016 at 1:40 PM Mark Fenwick 
wrote:

> Hi Paul,
>
> On 06/14/16 10:27, Paul Michali wrote:
> > Certainly the ciphers and hashes could be enhanced for VPNaaS. This would
> > require converting the user selections into options for the underlying
> > device driver, modifying the neutron client (OSC) to allow entry of the
> new
> > selections, updating unit tests, and likely adding some validators to
> > reject these options on drivers that may not support them (e.g. if
> OpenSwan
> > doesn't support an option, you'll want to reject it).
> >
>
> I made some changes and got this working quite quickly; it would need some
> polish.
>
> > There is not an active VPNaaS team any more, so, if this is something
> that
> > you'd like to see, you'll need to provide some sweat equity to make it
> > happen. There are still some people that can core review changes, but
> don't
> > expect much community support for VPNaaS at this time. In fact, I think
> the
> > plan is to archive/mothball/whatever VPNaaS in a few months (it's on
> double
> > secret probation :)), if there is no-one actively supporting it (I'll
> leave
> > to the PTL to define what "support" means - not sure what the
> > qualifications will be to maintain this project).
>
> So I'm curious, does anybody actually use VPNaaS for anything ?
>
> Thanks
>
> Mark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][security] Finishing the job on threat analysis for Kolla

2016-06-14 Thread Steven Dake (stdake)
Rob,

Do you have the source for reference #2 below?  I believe the next step was to 
produce copies of #2 based upon the different types of containers in the 
system and combine them into one coherent doc.

I think continuing to use sequence diagrams makes sense.

My phone (out of batteries) has a photograph of the different things we wrote 
down - I was planning to combine that work with multiples of diagram #2, and 
submit it for review - to get the process started.

Regards
-steve

From: Rob C
Date: Tuesday, June 14, 2016 at 1:34 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Cc: Steven Dake, "robcl...@uk.ibm.com"
Subject: Re: [openstack-dev] [kolla][security] Finishing the job on threat 
analysis for Kolla


I have returned from #drownload and I'm super keen to get on top of this; in 
this email I'll just try to tie a few different threads together.

The etherpad we used at the summit, along with the Sequence Diagram texts, is 
online [1]. Are we happy to continue using web sequence diagrams? I think the 
resulting output is very useful [2] - even if Kolla doesn't fit the typical 
project style that we anticipate using these for, as they're better suited to 
more traditional software projects.

There's a big effort to formalize the TA process and have OSSP help as 
guardians of the code base[3] in future, with lots of effort being made to 
ensure that as new projects come into the fold they meet a certain minimum 
security level - we'll also attempt to help more established projects iterate 
to a level of equal security assurance.

I'll leave the process description for our actual documentation but a big part 
of it will be projects submitting security docs to the newly created 
security-analysis repo [4]. Projects are welcome to use this for staging and 
collaboration - the OSSP will largely ignore projects with the WIP flag set.

I think the next step is for Doug and I (and anyone else who cares) to review 
the current diagrams and provide a quick gap analysis for the Kolla devs 
detailing what else is required for us to do a proper review.


[1] https://etherpad.openstack.org/p/kolla-newton-summit-threat-analysis

[2] https://drive.google.com/file/d/0B0osRPn3qBq5X1poTGZqVFBRQW8/view

[3] https://review.openstack.org/#/c/300698/

[4] https://review.openstack.org/#/c/325049/

On Tue, May 31, 2016 at 5:37 PM, Chivers, Doug wrote:
Thanks for following up Steve, the sessions at the summit were extremely useful.

Both Rob and I have been caught up with the day-job since we got back from the 
summit, but will discuss next steps and agree a plan this week.

Regards

Doug




From: "Steven Dake (stdake)" 
>>
Date: Tuesday, 24 May 2016 at 17:16
To: 
"openstack-dev@lists.openstack.org>"
 
>>
Cc: Doug Chivers 
>>,
 
"robcl...@uk.ibm.com>"
 
>>
Subject: [kolla][security] Finishing the job on threat analysis for Kolla

Rob and Doug,

At Summit we had 4 hours of highly productive work producing a list of "things" 
that can be "threatened".  We have about 4 or 5 common patterns where we follow 
the principle of least privilege.  On Friday of Summit we produced a list of 
all the things (in this case deployed containers).  I'm not sure who it was, 
but I think Rob was working on a flow diagram for the least-privileged case.  
From there, the Kolla coresec team can produce the rest of the diagrams for 
increasing privileges.

I'd like to get that done, then move on to next steps.  Not sure what the next 
steps are, but lets cover the flow diagrams first since we know we need those.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development 

[openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-14 Thread Chris Hoge
Last year, in response to Nova micro-versioning and extension updates[1],
the QA team added strict API schema checking to Tempest to ensure that
no additional properties were added to Nova API responses[2][3]. In the
last year, at least three vendors participating in the OpenStack Powered
Trademark program have been impacted by this change, two of which
reported this to the DefCore Working Group mailing list earlier this year[4].

The DefCore Working Group determines guidelines for the OpenStack Powered
program, which includes capabilities with associated functional tests
from Tempest that must be passed, and designated sections with associated
upstream code [5][6]. In determining these guidelines, the working group
attempts to balance the future direction of development with lagging
indicators of deployments and user adoption.

After a tremendous amount of consideration, I believe that the DefCore
Working Group needs to implement a temporary waiver for the strict API
checking requirements that were introduced last year, to give downstream
deployers more time to catch up with the strict micro-versioning
requirements determined by the Nova/Compute team and enforced by the
Tempest/QA team.

My reasoning behind this is that while the change that enabled strict
checking was discussed publicly in the developer community and took
some time to be implemented, it still landed quickly and broke several
existing deployments overnight. As Tempest has moved forward with
bug and UX fixes (some in part to support the interoperability testing
efforts of the DefCore Working Group), using older versions of Tempest
where this strict checking is not enforced is no longer a viable solution
for downstream deployers. The TC has passed a resolution to advise
DefCore to use Tempest as the single source of capability testing[7],
but this naturally introduces tension between the competing goals of
maintaining upstream functional testing and also tracking lagging
indicators.

My proposal for addressing this problem approaches it at two levels:

* For the short term, I will submit a blueprint and patch to tempest that
  allows configuration of a grey-list of Nova APIs where strict response
  checking on additional properties will be disabled. So, for example,
  if the 'create  servers' API call returned extra properties on that call,
  the strict checking on this line[8] would be disabled at runtime.
  Use of this code path will emit a deprecation warning, and the
  code will be scheduled for removal in 2017 directly after the release
  of the 2017.01 guideline. Vendors would be required to submit the
  grey-list of APIs with additional response data that would be
  published to their marketplace entry.

* Longer term, vendors will be expected to work with upstream to update
  the API for returning additional data that is compatible with
  API micro-versioning as defined by the Nova team, and the waiver would
  no longer be allowed after the release of the 2017.01 guideline.

For the next half-year, I feel that this approach strengthens interoperability
by accurately capturing the current state of OpenStack deployments and
client tools. Before this change, additional properties on responses
weren't explicitly disallowed, and vendors and deployers took advantage
of this in production. While this is behavior that the Nova and QA teams
want to stop, it will take a bit more time to reach downstream. Also, as
of right now, as far as I know the only client that does strict response
checking for Nova responses is the Tempest client. Currently, additional
properties in responses are ignored and do not break existing client
functionality. There is currently little to no harm done to downstream
users by temporarily allowing additional data to be returned in responses.
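
To make the short-term proposal concrete, the sketch below shows, in rough and
hypothetical form, how a configured grey-list could relax the
additionalProperties check for a single API at runtime. The schema, option
name, and API key are invented for illustration and are not the actual Tempest
code:

    import copy
    import jsonschema

    # Shaped like Tempest's response checking: additionalProperties=False is
    # the strict check that rejects vendor-added fields (schema is invented).
    CREATE_SERVER_SCHEMA = {
        'type': 'object',
        'properties': {
            'server': {
                'type': 'object',
                'properties': {
                    'id': {'type': 'string'},
                    'links': {'type': 'array'},
                },
                'required': ['id', 'links'],
                'additionalProperties': False,
            },
        },
        'required': ['server'],
    }

    def schema_for(api_name, schema, grey_list):
        # Hypothetical grey-list toggle: relax strictness only for APIs the
        # vendor has declared (and published) as returning extra properties.
        if api_name not in grey_list:
            return schema
        relaxed = copy.deepcopy(schema)
        relaxed['properties']['server']['additionalProperties'] = True
        return relaxed

    response = {'server': {'id': 'abc123', 'links': [],
                           'vendor:extra': 'value'}}
    grey_list = {'create_server'}  # would come from a tempest config option
    jsonschema.validate(response,
                        schema_for('create_server', CREATE_SERVER_SCHEMA,
                                   grey_list))

With an empty grey-list the same response would fail validation, which is the
current strict behavior.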

Thanks,

Chris Hoge
Interop Engineer
OpenStack Foundation

[1] 
https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/api-microversions.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-February/057613.html
[3] https://review.openstack.org/#/c/156130
[4] 
http://lists.openstack.org/pipermail/defcore-committee/2016-January/000986.html
[5] http://git.openstack.org/cgit/openstack/defcore/tree/2015.07.json
[6] http://git.openstack.org/cgit/openstack/defcore/tree/2016.01.json
[7] 
http://git.openstack.org/cgit/openstack/governance/tree/resolutions/20160504-defcore-test-location.rst
[8] 
http://git.openstack.org/cgit/openstack/tempest-lib/tree/tempest_lib/api_schema/response/compute/v2_1/servers.py#n39


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-14 Thread Jay Faulkner
I committed a small change to systemd-nspawn to ensure kernel modules 
could be loaded by IPA hardware managers in an older version of our 
CoreOS-based ramdisk (note that we don't use systemd-nspawn anymore, but 
instead use a chroot inside the CoreOS ramdisk).


Thanks,
Jay Faulkner
OSIC

On 6/13/16 12:11 PM, Doug Hellmann wrote:

I'm trying to pull together some information about contributions
that OpenStack community members have made *upstream* of OpenStack,
via code, docs, bug reports, or anything else to dependencies that
we have.

If you've made a contribution of that sort, I would appreciate a
quick note.  Please reply off-list, there's no need to spam everyone,
and I'll post the summary if folks want to see it.

Thanks,
Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [VPNaaS] Support for Stronger hashes and combined mode ciphers

2016-06-14 Thread Mark Fenwick

Hi Paul,

On 06/14/16 10:27, Paul Michali wrote:

Certainly the ciphers and hashes could be enhanced for VPNaaS. This would
require converting the user selections into options for the underlying
device driver, modifying the neutron client (OSC) to allow entry of the new
selections, updating unit tests, and likely adding some validators to
reject these options on drivers that may not support them (e.g. if OpenSwan
doesn't support an option, you'll want to reject it).



I made some changes and got this working quite quickly; it would need some 
polish.



There is not an active VPNaaS team any more, so, if this is something that
you'd like to see, you'll need to provide some sweat equity to make it
happen. There are still some people that can core review changes, but don't
expect much community support for VPNaaS at this time. In fact, I think the
plan is to archive/mothball/whatever VPNaaS in a few months (it's on double
secret probation :)), if there is no-one actively supporting it (I'll leave
to the PTL to define what "support" means - not sure what the
qualifications will be to maintain this project).


So I'm curious, does anybody actually use VPNaaS for anything ?

Thanks

Mark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] simple patches requiring only one +2/+A

2016-06-14 Thread Loo, Ruby
Hi,

There was a short discussion on IRC a few minutes ago [1] about when it would 
be acceptable for a patch to be approved with one +2 (as opposed to two +2s). 
The few of us that commented (I think all are cores in one or several ironic 
projects) agreed that it would be good to do that, but "use your common sense" 
didn't seem to provide enough guidance :)

I've updated our wiki [2] (see bullet 4, bullet 2) to reflect what I think we 
agreed on. It didn't seem necessary to vote on it. Please let me/us know if you 
disagree or have other comments.

(I know, we should move this information from the wiki to our documentation, 
but that's a separate issue.)

Thanks,
--ruby


[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2016-06-14.log.html#t2016-06-14T16:51:40
[2] https://wiki.openstack.org/wiki/Ironic/CoreTeam#Other_notes

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-06-14 Thread Walter A. Boring IV
I just put up a WIP patch in os-brick that tests to see if oslo.privsep is 
configured with the helper_command.  If it's not, then os-brick falls back to 
using processutils with the root_helper and run_as_root kwargs passed in.

https://review.openstack.org/#/c/329586

If you can check this out that would be helpful.  If this is the route we want 
to go, then I'll add unit tests, take it out of WIP, and try to get it in.


So, if nova.conf and cinder.conf aren't updated with the privsep_osbrick 
sections providing the helper_command, then os_brick will assume local 
processutils calls with the configured root_helper passed in.
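
To illustrate the shape of that fallback (this is only a sketch with assumed
names, not the actual review):

    from oslo_concurrency import processutils

    def execute(*cmd, helper_command=None, root_helper=None,
                privsep_execute=None):
        # Sketch only: prefer the privsep path when the deployer has added
        # the [privsep_osbrick] helper_command; otherwise behave like
        # pre-1.4 os-brick and shell out via rootwrap.
        if helper_command and privsep_execute is not None:
            return privsep_execute(*cmd)
        return processutils.execute(*cmd, run_as_root=True,
                                    root_helper=root_helper)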

This should be backwards compatible (grenade upgrade tests).  But we should 
encourage admins to add that section to their nova.conf and cinder.conf files.  
The other downside to this is that if we have to keep this code in place, then 
we effectively still have to maintain rootwrap filters and keep them up to 
date.   *sadness*


Walt

On 06/14/2016 04:49 AM, Sean Dague wrote:

os-brick 1.4 was released over the weekend, and was the first os-brick
to include privsep. We got a really odd failure rate in the
grenade-multinode jobs (1/3 - 1/2) afterwards, and it was not at all obvious
why. Hemna looks to have figured it out (this is a summary of
what I've seen on IRC to pull it all together).

Remembering the following -
https://github.com/openstack-dev/grenade#theory-of-upgrade and
https://governance.openstack.org/reference/tags/assert_supports-upgrade.html#requirements
- New code must work with N-1 configs. So this is `master` running with
`mitaka` configuration.

privsep requires a sudo rule or rootwrap rule (to get to sudo) to allow
the privsep daemon to be spawned for volume actions.

During gate testing we have a blanket sudoer rule for the stack user
during the run of grenade.sh. It has to do system level modifications
broadly to perform the upgrade. This sudoer rule is deleted at the end
of the grenade.sh run before Tempest tests are run, so that Tempest
tests don't accidentally require root privs on their target environment.

Grenade *also* makes sure that some resources live across the upgrade
boundary. This includes a boot from volume guest, which is torn down
before testing starts. And this is where things get interesting.

This means there is a volume teardown needed before grenade ends. But
there is only one. In single node grenade this happens about 30 seconds
before the end of the script, triggers the privsep daemon start, and then
we're done. And the 50_stack_sh sudoers file is removed. In multinode,
*if* the boot from volume server is on the upgrade node, then the same
thing happens. *However*, if it instead ended up on the subnode, which
is not upgraded, then the volume teardown is on the old node. No
os-brick calls are made on the upgraded node before grenade finishes.
The 50_stack_sh sudoers file is removed, as expected.

And now all volume tests on those nodes fail.

Which is what should happen. The point is that in production no one is
going to put a blanket sudoers rule like that in place. It's just we
needed it for this activity, and the userid on the services being the
same as the shell user (which is not root) let this fallback rule be used.

The crux of the problem is that os-brick 1.4 and privsep can't be used
without a config file change during the upgrade. Which violates our
policy, because it breaks rolling upgrades.

So... we have a few options:

1) make an exception here with release notes, because it's the only way
to move forward.

2) have some way for os-brick to use either mode for a transition period
(depending on whether privsep is configured to work)

3) Something else ?

https://bugs.launchpad.net/os-brick/+bug/1592043 is the bug we've got on
this. We should probably sort out the path forward here on the ML as
there are a bunch of folks in a bunch of different time zones that have
important perspectives here.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-06-14 Thread Matt Riedemann

On 6/14/2016 11:33 AM, Sean Dague wrote:

On 06/14/2016 09:02 AM, Daniel P. Berrange wrote:

On Tue, Jun 14, 2016 at 07:49:54AM -0400, Sean Dague wrote:

[snip]


The crux of the problem is that os-brick 1.4 and privsep can't be used
without a config file change during the upgrade. Which violates our
policy, because it breaks rolling upgrades.


os-vif support is going to face exactly the same problem. We just followed
os-brick's lead by adding a change to devstack to explicitly set the
required config options in nova.conf to change privsep to use rootwrap
instead of plain sudo.

Basically every single user of privsep is likely to face the same
problem.


So... we have a few options:

1) make an exception here with release notes, because it's the only way
to move forward.


That's quite user hostile I think.


2) have some way for os-brick to use either mode for a transition period
(depending on whether privsep is configured to work)


I'm not sure that's viable - at least for os-vif we started from
a clean slate to assume use of privsep, so we won't be able to have
any optional fallback to non-privsep mode.


3) Something else ?


3) Add an API to oslo.privsep that lets us configure the default
   command to launch the helper. Nova would invoke this on startup

  privsep.set_default_helper("sudo nova-rootwrap ")

4) Have oslo.privsep install a sudo rule that grants permission
   to run privsep-helper, without needing rootwrap.

> 5) Have each user of privsep install a sudo rule that grants
   permission to run privsep-helper with just their specific
   entry point context, without needing rootwrap


4 & 5 are the same as 1, because python packages don't have standardized
management of /etc in their infrastructure. The code can't roll forward
without a config change.

Option #3 is a new one, I wonder if that would get us past here better.

-Sean



Yeah #3 sounds the best to me, but would need to hear from Angus on this.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [VPNaaS] Support for Stronger hashes and combined mode ciphers

2016-06-14 Thread Paul Michali
Certainly the ciphers and hashes could be enhanced for VPNaaS. This would
require converting the user selections into options for the underlying
device driver, modifying the neutron client (OSC) to allow entry of the new
selections, updating unit tests, and likely adding some validators to
reject these options on drivers that may not support them (e.g. if OpenSwan
doesn't support an option, you'll want to reject it).
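
For instance, the validator piece could be as small as the sketch below (the
capability table and names are assumptions for illustration, not actual
neutron-vpnaas code; the StrongSwan list comes from the original question and
the OpenSwan entry is purely hypothetical):

    DRIVER_AUTH_ALGORITHMS = {
        'strongswan': {'sha1', 'sha256', 'sha384', 'sha512'},
        'openswan': {'sha1'},  # hypothetical restriction, for illustration
    }

    def validate_auth_algorithm(driver_name, auth_algorithm):
        # Reject auth algorithms the selected backend cannot provide.
        supported = DRIVER_AUTH_ALGORITHMS.get(driver_name, set())
        if auth_algorithm not in supported:
            raise ValueError('auth algorithm %r is not supported by the %s '
                             'driver' % (auth_algorithm, driver_name))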

There is not an active VPNaaS team any more, so, if this is something that
you'd like to see, you'll need to provide some sweat equity to make it
happen. There are still some people that can core review changes, but don't
expect much community support for VPNaaS at this time. In fact, I think the
plan is to archive/mothball/whatever VPNaaS in a few months (it's on double
secret probation :)), if there is no-one actively supporting it (I'll leave
to the PTL to define what "support" means - not sure what the
qualifications will be to maintain this project).

Regards,

PCM


On Wed, Jun 8, 2016 at 5:19 PM Mark Fenwick  wrote:

> Hi,
>
> I was wondering if there are any plans to extend support for IPsec and
> IKE algorithms. Looks like only AES-CBC mode and SHA1 are supported.
>
> It would be nice to see:
>
> SHA256, SHA384, SHA512
>
> As well as the combined mode ciphers:
>
> AES-CCM and AES-GCM
>
> StrongSWAN already supports all of these ciphers and hashes.
>
> Thanks
>
> Mark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-14 Thread Paul Michali
Great info Chris and thanks for confirming the assignment of blocks of
pages to a numa node.

I'm still struggling with why each VM is being assigned to NUMA node 0. Any
ideas on where I should look to see why Nova is not using NUMA id 1?
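
In case it helps anyone reproduce this, here is a small illustrative script
(not Nova code) that dumps the per-node 2MB hugepage counters from the sysfs
path Chris points at below:

    import glob
    import os

    def hugepage_counts(page_size_kb=2048):
        # Read total/free hugepages for each host NUMA node from sysfs.
        counts = {}
        pattern = ('/sys/devices/system/node/node*/hugepages/'
                   'hugepages-%dkB' % page_size_kb)
        for node_dir in sorted(glob.glob(pattern)):
            node = node_dir.split('/')[5]  # e.g. 'node0'
            with open(os.path.join(node_dir, 'nr_hugepages')) as f:
                total = int(f.read())
            with open(os.path.join(node_dir, 'free_hugepages')) as f:
                free = int(f.read())
            counts[node] = (total, free)
        return counts

    if __name__ == '__main__':
        for node, (total, free) in sorted(hugepage_counts().items()):
            print('%s: %d total / %d free 2MB hugepages' % (node, total, free))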

Thanks!


PCM


On Tue, Jun 14, 2016 at 10:29 AM Chris Friesen 
wrote:

> On 06/13/2016 02:17 PM, Paul Michali wrote:
> > Hmm... I tried Friday and again today, and I'm not seeing the VMs being
> evenly
> > created on the NUMA nodes. Every Cirros VM is created on nodeid 0.
> >
> > I have the m1.small flavor (2GB) selected and am using hw:numa_nodes=1
> and
> > hw:mem_page_size=2048 flavor-key settings. Each VM is consuming 1024
> huge pages
> > (of size 2MB), but is on nodeid 0 always. Also, it seems that when I
> reach 1/2
> > of the total number of huge pages used, libvirt gives an error saying
> there is
> > not enough memory to create the VM. Is it expected that the huge pages
> are
> > "allocated" to each NUMA node?
>
> Yes, any given memory page exists on one NUMA node, and a single-NUMA-node
> VM
> will be constrained to a single host NUMA node and will use memory from
> that
> host NUMA node.
>
> You can see and/or adjust how many hugepages are available on each NUMA
> node via
> /sys/devices/system/node/nodeX/hugepages/hugepages-2048kB/* where X is the
> host
> NUMA node number.
>
> Chris
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Proposal of a virtual mid-cycle instead of the co-located

2016-06-14 Thread Nikhil Komawar
Please note the details on the midcycle event are updated here:
https://wiki.openstack.org/wiki/VirtualSprints#Glance_Virtual_midcycle_sync

We will be using the etherpad (given in the link above) as a live wiki
for central place of updated info.

On 6/8/16 9:52 AM, Nikhil Komawar wrote:
> This event is confirmed, more details will be out soon.
>
>
> On 5/29/16 11:15 PM, Nikhil Komawar wrote:
>> Hello,
>>
>>
>> I would like to propose a two-day Glance virtual mid-cycle, with 4-hour
>> sessions each day, on Wednesday June 15th & Thursday June 16th, 1400 UTC
>> onward. This is a replacement for the Glance mid-cycle meetup that we've
>> cancelled.
>> Some people have already expressed some items to discuss then and I
>> would like for us to utilize a couple of hours discussing the
>> glance-specs so that we can apply spec-soft-freeze [1] in a better capacity.
>>
>>
>> We can try to accommodate topics according to the TZ, for example topics
>> proposed by folks in EMEA earlier in the day vs. for those in the PDT TZ
>> in the later part of the event.
>>
>>
>> Please vote with +1, 0, -1. If the time/date doesn't work, please
>> propose 2-3 additional slots.
>>
>>
>> We can use either hangouts, bluejeans or an IBM conferencing tool as
>> required, which is to be finalized closer to the event.
>>
>>
>> I will setup an agenda etherpad once we decide on the date/time.
>>
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/096175.html
>>

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-14 Thread Peters, Rawlin
On Tuesday, June 14, 2016 3:43 AM, Daniel P. Berrange (berra...@redhat.com) 
wrote:
> 
> On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote:
> > In strategy 2 we just pass 1 bridge name to Nova. That's the one that
> > it ensures is created and plumbs the VM to. Since it's not responsible
> > for patch ports it doesn't need to know anything about the other bridge.
> 
> Ok, so we're already passing that bridge name - all we need to change is to
> make sure it is actually created if it doesn't already exist? If so that
> sounds simple enough to add to os-vif - we already have exactly the same
> logic for the
> linux_bridge plugin

Neutron doesn't actually pass the bridge name in the vif_details today, but 
Nova will use that bridge rather than br-int if it's passed in the vif_details.

In terms of strategy 1, I was still only envisioning one bridge name getting 
passed in the vif_details (br-int). The "plug" action is only a variation of 
the hybrid_ovs strategy I mentioned earlier, which generates an arbitrary name 
for the linux bridge, uses that bridge in the instance's libvirt XML config 
file, then creates a veth pair between the linux bridge and br-int. Like 
hybrid_ovs, the only bridge Nova/os-vif needs to know about is br-int for 
Strategy 1.
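
To make that concrete, the plug action I have in mind is roughly the sketch
below (illustrative only, with assumed names; the real hybrid_ovs code in Nova
differs in the details and in error handling):

    from oslo_concurrency import processutils

    def plug_hybrid(port_id, root_helper, integration_bridge='br-int'):
        lb = 'qbr' + port_id[:11]    # linux bridge named in the libvirt XML
        qvb = 'qvb' + port_id[:11]   # veth end attached to the linux bridge
        qvo = 'qvo' + port_id[:11]   # veth end attached to br-int

        def run(*cmd):
            return processutils.execute(*cmd, run_as_root=True,
                                        root_helper=root_helper)

        # Linux bridge the instance is wired to, plus a veth pair that
        # connects that bridge to the integration bridge.
        run('brctl', 'addbr', lb)
        run('ip', 'link', 'add', qvb, 'type', 'veth', 'peer', 'name', qvo)
        run('brctl', 'addif', lb, qvb)
        run('ovs-vsctl', '--', '--may-exist', 'add-port',
            integration_bridge, qvo)
        for dev in (lb, qvb, qvo):
            run('ip', 'link', 'set', dev, 'up')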

In terms of architecture, we get KISS with Strategy 1 (W.R.T. the OVS agent, 
which is the most complex piece of this IMO). Using an L2 agent extension, we 
will also get DRY as well because it seems that the LinuxBridge implementation 
can simply use an L2 agent extension for creating the vlan interfaces for the 
subports. Similar to how QoS has different drivers for its L2 agent extension, 
we could have different drivers for OVS and LinuxBridge within the 'trunk' L2 
agent extension. Each driver will want to make use of the same RPC calls/push 
mechanisms for subport creation/deletion.

Also, we didn’t make the OVS agent monitor for new linux bridges in the 
hybrid_ovs strategy so that Neutron could be responsible for creating the veth 
pair. Was that a mistake or just an instance of KISS? Why shouldn't we use the 
tools that are already available to us?

Regards,
Rawlin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] os-brick privsep failures and an upgrade strategy?

2016-06-14 Thread Sean Dague
On 06/14/2016 09:02 AM, Daniel P. Berrange wrote:
> On Tue, Jun 14, 2016 at 07:49:54AM -0400, Sean Dague wrote:
> 
> [snip]
> 
>> The crux of the problem is that os-brick 1.4 and privsep can't be used
>> without a config file change during the upgrade. Which violates our
>> policy, because it breaks rolling upgrades.
> 
> os-vif support is going to face exactly the same problem. We just followed
> os-brick's lead by adding a change to devstack to explicitly set the
> required config options in nova.conf to change privsep to use rootwrap
> instead of plain sudo.
> 
> Basically every single user of privsep is likely to face the same
> problem.
> 
>> So... we have a few options:
>>
>> 1) make an exception here with release notes, because it's the only way
>> to move forward.
> 
> That's quite user hostile I think.
> 
>> 2) have some way for os-brick to use either mode for a transition period
>> (depending on whether privsep is configured to work)
> 
> I'm not sure that's viable - at least for os-vif we started from
> a clean slate to assume use of privsep, so we won't be able to have
> any optional fallback to non-privsep mode.
> 
>> 3) Something else ?
> 
> 3) Add an API to oslo.privsep that lets us configure the default
>command to launch the helper. Nova would invoke this on startup
> 
>   privsep.set_default_helper("sudo nova-rootwrap ")
> 
> 4) Have oslo.privsep install a sudo rule that grants permission
>to run privsep-helper, without needing rootwrap.
> 
> 5) Have each user of privsep install a sudo rule that grants
>permission to run privsep-helper with just their specific
>entry point context, without needing rootwrap

4 & 5 are the same as 1, because python packages don't have standardized
management of /etc in their infrastructure. The code can't roll forward
without a config change.

Option #3 is a new one, I wonder if that would get us past here better.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-14 Thread Hayes, Graham
On 14/06/2016 17:14, Anita Kuno wrote:
> On 06/14/2016 10:44 AM, Hayes, Graham wrote:
>> On 14/06/2016 15:00, Thierry Carrez wrote:
>>> Hi everyone,
>>>
>>> I just proposed a new requirement for OpenStack "official" projects,
>>> which I think is worth discussing beyond the governance review:
>>>
>>> https://review.openstack.org/#/c/329448/
>>>
>>>  From an upstream perspective, I see us as being in the business of
>>> providing open collaboration playing fields in order to build projects
>>> to reach the OpenStack Mission. We collectively provide resources
>>> (infra, horizontal teams, events...) in order to enable that open
>>> collaboration.
>>>
>>> An important characteristic of these open collaboration grounds is that
>>> they need to be a level playing field, where no specific organization is
>>> being given an unfair advantage. I expect the teams that we bless as
>>> "official" project teams to operate in that fair manner. Otherwise we
>>> end up blessing what is essentially a trojan horse for a given
>>> organization, open-washing their project in the process. Such a project
>>> can totally exist as an unofficial project (and even be developed on
>>> OpenStack infrastructure) but I don't think it should be given free
>>> space in our Design Summits or benefit from "OpenStack community" branding.
>>>
>>> So if, in a given project team, developers from one specific
>>> organization benefit from access to specific knowledge or hardware
>>> (think 3rd-party testing blackboxes that decide if a patch goes in, or
>>> access to proprietary hardware or software that the open source code
>>> primarily interfaces with), then this project team should probably be
>>> rejected under the "open community" rule. Projects with a lot of drivers
>>> (like Cinder) provide an interesting grey area, but as long as all
>>> drivers are in and there is a fully functional (and popular) open source
>>> implementation, I think no specific organization would be considered as
>>> unfairly benefiting compared to others.
>>>
>>> A few months ago we had the discussion about what "no open core" means
>>> in 2016, in the context of the Poppy team candidacy. With our reading at
>>> the time we ended up rejecting Poppy partly because it was interfacing
>>> with proprietary technologies. However, I think what we originally
>>> wanted to ensure with this rule was that no specific organization would
>>> use the OpenStack open source code as crippled bait to sell their
>>> specific proprietary add-on.
>>>
>>> I think taking the view that OpenStack projects need to be open, level
>>> collaboration playing fields encapsulates that nicely. In the Poppy
>>> case, nobody in the Poppy team has an unfair advantage over others, so
>>> we should not reject them purely on the grounds that this interfaces
>>> with non-open-source solutions (leaving only the infrastructure/testing
>>> requirement to solve). On the other hand, a Neutron plugin targeting a
>>> specific piece of networking hardware would likely give an unfair
>>> advantage to developers of the hardware's manufacturer (having access to
>>> that gear for testing and being able to see and make changes to its
>>> proprietary source code) -- that project should probably live as an
>>> unofficial OpenStack project.
>>>
>>> Comments, thoughts ?
>>>
>>
>>
>>  From our perspective, we (designate) currently have a few drivers from
>> proprietary vendors, and would like to add one in the near future.
>>
>> The current drivers are marked as "release compatible" - aka someone is
>> nominated to test the driver throughout the release cycle, and then
>> during the RC fully validate the driver.
>>
>> The new driver will have 3rd party CI, to test it on every commit.
>>
>> These are (very) small parts of the code base, but part of it none
>> the less. If this passes, should we push these plugins to separate
>> repos, and not include them as part of the Designate project?
>>
>> As another idea - if we have to move them out of tree - could we have
>> another "type" of project?
>>
>> A lot of projects have "drivers" for vendor hardware / software -
>> could there be a way of marking projects as drivers of a deliverable -
>> as most of these drivers will be very tied to specific versions of
>> OpenStack projects.
>>
>> I fully agree with the sentiment, and overall aim of the requirement,
>> I just want to ensure we have as little negative impact on deployers
>> as possible.
>>
>>   -- Graham
>
> I highly recommend you spend some time interacting with the Neutron,
> Nova, Cinder and Ironic communities to learn how they approach this
> issue. Each community has a slightly different approach to interacting
> with vendors with different pain points in each approach. I think
> learning from these projects regarding this issue would be a great way
> to formulate your best plan for designate. Also time spent with Mike
> Perez on this issue is an investment as far as I'm concerned.
>
> Thank you,
> Anita.
>


Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-14 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-06-14 14:44:36 +:
> On 14/06/2016 15:00, Thierry Carrez wrote:
> > Hi everyone,
> >
> > I just proposed a new requirement for OpenStack "official" projects,
> > which I think is worth discussing beyond the governance review:
> >
> > https://review.openstack.org/#/c/329448/
> >
> >  From an upstream perspective, I see us as being in the business of
> > providing open collaboration playing fields in order to build projects
> > to reach the OpenStack Mission. We collectively provide resources
> > (infra, horizontal teams, events...) in order to enable that open
> > collaboration.
> >
> > An important characteristic of these open collaboration grounds is that
> > they need to be a level playing field, where no specific organization is
> > being given an unfair advantage. I expect the teams that we bless as
> > "official" project teams to operate in that fair manner. Otherwise we
> > end up blessing what is essentially a trojan horse for a given
> > organization, open-washing their project in the process. Such a project
> > can totally exist as an unofficial project (and even be developed on
> > OpenStack infrastructure) but I don't think it should be given free
> > space in our Design Summits or benefit from "OpenStack community" branding.
> >
> > So if, in a given project team, developers from one specific
> > organization benefit from access to specific knowledge or hardware
> > (think 3rd-party testing blackboxes that decide if a patch goes in, or
> > access to proprietary hardware or software that the open source code
> > primarily interfaces with), then this project team should probably be
> > rejected under the "open community" rule. Projects with a lot of drivers
> > (like Cinder) provide an interesting grey area, but as long as all
> > drivers are in and there is a fully functional (and popular) open source
> > implementation, I think no specific organization would be considered as
> > unfairly benefiting compared to others.
> >
> > A few months ago we had the discussion about what "no open core" means
> > in 2016, in the context of the Poppy team candidacy. With our reading at
> > the time we ended up rejecting Poppy partly because it was interfacing
> > with proprietary technologies. However, I think what we originally
> > wanted to ensure with this rule was that no specific organization would
> > use the OpenStack open source code as crippled bait to sell their
> > specific proprietary add-on.
> >
> > I think taking the view that OpenStack projects need to be open, level
> > collaboration playing fields encapsulates that nicely. In the Poppy
> > case, nobody in the Poppy team has an unfair advantage over others, so
> > we should not reject them purely on the grounds that this interfaces
> > with non-open-source solutions (leaving only the infrastructure/testing
> > requirement to solve). On the other hand, a Neutron plugin targeting a
> > specific piece of networking hardware would likely give an unfair
> > advantage to developers of the hardware's manufacturer (having access to
> > that gear for testing and being able to see and make changes to its
> > proprietary source code) -- that project should probably live as an
> > unofficial OpenStack project.
> >
> > Comments, thoughts ?
> >
> 
> 
>  From our perspective, we (designate) currently have a few drivers from 
> proprietary vendors, and would like to add one in the near future.
> 
> The current drivers are marked as "release compatible" - aka someone is
> nominated to test the driver throughout the release cycle, and then
> during the RC fully validate the driver.
> 
> The new driver will have 3rd party CI, to test it on every commit.
> 
> These are (very) small parts of the code base, but part of it none
> the less. If this passes, should we push these plugins to separate
> repos, and not include them as part of the Designate project?

No. What you're doing is perfectly acceptable. Obviously the more
testing you can do, the better, but it's up to the Designate team to
decide what code contributions it considers it can support as part of
it's official code base. Whether that is organized in one repository or
many is also up to the owners of the code.

The problem has come up because other teams have decided they cannot
manage the large number of disparate drivers. Those have been moved
out of the main source tree, and those repositories are now being
de-listed from the "official" list in the governance repo.

> As another idea - if we have to move them out of tree - could we have
> another "type" of project?
> 
> A lot of projects have "drivers" for vendor hardware / software -
> could there be a way of marking projects as drivers of a deliverable -
> as most of these drivers will be very tied to specific versions of
> OpenStack projects.

The location of the code is an implementation detail when it comes
to describing the thing we ship.  A "deliverable" can be made up
of more than one 

Re: [openstack-dev] [ceilometer] [stable] Re: [Openstack-stable-maint] Stable check of openstack/ceilometer failed

2016-06-14 Thread Ian Cordasco
 

-Original Message-
From: gordon chung 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: June 14, 2016 at 06:51:08
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [ceilometer] [stable] Re: 
[Openstack-stable-maint] Stable check of openstack/ceilometer failed

> i don't know if anyone is looking at this -- i'm not sure where this test is 
> even run. i usually  
> let mriedem yell at me but i take it he has bigger things on his plate now :)

Heh, I just looked through these quickly yesterday. I suspect Matt would have 
worked his way around to these.

> this seems like a pretty simple fix from the error output[1]. i guess my 
> question is: should  
> the correct fix be to cap oslo.utils? based on the error, the issue seems to 
> be total_seconds  
> method was removed. this was deprecated in Mitaka[2], so i don't think it 
> should've been  
> removed from Liberty. as this is an easy fix, i'm pretty indifferent if we 
> decide to fix  
> this rather than slow down progress. the original purpose of this method 
> (based on commit  
> messsage) seems to be related to py2.6. i don't know if this is still an 
> issue.

I wonder why more projects aren't seeing this in stable/liberty. Perhaps 
ceilometer stable/liberty isn't using upper-constraints? I think oslo.utils 
3.2.0 
(https://github.com/openstack/requirements/blob/stable/liberty/upper-constraints.txt#L202)
 is low enough to avoid this if you're using constraints. (It looks as if the 
total_seconds removal was first released in 3.12.0 
https://github.com/openstack/oslo.utils/commit/8f5e65cae3aaf8d0a89d16d8932c266151de44f7)

I'll let someone else determine if the right answer is capping stable/liberty.
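
For anyone picking this up, the "simple fix" gord refers to is essentially
swapping the removed oslo.utils helper for the stdlib timedelta method,
roughly:

    import datetime

    delta = datetime.timedelta(minutes=5)

    # Old code (helper removed from newer oslo.utils releases):
    #     from oslo_utils import timeutils
    #     seconds = timeutils.total_seconds(delta)

    # Replacement using the stdlib method (available since Python 2.7):
    seconds = delta.total_seconds()
    assert seconds == 300.0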

Cheers,
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-14 Thread Anita Kuno
On 06/14/2016 10:44 AM, Hayes, Graham wrote:
> On 14/06/2016 15:00, Thierry Carrez wrote:
>> Hi everyone,
>>
>> I just proposed a new requirement for OpenStack "official" projects,
>> which I think is worth discussing beyond the governance review:
>>
>> https://review.openstack.org/#/c/329448/
>>
>>  From an upstream perspective, I see us as being in the business of
>> providing open collaboration playing fields in order to build projects
>> to reach the OpenStack Mission. We collectively provide resources
>> (infra, horizontal teams, events...) in order to enable that open
>> collaboration.
>>
>> An important characteristic of these open collaboration grounds is that
>> they need to be a level playing field, where no specific organization is
>> being given an unfair advantage. I expect the teams that we bless as
>> "official" project teams to operate in that fair manner. Otherwise we
>> end up blessing what is essentially a trojan horse for a given
>> organization, open-washing their project in the process. Such a project
>> can totally exist as an unofficial project (and even be developed on
>> OpenStack infrastructure) but I don't think it should be given free
>> space in our Design Summits or benefit from "OpenStack community" branding.
>>
>> So if, in a given project team, developers from one specific
>> organization benefit from access to specific knowledge or hardware
>> (think 3rd-party testing blackboxes that decide if a patch goes in, or
>> access to proprietary hardware or software that the open source code
>> primarily interfaces with), then this project team should probably be
>> rejected under the "open community" rule. Projects with a lot of drivers
>> (like Cinder) provide an interesting grey area, but as long as all
>> drivers are in and there is a fully functional (and popular) open source
>> implementation, I think no specific organization would be considered as
>> unfairly benefiting compared to others.
>>
>> A few months ago we had the discussion about what "no open core" means
>> in 2016, in the context of the Poppy team candidacy. With our reading at
>> the time we ended up rejecting Poppy partly because it was interfacing
>> with proprietary technologies. However, I think what we originally
>> wanted to ensure with this rule was that no specific organization would
>> use the OpenStack open source code as crippled bait to sell their
>> specific proprietary add-on.
>>
>> I think taking the view that OpenStack projects need to be open, level
>> collaboration playing fields encapsulates that nicely. In the Poppy
>> case, nobody in the Poppy team has an unfair advantage over others, so
>> we should not reject them purely on the grounds that this interfaces
>> with non-open-source solutions (leaving only the infrastructure/testing
>> requirement to solve). On the other hand, a Neutron plugin targeting a
>> specific piece of networking hardware would likely give an unfair
>> advantage to developers of the hardware's manufacturer (having access to
>> that gear for testing and being able to see and make changes to its
>> proprietary source code) -- that project should probably live as an
>> unofficial OpenStack project.
>>
>> Comments, thoughts ?
>>
> 
> 
>  From our perspective, we (designate) currently have a few drivers from 
> proprietary vendors, and would like to add one in the near future.
> 
> The current drivers are marked as "release compatible" - aka someone is
> nominated to test the driver throughout the release cycle, and then
> during the RC fully validate the driver.
> 
> The new driver will have 3rd party CI, to test it on every commit.
> 
> These are (very) small parts of the code base, but part of it none
> the less. If this passes, should we push these plugins to separate
> repos, and not include them as part of the Designate project?
> 
> As another idea - if we have to move them out of tree - could we have
> another "type" of project?
> 
> A lot of projects have "drivers" for vendor hardware / software -
> could there be a way of marking projects as drivers of a deliverable -
> as most of these drivers will be very tied to specific versions of
> OpenStack projects.
> 
> I fully agree with the sentiment, and overall aim of the requirement,
> I just want to ensure we have as little negative impact on deployers
> as possible.
> 
>   -- Graham

I highly recommend you spend some time interacting with the Neutron,
Nova, Cinder and Ironic communities to learn how they approach this
issue. Each community has a slightly different approach to interacting
with vendors with different pain points in each approach. I think
learning from these projects regarding this issue would be a great way
to formulate your best plan for designate. Also time spent with Mike
Perez on this issue is an investment as far as I'm concerned.

Thank you,
Anita.

> 
> __
> OpenStack Development Mailing List (not for usage 

[openstack-dev] networking-sfc: unable to use SFC (ovs driver) with multiple networks

2016-06-14 Thread Banszel, MartinX
Hello,

I'd need some help with using the SFC implementation in openstack.

I use liberty version of devstack + liberty branch of networking-sfc.

It's not clear to me if the SFC instance and its networks should be separated
from the remaining virtual network topology or if it should be connected to it.

E.g. consider the following topology, where SFC and its networks net2 and net3
(one for ingress port, one for egress port) are connected to the tenants
networks. I know that all three instances can share one network, but the use case
I am trying to implement requires that every instance has its own separate network
and that there are different networks for the ingress and egress ports of the SF.

 +---+ +-+ +---+
 | VMSRC | |  VMSFC  | | VMDST |
 +---+---+ +--+---+--+ +---+---+
 | p1 (1.1.1.1) p2|   |p3  |p4 (4.4.4.4)
 ||   ||
-++--- net1   |   |  --+---+- net4
  |   |   ||
  |  ---+-+---) net2   |
  |  ---)--+--+ net3   |
  | |  |   |
  |  +--+--+--+|
  +--+ ROUTER ++
 ++


All networks are connected to a single router ROUTER. I created a flow
classifier that matches all traffic going from VMSRC to VMDST
(--logical-source-port p1 --source-ip-prefix=1.1.1.1/32
--destination-ip-prefix=4.4.4.4/32), port pair p2,p3, a port pair group
containing this port pair and a port chain containing this port pair group and
flow classifier.

If I try to ping the 4.4.4.4 address from VMSRC, it is correctly steered through
the VMSFC (where just the ip_forwarding is set to 1) and forwarded back through
the p3 port to the ROUTER.  The router finds out that there are packets with
source address 1.1.1.1 coming in on a port where they should not (the router
expects those packets on the net1 interface), so they don't pass the reverse
path filter and the router drops them.

It works when I set the rp_filter off via sysctl command in the router namespace
on the controller. But I don't want to do this -- I expect the sfc to work
without such changes.

Is such topology supported? What should the topology look like?

I have noticed that when I disconnect net2 and net3 from the ROUTER, and add
new routers ROUTER2 and ROUTER3 to the net2 and net3 networks respectively,
without connecting them to the ROUTER or the rest of the topology, OVS is able
to send the traffic to the p2 port on the ingress side. However, on the egress
side the packet is routed to ROUTER3, which drops it as it doesn't have any
route for it.

Thanks for any hints!

Best regards
Martin Banszel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][winstackers] os-win 1.0.0 release (newton)

2016-06-14 Thread no-reply
We are happy to announce the release of:

os-win 1.0.0: Windows / Hyper-V library for OpenStack projects.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-win

With package available at:

https://pypi.python.org/pypi/os-win

Please report issues through launchpad:

http://bugs.launchpad.net/os-win

For more details, please see below.

Changes in os-win 0.3.2..1.0.0
--

e34566c Fixes loading HGS namespace on early builds of Windows 10
111db69 Add method for checking whether a share is local
16a1eed Fix disk rescan method
f51fe15 Fix event handle leaks
411732e Hyper-V: Shielded VMs
1ffa212 Updated from global requirements
6741ee8 Fix iSCSI initiator utils docstring
56b95db iSCSI utils: accept rescan_attempts arg in login method
8b11dde Add missing wmi to requirements.txt
f6d3eff Updated from global requirements
a0fd9ce Add method for retrieving local share paths
9ce38cf Updated from global requirements
356b6cc Events: use tpool only if thread module is patched
53a8d6d Fix event listeners
9df78ab Updated from global requirements
c478d43 Adds missing attribute from get_cpus_info query
70d3658 Fix retrieving VM notes race condition
a4e6cb7 Fix retrieving VHDX block size
d9c2c1b Updated from global requirements
47aebc1 Fix retrieving VM physical disk mapping
d073b7f Copies get_share_capacity_info to diskutils
ca4eb8a python3: Fixes vhdutils internal VHDX size
c5ea2aa Consistently raise exception if port not found
11f7a55 Ensure vmutils honors the host argument
de9f70d Sets OsWinBaseTestCase as base class for unit tests
edc939e Fixes PyMI compatiblity issue
5f7b473 Fixes vmutils take_vm_snapshot method
e61318b Adds check for VLAN and VSID operations
25aa5eb Fix named pipe handler cleanup regression
dc3456b Fixes vmutils get_vm_generation method
c88f49f Improve clusterutils with new pyMI features
74dabea Ensure namedpipe IO workers clean up handles when stopping
21a882c switch to post-versioning
25255d1 Bump version to 0.3.3

Diffstat (except docs and test files)
-

os_win/__init__.py |   8 ++
os_win/_utils.py   |   8 ++
os_win/constants.py|   9 ++
.../storage/initiator/test_base_iscsi_utils.py |   4 +-
.../utils/storage/initiator/test_iscsi_utils.py|   6 +-
.../storage/initiator/test_iscsi_wmi_utils.py  |   4 +-
os_win/utils/baseutils.py  |  41 ++-
os_win/utils/compute/clusterutils.py   | 100 +++-
os_win/utils/compute/livemigrationutils.py |  29 ++---
os_win/utils/compute/vmutils.py| 104 ++---
os_win/utils/compute/vmutils10.py  |  88 ++
os_win/utils/hostutils.py  |  20 ++--
os_win/utils/hostutils10.py|  56 +
os_win/utils/io/ioutils.py |  17 ++-
os_win/utils/io/namedpipe.py   |  48 ++--
os_win/utils/metrics/metricsutils.py   |   2 +-
os_win/utils/network/networkutils.py   |  87 --
os_win/utils/pathutils.py  |  23 
os_win/utils/storage/diskutils.py  |  37 +-
os_win/utils/storage/initiator/iscsi_utils.py  |   7 +-
os_win/utils/storage/smbutils.py   |  32 +
os_win/utils/storage/virtdisk/vhdutils.py  |  14 ++-
.../utils/storage/virtdisk/virtdisk_constants.py   |   1 +
os_win/utilsfactory.py |  16 +--
requirements.txt   |  11 +-
setup.cfg  |   1 -
test-requirements.txt  |   1 +
47 files changed, 1202 insertions(+), 366 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 462d6db..70eb018 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -6 +6 @@ pbr>=1.6 # Apache-2.0
-Babel>=1.3 # BSD
+Babel>=2.3.4 # BSD
@@ -9,2 +9,2 @@ eventlet!=0.18.3,>=0.18.2 # MIT
-oslo.concurrency>=3.5.0 # Apache-2.0
-oslo.config>=3.7.0 # Apache-2.0
+oslo.concurrency>=3.8.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
@@ -12 +12 @@ oslo.log>=1.14.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
@@ -14 +14 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.service>=1.0.0 # Apache-2.0
+oslo.service>=1.10.0 # Apache-2.0
@@ -17,0 +18 @@ PyMI>=1.0.0;sys_platform=='win32' # Apache 2.0 License
+wmi;sys_platform=='win32'  # MIT
diff --git a/test-requirements.txt b/test-requirements.txt
index 8bd244e..92d4dd8 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7,0 +8 @@ coverage>=3.6 # Apache-2.0
+ddt>=1.0.1 # MIT



__
OpenStack Development Mailing List (not for usage 

Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-14 Thread Fox, Kevin M
Some counter arguments for keeping them in:
 * It gives the developers of the code being plugged into a better view of how 
the plugin API is used and what might break if they change it.
 * Vendors don't tend to support drivers forever. Often they drop support for a 
driver once the "new" hardware comes out. Keeping the driver open and official 
gives non-vendors a place to fix it in the open after the vendor abandons it, 
while operators still have the hardware they need to support.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Tuesday, June 14, 2016 7:15 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all][tc] Require a level playing field for
OpenStack projects

Excerpts from Thierry Carrez's message of 2016-06-14 15:57:10 +0200:
> Hi everyone,
>
> I just proposed a new requirement for OpenStack "official" projects,
> which I think is worth discussing beyond the governance review:
>
> https://review.openstack.org/#/c/329448/
>
>  From an upstream perspective, I see us as being in the business of
> providing open collaboration playing fields in order to build projects
> to reach the OpenStack Mission. We collectively provide resources
> (infra, horizontal teams, events...) in order to enable that open
> collaboration.
>
> An important characteristic of these open collaboration grounds is that
> they need to be a level playing field, where no specific organization is
> being given an unfair advantage. I expect the teams that we bless as
> "official" project teams to operate in that fair manner. Otherwise we
> end up blessing what is essentially a trojan horse for a given
> organization, open-washing their project in the process. Such a project
> can totally exist as an unofficial project (and even be developed on
> OpenStack infrastructure) but I don't think it should be given free
> space in our Design Summits or benefit from "OpenStack community" branding.
>
> So if, in a given project team, developers from one specific
> organization benefit from access to specific knowledge or hardware
> (think 3rd-party testing blackboxes that decide if a patch goes in, or
> access to proprietary hardware or software that the open source code
> primarily interfaces with), then this project team should probably be
> rejected under the "open community" rule. Projects with a lot of drivers
> (like Cinder) provide an interesting grey area, but as long as all
> drivers are in and there is a fully functional (and popular) open source
> implementation, I think no specific organization would be considered as
> unfairly benefiting compared to others.
>
> A few months ago we had the discussion about what "no open core" means
> in 2016, in the context of the Poppy team candidacy. With our reading at
> the time we ended up rejecting Poppy partly because it was interfacing
> with proprietary technologies. However, I think what we originally
> wanted to ensure with this rule was that no specific organization would
> use the OpenStack open source code as crippled bait to sell their
> specific proprietary add-on.
>
> I think taking the view that OpenStack projects need to be open, level
> collaboration playing fields encapsulates that nicely. In the Poppy
> case, nobody in the Poppy team has an unfair advantage over others, so
> we should not reject them purely on the grounds that this interfaces
> with non-open-source solutions (leaving only the infrastructure/testing
> requirement to solve). On the other hand, a Neutron plugin targeting a
> specific piece of networking hardware would likely give an unfair
> advantage to developers of the hardware's manufacturer (having access to
> that gear for testing and being able to see and make changes to its
> proprietary source code) -- that project should probably live as an
> unofficial OpenStack project.
>
> Comments, thoughts ?
>

I think external device-specific drivers are a much clearer case than
Poppy or Cinder. It's a bit unfortunate that the dissolution of some
projects into "core" and "driver" repositories is raising this issue,
but we've definitely had better success with some project teams than
others when it comes to vendors collaborating on core components.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] OpenStack Newton B1 for Ubuntu 16.04 LTS and Ubuntu 16.10

2016-06-14 Thread Martinx - ジェームズ
I can't wait to try "VLAN Aware VMs"... But it is not there yet... Maybe on
Newton B2...

https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms

This is *very* important for NFV instances on OpenStack... Many telcos
need this...

Cheers!
Thiago

On 14 June 2016 at 07:25, Jeffrey Zhang  wrote:

> Very cool! We have been waiting for this for a long time.
> We (the kolla project) will enable the ubuntu binary deploy gate in the CI.
>
> On Mon, Jun 13, 2016 at 8:54 PM, Corey Bryant 
> wrote:
>
>> Hi All,
>>
>> The Ubuntu OpenStack team is pleased to announce the general availability
>> of the OpenStack Newton B1 milestone in Ubuntu 16.10 and for Ubuntu 16.04
>> LTS via the Ubuntu Cloud Archive.
>>
>> Ubuntu 16.04 LTS
>> 
>>
>> You can enable the Ubuntu Cloud Archive pocket for OpenStack Newton on
>> Ubuntu 16.04 installations by running the following commands:
>>
>> echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu
>> xenial-updates/newton main" | sudo tee
>> /etc/apt/sources.list.d/newton-uca.list
>> sudo apt-get install -y ubuntu-cloud-keyring
>> sudo apt-get update
>>
>> The Ubuntu Cloud Archive for Newton includes updates for Cinder,
>> Designate, Glance, Heat, Horizon, Keystone, Manila, Neutron, Neutron-FWaaS,
>> Neutron-LBaaS, Neutron-VPNaaS, Nova, and Swift (2.8.0).
>>
>> You can check out the full list of packages and versions at [0].
>>
>> Ubuntu 16.10
>> --
>>
>> No extra steps required; just start installing OpenStack!
>>
>> Branch Package Builds
>> ---
>>
>> We’ve resurrected the branch package builds of OpenStack projects that we
>> had in place a while back - if you want to try out the latest master branch
>> updates, or updates to stable branches, the following PPAs are now
>> up-to-date and maintained:
>>
>>sudo add-apt-repository ppa:openstack-ubuntu-testing/liberty
>>sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
>>sudo add-apt-repository ppa:openstack-ubuntu-testing/newton
>>
>> Bear in mind these are built roughly per commit (we check for new commits
>> every 30 minutes at the moment), so your mileage may vary from time to time.
>>
>> Reporting bugs
>> -
>>
>> Any issues please report bugs using the 'ubuntu-bug' tool:
>>
>>   sudo ubuntu-bug nova-conductor
>>
>> this will ensure that bugs get logged in the right place in Launchpad.
>>
>> Thanks and have fun!
>>
>> Regards,
>> Corey
>> (on behalf of the Ubuntu OpenStack team)
>>
>> [0]
>> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/newton_versions.html
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-14 Thread Fox, Kevin M
I think there's a weird cycle in plugging into nova too...

Say the cloud has no native container support added except for Zun. The 
container API Zun provides could be mapped to a Zun plugin that:
 1. nova boot --image centos --user-data 'yum install -y docker; systemctl start 
docker'
 2. starts the requested container on that docker-supporting VM.

So if you add a flavor to nova that maps to a container, and Zun accepts a 
flavor when launching containers, you have to know to filter out the flavors 
that might come right back to Zun so you don't get into an infinite loop of 
nested docker instances.

Thanks,
Kevin


From: Flavio Percoco [fla...@redhat.com]
Sent: Tuesday, June 14, 2016 12:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap

On 13/06/16 18:46 +, Hongbin Lu wrote:
>
>
>> -Original Message-
>> From: Sudipto Biswas [mailto:sbisw...@linux.vnet.ibm.com]
>> Sent: June-13-16 1:43 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap
>>
>>
>>
>> On Monday 13 June 2016 06:57 PM, Flavio Percoco wrote:
>> > On 12/06/16 22:10 +, Hongbin Lu wrote:
>> >> Hi team,
>> >>
>> >> During the team meetings these weeks, we collaborated the initial
>> >> project roadmap. I summarized it as below. Please review.
>> >>
>> >> * Implement a common container abstraction for different container
>> >> runtimes. The initial implementation will focus on supporting basic
>> >> container operations (i.e. CRUD).
>> >
>> > What COE's are being considered for the first implementation? Just
>> > docker and kubernetes?
>[Hongbin Lu] Container runtimes, docker in particular, are being considered 
>for the first implementation. We discussed how to support COEs in Zun but 
>cannot reach an agreement on the direction. I will leave it for further 
>discussion.
>
>> >
>> >> * Focus on non-nested containers use cases (running containers on
>> >> physical hosts), and revisit nested containers use cases (running
>> >> containers on VMs) later.
>> >> * Provide two set of APIs to access containers: The Nova APIs and
>> the
>> >> Zun-native APIs. In particular, the Zun-native APIs will expose full
>> >> container capabilities, and Nova APIs will expose capabilities that
>> >> are shared between containers and VMs.
>> >
>> > - Is the nova side going to be implemented in the form of a Nova
>> > driver (like ironic's?)? What do you mean by APIs here?
>[Hongbin Lu] Yes, the plan is to implement a Zun virt-driver for Nova. The 
>idea is similar to Ironic.
>
>> >
>> > - What operations are we expecting this to support (just CRUD
>> > operations on containers?)?
>[Hongbin Lu] We are working on finding the list of operations to support. 
>There is a BP for tracking this effort: 
>https://blueprints.launchpad.net/zun/+spec/api-design .
>
>> >
>> > I can see this driver being useful for specialized services like
>> Trove
>> > but I'm curious/concerned about how this will be used by end users
>> > (assuming that's the goal).
>[Hongbin Lu] I agree that end users might not be satisfied by basic container 
>operations like CRUD. We will discuss how to offer more to make the service
>useful in production.

I'd probably leave this out for now but this is just my opinion. Personally, I
think that users, if presented with both APIs - nova's and Zun's - will prefer
Zun's.

Specifically, you don't interact with a container the same way you interact with
a VM (but I'm sure you know all this far better than I do). I guess my concern is
that I don't see too much value in this other than allowing specialized services
to run containers through Nova.


>> >
>> >
>> >> * Leverage Neutron (via Kuryr) for container networking.
>> >> * Leverage Cinder for container data volume.
>> >> * Leverage Glance for storing container images. If necessary,
>> >> contribute to Glance for missing features (i.e. support layer of
>> >> container images).
>> >
>> > Are you aware of https://review.openstack.org/#/c/249282/ ?
>> This support is very minimalistic in nature, since it doesn't do
>> anything beyond just storing a docker FS tar ball.
>> I think it was felt that further support for docker FS was needed.
>> While there were suggestions of a private docker registry, having
>> something in band (w.r.t. openstack) may be desirable.
>[Hongbin Lu] Yes, Glance doesn't support layer of container images which is a 
>missing feature.

Yup, I didn't mean to imply that it would do it all for you, rather that
there's been some progress there. As far as layered containers go, you might
want to look
into Glare.

Flavio

>> >> * Support enforcing multi-tenancy by doing the following:
>> >> ** Add configurable options for scheduler to enforce neighboring
>> >> containers belonging to the same tenant.
>> >> ** Support hypervisor-based container runtimes.
>> >>
>> >> The following topics have been 

Re: [openstack-dev] [keystone] Changing the project name uniqueness constraint

2016-06-14 Thread Morgan Fainberg
On Jun 14, 2016 00:46, "Henry Nash"  wrote:

>
> On 14 Jun 2016, at 07:34, Morgan Fainberg 
> wrote:
>
>
>
> On Mon, Jun 13, 2016 at 3:30 PM, Henry Nash  wrote:
>
>> So, I think it depends on what level of compatibility we are aiming at. Let
>> me articulate them, and we can agree which we want:
>>
>> C1) In all versions of our APIs today (v2 and v3.0 to v3.6), you have
>> been able to issue an auth request which used project/tenant name as the
>> scoping directive (with v3 you need a domain component as well, but that’s
>> not relevant for this discussion). In these APIs, we absolutely expect that
>> if you could issue an auth request to. say project “test”, in, say, v3.X,
>> then you could absolutely issue the exact same command at V3.(X+1). This
>> has remained true, even when we introduced project hierarchies, i.e.: if I
>> create:
>>
>> /development/myproject/test
>>
>> ...then I can still scope directly to the test project by simply
>> specifying “test” as the project name (since, of course, all project names
>> must still be unique in the domain). We never want to break this for so
>> long as we formally support any APIs that once allowed this.
>>
>> C2) To aid you issuing an auth request scoped by project (either name or
>> id), we support a special API as part of the auth URL (GET /auth/projects)
>> that lists the projects the caller *could* scope to (i.e. those they have
>> any kind of role on). You can take the “name” or “id” returned by this API
>> and plug it directly into the auth request. Again for any API we currently
>> support, we can’t break this.
>>
>> C3) The name attribute of a project is its node-name in the hierarchy. If
>> we decide to change this in a future API, we would not want a client using
>> the existing API to get surprised and suddenly receive a path instead of
>> the just the node-name (e.g. what if this was a UI of some type).
>>
>> Given all the above, there is no solution that can keep the above all
>> true and allow more than one project of the same name in, say, v3.7 of the
>> API. Even if we relaxed C2 and C3, C1 can never be guaranteed to still be
>> supported. Neither of the original proposed solutions can address this
>> (since it is a data modelling problem, not an API problem).
>>
>> However, given that we will have, for the first time, the ability to
>> microversion the Identity API starting with 3.7, there are things we can do
>> to start us down this path. Let me re-articulate the options I am proposing:
>>
>> Option 1A) In v3.7 we add a ‘path_name' attribute to a project entity,
>> which is hence returned by any API that returns a project entity. The
>> ‘path_name' attribute will contain the full path name, including the
>> project itself. (Note that clients speaking 3.6 and earlier will not see
>> this new attribute). Further, for clients speaking 3.7 and later, we add
>> support to allow a ‘path_name' (as an alternative to ‘name' or ‘id') to be
>> used in auth scoping. We do not (yet) relax any uniqueness constraints, but
>> mark API 3.6 and earlier as deprecated, as well as using the ‘name’
>> attribute in the auth request. (we still support all these, we just mark
>> them as deprecated). At some time in the future (e.g. 3.8), we remove
>> support for using ‘name’ for auth, insisting on the use of ‘path_name’
>> instead. Sometime later (e.g. 3.10) we remove support for 3.8 and earlier.
>> Then and only then, do we relax the uniqueness constraint allowing projects
>> with duplicate node-names (but with different parents).
>>
>> Option 1B) The same as 1A, but we insist on path_name use in auth in v3.7
>> (i.e. no grace-period for still using just ’name', instead relying on the
>> fact that 3.6 clients will still work just fine). Then later (e.g. perhaps
>> v3.9), we remove support for v3.6 and before…and relax the uniqueness
>> constraint.
>>
>>
> Let me say that the assumption that we are "removing" 3.6 should stop right
> now. I don't want to embark on the "when we remove this" as an option or
> discuss how we remove previous versions. Please lets assume for the sake of
> this conversation unless we have a major API version increase (API v4 and
> do not expose v4 projects via v3 API) this is unlikely happen. Deprecated
> or not, planning the removal of current supported API auth functionality is
> not on the table. In v3 we are not going to "relax" the uniqueness
> constraint in the foreseeable future. Just assume v3.6 is going to live
> forever for now and we can revisit when/if limits on microversion lower
> bounds are addressed in OpenStack with TC direction/guidance.
>
>
> Why should we not be able to remove a microversion (once keystone properly
> supports microversioning, as we will in 3.7)? The cross project guidelines
> (see:
> https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html),
> clearly give an example of how we would support the dropping of 

[openstack-dev] [puppet] Re: weekly meeting #85

2016-06-14 Thread Emilien Macchi
On Mon, Jun 13, 2016 at 8:57 AM, Emilien Macchi <emil...@redhat.com> wrote:
> Hi Puppeteers!
>
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting-4.
>
> Here's a first agenda:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160614
>
> Feel free to add more topics, and any outstanding bug and patch.
>
> See you tomorrow!
> Thanks,
> --
> Emilien Macchi

Notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-06-14-15.00.html

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.messaging 5.4.0 release (newton)

2016-06-14 Thread no-reply
We are thrilled to announce the release of:

oslo.messaging 5.4.0: Oslo Messaging API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

Changes in oslo.messaging 5.3.0..5.4.0
--

8a9d828 [zmq] Remove unused Request.close method
282cbc2 Add query paramereters to TransportURL
3109828 Fix temporary problems with pika unit tests
956bbb9 [zmq] Periodic updates of endpoints connections

Diffstat (except docs and test files)
-

oslo_messaging/_drivers/common.py  | 65 ++
oslo_messaging/_drivers/impl_zmq.py|  6 +-
oslo_messaging/_drivers/pika_driver/pika_engine.py |  2 +
.../_drivers/zmq_driver/broker/zmq_queue_proxy.py  | 40 ---
.../dealer/zmq_dealer_publisher_proxy.py   | 13 
.../_drivers/zmq_driver/client/zmq_client_base.py  | 37 +--
.../_drivers/zmq_driver/client/zmq_request.py  |  3 -
.../zmq_driver/matchmaker/matchmaker_redis.py  | 33 ++
.../server/consumers/zmq_consumer_base.py  | 21 +++---
.../server/consumers/zmq_dealer_consumer.py| 18 -
.../_drivers/zmq_driver/server/zmq_server.py   |  6 +-
oslo_messaging/_drivers/zmq_driver/zmq_socket.py   |  2 +
oslo_messaging/_drivers/zmq_driver/zmq_updater.py  | 55 
oslo_messaging/transport.py| 40 ++-
17 files changed, 364 insertions(+), 84 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.reports 1.10.0 release (newton)

2016-06-14 Thread no-reply
We are glad to announce the release of:

oslo.reports 1.10.0: oslo.reports library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.reports

With package available at:

https://pypi.python.org/pypi/oslo.reports

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.reports

For more details, please see below.

Changes in oslo.reports 1.9.0..1.10.0
-

3804e06 Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt  | 2 +-
test-requirements.txt | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 7d88640..62f078c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -11 +11 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index f1d52e8..ff73ef3 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -13 +13 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
-oslo.config>=3.9.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] taskflow 2.2.0 release (newton)

2016-06-14 Thread no-reply
We are psyched to announce the release of:

taskflow 2.2.0: Taskflow structured state management library.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

With package available at:

https://pypi.python.org/pypi/taskflow

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

For more details, please see below.

Changes in taskflow 2.1.0..2.2.0


f885bc1 Don't use deprecated method timeutils.isotime

Diffstat (except docs and test files)
-

taskflow/persistence/models.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.versionedobjects 1.11.0 release (newton)

2016-06-14 Thread no-reply
We are overjoyed to announce the release of:

oslo.versionedobjects 1.11.0: Oslo Versioned Objects library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.versionedobjects

With package available at:

https://pypi.python.org/pypi/oslo.versionedobjects

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.versionedobjects

For more details, please see below.

Changes in oslo.versionedobjects 1.10.0..1.11.0
---

1d0e199 Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 29cc6f2..52bdb85 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7,2 +7,2 @@ oslo.config>=3.10.0 # Apache-2.0
-oslo.context>=2.2.0 # Apache-2.0
-oslo.messaging>=4.5.0 # Apache-2.0
+oslo.context>=2.4.0 # Apache-2.0
+oslo.messaging>=5.2.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.log 3.10.0 release (newton)

2016-06-14 Thread no-reply
We are happy to announce the release of:

oslo.log 3.10.0: oslo.log library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

Changes in oslo.log 3.9.0..3.10.0
-

b694235 Updated from global requirements
2a147c5 log: don't create foo.log

Diffstat (except docs and test files)
-

requirements.txt| 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index c8905bf..26c3390 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.10.0 # Apache-2.0
-oslo.context>=2.2.0 # Apache-2.0
+oslo.context>=2.4.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] tooz 1.39.0 release (newton)

2016-06-14 Thread no-reply
We are jubilant to announce the release of:

tooz 1.39.0: Coordination library for distributed systems.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

For more details, please see below.

Changes in tooz 1.38.0..1.39.0
--

fdcaff6 Updated from global requirements
601fb26 Ensure etcd is in developer and driver docs

Diffstat (except docs and test files)
-

requirements.txt  |  2 +-
test-requirements.txt |  2 +-
4 files changed, 19 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 7e6588e..e6cff75 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -16 +16 @@ futurist>=0.11.0 # Apache-2.0
-oslo.utils>=3.9.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index b7f0925..9e28f22 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -18 +18 @@ coverage>=3.6 # Apache-2.0
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+fixtures>=3.0.0 # Apache-2.0/BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.serialization 2.9.0 release (newton)

2016-06-14 Thread no-reply
We are glad to announce the release of:

oslo.serialization 2.9.0: Oslo Serialization library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.serialization

With package available at:

https://pypi.python.org/pypi/oslo.serialization

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.serialization

For more details, please see below.

Changes in oslo.serialization 2.8.0..2.9.0
--

8943c73 Support serializing ipaddress objs with jsonutils

Diffstat (except docs and test files)
-

oslo_serialization/jsonutils.py|  6 ++
test-requirements.txt  |  1 +
3 files changed, 19 insertions(+), 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 61e43b4..013ac33 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -4,0 +5 @@ hacking<0.11,>=0.10.0
+ipaddress>=1.0.7;python_version<'3.3'  # PSF



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.vmware 2.9.0 release (newton)

2016-06-14 Thread no-reply
We are amped to announce the release of:

oslo.vmware 2.9.0: Oslo VMware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.vmware

With package available at:

https://pypi.python.org/pypi/oslo.vmware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.vmware

For more details, please see below.

Changes in oslo.vmware 2.8.0..2.9.0
---

9e75608 Imported Translations from Zanata
c191847 Refactor VmdkWriteHandle and VmdkReadHandle

Diffstat (except docs and test files)
-

.../en_GB/LC_MESSAGES/oslo_vmware-log-error.po |  62 
.../en_GB/LC_MESSAGES/oslo_vmware-log-info.po  |  27 ++
.../en_GB/LC_MESSAGES/oslo_vmware-log-warning.po   |  53 +++
.../locale/en_GB/LC_MESSAGES/oslo_vmware.po| 228 
oslo_vmware/locale/oslo_vmware-log-error.pot   |  71 
oslo_vmware/locale/oslo_vmware-log-info.pot|  35 --
oslo_vmware/locale/oslo_vmware-log-warning.pot |  65 
oslo_vmware/locale/oslo_vmware.pot | 285 ---
oslo_vmware/rw_handles.py  | 387 ++---
.../locale/en_GB/LC_MESSAGES/releasenotes.po   |  30 ++
11 files changed, 628 insertions(+), 690 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.policy 1.10.0 release (newton)

2016-06-14 Thread no-reply
We are eager to announce the release of:

oslo.policy 1.10.0: Oslo Policy library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.policy

With package available at:

https://pypi.python.org/pypi/oslo.policy

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.policy

For more details, please see below.

Changes in oslo.policy 1.9.0..1.10.0


6aa8551 Imported Translations from Zanata
9050c42 Improve policy sample generation testing
85ebe9e Add helper scripts for generating policy info

Diffstat (except docs and test files)
-

oslo_policy/generator.py |  96 +++-
oslo_policy/locale/ja/LC_MESSAGES/oslo_policy.po |  17 ++-
setup.cfg|   2 +
5 files changed, 344 insertions(+), 31 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.privsep 1.8.0 release (newton)

2016-06-14 Thread no-reply
We are delighted to announce the release of:

oslo.privsep 1.8.0: OpenStack library for privilege separation

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.privsep

With package available at:

https://pypi.python.org/pypi/oslo.privsep

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.privsep

For more details, please see below.

Changes in oslo.privsep 1.7.0..1.8.0


6bcde24 Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt  | 4 ++--
test-requirements.txt | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 11e30ae..e77185a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7,2 +7,2 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.config>=3.9.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
+oslo.config>=3.10.0 # Apache-2.0
+oslo.utils>=3.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 3c4d32f..634e4c1 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -7,2 +7,2 @@ oslotest>=1.10.0 # Apache-2.0
-mock>=1.2 # BSD
-fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
+mock>=2.0 # BSD
+fixtures>=3.0.0 # Apache-2.0/BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.middleware 3.13.0 release (newton)

2016-06-14 Thread no-reply
We are happy to announce the release of:

oslo.middleware 3.13.0: Oslo Middleware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.middleware

With package available at:

https://pypi.python.org/pypi/oslo.middleware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware

For more details, please see below.

Changes in oslo.middleware 3.12.0..3.13.0
-

ef0cf8b Fix spelling of config option help

Diffstat (except docs and test files)
-

oslo_middleware/http_proxy_to_wsgi.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-14 Thread Chris Friesen

On 06/13/2016 01:11 PM, Doug Hellmann wrote:

I'm trying to pull together some information about contributions
that OpenStack community members have made *upstream* of OpenStack,
via code, docs, bug reports, or anything else to dependencies that
we have.

If you've made a contribution of that sort, I would appreciate a
quick note.  Please reply off-list, there's no need to spam everyone,
and I'll post the summary if folks want to see it.


Linux kernel.  For the second of these I was particularly surprised that nobody 
in OpenStack had stumbled over it.


https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=d6d5e999e5df67f8ec20b6be45e2229455ee3699
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f9c904b7613b8b4c85b10cd6b33ad41b2843fa9d


Distro packaging:  I reported a dependency bug in the iptables-services package 
for Fedora.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-14 Thread Hayes, Graham
On 14/06/2016 15:00, Thierry Carrez wrote:
> Hi everyone,
>
> I just proposed a new requirement for OpenStack "official" projects,
> which I think is worth discussing beyond the governance review:
>
> https://review.openstack.org/#/c/329448/
>
>  From an upstream perspective, I see us as being in the business of
> providing open collaboration playing fields in order to build projects
> to reach the OpenStack Mission. We collectively provide resources
> (infra, horizontal teams, events...) in order to enable that open
> collaboration.
>
> An important characteristic of these open collaboration grounds is that
> they need to be a level playing field, where no specific organization is
> being given an unfair advantage. I expect the teams that we bless as
> "official" project teams to operate in that fair manner. Otherwise we
> end up blessing what is essentially a trojan horse for a given
> organization, open-washing their project in the process. Such a project
> can totally exist as an unofficial project (and even be developed on
> OpenStack infrastructure) but I don't think it should be given free
> space in our Design Summits or benefit from "OpenStack community" branding.
>
> So if, in a given project team, developers from one specific
> organization benefit from access to specific knowledge or hardware
> (think 3rd-party testing blackboxes that decide if a patch goes in, or
> access to proprietary hardware or software that the open source code
> primarily interfaces with), then this project team should probably be
> rejected under the "open community" rule. Projects with a lot of drivers
> (like Cinder) provide an interesting grey area, but as long as all
> drivers are in and there is a fully functional (and popular) open source
> implementation, I think no specific organization would be considered as
> unfairly benefiting compared to others.
>
> A few months ago we had the discussion about what "no open core" means
> in 2016, in the context of the Poppy team candidacy. With our reading at
> the time we ended up rejecting Poppy partly because it was interfacing
> with proprietary technologies. However, I think what we originally
> wanted to ensure with this rule was that no specific organization would
> use the OpenStack open source code as crippled bait to sell their
> specific proprietary add-on.
>
> I think taking the view that OpenStack projects need to be open, level
> collaboration playing fields encapsulates that nicely. In the Poppy
> case, nobody in the Poppy team has an unfair advantage over others, so
> we should not reject them purely on the grounds that this interfaces
> with non-open-source solutions (leaving only the infrastructure/testing
> requirement to solve). On the other hand, a Neutron plugin targeting a
> specific piece of networking hardware would likely give an unfair
> advantage to developers of the hardware's manufacturer (having access to
> that gear for testing and being able to see and make changes to its
> proprietary source code) -- that project should probably live as an
> unofficial OpenStack project.
>
> Comments, thoughts ?
>


 From our perspective, we (designate) currently have a few drivers from 
proprietary vendors, and would like to add one in the near future.

The current drivers are marked as "release compatible" - aka someone is
nominated to test the driver throughout the release cycle, and then
during the RC fully validate the driver.

The new driver will have 3rd party CI, to test it on every commit.

These are (very) small parts of the code base, but part of it nonetheless.
If this passes, should we push these plugins to separate repos, and not
include them as part of the Designate project?

As another idea - if we have to move them out of tree - could we have
another "type" of project?

A lot of projects have "drivers" for vendor hardware / software -
could there be a way of marking projects as drivers of a deliverable -
as most of these drivers will be very tied to specific versions of
OpenStack projects.

I fully agree with the sentiment, and overall aim of the requirement,
I just want to ensure we have as little negative impact on deployers
as possible.

  -- Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [sr-iov] rename *device* config options to *interface*

2016-06-14 Thread Jay Pipes

On 06/13/2016 01:22 AM, Andreas Scheuring wrote:

While reviewing [1] I got hung up on the terms "device" and "interface".
It seems like in sr-iov agent they are used in a different manner than
in the linuxbridge agent.

For example, the lb agent uses a config option
"physical_interface_mappings" (mapping between the bridge's uplink interface
and the physnet). A similar option in the sr-iov agent is named
"physical_device_mappings" (mapping between PF and physnet -> missing in the
config reference for some reason [2]). In the l2 agent context, a
variable named device typically refers to a port-specific device
(e.g. the tap device) and not to a shared host device (like eth0).
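
For illustration, the relevant bits of the two agent configs look roughly like
this (the interface names are just examples):

   # linuxbridge agent config
   [linux_bridge]
   physical_interface_mappings = physnet1:eth0

   # sr-iov agent config
   [sriov_nic]
   physical_device_mappings = physnet2:eth1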

Now that patchset [1] introduces a new agent extension for the lb & ovs agents,
including a new config option "shared_physical_device_mappings", I
got a bit confused during the review, as now in the lb context
"device" is something different (namely a physical interface).

Would it make sense to rename all the sr-iov options from *device* to
*interface* to stay consistent and to have a clear separation between
port-specific and shared host devices?

My proposal is to name
- shared host device: interface
- port specific devices: device


I kind of think in the reverse, actually... For host physical devices, I 
refer to them as devices. For Neutron port-specific stuff, I think of 
them as interfaces (due to the relationship to "vnic" in my head...).


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] theoretical race between live migration and resource audit?

2016-06-14 Thread Chris Friesen
Under normal circumstances a bit of resource tracking error is generally okay. 
However in the case of CPU pinning it's a major problem because it's not caught 
at instance boot time, so you end up with two instances that both think they 
have exclusive access to one or more host CPUs.


If we get into this scenario it ends up raising a CPUPinningInvalid exception 
during the resource audit, which causes the audit to be aborted.


Chris

On 06/10/2016 02:36 AM, Matthew Booth wrote:

Yes, this is a race.

However, it's my understanding that this is 'ok'. The resource tracker doesn't
claim to be 100% accurate at all times, right? Otherwise why would it update
itself in a periodic task in the first place? It's my understanding that the
resource tracker is basically a best effort cache, and that scheduling decisions
can still fail at the host. The resource tracker will fix itself next time it
runs via its periodic task.

Matt (not a scheduler person)

On Thu, Jun 9, 2016 at 10:41 PM, Chris Friesen wrote:

Hi,

I'm wondering if we might have a race between live migration and the
resource audit.  I've included a few people on the receiver list that have
worked directly with this code in the past.

In _update_available_resource() we have code that looks like this:

instances = objects.InstanceList.get_by_host_and_node()
self._update_usage_from_instances()
migrations = objects.MigrationList.get_in_progress_by_host_and_node()
self._update_usage_from_migrations()


In post_live_migration_at_destination() we do this (updating the host and
node as well as the task state):
 instance.host = self.host
 instance.task_state = None
 instance.node = node_name
 instance.save(expected_task_state=task_states.MIGRATING)


And in _post_live_migration() we update the migration status to "completed":
 if migrate_data and migrate_data.get('migration'):
 migrate_data['migration'].status = 'completed'
 migrate_data['migration'].save()


Both of the latter routines are not serialized by the
COMPUTE_RESOURCE_SEMAPHORE, so they can race relative to the code in
_update_available_resource().


I'm wondering if we can have a situation like this:

1) migration in progress
2) We start running _update_available_resource() on destination, and we call
instances = objects.InstanceList.get_by_host_and_node().  This will not
return the migrating instance, because it is not yet on the destination host.
3) The migration completes and we call post_live_migration_at_destination(),
which sets the host/node/task_state on the instance.
4) In _update_available_resource() on destination, we call migrations =
objects.MigrationList.get_in_progress_by_host_and_node().  This will return
the migration for the instance in question, but when we run
self._update_usage_from_migrations() the uuid will not be in "instances" and
so we will use the instance from the newly-queried migration.  We will then
ignore the instance because it is not in a "migrating" state.

Am I imagining things, or is there a race here?  If so, the negative effects
would be that the resources of the migrating instance would be "lost",
allowing a newly-scheduled instance to claim the same resources (PCI
devices, pinned CPUs, etc.)

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-14 Thread Ed Leafe
On Jun 14, 2016, at 8:57 AM, Thierry Carrez  wrote:

> A few months ago we had the discussion about what "no open core" means in 
> 2016, in the context of the Poppy team candidacy. With our reading at the 
> time we ended up rejecting Poppy partly because it was interfacing with 
> proprietary technologies. However, I think what we originally wanted to 
> ensure with this rule was that no specific organization would use the 
> OpenStack open source code as crippled bait to sell their specific 
> proprietary add-on.

The problem I saw with Poppy was that, since it depended on a proprietary 
product, there was no way to run any meaningful testing with it, as you 
can’t simply download that product into your testing environment. Had there 
been an equivalent free software implementation, I think many would not have 
had as strong an objection to including Poppy.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-14 Thread Chris Friesen

On 06/13/2016 02:17 PM, Paul Michali wrote:

Hmm... I tried Friday and again today, and I'm not seeing the VMs being evenly
created on the NUMA nodes. Every Cirros VM is created on nodeid 0.

I have the m1.small flavor (2 GB) selected and am using hw:numa_nodes=1 and
hw:mem_page_size=2048 flavor-key settings. Each VM is consuming 1024 huge pages
(of size 2MB), but is on nodeid 0 always. Also, it seems that when I reach 1/2
of the total number of huge pages used, libvirt gives an error saying there is
not enough memory to create the VM. Is it expected that the huge pages are
"allocated" to each NUMA node?


Yes, any given memory page exists on one NUMA node, and a single-NUMA-node VM 
will be constrained to a single host NUMA node and will use memory from that 
host NUMA node.


You can see and/or adjust how many hugepages are available on each NUMA node via 
/sys/devices/system/node/nodeX/hugepages/hugepages-2048kB/* where X is the host 
NUMA node number.
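
For example, on the compute host (and assuming the 2 MB page size above):

   # pages available / free on host NUMA node 0
   cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
   cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
   # allocate 2048 x 2 MB pages on node 0 (as root)
   echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages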


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Launchpad permissions

2016-06-14 Thread Jim Rollenhagen
Hi all,

This morning I did a quick audit of the launchpad projects for our
various projects, and found some projects that have some incorrect
permissions/ownership things. Can the current maintainers of these
projects please change the "driver" and "maintainer" fields on the home
page to "ironic-drivers"?

Bifrost:
https://launchpad.net/bifrost/

ironic-ui:
https://launchpad.net/ironic-ui/

python-dracclient:
https://launchpad.net/python-dracclient/

python-wsmanclient:
https://launchpad.net/python-wsmanclient/

Let me know when done. Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Deeper diffs in OSA releases

2016-06-14 Thread Major Hayden
On 06/14/2016 08:08 AM, Jesse Pretorius wrote:
> That's neat Major! It'd be great to extend it to also do the diffs for the 
> included roles, both OpenStack and non-OpenStack to get full coverage.

That shouldn't be too difficult to implement.  I'd need to refactor the 
comparison code so that it works for both.

> I think the ops repo is the right one - we just need to get the scaffolding 
> in place. I'll put a review up shortly. 

Thanks, Jesse! :)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-14 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-06-14 15:57:10 +0200:
> Hi everyone,
> 
> I just proposed a new requirement for OpenStack "official" projects, 
> which I think is worth discussing beyond the governance review:
> 
> https://review.openstack.org/#/c/329448/
> 
>  From an upstream perspective, I see us as being in the business of 
> providing open collaboration playing fields in order to build projects 
> to reach the OpenStack Mission. We collectively provide resources 
> (infra, horizontal teams, events...) in order to enable that open 
> collaboration.
> 
> An important characteristic of these open collaboration grounds is that 
> they need to be a level playing field, where no specific organization is 
> being given an unfair advantage. I expect the teams that we bless as 
> "official" project teams to operate in that fair manner. Otherwise we 
> end up blessing what is essentially a trojan horse for a given 
> organization, open-washing their project in the process. Such a project 
> can totally exist as an unofficial project (and even be developed on 
> OpenStack infrastructure) but I don't think it should be given free 
> space in our Design Summits or benefit from "OpenStack community" branding.
> 
> So if, in a given project team, developers from one specific 
> organization benefit from access to specific knowledge or hardware 
> (think 3rd-party testing blackboxes that decide if a patch goes in, or 
> access to proprietary hardware or software that the open source code 
> primarily interfaces with), then this project team should probably be 
> rejected under the "open community" rule. Projects with a lot of drivers 
> (like Cinder) provide an interesting grey area, but as long as all 
> drivers are in and there is a fully functional (and popular) open source 
> implementation, I think no specific organization would be considered as 
> unfairly benefiting compared to others.
> 
> A few months ago we had the discussion about what "no open core" means 
> in 2016, in the context of the Poppy team candidacy. With our reading at 
> the time we ended up rejecting Poppy partly because it was interfacing 
> with proprietary technologies. However, I think what we originally 
> wanted to ensure with this rule was that no specific organization would 
> use the OpenStack open source code as crippled bait to sell their 
> specific proprietary add-on.
> 
> I think taking the view that OpenStack projects need to be open, level 
> collaboration playing fields encapsulates that nicely. In the Poppy 
> case, nobody in the Poppy team has an unfair advantage over others, so 
> we should not reject them purely on the grounds that this interfaces 
> with non-open-source solutions (leaving only the infrastructure/testing 
> requirement to solve). On the other hand, a Neutron plugin targeting a 
> specific piece of networking hardware would likely give an unfair 
> advantage to developers of the hardware's manufacturer (having access to 
> that gear for testing and being able to see and make changes to its 
> proprietary source code) -- that project should probably live as an 
> unofficial OpenStack project.
> 
> Comments, thoughts ?
> 

I think external device-specific drivers are a much clearer case than
Poppy or Cinder. It's a bit unfortunate that the dissolution of some
projects into "core" and "driver" repositories is raising this issue,
but we've definitely had better success with some project teams than
others when it comes to vendors collaborating on core components.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-14 Thread Thierry Carrez

Hi everyone,

I just proposed a new requirement for OpenStack "official" projects, 
which I think is worth discussing beyond the governance review:


https://review.openstack.org/#/c/329448/

From an upstream perspective, I see us as being in the business of 
providing open collaboration playing fields in order to build projects 
to reach the OpenStack Mission. We collectively provide resources 
(infra, horizontal teams, events...) in order to enable that open 
collaboration.


An important characteristic of these open collaboration grounds is that 
they need to be a level playing field, where no specific organization is 
being given an unfair advantage. I expect the teams that we bless as 
"official" project teams to operate in that fair manner. Otherwise we 
end up blessing what is essentially a Trojan horse for a given 
organization, open-washing their project in the process. Such a project 
can totally exist as an unofficial project (and even be developed on 
OpenStack infrastructure) but I don't think it should be given free 
space in our Design Summits or benefit from "OpenStack community" branding.


So if, in a given project team, developers from one specific 
organization benefit from access to specific knowledge or hardware 
(think 3rd-party testing black boxes that decide if a patch goes in, or 
access to proprietary hardware or software that the open source code 
primarily interfaces with), then this project team should probably be 
rejected under the "open community" rule. Projects with a lot of drivers 
(like Cinder) provide an interesting grey area, but as long as all 
drivers are in-tree and there is a fully functional (and popular) open 
source implementation, I think no specific organization would be seen as 
unfairly benefiting compared to others.


A few months ago we had the discussion about what "no open core" means 
in 2016, in the context of the Poppy team candidacy. With our reading at 
the time we ended up rejecting Poppy partly because it was interfacing 
with proprietary technologies. However, I think what we originally 
wanted to ensure with this rule was that no specific organization would 
use the OpenStack open source code as crippled bait to sell their 
specific proprietary add-on.


I think taking the view that OpenStack projects need to be open, level 
collaboration playing fields encapsulates that nicely. In the Poppy 
case, nobody in the Poppy team has an unfair advantage over others, so 
we should not reject them purely on the grounds that this interfaces 
with non-open-source solutions (leaving only the infrastructure/testing 
requirement to solve). On the other hand, a Neutron plugin targeting a 
specific piece of networking hardware would likely give an unfair 
advantage to the hardware manufacturer's developers (having access to 
that gear for testing and being able to see and make changes to its 
proprietary source code) -- that project should probably live as an 
unofficial OpenStack project.


Comments, thoughts?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

