Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Tony Breeds
On Tue, Aug 09, 2016 at 09:16:02PM -0700, John Griffith wrote:
> Sorry, I wasn't a part of the sessions in Austin on the topic of long-term
> support of Cinder drivers.  There's a lot going on during the summits
> these days.

For the record, the sessions in Austin that I think Matt was referencing were
about stable life-cycles, not Cinder specific.

> Yeah, ok... I do see your point here, and as I mentioned I have had this
> conversation with you and others over the years and I don't disagree.  I also
> don't have the ability to "force"
> said parties to do things differently.  So when I try and help customers
> that are having issues my only recourse is an out of tree patch, which then
> when said distro notices or finds out they don't want to support the
> customer any longer based on the code no longer being "their blessed
> code".  The fact is that the distros hold the power in these situations, if
> they happen to own the OS release and the storage then it works out great
> for them, not so much for anybody else.​

Right, we can't 'force' the distros to participate (if we could, we wouldn't be
having this discussion).  The community has a process, and all we can do is
encourage distros and the like to participate in that process, as it really is
best for them, and for us.

> So is the consensus here that the only viable solution is for people to
> invest in keeping the stable branches in general supported longer?  How
> does that work for projects that are interested and have people willing to
> do the work vs projects that don't have the people willing to do the work?
> In other words, Cinder has a somewhat unique problem that Nova, Glance and
> Keystone don't have.  So for Cinder to try and follow the policies,
> processes and philosophies you outlined does that mean that as a project
> Cinder has to try and bend the will of "ALL" of the projects to make this
> happen?  Doesn't seem very realistic to me.​

So the 'Cinder' team won't need to do all the will bending; that's for the
stable team to do with the support of *everyone* that cares about the outcome.
That probably doesn't fill you with hope, but that is the reality.

> Just one last point and I'll move on from the topic.  I'm not sure where
> this illusion that we're testing all the drivers so well is coming from.
> Sure, we require the steps and facade of 3rd party CI, but dig a bit
> deeper and you soon find that we're not really testing as much as some
> might think here.

That's probably true, but if we created a 'mitaka-drivers' branch of cinder the
gate CI would rapidly degenerate to a noop, and any unit/functional tests would
be *entirely* 3rd party.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread John Griffith
On Tue, Aug 9, 2016 at 10:26 PM, Matthew Treinish 
wrote:

> On Tue, Aug 09, 2016 at 09:16:02PM -0700, John Griffith wrote:
> > On Tue, Aug 9, 2016 at 7:21 PM, Matthew Treinish 
> > wrote:
> >
> > > On Tue, Aug 09, 2016 at 05:28:52PM -0700, John Griffith wrote:
> > > > On Tue, Aug 9, 2016 at 4:53 PM, Sean McGinnis  >
> > > wrote:
> > > >
> > > > > .
> > > > > >
> > > > > > Mike, you must have left the midcycle by the time this topic came
> > > > > > up. On the issue of out-of-tree drivers, I specifically offered
> this
> > > > > > proposal (a community managed mechanism for distributing driver
> > > > > > bugfix backports) as an compromise alternative to try to address
> the
> > > > > > needs of both camps. Everyone who was in the room at the time
> (plus
> > > > > > DuncanT who wasn't) agreed that if we had that (a way to deal
> with
> > > > > > backports) that they wouldn't want drivers out of the tree
> anymore.
> > > > > >
> > > > > > Your point of view wasn't represented so go ahead and explain
> why,
> > > > > > if we did have a reasonable way for bugfixes to get backported to
> > > > > > the releases customers actually run (leaving that mechanism
> > > > > > unspecified for the time being), that you would still want the
> > > > > > drivers out of the tree.
> > > > > >
> > > > > > -Ben Swartzlander
> > > > >
> > > > > The conversation about this started around the 30 minute point
> here if
> > > > > anyone is interested in more of the background discussion on this:
> > > > >
> > > > > https://www.youtube.com/watch?v=g3MEDFp08t4
> > > > >
> > > > > 
> > > __
> > > > > OpenStack Development Mailing List (not for usage questions)
> > > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > > unsubscribe
> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > > >
> > > >
> > > > ​I don't think anybody is whining at all here, we had a fairly
> productive
> > > > discussion at the mid-cycle surrounding this topic and I do think
> there
> > > are
> > > > some valid advantages to this approach regardless of the QA question.
> > > Note
> > > > that it's been pointed out we weren't talking about or considering
> > > > advertising this *special* branch as tested by the standard means or
> gate
> > > > CI etc.
> > > >
> > > > We did discuss this though mostly in the context of helping the
> package
> > > > maintainers and distributions.  The fact is that many of us currently
> > > offer
> > > > backports of fixes in our own various github accounts.  That's fine
> and
> > > it
> > > > works well for many.  The problem we were trying to address however
> is
> > > that
> > > > this practice is rather problematic for the distros.  For example
> RHEL,
> > > > Helion or Mirantis are most certainly not going to run around cherry
> > > > picking change sets from random github repos scattered around.
> > > >
> > > > The context of the discussion was that by having a long lived
> *driver*
> > > > (emphasis on driver) branch there would be a single location and an
> > > *easy*
> > > > method of contact and communication regarding fixes to drivers that
> may
> > > be
> > > > available for stable branches that are no longer supported.  This
> puts
> > > the
> > > > burden of QA/Testing mostly on the vendors and distros, which I
> think is
> > > > fine.  They can either choose to work with the Vendor and verify the
> > > > versions for backport on a regular basis, or they can choose to
> ignore
> > > them
> > > > and NOT provide them to their customers.
> > > >
> > > > I don't think this is an awful idea, and it's very far from the
> "drivers
> > > > out of tree" discussion.  The feedback from the distro maintainers
> during
> > > > the week was that they would gladly welcome a model where they could
> pull
> > > > updates from a single driver branch on a regular basis or as needed
> for
> > > > customers that are on *unsupported* releases and for whom a fix
> exists.
> > > > Note that support cycles are not the same for the distros as they
> are of
> > > > the upstream community.  This is in no way proposing a change to the
> > > > existing support time frames or processes we have now, and in that
> way it
> > > > differs significantly from proposals and discussions we've had in the
> > > past.
> > > >
> > > > The basic idea here was to eliminate the proliferation of custom
> backport
> > > > patches scattered all over the web, and to ease the burden for
> distros
> > > and
> > > > vendors in supporting their customers.  I think there may be some
> > > concepts
> > > > to iron out and I certainly understand some of the comments regarding
> > > being
> > > > disingenuous regarding what we're advertising.  I think that's a
> > > > misunderstanding of the intent however, the proposal is not to
> extend the
> > > > support life of stable from an 

Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

2016-08-09 Thread huangdenghui
Hi Armando,
I think this feature causes problems in the SR-IOV scenario, since SR-IOV NICs
don't allow VFs to have the same MAC address, even when the ports belong to
different networks.


Sent from NetEase Mail for mobile



On 2016-08-10 04:55, Armando M. wrote:






On 9 August 2016 at 13:53, Anil Rao  wrote:


Is the MAC address of a Neutron port on a tenant virtual network globally 
unique or unique just within that particular tenant network?



The latter:


https://github.com/openstack/neutron/blob/master/neutron/db/models_v2.py#L139
 

 

Thanks,

Anil
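For anyone curious how that per-network uniqueness is enforced, here is a minimal
sketch of the constraint the linked model declares (simplified and abridged; the
file linked above is authoritative):

```python
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Port(Base):
    """Simplified sketch of neutron.db.models_v2.Port."""
    __tablename__ = 'ports'

    id = sa.Column(sa.String(36), primary_key=True)
    network_id = sa.Column(sa.String(36), nullable=False)
    mac_address = sa.Column(sa.String(32), nullable=False)

    # The MAC address only has to be unique within a network, not globally.
    __table_args__ = (
        sa.UniqueConstraint('network_id', 'mac_address',
                            name='uniq_ports0network_id0mac_address'),
    )
```

In other words, two ports on different networks may legally carry the same MAC,
which is what makes the SR-IOV concern raised elsewhere in this thread relevant.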


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Matthew Treinish
On Tue, Aug 09, 2016 at 09:16:02PM -0700, John Griffith wrote:
> On Tue, Aug 9, 2016 at 7:21 PM, Matthew Treinish 
> wrote:
> 
> > On Tue, Aug 09, 2016 at 05:28:52PM -0700, John Griffith wrote:
> > > On Tue, Aug 9, 2016 at 4:53 PM, Sean McGinnis 
> > wrote:
> > >
> > > > .
> > > > >
> > > > > Mike, you must have left the midcycle by the time this topic came
> > > > > up. On the issue of out-of-tree drivers, I specifically offered this
> > > > > proposal (a community managed mechanism for distributing driver
> > > > > bugfix backports) as an compromise alternative to try to address the
> > > > > needs of both camps. Everyone who was in the room at the time (plus
> > > > > DuncanT who wasn't) agreed that if we had that (a way to deal with
> > > > > backports) that they wouldn't want drivers out of the tree anymore.
> > > > >
> > > > > Your point of view wasn't represented so go ahead and explain why,
> > > > > if we did have a reasonable way for bugfixes to get backported to
> > > > > the releases customers actually run (leaving that mechanism
> > > > > unspecified for the time being), that you would still want the
> > > > > drivers out of the tree.
> > > > >
> > > > > -Ben Swartzlander
> > > >
> > > > The conversation about this started around the 30 minute point here if
> > > > anyone is interested in more of the background discussion on this:
> > > >
> > > > https://www.youtube.com/watch?v=g3MEDFp08t4
> > > >
> > > > 
> > __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > >
> > > ​I don't think anybody is whining at all here, we had a fairly productive
> > > discussion at the mid-cycle surrounding this topic and I do think there
> > are
> > > some valid advantages to this approach regardless of the QA question.
> > Note
> > > that it's been pointed out we weren't talking about or considering
> > > advertising this *special* branch as tested by the standard means or gate
> > > CI etc.
> > >
> > > We did discuss this though mostly in the context of helping the package
> > > maintainers and distributions.  The fact is that many of us currently
> > offer
> > > backports of fixes in our own various github accounts.  That's fine and
> > it
> > > works well for many.  The problem we were trying to address however is
> > that
> > > this practice is rather problematic for the distros.  For example RHEL,
> > > Helion or Mirantis are most certainly not going to run around cherry
> > > picking change sets from random github repos scattered around.
> > >
> > > The context of the discussion was that by having a long lived *driver*
> > > (emphasis on driver) branch there would be a single location and an
> > *easy*
> > > method of contact and communication regarding fixes to drivers that may
> > be
> > > available for stable branches that are no longer supported.  This puts
> > the
> > > burden of QA/Testing mostly on the vendors and distros, which I think is
> > > fine.  They can either choose to work with the Vendor and verify the
> > > versions for backport on a regular basis, or they can choose to ignore
> > them
> > > and NOT provide them to their customers.
> > >
> > > I don't think this is an awful idea, and it's very far from the "drivers
> > > out of tree" discussion.  The feedback from the distro maintainers during
> > > the week was that they would gladly welcome a model where they could pull
> > > updates from a single driver branch on a regular basis or as needed for
> > > customers that are on *unsupported* releases and for whom a fix exists.
> > > Note that support cycles are not the same for the distros as they are of
> > > the upstream community.  This is in no way proposing a change to the
> > > existing support time frames or processes we have now, and in that way it
> > > differs significantly from proposals and discussions we've had in the
> > past.
> > >
> > > The basic idea here was to eliminate the proliferation of custom backport
> > > patches scattered all over the web, and to ease the burden for distros
> > and
> > > vendors in supporting their customers.  I think there may be some
> > concepts
> > > to iron out and I certainly understand some of the comments regarding
> > being
> > > disingenuous regarding what we're advertising.  I think that's a
> > > misunderstanding of the intent however, the proposal is not to extend the
> > > support life of stable from an upstream or community perspective but
> > > instead the proposal is geared at consolidation and tracking of drivers.
> >
> > I fully understood the proposal but I still think you're optimizing for the
> > wrong thing.
> 
> ​Ok, that's fair. It seemed like there might be some confusion with some of
> the comments that were made.

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread John Griffith
On Tue, Aug 9, 2016 at 7:21 PM, Matthew Treinish 
wrote:

> On Tue, Aug 09, 2016 at 05:28:52PM -0700, John Griffith wrote:
> > On Tue, Aug 9, 2016 at 4:53 PM, Sean McGinnis 
> wrote:
> >
> > > .
> > > >
> > > > Mike, you must have left the midcycle by the time this topic came
> > > > up. On the issue of out-of-tree drivers, I specifically offered this
> > > > proposal (a community managed mechanism for distributing driver
> > > > bugfix backports) as an compromise alternative to try to address the
> > > > needs of both camps. Everyone who was in the room at the time (plus
> > > > DuncanT who wasn't) agreed that if we had that (a way to deal with
> > > > backports) that they wouldn't want drivers out of the tree anymore.
> > > >
> > > > Your point of view wasn't represented so go ahead and explain why,
> > > > if we did have a reasonable way for bugfixes to get backported to
> > > > the releases customers actually run (leaving that mechanism
> > > > unspecified for the time being), that you would still want the
> > > > drivers out of the tree.
> > > >
> > > > -Ben Swartzlander
> > >
> > > The conversation about this started around the 30 minute point here if
> > > anyone is interested in more of the background discussion on this:
> > >
> > > https://www.youtube.com/watch?v=g3MEDFp08t4
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > ​I don't think anybody is whining at all here, we had a fairly productive
> > discussion at the mid-cycle surrounding this topic and I do think there
> are
> > some valid advantages to this approach regardless of the QA question.
> Note
> > that it's been pointed out we weren't talking about or considering
> > advertising this *special* branch as tested by the standard means or gate
> > CI etc.
> >
> > We did discuss this though mostly in the context of helping the package
> > maintainers and distributions.  The fact is that many of us currently
> offer
> > backports of fixes in our own various github accounts.  That's fine and
> it
> > works well for many.  The problem we were trying to address however is
> that
> > this practice is rather problematic for the distros.  For example RHEL,
> > Helion or Mirantis are most certainly not going to run around cherry
> > picking change sets from random github repos scattered around.
> >
> > The context of the discussion was that by having a long lived *driver*
> > (emphasis on driver) branch there would be a single location and an
> *easy*
> > method of contact and communication regarding fixes to drivers that may
> be
> > available for stable branches that are no longer supported.  This puts
> the
> > burden of QA/Testing mostly on the vendors and distros, which I think is
> > fine.  They can either choose to work with the Vendor and verify the
> > versions for backport on a regular basis, or they can choose to ignore
> them
> > and NOT provide them to their customers.
> >
> > I don't think this is an awful idea, and it's very far from the "drivers
> > out of tree" discussion.  The feedback from the distro maintainers during
> > the week was that they would gladly welcome a model where they could pull
> > updates from a single driver branch on a regular basis or as needed for
> > customers that are on *unsupported* releases and for whom a fix exists.
> > Note that support cycles are not the same for the distros as they are of
> > the upstream community.  This is in no way proposing a change to the
> > existing support time frames or processes we have now, and in that way it
> > differs significantly from proposals and discussions we've had in the
> past.
> >
> > The basic idea here was to eliminate the proliferation of custom backport
> > patches scattered all over the web, and to ease the burden for distros
> and
> > vendors in supporting their customers.  I think there may be some
> concepts
> > to iron out and I certainly understand some of the comments regarding
> being
> > disingenuous regarding what we're advertising.  I think that's a
> > misunderstanding of the intent however, the proposal is not to extend the
> > support life of stable from an upstream or community perspective but
> > instead the proposal is geared at consolidation and tracking of drivers.
>
> I fully understood the proposal but I still think you're optimizing for the
> wrong thing.

​Ok, that's fair. It seemed like there might be some confusion with some of
the comments that were made.
 ​


> We have a community process for doing backports and maintaining
> released versions of OpenStack code. The fundamental problem here is
> actually
> that the parties you've identified aren't actively involved in stable
> branch
> maintenance.

​Yes, I 

Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Tony Breeds
On Wed, Aug 10, 2016 at 01:39:55AM +, Jeremy Stanley wrote:

> So I guess what I'm asking: If stable branches exist as a place for
> package maintainers to collaborate on a common set of backported
> fixes, and are not actually usable to that end, why do we continue
> to provide them?

I don't think this has a binary answer.  It's a scale.  The branches we have do
have value, but they just don't last long enough to cover the '5 year support
contract' use case.  I think we're basically on the same page there.

If we could support releases for longer, we would.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tacker]

2016-08-09 Thread gong_ys2004
Hi Sridhar,
You said in https://review.openstack.org/#/c/255146/ that we should not expose the
infra_driver and mgmt_driver to the user, but they have already been exposed by the
API at
https://github.com/openstack/tacker/blob/master/tacker/extensions/vnfm.py#L205,
so what do you think? Do we need to remove the infra_driver and mgmt_driver from
the API side?
I think we can remove these two drivers since both of them are indicated by the VIM.
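For context, "exposed by the API" here means the attributes are declared as
settable in the extension's resource attribute map. A rough, illustrative sketch
of that shape (the names and defaults below are assumptions, not the actual
vnfm.py contents):

```python
# Illustrative sketch only; the authoritative definitions live in
# tacker/extensions/vnfm.py (see the link above).
RESOURCE_ATTRIBUTE_MAP = {
    'vnfds': {
        'infra_driver': {
            'allow_post': True,    # a user can set it when creating a VNFD
            'allow_put': False,
            'is_visible': True,    # it is returned in API responses
            'default': '',
        },
        'mgmt_driver': {
            'allow_post': True,
            'allow_put': False,
            'is_visible': True,
            'default': '',
        },
    },
}
```

Dropping the attributes from the API would mean removing (or hiding) these
entries so the drivers are derived from the VIM instead of being supplied by
the user.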
Regards,
yong sheng gong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Tony Breeds
On Tue, Aug 09, 2016 at 10:21:19PM -0400, Matthew Treinish wrote:

> I fully understood the proposal but I still think you're optimizing for the
> wrong thing. We have a community process for doing backports and maintaining
> released versions of OpenStack code. The fundamental problem here is actually
> that the parties you've identified aren't actively involved in stable branch
> maintenance. The stable maint team and policy was primarily created as a
> solution to the exact problem you outlined above, that it providing a place 
> for
> vendors, distros, etc to collaborate on backports and stable branch maint.
> while following our communities process. Regardless of framing it as being 
> only
> for drivers it doesn't change that you're talking about the same thing. (this 
> is
> why in-tree vs out-of-tree was coming up from others, because if it was
> out-of-tree then you don't have to respect the stable policy)
> 
> What I was getting at before was if instead of ignoring all the previous
> conversations we've had about this exact topic (including 2 sessions in 
> Austin)
> and people actually stepped up and got involved we wouldn't be having this
> discussion today. More than likely we'd have enough people to actually 
> maintain
> a stable branch for longer than we can today. But, instead it feels like our
> community process for code review and actually testing proposed backports is 
> too
> much of a burden for any of these parties you've identified to bother getting
> involved. So we're instead in a situation where we have a proposal to actively
> circumvent our community policy and procedures for maintaining stable 
> branches.
> 
> Creating a free space where vendors can push whatever they want without any
> gating is not only contrary to our stable branch policy but also goes against
> some of the fundamental tenets of OpenStack. It's not something I think we
> should ever be doing.
> 
> > 
> > If this isn't something we can come to an agreement on as a community, then
> > I'd suggest we just create our own repo on github outside of upstream and
> > have it serve the same purpose.
> > 
> 
> If you're dead set on doing this then I think that's your only recourse, 
> because
> what you're attempting to do is something outside our community processes and
> I don't think it has any place in upstream. But, I really think it'd be much
> better if instead of going off into a corner that all of the people who have
> complained about the state of our current stable support windows and stable
> policy actually got involved in this space. That way over time we can make
> the stable branches something where we can have longer support windows. 
> 
> -Matt Treinish

I apologise for weighing in so late.  This is a complex issue with no answer
today :(

I think Matt's email neatly sums up my position.  The stable policy is
the way it is to balance support requirements and community effort.  Any effort
to extend/alter the stable life-cycle needs to start *now* with bodies on the
ground in cinder, infra and the stable team working together to enable this
goal.  A patch to the policy isn't really going to cut it.

Even splitting the drivers out won't work long term without the effort on
stable support.

I've advocated for the last 12 months to lengthen the support cycles, and will
do so again as soon as I feel the balance tip towards success.

In short, come to the stable team ... we have cookies [1]

Yours Tony.

[1] Collectable at summits, and select mid-cycles


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][networking-ovn]How to enable vxlan as default tenant_network_types use devstack and ovn

2016-08-09 Thread Wilence Yao
Hi all,

The default tenant_network_types is geneve after installing by following this
document: http://docs.openstack.org/developer/networking-ovn/testing.html

```
[ml2]
tenant_network_types = geneve
extension_drivers = port_security
type_drivers = local,flat,vlan,geneve
```

To enable vxlan, I have changed the config file ml2_conf.ini to this:

```
[ml2]
tenant_network_types = vxlan
extension_drivers = port_security
type_drivers = vxlan,local,flat,vlan,geneve
```

then enabled vxlan on the compute node with this command:

```
ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan
```

After all of the setup above, I created a network using the neutron command, but
neutron-server failed with "Invalid input for operation: Network type vxlan
is not supported".
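The traceback below shows the rejection coming from networking-ovn's
create_network_precommit. As a rough, hypothetical sketch of the kind of check
involved (names here are illustrative, not the actual driver source), the
validation has this general shape:

```python
# Hypothetical simplification of the validation that produces
# "Network type vxlan is not supported"; not the networking-ovn source.
SUPPORTED_NETWORK_TYPES = ('local', 'flat', 'vlan', 'geneve')


class InvalidInput(Exception):
    pass


def validate_network_type(network_type):
    if network_type not in SUPPORTED_NETWORK_TYPES:
        raise InvalidInput('Invalid input for operation: Network type '
                           '%s is not supported.' % network_type)
```

That would explain why the error appears regardless of the ovn-encap-type
setting on the compute node.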


```
2016-08-04 13:25:48.243 ERROR neutron.plugins.ml2.managers
[req-9fe72fbc-d6e8-44ae-8e9e-2ba9221dd33c admin
d9933c11512e4dc799490905174278b4] Mechanism driver 'ovn' failed in
create_network_precommit
2016-08-04 13:25:48.243 TRACE neutron.plugins.ml2.managers Traceback (most
recent call last):
2016-08-04 13:25:48.243 TRACE neutron.plugins.ml2.managers   File
"/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 408, in
_call_on_drivers
2016-08-04 13:25:48.243 TRACE neutron.plugins.ml2.managers
getattr(driver.obj, method_name)(context)
2016-08-04 13:25:48.243 TRACE neutron.plugins.ml2.managers   File
"/opt/stack/networking-ovn/networking_ovn/ml2/mech_driver.py", line 259, in
create_network_precommit
2016-08-04 13:25:48.243 TRACE neutron.plugins.ml2.managers raise
n_exc.InvalidInput(error_message=msg)
2016-08-04 13:25:48.243 TRACE neutron.plugins.ml2.managers InvalidInput:
Invalid input for operation: Network type vxlan is not supported.
2016-08-04 13:25:48.243 TRACE neutron.plugins.ml2.managers
2016-08-04 13:25:48.294 ERROR neutron.api.v2.resource
[req-9fe72fbc-d6e8-44ae-8e9e-2ba9221dd33c admin
d9933c11512e4dc799490905174278b4] create failed: No details.
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource Traceback (most
recent call last):
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource result =
method(request=request, **args)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/base.py", line 397, in create
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource return
self._create(request, body, **kwargs)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 151, in wrapper
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource ectxt.value =
e.inner_exc
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
__exit__
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource
self.force_reraise()
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
force_reraise
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource
six.reraise(self.type_, self.value, self.tb)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource return f(*args,
**kwargs)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/base.py", line 510, in _create
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource obj =
do_create(body)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/base.py", line 492, in do_create
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource request.context,
reservation.reservation_id)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in
__exit__
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource
self.force_reraise()
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in
force_reraise
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource
six.reraise(self.type_, self.value, self.tb)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/api/v2/base.py", line 485, in do_create
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource return
obj_creator(request.context, **kwargs)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 730, in
create_network
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource result,
mech_context = self._create_network_db(context, network)
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource   File
"/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 706, in
_create_network_db
2016-08-04 13:25:48.294 TRACE neutron.api.v2.resource

Re: [openstack-dev] [python-bileanclient] [infra] Duplicate entriesin test-requirement.txt

2016-08-09 Thread 吕冬兵
Hi,
The issue still exists after https://review.openstack.org/352490
merged; what else should I do?
 
http://logs.openstack.org/58/351458/8/check/gate-python-bileanclient-requirements/fa1119c/console.html
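In the meantime, a quick way to spot the offending duplicates locally is a small
standalone check along these lines (a sketch for local use, not the gate's
requirements job):

```python
# Standalone sketch: report requirement names listed more than once in
# test-requirements.txt. Not the actual gate job script.
import collections
import re


def duplicate_requirements(path='test-requirements.txt'):
    names = []
    with open(path) as handle:
        for line in handle:
            line = line.split('#', 1)[0].strip()
            if not line:
                continue
            # Take the distribution name before any version specifier/marker.
            names.append(re.split(r'[<>=!~;\[ ]', line, maxsplit=1)[0].lower())
    return sorted(name for name, count in
                  collections.Counter(names).items() if count > 1)


if __name__ == '__main__':
    print(duplicate_requirements())
```

Removing the duplicated lines from test-requirements.txt and rechecking should
then let the job pass.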
-- Original --
From:  "Jeremy Stanley";
Date:  Mon, Aug 8, 2016 11:50 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [python-bileanclient] [infra] Duplicate entriesin 
test-requirement.txt

 
On 2016-08-08 17:09:33 +0800 (+0800), 吕冬兵 wrote:
> I uploaded a new project to gerrit, but test-requirement.txt had
> duplicate entries by mistake. That makes jenkins fail whatever I
> commit
> (http://logs.openstack.org/58/351458/7/check/gate-python-bileanclient-requirements/017cd3a/console.html).
> Anybody have some idea to fix that?

This looks like a corner case recently broken in our script for that
job. I have proposed https://review.openstack.org/352490 as a fix
for it.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Matthew Treinish
On Wed, Aug 10, 2016 at 01:39:55AM +, Jeremy Stanley wrote:
> On 2016-08-09 15:56:57 -0700 (-0700), Mike Perez wrote:
> > As others have said and as being a Cinder stable core myself, the status-quo
> > and this proposal itself are terrible practices because there is no testing
> > behind it, thereby it not being up to the community QA standards set.
> [...]
> 
> In fairness to Sean, this thread started because he was asking in
> #openstack-infra for help creating some long-lived driver fix
> branches because he felt it was against stable branch policy to
> backport bugfixes for drivers. Since this was an unprecedented
> request, I recommended he first raise the topic on this list to find
> out if this is a common problem across other projects and whether
> stable branch policy should be revised to permit driver fixes.
> 
> There was a brief discussion of what to do if the Cinder team wanted
> driver fixes to EOL stable series, and I still firmly believe effort
> there is better expended attempting to help extend stable branch
> support since "convenience to package maintainers" (what he said
> this plan was trying to solve) is the primary reason we provide
> those branches to begin with.
> 
> So I guess what I'm asking: If stable branches exist as a place for
> package maintainers to collaborate on a common set of backported
> fixes, and are not actually usable to that end, why do we continue
> to provide them? Should we just stop testing stable branches
> altogether since their primary value would (as is suggested) be
> served even without our testing efforts? Ceasing any attempts to
> test backports post-release would certainly free up a lot of our
> current upstream effort and resources we could redirect into other
> priorities. Or is it just stable branch changes for drivers we
> shouldn't bother testing?

Well, at a bare minimum we need the previous release around to test upgrades,
which is very important. But this exact argument has come up in the past when
we've had this discussion (it's been at least once a cycle for as long as I can
remember). I might have actually proposed having only one stable branch at a time
during one of the past summits. But every time it's been proposed, people come
out of the woodwork and say there is value in continuing the branches, so we've
continued maintaining them. I do agree, though, that it does feel like there is a
disconnect between downstream consumers and upstream when we get proposals like
this while at the same time we have had recent, quite lengthy discussions where
we decided not to extend our support windows because it's not feasible given the
level of activity.

As for not testing stable changes for drivers, I fundamentally disagree with any
approach that puts us in a situation where we are landing patches in an
OpenStack project without any testing. This is a core part of doing development
"the OpenStack way" (to quote the governance repo): if the driver code is part
of the project, then we need to be testing it.

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Matthew Treinish
On Tue, Aug 09, 2016 at 05:28:52PM -0700, John Griffith wrote:
> On Tue, Aug 9, 2016 at 4:53 PM, Sean McGinnis  wrote:
> 
> > .
> > >
> > > Mike, you must have left the midcycle by the time this topic came
> > > up. On the issue of out-of-tree drivers, I specifically offered this
> > > proposal (a community managed mechanism for distributing driver
> > > bugfix backports) as an compromise alternative to try to address the
> > > needs of both camps. Everyone who was in the room at the time (plus
> > > DuncanT who wasn't) agreed that if we had that (a way to deal with
> > > backports) that they wouldn't want drivers out of the tree anymore.
> > >
> > > Your point of view wasn't represented so go ahead and explain why,
> > > if we did have a reasonable way for bugfixes to get backported to
> > > the releases customers actually run (leaving that mechanism
> > > unspecified for the time being), that you would still want the
> > > drivers out of the tree.
> > >
> > > -Ben Swartzlander
> >
> > The conversation about this started around the 30 minute point here if
> > anyone is interested in more of the background discussion on this:
> >
> > https://www.youtube.com/watch?v=g3MEDFp08t4
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ​I don't think anybody is whining at all here, we had a fairly productive
> discussion at the mid-cycle surrounding this topic and I do think there are
> some valid advantages to this approach regardless of the QA question.  Note
> that it's been pointed out we weren't talking about or considering
> advertising this *special* branch as tested by the standard means or gate
> CI etc.
> 
> We did discuss this though mostly in the context of helping the package
> maintainers and distributions.  The fact is that many of us currently offer
> backports of fixes in our own various github accounts.  That's fine and it
> works well for many.  The problem we were trying to address however is that
> this practice is rather problematic for the distros.  For example RHEL,
> Helion or Mirantis are most certainly not going to run around cherry
> picking change sets from random github repos scattered around.
> 
> The context of the discussion was that by having a long lived *driver*
> (emphasis on driver) branch there would be a single location and an *easy*
> method of contact and communication regarding fixes to drivers that may be
> available for stable branches that are no longer supported.  This puts the
> burden of QA/Testing mostly on the vendors and distros, which I think is
> fine.  They can either choose to work with the Vendor and verify the
> versions for backport on a regular basis, or they can choose to ignore them
> and NOT provide them to their customers.
> 
> I don't think this is an awful idea, and it's very far from the "drivers
> out of tree" discussion.  The feedback from the distro maintainers during
> the week was that they would gladly welcome a model where they could pull
> updates from a single driver branch on a regular basis or as needed for
> customers that are on *unsupported* releases and for whom a fix exists.
> Note that support cycles are not the same for the distros as they are of
> the upstream community.  This is in no way proposing a change to the
> existing support time frames or processes we have now, and in that way it
> differs significantly from proposals and discussions we've had in the past.
> 
> The basic idea here was to eliminate the proliferation of custom backport
> patches scattered all over the web, and to ease the burden for distros and
> vendors in supporting their customers.  I think there may be some concepts
> to iron out and I certainly understand some of the comments regarding being
> disingenuous regarding what we're advertising.  I think that's a
> misunderstanding of the intent however, the proposal is not to extend the
> support life of stable from an upstream or community perspective but
> instead the proposal is geared at consolidation and tracking of drivers.

I fully understood the proposal but I still think you're optimizing for the
wrong thing. We have a community process for doing backports and maintaining
released versions of OpenStack code. The fundamental problem here is actually
that the parties you've identified aren't actively involved in stable branch
maintenance. The stable maint team and policy were primarily created as a
solution to the exact problem you outlined above, in that they provide a place for
vendors, distros, etc. to collaborate on backports and stable branch maintenance
while following our community's process. Regardless of framing it as being only
for drivers, it doesn't change that you're talking about the same thing. (this is
why in-tree vs 

[openstack-dev] [nova] Nova API sub-team meeting

2016-08-09 Thread Alex Xu
Hi,

We have the weekly Nova API meeting today. The meeting is held on Wednesdays at
1300 UTC and the IRC channel is #openstack-meeting-4.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas][api]API returns incorrectly when filtering fields in every LBaaS resource

2016-08-09 Thread zhi
Hi all,

I have faced some strange problems when getting LBaaS resources, such
as loadbalancers, listeners, pools, etc.

For example, when I send a request that filters on only the "id" attribute,
like this:

>>> curl -g -i -X GET
http://10.0.44.233:9696/v2.0/lbaas/listeners.json?fields=id \
-H "User-Agent: python-neutronclient" \
-H "Accept: application/json" \
-H "X-Auth-Token: xxx"

>>> {"listeners": [{"protocol_port": 9998, "protocol": "HTTP",
"description": "", "default_tls_container_ref": null, "admin_state_up":
false, "loadbalancers": [{"id": "509781c5-4bab-42e6-99d5-343c991f018b"}],
"sni_container_refs": [], "connection_limit": -1, "default_pool_id": null,
"id": "e55cec57-060f-4d22-9b7c-1c37f612a4cd", "name": ""},
{"protocol_port": 99, "protocol": "HTTP", "description": "",
"default_tls_container_ref": null, "admin_state_up": true, "loadbalancers":
[{"id": "509781c5-4bab-42e6-99d5-343c991f018b"}], "sni_container_refs": [],
"connection_limit": -1, "default_pool_id":
"b360fc75-b23d-46a3-b936-6c9480d35219", "id":
"f8392236-e065-4aa2-a4ef-d6c6821cc038", "name": ""}, {"protocol_port":
9998, "protocol": "HTTP", "description": "", "default_tls_container_ref":
null, "admin_state_up": true, "loadbalancers": [{"id":
"744b68a0-f08f-459a-ab7e-c43a6cb3b299"}], "sni_container_refs": [],
"connection_limit": -1, "default_pool_id":
"83a9d8ed-017b-412d-89c8-bd1e36295d81", "id":
"c6ff129c-96c5-4121-b0dd-2258016b2f36", "name": ""}]}


The API returns all the information about the listeners rather than only the
"id" attribute. This problem also exists for every LBaaS resource, such as
loadbalancers, pools, etc.

I have already registered a bug in Launchpad [1], and there is a patch
to solve this problem for the pools resource [2]. But I don't know if my
solution is correct. ;-(

Could someone give me some advice?


Thanks
Zhi Chang


[1]: https://bugs.launchpad.net/neutron/+bug/1609352
[2]: https://review.openstack.org/#/c/352693/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Jeremy Stanley
On 2016-08-09 15:56:57 -0700 (-0700), Mike Perez wrote:
> As others have said and as being a Cinder stable core myself, the status-quo
> and this proposal itself are terrible practices because there is no testing
> behind it, thereby it not being up to the community QA standards set.
[...]

In fairness to Sean, this thread started because he was asking in
#openstack-infra for help creating some long-lived driver fix
branches because he felt it was against stable branch policy to
backport bugfixes for drivers. Since this was an unprecedented
request, I recommended he first raise the topic on this list to find
out if this is a common problem across other projects and whether
stable branch policy should be revised to permit driver fixes.

There was a brief discussion of what to do if the Cinder team wanted
driver fixes to EOL stable series, and I still firmly believe effort
there is better expended attempting to help extend stable branch
support since "convenience to package maintainers" (what he said
this plan was trying to solve) is the primary reason we provide
those branches to begin with.

So I guess what I'm asking: If stable branches exist as a place for
package maintainers to collaborate on a common set of backported
fixes, and are not actually usable to that end, why do we continue
to provide them? Should we just stop testing stable branches
altogether since their primary value would (as is suggested) be
served even without our testing efforts? Ceasing any attempts to
test backports post-release would certainly free up a lot of our
current upstream effort and resources we could redirect into other
priorities. Or is it just stable branch changes for drivers we
shouldn't bother testing?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fernet Key rotation

2016-08-09 Thread Adam Young

On 08/09/2016 06:00 PM, Zane Bitter wrote:


In either case a good mechanism might be to use a Heat Software 
Deployment via the Heat API directly (i.e. not as part of a stack) to 
push changes to the servers. (I say 'push' but it's more a case of 
making the data available for os-collect-config to grab it.)


This is the part that interests me most.  The rest I'll code in Python,
and we can call it either from Mistral or from cron.  What would a stack
like this look like?  Are there comparable examples?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread John Griffith
On Tue, Aug 9, 2016 at 4:53 PM, Sean McGinnis  wrote:

> .
> >
> > Mike, you must have left the midcycle by the time this topic came
> > up. On the issue of out-of-tree drivers, I specifically offered this
> > proposal (a community managed mechanism for distributing driver
> > bugfix backports) as an compromise alternative to try to address the
> > needs of both camps. Everyone who was in the room at the time (plus
> > DuncanT who wasn't) agreed that if we had that (a way to deal with
> > backports) that they wouldn't want drivers out of the tree anymore.
> >
> > Your point of view wasn't represented so go ahead and explain why,
> > if we did have a reasonable way for bugfixes to get backported to
> > the releases customers actually run (leaving that mechanism
> > unspecified for the time being), that you would still want the
> > drivers out of the tree.
> >
> > -Ben Swartzlander
>
> The conversation about this started around the 30 minute point here if
> anyone is interested in more of the background discussion on this:
>
> https://www.youtube.com/watch?v=g3MEDFp08t4
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

​I don't think anybody is whining at all here, we had a fairly productive
discussion at the mid-cycle surrounding this topic and I do think there are
some valid advantages to this approach regardless of the QA question.  Note
that it's been pointed out we weren't talking about or considering
advertising this *special* branch as tested by the standard means or gate
CI etc.

We did discuss this though mostly in the context of helping the package
maintainers and distributions.  The fact is that many of us currently offer
backports of fixes in our own various github accounts.  That's fine and it
works well for many.  The problem we were trying to address however is that
this practice is rather problematic for the distros.  For example RHEL,
Helion or Mirantis are most certainly not going to run around cherry
picking change sets from random github repos scattered around.

The context of the discussion was that by having a long lived *driver*
(emphasis on driver) branch there would be a single location and an *easy*
method of contact and communication regarding fixes to drivers that may be
available for stable branches that are no longer supported.  This puts the
burden of QA/Testing mostly on the vendors and distros, which I think is
fine.  They can either choose to work with the Vendor and verify the
versions for backport on a regular basis, or they can choose to ignore them
and NOT provide them to their customers.

I don't think this is an awful idea, and it's very far from the "drivers
out of tree" discussion.  The feedback from the distro maintainers during
the week was that they would gladly welcome a model where they could pull
updates from a single driver branch on a regular basis or as needed for
customers that are on *unsupported* releases and for whom a fix exists.
Note that support cycles are not the same for the distros as they are for
the upstream community.  This is in no way proposing a change to the
existing support time frames or processes we have now, and in that way it
differs significantly from proposals and discussions we've had in the past.

The basic idea here was to eliminate the proliferation of custom backport
patches scattered all over the web, and to ease the burden for distros and
vendors in supporting their customers.  I think there may be some concepts
to iron out and I certainly understand some of the comments regarding being
disingenuous regarding what we're advertising.  I think that's a
misunderstanding of the intent however, the proposal is not to extend the
support life of stable from an upstream or community perspective but
instead the proposal is geared at consolidation and tracking of drivers.

If this isn't something we can come to an agreement on as a community, then
I'd suggest we just create our own repo on github outside of upstream and
have it serve the same purpose.

Thanks,
John ​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Sean McGinnis
.
> 
> Mike, you must have left the midcycle by the time this topic came
> up. On the issue of out-of-tree drivers, I specifically offered this
> proposal (a community managed mechanism for distributing driver
> bugfix backports) as an compromise alternative to try to address the
> needs of both camps. Everyone who was in the room at the time (plus
> DuncanT who wasn't) agreed that if we had that (a way to deal with
> backports) that they wouldn't want drivers out of the tree anymore.
> 
> Your point of view wasn't represented so go ahead and explain why,
> if we did have a reasonable way for bugfixes to get backported to
> the releases customers actually run (leaving that mechanism
> unspecified for the time being), that you would still want the
> drivers out of the tree.
> 
> -Ben Swartzlander

The conversation about this started around the 30 minute point here if
anyone is interested in more of the background discussion on this:

https://www.youtube.com/watch?v=g3MEDFp08t4 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Ben Swartzlander

On 08/09/2016 06:56 PM, Mike Perez wrote:

On 10:31 Aug 06, Sean McGinnis wrote:


I'm open and welcome to any feedback on this. Unless there are any major
concerns raised, I will at least instruct any Cinder stable cores to
start allowing these bugfix patches through past the security only
phase.


As others have said and as being a Cinder stable core myself, the status-quo
and this proposal itself are terrible practices because there is no testing
behind it, thereby it not being up to the community QA standards set. I will be
issuing -2 on these changes in the stable branch regardless of your
instructions until the policy has changed.


I agree we can't drop the testing standards on the "stable" branches. 
I'm not in favor of that. I'd rather use differently-named branches with 
different and well-documented policies. Ideally the branches would be 
named something like driver-fixes/newton and driver-fixes/ocata, etc, to 
avoid confusion with the stable branches.


-Ben Swartzlander


If you want to change that, work with the stable team on the various options
provided. This tangent of people whining on the mailing list and in
#openstack-cinder is not going to accomplish anything.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Sean McGinnis
On Tue, Aug 09, 2016 at 03:56:57PM -0700, Mike Perez wrote:

> If you want to change that, work with the stable team on the various options
> provided. This tangent of people whining on the mailing list and in
> #openstack-cinder is not going to accomplish anything.

That's what we're doing here.

> 
> -- 
> Mike Perez
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Ben Swartzlander

On 08/09/2016 05:45 PM, Mike Perez wrote:

On 19:40 Aug 08, Duncan Thomas wrote:

On 8 August 2016 at 18:31, Matthew Treinish  wrote:



This argument comes up at least once a cycle and there is a reason we
don't do
this. When we EOL a branch all of the infrastructure for running any ci
against
it goes away. This means devstack support, job definitions, tempest skip
checks,
etc. Leaving the branch around advertises that you can still submit
patches to
it which you can't anymore. As a community we've very clearly said that we
don't
land any code without ensuring it passes tests first, and we do not
maintain any
of the infrastructure for doing that after an EOL.



Ok, to turn the question around, we (the cinder team) have recognised a
definite and strong need to have somewhere for vendors to share patches on
versions of Cinder older than the stable branch policy allows.

Given this need, what are our options?

1. We could do all this outside Openstack infrastructure. There are
significant downsides to doing so from organisational, maintenance, cost
etc points of view. Also means that the place vendors go for these patches
is not obvious, and the process for getting patches in is not standard.

2. We could have something not named 'stable' that has looser rules than
stable branches, maybe just pep8 / unit / cinder in-tree tests. No
devstack.

3. We go with the Neutron model and take drivers out of tree. This is not
something the cinder core team are in favour of - we see significant value
in the code review that drivers currently get - the code quality
improvements between when a driver is submitted and when it is merged are
sometimes very significant. Also, taking the code out of tree makes it
difficult to get all the drivers checked out in one place to analyse e.g.
how a certain driver call is implemented across all the drivers, when
reasoning or making changes to core code.


Just to set the record straight here, some Cinder core members are in favor of
out of tree.


Mike, you must have left the midcycle by the time this topic came up. On 
the issue of out-of-tree drivers, I specifically offered this proposal 
(a community managed mechanism for distributing driver bugfix backports) 
as a compromise alternative to try to address the needs of both camps.
Everyone who was in the room at the time (plus DuncanT who wasn't)
agreed that if we had that (a way to deal with backports) they
wouldn't want drivers out of the tree anymore.


Your point of view wasn't represented so go ahead and explain why, if we 
did have a reasonable way for bugfixes to get backported to the releases 
customers actually run (leaving that mechanism unspecified for the time 
being), that you would still want the drivers out of the tree.


-Ben Swartzlander


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova mascot

2016-08-09 Thread Heidi Joy Tretheway
TL;DR: If you don’t want a mascot, you don’t have to. But Nova, you’ll be 
missed. :-)

A few notes following up on Matt Riedemann, Clint Byrum, Daniel Berrange’s 
conversation regarding the Nova mascot…

Nova doesn’t have to have a mascot if the majority of the team doesn’t want 
one. I’m not sure if the Nova community took a vote or if it was more of an 
informal discussion. We have 53 projects with confirmed logos, and we’re 
planning some great swag associated with the new project mascots. (I’m 
surprised the Nova team didn’t immediately request a star nova as their mascot. 
I’ll give you three guesses what Swift picked...)

I’m not a fan of mandatory fun in any organization. Our intent was to give 
projects design resources and create a logo family. This particularly benefits 
smaller and lesser-known projects (many PTLs were thankful for design 
resources) and is something visual for promoting a project on the website, 
Summit, etc.

Daniel wrote, "It also ends up creating new problems that we then have to spend 
time on for no obviously clear benefit. eg we're going to have to collect a list 
of proposed mascots, check them with legal to make sure they don't clash with 
mascots used by other software companies with squadrons of attack lawyers, then 
arrange voting on them. Even after all that we'll probably find out later that 
in some culture the mascot we've chosen has negative connotations associated 
with it. All this will do nothing to improve life for people who actually 
deploy and use nova, so it's all rather a waste of time IMHO.”

HJ: The process we laid out at openstack.org/project-mascots doesn’t involve lawyers at all (it’s 
just me, Todd Morey, and some friendly illustrators). We asked projects for a 
list of candidates mainly in case two projects wanted the same animal (in some 
cases, we flipped a coin; in others, one or both projects changed their 
candidate). Most projects quickly collaborated & decided with an etherpad or a 
Condorcet poll. The parameters on mascots (no humans, human-made objects, or 
proper nouns) helped avoid cultural conflicts. We felt “nature” as a category 
is global, neutral, and relatable.

Will giving projects a logo improve life for people who actually deploy and use 
Nova? Maybe not next month. But the purpose of our project overall is to give 
projects greater visibility in the community and a stronger identity, which can 
in turn attract more development talent and support a cohesive team…and teams 
ultimately make life better for people who deploy and use Nova. 

I’m here at OpenStack Days Silicon Valley today and just spent an hour talking 
with someone about a certain brand in our community that gets a lot of 
attention—people want that sticker on their laptops. I’m doing my best to make 
your project’s logo worthy of a place on your laptop!


Cheers,
Heidi Joy

__
Heidi Joy Tretheway
Senior Marketing Manager, OpenStack Foundation
503 816 9769  |  skype: heidi.tretheway





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

2016-08-09 Thread Anil Rao
Thanks Armando.

-Anil

From: Armando M. [mailto:arma...@gmail.com]
Sent: Tuesday, August 09, 2016 1:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness



On 9 August 2016 at 13:53, Anil Rao wrote:
Is the MAC address of a Neutron port on a tenant virtual network globally 
unique or unique just within that particular tenant network?

The latter:

https://github.com/openstack/neutron/blob/master/neutron/db/models_v2.py#L139


Thanks,
Anil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Mike Perez
On 10:31 Aug 06, Sean McGinnis wrote:

> I'm open and welcome to any feedback on this. Unless there are any major
> concerns raised, I will at least instruct any Cinder stable cores to
> start allowing these bugfix patches through past the security only
> phase.

As others have said, and speaking as a Cinder stable core myself, the status quo
and this proposal itself are terrible practices because there is no testing
behind them, which means they fall short of the QA standards the community has set. I will be
issuing -2 on these changes in the stable branch regardless of your
instructions until the policy has changed.

If you want to change that, work with the stable team on the various options
provided. This tangent of people whining on the mailing list and in
#openstack-cinder is not going to accomplish anything.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Tony Breeds
On Tue, Aug 09, 2016 at 03:14:43PM -0400, Davanum Srinivas wrote:
> +1 for volunteers to step up.

I'll do it and I have a *very* basic prototype done.  It won't be in Newton
though.  Having said that, if there is another volunteer I'm happy to work with
them or free up time :)

Yours Tony.

PS: Replying to the *easy* parts of this thread pre-coffee ;P


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fernet Key rotation

2016-08-09 Thread Fox, Kevin M
It needs to work in a distributed way...

What happens if the one node you have cron running on doesn't work for a while? 
Does Keystone break?

If the undercloud deploys a timed workflow where the workflow can fail over from 
machine to machine, that would work.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Tuesday, August 09, 2016 3:00 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo] Fernet Key rotation

On 09/08/16 17:11, Adam Young wrote:
> The Fernet token format uses a symmetric key to sign tokens.  In order
> to check the signature, these keys need to be synchronized across all of
> the Keystone servers.
>
>
> I don't want to pass around naked symmetric keys.  The right way to do
> this is to put them into a PKCS 11 Envelope.  Roughly, this:
>
>
> 1.  Each server generates a keypair and sends the public key to the
> undercloud
>
> 2.  undercloud generates a Fernet key
>
> 3.  Undercloud puts the Fernet token into a PKCS11 document signed with
> the overcloud nodes public key
>
> 4.  Undercloud posts the PKCS11 data to metadata
>
> 5.  os-*config Node downloads and stores the proper PKCS11 data
>
> 6.  Something unpacks the pkcs11 data and puts the key into the Fernet
> key store
>
> That last step needs to make use of the keystone-manage fernet_rotate
> command.
>
>
> How do we go about making this happen?  The key rotations should be
> scheduled infrequently; let me throw out monthly as a starting point for
> the discussion, although that is probably way too frequent.  How do we
> schedule this?  Is this a new stack that depends on the Keystone role?

This sounds like a classic example of a workflow. Two possibilities come
to mind.

The fun way:
Implement this as a Mistral workflow. Deploy the workflow along with a
timed trigger somewhere in the overcloud Heat templates - you can grab
data from other resources to figure out e.g. which machines you need to
push keys to. The Mistral service on the undercloud will take care of
running the workflow for you (thanks to the timed trigger), and its
presence in the templates will ensure it gets set up automatically.

The boring way:
Implement this as a shell script. Add a cron job on the undercloud to
run the script. The cron service on the undercloud will take care of
running the workflow for you, and adding it to the undercloud puppet
manifests will ensure it gets set up automatically.

In either case a good mechanism might be to use a Heat Software
Deployment via the Heat API directly (i.e. not as part of a stack) to
push changes to the servers. (I say 'push' but it's more a case of
making the data available for os-collect-config to grab it.)

The biggest drawback of the cron job is that it will need to have some
way of obtaining credentials in order to push data onto the servers and
also to query the overcloud stack to find out which servers to push to.
Whereas the Mistral workflow runs as the undercloud (keystone) user who
created the 'overcloud' stack and the server list can be supplied
through the template.

cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Fernet Key rotation

2016-08-09 Thread Zane Bitter

On 09/08/16 17:11, Adam Young wrote:

The Fernet token format uses a symmetric key to sign tokens.  In order
to check the signature, these keys need to be synchronized across all of
the Keystone servers.


I don't want to pass around naked symmetric keys.  The right way to do
this is to put them into a PKCS 11 Envelope.  Roughly, this:


1.  Each server generates a keypair and sends the public key to the
undercloud

2.  undercloud generates a Fernet key

3.  Undercloud puts the Fernet token into a PKCS11 document signed with
the overcloud nodes public key

4.  Undercloud posts the PKCS11 data to metadata

5.  os-*config Node downloads and stores the proper PKCS11 data

6.  Something unpacks the pkcs11 data and puts the key into the Fernet
key store

That last step needs to make use of the keystone-manage fernet_rotate
command.


How do we go about making this happen?  The key rotations should be
scheduled infrequently; let me throw out monthly as a starting point for
the discussion, although that is probably way too frequent.  How do we
schedule this?  Is this a new stack that depends on the Keystone role?


This sounds like a classic example of a workflow. Two possibilities come 
to mind.


The fun way:
Implement this as a Mistral workflow. Deploy the workflow along with a 
timed trigger somewhere in the overcloud Heat templates - you can grab 
data from other resources to figure out e.g. which machines you need to 
push keys to. The Mistral service on the undercloud will take care of 
running the workflow for you (thanks to the timed trigger), and its 
presence in the templates will ensure it gets set up automatically.


The boring way:
Implement this as a shell script. Add a cron job on the undercloud to 
run the script. The cron service on the undercloud will take care of 
running the workflow for you, and adding it to the undercloud puppet 
manifests will ensure it gets set up automatically.
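
(For concreteness, and purely as an illustration: the rotation step itself
maps onto the stock keystone-manage command, so the boring way could be as
small as an /etc/cron.d entry on the node that owns the keys. The schedule,
the cron user and the distribution script below are all placeholders:

  # illustrative only: rotate monthly, then hand the new keys to whatever
  # distribution mechanism gets chosen
  0 4 1 * * keystone keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone && /usr/local/bin/push-fernet-keys.sh

where push-fernet-keys.sh is a hypothetical script doing the push described
below.)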


In either case a good mechanism might be to use a Heat Software 
Deployment via the Heat API directly (i.e. not as part of a stack) to 
push changes to the servers. (I say 'push' but it's more a case of 
making the data available for os-collect-config to grab it.)


The biggest drawback of the cron job is that it will need to have some 
way of obtaining credentials in order to push data onto the servers and 
also to query the overcloud stack to find out which servers to push to. 
Whereas the Mistral workflow runs as the undercloud (keystone) user who 
created the 'overcloud' stack and the server list can be supplied 
through the template.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party] [ci] Please PIN your 3rd party ci setup to zuul v. 2.5.0

2016-08-09 Thread Asselin, Ramy
All,

In the previous e-mail, [1] recommended that 3rd party CI pin zuul to v. 
2.1.0, but that version has pip dependency conflicts with nodepool 0.3.0 when installed 
on the same VM (a common setup for 3rd party CI), causing the zuul service to 
fail to start.
This is fixed by using the newly created zuul v. 2.5.0 tag [2], which has 
compatible pip dependencies along with many bug fixes.

It is strongly recommended to apply these pins to your setup and NOT use the master 
branch in order to keep your CI systems stable. The full set of recommended pins 
is being maintained in this file [3].

Ramy

[1] https://review.openstack.org/#/c/348035/4/contrib/single_node_ci_data.yaml
[2] https://review.openstack.org/#/c/352560/1/contrib/single_node_ci_data.yaml
[3] 
http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/single_node_ci_data.yaml
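
(Purely illustrative: the exact hiera key names depend on your
puppet-openstackci version, so treat the names below as hypothetical and take
the authoritative values from [3].

  # /etc/puppet/environments/common.yaml -- key names are hypothetical
  zuul_revision: '2.5.0'
  nodepool_revision: '0.3.0'
  jenkins_job_builder_version: '1.6.1'
)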

-Original Message-
From: Asselin, Ramy 
Sent: Friday, August 05, 2016 11:29 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [third-party] [ci] Please PIN your 3rd party ci setup 
to JJB 1.6.1

All,

In case you're still using JJB master branch, it is highly recommended that you 
pin to 1.6.1. There are recent/upcoming changes that could break your CI setup.

You can do this by updating your puppet hiera file 
(/etc/puppet/environments/common.yaml [1]) as shown here [2][3]

Re-run puppet (sudo puppet apply --verbose /etc/puppet/manifests/site.pp [1]) 
and that will ensure your JJB & zuul installations are pinned to stable 
versions.

Ramy 

[1] http://docs.openstack.org/infra/openstackci/third_party_ci.html
[2] diff: 
https://review.openstack.org/#/c/348035/4/contrib/single_node_ci_data.yaml
[3] full: 
http://git.openstack.org/cgit/openstack-infra/puppet-openstackci/tree/contrib/single_node_ci_data.yaml

-Original Message-
From: Asselin, Ramy 
Sent: Friday, July 08, 2016 7:30 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [third-party] [ci] Upcoming changes to Nodepool for 
Zuul v3

All,

If you haven't already, it's recommended to pin nodepool to the 0.3.0 tag and 
not use master.
If you're using the puppet-openstackci solution, you can update your puppet 
hiera file as shown here: https://review.openstack.org/293112
Re-run puppet and restart nodepool.

Ramy 

-Original Message-
From: Monty Taylor [mailto:mord...@inaugust.com] 
Sent: Thursday, July 07, 2016 6:21 PM
To: openstack-in...@lists.openstack.org
Subject: [OpenStack-Infra] Upcoming changes to Nodepool for Zuul v3

Hey all!

tl;dr - nodepool 0.3.0 tagged, you should pin

Longer version:

As you are probably aware, we've been working towards Zuul v3 for a while. 
Hopefully you're as excited about that as we are.

We're about to start working in earnest on changes to nodepool in support of 
that. One of our goals with Zuul v3 is to make nodepool supportable in a CD 
manner for people who are not us. In support of that, we may break a few things 
over the next month or two.

So that it's not a steady stream of things you should pay attention to - we've 
cut a tag:

0.3.0

of what's running in production right now. If your tolerance for potentially 
breaking change is low, we strongly recommend pinning your install to it.

We will still be running CD from master the whole time - but we are also paying 
constant attention when we're landing things.

Once this next iteration is ready, we'll send out another announcement that 
master is in shape for consuming CD-style.

Thanks!
Monty

___
OpenStack-Infra mailing list
openstack-in...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Mike Perez
On 19:40 Aug 08, Duncan Thomas wrote:
> On 8 August 2016 at 18:31, Matthew Treinish  wrote:
> 
> >
> > This argument comes up at least once a cycle and there is a reason we
> > don't do
> > this. When we EOL a branch all of the infrastructure for running any ci
> > against
> > it goes away. This means devstack support, job definitions, tempest skip
> > checks,
> > etc. Leaving the branch around advertises that you can still submit
> > patches to
> > it which you can't anymore. As a community we've very clearly said that we
> > don't
> > land any code without ensuring it passes tests first, and we do not
> > maintain any
> > of the infrastructure for doing that after an EOL.
> >
> >
> Ok, to turn the question around, we (the cinder team) have recognised a
> definite and strong need to have somewhere for vendors to share patches on
> versions of Cinder older than the stable branch policy allows.
> 
> Given this need, what are our options?
> 
> 1. We could do all this outside Openstack infrastructure. There are
> significant downsides to doing so from organisational, maintenance, cost
> etc points of view. Also means that the place vendors go for these patches
> is not obvious, and the process for getting patches in is not standard.
> 
> 2. We could have something not named 'stable' that has looser rules than
> stable branches, maybe just pep8 / unit / cinder in-tree tests. No
> devstack.
> 
> 3. We go with the Neutron model and take drivers out of tree. This is not
> something the cinder core team are in favour of - we see significant value
> in the code review that drivers currently get - the code quality
> improvements between when a driver is submitted and when it is merged are
> sometimes very significant. Also, taking the code out of tree makes it
> difficult to get all the drivers checked out in one place to analyse e.g.
> how a certain driver call is implemented across all the drivers, when
> reasoning or making changes to core code.

Just to set the record straight here, some Cinder core members are in favor of
out of tree.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Zane Bitter

On 07/08/16 19:52, Clint Byrum wrote:

Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:

On 05/08/16 21:48, Ricardo Rocha wrote:

Hi.

Quick update is 1000 nodes and 7 million reqs/sec :) - and the number
of requests should be higher but we had some internal issues. We have
a submission for barcelona to provide a lot more details.

But a couple questions came during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a
burden, and local storage only should be enough?

2. We observe a significant delay (~10min, which is half the total
time to deploy the cluster) on heat when it seems to be crunching the
kube_minions nested stacks. Once it's done, it still adds new stacks
gradually, so it doesn't look like it precomputed all the info in advance

Anyone tried to scale Heat to stacks this size? We end up with a stack
with:
* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events

And already changed most of the timeout/retrial values for rpc to get
this working.

This delay is already visible in clusters of 512 nodes, but 40% of the
time in 1000 nodes seems like something we could improve. Any hints on
Heat configuration optimizations for large stacks very welcome.


Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
max_resources_per_stack = -1

Enforcing this for large stacks has a very high overhead, we make this
change in the TripleO undercloud too.



Wouldn't this necessitate having a private Heat just for Magnum? Not
having a resource limit per stack would leave your Heat engines
vulnerable to being DoS'd by malicious users, since one can create many
many thousands of resources, and thus python objects, in just a couple
of cleverly crafted templates (which is why I added the setting).


Although when you added it, all of the resources in a tree of nested 
stacks got handled by a single engine, so sending a really big tree of 
nested stacks was an easy way to DoS Heat. That's no longer the case 
since Kilo; we farm the child stacks out over RPC, so the difficulty of 
carrying out a DoS increases in proportion to the number of cores you 
have running Heat whereas before it was constant. (This is also the 
cause of the performance problem, since counting all the resources in 
the tree when then entire thing was already loaded in-memory was easy.)


Convergence splits it up even further, farming out each _resource_ as 
well as each stack over RPC.


I had the thought that having a per-tenant resource limit might be both 
more effective at protecting the limited resource and more 
efficient to calculate, since we could have the DB simply count the 
Resource rows for stacks in a given tenant instead of recursively 
loading all of the stacks in a tree and counting the resources in 
heat-engine. However, the tenant isn't stored directly in the Stack 
table, and people who know databases tell me the resulting joins would 
be fearsome.


I'm still not convinced it'd be worse than what we have now, even after 
Steve did a lot of work to make it much, much better than it was at one 
point ;)



This makes perfect sense in the undercloud of TripleO, which is a
private, single tenant OpenStack. But, for Magnum.. now you're talking
about the Heat that users have access to.


Indeed, and now that we're seeing other users of very large stacks 
(Sahara is another) I think we need to come up with a solution that is 
both efficient enough to use on a large/deep tree of nested stacks but 
can still be tuned to protect against DoS at whatever scale Heat is 
deployed at.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Fernet Key rotation

2016-08-09 Thread Adam Young
The Fernet token format uses a symmetric key to sign tokens.  In order 
to check the signature, these keys need to be synchronized across all of 
the Keystone servers.



I don't want to pass around naked symmetric keys.  The right way to do 
this is to put them into a PKCS 11 Envelope.  Roughly, this:



1.  Each server generates a keypair and sends the public key to the 
undercloud


2.  undercloud generates a Fernet key

3.  Undercloud puts the Fernet token into a PKCS11 document signed with 
the overcloud nodes public key


4.  Undercloud posts the PKCS11 data to metadata

5.  os-*config Node downloads and stores the proper PKCS11 data

6.  Something unpacks the pkcs11 data and puts the key into the Fernet 
key store


That last step needs to make use of the keystone-manage fernet_rotate 
command.



How do we go about making this happen?  The key rotations should be 
scheduled infrequently; let me throw out monthly as a starting point for 
the discussion, although that is probably way too frequent.  How do we 
schedule this?  Is this a new stack that depends on the Keystone role?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Chris Friesen

On 08/09/2016 02:10 PM, Ben Swartzlander wrote:


The best example of why this is good is Linux. If you tell the Linux people to
take their drivers out of the tree I can guarantee you they'll laugh you out of
the room. The reasons for their stance are many and I won't recount them here
(unless you want me to).


Just to play devil's advocate, the latest Intel ethernet drivers are generally 
found at https://sourceforge.net/projects/e1000


These drivers often have fixes/features that haven't made it into the kernel 
yet, and are compilable against a variety of kernel versions rather than just 
the most recent.


Of course the same developers also maintain in-tree kernel drivers, but changes 
to the in-tree drivers only affect the currently-maintained subset of kernel 
branches.


Other vendors also maintain out-of-tree drivers as well as in-tree drivers, so 
the situation is exactly analogous to Cinder.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

2016-08-09 Thread Armando M.
On 9 August 2016 at 13:53, Anil Rao  wrote:

> Is the MAC address of a Neutron port on a tenant virtual network globally
> unique or unique just within that particular tenant network?
>

The latter:

https://github.com/openstack/neutron/blob/master/neutron/db/models_v2.py#L139
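
For illustration, a trimmed, runnable paraphrase of what that model expresses
(not the actual Neutron code: the real Port model carries more columns,
mixins and a named constraint):

  import sqlalchemy as sa
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  # Cut-down sketch of neutron.db.models_v2.Port
  class Port(Base):
      __tablename__ = 'ports'
      id = sa.Column(sa.String(36), primary_key=True)
      network_id = sa.Column(sa.String(36), nullable=False)
      mac_address = sa.Column(sa.String(32), nullable=False)
      __table_args__ = (
          # Uniqueness is enforced on (network_id, mac_address), not on
          # mac_address alone, so a MAC only has to be unique within its
          # own network.
          sa.UniqueConstraint('network_id', 'mac_address'),
      )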


>
>
> Thanks,
>
> Anil
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

2016-08-09 Thread Anil Rao
Is the MAC address of a Neutron port on a tenant virtual network globally 
unique or unique just within that particular tenant network?

Thanks,
Anil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Walter A. Boring IV

On 08/09/2016 11:52 AM, Ihar Hrachyshka wrote:

Walter A. Boring IV  wrote:


On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:

Duncan Thomas  wrote:

On 8 August 2016 at 21:12, Matthew Treinish  
wrote:
Ignoring all that, this is also contrary to how we perform testing 
in OpenStack.
We don't turn off entire classes of testing we have so we can land 
patches,

that's just a recipe for disaster.

But is it more of a disaster (for the consumers) than zero testing, 
zero review, scattered around the internet 
if-you're-lucky-with-a-good-wind you'll maybe get the right patch 
set? Because that's where we are right now, and vendors, 
distributors and the cinder core team are all saying it's a disaster.


If consumers rely on upstream releases, then they are expected to 
migrate to newer releases after EOL, not switch to a random branch 
on the internet. If they rely on some commercial product, then they 
usually have an extended period of support and certification for 
their drivers, so it’s not a problem for them.


Ihar
This is entirely unrealistic.  Force customers to upgrade. Good luck 
explaining to a bank that in order to get their cinder driver fix in, 
they have to upgrade their entire OpenStack deployment. Real world 
customers simply will balk at this all day long.


Real world customers will pay for engineering to support their 
software, either their own or of one of OpenStack vendors. There is no 
free lunch from upstream here.


  Our customers are already paying us to support them and it's what we 
are doing.  Nobody is asking for a free lunch from upstream.  We are 
simply asking for a way to have a centralized repository that each 
vendor uses to support their drivers.


The problem is how to get patches for older drivers out to customers, and 
how to support them after that.  We have no place to centrally place our 
patches against our driver other than our forked github account for 
older releases.   This is exactly what the rest of the Cinder driver 
vendors are doing, and is what we are trying to avoid.  The problem even 
gets worse when a customer has a LeftHand array and a SolidFire and/or 
Netapp and/or Pure array.  The customer will have to get fixes from each 
separate repository and monitor each of those for changes in the 
future.   Which fork do they follow?  This is utter chaos from a 
customer perspective as well as a distributor's perspective and is 
terrible for OpenStack users/deployers.



Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Ricardo Rocha
On Tue, Aug 9, 2016 at 10:00 PM, Clint Byrum  wrote:
> Excerpts from Ricardo Rocha's message of 2016-08-08 11:51:00 +0200:
>> Hi.
>>
>> On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum  wrote:
>> > Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
>> >> On 05/08/16 21:48, Ricardo Rocha wrote:
>> >> > Hi.
>> >> >
>> >> > Quick update is 1000 nodes and 7 million reqs/sec :) - and the number
>> >> > of requests should be higher but we had some internal issues. We have
>> >> > a submission for barcelona to provide a lot more details.
>> >> >
>> >> > But a couple questions came during the exercise:
>> >> >
>> >> > 1. Do we really need a volume in the VMs? On large clusters this is a
>> >> > burden, and local storage only should be enough?
>> >> >
>> >> > 2. We observe a significant delay (~10min, which is half the total
>> >> > time to deploy the cluster) on heat when it seems to be crunching the
>> >> > kube_minions nested stacks. Once it's done, it still adds new stacks
>> >> > gradually, so it doesn't look like it precomputed all the info in 
>> >> > advance
>> >> >
>> >> > Anyone tried to scale Heat to stacks this size? We end up with a stack
>> >> > with:
>> >> > * 1000 nested stacks (depth 2)
>> >> > * 22000 resources
>> >> > * 47008 events
>> >> >
>> >> > And already changed most of the timeout/retrial values for rpc to get
>> >> > this working.
>> >> >
>> >> > This delay is already visible in clusters of 512 nodes, but 40% of the
>> >> > time in 1000 nodes seems like something we could improve. Any hints on
>> >> > Heat configuration optimizations for large stacks very welcome.
>> >> >
>> >> Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
>> >> max_resources_per_stack = -1
>> >>
>> >> Enforcing this for large stacks has a very high overhead, we make this
>> >> change in the TripleO undercloud too.
>> >>
>> >
>> > Wouldn't this necessitate having a private Heat just for Magnum? Not
>> > having a resource limit per stack would leave your Heat engines
>> > vulnerable to being DoS'd by malicious users, since one can create many
>> > many thousands of resources, and thus python objects, in just a couple
>> > of cleverly crafted templates (which is why I added the setting).
>> >
>> > This makes perfect sense in the undercloud of TripleO, which is a
>> > private, single tenant OpenStack. But, for Magnum.. now you're talking
>> > about the Heat that users have access to.
>>
>> We have it already at -1 for these tests. As you say a malicious user
>> could DoS, right now this is manageable in our environment. But maybe
>> move it to a per tenant value, or some special policy? The stacks are
>> created under a separate domain for magnum (for trustees), we could
>> also use that for separation.
>>
>> A separate heat instance sounds like an overkill.
>>
>
> It does, but there's really no way around it. If Magnum users are going
> to create massive stacks, then all of the heat engines will need to be
> able to handle massive stacks anyway, and a quota system would just mean
> that only Magnum gets to fully utilize those engines, which doesn't
> really make much sense at all, does it?

The best might be to see if there are improvements possible either in
the Heat engine (lots of what Zane mentioned seems to be of help,
we're willing to try that) or in the way Magnum creates the stacks.

In any case, things work right now, just not perfectly yet. Still ok to
get 1000 node clusters deployed in < 25min, people can handle that :)

Thanks!

Ricardo

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][ironic][odm][ACTION NEEDED] Kolla's BiFrost and kolla-host work

2016-08-09 Thread Steven Dake (stdake)
Sean has done a fantastic job on the BiFrost work and kolla-host work.  What is 
needed is the last step - which is a final thorough technical review from the 
core review team so we can get it merged by end of week.  If you're struggling to 
find the work in the review queue, use this link:

https://review.openstack.org/#/q/owner:%22sean+mooney+%253Csean.k.mooney%2540intel.com%253E%22

Finally, a huge thanks to cinerama and the rest of the Ironic team for 
supporting our efforts to consume this upstream.

Regards,
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Ben Swartzlander

On 08/09/2016 03:01 PM, Ihar Hrachyshka wrote:

Walter A. Boring IV  wrote:




I think "currently active stable branches" is key there. These branches
would no longer be "currently active". They would get an EOL tag
when it
reaches the end of the support phases. We just wouldn't delete the
branch.

This argument comes up at least once a cycle and there is a reason we
don't do
this. When we EOL a branch all of the infrastructure for running any
ci against
it goes away. This means devstack support, job definitions, tempest
skip checks,
etc. Leaving the branch around advertises that you can still submit
patches to
it which you can't anymore. As a community we've very clearly said
that we don't
land any code without ensuring it passes tests first, and we do not
maintain any
of the infrastructure for doing that after an EOL.


And it's this exact policy that has led us to this mess we are in
today.   As a vendor that has customers that use OpenStack, we have to
support very old releases.  Customers in the wild do not like to
upgrade once they get OpenStack up and running because it's very
difficult, time consuming and dangerous to do.  We have customers
still running Icehouse and they will most likely won't upgrade any
time soon.  Banks hate upgrading software after they have customers
running on it.   This is a community wide problem that needs to be
addressed.

Because of this problem, (not being able to backport bug fixes in our
drivers), we have been left with forking Cinder on our own github to
put our driver fixes there.   This is a terrible practice for the
OpenStack community in general, and terrible for customers/users of
OpenStack, as we have N driver vendors that have N different
mechanisms for getting bug fixes to their customers.  I believe this
is a major problem for users of OpenStack and it needs to be addressed.


Right. And so the proper solution in line with OpenStack practices would
be to allow vendors to own their plugins and maintain them, potentially
for extensive time. The fact the Cinder team is unwilling [as it seems
like from what I read in other replies to the thread] to provide that
kind of extensibility to vendors is unfortunate. BTW do we have a
write-up of reasons behind that?


I'm not sure what OpenStack practices you're referring to. The only 
project I'm aware of which does out of tree drivers is Neutron, and 
Neutron is a very different kind of project architecture that's not very 
comparable to Cinder. Most projects keep their drivers in tree.


The best example of why this is good is Linux. If you tell the Linux 
people to take their drivers out of the tree I can guarantee you they'll 
laugh you out of the room. The reasons for their stance are many and I 
won't recount them here (unless you want me to).



At the Cinder midcycle, we came up with a solution that would satisfy
Cinder customers, as Sean planned out.


All of them? I am not sure on that one. Some consumers may put some
trust into stable branches, and may be surprised by patches landing
there without upstream CI in place, undermining the promise the project
made by adopting the stable:follows-policy tag.


The main customers for bugfixes are OpenStack distros (like RHOS).

Only a tiny minority of users deploy from upstream and those users tend 
to be sophisticated enough to do their own backports or to pull from 
vendors' forked repos anyway.


What we're trying to achieve is to make the lives of the distro 
maintainers easier by creating a clearing house for bugfix backports 
which is owned and operated by the upstream community.


Regarding the stable:follows-policy tag, I never proposed using the 
stable branches -- I proposed creating different branches specifically 
to avoid the surprise you're alluding to. The infra team suggested that 
maybe reusing the stable branches was a better option.



We acknowledge that it's a driver maintainer's responsibility to make
sure they test any changes that get into the stable branches, because
there is no infra support for running CI against the patches of old
stable branches. I think that risk is far better than the existing
reality of N cinder forks floating around github.   It's just no way
to ship software to actual customers.


If Cinder gave you a proper hook (entry point) for your plugin,
you would not need to fork the whole thing just to extend your small
isolated bit. You would live on upstream infrastructure while stable/*
is alive, then move to your own git repo somewhere on the internet to
provide more bug fixes [as some vendors do for neutron drivers].


In theory you're right. We never touch stuff outside the drivers when we 
backport fixes. However, to maintain proper git histories and to be able 
to run the unit tests, etc, it's logistically simpler to fork the whole 
repo. It's trivially easy to verify that the all of the differences 
between a fork and its parent are limited to just the driver 
subdirectory, so in 

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-09 Thread Clint Byrum
Excerpts from Ricardo Rocha's message of 2016-08-08 11:51:00 +0200:
> Hi.
> 
> On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum  wrote:
> > Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
> >> On 05/08/16 21:48, Ricardo Rocha wrote:
> >> > Hi.
> >> >
> >> > Quick update is 1000 nodes and 7 million reqs/sec :) - and the number
> >> > of requests should be higher but we had some internal issues. We have
> >> > a submission for barcelona to provide a lot more details.
> >> >
> >> > But a couple questions came during the exercise:
> >> >
> >> > 1. Do we really need a volume in the VMs? On large clusters this is a
> >> > burden, and local storage only should be enough?
> >> >
> >> > 2. We observe a significant delay (~10min, which is half the total
> >> > time to deploy the cluster) on heat when it seems to be crunching the
> >> > kube_minions nested stacks. Once it's done, it still adds new stacks
> >> > gradually, so it doesn't look like it precomputed all the info in advance
> >> >
> >> > Anyone tried to scale Heat to stacks this size? We end up with a stack
> >> > with:
> >> > * 1000 nested stacks (depth 2)
> >> > * 22000 resources
> >> > * 47008 events
> >> >
> >> > And already changed most of the timeout/retrial values for rpc to get
> >> > this working.
> >> >
> >> > This delay is already visible in clusters of 512 nodes, but 40% of the
> >> > time in 1000 nodes seems like something we could improve. Any hints on
> >> > Heat configuration optimizations for large stacks very welcome.
> >> >
> >> Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
> >> max_resources_per_stack = -1
> >>
> >> Enforcing this for large stacks has a very high overhead, we make this
> >> change in the TripleO undercloud too.
> >>
> >
> > Wouldn't this necessitate having a private Heat just for Magnum? Not
> > having a resource limit per stack would leave your Heat engines
> > vulnerable to being DoS'd by malicious users, since one can create many
> > many thousands of resources, and thus python objects, in just a couple
> > of cleverly crafted templates (which is why I added the setting).
> >
> > This makes perfect sense in the undercloud of TripleO, which is a
> > private, single tenant OpenStack. But, for Magnum.. now you're talking
> > about the Heat that users have access to.
> 
> We have it already at -1 for these tests. As you say a malicious user
> could DoS, right now this is manageable in our environment. But maybe
> move it to a per tenant value, or some special policy? The stacks are
> created under a separate domain for magnum (for trustees), we could
> also use that for separation.
> 
> A separate heat instance sounds like an overkill.
> 

It does, but there's really no way around it. If Magnum users are going
to create massive stacks, then all of the heat engines will need to be
able to handle massive stacks anyway, and a quota system would just mean
that only Magnum gets to fully utilize those engines, which doesn't
really make much sense at all, does it?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tosca-parser] [heat-translator] [heat] [tacker] [opnfv] TOSCA-Parser 0.6.0 release

2016-08-09 Thread Sahdev P Zala
Hello Everyone, 

On behalf of the TOSCA-Parser team, I am pleased to announce the 0.6.0 
PyPI release of tosca-parser which can be downloaded from 
https://pypi.python.org/pypi/tosca-parser

This release includes the following enhancements:

Python 3.5 support
Support for TOSCA Repository which is an external repository in TOSCA 
service template containing deployment and implementation artifacts
Implementation of triggers in policies
Support for Credential data type
Support for Token function
get_attribute function support for optional requirement or capability
Refactoring of TOSCA definition file for easy reuse
Added new exception UnsupportedType to handle TOSCA types that are not 
supported in parser at any given time 
New test template for Container node type
NFV definition updates for NFV.CP and NFV.VNFFG
Moved from urllib to urllib2 for handling input templates specified via URL
Requirements updates
Small bug fixes
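
For anyone trying the release out, a minimal usage sketch (class and attribute
names follow the tosca-parser documentation; the template path is a
placeholder, so adjust it to your own file):

  from toscaparser.tosca_template import ToscaTemplate

  # Parse and validate a TOSCA service template (local path or URL).
  tosca = ToscaTemplate('/path/to/tosca_helloworld.yaml')

  for node in tosca.nodetemplates:
      print(node.name, node.type)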

Thanks! 

Regards, 
Sahdev Zala

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Hayes, Graham
On 09/08/2016 19:58, Ihar Hrachyshka wrote:
> Walter A. Boring IV  wrote:
>
>> On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:
>>> Duncan Thomas  wrote:
>>>
 On 8 August 2016 at 21:12, Matthew Treinish  wrote:
 Ignoring all that, this is also contrary to how we perform testing in
 OpenStack.
 We don't turn off entire classes of testing we have so we can land
 patches,
 that's just a recipe for disaster.

 But is it more of a disaster (for the consumers) than zero testing,
 zero review, scattered around the internet
 if-you're-lucky-with-a-good-wind you'll maybe get the right patch set?
 Because that's where we are right now, and vendors, distributors and
 the cinder core team are all saying it's a disaster.
>>>
>>> If consumers rely on upstream releases, then they are expected to
>>> migrate to newer releases after EOL, not switch to a random branch on
>>> the internet. If they rely on some commercial product, then they usually
>>> have an extended period of support and certification for their drivers,
>>> so it’s not a problem for them.
>>>
>>> Ihar
>> This is entirely unrealistic.  Force customers to upgrade.   Good luck
>> explaining to a bank that in order to get their cinder driver fix in,
>> they have to upgrade their entire OpenStack deployment. Real world
>> customers simply will balk at this all day long.
>
> Real world customers will pay for engineering to support their software,
> either their own or of one of OpenStack vendors. There is no free lunch
> from upstream here.

Sure - that may well be the case.

But if a few OpenStack vendors are willing to collaborate on the work,
would it not be better to centralise it, instead of each vendor forking
and doing the same fixes?

Not so much a free lunch, as spreading the cost of a lunch.

> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Davanum Srinivas
+1 for volunteers to step up.

-- Dims

On Tue, Aug 9, 2016 at 3:07 PM, Doug Hellmann  wrote:
> Excerpts from John Dickinson's message of 2016-08-09 11:14:57 -0700:
>> I'd like to advocate for *not* raising minimum versions very often. Every 
>> time some OpenStack project raises minimum versions, this change is 
>> propagated to all projects, and that puts extra burden on anyone who is 
>> maintaining packages and dependencies in their own deployment. If one 
>> project needs a new feature introduced in version 32, but another project 
>> claims compatibility with >=28, that's ok. There's no need for the second 
>> project to raise the minimum version when there isn't a conflict. (This is 
>> the position I advocated for at the Austin summit.)
>>
>> Yes, I know that currently we don't test every possible version permutation. 
>> Yes, I know that doing that is hard. I'm not ignoring that.
>>
>> --John
>
> As we said at the summit, when someone changes the requirements sync job
> to deal with overlapping and compatible ranges, this will be fine. We
> don't currently have anyone working on that, but since we agreed we
> would do it if there are any volunteers they should talk with the
> requirements team about how to get started.
>
> In the mean time, projects following the global requirements process
> are still expected to sync the patches created by the bot.
>
> Doug
>
>>
>> On 9 Aug 2016, at 9:24, Ian Cordasco wrote:
>>
>> >
>> >
>> > -Original Message-
>> > From: Sean Dague 
>> > Reply: OpenStack Development Mailing List (not for usage questions) 
>> > 
>> > Date: August 9, 2016 at 11:21:47
>> > To: openstack-dev@lists.openstack.org 
>> > Subject:  Re: [openstack-dev] [requirements] History lesson please
>> >
>> >> On 08/09/2016 11:25 AM, Matthew Thode wrote:
>> >>> On 08/09/2016 10:22 AM, Ian Cordasco wrote:
>>  -Original Message-
>>  From: Matthew Thode
>>  Reply: prometheanf...@gentoo.org , OpenStack Development
>> >> Mailing List (not for usage questions)
>>  Date: August 9, 2016 at 09:53:53
>>  To: openstack-dev@lists.openstack.org
>>  Subject: Re: [openstack-dev] [requirements] History lesson please
>> 
>> > One of the things on our todo list is to test the 'lower-constraints' 
>> > to
>> > make sure they still work with the head of branch.
>> 
>>  That's not sufficient. You need to find versions in between the lowest 
>>  tested version
>> >> and the current version to also make sure you don't end up with breakage. 
>> >> You might have
>> >> somepackage that has a lower version of 2.0.1 and a current constraint of 
>> >> 2.12.3. You
>> >> might even have a blacklist of versions in between those two versions, 
>> >> but you still need
>> >> other versions to ensure that things in between those continue to work.
>> 
>>  THe tiniest of accidental incompatibilities can cause some of the most 
>>  bizarre bugs.
>> 
>>  --
>>  Ian Cordasco
>> 
>> >>>
>> >>> I'm aware of this, but this would be a good start.
>> >>
>> >> And, more importantly, assuming that testing is only valid if it covers
>> >> every scenario, sets the bar at entirely the wrong place.
>> >>
>> >> A lower bound test would eliminate some of the worst fiction we've got.
>> >> Testing is never 100%. With a complex system like OpenStack, it's
>> >> probably not even 1% (of configs matrix for sure). But picking some
>> >> interesting representative scenarios and seeing that it's not completely
>> >> busted is worth while.
>> >
>> > Right. I'm not advocating for testing every released version of a 
>> > dependency. In general, it's good to test versions that have *triggered* 
>> > changes though. If upgrading from 2.3.0 to 2.4.1 caused you to need to fix 
>> > something, test something earlier than 2.4.1, and 2.4.1, and then 
>> > something later. That's what I'm advocating for.
>> >
>> > --
>> > Ian Cordasco
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-08-09 18:54:57 +:
> On 09/08/2016 19:41, John Dickinson wrote:
> >
> >
> > On 9 Aug 2016, at 11:33, Ian Cordasco wrote:
> >
> >>
> >>
> >> -Original Message-
> >> From: John Dickinson 
> >> Reply: OpenStack Development Mailing List (not for usage questions) 
> >> 
> >> Date: August 9, 2016 at 13:17:08
> >> To: OpenStack Development Mailing List 
> >> Subject:  Re: [openstack-dev] [requirements] History lesson please
> >>
> >>> I'd like to advocate for *not* raising minimum versions very often. Every 
> >>> time some OpenStack
> >>> project raises minimum versions, this change is propagated to all 
> >>> projects, and that
> >>> puts extra burden on anyone who is maintaining packages and dependencies 
> >>> in their own
> >>> deployment. If one project needs a new feature introduced in version 32, 
> >>> but another
> >>> project claims compatibility with >=28, that's ok. There's no need for 
> >>> the second project
> >>> to raise the minimum version when there isn't a conflict. (This is the 
> >>> position I advocated
> >>> for at the Austin summit.)
> >>>
> >>> Yes, I know that currently we don't test every possible version 
> >>> permutation. Yes, I know
> >>> that doing that is hard. I'm not ignoring that.
> >>
> >> Right. So with the current set-up, where these requirements are propogated 
> >> to every project, how do projects express their own minimum version 
> >> requirement?
> >>
> >> Let's assume someone is maintaining their own packages and dependencies. 
> >> If (for example) Glance requires a minimum version of Routes and Nova has 
> >> a minimum requirement newer than Glance's, they're not coinstallable 
> >> (which was the original goal of the requirements team). What you're asking 
> >> for ends up being "Don't rely on new features in a dependency". If 
> >> OpenStack drops the illusion of coinstallability that ends up being fine. 
> >> I don't think anyone wants to drop that though.
> >
> > In that case, they are still co-installable, because the nova minimum 
> > satisfies both.
> 
> But then packagers are going to have to do the work anyway, as it will
> have in effect raised the minimum version of routes for Glance, and thus
> need a new package.
> 
> It might not make a difference to deployers / packagers who only deploy
> one project from OpenStack, but they are in the minority - having a
> known good minimum for requirements helps deployers who have multiple
> services to deploy.

We've tried to be consistent in telling packagers to use the
versions listed in upper-constraints.txt unless there is an absolute
need to use something else. Those are the versions we test, and
therefore the versions we claim to support right now.
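
(For example, a deployment or CI job would typically consume those pins as a
pip constraints file; the local file name below is illustrative:

  pip install -c upper-constraints.txt -r requirements.txt
)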

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Doug Hellmann
Excerpts from John Dickinson's message of 2016-08-09 11:14:57 -0700:
> I'd like to advocate for *not* raising minimum versions very often. Every 
> time some OpenStack project raises minimum versions, this change is 
> propagated to all projects, and that puts extra burden on anyone who is 
> maintaining packages and dependencies in their own deployment. If one project 
> needs a new feature introduced in version 32, but another project claims 
> compatibility with >=28, that's ok. There's no need for the second project to 
> raise the minimum version when there isn't a conflict. (This is the position 
> I advocated for at the Austin summit.)
> 
> Yes, I know that currently we don't test every possible version permutation. 
> Yes, I know that doing that is hard. I'm not ignoring that.
> 
> --John

As we said at the summit, when someone changes the requirements sync job
to deal with overlapping and compatible ranges, this will be fine. We
don't currently have anyone working on that, but since we agreed we
would do it if there are any volunteers they should talk with the
requirements team about how to get started.

In the mean time, projects following the global requirements process
are still expected to sync the patches created by the bot.

Doug

> 
> On 9 Aug 2016, at 9:24, Ian Cordasco wrote:
> 
> >  
> >
> > -Original Message-
> > From: Sean Dague 
> > Reply: OpenStack Development Mailing List (not for usage questions) 
> > 
> > Date: August 9, 2016 at 11:21:47
> > To: openstack-dev@lists.openstack.org 
> > Subject:  Re: [openstack-dev] [requirements] History lesson please
> >
> >> On 08/09/2016 11:25 AM, Matthew Thode wrote:
> >>> On 08/09/2016 10:22 AM, Ian Cordasco wrote:
>  -Original Message-
>  From: Matthew Thode
>  Reply: prometheanf...@gentoo.org , OpenStack Development
> >> Mailing List (not for usage questions)
>  Date: August 9, 2016 at 09:53:53
>  To: openstack-dev@lists.openstack.org
>  Subject: Re: [openstack-dev] [requirements] History lesson please
> 
> > One of the things on our todo list is to test the 'lower-constraints' to
> > make sure they still work with the head of branch.
> 
>  That's not sufficient. You need to find versions in between the lowest 
>  tested version
> >> and the current version to also make sure you don't end up with breakage. 
> >> You might have
> >> somepackage that has a lower version of 2.0.1 and a current constraint of 
> >> 2.12.3. You
> >> might even have a blacklist of versions in between those two versions, but 
> >> you still need
> >> other versions to ensure that things in between those continue to work.
> 
>  The tiniest of accidental incompatibilities can cause some of the most 
>  bizarre bugs.
> 
>  --
>  Ian Cordasco
> 
> >>>
> >>> I'm aware of this, but this would be a good start.
> >>
> >> And, more importantly, assuming that testing is only valid if it covers
> >> every scenario, sets the bar at entirely the wrong place.
> >>
> >> A lower bound test would eliminate some of the worst fiction we've got.
> >> Testing is never 100%. With a complex system like OpenStack, it's
> >> probably not even 1% (of configs matrix for sure). But picking some
> >> interesting representative scenarios and seeing that it's not completely
> >> busted is worth while.
> >
> > Right. I'm not advocating for testing every released version of a 
> > dependency. In general, it's good to test versions that have *triggered* 
> > changes though. If upgrading from 2.3.0 to 2.4.1 caused you to need to fix 
> > something, test something earlier than 2.4.1, and 2.4.1, and then something 
> > later. That's what I'm advocating for.
> >
> > --
> > Ian Cordasco
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Matthew Thode
On 08/09/2016 01:37 PM, John Dickinson wrote:
> In that case, they are still co-installable, because the nova minimum 
> satisfies both.

The requirements project currently advocates the use of
upper-constraints.txt as what packagers should target.  This is
what's tested.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Ihar Hrachyshka

Walter A. Boring IV  wrote:




I think "currently active stable branches" is key there. These branches
would no longer be "currently active". They would get an EOL tag when it
reaches the end of the support phases. We just wouldn't delete the
branch.
This argument comes up at least once a cycle and there is a reason we  
don't do
this. When we EOL a branch all of the infrastructure for running any ci  
against
it goes away. This means devstack support, job definitions, tempest skip  
checks,
etc. Leaving the branch around advertises that you can still submit  
patches to
it which you can't anymore. As a community we've very clearly said that  
we don't
land any code without ensuring it passes tests first, and we do not  
maintain any

of the infrastructure for doing that after an EOL.


And it's this exact policy that has led us to this mess we are in  
today.   As a vendor that has customers that use OpenStack, we have to  
support very old releases.  Customers in the wild do not like to upgrade  
once they get OpenStack up and running because it's very difficult, time  
consuming and dangerous to do.  We have customers still running Icehouse  
and they most likely won't upgrade any time soon.  Banks hate  
upgrading software after they have customers running on it.   This is a  
community wide problem that needs to be addressed.


Because of this problem, (not being able to backport bug fixes in our  
drivers), we have been left with forking Cinder on our own github to put  
our driver fixes there.   This is a terrible practice for the OpenStack  
community in general, and terrible for customers/users of OpenStack, as  
we have N driver vendors that have N different mechanisms for getting bug  
fixes to their customers.  I believe this is a major problem for users of  
OpenStack and it needs to be addressed.


Right. And so the proper solution, in line with OpenStack practices, would be  
to allow vendors to own their plugins and maintain them, potentially for an  
extended time. The fact that the Cinder team is unwilling [as it seems  
from what I read in other replies to the thread] to provide that kind of  
extensibility to vendors is unfortunate. BTW, do we have a write-up of the  
reasons behind that?


At the Cinder midcycle, we came up with a solution that would satisfy  
Cinder customers, as Sean planned out.


All of them? I am not sure about that one. Some consumers may put some trust  
in stable branches, and may be surprised by patches landing there without  
upstream CI in place, undermining the promise the project made by adopting  
the stable:follows-policy tag.


We acknowledge that it's a driver maintainer's responsibility to make  
sure they test any changes that get into the stable branches, because  
there is no infra support for running CI against the patches of old  
stable branches. I think that risk is far better than the existing  
reality of N cinder forks floating around github.   It's just no way to  
ship software to actual customers.


If Cinder gave you the right hook or entry point for your plugin, you  
would not need to fork the whole thing just to extend your small, isolated  
bit. You would live on upstream infrastructure while stable/* is alive,  
then move to your own git repo somewhere on the internet to provide more  
bug fixes [as some vendors do for neutron drivers].


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Hayes, Graham
On 09/08/2016 19:41, John Dickinson wrote:
>
>
> On 9 Aug 2016, at 11:33, Ian Cordasco wrote:
>
>>
>>
>> -Original Message-
>> From: John Dickinson 
>> Reply: OpenStack Development Mailing List (not for usage questions) 
>> 
>> Date: August 9, 2016 at 13:17:08
>> To: OpenStack Development Mailing List 
>> Subject:  Re: [openstack-dev] [requirements] History lesson please
>>
>>> I'd like to advocate for *not* raising minimum versions very often. Every 
>>> time some OpenStack
>>> project raises minimum versions, this change is propagated to all projects, 
>>> and that
>>> puts extra burden on anyone who is maintaining packages and dependencies in 
>>> their own
>>> deployment. If one project needs a new feature introduced in version 32, 
>>> but another
>>> project claims compatibility with >=28, that's ok. There's no need for the 
>>> second project
>>> to raise the minimum version when there isn't a conflict. (This is the 
>>> position I advocated
>>> for at the Austin summit.)
>>>
>>> Yes, I know that currently we don't test every possible version 
>>> permutation. Yes, I know
>>> that doing that is hard. I'm not ignoring that.
>>
>> Right. So with the current set-up, where these requirements are propagated 
>> to every project, how do projects express their own minimum version 
>> requirement?
>>
>> Let's assume someone is maintaining their own packages and dependencies. If 
>> (for example) Glance requires a minimum version of Routes and Nova has a 
>> minimum requirement newer than Glance's, they're not coinstallable (which 
>> was the original goal of the requirements team). What you're asking for ends 
>> up being "Don't rely on new features in a dependency". If OpenStack drops 
>> the illusion of coinstallability that ends up being fine. I don't think 
>> anyone wants to drop that though.
>
> In that case, they are still co-installable, because the nova minimum 
> satisfies both.

But then packagers are going to have to do the work anyway, as it will
have in effect raised the minimum version of routes for Glance, and thus
need a new package.

It might not make a difference to deployers / packagers who only deploy
one project from OpenStack, but they are in the minority - having a
known good minimum for requirements helps deployers who have multiple
services to deploy.

>
>>
>> --
>> Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Ihar Hrachyshka

Walter A. Boring IV  wrote:


On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:

Duncan Thomas  wrote:


On 8 August 2016 at 21:12, Matthew Treinish  wrote:
Ignoring all that, this is also contrary to how we perform testing in  
OpenStack.
We don't turn off entire classes of testing we have so we can land  
patches,

that's just a recipe for disaster.

But is it more of a disaster (for the consumers) than zero testing,  
zero review, scattered around the internet  
if-you're-lucky-with-a-good-wind you'll maybe get the right patch set?  
Because that's where we are right now, and vendors, distributors and  
the cinder core team are all saying it's a disaster.


If consumers rely on upstream releases, then they are expected to  
migrate to newer releases after EOL, not switch to a random branch on  
the internet. If they rely on some commercial product, then they usually  
have an extended period of support and certification for their drivers,  
so it’s not a problem for them.


Ihar
This is entirely unrealistic.  Force customers to upgrade.   Good luck  
explaining to a bank that in order to get their cinder driver fix in,  
they have to upgrade their entire OpenStack deployment. Real world  
customers simply will balk at this all day long.


Real-world customers will pay for engineering to support their software,  
either their own or that of one of the OpenStack vendors. There is no free lunch  
from upstream here.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Making the nonha-multinode and undercloud jobs voting

2016-08-09 Thread Paul Belanger
On Tue, Aug 09, 2016 at 01:14:24PM -0400, James Slagle wrote:
> I've proposed:
> https://review.openstack.org/353019
> 
> which makes gate-tripleo-ci-centos-7-nonha-multinode-nv and
> gate-tripleo-ci-centos-7-undercloud-nv become voting jobs.
> 
> I think these jobs have proven to be stable enough that we can promote
> them to be voting. If you have concerns, please vote on the patch. The
> nice thing about having these jobs voting is that jenkins will
> actually vote -1 on TripleO patches when these jobs fail.
> 
This is a big accomplishment for TripleO; it's been a long time coming to get
more jobs voting in the gate.  Everybody should be happy with this step.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread John Dickinson


On 9 Aug 2016, at 11:33, Ian Cordasco wrote:

>  
>
> -Original Message-
> From: John Dickinson 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> 
> Date: August 9, 2016 at 13:17:08
> To: OpenStack Development Mailing List 
> Subject:  Re: [openstack-dev] [requirements] History lesson please
>
>> I'd like to advocate for *not* raising minimum versions very often. Every 
>> time some OpenStack
>> project raises minimum versions, this change is propagated to all projects, 
>> and that
>> puts extra burden on anyone who is maintaining packages and dependencies in 
>> their own
>> deployment. If one project needs a new feature introduced in version 32, but 
>> another
>> project claims compatibility with >=28, that's ok. There's no need for the 
>> second project
>> to raise the minimum version when there isn't a conflict. (This is the 
>> position I advocated
>> for at the Austin summit.)
>>
>> Yes, I know that currently we don't test every possible version permutation. 
>> Yes, I know
>> that doing that is hard. I'm not ignoring that.
>
> Right. So with the current set-up, where these requirements are propagated to 
> every project, how do projects express their own minimum version requirement?
>
> Let's assume someone is maintaining their own packages and dependencies. If 
> (for example) Glance requires a minimum version of Routes and Nova has a 
> minimum requirement newer than Glance's, they're not coinstallable (which was 
> the original goal of the requirements team). What you're asking for ends up 
> being "Don't rely on new features in a dependency". If OpenStack drops the 
> illusion of coinstallability that ends up being fine. I don't think anyone 
> wants to drop that though.

In that case, they are still co-installable, because the nova minimum satisfies 
both.
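
A rough sketch of that, using the packaging library and the made-up minimums
from the example above (28 for glance, 32 for nova):

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    glance_routes = SpecifierSet(">=28")  # glance's hypothetical minimum
    nova_routes = SpecifierSet(">=32")    # nova's hypothetical minimum

    # The combined requirement is just the intersection of the two ranges.
    combined = glance_routes & nova_routes

    # Anything that satisfies the stricter (nova) minimum satisfies both,
    # so the two projects stay co-installable.
    print(Version("32") in combined)  # True
    print(Version("30") in combined)  # False: fine for glance, too old for nova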

>
> --
> Ian Cordasco


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Ian Cordasco
 

-Original Message-
From: John Dickinson 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: August 9, 2016 at 13:17:08
To: OpenStack Development Mailing List 
Subject:  Re: [openstack-dev] [requirements] History lesson please

> I'd like to advocate for *not* raising minimum versions very often. Every 
> time some OpenStack  
> project raises minimum versions, this change is propagated to all projects, 
> and that  
> puts extra burden on anyone who is maintaining packages and dependencies in 
> their own  
> deployment. If one project needs a new feature introduced in version 32, but 
> another  
> project claims compatibility with >=28, that's ok. There's no need for the 
> second project  
> to raise the minimum version when there isn't a conflict. (This is the 
> position I advocated  
> for at the Austin summit.)
>  
> Yes, I know that currently we don't test every possible version permutation. 
> Yes, I know  
> that doing that is hard. I'm not ignoring that.

Right. So with the current set-up, where these requirements are propagated to 
every project, how do projects express their own minimum version requirement?

Let's assume someone is maintaining their own packages and dependencies. If 
(for example) Glance requires a minimum version of Routes and Nova has a 
minimum requirement newer than Glance's, they're not coinstallable (which was 
the original goal of the requirements team). What you're asking for ends up 
being "Don't rely on new features in a dependency". If OpenStack drops the 
illusion of coinstallability that ends up being fine. I don't think anyone 
wants to drop that though.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Davanum Srinivas
FYI, just so you all know: it's upper-constraints.txt. Note the word "upper" :)

-- Dims

On Tue, Aug 9, 2016 at 2:25 PM, Julien Danjou  wrote:
> On Tue, Aug 09 2016, John Dickinson wrote:
>
>> I'd like to advocate for *not* raising minimum versions very often. Every 
>> time
>> some OpenStack project raises minimum versions, this change is propagated to
>> all projects, and that puts extra burden on anyone who is maintaining 
>> packages
>> and dependencies in their own deployment. If one project needs a new feature
>> introduced in version 32, but another project claims compatibility with >=28,
>> that's ok. There's no need for the second project to raise the minimum 
>> version
>> when there isn't a conflict. (This is the position I advocated for at the
>> Austin summit.)
>
> Amen to that.
>
> --
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Julien Danjou
On Tue, Aug 09 2016, John Dickinson wrote:

> I'd like to advocate for *not* raising minimum versions very often. Every time
> some OpenStack project raises minimum versions, this change is propagated to
> all projects, and that puts extra burden on anyone who is maintaining packages
> and dependencies in their own deployment. If one project needs a new feature
> introduced in version 32, but another project claims compatibility with >=28,
> that's ok. There's no need for the second project to raise the minimum version
> when there isn't a conflict. (This is the position I advocated for at the
> Austin summit.)

Amen to that.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread John Dickinson
I'd like to advocate for *not* raising minimum versions very often. Every time 
some OpenStack project raises minimum versions, this change is propagated to 
all projects, and that puts extra burden on anyone who is maintaining packages 
and dependencies in their own deployment. If one project needs a new feature 
introduced in version 32, but another project claims compatibility with >=28, 
that's ok. There's no need for the second project to raise the minimum version 
when there isn't a conflict. (This is the position I advocated for at the 
Austin summit.)

Yes, I know that currently we don't test every possible version permutation. 
Yes, I know that doing that is hard. I'm not ignoring that.

--John




On 9 Aug 2016, at 9:24, Ian Cordasco wrote:

>  
>
> -Original Message-
> From: Sean Dague 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> 
> Date: August 9, 2016 at 11:21:47
> To: openstack-dev@lists.openstack.org 
> Subject:  Re: [openstack-dev] [requirements] History lesson please
>
>> On 08/09/2016 11:25 AM, Matthew Thode wrote:
>>> On 08/09/2016 10:22 AM, Ian Cordasco wrote:
 -Original Message-
 From: Matthew Thode
 Reply: prometheanf...@gentoo.org , OpenStack Development
>> Mailing List (not for usage questions)
 Date: August 9, 2016 at 09:53:53
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [requirements] History lesson please

> One of the things on our todo list is to test the 'lower-constraints' to
> make sure they still work with the head of branch.

 That's not sufficient. You need to find versions in between the lowest 
 tested version
>> and the current version to also make sure you don't end up with breakage. 
>> You might have
>> somepackage that has a lower version of 2.0.1 and a current constraint of 
>> 2.12.3. You
>> might even have a blacklist of versions in between those two versions, but 
>> you still need
>> other versions to ensure that things in between those continue to work.

 The tiniest of accidental incompatibilities can cause some of the most 
 bizarre bugs.

 --
 Ian Cordasco

>>>
>>> I'm aware of this, but this would be a good start.
>>
>> And, more importantly, assuming that testing is only valid if it covers
>> every scenario, sets the bar at entirely the wrong place.
>>
>> A lower bound test would eliminate some of the worst fiction we've got.
>> Testing is never 100%. With a complex system like OpenStack, it's
>> probably not even 1% (of configs matrix for sure). But picking some
>> interesting representative scenarios and seeing that it's not completely
>> busted is worth while.
>
> Right. I'm not advocating for testing every released version of a dependency. 
> In general, it's good to test versions that have *triggered* changes though. 
> If upgrading from 2.3.0 to 2.4.1 caused you to need to fix something, test 
> something earlier than 2.4.1, and 2.4.1, and then something later. That's 
> what I'm advocating for.
>
> --
> Ian Cordasco
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [neutron][dvr][fip] fg device allocated private ip address

2016-08-09 Thread Carl Baldwin
On Wed, Aug 3, 2016 at 1:22 AM, zhuna  wrote:

> Hi Carl,
>
>
>
> IMO, if the upstream router has the route to floating ip subnet, no need
> to assign additional IP address to the router.
>
>
>
> For example, there are 2 subnets in external network,
>
> Subnet1: 10.0.0.0/24 (fg ip address)
>
> Subnet2: 9.0.0.0/24 (fip)
>
>
>
> Suppose assign fip 9.0.0.10 for vm1, and the fg ip address is 10.0.0.10,
> so there are 2 ip address configured in fg, one is 9.0.0.10 and 10.0.0.10.
>
> +---+
>
> |  router ns   |
>
> +---+
>
> | fg (10.0.0.10, 9.0.0.10)
>
> |
>
> |
>
> | router-if (10.0.0.1)
>
> +---+
>
> |  upstream router   | Internet
>
> +---+
>
>
>
> The default route of router ns is 10.0.0.1,  add a static route
> 9.0.0.10/32 10.0.0.10 to upstream router , or learn the route by routing
> protocol (neutron-dynamic-routing).
>

This could certainly work. It would require some changes in the L3 agent
because the L3 agent assumes that each of the subnets on the external
network has a gateway address. It doesn't currently distinguish between
different service types. So, when it gets its static address, 10.0.0.10/24,
it will expect a gateway address within that subnet (e.g. 10.0.0.1).

We could make changes to the L3 agent to accommodate this, but I'm hesitant
to do so because it is much easier to get the default route from the
interface's static address. Floating IP addresses come and go, so we'd have
to manage the router's default gateway as they do. Also, routers that happen
not to be hosting any floating IP addresses (CVRs, not DVRs) would not have
any default gateway. It seems a lot more straightforward to me to just
require that the upstream router have an address on the fg subnet.
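
To restate the addressing with a quick sketch (Python stdlib only, using the
example numbers from the diagram above):

    import ipaddress

    fg_subnet = ipaddress.ip_network("10.0.0.0/24")  # subnet holding the fg address
    fip = ipaddress.ip_address("9.0.0.10")           # the floating IP
    fg = ipaddress.ip_address("10.0.0.10")           # the fg interface address

    # The floating IP is not on the fg subnet, so the upstream router can only
    # reach it via an explicit host route (9.0.0.10/32 via 10.0.0.10) or by
    # having an address on the fg subnet itself.
    print(fip in fg_subnet)  # False
    print(fg in fg_subnet)   # True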

Carl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Making the nonha-multinode and undercloud jobs voting

2016-08-09 Thread James Slagle
On Tue, Aug 9, 2016 at 1:30 PM, Michele Baldessari  wrote:
> Hi James,
>
> On Tue, Aug 09, 2016 at 01:14:24PM -0400, James Slagle wrote:
>> I've proposed:
>> https://review.openstack.org/353019
>>
>> which makes gate-tripleo-ci-centos-7-nonha-multinode-nv and
>> gate-tripleo-ci-centos-7-undercloud-nv become voting jobs.
>
> definitely +1 for the gate-tripleo-ci-centos-7-undercloud-nv job.
> About the nonha job, I was actually wondering if we should still
> keep any non-ha templates/jobs around now that the New HA architecture
> has landed. I cannot think of any real usage and the NG HA stuff deploys
> fine on 1 controller as well so the "develop on a smaller machine"
> use-case is covered.
>
> Is there any reason/use-case to keep any non-ha templates/jobs around?
> I'd love to remove them, but maybe there are some uses I have not
> thought of ;)
>
> Thanks,
> Michele
>
>> I think these jobs have proven to be stable enough that we can promote
>> them to be voting. If you have concerns, please vote on the patch. The
>> nice thing about having these jobs voting is that jenkins will
>> actually vote -1 on TripleO patches when these jobs fail.

I personally agree and think that we should consolidate our
development and testing efforts onto the single NG pacemaker
architecture and use that for both for non-HA and HA.

That being said, this needs to be driven via tripleo-heat-templates,
tripleoclient, etc, instead of from the tripleo-ci side. E.g., once
environments/puppet-pacemaker.yaml is the default environment in
tripleo-heat-templates, then tripleo-ci will be using it automatically
for nonha.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] proposal: start gating on puppet4

2016-08-09 Thread Emilien Macchi
Hi,

Today Puppet OpenStack CI is running unit and functional test jobs
against puppet 3 and puppet 4.
Unit jobs for puppet 4 are currently voting and pretty stable.
Functional jobs for puppet 4 are not voting but also stable.

Even though Puppet 4 has not been widely adopted by our community [1] yet,
I would like to encourage our users to upgrade their version of Puppet.
Fedora ships it by default [2] and for Ubuntu, it's also the default
since yakkety [3].

[1] 
https://docs.google.com/spreadsheets/d/1iIQ6YmpdOVctS2-wCV6SGPP1NSj8nKD9nv_xtZH9loY/edit?usp=sharing
[2] http://koji.fedoraproject.org/koji/packageinfo?packageID=3529
[3] http://packages.ubuntu.com/yakkety/puppet

So here's my proposal, feel free to bring any feedback:
- For stable/mitaka CI and stable/liberty nothing will change.
- For current master (the future stable/newton in a few months), make the
non-voting puppet4 jobs voting and add them to the gate. Also
keep the puppet3 unit test jobs voting.
- After the Newton release (during the Ocata cycle), change master CI to only
gate functional jobs on puppet4 (and remove the puppet3 jobs from
puppet-openstack-integration); but keep the puppet3 unit test jobs
voting.
- During the Ocata cycle, implement a periodic job that checks nightly that
we can still deploy with Puppet 3. The periodic job is something the part of
our community interested in Puppet 3 will have to monitor, reporting any
new failure so we can address it.

That way, we tell our users:
- don't worry if you deploy Liberty, Mitaka, Newton, we will
officially support Puppet 3.
- if you plan to deploy Puppet 4, we'll officially support you
starting from Newton.
- if you plan to deploy Ocata with Puppet 3, we won't support you
anymore since our functional testing jobs will be gone. That said, we'll
do our best to remain backward compatible thanks to our unit and
periodic functional testing jobs.

Regarding packaging:
- on Ubuntu, we'll continue to rely on what Puppetlabs provides, because
Xenial doesn't provide Puppet 4.
- on CentOS7, we are working on getting Puppet 4 packaged in RDO and
our CI will certainly use it.

Any feedback is welcome,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Walter A. Boring IV

On 08/08/2016 02:28 PM, Ihar Hrachyshka wrote:

Duncan Thomas  wrote:

On 8 August 2016 at 21:12, Matthew Treinish  
wrote:
Ignoring all that, this is also contrary to how we perform testing in 
OpenStack.
We don't turn off entire classes of testing we have so we can land 
patches,

that's just a recipe for disaster.

But is it more of a disaster (for the consumers) than zero testing, 
zero review, scattered around the internet 
if-you're-lucky-with-a-good-wind you'll maybe get the right patch 
set? Because that's where we are right now, and vendors, distributors 
and the cinder core team are all saying it's a disaster.


If consumers rely on upstream releases, then they are expected to 
migrate to newer releases after EOL, not switch to a random branch on 
the internet. If they rely on some commercial product, then they 
usually have an extended period of support and certification for their 
drivers, so it’s not a problem for them.


Ihar
This is entirely unrealistic.  Force customers to upgrade.   Good luck 
explaining to a bank that in order to get their cinder driver fix in, 
they have to upgrade their entire OpenStack deployment. Real world 
customers simply will balk at this all day long.


Walt


__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-09 Thread Walter A. Boring IV



I think "currently active stable branches" is key there. These branches
would no longer be "currently active". They would get an EOL tag when it
reaches the end of the support phases. We just wouldn't delete the
branch.

This argument comes up at least once a cycle and there is a reason we don't do
this. When we EOL a branch all of the infrastructure for running any ci against
it goes away. This means devstack support, job definitions, tempest skip checks,
etc. Leaving the branch around advertises that you can still submit patches to
it which you can't anymore. As a community we've very clearly said that we don't
land any code without ensuring it passes tests first, and we do not maintain any
of the infrastructure for doing that after an EOL.


And it's this exact policy that has led us to this mess we are in 
today.   As a vendor that has customers that use OpenStack, we have to 
support very old releases.  Customers in the wild do not like to upgrade 
once they get OpenStack up and running because it's very difficult, time 
consuming and dangerous to do.  We have customers still running Icehouse 
and they most likely won't upgrade any time soon.  Banks hate 
upgrading software after they have customers running on it.   This is a 
community wide problem that needs to be addressed.


Because of this problem, (not being able to backport bug fixes in our 
drivers), we have been left with forking Cinder on our own github to put 
our driver fixes there.   This is a terrible practice for the OpenStack 
community in general, and terrible for customers/users of OpenStack, as 
we have N driver vendors that have N different mechanisms for getting 
bug fixes to their customers.  I believe this is a major problem for 
users of OpenStack and it needs to be addressed.
At the Cinder midcycle, we came up with a solution that would satisfy 
Cinder customers, as Sean planned out.  We acknowledge that it's a 
driver maintainer's responsibility to make sure they test any changes 
that get into the stable branches, because there is no infra support for 
running CI against the patches of old stable branches. I think that risk 
is far better than the existing reality of N cinder forks floating 
around github.   It's just no way to ship software to actual customers.


$0.02,
Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Making the nonha-multinode and undercloud jobs voting

2016-08-09 Thread James Slagle
On Tue, Aug 9, 2016 at 1:14 PM, James Slagle  wrote:
> I've proposed:
> https://review.openstack.org/353019
>
> which makes gate-tripleo-ci-centos-7-nonha-multinode-nv and
> gate-tripleo-ci-centos-7-undercloud-nv become voting jobs.
>
> I think these jobs have proven to be stable enough that we can promote
> them to be voting. If you have concerns, please vote on the patch. The
> nice thing about having these jobs voting is that jenkins will
> actually vote -1 on TripleO patches when these jobs fail.

Note this also means the jobs will be gating as well. When patches are
approved, instead of just the normal set of linters/pep8/unit test
type jobs running, we will also be running the nonha-multinode job and
undercloud job. The nonha-multinode job is the longer of the 2 jobs
and is currently averaging 67 minutes fwiw.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Making the nonha-multinode and undercloud jobs voting

2016-08-09 Thread Michele Baldessari
Hi James,

On Tue, Aug 09, 2016 at 01:14:24PM -0400, James Slagle wrote:
> I've proposed:
> https://review.openstack.org/353019
> 
> which makes gate-tripleo-ci-centos-7-nonha-multinode-nv and
> gate-tripleo-ci-centos-7-undercloud-nv become voting jobs.

definitely +1 for the gate-tripleo-ci-centos-7-undercloud-nv job.
About the nonha job, I was actually wondering if we should still
keep any non-ha templates/jobs around now that the New HA architecture
has landed. I cannot think of any real usage and the NG HA stuff deploys
fine on 1 controller as well so the "develop on a smaller machine"
use-case is covered.

Is there any reason/use-case to keep any non-ha templates/jobs around?
I'd love to remove them, but maybe there are some uses I have not
thought of ;)

Thanks,
Michele

> I think these jobs have proven to be stable enough that we can promote
> them to be voting. If you have concerns, please vote on the patch. The
> nice thing about having these jobs voting is that jenkins will
> actually vote -1 on TripleO patches when these jobs fail.
-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Making the nonha-multinode and undercloud jobs voting

2016-08-09 Thread James Slagle
I've proposed:
https://review.openstack.org/353019

which makes gate-tripleo-ci-centos-7-nonha-multinode-nv and
gate-tripleo-ci-centos-7-undercloud-nv become voting jobs.

I think these jobs have proven to be stable enough that we can promote
them to be voting. If you have concerns, please vote on the patch. The
nice thing about having these jobs voting is that jenkins will
actually vote -1 on TripleO patches when these jobs fail.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gluon] IRC Meeting canceled this week (8/10)

2016-08-09 Thread HU, BIN
Hello team,



The IRC meeting this week (8/10) is canceled.



Thank you

Bin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][osic] OSIC cluster status

2016-08-09 Thread Paul Bourke

Update Aug 9th:

We are on scenario two: 3 control, 20 storage, 100 compute with Ceph. 
Notes are now being collected in 
https://review.openstack.org/#/c/352101/ along with tempest/rally results.


-Paul

On 05/08/16 17:48, Paul Bourke wrote:

Hi Kolla,

Thought it would be helpful to send a status mail once we hit checkpoints
in the osic cluster work, so people can keep up to speed without having
to trawl IRC.

Reference: https://etherpad.openstack.org/p/kolla-N-midcycle-osic

Work began on the cluster Wed Aug 3rd, item 1) from the etherpad is now
complete. The 131 bare metal nodes have been provisioned with Ubuntu
14.04, networking is configured, and all Kolla prechecks are passing.

The default set of images (--profile default) have been built and pushed
to a registry running on the deployment node, the build taking a very
speedy 5m37.040s.

Cheers,
-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova mascot

2016-08-09 Thread Daniel P. Berrange
On Tue, Aug 09, 2016 at 11:26:21AM -0500, Matt Riedemann wrote:
> On 8/8/2016 4:10 PM, Clint Byrum wrote:
> > Excerpts from Matt Riedemann's message of 2016-08-08 14:35:12 -0500:
> > > Not to be a major curmudgeon but I think we'd basically decided at the
> > > midcycle (actually weeks before) that Nova wasn't doing the mascot thing.
> > > 
> > 
> > Could you maybe summarize the reason for this decision?
> > 
> > Seems like everybody else is taking this moment to look inward and
> > think about how they want to be seen. Why wouldn't Nova want to take an
> > opportunity to do the same?
> > 
> 
> idk, I'm open to it I guess if people are really passionate about picking a
> mascot, but for the most part when this has come up we've basically had
> jokes about horses asses and such.
> 
> Personally this feels like mandatory fun and I'm usually not interested in
> stuff like that.

It also ends up creating new problems that we then have to spend time on for
no clear benefit. E.g. we're going to have to collect a list of proposed
mascots, check them with legal to make sure they don't clash with mascots
used by other software companies with squadrons of attack lawyers, then
arrange voting on them. Even after all that we'll probably find out later
that in some culture the mascot we've chosen has negative connotations
associated with it. All this will do nothing to improve life for people
who actually deploy and use nova, so it's all rather a waste of time IMHO.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nova mascot

2016-08-09 Thread Matt Riedemann

On 8/8/2016 4:10 PM, Clint Byrum wrote:

Excerpts from Matt Riedemann's message of 2016-08-08 14:35:12 -0500:

Not to be a major curmudgeon but I think we'd basically decided at the
midcycle (actually weeks before) that Nova wasn't doing the mascot thing.



Could you maybe summarize the reason for this decision?

Seems like everybody else is taking this moment to look inward and
think about how they want to be seen. Why wouldn't Nova want to take an
opportunity to do the same?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



idk, I'm open to it I guess if people are really passionate about 
picking a mascot, but for the most part when this has come up we've 
basically had jokes about horses asses and such.


Personally this feels like mandatory fun and I'm usually not interested 
in stuff like that.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Ian Cordasco
 

-Original Message-
From: Sean Dague 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: August 9, 2016 at 11:21:47
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [requirements] History lesson please

> On 08/09/2016 11:25 AM, Matthew Thode wrote:
> > On 08/09/2016 10:22 AM, Ian Cordasco wrote:
> >> -Original Message-
> >> From: Matthew Thode  
> >> Reply: prometheanf...@gentoo.org , OpenStack Development  
> Mailing List (not for usage questions)  
> >> Date: August 9, 2016 at 09:53:53
> >> To: openstack-dev@lists.openstack.org  
> >> Subject: Re: [openstack-dev] [requirements] History lesson please
> >>
> >>> One of the things on our todo list is to test the 'lower-constraints' to
> >>> make sure they still work with the head of branch.
> >>
> >> That's not sufficient. You need to find versions in between the lowest 
> >> tested version  
> and the current version to also make sure you don't end up with breakage. You 
> might have  
> somepackage that has a lower version of 2.0.1 and a current constraint of 
> 2.12.3. You  
> might even have a blacklist of versions in between those two versions, but 
> you still need  
> other versions to ensure that things in between those continue to work.
> >>
> >> The tiniest of accidental incompatibilities can cause some of the most 
> >> bizarre bugs.  
> >>
> >> --
> >> Ian Cordasco
> >>
> >
> > I'm aware of this, but this would be a good start.
>  
> And, more importantly, assuming that testing is only valid if it covers
> every scenario, sets the bar at entirely the wrong place.
>  
> A lower bound test would eliminate some of the worst fiction we've got.
> Testing is never 100%. With a complex system like OpenStack, it's
> probably not even 1% (of configs matrix for sure). But picking some
> interesting representative scenarios and seeing that it's not completely
> busted is worth while.

Right. I'm not advocating for testing every released version of a dependency. 
In general, it's good to test versions that have *triggered* changes though. If 
upgrading from 2.3.0 to 2.4.1 caused you to need to fix something, test 
something earlier than 2.4.1, and 2.4.1, and then something later. That's what 
I'm advocating for.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] How should ironic and related project names be written?

2016-08-09 Thread Loo, Ruby
Honestly, I don't think it matters what we use in our specifications, since 
specifications are not public documents. Personally, in specifications, I am 
fine with any of the proposed forms because they are all understandable! If we have 
to decide on something, or if we want a preferred way, then I'd suggest 
going with what is more 'official'. By that, I mean we could do what is 
suggested for documentation on docs.openstack.org, as described at [1]: "Use 
lowercase when you refer to project names".

Which leads us to the question you ask, what is a "project name". I think we 
could use the names described in the projects.yaml file [2]. So 'ironic', 
'ironic-inspector', 'ironic-lib', 'ironic-python-agent'.

What do you think?

--ruby

[1] 
http://docs.openstack.org/contributor-guide/writing-style/openstack-components.html
[2] 
https://github.com/openstack/governance/blob/7176a32e158d685d4552d9935982e4981bb79bf7/reference/projects.yaml#L1978


On 2016-08-01, 6:35 AM, "Sam Betts (sambetts)" 
> wrote:

It's official OpenStack policy that project names be written in lower case; for 
example, Ironic must always be written as ironic. However, I was recently writing 
a spec for IPA and was unsure how to approach writing IPA's name in full.
Discussing this with Dmitry on IRC, we decided it would be best brought to a 
wider audience on the ML because this affects any project that includes 
ironic's name in its name.

Ironic Python Agent
ironic Python Agent
ironic python agent
ironic-python-agent

I prefer a capitalised Ironic Python Agent name, because it lines up with the 
way we write the acronym, IPA, and makes it obvious it's a name; however, I'm 
unsure if this aligns with the OpenStack policy. If we need to lower-case the 
whole of IPA's name, then I prefer we refer to it including the dashes, 
so that it is obviously a project name.

A couple of other projects that also use ironic in the name:

Ironic Inspector
ironic Inspector
ironic inspector
Ironic-inspector

Ironic Lib
ironic Lib
ironic lib
ironic-lib

I would like to hear some opinions on whether we should always refer to the 
projects as they are written on git.openstack.org (with dashes), or on which 
of the above styles we allow and prefer.

Sam


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Sean Dague
On 08/09/2016 11:25 AM, Matthew Thode wrote:
> On 08/09/2016 10:22 AM, Ian Cordasco wrote:
>> -Original Message-
>> From: Matthew Thode 
>> Reply: prometheanf...@gentoo.org , OpenStack 
>> Development Mailing List (not for usage questions) 
>> 
>> Date: August 9, 2016 at 09:53:53
>> To: openstack-dev@lists.openstack.org 
>> Subject:  Re: [openstack-dev] [requirements] History lesson please
>>
>>> One of the things on our todo list is to test the 'lower-constraints' to
>>> make sure they still work with the head of branch.
>>
>> That's not sufficient. You need to find versions in between the lowest 
>> tested version and the current version to also make sure you don't end up 
>> with breakage. You might have somepackage that has a lower version of 2.0.1 
>> and a current constraint of 2.12.3. You might even have a blacklist of 
>> versions in between those two versions, but you still need other versions to 
>> ensure that things in between those continue to work.
>>
>> The tiniest of accidental incompatibilities can cause some of the most 
>> bizarre bugs.
>>
>> --  
>> Ian Cordasco
>>
> 
> I'm aware of this, but this would be a good start.

And, more importantly, assuming that testing is only valid if it covers
every scenario, sets the bar at entirely the wrong place.

A lower bound test would eliminate some of the worst fiction we've got.
Testing is never 100%. With a complex system like OpenStack, it's
probably not even 1% (of configs matrix for sure). But picking some
interesting representative scenarios and seeing that it's not completely
busted is worth while.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] nominating thedac for charms-release team

2016-08-09 Thread James Page
On Mon, 8 Aug 2016 at 18:19 Ryan Beisner  wrote:

> Greetings,
>
> I would like to nominate David Ames  for addition to the
> charms-release team, as he has played a valuable role in the charm release
> processes.  This change will grant privileges such as new stable branch
> creation, among other things necessary to facilitate the charm release
> process.
>

+1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] static Portgroup support.

2016-08-09 Thread Jay Pipes

On 08/09/2016 04:28 AM, Vasyl Saienko wrote:

Hello Ironic'ers!

We've recorded demos that show how static portgroup support works at the moment:

Flat network scenario: https://youtu.be/vBlH0ie6Lm4
Multitenant network scenario: https://youtu.be/Kk5Cc_K1tV8


Just watched both the above demo videos. Great job Vasyl and Pavlo :)

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Smaug][Karbor] Smaug is now Karbor

2016-08-09 Thread Saggi Mizrahi
Hi everybody,
As a community we've decided we want to change the name of the project.
We felt that the name wasn't distinctive enough and might cause
confusion when looking for the project.
It was a long process, but we've decided on the name Karbor.

Please bear with us while we move things around. A name change isn't
easy, but it's better to do it now while we're still relatively young and
are just now starting to get more traction from outside sources.

I would like to take this opportunity to thank the whole OpenStack
community for accepting us and supporting the project, and to
reiterate that the name change doesn't change our goals.
-
This email and any files transmitted and/or attachments with it are 
confidential and proprietary information of
Toga Networks Ltd., and intended solely for the use of the individual or entity 
to whom they are addressed.
If you have received this email in error please notify the system manager. This 
message contains confidential
information of Toga Networks Ltd., and is intended only for the individual 
named. If you are not the named
addressee you should not disseminate, distribute or copy this e-mail. Please 
notify the sender immediately
by e-mail if you have received this e-mail by mistake and delete this e-mail 
from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing 
or taking any action in reliance on
the contents of this information is strictly prohibited.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [smaug] voting for the project mascot

2016-08-09 Thread Edward Lee
Is this a secret ballot?

2016-08-09 17:54 GMT+08:00 xiangxinyong :

> Hello guys,
>
> Smaug is voting for the project mascot.
> Please feel free to give your vote:)
> https://etherpad.openstack.org/p/smaugmascot
>
> Thanks very much.
>
> Best Regards,
>   xiangxinyong
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #89

2016-08-09 Thread Emilien Macchi
We did the meeting, you can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-08-09-15.00.html

Thanks,

On Mon, Aug 8, 2016 at 3:07 PM, Emilien Macchi <emil...@redhat.com> wrote:
> If you have any topic for our weekly meeting, please add it here:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160809
>
> See you tomorrow,
>
> On Tue, Aug 2, 2016 at 11:06 AM, Emilien Macchi <emil...@redhat.com> wrote:
>> no item in our agenda, we cancelled the meeting, see you next week!
>>
>> On Mon, Aug 1, 2016 at 3:31 PM, Emilien Macchi <emil...@redhat.com> wrote:
>>> Hi Puppeteers!
>>>
>>> We'll have our weekly meeting tomorrow at 3pm UTC on #openstack-meeting-4.
>>>
>>> Here's a first agenda:
>>> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160802
>>>
>>> Feel free to add topics, and any outstanding bug and patch.
>>>
>>> See you tomorrow!
>>> Thanks,
>>> --
>>> Emilien Macchi
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Matthew Thode
On 08/09/2016 10:22 AM, Ian Cordasco wrote:
> -Original Message-
> From: Matthew Thode 
> Reply: prometheanf...@gentoo.org , OpenStack 
> Development Mailing List (not for usage questions) 
> 
> Date: August 9, 2016 at 09:53:53
> To: openstack-dev@lists.openstack.org 
> Subject:  Re: [openstack-dev] [requirements] History lesson please
> 
>> One of the things on our todo list is to test the 'lower-constraints' to
>> make sure they still work with the head of branch.
> 
> That's not sufficient. You need to find versions in between the lowest tested 
> version and the current version to also make sure you don't end up with 
> breakage. You might have somepackage that has a lower version of 2.0.1 and a 
> current constraint of 2.12.3. You might even have a blacklist of versions in 
> between those two versions, but you still need other versions to ensure that 
> things in between those continue to work.
> 
> The tiniest of accidental incompatibilities can cause some of the most 
> bizarre bugs.
> 
> --  
> Ian Cordasco
> 

I'm aware of this, but this would be a good start.

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Ian Cordasco
-Original Message-
From: Matthew Thode 
Reply: prometheanf...@gentoo.org , OpenStack 
Development Mailing List (not for usage questions) 

Date: August 9, 2016 at 09:53:53
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [requirements] History lesson please

> One of the things on our todo list is to test the 'lower-constraints' to
> make sure they still work with the head of branch.

That's not sufficient. You need to find versions in between the lowest tested 
version and the current version to also make sure you don't end up with 
breakage. You might have somepackage that has a lower version of 2.0.1 and a 
current constraint of 2.12.3. You might even have a blacklist of versions in 
between those two versions, but you still need other versions to ensure that 
things in between those continue to work.

The tiniest of accidental incompatibilities can cause some of the most bizarre 
bugs.
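
Something like this is what I have in mind (a rough sketch with the packaging
library; the bounds, the blacklisted release, and the version list are all
made up):

    from packaging.specifiers import SpecifierSet

    # somepackage: lower bound 2.0.1, current constraint 2.12.3, and one
    # known-bad release blacklisted in between.
    allowed = SpecifierSet(">=2.0.1,!=2.7.0,<=2.12.3")

    # Released versions you might pull from PyPI; hard-coded here for brevity.
    released = ["2.0.1", "2.3.0", "2.4.1", "2.7.0", "2.9.2", "2.12.3"]

    # The versions worth exercising in CI are spread across this range, not
    # just the lowest bound and the newest pin.
    print(list(allowed.filter(released)))
    # ['2.0.1', '2.3.0', '2.4.1', '2.9.2', '2.12.3']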

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2016-08-09 16:38:35 +1000:
> Hi all,
> I guess this is aimed at the long term requirements team members.
> 
> The current policy for approving requirements[1] bumps contains the following 
> text:
> 
> Changes to update the minimum version of a library developed by the
> OpenStack community can be approved by one reviewer, as long as the
> constraints are correct and the tests pass.
> 
> Perhaps I'm a little risk averse, but this seems a little strange to me.  Can
> folks that know more about how this came about help me understand why that is?
> 
> Yours Tony.
> 
> [1] 
> https://github.com/openstack/requirements/blob/master/README.rst#for-upgrading-requirements-versions

It's limited to libraries we maintain, so that lowers the risk some. But
as Sean pointed out, it's riskier to assume the older releases continue
to work because typically a request to raise a minimum version means an
app wants to depend on a feature that appears in the newer version.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-09 Thread Steven Dake (stdake)
Well don't let me stop ya :)

On 8/9/16, 7:21 AM, "Ian Cordasco"  wrote:

> 
>
>-Original Message-
>From: Steven Dake (stdake) 
>Reply: OpenStack Development Mailing List (not for usage questions)
>
>Date: August 6, 2016 at 07:48:01
>To: OpenStack Development Mailing List (not for usage questions)
>
>Subject:  Re: [openstack-dev]
>[Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project
>
>> Ian,
>>  
>> I value your input, but concern still stands. Amazon's compute API
>>moves slowly in comparison
>> to Docker registry's API. Making a parity implementation to the Docker
>>v2 registry API  
>> is a complex and difficult challenge. It is much more significant than
>>simply making  
>> an API. An implementation needs to stand behind that API.
>
>I don't disagree. I just have a higher opinion of the community's ability
>to achieve this goal and use Glare as the backend.
>
>>  
>> From: Ian Cordasco >
>> Reply-To: "OpenStack Development Mailing List (not for usage
>>questions)" >  
>> Date: Saturday, August 6, 2016 at 4:52 AM
>> To: "OpenStack Development Mailing List (not for usage questions)" >
>> Subject: Re: [openstack-dev]
>>[Glance][TC][Heat][App-Catalog][Murano][Tacker]
>> Glare as a new Project
>>  
>>  
>> However, interested parties could start a project like the ec2 project
>>that is independently
>> released and provides that compatibility using glare
>>  
>> On Aug 6, 2016 5:18 AM, "Steven Dake (stdake)" >
>> wrote:
>> Kevin,
>>  
>> Agree it would be a very useful feature, however, easier said than
>>done. Part of Docker's
>> approach is to "move fast"; they schedule releases every 2 months. I'm
>>sure the glare  
>> team is quite competent, however, keeping API parity on such a fast
>>moving project such
>> as the docker registry API is a big objective not to be undertaken
>>lightly. If there isn't
>> complete API parity with the docker registry v2 API, the work wouldn't
>>be particularly  
>> useful to many in the container communities inside OpenStack as Hongbin
>>pointed out.  
>>  
>> Regards
>> -steve
>>  
>> From: "Fox, Kevin M" >
>> Reply-To: "OpenStack Development Mailing List (not for usage
>>questions)" >  
>> Date: Friday, ...
>>
>> ... comparison between Image API and Artifact API is not
>> correct. Moreover, in my opinion Image API imposes artificial
>>constraints. Just imagine
>> that your file system can only store images in JPG format (more
>>precisely, it could store
>> any data, but it is imperative that all files must have the extension
>>".jpg"). Likewise
>> Glance - I can put there any data, it can be both packages and
>>templates, as well as video
>> from my holiday. And this interface, though not ideal, may not work for
>>all services.  
>> But those artificial limitations that have been created make Glance
>>uncomfortable even
>> for storing images.
>>  
>> On the other hand Glare provides unified interface for all possible
>>binary data types.
>> If we take the example with filesystem, in Glare's case it supports all
>>file extensions, 
>> folders, history of file changes on your disk, data validation and
>>conversion, import/export
>> files from different computers and so on. These features are not
>>presented in Glance
>> and I think they never will, because of deficiencies in the
>>architecture.
>>  
>> For this reason I think Glare's adoption is important and it will be a
>>huge step forward
>> for OpenStack and the whole community.
>>  
>> Thanks again! If you want to support us, please vote for our talk on
>>Barcelona summit -
>> https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/
>>Search  
>> "Glare" and there will be our presentation.
>>  
>> Best,
>> Mike
>>  
>> On Fri, Aug 5, 2016 at 5:22 PM, Jonathan D. Proulx >
>> wrote:
>>  
>> I don't have a strong opinion on the split vs stay discussion. It
>> does seem there's been sustained if ineffective attempts to keep this
>> together so I lean toward supporting the divorce.
>>  
>> But let's not pretend there are no costs for this.
>>  
>> On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
>> :On 08/04/2016 06:40 PM, Clint Byrum wrote:
>>  
>> :>But, if I look at this from a user's ...
>> :>confusing.
>> :
>> :Actually, I beg to differ. A unified OpenStack Artifacts API,
>> :long-term, will be more user-friendly and less confusing since a
>> :single API can be used for various kinds of similar artifacts --
>> :images, Heat templates, Tosca flows, Murano app manifests, maybe
>> :Solum things, maybe eventually Nova flavor-like things, etc.
>>  
>> The confusion is the current state of two API's, not having a future
>> integrated API.
>>  
>> Remember how well that served us with nova-network and neutron (né
>> quantum).
>>  
>> I also agree with 

Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Matthew Thode
On 08/09/2016 09:25 AM, Ian Cordasco wrote:
>  
> 
> -Original Message-
> From: Sean Dague 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> 
> Date: August 9, 2016 at 05:44:55
> To: openstack-dev@lists.openstack.org 
> Subject:  Re: [openstack-dev] [requirements] History lesson please
> 
>> On 08/09/2016 02:38 AM, Tony Breeds wrote:
>>> Hi all,
>>> I guess this is aimed at the long term requirements team members.
>>>
>>> The current policy for approving requirements[1] bumps contains the 
>>> following text:  
>>>
>>> Changes to update the minimum version of a library developed by the
>>> OpenStack community can be approved by one reviewer, as long as the
>>> constraints are correct and the tests pass.
>>>
>>> Perhaps I'm a little risk averse but this seems a little strange to me. Can
>>> folks that know more about how this came about help me understand why that 
>>> is?
>>>
>>> Yours Tony.
>>>
>>> [1] 
>>> https://github.com/openstack/requirements/blob/master/README.rst#for-upgrading-requirements-versions
>>>   
>>  
>> With constraints, the requirements minimum bump is pretty low risk. Very
>> little of our jobs are impacted by it.
>>  
>> It's in many ways riskier to leave minimums where they are and bump
>> constraints, because the minimums could be lying that they still work at
>> the lower level.
>>  
>> -Sean
> 
> I maintain a few libraries outside of OpenStack that have generous lower 
> limits and testing them is resource intensive both as a developer and in 
> continuous integration. I'd love to see OpenStack be *more* aggressive about 
> the oldest version it supports because in most cases I severely distrust the 
> version ranges we use. I do recognize, however, that we have to coordinate 
> with some distributions that will not update their packaged versions (which 
> are often an old version number with security patches poorly cherry-picked). 
> So you may need to coordinate with them before bumping version minimums as 
> well.
> 
> --  
> Ian Cordasco
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

One of the things on our todo list is to test the 'lower-constraints' to
make sure they still work with the head of branch.
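
Roughly, such a check could be a dedicated tox environment that installs the
declared minimums instead of upper-constraints and then runs the normal test
suite against them -- something along these lines (a sketch only; the real job
would be defined in project-config, and lower-constraints.txt would have to be
generated from the requirements minimums):

    [testenv:lower-constraints]
    deps =
        -c{toxinidir}/lower-constraints.txt
        -r{toxinidir}/test-requirements.txt
        -r{toxinidir}/requirements.txt
    commands = {[testenv]commands}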

-- 
-- Matthew Thode (prometheanfire)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-09 Thread Ian Cordasco
 

-Original Message-
From: Steven Dake (stdake) 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: August 6, 2016 at 07:48:01
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

> Ian,
>  
> I value your input, but concern still stands. Amazon's compute API moves 
> slowly in comparison  
> to Docker registry's API. Making a parity implementation to the Docker v2 
> registry API  
> is a complex and difficult challenge. It is much more significant than simply 
> making  
> an API. An implementation needs to stand behind that API.

I don't disagree. I just have a higher opinion of the community's ability to 
achieve this goal and use Glare as the backend.

>  
> From: Ian Cordasco >  
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" >  
> Date: Saturday, August 6, 2016 at 4:52 AM
> To: "OpenStack Development Mailing List (not for usage questions)" >  
> Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker]  
> Glare as a new Project
>  
>  
> However, interested parties could start a project like the ec2 project that 
> is independently  
> released and provides that compatibility using glare
>  
> On Aug 6, 2016 5:18 AM, "Steven Dake (stdake)" >  
> wrote:
> Kevin,
>  
> Agree it would be a very useful feature, however, easier said than done. Part 
> of Docker's  
> approach is to "move fast"; they schedule releases every 2 months. I'm sure 
> the glare  
> team is quite competent, however, keeping API parity on such a fast moving 
> project such  
> as the docker registry API is a big objective not to be undertaken lightly. 
> If there isn't  
> complete API parity with the docker registry v2 API, the work wouldn't be 
> particularly  
> useful to many in the container communities inside OpenStack as Hongbin 
> pointed out.  
>  
> Regards
> -steve
>  
> From: "Fox, Kevin M" >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" >  
> Date: Friday, ...
>
> ... comparison between Image API and Artifact API is not  
> correct. Moreover, in my opinion Image API imposes artificial constraints. 
> Just imagine  
> that your file system can only store images in JPG format (more precisely, it 
> could store  
> any data, but it is imperative that all files must have the extension 
> ".jpg"). Likewise  
> Glance - I can put there any data, it can be both packages and templates, as 
> well as video  
> from my holiday. And this interface, though not ideal, may not work for all 
> services.  
> But those artificial limitations that have been created make Glance 
> uncomfortable even  
> for storing images.
>  
> On the other hand Glare provides unified interface for all possible binary 
> data types.  
> If we take the example with filesystem, in Glare's case it supports all file 
> extensions,  
> folders, history of file changes on your disk, data validation and 
> conversion, import/export  
> files from different computers and so on. These features are not presented in 
> Glance  
> and I think they never will, because of deficiencies in the architecture.
>  
> For this reason I think Glare's adoption is important and it will be a huge 
> step forward  
> for OpenStack and the whole community.
>  
> Thanks again! If you want to support us, please vote for our talk on 
> Barcelona summit -  
> https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/ Search  
> "Glare" and there will be our presentation.
>  
> Best,
> Mike
>  
> On Fri, Aug 5, 2016 at 5:22 PM, Jonathan D. Proulx >  
> wrote:
>  
> I don't have a strong opinion on the split vs stay discussion. It
> does seem there's been sustained if ineffective attempts to keep this
> together so I lean toward supporting the divorce.
>  
> But let's not pretend there are no costs for this.
>  
> On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
> :On 08/04/2016 06:40 PM, Clint Byrum wrote:
>  
> :>But, if I look at this from a user's ...
> :>confusing.
> :
> :Actually, I beg to differ. A unified OpenStack Artifacts API,
> :long-term, will be more user-friendly and less confusing since a
> :single API can be used for various kinds of similar artifacts --
> :images, Heat templates, Tosca flows, Murano app manifests, maybe
> :Solum things, maybe eventually Nova flavor-like things, etc.
>  
> The confusion is the current state of two API's, not having a future
> integrated API.
>  
> Remember how well that served us with nova-network and neutron (né
> quantum).
>  
> I also agree with Tim's point. Yes if a new project is fully
> documented and integrated well into packaging and config management
> implementing it is trivial, 

Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Ian Cordasco
 

-Original Message-
From: Sean Dague 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: August 9, 2016 at 05:44:55
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [requirements] History lesson please

> On 08/09/2016 02:38 AM, Tony Breeds wrote:
> > Hi all,
> > I guess this is aimed at the long term requirements team members.
> >
> > The current policy for approving requirements[1] bumps contains the 
> > following text:  
> >
> > Changes to update the minimum version of a library developed by the
> > OpenStack community can be approved by one reviewer, as long as the
> > constraints are correct and the tests pass.
> >
> > Perhaps I'm a little risk averse but this seems a little strange to me. Can
> > folks that know more about how this came about help me understand why that 
> > is?
> >
> > Yours Tony.
> >
> > [1] 
> > https://github.com/openstack/requirements/blob/master/README.rst#for-upgrading-requirements-versions
> >   
>  
> With constraints, the requirements minimum bump is pretty low risk. Very
> little of our jobs are impacted by it.
>  
> It's in many ways riskier to leave minimums where they are and bump
> constraints, because the minimums could be lying that they still work at
> the lower level.
>  
> -Sean

I maintain a few libraries outside of OpenStack that have generous lower limits 
and testing them is resource intensive both as a developer and in continuous 
integration. I'd love to see OpenStack be *more* aggressive about the oldest 
version it supports because in most cases I severely distrust the version 
ranges we use. I do recognize, however, that we have to coordinate with some 
distributions that will not update their packaged versions (which are often an 
old version number with security patches poorly cherry-picked). So you may need 
to coordinate with them before bumping version minimums as well.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][nova] os_vif 1.1.0 release (newton)

2016-08-09 Thread no-reply
We are pleased to announce the release of:

os_vif 1.1.0: A library for plugging and unplugging virtual interfaces
in OpenStack.

This release is part of the newton release series.

With package available at:

https://pypi.python.org/pypi/os_vif

For more details, please see below.

1.1.0
^^^^^


New Features
************

* The ovs plugin has been extended to support vhost-user interfaces.
  vhost-user is a userspace protocol for high speed virtual networking
  introduced in qemu 2.1 and first supported in ovs 2.4 with dpdk 2.0

* The ovs plugin has been modified to ensure that the specified OVS
  bridge that the vif will be attached to has been created.  If the
  OVS bridge does not exist, it will be created with the proper
  datapath_type.


Bug Fixes
*********

* The OpenVSwitch plugin was registered with an entrypoint name of
  "ovs", but its describe method mistakenly reported that its name was
  "ovs_hybrid". The latter has been fixed to match the registered
  name.

* os-vif plugins were previously incorrectly registered in both the
  setup.py and setup.cfg. All plugin registrations have been removed
  from the setup.py as they were not used and may have blocked
  registration of out of tree plugins.

* The ovs plugin now handles vifs of type VIFOpenVSwitch properly.
  Before, it would improperly create an extraneous linux bridge and
  veth pair attached to the target OVS bridge.

Changes in os_vif 1.0.0..1.1.0
------------------------------

c16a74b Simplified if statement
4da0ab1 Updated from global requirements
d9c72d2 revert removal of create_ovs_vif_port timeout
3d62d8e Ensure the OVS bridge exists when plugging
d5b119b Don't create extraneous linux bridge/veth pair for VIFOpenVSwitch
2eb892c Updated from global requirements
2c96373 ovs: Avoids setting MTU if MTU is None or 0
8adde2f os_vif: fix logging of exceptions during plug/unplug
632db77 vif_plug_ovs: clarify that the plugin was not in fact renamed
d78a33e os_vif: add logging for each plugin that is loaded
8b72ece os_vif: register objects before loading plugins
8c58816 Add support for vhost-user
65ae52b This change renames the ovs plugin
8354942 Updated from global requirements
4cacdaa remove unused entrypoints


Diffstat (except docs and test files)
-------------------------------------

os_vif/__init__.py |  17 ++--
os_vif/objects/host_info.py|   2 +-
...add-ovs-vhostuser-support-2ba8de51c1f3a244.yaml |   6 ++
.../notes/ensure-ovs-bridge-a0c1b51f469c92d0.yaml  |   7 ++
.../fix-ovs-plugin-describe-049750609559f1ba.yaml  |   6 ++
...fix-stevedore-entrypoints-8002ec7a5166c977.yaml |   6 ++
.../fix-vif-openvswitch-fa0d19be9dd668e1.yaml  |   6 ++
requirements.txt   |  18 ++--
setup.py   |  16 +--
test-requirements.txt  |  22 ++--
vif_plug_ovs/constants.py  |  16 +++
vif_plug_ovs/linux_net.py  |  50 +++--
vif_plug_ovs/ovs.py|  48 +++--
15 files changed, 350 insertions(+), 67 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 9b22978..896b7d9 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,9 +5,9 @@
-pbr>=1.6
-netaddr>=0.7.12,!=0.7.16
-oslo.config>=3.4.0 # Apache-2.0
-oslo.log>=1.14.0  # Apache-2.0
-oslo.i18n>=1.5.0  # Apache-2.0
-oslo.privsep>=1.3.0  # Apache-2.0
-oslo.versionedobjects>=0.13.0
-six>=1.9.0
-stevedore>=1.5.0  # Apache-2.0
+pbr>=1.6 # Apache-2.0
+netaddr!=0.7.16,>=0.7.12 # BSD
+oslo.config>=3.10.0 # Apache-2.0
+oslo.log>=1.14.0 # Apache-2.0
+oslo.i18n>=2.1.0 # Apache-2.0
+oslo.privsep>=1.9.0 # Apache-2.0
+oslo.versionedobjects>=1.9.1 # Apache-2.0
+six>=1.9.0 # MIT
+stevedore>=1.10.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 3f5d210..ab729b7 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5,11 +5,11 @@
-hacking>=0.10.2,<0.11
-coverage>=3.6
-discover
-python-subunit>=0.0.18
-reno>=1.6.2 # Apache2
-sphinx>=1.1.2,!=1.2.0,!=1.3b1,<1.3
-oslosphinx>=2.5.0,!=3.4.0  # Apache-2.0
-oslotest>=1.10.0  # Apache-2.0
-testrepository>=0.0.18
-testscenarios>=0.4
-testtools>=1.4.0
+hacking<0.11,>=0.10.2
+coverage>=3.6 # Apache-2.0
+discover # BSD
+python-subunit>=0.0.18 # Apache-2.0/BSD
+reno>=1.8.0 # Apache2
+sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
+oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+oslotest>=1.10.0 # Apache-2.0
+testrepository>=0.0.18 # Apache-2.0/BSD
+testscenarios>=0.4 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][nova][SR-IOV] deprecation of supported_pci_vendor_devs

2016-08-09 Thread Sergey Nikitin
+1
I think we have enough checks in Nova.

2016-08-09 12:46 GMT+03:00 Moshe Levi :

> This is the deprecation patch [2]
>
> [2] - https://review.openstack.org/#/c/352812/
>
> -Original Message-
> From: Moshe Levi [mailto:mosh...@mellanox.com]
> Sent: Monday, August 08, 2016 3:43 PM
> To: OpenStack Development Mailing List (not for usage questions) (
> openstack-dev@lists.openstack.org) 
> Subject: [openstack-dev] [neutron][nova][SR-IOV] deprecation of
> supported_pci_vendor_devs
>
> Hi all,
>
> To reduce complexity in configuring SR-IOV I want to deprecate the
> supported_pci_vendor_devs option [1] in the neutron-server ml2 config.
> This option is doing extra validation that the pci vendor id and product id
> provided by nova in the neutron port binding profile match the
> vendor id and product id in supported_pci_vendor_devs.
>
> In my opinion this is redundant; nova-scheduler is the place to do
> validation and select a suitable hypervisor.
> The compute node is already validating this through the
> pci_passthrough_whitelist option in nova.conf [2].
>
> I don't see a reason why the neutron-server should validate the pci
> vendor_id and product_id again from the neutron port binding profile.
>
> If there is good reason to keep it please let me know, otherwise I will
> deprecate it.
>
> [1] - supported_pci_vendor_devs = ['15b3:1004', '8086:10ca'] [2] -
> pci_passthrough_whitelist = {"address":"*:06:00.*","
> physical_network":"physnet1"}
>
>
> Thanks,
> Moshe
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] tripleo-common bugs, bug tracking and launchpad tags

2016-08-09 Thread Ryan Brady
On Wed, Jul 27, 2016 at 9:06 AM, Dougal Matthews  wrote:

>
>
> On 19 July 2016 at 16:20, Steven Hardy  wrote:
>
>> On Mon, Jul 18, 2016 at 12:28:10PM +0100, Julie Pichon wrote:
>> > Hi,
>> >
>> > On Friday Dougal mentioned on IRC that he hadn't realised there was a
>> > separate project for tripleo-common bugs on Launchpad [1] and that he'd
>> > been using the TripleO main tracker [2] instead.
>> >
>> > Since the TripleO tracker is also used for client bugs (as far as I can
>> > tell?), and there doesn't seem to be a huge amount of tripleo-common
>> > bugs perhaps it would make sense to also track those in the main
>> > tracker? If there is a previous conversation or document about bug
>> > triaging beyond [3] I apologise for missing it (and would love a
>> > URL!). At the moment it's a bit confusing.
>>
>> Thanks for raising this, yes there is a bit of a proliferation of LP
>> projects, but FWIW the only one I'm using to track coordinated milestone
>> releases for Newton is this one:
>>
>> https://launchpad.net/tripleo/
>>
>> > If we do encourage using the same bug tracker for multiple components,
>> > I think it would be useful to curate a list of official tags [4]. The
>> > main advantage of doing that is that the tags will auto-complete so
>> > it'd be easier to keep them consistent (and thus actually useful).
>>
>> +1 I'm fine with adding tags, but I would prefer that we stopped adding
>> more LP projects unless the associated repos aren't planned to be part of
>> the coordinated release (e.g I don't have to track them ;)
>>
>> > Personally, I wanted to look through open bugs against
>> > python-tripleoclient but people use different ways of marking them at
>> > the moment - e.g. [tripleoclient] or [python-tripleoclient] or
>> > tripleoclient (or nothing?) in the bug name. I tried my luck at adding
>> > a 'tripleoclient' tag [5] to the obvious ones as an example. Maybe
>> > something shorter like 'cli', 'common' would make more sense. If there
>> > are other tags that come back regularly it'd probably be helpful to
>> > list them explicitly as well.
>>
>> Sure, well I know that many python-*clients do have separate LP projects,
>> but in the case of TripleO our client is quite highly coupled to the the
>> other TripleO pieces, in particular tripleo-common.  So my vote is to
>> create some tags in the main tripleo project and use that to filter bugs
>> as
>> needed.
>>
>> There are two projects we might consider removing, tripleo-common, which
>> looks pretty much unused and tripleo-validations which was recently added
>> by the sub-team working on validations.
>>
>
> I agree with retiring these and I'd also like to add tripleo-workflows to
> the
> list for consideration, it has been created but hasn't yet been used as
> far
> as I can tell.
>

Shortly after I started creating this, you were very vocal about being -1
to this and so we did not use it.  I haven't had time to delete it yet, but
it's on my task list after the N-3 items.


>
> Sorry for the late reply. I'm glad this was brought up, it was on my mental
> todo list. It should make things clearer internally and also for users
> less
> familiar with the project that want to report bugs.
>
>
> If folks find either useful then they can stay, but it's going to be easier
>> to get a clear view on when to cut a release if we track everything
>> considered part of the tripleo deliverable in one place IMHO.
>>
>> Thanks,
>>
>> Steve
>>
>> >
>> > Julie
>> >
>> > [1] https://bugs.launchpad.net/tripleo-common
>> > [2] https://bugs.launchpad.net/tripleo
>> > [3] https://wiki.openstack.org/wiki/TripleO#Bug_Triage
>> > [4] https://wiki.openstack.org/wiki/Bug_Tags
>> > [5] https://bugs.launchpad.net/tripleo?field.tag=tripleoclient
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Steve Hardy
>> Red Hat Engineering, Cloud
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ryan Brady
Cloud Engineering
rbr...@redhat.com
919.890.8925
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [octavia] Multi-node controller testing

2016-08-09 Thread Miguel Angel Ajo Pelayo
On Mon, Aug 8, 2016 at 4:56 PM, Kosnik, Lubosz  wrote:
> Great work with that multi-node setup Miguel.

Thanks, I have to get my hands dirtier with octavia, it's just a tiny thing.

> About multinode: infra supports a two-node setup, currently used by 
> grenade jobs, but in my opinion we don’t have any tests which can cover that 
> type of testing. We’re still struggling with selecting a proper tool to test 
> Octavia from an integration/functional perspective, so it’s probably too early to 
> make it happen.


Well, any current tests we run should pass equally well with a multi-node
controller, and that's the point: regardless of the
deployment architecture, the behaviour shall not change at all. We may
not need any specific test.


> Maybe it’s a great start to finally make some decision about testing tools, and 
> there will be a lot of work for you after that as well, with setting up an infra 
> multi-node job for that.

I'm not fully aware of what we are running today for octavia, so if
you can give me some pointers about where those jobs are configured
and what they target, it would be a start so I can provide feedback.

What are the current options/tools we're considering?


>
> Cheers,
> Lubosz Kosnik
> Cloud Software Engineer OSIC
> lubosz.kos...@intel.com
>
>> On Aug 8, 2016, at 7:04 AM, Miguel Angel Ajo Pelayo  
>> wrote:
>>
>> Recently, I sent a series of patches [1] to make it easier for
>> developers to deploy a multi node octavia controller with
>> n_controllers x [api, cw, hm, hk] with an haproxy in front of the API.
>>
>> Since this is the way the service is designed to work (with horizontal
>> scalability in mind), and we want to have a good guarantee that any
>> bug related to such configuration is found early, and addressed, I was
>> thinking that an extra job that runs a two node controller deployment
>> could be beneficial for the project.
>>
>>
>> If we all believe it makes sense, I would be willing to take on this
>> work but I'd probably need some pointers and light help, since I've
>> never dealt with setting up or modifying existing jobs.
>>
>> How does this sound?
>>
>>
>> [1] 
>> https://review.openstack.org/#/q/status:merged+project:openstack/octavia+branch:master+topic:multinode-devstack
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] Multi-node controller testing

2016-08-09 Thread Miguel Angel Ajo Pelayo
Thank you!! :)

On Mon, Aug 8, 2016 at 5:49 PM, Michael Johnson  wrote:
> Miguel,
>
> Thank you for your work here.  I would support an effort to setup a
> multi-node gate job.
>
> Michael
>
>
> On Mon, Aug 8, 2016 at 5:04 AM, Miguel Angel Ajo Pelayo
>  wrote:
>> Recently, I sent a series of patches [1] to make it easier for
>> developers to deploy a multi node octavia controller with
>> n_controllers x [api, cw, hm, hk] with an haproxy in front of the API.
>>
>> Since this is the way the service is designed to work (with horizontal
>> scalability in mind), and we want to have a good guarantee that any
>> bug related to such configuration is found early, and addressed, I was
>> thinking that an extra job that runs a two node controller deployment
>> could be beneficial for the project.
>>
>>
>> If we all believe it makes sense, I would be willing to take on this
>> work but I'd probably need some pointers and light help, since I've
>> never dealt with setting up or modifying existing jobs.
>>
>> How does this sound?
>>
>>
>> [1] 
>> https://review.openstack.org/#/q/status:merged+project:openstack/octavia+branch:master+topic:multinode-devstack
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-09 Thread Mooney, Sean K

> -Original Message-
> From: kostiantyn.volenbovs...@swisscom.com
> [mailto:kostiantyn.volenbovs...@swisscom.com]
> Sent: Tuesday, August 9, 2016 12:58 PM
> To: openstack-dev@lists.openstack.org; Mooney, Sean K
> 
> Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack
> security group driver with ovs-dpdk
> 
> Hi,
> (sorry for using incorrect threading)
> 
> > > About 2 weeks ago I did some light testing with the conntrack
> > > security group driver and the newly
> > >
> > > Merged userspace conntrack support in ovs.
> > >
> By 'recently' - whether you mean patch v4
> http://openvswitch.org/pipermail/dev/2016-June/072700.html
> or you used OVS 2.5 itself (which I think includes v2 of the same patch
> series)?
[Mooney, Sean K] I used 
http://openvswitch.org/pipermail/dev/2016-June/072700.html, or more specifically
I used the following commit 
https://github.com/openvswitch/ovs/commit/0c87efe4b5017de4c5ae99e7b9c36e8a6e846669
which is just after userspace conntrack was merged.
> 
> So in general - I am a bit confused about conntrack support in OVS.
> 
> OVS 2.5 release notes http://openvswitch.org/pipermail/announce/2016-
> February/81.html state:
> "This release includes the highly anticipated support for connection
> tracking in the Linux kernel.  This feature makes it possible to
> implement stateful firewalls and will be the basis for future stateful
> features such as NAT and load-balancing.  Work is underway to bring
> connection tracking to the userspace datapath (used by DPDK) and the
> port to Hyper-V."  - in the way that 'work is underway' (=work is
> ongoing) means that a time of OVS 2.5 release the feature was not
> 'classified' as ready?
[Mooney, Sean K] 
In ovs 2.5 only linux kernel conntrack was supported, assuming you had a
4.x kernel that supported it. That means that the feature was not available on 
bsd, windows or with dpdk.

In the upcoming ovs 2.6 release conntrack support has been added to the 
netdev datapath which is used with dpdk and on bsd. As far as I am aware 
windows conntrack support is still missing, but I may be wrong.

If you are interested, the devstack local.conf I used to test that it functioned 
is available here:
http://paste.openstack.org/show/552434/

I used an OpenStack vm running Ubuntu 16.04 with 2 e1000 interfaces to do the 
testing.
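
For anyone who wants to reproduce it, the relevant part is simply selecting the
native conntrack-based firewall driver in the ovs agent config; a minimal sketch
(the rest of the ovs-dpdk setup is as per the local.conf above):

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini
    [securitygroup]
    enable_security_group = True
    # 'openvswitch' selects the native ovs firewall driver, which relies on
    # conntrack support in the datapath (kernel or, with ovs 2.6, userspace).
    firewall_driver = openvswitch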


> 
> 
> BR,
> Konstantin
> 
> 
> 
> > On Sat, Aug 6, 2016 at 8:16 PM, Mooney, Sean K
> 
> > wrote:
> > > Hi just a quick fyi,
> > >
> > > About 2 weeks ago I did some light testing with the conntrack
> security
> > > group driver and the newly
> > >
> > > Merged userspace conntrack support in ovs.
> > >
> > >
> > >
> > > I can confirm that at least from my initial smoke tests where I
> > >
> > > used netcat, ping and ssh to try and establish connections between
> two
> > > vms the
> > >
> > > Conntrack security group driver appears to function correctly with
> the
> > > userspace connection tracker.
> > >
> > >
> > >
> > > We have not looked at any of the performance yet but assuming it is
> at
> > > an acceptable level I am planning to
> > >
> > > Deprecate the learn action based driver in networking-ovs-dpdk and
> > > remove it once  we have cut the stable newton
> > >
> > > Branch.
> > >
> > >
> > >
> > > We hope to do some rfc 2544 throughput testing to evaluate the
> > > performance sometime mid-September.
> > >
> > > Assuming all goes well I plan on enabling the conntrack based
> security
> > > group driver by default when the
> > >
> > > Networking-ovs-dpdk devstack plugin is loaded. We will also
> evaluate
> > > enabling the security group tests
> > >
> > > In our third party ci to ensure it continues to function correctly
> > > with ovs-dpdk.
> > >
> > >
> > >
> > > Regards
> > >
> > > Seán
> > >
> > >
> > >
> > >
> > >
> > _
> > _
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > _
> > _
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-09 Thread Kostiantyn.Volenbovskyi
Hi,
(sorry for using incorrect threading)

> > About 2 weeks ago I did some light testing with the conntrack security
> > group driver and the newly
> >
> > Merged userspace conntrack support in ovs.
> >
By 'recently' - do you mean patch v4 
http://openvswitch.org/pipermail/dev/2016-June/072700.html
or did you use OVS 2.5 itself (which I think includes v2 of the same patch series)?

So in general - I am a bit confused about conntrack support in OVS.

OVS 2.5 release notes 
http://openvswitch.org/pipermail/announce/2016-February/81.html state:
"This release includes the highly anticipated support for connection tracking 
in the Linux kernel.  This feature makes it possible to implement stateful 
firewalls and will be the basis for future stateful features such as NAT and 
load-balancing.  Work is underway to bring connection tracking to the userspace 
datapath (used by DPDK) and the port to Hyper-V."  - in the way that 'work is 
underway' (=work is ongoing) means that a time of OVS 2.5 release the feature 
was not 'classified' as ready?
 

BR, 
Konstantin



> On Sat, Aug 6, 2016 at 8:16 PM, Mooney, Sean K 
> wrote:
> > Hi just a quick fyi,
> >
> > About 2 weeks ago I did some light testing with the conntrack security
> > group driver and the newly
> >
> > Merged userspace conntrack support in ovs.
> >
> >
> >
> > I can confirm that at least from my initial smoke tests where I
> >
> > used netcat, ping and ssh to try and establish connections between two
> > vms the
> >
> > Conntrack security group driver appears to function correctly with the
> > userspace connection tracker.
> >
> >
> >
> > We have not looked at any of the performance yet but assuming it is at
> > an acceptable level I am planning to
> >
> > Deprecate the learn action based driver in networking-ovs-dpdk and
> > remove it once  we have cut the stable newton
> >
> > Branch.
> >
> >
> >
> > We hope to do some rfc 2544 throughput testing to evaluate the
> > performance sometime mid-September.
> >
> > Assuming all goes well I plan on enabling the conntrack based security
> > group driver by default when the
> >
> > Networking-ovs-dpdk devstack plugin is loaded. We will also evaluate
> > enabling the security group tests
> >
> > In our third party ci to ensure it continues to function correctly
> > with ovs-dpdk.
> >
> >
> >
> > Regards
> >
> > Seán
> >
> >
> >
> >
> >
> _
> _
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] spec for datasource

2016-08-09 Thread Yujun Zhang
Hi Ifat,

Thank you for the answers, please see my reply inline.

On Tue, Aug 9, 2016 at 6:51 PM Afek, Ifat (Nokia - IL) 
wrote:

> Hi Yujun,
>
> Please see my answers below.
>
> Best Regards,
> Ifat.
>
> From: Yujun Zhang
> Date: Tuesday, 9 August 2016 at 12:06
>
>
> For proprietary datasources, I'm considering adapting the api to common
> protocols/interfaces, e.g. RESTful, SNMP, etc., and I wish to know how to
> add support for these interfaces.
>
> [Ifat]: Do you mean that you want to write a datasource that gets its
> information from SNMP/REST API? Note that for alarm datasources, we have in
> our roadmap to support OPNFV Doctor SB REST API[1]. Will it be relevant for
> your use cases?
>
[yujunz] Great, this could also be a good starting point for us

> Some documents are found in https://github.com/openstack/vitrage-specs and
>> datasource seems to be related to synchronizer but I didn't find a
>> dedicated spec.
>>
> [Ifat]: We started documenting the process of adding a new datasource, but
> the document is not final. We will try to finish it shortly. BTW, there are
> many other documents in Vitrage wiki page[2].
>
[yujunz] Thanks.

> 1. How do I register a new datasource in an existing system?
>>
> [yujunz] It seems to be in
https://github.com/openstack/vitrage/blob/master/vitrage/datasources/__init__.py


> 2. Is the type of datasource (ALARM/RESOURCE) configured in
>> `/etc/vitrage/datasource_values/.yaml` ?
>>
> [Ifat]: No, it is configured in the datasource code. For information about
> datasource_values please see [3]
>
[yujunz] which code file? I found `category: RESOURCE` in the datasource
configuration file as
https://github.com/openstack/vitrage/blob/master/doc/source/resource-state-config.rst#format

> 3. Is there any other datasource type besides ALARM/RESOURCE?
>
> [Ifat]: No, and at the moment we don’t see a need for that. Vitrage can
> hold resources of any type in a topology graph, and manage alarms that are
> raised on these resources. If you see a use case for other datasource
> types, let us know.
>
[yujunz]  I agree. Just asking to confirm my guess :-)

> 4. What does `aggregated values` and `priority` mean
>>
> [Ifat]: Detailed in [3]
>
[yujunz]: Clear now.

> 5. What is the required data format for the datasource driver api? The
>> inline comments give some brief description but didn't specify the api
>> signature.
>> Many thanks.
>>
> [Ifat]: This should be part of the datasource documentation that we need
> to add. But basically, the driver should return a dictionary with few
> predefined fields (like datasource type, timestamp), and add whatever data
> is needed for the datasource. Then, the datasource transformer will
> transform this data into a vertex and decide where to connect it to the
> graph.
>
[yujunz] So if I understand it correctly, driver and transformer work as a
pair and the intermediate data format is not exposed to vitrage. It
consumes data from the sources and converts it into the graph

> [1] https://gerrit.opnfv.org/gerrit/12179
> [2] https://wiki.openstack.org/wiki/Vitrage
> *[3] *
> https://github.com/openstack/vitrage/blob/master/doc/source/resource-state-config.rst
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changed to neutron by default - merged

2016-08-09 Thread Ricardo Carrillo Cruz
Thanks a million for this!

Ricky

2016-08-09 13:25 GMT+02:00 Davanum Srinivas :

> Yay! Long time to get here. But we are here :)
>
> -- Dims
>
> On Tue, Aug 9, 2016 at 7:04 AM, Sean Dague  wrote:
> > https://review.openstack.org/#/c/350750/ has merged, which moves us over
> > to neutron by default in devstack.
> >
> > If you have manually set services in your local.conf, you won't see any
> > changes. If you don't regularly set those services, you'll be using
> > neutron on your next stack.sh after this change.
> >
> > The *one* major difference in configuration is that PUBLIC_INTERFACE
> > means something different with neutron. This now means the interface
> > that you would give to neutron and let it completely take over. There
> > will be docs coming soon to explain this a bit better on the devstack
> > documentation site (http://devstack.org) once I'm a few more cups of
> > coffee into the morning. However, in the mean time, if you see weird
> > fails during stacking, try deleting PUBLIC_INTERFACE from your
> local.conf.
> >
> > -Sean
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] spec for datasource

2016-08-09 Thread Malin, Eylon (Nokia - IL)
Hi Yujun,

Regarding your questions:
For example, let's say you have 2 packages for your datasources named : 
yujun.ds.snmp ,  yujun.ds.my_alerts

1. For proprietary datasources: 
a. Your datasource packages need to be on the python path 
or installed in your python site-packages 
   (e.g. in RHEL 7: under /usr/lib/python2.7/site-packages/)
b. You need to edit the path value of the datasource section in 
vitrage.conf to include the base path of your datasources. 
In our example, you need to edit, under the [datasource] section, 
the path value to: vitrage.datasources, yujun.ds
c. You need to have in each __init__.py of your datasources an OPTS with 
transformer and driver entries. See vitrage/vitrage/datasources/aodh/__init__.py for 
an example (a rough sketch also follows after this list).

2.   The type of datasource can be anything. 
   Best practice is to declare it as a constant in the __init__.py of the 
package as some string. For example SNMP = 'snmp'.
   Then use that constant as the sync_type in make_pickleable, and as the 
entity_type in the transformer while creating the vertex.
   In any case, under /etc/vitrage/datasource_values/ you need to put a file with 
the same name as the type (string) used as the entity_type of the vertex 
created in the transformer.
   In this example, create /etc/vitrage/datasource_values/snmp.yaml

3. There can be a lot of datasource types. ALARM/RESOURCE are Entity Categories, and 
there are no other categories yet (see vitrage.common.constants.EntityCategory).
4.  I will let Alexy Weyl explain that one.
5. What do you mean by API format? In your driver python class, you need to 
inherit from DriverBase and implement all the abstract methods.
 You can see a lot of examples in the current code.
 The datasource itself (the external system) can use any API, and your 
driver needs to communicate with the datasource using the API that the datasource 
supports (for example the SNMP protocol if it is an SNMP datasource).
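
Here is a rough sketch of what such an __init__.py can look like for the snmp
example above (the class paths below are made up for illustration; check
vitrage/datasources/aodh/__init__.py for the exact option names and any extra
options such as the changes interval):

    from oslo_config import cfg

    SNMP_DATASOURCE = 'snmp'

    OPTS = [
        cfg.StrOpt('transformer',
                   default='yujun.ds.snmp.transformer.SnmpTransformer',
                   help='SNMP transformer class path',
                   required=True),
        cfg.StrOpt('driver',
                   default='yujun.ds.snmp.driver.SnmpDriver',
                   help='SNMP driver class path',
                   required=True),
    ]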

Hope that my answers help you...

BR

Eylon


From: Yujun Zhang [mailto:zhangyujun+...@gmail.com] 
Sent: Tuesday, August 09, 2016 12:07 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [vitrage] spec for datasource

Hi, Eylon,

It is not decided yet what datasources will be required. But we may assume 
there will be both open and proprietary ones.

An example of the former could be to include the service status of a nova host in the 
topology (currently only host, instance and zone are found in the code).

For proprietary datasources, I'm considering adapting the api to common 
protocols/interfaces, e.g. RESTful, SNMP, etc., and I wish to know how to add 
support for these interfaces.

At the moment, I'm evaluating the extensibility of the vitrage 
architecture and estimating the workload for a new datasource. Any idea how I can 
proceed?

--
Yujun

On Tue, Aug 9, 2016 at 4:34 PM Malin, Eylon (Nokia - IL) 
 wrote:
Hi,

There are different instructions for datasources that are part of openstack 
vitrage upstream, and for proprietary datasources.
So, to better understand the case, do you want to add a new datasource that 
would be contributed to openstack, or is it a proprietary one?
I mean, do you plan to push the new datasource to vitrage upstream or leave 
it private?

Eylon


From: Yujun Zhang [mailto:zhangyujun+...@gmail.com]
Sent: Tuesday, August 09, 2016 10:22 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [vitrage] spec for datasource

Dear all,

Is there a guide on how to understand the design of datasource? I want to 
extend the existing one and also try to create a custom datasource from scratch.

Some documents are found in https://github.com/openstack/vitrage-specs and 
datasource seems to be related to synchronizer but I didn't find a dedicated 
spec.

Currently I have the following questions
1. How do I register a new datasource in an existing system?
2. Is the type of datasource (ALARM/RESOURCE) configured in 
`/etc/vitrage/datasource_values/.yaml` ?
3. Is there any other datasource type besides ALARM/RESOURCE?
4. What does `aggregated values` and `priority` mean
5. What is the required data format for the datasource driver api? The inline 
comments give some brief description but didn't specify the api signature.
Many thanks.

--
Yujun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [all] devstack changed to neutron by default - merged

2016-08-09 Thread Davanum Srinivas
Yay! Long time to get here. But we are here :)

-- Dims

On Tue, Aug 9, 2016 at 7:04 AM, Sean Dague  wrote:
> https://review.openstack.org/#/c/350750/ has merged, which moves us over
> to neutron by default in devstack.
>
> If you have manually set services in your local.conf, you won't see any
> changes. If you don't regularly set those services, you'll be using
> neutron on your next stack.sh after this change.
>
> The *one* major difference in configuration is that PUBLIC_INTERFACE
> means something different with neutron. This now means the interface
> that you would give to neutron and let it completely take over. There
> will be docs coming soon to explain this a bit better on the devstack
> documentation site (http://devstack.org) once I'm a few more cups of
> coffee into the morning. However, in the mean time, if you see weird
> fails during stacking, try deleting PUBLIC_INTERFACE from your local.conf.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] tripleo-common bugs, bug tracking and launchpad tags

2016-08-09 Thread Julie Pichon
Thank you for the replies on this thread and apologies for the delay, I
didn't quite grasp what "this should be a policy" actually meant :-)

On 18 July 2016 at 17:23, Ben Nemec  wrote:
> On 07/18/2016 06:28 AM, Julie Pichon wrote:
>> Hi,
>>
>> On Friday Dougal mentioned on IRC that he hadn't realised there was a
>> separate project for tripleo-common bugs on Launchpad [1] and that he'd
>> been using the TripleO main tracker [2] instead.
>>
>> Since the TripleO tracker is also used for client bugs (as far as I can
>> tell?), and there doesn't seem to be a huge amount of tripleo-common
>> bugs perhaps it would make sense to also track those in the main
>> tracker? If there is a previous conversation or document about bug
>> triaging beyond [3] I apologise for missing it (and would love a
>> URL!). At the moment it's a bit confusing.
>
> +1.  Given the heavily interconnected nature of tripleo-common,
> tripleoclient, t-h-t, puppet-tripleo, et al I think it would get a bit
> crazy trying to track bugs with repo-specific trackers.
>
> Rather than use the wiki, whose future is in doubt, I would like to
> propose that this become a policy like
> https://review.openstack.org/#/c/339236/

Thank you for the link. I made a first attempt at a policy over
there [1] based on this thread and taking some inspiration from the
Neutron folks [2]. The initial list is based on existing TripleO tags,
tags that came up in this conversation and the wider OpenStack project
tags. Let me know what you think!

Cheers,

Julie

[1] https://review.openstack.org/#/c/352852/
[2] http://docs.openstack.org/developer/neutron/policies/bugs.html#tagging-bugs

>> If we do encourage using the same bug tracker for multiple components,
>> I think it would be useful to curate a list of official tags [4]. The
>> main advantage of doing that is that the tags will auto-complete so
>> it'd be easier to keep them consistent (and thus actually useful).
>>
>> Personally, I wanted to look through open bugs against
>> python-tripleoclient but people use different ways of marking them at
>> the moment - e.g. [tripleoclient] or [python-tripleoclient] or
>> tripleoclient (or nothing?) in the bug name. I tried my luck at adding
>> a 'tripleoclient' tag [5] to the obvious ones as an example. Maybe
>> something shorter like 'cli', 'common' would make more sense. If there
>> are other tags that come back regularly it'd probably be helpful to
>> list them explicitly as well.
>>
>> Julie
>>
>> [1] https://bugs.launchpad.net/tripleo-common
>> [2] https://bugs.launchpad.net/tripleo
>> [3] https://wiki.openstack.org/wiki/TripleO#Bug_Triage
>> [4] https://wiki.openstack.org/wiki/Bug_Tags
>> [5] https://bugs.launchpad.net/tripleo?field.tag=tripleoclient

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] devstack changed to neutron by default - merged

2016-08-09 Thread Sean Dague
https://review.openstack.org/#/c/350750/ has merged, which moves us over
to neutron by default in devstack.

If you have manually set services in your local.conf, you won't see any
changes. If you don't regularly set those services, you'll be using
neutron on your next stack.sh after this change.

The *one* major difference in configuration is that PUBLIC_INTERFACE
means something different with neutron. This now means the interface
that you would give to neutron and let it completely take over. There
will be docs coming soon to explain this a bit better on the devstack
documentation site (http://devstack.org) once I'm a few more cups of
coffee into the morning. However, in the mean time, if you see weird
fails during stacking, try deleting PUBLIC_INTERFACE from your local.conf.
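
As a rough illustration, the neutron meaning is something like this in your
local.conf (the interface name here is only an example):

    [[local|localrc]]
    # With neutron, this interface is handed over to neutron entirely and
    # plugged into the external bridge -- only set it if that is what you want.
    #PUBLIC_INTERFACE=eth1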

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] spec for datasource

2016-08-09 Thread Afek, Ifat (Nokia - IL)
Hi Yujun,

Please see my answers below.

Best Regards,
Ifat.

From: Yujun Zhang
Date: Tuesday, 9 August 2016 at 12:06

For proprietary datasources, I'm considering adapting the api to common 
protocols/interfaces, e.g. RESTful, SNMP, etc., and I wish to know how to add 
support for these interfaces.
[Ifat]: Do you mean that you want to write a datasource that gets its 
information from SNMP/REST API? Note that for alarm datasources, we have in our 
roadmap to support OPNFV Doctor SB REST API[1]. Will it be relevant for your 
use cases?

Some documents are found in https://github.com/openstack/vitrage-specs and 
datasource seems to be related to synchronizer but I didn't find a dedicated 
spec.
[Ifat]: We started documenting the process of adding a new datasource, but the 
document is not final. We will try to finish it shortly. BTW, there are many 
other documents in Vitrage wiki page[2].

1. How do I register a new datasource in an existing system?
2. Is the type of datasource (ALARM/RESOURCE) configured in 
`/etc/vitrage/datasource_values/.yaml` ?
[Ifat]: No, it is configured in the datasource code. For information about 
datasource_values please see [3]

3. Is there any other datasource type besides ALARM/RESOURCE?
[Ifat]: No, and at the moment we don’t see a need for that. Vitrage can hold 
resources of any type in a topology graph, and manage alarms that are raised on 
these resources. If you see a use case for other datasource types, let us know.

4. What do `aggregated values` and `priority` mean?
[Ifat]: Detailed in [3]

5. What is the required data format for the datasource driver API? The inline 
comments give a brief description but don't specify the API signature.
Many thanks.
[Ifat]: This should be part of the datasource documentation that we need to 
add. But basically, the driver should return a dictionary with a few predefined 
fields (like datasource type, timestamp), and add whatever data is needed for 
the datasource. Then, the datasource transformer will transform this data into 
a vertex and decide where to connect it to the graph.
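
To give a rough illustration (the field names below are made up for the 
example and are not the final driver API - please wait for the datasource 
docs for the actual required keys), the driver could return something like:

  from datetime import datetime

  def get_all():
      # One dict per resource/alarm reported by the proprietary system.
      return [{
          'sync_type': 'my_switch_monitor',              # datasource type
          'sample_date': datetime.utcnow().isoformat(),  # timestamp
          'id': 'switch-17',
          'state': 'ERROR',
          # ...plus whatever extra data the transformer needs in order
          # to build the vertex and connect it to the graph.
      }]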


[1] https://gerrit.opnfv.org/gerrit/12179
[2] https://wiki.openstack.org/wiki/Vitrage
[3] 
https://github.com/openstack/vitrage/blob/master/doc/source/resource-state-config.rst





Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread Sean Dague
On 08/09/2016 02:38 AM, Tony Breeds wrote:
> Hi all,
> I guess this is aimed at the long term requirements team members.
> 
> The current policy for approving requirements bumps [1] contains the following 
> text:
> 
> Changes to update the minimum version of a library developed by the
> OpenStack community can be approved by one reviewer, as long as the
> constraints are correct and the tests pass.
> 
> Perhaps I'm a little risk averse but this seems a little strange to me.  Can
> folks that know more about how this came about help me understand why that is?
> 
> Yours Tony.
> 
> [1] 
> https://github.com/openstack/requirements/blob/master/README.rst#for-upgrading-requirements-versions

With constraints, the requirements minimum bump is pretty low risk. Very
few of our jobs are impacted by it.

It's in many ways riskier to leave the minimums where they are and only bump
constraints, because the minimums could be lying about whether things still
work at the lower level.
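
To make the distinction concrete with an illustrative example: a project's
requirements.txt (and global-requirements) expresses the minimum we claim to
support, e.g.

  oslo.utils>=3.16.0  # Apache-2.0

while upper-constraints.txt pins the exact version the gate actually installs:

  oslo.utils===3.16.0

Bumping the '>=' only changes the claim; the jobs keep installing whatever the
constraint pins, which is why the minimum bump itself is low risk - and also
why an untested minimum can quietly stop being true.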

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-09 Thread Kirill Zaitsev
 Undoubtedly, in the short term it will be painful, but I believe that in the 
long run Glare will win.

Let's hope that the projects that use Glare will also win from this decision. =)

It seems to me that Murano is currently the only project that has been actively 
trying to incorporate Glare into its development process. We have a 
(non-voting) integration job with the Glare backend. The split probably means that 
devstack (and thus the dsvm job) installations would be run via a plugin from the 
new repository. I would like to ask the Glance and Glare teams to approach the 
process responsibly and not remove the code until it's ready to be used from the 
new repo.

I'm going to echo Tim's concerns and suggest that the Glare team put packaging and 
ease of use/deployment high on the list of the new project's priorities.

-- 
Kirill Zaitsev
Murano Project Tech Lead
Software Engineer at
Mirantis, Inc

On 5 August 2016 at 21:11:12, Mikhail Fedosin (mfedo...@mirantis.com) wrote:

Thank you all for your responses!

From my side I can add that our separation is a deliberate step. We weighed 
all the pros and cons, and our final decision was that moving forward as a new 
project is the lesser of two evils.

Also, I want to say that Glare was designed as an open project and we want to 
build a good community with members from different companies. Glare is supposed 
to be a backend for Heat (and therefore TripleO), App-Catalog, Tacker and 
definitely Nova. In addition, we are considering the possibility of storing 
Docker containers, which may be useful for Magnum.

Then, I think that the comparison between the Image API and the Artifact API is 
not correct. Moreover, in my opinion the Image API imposes artificial constraints. 
Just imagine that your file system could only store images in JPG format (more 
precisely, it could store any data, but all files would have to have the 
extension ".jpg"). Glance is the same - I can put any data there, be it 
packages, templates, or video from my holiday. And this interface, though not 
ideal, may not work for all services. But the artificial limitations that have 
been created make Glance uncomfortable to use even for storing images.

On the other hand, Glare provides a unified interface for all possible binary data 
types. To continue the filesystem example, Glare supports all file extensions, 
folders, a history of file changes on your disk, data validation and conversion, 
import/export of files from different computers, and so on. These features are 
not present in Glance, and I think they never will be, because of deficiencies 
in its architecture. 

For this reason I think Glare's adoption is important and it will be a huge 
step forward for OpenStack and the whole community.

Thanks again! If you want to support us, please vote for our talk on Barcelona 
summit - https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/ 
Search "Glare" and there will be our presentation.

Best,
Mike 

On Fri, Aug 5, 2016 at 5:22 PM, Jonathan D. Proulx  wrote:

I don't have a strong opinion on the split vs stay discussion. It
does seem there have been sustained, if ineffective, attempts to keep this
together, so I lean toward supporting the divorce.

But let's not pretend there are no costs for this.

On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
:On 08/04/2016 06:40 PM, Clint Byrum wrote:

:>But, if I look at this from a user perspective, if I do want to use
:>anything other than images as cloud artifacts, the story is pretty
:>confusing.
:
:Actually, I beg to differ. A unified OpenStack Artifacts API,
:long-term, will be more user-friendly and less confusing since a
:single API can be used for various kinds of similar artifacts --
:images, Heat templates, Tosca flows, Murano app manifests, maybe
:Solum things, maybe eventually Nova flavor-like things, etc.

The confusion is about the current state of two APIs, not about having a future
integrated API.

Remember how well that served us with nova-network and neutron (né
quantum).

I also agree with Tim's point.  Yes, if a new project is fully
documented and well integrated into packaging and config management,
implementing it is trivial, but history again teaches that this is a long
road.

It also means extra dev overhead to create and manage these
supporting structures to hide the complexity from end users. Now, if
the two projects are sufficiently different this may not be a
significant delta, as the new docs and config management code would be
needed in the old project if the new service stayed there.

-Jon



Re: [openstack-dev] [Fuel] Patches for python-fuelclient are blocked by broken master.python-fuelclient.pkgs.ubuntu.review_fuel_client

2016-08-09 Thread Roman Prykhodchenko
Folks,

The entire python-fuelclient team has been blocked by a broken CI job for 
several days now. Please make it non-voting ASAP.


- romcheg

> On 8 Aug 2016, at 12:19, Roman Prykhodchenko  wrote:
> 
> Vladimir,
> 
> Thank you for the update on this. Is there any ETA available?
> 
> On Mon, Aug 8, 2016 at 12:09 PM Vladimir Kozhukalov  > wrote:
> We are working on this. Will fix soon.
> 
> Vladimir Kozhukalov
> 
> On Mon, Aug 8, 2016 at 12:52 PM, Roman Prykhodchenko  > wrote:
> If it's not possible to fix this job during the next few hours, let's mark it as 
> non-voting until the bug(s) are fixed.
> 
> > On 8 Aug 2016, at 11:48, Roman Prykhodchenko  wrote:
> >
> > Folks,
> >
> > Since the end of last week 
> > master.python-fuelclient.pkgs.ubuntu.review_fuel_client [1] has been blocking all 
> > patches to python-fuelclient. As the logs suggest, this issue is caused by the 
> > Xenial merge party.
> >
> > Please resolve the issue ASAP because some folks are blocked and cannot 
> > finish their jobs in time.
> >
> >
> > 1. 
> > https://ci.fuel-infra.org/job/master.python-fuelclient.pkgs.ubuntu.review_fuel_client/
> >  
> > 
> >
> > - romcheg
> >
> >
> >
> 
> 





[openstack-dev] [smaug] voting for the project mascot

2016-08-09 Thread xiangxinyong
Hello guys,


The Smaug project is voting on its mascot.
Please feel free to cast your vote :)
https://etherpad.openstack.org/p/smaugmascot


Thanks very much.



Best Regards,
  xiangxinyong


Re: [openstack-dev] [neutron][nova][SR-IOV] deprecation of supported_pci_vendor_devs

2016-08-09 Thread Moshe Levi
This is the deprecation patch [2] 

[2] - https://review.openstack.org/#/c/352812/

-Original Message-
From: Moshe Levi [mailto:mosh...@mellanox.com] 
Sent: Monday, August 08, 2016 3:43 PM
To: OpenStack Development Mailing List (not for usage questions) 
(openstack-dev@lists.openstack.org) 
Subject: [openstack-dev] [neutron][nova][SR-IOV] deprecation of 
supported_pci_vendor_devs

Hi all,

To reduce complexity in configuring SR-IOV, I want to deprecate the 
supported_pci_vendor_devs option [1] in the neutron-server ml2 config.
This option does extra validation that the PCI vendor ID and product ID 
provided by nova in the neutron port binding profile match the vendor ID 
and product ID in supported_pci_vendor_devs. 

In my opinion this is redundant; nova-scheduler is the place to do this 
validation and select a suitable hypervisor. 
The compute node is already validating this through the 
pci_passthrough_whitelist option in nova.conf [2].

I don't see a reason why the neutron-server should validate the pci vendor_id 
and product_id again from the neutron port binding profile. 
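
For reference, the part of the port binding profile that nova fills in for an 
SR-IOV port looks roughly like this (the values are just an example):

  "binding:profile": {"pci_vendor_info": "15b3:1004",
                      "pci_slot": "0000:06:00.1",
                      "physical_network": "physnet1"}

so all that the ml2 option adds on top of the compute node whitelist is a 
second check of that same vendor:product string.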

If there is a good reason to keep it, please let me know; otherwise I will 
deprecate it.

[1] - supported_pci_vendor_devs = ['15b3:1004', '8086:10ca']
[2] - pci_passthrough_whitelist = {"address":"*:06:00.*","physical_network":"physnet1"} 


Thanks,
Moshe



