[openstack-dev] [neutron][ovs-agent] Interface monitor is not active

2016-05-20 Thread zhi
hi, all

" Interface monitor is not active ", ovs agent log file says this error
message. I have no idea about that.

I think that ovs agent cant not connect the ovsdb rightly. This
message, "is_connected: true" doesn't  display in my terminal  when I run "
ovs-vsctl show ".

Could someone give me some advice about that? What should I do?

Hope for your reply.  ;-)

Thanks
Zhi Chang


Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-20 Thread Armando M.
On 20 May 2016 at 17:37, Elzur, Uri  wrote:

> Hi Armando, Cathy, All
>
>
>
> First I apologize for the delay, returning from a week long international
> trip. (yes, I know,  a lousy excuse on many accounts…)
>
>
>
> If I’m attempting to summarize all the responses, it seems like
>
> · A given abstraction in Neutron is allowed (e.g. in support of
> SFC), preferably not specific to a given technology e.g. NSH for SFC
>
> · A stadium project is not held to the same tests (but we do not
> have a “formal” model here, today) and therefore can support even a
> specific technology e.g. NSH (definitely better with abstractions to meet
> Neutron standards for future integration)
>

A given abstraction is allowed so long as there is enough agreement that it
is indeed technology agnostic. If the abstraction maps neatly to a given
technology, the implementation may exist within the context of Neutron or
elsewhere.

Having said that, I'd like to clarify a point: you seem to refer to the
stadium as a gold standard. The stadium is nothing more than a list of
software repositories that the Neutron team develops and maintains. Given
the maturity of a specific repo, it may or may not implement an abstraction
with integration code to non-open technologies. This is left to the discretion
of the group of folks who are directly in control of the specific repo,
though the general direction has been to strongly encourage and promote
openness throughout the entire stack that falls under the responsibility of
the Neutron team and thus the stadium.


>
> However,
>
> · There still is a chicken and egg phenomenon… how can a
> technology become main stream with OPEN SOURCE support  if we can’t get an
> OpenStack to support the required abstractions *before* the technology
> was adopted elsewhere??
>
> o   Especially as Stadium, can we let Neutron to lead the industry, given
> broad enough community interest?
>
> · BTW,  in this particular case, there originally has been a
> *direct* ODL access as a NSH solution (i.e. NO OpenStack option), then we
> got Tacker (now an Neutron Stadium project, if I get it right) to support
> SFC and NSH, but we are still told that networking-sfc (another Neutron
> Stadium project ) can’t do the same….
>
I cannot comment on the experience and the conversations you've had so far,
as I have no context. All I know is that if you want to experiment with
OpenDaylight and its NSH provider and want to use that as a Neutron backend,
you can. However, if that requires new abstractions, these new abstractions
must be agreed upon by all interested parties, be technology agnostic, and allow
for multiple implementations, an open one included. That's the nature of
OpenStack.

> · Also regarding the  following comment made on another message
> in this thread, “As to OvS features, I guess the OvS ml is a better
> place, but wonder if the Neutron community wants to hold itself hostage to
> the pace of other projects who are reluctant to adopt a feature”, what I
> mean is again, that chicken and egg situation as above. Personally, I think
> OpenStack Neutron should allow mechanisms that are of interest / value to
> the networking community at large, to “ experiment with the abstraction” as
> you stated, *independent of other organizations/projects*…
>
Personally, I see no catch-22 if you operate under the premises I stated
above. If Neutron allowed experimentation with *any* mechanism without taking
into consideration the importance of abstractions and community consensus,
we as a community would have failed, especially with respect to
interoperability.

>
>
> SOOO, is the bottom line that we agree that supporting NSH explicitly in
> networking-sfc can be added now?
>

I don't know what you mean by supporting NSH explicitly in networking-sfc.
Can you be more specific? Do you intend via OpenDaylight? What would be the
NSH provider?


>
>
>
>
> Thx
>
>
>
> Uri (“Oo-Ree”)
>
> C: 949-378-7568
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Friday, May 13, 2016 5:14 PM
> *To:* Cathy Zhang 
> *Cc:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>
>
>
>
>
>
>
> On 13 May 2016 at 16:01, Cathy Zhang  wrote:
>
> Hi Uri,
>
>
>
> Current networking-sfc API allows the user to specify the data path SFC
> encapsulation mechanism and NSH could be one of the encapsulation options.
>
> But since OVS release has not supported the NSH yet, we have to wait until
>  NSH is added into OVS and then start to support the NSH encapsulation
> mechanism in the data path.
>
>
>
> One can support NSH whichever way they see fit. NSH in OVS is not
> something Neutron can do anything about. Neutron is about defining
> abstractions that can apply to a variety of technologies and experiment
> with what open source component is available on the shelves. Anyone can
> take the abstraction and deliver whatever technology stack they want with
> it and we'd happily gather any feedback to iterate on the abstraction to
> address more and more use cases.

[openstack-dev] [Keystone] Welcome Keystone to the World of Python 3

2016-05-20 Thread Morgan Fainberg
We've gone through all of our test cases and all of our code base. At this
point Keystone is no longer skipping any of the tests (which do tend to
test the entire request stack) and we are properly gating on being
Python3.4 compatible.

I want to thank everyone who has put in effort over the last few weeks to
push the last of the patches through the gate. It would not have been doable
without those hacking on LdapPool, doing test cleanup, and those
reviewing/trying the code out.

If you run across issues with Keystone and Python3, please let us know.

A sincere thanks to the entire Keystone team involved in this multicycle
effort.

--Morgan
--
Morgan Fainberg (notmorgan)


Re: [openstack-dev] [aodhclient] does the aodh have the feature for import/export batch alarms?

2016-05-20 Thread li . yuanzhen
Hi,
Thank you for giving me a good solution. Although I'm not very familiar with
Heat templates yet, I will research them to implement the requirement.
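
In the meantime, here is a rough sketch of the REST-API approach ZhiQiang
suggests below, just to capture the idea; the endpoints, tokens, and the
set of read-only fields to strip are my assumptions and would need to be
checked against the aodh API reference:

    # Sketch: copy alarms from one aodh endpoint to another via the v2 REST API.
    import requests

    OLD_AODH = 'http://old-env:8042'    # placeholder endpoints
    NEW_AODH = 'http://new-env:8042'
    OLD_TOKEN = 'OLD_ENV_TOKEN'
    NEW_TOKEN = 'NEW_ENV_TOKEN'

    # Fields assumed to be server-generated and rejected on create.
    READ_ONLY = ('alarm_id', 'timestamp', 'state_timestamp', 'user_id', 'project_id')

    def migrate_alarms():
        alarms = requests.get(OLD_AODH + '/v2/alarms',
                              headers={'X-Auth-Token': OLD_TOKEN}).json()
        for alarm in alarms:
            body = {k: v for k, v in alarm.items() if k not in READ_ONLY}
            resp = requests.post(NEW_AODH + '/v2/alarms',
                                 headers={'X-Auth-Token': NEW_TOKEN},
                                 json=body)
            resp.raise_for_status()

    if __name__ == '__main__':
        migrate_alarms()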

BR, 
Rajen



> 
> 
> Hi,
> 
> 
> I also agree with ZhiQiang.
> 
> How about using a Heat Template, which improves portability of apps and
> configs?
> 
> 
> Cheers,
> Ryota
> 
> > -Original Message-
> > From: ZhiQiang Fan [mailto:aji.zq...@gmail.com]
> > Sent: Friday, May 20, 2016 11:42 AM
> > To: li.yuanz...@zte.com.cn
> > Cc: OpenStack Development Mailing List; Ildikó Váncsa;
> > lianhao...@intel.com; Sheng Liu; Mibu Ryota(壬生 亮太); Julien Danjou
> > Subject: Re: [openstack-dev] [aodhclient] does the aodh have the
> > feature for import/export batch alarms?
> > 
> > Batch alarms are not supported, and I think implementing them in aodh
> > would be a burden rather than a good feature.
> > 
> > Import/export of alarms is not supported either. Have you considered
> > dumping the DB and restoring it in the new env? Or you can get the alarm
> > list from the old env and create new alarms in the new env via the REST
> > API, if the data set is not too large.
> > 
> > On Fri, May 20, 2016 at 9:37 AM,  wrote:
> > 
> > 
> > Hi All,
> > 
> > In aodh/aodhclient, I did not find a feature for importing/exporting
> > batch alarms.
> > 
> > I mainly want to use it to implement the following requirement: in a
> > "migrate alarms from one OpenStack env to another OpenStack env"
> > scenario, I would like to do this in a simple way, such as
> > exporting/downloading the alarms from one env and then importing those
> > alarms into a new env.
> > 
> > Currently, does aodh have a command for importing/exporting batch
> > alarms, or is there an alternative method to implement this requirement?
> > If not, does the feature need to be added to aodh/aodhclient?
> > 
> > Rajen (liyuanzhen)
> > 



Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-20 Thread Tim Rozet
Hi Uri,
I originally wrote the Tacker->ODL SFC NSH piece and have been working with
the Tacker and networking-sfc teams to bring it upstream into OpenStack.  Cathy, 
Stephen, Louis and the rest of the networking-sfc team have been very receptive 
to changes specific to NSH around their current API and DB model.  The proper 
place for SFC to live in OpenStack is networking-sfc, while Tacker can do its 
orchestration job by rendering ETSI MANO TOSCA input like VNF Descriptors and 
VNF Forwarding Graph Descriptors.

We currently have a spec in networking-odl to migrate my original driver for ODL 
to do IETF NSH.  That driver will be supported in networking-sfc, along with 
some changes to networking-sfc to account for NSH awareness and encap type 
(like VXLAN+GPE or Ethernet).  The OVS work to support NSH is coming along and 
patches are under review.  Yi Yang has built a private OVS version with these 
changes and we can use that for now to test with.

I think it is all coming together and will take a couple more months before all 
of the pieces (Tacker, networking-sfc, networking-odl, ovs) are in place.  I 
don't think networking-sfc is holding up any progress.

Thanks,

Tim Rozet
Red Hat SDN Team

- Original Message -
From: "Uri Elzur" 
To: "OpenStack Development Mailing List (not for usage questions)" 
, "Cathy Zhang" 
Sent: Friday, May 20, 2016 8:37:26 PM
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC



Hi Armando, Cathy, All 



First I apologize for the delay, returning from a week long international trip. 
(yes, I know, a lousy excuse on many accounts…) 



If I’m attempting to summarize all the responses, it seems like 

· A given abstraction in Neutron is allowed (e.g. in support of SFC), 
preferably not specific to a given technology e.g. NSH for SFC 

· A stadium project is not held to the same tests (but we do not have a 
“formal” model here, today) and therefore can support even a specific 
technology e.g. NSH (definitely better with abstractions to meet Neutron 
standards for future integration) 



However, 

· There still is a chicken and egg phenomenon… how can a technology become main 
stream with OPEN SOURCE support if we can’t get an OpenStack to support the 
required abstractions before the technology was adopted elsewhere?? 

o Especially as Stadium, can we let Neutron to lead the industry, given broad 
enough community interest? 

· BTW, in this particular case, there originally has been a direct ODL access 
as a NSH solution (i.e. NO OpenStack option), then we got Tacker (now an 
Neutron Stadium project, if I get it right) to support SFC and NSH, but we are 
still told that networking-sfc (another Neutron Stadium project ) can’t do the 
same…. 

· Also regarding the following comment made on another message in this thread, 
“ As to OvS features, I guess the OvS ml is a better place, but wonder if the 
Neutron community wants to hold itself hostage to the pace of other projects 
who are reluctant to adopt a feature ”, what I mean is again, that chicken and 
egg situation as above. Personally, I think OpenStack Neutron should allow 
mechanisms that are of interest / value to the networking community at large, 
to “ experiment with the abstraction” as you stated, independent of other 
organizations/projects … 



SOOO, is the bottom line that we agree that supporting NSH explicitly in 
networking-sfc can be added now? 





Thx 



Uri (“Oo-Ree”) 

C: 949-378-7568 



From: Armando M. [mailto:arma...@gmail.com] 
Sent: Friday, May 13, 2016 5:14 PM 
To: Cathy Zhang  
Cc: OpenStack Development Mailing List (not for usage questions) 
 
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC 










On 13 May 2016 at 16:01, Cathy Zhang < cathy.h.zh...@huawei.com > wrote: 




Hi Uri, 



Current networking-sfc API allows the user to specify the data path SFC 
encapsulation mechanism and NSH could be one of the encapsulation options. 

But since OVS release has not supported the NSH yet, we have to wait until NSH 
is added into OVS and then start to support the NSH encapsulation mechanism in 
the data path. 





One can support NSH whichever way they see fit. NSH in OVS is not something 
Neutron can do anything about. Neutron is about defining abstractions that can 
apply to a variety of technologies and experiment with what open source 
component is available on the shelves. Anyone can take the abstraction and 
deliver whatever technology stack they want with it and we'd happily gather any 
feedback to iterate on the abstraction to address more and more use case. 










AFAIK, it is the position of Neutron to have any OVS related new features 
developed inside the OVS community. 



Thanks, 

Cathy 




From: Elzur, Uri [mailto: uri.el...@intel.com ] 
Sent: Friday, May 13, 2016 3:02 PM 
To: 

Re: [openstack-dev] [Neutron][TC] support of NSH in networking-SFC

2016-05-20 Thread Doug Wiegley
In a nutshell, you’ve got it: you can’t add an API without a reference 
implementation, including the data plane, which has to be open source (though it 
does not itself have to be OpenStack).

> o   Especially as Stadium, can we let Neutron to lead the industry, given 
> broad enough community interest?


You can do anything you want outside the stadium, which is where 
experimentation/incubation is meant to happen.  Inside the stadium means 
“official OpenStack project”, which means it has an open-source implementation.

If all backends are closed-source, it’s not open as openstack defines it: 
https://governance.openstack.org/reference/opens.html 


There isn’t any wiggle room there. This isn’t a neutron argument; feel free to 
take it up with the TC.

Thanks,
doug



> On May 20, 2016, at 6:37 PM, Elzur, Uri  wrote:
> 
> Hi Armando, Cathy, All
>  
> First I apologize for the delay, returning from a week long international 
> trip. (yes, I know,  a lousy excuse on many accounts…)
>  
> If I’m attempting to summarize all the responses, it seems like
> · A given abstraction in Neutron is allowed (e.g. in support of SFC), 
> preferably not specific to a given technology e.g. NSH for SFC
> · A stadium project is not held to the same tests (but we do not have 
> a “formal” model here, today) and therefore can support even a specific 
> technology e.g. NSH (definitely better with abstractions to meet Neutron 
> standards for future integration)
>  
> However,
> · There still is a chicken and egg phenomenon… how can a technology 
> become main stream with OPEN SOURCE support  if we can’t get an OpenStack to 
> support the required abstractions before the technology was adopted 
> elsewhere??
> o   Especially as Stadium, can we let Neutron to lead the industry, given 
> broad enough community interest?
> · BTW,  in this particular case, there originally has been a direct 
> ODL access as a NSH solution (i.e. NO OpenStack option), then we got Tacker 
> (now an Neutron Stadium project, if I get it right) to support SFC and NSH, 
> but we are still told that networking-sfc (another Neutron Stadium project ) 
> can’t do the same….
> · Also regarding the  following comment made on another message in 
> this thread, “As to OvS features, I guess the OvS ml is a better place, but 
> wonder if the Neutron community wants to hold itself hostage to the pace of 
> other projects who are reluctant to adopt a feature”, what I mean is again, 
> that chicken and egg situation as above. Personally, I think OpenStack 
> Neutron should allow mechanisms that are of interest / value to the 
> networking community at large, to “ experiment with the abstraction” as you 
> stated, independent of other organizations/projects…
>  
> SOOO, is the bottom line that we agree that supporting NSH explicitly in 
> networking-sfc can be added now?
>  
>  
> Thx
>  
> Uri (“Oo-Ree”)
> C: 949-378-7568
>  
> From: Armando M. [mailto:arma...@gmail.com] 
> Sent: Friday, May 13, 2016 5:14 PM
> To: Cathy Zhang 
> Cc: OpenStack Development Mailing List (not for usage questions) 
> 
> Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC
>  
>  
>  
> On 13 May 2016 at 16:01, Cathy Zhang  > wrote:
> Hi Uri,
>  
> Current networking-sfc API allows the user to specify the data path SFC 
> encapsulation mechanism and NSH could be one of the encapsulation options. 
> But since OVS release has not supported the NSH yet, we have to wait until  
> NSH is added into OVS and then start to support the NSH encapsulation 
> mechanism in the data path.
>  
> One can support NSH whichever way they see fit. NSH in OVS is not something 
> Neutron can do anything about. Neutron is about defining abstractions that 
> can apply to a variety of technologies and experiment with what open source 
> component is available on the shelves. Anyone can take the abstraction and 
> deliver whatever technology stack they want with it and we'd happily gather 
> any feedback to iterate on the abstraction to address more and more use case.
>  
>  
> AFAIK, it is the position of Neutron to have any OVS related new features 
> developed inside the OVS community. 
>  
> Thanks,
> Cathy
>  
> From: Elzur, Uri [mailto:uri.el...@intel.com ] 
> Sent: Friday, May 13, 2016 3:02 PM
> To: OpenStack Development Mailing List (not for usage questions); Armando M
> Subject: [openstack-dev] [Neutron] support of NSH in networking-SFC
>  
> Hi Armando
>  
> As an industry we are working on SFC for 3 years or so (more?). Still to 
> date, we are told we can’t get Neutron or even a Stadium project e.g. 
> networking-SFC to support NSH (in IETF LC phase) because OvS has not 
> supported NSH. Is this an official position of Neutron that OvS is the gold 
> standard to support any new feature?

Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-20 Thread Elzur, Uri
Hi Armando, Cathy, All

First I apologize for the delay, returning from a week long international trip. 
(yes, I know,  a lousy excuse on many accounts…)

If I’m attempting to summarize all the responses, it seems like

· A given abstraction in Neutron is allowed (e.g. in support of SFC), 
preferably not specific to a given technology e.g. NSH for SFC

· A stadium project is not held to the same tests (but we do not have a 
“formal” model here, today) and therefore can support even a specific 
technology e.g. NSH (definitely better with abstractions to meet Neutron 
standards for future integration)

However,

· There still is a chicken and egg phenomenon… how can a technology 
become main stream with OPEN SOURCE support  if we can’t get an OpenStack to 
support the required abstractions before the technology was adopted elsewhere??

o   Especially as Stadium, can we let Neutron to lead the industry, given broad 
enough community interest?

· BTW,  in this particular case, there originally has been a direct ODL 
access as a NSH solution (i.e. NO OpenStack option), then we got Tacker (now an 
Neutron Stadium project, if I get it right) to support SFC and NSH, but we are 
still told that networking-sfc (another Neutron Stadium project ) can’t do the 
same….

· Also regarding the  following comment made on another message in this 
thread, “As to OvS features, I guess the OvS ml is a better place, but wonder 
if the Neutron community wants to hold itself hostage to the pace of other 
projects who are reluctant to adopt a feature”, what I mean is again, that 
chicken and egg situation as above. Personally, I think OpenStack Neutron 
should allow mechanisms that are of interest / value to the networking 
community at large, to “ experiment with the abstraction” as you stated, 
independent of other organizations/projects…

SOOO, is the bottom line that we agree that supporting NSH explicitly in 
networking-sfc can be added now?


Thx

Uri (“Oo-Ree”)
C: 949-378-7568

From: Armando M. [mailto:arma...@gmail.com]
Sent: Friday, May 13, 2016 5:14 PM
To: Cathy Zhang 
Cc: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC



On 13 May 2016 at 16:01, Cathy Zhang 
> wrote:
Hi Uri,

Current networking-sfc API allows the user to specify the data path SFC 
encapsulation mechanism and NSH could be one of the encapsulation options.
But since OVS release has not supported the NSH yet, we have to wait until  NSH 
is added into OVS and then start to support the NSH encapsulation mechanism in 
the data path.

One can support NSH whichever way they see fit. NSH in OVS is not something 
Neutron can do anything about. Neutron is about defining abstractions that can 
apply to a variety of technologies and experiment with what open source 
component is available on the shelves. Anyone can take the abstraction and 
deliver whatever technology stack they want with it and we'd happily gather any 
feedback to iterate on the abstraction to address more and more use case.


AFAIK, it is the position of Neutron to have any OVS related new features 
developed inside the OVS community.

Thanks,
Cathy

From: Elzur, Uri [mailto:uri.el...@intel.com]
Sent: Friday, May 13, 2016 3:02 PM
To: OpenStack Development Mailing List (not for usage questions); Armando M
Subject: [openstack-dev] [Neutron] support of NSH in networking-SFC

Hi Armando

As an industry we are working on SFC for 3 years or so (more?). Still to date, 
we are told we can’t get Neutron or even a Stadium project e.g. networking-SFC 
to support NSH (in IETF LC phase) because OvS has not supported NSH. Is this an 
official position of Neutron that OvS is the gold standard to support any new 
feature?

We have seen OvS support other overlays that are not ahead of VXLAN-gpe in the 
IETF.

Thx

Uri (“Oo-Ree”)
C: 949-378-7568




Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

2016-05-20 Thread Elzur, Uri
Hi Cathy

Please note my other response to the list on this subject.
It is not clear to me on what grounds the following conclusion is derived. I was 
asking Armando for a clear answer to the topic at hand.


Thx

Uri (“Oo-Ree”)
C: 949-378-7568

-Original Message-
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com] 
Sent: Monday, May 16, 2016 1:10 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

Hi Uri,

I hope all the replies have helped answer your question. 

To echo what Paul said, the networking-sfc approach is to separate the API from 
the backend drivers. The actual data plane forwarder is not part of 
networking-sfc. We aren't going to maintain the out-of-tree OVS NSH code. When 
OVS accepts the NSH functionality, our networking-sfc OVS driver will be updated 
to support "push NSH" and "pop NSH" etc. to make use of the NSH encapsulation 
available in the data plane forwarder. 
If you know of any other open source vSwitch/vRouter that already supports NSH 
and someone wants to write a networking-sfc driver for it, that code would be 
welcomed. 

Thanks,
Cathy

-Original Message-
From: Paul Carver [mailto:pcar...@paulcarver.us]
Sent: Saturday, May 14, 2016 7:25 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] support of NSH in networking-SFC

On Fri, 13 May 2016 17:13:59 -0700
"Armando M."  wrote:

> On 13 May 2016 at 16:10, Elzur, Uri  wrote:
> 
> > Hi Cathy
> >
> >
> >
> > Thank you for the quick response. This is the essence of my question 
> > – does Neutron keep OvS as a gold standard and why
> >  
> 
> Not at all true. Neutron, the open source implementation, uses a 
> variety of open components, OVS being one of them. If you know of any 
> open component that supports NSH readily available today, I'd be happy 
> to hear about it.

I agree with Armando and Cathy. There's nothing "gold standard" about OvS. The 
networking-sfc approach is to separate the API from the backend drivers and the 
OvS driver is only one of several. We have a place in the API where we expect 
to capture the tenant's intent to use NSH.
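
To make that concrete, here is a rough sketch of how that intent could look
through the port-chain API's chain_parameters field. This is illustrative
only: the request shape follows the networking-sfc API as I understand it,
the "nsh" correlation value is an assumption about a future capability (the
current reference driver only implements MPLS correlation), and the URL,
token, and UUIDs are placeholders:

    # Hypothetical sketch: create a port chain asking for NSH correlation.
    import requests

    NEUTRON_URL = 'http://controller:9696/v2.0'   # placeholder
    TOKEN = 'AUTH_TOKEN'                          # placeholder

    body = {
        'port_chain': {
            'name': 'chain-with-nsh',
            'port_pair_groups': ['<port-pair-group-uuid>'],
            'flow_classifiers': ['<flow-classifier-uuid>'],
            # 'correlation' selects the SFC encapsulation; 'mpls' is the
            # default today, 'nsh' is the value discussed in this thread.
            'chain_parameters': {'correlation': 'nsh'},
        }
    }

    resp = requests.post(NEUTRON_URL + '/sfc/port_chains',
                         headers={'X-Auth-Token': TOKEN},
                         json=body)
    resp.raise_for_status()
    print(resp.json())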

What we don't currently have is a backend, OvS or other, that supports NSH. The 
actual dataplane forwarder is not part of networking-sfc. We aren't going to 
maintain the out-of-tree OvS NSH code or depend on it.
When OvS accepts the NSH functionality upstream, then our networking-sfc driver 
will be able to make use of it.

If there is any other vSwitch/vRouter that already supports NSH and someone 
wants to write a networking-sfc driver for it, that code would be welcome.

We've also started discussing how to implement a capabilities discovery API so 
that if some backends support a capability (e.g. NSH) and other backends don't 
support it, we will provide the tenant with an abstract way to query the 
networking-sfc API in order to determine whether a particular capability can be 
provided by the current backend.

The thing networking-sfc won't take on is ownership of the upstream dataplane 
forwarder projects. We'll simply provide an abstraction so that a common API 
can invoke SFC across pre-existing SFC-capable dataplanes.



Re: [openstack-dev] [javascript] Seeking contributors, js-generator-openstack

2016-05-20 Thread Zhang Yujun
Hi, Michael

As you are no longer alone now, we had better put the things in your head into
documents so that everybody who wishes to contribute will know where to go.

Besides the technical roadmap, I think we will need a space for issue
tracking and proposal discussion. Once we make the project more open to
the community, it won't be long before more developers join this project.

That's my basic thoughts for the moment.

--
Yujun

On Sat, May 21, 2016 at 1:10 AM Michael Krotscheck 
wrote:

> Hi there!
>
> Well, the first thing we need is other reviewers, which is the fastest way
> to become a core :). The project page right now is the README.md file in
> the project itself. The main reason for this is that the target audience -
> javascript engineers - usually find that first via NPM. Most of the Todo
> items there have already been done, actually, so the next step would be to
> really identify what this project needs to accomplish, group it into major
> categories, and start working on it. Off the top of my head, here's a list:
>
>
>1. Dependency synchronization: Keep a list of semver
>global-dependencies.json at the root of the project, and update a project's
>dependencies if the versions are out of sync.
>2. Eslint invocation. Infra's Common Testing Interface states that all
>javascript projects must support 'npm run lint', using
>eslint-config-openstack. The generator should add/update this to any
>project it's run in.
>3. nsp invocation. Not strictly necessary, but a postinstall scan of
>the project for publicly known vulnerabilities is always a good thing.
>
> After these pieces, the next step becomes more complicated, as we need to
> choose whether the user is creating a web application, or a node
> application. This then allows us to switch out which test harness and
> runner we're using, so that the `npm test` command can be consistent. Once
> this lands, we can start talking about project src/dist directories, how to
> best use gulp in each project type, and actual project templates :).
>
> Is there something in particular you'd like to work on?
>
> Michael
>
>
> On Thu, May 19, 2016 at 12:39 AM Zhang Yujun 
> wrote:
>
>> Hi, Michael,
>>
>> I have experience with several JavaScript projects; please let me know
>> how I can help on this project.
>>
>> Is there a project page?
>>
>> Or shall we get started with Gerrit review?
>>
>> --
>> Yujun
>>
>> On Wed, May 18, 2016 at 11:45 PM Michael Krotscheck 
>> wrote:
>>
>>> Hello everyone!
>>>
>>> The js-generator-openstack project has been incubated under
>>> openstack-infra, and is seeking contributors (and cores). The purpose of
>>> the project is as follows:
>>>
>>>- Help manage common project configuration aspects, such as
>>>licenses, gerrit, authors, and more.
>>>- Assist in keeping dependencies up-to-date and synchronized across
>>>javascript projects (JS equivalent of global requirements).
>>>- Provide all the necessary hooks for OpenStack's JavaScript Common
>>>Testing Interface.
>>>- Suggest common tools to use for tasks such as linting, unit
>>>testing, functional testing, and more.
>>>- (Newton Stretch) Provide a quick way of bootstrapping a new
>>>CORS-consuming OpenStack UI.
>>>
>>> I'm looking for help- firstly, because right now I'm the only person
>>> who's willing to review JavaScript amongst the various infra cores, and I'd
>>> really like more eyeballs on this project. Secondly, because I know that
>>> I'm not the only person who has opinions about how we should be doing
>>> JavaScript things.
>>>
>>> Come on over to
>>> https://review.openstack.org/#/q/project:openstack-infra/js-generator-openstack+status:open
>>>  and
>>> help me out, would ya? If you've got questions, I'm active in the
>>> #openstack-javascript channel.
>>>
>>> Michael
>>>

[openstack-dev] [glance] Focus for week R-19 May 23-27

2016-05-20 Thread Nikhil Komawar
Hi team,


The focus for week R-19 is as follows:


* Mon-Wed: please focus on reviewing specs and lite-specs. Remember that the
soft-spec freeze is approaching (R-16); this is when we decide which
specs are likely to be merged in Newton. So, keep reviewing specs. I have
pointed out to many of you earlier, and in ML email, that the more
you review, the more likely you are to get reviews. So, keep those
reviews going.


* Thurs-Fri: please focus on anything that is specific to the newton-1
release; any important bug, any lite-spec that
we'd target (to see if we can expedite the process work too), etc.
Our newton-1 target is Tuesday May 31st, although the newton-1 general
deadline is Thur Jun 2 in R-18. We need to get reviews that are intended
for newton-1 merged by Friday, and the following Monday is kept open to
help alleviate last-minute issues, additions, etc.


* Help review and provide feedback on any of the process updates I ask
for feedback on, as we progress through the week.




Reference for the week numbers:
http://releases.openstack.org/newton/schedule.html


(Unlike some of my emails that are meant to be notices, this email is OK for
discussion.)


As always, feel free to reach out for any questions or concerns.

-- 

Thanks,
Nikhil




[openstack-dev] [glance] Best practices for glance weekly team meetings

2016-05-20 Thread Nikhil Komawar
Hi team,


I have noticed lately that people were reluctant to pay attention to the
proposed agenda; either there were too many proposals
or one too many last-minute additions. We also did not have any process
to effectively manage the agenda already proposed.  So, as discussed at
this week's team meeting [1], I've added a few best practices about
adding agenda items to our weekly meeting's etherpad [2], and there's a
bit of protocol to be followed (below). Please note them carefully, as I
will be paying close attention to those who do not follow them and will
call them out if needed.


Please note the following carefully:


* We will be limiting the number of agenda topics per week and there
will be indication in the agenda etherpad about the same.


* Be mindful of others' time; type short, precise messages, especially
when it comes to updates or review requests.


* Indicate clearly when you are done.


* Try to stay on topic and don't clutter the open discussions, rather
try to propose a topic beforehand.


* Keep the focus for the week to stay on track; instead of asking for more
reviews, give reviews that are important for that week. Raise important
questions and discuss findings from that week.


[1]
http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-05-19-14.00.log.html#l-25

[2] https://etherpad.openstack.org/p/glance-team-meeting-agenda

-- 

Thanks,
Nikhil




[openstack-dev] [glance] [defcore] [interop] Proposal for a virtual sync dedicated to Import Refactor May 26th

2016-05-20 Thread Nikhil Komawar
Hello all,


I want to propose having a dedicated virtual sync next week, Thursday May
26th at 1500 UTC, for one hour on the Import Refactor work [1] ongoing in
Glance. We are making a few updates to the spec, so it would be good to
have everyone on the same page and soon start merging those spec changes.


Also, I would like for this sync to be a cross-project one so that all the
different stakeholders are aware of the updates to this work, even if you
just want to listen in.


Please vote with +1, 0, -1. Also, if the time doesn't work please
propose 2-3 additional time slots.


We can decide later on the tool and I will setup agenda if we have
enough interest.


[1]
http://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html


-- 

Thanks,
Nikhil




Re: [openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-20 Thread Jay Pipes

+1 on all your suggestions below, Sean.

-jay

On 05/20/2016 08:05 AM, Sean Dague wrote:

There are a number of changes up for spec reviews that add parameters to
LIST interfaces in Newton:

* keypairs-pagination (MERGED) -
https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
* os-instances-actions - https://review.openstack.org/#/c/240401/
* hypervisors - https://review.openstack.org/#/c/240401/
* os-migrations - https://review.openstack.org/#/c/239869/

I think that limit / marker is always a legit thing to add, and I almost
wish we just had a single spec which is "add limit / marker to the
following APIs in Newton"
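
For illustration, here is a hypothetical client-side loop showing the
limit / marker pattern these specs are adding; the list call and field
names are placeholders, not any real SDK method:

    # Hypothetical sketch of limit/marker pagination from the client side.
    def fetch_all(list_keypairs, page_size=50):
        results, marker = [], None
        while True:
            # Ask the server for at most 'limit' items, starting after 'marker'.
            page = list_keypairs(limit=page_size, marker=marker)
            results.extend(page)
            if len(page) < page_size:
                break  # short page: nothing left on the server
            marker = page[-1]['id']  # resume after the last item we saw
        return results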

Most of these came in with sort_keys as well. We currently don't have
schema enforcement on sort_keys, so I don't think we should add any more
instances of it until we scrub it. Right now sort_keys is mostly a way
to generate a lot of database load because users can sort by things not
indexed in your DB. We really should close that issue in the future, but
I don't think we should make it any worse. I have -1s on
os-instance-actions and hypervisors for that reason.

os-instances-actions and os-migrations are time based, so they are
proposing a changes-since. That seems logical and fine. Date seems like
the natural sort order for those anyway, so it's "almost" limit/marker,
except from end not the beginning. I think that in general changes-since
on any resource which is time based should be fine, as long as that
resource is going to natural sort by the time field in question.

So... I almost feel like this should just be soft policy at this point:

limit / marker - always ok
sort_* - no more until we have a way to scrub sort (and we fix weird
sort key issues we have)
changes-since - ok on any resource that will natural sort with the
updated time


That should make proposing these kinds of additions easier for folks,

-Sean





Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-20 Thread Nikhil Komawar


On 5/20/16 5:23 PM, Jeremy Stanley wrote:
> On 2016-05-20 16:47:50 -0400 (-0400), Nikhil Komawar wrote:
> [...]
>> Please note, however, after talking with Brian and Hemanth, Hemanth has
>> signed up to be Glance liaison to the VMT team along with me. I've
>> updated the wiki:
>> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
> [...]
>
> Awesome--thanks! I'm always glad to see more people interested in
> fixing security vulnerabilities in OpenStack.

I do share the sentiment.


-- 

Thanks,
Nikhil



Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-20 Thread Jeremy Stanley
On 2016-05-20 16:47:50 -0400 (-0400), Nikhil Komawar wrote:
[...]
> Please note, however, after talking with Brian and Hemanth, Hemanth has
> signed up to be Glance liaison to the VMT team along with me. I've
> updated the wiki:
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
[...]

Awesome--thanks! I'm always glad to see more people interested in
fixing security vulnerabilities in OpenStack.
-- 
Jeremy Stanley



[openstack-dev] [glance] [VMT] [security] NOTICE: Current structure of the glance-core-sec team

2016-05-20 Thread Nikhil Komawar
Hi all,


There have been some recent changes to the glance-core-sec team and
liaisons. So, I wanted to send this notice out for people to be aware
and know who to reach out to if they have any issues to discuss.


The current glance-core-sec team includes:

* Brian Rosmaita

* Flavio Percoco

* Hemanth Makkapati

* Kairat Kushaev

* Nikhil Komawar


The current Glance liaisons to the VMT team have been updated here
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management
.

-- 

Thanks,
Nikhil




Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-20 Thread Nikhil Komawar


On 5/12/16 5:38 PM, Nikhil Komawar wrote:
> Comments, alternate proposal inline.
>
>
>
> On 5/12/16 8:35 AM, Jeremy Stanley wrote:
>> On 2016-05-11 23:39:58 -0400 (-0400), Nikhil Komawar wrote:
>>> I would like to propose adding add Brian to the team.
>> [...]
>>
>> I'm thrilled to see Glance adding more security-minded reviewers for
>> embargoed vulnerability reports! One thing to keep in mind though is
>> that you need to keep the list of people with access to these
>> relatively small; I see
>> https://launchpad.net/~glance-coresec/+members has five members now.
> Thanks for raising this. Yes, we are worried about it too. But as you
> bring it up, it becomes even more important. A lot of Glancers time
> share with other projects and lack bandwidth to contribute fully to this
> responsibility. Currently, I do not know if anyone can be rotated out as
> we have had pretty good input from all the folks there.
>
>> While the size I picked in item #2 at
>> > https://governance.openstack.org/reference/tags/vulnerability_managed.html#requirements
>>  >
>> is not meant to be a strict limit, you may still want to take this
>> as an opportunity to rotate out some of your less-active reviewers
>> (if there are any).
>>
>>
> Thanks for not being strict on it.
>
> I do however, want to make another proposal:
>
>
> Since Stuart is our VMT liaison and he's on hiatus, can we add Brian as
> his substitute. As soon as Stuart is back and is ready to shoulder this
> responsibility we should do the rotation.


As per the proposal, with +1s and no objections raised, I've made the
substitution.

Please note, however, after talking with Brian and Hemanth, Hemanth has
signed up to be Glance liaison to the VMT team along with me. I've
updated the wiki:
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management


>
> Please vote +1, 0, -1.
>
> I will consider final votes by Thur May 19 2100 UTC.
>

-- 

Thanks,
Nikhil




Re: [openstack-dev] [Zaqar] Nominate XiYuan Wang and Hao Wang for Zaqar core

2016-05-20 Thread Victoria Martínez de la Cruz
+1

Thanks for all your work, wxy and wanghao.

2016-05-20 15:11 GMT-03:00 Eva Balycheva :

> +1
>
> Yes. I also noticed that they are good at finding and fixing bugs. They
> explore Zaqar and question everything. They don't close their eyes on
> problems. And I like it.
>
> Hao and XiYuan, thank you for your contributions. Let's continue working
> together. =)
>
> On 05/19/2016 02:53 PM, Fei Long Wang wrote:
>
>> Hi team,
>>
>> I would like to propose adding XiYuan Wang(wxy) and Hao Wang(wanghao) for
>> the Zaqar core team. They have been awesome contributors since joining the
>> Zaqar team about 6 months ago. And now they are currently the most active
>> non-core reviewers on Zaqar projects for the last 90 days[1]. XiYuan has
>> great technical expertise and contributed many high quality patches. Hao
>> has got an good eye for review and contributed some wonderful patches. I'm
>> sure they would make excellent addition to the team. If no one objects,
>> I'll proceed and add them in a week from now.
>>
>> [1]http://stackalytics.com/report/contribution/zaqar-group/90
>>
>>
>


[openstack-dev] [glance] NOTICE: Glance midcycle meetup for Newton CANCELLED

2016-05-20 Thread Nikhil Komawar
Hi,


Please note that the Glance midcycle meetup for Newton has been
cancelled as per the agreement [1] at the Glance weekly meeting that
happened Thursday May 19th. The meetup agenda [2] etherpad has been
updated to reflect the same.


Note to wiki moderators: The Sprints wiki [3] has been updated as well.


[1]
http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-05-19-14.00.log.html#l-131

[2] https://etherpad.openstack.org/p/newton-glance-midcycle-meetup

[3] https://wiki.openstack.org/wiki/Sprints

-- 

Thanks,
Nikhil




Re: [openstack-dev] [neutron] DHCP Agent Scheduling for Segments

2016-05-20 Thread Brandon Logan
On Thu, 2016-05-19 at 14:16 -0600, Carl Baldwin wrote:
> On Wed, May 18, 2016 at 1:36 PM, Kevin Benton  wrote:
> >>I may have wrongly assumed that segments MAY have the possibility of being
> >> l2 adjacent, even if the entire network they are in is not, which would 
> >> mean
> >> that viewing and scheduling these in the context of a segment could be
> >> useful.
> >
> > Segments could be L2 adjacent, but I think it would be pretty uncommon for a
> > DHCP agent to have access to multiple L2 adjacent segments for the same
> > network. But even if that happens, the main use case I see for the scheduler
> > API is taking networks off of dead agents, agents going under maintenance,
> > or agents under heavy load. With the introduction of segments, all of those
> > are still possible via the network-based API.
> 
> I think I agree with this.  Let's not change the API at all to begin
> with.  I do think this means that the current API should work with or
> without segments.  I'm not sure that the current approach of doing
> scheduling for segments completely independently of scheduling for
> networks works for this.  Does it?
> 

I still think it does, but we can make it work without making them
separate.  My original plan was to keep them together, but that ended up
causing some unclean code and also the possibility of requiring an
interface change, which would break out-of-tree schedulers like bgp,
that just got moved out of tree.  If I can devise an alternative to
breaking that interface, then I'll go forward without separate
schedulers.

> >>Do you feel like it'd be beneficial to show what segment a dhcp agent is
> >> bound to in the API?
> >
> > Probably useful in some cases. This will already be possible by showing the
> > port details for the DHCP agent's port, but it might be worth adding in just
> > to eliminate the extra steps.
> 
> ++

This one is a lower priority, but I agree it could be beneficial.

> 
> Carl
> 


Re: [openstack-dev] [Neutron][ML2][Routed Networks]

2016-05-20 Thread Brandon Logan
On Wed, 2016-05-18 at 15:29 -0600, Carl Baldwin wrote:
> On Wed, May 18, 2016 at 5:24 AM, Hong Hui Xiao  wrote:
> > I update [1] to auto delete dhcp port if there is no other ports. But
> > after the dhcp port is deleted, the dhcp service is not usable. I can
> 
> I think this is what I expect.
> 
> > resume the dhcp service by adding another subnet, but I don't think it is
> > a good way. Do we need to consider bind dhcp port to another segment when
> > deleting the existing one?
> 
> Where would you bind the port?  DHCP requires L2 connectivity to the
> segment which it serves.  But, you deleted the segment.  So, it makes
> sense that it wouldn't work.
> 
> Brandon is working on DHCP scheduling which should take care of this.
> DHCP should be scheduled to all of the segments with DHCP enabled
> subnets.  It should have a port for each of these segments.  So, if a
> segment (and its ports) are deleted, I think the right thing to do is
> to make sure that DHCP scheduling removes DHCP from that segment.  I
> would expect this to happen automatically when the subnet is deleted.
> We should check with Brandon to make sure this works (or will work
> when his work merges).

This is definitely something I've thought about. Basically, I'm treating
each segment as its own network, so in this case the rules that apply to
the network will be carried over for each segment with DHCP-enabled
subnets.

> 
> Carl
> 
> > [1] https://review.openstack.org/#/c/317358


Thanks,
Brandon


Re: [openstack-dev] 答复: [Heat][Glance] Can't migrate to glance v2 completely

2016-05-20 Thread Monty Taylor
On 05/20/2016 01:03 PM, Clint Byrum wrote:
> Excerpts from Erno Kuvaja's message of 2016-05-20 13:20:11 +0100:
>>
>> The only reliable way to create Glance images for consumption in general
>> manner is to make sure that we use the normal workflows (currently
>> uploading the image to Glance and in future the supported manners of Image
>> Import) and let Glance and Glance only to deal with it's backends.
>>
> 
> Sounds good to me, Glance needs to be the gateway for images, not
> anything else.
> 
> I wonder if the shade library would be useful here.
> 
> If Heat were to use shade, which hides the complexities of not only
> v1 vs v2, but also v2 with import vs. v2 with upload through glance,
> then one could have a fairly generic image type.
> 
> Ansible has done this, which is quite nice, because now ansible users
> can be confident that whatever OpenStack cloud they're interacting with,
> they can just use the os_image module with the same arguments.
> 
> Anyway, if shade isn't used, Heat will need to do something similar,
> which is to provide template users a way to port their templates to the
> API's available. Perhaps the provider template system could be used so
> each Heat operator can provide a single "image upload" template snippet
> that gets used.

Go look at create_image in shade and then at all of the different
sub-methods it calls, and you'll get a good sense of all of the edge cases.

> That would bring on a second question, which is how does one even upload
> a large file to Heat. This is non-trivial, and I think the only way Heat
> can reasonably support this is by handing the user a Swift tempurl in
> the create/update stack API whenever a file is referenced. That swift
> object would then be used by the engine to proxy that file into glance if
> import isn't supported, or if it is, to tell glance where to import from.
> 


Re: [openstack-dev] Neutron: Octavia: LBaaS: RFE created for DVR support for unbound allowed_address_pair port with FIP which are associated with multiple VMs that are active.

2016-05-20 Thread Brandon Logan
Thanks for putting that up, very detailed!

On Thu, 2016-05-19 at 16:43 +, Vasudevan, Swaminathan (PNB
Roseville) wrote:
> Hi Folks,
> 
There have recently been a lot of requests for Neutron DVR to support an
unbound allowed_address_pair port with a FIP that is associated with
multiple ACTIVE VMs, in order to provide high availability to the VMs.
> 
>  
> 
> This use case is being heavily used by Octavia.
> 
>  
> 
Based on the request and the current DVR design, I have put together an
RFE, and I need the community's input, feedback, and thoughts to proceed
further.
> 
>  
> 
> https://bugs.launchpad.net/neutron/+bug/1583694
> 
>  
> 
Please provide your feedback and thoughts on the RFE.
> 
>  
> 
> Thanks.
> 
>  
> 
> Swaminathan Vasudevan
> 
> Systems Software Engineer (TC)
> 
>  
> 
>  
> 
> HP Networking
> 
> Hewlett-Packard
> 
> 8000 Foothills Blvd
> 
> M/S 5541
> 
> Roseville, CA - 95747
> 
> tel: 916.785.0937
> 
> fax: 916.785.1815
> 
> email: swaminathan.vasude...@hp.com
> 
>  
> 
>  
> 
> 


Re: [openstack-dev] [Zaqar] Nominate XiYuan Wang and Hao Wang for Zaqar core

2016-05-20 Thread Eva Balycheva

+1

Yes. I also noticed that they are good at finding and fixing bugs. They 
explore Zaqar and question everything. They don't close their eyes to 
problems. And I like it.


Hao and XiYuan, thank you for your contributions. Let's continue working 
together. =)


On 05/19/2016 02:53 PM, Fei Long Wang wrote:

Hi team,

I would like to propose adding XiYuan Wang(wxy) and Hao Wang(wanghao) 
for the Zaqar core team. They have been awesome contributors since 
joining the Zaqar team about 6 months ago. And now they are currently 
the most active non-core reviewers on Zaqar projects for the last 90 
days[1]. XiYuan has great technical expertise and contributed many 
high quality patches. Hao has got an good eye for review and 
contributed some wonderful patches. I'm sure they would make excellent 
addition to the team. If no one objects, I'll proceed and add them in 
a week from now.


[1]http://stackalytics.com/report/contribution/zaqar-group/90






Re: [openstack-dev] [infra] [tracking] Renames and verification; was Re: ceilometer-specs submodule path is invalid

2016-05-20 Thread Jeremy Stanley
On 2016-05-20 09:52:39 +0800 (+0800), Gerard Braad wrote:
[...]
> This process seems very prone to human-error. Let's hope this
> would happen less often in the future with the instructions. Is
> there a way to verify this?

It's a bit messy, but updates to that file have only ever been
infrequent, manual and best-effort. I simply performed a quick
comparison between the projects in the .gitmodules file and the
project list in Gerrit to find any references to nonexistent repos.
It's under the control of the Release Managers, so they can weigh in
on whether there's a necessity to update it through automation vs
sticking with the current manual process.

> It seems jenkins updates the information, but this happens from a
> working copy and not a clean checkout / git submodule update
> --init.

I'm not sure why "it seems jenkins updates the information" as it
definitely doesn't. You can see that file's full commit history at
http://git.openstack.org/cgit/openstack/openstack/log/.gitmodules
(such as it is), updated once or twice a year by a total of three
people over the entirety of its lifespan... four once my fix is
approved.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [Heat][Glance] Can't migrate to glance v2 completely

2016-05-20 Thread Clint Byrum
Excerpts from Erno Kuvaja's message of 2016-05-20 13:20:11 +0100:
> 
> The only reliable way to create Glance images for consumption in general
> manner is to make sure that we use the normal workflows (currently
> uploading the image to Glance and in future the supported manners of Image
> Import) and let Glance and Glance only to deal with it's backends.
> 

Sounds good to me, Glance needs to be the gateway for images, not
anything else.

I wonder if the shade library would be useful here.

If Heat were to use shade, which hides the complexities of not only
v1 vs v2, but also v2 with import vs. v2 with upload through glance,
then one could have a fairly generic image type.

Ansible has done this, which is quite nice, because now ansible users
can be confident that whatever OpenStack cloud they're interacting with,
they can just use the os_image module with the same arguments.
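
For illustration, the kind of call shade exposes looks roughly like this
(the cloud name and file name below are made up; this is a sketch of the
library's behaviour, not a proposal for how Heat would wire it in):

import shade

# shade works out which Glance upload path the cloud supports
# (v1, v2, or task-based import) so the caller does not have to.
cloud = shade.openstack_cloud(cloud='mycloud')
image = cloud.create_image('my-image',
                           filename='my-image.qcow2',
                           disk_format='qcow2',
                           container_format='bare',
                           wait=True)
print(image.id)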

Anyway, if shade isn't used, Heat will need to do something similar,
which is to provide template users a way to port their templates to the
APIs available. Perhaps the provider template system could be used so
each Heat operator can provide a single "image upload" template snippet
that gets used.

That brings up a second question: how does one even upload a large file
to Heat? This is non-trivial, and I think the only way Heat
can reasonably support this is by handing the user a Swift tempurl in
the create/update stack API whenever a file is referenced. That swift
object would then be used by the engine to proxy that file into glance if
import isn't supported, or if it is, to tell glance where to import from.
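
A rough sketch of the tempurl half of that idea, using python-swiftclient's
helper (the account, container, object and key below are placeholders, and
this only illustrates the mechanism, not a worked-out Heat API):

from swiftclient import utils as swift_utils

# Mint a PUT-able tempurl the user could upload the large file to; the
# engine would later read the object back (or hand its location to glance).
path = '/v1/AUTH_demo/heat-staging/my-template-artifact'
url_path = swift_utils.generate_temp_url(path, seconds=3600,
                                         key='my-temp-url-key', method='PUT')
print('https://swift.example.com' + url_path)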

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Zaqar messages standardization

2016-05-20 Thread Dan Prince
On Fri, 2016-05-20 at 17:52 +0200, Jiri Tomasek wrote:
> Hey all,
> 
> I've been recently working on getting the TripleO UI integrated with
> Zaqar, so it can receive messages from Mistral workflows and act
> upon them without having to do various polling hacks.
> 
> Since there is currently quite a large number of new TripleO
> workflows coming to tripleo-common, we need to standardize this
> communication so clients can consume the messages consistently.
> 
> I'll try to outline the requirements as I see it to start the
> discussion.
> 
> Zaqar queues:
> To listen to the Zaqar messages it requires the client to connect to
> Zaqar WebSocket, send authenticate message and subscribe to queue(s)
> which it wants to listen to. The currently pending workflow patches
> which send Zaqar messages [1, 2] expect that the queue is created by
> client and name is passed as an input to the workflow [3].
> 
> From the client perspective, it would IMHO be better if all workflows
> sent messages to the same queue and provided a means to identify
> themselves by carrying the workflow name and execution id. The reason is
> that if a client creates a queue and triggers the workflow and then
> disconnects from the socket (user refreshes browser), it does not know
> what queues it previously created and which it should listen to. If there
> is a single 'tripleo' queue, then all clients always know that that is
> where they will get all the messages from.

I think each workflow that supports queue messages (probably most of
them) should allow you to set your own queue_name that will get
messages posted to it. Then it would simply be a convention that the
client pass the same queue name to any concurrent workflows that
are executed.

The single queue -> multiple workflows use case is however important to
support for the UI so adding the execution_id and fully qualified
workflow name to each queue message should allow both patterns to work
fine.

And while the queue name is configurable perhaps we default it to
'tripleo' so that you really don't have to set it anywhere unless you
really want to.

If you buy this I can update the patches linked below per the latest
feedback.

Dan


> 
> Messages identification and content:
> The client should be able to identify a message by its name so it can
> act upon it. The name should probably be relevant to the action or
> workflow it reports on.
> 
> { 
>   body: {
>     name: 'tripleo.validations.v1.run_validation',
>     execution_id: '123123123',
>     data: {}
>   }
> }
> 
> Other parts of the message are optional, but it would be good to
> provide information relevant to the message's purpose, so the client
> can update the relevant state and does not have to do any additional API
> calls. So e.g. in the case of running a validation, the message includes
> the validation id.
>  
> 
> [1] https://review.openstack.org/#/c/313953/2/workbooks/deployment.ya
> ml
> [2] https://review.openstack.org/#/c/313632/8/workbooks/validations.y
> aml
> [3] https://review.openstack.org/#/c/313957/1/tripleoclient/v1/overcl
> oud_execute.py
> 
> -- Jirka
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Newton priorities and primary contacts

2016-05-20 Thread Bruno Cornec

Hello,

Ruby Loo said on Wed, May 04, 2016 at 02:29:34PM -0400:

If you are interested in a particular feature on this list, or would like
to contribute code towards that end, or would like to help review, or test,
that is wonderful and I thank you so much for your interest and desire to
contribute. However, I would really like to see primary contacts do more,
as described above. To take responsibility and (try to) commit the
time/effort it will take to see this priority items through.
[1] https://etherpad.openstack.org/p/ironic-newton-summit-priorities


I see in the priority list that "redfish support" is still highly wanted 
(4 +1).
And we are still very interested in providing that. It took much more time 
than expected, as we first decided that we wanted a good low-level library 
to help us talk to the management board of systems following the Redfish 
standard.


Now that we have this [1] much more in place than last year [2], and due 
to some nascent customer demand, we would like to come back to this 
community and propose working with you on providing this feature in future 
Ironic releases.


We're not the most proficient OpenStack contributors as of now, so we will 
need your help and guidance with both process and code aspects.
And as our knowledge of the internals of Ironic is still weak, we may have 
difficulty describing precisely in a blueprint what the impact of adding 
that feature will be at the Ironic level.


I understand that this community is now using RFE bugs to follow this type of 
work, and I suppose we need to resubmit a new proposal (IIUC maybe more 
precise, less generic wrt architecture). Is the RFE bug indeed the right place 
to do that (as I understood from [3])? Or should we rather start working at the 
code level to understand how we could hook that feature into the current code 
base (the idea would be to mimic how the iLO driver does it today, to have a 
skeleton of code for our Redfish driver) and then show some code before the 
proposal can be accepted (even if it gets the -2 mentioned in [4])?


Some basic questions:
I'm also a bit lost with terminology: should I call this a redfish driver 
(like an iLO driver) or a redfish module, with drivers being pxe_redfish?
Should I put my proposal in ironic-specs under specs/not-implemented? 
There is no directory for Newton there, so I guess the process changed, but I 
haven't found a doc guiding me on where to put new spec proposals, sorry. 
Meanwhile it's readable at [5].


Let me know your thoughts on this.
Best regards,
Bruno.

[1] https://github.com/bcornec/python-redfish
[2] https://review.openstack.org/184653
[3] http://docs.openstack.org/developer/ironic/dev/code-contribution-guide.html
[4] https://wiki.openstack.org/wiki/Ironic/Specs_Process
[5] 
https://github.com/bcornec/ironic-specs/blob/redfish-spec/specs/liberty/ironic-redfish.rst
--
Open Source Profession, WW Linux Community Lead  http://www.hpintelco.net
HPE EMEA EG Open Source Technology Strategist http://hp.com/go/opensource
FLOSS projects: http://mondorescue.org http://project-builder.org
Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-20 Thread Clint Byrum
Excerpts from Thomas Goirand's message of 2016-05-20 12:42:09 +0200:
> On 05/11/2016 04:17 PM, Dean Troyer wrote:
> > The big difference with Go here is that the dependency work happens at
> > build time, not deploy/runtime in most cases.  That shifts much of the
> > burden to people (theoretically) better suited to manage that work.
> 
> I am *NOT* buying that doing static linking is a progress. We're back 30
> years in the past, before the .so format. It is amazing that some of us
> think it's better. It simply isn't. It's a huge regression, for package
> maintainers, system admins, production/ops, and our final users. The
> only group of people who like it are developers, because they just don't
> need to care about shared library API/ABI incompatibilities and
> regressions anymore.
> 

Static linking is _a_ model. Dynamic linking is _a_ model.

There are aspects of each model, and when you lay different values over
any model, it will appear better or worse depending on those values.

Debian values slow, massively reusable change. Everyone advances at
around the same pace, and as a result, the whole community has a net
positive improvement. This is fantastically useful and is not outdated
in any way IMO. I love my stable OS.

But there is more software being written now than ever before, and that
growth does not have a downward curve. As people write more software,
and the demands on them get more intense, they have fewer reasons to
reuse a wider set of libraries, and have more of a need to reuse a
narrow subset, in a specific way. This gives rise to the continuous
delivery model where one ships a narrow subset all together and tests it
deeply, rather than testing the broader tools in isolation. That means
sometimes they go faster than the rest of the community in one area,
and slower in others. They give up the broad long term efficiency for
short term agility.

That may sound crass, like it's just fast and loose with no regard for the
future. But without the agility, they will just get run over by somebody
else more agile. When somebody chooses this, they're choosing it because
they have to, not because they don't understand what they're giving up.

Whichever model is chosen, it doesn't mean one doesn't care about the
greater community. It simply means one has a set of challenges when
contributing alongside those with conflicting values.

But it's not a regression, it is simply people with a different set of
values, finding the same old solutions useful again for different reasons.

So, I'd urge that we all seek to find some empathy with people in other
positions, and compromise when we can. Debian has done so already,
with a Built-Using helper now for go programs. When libraries update,
one can just rebuild the packages that are built using it. So, rather
than fighting this community, Debian seems to be embracing it. Only
tradition stands in the way of harmony in this case.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [javascript] Seeking contributors, js-generator-openstack

2016-05-20 Thread Michael Krotscheck
Hi there!

Well, the first thing we need is other reviewers, which is the fastest way
to become a core :). The project page right now is the README.md file in
the project itself. The main reason for this is that the target audience -
javascript engineers - usually find that first via NPM. Most of the Todo
items there have already been done, actually, so the next step would be to
really identify what this project needs to accomplish, group it into major
categories, and start working on it. Off the top of my head, here's a list:


   1. Dependency synchronization: Keep a list of semver
   global-dependencies.json at the root of the project, and update a project's
   dependencies if the versions are out of sync.
   2. Eslint invocation. Infra's Common Testing Interface states that all
   javascript projects must support 'npm run lint', using
   eslint-config-openstack. The generator should add/update this to any
   project it's run in.
   3. nsp invocation. Not strictly necessary, but a postinstall scan of the
   project for publicly known vulnerabilities is always a good thing.

After these pieces, the next step becomes more complicated, as we need to
choose whether the user is creating a web application, or a node
application. This then allows us to switch out which test harness and
runner we're using, so that the `npm test` command can be consistent. Once
this lands, we can start talking about project src/dist directories, how to
best use gulp in each project type, and actual project templates :).

Is there something in particular you'd like to work on?

Michael


On Thu, May 19, 2016 at 12:39 AM Zhang Yujun 
wrote:

> Hi, Michael,
>
> I have several project experience in JavaScript and please let me know how
> I could help on this project?
>
> Is there a project page?
>
> Or we shall getting started with gerrit review?
>
> --
> Yujun
>
> On Wed, May 18, 2016 at 11:45 PM Michael Krotscheck 
> wrote:
>
>> Hello everyone!
>>
>> The js-generator-openstack project has been incubated under
>> openstack-infra, and is seeking contributors (and cores). The purpose of
>> the project is as follows:
>>
>>- Help manage common project configuration aspects, such as licenses,
>>gerrit, authors, and more.
>>- Assist in keeping dependencies up-to-date and synchronized across
>>javascript projects (JS equivalent of global requirements).
>>- Provide all the necessary hooks for OpenStack's JavaScript Common
>>Testing Interface.
>>- Suggest common tools to use for tasks such as linting, unit
>>testing, functional testing, and more.
>>- (Newton Stretch) Provide a quick way of bootstrapping a new
>>CORS-consuming OpenStack UI.
>>
>> I'm looking for help- firstly, because right now I'm the only person
>> who's willing to review JavaScript amongst the various infra cores, and I'd
>> really like more eyeballs on this project. Secondly, because I know that
>> I'm not the only person who has opinions about how we should be doing
>> JavaScript things.
>>
>> Come on over to
>> https://review.openstack.org/#/q/project:openstack-infra/js-generator-openstack+status:open
>>  and
>> help me out, would ya? If you've got questions, I'm active in the
>> #openstack-javascript channel.
>>
>> Michael
>>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-20 Thread Fox, Kevin M
Containers are popular right now for the same reason static linking is. But 
they give you something halfway in between. Static linking is really hard to do 
right. Even Go's "static link all the things" is incomplete: if you run ldd on 
a Go binary, it isn't actually static. Containers ensure the stuff inside 
really is statically linked.

So these days, I'd rather use Docker for gaining the static-linking-like 
functionality rather than try really hard to get a single ".exe" that has 0 
dependencies. It's very hard.

That being said, the distros have a big point about deps being handled 
dynamically so that the problem of vulnerability/bug patching is tractable. At 
the moment, Docker or any statically linking thing doesn't have a good solution 
to the problem. The importance of this issue can't be overstated.

It can be fixed, but there isn't a project I'm aware of yet to commonly solve 
it.

Building containers using distro packages is, I think, part of the solution. 
You can then much more easily find which containers are out of date or have 
vulnerabilities, but you still need machinery to scan for updates, rebuild 
containers, and a way to push them out to all the places that need updating.

Until those things are solved, or you have operators that very, very carefully 
pay attention, traditional distros have a strong thing going for them in the 
security realm.

Go's statically linking everything is an anti-feature in my opinion, better 
left to systems like Docker.

Thanks,
Kevin


From: Dean Troyer [dtro...@gmail.com]
Sent: Friday, May 20, 2016 5:48 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] supporting Go

On Fri, May 20, 2016 at 5:42 AM, Thomas Goirand 
> wrote:
I am *NOT* buying that doing static linking is a progress. We're back 30
years in the past, before the .so format. It is amazing that some of us
think it's better. It simply isn't. It's a huge regression, for package
maintainers, system admins, production/ops, and our final users. The
only group of people who like it are developers, because they just don't
need to care about shared library API/ABI incompatibilities and
regressions anymore.

I disagree, there are certainly places static linking is appropriate, however, 
I didn't mention that at all.  Much of the burden with Python dependency at 
install/run time is due to NO linking.  Even with C, you make choices at build 
time WRT what you link against, either statically or dynamically.  Even with 
shared libs, when the interface changes you have to re-link everything that 
uses that interface.  It is not as black and white as you suggest.

And I say that as a user, who so desperately wants an install process for OSC 
to match PuTTY on Windows: 1) copy an .exe; 2) run it.

dt

[Thomas, I have done _EVERY_ one of the jobs above that you listed, as a 
$DAY_JOB, and know exactly what it takes to run production-scale services built 
from everything from vendor packages to house-built source.  It would be nice 
if you refined your argument to stop leaning on static linking as the biggest 
problem since stack overflows.  There are other reasons this might be a bad 
idea, but I sense that you are losing traction fixating on only this one.]

--

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Update_port can not remove allocation from auto-addressed subnets

2016-05-20 Thread Carl Baldwin
On Fri, May 20, 2016 at 6:06 AM, Pavel Bondar  wrote:
> Hi,
>
> Currently, using the update_port workflow, a user cannot remove IP addresses
> from auto-addressed subnets (SLAAC). This prevents me from implementing a
> complete fix for [1].
>
> Typically, to remove an IP address from a port, the 'fixed_ips' list is
> updated and the IP address is cleaned up from it.
> But for auto-addressed subnets, if an IP address is removed from 'fixed_ips'
> and port_update is called, the SLAAC IP is not removed because of [2].
> This area was significantly reworked during Liberty, but the same behavior
> is preserved at least since Kilo [3].
>
> To make subnet deletion comply with the IPAM interface [1], any IP address
> deallocation has to be done via the IPAM interface (update_port), but
> update_port currently skips deallocation of SLAAC addresses.

Just reiterating:  This is tough because the subnet is not yet
deleted, it is about to be deleted.  So, update port doesn't know that
it should release the IPs because the subnet is going away.  It thinks
it should keep them because of the policy for each port to have an IP
on all auto-address subnets.

I also thought of calling IPAM directly from the delete subnet code.
However, this could be problematic because we really want this to go
through the update port code path so that the port is properly updated
with the new set of addresses.  I have a feeling that the port update
path might just reallocate the address for the subnet because of the
same policy.

> So I am looking for advice about a way to allow deallocation of SLAAC
> addresses via update_port.
>
> I see several possible solutions, but they are not ideal, so please let
> me know
> if you see a better solution:
> - Add an additional parameter like 'allow_slaac_deletion' to the update_port
> method, and pass it through
> update_port->update_port_with_ips->_update_ips_for_port->
> _get_changed_ips_for_port to alter the behavior in [2]. It involves changing
> the parameters of the API-exposed method update_port, so I am not sure it
> can be accepted.
> - Another way is to introduce a new state for the 'fixed_ips' list. Currently
> it can have 'subnet_id' and 'ip_address' as keys. We could add a new key like
> 'delete_subnet_id' to force-delete allocations for SLAAC subnets. This way
> there is no need to update parameters for a bunch of methods.

I guess I prefer something like the latter:  an attribute in fixed ips
communicating that certain auto-address subnets are going away and
their addresses should be deallocated.  Maybe someone else will have
another idea.  I can't think of anything better yet.
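
Just to make the latter concrete, a rough sketch of what such an update
could look like (the 'delete_subnet_id' key does not exist today, it is
only the proposed marker, and the client setup and UUIDs are illustrative):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')
body = {'port': {'fixed_ips': [
    # Keep the normal allocation as-is.
    {'subnet_id': 'NORMAL-SUBNET-UUID', 'ip_address': '192.0.2.10'},
    # Hypothetical: tell IPAM to release the SLAAC allocation.
    {'delete_subnet_id': 'SLAAC-SUBNET-UUID'},
]}}
neutron.update_port('PORT-UUID', body)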

Carl

> Please share your thoughts about the ways to fix it.
>
> [1] https://bugs.launchpad.net/neutron/+bug/1564335
> [2]
> https://github.com/openstack/neutron/blob/f494de47fcef7776f7d29d5ceb2cc4db96bd1efd/neutron/db/ipam_backend_mixin.py#L435
> [3]
> https://github.com/openstack/neutron/blob/stable/kilo/neutron/db/db_base_plugin_v2.py#L444

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly IRC meetings

2016-05-20 Thread Ildikó Váncsa
Hi All,

We now have an approved slot for the Cinder-Nova interaction changes meeting 
series. The new slot is __Monday, 1700 UTC__, and it will be on the channel 
__#openstack-meeting-cp__.

Related etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes 
Summary about ongoing items: 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094089.html 

We will have one exception, which is May 30, as it is a US holiday; I will 
announce a temporary slot for that week.

Thanks,
/Ildikó

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas] Multiple back-end support for lbaas v2

2016-05-20 Thread Brandon Logan
What Sergey said is absolutely correct.  Additionally, if a user does
not provide "provider" in the request to create a load balancer, then the
service_provider that is tagged with the default flag will be chosen.
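
For example, the [service_providers] section carries one service_provider
line per driver, with ":default" on the one to be used when no provider is
given (driver class paths copied from the original message; the exact paths
depend on your release):

[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider = LOADBALANCER:radware:neutron_lbaas.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver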

Thanks,
Brandon
On Fri, 2016-05-20 at 12:23 +0300, Sergey Belous wrote:
> Hi.
> 
> 
> Actually, you can specify multiple providers, but these configuration
> directives are repeatable and are not comma-separated. That means you
> should add another service_provider entry in the [service_providers]
> section as a separate line.
> 
> 
> And yes, you can try to pass parameter 'provider' to create a
> loadbalancer of specific driver (according to code of lbaas).
> 
> 
> --
> Best Regards,
> Sergey Belous
> 
> > On 20 May 2016, at 11:47, Wilence Yao  wrote:
> > 
> > 
> > 
> > Hi all,
> > 
> > 
> > Can I enable multiple service_providers for lbaas v2  at the same
> > time?
> > such as
> > 
> > 
> > ```
> > service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default,
> >  
> > LOADBALANCER:radware:neutron_lbaas.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver
> > ```
> > 
> > 
> > Then pass parameter 'provider' to create a loadbalancer of specific
> > driver
> > 
> > 
> > ```
> > neutron lbaas-loadbalancer-create --provider radware
> > 
> > ```
> > 
> > 
> > Thanks for any help
> > 
> > 
> > Wilence Yao
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Zaqar messages standardization

2016-05-20 Thread Jiri Tomasek
Hey all,

I've been recently working on getting the TripleO UI integrated with Zaqar,
so it can receive messages from Mistral workflows and act upon them
without having to do various polling hacks.

Since there is currently quite a large number of new TripleO workflows
coming to tripleo-common, we need to standardize this communication so
clients can consume the messages consistently.

I'll try to outline the requirements as I see it to start the discussion.

Zaqar queues:
To listen to the Zaqar messages it requires the client to connect to Zaqar
WebSocket, send authenticate message and subscribe to queue(s) which it
wants to listen to. The currently pending workflow patches which send Zaqar
messages [1, 2] expect that the queue is created by client and name is
passed as an input to the workflow [3].

From the client perspective, it would IMHO be better if all workflows sent
messages to the same queue and provided a means to identify themselves by
carrying the workflow name and execution id. The reason is that if a client
creates a queue and triggers the workflow and then disconnects from the
socket (user refreshes browser), it does not know what queues it previously
created and which it should listen to. If there is a single 'tripleo' queue,
then all clients always know that that is where they will get all the
messages from.

Message identification and content:
The client should be able to identify a message by its name so it can act
upon it. The name should probably be relevant to the action or workflow it
reports on.

{
  body: {
    name: 'tripleo.validations.v1.run_validation',
    execution_id: '123123123',
data: {}
  }
}

Other parts of the message are optional, but it would be good to provide
information relevant to the message's purpose, so the client can update the
relevant state and does not have to do any additional API calls. So e.g. in
the case of running a validation, the message includes the validation id.
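
A rough sketch of what dispatching on those fields could look like on the
client side (the handler registry and handler names below are made up
purely for illustration):

import json

HANDLERS = {}

def handles(name):
    """Register a handler for messages carrying the given 'name' field."""
    def register(func):
        HANDLERS[name] = func
        return func
    return register

@handles('tripleo.validations.v1.run_validation')
def on_run_validation(execution_id, data):
    print('validation update for execution %s: %s' % (execution_id, data))

def dispatch(raw_message):
    # raw_message is the JSON text received over the Zaqar WebSocket.
    body = json.loads(raw_message)['body']
    handler = HANDLERS.get(body.get('name'))
    if handler:
        handler(body.get('execution_id'), body.get('data', {}))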


[1] https://review.openstack.org/#/c/313953/2/workbooks/deployment.yaml
[2] https://review.openstack.org/#/c/313632/8/workbooks/validations.yaml
[3]
https://review.openstack.org/#/c/313957/1/tripleoclient/v1/overcloud_execute.py

-- Jirka
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-20 Thread Markus Zoeller
On 20.05.2016 16:51, Ben Nemec wrote:
> On 05/20/2016 04:26 AM, Markus Zoeller wrote:
>> On 05/19/2016 09:18 PM, Ben Nemec wrote:
>>> On 05/17/2016 07:27 PM, Matt Fischer wrote:

  If config sample files are being used as a living document then that
  would be a reason to leave the deprecated options in there. In my
  experience as a cloud deployer I never once used them in that manner
  so it didn't occur to me that people might, hence my question to the
  list.

  This may also indicate that people aren't checking release notes as
  we hope they are. A release note is where I would expect to find
  this information aggregated with all the other changes I should be
  aware of. That seems easier to me than aggregating that data myself
  by checking various sources.



 One way to think about this is that the config file has to be accurate
 or the code won't work, but release notes can miss things with no
 consequences other than perhaps an annoyed operator. So they are sources
 of truth about the state of options in a release or branch.
>>>
>>> On this note, I had another thought about an alternative way to handle
>>> this.  What if we generated one sample file without deprecated opts, and
>>> another with them (either exclusively, or in addition to all the other
>>> opts)?  That way there's a deprecation-free version for new deployers
>>> and one for people who want to see all the current deprecations.
>>>
>>
>> I'm not sure if it is well known that the "configuration reference" 
>> manual provides a summary page of new, updated and deprecated config 
>> options at [1].
> 
> Ah, I thought I had heard something about that but I couldn't find it.
> I wonder if we could link it from
> http://docs.openstack.org/developer/nova/sample_config.html
> 

Sure, should be a simple patch to [1].

>> Also, the release notes already have a section to announce the 
>> deprecation of a config option, and this should be the source of truth 
>> IMO. From Nova I can tell that it is part of the normal reviews to 
>> ensure that a reno file (with a good explanation) is part of the change 
>> when deprecating something (see an updated-per-commit version at [2]). 
>> Introducing yet another way of telling people what's deprecated would 
>> weaken the position of the release notes which I'd like to avoid.
> 
> The problem is that the ultimate source of truth is the code, and since
> the sample config is generated from the code it's the only method not
> subject to human error.  As I said earlier in the thread, I personally
> agree that this is release note information and would prefer to rely on
> the logged deprecation warning to address the human error case, but at
> the same time I can understand that some people are not okay with that
> so I'm trying to find an alternative that both cleans up the sample
> config and still leaves the deprecations somewhere for people to find.
> 
>>
>> References:
>> [1] 
>> http://docs.openstack.org/mitaka/config-reference/tables/conf-changes/nova.html
>> [2] 
>> http://docs.openstack.org/releasenotes/nova/unreleased.html#deprecation-notes
>>



  Anyways, I have no strong cause for removing the deprecated options.
  I just wondered if it was a low hanging fruit and thought I would ask.


 It's always good to have these kind of conversations, thanks for
 starting it.

References:
[1]
https://github.com/openstack/nova/blob/master/doc/source/sample_config.rst

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-20 Thread Adam Young

On 05/20/2016 08:48 AM, Dean Troyer wrote:
On Fri, May 20, 2016 at 5:42 AM, Thomas Goirand > wrote:


I am *NOT* buying that doing static linking is a progress. We're
back 30
years in the past, before the .so format. It is amazing that some
of us
think it's better. It simply isn't. It's a huge regression, for
package
maintainers, system admins, production/ops, and our final users. The
only group of people who like it are developers, because they just
don't
need to care about shared library API/ABI incompatibilities and
regressions anymore.


I disagree, there are certainly places static linking is appropriate, 
however, I didn't mention that at all. Much of the burden with Python 
dependency at install/run time is due to NO linking.  Even with C, you 
make choices at build time WRT what you link against, either 
statically or dynamically.  Even with shared libs, when the interface 
changes you have to re-link everything that uses that interface.  It 
is not as black and white as you suggest.


And I say that as a user, who so desperately wants an install process 
for OSC to match PuTTY on Windows: 1) copy an .exe; 2) run it.


dt

[Thomas, I have done _EVERY_ one of the jobs above that you listed, as 
a $DAY_JOB, and know exactly what it takes to run production-scale 
services built from everything from vendor packages to house-built 
source.  It would be nice if you refined your argument to stop leaning 
on static linking as the biggest problem since stack overflows.  There 
are other reasons this might be a bad idea, but I sense that you are 
losing traction fixating on only this one.]


Static linking Bad.  We can debate why elsewhere.

Go with dynamic linking is possible, and should be what the 
distributions target. This is a solvable problem.


/me burns bikeshed and installs a Hubcycle/Citibike kiosk.




--

Dean Troyer
dtro...@gmail.com 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] Next Scheduler sub-team meeting

2016-05-20 Thread Ed Leafe
The next meeting of the Nova Scheduler sub-team will be Monday, May 23 at 1400 
UTC.
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20160523T14

The agenda is here: https://wiki.openstack.org/wiki/Meetings/NovaScheduler

Please update the agenda before the meeting with any items you would like to 
discuss.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-20 Thread Ben Nemec
On 05/20/2016 04:26 AM, Markus Zoeller wrote:
> On 05/19/2016 09:18 PM, Ben Nemec wrote:
>> On 05/17/2016 07:27 PM, Matt Fischer wrote:
>>>
>>>  If config sample files are being used as a living document then that
>>>  would be a reason to leave the deprecated options in there. In my
>>>  experience as a cloud deployer I never once used them in that manner
>>>  so it didn't occur to me that people might, hence my question to the
>>>  list.
>>>
>>>  This may also indicate that people aren't checking release notes as
>>>  we hope they are. A release note is where I would expect to find
>>>  this information aggregated with all the other changes I should be
>>>  aware of. That seems easier to me than aggregating that data myself
>>>  by checking various sources.
>>>
>>>
>>>
>>> One way to think about this is that the config file has to be accurate
>>> or the code won't work, but release notes can miss things with no
>>> consequences other than perhaps an annoyed operator. So they are sources
>>> of truth about the state of options in a release or branch.
>>
>> On this note, I had another thought about an alternative way to handle
>> this.  What if we generated one sample file without deprecated opts, and
>> another with them (either exclusively, or in addition to all the other
>> opts)?  That way there's a deprecation-free version for new deployers
>> and one for people who want to see all the current deprecations.
>>
> 
> I'm not sure if it is well known that the "configuration reference" 
> manual provides a summary page of new, updated and deprecated config 
> options at [1].

Ah, I thought I had heard something about that but I couldn't find it.
I wonder if we could link it from
http://docs.openstack.org/developer/nova/sample_config.html

> Also, the release notes already have a section to announce the 
> deprecation of a config option, and this should be the source of truth 
> IMO. From Nova I can tell that it is part of the normal reviews to 
> ensure that a reno file (with a good explanation) is part of the change 
> when deprecating something (see an updated-per-commit version at [2]). 
> Introducing yet another way of telling people what's deprecated would 
> weaken the position of the release notes which I'd like to avoid.

The problem is that the ultimate source of truth is the code, and since
the sample config is generated from the code it's the only method not
subject to human error.  As I said earlier in the thread, I personally
agree that this is release note information and would prefer to rely on
the logged deprecation warning to address the human error case, but at
the same time I can understand that some people are not okay with that
so I'm trying to find an alternative that both cleans up the sample
config and still leaves the deprecations somewhere for people to find.
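
For context, the only deprecation information the sample-config generator
can see is whatever is declared on the option itself, e.g. (a generic
sketch with invented option and group names, not from any real project):

from oslo_config import cfg

OPTS = [
    cfg.StrOpt('scheduler_host',
               deprecated_name='scheduler_hostname',
               deprecated_for_removal=True,
               deprecated_reason='Superseded by the [example] options.',
               help='Host used by the example scheduler service.'),
]

def register_opts(conf):
    # The generator reads the deprecation attributes from these Opt objects.
    conf.register_opts(OPTS, group='example')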

> 
> References:
> [1] 
> http://docs.openstack.org/mitaka/config-reference/tables/conf-changes/nova.html
> [2] 
> http://docs.openstack.org/releasenotes/nova/unreleased.html#deprecation-notes
> 
>>>
>>>
>>>
>>>  Anyways, I have no strong cause for removing the deprecated options.
>>>  I just wondered if it was a low hanging fruit and thought I would ask.
>>>
>>>
>>> It's always good to have these kind of conversations, thanks for
>>> starting it.
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] [antiddos] New project antiddos

2016-05-20 Thread adriano fialho araujo
Hello, fellow developers!

I need to create a new project to sell anti-DDoS through Horizon. What do you
recommend as a first step for the project?
Does anyone know of any documentation for starting a Django project that
follows the OpenStack standard?

thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing Ivan Berezovskiy for puppet-openstack-core

2016-05-20 Thread Anastasia Urlapova
+1

On Fri, May 20, 2016 at 5:07 PM, Ivan Berezovskiy  wrote:

> Thank you all for the trust! I'll do my best :)
>
> 2016-05-19 18:57 GMT+03:00 Alex Schultz :
>
>> +1 great job Ivan
>>
>> On Thu, May 19, 2016 at 8:32 AM, Matt Fischer 
>> wrote:
>>
>>> +1 from me!
>>>
>>> On Thu, May 19, 2016 at 8:17 AM, Emilien Macchi 
>>> wrote:
>>>
 Hi,

 I don't need to introduce Ivan Berezovskiy (iberezovskiy on IRC), he's
 been doing tremendous work in Puppet OpenStack over the last months,
 in a regular way.

 Some highlights about his contributions:
 * Fantastic work on puppet-oslo! I really mean it... Thanks to you and
 others, we have now consistency for Oslo parameters in our modules.
 * Excellent quality of code in general and in reviews.
 * Full understanding of our process (code style, release notes, CI,
 doc, etc).
 * Very often, he helps with CI things (Fuel or Puppet OpenStack CI).
 * Constant presence in IRC meetings and in our channel, where he never
 hesitates to give support.

 I would like to propose him as part of our Puppet OpenStack core team; as
 usual please -1/+1.

 Thanks Ivan for your hard work, keep going!
 --
 Emilien Macchi


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Thanks, Ivan Berezovskiy
> MOS Puppet Team Lead
> at Mirantis 
>
> slack: iberezovskiy
> skype: bouhforever
> phone: + 7-960-343-42-46
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova-lxd]Nova-lxd with Linuxbridge

2016-05-20 Thread Chuck Short
Hi,

Currently it only works with OVS mode; there is a patch lying around for
Linuxbridge agent support that needs to be integrated.

Thanks
chuck

On Fri, May 20, 2016 at 8:47 AM, Gyorgy Szombathelyi <
gyorgy.szombathe...@doclerholding.com> wrote:

> Hi!
>
> I just have a simple question: is nova-lxd supposed to work with the
> Linuxbridge agent?
> As I see, the LXD driver creates veth interface pairs, and vif.py connects
> it to a normal linux bridge. However the Linuxbridge agent code scans only
> for tap devices.
> So the question is: Should the LXD driver create a tap device, bridged
> with that veth?
>
> Br,
> György
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osops-tools-monitoring][monitoring-for-openstack] Code duplication

2016-05-20 Thread Jeremy Stanley
On 2016-05-20 15:28:48 +0200 (+0200), Martin Magr wrote:
[...]
> so from "This import will probably lead to the end of
> monitoring-for-openstack project" it seems that the project deletion
> just was not performed in the end. Is anybody against submitting a
> patch to openstack-infra to delete the project?

It's fine with one slight alteration: we don't (can't really) "delete"
Git repos, we merely "retire" them. The process is outlined at
http://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
when you're ready to proceed.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Proposing Ivan Berezovskiy for puppet-openstack-core

2016-05-20 Thread Ivan Berezovskiy
Thank you all for the trust! I'll do my best :)

2016-05-19 18:57 GMT+03:00 Alex Schultz :

> +1 great job Ivan
>
> On Thu, May 19, 2016 at 8:32 AM, Matt Fischer 
> wrote:
>
>> +1 from me!
>>
>> On Thu, May 19, 2016 at 8:17 AM, Emilien Macchi 
>> wrote:
>>
>>> Hi,
>>>
>>> I don't need to introduce Ivan Berezovskiy (iberezovskiy on IRC), he's
>>> been doing tremendous work in Puppet OpenStack over the last months,
>>> in a regular way.
>>>
>>> Some highlights about his contributions:
>>> * Fantastic work on puppet-oslo! I really mean it... Thanks to you and
>>> others, we have now consistency for Oslo parameters in our modules.
>>> * Excellent quality of code in general and in reviews.
>>> * Full understanding of our process (code style, release notes, CI, doc,
>>> etc).
>>> * Very often, he helps with CI things (Fuel or Puppet OpenStack CI).
>>> * Constant presence in IRC meetings and in our channel, where he never
>>> hesitates to give support.
>>>
>>> I would like to propose him as part of our Puppet OpenStack core team; as
>>> usual please -1/+1.
>>>
>>> Thanks Ivan for your hard work, keep going!
>>> --
>>> Emilien Macchi
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks, Ivan Berezovskiy
MOS Puppet Team Lead
at Mirantis 

slack: iberezovskiy
skype: bouhforever
phone: + 7-960-343-42-46
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Dmitry Tantsur

On 05/20/2016 03:42 PM, John Trowbridge wrote:



On 05/19/2016 09:31 AM, Dmitry Tantsur wrote:

Hi all!

We started some discussions on https://review.openstack.org/#/c/300200/
about the future of node management (registering, configuring and
introspecting) in the new API, but I think it's more fair (and
convenient) to move it here. The goal is to fix several long-standing
design flaws that affect the logic behind tripleoclient. So fasten your
seatbelts, here it goes.

If you already understand why we need to change this logic, just scroll
down to "what do you propose?" section.

"introspection bulk start" is evil
--

As many of you obviously know, TripleO used the following command for
introspection:

 openstack baremetal introspection bulk start

As not everyone knows though, this command does not come from the
ironic-inspector project; it's part of TripleO itself. And the ironic
team has some big problems with it.

The way it works is

1. Take all nodes in "available" state and move them to "manageable" state
2. Execute introspection for all nodes in "manageable" state
3. Move all nodes with successful introspection to "available" state.

Step 3 is pretty controversial, step 1 is just horrible. This is not how
the ironic-inspector team designed introspection to work (hence it
refuses to run on nodes in "available" state), and that's not how the
ironic team expects the ironic state machine to be handled. To explain
it I'll provide a brief information on the ironic state machine.

ironic node lifecycle
-

With recent versions of the bare metal API (starting with 1.11), nodes
begin their life in a state called "enroll". Nodes in this state are not
available for deployment, nor for most other actions. Ironic does not
touch such nodes in any way.

To make nodes alive an operator uses "manage" provisioning action to
move nodes to "manageable" state. During this transition the power and
management credentials (IPMI, SSH, etc) are validated to ensure that
nodes in "manageable" state are, well, manageable. This state is still
not available for deployment. With nodes in this state an operator can
execute various pre-deployment actions, such as introspection, RAID
configuration, etc. So to sum it up, nodes in "manageable" state are
being configured before exposing them into the cloud.

The last step before the deployment is to make nodes "available" using
the "provide" provisioning action. Such nodes are exposed to nova, and
can be deployed to at any moment. No long-running configuration actions
should be run in this state. The "manage" action can be used to bring
nodes back to "manageable" state for configuration (e.g. reintrospection).
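
(For reference, outside of any TripleO tooling these transitions map onto
plain ironic CLI calls, roughly:

  ironic node-set-provision-state <node-uuid> manage
  ironic node-set-provision-state <node-uuid> provide

The exact flags and required API version depend on your ironicclient
release.)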

so what's the problem?
--

The problem is that TripleO essentially bypasses this logic by keeping
all nodes "available" and walking them through provisioning steps
automatically. Just a couple of examples of what gets broken:

(1) Imagine I have 10 nodes in my overcloud, 10 nodes ready for
deployment (including potential autoscaling) and I want to enroll 10
more nodes.

Both introspection and ready-state operations nowadays will touch both
10 new nodes AND 10 nodes which are ready for deployment, potentially
making the latter not ready for deployment any more (and definitely
moving them out of the pool for some time).

Particularly, any manual configuration made by an operator before making
nodes "available" may get destroyed.

(2) TripleO has to disable automated cleaning. Automated cleaning is a
set of steps (currently only wiping the hard drive) that happens in
ironic 1) before nodes are available, 2) after an instance is deleted.
As TripleO CLI constantly moves nodes back-and-forth from and to
"available" state, cleaning kicks in every time. Unless it's disabled.

Disabling cleaning might sound like a sufficient workaround, until you need
it. And you actually do. Here is a real life example of how to get
yourself broken by not having cleaning:

a. Deploy an overcloud instance
b. Delete it
c. Deploy an overcloud instance on a different hard drive
d. Boom.

As we didn't pass cleaning, there is still a config drive on the disk
used in the first deployment. With 2 config drives present cloud-init
will pick a random one, breaking the deployment.

To top it all, TripleO users tend to not use root device hints, so
switching root disks may happen randomly between deployments. Have fun
debugging.

what do you propose?


I would like the new TripleO mistral workflows to start following the
ironic state machine closer. Imagine the following workflows:

1. register: take JSON, create nodes in "manageable" state. I do believe
we can automate the enroll->manageable transition, as it serves the
purpose of validation (and discovery, but let's put that aside).

2. provide: take a list of nodes or all "manageable" nodes and move them
to "available". By using this workflow an operator will make a
*conscious* decision 

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-20 Thread Dean Troyer
On Fri, May 20, 2016 at 8:16 AM, Chris Dent  wrote:

> I don't think language does (or should) have anything to do with it.
>

^1024

Language is what finally forced this discussion (as a prerequisite); now
that we're here, let's finish the prerequisite before going back to what got
us here.


> The question is whether or not the tool (whether service or
> dependent library) is useful to and usable outside the openstack-stack.
> For example gnocchi is useful to openstack but you can use it with other
> stuff, therefore _not_ openstack. More controversially: swift can be
> usefully used all by its lonesome: _not_ openstack.
>

Heh, projects usually see this as a positive.

Not being in OpenStack (where "in" means "of the product") is good
> for OpenStack, good for the project and good for opensource in general:
>

[excellent list of project attributes removed]

I would really hope these could also apply to OpenStack projects, and
understand why they usually don't.  In or out of the tent, those attributes
have at least one strong common thread, the ability to say no when
necessary and follow a specific path.  Self-discipline.  In or out of the
tent, projects without that disintegrate eventually.  I'm posting that list
on the clubhouse wall.

Since we are, by definition in the mission statement, all things to all
people, we got really inclusive in the last year or two.  The pendulum does
appear to be swinging the other way however, somewhat due to the costs of
scale.  Maybe entry into the tent should come with a ticket price to help
defray some of those costs?  Not just direct $$$ (corporate foundation
membership), but dedicated X hours per cycle for documentation or Y hours
for Infra or Z core/time units of cloud resources.  Wait, we do something
like that already?  Maybe it's time for a cost-of-living adjustment...

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread John Trowbridge


On 05/19/2016 09:31 AM, Dmitry Tantsur wrote:
> Hi all!
> 
> We started some discussions on https://review.openstack.org/#/c/300200/
> about the future of node management (registering, configuring and
> introspecting) in the new API, but I think it's more fair (and
> convenient) to move it here. The goal is to fix several long-standing
> design flaws that affect the logic behind tripleoclient. So fasten your
> seatbelts, here it goes.
> 
> If you already understand why we need to change this logic, just scroll
> down to "what do you propose?" section.
> 
> "introspection bulk start" is evil
> --
> 
> As many of you obviously know, TripleO used the following command for
> introspection:
> 
>  openstack baremetal introspection bulk start
> 
> As not everyone knows though, this command does not come from
> ironic-inspector project, it's part of TripleO itself. And the ironic
> team has some big problems with it.
> 
> The way it works is
> 
> 1. Take all nodes in "available" state and move them to "manageable" state
> 2. Execute introspection for all nodes in "manageable" state
> 3. Move all nodes with successful introspection to "available" state.
> 
> Step 3 is pretty controversial, step 1 is just horrible. This is not how
> the ironic-inspector team designed introspection to work (hence it
> refuses to run on nodes in "available" state), and that's not how the
> ironic team expects the ironic state machine to be handled. To explain
> it I'll provide a brief information on the ironic state machine.
> 
> ironic node lifecycle
> -
> 
> With recent versions of the bare metal API (starting with 1.11), nodes
> begin their life in a state called "enroll". Nodes in this state are not
> available for deployment, nor for most other actions. Ironic does not
> touch such nodes in any way.
> 
> To make nodes alive an operator uses "manage" provisioning action to
> move nodes to "manageable" state. During this transition the power and
> management credentials (IPMI, SSH, etc) are validated to ensure that
> nodes in "manageable" state are, well, manageable. This state is still
> not available for deployment. With nodes in this state an operator can
> execute various pre-deployment actions, such as introspection, RAID
> configuration, etc. So to sum it up, nodes in "manageable" state are
> being configured before exposing them into the cloud.
> 
> The last step before the deployment is to make nodes "available" using
> the "provide" provisioning action. Such nodes are exposed to nova, and
> can be deployed to at any moment. No long-running configuration actions
> should be run in this state. The "manage" action can be used to bring
> nodes back to "manageable" state for configuration (e.g. reintrospection).
> 
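For illustration only, the same lifecycle can be driven for a single node with
the standard clients, roughly like this (the node UUID is a placeholder and the
exact client syntax of this era is assumed):

  ironic node-set-provision-state <node-uuid> manage     # enroll -> manageable (credentials validated)
  openstack baremetal introspection start <node-uuid>    # runs while the node is "manageable"
  ironic node-set-provision-state <node-uuid> provide    # manageable -> available (cleaning, then exposed to nova)
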
> so what's the problem?
> --
> 
> The problem is that TripleO essentially bypasses this logic by keeping
> all nodes "available" and walking them through provisioning steps
> automatically. Just a couple of examples of what gets broken:
> 
> (1) Imagine I have 10 nodes in my overcloud, 10 nodes ready for
> deployment (including potential autoscaling) and I want to enroll 10
> more nodes.
> 
> Both introspection and ready-state operations nowadays will touch both
> 10 new nodes AND 10 nodes which are ready for deployment, potentially
> making the latter not ready for deployment any more (and definitely
> moving them out of pool for some time).
> 
> Particularly, any manual configuration made by an operator before making
> nodes "available" may get destroyed.
> 
> (2) TripleO has to disable automated cleaning. Automated cleaning is a
> set of steps (currently only wiping the hard drive) that happens in
> ironic 1) before nodes are available, 2) after an instance is deleted.
> As TripleO CLI constantly moves nodes back-and-forth from and to
> "available" state, cleaning kicks in every time. Unless it's disabled.
> 
> Disabling cleaning might sound a sufficient work around, until you need
> it. And you actually do. Here is a real life example of how to get
> yourself broken by not having cleaning:
> 
> a. Deploy an overcloud instance
> b. Delete it
> c. Deploy an overcloud instance on a different hard drive
> d. Boom.
> 
> As we didn't pass cleaning, there is still a config drive on the disk
> used in the first deployment. With 2 config drives present cloud-init
> will pick a random one, breaking the deployment.
> 
> To top it all, TripleO users tend to not use root device hints, so
> switching root disks may happen randomly between deployments. Have fun
> debugging.
> 
> what do you propose?
> 
> 
> I would like the new TripleO mistral workflows to start following the
> ironic state machine closer. Imagine the following workflows:
> 
> 1. register: take JSON, create nodes in "manageable" state. I do believe
> we can automate the enroll->manageable transition, as it serves the
> purpose of validation (and discovery, but lets 

Re: [openstack-dev] [osops-tools-monitoring][monitoring-for-openstack] Code duplication

2016-05-20 Thread Martin Magr
On Fri, May 20, 2016 at 2:31 PM, Simon Pasquier 
wrote:

> Hello,
> You can find the rationale in the review [1] importing m.o.f. into o.t.m.
> Basically it was asked by the operators community to avoid the sprawl of
> repositories.
> BR,
> Simon
> [1] https://review.openstack.org/#/c/248352/
>

Thanks Simon,

  so from "This import will probably lead to the end of
monitoring-for-openstack project" it seems that the project deletion just was
not performed in the end. Is anybody against submitting a patch to
openstack-infra to delete the project?

Regards,
Martin


>
>
> On Fri, May 20, 2016 at 11:08 AM, Martin Magr  wrote:
>
>> Greetings guys,
>>
>>   there is a duplication of code within openstack/osops-tools-monitoring
>> and openstack/monitoring-for-openstack projects.
>>
>> It seems that m-f-o became part of o-t-m, but the former project wasn't
>> deleted. I was just wondering if there is a reason for the duplication (or
>> fork, considering the projects have different core group maintaining each)?
>>
>> I'm assuming that m-f-o is just a leftover, so can you guys tell me what
>> was the reason to create one project to rule them all (eg.
>> openstack/osops-tools-monitoring) instead of keeping the small projects
>> separate?
>>
>> Thanks in advance for answer,
>> Martin
>>
>> --
>> Martin Mágr
>> Senior Software Engineer
>> Red Hat Czech
>>
>
>


-- 
Martin Mágr
Senior Software Engineer
Red Hat Czech


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-20 Thread Dean Troyer
On Fri, May 20, 2016 at 7:48 AM, Thierry Carrez 
wrote:

> The other approach is product-centric: "lower-level pieces are OpenStack
> dependencies, rather than OpenStack itself". If we are missing a
> lower-level piece to achieve our mission and are developing it as a result,
> it could be developed on OpenStack infrastructure by members of the
> OpenStack community but it is not "OpenStack the product", it's an
> OpenStack *dependency*. It is not governed by the TC, it can use any
> language and tool deemed necessary.
>

I think we should include the degree of OpenStack-specificness here:
something that may fit every other of your criteria for 'used by but not
part of OpenStack' but is really only useful to OpenStack (Designate's DNS
code, for example, apparently reads the DB) should be part of OpenStack.
IIRC that has been used as one of the criteria for deciding if a new
library should be part of Oslo or stand-alone (independent of in-or-out of
OpenStack governance).


> That is what I mean by 'scope': where does "OpenStack" stop, and where do
> "OpenStack dependencies" start ? It is a lot easier and a lot less
> community-costly to allow additional languages in OpenStack dependencies
> (we already have plenty there).


So down the road when one of these dependencies that is very important to
OpenStack goes more dormant than we would like due to resource allocation
issues because it is not-Big Tent, will we adopt it like we have done with
other dependencies where that made more sense than re-writing around it?

I do hope that should the TC adopt the position of drawing the scope line
tighter around the core, that the tent-cleaning will follow in both
directions, down toward kernel-space and up toward end-user-space.  We are
historically bad at leaving that sort of debt lying and cleaning can make
some strides toward reducing the community cost of maintaining the current
ecosystem.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-20 Thread Chris Dent

On Fri, 20 May 2016, Thierry Carrez wrote:

The other approach is product-centric: "lower-level pieces are OpenStack 
dependencies, rather than OpenStack itself". If we are missing a lower-level 
piece to achieve our mission and are developing it as a result, it could be 
developed on OpenStack infrastructure by members of the OpenStack community 
but it is not "OpenStack the product", it's an OpenStack *dependency*. It is 
not governed by the TC, it can use any language and tool deemed necessary.


On this second approach, there is the obvious question of where "lower-level" 
starts, which as you explained above is not really clear-cut. A good litmus 
test for it could be whenever Python is not enough. If you can't develop it 
effectively with the language that is currently sufficient for the rest of 
OpenStack, then developing it as an OpenStack dependency in whatever language 
is appropriate might be the solution...


I don't think language does (or should) have anything to do with it.

The question is whether or not the tool (whether service or
dependent library) is useful to and usable outside the openstack-stack.
For example gnocchi is useful to openstack but you can use it with other
stuff, therefore _not_ openstack. More controversially: swift can be
usefully used all by its lonesome: _not_ openstack.

Not being in OpenStack (where "in" means "of the product") is good
for OpenStack, good for the project and good for opensource in general:

* Outside the OpenStack bubble, looking in, one can see a bunch of
  complexity and a bunch of bad architecture decisions but rarely
  see the good stuff that is actually there, so it is easy enough to walk
  away. Good stuff that a larger audience could benefit from may get
  dismissed, if that good stuff has an opportunity to have an
  independent identity, it can be useful.

* A project that is used by a larger and more diverse audience
  (people-wise and technology-wise) will of necessity be more
  robust.

* A project that defines itself as independent will be required to
  have strong and narrow contracts to satisfy its diverse audiences.

* A project that has those strong and narrow contracts can use what
  ever language it likes and still be useful and nobody really needs
  to care all that deeply except for the people making it. If they
  want to be in a language that infra doesn't want to support,
  that's fine: there are plenty of other ways to do CI.

* An openstack which is more narrow is far easier for people to
  comprehend and contemplate.

* A broad opensource world that has lots of nice things is a better
  one.

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Dmitry Tantsur

On 05/20/2016 02:54 PM, Steven Hardy wrote:

Hi Dmitry,

Thanks for the detailed write-up, some comments below:

On Thu, May 19, 2016 at 03:31:36PM +0200, Dmitry Tantsur wrote:


what do you propose?


I would like the new TripleO mistral workflows to start following the ironic
state machine closer. Imagine the following workflows:

1. register: take JSON, create nodes in "manageable" state. I do believe we
can automate the enroll->manageable transition, as it serves the purpose of
validation (and discovery, but lets put it aside).

2. provide: take a list of nodes or all "manageable" nodes and move them to
"available". By using this workflow an operator will make a *conscious*
decision to add some nodes to the cloud.

3. introspect: take a list of "manageable" (!!!) nodes or all "manageable"
nodes and move them through introspection. This is an optional step between
"register" and "provide".

4. set_node_state: a helper workflow to move nodes between states. The
"provide" workflow is essentially set_node_state with verb=provide, but is
separate due to its high importance in the node lifecycle.

5. configure: given a couple of parameters (deploy image, local boot flag,
etc), update given or all "manageable" nodes with them.

Essentially the only addition here is the "provide" action which I hope you
already realize should be an explicit step.

what about tripleoclient


Of course we want to keep backward compatibility. The existing commands

 openstack baremetal import
 openstack baremetal configure boot
 openstack baremetal introspection bulk start

will use some combinations of workflows above and will be deprecated.

The new commands (also avoiding hijacking into the bare metal namespaces)
will be provided strictly matching the workflows (especially in terms of the
state machine):

 openstack overcloud node import
 openstack overcloud node configure
 openstack overcloud node introspect
 openstack overcloud node provide


So, provided we maintain backwards compatibility this sounds OK, but one
question - is there any alternative approach that might solve this problem
more generally, e.g not only for TripleO?


I was thinking about that.

We could move the import command to ironicclient, but it won't support 
TripleO format and additions then. It's still a good thing to have, I'll 
talk about it upstream.


As to introspect and provide, the only thing which is different from 
ironic analogs is that ironic commands don't act on "all nodes in XXX 
state", and I don't think we ever will.




Given that we're likely to implement these workflows in mistral, it
probably does make sense to switch to a TripleO specific namespace, but I
can't help wondering if we're solving a general problem in a TripleO
specific way - e.g isn't this something any user adding nodes from an
inventory, introspecting them and finally making them available for
deployment going to need?

Also, and it may be too late to fix this, "openstack overcloud node" is
kinda strange, because we're importing nodes on the undercloud, which could
in theory be used for any purpose, not only overcloud deployments.


I agree but keeping our stuff in ironic's namespace leads to even more 
confusion and even potential conflicts (e.g. we can't introduce 
"baremetal import", cause tripleo reserved it).




We've already done arguably the wrong thing with e.g openstack overcloud image
upload (which, actually, uploads images to the undercloud), but I wanted to
point out that we're maintaining that inconsistency with your proposed
interface (which may be the least-bad option I suppose).

Thanks,

Steve



Re: [openstack-dev] [heat] enabled convergence background

2016-05-20 Thread xiangxinyong
Thanks Thomas Herve, Steven Hardy and Anant Patil.
Thanks team.
It helps me a lot.


Best Regards,
xiangxinyong


On Fri, May 20, 2016 05:48 PM,  Anant Patil wrote:
> On 20-May-16 13:51, Steven Hardy wrote:
>> On Fri, May 20, 2016 at 09:26:46AM +0200, Thomas Herve wrote:
>>> On Fri, May 20, 2016 at 5:46 AM, xiangxinyong  wrote:
 Hi Team,

 I noticed that heat enabled convergence.
>>>
>>> I hope that's not the case :). We haven't made the switch yet. We
>>> continue to do testing, and we're still finding issues so we won't
>>> make it until we have a good confidence that it's mostly seamless.
>> 
>> 
>> It's not ;)
>> 
>> https://github.com/openstack/heat/blob/master/heat/common/config.py#L181
>> 
 Could someone tell me about the background about the convergence.
 or some specs to introduce it?
>>>
>>> I believe the spec is a good introduction to it:
>>> https://specs.openstack.org/openstack/heat-specs/specs/juno/convergence.html
>> 
>> Rico also did a nice overview in his talk in Tokyo:
>> 
>> https://www.openstack.org/summit/tokyo-2015/videos/presentation/inwinstack-heat-up-your-stack-deep-dive-to-heat-learn-how-to-orchestrate-your-cloud
>> 
>> It's towards the end (from around 28mins)
>> 
>
>You can find more details in this talk:
>https://www.openstack.org/videos/video/scalable-heat-engine-using-convergence
>
>-- Anant
>
>
>> Steve
>> 


Re: [openstack-dev] [ironic] process for making decisions?

2016-05-20 Thread Jim Rollenhagen
On Thu, May 19, 2016 at 02:02:32PM +, Loo, Ruby wrote:
> Hi,
> 
> I think it would be good if we came up with some general guidelines wrt the 
> processes by which decisions are made. By "decisions", I mean decisions that 
> we, as a community, will try to abide by ?
> 
> I have noticed in the past, that discussions in the mailing list (ML) 
> sometimes peter out without a conclusion or decision. The ML seems to be a 
> decent forum for discussion but not for decision making. At least, that's my 
> impression so far.
> 
> The (formal?) decision-making mechanisms we have are:
> - voting at our weekly ironic meeting. (When there is a vote, one can see who 
> voted for what.)
> - In summit design sessions
> - in mid-cycles
> - via spec or code patches. if someone asks for more people to voice their 
> opinion
> - anything else?
> 
>  We also tend to get consensus/agreement on IRC (if there are enough people 
> present, where "enough" typically means some number of core reviewers voicing 
> their opinion).
> 
> I have a few concerns about the above. The ones that come to mind right now:
> - I would like it to be possible to make decisions without everyone being 
> present at the same time. Or if that isn¹t possible/do-able right now, at 
> least let¹s make it clearer what a process might be, with some caveat for 
> people-who-couldn¹t-attend to disagree later?
> - consensus/agreement on IRC is nice, but I think it needs to move beyond 
> that to being recorded somewhere. (I think we are doing this via comments in 
> patches and other means but I don't know for sure.)
> 
> The reason I am bringing it up now (yes, the truth comes out) is because I 
> asked a question on the mailing list on Monday [1] and it is now Thursday and 
> maybe I am impatient ? I was about to reply to some of the comments and 
> started to wonder whether it was worth replying or maybe it would be more 
> effective (with respect to gathering the most feedback in the shortest amount 
> of time) to move the discussion to our weekly meeting and hopefully have a 
> decision then. (And the reason I even brought up that question was because I 
> was reviewing someone's patch and it seemed like a good idea to try to 
> unblock that patch instead of letting it languish there until someone else 
> did something about it. But I digress… ?)
> 
> So what do folks think? Should a process for 
> not-critical-or-time-sensitive-issues be e.g.: 
> - bring it up for discussion in the mailing list
> - after some elapsed amount of time (how long?) and/or petering out of 
> replies, bring it up in some meeting for a decision?

Yeah, this is a real problem we have today. Thanks for bringing it up. :)

I have a feeling that nobody wants to be the person pushing their
decision on everyone else, for the sake of "community". However, this
leads to folks voicing their opinion on things, instead of proposing a
solution. Lots of words are said, no decisions are made.

I think that when we see discussions starting to peter out without a
decision, someone needs to step up and say "okay, I've read everything
on this, here's what I propose: ... any further objections?" (this is
best served with a patch in gerrit). Further discussion can be had in
the thread, or the patch, or whatever, but regardless the end result
should be a merged patch (whether that's specs or docs or code).

So in the particular case you mention, I'd recommend putting up a patch
to ironic-lib's readme that says "this is only meant to be used by
ironic/IPA/etc" and move the discussion there.

In general, we need to lead by example and push the discussion toward a
decision rather than more waffling in email. I realize this isn't a
concrete answer to your question, but I don't know if there is one.

If we do think we need a formal process for making decisions as you
define above, I think it should be something like:

* bring it up on the mailing list
* someone /must/ propose a solution along the way, in gerrit, perhaps
  the person that started the thread if nobody else steps up
* (if we think this is a really big decision, we can declare that X% of
  cores should vote on it before landing it)

Hope that helps. Feel free to tell me I'm talking nonsense. :)

// jim

> 
> --ruby
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/095090.html
> 
> 
> 


Re: [openstack-dev] [OpenStack-docs] What's Up, Doc? 20 May 2016

2016-05-20 Thread Anne Gentle
On Thu, May 19, 2016 at 11:48 PM, Lana Brindley 
wrote:

> Hi everyone,
>
> I've been struck down by the dreaded flu this week, so it's been a bit
> quiet around here. I have, however, crawled out from under the tissues,
> throat lozenges, and episodes of Containment to pen my regular newsletter
> for you. See how dedicated I am to the cause? ;)
>
> The biggest updates this week have been on the API side of the ship, with
> the api-site bug list cleared out. Well done to Anne and Atsushi-san for
> the heavy lifting there.


All hail Atsushi for that herculean effort! He cleared over a hundred bugs
and asked great questions. Thank you so much Atsushi, I greatly appreciate
it.

Anne


> We also saw a lot of speciality team meetings kick off again this week,
> with more coming next week, so keep an eye on the mailing list and your
> ical for the projects you're interested in. We have also now hit our stride
> on the regular docs meetings, with the APAC meeting held this week, and the
> US one rolling around again next week. If you're a docs cross-project
> liaison, make sure you check out the timing and pick one that works for
> you, so we can make sure we're discussing *your* project.
>
> == Progress towards Newton ==
>
> 138 days to go!
>
> Bugs closed so far: 89
>
> Newton deliverables
> https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
> Feel free to add more detail and cross things off as they are achieved
> throughout the release. I will also do my best to ensure it's kept up to
> date for each newsletter.
>
> == Speciality Team Reports ==
>
> '''HA Guide: Bogdan Dobrelya'''
> No report this week.
>
> '''Install Guide: Lana Brindley'''
> Meetings start up next week:
> http://eavesdrop.openstack.org/#Documentation_Install_Team_Meeting
> Cookie cutter has been merged, repo here:
> http://git.openstack.org/cgit/openstack/installguide-cookiecutter/
> Need to merge spec: https://review.openstack.org/#/c/310588
>
> '''Networking Guide: Edgar Magana'''
> We have successfully changed the bi-weekly meeting schedule from odd to
> even weeks.
> There will not be a meeting this week. The next meeting will be on June 2nd.
>
> '''Security Guide: Nathaniel Dillon'''
> No report this week.
>
> '''User Guides: Joseph Robinson'''
> Team Meeting restarted - Just me this week, I'll send a summary to the
> mailing list to keep everyone up to date, and if anyone is interested in
> joining in.
> Python SDK file moving - one item left from the Mitaka patch - Finding a
> location to move these files - dev and doc mailing list email forthcoming
> on where to put these files.
>
> '''Ops Guide: Shilla Saebi'''
> No report this week.
>
> '''API Guide: Anne Gentle'''
> The extension, os-api-ref, is now available via Pypi so that all projects
> can re-use it with test-requirements.txt. Thanks Sean Dague for this effort!
> Several projects have reviews in progress for their API reference
> conversion: glance https://review.openstack.org/#/c/312259/, manila
> https://review.openstack.org/#/c/313874, neutron
> https://review.openstack.org/#/c/314819, swift
> https://review.openstack.org/#/c/312315/,  trove
> https://review.openstack.org/#/c/316381. (There are probably more but
> those are on my radar).
> Please review the response code table at
> https://review.openstack.org/#/c/318281/ and
> http://i.imgur.com/onsRFtI.png for your use cases for API reference docs.
>
> '''Config/CLI Ref: Tomoyuki Kato'''
> No report this week.
>
> '''Training labs: Pranav Salunke, Roger Luethi'''
> No report this week.
>
> '''Training Guides: Matjaz Pancur'''
> Work on the slides for Training guides (
> https://review.openstack.org/#/c/295016/)
> Feedback about Upstream training in Austin (see
> http://eavesdrop.openstack.org/meetings/training_guides/2016/training_guides.2016-05-16-17.02.html
> )
>
> '''Hypervisor Tuning Guide: Blair Bethwaite'''
> Hi! I'm going to try looking after this for a while as Joe focuses on
> other things. Not promising much at this point beyond a little wiki
> gardening, but longer term I hope to align it with some activities in the
> scientific-wg (which I'm co-chairing with Stig Telfer), and it will
> probably become a point of reference for some of the activities we already
> have planned this cycle.
>
> '''UX/UI Guidelines: Michael Tullis, Stephen Ballard'''
> No report this week.
>
> == Site Stats ==
>
> While 90% of our readers have their browsers set to US English, just under
> 8% browse in Chinese, and a mere 0.15% use British English. That last one
> might be just me ;)
>
> == Doc team meeting ==
>
> Next meetings:
>
> The APAC meeting was held this week, you can read the minutes here:
> https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-05-18
>
> Next meetings:
> US: Wednesday 25 May, 19:00 UTC
> APAC: Wednesday 1 June, 00:30 UTC
>
> Please go ahead and add any agenda items to the meeting page here:
> 

Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Steven Hardy
Hi Dmitry,

Thanks for the detailed write-up, some comments below:

On Thu, May 19, 2016 at 03:31:36PM +0200, Dmitry Tantsur wrote:

> what do you propose?
> 
> 
> I would like the new TripleO mistral workflows to start following the ironic
> state machine closer. Imagine the following workflows:
> 
> 1. register: take JSON, create nodes in "manageable" state. I do believe we
> can automate the enroll->manageable transition, as it serves the purpose of
> validation (and discovery, but lets put it aside).
> 
> 2. provide: take a list of nodes or all "manageable" nodes and move them to
> "available". By using this workflow an operator will make a *conscious*
> decision to add some nodes to the cloud.
> 
> 3. introspect: take a list of "manageable" (!!!) nodes or all "manageable"
> nodes and move them through introspection. This is an optional step between
> "register" and "provide".
> 
> 4. set_node_state: a helper workflow to move nodes between states. The
> "provide" workflow is essentially set_node_state with verb=provide, but is
> separate due to its high importance in the node lifecycle.
> 
> 5. configure: given a couple of parameters (deploy image, local boot flag,
> etc), update given or all "manageable" nodes with them.
> 
> Essentially the only addition here is the "provide" action which I hope you
> already realize should be an explicit step.
> 
> what about tripleoclient
> 
> 
> Of course we want to keep backward compatibility. The existing commands
> 
>  openstack baremetal import
>  openstack baremetal configure boot
>  openstack baremetal introspection bulk start
> 
> will use some combinations of workflows above and will be deprecated.
> 
> The new commands (also avoiding hijacking into the bare metal namespaces)
> will be provided strictly matching the workflows (especially in terms of the
> state machine):
> 
>  openstack overcloud node import
>  openstack overcloud node configure
>  openstack overcloud node introspect
>  openstack overcloud node provide

So, provided we maintain backwards compatibility this sounds OK, but one
question - is there any alternative approach that might solve this problem
more generally, e.g not only for TripleO?

Given that we're likely to implement these workflows in mistral, it
probably does make sense to switch to a TripleO specific namespace, but I
can't help wondering if we're solving a general problem in a TripleO
specific way - e.g isn't this something any user adding nodes from an
inventory, introspecting them and finally making them available for
deployment going to need?

Also, and it may be too late to fix this, "openstack overcloud node" is
kinda strange, because we're importing nodes on the undercloud, which could
in theory be used for any purpose, not only overcloud deployments.

We've already done arguably the wrong thing with e.g openstack overcloud image
upload (which, actually, uploads images to the undercloud), but I wanted to
point out that we're maintaining that inconsistency with your proposed
interface (which may be the least-bad option I suppose).

Thanks,

Steve



Re: [openstack-dev] [tc] supporting Go

2016-05-20 Thread Dean Troyer
On Fri, May 20, 2016 at 5:42 AM, Thomas Goirand  wrote:

> I am *NOT* buying that doing static linking is a progress. We're back 30
> years in the past, before the .so format. It is amazing that some of us
> think it's better. It simply isn't. It's a huge regression, for package
> maintainers, system admins, production/ops, and our final users. The
> only group of people who like it are developers, because they just don't
> need to care about shared library API/ABI incompatibilities and
> regressions anymore.
>

I disagree, there are certainly places static linking is appropriate,
however, I didn't mention that at all.  Much of the burden with Python
dependency at install/run time is due to NO linking.  Even with C, you make
choices at build time WRT what you link against, either statically or
dynamically.  Even with shared libs, when the interface changes you have to
re-link everything that uses that interface.  It is not as black and white
as you suggest.

And I say that as a user, who so desperately wants an install process for
OSC to match PuTTY on Windows: 1) copy an .exe; 2) run it.

dt

[Thomas, I have done _EVERY_ one of the jobs above that you listed, as a
$DAY_JOB, and know exactly what it takes to run production-scale services
built from everything from vendor packages to house-built source.  It would
be nice if you refined your argument to stop leaning on static linking as
the biggest problem since stack overflows.  There are other reasons this
might be a bad idea, but I sense that you are losing traction fixating on
only this one.]

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-20 Thread Thierry Carrez

John Dickinson wrote:

[...]
In the same vein, if we consider lower-level projects (which often
require such native optimization) as part of "OpenStack", rather
than as external open source projects that should be integrated by
"OpenStack", then we need a language like golang in our toolbelt.
There is basically no point in saying no to golang in OpenStack if
we need lower-level native optimization in OpenStack: we'll have to
accept the community cost that comes with such a community scope.


Defining "lower-level" is very hard. Since the Nova API[1] is
listening to a public network interface and coordinating with various
services in a cluster, is it low-level enough to need to consider
optimizations? Does the Nova API require optimization to handle a very
large number of connections using all of the hardware available on a
single server? If Nova is going to eek out every drop of performance
possible from a server, it probably does need to consider all kinds of
"low-level" optimizations.[2]


That is fair. There is no clear line between lower-level and 
higher-level (although one might argue there is a line between projects 
requiring Go and projects that don't require it). It's more of a gradient.



[...]

So the real question we need to answer is... where does OpenStack
stop, and where does the wider open source community start ? If
OpenStack is purely an "integration engine", glue code for other
lower-level technologies like hypervisors, databases, or distributed
block storage, then the scope is limited, Python should be plenty
enough, and we don't need to fragment our community. If OpenStack is
"whatever it takes to reach our mission", then yes we need to add one
language to cover lower-level/native optimization, because we'll
need that... and we need to accept the community cost as a
consequence of that scope choice. Those are the only two options on
the table.

I'm actually not sure what is the best answer. But I'm convinced we,
as a community, need to have a clear answer to that. We've been
avoiding that clear answer until now, creating tension between the
advocates of an ASF-like collection of tools and the advocates of a
tighter-integrated "openstack" product. We have created silos and
specialized areas as we got into the business of developing time-
series databases or SDNs. As a result, it's not "one community"
anymore. Should we further encourage that, or should we focus on
what the core of our mission is, what we have in common, this
integration engine that binds all those other open source projects
into one programmable infrastructure solution ?


You said the answer in your question. OpenStack isn't defined as an
integration engine[3]. The definition of OpenStack is whatever it
takes to fulfill our mission[4][5]. I don't mean that as a tautology.
I mean that we've already gone to the effort of defining OpenStack. It's
our mission statement. We're all about building a cloud platform upon
which people can run their apps ("cloud-native" or otherwise), so we
write the software needed to do that.

So where does OpenStack stop and the wider community start? OpenStack
includes the projects needed to fulfill its mission.


I'd totally agree with you if OpenStack was developed in a vacuum. But 
there is a large number of open source projects and libraries that 
OpenStack needs to fulfill its mission that are not in "OpenStack": they 
are external open source projects we depend on. Python, MySQL, libvirt, 
KVM, Ceph, OpenvSwitch, RabbitMQ... We are not asking that those should 
be included in OpenStack, and we are not NIHing replacements for those 
in OpenStack either.


So it is not as clear-cut as you present it, and you can approach this 
dependency question from two directions.


One is community-centric: "anything produced by our community is 
OpenStack". If we are missing a lower-level piece to achieve our mission 
and are developing it ourselves as a result, then it is OpenStack, even 
if it ends up being a message queue or a database.


The other approach is product-centric: "lower-level pieces are OpenStack 
dependencies, rather than OpenStack itself". If we are missing a 
lower-level piece to achieve our mission and are developing it as a 
result, it could be developed on OpenStack infrastructure by members of 
the OpenStack community but it is not "OpenStack the product", it's an 
OpenStack *dependency*. It is not governed by the TC, it can use any 
language and tool deemed necessary.


On this second approach, there is the obvious question of where 
"lower-level" starts, which as you explained above is not really 
clear-cut. A good litmus test for it could be whenever Python is not 
enough. If you can't develop it effectively with the language that is 
currently sufficient for the rest of OpenStack, then developing it as an 
OpenStack dependency in whatever language is appropriate might be the 
solution...


That is what I mean by 'scope': where does "OpenStack" stop, and where 
do 

[openstack-dev] [nova-lxd]Nova-lxd with Linuxbridge

2016-05-20 Thread Gyorgy Szombathelyi
Hi!

I just have a simple question: is nova-lxd supposed to work with the 
Linuxbridge agent?
As far as I can see, the LXD driver creates veth interface pairs, and vif.py connects them 
to a normal linux bridge. However, the Linuxbridge agent code scans only for tap 
devices.
So the question is: Should the LXD driver create a tap device, bridged with 
that veth?

Br,
György



Re: [openstack-dev] [ironic] usage of ironic-lib

2016-05-20 Thread Jim Rollenhagen
On Thu, May 19, 2016 at 01:21:35PM -0700, Devananda van der Veen wrote:
> On 05/16/2016 07:14 AM, Lucas Alvares Gomes wrote:
> > On Mon, May 16, 2016 at 2:57 PM, Loo, Ruby  wrote:
> >> Hi,
> >>
> >> A patch to ironic-lib made me wonder about what is our supported usage of
> >> ironic-lib. Or even the intent/scope of it. This patch changes a method,
> >> ‘bootable’ parameter is removed and ‘boot_flag’ parameter is added [1].
> >>
> >> If this library/method is used by some out-of-tree thing (or even some
> >> in-tree but outside of ironic), this will be a breaking change. If this
> >> library is meant to be internal to ironic program itself, and to e.g. only
> >> be used by ironic and IPA, then that is different. I was under the
> >> impression that it was a library and meant to be used by whatever, no
> >> restrictions on what that whatever was. It would be WAY easier if we 
> >> limited
> >> this for usage by only a few specified projects.
> >>
> >> What do people think?
> >>
> > 
> > I still believe that the ironic-lib project was designed to share code
> > between the Ironic projects _only_. Otherwise, if it the code was
> > supposed to be shared across multiple projects we should have put it
> > in oslo instead.
> 
> I agree, and don't see a compelling reason, today, for anyone to do the work 
> to
> make ironic-lib into a stable library. So...
> 
> I think we should keep ironic-lib where it is (in ironic, not oslo) and keep 
> the
> scope we intended (only for use within the Ironic project group [1]).
> 
> We should more clearly signal that intent within the library (eg, in the 
> README)
> and the project description (eg. on PyPI).
> 
> [1]
> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L1915

+1, let's not put extra burden on ourselves at this time.

// jim

> 
> 
> my 2c,
> Devananda
> 


[openstack-dev] [Fuel][Plugins] custom roles need to configure the default gateway

2016-05-20 Thread Simon Pasquier
Hello,
This is a heads-up for the plugin developers because we found this issue
[1] with the StackLight plugins. If your plugin targets MOS 8 and provides
custom roles, you probably want to call the 'configure_default_route' task
otherwise the nodes will use the Fuel node as the default gateway instead
of the virtual router on the management network.
I did a quick test and found out that for example, the detach-database and
detach-rabbitmq plugins are affected by this bug.
Note that AFAICT it applies only if you want to support MOS 8 (and before).
See [2] for the details.
BR,
Simon
[1] https://bugs.launchpad.net/lma-toolchain/+bug/1583994
[2] https://bugs.launchpad.net/fuel/+bug/1541309


Re: [openstack-dev] [osops-tools-monitoring][monitoring-for-openstack] Code duplication

2016-05-20 Thread Simon Pasquier
Hello,
You can find the rationale in the review [1] importing m.o.f. into o.t.m.
Basically it was asked by the operators community to avoid the sprawl of
repositories.
BR,
Simon
[1] https://review.openstack.org/#/c/248352/

On Fri, May 20, 2016 at 11:08 AM, Martin Magr  wrote:

> Greetings guys,
>
>   there is a duplication of code within openstack/osops-tools-monitoring
> and openstack/monitoring-for-openstack projects.
>
> It seems that m-f-o became part of o-t-m, but the former project wasn't
> deleted. I was just wondering if there is a reason for the duplication (or
> fork, considering the projects have different core group maintaining each)?
>
> I'm assuming that m-f-o is just a leftover, so can you guys tell me what
> was the reason to create one project to rule them all (eg.
> openstack/osops-tools-monitoring) instead of keeping the small projects
> separate?
>
> Thanks in advance for answer,
> Martin
>
> --
> Martin Mágr
> Senior Software Engineer
> Red Hat Czech
>


Re: [openstack-dev] Re: [Heat][Glance] Can't migrate to glance v2 completely

2016-05-20 Thread Erno Kuvaja
Hi Huangtianhua,

Bit more general pointers here and I respond to your exact cases inline.

OpenStack community has made really clear that we must not expose backend
configuration details of the services to users and we must not rely users
knowing any of those details to operate in the cloud. The locations
settings are exact opposite from this. To set the locations in a manner
that glance can access the image when the user tries to use it excepts the
user to provide correct, backend specific location of the image to Glance.
This causes very much poor user experience from the beginning but
definitely so when the user tries to transfer those processes between
clouds that are configured differently.

Also, by utilizing external locations for enabled backends (locations that are
not controlled by Glance) we give away the immutability promise right away.
We also break the image in the cloud if the image is no longer
available in that external backend, preventing spinning up new VMs from that
image, migrating those VMs to other hosts and, IIUC, even spinning up VMs
from the snapshots based on that base image. All these things just lead to
horrible results for any users who do not understand the internals of how our
clouds operate and the consequences of such decisions.

On Fri, May 20, 2016 at 4:36 AM, Huangtianhua 
wrote:

> Thanks very much and sorry to reply so late. Comments inline.
>
> -Original Message-
> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
> Sent: 11 May 2016 22:03
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Huangtianhua
> Subject: Re: [openstack-dev] [Heat][Glance] Can't migrate to glance v2
> completely
>
>
> Thanks for your email. Comments inline.
>
> On 5/11/16 3:06 AM, Huangtianhua wrote:
> >
> > Hi glance-team:
> >
> > 1.
> > glance v1 supports '--location' in Api image-create, we support
> > 'location' in heat for glance image resource,
> >
> >
> > and we don't support '--file' to use local files for upload, as the
> > caller has no control over local files on the
> >
> >
> > server running heat-engine or there are some security risk.
> >
>
> We had a session during the summit to discuss the deprecation path. You
> are right currently v2 does not have the location support. Also, please be
> mindful that location concept in v2 (you mention above) is a bit different
> from that in v1.
>
> It's unfortunate that public facing services have exposed v1 as v1 was
> designed to be the internal only (service) API for usage by Infrastructure
> services. v2 on the other hand has been designed to be used by end users and
> PaaS services.
>
> Ideally, a location should never be set by the end user as the location
> mechanism used by Glance needs to be opaque to the end user (they can not
> be sure the scheme in which the location needs to be set to be acceptable
> to Glance). location logic was introduced to help admin
> (operators) set a custom location on an image to help speed the boot
> times. Hence, it's a service API in a way (unless you run a very small
> trusted cloud). (In case of heat, the scale and type of cloud would be
> quite different.)
>
> --
> In fact, I don't understand why the end user can't set 'location'; the
> 'location' to me is the URL where the data for the image already resides.
> Let's consider a simple use case with a heat template:
>
> heat_template_version: 2013-05-23
> resources:
>   fedora_image:
> type: OS::Glance::Image
> properties:
>   disk_format: qcow2
>   container_format: bare
>   location:
> http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
>   my_server:
> type: OS::Nova::Server
> properties:
>   # other properties
>   image: {get_resources: fedora_image}
>
> As above, the user wants to use a Fedora release image to create a nova server.
> So if the user can't set the image 'location', how can they use the image? Is there
> any other way in glance v2?
>
>
This is actually a good example of why neither the user nor the cloud owner
would want this to happen:
a) This needs the glance http backend to be enabled; it's not by default and
the state varies between deployments. Poor interoperability.
b) If the http backend is enabled, glance will download the image from
fedoraproject.org every time that image is requested (assuming the glance image
cache is not enabled or the image is not in the cache of the serving node),
causing lots of external network traffic and in most cases greatly
slowing the boot process down.
c) Let's assume the location URI points somewhere smaller and perhaps less
stable than fedoraproject. The user uses that VM happily and takes a
snapshot of the baseline for his/her future use. Meanwhile the image
creator has uploaded a new version of that image and removed the old one to
save some space. Now our user tries to spin up that snapshot, and the image
does not exist in the URI 

[openstack-dev] [neutron] Update_port can not remove allocation from auto-addressed subnets

2016-05-20 Thread Pavel Bondar
Hi,

Currently, using the update_port workflow, a user can not remove IP addresses
from auto-addressed subnets (SLAAC). This prevents me from implementing a
complete fix for [1].

Typically, to remove an IP address from a port, the 'fixed_ips' list is updated
and the IP address is cleaned up from it.
But for auto-addressed subnets, if an IP address is removed from 'fixed_ips' and
port_update is called, SLAAC IPs are not removed because of [2].
This area was significantly reworked during liberty, but the same behavior is
preserved at least from kilo [3].

To make subnet deletion comply with the IPAM interface [1], any IP address
deallocation has to be done via the IPAM interface (update_port), but
update_port currently skips deallocation of SLAAC addresses.

So I am looking for advice about a way to allow deallocation of SLAAC
addresses via update_port.

I see several possible solutions, but they are not ideal, so please let
me know
if you see a better solution:
- Add an additional parameter like 'allow_slaac_deletion' to the update_port
method, and pass it through
update_port->update_port_with_ips->_update_ips_for_port->
_get_changed_ips_for_port to alter the behavior in [2]. It involves changing
parameters for the API-exposed method update_port, so I am not sure if it can
be accepted.
- Another way is to introduce a new state for the 'fixed_ips' list. Currently
it can have 'subnet_id' and 'ip_address' as keys. We could add a new key like
'delete_subnet_id' to force deletion of allocations for SLAAC subnets (see the
sketch after this list). This way there is no need to update parameters for a
bunch of methods.
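
A minimal sketch of what that second option could look like on the wire,
assuming a purely hypothetical 'delete_subnet_id' key (nothing below exists
today, it only illustrates the proposal):

  # Hypothetical request: ask Neutron to drop the SLAAC allocation for one subnet
  curl -s -X PUT "$NEUTRON_ENDPOINT/v2.0/ports/<port-id>" \
    -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '{"port": {"fixed_ips": [{"delete_subnet_id": "<slaac-subnet-id>"}]}}'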

Please share your thoughts about the ways to fix it.

[1] https://bugs.launchpad.net/neutron/+bug/1564335
[2]
https://github.com/openstack/neutron/blob/f494de47fcef7776f7d29d5ceb2cc4db96bd1efd/neutron/db/ipam_backend_mixin.py#L435
[3]
https://github.com/openstack/neutron/blob/stable/kilo/neutron/db/db_base_plugin_v2.py#L444



Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Dmitry Tantsur

On 05/20/2016 01:44 PM, Dan Prince wrote:

On Thu, 2016-05-19 at 15:31 +0200, Dmitry Tantsur wrote:

Hi all!

We started some discussions on https://review.openstack.org/#/c/30020
0/
about the future of node management (registering, configuring and
introspecting) in the new API, but I think it's more fair (and
convenient) to move it here. The goal is to fix several long-
standing
design flaws that affect the logic behind tripleoclient. So fasten
your
seatbelts, here it goes.

If you already understand why we need to change this logic, just
scroll
down to "what do you propose?" section.

"introspection bulk start" is evil
--

As many of you obviously know, TripleO used the following command
for
introspection:

  openstack baremetal introspection bulk start

As not everyone knows though, this command does not come from
ironic-inspector project, it's part of TripleO itself. And the
ironic
team has some big problems with it.

The way it works is

1. Take all nodes in "available" state and move them to "manageable"
state
2. Execute introspection for all nodes in "manageable" state
3. Move all nodes with successful introspection to "available" state.

Step 3 is pretty controversial, step 1 is just horrible. This is not
how
the ironic-inspector team designed introspection to work (hence it
refuses to run on nodes in "available" state), and that's not how
the
ironic team expects the ironic state machine to be handled. To
explain
it I'll provide brief information on the ironic state machine.

ironic node lifecycle
-

With recent versions of the bare metal API (starting with 1.11),
nodes
begin their life in a state called "enroll". Nodes in this state are
not
available for deployment, nor for most of other actions. Ironic does
not
touch such nodes in any way.

To make nodes alive an operator uses "manage" provisioning action to
move nodes to "manageable" state. During this transition the power
and
management credentials (IPMI, SSH, etc) are validated to ensure that
nodes in "manageable" state are, well, manageable. This state is
still
not available for deployment. With nodes in this state an operator
can
execute various pre-deployment actions, such as introspection, RAID
configuration, etc. So to sum it up, nodes in "manageable" state are
being configured before exposing them into the cloud.

The last step before the deployment is to make nodes "available"
using
the "provide" provisioning action. Such nodes are exposed to nova,
and
can be deployed to at any moment. No long-running configuration
actions
should be run in this state. The "manage" action can be used to
bring
nodes back to "manageable" state for configuration (e.g.
reintrospection).

so what's the problem?
--

The problem is that TripleO essentially bypasses this logic by
keeping
all nodes "available" and walking them through provisioning steps
automatically. Just a couple of examples of what gets broken:

(1) Imagine I have 10 nodes in my overcloud, 10 nodes ready for
deployment (including potential autoscaling) and I want to enroll 10
more nodes.

Both introspection and ready-state operations nowadays will touch
both
10 new nodes AND 10 nodes which are ready for deployment,
potentially
making the latter not ready for deployment any more (and definitely
moving them out of pool for some time).

Particularly, any manual configuration made by an operator before
making
nodes "available" may get destroyed.

(2) TripleO has to disable automated cleaning. Automated cleaning is
a
set of steps (currently only wiping the hard drive) that happens in
ironic 1) before nodes are available, 2) after an instance is
deleted.
As TripleO CLI constantly moves nodes back-and-forth from and to
"available" state, cleaning kicks in every time. Unless it's
disabled.

Disabling cleaning might sound a sufficient work around, until you
need
it. And you actually do. Here is a real life example of how to get
yourself broken by not having cleaning:

a. Deploy an overcloud instance
b. Delete it
c. Deploy an overcloud instance on a different hard drive
d. Boom.


This sounds like an Ironic bug to me. Cleaning (wiping a disk) and
removing state that would break subsequent installations on a different
drive are different things. In TripleO I think the reason we disable
cleaning is largely because of the extra time it takes and the fact
that our baremetal cloud isn't multi-tenant (currently at least).


We fix this "bug" by introducing cleaning. This is the process to 
guarantee each deployment starts with a clean environment. It's hard to 
know which remaining data can cause which problem (e.g. what about a 
remaining UEFI partition? any remnants of Ceph? I don't know).






As we didn't pass cleaning, there is still a config drive on the
disk
used in the first deployment. With 2 config drives present cloud-
init
will pick a random one, breaking the deployment.


TripleO isn't using config drive is it? Until Nova 

[openstack-dev] [nova] API changes on limit / marker / sort in Newton

2016-05-20 Thread Sean Dague
There are a number of changes up for spec reviews that add parameters to
LIST interfaces in Newton:

* keypairs-pagination (MERGED) -
https://github.com/openstack/nova-specs/blob/8d16fc11ee6d01b5a9fe1b8b7ab7fa6dff460e2a/specs/newton/approved/keypairs-pagination.rst#L2
* os-instances-actions - https://review.openstack.org/#/c/240401/
* hypervisors - https://review.openstack.org/#/c/240401/
* os-migrations - https://review.openstack.org/#/c/239869/

I think that limit / marker is always a legit thing to add, and I almost
wish we just had a single spec which is "add limit / marker to the
following APIs in Newton"

Most of these came in with sort_keys as well. We currently don't have
schema enforcement on sort_keys, so I don't think we should add any more
instances of it until we scrub it. Right now sort_keys is mostly a way
to generate a lot of database load because users can sort by things not
indexed in your DB. We really should close that issue in the future, but
I don't think we should make it any worse. I have -1s on
os-instance-actions and hypervisors for that reason.

os-instances-actions and os-migrations are time based, so they are
proposing a changes-since. That seems logical and fine. Date seems like
the natural sort order for those anyway, so it's "almost" limit/marker,
except from end not the beginning. I think that in general changes-since
on any resource which is time based should be fine, as long as that
resource is going to natural sort by the time field in question.

So... I almost feel like this should just be soft policy at this point:

limit / marker - always ok
sort_* - no more until we have a way to scrub sort (and we fix weird
sort key issues we have)
changes-since - ok on any resource that will natural sort with the
updated time
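
For illustration only (paths, markers and parameter support are still at the
spec/review stage for most of the APIs above), the proposals boil down to
requests along these lines:

  # limit/marker pagination, as in the merged keypairs-pagination spec
  curl -s -H "X-Auth-Token: $TOKEN" \
    "$COMPUTE_ENDPOINT/os-keypairs?limit=50&marker=<last-seen-keypair-name>"

  # time-based filtering on a time-ordered resource
  curl -s -H "X-Auth-Token: $TOKEN" \
    "$COMPUTE_ENDPOINT/os-migrations?changes-since=2016-05-20T00:00:00Z"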


That should make proposing these kinds of additions easier for folks,

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Lucas Alvares Gomes
Hi,

> This sounds like an Ironic bug to me. Cleaning (wiping a disk) and
> removing state that would break subsequent installations on a different
> drive are different things. In TripleO I think the reason we disable
> cleaning is largely because of the extra time it takes and the fact
> that our baremetal cloud isn't multi-tenant (currently at least).
>

It's a complicated issue, there are ways in Ironic to make sure the
image will always be deployed onto a specific hard drive [0]. But when
it's not specified Ironic will pick the first disk that appears and in
Linux, at least for SATA, SCSI or IDE disk controllers, the order in
which the devices are added is arbitrary, e.g., /dev/sda and /dev/sdb
could swap around between reboots.
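
As a rough sketch of what [0] describes, a root device hint can be set on the
node so deployments always target the same disk (the node UUID and hint value
are placeholders, and the client syntax of this release is assumed):

  # Pin deployments for this node to the disk with a given serial number
  ironic node-update <node-uuid> add properties/root_device='{"serial": "<disk-serial>"}'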

>>
>> As we didn't pass cleaning, there is still a config drive on the
>> disk
>> used in the first deployment. With 2 config drives present cloud-
>> init
>> will pick a random one, breaking the deployment.
>
> TripleO isn't using config drive is it? Until Nova supports config
> drives via Ironic I think we are blocked on using it.
>

It's already supported, for two or more cycles already [1]. The
difference with VMs is that, with baremetal the config drive lives in
the disk as a partition and for VMs it's presented as an external
device.

[0] 
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#specifying-the-disk-for-deployment
[1] 
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#enabling-the-configuration-drive-configdrive

Hope that helps,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Centralize Configuration: ignore service list for newton

2016-05-20 Thread Markus Zoeller

On 05/20/2016 11:33 AM, John Garbutt wrote:

Hi,

The current config template includes a list of "Services which consume this":
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/centralize-config-options.html#quality-view

I propose we drop this list from the template.

I am worried this is going to be hard to maintain, and hard to review
/ check. As such, it's of limited use to most deployers in its current
form.



Unfortunately I still haven't found a way to collect this information in 
an automated way. :(



I have been thinking about a possible future replacement. Two separate
sample configuration files, one for the Compute node, and one for
non-compute nodes (i.e. "controller" nodes). The reason for this
split, is our move towards removing sensitive credentials from compute
nodes, etc. Over time, we could prove the split in gate testing, where
we look for conf options accessed by computes that shouldn't be, and
v.v.


Having said that, for newton, I propose we concentrate on:
* completing the move of all the conf options (almost there)


Only one left: https://review.openstack.org/314091

For the sake of completeness, there are two "SubCommandOpt" instances 
[1][2] which are purely used for CLI options and are *not* part of the 
"nova.conf" file. I think it's best to leave them where they are. All 
other config options in "nova/conf/" then share the same behavior of 
being configurable by the "nova.conf" file.
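
For anyone not familiar with them, those CLI-only options follow the usual
oslo.config sub-command pattern. A rough sketch (not a verbatim copy of the
nova code):

```python
from oslo_config import cfg


def add_command_parsers(subparsers):
    # each sub-command gets its own argparse parser, e.g. "db_sync"
    parser = subparsers.add_parser('db_sync')
    parser.add_argument('version', nargs='?')


category_opt = cfg.SubCommandOpt('category',
                                 title='Command categories',
                                 handler=add_command_parsers)

CONF = cfg.ConfigOpts()
CONF.register_cli_opt(category_opt)
```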



* (skip tidy up of deprecated options)
* tidying up the main description of each conf option
* tidy up the Opt group and Opt types, i.e. int min/max, str choices, etc
** move options to use stevedore, where needed
* deprecating ones that are dumb / unused
* identifying "required" options (those you have to set)
* add config group descriptions
* note any surprising dependencies or value meanings (-1 vs 0 etc)
* ensure the docs and sample files are complete and correct

I am thinking we could copy API ref and add a comment at the top of
each file (expecting a separate patch for each step):
* fix_opt_registration_consistency (see sfinucan's thread)
* fix_opt_description_indentation
* check_deprecation_status
* check_opt_group_and_type
* fix_opt_description

Does that sound like a good plan? If so, I can write this up in a wiki page.


Yes, sounds good. I can prepare a burndown chart like Sean did for the 
api-ref work [3].




Thanks,
John

PS
I also have concerns around the related config options bits and
possible values bit, but that's a different thread. Let's focus on the
main body of the description for now.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



References:
[1] 
https://github.com/openstack/nova/blob/d619ad6ba15df1cf7dc92ddf84d1c65af018682f/nova/cmd/dhcpbridge.py#L92-L92
[2] 
https://github.com/openstack/nova/blob/b8aac794d4620aca341b269c6db71ea9e70d2210/nova/cmd/manage.py#L1397-L1397

[3] http://burndown.dague.org/

--
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Nodes management in our shiny new TripleO API

2016-05-20 Thread Dan Prince
On Thu, 2016-05-19 at 15:31 +0200, Dmitry Tantsur wrote:
> Hi all!
> 
> We started some discussions on https://review.openstack.org/#/c/300200/
> about the future of node management (registering, configuring and
> introspecting) in the new API, but I think it's more fair (and
> convenient) to move it here. The goal is to fix several long-standing
> design flaws that affect the logic behind tripleoclient. So fasten your
> seatbelts, here it goes.
> 
> If you already understand why we need to change this logic, just scroll
> down to "what do you propose?" section.
> 
> "introspection bulk start" is evil
> --
> 
> As many of you obviously know, TripleO used the following command for
> introspection:
> 
>   openstack baremetal introspection bulk start
> 
> As not everyone knows though, this command does not come from
> ironic-inspector project, it's part of TripleO itself. And the ironic
> team has some big problems with it.
> 
> The way it works is
> 
> 1. Take all nodes in "available" state and move them to "manageable"
> state
> 2. Execute introspection for all nodes in "manageable" state
> 3. Move all nodes with successful introspection to "available" state.
> 
> Step 3 is pretty controversial, step 1 is just horrible. This is not how
> the ironic-inspector team designed introspection to work (hence it
> refuses to run on nodes in "available" state), and that's not how the
> ironic team expects the ironic state machine to be handled. To explain
> it I'll provide brief information on the ironic state machine.
> 
> ironic node lifecycle
> -
> 
> With recent versions of the bare metal API (starting with 1.11), nodes
> begin their life in a state called "enroll". Nodes in this state are not
> available for deployment, nor for most other actions. Ironic does not
> touch such nodes in any way.
> 
> To make nodes alive an operator uses "manage" provisioning action to
> move nodes to "manageable" state. During this transition the power and
> management credentials (IPMI, SSH, etc) are validated to ensure that
> nodes in "manageable" state are, well, manageable. This state is still
> not available for deployment. With nodes in this state an operator can
> execute various pre-deployment actions, such as introspection, RAID
> configuration, etc. So to sum it up, nodes in "manageable" state are
> being configured before exposing them into the cloud.
> 
> The last step before the deployment is to make nodes "available" using
> the "provide" provisioning action. Such nodes are exposed to nova, and
> can be deployed to at any moment. No long-running configuration actions
> should be run in this state. The "manage" action can be used to bring
> nodes back to "manageable" state for configuration (e.g.
> reintrospection).
> 
> so what's the problem?
> --
> 
> The problem is that TripleO essentially bypasses this logic by keeping
> all nodes "available" and walking them through provisioning steps
> automatically. Just a couple of examples of what gets broken:
> 
> (1) Imagine I have 10 nodes in my overcloud, 10 nodes ready for 
> deployment (including potential autoscaling) and I want to enroll 10 
> more nodes.
> 
> Both introspection and ready-state operations nowadays will touch both
> 10 new nodes AND 10 nodes which are ready for deployment, potentially
> making the latter not ready for deployment any more (and definitely
> moving them out of pool for some time).
> 
> Particularly, any manual configuration made by an operator before making
> nodes "available" may get destroyed.
> 
> (2) TripleO has to disable automated cleaning. Automated cleaning is a
> set of steps (currently only wiping the hard drive) that happens in
> ironic 1) before nodes are available, 2) after an instance is deleted.
> As TripleO CLI constantly moves nodes back-and-forth from and to
> "available" state, cleaning kicks in every time. Unless it's disabled.
> 
> Disabling cleaning might sound like a sufficient workaround, until you
> need it. And you actually do. Here is a real life example of how to get
> yourself broken by not having cleaning:
> 
> a. Deploy an overcloud instance
> b. Delete it
> c. Deploy an overcloud instance on a different hard drive
> d. Boom.

This sounds like an Ironic bug to me. Cleaning (wiping a disk) and
removing state that would break subsequent installations on a different
drive are different things. In TripleO I think the reason we disable
cleaning is largely because of the extra time it takes and the fact
that our baremetal cloud isn't multi-tenant (currently at least).

> 
> As we didn't pass cleaning, there is still a config drive on the disk
> used in the first deployment. With 2 config drives present cloud-init
> will pick a random one, breaking the deployment.

TripleO isn't using a config drive, is it? Until Nova supports config
drives via 

Re: [openstack-dev] [nova] Centralize Configuration: ignore service list for newton

2016-05-20 Thread Chris Dent

On Fri, 20 May 2016, John Garbutt wrote:


I have been thinking about a possible future replacement. Two separate
sample configuration files, one for the Compute node, and one for
non-compute nodes (i.e. "controller" nodes). The reason for this
split, is our move towards removing sensitive credentials from compute
nodes, etc. Over time, we could prove the split in gate testing, where
we look for conf options accessed by computes that shouldn't be, and
v.v.



That would be marvelous and would be a nice step in the direction of
making the compute node more independent. Truly long term it would
be great for reducing cognitive load and increasing contractual
boundaries and strength if nova-compute were in its own repo.


/me dances away from the pragmatists


Does that sound like a good plan? If so, I can write this up in a wiki page.


+1

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to single sign on with windows authentication with Keystone

2016-05-20 Thread Kseniya Tychkova
Hi
I would like to share the article "Keystone and WebSSO: Using Active
Directory Federation Services with OpenStack Keystone"
(http://xuctarine.blogspot.ru/2016/05/keystone-and-websso-using-active.html).
In this article you can find a step-by-step manual for SSO on Windows with
Keystone.


On Fri, May 20, 2016 at 3:03 AM, Adam Young  wrote:

> On 05/19/2016 07:40 AM, Rodrigo Duarte wrote:
>
> Hi,
>
> So you are trying to use keystone to authorize your users, but want to
> avoid having to authenticate via keystone, right?
>
> Check if the Federated Identity feature [1] covers your use case.
>
> [1]
> http://docs.openstack.org/security-guide/identity/federated-keystone.html
>
> On Thu, May 19, 2016 at 8:27 AM, OpenStack Mailing List Archive <
> cor...@gmail.com> wrote:
>
>> Link: https://openstack.nimeyo.com/85057/?show=85057#q85057
>> From: imocha 
>>
>> I have to call the keystone APIs and want to use the windows
>> authentication using Active Directory. Keystone provides integration with
>> AD at the back end. To get the initial token to use OpenStack APIs, I need
>> to pass user name and password in the keystone token creation api.
>>
>> Since I am already logged on to my windows domain, is there any way that
>> I can get the token without passing the password in the api.
>>
> Yes, use SSSD and Mod_Lookup_Identity:
>
>
> https://adam.younglogic.com/2014/05/keystone-federation-via-mod_lookup_identity/
>
>
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-20 Thread Thomas Goirand
On 05/11/2016 04:17 PM, Dean Troyer wrote:
> The big difference with Go here is that the dependency work happens at
> build time, not deploy/runtime in most cases.  That shifts much of the
> burden to people (theoretically) better suited to manage that work.

I am *NOT* buying that doing static linking is progress. We're back 30
years in the past, before the .so format. It is amazing that some of us
think it's better. It simply isn't. It's a huge regression for package
maintainers, system admins, production/ops, and our end users. The
only group of people who like it are developers, because they just don't
need to care about shared library API/ABI incompatibilities and
regressions anymore.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Centralize Configuration: ignore service list for newton

2016-05-20 Thread John Garbutt
Hi,

The current config template includes a list of "Services which consume this":
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/centralize-config-options.html#quality-view

I propose we drop this list from the template.

I am worried this is going to be hard to maintain, and hard to review
/ check. As such, it's of limited use to most deployers in its current
form.

I have been thinking about a possible future replacement. Two separate
sample configuration files, one for the Compute node, and one for
non-compute nodes (i.e. "controller" nodes). The reason for this
split, is our move towards removing sensitive credentials from compute
nodes, etc. Over time, we could prove the split in gate testing, where
we look for conf options accessed by computes that shouldn't be, and
v.v.


Having said that, for newton, I propose we concentrate on:
* completing the move of all the conf options (almost there)
* (skip tidy up of deprecated options)
* tidying up the main description of each conf option
* tidy up the Opt group and Opt types, i.e. int min/max, str choices, etc (see the sketch below)
** move options to use stevedore, where needed
* deprecating ones that are dumb / unused
* identifying "required" options (those you have to set)
* add config group descriptions
* note any surprising dependencies or value meanings (-1 vs 0 etc)
* ensure the docs and sample files are complete and correct
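
To make the Opt group / Opt types item above concrete, a minimal sketch of
what a tidied-up option definition can look like (the option and group
names below are made up for illustration):

```python
from oslo_config import cfg

example_opts = [
    cfg.IntOpt('workers',
               min=1, max=64, default=4,
               help='Number of worker processes to spawn.'),
    cfg.StrOpt('scheduling_mode',
               choices=['spread', 'pack'], default='spread',
               help='Strategy used when placing new instances.'),
]

example_group = cfg.OptGroup(name='example', title='Example options')


def register_opts(conf):
    # register the group and its options so they appear in the sample config
    conf.register_group(example_group)
    conf.register_opts(example_opts, group=example_group)
```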

I am thinking we could copy API ref and add a comment at the top of
each file (expecting a separate patch for each step):
* fix_opt_registration_consistency (see sfinucan's thread)
* fix_opt_description_indentation
* check_deprecation_status
* check_opt_group_and_type
* fix_opt_description

Does that sound like a good plan? If so, I can write this up in a wiki page.


Thanks,
John

PS
I also have concerns around the related config options bits and
possible values bit, but that's a different thread. Let's focus on the
main body of the description for now.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Deprecated options in sample configs?

2016-05-20 Thread Markus Zoeller

On 05/19/2016 09:18 PM, Ben Nemec wrote:

On 05/17/2016 07:27 PM, Matt Fischer wrote:


 If config sample files are being used as a living document then that
 would be a reason to leave the deprecated options in there. In my
 experience as a cloud deployer I never once used them in that manner
 so it didn't occur to me that people might, hence my question to the
 list.

 This may also indicate that people aren't checking release notes as
 we hope they are. A release note is where I would expect to find
 this information aggregated with all the other changes I should be
 aware of. That seems easier to me than aggregating that data myself
 by checking various sources.



One way to think about this is that the config file has to be accurate
or the code won't work, but release notes can miss things with no
consequences other than perhaps an annoyed operator. So they are sources
of truth about the state of options in a release or branch.


On this note, I had another thought about an alternative way to handle
this.  What if we generated one sample file without deprecated opts, and
another with them (either exclusively, or in addition to all the other
opts)?  That way there's a deprecation-free version for new deployers
and one for people who want to see all the current deprecations.



I'm not sure if it is well known that the "configuration reference" 
manual provides a summary page of new, updated and deprecated config 
options at [1].
Also, the release notes already have a section to announce the 
deprecation of a config option and this should be the source of truth 
IMO. From Nova I can tell that it is part of the normal reviews to 
ensure that a reno file (with a good explanation) is part of the change 
when deprecating something (see an updated-per-commit version at [2]). 
Introducing yet another way of telling people what's deprecated would 
weaken the position of the release notes, which I'd like to avoid.
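
For reference, such a reno note is just a small YAML file under
releasenotes/notes/; a sketch (the file name and option name here are made
up for illustration):

```yaml
# releasenotes/notes/deprecate-foo-option-1234abcd.yaml
---
deprecations:
  - The ``[DEFAULT] foo_bar`` option is deprecated and will be removed
    in a future release. Use ``[baz] foo_bar`` instead.
```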


References:
[1] 
http://docs.openstack.org/mitaka/config-reference/tables/conf-changes/nova.html
[2] 
http://docs.openstack.org/releasenotes/nova/unreleased.html#deprecation-notes






 Anyways, I have no strong cause for removing the deprecated options.
 I just wondered if it was a low hanging fruit and thought I would ask.


It's always good to have these kind of conversations, thanks for
starting it.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas] Multiple back-end support for lbaas v2

2016-05-20 Thread Sergey Belous
Hi.

Actually, you can specify multiple providers, but these configuration 
directives are repeatable and are not comma-separated. That means you should 
add another service_provider entry to the [service_providers] section as a 
separate line.
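
Roughly like this, reusing the drivers from your example (a sketch of the
[service_providers] section):

```
[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider = LOADBALANCER:radware:neutron_lbaas.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver
```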

And yes, you can pass the 'provider' parameter to create a loadbalancer with a 
specific driver (according to the lbaas code).

--
Best Regards,
Sergey Belous

> On 20 May 2016, at 11:47, Wilence Yao  wrote:
> 
> 
> Hi all,
> 
> Can I enable multiple service_providers for lbaas v2  at the same time?
> such as
> 
> ```
> service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default,
>  
> LOADBALANCER:radware:neutron_lbaas.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver
> ```
> 
> Then pass parameter 'provider' to create a loadbalancer of specific driver
> 
> ```
> neutron lbaas-loadbalancer-create --provider radware
> ```
> 
> Thanks for any help
> 
> Wilence Yao
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] enabled convergence background

2016-05-20 Thread Anant Patil
On 20-May-16 13:51, Steven Hardy wrote:
> On Fri, May 20, 2016 at 09:26:46AM +0200, Thomas Herve wrote:
>> On Fri, May 20, 2016 at 5:46 AM, xiangxinyong  wrote:
>>> Hi Team,
>>>
>>> I noticed that heat enabled convergence.
>>
>> I hope that's not the case :). We haven't made the switch yet. We
>> continue to do testing, and we're still finding issues so we won't
>> make it until we have a good confidence that it's mostly seamless.
> 
> 
> It's not ;)
> 
> https://github.com/openstack/heat/blob/master/heat/common/config.py#L181
> 
>>> Could someone tell me about the background about the convergence.
>>> or some specs to introduce it?
>>
>> I believe the spec is a good introduction to it:
>> https://specs.openstack.org/openstack/heat-specs/specs/juno/convergence.html
> 
> Rico also did a nice overview in his talk in Tokyo:
> 
> https://www.openstack.org/summit/tokyo-2015/videos/presentation/inwinstack-heat-up-your-stack-deep-dive-to-heat-learn-how-to-orchestrate-your-cloud
> 
> It's towards the end (from around 28mins)
> 

You can find more details in this talk:
https://www.openstack.org/videos/video/scalable-heat-engine-using-convergence

-- Anant


> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [osops-tools-monitoring][monitoring-for-openstack] Code duplication

2016-05-20 Thread Martin Magr
Greetings guys,

  there is a duplication of code within openstack/osops-tools-monitoring
and openstack/monitoring-for-openstack projects.

It seems that m-f-o became part of o-t-m, but the former project wasn't
deleted. I was just wondering if there is a reason for the duplication (or
fork, considering the projects have a different core group maintaining each)?

I'm assuming that m-f-o is just a leftover, so can you guys tell me what
was the reason to create one project to rule them all (e.g.
openstack/osops-tools-monitoring) instead of keeping the small projects?

Thanks in advance for answer,
Martin

-- 
Martin Mágr
Senior Software Engineer
Red Hat Czech
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron-lbaas] Multiple back-end support for lbaas v2

2016-05-20 Thread Wilence Yao
Hi all,

Can I enable multiple service_providers for lbaas v2  at the same time?
such as

```
service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default,
LOADBALANCER:radware:neutron_lbaas.services.loadbalancer.drivers.radware.driver.LoadBalancerDriver
```

Then pass parameter 'provider' to create a loadbalancer of specific driver

```
neutron lbaas-loadbalancer-create --provider radware
```

Thanks for any help

Wilence Yao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

2016-05-20 Thread Kwasniewska, Alicja
+1

Kind regards,
Alicja 

-Original Message-
From: Michał Jastrzębski [mailto:inc...@gmail.com] 
Sent: Wednesday, May 18, 2016 6:41 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core reviewer

+1 :)

On 18 May 2016 at 10:02, Ryan Hallisey  wrote:
> +1 nice work mlima!
>
> -Ryan
>
> - Original Message -
> From: "Vikram Hosakote (vhosakot)" 
> To: openstack-dev@lists.openstack.org
> Sent: Wednesday, May 18, 2016 9:45:53 AM
> Subject: Re: [openstack-dev] [kolla] Proposing Mauricio Lima for core   
> reviewer
>
> Yes, +1 for sure!
>
> Thanks a lot Mauricio for all the great work especially for adding 
> Manila to kolla and also updating the cleanup scripts and documentation!
>
>
> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: "Steven Dake (stdake)" < std...@cisco.com >
> Reply-To: " openstack-dev@lists.openstack.org " < 
> openstack-dev@lists.openstack.org >
> Date: Tuesday, May 17, 2016 at 3:00 PM
> To: " openstack-dev@lists.openstack.org " < 
> openstack-dev@lists.openstack.org >
> Subject: [openstack-dev] [kolla] Proposing Mauricio Lima for core 
> reviewer
>
> Hello core reviewers,
>
> I am proposing Mauricio (mlima on irc) for the core review team. He has done 
> a fantastic job reviewing, appearing in the middle of the pack for 90 days [1] 
> and as #2 over 45 days [2]. His IRC participation is also fantastic, 
> and he does a good job on technical work, including implementing Manila from zero 
> experience :) as well as code cleanup all over the code base and 
> documentation. Consider my proposal a +1 vote.
>
> I will leave voting open for 1 week until May 24th. Please vote +1 (approve), 
> or –2 (veto), or abstain. I will close voting early if there is a veto vote, 
> or a unanimous vote is reached.
>
> Thanks,
> -steve
>
> [1] http://stackalytics.com/report/contribution/kolla/90
> [2] http://stackalytics.com/report/contribution/kolla/45
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] enabled convergence background

2016-05-20 Thread Steven Hardy
On Fri, May 20, 2016 at 09:26:46AM +0200, Thomas Herve wrote:
> On Fri, May 20, 2016 at 5:46 AM, xiangxinyong  wrote:
> > Hi Team,
> >
> > I noticed that heat enabled convergence.
> 
> I hope that's not the case :). We haven't made the switch yet. We
> continue to do testing, and we're still finding issues so we won't
> make it until we have a good confidence that it's mostly seamless.


It's not ;)

https://github.com/openstack/heat/blob/master/heat/common/config.py#L181
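
For anyone who wants to experiment with it anyway, the switch is the single
option linked above; a sketch of heat.conf, assuming the flag is still the
boolean convergence_engine option:

```
[DEFAULT]
convergence_engine = true
```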

> > Could someone tell me about the background about the convergence.
> > or some specs to introduce it?
> 
> I believe the spec is a good introduction to it:
> https://specs.openstack.org/openstack/heat-specs/specs/juno/convergence.html

Rico also did a nice overview in his talk in Tokyo:

https://www.openstack.org/summit/tokyo-2015/videos/presentation/inwinstack-heat-up-your-stack-deep-dive-to-heat-learn-how-to-orchestrate-your-cloud

It's towards the end (from around 28mins)

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Is a volume always not attachable if it's 'migration_status' is 'migrating'?

2016-05-20 Thread liuxinguo
Hi cinder team,

When a volume's 'migration_status' is 'migrating', we have no idea whether this 
volume can be attached to a server and used:
If the migration is done via host copy, it is not attachable, but if the 
migration is handled by the driver, some drivers' implementations can keep it 
attachable.

End users have no way to tell through the cinder list or cinder show commands 
whether the volume is attachable; they only know that the volume is 'migrating'.

So should we assume that if a volume's 'migration_status' is 'migrating', it is 
not attachable?
Or do we need to add a flag to indicate whether a volume is attachable while its 
'migration_status' is 'migrating'?

I raise this question because I ran into some confusion about this while 
proposing this spec:
https://review.openstack.org/#/c/312853/ , anyone interested in this can 
take a look at the spec :)

Thanks for any input!
Wilson Liu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] enabled convergence background

2016-05-20 Thread Thomas Herve
On Fri, May 20, 2016 at 5:46 AM, xiangxinyong  wrote:
> Hi Team,
>
> I noticed that heat enabled convergence.

I hope that's not the case :). We haven't made the switch yet. We
continue to do testing, and we're still finding issues so we won't
make it until we have a good confidence that it's mostly seamless.

> Could someone tell me about the background about the convergence.
> or some specs to introduce it?

I believe the spec is a good introduction to it:
https://specs.openstack.org/openstack/heat-specs/specs/juno/convergence.html

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Find out the instance availability zone before creating volume.

2016-05-20 Thread Jiri Suchomel
Hi all,

it's been a long time since I opened a proposal to fix bug
https://bugs.launchpad.net/nova/+bug/1497253 "different availability
zone for nova and cinder when AZ is not explicitly given". I know the
solution is controversial, but could anyone interested take a look?

To summarize what's the situation and proposed solution about:

1. The user has cinder AZs and nova AZs with the same names

2. The AZs' physical locations are different, and the user wants instances to
be in the same AZ as their volumes

3. This is generally achieved by setting cross_az_attach option to
False, because since https://review.openstack.org/#/c/157041/ volumes
are created in the same AZ as instances.

4. However, what if the user doesn't explicitly provide an AZ when creating
the instance (so the scheduler can distribute the load evenly and
according to available resources)? This is the situation possibly
requiring a fix. In that case, nova uses the None value for the
availability zone at the time it calls volume_api.create. Cinder
creates the volume in some AZ it has available, and when nova finishes
creating the instance it creates it in one of its available AZs.
There's no relation between these two, so if they end up being
different, we'll hit the error "Instance %(instance)s and volume
%(vol)s are not in the same availability_zone"

So, my proposal (as expressed in https://review.openstack.org/225119)
is that:

- if cross_az_attach is set to false, nova should ensure the cinder and
  nova AZs match, AND it should make sure this rule also holds in the
  case when the AZ was not specified by the user. Thus I propose to
  look up the instance's real AZ BEFORE actually trying to create the
  volume, and use this value for the volume as well.
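
For anyone who hasn't used it, the option in question lives in nova.conf (a
sketch; check your release's config reference for the exact group and
default):

```
[cinder]
cross_az_attach = False
```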


Jiri
-- 
Jiri Suchomel

SUSE LINUX, s.r.o.
Lihovarská 1060/12
tel: +420 284 028 960
190 00 Praha 9, Czech Republic    http://www.suse.cz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] Cross OpenStac L2 Networking Specification

2016-05-20 Thread Shinobu Kinjo
Hi Team,

Probably we would not be able to make any progress without completing
${subject} which is under review right now. [1]

It's because:

 1. The blueprint of the Tricircle is largely based on this feature at the moment.
 2. Further implementation will be mostly based on this feature.
 3. Our direction at this stage will be based on the Cross OpenStack L2
Networking specification.

Given that, I would like to complete this specification as soon as
possible. At the same time, we should try not to be uncompromising
about standards where we can avoid it.

What do you think?

Any suggestion, objection, idea or whatever you could come up with
would be really appreciated.

Thank you for your response in advance!

[1] https://review.openstack.org/#/c/304540/

Cheers,
Shinobu

-- 
Email:
shin...@linux.com
shin...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev