Re: [openstack-dev] [Zun] Proposal for a change of the Zun core team

2017-04-28 Thread Pradeep Singh
+1 for me.

Thanks,
Pradeep Singh

On Sat, Apr 29, 2017 at 9:35 AM, Hongbin Lu  wrote:

> Hi all,
>
>
>
> I propose a change to Zun’s core team membership as below:
>
>
>
> + Feng Shengqin (feng-shengqin)
>
> - Wang Feilong (flwang)
>
>
>
> Feng Shengqin has contributed a lot to the Zun project. Her contributions
> include BPs, bug fixes, and reviews. In particular, she completed an
> essential BP and had a lot of accepted commits in Zun’s repositories. I
> think she is qualified for the core reviewer position. I would like to
> thank Wang Feilong for his interest in joining the team when the project
> was founded. I believe we will remain friends regardless of his core
> membership.
>
>
>
> By convention, we require a minimum of 4 +1 votes from Zun core reviewers
> within a 1 week voting window (consider this proposal as a +1 vote from
> me). A vote of -1 is a veto. If we cannot get enough votes or there is a
> veto vote prior to the end of the voting window, this proposal is rejected.
>
>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Zun] Proposal for a change of the Zun core team

2017-04-28 Thread Hongbin Lu
Hi all,

I propose a change to Zun's core team membership as below:

+ Feng Shengqin (feng-shengqin)
- Wang Feilong (flwang)

Feng Shengqin has contributed a lot to the Zun project. Her contributions 
include BPs, bug fixes, and reviews. In particular, she completed an essential 
BP and had a lot of accepted commits in Zun's repositories. I think she is 
qualified for the core reviewer position. I would like to thank Wang Feilong 
for his interest in joining the team when the project was founded. I believe 
we will remain friends regardless of his core membership.

By convention, we require a minimum of 4 +1 votes from Zun core reviewers 
within a 1 week voting window (consider this proposal as a +1 vote from me). A 
vote of -1 is a veto. If we cannot get enough votes or there is a veto vote 
prior to the end of the voting window, this proposal is rejected.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-04-28 Thread Renat Akhmerov
This looks like a simple and elegant way to solve the issue. 100% supported by 
me (and hopefully others).

Thanks for addressing it.

Renat

On 29 Apr 2017, 06:19 +0700, Monty Taylor , wrote:
> On 04/28/2017 06:07 PM, Adrian Turjak wrote:
> >
> > This sounds like a fantastic path forward as the version in the service
> > type is a source of frustration in some ways. I personally love the
> > versionless discoverability of Keystone as an API model.
>
> ++ ... I'll follow up on Monday with an email about consumption of
> version discovery.
>
> > I'm also assuming this is related to this repo here:
> > https://github.com/openstack/service-types-authority
> >
> > Are there plans to actually fill that repo out and start building the
> > full 'official' catalog of service types? Because right now that repo is
> > missing many services but appears to actually be a good place to list
> > what all the various service types are already in use.
>
> Yes, absolutely. If we can get this particular party moving I'd like to
> use that as a little motivation to chug through and get that repo
> completed. (It's not super hard - but without an answer to what to do
> about the old names, the conversations stall out a bit)
>
> > I know that trying to choose a good new service type for a project is
> > hard because finding a list for the ones already in use isn't that easy.
> > I was sad to find that repo, although ideal, was lacking.
>
> ++ totally agree.
>
> >
> > On 29 Apr. 2017 10:26 am, Monty Taylor  wrote:
> >
> > Hey everybody!
> >
> > Yay! (I'm sure you're all saying this, given the topic. I'll let you
> > collect yourself from your exuberant celebration)
> >
> > == Background ==
> >
> > As I'm sure you all know, we've been trying to make some headway for a
> > while on getting service-types that are registered in the keystone
> > service catalog to be consistent. The reason for this is so that API
> > Consumers can know how to request a service from the catalog. That
> > might
> > sound like a really easy task - but uh-oh, you'd be so so wrong. :)
> >
> > The problem is that we have some services that went down the path of
> > suggesting people register a new service in the catalog with a version
> > appended. This pattern was actually started by nova for the v3 api -
> > with "computev3" - but we walked back from it. The pattern was picked up
> > by at least cinder (volumev2, volumev3) and mistral (workflowv2) that I
> > am aware of. We're also suggesting in the service-types-authority that
> > manila go by "shared-file-system" instead of "share".
> >
> > (Incidentally, this is related to a much larger topic of version
> > discovery, which I will not bore you with in this email, but about
> > which
> > I have a giant pile of words just waiting for you in a little bit. Get
> > excited about that!)
> >
> > == Proposed Solution ==
> >
> > As a follow up to the consuming version discovery spec, which you
> > should
> > absolutely run away from and never read, I wrote these:
> >
> > https://review.openstack.org/#/c/460654/ (Consuming historical aliases)
> > and
> > https://review.openstack.org/#/c/460539/ (Listing historical aliases)
> >
> > It's not a particularly clever proposal - but it breaks down like this:
> >
> > * Make a list of the known historical aliases we're aware of - in a
> > place that isn't just in one of our python libraries (460539)
> > * Write down a process for using them as part of finding a service from
> > the catalog so that there is a clear method that can be implemented by
> > anyone doing libraries or REST interactions. (460654)
> > * Get agreement on that process as the "recommended" way to look up
> > services by service-type in the catalog.
> > * Implement it in the base libraries OpenStack ships.
> > * Contact the authors of as many OpenStack API libraries that we can
> > find.
> > * Add tempest tests to verify the mappings in both directions.
> > * Change things in devstack/deployer guides.
> >
> > The process as described is backwards compatible. That is, once
> > implemented it means that a user can request "volumev2" or
> > "block-storage" with version=2 - and both will return the endpoint the
> > user expects. It also means that we're NOT asking existing clouds to
> > run
> > out and break their users. New cloud deployments can do the new thing -
> > but the old values are handled in both directions.
> >
> > There is a hole, which is that people who are not using the base libs
> > OpenStack ships may find themselves with a new cloud that has a
> > different service-type in the catalog than they have used before. It's
> > not ideal, to be sure. BUT - hopefully active outreach to the community
> > libraries coupled with documentation will keep the issues to a minimum.
> >
> > If we can agree on the matching and fallback model, I am
> > volunteering to
> > do the work to implement in every client library in which it needs
> > to be
> > 

Re: [openstack-dev] All Hail our Newest Release Name - OpenStack Rocky

2017-04-28 Thread Steven Dake (stdake)
Monty,

I guess the obligatory joke “It is going to be a Rocky release ahead!” must be 
made. \o/

Regards
-steve


-Original Message-
From: Monty Taylor 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, April 28, 2017 at 2:54 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
, Openstack Users 

Subject: [openstack-dev] All Hail our Newest Release Name - OpenStack Rocky

Hey everybody!

There isn't a ton more to say past the subject. The "R" release of 
OpenStack shall henceforth be known as "Rocky".

I believe it's the first time we've managed to name a release after a 
community member - so please everyone buy RockyG a drink if you see her 
in Boston.

For those of you who remember the actual election results, you may 
recall that "Radium" was the top choice. Radium was judged to have legal 
risk, so as per our name selection process, we moved to the next name on 
the list.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [Forum] Moderators needed!

2017-04-28 Thread Arkady.Kanevsky
Shamail,
I can moderate either
Achieving Resiliency at Scales of 1000+ or
High Availability in OpenStack

Thanks,
Arkady

From: Shamail Tahir [mailto:itzsham...@gmail.com]
Sent: Friday, April 28, 2017 7:23 AM
To: openstack-operators ; OpenStack 
Development Mailing List (not for usage questions) 
; user-committee 

Subject: [User-committee] [Forum] Moderators needed!

Hi everyone,

Most of the proposed/accepted Forum sessions currently have moderators but 
there are six sessions that do not have a confirmed moderator yet. Please look 
at the list below and let us know if you would be willing to help moderate any 
of these sessions.

The topics look really interesting but it will be difficult to keep the 
sessions on the schedule if there is not an assigned moderator. We look forward 
to seeing you at the Summit/Forum in Boston soon!

Achieving Resiliency at Scales of 1000+

Feedback from users for I18n & translation - important part?

Neutron Pain Points

Making Neutron easy for people who want basic networking

High Availability in OpenStack

Cloud-Native Design/Refactoring across OpenStack



Thanks,
Doug, Emilien, Melvin, Mike, Shamail & Tom
Forum Scheduling Committee
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] All Hail our Newest Release Name - OpenStack Rocky

2017-04-28 Thread Ihar Hrachyshka
Hi,

It would also be nice to actually get an email to vote. I haven't seen one.
:) Not that I'm not ok with the name, just saying that it may be worth
exploring what happened.

Ihar

On Fri, Apr 28, 2017 at 3:34 PM Morgan Fainberg 
wrote:

> It would be nice if there was a bit more transparency on the "legal
> risk" (conflicts with another project, etc), but thanks for passing on
> the information nonetheless. I, for one, welcome our new "Rocky"
> overlord project name :)
>
> Cheers,
> --Morgan
>
> On Fri, Apr 28, 2017 at 2:54 PM, Monty Taylor 
> wrote:
> > Hey everybody!
> >
> > There isn't a ton more to say past the subject. The "R" release of
> OpenStack
> > shall henceforth be known as "Rocky".
> >
> > I believe it's the first time we've managed to name a release after a
> > community member - so please everyone buy RockyG a drink if you see her
> in
> > Boston.
> >
> > For those of you who remember the actual election results, you may recall
> > that "Radium" was the top choice. Radium was judged to have legal risk,
> so
> > as per our name selection process, we moved to the next name on the list.
> >
> > Monty
> >
> > ___
> > Mailing list:
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > Post to : openst...@lists.openstack.org
> > Unsubscribe :
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-04-28 Thread Monty Taylor

On 04/28/2017 06:07 PM, Adrian Turjak wrote:


This sounds like a fantastic path forward as the version in the service
type is a source of frustration in some ways. I personally love the
versionless discoverability of Keystone as an API model.


++ ... I'll follow up on Monday with an email about consumption of 
version discovery.



I'm also assuming this is related to this repo here:
https://github.com/openstack/service-types-authority


Are there plans to actually fill that repo out and start building the
full 'official' catalog of service types? Because right now that repo is
missing many services but appears to actually be a good place to list
what all the various service types are already in use.


Yes, absolutely. If we can get this particular party moving I'd like to 
use that as a little motivation to chug through and get that repo 
completed. (It's not super hard - but without an answer to what to do 
about the old names, the conversations stall out a bit)



I know that trying to choose a good new service type for a project is
hard because finding a list for the ones already in use isn't that easy.
I was sad to find that repo, although ideal, was lacking.


++ totally agree.



On 29 Apr. 2017 10:26 am, Monty Taylor  wrote:

Hey everybody!

Yay! (I'm sure you're all saying this, given the topic. I'll let you
collect yourself from your exuberant celebration)

== Background ==

As I'm sure you all know, we've been trying to make some headway for a
while on getting service-types that are registered in the keystone
service catalog to be consistent. The reason for this is so that API
Consumers can know how to request a service from the catalog. That
might
sound like a really easy task - but uh-oh, you'd be so so wrong. :)

The problem is that we have some services that went down the path of
suggesting people register a new service in the catalog with a version
appended. This pattern was actually started by nova for the v3 api -
with "computev3" - but we walked back from it. The pattern was picked up
by at least cinder (volumev2, volumev3) and mistral (workflowv2) that I
am aware of. We're also suggesting in the service-types-authority that
manila go by "shared-file-system" instead of "share".

(Incidentally, this is related to a much larger topic of version
discovery, which I will not bore you with in this email, but about
which
I have a giant pile of words just waiting for you in a little bit. Get
excited about that!)

== Proposed Solution ==

As a follow up to the consuming version discovery spec, which you
should
absolutely run away from and never read, I wrote these:

https://review.openstack.org/#/c/460654/ (Consuming historical aliases)
and
https://review.openstack.org/#/c/460539/ (Listing historical aliases)

It's not a particularly clever proposal - but it breaks down like this:

* Make a list of the known historical aliases we're aware of - in a
place that isn't just in one of our python libraries (460539)
* Write down a process for using them as part of finding a service from
the catalog so that there is a clear method that can be implemented by
anyone doing libraries or REST interactions. (460654)
* Get agreement on that process as the "recommended" way to look up
services by service-type in the catalog.
* Implement it in the base libraries OpenStack ships.
* Contact the authors of as many OpenStack API libraries that we can
find.
* Add tempest tests to verify the mappings in both directions.
* Change things in devstack/deployer guides.

The process as described is backwards compatible. That is, once
implemented it means that a user can request "volumev2" or
"block-storage" with version=2 - and both will return the endpoint the
user expects. It also means that we're NOT asking existing clouds to
run
out and break their users. New cloud deployments can do the new thing -
but the old values are handled in both directions.

There is a hole, which is that people who are not using the base libs
OpenStack ships may find themselves with a new cloud that has a
different service-type in the catalog than they have used before. It's
not ideal, to be sure. BUT - hopefully active outreach to the community
libraries coupled with documentation will keep the issues to a minimum.

If we can agree on the matching and fallback model, I am
volunteering to
do the work to implement in every client library in which it needs
to be
implemented across OpenStack and to add the tempest tests. (it's
actually mostly a patch to keystoneauth, so that's actually not _that_
impressive of a volunteer) I will also reach out to as many of the
OpenStack API client library authors as I can find, point them at the
docs and suggest they add the 

Re: [openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-04-28 Thread Eric Fried
I love this.  Will it be done by July 20th [1] so I can use it in Pike
for [2]?

[1] https://wiki.openstack.org/wiki/Nova/Pike_Release_Schedule
[2] https://review.openstack.org/#/c/458257/4/nova/utils.py@1508

On 04/28/2017 05:26 PM, Monty Taylor wrote:
> Hey everybody!
> 
> Yay! (I'm sure you're all saying this, given the topic. I'll let you
> collect yourself from your exuberant celebration)
> 
> == Background ==
> 
> As I'm sure you all know, we've been trying to make some headway for a
> while on getting service-types that are registered in the keystone
> service catalog to be consistent. The reason for this is so that API
> Consumers can know how to request a service from the catalog. That might
> sound like a really easy task - but uh-oh, you'd be so so wrong. :)
> 
> The problem is that we have some services that went down the path of
> suggesting people register a new service in the catalog with a version
> appended. This pattern was actually started by nova for the v3 api -
> with "computev3" - but we walked back from it. The pattern was picked up
> by at least cinder (volumev2, volumev3) and mistral (workflowv2) that I
> am aware of. We're also suggesting in the service-types-authority that
> manila go by "shared-file-system" instead of "share".
> 
> (Incidentally, this is related to a much larger topic of version
> discovery, which I will not bore you with in this email, but about which
> I have a giant pile of words just waiting for you in a little bit. Get
> excited about that!)
> 
> == Proposed Solution ==
> 
> As a follow up to the consuming version discovery spec, which you should
> absolutely run away from and never read, I wrote these:
> 
> https://review.openstack.org/#/c/460654/ (Consuming historical aliases)
> and
> https://review.openstack.org/#/c/460539/ (Listing historical aliases)
> 
> It's not a particularly clever proposal - but it breaks down like this:
> 
> * Make a list of the known historical aliases we're aware of - in a
> place that isn't just in one of our python libraries (460539)
> * Write down a process for using them as part of finding a service from
> the catalog so that there is a clear method that can be implemented by
> anyone doing libraries or REST interactions. (460654)
> * Get agreement on that process as the "recommended" way to look up
> services by service-type in the catalog.
> * Implement it in the base libraries OpenStack ships.
> * Contact the authors of as many OpenStack API libraries that we can find.
> * Add tempest tests to verify the mappings in both directions.
> * Change things in devstack/deployer guides.
> 
> The process as described is backwards compatible. That is, once
> implemented it means that a user can request "volumev2" or
> "block-storage" with version=2 - and both will return the endpoint the
> user expects. It also means that we're NOT asking existing clouds to run
> out and break their users. New cloud deployments can do the new thing -
> but the old values are handled in both directions.
> 
> There is a hole, which is that people who are not using the base libs
> OpenStack ships may find themselves with a new cloud that has a
> different service-type in the catalog than they have used before. It's
> not ideal, to be sure. BUT - hopefully active outreach to the community
> libraries coupled with documentation will keep the issues to a minimum.
> 
> If we can agree on the matching and fallback model, I am volunteering to
> do the work to implement in every client library in which it needs to be
> implemented across OpenStack and to add the tempest tests. (it's
> actually mostly a patch to keystoneauth, so that's actually not _that_
> impressive of a volunteer) I will also reach out to as many of the
> OpenStack API client library authors as I can find, point them at the
> docs and suggest they add the support.
> 
> Thoughts? Anyone violently opposed?
> 
> Thanks for reading...
> 
> Monty
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] All Hail our Newest Release Name - OpenStack Rocky

2017-04-28 Thread Morgan Fainberg
It would be nice if there was a bit more transparency on the "legal
risk" (conflicts with another project, etc), but thanks for passing on
the information nonetheless. I, for one, welcome our new "Rocky"
overlord project name :)

Cheers,
--Morgan

On Fri, Apr 28, 2017 at 2:54 PM, Monty Taylor  wrote:
> Hey everybody!
>
> There isn't a ton more to say past the subject. The "R" release of OpenStack
> shall henceforth be known as "Rocky".
>
> I believe it's the first time we've managed to name a release after a
> community member - so please everyone buy RockyG a drink if you see her in
> Boston.
>
> For those of you who remember the actual election results, you may recall
> that "Radium" was the top choice. Radium was judged to have legal risk, so
> as per our name selection process, we moved to the next name on the list.
>
> Monty
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][tc][cinder][mistral][manila] A path forward to shiny consistent service types

2017-04-28 Thread Monty Taylor

Hey everybody!

Yay! (I'm sure you're all saying this, given the topic. I'll let you 
collect yourself from your exuberant celebration)


== Background ==

As I'm sure you all know, we've been trying to make some headway for a 
while on getting service-types that are registered in the keystone 
service catalog to be consistent. The reason for this is so that API 
Consumers can know how to request a service from the catalog. That might 
sound like a really easy task - but uh-oh, you'd be so so wrong. :)


The problem is that we have some services that went down the path of 
suggesting people register a new service in the catalog with a version 
appended. This pattern was actually started by nova for the v3 api - 
with "computev3" - but we walked back from it. The pattern was picked up 
by at least cinder (volumev2, volumev3) and mistral (workflowv2) that I 
am aware of. We're also suggesting in the service-types-authority that 
manila go by "shared-file-system" instead of "share".


(Incidentally, this is related to a much larger topic of version 
discovery, which I will not bore you with in this email, but about which 
I have a giant pile of words just waiting for you in a little bit. Get 
excited about that!)


== Proposed Solution ==

As a follow up to the consuming version discovery spec, which you should 
absolutely run away from and never read, I wrote these:


https://review.openstack.org/#/c/460654/ (Consuming historical aliases)
and
https://review.openstack.org/#/c/460539/ (Listing historical aliases)

It's not a particularly clever proposal - but it breaks down like this:

* Make a list of the known historical aliases we're aware of - in a 
place that isn't just in one of our python libraries (460539)
* Write down a process for using them as part of finding a service from 
the catalog so that there is a clear method that can be implemented by 
anyone doing libraries or REST interactions. (460654)
* Get agreement on that process as the "recommended" way to look up 
services by service-type in the catalog.

* Implement it in the base libraries OpenStack ships.
* Contact the authors of as many OpenStack API libraries that we can find.
* Add tempest tests to verify the mappings in both directions.
* Change things in devstack/deployer guides.

The process as described is backwards compatible. That is, once 
implemented it means that a user can request "volumev2" or 
"block-storage" with version=2 - and both will return the endpoint the 
user expects. It also means that we're NOT asking existing clouds to run 
out and break their users. New cloud deployments can do the new thing - 
but the old values are handled in both directions.
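
To make the fallback concrete, here is a minimal Python sketch of the
matching described above. It is illustrative only - the alias table is
hand-written here rather than pulled from the service-types-authority data,
and the exact precedence rules are what the two reviews above are meant to
settle:

    # Illustrative only - real alias data would come from the
    # service-types-authority repo, not be hard-coded like this.
    HISTORICAL_ALIASES = {
        'block-storage': ['volumev3', 'volumev2', 'volume'],
        'workflow': ['workflowv2'],
        'shared-file-system': ['share'],
    }

    # Reverse map so a request for an old name (e.g. "volumev2") still works.
    OFFICIAL_TYPES = {alias: official
                      for official, aliases in HISTORICAL_ALIASES.items()
                      for alias in aliases}

    def find_endpoint(catalog, requested_type):
        """catalog is assumed to be a dict of service-type -> endpoint URL."""
        official = OFFICIAL_TYPES.get(requested_type, requested_type)
        # 1. Exact match on the official type.
        if official in catalog:
            return catalog[official]
        # 2. Fall back through the historical aliases, newest first.
        for alias in HISTORICAL_ALIASES.get(official, []):
            if alias in catalog:
                return catalog[alias]
        # 3. Last resort: whatever the caller literally asked for.
        return catalog.get(requested_type)

    # A cloud that still registers the old name keeps working,
    # and so does a user who still asks for the old name:
    catalog = {'volumev2': 'https://cloud.example.com:8776/v2/%(tenant_id)s'}
    assert find_endpoint(catalog, 'block-storage') == catalog['volumev2']
    assert find_endpoint(catalog, 'volumev2') == catalog['volumev2']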


There is a hole, which is that people who are not using the base libs 
OpenStack ships may find themselves with a new cloud that has a 
different service-type in the catalog than they have used before. It's 
not ideal, to be sure. BUT - hopefully active outreach to the community 
libraries coupled with documentation will keep the issues to a minimum.


If we can agree on the matching and fallback model, I am volunteering to 
do the work to implement in every client library in which it needs to be 
implemented across OpenStack and to add the tempest tests. (it's 
actually mostly a patch to keystoneauth, so that's actually not _that_ 
impressive of a volunteer) I will also reach out to as many of the 
OpenStack API client library authors as I can find, point them at the 
docs and suggest they add the support.


Thoughts? Anyone violently opposed?

Thanks for reading...

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] All Hail our Newest Release Name - OpenStack Rocky

2017-04-28 Thread Tom Barron


On 04/28/2017 05:54 PM, Monty Taylor wrote:
> Hey everybody!
> 
> There isn't a ton more to say past the subject. The "R" release of
> OpenStack shall henceforth be known as "Rocky".
> 
> I believe it's the first time we've managed to name a release after a
> community member - so please everyone buy RockyG a drink if you see her
> in Boston.

Deal!


> 
> For those of you who remember the actual election results, you may
> recall that "Radium" was the top choice. Radium was judged to have legal
> risk, so as per our name selection process, we moved to the next name on
> the list.
> 
> Monty
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] All Hail our Newest Release Name - OpenStack Rocky

2017-04-28 Thread Monty Taylor

Hey everybody!

There isn't a ton more to say past the subject. The "R" release of 
OpenStack shall henceforth be known as "Rocky".


I believe it's the first time we've managed to name a release after a 
community member - so please everyone buy RockyG a drink if you see her 
in Boston.


For those of you who remember the actual election results, you may 
recall that "Radium" was the top choice. Radium was judged to have legal 
risk, so as per our name selection process, we moved to the next name on 
the list.


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [Forum] Moderators needed!

2017-04-28 Thread UKASICK, ANDREW
Hi Shamail.

Alan Meadows will still be doing the Cloud-Native Design/Refactoring across 
OpenStack session.  He just forgot to confirm that.
Please accept my message here as his confirmation.

-Andy


From: Shamail Tahir [mailto:itzsham...@gmail.com]
Sent: Friday, April 28, 2017 7:23 AM
To: openstack-operators ; OpenStack 
Development Mailing List (not for usage questions) 
; user-committee 

Subject: [User-committee] [Forum] Moderators needed!

Hi everyone,

Most of the proposed/accepted Forum sessions currently have moderators but 
there are six sessions that do not have a confirmed moderator yet. Please look 
at the list below and let us know if you would be willing to help moderate any 
of these sessions.

The topics look really interesting but it will be difficult to keep the 
sessions on the schedule if there is not an assigned moderator. We look forward 
to seeing you at the Summit/Forum in Boston soon!

Achieving Resiliency at Scales of 1000+

Feedback from users for I18n & translation - important part?

Neutron Pain Points

Making Neutron easy for people who want basic networking

High Availability in OpenStack

Cloud-Native Design/Refactoring across OpenStack



Thanks,
Doug, Emilien, Melvin, Mike, Shamail & Tom
Forum Scheduling Committee
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][gate] tempest slow - where do we execute them in gate?

2017-04-28 Thread Matt Riedemann

On 4/17/2017 1:55 PM, Ihar Hrachyshka wrote:

But since it's not executed anywhere in tempest gate, even as
non-voting (?), it's effectively dead code that may be long broken
without anyone knowing. Of course there are consumers of the tests
downstream, but for those consumers it's a tough call to start
depending on the tests if they are not sanity checked by tempest
itself. Wouldn't it make sense to have some job in tempest gate that
would execute those tests (maybe just them to speed up such a job?
maybe non-voting? maybe even as periodic? but there should be
something that keeps it green in long run).


I enabled the job that runs the slow scenario tests in the nova 
experimental queue:


https://review.openstack.org/#/c/458676/

Because we were changing some of the encrypted volume code in nova that 
is only exercised by a scenario test marked slow, and, as you said, I 
needed to run those tests to have coverage on the changes. That 
works for me in this case; the problem is I had to (1) know those tests 
existed, (2) go out of my way looking for their results in a job run, 
find out they weren't there, and then hunt them down. That's exceptional 
for most patches, so yes, things are most likely going to slip through and 
break because we're not gating on them. I understand why the QA team 
did what they did, though, so I'm not pushing back on that.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Monty Taylor
See - this is what happens when I write an entirely too long email while 
on coffee number one. :)


Yes - you are, obviously, entirely right. I had clearly blocked those 
pieces out of my memory. I still blame the coffee. It's down to 
the round-robin impl.


Thanks for keeping me honest.

On 04/28/2017 10:46 AM, Mike Dorman wrote:

Maybe we are talking about two different things here?  I’m a bit confused.

Our Glance config in nova.conf on HV’s looks like this:

[glance]
api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
glance_api_insecure=True
glance_num_retries=4
glance_protocol=http

So we do provide the full URLs, and there is SSL support.  Right?  I am fairly 
certain we tested this to ensure that if one URL fails, nova goes on to retry 
the next one.  That failure does not get bubbled up to the user (which is 
ultimately the goal.)
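
(The behaviour described above - pick one of the configured api_servers and
quietly move on to the next on failure - boils down to something like the
sketch below. Illustrative only, using plain requests rather than the actual
nova/glanceclient code path:)

    import random
    import requests

    def fetch_image_metadata(api_servers, image_id, num_retries=4):
        """Try each configured glance endpoint until one answers."""
        servers = list(api_servers)
        random.shuffle(servers)      # the client-side "load balancer"
        last_error = None
        for server in servers[:num_retries + 1]:
            try:
                resp = requests.get('%s/v2/images/%s' % (server, image_id),
                                    timeout=5)
                resp.raise_for_status()
                return resp.json()   # success - caller never sees the failures
            except requests.RequestException as exc:
                last_error = exc     # try the next server in the list
        raise last_error

    api_servers = ['http://glance1:9292', 'http://glance2:9292',
                   'http://glance3:9292', 'http://glance4:9292']
    # fetch_image_metadata(api_servers, '<image uuid>')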

I don’t disagree with you that the client side choose-a-server-at-random is not 
a great load balancer.  (But isn’t this roughly the same thing that 
oslo-messaging does when we give it a list of RMQ servers?)  For us it’s more 
about the failure handling if one is down than it is about actually equally 
distributing the load.

In my mind options One and Two are the same, since today we are already 
providing full URLs and not only server names.  At the end of the day, I don’t 
feel like there is a compelling argument here to remove this functionality 
(that people are actively making use of.)

To be clear, I, and I think others, are fine with nova by default getting the 
Glance endpoint from Keystone.  And that in Keystone there should exist only 
one Glance endpoint.  What I’d like to see remain is the ability to override 
that for nova-compute and to target more than one Glance URL for purposes of 
fail over.

Thanks,
Mike




On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:

Thank you both for your feedback - that's really helpful.

Let me say a few more words about what we're trying to accomplish here
overall so that maybe we can figure out what the right way forward is.
(it may be keeping the glance api servers setting, but let me at least
make the case real quick)

 From a 10,000 foot view, the thing we're trying to do is to get nova's
consumption of all of the OpenStack services it uses to be less special.

The clouds have catalogs which list information about the services -
public, admin and internal endpoints and whatnot - and then we're asking
admins to not only register that information with the catalog, but to
also put it into the nova.conf. That means that any updating of that
info needs to be an API call to keystone and also a change to nova.conf.
If we, on the other hand, use the catalog, then nova can pick up changes
in real time as they're rolled out to the cloud - and there is hopefully
a sane set of defaults we could choose (based on operator feedback like
what you've given) so that in most cases you don't have to tell nova
where to find glance _at_all_ because the cloud already knows where it
is. (nova would know to look in the catalog for the internal interface of
the image service - for instance - there's no need to ask an operator to
add to the config "what is the service_type of the image service we
should talk to" :) )
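
(To make the "just use the catalog" idea concrete, here is a rough
keystoneauth sketch - the credentials and URLs are made up for illustration:)

    from keystoneauth1 import loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='https://keystone.example.com/v3',
        username='nova', password='secret', project_name='service',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # No [glance]api_servers needed: ask the catalog for the internal
    # endpoint of the image service.
    image_endpoint = sess.get_endpoint(service_type='image',
                                       interface='internal')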

Now - glance, and the thing you like that we don't - is especially hairy
because of the api_servers list. The list, as you know, is just a list
of servers, not even of URLs. This  means it's not possible to configure
nova to talk to glance over SSL (which I know you said works for you,
but we'd like for people to be able to choose to SSL all their things)
We could add that, but it would be an additional pile of special config.
Because of all of that, we also have to attempt to make working URLs
from what is usually a list of IP addresses. This is also clunky and
prone to failure.

The implementation on the underside of the api_servers code is the
world's dumbest load balancer. It picks a server from the  list at
random and uses it. There is no facility for dealing with a server in
the list that stops working or for allowing rolling upgrades like there
would be with a real load-balancer across the set. If one of the API
servers goes away, we have no context to know that, so just some of your
internal calls to glance fail.

Those are the issues - basically:
- current config is special and fragile
- impossible to SSL
- inflexible/underpowered de facto software load balancer

Now - as is often the case - it turns out the combo of those things is
working very well for you - so we need to adjust our thinking on the
topic a bit. Let me toss out some alternatives and see what you think:

Alternative One - Do Both things

We add the new "consume from catalog" and make it default. (and make it

Re: [openstack-dev] [neutron] stepping down from neutron core team

2017-04-28 Thread Kevin Benton
Thanks for all of your work. Come back soon. ;)

On Apr 28, 2017 05:02, "Miguel Angel Ajo Pelayo" 
wrote:

>
> Hi everybody,
>
> Some of you already know, but I wanted to make it official.
>
> Recently I moved to work on the networking-ovn component,
> and OVS/OVN itself, and while I'll stick around and I will be available
> on IRC for any questions, I'm already not doing good work with
> neutron reviews, so...
>
> It's time to leave room for new reviewers.
>
> It's always a pleasure to work with you folks.
>
> Best regards,
> Miguel Ángel Ajo.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] summary of the virtual meetup

2017-04-28 Thread Dmitry Tantsur

Hi all!

We had a virtual meetup on Tuesday; here's a short summary.
Detailed notes are here: https://etherpad.openstack.org/p/ironic-virtual-meetup

We went over the priorities list to figure out what changes have to be made to 
accommodate the recent team changes. Four priorities found new owners: rescue 
mode, python 3 support, reference guide and changing default CLI API version.
Three changes did not, and are to be dumped from the list: specific faults API, 
documentation reorganization and IPA authentication. On the bright side, we 
found more people to help with boot-from-volume and redfish work.


Next, Jim introduced the latest change to how Nova schedules bare metal 
instances. It did raise a few questions in the community, which we agreed to 
direct at the Nova team. I personally will get more involved in Ironic-Nova 
interactions (as I should have done long ago), and Nisha volunteered to also 
help, especially around the capabilities-aka-traits future discussion.


Then we talked about 3rdparty CI status. We got good presence from people 
running CI (thanks for that!). Among the action items were:
1. Re-establish regular ironic QA/CI meeting to create a contact point between 
the developers and the CI maintainers.
2. Figure out existing practices (ask Cinder, Neutron, Nova, TripleO) on 
escalating 3rdparty CI issues caused by problems outside of the CI itself and 
the driver it tests.

3. More actively share best practices to run CI between CI maintainers
4. Track 3rdparty CI outages on our main whiteboard and create a new etherpad 
for CI debugging coordination.
5. CI maintainers to investigate running standalone tests instead of our full 
tempest test.


We discussed the future of the driver properties API, which ended up in a weird 
position after dynamic drivers were introduced. Our initial plan is to create 
new API endpoints:

 GET /v1/nodes//driver-properties
 GET /v1/driver-interfaces/power/
The former will return a summary of driver information for a node, while the 
latter will do the same for a driver interface. These may not be the final endpoints - we agreed 
to continue the discussion on the spec (to be written by Ruby).
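
Purely to illustrate the shape under discussion (nothing here is implemented
yet, and the node UUID and the "ipmitool" interface name below are made-up
examples), a client call against the proposed endpoints might eventually look
something like:

    import requests

    IRONIC = 'http://ironic.example.com:6385'
    HEADERS = {'X-OpenStack-Ironic-API-Version': 'latest'}

    # Summary of driver information for one node:
    node_props = requests.get(
        IRONIC + '/v1/nodes/1be26c0b-03f2-4d2e-ae87-c02d7f33c123'
                 '/driver-properties',
        headers=HEADERS).json()

    # Properties for a single driver interface (here a power interface):
    power_props = requests.get(
        IRONIC + '/v1/driver-interfaces/power/ipmitool',
        headers=HEADERS).json()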


Finally, we discussed speeding up reviews for vendor patches. We got feedback 
that the generic work the core team plans on always gets prioritized at the 
expense of vendor proposals. We tried a short root cause analysis of this 
issue, and discussed several proposals. We ended up with the usual 
recommendation for vendors to increase their involvement in reviewing code and 
contributing things outside of their immediate field of interest. We also 
agreed that it may help a bit to move vendor documentation out of tree.


We had a retrospective after the meetup, where most of the folks agreed that 
such virtual meetups are a good idea, and we need to organize them from time to 
time. It was particularly noted that the meetup allowed people who cannot afford 
to go to the PTG and/or the Forum to meet the team.


Thanks all for participating!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] stepping down from neutron core team

2017-04-28 Thread Sławek Kapłoński
Hello,

Thank you for all Your help.
Good luck in Your new projects :)

—
Best regards
Slawek Kaplonski
sla...@kaplonski.pl




> Message written by Miguel Angel Ajo Pelayo  on 
> 28.04.2017, at 11:02:
> 
> 
> Hi everybody,
> 
> Some of you already know, but I wanted to make it official.
> 
> Recently I moved to work on the networking-ovn component,
> and OVS/OVN itself, and while I'll stick around and I will be available
> on IRC for any questions, I'm already not doing good work with
> neutron reviews, so...
> 
> It's time to leave room for new reviewers.
> 
> It's always a pleasure to work with you folks.
> 
> Best regards,
> Miguel Ángel Ajo.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] OpenStack documentation: Forum session

2017-04-28 Thread Ildiko Vancsa
Hi All,

I’m reaching out to you to draw your attention to a Forum topic we proposed 
with Alex (OpenStack Manuals PTL): 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18939/openstack-documentation-the-future-depends-on-all-of-us
 


As the documentation team is heavily affected by the OSIC news, we are looking 
for more involvement from both our user and developer communities. The 
documentation team is looking for new contributors, and also for options to 
restructure some of the guides and the way the team operates.

Considering the importance of good and reliable documentation we hope to have 
many of you joining us on Monday in Boston. You can add ideas or questions to 
the Forum etherpad to help us prepare: 
https://etherpad.openstack.org/p/doc-future 
 

If you have any questions or comments in advance please respond to this thread 
or reach out to me (IRC: ildikov) or Alex (IRC: asettle).

Thanks and Best Regards,
Ildikó & Alex
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Goodbye ironic o/

2017-04-28 Thread Dmitry Tantsur
Thanks Mario, it was a pleasure to work with you! You did help us a lot during 
this (unfortunately short) time as an Ironic core. As usual, you're welcome to 
be fast-tracked back into the team if the stars align for you to get back to 
ironic. Good luck with your new assignments!


On 04/28/2017 06:12 PM, Mario Villaplana wrote:

Hi ironic team,

You may have noticed a decline in my upstream contributions the past few weeks. 
Unfortunately, I'm no longer being paid to work on ironic. It's unlikely that 
I'll be contributing enough to keep up with the project in my new job, too, so 
please do feel free to remove my core access.


It's been great working with all of you. I've learned so much about open source, 
baremetal provisioning, Python, and more from all of you, and I will definitely 
miss it. I hope that we all get to work together again in the future someday.


I am not sure that I'll be at the Forum during the day, but please do ping me 
for a weekend or evening hangout if you're attending. I'd love to show anyone 
who's interested around the Boston area if our schedules align.


Also feel free to contact me via IRC/email/carrier pigeon with any questions 
about work in progress I had upstream.


Good luck with the project, and thanks for everything!

Best wishes,
Mario Villaplana


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Eric Fried
If it's *just* glance we're making an exception for, I prefer #1 (don't
deprecate/remove [glance]api_servers).  It's way less code &
infrastructure, and it discourages others from jumping on the
multiple-endpoints bandwagon.  If we provide endpoint_override_list
(handwave), people will think it's okay to use it.

Anyone aware of any other services that use multiple endpoints?

On 04/28/2017 10:46 AM, Mike Dorman wrote:
> Maybe we are talking about two different things here?  I’m a bit confused.
> 
> Our Glance config in nova.conf on HV’s looks like this:
> 
> [glance]
> api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
> glance_api_insecure=True
> glance_num_retries=4
> glance_protocol=http
> 
> So we do provide the full URLs, and there is SSL support.  Right?  I am 
> fairly certain we tested this to ensure that if one URL fails, nova goes on 
> to retry the next one.  That failure does not get bubbled up to the user 
> (which is ultimately the goal.)
> 
> I don’t disagree with you that the client side choose-a-server-at-random is 
> not a great load balancer.  (But isn’t this roughly the same thing that 
> oslo-messaging does when we give it a list of RMQ servers?)  For us it’s more 
> about the failure handling if one is down than it is about actually equally 
> distributing the load.
> 
> In my mind options One and Two are the same, since today we are already 
> providing full URLs and not only server names.  At the end of the day, I 
> don’t feel like there is a compelling argument here to remove this 
> functionality (that people are actively making use of.)
> 
> To be clear, I, and I think others, are fine with nova by default getting the 
> Glance endpoint from Keystone.  And that in Keystone there should exist only 
> one Glance endpoint.  What I’d like to see remain is the ability to override 
> that for nova-compute and to target more than one Glance URL for purposes of 
> fail over.
> 
> Thanks,
> Mike
> 
> 
> 
> 
> On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:
> 
> Thank you both for your feedback - that's really helpful.
> 
> Let me say a few more words about what we're trying to accomplish here 
> overall so that maybe we can figure out what the right way forward is. 
> (it may be keeping the glance api servers setting, but let me at least 
> make the case real quick)
> 
>  From a 10,000 foot view, the thing we're trying to do is to get nova's 
> consumption of all of the OpenStack services it uses to be less special.
> 
> The clouds have catalogs which list information about the services - 
> public, admin and internal endpoints and whatnot - and then we're asking 
> admins to not only register that information with the catalog, but to 
> also put it into the nova.conf. That means that any updating of that 
> info needs to be an API call to keystone and also a change to nova.conf. 
> If we, on the other hand, use the catalog, then nova can pick up changes 
> in real time as they're rolled out to the cloud - and there is hopefully 
> a sane set of defaults we could choose (based on operator feedback like 
> what you've given) so that in most cases you don't have to tell nova 
> where to find glance _at_all_ because the cloud already knows where it 
> is. (nova would know to look in the catalog for the internal interface of 
> the image service - for instance - there's no need to ask an operator to 
> add to the config "what is the service_type of the image service we 
> should talk to" :) )
> 
> Now - glance, and the thing you like that we don't - is especially hairy 
> because of the api_servers list. The list, as you know, is just a list 
> of servers, not even of URLs. This  means it's not possible to configure 
> nova to talk to glance over SSL (which I know you said works for you, 
> but we'd like for people to be able to choose to SSL all their things) 
> We could add that, but it would be an additional pile of special config. 
> Because of all of that, we also have to attempt to make working URLs 
> from what is usually a list of IP addresses. This is also clunky and 
> prone to failure.
> 
> The implementation on the underside of the api_servers code is the 
> world's dumbest load balancer. It picks a server from the  list at 
> random and uses it. There is no facility for dealing with a server in 
> the list that stops working or for allowing rolling upgrades like there 
> would be with a real load-balancer across the set. If one of the API 
> servers goes away, we have no context to know that, so just some of your 
> internal calls to glance fail.
> 
> Those are the issues - basically:
> - current config is special and fragile
> - impossible to SSL
> - inflexible/underpowered de facto software load balancer
> 
> Now - as is often the 

[openstack-dev] [ironic] Goodbye ironic o/

2017-04-28 Thread Mario Villaplana
Hi ironic team,

You may have noticed a decline in my upstream contributions the past few
weeks. Unfortunately, I'm no longer being paid to work on ironic. It's
unlikely that I'll be contributing enough to keep up with the project in my
new job, too, so please do feel free to remove my core access.

It's been great working with all of you. I've learned so much about open
source, baremetal provisioning, Python, and more from all of you, and I
will definitely miss it. I hope that we all get to work together again in
the future someday.

I am not sure that I'll be at the Forum during the day, but please do ping
me for a weekend or evening hangout if you're attending. I'd love to show
anyone who's interested around the Boston area if our schedules align.

Also feel free to contact me via IRC/email/carrier pigeon with any
questions about work in progress I had upstream.

Good luck with the project, and thanks for everything!

Best wishes,
Mario Villaplana
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 10:50 AM, Julien Danjou wrote:
> On Fri, Apr 28 2017, gordon chung wrote:
>
>> if the sack is unlocked, then it means a processing worker isn't looking
>> at the sack, and when it does lock the sack, it first has to check the sack
>> for existing measures to process and then check the indexer to validate that
>> they are still active. because it checks the indexer later, even if both the
>> janitor and the processing worker check the lock at the same time, we can
>> guarantee the indexer state the processing worker sees is > 00:00:00, since
>> the janitor has state from before getting the lock, while the processing
>> worker has state from sometime after getting the lock.
>
> My brain hurts but that sounds perfect. That even means we potentially
> did not have to lock currently, sack or no sack.
>

oh darn, i didn't consider multiple janitors... so this only works if we 
make janitor completely separate and only allow one janitor ever. back 
to square 1

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 10:50 AM, Julien Danjou wrote:
> On Fri, Apr 28 2017, gordon chung wrote:
>
>> if the sack is unlocked, then it means a processing worker isn't looking
>> at the sack, and when it does lock the sack, it first has to check the sack
>> for existing measures to process and then check the indexer to validate that
>> they are still active. because it checks the indexer later, even if both the
>> janitor and the processing worker check the lock at the same time, we can
>> guarantee the indexer state the processing worker sees is > 00:00:00, since
>> the janitor has state from before getting the lock, while the processing
>> worker has state from sometime after getting the lock.
>
> My brain hurts but that sounds perfect. That even means we potentially
> did not have to lock currently, sack or no sack.
>

yeah, i think you're right.

success! confused you with my jumble of random words.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Monty Taylor
Well, endpoint_override itself is already a concept with keystoneauth 
and all of the various client libraries (and more generally it is already a 
concept in consuming the API services). It's a singleton - so we'd need 
to add a concept of an endpoint_override_list (*handwave on name*).
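
(For reference, the existing singleton looks roughly like this when consumed
through a keystoneauth Adapter - the Session setup is reduced to a stub and
the URL is made up:)

    from keystoneauth1 import adapter, session

    # A real deployment would attach an auth plugin to the Session.
    sess = session.Session()

    # endpoint_override takes exactly one URL today - hence the need for a
    # separate, yet-to-be-named list variant for client-side failover.
    glance = adapter.Adapter(session=sess,
                             service_type='image',
                             interface='internal',
                             endpoint_override='http://glance1:9292')
    # glance.get('/v2/images') would hit the override directly,
    # skipping the catalog lookup entirely.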


Oh - oops - I just now noticed that this was a reply to Eric and not to 
my ludicrously unreadable long novel of an email. :) I'll go back to my 
hole ...


On 04/28/2017 09:25 AM, Mike Dorman wrote:

Ok.  That would solve some of the problem for us, but we’d still be losing the 
redundancy.  We could do some HAProxy tricks to route around downed services, 
but it wouldn’t handle the case when that one physical box is down.

Is there some downside to allowing endpoint_override to remain a list?   That 
piece seems orthogonal to the spec and IRC discussion referenced, which are 
more around the service catalog.  I don’t think anyone in this thread is 
arguing against the idea that there should be just one endpoint URL in the 
catalog.  But it seems like there are good reasons to allow multiples on the 
override setting (at least for glance in nova-compute.)

Thanks,
Mike



On 4/28/17, 8:05 AM, "Eric Fried"  wrote:

Blair, Mike-

There will be an endpoint_override that will bypass the service
catalog.  It still only takes one URL, though.

Thanks,
Eric (efried)

On 04/27/2017 11:50 PM, Blair Bethwaite wrote:
> We at Nectar are in the same boat as Mike. Our use-case is a little
> bit more about geo-distributed operations though - our Cells are in
> different States around the country, so the local glance-apis are
> particularly important for caching popular images close to the
> nova-computes. We consider these glance-apis as part of the underlying
> cloud infra rather than user-facing, so I think we'd prefer not to see
> them in the service-catalog returned to users either... is there going
> to be a (standard) way to hide them?
>
> On 28 April 2017 at 09:15, Mike Dorman  wrote:
>> We make extensive use of the [glance]/api_servers list.  We configure 
that on hypervisors to direct them to Glance servers which are more “local” 
network-wise (in order to reduce network traffic across security 
zones/firewalls/etc.)  This way nova-compute can fail over in case one of the Glance 
servers in the list is down, without putting them behind a load balancer.  We also 
don’t run https for these “internal” Glance calls, to save the overhead when 
transferring images.
>>
>> End-user calls to Glance DO go through a real load balancer and then are 
distributed out to the Glance servers on the backend.  From the end-user’s 
perspective, I totally agree there should be one, and only one URL.
>>
>> However, we would be disappointed to see the change you’re suggesting 
implemented.  We would lose the redundancy we get now by providing a list.  Or we 
would have to shunt all the calls through the user-facing endpoint, which would 
generate a lot of extra traffic (in places where we don’t want it) for image 
transfers.
>>
>> Thanks,
>> Mike
>>
>>
>>
>> On 4/27/17, 4:02 PM, "Matt Riedemann"  wrote:
>>
>> On 4/27/2017 4:52 PM, Eric Fried wrote:
>> > Y'all-
>> >
>> >   TL;DR: Does glance ever really need/use multiple endpoint URLs?
>> >
>> >   I'm working on bp use-service-catalog-for-endpoints[1], which 
intends
>> > to deprecate disparate conf options in various groups, and 
centralize
>> > acquisition of service endpoint URLs.  The idea is to introduce
>> > nova.utils.get_service_url(group) -- note singular 'url'.
>> >
>> >   One affected conf option is [glance]api_servers[2], which 
currently
>> > accepts a *list* of endpoint URLs.  The new API will only ever 
return *one*.
>> >
>> >   Thus, as planned, this blueprint will have the side effect of
>> > deprecating support for multiple glance endpoint URLs in Pike, and
>> > removing said support in Queens.
>> >
>> >   Some have asserted that there should only ever be one endpoint 
URL for
>> > a given service_type/interface combo[3].  I'm fine with that - it
>> > simplifies things quite a bit for the bp impl - but wanted to make 
sure
>> > there were no loudly-dissenting opinions before we get too far 
down this
>> > path.
>> >
>> > [1]
>> > 
https://blueprints.launchpad.net/nova/+spec/use-service-catalog-for-endpoints
>> > [2]
>> > 
https://github.com/openstack/nova/blob/7e7bdb198ed6412273e22dea72e37a6371fce8bd/nova/conf/glance.py#L27-L37
>> > [3]
>> > 

Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Mike Dorman
Maybe we are talking about two different things here?  I’m a bit confused.

Our Glance config in nova.conf on HV’s looks like this:

[glance]
api_servers=http://glance1:9292,http://glance2:9292,http://glance3:9292,http://glance4:9292
glance_api_insecure=True
glance_num_retries=4
glance_protocol=http

So we do provide the full URLs, and there is SSL support.  Right?  I am fairly 
certain we tested this to ensure that if one URL fails, nova goes on to retry 
the next one.  That failure does not get bubbled up to the user (which is 
ultimately the goal.)

I don’t disagree with you that the client side choose-a-server-at-random is not 
a great load balancer.  (But isn’t this roughly the same thing that 
oslo-messaging does when we give it a list of RMQ servers?)  For us it’s more 
about the failure handling if one is down than it is about actually equally 
distributing the load.

In my mind options One and Two are the same, since today we are already 
providing full URLs and not only server names.  At the end of the day, I don’t 
feel like there is a compelling argument here to remove this functionality 
(that people are actively making use of.)

To be clear, I, and I think others, are fine with nova by default getting the 
Glance endpoint from Keystone.  And that in Keystone there should exist only 
one Glance endpoint.  What I’d like to see remain is the ability to override 
that for nova-compute and to target more than one Glance URL for purposes of 
fail over.

Thanks,
Mike




On 4/28/17, 8:20 AM, "Monty Taylor"  wrote:

Thank you both for your feedback - that's really helpful.

Let me say a few more words about what we're trying to accomplish here 
overall so that maybe we can figure out what the right way forward is. 
(it may be keeping the glance api servers setting, but let me at least 
make the case real quick)

 From a 10,000 foot view, the thing we're trying to do is to get nova's 
consumption of all of the OpenStack services it uses to be less special.

The clouds have catalogs which list information about the services - 
public, admin and internal endpoints and whatnot - and then we're asking 
admins to not only register that information with the catalog, but to 
also put it into the nova.conf. That means that any updating of that 
info needs to be an API call to keystone and also a change to nova.conf. 
If we, on the other hand, use the catalog, then nova can pick up changes 
in real time as they're rolled out to the cloud - and there is hopefully 
a sane set of defaults we could choose (based on operator feedback like 
what you've given) so that in most cases you don't have to tell nova 
where to find glance _at_all_ because the cloud already knows where it 
is. (nova would know to look in the catalog for the internal interface of 
the image service - for instance - there's no need to ask an operator to 
add to the config "what is the service_type of the image service we 
should talk to" :) )

Now - glance, and the thing you like that we don't - is especially hairy 
because of the api_servers list. The list, as you know, is just a list 
of servers, not even of URLs. This  means it's not possible to configure 
nova to talk to glance over SSL (which I know you said works for you, 
but we'd like for people to be able to choose to SSL all their things) 
We could add that, but it would be an additional pile of special config. 
Because of all of that, we also have to attempt to make working URLs 
from what is usually a list of IP addresses. This is also clunky and 
prone to failure.

The implementation on the underside of the api_servers code is the 
world's dumbest load balancer. It picks a server from the  list at 
random and uses it. There is no facility for dealing with a server in 
the list that stops working or for allowing rolling upgrades like there 
would with a real load-balancer across the set. If one of the API 
servers goes away, we have no context to know that, so just some of your 
internal calls to glance fail.

Those are the issues - basically:
- current config is special and fragile
- impossible to SSL
- inflexible/underpowered de-facto software load balancer

Now - as is often the case - it turns out the combo of those things is 
working very well for you -so we need to adjust our thinking on the 
topic a bit. Let me toss out some alternatives and see what you think:

Alternative One - Do Both things

We add the new "consume from catalog" and make it default. (and make it 
default to consuming the internal interface by default) We have to do 
that in parallel with the current glance api_servers setting anyway, 
because of deprecation periods, so the code to support both approaches 
will exist. Instead of then deprecating the 

Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 10:11 AM, Julien Danjou wrote:
> On Fri, Apr 28 2017, gordon chung wrote:
>
>> refresh i believe is always disabled by default regardless of what
>> interface you're using.
>
> You gotta show me where it is 'cause I can't see that and I don't
> recall any option for that. :/

https://github.com/openstack/gnocchi/commit/72a2091727431555eba65c6ef8ff89448f3432f0
 


although now that i check, i see it's blocking by default... which means 
we did guarantee all measures would be returned.

>
>> in the case of cross-metric aggregations, this is a timeout for entire
>> request or per metric timeout? i think it's going to get quite chaotic
>> in the multiple metric (multiple sacks) refresh. :(
>
> Right I did not think about multiple refresh. Well it'll be a nice
> slalom of lock acquiring. :-)
>
>> i'm hoping not to have a timeout because i imagine there will be
>> scenarios where we block trying to lock sack, and when we finally get
>> sack lock, we find there is no work for us. this means we just added x
>> seconds to response to for no reason.
>
> Right, I see your point. Though we _could_ enhance refresh to first
> check if there's any job to do. It's lock-free. Just checking. :)
>

hmmm. true. i'm still hoping we don't have to lock an entire sack for 
one metric, and that we don't return an error status just because we can't 
grab the lock. doesn't seem like a good experience.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Neutron]Why we use secuirity group which only support dispatching whiltelist rules?

2017-04-28 Thread Akihiro Motoki
2017-04-28 7:03 GMT+09:00 Monty Taylor :
> On 04/25/2017 10:32 AM, Gary Kotton wrote:
>>
>> Hi,
>> I would like us to think of considering enabling an API that would allow
>> ‘deny’, for example an admin could overwrite a tenant’s security groups. For
>> example, an admin may not want a specific source range to access the
>> tenant’s VMs. The guys working on FWaaS say that this may happen in V2, but
>> that looks very far away. Making this change in Neutron would be pretty
>> simple and give us a nice feature add.
>> If you would like to work on this I would be happy to develop this with
>> you. It could be added an extension.
>> Thanks
>> Gary
>>
>> On 4/24/17, 6:37 AM, "Ihar Hrachyshka"  wrote:
>>
>> All traffic is denied by default. OpenStack security groups API is
>> modeled to reflect what AWS does. You may find your needs better
>> served by fwaas plugin for neutron that is not constrained by AWS
>> compatibility.
>
>
> OpenStack does not claim to have or strive for AWS compatibility.
>
> It is not a goal. It may have been one for someone during the writing of the
> security-groups code, and thus may be a good description of why the
> security-groups are structured and behave the way they do. Moving forward,
> AWS compatibility should really never be a reason we do or don't do
> something if that thing is beneficial to our users.

I think one good reason that the neutron security group API only supports
whitelist rules is to keep rule management simple.
If we supported blacklist rules (i.e., deny/reject rules), users would need
to consider the order of rules, and if blacklist and whitelist rules
overlapped we would need rule priorities.
Supporting only whitelist rules keeps rule management really simple, and
I believe that is the point of the security group API.
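
To make that concrete, a minimal sketch with python-neutronclient (the
credentials, URL and group id below are placeholders): a rule only ever
describes traffic to allow, and there is no "action" or priority field to
order rules against.

from keystoneauth1 import loading, session
from neutronclient.v2_0 import client as neutron_client

auth = loading.get_plugin_loader('password').load_from_options(
    auth_url='https://keystone.example.com/v3',
    username='demo', password='secret', project_name='demo',
    user_domain_id='default', project_domain_id='default')
neutron = neutron_client.Client(session=session.Session(auth=auth))

# Allow SSH from one prefix. Anything not matched by some allow rule stays
# dropped, so there is nothing to order and nothing to deny explicitly.
neutron.create_security_group_rule({'security_group_rule': {
    'security_group_id': 'SECURITY-GROUP-UUID',   # placeholder
    'direction': 'ingress',
    'ethertype': 'IPv4',
    'protocol': 'tcp',
    'port_range_min': 22,
    'port_range_max': 22,
    'remote_ip_prefix': '203.0.113.0/24',
}})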

The rough consensus of the neutron community is that more complicated rules
like blacklist rules or rule priorities should go to FWaaS.

This topic has been discussed several times in neutron's history, and as of now
the above, together with what Gary and Ihar commented, is our consensus.
The main background of the consensus is not just AWS compatibility.
In my understanding it is about what current users expect from the
security group API.
Wouldn't it be confusing from a user perspective if blacklist rules or
rule priorities were introduced at some point?

There are still gray zones. For example, a request we received multiple times
is "can neutron provide a way to define a set of default rules for a
new security group?".
It came up several months ago and at that time the proposal was rejected
because it would change what users get by default, even though such a feature
would be discoverable.


>
>
>> On Sun, Apr 23, 2017 at 8:33 PM, 田明明  wrote:
>> > Can we add an "action" to security group rule api, so that we could
>> dispatch
>> > rules with "deny" action? Until now, security group only supports
>> add
>> > white-list rules but this couldn't satisfy many people's needs.
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [openstack] API to get tunnel port connect to other host

2017-04-28 Thread James Denton
Hi Vikash,

The VXLAN tunnel endpoint address is listed in the output of a neutron 
agent-show <agent-id>:

$ neutron agent-show cb45e3f8-4a28-475a-994d-83bc27806c38
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| admin_state_up      | True                                 |
| agent_type          | Linux bridge agent                   |
| alive               | True                                 |
| availability_zone   |                                      |
| binary              | neutron-linuxbridge-agent            |
| configurations      | {                                    |
|                     |   "tunneling_ip": "172.29.232.66",   |
|                     |   "devices": 2,                      |
|                     |   "interface_mappings": {            |
|                     |     "vlan": "br-vlan"                |
|                     |   },                                 |
|                     |   "extensions": [],                  |
|                     |   "l2_population": true,             |
|                     |   "tunnel_types": [                  |
|                     |     "vxlan"                          |
|                     |   ],                                 |
|                     |   "bridge_mappings": {}              |
|                     | }                                    |
| created_at          | 2017-04-19 23:12:47                  |
| description         |                                      |
| heartbeat_timestamp | 2017-04-28 15:07:59                  |
| host                | 841445-compute007                    |
| id                  | cb45e3f8-4a28-475a-994d-83bc27806c38 |
| started_at          | 2017-04-20 17:38:03                  |
| topic               | N/A                                  |
+---------------------+--------------------------------------+

The actual Layer 4 port used may vary between drivers (linuxbridge vs OVS), but 
that would either be hard-coded or defined within a configuration file.
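
If you'd rather get it through the API than the CLI, something along these 
lines should work (rough sketch; the credentials and host name below are 
placeholders):

from keystoneauth1 import loading, session
from neutronclient.v2_0 import client as neutron_client

auth = loading.get_plugin_loader('password').load_from_options(
    auth_url='https://keystone.example.com/v3',
    username='admin', password='secret', project_name='admin',
    user_domain_id='default', project_domain_id='default')
neutron = neutron_client.Client(session=session.Session(auth=auth))

# The tunnel endpoint address lives in each agent's 'configurations' blob,
# as in the agent-show output above.
for agent in neutron.list_agents(host='841445-compute007')['agents']:
    tunneling_ip = agent.get('configurations', {}).get('tunneling_ip')
    if tunneling_ip:
        print('%s -> %s' % (agent['agent_type'], tunneling_ip))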

James

From: Vikash Kumar 
Date: Friday, April 28, 2017 at 6:50 AM
To: openstack-dev , Openstack Milis 

Subject: [Openstack] [openstack-dev][openstack] API to get tunnel port connect 
to other host

Is there any neutron API which returns the details of the tunnel port connected 
to another host?
For e.g., I have Host-A and Host-B. Is there a way to know which tunnel port on 
Host-A connects to Host-B?
Can't use OVS commands directly.

--
Regards,
Vikash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-04-28 Thread Emilien Macchi
On Mon, Apr 24, 2017 at 10:43 PM, Emilien Macchi  wrote:
> On Mon, Apr 17, 2017 at 1:03 PM, Emilien Macchi  wrote:
>> On Wed, Apr 12, 2017 at 2:47 AM, Tony Breeds  wrote:
>>> Hi all,
>>> I'm late in sending this announement, but I'm glad to see several 
>>> projects
>>> have already requested EOL releases to make it trivial and obvious where to
>>> apply the tag.
>>>
>>> I'm proposing to EOL all projects that meet one or more of the following
>>> criteria:
>>>
>>> - The project is openstack-dev/devstack, openstack-dev/grenade or
>>>   openstack/requirements
>>> - The project has the 'check-requirements' job listed as a template in
>>>   project-config:zuul/layout.yaml
>>> - The project gates with either devstack or grenade jobs
>>> - The project is listed in governance:reference/projects.yaml and is tagged
>>>   with 'stable:follows-policy'.
>>>
>>> Some statistics:
>>> All Repos  : 1584 (covered in zuul/layout.yaml)
>>> Checked Repos  :  416 (match one or more of the above 
>>> criteria)
>>> Repos with mitaka branches :  409
>>> EOL Repos  :  199 (repos that match the criteria *and* 
>>> have
>>>a mitaka branch) [1]
>>> NOT EOL Repos  :  210 (repos with a mitaka branch but
>>>otherwise do not match) [2]
>>> DSVM Repos (staying)   :   68 (repos that use dsvm but don't have
>>>mitaka branches)
>>> Tagged Repos   :0
>>> Open Reviews   :  159 (reviews to close)
>>>
>>> Please look over both lists by 2017-04-17 00:00 UTC and let me know if:
>>> - A project is in list 1 and *really* *really* wants to opt *OUT* of EOLing 
>>> and
>>>   why.  Note doing this will amost certainly reduce the testing coverage you
>>>   have in the gate.
>>> - A project is in list 2 that would like to opt *IN* to tagging/EOLing
>>>
>>> Any projects that will be EOL'd will need all open reviews abandoned before 
>>> it
>>> can be processed.  I'm very happy to do this, or if I don't have permissios 
>>> to
>>> do it ask a gerrit admin to do it.
>>
>> Please EOL stable/mitaka for:
>> - instack-undercloud (and featureV2/juno/kilo old branches)
>> - tripleo-heat-templates (and icehouse)
>> - tripleo-puppet-elements
>> - puppet-tripleo
>> - tripleo-image-elements (just featureV2)
>> - python-tripleoclient
>> - tripleo-common
>
> I'll take care of EOLing them, after reading latest email from Tony.
> I would need some help from infra to remove the branches I mentioned ^
> please (featureV2, juno, kilo, icehouse).
>
> I'll keep you posted on my progress.

EOLing for TripleO has been processed.
Big thanks to fungi & Ajaeger for their precious help here.

>> Thanks,
>>
>>> Yours Tony.
>>>
>>> [1] 
>>> https://gist.github.com/tbreeds/c99e62bf8da19380e4eb130be8783be7#file-mitaka_eol_data-txt-L1
>>> [2] 
>>> https://gist.github.com/tbreeds/c99e62bf8da19380e4eb130be8783be7#file-mitaka_eol_data-txt-L209
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Emilien Macchi
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

> if the sack is unlocked, then it means a processing worker isn't looking 
> at the sack, and when it does lock the sack, it first has to check sack 
> for existing measures to process and then check indexer to validate that 
> they are still active. because it checks indexer later, even if both 
> janitor and processing worker check lock at same time, we can guarantee 
> it will have indexer state processing worker sees is > 00:00:00 since 
> janitor has state before getting lock, while processing worker has state 
> sometime after getting lock.

My brain hurts but that sounds perfect. That even means we potentially
did not have to lock currently, sack or no sack.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Forum] Moderators needed!

2017-04-28 Thread Michał Jastrzębski
I can moderate the HA session if you want (although there is one listed in
the schedule?). Feel free to sign me up.

On 28 April 2017 at 06:07, Jay Pipes  wrote:
> On 04/28/2017 08:22 AM, Shamail Tahir wrote:
>>
>> Hi everyone,
>>
>> Most of the proposed/accepted Forum sessions currently have moderators
>> but there are six sessions that do not have a confirmed moderator yet.
>> Please look at the list below and let us know if you would be willing to
>> help moderate any of these sessions.
>
>
> 
>
>> Cloud-Native Design/Refactoring across OpenStack
>>
>> 
>
>
> Hi Shamail,
>
> The one above looks like Alan (cc'd) is the moderator. :)
>
> Despite him having an awkwardly over-sized brain -- which unfortunately will
> limit the number of other people that can fit in the room -- I do think Alan
> will be a good moderator.
>
> Best,
> -jay
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Mike Dorman
Ok.  That would solve some of the problem for us, but we’d still be losing the 
redundancy.  We could do some HAProxy tricks to route around downed services, 
but it wouldn’t handle the case when that one physical box is down.

Is there some downside to allowing endpoint_override to remain a list?   That 
piece seems orthogonal to the spec and IRC discussion referenced, which are 
more around the service catalog.  I don’t think anyone in this thread is 
arguing against the idea that there should be just one endpoint URL in the 
catalog.  But it seems like there are good reasons to allow multiples on the 
override setting (at least for glance in nova-compute.)

Thanks,
Mike



On 4/28/17, 8:05 AM, "Eric Fried"  wrote:

Blair, Mike-

There will be an endpoint_override that will bypass the service
catalog.  It still only takes one URL, though.

Thanks,
Eric (efried)

On 04/27/2017 11:50 PM, Blair Bethwaite wrote:
> We at Nectar are in the same boat as Mike. Our use-case is a little
> bit more about geo-distributed operations though - our Cells are in
> different States around the country, so the local glance-apis are
> particularly important for caching popular images close to the
> nova-computes. We consider these glance-apis as part of the underlying
> cloud infra rather than user-facing, so I think we'd prefer not to see
> them in the service-catalog returned to users either... is there going
> to be a (standard) way to hide them?
> 
> On 28 April 2017 at 09:15, Mike Dorman  wrote:
>> We make extensive use of the [glance]/api_servers list.  We configure 
that on hypervisors to direct them to Glance servers which are more “local” 
network-wise (in order to reduce network traffic across security 
zones/firewalls/etc.)  This way nova-compute can fail over in case one of the 
Glance servers in the list is down, without putting them behind a load 
balancer.  We also don’t run https for these “internal” Glance calls, to save 
the overhead when transferring images.
>>
>> End-user calls to Glance DO go through a real load balancer and then are 
distributed out to the Glance servers on the backend.  From the end-user’s 
perspective, I totally agree there should be one, and only one URL.
>>
>> However, we would be disappointed to see the change you’re suggesting 
implemented.  We would lose the redundancy we get now by providing a list.  Or 
we would have to shunt all the calls through the user-facing endpoint, which 
would generate a lot of extra traffic (in places where we don’t want it) for 
image transfers.
>>
>> Thanks,
>> Mike
>>
>>
>>
>> On 4/27/17, 4:02 PM, "Matt Riedemann"  wrote:
>>
>> On 4/27/2017 4:52 PM, Eric Fried wrote:
>> > Y'all-
>> >
>> >   TL;DR: Does glance ever really need/use multiple endpoint URLs?
>> >
>> >   I'm working on bp use-service-catalog-for-endpoints[1], which 
intends
>> > to deprecate disparate conf options in various groups, and 
centralize
>> > acquisition of service endpoint URLs.  The idea is to introduce
>> > nova.utils.get_service_url(group) -- note singular 'url'.
>> >
>> >   One affected conf option is [glance]api_servers[2], which 
currently
>> > accepts a *list* of endpoint URLs.  The new API will only ever 
return *one*.
>> >
>> >   Thus, as planned, this blueprint will have the side effect of
>> > deprecating support for multiple glance endpoint URLs in Pike, and
>> > removing said support in Queens.
>> >
>> >   Some have asserted that there should only ever be one endpoint 
URL for
>> > a given service_type/interface combo[3].  I'm fine with that - it
>> > simplifies things quite a bit for the bp impl - but wanted to make 
sure
>> > there were no loudly-dissenting opinions before we get too far 
down this
>> > path.
>> >
>> > [1]
>> > 
https://blueprints.launchpad.net/nova/+spec/use-service-catalog-for-endpoints
>> > [2]
>> > 
https://github.com/openstack/nova/blob/7e7bdb198ed6412273e22dea72e37a6371fce8bd/nova/conf/glance.py#L27-L37
>> > [3]
>> > 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-27.log.html#t2017-04-27T20:38:29
>> >
>> > Thanks,
>> > Eric Fried (efried)
>> > .
>> >
>> > 
__
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>

Re: [openstack-dev] [kolla] Which distros are used as base ones?

2017-04-28 Thread Michał Jastrzębski
We tried to use the mariadb package a few months ago, but it turned out to
be an ancient version that broke horribly on multinode.

On 28 April 2017 at 02:41, Christian Berendt
 wrote:
>
>> On 27. Apr 2017, at 11:46, Marcin Juszkiewicz 
>>  wrote:
>>
>> Does someone care about Ubuntu?
>
> Yes, we do. We are using the Ubuntu source images with the Newton and Ocata 
> branches from kolla/kolla-ansible.
>
> Christian.
>
> --
> Christian Berendt
> Chief Executive Officer (CEO)
>
> Mail: bere...@betacloud-solutions.de
> Web: https://www.betacloud-solutions.de
>
> Betacloud Solutions GmbH
> Teckstrasse 62 / 70190 Stuttgart / Deutschland
>
> Geschäftsführer: Christian Berendt
> Unternehmenssitz: Stuttgart
> Amtsgericht: Stuttgart, HRB 756139
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Monty Taylor

Thank you both for your feedback - that's really helpful.

Let me say a few more words about what we're trying to accomplish here 
overall so that maybe we can figure out what the right way forward is. 
(it may be keeping the glance api servers setting, but let me at least 
make the case real quick)


From a 10,000 foot view, the thing we're trying to do is to get nova's 
consumption of all of the OpenStack services it uses to be less special.


The clouds have catalogs which list information about the services - 
public, admin and internal endpoints and whatnot - and then we're asking 
admins to not only register that information with the catalog, but to 
also put it into the nova.conf. That means that any updating of that 
info needs to be an API call to keystone and also a change to nova.conf. 
If we, on the other hand, use the catalog, then nova can pick up changes 
in real time as they're rolled out to the cloud - and there is hopefully 
a sane set of defaults we could choose (based on operator feedback like 
what you've given) so that in most cases you don't have to tell nova 
where to find glance _at_all_ because the cloud already knows where it 
is. (nova would know to look in the catalog for the internal interface of 
the image service - for instance - there's no need to ask an operator to 
add to the config "what is the service_type of the image service we 
should talk to" :) )


Now - glance, and the thing you like that we don't - is especially hairy 
because of the api_servers list. The list, as you know, is just a list 
of servers, not even of URLs. This  means it's not possible to configure 
nova to talk to glance over SSL (which I know you said works for you, 
but we'd like for people to be able to choose to SSL all their things) 
We could add that, but it would be an additional pile of special config. 
Because of all of that, we also have to attempt to make working URLs 
from what is usually a list of IP addresses. This is also clunky and 
prone to failure.


The implementation on the underside of the api_servers code is the 
world's dumbest load balancer. It picks a server from the  list at 
random and uses it. There is no facility for dealing with a server in 
the list that stops working or for allowing rolling upgrades like there 
would with a real load-balancer across the set. If one of the API 
servers goes away, we have no context to know that, so just some of your 
internal calls to glance fail.


Those are the issues - basically:
- current config is special and fragile
- impossible to SSL
- inflexible/underpowered de-facto software load balancer

Now - as is often the case - it turns out the combo of those things is 
working very well for you -so we need to adjust our thinking on the 
topic a bit. Let me toss out some alternatives and see what you think:


Alternative One - Do Both things

We add the new "consume from catalog" and make it default. (and make it 
default to consuming the internal interface by default) We have to do 
that in parallel with the current glance api_servers setting anyway, 
because of deprecation periods, so the code to support both approaches 
will exist. Instead of then deprecating the api_servers list, we keep 
it - but add a big doc warning listing the gotchas and limitations - so 
for those folks for whom they are not an issue, you've got an out.


Alternative Two - Hybrid Approach - optional list of URLs

We go ahead and move to service config being the standard way one lists 
how to consume a service from the catalog. One of the standard options 
for consuming services is "endpoint_override" - which is a way an API 
user can say "hi, please to ignore the catalog and use this endpoint 
I've given you instead". The endpoint in question is a full URL, so 
https/http and ports and whatnot are all handled properly.


We add, in addition, an additional option "endpoint_override_list" which 
allows you to provide a list of URLs (not API servers) and if you 
provide that option, we'll keep the logic of choosing one at random at 
API call time. It's still a poor load balancer, and we'll still put 
warnings in the docs about it not being a featureful load balancing 
solution, but again would be available if needed.


Alternative Three - We ignore you and give you docs

I'm only including this because in the name of completeness. But we 
could write a bunch of docs about a recommended way of putting your 
internal endpoints in a load balancer and registering that with the 
internal endpoint in keystone. (I would prefer to make the operators 
happy, so let's say whatever vote I have is not for this option)


Alternative Four - We update client libs to understand multiple values 
from keystone for endpoints


I _really_ don't like this one - as I think us doing dumb software 
loadbalancing client side is prone to a ton of failures. BUT - right now 
the assumption when consuming endpoints from the catalog is that one and 
only one endpoint will be returned for a 

Re: [openstack-dev] [neutron] stepping down from neutron core team

2017-04-28 Thread Miguel Lavalle
Tocayo,

Good luck!



On Fri, Apr 28, 2017 at 4:02 AM, Miguel Angel Ajo Pelayo <
majop...@redhat.com> wrote:

>
> Hi everybody,
>
> Some of you already know, but I wanted to make it official.
>
> Recently I moved to work on the networking-ovn component,
> and OVS/OVN itself, and while I'll stick around and I will be available
> on IRC for any questions I'm already not doing a good work with
> neutron reviews, so...
>
> It's time to leave room for new reviewers.
>
> It's always a pleasure to work with you folks.
>
> Best regards,
> Miguel Ángel Ajo.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

> refresh i believe is always disabled by default regardless of what 
> interface you're using.

You gotta show me where it is 'cause I can't see that and I don't
recall any option for that. :/

> in the case of cross-metric aggregations, this is a timeout for entire 
> request or per metric timeout? i think it's going to get quite chaotic 
> in the multiple metric (multiple sacks) refresh. :(

Right I did not think about multiple refresh. Well it'll be a nice
slalom of lock acquiring. :-)

> i'm hoping not to have a timeout because i imagine there will be 
> scenarios where we block trying to lock sack, and when we finally get 
> sack lock, we find there is no work for us. this means we just added x 
> seconds to response to for no reason.

Right, I see your point. Though we _could_ enhance refresh to first
check if there's any job to do. It's lock-free. Just checking. :)

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][glance] Who needs multiple api_servers?

2017-04-28 Thread Eric Fried
Blair, Mike-

There will be an endpoint_override that will bypass the service
catalog.  It still only takes one URL, though.

Thanks,
Eric (efried)

On 04/27/2017 11:50 PM, Blair Bethwaite wrote:
> We at Nectar are in the same boat as Mike. Our use-case is a little
> bit more about geo-distributed operations though - our Cells are in
> different States around the country, so the local glance-apis are
> particularly important for caching popular images close to the
> nova-computes. We consider these glance-apis as part of the underlying
> cloud infra rather than user-facing, so I think we'd prefer not to see
> them in the service-catalog returned to users either... is there going
> to be a (standard) way to hide them?
> 
> On 28 April 2017 at 09:15, Mike Dorman  wrote:
>> We make extensive use of the [glance]/api_servers list.  We configure that 
>> on hypervisors to direct them to Glance servers which are more “local” 
>> network-wise (in order to reduce network traffic across security 
>> zones/firewalls/etc.)  This way nova-compute can fail over in case one of 
>> the Glance servers in the list is down, without putting them behind a load 
>> balancer.  We also don’t run https for these “internal” Glance calls, to 
>> save the overhead when transferring images.
>>
>> End-user calls to Glance DO go through a real load balancer and then are 
>> distributed out to the Glance servers on the backend.  From the end-user’s 
>> perspective, I totally agree there should be one, and only one URL.
>>
>> However, we would be disappointed to see the change you’re suggesting 
>> implemented.  We would lose the redundancy we get now by providing a list.  
>> Or we would have to shunt all the calls through the user-facing endpoint, 
>> which would generate a lot of extra traffic (in places where we don’t want 
>> it) for image transfers.
>>
>> Thanks,
>> Mike
>>
>>
>>
>> On 4/27/17, 4:02 PM, "Matt Riedemann"  wrote:
>>
>> On 4/27/2017 4:52 PM, Eric Fried wrote:
>> > Y'all-
>> >
>> >   TL;DR: Does glance ever really need/use multiple endpoint URLs?
>> >
>> >   I'm working on bp use-service-catalog-for-endpoints[1], which intends
>> > to deprecate disparate conf options in various groups, and centralize
>> > acquisition of service endpoint URLs.  The idea is to introduce
>> > nova.utils.get_service_url(group) -- note singular 'url'.
>> >
>> >   One affected conf option is [glance]api_servers[2], which currently
>> > accepts a *list* of endpoint URLs.  The new API will only ever return 
>> *one*.
>> >
>> >   Thus, as planned, this blueprint will have the side effect of
>> > deprecating support for multiple glance endpoint URLs in Pike, and
>> > removing said support in Queens.
>> >
>> >   Some have asserted that there should only ever be one endpoint URL 
>> for
>> > a given service_type/interface combo[3].  I'm fine with that - it
>> > simplifies things quite a bit for the bp impl - but wanted to make sure
>> > there were no loudly-dissenting opinions before we get too far down 
>> this
>> > path.
>> >
>> > [1]
>> > 
>> https://blueprints.launchpad.net/nova/+spec/use-service-catalog-for-endpoints
>> > [2]
>> > 
>> https://github.com/openstack/nova/blob/7e7bdb198ed6412273e22dea72e37a6371fce8bd/nova/conf/glance.py#L27-L37
>> > [3]
>> > 
>> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-04-27.log.html#t2017-04-27T20:38:29
>> >
>> > Thanks,
>> > Eric Fried (efried)
>> > .
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> +openstack-operators
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 17)

2017-04-28 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

Our meeting was an hour later as Gabriele's Quickstart Deep Dive session 
was conflicting with it. The session was excellent and if you didn't 
attend, I'd highly recommend watching it once the recording of it comes 
out. Meanwhile you can check out the summary here[1].


= Quickstart Transition Phase 2 Status =

We estimate that the transition of the OVB jobs will take place on *9th 
of May*. The following jobs are going to switch to be run by Quickstart:


* ovb-ha
* ovb-nonha
* ovb-updates

The -oooq equivalent jobs are already running close to the final 
configuration which gives us good confidence for the transition.


= Smaller topics =

* Sagi brought up that openstack-infra's image building broke us twice 
in the last few weeks, and it would be nice to find some solution for 
the problem. Maybe promoting those images too? Sagi will bring this 
topic up at the infra meeting.


* The OVB based devmode.sh is stuck because we can't use shade properly 
from the virtual environment; this needs further investigation.


* How we use featuresets: Wes brought up that we are not very 
consistent with using the new "featureset" style configuration 
everywhere. Indeed, we need to move to using them on RDO CI as well, but 
at least their use in tripleo-ci is consistent among the transitioned jobs.


* Wes suggested developing a rotation for watching the gating jobs to 
free up developers from constantly watching them. We need to figure out 
a good system for this.


Thank you for reading the summary.

Best regards,
Attila

[1] https://etherpad.openstack.org/p/quickstart-deep-dive

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 09:23 AM, Julien Danjou wrote:
> On Fri, Apr 28 2017, gordon chung wrote:
>
>> what if we just don't lock ever on delete, but if we still check lock to
>> see if it's processed. at that point, the janitor knows that metric(s)
>> is deleted, and since no one else is working on sack, any metricd that
>> follows will also know that the metric(s) are deleted and therefore,
>> won't work on it.
>
> I did not understand your whole idea, can you detail a bit more?
> Though the "check lock" approach usually does not work and is a source
> of race condition, but again, I did not get the whole picture. :)
>

my thinking is that, when the janitor goes to process a sack, it means 
it has indexer state from time 00:00:00.

if the sack is locked, then it means a processing worker is looking at 
the sack and has indexer state from time <= 00:00:00.

if the sack is unlocked, then it means a processing worker isn't looking 
at the sack, and when it does lock the sack, it first has to check the sack 
for existing measures to process and then check the indexer to validate that 
they are still active. because it checks the indexer later, even if both 
the janitor and the processing worker check the lock at the same time, we can 
guarantee the indexer state the processing worker sees is > 00:00:00, since 
the janitor has state from before getting the lock, while the processing 
worker has state from sometime after getting the lock.
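
in (very rough) code, with made-up helper names, the ordering that makes this 
safe is basically:

# sketch only -- the indexer/storage method names are made up; the point is
# just the order of operations inside the sack lock, which is what lets the
# janitor skip locking entirely.
def worker_process_sack(lock_sack, indexer, storage, sack):
    with lock_sack(sack):                                   # 1. take the sack lock
        pending = storage.list_metrics_with_measures(sack)  # 2. look at the sack
        active = indexer.list_active_metrics(pending)       # 3. only then ask the indexer
        for metric in pending:
            if metric in active:
                storage.process_new_measures(metric)
            else:
                storage.drop_measures(metric)               # deleted while we weren't looking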

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tooz] etcd3 driver

2017-04-28 Thread Jay Pipes

On 04/28/2017 09:25 AM, Julien Danjou wrote:

On Sun, Mar 19 2017, Jay Pipes wrote:

Jay, are you working or planning to work on the group interface soon?
I'm planning to take a look at it ASAP now but I'd hate to duplicate any
work. :)


Go for it, JD! :)

I had not had that on my plate, no, so if you can tackle it, that would be 
awesome.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 09:19 AM, Julien Danjou wrote:
> That's not what I meant. You can have the same mechanism as currently,
> but then you compute the sacks of all metrics and you
> itertools.groupby() per sack on them before locking the sack and
> expunging them.

yeah, we should do that. i'll add that as a patch.
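
something along these lines, presumably (sketch only -- the callables are 
stand-ins for whatever the real helpers end up being):

import itertools

def expunge_deleted_metrics(list_deleted_metrics, sack_for_metric,
                            lock_sack, expunge):
    # group the deleted metrics by sack so each sack is locked exactly once,
    # instead of one lock/unlock per metric. all four arguments are made-up
    # stand-ins, not real gnocchi APIs.
    deleted = sorted(list_deleted_metrics(), key=sack_for_metric)
    for sack, metrics in itertools.groupby(deleted, key=sack_for_metric):
        with lock_sack(sack):
            for metric in metrics:
                expunge(metric)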

>
>> > refresh is currently disabled by default so i think we're ok.
> Well you mean it's disabled by default _in the CLI_, not in the API
> right?

refresh i believe is always disabled by default regardless of what 
interface you're using.

>
>> > what's the timeout for? timeout api's attempt to aggregate metric? i
>> > think it's a bad experience if we add any timeout since i assume it will
>> > still return what it can return and then the results become somewhat
>> > ambiguous.
> No, I meant timeout for grabbing the sack's lock. You wouldn't return a
> 2xx but a 5xx stating the API is unable to compute stuff right now, so
> try again without refresh or something.

in the case of cross-metric aggregations, is this a timeout for the entire 
request or a per-metric timeout? i think it's going to get quite chaotic 
in the multiple metric (multiple sacks) refresh. :(

i'm hoping not to have a timeout because i imagine there will be 
scenarios where we block trying to lock the sack, and when we finally get 
the sack lock, we find there is no work for us. this means we just added x 
seconds to the response time for no reason.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tooz] etcd3 driver (was: [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM)

2017-04-28 Thread Julien Danjou
On Sun, Mar 19 2017, Jay Pipes wrote:

Jay, are you working or planning to work on the group interface soon?
I'm planning to take a look at it ASAP now but I'd hate to duplicate any
work. :)
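
For reference, what's missing is roughly the group side of the coordinator
API -- something like this (a sketch; the endpoint URL and ids are made up,
heartbeating and watch callbacks are left out):

from tooz import coordination

coord = coordination.get_coordinator('etcd3://127.0.0.1:2379', b'member-1')
coord.start()

# The group calls the etcd3 driver would need to grow:
coord.create_group(b'my-group').get()
coord.join_group(b'my-group').get()
print(coord.get_members(b'my-group').get())
coord.leave_group(b'my-group').get()

coord.stop()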

> Also, I gave a stab at an etcd3 tooz coordination driver:
>
> https://review.openstack.org/#/c/447223/
>
> Can't figure out how to get the tests to run properly, but meh, I'll get to it
> in a bit :)
>
> -jay
>
> On 03/19/2017 04:34 PM, Davanum Srinivas wrote:
>> Quick update: We have 3 options to talk to etcd:
>>
>> 1) existing V2 HTTP API - still not deprecated in etcd 3.1.x, so
>> existing code in tooz still works.
>> 2) grpc based V3 API - We can use the etcd3 package, discussion in
>> review[1], problem is that this will not work currently with eventlet
>> based services
>> 3) v3alpha HTTP API - See details in [2], quick prototype requests
>> based code [3]
>>
>> Thanks,
>> Dims
>>
>> [1] https://review.openstack.org/#/c/446983/
>> [2]
>> https://github.com/coreos/etcd/blob/master/Documentation/dev-guide/api_grpc_gateway.md
>> [3] https://gist.github.com/dims/19ceaf9472ef54aa3011d7a11e809371
>>
>> On Sun, Mar 19, 2017 at 9:32 AM, Jay Pipes  wrote:
>>> On 03/18/2017 07:48 PM, Mike Perez wrote:

 On 12:35 Mar 14, Jay Pipes wrote:
>
> On 03/14/2017 08:57 AM, Julien Danjou wrote:
>>
>> On Tue, Mar 14 2017, Davanum Srinivas wrote:
>>
>>> Let's do it!! (etcd v2-v3 in tooz)
>>
>>
>> Hehe. I'll move that higher in my priority list, I swear. But anyone is
>> free to beat me to it in the meantime. ;)
>
>
> A weekend experiment:
>
> https://github.com/jaypipes/os-lively
>
> Not tooz, because I'm not interested in a DLM nor leader election library
> (that's what the underlying etcd3 cluster handles for me), only a fast
> service liveness/healthcheck system, but it shows usage of etcd3 and
> Google
> Protocol Buffers implementing a simple API for liveness checking and host
> maintenance reporting.
>
> My plan is to push some proof-of-concept patches that replace Nova's
> servicegroup API with os-lively and eliminate Nova's use of an RDBMS for
> service liveness checking, which should dramatically reduce the amount of
> both DB traffic as well as conductor/MQ service update traffic.


 As Monty has mentioned, I'd love for us to decide on one thing. As being
 a moderator of that discussion I was trying not to be one-sided.

 Whether or not a decision was made 1.5 years ago, the community that was
 present at that time of the decision at the summit decided on an
 abstraction
 layer to have options. Forcing an option on the community without
 gathering
 feedback of what the community currently looks like is not a good idea.

 I'd recommend if you want to make this base service, start the discussions
 in
 somewhere other than the dev list, like the Forum.
>>>
>>>
>>> Mike, it was an experiment :)
>>>
>>> But, yes, happy to participate in a discussion at the forum.
>>>
>>>
>>> Best,
>>> -jay
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

> what if we just don't lock ever on delete, but if we still check lock to 
> see if it's processed. at that point, the janitor knows that metric(s) 
> is deleted, and since no one else is working on sack, any metricd that 
> follows will also know that the metric(s) are deleted and therefore, 
> won't work on it.

I did not understand your whole idea, can you detail a bit more?
Though the "check lock" approach usually does not work and is a source
of race condition, but again, I did not get the whole picture. :)

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, gordon chung wrote:

> so the tradeoff here is that now we're doing a lot more calls to 
> indexer. additionally, we're pulling a lot more unused results from db.
> a single janitor currently just grabs all deleted metrics and starts 
> attempting to clean them up one at a time. if we merge, we will have n 
> calls to indexer, where n is number of workers, each pulling in all the 
> deleted metrics, and then checking to see if the metric is in its sack, 
> and if not, moving on. that's a lot of extra, wasted work. we could 
> reduce that work by adding sack information to indexer ;) but that will 
> still add significantly more calls to indexer (which we could reduce by 
> not triggering cleanup every job interval)

That's not what I meant. You can have the same mechanism as currently,
but then you compute the sacks of all metrics and you
itertools.groupby() per sack on them before locking the sack and
expunging them.

> refresh is currently disabled by default so i think we're ok.

Well you mean it's disabled by default _in the CLI_, not in the API
right?

> what's the timeout for? timeout api's attempt to aggregate metric? i 
> think it's a bad experience if we add any timeout since i assume it will 
> still return what it can return and then the results become somewhat 
> ambiguous.

No, I meant timeout for grabbing the sack's lock. You wouldn't return a
2xx but a 5xx stating the API is unable to compute stuff right now, so
try again without refresh or something.
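
Roughly (sketch only -- get_sack_lock/process_sack are stand-ins, and the
exception-to-5xx mapping is just the idea being floated, not existing code):

class SackLockTimeout(Exception):
    """Would be mapped to a 5xx by the API layer (made-up class)."""

def refresh_with_timeout(get_sack_lock, process_sack, sack, timeout=10.0):
    # try to take the sack lock for at most `timeout` seconds instead of
    # blocking the API request indefinitely.
    lock = get_sack_lock(sack)
    if not lock.acquire(blocking=timeout):
        raise SackLockTimeout()
    try:
        process_sack(sack)
    finally:
        lock.release()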

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Moving python-gnocchiclient to GitHub

2017-04-28 Thread Julien Danjou
On Fri, Apr 28 2017, Javier Pena wrote:

> I see none of the tags or branches have been moved over, could they be copied?
> It would of great help to packagers.

Oh for sure, that's my mistake, I pushed all branches and old tags!
Thanks for noting!

-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 03:48 AM, Julien Danjou wrote:
> Yes, I wrote that in a review somewhere. We need to rework 1. so
> deletion happens at the same time we lock the sack to process metrics
> basically. We might want to merge the janitor into the worker I imagine.
> Currently a janitor can grab metrics and do dumb things like:
> - metric1 from sackA
> - metric2 from sackB
> - metric3 from sackA
>
> and do 3 different lock+delete -_-

what if we just never lock on delete, but still check the lock to see if 
the sack is being processed? at that point, the janitor knows that the 
metric(s) are deleted, and since no one else is working on the sack, any 
metricd that follows will also know that the metric(s) are deleted and 
therefore won't work on them.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Forum] Moderators needed!

2017-04-28 Thread Jay Pipes

On 04/28/2017 08:22 AM, Shamail Tahir wrote:

Hi everyone,

Most of the proposed/accepted Forum sessions currently have moderators
but there are six sessions that do not have a confirmed moderator yet.
Please look at the list below and let us know if you would be willing to
help moderate any of these sessions.





Cloud-Native Design/Refactoring across OpenStack



Hi Shamail,

The one above looks like Alan (cc'd) is the moderator. :)

Despite him having an awkwardly over-sized brain -- which unfortunately 
will limit the number of other people that can fit in the room -- I do 
think Alan will be a good moderator.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Compute Node restart

2017-04-28 Thread John Garbutt
On 27 April 2017 at 06:45, Ajay Kalambur (akalambu)  wrote:
> I am just issuing a reboot command on the compute node
>
> Not a reboot –f
>
> From: Mark Mielke 
> Date: Wednesday, April 26, 2017 at 8:42 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> , Ajay Kalambur 
> Subject: Re: [openstack-dev] [nova] Compute Node restart
>
> On Apr 25, 2017 2:45 AM, "Ajay Kalambur (akalambu)" 
> wrote:
>
> I see that when a host is gracefully rebooted nova-compute receives a
> lifecycle event to shutdown the instance and it updates the database with
> the state set to SHUTOFF.
> Now when the compute node reboots and libvirt brings the VM back up, nova checks
> its database, issues the stop API and hence shuts down the VM.
>
> So even though resume guest state on reboot is set, it does not work. Why
> does nova-compute ignore the lifecycle event from libvirt saying to bring up
> the VM, and instead reconcile with the database?
>
>
> How are you defining "graceful reboot"?
>
> I am thinking that you may be defining it in a way that implies prior safe
> shutdown of the guests including setting their state to "power off", in
> which case restoring them to the prior state before reboot will correctly
> leave them "power off". I believe this feature is only intended to work with
> a shutdown of the hypervisor such as occurs when you "shutdown -r now" on
> the hypervisor without first shutting down the guests.

It sounds like the libvirt service is stopping (and stopping all the VMs)
before the nova-compute service is stopped.

I would have expected the nova-compute service to stop running before the
VMs are stopped.

Is the service order incorrect somehow? I suspect this is
package/distro specific - what are you running?

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 21

2017-04-28 Thread Chris Dent


Placement and resource providers update 21. Please let me know if
anything is incorrect or missing.

If you're going to be in Boston there are some placement related
sessions that may be worth your while:

* Scheduler Wars: A New Hope
   
https://www.openstack.org/summit/boston-2017/summit-schedule/events/17501/scheduler-wars-a-new-hope

* Behind the Scenes with Placement and Resource Tracking in Nova
   
https://www.openstack.org/summit/boston-2017/summit-schedule/events/17511/behind-the-scenes-with-placement-and-resource-tracking-in-nova

* Comparing Kubernetes and OpenStack Resource Management
   
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18726/comparing-kubernetes-and-openstack-resource-management

# What Matters Most

To much acclaim, the claims via the scheduler spec merged and work
has begun. Engaging with that is the top priority. There's plenty
of other work in progress too which needs to advance. Lots of links
within.

# What's Changed

Besides the claims spec merging, most of the progress has been on
moving the various themes forward, but no major changes.

# Help Wanted

(This section not changed since last week)

Areas where volunteers are needed.

* General attention to bugs tagged placement:
   https://bugs.launchpad.net/nova/+bugs?field.tag=placement

* Helping to create api documentation for placement (see the Docs
   section below).

* Helping to create and evaluate functional tests of the resource
   tracker and the ways in which it and nova-scheduler use the
   reporting client. For some info see
   https://etherpad.openstack.org/p/nova-placement-functional
   and talk to edleafe. He has a work in progress at:

   https://review.openstack.org/#/c/446123/

   that seeks input and assistance.

* Performance testing. If you have access to some nodes, some basic
  benchmarking and profiling would be very useful. See the
  performance section below.

# Main Themes

## Claims in the Scheduler

Work has started on placement-claims blueprint:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/placement-claims

We intentionally left some detail out of the spec because we knew
that we would find some edge cases while the implementation is being done.

## Traits

The main API is in place; there's one patch left for a new command
to sync the os-traits library into the database:

 https://review.openstack.org/#/c/450125/

Work will be required at some point on filtering resource providers
based on traits, and adding traits to resource providers from the
resource tracker.

There are also pending changes to the os-traits library to add more
traits and automate creating symbols associated with the trait
strings:

https://review.openstack.org/#/q/project:openstack/os-traits+status:open
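
As an illustration of how consuming code is expected to use those
symbols, here is a minimal sketch. It assumes the generated constants
and the get_traits() helper from the os-traits work above, so treat the
exact names as examples rather than a guaranteed stable API:

    import os_traits

    # Standard traits are exposed as module-level constants whose
    # values are the canonical strings stored by the placement service.
    print(os_traits.HW_CPU_X86_AVX2)

    # get_traits() lists the standard trait strings; a sync command can
    # iterate over this when populating the traits table.
    for trait in os_traits.get_traits():
        if trait.startswith('HW_CPU_X86'):
            print(trait)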

## Shared Resource Providers

https://blueprints.launchpad.net/nova/+spec/shared-resources-pike

Initial work has begun on this:

https://review.openstack.org/#/q/status:open+topic:bp/shared-resources-pike

## Docs

(This section has not changed since last week)

https://review.openstack.org/#/q/topic:cd/placement-api-ref

Several reviews are in progress for documenting the placement API.
This is likely going to take quite a few iterations as we work out
the patterns and tooling. But it's great to see the progress, and
looking at the draft rendered docs makes placement feel like
a real thing™.

We need multiple reviewers on this stuff, early in the process, as
it helps to identify missteps in the phrasing and styling before we
develop bad habits.

Find me (cdent) or Andrey (avolkov) if you want to help out or have
other questions.

## Performance

(This section has not changed since last week)

We're aware that there are some redundancies in the resource tracker
that we'd like to clean up

 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html

but it's also the case that we've done no performance testing on the
placement service itself.

We ought to do some testing to make sure there aren't unexpected
performance drains.

## Nested Resource Providers

(This section has not changed since last week)

On hold while attention is given to traits and claims. There's a
stack of code waiting until all of that settles:

  
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers

## Ironic/Custom Resource Classes

(This section has not changed since last week)

There's a blueprint for "custom resource classes in flavors" that
describes the stuff that will actually make use of custom resource
classes:

 
https://blueprints.launchpad.net/nova/+spec/custom-resource-classes-in-flavors

The spec has merged, but the implementation has not yet started.

Over in Ironic some functional and integration tests have started:

 https://review.openstack.org/#/c/443628/

There's also a spec in progress discussing ways 

Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread gordon chung


On 28/04/17 03:48 AM, Julien Danjou wrote:
>
> Yes, I wrote that in a review somewhere. We need to rework 1. so
> deletion happens at the same time we lock the sack to process metrics
> basically. We might want to merge the janitor into the worker I imagine.
> Currently a janitor can grab metrics and do dumb things like:
> - metric1 from sackA
> - metric2 from sackB
> - metric3 from sackA
>
> and do 3 different lock+delete -_-

so the tradeoff here is that now we're doing a lot more calls to the 
indexer. additionally, we're pulling a lot more unused results from the db. 
a single janitor currently just grabs all deleted metrics and starts 
attempting to clean them up one at a time. if we merge, we will have n 
calls to the indexer, where n is the number of workers, each pulling in 
all the deleted metrics, and then checking to see if the metric is in its 
sack, and if not, moving on. that's a lot of extra, wasted work. we could 
reduce that work by adding sack information to the indexer ;) but that will 
still add significantly more calls to the indexer (which we could reduce by 
not triggering cleanup every job interval).


>>
>> alternatively, this could be solved by keeping per-metric locks in
>> addition to per-sack locks. this would effectively double the number of
>> active locks we have so instead of each metricd worker having a single
>> per-sack lock, it will also have a per-metric lock for whatever metric
>> it may be publishing at the time.
>
> If we got a timeout set for scenario 3, I'm not that worried. I guess
> worst thing is that people would be unhappy with the API spending time
> doing computation anyway so we'd need to rework how refresh work or add
> an ability to disable it.
>

refresh is currently disabled by default so i think we're ok.

what's the timeout for? to time out the api's attempt to aggregate the 
metric? i think it's a bad experience if we add any timeout, since i assume 
it will still return what it can and then the results become somewhat 
ambiguous.

now that i think about it more, this issue still exists in the per-metric 
scenario (but to a lesser extent). 'refresh' can still be blocked by 
metricd, but there's a significantly smaller chance of that and the window 
for missed unprocessed measures is smaller.
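
just to illustrate, a rough sketch of the per-sack + per-metric idea 
using tooz. the backend url, lock names and sack numbering are made up 
for the example and are not gnocchi's actual naming scheme:

    from tooz import coordination

    coord = coordination.get_coordinator(
        'memcached://127.0.0.1:11211', b'worker-0')
    coord.start()

    def process_sack(sack_id, metric_ids):
        sack_lock = coord.get_lock(b'sack-%d' % sack_id)
        if not sack_lock.acquire(blocking=False):
            return  # another worker owns the sack; skip it this cycle
        try:
            for metric_id in metric_ids:
                # the short-lived per-metric lock is what lets the api
                # (refresh) or the janitor (delete) touch one metric
                # without blocking, or being blocked by, the whole sack.
                metric_lock = coord.get_lock(
                    b'metric-' + str(metric_id).encode())
                if not metric_lock.acquire(blocking=False):
                    continue
                try:
                    pass  # aggregate this metric's unprocessed measures
                finally:
                    metric_lock.release()
        finally:
            sack_lock.release()

the extra per-metric lock roughly doubles the number of lock calls per 
cycle, which is the tradeoff being weighed above.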

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Moving python-gnocchiclient to GitHub

2017-04-28 Thread Javier Pena


- Original Message -
> On Tue, Apr 25 2017, Julien Danjou wrote:
> 
> > We're in the process of moving python-gnocchiclient to GitHub. The
> > patches are up for review:
> >
> >   https://review.openstack.org/#/c/459748/
> >
> > and its dependencies need to be merged before this happen. As soon as
> > this patch is merged, the repository will officially be available at:
> >
> >   https://github.com/gnocchixyz/python-gnocchiclient
> 
> This happened! The repository has now been moved to GitHub. I've also
> created copies of the existing opened bugs there so we don't lose that
> data.
> 
> If I missed anything, let me know.
> 

Hi Julien,

I see none of the tags or branches have been moved over; could they be copied?
It would be of great help to packagers.

Thanks,
Javier

> --
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Forum] Moderators needed!

2017-04-28 Thread Shamail Tahir
Hi everyone,

Most of the proposed/accepted Forum sessions currently have moderators but
there are six sessions that do not have a confirmed moderator yet. Please
look at the list below and let us know if you would be willing to help
moderate any of these sessions.

The topics look really interesting but it will be difficult to keep the
sessions on the schedule if there is not an assigned moderator. We look
forward to seeing you at the Summit/Forum in Boston soon!

Achieving Resiliency at Scales of 1000+

Feedback from users for I18n & translation - important part?

Neutron Pain Points

Making Neutron easy for people who want basic networking

High Availability in OpenStack

Cloud-Native Design/Refactoring across OpenStack



Thanks,
Doug, Emilien, Melvin, Mike, Shamail & Tom
Forum Scheduling Committee
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack] API to get tunnel port connect to other host

2017-04-28 Thread Vikash Kumar
Is there any neutron API that returns the details of the tunnel port
connected to another host?

For example, I have Host-A and Host-B. Is there a way to know which
tunnel port on Host-A connects to Host-B?

I can't use OVS commands directly.

-- 
Regards,
Vikash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] stepping down from neutron core team

2017-04-28 Thread Anna Taraday
Thanks a lot for all that you've done! Wish you all the best!
On Fri, Apr 28, 2017 at 1:04 PM Miguel Angel Ajo Pelayo 
wrote:

>
> Hi everybody,
>
> Some of you already know, but I wanted to make it official.
>
> Recently I moved to work on the networking-ovn component,
> and OVS/OVN itself, and while I'll stick around and I will be available
> on IRC for any questions, I'm already not doing a good job with
> neutron reviews, so...
>
> It's time to leave room for new reviewers.
>
> It's always a pleasure to work with you folks.
>
> Best regards,
> Miguel Ángel Ajo.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Regards,
Ann Taraday
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] Moving python-gnocchiclient to GitHub

2017-04-28 Thread Julien Danjou
On Tue, Apr 25 2017, Julien Danjou wrote:

> We're in the process of moving python-gnocchiclient to GitHub. The
> patches are up for review:
>
>   https://review.openstack.org/#/c/459748/
>
> and its dependencies need to be merged before this happen. As soon as
> this patch is merged, the repository will officially be available at:
>
>   https://github.com/gnocchixyz/python-gnocchiclient

This happened! The repository has now been moved to GitHub. I've also
created copies of the existing opened bugs there so we don't lose that
data.

If I missed anything, let me know.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Blazar] Meeting time slots in Boston

2017-04-28 Thread Sylvain Bauza


Le 21/04/2017 06:41, Masahito MUROI a écrit :
> Hi all,
> 
> Thanks for choosing time slots!  Based on the table of Doodle, I'd like
> to pick following two slots for Blazar team meeting.
> 
> 1. 1pm-4pm on Monday for Blazar's internal features
> 2. 9am-10am or 11am on Thursday for discussions with PlacementAPI team
> 
> The first meeting will focus on Blazar's internal features, roadmap and
> etc.  1pm-2pm is also Lunch time. So it could start as lunch meeting.
> 
> In the second slot, we plan to discuss with PlacementAPI team. Summit
> would have breakout rooms or tables as usual.  We'll gather one of these
> and discuss concerns and/or usecases of collaboration with PlacementAPI.
> 

Being in the first meeting is difficult for me, but I can try to attend
it if you want me there. :-) Just ping me if so.

For the second meeting, I'll be there.

-Sylvain

> 
> best regards,
> Masahito
> 
> 
> On 2017/04/18 13:21, Masahito MUROI wrote:
>> Hi Blazar folks,
>>
>> I created doodle[1] to decide our meeting time slots in Boston summit.
>> Please check slots you can join the meeting by Thursday.  I'll decide
>> the slots on this Friday.
>>
>> Additionally, I'd like to ask you to write down how many hours we have
>> the meeting in comments of the page.
>>
>> 1. http://doodle.com/poll/a7pccnhqsuk9tax7
>>
>> best regards,
>> Masahito
>>
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Which distros are used as base ones?

2017-04-28 Thread Christian Berendt

> On 27. Apr 2017, at 11:46, Marcin Juszkiewicz  
> wrote:
> 
> Does someone care about Ubuntu?

Yes, we do. We are using the Ubuntu source images with the Newton and Ocata 
branches from kolla/kolla-ansible.

Christian.

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone]oslo.config 4.0 will break projects' unit test

2017-04-28 Thread ChangBo Guo
Most projects have landed fixes for oslo.config 4.0. Keystone needs more effort
to make the unit tests pass. In addition to
https://review.openstack.org/#/c/455391/, which fixes most of the failures,
there are other failures [1], so I created a bug [2] for Keystone. These
changes require Keystone domain knowledge and I only tried to fix the
simple ones, so Keystone team, please help dig in. Thanks!

[1]
http://logs.openstack.org/periodic/periodic-keystone-py27-with-oslo-master/0868d74/testr_results.html.gz
[2] https://bugs.launchpad.net/keystone/+bug/1686921
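
For anyone wondering what actually changes, here is a small sketch of
the behaviour difference; the option name and values are made up:

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opt(cfg.IntOpt('workers', default=1))

    # With enforce_type=False (the old default) this silently stores a
    # string where an integer is expected, hiding a broken test fixture.
    CONF.set_override('workers', 'eight', enforce_type=False)

    # With enforce_type=True (the new default in oslo.config 4.0) the
    # same override is validated against the option type and raises a
    # ValueError, which is what shows up as unit test failures.
    try:
        CONF.set_override('workers', 'eight', enforce_type=True)
    except ValueError:
        print('override rejected: workers must be an integer')

The fix on the project side is simply to override options with values
of the correct type (e.g. 8 instead of 'eight').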



2017-04-16 15:32 GMT+08:00 ChangBo Guo :

> As expected, there have been some failures in the periodic tasks recently
> [1] since we set enforce_type to True by default; we'd better fix them
> before we release oslo.config 4.0. Several people have been working on this:
> Nova: https://review.openstack.org/455534 should fix failures
> tempest:  https://review.openstack.org/456445 fixed
> Keystone:  https://review.openstack.org/455391 wait for oslo.config 4.0
>
> We still need help from Glance/Ironic/Octavia
> Glance:  https://review.openstack.org/#/c/455522/ need review
> Ironic:  Need fix failure in http://logs.openstack.org/
> periodic/periodic-ironic-py27-with-oslo-master/680abfe/
> testr_results.html.gz
> Octavia: Need fix failure in http://logs.openstack.org/
> periodic/periodic-octavia-py35-with-oslo-master/80fee03/
> testr_results.html.gz
>
> [1] http://status.openstack.org/openstack-health/#/?groupKey=
> build_name=hour=-with-oslo
>
> 2017-04-04 0:01 GMT+08:00 ChangBo Guo :
>
>> Hi ALL,
>>
>> oslo_config provides the method CONF.set_override [1]; developers usually use
>> it to change a config option's value in tests. That's convenient. By
>> default the parameter enforce_type=False, so it doesn't check the type or
>> value of the override. If enforce_type=True is set, it will check the
>> override's type and value. In production (runtime) code,
>> oslo_config always checks a config option's value. In short, we test and
>> run code in different ways, so there's a gap: a config option with the wrong
>> type or an invalid value can pass tests when
>> the parameter enforce_type=False in consuming projects. That means some
>> invalid or wrong tests are in our code base.
>>
>>
>> We began warning users about the change in Sep 2016 in [2]. This
>> change notifies consuming projects to write correct test cases with
>> config options.
>> We will make enforce_type=True by default in [3]; that may break some
>> projects' tests, as it also exposes incorrect unit tests. The failures
>> are easy to fix, which
>> is recommended.
>>
>>
>> [1] https://github.com/openstack/oslo.config/blob/efb287a94645b1
>> 5b634e8c344352696ff85c219f/oslo_config/cfg.py#L2613
>> [2] https://review.openstack.org/#/c/365476/
>> [3] https://review.openstack.org/328692
>>
>> --
>> ChangBo Guo(gcb)
>>
>
>
>
> --
> ChangBo Guo(gcb)
>



-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] stepping down from neutron core team

2017-04-28 Thread Miguel Angel Ajo Pelayo
Hi everybody,

Some of you already know, but I wanted to make it official.

Recently I moved to work on the networking-ovn component,
and OVS/OVN itself, and while I'll stick around and I will be available
on IRC for any questions, I'm already not doing a good job with
neutron reviews, so...

It's time to leave room for new reviewers.

It's always a pleasure to work with you folks.

Best regards,
Miguel Ángel Ajo.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] allow vfs to be trusted

2017-04-28 Thread Sahid Orentino Ferdjaoui
Hello Matt,

There is a series of patches pushed upstream [0] to configure virtual
functions of an SR-IOV device to be "trusted".

That is to fix an issue when bonding two SR-IOV NICs in failover mode:
without that capability set on the assigned VFs, the guest does not
have the privilege to update the MAC address of the slave.

I do think this can be spec-less, but it needs a reno note that clearly
explains how to configure it.

Can the blueprint attached to it be considered for Pike?

The 3 patches are relatively small;
 - network: add command to configure trusted mode for VFs
 - network: update pci request spec to handle trusted tags
 - libvirt: configure trust mode for vfs

[0] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/sriov-trusted-vfs

Thanks,
s.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-04-28 Thread Andrea Frittoli
On Fri, Apr 28, 2017 at 10:29 AM Rabi Mishra  wrote:

> On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli <
> andrea.fritt...@gmail.com> wrote:
>
>> Dear stackers,
>>
>> starting in the Liberty cycle Tempest has defined a set of projects which
>> are in scope for direct
>> testing in Tempest [0]. The current list includes keystone, nova, glance,
>> swift, cinder and neutron.
>> All other projects can use the same Tempest testing infrastructure (or
>> parts of it) by taking advantage
>> the Tempest plugin and stable interfaces.
>>
>> Tempest currently hosts a set of API tests as well as a service client
>> for the Heat project.
>> The Heat service client is used by the tests in Tempest, which run in
>> Heat gate as part of the grenade
>> job, as well as in the Tempest gate (check pipeline) as part of the
>> layer4 job.
>> According to code search [3] the Heat service client is also used by
>> Murano and Daisycore.
>>
>
> For the heat grenade job, I've proposed two patches.
>
> 1. To run heat tree gabbi api tests as part of grenade 'post-upgrade' phase
>
> https://review.openstack.org/#/c/460542/
>
> 2. To remove tempest tests from the grenade job
>
> https://review.openstack.org/#/c/460810/
>
>
>
>> I proposed a patch to Tempest to start the deprecation counter for Heat /
>> orchestration related
>> configuration items in Tempest [4], and I would like to make sure that
>> all tests and the service client
>> either find a new home outside of Tempest, or are removed, by the end the
>> Pike cycle at the latest.
>>
>> Heat has in-tree integration tests and Gabbi based API tests, but I don't
>> know if those provide
>> enough coverage to replace the tests on Tempest side.
>>
>>
> Yes, the heat gabbi api tests do not yet have the same coverage as the
> tempest tree api tests (they lack tests using nova, neutron and swift
> resources), but I think that should not stop us from dropping the
> tempest tests from the grenade job.
>
> I also don't know if the tempest tree heat tests are used by any other
> upstream/downstream jobs. We could surely add more tests to bridge the gap.
>
> Also, It's possible to run the heat integration tests (we've enough
> coverage there) with tempest plugin after doing some initial setup, as we
> do in all our dsvm gate jobs.
>
> It would propose to move tests and client to a Tempest plugin owned /
>> maintained by
>> the Heat team, so that the Heat team can have full flexibility in
>> consolidating their integration
>> tests. For Murano and Daisycloud - and any other team that may want to
>> use the Heat service
>> client in their tests, even if the client is removed from Tempest, it
>> would still be available via
>> the Heat Tempest plugin. As long as the plugin implements the service
>> client interface,
>> the Heat service client will register automatically in the service client
>> manager and be available
>> for use as today.
>>
>>
> if I understand correctly, you're proposing moving the existing tempest
> tests and service clients to a separate repo managed by heat team. Though
> that would be collective decision, I'm not sure that's something I would
> like to do. To start with we may look at adding some of the missing pieces
> in heat tree itself.
>

I'm proposing to move tests and the service client outside of tempest to a
new home.

I also suggested that the new home could be a dedicated repo, since that
would allow you to maintain the
current branchless nature of those tests. A more detailed discussion about
the topic can be found
in the corresponding proposed Queens goal [5].

Using a dedicated repo *is not* a precondition for moving tests and service
client out of Tempest.

andrea

[5] https://review.openstack.org/#/c/369749/
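
To make the "registers automatically" part concrete, here is a rough
sketch of the plugin side. The module path and client name are
placeholders, and the dict keys follow Tempest's service client
registration interface:

    from tempest.test_discover import plugins


    class HeatTempestPlugin(plugins.TempestPlugin):
        """Skeleton plugin; only the service client hook is filled in."""

        def load_tests(self):
            pass  # return (full_tests_dir, base_path) for discovery

        def register_opts(self, conf):
            pass

        def get_opt_lists(self):
            return []

        def get_service_clients(self):
            # Returning the client description here is what makes the
            # orchestration client available through the Tempest service
            # client manager, as it is today with the in-tree client.
            return [{
                'name': 'heat_plugin',
                'service_version': 'orchestration.v1',
                'module_path': 'heat_tempest_plugin.services.clients',
                'client_names': ['OrchestrationClient'],
            }]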


>
> Andrea Frittoli (andreaf)
>>
>> [0]
>> https://docs.openstack.org/developer/tempest/test_removal.html#tempest-scope
>> [1] https://docs.openstack.org/developer/tempest/plugin.html
>> [2] https://docs.openstack.org/developer/tempest/library.html
>> [3]
>> http://codesearch.openstack.org/?q=self.orchestration_client=nope==
>>
>> [4] https://review.openstack.org/#/c/456843/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Rabi Mishra
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [nova] Discussions for DPDK support in OpenStack

2017-04-28 Thread sfinucan
On Fri, 2017-04-28 at 13:23 +0900, TETSURO NAKAMURA wrote:
> Hi Nova team,
> 
> I'm writing this e-mail because I'd like to have a discussion about
> DPDK support at OpenStack Summit in Boston.
> 
> We have developed a dpdk-based patch panel named SPP[1], and we'd
> like to start working on Openstack (ML2 driver) to develop
> "networking-spp".
> 
> Especially, we'd like to use DPDK-ivshmem that was used to be used
> to create "dpdkr" interface in ovs-dpdk[2].

To the best of my knowledge, IVSHMEM ports are no longer supported in
upstream. The documentation for this feature was recently removed from
OVS [1] stating:

  - The ivshmem library has been removed in DPDK since DPDK 16.11.
  - The instructions/scheme provided will not work with current
    supported and future DPDK versions.
  - The linked patch needed to enable support in QEMU has never
    been upstreamed and does not apply to the last 4 QEMU releases.
  - Userspace vhost has become the defacto OVS-DPDK path to the guest.

Note: I worked on DPDK vSwitch [2] way back when, and there were severe
security implications with sharing a chunk of host memory between
multiple guests (which is how IVSHMEM works). I'm not at all surprised
the feature was killed.

> We have issued a blueprint[3] for that use case.

Per above, I don't think this is necessary. vhost-user ports already
work as expected in nova.

> As we are attending Boston Summit, could you have a discussion with
> us at the Summit?

I'll be around the summit (IRC: sfinucan) if you want to chat more.
However, I'd suggest reaching out to Sean Mooney or Igor Duarte Cardoso
(both CCd) if you want further information about general support of
OVS-DPDK in OpenStack and DPDK acceleration in SFC, respectively. I'd
also suggest looking at networking-ovs-dpdk [3] which contains a lot of
helper tools for using OVS-DPDK in OpenStack, along with links to a
Brighttalk video I recently gave regarding the state of OVS-DPDK in
OpenStack.

Hope this helps,
Stephen

[1] https://github.com/openvswitch/ovs/commit/90ca71dd317fea1ccf0040389
dae895aa7b2b561
[2] https://github.com/01org/dpdk-ovs
[3] https://github.com/openstack/networking-ovs-dpdk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][heat][murano][daisycloud] Removing Heat support from Tempest

2017-04-28 Thread Rabi Mishra
On Thu, Apr 27, 2017 at 3:55 PM, Andrea Frittoli 
wrote:

> Dear stackers,
>
> starting in the Liberty cycle Tempest has defined a set of projects which
> are in scope for direct
> testing in Tempest [0]. The current list includes keystone, nova, glance,
> swift, cinder and neutron.
> All other projects can use the same Tempest testing infrastructure (or
> parts of it) by taking advantage
> the Tempest plugin and stable interfaces.
>
> Tempest currently hosts a set of API tests as well as a service client for
> the Heat project.
> The Heat service client is used by the tests in Tempest, which run in Heat
> gate as part of the grenade
> job, as well as in the Tempest gate (check pipeline) as part of the layer4
> job.
> According to code search [3] the Heat service client is also used by
> Murano and Daisycore.
>

For the heat grenade job, I've proposed two patches.

1. To run heat tree gabbi api tests as part of grenade 'post-upgrade' phase

https://review.openstack.org/#/c/460542/

2. To remove tempest tests from the grenade job

https://review.openstack.org/#/c/460810/



> I proposed a patch to Tempest to start the deprecation counter for Heat /
> orchestration related
> configuration items in Tempest [4], and I would like to make sure that all
> tests and the service client
> either find a new home outside of Tempest, or are removed, by the end the
> Pike cycle at the latest.
>
> Heat has in-tree integration tests and Gabbi based API tests, but I don't
> know if those provide
> enough coverage to replace the tests on Tempest side.
>
>
Yes, the heat gabbi api tests do not yet have the same coverage as the
tempest tree api tests (they lack tests using nova, neutron and swift
resources), but I think that should not stop us from dropping the
tempest tests from the grenade job.

I also don't know if the tempest tree heat tests are used by any other
upstream/downstream jobs. We could surely add more tests to bridge the gap.

Also, it's possible to run the heat integration tests (we have enough
coverage there) with the tempest plugin after doing some initial setup, as we
do in all our dsvm gate jobs.

It would propose to move tests and client to a Tempest plugin owned /
> maintained by
> the Heat team, so that the Heat team can have full flexibility in
> consolidating their integration
> tests. For Murano and Daisycloud - and any other team that may want to use
> the Heat service
> client in their tests, even if the client is removed from Tempest, it
> would still be available via
> the Heat Tempest plugin. As long as the plugin implements the service
> client interface,
> the Heat service client will register automatically in the service client
> manager and be available
> for use as today.
>
>
If I understand correctly, you're proposing moving the existing tempest
tests and service clients to a separate repo managed by the heat team. Though
that would be a collective decision, I'm not sure that's something I would
like to do. To start with, we may look at adding some of the missing pieces
in the heat tree itself.

Andrea Frittoli (andreaf)
>
> [0] https://docs.openstack.org/developer/tempest/test_remova
> l.html#tempest-scope
> [1] https://docs.openstack.org/developer/tempest/plugin.html
> [2] https://docs.openstack.org/developer/tempest/library.html
> [3] http://codesearch.openstack.org/?q=self.orchestration_
> client=nope==
> [4] https://review.openstack.org/#/c/456843/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] per-sack vs per-metric locking tradeoffs

2017-04-28 Thread Julien Danjou
On Thu, Apr 27 2017, gordon chung wrote:

> so as we transition to the bucket/shard/sack framework for incoming 
> writes, we've set up locks on the sacks so we only have one process 
> handling any given sack. this allows us to remove the per-metric locking 
> we had previously.

yay!

> the issue i've notice now is that if we only have per-sack locking, 
> metric based actions can affect sack level processing. for example:
>
> scenario 1:
> 1. delete metric, locks sack to delete single metric,
> 2. metricd processor attempts to process entire sack but can't, so skips.

Yes, I wrote that in a review somewhere. We need to rework 1. so
deletion happens at the same time we lock the sack to process metrics
basically. We might want to merge the janitor into the worker I imagine.
Currently a janitor can grab metrics and do dumb things like:
- metric1 from sackA
- metric2 from sackB
- metric3 from sackA

and do 3 different lock+delete -_-
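
Something as simple as grouping the pending deletions by sack first
would avoid that. A tiny sketch, where sack_of() and
delete_metrics_in_sack() are placeholders for whatever we end up with:

    from collections import defaultdict

    def delete_by_sack(deleted_metrics, sack_of, delete_metrics_in_sack):
        # Group the pending deletions by sack so each sack is locked
        # and written once, instead of once per metric.
        by_sack = defaultdict(list)
        for metric in deleted_metrics:
            by_sack[sack_of(metric)].append(metric)
        for sack, metrics in by_sack.items():
            delete_metrics_in_sack(sack, metrics)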

> scenario 2:
> 1. API request passes in 'refresh' param so they want all unaggregated 
> metrics to be processed on demand and returned.
> 2. API locks 1 or more sacks to process 1 or more metrics
> 3. metricd processor attempts to process entire sack but can't, so 
> skips. potentially multiple sacks unprocessed in currently cycle.
>
> scenario 3
> same as scenario 2 but metricd processor locks first, and either blocks
> API process OR  doesn't allow API to guarantee 'all measures processed'.

Yes, I'm even more worried about scenario 3, we should probably add a
safe guard timeout parameter set by the admin there.

> i imagine these scenarios are not critical unless a very large 
> processing interval is defined or if for some unfortunate reason, the 
> metric-based actions are perfectly timed to lock out background processing.
>
> alternatively, this could be solved by keeping per-metric locks in 
> addition to per-sack locks. this would effectively double the number of 
> active locks we have so instead of each metricd worker having a single 
> per-sack lock, it will also have a per-metric lock for whatever metric 
> it may be publishing at the time.

If we got a timeout set for scenario 3, I'm not that worried. I guess
worst thing is that people would be unhappy with the API spending time
doing computation anyway so we'd need to rework how refresh work or add
an ability to disable it.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Which distros are used as base ones?

2017-04-28 Thread Marcin Juszkiewicz
On 27.04.2017 at 17:16, Michał Jastrzębski wrote:

> On 27 April 2017 at 02:46, Marcin Juszkiewicz
>  wrote:
>> Hi
>>
>> When I joined Kolla project I got info that Debian is going away. So I
>> took care of it and now it is updated to current 'testing' and has far
>> more images enabled than in past.
>>
>> CentOS support works fine. Even on AArch64 (if you have [1] applied).
>>
>> 1. https://review.openstack.org/#/c/430940/
>>
>> But what about Ubuntu? I have a feeling that no one is using it. Ocata
>> packages were used in master until I switched them to use Pike
>> repository instead.
>>
>> Today 'openstack-base' failed for me in 'ubuntu/source' build because
>> 'libmariadbclient-dev' does not exist in Ubuntu repositories.

> Of course we do. I, for one, run mostly ubuntu (but I must admit I haven't
> been building images for the last 2 or so weeks). It's strange what you're
> saying, because the ubuntu-source build is a voting gate, so if there were
> a problem like that we couldn't merge anything... Let's try to find
> out why your build failed.

I found out. I will send a fix.

The problem was due to the fact that I build on three different
architectures and looked into the build logs on aarch64 thinking they
were from my x86-64 builder.

Ubuntu is using an external repo for the MariaDB packages, and that repo
has packages for x86-64 and ppc64le only.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Discussions for DPDK support in OpenStack

2017-04-28 Thread Guo, Ruijing
FYI, DPDK removed ivshmem; see
http://dpdk.org/ml/archives/dev/2016-July/044552.html

-Original Message-
From: TETSURO NAKAMURA [mailto:nakamura.tets...@lab.ntt.co.jp] 
Sent: Friday, April 28, 2017 12:23 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova] Discussions for DPDK support in OpenStack

Hi Nova team,

I'm writing this e-mail because I'd like to have a discussion about DPDK 
support at OpenStack Summit in Boston.

We have developed a DPDK-based patch panel named SPP [1], and we'd like to start 
working on OpenStack (an ML2 driver) to develop "networking-spp".

In particular, we'd like to use DPDK ivshmem, which used to be used to create 
the "dpdkr" interface in ovs-dpdk [2].

We have issued a blueprint[3] for that use case.

As we are attending the Boston Summit, could we have a discussion with you 
there?

[1] http://www.dpdk.org/browse/apps/spp/
[2]
http://openvswitch.org/support/dist-docs-2.5/INSTALL.DPDK.md.html#L446-L490
[3] https://blueprints.launchpad.net/nova/+spec/libvirt-options-for-dpdk

Sincerely,

--
Tetsuro Nakamura  NTT Network Service Systems 
Laboratories
TEL:0422 59 6914(National)/+81 422 59 6914(International) 3-9-11, Midori-Cho 
Musashino-Shi, Tokyo 180-8585 Japan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev