Re: [openstack-dev] [Tatu][Nova] Handling instance destruction

2018-03-15 Thread Michael Still
Thanks for this. I read the README for the project after this and I do now
realise you're using notifications for some of these events.

I guess I'm still pondering if it's reasonable to have everyone listen to
notifications to build systems like these, or if we should add messages to
vendordata to handle these actions. Vendordata is aimed at deployers, so
having a simple and complete interface seems important.

There were also comments in the README about wanting to change the data
that appears in the metadata server over time. I'm wondering how that maps
into the configdrive universe. Could you explain those comments a bit more
please?

Thanks for your quick reply,
Michael




On Fri, Mar 16, 2018 at 2:18 PM, Pino de Candia  wrote:

> Hi Michael,
>
> Thanks for your message... and thanks for your vendordata work!
>
> About your question, Tatu listens to events on the oslo message bus.
> Specifically, it reacts to compute.instance.delete.end by cleaning up
> per-instance resources. It also listens to project creation and user role
> assignment changes. The code is at:
> https://github.com/openstack/tatu/blob/master/tatu/notifications.py
>
> best,
> Pino
>
>
> On Thu, Mar 15, 2018 at 3:42 PM, Michael Still  wrote:
>
>> Heya,
>>
>> I've just stumbled across Tatu and the design presentation [1], and I am
>> wondering how you handle cleaning up instances when they are deleted given
>> that nova vendordata doesn't expose a "delete event".
>>
>> Specifically I'm wondering if we should add support for such an event to
>> vendordata somehow, given I can now think of a couple of use cases for it.
>>
>> Thanks,
>> Michael
>>
>> 1: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p
>>


Re: [openstack-dev] [all][api] POST /api-sig/news

2018-03-15 Thread Gilles Dubreuil

Hello,

I'd like to continue making progress on the API Schema guideline [1], as
mentioned in [2], to make APIs more machine-discoverable; this was also
discussed during [3].


Unfortunately, until a new or second meeting time slot has been
allocated, this discussion, inconveniently for everyone, has to happen
by email.



> We felt that this was more of a one-off need rather than something
> we'd like to see rolled out across all OpenStack APIs.



Of course new features have to be decided (voted on) by the community, but
how does that work when there are not enough people voting?
It seems unfair to decide not to move forward and ignore the request
because the other interested people are not participating at this level.


It's very important to consider the fact that "I" am representing more than
just myself: an OpenStack integration team, whose members are
supporting me, and our work impacts other teams whose open
source products consume OpenStack. I'm sorry if I haven't made this
clearer from the beginning; I guess I'm still learning the
participation process. So from now on, I'm going to use "us" instead.


Also, from discussions with other developers from AT&T (at the OpenStack
summit in Sydney) and SAP (Misty project) who are already using automation to
consume APIs, this is really needed.


I've also mentioned the now well-known fact that no SDK has full-time
resources to maintain it (which was the initial trigger for us); more
automation is the only sustainable way to continue the journey.


Finally, how can we dare say no to more automation? Unless, of course,
only artisan work done by real hipsters is allowed ;)



> Furthermore, API-Schema will be problematic for services that use
> microversions. If you have some insight or opinions on this, please add
> your comments to that review.



I understand microversion standardization (OpenAPI) has not happened yet,
and may never happen, but that shouldn't preclude making progress.
As a matter of fact we can represent different versions of a resource in
many ways, just not in a standardized fashion; in the simplest (or
worst-case) scenario an API Schema can represent only one microversion,
most likely the latest at the specific point in time the schema was built.



Also responding to [4]:

> the goal, from gilles, is being able to create client code that works
> against real deployments


In some initial brainstorming, the idea effectively came up of
building the SDK client dynamically against any OpenStack
service: for instance, the API schema returned in a response would depend
on the specific (micro)versions supported by the server.
Although an interesting idea, that is not the feature we are talking
about, though it could be an interesting academic topic (or not!).


So, to summarize and clarify, we are talking about SDKs being able to build
their interfaces to OpenStack APIs in an automated but static way, from
an API Schema generated by every project. Such an API Schema is already
built in memory during API reference documentation generation and could
be saved in JSON format, for instance (see [5]).
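
To make this concrete, here is a purely hypothetical sketch of such a
saved schema and a trivial static consumer (the field names and layout
are invented for illustration; the real format is whatever the tooling
in [5] emits):

    import json

    schema = json.loads('''
    {
      "resource": "server",
      "microversion": "2.60",
      "operations": {
        "create": {"method": "POST", "path": "/servers",
                   "parameters": ["name", "flavorRef", "imageRef"]}
      }
    }
    ''')

    # An SDK generator could emit one static method per operation:
    for name, op in schema["operations"].items():
        print("def %s_%s(client, **kwargs):  # %s %s" % (
            schema["resource"], name, op["method"], op["path"]))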


[1] https://review.openstack.org/#/c/524467/
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2018-February/127140.html
[3] 
http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-08-16.00.log.html#l-95
[4] 
http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-08-16.00.log.html#l-127

[5] https://review.openstack.org/#/c/528801

Cheers,
Gilles

On 16/03/18 04:30, Chris Dent wrote:


Greetings OpenStack community,

A rousing good time at the API-SIG meeting today. We opened with some 
discussion on what might be missing from the Methods [7] section of 
the HTTP guidelines. At the PTG we had discussed that perhaps we 
needed more info on which methods were appropriate when. It turns out
that what we probably need is better discoverability, so we're going to
work on that, but at the same time do a general review of that entire
page.


We then talked about microversions a bit (because it wouldn't be an 
API-SIG meeting without them). There's an in-progress history of microversions
document (linked below) that we need to decide if we'll revive. If you 
have a strong opinion, let us know.


And finally we explored the options for how or if Neutron can cleanly 
resolve the handling of invalid query parameters. This was raised a 
while back in an email thread [8]. It's generally a good idea not to 
break existing client code, but what if that client code is itself 
broken? The next step will be to make the choice configurable. Neutron 
doesn't support microversions so "throw another microversion at it" 
won't work.


As always if you're interested in helping out, in addition to coming 
to the meetings, there's also:


* The list of bugs [5] indicates several missing or incomplete 
guidelines.
* The existing guidelines [2] always need refreshing to account for 
changes over time. If you find something that's not quite 

Re: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream

2018-03-15 Thread Sam P
Hi All,
 Sorry, late to the party...
 I have added myself.

--- Regards,
Sampath


On Fri, Mar 16, 2018 at 9:31 AM, Ghanshyam Mann wrote:

> On Thu, Mar 15, 2018 at 9:45 PM, Adam Spiers  wrote:
> > Raoul Scarazzini  wrote:
> >>
> >> On 15/03/2018 01:57, Ghanshyam Mann wrote:
> >>>
> >>> Thanks all for starting the collaboration on this, which has been long
> >>> pending, and we all want to make a start on it.
> >>> SamP and I talked about it during the OPS meetup in Tokyo, and we talked
> >>> about the draft plan below:
> >>> - Update the Spec - https://review.openstack.org/#/c/443504/, which is
> >>> almost ready as per SamP, and his team is working on it.
> >>> - Start the technical debate on tooling we can use/reuse like Yardstick
> >>> etc., which is more what this mailing thread is about.
> >>> - Accept the new repo for Eris under QA and start at least something in
> >>> the Rocky cycle.
> >>> I am in for having a meeting on this, which is a really good idea. A
> >>> non-IRC meeting is totally fine here. Do we have a meeting place and
> >>> time set up?
> >>> -gmann
> >>
> >>
> >> Hi Ghanshyam,
> >> as I wrote earlier in the thread, it's no problem for me to offer my
> >> bluejeans channel; let's sort out which time slot can work. I've
> >> added my timezone to the main etherpad [1] (line 53); let's do all that
> >> so that we can create the meeting invite.
> >>
> >> [1] https://etherpad.openstack.org/p/extreme-testing-contacts
> >
> >
> > Good idea!  I've added mine.  We're still missing replies from several
> > key stakeholders though (lines 62++) - probably worth getting buy-in
> > from a few more people before we organise anything.  I'm pinging a few
> > on IRC with reminders about this.
> >
>
> Thanks rasca, aspiers. I have added myself there, and yeah, good idea to
> ping the remaining people on IRC.
>
> -gmann
>


Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-15 Thread Joe Topjian
Hi Chris,

I wear a number of hats related to this discussion, so I'll add a few
points of view :)

It turns out that with
> Terraform, it's possible to tear down resources in a way that causes
> Neutron to
> leak administrator-privileged resources that can not be deleted by
> non-privileged users. In discussions with the Neutron and Octavia teams,
> it was
> strongly recommended that I move away from the Neutron LBaaSv2 API and
> instead
> adopt Octavia. Vexxhost graciously installed Octavia at my request and I
> was
> able to move past this issue.
>

Terraform hat! I want to slightly nit-pick this one since the words "leak"
and "admin-priv" can sound scary: Terraform technically wasn't doing
anything wrong. The problem was that Octavia was creating resources but not
setting ownership to the tenant. When it came time to delete the resources,
Octavia was correctly refusing, though it had incorrectly created said
resources.

From reviewing the discussion, other parties were discovering this issue
and patching in parallel to your discovery. Both xgerman and Vexxhost
jumped in to confirm the behavior seen by Terraform. Vexxhost quickly
applied the patch. It was a really awesome collaboration between yourself,
dims, xgerman, and Vexxhost.


> This highlights the first call to action for our public and private cloud
> community: encouraging the rapid migration from older, unsupported APIs to
> Octavia.
>

Operator hat! The clouds my team and I run are more compute-based. Our
users would be more excited if we increased our GPU pool than enhanced the
networking services. With that in mind, when I hear it said that "Octavia
is backwards-compatible with Neutron LBaaS v2", I think "well, cool, that
means we can keep running Neutron LBaaS v2 for now" and focus our efforts
elsewhere.

I totally get why Octavia is advertised this way and it's very much
appreciated. When I learned about Octavia, my knee-jerk reaction was "oh
no, not another load balancer" but that was remedied when I learned it's
more like LBaaSv2++. I'm sure we'll deploy Octavia some day, but it's not
our primary focus and we can still squeak by with Neutron's LBaaS v2.

If you *really* wanted us to deploy Octavia ASAP, then a migration guide
would be wonderful. I read over the "Developer / Operator Quick Start
Guide" and found it very well written! I groaned over having to build an
image but I also really appreciate the image builder script. If there can't
be pre-built images available for testing, the second-best option is that
script.


> This highlights a second call to action for the SDK and provider
> developers:
> recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding
> support for more advanced Octavia features.
>

Gophercloud hat! We've supported Octavia for a few months now, but purely
by having the load-balancer client piggyback off of the Neutron LBaaS v2
API. We made the decision this morning, coincidentally enough, to have
Octavia be a first-class service peered with Neutron rather than think of
Octavia as a Neutron/network child. This will allow Octavia to fully
flourish without worry of affecting the existing LBaaS v2 API (which we'll
still keep around separately).

Thanks,
Joe


[openstack-dev] [cyborg]Summary of Mar 14 Meeting

2018-03-15 Thread Zhipeng Huang
Hi Team,

Here is the meeting summary for our post-PTG kickoff meeting.

0. Meeting recordings: https://www.youtube.com/watch?v=6AZn0SUC_hw ,
https://www.youtube.com/watch?v=-wE2GkSibDo ,
https://www.youtube.com/watch?v=E40JOm311WI

1. PoC from Shaohe and Dolpher (
https://etherpad.openstack.org/p/cyborg-nova-poc)
(1) Agreed on the custom resource class and trait definitions (see the
illustrative example after this list)
(2) Move the claim/release design to the os-acc lib. Also change the
allocation to assignment in order to avoid confusion with Placement
functionality.
(3) Should avoid caching images as much as possible, but when necessary it
is recommended that Cyborg configure a default temp folder for the image
cache. Vendor implementations could point to that location with subfolders
for their images.
(4) Agreed that cyborg-agent will be responsible for pulling the image
and cyborg-conductor for coordination with Placement.
(5) Agreed that the programming operation should be a blocking one (if
it fails then everything fails) since, although the delay of programming
varies, generally it should not be a major concern.
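
As a purely illustrative example of what the custom definitions in (1)
could look like (these names are invented, not the PoC's actual ones):

    # Hypothetical placement resource class and trait names for an FPGA.
    ACCELERATOR_RC = 'CUSTOM_ACCELERATOR_FPGA'
    DEVICE_TRAITS = [
        'CUSTOM_FPGA_INTEL_ARRIA10',   # vendor/model trait
        'CUSTOM_PROGRAMMABLE',         # capability trait
    ]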

2. Rocky Cycle Task Assignments:

Please refer to the meeting minutes about the action items:
http://eavesdrop.openstack.org/meetings/openstack_cyborg/2018/openstack_cyborg.2018-03-14-14.07.html

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [api] APAC-friendly API-SIG meeting times

2018-03-15 Thread Gilles Dubreuil

Hi,

Any chance we can progress on this one?

I believe there are not enough participants to split the API SIG meeting
in two, and, more likely, the same lack of people across the two slots
could make it pretty inefficient. Therefore I think changing the
main meeting time to another slot might be better, but I could be wrong.


Anyway, in any case I can't make progress with a meeting in the middle
of the night for me, so I would appreciate it if we could re-activate this
discussion.


Thanks,
Gilles


On 13/12/17 02:22, Ed Leafe wrote:

Re-sending this in the hope of getting more responses. If you’re in the APAC 
region and interested in contributing to our discussions, please indicate your 
preferences on the link below.


That brought up another issue: the current meeting time for the API-SIG is 
1600UTC, which is not very convenient for APAC contributors. Gilles is in 
Australia, and so it was the middle of the night for him! As one of the goals 
for the API-SIG is to expand our audience and membership, edleafe committed to 
seeing if there is an available meeting slot at 2200UTC, which would be 
convenient for APAC, and still early enough for US people to attend. If an 
APAC-friendly meeting time would be good for you, please keep an eye out on the 
mailing list for an announcement if we are able to set that up, and then please 
attend and participate!

Looking at the current meeting schedule, there are openings at 2200UTC on
Tuesday, Wednesday, and Thursday mornings in APAC (Monday, Tuesday, and 
Wednesday afternoons in the US).

I’ve set up a doodle so that people can record their preferences:

https://doodle.com/poll/bec9gfff38zvh3ud

If you’re interested in attending API-SIG meetings, please fill out the form at 
that URL with your preferences. I’ll summarize the results at the next API-SIG 
meeting.


-- Ed Leafe








--
Gilles Dubreuil
Senior Software Engineer, Openstack DFG Integration
Mobile: +61 400 894 219
Email: gil...@redhat.com
GitHub/IRC: gildub





Re: [openstack-dev] [Tatu][Nova] Handling instance destruction

2018-03-15 Thread Pino de Candia
Hi Michael,

Thanks for your message... and thanks for your vendordata work!

About your question, Tatu listens to events on the oslo message bus.
Specifically, it reacts to compute.instance.delete.end by cleaning up
per-instance resources. It also listens to project creation and user role
assignment changes. The code is at:
https://github.com/openstack/tatu/blob/master/tatu/notifications.py
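
For anyone curious, a minimal listener along these lines looks roughly as
follows (a sketch assuming oslo.messaging; the transport URL, topic, and
payload keys are illustrative and may differ from Tatu's actual code):

    import oslo_messaging
    from oslo_config import cfg

    class DeleteEndpoint(object):
        # Only receive compute.instance.delete.end notifications.
        filter_rule = oslo_messaging.NotificationFilter(
            event_type='compute.instance.delete.end')

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            instance_id = payload.get('instance_id')
            # Clean up per-instance resources (host keys, certs, etc.) here.
            print('cleaning up after instance %s' % instance_id)

    transport = oslo_messaging.get_notification_transport(
        cfg.CONF, url='rabbit://guest:guest@127.0.0.1:5672/')
    listener = oslo_messaging.get_notification_listener(
        transport, [oslo_messaging.Target(topic='notifications')],
        [DeleteEndpoint()], executor='threading')
    listener.start()
    listener.wait()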

best,
Pino


On Thu, Mar 15, 2018 at 3:42 PM, Michael Still  wrote:

> Heya,
>
> I've just stumbled across Tatu and the design presentation [1], and I am
> wondering how you handle cleaning up instances when they are deleted given
> that nova vendordata doesn't expose a "delete event".
>
> Specifically I'm wondering if we should add support for such an event to
> vendordata somehow, given I can now think of a couple of use cases for it.
>
> Thanks,
> Michael
>
> 1: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p
>


[openstack-dev] [gate] Bug 1741275 is our top gate failure since March 13th

2018-03-15 Thread Matt Riedemann
If you've noticed any volume-related tests failing this week, it's not
just you. There is an old bug that is back where the cinder-scheduler (c-sch)
CapacityFilter is kicking out the host because there is too much going
on in the single host at once.


http://status.openstack.org/elastic-recheck/#1741275
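
For context, the CapacityFilter does roughly this kind of check
(illustrative only; not cinder's actual implementation):

    import math

    def host_passes(total_gb, free_gb, reserved_percentage, requested_gb):
        # Hosts keep a reserved slice of capacity; a request must fit in
        # what remains free beyond that reservation.
        reserved = math.floor(total_gb * reserved_percentage / 100)
        return free_gb - reserved >= requested_gb

    # Many concurrent volumes/snapshots in one run drive free_gb down
    # until the only host in the job is rejected.
    assert host_passes(100, 50, 10, 20)
    assert not host_passes(100, 25, 10, 20)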

Rechecking changes at this point isn't going to help much.

It looks like the spike started around March 13, so can people help look
for things like new tests which may have pushed up the number of
concurrently created volumes/snapshots in a single test run?


--

Thanks,

Matt



Re: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream

2018-03-15 Thread Ghanshyam Mann
On Thu, Mar 15, 2018 at 9:45 PM, Adam Spiers  wrote:
> Raoul Scarazzini  wrote:
>>
>> On 15/03/2018 01:57, Ghanshyam Mann wrote:
>>>
>>> Thanks all for starting the collaboration on this, which has been long
>>> pending, and we all want to make a start on it.
>>> SamP and I talked about it during the OPS meetup in Tokyo, and we talked
>>> about the draft plan below:
>>> - Update the Spec - https://review.openstack.org/#/c/443504/, which is
>>> almost ready as per SamP, and his team is working on it.
>>> - Start the technical debate on tooling we can use/reuse like Yardstick
>>> etc., which is more what this mailing thread is about.
>>> - Accept the new repo for Eris under QA and start at least something in
>>> the Rocky cycle.
>>> I am in for having a meeting on this, which is a really good idea. A
>>> non-IRC meeting is totally fine here. Do we have a meeting place and
>>> time set up?
>>> -gmann
>>
>>
>> Hi Ghanshyam,
>> as I wrote earlier in the thread, it's no problem for me to offer my
>> bluejeans channel; let's sort out which time slot can work. I've
>> added my timezone to the main etherpad [1] (line 53); let's do all that
>> so that we can create the meeting invite.
>>
>> [1] https://etherpad.openstack.org/p/extreme-testing-contacts
>
>
> Good idea!  I've added mine.  We're still missing replies from several
> key stakeholders though (lines 62++) - probably worth getting buy-in
> from a few more people before we organise anything.  I'm pinging a few
> on IRC with reminders about this.
>

Thanks rasca, aspiers. I have added myself there, and yeah, good idea to
ping the remaining people on IRC.

-gmann



Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2018-03-15 18:36:50 -0500:
> On 18-03-15 19:29:37, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2018-03-15 10:24:10 -0500:
> > > On 18-03-15 07:03:11, Doug Hellmann wrote:
> > > > What I Want to Do
> > > > -
> > > > 
> > > > 1. Update the requirements-check test job to change the check for
> > > >an exact match to be a check for compatibility with the
> > > >upper-constraints.txt value.
> > > > 
> > > >We would check the value for the dependency from 
> > > > upper-constraints.txt
> > > >against the range of allowed values in the project. If the
> > > >constraint version is compatible, the dependency range is OK.
> > > > 
> > > >This rule means that in order to change the dependency settings
> > > >for a project in a way that are incompatible with the constraint,
> > > >the constraint (and probably the global requirements list) would
> > > >have to be changed first in openstack/requirements. However, if
> > > >the change to the dependency is still compatible with the
> > > >constraint, no change would be needed in openstack/requirements.
> > > >For example, if the global list constraints a library to X.Y.Z
> > > >and a project lists X.Y.Z-2 as the minimum version but then needs
> > > >to raise that because it needs a feature in X.Y.Z-1, it can do
> > > >that with a single patch in-tree.
> > > > 
> > > 
> > > I think what may be better is for global-requirements to become a
> > > gathering place for projects that requirements watches to have their
> > > smallest constrained installable set defined in.
> > > 
> > > Upper-constraints has a req of foo===2.0.3
> > > Project A has a req of foo>=1.0.0,!=1.6.0
> > > Project B has a req of foo>=1.4.0
> > > Global reqs would be updated with foo>=1.4.0,!=1.6.0
> > > Project C comes along and sets foo>=2.0.0
> > > Global reqs would be updated with foo>=2.0.0
> > > 
> > > This would make global-reqs descriptive rather than prescriptive for
> > > versioning and would represent the 'true' version constraints of
> > > openstack.
> > 
> > It sounds like you're suggesting syncing in the other direction, which
> > could be useful. I think we can proceed with what I've described and
> > consider the work to build what you describe as a separate project.
> > 
> 
> Yes, this would be a follow-on thing.
> 
> > > 
> > > >We also need to change requirements-check to look at the exclusions
> > > >to ensure they all appear in the global-requirements.txt list
> > > >(the local list needs to be a subset of the global list, but
> > > >does not have to match it exactly). We can't have one project
> > > >excluding a version that others do not, because we could then
> > > >end up with a conflict with the upper constraints list that could
> > > >wedge the gate as we had happen in the past.
> > > > 
> > > 
> > > How would this happen when using constraints?  A project is not allowed
> > > to have a requirement that masks a constraint (and would be verified via
> > > the requirements-check job).
> > 
> > If project A excludes version X before the constraint list is updated to
> > use it, and then project B starts trying to depend on version X, they
> > become incompatible.
> > 
> > We need to continue to manage our declarations of incompatible versions
> > to ensure that the constraints list is a good list of versions to test
> > everything under.
> > 
> > > There's a failure mode not covered, a project could add a mask (!=) to
> > > their requirements before we update constraints.  The project that was
> > > passing the requirements-check job would then become incompatible.  This
> > > means that the requirements-check would need to be run for each
> > > changeset to catch this as soon as it happens, instead of running only
> > > on requirements changes.
> > 
> > I'm not clear on what you're describing here, but it sounds like a
> > variation of the failure modes that would be prevented if we require
> > exclusions to exist in the global list before they could be added to the
> > local list.
> > 
> 
> Yes, that'd work (require exclusions to be global before local).

OK. That's what I was trying to describe as the new rules.

> 
> > > 
> > > >We also need to verify that projects do not cap dependencies for
> > > >the same reason. Caps prevent us from advancing to versions of
> > > >dependencies that are "too new" and possibly incompatible. We
> > > >can manage caps in the global requirements list, which would
> > > >cause that list to calculate the constraints correctly.
> > > > 
> > > >This change would immediately allow all projects currently
> > > >following the global requirements lists to specify different
> > > >lower bounds from that global list, as long as those lower bounds
> > > >still allow the dependencies to be co-installable. (The upper
> > > >bounds, managed through the 

Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Matthew Thode
On 18-03-15 19:29:37, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2018-03-15 10:24:10 -0500:
> > On 18-03-15 07:03:11, Doug Hellmann wrote:
> > > What I Want to Do
> > > -
> > > 
> > > 1. Update the requirements-check test job to change the check for
> > >an exact match to be a check for compatibility with the
> > >upper-constraints.txt value.
> > > 
> > >We would check the value for the dependency from upper-constraints.txt
> > >against the range of allowed values in the project. If the
> > >constraint version is compatible, the dependency range is OK.
> > > 
> > >This rule means that in order to change the dependency settings
> > >for a project in a way that are incompatible with the constraint,
> > >the constraint (and probably the global requirements list) would
> > >have to be changed first in openstack/requirements. However, if
> > >the change to the dependency is still compatible with the
> > >constraint, no change would be needed in openstack/requirements.
> > >For example, if the global list constraints a library to X.Y.Z
> > >and a project lists X.Y.Z-2 as the minimum version but then needs
> > >to raise that because it needs a feature in X.Y.Z-1, it can do
> > >that with a single patch in-tree.
> > > 
> > 
> > I think what may be better is for global-requirements to become a
> > gathering place for projects that requirements watches to have their
> > smallest constrained installable set defined in.
> > 
> > Upper-constraints has a req of foo===2.0.3
> > Project A has a req of foo>=1.0.0,!=1.6.0
> > Project B has a req of foo>=1.4.0
> > Global reqs would be updated with foo>=1.4.0,!=1.6.0
> > Project C comes along and sets foo>=2.0.0
> > Global reqs would be updated with foo>=2.0.0
> > 
> > This would make global-reqs descriptive rather than prescriptive for
> > versioning and would represent the 'true' version constraints of
> > openstack.
> 
> It sounds like you're suggesting syncing in the other direction, which
> could be useful. I think we can proceed with what I've described and
> consider the work to build what you describe as a separate project.
> 

Yes, this would be a follow-on thing.

> > 
> > >We also need to change requirements-check to look at the exclusions
> > >to ensure they all appear in the global-requirements.txt list
> > >(the local list needs to be a subset of the global list, but
> > >does not have to match it exactly). We can't have one project
> > >excluding a version that others do not, because we could then
> > >end up with a conflict with the upper constraints list that could
> > >wedge the gate as we had happen in the past.
> > > 
> > 
> > How would this happen when using constraints?  A project is not allowed
> > to have a requirement that masks a constraint (and would be verified via
> > the requirements-check job).
> 
> If project A excludes version X before the constraint list is updated to
> use it, and then project B starts trying to depend on version X, they
> become incompatible.
> 
> We need to continue to manage our declarations of incompatible versions
> to ensure that the constraints list is a good list of versions to test
> everything under.
> 
> > There's a failure mode not covered, a project could add a mask (!=) to
> > their requirements before we update constraints.  The project that was
> > passing the requirements-check job would then become incompatible.  This
> > means that the requirements-check would need to be run for each
> > changeset to catch this as soon as it happens, instead of running only
> > on requirements changes.
> 
> I'm not clear on what you're describing here, but it sounds like a
> variation of the failure modes that would be prevented if we require
> exclusions to exist in the global list before they could be added to the
> local list.
> 

Yes, that'd work (require exclusions to be global before local).

> > 
> > >We also need to verify that projects do not cap dependencies for
> > >the same reason. Caps prevent us from advancing to versions of
> > >dependencies that are "too new" and possibly incompatible. We
> > >can manage caps in the global requirements list, which would
> > >cause that list to calculate the constraints correctly.
> > > 
> > >This change would immediately allow all projects currently
> > >following the global requirements lists to specify different
> > >lower bounds from that global list, as long as those lower bounds
> > >still allow the dependencies to be co-installable. (The upper
> > >bounds, managed through the upper-constraints.txt list, would
> > >still be built by selecting the newest compatible version because
> > >that is how pip's dependency resolver works.)
> > > 
> > > 2. We should stop syncing dependencies by turning off the
> > >propose-update-requirements job entirely.
> > > 
> > >Turning off the job will stop 

Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2018-03-15 10:24:10 -0500:
> On 18-03-15 07:03:11, Doug Hellmann wrote:
> > What I Want to Do
> > -
> > 
> > 1. Update the requirements-check test job to change the check for
> >an exact match to be a check for compatibility with the
> >upper-constraints.txt value.
> > 
> >We would check the value for the dependency from upper-constraints.txt
> >against the range of allowed values in the project. If the
> >constraint version is compatible, the dependency range is OK.
> > 
> >This rule means that in order to change the dependency settings
> >for a project in a way that are incompatible with the constraint,
> >the constraint (and probably the global requirements list) would
> >have to be changed first in openstack/requirements. However, if
> >the change to the dependency is still compatible with the
> >constraint, no change would be needed in openstack/requirements.
> >For example, if the global list constraints a library to X.Y.Z
> >and a project lists X.Y.Z-2 as the minimum version but then needs
> >to raise that because it needs a feature in X.Y.Z-1, it can do
> >that with a single patch in-tree.
> > 
> 
> I think what may be better is for global-requirements to become a
> gathering place for projects that requirements watches to have their
> smallest constrained installable set defined in.
> 
> Upper-constraints has a req of foo===2.0.3
> Project A has a req of foo>=1.0.0,!=1.6.0
> Project B has a req of foo>=1.4.0
> Global reqs would be updated with foo>=1.4.0,!=1.6.0
> Project C comes along and sets foo>=2.0.0
> Global reqs would be updated with foo>=2.0.0
> 
> This would make global-reqs descriptive rather than prescriptive for
> versioning and would represent the 'true' version constraints of
> openstack.

It sounds like you're suggesting syncing in the other direction, which
could be useful. I think we can proceed with what I've described and
consider the work to build what you describe as a separate project.
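
To make the check in item 1 concrete, it could look roughly like this (a
sketch using the packaging library; the actual requirements-check
implementation may differ):

    from packaging.requirements import Requirement

    def constraint_is_compatible(project_req, upper_constraint):
        # Does the pinned constraint satisfy the project's declared range?
        req = Requirement(project_req)        # e.g. "foo>=1.4.0,!=1.6.0"
        pin = Requirement(upper_constraint)   # e.g. "foo===2.0.3"
        pinned_version = next(iter(pin.specifier)).version
        return req.specifier.contains(pinned_version, prereleases=True)

    # Using the foo versions from the example above:
    assert constraint_is_compatible("foo>=1.4.0,!=1.6.0", "foo===2.0.3")
    # A hypothetical exclusion of the pinned version fails the check:
    assert not constraint_is_compatible("foo>=1.0.0,!=2.0.3", "foo===2.0.3")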

> 
> >We also need to change requirements-check to look at the exclusions
> >to ensure they all appear in the global-requirements.txt list
> >(the local list needs to be a subset of the global list, but
> >does not have to match it exactly). We can't have one project
> >excluding a version that others do not, because we could then
> >end up with a conflict with the upper constraints list that could
> >wedge the gate as we had happen in the past.
> > 
> 
> How would this happen when using constraints?  A project is not allowed
> to have a requirement that masks a constraint (and would be verified via
> the requirements-check job).

If project A excludes version X before the constraint list is updated to
use it, and then project B starts trying to depend on version X, they
become incompatible.

We need to continue to manage our declarations of incompatible versions
to ensure that the constraints list is a good list of versions to test
everything under.

> There's a failure mode not covered, a project could add a mask (!=) to
> their requirements before we update constraints.  The project that was
> passing the requirements-check job would then become incompatible.  This
> means that the requirements-check would need to be run for each
> changeset to catch this as soon as it happens, instead of running only
> on requirements changes.

I'm not clear on what you're describing here, but it sounds like a
variation of the failure modes that would be prevented if we require
exclusions to exist in the global list before they could be added to the
local list.
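
Concretely, that subset rule could be checked with something like this
(again just a sketch; the names are illustrative):

    from packaging.requirements import Requirement

    def exclusions(req_line):
        # Collect the versions excluded with != in a requirement line.
        return {s.version for s in Requirement(req_line).specifier
                if s.operator == '!='}

    def missing_from_global(project_req, global_req):
        # Local exclusions must be a subset of the global ones.
        return exclusions(project_req) - exclusions(global_req)

    assert missing_from_global("foo>=1.4.0,!=1.6.0",
                               "foo>=1.0.0,!=1.6.0") == set()
    assert missing_from_global("foo>=1.4.0,!=1.8.0",
                               "foo>=1.0.0,!=1.6.0") == {"1.8.0"}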

> 
> >We also need to verify that projects do not cap dependencies for
> >the same reason. Caps prevent us from advancing to versions of
> >dependencies that are "too new" and possibly incompatible. We
> >can manage caps in the global requirements list, which would
> >cause that list to calculate the constraints correctly.
> > 
> >This change would immediately allow all projects currently
> >following the global requirements lists to specify different
> >lower bounds from that global list, as long as those lower bounds
> >still allow the dependencies to be co-installable. (The upper
> >bounds, managed through the upper-constraints.txt list, would
> >still be built by selecting the newest compatible version because
> >that is how pip's dependency resolver works.)
> > 
> > 2. We should stop syncing dependencies by turning off the
> >propose-update-requirements job entirely.
> > 
> >Turning off the job will stop the bot from proposing more
> >dependency updates to projects.
> > 
> >As part of deleting the job we can also remove the "requirements"
> >case from playbooks/proposal/propose_update.sh, since it won't
> >need that logic any more. We can also remove the update-requirements
> >command from the openstack/requirements 

Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2018-03-15 14:28:49 +:
> On 2018-03-15 07:03:11 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > 1. Update the requirements-check test job to change the check for
> >an exact match to be a check for compatibility with the
> >upper-constraints.txt value.
> [...]
> 
> I thought it might be possible to even just do away with this job
> entirely, but some cursory testing shows that if you supply a
> required versionspec which excludes your constrained version of the
> same package, you'll still get the constrained version installed
> even though you indicated it wasn't in your "supported" range. Might
> be a nice patch to work on upstream in pip, making it explicitly
> error on such a mismatch (and _then_ we might be able to stop
> bothering with this job).
> 
> >We also need to change requirements-check to look at the exclusions
> >to ensure they all appear in the global-requirements.txt list
> >(the local list needs to be a subset of the global list, but
> >does not have to match it exactly). We can't have one project
> >excluding a version that others do not, because we could then
> >end up with a conflict with the upper constraints list that could
> >wedge the gate as we had happen in the past.
> [...]
> 
> At first it seems like this wouldn't end up being necessary; as long
> as you're not setting an upper bound or excluding the constrained
> version, there shouldn't be a coinstallability problem, right? Though

That second case is what this prevents. There's a race condition between
updating the requirements range (and exclusions) in a project tree and
updating the upper-constraints.txt list. The check forces those lists to
be updated in an order that avoids a case where the version in
constraints is not compatible with an app installed in an integration
test job.

> I suppose there are still a couple of potential pitfalls if we don't
> check exclusions: setting an exclusion for a future version which
> hasn't been released yet or is otherwise higher than the global
> upper constraint; situations where we need to roll back a constraint
> to an earlier version (e.g., we discover a bug in it) and some
> project has that earlier version excluded. So I suppose there is
> some merit to centrally coordinating these, making sure we can still
> pick sane constraints which work for all projects (mental
> exercise: do we also need to build a tool which can make sure that
> proposed exclusions don't eliminate all possible version numbers?).

Yes, those are all good failure cases that this prevents, too.

> >As the minimum
> >versions of dependencies diverge within projects, there will no
> >longer *be* a real global set of minimum values. Tracking a list of
> >"highest minimums", would either require rebuilding the list from the
> >settings in all projects, or requiring two patches to change the
> >minimum version of a dependency within a project.
> [...]
> 
> It's also been suggested in the past that package maintainers for
> some distributions relied on the ranges in our global requirements
> list to determine what the minimum acceptable version of a
> dependency is so they know whether/when it needs updating (fairly
> critical when you consider that within a given distro some
> dependencies may be shared by entirely unrelated software outside
> our ecosystem and may not be compatible with new versions as soon as
> we are). On the other hand, we never actually _test_ our lower
> bounds, so this was to some extent a convenient fiction anyway.

The lack of testing is an issue, but the tight coupling of those
lower bounds is a bigger problem to me. I know that distros don't
necessarily package exactly what we have in the upper-constraints.txt
list, but they're doing their own testing with those alternatives.

> 
> > 1. Set up a new tox environment called "lower-constraints" with
> >base-python set to "python3" and with the deps setting configured
> >to include a copy of the existing global lower constraints file
> >from the openstack/requirements repo.
> [...]
> 
> I didn't realize lower-constraints.txt already existed (looks like
> it got added a little over a week ago). Reviewing the log it seems

Yes, Dirk did that work.

> to have been updated based on individual projects' declared minimums
> so far which seems to make it a questionable starting point for a
> baseline. I suppose the assumption is that projects have been
> merging requirements proposals which bump their declared
> lower-bounds, though experience suggests that this doesn't happen
> consistently in projects receiving g-r updates today (they will
> either ignore the syncs or amend them to undo the lower-bounds
> changes before merging). At any rate, I suppose that's a separate
> conversation to be had, and as you say it's just a place to start
> from but projects will be able to change it to whatever values they
> want at that point.


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2018-03-15 10:05:50 -0500:
> On 18-03-15 09:45:38, Doug Hellmann wrote:
> > Excerpts from Thierry Carrez's message of 2018-03-15 14:34:50 +0100:
> > > Doug Hellmann wrote:
> > > > [...]
> > > > TL;DR
> > > > -
> > > > 
> > > > Let's stop copying exact dependency specifications into all our
> > > > projects to allow them to reflect the actual versions of things
> > > > they depend on. The constraints system in pip makes this change
> > > > safe. We still need to maintain some level of compatibility, so the
> > > > existing requirements-check job (run for changes to requirements.txt
> > > > within each repo) will change a bit rather than going away completely.
> > > > We can enable unit test jobs to verify the lower constraint settings
> > > > at the same time that we're doing the other work.
> > > 
> > > Thanks for the very detailed plan, Doug. It all makes sense to me,
> > > although I have a precision question (see below).
> > > 
> > > > [...]
> > > >We also need to change requirements-check to look at the exclusions
> > > >to ensure they all appear in the global-requirements.txt list
> > > >(the local list needs to be a subset of the global list, but
> > > >does not have to match it exactly). We can't have one project
> > > >excluding a version that others do not, because we could then
> > > >end up with a conflict with the upper constraints list that could
> > > >wedge the gate as we had happen in the past.
> > > > [...]
> > > > 2. We should stop syncing dependencies by turning off the
> > > >propose-update-requirements job entirely.
> > > > 
> > > >Turning off the job will stop the bot from proposing more
> > > >dependency updates to projects.
> > > > [...]
> > > > After these 3 steps are done, the requirements team will continue
> > > > to maintain the global-requirements.txt and upper-constraints.txt
> > > > files, as before. Adding a new dependency to a project will still
> > > > involve a review step to add it to the global list so we can monitor
> > > > licensing, duplication, python 3 support, etc. But adjusting the
> > > > version numbers once that dependency is in the global list will be
> > > > easier.
> > > 
> > > How would you set up an exclusion in that new world order ? We used to
> > > add it to the global-requirements file and the bot would automatically
> > > sync it to various consuming projects.
> > > 
> > > Now since any exclusion needs to also appear on the global file, you
> > > would push it first in the global-requirements, then to the project
> > > itself, is that correct ? In the end the global-requirements file would
> > > only contain those exclusions, right ?
> > > 
> > 
> > The first step would need to be adding it to the global-requirements.txt
> > list. After that, it would depend on how picky we want to be. If the
> > upper-constraints.txt list is successfully updated to avoid the release,
> > we might not need anything in the project. If the project wants to
> > provide detailed guidance about compatibility, then they could add the
> > exclusion. For example, if a version of oslo.config breaks cinder but
> > not nova, we might only put the exclusion in global-requirements.txt and
> > the requirements.txt for cinder.
> > 
> 
> I wonder if we'd be able to have projects decide via a flag in their tox
> or zuul config if they'd like to opt into auto-updating exclusions only.
> 

We could just change the job that does the sync and use the existing
projects.txt file, couldn't we?

Doug



Re: [openstack-dev] [nova][neutron] Rocky PTG summary - nova/neutron

2018-03-15 Thread Matt Riedemann

On 3/15/2018 3:30 PM, melanie witt wrote:
     * We don't need to block bandwidth-based scheduling support for 
doing port creation in conductor (it's not trivial), however, if nova 
creates a port on a network with a QoS policy, nova is going to have to 
munge the allocations and update placement (from nova-compute) ... so 
maybe we should block this on moving port creation to conductor after all


This is not the current direction in the spec. The spec is *large* and 
detailed, and this is one of the things being discussed in there. For 
the latest on all of it, gonna need to get caught up on the spec. But it 
won't be updated for awhile because Brother Gib is on vacation.



   * On routed provider networks:
     * On the Neutron side, this is already done: 


I still don't know how this works, because the summit videos talk about
needing port creation to happen in conductor, which we never got done in
nova.


I suggested to Miguel the other day that we/someone should set up a
multi-node CI job which does all of the configuration required here to 
make sure this stuff actually works, because it's quite complicated 
given there are three services involved in making it all work.


Hopefully anyone that's already using this successfully in production, 
or is thinking about using it, would help out with setting up the CI 
configuration for testing this out - we could do it in the nova-next job 
if we wanted to make that multinode. Maybe jroll can be guilted into 
helping since he was the one asking about this at the PTG I think.


--

Thanks,

Matt



Re: [openstack-dev] [First Contact][SIG] Weekly Meeting

2018-03-15 Thread Matthew Oliver
Sorry I missed the meeting, I've been off with family-related gastro fun,
fun times... first me and then the pregnant wife, which meant spending the
whole morning yesterday in hospital re-hydrating the wife, just as a
precaution to keep the baby safe. Good news was the toddler wasn't hit as
badly as us.

Anyway, I'll be at the next one. My apologies, this week ended up being
kind of a write-off for me :(
Matt

On Wed, Mar 14, 2018 at 10:13 AM, Kendall Nelson wrote:

> Hello!
>
> [1] has been merged and we have an agenda [2] so we are full steam ahead
> for the upcoming meeting!
>
> Our inaugural First Contact SIG meeting will be in #openstack-meeting at
> 0800 UTC Wednesday!
>
> Hope to see you all in ~9 hours!
>
> -Kendall (diablo_rojo)
>
> [1]https://review.openstack.org/#/c/549849/
> [2] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda
>


[openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-15 Thread Chris Hoge
As I've been working more in the Kubernetes community, I've been evaluating the
different points of integration between OpenStack services and the Kubernetes
application platform. One of the weaker points of integration has been in using
the OpenStack LBaaS APIs to create load balancers for Kubernetes applications.
Using this as a framing device, I'd like to begin a discussion about the
general development, deployment, and usage of the LBaaS API and how different
parts of our community can rally around and strengthen the API in the coming
year.

I'd like to note right from the beginning that this isn't a disparagement of
the fantastic work that's being done by the Octavia team, but rather an
evaluation of the current state of the API and a call to our rich community of
developers, cloud deployers, users, and app developers to help move the API to
a place where it is expected to be present and shows the same level of
consistency across deployments that we see with the Nova, Cinder, and Neutron
core APIs. The seed of this discussion comes from my efforts to enable
third-party Kubernetes cloud provider testing, as well as discussions with the
Kubernetes-SIG-OpenStack community in the #sig-openstack Slack channel in the
Kubernetes organization[0]. As a full disclaimer, my recounting of this
discussion represents my own impressions, and although I mention active
participants by
name I do not represent their views. Any mistakes I make are my own.

To set the stage, Kubernetes uses a third-party load-balancer service (either
from a Kubernetes hosted application or from a cloud-provider API) to
provide high-availability for the applications it manages. The OpenStack
provider offers a generic interface to the LBaaSv2, with an option to enable
Octavia instead of the Neutron API. The provider is built off of the
GopherCloud SDK. In my own efforts to enable testing of this provider, I'm
using Terraform to orchestrate the K8s deployment and installation. Since I
needed to use a public cloud provider to turn this automated testing over to a
third party, I chose Vexxhost, as they have been generous donors in this effort
for the the CloudLab efforts in general, and have provided tremendous support
in debugging problems I've run in to. The first major issue I ran in to was a
race condition in using the Neutron LBaaSv2 API. It turns out that with
Terraform, it's possible to tear down resources in a way that causes Neutron to
leak administrator-privileged resources that can not be deleted by
non-privileged users. In discussions with the Neutron and Octavia teams, it was
strongly recommended that I move away from the Neutron LBaaSv2 API and instead
adopt Octavia. Vexxhost graciously installed Octavia at my request and I was
able to move past this issue.

This raises a fundamental issue facing our community with regards to the load
balancer APIs: there is little consistency as to which API is deployed, and we
have installations that still deploy on the LBaaSv1 API. Indeed, the OpenStack
User Survey reported in November of 2017 that only 7% of production
installations were running Octavia[1]. Meanwhile, Neutron LBaaSv1 was deprecated
in Liberty, and Neutron LBaaSv2 was recently deprecated in the Queens release.
The lack of a migration path from v1 to v2 helped to slow adoption, and the
additional requirements for installing Octavia have also been a barrier to
increasing adoption of the supported LBaaSv2 implementation.

This highlights the first call to action for our public and private cloud
community: encouraging the rapid migration from older, unsupported APIs to
Octavia.
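
As a rough illustration of what adopting Octavia looks like from an SDK
(a sketch assuming openstacksdk's load_balancer proxy; method names can
vary by SDK version, and the cloud name and subnet ID are placeholders):

    import openstack

    conn = openstack.connect(cloud='mycloud')
    lb = conn.load_balancer.create_load_balancer(
        name='k8s-ingress',
        vip_subnet_id='REPLACE-WITH-SUBNET-UUID',
    )
    # Check provisioning status (a real client would poll until ACTIVE).
    lb = conn.load_balancer.get_load_balancer(lb.id)
    print(lb.provisioning_status)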

Because of this wide range of deployed APIs, I changed my own deployment code
to launch a user-space VM and install a non-tls-terminating Nginx load balancer
for my Kubernetes control plane[2]. I'm not the only person who has adopted an 
approach like this. In the #sig-openstack channel, Saverio Proto (zioproto)
discussed how he uses the K8s Nginx ingress load balancer[3] in favor of the
OpenStack provider load balancer. My take away from his description is that it's
preferable to use the K8s-based ingress load balancer because:

* The common LBaaSv2 API does not support TLS termination.
* You don't need to provision an additional virtual machine.
* You aren't dependent on an appropriate and supported API being available on
  your cloud.

German Eichberger (xgerman) and Adam Harwell (rm_you) from the Octavia team
were present for the discussion, and presented a strong case for using the
Octavia APIs. My take away was:

* Octavia does support TLS termination, and it's the dependence on the 
  Neutron API that removes the ability to take advantage of it.
* It provides a lot more than just a "VM with haproxy", and has stability
  guarantees.

This highlights a second call to action for the SDK and provider developers:
recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding
support for more advanced Octavia features.

As part of 

Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-15 Thread Matt Riedemann

On 3/15/2018 5:29 PM, Dan Smith wrote:

Yep, for sure. I think if there are snapshots, we have to refuse to do
the thing. My comment was about the "does nova have authority to destroy
the root volume during a rebuild" and I think it does, if
delete_on_termination=True, and if there are no snapshots.


Agree with this.
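
To spell out the rule Dan describes (illustrative only, not nova's actual
code):

    def may_destroy_root_volume(delete_on_termination, snapshot_count):
        # Refuse outright if snapshots depend on the volume.
        if snapshot_count > 0:
            return False
        # Otherwise nova only 'owns' the volume if the user opted in.
        return delete_on_termination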

Things do get a bit weird with delete_on_termination and whether nova 'owns'
the volume. delete_on_termination is False by default, even if you're
doing boot from volume with source_type of 'image' or 'snapshot' where 
nova creates the volume for you.


If a user really cared about preserving the volume, they'd probably 
pre-create it (with their favorite volume type since you can't tell nova 
the volume type to use) and pass it to nova with 
delete_on_termination=False explicitly.


Given the defaults, I'm not sure how many people are going to specify 
delete_on_termination=True, thinking about the implications, which then 
means they can't rebuild their volume-backed instance later because nova 
can't / won't delete the volume.


If we can solve this without deleting the volume at all and just 
re-image it, then it's a non-issue.


--

Thanks,

Matt



Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-15 Thread Dan Smith
> Deleting all snapshots would seem dangerous though...
>
> 1. I want to reset my instance to how it was before
> 2. I'll just do a snapshot in case I need any data in the future
> 3. rebuild
> 4. oops

Yep, for sure. I think if there are snapshots, we have to refuse to do
the thing. My comment was about the "does nova have authority to destroy
the root volume during a rebuild" and I think it does, if
delete_on_termination=True, and if there are no snapshots.
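
FWIW, that rule is simple enough to sketch (illustrative types only, not
actual nova code):

    from dataclasses import dataclass

    @dataclass
    class RootVolume:
        delete_on_termination: bool
        snapshot_count: int

    def can_reimage(vol: RootVolume) -> bool:
        # Refuse outright if snapshots exist; otherwise allow only when
        # delete_on_termination=True, i.e. nova "owns" the volume.
        return vol.snapshot_count == 0 and vol.delete_on_termination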

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Zuul project evolution

2018-03-15 Thread James E. Blair
Hi,

To date, Zuul has (perhaps rightly) often been seen as an
OpenStack-specific tool.  That's only natural since we created it
explicitly to solve problems we were having in scaling the testing of
OpenStack.  Nevertheless, it is useful far beyond OpenStack, and even
before v3, it has found adopters elsewhere.  Though as we talk to more
people about adopting it, it is becoming clear that the less experience
they have with OpenStack, the more likely they are to perceive that Zuul
isn't made for them.

At the same time, the OpenStack Foundation has identified a number of
strategic focus areas related to open infrastructure in which to invest.
CI/CD is one of these.  The OpenStack project infrastructure team, the
Zuul team, and the Foundation staff recently discussed these issues and
we feel that establishing Zuul as its own top-level project with the
support of the Foundation would benefit everyone.

It's too early in the process for me to say what all the implications
are, but here are some things I feel confident about:

* The folks supporting the Zuul running for OpenStack will continue to
  do so.  We love OpenStack and it's just way too fun running the
  world's most amazing public CI system to do anything else.

* Zuul will be independently promoted as a CI/CD tool.  We are
  establishing our own website and mailing lists to facilitate
  interacting with folks who aren't otherwise interested in OpenStack.
  You can expect to hear more about this over the coming months.

* We will remain just as open as we have been -- the "four opens" are
  intrinsic to what we do.

As a first step in this process, I have proposed a change[1] to remove
Zuul from the list of official OpenStack projects.  If you have any
questions, please don't hesitate to discuss them here, or privately
contact me or the Foundation staff.

-Jim

[1] https://review.openstack.org/552637

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] update_provider_tree design updates

2018-03-15 Thread Eric Fried
Excellent and astute questions, both of which came up in the discussion,
but I neglected to mention.  (I had to miss *something*, right?)

See inline.

On 03/15/2018 02:29 PM, Chris Dent wrote:
> On Thu, 15 Mar 2018, Eric Fried wrote:
> 
>> One of the takeaways from the Queens retrospective [1] was that we
>> should be summarizing discussions that happen in person/hangout/IRC/etc.
>> to the appropriate mailing list for the benefit of those who weren't
>> present (or paying attention :P ).  This is such a summary.
> 
> Thank you _very_ much for doing this. I've got two questions within.
> 
>> ...which we discussed earlier this week in IRC [4][5].  We concluded:
>>
>> - Compute is the source of truth for any and all traits it could ever
>> assign, which will be a subset of what's in os-traits, plus whatever
>> CUSTOM_ traits it stakes a claim to.  If an outside agent sets a trait
>> that's in that list, compute can legitimately remove it.  If an outside
>> agent removes a trait that's in that list, compute can reassert it.
> 
> Where does that list come from? Or more directly how does Compute
> stake the claim for "mine"?

One piece of the list should come from the traits associated with the
compute driver capabilities [2].  Likewise anything else in the future
that's within compute but outside of virt.  In other words, we're
declaring that it doesn't make sense for an operator to e.g. set the
"has_imagecache" trait on a compute if the compute doesn't do that
itself.  The message being that you can't turn on a capability by
setting a trait.

Beyond that, each virt driver is going to be responsible for figuring
out its own list.  Thinking this through with my PowerVM hat on, it
won't actually be as hard as it initially sounded - though it will
require more careful accounting.  Essentially, the driver is going to
ask the platform questions and get responses in its own language; then
map those responses to trait names.  So we'll be writing blocks like:

 if sys_caps.can_modify_io:
 provider_tree.add_trait(nodename, "CUSTOM_LIVE_RESIZE_CAPABLE")
 else:
 provider_tree.remove_trait(nodename, "CUSTOM_LIVE_RESIZE_CAPABLE")

And, for some subset of the "owned" traits, we should be able to
maintain a dict such that this works:

 for feature, trait in trait_map.items():
     if feature in sys_features:
         provider_tree.add_trait(nodename, trait)
     else:
         provider_tree.remove_trait(nodename, trait)

BUT what about *dynamic* features?  If I have code like (don't kill me):

 vendor_id_trait = 'CUSTOM_DEV_VENDORID_' + slugify(io_device.vendor_id)
 provider_tree.add_trait(io_dev_rp, vendor_id_trait)

...then there's no way I can know ahead of time what all those might be.
 (In particular, if I want to support new devices without updating my
code.)  I.e. I *can't* write the corresponding
provider_tree.remove_trait(...) condition.  Maybe that never becomes a
real problem because we'll never need to remove a dynamic trait.  Or
maybe we can tolerate "leakage".  Or maybe we do something
clever-but-ugly with namespacing (if
trait.startswith('CUSTOM_DEV_VENDORID_')...).  We're consciously kicking
this can down the road.
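
For illustration, the namespacing idea might look something like this
(parameter names are made up, and slugify is the same hypothetical
helper as above):

    PREFIX = 'CUSTOM_DEV_VENDORID_'

    def sync_vendor_traits(provider_tree, io_dev_rp, vendor_ids, current_traits):
        # Treat every trait under the prefix as "owned", so stale dynamic
        # traits can be removed without knowing them ahead of time.
        desired = {PREFIX + slugify(v) for v in vendor_ids}
        for trait in desired:
            provider_tree.add_trait(io_dev_rp, trait)
        for trait in current_traits:
            if trait.startswith(PREFIX) and trait not in desired:
                provider_tree.remove_trait(io_dev_rp, trait)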

And note that this "dynamic" problem is likely to be a much larger
portion (possibly all) of the domain when we're talking about aggregates.

Then there's ironic, which is currently set up to get its traits blindly
from Inspector.  So Inspector not only needs to maintain the "owned
traits" list (with all the same difficulties as above), but it must also
either a) communicate that list to ironic virt so the latter can manage
the add/remove logic; or b) own the add/remove logic and communicate the
individual traits with a +/- on them so virt knows whether to add or
remove them.

> How does an outside agent know what Compute has claimed? Presumably
> they want to know that so they can avoid wastefully doing something
> that's going to get clobbered?

Yup [11].  It was deemed that we don't need an API/CLI to discover those
lists (assuming that would even be possible).  The reasoning was
two-pronged:
- We'll document that there are traits "owned" by nova and attempts to
set/unset them will be frustrated.  You can't find out which ones they
are except when a manually-set/-unset trait magically dis-/re-appears.
- It probably won't be an issue because outside agents will be setting
traits based on some specific thing they want to do, and the
documentation for that thing will specify traits that are known not to
interfere with those in nova's wheelhouse.

> [2] https://review.openstack.org/#/c/538498/
[11]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-12.log.html#t2018-03-12T16:26:29

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [nova][cinder] Rocky PTG summary - nova/cinder

2018-03-15 Thread melanie witt
I realized I forgot to add the [cinder] tag to the subject line when I 
sent this originally. Sorry about that.


Hello all,

Here’s the PTG summary etherpad [0] for the nova/cinder session from the 
PTG, also included as a plain text export on this email.


Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-ptg-rocky-cinder-summary

*Nova/Cinder: Rocky PTG Summary

https://etherpad.openstack.org/p/nova-ptg-rocky L63

*Key topics

  * New attach flow fixes and multi-attach
* Attach mode
* Swap volume with two read/write attachments
* SHELVED_OFFLOADED and 'in-use' state in old attach flow
* Server multi-create with attaching to the same volume fails
  * Data migration for old-style attachments
  * Volume replication for in-use volumes
  * Object-ifying os-brick connection_info
  * Formatting blank encrypted volumes during creation on the cinder side
  * Volume detail show reveals the attached compute hostname for non-admins
  * Bulk volume create/attach

*Agreements and decisions

  * To handle attach mode for a multi-attach volume to several 
instances, we will change the compute API to allow the user to pass the 
attach mode so we can pass it through to cinder
* The second attachment is going to be read/write by default and if 
the user wants read-only, they have to specify it

* Spec: https://review.openstack.org/#/c/552078/
  * Swap volume with two read/write attachments could definitely 
corrupt data. However, the cinder API doesn't allow retype/migration of 
in-use multi-attach volumes, so this isn't a problem right now
  * It would be reasonable to fix SHELVED_OFFLOADED to leave the volume 
in 'reserved' state instead of 'in-use', but it's low priority
  * The bug with server multi-create and multi-attach will be fixed on 
the cinder side and we'll add a new compute API microversion to leverage 
the cinder fix

* Spec: https://review.openstack.org/#/c/552078/
  * We'll migrate old-style attachments on-the-fly when a change is 
made to a volume, such as a migration. For the rest, we'll migrate 
old-style attachments on compute startup to new-style attachments
* Compute startup data migration patch: 
https://review.openstack.org/#/c/549130/
  * For volume replication of in-use volumes, on the cinder side, we'll 
need a prototype and spec, and drivers will need to indicate the type of 
replication and what recovery on the nova side needs to be. On the nova 
side, we'll need a new API microversion for the 
os-server-external-events change (like extended volume)

* Owner: jgriffith
  * On the possibility of object-ifying connection_info in os-brick, it 
would be best to defer it until nova/neutron have worked out vif 
negotiation using os-vif

* lyarwood asked to restore https://review.openstack.org/#/c/269867/
  * On formatting blank encrypted volumes during creation, it sounded 
like we had agreement to fix it on the cinder side as they already have 
code for it. Need to double-check with the cinder team to make sure
  * For volume detail show revealing the attached compute hostname for 
non-admins, cinder will make a change to add a policy to not display the 
compute hostname for non-admins

* Note: this doesn't impact nova, but it might impact glance.
  * On bulk volume create/attach, it will be up to cinder to decide 
whether they will want to implement bulk create. In nova, we are not 
going to support bulk attach as that's a job better done by an 
orchestration system like Heat
* Note: Cinder team agreed to not support bulk create: 
https://wiki.openstack.org/wiki/CinderRockyPTGSummary#Bulk_Volume_Create.2FAttach



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tatu][Nova] Handling instance destruction

2018-03-15 Thread Michael Still
Heya,

I've just stumbled across Tatu and the design presentation [1], and I am
wondering how you handle cleaning up instances when they are deleted given
that nova vendordata doesn't expose a "delete event".

Specifically I'm wondering if we should add support for such an event to
vendordata somehow, given I can now think of a couple of use cases for it.

Thanks,
Michael

1:
https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Zi4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Rocky PTG summary - nova/neutron

2018-03-15 Thread melanie witt

Hello Stackers,

I've put together an etherpad [0] for the summary of the nova/neutron 
session from the PTG in the Croke Park Hotel breakfast area and included 
it as a plain text export on this email. Please feel free to edit or 
reply to this thread to add/correct anything I've missed.


Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-ptg-rocky-neutron-summary

*Nova/Neutron: Rocky PTG Summary

https://etherpad.openstack.org/p/nova-ptg-rocky L159

*Key topics

  * NUMA-aware vSwitches
  * Minimum bandwidth-based scheduling
  * New port binding API in Neutron
  * Filtering instances by floating IP in Nova
  * Nova bug around re-attaching network interfaces on nova-compute 
restart -- port re-plug and re-create results in loss of some 
configuration like VLANs

* https://bugs.launchpad.net/nova/+bug/1670628
  * Routed provider networks needs to move to placement aggregates

*Agreements and decisions

  * For NUMA-aware vSwitches, we'll go forward with a config-based 
solution for Rocky and deprecate it later when we have support in 
placement for the necessary inventory reporting bit (which will be 
implemented as part of the bandwidth-based scheduling work). We'll use 
dynamic attributes like "physnet_mapping_[name] = nodes" to avoid the 
JSON blob problem (Cinder and Manila do this) and thus we'll avoid 
having to deprecate any additional YAML config file or JSON blob based 
thing when the placement support is available.

* Spec: https://review.openstack.org/#/c/541290/
  * On minimum bandwidth-based scheduling:
* Neutron will create the network related RPs under the compute RP 
in placement
  * It's reasonable to require unique hostnames (across cells, the 
internet, the world) and we'll solve the host -- compute uuid issue 
separately

* Neutron will report the bandwidth inventory to placement
* On the interaction of Neutron and Nova to communicate the 
requested bandwidth per port:
  * The requested minimum bandwidth for a neutron port will be 
available in the neutron port API 
https://review.openstack.org/#/c/396297/7/specs/pike/strict-minimum-bandwidth-support.rst@68

  * The work does not depend on the new neutron port binding API
  * We'll need not just resources but traits as well on the neutron 
port and neutron should add the physnet to the port as a trait. We'll 
assume that the requested resources and traits are from a single 
provider per port
* We don't need to block bandwidth-based scheduling support for 
doing port creation in conductor (it's not trivial), however, if nova 
creates a port on a network with a QoS policy, nova is going to have to 
munge the allocations and update placement (from nova-compute) ... so 
maybe we should block this on moving port creation to conductor after all
* Nova will merge the requested bandwidth into the 
allocation_candidate request by a new request filter
* Nova will create the allocation in placement for bandwidth 
resources and the allocation uuid will be the instance uuid. Multiple 
ports with different QoS rules will be distinguishable because they will 
have allocations from different providers
* As PF/VF modeling in placement has not been done yet we can phase 
this feature to support OVS first and add support for SRIOV after the 
PF/VF modelling is done

* Nova spec: https://review.openstack.org/#/c/502306/
* Neutron spec: https://review.openstack.org/#/c/508149
  * On the new port binding API in Neutron, there is solid progress on 
the Neutron side and the Nova skeleton patches are making progress and 
depend on the Neutron patch, so some testing will be possible soon 
(still need to plumb in the libvirt driver changes)
* Spec: 
https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/neutron-new-port-binding-api.html

* Neutron patch: https://review.openstack.org/#/c/414251/
  * On the Nova bug about re-attaching network interfaces:
* There was a bug in OVS back in 2014 for which a workaround was 
added: https://github.com/openstack/nova/commit/33cc64fb817
* The bug was fixed in OVS in 2015 and is available in OVS 2.6.0 
onward: 
https://github.com/openvswitch/ovs/commit/e21c6643a02c6b446d2fbdfde366ea303b4c2730
* The old workaround in Nova (now in os-vif) was determined to be 
causing the bug, so a fix to os-vif was made which essentially reverted 
the workaround: https://review.openstack.org/#/c/546588
* We can close the bug in Nova once we have an os-vif library 
release and we depend on its version in our requirements.txt

  * On routed provider networks:
* On the Neutron side, this is already done: 
https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html

* Summit videos about routed provider networks:
  * 
https://www.openstack.org/videos/barcelona-2016/scaling-up-openstack-networking-with-routed-networks
  * 

Re: [openstack-dev] Vancouver Summit Schedule now live!

2018-03-15 Thread melanie witt

On Thu, 15 Mar 2018 19:53:16 +0000, Kendall Nelson wrote:


Jay's correct the Updates are happening. Project Updates are their own 
thing separate from the standard submission. Anne Bertucio had sent an 
email out to all PTLs (shortly before the election so it went to you and 
not Melanie) saying we had slots and you needed to request one through 
her. I **thiiink** there are spots left, but I am not positive.


Matt did request a project update slot back before the election. I've 
emailed Anne directly to find out what's going on and whether we can get 
a project update slot.


I had sent an email AFTER the election to PTLs about Project Onboarding 
and needed a response if the team was interested. I don't currently have 
Nova on the list and I have less than 5 spots left so I need to know 
ASAP if you want one.


FYI to all, I had held off on requesting an onboarding slot until after 
asking for volunteers to help me with it in today's Nova meeting so I 
could have all of the speaker names ready. But I've gone ahead and 
gotten us a slot with only me as speaker for now and I'll update Kendall 
when/if I get volunteer speakers to help. :)


-melanie





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-15 Thread Tim Bell
Deleting all snapshots would seem dangerous though...

1. I want to reset my instance to how it was before
2. I'll just do a snapshot in case I need any data in the future
3. rebuild
4. oops

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 15 March 2018 at 20:42
To: Dan Smith 
Cc: "OpenStack Development Mailing List (not for usage questions)" 
, openstack-operators 

Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume



On 03/15/2018 09:46 AM, Dan Smith wrote:
>> Rather than overload delete_on_termination, could another flag like
>> delete_on_rebuild be added?
> 
> Isn't delete_on_termination already the field we want? To me, that field
> means "nova owns this". If that is true, then we should be able to
> re-image the volume (in-place is ideal, IMHO) and if not, we just
> fail. Is that reasonable?

If that's what the flag means then it seems reasonable.  I got the 
impression from the previous discussion that not everyone was seeing it 
that way though.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Summit Schedule now live!

2018-03-15 Thread Kendall Nelson
Hey Matt and Jay :)

Jay's correct the Updates are happening. Project Updates are their own
thing separate from the standard submission. Anne Bertucio had sent an
email out to all PTLs (shortly before the election so it went to you and
not Melanie) saying we had slots and you needed to request one through her.
I **thiiink** there are spots left, but I am not positive.

I had sent an email AFTER the election to PTLs about Project Onboarding and
needed a response if the team was interested. I don't currently have Nova
on the list and I have less than 5 spots left so I need to know ASAP if you
want one.

We had been holding off on scheduling Onboarding and Updates until we could
do them mostly together (this was a request from several people over the
last two rounds of Onboarding). Anne, Kendall W and I have a call for
starting to puzzle things together into the schedule late tomorrow. Barring
any issues, all of that should be live on the schedule sometime next week.

-Kendall (diablo_rojo)

On Thu, Mar 15, 2018 at 11:20 AM Jay S Bryant  wrote:

>
>
> On 3/15/2018 1:14 PM, Matt Riedemann wrote:
> > On 3/15/2018 11:05 AM, Kendall Waters wrote:
> >> The schedule is organized by new tracks according to use cases:
> >> private & hybrid cloud, public cloud, container infrastructure, CI /
> >> CD, edge computing, HPC / GPUs / AI, and telecom / NFV. You can sort
> >> within the schedule to find sessions and speakers around each topic
> >> or open source project (with new tags!).
> >
> > I've asked about this before, but are the project updates no longer
> > happening at this summit? Maybe those are too silo'ed to fall into the
> > track buckets. I ask because I don't see anything in the schedule
> > about project updates.
> >
> Matt,
>
> Project updates are happening.  I think it is Anne and Kendall N that
> are setting those up.  An e-mail went out to PTLs about that a while
> back asking if they wanted to participate.
>
> I found it weird too that the schedule went out without those listed.  I
> know they are busy, however, trying to coordinate those and the
> onboarding sessions.
>
> Jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-15 Thread Ben Nemec



On 03/15/2018 09:46 AM, Dan Smith wrote:

Rather than overload delete_on_termination, could another flag like
delete_on_rebuild be added?


Isn't delete_on_termination already the field we want? To me, that field
means "nova owns this". If that is true, then we should be able to
re-image the volume (in-place is ideal, IMHO) and if not, we just
fail. Is that reasonable?


If that's what the flag means then it seems reasonable.  I got the 
impression from the previous discussion that not everyone was seeing it 
that way though.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] Using Octavia without neutron's extensions allowed-address-pairs and security-groups.

2018-03-15 Thread Michael Johnson
Hi Vadim,

Yes, currently the only network driver available for Octavia (called
allowed-address-pairs) uses the allowed-address-pairs feature of
neutron. This allows active/standby and VIP migration during failover
situations.

If you need to run without that feature, a non-allowed-address-pairs
driver will need to be created. This driver would not support the
active/standby load balancer topology.

Michael

On Thu, Mar 15, 2018 at 1:39 AM, Вадим Пономарев  wrote:
> Hi,
>
> I'm trying to install Octavia (from branch master) in my openstack
> installation. In my installation, neutron works with disabled extension
> allowed-address-pairs and disabled extension security-groups. This is done
> to improve performance. At the moment, I see that Octavia supports for
> neutron only the network_driver allowed_address_pairs_driver, but this
> driver requires the extensions [1]. How can I use Octavia without the
> extensions? Or is the only option to write my own driver?
>
> [1]
> https://github.com/openstack/octavia/blob/master/octavia/network/drivers/neutron/allowed_address_pairs.py#L57
>
> --
> Best regards,
> Vadim Ponomarev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] update_provider_tree design updates

2018-03-15 Thread Chris Dent

On Thu, 15 Mar 2018, Eric Fried wrote:


One of the takeaways from the Queens retrospective [1] was that we
should be summarizing discussions that happen in person/hangout/IRC/etc.
to the appropriate mailing list for the benefit of those who weren't
present (or paying attention :P ).  This is such a summary.


Thank you _very_ much for doing this. I've got two questions within.


...which we discussed earlier this week in IRC [4][5].  We concluded:

- Compute is the source of truth for any and all traits it could ever
assign, which will be a subset of what's in os-traits, plus whatever
CUSTOM_ traits it stakes a claim to.  If an outside agent sets a trait
that's in that list, compute can legitimately remove it.  If an outside
agent removes a trait that's in that list, compute can reassert it.


Where does that list come from? Or more directly how does Compute
stake the claim for "mine"?

How does an outside agent know what Compute has claimed? Presumably
they want to know that so they can avoid wastefully doing something
that's going to get clobbered?

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] core nomination for caoyuan

2018-03-15 Thread Richard Wellum
+1

On Thu, Mar 15, 2018 at 5:40 AM Martin André  wrote:

> +1
>
> On Tue, Mar 13, 2018 at 5:50 PM, Swapnil Kulkarni 
> wrote:
> > On Mon, Mar 12, 2018 at 7:36 AM, Jeffrey Zhang 
> wrote:
> >> Kolla core reviewer team,
> >>
> >> It is my pleasure to nominate caoyuan for kolla core team.
> >>
> >> caoyuan's output is fantastic over the last cycle. And he is the most
> >> active non-core contributor on Kolla project for last 180 days[1]. He
> >> focuses on configuration optimize and improve the pre-checks feature.
> >>
> >> Consider this nomination a +1 vote from me.
> >>
> >> A +1 vote indicates you are in favor of caoyuan as a candidate, a -1
> >> is a veto. Voting is open for 7 days until Mar 12th, or a unanimous
> >> response is reached or a veto vote occurs.
> >>
> >> [1] http://stackalytics.com/report/contribution/kolla-group/180
> >> --
> >> Regards,
> >> Jeffrey Zhang
> >> Blog: http://xcodest.me
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> > +1
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Ansible Disk requirements [docs] [osa]

2018-03-15 Thread Gordon, Kent S
Compute host disk requirements for OpenStack Ansible seem high in the
documentation.

I think I have used smaller compute hosts in the past.
Did something change in Queens?

https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/overview-requirements.html


Compute hosts

Disk space requirements depend on the total number of instances running on
each host and the amount of disk space allocated to each instance.

   - Compute hosts must have a minimum of 1 TB of disk space available.




-- 
Kent S. Gordon
kent.gor...@verizonwireless.com Work:682-831-3601 Mobile: 817-905-6518
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][placement] update_provider_tree design updates

2018-03-15 Thread Eric Fried
One of the takeaways from the Queens retrospective [1] was that we
should be summarizing discussions that happen in person/hangout/IRC/etc.
to the appropriate mailing list for the benefit of those who weren't
present (or paying attention :P ).  This is such a summary.

As originally conceived, ComputeDriver.update_provider_tree was intended
to be the sole source of truth for traits and aggregates on resource
providers under its purview.

Then came the idea of reflecting compute driver capabilities as traits
[2], which would be done outside of update_provider_tree, but still
within the bounds of nova compute.

Then Friday discussions at the PTG [3] brought to light the fact that we
need to honor traits set by outside agents (operators, other services
like neutron, etc.), effectively merging those with whatever the virt
driver sets.  Concerns were raised about how to reconcile overlaps, and
in particular how compute (via update_provider_tree or otherwise) can
know if a trait is safe to *remove*.  At the PTG, we agreed we need to
do this, but deferred the details.

...which we discussed earlier this week in IRC [4][5].  We concluded:

- Compute is the source of truth for any and all traits it could ever
assign, which will be a subset of what's in os-traits, plus whatever
CUSTOM_ traits it stakes a claim to.  If an outside agent sets a trait
that's in that list, compute can legitimately remove it.  If an outside
agent removes a trait that's in that list, compute can reassert it.
- Anything outside of that list of compute-owned traits is fair game for
outside agents to set/unset.  Compute won't mess with those, ever.
- Compute (and update_provider_tree) will therefore need to know what
that list comprises.  Furthermore, it must take care to use merging
logic such that it only sets/unsets traits it "owns".
- To facilitate this on the compute side, ProviderTree will get new
methods to add/remove provider traits.  (Technically, it could all be
done via update_traits [6], which replaces the entire set of traits on a
provider, but then every update_provider_tree implementation would have
to write the same kind of merging logic.)
- For operators, we'll need OSC affordance for setting/unsetting
provider traits.
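
Putting the above together, a minimal sketch of that merging logic,
using illustrative trait names (the real owned list comes from the
driver and its capability mapping; nothing below is authoritative):

    OWNED_TRAITS = {'COMPUTE_VOLUME_EXTEND', 'CUSTOM_LIVE_RESIZE_CAPABLE'}

    def sync_owned_traits(provider_tree, nodename, supported):
        # Only touch traits compute owns; traits set by outside agents
        # (anything not in OWNED_TRAITS) are never added or removed here.
        for trait in OWNED_TRAITS:
            if trait in supported:
                provider_tree.add_trait(nodename, trait)     # reassert
            else:
                provider_tree.remove_trait(nodename, trait)  # remove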

And finally:
- Everything above *also* applies to provider aggregates.  NB: Here
there be tygers.  Unlike traits, the comprehensive list of which can
conceivably be known a priori (even including CUSTOM_*s), aggregate
UUIDs are by their nature unique and likely generated dynamically.
Knowing that you "own" an aggregate UUID is relatively straightforward
when you need to set it; but to know you can/must unset it, you need to
have kept a record of having set it in the first place.  A record that
persists e.g. across compute service restarts.  Can/should virt drivers
write a file?  If so, we better make sure it works across upgrades.  And
so on.  Ugh.  For the time being, we're kinda punting on this issue
until it actually becomes a problem IRL.

And now for the moment you've all been awaiting with bated breath:
- Delta [7] to the update_provider_tree spec [8].
- Patch for ProviderTree methods to add/remove traits/aggregates [9].
- Patch modifying the update_provider_tree docstring, and adding devref
content for update_provider_tree [10].

Please feel free to email or reach out in #openstack-nova if you have
any questions.

Thanks,
efried

[1] https://etherpad.openstack.org/p/nova-queens-retrospective (L122 as
of this writing)
[2] https://review.openstack.org/#/c/538498/
[3] https://etherpad.openstack.org/p/nova-ptg-rocky (L496-502 aotw)
[4]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-12.log.html#t2018-03-12T16:02:08
[5]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-12.log.html#t2018-03-12T19:20:23
[6]
https://github.com/openstack/nova/blob/5f38500df6a8e1665b968c3e98b804e0fdfefc63/nova/compute/provider_tree.py#L494
[7] https://review.openstack.org/552122
[8]
http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/update-provider-tree.html
[9] https://review.openstack.org/553475
[10] https://review.openstack.org/553476


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Summit Schedule now live!

2018-03-15 Thread Jay S Bryant



On 3/15/2018 1:14 PM, Matt Riedemann wrote:

On 3/15/2018 11:05 AM, Kendall Waters wrote:
The schedule is organized by new tracks according to use cases: 
private & hybrid cloud, public cloud, container infrastructure, CI / 
CD, edge computing, HPC / GPUs / AI, and telecom / NFV. You can sort 
within the schedule to find sessions and speakers around each topic 
or open source project (with new tags!).


I've asked about this before, but are the project updates no longer 
happening at this summit? Maybe those are too silo'ed to fall into the 
track buckets. I ask because I don't see anything in the schedule 
about project updates.



Matt,

Project updates are happening.  I think it is Anne and Kendall N that 
are setting those up.  An e-mail went out to PTLs about that a while 
back asking if they wanted to participate.


I found it weird too that the schedule went out without those listed.  I 
know they are busy, however, trying to coordinate those and the 
onboarding sessions.


Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Vancouver Summit Schedule now live!

2018-03-15 Thread Matt Riedemann

On 3/15/2018 11:05 AM, Kendall Waters wrote:
The schedule is organized by new tracks according to use cases: private 
& hybrid cloud, public cloud, container infrastructure, CI / CD, edge 
computing, HPC / GPUs / AI, and telecom / NFV. You can sort within the 
schedule to find sessions and speakers around each topic or open source 
project (with new tags!).


I've asked about this before, but are the project updates no longer 
happening at this summit? Maybe those are too silo'ed to fall into the 
track buckets. I ask because I don't see anything in the schedule about 
project updates.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-03-15 Thread Chris Dent


Greetings OpenStack community,

A rousing good time at the API-SIG meeting today. We opened with some 
discussion on what might be missing from the Methods [7] section of the HTTP 
guidelines. At the PTG we had discussed that perhaps we needed more info on 
which methods were appropriate when. It turns out that what we probably need is 
better discoverability, so we're going to work on that but at the same time do 
a general review of that entire page.

We then talked about microversions a bit (because it wouldn't be an API-SIG 
without them). There's an in-progress history of microversions document (linked 
below) that we need to decide if we'll revive. If you have a strong opinion, 
let us know.

And finally we explored the options for how or if Neutron can cleanly resolve the 
handling of invalid query parameters. This was raised a while back in an email thread 
[8]. It's generally a good idea not to break existing client code, but what if that 
client code is itself broken? The next step will be to make the choice configurable. 
Neutron doesn't support microversions so "throw another microversion at it" 
won't work.

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week.

# Guidelines Currently Under Review [3]

* Add guidance on needing cache-control headers
  https://review.openstack.org/550468

* Add guideline on exposing microversions in SDKs
  https://review.openstack.org/#/c/532814/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] 
http://specs.openstack.org/openstack/api-wg/guidelines/http.html#http-methods
[8] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128023.html

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Retiring openstack-doc-specs-core

2018-03-15 Thread Petr Kovar
Hi all,

For historical reasons, the docs team maintains a separate core group for
the docs-specs repo. With the new docs processes in place, we agreed at the
Rocky PTG to further simplify the docs group setup and retire
openstack-doc-specs-core by removing the existing members and adding
openstack-doc-core as a group member. 

That way, we will only have one core group, which better reflects the
current status of the team. Would there be any objections to this?

The current openstack-doc-specs-core membership can be found here:

https://review.openstack.org/#/admin/groups/384,members

Thanks,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] TLS by default

2018-03-15 Thread Dmitry Tantsur

On 03/15/2018 12:51 AM, Julia Kreger wrote:

On Wed, Mar 14, 2018 at 4:52 AM, Dmitry Tantsur  wrote:

Just to clarify: only for public endpoints, right? I don't think e.g.
ironic-python-agent can talk to self-signed certificates yet.




For what it is worth, it is possible for IPA to speak to a self-signed
certificate, although it requires injecting the signing private CA
certificate into the ramdisk or iso image that is being used. There
are a few other options that can be implemented, but those may also
lower overall security posture.


Yep, that's the problem.

We can quite easily make IPA talk to custom https.

We cannot securely make IPA expose an https endpoint without using virtual media 
(not supported by tripleo, vendor-specific).


We cannot (IIUC) make iPXE use https with custom certificates without rebuilding 
the firmware from source.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] New release for Pike is overdue

2018-03-15 Thread Miguel Lavalle
I just +1ed Kuba's patch

On Thu, Mar 15, 2018 at 10:49 AM, Jakub Libosvar 
wrote:

> Thanks for notice. I sent a patch to request a new release:
> https://review.openstack.org/#/c/553447/
>
> Jakub
>
> On 15/03/2018 11:28, Jens Harbott wrote:
> > The last neutron release for Pike has been made in November, a lot of
> > bug fixes have made it into the stable/pike branch, can we please get
> > a fresh release for it soon?
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Edge-computing] [FEMDC] Brainstorming regarding the Vancouver Forum

2018-03-15 Thread Paul-Andre Raymond
I have added a couple of points and links.

 
Paul-André
--
 
On 3/15/18, 9:46 AM, "lebre.adr...@free.fr"  wrote:

Hi all, 

I just created an FEMDC etherpad following Melvin's email regarding the 
next Forum in Vancouver. 
Please do not hesitate to propose ideas for sessions at the forum : 
https://wiki.openstack.org/wiki/Forum/Vancouver2018


++
Ad_ri3n_

___
Edge-computing mailing list
edge-comput...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Vancouver Summit Schedule now live!

2018-03-15 Thread Kendall Waters
The schedule is now live for the Vancouver Summit! Check out the 100+ sessions, 
demos, workshops and tutorials that will be featured at the Vancouver Summit, 
May 21-24. 

What’s New?
As infrastructure has evolved, so has the Summit. In addition to OpenStack 
features and operations, you'll find a strong focus on cross-project 
integration and addressing new use cases like edge computing and machine 
learning. Sessions will feature user stories from the likes of JPMorgan Chase, 
Progressive Insurance, Target, Wells Fargo, and more, as well as the 
integration and use of projects like Kata Containers, Kubernetes, Istio, Ceph, 
ONAP, Ansible, and many others. 

The schedule is organized by new tracks according to use cases: private & 
hybrid cloud, public cloud, container infrastructure, CI / CD, edge computing, 
HPC / GPUs / AI, and telecom / NFV. You can sort within the schedule to find 
sessions and speakers around each topic or open source project (with new 
tags!). 

Please check out this Superuser article and help us promote it via social 
media: http://superuser.openstack.org/articles/whats-new-vancouver-summit/ 


Submit Sessions to the Forum
The Technical Committee and User Committee are now collecting sessions for the 
Forum at the Vancouver Summit. If you have a project-specific session, 
strategic community-wide discussion, or cross-project session that you would 
like to propose, add links to the etherpads found at the Vancouver Forum Wiki 
(https://wiki.openstack.org/wiki/Forum/Vancouver2018).

Time to Register
The early bird deadline is approaching, so please register at 
https://www.openstack.org/summit/vancouver-2018/ before prices increase on 
April 4 at 11:59pm PT.
 
For speakers whose sessions were accepted, look for an email from 
speakersupp...@openstack.org for next steps on registration. ATCs and AUCs 
should also check their inbox for discount codes.

Questions? Email sum...@openstack.org

Cheers, 
Kendall 


Kendall Waters
OpenStack Marketing
kend...@openstack.org



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] New release for Pike is overdue

2018-03-15 Thread Jakub Libosvar
Thanks for notice. I sent a patch to request a new release:
https://review.openstack.org/#/c/553447/

Jakub

On 15/03/2018 11:28, Jens Harbott wrote:
> The last neutron release for Pike has been made in November, a lot of
> bug fixes have made it into the stable/pike branch, can we please get
> a fresh release for it soon?
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] OSSN-0076 brainstorming session Friday 16 March

2018-03-15 Thread Brian Rosmaita
I'm working on a spec to alleviate OSSN-0076 that would follow up the
OSSN-0075 proposal [0], but have run into some problems.  It would be
helpful to lay them out and get some feedback.  I'll be in the
#openstack-glance channel at 16:30 UTC tomorrow (Friday) to discuss.
Will take less than 1/2 hour.

cheers,
brian

[0] https://review.openstack.org/#/c/468179/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Matthew Thode
On 18-03-15 07:03:11, Doug Hellmann wrote:
> What I Want to Do
> -
> 
> 1. Update the requirements-check test job to change the check for
>an exact match to be a check for compatibility with the
>upper-constraints.txt value.
> 
>We would check the value for the dependency from upper-constraints.txt
>against the range of allowed values in the project. If the
>constraint version is compatible, the dependency range is OK.
> 
>This rule means that in order to change the dependency settings
>for a project in a way that are incompatible with the constraint,
>the constraint (and probably the global requirements list) would
>have to be changed first in openstack/requirements. However, if
>the change to the dependency is still compatible with the
>constraint, no change would be needed in openstack/requirements.
>For example, if the global list constraints a library to X.Y.Z
>and a project lists X.Y.Z-2 as the minimum version but then needs
>to raise that because it needs a feature in X.Y.Z-1, it can do
>that with a single patch in-tree.
> 

I think what may be better is for global-requirements to become a
gathering place in which the projects that requirements watches have
their smallest constrained installable set defined.

Upper-constraints has a req of foo===2.0.3
Project A has a req of foo>=1.0.0,!=1.6.0
Project B has a req of foo>=1.4.0
Global reqs would be updated with foo>=1.4.0,!=1.6.0
Project C comes along and sets foo>=2.0.0
Global reqs would be updated with foo>=2.0.0

This would make global-reqs descriptive rather than prescriptive for
versioning and would represent the 'true' version constraints of
OpenStack.
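
For illustration, that merge could be computed with the 'packaging'
library (a rough sketch of the idea, not a proposed implementation):

    from packaging.requirements import Requirement
    from packaging.specifiers import SpecifierSet

    project_reqs = [
        Requirement("foo>=1.0.0,!=1.6.0"),  # Project A
        Requirement("foo>=1.4.0"),          # Project B
    ]

    merged = SpecifierSet()
    for req in project_reqs:
        merged &= req.specifier  # intersect the per-project ranges

    # merged now plays the descriptive role, e.g. !=1.6.0,>=1.0.0,>=1.4.0
    print("2.0.3" in merged)  # True: the upper-constraints pin still fits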

>We also need to change requirements-check to look at the exclusions
>to ensure they all appear in the global-requirements.txt list
>(the local list needs to be a subset of the global list, but
>does not have to match it exactly). We can't have one project
>excluding a version that others do not, because we could then
>end up with a conflict with the upper constraints list that could
>wedge the gate as we had happen in the past.
> 

How would this happen when using constraints?  A project is not allowed
to have a requirement that masks a constraint (and would be verified via
the requirements-check job).

There's a failure mode not covered: a project could add a mask (!=) to
its requirements before we update constraints.  The project that was
passing the requirements-check job would then become incompatible.  This
means that the requirements-check would need to be run for each
changeset to catch this as soon as it happens, instead of running only
on requirements changes.
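
FWIW, the core compatibility test from point 1 is cheap; here's a rough
sketch using the 'packaging' library (illustrative only):

    from packaging.requirements import Requirement

    def constraint_compatible(req_line, pinned_version):
        # True if the upper-constraints pin falls inside the range that
        # the project's requirements.txt allows.
        return pinned_version in Requirement(req_line).specifier

    constraint_compatible("oslo.config>=5.1.0,!=5.2.0", "5.2.1")  # True
    constraint_compatible("oslo.config>=5.1.0,!=5.2.1", "5.2.1")  # False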

>We also need to verify that projects do not cap dependencies for
>the same reason. Caps prevent us from advancing to versions of
>dependencies that are "too new" and possibly incompatible. We
>can manage caps in the global requirements list, which would
>cause that list to calculate the constraints correctly.
> 
>This change would immediately allow all projects currently
>following the global requirements lists to specify different
>lower bounds from that global list, as long as those lower bounds
>still allow the dependencies to be co-installable. (The upper
>bounds, managed through the upper-constraints.txt list, would
>still be built by selecting the newest compatible version because
>that is how pip's dependency resolver works.)
> 
> 2. We should stop syncing dependencies by turning off the
>propose-update-requirements job entirely.
> 
>Turning off the job will stop the bot from proposing more
>dependency updates to projects.
> 
>As part of deleting the job we can also remove the "requirements"
>case from playbooks/proposal/propose_update.sh, since it won't
>need that logic any more. We can also remove the update-requirements
>command from the openstack/requirements repository, since that
>is the tool that generates the updated list and it won't be
>needed if we aren't proposing updates any more.
> 
> 3. Remove the minimum specifications from the global requirements
>list to make clear that the global list is no longer expressing
>minimums.
> 
>This clean-up step has been a bit more controversial among the
>requirements team, but I think it is a key piece. As the minimum
>versions of dependencies diverge within projects, there will no
>longer *be* a real global set of minimum values. Tracking a list of
>"highest minimums", would either require rebuilding the list from the
>settings in all projects, or requiring two patches to change the
>minimum version of a dependency within a project.
> 
>Maintaining a global list of minimums also implies that we
>consider it OK to run OpenStack as a whole with that list. This
>message conflicts with the message we've been sending about the
>upper 

Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Matthew Thode
On 18-03-15 09:45:38, Doug Hellmann wrote:
> Excerpts from Thierry Carrez's message of 2018-03-15 14:34:50 +0100:
> > Doug Hellmann wrote:
> > > [...]
> > > TL;DR
> > > -
> > > 
> > > Let's stop copying exact dependency specifications into all our
> > > projects to allow them to reflect the actual versions of things
> > > they depend on. The constraints system in pip makes this change
> > > safe. We still need to maintain some level of compatibility, so the
> > > existing requirements-check job (run for changes to requirements.txt
> > > within each repo) will change a bit rather than going away completely.
> > > We can enable unit test jobs to verify the lower constraint settings
> > > at the same time that we're doing the other work.
> > 
> > Thanks for the very detailed plan, Doug. It all makes sense to me,
> > although I have a precision question (see below).
> > 
> > > [...]
> > >We also need to change requirements-check to look at the exclusions
> > >to ensure they all appear in the global-requirements.txt list
> > >(the local list needs to be a subset of the global list, but
> > >does not have to match it exactly). We can't have one project
> > >excluding a version that others do not, because we could then
> > >end up with a conflict with the upper constraints list that could
> > >wedge the gate as we had happen in the past.
> > > [...]
> > > 2. We should stop syncing dependencies by turning off the
> > >propose-update-requirements job entirely.
> > > 
> > >Turning off the job will stop the bot from proposing more
> > >dependency updates to projects.
> > > [...]
> > > After these 3 steps are done, the requirements team will continue
> > > to maintain the global-requirements.txt and upper-constraints.txt
> > > files, as before. Adding a new dependency to a project will still
> > > involve a review step to add it to the global list so we can monitor
> > > licensing, duplication, python 3 support, etc. But adjusting the
> > > version numbers once that dependency is in the global list will be
> > > easier.
> > 
> > How would you set up an exclusion in that new world order ? We used to
> > add it to the global-requirements file and the bot would automatically
> > sync it to various consuming projects.
> > 
> > Now since any exclusion needs to also appear on the global file, you
> > would push it first in the global-requirements, then to the project
> > itself, is that correct ? In the end the global-requirements file would
> > only contain those exclusions, right ?
> > 
> 
> The first step would need to be adding it to the global-requirements.txt
> list. After that, it would depend on how picky we want to be. If the
> upper-constraints.txt list is successfully updated to avoid the release,
> we might not need anything in the project. If the project wants to
> provide detailed guidance about compatibility, then they could add the
> exclusion. For example, if a version of oslo.config breaks cinder but
> not nova, we might only put the exclusion in global-requirements.txt and
> the requirements.txt for cinder.
> 

I wonder if we'd be able to have projects decide via a flag in their tox
or zuul config if they'd like to opt into auto-updating exclusions only.

-- 
Matthew Thode (prometheanfire)


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-15 Thread Dan Smith
> Rather than overload delete_on_termination, could another flag like
> delete_on_rebuild be added?

Isn't delete_on_termination already the field we want? To me, that field
means "nova owns this". If that is true, then we should be able to
re-image the volume (in-place is ideal, IMHO) and if not, we just
fail. Is that reasonable?

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

2018-03-15 Thread Jeremy Stanley
On 2018-03-15 14:16:30 + (+), arkady.kanev...@dell.com wrote:
[...]
> This can be submitted anonymously if you like.

Anonymous submissions got disabled (and the existing set of data
from them deleted). See the announcement from a month ago for
details:

http://lists.openstack.org/pipermail/openstack-dev/2018-February/127103.html

-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-15 Thread Ben Nemec



On 03/14/2018 08:46 AM, Matt Riedemann wrote:

On 3/14/2018 3:42 AM, 李杰 wrote:


             This is the spec about rebuilding an instance booted from
volume. In the spec, there is a question about whether we should
delete the old root volume. Anyone who is interested in boot from
volume can help to review this. Any suggestion is welcome. Thank you!

       The link is here.
       Re: the rebuild spec: https://review.openstack.org/#/c/532407/


Copying the operators list and giving some more context.

This spec is proposing to add support for rebuild with a new image for 
volume-backed servers, which today is just a 400 failure in the API 
since the compute doesn't support that scenario.


With the proposed solution, the backing root volume would be deleted and 
a new volume would be created from the new image, similar to how boot 
from volume works.


The question raised in the spec is whether or not nova should delete the 
root volume even if its delete_on_termination flag is set to False. The 
semantics get a bit weird here since that flag was not meant for this 
scenario, it's meant to be used when deleting the server to which the 
volume is attached. Rebuilding a server is not deleting it, but we would 
need to replace the root volume, so what do we do with the volume we're 
replacing?


Do we say that delete_on_termination only applies to deleting a server 
and not rebuild and therefore nova can delete the root volume during a 
rebuild?


If we don't delete the volume during rebuild, we could end up leaving a 
lot of volumes lying around that the user then has to clean up, 
otherwise they'll eventually go over quota.


We need user (and operator) feedback on this issue and what they would 
expect to happen.




As a developer who would also be a user of this functionality, I don't 
want the volume left around after rebuild.  To me the data loss of the 
root disk is inherent in the rebuild operation.  I guess the one gotcha 
might be that I always create the root volume as part of the initial 
instance creation; if someone manually created a volume and then pointed 
Nova at it, there's probably a better chance they don't want Nova to 
delete it on them.


Rather than overload delete_on_termination, could another flag like 
delete_on_rebuild be added?


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Jeremy Stanley
On 2018-03-15 07:03:11 -0400 (-0400), Doug Hellmann wrote:
[...]
> 1. Update the requirements-check test job to change the check for
>an exact match to be a check for compatibility with the
>upper-constraints.txt value.
[...]

I thought it might be possible to even just do away with this job
entirely, but some cursory testing shows that if you supply a
required versionspec which excludes your constrained version of the
same package, you'll still get the constrained version installed
even though you indicated it wasn't in your "supported" range. Might
be a nice patch to work on upstream in pip, making it explicitly
error on such a mismatch (and _then_ we might be able to stop
bothering with this job).
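(For illustration, the mismatch I tested looks roughly like this; "foo"
is a stand-in for any constrained package:)

  $ echo 'foo>=1.0,!=1.2.0' > requirements.txt   # 1.2.0 declared unsupported
  $ echo 'foo==1.2.0' > upper-constraints.txt    # but pinned by constraints
  $ pip install -c upper-constraints.txt -r requirements.txt
  # pip installs foo 1.2.0 anyway instead of erroring on the conflict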

>We also need to change requirements-check to look at the exclusions
>to ensure they all appear in the global-requirements.txt list
>(the local list needs to be a subset of the global list, but
>does not have to match it exactly). We can't have one project
>excluding a version that others do not, because we could then
>end up with a conflict with the upper constraints list that could
>wedge the gate as we had happen in the past.
[...]

At first it seems like this wouldn't end up being necessary; as long
as you're not setting an upper bound or excluding the constrained
version, there shouldn't be a coinstallability problem right? Though
I suppose there are still a couple of potential pitfalls if we don't
check exclusions: setting an exclusion for a future version which
hasn't been released yet or is otherwise higher than the global
upper constraint; situations where we need to roll back a constraint
to an earlier version (e.g., we discover a bug in it) and some
project has that earlier version excluded. So I suppose there is
some merit to centrally coordinating these, making sure we can still
pick sane constraints which work for all projects (mental
exercise: do we also need to build a tool which can make sure that
proposed exclusions don't eliminate all possible version numbers?).
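(For that mental exercise, a small sanity check could be built on the
"packaging" library; a sketch with made-up version numbers:)

  $ pip install packaging
  $ python -c "from packaging.specifiers import SpecifierSet; \
      print(list(SpecifierSet('>=1.0,!=1.0.*,!=1.1.*').filter( \
      ['1.0.0', '1.0.2', '1.1.0'])) or 'no installable version left')"
  no installable version left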

>As the minimum
>versions of dependencies diverge within projects, there will no
>longer *be* a real global set of minimum values. Tracking a list of
>"highest minimums", would either require rebuilding the list from the
>settings in all projects, or requiring two patches to change the
>minimum version of a dependency within a project.
[...]

It's also been suggested in the past that package maintainers for
some distributions relied on the ranges in our global requirements
list to determine what the minimum acceptable version of a
dependency is so they know whether/when it needs updating (fairly
critical when you consider that within a given distro some
dependencies may be shared by entirely unrelated software outside
our ecosystem and may not be compatible with new versions as soon as
we are). On the other hand, we never actually _test_ our lower
bounds, so this was to some extent a convenient fiction anyway.

> 1. Set up a new tox environment called "lower-constraints" with
>base-python set to "python3" and with the deps setting configured
>to include a copy of the existing global lower constraints file
>from the openstack/requirements repo.
[...]

I didn't realize lower-constraints.txt already existed (looks like
it got added a little over a week ago). Reviewing the log it seems
to have been updated based on individual projects' declared minimums
so far which seems to make it a questionable starting point for a
baseline. I suppose the assumption is that projects have been
merging requirements proposals which bump their declared
lower-bounds, though experience suggests that this doesn't happen
consistently in projects receiving g-r updates today (they will
either ignore the syncs or amend them to undo the lower-bounds
changes before merging). At any rate, I suppose that's a separate
conversation to be had, and as you say it's just a place to start
from but projects will be able to change it to whatever values they
want at that point.
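
(For reference, my reading of the proposed environment is roughly the
following tox.ini stanza; a sketch, exact deps will vary per project:)

  [testenv:lower-constraints]
  basepython = python3
  deps =
    -c{toxinidir}/lower-constraints.txt
    -r{toxinidir}/test-requirements.txt
    -r{toxinidir}/requirements.txt

  $ tox -e lower-constraints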
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

2018-03-15 Thread Arkady.Kanevsky
Greg,
For compliance it is sufficient to run the tests at 
https://refstack.openstack.org/#/guidelines.
But it is good if you can also submit a full Tempest run.
That is used internally by refstack to identify which tests to include in the 
future.
This can be submitted anonymously if you like.
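For example, a guideline-scoped run with refstack-client could look
something like this (a sketch; double-check the test-list URL and its
parameters against the RefStack docs):

  $ refstack-client test -c ~/tempest.conf -v \
      --test-list "https://refstack.openstack.org/api/v1/guidelines/2017.09/tests?target=platform&type=required"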
Thanks,
Arkady

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Thursday, March 15, 2018 9:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [refstack] Full list of API Tests versus 
'OpenStack Powered' Tests

Re-posting this question to ‘OPENSTACK REFSTACK’,
Any guidance on what level of compliance is required to qualify for the 
OpenStack Logo ( https://www.openstack.org/brand/interop/ ),
See questions below.

Greg.

From: Greg Waines
Date: Monday, February 26, 2018 at 6:22 PM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack 
Powered' Tests


I have a commercial OpenStack product that I would like to claim compliance 
with RefStack.
· Is it sufficient to claim compliance with only the “OpenStack Powered 
  Platform” tests?
  o i.e. https://refstack.openstack.org/#/guidelines
  o i.e. the ~350-ish compute + object-storage tests
· OR
· Should I be using the COMPLETE API test set?
  o i.e. the > 1,000 tests from various domains that get run if you do not 
    specify a test-list

Greg.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

2018-03-15 Thread Waines, Greg
Re-posting this question to ‘OPENSTACK REFSTACK’,
Any guidance on what level of compliance is required to qualify for the 
OpenStack Logo ( https://www.openstack.org/brand/interop/ ),
See questions below.

Greg.

From: Greg Waines 
Date: Monday, February 26, 2018 at 6:22 PM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack 
Powered' Tests


I have a commercial OpenStack product that I would like to claim compliance 
with RefStack.
· Is it sufficient to claim compliance with only the “OpenStack Powered 
  Platform” tests?
  o i.e. https://refstack.openstack.org/#/guidelines
  o i.e. the ~350-ish compute + object-storage tests
· OR
· Should I be using the COMPLETE API test set?
  o i.e. the > 1,000 tests from various domains that get run if you do not 
    specify a test-list

Greg.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FEMDC] Brainstorming regarding the Vancouver Forum

2018-03-15 Thread lebre . adrien
Hi all, 

I just created an FEMDC etherpad following Melvin's email regarding the 
next Forum in Vancouver. 
Please do not hesitate to propose ideas for sessions at the forum: 
https://wiki.openstack.org/wiki/Forum/Vancouver2018


++
Ad_ri3n_

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2018-03-15 14:34:50 +0100:
> Doug Hellmann wrote:
> > [...]
> > TL;DR
> > -
> > 
> > Let's stop copying exact dependency specifications into all our
> > projects to allow them to reflect the actual versions of things
> > they depend on. The constraints system in pip makes this change
> > safe. We still need to maintain some level of compatibility, so the
> > existing requirements-check job (run for changes to requirements.txt
> > within each repo) will change a bit rather than going away completely.
> > We can enable unit test jobs to verify the lower constraint settings
> > at the same time that we're doing the other work.
> 
> Thanks for the very detailed plan, Doug. It all makes sense to me,
> although I have a precision question (see below).
> 
> > [...]
> >We also need to change requirements-check to look at the exclusions
> >to ensure they all appear in the global-requirements.txt list
> >(the local list needs to be a subset of the global list, but
> >does not have to match it exactly). We can't have one project
> >excluding a version that others do not, because we could then
> >end up with a conflict with the upper constraints list that could
> >wedge the gate as we had happen in the past.
> > [...]
> > 2. We should stop syncing dependencies by turning off the
> >propose-update-requirements job entirely.
> > 
> >Turning off the job will stop the bot from proposing more
> >dependency updates to projects.
> > [...]
> > After these 3 steps are done, the requirements team will continue
> > to maintain the global-requirements.txt and upper-constraints.txt
> > files, as before. Adding a new dependency to a project will still
> > involve a review step to add it to the global list so we can monitor
> > licensing, duplication, python 3 support, etc. But adjusting the
> > version numbers once that dependency is in the global list will be
> > easier.
> 
> How would you set up an exclusion in that new world order ? We used to
> add it to the global-requirements file and the bot would automatically
> sync it to various consuming projects.
> 
> Now since any exclusion needs to also appear on the global file, you
> would push it first in the global-requirements, then to the project
> itself, is that correct ? In the end the global-requirements file would
> only contain those exclusions, right ?
> 

The first step would need to be adding it to the global-requirements.txt
list. After that, it would depend on how picky we want to be. If the
upper-constraints.txt list is successfully updated to avoid the release,
we might not need anything in the project. If the project wants to
provide detailed guidance about compatibility, then they could add the
exclusion. For example, if a version of oslo.config breaks cinder but
not nova, we might only put the exclusion in global-requirements.txt and
the requirements.txt for cinder.
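
Concretely, that example could end up looking something like this
(a sketch with made-up version numbers):

  # openstack/requirements global-requirements.txt
  oslo.config>=5.1.0,!=5.2.1

  # cinder requirements.txt (broken by 5.2.1, so it carries the exclusion)
  oslo.config>=5.1.0,!=5.2.1

  # nova requirements.txt (not affected, no exclusion needed)
  oslo.config>=5.1.0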

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] about rebuildinstance booted from volume

2018-03-15 Thread Matt Riedemann

On 3/15/2018 7:27 AM, 李杰 wrote:
It seems that we can only delete the snapshots of the original volume 
first, and then delete the original volume, if the original volume has 
snapshots.


Nova won't be deleting the volume snapshots just to delete the volume 
during a rebuild.


If we decide to delete the root volume during rebuild 
(delete_on_termination=True *or* we decide to not consider that flag 
during rebuild), the rebuild operation will likely have to handle the 
scenario that the volume has snapshots and can't be deleted.
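
(For anyone who wants to see the underlying cinder behavior, a sketch
with placeholder IDs; the exact error text may differ by release:)

  $ cinder snapshot-create <volume-id>
  $ cinder delete <volume-id>
  # cinder refuses the delete because the volume still has
  # dependent snapshots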


Which opens up another question: if we hit that scenario, what should 
the rebuild operation do? Log a warning and just detach the volume but 
not delete it and continue, or fail?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Thierry Carrez
Doug Hellmann wrote:
> [...]
> TL;DR
> -
> 
> Let's stop copying exact dependency specifications into all our
> projects to allow them to reflect the actual versions of things
> they depend on. The constraints system in pip makes this change
> safe. We still need to maintain some level of compatibility, so the
> existing requirements-check job (run for changes to requirements.txt
> within each repo) will change a bit rather than going away completely.
> We can enable unit test jobs to verify the lower constraint settings
> at the same time that we're doing the other work.

Thanks for the very detailed plan, Doug. It all makes sense to me,
although I have a precision question (see below).

> [...]
>We also need to change requirements-check to look at the exclusions
>to ensure they all appear in the global-requirements.txt list
>(the local list needs to be a subset of the global list, but
>does not have to match it exactly). We can't have one project
>excluding a version that others do not, because we could then
>end up with a conflict with the upper constraints list that could
>wedge the gate as we had happen in the past.
> [...]
> 2. We should stop syncing dependencies by turning off the
>propose-update-requirements job entirely.
> 
>Turning off the job will stop the bot from proposing more
>dependency updates to projects.
> [...]
> After these 3 steps are done, the requirements team will continue
> to maintain the global-requirements.txt and upper-constraints.txt
> files, as before. Adding a new dependency to a project will still
> involve a review step to add it to the global list so we can monitor
> licensing, duplication, python 3 support, etc. But adjusting the
> version numbers once that dependency is in the global list will be
> easier.

How would you set up an exclusion in that new world order ? We used to
add it to the global-requirements file and the bot would automatically
sync it to various consuming projects.

Now since any exclusion needs to also appear on the global file, you
would push it first in the global-requirements, then to the project
itself, is that correct ? In the end the global-requirements file would
only contain those exclusions, right ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky PTG summary - cells

2018-03-15 Thread Surya Seetharaman
I would also prefer not having to rely on reading all the cell DBs to
calculate quotas.


On Thu, Mar 15, 2018 at 3:29 AM, melanie witt  wrote:

> I would prefer not to block instance creations because of "down" cells,

++

> so maybe there is some possibility to avoid it if we can get
> "queued_for_delete" and "user_id" columns added to the instance_mappings
> table.

That seems reason enough to add them from my perspective.


Regards,
Surya.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints

2018-03-15 Thread Doug Hellmann
Excerpts from Thomas Morin's message of 2018-03-15 10:15:38 +0100:
> Hi Doug,
> 
> Doug Hellmann, 2018-03-14 23:42:
> > We keep doing lots of infra-related work to make it "easy" to do
> >  when it comes to
> > managing dependencies.  There are three ways to address the issue
> > with horizon and neutron, and none of them involve adding features
> > to pbr.
> > 
> > 1. Things that are being used like libraries need to release like
> >libraries. Real releases. With appropriate version numbers. So
> >that other things that depend on them can express valid
> > dependencies.
> > 
> > 2. Extract the relevant code into libraries and release *those*.
> > 
> > 3. Things that are not stable enough to be treated as a library
> >shouldn't be used that way. Move the things that use the
> > application
> >code as library code back into the repo with the thing that they
> >are tied to but that we don't want to (or can't) treat like a
> >library.
> 
> What about the case where there is co-development of features across
> repos ? One specific case I have in mind is the Neutron stadium where

We do that all the time with the Oslo libraries. It's not as easy as
having everything in one repo, but we manage.

> we sometimes have features in neutron repo that are worked on as a pre-
> requisite for things that will be done in a neutron-* or networking-*
> project. Another is a case for instance where we need to add in project
> X a tempest test to validate the resolution of a bug for which the fix
> actually happened in project B (and where B is not a library).

If the tempest test can't live in B because it uses part of X, then I
think X and B are really one thing and you're doing more work than you
need to be doing to keep them in separate libraries.

> My intuition is that it is not illegitimate to expect this kind of
> development workflow to be feasible; but at the same time I read your
> suggestion above as meaning that it belongs to the realm of "things we
> shouldn't be doing in the first place".  The only way I can reconcile

You read me correctly.

We install a bunch of components from source for integration tests
in devstack-gate because we want the final releases to work together.
But those things only interact via REST APIs, and don't import each
other.  The cases with neutron and horizon are different. Even the
*unit* tests of the add-ons require code from the "parent" app. That
indicates a level of coupling that is not being properly addressed by
the release model and code management practices for the parent apps.

> the two would be to conclude we should collapse all the modules in
> neutron-*/networking-* into neutron, but doing that would have quite a
> lot of side effects (yes, this is an understatement).

That's not the only way to do it. The other way would be to properly
decompose the shared code into a library and then provide *stable
APIs* so code can be consumed by the add-on modules. That will make
evolving things a little more difficult because of the stability
requirement. So it's a trade off. I think the teams involved should
make that trade off (in one direction or another), instead of
building tools to continue to avoid dealing with it.

So let's start by examining the root of the problem: Why are the things
that need to import neutron/horizon not part of the neutron/horizon
repositories in the first place?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] FFE - Feature Freeze Exception request for Routed Spine and Leaf Deployment

2018-03-15 Thread hjensas
Hi,

It has come to my attention that I missed one detail for the routed
spine and leaf support.

There is an issue with introspection and the filtering used to ensure
only specified nodes are introspected. Apparently we are still using
the iptables based PXE filtering in ironic-inspector. (I thought the new
dnsmasq based filter was the default already.)

The problem:
  When using iptables to filter on MAC addresses we won't be able to
filter PXE DHCP requests coming in via the dhcp-relay agent, e.g. the
nodes in remote L2 segments will not be filtered. So while
introspection works, we have no way to ensure that nodes we do not
intend to introspect don't end up running introspection by accident.

The solution:
  Switch to use the dnsmasq based filter available in ironic-inspector.
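
(For reference, on the inspector side the switch itself should be a
one-line config change; a sketch, assuming the driver name used by the
pxe_filter work:)

  [pxe_filter]
  driver = dnsmasq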


The question is where do we go from here?
 * Do we declare introspection unsupported for Queens when using routed
networks?
 * Can we continue the feature work, and backport something to
stable/queens that uses the dnsmasq based filter? Maybe with a
conditional to use the new filtering if, and only if, routed networks
support is enabled in the undercloud?


The work to start using the new filtering is on-going in the following
patches:

puppet-ironic: https://review.openstack.org/523922
puppet-tripleo: https://review.openstack.org/525203/
instack-undercloud: https://review.openstack.org/523944/



This one for overcloud and containers based undercloud. (This would not
be a backport requirement.)
https://review.openstack.org/523909/


Best Regards
Harald Jensås

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream

2018-03-15 Thread Adam Spiers

Raoul Scarazzini  wrote:

On 15/03/2018 01:57, Ghanshyam Mann wrote:

Thanks all for starting the collaboration on this, which is long pending
and which we all want to get started on.
Myself and SamP talked about it during the OPS meetup in Tokyo and we talked
about the below draft plan:
- Update the spec - https://review.openstack.org/#/c/443504/ - which is
almost ready as per SamP, and his team is working on that.
- Start the technical debate on tooling we can use/reuse, like Yardstick
etc., which is roughly what this mailing thread is about.
- Accept the new repo for Eris under QA and start at least something in
Rocky cycle.
I am in for having a meeting on this, which is a really good idea. A non-IRC
meeting is totally fine here. Do we have a meeting place and time set up?
-gmann


Hi Ghanshyam,
as I wrote earlier in the thread it's no problem for me to offer my
bluejeans channel, let's sort out which time slot can be good. I've
added my timezone to the main etherpad [1] (line 53), so let's do all that
so that we can create the meeting invite.

[1] https://etherpad.openstack.org/p/extreme-testing-contacts


Good idea!  I've added mine.  We're still missing replies from several
key stakeholders though (lines 62++) - probably worth getting buy-in
from a few more people before we organise anything.  I'm pinging a few
on IRC with reminders about this.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Rocky PTG summary - cells

2018-03-15 Thread Zhenyu Zheng
Thanks for the reply, both solutions look reasonable.

On Thu, Mar 15, 2018 at 10:29 AM, melanie witt  wrote:

> On Thu, 15 Mar 2018 09:54:59 +0800, Zhenyu Zheng wrote:
>
>> Thanks for the recap, got one question for the "block creation":
>>
>> * An attempt to create an instance should be blocked if the project
>> has instances in a "down" cell (the instance_mappings table has a
>> "project_id" column) because we cannot count instances in "down"
>> cells for the quota check.
>>
>>
>> Since users are not aware of any cell information, and the cells are
>> mostly randomly selected, there is a high possibility that a
>> user's (project's) instances are spread evenly across cells. The proposed
>> behavior could easily leave a lot of users unable to create instances
>> because one of the cells is down; isn't that too harsh?
>>
>
> To be honest, I share your concern. I had planned to change quota checks
> to use placement instead of reading cell databases ASAP but hit a snag
> where we won't be able to count instances from placement because we can't
> determine the "type" of an allocation. Allocations can be instances, or
> network-related resources, or volume-related resources, etc. Adding the
> concept of an allocation "type" in placement has been a controversial
> discussion so far.
>
> BUT ... we also said we would add a column like "queued_for_delete" to the
> instance_mappings table. If we do that, we could count instances from the
> instance_mappings table in the API database and count cores/ram from
> placement and no longer rely on reading cell databases for quota checks.
> Although, there is one more wrinkle: instance_mappings has a project_id
> column but does not have a user_id column, so we wouldn't be able to get a
> count by project + user needed for the quota check against user quota. So,
> if people would not be opposed, we could also add a "user_id" column to
> instance_mappings to handle that case.
>
> I would prefer not to block instance creations because of "down" cells, so
> maybe there is some possibility to avoid it if we can get
> "queued_for_delete" and "user_id" columns added to the instance_mappings
> table.
>
> -melanie
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] about rebuildinstance booted from volume

2018-03-15 Thread 李杰
It seems that we can only delete the snapshots of the original volume 
first, and then delete the original volume, if the original volume has snapshots.
 
Thanks,
lijie
-- Original --
From:  "Tomáš Vondra";
Date:  Wed, Mar 14, 2018 11:43 PM
To:  "OpenStack Developmen"; 
"openstack-operators"; 

Subject:  Re: [openstack-dev] [Openstack-operators] [nova] about 
rebuildinstance booted from volume

 
Hi!
I say delete! Delete them all!
Really, it's called delete_on_termination and should be ignored on Rebuild.
We have a VPS service implemented on top of OpenStack and do throw the old 
contents away on Rebuild. When the user has paid for the Backup service, they can 
restore a snapshot. Backup is implemented as a volume snapshot, then clone 
volume, then upload to image (glance is on a different disk array).

I also sometimes multi-attach a volume manually to a service node and just dd 
an image onto it. If it were to be implemented this way, then there would be no 
deleting a volume with delete_on_termination, just overwriting. But the effect 
is the same.

IMHO you can have snapshots of volumes that have been deleted. Just some 
backends like our 3PAR don't allow it, but it's not disallowed in the API 
contract.
Tomas from Homeatcloud

-Original Message-
From: Saverio Proto [mailto:ziopr...@gmail.com] 
Sent: Wednesday, March 14, 2018 3:19 PM
To: Tim Bell; Matt Riedemann
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] [nova] about rebuild 
instance booted from volume

My idea is that if the delete_on_termination flag is set to False, the volume 
should never be deleted by Nova.

my 2 cents

Saverio

2018-03-14 15:10 GMT+01:00 Tim Bell :
> Matt,
>
> To add another scenario and make things even more difficult (sorry!), if the 
> original volume has snapshots, I don't think you can delete it.
>
> Tim
>
>
> -Original Message-
> From: Matt Riedemann 
> Reply-To: "OpenStack Development Mailing List (not for usage 
> questions)" 
> Date: Wednesday, 14 March 2018 at 14:55
> To: "openstack-dev@lists.openstack.org" 
> , openstack-operators 
> 
> Subject: Re: [openstack-dev] [nova] about rebuild instance booted from 
> volume
>
> On 3/14/2018 3:42 AM, 李杰 wrote:
> >
> >  This is the spec about rebuilding an instance booted from
> > volume. In the spec, there is a question about whether we should
> > delete the old root volume. Anyone who is interested in boot from
> > volume can help to review this. Any suggestion is welcome. Thank you!
> >The link is here.
> >Re: the rebuild spec: https://review.openstack.org/#/c/532407/
>
> Copying the operators list and giving some more context.
>
> This spec is proposing to add support for rebuild with a new image for
> volume-backed servers, which today is just a 400 failure in the API
> since the compute doesn't support that scenario.
>
> With the proposed solution, the backing root volume would be deleted and
> a new volume would be created from the new image, similar to how boot
> from volume works.
>
> The question raised in the spec is whether or not nova should delete the
> root volume even if its delete_on_termination flag is set to False. The
> semantics get a bit weird here since that flag was not meant for this
> scenario, it's meant to be used when deleting the server to which the
> volume is attached. Rebuilding a server is not deleting it, but we would
> need to replace the root volume, so what do we do with the volume we're
> replacing?
>
> Do we say that delete_on_termination only applies to deleting a server
> and not rebuild and therefore nova can delete the root volume during a
> rebuild?
>
> If we don't delete the volume during rebuild, we could end up leaving a
> lot of volumes lying around that the user then has to clean up,
> otherwise they'll eventually go over quota.
>
> We need user (and operator) feedback on this issue and what they would
> expect to happen.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.

2018-03-15 Thread gordon chung


On 2018-03-15 5:16 AM, __ mango. wrote:
> hi,
> The environment variable that you're talking about has been configured 
> and the error has not gone away.
> 
> I was on OpenStack for the first time, can you be more specific? Thank 
> you very much.
> 

https://gnocchi.xyz/gnocchiclient/shell.html#openstack-keystone-authentication 
you're missing OS_AUTH_TYPE
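
e.g. something like the following, where the values are placeholders
for your deployment:

  export OS_AUTH_TYPE=password
  export OS_AUTH_URL=http://controller:5000/v3
  export OS_USERNAME=admin
  export OS_PASSWORD=secret
  export OS_PROJECT_NAME=admin
  export OS_USER_DOMAIN_NAME=Default
  export OS_PROJECT_DOMAIN_NAME=Default
  gnocchi status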

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream

2018-03-15 Thread Raoul Scarazzini
On 15/03/2018 01:57, Ghanshyam Mann wrote:
> Thanks all for starting the collaboration on this, which is long pending
> and which we all want to get started on.
> Myself and SamP talked about it during the OPS meetup in Tokyo and we talked
> about the below draft plan:
> - Update the spec - https://review.openstack.org/#/c/443504/ - which is
> almost ready as per SamP, and his team is working on that.
> - Start the technical debate on tooling we can use/reuse, like Yardstick
> etc., which is roughly what this mailing thread is about.
> - Accept the new repo for Eris under QA and start at least something in
> Rocky cycle.
> I am in for having a meeting on this, which is a really good idea. A non-IRC
> meeting is totally fine here. Do we have a meeting place and time set up?
> -gmann

Hi Ghanshyam,
as I wrote earlier in the thread it's no problem for me to offer my
bluejeans channel, let's sort out which time slot can be good. I've
added my timezone to the main etherpad [1] (line 53), so let's do all that
so that we can create the meeting invite.

[1] https://etherpad.openstack.org/p/extreme-testing-contacts

-- 
Raoul Scarazzini
ra...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-15 Thread Doug Hellmann
Back in Barcelona for the Ocata summit I presented a rough outline
of a plan for us to change the way we manage dependencies across
projects so that we can stop syncing them [1]. We've made some
progress, and I think it's time to finish the work so I'm volunteering
to take some of it up during Rocky. This email is meant to rehash
and update the proposal, and fill in some of the missing details.

[1] https://etherpad.openstack.org/p/ocata-requirements-notes

TL;DR
-----

Let's stop copying exact dependency specifications into all our
projects to allow them to reflect the actual versions of things
they depend on. The constraints system in pip makes this change
safe. We still need to maintain some level of compatibility, so the
existing requirements-check job (run for changes to requirements.txt
within each repo) will change a bit rather than going away completely.
We can enable unit test jobs to verify the lower constraint settings
at the same time that we're doing the other work.

Some History
------------

Back in the dark ages of OpenStack development we had a lot of
trouble keeping the dependencies of all of our various projects
configured so they were co-installable. Usually, but not always,
the problems were caused by caps or "exclusions" (version != X) on
dependencies in one project but not in another. Because pip's
dependency resolver does not take into account the versions of
dependencies needed by existing packages, it was quite easy to
install things in the "wrong" order and end up with incompatible
libraries so services wouldn't start or couldn't import plugins.

The first (working) solution to the problem was to develop a
dependency management system based on the openstack/requirements
repository. This system and our policies required projects to copy
exactly the settings for all of their dependencies from a global
list managed by a team of reviewers (first the release team, and
later the requirements team). By copying exactly the same settings
into all projects we ensured that they were "co-installable" without
any dependency conflicts. Having a centralized list of dependencies
with a review team also gave us an opportunity to look for duplicates,
packages using incompatible licenses, and otherwise curate the list
of dependencies. More on that later.

Some time after we had the centralized dependency management system
in place, Robert Collins worked with the PyPA folks to add a feature
to pip to constrain the versions of packages that are actually
installed, while still allowing a range of versions to be specified
in the dependency list. We were then able to to create a list of
"upper constraints" -- the highest, or newest, versions -- of all
of the packages we depend on and set up our test jobs to use that
list to control what is actually installed. This gives us the ability
to say that we need at least version X.Y.Z of a package and to force
the selection of X.Y+1.0 because we want to test with that version.
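
As a reminder of the mechanics, everything hinges on installing with
both files at once (a sketch; "foo" is a stand-in package):

  # requirements.txt declares what we support:   foo>=1.0
  # upper-constraints.txt declares what we test: foo==1.4.0
  $ pip install -c upper-constraints.txt -r requirements.txt
  # => foo 1.4.0 is installed, even though 1.0 would have satisfied
  #    the requirement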

The constraint feature means that we no longer need to have all of
the dependency specifications match exactly, since we basically
force the installation of a specific version anyway. We've been
running with both constraints and requirements syncing enabled for
a while now, and I think we should stop syncing the settings to
allow projects to let their lower bounds (the minimum versions of
their dependencies) diverge.

That divergence is useful to folks creating packages for just some
of the services, especially when they are going to be deployed in
isolation where co-installability is not required. Skipping the
syncs will also mean we end up releasing fewer versions of stable
libraries, because we won't be raising the minimum supported versions
of their dependencies automatically. That second benefit is my
motivation for focusing on this right now.

Our Requirements
----------------

We have three primary requirements for managing the dependency list:

1. Maintain a list of co-installable versions of all of our
   dependencies.

2. Avoid breaking or deadlocking any of our gate jobs due to
   dependency conflicts.

3. Continue to review new dependencies for licensing, redundancy,
   etc.

I believe the upper-constraints.txt file in openstack/requirements
satisfies the first two of these requirements. The third means we
need to continue to *have* a global requirements list, but we can
change how we manage it.

In addition to these hard requirements, it would be nice if we could
test the lower bounds of dependencies in projects to detect when a
project is using a feature of a newer version of a library than
their dependencies indicate. Although that is a bit orthogonal to
the syncing issue, I'm going to describe one way we could do that,
because the original plan of keeping a global list of "lower
constraints" would somewhat undermine our ability to stop syncing
the same lower bounds into all of the projects.

What I Want to Do
-----------------

1. Update the requirements-check test job to change the check for
   an exact match to be a check for compatibility with the
   upper-constraints.txt value.

Re: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol

2018-03-15 Thread Jean-Philippe Evrard
Looks good to me.

On 15 March 2018 at 01:11, Tony Breeds  wrote:
> On Wed, Mar 14, 2018 at 09:40:33PM +, Jean-Philippe Evrard wrote:
>> Hello folks,
>>
>> The list is almost perfect: you can do all of those except
>> openstack/openstack-ansible-tests.
>> I'd like to phase out openstack/openstack-ansible-tests and
>> openstack/openstack-ansible later.
>
> Okay excluding the 2 repos above and filtering out projects that don't
> have newton branches we came down to:
>
> # EOL repos belonging to OpenStackAnsible
> eol_branch.sh -- stable/newton newton-eol \
>  openstack/ansible-hardening \
>  openstack/openstack-ansible-apt_package_pinning \
>  openstack/openstack-ansible-ceph_client \
>  openstack/openstack-ansible-galera_client \
>  openstack/openstack-ansible-galera_server \
>  openstack/openstack-ansible-haproxy_server \
>  openstack/openstack-ansible-lxc_container_create \
>  openstack/openstack-ansible-lxc_hosts \
>  openstack/openstack-ansible-memcached_server \
>  openstack/openstack-ansible-openstack_hosts \
>  openstack/openstack-ansible-openstack_openrc \
>  openstack/openstack-ansible-ops \
>  openstack/openstack-ansible-os_aodh \
>  openstack/openstack-ansible-os_ceilometer \
>  openstack/openstack-ansible-os_cinder \
>  openstack/openstack-ansible-os_glance \
>  openstack/openstack-ansible-os_gnocchi \
>  openstack/openstack-ansible-os_heat \
>  openstack/openstack-ansible-os_horizon \
>  openstack/openstack-ansible-os_ironic \
>  openstack/openstack-ansible-os_keystone \
>  openstack/openstack-ansible-os_magnum \
>  openstack/openstack-ansible-os_neutron \
>  openstack/openstack-ansible-os_nova \
>  openstack/openstack-ansible-os_rally \
>  openstack/openstack-ansible-os_sahara \
>  openstack/openstack-ansible-os_swift \
>  openstack/openstack-ansible-os_tempest \
>  openstack/openstack-ansible-pip_install \
>  openstack/openstack-ansible-plugins \
>  openstack/openstack-ansible-rabbitmq_server \
>  openstack/openstack-ansible-repo_build \
>  openstack/openstack-ansible-repo_server \
>  openstack/openstack-ansible-rsyslog_client \
>  openstack/openstack-ansible-rsyslog_server \
>  openstack/openstack-ansible-security
>
> If you confirm I have the list right this time I'll work on this tomorrow
>
> Yours Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][stable] New release for Pike is overdue

2018-03-15 Thread Jens Harbott
The last neutron release for Pike was made in November, and a lot of
bug fixes have made it into the stable/pike branch since then. Can we
please get a fresh release for it soon?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] core nomination for caoyuan

2018-03-15 Thread Martin André
+1

On Tue, Mar 13, 2018 at 5:50 PM, Swapnil Kulkarni  wrote:
> On Mon, Mar 12, 2018 at 7:36 AM, Jeffrey Zhang  
> wrote:
>> Kolla core reviewer team,
>>
>> It is my pleasure to nominate caoyuan for kolla core team.
>>
>> caoyuan's output has been fantastic over the last cycle. And he is the most
>> active non-core contributor on the Kolla project for the last 180 days [1]. He
>> focuses on configuration optimization and improving the pre-checks feature.
>>
>> Consider this nomination a +1 vote from me.
>>
>> A +1 vote indicates you are in favor of caoyuan as a candidate, a -1
>> is a veto. Voting is open for 7 days until Mar 12th, or a unanimous
>> response is reached or a veto vote occurs.
>>
>> [1] http://stackalytics.com/report/contribution/kolla-group/180
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> +1
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints

2018-03-15 Thread Andreas Jaeger
On 2018-03-15 10:05, Thomas Morin wrote:
> Hi Andreas, all,
> 
> Andreas Jaeger, 2018-03-14 20:46:
>> Note that thanks to the tox-siblings feature, we really continue to
>> install neutron and horizon from git - and not use the versions in the
>> global-requirements constraints file.
> 
> This addresses my main concern, which was that by removing
> tools/tox_install.sh we would end up not pulling master from git.
> 
> The fact that we do keep pulling from git wasn't explicit AFAIK in any
> of the commit messages of the changes I had to look at to understand
> what was being modified.

Sorry for not mentioning that.

> I concur with Akihiro's comment, and would go slightly beyond that:
> ideally the solution chosen would not only work technically, but would
> reduce the ahah-there-is-magic-behind-the-scene effect, which is a pain
> I believe for many: people new to the community face a steeper learning
> curve, people inside the community need to spend time adjust, and infra
> folks end up having to document or explain more. In this precise case,
> the magic behind the scene (ie. the tox-siblings role) may lead to
> confusion for packagers (why our CI tests as valid is not what appears
> in requirements.txt) and perhaps people working in external communities
> (e.g. [1]).

The old way - included some magic as well ;(

I agree with Doug - we need to architect our dependencies better to
avoid these problems and hacks,

Andreas

> Best,
> 
> -Thomas
> 
> [1]
> http://docs.opnfv.org/en/latest/submodules/releng-xci/docs/xci-overview.html#xci-overview


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.

2018-03-15 Thread __ mango.
hi,
The environment variable that you're talking about has been configured and 
the error has not gone away.

This is my first time using OpenStack, so can you be more specific? Thank you very 
much.
-- Original --
From:  "Julien Danjou";
Date:  Thu, Mar 15, 2018 04:48 PM
To:  "__ mango."<935540...@qq.com>;
Cc:  "openstack-dev"; 
Subject:  Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.



On Thu, Mar 15 2018, __ mango. wrote:

> I have a question about the validation of gnocchi keystone.

There's no question in your message.

> I run the following command, but it is not successful.(api.auth_mode :basic, 
> basic mode can be successful)
>
> # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False
> -H "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H
> "Accept: application/json, */*" -H "User-Agent: gnocchi
> keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12"

There's no token in this request so Keystone auth won't work. You did
not set the environment variable OS_* correctly.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints

2018-03-15 Thread Thomas Morin
Hi Doug,

Doug Hellmann, 2018-03-14 23:42:
> We keep doing lots of infra-related work to make it "easy" to do
>  when it comes to
> managing dependencies.  There are three ways to address the issue
> with horizon and neutron, and none of them involve adding features
> to pbr.
> 
> 1. Things that are being used like libraries need to release like
>libraries. Real releases. With appropriate version numbers. So
>that other things that depend on them can express valid
> dependencies.
> 
> 2. Extract the relevant code into libraries and release *those*.
> 
> 3. Things that are not stable enough to be treated as a library
>shouldn't be used that way. Move the things that use the
> application
>code as library code back into the repo with the thing that they
>are tied to but that we don't want to (or can't) treat like a
>library.

What about the case where there is co-development of features across
repos ? One specific case I have in mind is the Neutron stadium where
we sometimes have features in neutron repo that are worked on as a pre-
requisite for things that will be done in a neutron-* or networking-*
project. Another is a case for instance where we need to add in project
X a tempest test to validate the resolution of a bug for which the fix
actually happened in project B (and where B is not a library).

My intuition is that it is not illegitimate to expect this kind of
development workflow to be feasible; but at the same time I read your
suggestion above as meaning that it belongs to the realm of "things we
shouldn't be doing in the first place".  The only way I can reconcile
the two would be to conclude we should collapse all the modules in
neutron-*/networking-* into neutron, but doing that would have quite a
lot of side effects (yes, this is an understatement).

-Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][neutron] tools/tox_install changes - breakage with constraints

2018-03-15 Thread Thomas Morin
Hi Andreas, all,
> Note that thanks to the tox-siblings feature, we really continue to
> install neutron and horizon from git - and not use the versions in
> the global-requirements constraints file.

This addresses my main concern, which was that by removing
tools/tox_install.sh we would end up not pulling master from git.

The fact that we do keep pulling from git wasn't explicit AFAIK in any
of the commit messages of the changes I had to look at to understand
what was being modified.

I concur with Akihiro's comment, and would go slightly beyond that:
ideally the solution chosen would not only work technically, but would
reduce the ahah-there-is-magic-behind-the-scene effect, which is a pain
I believe for many: people new to the community face a steeper learning
curve, people inside the community need to spend time adjusting, and infra
folks end up having to document or explain more.  In this precise
case, the magic behind the scene (i.e. the tox-siblings role) may lead
to confusion for packagers (what our CI tests as valid is not what appears
in requirements.txt) and perhaps people working in external communities
(e.g. [1]).
Best,

-Thomas

[1] http://docs.opnfv.org/en/latest/submodules/releng-xci/docs/xci-overview.html#xci-overview

Andreas Jaeger, 2018-03-14 20:46:
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] gnocchi-keystone verification failed.

2018-03-15 Thread Julien Danjou
On Thu, Mar 15 2018, __ mango. wrote:

> I have a question about the validation of gnocchi keystone.

There's no question in your message.

> I run the following command, but it is not successful.(api.auth_mode :basic, 
> basic mode can be successful)
>
> # gnocchi status --debug
> REQ: curl -g -i -X GET http://localhost:8041/v1/status?details=False
> -H "Authorization: {SHA1}d4daf1cf567f14f32dbc762154b3a281b4ea4c62" -H
> "Accept: application/json, */*" -H "User-Agent: gnocchi
> keystoneauth1/3.1.0 python-requests/2.18.1 CPython/2.7.12"

There's no token in this request so Keystone auth won't work. You did
not set the environment variable OS_* correctly.

-- 
Julien Danjou
# Free Software hacker
# https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Octavia] Using Octavia without neutron's extensions allowed-address-pairs and security-groups.

2018-03-15 Thread Вадим Пономарев
Hi,

I'm trying to install Octavia (from branch master) in my openstack
installation. In my installation, neutron runs with the allowed-address-pairs
extension and the security-groups extension disabled. This is done
to improve performance. At the moment, I see that the only network_driver
Octavia supports for neutron is allowed_address_pairs_driver, but this
driver requires the extensions [1]. How can I use Octavia without the
extensions? Or is the only option to write your own driver?

[1]
https://github.com/openstack/octavia/blob/master/octavia/network/drivers/neutron/allowed_address_pairs.py#L57
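
For context, it looks like the driver is selected in octavia.conf, so a
custom driver would presumably be plugged in there; a sketch (option
location assumed from the controller_worker group, double-check against
current master):

  [controller_worker]
  network_driver = allowed_address_pairs_driver  # a custom driver would go here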

-- 
Best regards,
Vadim Ponomarev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [masakari] Rocky work items

2018-03-15 Thread Sam P
Hi All,
  Rocky work items will be discussed in the 3/20 masakari IRC meeting [1].
Current items are listed in the etherpad [2], and new items, comments, and
questions are welcome.
If you are not able to join the IRC meeting, then please add sufficient details
to your comment, along with your contact (IRC or email) where we can reach you
for further discussion.

[1] http://eavesdrop.openstack.org/#Masakari_Team_Meeting
[2] https://etherpad.openstack.org/p/masakari-rocky-work-items

--- Regards,
Sampath
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [masakari] Masakari Project mascot ideas

2018-03-15 Thread Sam P
Thanks all. Seems like we are good to go with "St. Bernard".
I think the general image is [1]; let me know if I am wrong.
I will inform Anne and Kendall about our choice.

[1] https://www.min-inuzukan.com/st-bernard.html

--- Regards,
Sampath


On Wed, Mar 14, 2018 at 6:42 PM, Shewale, Bhagyashri <
bhagyashri.shew...@nttdata.com> wrote:

> +1 for St. Bernard
>
> Regards,
> Bhagyashri Shewale
>
> 
> From: Patil, Tushar 
> Sent: Wednesday, March 14, 2018 7:52:54 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [masakari] Masakari Project mascot ideas
>
> Hi,
>
>
> In total, 4 people attended the last IRC meeting and all of them voted for
> the St. Bernard dog.
>
>
> If you have missed the vote, please vote for a mascot now.
>
>
> Options:
>
> 1) Asiatic black bear
> 2) Gecko: geckos are able to regrow their tails when lost.
> 3) St. Bernard: the St. Bernard is famous as a rescue dog (Masakari rescues
> VM instances)
>
>
> Thank you.
>
>
> Regards,
>
> Tushar Patil
>
>
>
> 
> From: Bhor, Dinesh 
> Sent: Wednesday, March 14, 2018 10:16:29 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [masakari] Masakari Project mascot ideas
>
>
> Hi Sampath San,
>
>
> There is one more option, which we discussed in yesterday's masakari
> meeting [1]:
>
> St. Bernard(Dog) [2].
>
>
> [1] http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-03-13-04.01.log.html#l-38
>
>
> [2] https://en.wikipedia.org/wiki/St._Bernard_(dog)
>
>
> Thank you,
>
> Dinesh Bhor
>
>
> 
> From: Sam P 
> Sent: 13 March 2018 22:19:00
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [masakari] Masakari Project mascot ideas
>
> Hi All,
>
> We started this discussion in an IRC meeting a few weeks ago and there's
> still no progress... ;)
> (aspiers: thanks for the reminder!)
>
> We need mascot proposals for Masakari; see the FAQ [1] for more info.
> Current ideas: the origin of "Masakari" is related to a hero from Japanese
> folklore [2]. Considering that relationship, and to start the process,
> here are a few ideas:
> (1) Asiatic black bear
> (2) Gecko: geckos are able to regrow their tails when lost.
>
> [1] https://www.openstack.org/project-mascots/
>
> [2] https://en.wikipedia.org/wiki/Kintar%C5%8D
>
> --- Regards,
> Sampath
>
>
> __
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ops][heat][PTG] Heat PTG Summary

2018-03-15 Thread Rico Lin
Hi Heat devs and ops

It was a great PTG plus SnowpenStack experience. Now Rocky has started, and
we really need all kinds of input and effort to make sure we're heading in
the right direction.

Here is what we discussed during the PTG:

   - Future strategy for heat-tempest-plugin & functional tests
   - Multi-cloud support
   - Next plan for Heat Dashboard
   - Race conditions for clients updating/deleting stacks
   - Swift Template/file object support
   - heat dashboard needs of clients
   - Resuming after an engine failure
   - Moving SyncPoints from DB to DLM
   - toggle the debug option at runtime
   - remove mox
   - Allow partial success in ASG
   - Client Plugins and OpenStackSDK
   - Global Request Id support
   - Heat KeyStone Credential issue
   - (How we were going to survive on the island)

You can find *all Etherpad links* at
*https://etherpad.openstack.org/p/heat-rocky-ptg*

We tried to document as much as we could (thanks Zane for picking it up),
including discussions and actions. *We will try to target all actions in
Rocky*. If you would like to give input on any topic (or flag any topic you
think we are missing), *please add it to the etherpad* (and kindly leave a
message on the ML or in a meeting so we won't miss it).

*Use Cases*
If you have any use case for us (what's your use case, what's not working,
what's working well), please help us by adding it to
*https://etherpad.openstack.org/p/heat-usecases*


Here are the *team photos* we took:
*https://www.dropbox.com/sh/dtei3ovfi7z74vo/AADX_s3PXFiC3Fod8Yj_RO4na/Heat?dl=0*



-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] question on max cpu and memory

2018-03-15 Thread Chen CH Ji

In order to work on [1] and prove that libvirt can handle a live resize, I
want to make the following changes to the instance's XML: set maximum
memory to 1G and current memory to 512M, and set the current CPU count to 1
while the maximum is 2, so that we can hot-resize through the libvirt
interface.

The question I have is: is this OK, and could the current CPU count (1 in
this case) being inconsistent with the CPU topology lead to any problem?
Are there other considerations or limitations? I don't have much experience
here, so any comments are appreciated.

  <maxMemory unit='KiB'>1048576</maxMemory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu current='1'>2</vcpu>

[1]https://blueprints.launchpad.net/nova/+spec/instance-live-resize
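For what it's worth, here is how I imagine driving the resize through
libvirt once the XML above is in place (the domain name is a placeholder,
and I have not verified this end to end):

  # placeholder domain name; sizes in KiB, matching the XML above
  virsh setvcpus instance-00000001 2 --live
  virsh setmem instance-00000001 1048576 --live

My understanding is that setmem can only balloon up to the domain's
configured memory; growing further toward maxMemory would need a memory
device hot-plugged with virsh attach-device.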

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][charms] Openstack + OVN

2018-03-15 Thread Aakash Kt
Hi James,

Just a small reminder that I have pushed a patch for review incorporating
the changes you suggested :-)

Thanks,
Aakash

On Mon, Mar 12, 2018 at 2:38 PM, James Page  wrote:

> Hi Aakash
>
> On Sun, 11 Mar 2018 at 19:01 Aakash Kt  wrote:
>
>> Hi,
>>
>> I had previously sent a mail about the development of the openstack-ovn
>> charm. Sorry it took me this long to get back; I was involved in other
>> projects.
>>
>> I have submitted a charm spec for the above charm.
>> Here is the review link : https://review.openstack.org/#/c/551800/
>>
>> Please look into it, and we can discuss further how to proceed.
>>
>
> I'll feedback directly on the review.
>
> Thanks!
>
> James
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev