Re: [openstack-dev] [qa][release][ironic][requirements] hacking 1.1.0 released and ironic CI gates failing pep8

2018-05-08 Thread Ghanshyam Mann
On Wed, May 9, 2018 at 2:23 AM, Doug Hellmann  wrote:
> (I added the [qa] topic tag for the QA team, since they own hacking, and
> [requirements] for that team since I have a question about capping.)
>
> Excerpts from Julia Kreger's message of 2018-05-08 12:43:07 -0400:
>> About two hours ago, we started seeing Ironic CI jobs failing pep8
>> with new errors[1]. For some of our repositories, it just seems to be
>> a couple of lines that need to be fixed. On ironic itself, supporting
>> this might have us dead in the water for a while to fix the code in
>> accordance with what hacking is now expecting.
>>
>> That being said, dtantsur and dhellmann have the perception that new
>> checks are supposed to be opt-in only, yet this new hacking appears to
>> have W605 and W606 enabled by default, as indicated by discussion in
>> #openstack-release[2].
>>
>> Please advise, it seems like the release team ought to revert the
>> breaking changes and cut a new release as soon as possible.
>>
>> -Julia
>>
>> [1]: 
>> http://logs.openstack.org/87/557687/4/check/openstack-tox-pep8/75380de/job-output.txt.gz#_2018-05-08_14_46_47_179606
>> [2]: 
>> http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2018-05-08.log.html#t2018-05-08T16:30:22
>>
>
> As discussed in #openstack-release, those checks are pulled in via
> pycodestyle rather than hacking itself, and pycodestyle doesn't have an
> opt-in policy.
>
> Hacking is in the blacklist for requirements management, so teams
> *ought* to be able to cap it, if I understand correctly. So I suggest at
> least starting with a patch to test that.
>
> Doug

Sorry for the inconvenience, but I agree with Doug on capping hacking on
the project side. Keeping hacking on the blacklist, and never in the g-r
sync list, was meant precisely to avoid situations like this. The
compatible hacking version and its cap are maintained by each project,
according to its own source code. Almost all projects cap the hacking
version in their test-requirements.txt and bump it only when the team
wants to and the code passes the new rules. For example [1].
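
To illustrate, such a cap is a single line in test-requirements.txt; a
rough sketch of the general shape (the exact bounds are up to each
project):

    hacking>=1.0.0,<1.1.0 # Apache-2.0

A project then raises the upper bound deliberately once its code passes
the new checks.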

It is difficult for the QA team or the release team to verify that a new
hacking release will not break anyone, even when hacking itself adds no
new rules (this time the failures are caused by pycodestyle). To avoid
such failures in the future, I am in favour of capping hacking on the
project side, as the majority of projects already do. I searched manually
and found the repos below which do not cap hacking [2].
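
For reference, W605 flags invalid escape sequences in ordinary (non-raw)
string literals. A tiny illustrative Python snippet of the kind of change
involved (hypothetical code, not from any particular repo):

    import re

    # flagged by W605: '\d' is an invalid escape sequence in a normal string
    DATE_RE = re.compile('\d{4}-\d{2}-\d{2}')

    # clean: a raw string passes the backslashes through to the regex engine
    DATE_RE = re.compile(r'\d{4}-\d{2}-\d{2}')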

I am pushing patches to cap the version on those repos. Each project team
can decide whether to merge them, both to fix the current gate and to
avoid such situations in the future.

- 
https://review.openstack.org/#/q/topic:cap-hacking+(status:open+OR+status:merged)


FYI - W503 was raising failures even before this hacking release; that was
fixed in Tempest a month ago [3].
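
W503 complains about a line break before a binary operator; the usual fix
is to break after the operator instead, e.g. with hypothetical names:

    # flagged by W503: the operator starts the continuation line
    total = (used_bytes
             + reserved_bytes)

    # accepted: break after the operator
    total = (used_bytes +
             reserved_bytes)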


[1] https://review.openstack.org/#/c/397486/

[2]
http://codesearch.openstack.org/?q=hacking=nope=test-requirements.txt=
1. openstack/fuel-ccp-installer
2. openstack/fuel-ccp-tests
3. openstack/ironic
4. openstack/ironic-inspector
5. openstack/ironic-lib
6. openstack/ironic-python-agent
7. openstack/python-ironic-inspector-client
8. openstack/python-ironicclient
9. openstack/kolla-ansible
10. openstack/monasca-analytics
11. openstack/networking-generic-switch
12. openstack/patrole
13. openstack/pyghmi
14. openstack/rally
15. openstack/rally-openstack
16. openstack-infra/storyboard
17. openstack/sushy


[3] https://review.openstack.org/#/c/560360/

-gmann

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squads’ Sprint 12 Summary: libvirt-reproducer, python-tempestconf

2018-05-08 Thread Matt Young
Greetings,

The TripleO squads for CI and Tempest have just completed Sprint 12.  The
following is a summary of activities during this sprint.   Details on our
team structure can be found in the spec [1].

---

# Sprint 12 Epic (CI): Libvirt Reproducer

* Epic Card: https://trello.com/c/JEGLSVh6/51-reproduce-ci-jobs-with-libvirt
* Tasks: http://ow.ly/O1vZ30jTSc3

"Allow developers to reproduce a multinode CI job on a bare metal host
using libvirt"
"Enable the same workflows used in upstream CI / reproducer using libvirt
instead of OVB as the provisioning mechanism"

The CI Squad prototyped, designed, and implemented new functionality for
our CI reproducer.   “Reproducers” are scripts generated by each CI job
that allow the job/test to be recreated.  These are useful to both CI team
members when investigating failures, as well as developers creating
failures with the intent to iteratively debug and/or fix issues.  Prior to
this sprint, the reproducer scripts supported reproduction of upstream CI
jobs using OVB, typically on RDO Cloud.  This sprint we extended this
capability to support reproduction of jobs in libvirt.

This work was done for a few reasons:

* (short term) enable the team to work on upgrades and other CI team tasks
more efficiently by mitigating recurring RDO Cloud infrastructure issues.
This was the primary motivator for doing this work at this time.
* (mid-longer term) enhance / enable iterative workflows such as THT
development, debugging deployment scenarios, etc.  Snapshots in particular
have proven quite useful.  As we look towards a future with a viable
single-node deployment capability, libvirt has clear benefits for common
developer scenarios.

It is expected that further iteration and refinement of this initial
implementation will be required before the tripleo-ci team is able to
support this broadly.  What we’ve done works as designed.  While we welcome
folks to explore, please note that we are not announcing a supported
libvirt reproducer meant for use outside the tripleo-ci team at this time.
We expect some degree of change, and have a number of RFE’s resulting from
our testing as well as documentation patches that we’re iterating on.

That said, we think it’s really cool, works well in its current form, and
are optimistic about its future.

## We did the following (CI):

* Add support to the reproducer script [2,3] generated by CI to enable
libvirt.
* Basic snapshot create/restore [4] capability.
* Tested Scenarios: featureset 3 (UC idem), 10 (multinode containers), 37
(min OC + minor update).  See sprint cards for details.
* 14-18 RFE’s identified as part of testing for future work
http://ow.ly/J2u830jTSLG

---

# Sprint 12 Epic (Tempest):

* Epic Card: https://trello.com/c/ifIYQsxs/75-sprint-12-undercloud-tempest
* Tasks: http://ow.ly/GGvc30jTSfV

“Run tempest on undercloud by using containerized and packaged tempest”
“Complete work items carried from sprint 11 or another side work going on.”

## We did the following (Tempest):

* Create tripleo-ci jobs that run containerized tempest on all stable
branches.
* Create documentation for configuring and running tempest using
containerized tempest on UC @tripleo.org, and blog posts. [5,6,7]
* Run certification tests via new Jenkins job using ansible role [8]
* Refactor validate-tempest CI role for UC and containers

---

# Ruck and Rover

Each sprint two of the team members assume the roles of Ruck and Rover
(each for half of the sprint).

* Ruck is responsible for monitoring the CI, checking for failures, opening
bugs, and participating in meetings; the Ruck is the focal point for any CI
issues.
* Rover is responsible for working on those bugs and fixing problems, so
that the rest of the team can stay focused on the sprint. For more
information about our structure, check [1]

## Ruck & Rover (Sprint 12), Etherpad [9,10]:

* Quique Llorente(quiquell)
* Gabriele Cerami (panda)

A few notable issues where substantial time was spent were:

* 1767099 - periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset030-master
vxlan tunnel fails randomly
* 1758899 - reproducer-quickstart.sh building wrong gating package
* 1767343 - gate tripleo-ci-centos-7-containers-multinode fails to update
packages in cron container
* 1762351 - periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload
times out (depends on https://bugzilla.redhat.com/show_bug.cgi?id=1565179)
* 1766873 - quickstart on ovb doesn't yield a deployment
* 1767049 - Error during test discovery: 'must specify exactly one of host or
intercept' (depends on https://bugzilla.redhat.com/show_bug.cgi?id=1434385)
* 1767076 - Creating pingtest_sack fails: Failed to schedule instances:
NoValidHost_Remote: No valid host was found
* 1763634 - devmode.sh --ovb fails to deploy overcloud
* 1765680 - Incorrect branch used for non-gated tripleo-upgrade repo

If you have any questions and/or suggestions, please contact us in #oooq or
#tripleo

Thanks,

Matt


tq: https://github.com/openstack/tripleo-quickstart
tqe: 

Re: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications

2018-05-08 Thread Eric K
Thank you, Zane for the discussion.

Point taken about sending webhook notifications.

Primarily I want Congress to consume webhook notifications from the
openstack services which already send them (monasca, vitrage, etc.). Most
of them do not currently support sending appropriate keystone tokens with
the notifications, but some are open to doing it.

The aodh and zaqar references are exactly what I was hoping to find. I
couldn't find a reference to it in aodh docs or much on google, so many
thanks for the pointer!

Eric



On 5/8/18, 1:20 PM, "Zane Bitter"  wrote:
>If the caller is something that is basically trusted, then you should
>prefer regular keystone auth. If you need to make sure that the caller
>can only use that one API, signed URLs are still the only game in town
>for now (but we hope this is very temporary).
>
>> I know some people are working on adding the keystone auth option to
>> Monasca's webhook framework. If there is a project that already does it,
>> it could be a very helpful reference.
>
>There's a sort of convention that where you supply a webhook URL with a
>scheme trust+https:// then the service creates a keystone trust and uses
>that to get keystone tokens which are then used to authenticate the
>webhook request. Aodh and Zaqar at least follow this convention. The
>trust part is an important point that you're overlooking: (from your
>other message)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] Project Update Topics

2018-05-08 Thread Ben Nemec

Hi,

This was discussed in the meeting this week too, but I wanted to send it 
to the list as well for a little more visibility.  We've started an 
etherpad at https://etherpad.openstack.org/p/oslo-project-update-rocky 
to collect any topics that folks want included in the Oslo project 
update session in Vancouver.  We're less than two weeks out from Summit 
so please don't wait if you have something to add.


Thanks.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [nova] [octavia] [ironic] [keystone] [policy] Spec. Freeze Exception - Default Roles

2018-05-08 Thread Doug Hellmann
Excerpts from Lance Bragstad's message of 2018-05-04 15:16:09 -0500:
> 
> On 05/04/2018 02:55 PM, Harry Rybacki wrote:
> > Greetings All,
> >
> > After a discussion in #openstack-tc[1] earlier today, the Keystone
> > team is adjusting its approach in proposing default roles[2].
> > Subsequently, I have ported the current default roles specification
> > from openstack-specs[3] to keystone-specs[2].
> >
> > The original review has been in a pretty stable state for a few weeks.
> > As such, I propose we allow the new spec an exception to the original
> > Rocky-m1 proposal freeze date.
> 
> I don't have an issue with this, especially since we talked about it heavily 
> at the PTG. We also had people familiar with keystone +1 the openstack-spec 
> prior to keystone's proposal freeze. I'm OK granting an exception here if 
> other keystone contributors don't object.
> 
> >
> > I invite more discussion around default roles, and our proposed
> > approach. The Keystone team has a forum session[4] dedicated to this
> > topic at 1135 on day one of the Vancouver Summit. Everyone should feel
> > welcome and encouraged to attend -- we hope that this work will lead
> > to an OpenStack Community Goal in a not-so-distant release.
> 
> I think scoping this down to be keystone-specific is a smart move. It allows 
> us to focus on building a solid template for other projects to learn from. I 
> was pleasantly surprised to hear people in -tc suggest this as a candidate 
> for a community goal in Stein or T.
> 
> Also, big thanks to jroll, dhellmann, ttx, zaneb, smcginnis, johnsom, and 
> mnaser for taking time to work through this with us.

This is a good opportunity for us to experiment with simplifying a
community process.

We've seen repeatedly that big initiatives like this work best when
the team behind them is committed to the specific initiative.  Rather
than trying to assemble a new team of uncertain membership or
interest to review all "global" specs like this, I like the idea
of the keystone team continuing to drive the work on this change
while seeking input from the rest of the community.

We still need to have consensus about the plan, which we can do via
the normal mailing list threads and review of the spec. And then
when we have one or two projects done as an example, we can review
what else we might need before taking it on as a goal.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications

2018-05-08 Thread Zane Bitter

On 03/05/18 15:49, Eric K wrote:

Question to the projects which send or consume webhook notifications
(telemetry, monasca, senlin, vitrage, etc.), what are your
supported/preferred authentication mechanisms? Bearer token (e.g.
Keystone)? Signing?


Signed URLs and regular Keystone auth are both options, and both are used in
various places, as Thomas said. Whenever you can avoid implementing your own
signed URL thing, it's better that you don't. Security-sensitive things like
authentication should be implemented as few times as possible.


Eventually we should be able to mostly eliminate the need for signed 
URLs with 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/capabilities-app-creds.html 
but we're not there yet.


If the caller is something that is basically trusted, then you should 
prefer regular keystone auth. If you need to make sure that the caller 
can only use that one API, signed URLs are still the only game in town 
for now (but we hope this is very temporary).



Any pointers to past discussions on the topic? My interest here is having
Congress consume and send webhook notifications.


Please don't.

Webhooks are a security nightmare. They can be used to enlist the 
OpenStack infrastructure in mounting attacks on other random sites, or 
to attack the OpenStack operator themselves if everything is not 
properly secured.


Ideally there should be only one place in OpenStack that can send 
webhooks, so that there's only one thing for operators to secure. (IMHO 
since that thing will need to keep a queue of pending webhooks to send, 
the logical place would be Zaqar notifications.) Obviously that's not 
the case today - we already send webhooks from Aodh, Mistral, Zaqar and 
others. But at least we can avoid adding more.



I know some people are working on adding the keystone auth option to
Monasca's webhook framework. If there is a project that already does it,
it could be a very helpful reference.


There's a sort of convention that where you supply a webhook URL with a 
scheme trust+https:// then the service creates a keystone trust and uses 
that to get keystone tokens which are then used to authenticate the 
webhook request. Aodh and Zaqar at least follow this convention. The 
trust part is an important point that you're overlooking: (from your 
other message)



I'm thinking about the situation where the sending service can obtain
tokens directly from keystone.


If you haven't stored the user's password then you cannot, in fact, 
obtain more tokens from keystone. You only have the one they gave you 
with the initial request, and that will soon expire. So you have to 
create a trust (which doesn't expire) and store the trust ID, which you 
can then use in combination with the service token to get additional 
user tokens from keystone when required.
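
To make that concrete, here is a rough sketch of that manual flow using
keystoneauth1 and python-keystoneclient (every name, role, and URL below is
purely illustrative, not any particular project's code):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as ks_client

    # placeholders, for illustration only
    AUTH_URL = 'http://keystone.example.com/identity/v3'
    user_token, user_id, project_id = 'gAAAA...', 'USER_ID', 'PROJECT_ID'
    service_user_id, service_password = 'SERVICE_USER_ID', 'SERVICE_PASSWORD'

    # 1. While the user's initial token is still valid, create a trust that
    #    delegates a role on their project to the service user.
    user_sess = session.Session(auth=v3.Token(
        auth_url=AUTH_URL, token=user_token, project_id=project_id))
    trust = ks_client.Client(session=user_sess).trusts.create(
        trustor_user=user_id, trustee_user=service_user_id,
        project=project_id, role_names=['member'], impersonation=True)
    # persist trust.id alongside the webhook subscription

    # 2. Much later (the user's token has long expired), the service combines
    #    its own credentials with the stored trust ID to get a trust-scoped
    #    token to send with the webhook request.
    trust_auth = v3.Password(
        auth_url=AUTH_URL, username='my-service', password=service_password,
        user_domain_id='default', trust_id=trust.id)
    webhook_token = session.Session(auth=trust_auth).get_token()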


Don't do that though. Just create a Zaqar queue, store a pre-signed URL 
that allows you to post to it, and set up a Zaqar notification for the 
webhook URL you want to hit (which can be a trust+https:// URL). Avoid 
being the next project to reinvent the wheel :)


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications

2018-05-08 Thread Eric K
To clarify, one of the reasons I'd like to accept webhook notifications
authenticated with keystone tokens is that I don't want the access to
expire, but of course it's poor practice to use a signed URL that never
expires.

Eric

On 5/8/18, 12:29 PM, "Eric K"  wrote:

>Thanks, Thomas!
>
>I see the point that it is impractical to configure a service with a fixed
>keystone token to use in webhook notifications because they expire fairly
>quickly.
>
>I'm thinking about the situation where the sending service can obtain
>tokens directly from keystone. In that case I'm guessing the main reason
>it hasn't been done that way is because it does not generalize to most
>other services that don't connect to keystone?
>
>On 5/6/18, 9:30 AM, "Thomas Herve"  wrote:
>
>>On Sat, May 5, 2018 at 1:53 AM, Eric K  wrote:
>>> Thanks a lot Witold and Thomas!
>>>
>>> So it doesn't seem that someone is currently using a keystone token to
>>> authenticate web hook? Is it simply because most of the use cases had
>>> involved services which do not use keystone?
>>>
>>> Or is it unsuitable for another reason?
>>
>>It's fairly impractical for webhooks because
>>
>>1) Tokens expire fairly quickly.
>>2) You can't store all the data in the URL, so you need to store the
>>token and the URL separately.
>>
>>-- 
>>Thomas
>>
>>_
>>_
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition

2018-05-08 Thread Zane Bitter

On 08/05/18 15:16, Matthew Treinish wrote:

Although, I don't think glance uses oslo.service even in the case where it's
using the standalone eventlet server. It looks like it launches eventlet.wsgi
directly:

https://github.com/openstack/glance/blob/master/glance/common/wsgi.py

and I don't see oslo.service in the requirements file either:

https://github.com/openstack/glance/blob/master/requirements.txt


It would probably independently suffer from 
https://bugs.launchpad.net/manila/+bug/1482633 in Python 3 then. IIUC 
the code started in oslo incubator but projects like neutron and manila 
converted to use the oslo.service version. There may be other copies of 
it still floating around...


- ZB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications

2018-05-08 Thread Eric K
Thanks, Thomas!

I see the point that it is impractical to configure a service with a fixed
keystone token to use in webhook notifications because they expire fairly
quickly.

I'm thinking about the situation where the sending service can obtain
tokens directly from keystone. In that case I'm guessing the main reason
it hasn't been done that way is because it does not generalize to most
other services that don't connect to keystone?

On 5/6/18, 9:30 AM, "Thomas Herve"  wrote:

>On Sat, May 5, 2018 at 1:53 AM, Eric K  wrote:
>> Thanks a lot Witold and Thomas!
>>
>> So it doesn't seem that someone is currently using a keystone token to
>> authenticate web hook? Is it simply because most of the use cases had
>> involved services which do not use keystone?
>>
>> Or is it unsuitable for another reason?
>
>It's fairly impractical for webhooks because
>
>1) Tokens expire fairly quickly.
>2) You can't store all the data in the URL, so you need to store the
>token and the URL separately.
>
>-- 
>Thomas
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition

2018-05-08 Thread Matthew Treinish
On Tue, May 08, 2018 at 03:02:05PM -0400, Doug Hellmann wrote:
> Excerpts from Matthew Treinish's message of 2018-05-08 13:55:43 -0400:
> > On Tue, May 08, 2018 at 01:34:11PM -0400, Doug Hellmann wrote:
> > > 
> > > (added [glance] subject tag)
> > > 
> > > Excerpts from Matthew Treinish's message of 2018-05-08 12:22:56 -0400:
> > > > On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote:
> > > > > On 08/05/18 16:53, Doug Hellmann wrote:
> > > > > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100:
> 
> [snip]
> 
> > > > > Glance - Has issues with image upload + uwsgi + eventlet [1]
> > > > 
> > > > This actually is a bit misleading. Glance works fine with image upload 
> > > > and uwsgi.
> > > > That's the only configuration of glance in a wsgi app that works because
> > > > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi 
> > > > provides
> > > > an alternate interface to read chunked requests which enables this to 
> > > > work.
> > > > If you look at the bugs linked off that release note about image upload
> > > > you'll see they're all fixed.
> > > 
> > > Is this documented somewhere?
> > 
> > The wsgi limitation or the glance usage? I wrote up a doc about running 
> > under
> > apache when I added the uwsgi chunked transfer encoding support to glance 
> > about
> > running glance under apache here:
> > 
> > https://docs.openstack.org/glance/latest/admin/apache-httpd.html
> > 
> > Which includes how you have to configure things to get it working and a 
> > section
> > on why mod_wsgi doesn't work.
> 
> I meant the glance usage so it sounds like you've covered the docs
> for that. Thanks!
> 
> > > > The issues glance has with running in a wsgi app are related to it's 
> > > > use of
> > > > async tasks via taskflow. (which includes the tasks api and image 
> > > > import stuff)
> > > > This shouldn't be hard to fix, and I've had patches up to address these 
> > > > for
> > > > months:
> > > > 
> > > > https://review.openstack.org/#/c/531498/
> > > > https://review.openstack.org/#/c/549743/
> > > > 
> > > > Part of the issue is that there is no api driven testing for these 
> > > > async api
> > > > functions or any documented way to test them. Which is why I marked the 
> > > > 2nd
> > > > one WIP, since I have no method to test it and after asking several 
> > > > times
> > > > for a test case or some other method to validate these APIs without an 
> > > > answer.
> > > 
> > > It would be helpful if some of this detail made its way into the glance
> > > section of 
> > > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
> > 
> > It really doesn't have anything to do with Python 3 though since the bug 
> > with
> > glance's taskflow usage is on both py2 and py3. In fact we're already 
> > running
> > glance under uwsgi in the gate with python 3 today for the dsvm py3 jobs. 
> > The
> > reason these bugs haven't come up there is because there is no test coverage
> > for any of these async APIs. But I can add it to the wiki later today.
> 
> Will it block us from moving glance to python 3 if we drop the WSGI
> code from oslo.service so that the only way to deploy is behind
> some other WSGI server?
> 

It shouldn't be a blocker; the wsgi entrypoint just uses paste to expose the
wsgi app directly:

https://github.com/openstack/glance/blob/master/glance/common/wsgi_app.py#L59-L67

oslo.service doesn't come into play in that code path, so it won't block
the deploy-with-uwsgi model. The bugs addressed by the two patches I referenced
above will still be present, though.

Although, I don't think glance uses oslo.service even in the case where it's
using the standalone eventlet server. It looks like it launches eventlet.wsgi
directly:

https://github.com/openstack/glance/blob/master/glance/common/wsgi.py

and I don't see oslo.service in the requirements file either:

https://github.com/openstack/glance/blob/master/requirements.txt

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition

2018-05-08 Thread Doug Hellmann
Excerpts from Matthew Treinish's message of 2018-05-08 13:55:43 -0400:
> On Tue, May 08, 2018 at 01:34:11PM -0400, Doug Hellmann wrote:
> > 
> > (added [glance] subject tag)
> > 
> > Excerpts from Matthew Treinish's message of 2018-05-08 12:22:56 -0400:
> > > On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote:
> > > > On 08/05/18 16:53, Doug Hellmann wrote:
> > > > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100:

[snip]

> > > > Glance - Has issues with image upload + uwsgi + eventlet [1]
> > > 
> > > This actually is a bit misleading. Glance works fine with image upload 
> > > and uwsgi.
> > > That's the only configuration of glance in a wsgi app that works because
> > > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi 
> > > provides
> > > an alternate interface to read chunked requests which enables this to 
> > > work.
> > > If you look at the bugs linked off that release note about image upload
> > > you'll see they're all fixed.
> > 
> > Is this documented somewhere?
> 
> The wsgi limitation or the glance usage? I wrote up a doc about running under
> apache when I added the uwsgi chunked transfer encoding support to glance 
> about
> running glance under apache here:
> 
> https://docs.openstack.org/glance/latest/admin/apache-httpd.html
> 
> Which includes how you have to configure things to get it working and a 
> section
> on why mod_wsgi doesn't work.

I meant the glance usage so it sounds like you've covered the docs
for that. Thanks!

> > > The issues glance has with running in a wsgi app are related to it's use 
> > > of
> > > async tasks via taskflow. (which includes the tasks api and image import 
> > > stuff)
> > > This shouldn't be hard to fix, and I've had patches up to address these 
> > > for
> > > months:
> > > 
> > > https://review.openstack.org/#/c/531498/
> > > https://review.openstack.org/#/c/549743/
> > > 
> > > Part of the issue is that there is no api driven testing for these async 
> > > api
> > > functions or any documented way to test them. Which is why I marked the 
> > > 2nd
> > > one WIP, since I have no method to test it and after asking several times
> > > for a test case or some other method to validate these APIs without an 
> > > answer.
> > 
> > It would be helpful if some of this detail made its way into the glance
> > section of 
> > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
> 
> It really doesn't have anything to do with Python 3 though since the bug with
> glance's taskflow usage is on both py2 and py3. In fact we're already running
> glance under uwsgi in the gate with python 3 today for the dsvm py3 jobs. The
> reason these bugs haven't come up there is because there is no test coverage
> for any of these async APIs. But I can add it to the wiki later today.

Will it block us from moving glance to python 3 if we drop the WSGI
code from oslo.service so that the only way to deploy is behind
some other WSGI server?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition

2018-05-08 Thread Matthew Treinish
On Tue, May 08, 2018 at 01:34:11PM -0400, Doug Hellmann wrote:
> 
> (added [glance] subject tag)
> 
> Excerpts from Matthew Treinish's message of 2018-05-08 12:22:56 -0400:
> > On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote:
> > > On 08/05/18 16:53, Doug Hellmann wrote:
> > > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100:
> > > >> On 08/05/18 16:09, Zane Bitter wrote:
> > > >>> On 30/04/18 17:16, Ben Nemec wrote:
> > > > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
> > > >> 1. Fix oslo.service functional tests -- the Oslo team needs help
> > > >>     maintaining this library. Alternatively, we could move all
> > > >>     services to use cotyledon 
> > > >> (https://pypi.org/project/cotyledon/).
> > > >>>
> > > >>> I submitted a patch that fixes the py35 gate (which was broken due to
> > > >>> changes between CPython 3.4 and 3.5), so once that merges we can flip
> > > >>> the gate back to voting:
> > > >>>
> > > >>> https://review.openstack.org/566714
> > > >>>
> > >  For everyone's awareness, we discussed this in the Oslo meeting today
> > >  and our first step is to see how many, if any, services are actually
> > >  relying on the oslo.service functionality that doesn't work in Python
> > >  3 today.  From there we will come up with a plan for how to move 
> > >  forward.
> > > 
> > >  https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
> > > >>>
> > > >>> These tests are currently skipped in both oslo_service and nova.
> > > >>> (Equivalent tests were removed from Neutron and Manila on the 
> > > >>> principle
> > > >>> that they're now oslo_service's responsibility.)
> > > >>>
> > > >>> This appears to be a series of long-standing bugs in eventlet:
> > > >>>
> > > >>> Python 3.5 failure mode:
> > > >>> https://github.com/eventlet/eventlet/issues/308
> > > >>> https://github.com/eventlet/eventlet/issues/189
> > > >>>
> > > >>> Python 3.4 failure mode:
> > > >>> https://github.com/eventlet/eventlet/issues/476
> > > >>> https://github.com/eventlet/eventlet/issues/145
> > > >>>
> > > >>> There are also more problems coming down the pipeline in Python 3.6:
> > > >>>
> > > >>> https://github.com/eventlet/eventlet/issues/371
> > > >>>
> > > >>> That one is resolved in eventlet 0.21, but we have that blocked by
> > > >>> upper-constraints:
> > > >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135
> > > >>>
> > > >>>
> > > >>> Given that the code in question relates solely to standalone WSGI
> > > >>> servers with SSL and everything should have already migrated to 
> > > >>> Apache,
> > > >>> and that the upstream is clearly overworked and unlikely to merge 
> > > >>> fixes
> > > >>> any time soon (plus we would have to deal with the fallout of moving 
> > > >>> the
> > > >>> upper constraint), I agree that it would be preferable if we could 
> > > >>> just
> > > >>> ditch this functionality.
> > > >>
> > > >> There are a few projects that have not migrated, and some that have
> > > >> issues running in non standalone WSGI mode (due, ironically to 
> > > >> eventlet)
> > > >>
> > > >> We should probably get people to run these projects behind an reverse
> > > >> proxy, and terminate SSL there, but right now we don't have that
> > > >> documented.
> > > > 
> > > > Do you know which projects?
> > > 
> > > I know of 2:
> > > 
> > > Designate - mainly due to the major lack of resources available during
> > > the uwsgi goal period, and the level of work needed to unravel our
> > > tooling to support it.
> > > 
> > > Glance - Has issues with image upload + uwsgi + eventlet [1]
> > 
> > This actually is a bit misleading. Glance works fine with image upload and 
> > uwsgi.
> > That's the only configuration of glance in a wsgi app that works because
> > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi 
> > provides
> > an alternate interface to read chunked requests which enables this to work.
> > If you look at the bugs linked off that release note about image upload
> > you'll see they're all fixed.
> 
> Is this documented somewhere?

The wsgi limitation or the glance usage? When I added the uwsgi chunked
transfer encoding support to glance, I wrote up a doc about running glance
under apache here:

https://docs.openstack.org/glance/latest/admin/apache-httpd.html

It covers how you have to configure things to get it working and has a
section on why mod_wsgi doesn't work.
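
For flavour, a deliberately minimal, illustrative uwsgi snippet for that style
of deployment (not the documented configuration; see the doc above for the
real settings, and the entry point path varies by install):

    [uwsgi]
    # serve the glance-api WSGI entry point over HTTP from uwsgi itself
    http = 0.0.0.0:9292
    wsgi-file = /usr/local/bin/glance-wsgi-api
    master = true
    processes = 4
    enable-threads = true
    # uwsgi-specific support for reading chunked request bodies, which plain
    # WSGI (and hence mod_wsgi) cannot express
    http-chunked-input = true
    chunked-input-timeout = 300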

> 
> > 
> > The issues glance has with running in a wsgi app are related to it's use of
> > async tasks via taskflow. (which includes the tasks api and image import 
> > stuff)
> > This shouldn't be hard to fix, and I've had patches up to address these for
> > months:
> > 
> > https://review.openstack.org/#/c/531498/
> > https://review.openstack.org/#/c/549743/
> > 
> > Part of the issue is that there is no api driven testing for these async api

Re: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition

2018-05-08 Thread Doug Hellmann

(added [glance] subject tag)

Excerpts from Matthew Treinish's message of 2018-05-08 12:22:56 -0400:
> On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote:
> > On 08/05/18 16:53, Doug Hellmann wrote:
> > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100:
> > >> On 08/05/18 16:09, Zane Bitter wrote:
> > >>> On 30/04/18 17:16, Ben Nemec wrote:
> > > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
> > >> 1. Fix oslo.service functional tests -- the Oslo team needs help
> > >>     maintaining this library. Alternatively, we could move all
> > >>     services to use cotyledon (https://pypi.org/project/cotyledon/).
> > >>>
> > >>> I submitted a patch that fixes the py35 gate (which was broken due to
> > >>> changes between CPython 3.4 and 3.5), so once that merges we can flip
> > >>> the gate back to voting:
> > >>>
> > >>> https://review.openstack.org/566714
> > >>>
> >  For everyone's awareness, we discussed this in the Oslo meeting today
> >  and our first step is to see how many, if any, services are actually
> >  relying on the oslo.service functionality that doesn't work in Python
> >  3 today.  From there we will come up with a plan for how to move 
> >  forward.
> > 
> >  https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
> > >>>
> > >>> These tests are currently skipped in both oslo_service and nova.
> > >>> (Equivalent tests were removed from Neutron and Manila on the principle
> > >>> that they're now oslo_service's responsibility.)
> > >>>
> > >>> This appears to be a series of long-standing bugs in eventlet:
> > >>>
> > >>> Python 3.5 failure mode:
> > >>> https://github.com/eventlet/eventlet/issues/308
> > >>> https://github.com/eventlet/eventlet/issues/189
> > >>>
> > >>> Python 3.4 failure mode:
> > >>> https://github.com/eventlet/eventlet/issues/476
> > >>> https://github.com/eventlet/eventlet/issues/145
> > >>>
> > >>> There are also more problems coming down the pipeline in Python 3.6:
> > >>>
> > >>> https://github.com/eventlet/eventlet/issues/371
> > >>>
> > >>> That one is resolved in eventlet 0.21, but we have that blocked by
> > >>> upper-constraints:
> > >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135
> > >>>
> > >>>
> > >>> Given that the code in question relates solely to standalone WSGI
> > >>> servers with SSL and everything should have already migrated to Apache,
> > >>> and that the upstream is clearly overworked and unlikely to merge fixes
> > >>> any time soon (plus we would have to deal with the fallout of moving the
> > >>> upper constraint), I agree that it would be preferable if we could just
> > >>> ditch this functionality.
> > >>
> > >> There are a few projects that have not migrated, and some that have
> > >> issues running in non standalone WSGI mode (due, ironically to eventlet)
> > >>
> > >> We should probably get people to run these projects behind an reverse
> > >> proxy, and terminate SSL there, but right now we don't have that
> > >> documented.
> > > 
> > > Do you know which projects?
> > 
> > I know of 2:
> > 
> > Designate - mainly due to the major lack of resources available during
> > the uwsgi goal period, and the level of work needed to unravel our
> > tooling to support it.
> > 
> > Glance - Has issues with image upload + uwsgi + eventlet [1]
> 
> This actually is a bit misleading. Glance works fine with image upload and 
> uwsgi.
> That's the only configuration of glance in a wsgi app that works because
> of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi 
> provides
> an alternate interface to read chunked requests which enables this to work.
> If you look at the bugs linked off that release note about image upload
> you'll see they're all fixed.

Is this documented somewhere?

> 
> The issues glance has with running in a wsgi app are related to it's use of
> async tasks via taskflow. (which includes the tasks api and image import 
> stuff)
> This shouldn't be hard to fix, and I've had patches up to address these for
> months:
> 
> https://review.openstack.org/#/c/531498/
> https://review.openstack.org/#/c/549743/
> 
> Part of the issue is that there is no api driven testing for these async api
> functions or any documented way to test them. Which is why I marked the 2nd
> one WIP, since I have no method to test it and after asking several times
> for a test case or some other method to validate these APIs without an answer.

It would be helpful if some of this detail made its way into the glance
section of 
https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects

> 
> In fact people are running glance under uwsgi in production already because 
> it 
> makes a lot of things easier and the current issues don't effect most users.

That's good to know!

> 
> -Matt Treinish
> 
> > 
> > I am sure there are probably others, but I know of these 2.
> > 
> > [1] 

Re: [openstack-dev] [qa][release][ironic][requirements] hacking 1.1.0 released and ironic CI gates failing pep8

2018-05-08 Thread Doug Hellmann
(I added the [qa] topic tag for the QA team, since they own hacking, and
[requirements] for that team since I have a question about capping.)

Excerpts from Julia Kreger's message of 2018-05-08 12:43:07 -0400:
> About two hours ago, we started seeing Ironic CI jobs failing pep8
> with new errors[1]. For some of our repositories, it just seems to be
> a couple of lines that need to be fixed. On ironic itself, supporting
> this might have us dead in the water for a while to fix the code in
> accordance with what hacking is now expecting.
> 
> That being said, dtantsur and dhellmann have the perception that new
> checks are supposed to be opt-in only, yet this new hacking appears to
> have W605 and W606 enabled by default, as indicated by discussion in
> #openstack-release[2].
> 
> Please advise, it seems like the release team ought to revert the
> breaking changes and cut a new release as soon as possible.
> 
> -Julia
> 
> [1]: 
> http://logs.openstack.org/87/557687/4/check/openstack-tox-pep8/75380de/job-output.txt.gz#_2018-05-08_14_46_47_179606
> [2]: 
> http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2018-05-08.log.html#t2018-05-08T16:30:22
> 

As discussed in #openstack-release, those checks are pulled in via
pycodestyle rather than hacking itself, and pycodestyle doesn't have an
opt-in policy.

Hacking is in the blacklist for requirements management, so teams
*ought* to be able to cap it, if I understand correctly. So I suggest at
least starting with a patch to test that.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [nova] [octavia] [ironic] [keystone] [policy] Spec. Freeze Exception - Default Roles

2018-05-08 Thread Lance Bragstad
This was discussed in today's meeting and it was pretty clear that we
should still do this for Rocky [0].

Updating this thread to include documentation of the discussion. Thanks,
Harry.

[0]
http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-08-16.00.log.html#l-15

On 05/04/2018 03:16 PM, Lance Bragstad wrote:
>
> On 05/04/2018 02:55 PM, Harry Rybacki wrote:
>> Greetings All,
>>
>> After a discussion in #openstack-tc[1] earlier today, the Keystone
>> team is adjusting its approach in proposing default roles[2].
>> Subsequently, I have ported the current default roles specification
>> from openstack-specs[3] to keystone-specs[2].
>>
>> The original review has been in a pretty stable state for a few weeks.
>> As such, I propose we allow the new spec an exception to the original
>> Rocky-m1 proposal freeze date.
> I don't have an issue with this, especially since we talked about it heavily 
> at the PTG. We also had people familiar with keystone +1 the openstack-spec 
> prior to keystone's proposal freeze. I'm OK granting an exception here if 
> other keystone contributors don't object.
>
>> I invite more discussion around default roles, and our proposed
>> approach. The Keystone team has a forum session[4] dedicated to this
>> topic at 1135 on day one of the Vancouver Summit. Everyone should feel
>> welcome and encouraged to attend -- we hope that this work will lead
>> to an OpenStack Community Goal in a not-so-distant release.
> I think scoping this down to be keystone-specific is a smart move. It allows 
> us to focus on building a solid template for other projects to learn from. I 
> was pleasantly surprised to hear people in -tc suggest this as a candidate 
> for a community goal in Stein or T.
>
> Also, big thanks to jroll, dhellmann, ttx, zaneb, smcginnis, johnsom, and 
> mnaser for taking time to work through this with us.
>
>> [1] - 
>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-04.log.html#t2018-05-04T14:40:36
>> [2] - https://review.openstack.org/#/c/566377/
>> [3] - https://review.openstack.org/#/c/523973/
>> [4] - 
>> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles
>>
>>
>> /R
>>
>> Harry Rybacki
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][ironic] hacking 1.1.0 released and ironic CI gates failing pep8

2018-05-08 Thread Julia Kreger
About two hours ago, we started seeing Ironic CI jobs failing pep8
with new errors[1]. For some of our repositories, it just seems to be
a couple of lines that need to be fixed. On ironic itself, supporting
this might have us dead in the water for a while to fix the code in
accordance with what hacking is now expecting.

That being said, dtantsur and dhellmann have the perception that new
checks are supposed to be opt-in only, yet this new hacking appears to
have W605 and W606 enabled by default, as indicated by discussion in
#openstack-release[2].

Please advise, it seems like the release team ought to revert the
breaking changes and cut a new release as soon as possible.

-Julia

[1]: 
http://logs.openstack.org/87/557687/4/check/openstack-tox-pep8/75380de/job-output.txt.gz#_2018-05-08_14_46_47_179606
[2]: 
http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2018-05-08.log.html#t2018-05-08T16:30:22

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition

2018-05-08 Thread Doug Hellmann
Excerpts from Graham Hayes's message of 2018-05-08 17:01:36 +0100:
> On 08/05/18 16:53, Doug Hellmann wrote:
> > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100:
> >> On 08/05/18 16:09, Zane Bitter wrote:
> >>> On 30/04/18 17:16, Ben Nemec wrote:
> > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
> >> 1. Fix oslo.service functional tests -- the Oslo team needs help
> >>     maintaining this library. Alternatively, we could move all
> >>     services to use cotyledon (https://pypi.org/project/cotyledon/).
> >>>
> >>> I submitted a patch that fixes the py35 gate (which was broken due to
> >>> changes between CPython 3.4 and 3.5), so once that merges we can flip
> >>> the gate back to voting:
> >>>
> >>> https://review.openstack.org/566714
> >>>
>  For everyone's awareness, we discussed this in the Oslo meeting today
>  and our first step is to see how many, if any, services are actually
>  relying on the oslo.service functionality that doesn't work in Python
>  3 today.  From there we will come up with a plan for how to move forward.
> 
>  https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
> >>>
> >>> These tests are currently skipped in both oslo_service and nova.
> >>> (Equivalent tests were removed from Neutron and Manila on the principle
> >>> that they're now oslo_service's responsibility.)
> >>>
> >>> This appears to be a series of long-standing bugs in eventlet:
> >>>
> >>> Python 3.5 failure mode:
> >>> https://github.com/eventlet/eventlet/issues/308
> >>> https://github.com/eventlet/eventlet/issues/189
> >>>
> >>> Python 3.4 failure mode:
> >>> https://github.com/eventlet/eventlet/issues/476
> >>> https://github.com/eventlet/eventlet/issues/145
> >>>
> >>> There are also more problems coming down the pipeline in Python 3.6:
> >>>
> >>> https://github.com/eventlet/eventlet/issues/371
> >>>
> >>> That one is resolved in eventlet 0.21, but we have that blocked by
> >>> upper-constraints:
> >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135
> >>>
> >>>
> >>> Given that the code in question relates solely to standalone WSGI
> >>> servers with SSL and everything should have already migrated to Apache,
> >>> and that the upstream is clearly overworked and unlikely to merge fixes
> >>> any time soon (plus we would have to deal with the fallout of moving the
> >>> upper constraint), I agree that it would be preferable if we could just
> >>> ditch this functionality.
> >>
> >> There are a few projects that have not migrated, and some that have
> >> issues running in non standalone WSGI mode (due, ironically to eventlet)
> >>
> >> We should probably get people to run these projects behind an reverse
> >> proxy, and terminate SSL there, but right now we don't have that
> >> documented.
> > 
> > Do you know which projects?
> 
> I know of 2:
> 
> Designate - mainly due to the major lack of resources available during
> the uwsgi goal period, and the level of work needed to unravel our
> tooling to support it.
> 
> Glance - Has issues with image upload + uwsgi + eventlet [1]
> 
> I am sure there are probably others, but I know of these 2.
> 
> [1] https://docs.openstack.org/releasenotes/glance/unreleased.html#b1

OK, so we need to put these things on the red flags list for moving to
Python 3. I've updated the status for oslo.service, designate, and
glance in https://wiki.openstack.org/wiki/Python3 to reflect that.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition

2018-05-08 Thread Matthew Treinish
On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote:
> On 08/05/18 16:53, Doug Hellmann wrote:
> > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100:
> >> On 08/05/18 16:09, Zane Bitter wrote:
> >>> On 30/04/18 17:16, Ben Nemec wrote:
> > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
> >> 1. Fix oslo.service functional tests -- the Oslo team needs help
> >>     maintaining this library. Alternatively, we could move all
> >>     services to use cotyledon (https://pypi.org/project/cotyledon/).
> >>>
> >>> I submitted a patch that fixes the py35 gate (which was broken due to
> >>> changes between CPython 3.4 and 3.5), so once that merges we can flip
> >>> the gate back to voting:
> >>>
> >>> https://review.openstack.org/566714
> >>>
>  For everyone's awareness, we discussed this in the Oslo meeting today
>  and our first step is to see how many, if any, services are actually
>  relying on the oslo.service functionality that doesn't work in Python
>  3 today.  From there we will come up with a plan for how to move forward.
> 
>  https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
> >>>
> >>> These tests are currently skipped in both oslo_service and nova.
> >>> (Equivalent tests were removed from Neutron and Manila on the principle
> >>> that they're now oslo_service's responsibility.)
> >>>
> >>> This appears to be a series of long-standing bugs in eventlet:
> >>>
> >>> Python 3.5 failure mode:
> >>> https://github.com/eventlet/eventlet/issues/308
> >>> https://github.com/eventlet/eventlet/issues/189
> >>>
> >>> Python 3.4 failure mode:
> >>> https://github.com/eventlet/eventlet/issues/476
> >>> https://github.com/eventlet/eventlet/issues/145
> >>>
> >>> There are also more problems coming down the pipeline in Python 3.6:
> >>>
> >>> https://github.com/eventlet/eventlet/issues/371
> >>>
> >>> That one is resolved in eventlet 0.21, but we have that blocked by
> >>> upper-constraints:
> >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135
> >>>
> >>>
> >>> Given that the code in question relates solely to standalone WSGI
> >>> servers with SSL and everything should have already migrated to Apache,
> >>> and that the upstream is clearly overworked and unlikely to merge fixes
> >>> any time soon (plus we would have to deal with the fallout of moving the
> >>> upper constraint), I agree that it would be preferable if we could just
> >>> ditch this functionality.
> >>
> >> There are a few projects that have not migrated, and some that have
> >> issues running in non standalone WSGI mode (due, ironically to eventlet)
> >>
> >> We should probably get people to run these projects behind an reverse
> >> proxy, and terminate SSL there, but right now we don't have that
> >> documented.
> > 
> > Do you know which projects?
> 
> I know of 2:
> 
> Designate - mainly due to the major lack of resources available during
> the uwsgi goal period, and the level of work needed to unravel our
> tooling to support it.
> 
> Glance - Has issues with image upload + uwsgi + eventlet [1]

This actually is a bit misleading. Glance works fine with image upload and 
uwsgi.
That's the only configuration of glance in a wsgi app that works because
of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi provides
an alternate interface to read chunked requests which enables this to work.
If you look at the bugs linked off that release note about image upload
you'll see they're all fixed.

The issues glance has with running in a wsgi app are related to its use of
async tasks via taskflow (which includes the tasks api and image import stuff).
This shouldn't be hard to fix, and I've had patches up to address these for
months:

https://review.openstack.org/#/c/531498/
https://review.openstack.org/#/c/549743/

Part of the issue is that there is no api driven testing for these async api
functions, nor any documented way to test them. That is why I marked the 2nd
one WIP: I have no method to test it, and I have asked several times for a
test case or some other way to validate these APIs without getting an answer.

In fact, people are already running glance under uwsgi in production because
it makes a lot of things easier and the current issues don't affect most users.

-Matt Treinish


> 
> I am sure there are probably others, but I know of these 2.
> 
> [1] https://docs.openstack.org/releasenotes/glance/unreleased.html#b1
> 

[2] There are a few other ways, as some other wsgi servers have grafted on
support for chunked transfer encoding. But, most wsgi servers have not
implemented a method.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [tc] Technical Committee Status update, 7 May

2018-05-08 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-05-07 10:53:04 -0400:

[snip]

> The Adjutant project application [10] is still under review, and
> the only votes registered are opposed. I anticipate having the topic
> of how we review project applications as one of several items we
> discuss during the TC retrospective session at the summit [11].
> 
> [10] https://review.openstack.org/553643
> [11] 
> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective

There is also a session dedicated to the Adjutant application
scheduled for Thursday. Sorry for the oversight.

https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21752/adjutant-official-project-status

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition

2018-05-08 Thread Graham Hayes
On 08/05/18 16:53, Doug Hellmann wrote:
> Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100:
>> On 08/05/18 16:09, Zane Bitter wrote:
>>> On 30/04/18 17:16, Ben Nemec wrote:
> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
>> 1. Fix oslo.service functional tests -- the Oslo team needs help
>>     maintaining this library. Alternatively, we could move all
>>     services to use cotyledon (https://pypi.org/project/cotyledon/).
>>>
>>> I submitted a patch that fixes the py35 gate (which was broken due to
>>> changes between CPython 3.4 and 3.5), so once that merges we can flip
>>> the gate back to voting:
>>>
>>> https://review.openstack.org/566714
>>>
 For everyone's awareness, we discussed this in the Oslo meeting today
 and our first step is to see how many, if any, services are actually
 relying on the oslo.service functionality that doesn't work in Python
 3 today.  From there we will come up with a plan for how to move forward.

 https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
>>>
>>> These tests are currently skipped in both oslo_service and nova.
>>> (Equivalent tests were removed from Neutron and Manila on the principle
>>> that they're now oslo_service's responsibility.)
>>>
>>> This appears to be a series of long-standing bugs in eventlet:
>>>
>>> Python 3.5 failure mode:
>>> https://github.com/eventlet/eventlet/issues/308
>>> https://github.com/eventlet/eventlet/issues/189
>>>
>>> Python 3.4 failure mode:
>>> https://github.com/eventlet/eventlet/issues/476
>>> https://github.com/eventlet/eventlet/issues/145
>>>
>>> There are also more problems coming down the pipeline in Python 3.6:
>>>
>>> https://github.com/eventlet/eventlet/issues/371
>>>
>>> That one is resolved in eventlet 0.21, but we have that blocked by
>>> upper-constraints:
>>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135
>>>
>>>
>>> Given that the code in question relates solely to standalone WSGI
>>> servers with SSL and everything should have already migrated to Apache,
>>> and that the upstream is clearly overworked and unlikely to merge fixes
>>> any time soon (plus we would have to deal with the fallout of moving the
>>> upper constraint), I agree that it would be preferable if we could just
>>> ditch this functionality.
>>
>> There are a few projects that have not migrated, and some that have
>> issues running in non-standalone WSGI mode (due, ironically, to eventlet).
>>
>> We should probably get people to run these projects behind a reverse
>> proxy and terminate SSL there, but right now we don't have that
>> documented.
> 
> Do you know which projects?

I know of 2:

Designate - mainly due to the major lack of resources available during
the uwsgi goal period, and the level of work needed to unravel our
tooling to support it.

Glance - Has issues with image upload + uwsgi + eventlet [1]

I am sure there are probably others, but I know of these 2.

[1] https://docs.openstack.org/releasenotes/glance/unreleased.html#b1

> 
> Doug
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg] [glance] [nova] Cyborg/Nova spec for os-acc is out

2018-05-08 Thread Nadathur, Sundar

Hi all,
    The Cyborg compute node specification has been published: 
https://review.openstack.org/#/c/566798/ . Please review it.


The main factors defined in this spec are:
* The behavior with respect to accelerators when various Compute API [1]
operations are applied. E.g., on a reboot/pause/suspend, the assigned
accelerators are left intact, but on a stop or shelve they are detached.
* The APIs for the newly proposed os-acc library. This is structured
along the same lines as os-vif usage [2] (see the sketch below). Changes are
needed in Nova compute to invoke os-acc APIs on specific instance-related events.
* Interactions of Cyborg with Glance in the compute node. The plan is to 
use Glance properties. No changes are needed in Glance.
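
For context, the os-vif usage pattern [2] that os-acc is modelled on looks
roughly like the sketch below. The object field values are placeholders,
actually plugging a VIF requires a configured compute host, and the eventual
os-acc entry points are still to be defined by the spec under review -- this
only illustrates the call shape Nova compute uses today.

    import uuid

    import os_vif
    from os_vif.objects import instance_info as ii_obj
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # load the plug/unplug plugins once per process

    # Placeholder values -- a real caller builds these from Nova/Neutron data.
    instance = ii_obj.InstanceInfo(uuid=str(uuid.uuid4()), name='demo-instance')
    vif = vif_obj.VIFBridge(
        id=str(uuid.uuid4()),
        address='fa:16:3e:11:22:33',
        vif_name='tap-demo',
        bridge_name='br-demo',
        plugin='linux_bridge')

    # Nova compute wraps these calls around instance lifecycle events; the
    # proposed os-acc would expose analogous entry points for accelerators.
    os_vif.plug(vif, instance)
    os_vif.unplug(vif, instance)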


References:
[1] https://developer.openstack.org/api-guide/compute/server_concepts.html
[2] https://docs.openstack.org/os-vif/queens/user/usage.html

Thanks & Regards,
Sundar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition

2018-05-08 Thread Doug Hellmann
Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100:
> On 08/05/18 16:09, Zane Bitter wrote:
> > On 30/04/18 17:16, Ben Nemec wrote:
> >>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
>  1. Fix oslo.service functional tests -- the Oslo team needs help
>      maintaining this library. Alternatively, we could move all
>      services to use cotyledon (https://pypi.org/project/cotyledon/).
> > 
> > I submitted a patch that fixes the py35 gate (which was broken due to
> > changes between CPython 3.4 and 3.5), so once that merges we can flip
> > the gate back to voting:
> > 
> > https://review.openstack.org/566714
> > 
> >> For everyone's awareness, we discussed this in the Oslo meeting today
> >> and our first step is to see how many, if any, services are actually
> >> relying on the oslo.service functionality that doesn't work in Python
> >> 3 today.  From there we will come up with a plan for how to move forward.
> >>
> >> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
> > 
> > These tests are currently skipped in both oslo_service and nova.
> > (Equivalent tests were removed from Neutron and Manila on the principle
> > that they're now oslo_service's responsibility.)
> > 
> > This appears to be a series of long-standing bugs in eventlet:
> > 
> > Python 3.5 failure mode:
> > https://github.com/eventlet/eventlet/issues/308
> > https://github.com/eventlet/eventlet/issues/189
> > 
> > Python 3.4 failure mode:
> > https://github.com/eventlet/eventlet/issues/476
> > https://github.com/eventlet/eventlet/issues/145
> > 
> > There are also more problems coming down the pipeline in Python 3.6:
> > 
> > https://github.com/eventlet/eventlet/issues/371
> > 
> > That one is resolved in eventlet 0.21, but we have that blocked by
> > upper-constraints:
> > http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135
> > 
> > 
> > Given that the code in question relates solely to standalone WSGI
> > servers with SSL and everything should have already migrated to Apache,
> > and that the upstream is clearly overworked and unlikely to merge fixes
> > any time soon (plus we would have to deal with the fallout of moving the
> > upper constraint), I agree that it would be preferable if we could just
> > ditch this functionality.
> 
> There are a few projects that have not migrated, and some that have
> issues running in non-standalone WSGI mode (due, ironically, to eventlet).
> 
> We should probably get people to run these projects behind a reverse
> proxy and terminate SSL there, but right now we don't have that
> documented.

Do you know which projects?

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition

2018-05-08 Thread Graham Hayes
On 08/05/18 16:09, Zane Bitter wrote:
> On 30/04/18 17:16, Ben Nemec wrote:
>>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
 1. Fix oslo.service functional tests -- the Oslo team needs help
     maintaining this library. Alternatively, we could move all
     services to use cotyledon (https://pypi.org/project/cotyledon/).
> 
> I submitted a patch that fixes the py35 gate (which was broken due to
> changes between CPython 3.4 and 3.5), so once that merges we can flip
> the gate back to voting:
> 
> https://review.openstack.org/566714
> 
>> For everyone's awareness, we discussed this in the Oslo meeting today
>> and our first step is to see how many, if any, services are actually
>> relying on the oslo.service functionality that doesn't work in Python
>> 3 today.  From there we will come up with a plan for how to move forward.
>>
>> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
> 
> These tests are currently skipped in both oslo_service and nova.
> (Equivalent tests were removed from Neutron and Manila on the principle
> that they're now oslo_service's responsibility.)
> 
> This appears to be a series of long-standing bugs in eventlet:
> 
> Python 3.5 failure mode:
> https://github.com/eventlet/eventlet/issues/308
> https://github.com/eventlet/eventlet/issues/189
> 
> Python 3.4 failure mode:
> https://github.com/eventlet/eventlet/issues/476
> https://github.com/eventlet/eventlet/issues/145
> 
> There are also more problems coming down the pipeline in Python 3.6:
> 
> https://github.com/eventlet/eventlet/issues/371
> 
> That one is resolved in eventlet 0.21, but we have that blocked by
> upper-constraints:
> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135
> 
> 
> Given that the code in question relates solely to standalone WSGI
> servers with SSL and everything should have already migrated to Apache,
> and that the upstream is clearly overworked and unlikely to merge fixes
> any time soon (plus we would have to deal with the fallout of moving the
> upper constraint), I agree that it would be preferable if we could just
> ditch this functionality.

There are a few projects that have not migrated, and some that have
issues running in non-standalone WSGI mode (due, ironically, to eventlet).

We should probably get people to run these projects behind a reverse
proxy and terminate SSL there, but right now we don't have that
documented.

> cheers,
> Zane.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition

2018-05-08 Thread Zane Bitter

On 30/04/18 17:16, Ben Nemec wrote:

Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:

1. Fix oslo.service functional tests -- the Oslo team needs help
    maintaining this library. Alternatively, we could move all
    services to use cotyledon (https://pypi.org/project/cotyledon/).


I submitted a patch that fixes the py35 gate (which was broken due to 
changes between CPython 3.4 and 3.5), so once that merges we can flip 
the gate back to voting:


https://review.openstack.org/566714

For everyone's awareness, we discussed this in the Oslo meeting today 
and our first step is to see how many, if any, services are actually 
relying on the oslo.service functionality that doesn't work in Python 3 
today.  From there we will come up with a plan for how to move forward.


https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.


These tests are currently skipped in both oslo_service and nova. 
(Equivalent tests were removed from Neutron and Manila on the principle 
that they're now oslo_service's responsibility.)


This appears to be a series of long-standing bugs in eventlet:

Python 3.5 failure mode:
https://github.com/eventlet/eventlet/issues/308
https://github.com/eventlet/eventlet/issues/189

Python 3.4 failure mode:
https://github.com/eventlet/eventlet/issues/476
https://github.com/eventlet/eventlet/issues/145

There are also more problems coming down the pipeline in Python 3.6:

https://github.com/eventlet/eventlet/issues/371

That one is resolved in eventlet 0.21, but we have that blocked by 
upper-constraints: 
http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135


Given that the code in question relates solely to standalone WSGI 
servers with SSL and everything should have already migrated to Apache, 
and that the upstream is clearly overworked and unlikely to merge fixes 
any time soon (plus we would have to deal with the fallout of moving the 
upper constraint), I agree that it would be preferable if we could just 
ditch this functionality.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] The Weekly Owl - 20th Edition

2018-05-08 Thread Alex Schultz
Welcome to the twentieth edition of a weekly update in TripleO world!
The goal is to provide a short reading (less than 5 minutes) to learn
what's new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-May/130090.html

+-+
| General announcements |
+-+

+--> Further discussions about Storyboard migration will be coming to
the ML this week.
+--> We have 4 more weeks until milestone 2! Check out the schedule:
https://releases.openstack.org/rocky/schedule.html

+--+
| Continuous Integration |
+--+

+--> Ruck is myoung and Rover is sshnaidm. Please let them know about any
new CI issues.
+--> Master promotion is 0 days, Queens is 1 day, Pike is 3 days, and
Ocata is 2 days. Kudos folks!
+--> Upcoming DLRN changes coming that may impact CI, see
http://lists.openstack.org/pipermail/openstack-dev/2018-May/130195.html
+--> Still working on libvirt based multinode reproducer, see
https://goo.gl/DYCnkx
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+-+
| Upgrades |
+-+

+--> Continued progress on ffwd upgrades as well as cleaning up
upgrade/update jobs.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---+
| Containers |
+---+

+--> Continued efforts to align instack-undercloud & containerized undercloud
+--> All-in-one work is beginning to extract the deployment
framework/tooling from the containerized undercloud
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+--+
| config-download |
+--+

+--> Progress on OpenStack operations Ansible role:
https://github.com/samdoran/ansible-role-openstack-operations
+--> Working on Skydive transition to external tasks
+--> Working on improving performance when deploying Ceph with Ansible.
+--> client/api/workflow for "play deployment failures list",
equivalent to "stack failures list"
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--+
| Integration |
+--+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+-+
| UI/CLI |
+-+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---+
| Validations |
+---+

+--> Custom validations
+--> Fixing node health validations
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---+
| Networking |
+---+

+--> Continued work on neutron sidecar containers
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--+
| Workflows |
+--+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+---+
| Security |
+---+

+--> Patches for public TLS by default are up,
https://review.openstack.org/#/q/topic:public-tls-default+status:open
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

++
| Owl fact  |
++

Burrowing owls migrate to the Rocky Mountain Arsenal National Wildlife
Refuge (near Denver, CO) every summer and raise their young in
abandoned prairie dog burrows.
https://www.fws.gov/nwrs/threecolumn.aspx?id=2147510941

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Create a Volume type using OpenStack

2018-05-08 Thread Sean McGinnis
On Tue, May 08, 2018 at 12:18:36PM +0100, Duncan Thomas wrote:
> If you're using the cinder CLI (aka python-cinderclient) then if you
> run with --debug, then you can see the REST calls used.
> 
> >
> > I need API's to
> > i) list all the Volume types in the OpenStack
> > ii) I need API's to create the Volume types in the OpenStack
> >

Hi Hari,

The volume type API calls are in a different section (Volume types vs Volumes):

https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-types-types

So I believe you are looking for:

i) 
https://developer.openstack.org/api-ref/block-storage/v3/index.html#list-all-volume-types

and

ii) 
https://developer.openstack.org/api-ref/block-storage/v3/index.html#create-a-volume-type
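
In case a concrete example helps, the two calls look roughly like the sketch
below using python-requests; the endpoint URL, project ID, pool name and token
are placeholders (normally obtained from Keystone, e.g. via keystoneauth1), and
error handling is omitted.

    import requests

    # Placeholders -- fill in from your Keystone catalog and token.
    CINDER = 'http://CONTROLLER:8776/v3/PROJECT_ID'
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    # i) list all volume types
    resp = requests.get(CINDER + '/types', headers=HEADERS)
    print([vt['name'] for vt in resp.json()['volume_types']])

    # ii) create a volume type, mirroring the CLI commands from the question
    payload = {'volume_type': {
        'name': 'poolName',
        'extra_specs': {
            'storagetype:pool': 'poolName',
            'volume_backend_name': 'rbd-poolName',
        },
    }}
    resp = requests.post(CINDER + '/types', json=payload, headers=HEADERS)
    print(resp.json()['volume_type']['id'])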

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] reboot a rescued instance?

2018-05-08 Thread Matt Riedemann

On 5/8/2018 7:41 AM, Bob Ball wrote:

I'd be hesitant to permit reboot-from-rescue for all drivers as I'm not sure 
the drivers would have consistent (or perhaps working!) behaviours?  Is there a 
way to enable this when using XenAPI?


Off the top of my head, the virt driver could report a capability for
this, which would get modeled in placement as a standard trait on the compute
node resource provider. The API could then check for that trait in
Placement and fail if it's not found.
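
As a rough sketch of that idea (the trait name COMPUTE_RESCUE_REBOOT is made up
for illustration, and a real implementation would go through nova's scheduler
report client and os-traits rather than raw HTTP):

    import requests

    PLACEMENT = 'http://CONTROLLER/placement'          # placeholder endpoint
    HEADERS = {
        'X-Auth-Token': 'TOKEN',                       # placeholder token
        'OpenStack-API-Version': 'placement 1.6',      # traits API needs >= 1.6
    }

    def supports_rescue_reboot(compute_node_rp_uuid):
        """Return True if the compute node resource provider has the trait."""
        url = '%s/resource_providers/%s/traits' % (PLACEMENT, compute_node_rp_uuid)
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        return 'COMPUTE_RESCUE_REBOOT' in resp.json().get('traits', [])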


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install

2018-05-08 Thread Marcin Juszkiewicz
W dniu 18.04.2018 o 11:02, Michel Peterson pisze:

> How can we fix this?

Any update on it? Would like to get rid of current workarounds.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] reboot a rescued instance?

2018-05-08 Thread Bob Ball
Hi Matt,

My understanding is that this is being used by Rackspace.

AFAIK the change isn't upstream because there was no sensible way to permit 
reboot of a rescued instance for XenAPI users but prevent it for other drivers.

I'd be hesitant to permit reboot-from-rescue for all drivers as I'm not sure 
the drivers would have consistent (or perhaps working!) behaviours?  Is there a 
way to enable this when using XenAPI?

Bob

-Original Message-
From: Matt Riedemann [mailto:mriede...@gmail.com] 
Sent: 04 May 2018 14:50
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [nova] reboot a rescued instance?

For full details on this, see the IRC conversation [1].

tl;dr: the nova compute manager and xen virt driver assume that you can reboot 
a rescued instance [2] but the API does not allow that [3] and as far as I can 
tell, it never has.

I can only assume that Rackspace had an out of tree change to the API to allow 
rebooting a rescued instance. I don't know why that wouldn't have been 
upstreamed, but the upstream API doesn't allow it. I'm also not aware of 
anything internal to nova that reboots an instance in a rescued state.

So the question now is, should we add rescue to the possible states to reboot 
an instance in the API? Or just rollback this essentially dead code in the 
compute manager and xen virt driver? I don't know if any other virt drivers 
will support rebooting a rescued instance.

[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-05-03.log.html#t2018-05-03T18:49:58
[2]
https://review.openstack.org/#/q/topic:bug/1170237+(status:open+OR+status:merged
[3]
https://github.com/openstack/nova/blob/4b0d0ea9f18139d58103a520a6a4e9119e19a4de/nova/compute/vm_states.py#L69-L72

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Create a Volume type using OpenStack

2018-05-08 Thread Duncan Thomas
If you're using the cinder CLI (aka python-cinderclient), you can run with
--debug to see the REST calls used.

I would assume the unified openstack CLI client has a similar mode.

On 8 May 2018 at 12:13, Hari Prasanth Loganathan
 wrote:
> Hi Team,
>
> 1) I am able to list all the project using the OpenStack REST API,
>
>   http://{IP_ADDRESS}:5000/v3/auth/projects/
>
> But as per the documentation of /v3/ API's in OpenStack
> (https://developer.openstack.org/api-ref/block-storage/v3/index.html#volumes-volumes),
>
> I need API's to
> i) list all the Volume types in the OpenStack
> ii) I need API's to create the Volume types in the OpenStack
>
> I am able to create via CLI, I need to perform the same using API
> Create Volume Type
> openstack volume type create ${poolName}
> cinder type-key "${poolName}" set storagetype:pool=${poolName}
> volume_backend_name=rbd-${poolName}
>
>
> I am able to create via CLI, I need to perform the same using API. Please
> help me in this.
>
>
> Thanks,
> Hari
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Duncan Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Create a Volume type using OpenStack

2018-05-08 Thread Hari Prasanth Loganathan
Hi Team,

1) I am able to list all the projects using the OpenStack REST API,

  http://{IP_ADDRESS}:5000/v3/auth/projects/

But as per the documentation of the /v3/ APIs in OpenStack (
https://developer.openstack.org/api-ref/block-storage/v3/index.html#volumes-volumes
),

I need APIs to:
i) list all the volume types in OpenStack
ii) create volume types in OpenStack

I am able to create these via the CLI; I need to perform the same using the API.
Create Volume Type:
openstack volume type create ${poolName}
cinder type-key "${poolName}" set storagetype:pool=${poolName} \
volume_backend_name=rbd-${poolName}


I am able to create these via the CLI; please help me do the same using the
API.


Thanks,
Hari
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-05-08 Thread Mark Goddard
Thanks everyone for putting your trust in me!

On 8 May 2018 at 11:13, Jeffrey Zhang  wrote:

> Time is up. And welcome mgoddard to the team :D
>
> On Thu, May 3, 2018 at 5:47 PM, Goutham Pratapa 
> wrote:
>
>> +1 for `mgoddard`
>>
>> On Thu, May 3, 2018 at 1:21 PM, duon...@vn.fujitsu.com <
>> duon...@vn.fujitsu.com> wrote:
>>
>>> +1
>>>
>>>
>>>
>>> Sorry for my late reply, thank you for your contribution in Kolla.
>>>
>>>
>>>
>>> Regards,
>>>
>>> Duong
>>>
>>>
>>>
>>> *From:* Jeffrey Zhang [mailto:zhang.lei@gmail.com]
>>> *Sent:* Thursday, April 26, 2018 10:31 PM
>>> *To:* OpenStack Development Mailing List >> .org>
>>> *Subject:* [openstack-dev] [kolla][vote]Core nomination for Mark
>>> Goddard (mgoddard) as kolla core member
>>>
>>>
>>>
>>> Kolla core reviewer team,
>>>
>>> It is my pleasure to nominate mgoddard for kolla core team.
>>>
>>> Mark has been working both upstream and downstream with kolla and
>>> kolla-ansible for over two years, building bare metal compute clouds with
>>> ironic for HPC. He's been involved with OpenStack since 2014. He started
>>> the kayobe deployment project which complements kolla-ansible. He is
>>> also the most active non-core contributor for last 90 days[1]
>>>
>>> Consider this nomination a +1 vote from me
>>>
>>> A +1 vote indicates you are in favor of mgoddard as a candidate, a -1
>>> is a veto. Voting is open for 7 days until May 4th, or a unanimous
>>> response is reached or a veto vote occurs.
>>>
>>> [1] http://stackalytics.com/report/contribution/kolla-group/90
>>>
>>>
>>>
>>> --
>>>
>>> Regards,
>>>
>>> Jeffrey Zhang
>>>
>>> Blog: http://xcodest.me
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Cheers !!!
>> Goutham Pratapa
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member

2018-05-08 Thread Jeffrey Zhang
Time is up. And welcome mgoddard to the team :D

On Thu, May 3, 2018 at 5:47 PM, Goutham Pratapa 
wrote:

> +1 for `mgoddard`
>
> On Thu, May 3, 2018 at 1:21 PM, duon...@vn.fujitsu.com <
> duon...@vn.fujitsu.com> wrote:
>
>> +1
>>
>>
>>
>> Sorry for my late reply, thank you for your contribution in Kolla.
>>
>>
>>
>> Regards,
>>
>> Duong
>>
>>
>>
>> *From:* Jeffrey Zhang [mailto:zhang.lei@gmail.com]
>> *Sent:* Thursday, April 26, 2018 10:31 PM
>> *To:* OpenStack Development Mailing List > .org>
>> *Subject:* [openstack-dev] [kolla][vote]Core nomination for Mark Goddard
>> (mgoddard) as kolla core member
>>
>>
>>
>> Kolla core reviewer team,
>>
>> It is my pleasure to nominate mgoddard for kolla core team.
>>
>> Mark has been working both upstream and downstream with kolla and
>> kolla-ansible for over two years, building bare metal compute clouds with
>> ironic for HPC. He's been involved with OpenStack since 2014. He started
>> the kayobe deployment project which complements kolla-ansible. He is
>> also the most active non-core contributor for last 90 days[1]
>>
>> Consider this nomination a +1 vote from me
>>
>> A +1 vote indicates you are in favor of mgoddard as a candidate, a -1
>> is a veto. Voting is open for 7 days until May 4th, or a unanimous
>> response is reached or a veto vote occurs.
>>
>> [1] http://stackalytics.com/report/contribution/kolla-group/90
>>
>>
>>
>> --
>>
>> Regards,
>>
>> Jeffrey Zhang
>>
>> Blog: http://xcodest.me
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Cheers !!!
> Goutham Pratapa
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][ci]Does the openstack ci vms start each time clear up enough?

2018-05-08 Thread linghucongsong


Hi cboylan, thanks for the reply!


I have rechecked several times, but the second run always fails while the first
run always passes.

Could the reason be what luckyvega wrote in the email below?




At 2018-05-06 18:20:47, "Vega Cai"  wrote:

To test whether it's our new patch that causes the problem, I submitted a dummy
patch [1] to trigger the CI, and the CI failed again. Checking the nova scheduler
log, it is very strange that the scheduling starts with 0 hosts at the
beginning.


May 06 09:40:34.358585 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG oslo_service.periodic_task [None 
req-008ee30a-47a1-40a2-bf64-cb0f1719806e None None] Running periodic task 
SchedulerManager._run_periodic_tasks {{(pid=23795) run_periodic_tasks 
/usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215}}
May 06 09:41:23.968029 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG nova.scheduler.manager [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Starting to schedule for 
instances: [u'8b227e85-8959-4e07-be3d-1bc094c115c1'] {{(pid=23795) 
select_destinations /opt/stack/new/nova/nova/scheduler/manager.py:118}}
May 06 09:41:23.969293 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "placement_client" 
acquired by "nova.scheduler.client.report._create_client" :: waited 0.000s 
{{(pid=23795) inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273}}
May 06 09:41:23.975304 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "placement_client" 
released by "nova.scheduler.client.report._create_client" :: held 0.006s 
{{(pid=23795) inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285}}
May 06 09:41:24.276470 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock 
"6e118c71-9008-4694-8aee-faa607944c5f" acquired by 
"nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s 
{{(pid=23795) inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273}}
May 06 09:41:24.279331 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock 
"6e118c71-9008-4694-8aee-faa607944c5f" released by 
"nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s 
{{(pid=23795) inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285}}
May 06 09:41:24.302854 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG oslo_db.sqlalchemy.engines [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 {{(pid=23795) _check_effective_sql_mode 
/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:308}}
May 06 09:41:24.321713 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG nova.filters [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Starting with 0 host(s) 
{{(pid=23795) get_filtered_objects /opt/stack/new/nova/nova/filters.py:70}}
May 06 09:41:24.322136 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: INFO nova.filters [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filter RetryFilter 
returned 0 hosts
May 06 09:41:24.322614 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG nova.filters [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtering removed all 
hosts for the request with instance ID '8b227e85-8959-4e07-be3d-1bc094c115c1'. 
Filter results: [('RetryFilter', None)] {{(pid=23795) get_filtered_objects 
/opt/stack/new/nova/nova/filters.py:129}}
May 06 09:41:24.323029 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: INFO nova.filters [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtering removed all 
hosts for the request with instance ID '8b227e85-8959-4e07-be3d-1bc094c115c1'. 
Filter results: ['RetryFilter: (start: 0, end: 0)']
May 06 09:41:24.323419 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG nova.scheduler.filter_scheduler [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtered [] {{(pid=23795) 
_get_sorted_hosts /opt/stack/new/nova/nova/scheduler/filter_scheduler.py:404}}
May 06 09:41:24.323861 ubuntu-xenial-inap-mtl01-0003885152 
nova-scheduler[21962]: DEBUG nova.scheduler.filter_scheduler [None 
req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] There are 0 hosts 
available but 1 instances requested to build. {{(pid=23795) 
_ensure_sufficient_hosts 

[openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db

2018-05-08 Thread Balázs Gibizer

Hi,

The oslo UUIDField emits a warning if the string used as a field value 
does not pass the validation of the uuid.UUID(str(value)) call [3]. All 
the offending places are fixed in nova except the nova-manage cell_v2 
map_instances call [1][2]. That call uses markers in the DB that are 
not valid UUIDs. If we could fix this last offender then we could merge 
the patch [4] that changes this warning to an exception in the nova 
tests to avoid such future rule violations.


However I'm not sure it is easy to fix. Replacing
'INSTANCE_MIGRATION_MARKER' at [1] with
'00000000-0000-0000-0000-000000000000' might work, but I don't know what to
do with instance_uuid.replace(' ', '-') [2] to make it a valid UUID.
Also, if there is an unfinished mapping in a deployment and the marker is
then changed in the code, that leads to inconsistencies.
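
To make the constraint concrete, here is a small sketch (not nova code) of the
check oslo's UUIDField effectively performs [3], using the kinds of marker
values map_instances stores:

    import uuid

    def passes_uuidfield_validation(value):
        """Mimic the uuid.UUID(str(value)) check that triggers the warning [3]."""
        try:
            uuid.UUID(str(value))
            return True
        except (TypeError, ValueError, AttributeError):
            return False

    # The markers map_instances stores today do not validate:
    print(passes_uuidfield_validation('INSTANCE_MIGRATION_MARKER'))            # False
    print(passes_uuidfield_validation('8b227e85 8959 4e07 be3d 1bc094c115c1')) # False
    # ...while the nil UUID would:
    print(passes_uuidfield_validation('00000000-0000-0000-0000-000000000000')) # True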


I'm open to any suggestions.

Cheers,
gibi


[1] 
https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1168
[2] 
https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1180
[3] 
https://github.com/openstack/oslo.versionedobjects/blob/29e643e4a9866b33965b68fc8dfb8acf30fa/oslo_versionedobjects/fields.py#L359

[4] https://review.openstack.org/#/c/540386


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Notification update week 19

2018-05-08 Thread Balázs Gibizer

Hi,

After a bit of silence here is the latest notification status.

Bugs


[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending
sometimes hits the keystone API to get glance endpoints
A fix has been proposed and has many +1s:
https://review.openstack.org/#/c/564528/


[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit
when notifications are sent during live migration
We need to go through the live migration codepath and make sure that
the different live migration notifications are sent at the proper time.

[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth
usage db query in notifications when the virt driver does not support
collecting such data

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens to
find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/


Versioned notification transformation
-
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* https://review.openstack.org/#/c/403660 Transform instance.exists 
notification - lost the +2 due to a merge conflict
* https://review.openstack.org/#/c/410297/  Transform missing delete 
notifications - many +1s, needs core review



Introduce instance.lock and instance.unlock notifications
-
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work:
https://review.openstack.org/#/c/526251/ - No progress. I've pinged the 
author but no response.



Add the user id and project id of the user initiated the instance
action to the notification
-
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work
https://review.openstack.org/#/c/536243/ - No progress. I've pinged the 
author but no response.



Sending full traceback in versioned notifications
-
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
The bp was reassigned to Kevin_Zheng and he proposed a WIP patch 
https://review.openstack.org/#/c/564092/



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
Based on the PoC patch https://review.openstack.org/#/c/559076/ we see 
basic problems with the overall bp. See Matt's mail from the ML 
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129804.html



Add notification support for trusted_certs
--
This is part of the bp nova-validate-certificates implementation series 
to extend some of the instance notifications: 
https://review.openstack.org/#/c/563269
I have to re-review the patch as it seems Brianna updated it based on 
my suggestions.



Introduce Pending VM state
--
The spec https://review.openstack.org/#/c/554212 proposes to introduce a
new notification along with the new state. I have to give this proposal a
detailed review.



Weekly meeting
--
The next meeting will be held on 8th of May on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180508T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Scheduling switch to django >= 2.0

2018-05-08 Thread Thomas Goirand
Hi,

It has been decided that, in Debian, we'll switch to Django 2.0 after
Buster is released. Buster is to be frozen next February. This
means that we have roughly one more year before Django 1.x goes away.

Hopefully, Horizon will be ready for it, right?

Hoping this helps,
Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]spec review day (May 9th)

2018-05-08 Thread Zhipeng Huang
Hi team,

Let's make use of the team meeting on Wed to kick off a whole day of
concentrated review of the critical Rocky specs [0], and try to get as
many of them done as possible.

We will start with the meeting, and folks in the US and Europe can carry on
until the end of the day, when Asian devs can come in again :)

Initial agenda for Wed team meeting:
- Promote Sundar as new core reviewer
- KubeCon feedback
- Bugs and Issues
- Spec Review Day kickstart

[0] https://etherpad.openstack.org/p/cyborg-rocky-spec-day
-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev