[openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/pike image build by devstack

2018-05-18 Thread rezroo

Hi - let's try this again - this time with pike :-)
Any suggestions on how to get the image builder to create a larger loop 
device? I think that's what the problem is.

Thanks in advance.
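
For anyone hitting the same wall, here is a minimal sketch of requesting a larger image, and hence a larger loop device, from diskimage-builder. Assumptions: diskimage-builder honors DIB_IMAGE_SIZE (in GB), octavia's diskimage-create.sh accepts -s for the size, and the checkout path below is hypothetical.

    # Minimal sketch: enlarge the amphora image so DIB allocates a bigger loop device.
    import os
    import subprocess

    env = dict(os.environ)
    env["DIB_IMAGE_SIZE"] = "5"  # ask DIB for a 5 GB image instead of the default

    subprocess.check_call(
        ["./diskimage-create.sh", "-s", "5"],  # -s: image size in GB (octavia's builder)
        cwd=os.path.expanduser("~/octavia/diskimage-create"),  # hypothetical path
        env=env,
    )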

   2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
   diskimage_builder.block_device.level1.mbr [-] Write partition entry
   blockno [0] entry [0] start [2048] length [4190208]
   2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo sync]
   2018-05-19 05:03:04.538 | 2018-05-19 05:03:04.537 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo kpartx -avs
   /dev/loop3]
   2018-05-19 05:03:04.642 | 2018-05-19 05:03:04.642 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkfs -t ext4
   -i 4096 -J size=64 -L cloudimg-rootfs -U 376d4b4d-2597-4838-963a-3d
   9c5fcb5d9c -q /dev/mapper/loop3p1]
   2018-05-19 05:03:04.824 | 2018-05-19 05:03:04.823 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
   /tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.833 | 2018-05-19 05:03:04.833 INFO
   diskimage_builder.block_device.level3.mount [-] Mounting
   [mount_mkfs_root] to [/tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.834 | 2018-05-19 05:03:04.833 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mount
   /dev/mapper/loop3p1 /tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.850 | 2018-05-19 05:03:04.850 INFO
   diskimage_builder.block_device.blockdevice [-] create() finished
   2018-05-19 05:03:05.527 | 2018-05-19 05:03:05.527 INFO
   diskimage_builder.block_device.blockdevice [-] Getting value for
   [image-block-device]
   2018-05-19 05:03:06.168 | 2018-05-19 05:03:06.168 INFO
   diskimage_builder.block_device.blockdevice [-] Getting value for
   [image-block-devices]
   2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
   diskimage_builder.block_device.blockdevice [-] Creating fstab
   2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
   /tmp/dib_build.zv2VZo3W/built/etc]
   2018-05-19 05:03:06.855 | 2018-05-19 05:03:06.855 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo cp
   /tmp/dib_build.zv2VZo3W/states/block-device/fstab
   /tmp/dib_build.zv2VZo3W/built/etc/fstab]
   2018-05-19 05:03:12.946 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
   2018-05-19 05:03:12.947 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
   2018-05-19 05:03:12.947 | ++ export
   'DIB_BOOTLOADER_DEFAULT_CMDLINE=nofb nomodeset vga=normal'
   2018-05-19 05:03:12.947 | ++ DIB_BOOTLOADER_DEFAULT_CMDLINE='nofb
   nomodeset vga=normal'
   2018-05-19 05:03:12.948 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.950 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.950 |  dirname
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.951 | +++ PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/finalise.d/../environment.d/..'
   2018-05-19 05:03:12.951 | +++ dib-init-system
   2018-05-19 05:03:12.953 | ++ DIB_INIT_SYSTEM=systemd
   2018-05-19 05:03:12.953 | ++ export DIB_INIT_SYSTEM
   2018-05-19 05:03:12.954 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
   2018-05-19 05:03:12.955 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
   2018-05-19 05:03:12.955 | ++ export PIP_DOWNLOAD_CACHE=/tmp/pip
   2018-05-19 05:03:12.955 | ++ PIP_DOWNLOAD_CACHE=/tmp/pip
   2018-05-19 05:03:12.956 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
   2018-05-19 05:03:12.958 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
   2018-05-19 05:03:12.958 | ++ export DISTRO_NAME=ubuntu
   2018-05-19 05:03:12.958 | ++ DISTRO_NAME=ubuntu
   2018-05-19 05:03:12.958 | ++ export DIB_RELEASE=xenial
   2018-05-19 05:03:12.958 | ++ DIB_RELEASE=xenial
   2018-05-19 05:03:12.959 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash
   2018-05-19 05:03:12.961 | + source
   /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash
   2018-05-19 05:03:12.961 | ++ export DIB_DEFAULT_INSTALLTYPE=source
   2018-05-19 05:03:12.961 | ++ DIB_DEFAULT_INSTALLTYPE=source
   2018-05-19 05:03:12.962 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   

Re: [openstack-dev] Fast Forward Upgrades (FFU) Forum Sessions

2018-05-18 Thread Erik McCormick
On Fri, May 18, 2018 at 3:59 PM, Thierry Carrez  wrote:
> Erik McCormick wrote:
>> There are two forum sessions in Vancouver covering Fast Forward Upgrades.
>>
>> Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220
>> Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220
>>
>> The combined etherpad for both sessions can be found at:
>> https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades
>
> You should add it to the list of all etherpads at:
> https://wiki.openstack.org/wiki/Forum/Vancouver2018
>
Done

> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-Erik

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fast Forward Upgrades (FFU) Forum Sessions

2018-05-18 Thread Thierry Carrez
Erik McCormick wrote:
> There are two forum sessions in Vancouver covering Fast Forward Upgrades.
> 
> Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220
> Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220
> 
> The combined etherpad for both sessions can be found at:
> https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades

You should add it to the list of all etherpads at:
https://wiki.openstack.org/wiki/Forum/Vancouver2018

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion

2018-05-18 Thread Rochelle Grober
Thanks, Lance!

Also, the more I think about it, the more I think Maintainer has too much 
baggage to use that term for this role.  It really is “continuity” that we are 
looking for.  Continuous important fixes, continuous updates of tools used to 
produce the SW.

Keep this in the back of your minds for the discussion.  And yes, this is a 
discussion to see if we are interested, and only if there is interest, how to 
move forward.

--Rocky

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: Friday, May 18, 2018 2:03 PM
To: Rochelle Grober ; openstack-dev 
; openstack-operators 
; user-committee 

Subject: Re: [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- 
time to get serious on Maintainers -- Session etherpad and food for thought for 
discussion

Here is the link to the session in case you'd like to add it to your schedule 
[0].

[0] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21759/openstack-is-mature-time-to-get-serious-on-maintainers
On 05/17/2018 07:55 PM, Rochelle Grober wrote:
Folks,

TL;DR
The last session related to extended releases is: OpenStack is "mature" -- time 
to get serious on Maintainers
It will be in room 220 at 11:00-11:40
The etherpad for the last session in the series on Extended releases is here:
https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3

There are links to info on other communities’ maintainer 
process/role/responsibilities also, as reference material on how others have 
made it work (or not).

The nitty gritty details:

The upcoming Forum is filled with sessions that are focused on issues needed to 
improve and maintain the sustainability of OpenStack projects for the long 
term.  We have discussions on reducing technical debt, extended releases, fast 
forward installs, bringing Ops and User communities closer together, etc.  The 
community is showing it is now invested in activities that are often part of 
“Sustaining Engineering” teams (corporate speak) or “Maintainers” (OSS speak).  
We are doing this; we are thinking about the moving parts to do this; let’s 
think about the contributors who want to do these and bring some clarity to 
their roles and the processes they need to be successful.  I am hoping you read 
this and keep these ideas in mind as you participate in the various Forum 
sessions.  Then you can bring the ideas generated during all these discussions 
to the Maintainers session near the end of the Summit to brainstorm how to 
visualize and define this new(ish) component of our technical community.

So, who has been doing the maintenance work so far?  Mostly the (mostly) unsung 
heroes like the Stable Release team, Release team, Oslo team, project liaisons 
and the community goals champions (yes, moving to py3 is a 
sustaining/maintenance type of activity).  And some operators (Hi, mnaser!).  
We need to lean on their experience and what we think the community will need 
to reduce that technical debt to outline what the common tasks of maintainers 
should be, what else might fall in their purview, and how to partner with them 
to better serve them.

With API lower limits, new tool versions, placement, py3, and even projects 
reaching “code complete” or “maintenance mode,” there is a lot of work for 
maintainers to do (I really don’t like that term, but is there one that fits 
OpenStack’s community?).  It would be great if we could find a way to share the 
load such that we can have part time contributors here.  We know that operators 
know how to cherrypick, test in their clouds, and do bug fixes.  How do we pair 
with them to get fixes upstreamed without requiring them to be full on 
developers?  We have a bunch of alumni who have stopped being “cores” and 
sometimes even developers, but who love our community and might be willing and 
able to put in a few hours a week, maybe reviewing small patches, providing 
help with user/ops submitted patch requests, or whatever.  They were trusted 
with +2 and +W in the past, so we should at least be able to trust they know 
what they know.  We would need some way to identify them to Cores, since they 
would be sort of 1.5 on the voting scale, but……

So, burnout is high in other communities for maintainers.  We need to find a 
way to make sustaining the stable parts of OpenStack sustainable.

Hope you can make the talk, or add to the etherpad, or both.  The etherpad is 
very much still a work in progress (trying to organize it to make sense).  If 
you want to jump in now, go for it, otherwise it should be in reasonable shape 
for use at the session.  I hope we get a good mix of community and a good 
collection of those who are already doing the job without title.

Thanks and see you next week.
--rocky




华为技术有限公司 Huawei Technologies Co., Ltd.
Rochelle Grober
Sr. Staff 

Re: [openstack-dev] [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion

2018-05-18 Thread Lance Bragstad
Here is the link to the session in case you'd like to add it to your
schedule [0].

[0]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21759/openstack-is-mature-time-to-get-serious-on-maintainers

On 05/17/2018 07:55 PM, Rochelle Grober wrote:
>
> Folks,
>
>  
>
> TL;DR
>
> The last session related to extended releases is: OpenStack is
> "mature" -- time to get serious on Maintainers
> It will be in room 220 at 11:00-11:40
>
> The etherpad for the last session in the series on Extended releases
> is here:
>
> https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3
>
>  
>
> There are links to info on other communities’ maintainer
> process/role/responsibilities also, as reference material on how others
> have made it work (or not).
>
>  
>
> The nitty gritty details:
>
>  
>
> The upcoming Forum is filled with sessions that are focused on issues
> needed to improve and maintain the sustainability of OpenStack
> projects for the long term.  We have discussions on reducing technical
> debt, extended releases, fast forward installs, bringing Ops and User
> communities closer together, etc.  The community is showing it is now
> invested in activities that are often part of “Sustaining Engineering”
> teams (corporate speak) or “Maintainers” (OSS speak).  We are doing
> this; we are thinking about the moving parts to do this; let’s think
> about the contributors who want to do these and bring some clarity to
> their roles and the processes they need to be successful.  I am hoping
> you read this and keep these ideas in mind as you participate in the
> various Forum sessions.  Then you can bring the ideas generated during
> all these discussions to the Maintainers session near the end of the
> Summit to brainstorm how to visualize and define this new(ish)
> component of our technical community.
>
>  
>
> So, who has been doing the maintenance work so far?  Mostly the (mostly)
> unsung heroes like the Stable Release team, Release team, Oslo team,
> project liaisons and the community goals champions (yes, moving to py3
> is a sustaining/maintenance type of activity).  And some operators
> (Hi, mnaser!).  We need to lean on their experience and what we think
> the community will need to reduce that technical debt to outline what
> the common tasks of maintainers should be, what else might fall in
> their purview, and how to partner with them to better serve them.
>
>  
>
> With API lower limits, new tool versions, placement, py3, and even
> projects reaching “code complete” or “maintenance mode,” there is a
> lot of work for maintainers to do (I really don’t like that term, but
> is there one that fits OpenStack’s community?).  It would be great if
> we could find a way to share the load such that we can have part time
> contributors here.  We know that operators know how to cherrypick,
> test in their clouds, and do bug fixes.  How do we pair with them to get
> fixes upstreamed without requiring them to be full on developers?  We
> have a bunch of alumni who have stopped being “cores” and sometimes
> even developers, but who love our community and might be willing and
> able to put in a few hours a week, maybe reviewing small patches,
> providing help with user/ops submitted patch requests, or whatever. 
> They were trusted with +2 and +W in the past, so we should at least be
> able to trust they know what they know.  We would need some way to
> identify them to Cores, since they would be sort of 1.5 on the voting
> scale, but……
>
>  
>
> So, burnout is high in other communities for maintainers.  We need to
> find a way to make sustaining the stable parts of OpenStack sustainable.
>
>  
>
> Hope you can make the talk, or add to the etherpad, or both.  The
> etherpad is very much still a work in progress (trying to organize it
> to make sense).  If you want to jump in now, go for it, otherwise it
> should be in reasonable shape for use at the session.  I hope we get a
> good mix of community and a good collection of those who are already
> doing the job without title.
>
>  
>
> Thanks and see you next week.
>
> --rocky
>
>  
>
>  
>
>  
>
> 
>
> 华为技术有限公司 Huawei Technologies Co., Ltd.
>
>
> Rochelle Grober
>
> Sr. Staff Architect, Open Source
> Office Phone:408-330-5472
> Email:rochelle.gro...@huawei.com
>
> 
>
> This e-mail and its attachments contain confidential information from
> HUAWEI, which
> is intended only for the person or entity whose address is listed
> above. Any use of the
> information contained herein in any way (including, but not limited
> to, total or partial
> disclosure, reproduction, or dissemination) by persons other than the
> intended
> recipient(s) is prohibited. If you receive this 

[openstack-dev] [Glance] Vancouver Summit Glance Dinner planning

2018-05-18 Thread Erno Kuvaja
Hi all,

Time to see if we could get the glance folks together for dinner and
perhaps some refreshing beverages. If you are interested, please do
contribute to the plans here:
https://etherpad.openstack.org/p/yvr-glance-dinner

- jokke

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Keystone Team Update - Week of 14 May 2018

2018-05-18 Thread Colleen Murphy
# Keystone Team Update - Week of 14 May 2018

## News

### WSGI

Morgan has been working on converting keystone's core application to use 
Flask[1], which will help us to stop using paste.deploy and simplify our WSGI 
middleware and routing. While we're reworking our WSGI application framework, 
we also need to be thinking about how we can implement the mutable 
configuration community goal[2] which relies on having a SIGHUP handler in the 
service application that can talk to oslo.config, which is a feature that is 
part of oslo.service which we're not using.

[1] https://review.openstack.org/#/c/568377/
[2] 
https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
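
For context, a minimal sketch of what such a handler could look like outside oslo.service, assuming a recent-enough oslo.config that provides mutate_config_files() and an option registered as mutable (this is not keystone's actual code):

    # Minimal sketch of the mutable-config mechanism under the stated assumptions.
    import signal
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.StrOpt("banner", default="hello", mutable=True))
    CONF([], project="example")  # load config files once at startup

    def _reload(signum, frame):
        # Re-read the config files; only mutable options pick up new values.
        CONF.mutate_config_files()

    signal.signal(signal.SIGHUP, _reload)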

### Sphinx issues

This week we started seeing mysterious issues with the API docs builder in the 
docs jobs for keystoneauth[3][4]. They seemed to start sometime after the 
upper-constraint for Sphinx was bumped to 1.7.4[5] and seemed to go away when 
the constraint was reverted[6], but we haven't fully confirmed that correlation 
yet. If you have some free time and like puzzles please feel free to dive in.

[3] 
http://logs.openstack.org/65/568365/5/check/build-openstack-sphinx-docs/368b8db/
[4] 
http://logs.openstack.org/40/568640/2/check/build-openstack-sphinx-docs/c66ea98/
[5] https://review.openstack.org/#/c/566451/
[6] https://review.openstack.org/#/c/568248/

### Summit/forum next week

The OpenStack Summit and Forum is next week in Vancouver, BC. A team dinner is 
going to be organized, so please respond to the survey[7] with your 
availability if you'd like to join.

[7] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130649.html

Some sessions that might be of interest to the keystone team are:

Default Roles - 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles
Project Update - 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21584/keystone-project-update
Project Onboarding - 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21633/keystone-project-onboarding
Possible edge architectures for Keystone - 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21737/possible-edge-architectures-for-keystone
Feedback session -  
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21762/keystone-feedback-session
Unified limits - 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21760/unified-limits

The Open Research Cloud Alliance, which focuses on federated cloud topics, is 
also meeting on Thursday (requires a separate registration) - 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21845/cloud-federation-and-open-research-cloud-alliance-congress

## Open Specs

Search query: https://bit.ly/2G8Ai5q

In addition to the specs proposed for Rocky, we also have the Patrole in CI 
spec[8] proposed for Stein. It was originally being proposed in the 
openstack-specs repo but it's now reproposed to the keystone-specs repo.

[8] https://review.openstack.org/#/c/464678/

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 15 changes this week.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 37 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots.

## Bugs

This week we opened 5 new bugs and closed 7.

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

The spec freeze is in about three weeks. We're starting to close in on our 
bigger specs so things are looking good.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] summit sessions of interest

2018-05-18 Thread melanie witt

Howdy everyone,

Here's a last-minute (sorry) list of sessions you might find interesting 
from a nova perspective. Some of these are cross-project sessions of 
general interest.


-melanie


Forum sessions
--

Monday
--

* Default Roles Mon 21, 11:35am - 12:15pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles

* Building the path to extracting Placement from Nova Mon 21, 3:10pm - 
3:50pm

https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21716/building-the-path-to-extracting-placement-from-nova

* Ops/Devs: One community Mon 21, 4:20pm - 5:00pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21747/opsdevs-one-community

* Planning to use Placement in Cinder Mon 21, 4:20pm - 5:00pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21718/planning-to-use-placement-in-cinder

* Python 2 Deprecation Timeline Mon 21, 5:10pm - 5:50pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21741/python-2-deprecation-timeline

Tuesday
---

* Multi-attach introduction and future direction Tue 22, 11:50am - 12:30pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21732/multi-attach-introduction-and-future-direction

* Pre-emptible instances - the way forward Tue 22, 1:50pm - 2:30pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21787/pre-emptible-instances-the-way-forward

* nova/neutron + ops cross-project session Tue 22, 3:30pm - 4:10pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21754/novaneutron-ops-cross-project-session

* CellsV2 migration process sync with operators Tue 22, 4:40pm - 5:20pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21755/cellsv2-migration-process-sync-with-operators

Wednesday
-

* Making NFV features easier to use Wed 23, 11:00am - 11:40am
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21776/making-nfv-features-easier-to-use

* Nova - Project Onboarding Wed 23, 1:50pm - 2:30pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21641/nova-project-onboarding

* Missing features in OpenStack for public clouds Wed 23, 2:40pm - 3:20pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21749/missing-features-in-openstack-for-public-clouds

* API Debt Cleanup Wed 23, 4:40pm - 5:20pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup

Thursday


* Extended Maintenance part I: past, present and future 9:00am - 9:40am
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21721/extended-maintenance-part-i-past-present-and-future

* Extended Maintenance part II: EM and release cycles 9:50am - 10:30am
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21745/extended-maintenance-part-ii-em-and-release-cycles

* S Release Goals Thu 24, 11:50am - 12:30pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21738/s-release-goals

* Unified Limits Thu 24, 2:40pm - 3:20pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21760/unified-limits


Presentations
-

Monday
--

* Moving from CellsV1 to CellsV2 at CERN Mon 21, 11:35am - 12:15pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20667/moving-from-cellsv1-to-cellsv2-at-cern

* Call it real : Virtual GPUs in Nova Mon 21, 3:10pm - 3:50pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20802/call-it-real-virtual-gpus-in-nova

* The multi-release, multi-project road to volume multi-attach Mon 21, 
5:10pm - 5:50pm

https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20850/the-multi-release-multi-project-road-to-volume-multi-attach

Tuesday
---

* Placement, Present and Future, in Nova and Beyond Tue 22, 4:40pm - 5:20pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20813/placement-present-and-future-in-nova-and-beyond

Wednesday
-

* Nova - Project Update Wed 23, 11:50am - 12:30pm
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21598/nova-project-update

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] project onboarding

2018-05-18 Thread Lance Bragstad
Hey all,

We've started an etherpad in an attempt to capture information prior to
the on-boarding session on Monday [0]. If you're looking to get
something specific out of the session, please let us know in the
etherpad [1]. This will help us come to the session prepared and make
the most of the time we have.

See you there,

Lance

[0]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21633/keystone-project-onboarding
[1] https://etherpad.openstack.org/p/YVR-rocky-keystone-project-onboarding



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/ocata image build by devstack

2018-05-18 Thread Michael Johnson
Hi rezroo,

Yes, the recent release of pip 10 broke the disk image building.
There is a patch posted here: https://review.openstack.org/#/c/562850/
pending review that works around this issue for the ocata branch by
pinning the pip used for the image building to a version that does not
have this issue.
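
Until that merges, the gist of the workaround can be sketched like this, assuming the pip on PATH is the one the image build will invoke (the exact mechanism in the posted patch differs):

    # Minimal sketch of the pin: pip 10 introduced the breakage, so stay on 9.x.
    import subprocess

    subprocess.check_call(["pip", "install", "--upgrade", "pip<10"])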

Michael


On Thu, May 17, 2018 at 7:38 PM, rezroo  wrote:
> Hello - I'm trying to install a working local.conf devstack ocata on a new
> server, and some python packages have changed so I end up with this error
> during the build of octavia image:
>
> 2018-05-18 01:00:26.276 |   Found existing installation: Jinja2 2.8
> 2018-05-18 01:00:26.280 | Uninstalling Jinja2-2.8:
> 2018-05-18 01:00:26.280 |   Successfully uninstalled Jinja2-2.8
> 2018-05-18 01:00:26.839 |   Found existing installation: PyYAML 3.11
> 2018-05-18 01:00:26.969 | Cannot uninstall 'PyYAML'. It is a distutils
> installed project and thus we cannot accurately determine which files belong
> to it which would lead to only a partial uninstall.
>
> 2018-05-18 02:05:44.768 | Unmount
> /tmp/dib_build.2fbBBePD/mnt/var/cache/apt/archives
> 2018-05-18 02:05:44.796 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/pip
> 2018-05-18 02:05:44.820 | Unmount
> /tmp/dib_build.2fbBBePD/mnt/tmp/in_target.d
> 2018-05-18 02:05:44.844 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/ccache
> 2018-05-18 02:05:44.868 | Unmount /tmp/dib_build.2fbBBePD/mnt/sys
> 2018-05-18 02:05:44.896 | Unmount /tmp/dib_build.2fbBBePD/mnt/proc
> 2018-05-18 02:05:44.920 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev/pts
> 2018-05-18 02:05:44.947 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev
> 2018-05-18 02:05:50.668 |
> +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1
> exit_trap
> 2018-05-18 02:05:50.679 | +./devstack/stack.sh:exit_trap:494 local
> r=1
> 2018-05-18 02:05:50.690 | ++./devstack/stack.sh:exit_trap:495 jobs
> -p
> 2018-05-18 02:05:50.700 | +./devstack/stack.sh:exit_trap:495 jobs=
> 2018-05-18 02:05:50.710 | +./devstack/stack.sh:exit_trap:498 [[ -n
> '' ]]
> 2018-05-18 02:05:50.720 | +./devstack/stack.sh:exit_trap:504
> kill_spinner
> 2018-05-18 02:05:50.731 | +./devstack/stack.sh:kill_spinner:390  '[' '!'
> -z '' ']'
> 2018-05-18 02:05:50.741 | +./devstack/stack.sh:exit_trap:506 [[ 1
> -ne 0 ]]
> 2018-05-18 02:05:50.751 | +./devstack/stack.sh:exit_trap:507 echo
> 'Error on exit'
> 2018-05-18 02:05:50.751 | Error on exit
> 2018-05-18 02:05:50.761 | +./devstack/stack.sh:exit_trap:508
> generate-subunit 1526608058 1092 fail
> 2018-05-18 02:05:51.148 | +./devstack/stack.sh:exit_trap:509 [[ -z
> /tmp ]]
> 2018-05-18 02:05:51.157 | +./devstack/stack.sh:exit_trap:512
> /home/stack/devstack/tools/worlddump.py -d /tmp
>
> I've tried pip uninstalling PyYAML and pip installing it before running
> stack.sh, but the error comes back.
>
> $ sudo pip uninstall PyYAML
> The directory '/home/stack/.cache/pip/http' or its parent directory is not
> owned by the current user and the cache has been disabled. Please check the
> permissions and owner of that directory. If executing pip with sudo, you may
> want sudo's -H flag.
> Uninstalling PyYAML-3.12:
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/INSTALLER
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/METADATA
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/RECORD
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/WHEEL
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/top_level.txt
>   /usr/local/lib/python2.7/dist-packages/_yaml.so
> Proceed (y/n)? y
>   Successfully uninstalled PyYAML-3.12
>
> I've posted my question to the pip folks and they think it's an openstack
> issue: https://github.com/pypa/pip/issues/4805
>
> Is there a workaround here?
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fast Forward Upgrades (FFU) Forum Sessions

2018-05-18 Thread Erik McCormick
Hello all,

There are two forum sessions in Vancouver covering Fast Forward Upgrades.

Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220
Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220

The combined etherpad for both sessions can be found at:
https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades

Please take some time to add in topics you would like to see discussed
or add any other pertinent information. There are several reference
links at the top which are worth reviewing prior to the sessions if
you have the time.

See you all in Vancouver!

Cheers,
Erik

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] team dinner

2018-05-18 Thread Lance Bragstad
Hey all,

I put together a survey to see if we can plan a night to have supper
together [0]. I'll start parsing responses tomorrow and see what we can
get lined up.

Thanks and safe travels to Vancouver,

Lance

[0] https://goo.gl/forms/ogNsf9dUno8BHvqu1



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Eventlet + SSL + Python 3 = broken monkey patching leading to completely broken glance-api

2018-05-18 Thread Ben Nemec
This is a known problem: 
https://bugs.launchpad.net/oslo.service/+bug/1482633  There have been 
some discussions on what to do about it but I don't think we have a 
definite plan yet.


It also came up in the Python 3 support thread for some more context: 
http://lists.openstack.org/pipermail/openstack-dev/2018-May/130274.html


On 05/18/2018 08:01 AM, Thomas Goirand wrote:

Hi,

It took me nearly a week to figure this out, as I'm not really an expert
in Eventlet, OpenSSL and all, but now I've pin-pointed a big problem.

My tests were around Glance, which I was trying to run over SSL and
Eventlet, though it seems to be a general issue with SSL + Python 3.

In the normal setup, when I do:
openstack image list

then I get:
Unable to establish connection to https://127.0.0.1:9292/v2/images:
('Connection aborted.', OSError(0, 'Error'))

(more detailed stack dump at the end of this message [1])

Though, with Eventlet 0.20.0, if in
/usr/lib/python3/dist-packages/eventlet/green/ssl.py line 352, I comment
out set_nonblocking(newsock) in the accept() function of the
GreenSSLSocket, then everything works.

Note that:
- This also happens with latest Eventlet 0.23.0
- There's no problem without SSL
- There's no commit on top of 0.23.0 relevant to the issue

The issue has been reported here 2 years ago:
https://github.com/eventlet/eventlet/issues/308

it's marked with "importance-bug" and "need-contributor", but nobody did
anything about it.

I also tried running with libapache2-mod-wsgi-py3, but then I'm hitting
another bug: https://bugs.launchpad.net/glance/+bug/1518431

What's going on is that glanceclient spits out a 411 error complaining
about content length. That issue is seen *only* when using Apache and
mod_wsgi.

So, I'm left with no solution here: Glance never works over SSL and
Python 3. Something's really wrong and should be fixed. Please help!

This also pinpoints something: our CI is *not* covering the SSL case, or
mod_wsgi, when really, it should. We should be having tests with:
- mod_wsgi
- eventlet
- uwsgi
and all of the above with and without SSL, plus Python 2 and 3, plus
with file or swift backend. That's 24 possible combinations, which we
should IMO all cover. We don't need to run all tests, but maybe just
make sure that at least the daemon works, which isn't the case at the
moment for most of these use cases. The only setup that works are:
- eventlet with or without SSL, using Python 2
- eventlet without SSL with Python 3
- apache with or without SSL without swift backend

As much as I understand, we're only testing with eventlet with Python 2
and 3 without SSL and file backend. That's 2 setups out of 24... Can
someone work on fixing this?

Cheers,

Thomas Goirand (zigo)

[1]

Unable to establish connection to https://127.0.0.1:9292/v2/images:
('Connection aborted.', OSError(0, 'Error'))
Traceback (most recent call last):
   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line
601, in urlopen
 chunked=chunked)
   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line
346, in _make_request
 self._validate_conn(conn)
   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line
852, in _validate_conn
 conn.connect()
   File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 326,
in connect
 ssl_context=context)
   File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 329,
in ssl_wrap_socket
 return context.wrap_socket(sock, server_hostname=server_hostname)
   File "/usr/lib/python3.5/ssl.py", line 385, in wrap_socket
 _context=self)
   File "/usr/lib/python3.5/ssl.py", line 760, in __init__
 self.do_handshake()
   File "/usr/lib/python3.5/ssl.py", line 996, in do_handshake
 self._sslobj.do_handshake()
   File "/usr/lib/python3.5/ssl.py", line 641, in do_handshake
 self._sslobj.do_handshake()
OSError: [Errno 0] Error

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api] late addition to forum schedule

2018-05-18 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2018-05-17 17:23:09 -0500:
> On 5/17/2018 11:02 AM, Doug Hellmann wrote:
> > After some discussion on twitter and IRC, we've added a new session to
> > the Forum schedule for next week to discuss our options for cleaning up
> > some of the design/technical debt in our REST APIs.
> 
> Not to troll too hard here, but it's kind of frustrating to see that 
> twitter trumps people actually proposing sessions on time and then 
> having them be rejected.
> 
> > The session description:
> > 
> >The introduction of microversions in OpenStack APIs added a
> >mechanism to incrementally change APIs without breaking users.
> >We're now at the point where people would like to start making
> >old things go away, which means we need to hammer out a plan and
> >potentially put it forward as a community goal.
> > 
> > [1]https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup
> 
> This also came up at the Pike PTG in Atlanta:
> 
> https://etherpad.openstack.org/p/ptg-architecture-workgroup
> 
> See the "raising the minimum microversion" section. The TODO was Ironic 
> was going to go off and do this and see how much people freaked out. 
> What's changed since then besides that not happening? Since I'm not on 
> twitter, I don't know what new thing prompted this.
> 

What changed is that we thought doing it as a coordinated effort,
rather than one team, would work better, because we wouldn't have
a team appearing to be an outlier in terms of their API support
"guarantees".  We also wanted to start the planning early, so that
teams could talk about it at the PTG and make more detailed plans
for the changes over the course of Stein, to be implemented in the
next cycle (assuming we all decide that's the right timing).

The only aspect of this that's settled today is that we want to
talk about it. Each team will still need to consider whether, and
how, to do it.
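
For readers new to the mechanism the session covers: requesting a specific microversion is just a header on the call. A minimal sketch follows; the endpoint URL, token, and the 2.53 value are placeholders.

    # Minimal sketch of opting in to a compute microversion.
    import requests

    resp = requests.get(
        "https://compute.example.com/v2.1/servers",
        headers={
            "X-Auth-Token": "<token>",
            "OpenStack-API-Version": "compute 2.53",  # omit to get the minimum version
        },
    )
    print(resp.status_code, resp.headers.get("OpenStack-API-Version"))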

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api] late addition to forum schedule

2018-05-18 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2018-05-18 00:03:35 +:
> On 2018-05-17 18:47:06 -0500 (-0500), Matt Riedemann wrote:
> > On 5/17/2018 5:23 PM, Matt Riedemann wrote:
> > > Not to troll too hard here, but it's kind of frustrating to see that
> > > twitter trumps people actually proposing sessions on time and then
> > > having them be rejected.
> > 
> > I reckon this is because there were already a pre-defined set of slots /
> > rooms for Forum sessions and we had fewer sessions proposed than reserved
> > slots, and that's why adding something in later is not a major issue?
> 
> Yes, as I understand it we still have some overflow space too if
> planned forum sessions need continuing. Session leaders have
> hopefully received details from the event planners on how to reserve
> additional space in such situations. As far as I'm aware no proposed
> Forum sessions were rejected this time around, and there was some
> discussion among members of the TC (in #openstack-tc[*]) before it
> was agreed there was room to squeeze this particular latecomer into
> the lineup.
> 
> [*] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-14.log.html#t2018-05-14T17:27:05

Yes, that's right.

I do remember that we've had sessions rejected in the past (for
space considerations or to avoid overbalancing the schedule with
too many sessions on a given topic), but it feels like it has been
quite a while since that happened. Maybe I'm wrong? Has that been
a persistent problem?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sig] [upgrades] inaugural meeting minutes & vancouver forum

2018-05-18 Thread James Page
Hi All

Lujin, Lee, and I held the inaugural IRC meeting for the Upgrades SIG
this week (see [0]). Suffice it to say that, due to other time pressures,
setup of the SIG has taken a lot longer than desired, but hopefully now that
we have the ball rolling we can keep up a bit of momentum.

The Upgrades SIG intends to meet weekly, alternating between slots that
work for (hopefully) all time zones:

   http://eavesdrop.openstack.org/#Upgrades_SIG

That said, we'll skip next week's meeting due to the OpenStack Summit and
Forum in Vancouver, where we have a BoF on the schedule (see [1]) instead.

If you're interested in OpenStack Upgrades the BoF and Erik's sessions on
Fast Forward Upgrades (see [2]) should be on your schedule for next week!

Cheers

James


[0]
http://eavesdrop.openstack.org/meetings/upgrade_sig/2018/upgrade_sig.2018-05-15-09.06.html
[1]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21855/upgrade-sig-bof
[2]
https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=upgrades
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Eventlet + SSL + Python 3 = broken monkey patching leading to completely broken glance-api

2018-05-18 Thread Thomas Goirand
Hi,

It took me nearly a week to figure this out, as I'm not really an expert
in Eventlet, OpenSSL and all, but now I've pin-pointed a big problem.

My tests were around Glance, which I was trying to run over SSL and
Eventlet, though it seems to be a general issue with SSL + Python 3.

In the normal setup, when I do:
openstack image list

then I get:
Unable to establish connection to https://127.0.0.1:9292/v2/images:
('Connection aborted.', OSError(0, 'Error'))

(more detailed stack dump at the end of this message [1])

Though, with Eventlet 0.20.0, if in
/usr/lib/python3/dist-packages/eventlet/green/ssl.py line 352, I comment
out set_nonblocking(newsock) in the accept() function of the
GreenSSLSocket, then everything works.
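
For anyone who wants to poke at it, a minimal reproduction sketch, assuming a self-signed server.crt/server.key next to the script:

    # Minimal repro sketch: on Python 3 the handshake driven from accept()
    # fails with "OSError: [Errno 0] Error" once the socket is non-blocking.
    import eventlet
    eventlet.monkey_patch()

    listener = eventlet.wrap_ssl(
        eventlet.listen(("127.0.0.1", 9292)),
        certfile="server.crt",
        keyfile="server.key",
        server_side=True,
    )

    while True:
        sock, addr = listener.accept()
        sock.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
        sock.close()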

Note that:
- This also happens with latest Eventlet 0.23.0
- There's no problem without SSL
- There's no commit on top of 0.23.0 relevant to the issue

The issue has been reported here 2 years ago:
https://github.com/eventlet/eventlet/issues/308

it's marked with "importance-bug" and "need-contributor", but nobody did
anything about it.

I also tried running with libapache2-mod-wsgi-py3, but then I'm hitting
another bug: https://bugs.launchpad.net/glance/+bug/1518431

What's going on is that glanceclient spits out a 411 error complaining
about content length. That issue is seen *only* when using Apache and
mod_wsgi.

So, I'm left with no solution here: Glance never works over SSL and
Python 3. Something's really wrong and should be fixed. Please help!

This also pinpoints something: our CI is *not* covering the SSL case, or
mod_wsgi, when really, it should. We should be having tests with:
- mod_wsgi
- eventlet
- uwsgi
and all of the above with and without SSL, plus Python 2 and 3, plus
with file or swift backend. That's 24 possible combinations, which we
should IMO all cover. We don't need to run all tests, but maybe just
make sure that at least the daemon works, which isn't the case at the
moment for most of these use cases. The only setup that works are:
- eventlet with or without SSL, using Python 2
- eventlet without SSL with Python 3
- apache with or without SSL without swift backend

As much as I understand, we're only testing with eventlet with Python 2
and 3 without SSL and file backend. That's 2 setups out of 24... Can
someone work on fixing this?

Cheers,

Thomas Goirand (zigo)

[1]

Unable to establish connection to https://127.0.0.1:9292/v2/images:
('Connection aborted.', OSError(0, 'Error'))
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line
601, in urlopen
chunked=chunked)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line
346, in _make_request
self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line
852, in _validate_conn
conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 326,
in connect
ssl_context=context)
  File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 329,
in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.5/ssl.py", line 385, in wrap_socket
_context=self)
  File "/usr/lib/python3.5/ssl.py", line 760, in __init__
self.do_handshake()
  File "/usr/lib/python3.5/ssl.py", line 996, in do_handshake
self._sslobj.do_handshake()
  File "/usr/lib/python3.5/ssl.py", line 641, in do_handshake
self._sslobj.do_handshake()
OSError: [Errno 0] Error

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-05-18 Thread Nadathur, Sundar

On 5/18/2018 5:06 AM, Sylvain Bauza wrote:



On Fri, May 18, 2018 at 1:59 PM, Nadathur, Sundar wrote:


Hi Matt,

On 5/17/2018 3:18 PM, Matt Riedemann wrote:

On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:

This applies only to the resources that Nova handles, IIUC,
which does not handle accelerators. The generic method that Alex
talks about is obviously preferable but, if that is not
available in Rocky, is the filter an option?


If nova isn't creating accelerator resources managed by cyborg, I
have no idea why nova would be doing quota checks on those types
of resources. And no, I don't think adding a scheduler filter to
nova for checking accelerator quota is something we'd add either.
I'm not sure that would even make sense - the quota for the
resource is per tenant, not per host is it? The scheduler filters
work on a per-host basis.

Can we not extend BaseFilter.filter_all() to get all the hosts in
a filter?
https://github.com/openstack/nova/blob/master/nova/filters.py#L36

I should have made it clearer that this putative filter will be
out-of-tree, and needed only till better solutions become available.


No, there are two clear parameters for a filter, and changing that 
would mean a new paradigm for FilterScheduler.
If you need to have a check for all the hosts, maybe it should be 
either a pre-filter for Placement or a post-filter, but we don't accept 
out-of-tree ones yet.


Thanks, Sylvain. So, the filter approach got filtered out.

Matt had mentioned that Cinder volume quotas are not checked by Nova 
either, citing:

 https://bugs.launchpad.net/nova/+bug/1742102
That includes this comment:
    https://bugs.launchpad.net/nova/+bug/1742102/comments/4
I'll check how Cinder does it today.

Thanks to all for your valuable input.

Regards,
Sundar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api] late addition to forum schedule

2018-05-18 Thread Jim Rollenhagen
On Fri, May 18, 2018 at 5:38 AM, Dmitry Tantsur  wrote:

> On 05/18/2018 12:23 AM, Matt Riedemann wrote:
>
>> On 5/17/2018 11:02 AM, Doug Hellmann wrote:
>>
>>> After some discussion on twitter and IRC, we've added a new session to
>>> the Forum schedule for next week to discuss our options for cleaning up
>>> some of the design/technical debt in our REST APIs.
>>>
>>
>> Not to troll too hard here, but it's kind of frustrating to see that
>> twitter trumps people actually proposing sessions on time and then having
>> them be rejected.
>>
>> The session description:
>>>
>>>The introduction of microversions in OpenStack APIs added a
>>>mechanism to incrementally change APIs without breaking users.
>>>We're now at the point where people would like to start making
>>>old things go away, which means we need to hammer out a plan and
>>>potentially put it forward as a community goal.
>>>
>>> [1]https://www.openstack.org/summit/vancouver-2018/summit-sc
>>> hedule/events/21881/api-debt-cleanup
>>>
>>
>> This also came up at the Pike PTG in Atlanta:
>>
>> https://etherpad.openstack.org/p/ptg-architecture-workgroup
>>
>> See the "raising the minimum microversion" section. The TODO was Ironic
>> was going to go off and do this and see how much people freaked out. What's
>> changed since then besides that not happening? Since I'm not on twitter, I
>> don't know what new thing prompted this.
>>
>>
> Jim was driving this effort, then he left and it went into limbo. I'm not
> sure we're still interested in doing that, given the overall backlog.


Well, I'm still interested in doing this, but don't really have the time :(

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-05-18 Thread Sylvain Bauza
On Fri, May 18, 2018 at 1:59 PM, Nadathur, Sundar wrote:

> Hi Matt,
> On 5/17/2018 3:18 PM, Matt Riedemann wrote:
>
> On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:
>
> This applies only to the resources that Nova handles, IIUC, which does not
> handle accelerators. The generic method that Alex talks about is obviously
> preferable but, if that is not available in Rocky, is the filter an option?
>
>
> If nova isn't creating accelerator resources managed by cyborg, I have no
> idea why nova would be doing quota checks on those types of resources. And
> no, I don't think adding a scheduler filter to nova for checking
> accelerator quota is something we'd add either. I'm not sure that would
> even make sense - the quota for the resource is per tenant, not per host is
> it? The scheduler filters work on a per-host basis.
>
> Can we not extend BaseFilter.filter_all() to get all the hosts in a
> filter?
>
> https://github.com/openstack/nova/blob/master/nova/filters.py#L36
>
> I should have made it clearer that this putative filter will be
> out-of-tree, and needed only till better solutions become available.
>

No, there are two clear parameters for a filter, and changing that would
mean a new paradigm for FilterScheduler.
If you need to have a check for all the hosts, maybe it should be either a
pre-filter for Placement or a post-filter, but we don't accept out-of-tree
ones yet.


> Like any other resource in openstack, the project that manages that
> resource should be in charge of enforcing quota limits for it.
>
> Agreed. Not sure how other projects handle it, but here's the situation
> for Cyborg. A request may get scheduled on a compute node with no
> intervention by Cyborg. So, the earliest check that can be made today is in
> the selected compute node. A simple approach can result in quota violations
> as in this example.
>
> Say there are 5 devices in a cluster. A tenant has a quota of 4 and is
> currently using 3. That leaves 2 unused devices, of which the tenant is
> permitted to use only one. But he may submit two concurrent requests, and
> they may land on two different compute nodes. The Cyborg agent in each node
> will see the current tenant usage as 3 and let the request go through,
> resulting in quota violation.
>
> To prevent this, we need some kind of atomic update, like SQLAlchemy's
> with_lockmode():
>
> https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE
> That seems to have issues, as documented in the link above. Also, since
> every compute node does that, it would also serialize the bringup of all
> instances with accelerators, across the cluster.
>
> If there is a better solution, I'll be happy to hear it.
>
> Thanks,
> Sundar
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-05-18 Thread Nadathur, Sundar

Hi Matt,

On 5/17/2018 3:18 PM, Matt Riedemann wrote:

On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:
This applies only to the resources that Nova handles, IIUC, which 
does not handle accelerators. The generic method that Alex talks 
about is obviously preferable but, if that is not available in Rocky, 
is the filter an option?


If nova isn't creating accelerator resources managed by cyborg, I have 
no idea why nova would be doing quota checks on those types of 
resources. And no, I don't think adding a scheduler filter to nova for 
checking accelerator quota is something we'd add either. I'm not sure 
that would even make sense - the quota for the resource is per tenant, 
not per host is it? The scheduler filters work on a per-host basis.

Can we not extend BaseFilter.filter_all() to get all the hosts in a filter?
https://github.com/openstack/nova/blob/master/nova/filters.py#L36

I should have made it clearer that this putative filter will be 
out-of-tree, and needed only till better solutions become available.
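
To make the mismatch concrete, here is the shape such an out-of-tree filter would have. Only BaseHostFilter and the host_passes signature come from nova; the quota lookup is a hypothetical helper.

    # Sketch only: filters answer "does this one host pass?", which is why a
    # tenant-wide accelerator quota sits awkwardly in this interface.
    from nova.scheduler import filters

    class AcceleratorQuotaFilter(filters.BaseHostFilter):
        def host_passes(self, host_state, spec_obj):
            # _get_accelerator_usage() is hypothetical.
            used, limit = _get_accelerator_usage(spec_obj.project_id)
            return used < limit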


Like any other resource in openstack, the project that manages that 
resource should be in charge of enforcing quota limits for it.
Agreed. Not sure how other projects handle it, but here's the situation 
for Cyborg. A request may get scheduled on a compute node with no 
intervention by Cyborg. So, the earliest check that can be made today is 
in the selected compute node. A simple approach can result in quota 
violations as in this example.


   Say there are 5 devices in a cluster. A tenant has a quota of 4 and
   is currently using 3. That leaves 2 unused devices, of which the
   tenant is permitted to use only one. But the tenant may submit two
   concurrent requests, which may land on two different compute nodes.
   The Cyborg agent in each node will see the current tenant usage as 3
   and let the request go through, resulting in a quota violation.

To prevent this, we need some kind of atomic update, like SQLAlchemy's
with_lockmode():
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE 

That seems to have issues, as documented in the link above. Also, since 
every compute node does that, it would also serialize the bringup of all 
instances with accelerators across the cluster.
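
For illustration, a minimal sketch of that pessimistic-locking approach
with SQLAlchemy (the QuotaUsage model and OverQuota exception are assumed
names for this example; with_for_update() is the current spelling of the
deprecated with_lockmode('update') and emits SELECT ... FOR UPDATE):

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class QuotaUsage(Base):  # assumed per-tenant usage table
        __tablename__ = 'quota_usages'
        project_id = Column(String(36), primary_key=True)
        in_use = Column(Integer, nullable=False, default=0)

    class OverQuota(Exception):
        pass

    def consume_devices(session, project_id, requested, limit):
        # SELECT ... FOR UPDATE: the row stays locked until commit, so
        # concurrent callers serialize here -- the cost noted above.
        usage = (session.query(QuotaUsage)
                        .filter_by(project_id=project_id)
                        .with_for_update()
                        .one())
        if usage.in_use + requested > limit:
            raise OverQuota()
        usage.in_use += requested
        session.commit()  # commit releases the row lock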


If there is a better solution, I'll be happy to hear it.

Thanks,
Sundar




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][rdo] Fwd: Status of activities related to python3 PoC in RDO

2018-05-18 Thread Alfredo Moralejo Alonso
FYI

-- Forwarded message --
From: Alfredo Moralejo Alonso 
Date: Fri, May 18, 2018 at 1:02 PM
Subject: Status of activities related to python3 PoC in RDO
To: d...@lists.rdoproject.org, us...@lists.rdoproject.org


Hi,

One of the goals for RDO during this cycle is to carry out a PoC of python3
packaging using Fedora 28 as the base OS. I'd like to give an update on the
current status of the tasks related to this goal so that all involved teams
can take the required actions:

1. An initial stabilized Fedora repo is available and ready to be used:
- The repo configuration is in
https://trunk.rdoproject.org/fedora/dlrn-deps.repo
- It contains only a subset of the packages in the Fedora 28 repo. If more
packages are required, they can be added by sending a review to the
fedora-stable-config repo, as in https://review.rdoproject.org/r/#/c/13744/
- We are still implementing periodic updates for that repo.

2. A DLRN builder has been created using the fedora-stable repo in
https://trunk.rdoproject.org/fedora . Note that only packages with python3
subpackages are being built on it. We will keep adding new packages as
specs are ready.

3. A new image and node type, rdo-fedora-stable, have been created in
review.rdoproject.org and are ready to be used in jobs as needed.

Please let us know via this mailing list or the #rdo channel on Freenode if
you need further help with this topic.

Best regards,

Alfredo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api] late addition to forum schedule

2018-05-18 Thread Dmitry Tantsur

On 05/18/2018 12:23 AM, Matt Riedemann wrote:

On 5/17/2018 11:02 AM, Doug Hellmann wrote:

After some discussion on twitter and IRC, we've added a new session to
the Forum schedule for next week to discuss our options for cleaning up
some of the design/technical debt in our REST APIs.


Not to troll too hard here, but it's kind of frustrating to see that twitter
trumps people who actually proposed sessions on time and then had them rejected.



The session description:

   The introduction of microversions in OpenStack APIs added a
   mechanism to incrementally change APIs without breaking users.
   We're now at the point where people would like to start making
   old things go away, which means we need to hammer out a plan and
   potentially put it forward as a community goal.

[1]https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup 



This also came up at the Pike PTG in Atlanta:

https://etherpad.openstack.org/p/ptg-architecture-workgroup

See the "raising the minimum microversion" section. The TODO was Ironic was 
going to go off and do this and see how much people freaked out. What's changed 
since then besides that not happening? Since I'm not on twitter, I don't know 
what new thing prompted this.




Jim was driving this effort, then he left and it went into limbo. I'm not sure 
we're still interested in doing that, given the overall backlog.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [magnum] Magnum tempest fails with 400 bad request

2018-05-18 Thread Yatin Karel
Hi Tobias,

Thanks for looking into it.

Currently the issue I see is that the magnum configuration [1] is wrong:
auth_uri=http://localhost:5000 should be https and v3-versioned, as per
the scenario003 deployment configuration.
Magnum relies on the auth_uri param, and it needs to be versioned ("v3"), like below:

auth_uri=https://[::1]:5000/v3

After fixing this config, the current issue should be solved. I also think
there is more work required to fix it completely, but let's clear the
current issue first.

It would also be good to try our Atomic 27 image (the current one is too old):
tempest::magnum::image_source
https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180212.2/CloudImages/x86_64/images/Fedora-Atomic-27-20180212.2.x86_64.qcow2

Some other things that would be required are below:
- The cluster VMs magnum creates should be able to connect to OpenStack
services and to the internet.
- Settings would be required to work with SSL-enabled services: either
TLS_DISABLED, or setting up verify_ca and cert configuration in
magnum.conf.
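
A hedged sketch of what the fixed magnum.conf stanzas might look like (the
[keystone_authtoken] section name comes from keystonemiddleware; the
[drivers] options are assumptions based on magnum's defaults, so verify
them against the deployed version):

    [keystone_authtoken]
    auth_uri = https://[::1]:5000/v3

    [drivers]
    # Only needed if the cluster VMs do not trust the deployment's CA:
    verify_ca = false
    openstack_ca_file = /path/to/deployment-ca.crt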

[1] 
http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/logs/etc/magnum/magnum.conf.txt.gz


Thanks and Regards
Yatin Karel

On Thu, May 17, 2018 at 5:37 PM, Thomas Goirand  wrote:
> On 05/17/2018 09:49 AM, Tobias Urdin wrote:
>> Hello,
>>
>> I was interested in getting Magnum working in gate by getting @dms patch
>> fixed and merged [1].
>>
>> The installation goes fine on Ubuntu and CentOS; however, the tempest
>> testing for Magnum fails on CentOS (it is not available on Ubuntu).
>>
>>
>> It seems to be related to authentication against keystone but I don't
>> understand why; please see the logs [2] [3]
>>
>>
>> [1] https://review.openstack.org/#/c/367012/
>>
>> [2]
>> http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/logs/magnum/magnum-api.txt.gz#_2018-05-16_15_10_36_010
>>
>> [3]
>> http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/
>
> From that log, you're getting a 404 from nova-api.
>
> Response - Headers: {'status': '404', u'content-length': '113',
> 'content-location': 'https://[::1]:8774/v2.1/os-keypairs/default',
> u'x-compute-request-id': 'req-35ae4651-186c-4f20-9143-f68f67b7d401',
> u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version',
> u'server': 'Apache/2.4.6 (CentOS)', u'openstack-api-version': 'compute
> 2.1', u'connection': 'close', u'x-openstack-nova-api-version': '2.1',
> u'date': 'Wed, 16 May 2018 15:10:33 GMT', u'content-type':
> 'application/json; charset=UTF-8', u'x-openstack-request-id':
> 'req-35ae4651-186c-4f20-9143-f68f67b7d401'}
>
> but that seems fine, because the request right after it is working. However,
> just after that, you're getting a 500 error on magnum-api a bit further down:
>
> Response - Headers: {'status': '500', u'content-length': '149',
> 'content-location': 'https://[::1]:9511/clustertemplates',
> u'openstack-api-maximum-version': 'container-infra 1.6', u'vary':
> 'OpenStack-API-Version', u'openstack-api-minimum-version':
> 'container-infra 1.1', u'server': 'Werkzeug/0.11.6 Python/2.7.5',
> u'openstack-api-version': 'container-infra 1.1', u'date': 'Wed, 16 May
> 2018 15:10:36 GMT', u'content-type': 'application/json',
> u'x-openstack-request-id': 'req-12c635c9-889a-48b4-91d4-ded51220ad64'}
>
> With this body:
>
> Body: {"errors": [{"status": 500, "code": "server", "links": [],
> "title": "Bad Request (HTTP 400)", "detail": "Bad Request (HTTP 400)",
> "request_id": ""}]}
> 2018-05-16 15:24:14.434432 | centos-7 | 2018-05-16 15:10:36,016
> 13619 DEBUG[tempest.lib.common.dynamic_creds] Clearing network:
> {u'provider:physical_network': None, u'ipv6_address_scope': None,
> u'revision_number': 2, u'port_security_enabled': True, u'mtu': 1400,
> u'id': u'c26c237a-0583-4f72-8300-f87051080be7', u'router:external':
> False, u'availability_zone_hints': [], u'availability_zones': [],
> u'provider:segmentation_id': 35, u'ipv4_address_scope': None, u'shared':
> False, u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'status':
> u'ACTIVE', u'subnets': [], u'description': u'', u'tags': [],
> u'updated_at': u'2018-05-16T15:10:26Z', u'is_default': False,
> u'qos_policy_id': None, u'name': u'tempest-setUp-2113966350-network',
> u'admin_state_up': True, u'tenant_id':
> u'31c5c1fbc46e4880b7e498e493700a50', u'created_at':
> u'2018-05-16T15:10:26Z', u'provider:network_type': u'vxlan'}, subnet:
> {u'service_types': [], u'description': u'', u'enable_dhcp': True,
> u'tags': [], u'network_id': u'c26c237a-0583-4f72-8300-f87051080be7',
> u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at':
> u'2018-05-16T15:10:26Z', u'dns_nameservers': [], u'updated_at':
> u'2018-05-16T15:10:26Z', u'ipv6_ra_mode': None, u'allocation_pools':
> [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip':
> u'10.100.0.1', u'revision_number': 0, u'ipv6_address_mode':