Re: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed

2018-10-23 Thread Tony Breeds
On Wed, Oct 24, 2018 at 03:23:53AM +, z...@openstack.org wrote:
> Build failed.
> 
> - release-openstack-python3 
> http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/
>  : POST_FAILURE in 2m 18s

So this failed because PyPI thinks there was a name collision [1]:
 HTTPError: 400 Client Error: File already exists. See 
https://pypi.org/help/#file-name-reuse for url: https://upload.pypi.org/legacy/

AFAICT the upload was successful:

shade-1.27.2-py2-none-any.whl  : 2018-10-24T03:20:00 
d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a
shade-1.27.2-py2.py3-none-any.whl  : 2018-10-24T03:20:11 
8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792
shade-1.27.2.tar.gz: 2018-10-24T03:20:04 
ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf
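
For reference, the listing above can be reproduced from the public PyPI
JSON API, roughly (a quick sketch; field names per my reading of the API):

import requests

resp = requests.get("https://pypi.org/pypi/shade/1.27.2/json")
resp.raise_for_status()
for f in resp.json()["urls"]:
    print(f["filename"], f["upload_time"], f["digests"]["sha256"])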

The strange thing is that the tar.gz was uploaded *before* the wheel
even though our publish jobs explicitly do it in the other order, and the
timestamp of the tar.gz doesn't match the error message.

So I think we have a bug somewhere; more digging tomorrow.

Yours Tony.

[1] 
http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/job-output.txt.gz#_2018-10-24_03_20_15_264676


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas

2018-10-23 Thread melanie witt

On Wed, 24 Oct 2018 10:54:31 +1100, Sam Morrison wrote:

Hi nova devs,

Have been having a good look into cellsv2 and how we migrate to them 
(we’re still on cellsv1 and about to upgrade to queens and still run 
cells v1 for now).


One of the problems I have is that now all our nova cell database 
servers need to respond to API requests.
With cellsv1 our architecture was to have a big powerful DB cluster (3 
physical servers) at the API level to handle the API cell and then a 
smallish non HA DB server (usually just a VM) for each of the compute 
cells.


This architecture won’t work with cells V2 and we’ll now need to have a 
lot of highly available and responsive DB servers for all the cells.


It will also mean that our nova-apis which reside in Melbourne, 
Australia will now need to talk to database servers in Auckland, New 
Zealand.


The biggest issue we have is when a cell is down. We sometimes have 
cells go down for an hour or so planned or unplanned and with cellsv1 
this does not affect other cells.
Looks like some good work going on here 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/handling-down-cell


But what about quota? If a cell goes down then it would seem that a user 
all of a sudden would regain some quota from the instances that are in 
the down cell?

Just wondering if anyone has thought about this?


Yes, we've discussed it quite a bit. The current plan is to offer a 
policy-driven behavior as part of the "down" cell handling which will 
control whether nova will:


a) Reject a server create request if the user owns instances in "down" cells

b) Go ahead and count quota usage "as-is" if the user owns instances in 
"down" cells and allow the quota limit to potentially be exceeded


We would like to know if you think this plan will work for you.

Further down the road, if we're able to come to an agreement on a 
consumer type/owner or partitioning concept in placement (to be certain 
we are counting usage that our instance of nova owns, as placement is a 
shared service), we could count quota usage from placement instead of 
querying cells.
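
To make the plan concrete, the decision logic would look roughly like
this (a sketch with hypothetical names, not actual nova code):

def count_usage_for_quota(instances_by_cell, down_cells, reject_on_down):
    # down_cells: cells the user has instances in that are unreachable
    if down_cells and reject_on_down:
        # option a): refuse the server create outright
        raise RuntimeError("user owns instances in down cells: %s"
                           % down_cells)
    # option b): count what we can see "as-is"; instances in down cells
    # are invisible, so the quota limit may temporarily be exceeded
    return sum(len(instances)
               for cell, instances in instances_by_cell.items()
               if cell not in down_cells)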


Cheers,
-melanie


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] today’s meeting is cancelled

2018-10-23 Thread Чадин Александр Сергеевич
I won’t be able to chair the meeting at 8:00 am, so I’d propose meeting at 
10:30 am UTC on the regular openstack-watcher channel, if that’s suitable for you.

Alex Chadin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg] [weekly-meeting]

2018-10-23 Thread Li Liu
The weekly meeting will be held tomorrow at the usual time: 10 AM
EST / 10 PM Beijing time.

Please provide input on Sundar's docs if you have the chance before the
meeting

https://docs.google.com/spreadsheets/d/179Q8J9qIJNOiVm86K7bWPxo7otTsU18XVCI32V77JaU/edit#gid=0

Let's make our final decision on the naming.

-- 
Thank you

Regards

Li
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [KOLLA] error deploying openstack -- TASK [keystone : Creating default user role] keystone is accessible and urllib3 and chardet libraries up to date

2018-10-23 Thread Manuel Sopena Ballesteros
Dear Kolla-ansible team,

I am trying to deploy openstack pike using kolla-ansible 6.1.0 without success. 
I am not a python developer, so I was wondering whether someone could help 
with troubleshooting.

[root@openstack-deployment ~]# pip show kolla-ansible
Name: kolla-ansible
Version: 6.1.0
Summary: Ansible Deployment of Kolla containers
Home-page: https://docs.openstack.org/kolla-ansible/latest/
Author: OpenStack
Author-email: openstack-dev@lists.openstack.org
License: Apache License, Version 2.0
Location: /usr/lib/python2.7/site-packages
Requires: PyYAML, setuptools, oslo.utils, Jinja2, cryptography, docker, 
netaddr, six, pbr, oslo.config
Required-by:

This is the ansible output

TASK [keystone : Creating default user role] 

task path: /usr/share/kolla-ansible/ansible/roles/keystone/tasks/register.yml:10
 ESTABLISH SSH CONNECTION FOR USER: None
 SSH: EXEC ssh -C -o ControlMaster=auto -o 
ControlPersist=60s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 -o 
ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c 
'"'"'echo ~ && sleep 0'"'"''
 (0, '/root\n', '')
 ESTABLISH SSH CONNECTION FOR USER: None
 SSH: EXEC ssh -C -o ControlMaster=auto -o 
ControlPersist=60s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 -o 
ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c 
'"'"'( umask 77 && mkdir -p "` echo 
/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907 `" && echo 
ansible-tmp-1540346152.6-54138515670907="` echo 
/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907 `" ) && sleep 0'"'"''
 (0, 
'ansible-tmp-1540346152.6-54138515670907=/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907\n',
 '')
Using module file /usr/share/kolla-ansible/ansible/library/kolla_toolbox.py
 PUT 
/root/.ansible/tmp/ansible-local-10970L49VmL/tmpFspLOR TO 
/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py
 SSH: EXEC sftp -b - -C -o ControlMaster=auto -o 
ControlPersist=60s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 -o 
ControlPath=/root/.ansible/cp/104cd4ab74 '[test-openstack-controller]'
 (0, 'sftp> put 
/root/.ansible/tmp/ansible-local-10970L49VmL/tmpFspLOR 
/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py\n',
 '')
 ESTABLISH SSH CONNECTION FOR USER: None
 SSH: EXEC ssh -C -o ControlMaster=auto -o 
ControlPersist=60s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 -o 
ControlPath=/root/.ansible/cp/104cd4ab74 test-openstack-controller '/bin/sh -c 
'"'"'chmod u+x /root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/ 
/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py
 && sleep 0'"'"''
 (0, '', '')
 ESTABLISH SSH CONNECTION FOR USER: None
 SSH: EXEC ssh -C -o ControlMaster=auto -o 
ControlPersist=60s -o StrictHostKeyChecking=no -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 -o 
ControlPath=/root/.ansible/cp/104cd4ab74 -tt test-openstack-controller '/bin/sh 
-c '"'"'/usr/bin/python 
/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py
 && sleep 0'"'"''
 (1, 
'/usr/lib/python2.7/site-packages/requests/__init__.py:91: 
RequestsDependencyWarning: urllib3 (1.24) or chardet (2.2.1) doesn\'t match a 
supported version!\r\n  RequestsDependencyWarning)\r\nTraceback (most recent 
call last):\r\n  File 
"/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py",
 line 113, in \r\n_ansiballz_main()\r\n  File 
"/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py",
 line 105, in _ansiballz_main\r\ninvoke_module(zipped_mod, temp_path, 
ANSIBALLZ_PARAMS)\r\n  File 
"/root/.ansible/tmp/ansible-tmp-1540346152.6-54138515670907/AnsiballZ_kolla_toolbox.py",
 line 48, in invoke_module\r\nimp.load_module(\'__main__\', mod, module, 
MOD_DESC)\r\n  File "/tmp/ansible_kolla_toolbox_payload_JkGoxn/__main__.py", 
line 155, in \r\n  File 
"/tmp/ansible_kolla_toolbox_payload_JkGoxn/__main__.py", line 133, in main\r\n  
File 
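
For reference, the RequestsDependencyWarning at the top of that traceback
can be checked with a quick sketch like this (the exact supported ranges
depend on which requests release is installed):

import chardet
import requests
import urllib3

# requests releases of this era want roughly urllib3>=1.21.1,<1.24 and
# chardet>=3.0.2,<3.1.0 -- check the installed requests' setup.py to be sure
print("requests", requests.__version__)
print("urllib3", urllib3.__version__)
print("chardet", chardet.__version__)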

Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-23 Thread Adrian Turjak

On 24/10/18 2:09 AM, Ben Nemec wrote:
>
>
> On 10/22/18 5:40 PM, Matt Riedemann wrote:
>> On 10/22/2018 4:35 PM, Adrian Turjak wrote:
 The one other open question I have is about the Adjutant change [2]. I
 know Adjutant is very new and I'm not sure what upgrades look like for
 that project, so I don't really know how valuable adding the upgrade
 check framework is to that project. Is it like Horizon where it's
 mostly stateless and fed off plugins? Because we don't have an upgrade
 check CLI for Horizon for that reason.

 [1]
 https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged)

 [2]https://review.openstack.org/#/c/611812/

>>> Adjutant's codebase is also going to be a bit unstable for the next few
>>> cycles while we refactor some internals (we're not marking it 1.0 yet).
>>> Once the current set of ugly refactors planned for late Stein are
>>> done I
>>> may look at building some upgrade checking, once we also work out what
>>> our upgrade checking should look like. Probably mostly checking config
>>> changes, database migration states, and plugin compatibility.
>>>
>>> Adjutant already has a concept of startup checks at least, which while
>>> not anywhere near as extensive as they should be, mostly amount to
>>> making sure your config file looks 'mostly' sane regarding plugins
>>> before starting up the service, and we do intend to expand on that,
>>> plus
>>> we can reuse a large chunk of that for upgrade checking.
>>
>> OK it seems there is not really any point in trying to satisfy the
>> upgrade checkers goal for Adjutant in Stein then. Should we just
>> abandon the change?
>>
>
> Can't we just add a noop command like we are for the services that
> don't currently need upgrade checks?


I mostly was responding to this in the review itself rather than on here.

We are probably going to have reason for an upgrade check in Adjutant.
My main gripe is that Adjutant is Django based, so there isn't much point
in adding a separate CLI when we already expose 'adjutant-api' as a
proxy to manage.py; as such we should just register the upgrade check
as a custom Django admin command.

More so because all of the logic needed to actually run the check in
future will require Django settings to be configured. We don't actually
use any oslo libraries yet so the current code for the check doesn't
actually make sense in context.

I'm fine with a noop check, but we have to make it fit.
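
For the record, the shape I have in mind is just a small management
command registered under management/commands/, e.g. (an untested sketch):

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Run Adjutant upgrade checks (noop until the refactors land)."

    def handle(self, *args, **options):
        # config, database migration state, and plugin compatibility
        # checks would go here
        self.stdout.write("No upgrade checks implemented yet.")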


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][all] TC office hours is started now on #openstack-tc

2018-10-23 Thread Ghanshyam Mann
Hi All, 

The TC office hour has started on the #openstack-tc channel. Feel free to 
reach out to us with anything you want to discuss or anything you need 
input, feedback, or help with from the TC.

-gmann 





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas

2018-10-23 Thread Sam Morrison
Hi nova devs,

Have been having a good look into cellsv2 and how we migrate to them (we’re 
still on cellsv1 and about to upgrade to queens and still run cells v1 for now).

One of the problems I have is that now all our nova cell database servers need 
to respond to API requests.
With cellsv1 our architecture was to have a big powerful DB cluster (3 physical 
servers) at the API level to handle the API cell and then a smallish non HA DB 
server (usually just a VM) for each of the compute cells. 

This architecture won’t work with cells V2 and we’ll now need to have a lot of 
highly available and responsive DB servers for all the cells. 

It will also mean that our nova-apis which reside in Melbourne, Australia will 
now need to talk to database servers in Auckland, New Zealand.

The biggest issue we have is when a cell is down. We sometimes have cells go 
down for an hour or so planned or unplanned and with cellsv1 this does not 
affect other cells. 
Looks like some good work going on here 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/handling-down-cell
 


But what about quota? If a cell goes down then it would seem that a user all of 
a sudden would regain some quota from the instances that are in the down cell?
Just wondering if anyone has thought about this?

Cheers,
Sam



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-23 Thread Matt Riedemann

On 10/23/2018 1:41 PM, Sean McGinnis wrote:

Yeah, but part of the reason for placeholders was consistency across all of
the services. I guess if there are never going to be upgrade checks in
adjutant then I could see skipping it, but otherwise I would prefer to at
least get the framework in place.


+1

Even if there is nothing to check at this point, I think having the facility
there is a benefit for projects and scripts that are going to be consuming
these checks. Having nothing to check, but having the status check there, is
going to be better than everything needing to keep a list of which projects to
run the checks on and which not.



Sure, that works for me as well. I'm not against adding placeholder/noop 
checks knowing that nothing immediately obvious will replace those in 
Stein, but that could be done later when the opportunity arises. If it's 
debatable on a per-project basis, then I'd defer to the core team for 
the project.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Neutron stadium project Tempest plugins

2018-10-23 Thread Slawomir Kaplonski
Hi,

Thx Miguel for raising this.
List of tempest plugins is on 
https://docs.openstack.org/tempest/latest/plugin-registry.html - if URL for 
Your plugin is the same as Your main repo, You should move Your tempest plugin 
code.


> Wiadomość napisana przez Miguel Lavalle  w dniu 
> 23.10.2018, o godz. 16:59:
> 
> Dear Neutron Stadium projects,
> 
> In a QA session during the recent PTG in Denver, it was suggested that the 
> Stadium projects should move their Tempest plugins to a repository of their 
> own or add them to the Neutron Tempest plugin repository 
> (https://github.com/openstack/neutron-tempest-plugin). The purpose of this 
> message is to start a conversation for the Stadium projects to indicate 
> their preference. Please respond to this thread indicating how you want 
> to move forward.
> 
> Best regards
> 
> Miguel
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-23 Thread Sean McGinnis
On Tue, Oct 23, 2018 at 10:30:23AM -0400, Ben Nemec wrote:
> 
> 
> On 10/23/18 9:58 AM, Matt Riedemann wrote:
> > On 10/23/2018 8:09 AM, Ben Nemec wrote:
> > > Can't we just add a noop command like we are for the services that
> > > don't currently need upgrade checks?
> > 
> > We could, but I was also hoping that for most projects we will actually
> > be able to replace the noop / placeholder check with *something* useful
> > in Stein.
> > 
> 
> Yeah, but part of the reason for placeholders was consistency across all of
> the services. I guess if there are never going to be upgrade checks in
> adjutant then I could see skipping it, but otherwise I would prefer to at
> least get the framework in place.
> 

+1

Even if there is nothing to check at this point, I think having the facility
there is a benefit for projects and scripts that are going to be consuming
these checks. Having nothing to check, but having the status check there, is
going to be better than everything needing to keep a list of which projects to
run the checks on and which not.
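
The framework piece is small anyway; roughly this, per my reading of the
new oslo.upgradecheck library (an untested sketch):

from oslo_upgradecheck import upgradecheck


class Checks(upgradecheck.UpgradeCommands):

    def _check_placeholder(self):
        # nothing to verify yet; succeed so deployment tooling can rely
        # on "$PROJECT-status upgrade check" existing for every project
        return upgradecheck.Result(upgradecheck.Code.SUCCESS)

    _upgrade_checks = (("placeholder", _check_placeholder),)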


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Foundation Community Meeting - October 24 - StarlingX

2018-10-23 Thread Chris Hoge
On Wednesday, October 24 we will host our next Foundation community
meeting at 8:00 PT / 15:00 UTC. This meeting will focus on an update
on StarlingX, one of the projects in the Edge Computing Strategic Focus
Area.

The full agenda is here:
https://etherpad.openstack.org/p/openstack-community-meeting

Do you have something you'd like to discuss or share with the community?
Please share it with me so that I can schedule it for a future meeting.

Thanks,
Chris

(Calendar attachment: "StarlingX First Release, Community Webinar", 
2018-10-24 15:00-16:00 UTC, https://zoom.us/j/112003649, agenda: 
https://etherpad.openstack.org/p/openstack-community-meeting)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface

2018-10-23 Thread Erik McCormick
On Tue, Oct 23, 2018 at 10:20 AM Tobias Urdin  wrote:
>
> Hello Erik,
>
> Could you specify the DNs you used for all certificates just so that I
> can rule it out on my side.
> You can redact anything sensitive; I just want to get a feel for how
> it's configured.
>
> Best regards
> Tobias
>
I'm not actually using anything special or custom. For right now I
just let it use the default www.example.com stuff. These are the
settings in the playbook, which I distilled from OSA:

octavia_cert_key_length_server: '4096' # key length
octavia_cert_cipher_server: 'aes256'
octavia_cert_cipher_client: 'aes256'
octavia_cert_key_length_client: '4096' # key length
octavia_cert_server_ca_subject: '/C=US/ST=Denial/L=Nowhere/O=Dis/CN=www.example.com' # change this to something more real
octavia_cert_client_ca_subject: '/C=US/ST=Denial/L=Nowhere/O=Dis/CN=www.example.com' # change this to something more real
octavia_cert_client_req_common_name: 'www.example.com' # change this to something more real
octavia_cert_client_req_country_name: 'US'
octavia_cert_client_req_state_or_province_name: 'Denial'
octavia_cert_client_req_locality_name: 'Nowhere'
octavia_cert_client_req_organization_name: 'Dis'
octavia_cert_validity_days: 1825 # 5 years
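
As a quick sanity check that the CA subject and the issued certificate
subject don't collide (see Tobias's note below), something like this with
the cryptography library works; the file names here are examples, use
whatever your deployment produced:

from cryptography import x509
from cryptography.hazmat.backends import default_backend

with open("ca_01.pem", "rb") as f:
    ca = x509.load_pem_x509_certificate(f.read(), default_backend())
with open("client_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read(), default_backend())

# identical subjects here are what trigger the 'bad signature' /
# 'padding check failed' handshake errors discussed in this thread
print("CA subject:  ", ca.subject.rfc4514_string())
print("cert subject:", cert.subject.rfc4514_string())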

-Erik

> On 10/22/2018 04:47 PM, Erik McCormick wrote:
> > On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin  wrote:
> >> Hello,
> >>
> >> I've been having a lot of issues with SSL certificates myself, on my
> >> second trip now trying to get it working.
> >>
> >> Before I spent a lot of time walking through every line in the DevStack
> >> plugin and fixing my config options, used the generate
> >> script [1] and still it didn't work.
> >>
> >> When I got the "invalid padding" issue it was because of the DN I used
> >> for the CA and the certificate IIRC.
> >>
> >>   > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING
> >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
> >> to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa
> >> routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'),
> >> ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'),
> >> ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",)
> >>   > 19:47 < tobias-urdin> after a quick google "The problem was that my
> >> CA DN was the same as the certificate DN."
> >>
> >> IIRC I think that solved it, but then again I wouldn't remember fully
> >> since I've been at so many different angles by now.
> >>
> >> Here is my IRC logs history from the #openstack-lbaas channel, perhaps
> >> it can help you out
> >> http://paste.openstack.org/show/732575/
> >>
> > Tobias, I owe you a beer. This was precisely the issue. I'm deploying
> > Octavia with kolla-ansible. It only deploys a single CA. After hacking
> > the templates and playbook to incorporate a separate server CA, the
> > amphorae now load and provision the required namespace. I'm adding a
> > kolla tag to the subject of this in hopes that someone might want to
> > take on changing this behavior in the project. Hopefully after I get
> > through Upstream Institute in Berlin I'll be able to do it myself if
> > nobody else wants to do it.
> >
> > For certificate generation, I extracted the contents of
> > octavia_certs_install.yml (which sets up the directory structure,
> > openssl.cnf, and the client CA), and octavia_certs.yml (which creates
> > the server CA and the client certificate) and mashed them into a
> > separate playbook just for this purpose. At the end I get:
> >
> > ca_01.pem - Client CA Certificate
> > ca_01.key - Client CA Key
> > ca_server_01.pem - Server CA Certificate
> > cakey.pem - Server CA Key
> > client.pem - Concatenated Client Key and Certificate
> >
> > If it would help to have the playbook, I can stick it up on github
> > with a huge "This is a hack" disclaimer on it.
> >
> >> -
> >>
> >> Sorry for hijacking the thread but I'm stuck as well.
> >>
> >> I've in the past tried to generate the certificates with [1] but now
> >> moved on to using the openstack-ansible way of generating them [2]
> >> with some modifications.
> >>
> >> Right now I'm just getting: Could not connect to instance. Retrying.:
> >> SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579)
> >> from the amphoras, haven't got any further but I've eliminated a lot of
> >> stuff in the middle.
> >>
> >> Tried deploying Octavia on Ubuntu with python3 to just make sure there
> >> wasn't an issue with CentOS and OpenSSL versions since it tends to lag
> >> behind.
> >> Checking the amphora with openssl s_client [3] it gives the same one,
> >> but the verification is successful just that I don't understand what the
> >> bad signature
> >> part is about, from browsing some OpenSSL code it seems to be related to
> >> RSA signatures somehow.
> >>
> >> 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad
> >> signature:s3_clnt.c:2032:
> >>
> >> So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS 
> >> (openssl-1.0.2k) being the problem

Re: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata"

2018-10-23 Thread Sergio A. de Carvalho Jr.
Makes sense, Dan.

Thanks so much for your help.

Sergio

On Tue, Oct 23, 2018 at 5:01 PM Dan Smith  wrote:

> > I tested a code change that essentially reverts
> > https://review.openstack.org/#/c/276861/1/nova/api/metadata/base.py
> >
> > In other words, with this change metadata tables are not fetched by
> > default in API requests. If I understand correctly, metadata is
> > fetched in separate queries as the instance object is
> > created. Everything seems to work just fine, and I've considerably
> > reduced the amount of data fetched from the database, as well as
> > reduced the average response time of API requests.
> >
> > Given how simple it is and the results I'm getting, I don't see any
> > reason not to patch my clusters with this change.
> >
> > Do you guys see any other impact this change could have? Anything that
> > it could potentially break?
>
> This is probably fine as a bandage fix, but it's not the right one for
> upstream, IMHO. By doing what you did, you cause two RPC round-trips to
> fetch the instance and then the metadata every single time the metadata
> API is hit (not including the cache). By converting the DB load to do
> the two-step, we still hit the DB twice, but only one RPC round-trip,
> which will be much more efficient especially at load/scale.
>
> --Dan
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-23 Thread Chris Dent

On Mon, 22 Oct 2018, Chris Dent wrote:


Thus far I'm not hearing any volunteers. If that continues to be the
case, I'll just keep it on bitbucket as that's the minimal change.


As there was some noise that suggested "if you make it use git I
might help", I put it on github:

https://github.com/cdent/paste

I'm now in the process of getting it somewhat sane for modern
python; however, test coverage isn't that great, so additional work is
required. Once it seems mostly okay, I'll push out a new version to
pypi.

I welcome assistance from any and all.

And, rather importantly, we need to take over pastedeploy
as well, as the functionality there is also important. I've started
that ball rolling.

If having it live in my github proves a problem we can easily move
it along somewhere else, but this was the shortest hop.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata"

2018-10-23 Thread Dan Smith
> I tested a code change that essentially reverts
> https://review.openstack.org/#/c/276861/1/nova/api/metadata/base.py
>
> In other words, with this change metadata tables are not fetched by
> default in API requests. If I understand correctly, metadata is
> fetched in separate queries as the instance object is
> created. Everything seems to work just fine, and I've considerably
> reduced the amount of data fetched from the database, as well as
> reduced the average response time of API requests.
>
> Given how simple it is and the results I'm getting, I don't see any
> reason not to patch my clusters with this change.
>
> Do you guys see any other impact this change could have? Anything that
> it could potentially break?

This is probably fine as a bandage fix, but it's not the right one for
upstream, IMHO. By doing what you did, you cause two RPC round-trips to
fetch the instance and then the metadata every single time the metadata
API is hit (not including the cache). By converting the DB load to do
the two-step, we still hit the DB twice, but only one RPC round-trip,
which will be much more efficient especially at load/scale.
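
A back-of-the-envelope illustration (all numbers made up):

# toy latency model of the trade-off above
RPC_RTT = 5.0    # ms per metadata-API -> conductor round trip
DB_QUERY = 1.0   # ms per database query

one_rpc_two_queries = RPC_RTT + 2 * DB_QUERY   # single call does both loads
two_rpcs = 2 * (RPC_RTT + DB_QUERY)            # instance, then metadata
print(one_rpc_two_queries, two_rpcs)           # 7.0 vs 12.0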

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Metadata API cross joining "instance_metadata" and "instance_system_metadata"

2018-10-23 Thread Sergio A. de Carvalho Jr.
I tested a code change that essentially reverts
https://review.openstack.org/#/c/276861/1/nova/api/metadata/base.py

In other words, with this change metadata tables are not fetched by default
in API requests. If I understand correctly, metadata is fetched in separate
queries as the instance object is created. Everything seems to work just
fine, and I've considerably reduced the amount of data fetched from the
database, as well as reduced the average response time of API requests.

Given how simple it is and the results I'm getting, I don't see any reason
not to patch my clusters with this change.

Do you guys see any other impact this change could have? Anything that it
could potentially break?
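
For illustration, the difference in query shape looks roughly like this
(a generic SQLAlchemy sketch with hypothetical models, not the actual
nova code):

from sqlalchemy.orm import joinedload

def load_eager(session, Instance, uuid):
    # one query that joins both metadata tables; joining two collections
    # multiplies the rows returned (the cross join in the subject line)
    return (session.query(Instance)
            .options(joinedload(Instance.metadata_items),
                     joinedload(Instance.system_metadata))
            .filter_by(uuid=uuid)
            .one())

def load_deferred(session, Instance, InstanceMetadata, uuid):
    # small main query, then fetch the metadata separately
    inst = session.query(Instance).filter_by(uuid=uuid).one()
    meta = (session.query(InstanceMetadata)
            .filter_by(instance_uuid=uuid)
            .all())
    return inst, meta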


On Mon, Oct 22, 2018 at 10:05 PM Sergio A. de Carvalho Jr. <
scarvalh...@gmail.com> wrote:

>
> https://bugs.launchpad.net/nova/+bug/1799298
>
> On Mon, Oct 22, 2018 at 9:15 PM Sergio A. de Carvalho Jr. <
> scarvalh...@gmail.com> wrote:
>
>> Cool, I'll open a bug then.
>>
>> I was wondering if, before joining the metadata tables with the rest of
>> instance data, we could do a UNION, since both tables are structurally
>> identical.
>>
>> On Mon, Oct 22, 2018 at 9:04 PM Dan Smith  wrote:
>>
>>> > Do you guys see an easy fix here?
>>> >
>>> > Should I open a bug report?
>>>
>>> Definitely open a bug. IMHO, we should just make the single-instance
>>> load work like the multi ones, where we load the metadata separately if
>>> requested. We might be able to get away without sysmeta these days, but
>>> we needed it for the flavor details back when the join was added. But,
>>> user metadata is controllable by the user and definitely of interest in
>>> that code, so just dropping sysmeta from the explicit required_attrs
>>> isn't enough, IMHO.
>>>
>>> --Dan
>>>
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron stadium project Tempest plugins

2018-10-23 Thread Miguel Lavalle
Dear Neutron Stadium projects,

In a QA session during the recent PTG in Denver, it was suggested that the
Stadium projects should move their Tempest plugins to a repository of their
own or add them to the Neutron Tempest plugin repository (
https://github.com/openstack/neutron-tempest-plugin). The purpose of this
message is to start a conversation for the Stadium projects to indicate
their preference. Please respond to this thread indicating how
you want to move forward.

Best regards

Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-23 Thread Ben Nemec



On 10/23/18 9:58 AM, Matt Riedemann wrote:

On 10/23/2018 8:09 AM, Ben Nemec wrote:
Can't we just add a noop command like we are for the services that 
don't currently need upgrade checks?


We could, but I was also hoping that for most projects we will actually 
be able to replace the noop / placeholder check with *something* useful 
in Stein.




Yeah, but part of the reason for placeholders was consistency across all 
of the services. I guess if there are never going to be upgrade checks 
in adjutant then I could see skipping it, but otherwise I would prefer 
to at least get the framework in place.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface

2018-10-23 Thread Tobias Urdin

Hello Erik,

Could you specify the DNs you used for all certificates just so that I 
can rule it out on my side.
You can redact anything sensitive; I just want to get a feel for how 
it's configured.


Best regards
Tobias

On 10/22/2018 04:47 PM, Erik McCormick wrote:

On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin  wrote:

Hello,

I've been having a lot of issues with SSL certificates myself, on my
second trip now trying to get it working.

Before I spent a lot of time walking through every line in the DevStack
plugin and fixing my config options, used the generate
script [1] and still it didn't work.

When I got the "invalid padding" issue it was because of the DN I used
for the CA and the certificate IIRC.

  > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa
routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'),
('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'),
('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",)
  > 19:47 < tobias-urdin> after a quick google "The problem was that my
CA DN was the same as the certificate DN."

IIRC I think that solved it, but then again I wouldn't remember fully
since I've been at so many different angles by now.

Here is my IRC logs history from the #openstack-lbaas channel, perhaps
it can help you out
http://paste.openstack.org/show/732575/


Tobias, I owe you a beer. This was precisely the issue. I'm deploying
Octavia with kolla-ansible. It only deploys a single CA. After hacking
the templates and playbook to incorporate a separate server CA, the
amphorae now load and provision the required namespace. I'm adding a
kolla tag to the subject of this in hopes that someone might want to
take on changing this behavior in the project. Hopefully after I get
through Upstream Institute in Berlin I'll be able to do it myself if
nobody else wants to do it.

For certificate generation, I extracted the contents of
octavia_certs_install.yml (which sets up the directory structure,
openssl.cnf, and the client CA), and octavia_certs.yml (which creates
the server CA and the client certificate) and mashed them into a
separate playbook just for this purpose. At the end I get:

ca_01.pem - Client CA Certificate
ca_01.key - Client CA Key
ca_server_01.pem - Server CA Certificate
cakey.pem - Server CA Key
client.pem - Concatenated Client Key and Certificate

If it would help to have the playbook, I can stick it up on github
with a huge "This is a hack" disclaimer on it.


-

Sorry for hijacking the thread but I'm stuck as well.

I've in the past tried to generate the certificates with [1] but now
moved on to using the openstack-ansible way of generating them [2]
with some modifications.

Right now I'm just getting: Could not connect to instance. Retrying.:
SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579)
from the amphoras, haven't got any further but I've eliminated a lot of
stuff in the middle.

Tried deploying Octavia on Ubuntu with python3 to just make sure there
wasn't an issue with CentOS and OpenSSL versions since it tends to lag
behind.
Checking the amphora with openssl s_client [3] it gives the same one,
but the verification is successful just that I don't understand what the
bad signature
part is about, from browsing some OpenSSL code it seems to be related to
RSA signatures somehow.

140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad
signature:s3_clnt.c:2032:

So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS
(openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm
back to something related
to the certificates or the communication between the endpoints, or what
actually responds inside the amphora (gunicorn IIUC?). Based on the
"verify" functions actually causing that bad signature error I would
assume it's the generated certificate that the amphora presents that is
causing it.

I'll have to continue the troubleshooting to the inside of the amphora,
I've used the test-only amphora image before but have now built my own
one that is
using the amphora-agent from the actual stable branch, but same issue
(bad signature).

For verbosity this is the config options set for the certificates in
octavia.conf and which file it was copied from [4], same here, a
replication of what openstack-ansible does.

Appreciate any feedback or help :)

Best regards
Tobias

[1]
https://github.com/openstack/octavia/blob/master/bin/create_certificates.sh
[2] http://paste.openstack.org/show/732483/
[3] http://paste.openstack.org/show/732486/
[4] http://paste.openstack.org/show/732487/

On 10/20/2018 01:53 AM, Michael Johnson wrote:

Hi Erik,

Sorry to hear you are still having certificate issues.

Issue #2 is probably caused by issue #1. Since we hot-plug the tenant
network for the VIP, one of the first steps after the worker connects
to the amphora agent is 

Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-23 Thread Jon Bernard
* melanie witt  wrote:
> On Fri, 19 Oct 2018 23:21:01 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > 
> > The version of my cinder and nova is Rocky. The scope of the cinder spec[1]
> > is only for available volume migration between two pools from the same
> > ceph cluster.
> > If the volume is in-use status[2], it will call the generic migration
> > function. So that as you
> > describe it, on the nova side, it raises NotImplementedError(_("Swap
> > only supports host devices").
> > The get_config of net volume[3] has not source_path.
> 
> Ah, OK, so you're trying to migrate a volume across two separate ceph
> clusters, and that is not supported.
> 
> > So does anyone try to succeed to migrate volume(in-use) with ceph
> > backend or is anyone doing something of it?
> 
> Hopefully someone can share their experience with trying to migrate volumes
> across separate ceph clusters. I unfortunately don't know anything about it.

If this is the case, then Cinder cannot request a storage-specific
migration which is typically more efficient.  The migration will require
a complete copy of each allocated block.  Whether the volume is attached
or not will determine who (cinder or nova) will perform the operation.

-- 
Jon

> 
> Best,
> -melanie
> 
> > [1] https://review.openstack.org/#/c/296150
> > [2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
> > [3] 
> > https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-23 Thread Jon Bernard
* melanie witt  wrote:
> On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:
> > I created a new vm and a new volume with type 'ceph'[So that the volume
> > will be created on one of two hosts. I assume that the volume created on
> > host dev@rbd-1#ceph this time]. Next step is to attach the volume to the
> > vm. At last I want to migrate the volume from host dev@rbd-1#ceph to
> > host dev@rbd-2#ceph, but it failed with the exception
> > 'NotImplementedError(_("Swap only supports host devices")'.
> > 
> > So that, my real problem is that is there any work to migrate
> > volume(*in-use*)(*ceph rbd*) from one host(pool) to another host(pool)
> > in the same ceph cluster?
> > The difference between the spec[2] with my scope is only one is
> > *available*(the spec) and another is *in-use*(my scope).
> > 
> > 
> > [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
> > [2] https://review.openstack.org/#/c/296150
> 
> Ah, I think I understand now, thank you for providing all of those details.
> And I think you explained it in your first email, that cinder supports
> migration of ceph volumes if they are 'available' but not if they are
> 'in-use'. Apologies that I didn't get your meaning the first time.
> 
> I see now the code you were referring to is this [3]:
> 
> if volume.status not in ('available', 'retyping', 'maintenance'):
> LOG.debug('Only available volumes can be migrated using backend '
>   'assisted migration. Falling back to generic migration.')
> return refuse_to_migrate
> 
> So because your volume is not 'available', 'retyping', or 'maintenance',
> it's falling back to generic migration, which will end up with an error in
> nova because the source_path is not set in the volume config.
> 
> Can anyone from the cinder team chime in about whether the ceph volume
> migration could be expanded to allow migration of 'in-use' volumes? Is there
> a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

-- 
Jon

> 
> [3] 
> https://github.com/openstack/cinder/blob/c42fdc470223d27850627fd4fc9d8cb15f2941f8/cinder/volume/drivers/rbd.py#L1618-L1621
> 
> Cheers,
> -melanie
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-23 Thread Matt Riedemann

On 10/23/2018 8:09 AM, Ben Nemec wrote:
Can't we just add a noop command like we are for the services that don't 
currently need upgrade checks?


We could, but I was also hoping that for most projects we will actually 
be able to replace the noop / placeholder check with *something* useful 
in Stein.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-23 Thread Ben Nemec



On 10/22/18 5:40 PM, Matt Riedemann wrote:

On 10/22/2018 4:35 PM, Adrian Turjak wrote:

The one other open question I have is about the Adjutant change [2]. I
know Adjutant is very new and I'm not sure what upgrades look like for
that project, so I don't really know how valuable adding the upgrade
check framework is to that project. Is it like Horizon where it's
mostly stateless and fed off plugins? Because we don't have an upgrade
check CLI for Horizon for that reason.

[1]
https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged) 


[2]https://review.openstack.org/#/c/611812/


Adjutant's codebase is also going to be a bit unstable for the next few
cycles while we refactor some internals (we're not marking it 1.0 yet).
Once the current set of ugly refactors planned for late Stein are done I
may look at building some upgrade checking, once we also work out what
our upgrade checking should look like. Probably mostly checking config
changes, database migration states, and plugin compatibility.

Adjutant already has a concept of startup checks at least, which while
not anywhere near as extensive as they should be, mostly amount to
making sure your config file looks 'mostly' sane regarding plugins
before starting up the service, and we do intend to expand on that, plus
we can reuse a large chunk of that for upgrade checking.


OK it seems there is not really any point in trying to satisfy the 
upgrade checkers goal for Adjutant in Stein then. Should we just abandon 
the change?




Can't we just add a noop command like we are for the services that don't 
currently need upgrade checks?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][plugins] Horizon plugins validation on CI

2018-10-23 Thread Ivan Kolodyazhny
Hi Tony,

I like the idea of using functional tests instead of Tempest. We can extend
our functional tests to cover plugins.

Personally, I don't have a strong opinion on which way we should go forward.
I'll support any community decision which helps us to get cross-project CI
up and running.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Thu, Oct 18, 2018 at 4:55 AM Tony Breeds  wrote:

> On Wed, Oct 17, 2018 at 04:18:26PM +0300, Ivan Kolodyazhny wrote:
> > Hi all,
> >
> > We discussed this topic at PTG both with Horizon and other teams. Sounds
> > like everybody is interested to have some cross-project CI jobs to verify
> > that plugins are not broken with the latest Horizon changes.
> >
> > The initial idea was to use tempest plugins for this effort like we do
> for
> > Horizon [1]. We've got a very simple test to verify that Horizon is up
> and
> > running and a user is able to login.
> >
> > It's easy to implement such tests for any existing horizon plugin. I
> tried
> > it for Heat and Manila dashboards.
>
> Given that I know very little about this, isn't it just as simple as
> running, say, the octavia-dashboard[1] npm tests on all horizon changes?
> This would be similar to the way we run the nova[2] functional tests on all
> constraints changes in openstack/requirements.
>
> Yours Tony.
>
> [1] Of course all dashbaords/plugins
> [2] Not just nova but you get the idea
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Bug deputy report week of October 15th

2018-10-23 Thread Brian Haley

Hi,

I was Neutron bug deputy last week. Below is a short summary of the 
reported bugs.


Note: I will not be at the team meeting this morning, sorry for the late 
notice.


-Brian


Critical bugs
-
None

High bugs
-

* https://bugs.launchpad.net/neutron/+bug/1798472 - Fullstack tests 
fails because process is not killed properly

  - gate failure

* https://bugs.launchpad.net/neutron/+bug/1798475 - Fullstack test 
test_ha_router_restart_agents_no_packet_lost failing

  - gate failure

* https://bugs.launchpad.net/neutron/+bug/1799124 - Path MTU discovery 
fails for VMs with Floating IP behind DVR routers

  - Needs confirmation, I took ownership

Medium bugs
---

* https://bugs.launchpad.net/neutron/+bug/1798577
  - Fix proposed, https://review.openstack.org/#/c/606007/

* A number of port-forwarding bugs were filed by Liu Yulong
  - https://bugs.launchpad.net/neutron/+bug/1799135
  - https://bugs.launchpad.net/neutron/+bug/1799137
  - https://bugs.launchpad.net/neutron/+bug/1799138
  - https://bugs.launchpad.net/neutron/+bug/1799140
  - https://bugs.launchpad.net/neutron/+bug/1799150
  - https://bugs.launchpad.net/neutron/+bug/1799155
  - Will discuss with Liu if he is working on them

Wishlist bugs
-

* https://bugs.launchpad.net/neutron/+bug/146 - When use ‘neutron 
net-update’, we cannot change the 'vlan-transparent' dynamically

  - not a bug as per the API definition, asked if proposing extension
  - perhaps possible to implement in backward-compatible way

* https://bugs.launchpad.net/neutron/+bug/1799178 - l2 pop doesn't 
always provide the whole list of fdb entries on agent restart

  - Need a smarter way to detect agent restarts

Invalid bugs


* https://bugs.launchpad.net/neutron/+bug/1798536 - OpenVswitch: qg-XXX 
goes to br-int instead of br-ext


* https://bugs.launchpad.net/neutron/+bug/1798689 -
Fullstack test test_create_one_default_qos_policy_per_project failed
  - Fixed by https://review.openstack.org/#/c/610280/

Further triage required
---

* https://bugs.launchpad.net/neutron/+bug/1798588 - 
neutron-openvswitch-agent break network connection on second reboot

  - Asked for more information from submitter

* https://bugs.launchpad.net/neutron/+bug/1798688 - iptables_hybrid 
tests 
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_shelve_shelved_server 
failed

  - tempest.lib.exceptions.NotFound: Object not found
Details: {u'message': u'Instance None could not be found.', 
u'code': 404}

Not sure if issue with shelve/unshelve since the instance is gone

* https://bugs.launchpad.net/bugs/1798713 - [fwaas]wrong judgment in 
_is_supported_by_fw_l2_driver method

  - Fix proposed, https://review.openstack.org/#/c/605988
Need someone from FWaaS team to confirm and set priority

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][vitrage][infra] SQLAlchemy-Utils version 0.33.6 breaks Vitrage gate

2018-10-23 Thread Ian Wienand
On Thu, Oct 18, 2018 at 01:17:13PM +, Jeremy Stanley wrote:
> It's been deleted (again) and the suspected fix approved so
> hopefully it won't recur.

Unfortunately the underlying issue is still a mystery.  It recurred
once after the suspected fix was merged [1], and despite trying to
replicate it mostly in-situ we could not duplicate the issue.

Another change [2] has made our builds use a modified pip [3] which
logs the sha256 hash of the .whl outputs.  If this reappears, we can
look at the logs and the final (corrupt) wheel and see if the problem
is coming from pip, or something after that as we copy the files.

If looking at hexdumps of zip files is your idea of a good time, there
are some details on the corruption in the comments of [2].  Any
suggestions welcome :) Also any corruption reports welcome too, and we
can continue investigation.

Thanks,

-i

[1] https://review.openstack.org/611444
[2] https://review.openstack.org/612234
[3] https://github.com/pypa/pip/pull/5908

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge"

2018-10-23 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

Yes, https://github.com/State-of-the-Edge/glossary is a good initiative. Maybe 
we should all just start using the terms defined there and contribute if we 
have problems with the definitions.

Br,
Gerg0

From: Teresa Peluso 
Sent: Friday, October 19, 2018 4:39 PM
To: Csatari, Gergely (Nokia - HU/Budapest) ; 
OpenStack Development Mailing List (not for usage questions) 
; ful...@redhat.com; 
edge-comput...@lists.openstack.org
Cc: openstack-s...@lists.openstack.org
Subject: RE: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the 
use of terms "Edge" and "Far Edge"

Fyi – could this help?  
https://www.linuxfoundation.org/blog/2018/06/edge-computing-just-got-its-rosetta-stone/

https://imasons.org/ is starting to host workshops about this as well: 
https://imasons.org/events/2018-im-edge-congress/

From: Csatari, Gergely (Nokia - HU/Budapest) <gergely.csat...@nokia.com>
Sent: Friday, October 19, 2018 1:05 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>; ful...@redhat.com; 
edge-comput...@lists.openstack.org
Cc: openstack-s...@lists.openstack.org
Subject: [EXTERNAL] Re: [Edge-computing] [openstack-dev] [Openstack-sigs] 
[FEMDC] [Edge] [tripleo] On the use of terms "Edge" and "Far Edge"

Hi,

I’m adding the ECG mailing list to the discussion.

I think the root of the problem is that there is no single definition of „the 
edge” (except for [1]), but it changes from group to group and use case to use 
case. What I recognise as the commonalities in these edge definitions are 1) a 
distributed cloud infrastructure (kind of a cloud of clouds), 2) a need for 
automation of everything, and 3) resource constraints for the control plane.

The different edge variants put different emphasis on these common needs 
based on the use case discussed.

To have a clearer understanding of these definitions we could try the 
following:

  1.  Always add the definitions to the given context
  2.  Check what other groups are using and adopt that
  3.  Define our own language and expect everyone else to adopt it

Br,
Gerg0



[1]: 
https://en.wikipedia.org/wiki/The_Edge

From: Jim Rollenhagen <j...@jimrollenhagen.com>
Sent: Thursday, October 18, 2018 11:43 PM
To: ful...@redhat.com; OpenStack Development Mailing 
List (not for usage questions) <openstack-dev@lists.openstack.org>
Cc: openstack-s...@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-sigs] [FEMDC] [Edge] [tripleo] On the 
use of terms "Edge" and "Far Edge"

On Thu, Oct 18, 2018 at 4:45 PM John Fulton <johfu...@redhat.com> wrote:
On Thu, Oct 18, 2018 at 11:56 AM Jim Rollenhagen 
<j...@jimrollenhagen.com> wrote:
>
> On Thu, Oct 18, 2018 at 10:23 AM Dmitry Tantsur <dtant...@redhat.com> wrote:
>>
>> Hi all,
>>
>> Sorry for chiming in really late in this topic, but I think $subj is worth
>> discussing until we settle harder on the potentially confusing terminology.
>>
>> I think the difference between "Edge" and "Far Edge" is too vague to use 
>> these
>> terms in practice. Think about the "edge" metaphor itself: something rarely 
>> has
>> several layers of edges. A knife has an edge, there are no far edges. I 
>> imagine
>> zooming in and seeing more edges at the edge, and then it's quite cool 
>> indeed,
>> but is it really a useful metaphor for those who never used a strong 
>> microscope? :)
>>
>> I think in the trivial sense "Far Edge" is a tautology, and should be 
>> avoided.
>> As a weak proof of my words, I already see a lot of smart people confusing 
>> these
>> two and actually use Central/Edge where they mean Edge/Far Edge. I suggest we
>> adopt a different terminology, even if it is less consistent with typical 
>> marketing
>> terms around the "Edge" movement.
>
>
> FWIW, we created rough definitions of "edge" and "far edge" during the edge 
> WG session in Denver.
> It's mostly based on latency to the end user, though we also talked about 
> quantities of compute resources, if someone can find the pictures.

Perhaps these are the pictures Jim was referring to?
 

[openstack-dev] [tc][all] TC office hours is started now on #openstack-tc

2018-10-23 Thread Ghanshyam Mann
Hi All, 

The TC office hour has started on the #openstack-tc channel. Feel free to 
reach out to us with anything you want to discuss or anything you need 
input, feedback, or help with from the TC.

-gmann 





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] xstatic-bootstrap-datepicker and twitter-bootstrap dependency

2018-10-23 Thread Thomas Goirand
Hi,

The python3-xstatic-bootstrap-datepicker Debian package runtime depends
on libjs-twitter-bootstrap-datepicker which itself depends on
libjs-twitter-bootstrap, which is produced by the twitter-bootstrap
source package. The twitter-bootstrap will go away from Debian Buster,
as per https://bugs.debian.org/907724

So a few questions here:
- Do I really need to have libjs-twitter-bootstrap-datepicker depend on
libjs-twitter-bootstrap (which is version 2 of bootstrap)?
- Is Horizon using bootstrap 3?
- What action does the Horizon team suggest to keep Horizon working in
Debian?

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][neutron-release] feature/graphql branch rebase

2018-10-23 Thread Gilles Dubreuil

Hi Miguel,

Thank you for your help.

I'll use those precious instructions next time.

Cheers,
Gilles

On 16/10/18 1:32 am, Miguel Lavalle wrote:

Hi Gilles,

The merge of master into feature/graphql  has been approved: 
https://review.openstack.org/#/c/609455. In the future, you can create 
your own merge patch following the instructions here: 
https://docs.openstack.org/infra/manual/drivers.html#merge-master-into-feature-branch. 
The Neutron team will catch it in Gerrit and review it.


Regards

Miguel

On Thu, Oct 4, 2018 at 11:44 PM Gilles Dubreuil wrote:


Hey Neutron folks,

I'm just reiterating the request.

Thanks


On 20/06/18 11:34, Gilles Dubreuil wrote:
> Could someone from the Neutron release group rebase feature/graphql
> branch against master/HEAD branch please?
>
> Regards,
> Gilles
>
>


--
Gilles Dubreuil
Senior Software Engineer - Red Hat - Openstack DFG Integration
Email: gil...@redhat.com
GitHub/IRC: gildub
Mobile: +61 400 894 219

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev