[Yahoo-eng-team] [Bug 1697426] Re: DHCP request packet is not forwarded from br-int to br-tun

2017-08-18 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697426

Title:
  DHCP request packet is not forwarded from br-int to br-tun

Status in neutron:
  Expired

Bug description:
  Hi guys,

  I hit an issue: my VM fails to get an IP address from the neutron DHCP
  agent. I can capture the DHCP request packet on br-int with tcpdump,
  but I never see the packet on br-tun.

  If I configure a static IP address, the network works fine.

  How can I troubleshoot this issue?

  My neutron is the Ocata release and the ML2 mechanism driver is Open
  vSwitch.

  [root@cloud-sz-compute-f19-01 ~]# tcpdump -i br-int -enn
  09:06:26.614849 fa:16:3e:df:77:45 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 346: vlan 2, p 0, ethertype IPv4, 0.0.0.0.68 > 
255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:df:77:45, length 300
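
  If the request appears on br-int but never on br-tun, the next suspect
  is the OVS flow tables (ovs-ofctl dump-flows br-int and ovs-appctl
  ofproto/trace can help show where the packet is dropped). A quick way
  to confirm where the packet disappears is to watch both bridges for
  DHCP traffic at once; below is a minimal diagnostic sketch, assuming
  scapy is installed and both bridges are visible as capture interfaces:

  # Hypothetical diagnostic helper (not part of neutron); run as root.
  # Prints every DHCP/BOOTP packet seen on br-int and br-tun.
  import threading
  from scapy.all import sniff

  def watch(iface):
      def report(pkt):
          print("%s: %s" % (iface, pkt.summary()))
      # BPF filter matching the BOOTP/DHCP ports only
      sniff(iface=iface, filter="udp and (port 67 or port 68)",
            prn=report, store=False)

  for bridge in ("br-int", "br-tun"):
      threading.Thread(target=watch, args=(bridge,)).start()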

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1697426/+subscriptions



[Yahoo-eng-team] [Bug 1709938] Re: DefaultSubnetPoolsTest is racy

2017-08-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/492653
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=637734c1b6d5a158374576ce27941a04363fc8bb
Submitter: Jenkins
Branch: master

commit 637734c1b6d5a158374576ce27941a04363fc8bb
Author: Jakub Libosvar 
Date:   Thu Aug 10 17:04:45 2017 +

Fix DefaultSubnetPool API test

As the default subnetpool is a unique resource in the cloud, it needs to be
cleaned up after each test is done. This patch adds a cleanup call to the
DefaultSubnetPool tests in order to delete the created default subnet pool.

Change-Id: I4c963d0d0e9910f7047061b51feb36c8a19de65c
Closes-bug: #1709938
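
The cleanup pattern the patch refers to looks roughly like the sketch
below; it is illustrative only (the client calls and names are
assumptions drawn from tempest's usual scaffolding, not the exact diff):

    # Sketch: register a cleanup so the unique default pool is deleted
    # even if the test fails partway through; base/admin_client are
    # assumed to come from tempest's network test base classes.
    class DefaultSubnetPoolsTest(base.BaseAdminNetworkTest):

        def test_admin_create_default_subnetpool(self):
            body = self.admin_client.create_subnetpool(
                name='tempest-default-pool',   # hypothetical name
                prefixes=['10.11.12.0/24'],
                min_prefixlen=29,
                is_default=True)
            pool_id = body['subnetpool']['id']
            self.addCleanup(self.admin_client.delete_subnetpool, pool_id)
            self.assertTrue(body['subnetpool']['is_default'])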


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709938

Title:
  DefaultSubnetPoolsTest is racy

Status in neutron:
  Fix Released

Bug description:
  A default subnetpool can exist only once in a cloud, and there are two
  tests that create a default pool and one that updates a pool to be the
  default. The tests run in parallel, while the check for an existing
  default subnetpool is done only at the class level. So it happens that:

   1) the class checks for a default subnetpool, and it's not there
   2) test1 creates a default subnetpool -> fine, we now have our unique resource
   3) test2 creates a default subnetpool -> the error we see, because test1
      already holds the default

  From the tempest logs:

  Step1:
  2017-08-10 07:03:12.341 3008 INFO tempest.lib.common.rest_client 
[req-07271c2f-6725-4f77-b676-a55a95adbf7b ] Request 
(DefaultSubnetPoolsTest:setUpClass): 200 GET 
http://10.0.0.103:9696/v2.0/subnetpools 0.418s
  Request - Headers: {'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'status': '200', u'content-length': '18', 
'content-location': 'http://10.0.0.103:9696/v2.0/subnetpools', u'date': 'Thu, 
10 Aug 2017 11:03:12 GMT', u'content-type': 'application/json', u'connection': 
'close', u'x-openstack-request-id': 'req-07271c2f-6725-4f77-b676-a55a95adbf7b'}
  Body: {"subnetpools":[]} _log_request_full 
/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py:425

  Step2:
  2017-08-10 07:03:12.998 3008 INFO tempest.lib.common.rest_client 
[req-6322524f-d1d8-4c7c-abd4-f08862bcec60 ] Request 
(DefaultSubnetPoolsTest:test_admin_create_default_subnetpool): 201 POST 
http://10.0.0.103:9696/v2.0/subnetpools 0.655s
  2017-08-10 07:03:12.998 3008 DEBUG tempest.lib.common.rest_client 
[req-6322524f-d1d8-4c7c-abd4-f08862bcec60 ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: {"subnetpool": {"is_default": true, "prefixes": 
["10.11.12.0/24"], "name": "tempest-smoke-subnetpool-2026337716", 
"min_prefixlen": "29"}}
  Response - Headers: {'status': '201', u'content-length': '508', 
'content-location': 'http://10.0.0.103:9696/v2.0/subnetpools', u'date': 'Thu, 
10 Aug 2017 11:03:12 GMT', u'content-type': 'application/json', u'connection': 
'close', u'x-openstack-request-id': 'req-6322524f-d1d8-4c7c-abd4-f08862bcec60'}
  Body: 
{"subnetpool":{"is_default":true,"description":"","default_quota":null,"tenant_id":"542c5acbca3f49a0bc89d0903eb5c7e5","created_at":"2017-08-10T11:03:12Z","tags":[],"updated_at":"2017-08-10T11:03:12Z","prefixes":["10.11.12.0/24"],"min_prefixlen":"29","max_prefixlen":"32","address_scope_id":null,"revision_number":0,"ip_version":4,"shared":false,"default_prefixlen":"29","project_id":"542c5acbca3f49a0bc89d0903eb5c7e5","id":"dd1b15f4-0dc1-4582-9435-394a5b2bdea9","name":"tempest-smoke-subnetpool-2026337716"}}
 _log_request_full 
/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py:425

  Step3:
  2017-08-10 07:03:15.667 3008 INFO tempest.lib.common.rest_client 
[req-24e48bfe-473f-44e0-aaf4-8f2debf81a0e ] Request 
(DefaultSubnetPoolsTest:test_convert_subnetpool_to_default_subnetpool): 400 PUT 
http://10.0.0.103:9696/v2.0/subnetpools/fb199e24-a9e2-443f-81cc-3c07c3bd7a20 
0.842s
  2017-08-10 07:03:15.668 3008 DEBUG tempest.lib.common.rest_client 
[req-24e48bfe-473f-44e0-aaf4-8f2debf81a0e ] Request - Headers: {'Content-Type': 
'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: {"subnetpool": {"is_default": true}}
  Response - Headers: {'status': '400', u'content-length': '203', 
'content-location': 
'http://10.0.0.103:9696/v2.0/subnetpools/fb199e24-a9e2-443f-81cc-3c07c3bd7a20', 
u'date': 'Thu, 10 Aug 2017 11:03:15 GMT', u'content-type': 'application/json', 
u'connection': 'close', u'x-openstack-request-id': 
'req-24e48bfe-473f-44e0-aaf4-8f2debf81a0e'}
  Body: {"NeutronError": {"message": "Invalid input for operation: A 
default subnetpool for this IP family has already been set. Only one default 
may exist per IP family.", "type": "InvalidInput", "detail": ""}} 
_log_request_full 

[Yahoo-eng-team] [Bug 1711767] [NEW] TypeError in __call__() while trying to bring up nova-api

2017-08-18 Thread raminder
Public bug reported:

I have Nova (15.0.4-1.el7).

When I try to bring up nova-api on my controller, I get this
error:

2017-08-18 16:35:40.236 15107 CRITICAL nova [req-
28985cc3-c33b-4538-aa55-cedd9ab05c70 - - - - -] TypeError: __call__()
takes exactly 3 arguments (2 given); got (,
{'__file...va'}, keystone=..., noauth2=...), wanted (loader,
global_conf, **local_conf)


I wanted to paste my api-paste.ini here, but decided not to in order to
keep my initial post concise. If you need a specific section of that
file, let me know.

This is what I have done so far:
1. Installed nova per the instructions on the OpenStack website:
yum install -y openstack-nova-api openstack-nova-conductor 
openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler

2. Edited the /etc/nova/nova.conf file per my local environment.
One line that I added under the [keystone_authtoken] section while
troubleshooting was:
service_token_roles_required = true

3. In my api-paste.ini file, request_log was set as follows:
[filter:request_log]
paste.filter_factory = nova.api.openstack.requestlog:RequestLog.factory

But I got errors while trying to load the requestlog module. First it was not
able to locate the module, and when I tried to give it the path, it was
complaining about the factory attribute, so my final settings look like this:
[filter:request_log]
paste.filter_factory = nova.api.openstack.placement.requestlog:RequestLog
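
For reference, paste.filter_factory has to resolve to a factory callable
with the signature factory(global_conf, **local_conf) that returns a
filter taking the wrapped app; pointing it at the bare middleware class
instead is consistent with the TypeError below. A minimal sketch of the
expected shape (illustrative names, not nova's actual middleware):

    # Illustrative WSGI filter with a paste-compatible factory.
    class RequestLog(object):
        def __init__(self, application):
            self.application = application

        def __call__(self, environ, start_response):
            # log the request here, then delegate to the wrapped app
            return self.application(environ, start_response)

        @classmethod
        def factory(cls, global_conf, **local_conf):
            # paste calls this with the global and per-section options
            # and expects back a callable that wraps the app
            def _filter(app):
                return cls(app)
            return _filter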

4. When I start openstack-nova-api (with the log level set to debug),
/var/log/nova/nova-api.log gives me the following error before it exits:


2017-08-18 16:35:40.236 15107 CRITICAL nova 
[req-28985cc3-c33b-4538-aa55-cedd9ab05c70 - - - - -] TypeError: __call__() 
takes exactly 3 arguments (2 given); got (, {'__file...va'}, 
keystone=..., noauth2=...), wanted (loader, global_conf, **local_conf)
2017-08-18 16:35:40.236 15107 ERROR nova Traceback (most recent call last):
2017-08-18 16:35:40.236 15107 ERROR nova   File "/usr/bin/nova-api", line 10, 
in 
2017-08-18 16:35:40.236 15107 ERROR nova sys.exit(main())
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/nova/cmd/api.py", line 59, in main
2017-08-18 16:35:40.236 15107 ERROR nova server = service.WSGIService(api, 
use_ssl=should_use_ssl)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 311, in __init__
2017-08-18 16:35:40.236 15107 ERROR nova self.app = 
self.loader.load_app(name)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/nova/wsgi.py", line 497, in load_app
2017-08-18 16:35:40.236 15107 ERROR nova return deploy.loadapp("config:%s" 
% self.config_path, name=name)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in 
loadapp
2017-08-18 16:35:40.236 15107 ERROR nova return loadobj(APP, uri, 
name=name, **kw)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in 
loadobj
2017-08-18 16:35:40.236 15107 ERROR nova return context.create()
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2017-08-18 16:35:40.236 15107 ERROR nova return 
self.object_type.invoke(self)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2017-08-18 16:35:40.236 15107 ERROR nova **context.local_conf)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 58, in fix_call
2017-08-18 16:35:40.236 15107 ERROR nova reraise(*exc_info)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/compat.py", line 23, in reraise
2017-08-18 16:35:40.236 15107 ERROR nova exec('raise t, e, tb', dict(t=t, 
e=e, tb=tb))
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call
2017-08-18 16:35:40.236 15107 ERROR nova val = callable(*args, **kw)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/urlmap.py", line 160, in 
urlmap_factory
2017-08-18 16:35:40.236 15107 ERROR nova app = loader.get_app(app_name, 
global_conf=global_conf)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 350, in 
get_app
2017-08-18 16:35:40.236 15107 ERROR nova name=name, 
global_conf=global_conf).create()
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2017-08-18 16:35:40.236 15107 ERROR nova return 
self.object_type.invoke(self)
2017-08-18 16:35:40.236 15107 ERROR nova   File 
"/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2017-08-18 16:35:40.236 15107 ERROR 

[Yahoo-eng-team] [Bug 1711739] [NEW] OpenStack Error during instance creation with Cirros

2017-08-18 Thread JOHNNY DONALD DARWIN
Public bug reported:

I am pasting the error below:


root@controller:~/scripts_OpenStack# openstack server create --flavor m1.nano 
--image cirros \
>   --nic net-id=5230655d-c667-48fd-b4a4-1acd83bef6f4 --security-group default \
>   --key-name mykey selfservice-instance
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 
500) (Request-ID: req-363e4a19-b09c-48e2-9003-d73ee049aa31)

Below is nova-api.log:

2017-08-19 00:11:04.871 6149 INFO nova.osapi_compute.wsgi.server 
[req-93aa2c08-9121-4354-8503-5d9ba546f287 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/servers/detail?all_tenants=True=21 
HTTP/1.1" status: 200 len: 347 time: 0.6775930
2017-08-19 00:11:10.658 6149 INFO nova.osapi_compute.wsgi.server 
[req-1ae58526-b120-4666-8422-bd6f7e190b62 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/extensions HTTP/1.1" status: 200 len: 
23035 time: 2.4226429
2017-08-19 00:12:13.221 6149 INFO nova.osapi_compute.wsgi.server 
[req-071240b1-514e-48ac-9ffd-a1de30e24103 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/extensions HTTP/1.1" status: 200 len: 
23035 time: 8.1807971
2017-08-19 00:13:32.518 6149 INFO nova.osapi_compute.wsgi.server 
[req-d996ba97-49bc-43f0-ad24-8906210fcdf9 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "POST 
/v2.1/3e62a5af4ac7af1df294d9e357d4/flavors HTTP/1.1" status: 200 len: 752 
time: 0.3743351
2017-08-19 00:13:32.553 6149 INFO nova.osapi_compute.wsgi.server 
[req-d9430a50-d1b1-451a-9b94-a08b3aa67f7c 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/flavors/0/os-extra_specs HTTP/1.1" 
status: 200 len: 351 time: 0.0311458
2017-08-19 00:14:31.974 6149 INFO nova.osapi_compute.wsgi.server 
[req-b46000da-b0c8-44b4-88dd-85d96c85e906 035e3002ea5f42c6a6842763d7fbf0c7 
465b7d3adad64088a70833964bdc4e2d - default default] 10.20.30.31 "POST 
/v2.1/465b7d3adad64088a70833964bdc4e2d/os-keypairs HTTP/1.1" status: 200 len: 
890 time: 0.3092380
2017-08-19 00:15:52.389 6149 INFO nova.osapi_compute.wsgi.server 
[req-6bcfa8c3-2c25-4b8b-bd83-9afe04b0c683 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/flavors/detail HTTP/1.1" status: 200 
len: 755 time: 0.3320630
2017-08-19 00:17:41.962 6149 INFO nova.api.openstack.wsgi 
[req-dbddc9d4-3cbf-4c4f-909a-26ece959d4e8 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] HTTP exception thrown: 
Flavor m1.nano could not be found.
2017-08-19 00:17:41.984 6149 INFO nova.osapi_compute.wsgi.server 
[req-dbddc9d4-3cbf-4c4f-909a-26ece959d4e8 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/flavors/m1.nano HTTP/1.1" status: 404 
len: 434 time: 0.2477911
2017-08-19 00:17:42.150 6149 INFO nova.api.openstack.wsgi 
[req-bc34b129-d6a3-45f5-977d-698d98f5def3 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] HTTP exception thrown: 
Flavor m1.nano could not be found.
2017-08-19 00:17:42.155 6149 INFO nova.osapi_compute.wsgi.server 
[req-bc34b129-d6a3-45f5-977d-698d98f5def3 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/flavors/m1.nano HTTP/1.1" status: 404 
len: 434 time: 0.1630840
2017-08-19 00:17:42.328 6149 INFO nova.osapi_compute.wsgi.server 
[req-c2bdb426-1990-4197-b8b2-9323f0aa3ffc 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/flavors HTTP/1.1" status: 200 len: 586 
time: 0.1631751
2017-08-19 00:17:42.430 6149 INFO nova.osapi_compute.wsgi.server 
[req-952ee384-2159-4eae-8eca-9dbbb7117161 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "GET 
/v2.1/3e62a5af4ac7af1df294d9e357d4/flavors/0 HTTP/1.1" status: 200 len: 752 
time: 0.0938671
2017-08-19 00:17:46.905 6149 INFO nova.api.openstack.wsgi 
[req-539f5519-45df-45b5-a40d-17bddca3fd90 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] HTTP exception thrown: 
Invalid key_name provided.
2017-08-19 00:17:46.915 6149 INFO nova.osapi_compute.wsgi.server 
[req-539f5519-45df-45b5-a40d-17bddca3fd90 203090887fbe47a8be95ee3a4107b1f2 
3e62a5af4ac7af1df294d9e357d4 - default default] 10.20.30.30 "POST 
/v2.1/3e62a5af4ac7af1df294d9e357d4/servers HTTP/1.1" status: 400 len: 426 
time: 3.6345561
2017-08-19 

[Yahoo-eng-team] [Bug 1711468] Re: interoperable image import requires exposing the tasks api

2017-08-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/494732
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=b90ad2524fd1c80e33930191b415c67a91904fd9
Submitter: Jenkins
Branch: master

commit b90ad2524fd1c80e33930191b415c67a91904fd9
Author: Brian Rosmaita 
Date:   Thu Aug 17 18:21:25 2017 -0400

Add 'tasks_api_access' policy

The Tasks API was made admin-only in Mitaka to prevent it from being
exposed directly to end users.  The interoperable image import
process introduced in Pike uses the tasks engine to perform the
import.  This patch introduces a new policy, 'tasks_api_access',
that determines whether a user can make Tasks API calls.

The currently existing task-related policies are retained so that
operators can have fine-grained control over tasks.  With this
new policy, operators can restrict Tasks API access to admins,
while at the same time, admin-level credentials are not required
for glance to perform task-related functions on behalf of users.

Change-Id: I3f66f7efa7c377d999a88457fc6492701a894f34
Closes-bug: #1711468


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1711468

Title:
  interoperable image import requires exposing the tasks api

Status in Glance:
  Fix Released

Bug description:
  The Tasks API was made admin-only in Mitaka by changing the get_task,
  get_tasks, add_task, and modify_task policies to require "role:admin"
  by default.  The interoperable image import process introduced in Pike
  requires an ordinary user to have (at least) the add_task permission
  (although the user does not create the task directly, and in fact,
  should have no knowledge that a task is being used behind the scenes
  to do the image import).

  We need a way to allow non-admin credentials to manipulate tasks, but
  not allow access to tasks directly via the Tasks API.
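
  With the new policy, a deployment can keep the individual task policies
  permissive for glance's own use while gating direct API access; an
  illustrative policy.json fragment (a sketch of the idea, not necessarily
  the exact defaults the patch ships):

      {
          "tasks_api_access": "role:admin",
          "get_task": "",
          "get_tasks": "",
          "add_task": "",
          "modify_task": ""
      }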

  It would be nice to get this resolved in Pike.  Otherwise operators
  may not want to try out the interoperable image import.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1711468/+subscriptions



[Yahoo-eng-team] [Bug 1710958] Re: glance image stage fails with 500

2017-08-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/494036
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=e17a349306e1a4ae287ce69cae0a545acfcf6b27
Submitter: Jenkins
Branch: master

commit e17a349306e1a4ae287ce69cae0a545acfcf6b27
Author: Brian Rosmaita 
Date:   Tue Aug 15 19:14:22 2017 -0400

Fix 500 error from image-stage call

Adds a RequestDeserializer for the stage and adjusts the image
status transitions so that they can handle the 'uploading' status
of an image with data in the stage.

Closes-bug: #1710958

Change-Id: I6f1cfe44a01542bc93a43cbd518956686adb366d


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1710958

Title:
  glance image stage fails with 500

Status in Glance:
  Fix Released

Bug description:
  Returns a 500 when enable_image_import=False.  Should probably return
  a 404.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1710958/+subscriptions



[Yahoo-eng-team] [Bug 1711018] Re: glanceclient chokes on task-list, task schema needs update

2017-08-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/494063
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=b6e4ddaf42af2d4169681969bd288b678a65ae32
Submitter: Jenkins
Branch: master

commit b6e4ddaf42af2d4169681969bd288b678a65ae32
Author: Brian Rosmaita 
Date:   Tue Aug 15 22:46:32 2017 -0400

Add 'api_image_import' type to task(s) schemas

The glanceclient relies on the schemas being accurate so it can
format responses.  Without this update, admins won't be able to
use the glanceclient to generate a task list containing the
api_image_import task that the task engine uses to process the
new image import process being introduced in Pike.

Closes-bug: #1711018

Change-Id: I5bcc9f4cdc55635809e8a90be555a367348a58c2


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1711018

Title:
  glanceclient chokes on task-list, task schema needs update

Status in Glance:
  Fix Released

Bug description:
  $ glance task-list
  u'api_image_import' is not one of [u'import']

  Failed validating u'enum' in schema[u'properties'][u'type']:
  {u'description': u'The type of task represented by this content',
   u'enum': [u'import'],
   u'type': u'string'}

  On instance[u'type']:
  u'api_image_import'

  
  The GET /v2/tasks call works via the API.  The problem is that the 
glanceclient is using the tasks schema to validate the response, so this does 
need to be fixed on the API side.
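
  Presumably the fix widens the enum in the task schema so the new task
  type validates; sketched in the same style as the error output above:

      {u'description': u'The type of task represented by this content',
       u'enum': [u'import', u'api_image_import'],
       u'type': u'string'}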

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1711018/+subscriptions



[Yahoo-eng-team] [Bug 1711603] [NEW] Unable to Launch instance using ceph boot volume

2017-08-18 Thread Pranita Desai
Public bug reported:

I have deployed the OpenStack base bundle
cs:~openstack-charmers-next/bundle/openstack-base-xenial-ocata-0, which
deploys ceph as the storage backend:
https://jujucharms.com/u/openstack-charmers-next/openstack-base-xenial-ocata/

I am unable to launch an instance using a ceph volume as the boot volume.
Launching an instance from a glance image also fails. I am getting the
errors below in nova-compute.log:


2017-08-18 12:24:02.691 1708784 ERROR nova.virt.libvirt.driver 
[req-72b3771b-209c-415f-b14c-4e885a72f0b8 ef1fea1a88c744ecb96d08bb56fdc412 
5a41e5c16d2c4299b23d26b87ff994aa - - -] [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] Failed to start libvirt guest
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager 
[req-72b3771b-209c-415f-b14c-4e885a72f0b8 ef1fea1a88c744ecb96d08bb56fdc412 
5a41e5c16d2c4299b23d26b87ff994aa - - -] [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] Instance failed to spawn
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] Traceback (most recent call last):
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2124, in 
_build_resources
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] yield resources
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1930, in 
_build_and_run_instance
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] block_device_info=block_device_info)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2698, in 
spawn
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] destroy_disks_on_failure=True)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5114, in 
_create_domain_and_network
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] destroy_disks_on_failure)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] self.force_reraise()
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] six.reraise(self.type_, self.value, 
self.tb)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5086, in 
_create_domain_and_network
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] post_xml_callback=post_xml_callback)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5004, in 
_create_domain
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] guest.launch(pause=pause)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/guest.py", line 145, in 
launch
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] self._encoded_xml, errors='ignore')
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] self.force_reraise()
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager [instance: 
0ff1f63d-e25f-4c75-97a5-05f02f1e6aa0] six.reraise(self.type_, self.value, 
self.tb)
2017-08-18 12:24:03.015 1708784 ERROR nova.compute.manager 

[Yahoo-eng-team] [Bug 1711555] [NEW] Image Visibility field should represent the selection

2017-08-18 Thread serlex
Public bug reported:

Hi,

When viewing images at URL /project/images, the Visibility selection is
Private or Public.

If you select Public and save, the table shows Public under Visibility.

However, if you select Private, the images table shows Shared under
Visibility.

Shouldn't this be consistent?

Regards

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1711555

Title:
  Image Visibility field should represent the selection

Status in OpenStack Dashboard (Horizon):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1711555/+subscriptions



[Yahoo-eng-team] [Bug 1711553] [NEW] Error while modifying the instances filter on page /project/instances

2017-08-18 Thread Ferenc Cserepkei
Public bug reported:

Description
===
Recoverable error: Unexpected API Error.
 (HTTP 500) (Request-ID: 
req-99afec9c-b02f-4a64-b718-0109b1836743)

Steps to reproduce
==
1. Launch 250+ CirrOS instances with flavor cirros64 (same as cirros256 but
with 64 MB RAM).
2. Navigate to /project/instances in Horizon.
3. Select instance name, enter a prefix containing '0', then press
Filter.

Expected result
===
Instance list

Actual result
=
Error: Unable to retrieve instances
The following appeared in devstack nova log: 
http://paste.openstack.org/raw/618749/

Environment
===
I'm using devstack fc2919f; the setup is running on a Dell T7810 and has 251
instances launched and running.
Hypervisor: libvirt+kvm
Storage: LVM shipped with devstack
Networking: Neutron with Open vSwitch


Logs & Configs
==
base64 encoded  sosreport-trunkport-20170818113155.tar.xz : 
http://paste.openstack.org/raw/618752/

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1711553

Title:
  Error while modifying the instances filter on page /project/instances

Status in OpenStack Compute (nova):
  New


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1711553/+subscriptions



[Yahoo-eng-team] [Bug 1711547] [NEW] Nova service restart disconnects vzstorage volumes

2017-08-18 Thread Carlo Baijens
Public bug reported:

Description
===
Virtuozzo storage mounts end up in an invalid state after an OpenStack
Nova Compute service restart.

Steps to reproduce
==
Running the OpenStack Ocata release with the Virtuozzo KVM hypervisor on
Vstorage backed by Cinder. A restart of the openstack-nova-compute service
causes vzstorage mounts to become unreachable. The restart keeps the VMs
that use the mount running, but the filesystem is unreachable.

Before restart:
[user@node ~]$ sudo systemctl status  -l openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; 
enabled; vendor preset: disabled)
   Active: active (running) since Mon 2017-08-14 13:47:00 CEST; 1 day 23h ago
 Main PID: 3932 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
   ├─3932 /usr/bin/python2 /usr/bin/nova-compute
   ├─4251 /usr/bin/python2 /bin/privsep-helper --config-file 
/usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf 
--privsep_context vif_plug_linux_bridge.privsep.vif_plug --privsep_sock_path 
/tmp/tmpe2LS_d/privsep.sock
   ├─4454 /usr/bin/python2 /bin/privsep-helper --config-file 
/usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf 
--privsep_context os_brick.privileged.default --privsep_sock_path 
/tmp/tmpOXZxyO/privsep.sock
   └─4498 pstorage-mount -c openstackpoc -u nova -g root -m 0770 -l 
/var/log/vstorage/nova-openstackpoc.log.gz 
/var/lib/nova/mnt/b2b894afb0fb3e4734c87ad01eee

We restart the service:
sudo systemctl restart openstack-nova-compute.service

After this we see:

[user@node ~]$ sudo systemctl status  -l openstack-nova-compute.service
● openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; 
enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-08-16 13:15:26 CEST; 50s ago
 Main PID: 199753 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
   ├─199753 /usr/bin/python2 /usr/bin/nova-compute
   └─199798 /usr/bin/python2 /bin/privsep-helper --config-file 
/usr/share/nova/nova-dist.conf --config-file /etc/nova/nova.conf 
--privsep_context vif_plug_linux_bridge.privsep.vif_plug --privsep_sock_path 
/tmp/tmpjDaH7y/privsep.sock

The vstorage-mount is not reinitialised by nova-compute.

Expected result
===
The vstorage-mount is available after nova-compute restart.

Actual result
=
The vstorage-mount is not reinitialised by nova-compute.

Nova mounts do not disappear after restart but are unreachable:

[user@node ~]$ sudo lsof | grep 
/var/lib/nova/mnt/b2b894afb0fb3e4734c87ad01eee
lsof: WARNING: can't stat() fuse.vstorage file system 
/var/lib/nova/mnt/b2b894afb0fb3e4734c87ad01eee
  Output information may be incomplete.
qemu-kvm7366   root   18u  unknown  

/var/lib/nova/mnt/b2b894afb0fb3e4734c87ad01eee/volume-7d555095-f36e-4673-ad10-30786d105270
 (stat: Transport endpoint is not connected)
qemu-kvm7366   7370root   18u  unknown  

/var/lib/nova/mnt/b2b894afb0fb3e4734c87ad01eee/volume-7d555095-f36e-4673-ad10-30786d105270
 (stat: Transport endpoint is not connected)

We currently implemented a workaround by adding

KillMode=process

to the openstack-nova-compute unit file override. This leaves the
vzstorage mounts open and keeps the VMs running.
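
For reference, a minimal drop-in sketch of that override (the drop-in
path is an assumption; run systemctl daemon-reload after creating it):

    # /etc/systemd/system/openstack-nova-compute.service.d/override.conf
    [Service]
    KillMode=process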

Environment
===
1. OpenStack Ocata release 15.0.3-2.el7

2. Which hypervisor did you use?

qemu-kvm-vz 2.6.0-28.3.9.vz7.56.1

3. Which storage type did you use?

Vstorage, 7.4.106-1

4. Which networking type did you use?
Neutron with Linuxbridge

** Affects: nova
 Importance: Undecided
 Status: New

[Yahoo-eng-team] [Bug 1621709] Re: There is no allocation record for migration action

2017-08-18 Thread Chris Dent
This is effectively a duplicate of #1707071, which has been released,
so I'm going to mark this as such.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1621709

Title:
  There is no allocation record for migration action

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In the current resource tracker (RT), a migration is considered to
  consume resources, but we didn't update any allocation record in the
  Placement API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1621709/+subscriptions



[Yahoo-eng-team] [Bug 1653122] Re: Placement API should support DELETE /resource-providers/{uuid}/inventories

2017-08-18 Thread Chris Dent
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653122

Title:
  Placement API should support DELETE /resource-
  providers/{uuid}/inventories

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is a small feature request.

  Currently (version 1.3 or before of the placement API), in order to
  delete all inventory for a resource provider, one must call PUT
  /resource_providers/{uuid}/inventories and pass in the following
  request payload:

  {
      'generation': <generation>,
      'resources': {}
  }

  It would be easier and more intuitive to support DELETE
  /resource_providers/{uuid}/inventories with no request payload,
  returning a 204 No Content on success.
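
  Sketched as a client call, the requested behaviour would look like the
  following (the endpoint, token, and microversion header are
  illustrative assumptions):

      # Hypothetical client-side sketch of the requested DELETE call.
      import requests

      PLACEMENT = "http://placement.example.com"  # assumed endpoint
      RP_UUID = "4e8e5957-649f-477b-9e5b-f1f75b21c03c"  # assumed uuid

      resp = requests.delete(
          PLACEMENT + "/resource_providers/%s/inventories" % RP_UUID,
          headers={
              "X-Auth-Token": "...",  # a valid keystone token
              # microversion that would expose the new call (assumed)
              "OpenStack-API-Version": "placement 1.5",
          })
      assert resp.status_code == 204  # No Content on success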

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1653122/+subscriptions



[Yahoo-eng-team] [Bug 1661312] Re: Evacuation will corrupt instance allocations

2017-08-18 Thread Chris Dent
https://bugs.launchpad.net/nova/+bug/1709902 duplicates this, and that
one has code, so I am invalidating this one.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661312

Title:
  Evacuation will corrupt instance allocations

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The following sequence of events will result in a corrupted instance
  allocation in placement:

  1. Instance running on host A, placement has allocations for instance on host 
A
  2. Host A goes down
  3. Instance is evacuated to host B, host B creates duplicated allocations in 
placement for instance
  4. Host A comes up, notices that instance is gone, deletes all allocations 
for instance on both hosts A and B
  5. Instance now has no allocations for a period
  6. Eventually, host B will re-create the allocations for the instance

  The period between #4 and #6 will have the scheduler making bad
  decisions because it thinks host B is less loaded than it is.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661312/+subscriptions



[Yahoo-eng-team] [Bug 1705250] Re: OpenStack Administrator Guides: missing index for murano, cinder & keystone page

2017-08-18 Thread wangchao
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1705250

Title:
  OpenStack Administrator Guides: missing index for murano, cinder &
  keystone page

Status in Cinder:
  New
Status in OpenStack Identity (keystone):
  Fix Released
Status in Murano:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  These hrefs on https://docs.openstack.org/admin/ are generating
  directory listings instead of proper pages (missing index.html?):

  Block Storage service (cinder)  (/cinder/latest/admin/)
  Identity service (keystone)  (/keystone/latest/admin/)
  Application Catalog service (murano)  (/murano/latest/admin/)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1705250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp