[Yahoo-eng-team] [Bug 1731948] Re: Wrong OVO classes registered in some cases

2017-11-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/519622
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5e08a9b0e7d4f99d217ca73c6aa37e52a13c5d5a
Submitter: Zuul
Branch: master

commit 5e08a9b0e7d4f99d217ca73c6aa37e52a13c5d5a
Author: Sławek Kapłoński 
Date:   Tue Nov 14 20:36:39 2017 +

[OVO] Switch to use own registry

Neutron will now use its own registry for versioned objects.
This avoids problems with loading the wrong OVO objects from
different projects (like os_vif) when the names are the same.

Change-Id: I9d4fab591fbe52271c613251321a6d03078976f7
Closes-Bug: #1731948


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1731948

Title:
  Wrong OVO classes registered in some cases

Status in neutron:
  Fix Released
Status in oslo.versionedobjects:
  New

Bug description:
  When patch https://review.openstack.org/#/c/321001 was merged, some unit
  tests in projects like networking-midonet started failing. This is reported
  at https://bugs.launchpad.net/networking-midonet/+bug/1731623

  It looks like the reason for this problem is that the wrong OVO classes
  are registered when, for example, two different projects use the same
  names for their OVO objects.

  I checked it a little and it looks like neutron.objects.subnet.Subnet got
  the os_vif.objects.route.Route object registered instead of
  neutron.objects.subnet.Route; see my logs from one example test:
  http://paste.openstack.org/show/626170/
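
  For context, a minimal sketch of the approach the fix above takes: a
  project-local registry subclass that keeps its own class map, so
  registrations coming from other projects (such as os_vif) cannot shadow
  Neutron objects with the same name. The class name and details here are
  illustrative, not a copy of the merged patch.

    import collections

    from oslo_versionedobjects import base as obj_base

    class NeutronObjectRegistry(obj_base.VersionedObjectRegistry):
        """Registry whose class map is private to this project."""

        _registry = None

        def __new__(cls, *args, **kwargs):
            # The parent registry is a process-wide singleton shared by
            # every consumer of oslo.versionedobjects; keeping a separate
            # singleton (with its own _obj_classes map) avoids the name
            # collisions described above.
            if NeutronObjectRegistry._registry is None:
                NeutronObjectRegistry._registry = object.__new__(cls)
                NeutronObjectRegistry._registry._obj_classes = (
                    collections.defaultdict(list))
            self = object.__new__(cls)
            self._obj_classes = NeutronObjectRegistry._registry._obj_classes
            return self

  Objects decorated with NeutronObjectRegistry.register then land only in
  this map, regardless of what other projects register globally.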

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1731948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618878] Re: Disabling IPv6 on an interface fails if IPv6 is completely disabled in the kernel

2017-11-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/363634
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=bfe947b26266e13251b7ba972d8b57e67e9ebb02
Submitter: Zuul
Branch: master

commit bfe947b26266e13251b7ba972d8b57e67e9ebb02
Author: Adrien Cunin 
Date:   Sun Sep 4 21:44:18 2016 +0200

Skip IPv6 sysctl calls when IPv6 is disabled

If IPv6 is globally disabled, do not try to run any IPv6
sysctls, as they will all fail because the parameters do not exist.

Change-Id: I789dbbe1c44581978c51f8c3c1d22aef10cbe01a
Closes-Bug: #1618878


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618878

Title:
  Disabling IPv6 on an interface fails if IPv6 is completely disabled in
  the kernel

Status in neutron:
  Fix Released

Bug description:
  Neutron Mitaka.

  From linuxbridge-agent.log:

  ERROR neutron.agent.linux.ip_lib [req-7d62c8de-1678-4b17-b568-b2a9a938c97c - 
- - - -] Failed running ['sysctl', '-w', 
u'net.ipv6.conf.brqc766b4dc-d2.disable_ipv6=1']
  ERROR neutron.agent.linux.ip_lib Traceback (most recent call last):
  ERROR neutron.agent.linux.ip_lib   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 335, in 
_sysctl
  ERROR neutron.agent.linux.ip_lib check_exit_code=True)
  ERROR neutron.agent.linux.ip_lib   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 927, in 
execute
  ERROR neutron.agent.linux.ip_lib log_fail_as_error=log_fail_as_error, 
**kwargs)
  ERROR neutron.agent.linux.ip_lib   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 140, in 
execute
  ERROR neutron.agent.linux.ip_lib raise RuntimeError(msg)
  ERROR neutron.agent.linux.ip_lib RuntimeError: Exit code: 255; Stdin: ; 
Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/ipv6/conf/brqc766b4dc-d2/disable_ipv6: No such file or directory

  Indeed:
  # ls /proc/sys/net/ipv6/
  ls: cannot access /proc/sys/net/ipv6/: No such file or directory

  This is because the system was started with the ipv6.disable=1 Linux
  kernel boot option.
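
  A minimal sketch of the kind of guard the fix above introduces (names are
  illustrative, not the merged Neutron code): check for the ipv6 sysctl tree
  before touching any net.ipv6.* parameter.

    import os

    def ipv6_supported():
        # With ipv6.disable=1 on the kernel command line the whole
        # /proc/sys/net/ipv6 tree is absent, so every net.ipv6.* sysctl
        # fails with "No such file or directory".
        return os.path.exists('/proc/sys/net/ipv6')

    def disable_ipv6_on(device, sysctl):
        """sysctl is a stand-in for the agent's sysctl helper."""
        if not ipv6_supported():
            return  # nothing to do: IPv6 is off globally
        sysctl(['-w', 'net.ipv6.conf.%s.disable_ipv6=1' % device])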

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733747] Re: No way to find out which instances are using a security group

2017-11-22 Thread Sam Morrison
Have submitted an RFE for this at
https://bugs.launchpad.net/neutron/+bug/1734026

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733747

Title:
  No way to find out which instances are using a security group

Status in neutron:
  Invalid

Bug description:
  I'm trying to figure out which instances are using a specific security
  group but it doesn't look possible via the API (unless I'm missing
  something).

  The only way to do this is by looking in the database and doing some
  SQL on the securitygroupportbindings table.

  Is there another way?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734026] [NEW] [RFE] Add ability to see what devices use a certain security group

2017-11-22 Thread Sam Morrison
Public bug reported:

Given a security group ID I would like an API to determine which devices
(nova instances) use this security group.

Currently the only way to do this is by looking in the database and
doing some SQL on the securitygroupportbindings table.
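
Until such an API exists, the manual lookup described above looks roughly
like this (a read-only sketch; the table and column names are assumed from a
typical Neutron schema and should be checked against your deployment):

    import sqlalchemy as sa

    # Connection URL and security group ID are placeholders.
    engine = sa.create_engine('mysql+pymysql://neutron:secret@dbhost/neutron')
    SG_ID = '<security group uuid>'

    with engine.connect() as conn:
        rows = conn.execute(sa.text(
            "SELECT DISTINCT p.device_id "
            "FROM securitygroupportbindings b "
            "JOIN ports p ON p.id = b.port_id "
            "WHERE b.security_group_id = :sg"), {'sg': SG_ID})
        for row in rows:
            print(row.device_id)  # instance (device) that owns the bound port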

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1734026

Title:
  [RFE] Add ability to see what devices use a certain security group

Status in neutron:
  New

Bug description:
  Given a security group ID I would like an API to determine which
  devices (nova instances) use this security group.

  Currently the only way to do this is by looking in the database and
  doing some SQL on the securitygroupportbindings table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1734026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734025] [NEW] clearup running deleted instance with reap failed with none token context

2017-11-22 Thread Li Xipeng
Public bug reported:

Description

When zombied instances appear (see also bug
https://bugs.launchpad.net/nova/+bug/911366), setting
running_deleted_instance_poll_interval = 60 and
running_deleted_instance_action = reap makes the nova-compute service clear
those zombied instances. However, if those instances were booted from volume
or had volumes attached, then after the cleanup the zombied instances are
gone but their volumes remain in the attached status. Even volumes that are
bootable, were used to boot the instance and had delete_on_termination=True
still exist in the attached status although the instance no longer exists.

Steps to reproduce

1. Set running_deleted_instance_poll_interval = 60 and
running_deleted_instance_action = reap.
2. Update a running instance's status to deleted.
3. Restart the nova-compute service and wait 60 seconds.

Expected result

The bootable volume from the previous test is deleted and the volumes
attached to zombied instances are detached.

Actual result

The bootable volume from the previous test remains attached and in-use, and
volumes attached to zombied instances are in-use and still attached to those
zombied instances.

** Affects: nova
 Importance: Undecided
 Assignee: Li Xipeng (lixipeng)
 Status: In Progress

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Li Xipeng (lixipeng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1734025

Title:
  clearup running deleted instance with reap failed with none token
  context

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description

  When zombied instances appear (see also bug
  https://bugs.launchpad.net/nova/+bug/911366), setting
  running_deleted_instance_poll_interval = 60 and
  running_deleted_instance_action = reap makes the nova-compute service clear
  those zombied instances. However, if those instances were booted from volume
  or had volumes attached, then after the cleanup the zombied instances are
  gone but their volumes remain in the attached status. Even volumes that are
  bootable, were used to boot the instance and had delete_on_termination=True
  still exist in the attached status although the instance no longer exists.

  Steps to reproduce

  1. Set running_deleted_instance_poll_interval = 60 and
  running_deleted_instance_action = reap.
  2. Update a running instance's status to deleted.
  3. Restart the nova-compute service and wait 60 seconds.

  Expected result

  The bootable volume from the previous test is deleted and the volumes
  attached to zombied instances are detached.

  Actual result

  The bootable volume from the previous test remains attached and in-use,
  and volumes attached to zombied instances are in-use and still attached to
  those zombied instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1734025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734012] [NEW] Handle exception in get_instance_sorted when scattering gather results from all cells.

2017-11-22 Thread Yikun Jiang
Public bug reported:

Description
===
Currently, when we get the server list in a multi-cell deployment, we
scatter-gather results from the cells, but if we get back an exception or a
timeout from a cell, we end up with a 500 error.

We should handle raised exceptions or timeouts after getting back all the
results.


Maybe we could just skip the error result, like the quota code does:
https://github.com/openstack/nova/blob/e9ce5c4c95edc869ab2cf82ca0733a2821c384ad/nova/quota.py#L1865
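
A rough sketch of that idea (the sentinel names are taken from nova.context;
treat this as illustrative, not the eventual fix): drop per-cell results that
are the exception or timeout sentinels before merging.

    from nova import context as nova_context

    def successful_results(results):
        """Filter out cells whose result is an error/timeout sentinel."""
        skip = (nova_context.did_not_respond_sentinel,
                nova_context.raised_exception_sentinel)
        return {cell: result for cell, result in results.items()
                if result not in skip}

    # heapq.merge(*successful_results(results).values()) would then only see
    # real instance lists, at the cost of silently omitting the failed cells.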


Steps to reproduce
==
1. Raise some exception while gathering results.

2. Get the server list:
curl -g -i -X GET http://XXX/compute/v2.1/servers -H "OpenStack-API-Version: 
compute 2.53" -H "User-Agent: python-novaclient" -H "Accept: application/json" 
-H "X-OpenStack-Nova-API-Version: 2.53" -H "X-Auth-Token: $TOKEN"
HTTP/1.1 500 Internal Server Error
Date: Wed, 22 Nov 2017 07:11:39 GMT
Server: Apache/2.4.18 (Ubuntu)
OpenStack-API-Version: compute 2.53
X-OpenStack-Nova-API-Version: 2.53
Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version
Content-Type: application/json; charset=UTF-8
Content-Length: 193
x-openstack-request-id: req-429c02e7-84db-4c2d-98fd-dd1af0186383
x-compute-request-id: req-429c02e7-84db-4c2d-98fd-dd1af0186383
Connection: close

{"computeFault": {"message": "Unexpected API Error. Please report this
at http://bugs.launchpad.net/nova/ and attach the Nova API log if
possible.\n", "code": 500}}

2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 336, in wrapped
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 181, in wrapper
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 181, in wrapper
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 152, in index
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions servers = 
self._get_servers(req, is_detail=False)
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/servers.py", line 308, in 
_get_servers
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions 
sort_keys=sort_keys, sort_dirs=sort_dirs)
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/api.py", line 2394, in get_all
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions context, 
filters, limit, marker, fields, sort_keys, sort_dirs)
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/instance_list.py", line 251, in 
get_instance_objects_sorted
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions 
expected_attrs)
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/objects/instance.py", line 1196, in _make_instance_list
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions for 
db_inst in db_inst_list:
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/instance_list.py", line 228, in 
get_instances_sorted
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions for i in 
heapq.merge(*results.values()):
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/heapq.py", line 373, in merge
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions for itnum, 
it in enumerate(map(iter, iterables)):
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions TypeError: 
'object' object is not iterable
2017-11-22 02:11:39.925 9792 ERROR nova.api.openstack.extensions
2017-11-22 02:11:39.935 9792 INFO nova.api.openstack.wsgi 
[req-429c02e7-84db-4c2d-98fd-dd1af0186383 - admin] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.

2017-11-22 02:11:39.937 9792 DEBUG nova.api.openstack.wsgi 
[req-429c02e7-84db-4c2d-98fd-dd1af0186383 - admin] Returning 500 to user: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 __call__ 
/opt/stack/nova/nova/api/openstack/wsgi.py:1029
2017-11-22 02:11:39.938 9792 INFO nova.api.openstack.requestlog 
[req-429c02e7-84db-4c2d-98fd-dd1af0186383 - admin] 10.76.6.31 "GET 
/compute/v2.1/servers" status: 500 len: 193 microversion: 2.53 time: 0.746751

** Affect

[Yahoo-eng-team] [Bug 1733367] Re: external_net extension not properly documented in api-ref

2017-11-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/521652
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=1254fca65a1ca6d259232f7e70621a9ba65a93b0
Submitter: Zuul
Branch: master

commit 1254fca65a1ca6d259232f7e70621a9ba65a93b0
Author: Boden R 
Date:   Mon Nov 20 13:34:41 2017 -0700

add description in api-ref for external net extension

While the external network extension documents its params in the network
api-ref, there's no description of the extension itself.

This patch adds a sub-section to the network api-ref describing the
external net API extension.

Change-Id: I6a75fce1f52ce3052a336cca81a43b96b3591b6b
Closes-Bug: #1733367


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733367

Title:
  external_net extension not properly documented in api-ref

Status in neutron:
  Fix Released

Bug description:
  The external_net extension is not completely documented yet. While it
  appears the networks api-ref does document the router:external params, the
  external_net extension needs to be described in a subsection atop the
  networks api-ref (see others for examples).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615498] Re: VMware: unable to launch an instance on a 'portgroup' provider network

2017-11-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358425
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=77e51f14a50dafb46176e50ff3788e7918ff29df
Submitter: Zuul
Branch: master

commit 77e51f14a50dafb46176e50ff3788e7918ff29df
Author: Gary Kotton 
Date:   Sun Aug 21 23:37:27 2016 -0700

VMware: ensure that provider networks work for type 'portgroup'

When an existing portgroup is used as a provider network the
vmware_nsx NSX|V and DVS plugins will validate that the name
of the network is the same name as the actual portgroup. This
name is used when searching for the portgroup. Searching by the network
UUID will not match here, since the portgroup is identified by name.

A provider network can be a regular port group or a NSX virtual
wire.

Change-Id: Icc72b9c4ddd11964f0e4a774588684eb016fae0f
Closes-bug: #1615498


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615498

Title:
  VMware: unable to launch an instance on a 'portgroup' provider network

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The vmware_nsx NSX|V and DVS plugins enable an admin to create a
  provider network that points to an existing portgroup.

  One is unable to spin up an instance on these networks.

  The trace is as follows:

  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] Traceback (most recent call last):
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2514, in 
_build_resources
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] yield resources
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2384, in 
_build_and_run_instance
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] block_device_info=block_device_info)
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 429, in 
spawn
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] admin_password, network_info, 
block_device_info)
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 872, in 
spawn
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] metadata)
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 328, in 
build_virtual_machine
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] pci_devices)
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 170, in 
get_vif_info
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] is_neutron, vif, pci_info))
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 138, in 
get_vif_dict
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] ref = get_network_ref(session, 
cluster, vif, is_neutron)
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 127, in 
get_network_ref
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] network_ref = 
_get_neutron_network(session, network_id, cluster, vif)
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 117, in 
_get_neutron_network
  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] raise 
exception.NetworkNotFoundForBridge(bridge

[Yahoo-eng-team] [Bug 1729767] Re: Ocata upgrade, midonet port binding fails in mixed ml2 environment

2017-11-22 Thread Sam Morrison
OK, I've figured it out; very sorry, not a bug. In Newton we had
mech_driver set to midonet_ext, and in Ocata this is now just midonet
again, which is why everything was failing.

** Changed in: networking-midonet
   Status: New => Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1729767

Title:
  Ocata upgrade, midonet port binding fails in mixed ml2 environment

Status in networking-midonet:
  Invalid
Status in neutron:
  Invalid

Bug description:
  We have a mixed ML2 environment where some networks are linuxbridge and
  some are midonet.
  We can bind the two different port types to instances on the same compute
  nodes (and even to the same instance).

  Testing out Ocata, this no longer works due to some extra checks that have
  been introduced.
  In the logs I get:

  
  2017-11-03 15:50:47.967 25598 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-0779656d-919a-4517-9a5c-12b63d77fd02 ac2c53521bd74ab89acb7b705f2b49ff 
2f3e9e705b0b460b9de90d9844e88fd2 - - -] Checking agent: {'binary': 
u'neutron-linuxbridge-agent', 'description': None, 'admin_state_up': True, 
'heartbeat_timestamp': datetime.datetime(2017, 11, 3, 4, 50, 31), 
'availability_zone': None, 'alive': True, 'topic': u'N/A', 'host': u'cn2-qh2', 
'agent_type': u'Linux bridge agent', 'resource_versions': {u'SubPort': u'1.0', 
u'QosPolicy': u'1.3', u'Trunk': u'1.0'}, 'created_at': datetime.datetime(2017, 
8, 9, 3, 43, 15), 'started_at': datetime.datetime(2017, 11, 3, 3, 38, 31), 
'id': u'afba1ad9-b880-4943-aea9-7faae20f787a', 'configurations': 
{u'bridge_mappings': {}, u'interface_mappings': {u'other': u'bond0.3082'}, 
u'extensions': [], u'devices': 9}} bind_port 
/opt/neutron/neutron/plugins/ml2/drivers/mech_agent.py:105
  2017-11-03 15:50:47.967 25598 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-0779656d-919a-4517-9a5c-12b63d77fd02 ac2c53521bd74ab89acb7b705f2b49ff 
2f3e9e705b0b460b9de90d9844e88fd2 - - -] Checking segment: {'segmentation_id': 
None, 'physical_network': None, 'id': u'4d716b78-d0b6-4503-b22d-1e42f7d5667a', 
'network_type': u'midonet'} for mappings: {u'other': u'bond0.3082'} with 
network types: ['local', 'flat', 'vlan'] check_segment_for_agent 
/opt/neutron/neutron/plugins/ml2/drivers/mech_agent.py:231
  2017-11-03 15:50:47.968 25598 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-0779656d-919a-4517-9a5c-12b63d77fd02 ac2c53521bd74ab89acb7b705f2b49ff 
2f3e9e705b0b460b9de90d9844e88fd2 - - -] Network 
4d716b78-d0b6-4503-b22d-1e42f7d5667a is of type midonet but agent cn2-qh2 or 
mechanism driver only support ['local', 'flat', 'vlan']. 
check_segment_for_agent 
/opt/neutron/neutron/plugins/ml2/drivers/mech_agent.py:242
  2017-11-03 15:50:47.969 25598 ERROR neutron.plugins.ml2.managers 
[req-0779656d-919a-4517-9a5c-12b63d77fd02 ac2c53521bd74ab89acb7b705f2b49ff 
2f3e9e705b0b460b9de90d9844e88fd2 - - -] Failed to bind port 
1c68f72b-1405-43cd-b0d1-ba128f211f51 on host cn2-qh2 for vnic_type normal using 
segments [{'segmentation_id': None, 'physical_network': None, 'id': 
u'4d716b78-d0b6-4503-b22d-1e42f7d5667a', 'network_type': u'midonet'}]

  
  It seems that because there is a linuxbridge agent on the compute node, it
  thinks that is the only type of network that can be bound.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1729767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733987] [NEW] name resolution error with DVR+HA routers

2017-11-22 Thread Armando Migliaccio
Public bug reported:

Steps to repro:

* Deploy with multiple DHCP agents per network (e.g. 3) and multiple L3 agents 
per router (e.g. 2)
* Create a network
* Create a subnet
* Create a DVR+HA router
* Uplink router to external network
* Deploy a VM on the network

The resolv.conf of the VM looks something like this:

cat /etc/resolv.conf
search openstack.local
nameserver 192.168.0.2
nameserver 192.168.0.4
nameserver 192.168.0.3

Where .2, .3. and .4 are your DHCP servers that relay DNS requests.

Name resolution may fail when using one of these servers, due to the
lack of qrouter namespace on one of the network nodes associated with
the qdhcp namespace hosting the DHCP service for the network.
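
One way to confirm from inside the VM which of the relays is the broken one
(an illustrative diagnostic using dnspython, not part of the original
report):

    import dns.resolver

    for server in ('192.168.0.2', '192.168.0.3', '192.168.0.4'):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        resolver.lifetime = 3.0
        try:
            resolver.query('openstack.org', 'A')
            print(server, 'answers')
        except Exception as exc:
            print(server, 'failed:', exc)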

Expected behavior:

All nameservers can resolve correctly.

This happens in master and prior versions.

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: l3-dvr-backlog l3-ha l3-ipam-dhcp

** Changed in: neutron
   Importance: Undecided => Medium

** Tags added: l3-dvr-backlog

** Tags added: l3-ha

** Tags added: l3-ipam-dhcp

** Changed in: neutron
   Status: New => Confirmed

** Description changed:

  Steps to repro:
  
- * Deploy with multiple DHCP agents per network (e.g. 3) and multiple L3 
agents per router (e.g. 2) 
+ * Deploy with multiple DHCP agents per network (e.g. 3) and multiple L3 
agents per router (e.g. 2)
  * Create a network
  * Create a subnet
  * Create a DVR+HA router
  * Uplink router to external network
  * Deploy a VM on the network
  
  The resolv.conf of the VM looks something like this:
  
- cat /etc/resolv.conf 
+ cat /etc/resolv.conf
  search openstack.local
  nameserver 192.168.0.2
  nameserver 192.168.0.4
  nameserver 192.168.0.3
  
  Where .2, .3. and .4 are your DHCP servers that relay DNS requests.
  
  Name resolution may fail when using one of these servers, due to the
  lack of qrouter namespace on one of the network nodes associated with
  the qdhcp namespace hosting the DHCP service for the network.
  
  Expected behavior:
  
  All nameservers can resolve correctly.
+ 
+ This happens in master and prior versions.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733987

Title:
  name resolution error with DVR+HA routers

Status in neutron:
  Confirmed

Bug description:
  Steps to repro:

  * Deploy with multiple DHCP agents per network (e.g. 3) and multiple L3 
agents per router (e.g. 2)
  * Create a network
  * Create a subnet
  * Create a DVR+HA router
  * Uplink router to external network
  * Deploy a VM on the network

  The resolv.conf of the VM looks something like this:

  cat /etc/resolv.conf
  search openstack.local
  nameserver 192.168.0.2
  nameserver 192.168.0.4
  nameserver 192.168.0.3

  Where .2, .3. and .4 are your DHCP servers that relay DNS requests.

  Name resolution may fail when using one of these servers, due to the
  lack of qrouter namespace on one of the network nodes associated with
  the qdhcp namespace hosting the DHCP service for the network.

  Expected behavior:

  All nameservers can resolve correctly.

  This happens in master and prior versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733905] Re: SQL integer type is too small to store BGP LOCAL_PREF value

2017-11-22 Thread Thomas Morin
Disregard my previous comment; this is only a DB issue.

** Changed in: bgpvpn
   Status: New => Confirmed

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733905

Title:
  SQL integer type is too small to store BGP LOCAL_PREF value

Status in networking-bgpvpn:
  Confirmed

Bug description:
  The SQL integer type's max size is 2^31-1, while the BGP LOCAL_PREF max
  value is 2^32-1 (RFC 4271, section 4.3, p. 18).
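
  A minimal sketch of the DB-side fix this implies: an Alembic migration
  widening the column so it can hold the full unsigned 32-bit range
  (0 .. 2^32-1). The table and column names come from the error below;
  everything else is assumed.

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # BigInteger comfortably covers BGP LOCAL_PREF values up to 2**32 - 1.
        op.alter_column('bgpvpn_port_association_routes', 'local_pref',
                        existing_type=sa.Integer(),
                        type_=sa.BigInteger(),
                        existing_nullable=True)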

  DEBUG neutron.api.v2.base [None req-4aa8e09d-235a-4c7a-b956-02dcca7eea7f demo 
demo] Request body: {u'port_association': {u'routes': [{u'local_pref': 1234, 
u'prefix': u'5.6.7.8/24', u'type': u'prefix'}, {u'
  local_pref': 2147483647, u'prefix': u'5.6.7.6/24', u'type': u'prefix'}, 
{u'prefix': u'1.2.3.4/32', u'type': u'prefix'}, {u'local_pref': 2147483648, 
u'bgpvpn_id': u'39d7d7d2-ffa2-4fd4-8556-f8e7a759abdf', u'
  type': u'bgpvpn'}, {u'bgpvpn_id': u'b5d3e8ff-f381-40a6-a4d1-56622effba1e', 
u'type': u'bgpvpn'}]}} {{(pid=8447) prepare_request_body 
/opt/stack/openstack/neutron/neutron/api/v2/base.py:685}}
  DEBUG neutron_lib.api.validators [None 
req-4aa8e09d-235a-4c7a-b956-02dcca7eea7f demo demo] Validation of dictionary's 
keys failed. Expected keys: set(['prefix', 'type']) Provided keys: 
set([u'local_pref', 
  u'bgpvpn_id', u'type']) {{(pid=8447) _verify_dict_keys 
/usr/local/lib/python2.7/dist-packages/neutron_lib/api/validators/__init__.py:69}}
  DEBUG neutron_lib.api.validators [None 
req-4aa8e09d-235a-4c7a-b956-02dcca7eea7f demo demo] Validation of dictionary's 
keys failed. Expected keys: set(['prefix', 'type']) Provided keys: 
set([u'bgpvpn_id', u
  'type']) {{(pid=8447) _verify_dict_keys 
/usr/local/lib/python2.7/dist-packages/neutron_lib/api/validators/__init__.py:69}}
  ERROR neutron.api.v2.resource [None req-4aa8e09d-235a-4c7a-b956-02dcca7eea7f 
demo demo] update failed: No details.: DBDataError: (pymysql.err.DataError) 
(1264, u"Out of range value for column 'local_pref' 
  at row 1") [SQL: u'INSERT INTO bgpvpn_port_association_routes (id, 
port_association_id, type, local_pref, prefix, bgpvpn_id) VALUES (%(id)s, 
%(port_association_id)s, %(type)s, %(local_pref)s, %(prefix)s, %
  (bgpvpn_id)s)'] [parameters: {'prefix': None, 'port_association_id': 
u'915bc1bb-cafe-4fa0-a44b-3b705dedb5f6', 'bgpvpn_id': 
u'39d7d7d2-ffa2-4fd4-8556-f8e7a759abdf', 'local_pref': 2147483648, 'type': 
'bgpvpn
  ', 'id': 'faf251ff-648c-445b-a9bc-183ea0edadbe'}]

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1733905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733964] [NEW] Rolling Upgrades in glance typo error

2017-11-22 Thread Gaël THEROND
Public bug reported:

- [x] This doc is inaccurate in this way:

At the end of the documentation regarding the Glance rolling upgrade, the
command should be « glance-manage db contract », but the hyphen is
missing. That could lead to improper use of the glance command itself
and an incomplete rolling upgrade for people not reading it carefully.

---
Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
SHA: 9091d262afb120fd077bae003d52463f833a4fde
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/admin/rollingupgrades.rst
URL: https://docs.openstack.org/glance/pike/admin/rollingupgrades.html

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1733964

Title:
  Rolling Upgrades in glance typo error

Status in Glance:
  New

Bug description:
  - [x] This doc is inaccurate in this way:

  At the end of the documentation regarding the Glance rolling upgrade,
  the command should be « glance-manage db contract », but the hyphen is
  missing. That could lead to improper use of the glance command itself
  and an incomplete rolling upgrade for people not reading it carefully.

  ---
  Release: 15.0.1.dev1 on 'Mon Aug 7 01:28:54 2017, commit 9091d26'
  SHA: 9091d262afb120fd077bae003d52463f833a4fde
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/admin/rollingupgrades.rst
  URL: https://docs.openstack.org/glance/pike/admin/rollingupgrades.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1733964/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733747] Re: No way to find out which instances are using a security group

2017-11-22 Thread Sam Morrison
Sorry, what you are explaining is the reverse of what I want and doesn't
help: I have a security group ID and I want to know which instances have
that security group applied.

We have thousands of instances, and querying each one to see if it has
the security group applied is very inefficient and time-consuming.

** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733747

Title:
  No way to find out which instances are using a security group

Status in neutron:
  New

Bug description:
  I'm trying to figure out which instances are using a specific security
  group but it doesn't look possible via the API (unless I'm missing
  something).

  The only way to do this is by looking in the database and doing some
  SQL on the securitygroupportbindings table.

  Is there another way?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733933] [NEW] nova-conductor is masking error when rescheduling

2017-11-22 Thread Dr. Jens Harbott
Public bug reported:

Sometimes when build_instance fails on n-cpu, the error that n-cond
receives is mangled like this:

Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: ERROR 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
[instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Error from last host: 
jh-devstack-03 (node jh-devstack03): [u'Traceback (most recent call last):\n', 
u'  File "/opt/stack/nova/nova/compute/manager.py", line 1847, in
 _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 2086, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', 
u"RescheduledException: Build of instance 5ee9d527-0043-474e-bfb3-e6621426662e 
was re-scheduled: operation failed: domain 'instance-0028' already exists 
with uuid 
93974d36e3a7-4139bbd8-2d5b51195a5f\n"]
Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
Failed to compute_task_build_instances: No sql_connection parameter is 
established: CantStartEngineError: No sql_connection parameter is established
Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
[instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Setting instance to ERROR 
state.: CantStartEngineError: No sql_connection parameter is established

This seems to occur quite often in the gate, too.
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Setting%20instance%20to%20ERROR%20state.%3A%20CantStartEngineError%5C%22

The result is that the instance information shows "No sql_connection
parameter is established" instead of the original error, making
debugging the root cause quite difficult.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733933

Title:
  nova-conductor is masking error when rescheduling

Status in OpenStack Compute (nova):
  New

Bug description:
  Sometimes when build_instance fails on n-cpu, the error that n-cond
receives is mangled like this:

  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: ERROR 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  [instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Error from last host: 
jh-devstack-03 (node jh-devstack03): [u'Traceback (most recent call last):\n', 
u'  File "/opt/stack/nova/nova/compute/manager.py", line 1847, in
   _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 2086, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', 
  u"RescheduledException: Build of instance 
5ee9d527-0043-474e-bfb3-e6621426662e was re-scheduled: operation failed: domain 
'instance-0028' already exists with uuid 
  93974d36e3a7-4139bbd8-2d5b51195a5f\n"]
  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  Failed to compute_task_build_instances: No sql_connection parameter is 
established: CantStartEngineError: No sql_connection parameter is established
  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  [instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Setting instance to ERROR 
state.: CantStartEngineError: No sql_connection parameter is established

  This seems to occur quite often in the gate, too.
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Setting%20instance%20to%20ERROR%20state.%3A%20CantStartEngineError%5C%22

  The result is that the instance information shows "No sql_connection
  parameter is established" instead of the original error, making
  debugging the root cause quite difficult.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733886] Re: 'force' parameter broken in os-quota-sets microversion >= 2.36

2017-11-22 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733886

Title:
  'force' parameter broken in os-quota-sets microversion >= 2.36

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  The 2.36 microversion broke the 'force' parameter in the os-quota-sets
  API:

  https://developer.openstack.org/api-ref/compute/#update-quotas

  It's because for 2.36 the schema redefined the properties but didn't
  copy the force parameter:

  
https://github.com/openstack/nova/blob/f69d98ea744bc13189b17ba4c67e4f0279d2f45a/nova/api/openstack/compute/schemas/quota_sets.py#L47

  We could fix this as part of blueprint deprecate-file-injection which
  needs to change the os-quota-sets API to remove the injected_file*
  parameters, however, after the counting quotas changes in Pike, the
  'force' parameter doesn't really mean anything because there are no
  reserved quotas anymore, so maybe we just document this in the API
  reference and not try to 'fix it' since the fix wouldn't do anything.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733917] [NEW] novaclient list servers by update attribute throws TypeError instead of CommandError

2017-11-22 Thread Theodoros Tsioutsias
Public bug reported:

Description
===

The nova list command fails with a TypeError instead of a CommandError when
an existing but invalid attribute of the object is given as a field.

$ /usr/bin/nova list --all --status ERROR --fields update
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()

Steps to reproduce
==

At least one server has to exist so that the list is not empty.
In this case, 'update' was given as a field.
The Server object has an update method, so the (not hasattr) check is False:

python-novaclient/novaclient/v2/shell.py:

1634 def _get_list_table_columns_and_formatters(fields, objs, exclude_fields=(),
1635                                            filters=None):
[...]
1680     for field in fields.split(','):
1681         if not hasattr(obj, field):
1682             non_existent_fields.append(field)
1683             continue
1684         if field in exclude_fields:
1685             continue
1686         field_title, formatter = utils.make_field_formatter(field,
1687                                                             filters)
1688         columns.append(field_title)
1689         formatters[field_title] = formatter
1690         exclude_fields.add(field)

As a result of this check, all of the attributes of the server object can be 
used as fields.
Most of them cause a TypeError to be raised.

e.g:
[devstack-006 ~]$ /usr/bin/nova list --fields __dict__
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()
[devstack-006 ~]$ /usr/bin/nova list --fields __getattr__
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()
[devstack-006 ~]$ /usr/bin/nova list --fields __getattribute__
ERROR (TypeError): object.__new__(method-wrapper) is not safe, use 
method-wrapper.__new__()
[devstack-006 ~]$ /usr/bin/nova list --fields __hash__
ERROR (TypeError): object.__new__(method-wrapper) is not safe, use 
method-wrapper.__new__()
[devstack-006 ~]$ /usr/bin/nova list --fields __init__
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()
[devstack-006 v2]$ /usr/bin/nova list --field unshelve
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()
[devstack-006 ~]$ /usr/bin/nova list --fields to_dict
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()
[devstack-006 ~]$ /usr/bin/nova list --fields revert_resize
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()
[devstack-006 ~]$ /usr/bin/nova list --fields _add_details
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()
[...]
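
A minimal sketch of a stricter check (illustrative only, not the actual
novaclient fix) that would turn these into a CommandError instead of letting
private attributes and bound methods fall through to a TypeError:

    from novaclient import exceptions

    def check_field(obj, field):
        """Reject private attributes and methods up front."""
        if (field.startswith('_')
                or not hasattr(obj, field)
                or callable(getattr(obj, field))):
            raise exceptions.CommandError(
                "Non-existent or unusable field requested: %s" % field)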

Expected result
===

A CommandError should be raised.

Actual result
=

A TypeError is raised.

Environment
===

$ rpm -qf /usr/bin/nova 
python2-novaclient-9.1.1-1.el7.noarch

Reproduced in devstack:

[devstack-006 ~]$ /usr/bin/nova list --field update
ERROR (TypeError): object.__new__(thread.lock) is not safe, use 
thread.lock.__new__()
[devstack-006 python-novaclient]$ git log -1
commit c9e7a64ca83302bdeab2044c09f9063646cc59a3
Merge: dd520c7 bef6765
Author: Zuul 
Date:   Tue Nov 21 19:51:32 2017 +

Merge "Microversion 2.54 - Enable reset keypair while rebuild"

Logs & Configs
==

Attaching the debug output of the command.

Comments


Attributes like __module__ or __class__ do not raise a TypeError.
Should they be allowed?

[devstack-006 ~]$ /usr/bin/nova list --field __class__
+--++
| ID   |   Class
|
+--++
| 353e6118-919e-4684-9e73-2441bbd8f0bd |  
|
| b7f1f1f2-b195-4bd3-a5a1-5ad855526e4a |  
|
| d7d99415-3bc5-4633-9c57-2b6f730a9bb1 |  
|
+--++
[devstack-006 ~]$ /usr/bin/nova list --field __module__
+--+---+
| ID   |   Module  |
+--+---+
| 353e6118-919e-4684-9e73-2441bbd8f0bd | novaclient.v2.servers |
| b7f1f1f2-b195-4bd3-a5a1-5ad855526e4a | novaclient.v2.servers |
| d7d99415-3bc5-4633-9c57-2b6f730a9bb1 | novaclient.v2.servers |
+--+---+

** Affects: nova
 Importance: Undecided
 Assignee: Theodoros Tsioutsias (ttsiouts)
 Status: New

** Attachment added: "Debug output of the command"
   
https://bugs.launchpad.net/bugs/1733917/+attachment/5012990/+files/debug_output.log

** Changed in: nova
 Assignee: (unassigned) => Theodoros Tsioutsias (ttsiouts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenSt

[Yahoo-eng-team] [Bug 1733905] Re: SQL integer type is too small to store BGP LOCAL_PREF value

2017-11-22 Thread Thomas Morin
Good catch, doude!

Adding neutron, since the field definition is in neutron-lib.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733905

Title:
  SQL integer type is too small to store BGP LOCAL_PREF value

Status in networking-bgpvpn:
  New
Status in neutron:
  New

Bug description:
  The SQL integer type's max size is 2^31-1, while the BGP LOCAL_PREF max
  value is 2^32-1 (RFC 4271, section 4.3, p. 18).

  DEBUG neutron.api.v2.base [None req-4aa8e09d-235a-4c7a-b956-02dcca7eea7f demo 
demo] Request body: {u'port_association': {u'routes': [{u'local_pref': 1234, 
u'prefix': u'5.6.7.8/24', u'type': u'prefix'}, {u'
  local_pref': 2147483647, u'prefix': u'5.6.7.6/24', u'type': u'prefix'}, 
{u'prefix': u'1.2.3.4/32', u'type': u'prefix'}, {u'local_pref': 2147483648, 
u'bgpvpn_id': u'39d7d7d2-ffa2-4fd4-8556-f8e7a759abdf', u'
  type': u'bgpvpn'}, {u'bgpvpn_id': u'b5d3e8ff-f381-40a6-a4d1-56622effba1e', 
u'type': u'bgpvpn'}]}} {{(pid=8447) prepare_request_body 
/opt/stack/openstack/neutron/neutron/api/v2/base.py:685}}
  DEBUG neutron_lib.api.validators [None 
req-4aa8e09d-235a-4c7a-b956-02dcca7eea7f demo demo] Validation of dictionary's 
keys failed. Expected keys: set(['prefix', 'type']) Provided keys: 
set([u'local_pref', 
  u'bgpvpn_id', u'type']) {{(pid=8447) _verify_dict_keys 
/usr/local/lib/python2.7/dist-packages/neutron_lib/api/validators/__init__.py:69}}
  DEBUG neutron_lib.api.validators [None 
req-4aa8e09d-235a-4c7a-b956-02dcca7eea7f demo demo] Validation of dictionary's 
keys failed. Expected keys: set(['prefix', 'type']) Provided keys: 
set([u'bgpvpn_id', u
  'type']) {{(pid=8447) _verify_dict_keys 
/usr/local/lib/python2.7/dist-packages/neutron_lib/api/validators/__init__.py:69}}
  ERROR neutron.api.v2.resource [None req-4aa8e09d-235a-4c7a-b956-02dcca7eea7f 
demo demo] update failed: No details.: DBDataError: (pymysql.err.DataError) 
(1264, u"Out of range value for column 'local_pref' 
  at row 1") [SQL: u'INSERT INTO bgpvpn_port_association_routes (id, 
port_association_id, type, local_pref, prefix, bgpvpn_id) VALUES (%(id)s, 
%(port_association_id)s, %(type)s, %(local_pref)s, %(prefix)s, %
  (bgpvpn_id)s)'] [parameters: {'prefix': None, 'port_association_id': 
u'915bc1bb-cafe-4fa0-a44b-3b705dedb5f6', 'bgpvpn_id': 
u'39d7d7d2-ffa2-4fd4-8556-f8e7a759abdf', 'local_pref': 2147483648, 'type': 
'bgpvpn
  ', 'id': 'faf251ff-648c-445b-a9bc-183ea0edadbe'}]

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1733905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733886] [NEW] 'force' parameter broken in os-quota-sets microversion >= 2.36

2017-11-22 Thread Matt Riedemann
Public bug reported:

The 2.36 microversion broke the 'force' parameter in the os-quota-sets
API:

https://developer.openstack.org/api-ref/compute/#update-quotas

It's because for 2.36 the schema redefined the properties but didn't
copy the force parameter:

https://github.com/openstack/nova/blob/f69d98ea744bc13189b17ba4c67e4f0279d2f45a/nova/api/openstack/compute/schemas/quota_sets.py#L47
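
For illustration, a sketch of how the 2.36 schema could keep the parameter
(the schema structure shown is assumed, modelled on the pre-2.36 schema; and
as noted below, the value would no longer have any effect anyway):

    import copy

    from nova.api.validation import parameter_types

    # Stand-in for the pre-2.36 os-quota-sets update schema.
    update = {
        'type': 'object',
        'properties': {
            'quota_set': {
                'type': 'object',
                'properties': {
                    'instances': {'type': ['integer', 'string']},
                    # ... other resources ...
                },
            },
        },
    }

    # The 2.36 variant could carry 'force' forward instead of dropping it.
    update_v236 = copy.deepcopy(update)
    update_v236['properties']['quota_set']['properties']['force'] = (
        parameter_types.boolean)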

We could fix this as part of blueprint deprecate-file-injection which
needs to change the os-quota-sets API to remove the injected_file*
parameters, however, after the counting quotas changes in Pike, the
'force' parameter doesn't really mean anything because there are no
reserved quotas anymore, so maybe we just document this in the API
reference and not try to 'fix it' since the fix wouldn't do anything.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733886

Title:
  'force' parameter broken in os-quota-sets microversion >= 2.36

Status in OpenStack Compute (nova):
  New

Bug description:
  The 2.36 microversion broke the 'force' parameter in the os-quota-sets
  API:

  https://developer.openstack.org/api-ref/compute/#update-quotas

  It's because for 2.36 the schema redefined the properties but didn't
  copy the force parameter:

  
https://github.com/openstack/nova/blob/f69d98ea744bc13189b17ba4c67e4f0279d2f45a/nova/api/openstack/compute/schemas/quota_sets.py#L47

  We could fix this as part of blueprint deprecate-file-injection which
  needs to change the os-quota-sets API to remove the injected_file*
  parameters, however, after the counting quotas changes in Pike, the
  'force' parameter doesn't really mean anything because there are no
  reserved quotas anymore, so maybe we just document this in the API
  reference and not try to 'fix it' since the fix wouldn't do anything.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733747] Re: No way to find out which instances are using a security group

2017-11-22 Thread Jakub Libosvar
Yes, you can find which port the instance is using and then query the
port; it will show you the security groups.

The port belonging to an instance has device_id equal to the instance ID.
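
A quick way to do that lookup from code (an assumed openstacksdk example, not
part of the original discussion):

    import openstack

    conn = openstack.connect(cloud='mycloud')  # cloud name is a placeholder
    for port in conn.network.ports(device_id='<instance uuid>'):
        print(port.id, port.security_group_ids)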

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733747

Title:
  No way to find out which instances are using a security group

Status in neutron:
  Opinion

Bug description:
  I'm trying to figure out which instances are using a specific security
  group but it doesn't look possible via the API (unless I'm missing
  something).

  The only way to do this is by looking in the database and doing some
  SQL on the securitygroupportbindings table.

  Is there another way?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1711184] Re: scheduler selects the same ironic node several times

2017-11-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/494136
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3759f105a7c4c3029a81a5431434190ef1bbb020
Submitter: Zuul
Branch: master

commit 3759f105a7c4c3029a81a5431434190ef1bbb020
Author: Pavlo Shchelokovskyy 
Date:   Wed Aug 16 08:50:49 2017 +

Allow shuffling hosts with the same best weight

This patch adds a new boolean config option
`[filter_scheduler]shuffle_best_same_weighed_hosts` (default False).

Enabling it will improve scheduling for the case when host_subset_size=1
but the list of weighed hosts contains many hosts with the same best weight
(quite often the case for ironic nodes).
On the other hand, enabling it will also make VM packing on hypervisors
less dense even when host weighing is completely disabled.

Change-Id: Icee137e15f264da59a1bdc1dc1ecfeaac82b98c6
Closes-Bug: #1711184


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1711184

Title:
  scheduler selects the same ironic node several times

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Observed on ironic multinode grenade job (using Ocata scheduler).

  Ironic returns its nodes in the same relative order (by internal DB
  id). Quite often (and in DevStack always by default) ironic nodes are
  identical, so the filter scheduler gives them the same weight. As a
  result, during concurrent requests to schedule instances, the
  weighed_hosts list is always in the same order and is always consumed
  from the start.

  This leads to the first node being selected often enough to exceed the
  default number of retries when it is stolen by another concurrent
  request (which also always picks the first one from the list).

  See log examples from the same gate job [0-2] and the failure [3]
  (ServerActionsTestJSON test failure). Notice how the weighed hosts
  list is always in the same order, and the scheduler retries 3 times
  on nodes that are already occupied by another parallel request,
  always picking the currently first one.

  This could be fixed by increasing the host_subset_size config option from
  its default value of 1, which would bring some randomness to the first
  element.
  While fine (and actually recommended) for the baremetal-only case, this
  choice is a bit suboptimal with mixed hypervisors (virtual + ironic
  computes), as it makes the scheduling logic for virtual computes less
  ideal.

  Instead, it might be better to always randomize the first hosts in the
  weighed_hosts list for hosts with identical (and maximal) weight, as
  those should be equally good candidates to schedule to. This will
  decrease collision and rescheduling chances, definitely for ironic
  nodes, but also to some tiny extent for standard compute hosts as
  well.
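
  As a rough illustration of that idea (only a sketch of the behaviour
  behind the merged [filter_scheduler]shuffle_best_same_weighed_hosts
  option, not the actual nova code; the helper name is made up):

    import random

    def pick_host(weighed_hosts, shuffle_best_same_weighed_hosts=True):
        # weighed_hosts is assumed to be sorted best-weight-first, as the
        # filter scheduler produces it.
        if not weighed_hosts:
            return None
        if shuffle_best_same_weighed_hosts:
            best = weighed_hosts[0].weight
            # Shuffle only the leading hosts that share the best weight, so
            # concurrent requests stop always grabbing the same first node.
            tied = [h for h in weighed_hosts if h.weight == best]
            random.shuffle(tied)
            weighed_hosts = tied + weighed_hosts[len(tied):]
        return weighed_hosts[0]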

  
  [0] 
http://logs.openstack.org/12/493812/2/check/gate-grenade-dsvm-ironic-multinode-multitenant-ubuntu-xenial/8d3f840/logs/old/screen-n-sch.txt.gz#_2017-08-15_13_27_29_410

  [1] http://logs.openstack.org/12/493812/2/check/gate-grenade-dsvm-
  ironic-multinode-multitenant-ubuntu-
  xenial/8d3f840/logs/old/screen-n-sch.txt.gz#_2017-08-15_13_27_31_839

  [2] http://logs.openstack.org/12/493812/2/check/gate-grenade-dsvm-
  ironic-multinode-multitenant-ubuntu-
  xenial/8d3f840/logs/old/screen-n-sch.txt.gz#_2017-08-15_13_27_34_244

  [3] http://logs.openstack.org/12/493812/2/check/gate-grenade-dsvm-
  ironic-multinode-multitenant-ubuntu-
  xenial/8d3f840/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1711184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733861] [NEW] VIFs not always detached from ironic nodes during termination

2017-11-22 Thread Mark Goddard
Public bug reported:

Description
===

Sometimes when a baremetal instance is terminated, some VIFs are not
detached from the node. This can lead to the node becoming unusable,
with subsequent attempts to provision it failing during VIF attachment
because there are insufficient free ironic ports to attach the VIF to.

Steps to reproduce
==

No reproduction procedure identified as yet, but will be something like:

* boot one baremetal instance
* do something to trigger the bug
* delete the instance
* boot a second instance on the same ironic node

Expected results


The second instance should boot successfully.

Actual results
==

The second instance fails to boot, and the following error message is
emitted by nova-compute:

VirtualInterfacePlugException: Cannot attach VIF 409830a5-b4de-4d1d-
be22-5e6fe4ccd65b to the node 3aaaf79e-99fb-42a3-b22e-b1a7fae44272 due
to error: Unable to attach VIF 409830a5-b4de-4d1d-be22-5e6fe4ccd65b, not
enough free physical ports. (HTTP 400)

The neutron port has been deleted:

$ openstack port show 7e567468-53a2-4fad-8bc9-a30a0e7218a0
ResourceNotFound: No Port found for 7e567468-53a2-4fad-8bc9-a30a0e7218a0

The ironic node's VIF is still attached:

$ openstack baremetal node vif list 
+--------------------------------------+
| ID                                   |
+--------------------------------------+
| 7e567468-53a2-4fad-8bc9-a30a0e7218a0 |
+--------------------------------------+

Workaround
==

The VIF can be manually detached via ironic:

$ openstack baremetal node vif detach  7e567468-53a2-4fad-
8bc9-a30a0e7218a0

This allows instances to be deployed on the node.

Environment
===

RDO Pike, deployed on CentOS 7 using kayobe & kolla-ansible.

openstack-nova-api-16.0.0-1.el7.noarch

Notes
=

I've seen this happen on a number of occasions, and have spent some time
investigating a few of them. Although they all have similarities, no two
have been the same, so far as I can tell.

Some things I've worked out along the way:

* the VIF detach code in ironic is very simple, and just removes the
tenant_vif_port_id field from the internal_info attribute of the ironic
port to which the VIF is attached. This leads me to believe that nova is
*not* calling this API during instance termination.

* the nova ironic virt driver's terminate method always ends up calling
_unplug_vifs, so either terminate has not been called, it has not
completed successfully, or the VIF was not present in the provided
network_info object. So far my investigations have suggested the latter
- network_info does not contain the VIF.

* there seems to be some level of raciness when deleting instances and
their ports (VIFs) at similar times. The neutron vif unplugged event may
not always call detach_interface[1] on the virt driver, but will remove
the port from the instance info cache. This would cause the VIF to be
absent from network_info during terminate.

Given that there seem to be multiple causes for this issue, one way to
avoid the node becoming unusable would be to query the attached VIFs
from ironic, as well as those in network_info when terminating an
instance. Any unexpected VIFs could then be detached.
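
A rough sketch of that mitigation (hedged: the helper name is made up, the
python-ironicclient calls shown are assumed to be available in the driver's
termination path, and this is not the actual nova code):

    def detach_stale_vifs(ironic_client, node_uuid, network_info):
        # VIF IDs nova believes are attached, from the instance's network
        # info cache.
        expected = {vif['id'] for vif in network_info}
        # VIFs ironic believes are attached to the node.
        for vif in ironic_client.node.vif_list(node_uuid):
            if vif.id not in expected:
                # Attached in ironic but unknown to nova: detach it so the
                # node's physical ports are freed for the next instance.
                ironic_client.node.vif_detach(node_uuid, vif.id)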

References
==

[1]
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L1481

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733861

Title:
  VIFs not always detached from ironic nodes during termination

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  Sometimes when a baremetal instance is terminated, some VIFs are not
  detached from the node. This can lead to the node becoming unusable,
  with subsequent attempts to provision it failing during VIF attachment
  because there are insufficient free ironic ports to attach the VIF
  to.

  Steps to reproduce
  ==

  No reproduction procedure identified as yet, but will be something
  like:

  * boot one baremetal instance
  * do something to trigger the bug
  * delete the instance
  * boot a second instance on the same ironic node

  Expected results
  

  The second instance should boot successfully.

  Actual results
  ==

  The second instance fails to boot, and the following error message is
  emitted by nova-compute:

  VirtualInterfacePlugException: Cannot attach VIF 409830a5-b4de-4d1d-
  be22-5e6fe4ccd65b to the node 3aaaf79e-99fb-42a3-b22e-b1a7fae44272 due
  to error: Unable to attach VIF 409830a5-b4de-4d1d-be22-5e6fe4ccd65b,
  not enough free physical ports. (HTTP 400)

  The neutron port has been deleted:

  $ openstack port show 7e567468-53a2-4fad-8bc9-a30a0e7218a0
  ResourceNotFound: No Port found for 7e567468-53a2-4fad-8bc9-a30a0e7218a0

  The ironic node's VIF is stil

[Yahoo-eng-team] [Bug 1730933] Re: Quobyte mount validation needs update

2017-11-22 Thread Silvan Kaiser
Added the Nova project as this issue hits the Quobyte Nova drivers mount
point validation, too.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Silvan Kaiser (2-silvan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1730933

Title:
  Quobyte mount validation needs update

Status in Cinder:
  Fix Released
Status in OpenStack Compute (nova):
  New

Bug description:
  The Quobyte driver currently validates mounts, amongst other checks,
  by verifying that the mount's device field has a 'quobyte@' prefix [1].
  In the Quobyte client this will be replaced by setting a fuse
  subtype. The driver should be able to cope with both variants in order
  to stay compatible with all Quobyte releases.

  [1]
  
https://github.com/openstack/cinder/blob/fb27334719fb612d2d5386b7d9de374d4a415d81/cinder/volume/drivers/quobyte.py#L476
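
  A minimal sketch of a check that tolerates both variants (the function
  name and the /proc/mounts parsing are assumptions, not the actual
  Cinder/Nova driver code):

    def is_quobyte_mount(mount_path, proc_mounts='/proc/mounts'):
        with open(proc_mounts) as mounts:
            for line in mounts:
                device, mount_point, fs_type = line.split()[:3]
                if mount_point != mount_path:
                    continue
                # Old client: device looks like 'quobyte@<volume>'.
                # New client: fuse subtype, i.e. fs_type 'fuse.quobyte'.
                return (device.startswith('quobyte@') or
                        fs_type == 'fuse.quobyte')
        return False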

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1730933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733852] [NEW] Incorrect ARP entries in new DVR routers for Octavia VRRP addresses

2017-11-22 Thread Daniel Russell
Public bug reported:

Hi,

I am running Ocata Neutron with OVS DVR, l2_population is on, and Ocata
Octavia is also installed. Under certain circumstances, I am getting
incorrect ARP entries in the routers for the VRRP addresses of the
load balancers created.

Here is the ARP table for a router that existed before the load balancer was created:
[root@ ~]# ip netns exec qrouter-6b5fe9df-eab2-4147-b95f-419d0c620344 ip neigh
10.2.2.11 dev qr-458b6819-4f lladdr fa:16:3e:3c:df:9c PERMANENT
10.2.2.1 dev qr-458b6819-4f lladdr fa:16:3e:f0:45:c9 PERMANENT
10.2.2.2 dev qr-458b6819-4f lladdr fa:16:3e:70:0e:8c PERMANENT
[root@ ~]#

After creating a load balancer, ports are created in the project network for the
load balancer instance and for the VRRP address (but as far as I understand, the
VRRP port is just there to reserve the IP):
[root@ /]# openstack port show 9bb862a7-fdb5-487e-94f5-4fac8b55d5d2
+-----------------------+--------------------------------------------------------------------------
| Field                 | Value
+-----------------------+--------------------------------------------------------------------------
| admin_state_up        | UP
| allowed_address_pairs | ip_address='10.2.2.8', mac_address='fa:16:3e:78:82:cb'
| binding_host_id       |
| binding_profile       |
| binding_vif_details   | ovs_hybrid_plug='True', port_filter='True'
| binding_vif_type      | ovs
| binding_vnic_type     | normal
| created_at            | 2017-11-22T10:35:11Z
| description           |
| device_id             | 3355a8e7-95fe-4f15-8233-3ffcbb935d5c
| device_owner          | compute:None
| dns_assignment        | fqdn='amphora-8cc77a78-359e-4829-968b-2d026869d845.cloud..',
|                       | hostname='amphora-8cc77a78-359e-4829-968b-2d026869d845',
|                       | ip_address='10.2.2.5'
| dns_name              | amphora-8cc77a78-359e-4829-968b-2d026869d845
| extra_dhcp_opts       |
| fixed_ips             | ip_address='10.2.2.5', subnet_id='0c8633c6-96a1-4c0e-a73f-212eddfd6172'
| id                    | 9bb862a7-fdb5-487e-94f5-4fac8b55d5d2
| ip_address            | None
| mac_address           | fa:16:3e:78:82:cb
| name                  | octavia-lb-vrrp-8cc77a78-359e-4829-968b-2d026869d845
| network_id            | 8d365ce2-d909-410d-991c-7f503a65d67b
| option_name           | None
| option_value          | None
| port_security_enabled | False
| project_id            | 905d2c54fe08456abee3c44feb1d8e05
| qos_policy_id         | None
| revision_number       | 18
| security_groups       | 355790da-7eec-4685-b92e-7a6e2cd1ba1e
| status                | ACTIVE
| subnet_id             | None
| updated_at            | 2017-11-22T12:04:36Z
+-----------------------+--------------------------------------------------------------------------

[Yahoo-eng-team] [Bug 1733836] [NEW] Support LDAP server discovery via DNS SRV records

2017-11-22 Thread Colleen Murphy
Public bug reported:

When an organization has more than one LDAP server and a potentially
large number of clients connecting to them, they may support automatic
discovery of those servers by creating DNS SRV records for them. The
overview of how this works is described here:

http://www.rjsystems.nl/en/2100-dns-discovery-openldap.php

When using OpenLDAP utilities like the ldapsearch command line tool, we
can use syntax like this to discover an LDAP host and make queries
against it:

ldapsearch -H ldap:///dc%3Dexample%2Cdc%3Dcom uid=ccolumbus

python-ldap does not support discovery this way. It interprets a URL
like this as referring to a file on localhost. Based on this thread, it
seems unlikely that python-ldap or libldap would be willing to support
this:

https://mail.python.org/pipermail/python-ldap/2013q4/003298.html

Their concerns seem to be about this being a major change in behavior.
It also poses a problem for TLS-secured hosts since we'd no longer be
requesting the host directly by its CN, also mentioned in this thread:

http://python-ldap.python.narkive.com/27BmiEIr/connect-to-multiple-
servers-for-failover#post6

We could implement this in keystone, as a wrapper around ldappool
/python-ldap. It would come with the caveat that DNSSEC is necessary and
that LDAPS/StartTLS might not work or you might have to add some weird
alt names to your certificates.
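
A rough sketch of what such a wrapper could look like (hedged: this assumes
dnspython is available, the function name is made up, and it is not an
existing keystone or ldappool API):

    import dns.resolver  # dnspython
    import ldap

    def connect_via_srv(domain):
        # RFC 2782: look up _ldap._tcp.<domain> SRV records.
        answers = dns.resolver.query('_ldap._tcp.%s' % domain, 'SRV')
        # Lower priority first; within a priority, prefer higher weight.
        for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
            uri = 'ldap://%s:%d' % (str(record.target).rstrip('.'), record.port)
            try:
                conn = ldap.initialize(uri)
                conn.simple_bind_s()  # anonymous bind, just to probe the server
                return conn
            except ldap.SERVER_DOWN:
                continue  # try the next server from the SRV answer
        raise RuntimeError('no LDAP server reachable via DNS SRV for %s' % domain)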

It looks like RedHat has had this idea as well:

https://bugzilla.redhat.com/show_bug.cgi?id=1469527

In that report, Nathan suggests that this should be in python-ldap
rather than keystone, but based on the above python-ldap thread I think
that might be an uphill battle.

Thoughts?

** Affects: keystone
 Importance: Wishlist
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1733836

Title:
  Support LDAP server discovery via DNS SRV records

Status in OpenStack Identity (keystone):
  New

Bug description:
  When an organization has more than one LDAP server and a potentially
  large number of clients connecting to them, they may support automatic
  discovery of those servers by creating DNS SRV records for them. The
  overview of how this works is described here:

  http://www.rjsystems.nl/en/2100-dns-discovery-openldap.php

  When using OpenLDAP utilities like the ldapsearch command line tool,
  we can use syntax like this to discover an LDAP host and make queries
  against it:

  ldapsearch -H ldap:///dc%3Dexample%2Cdc%3Dcom uid=ccolumbus

  python-ldap does not support discovery this way. It interprets a URL
  like this as referring to a file on localhost. Based on this thread,
  it seems unlikely that python-ldap or libldap would be willing to
  support this:

  https://mail.python.org/pipermail/python-ldap/2013q4/003298.html

  Their concerns seem to be about this being a major change in behavior.
  It also poses a problem for TLS-secured hosts since we'd no longer be
  requesting the host directly by its CN, also mentioned in this thread:

  http://python-ldap.python.narkive.com/27BmiEIr/connect-to-multiple-
  servers-for-failover#post6

  We could implement this in keystone, as a wrapper around ldappool
  /python-ldap. It would come with the caveat that DNSSEC is necessary
  and that LDAPS/StartTLS might not work or you might have to add some
  weird alt names to your certificates.

  It looks like RedHat has had this idea as well:

  https://bugzilla.redhat.com/show_bug.cgi?id=1469527

  In that report, Nathan suggests that this should be in python-ldap
  rather than keystone, but based on the above python-ldap thread I
  think that might be an uphill battle.

  Thoughts?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1733836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728479] Re: some security-group rules will be covered.

2017-11-22 Thread Zachary Ma
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1728479

Title:
  some security-group rules will be covered.

Status in neutron:
  Fix Released

Bug description:
  1. create security-group anquanzu01, anquanzu02
  2. create vm1 with anquanzu01, anquanzu02, create vm2 with anquanzu02.
  3. vm1 can ping vm2, but vm2 cannot ping vm1.

  anquanzu01, anquanzu02 are as follows:
   
  [root@172e18e211e96 ~]# neutron security-group-show anquanzu01
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  
+--++
  | Field| Value
  |
  
+--++
  | created_at   | 2017-10-19T04:14:01Z 
  |
  | description  |  
  |
  | id   | b089348a-f939-43f8-bdd2-d7b54376f640 
  |
  | name | anquanzu01   
  |
  | project_id   | 2acab64182334292a9bf5f3cdd5b3428 
  |
  | revision_number  | 6
  |
  | security_group_rules | {
  |
  |  |  "remote_group_id": null,
  |
  |  |  "direction": "ingress", 
  |
  |  |  "protocol": "icmp", 
  |
  |  |  "description": "",  
  |
  |  |  "tags": [], 
  |
  |  |  "ethertype": "IPv4",
  |
  |  |  "remote_ip_prefix": "0.0.0.0/0",
  |
  |  |  "port_range_max": null, 
  |
  |  |  "updated_at": "2017-10-19T04:26:01Z",   
  |
  |  |  "security_group_id": 
"b089348a-f939-43f8-bdd2-d7b54376f640",  |
  |  |  "port_range_min": null, 
  |
  |  |  "revision_number": 0,   
  |
  |  |  "tenant_id": 
"2acab64182334292a9bf5f3cdd5b3428",  |
  |  |  "created_at": "2017-10-19T04:26:01Z",   
  |
  |  |  "project_id": 
"2acab64182334292a9bf5f3cdd5b3428", |
  |  |  "id": "1b7a4a06-e762-487a-9776-0d9d781f537c"
  |
  |  | }
  |
  |  | {
  |
  |  |  "remote_group_id": null,
  |
  |  |  "direction": "egress",  
  |
  |  |  "protocol": null,   
  |
  |  |  "description": null,
  |
  |  |  "tags": [], 
  |
  |  |  "ethertype": "IPv6",
  |
  |  |  "remote_ip_prefix": null,   
  |
  |  |  "port_range_max": null, 
  |
  |  |  "updated_at": "2017-10-19T04:14:01Z",   
  |
  |  |  "security_group_id": 
"b089348a-f939-43f8-bdd2-d7b54376f640",  |
  |  |  "port_range_min": null, 
  |
  |  |  "revision_number": 0,   
  |
  |  |  "tenant_id": 
"2acab64182334292a9bf5f3cdd5b3428",  |
  |  |  "created_at": "2017-10-19T04:14:01Z",   
  |
  |  |  "project_id": 
"2acab64182334292a9bf5f3cdd5b3428", |
  |  |  "id": "2e605e9b-9be1-4dd3-a86b-af7b95c476fb"
  |
  |

[Yahoo-eng-team] [Bug 1733816] [NEW] Import api, image becomes active if disk-format and container format are not set

2017-11-22 Thread Abhishek Kekane
Public bug reported:

If you run the image-import API on an image which is in saving state and
does not have container-format and/or disk-format set, the image goes into
active state. Ideally an image which does not have container-format or
disk-format set should cause a bad request error.

Prerequisites:
1. Ensure you have the latest version of python-glanceclient (version 2.8.0) 
installed
2. Due to issue [1], to execute the taskflow you need to modify line [2] as shown 
below and restart the glance-api service
   -pool.spawn_n(import_task.run, task_executor)
   +import_task.run(task_executor)
   [1] https://bugs.launchpad.net/glance/+bug/1712463
   [2] 
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L106
   

Steps to reproduce:
1. Create an image without container format and disk-format
$ glance image-create --name cirros_image
2. Run stage call to upload data in staging area
$ glance image-stage  --file 
~/devstack/local.conf
3. Run image-import call
   $ glance image-import  --import-method 
glance-direct

Output:
+--+--+
| Property | Value|
+--+--+
| checksum | 527294ab8d1550529d6e5ef853cf1933 |
| container_format | None |
| created_at   | 2017-11-22T09:28:42Z |
| disk_format  | None |
| id   | 303e1af0-4273-4a40-a719-9bd2e6a89864 |
| min_disk | 0|
| min_ram  | 0|
| name | cirros_image |
| owner| 40ab3e7ce43e4b6bb31a912b434490b5 |
| protected| False|
| size | 314  |
| status   | active   |
| tags | []   |
| updated_at   | 2017-11-22T09:29:46Z |
| virtual_size | None |
| visibility   | shared   |
+--+--+

From the above output you can see that the image is in active state and
container_format and disk_format are set to None.
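
A sketch of the kind of check suggested above (hedged: the function name is
made up and this is not the actual glance code), rejecting the import call
when the formats are unset:

    from webob import exc

    def _validate_formats_for_import(image):
        if image.disk_format is None or image.container_format is None:
            msg = ("Properties disk_format and container_format must be set "
                   "before importing data")
            raise exc.HTTPBadRequest(explanation=msg)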

** Affects: glance
 Importance: Undecided
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1733816

Title:
  Import api, image becomes active if disk-format and container format
  are not set

Status in Glance:
  New

Bug description:
  If you run the image-import API on an image which is in saving state and
  does not have container-format and/or disk-format set, the image goes
  into active state. Ideally an image which does not have container-format
  or disk-format set should cause a bad request error.

  Prerequisites:
  1. Ensure you have the latest version of python-glanceclient (version 2.8.0) 
installed
  2. Due to issue [1], to execute the taskflow you need to modify line [2] as shown 
below and restart the glance-api service
 -pool.spawn_n(import_task.run, task_executor)
 +import_task.run(task_executor)
 [1] https://bugs.launchpad.net/glance/+bug/1712463
 [2] 
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L106
 

  Steps to reproduce:
  1. Create an image without container format and disk-format
  $ glance image-create --name cirros_image
  2. Run stage call to upload data in staging area
  $ glance image-stage  --file 
~/devstack/local.conf
  3. Run image-import call
 $ glance image-import  --import-method 
glance-direct

  Output:
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | 527294ab8d1550529d6e5ef853cf1933 |
  | container_format | None |
  | created_at   | 2017-11-22T09:28:42Z |
  | disk_format  | None |
  | id   | 303e1af0-4273-4a40-a719-9bd2e6a89864 |
  | min_disk | 0|
  | min_ram  | 0|
  | name | cirros_image |
  | owner| 40ab3e7ce43e4b6bb31a912b434490b5 |
  | protected| False|
  | size | 314  |
  | status   | active   |
  | tags | []   |
  | updated_at   | 

[Yahoo-eng-team] [Bug 1733813] [NEW] Running image-import call on queued image having valid container and disk formats returns 500 internal server error

2017-11-22 Thread Abhishek Kekane
Public bug reported:

If you run the image-import API on an image which is in queued state with
valid container-format and disk-format set, it returns a 500 error because
it raises IOError: [Errno 2] No such file or directory:
'/tmp/staging/567bfb61-d9f7-47e5-aa1a-90b7797e70be'. Also the image status
changes from 'queued' to 'importing'. Ideally the transition from queued to
importing should not be allowed and it should return an HTTP 409 Conflict
error to the user.

Prerequisites:
1. Ensure you have the latest version of python-glanceclient (version 2.8.0) 
installed
2. Due to issue [1], to execute the taskflow you need to modify line [2] as shown 
below and restart the glance-api service
   -pool.spawn_n(import_task.run, task_executor)
   +import_task.run(task_executor)
   [1] https://bugs.launchpad.net/glance/+bug/1712463
   [2] 
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L106

Steps to reproduce:
1. Create an image with valid container format and disk-format
$ glance image-create --container-format ami --disk-format ami --name 
cirros_image
2. Ensure image is in queued state
3. Run image-import call
   $ glance image-import  --import-method 
glance-direct

Output:
500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)
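
A sketch of a guard that would avoid the late IOError (hedged: the function
name and the staging path handling are assumptions, not the actual glance
code):

    import os

    from webob import exc

    def _ensure_staged_data(image_id, staging_dir='/tmp/staging'):
        staged_file = os.path.join(staging_dir, image_id)
        if not os.path.exists(staged_file):
            msg = ("No staged data found for image %s; run image-stage "
                   "before image-import" % image_id)
            raise exc.HTTPConflict(explanation=msg)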

Glance API Logs:

Nov 22 09:12:57 devstack devstack@g-api.service[14229]: pdict['tenant'] = 
self.tenant
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi [None req-52ea6328-83bf-4c25-a137-a51272308be9 admin admin] 
Caught error: [Errno 2] No such file or directory: 
'/tmp/staging/567bfb61-d9f7-47e5-aa1a-90b7797e70be': IOError: [Errno 2] No such 
file or directory: '/tmp/staging/567bfb61-d9f7-47e5-aa1a-90b7797e70be'
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi Traceback (most recent call last):
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/wsgi.py", line 1222, 
in __call__
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi request, **action_args)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/wsgi.py", line 1261, 
in dispatch
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi return method(*args, **kwargs)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/utils.py", line 363, 
in wrapped
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi return func(self, req, *args, **kwargs)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/api/v2/images.py", line 
107, in import_image
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi import_task.run(task_executor)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/notifier.py", line 581, in 
run
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi super(TaskProxy, self).run(executor)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/__init__.py", line 
439, in run
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi executor.begin_processing(self.task_id)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 143, in 
begin_processing
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi super(TaskExecutor, self).begin_processing(task_id)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/async/__init__.py", line 
63, in begin_processing
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self._run(task_id, task.type)
Nov 22 09:12:57 devstack devstack@g-api.service[14229]: ERROR 
glan

[Yahoo-eng-team] [Bug 1733810] [NEW] Running image-import call on queued image without container and disk format returns 500 internal server error

2017-11-22 Thread Abhishek Kekane
Public bug reported:

If you run the image-import API on an image which is in queued state and
doesn't have container-format and disk-format set, it returns a 500 error
because it raises ValueError: Properties disk_format, container_format must
be set prior to saving data. Ideally it should return an HTTP 400
BadRequest error to the user.

Prerequisites:
1. Ensure you have the latest version of python-glanceclient (version 2.8.0) 
installed
2. Due to issue [1], to execute the taskflow you need to modify line [2] as shown 
below and restart the glance-api service
   -pool.spawn_n(import_task.run, task_executor)
   +import_task.run(task_executor)
   [1] https://bugs.launchpad.net/glance/+bug/1712463
   [2] 
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L106
   

Steps to reproduce:
1. Create an image without container format and disk-format
$ glance image-create --name cirros_image
2. Ensure image is in queued state
3. Run image-import call
   $ glance image-import  --import-method 
glance-direct

Output:
500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)


Glance API Logs:

Nov 22 09:04:17 devstack devstack@g-api.service[14229]: pdict['tenant'] = 
self.tenant
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi [None req-4d0baee8-445e-4ed0-82b8-966e71636ddf admin admin] 
Caught error: Properties disk_format, container_format must be set prior to 
saving data.: ValueError: Properties disk_format, container_format must be set 
prior to saving data.
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi Traceback (most recent call last):
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/wsgi.py", line 1222, 
in __call__
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi request, **action_args)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/wsgi.py", line 1261, 
in dispatch
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi return method(*args, **kwargs)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/utils.py", line 363, 
in wrapped
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi return func(self, req, *args, **kwargs)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/api/v2/images.py", line 
107, in import_image
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi import_task.run(task_executor)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/notifier.py", line 581, in 
run
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi super(TaskProxy, self).run(executor)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/__init__.py", line 
439, in run
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi executor.begin_processing(self.task_id)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 143, in 
begin_processing
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi super(TaskExecutor, self).begin_processing(task_id)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/async/__init__.py", line 
63, in begin_processing
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self._run(task_id, task.type)
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 181, in _run
Nov 22 09:04:17 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.ta

[Yahoo-eng-team] [Bug 1733803] [NEW] Running image-import call on active image returns 500 internal server error

2017-11-22 Thread Abhishek Kekane
Public bug reported:

If you run the image-import API on an image which is in active state, it
returns a 500 error because it raises InvalidImageStatusTransition: image
status transition from active to importing is not allowed.

Ideally it should return an HTTP 409 Conflict error to the user.

Prerequisites:
1. Ensure you have the latest version of python-glanceclient (version 2.8.0) 
installed
2. Due to issue [1], to execute the taskflow you need to modify line [2] as shown 
below and restart the glance-api service
   -pool.spawn_n(import_task.run, task_executor)
   +import_task.run(task_executor)
   [1] https://bugs.launchpad.net/glance/+bug/1712463
   [2] 
https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#L106

Steps to reproduce:
1. Create image and upload data to it
$ glance image-create --container-format ami --disk-format ami --name 
cirros_image --file cirros-0.3.4-x86_64-blank.img
2. Ensure image is in active state
3. Run image-import call
   $ glance image-import  --import-method 
glance-direct

Output:
500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)
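
A sketch of the kind of status check suggested above (hedged: the names and
the set of allowed statuses are assumptions, not the actual glance code):

    from webob import exc

    # Statuses from which an import seems reasonable to allow (assumption).
    STATUSES_ALLOWING_IMPORT = ('queued', 'uploading')

    def _check_import_allowed(image):
        if image.status not in STATUSES_ALLOWING_IMPORT:
            msg = ("Image %s is in status '%s'; import is not allowed"
                   % (image.image_id, image.status))
            raise exc.HTTPConflict(explanation=msg)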


Glance API Logs:

Nov 22 07:21:01 devstack devstack@g-api.service[14229]: pdict['tenant'] = 
self.tenant
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi [None req-2abf2e90-c810-44d4-bc21-ab8f0e6cc8de admin admin] 
Caught error: Image status transition from active to importing is not allowed: 
InvalidImageStatusTransition: Image status transition from active to importing 
is not allowed
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi Traceback (most recent call last):
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/wsgi.py", line 1222, 
in __call__
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi request, **action_args)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/wsgi.py", line 1261, 
in dispatch
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi return method(*args, **kwargs)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/utils.py", line 363, 
in wrapped
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi return func(self, req, *args, **kwargs)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/api/v2/images.py", line 
107, in import_image
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi import_task.run(task_executor)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/notifier.py", line 581, in 
run
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi super(TaskProxy, self).run(executor)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 238, 
in run
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self.base.run(executor)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/__init__.py", line 
439, in run
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi executor.begin_processing(self.task_id)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 143, in 
begin_processing
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi super(TaskExecutor, self).begin_processing(task_id)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/async/__init__.py", line 
63, in begin_processing
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi self._run(task_id, task.type)
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   File 
"/opt/stack/glance/glance/async/taskflow_executor.py", line 181, in _run
Nov 22 07:21:01 devstack devstack@g-api.service[14229]: ERROR 
glance.common.wsgi   

[Yahoo-eng-team] [Bug 1673759] Re: Get a wrong error message when you extend a volume

2017-11-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/482823
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=476b0a4e5a3b3c871d72acb7021de4d000a30ee0
Submitter: Zuul
Branch:master

commit 476b0a4e5a3b3c871d72acb7021de4d000a30ee0
Author: Chiew Yee Xin 
Date:   Wed Jul 12 15:56:44 2017 +0900

Display correct volume size in error message

The error message displayed when extending a volume should include
the original size of the volume, to tell the user the exact
maximum size the volume can be extended to.

Change-Id: I0ab1ceddc1e9b842ebe7296eda1ca3caea0d7ea6
Closes-Bug: #1673759


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1673759

Title:
  Get a wrong error message when you extend a volume

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  If your volume quota is 10G and you have a 5G volume, when you try to
  extend the volume to 11G you get a message saying that you only have 5G
  of your quota available; it should say 10G, the maximum size the volume
  can be extended to (its current 5G plus the 5G of remaining quota).
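
  The arithmetic behind the expected message, as a small illustration (the
  function name is made up; this is not the actual horizon code):

    def max_extendable_size(current_size_gb, quota_gb, used_gb):
        # The largest size the volume can be extended to is its current
        # size plus whatever quota is still unused.
        return current_size_gb + (quota_gb - used_gb)

    # Example from this report: quota 10G, one 5G volume already counted
    # as used, so max_extendable_size(5, 10, 5) == 10.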

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1673759/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp