[Yahoo-eng-team] [Bug 1583419] Re: Make dict.keys() PY3 compatible

2016-07-27 Thread qinchunhua
** Also affects: networking-l2gw
   Importance: Undecided
   Status: New

** Changed in: networking-l2gw
   Status: New => Fix Released

** Changed in: networking-l2gw
 Assignee: (unassigned) => qinchunhua (qin-chunhua)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583419

Title:
  Make dict.keys() PY3 compatible

Status in Cinder:
  Fix Released
Status in networking-l2gw:
  Fix Released
Status in neutron:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Rally:
  Fix Released
Status in tacker:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  In PY3, dict.keys() returns a view object rather than a list, e.g.
  $ python3.4
  Python 3.4.3 (default, Mar 31 2016, 20:42:37)
  >>> body={"11":"22"}
  >>> body[body.keys()[0]]
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: 'dict_keys' object does not support indexing

  so for PY3 compatibility we should change it as follows:
  >>> body[list(body.keys())[0]]
  '22'
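
  An alternative sketch that avoids building the full key list on either
  Python version (only useful when a single, arbitrary key is needed):
  >>> first_key = next(iter(body))
  >>> body[first_key]
  '22'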

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1583419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463784] Re: [RFE] Networking L2 Gateway does not work with DVR

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/312593
Committed: 
https://git.openstack.org/cgit/openstack/networking-l2gw/commit/?id=483109a9f3d68f47dc4f1a7701e82c9971e3e57c
Submitter: Jenkins
Branch:master

commit 483109a9f3d68f47dc4f1a7701e82c9971e3e57c
Author: Ofer Ben-Yacov 
Date:   Wed May 4 18:01:45 2016 +0300

enabling L2GW to work with DVR

In DVR mode no host IP is associated with the L3 Agent because
the router is configured on all the Compute Nodes and on the Network Node.
To overcome this problem we look for the L3 Agent that is running on the
Network Node, resolve the hostname of the Network Node to get its IP
address, and use it to configure the destination IP needed for the
neutron port location information.

Closes-Bug: 1463784

Change-Id: I2595c714ede896baa7726ceec793de9a7a29e6b2


** Changed in: networking-l2gw
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463784

Title:
  [RFE] Networking L2 Gateway does not work with DVR

Status in networking-l2gw:
  Fix Released
Status in neutron:
  In Progress

Bug description:
  Currently, networking L2 gateway solution cannot be used with a DVR.
  If a virtual machine is in one subnet and the bare metal server is in
  another, then it makes sense to allow DVR configured on the compute
  node to route the traffic from the VM to the bare metal server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-l2gw/+bug/1463784/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607172] [NEW] Updating metering-labels and metering-label-rules return 500 error because of unexpected AttributeError

2016-07-27 Thread Rahmad Ade Putra
Public bug reported:

Updating metering-labels and metering-label-rules return 500 error
because of unexpected AttributeError.

Here are my explanations:

1. I hit an issue I did not expect: when I tried to update a
metering-label using PUT, the log shows that a 500 Internal Server Error
occurred, even with no parameters passed in the metering_label object. I
think this should not be a 500 Internal Server Error; it should be a 501
MethodNotSupported, since there is no method supporting update (PUT), and
the 501 error would give a proper explanation to the user.

2. The second case is much the same: when I tried to update a
metering-label-rule using PUT, the log again shows a 500 Internal Server
Error. The underlying issue is similar to the first one, and I would
expect a 501 MethodNotSupported instead of a 500 Internal Server Error.

Below I have attached all of my tracebacks (logs) and the request
commands I used.

-
Updating metering-labels request to API

vagrant@ubuntu:~$ curl -g -i -X PUT 
http://192.168.122.139:9696/v2.0/metering/metering-labels/1f3ec85a-e250-4e3e-a6c6-58745121f0bf
 -H "X-Auth-Token: $TOKEN" -d '{"metering_label":{}}'
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 150
X-Openstack-Request-Id: req-8b02de2a-3ed6-48ba-b9d8-d7ae34268fe2
Date: Wed, 27 Jul 2016 17:20:57 GMT

{"NeutronError": {"message": "Request Failed: internal server error
while processing your request.", "type": "HTTPInternalServerError",
"detail": ""}}



Log Trackback

2016-07-27 17:20:57.504 1448 DEBUG neutron.api.v2.base 
[req-8b02de2a-3ed6-48ba-b9d8-d7ae34268fe2 e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de - - -] Request body: {u'metering_label': {}} 
prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:649
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource 
[req-8b02de2a-3ed6-48ba-b9d8-d7ae34268fe2 e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de - - -] update failed: No details.
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 571, in update
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 613, in _update
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource obj_updater = 
getattr(self._plugin, action)
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource AttributeError: 
'MeteringPlugin' object has no attribute 'update_metering_label'
2016-07-27 17:20:57.521 1448 ERROR neutron.api.v2.resource
2016-07-27 17:20:57.523 1448 INFO neutron.wsgi 
[req-8b02de2a-3ed6-48ba-b9d8-d7ae34268fe2 e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de - - -] 192.168.122.139 - - [27/Jul/2016 
17:20:57] "PUT 
/v2.0/metering/metering-labels/1f3ec85a-e250-4e3e-a6c6-58745121f0bf HTTP/1.1" 
500 344 0.055912

=

Updating metering-label-rules request to API

vagrant@ubuntu:~$ curl -g -i -X PUT 
http://192.168.122.139:9696/v2.0/metering/metering-label-rules/c4deb0b6-0fee-4166-a574-8f4582e301ec
 -H "X-Auth-TOken: $TOKEN" -d '{"metering_label_rule":{}}'
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 150
X-Openstack-Request-Id: req-342881fb-f466-4adb-a6f4-d78b5d568a96
Date: Wed, 27 Jul 2016 17:24:45 GMT

{"NeutronError": {"message": "Request Failed: internal server error
while processing your request.", "type": 

[Yahoo-eng-team] [Bug 1590117] Re: Service plugin class' get_plugin_type should be a classmethod

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326867
Committed: 
https://git.openstack.org/cgit/openstack/networking-midonet/commit/?id=39e387c182a5d713e17be4b77349a4472e73e7d6
Submitter: Jenkins
Branch:master

commit 39e387c182a5d713e17be4b77349a4472e73e7d6
Author: YAMAMOTO Takashi 
Date:   Wed Jun 8 16:08:47 2016 +0900

Make get_plugin_type classmethod

Following the recent Neutron change. [1]

[1] Ia3a1237a5e07169ebc9378b1cd4188085e20d71c

Closes-Bug: #1590117
Change-Id: I81f46ef6d855166240581d0f2843bd519d45c4a5


** Changed in: networking-midonet
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590117

Title:
  Service plugin class' get_plugin_type should be a classmethod

Status in networking-midonet:
  Fix Released
Status in neutron:
  Fix Released
Status in tap-as-a-service:
  Fix Released

Bug description:
  There isn't any reason to have it as an instance method, as it's only
  returning a constant; a sketch of the change follows the grep output
  below.

  $ git grep 'def get_plugin_type('
  neutron/extensions/metering.py:def get_plugin_type(self):
  neutron/extensions/qos.py:def get_plugin_type(self):
  neutron/extensions/segment.py:def get_plugin_type(self):
  neutron/extensions/tag.py:def get_plugin_type(self):
  neutron/services/auto_allocate/plugin.py:def get_plugin_type(self):
  neutron/services/flavors/flavors_plugin.py:def get_plugin_type(self):
  neutron/services/l3_router/l3_router_plugin.py:def get_plugin_type(self):
  neutron/services/network_ip_availability/plugin.py:def 
get_plugin_type(self):
  neutron/services/service_base.py:def get_plugin_type(self):
  neutron/services/timestamp/timestamp_plugin.py:def get_plugin_type(self):
  neutron/tests/functional/pecan_wsgi/utils.py:def get_plugin_type(self):
  neutron/tests/unit/api/test_extensions.py:def 
get_plugin_type(self):
  neutron/tests/unit/api/test_extensions.py:def 
get_plugin_type(self):
  neutron/tests/unit/dummy_plugin.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_flavors.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_l3.py:def get_plugin_type(self):
  neutron/tests/unit/extensions/test_router_availability_zone.py:def 
get_plugin_type(self):
  neutron/tests/unit/extensions/test_segment.py:def get_plugin_type(self):
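
  The change itself is mechanical; roughly (illustrative names only, not
  an actual neutron plugin):

  # Stand-ins for neutron's ServicePluginBase and the plugin-type constant.
  MY_PLUGIN_TYPE = 'my-service'

  class ServicePluginBase(object):
      pass

  class MyServicePlugin(ServicePluginBase):

      # Previously: def get_plugin_type(self): return MY_PLUGIN_TYPE
      @classmethod
      def get_plugin_type(cls):
          # Only returns a constant, so no instance state is needed.
          return MY_PLUGIN_TYPE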

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1590117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533454] Re: L3 agent unable to update HA router state after race between HA router creating and deleting

2016-07-27 Thread LIU Yulong
** Changed in: neutron/kilo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533454

Title:
  L3 agent unable to update HA router state after race between HA router
  creating and deleting

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The router L3 HA binding process does not take into account the fact
  that the port it is binding to the agent can be concurrently deleted.

  Details:

  When the neutron server deletes all the resources of an
  HA router, the L3 agent is not aware of that, so a race
  can happen in a sequence like this:
  1. The neutron server deletes all resources of an HA router.
  2. The RPC fans out to L3 agent 1, on which
     the HA router was in the master state.
  3. On L3 agent 2 the 'backup' router sets itself to master
     and sends an HA router state change notification to the neutron server.
  4. PortNotFound is raised in the function that updates HA router states.
  (The DB error no longer seems to exist.)

  How do steps 2 and 3 happen?
  Consider that L3 agent 2 has many more HA routers than L3 agent 1,
  or any other reason that causes L3 agent 2 to get/process the delete
  RPC later than L3 agent 1. When L3 agent 1 removes the HA router's
  keepalived process, that is soon detected by the backup router on
  L3 agent 2 via the VRRP protocol. At that point the router delete RPC
  is still in the RouterUpdate queue, or the HA router delete procedure
  is still in progress, and router_info still holds 'the' router's info.
  So L3 agent 2 runs the state change procedure, i.e. it notifies
  the neutron server to update the router state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583284] Re: disconnect_volume calls are made during a remote rebuild of a volume backed instance

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/318266
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=fdf3328107e53f1c5578c2e4dfbad78d832b01c6
Submitter: Jenkins
Branch:master

commit fdf3328107e53f1c5578c2e4dfbad78d832b01c6
Author: Lee Yarwood 
Date:   Wed May 18 17:11:16 2016 +0100

compute: Skip driver detach calls for non local instances

Only call for a driver detach from a volume if the instance is currently
associated with the local compute host. This avoids potential virt
driver and volume backend issues when attempting to disconnect from
volumes that have never been connected to from the current host.

Closes-Bug: #1583284
Change-Id: I36b8532554d75b24130f456a35acd0be838b62d6
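
Conceptually the guard amounts to something like the sketch below
(illustrative names, not the exact nova code):

    def driver_detach_if_local(driver, instance, connection_info,
                               mountpoint, local_host):
        # Only ask the virt driver to detach when this compute host actually
        # owns the instance; during an evacuate/rebuild the new host never
        # connected to the volume, so there is nothing to disconnect locally.
        if instance.host == local_host:
            driver.detach_volume(connection_info, instance, mountpoint)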


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583284

Title:
  disconnect_volume calls are made during a remote rebuild of a volume
  backed instance

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  disconnect_volume calls are made during a remote rebuild of a volume backed 
instance

  Steps to reproduce
  ==
  - Evacuate a volume backed instance.
  - disconnect_volume is called for each previously attached volume on the now 
remote node rebuilding the instance.

  Expected result
  ===
  disconnect_volume is not called unless the instance was previously running on 
the current host.

  Actual result
  =
  disconnect_volume is called regardless of whether the instance was
  previously running on the current host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 Multinode devstack

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)

 libvirt + KVM

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 LVM/iSCSI

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583284/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607119] [NEW] TOTP auth not functional in python3

2016-07-27 Thread Adrian Turjak
Public bug reported:

Because of how python3 handles byte>str conversion, the passcode
generation function produces a mangled result in python3. The reason the
unit tests still pass in python3 is because the tests also use the same
function and thus the server and the tests are both sending and
expecting the same mangled passcode.

This would then mean that anyone correctly generating the passcode and
attempting to authenticate via TOTP would fail because the server is
expecting a mangled passcode.

The fix is to not use six.text_type, as it does the wrong thing, and
instead use .decode('utf-8') which produces the correct result in both
python2 and python3.

Example of why and how this happens:
Python2:

>>> passcode = b'123456'
>>> print passcode
123456
>>> type(passcode)
<type 'str'>
>>> import six
>>> six.text_type(passcode)
u'123456'
>>> type(six.text_type(passcode))
<type 'unicode'>
>>> otherstring = "openstack"
>>> otherstring + passcode
'openstack123456'
>>> passcode.decode('utf-8')
u'123456'
>>> type(passcode.decode('utf-8'))
<type 'unicode'>

Python3:

>>> passcode = b'123456'
>>> print(passcode)
b'123456'
>>> type(passcode)
<class 'bytes'>
>>> import six
>>> six.text_type(passcode)
"b'123456'"
>>> type(six.text_type(passcode))
<class 'str'>
>>> otherstring = "openstack"
>>> otherstring + passcode
Traceback (most recent call last):
  File "", line 1, in 
TypeError: Can't convert 'bytes' object to str implicitly
>>> otherstring + str(passcode)
"openstackb'123456'"
>>> passcode.decode('utf-8')
'123456'
>>> type(passcode.decode('utf-8'))
<class 'str'>
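
A minimal sketch of the kind of change involved (hypothetical helper name;
the real keystone passcode generation differs in detail):

def passcode_to_text(passcode):
    # `passcode` is the raw bytes produced by the TOTP generator.
    # six.text_type(b'123456') yields u'123456' on PY2 but "b'123456'" on
    # PY3, which is what mangles the comparison; an explicit decode does
    # the right thing on both.
    return passcode.decode('utf-8')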

** Affects: keystone
 Importance: Undecided
 Assignee: Adrian Turjak (adriant-y)
 Status: New

** Description changed:

  Because of how python3 handles byte>str conversion, the passcode
  generation function produces a mangled result in python3. The reason the
  unit tests still pass in python3 is because the tests also use the same
  function and thus the server and the tests are both sending and
  expecting the same mangled passcode.
+ 
+ This would then mean that anyone correctly generating the passcode and
+ attempting to authenticate via TOTP would fail because the server is
+ expecting a mangled passcode.
  
  The fix is to not use six.text_type, as it does the wrong thing, and
  instead use .decode('utf-8') which produces the correct result in both
  python2 and python3.
  
  Example of why and how this happens:
  Python2:
  
  >>> passcode = b'123456'
  >>> print passcode
  123456
  >>> type(passcode)
  
  >>> import six
  >>> six.text_type(passcode)
  u'123456'
  >>> type(six.text_type(passcode))
  
  >>> otherstring = "openstack"
  >>> otherstring + passcode
  'openstack123456'
  >>> passcode.decode('utf-8')
  u'123456'
  >>> type(passcode.decode('utf-8'))
  
  
  Python3:
  
  >>> passcode = b'123456'
  >>> print(passcode)
  b'123456'
  >>> type(passcode)
  
  >>> import six
  >>> six.text_type(passcode)
  "b'123456'"
  >>> type(six.text_type(passcode))
  
  >>> otherstring = "openstack"
  >>> otherstring + passcode
  Traceback (most recent call last):
-   File "", line 1, in 
+   File "", line 1, in 
  TypeError: Can't convert 'bytes' object to str implicitly
  >>> otherstring + str(passcode)
  "openstackb'123456'"
  >>> passcode.decode('utf-8')
  '123456'
  >>> type(passcode.decode('utf-8'))
  

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607119

Title:
  TOTP auth not functional in python3

Status in OpenStack Identity (keystone):
  New

Bug description:
  Because of how python3 handles byte>str conversion, the passcode
  generation function produces a mangled result in python3. The reason
  the unit tests still pass in python3 is because the tests also use the
  same function and thus the server and the tests are both sending and
  expecting the same mangled passcode.

  This would then mean that anyone correctly generating the passcode and
  attempting to authenticate via TOTP would fail because the server is
  expecting a mangled passcode.

  The fix is to not use six.text_type, as it does the wrong thing, and
  instead use .decode('utf-8') which produces the correct result in both
  python2 and python3.

  Example of why and how this happens:
  Python2:

  >>> passcode = b'123456'
  >>> print passcode
  123456
  >>> type(passcode)
  <type 'str'>
  >>> import six
  >>> six.text_type(passcode)
  u'123456'
  >>> type(six.text_type(passcode))
  <type 'unicode'>
  >>> otherstring = "openstack"
  >>> otherstring + passcode
  'openstack123456'
  >>> passcode.decode('utf-8')
  u'123456'
  >>> type(passcode.decode('utf-8'))
  <type 'unicode'>

  Python3:

  >>> passcode = b'123456'
  >>> print(passcode)
  b'123456'
  >>> type(passcode)
  <class 'bytes'>
  >>> import six
  >>> six.text_type(passcode)
  "b'123456'"
  >>> type(six.text_type(passcode))
  <class 'str'>
  >>> otherstring = "openstack"
  >>> otherstring + passcode
  Traceback (most recent call last):
    File "", line 1, in 
  TypeError: Can't convert 'bytes' object to str implicitly
  

[Yahoo-eng-team] [Bug 1606718] Re: logging pci_devices from the resource tracker is kind of terrible

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/347576
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0f790181f40cb3e3ca0ae10f2293777b8c32bd9a
Submitter: Jenkins
Branch:master

commit 0f790181f40cb3e3ca0ae10f2293777b8c32bd9a
Author: Matt Riedemann 
Date:   Tue Jul 26 19:02:23 2016 -0400

rt: don't log pci_devices twice when updating resources

By default we update available resources on the compute every
60 seconds. The _report_hypervisor_resource_view method is
logging pci devices twice. On a compute with hundreds of pci
devices this blows up the logs in a short amount of time.

This change removes at least the duplicate pci device logging
but we might want to even consider just not logging these in
each iteration. I've opted to at least push the simple fix for
now.

Change-Id: Id05bfef44b1108dec286486d42516649dd0683e9
Closes-Bug: #1606718


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606718

Title:
  logging pci_devices from the resource tracker is kind of terrible

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The _report_hypervisor_resource_view method in the resource tracker on
  the compute node is logging pci devices (if set in the resources dict
  from the virt driver).

  I have a compute node with libvirt 1.2.2 with several hundred devices:

  http://paste.openstack.org/show/542185/

  Those get logged TWICE every 60 seconds (by default) because of
  the update_available_resource periodic task in the compute manager.

  We should at the very least only log the giant dict of pci devices
  once in _report_hypervisor_resource_view, or maybe not at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1606718/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607114] [NEW] List role assignments doesn't include domain of role

2016-07-27 Thread Henry Nash
Public bug reported:

The list role assignments call will return the names (and domain names)
of each party in an assignment if the "include_names" query parameter is
included.

However, this is not true for roles, which would be useful for domain
specific roles.

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  The list role assignment will return the names (and domain names) of
  each party in an assignment if the the "include_names" query parameter
  is included.
  
- However, this is not true for domain specific roles.
+ However, this is not true for roles, which would be useful for domain
+ specific roles.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607114

Title:
  List role assignments doesn't include domain of role

Status in OpenStack Identity (keystone):
  New

Bug description:
  The list role assignments call will return the names (and domain names)
  of each party in an assignment if the "include_names" query parameter
  is included.

  However, this is not true for roles, which would be useful for domain
  specific roles.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1607114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596602] Re: Instance Snapshot failure: "Unable to set 'kernel_id' to 'None'. Reason: None is not of type u'string'"

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/347971
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2f2b0e954d7738c2bf49a48a708cc0ce2b19e726
Submitter: Jenkins
Branch:master

commit 2f2b0e954d7738c2bf49a48a708cc0ce2b19e726
Author: Mike Fedosin 
Date:   Wed Jul 27 20:34:41 2016 +0300

Don't set empty kernel_id and ramdisk_id to glance image

In some cases, if the 'allow_additional_properties' option is
disabled in Glance, 'kernel_id' and 'ramdisk_id' are treated as
ordinary properties and cannot have None values. So it's better to
always omit these props and never send them to Glance if they are
empty.

Change-Id: I3dd2a3f39d31a79c86bdbe7b656d42c20c560af3
Closes-bug: #1596602


** Changed in: nova
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596602

Title:
  Instance Snapshot failure: "Unable to set 'kernel_id' to 'None'.
  Reason: None is not of type u'string'"

Status in OpenStack Compute (nova):
  Fix Released
Status in python-glanceclient:
  Fix Released

Bug description:
  Description
  ===
  Attempted to take a snapshot of a suspended server through the CLI, but the 
command failed. (Nova log stack trace appended below)

  Steps to reproduce
  ==
  1. Created a server instance:
 $ nova boot --image  --flavor m1.tiny snapshotvm
  2. Suspended the server:
 $ openstack server suspend 
  3. Attempt to create server snapshot:
 $ nova image-create snapshotvm snapshotimage --poll

  Expected result
  ===
  Expected to have a snapshot of my instance created in the image list.

  Actual result
  =
  Received following error output from the image create command:

  # nova image-create snapshotvm snapshotimage --poll

  Server snapshotting... 25% complete
  ERROR (NotFound): Image not found. (HTTP 404) (Request-ID: 
req-4670eba3-a0d5-4814-b0a8-4aba37a1dd3a)

  Environment
  ===
  1. Running from master level of Openstack

  2. Using KVM virtualization on Ubuntu 14.04:
  # kvm --version
  QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.24), Copyright (c) 
2003-2008 Fabrice Bellard

  2. Which storage type did you use?
 LVM

  3. Which networking type did you use?
 Neutron


  2016-06-27 15:04:21.361 10781 INFO nova.compute.manager 
[req-f393be05-96c2-4d52-bfc1-0c5b644c553e d672d690e856437196a748ebc85abb41 
b9b13412e4fd4fd6a6016368b720309e - - -] [instance: 
ab71489a-0716-404b-808a-165f2a85af74] Successfully reverted task state from 
image_uploading on failure for instance.
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
[req-f393be05-96c2-4d52-bfc1-0c5b644c553e d672d690e856437196a748ebc85abb41 
b9b13412e4fd4fd6a6016368b720309e - - -] Exception during message handling
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 133, in _process_incoming
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 150, in dispatch
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 121, in _do_dispatch
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 104, in wrapped
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server payload)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 197, in force_reraise
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 

[Yahoo-eng-team] [Bug 1607107] [NEW] Unauthorized exception causes missing exception kwargs (programmer error)

2016-07-27 Thread Ron De Rose
Public bug reported:

I created the following class in keystone/exception.py:

class AccountLocked(Unauthorized):
message_format = _("The account is locked for user: %(user_id)s")

And would raise the exception if a user account was locked:
raise exception.AccountLocked(user_id=user_id)

However when doing so, the following error would get logged:
missing exception kwargs (programmer error)

This seems to be a product of:
https://github.com/openstack/keystone/blob/master/keystone/exception.py#L61-L67

A few of us spent time on this in IRC.  For more context you can view that
conversation here:
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2016-07-27.log.html#t2016-07-27T14:49:24

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607107

Title:
  Unauthorized exception causes missing exception kwargs (programmer
  error)

Status in OpenStack Identity (keystone):
  New

Bug description:
  I created the following class in keystone/exception.py:

  class AccountLocked(Unauthorized):
  message_format = _("The account is locked for user: %(user_id)s")

  And would raise the exception if a user account was locked:
  raise exception.AccountLocked(user_id=user_id)

  However when doing so, the following error would get logged:
  missing exception kwargs (programmer error)

  This seems to be a product of:
  
https://github.com/openstack/keystone/blob/master/keystone/exception.py#L61-L67

  A few of us spent time on this in IRC.  For more context you can view
  that conversation here:
  
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2016-07-27.log.html#t2016-07-27T14:49:24

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1607107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585601] Re: Deleting a live-migrated instance causes its fixed IP to remain reserved

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/325361
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f9e9b30b93443c986d3bee7f8b4140b82050418b
Submitter: Jenkins
Branch:master

commit f9e9b30b93443c986d3bee7f8b4140b82050418b
Author: Artom Lifshitz 
Date:   Fri Jun 3 15:08:08 2016 +

Call release_dhcp via RPC to ensure correct host

When deleting an instance in a nova-network environment, the network
manager calls release_dhcp() on the local host. The linux_net driver
then executes dhcp_release, a binary that comes with dnsmasq that
releases a DHCP lease on the local host. Upon lease release, dnsmasq
calls its dhcp-script, nova-dhcpbridge. The latter calls
release_fixed_ip() and the instance's fixed IP is returned to the
pool. This is fine if an instance has never been live-migrated.

If an instance has been live-migrated, the dnsmasq on its new
host fails with 'unknown lease' because it's not the same dnsmasq that
originally handed out the lease. Having failed, dnsmasq doesn't call
nova-dhcpbridge and release_fixed_ip() is never called. The fixed IP
is not returned to the pool and a new instance cannot be booted with
that IP.

This patches adds a release_dhcp RPC call that calls release_dhcp on
the instance's "original" host, thus ensuring that the correct dnsmasq
handles the lease release and that nova-dhcpbridge and
release_fixed_ip() are called.

Change-Id: I0eec8c995dd8cff50c37af83018697fc686fe727
Closes-bug: 1585601


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585601

Title:
  Deleting a live-migrated instance causes its fixed IP to remain
  reserved

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When using nova-network, an attempt to boot an instance with the fixed
  IP of an instance that has been live-migrated and then deleted will
  fail with 'Fixed IP address is already in use on instance.'

  To reproduce:

  1. Boot an instance
  2. Live-migrate it
  3. Delete it
  4. Boot a new instance with the same fixed IP.

  This has been reported against Icehouse and has been reproduced in
  master, and is therefore presumably present in all versions in-
  between.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607061] [NEW] [RFE] Bulk LBaaS pool member operations

2016-07-27 Thread Matt Greene
Public bug reported:

[Use-cases]
- Configuration Management
Perform administrative operations on a collection of members.

[Limitations]
Members must currently be created/modified/deleted one at a time.  This can be 
accomplished programmatically via neutron-api but is cumbersome through the CLI.

[Enhancement]
Embellish neutron-api (CLI) and GUI to support management of a group of members 
via one operation.  Pitching a few ideas on how to do this.

- Extend existing API
Add optional filter parameter to neutron-api to find and modify any member 
caught by the filter.

- Create new API
Create new lbaas-members-* commands that make it clear we're changing a
collection, but leave the lbaas-pool-* commands alone, since those organize
the collections.

- Base inheritance
Create new lbaas-member-base-* commands to define default settings, then
extend lbaas-member-* so a member can reference a base.  Updating the base
would update all members that have not overridden the default.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas rfe

** Summary changed:

- [RFE] Bulk pool member operations
+ [RFE] Bulk LBaaS pool member operations

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607061

Title:
  [RFE] Bulk LBaaS pool member operations

Status in neutron:
  New

Bug description:
  [Use-cases]
  - Configuration Management
  Perform administrative operations on a collection of members.

  [Limitations]
  Members must currently be created/modified/deleted one at a time.  This can 
be accomplished programmatically via neutron-api but is cumbersome through the 
CLI.

  [Enhancement]
  Embellish neutron-api (CLI) and GUI to support management of a group of 
members via one operation.  Pitching a few ideas on how to do this.

  - Extend existing API
  Add optional filter parameter to neutron-api to find and modify any member 
caught by the filter.

  - Create new API
  Create new lbaas-members-* commands that make it clear we're changing a
  collection, but leave the lbaas-pool-* commands alone, since those
  organize the collections.

  - Base inheritance
  Create new lbaas-member-base-* commands to define default settings, then
  extend lbaas-member-* so a member can reference a base.  Updating the
  base would update all members that have not overridden the default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605670] Re: ngdetails path doesnt allow slash

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/346074
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=8f77a685471af26f6a1d04b6fde3f2fb8eb0ae2a
Submitter: Jenkins
Branch:master

commit 8f77a685471af26f6a1d04b6fde3f2fb8eb0ae2a
Author: Tyr Johanson 
Date:   Fri Jul 22 09:06:59 2016 -0600

Allow ngdetails path to contain '/'

When the "path" portion of a route contains a '/', prevent Angular
from truncating it. This is common if the resource has a complex
key and wants to use '/' as a separator. For example:
/project/ngdns/zone//recordset/

Change-Id: I7b4fe1ba2b2f657ccee91de50cc9d5267544b51e
Closes-Bug: 1605670


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1605670

Title:
  ngdetails path doesnt allow slash

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When using the generic resource details view, if the resource path is
  a complex key such as /ngdns/zone//
  the "path" portion of the route is truncated by angular.

  The fix is to change the path registration in core.module.js to
  /project/ngdetails/:type/:path*

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1605670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607052] [NEW] [RFE] Per-server port for Health Monitoring

2016-07-27 Thread Matt Greene
Public bug reported:

[Use-cases]
- Hierarchical health monitoring
The operator wants to monitor server health for the pool separately from 
application health.

- Micro-service deployment
An application is deployed as docker containers, which consume an ephemeral 
port.

[Limitations]
LBaaSv2 health monitor is attached to the pool, so is restricted to monitoring 
the single port set for the pool.  Certain operators wish to monitor the health 
of the server instance separately, but in addition to the health of the 
service/application.  This model also does not support deployment of a service 
via docker container, in which each container will use an ephemeral port.

[Enhancement]
Add an optional application port field in the member object.  Default is 
.  Enhance health monitor creation with an optional parameter to use the 
service or application port.  Default is  in the pool object.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1607052

Title:
  [RFE] Per-server port for Health Monitoring

Status in neutron:
  New

Bug description:
  [Use-cases]
  - Hierarchical health monitoring
  The operator wants to monitor server health for the pool separately from 
application health.

  - Micro-service deployment
  An application is deployed as docker containers, which consume an ephemeral 
port.

  [Limitations]
  LBaaSv2 health monitor is attached to the pool, so is restricted to 
monitoring the single port set for the pool.  Certain operators wish to monitor 
the health of the server instance separately, but in addition to the health of 
the service/application.  This model also does not support deployment of a 
service via docker container, in which each container will use an ephemeral 
port.

  [Enhancement]
  Add an optional application port field in the member object.  Default is 
.  Enhance health monitor creation with an optional parameter to use the 
service or application port.  Default is  in the pool object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1607052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607039] [NEW] KVS _update_user_token_list can be more efficient

2016-07-27 Thread Billy Olsen
Public bug reported:

Maintaining the user token list and the revocation list in the memcached
persistence backend (kvs) is inefficient for larger amounts of tokens
due to the use of a linear algorithm for token list maintenance.

Since the list is unordered, each token within the list must be checked
first to ensure whether it has expired or not, secondly to determine if
it has been revoked or not. By changing to an ordered list and using a
binary search, expired tokens can be found with less computational
overhead.

The current algorithm means that the insertion of a new token into the
list is O(n) since token expiration validity is done when the list is
updated. By using an ordered list, the insertion and validation of the
expiration can be reduced to O(log n).
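
A rough sketch of the ordered-list idea (illustrative names only, not the
actual keystone KVS code), keeping the list sorted by expiry so the expired
prefix can be located with a binary search:

import bisect

def insert_token(token_list, expires_at, token_id):
    # token_list stays sorted by (expires_at, token_id); the insertion
    # point is found with a binary search instead of scanning every entry.
    bisect.insort(token_list, (expires_at, token_id))

def prune_expired(token_list, now):
    # Everything that expired before `now` sits at the front of the
    # sorted list, so a single bisect finds the cut-off point.
    cut = bisect.bisect_left(token_list, (now, ''))
    del token_list[:cut]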

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: sts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607039

Title:
  KVS _update_user_token_list can be more efficient

Status in OpenStack Identity (keystone):
  New

Bug description:
  Maintaining the user token list and the revocation list in the
  memcached persistence backend (kvs) is inefficient for larger amounts
  of tokens due to the use of a linear algorithm for token list
  maintenance.

  Since the list is unordered, each token within the list must be
  checked first to ensure whether it has expired or not, secondly to
  determine if it has been revoked or not. By changing to an ordered
  list and using a binary search, expired tokens can be found with less
  computational overhead.

  The current algorithm means that the insertion of a new token into the
  list is O(n) since token expiration validity is done when the list is
  updated. By using an ordered list, the insertion and validation of the
  expiration can be reduced to O(log n).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1607039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/347408
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=b2e3ed90737b7c9ef8119758db690e2822b7af9b
Submitter: Jenkins
Branch:master

commit b2e3ed90737b7c9ef8119758db690e2822b7af9b
Author: dineshbhor 
Date:   Tue Jul 26 19:29:23 2016 +0530

Replace OpenStack LLC with OpenStack Foundation

Change-Id: Ifee4e6eef37fe00019dd3adfaef8bb99a7970944
Closes-Bug: #1214176


** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1214176

Title:
  Fix copyright headers to be compliant with Foundation policies

Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in devstack:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Murano:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in PBR:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-manilaclient:
  In Progress
Status in python-neutronclient:
  Fix Released
Status in python-troveclient:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  Correct the copyright headers to be consistent with the policies
  outlined by the OpenStack Foundation at http://www.openstack.org/brand
  /openstack-trademark-policy/

  Remove references to OpenStack LLC, replace with OpenStack Foundation

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606999] [NEW] reporting messages can slow down operations greatly

2016-07-27 Thread Scott Moser
Public bug reported:

When looking into bug 1604962 I investigated cloud-init and curtin logs and 
looked at timestamps.
In Adam's scenario shown at 
  https://bugs.launchpad.net/maas/+bug/1604962/comments/16
posting a message back to maas was taking up to 7 seconds.

cloud-init and curtin during an installation can be expected to post
dozens of messages.  If each of those took just 5 seconds, a group of
only 12 would make an installation take 60 seconds longer than it needed
to.

This can be considered a "client" problem (curtin and cloud-init) in
some respects. These clients could definitely background their posting
of data so that they can go on.  However, if they do that, at some point
they probably should verify that all messages were correctly posted, so
it's possible that backgrounding the posting wouldn't actually help.
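
A rough sketch of what backgrounding could look like on the client side
(purely illustrative; neither curtin nor cloud-init works this way today):

import queue
import threading

class BackgroundReporter(object):
    def __init__(self, post_func):
        # post_func is whatever performs the HTTP POST back to MAAS.
        self._post = post_func
        self._queue = queue.Queue()
        self._failures = []
        self._worker = threading.Thread(target=self._run)
        self._worker.daemon = True
        self._worker.start()

    def report(self, event):
        # Non-blocking from the installer's point of view.
        self._queue.put(event)

    def _run(self):
        while True:
            event = self._queue.get()
            if event is None:
                return
            try:
                self._post(event)
            except Exception as exc:
                self._failures.append((event, exc))

    def flush(self):
        # Called once at the end of the install: wait for the worker to
        # drain the queue, then return anything that failed to post.
        self._queue.put(None)
        self._worker.join()
        return self._failures

Whether that actually helps depends on the final flush being cheap compared
to posting every message synchronously, which is exactly the caveat above.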

Adam's system, I think, was an "orange box", with 10 clients all
installing.  That does not seem like enough load to account for 5+
second posts.

Related bugs:
  * bug 1604962: node set to "failed deployment" for no visible reason

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Affects: curtin
 Importance: Undecided
 Status: New

** Affects: maas
 Importance: Undecided
 Status: New

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Also affects: curtin
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1606999

Title:
  reporting messages can slow down operations greatly

Status in cloud-init:
  New
Status in curtin:
  New
Status in MAAS:
  New

Bug description:
  When looking into bug 1604962 I investigated cloud-init and curtin logs
  and looked at timestamps.
  In Adam's scenario shown at
  https://bugs.launchpad.net/maas/+bug/1604962/comments/16
  posting a message back to maas was taking up to 7 seconds.

  cloud-init and curtin during an installation can be expected to post
  dozens of messages.  If each of those took just 5 seconds, a group of
  only 12 would make an installation take 60 seconds longer than it
  needed to.

  This can be considered a "client" problem (curtin and cloud-init) in
  some respects. These clients could definitely background their posting
  of data so that they can go on.  However, if they do that, at some
  point they probably should verify that all messages were correctly
  posted, so it's possible that backgrounding the posting wouldn't
  actually help.

  Adam's system, I think, was an "orange box", with 10 clients all
  installing.  That does not seem like enough load to account for 5+
  second posts.

  Related bugs:
* bug 1604962: node set to "failed deployment" for no visible reason

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1606999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606426] Re: user list is much slower in mitaka and newton

2016-07-27 Thread Steve Martinelli
** Changed in: keystone
   Status: In Progress => Invalid

** Changed in: keystone
Milestone: newton-3 => None

** Changed in: keystone
 Assignee: Boris Bobrov (bbobrov) => (unassigned)

** Changed in: keystone
   Importance: Critical => Undecided

** Changed in: keystone/mitaka
 Assignee: Steve Martinelli (stevemar) => Ron De Rose (ronald-de-rose)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1606426

Title:
  user list is much slower in mitaka and newton

Status in OpenStack Identity (keystone):
  Invalid
Status in OpenStack Identity (keystone) mitaka series:
  In Progress

Bug description:
  With Kilo doing a user-list on V2 or V3 would take approx. 2-4 seconds

  In Mitaka it takes 19-22 seconds. This is a significant slow down.

  We have ~9,000 users

  We also changed from going under eventlet to moving to apache wsgi

  We have ~10,000 projects and that API (project-list) hasn't slowed down,
  so I think this is something specific to the user-list API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1606426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605336] Re: Neutron loadbalancer VIP port fails to create

2016-07-27 Thread Tim Simmons
** Changed in: designate
   Status: New => In Progress

** Changed in: designate
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605336

Title:
  Neutron loadbalancer VIP port fails to create

Status in Designate:
  Invalid
Status in OpenStack Neutron LBaaS Integration:
  New
Status in neutron:
  In Progress

Bug description:
  When trying to create a Loadbalancer (v1) VIP with the command:

  neutron lb-vip-create --address 10.97.0.254 --name vip-97 \
  --protocol-port 22 --protocol TCP --subnet-id subnet-97 hapool-97

  Where subnet-97 is a subnet belonging to tenant-97, which has
  'dns_domain' set to an existing domain. The domain works - creating an
  instance + floating IP on that will register the set dns_name in the
  domain.

  However, the lb-vip-create will fail with

  Request Failed: internal server error while processing your request.
  Neutron server returns request_ids: 
['req-ee6a68f1-ed8a-4f22-9dea-646fb97ff795']

  and the log will say:

  ==> /var/log/neutron/neutron-server.log <==
  2016-07-21 18:08:54.940 7926 INFO neutron.wsgi 
[req-cc53af04-89fc-482c-8a4f-0a3f5cc2e614 4b0e25c70d2b4ad6ba4c50250f2f0b0b 
04ee0e71babe4fd7aa16c3f64a8fca89 - - -] 10.0.4.1 - - [21/Jul/2016 18:08:54] 
"GET /v2.0/lb/pools.json?fields=id=hapool-97 HTTP/1.1" 200 257 0.070421
  2016-07-21 18:08:55.027 7926 INFO neutron.wsgi 
[req-e95bbb13-c38e-4cdf-afc5-9bba3351b8ff 4b0e25c70d2b4ad6ba4c50250f2f0b0b 
04ee0e71babe4fd7aa16c3f64a8fca89 - - -] 10.0.4.1 - - [21/Jul/2016 18:08:55] 
"GET /v2.0/subnets.json?fields=id=subnet-97 HTTP/1.1" 200 259 0.081731
  2016-07-21 18:08:55.037 7926 INFO neutron.quota 
[req-ee6a68f1-ed8a-4f22-9dea-646fb97ff795 4b0e25c70d2b4ad6ba4c50250f2f0b0b 
04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Loaded quota_driver: 
.
  2016-07-21 18:08:55.494 7926 INFO neutron.plugins.ml2.managers 
[req-ee6a68f1-ed8a-4f22-9dea-646fb97ff795 4b0e25c70d2b4ad6ba4c50250f2f0b0b 
04ee0e71babe4fd7aa16c3f64a8fca89 - - -] Extension driver 'dns' failed in 
process_create_port
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource 
[req-ee6a68f1-ed8a-4f22-9dea-646fb97ff795 4b0e25c70d2b4ad6ba4c50250f2f0b0b 
04ee0e71babe4fd7aa16c3f64a8fca89 - - -] create failed
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource Traceback 
(most recent call last):
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 84, in 
resource
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 410, in create
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource 
ectxt.value = e.inner_exc
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 521, in _create
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource obj = 
do_create(body)
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 503, in 
do_create
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource 
request.context, reservation.reservation_id)
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in __exit__
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-07-21 18:08:55.719 7926 ERROR neutron.api.v2.resource 

[Yahoo-eng-team] [Bug 1596602] Re: Instance Snapshot failure: "Unable to set 'kernel_id' to 'None'. Reason: None is not of type u'string'"

2016-07-27 Thread Matt Riedemann
So it looks like the problem is that we don't deploy this file:

https://raw.githubusercontent.com/openstack/glance/master/etc/schema-
image.json

To /etc/glance/ on our glance-api (controller) nodes. That defines the
schema for kernel_id and ramdisk_id (among other things). It's deployed
with devstack, which is why we don't see issues in the upstream CI:

http://logs.openstack.org/77/335277/1/check/gate-tempest-dsvm-
full/de259ec/logs/etc/glance/schema-image.json.txt.gz

The schema we have in our cloud is this:

http://paste.openstack.org/show/542625/

With:

   "additionalProperties":{
  "type":"string"
   },

So if you pass kernel_id and ramdisk_id, they must not be None, but
that's what nova sends:

https://github.com/openstack/nova/blob/3f8076acdc7756b8a5f0f16d4885a47cb001483e/nova/image/glance.py#L844

Basically the glance API, or the image schema at least, is completely
configurable and nova isn't using the schema from glance to tell what
properties it can and can't set in the image body on the request to
glance.

So this is now a bug against nova: it should use the schema from glance.
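
As a quick illustration, here is a minimal sketch (using the jsonschema
library directly, not nova's code) of why the deployed schema rejects the
request nova builds; the schema snippet mirrors the paste above:

    # Minimal sketch: a schema whose additionalProperties must be strings
    # rejects the None values nova sends for kernel_id/ramdisk_id.
    import jsonschema

    schema = {
        "properties": {"name": {"type": "string"}},
        "additionalProperties": {"type": "string"},
    }
    image_meta = {"name": "snapshotimage", "kernel_id": None, "ramdisk_id": None}

    try:
        jsonschema.validate(image_meta, schema)
    except jsonschema.ValidationError as exc:
        print(exc.message)  # -> None is not of type 'string'

A nova-side fix would fetch the image schema from glance first and only send
properties (and value types) that the schema actually allows.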


** Changed in: python-glanceclient
   Status: Confirmed => Fix Released

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596602

Title:
  Instance Snapshot failure: "Unable to set 'kernel_id' to 'None'.
  Reason: None is not of type u'string'"

Status in OpenStack Compute (nova):
  Triaged
Status in python-glanceclient:
  Fix Released

Bug description:
  Description
  ===
  Attempted to take a snapshot of a suspended server through the CLI, but the 
command failed. (Nova log stack trace appended below)

  Steps to reproduce
  ==
  1. Created a server instance:
 $ nova boot --image  --flavor m1.tiny snapshotvm
  2. Suspended the server:
 $ openstack server suspend 
  3. Attempt to create server snapshot:
 $ nova image-create snapshotvm snapshotimage --poll

  Expected result
  ===
  Expected to have a snapshot of my instance created in the image list.

  Actual result
  =
  Received following error output from the image create command:

  # nova image-create snapshotvm snapshotimage --poll

  Server snapshotting... 25% complete
  ERROR (NotFound): Image not found. (HTTP 404) (Request-ID: 
req-4670eba3-a0d5-4814-b0a8-4aba37a1dd3a)

  Environment
  ===
  1. Running from master level of Openstack

  2. Using KVM virtualization on Ubuntu 14.04:
  # kvm --version
  QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.24), Copyright (c) 
2003-2008 Fabrice Bellard

  2. Which storage type did you use?
 LVM

  3. Which networking type did you use?
 Neutron


  2016-06-27 15:04:21.361 10781 INFO nova.compute.manager 
[req-f393be05-96c2-4d52-bfc1-0c5b644c553e d672d690e856437196a748ebc85abb41 
b9b13412e4fd4fd6a6016368b720309e - - -] [instance: 
ab71489a-0716-404b-808a-165f2a85af74] Successfully reverted task state from 
image_uploading on failure for instance.
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
[req-f393be05-96c2-4d52-bfc1-0c5b644c553e d672d690e856437196a748ebc85abb41 
b9b13412e4fd4fd6a6016368b720309e - - -] Exception during message handling
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 133, in _process_incoming
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 150, in dispatch
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 121, in _do_dispatch
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 104, in wrapped
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server payload)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
  2016-06-27 15:04:21.364 

[Yahoo-eng-team] [Bug 1606995] [NEW] Nova fails to provision machine but can pull existing machines

2016-07-27 Thread Joshua Houle
Public bug reported:

After switching from Keystone V2.0 to Keystone V3 we can no longer
provision machines; we can still see existing machines in Horizon and
log in to Horizon.
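
For comparison with the configuration pasted below, a typical Keystone v3
[keystone_authtoken] section looks roughly like the following sketch (option
names are keystonemiddleware's; the endpoint and credential values here are
placeholders, not taken from this report):

    [keystone_authtoken]
    auth_uri = http://192.168.0.2:5000
    auth_url = http://192.168.0.2:35357
    auth_type = password
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = nova
    password = NOVA_PASS
    memcached_servers = 192.168.0.2:11211

The exact root cause here may differ; this is just the usual shape of a
v3-ready auth_token section.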

Nova config for Keystone:

[keystone_authtoken]

#
# From keystonemiddleware.auth_token
#

# Complete public Identity API endpoint. (string value)
#auth_uri = 
auth_uri = http://192.168.0.2:5000/

# API version of the admin Identity API endpoint. (string value)
#auth_version = 

# Do not handle authorization requests within the middleware, but delegate the
# authorization decision to downstream WSGI components. (boolean value)
#delay_auth_decision = false

# Request timeout value for communicating with Identity API server. (integer
# value)
#http_connect_timeout = 

# How many times are we trying to reconnect when communicating with Identity
# API Server. (integer value)
#http_request_max_retries = 3

# Env key for the swift cache. (string value)
#cache = 

# Required if identity server requires client certificate (string value)
#certfile = 

# Required if identity server requires client certificate (string value)
#keyfile = 

# A PEM encoded Certificate Authority to use when verifying HTTPs connections.
# Defaults to system CAs. (string value)
#cafile = 

# Verify HTTPS connections. (boolean value)
#insecure = false

# The region in which the identity server can be found. (string value)
#region_name = 

# Directory used to cache files related to PKI tokens. (string value)
#signing_dir = 
signing_dir = /tmp/keystone-signing-nova

# Optionally specify a list of memcached server(s) to use for caching. If left
# undefined, tokens will instead be cached in-process. (list value)
# Deprecated group/name - [DEFAULT]/memcache_servers
#memcached_servers = 

# In order to prevent excessive effort spent validating tokens, the middleware
# caches previously-seen tokens for a configurable duration (in seconds). Set
# to -1 to disable caching completely. (integer value)
#token_cache_time = 300

# Determines the frequency at which the list of revoked tokens is retrieved
# from the Identity service (in seconds). A high number of revocation events
# combined with a low cache duration may significantly reduce performance.
# (integer value)
#revocation_cache_time = 10

# (Optional) If defined, indicate whether token data should be authenticated or
# authenticated and encrypted. Acceptable values are MAC or ENCRYPT.  If MAC,
# token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data
# is encrypted and authenticated in the cache. If the value is not one of these
# options or empty, auth_token will raise an exception on initialization.
# (string value)
#memcache_security_strategy = 

# (Optional, mandatory if memcache_security_strategy is defined) This string is
# used for key derivation. (string value)
#memcache_secret_key = 

# (Optional) Number of seconds memcached server is considered dead before it is
# tried again. (integer value)
#memcache_pool_dead_retry = 300

# (Optional) Maximum total number of open connections to every memcached
# server. (integer value)
#memcache_pool_maxsize = 10

# (Optional) Socket timeout in seconds for communicating with a memcached
# server. (integer value)
#memcache_pool_socket_timeout = 3

# (Optional) Number of seconds a connection to memcached is held unused in the
# pool before it is closed. (integer value)
#memcache_pool_unused_timeout = 60

# (Optional) Number of seconds that an operation will wait to get a memcached
# client connection from the pool. (integer value)
#memcache_pool_conn_get_timeout = 10

# (Optional) Use the advanced (eventlet safe) memcached client pool. The
# advanced pool will only work under python 2.x. (boolean value)
#memcache_use_advanced_pool = false

# (Optional) Indicate whether to set the X-Service-Catalog header. If False,
# middleware will not ask for service catalog on token validation and will not
# set the X-Service-Catalog header. (boolean value)
#include_service_catalog = true

# Used to control the use and type of token binding. Can be set to: "disabled"
# to not check token binding. "permissive" (default) to validate binding
# information if the bind type is of a form known to the server and ignore it
# if not. "strict" like "permissive" but if the bind type is unknown the token
# will be rejected. "required" any form of token binding is needed to be
# allowed. Finally the name of a binding method that must be present in tokens.
# (string value)
#enforce_token_bind = permissive

# If true, the revocation list will be checked for cached tokens. This requires
# that PKI tokens are configured on the identity server. (boolean value)
#check_revocations_for_cached = false

# Hash algorithms to use for hashing PKI tokens. This may be a single algorithm
# or multiple. The algorithms are those supported by Python standard
# hashlib.new(). The hashes will be tried in the order given, so put the
# preferred one first for performance. The result of the first hash will 

[Yahoo-eng-team] [Bug 1606988] [NEW] saving metadata of array type error

2016-07-27 Thread Ryan Peters
Public bug reported:

After saving metadata of array type on an image, if you then edit the image,
remove that metadata, and re-save, the metadata is not removed and remains on
the image.

To reproduce:
If you don't have metadata with an array type, add the json in the file 
attached at:
 - /admin/metadata_defs > Import Namespace

Then go to:
 - /admin/images and Update Metadata on one of the images.

Add one of the Storage Types (e.g. SAN_storage) and click Save. Notice the
metadata saved to the image on the image detail page.

After it saves, click Update Metadata for the same image. Remove the
metadata you just added in the previous step and click Save. After
saving notice that metadata is still on the image and was not deleted.
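
The attached metadata_def.json is not reproduced here; an illustrative
namespace with an array-type property (a sketch in the glance metadefs JSON
format, with names invented for the example) would look like:

    {
      "namespace": "OS::Example::StorageTypes",
      "display_name": "Storage Types (example)",
      "description": "Example namespace carrying an array-typed property.",
      "resource_type_associations": [{"name": "OS::Glance::Image"}],
      "properties": {
        "SAN_storage": {
          "title": "SAN storage",
          "description": "Array-typed property used to reproduce the bug.",
          "type": "array",
          "items": {"type": "string", "enum": ["iSCSI", "FC"]}
        }
      }
    }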

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "metadata definition with array type"
   
https://bugs.launchpad.net/bugs/1606988/+attachment/4708195/+files/metadata_def.json

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606988

Title:
  saving metadata of array type error

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After saving metadata of array type on an image, if you then edit the
  image, remove that metadata, and re-save, the metadata is not removed and
  remains on the image.

  To reproduce:
  If you don't have metadata with an array type, add the json in the file 
attached at:
   - /admin/metadata_defs > Import Namespace

  Then go to:
   - /admin/images and Update Metadata on one of the images.

  Add one of the Storage Types (e.g. SAN_storage) and click Save. Notice the
  metadata saved to the image on the image detail page.

  After it saves, click Update Metadata for the same image. Remove the
  metadata you just added in the previous step and click Save. After
  saving notice that metadata is still on the image and was not deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605787] Re: How to code front-end Horizon Dashboard

2016-07-27 Thread Eddie Ramirez
I don't think this is the best place to ask for help, but a good starting
point could be reading the docs
http://docs.openstack.org/developer/horizon/topics/tutorial.html and
http://docs.openstack.org/developer/horizon/topics/customizing.html,
and watching the videos https://www.youtube.com/watch?v=0xpogjXCUr0 and
https://www.youtube.com/watch?v=YOgvLshstAs.

And don't forget the IRC channel #openstack-horizon
https://wiki.openstack.org/wiki/IRC

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1605787

Title:
  How to code front-end Horizon Dashboard

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I have a problem with the front-end of the Dashboard. Please help me.
  I want to code something like the image (link: http://imgur.com/a/i3DcU ). Please
help me step by step, because I am a newbie :)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1605787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598100] Re: Adding FDB population agent extension

2016-07-27 Thread Akihiro Motoki
We need to add some content to the networking guide, so openstack-
manuals needs to be added to the affected projects.

The fix is under review: https://review.openstack.org/#/c/345384/
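
For reference, enabling the extension is done in the L2 agent configuration;
roughly like the sketch below (option names as used by the fdb_population
extension; treat this as subject to the guide update under review):

    [agent]
    extensions = fdb

    [FDB]
    shared_physical_device_mappings = physnet1:eth0,physnet2:eth1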

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
   Status: New => In Progress

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598100

Title:
  Adding FDB population agent extension

Status in neutron:
  Invalid
Status in openstack-manuals:
  In Progress

Bug description:
  https://review.openstack.org/320562
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 2c8f61b816bf531a17a7b45d35a5388e8a2f607a
  Author: Edan David 
  Date:   Tue May 24 11:54:02 2016 -0400

  Adding FDB population agent extension
  
  The purpose of this extension is updating the FDB table upon changes of
  normal port instances thus enabling communication between direct port
  SR-IOV instances and normal port instances.
  Additionally enabling communication to direct port
  instances with floating ips.
  Support for OVS agent and linux bridge.
  
  DocImpact
  Change-Id: I61a8aacb1b21b2a6e452389633d7dcccf9964fea
  Closes-Bug: #1492228
  Closes-Bug: #1527991

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599377] Re: nova should raise http 409 instead of 500 when get a error instance diagnostics

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/338034
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=16b092d9cf3e7a641940543bbf5dd4937ac643c9
Submitter: Jenkins
Branch:master

commit 16b092d9cf3e7a641940543bbf5dd4937ac643c9
Author: Eli Qiao 
Date:   Wed Jul 6 14:35:42 2016 +0800

API: catch InstanceNotReady exception.

When retrieving server diagnostics, nova should raise 409 instead of 500
in case the instance has no host yet.

Closes-Bug: #1599377
Change-Id: I3748978d9faf8adc8ca7d1d1d3f02128aa22cf3f
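
A minimal sketch (not the merged patch itself) of the kind of mapping the
commit describes, using webob and nova's exception module; the controller and
method names here are placeholders, only the exception-to-409 mapping is the
point:

    import webob.exc

    from nova import exception

    def server_diagnostics(self, req, server_id):
        context = req.environ['nova.context']
        instance = self._get_instance(context, server_id)
        try:
            return self.compute_api.get_diagnostics(context, instance)
        except exception.InstanceNotReady as e:
            # Turn "instance has no host yet" into 409 Conflict, not 500.
            raise webob.exc.HTTPConflict(explanation=e.format_message())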


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599377

Title:
  nova should raise http 409 instead of 500 when get a error instance
  diagnostics

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When retrieving server diagnostics,  nova should raise 409 instead of
  500 in case the instance has no host yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597000] Re: Directives <hz-dynamic-table> and <transfer-table> don't work nicely together

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/326641
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=617042b9cb516ba8c6b502990018adea8e3dd2eb
Submitter: Jenkins
Branch:master

commit 617042b9cb516ba8c6b502990018adea8e3dd2eb
Author: Timur Sufiev 
Date:   Tue Jun 7 19:42:37 2016 +0300

Allow wiring of <hz-dynamic-table> into <transfer-table>

The framework change consists of 2 parts:

* Provide filterAvailable filter to be used inside the 'items' value of
  <hz-dynamic-table> instead of the 'ng-if' directive which was used before
  in manually written table layout (no longer possible with dynamic
  tables). This filter solves the task of hiding the available values
  once they become allocated.
* Provide 'allocateItemAction' and 'deallocateItemAction' actions on
  transfer-table controller which are compatible with 'itemActions'
  attribute of <hz-dynamic-table>.

Keypairs tab in Angular Launch Instance wizard is rewritten to use the
new approach.

Also a nasty bug within <hz-dynamic-table> was fixed: `scope.items`
value was set in hz-dynamic-table's post-linking function before,
which lead to `undefined` value arriving into st-table directive,
because st-table was linked before hz-dynamic-table as its child
(that's how postLink function works). Directive st-table under some
circumstances was wrapping `undefined` into `[]`, causing various
issues with table row equal to `undefined`. The solution to that
problem was to extract setting `scope.items = []` to a pre-linking
function, so by the time st-table is linked, there is already an empty
array under scope's 'items' property.

Closes-Bug: #1597000
Change-Id: Ia6d707d793cefd75d869b061a313390110f620cf


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597000

Title:
  Directives <hz-dynamic-table> and <transfer-table> don't work nicely
  together

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  New directive <hz-dynamic-table> provides an elegant way to reduce the
  amount of boilerplate html templates one had to provide recently to
  render tables. Unfortunately, <transfer-table> doesn't work with it
  because of its internal structure, and it's necessary to use the old
  verbose markup inside its <allocated> and <available> sibling tags.
  This has to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1532098] Re: "nova list-extensions" not showing Summary for all extensions

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/286130
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=50c4033ac29268dc3f4be9147bf23eef11cc6546
Submitter: Jenkins
Branch:master

commit 50c4033ac29268dc3f4be9147bf23eef11cc6546
Author: Pushkar Umaranikar 
Date:   Mon Feb 29 15:57:28 2016 +

"nova list-extensions" not showing summary for all

Change nova extensions API to show summary
description for V2.1 API.

Change-Id: Iefd087baddd65a52a20f1b98ae3efe22b3c5085c
Closes-Bug: #1532098
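
For context, the '??' entries below come from extensions whose summary was
not picked up; the summary is simply the extension class docstring, e.g. (a
plain Python sketch, not the actual patch):

    import inspect

    class Extended_ips(object):
        """Adds type parameter to the ip list."""

    def extension_summary(ext_cls):
        doc = inspect.getdoc(ext_cls)  # cleaned docstring, or None if missing
        return doc.splitlines()[0] if doc else "??"

    print(extension_summary(Extended_ips))  # Adds type parameter to the ip list.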


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532098

Title:
  "nova list-extensions" not showing Summary for all extensions

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In OpenStack Liberty installed with RDO Packstack, the output of "nova
  list-extensions" is showing question marks as summary for many
  extensions, even if those extensions have properly defined description
  in the docstring of their code.

  For me it looks like the logic is that if the extension exists only in
  /usr/lib/python2.7/site-
  packages/nova/api/openstack/compute/legacy_v2/contrib, then its
  description isn't visible, for example /usr/lib/python2.7/site-
  packages/nova/api/openstack/compute/legacy_v2/contrib/extended_ips.py
  is such a case. But the docstring is in fact present in that file:

  class Extended_ips(extensions.ExtensionDescriptor):
  """Adds type parameter to the ip list."""

  
  But for extensions whose implementation exists in 
/usr/lib/python2.7/site-packages/nova/api/openstack/compute, the description is 
shown ok.

  
  [root@rdo2controller ~(keystone_admin)]# nova list-extensions
  
++---+-+--+
  | Name   | Summary
   | Alias   | Updated  
|
  
++---+-+--+
  | Multinic   | Multiple network support.  
   | NMN | 2014-12-03T00:00:00Z 
|
  | DiskConfig | Disk Management Extension. 
   | OS-DCF  | 2014-12-03T00:00:00Z 
|
  | ExtendedAvailabilityZone   | Extended Availability Zone support.
   | OS-EXT-AZ   | 2014-12-03T00:00:00Z 
|
  | ImageSize  | Adds image size to image listings. 
   | OS-EXT-IMG-SIZE | 2014-12-03T00:00:00Z 
|
  | ExtendedIps| ?? 
   | OS-EXT-IPS  | 2014-12-03T00:00:00Z 
|
  | ExtendedIpsMac | ?? 
   | OS-EXT-IPS-MAC  | 2014-12-03T00:00:00Z 
|
  | ExtendedServerAttributes   | Extended Server Attributes support.
   | OS-EXT-SRV-ATTR | 2014-12-03T00:00:00Z 
|
  | ExtendedStatus | ?? 
   | OS-EXT-STS  | 2014-12-03T00:00:00Z 
|
  | ExtendedVIFNet | ?? 
   | OS-EXT-VIF-NET  | 2014-12-03T00:00:00Z 
|
  | FlavorDisabled | ?? 
   | OS-FLV-DISABLED | 2014-12-03T00:00:00Z 
|
  | FlavorExtraData| ?? 
   | OS-FLV-EXT-DATA | 2014-12-03T00:00:00Z 
|
  | SchedulerHints | Pass arbitrary key/value pairs to the 
scheduler.  | OS-SCH-HNT  | 
2014-12-03T00:00:00Z |
  | ServerUsage| Adds launched_at and terminated_at on Servers. 
   | OS-SRV-USG  | 2014-12-03T00:00:00Z 
|
  | AccessIPs  | Access IPs support.
   | os-access-ips   | 2014-12-03T00:00:00Z 
|
  | AdminActions   | Enable admin-only server actions...
   | os-admin-actions| 2014-12-03T00:00:00Z 
|
  | AdminPassword  | Admin password management support. 
   | os-admin-password   | 2014-12-03T00:00:00Z 

[Yahoo-eng-team] [Bug 1606941] [NEW] nova hypervisor-show is broken when hypervisor_type is ironic

2016-07-27 Thread Moshe Levi
Public bug reported:

OpenStack master branch, configured to use Ironic.

running 
stack@r-dcs88:~/ironic-inspector$ nova hypervisor-show 
98f78cb6-a157-4580-bbc7-7b0f9ea03245
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-0820f738-e07b-47f7-8f11-1399554e22d2)

the nova-api log show

2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions Traceback (most recent call last):
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 338, in wrapped
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions return f(*args, **kwargs)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 132, in detail
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions return self._detail(req)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 148, in _detail
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions True, req) for hyp in compute_nodes]
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 72, in _view_hypervisor
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions hyp_dict['cpu_info'] = jsonutils.loads(hypervisor.cpu_info)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/usr/local/lib/python2.7/dist-packages/oslo_serialization/jsonutils.py", line 235, in loads
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions return json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions return _default_decoder.decode(s)
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions raise ValueError("No JSON object could be decoded")
2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions ValueError: No JSON object could be decoded
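
A minimal sketch (an assumption about the direction of a fix, not the actual
patch) of guarding the decode so an ironic node's empty cpu_info doesn't blow
up the API:

    from oslo_serialization import jsonutils

    def decode_cpu_info(raw):
        # Ironic compute nodes can report cpu_info as '' (not valid JSON).
        if not raw:
            return {}
        try:
            return jsonutils.loads(raw)
        except ValueError:
            return {}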

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606941

Title:
  nova hypervisor-show is broken when hypervisor_type is ironic

Status in OpenStack Compute (nova):
  New

Bug description:
  OpenStack master branch, configured to use Ironic.

  running 
  stack@r-dcs88:~/ironic-inspector$ nova hypervisor-show 
98f78cb6-a157-4580-bbc7-7b0f9ea03245
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-0820f738-e07b-47f7-8f11-1399554e22d2)

  the nova-api log show

  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 338, in wrapped
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions return f(*args, **kwargs)
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 132, in detail
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions return self._detail(req)
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/compute/hypervisors.py", line 148, in _detail
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions True, req) for hyp in compute_nodes]
  2016-07-27 14:00:36.008 TRACE nova.api.openstack.extensions 

[Yahoo-eng-team] [Bug 1547068] Re: Configuration option 'fake_call' is not used

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/282419
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b8fea0351895f468d0b5e72087adde5e8a788ab1
Submitter: Jenkins
Branch:master

commit b8fea0351895f468d0b5e72087adde5e8a788ab1
Author: EdLeafe 
Date:   Thu Feb 18 21:46:27 2016 +

Remove unused config option 'fake_call'

This option isn't used anywhere, and should be removed.

Closes-Bug: #1547068

Change-Id: Id2983a3c31fb767992976d3eff9649cc6ff4d8c3


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1547068

Title:
  Configuration option 'fake_call' is not used

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  As of stable/liberty there is a configuration option named
  'fake_call', defined in nova/network/manager.py, and has been moved in
  Mitaka to nova/conf/network.py as part of the config option cleanup.
  Running grep on this name shows no usage of this option anywhere in
  the nova code base. Since it is not used, it should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1547068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605537] Re: Action name should be returned during Forbidden Exception to make the message more informative.

2016-07-27 Thread Mh Raies
** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1605537

Title:
  Action name should be returned during Forbidden Exception to make the
  message more informative.

Status in Glance:
  Fix Released

Bug description:
  During a Forbidden exception in Glance, the message doesn't provide
  sufficient information about the action name for which the policy was
  restricted/unauthorized, which makes it very difficult for the user to
  debug. Currently, the message displayed is as follows:

  "You are not authorized to complete this action."

  We propose to make this more intuitive by providing the action_name as
  part of the message as shown below:

  "You are not authorized to complete %(action)s action."

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1605537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499647] Re: test_ha_router fails intermittently

2016-07-27 Thread John Schwarz
As per comment #39, this can be closed - this bug report is mostly a
tracker bug and I'm under the impression that most of the races that made
test_ha_router fail are resolved.

Some other races are https://bugs.launchpad.net/neutron/+bug/1605285 and
https://bugs.launchpad.net/neutron/+bug/1605282, but these can be
addressed separately.

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: neutron/kilo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499647

Title:
  test_ha_router fails intermittently

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  I have tested L3 HA on an environment with 3 controllers and 1
  compute (Kilo, keepalived v1.2.13). I created 50 nets with 50 subnets and
  50 routers, with an interface set for each subnet (note: I've seen the
  same errors with just one router and net). I've got the following
  errors:

  root@node-6:~# neutron l3-agent-list-hosting-router router-1
  Request Failed: internal server error while processing your request.
   
  In neutron-server error log:  http://paste.openstack.org/show/473760/

  When I fixed _get_agents_dict_for_router to skip None for further
  testing, so then I was able to see:

  root@node-6:~# neutron l3-agent-list-hosting-router router-1
  
+--+---++---+--+
  | id   | host  | admin_state_up | 
alive | ha_state |
  
+--+---++---+--+
  | f3baba98-ef5d-41f8-8c74-a91b7016ba62 | node-6.domain.tld | True   | 
:-)   | active   |
  | c9159f09-34d4-404f-b46c-a8c18df677f3 | node-7.domain.tld | True   | 
:-)   | standby  |
  | b458ab49-c294-4bdb-91bf-ae375d87ff20 | node-8.domain.tld | True   | 
:-)   | standby  |
  | f3baba98-ef5d-41f8-8c74-a91b7016ba62 | node-6.domain.tld | True   | 
:-)   | active   |
  
+--+---++---+--+

  root@node-6:~# neutron port-list 
--device_id=fcf150c0-f690-4265-974d-8db370e345c4
  
+--+-+---++
  | id   | name 
   | mac_address   | fixed_ips  
|
  
+--+-+---++
  | 0834f8a2-f109-4060-9312-edebac84aba5 |  
   | fa:16:3e:73:9f:33 | {"subnet_id": 
"0c7a2cfa-1cfd-4ecc-a196-ab9e97139352", "ip_address": "172.18.161.223"}  |
  | 2b5a7a15-98a2-4ff1-9128-67d098fa3439 | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:b8:f6:35 | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.149"} |
  | 48c887c1-acc3-4804-a993-b99060fa2c75 | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:e7:70:13 | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.151"} |
  | 82ab62d6-7dd1-4294-a0dc-f5ebfbcbb4ca |  
   | fa:16:3e:c6:fc:74 | {"subnet_id": 
"c4cc21c9-3b3a-407c-b4a7-b22f783377e7", "ip_address": "10.0.40.1"}   |
  | bbca8575-51f1-4b42-b074-96e15aeda420 | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:84:4c:fc | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.192.150"} |
  | bee5c6d4-7e0a-4510-bb19-2ef9d60b9faf | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:09:a1:ae | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.193.11"}  |
  | f8945a1d-b359-4c36-a8f8-e78c1ba992f0 | HA port tenant 
aef8d13bad9d42df9f25d8ee54c80ad6 | fa:16:3e:c4:54:b5 | {"subnet_id": 
"1915ccb8-9d0f-4f1a-9811-9a196d1e495e", "ip_address": "169.254.193.12"}  |
  
+--+-+---++
  mysql root@192.168.0.2:neutron> SELECT * FROM ha_router_agent_port_bindings 
WHERE router_id='fcf150c0-f690-4265-974d-8db370e345c4';
  
+--+--+--+-+
  | port_id  | router_id
| l3_agent_id  | state   |
  

[Yahoo-eng-team] [Bug 1590746] Re: SRIOV PF/VF allocation fails with NUMA aware flavor

2016-07-27 Thread Vladik Romanovsky
*** This bug is a duplicate of bug 1582278 ***
https://bugs.launchpad.net/bugs/1582278

** This bug has been marked a duplicate of bug 1582278
   [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from one 
NUMA node and PCI device from another NUMA node.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1590746

Title:
  SRIOV PF/VF allocation fails with NUMA aware flavor

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  It seems that the main failure happens due to incorrect NUMA filtering in 
the PCI allocation mechanism. The allocation is done according to the 
instance NUMA topology; however, this is not always correct, specifically in 
the case where a user selects hw:numa_nodes=1, which means the VM will 
take resources from just one NUMA node and not from a specific one.

  
  Steps to reproduce
  ==

  Create nova flavor with NUMA awareness, CPU pinning, Huge pages, etc:

  #  nova flavor-create prefer_pin_1 auto 2048 20 1
  #  nova flavor-key prefer_pin_1 set  hw:numa_nodes=1
  #  nova flavor-key prefer_pin_1 set  hw:mem_page_size=1048576
  #  nova flavor-key prefer_pin_1 set hw:numa_mempolicy=strict
  #  nova flavor-key prefer_pin_1 set hw:cpu_policy=dedicated
  #  nova flavor-key prefer_pin_1 set hw:cpu_thread_policy=prefer

  Then instantiate VMs with direct-physical neutron ports:

  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf1
  nova boot pf1 --flavor prefer_pin_1 --image centos_udev --nic 
port-id=a0fe88f6-07cc-4c70-b702-1915e36ed728
  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf2
  nova boot pf2 --flavor prefer_pin_1 --image centos_udev --nic 
port-id=b96de3ec-ef94-428b-96bc-dc46623a2427

  Third VM instantiation failed. Our environment has got 4 NICs
  configured to be allocated. However, with a regular flavor
  (m1.normal), the instantiation works:

  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf3
  nova boot pf3 --flavor 2 --image centos_udev --nic 
port-id=52caacfe-0324-42bd-84ad-9a54d80e8fbe
  neutron port-create nfv_sriov --binding:vnic-type direct-physical --name pf4
  nova boot pf4 --flavor 2 --image centos_udev --nic 
port-id=7335a9a6-82d0-4595-bb88-754678db56ef

  
  Expected result
  ===

  PCI passthrough (PFs and VFs) should work in an environment with
  NUMATopologyFilter enable

  
  Actual result
  =

  Checking availability of NICs with NUMATopologyFilter is not working.

  
  Environment
  ===

  1 controller + 1 compute.

  OpenStack Mitaka

  Logs & Configs
  ==

  See attachment

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1590746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523780] Re: Race between HA router create and HA router delete

2016-07-27 Thread John Schwarz
I've gone through all 5 of the initial reported problems. There are all
either fixed or referenced by other bugs:

1. DBReferenceError: referenced by
https://bugs.launchpad.net/neutron/+bug/1533460 and fixed by
https://review.openstack.org/#/c/260303/

2. AttributeError: referenced by
https://bugs.launchpad.net/neutron/+bug/1605546

3. DBError: referenced by
https://bugs.launchpad.net/neutron/+bug/1533443

4. port["id"]: referenced by
https://bugs.launchpad.net/neutron/+bug/1533457

5. concurrency error: fixed by https://review.openstack.org/#/c/254586/

Therefore, this bug can be closed.


** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron/kilo
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523780

Title:
  Race between HA router create and HA router delete

Status in neutron:
  Invalid
Status in neutron kilo series:
  Invalid

Bug description:
  Set more than one API worker and RPC worker, and then run the rally scenario 
test create_and_delete_routers;
  you may get errors such as the following:

  1.DBReferenceError: (IntegrityError) (1452, 'Cannot add or update a
  child row: a foreign key constraint fails
  (`neutron`.`ha_router_agent_port_bindings`, CONSTRAINT
  `ha_router_agent_port_bindings_ibfk_2` FOREIGN KEY (`router_id`)
  REFERENCES `routers` (`id`) ON DELETE CASCADE)') 'INSERT INTO
  ha_router_agent_port_bindings (port_id, router_id, l3_agent_id, state)
  VALUES (%s, %s, %s, %s)' ('xxx', 'xxx', None,
  'standby')

  (InvalidRequestError: This Session's transaction has been rolled back
  by a nested rollback() call.  To begin a new transaction, issue
  Session.rollback() first.)

  2. AttributeError: 'NoneType' object has no attribute 'config' (l3
  agent process router in router_delete function)

  3. DBError: UPDATE statement on table 'ports' expected to update 1
  row(s); 0 were matched.

  4. res = {"id": port["id"],
     TypeError: 'NoneType' object is unsubscriptable

  5. delete HA network during deleting the last router, get error
  message: "Unable to complete operation on network . There
  are one or more ports still in use on the network."

  There are a bunch of sub-bugs related to this one, basically different
  incarnations of race conditions in the interactions between the
  l3-agent and the neutron-server:

     https://bugs.launchpad.net/neutron/+bug/1499647
     https://bugs.launchpad.net/neutron/+bug/1533441
     https://bugs.launchpad.net/neutron/+bug/1533443
     https://bugs.launchpad.net/neutron/+bug/1533457
     https://bugs.launchpad.net/neutron/+bug/1533440
     https://bugs.launchpad.net/neutron/+bug/1533454
     https://bugs.launchpad.net/neutron/+bug/1533455
 https://bugs.launchpad.net/neutron/+bug/1533460

  (I suggest we use this main bug as a tracker for the whole thing,
   as reviews already reference this bug as related).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533441] Re: HA router can not be deleted in L3 agent after race between HA router creating and deleting

2016-07-27 Thread John Schwarz
I've gone through the 2 errors initially reported:

1. Concurrency issues with HA ports: fixed by
https://review.openstack.org/#/c/257059/ (introduction of the ALLOCATING
status for routers)

2. AttributeError: already referenced by
https://bugs.launchpad.net/neutron/+bug/1605546

So this bug can be closed.

** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron/kilo
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533441

Title:
  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

Status in neutron:
  Invalid
Status in neutron kilo series:
  Invalid

Bug description:
  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

  Exception:
  1. Unable to process HA router %s without HA port (HA router initialize)

  2. AttributeError: 'NoneType' object has no attribute 'config' (HA
  router deleting procedure)

  
  With the newest neutron code, I found an infinite loop in _safe_router_removed.
  Consider an HA router without an HA port that was placed on the l3 agent,
  usually because of the race condition.

  Infinite loop steps:
  1. a HA router deleting RPC comes
  2. l3 agent remove it
  3. the RouterInfo will delete its the router 
namespace(self.router_namespace.delete())
  4. the HaRouter, ha_router.delete(), where the AttributeError: 'NoneType' or 
some error will be raised.
  5. _safe_router_removed return False
  6. self._resync_router(update)
  7. the router namespace does not exist, RuntimeError is raised, go to 5; 
infinite loop between 5 and 7

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533440] Re: Race between deleting last HA router and a new HA router API call

2016-07-27 Thread John Schwarz
3 of the 4 original issues in the first post are now fixed, and the one
that isn't is addressed by a separate bug report:

1. NetworkNotFound: fixed by the introduction of
_create_ha_interfaces_and_ensure_network

2. IpAddressGenerationFailure:
https://bugs.launchpad.net/neutron/+bug/1562887

3. DBReferenceError: Opened a separate bug,
https://bugs.launchpad.net/neutron/+bug/1533460, and fixed by
https://review.openstack.org/#/c/260303/

4. HA Network Attribute Error: fixed by the introduction of
_create_ha_interfaces_and_ensure_network

I think this bug can be closed.


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533440

Title:
  Race between deleting last HA router and a new HA router API call

Status in neutron:
  Fix Released

Bug description:
  During the delete of a tenant's last HA router, neutron will also delete
  the HA network, which can be racy if a new HA router API call arrives
  concurrently.

  Some known exceptions:
  1. NetworkNotFound: (HA network not found when create HA router HA port)

  2. IpAddressGenerationFailure: (HA port created failed due to the
     concurrently HA subnet deletion)

  3. DBReferenceError(IntegrityError): (HA network was deleted by
     concurrently operation, e.g. deleting the last HA router)

  4. HA Network Attribute Error
 http://paste.openstack.org/show/490140/

  Consider using Rally to do the following steps to reproduce the race 
exceptions:
  1. Create 200+ tenants, each one with 2 or more users
  2. Create ONLY 1 router for each tenant
  3. Concurrently do the following:
    (1) one user tries to delete the LAST HA router
    (2) another user tries to create some HA router

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600766] Re: initial dhcp has default hostname of ubuntu

2016-07-27 Thread Scott Moser
There are at least 3 ways to fix this:
a.) lxd templates can write /etc/hostname as they write 
/var/lib/cloud/data/seed/nocloud/meta-data
then, when the system does a dhcp on its first boot, it will have the 
/etc/hostname that lxd thinks it should have.

b.) cloud-init can read the hostname from meta-data and set it before the dhcp.
   This has only recently become possible in cloud-init, but it is now possible.

c.) lxd can control dnsmasq better

'a' and 'b' have the requirement that the container continues to behave,
and if a container changes its /etc/hostname and dhcps again, it will
then possibly have a new hostname.  Basically, that puts the container
in charge of declaring the hostname that will appear in dns.  That is
brittle.  The failure you see here is fallout of that system being
brittle.

if LXD says a system's name is 'cloudy-weather' and something in the
container writes 'sunny-day' to /etc/hostname, a subsequent reboot would
apparently change the dns entry for that system.  Essentially, you're
allowing an api by which a container can change its desired hostname via
metadata in a dhcp request.  That seems an odd api at best.

It also means that if another system was already named 'sunny-day', then
you have a collision and a misbehaving system can actually do some harm.

In my head, ultimately the only fix is 'c'.

either 'a' or 'b' will improve the situation, but not fix it.
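
For illustration of (a)/(b), a NoCloud seed that names the host up front might
look like this (a sketch; instance-id and local-hostname are the standard
keys, the values are invented):

    # /var/lib/cloud/data/seed/nocloud/meta-data
    instance-id: iid-cloudy-weather
    local-hostname: cloudy-weather

and option (a) would have lxd write the same name ('cloudy-weather') into the
container's /etc/hostname at creation time, so the first dhcp already carries
it.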

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1600766

Title:
  initial dhcp has default hostname of ubuntu

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  New

Bug description:
  When using the normal lxd-bridge, using a dnsmasq instance for dhcp,
  the initial dhcp is always the hostname 'ubuntu', and this is recorded
  in the dnsmasq's dhcp leases file.

  Presumably the dhcp is done before the container's hostname is set. A
  restart or dhcp renew seems to put the correct container name in the
  leases file.

  This is a problem when using the dnsmasq for local dns resolving for
  *.lxd, which is the standard way of doing host dns for containers, as
  new containers are not dns addressable without a restart or renew.

  Is there any way to get the correct hostname in the initial dhcp? Or maybe
  renew after setting the hostname?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1600766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606895] [NEW] DHCP offer wrong router

2016-07-27 Thread Tangi
Public bug reported:

Hello, 
I have a public network with 3 sub-nets and associated gateways: 
public3 XX.0/27 IPv4XX.30
public  XX.64/28IPv4XX.78
public2 XX.160/28   IPv4XX.174  

From time to time, I saw that on an instantiated VM, the default route to
the GW was missing.

Checking the DHCP answer, I noticed the proposed router was a wrong one.

We get the .174 instead of .78

see below DHCP offer :

Bootstrap Protocol (ACK)
Message type: Boot Reply (2)
Hardware type: Ethernet (0x01)
Hardware address length: 6
Hops: 0
Transaction ID: 0x70542239
Seconds elapsed: 0
Bootp flags: 0x (Unicast)
0...    = Broadcast flag: Unicast
.000    = Reserved flags: 0x
Client IP address: 0.0.0.0 (0.0.0.0)
Your (client) IP address: XX.XX.XX.66 (XX.XX.XX.66)
Next server IP address: XX.XX.XX.65 (XX.XX.XX.65)
Relay agent IP address: 0.0.0.0 (0.0.0.0)
Client MAC address: fa:16:3e:9c:ea:c4 (fa:16:3e:9c:ea:c4)
Client hardware address padding: 
Server host name not given
Boot file name not given
Magic cookie: DHCP
Option: (53) DHCP Message Type (ACK)
Length: 1
DHCP: ACK (5)
Option: (54) DHCP Server Identifier
Length: 4
DHCP Server Identifier: XX.XX.XX.65 (XX.XX.XX.65)
Option: (51) IP Address Lease Time
Length: 4
IP Address Lease Time: (86400s) 1 day
Option: (58) Renewal Time Value
Length: 4
Renewal Time Value: (43200s) 12 hours
Option: (59) Rebinding Time Value
Length: 4
Rebinding Time Value: (75600s) 21 hours
Option: (1) Subnet Mask
Length: 4
Subnet Mask: 255.255.255.240 (255.255.255.240)
Option: (28) Broadcast Address
Length: 4
Broadcast Address: XX.XX.XX.79 (XX.XX.XX.79)
Option: (15) Domain Name
Length: 14
Domain Name: openstacklocal
Option: (12) Host Name
Length: 18
Host Name: host-XX-XX-XX-66
Option: (3) Router
Length: 4
Router: XX.XX.XX.174 (XX.XX.XX.174)
Option: (121) Classless Static Route
Length: 32
Subnet/MaskWidth-Router: 169.254.169.254/32-XX.XX.XX.161
Subnet/MaskWidth-Router: XX.XX.XX.0/27-0.0.0.0
Subnet/MaskWidth-Router: XX.XX.XX.64/28-0.0.0.0
Subnet/MaskWidth-Router: default-XX.XX.XX.174
Option: (6) Domain Name Server
Length: 8
Domain Name Server: XX
Domain Name Server: XX
Option: (255) End
Option End: 255


Could it be an issue related to multiple subnets?

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: dhcp multiple router sub-nets

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606895

Title:
  DHCP offer wrong router

Status in neutron:
  New

Bug description:
  Hello, 
  I have a public network with 3 sub-nets and associated gateways: 
  public3   XX.0/27     IPv4   XX.30
  public    XX.64/28    IPv4   XX.78
  public2   XX.160/28   IPv4   XX.174

  From time to time, I saw that on an instantiated VM, the default route to
  the GW was missing.

  Checking the DHCP answer, I noticed the proposed router was a wrong
  one.

  We get the .174 instead of .78

  see below DHCP offer :

  Bootstrap Protocol (ACK)
  Message type: Boot Reply (2)
  Hardware type: Ethernet (0x01)
  Hardware address length: 6
  Hops: 0
  Transaction ID: 0x70542239
  Seconds elapsed: 0
  Bootp flags: 0x (Unicast)
  0...    = Broadcast flag: Unicast
  .000    = Reserved flags: 0x
  Client IP address: 0.0.0.0 (0.0.0.0)
  Your (client) IP address: XX.XX.XX.66 (XX.XX.XX.66)
  Next server IP address: XX.XX.XX.65 (XX.XX.XX.65)
  Relay agent IP address: 0.0.0.0 (0.0.0.0)
  Client MAC address: fa:16:3e:9c:ea:c4 (fa:16:3e:9c:ea:c4)
  Client hardware address padding: 
  Server host name not given
  Boot file name not given
  Magic cookie: DHCP
  Option: (53) DHCP Message Type (ACK)
  Length: 1
  DHCP: ACK (5)
  Option: (54) DHCP Server Identifier
  Length: 4
  DHCP Server Identifier: XX.XX.XX.65 (XX.XX.XX.65)
  Option: (51) IP Address Lease Time
  Length: 4
  IP Address Lease Time: (86400s) 1 day
  Option: (58) Renewal Time Value
  Length: 4
  Renewal Time 

[Yahoo-eng-team] [Bug 1606889] [NEW] Should Use "p_gp.spec.vswitchname" instead of "p_gp.vswitch.split" in vmware driver

2016-07-27 Thread Fang He
Public bug reported:

Description
===
When trying to get the VLAN id and vSwitch for a port group, the driver uses the 
attribute "portgroup.vswitch", which is the full name of the vSwitch ("key-x-").
The program should use "portgroup.spec.vswitchname" instead.
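
A minimal sketch of the suggested change (assuming the port group object
exposes both attributes as described above; the attribute casing follows the
report):

    def get_vswitch_name(port_group):
        # Old approach: parse the full "key-...-vSwitchN" name out of
        # port_group.vswitch with split().
        # Suggested: read the plain name from the spec directly.
        return port_group.spec.vswitchname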

** Affects: nova
 Importance: Undecided
 Assignee: Fang He (fang-he)
 Status: New


** Tags: improve

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1606889

Title:
  Should Use "p_gp.spec.vswitchname" instead of "p_gp.vswitch.split" in
  vmware driver

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When trying to get the VLAN id and vSwitch for a port group, the driver uses the 
attribute "portgroup.vswitch", which is the full name of the vSwitch ("key-x-").
  The program should use "portgroup.spec.vswitchname" instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1606889/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606562] Re: Add 'vhdx' disk format support

2016-07-27 Thread Stuart McLaren
** Changed in: glance
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1606562

Title:
  Add 'vhdx' disk format support

Status in Glance:
  Invalid

Bug description:
  VHDX is the newer version of VHD:

  https://technet.microsoft.com/en-us/library/hh831446(v=ws.11).aspx

  It removes VHD's 2 TB disk size limit (among other things).

  There's no reason we shouldn't support it as a default disk format.
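
  For what it's worth, the accepted disk formats are already configurable, so
  a deployment can allow vhdx today with something like the following in
  glance-api.conf (a sketch; check the disk_formats option in your release):

      [image_format]
      disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso

  which may be why the report was closed as Invalid rather than fixed in the
  defaults.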

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1606562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606844] [NEW] Neutron constantly resyncing deleted router

2016-07-27 Thread Oleg Bondarev
Public bug reported:

There is no need to constantly resync a router which was deleted and for which
there is no namespace.

Observed: l3 agent log full of

2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 81ef46de-f7f9-4c5e-b787-c935e0af253a
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 359, in 
_safe_router_removed
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 377, in 
_router_removed
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent ri.delete(self)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 347, 
in delete
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
self.process_delete(agent)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 385, in call
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent self.logger(e)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
self.force_reraise()
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 382, in call
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent return 
func(*args, **kwargs)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 947, 
in process_delete
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
self._process_internal_ports(agent.pd)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 530, 
in _process_internal_ports
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent existing_devices 
= self._get_existing_devices()
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 413, 
in _get_existing_devices
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent ip_devs = 
ip_wrapper.get_devices(exclude_loopback=True)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 130, in 
get_devices
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
log_fail_as_error=self.log_fail_as_error
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 140, in 
execute
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent raise 
RuntimeError(msg)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent RuntimeError: Exit 
code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-81ef46de-f7f9-4c5e-b787-c935e0af253a": No such file or directory
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent
2016-07-26 14:00:45.236 13360 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-81ef46de-f7f9-4c5e-b787-c935e0af253a": No such file or directory

This consumes memory, CPU and disk.
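
A rough sketch of the kind of guard that would stop the endless resync,
assuming a helper that checks for the namespace via the ip utility; the
function names are illustrative, not the actual fix:

    import subprocess

    def namespace_exists(name):
        # 'ip netns list' prints one namespace per line (newer iproute2 may
        # append "(id: N)"), so compare only the first token.
        out = subprocess.check_output(['ip', 'netns', 'list']).decode()
        return any(line.split()[0] == name
                   for line in out.splitlines() if line.strip())

    def safe_router_removed(router_id):
        ns = 'qrouter-%s' % router_id
        if not namespace_exists(ns):
            # Namespace is already gone: treat the router as removed instead
            # of raising and scheduling yet another resync.
            return True
        # ... normal teardown path ...
        return True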

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606844

Title:
  Neutron constantly resyncing deleted router

Status in neutron:
  New

Bug description:
  No need to constantly resync a router which was deleted and for which
  there is no namespace.

  Observed: l3 agent log full of

  2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 81ef46de-f7f9-4c5e-b787-c935e0af253a
  2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 359, in 
_safe_router_removed
  2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 

[Yahoo-eng-team] [Bug 1606845] [NEW] L3 agent constantly resyncing deleted router

2016-07-27 Thread Oleg Bondarev
Public bug reported:

No need to constantly resync a router which was deleted and for which
there is no namespace.

Observed: l3 agent log full of

2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 81ef46de-f7f9-4c5e-b787-c935e0af253a
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 359, in 
_safe_router_removed
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 377, in 
_router_removed
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent ri.delete(self)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 347, 
in delete
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
self.process_delete(agent)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 385, in call
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent self.logger(e)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
self.force_reraise()
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
six.reraise(self.type_, self.value, self.tb)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/common/utils.py", line 382, in call
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent return 
func(*args, **kwargs)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 947, 
in process_delete
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
self._process_internal_ports(agent.pd)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 530, 
in _process_internal_ports
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent existing_devices 
= self._get_existing_devices()
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/router_info.py", line 413, 
in _get_existing_devices
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent ip_devs = 
ip_wrapper.get_devices(exclude_loopback=True)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 130, in 
get_devices
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent 
log_fail_as_error=self.log_fail_as_error
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 140, in 
execute
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent raise 
RuntimeError(msg)
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent RuntimeError: Exit 
code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-81ef46de-f7f9-4c5e-b787-c935e0af253a": No such file or directory
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent
2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent
2016-07-26 14:00:45.236 13360 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-81ef46de-f7f9-4c5e-b787-c935e0af253a": No such file or directory

This consumes memory, CPU and disk.

** Affects: neutron
 Importance: Undecided
 Assignee: Oleg Bondarev (obondarev)
 Status: New


** Tags: l3-ipam-dhcp

** Summary changed:

- Neutron constantly resyncing deleted router
+ L3 agent constantly resyncing deleted router

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606845

Title:
  L3 agent constantly resyncing deleted router

Status in neutron:
  New

Bug description:
  No need to constantly resync a router which was deleted and for which
  there is no namespace.

  Observed: l3 agent log full of

  2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 81ef46de-f7f9-4c5e-b787-c935e0af253a
  2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-07-26 14:00:45.224 13360 ERROR neutron.agent.l3.agent   File 

[Yahoo-eng-team] [Bug 1606822] Re: can not update lbaas pool name

2016-07-27 Thread Jakub Libosvar
** Tags added: lbaas

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606822

Title:
  can not update lbaas pool name

Status in python-neutronclient:
  New

Bug description:
  Steps to reproduce:
  1. Create a lb

  2. Create a pool:
  neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --protocol 
TCP --loadbalancer lb1

  3. Update pool name:
  neutron lbaas-pool-update --name pool2 pool1

  Expected:
  pool name been updated

  Actual result:

  usage: neutron lbaas-pool-update [-h] [--request-format {json}]
   [--admin-state-up {True,False}]
   [--session-persistence 
type=TYPE[,cookie_name=COOKIE_NAME]]
   [--description DESCRIPTION] [--name NAME]
   --lb-algorithm
   {ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP}
   POOL
  neutron lbaas-pool-update: error: argument --lb-algorithm is required
  Try 'neutron help lbaas-pool-update' for more information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1606822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606837] [NEW] Some long strings are not translated

2016-07-27 Thread Kenji Ishii
Public bug reported:

In https://bugs.launchpad.net/horizon/+bug/1592965, extracted messages
are stored with trimming. Because of this change, some strings, such as
those written across multiple lines, won't be translated.
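
A tiny illustration of the mismatch in plain Python (not the actual Horizon
extraction code): once msgids are stored trimmed, a lookup made with the raw
multi-line string no longer matches the catalog entry.

    raw = ("This help text\n"
           "    spans multiple lines.")
    trimmed = " ".join(raw.split())     # what ends up stored in the catalog
    catalog = {trimmed: "translated text"}
    print(catalog.get(raw))             # None -> falls back to the English text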

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606837

Title:
  Some long strings are not translated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In https://bugs.launchpad.net/horizon/+bug/1592965, extracted messages
  are stored with trimming. Because of this change, some strings, such as
  those written across multiple lines, won't be translated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606837/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533876] Re: plug_vhostuser may fail due to device not found error when setting mtu

2016-07-27 Thread yossib
** Changed in: nova/liberty
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1533876

Title:
  plug_vhostuser may fail due to device not found error when setting mtu

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Committed

Bug description:
  Setting the mtu of a vhost-user port with the ip command will cause vms to 
fail
  to boot with a device not found error as vhost-user prots are not represented 
as
  kernel netdevs.

  This bug is present in stable/kilo, stable/liberty and master; I would like
  to ask that it be backported if accepted and fixed in master.

  When using vhost-user with ovs-dpdk, the vhost-user port is plugged
  into OVS by nova using a non-atomic call to
  linux_net.create_ovs_vif_port to add an OVS port, followed by a second
  call to linux_net.ovs_set_vhostuser_port_type to update the port type.

  
https://github.com/openstack/nova/blob/1bf6a8760f0ef226dba927b62d4354e248b984de/nova/virt/libvirt/vif.py#L652-L655

  The reuse of create_ovs_vif_port has the unintended consequence of
  introducing an error where the ip tool is invoked to try to set the MTU on
  the userspace vhost-user interface, which does not exist as a kernel netdev.
  
https://github.com/openstack/nova/blob/1bf6a8760f0ef226dba927b62d4354e248b984de/nova/network/linux_net.py#L1379

  This results in the call to set_device_mtu throwing an exception, as the ip
  command exits with code 1.
  
https://github.com/openstack/nova/blob/1bf6a8760f0ef226dba927b62d4354e248b984de/nova/network/linux_net.py#L1340-L1342

  As a result, the second function call to ovs_set_vhostuser_port_type
  is never made and the VM fails to boot.

  To resolve this issue I would like to introduce a new function in
  linux_net.py, create_ovs_vhostuser_port, which will create the vhost-user
  port as an atomic action and will not set the MTU, similar to the
  implementation in the os-vif vhost-user driver

  
https://github.com/jaypipes/vif_plug_vhostuser/blob/8ac30ce32b3e0bae5d2d8f1edc9d64ac2871608e/vif_plug_vhostuser/linux_net.py#L34-L46
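
  A minimal sketch of what such an atomic helper could look like, assuming
  nova.utils.execute() and ovs-vsctl; the argument names and external-ids are
  illustrative rather than the merged implementation:

      from nova import utils

      def create_ovs_vhostuser_port(bridge, dev, iface_id, mac, instance_id):
          # One ovs-vsctl transaction: add the port and set its type and
          # external-ids together.  No kernel netdev is touched, so there is
          # no MTU call that can fail for a vhost-user interface.
          utils.execute('ovs-vsctl', '--timeout=120', '--',
                        '--if-exists', 'del-port', dev, '--',
                        'add-port', bridge, dev, '--',
                        'set', 'Interface', dev, 'type=dpdkvhostuser',
                        'external-ids:iface-id=%s' % iface_id,
                        'external-ids:iface-status=active',
                        'external-ids:attached-mac=%s' % mac,
                        'external-ids:vm-uuid=%s' % instance_id,
                        run_as_root=True)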

  An alternative solution would be to add "1" to the return code check here
  https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1339
  or catch the exception here
  https://github.com/openstack/nova/blob/1bf6a8760f0ef226dba927b62d4354e248b984de/nova/virt/libvirt/vif.py#L652
  however, neither solves the underlying cause.

  This was observed with Kilo OpenStack on Ubuntu 14.04 with ovs-dpdk
  deployed with puppet/fuel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1533876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606827] [NEW] Agents might be reported as down for 10 minutes after all controllers restart

2016-07-27 Thread John Schwarz
Public bug reported:

The scenario which initially revealed this issue involved multiple
controllers and an extra compute node (total of 4) but it should also
reproduce on deployments smaller than described.

The issue is that if an agent tries to report_state to the neutron-
server and it fails because of a timeout (raising
oslo_messaging.MessagingTimeout), then there is an exponential back-off
effect which was put in place by [1]. The feature was intended for heavy
RPC calls (like get_routers()) and not for light calls such as
report_state, so this can be considered a regression. This can be
reproduced by restarting the controllers on a TripleO deployment as
described above.

A solution would be to ensure PluginReportStateAPI doesn't use the
exponential backoff, instead seeking to always time out after
rpc_response_timeout.

[1]: https://review.openstack.org/#/c/280595/14/neutron/common/rpc.py
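
As a back-of-the-envelope illustration (assuming the default
rpc_response_timeout of 60 seconds and a plain doubling back-off; the real
helper may cap the value differently), the per-attempt timeout grows quickly:

    def backed_off_timeouts(base=60, failures=4):
        # Timeout used for each successive attempt after consecutive
        # MessagingTimeouts, when the back-off simply doubles the value.
        return [base * 2 ** i for i in range(failures + 1)]

    print(backed_off_timeouts())  # [60, 120, 240, 480, 960]

After only a few failed report_state calls a single retry can therefore wait
on the order of ten minutes, during which the agent stays marked as down.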

** Affects: neutron
 Importance: Undecided
 Assignee: John Schwarz (jschwarz)
 Status: In Progress


** Tags: liberty-backport-potential mitaka-backport-potential

** Description changed:

  The scenario which initially revealed this issue involved multiple
  controllers and an extra compute node (total of 4) but it should also
  reproduce on deployments smaller than described.
  
  The issue is that if an agent tries to report_state to the neutron-
  server and it fails because of a timeout (raising
  oslo_messaging.MessagingTimeout), then there is an exponential back-off
  effect which was put in place by [1]. The feature was intended for heavy
  RPC calls (like get_routers()) and not for light calls such as
- report_state, so this can be considered a regression.
+ report_state, so this can be considered a regression. This can be
+ reproduced by restarting the controllers on a triple-O deployment and
+ specified before.
  
  A solution would be to ensure PluginReportStateAPI doesn't use the
  exponential backoff, instead seeking to always time out after
  rpc_response_timeout.
  
  [1]: https://review.openstack.org/#/c/280595/14/neutron/common/rpc.py

** Tags added: mitaka-backport-potential

** Tags added: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606827

Title:
  Agents might be reported as down for 10 minutes after all controllers
  restart

Status in neutron:
  In Progress

Bug description:
  The scenario which initially revealed this issue involved multiple
  controllers and an extra compute node (total of 4) but it should also
  reproduce on deployments smaller than described.

  The issue is that if an agent tries to report_state to the neutron-
  server and it fails because of a timeout (raising
  oslo_messaging.MessagingTimeout), then there is an exponential back-
  off effect which was put in place by [1]. The feature was intended for
  heavy RPC calls (like get_routers()) and not for light calls such as
  report_state, so this can be considered a regression. This can be
  reproduced by restarting the controllers on a TripleO deployment as
  described above.

  A solution would be to ensure PluginReportStateAPI doesn't use the
  exponential backoff, instead seeking to always time out after
  rpc_response_timeout.

  [1]: https://review.openstack.org/#/c/280595/14/neutron/common/rpc.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1606827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606822] [NEW] can not update lbaas pool name

2016-07-27 Thread li,chen
Public bug reported:

Steps to reproduce:
1. Create a lb

2. Create a pool:
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --protocol 
TCP --loadbalancer lb1

3. Update pool name:
neutron lbaas-pool-update --name pool2 pool1

Expected:
pool name been updated

Actual result:

usage: neutron lbaas-pool-update [-h] [--request-format {json}]
 [--admin-state-up {True,False}]
 [--session-persistence 
type=TYPE[,cookie_name=COOKIE_NAME]]
 [--description DESCRIPTION] [--name NAME]
 --lb-algorithm
 {ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP}
 POOL
neutron lbaas-pool-update: error: argument --lb-algorithm is required
Try 'neutron help lbaas-pool-update' for more information.
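
Until the client stops requiring the flag, re-passing the pool's existing
algorithm should satisfy the argument parser (workaround added here for
illustration; it is not part of the original report):

neutron lbaas-pool-update --name pool2 --lb-algorithm ROUND_ROBIN pool1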

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606822

Title:
  can not update lbaas pool name

Status in neutron:
  New

Bug description:
  Steps to reproduce:
  1. Create a lb

  2. Create a pool:
  neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --protocol 
TCP --loadbalancer lb1

  3. Update pool name:
  neutron lbaas-pool-update --name pool2 pool1

  Expected:
  pool name been updated

  Actual result:

  usage: neutron lbaas-pool-update [-h] [--request-format {json}]
   [--admin-state-up {True,False}]
   [--session-persistence 
type=TYPE[,cookie_name=COOKIE_NAME]]
   [--description DESCRIPTION] [--name NAME]
   --lb-algorithm
   {ROUND_ROBIN,LEAST_CONNECTIONS,SOURCE_IP}
   POOL
  neutron lbaas-pool-update: error: argument --lb-algorithm is required
  Try 'neutron help lbaas-pool-update' for more information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1606822/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606801] [NEW] deleting router run into race condition

2016-07-27 Thread Bernhard
Public bug reported:

After deleting a router, the logfiles of both network nodes are filled up with
"RuntimeError: Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network
namespace "qrouter-3767"".
After I restarted the OpenStack services on the network nodes, no new entries
appear.

Reproducible: yes

Steps:
* add router via CLI or dashboard
* delete router via CLI or dashboard
* logfiles grow

OpenStack version: Mitaka (this error occurred on Liberty too!)

OS: Centos 7, latest updates

Installed packages on network nodes
openstack-neutron-vpnaas-8.0.0-1.el7.noarch
openstack-neutron-common-8.1.2-1.el7.noarch
openstack-neutron-metering-agent-8.1.2-1.el7.noarch
python-neutronclient-4.1.1-2.el7.noarch
python-neutron-8.1.2-1.el7.noarch
python-neutron-fwaas-8.0.0-3.el7.noarch
openstack-neutron-ml2-8.1.2-1.el7.noarch
openstack-neutron-bgp-dragent-8.1.2-1.el7.noarch
python-neutron-vpnaas-8.0.0-1.el7.noarch
openstack-neutron-openvswitch-8.1.2-1.el7.noarch
openstack-neutron-8.1.2-1.el7.noarch
python-neutron-lib-0.0.2-1.el7.noarch
openstack-neutron-fwaas-8.0.0-3.el7.noarch


Logfile network node:
2.770 44778 DEBUG neutron.agent.linux.ra [-] radvd disabled for router 
37678766-597a-4e33-b83a-65142ca2ced8 disable 
/usr/lib/python2.7/site-packages/neutron/agent/linux/ra.py:190
2016-07-27 09:10:02.770 44778 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ip', 'netns', 'exec', 
'qrouter-37678766-597a-4e33-b83a-65142ca2ced8', 'find', '/sys/class/net', 
'-maxdepth', '1', '-type', 'l', '-printf', '%f '] execute_rootwrap_daemon 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:100
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.linux.utils [-] Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-37678766-597a-4e33-b83a-65142ca2ced8": No such file or directory
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 37678766-597a-4e33-b83a-65142ca2ced8
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 359, in 
_safe_router_removed
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 377, in 
_router_removed
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent ri.delete(self)
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", line 380, in 
delete
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent super(HaRouter, 
self).delete(agent)
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", line 349, 
in delete
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent 
self.router_namespace.delete()
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/namespaces.py", line 100, in 
delete
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent for d in 
ns_ip.get_devices(exclude_loopback=True):
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 130, in 
get_devices
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent 
log_fail_as_error=self.log_fail_as_error
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 140, in 
execute
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent raise 
RuntimeError(msg)
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent RuntimeError: Exit 
code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-37678766-597a-4e33-b83a-65142ca2ced8": No such file or directory
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent
2016-07-27 09:10:02.773 44778 ERROR neutron.agent.l3.agent


Attached logfiles of control node and both network nodes.
At 09:09:00 ->  added router
At 09:10:00 -> deleted router

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: "logs.tar.gz"
   
https://bugs.launchpad.net/bugs/1606801/+attachment/4707969/+files/logs.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606801

Title:
  deleting router run into race condition

Status in neutron:
  New

Bug description:
  After deleting a router, the logfiles of both network nodes are filled up with
  "RuntimeError: Exit code: 1; Stdin: ; Stdout: ; Stderr: Cannot open network
  namespace "qrouter-3767""
  After I restarted the OpenStack services on 

[Yahoo-eng-team] [Bug 1588795] Re: Error when trying to show details of cluster from other tenants (is_public = True)

2016-07-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/342689
Committed: 
https://git.openstack.org/cgit/openstack/sahara-dashboard/commit/?id=d288c54073e8b7f11060b612d35865685f86e9e0
Submitter: Jenkins
Branch:master

commit d288c54073e8b7f11060b612d35865685f86e9e0
Author: Vitaly Gridnev 
Date:   Fri Jul 15 12:09:29 2016 +0300

be safer on retrieving objects

when object is public, there is no guarantee that
all resources are shared between tenants. so, it's
better to safer on case of retrieving additional information
about cluster details.

Change-Id: Ib0d9b4d324f40504d3540d2716df64dd8e206af5
Closes-bug: 1588795
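
The pattern the commit describes amounts to tolerating failed lookups for
related objects that may not be visible from the current tenant; a generic
sketch (illustrative only, not the sahara-dashboard code):

    def safe_call(func, *args, **kwargs):
        # Return None instead of raising when a related resource (plugin,
        # image, network, ...) cannot be retrieved from this tenant, so the
        # details page can still render for a public cluster.
        try:
            return func(*args, **kwargs)
        except Exception:
            return None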


** Changed in: sahara
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1588795

Title:
  Error when trying to show details of cluster from other tenants
  (is_public = True)

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in Sahara:
  Fix Released

Bug description:
  Error with sahara-dashboard

  
  To reproduce: 

  -1 create a cluster
  -2 make it public
  -3 change your tenant
  -4 try to click on the previously created cluster to show details.


  Error:

  Error during template rendering

  In template 
/home/horizon/horizon/.venv/local/lib/python2.7/site-packages/sahara_dashboard/content/data_processing/clusters/templates/clusters/_details.html,
 error at line 32
  Reverse for 'plugin-details' with arguments '('',)' and keyword arguments 
'{}' not found. 1 pattern(s) tried: 
[u'project/data_processing/jobs/plugin/(?P[^/]+)$']

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1588795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606777] [NEW] neutron lbaas dashboard has wrong validation with port

2016-07-27 Thread Jeffrey Zhang
Public bug reported:

In the port validation, even if I enter a plain number, it always shows an
error.

** Affects: neutron-lbaas-dashboard
 Importance: Undecided
 Status: New

** Attachment added: "090.png"
   https://bugs.launchpad.net/bugs/1606777/+attachment/4707900/+files/090.png

** Project changed: neutron => neutron-lbaas-dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606777

Title:
  neutron lbaas dashboard has wrong validation with port

Status in Neutron LBaaS Dashboard:
  New

Bug description:
  In the port validation, even if I enter a plain number, it always shows an
  error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron-lbaas-dashboard/+bug/1606777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp