[Yahoo-eng-team] [Bug 1377024] [NEW] nova-compute fails to spawn a VM with a crash in _get_host_numa_topology

2014-10-02 Thread Tomoe Sugihara
Public bug reported:

Apparently, this is introduced in this commit:
https://github.com/openstack/nova/commit/f5b37bac2eec89003cef6220722cc527a00ae7ee

On Precise with "qemu", I am unable to spawn a VM; nova-compute fails with
the following stack trace:

2014-09-30 16:38:44.188 10841 DEBUG nova.virt.libvirt.driver [-] Updating host 
stats update_status /opt/stack/nova/nova/virt/libvirt/driver.py:6361
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 455, 
in fire_timers
timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
58, in __call__
cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 168, in 
_do_send
waiter.switch(result)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
212, in main
result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in 
run_service
service.start()
  File "/opt/stack/nova/nova/service.py", line 181, in start
self.manager.pre_start_hook()
  File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
self.update_available_resource(nova.context.get_admin_context())
  File "/opt/stack/nova/nova/compute/manager.py", line 5946, in 
update_available_resource
nodenames = set(self.driver.get_available_nodes())
  File "/opt/stack/nova/nova/virt/driver.py", line 1237, in get_available_nodes
stats = self.get_host_stats(refresh=refresh)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5771, in 
get_host_stats
return self.host_state.get_host_stats(refresh=refresh)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host_state
self._host_state = HostState(self)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6331, in __init__
self.update_status()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6387, in 
update_status
numa_topology = self.driver._get_host_numa_topology()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4828, in 
_get_host_numa_topology
for cell in topology.cells])
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
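
The failing division is the per-cell memory, which this libvirt reports as None,
divided by 1024 (the KiB-to-MiB conversion). A minimal sketch of the kind of
guard that would avoid the crash (not the actual nova patch; "cell" stands for
the parsed capabilities cell object):

KIB_PER_MIB = 1024

def cell_memory_mb(cell):
    # libvirt 0.9.8 capabilities have no <memory> element under <cell>,
    # so the parsed cell object reports its memory as None.
    if getattr(cell, 'memory', None) is None:
        return 0
    return cell.memory / KIB_PER_MIB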


Libvirt related package versions:
$ dpkg -l | grep libvir
ii  libvirt-bin     0.9.8-2ubuntu17.20   programs for the libvirt library
ii  libvirt0        0.9.8-2ubuntu17.20   library for interfacing with different virtualization systems
ii  python-libvirt  0.9.8-2ubuntu17.20   libvirt Python bindings


Capabilities XML that I get from libvirt (indeed, the "memory" node under
"cell" is missing):


  

  




  

  


** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377024

Title:
  nova-compute fails to spawn a VM with a crash in
  _get_host_numa_topology

Status in OpenStack Compute (Nova):
  New

Bug description:
  Apparently, this is introduced in this commit:
  
https://github.com/openstack/nova/commit/f5b37bac2eec89003cef6220722cc527a00ae7ee

  On Precise with "qemu", I am unable to spawn a VM; nova-compute fails with
  the following stack trace:

  2014-09-30 16:38:44.188 10841 DEBUG nova.virt.libvirt.driver [-] Updating 
host stats update_status /opt/stack/nova/nova/virt/libvirt/driver.py:6361
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 
455, in fire_timers
  timer()
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
58, in __call__
  cb(*args, **kw)
File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 168, 
in _do_send
  waiter.switch(result)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
212, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in 
run_service
  service.start()
File "/opt/stack/nova/nova/service.py", line 181, in start
  self.manager.pre_start_hook()
File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
File "/opt/stack/nova/nova/compute/manager.py", line 5946, in 
update_available_resource
  nodenames = set(self.driver.get_available_nodes())
File "/opt/stack/nova/nova/virt/driver.py", line 1237, in 
get_available_nodes
  stats = self.get_host_stats(refresh=refresh)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5771, in 
get_host_stats
  return self.host_state.get_host_stats(refresh=refresh)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host

[Yahoo-eng-team] [Bug 1376307] Re: nova compute is crashing with the error TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'

2014-10-02 Thread OpenStack Infra
** Changed in: nova
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376307

Title:
  nova compute is crashing with the error TypeError: unsupported operand
  type(s) for /: 'NoneType' and 'int'

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  nova-compute crashes with the error below when the service is started.

  
  2014-10-01 14:50:26.854 DEBUG nova.virt.libvirt.driver [-] Updating host stats from (pid=9945) update_status /opt/stack/nova/nova/virt/libvirt/driver.py:6361
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 
449, in fire_timers
  timer()
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 
58, in __call__
  cb(*args, **kw)
File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 167, 
in _do_send
  waiter.switch(result)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
207, in main
  result = function(*args, **kwargs)
File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in 
run_service
  service.start()
File "/opt/stack/nova/nova/service.py", line 181, in start
  self.manager.pre_start_hook()
File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
File "/opt/stack/nova/nova/compute/manager.py", line 5946, in 
update_available_resource
  nodenames = set(self.driver.get_available_nodes())
File "/opt/stack/nova/nova/virt/driver.py", line 1237, in 
get_available_nodes
  stats = self.get_host_stats(refresh=refresh)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5771, in 
get_host_stats
  return self.host_state.get_host_stats(refresh=refresh)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host_state
  self._host_state = HostState(self)
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6331, in __init__
  self.update_status()
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6387, in 
update_status
  numa_topology = self.driver._get_host_numa_topology()
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4828, in 
_get_host_numa_topology
  for cell in topology.cells])
  TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
  2014-10-01 14:50:26.989 ERROR nova.openstack.common.threadgroup [-] unsupported operand type(s) for /: 'NoneType' and 'int'


  Seems like the commit 
https://github.com/openstack/nova/commit/6a374f21495c12568e4754800574e6703a0e626f
  is the cause.
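
  One way to confirm on an affected host that the capabilities XML reports no
  per-cell memory (a sketch using the python-libvirt bindings and the standard
  ElementTree parser; not part of the original report):

import xml.etree.ElementTree as etree

import libvirt

conn = libvirt.open('qemu:///system')
caps = etree.fromstring(conn.getCapabilities())
for cell in caps.findall('.//topology/cells/cell'):
    # findtext() returns None when <memory> is absent, which is exactly
    # what makes _get_host_numa_topology() divide None by an int.
    print(cell.get('id'), cell.findtext('memory'))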

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377012] [NEW] Can't delete an image in deleted status

2014-10-02 Thread Sam Morrison
Public bug reported:

I'm trying to delete an image that has a status of "deleted"

It's not actually deleted: image-show still returns it, it still appears in
image_locations, and the data still exists in the backend (Swift, in our case).

glance image-show 17c6077c-99f0-41c7-9bd2-175216330990
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | c9ef771d317595fd3654ca69a4be5f31     |
| container_format | bare                                 |
| created_at       | 2014-05-22T07:58:23                  |
| deleted          | True                                 |
| deleted_at       | 2014-05-23T02:16:53                  |
| disk_format      | raw                                  |
| id               | 17c6077c-99f0-41c7-9bd2-175216330990 |
| is_public        | True                                 |
| min_disk         | 10                                   |
| min_ram          | 0                                    |
| name             | XX                                   |
| owner            | X                                    |
| protected        | False                                |
| size             | 10737418240                          |
| status           | deleted                              |
| updated_at       | 2014-05-23T02:16:53                  |
+------------------+--------------------------------------+

glance image-delete 17c6077c-99f0-41c7-9bd2-175216330990
Request returned failure status.
404 Not Found
Image 17c6077c-99f0-41c7-9bd2-175216330990 not found.
(HTTP 404): Unable to delete image 17c6077c-99f0-41c7-9bd2-175216330990
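
For reference, the same check through the Python client (a sketch; the
endpoint, token and the need for admin credentials are assumptions, not from
this report):

from glanceclient import Client

# Placeholder endpoint and token -- substitute real values; an admin
# token may be needed to see a deleted image at all.
glance = Client('1', endpoint='http://glance.example.com:9292',
                token='ADMIN_TOKEN')

image = glance.images.get('17c6077c-99f0-41c7-9bd2-175216330990')
print(image.status, image.deleted)   # expect 'deleted', True as shown above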

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  I'm trying to delete an image that has a status of "deleted"
  
  It's not deleted as I can do an image-show and it returns plus I can see
  it in image_locations and it exists in the backend which for us is swift
- 
  
  glance image-show 17c6077c-99f0-41c7-9bd2-175216330990
  
+---+--+
  | Property  | Value   
 |
  
+---+--+
  | checksum  | 
c9ef771d317595fd3654ca69a4be5f31 |
  | container_format  | bare
 |
  | created_at| 2014-05-22T07:58:23 
 |
  | deleted   | True
 |
  | deleted_at| 2014-05-23T02:16:53 
 |
  | disk_format   | raw 
 |
  | id| 
17c6077c-99f0-41c7-9bd2-175216330990 |
  | is_public | True
 |
  | min_disk  | 10  
 |
  | min_ram   | 0   
 |
  | name  | XX|
  | owner | X |
  | protected | False   
 |
  | size  | 10737418240 
 |
  | status| deleted 
 |
  | updated_at| 2014-05-23T02:16:53 
 |
  
+---+--+
- sam@cloudboy:~/cloud-init$ glance image-delete 
17c6077c-99f0-41c7-9bd2-175216330990
+ 
+ glance image-delete 17c6077c-99f0-41c7-9bd2-175216330990
  Request returned failure status.
  404 Not Found
  Image 17c6077c-99f0-41c7-9bd2-175216330990 not found.
- (HTTP 404): Unable to delete image 17c6077c-99f0-41c7-9bd2-175216330990
+ (HTTP 404): Unable to delete image 17c6077c-99f0-41c7-9bd2-175216330990

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1377012

Title:
  Can't delete an image in deleted status

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug desc

[Yahoo-eng-team] [Bug 1377005] [NEW] Breaks machine without IPv4: "Route info failed"

2014-10-02 Thread Jeroen T. Vermeulen
Public bug reported:

When I try to deploy a MAAS node on a pure IPv6 network, it seems to
install normally and reboot, but cloud-init gives off an error about
route_info failing.  Eventually the console moves on to a login prompt,
but the machine is not reachable on the network — which means I can't
log in at all.

The console shows:
«
Cloud-init v. 0.7.5 running 'init-local' at Fri, 03 Oct 2014 04:19:29 
cloud-init-nonet[15.53]: waiting 10 seconds for network device
 * Starting Mount network filesystems
 * Stopping Mount network filesystems
cloud-init-nonet[16.46]: static networking is now up
 * Starting configure network device
Cloud-init v. 0.7.5 running 'init' at Fri, 03 Oct 2014 04:19:30 
ci-info: +++Net device info+++
ci-info: ++--+---+---+---+
ci-info: | Device |  Up  |  Address  |Mask   | Hw-Address|
ci-info: ++--+---+---+---+
ci-info: |   lo   | True | 127.0.0.1 | 255.0.0.0 |   |
ci-info: |  eth0  | True | - | - | 00:12:34:56:78:90 |
ci-info: ++--+---+---+---+
ci-info: !!!Route info 
failed!!!
»

After that it pauses for a long time, and finally moves on to a login
prompt.

Here's what was installed in the node's /etc/network/interfaces:
«
auto lo

auto eth0

iface eth0 inet6 static
netmask 64
address fd0d:1777:6bb6:db7::2:1
gateway fd0d:1777:6bb6:db7::1
»

The node doesn't show up even in neighbour discovery.  If I do the same
thing but with eth0 also configured to get a dynamic IPv4 address from
DHCP, then it boots up normally and becomes reachable through both IPv4
and IPv6.
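
A rough sketch of the kind of fallback that would avoid the failure (this is
not cloud-init's actual netinfo code): collect IPv6 routes as well, and treat
an empty or unparsable IPv4 route table as non-fatal on an IPv6-only host.

import subprocess

def route_info():
    routes = {'ipv4': [], 'ipv6': []}
    try:
        out = subprocess.check_output(['netstat', '-rn'])
        routes['ipv4'] = out.decode().splitlines()[2:]
    except Exception:
        pass  # an empty or unavailable IPv4 table should not abort init
    try:
        out = subprocess.check_output(['ip', '-6', 'route', 'show'])
        routes['ipv6'] = out.decode().splitlines()
    except Exception:
        pass
    return routes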

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

  When I try to deploy a MAAS node on a pure IPv6 network, it seems to
  install normally and reboot, but cloud-init gives off an error about
  route_info failing.  Eventually the console moves on to a login prompt,
  but the machine is not reachable on the network — which means I can't
  log in at all.
  
  The console shows:
  «
  Cloud-init v. 0.7.5 running 'init-local' at Fri, 03 Oct 2014 04:19:29 
  cloud-init-nonet[15.53]: waiting 10 seconds for network device
-  * Starting Mount network filesystems
-  * Stopping Mount network filesystems
+  * Starting Mount network filesystems
+  * Stopping Mount network filesystems
  cloud-init-nonet[16.46]: static networking is now up
-  * Starting configure network device
+  * Starting configure network device
  Cloud-init v. 0.7.5 running 'init' at Fri, 03 Oct 2014 04:19:30 
  ci-info: +++Net device info+++
  ci-info: ++--+---+---+---+
  ci-info: | Device |  Up  |  Address  |Mask   | Hw-Address|
  ci-info: ++--+---+---+---+
  ci-info: |   lo   | True | 127.0.0.1 | 255.0.0.0 |   |
  ci-info: |  eth0  | True | - | - | 00:12:34:56:78:90 |
  ci-info: ++--+---+---+---+
  ci-info: !!!Route info 
failed!!!
  »
  
  After that it pauses for a long time, and finally moves on to a login
  prompt.
  
  Here's what was installed in the node's /etc/network/interfaces:
  «
  auto lo
  
  auto eth0
  
  iface eth0 inet6 static
- netmask 64
- address fd0d:1777:6bb6:db7::2:1
- gateway fd0d:1777:6bb6:db7::1
+ netmask 64
+ address fd0d:1777:6bb6:db7::2:1
+ gateway fd0d:1777:6bb6:db7::1
  »
+ 
+ The node doesn't show up even in neighbour discovery.  If I do the same
+ thing but with eth0 also configured to get a dynamic IPv4 address from
+ DHCP, then it boots up normally and becomes reachable through both IPv4
+ and IPv6.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1377005

Title:
  Breaks machine without IPv4: "Route info failed"

Status in Init scripts for use on cloud images:
  New

Bug description:
  When I try to deploy a MAAS node on a pure IPv6 network, it seems to
  install normally and reboot, but cloud-init gives off an error about
  route_info failing.  Eventually the console moves on to a login
  prompt, but the machine is not reachable on the network — which means
  I can't log in at all.

  The console shows:
  «
  Cloud-init v. 0.7.5 running 'init-local' at Fri, 03 Oct 2014 04:19:29 
  cloud-init-nonet[15.53]: waiting 10 seconds for network device
   * Starting Mount network filesystems
   * Stopping Mount network filesystems
  cloud-init-nonet[16.46]: static networking is now up
   * Starting configure network device
  Cloud-init v. 0.7.5 running 'init' at Fri, 03 Oct 

[Yahoo-eng-team] [Bug 1259618] Re: Twice the same IP for an instance on the instances view

2014-10-02 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1259618

Title:
  Twice the same IP for an instance on the instances view

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  With this setup:

  Two compute/network nodes and the configuration set to use two dhcp
  agents per network:

  neutron.conf:
  ...
  dhcp_agents_per_network = 2
  ...

  I have two subnets on a network.

  When I start a VM I get a line with my VM and twice the same private
  IP.

  Here is the capture:

  https://drive.google.com/file/d/0B9I9CCpIXwTPQkNqZ0Foa2NUMDQ/edit?usp=sharing

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1259618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376983] Re: v2.0 API does not work with httpd for admin interface

2014-10-02 Thread Nathan Kinder
*** This bug is a duplicate of bug 1343579 ***
https://bugs.launchpad.net/bugs/1343579

** This bug has been marked a duplicate of bug 1343579
   Versionless GET on keystone gives different answer with port 5000 and 35357

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1376983

Title:
  v2.0 API does not work with httpd for admin interface

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  When Keystone is deployed in Apache httpd, v2.0 does not work for the
  admin  interface.  Here is what I see when using httpd:

  ---
  [rhosuser@rhos ~]$ curl http://127.0.0.1:35357/
  {"versions": {"values": [{"status": "stable", "updated": 
"2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "http://127.0.0.1:35357/v3/";, "rel": "self"}]}]}}

  [rhosuser@rhos ~]$ curl http://127.0.0.1:35357/v2.0
  {"error": {"message": "Could not find version: v2.0", "code": 404, "title": 
"Not Found"}}
  ---

  Here are the results of same requests when running keystone-all with
  the exact same configuration:

  ---
  [rhosuser@rhos ~]$ curl http://127.0.0.1:35357/
  {"versions": {"values": [{"status": "stable", "updated": 
"2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "http://127.0.0.1:35357/v3/";, "rel": "self"}]}, {"status": "stable", 
"updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", 
"type": "application/vnd.openstack.identity-v2.0+json"}, {"base": 
"application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}], 
"id": "v2.0", "links": [{"href": "http://127.0.0.1:35357/v2.0/";, "rel": 
"self"}, {"href": "http://docs.openstack.org/";, "type": "text/html", "rel": 
"describedby"}]}]}}

  [rhosuser@rhos ~]$ curl http://127.0.0.1:35357/v2.0
  {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 
"media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links": 
[{"href": "http://127.0.0.1:35357/v2.0/";, "rel": "self"}, {"href": 
"http://docs.openstack.org/";, "type": "text/html", "rel": "describedby"}]}}
  ---

  There's nothing really of interest in keystone.log with debug enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1376983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376983] [NEW] v2.0 API does not work with httpd for admin interface

2014-10-02 Thread Nathan Kinder
*** This bug is a duplicate of bug 1343579 ***
https://bugs.launchpad.net/bugs/1343579

Public bug reported:

When Keystone is deployed in Apache httpd, v2.0 does not work for the
admin  interface.  Here is what I see when using httpd:

---
[rhosuser@rhos ~]$ curl http://127.0.0.1:35357/
{"versions": {"values": [{"status": "stable", "updated": 
"2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "http://127.0.0.1:35357/v3/";, "rel": "self"}]}]}}

[rhosuser@rhos ~]$ curl http://127.0.0.1:35357/v2.0
{"error": {"message": "Could not find version: v2.0", "code": 404, "title": 
"Not Found"}}
---

Here are the results of same requests when running keystone-all with the
exact same configuration:

---
[rhosuser@rhos ~]$ curl http://127.0.0.1:35357/
{"versions": {"values": [{"status": "stable", "updated": 
"2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "http://127.0.0.1:35357/v3/";, "rel": "self"}]}, {"status": "stable", 
"updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", 
"type": "application/vnd.openstack.identity-v2.0+json"}, {"base": 
"application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}], 
"id": "v2.0", "links": [{"href": "http://127.0.0.1:35357/v2.0/";, "rel": 
"self"}, {"href": "http://docs.openstack.org/";, "type": "text/html", "rel": 
"describedby"}]}]}}

[rhosuser@rhos ~]$ curl http://127.0.0.1:35357/v2.0
{"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 
"media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v2.0+xml"}], "id": "v2.0", "links": 
[{"href": "http://127.0.0.1:35357/v2.0/";, "rel": "self"}, {"href": 
"http://docs.openstack.org/";, "type": "text/html", "rel": "describedby"}]}}
---

There's nothing really of interest in keystone.log with debug enabled.

** Affects: keystone
 Importance: Undecided
 Assignee: Nathan Kinder (nkinder)
 Status: In Progress

** Changed in: keystone
   Status: New => In Progress

** Changed in: keystone
 Assignee: (unassigned) => Nathan Kinder (nkinder)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1376983

Title:
  v2.0 API does not work with httpd for admin interface

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  When Keystone is deployed in Apache httpd, v2.0 does not work for the
  admin  interface.  Here is what I see when using httpd:

  ---
  [rhosuser@rhos ~]$ curl http://127.0.0.1:35357/
  {"versions": {"values": [{"status": "stable", "updated": 
"2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "http://127.0.0.1:35357/v3/";, "rel": "self"}]}]}}

  [rhosuser@rhos ~]$ curl http://127.0.0.1:35357/v2.0
  {"error": {"message": "Could not find version: v2.0", "code": 404, "title": 
"Not Found"}}
  ---

  Here are the results of same requests when running keystone-all with
  the exact same configuration:

  ---
  [rhosuser@rhos ~]$ curl http://127.0.0.1:35357/
  {"versions": {"values": [{"status": "stable", "updated": 
"2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "http://127.0.0.1:35357/v3/";, "rel": "self"}]}, {"status": "stable", 
"updated": "2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", 
"type": "application/vnd.openstack.identity-v2.0+json"}, {"base": 
"application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}], 
"id": "v2.0", "links": [{"href": "http://127.0.0.1:35357/v2.0/";, "rel": 
"self"}, {"href": "http://docs.openstack.org/";, "type": "text/html", "rel": 
"describedby"}]}]}}

  [rhosuser@rhos ~]$ curl http://127.0.0.1:35357/v2.0
  {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 
"media-types": [{"base

[Yahoo-eng-team] [Bug 1376981] [NEW] NSX plugin security group rules OVS flow explosion

2014-10-02 Thread Sudheendra Murthy
Public bug reported:

In our clouds running Havana with VMware NSX, we often see an explosion
of OVS flows when there are many complex security group rules.
Specifically, when the rules involve remote_group_id (a security profile in
NSX), OVS flow rules are created for every pair of VMs belonging to the
tenant, resulting in O(n^2) rules. In large deployments this causes severe
performance issues once the number of OVS flow rules gets into the
millions. In addition, it results in an exponential increase in memory
consumption on the NSX controllers.

The Nicira plugin should attempt to summarize the security group rules
created by users, so that they are represented efficiently in OVS and
memory consumption on the NSX controllers is reduced.

Examples:

1. With every security group, Nicira automatically adds a hidden (hidden
= not stored in Neutron) security group rule to allow ingress IPv4 UDP
traffic on DHCP port 68. If a user creates exactly the same rule, a
duplicate rule is created, maintained by the NSX controllers, and pushed
down to OVS on the hypervisors. Similarly, even if the user creates a
broader rule allowing UDP traffic on all ports, NSX maintains both the
broader rule and the hidden DHCP rule; in that case the additional, more
specific hidden DHCP rule is unnecessary.

2. We have seen cases where users have created both a broader rule to
allow UDP/TCP/ICMP traffic from outside and additional rules to restrict
the same traffic to their tenant VMs. In this case, the self-referential
rules significantly increase OVS flows and can be completely avoided.

Ideally, the NVP plugin (nvplib.py in Havana) should summarize the rules in
the security group before submitting them to the NSX controller.
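
An illustrative sketch of the kind of summarization meant here (plain
dict-based rules; this is not the Havana nvplib.py code): drop a rule when an
already-kept broader rule for the same direction and ethertype allows any
protocol and port from anywhere.

def summarize_rules(rules):
    def subsumes(broad, narrow):
        return (broad.get('direction') == narrow.get('direction') and
                broad.get('ethertype') == narrow.get('ethertype') and
                broad.get('remote_group_id') is None and
                broad.get('remote_ip_prefix') in (None, '0.0.0.0/0') and
                broad.get('protocol') in (None, narrow.get('protocol')) and
                broad.get('port_range_min') is None)

    kept = []
    for rule in rules:
        # Keep a rule only if nothing already kept subsumes it; the result
        # depends on ordering, so this is a simplification, not a full fix.
        if not any(subsumes(broad, rule) for broad in kept):
            kept.append(rule)
    return kept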

** Affects: neutron
 Importance: Undecided
 Assignee: Sudheendra Murthy (sudhi-vm)
 Status: New


** Tags: folsom-backport-potential icehouse-backport-potential nicira vmware

** Changed in: neutron
 Assignee: (unassigned) => Sudheendra Murthy (sudhi-vm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1376981

Title:
  NSX plugin security group rules OVS flow explosion

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In our clouds running Havana with VMware NSX, we often see an
  explosion of OVS flows when there are many complex security group
  rules. Specifically when the rules involve remote_group_id (security
  profile in NSX), there are OVS flow rules created for every pair of
  VMs belonging to the tenant resulting in O(n^2) rules. In large
  deployments, this results in severe performance issues when the number
  of OVS flow rules gets into the millions. In addition, this results in
  an exponential increase in memory consumption on NSX controllers.

  Nicira plugin should make an attempt at summarizing the security group
  rules created by the users, so that it results in efficient
  representation on OVS as well as reduces memory consumption on NSX
  controllers.

  Examples:

  1. With every security group, Nicira automatically adds a hidden
  (hidden = not stored in Neutron) security group rule to allow ingress
  IPv4  UDP traffic on DHCP port 68. If a user creates exactly the same
  rule, then a duplicate rule is created and maintained by NSX
  controllers and pushed down to OVS on hypervisors. The other case is
  even if the user creates a broader rule allowing UDP traffic on all
  ports, NSX maintains both the broader rule and the hidden DHCP rule.
  In this case, there is no need to have the additional more specific
  DHCP hidden rule.

  2. We have seen cases where users have created both a broader rule to
  allow UDP/TCP/ICMP traffic from outside and additional rules to
  restrict the same traffic to their tenant VMs. In this case, the self-
  referential rules significantly increase OVS flows and can be
  completely avoided.

  Ideally, the NVP plugin (nvplib.py in Havana) should summarize the rules
  in the security group before submitting them to the NSX controller.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1376981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362854] Re: Incorrect regex on rootwrap for encrypted volumes ln creation

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362854

Title:
  Incorrect regex on rootwrap for encrypted volumes ln creation

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  While running Tempest tests against my device, the encryption tests
  consistently fail to attach.  Turns out the problem is an attempt to
  create symbolic link for encryption process, however the rootwrap spec
  is restricted to targets with the default openstack.org iqn.

  Error Message from n-cpu:

  Stderr: '/usr/local/bin/nova-rootwrap: Unauthorized command: ln
  --symbolic --force /dev/mapper/ip-10.10.8.112:3260-iscsi-
  iqn.2010-01.com.solidfire:3gd2.uuid-6f210923-36bf-46a4-b04a-
  6b4269af9d4f.4710-lun-0 /dev/disk/by-path/ip-10.10.8.112:3260-iscsi-
  iqn.2010-01.com.sol

  
  Rootwrap entry currently implemented:

  ln: RegExpFilter, ln, root, ln, --symbolic, --force, /dev/mapper/ip
  -.*-iscsi-iqn.2010-10.org.openstack:volume-.*, /dev/disk/by-path/ip
  -.*-iscsi-iqn.2010-10.org.openstack:volume-.*
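
  A quick illustration of why rootwrap rejects the command: the filter pattern
  is anchored to the openstack.org IQN, so a vendor IQN never matches (the
  device path below is an example modeled on the error, not the full original).

import re

pattern = r'/dev/mapper/ip-.*-iscsi-iqn.2010-10.org.openstack:volume-.*'
path = '/dev/mapper/ip-10.10.8.112:3260-iscsi-iqn.2010-01.com.solidfire:3gd2-lun-0'
print(bool(re.match(pattern, path)))   # False -> "Unauthorized command"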

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362480] Re: Datacenter moid should be a value not a tuple

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362480

Title:
  Datacenter moid should be a value not a tuple

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  In edge_appliance_driver.py, a trailing comma is added when setting the
  datacenter moid, so the value stored for the datacenter moid becomes a
  tuple instead of a plain value, which is wrong.

   if datacenter_moid:
       edge['datacenterMoid'] = datacenter_moid,  ===> Should remove the ','
   return edge
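
  A tiny illustration of the effect (the value is made up):

datacenter_moid = 'datacenter-21'          # example value only
edge = {}
edge['datacenterMoid'] = datacenter_moid,  # buggy: stores ('datacenter-21',)
edge['datacenterMoid'] = datacenter_moid   # fixed: stores 'datacenter-21'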

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368391] Re: sqlalchemy-migrate 0.9.2 is breaking nova unit tests

2014-10-02 Thread Adam Gandelman
** Changed in: cinder/icehouse
   Status: Fix Committed => Fix Released

** Changed in: glance/icehouse
   Status: Fix Committed => Fix Released

** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368391

Title:
  sqlalchemy-migrate 0.9.2 is breaking nova unit tests

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance havana series:
  Fix Released
Status in Glance icehouse series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in Database schema migration for SQLAlchemy:
  New

Bug description:
  sqlalchemy-migrate 0.9.2 is breaking nova unit tests

  OperationalError: (OperationalError) cannot commit - no transaction is
  active u'COMMIT;' ()

  http://logs.openstack.org/39/117839/18/gate/gate-nova-
  python27/8a7aa8c/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305423] Re: nova libvirt re-write broken with mulitiple ephemeral disks

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1305423

Title:
  nova libvirt re-write broken with mulitiple ephemeral disks

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  We seem to be experiencing a bug with libvirt.xml device formatting when the
--ephemeral flag is used at the initial boot and nova stop/start or nova
reboot --hard is used afterwards.  We are using the following libvirt options
in nova.conf for storage:
  libvirt_images_type=lvm
  libvirt_images_volume_group=vglocal

  When using nova boot normally with a flavor that has ephemeral storage
defined, it creates two LVM volumes appropriately, e.g.:
  instance-077e_disk
  instance-077e_disk.local

  The instance libvirt.xml contains disk device entries as follows:
  
  



  
  



  

  
  If we use "nova boot --flavor 757c75fa-0b6d-4d4f-a128-27813009bff4 --image 
caa978e0-acae-4205-a4a4-2cf159c166fd --nic 
net-id=44f2fb0b-0a7a-475c-8fff-54cd4b37958b --ephemeral size=1 --ephemeral 
size=1 localdisk-1" the LVM disks for ephemeral goes through enumeration logic 
whether there is one or more --ephemeral options
   instance-07ed_disk   
   instance-07ed_disk.eph0  
   instance-07ed_disk.eph1

  After the instance is spawned, libvirt.xml has disk device entries like the
ones below, and the instance boots happily.
   
  



  
  



  
  



  

  If nova stop/start or nova reboot --hard is executed, the instance is
destroyed and libvirt.xml is recreated.  At this stage the values we passed
with --ephemeral are not respected, and libvirt.xml reverts to the
configuration that would have been generated without the --ephemeral option,
as shown below: there is only one extra disk and it does not use the
enumerated naming.

  



  
  



  

  
  This causes the instance to fail to boot at this stage, even though the
nova block_device_mapping table has records for all three devices.
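
  A simplified illustration of the naming mismatch (this is not nova's code;
  the boolean flag just stands in for the two code paths):

def eph_disk_names(instance_name, num_ephemerals, from_bdm):
    if from_bdm:
        # initial boot path: one LV per --ephemeral entry
        return ['%s_disk.eph%d' % (instance_name, i)
                for i in range(num_ephemerals)]
    # hard-reboot path in the buggy rewrite: falls back to the flavor default
    return ['%s_disk.local' % instance_name]

print(eph_disk_names('instance-07ed', 2, from_bdm=True))
print(eph_disk_names('instance-07ed', 2, from_bdm=False))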

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1305423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296478] Re: The Hyper-V driver's list_instances() returns an empty result set on certain localized versions of the OS

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296478

Title:
  The Hyper-V driver's list_instances() returns an empty result set on
  certain localized versions of the OS

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  This issue is related to different values that MSVM_ComputerSystem's
  Caption property can have on different locales.
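
  A rough sketch of a locale-independent way to list the VMs (assumes the
  "wmi" Python package on the Hyper-V host; not necessarily the committed
  fix): filter out the host entry by name instead of relying on the localized
  Caption string.

import platform

import wmi

def list_instance_names():
    conn = wmi.WMI(moniker='//./root/virtualization')
    host = platform.node().lower()
    return [vm.ElementName for vm in conn.Msvm_ComputerSystem()
            if vm.ElementName.lower() != host]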

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1242366] Re: volume attach failed if attach again to an pause to active VM

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1242366

Title:
  volume attach failed if attach again to an pause to active VM

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Steps are as follows:
  1) Create one VM
  2) Attach a volume to the VM
  3) Pause the VM
  4) Detach the volume
  5) Unpause the VM
  6) Re-attach the volume to the same device; nova-compute throws an exception
     (see the reproduction sketch below)
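
  A reproduction sketch of the steps above using the python-novaclient of that
  era (credentials, URLs and IDs are placeholders, not values from this report):

from novaclient.v1_1 import client

nova = client.Client('USER', 'PASSWORD', 'TENANT',
                     'http://keystone.example.com:5000/v2.0')

server_id = 'SERVER_UUID'
volume_id = 'VOLUME_UUID'

nova.volumes.create_server_volume(server_id, volume_id, '/dev/vdb')  # 2) attach
nova.servers.pause(server_id)                                        # 3) pause
nova.volumes.delete_server_volume(server_id, volume_id)              # 4) detach
nova.servers.unpause(server_id)                                      # 5) unpause
# 6) re-attaching to the same device fails with DeviceIsBusy on the compute node
nova.volumes.create_server_volume(server_id, volume_id, '/dev/vdb')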

  2013-10-20 23:21:22.520 DEBUG amqp [-] Channel open from (pid=19728) _open_ok 
/usr/local/lib/python2.7/dist-packages/amqp-1.0.12-py2.7.egg/amqp/channel.py:420
  2013-10-20 23:21:22.520 ERROR nova.openstack.common.rpc.amqp 
[req-5f0d786e-1273-4611-b0a5-a787754c6bc8 admin admin] Exception during message 
handling
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp Traceback (most 
recent call last):
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/exception.py", line 90, in wrapped
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/exception.py", line 73, in wrapped
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 244, in decorated_function
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 230, in decorated_function
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 272, in decorated_function
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 259, in decorated_function
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3649, in attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp context, 
instance, mountpoint)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3644, in attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp mountpoint, 
instance)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3690, in _attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp connector)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/compute/manager.py", line 3680, in _attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp 
encryption=encryption)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1107, in attach_volume
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp raise 
exception.DeviceIsBusy(device=disk_dev)
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp DeviceIsBusy: 
The supplied device (vdb) is busy.
  2013-10-20 23:21:22.520 TRACE nova.openstack.common.rpc.amqp 
  ^C2013-10-20 23:21:24.871 INFO nova.openstack.common.service [-] Caught 
SIGINT, exiting

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1242366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300972] Re: os-simple-tenant-usage: printing trace in logs if not passing all requirements

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300972

Title:
  os-simple-tenant-usage: printing trace in logs if not passing all
  requirements

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The os-simple-tenant-usage REST call returns a 400 response code.


  from api.log:
  2014-03-20 11:27:58.044 49351 INFO nova.osapi_compute.wsgi.server 
[req-b653843a-de1c-4267-af33-7f7a9c5fa99d 0 b38c2504810f437883a5b57a8b13fe7f] 
9.41.223.193,127.0.0.1 "GET 
/v2/b38c2504810f437883a5b57a8b13fe7f/os-simpletenant-usage HTTP/1.1" status: 
404 len: 302 time: 0.6078889
  2014-03-20 11:28:04.221 49351 ERROR nova.api.openstack.wsgi 
[req-09d455d6-c963-4729-b1b6-082bd04cfe8b 0 b38c2504810f437883a5b57a8b13fe7f] 
NV-46FDF46 Exception handling resource: strptime() argument 1 must be string, 
not None
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 991, in 
_process_stack
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi action_result 
= self.dispatch(meth, request, action_args)
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py", line 1078, in 
dispatch
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/simple_tenant_usage.py",
 line 252, in index
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi 
(period_start, period_stop, detailed) = self._get_datetime_range(req)
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/simple_tenant_usage.py",
 line 234, in _get_datetime_range
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi period_start 
= self._parse_datetime(env.get('start', [None])[0])
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.6/site-packages/nova/api/openstack/compute/contrib/simple_tenant_usage.py",
 line 220, in _parse_datetime
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi value = 
timeutils.parse_strtime(dtstr, "%Y-%m-%d %H:%M:%S.%f")
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/timeutils.py", line 65, 
in parse_strtime
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi return 
datetime.datetime.strptime(timestr, fmt)
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi TypeError: 
strptime() argument 1 must be string, not None
  2014-03-20 11:28:04.221 49351 TRACE nova.api.openstack.wsgi
  2014-03-20 11:28:04.226 49351 INFO nova.osapi_compute.wsgi.server 
[req-09d455d6-c963-4729-b1b6-082bd04cfe8b 0 b38c2504810f437883a5b57a8b13fe7f] 
9.41.223.193,127.0.0.1 "GET 
/v2/b38c2504810f437883a5b57a8b13fe7f/os-simple-tenant-usage HTTP/1.1" status: 
400 len: 362 time: 0.0551372
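
  A minimal defensive sketch for the parsing step (not necessarily the
  committed fix; the caller would still need to pick sensible defaults when no
  value is supplied):

import datetime

from webob import exc

def _parse_datetime(dtstr):
    if not dtstr:
        return None   # let the caller default to e.g. "now"
    for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%dT%H:%M:%S.%f",
                "%Y-%m-%d %H:%M:%S.%f"):
        try:
            return datetime.datetime.strptime(dtstr, fmt)
        except ValueError:
            pass
    raise exc.HTTPBadRequest(explanation="Invalid start/end time format")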

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1275173] Re: _translate_from_glance() can cause an unnecessary HTTP request

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1275173

Title:
  _translate_from_glance() can cause an unnecessary HTTP request

Status in Cinder:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  I noticed when performing a "nova image-show" on a current (not
  deleted) image, two HTTP requests were issued. Why isn't the Image
  retrieved on the first GET request?

  In fact, it is. The problem lies in _extract_attributes(), called by
  _translate_from_glance(). This function loops through a list of
  expected attributes, and extracts them from the passed-in Image. The
  problem is that if the attribute 'deleted' is False, there won't be a
  'deleted_at' attribute in the Image. Not finding the attribute results
  in getattr() making another GET request (to try to find the "missing"
  attribute?). This is unnecessary of course, since it makes sense for
  the Image to not have that attribute set.
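
  A sketch of the kind of guard described above (attribute list abbreviated;
  not nova's exact code): skip the lazy 'deleted_at' lookup when the image is
  not deleted, so the glanceclient proxy never issues the second GET.

IMAGE_ATTRIBUTES = ['id', 'name', 'status', 'size', 'deleted', 'deleted_at']

def _extract_attributes(image):
    output = {}
    for attr in IMAGE_ATTRIBUTES:
        if attr == 'deleted_at' and not output.get('deleted'):
            output[attr] = None          # attribute is legitimately absent
        else:
            output[attr] = getattr(image, attr)
    return output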

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1275173/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1220256] Re: Hyper-V driver needs tests for WMI WQL instructions

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1220256

Title:
  Hyper-V driver needs tests for WMI WQL instructions

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The Hyper-V Nova driver uses mainly WMI to access the hypervisor and OS 
features. 
  Additional tests can be added in this area.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1220256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293184] Re: Can't clear shared flag of unused network

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293184

Title:
  Can't clear shared flag of unused network

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  A network marked as external can be used as a gateway for tenant routers, 
even though it's not necessarily marked as shared.
  If the 'shared' attribute is changed from True to False for such a network,
you get an error:
  Unable to reconfigure sharing settings for network sharetest. Multiple 
tenants are using it

  This is clearly not the intention of the 'shared' field, so if there
  are only service ports on the network there is no reason to block
  changing it from shared to not shared.
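
  A sketch of the check suggested above (not the merged neutron change):
  ports owned by network services should not count as "other tenants using
  the network".

def used_by_other_tenants(ports, network_tenant_id):
    return any(port['tenant_id'] != network_tenant_id
               for port in ports
               if not port.get('device_owner', '').startswith('network:'))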

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293184/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1292102] Re: AttributeError: 'NoneType' object has no attribute 'obj' (driver.obj.release_segment(session, segment))

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1292102

Title:
  AttributeError: 'NoneType' object has no attribute 'obj'
  (driver.obj.release_segment(session, segment))

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in “neutron” package in Ubuntu:
  Fix Released
Status in “neutron” source package in Trusty:
  Triaged
Status in “neutron” source package in Utopic:
  Fix Released

Bug description:
  When trying to delete a network, I hit a traceback.

  ubuntu@neutron01:~$ neutron port-list

  ubuntu@neutron01:~$ neutron net-list
  +--+-+-+
  | id   | name| subnets |
  +--+-+-+
  | 822d2b2e-481f-4838-9fe5-459be7b10193 | int_net | |
  | ac498310-833b-42f2-9009-049cac145c71 | ext_net | |
  +--+-+-+

  ubuntu@neutron01:~$ neutron --debug net-delete int_net
  Request Failed: internal server error while processing your request.
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 527, 
in run_subcommand
  return run_command(cmd, cmd_parser, sub_argv)
File "/usr/lib/python2.7/dist-packages/neutronclient/shell.py", line 80, in 
run_command
  return cmd.run(known_args)
File 
"/usr/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py", line 
510, in run
  obj_deleter(_id)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
112, in with_params
  ret = self.function(instance, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
354, in delete_network
  return self.delete(self.network_path % (network))
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1233, in delete
  headers=headers, params=params)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1222, in retry_request
  headers=headers, params=params)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1165, in do_request
  self._handle_fault_response(status_code, replybody)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
1135, in _handle_fault_response
  exception_handler_v20(status_code, des_error_body)
File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
85, in exception_handler_v20
  message=error_dict)
  NeutronClientException: Request Failed: internal server error while 
processing your request.
  ubuntu@neutron01:~$

  
  /var/log/neutron/server.log
  
  2014-03-13 12:30:09.930 16624 ERROR neutron.api.v2.resource 
[req-cc63906f-1e13-4d22-becf-86979d80399f None] delete failed
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 438, in delete
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 479, in 
delete_network
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource 
self.type_manager.release_segment(session, segment)
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 104, 
in release_segment
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource 
driver.obj.release_segment(session, segment)
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource AttributeError: 
'NoneType' object has no attribute 'obj'
  2014-03-13 12:30:09.930 16624 TRACE neutron.api.v2.resource
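
  A hedged sketch of the guard that would avoid this traceback in the ML2 type
  manager (class and logger are minimal stand-ins; not necessarily the
  released fix): skip the release when no type driver is registered for the
  segment's network_type instead of dereferencing None.

import logging

LOG = logging.getLogger(__name__)

class TypeManager(object):
    def __init__(self, drivers):
        # maps network_type -> driver extension (each with an .obj attribute)
        self.drivers = drivers

    def release_segment(self, session, segment):
        network_type = segment.get('network_type')
        driver = self.drivers.get(network_type)
        if driver is None:
            LOG.error("No type driver registered for network_type %s; "
                      "segment %s was not released",
                      network_type, segment.get('id'))
            return
        driver.obj.release_segment(session, segment)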

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: neutron-plugin-ml2 1:2014.1~b3-0ubuntu1
  ProcVersionSignature: Ubuntu 3.13.0-16.36-generic 3.13.5
  Uname: Linux 3.13.0-16-generic x86_64
  ApportVersion: 2.13.3-0ubuntu1
  Architecture: amd64
  Date: Thu Mar 13 12:36:03 2014
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen.linux
   PATH=(custom, no user)
   XDG_RUNTIME_DIR=
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: neutron
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.n

[Yahoo-eng-team] [Bug 1303536] Re: Live migration fails. XML error: CPU feature `wdt' specified more than once

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303536

Title:
  Live migration fails. XML error: CPU feature `wdt' specified more than
  once

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Description of problem
  ---

  Live migration fails.
  libvirt says "XML error: CPU feature `wdt' specified more than once"

  Version
  -

  ii  libvirt-bin 1.2.2-0ubuntu2
amd64programs for the libvirt library
  ii  python-libvirt  1.2.2-0ubuntu1
amd64libvirt Python bindings
  ii  nova-compute1:2014.1~b3-0ubuntu2  
all  OpenStack Compute - compute node base
  ii  nova-compute-kvm1:2014.1~b3-0ubuntu2  
all  OpenStack Compute - compute node (KVM)
  ii  nova-cert   1:2014.1~b3-0ubuntu2  
all  OpenStack Compute - certificate management

  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=14.04
  DISTRIB_CODENAME=trusty
  DISTRIB_DESCRIPTION="Ubuntu Trusty Tahr (development branch)"
  NAME="Ubuntu"
  VERSION="14.04, Trusty Tahr"

  
  Test env
  --

  A two-node OpenStack Havana deployment on Ubuntu 14.04. Migrating an
  instance to the other node.

  
  Steps to Reproduce
  --
   - Migrate the instance

  
  And observe /var/log/nova/compute.log and /var/log/libvirt.log

  Actual results
  --

  /var/log/nova-conductor.log

  2014-04-04 13:42:17.128 3294 ERROR oslo.messaging._drivers.common [-] 
['Traceback (most recent call last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
122, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139, in 
inner\nreturn func(*args, **kwargs)\n', '  File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 668, in 
migrate_server\nblock_migration, disk_over_commit)\n', '  File 
"/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 769, in 
_live_migrate\nraise exception.MigrationError(reason=ex)\n'
 , 'MigrationError: Migration error: Remote error: libvirtError XML error: CPU 
feature `wdt\' specified more than once\n[u\'Traceback (most recent call 
last):\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply\\nincoming.message))\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch\\nreturn self._do_dispatch(endpoint, method, ctxt, args)\\n\', 
u\'  File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", 
line 122, in _do_dispatch\\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped\\n
payload)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\\nsix.reraise(self.type_, self.value, self.tb)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped\\n  
   return f(self, context, *args, **kw)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 272, in 
decorated_function\\ne, sys.exc_info())\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\\nsix.reraise(self.type_, self.value, self.tb)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 259, in 
decorated_function\\nreturn function(self, context, *args, **kwargs)\\n\', 
u\'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 
4159, in check_can_live_migrate_destination\\nblock_migration, 
disk_over_commit)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4094, in 
check_can_live_migrate_destination\\n
self._compare_cpu(source_cpu_info)\\n\', u\'  File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4236, in 
_compare_cpu\\nLOG.error(m, {\\\'ret\\\': ret, \\\'u\\\': u})\\n\', u\'
   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.

[Yahoo-eng-team] [Bug 1327406] Re: The One And Only network is variously visible

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327406

Title:
  The One And Only network is variously visible

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  I am testing with the templates in
  https://review.openstack.org/#/c/97366/

  I can create a stack.  I can use `curl` to hit the webhooks to scale
  up and down the old-style group and to scale down the new-style group;
  those all work.  What fails is hitting the webhook to scale up the
  new-style group.  Here is a typescript showing the failure:

  $ curl -X POST
  
'http://10.10.0.125:8000/v1/signal/arn%3Aopenstack%3Aheat%3A%3A39675672862f4bd08505bfe1283773e0%3Astacks%2Ftest4
  %2F3cd6160b-
  
d8c5-48f1-a527-4c7df9205fc3%2Fresources%2FNewScaleUpPolicy?Timestamp=2014-06-06T19%3A45%3A27Z&SignatureMethod=HmacSHA256&AWSAccessKeyId=35678396d987432f87cda8e4c6cdbfb5&SignatureVersion=2&Signature=W3aJQ6SR7O5lLOxLEQndbzNB%2FUhefr1W7qO9zNZ%2BHVs%3D'

  The request processing has failed due to an 
internal error:Remote error: ResourceFailure Error: Nested stack UPDATE failed: 
Error: Resource CREATE failed: NotFound: No Network matching {'label': 
u'private'}. (HTTP 404)
  [u'Traceback (most recent call last):\n', u'  File 
"/opt/stack/heat/heat/engine/service.py", line 61, in wrapped\nreturn 
func(self, ctx, *args, **kwargs)\n', u'  File 
"/opt/stack/heat/heat/engine/service.py", line 911, in resource_signal\n
stack[resource_name].signal(details)\n', u'  File 
"/opt/stack/heat/heat/engine/resource.py", line 879, in signal\nraise 
failure\n', u"ResourceFailure: Error: Nested stack UPDATE failed: Error: 
Resource CREATE failed: NotFound: No Network matching {'label': u'private'}. 
(HTTP 
404)\n"].InternalFailureServer

  The original sin looks like this in the heat engine log:

  2014-06-06 17:39:20.013 28692 DEBUG urllib3.connectionpool 
[req-2391a9ea-46d6-46f0-9a7b-cf999a8697e9 ] "GET 
/v2/39675672862f4bd08505bfe1283773e0/os-networks HTTP/1.1" 200 16 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
  2014-06-06 17:39:20.014 28692 ERROR heat.engine.resource 
[req-2391a9ea-46d6-46f0-9a7b-cf999a8697e9 None] CREATE : Server "my_instance" 
Stack "test1-new_style-qidqbd5nrk44-43e7l57kqf5w-4t3xdjrfrr7s" 
[20523269-0ebb-45b8-ad59-75f55607f3bd]
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource Traceback (most 
recent call last):
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 383, in _do_action
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource handle())
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resources/server.py", line 493, in handle_create
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource nics = 
self._build_nics(self.properties.get(self.NETWORKS))
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resources/server.py", line 597, in _build_nics
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource network = 
self.nova().networks.find(label=label_or_uuid)
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource   File 
"/opt/stack/python-novaclient/novaclient/base.py", line 194, in find
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource raise 
exceptions.NotFound(msg)
  2014-06-06 17:39:20.014 28692 TRACE heat.engine.resource NotFound: No Network 
matching {'label': u'private'}. (HTTP 404)

  Private debug logging reveals that in the scale-up case, the call to
  "GET /v2/{tenant-id}/os-networks HTTP/1.1" returns with response code
  200 and an empty list of networks.  Comparing with the corresponding
  call when the stack is being created shows no difference in the calls
  --- because the normal logging omits the headers --- even though the
  results differ (when the stack is being created, the result contains
  the correct list of networks).  Turning on HTTP debug logging in the
  client reveals that the X-Auth-Token headers differ.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327497] Re: live-migration fails when FC multipath is used

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327497

Title:
  live-migration fails when FC multipath is used

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  I tried live-migration against a VM with multipath access to an FC bootable
  volume and an FC data volume.
  After checking the code, I found the reasons are:
  1. /dev/dm- is used, which is subject to change on the destination
  Compute Node since it is not unique across nodes
  2. multipath_id in connection_info is not maintained properly and may be
  lost during connection refreshing

  The fix would be:
  1. Like iSCSI multipath, use /dev/mapper/ instead of /dev/dm-
  2. Since multipath_id is unique for a volume no matter where it is attached,
  add logic to preserve this information (a sketch follows below).
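
  As an illustration only (not the actual Nova patch), a fix along those lines
  could resolve the device path from the multipath id and keep that id in
  connection_info; the helper and field names here are hypothetical:

  import os


  def multipath_device_path(multipath_id):
      # /dev/mapper/<multipath_id> is stable across hosts, unlike /dev/dm-N,
      # whose minor number can differ on the destination compute node.
      return '/dev/mapper/%s' % multipath_id


  def connect_volume(connection_info, discovered_multipath_id):
      data = connection_info['data']
      # Preserve the multipath id once discovered, so later refreshes of the
      # connection info (e.g. during live migration) do not lose it.
      data.setdefault('multipath_id', discovered_multipath_id)
      device_path = multipath_device_path(data['multipath_id'])
      if not os.path.exists(device_path):
          raise RuntimeError('multipath device %s not found' % device_path)
      data['device_path'] = device_path
      return device_path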

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286412] Re: Add support for router and network scheduling in Cisco N1kv Plugin.

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1286412

Title:
  Add support for router and network scheduling in Cisco N1kv Plugin.

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  Added functionality to schedule routers and networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1286412/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311758] Re: OpenDaylight ML2 Mechanism Driver does not handle authentication errors

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311758

Title:
  OpenDaylight ML2 Mechanism Driver does not handle authentication
  errors

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  This behaviour was noticed when troubleshooting a misconfiguration.
  Authentication with ODL was failing and the exception was being ignored.

  In the "sync_resources" method of the ODL Mechanism Driver, HTTPError 
exceptions with a status code of 404 are handled but the exception is not 
re-raised if the status code is not 404. 
  It is preferable to re-raise this exception.

  In addition, it would be helpful if "obtain_auth_cookies" threw a more
  specific exception than HTTPError when authentication with the ODL
  controller fails.
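
  As an illustration of both points (not the actual driver code; the exception
  class and helpers below are made up for the sketch):

  import requests


  class OpenDaylightAuthError(Exception):
      """Raised when authentication with the ODL controller fails."""


  def sync_resources_safe(fetch):
      # fetch() performs the REST call and raises requests.HTTPError on 4xx/5xx.
      try:
          return fetch()
      except requests.HTTPError as e:
          if e.response is not None and e.response.status_code == 404:
              return None  # resource does not exist yet; treat as empty
          raise  # re-raise anything that is not a 404, e.g. 401/403


  def obtain_auth_cookies_safe(login):
      try:
          return login()
      except requests.HTTPError as e:
          raise OpenDaylightAuthError('authentication with ODL failed: %s' % e)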

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304968] Re: Nova cpu full of instance_info_cache stack traces due to attempting to send events about deleted instances

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304968

Title:
  Nova cpu full of instance_info_cache stack traces due to attempting to
  send events about deleted instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The bulk of the stack traces in n-cpu occur because emit_event is triggered
  on a VM delete; however, by the time we get to emit_event the instance has
  already been deleted (we see this exception 183 times in this log, which
  means it happens on *every* compute terminate), so when we try to look up
  the instance we hit the exception found here:

  @base.remotable_classmethod
  def get_by_instance_uuid(cls, context, instance_uuid):
      db_obj = db.instance_info_cache_get(context, instance_uuid)
      if not db_obj:
          raise exception.InstanceInfoCacheNotFound(
              instance_uuid=instance_uuid)
      return InstanceInfoCache._from_db_object(context, cls(), db_obj)

  A log trace of this interaction looks like this:

  
  2014-04-08 11:14:25.475 DEBUG nova.openstack.common.lockutils 
[req-fe9db989-416e-4da0-986c-e68336e3c602 TenantUsagesTestJSON-153098759 
TenantUsagesTestJSON-953946497] Semaphore / lock released 
"do_terminate_instance" inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:252
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore 
"75da98d7-bbd5-42a2-ad6f-7a66e38977fa" lock 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:168
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore / lock "do_terminate_instance" inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:248
  2014-04-08 11:14:25.907 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore "" 
lock /opt/stack/new/nova/nova/openstack/common/lockutils.py:168
  2014-04-08 11:14:25.908 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Got semaphore / lock "_clear_events" inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:248
  2014-04-08 11:14:25.908 DEBUG nova.openstack.common.lockutils 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Semaphore / lock released "_clear_events" inner 
/opt/stack/new/nova/nova/openstack/common/lockutils.py:252
  2014-04-08 11:14:25.928 AUDIT nova.compute.manager 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] [instance: 75da98d7-bbd5-42a2-ad6f-7a66e38977fa] 
Terminating instance
  2014-04-08 11:14:25.989 DEBUG nova.objects.instance 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Lazy-loading `system_metadata' on Instance uuid 
75da98d7-bbd5-42a2-ad6f-7a66e38977fa obj_load_attr 
/opt/stack/new/nova/nova/objects/instance.py:519
  2014-04-08 11:14:26.209 DEBUG nova.network.api 
[req-687de0cf-67fc-434f-927f-4c37665ad5d8 FixedIPsTestJson-234831436 
FixedIPsTestJson-1960919997] Updating cache with info: [VIF({'ovs_interfaceid': 
None, 'network': Network({'bridge': u'br100', 'subnets': [Subnet({'ips': 
[FixedIP({'meta': {}, 'version': 4, 'type': u'fixed', 'floating_ips': [], 
'address': u'10.1.0.2'})], 'version': 4, 'meta': {u'dhcp_server': u'10.1.0.1'}, 
'dns': [IP({'meta': {}, 'version': 4, 'type': u'dns', 'address': u'8.8.4.4'})], 
'routes': [], 'cidr': u'10.1.0.0/24', 'gateway': IP({'meta': {}, 'version': 4, 
'type': u'gateway', 'address': u'10.1.0.1'})}), Subnet({'ips': [], 'version': 
None, 'meta': {u'dhcp_server': None}, 'dns': [], 'routes': [], 'cidr': None, 
'gateway': IP({'meta': {}, 'version': None, 'type': u'gateway', 'address': 
None})})], 'meta': {u'tenant_id': None, u'should_create_bridge': True, 
u'bridge_interface': u'eth0'}, 'id': u'9751787e-f41c-4299-be13-941c901f6d18', 
'label': u'private'}), 'devname': N
 one, 'qbh_params': None, 'meta': {}, 'details': {}, 'address': 
u'fa:16:3e:d8:87:38', 'active': False, 'type': u'bridge', 'id': 
u'db1ac48d-805a-45d3-9bb9-786bb5855673', 'qbg_params': None})] 
update_instance_cache_with_nw_info /opt/stack/new/nova/nova/network/api.py:74
  2014-04-08 11:14:27.661 2894 DEBUG nova.virt.driver [-] Emitting event 
 emit_event 
/opt/stack/new/nova/nova/virt/driver.py:1207
  2014-04-08 11:14:27.661 2894 INFO nova.compute.manager [-] Lifecycle event 1 
on VM 75da98d7-bbd5-42a2-ad6f-7a66e38977fa
  2014-04-0

[Yahoo-eng-team] [Bug 1316618] Re: add host to security group broken

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1316618

Title:
  add host to security group broken

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  I am running nova/neutron forked from trunk around 12/30/2013. Neutron
  is configured with openvswitch plugin and security group enabled.

  How to reproduce the issue: create a security group SG1; add a rule to
  allow ingress from SG1 group to port 5000; add host A, B, and C to SG1
  in order.

  It seems that A can talk to B and C over port 5000, and B can talk to C,
  but C can talk to neither A nor B. I confirmed that the iptables rules are
  incorrect for A and B. It seems that when A is added to the group, nothing
  changes since no other group member exists yet; when B and C are later added
  to the group, A's ingress iptables rules are never updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1316618/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330490] Re: can't create security group rule by ip protocol when using postgresql

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330490

Title:
  can't create security group rule by ip protocol when using postgresql

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  When I try to create a security group rule using an IP protocol number, it
  fails if the DB in use is PostgreSQL.

  I can reproduce the problem on havana, icehouse and master.

  2014-06-16 08:41:07.009 15134 ERROR neutron.api.v2.resource 
[req-3d2d03a3-2d8a-4ad0-b41d-098aecd5ecb8 None] create failed
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 87, in 
resource
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 419, in create
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource obj = 
obj_creator(request.context, **kwargs)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_rpc_base.py", line 
43, in create_security_group_rule
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource bulk_rule)[0]
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 266, 
in create_security_group_rule_bulk_native
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
self._check_for_duplicate_rules(context, r)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 394, 
in _check_for_duplicate_rules
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource db_rules = 
self.get_security_group_rules(context, filters)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/securitygroups_db.py", line 421, 
in get_security_group_rules
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
page_reverse=page_reverse)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 197, 
in _get_collection
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource items = 
[dict_func(c, fields) for c in query]
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2353, in 
__iter__
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2368, in 
_execute_and_instances
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in 
execute
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource params)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, in 
_execute_clauseelement
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource compiled_sql, 
distilled_params
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in 
_execute_context
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource context)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1024, in 
_handle_dbapi_exception
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource exc_info
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 196, in 
raise_from_cause
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource 
reraise(type(exception), exception, tb=exc_tb)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 867, in 
_execute_context
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource context)
  2014-06-16 08:41:07.009 15134 TRACE neutron.api.v2.resource   File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 324,

[Yahoo-eng-team] [Bug 1329764] Re: Hyper-V volume attach issue: wrong SCSI slot is selected

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329764

Title:
  Hyper-V volume attach issue: wrong SCSI slot is selected

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  When attaching volumes, the Hyper-V driver selects the slot on the
  SCSI controller by using the number of drives attached to that
  controller.

  This leads to exceptions when detaching volumes having lower numbered
  slots and then attaching a new volume.

  Take for example 2 volumes attached, which will have 0 and 1 as controller
  addresses. If the first one gets detached, the next time we try to attach a
  volume the controller address 1 will be used (as it is the number of drives
  attached to the controller at that time), but that slot is actually in use,
  so an exception will be raised.
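
  A minimal sketch of the alternative this suggests, picking the lowest free
  slot instead of the current drive count (illustrative only, not the actual
  Hyper-V driver code):

  def first_free_slot(used_slots, max_slots=64):
      # Reuse gaps left by detached volumes instead of assuming the slot
      # numbered len(used_slots) is free.
      for slot in range(max_slots):
          if slot not in used_slots:
              return slot
      raise RuntimeError('no free slot on the SCSI controller')


  # The example from the description: slots 0 and 1 in use, then slot 0 freed.
  assert first_free_slot({1}) == 0   # the freed slot is reused, not slot 1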

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304593] Re: VMware: waste of disk datastore when root disk size of instance is 0

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1304593

Title:
  VMware: waste of disk datastore when root disk size of instance is 0

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in The OpenStack VMwareAPI subTeam:
  New

Bug description:
  When an instance has 0 root disk size an extra image is created on the
  datastore (uuid.0.vmdk that is identical to uuid.vmdk). This is only
  in the case of a linked clone image and wastes space on the datastore.
  The original image that is cached can be used.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1304593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288574] Re: backup operation should delete image if snapshot failed

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288574

Title:
  backup operation should delete image if snapshot failed

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  When we snapshot an instance, we use @delete_image_on_error to delete any
  failed snapshot. However, the image is not removed by the backup code flow,
  which becomes an issue if too many backups fail: eventually all useful
  images are removed and only 'error' images are left on the host.
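
  A sketch of the idea, reusing the same decorator pattern for the backup flow
  (illustrative only; self.image_api and the argument order are assumptions,
  not the actual Nova code):

  import functools


  def delete_image_on_error(func):
      # If the snapshot/backup upload fails, remove the image that was created
      # for it, so failed attempts do not leave 'error' images behind.
      @functools.wraps(func)
      def wrapper(self, context, image_id, instance, *args, **kwargs):
          try:
              return func(self, context, image_id, instance, *args, **kwargs)
          except Exception:
              self.image_api.delete(context, image_id)
              raise
      return wrapper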

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288574/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290294] Re: Instance's XXX_resize dir never be deleted if we resize a pre-grizzly instance in havana

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290294

Title:
  Instance's XXX_resize dir never be deleted if we resize a pre-grizzly
  instance in havana

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  reproduce steps:
  1. create an instance under Folsom
  2. update nova to Havana
  3. resize the instance to another host
  4. confirm the resize
  5. examine the instance dir on source host

  you will find the instance-_resize dir exists there which was
  not deleted while confirming resize.

  the reason is that:
  in _cleanup_resize in the libvirt driver:

  def _cleanup_resize(self, instance, network_info):
      target = libvirt_utils.get_instance_path(instance) + "_resize"

  we get the instance path by using the get_instance_path method in libvirt
  utils, but we check for the original instance dir of pre-grizzly instances
  before we return it. If this instance is a resized one whose original
  instance dir exists on another host (the dest host), the wrong instance path
  (with the uuid) will be returned, the `target` existence check will fail,
  and the instance-_resize dir will never be deleted.

  def get_instance_path(instance, forceold=False, relative=False):
      """Determine the correct path for instance storage.

      This method determines the directory name for instance storage, while
      handling the fact that we changed the naming style to something more
      unique in the grizzly release.

      :param instance: the instance we want a path for
      :param forceold: force the use of the pre-grizzly format
      :param relative: if True, just the relative path is returned

      :returns: a path to store information about that instance
      """
      pre_grizzly_name = os.path.join(CONF.instances_path, instance['name'])
      if forceold or os.path.exists(pre_grizzly_name):
          # here we check the original instance dir, but if we have resized
          # the instance to another host, this check will fail, and a wrong
          # dir with the instance uuid will be returned.
          if relative:
              return instance['name']
          return pre_grizzly_name

      if relative:
          return instance['uuid']
      return os.path.join(CONF.instances_path, instance['uuid'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290294/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326183] Re: detach interface fails as instance info cache is corrupted

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326183

Title:
  detach interface fails as instance info cache is corrupted

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  
  Performing attach/detach interface on a VM sometimes results in an interface 
that can't be detached from the VM.
  I traced it to the instance info cache being corrupted by a non-atomic
  update of that information.
  Details on how to reproduce the bug are as follows. Since this is due to a 
race condition, the test can take quite a bit of time before it hits the bug.

  Steps to reproduce:

  1) Devstack with trunk with the following local.conf:
  disable_service n-net
  enable_service q-svc
  enable_service q-agt
  enable_service q-dhcp
  enable_service q-l3
  enable_service q-meta
  enable_service q-metering
  RECLONE=yes
  # and other options as set in the trunk's local

  2) Create a few networks:
  $> neutron net-create testnet1
  $> neutron net-create testnet2
  $> neutron net-create testnet3
  $> neutron subnet-create testnet1 192.168.1.0/24
  $> neutron subnet-create testnet2 192.168.2.0/24
  $> neutron subnet-create testnet3 192.168.3.0/24

  3) Create a testvm in testnet1:
  $> nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64-uec --nic 
net-id=`neutron net-list | grep testnet1 | cut -f 2 -d ' '` testvm

  4) Run the following shell script to attach and detach interfaces for this VM
  in the remaining two networks in a loop, until we run into the issue at hand:
  
  #! /bin/bash
  c=1
  netid1=`neutron net-list | grep testnet2 | cut -f 2 -d ' '`
  netid2=`neutron net-list | grep testnet3 | cut -f 2 -d ' '`
  while [ $c -gt 0 ]
  do
      echo "Round: " $c
      echo -n "Attaching two interfaces... "
      nova interface-attach --net-id $netid1 testvm
      nova interface-attach --net-id $netid2 testvm
      echo "Done"
      echo "Sleeping until both those show up in interfaces"
      waittime=0
      while [ $waittime -lt 60 ]
      do
          count=`nova interface-list testvm | wc -l`
          if [ $count -eq 7 ]
          then
              break
          fi
          sleep 2
          (( waittime+=2 ))
      done
      echo "Waited for " $waittime " seconds"
      echo "Detaching both... "
      nova interface-list testvm | grep $netid1 | awk '{print "deleting ",$4; system("nova interface-detach testvm "$4 " ; sleep 2");}'
      nova interface-list testvm | grep $netid2 | awk '{print "deleting ",$4; system("nova interface-detach testvm "$4 " ; sleep 2");}'
      echo "Done; check interfaces are gone in a minute."
      waittime=0
      while [ $waittime -lt 60 ]
      do
          count=`nova interface-list testvm | wc -l`
          echo "line count: " $count
          if [ $count -eq 5 ]
          then
              break
          fi
          sleep 2
          (( waittime+=2 ))
      done
      if [ $waittime -ge 60 ]
      then
          echo "bad case"
          exit 1
      fi
      echo "Interfaces are gone"
      (( c-- ))
  done
  -

  Eventually the test will stop with a failure ("bad case") and the
  interface remaining either from testnet2 or testnet3 can not be
  detached at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332713] Re: Cisco: Send network and subnet UUID during subnet create

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332713

Title:
  Cisco: Send network and subnet UUID during subnet create

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  n1kv client is not sending netSegmentName and id fields to the VSM
  (controller) in create_ip_pool

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288466] Re: Get servers REST reply does not have marker when default limit is reached

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1288466

Title:
  Get servers REST reply does not have marker when default limit is
  reached

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Both the /servers and /servers/details APIs support pagination. When
  the request includes the "limit" parameters, then a "next" link is
  included in the reply if the number of servers that match the query is
  greater than or equal to the limit.

  The problem occurs when the caller does not include the limit
  parameter but the total number of servers is greater than or equal to
  the default "CONF.osapi_max_limit". When this occurs, the number of
  servers in the reply is "osapi_max" but there is no "next" link.
  Therefore, the caller cannot determine if there are any more servers
  and has no marker value such that they can retrieve the rest of the
  servers.

  The fix for this is to include the "next" link when the total number
  of servers is greater than or equal to the default limit, even if the
  "limit" parameter is not supplied.

  The documentation also says that the "next" link is required:
  http://docs.openstack.org/api/openstack-compute/2/content
  /Paginated_Collections-d1e664.html

  The fix appears to be in the _get_collection_links function in
  nova/api/openstack/common.py. The logic needs to be updated so that the "next"
  link is included if the total number of items returned equals the minimum of
  either the "limit" parameter or the "CONF.osapi_max_limit" value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1288466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323715] Re: network tests fail on policy check after upgrade from icehouse to master (juno)

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323715

Title:
  network tests fail on policy check after upgrade from icehouse to
  master (juno)

Status in Grenade - OpenStack upgrade testing:
  Invalid
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released

Bug description:
  Lots of tempest tests fail after upgrade

  http://logs.openstack.org/51/94351/3/check/check-grenade-dsvm-
  neutron/ac837a8/logs/testr_results.html.gz

  2014-05-26 21:47:20.109 364 INFO neutron.wsgi [req-
  7c96bf86-6845-4143-92d0-2bb32f5767d7 None] (364) accepted
  ('127.0.0.1', 60250)

  2014-05-26 21:47:20.110 364 DEBUG keystoneclient.middleware.auth_token [-] 
Authenticating user token __call__ 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:619
  2014-05-26 21:47:20.110 364 DEBUG keystoneclient.middleware.auth_token [-] 
Removing headers from request environment: 
X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
 _remove_auth_headers 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:678
  2014-05-26 21:47:20.110 364 DEBUG keystoneclient.middleware.auth_token [-] 
Returning cached token _cache_get 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:1041
  2014-05-26 21:47:20.111 364 DEBUG keystoneclient.middleware.auth_token [-] 
Storing token in cache _cache_put 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:1151
  2014-05-26 21:47:20.111 364 DEBUG keystoneclient.middleware.auth_token [-] 
Received request from user: 47d465f7c2e44c048f63066dff93093c with project_id : 
d3e7af8cf42d4613beb315dc19444d40 and roles: _member_  _build_user_headers 
/opt/stack/new/python-keystoneclient/keystoneclient/middleware/auth_token.py:940
  2014-05-26 21:47:20.112 364 DEBUG routes.middleware [-] No route matched for 
GET /ports.json __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:97
  2014-05-26 21:47:20.112 364 DEBUG routes.middleware [-] Matched GET 
/ports.json __call__ /usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2014-05-26 21:47:20.112 364 DEBUG routes.middleware [-] Route path: 
'/ports{.format}', defaults: {'action': u'index', 'controller': >} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2014-05-26 21:47:20.112 364 DEBUG routes.middleware [-] Match dict: 
{'action': u'index', 'controller': >, 'format': u'json'} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2014-05-26 21:47:20.122 364 DEBUG neutron.policy 
[req-ee5c1651-9d0c-43c9-974d-a0c888c08468 None] Unable to find ':' as separator 
in tenant_id. __call__ /opt/stack/new/neutron/neutron/policy.py:243
  2014-05-26 21:47:20.123 364 ERROR neutron.policy 
[req-ee5c1651-9d0c-43c9-974d-a0c888c08468 None] Unable to verify 
match:%(tenant_id)s as the parent resource: tenant was not found
  2014-05-26 21:47:20.123 364 TRACE neutron.policy Traceback (most recent call 
last):
  2014-05-26 21:47:20.123 364 TRACE neutron.policy   File 
"/opt/stack/new/neutron/neutron/policy.py", line 239, in __call__
  2014-05-26 21:47:20.123 364 TRACE neutron.policy parent_res, parent_field 
= do_split(separator)
  2014-05-26 21:47:20.123 364 TRACE neutron.policy   File 
"/opt/stack/new/neutron/neutron/policy.py", line 234, in do_split
  2014-05-26 21:47:20.123 364 TRACE neutron.policy separator, 1)
  2014-05-26 21:47:20.123 364 TRACE neutron.policy ValueError: need more than 1 
value to unpack
  2014-05-26 21:47:20.123 364 TRACE neutron.policy 
  2014-05-26 21:47:20.123 364 ERROR neutron.api.v2.resource 
[req-ee5c1651-9d0c-43c9-974d-a0c888c08468 None] index failed
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 87, in resource
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 309, in index
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource return 
self._items(request, True, parent_id)
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 264, in _items
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource 
request.context, obj_list[0])
  2014-05-26 21:47:20.123 364 TRACE neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/ap

[Yahoo-eng-team] [Bug 1302611] Re: policy.init called too many time for each API request

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302611

Title:
  policy.init called too many time for each API request

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron havana series:
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  policy.init() checks whether the rule cache is populated and valid,
  and if not reloads the policy cache from the policy.json file.

  As the current code runs init() each time a policy is checked or enforced, 
list operations will call init() several times (*)
  If policy.json is updated while a response is being generated, this will lead 
to a situation where some item are processed according to the old policies, and 
other according to the new ones, which would be wrong.

  Also, init() checks the last update time of the policy file, and
  repeating this check multiple time is wasteful.

  A simple solution would be to explicitly call policy.init from
  api.v2.base.Controller in order to ensure the method is called only
  once per API request.


  (*) a  GET /ports operation returning 1600 ports calls policy.init()
  9606 times
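
  A self-contained sketch of the suggested approach, with a stand-in for the
  policy module so it runs on its own (names and rule layout are illustrative,
  not the actual Neutron code):

  class policy(object):  # stand-in for neutron.policy in this sketch
      _rules = None

      @classmethod
      def init(cls):
          # The real init() re-reads policy.json only if its mtime changed.
          if cls._rules is None:
              cls._rules = {'get_port': lambda ctx, item: True}

      @classmethod
      def check(cls, context, action, item):
          return cls._rules[action](context, item)


  class Controller(object):
      def index(self, context, items):
          policy.init()  # exactly one (re)load for the whole request
          return [i for i in items if policy.check(context, 'get_port', i)]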

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291364] Re: _destroy_evacuated_instances fails randomly with high number of instances

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291364

Title:
  _destroy_evacuated_instances fails randomly with high number of
  instances

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  In our production environment (2013.2.1), we're facing a random error
  thrown while starting nova-compute in Hyper-V nodes.

  The following exception is thrown while calling
  '_destroy_evacuated_instances':

  16:30:58.802 7248 ERROR nova.openstack.common.threadgroup [-] 'NoneType' 
object is not iterable
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  (...)
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup   File 
"C:\Python27\lib\site-packages\nova\compute\manager.py", line 532, in 
_get_instances_on_driver
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
name_map = dict((instance['name'], instance) for instance in instances)
  2014-03-05 16:30:58.802 7248 TRACE nova.openstack.common.threadgroup 
TypeError: 'NoneType' object is not iterable

  Full trace: http://paste.openstack.org/show/73243/

  Our first guess is that this problem is related to the number of instances
  in our deployment (~3000); they are all fetched in order to check for
  evacuated instances (as Hyper-V does not implement "list_instance_uuids").

  In the case of KVM this error does not happen, as it uses a smarter method
  to get this list based on the UUIDs of the instances.

  Although this is being reported with Hyper-V, it is a problem that could
  occur with any other driver that does not implement "list_instance_uuids".
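
  A sketch of the kind of defensive lookup this implies (illustrative only,
  not the actual Nova code):

  def instances_on_driver(driver, all_instances_on_host):
      # Prefer matching by UUID when the driver implements list_instance_uuids;
      # otherwise fall back to matching by name, guarding against the driver or
      # DB call returning None instead of an empty list.
      try:
          uuids = driver.list_instance_uuids()
      except NotImplementedError:
          uuids = None

      instances = all_instances_on_host or []
      if uuids is not None:
          wanted = set(uuids)
          return [inst for inst in instances if inst['uuid'] in wanted]

      names = set(driver.list_instances() or [])
      return [inst for inst in instances if inst['name'] in names]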

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291364/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317094] Re: neutron requires list amqplib dependency

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1317094

Title:
  neutron requires list amqplib dependency

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  Neutron does not use amqplib directly (only via oslo.messaging or
  kombu). kombu already depends on either amqp or amqplib, so the extra
  dep is not necessary.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1317094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291007] Re: device_path not available at detach time for boot from volume

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291007

Title:
  device_path not available at detach time for boot from volume

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  When you do a normal volume attach to an existing VM and then detach it,
  connection_info['data'] contains 'device_path' by the time the libvirt
  volume driver's disconnect_volume(self, connection_info, mount_device) is
  called.

  When you boot a VM from a volume, not an image, and then terminate the VM,
  the connection_info['data'] passed to the libvirt volume driver's
  disconnect_volume doesn't contain the 'device_path' key. The libvirt volume
  drivers need this information to correctly disconnect the LUN from the
  kernel.
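
  A sketch of the guard this implies, using an iSCSI-style by-path fallback
  purely as an example (the fallback format and field names are assumptions,
  not the actual Nova fix):

  def device_path_for_detach(connection_info):
      # Never assume 'device_path' is present; boot-from-volume connection_info
      # may lack it, so fall back to data that is always there.
      data = connection_info['data']
      path = data.get('device_path')
      if path:
          return path
      return '/dev/disk/by-path/ip-%s-iscsi-%s-lun-%s' % (
          data['target_portal'], data['target_iqn'], data['target_lun'])


  # Example: boot-from-volume style connection_info without 'device_path'.
  info = {'data': {'target_portal': '192.168.0.5:3260',
                   'target_iqn': 'iqn.2010-10.org.openstack:volume-1',
                   'target_lun': 1}}
  assert device_path_for_detach(info).endswith('lun-1')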

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1319182] Re: Pausing a rescued instance should be impossible

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1319182

Title:
  Pausing a rescued instance should be impossible

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  In the following commands, 'vmtest' is a freshly created virtual
  machine.

  
  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | ACTIVE

  $ nova rescue vmtest
  +---+--+
  | Property  | Value
  +---+--+
  | adminPass | 2ZxvzZULT4sr
  +---+--+

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | RESCUE

  $ nova pause vmtest

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | PAUSED

  $ nova unpause vmtest

  $ nova show vmtest | grep -E "(status|task_state)"
  | OS-EXT-STS:task_state| -
  | status   | ACTIVE

  Here, we would want the vm to be in the 'RESCUE' state, as it was
  before being paused.

  $ nova unrescue vmtest
  ERROR (Conflict): Cannot 'unrescue' while instance is in vm_state active 
(HTTP 409) (Request-ID: req-34b8004d-b072-4328-bbf9-29152bd4c34f)

  The 'unrescue' command fails, which seems to confirm that the VM was
  no longer being rescued.

  
  So, two possibilities:
  1) When unpausing, the vm should go back to 'rescued' state
  2) Rescued vms should not be allowed to be paused, as is indicated by this 
graph: http://docs.openstack.org/developer/nova/devref/vmstates.html

  
  Note that the same issue can be observed with suspend/resume instead of 
pause/unpause, and probably other commands as well.

  WDYT ?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1319182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1299331] Re: There isn't effect when attach/detach interface for paused instance

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299331

Title:
  There isn't effect when attach/detach interface for paused instance

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  $ nova boot --flavor 1 --image 76ae1239-0973-44cf-9051-0e1bc8f41cdd
  --nic net-id=a15cfbed-86d8-4660-9593-46447cb9464e vm1

  $ nova list
  
+--+--+++-+---+
  | ID   | Name | Status | Task State | Power 
State | Networks  |
  
+--+--+++-+---+
  | f7e2877d-c7f5-4493-89d4-c68e9839a7ff | vm1  | ACTIVE | -  | Running 
| private=10.0.0.22 |
  
+--+--+++-+---+

  $ brctl show
  bridge name   bridge id   STP enabled interfaces
  br-eth0   .fe989d8bd148   no  
  br-ex .8a1d06d8854e   no  
  br-ex2.4a98bdebe544   no  
  br-int.229ad5053a41   no  
  br-tun.2e58a2f0e047   no  
  docker0   8000.   no  
  lxcbr08000.   no  
  qbr0ad6a86e-d98000.9e5491dd719a   no  
qvb0ad6a86e-d9
tap0ad6a86e-d9

  
  $ neutron port-list
  
+--+--+---++
  | id   | name | mac_address   | fixed_ips 
 |
  
+--+--+---++
  | 0ad6a86e-d967-424e-9bf5-e6821cc0cd0d |  | fa:16:3e:3a:3e:5a | 
{"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": 
"10.0.0.22"}   |
  | 1e6bed8d-aece-4d3e-abcc-3ad7957d6d72 |  | fa:16:3e:9e:dc:83 | 
{"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": 
"172.24.4.12"} |
  | 5f522a9a-2856-4a95-8bd8-c354c00abf0f |  | fa:16:3e:01:47:43 | 
{"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": "10.0.0.1"} 
   |
  | 6226f6d3-3814-469c-bf50-8c99dfec481e |  | fa:16:3e:46:0e:35 | 
{"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": "10.0.0.2"} 
   |
  | a3f2ab1c-a634-446d-8885-d7d8e5978fa1 |  | fa:16:3e:cf:02:d6 | 
{"subnet_id": "94575a05-796f-4ff5-b892-3c3b8231b303", "ip_address": 
"10.0.0.20"}   |
  | c10390a9-6f84-44f5-8a17-91cb330a9e12 |  | fa:16:3e:41:7c:34 | 
{"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": 
"172.24.4.15"} |
  | c814425c-be1a-4c06-a54b-1788c7c6fb31 |  | fa:16:3e:f5:fc:d3 | 
{"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": 
"172.24.4.2"}  |
  | ebd874b7-43e6-4d18-b0ed-f86bb349d8b9 |  | fa:16:3e:e6:b5:09 | 
{"subnet_id": "e5dbc790-c26f-45b7-b2c7-574f12ad8b41", "ip_address": 
"172.24.4.19"} |
  
+--+--+---++

  
  $ nova pause vm1

  $ nova interface-detach vm1 0ad6a86e-d967-424e-9bf5-e6821cc0cd0d

  $ nova list
  
+--+--+++-+--+
  | ID   | Name | Status | Task State | Power 
State | Networks |
  
+--+--+++-+--+
  | f7e2877d-c7f5-4493-89d4-c68e9839a7ff | vm1  | PAUSED | -  | Paused  
|  |
  
+--+--+++-+--+

  $ brctl show
  bridge name   bridge id   STP enabled interfaces
  br-eth0   .fe989d8bd148   no  
  br-ex .8a1d06d8854e   no  
  br-ex2.4a98bdebe544   no  
  br-int.229ad5053a41   no  
  br-tun.2e58a2f0e047   no  
  docker0   8000.   no  
  lxcbr08000.   no  

  
  But the tap device is still alive:

  $ ifconfig|grep tap0ad6a86e-d9
  tap0ad6a86e-d9 Link encap:Ethernet  HWaddr fe:16:3e:3a:3e:5a

  And l

[Yahoo-eng-team] [Bug 1328181] Re: NSX: remove_router_interface might fail because of NAT rule mismatch

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1328181

Title:
  NSX: remove_router_interface might fail because of NAT rule mismatch

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  The remove_router_interface for the VMware NSX plugin expects a precise 
number of SNAT rules for a subnet.
  If the actual number of NAT rules differs from the expected one, an exception 
is raised.

  The reasons for this might be:
  - earlier failure in remove_router_interface
  - NSX API client tampering with NSX objects
  - etc.

  In any case, the remove_router_interface operation should succeed,
  removing every match for the NAT rule to be deleted from the NSX logical
  router.

  sample traceback: http://paste.openstack.org/show/83427/
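
  A sketch of that behaviour (illustrative only; the rule layout is made up,
  not the actual NSX plugin code):

  def remove_snat_rules_for_subnet(rules, subnet_cidr):
      # Delete every rule that matches the subnet being detached instead of
      # asserting an exact rule count and failing on a mismatch.
      remaining = [r for r in rules if r.get('source') != subnet_cidr]
      removed = len(rules) - len(remaining)
      return remaining, removed


  rules = [{'source': '10.0.0.0/24'}, {'source': '10.0.0.0/24'},
           {'source': '10.1.0.0/24'}]
  remaining, removed = remove_snat_rules_for_subnet(rules, '10.0.0.0/24')
  assert removed == 2 and remaining == [{'source': '10.1.0.0/24'}]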

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1328181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321186] Re: nova can't show or delete queued image for AttributeError

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321186

Title:
  nova can't show or delete queued image for AttributeError

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  steps to reproduce:
  1. run "glance image-create" to create a queued image
  2. run "nova image-delete "

  it returns:
  Delete for image b31aa5dd-f07a-4748-8f15-398346887584 failed: The server has 
either erred or is incapable of performing the requested operation. (HTTP 500)

  the traceback in log file is:

  Traceback (most recent call last):
File "/opt/stack/nova/nova/api/openstack/__init__.py", line 125, in __call__
  return req.get_response(self.application)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, 
in send
  application, catch_exc_info=False)
File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, 
in call_application
  app_iter = application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
  return resp(environ, start_response)
File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 632, in __call__
  return self.app(env, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
  return resp(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
  return resp(environ, start_response)
File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in 
__call__
  response = self.app(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in 
__call__
  return resp(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 917, in __call__
  content_type, body, accept)
File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 983, in 
_process_stack
  action_result = self.dispatch(meth, request, action_args)
File "/opt/stack/nova/nova/api/openstack/wsgi.py", line 1067, in dispatch
  return method(req=request, **action_args)
File "/opt/stack/nova/nova/api/openstack/compute/images.py", line 139, in 
show
  image = self._image_service.show(context, id)
File "/opt/stack/nova/nova/image/glance.py", line 277, in show
  base_image_meta = _translate_from_glance(image)
File "/opt/stack/nova/nova/image/glance.py", line 462, in 
_translate_from_glance
  image_meta = _extract_attributes(image)
File "/opt/stack/nova/nova/image/glance.py", line 530, in 
_extract_attributes
  output[attr] = getattr(image, attr)
File 
"/opt/stack/python-glanceclient/glanceclient/openstack/common/apiclient/base.py",
 line 462, in __getattr__
  return self.__getattr__(k)
File 
"/opt/stack/python-glanceclient/glanceclient/openstack/common/apiclient/base.py",
 line 464, in __getattr__
  raise AttributeError(k)
  AttributeError: disk_format
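
  A rough sketch of the defensive extraction that avoids the crash (attribute
  names are taken from the traceback; the helper itself is illustrative):

def extract_attributes(image, attrs=('id', 'name', 'disk_format', 'container_format')):
    # Fall back to None for fields a queued image has not set yet, instead of
    # letting AttributeError bubble up to the API as an HTTP 500.
    return dict((attr, getattr(image, attr, None)) for attr in attrs)

class QueuedImage(object):  # stand-in for a glanceclient image object
    id = 'b31aa5dd-f07a-4748-8f15-398346887584'
    name = 'queued-image'

print(extract_attributes(QueuedImage()))  # disk_format comes back as None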

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316373] Re: Can't force delete an errored instance with no info cache

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316373

Title:
  Can't force delete an errored instance with no info cache

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Sometimes when an instance fails to launch for some reason when trying
  to delete it using nova delete or nova force-delete it doesn't work
  and gives the following error:

  This is when using cells but I think it possibly isn't cells related.
  Deleting is expecting an info cache no matter what. Ideally force
  delete should ignore all errors and delete the instance.

  
  2014-05-06 10:48:58.368 21210 ERROR nova.cells.messaging 
[req-a74c59d3-dc58-4318-87e8-0da15ca2a78d d1fa8867e42444cf8724e65fef1da549 
094ae1e2c08f4eddb444a9d9db71ab40] Error processing message locally: Info cache 
for instance bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging Traceback (most 
recent call last):
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/cells/messaging.py", line 200, in _process_locally
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/cells/messaging.py", line 1532, in _process_message_locally
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return 
fn(message, **message.method_kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/cells/messaging.py", line 894, in terminate_instance
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self._call_compute_api_with_obj(message.ctxt, instance, 'delete')
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/cells/messaging.py", line 855, in _call_compute_api_with_obj
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
instance.refresh(ctxt)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/base.py", line 151, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(self, 
ctxt, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/instance.py", line 500, in refresh
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self.info_cache.refresh()
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/base.py", line 151, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(self, 
ctxt, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/instance_info_cache.py", line 103, in refresh
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self.instance_uuid)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/base.py", line 112, in wrapper
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging result = fn(cls, 
context, *args, **kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
"/opt/nova/nova/objects/instance_info_cache.py", line 70, in 
get_by_instance_uuid
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
instance_uuid=instance_uuid)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
InstanceInfoCacheNotFound: Info cache for instance 
bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.
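
  A sketch of the tolerant-refresh idea (the exception name mirrors the
  traceback; the rest is illustrative, not the real objects code):

class InstanceInfoCacheNotFound(Exception):
    pass

def refresh_before_delete(instance):
    # A missing info cache should not block a (force) delete.
    try:
        instance.refresh()
    except InstanceInfoCacheNotFound:
        pass  # the cache never existed; carry on and delete anyway

class BrokenInstance(object):
    def refresh(self):
        raise InstanceInfoCacheNotFound('no info cache')

refresh_before_delete(BrokenInstance())  # does not raise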

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1321640] Re: [HyperV]: Config drive is not attached to instance after resized or migrated

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321640

Title:
  [HyperV]: Config drive is not attached to instance after resized or
  migrated

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  If we use config-drive (whether by passing --config-drive=true in the boot
  command or by setting force_config_drive=always in nova.conf), there is a bug
  affecting the config drive when resizing or migrating instances on Hyper-V.

  You can see from the current nova code
  (https://github.com/openstack/nova/blob/master/nova/virt/hyperv/migrationops.py#L269)
  that when a migration finishes, there is no code to attach configdrive.iso or
  configdrive.vhd to the resized instance, in contrast to booting an instance
  (https://github.com/openstack/nova/blob/master/nova/virt/hyperv/vmops.py#L226).
  Although the commit https://review.openstack.org/#/c/55975/ handled copying the
  config drive to the resized or migrated instance, there is no code to attach it
  after the resize or migration.
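
  A sketch of the missing step (path layout and the attach callback are
  illustrative, not the real Hyper-V driver API):

import os

def attach_config_drive_if_present(instance_dir, attach_drive):
    # After finish_migration, re-attach a copied config drive if one exists.
    for name in ('configdrive.vhd', 'configdrive.iso'):
        path = os.path.join(instance_dir, name)
        if os.path.exists(path):
            attach_drive(path)  # e.g. wrap the driver's attach-drive utility
            return path
    return None

# Usage with a dummy callback, just to show the flow:
attach_config_drive_if_present('/does/not/exist', lambda p: None)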

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308058] Re: Cannot create volume from glance image without checksum

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308058

Title:
  Cannot create volume from glance image without checksum

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  It is no longer possible to create a volume from an image that does
  not have a checksum set.

  
https://github.com/openstack/cinder/commit/da13c6285bb0aee55cfbc93f55ce2e2b7d6a28f2
  - this patch removes the default of None from the getattr call.

  If this is intended it would be nice to see something more informative
  in the logs.
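
  The crash comes down to a getattr() call that lost its default; a tiny sketch
  of the tolerant lookup (the image object here is a stand-in):

class GlanceImage(object):
    def __getattr__(self, name):
        raise AttributeError(name)  # glanceclient-style behaviour for unset fields

image = GlanceImage()
checksum = getattr(image, 'checksum', None)  # the default avoids the crash
print(checksum)  # -> None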

  2014-04-15 11:52:26.035 19000 ERROR cinder.api.middleware.fault 
[req-cf0f7b89-a9c1-4a10-b1ac-ddf415a28f24 c139cd16ac474d2184237ba837a04141 
83d5198d5f5a461798c6b843f57540d
  f - - -] Caught error: checksum
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault Traceback 
(most recent call last):
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/cinder/cinder/api/middleware/fault.py", line 75, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
req.get_response(self.application)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
application, catch_exc_info=False)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault app_iter 
= application(self.environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", 
line 615, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
self.app(env, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault response 
= self.app(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault resp = 
self.call_func(req, *args, **self.kwargs)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
self.func(req, *args, **kwargs)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/cinder/cinder/api/openstack/wsgi.py", line 895, in __call__
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
content_type, body, accept)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/cinder/cinder/api/openstack/wsgi.py", line 943, in _process_stack
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault 
action_result = self.dispatch(meth, request, action_args)
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault   File 
"/opt/stack/cinder/cinder/api/openstack/wsgi.py", line 1019, in dispatch
  2014-04-15 11:52:26.035 19000 TRACE cinder.api.middleware.fault return 
method(req=request, **action_args)
  2014-04-15 11:52:26.035 1900

[Yahoo-eng-team] [Bug 1358719] Re: Live migration fails as get_instance_disk_info is not present in the compute driver base class

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358719

Title:
  Live migration fails as get_instance_disk_info is not present in the
  compute driver base class

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The "get_instance_disk_info" driver has been added to the libvirt
  compute driver in the following commit:

  
https://github.com/openstack/nova/commit/e4974769743d5967626c1f0415113683411a03a4

  This caused regression failures on drivers that do not implement it,
  e.g.:

  http://paste.openstack.org/show/97258/

  The method has subsequently been added to the base class, but it raises a
  NotImplementedError(), which still causes the regression:

  
https://github.com/openstack/nova/commit/2bed16c89356554a193a111d268a9587709ed2f7
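
  A sketch of how callers can tolerate drivers that lack the method (the method
  name comes from the bug; the surrounding code is illustrative):

class BaseDriver(object):
    def get_instance_disk_info(self, instance_name):
        raise NotImplementedError()

def disk_info_or_none(driver, instance_name):
    # Non-libvirt drivers simply have no block-disk info to report; do not let
    # NotImplementedError abort the whole live-migration pre-check.
    try:
        return driver.get_instance_disk_info(instance_name)
    except NotImplementedError:
        return None

print(disk_info_or_none(BaseDriver(), 'vm-1'))  # -> None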

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357125] Re: Cisco N1kv plugin needs to send subtype on network profile creation

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357125

Title:
  Cisco N1kv plugin needs to send subtype on network profile creation

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  The Cisco N1kv neutron plugin should also send the subtype for overlay
  networks when a network segment pool is created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334142] Re: A server creation fails due to adding interface failure

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334142

Title:
  A server creation fails due to adding interface failure

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  http://logs.openstack.org/72/61972/27/gate/gate-tempest-dsvm-
  full/ed1ab55/logs/testr_results.html.gz

  pythonlogging:'': {{{
  2014-06-25 06:45:11,596 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 202 
POST http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers 0.295s
  2014-06-25 06:45:11,674 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.077s
  2014-06-25 06:45:12,977 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.300s
  2014-06-25 06:45:12,978 25675 INFO [tempest.common.waiters] State 
transition "BUILD/scheduling" ==> "BUILD/spawning" after 1 second wait
  2014-06-25 06:45:14,150 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 200 GET 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.171s
  2014-06-25 06:45:14,153 25675 INFO [tempest.common.waiters] State 
transition "BUILD/spawning" ==> "ERROR/None" after 3 second wait
  2014-06-25 06:45:14,221 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 400 
POST 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f/action
 0.066s
  2014-06-25 06:45:14,404 25675 INFO [tempest.common.rest_client] Request 
(DeleteServersTestXML:test_delete_server_while_in_verify_resize_state): 204 
DELETE 
http://127.0.0.1:8774/v2/aedc849c2c1742b8b1077c85d609b127/servers/f9fb672b-f2e6-4303-b7d1-2c5aa324170f
 0.182s
  }}}

  Traceback (most recent call last):
File "tempest/api/compute/servers/test_delete_server.py", line 97, in 
test_delete_server_while_in_verify_resize_state
  resp, server = self.create_test_server(wait_until='ACTIVE')
File "tempest/api/compute/base.py", line 247, in create_test_server
  raise ex
  BadRequest: Bad request
  Details: {'message': 'The server could not comply with the request since it 
is either malformed or otherwise incorrect.', 'code': '400'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338451] Re: shelve api does not work in the nova-cell environment

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338451

Title:
  shelve api does not work in the nova-cell environment

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  If you run the nova shelve API in a nova-cells environment, it throws the
  following error:

  Nova cell (n-cell-child) Logs:

  2014-07-06 23:57:13.445 ERROR nova.cells.messaging 
[req-a689a1a1-4634-4634-974a-7343b5554f46 admin admin] Error processing message 
locally: save() got an unexpected keyword argument 'expected_task_state'
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging Traceback (most recent 
call last):
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 200, in _process_locally
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 1287, in 
_process_message_locally
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return fn(message, 
**message.method_kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/cells/messaging.py", line 700, in run_compute_api_method
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return 
fn(message.ctxt, *args, **method_info['method_kwargs'])
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/compute/api.py", line 192, in wrapped
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return func(self, 
context, target, *args, **kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/compute/api.py", line 182, in inner
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return function(self, 
context, instance, *args, **kwargs)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/compute/api.py", line 163, in inner
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging return f(self, 
context, instance, *args, **kw)
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging   File 
"/opt/stack/nova/nova/compute/api.py", line 2458, in shelve
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging 
instance.save(expected_task_state=[None])
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging TypeError: save() got an 
unexpected keyword argument 'expected_task_state'
  2014-07-06 23:57:13.445 TRACE nova.cells.messaging

  Nova compute log:

  2014-07-07 00:05:19.084 ERROR oslo.messaging.rpc.dispatcher 
[req-9539189d-239b-4e74-8aea-8076740
  31c2f admin admin] Exception during message handling: 'NoneType' object is 
not iterable
  Traceback (most recent call last):

    File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _
  dispatch_and_reply
  incoming.message))

    File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _
  dispatch
  return self._do_dispatch(endpoint, method, ctxt, args)

    File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _
  do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

    File "/opt/stack/nova/nova/conductor/manager.py", line 351, in 
notify_usage_exists
  system_metadata, extra_usage_info)

    File "/opt/stack/nova/nova/compute/utils.py", line 250, in 
notify_usage_exists
  ignore_missing_network_data)

    File "/opt/stack/nova/nova/notifications.py", line 285, in bandwidth_usage
  macs = [vif['address'] for vif in nw_info]

  TypeError: 'NoneType' object is not iterable

  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher Traceback (most 
recent call last):
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dis
  t-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  2014-07-07 00:05:19.084 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/exception.py", line 88, in wrapped
  2014-07-07 00:05:19.084

[Yahoo-eng-team] [Bug 1360394] Re: NSX: log request body to NSX as debug

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360394

Title:
  NSX: log request body to NSX as debug

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  Previously we never logged the request body sent to NSX. This makes
  things hard to debug when issues arise, as we don't actually log the body of
  the request we made. This patch adds the body to our issue-request log
  statement.
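
  Roughly, the change amounts to something like this (logger setup and field
  names are illustrative):

import logging

LOG = logging.getLogger(__name__)

def issue_request(method, url, body=None):
    # Include the request body in the debug line so failed NSX calls can be
    # reproduced from the logs.
    LOG.debug("Issuing request to NSX: %(method)s %(url)s body=%(body)s",
              {'method': method, 'url': url, 'body': body})
    # ... the actual HTTP call to the NSX controller would follow here ...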

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350466] Re: deadlock in scheduler expire reservation periodic task

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1350466

Title:
  deadlock in scheduler expire reservation periodic task

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  http://logs.openstack.org/54/105554/4/check/gate-tempest-dsvm-neutron-
  large-
  ops/45501af/logs/screen-n-sch.txt.gz?level=TRACE#_2014-07-30_16_26_20_158

  
  2014-07-30 16:26:20.158 17209 ERROR nova.openstack.common.periodic_task [-] 
Error during SchedulerManager._expire_reservations: (OperationalError) (1213, 
'Deadlock found when trying to get lock; try restarting transaction') 'UPDATE 
reservations SET updated_at=updated_at, deleted_at=%s, deleted=id WHERE 
reservations.deleted = %s AND reservations.expire < %s' 
(datetime.datetime(2014, 7, 30, 16, 26, 20, 152098), 0, datetime.datetime(2014, 
7, 30, 16, 26, 20, 149665))
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/openstack/common/periodic_task.py", line 198, in 
run_periodic_tasks
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
task(self, context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/scheduler/manager.py", line 157, in 
_expire_reservations
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
QUOTAS.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/quota.py", line 1401, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
self._driver.expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/quota.py", line 651, in expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
db.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/db/api.py", line 1173, in reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
return IMPL.reservation_expire(context)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 149, in wrapper
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
return f(*args, **kwargs)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 3394, in 
reservation_expire
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
reservation_query.soft_delete(synchronize_session=False)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 
694, in soft_delete
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
synchronize_session=synchronize_session)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2690, in 
update
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
update_op.exec_()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
816, in exec_
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
self._do_exec()
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
913, in _do_exec
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
update_stmt, params=self.query._params)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 
444, in _wrap
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
_raise_if_deadlock_error(e, self.bind.dialect.name)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task   
File "/opt/stack/new/nova/nova/openstack/common/db/sqlalchemy/session.py", line 
427, in _raise_if_deadlock_error
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
raise exception.DBDeadlock(operational_error)
  2014-07-30 16:26:20.158 17209 TRACE nova.openstack.common.periodic_task 
DBDeadlock: (OperationalError) (12

[Yahoo-eng-team] [Bug 1357972] Re: boot from volume fails on Hyper-V if boot device is not vda

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357972

Title:
  boot from volume fails on Hyper-V if boot device is not vda

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The Tempest test
  
"tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern"
  fails on Hyper-V.

  The cause is related to the fact that the root device is "sda" and not
  "vda".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338830] Re: [OSSA 2014-032] Nova VMware driver still leaks rescued images (CVE-2014-3608)

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338830

Title:
  [OSSA 2014-032] Nova VMware driver still leaks rescued images
  (CVE-2014-3608)

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Compute (nova) havana series:
  Won't Fix
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Garth Mollet of Red Hat reported the following when examining the fix
  for OSSA 2014-017:

  .. there may still be a regression in the upstream patches.

  With the new patch applied it appears unrescue can still fail if the
  live vm is in the suspended state. With the new patch unrescue will
  attempt to poweroff the vm, however poweroff will fail if state ==
  suspended:

  # Only PoweredOn VMs can be powered off.
  # Raise Exception if VM is suspended
  elif pwr_state == "suspended":
   reason = _("instance is suspended and cannot be powered off.")
   raise exception.InstancePowerOffFailure(reason=reason)

  And this exception will be uncaught in the case of a manual unrescue,
  leading to the same end scenario in Jaroslavs test above, where
  destroying the vm in error state will leave the -rescue instance.

  Red Hat bugzilla reference -
  https://bugzilla.redhat.com/show_bug.cgi?id=1108406

  Can we confirm if this is a regression / incomplete fix of bug
  #1269418 ?
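
  A sketch of the hardening being asked about (state strings and calls are
  stand-ins, not the real VMware driver API):

def safe_power_off(vm):
    # Bring a suspended VM through poweredOn to poweredOff so unrescue does
    # not blow up with InstancePowerOffFailure and leak the -rescue VM.
    if vm['power_state'] == 'suspended':
        vm['power_state'] = 'poweredOn'   # e.g. via a PowerOnVM-style task
    if vm['power_state'] == 'poweredOn':
        vm['power_state'] = 'poweredOff'  # e.g. via a PowerOffVM-style task

vm = {'power_state': 'suspended'}
safe_power_off(vm)
print(vm['power_state'])  # -> poweredOff, so unrescue can proceed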

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358881] Re: jjsonschema 2.3.0 -> 2.4.0 upgrade breaking nova.tests.test_api_validation tests

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358881

Title:
  jjsonschema 2.3.0 -> 2.4.0 upgrade breaking
  nova.tests.test_api_validation tests

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The following two failures appeared after upgrading jsonschema to
  2.4.0; downgrading to 2.3.0 returned the tests to passing.

  ==
  FAIL: 
nova.tests.test_api_validation.TcpUdpPortTestCase.test_validate_tcp_udp_port_fails
  --
  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
602, in test_validate_tcp_udp_port_fails
  expected_detail=detail)
File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
31, in check_validation_error
  self.assertEqual(ex.kwargs, expected_kwargs)
File 
"/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 321, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
  raise mismatch_error
  MismatchError: !=:
  reference = {'code': 400,
   'detail': u'Invalid input for field/attribute foo. Value: 65536. 65536 is 
greater than the maximum of 65535'}
  actual= {'code': 400,
   'detail': 'Invalid input for field/attribute foo. Value: 65536. 65536.0 is 
greater than the maximum of 65535'}

  
  ==
  FAIL: 
nova.tests.test_api_validation.IntegerRangeTestCase.test_validate_integer_range_fails
  --
  Traceback (most recent call last):
  _StringException: Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  INFO [migrate.versioning.api] 215 -> 216... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 216 -> 217... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 217 -> 218... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 218 -> 219... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 219 -> 220... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 220 -> 221... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 221 -> 222... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 222 -> 223... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 223 -> 224... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 224 -> 225... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 225 -> 226... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 226 -> 227... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 227 -> 228... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 228 -> 229... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 229 -> 230... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 230 -> 231... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 231 -> 232... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 232 -> 233... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 233 -> 234... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 234 -> 235... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 235 -> 236... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 236 -> 237... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 237 -> 238... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 238 -> 239... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 239 -> 240... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 240 -> 241... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 241 -> 242... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 242 -> 243... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 243 -> 244... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 244 -> 245... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 245 -> 246... 
  INFO [migrate.versioning.api] done
  INFO [migrate.versioning.api] 246 -> 247... 
  INFO [migrate.

[Yahoo-eng-team] [Bug 1368251] Re: migrate_to_ml2 accessing boolean as int fails on postgresql

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1368251

Title:
  migrate_to_ml2 accessing boolean as int fails on postgresql

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released

Bug description:
  The "allocated" variable used in migrate_to_ml2 was defined to be a boolean 
type and in postgresql this type is enforced,
  while in mysql this just maps to tinyint and accepts both numbers and bools.

  Thus the migrate_to_ml2 script breaks on postgresql
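
  A sketch of the portable comparison (table and column names are
  illustrative):

import sqlalchemy as sa

metadata = sa.MetaData()
vlan_allocations = sa.Table(
    'vlan_allocations', metadata,
    sa.Column('vlan_id', sa.Integer),
    sa.Column('allocated', sa.Boolean))

# Comparing the boolean column against 0/1 only works on MySQL; comparing
# against a boolean expression works on both MySQL and PostgreSQL.
query = sa.select([vlan_allocations.c.vlan_id]).where(
    vlan_allocations.c.allocated == sa.false())
print(query)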

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1368251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352893] Re: ipv6 cannot be disabled for ovs agent

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352893

Title:
  ipv6 cannot be disabled for ovs agent

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  If the ipv6 module is not loaded in the kernel, the ip6tables command doesn't
  work and fails in the openvswitch-agent when processing ports:

  2014-08-05 15:20:57.089 3944 ERROR 
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error while processing 
VIF ports
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call 
last):
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1262, in rpc_loop
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent ovs_restarted)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py",
 line 1090, in process_network_ports
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
port_info.get('updated', set()))
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 
247, in setup_port_filters
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.prepare_devices_filter(new_devices)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/securitygroups_rpc.py", line 
164, in prepare_devices_filter
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.firewall.prepare_port_filter(device)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self.gen.next()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/firewall.py", line 108, in 
defer_apply
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.filter_defer_apply_off()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_firewall.py", 
line 370, in filter_defer_apply_off
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
self.iptables.defer_apply_off()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", 
line 353, in defer_apply_off
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent self._apply()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", 
line 369, in _apply
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent return 
self._apply_synchronized()
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py", 
line 400, in _apply_synchronized
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
root_helper=self.root_helper)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 76, in 
execute
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent raise RuntimeError(m)
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent RuntimeError:
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip6tables-restore', '-c']
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Exit code: 2
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stdout: ''
  2014-08-05 15:20:57.089 3944 TRACE 
neutron.plugins.openvswitch.agent.ovs_neutron_agent Stderr: "ip6tables-restore 
v1.4.21: ip6tables-restore: u
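
  A sketch of the guard the agent needs (the /proc check is a common way to
  detect IPv6 support; the surrounding agent code is illustrative):

import os

def ipv6_supported():
    return os.path.exists('/proc/sys/net/ipv6')

def restore_commands():
    # Only manage ip6tables when the kernel actually has IPv6 enabled.
    cmds = ['iptables-restore']
    if ipv6_supported():
        cmds.append('ip6tables-restore')
    return cmds

print(restore_commands())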

[Yahoo-eng-team] [Bug 1354448] Re: The Hyper-V driver should raise a InstanceFaultRollback in case of resize down requests

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1354448

Title:
  The Hyper-V driver should raise a InstanceFaultRollback in case of
  resize down requests

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The Hyper-V driver does not support resize down and currently raises an
  exception if the user attempts it, causing the instance to go into the
  ERROR state.

  The driver should use the recently introduced instance faults
  "exception.InstanceFaultRollback" instead, which will leave the
  instance in ACTIVE state as expected.
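
  A sketch of the intended behaviour (exception names mirror the description;
  the code itself is illustrative):

class CannotResizeDisk(Exception):
    pass

class InstanceFaultRollback(Exception):
    # Signals the compute manager to record a fault and roll the instance
    # back to ACTIVE instead of leaving it in ERROR.
    def __init__(self, inner_exception):
        super(InstanceFaultRollback, self).__init__(str(inner_exception))
        self.inner_exception = inner_exception

def check_resize(new_size_gb, current_size_gb):
    if new_size_gb < current_size_gb:
        raise InstanceFaultRollback(
            CannotResizeDisk('Hyper-V does not support resizing down'))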

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1354448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361545] Re: dhcp agent shouldn't spawn metadata-proxy for non-isolated networks

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361545

Title:
  dhcp agent shouldn't spawn metadata-proxy for non-isolated networks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  The "enable_isolated_metadata = True" options tells DHCP agents that for each 
network under its care, a neutron-ns-metadata-proxy process should be spawned, 
regardless if it's isolated or not.
  This is fine for isolated networks (networks with no routers and no default 
gateways), but for networks which are connected to a router (for which the L3 
agent spawns a separate neutron-ns-metadata-proxy which is attached to the 
router's namespace), 2 different metadata proxies are spawned. For these 
networks, the static routes which are pushed to each instance, letting it know 
where to search for the metadata-proxy, is not pushed and the proxy spawned 
from the DHCP agent is left unused.

  The DHCP agent should know if the network it handles is isolated or
  not, and for non-isolated networks, no neutron-ns-metadata-proxy
  processes should spawn.
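
  A sketch of the decision the DHCP agent should make (the network/port dicts
  are simplified stand-ins for the agent's cached data):

def is_isolated(network):
    # A network is isolated when no router interface sits on a subnet that
    # has a gateway; only then should the DHCP agent spawn a metadata proxy.
    gateway_subnets = set(s['id'] for s in network['subnets'] if s.get('gateway_ip'))
    for port in network['ports']:
        if port['device_owner'] != 'network:router_interface':
            continue
        if any(ip['subnet_id'] in gateway_subnets for ip in port['fixed_ips']):
            return False  # the L3 agent already serves metadata here
    return True

net = {'subnets': [{'id': 's1', 'gateway_ip': '10.0.0.1'}],
       'ports': [{'device_owner': 'network:router_interface',
                  'fixed_ips': [{'subnet_id': 's1'}]}]}
print(is_isolated(net))  # -> False, so no extra proxy should be spawned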

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1361545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352428] Re: HyperV "Shutting Down" state is not mapped

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352428

Title:
  HyperV "Shutting Down" state is not mapped

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The method that gets VM-related information can fail if the VM is in an
  intermediate state such as "Shutting down".
  The reason is that some of the Hyper-V specific VM states are not defined as
  possible states.

  This will result in a KeyError, as shown below:

  http://paste.openstack.org/show/90015/
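
  In essence the fix is to make the state lookup total; a sketch (the numeric
  codes and state names here are illustrative, not the exact Hyper-V values):

HYPERV_POWER_STATE = {
    2: 'running',
    3: 'shutoff',
    32768: 'paused',
    32769: 'suspended',
    32774: 'shutting down',  # intermediate state that used to be missing
}

def to_power_state(enabled_state):
    # Fall back to a safe default instead of raising KeyError for states
    # that are not (yet) in the map.
    return HYPERV_POWER_STATE.get(enabled_state, 'nostate')

print(to_power_state(32774))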

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357063] Re: nova.virt.driver "Emitting event" log message in stable/icehouse doesn't show anything

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357063

Title:
  nova.virt.driver "Emitting event" log message in stable/icehouse
  doesn't show anything

Status in OpenStack Compute (Nova):
  Confirmed
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  This is fixed on master with commit
  8c98b601f2db1f078d5f42ab94043d9939608f73 but the message is useless on
  stable/icehouse; here is an example snippet from a stable/icehouse
  tempest run of what this looks like in the n-cpu log:

  2014-08-14 16:18:53.311 473 DEBUG nova.virt.driver [-] Emitting event
  emit_event /opt/stack/new/nova/nova/virt/driver.py:1207

  It would be really nice to use that information in trying to debug
  what's causing all of these hits for InstanceInfoCacheNotFound stack
  traces:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRXhjZXB0aW9uIGRpc3BhdGNoaW5nIGV2ZW50XCIgQU5EIG1lc3NhZ2U6XCJJbmZvIGNhY2hlIGZvciBpbnN0YW5jZVwiIEFORCBtZXNzYWdlOlwiY291bGQgbm90IGJlIGZvdW5kXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIgQU5EIE5PVCBidWlsZF9icmFuY2g6XCJtYXN0ZXJcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwODA0NzMxMzM5Nn0=

  We should backport that repr fix to stable/icehouse for serviceability
  purposes.
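
  The master fix boils down to giving the event object a useful __repr__; a
  sketch (class and field names are illustrative):

class LifecycleEvent(object):
    def __init__(self, uuid, transition):
        self.uuid = uuid
        self.transition = transition

    def __repr__(self):
        # Without this, "Emitting event %s" logs nothing useful.
        return '<LifecycleEvent: uuid=%s, transition=%s>' % (
            self.uuid, self.transition)

print('Emitting event %s' % LifecycleEvent('f7e2877d', 'stopped'))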

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357063/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348766] Re: Big Switch: hash shouldn't be updated on unsuccessful calls

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348766

Title:
  Big Switch: hash shouldn't be updated on unsuccessful calls

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  The configuration hash db is updated on every response from the
  backend including errors that contain an empty hash. This is causing
  the hash to be wiped out if a standby controller is contacted first,
  which opens a narrow time window where the backend could become out of
  sync. It should only update the hash on successful REST calls.
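
  A sketch of the intended behaviour (the hash store and status handling are
  illustrative):

def maybe_store_hash(store, status_code, response_hash):
    # Persist the consistency hash only for successful replies; an error from
    # a standby controller must not wipe the stored value.
    if 200 <= status_code < 300 and response_hash:
        store['hash'] = response_hash

store = {'hash': 'abc123'}
maybe_store_hash(store, 503, '')        # error reply: hash left untouched
maybe_store_hash(store, 200, 'def456')  # success: hash updated
print(store['hash'])                     # -> def456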

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1348766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357578] Re: Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
  timing out in gate

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-
  python27/2536ea4/console.html

   FAIL:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on 
/v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 206, in 
test_terminate_sigterm
  2014-08-15 13:46:09.156 | self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 194, in 
_terminate_with_signal
  2014-08-15 13:46:09.156 | self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 146, in 
wait_on_process_until_end
  2014-08-15 13:46:09.157 | time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 31, in sleep
  2014-08-15 13:46:09.157 | hub.switch()
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 287, in switch
  2014-08-15 13:46:09.157 | return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 339, in run
  2014-08-15 13:46:09.158 | self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 82, in wait
  2014-08-15 13:46:09.158 | sleep(seconds)
  2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
  2014-08-15 13:46:09.158 | raise TimeoutException()
  2014-08-15 13:46:09.158 | TimeoutException

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1356157] Re: make nova floating-ip-delete atomic with neutron

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1356157

Title:
  make nova floating-ip-delete atomic with neutron

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The infra guys were noticing an issue where they were leaking floating IP
  addresses. One of the reasons this would occur for them is that they called
  nova floating-ip-delete, which first disassociates the floating IP in neutron
  and then deletes it. Because it makes two calls to neutron, if the first one
  succeeds and the second fails, the instance is no longer associated with the
  floating IP. They have retry logic, but they base it on the instance, and
  when they go to retry cleaning up the instance the floating IP is no longer
  on it, so they never delete it.

  This patch fixes the issue by directly calling delete_floating_ip instead of
  disassociating first when neutron is used, as neutron allows this. I looked
  into doing the same thing for nova-network, but that code is written to
  prevent it. This makes the operation atomic. Doing this in the API layer is
  somewhat hackish, but we already do so in a few other places.
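
  A minimal sketch of the approach described above, assuming a
  python-neutronclient Client instance is available; the helper name is
  illustrative, not the actual patch:

  def release_floating_ip(neutron, floating_ip_id):
      """Release a floating IP in one call instead of disassociate + delete.

      Neutron allows deleting a floating IP that is still associated with a
      port, so a single delete_floatingip() call removes the window where a
      failure between the two requests would leak the address.
      """
      neutron.delete_floatingip(floating_ip_id)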

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1356157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358668] Re: Big Switch: keyerror on filtered get_ports call

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358668

Title:
  Big Switch: keyerror on filtered get_ports call

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released

Bug description:
  If get_ports is called in the Big Switch plugin without 'id' being one
  of the included fields, _extend_port_dict_binding will fail with the
  following error.

  Traceback (most recent call last):
File "neutron/tests/unit/bigswitch/test_restproxy_plugin.py", line 87, in 
test_get_ports_no_id
  context.get_admin_context(), fields=['name'])
File "neutron/plugins/bigswitch/plugin.py", line 715, in get_ports
  self._extend_port_dict_binding(context, port)
File "neutron/plugins/bigswitch/plugin.py", line 361, in 
_extend_port_dict_binding
  hostid = porttracker_db.get_port_hostid(context, port['id'])
  KeyError: 'id'
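
  A hedged sketch of the kind of guard that avoids the crash; the function
  below is illustrative and takes the host lookup as a callable rather than
  reproducing the plugin's actual code:

  def extend_port_dict_binding(port, get_port_hostid):
      # When the caller filtered the fields, 'id' may be absent; skip the
      # host lookup instead of raising KeyError.
      port_id = port.get('id')
      if port_id:
          hostid = get_port_hostid(port_id)
          if hostid:
              port['binding:host_id'] = hostid
      return port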

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348720] Re: Missing index for expire_reservations

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348720

Title:
  Missing index for expire_reservations

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  While investigating some database performance problems, we discovered
  that there is no index on the 'deleted' column of the reservations table.
  When this table gets large, the expire_reservations code will do a full
  table scan and take multiple seconds to complete. Because the expiration
  runs as a periodic task, it can slow down the master database significantly
  and cause nova or cinder to become extremely slow.

  > EXPLAIN UPDATE reservations SET updated_at=updated_at,
  deleted_at='2014-07-24 22:26:17', deleted=id
  WHERE reservations.deleted = 0 AND reservations.expire < '2014-07-24 22:26:11';

  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  | id | select_type | table        | type  | possible_keys | key     | key_len | ref  | rows   | Extra                        |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+
  |  1 | SIMPLE      | reservations | index | NULL          | PRIMARY | 4       | NULL | 950366 | Using where; Using temporary |
  +----+-------------+--------------+-------+---------------+---------+---------+------+--------+------------------------------+

  An index on (deleted, expire) would be the most efficient.
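
  A hedged sketch of the corresponding schema change, written with
  Alembic-style syntax purely for illustration (Nova's migrations of that
  era used sqlalchemy-migrate, and the index name here is an assumption):

  from alembic import op

  def upgrade():
      # Lets expire_reservations() find the non-deleted, expired rows via an
      # index range scan instead of a full table scan.
      op.create_index('reservations_deleted_expire_idx',
                      'reservations', ['deleted', 'expire'])

  def downgrade():
      op.drop_index('reservations_deleted_expire_idx',
                    table_name='reservations')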

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1348720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360817] Re: Hyper-V agent fails on Hyper-V 2008 R2 due to missing "remove_all_security_rules" method

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1360817

Title:
  Hyper-V agent fails on Hyper-V 2008 R2 due to missing
  "remove_all_security_rules" method

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released

Bug description:
  A recent regression does not allow the Hyper-V agent to run
  successfully on Hyper-V 2008 R2, which is currently still a supported
  platform.

  The call generating the error is:

  
https://github.com/openstack/neutron/blob/771327adbe9e563506f98ca561de9ded4d987698/neutron/plugins/hyperv/agent/hyperv_neutron_agent.py#L392

  Error stack trace:

  http://paste.openstack.org/show/98471/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1360817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374519] Re: Orphaned queues are not auto-deleted for Qpid

2014-10-02 Thread Adam Gandelman
** Changed in: neutron
Milestone: 2014.1.3 => None

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374519

Title:
  Orphaned queues are not auto-deleted for Qpid

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in neutron icehouse series:
  New
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  The following patch incorrectly set auto-delete for Qpid to False:
  https://github.com/openstack/oslo-incubator/commit/5ff534d1#diff-372094c4bfc6319d22875a970aa6b730R190

  While for RabbitMQ, it's True.

  This results in queues left on the broker if client dies and does not
  return back.

  Red Hat bug for reference:
  https://bugzilla.redhat.com/show_bug.cgi?id=1099657

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370191] Re: db deadlock on service_update()

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370191

Title:
  db deadlock on service_update()

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Several methods in nova.db.sqlalchemy.api are decorated with
  @_retry_on_deadlock. service_update() is not currently one of them,
  but it should be, based on the following backtrace:

  4-09-15 15:40:22.574 34384 ERROR nova.servicegroup.drivers.db [-] model
  server went away
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db Traceback
  (most recent call last):
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py", line
  95, in _report_state
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  service.service_ref, state_catalog)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib/python2.7/site-packages/nova/conductor/api.py", line 218, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return self._manager.service_update(context, service, values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib/python2.7/site-packages/nova/utils.py", line 967, in wrapper
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return func(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py", line 139,
  in inner
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return func(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 491, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db svc =
  self.db.service_update(context, service['id'], values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib/python2.7/site-packages/nova/db/api.py", line 148, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return IMPL.service_update(context, service_id, values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 146, in
  wrapper
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  return f(*args, **kwargs)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 533, in
  service_update
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  service_ref.update(values)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 447,
  in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self.rollback()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line
  58, in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  compat.reraise(exc_type, exc_value, exc_tb)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 444,
  in __exit__
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self.commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py", line 358,
  in commit
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  t[1].commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1195,
  in commit
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self._do_commit()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1226,
  in _do_commit
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self.connection._commit_impl()
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 491,
  in _commit_impl
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db
  self._handle_dbapi_exception(e, None, None, None, None)
  2014-09-15 15:40:22.574 34384 TRACE nova.servicegroup.drivers.db   File
  "/usr/lib64/python2.7/sit

[Yahoo-eng-team] [Bug 1344036] Re: Hyper-V agent generates exception when force_hyperv_utils_v1 is True on Windows Server / Hyper-V Server 2012 R2

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1344036

Title:
  Hyper-V agent generates exception when force_hyperv_utils_v1 is True
  on Windows Server / Hyper-V Server 2012 R2

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  WMI root\virtualization namespace v1 (in Hyper-V) has been removed from 
Windows Server / Hyper-V Server 2012 R2, according to:
  http://technet.microsoft.com/en-us/library/dn303411.aspx

  Because of this, setting the force_hyperv_utils_v1 option on the
  Windows Server 2012 R2 nova compute agent's nova.conf will cause
  exceptions, since it will try to use the removed root\virtualization
  namespace v1.

  Logs:
  http://paste.openstack.org/show/87125/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1344036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347777] Re: The compute_driver option description does not include the Hyper-V driver

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347777

Title:
  The compute_driver option description does not include the Hyper-V
  driver

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The description of the option "compute_driver" should include
  hyperv.HyperVDriver along with the other supported drivers

  
https://github.com/openstack/nova/blob/aa018a718654b5f868c1226a6db7630751613d92/nova/virt/driver.py#L35-L38

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357599] Re: race condition with neutron in nova migrate code

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357599

Title:
  race condition with neutron in nova migrate code

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The tempest test that does a resize on the instance from time to time
  fails with a neutron virtual interface timeout error. The reason why
  this is occurring is because resize_instance() calls:

  disk_info = self.driver.migrate_disk_and_power_off(
  context, instance, migration.dest_host,
  instance_type, network_info,
  block_device_info)

  which calls destroy(), which unplugs the VIFs. Then,

  self.driver.finish_migration(context, migration, instance,
   disk_info,
   network_info,
   image, resize_instance,
   block_device_info, power_on)

  is called, which expects a vif_plugged event. Since this happens on the
  same host, the neutron agent is not able to detect that the vif was
  unplugged and then plugged again, because it happens so fast. To fix this
  we should check whether we are migrating to the same host; if we are, we
  should not expect to get an event.

  8d1] Setting instance vm_state to ERROR
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] Traceback (most recent call last):
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] disk_info, image)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] old_instance_type, sys_meta)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] six.reraise(self.type_, self.value, 
self.tb)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in 
finish_migration
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in 
_create_domain_and_network
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] raise 
exception.VirtualInterfaceCreateException()
  2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] VirtualInterfaceCreateException: Virtual 
Interface creation failed
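
  A minimal sketch (not Nova's actual code) of the check proposed above:
  only wait for vif-plugged events when the resize actually moves the
  instance to a different host:

  def events_to_wait_for(source_host, dest_host, network_info):
      if source_host == dest_host:
          # The unplug/replug happens faster than the agent can notice, so
          # no new vif-plugged notification will ever arrive; waiting for
          # one only produces the timeout seen in the trace above.
          return []
      return [('network-vif-plugged', vif['id']) for vif in network_info]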

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362221] Re: VMs fail to start when Ceph is used as a backend for ephemeral drives

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362221

Title:
  VMs fail to start when Ceph is used as a backend for ephemeral drives

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  The option to place VM drives in Ceph has been chosen
  (libvirt.images_type == 'rbd').

  When a user creates a flavor that specifies:
 - root drive size > 0
 - ephemeral drive size > 0 (important)

  and tries to boot a VM, the scheduler log reports "no valid host was
  found":

  Error from last host: node-3.int.host.com (node node-3.int.host.com): 
[u'Traceback (most recent call last):\n', u'
   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1305, 
in _build_instance\n set_access_ip=set_access_ip)\n', u' File "/usr/l
  ib/python2.6/site-packages/nova/compute/manager.py", line 393, in 
decorated_function\n return function(self, context, *args, **kwargs)\n', u' File
   "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1717, in 
_spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instanc
  e)\n', u' File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\n six.reraise(self.type_, self.value, se
  lf.tb)\n', u' File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1714, in 
_spawn\n block_device_info)\n', u' File "/usr/lib/py
  thon2.6/site-packages/nova/virt/libvirt/driver.py", line 2259, in spawn\n 
admin_pass=admin_password)\n', u' File "/usr/lib/python2.6/site-packages
  /nova/virt/libvirt/driver.py", line 2648, in _create_image\n 
ephemeral_size=ephemeral_gb)\n', u' File 
"/usr/lib/python2.6/site-packages/nova/virt/
  libvirt/imagebackend.py", line 186, in cache\n *args, **kwargs)\n', u' File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebackend.py",
  line 587, in create_image\n prepare_template(target=base, max_size=size, 
*args, **kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/opens
  tack/common/lockutils.py", line 249, in inner\n return f(*args, **kwargs)\n', 
u' File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/imagebac
  kend.py", line 176, in fetch_func_sync\n fetch_func(target=target, *args, 
**kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/virt/libvir
  t/driver.py", line 2458, in _create_ephemeral\n disk.mkfs(os_type, fs_label, 
target, run_as_root=is_block_dev)\n', u' File "/usr/lib/python2.6/sit
  e-packages/nova/virt/disk/api.py", line 117, in mkfs\n utils.mkfs(default_fs, 
target, fs_label, run_as_root=run_as_root)\n', u' File "/usr/lib/pyt
  hon2.6/site-packages/nova/utils.py", line 856, in mkfs\n execute(*args, 
run_as_root=run_as_root)\n', u' File "/usr/lib/python2.6/site-packages/nov
  a/utils.py", line 165, in execute\n return processutils.execute(*cmd, 
**kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/commo
  n/processutils.py", line 193, in execute\n cmd=\' \'.join(cmd))\n', 
u"ProcessExecutionError: Unexpected error while running command.\nCommand: sudo
   nova-rootwrap /etc/nova/rootwrap.conf mkfs -t ext3 -F -L ephemeral0 
/var/lib/nova/instances/_base/ephemeral_1_default\nExit code: 1\nStdout: 
''\nStde
  rr: 'mke2fs 1.41.12 (17-May-2010)\\nmkfs.ext3: No such file or directory 
while trying to determine filesystem size\\n'\n"]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343604] Re: Exceptions thrown, and messages logged by execute() may include passwords

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

** Changed in: trove/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343604

Title:
  Exceptions thrown, and messages logged by execute() may include
  passwords

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in The Oslo library incubator:
  Fix Released
Status in OpenStack Security Advisories:
  Triaged
Status in Openstack Database (Trove):
  Fix Committed
Status in Trove icehouse series:
  Fix Released

Bug description:
  Currently when execute() throws a ProcessExecutionError, it returns
  the command without masking passwords. In the one place where it logs
  the command, it correctly masks the password.

  It would be prudent to mask the password in the exception as well so
  that upstream catchers don't have to go through the mask_password()
  motions.

  The same also goes for stdout and stderr information which should be
  sanitized.
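
  A simplified sketch of the sanitization the report asks for; the real
  code lives in oslo's strutils.mask_password and covers more key patterns
  than this illustration:

  import re

  _SECRET_KEYS = ('password', 'admin_pass', 'admin_password', 'token')

  def mask_password(message, secret='***'):
      for key in _SECRET_KEYS:
          message = re.sub(r"(%s['\"]?\s*[=:]\s*)[^\s'\",]+" % key,
                           r'\1%s' % secret, message,
                           flags=re.IGNORECASE)
      return message

  class ProcessExecutionError(Exception):
      def __init__(self, cmd, stdout='', stderr=''):
          # Mask before the values ever reach the exception text, so that
          # upstream handlers and logs never see the raw secrets.
          msg = 'Command: %s\nStdout: %s\nStderr: %s' % (cmd, stdout, stderr)
          super(ProcessExecutionError, self).__init__(mask_password(msg))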

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1343604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334926] Re: floatingip still working once connected even after it is disassociated

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1334926

Title:
  floatingip still working once connected even after it is disassociated

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  After we create an SSH connection to a VM via its floating IP, even
  though we have removed the floating IP association, we can still
  access the VM via that connection. Namely, the SSH session is not
  disconnected when the floating IP is no longer valid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1334926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364696] Re: Big Switch: Request context is missing from backend requests

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364696

Title:
  Big Switch: Request context is missing from backend requests

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released

Bug description:
  The request context that comes into Neutron is not included in the
  request to the backend. This makes it difficult to correlate events in
  the debug logs on the backend, such as which incoming Neutron request
  resulted in particular REST calls to the backend and whether admin
  privileges were used.
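
  A hedged illustration of the idea: forward enough of the incoming request
  context to the backend for log correlation. The header names below are
  assumptions, not the plugin's actual wire format:

  def context_headers(context):
      # Attach these to each REST call made to the backend controller so
      # its debug logs can be correlated with the Neutron request.
      return {
          'X-Neutron-Request-Id': getattr(context, 'request_id', '') or '',
          'X-Neutron-Tenant-Id': getattr(context, 'tenant_id', '') or '',
          'X-Neutron-Is-Admin': str(getattr(context, 'is_admin', False)),
      }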

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357102] Re: Big Switch: Multiple read calls to consistency DB fails

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357102

Title:
  Big Switch: Multiple read calls to consistency DB fails

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released

Bug description:
  The Big Switch consistency DB throws an exception if read_for_update() is 
called multiple times without closing the transaction in between. This was 
originally because there was a DB lock in place and a single thread could 
deadlock if it tried twice. However, 
  there is no longer a point to this protection because the DB lock is gone and 
certain response failures result in the DB being read twice (the second time 
for a retry).

  2014-08-14 21:56:41.496 12939 ERROR neutron.plugins.ml2.managers 
[req-ee311173-b38a-481e-8900-d963c676b05f None] Mechanism driver 'bigswitch' 
failed in update_port_postcommit
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 168, 
in _call_on_drivers
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/mech_bigswitch/driver.py",
 line 91, in update_port_postcommit
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
port["network"]["id"], port)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 555, in rest_update_port
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
self.rest_create_port(tenant_id, net_id, port)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 545, in rest_create_port
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
self.rest_action('PUT', resource, data, errstr)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 476, in rest_action
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers timeout)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py", line 
249, in inner
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers return 
f(*args, **kwargs)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 423, in rest_call
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
hash_handler=hash_handler)
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/servermanager.py", 
line 139, in rest_call
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
headers[HASH_MATCH_HEADER] = hash_handler.read_for_update()
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/bigswitch/db/consistency_db.py",
 line 56, in read_for_update
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers raise 
MultipleReadForUpdateCalls()
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers 
MultipleReadForUpdateCalls: Only one read_for_update call may be made at a time.
  2014-08-14 21:56:41.496 12939 TRACE neutron.plugins.ml2.managers

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334164] Re: nova error migrating VMs with floating ips: 'FixedIP' object has no attribute '_sa_instance_state'

2014-10-02 Thread Adam Gandelman
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334164

Title:
  nova error migrating VMs with floating ips: 'FixedIP' object has no
  attribute '_sa_instance_state'

Status in Fuel: OpenStack installer that works:
  Fix Committed
Status in Fuel for OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack:
  Fix Committed
Status in Mirantis OpenStack 5.0.x series:
  Fix Released
Status in Mirantis OpenStack 5.1.x series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Seeing this in conductor logs when migrating a VM with a floating IP
  assigned:

  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, 
in _dispatch_and_reply
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, 
in _dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, 
in _do_dispatch
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 1019, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line 527, in 
network_migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_api.migrate_instance_start(context, instance, migration)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 94, in wrapped
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
func(self, context, *args, **kwargs)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/api.py", line 543, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
self.network_rpcapi.migrate_instance_start(context, **args)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/nova/network/rpcapi.py", line 350, in 
migrate_instance_start
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
floating_addresses=floating_addresses)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/rpc/client.py", line 150, in 
call
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/transport.py", line 90, in 
_send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
409, in send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.6/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 
402, in _send
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher raise 
result
  2014-06-23 09:22:38.899 20314 TRACE oslo.messaging.rpc.dispatcher 
AttributeError: 'FixedIP' object has no attribute '_sa_instance_state'

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1334164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366917] Re: neutron should not use neutronclients utils methods

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366917

Title:
  neutron should not use neutronclients utils methods

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released

Bug description:
  2014-09-07 19:17:58.331 | Traceback (most recent call last):
  2014-09-07 19:17:58.331 |   File "/usr/local/bin/neutron-debug", line 6, in <module>
  2014-09-07 19:17:58.332 | from neutron.debug.shell import main
  2014-09-07 19:17:58.332 |   File 
"/opt/stack/new/neutron/neutron/debug/shell.py", line 29, in <module>
  2014-09-07 19:17:58.332 | 'probe-create': utils.import_class(
  2014-09-07 19:17:58.332 | AttributeError: 'module' object has no attribute 
'import_class'
  2014-09-07 19:17:58.375 | + exit_trap
  2014-09-07 19:17:58.375 | + local r=1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1366917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357105] Re: Big Switch: servermanager should retry on 503 instead of failing immediately

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357105

Title:
  Big Switch: servermanager should retry on 503 instead of failing
  immediately

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron icehouse series:
  Fix Released

Bug description:
  When the backend controller returns a 503 Service Unavailable, the Big
  Switch server manager immediately counts the server request as failed.
  Instead it should retry a few times, because a 503 occurs when there
  are locks in place for synchronization during an upgrade, etc.
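
  A minimal sketch of the retry behaviour suggested above; the retry
  count, backoff, and rest_call signature are illustrative assumptions
  rather than the plugin's real interface:

  import time

  def rest_call_with_503_retry(rest_call, *args, **kwargs):
      retries = 3
      for attempt in range(retries + 1):
          status, reply = rest_call(*args, **kwargs)
          if status != 503 or attempt == retries:
              return status, reply
          # 503 usually means the controller is briefly locked for a sync
          # or upgrade, so back off and try again instead of counting the
          # server as failed.
          time.sleep(1 + attempt)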

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1317016] Re: Users are not allowed to delete objects which they created under a Container

2014-10-02 Thread Adam Gandelman
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1317016

Title:
  Users are not allowed to delete objects which they created under a
  Container

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  Testing steps:
  1: create a pseudo-folder object pf1
  2: delete pf1

  Testing result:

  Error: You are not allowed to delete object: pf1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1317016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1324450] Re: add delete operations for the ODL MechanismDriver

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324450

Title:
  add delete operations for the ODL MechanismDriver

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  The delete operations (networks, subnets and ports) haven't been managed 
since the 12th review of the initial support.
  It seems sync_single_resource only implements create and update operations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288859] Re: Load balancer can't choose proper port in multi-network configuration

2014-10-02 Thread Adam Gandelman
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288859

Title:
  Load balancer can't choose proper port in multi-network configuration

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  If LBaaS functionality is enabled and instances have more than one network
  interface, Horizon incorrectly chooses the member ports to add to the LB
  pool.

  Steps to reproduce:

  0. nova, neutron with configured LBaaS functions, horizon.
  1. Create 1st network (e.g. net1)
  2. Create 2nd network (e.g. net2)
  3. Create a few (e.g. 6) instances with interfaces attached to both networks.
  4. Create LB pool
  5. Go to member page and click 'add members'
  6. Select all instances from step 3, click add

  Expected result:
  all selected interfaces will be in the same network.

  Actual result:
  Some interfaces are selected from net1, some from net2.

  There is no way to add an instance to the LB pool with the proper
  interface via Horizon, because the add-member dialog does not allow
  choosing the instance's port.

  Checked on Havana and icehouse-2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1288859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347891] Re: mis-use of XML canonicalization in keystone tests

2014-10-02 Thread Adam Gandelman
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1347891

Title:
  mis-use of XML canonicalization in keystone tests

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released

Bug description:
  Running the keystone suite on a new Fedora VM, I get many failures
  where XML comparisons fail in a non-deterministic way:

  [classic@localhost keystone]$ tox -e py27 --  
keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
  py27 develop-inst-noop: /home/classic/dev/redhat/keystone
  py27 runtests: PYTHONHASHSEED='2335155056'
  py27 runtests: commands[0] | python setup.py testr --slowest 
--testr-args=keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
  running testr
  running=
  OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./keystone/tests --list 
  running=
  OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./keystone/tests  --load-list 
/tmp/tmpCKSHDr
  ==
  FAIL: keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
  tags: worker-0
  --
  Empty attachments:
pythonlogging:''-1
stderr
stdout

  pythonlogging:'': {{{
  Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
  Deprecated: keystone.common.kvs.Base is deprecated as of Icehouse in favor of 
keystone.common.kvs.KeyValueStore and may be removed in Juno.
  Registering Dogpile Backend 
keystone.tests.test_kvs.KVSBackendForcedKeyMangleFixture as 
openstack.kvs.KVSBackendForcedKeyMangleFixture
  Registering Dogpile Backend keystone.tests.test_kvs.KVSBackendFixture as 
openstack.kvs.KVSBackendFixture
  KVS region configuration for token-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 'keystone.kvs.backend': 
'openstack.kvs.Memory', 'keystone.kvs.arguments.lock_timeout': 6}
  Using default dogpile sha1_mangle_key as KVS region token-driver key_mangler
  It is recommended to only use the base key-value-store implementation for the 
token driver for testing purposes.  Please use 
keystone.token.backends.memcache.Token or keystone.token.backends.sql.Token 
instead.
  KVS region configuration for os-revoke-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 'keystone.kvs.backend': 
'openstack.kvs.Memory', 'keystone.kvs.arguments.lock_timeout': 6}
  Using default dogpile sha1_mangle_key as KVS region os-revoke-driver 
key_mangler
  Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed 
to event `identity.OS-TRUST:trust.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` 
subscribed to event `identity.OS-OAUTH1:consumer.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_token.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._role_callback` subscribed to 
event `identity.role.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.disabled`.
  Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.disabled`.
  Callback: `keystone.contrib.revoke.core.Manager._domain_callback` subscribed 
to event `identity.domain.disabled`.
  Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of 
Icehouse in favor of support for "application/json" only and may be removed in 
K.
  Auth token not in the request header. Will not build auth context.
  arg_dict: {}
  }}}

  Traceback (most recent call last):
File 
"/home/classic/dev/redhat/keystone/.tox/py27/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
  return func(*args, **keywargs)
File "keystone/tests/test_versions.py", line 460, in test_v3_disabled
  self.assertThat(data, matchers.XMLEquals(expected))
File 
"/home/classic/dev/redhat/keystone/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
  raise mismatch_error
  MismatchError: expected = http://docs.openstack.org/identity/api/v2.0"; id="v2.0" status="stable" 
updated="2014-04-17T00:00:00Z">

  
  


  http://localhost:26739/v2.0/"; rel="self"/>

[Yahoo-eng-team] [Bug 1236868] Re: image status set to killed even if has been deleted

2014-10-02 Thread Adam Gandelman
** Changed in: glance/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1236868

Title:
  image status set to killed even if has been deleted

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Released

Bug description:
  This error occurs with the following sequence of steps:

  1. Upload data to an image e.g. cinder upload-to-image
  2. image status is set to 'saving' as data is uploaded
  3. delete image before upload is complete
  4. image status goes to 'deleted' and image is deleted from backend store
  5. fail the upload
  6. image status then goes to 'killed' when it should stay as 'deleted'
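
  A hedged sketch of the guard implied by step 6: only move an image to
  'killed' if it is still in a live upload state, so a concurrent delete
  wins. The names are illustrative, not Glance's exact API:

  def safe_kill(image):
      # A failed upload should not resurrect an image that the user has
      # already deleted; leave 'deleted' (or any other terminal state)
      # untouched.
      if image['status'] in ('queued', 'saving'):
          image['status'] = 'killed'
      return image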

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1236868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347840] Re: Primary Project should stay selected after user added to new project

2014-10-02 Thread Adam Gandelman
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347840

Title:
  Primary Project should stay selected after user added to new project

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  Prereq: multi domain enabled

  == Scenario ==
  1. Have a domain with 2 projects, p1 and p2.
  2. Create userA and set userA's primary project to p1.
  3. Update project members of p2 and add userA as member.  Now, userA is part 
of both projects.
  4. Now go to edit password for userA.  You'll notice on the modal, that the 
Primary Project isn't set.  You have to *reselect* before you can save.  See 
attached image.

  ==> The Primary Project should have stayed as p1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347840/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306835] Re: V3 list users filter by email address throws exception

2014-10-02 Thread Adam Gandelman
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1306835

Title:
  V3 list users filter by email address throws exception

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released
Status in OpenStack Manuals:
  In Progress

Bug description:
  The V3 list_users filter by email throws an exception. There is no such
  attribute 'email'.

  keystone.common.wsgi): 2014-04-11 23:09:00,422 ERROR type object 'User' has 
no attribute 'email'
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 206, 
in __call__
  result = method(context, **params)
File "/usr/lib/python2.7/dist-packages/keystone/common/controller.py", line 
183, in wrapper
  return f(self, context, filters, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/controllers.py", 
line 284, in list_users
  hints=hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 
52, in wrapper
  return f(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 
189, in wrapper
  return f(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 
328, in list_users
  ref_list = driver.list_users(hints or driver_hints.Hints())
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
227, in wrapper
  return f(self, hints, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/sql.py", 
line 132, in list_users
  user_refs = sql.filter_limit_query(User, query, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
374, in filter_limit_query
  query = _filter(model, query, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
326, in _filter
  filter_dict = exact_filter(model, filter_, filter_dict, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
312, in exact_filter
  if isinstance(getattr(model, key).property.columns[0].type,
  AttributeError: type object 'User' has no attribute 'email'
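
  A hedged sketch of the defensive check the trace suggests: treat filter
  keys that are not columns on the model as no-ops (or reject them cleanly)
  instead of letting getattr() raise. The function shape is illustrative:

  def exact_filter(model, key, value, filter_dict):
      # 'email' is not a column on the User model, so skip unknown keys
      # rather than crashing the whole list_users call.
      if not hasattr(model, key):
          return filter_dict
      filter_dict[key] = value
      return filter_dict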

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1306835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334791] Re: horizon not writing region when generating openrc file

2014-10-02 Thread Adam Gandelman
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1334791

Title:
  horizon not writing region when generating openrc file

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  When generating the openrc file, Horizon is not setting the
  OS_REGION_NAME.

  It's not even in the template file, which explains why it doesn't
  work.

  
openstack_dashboard/dashboards/project/access_and_security/templates/access_and_security/api_access/openrc.sh.template

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1334791/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1209343] Re: LDAP connection code does not provide ldap.set_option(ldap.OPT_X_TLS_CACERTFILE) for ldaps protocol

2014-10-02 Thread Adam Gandelman
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1209343

Title:
  LDAP connection code does not provide
  ldap.set_option(ldap.OPT_X_TLS_CACERTFILE) for ldaps protocol

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released

Bug description:
  The HP Enterprise Directory LDAP servers require a CA certificate file
  for ldaps connections. Sample working Python code:

  import ldap

  ldap.set_option(ldap.OPT_X_TLS_CACERTFILE,
                  "d:/etc/ssl/certs/hpca2ssG2_ns.cer")
  ldap_client = ldap.initialize(host)
  ldap_client.protocol_version = ldap.VERSION3

  ldap_client.simple_bind_s(binduser, bindpw)

  filter = '(uid=mark.m*)'
  attrs = ['cn', 'mail', 'uid', 'hpStatus']

  r = ldap_client.search_s(base, scope, filter, attrs)

  for dn, entry in r:
      print 'dn=', repr(dn)

      for k in entry.keys():
          print '\t', k, '=', entry[k]

  The current H-2 "keystone/common/ldap/core.py" file only calls this
  ldap.set_option for TLS connections. I have attached a screenshot
  showing the change I had to make to core.py so that the
  "ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, tls_cacertfile)" statement
  also gets executed for ldaps connections. Basically, I pulled the
  set_option code out of the "if tls_cacertfile:" block.
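
  A hedged sketch of the adjustment described above: apply the
  CA-certificate option whenever one is configured, so ldaps:// connections
  get it as well as START_TLS. Names are illustrative, not keystone's exact
  code:

  import ldap

  def connect(url, tls_cacertfile=None, use_tls=False):
      if tls_cacertfile:
          # Needed for both ldaps:// and START_TLS connections.
          ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, tls_cacertfile)
      conn = ldap.initialize(url)
      conn.protocol_version = ldap.VERSION3
      if use_tls:
          conn.start_tls_s()
      return conn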

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1209343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294994] Re: Managers instantiated multiple times

2014-10-02 Thread Adam Gandelman
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1294994

Title:
  Managers instantiated multiple times

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released

Bug description:
  A number of the managers are instantiated multiple times and this can
  cause odd edge cases where they will have differing dependency
  injection results, differing configurations, etc.
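
  To illustrate, a common remedy is to construct each manager once and hand
  the same instance to every consumer; a small sketch (names are
  illustrative, not Keystone's actual code):

  _MANAGERS = {}

  def get_manager(name, factory):
      # Reuse a single instance per manager name so every caller sees the
      # same dependency-injection results and configuration.
      if name not in _MANAGERS:
          _MANAGERS[name] = factory()
      return _MANAGERS[name]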

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1294994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1325143] Re: Eliminate use of with_lockmode('update')

2014-10-02 Thread Adam Gandelman
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1325143

Title:
  Eliminate use of with_lockmode('update')

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released

Bug description:
  As discussed here:
  http://lists.openstack.org/pipermail/openstack-dev/2014-May/035264.html
  the use of "with_lockmode('update')" can cause a number of issues when
  run on top of MySQL+Galera, because Galera does not support the
  'SELECT ... FOR UPDATE' SQL call.

  We currently only use with_lockmode('update') for coordinating
  consuming trusts (limited use trusts).

  We should eliminate this and handle the coordination of consumption to
  ensure only the specified number of tokens can be issued from a trust.
  Unfortunately, this is not as straightforward as it could be; we need
  to handle the following deployment scenarios:

  * Eventlet
  * Multiple Keystone Processes (same physical server) [same issue as mod_wsgi]
  * Multiple Keystone Processes (different physical servers)

  The first and second ones could be handled with the lockutils
  (external file-based) locking decorator. The last scenario will take
  more thought.
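
  As a rough sketch only, one commonly suggested alternative to
  'SELECT ... FOR UPDATE' is an atomic compare-and-swap UPDATE; the
  TrustModel name and its remaining_uses column below are assumptions,
  not Keystone's actual schema:

  def consume_trust_use(session, trust_id):
      # Decrement remaining_uses only if uses are left; no row lock is
      # held across a separate read.
      rows = (session.query(TrustModel)
              .filter(TrustModel.id == trust_id)
              .filter(TrustModel.remaining_uses > 0)
              .update({'remaining_uses': TrustModel.remaining_uses - 1},
                      synchronize_session=False))
      return rows == 1  # False: trust exhausted, or another process won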

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1325143/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330955] Re: Lock wait timeout exceeded while updating status for floatingips

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330955

Title:
  Lock wait timeout exceeded while updating status for floatingips

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  Lock timeout occurred when updating floating IP.

  2014-06-15 12:50:41.052 15781 TRACE neutron.openstack.common.rpc.amqp
  OperationalError: (OperationalError) (1205, 'Lock wait timeout
  exceeded; try restarting transaction') 'UPDATE floatingips SET
  status=%s WHERE floatingips.id = %s' ('ACTIVE',
  'a030bb1e-31f0-42d7-84fc-520856f0ee66')

  This is probably introduced in Icehouse with:
  https://review.openstack.org/#/c/66866/

  More info at Red Hat bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=1109577

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1330955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335437] Re: LDAP attributes mapped to None can cause 500 errors

2014-10-02 Thread Adam Gandelman
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1335437

Title:
  LDAP attributes mapped to None can cause 500 errors

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released

Bug description:
  When LDAP is being used as a backend, attributes that are mapped to
  'None' will trigger a 500 error if they are not also configured to be
  ignored. This can be easily reproduced by modifying the default
  config as follows:

  -
  # List of attributes stripped off the user on update. (list
  # value)
  #user_attribute_ignore=default_project_id,tenants
  user_attribute_ignore=tenants

  # LDAP attribute mapped to default_project_id for users.
  # (string value)
  #user_default_project_id_attribute=
  -

  If you then perform a 'keystone user-list', it will trigger a 500
  error:

  -
  [root@keystone ~(keystone_admin)]# keystone user-list
  Authorization Failed: An unexpected error prevented the server from 
fulfilling your request. (HTTP 500)
  -

  The end of the stacktrace in keystone.log clearly shows the problem:

  -
  2014-06-28 06:23:36.366 21931 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/ldap/core.py", line 502, in 
_ldap_res_to_model
  2014-06-28 06:23:36.366 21931 TRACE keystone.common.wsgi v = 
lower_res[self.attribute_mapping.get(k, k).lower()]
  2014-06-28 06:23:36.366 21931 TRACE keystone.common.wsgi AttributeError: 
'NoneType' object has no attribute 'lower'
  -
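
  For illustration, the defensive fix is to treat an attribute mapped to
  None as absent instead of calling .lower() on it; a rough sketch (not
  the actual Keystone patch, helper name is illustrative):

  def mapped_value(attribute_mapping, lower_res, attr):
      mapped = attribute_mapping.get(attr, attr)
      if mapped is None:
          # The deployer mapped this attribute to nothing; skip it rather
          # than crash with "'NoneType' object has no attribute 'lower'".
          return None
      return lower_res.get(mapped.lower())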

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1335437/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336596] Re: Cisco N1k: Clear entries in n1kv specific tables on rollbacks

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336596

Title:
  Cisco N1k: Clear entries in n1kv specific tables on rollbacks

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  During rollback operations, the resource is cleaned up from the neutron
  database but leaves a few stale entries in the n1kv-specific tables.
  VLAN/VXLAN allocation tables are inconsistent during network rollbacks.
  The VM-Network table is left inconsistent during port rollbacks.
  Explicitly clearing the ProfileBinding table entry (during network
  profile rollbacks) is not required, as delete_network_profile
  internally takes care of it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1336596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1349491] Re: [OSSA 2014-027] Persistent XSS in the Host Aggregates interface (CVE-2014-3594)

2014-10-02 Thread Adam Gandelman
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1349491

Title:
  [OSSA 2014-027] Persistent XSS in the Host Aggregates interface
  (CVE-2014-3594)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Received 2014-07-28 18:08:47 +0200 via encrypted E-mail from "Dennis
  Felsch ":

  Hi everyone,

  We spotted an issue with Horizon in OpenStack Icehouse and the current
  development version of Juno (older versions not tested):

  The interface for Host Aggregates is vulnerable to persistent XSS.

  Steps to reproduce the issue:

   * Log into Horizon as admin
   * Go to "Host Aggregates"
   * Create a new host aggregate
   * Enter some name and an availability zone like this: 
   * Save
   * See alert pop up

  Because we are researchers, we are happy to help you, whenever we can.
  However, from the research point of view, it would be really nice to get
  some acknowledgment on your site about this issue. Is something
  like this possible?

  The people working on this are:
  Dennis Felsch (me), dennis.fel...@ruhr-uni-bochum.de
  Mario Heiderich, mario.heider...@cure53.de

  Please let me know if you need more info.

  Greetings,
  Dennis

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1349491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341954] Re: suds client subject to cache poisoning by local attacker

2014-10-02 Thread Adam Gandelman
** Changed in: cinder/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341954

Title:
  suds client subject to cache poisoning by local attacker

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Gantt:
  New
Status in OpenStack Compute (Nova):
  Fix Released
Status in Oslo VMware library for OpenStack projects:
  Fix Released
Status in OpenStack Security Advisories:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  
  The suds project appears to be largely unmaintained upstream. The
  default cache implementation stores pickled objects to a predictable
  path in /tmp. This can be used by a local attacker to redirect SOAP
  requests via symlinks or run a privilege escalation / code execution
  attack via a pickle exploit.

  cinder/requirements.txt:suds>=0.4
  gantt/requirements.txt:suds>=0.4
  nova/requirements.txt:suds>=0.4
  oslo.vmware/requirements.txt:suds>=0.4

  
  The details are available here - 
  https://bugzilla.redhat.com/show_bug.cgi?id=978696
  (CVE-2013-2217)

  Although this is an unlikely attack vector, steps should be taken to
  prevent this behaviour. Potential ways to fix this are explicitly
  setting the cache location to a directory created via
  tempfile.mkdtemp(), disabling the cache with
  client.set_options(cache=None), or using a custom cache implementation
  that doesn't load / store pickled objects from an insecure location.
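
  To illustrate, the first two mitigations named above would look roughly
  like this (a sketch only; the ObjectCache signature is an assumption,
  and wsdl_url stands in for the real WSDL location):

  import tempfile

  from suds.cache import ObjectCache
  from suds.client import Client

  # Option 1: disable the pickle-based cache entirely.
  client = Client(wsdl_url)
  client.set_options(cache=None)

  # Option 2: keep caching, but in a private directory created for this
  # process instead of a predictable path under /tmp.
  client = Client(wsdl_url, cache=ObjectCache(location=tempfile.mkdtemp()))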

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1341954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1295128] Re: Error getting keystone related informations when running keystone in httpd

2014-10-02 Thread Adam Gandelman
** Changed in: horizon/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1295128

Title:
  Error getting keystone related informations when running keystone in
  httpd

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released

Bug description:
  1. Need to deploy keystone on apache: 
http://docs.openstack.org/developer/keystone/apache-httpd.html
  2. Update keystone endpoints to http://192.168.94.129/keystone/main/v2.0 and
  http://192.168.94.129/keystone/main/v2.0
  3. Edit openstack_dashboard/local/local_settings.py, update
  OPENSTACK_KEYSTONE_URL = "http://%s/keystone/main/v2.0" % OPENSTACK_HOST
  4. Visit dashboard, 
   * Error on dashboard: `Error: Unable to retrieve project list.`
   * Error in log:
  Not Found: Not Found (HTTP 404)
  Traceback (most recent call last):
File 
"/opt/stack/horizon/openstack_dashboard/dashboards/admin/overview/views.py", 
line 63, in get_data
  projects, has_more = api.keystone.tenant_list(self.request)
File "/opt/stack/horizon/openstack_dashboard/api/keystone.py", line 266, in 
tenant_list
  tenants = manager.list(limit, marker)
File "/opt/stack/python-keystoneclient/keystoneclient/v2_0/tenants.py", 
line 118, in list
  tenant_list = self._list("/tenants%s" % query, "tenants")
File "/opt/stack/python-keystoneclient/keystoneclient/base.py", line 106, 
in _list
  resp, body = self.client.get(url)
File "/opt/stack/python-keystoneclient/keystoneclient/httpclient.py", line 
578, in get
  return self._cs_request(url, 'GET', **kwargs)
File "/opt/stack/python-keystoneclient/keystoneclient/httpclient.py", line 
575, in _cs_request
  **kwargs)
File "/opt/stack/python-keystoneclient/keystoneclient/httpclient.py", line 
554, in request
  resp = super(HTTPClient, self).request(url, method, **kwargs)
File "/opt/stack/python-keystoneclient/keystoneclient/baseclient.py", line 
21, in request
  return self.session.request(url, method, **kwargs)
File "/opt/stack/python-keystoneclient/keystoneclient/session.py", line 
209, in request
  raise exceptions.from_response(resp, method, url)
  NotFound: Not Found (HTTP 404)

  
  But using the keystoneclient command line everything works fine.
  $ keystone  tenant-list
  +----------------------------------+--------------------+---------+
  | id                               | name               | enabled |
  +----------------------------------+--------------------+---------+
  | 9542f4d212064b96addcfbca9fd530ee | admin              | True    |
  | 5e317523a51745d1a65f4b166b85dd1b | demo               | True    |
  | 70058501677e4c2ea7cef31a7ddbd48d | invisible_to_admin | True    |
  | 246ef23151354782aa75850cde8501e8 | service            | True    |
  +----------------------------------+--------------------+---------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1295128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1313458] Re: v3 catalog not implemented for templated backend

2014-10-02 Thread Adam Gandelman
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1313458

Title:
  v3 catalog not implemented for templated backend

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released

Bug description:
  
  The templated backend didn't implement the method to get a v3 catalog.
  So you couldn't get a valid v3 token when the templated catalog backend
  was configured.
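
  For illustration, a v3 catalog can be derived from the v2-style catalog
  the templated driver already builds, by reshaping
  {region: {type: {publicURL, ...}}} into the v3 list-of-services form; a
  rough sketch (function and variable names are illustrative, not the
  actual fix):

  def v2_catalog_to_v3(v2_catalog):
      services = {}
      for region, svc_types in v2_catalog.items():
          for svc_type, info in svc_types.items():
              svc = services.setdefault(
                  svc_type, {'type': svc_type,
                             'name': info.get('name', svc_type),
                             'endpoints': []})
              for key, interface in (('publicURL', 'public'),
                                     ('adminURL', 'admin'),
                                     ('internalURL', 'internal')):
                  if key in info:
                      svc['endpoints'].append({'interface': interface,
                                               'region': region,
                                               'url': info[key]})
      return list(services.values())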

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1313458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338880] Re: Any user can set a network as external

2014-10-02 Thread Adam Gandelman
** Changed in: neutron/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1338880

Title:
  Any user can set a network as external

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  Fix Released

Bug description:
  Even though the default policy.json restricts the creation of external
  networks to admin_only, any user can update a network as external.

  I could verify this with the following test (PseudoPython):

  project: ProjectA
  user: ProjectMemberA has Member role on project ProjectA.

  with network(name="UpdateNetworkExternalRouter", tenant_id=ProjectA,
               router_external=False) as test_network:
      self.project_member_a_neutron_client.update_network(
          network=test_network, router_external=True)

  project_member_a_neutron_client encapsulates a python-neutronclient,
  and here is what the method does.

  def update_network(self, network, name=None, shared=None,
                     router_external=None):
      body = {
          'network': {
          }
      }
      if name is not None:
          body['network']['name'] = name
      if shared is not None:
          body['network']['shared'] = shared
      if router_external is not None:
          body['network']['router:external'] = router_external

      self.python_neutronclient.update_network(network=network.id,
                                               body=body)['network']

  
  The expected behaviour is that the operation should not be allowed, but the 
user without admin privileges is able to perform such change.

  Trying to add an "update_network:router:external": "rule:admin_only"
  policy did not work and broke other operations a regular user should
  be able to do.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1338880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312971] Re: mod_wsgi exception processing UTF-8 Header

2014-10-02 Thread Adam Gandelman
** Changed in: keystone/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1312971

Title:
  mod_wsgi exception processing UTF-8 Header

Status in OpenStack Identity (Keystone):
  Fix Released
Status in Keystone icehouse series:
  Fix Released
Status in Python client library for Keystone:
  Triaged

Bug description:
  Using master version of python-keystoneclient (not yet released)
  gives the following error when running with Keystone in Apache HTTPD
  and requesting a V3 Token

  
   [Fri Apr 25 18:28:14.775659 2014] [:error] [pid 5075] [remote 
10.10.63.250:2982] mod_wsgi (pid=5075): Exception occurred processing WSGI 
script '/var/www/cgi-bin/keystone/main'.
   [Fri Apr 25 18:28:14.775801 2014] [:error] [pid 5075] [remote 
10.10.63.250:2982] TypeError: expected byte string object for header value, 
value of type unicode found

  It's due to the UTF-8 encoding in keystoneclient/common/cms.py, which
  makes the PKI token unicode instead of str.
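
  For illustration, the usual fix on the client side is to encode the
  token to a native str before it is placed in a header (a sketch only,
  not the actual keystoneclient patch; Python 2 semantics assumed):

  def to_header_value(value):
      # WSGI expects byte-string header values on Python 2, so a unicode
      # PKI token must be UTF-8 encoded first.
      if isinstance(value, unicode):
          return value.encode('utf-8')
      return value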

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1312971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
