[Yahoo-eng-team] [Bug 1587973] Re: DhcpAgentSchedulerDbMixin.get_dhcp_agents_hosting_networks fails for many networks

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324005
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=6290af9cf97d4ee323ba988733e026a27695e9fa
Submitter: Jenkins
Branch: master

commit 6290af9cf97d4ee323ba988733e026a27695e9fa
Author: Brandon Logan 
Date:   Wed Jun 1 11:10:08 2016 -0500

Fix getting dhcp agents for multiple networks

The method to get dhcp agents for networks takes a list of
network_ids as an argument.  However, this method is really only
being used with a list of one network_id; there don't seem to be any
usages where it is used for many network_ids.  Since that path is not
being exercised, this hasn't been caught until now.

Change-Id: I6f07ed50b29448d279a4fd5f9d392af3a8490340
Closes-Bug: #1587973


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1587973

Title:
  DhcpAgentSchedulerDbMixin.get_dhcp_agents_hosting_networks fails for
  many networks

Status in neutron:
  Fix Released

Bug description:
  DhcpAgentSchedulerDbMixin.get_dhcp_agents_hosting_networks takes
  network_ids as a parameter but fails when more than one network_id is
  passed in, because of incorrect usage of SQLAlchemy's in_() clause.
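
  For illustration, a minimal, self-contained sketch of that pitfall (the
  Binding model and session below are stand-ins, not the actual neutron
  code; SQLAlchemy 1.4+ style imports are assumed). Python's `in`
  operator cannot build a SQL IN clause from a column; the column's
  in_() method can:

  # Hedged sketch of the SQLAlchemy pitfall; Binding is a stand-in model,
  # not neutron's NetworkDhcpAgentBinding.
  from sqlalchemy import Column, String, create_engine
  from sqlalchemy.orm import Session, declarative_base

  Base = declarative_base()

  class Binding(Base):
      __tablename__ = 'bindings'
      network_id = Column(String(36), primary_key=True)

  engine = create_engine('sqlite://')
  Base.metadata.create_all(engine)
  network_ids = ['net-1', 'net-2']

  with Session(engine) as session:
      # Wrong: Python's `in` bool-evaluates the column comparison and,
      # depending on the SQLAlchemy version, raises TypeError or builds
      # a bogus filter:
      #   session.query(Binding).filter(Binding.network_id in network_ids)

      # Correct: in_() renders "network_id IN ('net-1', 'net-2')":
      query = session.query(Binding).filter(Binding.network_id.in_(network_ids))
      print(query.all())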

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1587973/+subscriptions



[Yahoo-eng-team] [Bug 1575055] Re: check_instance_id() error on reboots when using config-drive

2016-06-02 Thread Scott Moser
I'm pretty sure this is not fix-released in xenial.

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1575055

Title:
  check_instance_id() error on reboots when using config-drive

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed

Bug description:
  Problem Description
  ===================
  When using a config-drive to provide meta-data to cloud-init on Ubuntu
  (for a Linux guest running in KVM for z Systems), we get a
  check_instance_id() error whenever we soft reboot after the
  (successful) initial boot.

  The error shows:

  [5.283203] cloud-init[1637]: Cloud-init v. 0.7.7 running 'init-local' at Sat, 23 Apr 2016 00:50:58 +. Up 5.25 seconds.
  [5.283368] cloud-init[1637]: 2016-04-22 20:50:58,839 - util.py[WARNING]: failed of stage init-local
  [5.286659] cloud-init[1637]: failed run of stage init-local
  [5.286770] cloud-init[1637]: 
  [5.286849] cloud-init[1637]: Traceback (most recent call last):
  [5.286924] cloud-init[1637]:   File "/usr/bin/cloud-init", line 520, in status_wrapper
  [5.286998] cloud-init[1637]: ret = functor(name, args)
  [5.287079] cloud-init[1637]:   File "/usr/bin/cloud-init", line 250, in main_init
  [5.287152] cloud-init[1637]: init.fetch(existing=existing)
  [5.287225] cloud-init[1637]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 322, in fetch
  [5.287298] cloud-init[1637]: return self._get_data_source(existing=existing)
  [5.287371] cloud-init[1637]:   File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 229, in _get_data_source
  [5.287445] cloud-init[1637]: ds.check_instance_id(self.cfg)):
  [5.287518] cloud-init[1637]: TypeError: check_instance_id() takes 1 positional argument but 2 were given
  [5.287592] cloud-init[1637]: 

  [FAILED] Failed to start Initial cloud-init job (pre-networking).

  
  The failure of the init-local pre-networking stage does seem to lead
  to a boot-up delay, as cloud-init tries to search for networking
  outside of the already saved networking data.

  Otherwise the error is purely cosmetic, as later init modules find the
  correct interfaces (or let the existing IP configuration take over)
  and bring them up.

  The original problem was found outside of OpenStack with stand-alone
  cloud-config ISO images, but we have been able to reproduce the
  problem within an OpenStack ICM environment.

  The team has had some success working around the problem by patching
  the check_instance_id function in /usr/lib/python3/dist-
  packages/cloudinit/sources/DataSourceConfigDrive.py so that it
  accepts an extra argument, e.g.:

  ubuntu@markvercd:~$ sudo cat check_instance_id.patch
  --- /usr/lib/python3/dist-packages/cloudinit/sources/DataSourceConfigDrive.py    2016-04-06 15:29:59.0 +
  +++ /usr/lib/python3/dist-packages/cloudinit/sources/DataSourceConfigDrive.py.new    2016-04-11 22:53:47.799867139 +
  @@ -155,7 +155,7 @@
   
           return True
   
  -    def check_instance_id(self):
  +    def check_instance_id(self,somecfg):
           # quickly (local check only) if self.instance_id is still valid
           return sources.instance_id_matches_system_uuid(self.get_instance_id())
   
  ubuntu@markvercd:~$
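
  As a self-contained illustration of the failure and of the shape of
  the workaround (the parameter name somecfg comes from the patch above;
  a defaulted argument would additionally keep one-argument callers
  working):

  # Minimal demo of the TypeError above: a datasource whose
  # check_instance_id takes only `self` fails when stages.py passes a
  # config argument, while a patched signature with a default accepts
  # both call styles.
  class OldDataSource:
      def check_instance_id(self):
          return True

  class PatchedDataSource:
      def check_instance_id(self, somecfg=None):
          return True

  cfg = {}
  try:
      OldDataSource().check_instance_id(cfg)
  except TypeError as e:
      print(e)  # takes 1 positional argument but 2 were given

  print(PatchedDataSource().check_instance_id(cfg))  # True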

  ---uname output---
  Linux k6mpathcl.pokprv.stglabs.ibm.com 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:31:26 UTC 2016 s390x s390x s390x GNU/Linux

  Machine Type = KVM guest on a z13 (2827-732) LPAR

  Steps to Reproduce
  ==================
  1) set up an Ubuntu guest image with cloud-init
  2) pass in an ISO image with cloud-config data on a cdrom device
  3) boot up successfully with the cloud-config data
  4) attempt a soft reboot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1575055/+subscriptions



[Yahoo-eng-team] [Bug 1586931] Re: TestServerBasicOps: Test fails when deleting server and floating ip almost at the same time

2016-06-02 Thread Matthew Treinish
** Changed in: tempest
   Status: Won't Fix => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586931

Title:
  TestServerBasicOps: Test fails when deleting server and floating ip
  almost at the same time

Status in OpenStack Compute (nova):
  New
Status in tempest:
  Incomplete

Bug description:
  In
  
tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops,
  after last step:
  self.servers_client.delete_server(self.instance['id']) it doesn't wait
  for the server to be deleted, and then deletes the floating ip
  immediately in the clean up, this will cause faiure:

  Here is the partial log:
  2016-05-29 21:51:29.499 29791 INFO tempest.lib.common.rest_client [req-c3588ac4-21ca-47c3-bdb1-62088efd7a8b ] Request (TestServerBasicOps:test_server_basic_ops): 204 DELETE https://:8774/v2/159886ce087a4f8fbfbcab14947d96b1/servers/6d44763b-ea79-4b5b-b57e-714191802c7c 0.465s
  2016-05-29 21:51:29.499 29791 DEBUG tempest.lib.common.rest_client [req-c3588ac4-21ca-47c3-bdb1-62088efd7a8b ] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'status': '204', 'content-length': '0', 'content-location': 'https://:8774/v2/159886ce087a4f8fbfbcab14947d96b1/servers/6d44763b-ea79-4b5b-b57e-714191802c7c', 'date': 'Mon, 30 May 2016 02:51:29 GMT', 'x-compute-request-id': 'req-c3588ac4-21ca-47c3-bdb1-62088efd7a8b', 'content-type': 'application/json', 'connection': 'close'}
  Body:  _log_request_full tempest/lib/common/rest_client.py:422
  2016-05-29 21:51:30.410 29791 INFO tempest.lib.common.rest_client [req-db2323f5-3d58-4fd7-ae51-44f5525c6689 ] Request (TestServerBasicOps:_run_cleanups): 500 DELETE https://:8774/v2/159886ce087a4f8fbfbcab14947d96b1/os-floating-ips/948912f6-ce03-4856-922b-59c4f16d3740 0.910s
  2016-05-29 21:51:30.410 29791 DEBUG tempest.lib.common.rest_client [req-db2323f5-3d58-4fd7-ae51-44f5525c6689 ] Request - Headers: {'Content-Type': 'application/json', 'Accept': 'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'status': '500', 'content-length': '224', 'content-location': 'https://:8774/v2/159886ce087a4f8fbfbcab14947d96b1/os-floating-ips/948912f6-ce03-4856-922b-59c4f16d3740', 'date': 'Mon, 30 May 2016 02:51:30 GMT', 'x-compute-request-id': 'req-db2323f5-3d58-4fd7-ae51-44f5525c6689', 'content-type': 'application/json; charset=UTF-8', 'connection': 'close'}
  Body: {"computeFault": {"message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n", "code": 500}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1586931/+subscriptions



[Yahoo-eng-team] [Bug 1575055] Re: check_instance_id() error on reboots when using config-drive

2016-06-02 Thread airah
** Changed in: cloud-init (Ubuntu Xenial)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1575055

Title:
  check_instance_id() error on reboots when using config-drive

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed


To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1575055/+subscriptions



[Yahoo-eng-team] [Bug 1588593] [NEW] If the Neutron IPAM driver is set, using the 'net-delete' command to delete a network created when ipam_driver was not set seems to cause a dead loop

2016-06-02 Thread xiewj
Public bug reported:

In Mitaka:

When ipam_driver is not set, create a network with a subnet. Then switch
to the reference implementation of the Neutron IPAM driver by setting
'ipam_driver = internal', and use the 'net-delete' command to delete the
network that was created while ipam_driver was not set. The command
seems to cause a dead loop.


1) Specify ‘ipam_driver = ’ in the neutron.conf file, then create a network with a subnet:
[root@localhost devstack]# neutron net-create net_vlan_01 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 2
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2016-06-03T02:42:50                  |
| description               |                                      |
| id                        | 666f8a6a-e3e3-4183-84b3-a43c92b050f5 |
| ipv4_address_scope        |                                      |
| ipv6_address_scope        |                                      |
| mtu                       | 1500                                 |
| name                      | net_vlan_01                          |
| port_security_enabled     | True                                 |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  | 2                                    |
| qos_policy_id             |                                      |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| tenant_id                 | 69fa49e368d340679ab3d05de3426bfa     |
| updated_at                | 2016-06-03T02:42:50                  |
+---------------------------+--------------------------------------+
[root@localhost devstack]# neutron subnet-create net_vlan_01 --name subnet_vlan_01 101.1.1.0/24
Created a new subnet:
+-------------------+----------------------------------------------+
| Field             | Value                                        |
+-------------------+----------------------------------------------+
| allocation_pools  | {"start": "101.1.1.2", "end": "101.1.1.254"} |
| cidr              | 101.1.1.0/24                                 |
| created_at        | 2016-06-03T02:42:56                          |
| description       |                                              |
| dns_nameservers   |                                              |
| enable_dhcp       | True                                         |
| gateway_ip        | 101.1.1.1                                    |
| host_routes       |                                              |
| id                | 1c60dbd7-ae1e-4d7c-a767-ec3106cc62ad         |
| ip_version        | 4                                            |
| ipv6_address_mode |                                              |
| ipv6_ra_mode      |                                              |
| name              | subnet_vlan_01                               |
| network_id        | 666f8a6a-e3e3-4183-84b3-a43c92b050f5         |
| subnetpool_id     |                                              |
| tenant_id         | 69fa49e368d340679ab3d05de3426bfa             |
| updated_at        | 2016-06-03T02:42:56                          |
+-------------------+----------------------------------------------+
[root@localhost devstack]# neutron net-list
+--------------------------------------+-------------+---------------------------------------------------+
| id                                   | name        | subnets                                           |
+--------------------------------------+-------------+---------------------------------------------------+
| 666f8a6a-e3e3-4183-84b3-a43c92b050f5 | net_vlan_01 | 1c60dbd7-ae1e-4d7c-a767-ec3106cc62ad 101.1.1.0/24 |
+--------------------------------------+-------------+---------------------------------------------------+
[root@localhost devstack]# neutron port-list
| e759c9df-db2d-477b-94e9-02d80844f7f9 |  | fa:16:3e:2d:e5:6e | {"subnet_id": "1c60dbd7-ae1e-4d7c-a767-ec3106cc62ad", "ip_address": "101.1.1.2"}

2) Modify ‘ipam_driver = ‘internal’’ in the neutron.conf file, then restart the neutron-server service:
[root@localhost devstack]# vi /etc/neutron/neutron.conf

# Neutron IPAM (IP address 

[Yahoo-eng-team] [Bug 1586931] Re: TestServerBasicOps: Test fails when deleting server and floating ip almost at the same time

2016-06-02 Thread Ken'ichi Ohmichi
This seems to be a problem on the nova side, because internal errors
should not happen on the nova side no matter what requests are received
from the client side (tempest in this case).

** Changed in: tempest
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586931

Title:
  TestServerBasicOps: Test fails when deleting server and floating ip
  almost at the same time

Status in OpenStack Compute (nova):
  New
Status in tempest:
  Won't Fix


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1586931/+subscriptions



[Yahoo-eng-team] [Bug 1580809] Re: hz-expand-detail cannot be used outside of hz-table

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/321089
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=fe98059ca108d80b412e73ec373904c310795b37
Submitter: Jenkins
Branch: master

commit fe98059ca108d80b412e73ec373904c310795b37
Author: Tyr Johanson 
Date:   Wed May 25 09:49:06 2016 -0600

Relax hz-table parent requirement

hz-expand-detail is a pretty handy directive to use in normal
HTML tables. This patch changes the parent directive requirement
to be optional.

Change-Id: Ic0b0c7af6143ec1d654cb277503342693d051b50
Closes-Bug: #1580809


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1580809

Title:
  hz-expand-detail cannot be used outside of hz-table

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  hz-expand-detail is a reasonably handy directive that allows row
  expansion. It was modified to allow a table user to detect row
  expansion. However, this modification added a required dependency on
  the hz-table controller.

  This is not necessary, is difficult to maintain, and prevents the
  reuse of this directive with anything other than hz-table.

  Instead, the directive should simply emit an event and any parent that
  cares can listen for it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1580809/+subscriptions



[Yahoo-eng-team] [Bug 1588560] [NEW] neutron-lbaas devstack plugin for ubuntu hard-codes trusty

2016-06-02 Thread Stephen Balukoff
Public bug reported:

Ubuntu 16.04 just came out, and it's likely people will want to start
testing neutron-lbaas on this (and potentially other) releases of
Ubuntu. However, presently the neutron-lbaas devstack plugin.sh hard-
codes trusty in a couple of places. This script should be updated to
dynamically determine the Ubuntu codename in use on the current devstack
(i.e. so we don't break compatibility with trusty, but also allow for
testing on other Ubuntu releases).

** Affects: neutron
 Importance: Undecided
 Assignee: Stephen Balukoff (sbalukoff)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588560

Title:
  neutron-lbaas devstack plugin for ubuntu hard-codes trusty

Status in neutron:
  In Progress

Bug description:
  Ubuntu 16.04 just came out, and it's likely people will want to start
  testing neutron-lbaas on this (and potentially other) releases of
  Ubuntu. However, presently the neutron-lbaas devstack plugin.sh hard-
  codes trusty in a couple of places. This script should be updated to
  dynamically determine the Ubuntu codename in use on the current
  devstack (i.e. so we don't break compatibility with trusty, but also
  allow for testing on other Ubuntu releases).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588560/+subscriptions



[Yahoo-eng-team] [Bug 1517883] Re: switch from oslo-incubator cache code to dogpile

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/290305
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=56efc8ac76415070041d125a6e30523caf9b3cbd
Submitter: Jenkins
Branch: master

commit 56efc8ac76415070041d125a6e30523caf9b3cbd
Author: Fang Zhen 
Date:   Wed Mar 9 14:46:12 2016 +0800

Switch to oslo.cache

Oslo incubator is about to stop supporting the cache module. We can
use oslo.cache instead. The legacy memory backend is replaced by
oslo_cache.dict.

Closes-Bug: #1517883

Change-Id: I108242ca9f27c9ec47959ce7615bc7d84cae014b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517883

Title:
  switch from oslo-incubator cache code to dogpile

Status in neutron:
  Fix Released

Bug description:
  Oslo-incubator is about to stop support for the cache module, so we
  should get rid of it. The best candidate is probably dogpile. There is
  also the oslo.cache library, but it has a different API and is
  overkill for our needs (metadata agent optimizations).
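
  For reference, a minimal sketch of the oslo.cache pattern, using the
  oslo_cache.dict backend named in the commit message above (option
  values are illustrative, not neutron's actual wiring):

  # Hedged sketch of oslo.cache with the dictionary backend.
  from oslo_cache import core as cache
  from oslo_config import cfg

  conf = cfg.ConfigOpts()
  cache.configure(conf)  # registers the [cache] options
  conf.set_override('enabled', True, group='cache')
  conf.set_override('backend', 'oslo_cache.dict', group='cache')

  region = cache.create_region()
  cache.configure_cache_region(conf, region)

  region.set('answer', 42)
  print(region.get('answer'))  # 42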

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1517883/+subscriptions



[Yahoo-eng-team] [Bug 1586286] Re: Titles of "create folder" and "upload object" are Untranslated

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/321996
Committed: https://git.openstack.org/cgit/openstack/horizon/commit/?id=a45972600b75a60eb816d441e060b6925c8ea87f
Submitter: Jenkins
Branch: master

commit a45972600b75a60eb816d441e060b6925c8ea87f
Author: Kenji Ishii 
Date:   Fri May 27 16:46:31 2016 +0900

Fix untranslated strings and adding icon to OK button

In ng-container, titles in the Create Folder modal and the Upload
Object modal are untranslated. Also, the OK buttons in these modals
are missing an icon (in the Create Container modal, the OK button
has an icon).

Change-Id: I7b18ceb6d3aafd5b183eb730970c0be792df9f2b
Closes-Bug: #1586286


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1586286

Title:
  Titles of "create folder" and "upload object" are Untranslated

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Titles in "create folder" modal and "upload object" modal are
  Untranslated in ng-container.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1586286/+subscriptions



[Yahoo-eng-team] [Bug 1586434] Re: Service Catalog ignores interface

2016-06-02 Thread Kevin Esensoy
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1586434

Title:
  Service Catalog ignores interface

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  When running Keystone testing in Rally, the OS_INTERFACE/endpoint_type
  variable gets ignored or changed to admin in order to create users and
  do other tasks. While this may be useful for most OpenStack
  installations, we require the endpoint_type to remain "public"
  throughout the entirety of the Rally process.

  The change proposed (https://review.openstack.org/#/c/321809/) simply
  checks for the OS_INTERFACE variable before proceeding to set an
  endpoint_type in the service_catalog.

  Example:

  Params:
  auth_url: https://publicUrl:5000
  endpoint_type: public
  sourced OS_INTERFACE: public
  sourced OS_ENDPOINT_TYPE: publicURL

  You can see that keystone tries to hit the admin endpoint, completely
  disregarding the user's request to hit public.
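
  For comparison, a hedged sketch of explicitly pinning the public
  interface with keystoneauth1 (credentials and URL are placeholders;
  this only demonstrates the knob the report says is being ignored):

  from keystoneauth1 import session
  from keystoneauth1.identity import v3

  auth = v3.Password(auth_url='https://publicUrl:5000/v3',
                     username='rally', password='secret',
                     project_name='demo',
                     user_domain_id='default', project_domain_id='default')
  sess = session.Session(auth=auth)
  # interface='admin' here would select the :35357 endpoint seen in the
  # log below.
  print(sess.get_endpoint(service_type='identity', interface='public'))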

  

   Preparing input task
  


  Input task is:
  {
  "KeystoneBasic.create_delete_user": [
  {
  "args": {},
  "runner": {
  "type": "constant",
  "times": 100,
  "concurrency": 10
  }
  }
  ]
  }

  Task syntax is correct :)
  2016-05-26 13:46:58.139 21751 INFO rally.task.engine [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Starting:  Task validation.
  2016-05-26 13:46:58.149 21751 INFO rally.task.engine [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Starting:  Task validation of scenarios names.
  2016-05-26 13:46:58.151 21751 INFO rally.task.engine [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Completed: Task validation of scenarios names.
  2016-05-26 13:46:58.151 21751 INFO rally.task.engine [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Starting:  Task validation of syntax.
  2016-05-26 13:46:58.153 21751 INFO rally.task.engine [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Completed: Task validation of syntax.
  2016-05-26 13:46:58.153 21751 INFO rally.task.engine [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Starting:  Task validation of semantic.
  2016-05-26 13:46:58.153 21751 INFO rally.task.engine [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Starting:  Task validation check cloud.
  2016-05-26 13:46:58.562 21751 INFO rally.task.engine [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Completed: Task validation check cloud.
  2016-05-26 13:46:58.567 21751 INFO rally.plugins.openstack.context.keystone.users [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Starting:  Enter context: `users`
  2016-05-26 13:46:58.924 21751 WARNING keystoneclient.auth.identity.base [-] Failed to contact the endpoint at http://adminUrl:35357 for discovery. Fallback to using that endpoint as the base url.
  2016-05-26 13:46:58.947 21751 WARNING rally.common.broker [-] Failed to consume a task from the queue: Unable to establish connection to http://adminUrl:35357/domains/default
  2016-05-26 13:46:58.948 21751 INFO rally.plugins.openstack.context.keystone.users [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Starting:  Exit context: `users`
  2016-05-26 13:47:00.360 21751 INFO rally.plugins.openstack.context.keystone.users [-] Task 5eabbbe2-52d1-4d0e-94f3-b13415070de2 | Completed: Exit context: `users`
  Task config is invalid: `Unable to setup context 'users': 'Failed to create the requested number of tenants.'`

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1586434/+subscriptions



[Yahoo-eng-team] [Bug 1327005] Re: Need change host to host_name in host resources

2016-06-02 Thread jichenjc
Per comment #1, marking this invalid.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1327005

Title:
  Need change host to host_name in host resources

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Fix Released

Bug description:
  Steps to reproduce:
  In a Python terminal:
  >>> from novaclient.v1_1 import client
  >>> ct = client.Client("admin", "password", "admin", "http://192.168.1.100:5000/v2.0")
  >>> ct.hosts.get("hostname")

  error:
    File "<stdin>", line 1, in <module>
    File "/opt/stack/python-novaclient/novaclient/v1_1/hosts.py", line 24, in __repr__
      return "<Host: %s>" % self.host_name
    File "/opt/stack/python-novaclient/novaclient/openstack/common/apiclient/base.py", line 464, in __getattr__
      raise AttributeError(k)
  AttributeError: host_name

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1327005/+subscriptions



[Yahoo-eng-team] [Bug 1538448] Re: REST api layer doesn't handle TooManyInstances while doing resize

2016-06-02 Thread jichenjc
per comment #1

** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Status: Won't Fix => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538448

Title:
  REST api layer doesn't handle TooManyInstances while doing resize

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  compute_api may raise TooManyInstances if over quota, but there is no
  handler for it in the REST API layer.
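
  A generic sketch of the missing handling (not nova's actual resize
  code; the exception class is a local stand-in for
  nova.exception.TooManyInstances):

  # Translate the compute layer's over-quota error into a 403 at the
  # REST API boundary instead of letting it surface as a 500.
  import webob.exc

  class TooManyInstances(Exception):
      """Local stand-in for nova.exception.TooManyInstances."""

  def resize_action(compute_api, context, instance, flavor_id):
      try:
          compute_api.resize(context, instance, flavor_id)
      except TooManyInstances as exc:
          raise webob.exc.HTTPForbidden(explanation=str(exc))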

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538448/+subscriptions



[Yahoo-eng-team] [Bug 1348447] Re: Enable metadata when create server groups

2016-06-02 Thread Thomas Herve
** Changed in: heat
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348447

Title:
  Enable metadata when create server groups

Status in heat:
  Won't Fix
Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  In Progress

Bug description:
  The instance_group object already supports instance group metadata,
  but the API extension does not support this.

  We should enable this by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1348447/+subscriptions



[Yahoo-eng-team] [Bug 1361683] Re: Instance pci_devices and security_groups refreshing can break backporting

2016-06-02 Thread Sivasathurappan Radhakrishnan
Since the bug reporter hasn't provided the information requested by
Sean, closing it for now. Feel free to reopen the bug by providing the
requested information and set the bug status back to ''New''.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361683

Title:
  Instance pci_devices and security_groups refreshing can break
  backporting

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In the Instance object, on a remotable operation such as save(), we
  refresh the pci_devices and security_groups with the information we
  get back from the database. Since this *replaces* the objects
  currently attached to the instance object (which might be backlevel)
  with current versions, an older client could get a failure upon
  deserializing the result.

  We need to figure out some way to either backport the results of
  remoteable methods, or put matching backlevel objects into the
  instance during the refresh in the first place.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361683/+subscriptions



[Yahoo-eng-team] [Bug 1043148] Re: snapshots fail with client read timeout when using swift

2016-06-02 Thread Sivasathurappan Radhakrishnan
Closing this bug based on Sam's comment.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1043148

Title:
  snapshots fail with client read timeout when using swift

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Apologies if this is a glance or swift bug (or not a bug at all!) but
  I think I've nailed it down to nova.

  Setup:

  * We are using latest nova in ubuntu precise.
  * using swift as a backend to glance
  * Compute nodes are running nova-compute and nova-network (have 10G ethernet)
  * glance-api and swift-proxy are installed on the same host. (also has 10G ethernet)

  When snapshotting instances we regularly get the snapshot failing.

  Sometimes it works, sometimes it fails adding the 1st hunk and
  sometimes it fails after a few.

  Logs below show it failing after 34 hunks have been added to swift
  successfully (takes around 3 seconds to PUT a hunk until the error)

  The reason I think this has something to do with nova is that I can
  successfully use the glance client from the compute node to upload the
  image.

  There's a lot of log info below, happy to provide more information if
  needed. It been bugging us for sometime.

  I think the client read timeout is between glance and nova as glance
  and swift are on the same host so I would doubt they would timeout.

  Thanks in advance.

  Sam


  
  Nova:
  2012-08-29 07:33:08 ERROR nova.rpc.amqp [req-33589abd-db75-4a1f-b36c-a64f41e8862f 25 23] Exception during message handling
  2012-08-29 07:33:08 TRACE nova.rpc.amqp Traceback (most recent call last):
  2012-08-29 07:33:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
  2012-08-29 07:33:08 TRACE nova.rpc.amqp Locals:{'args': {u'backup_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp u'image_id': u'e866c9ee-80c7-42a0-91c4-d23b9d4edd6a',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp u'image_type': u'snapshot',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp u'instance_uuid': u'2fb5ba40-0f61-4360-bb57-f28871f7cebf',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp u'rotation': None},
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'ctxt': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'e': Invalid(Invalid(),),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'method': u'snapshot_instance',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'node_args': {'backup_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_id': u'e866c9ee-80c7-42a0-91c4-d23b9d4edd6a',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_type': u'snapshot',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'instance_uuid': u'2fb5ba40-0f61-4360-bb57-f28871f7cebf',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'rotation': None},
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'node_func': >,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'self': }
  2012-08-29 07:33:08 TRACE nova.rpc.amqp
  2012-08-29 07:33:08 TRACE nova.rpc.amqp
  2012-08-29 07:33:08 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
  2012-08-29 07:33:08 TRACE nova.rpc.amqp Locals:{'args': (,),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'e': Invalid(Invalid(),),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'event_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'exc_info': (,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp Invalid(Invalid(),),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp ),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'f': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'kw': {'backup_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'context': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_id': u'e866c9ee-80c7-42a0-91c4-d23b9d4edd6a',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_type': u'snapshot',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'instance_uuid': u'2fb5ba40-0f61-4360-bb57-f28871f7cebf',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'rotation': None},
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'level': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'notifier': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'payload': {'args': (,),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'backup_type': None,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'context': ,
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'exception': Invalid(Invalid(),),
  2012-08-29 07:33:08 TRACE nova.rpc.amqp 'image_id': u'e866c9ee-80c7-42a0-91c4-d23b9d4edd6a',
  2012-08-29 07:33:08 TRACE nova.rpc.amqp

[Yahoo-eng-team] [Bug 1348447] Re: Enable metadata when create server groups

2016-06-02 Thread Pushkar Umaranikar
No progress on bug for > 4 weeks. Marking as invalid. Feel free to
reopen.

** Tags added: api

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348447

Title:
  Enable metadata when create server groups

Status in heat:
  Triaged
Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  In Progress

Bug description:
  instance_group object already support instance group metadata but the
  api extension do not support this.

  We should enable this by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1348447/+subscriptions



[Yahoo-eng-team] [Bug 1400574] Re: Creating VMs sometimes fails when using a Mellanox NIC for SR-IOV

2016-06-02 Thread Sivasathurappan Radhakrishnan
Since the bug reporter hasn't provided the necessary information, the
bug has been closed. Feel free to reopen the bug by providing the
requested information and set the bug status back to ''New''.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400574

Title:
  Creating VMs sometimes fails when using a Mellanox NIC for SR-IOV

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  SYMPTOM:
  I used a Mellanox NIC for SR-IOV; creating VMs then has a problem
  (some create successfully, some fail), and traffic is also affected
  (the VLAN of some VMs is wrong).

  CAUSE:
  Due to a particularity of the Mellanox NIC, one PCI number corresponds
  to two physical net ports, so the nova side only scans one net port
  for the PCI device and doesn't perceive the other one.

  In my environment, eth0 has three available VF resources, eth1 has
  four available VF resources.

  the comment of nova-compute.conf:
  pci_passthrough_whitelist={"devname":"eth1","physical_network":"sriov_net2","bandwidths":"0"}
  pci_passthrough_whitelist={"devname":"eth0","physical_network":"sriov_net","bandwidths":"1"}

  Even if the whitelist is correctly configured, nova is still unable to
  get the VF resource information for both network ports from the
  whitelist. The nova side only scans by PCI address: when it has
  scanned one net port for a given PCI device, it does not scan the
  remaining port, so the network behind that port ends up with no VF
  resources and creating a virtual machine on that passthrough plane
  fails. In addition, the port that has been scanned occupies the VF
  resources of the other port, which results in some of the virtual
  machines getting wrong VLAN settings.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400574/+subscriptions



[Yahoo-eng-team] [Bug 1240554] Re: Insecure live migration with libvirt driver

2016-06-02 Thread Pushkar Umaranikar
As Sean suggested, marking this bug as invalid. Feel free to reopen.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240554

Title:
  Insecure live migration with libvirt driver

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  By default, libvirt on the receiving end of a live migration starts a
  qemu process listening on 0.0.0.0 waiting for a tcp connection from
  the sender.  During block migration, qemu-nbd is started similarly.
  This is bad because compute nodes have interfaces on the guest
  network.  As a result, guests can interfere with live migrations.

  There is a flag during migration to remedy this called VIR_MIGRATE_TUNNELLED,
  which tunnels traffic over the libvirt socket (which can be secured with 
TLS).  This seems like a great option. Unfortunately it doesn't work with the 
new nbd-based block migration code, so there isn't a great option for securing 
the traffic.

  Related to this, libvirt just added:

   - Default migration bind()/listen() IP addr in /etc/libvirt/qemu.conf
   - Pass in bind()/listen() IP address to migration APIs

  So with libvirt >= 1.1.4, Nova will have the ability to control the
  interface used

  (Problem originally reported by Vish Ishaya)
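
  A hedged sketch of what tunnelled migration looks like with the
  libvirt Python bindings (host and domain names are placeholders;
  nova's actual call path differs):

  # VIR_MIGRATE_TUNNELLED carries the migration stream over the libvirtd
  # connection (securable with TLS) instead of having qemu listen on
  # 0.0.0.0 for a raw TCP connection from the sender.
  import libvirt

  src = libvirt.open('qemu:///system')
  dst = libvirt.open('qemu+tls://dest.example.com/system')  # placeholder
  dom = src.lookupByName('guest1')                          # placeholder

  flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_TUNNELLED
  dom.migrate(dst, flags, None, None, 0)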

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240554/+subscriptions



[Yahoo-eng-team] [Bug 1490238] Re: Configdrive fails to properly display within Windows Guest (Xenapi)

2016-06-02 Thread Sivasathurappan Radhakrishnan
This bug lacks the necessary information, therefore it has been closed.
Feel free to reopen the bug by providing the requested information and
set the bug status back to ''New''.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490238

Title:
  Configdrive fails to properly display within Windows Guest (Xenapi)

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Windows guests within a XenServer environment currently do not have
  the ability to properly have ConfigDrive attached unless the
  environment has its nova.conf set up as:

  config_drive_format=vfat

  This issue ultimately results from this value being defaulted to
  ISO9660 (CDFS) and the VBD object being used for it being a disk (the
  nova.virt.xenapi.vm_utils.create_vbd default). After testing, while
  the VBD is attached without issue and in the proper state, I was
  unable to get this drive to show up within Windows at all. I was
  unable to see the drive detected within the GUI, or within Windows
  PowerShell.

  This can be addressed by detecting the nova.conf configuration
  setting, and adjusting the VBD attach accordingly.  I will be
  submitting a follow-up commit shortly.
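
  A hedged sketch of the adjustment described (the helper is
  hypothetical; the real change would live around the xenapi VBD
  attach):

  # Pick the VBD device type from the configured config-drive format,
  # instead of always attaching the config drive as a plain disk.
  def vbd_type_for_config_drive(config_drive_format):
      # iso9660 (CDFS) images need to be presented as a CD device for a
      # Windows guest to detect them; vfat images work as a plain disk.
      return 'cdrom' if config_drive_format == 'iso9660' else 'disk'

  assert vbd_type_for_config_drive('iso9660') == 'cdrom'
  assert vbd_type_for_config_drive('vfat') == 'disk'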

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490238/+subscriptions



[Yahoo-eng-team] [Bug 1265447] Re: floating-ip-bulk-delete method delete the associated floating ip

2016-06-02 Thread Pushkar Umaranikar
This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to
''New''.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265447

Title:
  floating-ip-bulk-delete method delete the associated floating ip

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The floating-ip-bulk-delete command can delete a floating IP which is
  being used by an instance. This makes floating IP management very
  confusing; for example, an associated floating IP is deleted from the
  pool but can still be seen and *used* on an instance.

  Also, if I use `nova remove-floating-ip` to remove the floating IP,
  there will be a floating-ip-not-found error.

  I think we should check that the floating IPs are not associated with
  instances before destroying them. If any are, raise an error telling
  the user that a floating IP in the plan-to-delete range is being used.
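
  A generic sketch of the proposed check (function and field names are
  hypothetical, not nova's actual code):

  # Refuse to bulk-delete a range while any address in it is still
  # associated with an instance.
  def bulk_delete_floating_ips(addresses, lookup_floating_ip):
      associated = [addr for addr in addresses
                    if lookup_floating_ip(addr).get('fixed_ip_id')]
      if associated:
          raise ValueError('floating IPs still in use: %s'
                           % ', '.join(associated))
      for addr in addresses:
          print('deleting %s' % addr)  # destroy only when nothing is in use

  # usage with a toy lookup table
  ips = {'10.0.0.5': {'fixed_ip_id': None}, '10.0.0.6': {'fixed_ip_id': 42}}
  try:
      bulk_delete_floating_ips(list(ips), ips.get)
  except ValueError as e:
      print(e)  # floating IPs still in use: 10.0.0.6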

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265447/+subscriptions



[Yahoo-eng-team] [Bug 1516671] Re: LiveMigration DBError: ProgrammingError: can't adapt type 'Instance'

2016-06-02 Thread Pushkar Umaranikar
This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to
''New''.

** Changed in: nova
   Status: Incomplete => Invalid

** Tags added: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1516671

Title:
  LiveMigration DBError: ProgrammingError: can't adapt type 'Instance'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Postgres as Database + latest stable Kilo; When doing live migrate
  with attached volume (rally test
  boot_server_attach_created_volume_and_live_migrate):

  8<

  
  ERROR oslo_db.sqlalchemy.exc_filters [req-76f003ad-8b12-4e85-902a-a52581fd5611 c2190adfee124fefb139254ce3df27dc 0b707847d6254b5b9e956e40ae16deca - - -] DBAPIError exception wrapped from (ProgrammingError) can't adapt type 'Instance' 'SELECT block_device_mapping.created_at AS block_device_mapping_created_at, block_device_mapping.updated_at AS block_device_mapping_updated_at, block_device_mapping.deleted_at AS block_device_mapping_deleted_at, block_device_mapping.deleted AS block_device_mapping_deleted, block_device_mapping.id AS block_device_mapping_id, block_device_mapping.instance_uuid AS block_device_mapping_instance_uuid, block_device_mapping.source_type AS block_device_mapping_source_type, block_device_mapping.destination_type AS block_device_mapping_destination_type, block_device_mapping.guest_format AS block_device_mapping_guest_format, block_device_mapping.device_type AS block_device_mapping_device_type, block_device_mapping.disk_bus AS block_device_mapping_disk_bus, block_device_mapping.boot_index AS block_device_mapping_boot_index, block_device_mapping.device_name AS block_device_mapping_device_name, block_device_mapping.delete_on_termination AS block_device_mapping_delete_on_termination, block_device_mapping.snapshot_id AS block_device_mapping_snapshot_id, block_device_mapping.volume_id AS block_device_mapping_volume_id, block_device_mapping.volume_size AS block_device_mapping_volume_size, block_device_mapping.image_id AS block_device_mapping_image_id, block_device_mapping.no_device AS block_device_mapping_no_device, block_device_mapping.connection_info AS block_device_mapping_connection_info \nFROM block_device_mapping \nWHERE block_device_mapping.deleted = %(deleted_1)s AND block_device_mapping.volume_id = %(volume_id_1)s \n LIMIT %(param_1)s' {'param_1': 1, 'volume_id_1': 
  [...]

  
  TRACE oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1063, in _execute_context
  TRACE oslo_db.sqlalchemy.exc_filters context)
  TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 442, in do_execute
  TRACE oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
  TRACE oslo_db.sqlalchemy.exc_filters ProgrammingError: can't adapt type 'Instance'

  
  >8
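
  Before the corresponding compute-side trace, a self-contained
  illustration of this error class (sqlite3 stands in for psycopg2; the
  message differs, but the cause is the same: an ORM object passed where
  a scalar key belongs):

  # Handing a whole object to a DB-API query parameter instead of a
  # scalar makes the driver fail to adapt it, as postgres does above.
  import sqlite3

  class Instance:
      uuid = '70d591fe-cdd8-46c2-bc7c-35ef733c2e1b'

  conn = sqlite3.connect(':memory:')
  conn.execute('CREATE TABLE bdm (instance_uuid TEXT)')
  inst = Instance()
  try:
      conn.execute('SELECT * FROM bdm WHERE instance_uuid = ?', (inst,))
  except (sqlite3.InterfaceError, sqlite3.ProgrammingError) as e:
      print(e)  # unsupported type, like "can't adapt type 'Instance'"
  conn.execute('SELECT * FROM bdm WHERE instance_uuid = ?', (inst.uuid,))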

  In nova-compute.log you find the corresponding error:

  -8<

  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b] Traceback (most recent call last):
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5714, in _live_migration
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b] dom, finish_event)
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5651, in _live_migration_monitor
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b] migrate_data)
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b]   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b] payload)
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b]   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b] six.reraise(self.type_, self.value, self.tb)
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b]   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b] return f(self, context, *args, **kw)
  TRACE nova.virt.libvirt.driver [instance: 70d591fe-cdd8-46c2-bc7c-35ef733c2e1b]   File 

[Yahoo-eng-team] [Bug 1554301] Re: vm's xml disk type info changed after the vm's volume migrated

2016-06-02 Thread Pushkar Umaranikar
This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to
''New''.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1554301

Title:
  vm's xml  disk type info changed after the vm's volume migrated

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  1. boot a vm from a volume
  2. check that the vm's xml disk type info is "block"

  3. migrate the volume which the vm is using: cinder migrate volume

  4. after the volume migration succeeds, check that the vm's xml disk type info is "file"

  5. this results from the volume's by-path info (like
  /dev/disk/by-path/ip-172.168.101.25:3260-iscsi-iqn.2099-01.cn.com.zte:usp.spr11-4c:09:b4:b0:55:91-lun-9 -> ../../sdx)
  being cleaned up after other volumes are detached on this host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1554301/+subscriptions



[Yahoo-eng-team] [Bug 1560472] Re: nova interface-attach command removes pre-existing neutron ports from the environment if it fails to attach to an instance _even_ where '--port-id' has been specified

2016-06-02 Thread Sivasathurappan Radhakrishnan
This bug lacks the necessary information to effectively reproduce and
fix it, therefore it has been closed. Feel free to reopen the bug by
providing the requested information and set the bug status back to
''New''.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1560472

Title:
  nova interface-attach command removes pre-existing neutron ports from
  the environment if it fails to attach to an instance _even_ where
  '--port-id' has been specified

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Problem description:
  The nova interface-attach command removes pre-existing neutron ports from the 
environment if it fails to attach to an instance _even_ where '--port-id' has 
been specified. This behaviour was introduced by fixing bug #1338551 [1].

  Steps to reproduce:
  1) create a new neutron port
    $ neutron port-create --name  
  2) boot an instance (make sure to specify a keypair and check sec groups for 
ssh connectivity to the instance)
    $ nova boot ...
  3) [OPTIONAL] add/remove the port several times over to prove the functionality is working OK.
    $ nova interface-attach --port-id  
    $ nova interface-detach  
  4) simulate a kernel crash on the instance, as this should cause a scenario 
where an interface attach will fail (ssh connectivity is assumed for this step)
    $ ssh  "sudo kill -11 1" # OR execute 'echo c > 
/proc/sysrq-trigger' while connected to the instance
    4a. Verify the kernel has actually crashed
    $ nova console-log <server>
  5) try to attach the port while the instance is still crashed. **note**: if 
the port hasn't been attached before (i.e. you skipped step 3), it may succeed 
initially, then fail on subsequent attach attempts. Also, at this point it 
should not matter if the port is still attached to the instance.
    $ nova interface-attach --port-id <port-id> <server>

  Errors observed:
  $ nova interface-attach --port-id <port-id> <server>
  ERROR: Failed to attach interface (HTTP 500) (Request-ID: 
req-----)

  Expected results:
  The port should still exist after failure in this scenario.

  Actual results:
  'neutron port-list' will no longer show the port. It has been removed.
  The port is removed from the environment and therefore is no longer available.

  Snippet from /var/log/nova/nova-compute.log

  [instance: ----] attaching network adapter 
failed.
   Traceback (most recent call last):
     File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 
1263, in attach_interface
   virt_dom.attachDeviceFlags(cfg.to_xml(), flags)
     File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in 
doit
   result = proxy_call(self._autowrap, f, *args, **kwargs)
     File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
   rv = execute(f, *args, **kwargs)
     File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in 
execute
   six.reraise(c, e, tb)
     File "/usr/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in 
tworker
   rv = meth(*args, **kwargs)
     File "/usr/lib/python2.7/dist-packages/libvirt.py", line 513, in 
attachDeviceFlags
   if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', 
dom=self)
   libvirtError: Unable to create tap device tap-xx: Device or resource 
busy
   attach interface failed , try to deallocate port 
----, reason: Failed to attach network adapter 
device to ----
   Exception during message handling: Failed to attach network adapter device 
to ----

  Full error:
  http://pastebin.ubuntu.com/15471511/

  $ sudo apt-cache policy nova-compute
  nova-compute:
    Installed: 1:2015.1.2-0ubuntu2~cloud0

  Ubuntu 14.04.4 LTS

  Why does this matter:
  As specified in [1], where a port has been attached using the --net-id 
option, it is automatically created before attaching to the VM. Therefore it 
is correct behaviour to clean up after a failure to attach.
  Where '--port-id' has been specified, it should not be assumed that the 
port was auto-created: it was explicitly created and may have pre-existed the 
VM. This means the port should be reusable if desired, and therefore should 
not be cleaned up on attach failure. When the port has been pre-created and 
'--port-id' is specified in the interface-attach command, a failed attach 
should be handled without removing the port from the environment, leaving it 
available for re-assignment to another instance or for a retry against the 
original instance once it has recovered from its failure. A sketch of the 
desired clean-up logic follows below.
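
  A minimal sketch of the clean-up behaviour being requested (hypothetical
  names, not nova's actual code):

    port_created_by_nova = (requested_port_id is None)  # --net-id path auto-creates
    try:
        attach_interface(instance, port)
    except Exception:
        if port_created_by_nova:
            neutron.delete_port(port['id'])  # clean up only what nova created
        raise  # a pre-existing (--port-id) port is left intact for reuse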

  This behaviour is confirmed on both Kilo and Liberty.

  Related bugs:
  [1] https://bugs.launchpad.net/nova/+bug/1338551

[Yahoo-eng-team] [Bug 1588502] [NEW] Bump hacking version in neutron-dynamic-routing

2016-06-02 Thread Ryan Tidwell
Public bug reported:

https://review.openstack.org/#/c/284376/ makes changes that require
neutron to bump the version of hacking specified in test-
requirements.txt. Now that these changes have merged, pep8 jobs in
neutron-dynamic-routing repository are failing. We need to simply bump
the version of hacking specified in the test-requirements.txt.
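
The change itself is a one-line edit to test-requirements.txt, roughly of
this shape (the exact version bounds come from the requirements sync, not
from this report):

-hacking<0.11,>=0.10.0
+hacking<0.12,>=0.11.0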

** Affects: neutron
 Importance: Undecided
 Assignee: Ryan Tidwell (ryan-tidwell)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588502

Title:
  Bump hacking version in neutron-dynamic-routing

Status in neutron:
  In Progress

Bug description:
  https://review.openstack.org/#/c/284376/ makes changes that require
  neutron to bump the version of hacking specified in test-
  requirements.txt. Now that these changes have merged, pep8 jobs in
  neutron-dynamic-routing repository are failing. We need to simply bump
  the version of hacking specified in the test-requirements.txt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588502/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433402] Re: list users in group unauthorised with v3 policy

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/321128
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=9e7f24c2353d107e448f4e8a0d926e3968c6673d
Submitter: Jenkins
Branch:master

commit 9e7f24c2353d107e448f4e8a0d926e3968c6673d
Author: Rudolf Vriend 
Date:   Wed May 25 18:49:47 2016 +0200

Allow domain admins to list users in groups with v3 policy

Domain admins (with a domain scoped token) could not list members of
groups in their domain or groups of a user in their domain.
This was due to 2 reasons: the v3 policy rule
'identity:list_groups_for_user' was not evaluating the user's domain
and the identity controller method protections of 'list_users_in_group'
and 'list_groups_for_user' were not providing the required targets for
the rules.

Change-Id: Ibf8442a2ceefc2bb0941bd5e7beba6c252b2ab36
Closes-Bug: #1433402
Closes-Bug: #1458994


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1433402

Title:
  list users in group unauthorised with v3 policy

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Two identity api have unauthorised issue with v3 policy. They are
  list_users_in_group and list_groups_for_user:

  The domain admin should have permission to call these two api, but
  failed.

  Repro steps:
  * use v3 policy as config
  1. Create domain
  2. Create admin user 'userA' under domain (assign admin role to the user with 
domain scope)
  3. Create a normal domain user 'userB' (with domain admin userA's token)
  4. Create a normal domain group 'groupB'  (with domain admin userA's token)
  5. Add userB a member in groupB (with domain admin userA's token)
  6. list_users_in_group with groupB's id as param (with domain admin userA's 
token), unauthorized
  7. list_groups_for_user with userB's id as param (with domain admin userA's 
token), unauthorized

  Both step 6 and step 7 use the domain token.
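
  For reference, the two failing calls map to these keystone v3 requests
  (host and port are illustrative):

    $ curl -s -H "X-Auth-Token: $DOMAIN_SCOPED_TOKEN" \
        http://localhost:5000/v3/groups/<groupB-id>/users
    $ curl -s -H "X-Auth-Token: $DOMAIN_SCOPED_TOKEN" \
        http://localhost:5000/v3/users/<userB-id>/groups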

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1433402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458994] Re: When logged in as a pure domain admin, cannot list users in a group

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/321128
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=9e7f24c2353d107e448f4e8a0d926e3968c6673d
Submitter: Jenkins
Branch:master

commit 9e7f24c2353d107e448f4e8a0d926e3968c6673d
Author: Rudolf Vriend 
Date:   Wed May 25 18:49:47 2016 +0200

Allow domain admins to list users in groups with v3 policy

Domain admins (with a domain scoped token) could not list members of
groups in their domain or groups of a user in their domain.
This was due to 2 reasons: the v3 policy rule
'identity:list_groups_for_user' was not evaluating the user's domain
and the identity controller method protections of 'list_users_in_group'
and 'list_groups_for_user' were not providing the required targets for
the rules.

Change-Id: Ibf8442a2ceefc2bb0941bd5e7beba6c252b2ab36
Closes-Bug: #1433402
Closes-Bug: #1458994


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1458994

Title:
  When logged in as a pure domain admin, cannot list users in a group

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When using domain scoped tokens and trying to add users to a group, 
keystone throws the error: {u'error': {u'code': 403, u'message': u'You are not 
authorized to perform the requested action: identity:list_users_in_group 
(Disable debug mode to suppress these details.)', u'title': u'Forbidden'}}.

  To reproduce this bug you may use the following code:

  
  import requests
  import json

  # NOTE: the following constants are assumed; the original report does not
  # include them (it only shows the endpoints 192.168.27.100:5000/35357):
  OS_AUTH_URL = "http://192.168.27.100:5000/v3/auth/tokens"
  OS_USERNAME = "admin"
  OS_PASSWORD = "password"

  def get_unscoped_token(username, password, domain):
      headers = {'Content-Type': 'application/json'}
      payload = {'auth': {'identity': {'password': {'user': {'domain': {'name': domain},
                 'password': password, 'name': username}}, 'methods': ['password']}}}
      r = requests.post(OS_AUTH_URL, data=json.dumps(payload), headers=headers)
      return r.headers['X-Subject-Token']

  def get_token_scoped_to_domain(unscoped_token, domain):
      headers = {'Content-Type': 'application/json'}
      payload = {"auth": {"scope": {"domain": {"name": domain}}, "identity":
                 {"token": {"id": unscoped_token}, "methods": ["token"]}}}
      r = requests.post(OS_AUTH_URL, data=json.dumps(payload), headers=headers)
      return r.headers['X-Subject-Token']

  def get_token_scoped_to_project(unscoped_token, project):
      headers = {'Content-Type': 'application/json'}
      payload = {"auth": {"scope": {"project": {"name": project}}, "identity":
                 {"token": {"id": unscoped_token}, "methods": ["token"]}}}
      r = requests.post(OS_AUTH_URL, data=json.dumps(payload), headers=headers)
      return r.headers['X-Subject-Token']

  def list_domains(token):
      headers = {'Content-Type': 'application/json',
                 'Accept': 'application/json',
                 'X-Auth-Token': token}
      r = requests.get("http://192.168.27.100:35357/v3/domains", headers=headers)
      return r.json()["domains"]

  def list_groups_for_domain(domain_id, token):
      headers = {'Content-Type': 'application/json',
                 'X-Auth-Token': token}
      r = requests.get("http://192.168.27.100:5000/v3/groups?domain_id=%s" % domain_id,
                       headers=headers)
      return r.json()["groups"]

  def get_domain_named(domain_name, token):
      domains = list_domains(token)
      domain = next(x for x in domains if x.get("name") == domain_name)
      return domain

  def get_group_named_in_domain(group_name, domain_id, token):
      groups = list_groups_for_domain(domain_id, token)
      group = next(x for x in groups if x.get("name") == group_name)
      return group

  def get_users_in_group_in_domain(group_id, domain_id, token):
      headers = {'Content-Type': 'application/json',
                 'Accept': 'application/json',
                 'X-Auth-Token': token}
      r = requests.get("http://192.168.27.100:35357/v3/groups/%s/users?domain_id=%s"
                       % (group_id, domain_id), headers=headers)
      return r.json()

  unscoped_token = get_unscoped_token(OS_USERNAME, OS_PASSWORD, "default")
  domain_token = get_token_scoped_to_domain(unscoped_token, "default")
  nintendo_domain = get_domain_named("nintendo", domain_token)

  # nintendo domain operations
  unscoped_token = get_unscoped_token("mario", "pass", "nintendo")
  domain_token = get_token_scoped_to_domain(unscoped_token, "nintendo")

  list_groups_for_domain(nintendo_domain.get("id"), domain_token)
  list_groups_for_domain(nintendo_domain.get("id"), domain_token)

  mygroup = get_group_named_in_domain("mygroup", nintendo_domain.get("id"),
                                      domain_token)

  # this is the call that fails with the 403 above
  get_users_in_group_in_domain(mygroup.get("id"), nintendo_domain.get("id"),
                               domain_token)

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1585214] Re: Cannot pin/unpin cpus during cold migration with enabled CPU pinning

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/320478
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d7b8d997f0a7d40055c544470533e8a11855ff8f
Submitter: Jenkins
Branch:master

commit d7b8d997f0a7d40055c544470533e8a11855ff8f
Author: Sergey Nikitin 
Date:   Tue May 24 17:14:33 2016 +0300

Fixed clean up process in confirm_resize() after resize/cold migration

On env with NUMA topology and enabled cpu pinning we have one problem.
If instance changes numa node (or even pinned cpus in numa node)
during cold migration from one host to another confirming resize
failed with "Cannot pin/unpin cpus from the following pinned set".

It happens because confirm_resize() tries to clean up the source
host using the NUMA topology from the destination host.

Closes-Bug: #1585214

Change-Id: I3b87be3f25fc0bce4efd9804fa562a6f66355464


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585214

Title:
  Cannot pin/unpin cpus during cold migration with enabled CPU pinning

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  With enabled cpu pinning for vm migration doesn't work properly

  Steps to reproduce:
  1) Deploy env with 2 compute node with enable pinning
  2) Create aggregate states for this compute-node
  3) Create 3 flavors:
  - flavor with 2 cpu and 2 numa node
  nova flavor-create m1.small.performance-2 auto 2048 20 2
  nova flavor-key m1.small.performance-2 set hw:cpu_policy=dedicated
  nova flavor-key m1.small.performance-2 set 
aggregate_instance_extra_specs:pinned=true
  nova flavor-key m1.small.performance-2 set hw:numa_nodes=2
  nova boot --image TestVM --nic net-id=93e25766-2a22-486c-af82-c62054260c26 
--flavor m1.small.performance-2 test2
  - flavor with 2 cpu and 1 numa node
  nova flavor-create m1.small.performance-1 auto 2048 20 2
  nova flavor-key m1.small.performance-1 set hw:cpu_policy=dedicated
  nova flavor-key m1.small.performance-1 set 
aggregate_instance_extra_specs:pinned=true
  nova flavor-key m1.small.performance-1 set hw:numa_nodes=1
  nova boot --image TestVM --nic net-id=93e25766-2a22-486c-af82-c62054260c26 
--flavor m1.small.performance-1 test3
  - flavor with 1 cpu and 1 numa node
  nova flavor-create m1.small.performance auto 512 1 1
  nova flavor-key m1.small.performance set hw:cpu_policy=dedicated
  nova flavor-key m1.small.performance set 
aggregate_instance_extra_specs:pinned=true
  nova flavor-key m1.small.performance set hw:numa_nodes=1
  4) boot vm1, vm2 and vm3 with these flavors
  5) Migrate vm1: nova migrate vm1
  Confirm resizing: nova resize-confirm vm1
  Expected results:
  vm1 migrates to another node
  Actual results:
  vm1 in ERROR:
  {"message": "Cannot pin/unpin cpus [17] from the following pinned set [3]", "code": 400, "created": "2016-03-31T09:26:00Z"}
  6) Migrate vm2: nova migrate vm2
  Confirm resizing: nova resize-confirm vm2
  Repeat the migration and confirmation one more time
  Expected results:
  vm2 migrates to another node
  Actual results:
  vm2 in ERROR
  7) Run nova migrate vm3 three times: the same error occurs

  
  It happens because confirm_resize() tries to clean up the source host using 
the NUMA topology from the destination host.
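
  In other words, as a minimal sketch (hypothetical names, not nova's actual
  code):

    def confirm_resize_cleanup(source_host, migration):
        # Buggy: unpinning the destination pinning on the source host is
        # what raises "Cannot pin/unpin cpus [...] from the following
        # pinned set":
        #   source_host.unpin_cpus(migration.new_numa_topology.pinned_cpus)
        # Fixed: release the CPUs the instance actually held on the source.
        source_host.unpin_cpus(migration.old_numa_topology.pinned_cpus)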

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588472] Re: tox -e genconfig - maximum recursion depth exceeded

2016-06-02 Thread Darek Smigiel
s/PIP/tox

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588472

Title:
  tox -e genconfig - maximum recursion depth exceeded

Status in neutron:
  Invalid

Bug description:
  When running 'tox -e genconfig' on stable/mitaka of neutron, I get the
  following (running on Ubuntu 14.04):

File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
  return handler(match)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
  env_list = self.getdict('setenv')
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
  s = self.getstring(name, None)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
  x = self._replace(x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
  return RE_ITEM_REF.sub(self._replace_match, x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
  return handler(match)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
  env_list = self.getdict('setenv')
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
  s = self.getstring(name, None)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
  x = self._replace(x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
  return RE_ITEM_REF.sub(self._replace_match, x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
  return handler(match)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
  env_list = self.getdict('setenv')
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
  s = self.getstring(name, None)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
  x = self._replace(x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
  return RE_ITEM_REF.sub(self._replace_match, x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
  return handler(match)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 959, in 
_replace_substitution
  val = self._substitute_from_other_section(sub_key)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 941, in 
_substitute_from_other_section
  if section in self._cfg and item in self._cfg[section]:
File "/usr/lib/python2.7/dist-packages/py/_iniconfig.py", line 151, in 
__getitem__
  return SectionWrapper(self, name)
  RuntimeError: maximum recursion depth exceeded while calling a Python object

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584109] Re: Swift UI failures when deleting large numbers of objects

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/321362
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2481d2b1ac2f0e5adaeb981476f882d8b390974b
Submitter: Jenkins
Branch:master

commit 2481d2b1ac2f0e5adaeb981476f882d8b390974b
Author: Richard Jones 
Date:   Thu May 26 16:02:57 2016 +1000

Remove memoize that holds connections open

This memoize usage holds on to connection objects for a very long
time, resulting in exhaustion of file descriptors.

Change-Id: If7367819b050a65562b3e05175ab15bd93d0d398
Fixes-Bug: 1584109


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1584109

Title:
  Swift UI failures when deleting large numbers of objects

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Failed to establish a new connection: [Errno 24] Too many open
  files',))

  Basically I first create a bunch of containers, objects and folders with 
this script:
   - 
https://github.com/openstack/searchlight/blob/master/test-scripts/generate-swift-data.py
   -  ./generate-swift-data.py 1000 5 10 
  Then I add an empty nested sub-folder foo/bar/hello

  Then I try to delete all.

  I got an error 500

  The browser screen freezes and I cannot do anything.

  I manually refresh.

  I then re-select all and click delete.

  It works.

  But I am able to make it do the above again.

  Screen shot: http://imgur.com/a/4t4AV

  ConnectionError: HTTPConnectionPool(host='192.168.200.200', port=8080): Max 
retries exceeded with url: 
/v1/AUTH_4bade81378e6428db0e896db77d68e02/scale_3/BL/object_669 (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 24] Too many open 
files',))
  [20/May/2016 15:11:20] "DELETE 
/api/swift/containers/scale_3/object/BL/object_669 HTTP/1.1" 500 332
  HTTP exception with no status/code
  Traceback (most recent call last):
File 
"/Users/ttripp/dev/openstack/horizon/openstack_dashboard/api/rest/utils.py", 
line 126, in _wrapped
  data = function(self, request, *args, **kw)
File 
"/Users/ttripp/dev/openstack/horizon/openstack_dashboard/api/rest/swift.py", 
line 211, in delete
  api.swift.swift_delete_object(request, container, object_name)
File 
"/Users/ttripp/dev/openstack/horizon/openstack_dashboard/api/swift.py", line 
314, in swift_delete_object
  swift_api(request).delete_object(container_name, object_name)
File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 1721, in delete_object
  response_dict=response_dict)
File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 1565, in _retry
  service_token=self.service_token, **kwargs)
File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 1369, in delete_object
  conn.request('DELETE', path, '', headers)
File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 401, in request
  files=files, **self.requests_args)
File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 384, in _request
  return self.request_session.request(*arg, **kwarg)
File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/requests/sessions.py",
 line 475, in request
  resp = self.send(prep, **send_kwargs)
File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/requests/sessions.py",
 line 585, in send
  r = adapter.send(request, **kwargs)
File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/requests/adapters.py",
 line 467, in send
  raise ConnectionError(e, request=request)
  ConnectionError: HTTPConnectionPool(host='192.168.200.200', port=8080): Max 
retries exceeded with url: 
/v1/AUTH_4bade81378e6428db0e896db77d68e02/scale_3/BL/object_667 (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 24] Too many open 
files',))
  [20/May/2016 15:11:20] "DELETE 
/api/swift/containers/scale_3/object/BL/object_667 HTTP/1.1" 500 332
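
  For reference, the failure mechanism as a simplified sketch (the names
  below are stand-ins, not horizon's actual code):

    import socket

    _cache = {}

    def memoized(func):
        # simplified stand-in for the removed @memoized usage
        def wrapper(*args):
            if args not in _cache:
                _cache[args] = func(*args)
            return _cache[args]
        return wrapper

    class Connection(object):
        # stand-in for a swiftclient connection that owns a socket
        def __init__(self, host, port):
            self.sock = socket.create_connection((host, port))

    @memoized
    def swift_api(host, port, token):
        # one live socket stays pinned in _cache per distinct key for the
        # life of the process; enough users and the dashboard runs into
        # "[Errno 24] Too many open files"
        return Connection(host, port)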

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1584109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497448] Re: admin system info page shows useless data, hides useful data

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/225334
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=5934d83b0479a4105930ce0ce44d9eb7649a91e1
Submitter: Jenkins
Branch:master

commit 5934d83b0479a4105930ce0ce44d9eb7649a91e1
Author: eric 
Date:   Fri Sep 18 15:21:09 2015 -0600

Improve system info page

This change adds region info and all the url types,
and also removes enabled (which never did anything).

Change-Id: I7594d2b3d1e9826ec66bac379059171150155c4b
Closes-bug: #1497448


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497448

Title:
  admin system info page shows useless data, hides useful data

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The system info page's display of keystone catalog and endpoint hides
  important data like regions and various url types (public / private /
  etc), and shows silly columns like Status (which is always enabled).

  Need to show more information on this page, and hide the useless
  stuff.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588472] [NEW] tox -e genconfig - maximum recursion depth exceeded

2016-06-02 Thread Eric Brown
Public bug reported:

When running 'tox -e genconfig' on stable/mitaka of neutron, I get the
following (running on Ubuntu 14.04):

  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
return handler(match)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
env_list = self.getdict('setenv')
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
s = self.getstring(name, None)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
x = self._replace(x)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
return RE_ITEM_REF.sub(self._replace_match, x)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
return handler(match)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
env_list = self.getdict('setenv')
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
s = self.getstring(name, None)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
x = self._replace(x)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
return RE_ITEM_REF.sub(self._replace_match, x)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
return handler(match)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
env_list = self.getdict('setenv')
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
s = self.getstring(name, None)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
x = self._replace(x)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
return RE_ITEM_REF.sub(self._replace_match, x)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
return handler(match)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 959, in 
_replace_substitution
val = self._substitute_from_other_section(sub_key)
  File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 941, in 
_substitute_from_other_section
if section in self._cfg and item in self._cfg[section]:
  File "/usr/lib/python2.7/dist-packages/py/_iniconfig.py", line 151, in 
__getitem__
return SectionWrapper(self, name)
RuntimeError: maximum recursion depth exceeded while calling a Python object
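
For reference, the cycle in the trace (_replace_env -> getdict('setenv') ->
_replace -> _replace_env -> ...) matches a known tox 2.3.x substitution
regression: a setenv entry that itself uses {env:...} substitution can make
tox re-enter its own setenv lookup. An illustrative trigger (not necessarily
neutron's exact tox.ini):

[testenv]
setenv = PYTHONHASHSEED={env:PYTHONHASHSEED:0}

Upgrading or pinning tox, rather than changing neutron, is the usual fix.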

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588472

Title:
  tox -e genconfig - maximum recursion depth exceeded

Status in neutron:
  New

Bug description:
  When running 'tox -e genconfig' on stable/mitaka of neutron, I get the
  following (running on Ubuntu 14.04):

File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
  return handler(match)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
  env_list = self.getdict('setenv')
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
  s = self.getstring(name, None)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
  x = self._replace(x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
  return RE_ITEM_REF.sub(self._replace_match, x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
  return handler(match)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
  env_list = self.getdict('setenv')
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
  s = self.getstring(name, None)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
  x = self._replace(x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
  return RE_ITEM_REF.sub(self._replace_match, x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 989, in 
_replace_match
  return handler(match)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 913, in 
_replace_env
  env_list = self.getdict('setenv')
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 839, in 
getdict
  s = self.getstring(name, None)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 893, in 
getstring
  x = self._replace(x)
File "/usr/local/lib/python2.7/dist-packages/tox/config.py", line 993, in 
_replace
  return RE_ITEM_REF.sub(self._replace_match, x)
File 

[Yahoo-eng-team] [Bug 1588112] Re: instance stuck at error state after trying to migrate it to a misconfigured nova-compute node

2016-06-02 Thread Sean Dague
It is correct that you can't do operations other than DELETE on
instances in this state. There could be some other enhancements here to
handle this, but this is mostly working as designed.

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1588112

Title:
  instance stuck at error state after trying to migrate it to a
  misconfigured nova-compute node

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I was trying to add another nova-compute node into the cluster.

  Afterwards I tried to do a live migration to verify the configuration.

  The instance got stuck in the error state, showing a task state of
  migrating.

  I then fixed the misconfiguration on the newly added compute node, but the 
instance still did not recover. (The misconfiguration was in the server's 
network settings and the nova.conf file.)

  It seems I can do nothing but delete the instance.

  Some commands and results below:

  (openstack) server list
  
  +--------------------------------------+--------------+--------+------------------+
  | ID                                   | Name         | Status | Networks         |
  +--------------------------------------+--------------+--------+------------------+
  | 84d31496-5d0a-4dbf-99bc-363d018a30a8 | ttt          | ACTIVE | test=192.168.1.3 |
  | d754e7e4-6dba-45a7-a303-902f45ff4ca0 | testfromdisk | ERROR  | test=192.168.1.2 |
  +--------------------------------------+--------------+--------+------------------+
  server show d754e7e4-6dba-45a7-a303-902f45ff4ca0
  
  +-------------------------------------+----------------------------------------------------------+
  | Field                               | Value                                                    |
  +-------------------------------------+----------------------------------------------------------+
  | OS-DCF:diskConfig                   | AUTO                                                     |
  | OS-EXT-AZ:availability_zone         | nova                                                     |
  | OS-EXT-SRV-ATTR:host                | nova1                                                    |
  | OS-EXT-SRV-ATTR:hypervisor_hostname | nova1                                                    |
  | OS-EXT-SRV-ATTR:instance_name       | instance-0003                                            |
  | OS-EXT-STS:power_state              | 1                                                        |
  | OS-EXT-STS:task_state               | migrating                                                |
  | OS-EXT-STS:vm_state                 | error                                                    |
  | OS-SRV-USG:launched_at              | 2016-05-31T05:15:28.00                                   |
  | OS-SRV-USG:terminated_at            | None                                                     |
  | accessIPv4                          |                                                          |
  | accessIPv6                          |                                                          |
  | addresses                           | test=192.168.1.2                                         |
  | config_drive                        |                                                          |
  | created                             | 2016-05-31T05:15:15Z                                     |
  | fault                               | {u'message': u'Compute host nova2 could not be found.', |
  |                                     | u'code': 404, u'created': u'2016-06-01T07:08:13Z'}      |
  | flavor                              | m1.tiny (1)                                              |
  | hostId                              | b1f5ce288eb6e023bb7a7fcd2adbea8f32650690fb6b34d070ab8e22 |
  | id                                  | d754e7e4-6dba-45a7-a303-902f45ff4ca0                     |

[Yahoo-eng-team] [Bug 1585652] Re: EmptyCatalog not treated during cinderclient creation

2016-06-02 Thread Sean Dague
It's fine if you want to submit a patch to make this better. I'm not
really convinced that we should be handling the case where users create
crippled tokens and then attempt to do complex operations.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585652

Title:
  EmptyCatalog not treated during cinderclient creation

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Steps to reproduce
  ==
  1 - Get a keystone v3 token using the ?nocatalog param. Example:

  export TOKEN=`curl -i -k -v -H "Content-type: application/json" -d
  '{"auth": {"identity": {"methods": ["password"], "password": {"user":
  {"domain": {"name": "Default"}, "name": "test", "password":
  "password"}}}, "scope": {"project": {"name": "test-project", "domain":
  {"name": "Default"}'
  http://localhost:5000/v3/auth/tokens?nocatalog | grep X-Subject-Token
  | awk '{print $2}' | sed -e 's,\r,,' `

  2 - Try to create a server using a cinder volume. Example:

  curl -k -v -H  "X-Auth-Token:$TOKEN" -H "Content-type:
  application/json" -d '{"server": {"name": "test_CSDPU_1", "imageRef":
  "", "block_device_mapping_v2": [{"source_type": "volume",
  "destination_type": "volume", "boot_index": 0,
  "delete_on_termination": false, "uuid": "85397498-850f-406f-806a-
  25cf93cd94dc"}], "flavorRef": "790959df-f79b-4b87-8389-a160a3b6e606",
  "max_count": 1, "min_count": 1}}'
  http://localhost:8774/v2/07564c39740f405b92f4722090cd745b/servers

  Actual result
  =

  {"badRequest": {"message": "Block Device Mapping is Invalid: failed to
  get volume 85397498-850f-406f-806a-25cf93cd94dc.", "code": 400}}

  Expected result
  ===

  A meaningful error message is displayed.

  Details
  ===

  - During cinderclient creation, nova tries to get cinder's endpoint
  using the auth object obtained from the token without the catalog [1].
  keystoneauth will raise an EmptyCatalog exception [2] that is not
  treated and will result in the error seen above.

  [1] https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L82
  [2] 
https://github.com/openstack/keystoneauth/blob/master/keystoneauth1/access/service_catalog.py#L190

  - This issue might happen in other areas of the code; it is not necessarily
  exclusive to cinderclient creation. A sketch of one possible handling
  follows below.
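
  One possible shape of the handling (an illustrative sketch; the function
  name and raised exception are placeholders, not nova's actual code):

    from keystoneauth1 import exceptions as ks_exc

    def get_cinder_endpoint(session, **endpoint_kwargs):
        try:
            return session.get_endpoint(service_type='volumev2',
                                        **endpoint_kwargs)
        except ks_exc.EmptyCatalog:
            # the token was requested with ?nocatalog, so there is no
            # catalog to search; nova should raise a clear 400 here
            raise ValueError("token contains no service catalog")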

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1561230] Re: ng launch instance modal second time is weird

2016-06-02 Thread Rob Cresswell
** Changed in: horizon
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1561230

Title:
  ng launch instance modal second time is weird

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  For Mitaka, we've replaced the original Django Launch Instance with
  the new ng Launch Instance.

  I successfully filled out the required steps and created an instance.
  However, if I don't refresh the page manually, and click the "Launch
  Instance" again, it shows an strange/incomplete modal.

  It shows: Details, Source, Flavor, Security Groups, Metadata.
  It *should* show: Details, Source, Flavor, Network Ports, Key Pair, 
Configuration, Metadata

  See attached image.

  Issues:
  - It doesn't show all the workflow steps.
  - Slide out help panel doesn't work. Only toggles the button.
  - Need to click on cancel button TWICE to get it to close. First click shows 
a faint modal overlay sliding up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1561230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588429] Re: get all instances from the Nova by a few project ids

2016-06-02 Thread Sean Dague
This is not a supported API

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1588429

Title:
  get all instances from the Nova by a few project ids

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Hi. I have some problems with using nova api.
  I want to change the table in the tab Admin -> Hypervisors -> detail -> VM 
List.
  It is necessary to change the table loading so that it returns the virtual 
machines of all projects in which the current user is an administrator. Using 
'all_tenants = True' returns the instances of all projects.

  How can I replace the filter to get the list of instances for the current
  user in a single query?

  Thanks,
  Aynur.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1588429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518430] Re: liberty: ~busy loop on epoll_wait being called with zero timeout

2016-06-02 Thread Sean Dague
This is really an oslo.messaging issue. Not valid for Nova upstream.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1518430

Title:
  liberty: ~busy loop on epoll_wait being called with zero timeout

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  New
Status in nova package in Ubuntu:
  Confirmed
Status in python-oslo.messaging package in Ubuntu:
  New

Bug description:
  Context: openstack juju/maas deploy using 1510 charms release
  on trusty, with:
    openstack-origin: "cloud:trusty-liberty"
    source: "cloud:trusty-updates/liberty

  * Several openstack nova- and neutron- services, at least:
  nova-compute, neutron-server, nova-conductor,
  neutron-openvswitch-agent, neutron-vpn-agent
  show almost busy looping on epoll_wait() calls, with zero timeout set
  most frequently.
  - nova-compute (chosen because it is single-process) strace and ltrace captures:
    http://paste.ubuntu.com/13371248/ (ltrace, strace)

  As comparison, this is how it looks on a kilo deploy:
  - http://paste.ubuntu.com/13371635/

  * 'top' sample from a nova-cloud-controller unit from
     this completely idle stack:
    http://paste.ubuntu.com/13371809/

  FYI *not* seeing this behavior on keystone, glance, cinder,
  ceilometer-api.

  As this issue is present on several components, it likely comes
  from common libraries (oslo concurrency?), fyi filed the bug to
  nova itself as a starting point for debugging.

  Note: The description in the following bug gives a good overview of
  the issue and points to a possible fix for oslo.messaging:
  https://bugs.launchpad.net/mos/+bug/1380220

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1518430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1530010] Re: Rescued instance failed to boot from cirros image

2016-06-02 Thread Sean Dague
Honestly, I don't think that rescue booting a split image is really very
common, and not something we need to support. If someone wants to
specify a doc fix for this, it's fine.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1530010

Title:
  Rescued instance failed to boot from cirros image

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Steps: 
  1 boot a centos instance
  2 nova rescue centos --image=cirros
  3 check centos instance status "no bootable instance"

  Root cause is:
  $ glance image-show ccefc63d-6eb7-486e-b3a2-e63f09fb9e5d
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | eb9139e4942121f22bbc2afc0400b2a4 |
  | container_format | ami  |
  | created_at   | 2015-11-23T11:38:12Z |
  | disk_format  | ami  |
  | id   | ccefc63d-6eb7-486e-b3a2-e63f09fb9e5d |
  | kernel_id| e6eb027f-55a5-465e-9fce-5ebdb3d13d0a | <<<
  | min_disk | 0|
  | min_ram  | 0|
  | name | cirros-0.3.4-x86_64-uec  |
  | owner| e62253640b9c478f9c15c97e6ca40cb4 |
  | protected| False|
  | ramdisk_id   | 6425cc10-eaff-4f35-bd6e-941a3b439878 | <<<
  | size | 25165824 |
  | status   | active   |
  | tags | []   |
  | updated_at   | 2015-11-23T11:38:13Z |
  | virtual_size | None |
  | visibility   | public   |
  +--+--+

  cirros image needs to boot from kernel file and initrd file.

  I debugged the rescue process, image_meta = 
  {'status': u'active', 'name': u'cirros-0.3.4-x86_64-uec', 'deleted': False, 
'container_format': u'ami', 'created_at': datetime.datetime(2015, 11, 23, 11, 
38, 12, tzinfo=), 'disk_format': u'ami', 'updated_at': 
datetime.datetime(2015, 11, 23, 11, 38, 13, tzinfo=), 'id': 
u'ccefc63d-6eb7-486e-b3a2-e63f09fb9e5d', 'owner': 
u'e62253640b9c478f9c15c97e6ca40cb4', 'min_ram': 0, 'checksum': 
u'eb9139e4942121f22bbc2afc0400b2a4', 'min_disk': 0, 'is_public': True, 
'deleted_at': None, 'properties': {u'kernel_id': 
u'e6eb027f-55a5-465e-9fce-5ebdb3d13d0a', u'ramdisk_id': 
u'6425cc10-eaff-4f35-bd6e-941a3b439878'}, 'size': 25165824}

  But checking the libvirt driver, we don't populate kernel_id and ramdisk_id
  from image_meta:

  rescue_image_id = None
  if image_meta is not None:
  image_meta = objects.ImageMeta.from_dict(image_meta)
  if image_meta.obj_attr_is_set("id"):
  rescue_image_id = image_meta.id

  To fix it, grab kernel_id and ramdisk_id from image_meta
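
  One possible shape of that fix, extending the snippet above (an untested
  sketch; it assumes nova's ImageMetaProps exposes kernel_id and ramdisk_id):

    rescue_kernel_id = None
    rescue_ramdisk_id = None
    if image_meta is not None:
        image_meta = objects.ImageMeta.from_dict(image_meta)
        if image_meta.obj_attr_is_set("id"):
            rescue_image_id = image_meta.id
        # also carry the kernel/ramdisk of a split (AMI-style) image
        props = image_meta.properties
        if props.obj_attr_is_set("kernel_id"):
            rescue_kernel_id = props.kernel_id
        if props.obj_attr_is_set("ramdisk_id"):
            rescue_ramdisk_id = props.ramdisk_id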

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1530010/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377072] Re: Not existing pool can be passed to floating_ip_bulk

2016-06-02 Thread Sean Dague
This is part of nova-net that is now deprecated. We probably won't fix
bugs like these.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1377072

Title:
  Not existing pool can be passed to floating_ip_bulk

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Default floating-ip-pool only contains 'public'

  $ nova floating-ip-pool-list
  ++
  | name   |
  ++
  | public |
  ++

  But when creating a new floating-ip bulk range with a '--pool' option that
  names a pool that does not exist, the pool is still set successfully.

  $ nova floating-ip-bulk-create 192.0.50.0/25 --pool private

  | -  | 192.0.50.1   | - | private | eth0  |
  | -  | 192.0.50.2   | - | private | eth0  |
  | -  | 192.0.50.3   | - | private | eth0  |
  | -  | 192.0.50.4   | - | private | eth0  |
  | -  | 192.0.50.5   | - | private | eth0  |
  | -  | 192.0.50.6   | - | private | eth0  |
  | -  | 192.0.50.7   | - | private | eth0  |
  | -  | 192.0.50.8   | - | private | eth0  |
  | -  | 192.0.50.9   | - | private | eth0  |
  | -  | 192.0.50.10  | - | private | eth0  |
  | -  | 192.0.50.11  | - | private | eth0  |
  | -  | 192.0.50.12  | - | private | eth0  |
  | -  | 192.0.50.13  | - | private | eth0  |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1377072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588429] [NEW] get all instances from the Nova by a few project ids

2016-06-02 Thread Aynur
Public bug reported:

Hi. I have some problems with using nova api.
I want to change the table in the tab Admin -> Hypervisors -> detail -> VM List.
It is necessary to change the table loading so that it returns the virtual 
machines of all projects in which the current user is an administrator. Using 
'all_tenants = True' returns the instances of all projects.

How can I replace the filter to get the list of instances for the current
user in a single query?

Thanks,
Aynur.
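
One possible approach, sketched with illustrative novaclient/keystoneclient
calls (the filtering happens client-side, so it is not a single server-side
query; `nova` and `keystone` are assumed pre-authenticated clients):

    # projects where the user has a role assignment (filter for the
    # admin role as needed)
    admin_projects = {p.id for p in keystone.projects.list(user=user_id)}
    servers = nova.servers.list(search_opts={'all_tenants': 1})
    visible = [s for s in servers if s.tenant_id in admin_projects]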

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1588429

Title:
  get all instances from the Nova by a few project ids

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi. I have some problems with using nova api.
  I want to change the table in the tab Admin -> Hypervisors -> detail -> VM 
List.
  It is necessary to change the table loading so that it returns the virtual 
machines of all projects in which the current user is an administrator. Using 
'all_tenants = True' returns the instances of all projects.

  How can I replace the filter to get the list of instances for the current
  user in a single query?

  Thanks,
  Aynur.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1588429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588411] [NEW] Sorting the instance table on a deployment with >20 instances only sorts the current page

2016-06-02 Thread James Owen
Public bug reported:

In the admin dashboard, if I view the current instances and then attempt
to sort them by host, only the instances on the current page are sorted.

This seems wrong, as it means that relevant results are hidden if the
cluster has more than 20 instances on it.

This was seen on OpenStack Kilo.

I have attached screenshots which confirm that I have two instances on
the same host, using the filter functions, and then confirm that sort is
not showing one of them correctly.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: instance pagination sorting table

** Attachment added: "Illustrative screenshots"
   
https://bugs.launchpad.net/bugs/1588411/+attachment/4675305/+files/Horizon_screenshots.zip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1588411

Title:
  Sorting the instance table on a deployment with >20 instances only
  sorts the current page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the admin dashboard, if I view the current instances and then
  attempt to sort them by host, only the instances on the current page
  are sorted.

  This seems wrong, as it means that relevant results are hidden if the
  cluster has more than 20 instances on it.

  This was seen on OpenStack Kilo.

  I have attached screenshots which confirm that I have two instances on
  the same host, using the filter functions, and then confirm that sort
  is not showing one of them correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1588411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588408] [NEW] Volume pagination doesn't really work

2016-06-02 Thread Maksym Livshyn
Public bug reported:

1. Login into Horizon as admin user
2. Go to User Settings page and set 'Items Per Page' to 1
3. Create a few volumes

Actual result: there is no pagination available. All the items are shown
on one page

Expected behaviour: pagination should be available and one item per
page should be shown, as implemented on the project/instances/ and
project/images/ pages.

Note: logging out and back in didn't help; I got the same result.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1588408

Title:
  Volume pagination doesn't really work

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  1. Login into Horizon as admin user
  2. Go to User Settings page and set 'Items Per Page' to 1
  3. Create a few volumes

  Actual result: there is no pagination available. All the items are
  shown on one page

  Expected behaviour: pagination should be available and one item per
  page should be shown, as implemented on the project/instances/ and
  project/images/ pages.

  Note: logging out and back in didn't help; I got the same result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1588408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588393] [NEW] Switching OpenFlow interface to 'native' causes network loop

2016-06-02 Thread Ilya Chukhnakov
Public bug reported:

* Description:
After switching openvswitch agent to the 'native' OpenFlow interface 
(of_interface=native) the public network and the tunnel networks are flooded 
with ARP packets (see [1] for the tcpdump sample).

* Environment:
 - DevStack stable/mitaka
 - 1 controller/compute and 2 compute nodes
 - configuration from [2]
 - ubuntu 14.04

* How to reproduce:
0. (WARNING) the following steps will flood the network, so it is recommended 
to use a virtual network as the provider network
1. Deploy DevStack with access to the provider network (see [2]; 1 controller + 
2 compute nodes)
2. Set of_interface=native in the [ovs] section of 
/etc/neutron/plugins/ml2/ml2_conf.ini
3. restart l2 agents on all nodes
4. login to the default gateway and send a broadcast ARP request to the 
devstack's public network (arping -UD <IP-on-the-public-network>)
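
Step 2 amounts to this fragment of /etc/neutron/plugins/ml2/ml2_conf.ini:

[ovs]
of_interface = native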

* Expected result:
normal network operation

* Actual result:
the public network and the tunnel network are flooded with ARP packets

[1] http://paste.openstack.org/show/507292/
[2] 
http://docs.openstack.org/developer/devstack/guides/neutron.html#devstack-configuration

** Affects: neutron
 Importance: Undecided
 Assignee: Ilya Chukhnakov (ichukhnakov)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Ilya Chukhnakov (ichukhnakov)

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Status: Confirmed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588393

Title:
  Switching OpenFlow interface to 'native' causes network loop

Status in neutron:
  In Progress

Bug description:
  * Description:
  After switching openvswitch agent to the 'native' OpenFlow interface 
(of_interface=native) the public network and the tunnel networks are flooded 
with ARP packets (see [1] for the tcpdump sample).

  * Environment:
   - DevStack stable/mitaka
   - 1 controller/compute and 2 compute nodes
   - configuration from [2]
   - ubuntu 14.04

  * How to reproduce:
  0. (WARNING) the following steps will flood the network, so it is recommended 
to use a virtual network as the provider network
  1. Deploy DevStack with access to the provider network (see [2]; 1 controller 
+ 2 compute nodes)
  2. Set of_interface=native in the [ovs] section of 
/etc/neutron/plugins/ml2/ml2_conf.ini
  3. restart l2 agents on all nodes
  4. login to the default gateway and send a broadcast ARP request to the 
devstack's public network (arping -UD <IP-on-the-public-network>)

  * Expected result:
  normal network operation

  * Actual result:
  the public network and the tunnel network are flooded with ARP packets

  [1] http://paste.openstack.org/show/507292/
  [2] 
http://docs.openstack.org/developer/devstack/guides/neutron.html#devstack-configuration

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588393/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587834] Re: use_neutron_default_nets: StrOpt ->BoolOpt

2016-06-02 Thread Sean Dague
already in the reno and the conf docs

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587834

Title:
  use_neutron_default_nets: StrOpt ->BoolOpt

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  https://review.openstack.org/243061
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit d8474e044e820f29f70641b8fd5fd590750441a3
  Author: ChangBo Guo(gcb) 
  Date:   Mon Nov 9 19:56:48 2015 +0800

  use_neutron_default_nets: StrOpt ->BoolOpt
  
  Config option use_neutron_default_nets is a StrOpt with
  value 'True' or 'False'. But the current method _test_network_index
  uses it as a boolean type; this leads to a failed comparison
  against the string 'True', so it doesn't test properly. This commit
  makes use_neutron_default_nets a BoolOpt.
  
  DocImpact: This option is now a BoolOpt and documentation should
  be updated accordingly.
  
  Change-Id: I19a57db073359a9e58a16cd0de39d39aa95d2aa5
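
  The change in kind (illustrative; the real default value is not shown in
  this report):

    from oslo_config import cfg
    cfg.StrOpt('use_neutron_default_nets', default='False')   # before
    cfg.BoolOpt('use_neutron_default_nets', default=False)    # after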

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1587834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587537] Re: nova-compute error while installing on XenServer 7

2016-06-02 Thread Sean Dague
This appears to be an upstream packaging issue, please report to RDO
directly.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587537

Title:
  nova-compute error while installing on XenServer 7

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Dear,

  I am using the Mitaka RDO release on the controller and I am trying to
  install a nova-compute node using RDO on XenServer 7 (Dundee) with the
  XenAPI driver.

  I have successfully installed neutron on the XenServer.

  I am hitting the following error while trying to install openstack-nova-
  compute:

  Transaction check error:
file /usr/libexec/qemu-bridge-helper from install of 
qemu-kvm-common-10:1.5.3-105.el7_2.4.x86_64 conflicts with file from package 
qemu-xen-2.2.1-4.36786.x86_64

  Full output here: http://paste.openstack.org/show/506611/

  Is there any chance to fix this dependency?

  Kind regards,
  Michal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1587537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588389] [NEW] Change JS coverage report dir to match CTI

2016-06-02 Thread Matt Borland
Public bug reported:

Krotscheck has pointed out that to follow the CTI definition, we should
place all our JS coverage reports in ./cover.  This means we need
subdirs in there for 'horizon' and 'openstack_dashboard'.

https://governance.openstack.org/reference/cti/javascript-cti.html
#executing-tests-and-code-coverage

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1588389

Title:
  Change JS coverage report dir to match CTI

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Krotscheck has pointed out that to follow the CTI definition, we
  should place all our JS coverage reports in ./cover.  This means we
  need subdirs in there for 'horizon' and 'openstack_dashboard'.

  https://governance.openstack.org/reference/cti/javascript-cti.html
  #executing-tests-and-code-coverage

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1588389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1515528] Re: openstack-nova-novncproxy-2014.1.5-1.el6.noarch

2016-06-02 Thread Sean Dague
This is not a bug, it's a stack trace. Please provide a reproduction
scenario if this is really a bug.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1515528

Title:
  openstack-nova-novncproxy-2014.1.5-1.el6.noarch

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  #/usr/bin/nova-novncproxy --web /usr/share/novnc/

  WARNING: no 'numpy' module, HyBi protocol will be slower
  Traceback (most recent call last):
File "/usr/bin/nova-novncproxy", line 10, in 
  sys.exit(main())
File "/usr/lib/python2.6/site-packages/nova/cmd/novncproxy.py", line 87, in 
main
  wrap_cmd=None)
File "/usr/lib/python2.6/site-packages/nova/console/websocketproxy.py", 
line 47, in __init__
  ssl_target=None, *args, **kwargs)
File "/usr/lib/python2.6/site-packages/websockify/websocketproxy.py", line 
231, in __init__
  websocket.WebSocketServer.__init__(self, RequestHandlerClass, *args, 
**kwargs)
  TypeError: __init__() got an unexpected keyword argument 'no_parent'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1515528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588281] Re: db_base_plugin_v2: id used instead of subnet_id

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324344
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6f30217b42e178492f37f8619866e7c743868aa8
Submitter: Jenkins
Branch:master

commit 6f30217b42e178492f37f8619866e7c743868aa8
Author: Artur Korzeniewski 
Date:   Thu Jun 2 12:28:33 2016 +0200

DB base plugin: correct typo id to subnet_id.

The code referred to the undeclared variable 'id'. It worked before
because 'id' was defined at the call site where
_subnet_check_ip_allocations_internal_router_ports() was invoked.

Closes-bug: #1588281
Change-Id: I2d9331171bf59e5e375e0c4fd0eeec25f4ebff30


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588281

Title:
  db_base_plugin_v2: id used instead of subnet_id

Status in neutron:
  Fix Released

Bug description:
  In db_base_plugin_v2, _subnet_check_ip_allocations_internal_router_ports(self, 
context, subnet_id) refers to the undeclared variable 'id'. It worked before 
because 'id' was defined at the call site where 
_subnet_check_ip_allocations_internal_router_ports() was invoked.

  def _subnet_check_ip_allocations_internal_router_ports(self, context,
 subnet_id):
  # Do not delete the subnet if IP allocations for internal
  # router ports still exist
  allocs = context.session.query(models_v2.IPAllocation).filter_by(
  subnet_id=subnet_id).join(models_v2.Port).filter(
  models_v2.Port.device_owner.in_(
  constants.ROUTER_INTERFACE_OWNERS)
  ).first()
  if allocs:
  LOG.debug("Subnet %s still has internal router ports, "
"cannot delete", subnet_id)
  raise exc.SubnetInUse(subnet_id=id)

  the last line should be subnet_id=subnet_id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588378] [NEW] Cancelled live migrations are reported as in progress

2016-06-02 Thread Andrea Rosa
Public bug reported:

With the introduction of the API for aborting a running live migration 
(https://review.openstack.org/277971) we introduced a new status for aborted 
live migration jobs. This new status, called "cancelled", is not filtered out 
by the sqlalchemy query used to return the list of migrations in progress: 
https://github.com/openstack/nova/blob/87dc738763d6a7a10409e14b878f5cdd39ba805e/nova/db/sqlalchemy/api.py#L4851
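
A minimal sketch of the kind of fix implied (not the actual nova patch; the
model, the session handling and the status list here are illustrative only):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Migration(Base):
        __tablename__ = 'migrations'
        id = sa.Column(sa.Integer, primary_key=True)
        status = sa.Column(sa.String(255))

    # Terminal states must not be reported as in progress; 'cancelled'
    # is the new state introduced by the abort API.
    TERMINAL_STATES = ('error', 'failed', 'finished', 'cancelled')

    def migrations_in_progress(session):
        # Exclude every terminal status so that aborted (cancelled)
        # migrations are no longer returned as in progress.
        return session.query(Migration).filter(
            ~Migration.status.in_(TERMINAL_STATES)).all()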

** Affects: nova
 Importance: Low
 Assignee: Andrea Rosa (andrea-rosa-m)
 Status: In Progress


** Tags: libvirt live-migration

** Changed in: nova
 Assignee: (unassigned) => Andrea Rosa (andrea-rosa-m)

** Tags added: libvirt

** Tags added: live-migration

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1588378

Title:
  Cancelled live migrations are reported as in progress

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  With the introduction of the API for aborting a running live migration 
(https://review.openstack.org/277971) we introduced a new status for aborted 
live migration jobs. This new status, called "cancelled", is not filtered out 
by the sqlalchemy query used to return the list of migrations in progress: 
  
https://github.com/openstack/nova/blob/87dc738763d6a7a10409e14b878f5cdd39ba805e/nova/db/sqlalchemy/api.py#L4851

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1588378/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588372] [NEW] L7 policy is deleted along with listener deletion

2016-06-02 Thread Evgeny Fedoruk
Public bug reported:

Similar to https://bugs.launchpad.net/neutron/+bug/1571097,
there is an issue with the deletion of related entities.
In this case the issue is the unnecessary deletion of an L7 policy and its 
rules when the listener related to it is deleted.

The solution should be to prevent deletion of a listener if it still has
an L7 policy associated with it.

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New


** Tags: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588372

Title:
  L7 policy is deleted along with listener deletion

Status in neutron:
  New

Bug description:
  Similar to https://bugs.launchpad.net/neutron/+bug/1571097,
  there is an issue with the deletion of related entities.
  In this case the issue is the unnecessary deletion of an L7 policy and its 
rules when the listener related to it is deleted.

  The solution should be to prevent deletion of a listener if it still
  has an L7 policy associated with it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588372/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536226] Re: Not all .po files compiled

2016-06-02 Thread Sven Anderson
** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Sven Anderson (ansiwen)

** Changed in: keystone
 Assignee: (unassigned) => Sven Anderson (ansiwen)

** Changed in: heat
 Assignee: (unassigned) => Sven Anderson (ansiwen)

** Changed in: neutron
 Assignee: (unassigned) => Sven Anderson (ansiwen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536226

Title:
  Not all .po files compiled

Status in Cinder:
  New
Status in Glance:
  In Progress
Status in heat:
  New
Status in OpenStack Identity (keystone):
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack i18n:
  New

Bug description:
  python setup.py compile_catalog only compiles one .po file per
  language to a .mo file. By default the domain is the project
  name, that is, nova.po. This means all other nova-log-*.po files are
  never compiled. The only way to get setup.py to compile the other
  files is to call it several times with different domains set, for
  instance `python setup.py compile_catalog --domain nova-log-info`
  and so on. Since this is unusual, it can be assumed that the usual
  packages don't contain all the .mo files.
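
  A small sketch of what "calling it several times" amounts to (the exact
  list of nova's translation domains is assumed here, not verified):

      import subprocess

      # compile_catalog handles one domain per invocation, so each
      # message catalog has to be compiled separately.
      DOMAINS = ('nova', 'nova-log-info', 'nova-log-warning',
                 'nova-log-error', 'nova-log-critical')

      for domain in DOMAINS:
          subprocess.check_call(['python', 'setup.py', 'compile_catalog',
                                 '--domain', domain])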

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1536226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588356] [NEW] Glance must support keystone sessions

2016-06-02 Thread Raildo Mascena de Sousa Filho
Public bug reported:

Glance is one of the last OpenStack services left that does not support
instantiating a client using an existing Keystone session object. This
complicates handling glance-related code in other projects.

Moving to Keystone sessions would also enable easier integration with
various auth methods supported by Keystone as well as different Keystone
API versions.
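
For reference, a sketch of the session pattern other clients already accept
(the glanceclient call at the end is the proposed usage this bug asks for,
not the released API):

    from keystoneauth1 import loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone:5000/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # Other clients take the session directly, e.g.:
    #   novaclient.client.Client('2.1', session=sess)
    # The goal here is to allow the same for glance:
    #   glanceclient.Client('2', session=sess)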

** Affects: glance
 Importance: Undecided
 Assignee: Raildo Mascena de Sousa Filho (raildo)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Raildo Mascena de Sousa Filho (raildo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1588356

Title:
  Glance must support keystone sessions

Status in Glance:
  In Progress

Bug description:
  Glance is one of the last OpenStack services left that does not
  support instantiating a client using an existing Keystone session
  object. This complicates handling glance-related code in other
  projects.

  Moving to Keystone sessions would also enable easier integration with
  various auth methods supported by Keystone as well as different
  Keystone API versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1588356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588355] [NEW] ovs agent resets ovs every 5 seconds

2016-06-02 Thread cuong
Public bug reported:

In the file ovs_ofctl/br_int.py, the function check_canary_table() is defined 
to check the status of OVS. It does this by dumping the flows of table 23, the 
canary table. 
However, in my configuration table 23 does not have any flows: for some reason 
the function setup_canary_table() was never called. As a consequence, 
check_canary_table() always reports that OVS has just been restarted, and the 
OVS neutron agent keeps resetting the flows in OVS, causing packets to be lost 
roughly every 5 seconds.
I'm running the Liberty release.
Thanks,
Cuong Nguyen
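
A rough sketch of the mechanism being described (names and the bridge API
are illustrative, not the exact neutron code):

    CANARY_TABLE = 23

    def setup_canary_table(int_br):
        # Install a single do-nothing flow; its continued presence
        # proves that OVS has not been restarted, since an OVS restart
        # wipes all programmed flows.
        int_br.add_flow(table=CANARY_TABLE, priority=0, actions='drop')

    def check_canary_table(int_br):
        # An empty canary table is read as "OVS restarted, reprogram
        # everything". If setup_canary_table() is never called, this
        # reports a restart on every polling cycle -- the ~5 second
        # reset loop described above.
        return bool(int_br.dump_flows_for_table(CANARY_TABLE))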

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588355

Title:
  ovs agent resets ovs every 5 seconds

Status in neutron:
  New

Bug description:
  In the file ovs_ofctl/br_int.py, the function check_canary_table() is defined 
to check the status of OVS. It does this by dumping the flows of table 23, the 
canary table. 
  However, in my configuration table 23 does not have any flows: for some reason 
the function setup_canary_table() was never called. As a consequence, 
check_canary_table() always reports that OVS has just been restarted, and the 
OVS neutron agent keeps resetting the flows in OVS, causing packets to be lost 
roughly every 5 seconds.
  I'm running the Liberty release.
  Thanks,
  Cuong Nguyen

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583289] Re: api: page_reverse does not work if limit passed

2016-06-02 Thread Ihar Hrachyshka
Turned out that there is no bug: page_reverse just implies behaviour
that is different from what I expected: it merely changes the order in
which the page is read (so with a marker, it will return the previous
page before the marker in db order; it is not supposed to reverse the
page result itself).
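
A small worked example of that semantics (plain Python, purely
illustrative):

    ids = list(range(1, 11))   # rows in database sort order
    marker, limit = 6, 3

    # page_reverse=False: the page after the marker
    forward = [i for i in ids if i > marker][:limit]    # [7, 8, 9]

    # page_reverse=True: the page immediately before the marker,
    # still returned in database order, not reversed
    backward = [i for i in ids if i < marker][-limit:]  # [3, 4, 5]

    assert forward == [7, 8, 9]
    assert backward == [3, 4, 5]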

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583289

Title:
  api: page_reverse does not work if limit passed

Status in neutron:
  Invalid

Bug description:
  In the API, if page_reverse is passed together with limit, then the result
  is not in reversed order. This is because common_db_mixin mistakenly
  applies .reverse() on the result from the database, breaking the order as
  returned by the database backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583289/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586976] Re: Nova UTs and functional tests broken due to cinderclient modifying i18n global vars

2016-06-02 Thread Markus Zoeller (markus_z)
*** This bug is a duplicate of bug 1587071 ***
https://bugs.launchpad.net/bugs/1587071

Commit [1] solved bug 1587071. Bug 1586976 is a duplicate of bug
1587071.

References:
[1] 
https://git.openstack.org/cgit/openstack/python-cinderclient/commit/?id=623cef2d5c5d9f375c60c991f5ab9f951e9253fa

** This bug has been marked a duplicate of bug 1587071
   importing cinderclient (1.7.1) now enables globally oslo.i18n lazy 
translation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1586976

Title:
  Nova UTs and functional tests broken due to cinderclient modifying
  i18n global vars

Status in OpenStack Compute (nova):
  Confirmed
Status in python-cinderclient:
  Confirmed

Bug description:
  Python unittests and functional tests are trampled by our Translation
  system, and in particular the gettext module.

  For example :
  2016-05-30 05:45:25.217 | 
nova.tests.unit.virt.libvirt.test_driver.LibvirtConnTestCase.test_create_images_and_backing_images_exist
  2016-05-30 05:45:25.218 | 

  2016-05-30 05:45:25.218 |
  2016-05-30 05:45:25.218 | Captured traceback:
  2016-05-30 05:45:25.218 | ~~~
  2016-05-30 05:45:25.218 | Traceback (most recent call last):
  2016-05-30 05:45:25.218 |   File 
"/home/jenkins/workspace/gate-nova-python27-db/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  2016-05-30 05:45:25.218 | return func(*args, **keywargs)
  2016-05-30 05:45:25.218 |   File 
"nova/tests/unit/virt/libvirt/test_driver.py", line 8337, in 
test_create_images_and_backing_images_exist
  2016-05-30 05:45:25.218 | conn = 
libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
  2016-05-30 05:45:25.218 |   File "nova/virt/libvirt/driver.py", line 325, 
in __init__
  2016-05-30 05:45:25.218 | host=self._host)
  2016-05-30 05:45:25.219 |   File "nova/virt/firewall.py", line 37, in 
load_driver
  2016-05-30 05:45:25.219 | return fw_class(*args, **kwargs)
  2016-05-30 05:45:25.219 |   File "nova/virt/libvirt/firewall.py", line 
335, in __init__
  2016-05-30 05:45:25.219 | self.nwfilter = 
NWFilterFirewall(kwargs['host'])
  2016-05-30 05:45:25.219 |   File "nova/virt/libvirt/firewall.py", line 
58, in __init__
  2016-05-30 05:45:25.219 | LOG.warning(_LW("Libvirt module could not 
be loaded. "
  2016-05-30 05:45:25.219 |   File 
"/home/jenkins/workspace/gate-nova-python27-db/.tox/py27/local/lib/python2.7/site-packages/oslo_i18n/_factory.py",
 line 83, in f
  2016-05-30 05:45:25.219 | return _message.Message(msg, domain=domain)
  2016-05-30 05:45:25.219 |   File 
"/home/jenkins/workspace/gate-nova-python27-db/.tox/py27/local/lib/python2.7/site-packages/oslo_i18n/_message.py",
 line 60, in __new__
  2016-05-30 05:45:25.219 | msgtext = Message._translate_msgid(msgid, 
domain)
  2016-05-30 05:45:25.219 |   File 
"/home/jenkins/workspace/gate-nova-python27-db/.tox/py27/local/lib/python2.7/site-packages/oslo_i18n/_message.py",
 line 117, in _translate_msgid
  2016-05-30 05:45:25.219 | fallback=True)
  2016-05-30 05:45:25.220 |   File "/usr/lib/python2.7/gettext.py", line 
492, in translation
  2016-05-30 05:45:25.220 | with open(mofile, 'rb') as fp:
  2016-05-30 05:45:25.220 | IOError: [Errno 2] No such file or directory: 
'/home/jenkins/workspace/gate-nova-python27-db/.tox/py27/share/locale/en_US.ISO8859-1/LC_MESSAGES/nova-log-warning.mo'
  2016-05-30 05:45:25.220 |

  Or :
  2016-05-30 05:45:25.210 | 
nova.tests.unit.virt.libvirt.test_driver.LibvirtDriverTestCase.test_delete_instance_files_resize
  2016-05-30 05:45:25.210 | 

  2016-05-30 05:45:25.210 |
  2016-05-30 05:45:25.210 | Captured pythonlogging:
  2016-05-30 05:45:25.210 | ~~~
  2016-05-30 05:45:25.210 | 2016-05-30 05:42:04,467 WARNING 
[nova.virt.libvirt.firewall] Libvirt module could not be loaded. 
NWFilterFirewall will not work correctly.
  2016-05-30 05:45:25.210 | 2016-05-30 05:42:04,469 INFO 
[os_brick.initiator.connector] Init DISCO connector
  2016-05-30 05:45:25.210 |
  2016-05-30 05:45:25.210 |
  2016-05-30 05:45:25.210 | Captured traceback:
  2016-05-30 05:45:25.210 | ~~~
  2016-05-30 05:45:25.211 | Traceback (most recent call last):
  2016-05-30 05:45:25.211 |   File 
"/home/jenkins/workspace/gate-nova-python27-db/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  2016-05-30 05:45:25.211 | return func(*args, **keywargs)
  2016-05-30 05:45:25.211 |   File 
"nova/tests/unit/virt/libvirt/test_driver.py", line 15409, in 
test_delete_instance_files_resize
  2016-05-30 

[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/319864
Committed: 
https://git.openstack.org/cgit/openstack/designate/commit/?id=77d9f412d23c567a0b75550808081e52b6a74605
Submitter: Jenkins
Branch:master

commit 77d9f412d23c567a0b75550808081e52b6a74605
Author: sharat.sharma 
Date:   Mon May 23 17:09:12 2016 +0530

Modify assert statement when comparing with None

Replace assertEqual(None, *) with assertIsNone in designate's
tests to have more clear messages in case of failure.

Change-Id: Iad1dee48bdb35b338e7b18bfb76c7f5e561c3790
Closes-Bug: #1280522


** Changed in: designate
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in bifrost:
  Fix Released
Status in Blazar:
  In Progress
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in dox:
  New
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Heat Translator:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in kolla-mesos:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-cisco:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Fix Released
Status in ooi:
  Fix Released
Status in os-client-config:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  In Progress
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in Stackalytics:
  In Progress
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in python-tuskarclient package in Ubuntu:
  Fix Committed

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  more clear messages in case of failure.
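
  A minimal before/after illustration of the change:

      import unittest

      class ExampleTest(unittest.TestCase):
          def test_lookup_returns_none(self):
              result = None  # stand-in for the value under test
              # Before: on failure the message reads "None != <value>",
              # which obscures what was being asserted.
              self.assertEqual(None, result)
              # After: on failure the message reads "<value> is not None".
              self.assertIsNone(result)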

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588326] [NEW] Eliminate all use of id

2016-06-02 Thread Henry Gessau
Public bug reported:

id is a Python built-in function. Too often we encounter bugs due to id
being used as a variable, for example https://launchpad.net/bugs/1588281

We should eliminate all use of id in the code base.
The hardest step will be to change neutron.db.model_base.HasId

Once all uses have been eliminated, introduce a hacking check to prevent
new occurrences.
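
Two short sketches: first, how the shadowing bites; second, a hypothetical
hacking check in the style neutron uses (the N3xx code and the regex are
made up for illustration):

    import re

    # 1) Inside this function the builtin id() is shadowed, so a later
    #    "subnet_id=id" silently picks up whatever the parameter holds.
    def delete_subnet(id):
        pass

    # 2) Hacking checks are simple per-line generators:
    _assign_id_re = re.compile(r'\bid\s*=(?!=)')

    def check_no_id_variable(logical_line):
        """N3xx - do not use 'id' as a variable name."""
        if _assign_id_re.search(logical_line):
            yield (0, "N3xx: avoid using the builtin name 'id' as a "
                      "variable")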

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: db

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: db

** Description changed:

  id is a Python built-in function. Too often we encounter bugs due to id
  being used as a variable, for example https://launchpad.net/bugs/1588281
  
  We should eliminate all use of id in the code base.
  The hardest step will be to change neutron.db.model_base.HasId
+ 
+ Once all uses have been eliminated, introduce a hacking check to prevent
+ new occurrences.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588326

Title:
  Eliminate all use of id

Status in neutron:
  New

Bug description:
  id is a Python built-in function. Too often we encounter bugs due to
  id being used as a variable, for example
  https://launchpad.net/bugs/1588281

  We should eliminate all use of id in the code base.
  The hardest step will be to change neutron.db.model_base.HasId

  Once all uses have been eliminated, introduce a hacking check to
  prevent new occurrences.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/324211
Committed: 
https://git.openstack.org/cgit/openstack/ceilometer/commit/?id=8ecf1fc99abcfb74f0d0c66d1cac9503beab1016
Submitter: Jenkins
Branch:master

commit 8ecf1fc99abcfb74f0d0c66d1cac9503beab1016
Author: Kevin_Zheng 
Date:   Thu Jun 2 12:58:25 2016 +0800

Bump to Nova v2.1

The nova team has decided to remove the nova v2
API code completely, and it will be merged
very soon: https://review.openstack.org/#/c/311653/
We should bump to use v2.1 ASAP.

Closes-bug: #1588171
Change-Id: I3181d4cb8562e92a2e954467ca0482fba787d30f


** Changed in: ceilometer
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in heat:
  In Progress
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in python-openstackclient:
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  The nova team has decided to remove the nova v2 API code completely, and it 
will be merged very soon: https://review.openstack.org/#/c/311653/

  We should bump to use v2.1 ASAP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588307] [NEW] It is possible that the volume size in an integration test is fetched before the volume finishes extending

2016-06-02 Thread Timur Sufiev
Public bug reported:

As a result, if the volume takes a bit longer to extend and Selenium is fast
enough, Selenium fetches the old volume size while the volume is still
extending.
This results in a test failure.
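
The usual remedy is an explicit wait instead of an immediate read; a hedged
sketch (the locator and timeout are illustrative, not horizon's actual test
code):

    from selenium.webdriver.support.ui import WebDriverWait

    def wait_for_volume_size(driver, size_cell_locator, expected_size,
                             timeout=60):
        # Poll until the displayed size matches the extended size,
        # rather than reading it right after submitting the extend form.
        def _size_updated(drv):
            cell = drv.find_element(*size_cell_locator)
            return expected_size in cell.text
        WebDriverWait(driver, timeout).until(_size_updated)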

** Affects: horizon
 Importance: Medium
 Assignee: Timur Sufiev (tsufiev-x)
 Status: In Progress


** Tags: integration-tests

** Changed in: horizon
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1588307

Title:
  It is possible that the volume size in an integration test is fetched
  before the volume finishes extending

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  As a result, if the volume takes a bit longer to extend and Selenium is
  fast enough, Selenium fetches the old volume size while the volume is
  still extending. This results in a test failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1588307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551747] Re: ubuntu-fan causes issues during network configuration

2016-06-02 Thread Paul Schyska
This is breaking cloud-init on AWS for us with  16.04 hvm:ebs-ssd (ami-
7a138709).

The networking.service unit goes into failed state and cloud-init
metadata crawler fails and falls back to local DataSource, skipping
provisioning of ssh keys etc.

Jun 01 19:45:07 ip-172-31-13-182 ifup[928]: bound to 172.31.13.182 -- renewal 
in 1373 seconds.
Jun 01 19:45:07 ip-172-31-13-182 ifup[928]: run-parts: 
/etc/network/if-up.d/ubuntu-fan exited with return code 1
Jun 01 19:45:07 ip-172-31-13-182 ifup[928]: Failed to bring up ens3.
Jun 01 19:45:07 ip-172-31-13-182 systemd[1]: networking.service: Main process 
exited, code=exited, status=1/FAILURE
Jun 01 19:45:07 ip-172-31-13-182 systemd[1]: Failed to start Raise network 
interfaces.
Jun 01 19:45:07 ip-172-31-13-182 systemd[1]: Dependency failed for Initial 
cloud-init job (metadata service crawler).
Jun 01 19:45:07 ip-172-31-13-182 systemd[1]: cloud-init.service: Job 
cloud-init.service/start failed with result 'dependency'.
Jun 01 19:45:07 ip-172-31-13-182 systemd[1]: networking.service: Unit entered 
failed state.
Jun 01 19:45:07 ip-172-31-13-182 systemd[1]: networking.service: Failed with 
result 'exit-code'.

[... snip ...]

Jun 01 19:45:09 ip-172-31-13-182 cloud-init[1347]: [CLOUDINIT]
cc_final_message.py[WARNING]: Used fallback datasource

FWIW, this happened after installing docker.io, which pulls in ubuntu-
fan as a dependency.

A manual `/usr/sbin/fanctl net start ens3` exits 1, without any output.


** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1551747

Title:
  ubuntu-fan causes issues during network configuration

Status in cloud-init:
  New
Status in Snappy:
  Confirmed
Status in ubuntu-fan package in Ubuntu:
  Confirmed

Bug description:
  it seems that ubuntu-fan is causing issues with network configuration.

  On 16.04 daily image:

  root@localhost:~# snappy list
  NameDate   Version  Developer
  canonical-pi2   2016-02-02 3.0  canonical
  canonical-pi2-linux 2016-02-03 4.3.0-1006-3 canonical
  ubuntu-core 2016-02-22 16.04.0-10.armhf canonical

  I see this when I'm activating a wifi card on a raspberry pi 2.

  root@localhost:~# ifdown wlan0
  ifdown: interface wlan0 not configured
  root@localhost:~# ifup wlan0
  Internet Systems Consortium DHCP Client 4.3.3
  Copyright 2004-2015 Internet Systems Consortium.
  All rights reserved.
  For info, please visit https://www.isc.org/software/dhcp/

  Listening on LPF/wlan0/c4:e9:84:17:31:9b
  Sending on   LPF/wlan0/c4:e9:84:17:31:9b
  Sending on   Socket/fallback
  DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 3 (xid=0x81c0c95e)
  DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 5 (xid=0x81c0c95e)
  DHCPREQUEST of 192.168.0.170 on wlan0 to 255.255.255.255 port 67 
(xid=0x5ec9c081)
  DHCPOFFER of 192.168.0.170 from 192.168.0.251
  DHCPACK of 192.168.0.170 from 192.168.0.251
  RTNETLINK answers: File exists
  bound to 192.168.0.170 -- renewal in 17145 seconds.
  run-parts: /etc/network/if-up.d/ubuntu-fan exited with return code 1
  Failed to bring up wlan0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1551747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587681] Re: new launch instance wizard defaults

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/323623
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=d4a8023d17f672f10e83e48fab4e97dc0796af76
Submitter: Jenkins
Branch:master

commit d4a8023d17f672f10e83e48fab4e97dc0796af76
Author: Kevin Fox 
Date:   Tue May 31 17:00:20 2016 -0700

Set some useful default values with the new launch wizard.

In Mitaka, the new launch instance wizard regresses functionality
from the old launch wizard with regard to defaults. If there is
only one neutron network, it is not automatically selected; if
there is only one keypair, it's not selected; and the default
security group is not automatically selected.

This patch fixes all of that.

Change-Id: I23bab6998e38ba1066b926a4fe71713768d9dff4
Closes-Bug: #1587681


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1587681

Title:
  new launch instance wizard defaults

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In Mitaka, the new launch instance wizard regresses functionality from
  the old launch wizard with regard to defaults. If there is only one
  neutron network, it is not automatically selected; if there is only
  one keypair, it's not selected; and the default security group is not
  automatically selected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1587681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588281] [NEW] db_base_plugin_v2: id used instead of subnet_id

2016-06-02 Thread Artur Korzeniewski
Public bug reported:

In db_base_plugin_v2, _subnet_check_ip_allocations_internal_router_ports(self, 
context, subnet_id) refers to the undeclared variable 'id'. It worked before 
because 'id' was defined at the call site where 
_subnet_check_ip_allocations_internal_router_ports() was invoked.

def _subnet_check_ip_allocations_internal_router_ports(self, context,
   subnet_id):
# Do not delete the subnet if IP allocations for internal
# router ports still exist
allocs = context.session.query(models_v2.IPAllocation).filter_by(
subnet_id=subnet_id).join(models_v2.Port).filter(
models_v2.Port.device_owner.in_(
constants.ROUTER_INTERFACE_OWNERS)
).first()
if allocs:
LOG.debug("Subnet %s still has internal router ports, "
  "cannot delete", subnet_id)
raise exc.SubnetInUse(subnet_id=id)

the last line should be subnet_id=subnet_id.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588281

Title:
  db_base_plugin_v2: id used instead of subnet_id

Status in neutron:
  New

Bug description:
  In db_base_plugin_v2, _subnet_check_ip_allocations_internal_router_ports(self, 
context, subnet_id) refers to the undeclared variable 'id'. It worked before 
because 'id' was defined at the call site where 
_subnet_check_ip_allocations_internal_router_ports() was invoked.

  def _subnet_check_ip_allocations_internal_router_ports(self, context,
 subnet_id):
  # Do not delete the subnet if IP allocations for internal
  # router ports still exist
  allocs = context.session.query(models_v2.IPAllocation).filter_by(
  subnet_id=subnet_id).join(models_v2.Port).filter(
  models_v2.Port.device_owner.in_(
  constants.ROUTER_INTERFACE_OWNERS)
  ).first()
  if allocs:
  LOG.debug("Subnet %s still has internal router ports, "
"cannot delete", subnet_id)
  raise exc.SubnetInUse(subnet_id=id)

  the last line should be subnet_id=subnet_id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588262] Re: yaml option displayed twice

2016-06-02 Thread Sharat Sharma
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588262

Title:
  yaml option displayed twice

Status in python-neutronclient:
  Confirmed

Bug description:
  The help message of "neutron rbac-list" displays "yaml" twice in the
  output formatters option.

  Steps to reproduce:
  1. neutron rbac-list --help

  Output:
  ..
  output formatters:
output formatter options

-f {csv,html,json,json,table,value,yaml,yaml}, --format 
{csv,html,json,json,table,value,yaml,yaml}
  the output format, defaults to table

  
  The yaml option need not be displayed twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1588262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588262] [NEW] yaml option displayed twice

2016-06-02 Thread Sharat Sharma
Public bug reported:

The help message of "neutron rbac-list" displays "yaml" twice in the
output formatters option.

Steps to reproduce:
1. neutron rbac-list --help

Output:
..
output formatters:
  output formatter options

  -f {csv,html,json,json,table,value,yaml,yaml}, --format 
{csv,html,json,json,table,value,yaml,yaml}
the output format, defaults to table


The yaml option need not be displayed twice.

** Affects: neutron
 Importance: Undecided
 Assignee: Sharat Sharma (sharat-sharma)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sharat Sharma (sharat-sharma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588262

Title:
  yaml option displayed twice

Status in neutron:
  New

Bug description:
  The help message of "neutron rbac-list" displays "yaml" twice in the
  output formatters option.

  Steps to reproduce:
  1. neutron rbac-list --help

  Output:
  ..
  output formatters:
output formatter options

-f {csv,html,json,json,table,value,yaml,yaml}, --format 
{csv,html,json,json,table,value,yaml,yaml}
  the output format, defaults to table

  
  The yaml option need not be displayed twice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588262/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588249] [NEW] External auth method has a problem getting the REMOTE_USER header

2016-06-02 Thread guoshan
Public bug reported:

When the external auth method is enabled, a REMOTE_USER value needs to be 
passed as the logged-in user. For now the REMOTE_USER header cannot be read, 
and we should modify the method in controller.py.
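
In Django, that value is injected by the web server and surfaces in
request.META; a minimal sketch of reading it (illustrative, not the actual
controller.py change):

    def get_remote_user(request):
        # REMOTE_USER is set by the web server (e.g. Apache with an
        # external auth module); it is not a header the client can
        # supply directly.
        return request.META.get('REMOTE_USER')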

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

- when enable the external method, It need to pass a REMOTE_USER as a log user.
+ When enable the external method, It need to pass a REMOTE_USER as a log user.
  For now it can't get the head of REMOTE_USER, and we should modify the method 
in controller.py.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1588249

Title:
  External auth method has a problem getting the REMOTE_USER header

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the external auth method is enabled, a REMOTE_USER value needs to be 
passed as the logged-in user. For now the REMOTE_USER header cannot be read, 
and we should modify the method in controller.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1588249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588244] [NEW] When we delete the user "admin", its resources become unavailable

2016-06-02 Thread guoshan
Public bug reported:

According to policy, admin can disable or delete the admin user itself.
However, when we delete admin, the resources owned by admin become 
unavailable.

** Affects: keystone
 Importance: Undecided
 Assignee: guoshan (guoshan)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => guoshan (guoshan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1588244

Title:
  When we delete the user "admin", its resources become
  unavailable

Status in OpenStack Identity (keystone):
  New

Bug description:
  According to policy, admin can disable or delete the admin user itself.
  However, when we delete admin, the resources owned by admin become 
unavailable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1588244/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588228] [NEW] 'net-delete' command seems to cause dead loop

2016-06-02 Thread xiewj
Public bug reported:

In Mitaka:

Create a network with two subnets, one belonging to the network's owner 
tenant and the other belonging to the admin tenant.
Using the 'net-delete' command to delete the network as the owner tenant 
seems to cause a dead loop.
 
1) The demo tenant creates a network with a subnet

[root@localhost devstack]# source openrc demo demo
WARNING: setting legacy OS_TENANT_NAME to support cli tools.

[root@localhost devstack]# neutron net-create net_demo
Created a new network:
+-+--+
| Field   | Value|
+-+--+
| admin_state_up  | True |
| availability_zone_hints |  |
| availability_zones  |  |
| created_at  | 2016-06-02T15:32:23  |
| description |  |
| id  | 2c149c50-5f29-40cc-b2d7-8d2cceb2ecb3 |
| ipv4_address_scope  |  |
| ipv6_address_scope  |  |
| mtu | 4950 |
| name| net_demo |
| qos_policy_id   |  |
| router:external | False|
| shared  | False|
| status  | ACTIVE   |
| subnets |  |
| tags|  |
| tenant_id   | 8be18865b76b4428af952487dfdc250f |
| updated_at  | 2016-06-02T15:32:23  |
| vlan_transparent| False|
+-+--+

[root@localhost devstack]# neutron subnet-create net_demo --name subnet_demo  
102.1.1.0/24
Created a new subnet:
+---+--+
| Field | Value|
+---+--+
| allocation_pools  | {"start": "102.1.1.2", "end": "102.1.1.254"} |
| cidr  | 102.1.1.0/24 |
| created_at| 2016-06-02T15:33:46  |
| description   |  |
| dns_nameservers   |  |
| enable_dhcp   | True |
| gateway_ip| 102.1.1.1|
| host_routes   |  |
| id| 2c3c178a-4139-4490-a8a8-4d1be1b1e2b3 |
| ip_version| 4|
| ipv6_address_mode |  |
| ipv6_ra_mode  |  |
| name  | subnet_demo  |
| network_id| 2c149c50-5f29-40cc-b2d7-8d2cceb2ecb3 |
| subnetpool_id |  |
| tenant_id | 8be18865b76b4428af952487dfdc250f |
| updated_at| 2016-06-02T15:33:46  |
+---+--+


2) The admin tenant creates a subnet for the network created by the demo 
tenant in step 1.

[root@localhost devstack]# source openrc admin admin

WARNING: setting legacy OS_TENANT_NAME to support cli tools.
[root@localhost devstack]# neutron subnet-create net_demo 103.1.1.0/24 
Created a new subnet:
+---+--+
| Field | Value|
+---+--+
| allocation_pools  | {"start": "103.1.1.2", "end": "103.1.1.254"} |
| cidr  | 103.1.1.0/24 |
| created_at| 2016-06-02T15:34:50  |
| description   |  |
| dns_nameservers   |  |
| enable_dhcp   | True |
| gateway_ip| 103.1.1.1|
| host_routes   |  |
| id| 427bfd4e-8a6a-40de-8f7f-7f89d8ea6468 |
| ip_version| 4|
| ipv6_address_mode |  |
| ipv6_ra_mode  |  |
| name  | 

[Yahoo-eng-team] [Bug 1580611] Re: murano-engine cannot authenticate to keystone

2016-06-02 Thread Nikolay Starodubtsev
** Project changed: keystone => murano

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1580611

Title:
  murano-engine cannot authenticate to keystone

Status in Murano:
  New

Bug description:
  mitaka release

  murano-api and murano-engine are running, and muranoclient works too
  (murano environment-list and environment-create work), but after
  launching environment-deploy, murano-engine fails to log in to
  keystone: error 401 "Exception Could not find domain:
  default" and KeyError: 'model' in /usr/lib/python2.7/dist-
  packages/murano/common/server.py

  Details (versions, log, tcpdump)
  http://paste.openstack.org/show/496729/

  The problem is that the requests murano-engine sends to keystone
  contain an incorrect domain_name and a blank password, i.e.:

  {"auth": {"scope": {"project": {"domain": {"name": "default"}, "name":
  "admin"}}, "identity": {"password": {"user": {"password": null}},
  "methods": ["password"]}}}

  If you specify in murano.conf instead project_domain_id and
  user_domain_id:

  project_domain_name = "Default"
  user_domain_name = "Default"

  then nothing changes and the error remains.

To manage notifications about this bug go to:
https://bugs.launchpad.net/murano/+bug/1580611/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280522] Re: Replace assertEqual(None, *) with assertIsNone in tests

2016-06-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/319908
Committed: 
https://git.openstack.org/cgit/openstack/watcher/commit/?id=164a802718a1e026e78035bfaae9ffd047b4089e
Submitter: Jenkins
Branch:master

commit 164a802718a1e026e78035bfaae9ffd047b4089e
Author: sharat.sharma 
Date:   Mon May 23 18:03:24 2016 +0530

Replace assertEqual(None, *) with assertIsNone in tests

Replace assertEqual(None, *) with assertIsNone in tests to have
more clear messages in case of failure.

Change-Id: I98261ef7cca06447ea9d443a2c287c046f380f77
Closes-Bug: #1280522


** Changed in: watcher
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1280522

Title:
  Replace assertEqual(None, *) with assertIsNone in tests

Status in Anchor:
  Fix Released
Status in bifrost:
  Fix Released
Status in Blazar:
  In Progress
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  In Progress
Status in dox:
  New
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Heat Translator:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in kolla-mesos:
  Fix Released
Status in Manila:
  Fix Released
Status in networking-cisco:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Fix Released
Status in ooi:
  Fix Released
Status in os-client-config:
  Fix Released
Status in python-barbicanclient:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-designateclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  In Progress
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in Solum:
  Fix Released
Status in Stackalytics:
  In Progress
Status in tempest:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in tuskar:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released
Status in designate package in Ubuntu:
  Fix Released
Status in python-tuskarclient package in Ubuntu:
  Fix Committed

Bug description:
  Replace assertEqual(None, *) with assertIsNone in tests to have
  more clear messages in case of failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/anchor/+bug/1280522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536226] Re: Not all .po files compiled

2016-06-02 Thread Andreas Jaeger
keystone change: https://review.openstack.org/#/c/319260/

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536226

Title:
  Not all .po files compiled

Status in Glance:
  In Progress
Status in OpenStack Identity (keystone):
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in openstack i18n:
  New

Bug description:
  python setup.py compile_catalog only compiles one .po file per
  language to a .mo file. By default the domain is the project
  name, that is, nova.po. This means all other nova-log-*.po files are
  never compiled. The only way to get setup.py to compile the other
  files is to call it several times with different domains set, for
  instance `python setup.py compile_catalog --domain nova-log-info`
  and so on. Since this is unusual, it can be assumed that the usual
  packages don't contain all the .mo files.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1536226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588190] [NEW] policy.v3cloudsample.json broken in mitaka

2016-06-02 Thread Jay Jahns
Public bug reported:

We have a multi-domain configuration in our private cloud, for which I've
had to revert to the Liberty policy.v3cloudsample.json file instead of the
Mitaka or master version.

Horizon is generating the following trace when a domain admin is trying
to look at projects/users:

[pid: 22842|app: 0|req: 5/17] 10.38.202.12 () {46 vars in 907 bytes} [Thu Jun  
2 07:17:24 2016] GET / => generated 0 bytes in 5 msecs (HTTP/1.1 302) 5 headers 
in 198 bytes (1 switches on core 1)
Internal Server Error: /identity/
Traceback (most recent call last):
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 132, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/horizon/decorators.py",
 line 36, in dec
return view_func(request, *args, **kwargs)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/horizon/decorators.py",
 line 52, in dec
return view_func(request, *args, **kwargs)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/horizon/decorators.py",
 line 36, in dec
return view_func(request, *args, **kwargs)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 71, in view
return self.dispatch(request, *args, **kwargs)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 89, in dispatch
return handler(request, *args, **kwargs)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/horizon/tables/views.py",
 line 159, in get
handled = self.construct_tables()
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/horizon/tables/views.py",
 line 150, in construct_tables
handled = self.handle_table(table)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/horizon/tables/views.py",
 line 121, in handle_table
data = self._get_data_dict()
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/horizon/tables/views.py",
 line 187, in _get_data_dict
self._data = {self.table_class._meta.name: self.get_data()}
  File 
"/opt/mhos/openstack/horizon/openstack_dashboard/dashboards/identity/projects/views.py",
 line 84, in get_data
self.request):
  File "/opt/mhos/openstack/horizon/openstack_dashboard/policy.py", line 24, in 
check
return policy_check(actions, request, target)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/openstack_auth/policy.py",
 line 155, in check
enforcer[scope], action, target, domain_credentials)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/openstack_auth/policy.py",
 line 169, in _check_credentials
if not enforcer_scope.enforce(action, target, credentials):
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/oslo_policy/policy.py",
 line 578, in enforce
result = self.rules[rule](target, creds, self)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/oslo_policy/_checks.py",
 line 160, in __call__
if rule(target, cred, enforcer):
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/oslo_policy/_checks.py",
 line 204, in __call__
return enforcer.rules[self.match](target, creds, enforcer)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/oslo_policy/_checks.py",
 line 125, in __call__
if not rule(target, cred, enforcer):
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/oslo_policy/_checks.py",
 line 160, in __call__
if rule(target, cred, enforcer):
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/oslo_policy/_checks.py",
 line 311, in __call__
return self._find_in_dict(creds, path_segments, match)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/oslo_policy/_checks.py",
 line 292, in _find_in_dict
return self._find_in_dict(test_value, path_segments, match)
  File 
"/opt/mhos/openstack/horizon/local/lib/python2.7/site-packages/oslo_policy/_checks.py",
 line 283, in _find_in_dict
test_value = test_value[key]
TypeError: 'Token' object has no attribute '__getitem__'
[pid: 22837|app: 0|req: 5/18] 10.38.202.12 () {46 vars in 925 bytes} [Thu Jun  
2 07:17:24 2016] GET /identity/ => generated 375516 bytes in 251 msecs 
(HTTP/1.1 500) 4 headers in 145 bytes (2 switches on core 0)
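
For context on the TypeError: oslo_policy's generic checks walk the
credential structure with item access (creds[key]) one path segment at a
time. That works while every level is a dict, but fails as soon as a
keystone Token object is reached, because Token exposes its fields as
attributes rather than via __getitem__. A minimal sketch of that failure
mode under Python 2.7 (simplified stand-ins, not the actual oslo_policy
source):

    # Simplified stand-in for the keystone token: attribute access only,
    # no __getitem__.
    class Token(object):
        def __init__(self, user):
            self.user = user

    # Simplified shape of oslo_policy's _find_in_dict(): walk the
    # credentials one key at a time using item access.
    def find_in_dict(test_value, path_segments, match):
        if not path_segments:
            return str(test_value) == match
        key, rest = path_segments[0], path_segments[1:]
        test_value = test_value[key]  # TypeError once a Token is reached
        return find_in_dict(test_value, rest, match)

    creds = {'token': Token(user={'domain': {'id': 'default'}})}
    # A rule like "token.user.domain.id:default" produces this walk; the
    # second step (indexing into the Token) raises:
    # TypeError: 'Token' object has no attribute '__getitem__'
    find_in_dict(creds, ['token', 'user', 'domain', 'id'], 'default')

The fix direction would be for the caller (openstack_auth here) to hand
the enforcer the token in dict form, or for the check to fall back to
attribute access; which approach was taken is not shown in this trace.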

Alternatively, we get the following trace, which is somewhat more
informative:

[pid: 22623|app: 0|req: 17/76] 10.38.202.12 () {44 vars in 3206 bytes} [Thu Jun  2 07:05:15 2016] GET /i18n/js/horizon+openstack_dashboard+neutron_lbaas_dashboard+muranodashboard/ => generated 2372 bytes in 4 msecs (HTTP/1.1 200) 4 headers in 132 bytes (1 switches on core 1)
Pure project admin doesn't have a domain token
Internal Server Error: /identity/users/
Traceback (most recent call last):
  File 

[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2016-06-02 Thread zhaobo
** No longer affects: designate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in heat:
  In Progress
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in python-openstackclient:
  New
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  The nova team has decided to remove the nova v2 API code completely, and it
  will be merged very soon: https://review.openstack.org/#/c/311653/

  We should bump to v2.1 ASAP.
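
  For a consumer of novaclient, the bump amounts to changing the requested
  API version string. A minimal sketch of what that looks like (hypothetical
  endpoint and credentials, assuming a keystoneauth1 session; this is not
  the patch under review):

      from keystoneauth1 import session
      from keystoneauth1.identity import v3
      from novaclient import client as nova_client

      # Hypothetical auth details; substitute your deployment's values.
      auth = v3.Password(auth_url='http://controller:5000/v3',
                         username='admin', password='secret',
                         project_name='admin',
                         user_domain_id='default',
                         project_domain_id='default')
      sess = session.Session(auth=auth)

      # Before: nova = nova_client.Client('2', session=sess)
      nova = nova_client.Client('2.1', session=sess)
      print(nova.servers.list())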

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2016-06-02 Thread zhaobo
** Also affects: designate
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  New
Status in heat:
  In Progress
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in python-openstackclient:
  New
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  The nova team has decided to remove the nova v2 API code completely, and it
  will be merged very soon: https://review.openstack.org/#/c/311653/

  We should bump to v2.1 ASAP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588170] Re: Should update nova api version to 2.1

2016-06-02 Thread zhaobo
Duplicate bug. :(

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588170

Title:
  Should update nova api version to 2.1

Status in neutron:
  Invalid

Bug description:
  The nova API has abandoned v2.0 and suggests using v2.1 when calling nova,
  so we should update the version from 2.0 to 2.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588170/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2016-06-02 Thread Zhenyu Zheng
** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  In Progress
Status in Cinder:
  New
Status in heat:
  In Progress
Status in neutron:
  In Progress
Status in octavia:
  New
Status in python-openstackclient:
  New
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  The nova team has decided to remove the nova v2 API code completely, and it
  will be merged very soon: https://review.openstack.org/#/c/311653/

  We should bump to v2.1 ASAP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2016-06-02 Thread zhaobo
** Changed in: cinder
 Assignee: (unassigned) => zhaobo (zhaobo6)

** Also affects: octavia
   Importance: Undecided
   Status: New

** Changed in: octavia
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  In Progress
Status in Cinder:
  New
Status in heat:
  In Progress
Status in neutron:
  In Progress
Status in octavia:
  New
Status in python-openstackclient:
  New
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  The nova team has decided to removew nova v2 API code completly. And it will 
be merged
  very soon: https://review.openstack.org/#/c/311653/

  we should bump to use v2.1 ASAP

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2016-06-02 Thread Zhenyu Zheng
** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: searchlight
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: searchlight
 Assignee: (unassigned) => Zhenyu Zheng (zhengzhenyu)

** Changed in: ceilometer
 Assignee: (unassigned) => Zhenyu Zheng (zhengzhenyu)

** Description changed:

- The nova API has abandoned v2.0 and suggests using v2.1 when calling nova,
- so we should update the version from 2.0 to 2.1.
+ The nova team has decided to remove the nova v2 API code completely, and it
+ will be merged very soon: https://review.openstack.org/#/c/311653/
+ 
+ We should bump to v2.1 ASAP.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  New
Status in Cinder:
  New
Status in heat:
  New
Status in neutron:
  New
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  The nova team has decided to remove the nova v2 API code completely, and it
  will be merged very soon: https://review.openstack.org/#/c/311653/

  We should bump to v2.1 ASAP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588170] [NEW] Should update nova api version to 2.1

2016-06-02 Thread zhaobo
Public bug reported:

The nova API has abandoned v2.0 and suggests using v2.1 when calling nova,
so we should update the version from 2.0 to 2.1.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588170

Title:
  Should update nova api version to 2.1

Status in neutron:
  New

Bug description:
  The nova API has abandoned v2.0 and suggests using v2.1 when calling nova,
  so we should update the version from 2.0 to 2.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588170/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] [NEW] Should update nova api version to 2.1

2016-06-02 Thread zhaobo
Public bug reported:

The nova team has decided to remove the nova v2 API code completely, and it
will be merged very soon: https://review.openstack.org/#/c/311653/

We should bump to v2.1 ASAP.

** Affects: ceilometer
 Importance: Undecided
 Assignee: Zhenyu Zheng (zhengzhenyu)
 Status: New

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: heat
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

** Affects: searchlight
 Importance: Undecided
 Assignee: Zhenyu Zheng (zhengzhenyu)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  New
Status in Cinder:
  New
Status in heat:
  New
Status in neutron:
  New
Status in OpenStack Search (Searchlight):
  In Progress

Bug description:
  The nova team has decided to remove the nova v2 API code completely, and it
  will be merged very soon: https://review.openstack.org/#/c/311653/

  We should bump to v2.1 ASAP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] [NEW] Should update nova api version to 2.1

2016-06-02 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The nova API has abandoned v2.0 and suggests using v2.1 when calling nova,
so we should update the version from 2.0 to 2.1.

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

-- 
Should update nova api version to 2.1
https://bugs.launchpad.net/bugs/1588171
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp