[Yahoo-eng-team] [Bug 1544857] [NEW] Powered-off VMs still get 'cpu_util' metrics.

2016-02-11 Thread Andrei
Public bug reported:

Powered-off VMs still get 'cpu_util' metrics with zero values. There
should be a distinction between no values and zero values. There is no
such problem with, say, the 'memory.usage' metric.

I tested it on KILO.

[root@controller ~(keystone_admin)]# nova show 2648fd92-b84d-4309-a3cb-34e3b5ceea74
+--------------------------------------+-------------------------------------------------------------------------+
| Property                             | Value                                                                   |
+--------------------------------------+-------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                                    |
| OS-EXT-AZ:availability_zone          | nova                                                                    |
| OS-EXT-SRV-ATTR:host                 | kvm3.openstack5.lan                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | kvm3.openstack5.lan                                                     |
| OS-EXT-SRV-ATTR:instance_name        | instance-009f                                                           |
| OS-EXT-STS:power_state               | 4                                                                       |
| OS-EXT-STS:task_state                | -                                                                       |
| OS-EXT-STS:vm_state                  | stopped                                                                 |
| OS-SRV-USG:launched_at               | 2015-09-24T12:40:04.00                                                  |
| OS-SRV-USG:terminated_at             | -                                                                       |
| accessIPv4                           |                                                                         |
| accessIPv6                           |                                                                         |
| config_drive                         |                                                                         |
| created                              | 2015-09-24T12:39:33Z                                                    |
| ext_net network                      | 192.168.185.21                                                          |
| flavor                               | m1.tiny (1)                                                             |
| hostId                               | a2f6da6c2121d7aefdb19ecae81cb18248a8ed9775d1151131f6ed41                |
| id                                   | 2648fd92-b84d-4309-a3cb-34e3b5ceea74                                    |
| image                                | olegn-3-cirros-0.3.4-x86_64-disk (744f81b2-aa16-42dd-ade0-05b050d4f17b) |
| key_name                             | -                                                                       |
| metadata                             | {}                                                                      |
| name                                 | test_cirros034--1                                                       |
| os-extended-volumes:volumes_attached | []                                                                      |
| qa_net network                       | 10.0.3.22                                                               |
| security_groups                      | default                                                                 |
| status                               | SHUTOFF                                                                 |
| tenant_id                            | 86f1bbb7f7054997a67239680b69aaaf                                        |
| updated                              | 2015-11-19T12:26:32Z                                                    |
| user_id                              | 8c60c96e20e6417bb19701677afb6a2f                                        |
+--------------------------------------+-------------------------------------------------------------------------+
[root@controller ~(keystone_admin)]# ceilometer sample-list -m cpu_util -l 10 -q "resource=2648fd92-b84d-4309-a3cb-34e3b5ceea74"
+--------------------------------------+----------+-------+--------+------+---------------------+
| Resource ID                          | Name     | Type  | Volume | Unit | Timestamp           |
+--------------------------------------+----------+-------+--------+------+---------------------+
| 2648fd92-b84d-4309-a3cb-34e3b5ceea74 | cpu_util | gauge | 0.0    | %    | 2016-02-12T07:07:00 |
| 2648fd92-b84d-4309-a3cb-34e3b5ceea74 | cpu_util | gauge | 0.0    | %    | 2016-02-12T06:57:00 |
| 2648fd92-b84d-4309-a3cb-34e3b5ceea74 | cpu_util | gauge | 0.0    | %    | 2016-02-12T06:47:00 |
| 2648fd92-b84d-4309-a3cb-34e3b5ceea74 | cpu_util | gauge | 0.0    | %    | 2016-02-12T06:37:00 |
| 2648fd92-b84d-4309-a3cb-34e3b5ceea7
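
A consumer-side workaround sketch (illustrative only, not Ceilometer code):
given samples like the ones above plus the instance's OS-EXT-STS:power_state
from nova show, zero-valued cpu_util readings taken while the instance is shut
off (power_state 4, as shown above) can be dropped. The sample field names are
assumptions about the dict layout, not a documented API.

    # Illustrative workaround only -- not Ceilometer's pollster code.  Assumes
    # samples are dicts carrying 'counter_name' and 'counter_volume', and that
    # the caller looked up the instance power_state (4 == shut off, as above).
    def meaningful_cpu_samples(samples, power_state):
        for sample in samples:
            if (power_state == 4
                    and sample['counter_name'] == 'cpu_util'
                    and sample['counter_volume'] == 0.0):
                continue  # a powered-off VM should report no value, not zero
            yield sample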

[Yahoo-eng-team] [Bug 1544585] Re: Missing network mock for Instance tests

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279103
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=72bbc8dcaa93825239533d4590de30345d35c4d7
Submitter: Jenkins
Branch:master

commit 72bbc8dcaa93825239533d4590de30345d35c4d7
Author: Itxaka 
Date:   Thu Feb 11 15:42:35 2016 +0100

Add missing network mock

InstanceAjaxTests.test_row_update_flavor_not_found was
missing a mock for servers_update_addresses, causing
an error to be logged while running the tests

Change-Id: I15bdab7add2e403110a8d85a77ef49aa11ae03ac
Closes-Bug: #1544585


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544585

Title:
  Missing network mock for Instance tests

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  
openstack_dashboards.dashboards.project.instances.tests:InstanceAjaxTests.test_row_update_flavor_not_found
  test is missing a mock for "servers_update_addresses" causing the test
  to output "Cannot connect to neutron"

  Strangely enough, the test is passing, which is not really good.
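
  For illustration, the missing stub amounts to something like the sketch
  below. This is not the actual Horizon test code (Horizon uses its own
  stubbing helpers), and the module path is an assumption.

    # Hedged sketch, not the committed Horizon fix: stub out the address-update
    # helper so the AJAX row-update test never tries to reach Neutron.
    import unittest
    from unittest import mock

    class RowUpdateFlavorNotFoundTest(unittest.TestCase):
        @mock.patch('openstack_dashboard.api.network.servers_update_addresses')
        def test_row_update_flavor_not_found(self, mock_update_addresses):
            mock_update_addresses.return_value = None
            # ... issue the AJAX row-update request here; with the stub in
            # place, no "Cannot connect to neutron" error should be logged ...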

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513267] Re: network_data.json not found in openstack/2015-10-15/

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/241824
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=edea873565d07120141ae0f76199d6a3aee2959d
Submitter: Jenkins
Branch:master

commit edea873565d07120141ae0f76199d6a3aee2959d
Author: Mathieu Gagné 
Date:   Wed Nov 4 19:04:40 2015 -0500

Properly inject network_data.json in configdrive

The file "network_data.json" is not currently found in the folder
"openstack/2015-10-15/" of config drive, only in "openstack/latest/".
The patch makes sure its found in both folders.

Closes-bug: #1513267
Change-Id: Ifcd4fadb91fcd360af5cf0988178992f2905190a


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513267

Title:
  network_data.json not found in openstack/2015-10-15/

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The file "network_data.json" is not found in the folder
  "openstack/2015-10-15/" of config drive, only in "openstack/latest/".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1513267/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542486] Re: nova-compute stack traces with BadRequest: Specifying 'tenant_id' other than authenticated tenant in request requires admin privileges

2016-02-11 Thread Steve Martinelli
Looks like this is middleware related, not the keystone server; changing the
project.

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542486

Title:
  nova-compute stack traces with BadRequest: Specifying 'tenant_id'
  other than authenticated tenant in request requires admin privileges

Status in OpenStack Identity (keystone):
  Invalid
Status in keystonemiddleware:
  New
Status in OpenStack Compute (nova):
  Incomplete
Status in puppet-nova:
  New

Bug description:
  The puppet-openstack-integration tests (rebased on
  https://review.openstack.org/#/c/276773/ ) currently fail on the
  latest version of RDO Mitaka (delorean current) due to what seems to
  be a problem with the neutron configuration.

  Everything installs fine but tempest fails:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/console.html#_2016-02-05_20_26_35_569

  And there are stack traces in nova-compute.log:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/logs/nova/nova-compute.txt.gz#_2016-02-05_20_22_16_151

  I talked with #openstack-nova and they pointed out a difference between what 
devstack yields as a [neutron] configuration versus what puppet-nova configures:
  
  # puppet-nova via puppet-openstack-integration
  
  [neutron]
  service_metadata_proxy=True
  metadata_proxy_shared_secret =a_big_secret
  url=http://127.0.0.1:9696
  region_name=RegionOne
  ovs_bridge=br-int
  extension_sync_interval=600
  auth_url=http://127.0.0.1:35357
  password=a_big_secret
  tenant_name=services
  timeout=30
  username=neutron
  auth_plugin=password
  default_tenant_id=default

  
  # Well, it worked in devstack™
  
  [neutron]
  service_metadata_proxy = True
  url = http://127.0.0.1:9696
  region_name = RegionOne
  auth_url = http://127.0.0.1:35357/v3
  password = secretservice
  auth_strategy = keystone
  project_domain_name = Default
  project_name = service
  user_domain_name = Default
  username = neutron
  auth_plugin = v3password

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544835] [NEW] Using scope to clear table selections

2016-02-11 Thread Thai Tran
Public bug reported:

We currently use scope to clear table selections. This is not ideal because
it breaks encapsulation and encourages the use of scope over the controller.
We should instead provide a controller method that clears the selections.

Reference:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/images/table/images.controller.js#L101

** Affects: horizon
 Importance: Medium
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544835

Title:
  Using scope to clear table selections

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We currently use scope to clear table selections. This is not ideal because
  it breaks encapsulation and encourages the use of scope over the controller.
  We should instead provide a controller method that clears the selections.

  Reference:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/static/app/core/images/table/images.controller.js#L101

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544833] [NEW] VM failed to add the flows when tenant vm is booted with default public and private network created by devstack in VXLAN setup

2016-02-11 Thread prabhu murthy
Public bug reported:

Description
Boot a VM and check for the flows .
execute /opt/stack/neutron/neutron/agent/linux/utils.py:156
2016-02-10 11:04:22.977 11780 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ovs-ofctl', 'add-flows', 'br-sec', '-'] 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:101
2016-02-10 11:04:22.984 11780 ERROR neutron.agent.linux.utils [-]
Command: ['ovs-ofctl', 'add-flows', 'br-sec', '-']
Exit code: 1
Stdin: 
hard_timeout=0,idle_timeout=0,priority=1,ip,dl_src=fa:16:3e:03:93:c2,cookie=0,table=1,dl_vlan=4,nw_src=172.24.4.17,in_port=1,actions=resubmit(,5)
hard_timeout=0,idle_timeout=0,priority=10,ip,dl_src=fa:16:3e:03:93:c2,cookie=0x13de491e1aaf506d,table=1,dl_vlan=4,nw_src=172.24.4.17,in_port=1,actions=resubmit(,2),normal
hard_timeout=0,idle_timeout=0,priority=1,ip,dl_src=fa:16:3e:03:93:c2,cookie=0,table=1,dl_vlan=4,nw_src=2001:db8::12,in_port=1,actions=resubmit(,5)
hard_timeout=0,idle_timeout=0,priority=10,ip,dl_src=fa:16:3e:03:93:c2,cookie=0x13de491e1aaf506d,table=1,dl_vlan=4,nw_src=2001:db8::12,in_port=1,actions=resubmit(,2),normal
hard_timeout=0,idle_timeout=0,priority=1,ipv6,dl_src=fa:16:3e:03:93:c2,cookie=0,table=1,dl_vlan=4,nw_src=172.24.4.17,in_port=1,actions=resubmit(,5)
hard_timeout=0,idle_timeout=0,priority=10,ipv6,dl_src=fa:16:3e:03:93:c2,cookie=0x13de491e1aaf506d,table=1,dl_vlan=4,nw_src=172.24.4.17,in_port=1,actions=resubmit(,2),normal
hard_timeout=0,idle_timeout=0,priority=1,ipv6,dl_src=fa:16:3e:03:93:c2,cookie=0,table=1,dl_vlan=4,nw_src=2001:db8::12,in_port=1,actions=resubmit(,5)
hard_timeout=0,idle_timeout=0,priority=10,ipv6,dl_src=fa:16:3e:03:93:c2,cookie=0x13de491e1aaf506d,table=1,dl_vlan=4,nw_src=2001:db8::12,in_port=1,actions=resubmit(,2),normal
Stdout:
Stderr: ovs-ofctl: -:3: 2001:db8::12: invalid IP address
2016-02-10 11:04:22.985 11780 ERROR neutron.agent.common.ovs_lib [-] Unable to 
execute ['ovs-ofctl', 'add-flows', 'br-sec', '-']. Exception:
Command: ['ovs-ofctl', 'add-flows', 'br-sec', '-']
Exit code: 1
==
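
The failure is the third Stdin line: an IPv6 source (2001:db8::12) placed in
nw_src of an "ip" flow, which ovs-ofctl rejects. A minimal sketch of the
address-family check the flow builder needs (illustrative only, not the
networking-vsphere agent code):

    # Illustrative sketch: IPv4 sources belong in ip/nw_src, IPv6 sources in
    # ipv6/ipv6_src; emitting the cross product with nw_src for both is what
    # produces the "invalid IP address" error above.
    import ipaddress

    def source_match(addr):
        ip = ipaddress.ip_address(addr)
        if ip.version == 6:
            return 'ipv6,ipv6_src=%s' % ip
        return 'ip,nw_src=%s' % ip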

** Affects: networking-vsphere
 Importance: Undecided
 Status: New

** Project changed: nova => networking-vsphere

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544833

Title:
  VM failed to add the flows when tenant vm is booted with default
  public and private network created by devstack in  VXLAN setup

Status in networking-vsphere:
  New

Bug description:
  Description
  Boot a VM and check for the flows .
  execute /opt/stack/neutron/neutron/agent/linux/utils.py:156
  2016-02-10 11:04:22.977 11780 DEBUG neutron.agent.linux.utils [-] Running 
command (rootwrap daemon): ['ovs-ofctl', 'add-flows', 'br-sec', '-'] 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:101
  2016-02-10 11:04:22.984 11780 ERROR neutron.agent.linux.utils [-]
  Command: ['ovs-ofctl', 'add-flows', 'br-sec', '-']
  Exit code: 1
  Stdin: 
hard_timeout=0,idle_timeout=0,priority=1,ip,dl_src=fa:16:3e:03:93:c2,cookie=0,table=1,dl_vlan=4,nw_src=172.24.4.17,in_port=1,actions=resubmit(,5)
  
hard_timeout=0,idle_timeout=0,priority=10,ip,dl_src=fa:16:3e:03:93:c2,cookie=0x13de491e1aaf506d,table=1,dl_vlan=4,nw_src=172.24.4.17,in_port=1,actions=resubmit(,2),normal
  
hard_timeout=0,idle_timeout=0,priority=1,ip,dl_src=fa:16:3e:03:93:c2,cookie=0,table=1,dl_vlan=4,nw_src=2001:db8::12,in_port=1,actions=resubmit(,5)
  
hard_timeout=0,idle_timeout=0,priority=10,ip,dl_src=fa:16:3e:03:93:c2,cookie=0x13de491e1aaf506d,table=1,dl_vlan=4,nw_src=2001:db8::12,in_port=1,actions=resubmit(,2),normal
  
hard_timeout=0,idle_timeout=0,priority=1,ipv6,dl_src=fa:16:3e:03:93:c2,cookie=0,table=1,dl_vlan=4,nw_src=172.24.4.17,in_port=1,actions=resubmit(,5)
  
hard_timeout=0,idle_timeout=0,priority=10,ipv6,dl_src=fa:16:3e:03:93:c2,cookie=0x13de491e1aaf506d,table=1,dl_vlan=4,nw_src=172.24.4.17,in_port=1,actions=resubmit(,2),normal
  
hard_timeout=0,idle_timeout=0,priority=1,ipv6,dl_src=fa:16:3e:03:93:c2,cookie=0,table=1,dl_vlan=4,nw_src=2001:db8::12,in_port=1,actions=resubmit(,5)
  
hard_timeout=0,idle_timeout=0,priority=10,ipv6,dl_src=fa:16:3e:03:93:c2,cookie=0x13de491e1aaf506d,table=1,dl_vlan=4,nw_src=2001:db8::12,in_port=1,actions=resubmit(,2),normal
  Stdout:
  Stderr: ovs-ofctl: -:3: 2001:db8::12: invalid IP address
  2016-02-10 11:04:22.985 11780 ERROR neutron.agent.common.ovs_lib [-] Unable 
to execute ['ovs-ofctl', 'add-flows', 'br-sec', '-']. Exception:
  Command: ['ovs-ofctl', 'add-flows', 'br-sec', '-']
  Exit code: 1
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-vsphere/+bug/1544833/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521194] Re: Qos Aggregated Bandwidth Rate Limiting

2016-02-11 Thread Armando Migliaccio
If we want to be more explicit about this in the networking-guide and
clarify the perceived side-effects of the existing implementation, then
it's fine, but this is not exactly a neutron 'bug', but a documentation
one.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1521194

Title:
  Qos Aggregated Bandwidth Rate Limiting

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  In the current QoS implementation, network QoS bandwidth limits are applied
  uniformly to all the ports in the network. This may be wrong: consider a case
  where the user wants 20 Mbps for the network as a whole, but the current
  implementation ends up allowing 20 * num_of_ports_in_the_network Mbps.

  [Proposal]
  This proposal is about support for aggregated bandwidth rate limiting, where
  all the ports in the network together stay within the specified network
  bandwidth limit. To start with, the simplest implementation could divide the
  overall bandwidth value by the number of ports in the network. This can lead
  to over- and under-utilization, which needs more thought (we may have to
  monitor all the ports and use a threshold to decide whether to increase or
  decrease each port's share).

  [Benefits]
  Better and correct user experience.

  [What is the enhancement?]
  Applying the correct QoS rule parameters to each port.

  [Related information]
  None
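
  For concreteness, the naive starting point described in the proposal (purely
  illustrative, not a proposed Neutron interface):

    # Purely illustrative: split an aggregate network limit evenly across the
    # network's ports, as suggested in the proposal above.
    def per_port_limit_kbps(network_limit_kbps, port_count):
        if port_count <= 0:
            return network_limit_kbps
        return network_limit_kbps // port_count

    # e.g. a 20000 kbps (20 Mbps) aggregate across 4 ports -> 5000 kbps each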

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1521194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370033] Re: Admin should be able to manually select the active instance of a HA router

2016-02-11 Thread Armando Migliaccio
Status update:

http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-02-11-22.00.log.html#l-48

** Changed in: neutron
   Status: Triaged => Won't Fix

** Changed in: neutron
 Assignee: Hong Hui Xiao (xiaohhui) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370033

Title:
  Admin should be able to manually select the active instance of a HA
  router

Status in neutron:
  Won't Fix

Bug description:
  The admin can see where the active replica of an HA router is. Once bug
  https://bugs.launchpad.net/neutron/+bug/1401095 is solved, the admin will be
  able to manually move HA routers from one agent to the next. Combining the
  two gives a decent, if not ideal, way to move the master: unschedule it from
  the node hosting the master instance, thereby demoting that instance to a
  backup:

  For example if hosts A and B are hosting router R, and router R is
  active on host A, you can unschedule it from host A, invoking a
  failover and causing B to become the new active replica. You then
  schedule it to host A once more and it'll host the router again, this
  time as standby.

  This RFE is about adding the ability to manually move the master state
  of a router from one agent to another explicitly. I think this can
  only be done with an API modification, or even a new API verb just for
  HA routers. I think that any API modifications need a slim spec and an
  RFE bug is not enough.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544799] [NEW] FWaaS: Pick/install the right sql database to store user and data in each backend

2016-02-11 Thread Madhusudhan Kandadai
Public bug reported:

This is odd when looking at https://github.com/openstack/neutron-
fwaas/blob/master/neutron_fwaas/tests/contrib/gate_hook.sh.

https://jenkins05.openstack.org/job/gate-neutron-fwaas-dsvm-
api/2/consoleFull

2016-02-11 23:04:01.253 | ++ 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L35: 
  mktemp -d
2016-02-11 23:04:01.254 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L35: 
  tmp_dir=/tmp/tmp.Hlllt3DxyO
2016-02-11 23:04:01.254 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L37: 
  cat
2016-02-11 23:04:01.256 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L45: 
  /usr/bin/mysql -u root
2016-02-11 23:04:01.268 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L47: 
  cat
2016-02-11 23:04:01.270 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L52: 
  setfacl -m g:postgres:rwx /tmp/tmp.Hlllt3DxyO
2016-02-11 23:04:01.274 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L53: 
  sudo -u postgres /usr/bin/psql --file=/tmp/tmp.Hlllt3DxyO/postgresql.sql
2016-02-11 23:04:01.364 | Error: You must install at least one 
postgresql-client- package.

Either install the postgresql client so psql can access the database, or
prefer mysql to store the relevant user and data for each backend.

** Affects: neutron
 Importance: Undecided
 Assignee: Madhusudhan Kandadai (madhusudhan-kandadai)
 Status: In Progress


** Tags: fwaas

** Changed in: neutron
 Assignee: (unassigned) => Madhusudhan Kandadai (madhusudhan-kandadai)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544799

Title:
  FWaaS: Pick/install the right sql database to store user and data in
  each backend

Status in neutron:
  In Progress

Bug description:
  This is odd when looking at https://github.com/openstack/neutron-
  fwaas/blob/master/neutron_fwaas/tests/contrib/gate_hook.sh.

  https://jenkins05.openstack.org/job/gate-neutron-fwaas-dsvm-
  api/2/consoleFull

  2016-02-11 23:04:01.253 | ++ 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L35: 
  mktemp -d
  2016-02-11 23:04:01.254 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L35: 
  tmp_dir=/tmp/tmp.Hlllt3DxyO
  2016-02-11 23:04:01.254 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L37: 
  cat
  2016-02-11 23:04:01.256 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L45: 
  /usr/bin/mysql -u root
  2016-02-11 23:04:01.268 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L47: 
  cat
  2016-02-11 23:04:01.270 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L52: 
  setfacl -m g:postgres:rwx /tmp/tmp.Hlllt3DxyO
  2016-02-11 23:04:01.274 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:L53: 
  sudo -u postgres /usr/bin/psql --file=/tmp/tmp.Hlllt3DxyO/postgresql.sql
  2016-02-11 23:04:01.364 | Error: You must install at least one 
postgresql-client- package.

  Either install the postgresql client so psql can access the database, or
  prefer mysql to store the relevant user and data for each backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544801] [NEW] Constant tracebacks with eventlet 0.18.2

2016-02-11 Thread Jesse Keating
Public bug reported:

Kilo builds, with eventlet 0.18.2 have a constant traceback:

2016-02-12 00:47:01.126 3936 DEBUG nova.api.openstack.wsgi [-] Calling method 
'>' _process_stack 
/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:783
2016-02-12 00:47:01.129 3936 INFO nova.osapi_compute.wsgi.server [-] Traceback 
(most recent call last):
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 501, in handle_one_response
write(b''.join(towrite))
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 442, in write
_writelines(towrite)
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 62, in safe_writelines
writeall(fd, item)
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 67, in writeall
fd.write(buf)
  File "/usr/lib/python2.7/socket.py", line 324, in write
self.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
self._sock.sendall(view[write_offset:write_offset+buffer_size])
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 383, in sendall
tail = self.send(data, flags)
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 377, in send
return self._send_loop(self.fd.send, data, flags)
  File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 364, in _send_loop
return send_method(data, *args)
error: [Errno 104] Connection reset by peer

This is happening across nova, neutron, glance, etc.

Dropping back to eventlet < 0.18.0 works.
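
Until the underlying incompatibility is resolved, the last line above
translates into a simple deployment-side pin, for example in a requirements
or constraints file:

    eventlet<0.18.0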

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544801

Title:
  Constant tracebacks with eventlet 0.18.2

Status in OpenStack Compute (nova):
  New

Bug description:
  Kilo builds, with eventlet 0.18.2 have a constant traceback:

  2016-02-12 00:47:01.126 3936 DEBUG nova.api.openstack.wsgi [-] Calling method 
'>' _process_stack 
/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:783
  2016-02-12 00:47:01.129 3936 INFO nova.osapi_compute.wsgi.server [-] 
Traceback (most recent call last):
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 501, in handle_one_response
  write(b''.join(towrite))
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/wsgi.py",
 line 442, in write
  _writelines(towrite)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 62, in safe_writelines
  writeall(fd, item)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/support/__init__.py",
 line 67, in writeall
  fd.write(buf)
File "/usr/lib/python2.7/socket.py", line 324, in write
  self.flush()
File "/usr/lib/python2.7/socket.py", line 303, in flush
  self._sock.sendall(view[write_offset:write_offset+buffer_size])
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 383, in sendall
  tail = self.send(data, flags)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 377, in send
  return self._send_loop(self.fd.send, data, flags)
File 
"/opt/bbc/openstack-11.0-bbc153/nova/local/lib/python2.7/site-packages/eventlet/greenio/base.py",
 line 364, in _send_loop
  return send_method(data, *args)
  error: [Errno 104] Connection reset by peer

  This is happening across nova, neutron, glance, etc.

  Dropping back to eventlet < 0.18.0 works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544469] Re: Use Keystone Service catalog to search endpoints

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274131
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=5062e1bb50b1bbae5f56e2f82f96e6484c12e824
Submitter: Jenkins
Branch:master

commit 5062e1bb50b1bbae5f56e2f82f96e6484c12e824
Author: kairat_kushaev 
Date:   Fri Jan 29 18:53:30 2016 +0300

Use keystoneclient functions to receive endpoint

Use keystoneclient function get_urls from service catalog to
search glance endpoints. So the search logic will be defined
in keystone and glance only works with received endpoints.

Closes-Bug: #1544469
Change-Id: I4b1e92647e594d564005d54e9eec692df32f5980


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1544469

Title:
  Use Keystone Service catalog to search endpoints

Status in Glance:
  Fix Released

Bug description:
  Glance uses a custom function to search for endpoints in the service catalog:
  https://github.com/openstack/glance/blob/master/glance/common/auth.py#L259
  But that functionality is also available in python-keystoneclient:
  https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/service_catalog.py#L352
  So we can reduce code duplication and simply use the logic from
  keystoneclient to search for endpoints.

  P.S. It looks like we need to initialize ServiceCatalog in the request
  context, and probably a separate attribute for ServiceCatalog (deprecating
  the current attribute in the request context), so some additional work is
  needed.
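
  A hedged sketch of the keystoneclient call path the report points at; the
  credential values are placeholders and the get_urls keyword names should be
  verified against the installed python-keystoneclient:

    # Hedged sketch: resolve image-service endpoints through keystoneclient's
    # service catalog instead of Glance's hand-rolled search.
    from keystoneclient.v2_0 import client as ks_client

    keystone = ks_client.Client(username='demo', password='secret',
                                tenant_name='demo',
                                auth_url='http://127.0.0.1:5000/v2.0')
    image_urls = keystone.service_catalog.get_urls(service_type='image',
                                                   endpoint_type='publicURL')
    print(image_urls)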

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1544469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1298460] Re: Add the same security rule shouldn't report "Unable to add rule to security group."

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246275
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=62ccd13a6f5d67aeb1e4845b187d1e3ab82ded59
Submitter: Jenkins
Branch:master

commit 62ccd13a6f5d67aeb1e4845b187d1e3ab82ded59
Author: Itxaka 
Date:   Tue Nov 17 10:24:42 2015 +0100

Try to be more verbose on sec group error

Try to identify if the error is due to the security rule
already existing and return the proper message in that case to
not confuse the user.

Change-Id: I13346611b9d7309f84a5bfba8b69ea5e65d0a02a
Closes-Bug: 1298460


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1298460

Title:
  Add the same security rule shouldn't report "Unable to add rule to
  security group."

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Error information, I think, must clearly show what happened. So if the
  same security rule already exists, the end user should receive a clear
  message telling them they tried to add the rule again.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1298460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537106] Re: Horizon configuration option to enable Config Drive by default

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/271464
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=73bb9a59435c287c6ab364d92a6defacbbcfa680
Submitter: Jenkins
Branch:master

commit 73bb9a59435c287c6ab364d92a6defacbbcfa680
Author: Justin Pomeroy 
Date:   Fri Jan 22 12:28:10 2016 -0600

Allow setting default value for config_drive

This adds a setting that can be used to specify the default value
for the Configuration Drive option when launching an instance.

Closes-Bug: #1537106
Change-Id: If402d3331158b462bece27fa6fce2bdb7f6a4a2e


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1537106

Title:
  Horizon configuration option to enable Config Drive by default

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When launching an instance from Horizon, there is a "Configuration
  Drive" option under the "Advanced Options" tab.  The option is
  disabled by default.  A cloud deployer should be allowed to configure
  Horizon so that this option is enabled by default.

  Background:
  The neutron metadata service does not support IPv6 (see 
https://bugs.launchpad.net/neutron/+bug/1460177).  As a result, IPv6 only 
instances need to use config drive to access their metadata.  Because of this, 
a cloud deployer may want config drive to be enabled by default in cloud 
environments supporting such instances.
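
  With the patch above merged, the knob ends up in local_settings.py. The
  setting name below is an assumption based on that patch; confirm it against
  the Horizon release notes before relying on it.

    # Hedged example for openstack_dashboard/local/local_settings.py
    LAUNCH_INSTANCE_DEFAULTS = {
        'config_drive': True,   # pre-check "Configuration Drive" by default
    }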

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1537106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535991] Re: add breadcrumbs on Network Detail

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/270012
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=d099477de3dafcf5da064c2b46b227bbe6f8440d
Submitter: Jenkins
Branch:master

commit d099477de3dafcf5da064c2b46b227bbe6f8440d
Author: Kenji Ishii 
Date:   Wed Jan 20 14:37:05 2016 +0900

Add breadcrumbs on Network Detail

Same as other pages, this patch makes the Network Detail page display
breadcrumbs.
As far as I know, this is the only remaining place that needs it.

Change-Id: I81b650f34a95d7534f003499a1d486183c00a807
Closes-Bug: #1535991


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1535991

Title:
  add breadcrumbs on Network Detail

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Same as https://review.openstack.org/#/c/254167/, the Network Detail page
  does not yet display breadcrumbs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1535991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544768] [NEW] [RFE] Differentiate between static and floating subnets

2016-02-11 Thread Carl Baldwin
Public bug reported:

I've been thinking about this for a little while now.  There seems to be
something different about floating IP subnets and other (I'll call them
static in this context) subnets in some use cases.

- On an external network where operators wish to use private IPs for router 
ports (and DVR FIP ports)  and public for floating IPs.
- Enable using floating IPs on provider networks without routers [1].  This has 
come up a lot.  In many cases, operators want them to be public while the 
static ones are private.
- On routed networks where VM instance and router ports need IPs from their 
segments but floating IPs can be routed more flexibly.

These boil down to two ways I see to differentiate subnets:

- public vs private
- L2 bound vs routed

We could argue the definitions of public and private but I don't think
that's necessary.  Public could mean globally routable or routable
within some organization.  Private would mean not public.

An L2 bound subnet is one used on a segment where arp is expected to
work.  The opposite type can be routed by some L3 mechanism.

One possible way to make this distinction might be to mark certain
subnets as floating subnets.  The rules, roughly would be as follows:

- When allocating floating IPs, prefer floating subnets.  (fallback to 
non-floating to support backward compatibility?)
- Don't allocate non-floating IP ports from floating subnets.

[1] http://lists.openstack.org/pipermail/openstack-
operators/2016-February/009551.html
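
A toy sketch of the two rules above; the 'is_floating' flag is made up for
illustration and is not part of any existing Neutron data model.

    # Toy sketch of the allocation rules proposed above.
    def candidate_subnets(subnets, for_floating_ip):
        floating = [s for s in subnets if s.get('is_floating')]
        static = [s for s in subnets if not s.get('is_floating')]
        if for_floating_ip:
            # prefer floating subnets, fall back for backward compatibility
            return floating or static
        # never hand out fixed-IP ports from a floating subnet
        return static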

** Affects: neutron
 Importance: Wishlist
 Status: Confirmed


** Tags: l3-ipam-dhcp rfe

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: l3-ipam-dhcp rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544768

Title:
  [RFE] Differentiate between static and floating subnets

Status in neutron:
  Confirmed

Bug description:
  I've been thinking about this for a little while now.  There seems to
  be something different about floating IP subnets and other (I'll call
  them static in this context) subnets in some use cases.

  - On an external network where operators wish to use private IPs for router 
ports (and DVR FIP ports)  and public for floating IPs.
  - Enable using floating IPs on provider networks without routers [1].  This 
has come up a lot.  In many cases, operators want them to be public while the 
static ones are private.
  - On routed networks where VM instance and router ports need IPs from their 
segments but floating IPs can be routed more flexibly.

  These boil down to two ways I see to differentiate subnets:

  - public vs private
  - L2 bound vs routed

  We could argue the definitions of public and private but I don't think
  that's necessary.  Public could mean globally routable or routable
  within some organization.  Private would mean not public.

  An L2 bound subnet is one used on a segment where arp is expected to
  work.  The opposite type can be routed by some L3 mechanism.

  One possible way to make this distinction might be to mark certain
  subnets as floating subnets.  The rules, roughly would be as follows:

  - When allocating floating IPs, prefer floating subnets.  (fallback to 
non-floating to support backward compatibility?)
  - Don't allocate non-floating IP ports from floating subnets.

  [1] http://lists.openstack.org/pipermail/openstack-
  operators/2016-February/009551.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544768/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1442357] Re: Brocade MLX plug-ins config options for switch need a group title

2016-02-11 Thread Angela Smith
** Changed in: neutron/kilo
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1442357

Title:
  Brocade MLX plug-ins config options for switch need a group title

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  In order to create config option reference docs, changes need to be
  made to the INI file and config registrations for Brocade MLX ML2 and
  L3 plug-ins to include a block name for the switch options.  Currently
  there is no block name as the block names are dynamic based on the
  switch_names value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1442357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544744] [NEW] Live migration with attached volume results in ERROR state and fails to rollback when cinder client exception is thrown

2016-02-11 Thread Sujitha
Public bug reported:

During live migration with attached volume, instance is moved to ERROR
state and stuck in task state Migrating when cinder client exception is
thrown.

Steps:
   1. Create a nova instance
   2. Attach a cinder volume
   3. Raise cinderclient exception in initialize_connection() in 
nova/volume/cinder.py 
   4. Live migrate instance to other compute node (on a shared storage setup)

Result:
   * ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API  log if 
possible. 
   * And instance changes to unrecoverable ERROR state and stuck in 
migrating task state.

The error message is expected, but the instance ending up in an unrecoverable
ERROR state should be fixed, in my opinion. It should roll back instead of
moving to ERROR state.
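
For clarity, step 3 above amounts to a temporary fault injection along these
lines; this is a hypothetical snippet used only to reproduce the report, not
a real patch to nova/volume/cinder.py.

    # Hypothetical fault injection for step 3 -- reproduce-only, not a patch.
    from cinderclient import exceptions as cinder_exception

    def initialize_connection(self, context, volume_id, connector):
        raise cinder_exception.ClientException(
            500, "injected failure to exercise live-migration rollback")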

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: live-migration

** Attachment added: "LM-cinderclient-exception.txt"
   
https://bugs.launchpad.net/bugs/1544744/+attachment/4569617/+files/LM-cinderclient-exception.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544744

Title:
  Live migration with attached volume results in ERROR state and fails
  to rollback when cinder client exception is thrown

Status in OpenStack Compute (nova):
  New

Bug description:
  During live migration with attached volume, instance is moved to ERROR
  state and stuck in task state Migrating when cinder client exception
  is thrown.

  Steps:
 1. Create a nova instance
 2. Attach a cinder volume
 3. Raise cinderclient exception in initialize_connection() in 
nova/volume/cinder.py 
 4. Live migrate instance to other compute node (on a shared storage setup)

  Result:
 * ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API  log if 
possible. 
 * And instance changes to unrecoverable ERROR state and stuck in 
migrating task state.

  The error message is expected, but the instance ending up in an
  unrecoverable ERROR state should be fixed, in my opinion. It should roll
  back instead of moving to ERROR state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544729] [NEW] No grenade coverage for neutron-lbaas/octavia

2016-02-11 Thread Doug Wiegley
Public bug reported:

Stock neutron grenade no longer covers this, so we need a grenade plugin
for neutron-lbaas.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544729

Title:
  No grenade coverage for neutron-lbaas/octavia

Status in neutron:
  New

Bug description:
  Stock neutron grenade no longer covers this, so we need a grenade
  plugin for neutron-lbaas.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544729/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544721] [NEW] Policy for listing service providers requires admin

2016-02-11 Thread Kristi Nikolla
Public bug reported:

When creating a v3 keystoneclient using non-admin credentials I'm able
to get the list of service providers from the service catalog, but the
policy doesn't allow listing or getting service providers by default.

>>> ksclient2.service_catalog.catalog[u'service_providers']
[{u'sp_url': u'http://xxx.xxx.xxx.xxx:5000/Shibboleth.sso/SAML2/ECP', 
u'auth_url': 
u'http://xxx.xxx.xxx.xxx:35357/v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2/auth',
 u'id': u'keystone-sp'}]

>>> ksclient2.federation.service_providers.list()
Traceback (most recent call last):
  File "", line 1, in 
  File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/v3/contrib/federation/service_providers.py",
 line 76, in list
return super(ServiceProviderManager, self).list(**kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneclient/base.py", line 
75, in func
return f(*args, **new_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneclient/base.py", line 
388, in list
self.collection_key)
  File "/usr/local/lib/python2.7/dist-packages/keystoneclient/base.py", line 
124, in _list
resp, body = self.client.get(url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 
170, in get
return self.request(url, 'GET', **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 
206, in request
resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", line 
95, in request
return self.session.request(url, method, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneclient/utils.py", line 
337, in inner
return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py", line 
405, in request
raise exceptions.from_response(resp, method, url)
keystoneauth1.exceptions.http.Forbidden: You are not authorized to perform the 
requested action: identity:list_service_providers (Disable debug mode to 
suppress these details.) (HTTP 403) (Request-ID: 
req-485c64e6-5de1-4470-9439-e05275a350fa)
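
The 403 comes from keystone's policy check on identity:list_service_providers,
which (as far as I recall; verify against your deployment's policy.json)
defaults to admin-only. A deployment that wants regular users to list service
providers could relax that entry, for example:

    "identity:list_service_providers": ""

where an empty rule in oslo.policy allows any authenticated caller.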

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1544721

Title:
  Policy for listing service providers requires admin

Status in OpenStack Identity (keystone):
  New

Bug description:
  When creating a v3 keystoneclient using non-admin credentials I'm able
  to get the list of service providers from the service catalog, but the
  policy doesn't allow listing or getting service providers by default.

  >>> ksclient2.service_catalog.catalog[u'service_providers']
  [{u'sp_url': u'http://xxx.xxx.xxx.xxx:5000/Shibboleth.sso/SAML2/ECP', 
u'auth_url': 
u'http://xxx.xxx.xxx.xxx:35357/v3/OS-FEDERATION/identity_providers/keystone-idp/protocols/saml2/auth',
 u'id': u'keystone-sp'}]

  >>> ksclient2.federation.service_providers.list()
  Traceback (most recent call last):
File "", line 1, in 
File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/v3/contrib/federation/service_providers.py",
 line 76, in list
  return super(ServiceProviderManager, self).list(**kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/base.py", line 
75, in func
  return f(*args, **new_kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/base.py", line 
388, in list
  self.collection_key)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/base.py", line 
124, in _list
  resp, body = self.client.get(url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", 
line 170, in get
  return self.request(url, 'GET', **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", 
line 206, in request
  resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py", 
line 95, in request
  return self.session.request(url, method, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/utils.py", line 
337, in inner
  return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py", 
line 405, in request
  raise exceptions.from_response(resp, method, url)
  keystoneauth1.exceptions.http.Forbidden: You are not authorized to perform 
the requested action: identity:list_service_providers (Disable debug mode to 
suppress these details.) (HTTP 403) (Request-ID: 
req-485c64e6-5de1-4470-9439-e05275a350fa)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1544721/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-te

[Yahoo-eng-team] [Bug 1544720] [NEW] In response template processing, some values are set to u'' instead of None

2016-02-11 Thread Augustina Ragwitz
Public bug reported:

I discovered this bug while refactoring functional tests. This happens
in the current Nova master.

Update the following files:
- 
nova/tests/functional/api_sample_tests/api_samples/os-extended-server-attributes/v2.16/server-get-resp.json.tpl
- nova/doc/api_samples/os-extended-server-attributes/v2.16/server-get-resp.json

Set the field "OS-EXT-SRV-ATTR:kernel_id" from "null" to a value.
Run the functional tests (or just
nova/tests/functional/api_sample_tests/test_extended_server_attributes.py) and
you'll see that the value you set for the response has been replaced with u''.

Not sure if this is in nova itself or just the test framework, more
research needed.

I'm creating this bug to make sure I don't forget to come back and
troubleshoot this issue.

** Affects: nova
 Importance: Medium
 Assignee: Augustina Ragwitz (auggy)
 Status: Confirmed


** Tags: api testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544720

Title:
  In response template processing, some values are set to u'' instead of
  None

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  I discovered this bug while refactoring functional tests. This happens
  in the current Nova master.

  Update the following files:
  - 
nova/tests/functional/api_sample_tests/api_samples/os-extended-server-attributes/v2.16/server-get-resp.json.tpl
  - 
nova/doc/api_samples/os-extended-server-attributes/v2.16/server-get-resp.json

  Set the field "OS-EXT-SRV-ATTR:kernel_id" from "null" to a value.
  Run the functional tests (or just
nova/tests/functional/api_sample_tests/test_extended_server_attributes.py) and
you'll see that the value you set for the response has been replaced with u''.

  Not sure if this is in nova itself or just the test framework, more
  research needed.

  I'm creating this bug to make sure I don't forget to come back and
  troubleshoot this issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537929] Re: Pecan: controller lookup for resources with dashes fails

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/272313
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=7c24fa111e0ea0b656858c7f54e55a1eac5fc2d3
Submitter: Jenkins
Branch:master

commit 7c24fa111e0ea0b656858c7f54e55a1eac5fc2d3
Author: Salvatore Orlando 
Date:   Mon Jan 25 15:01:34 2016 -0800

Pecan: wrap PUT response with resource name

For no peculiar reason other than that's the way the API is
supposed to work.

Change-Id: Ifd6f8b492dfc86c069a4f33931235c7f3d8e7c2e
Closes-Bug: #1537929


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537929

Title:
  Pecan: controller lookup for resources with dashes fails

Status in neutron:
  Fix Released

Bug description:
  The controller lookup process is unable to route the request to any resource 
with a dash in it.
  This happens because the controllers are stored according to their resource 
name, where dashes are replaced by underscores, and the pecan lookup process, 
quite dumbly, does not perform this simple conversion.

  The author of the code in question should consider an alternative
  career path far away from programming and engineering in general.
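
  The mismatch in miniature (illustrative only, not the actual Neutron pecan
  hook code):

    # Controllers are registered under underscored names, so the URI collection
    # must be normalized before lookup; a raw dict lookup of 'security-groups'
    # would miss.
    controllers = {'security_groups': object()}   # hypothetical registry

    def find_controller(collection):
        return controllers.get(collection.replace('-', '_'))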

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522135] Re: Add neutron extensions to angular cloud services

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252597
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=85973b7e907a41352d6aa8fe7dbc0f4afb89bdbc
Submitter: Jenkins
Branch:master

commit 85973b7e907a41352d6aa8fe7dbc0f4afb89bdbc
Author: Paulo Ewerton 
Date:   Wed Dec 2 15:08:02 2015 +

Adding hz-if-neutron-extensions directive

We should add neutron extensions to angular cloud services
in a similar fashion as that of cinder and nova services.

Change-Id: I184f520568a807ae5948e73eed33023787b204d8
Closes-Bug: 1522135


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1522135

Title:
  Add neutron extensions to angular cloud services

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We should add neutron extensions to angular cloud services in a
  similar fashion as that of cinder and nova services. One application
  of this would be the new angular identity projects panel, which
  includes a neutron quota workflow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1522135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460164] Re: restart of openvswitch-switch causes instance network down when l2population enabled

2016-02-11 Thread James Page
** Description changed:

+ [Impact]
+ Restarts of openvswitch (typically on upgrade) result in loss of tunnel 
connectivity when the l2population driver is in use.  This results in loss of 
access to all instances on the affected compute hosts.
+ 
+ [Test Case]
+ Deploy cloud with ml2/ovs/l2population enabled
+ boot instances
+ restart ovs; instance connectivity will be lost until the 
neutron-openvswitch-agent is restarted on the compute hosts.
+ 
+ [Regression Potential]
+ Minimal - in multiple stable branches upstream.
+ 
+ [Original Bug Report]
  On 2015-05-28, our Landscape auto-upgraded packages on two of our
  OpenStack clouds.  On both clouds, but only on some compute nodes, the
  upgrade of openvswitch-switch and corresponding downtime of
  ovs-vswitchd appears to have triggered some sort of race condition
  within neutron-plugin-openvswitch-agent leaving it in a broken state;
  any new instances come up with non-functional network but pre-existing
  instances appear unaffected.  Restarting n-p-ovs-agent on the affected
  compute nodes is sufficient to work around the problem.
  
  The packages Landscape upgraded (from /var/log/apt/history.log):
  
  Start-Date: 2015-05-28  14:23:07
  Upgrade: nova-compute-libvirt:amd64 (2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), 
libsystemd-login0:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
nova-compute-kvm:amd64 (2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), 
systemd-services:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
isc-dhcp-common:amd64 (4.2.4-7ubuntu12.1, 4.2.4-7ubuntu12.2), nova-common:amd64 
(2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), python-nova:amd64 (2014.1.4-0ubuntu2, 
2014.1.4-0ubuntu2.1), libsystemd-daemon0:amd64 (204-5ubuntu20.11, 
204-5ubuntu20.12), grub-common:amd64 (2.02~beta2-9ubuntu1.1, 
2.02~beta2-9ubuntu1.2), libpam-systemd:amd64 (204-5ubuntu20.11, 
204-5ubuntu20.12), udev:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
grub2-common:amd64 (2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), 
openvswitch-switch:amd64 (2.0.2-0ubuntu0.14.04.1, 2.0.2-0ubuntu0.14.04.2), 
libudev1:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), isc-dhcp-client:amd64 
(4.2.4-7ubuntu12.1, 4.2.4-7ubuntu12.2), python-eventlet:amd64 (0.13.0-1ubuntu2, 
0.13.0-1ubuntu
 2.1), python-novaclient:amd64 (2.17.0-0ubuntu1.1, 2.17.0-0ubuntu1.2), 
grub-pc-bin:amd64 (2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), grub-pc:amd64 
(2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), nova-compute:amd64 
(2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), openvswitch-common:amd64 
(2.0.2-0ubuntu0.14.04.1, 2.0.2-0ubuntu0.14.04.2)
  End-Date: 2015-05-28  14:24:47
  
  From /var/log/neutron/openvswitch-agent.log:
  
  2015-05-28 14:24:18.336 47866 ERROR neutron.agent.linux.ovsdb_monitor
  [-] Error received from ovsdb monitor: ovsdb-client:
  unix:/var/run/openvswitch/db.sock: receive failed (End of file)
  
  Looking at a stuck instances, all the right tunnels and bridges and
  what not appear to be there:
  
  root@vector:~# ip l l | grep c-3b
- 460002: qbr7ed8b59c-3b:  mtu 1500 qdisc 
noqueue state UP mode DEFAULT group default 
+ 460002: qbr7ed8b59c-3b:  mtu 1500 qdisc 
noqueue state UP mode DEFAULT group default
  460003: qvo7ed8b59c-3b:  mtu 1500 
qdisc pfifo_fast master ovs-system state UP mode DEFAULT group default qlen 1000
  460004: qvb7ed8b59c-3b:  mtu 1500 
qdisc pfifo_fast master qbr7ed8b59c-3b state UP mode DEFAULT group default qlen 
1000
  460005: tap7ed8b59c-3b:  mtu 1500 qdisc 
pfifo_fast master qbr7ed8b59c-3b state UNKNOWN mode DEFAULT group default qlen 
500
  root@vector:~# ovs-vsctl list-ports br-int | grep c-3b
  qvo7ed8b59c-3b
- root@vector:~# 
+ root@vector:~#
  
  But I can't ping the unit from within the qrouter-${id} namespace on
  the neutron gateway.  If I tcpdump the {q,t}*c-3b interfaces, I don't
  see any traffic.

** Changed in: neutron (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: neutron (Ubuntu Trusty)
 Assignee: (unassigned) => James Page (james-page)

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/juno
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/kilo
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/kilo
   Importance: Undecided => Medium

** Changed in: cloud-archive/juno
   Importance: Undecided => Medium

** Changed in: neutron (Ubuntu Trusty)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460164

Title:
  restart of openvswitch-switch causes instance network down when
  l2population enabled

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive juno series:
  New
Status in Ubuntu Cloud Archive kilo series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  In Progress
Status in neutron source package i

[Yahoo-eng-team] [Bug 1544703] [NEW] webSSO URLs may not be accessible under some network configurations

2016-02-11 Thread Steve McLellan
Public bug reported:

WebSSO uses OPENSTACK_KEYSTONE_URL to generate URLs to point a browser
at. Under many configurations this is fine, but in setups where there
may be multiple networks, it can be problematic. For instance, if
horizon is configured to talk to keystone over a network that is
private, OPENSTACK_KEYSTONE_URL will not be reachable from a browser. A
fuller explanation is in https://blueprints.launchpad.net/horizon/+spec
/configurable-websso-keystone-url but this seems more like a bug than a
feature. The upshot is adding a second setting to allow a separate
WEBSSO keystone url.
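
A minimal sketch of what the split could look like in Horizon's
local_settings.py; the WEBSSO_KEYSTONE_URL name and both endpoints below
are placeholders for the proposed second setting, not something Horizon
reads today:

# local_settings.py -- illustrative values only
OPENSTACK_KEYSTONE_URL = "http://10.0.0.10:5000/v3"            # internal API network
WEBSSO_KEYSTONE_URL = "https://keystone.example.com:5000/v3"   # reachable from browsers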

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544703

Title:
  webSSO URLs may not be accessible under some network configurations

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  WebSSO uses OPENSTACK_KEYSTONE_URL to generate URLs to point a browser
  at. Under many configurations this is fine, but in setups where there
  may be multiple networks, it can be problematic. For instance, if
  horizon is configured to talk to keystone over a network that is
  private, OPENSTACK_KEYSTONE_URL will not be reachable from a browser.
  A fuller explanation is in
  https://blueprints.launchpad.net/horizon/+spec/configurable-websso-
  keystone-url but this seems more like a bug than a feature. The upshot
  is adding a second setting to allow a separate WEBSSO keystone url.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522381] Re: Support for addition of compute node into the running environment

2016-02-11 Thread Sean Dague
I'm not really sure why this is filed against nova.

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522381

Title:
  Support for addition of compute node into the running environment

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Summary
  ===

  - To provide support for adding a new compute node through templates.

  Motivation
  ==

  - While working on a multi-node network, when we need more computation
  power it is difficult to add a compute node; we always have to do it
  manually. So we wish to provide a mechanism to do it through templates
  that can be run from the controller without manual intervention by the
  user.

  Description
  ===

  - A new feature will be added in Horizon which will allow adding a new
  compute node through templates from the controller.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1522381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523175] Re: "nova evacuate" does not tell cinder to do "removehlu"

2016-02-11 Thread Sean Dague
evacuate assumes you've done all the fencing and cleanup of the node
itself, this is by design.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523175

Title:
  ""nova evacuate""  does not tell cinder to do  ""removehlu""

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  [Background]

  When I did:

   ""nova live-migration""

  It tells the Cinder to do:

   1. addhlu to add HLU on destination host
   2. removehlu to delete HLU on source host

  But when I did:

   ""nova evacuate""

  It only tells the Cinder to do:

   1. addhlu to add HLU on destination host

  Even when the source compute node is completely down (powered off), the
  HLU on the source compute node is still visible.

  Evacuation itself succeeds. I'm not quite sure whether this behavior is
  by design or not.

  [Reproduce]
   * Pre-requirement: Cinder is using EMC VNX storage as the backend.

  ""nova boot"" to create an instance, called ""ins-A"" (for instance)
  Power-off compute node which  ""ins-A"" were running on.
  ""nova evacuate"" to evacuate ""ins-A"" to another compute node.

  [Expected Behavior]
   Add HLU on destination compute node
   Remove HLU from source compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1523175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482633] Re: requests to SSL wrapped sockets hang while reading using py3

2016-02-11 Thread Sean Dague
Is there an upstream eventlet bug for this?

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1482633

Title:
  requests to SSL wrapped sockets hang while reading using py3

Status in Manila:
  New
Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.service:
  New

Bug description:
  If we run unit tests using py3 then we get the following errors:

  ==
  FAIL: manila.tests.test_wsgi.TestWSGIServer.test_app_using_ssl
  tags: worker-0
  --
  Empty attachments:
pythonlogging:''
stdout

  stderr: {{{
  Traceback (most recent call last):
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/hubs/hub.py",
 line 457, in fire_timers
  timer()
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/hubs/timer.py",
 line 58, in __call__
  cb(*args, **kw)
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/greenthread.py",
 line 214, in main
  result = function(*args, **kwargs)
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/wsgi.py",
 line 823, in server
  client_socket = sock.accept()
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/green/ssl.py",
 line 333, in accept
  suppress_ragged_eofs=self.suppress_ragged_eofs)
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/green/ssl.py",
 line 88, in __init__
  self.do_handshake()
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/green/ssl.py",
 line 241, in do_handshake
  super(GreenSSLSocket, self).do_handshake)
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/green/ssl.py",
 line 106, in _call_trampolining
  return func(*a, **kw)
File "/usr/lib/python3.4/ssl.py", line 805, in do_handshake
  self._sslobj.do_handshake()
  ssl.SSLWantReadError: The operation did not complete (read) (_ssl.c:598)
  }}}

  Traceback (most recent call last):
File 
"/home/vponomaryov/Documents/python/projects/manila/manila/tests/test_wsgi.py", 
line 181, in test_app_using_ssl
  'https://127.0.0.1:%d/' % server.port)
File "/usr/lib/python3.4/urllib/request.py", line 153, in urlopen
  return opener.open(url, data, timeout)
File "/usr/lib/python3.4/urllib/request.py", line 455, in open
  response = self._open(req, data)
File "/usr/lib/python3.4/urllib/request.py", line 473, in _open
  '_open', req)
File "/usr/lib/python3.4/urllib/request.py", line 433, in _call_chain
  result = func(*args)
File "/usr/lib/python3.4/urllib/request.py", line 1273, in https_open
  context=self._context, check_hostname=self._check_hostname)
File "/usr/lib/python3.4/urllib/request.py", line 1232, in do_open
  h.request(req.get_method(), req.selector, req.data, headers)
File "/usr/lib/python3.4/http/client.py", line 1065, in request
  self._send_request(method, url, body, headers)
File "/usr/lib/python3.4/http/client.py", line 1103, in _send_request
  self.endheaders(body)
File "/usr/lib/python3.4/http/client.py", line 1061, in endheaders
  self._send_output(message_body)
File "/usr/lib/python3.4/http/client.py", line 906, in _send_output
  self.send(msg)
File "/usr/lib/python3.4/http/client.py", line 841, in send
  self.connect()
File "/usr/lib/python3.4/http/client.py", line 1205, in connect
  server_hostname=server_hostname)
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/green/ssl.py",
 line 362, in _green_sslcontext_wrap_socket
  return GreenSSLSocket(sock, *a, _context=self, **kw)
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/green/ssl.py",
 line 88, in __init__
  self.do_handshake()
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/green/ssl.py",
 line 241, in do_handshake
  super(GreenSSLSocket, self).do_handshake)
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/green/ssl.py",
 line 116, in _call_trampolining
  timeout_exc=timeout_exc('timed out'))
File 
"/home/vponomaryov/Documents/python/projects/manila/.tox/py34/lib/python3.4/site-packages/eventlet/hubs/__init__.py",
 line 162, in trampoline
  return hub.switch()
Fi

[Yahoo-eng-team] [Bug 1523742] Re: illegal video driver for PPC64 little endian system

2016-02-11 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523742

Title:
  illegal video driver for PPC64 little endian system

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  Confirmed

Bug description:
  For the OpenStack Kilo version, when creating an instance, libvirt creates
  the cirrus video type in the template XML file, which is not supported on
  PPC64 little endian systems. I debugged the code and finally found the
  mistake in the function _add_video_driver of nova/virt/libvirt/driver.py.
  In that function, the video driver is determined by the guest arch: if the
  arch is in (PPC, PPC64) then vga is returned, otherwise the video driver is
  determined by other options. For a PPC64 little endian system the guest arch
  is PPC64LE, so the video driver is determined by the other options (in our
  environment, with the kvm virt type and spice disabled, the video driver is
  determined by hw_video_model), which makes the video driver cirrus. An
  exception happens when creating the VM instance because the cirrus video
  driver is not supported on POWER hardware.
  I added the PPC64LE arch to the guestarch check and it works. The patch
  will be attached.
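
  As a standalone illustration of the selection logic described above
  (constant names are approximate; this is not the nova source or the
  attached patch):

  PPC_LIKE = ('ppc', 'ppc64', 'ppc64le')    # 'ppc64le' is the addition

  def pick_video_model(guestarch, requested_model=None):
      """Return a video model the guest architecture actually supports."""
      if guestarch.lower() in PPC_LIKE:
          return 'vga'                      # cirrus is not supported on POWER
      return requested_model or 'cirrus'    # previous default path

  assert pick_video_model('ppc64le') == 'vga'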

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1523742/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531930] Re: SQLalchemy API crashes executing migration_get_unconfirmed_by_dest_compute

2016-02-11 Thread Sean Dague
This was released in oslo.db 4.1.0.

Something else must be wrong with your environment for this to be an
issue, because it's definitely in 4.2.
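
One quick way to confirm which oslo.db the failing environment actually
imports (pkg_resources ships with setuptools):

import pkg_resources
print(pkg_resources.get_distribution('oslo.db').version)   # expect >= 4.1.0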

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1531930

Title:
  SQLalchemy API crashes executing
  migration_get_unconfirmed_by_dest_compute

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The function migration_get_unconfirmed_by_dest_compute has an error in
  the decorator function:

  Traceback (most recent call last):
File "/usr/share/java/pycharm-community/helpers/pycharm/utrunner.py", line 
120, in 
  modules = [loadSource(a[0])]
File "/usr/share/java/pycharm-community/helpers/pycharm/utrunner.py", line 
41, in loadSource
  module = imp.load_source(moduleName, fileName)
File 
"/opt/stack/nova/nova/tests/unit/scheduler/filters/test_type_filters.py", line 
17, in 
  from nova import test
File "/opt/stack/nova/nova/test.py", line 51, in 
  from nova.tests import fixtures as nova_fixtures
File "/opt/stack/nova/nova/tests/fixtures.py", line 29, in 
  from nova.db import migration
File "/opt/stack/nova/nova/db/migration.py", line 19, in 
  from nova.db.sqlalchemy import migration
File "/opt/stack/nova/nova/db/sqlalchemy/migration.py", line 27, in 
  from nova.db.sqlalchemy import api as db_session
File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 4526, in 
  @main_context_manager.reader.allow_async
  AttributeError: '_TransactionContextManager' object has no attribute 
'allow_async'

  
  The object reader, in the sqlalchemy engine facade, in oslo_db library, 
doesn't yet have the attribute "allow_async".

  oslo.db==4.2.0 (latest)

  The code with the "allow_async" function is not available yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1531930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536050] Re: Error: ImageMetaProps object has no attribute 'ssd'

2016-02-11 Thread Sean Dague
Yes, that is by design

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: Opinion => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536050

Title:
  Error: ImageMetaProps object has no attribute 'ssd'

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  When using the AggregateImagePropertiesIsolation scheduler filter, there is:
  Error: ImageMetaProps object has no attribute 'ssd'
  Step 1: create an aggregate
  nova aggregate-create ssd-agg nova
  nova aggregate-set-metadata ssd-agg ssd=true
  nova aggregate-add-host ssd-agg host-2
  Step 2: add ssd image metadata
  nova image-meta 28565806-241c-43cf-b096-666721748004 set ssd=true
  Step 3: add AggregateImagePropertiesIsolation to the scheduler filters in nova.conf
  Step 4: boot an instance

  Actual result:
   Error: ImageMetaProps object has no attribute 'ssd'

  Environment:
  master branch code

  Looks like a host aggregate can have arbitrary metadata but image_props
  only has a limited set of properties.
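
  A standalone illustration of that mismatch (class and field names below
  are made up, not nova code): aggregate metadata is a free-form dict, while
  image properties are loaded into an object with a fixed schema, so an
  arbitrary key such as 'ssd' has nowhere to land:

  class ImageMetaPropsSketch(object):
      __slots__ = ('hw_video_model', 'hw_disk_bus')   # fixed schema, no 'ssd'

  aggregate_metadata = {'ssd': 'true'}                # free-form, accepts anything

  props = ImageMetaPropsSketch()
  try:
      props.ssd = 'true'
  except AttributeError as exc:
      print(exc)   # mirrors "object has no attribute 'ssd'"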

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1536050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537076] Re: Timed out waiting for Nova hypervisor-stats count >= 1 due to Nova Unable to establish connection to http://127.0.0.1:35357/v2.0/tokens

2016-02-11 Thread Sean Dague
It seems really weird that keystone just stops working. That is the
thing that really needs to be sorted out.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1537076

Title:
  Timed out waiting for Nova hypervisor-stats count >= 1 due to Nova
  Unable to establish connection to http://127.0.0.1:35357/v2.0/tokens

Status in Ironic:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  e.g. seems like http://logs.openstack.org/61/246161/12/check/gate-
  tempest-dsvm-ironic-pxe_ipa-
  ipxe/169b905/logs/screen-n-cpu.txt.gz?#_2016-01-22_09_01_00_240 causes
  http://logs.openstack.org/61/246161/12/check/gate-tempest-dsvm-ironic-
  pxe_ipa-ipxe/169b905/logs/devstacklog.txt.gz#_2016-01-22_09_08_19_910

  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 
115, in wait
  listener.cb(fileno)
File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 
214, in main
  result = function(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 
671, in run_service
  service.start()
File "/opt/stack/new/nova/nova/service.py", line 198, in start
  self.manager.pre_start_hook()
File "/opt/stack/new/nova/nova/compute/manager.py", line 1340, in 
pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
File "/opt/stack/new/nova/nova/compute/manager.py", line 6290, in 
update_available_resource
  nodenames = set(self.driver.get_available_nodes())
File "/opt/stack/new/nova/nova/virt/ironic/driver.py", line 554, in 
get_available_nodes
  self._refresh_cache()
File "/opt/stack/new/nova/nova/virt/ironic/driver.py", line 537, in 
_refresh_cache
  for node in self._get_node_list(detail=True, limit=0):
File "/opt/stack/new/nova/nova/virt/ironic/driver.py", line 476, in 
_get_node_list
  node_list = self.ironicclient.call("node.list", **kwargs)
File "/opt/stack/new/nova/nova/virt/ironic/client_wrapper.py", line 136, in 
call
  client = self._get_client(retry_on_conflict=retry_on_conflict)
File "/opt/stack/new/nova/nova/virt/ironic/client_wrapper.py", line 86, in 
_get_client
  cli = ironic.client.get_client(CONF.ironic.api_version, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/ironicclient/client.py", line 
86, in get_client
  _ksclient = _get_ksclient(**ks_kwargs)
File "/usr/local/lib/python2.7/dist-packages/ironicclient/client.py", line 
35, in _get_ksclient
  insecure=kwargs.get('insecure'))
File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/v2_0/client.py", line 
166, in __init__
  self.authenticate()
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/utils.py", line 
337, in inner
  return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/keystoneclient/httpclient.py", 
line 589, in authenticate
  resp = self.get_raw_token_from_identity_service(**kwargs)
File 
"/usr/local/lib/python2.7/dist-packages/keystoneclient/v2_0/client.py", line 
210, in get_raw_token_from_identity_service
  _("Authorization Failed: %s") % e)
  AuthorizationFailure: Authorization Failed: Unable to establish connection to 
http://127.0.0.1:35357/v2.0/tokens

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1537076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538204] Re: Failed to stop nova-api in grenade tests

2016-02-11 Thread Sean Dague
This is definitely a core oslo.service issue with shutting down, this
keeps tripping us up.

** Changed in: oslo.service
   Status: New => Confirmed

** Changed in: oslo.service
   Importance: Undecided => Critical

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538204

Title:
  Failed to stop nova-api in grenade tests

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.service:
  Confirmed

Bug description:
  Saw this during a grenade run:

  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 143, in 
clear
  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup for sig in 
self._signal_handlers:
  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup RuntimeError: 
dictionary changed size during iteration

  (From http://logs.openstack.org/25/272425/1/gate/gate-grenade-dsvm-
  heat/b32eda2/).

  May be due to a change in oslo, but it's in the old process so I'm not
  sure it ought to use it.
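
  For context, the usual fix for this class of RuntimeError is to iterate
  over a snapshot of the keys, since a signal handler or another green
  thread may mutate the dict mid-loop; a minimal sketch, not the
  oslo.service code itself:

  signal_handlers = {1: 'SIGHUP', 15: 'SIGTERM'}

  # Fragile: raises RuntimeError if the dict changes during the loop.
  # for sig in signal_handlers: signal_handlers.pop(sig)

  # Safer: snapshot the keys first, then mutate freely.
  for sig in list(signal_handlers):
      signal_handlers.pop(sig, None)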

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542303] Re: When using realtime guests we should avoid using QGA

2016-02-11 Thread Sean Dague
If you are creating a tracking bug for yourself, please move to
confirmed

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542303

Title:
  When using realtime guests we should avoid using QGA

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When running realtime guests we should aim for very minimal hardware
  support in the guest, and so disable support for the QEMU guest agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522008] Re: Nova delete instance fails if cleaning is not async

2016-02-11 Thread Sean Dague
It is not clear that this is a nova issue rather than a wholly Ironic issue.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522008

Title:
  Nova delete instance fails if cleaning is not async

Status in Ironic:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When node cleaning is not async (the conductor starts cleaning immediately
  in do_node_tear_down()), Nova's instance delete fails. Nova tries to update
  the Ironic port (remove the vif id), but the conductor releases the lock
  only after the cleaning thread has completed. Because cleaning is usually
  not a quick action, Nova fails after retries. This bug can affect any
  out-of-tree driver; it was reproduced on the Ansible deploy driver PoC (
  https://review.openstack.org/#/c/238183/ ).

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1522008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544676] [NEW] [RFE] Support for multiple L2 agents on a host

2016-02-11 Thread Jason Niesz
Public bug reported:

[Description]
Currently it is not possible to run multiple L2 agents on a host without them 
conflicting with each other.  For example, if I set up and configure Linux 
bridge and Open vSwitch agents on a compute host both agents will try to 
enslave the tap interface of the instance resulting in a conflict.  

[Proposed Change]
Add a mechanism to associate a network to an L2 agent type.  When a new network 
is provisioned it would get associated with an L2 agent type.  When a new 
instance is launched on that network only the appropriate L2 agent would get 
called to enslave the tap interface of the instance.  

[Reason for Change]
Having the capability to run multiple L2 agents on a host and associating 
networks to an agent type would allow for in place migrations between different 
networking scenarios.  An example of this would be migrating from provider 
networking with Linux bridge to DVR with OVS.  With this new capability, I 
could configure and spin up OVS agents across all my compute hosts and 
provision new networks associated with the OVS agent type.  I could then 
migrate instances over from provider networking to DVR with OVS.  The current 
option for migration forces me to have a separate set of dedicated compute 
hosts for migrating between different networking scenarios.

There could also be other reasons to support multiple L2 agents, such as
performance or functionality.  It might make sense to back one network
with OVS and another with Linux bridge.  While I gave Linux bridge and
OVS as examples, the number of L2 agents could also include possible
future options, such as Cisco Vector Packet Processing (VPP).
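
To make the [Proposed Change] above concrete, a purely hypothetical sketch
(the 'l2_agent_type' attribute and the dispatch below do not exist in
Neutron today) of how a per-network agent type could decide which agent
plugs the tap device:

# Hypothetical illustration only, not real Neutron code.
AGENTS = {'linuxbridge': 'LB agent', 'openvswitch': 'OVS agent'}

def agent_for(network):
    """Pick the single L2 agent allowed to enslave taps on this network."""
    return AGENTS[network.get('l2_agent_type', 'linuxbridge')]

network = {'id': 'net-1', 'l2_agent_type': 'openvswitch'}   # assumed new attribute
print(agent_for(network))   # only the OVS agent wires up this network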

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544676

Title:
  [RFE] Support for multiple L2 agents on a host

Status in neutron:
  New

Bug description:
  [Description]
  Currently it is not possible to run multiple L2 agents on a host without them 
conflicting with each other.  For example, if I set up and configure Linux 
bridge and Open vSwitch agents on a compute host both agents will try to 
enslave the tap interface of the instance resulting in a conflict.  

  [Proposed Change]
  Add a mechanism to associate a network to an L2 agent type.  When a new 
network is provisioned it would get associated with an L2 agent type.  When a 
new instance is launched on that network only the appropriate L2 agent would 
get called to enslave the tap interface of the instance.  

  [Reason for Change]
  Having the capability to run multiple L2 agents on a host and associating 
networks to an agent type would allow for in place migrations between different 
networking scenarios.  An example of this would be migrating from provider 
networking with Linux bridge to DVR with OVS.  With this new capability, I 
could configure and spin up OVS agents across all my compute hosts and 
provision new networks associated with the OVS agent type.  I could then 
migrate instances over from provider networking to DVR with OVS.  The current 
option for migration forces me to have a separate set of dedicated compute 
hosts for migrating between different networking scenarios.

  There could also be other reasons to support multiple L2 agents, such
  as performance or functionality.  It might make sense to back one
  network with OVS and another with Linux bridge.  While I gave Linux
  bridge and OVS as examples, the number of L2 agents could also include
  possible future options, such as Cisco Vector Packet Processing (VPP).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543169] Re: Nova os-volume-types endpoint doesn't exist

2016-02-11 Thread Sean Dague
It's not there in 2.0 either. The help text is super out of whack saying
that volumes have 'gpu' associated with them.

I just don't think this ever existed.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543169

Title:
  Nova os-volume-types endpoint doesn't exist

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-api-site:
  Confirmed

Bug description:
  The Nova v2.1 documentation shows an endpoint "os-volume-types" which
  lists the available volume types. http://developer.openstack.org/api-
  ref-compute-v2.1.html#listVolumeTypes

  I am using OpenStack Liberty and that endpoint doesn't appear to exist
  anymore. GET requests sent to /v2.1/​{tenant_id}​/os-volume-types
  return 404 not found. When I searched the Nova codebase on GitHub, I
  could only find a reference to volume types in the policy.json but not
  implemented anywhere.

  Does this endpoint still exist, and if so what is the appropriate
  documentation?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542486] Re: nova-compute stack traces with BadRequest: Specifying 'tenant_id' other than authenticated tenant in request requires admin privileges

2016-02-11 Thread Sean Dague
These changes all come from keystone libraries. Nova doesn't do any
registration of these config variables itself. Please poke the keystone
auth folks about this.

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542486

Title:
  nova-compute stack traces with BadRequest: Specifying 'tenant_id'
  other than authenticated tenant in request requires admin privileges

Status in OpenStack Identity (keystone):
  New
Status in OpenStack Compute (nova):
  Incomplete
Status in puppet-nova:
  New

Bug description:
  The puppet-openstack-integration tests (rebased on
  https://review.openstack.org/#/c/276773/ ) currently fail on the
  latest version of RDO Mitaka (delorean current) due to what seems to
  be a problem with the neutron configuration.

  Everything installs fine but tempest fails:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/console.html#_2016-02-05_20_26_35_569

  And there are stack traces in nova-compute.log:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/logs/nova/nova-compute.txt.gz#_2016-02-05_20_22_16_151

  I talked with #openstack-nova and they pointed out a difference between what 
devstack yields as a [neutron] configuration versus what puppet-nova configures:
  
  # puppet-nova via puppet-openstack-integration
  
  [neutron]
  service_metadata_proxy=True
  metadata_proxy_shared_secret =a_big_secret
  url=http://127.0.0.1:9696
  region_name=RegionOne
  ovs_bridge=br-int
  extension_sync_interval=600
  auth_url=http://127.0.0.1:35357
  password=a_big_secret
  tenant_name=services
  timeout=30
  username=neutron
  auth_plugin=password
  default_tenant_id=default

  
  # Well, it worked in devstack™
  
  [neutron]
  service_metadata_proxy = True
  url = http://127.0.0.1:9696
  region_name = RegionOne
  auth_url = http://127.0.0.1:35357/v3
  password = secretservice
  auth_strategy = keystone
  project_domain_name = Default
  project_name = service
  user_domain_name = Default
  username = neutron
  auth_plugin = v3password

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486565] Re: Network/Image names allows terminal escape sequence

2016-02-11 Thread Sean Dague
Nova is an API server; it's fine to put whatever into these fields.
Should the clients scrub this? Probably.
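
A minimal sketch of that kind of client-side scrubbing, dropping ANSI
escape sequences and other control characters from API-supplied names
before they hit the terminal (the regex is illustrative, not from any
client's source):

import re

CONTROL = re.compile(r'\x1b\[[0-9;]*[A-Za-z]|[\x00-\x1f\x7f]')

def scrub(name):
    """Strip terminal escape sequences / control chars for safe display."""
    return CONTROL.sub('', name)

print(scrub('\x1b[37mhidden\x1b[f'))   # -> 'hidden'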

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486565

Title:
  Network/Image names allows terminal escape sequence

Status in Glance:
  New
Status in neutron:
  New
Status in OpenStack Compute (nova):
  Opinion
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  This allows a malicious user to create a network whose name will mess
  with an administrator's terminal when they list networks.

  Steps to reproduce:

  As a user: neutron net-create $(echo -e "\E[37mhidden\x1b[f")

  As an admin: neutron net-list

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1486565/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535878] Re: A user with a role on a project should be able to issue a GET /project call

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/270057
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=38e115d385153a631216a120df68a903b2faa6d7
Submitter: Jenkins
Branch:master

commit 38e115d385153a631216a120df68a903b2faa6d7
Author: Ajaya Agrawal 
Date:   Wed Jan 20 08:41:33 2016 +

Change get_project permission

Previously to issue GET /project a user needed
at least project_admin level of permission. With
this change, a user can issue GET /project by just
having a role on the project.

Change-Id: I9d23edc22eb88d0b21ab8968dfbe63661220a6fd
Closes-Bug: 1535878


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1535878

Title:
  A user with a role on a project should be able to issue a GET /project
  call

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Currently, we require project admin or "higher" in order to issue a
  GET /project call. This seems overly restrictive, since if you have a
  role on a project, I would think you should be able to issue GET
  /project. Further, there are cases (such as other projects wanting to
  work with quotas) where being able to get the info on a project (such
  as its parent) is important.
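
  For reference, a minimal client-side sketch of the call this unblocks for
  a plain project member (endpoint, credentials and the project ID are
  placeholders):

  from keystoneauth1 import session
  from keystoneauth1.identity import v3
  from keystoneclient.v3 import client

  auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                     username='member', password='secret',
                     project_name='demo',
                     user_domain_id='default', project_domain_id='default')
  keystone = client.Client(session=session.Session(auth=auth))

  # Previously needed project admin; with the fix any role on the project works.
  project = keystone.projects.get('REPLACE_WITH_PROJECT_ID')
  print(project.name)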

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1535878/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542892] Re: all test_extension_driver_port_security require port-security extension

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/278624
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f846519e5af6efa5007896c7ac75d13b415c3f53
Submitter: Jenkins
Branch:master

commit f846519e5af6efa5007896c7ac75d13b415c3f53
Author: malos 
Date:   Wed Feb 10 21:35:01 2016 +0100

Add extension requirement in port-security api test

if port-security extension is not enabled in neutron, 2 tests run and fail.
Even if port-security is not part of the tempest.conf api_extensions list.

Change-Id: I7c15bff96ea976841400e8df93d0c1cb74049bce
Closes-bug: #1542892


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542892

Title:
  all test_extension_driver_port_security require port-security
  extension

Status in neutron:
  Fix Released

Bug description:
  Two tests are not checking whether the port-security extension is
  enabled or not, and they fail when it is not enabled.

   
neutron.tests.api.admin.test_extension_driver_port_security_admin.PortSecurityAdminTests.test_create_port_security_false_on_shared_network
   
neutron.tests.api.test_extension_driver_port_security.PortSecTest.test_allow_address_pairs

  Below is the trace of a failing test when port-security is not enabled
  but the test runs anyway:

  Traceback (most recent call last):
    File "neutron/tests/api/test_extension_driver_port_security.py", line 147, 
in test_allow_address_pairs
  port = self.create_port(network=network, port_security_enabled=False)
    File "neutron/tests/api/base.py", line 290, in create_port
  **kwargs)
    File "neutron/tests/tempest/services/network/json/network_client.py", line 
148, in _create
  resp, body = self.post(uri, post_data)
    File 
"neutron/.tox/api-constraints/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 259, in post
  return self.request('POST', url, extra_headers, headers, body)
    File 
"neutron/.tox/api-constraints/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 639, in request
  resp, resp_body)
    File 
"neutron/.tox/api-constraints/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 697, in _error_checker
  raise exceptions.BadRequest(resp_body, resp=resp)
  tempest_lib.exceptions.BadRequest: Bad request
  Details: {u'detail': u'', u'message': u"Unrecognized attribute(s) 
'port_security_enabled'", u'type': u'HTTPBadRequest'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544619] [NEW] test_create_delete_instance race fails in gate-horizon-dsvm-integration with "error: [Errno 111] Connection refused"

2016-02-11 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/25/266125/3/gate/gate-horizon-dsvm-
integration/0476236/console.html.gz#_2016-02-05_14_47_17_144

2016-02-05 14:47:17.144 | 2016-02-05 14:47:17.123 | ERROR: 
openstack_dashboard.test.integration_tests.tests.test_instances.TestInstances.test_create_delete_instance
2016-02-05 14:47:17.153 | 2016-02-05 14:47:17.131 | 
--
2016-02-05 14:47:17.156 | 2016-02-05 14:47:17.137 | Traceback (most recent call 
last):
2016-02-05 14:47:17.176 | 2016-02-05 14:47:17.139 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/nose/case.py",
 line 133, in run
2016-02-05 14:47:17.176 | 2016-02-05 14:47:17.148 | self.runTest(result)
2016-02-05 14:47:17.209 | 2016-02-05 14:47:17.188 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/nose/case.py",
 line 151, in runTest
2016-02-05 14:47:17.226 | 2016-02-05 14:47:17.204 | test(result)
2016-02-05 14:47:17.231 | 2016-02-05 14:47:17.211 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/unittest2/case.py",
 line 673, in __call__
2016-02-05 14:47:17.242 | 2016-02-05 14:47:17.222 | return self.run(*args, 
**kwds)
2016-02-05 14:47:17.261 | 2016-02-05 14:47:17.232 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 619, in run
2016-02-05 14:47:17.262 | 2016-02-05 14:47:17.243 | return 
run_test.run(result)
2016-02-05 14:47:17.264 | 2016-02-05 14:47:17.245 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py",
 line 80, in run
2016-02-05 14:47:17.266 | 2016-02-05 14:47:17.246 | return 
self._run_one(actual_result)
2016-02-05 14:47:17.268 | 2016-02-05 14:47:17.249 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py",
 line 94, in _run_one
2016-02-05 14:47:17.269 | 2016-02-05 14:47:17.250 | return 
self._run_prepared_result(ExtendedToOriginalDecorator(result))
2016-02-05 14:47:17.271 | 2016-02-05 14:47:17.252 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py",
 line 108, in _run_prepared_result
2016-02-05 14:47:17.273 | 2016-02-05 14:47:17.253 | self._run_core()
2016-02-05 14:47:17.274 | 2016-02-05 14:47:17.255 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py",
 line 149, in _run_core
2016-02-05 14:47:17.282 | 2016-02-05 14:47:17.262 | 
self.case._run_teardown, self.result):
2016-02-05 14:47:17.290 | 2016-02-05 14:47:17.270 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py",
 line 193, in _run_user
2016-02-05 14:47:17.305 | 2016-02-05 14:47:17.272 | return 
self._got_user_exception(sys.exc_info())
2016-02-05 14:47:17.307 | 2016-02-05 14:47:17.288 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/runtest.py",
 line 213, in _got_user_exception
2016-02-05 14:47:17.326 | 2016-02-05 14:47:17.299 | 
self.case.onException(exc_info, tb_label=tb_label)
2016-02-05 14:47:17.339 | 2016-02-05 14:47:17.315 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 570, in onException
2016-02-05 14:47:17.366 | 2016-02-05 14:47:17.345 | handler(exc_info)
2016-02-05 14:47:17.377 | 2016-02-05 14:47:17.357 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/helpers.py", 
line 141, in _save_screenshot
2016-02-05 14:47:17.386 | 2016-02-05 14:47:17.366 | 
self.driver.get_screenshot_as_file(filename)
2016-02-05 14:47:17.391 | 2016-02-05 14:47:17.371 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py",
 line 758, in get_screenshot_as_file
2016-02-05 14:47:17.395 | 2016-02-05 14:47:17.376 | png = 
self.get_screenshot_as_png()
2016-02-05 14:47:17.397 | 2016-02-05 14:47:17.378 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py",
 line 777, in get_screenshot_as_png
2016-02-05 14:47:17.413 | 2016-02-05 14:47:17.389 | return 
base64.b64decode(self.get_screenshot_as_base64().encode('ascii'))
2016-02-05 14:47:17.415 | 2016-02-05 14:47:17.392 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py",
 line 787, in get_screenshot_as_base64
2016-02-05 14:47:17.415 | 2016-02-05 14:47:17.394 | return 
self.execute(Command.SCREENSHOT)['value']
2016-02-05 14:47:17.417 | 2016-02-05 14:47:17.396 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py",
 line 199, in execu

[Yahoo-eng-team] [Bug 1544608] [NEW] test_image_create_delete race fails in gate-horizon-dsvm-integration with "AttributeError: 'NoneType' object has no attribute 'cells'"

2016-02-11 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/47/275747/3/gate/gate-horizon-dsvm-
integration/b83e62a/console.html#_2016-02-09_01_25_30_551

2016-02-09 01:25:30.551 | 2016-02-09 01:25:30.522 | Traceback (most recent call 
last):
2016-02-09 01:25:30.555 | 2016-02-09 01:25:30.525 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_image_create_delete.py",
 line 32, in test_image_create_delete
2016-02-09 01:25:30.560 | 2016-02-09 01:25:30.531 | 
self.assertTrue(images_page.is_image_active(self.IMAGE_NAME))
2016-02-09 01:25:30.562 | 2016-02-09 01:25:30.533 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/project/compute/imagespage.py",
 line 103, in is_image_active
2016-02-09 01:25:30.569 | 2016-02-09 01:25:30.540 | 
self._wait_till_text_present_in_element(cell_getter, 'Active')
2016-02-09 01:25:30.572 | 2016-02-09 01:25:30.543 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/basewebobject.py",
 line 107, in _wait_till_text_present_in_element
2016-02-09 01:25:30.575 | 2016-02-09 01:25:30.546 | 
self._wait_until(predicate, timeout)
2016-02-09 01:25:30.577 | 2016-02-09 01:25:30.548 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/basewebobject.py",
 line 91, in _wait_until
2016-02-09 01:25:30.580 | 2016-02-09 01:25:30.551 | predicate)
2016-02-09 01:25:30.582 | 2016-02-09 01:25:30.553 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/support/wait.py",
 line 71, in until
2016-02-09 01:25:30.585 | 2016-02-09 01:25:30.556 | value = 
method(self._driver)
2016-02-09 01:25:30.588 | 2016-02-09 01:25:30.559 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/basewebobject.py",
 line 104, in predicate
2016-02-09 01:25:30.592 | 2016-02-09 01:25:30.562 | elt = element() if 
hasattr(element, '__call__') else element
2016-02-09 01:25:30.599 | 2016-02-09 01:25:30.569 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/project/compute/imagespage.py",
 line 101, in cell_getter
2016-02-09 01:25:30.602 | 2016-02-09 01:25:30.573 | return 
row.cells[self.IMAGES_TABLE_STATUS_COLUMN]
2016-02-09 01:25:30.604 | 2016-02-09 01:25:30.575 | AttributeError: 'NoneType' 
object has no attribute 'cells'

Looks like this started around 2/8:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22AttributeError%3A%20'NoneType'%20object%20has%20no%20attribute%20'cells'%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20build_name%3A%5C
%22gate-horizon-dsvm-integration%5C%22from=7d

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  http://logs.openstack.org/47/275747/3/gate/gate-horizon-dsvm-
  integration/b83e62a/console.html#_2016-02-09_01_25_30_551
  
  2016-02-09 01:25:30.551 | 2016-02-09 01:25:30.522 | Traceback (most recent 
call last):
  2016-02-09 01:25:30.555 | 2016-02-09 01:25:30.525 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_image_create_delete.py",
 line 32, in test_image_create_delete
  2016-02-09 01:25:30.560 | 2016-02-09 01:25:30.531 | 
self.assertTrue(images_page.is_image_active(self.IMAGE_NAME))
  2016-02-09 01:25:30.562 | 2016-02-09 01:25:30.533 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/project/compute/imagespage.py",
 line 103, in is_image_active
  2016-02-09 01:25:30.569 | 2016-02-09 01:25:30.540 | 
self._wait_till_text_present_in_element(cell_getter, 'Active')
  2016-02-09 01:25:30.572 | 2016-02-09 01:25:30.543 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/basewebobject.py",
 line 107, in _wait_till_text_present_in_element
  2016-02-09 01:25:30.575 | 2016-02-09 01:25:30.546 | 
self._wait_until(predicate, timeout)
  2016-02-09 01:25:30.577 | 2016-02-09 01:25:30.548 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/basewebobject.py",
 line 91, in _wait_until
  2016-02-09 01:25:30.580 | 2016-02-09 01:25:30.551 | predicate)
  2016-02-09 01:25:30.582 | 2016-02-09 01:25:30.553 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/support/wait.py",
 line 71, in until
  2016-02-09 01:25:30.585 | 2016-02-09 01:25:30.556 | value = 
method(self._driver)
  2016-02-09 01:25:30.588 | 2016-02-09 01:25:30.559 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/basewebobject.py",
 line 104, in predicate
  2016-02-09 01:25:30.592 | 2016-02-09 01:25:30.562 | elt = element() if 
hasattr(element, '__call__') else element
  2016-02-09 01:25:30.599 | 2016-02-09 01:25:30.569 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/project/compute/imagespage.py",
 line 101, in cell_getter
  2016-02-09 01:25:30.602 | 2016-02-09 01:25:30.573 | retu

[Yahoo-eng-team] [Bug 1544599] [NEW] RBD image resize to smaller error handling

2016-02-11 Thread Martins Jakubovics
Public bug reported:

When using Ceph as the storage backend and trying to resize an instance
to a smaller flavor, the resize fails with the wrong message.

Steps to reproduce:

1.) Create an instance from an image with a given flavor.
2.) Try to resize the instance to a flavor which has a smaller root disk.

For example:

FlavorDiskSmallerThanImage: Flavor's disk is too small for requested
image. Flavor disk is 42949672960 bytes, image is 21474836480 bytes.

Should be:

FlavorDiskSmallerThanImage: Flavor's disk is too small for requested
image. Flavor disk is 21474836480 bytes, image is 42949672960 bytes.
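
For context, the symptom reads as if the two sizes are simply swapped when
the exception message is built; a standalone illustration (not the nova
source):

class FlavorDiskSmallerThanImage(Exception):
    msg = ("Flavor's disk is too small for requested image. "
           "Flavor disk is %(flavor)d bytes, image is %(image)d bytes.")

flavor_disk, image_size = 21474836480, 42949672960

# Buggy ordering -- reports the sizes the wrong way around:
print(FlavorDiskSmallerThanImage.msg % {'flavor': image_size, 'image': flavor_disk})
# Expected ordering:
print(FlavorDiskSmallerThanImage.msg % {'flavor': flavor_disk, 'image': image_size})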

** Affects: nova
 Importance: Undecided
 Assignee: Martins Jakubovics (martins-k)
 Status: New


** Tags: rbd

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544599

Title:
  RBD image resize to smaller error handling

Status in OpenStack Compute (nova):
  New

Bug description:
  When using Ceph as the storage backend and trying to resize an instance
  to a smaller flavor, the resize fails with the wrong message.

  Steps to reproduce:

  1.) Create an instance from an image with a given flavor.
  2.) Try to resize the instance to a flavor which has a smaller root disk.

  For example:

  FlavorDiskSmallerThanImage: Flavor's disk is too small for requested
  image. Flavor disk is 42949672960 bytes, image is 21474836480 bytes.

  Should be:

  FlavorDiskSmallerThanImage: Flavor's disk is too small for requested
  image. Flavor disk is 21474836480 bytes, image is 42949672960 bytes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1544599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1532809] Re: Gate failures when DHCP lease cannot be acquired

2016-02-11 Thread Matt Riedemann
Marked as high priority since it's nova-net specific at this point and
that's what the related fixes are for.

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/kilo
   Status: New => In Progress

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/kilo
 Assignee: (unassigned) => Sean Dague (sdague)

** Changed in: nova/liberty
 Assignee: (unassigned) => Sean Dague (sdague)

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova/kilo
   Importance: Undecided => High

** Changed in: nova/liberty
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532809

Title:
  Gate failures when DHCP lease cannot be acquired

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) kilo series:
  In Progress
Status in OpenStack Compute (nova) liberty series:
  In Progress

Bug description:
  Example from:
  
http://logs.openstack.org/97/265697/1/check/gate-grenade-dsvm/6eeced7/console.html#_2016-01-11_07_42_30_838

  Logstash query:
  message:"No lease, failing" AND voting:1

  dhcp_release for an ip/mac does not seem to reach dnsmasq (or it fails
  to act on it - "unknown lease") as i don't see entries in syslog for
  it.

  Logs from nova network:
  dims@dims-mac:~/junk/6eeced7$ grep dhcp_release old/screen-n-net.txt.gz | 
grep 10.1.0.3 | grep CMD
  2016-01-11 07:25:35.548 DEBUG oslo_concurrency.processutils 
[req-62aaa0b9-093e-4f28-805d-d4bf3008bfe6 tempest-ServersTestJSON-1206086292 
tempest-ServersTestJSON-1551541405] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:32:51:c3" 
returned: 0 in 0.117s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297
  2016-01-11 07:25:51.259 DEBUG oslo_concurrency.processutils 
[req-31115ffa-8d43-4621-bb2e-351d6cd4bcef 
tempest-ServerActionsTestJSON-357128318 
tempest-ServerActionsTestJSON-854742699] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:a4:f0:11" 
returned: 0 in 0.108s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297
  2016-01-11 07:26:35.357 DEBUG oslo_concurrency.processutils 
[req-c32a216e-d909-41a0-a0bc-d5eb7a21c048 
tempest-TestVolumeBootPattern-46217374 
tempest-TestVolumeBootPattern-1056816637] CMD "sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.1.0.3 fa:16:3e:ed:de:f6" 
returned: 0 in 0.110s execute 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:297

  Logs from syslog:
  dims@dims-mac:~/junk$ grep 10.1.0.3 syslog.txt.gz
  Jan 11 07:25:35 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:32:51:c3 unknown lease
  Jan 11 07:25:51 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:a4:f0:11 unknown lease
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPOFFER(br100) 10.1.0.3 fa:16:3e:ed:de:f6
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPREQUEST(br100) 10.1.0.3 fa:16:3e:ed:de:f6
  Jan 11 07:26:24 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPACK(br100) 10.1.0.3 fa:16:3e:ed:de:f6 tempest
  Jan 11 07:27:34 devstack-trusty-rax-iad-7090830 object-auditor: Object audit 
(ALL). Since Mon Jan 11 07:27:34 2016: Locally: 1 passed, 0 quarantined, 0 
errors files/sec: 2.03 , bytes/sec: 10119063.16, Total time: 0.49, Auditing 
time: 0.00, Rate: 0.00
  Jan 11 07:39:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:40:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:41:12 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: not using 
configured address 10.1.0.3 because it is leased to fa:16:3e:ed:de:f6
  Jan 11 07:42:26 devstack-trusty-rax-iad-7090830 dnsmasq-dhcp[7798]: 
DHCPRELEASE(br100) 10.1.0.3 fa:16:3e:fe:1f:36 unknown lease

  Net result: the test that runs ssh against the VM fails with "No
  lease, failing" in its serial console.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541859] Re: Router namespace missing after neutron L3 agent restarted

2016-02-11 Thread venkata anil
Unable to reproduce the bug on neutron.

** Changed in: neutron
 Assignee: venkata anil (anil-venkata) => (unassigned)

** No longer affects: neutron

** Changed in: networking-ovn
 Assignee: venkata anil (anil-venkata) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541859

Title:
  Router namespace missing after neutron L3 agent restarted

Status in networking-ovn:
  Confirmed

Bug description:
  After restarting the neutron L3 agent, the router namespace is
  deleted, but not recreated.

  Recreate scenario:
  1) Deploy OVN with the neutron L3 agent instead of the native L3 support.
  2) Follow http://docs.openstack.org/developer/networking-ovn/testing.html to 
test the deployment.
  3) Setup a floating IP address for one of the VMs.
  4) Check the namespaces.

  $ sudo ip netns
  qrouter-d4937a95-9742-4ed3-a8a6-7eccb622837d
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  5) SSH to the VM via the floating IP address.

  $ ssh -i id_rsa_demo cirros@172.24.4.3
  $ exit
  Connection to 172.24.4.3 closed.

  6) Use screen to stop and then restart q-l3.
  7) Check the namespaces.

  $ sudo ip netns
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  8) Disassociate the floating IP address from the VM.  This seems to recreate 
the namespace.  It's possible that other operations would do the same.
  9) Check the namespaces.

  $ sudo ip netns
  qrouter-d4937a95-9742-4ed3-a8a6-7eccb622837d
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  10) Associate the floating IP address to the VM again and connectivity
  is restored to the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1541859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460164] Re: restart of openvswitch-switch causes instance network down when l2population enabled

2016-02-11 Thread James Page
** Also affects: neutron (Ubuntu Xenial)
   Importance: High
   Status: Triaged

** Also affects: neutron (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Wily)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Xenial)
   Status: Triaged => Fix Released

** Changed in: neutron (Ubuntu Wily)
   Importance: Undecided => High

** Changed in: neutron (Ubuntu Wily)
   Status: New => In Progress

** Changed in: neutron (Ubuntu Wily)
 Assignee: (unassigned) => James Page (james-page)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460164

Title:
  restart of openvswitch-switch causes instance network down when
  l2population enabled

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  New
Status in neutron source package in Wily:
  In Progress
Status in neutron source package in Xenial:
  Fix Released

Bug description:
  On 2015-05-28, our Landscape auto-upgraded packages on two of our
  OpenStack clouds.  On both clouds, but only on some compute nodes, the
  upgrade of openvswitch-switch and corresponding downtime of
  ovs-vswitchd appears to have triggered some sort of race condition
  within neutron-plugin-openvswitch-agent leaving it in a broken state;
  any new instances come up with non-functional network but pre-existing
  instances appear unaffected.  Restarting n-p-ovs-agent on the affected
  compute nodes is sufficient to work around the problem.

  The packages Landscape upgraded (from /var/log/apt/history.log):

  Start-Date: 2015-05-28  14:23:07
  Upgrade: nova-compute-libvirt:amd64 (2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), 
libsystemd-login0:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
nova-compute-kvm:amd64 (2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), 
systemd-services:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
isc-dhcp-common:amd64 (4.2.4-7ubuntu12.1, 4.2.4-7ubuntu12.2), nova-common:amd64 
(2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), python-nova:amd64 (2014.1.4-0ubuntu2, 
2014.1.4-0ubuntu2.1), libsystemd-daemon0:amd64 (204-5ubuntu20.11, 
204-5ubuntu20.12), grub-common:amd64 (2.02~beta2-9ubuntu1.1, 
2.02~beta2-9ubuntu1.2), libpam-systemd:amd64 (204-5ubuntu20.11, 
204-5ubuntu20.12), udev:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), 
grub2-common:amd64 (2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), 
openvswitch-switch:amd64 (2.0.2-0ubuntu0.14.04.1, 2.0.2-0ubuntu0.14.04.2), 
libudev1:amd64 (204-5ubuntu20.11, 204-5ubuntu20.12), isc-dhcp-client:amd64 
(4.2.4-7ubuntu12.1, 4.2.4-7ubuntu12.2), python-eventlet:amd64 (0.13.0-1ubuntu2, 
0.13.0-1ubuntu
 2.1), python-novaclient:amd64 (2.17.0-0ubuntu1.1, 2.17.0-0ubuntu1.2), 
grub-pc-bin:amd64 (2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), grub-pc:amd64 
(2.02~beta2-9ubuntu1.1, 2.02~beta2-9ubuntu1.2), nova-compute:amd64 
(2014.1.4-0ubuntu2, 2014.1.4-0ubuntu2.1), openvswitch-common:amd64 
(2.0.2-0ubuntu0.14.04.1, 2.0.2-0ubuntu0.14.04.2)
  End-Date: 2015-05-28  14:24:47

  From /var/log/neutron/openvswitch-agent.log:

  2015-05-28 14:24:18.336 47866 ERROR neutron.agent.linux.ovsdb_monitor
  [-] Error received from ovsdb monitor: ovsdb-client:
  unix:/var/run/openvswitch/db.sock: receive failed (End of file)

  Looking at a stuck instance, all the right tunnels and bridges and
  whatnot appear to be there:

  root@vector:~# ip l l | grep c-3b
  460002: qbr7ed8b59c-3b:  mtu 1500 qdisc 
noqueue state UP mode DEFAULT group default 
  460003: qvo7ed8b59c-3b:  mtu 1500 
qdisc pfifo_fast master ovs-system state UP mode DEFAULT group default qlen 1000
  460004: qvb7ed8b59c-3b:  mtu 1500 
qdisc pfifo_fast master qbr7ed8b59c-3b state UP mode DEFAULT group default qlen 
1000
  460005: tap7ed8b59c-3b:  mtu 1500 qdisc 
pfifo_fast master qbr7ed8b59c-3b state UNKNOWN mode DEFAULT group default qlen 
500
  root@vector:~# ovs-vsctl list-ports br-int | grep c-3b
  qvo7ed8b59c-3b
  root@vector:~# 

  But I can't ping the unit from within the qrouter-${id} namespace on
  the neutron gateway.  If I tcpdump the {q,t}*c-3b interfaces, I don't
  see any traffic.
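
A hedged sketch of the general mitigation shape (detect that ovs-vswitchd
restarted and force the agent to reprogram its flows); this is not the exact
neutron fix, and the canary cookie value is purely hypothetical.

    import subprocess

    def bridge_has_canary_flow(bridge="br-int"):
        # After an ovs-vswitchd restart the bridge loses its programmed flows,
        # so a missing marker flow is a cheap restart indicator.
        out = subprocess.check_output(["ovs-ofctl", "dump-flows", bridge],
                                      universal_newlines=True)
        return "cookie=0xdeadbeef" in out  # hypothetical canary cookie

    def agent_iteration(resync_flows):
        if not bridge_has_canary_flow():
            # OVS lost its state: rebuild local flows/tunnels and re-advertise
            # l2population entries instead of leaving new ports unwired.
            resync_flows()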

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460164/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543166] Re: Tables have a bunch of extra padding

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277439
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=6b04f47eae87dc78e609f7089cf31c14bf42ce6d
Submitter: Jenkins
Branch:master

commit 6b04f47eae87dc78e609f7089cf31c14bf42ce6d
Author: Rob Cresswell 
Date:   Mon Feb 8 15:16:01 2016 +

Remove extraneous table padding

A bunch of extra padding was added to the tables for some reason. It's
not needed, as bootstraps rows/cols provide any padding already.

Change-Id: If462004899afa8bcf3d7a10ab181206f47596bea
Closes-Bug: 1543166


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543166

Title:
  Tables have a bunch of extra padding

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  There is twice the usual padding around the tables due to some recent
  SCSS refactoring. This should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513558] Re: test_create_ebs_image_and_check_boot failing with ceph job on stable/kilo

2016-02-11 Thread Sean Dague
** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1513558

Title:
  test_create_ebs_image_and_check_boot failing with ceph job on
  stable/kilo

Status in tempest:
  New

Bug description:
  After https://review.openstack.org/#/c/230937/ merged, the stable/kilo gate
  seems to be broken in the ceph job gate-tempest-dsvm-full-ceph.

  The tests fail with an error like:

  2015-11-04 19:20:07.224 | Captured traceback-2:
  2015-11-04 19:20:07.224 | ~
  2015-11-04 19:20:07.224 | Traceback (most recent call last):
  2015-11-04 19:20:07.224 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 791, in wait_for_resource_deletion
  2015-11-04 19:20:07.224 | raise exceptions.TimeoutException(message)
  2015-11-04 19:20:07.224 | tempest_lib.exceptions.TimeoutException: 
Request timed out
  2015-11-04 19:20:07.224 | Details: (TestVolumeBootPattern:_run_cleanups) 
Failed to delete volume 1da0ba45-a4e6-49c6-8d47-ca522d7acabb within the 
required time (196 s).
  2015-11-04 19:20:07.225 | 
  2015-11-04 19:20:07.225 | 
  2015-11-04 19:20:07.225 | Captured traceback-1:
  2015-11-04 19:20:07.225 | ~
  2015-11-04 19:20:07.225 | Traceback (most recent call last):
  2015-11-04 19:20:07.225 |   File "tempest/scenario/manager.py", line 100, 
in delete_wrapper
  2015-11-04 19:20:07.225 | delete_thing(*args, **kwargs)
  2015-11-04 19:20:07.225 |   File 
"tempest/services/volume/json/volumes_client.py", line 108, in delete_volume
  2015-11-04 19:20:07.225 | resp, body = self.delete("volumes/%s" % 
str(volume_id))
  2015-11-04 19:20:07.225 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 290, in delete
  2015-11-04 19:20:07.225 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2015-11-04 19:20:07.226 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 639, in request
  2015-11-04 19:20:07.226 | resp, resp_body)
  2015-11-04 19:20:07.226 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 697, in _error_checker
  2015-11-04 19:20:07.226 | raise exceptions.BadRequest(resp_body, 
resp=resp)
  2015-11-04 19:20:07.226 | tempest_lib.exceptions.BadRequest: Bad request
  2015-11-04 19:20:07.226 | Details: {u'code': 400, u'message': u'Invalid 
volume: Volume still has 1 dependent snapshots.'}

  Full logs here: http://logs.openstack.org/52/229152/11/check/gate-
  tempest-dsvm-full-ceph/11bddbf/console.html#_2015-11-04_19_20_07_224

  This seems to be similar to
  https://bugs.launchpad.net/tempest/+bug/1489581 but isn't in the cells
  job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1513558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544585] [NEW] Missing network mock for Instance tests

2016-02-11 Thread Itxaka Serrano
Public bug reported:

openstack_dashboards.dashboards.project.instances.tests:InstanceAjaxTests.test_row_update_flavor_not_found
test is missing a mock for "servers_update_addresses" causing the test
to output "Cannot connect to neutron"

Strangely enough, the test is passing, which is not really good.
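
A minimal sketch of the kind of stub that closes this gap, assuming the usual
mock-style patching; the real horizon test suite may wire it up differently
(e.g. via its existing stubbing helpers).

    from unittest import mock  # "import mock" on Python 2


    class InstanceAjaxTests(object):  # stand-in for the real test class
        @mock.patch('openstack_dashboard.api.network.servers_update_addresses')
        def test_row_update_flavor_not_found(self, mock_update_addresses):
            # The stub records the call instead of contacting neutron, so the
            # "Cannot connect to neutron" message can no longer appear.
            mock_update_addresses.return_value = None
            # ... rest of the original test body stays unchanged ...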

** Affects: horizon
 Importance: Undecided
 Assignee: Itxaka Serrano (itxakaserrano)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Itxaka Serrano (itxakaserrano)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544585

Title:
  Missing network mock for Instance tests

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  
openstack_dashboards.dashboards.project.instances.tests:InstanceAjaxTests.test_row_update_flavor_not_found
  test is missing a mock for "servers_update_addresses" causing the test
  to output "Cannot connect to neutron"

  Strangely enough, the test is passing, which is not really good.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544240] Re: disassociate floating ip 500 response

2016-02-11 Thread Andrew Laski
I was not aware that we allowed 500s in that case. It makes sense, just
unfortunate. So yeah, this bug is invalid.

** Changed in: nova
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1544240

Title:
  disassociate floating ip 500 response

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  From http://logs.openstack.org/88/276088/5/check/gate-grenade-
  dsvm/8051980/logs/new/screen-n-api.txt.gz

  2016-02-10 13:39:19.932 ERROR nova.api.openstack.extensions 
[req-644cea97-7d26-4e2a-984b-d346ebf96ccb cinder_grenade cinder_grenade] 
Unexpected exception in API method
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/validation/__init__.py", line 73, in wrapper
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 293, in 
_remove_floating_ip
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
disassociate_floating_ip(self, context, instance, address)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/floating_ips.py", line 80, in 
disassociate_floating_ip
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
self.network_api.disassociate_floating_ip(context, instance, address)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/api.py", line 49, in wrapped
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
func(self, context, *args, **kwargs)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/base_api.py", line 77, in wrapper
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions res = 
f(self, context, *args, **kwargs)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/api.py", line 240, in disassociate_floating_ip
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
affect_auto_assigned)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/utils.py", line 1082, in wrapper
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
150, in inner
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/floating_ips.py", line 460, in 
disassociate_floating_ip
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
interface, host, fixed_ip.instance_uuid)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/network/rpcapi.py", line 324, in 
_disassociate_floating_ip
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
instance_uuid=instance_uuid)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
retry=self.retry)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
timeout=timeout, retry=retry)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 466, in send
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions 
retry=retry)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 455, in _send
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions result 
= self._waiter.wait(msg_id, timeout)
  2016-02-10 13:39:19.932 22992 ERROR nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-pac

[Yahoo-eng-team] [Bug 1537674] Re: host action should use POST instead of GET for power on , reboot etc

2016-02-11 Thread Sean Dague
This is going to require a spec, as it will be a microversion. I agree
GET is the wrong method for these kinds of things.

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1537674

Title:
  host action should use POST instead of GET for power on , reboot etc

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  class Hosts(extensions.V21APIExtensionBase):
      """Admin-only host administration."""

      name = "Hosts"
      alias = ALIAS
      version = 1

      def get_resources(self):
          resources = [extensions.ResourceExtension(ALIAS,
              HostController(),
              member_actions={"startup": "GET", "shutdown": "GET",
                              "reboot": "GET"})]

  
We use GET for those actions; POST would be a better choice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1537674/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529836] Fix merged to networking-powervm (master)

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/267648
Committed: 
https://git.openstack.org/cgit/openstack/networking-powervm/commit/?id=6b14ac69332056a14c98f8b4401ac3a93566a066
Submitter: Jenkins
Branch:master

commit 6b14ac69332056a14c98f8b4401ac3a93566a066
Author: Harshada Mangesh Kakad 
Date:   Thu Jan 14 07:32:32 2016 -0800

Replace deprecated library function os.popen() with subprocess

os.popen() is deprecated since version 2.6. Resolved with use of
subprocess module.

Change-Id: If3be398196688c9634cb12bfd96daac71f0bf42d
Closes-Bug: #1529836
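
A generic before/after sketch of the change this series makes across the
projects (illustrative only; the real patches touch project-specific call
sites):

    import subprocess

    cmd = ["hostname"]

    # Deprecated style being removed:
    #   output = os.popen("hostname").read()

    # subprocess replacement:
    output = subprocess.check_output(cmd, universal_newlines=True).strip()
    print(output)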


** Changed in: nova-powervm
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1529836

Title:
  Fix deprecated library function (os.popen()).

Status in bilean:
  In Progress
Status in Blazar:
  In Progress
Status in Ceilometer:
  In Progress
Status in ceilometer-powervm:
  Fix Released
Status in Cinder:
  In Progress
Status in congress:
  In Progress
Status in devstack:
  In Progress
Status in Glance:
  In Progress
Status in glance_store:
  Fix Released
Status in group-based-policy-specs:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in horizon-cisco-ui:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kwapi:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-powervm:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in oslo-incubator:
  In Progress
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  Fix Released

Bug description:
  Deprecated library function os.popen is still in use at some places.
  Need to replace it using subprocess module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bilean/+bug/1529836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529836] Re: Fix deprecated library function (os.popen()).

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/267615
Committed: 
https://git.openstack.org/cgit/openstack/ceilometer-powervm/commit/?id=8e1651bba33dc1546f4cb0da1e52a50253802cbe
Submitter: Jenkins
Branch:master

commit 8e1651bba33dc1546f4cb0da1e52a50253802cbe
Author: Harshada Mangesh Kakad 
Date:   Thu Jan 14 06:47:42 2016 -0800

Replace deprecated library function os.popen() with subprocess

os.popen() is deprecated since version 2.6. Resolved with use of
subprocess module.

Change-Id: Ibdcfdf156f03588b152b86401809adcb433e4699
Closes-Bug: #1529836


** Changed in: ceilometer-powervm
   Status: In Progress => Fix Released

** Changed in: networking-powervm
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1529836

Title:
  Fix deprecated library function (os.popen()).

Status in bilean:
  In Progress
Status in Blazar:
  In Progress
Status in Ceilometer:
  In Progress
Status in ceilometer-powervm:
  Fix Released
Status in Cinder:
  In Progress
Status in congress:
  In Progress
Status in devstack:
  In Progress
Status in Glance:
  In Progress
Status in glance_store:
  Fix Released
Status in group-based-policy-specs:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in horizon-cisco-ui:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kwapi:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-powervm:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in oslo-incubator:
  In Progress
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  Fix Released

Bug description:
  Deprecated library function os.popen is still in use at some places.
  Need to replace it using subprocess module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bilean/+bug/1529836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544548] [NEW] DHCP: no indication in API that DHCP service is not running

2016-02-11 Thread Gary Kotton
Public bug reported:

Even if DHCP namespace creation fails at the network node for some
reason, the neutron API still returns success to the user.

2016-01-18 02:51:12.661 DEBUG neutron.agent.dhcp.agent [req-f6d7a436-b9ff-45ca-9cfc-0f147b97effb ctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0 bbaa10b4eb2749b3a09b375682b6cb6e] Calling driver for network: 351d9017-6e92-4310-ae6d-cf1d0bce0b14 action: enable from (pid=26547) call_driver /opt/stack/neutron/neutron/agent/dhcp/agent.py:104

2016-01-18 02:51:12.662 DEBUG neutron.agent.linux.dhcp [req-f6d7a436-b9ff-45ca-9cfc-0f147b97effb ctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0 bbaa10b4eb2749b3a09b375682b6cb6e] DHCP port dhcpa382383f-19b6-5ca7-94ec-5ec1e62dc705-351d9017-6e92-4310-ae6d-cf1d0bce0b14 on network 351d9017-6e92-4310-ae6d-cf1d0bce0b14 does not yet exist. Checking for a reserved port. from (pid=26547) _setup_reserved_dhcp_port /opt/stack/neutron/neutron/agent/linux/dhcp.py:1098
2016-01-18 02:51:12.663 DEBUG neutron.agent.linux.dhcp [req-f6d7a436-b9ff-45ca-9cfc-0f147b97effb ctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0 bbaa10b4eb2749b3a09b375682b6cb6e] DHCP port dhcpa382383f-19b6-5ca7-94ec-5ec1e62dc705-351d9017-6e92-4310-ae6d-cf1d0bce0b14 on network 351d9017-6e92-4310-ae6d-cf1d0bce0b14 does not yet exist. Creating new one. from (pid=26547) _setup_new_dhcp_port /opt/stack/neutron/neutron/agent/linux/dhcp.py:1119

2016-01-18 02:51:13.000 ERROR neutron.agent.dhcp.agent [req-f6d7a436-b9ff-45ca-9cfc-0f147b97effb ctx_rally_bbaa10b4eb2749b3a09b375682b6cb6e_user_0 bbaa10b4eb2749b3a09b375682b6cb6e] Unable to enable dhcp for 351d9017-6e92-4310-ae6d-cf1d0bce0b14.
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent Traceback (most recent call last):
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 113, in call_driver
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     getattr(driver, action)(**action_kwargs)
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 206, in enable
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     interface_name = self.device_manager.setup(self.network)
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1206, in setup
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     namespace=network.namespace)
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 243, in plug
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     bridge, namespace, prefix)
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 311, in plug_new
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     self.check_bridge_exists(bridge)
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/interface.py", line 220, in check_bridge_exists
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     if not ip_lib.device_exists(bridge):
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 908, in device_exists
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     return IPDevice(device_name, namespace=namespace).exists()
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 265, in exists
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     return bool(self.link.address)
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 482, in address
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     return self.attributes.get('link/ether')
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent   File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 506, in attributes
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent     return self._parse_line(self._run(['o'], ('show', self.name)))
2016-01-18 02:51:13.000 TRACE neutron.agent.dhcp.agent 

[Yahoo-eng-team] [Bug 1541859] Re: Router namespace missing after neutron L3 agent restarted

2016-02-11 Thread venkata anil
** Changed in: networking-ovn
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541859

Title:
  Router namespace missing after neutron L3 agent restarted

Status in networking-ovn:
  New
Status in neutron:
  New

Bug description:
  After restarting the neutron L3 agent, the router namespace is
  deleted, but not recreated.

  Recreate scenario:
  1) Deploy OVN with the neutron L3 agent instead of the native L3 support.
  2) Follow http://docs.openstack.org/developer/networking-ovn/testing.html to 
test the deployment.
  3) Setup a floating IP address for one of the VMs.
  4) Check the namespaces.

  $ sudo ip netns
  qrouter-d4937a95-9742-4ed3-a8a6-7eccb622837d
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  5) SSH to the VM via the floating IP address.

  $ ssh -i id_rsa_demo cirros@172.24.4.3
  $ exit
  Connection to 172.24.4.3 closed.

  6) Use screen to stop and then restart q-l3.
  7) Check the namespaces.

  $ sudo ip netns
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  8) Disassociate the floating IP address from the VM.  This seems to recreate 
the namespace.  It's possible that other operations would do the same.
  9) Check the namespaces.

  $ sudo ip netns
  qrouter-d4937a95-9742-4ed3-a8a6-7eccb622837d
  qdhcp-4ed85305-259e-4f62-a7a4-32aed191c546

  10) Associate the floating IP address to the VM again and connectivity
  is restored to the VM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1541859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544522] Re: Don't use Mock.called_once_with that does not exist

2016-02-11 Thread javeme
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: sahara
 Assignee: (unassigned) => javeme (javaloveme)

** Changed in: cinder
 Assignee: (unassigned) => javeme (javaloveme)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544522

Title:
  Don't use Mock.called_once_with that does not exist

Status in Cinder:
  New
Status in neutron:
  New
Status in Sahara:
  New

Bug description:
  The class mock.Mock does not have a method "called_once_with"; it only
  has "assert_called_once_with". Currently there are still some places
  where we use the called_once_with method, and we should correct them.

  NOTE: called_once_with() does nothing because it's a mock object.
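
A short demonstration of why the misspelled call slips through: on a plain
Mock, any unknown attribute access just returns another mock, so nothing is
asserted.

    from unittest import mock

    m = mock.Mock()
    m.do_work(1, 2)

    m.called_once_with(3, 4)           # no-op: auto-created child mock, no check
    m.assert_called_once_with(1, 2)    # real assertion, passes
    # m.assert_called_once_with(3, 4)  # would raise AssertionError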

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1544522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540719] Re: Integration tests browser maximise makes working on tests painful

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274996
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=af505dacd1d8d64f602629fa2e334582c3a89d5b
Submitter: Jenkins
Branch:master

commit af505dacd1d8d64f602629fa2e334582c3a89d5b
Author: Richard Jones 
Date:   Tue Feb 2 14:54:10 2016 +1100

Add configuration mechanism to turn off browser maximisation

Having the window maximise during a test run makes it difficult
to work with other windows, especially on a single-monitor
computer. This patch removes the maximisation call.

This patch also adds a "local" configuration mechanism to allow
developers to have local configuration of the integration test
environment without it affecting the git repository.

Change-Id: I8a7acbe40deaec5cca904526e3f3c8bc3357744c
Closes-Bug: 1540719


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540719

Title:
  Integration tests browser maximise makes working on tests painful

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Currently the webdriver for the integration test suite maximises the
  browser, making it difficult to work with other windows during a test
  run. We should investigate whether the maximisation is even necessary,
  at least, with a view to turning it off permanently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543937] Re: 'nova-manage db archive_deleted_rows' fails for very large number

2016-02-11 Thread Abhishek Kekane
Similarly, 'glance-manage db purge' with a very large number fails and gives
a long stack trace.


** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor)

** Summary changed:

- 'nova-manage db archive_deleted_rows' fails for very large number
+ db purge records fails for very large number

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543937

Title:
  db purge records fails for very large number

Status in Glance:
  New
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The command:
  $ nova-manage db archive_deleted_rows NUMBER --verbose
  fails for a very large NUMBER value on nova master

  Nova version:
  openstack@openstack-136:/opt/stack/nova$ git log -1
  commit 29641bd9778b51ac5794dfed9d4b881c5d47dc50
  Merge: 21e79d5 9fbe683
  Author: Jenkins 
  Date:   Wed Feb 10 06:03:00 2016 +

  Merge "Top 100 slow tests: api.openstack.compute.test_api"

  Example:

  openstack@openstack-136:~$ nova-manage db archive_deleted_rows 
214748354764774747774747477536654545649 --verbose
  2016-02-09 22:17:10.713 ERROR oslo_db.sqlalchemy.exc_filters [-] DBAPIError 
exception wrapped from (pymysql.err.ProgrammingError) (1064, u"You have an 
error in your SQL syntax; check the manual that corresponds to your MySQL 
server version for the right syntax to use near 
'214748354764774747774747477536654545649' at line 4") [SQL: u'INSERT INTO 
shadow_instance_actions_events (created_at, updated_at, deleted_at, deleted, 
id, event, action_id, start_time, finish_time, result, traceback, host, 
details) SELECT instance_actions_events.created_at, 
instance_actions_events.updated_at, instance_actions_events.deleted_at, 
instance_actions_events.deleted, instance_actions_events.id, 
instance_actions_events.event, instance_actions_events.action_id, 
instance_actions_events.start_time, instance_actions_events.finish_time, 
instance_actions_events.result, instance_actions_events.traceback, 
instance_actions_events.host, instance_actions_events.details \nFROM 
instance_actions_events \nWHERE inst
 ance_actions_events.deleted != %s ORDER BY instance_actions_events.id \n LIMIT 
%s'] [parameters: (0, 214748354764774747774747477536654545649L)]
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most 
recent call last):
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, 
in _execute_context
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters context)
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 
450, in do_execute
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 146, in 
execute
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters result = 
self._query(query)
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 296, in _query
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters conn.query(q)
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 781, in 
query
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 942, in 
_read_query_result
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters result.read()
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1138, in 
read
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters first_packet 
= self.connection._read_packet()
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 906, in 
_read_packet
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters 
packet.check_error()
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 367, in 
check_error
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
  2016-02-09 22:17:10.713 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", li

[Yahoo-eng-team] [Bug 1544515] [NEW] [tempest] No metadata route in test_dualnet* family

2016-02-11 Thread Evgeny Antyshev
Public bug reported:

We run tempest in an environment where all ssh keys, personality files,
etc. go through the metadata service,
which requires the metadata route to 169.254.169.254 to be provided by DHCP.

We faced a rather complicated problem in scenario/test_network_v6.py:

Networking configuration consists of 4 steps:
1) Private network creation
2) Router creation and plugging it in as a gateway to the external network
3) Subnet creation
4) Adding a router interface to the subnet

With this sequence the DHCP service provides the static metadata route, and
our scenario works.
That is how it is done in create_networks() from tempest/scenario/manager.py
(which is used in the majority of tests).

But prepare_network() of scenario/test_network_v6.py first creates the subnet, and
only after that creates the router (1-3-2-4).
The DHCP service configuration in neutron then regards this subnet as isolated (which
is what I don't understand), and therefore
doesn't provide it with the metadata route by default (force_metadata=False,
enable_isolated_metadata=False).

It really seems like Neutron has a bug handling this scenario: it
doesn't update the DHCP service configuration when a router is created for
the subnet.

Also, AFAIU, all the upstream Jenkins dsvm tempest jobs running test_dualnet*
get metadata from the config drive,
which is possibly why they aren't affected.
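
To make the ordering difference concrete, a sketch of the two sequences using
python-neutronclient-style calls (bodies abbreviated; the CIDR and names are
placeholders, not values from the actual tests):

    def create_networks_like_manager_py(client, ext_net_id):
        net = client.create_network({'network': {'name': 'private'}})
        router = client.create_router(
            {'router': {'external_gateway_info': {'network_id': ext_net_id}}})
        subnet = client.create_subnet(
            {'subnet': {'network_id': net['network']['id'],
                        'cidr': '10.100.0.0/24', 'ip_version': 4}})
        client.add_interface_router(router['router']['id'],
                                    {'subnet_id': subnet['subnet']['id']})


    def prepare_network_like_test_network_v6(client, ext_net_id):
        net = client.create_network({'network': {'name': 'private'}})
        subnet = client.create_subnet(          # subnet created first ...
            {'subnet': {'network_id': net['network']['id'],
                        'cidr': '10.100.0.0/24', 'ip_version': 4}})
        router = client.create_router(          # ... router only afterwards
            {'router': {'external_gateway_info': {'network_id': ext_net_id}}})
        client.add_interface_router(router['router']['id'],
                                    {'subnet_id': subnet['subnet']['id']})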

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544515

Title:
  [tempest] No metadata route in test_dualnet* family

Status in neutron:
  New

Bug description:
  We run tempest in an environment where all ssh keys, personality files,
etc. go through the metadata service,
  which requires the metadata route to 169.254.169.254 to be provided by DHCP.

  We faced a rather complicated problem in scenario/test_network_v6.py:

  Networking configuration consists of 4 steps:
  1) Private network creation
  2) Router creation and plugging it in as a gateway to the external network
  3) Subnet creation
  4) Adding a router interface to the subnet

  With this sequence the DHCP service provides the static metadata route, and
our scenario works.
  That is how it is done in create_networks() from tempest/scenario/manager.py
(which is used in the majority of tests).

  But prepare_network() of scenario/test_network_v6.py first creates the subnet,
and only after that creates the router (1-3-2-4).
  The DHCP service configuration in neutron then regards this subnet as isolated
(which is what I don't understand), and therefore doesn't provide it with the
metadata route by default (force_metadata=False, enable_isolated_metadata=False).

  It really seems like Neutron has a bug handling this scenario: it
  doesn't update the DHCP service configuration when a router is created for
  the subnet.

  Also, AFAIU, all the upstream Jenkins dsvm tempest jobs running test_dualnet*
get metadata from the config drive,
   which is possibly why they aren't affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544508] [NEW] neutron-meter-agent - makes traffic between internal networks NATed

2016-02-11 Thread Dmitry Sutyagin
Public bug reported:

If neutron-meter-agent is installed and enabled, and a meter-label is
created, all traffic between internal networks becomes NATed, which is
unexpected and potentially causes firewall/routing issues. This happens
because the meter-agent does not set the stateless flag during iptables
initialization, which later, during _modify_rules in
agent/linux/iptables_manager.py, results in the following rules being reordered:

before:
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom

after:
-A POSTROUTING -j neutron-postrouting-bottom
-A POSTROUTING -j neutron-l3-agent-POSTROUTING

The attached patch fixes the issue by setting "state_less=True" for
metering agent's iptables_manager.
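
A hedged sketch of what the proposed change amounts to at the construction
site (see the attached patch for the real diff; the import path and helper
name below are assumptions):

    from neutron.agent.linux import iptables_manager

    def build_metering_iptables(namespace):
        # state_less=True keeps the manager away from the nat table, so the
        # metering chains can no longer reorder the POSTROUTING jumps shown
        # in the before/after listing above.
        return iptables_manager.IptablesManager(
            state_less=True,
            namespace=namespace,
            use_ipv6=False)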

** Affects: neutron
 Importance: Undecided
 Status: New

** Patch added: "fix_metering_agent_nat.patch"
   
https://bugs.launchpad.net/bugs/1544508/+attachment/4569216/+files/fix_metering_agent_nat.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544508

Title:
  neutron-meter-agent - makes traffic between internal networks NATed

Status in neutron:
  New

Bug description:
  If neutron-meter-agent is installed and enabled, and a meter-label is
  created, all traffic between internal networks becomes NATed, which is
  unexpected and potentially causes firewall/routing issues. This
  happens because meter-agent does not define stateless flag during
  iptables initialization which later during _modify_rules in
  agent/linux/iptables_manager.py results in moving the following rules:

  before:
  -A POSTROUTING -j neutron-l3-agent-POSTROUTING
  -A POSTROUTING -j neutron-postrouting-bottom

  after:
  -A POSTROUTING -j neutron-postrouting-bottom
  -A POSTROUTING -j neutron-l3-agent-POSTROUTING

  The attached patch fixes the issue by setting "state_less=True" for
  metering agent's iptables_manager.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535759] Re: Booting server with --hint group=group-name throws http 500

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/272737
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5cc5a841109b082395d9664edcfc11e31fb678fa
Submitter: Jenkins
Branch:master

commit 5cc5a841109b082395d9664edcfc11e31fb678fa
Author: Balazs Gibizer 
Date:   Tue Jan 26 22:10:12 2016 +0100

Return HTTP 400 for invalid server-group uuid

Nova API checks that the value of the group scheduler hint
shall be a valid server-group uuid, however it was done by custom
code not jsonschema. Moreover the exception was not handled by the API
so both v2.0 and v.2.1 returned HTTP 500 if the group hint wasn't a valid
uuid of an existing server group.

The custom code to check for the validity of the group uuid is kept in
nova.compute.api as it is still needed in v2.0.

In v2.0 InstanceGroupNotFound exception are now translated to 
HTTPBadRequest.

In v2.1 the scheduler_hint jsonschema is now extended to check the format
of the group hint and the api is now translates the InstanceGroupNotFound
to HTTPBadRequest.

Closes-bug: #1535759
Change-Id: I38d98c74f6cceed5b4becf9ed67f7189cba479fa
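
A standalone illustration of the validation approach the commit message
describes (this is not nova's actual schema definition; the helper and error
handling are simplified assumptions):

    import jsonschema

    UUID_PATTERN = ('^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
                    '[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$')

    scheduler_hints_schema = {
        'type': 'object',
        'properties': {
            'group': {'type': 'string', 'pattern': UUID_PATTERN},
        },
    }

    def check_hints(hints):
        try:
            jsonschema.validate(hints, scheduler_hints_schema)
        except jsonschema.exceptions.ValidationError as exc:
            # The API layer would turn this into HTTP 400 instead of a 500.
            raise ValueError("Invalid scheduler hint: %s" % exc.message)

    # check_hints({'group': 'affin-group-1'})  -> ValueError (maps to 400)
    # check_hints({'group': 'b087079c-cfc8-4a7d-a578-ccfbb7a85cf5'})  -> OK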


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1535759

Title:
  Booting server with --hint group=group-name throws http 500

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  It seems that nova expects and validates that the os:scheduler_hints
  group key contains a UUID. However, if it is not a UUID, HTTP 500 is
  returned instead of HTTP 400.

  This was visible in devstack with nova from master branch
  (b558d616c3b123dbe2a0914162b45765192f3a12)

  To reproduce:

  $ nova server-group-create affin-group-1 affinity
  
+--+---+---+-+--+
  | Id   | Name  | Policies  | 
Members | Metadata |
  
+--+---+---+-+--+
  | b087079c-cfc8-4a7d-a578-ccfbb7a85cf5 | affin-group-1 | [u'affinity'] | []   
   | {}   |
  
+--+---+---+-+--+
  nova --debug boot --flavor 42 --image cirros-0.3.4-x86_64-uec --hint 
group=affin-group-1 inst-1
  
  DEBUG (session:225) REQ: curl -g -i -X POST 
http://192.168.200.200:8774/v2.1/91b11b772c4d400f9a44ab1bbfd4ddd8/servers -H 
"User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-OpenStack-Nova-API-Version: 2.12" -H "X-Auth-Token: 
{SHA1}71a9fd700ffb3e625648e4423b3bd3c409d23246" -d '{"os:scheduler_hints": 
{"group": "affin-group-1"}, "server": {"min_count": 1, "flavorRef": "42", 
"name": "inst-1", "imageRef": "75a189fa-389b-4386-a731-a3bfccdfe352", 
"max_count": 1}}'
  DEBUG (connectionpool:387) "POST 
/v2.1/91b11b772c4d400f9a44ab1bbfd4ddd8/servers HTTP/1.1" 500 201
  DEBUG (session:254) RESP: [500] Content-Length: 201 X-Compute-Request-Id: 
req-5a167ca6-6828-4724-ba7f-246202b0a9ec Vary: X-OpenStack-Nova-API-Version 
Connection: keep-alive X-Openstack-Nova-Api-Version: 2.12 Date: Tue, 05 Jan 
2016 22:48:57 GMT Content-Type: application/json; charset=UTF-8 
  RESP BODY: {"computeFault": {"message": "Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if 
possible.\n", "code": 500}}

  DEBUG (shell:896) Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-5a167ca6-6828-4724-ba7f-246202b0a9ec)
  Traceback (most recent call last):
File "/opt/stack/python-novaclient/novaclient/shell.py", line 894, in main
  OpenStackComputeShell().main(argv)
File "/opt/stack/python-novaclient/novaclient/shell.py", line 821, in main
  args.func(self.cs, args)
File "/opt/stack/python-novaclient/novaclient/v2/shell.py", line 542, in 
do_boot
  server = cs.servers.create(*boot_args, **boot_kwargs)
File "/opt/stack/python-novaclient/novaclient/v2/servers.py", line 1024, in 
create
  **boot_kwargs)
File "/opt/stack/python-novaclient/novaclient/v2/servers.py", line 555, in 
_boot
  return_raw=return_raw, **kwargs)
File "/opt/stack/python-novaclient/novaclient/base.py", line 175, in _create
  _resp, body = self.api.client.post(url, body=body)
File "/usr/local/lib/python2.7/dist-packages/keystoneauth1/adapter.py", 
line 179, in post
  return self.request(url, 'POST', **kwargs)
File "/opt/stack/python-novaclient/novaclient/client.py", line 92, in 
request
  raise exceptions.from_response(resp, body, url, method)
  ClientException: Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova 

[Yahoo-eng-team] [Bug 1543880] Re: duplicated security groups in test_port_security_disable_security_group

2016-02-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/278609
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=eb8141051abe64de7ad9398fb2adaf9a0a79d62d
Submitter: Jenkins
Branch:master

commit eb8141051abe64de7ad9398fb2adaf9a0a79d62d
Author: Kevin Benton 
Date:   Wed Feb 10 11:52:37 2016 -0800

ML2: delete_port on deadlock during binding

The previous logic was only catching mechanism driver exceptions so
it would leave behind a partially built port if a deadlock was
encountered during port binding.

The bug this closes was caused by a DBDeadlock being encountered when
a lock was attempted on the port binding record
(get_locked_port_and_binding). This would not be caught so the API
would retry the whole operation with the original created port
left behind. This resulted in two ports assigned to the same instance.

Closes-Bug: #1543880
Change-Id: I694a9d58002d72636225f99a5fb2b1ccc1cfb6e5
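
The shape of the fix, reduced to a hedged sketch (the helper names here are
placeholders, not the actual ML2 plugin methods):

    def create_port(plugin, context, port):
        result = plugin.create_port_db(context, port)            # placeholder
        try:
            return plugin.bind_port_if_needed(context, result)   # placeholder
        except Exception:
            # Any failure during binding (mechanism driver error, DB deadlock,
            # ...) must remove the partially created port; otherwise the API
            # retry creates a second port for the same instance, which is the
            # duplicated-security-group symptom seen in the test.
            plugin.delete_port(context, result['id'])
            raise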


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543880

Title:
  duplicated security groups in
  test_port_security_disable_security_group

Status in neutron:
  Fix Released

Bug description:
  An instance of the failure:

  http://logs.openstack.org/10/275510/2/gate/gate-tempest-dsvm-neutron-
  linuxbridge/34fdca2/logs/testr_results.html.gz

  Traceback (most recent call last):
File "tempest/scenario/test_security_groups_basic_ops.py", line 166, in 
setUp
  self._deploy_tenant(self.primary_tenant)
File "tempest/scenario/test_security_groups_basic_ops.py", line 306, in 
_deploy_tenant
  self._set_access_point(tenant)
File "tempest/scenario/test_security_groups_basic_ops.py", line 272, in 
_set_access_point
  security_groups=secgroups)
File "tempest/scenario/test_security_groups_basic_ops.py", line 250, in 
_create_server
  sorted([s['name'] for s in server['security_groups']]))
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 362, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 447, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = [u'tempest-secgroup_access--952188639', 
u'tempest-secgroup_general--210717852']
  actual= [u'tempest-secgroup_access--952188639',
   u'tempest-secgroup_access--952188639',
   u'tempest-secgroup_general--210717852',
   u'tempest-secgroup_general--210717852']

  You can notice the duplicated security groups.

  More to follow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544176] Re: br-int with Normal action

2016-02-11 Thread Ihar Hrachyshka
** Project changed: neutron => tap-as-a-service

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544176

Title:
  br-int with Normal action

Status in tap-as-a-service:
  New

Bug description:
  Hello,
  I have two VMs (VM A and VM B) connected to a single host. Traffic going to
and coming from VM B is mirrored to VM C on another compute node.

  After creating the tap flow and tap service, I ping from VM A to
  VM B. I am able to get the ICMP reply message at VM C (thanks to the
  configuration done by the TaaS agent).

  Then I looked at the problem in br-int on the compute node (where VM A and
  VM B are located).

  The OF rules catch the ICMP reply but not the request, and ping works
  because of the NORMAL action in br-int.

  See the sample flows in br-int:

   cookie=0x0, duration=21656.671s, table=0, n_packets=10397,
  n_bytes=981610, idle_age=0, priority=20,in_port=6
  actions=NORMAL,mod_vlan_vid:3913,output:12

  [ICMP reply caught in the OF pipeline]

  cookie=0x0, duration=15.937s, table=0, n_packets=0, n_bytes=0, idle_age=15,
priority=20,dl_vlan=4,dl_dst=fa:16:3e:c7:b5:42
actions=normal,mod_vlan_vid:3913,output:12

  [The ICMP request is supposed to be caught by this rule, but it wasn't :-)]

  My question is: why is this rule never hit? Please note that ping works
  as normal.

  Any clue? It may be a problem with the NORMAL action.

  Best regards
  Sothy

To manage notifications about this bug go to:
https://bugs.launchpad.net/tap-as-a-service/+bug/1544176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542819] Re: The details of security group contains "null"

2016-02-11 Thread Martin Hickey
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Importance: Undecided => Wishlist

** Tags added: rfe

** Changed in: python-neutronclient
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542819

Title:
  The details of security group contains "null"

Status in python-neutronclient:
  Triaged

Bug description:
  When using security groups, I found that some output of the security group
will be "null". This happens when the value is not specified.
  Under the same condition, "neutron security-group-rule-list" will report
"any". However, "neutron security-group-rule-show" will report an empty value.

  The details can be found at [1].

  I think that if a value is not specified for a security group rule, we
  could show "any" to the user. This would make the output consistent and
  easier to understand.

  [1]  http://paste.openstack.org/show/486190/
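
  A possible client-side approach, sketched purely for illustration (the
  field names below are assumptions and this is not the actual
  python-neutronclient code), is to substitute "any" for unset rule fields
  before printing:

    # Hypothetical formatter: show "any" instead of None for unset fields.
    ANY_FIELDS = ('protocol', 'remote_ip_prefix',
                  'port_range_min', 'port_range_max')

    def humanize_rule(rule):
        shown = dict(rule)
        for field in ANY_FIELDS:
            if shown.get(field) is None:
                shown[field] = 'any'
        return shown

    rule = {'direction': 'ingress', 'protocol': None,
            'port_range_min': None, 'port_range_max': None,
            'remote_ip_prefix': None}
    print(humanize_rule(rule))
    # {'direction': 'ingress', 'protocol': 'any', ...}

  Applying the same mapping in both "security-group-rule-list" and
  "security-group-rule-show" would keep the two commands consistent.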

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1542819/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543756] Re: RBAC: Port creation on a shared network failed if --fixed-ip is specified in 'neutron port-create' command

2016-02-11 Thread Kevin Benton
We can't let people that don't own the network select their own fixed
IP. Using the fixed IP field, someone can pick addresses outside of the
allocation pool so it's restricted to an owner-only operation.

It might be worth discussing whether we should allow them to select a
subnet_id but not a specific IP. Maybe change this to an RFE, because
it's going to be a policy change that we need to consider carefully.
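
For illustration only, a sketch of the kind of check that RFE would imply
(the names and structure here are assumptions, not Neutron's actual
implementation):

    # Hypothetical validation: non-owners of a shared network may pin a
    # subnet, but not a specific address, so allocation-pool enforcement
    # stays with the network owner.
    class FixedIPNotAllowed(Exception):
        pass

    def validate_fixed_ips(fixed_ips, network_tenant_id, request_tenant_id):
        if request_tenant_id == network_tenant_id:
            return  # owners keep full control, including out-of-pool IPs
        for entry in fixed_ips or []:
            if 'ip_address' in entry:
                raise FixedIPNotAllowed(
                    "Only subnet_id may be specified on a shared "
                    "network you do not own")

    # A non-owner asking only for a subnet would pass ...
    validate_fixed_ips([{'subnet_id': 'ff01f7ca-d838-42dc-8d86-1b2830bc4824'}],
                       network_tenant_id='demo', request_tenant_id='demo-2')
    # ... while an explicit ip_address would raise FixedIPNotAllowed.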

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543756

Title:
  RBAC: Port creation on a shared network failed if --fixed-ip is
  specified in 'neutron port-create' command

Status in neutron:
  Opinion

Bug description:
  The network demo-net, owned by user demo, is shared with tenant
  demo-2.  The sharing is created by demo using the command

  neutron rbac-create --type network --action access_as_shared --target-
  tenant  demo-net

  
  A user on the demo-2 tenant can see the network demo-net:

  stack@Ubuntu-38:~/DEVSTACK/demo$ neutron net-list
  
+--+--+--+
  | id   | name | subnets   
   |
  
+--+--+--+
  | 85bb7612-e5fa-440c-bacf-86c5929298f3 | demo-net | 
e66487b6-430b-4fb1-8a87-ed28dd378c43 10.1.2.0/24 |
  |  |  | 
ff01f7ca-d838-42dc-8d86-1b2830bc4824 10.1.3.0/24 |
  | 5beb4080-4cf0-4921-9bbf-a7f65df6367f | public   | 
57485a80-815c-45ef-a0d1-ce11939d7fab |
  |  |  | 
38d1ddad-8084-4d32-b142-240e16fcd5df |
  
+--+--+--+


  
  The owner of network demo-net is able to create a port using the command 
'neutron port-create demo-net --fixed-ip ... :
  stack@Ubuntu-38:~/DEVSTACK/devstack$ neutron port-create demo-net --fixed-ip 
subnet_id=ff01f7ca-d838-42dc-8d86-1b2830bc4824
  Created a new port:
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | binding:vnic_type | normal  
|
  | device_id | 
|
  | device_owner  | 
|
  | dns_name  | 
|
  | fixed_ips | {"subnet_id": 
"ff01f7ca-d838-42dc-8d86-1b2830bc4824", "ip_address": "10.1.3.6"} |
  | id| 37402f22-fcd5-4b01-8b01-c6734573d7a8
|
  | mac_address   | fa:16:3e:44:71:ad   
|
  | name  | 
|
  | network_id| 85bb7612-e5fa-440c-bacf-86c5929298f3
|
  | security_groups   | 7db11aa0-3d0d-40d1-ae25-e4c02b8886ce
|
  | status| DOWN
|
  | tenant_id | 54913ee1ca89458ba792d685c799484d
|
  
+---+-+


  The user demo-2 of tenant demo-2 is able to create a port using the
  network demo-net:

  stack@Ubuntu-38:~/DEVSTACK/demo$ neutron port-create demo-net
  Created a new port:
  
+---+-+
  | Field | Value   
|
  
+---+-+
  | admin_state_up| True
|
  | allowed_address_pairs | 
|
  | bind

[Yahoo-eng-team] [Bug 1544469] [NEW] Use Keystone Service catalog to search endpoints

2016-02-11 Thread Kairat Kushaev
Public bug reported:

Glance uses a custom function to search for an endpoint in the service catalog:
https://github.com/openstack/glance/blob/master/glance/common/auth.py#L259
But that functionality is also available in python-keystoneclient:
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/service_catalog.py#L352
So we can reduce code duplication and simply use the logic from
keystoneclient to search for the endpoint.

P.S. It looks like we need to initialize ServiceCatalog in the request
context. We probably need a separate attribute for ServiceCatalog (and to
deprecate the current attribute in the request context), so some additional
work is needed.
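
A rough sketch of what using the keystoneclient helper might look like (the
keyword names and the exact dict shape expected by factory() can vary between
keystoneclient versions and Keystone API versions, so treat this as an
illustration rather than a drop-in patch):

    from keystoneclient import service_catalog

    def get_image_endpoint(auth_response_body):
        # auth_response_body: the deserialized Keystone auth response the
        # context already carries; factory() picks the catalog version.
        catalog = service_catalog.ServiceCatalog.factory(auth_response_body)
        return catalog.url_for(service_type='image',
                               endpoint_type='publicURL')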

** Affects: glance
 Importance: Undecided
 Assignee: Kairat Kushaev (kkushaev)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Kairat Kushaev (kkushaev)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1544469

Title:
  Use Keystone Service catalog to search endpoints

Status in Glance:
  In Progress

Bug description:
  Glance uses a custom function to search for an endpoint in the service catalog:
  https://github.com/openstack/glance/blob/master/glance/common/auth.py#L259
  But that functionality is also available in python-keystoneclient:
  
https://github.com/openstack/python-keystoneclient/blob/master/keystoneclient/service_catalog.py#L352
  So we can reduce code duplication and simply use the logic from
keystoneclient to search for the endpoint.

  P.S. It looks like we need to initialize ServiceCatalog in the request
  context. We probably need a separate attribute for ServiceCatalog (and
  to deprecate the current attribute in the request context), so some
  additional work is needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1544469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544458] [NEW] SCTP packets from VM are not NATed

2016-02-11 Thread BALAJI SRINIVASAN
Public bug reported:

We have installed the Kilo release

[root@sienna ~]# uname -a
Linux sienna 3.10.0-327.4.5.el7.x86_64 #1 SMP Mon Jan 25 22:07:14 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux
[root@sienna ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"

[root@sienna ~]# openstack --version
openstack 1.0.3
[root@sienna ~]# neutron --version
2.4.0
[root@sienna ~]# nova --version
2.23.0

After installing the Kilo release, we found that SCTP packets from the VM were
being dropped at the host.
We found that this was a known issue
(https://bugs.launchpad.net/neutron/+bug/1460741), downloaded the neutron
2015.1.2 patch, and applied it.

After that, the SCTP packets from the VM were transmitted from the host, but
with the private IP address (192.168.x.x), i.e. without SNAT being performed.

SNAT is being done for UDP packets, though.

Only SCTP packets are sent out with private IP addresses.

Please confirm whether this is a known issue and whether any fix/patch is
available for this in Neutron for the Kilo release.
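
A quick diagnostic (a sketch only, not an official troubleshooting step; it
assumes the L3 agent keeps its NAT rules in qrouter- network namespaces) is
to list the SNAT/MASQUERADE rules per router namespace and check whether any
of them are protocol-specific:

    import subprocess

    def dump_nat_rules():
        # Print the SNAT-related rules of every qrouter- namespace.
        names = subprocess.check_output(["ip", "netns", "list"]).decode().split()
        for ns in names:
            if not ns.startswith("qrouter-"):
                continue
            rules = subprocess.check_output(
                ["ip", "netns", "exec", ns, "iptables", "-t", "nat", "-S"])
            print("=== %s ===" % ns)
            for line in rules.decode().splitlines():
                if "SNAT" in line or "MASQUERADE" in line:
                    print(line)

    dump_nat_rules()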

Thank you
Balaji Srinivasan

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544458

Title:
  SCTP packets from VM are not NATed

Status in neutron:
  New

Bug description:
  We have installed the Kilo release

  [root@sienna ~]# uname -a
  Linux sienna 3.10.0-327.4.5.el7.x86_64 #1 SMP Mon Jan 25 22:07:14 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux
  [root@sienna ~]# cat /etc/os-release
  NAME="CentOS Linux"
  VERSION="7 (Core)"
  ID="centos"
  ID_LIKE="rhel fedora"
  VERSION_ID="7"

  [root@sienna ~]# openstack --version
  openstack 1.0.3
  [root@sienna ~]# neutron --version
  2.4.0
  [root@sienna ~]# nova --version
  2.23.0

  After installing the Kilo release, we found that SCTP packets from the VM
were being dropped at the host.
  We found that this was a known issue
(https://bugs.launchpad.net/neutron/+bug/1460741), downloaded the neutron
2015.1.2 patch, and applied it.

  After that, the SCTP packets from the VM were transmitted from the host,
  but with the private IP address (192.168.x.x), i.e. without SNAT being
  performed.

  SNAT is being done for UDP packets, though.

  Only SCTP packets are sent out with private IP addresses.

  Please confirm whether this is a known issue and whether any fix/patch
  is available for this in Neutron for the Kilo release.

  Thank you
  Balaji Srinivasan

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp