[Yahoo-eng-team] [Bug 1599296] Re: exception.Conflict does not give specifics

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409010
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=34f1201a2f1250d77ca21c51e89c2edd313c8597
Submitter: Jenkins
Branch: master

commit 34f1201a2f1250d77ca21c51e89c2edd313c8597
Author: Richard 
Date:   Fri Dec 9 08:13:15 2016 +

Add id to conflict error if caused by duplicate id

This patch adds test coverage for duplicate errors on objects that
would raise a Conflict exception due to duplicate IDs. It also adds
the ID of the object in place of the name in the duplicate error
message if a name field for the object cannot be found.

Closes-Bug: #1599296

Change-Id: Ief9ef3d29ee9ac2da1c205247601399fb6f79d7b


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1599296

Title:
  exception.Conflict does not give specifics

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When attempting to create a duplicate entity, the exception raised
  only provides the resource type that was involved in the request [0].
  It would be nice to know exactly what caused the unique constraint to
  fail. Many deployers create multiple keystone objects simultaneously,
  and it can be confusing to work out which error belongs to which
  object being created.

  
  [0] http://cdn.pasteraw.com/dh2hbk5m6hkhagpbjog07ve2ud1z71j
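The fix described above can be sketched as a small helper that prefers the
object's human-readable name and falls back to its ID when no name field
exists. The function name and message wording below are hypothetical, not
keystone's actual code:

```python
def conflict_message(resource_type, obj):
    """Build a duplicate-entry message that identifies the conflicting object.

    Hypothetical sketch of the fix: use the object's name when present,
    otherwise fall back to its id, so the Conflict error always says
    which object tripped the unique constraint.
    """
    identifier = obj.get('name') or obj.get('id')
    return ("Conflict occurred attempting to store %s - "
            "duplicate entry found with value %s" % (resource_type, identifier))
```

With a message like this, deployers creating many objects concurrently can
match each Conflict error back to the object that caused it.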

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1599296/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649762] [NEW] KeyError in vpn agent

2016-12-13 Thread YAMAMOTO Takashi
Public bug reported:

eg. http://logs.openstack.org/11/410511/2/check/gate-neutron-vpnaas-
dsvm-api-ubuntu-xenial-nv/7e31cf8/logs/screen-neutron-vpnaas.txt.gz

2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
[req-3b108eca-b4cb-470e-be8e-d20d2829974e tempest-BaseTestCase-373340779 -] 
Exception during message handling
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 884, in vpnservice_updated
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
self.sync(context, [router] if router else [])
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server return 
f(*args, **kwargs)
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 1049, in sync
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
self.report_status(context)
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 1005, in report_status
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server if not 
self.should_be_reported(context, process):
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 999, in should_be_reported
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
process.vpnservice["tenant_id"] == context.tenant_id):
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server KeyError: 
'tenant_id'
2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server
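The KeyError comes from the `process.vpnservice["tenant_id"]` lookup in
should_be_reported(). A defensive sketch (hypothetical, not necessarily the
merged fix) would use dict.get() so a payload keyed by project_id, or missing
the field entirely, does not crash the agent:

```python
def matches_tenant(vpnservice, context_tenant_id):
    # Hypothetical sketch: tolerate payloads that carry "project_id"
    # instead of "tenant_id" (or neither) rather than raising KeyError.
    tenant = vpnservice.get("tenant_id") or vpnservice.get("project_id")
    return tenant is not None and tenant == context_tenant_id
```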

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas

** Tags added: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649762

Title:
  KeyError in vpn agent

Status in neutron:
  New

Bug description:
  eg. http://logs.openstack.org/11/410511/2/check/gate-neutron-vpnaas-
  dsvm-api-ubuntu-xenial-nv/7e31cf8/logs/screen-neutron-vpnaas.txt.gz

  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
[req-3b108eca-b4cb-470e-be8e-d20d2829974e tempest-BaseTestCase-373340779 -] 
Exception during message handling
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
155, in _process_incoming
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
222, in dispatch
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
192, in _do_dispatch
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/opt/stack/new/neutron-vpnaas/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 884, in vpnservice_updated
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server 
self.sync(context, [router] if router else [])
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-12-14 03:50:37.103 1970 ERROR oslo_messaging.rpc.server return 

[Yahoo-eng-team] [Bug 1649754] [NEW] nova api error InternalServerError HTTP 500

2016-12-13 Thread xianba
Public bug reported:

When using Rally to test OpenStack, I hit an error: out of 4 runs,
1 fails. Details are as follows:

Iteration   Exception type  Exception message
►   4   ClientException Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.  (HTTP 500) (Request-ID: 
req-537dd75e-d75c-450d-afff-9271a69c4aae)

and rally task is:
{
  "NovaSecGroup.create_and_list_secgroups": [
{
  "runner": {
"type": "constant", 
"concurrency": 4, 
"times": 4
  }, 
  "args": {
"security_group_count": 5, 
"rules_per_security_group": 5
  }, 
  "sla": {
"failure_rate": {
  "max": 0
}
  }, 
  "context": {
"users": {
  "users_per_tenant": 2, 
  "project_domain": "default", 
  "user_choice_method": "random", 
  "user_domain": "default", 
  "tenants": 2, 
  "resource_management_workers": 20
}, 
"quotas": {
  "neutron": {
"security_group": -1, 
"security_group_rule": -1
  }
}
  }
}
  ]
}

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649754

Title:
  nova api error InternalServerError HTTP 500

Status in OpenStack Compute (nova):
  New

Bug description:
  When using Rally to test OpenStack, I hit an error: out of 4 runs,
  1 fails. Details are as follows:

  Iteration Exception type  Exception message
  ► 4   ClientException Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.  (HTTP 500) (Request-ID: 
req-537dd75e-d75c-450d-afff-9271a69c4aae)

  and rally task is:
  {
"NovaSecGroup.create_and_list_secgroups": [
  {
"runner": {
  "type": "constant", 
  "concurrency": 4, 
  "times": 4
}, 
"args": {
  "security_group_count": 5, 
  "rules_per_security_group": 5
}, 
"sla": {
  "failure_rate": {
"max": 0
  }
}, 
"context": {
  "users": {
"users_per_tenant": 2, 
"project_domain": "default", 
"user_choice_method": "random", 
"user_domain": "default", 
"tenants": 2, 
"resource_management_workers": 20
  }, 
  "quotas": {
"neutron": {
  "security_group": -1, 
  "security_group_rule": -1
}
  }
}
  }
]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649754/+subscriptions



[Yahoo-eng-team] [Bug 1649747] Re: XenAPI: With ovs polling mode, Neutron gets the error of oslo_rootwrap.wrapper.FilterMatchNotExecutable

2016-12-13 Thread Jianghua Wang
** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649747

Title:
  XenAPI: With ovs polling mode, Neutron gets the error of
  oslo_rootwrap.wrapper.FilterMatchNotExecutable

Status in neutron:
  New

Bug description:
  When polling mode is enabled for XenAPI, the Neutron q-domua agent
  always gets an oslo_rootwrap.wrapper.FilterMatchNotExecutable error.
  See the following log.

  With OVS polling mode, OVS reports data to Neutron on every OVS DB
  update, so partial data for VIFs can be reported before the OVS DB
  has synced up with xapi. Neutron then invokes the xe command to get
  the iface-id, but xe is not in the allowed command list, so the call
  raises an error.

  
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L418
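A sketch of the kind of rootwrap filter that would whitelist the xe call.
The file location and entry below are assumptions based on oslo.rootwrap
conventions; the dom0 rootwrap wrapper may need its own mechanism:

```ini
# Hypothetical filter entry, e.g. in a rootwrap filters file
# under /etc/neutron/rootwrap.d/
[Filters]
xe: CommandFilter, xe, root
```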

  2016-12-12 09:59:31.449 32172 DEBUG neutron.agent.linux.async_process [-] 
Output received from [ovsdb-client monitor tcp:192.168.33.2:6640 Interface 
name,ofport,external_ids --format=json]: 
{"data":[["2d32873e-2b35-47ba-90b3-81c953fd8193","old",null,["set",[]],null],["","new","vif16.0",1,["map",[["attached-mac","fa:16:3e:c9:46:63"],["xs-network-uuid","75ba394a-ed6a-549e-d608-9ad43461c462"],["xs-vif-uuid","f72ce34c-f1cb-f54c-3132-bfd97fabef37"],["xs-vm-uuid","c91f272e-a710-2c00-818d-df05953f34d9"],"headings":["row","action","name","ofport","external_ids"]}
 _read_stdout /opt/stack/new/neutron/neutron/agent/linux/async_process.py:238
  2016-12-12 09:59:31.469 32172 DEBUG neutron.agent.linux.async_process [-] 
Output received from [ovsdb-client monitor tcp:192.168.33.2:6640 Interface 
name,ofport,external_ids --format=json]: 
{"data":[["2d32873e-2b35-47ba-90b3-81c953fd8193","old",null,null,["map",[["attached-mac","fa:16:3e:c9:46:63"],["xs-network-uuid","75ba394a-ed6a-549e-d608-9ad43461c462"],["xs-vif-uuid","f72ce34c-f1cb-f54c-3132-bfd97fabef37"],["xs-vm-uuid","c91f272e-a710-2c00-818d-df05953f34d9",["","new","vif16.0",1,["map",[["attached-mac","fa:16:3e:c9:46:63"],["iface-id","81329e07-4df2-4239-a2c6-1cba950741a4"],["iface-status","active"],["vm-id","c91f272e-a710-2c00-818d-df05953f34d9"],["xs-network-uuid","75ba394a-ed6a-549e-d608-9ad43461c462"],["xs-vif-uuid","f72ce34c-f1cb-f54c-3132-bfd97fabef37"],["xs-vm-uuid","c91f272e-a710-2c00-818d-df05953f34d9"],"headings":["row","action","name","ofport","external_ids"]}
 _read_stdout /opt/stack/new/neutron/neutron/agent/linux/async_process.py:238
  2016-12-12 09:59:31.469 32172 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 7 __log_wakeup 
/usr/local/lib/python2.7/dist-packages/ovs/poller.py:202
  2016-12-12 09:59:31.511 32172 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 7 __log_wakeup 
/usr/local/lib/python2.7/dist-packages/ovs/poller.py:202
  2016-12-12 09:59:31.711 32172 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 7 __log_wakeup 
/usr/local/lib/python2.7/dist-packages/ovs/poller.py:202
  2016-12-12 09:59:32.346 32172 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] Agent rpc_loop - iteration:251 
started rpc_loop 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1943
  2016-12-12 09:59:32.351 32172 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] ofctl request 
version=0x4,msg_type=0x12,msg_len=0x38,xid=0x9a58c035,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1)
 result 
[OFPFlowStatsReply(body=[OFPFlowStats(byte_count=0,cookie=9912536195431297602L,duration_nsec=71200,duration_sec=504,flags=0,hard_timeout=0,idle_timeout=0,instructions=[],length=56,match=OFPMatch(oxm_fields={}),packet_count=0,priority=0,table_id=23)],flags=0,type=1)]
 _send_msg 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py:93
  2016-12-12 09:59:32.353 32172 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] Agent rpc_loop - iteration:251 - 
starting polling. Elapsed:0.007 rpc_loop 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1994
  2016-12-12 09:59:32.354 32172 DEBUG neutron.agent.linux.utils 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] Running command: 
['/usr/local/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'xe', 'vif-param-get', 'param-name=other-config', 'param-key=nicira-iface-id', 
'uuid=f72ce34c-f1cb-f54c-3132-bfd97fabef37'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:92
  2016-12-12 09:59:33.564 32172 ERROR neutron.agent.linux.utils 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] Exit code: 1; Stdin: ; Stdout: ; 
Stderr: Traceback (most recent call last):
File 

[Yahoo-eng-team] [Bug 1649750] [NEW] Cannot create a provider net without segment_id if only config a physical net name

2016-12-13 Thread zhaobo
Public bug reported:

A provider network cannot be created if ml2_vlan_range is configured
with only a physical network name, e.g. "vlan_net".

cmd1: openstack network segment create net-name --physical-network vlan_net 
--network f5cfe320-f5bb-41b1-b39f-ee5c32f8f356 --network-type vlan
cmd2: neutron net-create --provider:physical_network vlan_net 
--provider:network_type vlan net-name

Both commands return a NoNetworkAvailable exception. This is not
friendly; we should tell users how to use the configured physical
network and what they can do next.
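For reference, a VLAN provider network without an explicit segmentation id
can only be allocated automatically when the physical network is configured
with a VLAN range (the range below is illustrative, not from the report):

```ini
[ml2_type_vlan]
# A bare name ("vlan_net") leaves no allocatable segments; a range
# is needed for automatic segment allocation.
network_vlan_ranges = vlan_net:100:200
```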

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649750

Title:
  Cannot create a provider net without segment_id if only config a
  physical net name

Status in neutron:
  In Progress

Bug description:
  A provider network cannot be created if ml2_vlan_range is configured
  with only a physical network name, e.g. "vlan_net".

  cmd1: openstack network segment create net-name --physical-network vlan_net 
--network f5cfe320-f5bb-41b1-b39f-ee5c32f8f356 --network-type vlan
  cmd2: neutron net-create --provider:physical_network vlan_net 
--provider:network_type vlan net-name

  Both commands return a NoNetworkAvailable exception. This is not
  friendly; we should tell users how to use the configured physical
  network and what they can do next.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649750/+subscriptions



[Yahoo-eng-team] [Bug 1649747] [NEW] XenAPI: With ovs polling mode, Neutron gets the error of oslo_rootwrap.wrapper.FilterMatchNotExecutable

2016-12-13 Thread Jianghua Wang
Public bug reported:

When polling mode is enabled for XenAPI, the Neutron q-domua agent
always gets an oslo_rootwrap.wrapper.FilterMatchNotExecutable error.
See the following log.

With OVS polling mode, OVS reports data to Neutron on every OVS DB
update, so partial data for VIFs can be reported before the OVS DB
has synced up with xapi. Neutron then invokes the xe command to get
the iface-id, but xe is not in the allowed command list, so the call
raises an error.

https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L418

2016-12-12 09:59:31.449 32172 DEBUG neutron.agent.linux.async_process [-] 
Output received from [ovsdb-client monitor tcp:192.168.33.2:6640 Interface 
name,ofport,external_ids --format=json]: 
{"data":[["2d32873e-2b35-47ba-90b3-81c953fd8193","old",null,["set",[]],null],["","new","vif16.0",1,["map",[["attached-mac","fa:16:3e:c9:46:63"],["xs-network-uuid","75ba394a-ed6a-549e-d608-9ad43461c462"],["xs-vif-uuid","f72ce34c-f1cb-f54c-3132-bfd97fabef37"],["xs-vm-uuid","c91f272e-a710-2c00-818d-df05953f34d9"],"headings":["row","action","name","ofport","external_ids"]}
 _read_stdout /opt/stack/new/neutron/neutron/agent/linux/async_process.py:238
2016-12-12 09:59:31.469 32172 DEBUG neutron.agent.linux.async_process [-] 
Output received from [ovsdb-client monitor tcp:192.168.33.2:6640 Interface 
name,ofport,external_ids --format=json]: 
{"data":[["2d32873e-2b35-47ba-90b3-81c953fd8193","old",null,null,["map",[["attached-mac","fa:16:3e:c9:46:63"],["xs-network-uuid","75ba394a-ed6a-549e-d608-9ad43461c462"],["xs-vif-uuid","f72ce34c-f1cb-f54c-3132-bfd97fabef37"],["xs-vm-uuid","c91f272e-a710-2c00-818d-df05953f34d9",["","new","vif16.0",1,["map",[["attached-mac","fa:16:3e:c9:46:63"],["iface-id","81329e07-4df2-4239-a2c6-1cba950741a4"],["iface-status","active"],["vm-id","c91f272e-a710-2c00-818d-df05953f34d9"],["xs-network-uuid","75ba394a-ed6a-549e-d608-9ad43461c462"],["xs-vif-uuid","f72ce34c-f1cb-f54c-3132-bfd97fabef37"],["xs-vm-uuid","c91f272e-a710-2c00-818d-df05953f34d9"],"headings":["row","action","name","ofport","external_ids"]}
 _read_stdout /opt/stack/new/neutron/neutron/agent/linux/async_process.py:238
2016-12-12 09:59:31.469 32172 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 7 __log_wakeup 
/usr/local/lib/python2.7/dist-packages/ovs/poller.py:202
2016-12-12 09:59:31.511 32172 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 7 __log_wakeup 
/usr/local/lib/python2.7/dist-packages/ovs/poller.py:202
2016-12-12 09:59:31.711 32172 DEBUG neutron.agent.ovsdb.native.vlog [-] 
[POLLIN] on fd 7 __log_wakeup 
/usr/local/lib/python2.7/dist-packages/ovs/poller.py:202
2016-12-12 09:59:32.346 32172 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] Agent rpc_loop - iteration:251 
started rpc_loop 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1943
2016-12-12 09:59:32.351 32172 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] ofctl request 
version=0x4,msg_type=0x12,msg_len=0x38,xid=0x9a58c035,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=23,type=1)
 result 
[OFPFlowStatsReply(body=[OFPFlowStats(byte_count=0,cookie=9912536195431297602L,duration_nsec=71200,duration_sec=504,flags=0,hard_timeout=0,idle_timeout=0,instructions=[],length=56,match=OFPMatch(oxm_fields={}),packet_count=0,priority=0,table_id=23)],flags=0,type=1)]
 _send_msg 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py:93
2016-12-12 09:59:32.353 32172 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] Agent rpc_loop - iteration:251 - 
starting polling. Elapsed:0.007 rpc_loop 
/opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1994
2016-12-12 09:59:32.354 32172 DEBUG neutron.agent.linux.utils 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] Running command: 
['/usr/local/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 
'xe', 'vif-param-get', 'param-name=other-config', 'param-key=nicira-iface-id', 
'uuid=f72ce34c-f1cb-f54c-3132-bfd97fabef37'] create_process 
/opt/stack/new/neutron/neutron/agent/linux/utils.py:92
2016-12-12 09:59:33.564 32172 ERROR neutron.agent.linux.utils 
[req-ba252219-a9c6-4354-9981-3cfeff7ca54f - -] Exit code: 1; Stdin: ; Stdout: ; 
Stderr: Traceback (most recent call last):
  File "/usr/local/bin/neutron-rootwrap-xen-dom0", line 6, in <module>
exec(compile(open(__file__).read(), __file__, 'exec'))
  File "/opt/stack/new/neutron/bin/neutron-rootwrap-xen-dom0", line 151, in 
<module>
main()
  File "/opt/stack/new/neutron/bin/neutron-rootwrap-xen-dom0", line 138, in main
filter_command(exec_name, config['filters_path'], user_args, 
config['exec_dirs'])
  File 

[Yahoo-eng-team] [Bug 1646428] Re: Protocol parameters not validated when updating firewall rule

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407311
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=37709b3e99eb29f359b4ed744a0fe691f24a3229
Submitter: Jenkins
Branch: master

commit 37709b3e99eb29f359b4ed744a0fe691f24a3229
Author: Ha Van Tu 
Date:   Tue Dec 6 12:19:40 2016 +0700

Adding validation protocol parameters when updating firewall rules

This patch adds validation of protocol parameters when updating
firewall rules, to prevent updating an ICMP rule with 
"destination_port" or "source_port".

Change-Id: Ieb2ac42e009f08f4708afb3edc2b2e7ae01af06d
Closes-Bug: #1646428


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1646428

Title:
  Protocol parameters not validated when updating firewall rule

Status in neutron:
  Fix Released

Bug description:
  When we create an ICMP firewall rule with port range parameters, the
  Neutron server returns an error.
  However, when we create a TCP firewall rule with port range parameters and 
then edit that rule into an ICMP one, the Neutron server raises no error.
  We need to validate the rule before updating it.
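The missing check can be sketched as validation over the merged
(stored + updated) rule, rejecting ICMP rules that still carry port
parameters. Names below are illustrative, not the merged patch's helpers:

```python
def validate_rule_update(current_rule, updates):
    # Hypothetical sketch: apply the update on top of the stored rule,
    # then reject ICMP rules that still carry port parameters. This
    # catches the TCP-with-ports -> ICMP update path described above.
    merged = dict(current_rule, **updates)
    if merged.get("protocol") == "icmp":
        for key in ("source_port", "destination_port"):
            if merged.get(key) is not None:
                raise ValueError("%s is not allowed for ICMP rules" % key)
    return merged
```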

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1646428/+subscriptions



[Yahoo-eng-team] [Bug 1649733] Re: TypeError: IPAddress('172.19.0.2') is not JSON serializable

2016-12-13 Thread YAMAMOTO Takashi
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: networking-midonet
   Status: Fix Committed => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649733

Title:
  TypeError: IPAddress('172.19.0.2') is not JSON serializable

Status in networking-midonet:
  In Progress
Status in neutron:
  In Progress

Bug description:
  "TypeError: IPAddress('172.19.0.2') is not JSON serializable" seen on
  gate

  eg. http://logs.openstack.org/51/410451/1/check/gate-tempest-dsvm-
  networking-midonet-ml2-ubuntu-
  xenial/b810ef8/logs/screen-q-svc.txt.gz#_2016-12-13_23_29_12_286

  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource 
[req-b4c0541f-8e87-4539-a8ad-c4a2b10098cf admin -] update failed: No details.
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 612, in update
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 92, in wrapped
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 88, in wrapped
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 128, in wrapped
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource 
traceback.format_exc())
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/api.py", line 123, in wrapped
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 660, in _update
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 48, in 
wrapper
  2016-12-13 23:29:12.286 23652 ERROR neutron.api.v2.resource return 
method(*args, **kwargs)
  2016-12-13 23:29:12.286 23652 ERROR 
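The error usually means a netaddr IPAddress object reached json.dumps
unconverted. A minimal sketch of the failure mode and a workaround, using a
stand-in class in place of netaddr.IPAddress:

```python
import json

class IPAddress:
    # Stand-in for netaddr.IPAddress: json cannot serialize it directly.
    def __init__(self, addr):
        self.addr = addr

    def __str__(self):
        return self.addr

def jsonable(obj):
    # Fallback hook for json.dumps: coerce address objects to strings.
    if isinstance(obj, IPAddress):
        return str(obj)
    raise TypeError("%r is not JSON serializable" % obj)

doc = json.dumps({"fixed_ip": IPAddress("172.19.0.2")}, default=jsonable)
```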

[Yahoo-eng-team] [Bug 1608842] Re: [api-ref] The 'id' parameters are defined as 'optional' in os-volume_attachments

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/349863
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6ab20bc52392ea3eb4dcc1573fc842558eeb889e
Submitter: Jenkins
Branch: master

commit 6ab20bc52392ea3eb4dcc1573fc842558eeb889e
Author: Takashi NATSUME 
Date:   Tue Aug 2 17:19:15 2016 +0900

api-ref: Fix 'id' (attachment_id) parameters

At first, the 'attachment_id_resp' in parameters.yaml was defined
as 'required' in I3789a4ad36e30728024f2aa122403b0e53b1e741
for os-volume_attachments.inc.
Then it was changed to 'optional' in
I0c1d183c5aaf6fb796be30fa5627bd5644ea689f
for os-volumes.inc.
So currently 'id' (attachment_id) parameters in
os-volume_attachments.inc are wrong.
They should be 'required'. So fix them.

Change-Id: I403a9eb1b08a840cbb2b82cb37f1b49c6edb87c9
Closes-Bug: #1608842


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1608842

Title:
  [api-ref] The 'id' parameters are defined as 'optional' in os-
  volume_attachments

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://developer.openstack.org/api-ref/compute/?expanded=#servers-
  with-volume-attachments-servers-os-volume-attachments

  In os-volume_attachments of api-ref, the 'id' (attachment ID) parameters are 
defined as 'optional'.
  But they are actually not optional.

  
https://github.com/openstack/nova/blob/9fdb5a43f2d2853f67e28ed33a713c92c99e6869/nova/api/openstack/compute/volumes.py#L225
  
https://github.com/openstack/nova/blob/9fdb5a43f2d2853f67e28ed33a713c92c99e6869/nova/api/openstack/compute/volumes.py#L339

  Originally they are defined as 'required'.
  But it was changed by the following patch.

  https://review.openstack.org/#/c/320048/
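After the fix, the parameter definition in nova's api-ref parameters.yaml
would look roughly like this (the description text is assumed for
illustration):

```yaml
# Hypothetical parameters.yaml entry marking the attachment id required.
attachment_id_resp:
  description: |
    The UUID of the attachment.
  in: body
  required: true
  type: string
```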

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1608842/+subscriptions



[Yahoo-eng-team] [Bug 1627902] Re: DHCP agent conflicting with dynamic IPv6 addresses

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/406428
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=21bb77667007b0b140c17d9104204943c4a0f4cc
Submitter: Jenkins
Branch:master

commit 21bb77667007b0b140c17d9104204943c4a0f4cc
Author: Brian Haley 
Date:   Fri Dec 2 23:07:29 2016 -0500

Correctly configure IPv6 addresses on upgrades

When starting the dhcp-agent after an upgrade, there could
be stale IPv6 addresses in the namespace that had been
configured via SLAAC.  These need to be removed, and the
same address added back statically, in order for the
agent to start up correctly.

To avoid the race condition where an IPv6 RA could arrive
while we are making this change, we must move the call
to disable RAs in the namespace from plug(), since devices
may already exist that are receiving packets.

Uncovered by the grenade tests.

Change-Id: I7e1e5d6c1fa938918aac3fb63888d20ff4088ba7
Closes-bug: #1627902


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627902

Title:
  DHCP agent conflicting with dynamic IPv6 addresses

Status in neutron:
  Fix Released

Bug description:
  Seen in the gate logs below. This completely breaks DHCP for that network
  because the agent tries to add an address that conflicts with one given to
  it via an RA. The cause is the merge of
  d86f1b87f01c53c3e0b085086133b311e5bf3ab5, which allowed the agent to be
  configured with stateless v6 addresses so it can serve metadata correctly.

  http://logs.openstack.org/12/343312/5/gate/gate-tempest-dsvm-neutron-full-ubuntu-xenial/c11b933/logs/screen-q-dhcp.txt.gz?level=TRACE#_2016-09-26_21_45_40_604

  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.linux.utils [-] Exit
  code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists

  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent [-] Unable to 
enable dhcp for 81d252a2-8207-4e8c-a286-07fb3494a3ec.
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/dhcp/agent.py", line 114, in call_driver
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/dhcp.py", line 212, in enable
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/dhcp.py", line 1396, in setup
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
namespace=network.namespace)
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/interface.py", line 129, in init_l3
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
device.addr.add(ip_cidr)
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 577, in add
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
self._as_root([net.version], tuple(args))
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 364, in _as_root
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
use_root_namespace=use_root_namespace)
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 95, in _as_root
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
log_fail_as_error=self.log_fail_as_error)
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 104, in _execute
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
log_fail_as_error=log_fail_as_error)
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent raise 
RuntimeError(msg)
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent RuntimeError: 
Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: File exists
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent 
  2016-09-26 21:45:40.604 13605 ERROR neutron.agent.dhcp.agent
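  The "File exists" failure can be illustrated with a small standalone sketch
  (this is not neutron's code): re-adding an address that SLAAC already
  configured fails with EEXIST, so the agent must first delete the stale
  address and then add it back statically, with RAs disabled so it cannot
  reappear in between.

  ```python
  # Toy model of the namespace's address set; not neutron's actual code.
  configured = {"2001:db8::1/64"}  # stale address picked up via SLAAC

  def add_addr(cidr):
      # Mirrors "ip addr add": fails if the address already exists.
      if cidr in configured:
          raise RuntimeError("RTNETLINK answers: File exists")
      configured.add(cidr)

  def del_addr(cidr):
      # Mirrors "ip addr del": removing a missing address is harmless here.
      configured.discard(cidr)

  # The fix: remove the dynamic address first, then re-add it statically.
  del_addr("2001:db8::1/64")
  add_addr("2001:db8::1/64")
  ```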

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1627902/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net

[Yahoo-eng-team] [Bug 1636157] Re: os-server-groups uses same policy.json rule for all CRUD operations

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/391113
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4a09c2210b3c07343411a06c676c2d85aa0e214f
Submitter: Jenkins
Branch: master

commit 4a09c2210b3c07343411a06c676c2d85aa0e214f
Author: Prashanth kumar reddy 
Date:   Thu Oct 27 07:09:01 2016 -0400

Separate CRUD policy for server_groups

The same policy rule (os_compute_api:os-server-groups) is being used
for all actions (show, index, delete, create) for server_groups REST
APIs. It is thus impossible to provide different RBAC for specific
actions based on roles. To address this changes are made to have
separate policy rules for each action.

It has been argued that index and show may not need separate policy
rules, but most other places in nova (and OpenStack in general) do
have separate policy rules for each action. This affords the ultimate
flexibility to deployers, who can obviously use the same rule if
that is what they want. One example where show and index may be
different is that if show is restricted based on some criteria, such
that a user is able to see some resources within the tenant but not
others, then list would need to be disallowed to prevent the user
from using list to see resources they cannot show.

Change-Id: Ica9e07f6e80257902b4a0cc44b65fd6bad008bba
Closes-Bug: #1636157


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1636157

Title:
  os-server-groups uses same policy.json rule for all CRUD operations

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  All os-server-groups REST calls use same rule
  
(https://github.com/openstack/nova/blob/master/nova/policies/server_groups.py#L29-L31)
  instead of having a separate rule for create, delete, show and list
  actions on server_groups. This takes away the ability to control RBAC at
  the level of individual REST API actions and is incorrect.

  Here are the references of rule being used with respective REST action.
  1. create 
(https://github.com/openstack/nova/blob/stable/newton/nova/api/openstack/compute/server_groups.py#L136)
  2. 
delete(https://github.com/openstack/nova/blob/stable/newton/nova/api/openstack/compute/server_groups.py#L89)
  3. show 
(https://github.com/openstack/nova/blob/stable/newton/nova/api/openstack/compute/server_groups.py#L78)
  4. 
list(https://github.com/openstack/nova/blob/stable/newton/nova/api/openstack/compute/server_groups.py#L120)

  
  Seen in Newton.
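  With per-action rules, a deployer's policy.json can then differentiate the
  operations. A sketch, assuming the rule names follow the usual
  os_compute_api pattern (the exact names and default rules come from the
  merged patch; the values here are illustrative):

  ```json
  {
      "os_compute_api:os-server-groups:create": "rule:admin_or_owner",
      "os_compute_api:os-server-groups:delete": "rule:admin_or_owner",
      "os_compute_api:os-server-groups:index": "rule:admin_or_owner",
      "os_compute_api:os-server-groups:show": "rule:admin_api"
  }
  ```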

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1636157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632856] Re: Incorrect datatype for Python 3 in api-samples functional test

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/385686
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a2d7ebd39b99db9c5728e2a256568b8bbbc4734f
Submitter: Jenkins
Branch: master

commit a2d7ebd39b99db9c5728e2a256568b8bbbc4734f
Author: EdLeafe 
Date:   Wed Oct 12 20:55:40 2016 +

Corrects the type of a base64 encoded string

The nova/tests/functional/api_sample_tests/test_servers.py contains the
ServersSampleBase class, and in its class definition creates a
'user_data' attribute by base64 encoding a string. However, this will
not work in Python 3, as the base64.b64encode() method requires bytes,
not a string. As a result, importing the test class fails and no tests
get run.

Note that this change doesn't fix all tests to work under Python 3; it
simply fixes the bug that prevents the tests from running at all under
Python 3.

Closes-Bug: #1632856

Change-Id: I35a7b02132bed0387a173b339f6204bf0e3269de


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632856

Title:
  Incorrect datatype for Python 3 in api-samples functional test

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The nova/tests/functional/api_sample_tests/test_servers.py file contains
  the ServersSampleBase class, whose class definition creates user data by
  base64-encoding a string. However, this does not work in Python 3, as the
  base64.b64encode() method requires bytes, not a string.

  This can be seen by simply running 'tox -e functional' under Python 3,
  which then emits a series of errors, most of which look like:

  Failed to import test module: 
nova.tests.functional.api_sample_tests.test_servers
  Traceback (most recent call last):
File 
"/home/ed/projects/nova/.tox/functional/lib/python3.4/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/ed/projects/nova/.tox/functional/lib/python3.4/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File 
"/home/ed/projects/nova/nova/tests/functional/api_sample_tests/test_servers.py",
 line 24, in 
  class ServersSampleBase(api_sample_base.ApiSampleTestBaseV21):
File 
"/home/ed/projects/nova/nova/tests/functional/api_sample_tests/test_servers.py",
 line 29, in ServersSampleBase
  user_data = base64.b64encode(user_data_contents)
File "/home/ed/projects/nova/.tox/functional/lib/python3.4/base64.py", line 
62, in b64encode
  encoded = binascii.b2a_base64(s)[:-1]
  TypeError: 'str' does not support the buffer interface

  
  This was reported in https://bugs.launchpad.net/nova/+bug/1632521, and a fix 
was issued that simply forced tox to use py27.
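  The failure and the fix can be reproduced in isolation: under Python 3 the
  string must be encoded to bytes before calling b64encode (the user-data
  content below is made up):

  ```python
  import base64

  user_data_contents = "#!/bin/bash\necho 'hello'"

  # Python 2 accepted a str here; Python 3 raises TypeError for str
  # input, so encode to bytes first.
  user_data = base64.b64encode(user_data_contents.encode("utf-8"))

  # Round-trips back to the original string.
  decoded = base64.b64decode(user_data).decode("utf-8")
  ```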

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1647451] Re: Post live migration step could fail due to auth errors

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/407147
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4a5ecf1e29c3bdbb022f98a5fba41d4e7df56d88
Submitter: Jenkins
Branch: master

commit 4a5ecf1e29c3bdbb022f98a5fba41d4e7df56d88
Author: Timofey Durakov 
Date:   Thu Dec 1 19:03:24 2016 +0300

fix for auth during live-migration

Post step could fail due to auth token expiration.
get_instance_nw_info fails with authentication required,
because there are several calls to neutron api, some of them
are admin context, while others try to use token from request
context. This patch ensures that if an admin context is initially used,
all subsequent calls will use the same initialized client.

Closes-Bug: #1647451

Change-Id: I8962a9cd472cbbb5b9b67c5b164ff29fd8f5558a


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1647451

Title:
  Post live migration step could fail due to auth errors

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  When live migration is finished, it is possible that the keystone auth
  token has already expired, which causes the post step to fail.

  Steps to reproduce
  ==
  There are two ways to reproduce this issue:
  1. Run a live migration of a heavily loaded instance, wait for the token
  to expire, and then try to execute live-migration-force-complete.
  2. Set a breakpoint in the _post_live_migration method of the compute
  manager; once the breakpoint is reached, run 'openstack token revoke' and
  then let nova continue normally.

  Expected result
  ===
  The live migration finishes successfully.

  Actual result
  =
  The post step fails, and the overall migration also fails.

  Environment
  ===
  1. I've tested this case on the Newton version, but the issue should be
  valid for the master branch too.

  2. Libvirt + KVM

  3. Ceph

  4. Neutron VXLAN
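  The shape of the fix can be sketched generically (this is not nova's
  actual code; all names here are invented): once an operation needs the
  admin client, it is created once and reused for every subsequent call, so
  a later call never falls back to the request's possibly expired user token.

  ```python
  class NetworkClientHelper:
      """Illustrative sketch only, not nova's implementation."""

      def __init__(self, make_admin_client):
          self._make_admin_client = make_admin_client
          self._admin_client = None

      def get_client(self, context, admin=False):
          if admin or getattr(context, "is_admin", False):
              # Create the admin client once and cache it, so every
              # subsequent admin call reuses the same initialized client.
              if self._admin_client is None:
                  self._admin_client = self._make_admin_client()
              return self._admin_client
          # The real code would build a per-request client from the
          # request context's token here.
          return object()
  ```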

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1647451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644369] Re: Support for DSCP marking in Linuxbridge L2 agent

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/401458
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fd3bf3327cada36ef57eefebabc78a112298be8d
Submitter: Jenkins
Branch: master

commit fd3bf3327cada36ef57eefebabc78a112298be8d
Author: Sławek Kapłoński 
Date:   Wed Nov 23 22:14:30 2016 +

DSCP packet marking support in Linuxbridge agent

Linuxbridge agent uses iptables rules in the POSTROUTING chain
in the mangle table to mark outgoing packets with the
DSCP mark value configured by the user in QoS policy.

DocImpact: DSCP Marking rule support is extended to the
   Linuxbridge L2 agent

Closes-Bug: #1644369

Change-Id: I47e44cb2e67ab73bd5ee0aa4cca47cb3d07e43f3


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644369

Title:
  Support for DSCP marking in Linuxbridge L2 agent

Status in neutron:
  Fix Released

Bug description:
  Now that #1468353 has merged, Neutron QoS supports DSCP marking rules in
  the Open vSwitch L2 agent.
  It would be nice to extend this support to the Linuxbridge agent as well.
  That can be done by setting DSCP marks with iptables rules in the "mangle"
  table's POSTROUTING chain.
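  The kind of rule involved can be sketched as a simple string builder (the
  match flags shown are an assumption for illustration, not neutron's exact
  rule):

  ```python
  def build_dscp_rule(device, dscp_mark):
      """Build an iptables fragment that marks packets leaving `device`
      with a DSCP value, in the mangle table's POSTROUTING chain.
      The physdev match used here is illustrative, not neutron's
      exact rule layout."""
      return ("-t mangle -A POSTROUTING "
              "-m physdev --physdev-in %s --physdev-is-bridged "
              "-j DSCP --set-dscp 0x%x" % (device, dscp_mark))

  # DSCP 26 (AF31) on a hypothetical tap device.
  rule = build_dscp_rule("tap1234abcd", 26)
  ```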

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649403] Re: nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSample.test_create_delete_server_with_instance_update randomly fails with ip_ad

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/409984
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5d3ae79455d1e601a1a718a274796a2354bf31e8
Submitter: Jenkins
Branch: master

commit 5d3ae79455d1e601a1a718a274796a2354bf31e8
Author: Matt Riedemann 
Date:   Mon Dec 12 19:17:49 2016 -0500

Make test_create_delete_server_with_instance_update deterministic

We're seeing race failures in this test because the ip_addresses
can show up in an unexpected notification depending on when
allocate_for_instance in the neutron network API completes and
the instance.info_cache.network_info data gets stored in the
database.

We can resolve the race by using SpawnIsSynchronousFixture which
makes the allocate_for_instance network API a blocking call
until the network_info is returned, and by that time it's stored
in the instance_info_cache in the database which is where the
versioned notification pulls it from in _send_versioned_instance_update.

Change-Id: Id482220b8332549a07efb4f82212d74e6e7b9d6c
Closes-Bug: #1649403


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649403

Title:
  
nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSample.test_create_delete_server_with_instance_update
  randomly fails with ip_addresses not set in notifications

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/90/409890/1/check/gate-nova-tox-db-functional-ubuntu-xenial/17015ce/console.html#_2016-12-12_19_24_33_892626

  The difference between the expected instance.update notifications and
  what we actually get is that u'ip_addresses' is empty ([]) in the actual
  results. There is probably a race where the fake virt driver isn't
  waiting for the (stubbed out) network allocation to complete.
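  The idea behind a synchronous-spawn fixture can be shown in a few lines
  (simplified sketch; the real fixture in the nova tree handles more cases):
  instead of scheduling the function on a greenthread, it runs the function
  immediately and returns a handle whose wait() yields the result, so the
  racing state is written before the caller proceeds.

  ```python
  class _SyncResult:
      """Mimics the interface of a spawned greenthread's handle."""

      def __init__(self, result):
          self._result = result

      def wait(self):
          return self._result

  def synchronous_spawn(func, *args, **kwargs):
      # Run the function right away instead of deferring it, so any
      # state it writes (e.g. the instance info cache) already exists
      # when the caller continues.
      return _SyncResult(func(*args, **kwargs))
  ```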

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649403/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649720] [NEW] DSCP packet marking support in Linuxbridge agent

2016-12-13 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/401458
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit fd3bf3327cada36ef57eefebabc78a112298be8d
Author: Sławek Kapłoński 
Date:   Wed Nov 23 22:14:30 2016 +

DSCP packet marking support in Linuxbridge agent

Linuxbridge agent uses iptables rules in the POSTROUTING chain
in the mangle table to mark outgoing packets with the
DSCP mark value configured by the user in QoS policy.

DocImpact: DSCP Marking rule support is extended to the
   Linuxbridge L2 agent

Closes-Bug: #1644369

Change-Id: I47e44cb2e67ab73bd5ee0aa4cca47cb3d07e43f3

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649720

Title:
  DSCP packet marking support in Linuxbridge agent

Status in neutron:
  New

Bug description:
  https://review.openstack.org/401458
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit fd3bf3327cada36ef57eefebabc78a112298be8d
  Author: Sławek Kapłoński 
  Date:   Wed Nov 23 22:14:30 2016 +

  DSCP packet marking support in Linuxbridge agent
  
  Linuxbridge agent uses iptables rules in the POSTROUTING chain
  in the mangle table to mark outgoing packets with the
  DSCP mark value configured by the user in QoS policy.
  
  DocImpact: DSCP Marking rule support is extended to the
 Linuxbridge L2 agent
  
  Closes-Bug: #1644369
  
  Change-Id: I47e44cb2e67ab73bd5ee0aa4cca47cb3d07e43f3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649703] [NEW] neutron-fwaas check jobs for FWaaS v2 fail intermittently

2016-12-13 Thread Nate Johnston
Public bug reported:

The following check jobs fail intermittently.  When one fails the other
usually succeeds.

- gate-neutron-fwaas-v2-dsvm-tempest
- gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv

Here is an example of the multinode failing but the singlenode
succeeding:

- singlenode fail: 
http://logs.openstack.org/92/391392/10/check/gate-neutron-fwaas-v2-dsvm-tempest/cf602b9/testr_results.html.gz
- multinode succeed: 
http://logs.openstack.org/92/391392/10/check/gate-grenade-dsvm-neutron-fwaas-multinode-nv/fb42351/console.html

Here is an example of singlenode failing but multinode succeeding:

- singlenode succeed: 
http://logs.openstack.org/20/408920/1/check/gate-neutron-fwaas-v2-dsvm-tempest/7e38030/testr_results.html.gz
- multinode fail: 
http://logs.openstack.org/20/408920/1/check/gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv/d3bbaac/testr_results.html.gz

Another example of same:

- singlenode succeed: 
http://logs.openstack.org/11/407311/2/check/gate-neutron-fwaas-v2-dsvm-tempest/0e52b7e/testr_results.html.gz
- multinode fail: 
http://logs.openstack.org/11/407311/2/check/gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv/73a4af8/testr_results.html.gz

SridarK commented on https://review.openstack.org/#/c/407311/ that this
appears to occur on delete of fwg.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649703

Title:
  neutron-fwaas check jobs for FWaaS v2 fail intermittently

Status in neutron:
  New

Bug description:
  The following check jobs fail intermittently.  When one fails the
  other usually succeeds.

  - gate-neutron-fwaas-v2-dsvm-tempest
  - gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv

  Here is an example of the multinode failing but the singlenode
  succeeding:

  - singlenode fail: 
http://logs.openstack.org/92/391392/10/check/gate-neutron-fwaas-v2-dsvm-tempest/cf602b9/testr_results.html.gz
  - multinode succeed: 
http://logs.openstack.org/92/391392/10/check/gate-grenade-dsvm-neutron-fwaas-multinode-nv/fb42351/console.html

  Here is an example of singlenode failing but multinode succeeding:

  - singlenode succeed: 
http://logs.openstack.org/20/408920/1/check/gate-neutron-fwaas-v2-dsvm-tempest/7e38030/testr_results.html.gz
  - multinode fail: 
http://logs.openstack.org/20/408920/1/check/gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv/d3bbaac/testr_results.html.gz

  Another example of same:

  - singlenode succeed: 
http://logs.openstack.org/11/407311/2/check/gate-neutron-fwaas-v2-dsvm-tempest/0e52b7e/testr_results.html.gz
  - multinode fail: 
http://logs.openstack.org/11/407311/2/check/gate-neutron-fwaas-v2-dsvm-tempest-multinode-nv/73a4af8/testr_results.html.gz

  SridarK commented on https://review.openstack.org/#/c/407311/ that
  this appears to occur on delete of fwg.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648516] Re: Mitaka error install

2016-12-13 Thread Lucas Alves Martins
Thanks. What I did to fix it was install devstack again on a new VM, and it
worked.

** Changed in: glance
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1648516

Title:
  Mitaka error install

Status in Glance:
  Fix Released

Bug description:
  Obtaining file:///opt/stack/glance
  Exception:
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/pip/basecommand.py", line 215, 
in main
  status = self.run(options, args)
File "/usr/local/lib/python2.7/dist-packages/pip/commands/install.py", line 
335, in run
  wb.build(autobuilding=True)
File "/usr/local/lib/python2.7/dist-packages/pip/wheel.py", line 749, in 
build
  self.requirement_set.prepare_files(self.finder)
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 380, 
in prepare_files
  ignore_dependencies=self.ignore_dependencies))
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_set.py", line 521, 
in _prepare_file
  req_to_install.check_if_exists()
File "/usr/local/lib/python2.7/dist-packages/pip/req/req_install.py", line 
1036, in check_if_exists
  self.req.name
File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 558, in get_distribution
  dist = get_provider(dist)
File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 432, in get_provider
  return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 968, in require
  needed = self.resolve(parse_requirements(requirements))
File 
"/usr/local/lib/python2.7/dist-packages/pip/_vendor/pkg_resources/__init__.py", 
line 859, in resolve
  raise VersionConflict(dist, req).with_context(dependent_req)
  ContextualVersionConflict: (oslo.concurrency 3.7.1 
(/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('oslo.concurrency>=3.8.0'), set(['glance-store']))
  +inc/python:pip_install:1  exit_trap
  +./stack.sh:exit_trap:474  local r=2
  ++./stack.sh:exit_trap:475  jobs -p
  +./stack.sh:exit_trap:475  jobs=
  +./stack.sh:exit_trap:478  [[ -n '' ]]
  +./stack.sh:exit_trap:484  kill_spinner
  +./stack.sh:kill_spinner:370   '[' '!' -z '' ']'
  +./stack.sh:exit_trap:486  [[ 2 -ne 0 ]]
  +./stack.sh:exit_trap:487  echo 'Error on exit'
  Error on exit
  +./stack.sh:exit_trap:488  generate-subunit 1481208793 169 
fail
  +./stack.sh:exit_trap:489  [[ -z /opt/stack/logs ]]
  +./stack.sh:exit_trap:492  
/home/openstack/devstack/tools/worlddump.py -d /opt/stack/logs
  World dumping... see /opt/stack/logs/worlddump-2016-12-08-145603.txt for 
details
  +./stack.sh:exit_trap:498  exit 2


  
  Local.conf
  [[local|localrc]]

  ADMIN_PASSWORD=010465
  DATABASE_PASSWORD=010465
  RABBIT_PASSWORD=010465
  SERVICE_PASSWORD=010465
  MYSQL_PASSWORD=010465

  
  #Enable heat services
  enable_service h-eng h-api h-api-cfn h-api-cw

  #Enable heat plugin
  enable_plugin heat https://git.openstack.org/openstack/heat stable/mitaka

  #Image for Heat
  IMAGE_URL_SITE="http://fedora.c3sl.ufpr.br"
  IMAGE_URL_PATH="/linux//releases/22/Cloud/x86_64/Images/"
  IMAGE_URL_FILE="Fedora-Cloud-Base-22-20150521.x86_64.qcow2"
  IMAGE_URLS+=","$IMAGE_URL_SITE$IMAGE_URL_PATH$IMAGE_URL_FILE

  #Enable Ceilometer plugin
  CEILOMETER_BACKEND=mongodb
  enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer 
stable/mitaka
  enable_plugin aodh https://git.openstack.org/openstack/aodh stable/mitaka

  #Enable Tacker plugin
  enable_plugin tacker https://git.openstack.org/openstack/tacker stable/mitaka

  help
  thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1648516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1601986] Re: RuntimeError: osrandom engine already registered

2016-12-13 Thread Corey Bryant
** Changed in: cloud-archive
   Status: New => Fix Released

** Changed in: cloud-archive
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1601986

Title:
  RuntimeError: osrandom engine already registered

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Triaged
Status in OpenStack Dashboard (Horizon):
  New
Status in python-cryptography package in Ubuntu:
  Fix Released

Bug description:
  Horizon errors with 500 Internal Server Error.

  The apache error.log logs an exception "RuntimeError: osrandom engine
  already registered", cf. traceback below. We need to restart apache2
  to recover.

  This happens in a non-deterministic way, i.e. Horizon will function
  correctly for some time after throwing this error.
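  The symptom suggests the engine registration is not idempotent across the
  repeated imports mod_wsgi can trigger within one process. A guard of
  roughly this shape (illustrative only, not the actual python-cryptography
  fix) makes repeated registration a no-op instead of an error:

  ```python
  # Illustrative sketch; python-cryptography's real fix differs.
  _osrandom_registered = False

  def register_osrandom_engine():
      """Register the engine once per process; later calls become no-ops
      instead of raising "osrandom engine already registered"."""
      global _osrandom_registered
      if _osrandom_registered:
          return False
      # The real code would register the engine with OpenSSL here.
      _osrandom_registered = True
      return True
  ```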

  Versions:

  python-django-horizon2:8.0.1-0ubuntu1~cloud0 
  apache2  2.4.7-1ubuntu4.10 
  libapache2-mod-wsgi  3.4-4ubuntu2.1.14.04.2

  Traceback:

  
  [Mon Jul 11 20:16:46.373640 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] mod_wsgi (pid=2045796): Exception occurred 
processing WSGI script 
'/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'.
  [Mon Jul 11 20:16:46.373681 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] Traceback (most recent call last):
  [Mon Jul 11 20:16:46.373697 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 168, in 
__call__
  [Mon Jul 11 20:16:46.390398 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] self.load_middleware()
  [Mon Jul 11 20:16:46.390420 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 46, in 
load_middleware
  [Mon Jul 11 20:16:46.390515 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] mw_instance = mw_class()
  [Mon Jul 11 20:16:46.390525 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/middleware/locale.py", line 23, in 
__init__
  [Mon Jul 11 20:16:46.394033 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] for url_pattern in 
get_resolver(None).url_patterns:
  [Mon Jul 11 20:16:46.394052 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 372, in 
url_patterns
  [Mon Jul 11 20:16:46.394500 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] patterns = getattr(self.urlconf_module, 
"urlpatterns", self.urlconf_module)
  [Mon Jul 11 20:16:46.394516 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 366, in 
urlconf_module
  [Mon Jul 11 20:16:46.394533 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] self._urlconf_module = 
import_module(self.urlconf_name)
  [Mon Jul 11 20:16:46.394540 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File "/usr/lib/python2.7/importlib/__init__.py", 
line 37, in import_module
  [Mon Jul 11 20:16:46.410602 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] __import__(name)
  [Mon Jul 11 20:16:46.410618 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/urls.py",
 line 35, in 
  [Mon Jul 11 20:16:46.416197 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] url(r'^api/', 
include('openstack_dashboard.api.rest.urls')),
  [Mon Jul 11 20:16:46.416219 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/conf/urls/__init__.py", line 28, in 
include
  [Mon Jul 11 20:16:46.422868 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] urlconf_module = import_module(urlconf_module)
  [Mon Jul 11 20:16:46.422882 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File "/usr/lib/python2.7/importlib/__init__.py", 
line 37, in import_module
  [Mon Jul 11 20:16:46.422899 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] __import__(name)
  [Mon Jul 11 20:16:46.422905 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/api/__init__.py",
 line 36, in <module>
  [Mon Jul 11 20:16:46.432789 2016] 

[Yahoo-eng-team] [Bug 1601986] Re: RuntimeError: osrandom engine already registered

2016-12-13 Thread Corey Bryant
commit 9837cb15b84fea92ffce3306d14160a8c11b1c65 is included in mitaka
and above, so I'm marking this as fix released for python-cryptography
in Ubuntu.  We'll target a fix for the Liberty cloud-archive.

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/liberty
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/liberty
   Status: New => Triaged

** Changed in: python-cryptography (Ubuntu)
   Status: Triaged => Fix Released

** Changed in: cloud-archive/liberty
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1601986

Title:
  RuntimeError: osrandom engine already registered

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive liberty series:
  Triaged
Status in OpenStack Dashboard (Horizon):
  New
Status in python-cryptography package in Ubuntu:
  Fix Released

Bug description:
  Horizon errors with 500 Internal Server Error.

  The apache error.log logs an exception "RuntimeError: osrandom engine
  already registered", cf. traceback below. We need to restart apache2
  to recover.

  This happens in a non-deterministic way, ie. Horizon will function
  correctly for some time after throwing this error.

  Versions:

  python-django-horizon  2:8.0.1-0ubuntu1~cloud0 
  apache2  2.4.7-1ubuntu4.10 
  libapache2-mod-wsgi  3.4-4ubuntu2.1.14.04.2

  Traceback:

  
  [Mon Jul 11 20:16:46.373640 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] mod_wsgi (pid=2045796): Exception occurred 
processing WSGI script 
'/usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi'.
  [Mon Jul 11 20:16:46.373681 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] Traceback (most recent call last):
  [Mon Jul 11 20:16:46.373697 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 168, in 
__call__
  [Mon Jul 11 20:16:46.390398 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] self.load_middleware()
  [Mon Jul 11 20:16:46.390420 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 46, in 
load_middleware
  [Mon Jul 11 20:16:46.390515 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] mw_instance = mw_class()
  [Mon Jul 11 20:16:46.390525 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/middleware/locale.py", line 23, in 
__init__
  [Mon Jul 11 20:16:46.394033 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] for url_pattern in 
get_resolver(None).url_patterns:
  [Mon Jul 11 20:16:46.394052 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 372, in 
url_patterns
  [Mon Jul 11 20:16:46.394500 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] patterns = getattr(self.urlconf_module, 
"urlpatterns", self.urlconf_module)
  [Mon Jul 11 20:16:46.394516 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/core/urlresolvers.py", line 366, in 
urlconf_module
  [Mon Jul 11 20:16:46.394533 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] self._urlconf_module = 
import_module(self.urlconf_name)
  [Mon Jul 11 20:16:46.394540 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File "/usr/lib/python2.7/importlib/__init__.py", 
line 37, in import_module
  [Mon Jul 11 20:16:46.410602 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] __import__(name)
  [Mon Jul 11 20:16:46.410618 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/urls.py",
 line 35, in <module>
  [Mon Jul 11 20:16:46.416197 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] url(r'^api/', 
include('openstack_dashboard.api.rest.urls')),
  [Mon Jul 11 20:16:46.416219 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 
"/usr/lib/python2.7/dist-packages/django/conf/urls/__init__.py", line 28, in 
include
  [Mon Jul 11 20:16:46.422868 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908] urlconf_module = import_module(urlconf_module)
  [Mon Jul 11 20:16:46.422882 2016] [:error] [pid 2045796:tid 139828791035648] 
[remote 172.16.4.81:33908]   File 

[Yahoo-eng-team] [Bug 1577541] Re: Neutron-LBaaS v2: TLS Listeners Functional Tests

2016-12-13 Thread Darek Smigiel
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577541

Title:
   Neutron-LBaaS v2: TLS Listeners Functional Tests

Status in octavia:
  In Progress

Bug description:
  Create the following tests:

  * A battery of CRUD tests around TLS listeners
  * Dependent on Barbican to generate default_tls_container_ref

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1577541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649665] [NEW] Erroneous limit parameter not included in error message for neutron port-list --limit

2016-12-13 Thread rick jones
Public bug reported:

Running a "previously happy on Mitaka" cleanup script which included a:

neutron port-list --all-tenants --limit -1

command now returns an error under Newton:

stack@np-cp1-c0-m1-mgmt:~/rjones2$ neutron port-list --all-tenants --limit -1
Bad limit request: Limit must be an integer 0 or greater and not '%d'.
Neutron server returns request_ids: ['req-27f59b03-d063-4bd2-a84f-43a6545c1f41']

Independent of whether the command should still accept a -1, it would
seem the error message is in error - the percent-d should be the value
provided to the --limit option.  Looking at
https://github.com/openstack/neutron/blob/master/neutron/api/api_common.py
one can see that "limit" is not added to msg in _get_limit_param.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649665

Title:
  Erroneous limit parameter not included in error message for neutron
  port-list --limit

Status in neutron:
  New

Bug description:
  Running a "previously happy on Mitaka" cleanup script which included
  a:

  neutron port-list --all-tenants --limit -1

  command now returns an error under Newton:

  stack@np-cp1-c0-m1-mgmt:~/rjones2$ neutron port-list --all-tenants --limit -1
  Bad limit request: Limit must be an integer 0 or greater and not '%d'.
  Neutron server returns request_ids: 
['req-27f59b03-d063-4bd2-a84f-43a6545c1f41']

  Independent of whether the command should still accept a -1, it would
  seem the error message is in error - the percent-d should be the value
  provided to the --limit option.  Looking at
  https://github.com/openstack/neutron/blob/master/neutron/api/api_common.py
  one can see that "limit" is not added to msg in _get_limit_param.
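
The fix is straightforward in principle; below is a minimal sketch of a
validator that interpolates the offending value into the message. This is
not neutron's actual _get_limit_param code; the function name, message
text, and return convention here are assumptions based on the report.

```python
def get_limit_param(value):
    """Validate a pagination limit, echoing the bad value in the error.

    Per the report, the real code builds a message containing '%d' but
    never substitutes the offending value, so the user sees the raw
    placeholder instead of the limit they passed.
    """
    msg = "Limit must be an integer 0 or greater and not '%s'" % value
    try:
        limit = int(value)
    except (TypeError, ValueError):
        raise ValueError(msg)
    if limit < 0:
        raise ValueError(msg)
    return limit
```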

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649665/+subscriptions



[Yahoo-eng-team] [Bug 1646770] Re: lbaas tempest test cases are not executing

2016-12-13 Thread Darek Smigiel
** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1646770

Title:
  lbaas tempest test cases are not executing

Status in octavia:
  In Progress

Bug description:
  Hitting the error 'TestListenerBasic' object has no attribute
  'tenant_id' while executing test_listener_basic.py. Need to make
  changes in base.py file to run the test files successfully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1646770/+subscriptions



[Yahoo-eng-team] [Bug 1649298] Re: multiple fixed ips assigned to newly spawned instance

2016-12-13 Thread Jean-Philippe Evrard
This is an upstream issue. Linking bug to nova.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: openstack-ansible
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649298

Title:
  multiple fixed ips assigned to newly spawned instance

Status in OpenStack Compute (nova):
  New
Status in openstack-ansible:
  Invalid

Bug description:
  Upon the creation of an instance where the network is attached, there
  is a possibility that the instance will be assigned 2 or more IPv4
  addresses from the address pool of the network.  Multiple ports are
  created along with the assignment of each address but only one of
  these ports is actually up.

  Versions of relevant services:
  neutron: 4.1.1
  OpenStack-ansible: ansible-playbook 1.9.4
  novaclient: 6.0.0
  os_client_config: 1.24.0

  What is expected:
  An instance that is configured with a network is spawned and built with a 
single IPv4 address assigned to it.

  What actually occurs:
  An instance is spawned and built with two or more IPv4 addresses assigned to 
it.  If an instance is spawned with 2 or more networks, 2 or more IPv4 
addresses from each network are assigned to the instance.

  How to recreate it:
  Create an instance using the Horizon dashboard or create a server using the 
CLI.
  In both cases, the network device needs to be specified in the creation 
command.
  It is not guaranteed that the issue will occur on every attempt to create the 
instance.
  It has been noted that the probability of the instance being assigned two or 
more IPv4 addresses from the same network is higher when the instance is 
created with two or more networks devices attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649298/+subscriptions



[Yahoo-eng-team] [Bug 1627044] Re: Last chance call to neutron if VIF plugin notification is lost

2016-12-13 Thread Darek Smigiel
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1627044

Title:
  Last chance call to neutron if VIF plugin notification is lost

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  While spawning a new VM, Nova waits for event from Neutron that its
  port is configured. In some cases Neutron event is lost (e.g. RabbitMQ
  issue) and if vif_plugging_is_fatal=True (it is by default) the
  instance is set to ERROR state. It happens even if in fact port is
  ACTIVE on Neutron side and all should work fine.

  This workflow could be improved by calling Neutron before failing.
  Nova could check real state of each port in Neutron just before setting the 
instance in ERROR (if at least one port is not ACTIVE).
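
The proposed improvement can be sketched as a last-chance check before
failing the boot. This is only an illustration of the idea, not Nova's
actual code; `get_port_status` is a hypothetical callable standing in for
a Neutron "show port" API call.

```python
def should_fail_boot(get_port_status, port_ids):
    """Last-chance check sketch: after the vif-plugged event times out,
    ask Neutron for the real state of each port. Only fail the instance
    if some port is genuinely not ACTIVE; a lost notification for an
    ACTIVE port should not put the instance in ERROR.

    get_port_status is a hypothetical stand-in for a Neutron API call.
    """
    return any(get_port_status(pid) != "ACTIVE" for pid in port_ids)
```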

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1627044/+subscriptions



[Yahoo-eng-team] [Bug 1642167] Re: Move neutron.common.utils.wait_until_true to neutron-lib

2016-12-13 Thread Darek Smigiel
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1642167

Title:
  Move neutron.common.utils.wait_until_true to neutron-lib

Status in neutron:
  Invalid

Bug description:
  In an effort to move from neutron.common to neutron-lib,
  wait_until_true should be moved from neutron.common.util to neutron-
  lib.

  Seen in: Ocata

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1642167/+subscriptions



[Yahoo-eng-team] [Bug 1641813] Re: stable/newton branch creation request for networking-odl

2016-12-13 Thread Darek Smigiel
Branch got created.

** Changed in: neutron
   Status: New => Fix Released

** Changed in: networking-odl
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1641813

Title:
  stable/newton branch creation request for networking-odl

Status in networking-odl:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Please create stable/newton branch of networking-odl on
  f313b7f5b3ebba5bd6f7e8e855315e1570c71f54

  the corresponding patch for openstack/release can be found at
  https://review.openstack.org/#/c/395415/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1641813/+subscriptions



[Yahoo-eng-team] [Bug 1649616] [NEW] Keystone Token Flush job does not complete in HA deployed environment

2016-12-13 Thread Alex Krzos
Public bug reported:

The Keystone token flush job can get into a state where it will never
complete because the transaction size exceeds the MySQL Galera
transaction size - wsrep_max_ws_size (1073741824).


Steps to Reproduce:
1. Authenticate many times
2. Observe that keystone token flush job runs (should be a very long time 
depending on disk) >20 hours in my environment
3. Observe errors in mysql.log indicating a transaction that is too large


Actual results:
Expired tokens are not actually flushed from the database, and no errors appear 
in keystone.log.  Errors appear only in mysql.log.


Expected results:
Expired tokens to be removed from the database


Additional info:
It is likely that you can demonstrate this with less than 1 million tokens as 
the >1 million token table is larger than 13GiB and the max transaction size is 
1GiB; my token-benchmarking Browbeat job creates more than needed.

Once the token flush job can not complete the token table will never
decrease in size and eventually the cloud will run out of disk space.

Furthermore, the flush job consumes significant disk I/O.  This 
was demonstrated on slow disks (Single 7.2K SATA disk).  On faster disks
you will have more capacity to generate tokens, you can then generate
the number of tokens to exceed the transaction size even faster.

Log evidence:
[root@overcloud-controller-0 log]# grep " Total expired" 
/var/log/keystone/keystone.log
2016-12-08 01:33:40.530 21614 INFO keystone.token.persistence.backends.sql [-] 
Total expired tokens removed: 1082434
2016-12-09 09:31:25.301 14120 INFO keystone.token.persistence.backends.sql [-] 
Total expired tokens removed: 1084241
2016-12-11 01:35:39.082 4223 INFO keystone.token.persistence.backends.sql [-] 
Total expired tokens removed: 1086504
2016-12-12 01:08:16.170 32575 INFO keystone.token.persistence.backends.sql [-] 
Total expired tokens removed: 1087823
2016-12-13 01:22:18.121 28669 INFO keystone.token.persistence.backends.sql [-] 
Total expired tokens removed: 1089202
[root@overcloud-controller-0 log]# tail mysqld.log 
161208  1:33:41 [Warning] WSREP: transaction size limit (1073741824) exceeded: 
1073774592
161208  1:33:41 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161209  9:31:26 [Warning] WSREP: transaction size limit (1073741824) exceeded: 
1073774592
161209  9:31:26 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161211  1:35:39 [Warning] WSREP: transaction size limit (1073741824) exceeded: 
1073774592
161211  1:35:40 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161212  1:08:16 [Warning] WSREP: transaction size limit (1073741824) exceeded: 
1073774592
161212  1:08:17 [ERROR] WSREP: rbr write fail, data_len: 0, 2
161213  1:22:18 [Warning] WSREP: transaction size limit (1073741824) exceeded: 
1073774592
161213  1:22:19 [ERROR] WSREP: rbr write fail, data_len: 0, 2


A disk utilization graph is attached.  The job in that graph runs from the 
first spike in disk util (~5:18 UTC) and culminates in about 90 minutes of 
pegging the disk (between 1:09 UTC and 2:43 UTC).
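
One common mitigation for this class of problem is to delete expired rows
in bounded batches, committing between batches, so no single transaction
approaches the Galera write-set limit. The sketch below illustrates the
pattern only: the table layout is assumed, and SQLite stands in for the
real MySQL/Galera backend; this is not keystone's actual flush code.

```python
import datetime
import sqlite3

def flush_expired_tokens(conn, batch_size=1000):
    """Delete expired token rows in bounded batches, committing between
    batches so no single transaction approaches the backend's write-set
    limit (Galera's wsrep_max_ws_size in this report).
    """
    now = datetime.datetime.utcnow().isoformat()
    total = 0
    while True:
        # Delete at most batch_size expired rows per transaction.
        cur = conn.execute(
            "DELETE FROM token WHERE id IN "
            "(SELECT id FROM token WHERE expires < ? LIMIT ?)",
            (now, batch_size))
        conn.commit()
        if cur.rowcount == 0:
            return total
        total += cur.rowcount
```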

** Affects: keystone
 Importance: Undecided
 Status: New

** Attachment added: "Disk IO % util on Controller when Token Flush is running."
   
https://bugs.launchpad.net/bugs/1649616/+attachment/4791197/+files/Token_flush-Disk_io.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649616

Title:
  Keystone Token Flush job does not complete in HA deployed environment

Status in OpenStack Identity (keystone):
  New

Bug description:
  The Keystone token flush job can get into a state where it will never
  complete because the transaction size exceeds the MySQL Galera
  transaction size - wsrep_max_ws_size (1073741824).

  
  Steps to Reproduce:
  1. Authenticate many times
  2. Observe that keystone token flush job runs (should be a very long time 
depending on disk) >20 hours in my environment
  3. Observe errors in mysql.log indicating a transaction that is too large

  
  Actual results:
  Expired tokens are not actually flushed from the database, and no errors 
appear in keystone.log.  Errors appear only in mysql.log.

  
  Expected results:
  Expired tokens to be removed from the database

  
  Additional info:
  It is likely that you can demonstrate this with less than 1 million tokens as 
the >1 million token table is larger than 13GiB and the max transaction size is 
1GiB; my token-benchmarking Browbeat job creates more than needed.

  Once the token flush job can not complete the token table will never
  decrease in size and eventually the cloud will run out of disk space.

  Furthermore, the flush job consumes significant disk I/O.
  This was demonstrated on slow disks (Single 7.2K SATA disk).  On
  faster disks you will have more capacity to generate tokens, you can
  then generate the number of tokens to exceed the transaction size even
  faster.

[Yahoo-eng-team] [Bug 1634568] Re: Inconsistency between v3 API and keystone token timestamps

2016-12-13 Thread Brant Knudson
The v3 documentation is still incorrect.

** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1634568

Title:
  Inconsistency between v3 API and keystone token timestamps

Status in OpenStack Identity (keystone):
  New

Bug description:
  The v3 API spec for tokens documents the format of timestamps[1]. It
  says the format is like "CCYY-MM-DDThh:mm:ss±hh:mm".

  By this, the timestamps returned by keystone should be like
  2016-10-17T15:17:03+00:00. But they actually show up like this:

  V3:
  "issued_at": "2016-10-17T15:17:03.00Z",
  "expires_at": "2016-10-17T16:17:03.00Z",

  V2:
  "issued_at": "2016-10-17T15:17:56.00Z",
  "expires": "2016-10-17T16:17:56Z",

  Tempest has checks that the timestamp ends in Z.

  [1] http://developer.openstack.org/api-ref/identity/v3/?expanded
  =validate-and-show-information-for-token-detail#id19
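
The two spellings denote the same instant, which a quick check with
Python's datetime makes concrete (the example values here are
illustrative, mirroring the shapes quoted above):

```python
from datetime import datetime, timezone

# Keystone emits a fractional-seconds UTC form ending in 'Z'; the v3 spec
# documents a numeric-offset form.
keystone_form = "2016-10-17T15:17:03.000000Z"

parsed = datetime.strptime(
    keystone_form, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

# isoformat() yields the spec's CCYY-MM-DDThh:mm:ss±hh:mm shape:
spec_form = parsed.isoformat()
```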

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1634568/+subscriptions



[Yahoo-eng-team] [Bug 1634568] Re: Inconsistency between v3 API and keystone token timestamps

2016-12-13 Thread Steve Martinelli
this looks OK to me now...

stevemar@ubuntu:/opt/stack$ source ~/devstack/openrc admin admin
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
<>

stevemar@ubuntu:/opt/stack$ openstack token issue
++---+
| Field  | Value  
++---+
| expires| 2016-12-13T15:33:12+
| id | gABYvnjb554MXv4_MMDN7d6JAQ
| project_id | e8228d4835664159abdcaeb4bf8a26ac
| user_id| 08bdbf92b87d47469aa40d4b10217f40
++---+

stevemar@ubuntu:/opt/stack$ openstack token issue --os-identity-api-
version 2 --os-auth-url http://172.16.240.201:5000/v2.0

Ignoring domain related config project_domain_id because identity API version 
is 2.0
Ignoring domain related config user_domain_id because identity API version is 
2.0
Ignoring domain related config project_domain_id because identity API version 
is 2.0
Ignoring domain related config user_domain_id because identity API version is 
2.0
Ignoring domain related config project_domain_id because identity API version 
is 2.0
Ignoring domain related config user_domain_id because identity API version is 
2.0

++-
| Field  | Value   
++-
| expires| 2016-12-13T15:33:50+
| id | oz1E2HQwVIJqhdz9D703gvFJTbM
| project_id | e8228d4835664159abdcaeb4bf8a26ac
| user_id| 08bdbf92b87d47469aa40d4b10217f40
++-

** Changed in: keystone
   Status: In Progress => Invalid

** Changed in: keystone
 Assignee: Lance Bragstad (lbragstad) => (unassigned)

** Changed in: keystone
   Importance: High => Undecided

** Changed in: keystone
Milestone: ocata-2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1634568

Title:
  Inconsistency between v3 API and keystone token timestamps

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  The v3 API spec for tokens documents the format of timestamps[1]. It
  says the format is like "CCYY-MM-DDThh:mm:ss±hh:mm".

  By this, the timestamps returned by keystone should be like
  2016-10-17T15:17:03+00:00. But they actually show up like this:

  V3:
  "issued_at": "2016-10-17T15:17:03.00Z",
  "expires_at": "2016-10-17T16:17:03.00Z",

  V2:
  "issued_at": "2016-10-17T15:17:56.00Z",
  "expires": "2016-10-17T16:17:56Z",

  Tempest has checks that the timestamp ends in Z.

  [1] http://developer.openstack.org/api-ref/identity/v3/?expanded
  =validate-and-show-information-for-token-detail#id19

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1634568/+subscriptions



[Yahoo-eng-team] [Bug 1649601] [NEW] Postgresql case sensitiveness not honoured

2016-12-13 Thread Andreas
Public bug reported:

I installed Keystone with the ubuntu guide
(http://docs.openstack.org/newton/install-guide-ubuntu/index.html), but
with postgresql backend.

At http://docs.openstack.org/newton/install-guide-ubuntu/keystone-
users.html I get following reported error:

https://ask.openstack.org/en/question/67398/error-openstack-the-request-
you-have-made-requires-authentication-http-401/ --> ERROR: openstack The
request you have made requires authentication. (HTTP 401)

After some deeper debugging and comparing of databases I found out that
following query:

SELECT project.id AS project_id, project.name AS project_name, 
project.domain_id AS project_domain_id, project.description AS 
project_description, project.enabled AS project_enabled, project.extra AS 
project_extra, project.parent_id AS project_parent_id, project.is_domain AS 
project_is_domain
FROM project
WHERE project.name = 'default' AND project.domain_id = 
'<>'

does not work on PostgreSQL and gives an empty result back. MySQL works
here successfully, and creation of the project works.

There are several ways to solve this issue (citext field type or lower()
function), but I'm not a programmer, nor have I found the relevant location
in the code yet.
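
The lower() option mentioned in the report can be demonstrated with a toy
table. SQLite stands in here because its `=` on text is also
case-sensitive, mirroring the PostgreSQL symptom; this is a sketch, not
keystone's actual query code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE project (id TEXT, name TEXT)")
conn.execute("INSERT INTO project VALUES ('p1', 'Default')")

# Exact comparison misses the row when stored case differs -- the
# PostgreSQL symptom (MySQL's default collation is case-insensitive):
exact = conn.execute(
    "SELECT id FROM project WHERE name = 'default'").fetchall()

# Folding both sides with lower() is one of the fixes suggested above:
folded = conn.execute(
    "SELECT id FROM project WHERE lower(name) = lower('default')").fetchall()
```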

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649601

Title:
  Postgresql case sensitiveness not honoured

Status in OpenStack Identity (keystone):
  New

Bug description:
  I installed Keystone with the ubuntu guide
  (http://docs.openstack.org/newton/install-guide-ubuntu/index.html),
  but with postgresql backend.

  At http://docs.openstack.org/newton/install-guide-ubuntu/keystone-
  users.html I get following reported error:

  https://ask.openstack.org/en/question/67398/error-openstack-the-
  request-you-have-made-requires-authentication-http-401/ --> ERROR:
  openstack The request you have made requires authentication. (HTTP
  401)

  After some deeper debugging and comparing of databases I found out
  that following query:

  SELECT project.id AS project_id, project.name AS project_name, 
project.domain_id AS project_domain_id, project.description AS 
project_description, project.enabled AS project_enabled, project.extra AS 
project_extra, project.parent_id AS project_parent_id, project.is_domain AS 
project_is_domain
  FROM project
  WHERE project.name = 'default' AND project.domain_id = 
'<>'

  does not work on PostgreSQL and gives an empty result back. MySQL
  works here successfully, and creation of the project works.

  There are several ways to solve this issue (citext field type or
  lower() function), but I'm not a programmer, nor have I found the
  relevant location in the code yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1649601/+subscriptions



[Yahoo-eng-team] [Bug 1526587] Re: Neutron doesn't have a command to show the available IP addresses for one subnet

2016-12-13 Thread John Davidge
python-neutronclient has been deprecated. Any changes will need to be
made in python-openstackclient.

** Changed in: python-neutronclient
   Status: In Progress => Won't Fix

** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
 Assignee: Prateek khushalani (prateek-khushalani) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526587

Title:
  Neutron doesn't have a command to show the available IP addresses for
  one subnet

Status in neutron:
  In Progress
Status in python-neutronclient:
  Won't Fix
Status in python-openstackclient:
  New

Bug description:
  Neutron doesn't have a command to show the available IP addresses for
  one subnet.

  We can get the allocated ip list with command:
  [root@cts-orch ~]# neutron port-list | grep `neutron subnet-show 110-OAM2 | 
awk '/ id / {print $4}'` | cut -d"|" -f5 | cut -d":" -f3 | sort
   "135.111.122.97"}
   "135.111.122.98"}

  But we don't have a command to show the available ips for one subnet.
  I write a shell script to show the available ports as below, but it
  will be helpful if we can provide such a neutron command.

  [root@cts-orch ~]# ./show_available_ip.sh 110-OAM2
  135.111.122.99
  135.111.122.100
  135.111.122.101
  135.111.122.102
  135.111.122.103
  135.111.122.104
  135.111.122.105
  135.111.122.106
  135.111.122.107
  135.111.122.108
  135.111.122.109
  135.111.122.110
  135.111.122.111
  135.111.122.112
  135.111.122.113
  135.111.122.114
  135.111.122.115
  135.111.122.116
  135.111.122.117
  135.111.122.118
  135.111.122.119
  135.111.122.120
  135.111.122.121
  135.111.122.122
  135.111.122.123
  135.111.122.124
  Total Count: 26
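
The shell pipeline above can be sketched in a few lines of Python with the
stdlib ipaddress module. The pool bounds and allocated list are inputs you
would gather from `subnet-show` and `port-list`; the function name and
signature are assumptions of this sketch, not an existing Neutron API.

```python
import ipaddress

def available_ips(pool_start, pool_end, allocated):
    """Enumerate free addresses in an allocation range.

    Sketch of what a 'show available IPs' command could compute:
    walk the pool and skip addresses already held by ports.
    """
    used = {ipaddress.ip_address(a) for a in allocated}
    free = []
    addr = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    while addr <= end:
        if addr not in used:
            free.append(str(addr))
        addr += 1  # IPv4Address supports integer arithmetic
    return free
```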

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526587/+subscriptions



[Yahoo-eng-team] [Bug 1639955] Re: bad test for snappy systems

2016-12-13 Thread Scott Moser
** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1639955

Title:
  bad test for snappy systems

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  Reviewing the latest SRU for cloud-init, I noticed the following:

  def system_is_snappy():
  # channel.ini is configparser loadable.
  # snappy will move to using /etc/system-image/config.d/*.ini
  # this is certainly not a perfect test, but good enough for now.
  content = load_file("/etc/system-image/channel.ini", quiet=True)
  if 'ubuntu-core' in content.lower():
  return True
  if os.path.isdir("/etc/system-image/config.d/"):
  return True
  return False

  This isn't a good test for whether a system is an ubuntu-core system.
  'system-image' is historical baggage, and not likely to be present at
  all in future versions.

  I'm afraid I don't know a good alternative test offhand, but wanted to
  log the bug so someone could look into it rather than being caught by
  surprise when ubuntu-core image contents later change.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1639955/+subscriptions



[Yahoo-eng-team] [Bug 1526587] Re: Neutron doesn't have a command to show the available IP addresses for one subnet

2016-12-13 Thread Prateek khushalani
I did an analysis of the change. Changes will have to be made to both
neutron-server and neutronclient (CLI). Here is the list of changes to
be done:

Neutronclient side-

1. A new command to be added.
2. A new resource to be created
3. The resource will take subnet-id and return available IP's in the allocation 
range.

Neutron-server side-

1. A new api will be created.
2. The api will accept subnet-id and return list of available  IP's in the 
allocation range.

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
 Assignee: (unassigned) => Prateek khushalani (prateek-khushalani)

** Changed in: python-neutronclient
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526587

Title:
  Neutron doesn't have a command to show the available IP addresses for
  one subnet

Status in neutron:
  In Progress
Status in python-neutronclient:
  In Progress

Bug description:
  Neutron doesn't have a command to show the available IP addresses for
  one subnet.

  We can get the allocated ip list with command:
  [root@cts-orch ~]# neutron port-list | grep `neutron subnet-show 110-OAM2 | 
awk '/ id / {print $4}'` | cut -d"|" -f5 | cut -d":" -f3 | sort
   "135.111.122.97"}
   "135.111.122.98"}

  But we don't have a command to show the available ips for one subnet.
  I write a shell script to show the available ports as below, but it
  will be helpful if we can provide such a neutron command.

  [root@cts-orch ~]# ./show_available_ip.sh 110-OAM2
  135.111.122.99
  135.111.122.100
  135.111.122.101
  135.111.122.102
  135.111.122.103
  135.111.122.104
  135.111.122.105
  135.111.122.106
  135.111.122.107
  135.111.122.108
  135.111.122.109
  135.111.122.110
  135.111.122.111
  135.111.122.112
  135.111.122.113
  135.111.122.114
  135.111.122.115
  135.111.122.116
  135.111.122.117
  135.111.122.118
  135.111.122.119
  135.111.122.120
  135.111.122.121
  135.111.122.122
  135.111.122.123
  135.111.122.124
  Total Count: 26
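The subtraction the script performs can be sketched with the stdlib ipaddress module (the pool boundaries and allocated list below are made-up example data mirroring the output above):

```python
import ipaddress


def available_ips(pool_start, pool_end, allocated):
    """Return the unallocated addresses in [pool_start, pool_end]."""
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    used = {ipaddress.ip_address(a) for a in allocated}
    free = []
    addr = start
    while addr <= end:
        if addr not in used:
            free.append(str(addr))
        addr += 1  # IPv4Address supports integer arithmetic
    return free


# Example with made-up data mirroring the report above.
free = available_ips('135.111.122.97', '135.111.122.124',
                     ['135.111.122.97', '135.111.122.98'])
print('Total Count:', len(free))  # 26 free addresses in this example
```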

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526587/+subscriptions



[Yahoo-eng-team] [Bug 1649586] [NEW] "HTTP exception thrown: Cannot 'os-migrateLive' instance while it is in task_state migrating" in gate-grenade-dsvm-neutron-multinode-live-migration-nv

2016-12-13 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/04/409904/2/check/gate-grenade-dsvm-neutron-
multinode-live-migration-
nv/78de48c/logs/new/screen-n-api.txt.gz#_2016-12-13_03_02_15_500

2016-12-13 03:02:15.500 8323 INFO nova.api.openstack.wsgi [req-
54d8956d-3289-418b-a05f-4f85910d83d8 tempest-
LiveBlockMigrationTestJSON-1803989051 tempest-
LiveBlockMigrationTestJSON-1803989051] HTTP exception thrown: Cannot
'os-migrateLive' instance af219925-a0a1-4e1c-92e0-ff6f510d7cd1 while it
is in task_state migrating

Which causes the job to fail.

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22HTTP%20exception%20thrown%3A%20Cannot%20
'os-
migrateLive'%20instance%5C%22%20AND%20message%3A%5C%22while%20it%20is%20in%20task_state%20migrating%5C%22%20AND%20tags%3A%5C%22screen-n-api.txt%5C%22=7d

292 hits in 7 days, all failures.
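The failure is a race: a second os-migrateLive request lands while a previous migration still owns the instance's task_state. A minimal sketch of the kind of guard a test could apply before issuing the request — get_task_state and migrate are hypothetical callables standing in for whatever client the caller uses:

```python
import time


def live_migrate_when_idle(get_task_state, migrate, timeout=300, interval=2):
    """Wait until the instance has no task_state, then trigger migration.

    get_task_state: callable returning the current task_state (None if idle).
    migrate: callable issuing the os-migrateLive request.
    Returns True if migration was triggered before the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_task_state() is None:
            migrate()
            return True
        time.sleep(interval)
    return False
```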

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: grenade live-migration multinode testing

** Summary changed:

- "HTTP exception thrown: Cannot 'os-migrateLive' instance 
af219925-a0a1-4e1c-92e0-ff6f510d7cd1 while it is in task_state migrating" in 
gate-grenade-dsvm-neutron-multinode-live-migration-nv
+ "HTTP exception thrown: Cannot 'os-migrateLive' instance while it is in 
task_state migrating" in gate-grenade-dsvm-neutron-multinode-live-migration-nv

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649586

Title:
  "HTTP exception thrown: Cannot 'os-migrateLive' instance while it is
  in task_state migrating" in gate-grenade-dsvm-neutron-multinode-live-
  migration-nv

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seen here:

  http://logs.openstack.org/04/409904/2/check/gate-grenade-dsvm-neutron-
  multinode-live-migration-
  nv/78de48c/logs/new/screen-n-api.txt.gz#_2016-12-13_03_02_15_500

  2016-12-13 03:02:15.500 8323 INFO nova.api.openstack.wsgi [req-
  54d8956d-3289-418b-a05f-4f85910d83d8 tempest-
  LiveBlockMigrationTestJSON-1803989051 tempest-
  LiveBlockMigrationTestJSON-1803989051] HTTP exception thrown: Cannot
  'os-migrateLive' instance af219925-a0a1-4e1c-92e0-ff6f510d7cd1 while
  it is in task_state migrating

  Which causes the job to fail.

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22HTTP%20exception%20thrown%3A%20Cannot%20
  'os-
  
migrateLive'%20instance%5C%22%20AND%20message%3A%5C%22while%20it%20is%20in%20task_state%20migrating%5C%22%20AND%20tags%3A%5C%22screen-n-api.txt%5C%22=7d

  292 hits in 7 days, all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649586/+subscriptions



[Yahoo-eng-team] [Bug 1649581] [NEW] IPv4 Link Local Addresses Not Supported in OVS firewall

2016-12-13 Thread Drew Thorstensen
Public bug reported:

There are certain workloads that require the ability to define IPv4 Link
Local addresses dynamically, as defined in RFC3927.

The openvswitch_firewall service allows for IPv6 link local addresses
(likely because they are deterministic), but does not account for IPv4
Link Local addresses.  Without support of this, workloads that have not
yet made the transition to IPv6 support won't be able to run with the
openvswitch_firewall.
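For context, RFC 3927 addresses live in 169.254.0.0/16, and classifying them takes one stdlib call — roughly the IPv4 analogue of the fe80::/10 handling the driver already has for IPv6 (this is an illustration, not the driver's actual code):

```python
import ipaddress

# RFC 3927 reserves this block for IPv4 link-local addressing.
IPV4_LLA_NET = ipaddress.ip_network('169.254.0.0/16')


def is_ipv4_link_local(addr):
    """True for RFC 3927 IPv4 link-local addresses."""
    ip = ipaddress.ip_address(addr)
    return ip.version == 4 and ip in IPV4_LLA_NET


# The stdlib exposes the same classification directly:
assert ipaddress.ip_address('169.254.1.5').is_link_local
```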

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649581

Title:
  IPv4 Link Local Addresses Not Supported in OVS firewall

Status in neutron:
  New

Bug description:
  There are certain workloads that require the ability to define IPv4
  Link Local addresses dynamically, as defined in RFC3927.

  The openvswitch_firewall service allows for IPv6 link local addresses
  (likely because they are deterministic), but does not account for IPv4
  Link Local addresses.  Without support of this, workloads that have
  not yet made the transition to IPv6 support won't be able to run with
  the openvswitch_firewall.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649581/+subscriptions



[Yahoo-eng-team] [Bug 1649574] [NEW] vpnaas gate jobs are failing with openswan

2016-12-13 Thread YAMAMOTO Takashi
Public bug reported:

following jobs are failing

gate-neutron-vpnaas-dsvm-api-ubuntu-xenial-nv
gate-neutron-vpnaas-dsvm-functional-ubuntu-xenial

eg. http://logs.openstack.org/78/408878/1/check/gate-neutron-vpnaas-
dsvm-functional-ubuntu-xenial/31c0c21/console.html

2016-12-13 03:23:23.122072 | + 
/opt/stack/new/neutron-vpnaas/devstack/plugin.sh:neutron_agent_vpnaas_install_agent_packages:14
 :   install_package openswan
2016-12-13 03:23:23.123674 | + functions-common:install_package:1285:   
update_package_repo
2016-12-13 03:23:23.125364 | + functions-common:update_package_repo:1257 :   
NO_UPDATE_REPOS=False
2016-12-13 03:23:23.126944 | + functions-common:update_package_repo:1258 :   
REPOS_UPDATED=True
2016-12-13 03:23:23.128576 | + functions-common:update_package_repo:1259 :   
RETRY_UPDATE=False
2016-12-13 03:23:23.130157 | + functions-common:update_package_repo:1261 :   [[ 
False = \T\r\u\e ]]
2016-12-13 03:23:23.131643 | + functions-common:update_package_repo:1265 :   
is_ubuntu
2016-12-13 03:23:23.132903 | + functions-common:is_ubuntu:466   :   [[ 
-z deb ]]
2016-12-13 03:23:23.134496 | + functions-common:is_ubuntu:469   :   '[' 
deb = deb ']'
2016-12-13 03:23:23.136126 | + functions-common:update_package_repo:1266 :   
apt_get_update
2016-12-13 03:23:23.137769 | + functions-common:apt_get_update:1059 :   [[ 
True == \T\r\u\e ]]
2016-12-13 03:23:23.139346 | + functions-common:apt_get_update:1059 :   [[ 
False != \T\r\u\e ]]
2016-12-13 03:23:23.140965 | + functions-common:apt_get_update:1060 :   
return
2016-12-13 03:23:23.142465 | + functions-common:install_package:1286:   
real_install_package openswan
2016-12-13 03:23:23.143939 | + functions-common:real_install_package:1271 :   
is_ubuntu
2016-12-13 03:23:23.145398 | + functions-common:is_ubuntu:466   :   [[ 
-z deb ]]
2016-12-13 03:23:23.146901 | + functions-common:is_ubuntu:469   :   '[' 
deb = deb ']'
2016-12-13 03:23:23.148310 | + functions-common:real_install_package:1272 :   
apt_get install openswan
2016-12-13 03:23:23.149982 | + functions-common:apt_get:1087:   
local xtrace result
2016-12-13 03:23:23.152192 | ++ functions-common:apt_get:1088:   
set +o
2016-12-13 03:23:23.152282 | ++ functions-common:apt_get:1088:   
grep xtrace
2016-12-13 03:23:23.155615 | + functions-common:apt_get:1088:   
xtrace='set -o xtrace'
2016-12-13 03:23:23.156974 | + functions-common:apt_get:1089:   set 
+o xtrace
2016-12-13 03:23:23.161695 | + functions-common:apt_get:1100:   
sudo DEBIAN_FRONTEND=noninteractive http_proxy= https_proxy= no_proxy= apt-get 
--option Dpkg::Options::=--force-confold --assume-yes install openswan
2016-12-13 03:23:23.188474 | Reading package lists...
2016-12-13 03:23:23.334278 | Building dependency tree...
2016-12-13 03:23:23.335012 | Reading state information...
2016-12-13 03:23:23.350723 | Package openswan is not available, but is referred 
to by another package.
2016-12-13 03:23:23.350762 | This may mean that the package is missing, has 
been obsoleted, or
2016-12-13 03:23:23.350786 | is only available from another source
2016-12-13 03:23:23.350799 | 
2016-12-13 03:23:23.352792 | E: Package 'openswan' has no installation candidate
2016-12-13 03:23:23.356052 | + functions-common:apt_get:1104:   
result=100
2016-12-13 03:23:23.357655 | + functions-common:apt_get:1107:   
time_stop apt-get
2016-12-13 03:23:23.359003 | + functions-common:time_stop:2398  :   
local name
2016-12-13 03:23:23.360321 | + functions-common:time_stop:2399  :   
local end_time
2016-12-13 03:23:23.361866 | + functions-common:time_stop:2400  :   
local elapsed_time
2016-12-13 03:23:23.363173 | + functions-common:time_stop:2401  :   
local total
2016-12-13 03:23:23.364458 | + functions-common:time_stop:2402  :   
local start_time
2016-12-13 03:23:23.365963 | + functions-common:time_stop:2404  :   
name=apt-get
2016-12-13 03:23:23.367539 | + functions-common:time_stop:2405  :   
start_time=1481599403
2016-12-13 03:23:23.368969 | + functions-common:time_stop:2407  :   [[ 
-z 1481599403 ]]
2016-12-13 03:23:23.370783 | ++ functions-common:time_stop:2410  :   
date +%s
2016-12-13 03:23:23.373482 | + functions-common:time_stop:2410  :   
end_time=1481599403
2016-12-13 03:23:23.374823 | + functions-common:time_stop:2411  :   
elapsed_time=0
2016-12-13 03:23:23.376293 | + functions-common:time_stop:2412  :   
total=51
2016-12-13 03:23:23.35 | + functions-common:time_stop:2414  :   
_TIME_START[$name]=
2016-12-13 03:23:23.379213 | + functions-common:time_stop:2415  :   
_TIME_TOTAL[$name]=51
2016-12-13 03:23:23.380978 | + functions-common:apt_get:1108:   
return 100
2016-12-13 03:23:23.382811 | + functions-common:install_package:1287:   
RETRY_UPDATE=True
2016-12-13 03:23:23.384665 | + 

[Yahoo-eng-team] [Bug 1649531] Re: When adding subports to trunk segmentation details shouldn't be mandatory

2016-12-13 Thread Vasyl Saienko
@John: I don't see that different drivers may have different validation
rules; please see [0] and [1].
Could you please show an example where different rules are applied on a
per-driver basis?


[0] 
https://github.com/openstack/neutron/blob/master/neutron/services/trunk/plugin.py#L281
[1] 
https://github.com/openstack/neutron/blob/master/neutron/services/trunk/rules.py#L156

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649531

Title:
  When adding subports to trunk segmentation details shouldn't be
  mandatory

Status in neutron:
  New

Bug description:
  According to Neutron trunk specification [0] segmentation_id and
  segmentation_type are optional fields. But when adding a subport to a
  trunk, requests without segmentation_id/segmentation_type fail.
  Example:

  $ openstack network trunk set --subport port=port2 trunk10
  Failed to add subports to trunk 'trunk10': Invalid input for operation: 
Invalid subport details '{u'port_id': 
u'f9922f99-f0e4-4420-be73-dbb9ea7904c6'}': missing segmentation information. 
Must specify both segmentation_id and segmentation_type.
  Neutron server returns request_ids: 
['req-7601d1a4-afdf-40e6-9273-1fdab1fc8040']

  
  [0] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/vlan-aware-vms.html#proposed-change

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649531/+subscriptions



[Yahoo-eng-team] [Bug 1649531] Re: When adding subports to trunk segmentation details shouldn't be mandatory

2016-12-13 Thread John Davidge
@Vasyl Please read the linked documentation. The OVS driver requires
segmentation_id/type, and is rejecting your requests because of that.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649531

Title:
  When adding subports to trunk segmentation details shouldn't be
  mandatory

Status in neutron:
  Invalid

Bug description:
  According to Neutron trunk specification [0] segmentation_id and
  segmentation_type are optional fields. But when adding a subport to a
  trunk, requests without segmentation_id/segmentation_type fail.
  Example:

  $ openstack network trunk set --subport port=port2 trunk10
  Failed to add subports to trunk 'trunk10': Invalid input for operation: 
Invalid subport details '{u'port_id': 
u'f9922f99-f0e4-4420-be73-dbb9ea7904c6'}': missing segmentation information. 
Must specify both segmentation_id and segmentation_type.
  Neutron server returns request_ids: 
['req-7601d1a4-afdf-40e6-9273-1fdab1fc8040']

  
  [0] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/vlan-aware-vms.html#proposed-change

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649531/+subscriptions



[Yahoo-eng-team] [Bug 1649531] Re: When adding subports to trunk segmentation details shouldn't be mandatory

2016-12-13 Thread Vasyl Saienko
@John thanks for your notice. What is the neutron base case? I've
deployed Neutron with OVS, and segmentation_id/type are required fields
when adding subports. Could you please give an example of a
configuration where they are generated automatically?

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649531

Title:
  When adding subports to trunk segmentation details shouldn't be
  mandatory

Status in neutron:
  New

Bug description:
  According to Neutron trunk specification [0] segmentation_id and
  segmentation_type are optional fields. But when adding a subport to a
  trunk, requests without segmentation_id/segmentation_type fail.
  Example:

  $ openstack network trunk set --subport port=port2 trunk10
  Failed to add subports to trunk 'trunk10': Invalid input for operation: 
Invalid subport details '{u'port_id': 
u'f9922f99-f0e4-4420-be73-dbb9ea7904c6'}': missing segmentation information. 
Must specify both segmentation_id and segmentation_type.
  Neutron server returns request_ids: 
['req-7601d1a4-afdf-40e6-9273-1fdab1fc8040']

  
  [0] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/vlan-aware-vms.html#proposed-change

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649531/+subscriptions



[Yahoo-eng-team] [Bug 1649466] Re: contrail analyticks api status stuck in "contrail-analytics-api initializing (UvePartitions:UVE-Aggregation[None] connection down)"

2016-12-13 Thread Steve Martinelli
I have no idea what the issue here is; please update the description
with a stack trace or steps to recreate the problem. The bug title also
references a file that is not in the Keystone project.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649466

Title:
  contrail analyticks api status stuck in "contrail-analytics-api
  initializing (UvePartitions:UVE-Aggregation[None] connection down)"

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Sorry. I added this bug to the wrong project by mistake hence removing
  all the description .

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1649466/+subscriptions



[Yahoo-eng-team] [Bug 1649557] [NEW] [api-ref] Make api parameters more meaningful

2016-12-13 Thread zhaobo
Public bug reported:

The api-ref in the neutron-lib repo should give its parameters more
meaningful names.

For example:

The "status", "status_1", ... "status_10" parameter names are not
friendly, and the same goes for "shared_X" and so on.

We should replace them with more meaningful parameter names.

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649557

Title:
  [api-ref] Make api parameters more meaningful

Status in neutron:
  New

Bug description:
  The api-ref in the neutron-lib repo should give its parameters more
  meaningful names.

  For example:

  The "status", "status_1", ... "status_10" parameter names are not
  friendly, and the same goes for "shared_X" and so on.

  We should replace them with more meaningful parameter names.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649557/+subscriptions



[Yahoo-eng-team] [Bug 1632486] Re: add debug to tox environment

2016-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/385177
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=df4cf3e5db2166b67c9e95b381ec19ace10d0279
Submitter: Jenkins
Branch:master

commit df4cf3e5db2166b67c9e95b381ec19ace10d0279
Author: Sivasathurappan Radhakrishnan 
Date:   Tue Oct 11 21:55:05 2016 +

Add debug to tox environment

The oslotest package distributes a shell file that may be used to assist
in debugging python code. The shell file uses testtools, and supports
debugging with pdb. Debug tox environment implements following test
instructions.
https://wiki.openstack.org/wiki/Testr#Debugging_.28pdb.29_Tests

To enable debugging, run tox with the debug environment. Below are the
following ways to run it.

 * tox -e debug module
 * tox -e debug module.test_class
 * tox -e debug module.test_class.test_method

Change-Id: I08937845803be7bd125b838ab07bda56f202e88d
Closes-Bug: 1632486


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1632486

Title:
  add debug to tox environment

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Using pdb breakpoints with testr fails with BdbQuit exception rather
  than stopping at the breakpoint.

  The oslotest package also distributes a shell file that may be used to assist 
in debugging python code. The shell file uses testtools, and supports debugging 
with pdb. Debug tox environment implements following test instructions 
  https://wiki.openstack.org/wiki/Testr#Debugging_.28pdb.29_Tests

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1632486/+subscriptions



[Yahoo-eng-team] [Bug 1649531] Re: When adding subports to trunk segmentation details shouldn't be mandatory

2016-12-13 Thread John Davidge
These parameters are optional in the neutron base case, but can be
required by the driver. See https://github.com/openstack/openstack-
manuals/blob/master/doc/networking-guide/source/config-
trunking.rst#operation

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649531

Title:
  When adding subports to trunk segmentation details shouldn't be
  mandatory

Status in neutron:
  Invalid

Bug description:
  According to Neutron trunk specification [0] segmentation_id and
  segmentation_type are optional fields. But when adding a subport to a
  trunk, requests without segmentation_id/segmentation_type fail.
  Example:

  $ openstack network trunk set --subport port=port2 trunk10
  Failed to add subports to trunk 'trunk10': Invalid input for operation: 
Invalid subport details '{u'port_id': 
u'f9922f99-f0e4-4420-be73-dbb9ea7904c6'}': missing segmentation information. 
Must specify both segmentation_id and segmentation_type.
  Neutron server returns request_ids: 
['req-7601d1a4-afdf-40e6-9273-1fdab1fc8040']

  
  [0] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/vlan-aware-vms.html#proposed-change

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649531/+subscriptions



[Yahoo-eng-team] [Bug 1649532] [NEW] private flavors globally visible

2016-12-13 Thread Maurice Schreiber
Public bug reported:

I have project A with user Anna, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
I have project B with user Ben, who has a role representing nova admin assigned 
(needed to allow creation of private flavors).
Anna has no permission on project B.
Ben has no permission on project A.

Anna creates a private flavor 'A_private', gives flavor access to
project A.

Expected behaviour: only Anna (or any other nova admin in project A) can
perform actions on this flavor.

Issue: Ben can perform all sorts of actions on the private flavor
'A_private' (read, delete, manage access, manage extra specs).

Observed in Mitaka, but I haven't seen any updates related to this, so
this should be the same in master. Please correct me if I'm wrong.

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  I have project A with user Anna, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
  I have project B with user Ben, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
  Anna has no permission on project B.
  Ben has no permission on project A.
  
  Anna creates a private flavor 'A_private', gives flavor access to
  project A.
  
  Expected behaviour: only Anna (or any other nova admin in project A) can
  perform actions on this flavor.
  
  Issue: Ben can perform all sort of actions on the private flavor
  'A_private' (read, delete, manage access, manage extra specs).
+ 
+ Observed in Mitaka, but I haven't seen any updates related to this, so
+ this should be the same in master. Please correct me if I'm wrong.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649532

Title:
  private flavors globally visible

Status in OpenStack Compute (nova):
  New

Bug description:
  I have project A with user Anna, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
  I have project B with user Ben, who has a role representing nova admin 
assigned (needed to allow creation of private flavors).
  Anna has no permission on project B.
  Ben has no permission on project A.

  Anna creates a private flavor 'A_private', gives flavor access to
  project A.

  Expected behaviour: only Anna (or any other nova admin in project A)
  can perform actions on this flavor.

  Issue: Ben can perform all sorts of actions on the private flavor
  'A_private' (read, delete, manage access, manage extra specs).

  Observed in Mitaka, but I haven't seen any updates related to this, so
  this should be the same in master. Please correct me if I'm wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1649532/+subscriptions



[Yahoo-eng-team] [Bug 1649531] [NEW] When adding subports to trunk segmentation details shouldn't be mandatory

2016-12-13 Thread Vasyl Saienko
Public bug reported:

According to Neutron trunk specification [0] segmentation_id and
segmentation_type are optional fields. But when adding a subport to a
trunk, requests without segmentation_id/segmentation_type fail.
Example:

$ openstack network trunk set --subport port=port2 trunk10
Failed to add subports to trunk 'trunk10': Invalid input for operation: Invalid 
subport details '{u'port_id': u'f9922f99-f0e4-4420-be73-dbb9ea7904c6'}': 
missing segmentation information. Must specify both segmentation_id and 
segmentation_type.
Neutron server returns request_ids: ['req-7601d1a4-afdf-40e6-9273-1fdab1fc8040']


[0] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/vlan-aware-vms.html#proposed-change

** Affects: neutron
 Importance: Undecided
 Assignee: Vasyl Saienko (vsaienko)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Vasyl Saienko (vsaienko)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649531

Title:
  When adding subports to trunk segmentation details shouldn't be
  mandatory

Status in neutron:
  New

Bug description:
  According to Neutron trunk specification [0] segmentation_id and
  segmentation_type are optional fields. But when adding a subport to a
  trunk, requests without segmentation_id/segmentation_type fail.
  Example:

  $ openstack network trunk set --subport port=port2 trunk10
  Failed to add subports to trunk 'trunk10': Invalid input for operation: 
Invalid subport details '{u'port_id': 
u'f9922f99-f0e4-4420-be73-dbb9ea7904c6'}': missing segmentation information. 
Must specify both segmentation_id and segmentation_type.
  Neutron server returns request_ids: 
['req-7601d1a4-afdf-40e6-9273-1fdab1fc8040']

  
  [0] 
https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/vlan-aware-vms.html#proposed-change

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649531/+subscriptions



[Yahoo-eng-team] [Bug 1640442] Re: glance image-tag-update, not updating a tag whose length is more than 255

2016-12-13 Thread sandeep nandal
** Changed in: glance
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1640442

Title:
  glance image-tag-update, not updating a tag whose length is more than
  255

Status in Glance:
  Confirmed

Bug description:
  The command "glance image-tag-update <IMAGE_ID> <TAG_VALUE>" will
  update the tag to the given <TAG_VALUE>. If the length of the
  <TAG_VALUE> is more than 255, it won't update the tag and throws the
  below error:

  "400 Bad Request: Provided object does not match schema 'image':
  
u'123niteshgjkdhgjfdghfdjghfdjkghjfdghsdjfghjsdfghfdjghfdjghfdjghsdfgjkshdkjsfhdgljksfdghfsdkjghfdsjkdkghdsfjghdfjkhgjkdfghjkdfghjsdfkghfdjkghdfjkghsdfkjghsdfgkljsfhdgsjkdfghfsdjkghsddjskfndjighfnidughndfjgkhfndjkbhfdnbujfhdnbuidfsdsafsfsdfsdfdsfsdfdsdfdfffsdfsdfdsfsdf'
  is too long: Failed validating 'maxLength' in
  schema['properties']['tags']['items']:: {'maxLength': 255, 'type':
  'string'}: On instance['tags'][0]::
  
u'123niteshgjkdhgjfdghfdjghfdjkghjfdghsdjfghjsdfghfdjghfdjghfdjghsdfgjkshdkjsfhdgljksfdghfsdkjghfdsjkdkghdsfjghdfjkhgjkdfghjkdfghjsdfkghfdjkghdfjkghsdfkjghsdfgkljsfhdgsjkdfghfsdjkghsddjskfndjighfnidughndfjgkhfndjkbhfdnbujfhdnbuidfsdsafsfsdfsdfdsfsdfdsdfdfffsdfsdfdsfsdf'
  (HTTP 400)"
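The 400 is the image schema's maxLength of 255 on each tag. A client-side pre-check mirroring that constraint avoids the failed round trip (the limit is taken from the error message above; validate_tag is an illustrative helper, not a glanceclient API):

```python
MAX_TAG_LENGTH = 255  # from the 'maxLength' in the schema error above


def validate_tag(tag):
    """Raise ValueError for tags the image schema would reject."""
    if not isinstance(tag, str) or not tag:
        raise ValueError('tag must be a non-empty string')
    if len(tag) > MAX_TAG_LENGTH:
        raise ValueError(
            'tag is %d characters; schema allows at most %d'
            % (len(tag), MAX_TAG_LENGTH))
    return tag
```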

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1640442/+subscriptions



[Yahoo-eng-team] [Bug 1649527] [NEW] nova creates an invalid ethernet/bridge interface definition in virsh xml

2016-12-13 Thread Michael Henkel
Public bug reported:

Description
===

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/designer.py#L61
sets the script path of an ethernet interface to ""

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/config.py#L1228
checks script for None. As it is not None but an empty string, it adds an
empty script path element to the ethernet interface definition in the
virsh XML.

Steps to reproduce
==

nova generated virsh:

[root@overcloud-novacompute-0 heat-admin]# cat 2.xml |grep tap -A5 -B3

  [nova-generated interface XML stripped by the list archiver; per the
  description above it contains an empty <script path=''/> element]


XML validation:

[root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 2.xml
Relax-NG validity error : Extra element devices in interleave
2.xml:59: element devices: Relax-NG validity error : Element domain failed to 
validate content
2.xml fails to validate

removing the <script path=''/> element, the XML validation succeeds:

[root@overcloud-novacompute-0 heat-admin]# cat 1.xml |grep tap -A5 -B2
  [the same interface XML without the <script> element; stripped by the
  list archiver]

[root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 1.xml
1.xml validates

The point is that libvirt <2.0.0 is more tolerant, whereas libvirt 2.0.0 throws a segfault:
 
Dec  9 13:30:32 comp1 kernel: libvirtd[1048]: segfault at 8 ip 7fc9ff09e1c3 
sp 7fc9edfef1d0 error 4 in libvirt.so.0.2000.0[7fc9fef4b000+352000]
Dec  9 13:30:32 comp1 journal: End of file while reading data: Input/output 
error
Dec  9 13:30:32 comp1 systemd: libvirtd.service: main process exited, 
code=killed, status=11/SEGV
Dec  9 13:30:32 comp1 systemd: Unit libvirtd.service entered failed state.
Dec  9 13:30:32 comp1 systemd: libvirtd.service failed.
Dec  9 13:30:32 comp1 systemd: libvirtd.service holdoff time over, scheduling 
restart.
Dec  9 13:30:32 comp1 systemd: Starting Virtualization daemon...
Dec  9 13:30:32 comp1 systemd: Started Virtualization daemon. 

Expected result
===
VM can be started.
Instead of checking for None, config.py should check for an empty string
before adding the script path.
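
The suggested check can be sketched as follows (illustrative only, with a
hypothetical helper name; this is not the actual nova patch):

```python
# Sketch of the suggested fix: emit a <script> element only for a
# non-empty path, so the "" set by designer.py no longer yields the
# invalid <script path=''/> that libvirt 2.0.0 chokes on.
from xml.etree import ElementTree as etree

def format_interface(script_path, dev="tap0"):
    iface = etree.Element("interface", type="ethernet")
    etree.SubElement(iface, "target", dev=dev)
    # Old behavior checked `script_path is not None`, which let the
    # empty string through; truthiness rejects both None and "".
    if script_path:
        etree.SubElement(iface, "script", path=script_path)
    return etree.tostring(iface).decode()

print(format_interface(""))  # no <script> element in the output
```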


Actual result
=
VM doesn't start

Environment
===
OSP10/Newton, libvirt 2.0.0

** Affects: nova
 Importance: Undecided
 Assignee: Michael  Henkel (mhenkel-3)
 Status: New

** Summary changed:

- nova creates and invalid ethernet interface definition in virsh xml
+ nova creates an invalid ethernet interface definition in virsh xml

** Summary changed:

- nova creates an invalid ethernet interface definition in virsh xml
+ nova creates an invalid ethernet/bridge interface definition in virsh xml

** Changed in: nova
 Assignee: (unassigned) => Michael  Henkel (mhenkel-3)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1649527

Title:
  nova creates an invalid ethernet/bridge interface definition in virsh
  xml

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/designer.py#L61
  sets the script path of an ethernet interface to "" (an empty string).

  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/config.py#L1228
  checks the script path for None. As it is not None but an empty string, it
  adds an empty <script> element to the ethernet interface definition in the
  virsh XML.

  Steps to reproduce
  ==

  nova generated virsh:

  [root@overcloud-novacompute-0 heat-admin]# cat 2.xml |grep tap -A5 -B3
  [nova-generated interface XML stripped by the list archiver; per the
  description above it contains an empty <script path=''/> element]

  XML validation:

  [root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 2.xml
  Relax-NG validity error : Extra element devices in interleave
  2.xml:59: element devices: Relax-NG validity error : Element domain failed to 
validate content
  2.xml fails to validate

  removing the <script path=''/> element, the XML validation succeeds:

  [root@overcloud-novacompute-0 heat-admin]# cat 1.xml |grep tap -A5 -B2
  [the same interface XML without the <script> element; stripped by the
  list archiver]
  [root@overcloud-novacompute-0 heat-admin]# virt-xml-validate 1.xml
  1.xml validates

  The point is that libvirt <2.0.0 is more tolerant, whereas libvirt 2.0.0
  throws a segfault:
   
  Dec  9 13:30:32 comp1 kernel: libvirtd[1048]: segfault at 8 ip 
7fc9ff09e1c3 sp 7fc9edfef1d0 error 4 in 
libvirt.so.0.2000.0[7fc9fef4b000+352000]
  Dec  9 13:30:32 comp1 journal: End of file while reading data: Input/output 
error
  Dec  9 13:30:32 comp1 systemd: libvirtd.service: main process exited, 
code=killed, status=11/SEGV
  Dec  9 13:30:32 comp1 systemd: Unit libvirtd.service entered failed state.
  Dec  9 13:30:32 comp1 systemd: libvirtd.service failed.
  Dec  9 13:30:32 comp1 systemd: libvirtd.service holdoff time over, scheduling 
restart.
  Dec  9 13:30:32 comp1 systemd: Starting Virtualization daemon...
  Dec  9 13:30:32 comp1 systemd: Started Virtualization daemon. 

  Expected result
  ===
  VM can be started
  

[Yahoo-eng-team] [Bug 1649517] [NEW] qos policy attached to network, qos_policy_id is reflecting on neutron net-show , but not on the port with neutron port-show

2016-12-13 Thread Srinivas Balajinaidu
Public bug reported:

Issue: a qos policy is attached to a network; the qos_policy_id is reflected
in neutron net-show, but not on the port with neutron port-show.


(neutron) net-list
+--------------------------------------+------+-------------------------------------------------+
| id                                   | name | subnets                                         |
+--------------------------------------+------+-------------------------------------------------+
| 2fb0001c-98ba-4c67-9c81-2a19b05f4883 | net2 | ab1493c2-206a-4ac7-9818-f9aa61462399 2.2.2.0/24 |
| 3a8575c2-72dc-4602-a2e1-ab7eeb6421b7 | net1 | 7378f275-5069-4284-a34d-3dfc852015b9 1.1.1.0/24 |
+--------------------------------------+------+-------------------------------------------------+
(neutron)


(neutron) port-list
+--------------------------------------+-------+-------------------+--------------------------------------------------------------------------------+
| id                                   | name  | mac_address       | fixed_ips                                                                      |
+--------------------------------------+-------+-------------------+--------------------------------------------------------------------------------+
| 4330f8a8-6a88-4874-af29-f7defb2a60f2 | port1 | fa:16:3e:dd:a0:37 | {"subnet_id": "7378f275-5069-4284-a34d-3dfc852015b9", "ip_address": "1.1.1.8"} |
| c23db14e-ea16-4f08-a0d5-884ef77eef2f |       | fa:16:3e:d5:40:fa | {"subnet_id": "7378f275-5069-4284-a34d-3dfc852015b9", "ip_address": "1.1.1.2"} |
| d2e6a1c5-030b-4f75-b2cd-f309f5ac9888 | port2 | fa:16:3e:30:29:45 | {"subnet_id": "7378f275-5069-4284-a34d-3dfc852015b9", "ip_address": "1.1.1.9"} |
| f6a624f8-b3c8-4865-a712-d2883e700f70 |       | fa:16:3e:e1:99:d3 | {"subnet_id": "ab1493c2-206a-4ac7-9818-f9aa61462399", "ip_address": "2.2.2.2"} |
+--------------------------------------+-------+-------------------+--------------------------------------------------------------------------------+


(neutron) qos-policy-list
+--+--+
| id   | name |
+--+--+
| 48a4ac9a-729b-42ef-aab6-4b712193e4e2 | qos3 |
| 8f6aee1d-c981-44a5-8edb-32bd84e0b055 | qos2 |
| 94bf3722-2c1f-4c67-b288-0285f4b5690b | qos1 |
+--+--+

(neutron) net-update net1 --qos-policy qos1
Updated network: net1
(neutron)

--------------------------------------------------------
Issue: policy is seen in net-show but not in port-show
--------------------------------------------------------

(neutron) net-show net1
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| availability_zone_hints   |  |
| availability_zones| nova |
| created_at| 2016-12-13T13:12:49Z |
| description   |  |
| id| 3a8575c2-72dc-4602-a2e1-ab7eeb6421b7 |
| ipv4_address_scope|  |
| ipv6_address_scope|  |
| mtu   | 1450 |
| name  | net1 |
| port_security_enabled | True |
| project_id| dd4b9f1005e34daa9e2b8c77d4478bab |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 58   |
| qos_policy_id | 94bf3722-2c1f-4c67-b288-0285f4b5690b |
| revision_number   | 8|
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   | 7378f275-5069-4284-a34d-3dfc852015b9 |
| tags  |  |
| tenant_id | dd4b9f1005e34daa9e2b8c77d4478bab 

[Yahoo-eng-team] [Bug 1649503] [NEW] Mechanism driver can't be notified with updated network

2016-12-13 Thread Hong Hui Xiao
Public bug reported:

When disassociating a qos policy from a network, the ml2 mechanism drivers
will still be notified that the network has the stale qos policy.

This bug can be observed after cd7d63bde92e47a4b7bd4212b2e6c45f08c03143

The same issue will not happen for port.

neutron --debug net-update private --no-qos-policy

DEBUG: keystoneauth.session REQ: curl -g -i -X PUT 
http://192.168.31.90:9696/v2.0/networks/60e7627a-1722-439d-90d4-975fd431df7c.json
 -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}db3122bb702d9094793c5235c47f7b1e544315b2" -d '{"network": 
{"qos_policy_id": null}}'
DEBUG: keystoneauth.session RESP: [200] Content-Type: application/json 
Content-Length: 802 X-Openstack-Request-Id: 
req-7b551082-c2d3-452c-b58a-da9884b24d42 Date: Tue, 13 Dec 2016 08:05:35 GMT 
Connection: keep-alive 
RESP BODY: {"network": {"provider:physical_network": null, 
"ipv6_address_scope": null, "revision_number": 11, "port_security_enabled": 
true, "mtu": 1450, "id": "60e7627a-1722-439d-90d4-975fd431df7c", 
"router:external": false, "availability_zone_hints": [], "availability_zones": 
[], "provider:segmentation_id": 77, "ipv4_address_scope": null, "shared": 
false, "project_id": "e33a0ae9e47e49e8b2b6d65efee75b43", "status": "ACTIVE", 
"subnets": ["230bcb4f-8c2b-4db2-a9aa-325351cd6064", 
"09aa4e9c-fe6b-42d5-b5ca-76443a6c380a"], "description": "", "tags": [], 
"updated_at": "2016-12-13T08:05:34Z", "qos_policy_id": 
"6cd40fa9-092f-43bb-8214-ed79e5174c4f", "name": "private", "admin_state_up": 
true, "tenant_id": "e33a0ae9e47e49e8b2b6d65efee75b43", "created_at": 
"2016-12-13T01:36:43Z", "provider:network_type": "vxlan"}}
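
The ordering behind this symptom can be illustrated with a toy snippet (not
neutron code; all names are hypothetical): if the payload handed to the
mechanism drivers is snapshotted before the disassociation is applied, the
drivers see the old id.

```python
# Toy illustration of the stale-notification pattern: the snapshot is
# taken before the update lands, so subscribers receive the old
# qos_policy_id even though the stored value is already cleared.
def update_network(net, changes, notify):
    payload = dict(net)   # snapshot built too early
    net.update(changes)   # the actual --no-qos-policy disassociation
    notify(payload)       # drivers get the stale snapshot

seen = []
network = {"id": "60e7627a", "qos_policy_id": "6cd40fa9"}
update_network(network, {"qos_policy_id": None}, seen.append)
assert network["qos_policy_id"] is None        # stored value updated
assert seen[0]["qos_policy_id"] == "6cd40fa9"  # drivers saw the stale id
```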

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1649503

Title:
  Mechanism driver can't be notified with updated network

Status in neutron:
  New

Bug description:
  When disassociating a qos policy from a network, the ml2 mechanism
  drivers will still be notified that the network has the stale qos policy.

  This bug can be observed after
  cd7d63bde92e47a4b7bd4212b2e6c45f08c03143

  The same issue will not happen for port.

  neutron --debug net-update private --no-qos-policy

  DEBUG: keystoneauth.session REQ: curl -g -i -X PUT 
http://192.168.31.90:9696/v2.0/networks/60e7627a-1722-439d-90d4-975fd431df7c.json
 -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}db3122bb702d9094793c5235c47f7b1e544315b2" -d '{"network": 
{"qos_policy_id": null}}'
  DEBUG: keystoneauth.session RESP: [200] Content-Type: application/json 
Content-Length: 802 X-Openstack-Request-Id: 
req-7b551082-c2d3-452c-b58a-da9884b24d42 Date: Tue, 13 Dec 2016 08:05:35 GMT 
Connection: keep-alive 
  RESP BODY: {"network": {"provider:physical_network": null, 
"ipv6_address_scope": null, "revision_number": 11, "port_security_enabled": 
true, "mtu": 1450, "id": "60e7627a-1722-439d-90d4-975fd431df7c", 
"router:external": false, "availability_zone_hints": [], "availability_zones": 
[], "provider:segmentation_id": 77, "ipv4_address_scope": null, "shared": 
false, "project_id": "e33a0ae9e47e49e8b2b6d65efee75b43", "status": "ACTIVE", 
"subnets": ["230bcb4f-8c2b-4db2-a9aa-325351cd6064", 
"09aa4e9c-fe6b-42d5-b5ca-76443a6c380a"], "description": "", "tags": [], 
"updated_at": "2016-12-13T08:05:34Z", "qos_policy_id": 
"6cd40fa9-092f-43bb-8214-ed79e5174c4f", "name": "private", "admin_state_up": 
true, "tenant_id": "e33a0ae9e47e49e8b2b6d65efee75b43", "created_at": 
"2016-12-13T01:36:43Z", "provider:network_type": "vxlan"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1649503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp