[Yahoo-eng-team] [Bug 1582224] Re: Post liberty upgrade neutron-server won't start - seemingly related to ml2_vlan_allocations table

2016-07-18 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582224

Title:
  Post liberty upgrade neutron-server won't start - seemingly related to
  ml2_vlan_allocations table

Status in neutron:
  Expired

Bug description:
  Environment:
  {{{
  ii  neutron-plugin-ml2   2:7.0.3-0ubuntu1~cloud0   all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server       2:7.0.3-0ubuntu1~cloud0   all  Neutron is a virtual network service for Openstack - server

  :~# lsb_release -a
  No LSB modules are available.
  Distributor ID:   Ubuntu
  Description:      Ubuntu 14.04.4 LTS
  Release:          14.04
  Codename:         trusty
  Kernel:           3.18.26-x1-64
  }}}

  After upgrading from Kilo to Liberty (using Ubuntu Cloud Archive
  packages), neutron-server fails to start.

  We have been able to trace this back to the following query.

  {{{
  [SQL:
  u'SELECT ml2_vlan_allocations.physical_network AS ml2_vlan_allocations_physical_network, ml2_vlan_allocations.vlan_id AS ml2_vlan_allocations_vlan_id, ml2_vlan_allocations.allocated AS ml2_vlan_allocations_allocated \nFROM ml2_vlan_allocations FOR UPDATE']
  }}}

  It appears that neutron is holding a lock on the MySQL database, and a
  little later MySQL gives up waiting on the lock, causing the following:

  {{{
  neutron [-] DBError: (pymysql.err.InternalError) (1205, u'Lock wait timeout exceeded; try restarting transaction')
  }}}
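MySQL error 1205 means a second transaction waited longer than innodb_lock_wait_timeout for locks held by another session, here the `SELECT ... FOR UPDATE` shown earlier. As an illustrative sketch only (stdlib sqlite3 rather than MySQL/pymysql, with database-level rather than row-level locking, and hypothetical table names), the same failure mode of a writer timing out on another writer's lock looks like this:

```python
import os
import sqlite3
import tempfile

# Writer A takes and holds a write lock; writer B, with a short lock
# timeout, fails in the same way a MySQL session fails with error 1205.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0.1, isolation_level=None)
b = sqlite3.connect(path, timeout=0.1, isolation_level=None)

a.execute("CREATE TABLE vlan_allocations (vlan_id INTEGER, allocated BOOLEAN)")

a.execute("BEGIN IMMEDIATE")              # acquire the write lock and hold it
a.execute("INSERT INTO vlan_allocations VALUES (300, 0)")

try:
    b.execute("BEGIN IMMEDIATE")          # second writer times out waiting
    locked_out = False
except sqlite3.OperationalError as exc:
    locked_out = "locked" in str(exc)

a.execute("ROLLBACK")
assert locked_out
```

The point of the sketch is only the shape of the failure: whoever holds the lock must commit or roll back before the timeout, otherwise every other writer errors out, which matches neutron-server stalling during startup.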

  Example of where Neutron Server is stuck starting up:

  {{{
  2016-05-16 12:39:07.859 13463 INFO neutron.manager [-] Loading core plugin: ml2
  2016-05-16 12:39:07.960 13463 INFO neutron.plugins.ml2.managers [-] Configured type driver names: ['flat', 'gre', 'vlan']
  2016-05-16 12:39:07.963 13463 INFO neutron.plugins.ml2.drivers.type_flat [-] Allowable flat physical_network names: ['external']
  2016-05-16 12:39:07.966 13463 INFO neutron.plugins.ml2.drivers.type_vlan [-] Network VLAN ranges: {'vlan': [(300, 4000)]}
  2016-05-16 12:39:07.972 13463 INFO neutron.plugins.ml2.managers [-] Loaded type driver names: ['flat', 'vlan', 'gre']
  2016-05-16 12:39:07.973 13463 INFO neutron.plugins.ml2.managers [-] Registered types: ['flat', 'vlan', 'gre']
  2016-05-16 12:39:07.973 13463 INFO neutron.plugins.ml2.managers [-] Tenant network_types: ['gre']
  2016-05-16 12:39:07.974 13463 INFO neutron.plugins.ml2.managers [-] Configured extension driver names: []
  2016-05-16 12:39:07.974 13463 INFO neutron.plugins.ml2.managers [-] Loaded extension driver names: []
  2016-05-16 12:39:07.975 13463 INFO neutron.plugins.ml2.managers [-] Registered extension drivers: []
  2016-05-16 12:39:07.975 13463 INFO neutron.plugins.ml2.managers [-] Configured mechanism driver names: ['openvswitch']
  2016-05-16 12:39:07.977 13463 INFO neutron.plugins.ml2.managers [-] Loaded mechanism driver names: ['openvswitch']
  2016-05-16 12:39:07.978 13463 INFO neutron.plugins.ml2.managers [-] Registered mechanism drivers: ['openvswitch']
  2016-05-16 12:39:08.062 13463 DEBUG neutron.callbacks.manager [-] Subscribe: > rbac-policy before_create subscribe /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:41
  2016-05-16 12:39:08.062 13463 DEBUG neutron.callbacks.manager [-] Subscribe: > rbac-policy before_update subscribe /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:41
  2016-05-16 12:39:08.063 13463 DEBUG neutron.callbacks.manager [-] Subscribe: > rbac-policy before_delete subscribe /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:41
  2016-05-16 12:39:08.064 13463 INFO neutron.plugins.ml2.managers [-] Initializing driver for type 'flat'
  2016-05-16 12:39:08.064 13463 INFO neutron.plugins.ml2.drivers.type_flat [-] ML2 FlatTypeDriver initialization complete
  2016-05-16 12:39:08.065 13463 INFO neutron.plugins.ml2.managers [-] Initializing driver for type 'vlan'
  2016-05-16 12:39:08.115 13463 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:256
  }}}

  We are able to use the following to populate a fresh database, after
  which neutron will start.

  {{{
  su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  }}}

  However, as you can imagine, it's quite important that we are able to
  upgrade the current database!

  Appreciate any help, or links to previous related bugs that may
  improve our situation.

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1600062] Re: Click "Edit Flavor" should open "Flavor information" tab

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/339288
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=fc02a0207da0f3bdfce023789009fc2c529a5bee
Submitter: Jenkins
Branch: master

commit fc02a0207da0f3bdfce023789009fc2c529a5bee
Author: chen.qiao...@99cloud.net 
Date:   Fri Jul 8 00:25:30 2016 +

Modify "Edit Flavor" action

Reproduce:
1 Open Admin/Flavors
2 Click "Edit Flavor" in the table list actions

Expected result:
Open "Flavor Information" tab.

Actual result:
Open "Flavor Access" tab.

Change-Id: I45a4097413e8e5154350459d15e2f45b61f811a8
Closes-Bug: #1600062


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1600062

Title:
  Click "Edit Flavor" should open "Flavor information" tab

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Reproduce:
  1 Open Admin/Flavors
  2 Click "Edit Flavor" in the table list actions

  Expected result:
  Open "Flavor Information" tab.

  Actual result:
  Open "Flavor Access" tab.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1600062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604222] [NEW] [RFE] Implement vlan transparent for openvswitch ML2 driver

2016-07-18 Thread Bo Chi
Public bug reported:

There has been some progress in the ovs community to add QinQ support;
see the email discussion at [1].

This bug is to leverage the effort in the ovs community to add QinQ
support in order to enable vlan transparent networks when using the
openvswitch ML2 driver. This will allow VMs to send out tagged packets
rather than having them dropped.

The thought is to use the vlan_transparent property of the network,
which was added previously but never used in the openvswitch ML2 driver.

[1] https://www.mail-
archive.com/search?a=1=dev%40openvswitch.org=%22qinq+tunneling%22=12=11===2m=2016-07-13==newest

** Affects: neutron
 Importance: Undecided
 Assignee: Bo Chi (bochi-michael)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Bo Chi (bochi-michael)

** Description changed:

  There's some progress in ovs community to add QinQ support, email
  discussion: [1]
  
  This bug is to leverages the effort in the ovs community to add QinQ
  support in order to enable vlan transparent networks when using the
  openvswitch ML2 driver. This will allow VMs to send out tagged packets,
  rather than being dropped.
  
  The thought is to use the vlan_transparent property of network, which is
  added before, but never used in openvswitch ML2 driver.
+ 
+ [1] https://www.mail-
+ 
archive.com/search?a=1=dev%40openvswitch.org=%22qinq+tunneling%22=12=11===2m=2016-07-13==newest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604222

Title:
  [RFE] Implement vlan transparent for openvswitch ML2 driver

Status in neutron:
  New

Bug description:
  There has been some progress in the ovs community to add QinQ support;
  see the email discussion at [1].

  This bug is to leverage the effort in the ovs community to add QinQ
  support in order to enable vlan transparent networks when using the
  openvswitch ML2 driver. This will allow VMs to send out tagged
  packets rather than having them dropped.

  The thought is to use the vlan_transparent property of the network,
  which was added previously but never used in the openvswitch ML2 driver.

  [1] https://www.mail-
  
archive.com/search?a=1=dev%40openvswitch.org=%22qinq+tunneling%22=12=11===2m=2016-07-13==newest

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604222/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-07-18 Thread weiweigu
** Also affects: taskflow
   Importance: Undecided
   Status: New

** Changed in: taskflow
 Assignee: (unassigned) => weiweigu (gu-weiwei)

** No longer affects: taskflow

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in Freezer:
  In Progress
Status in gce-api:
  Fix Released
Status in Glance:
  In Progress
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in Rally:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Triaged
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  In Progress
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  In Progress
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
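The rename is mechanical for stdlib loggers: warn() has been a deprecated alias of warning() since Python 3.3 and was removed from the stdlib Logger in Python 3.13. A minimal sketch of the preferred spelling, routing a demo logger to a StringIO so the record can be inspected:

```python
import io
import logging

# Attach a handler that writes to an in-memory stream.
stream = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.WARNING)

# LOG.warning, not the deprecated LOG.warn:
logger.warning("disk %s is low", "/dev/vda")

assert "disk /dev/vda is low" in stream.getvalue()
```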

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603712] Re: Misuse of assertTrue in L3 DVR test case

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/343248
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=adbcdeb975b8a1c51a96a20c5fa2e511bcc07e48
Submitter: Jenkins
Branch: master

commit adbcdeb975b8a1c51a96a20c5fa2e511bcc07e48
Author: Takashi NATSUME 
Date:   Sun Jul 17 10:13:37 2016 +0900

Fix misuse of assertTrue in L3 DVR test case

Replace assertTrue with assertEqual in a private method
for unit tests.

Change-Id: I15fedf067f01c3993830b8c38b521d0dfd0c7740
Closes-Bug: #1603712


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603712

Title:
  Misuse of assertTrue in L3 DVR test case

Status in neutron:
  Fix Released

Bug description:
  assertEqual should be used instead of assertTrue in the following
  unit tests.

  The _test_update_arp_entry_for_dvr_service_port method of class
  L3DvrTestCase in neutron/tests/unit/db/test_l3_dvr_db.py
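The practical difference is in the failure message: assertTrue(a == b) collapses to "False is not true" and loses the operands, while assertEqual(a, b) reports both values. A small self-contained sketch:

```python
import unittest

class _Demo(unittest.TestCase):
    def runTest(self):
        pass

tc = _Demo()

# assertEqual keeps both operands in the failure message:
try:
    tc.assertEqual(4, 5)
except AssertionError as exc:
    msg_equal = str(exc)

# assertTrue only reports that the already-evaluated expression was falsy:
try:
    tc.assertTrue(4 == 5)
except AssertionError as exc:
    msg_true = str(exc)

assert "4 != 5" in msg_equal
assert "4" not in msg_true   # the operand values are gone
```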

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603712/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604196] [NEW] UX: Misaligned Eye Icon in Create User modal form

2016-07-18 Thread Eddie Ramirez
Public bug reported:

How to reproduce:
1. Go to Identity->Users->Create User
2. A new modal form opens; put different values in "Password" and "Confirm Password".
3. See how an error message is displayed below the "Confirm Password" text input and how the Eye Icon is misaligned because of the message.

Expected result:
Keep Eye icon in place, don't move.

Actual result:
The Eye Icon is moved some pixels to the top.

** Affects: horizon
 Importance: Undecided
 Assignee: Eddie Ramirez (ediardo)
 Status: In Progress


** Tags: ux

** Changed in: horizon
 Assignee: (unassigned) => Eddie Ramirez (ediardo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1604196

Title:
  UX: Misaligned Eye Icon in Create User modal form

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  How to reproduce:
  1. Go to Identity->Users->Create User
  2. A new modal form opens; put different values in "Password" and "Confirm Password".
  3. See how an error message is displayed below the "Confirm Password" text input and how the Eye Icon is misaligned because of the message.

  Expected result:
  Keep Eye icon in place, don't move.

  Actual result:
  The Eye Icon is moved some pixels to the top.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1604196/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603918] Re: The port's dns_name is cleared while the instance boot failed, because the port's dns_name is not equal to the instance's hostname

2016-07-18 Thread Miguel Lavalle
This is not a bug; this is the expected behavior according to the spec.
Please see point 1 here:
http://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/internal-dns-resolution.html#proposed-change

If you have a valid use case for this, I invite you to submit an RFE
following this process:
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603918

Title:
  The port's dns_name is cleared while the instance boot failed, because
  the port's dns_name is not equal to the instance's hostname

Status in neutron:
  Opinion

Bug description:
  In Mitaka, if the value assigned to the dns_name attribute of the port is not equal to the value that the Compute service will assign to the instance's hostname, the instance boot will fail; however, the port's dns_name is cleared.
  Repetition steps are as follows:
  step 1: Create a port, specifying 'my-port' as its dns_name attribute
  step 2: Boot an instance using the port; the hostname assigned to the instance is not equal to the port's dns_name
  step 3: The boot fails
  step 4: Show the port info created in step 1 using the neutron port-show command; the dns_name is cleared.
   
  [root@devstack218 devstack]# neutron port-update port2_net1 --dns-name my-port
  Updated port: port2_net1
  [root@devstack218 devstack]# neutron port-show port2_net1
  
  +-----------------------+----------------------------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                                          |
  +-----------------------+----------------------------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                                           |
  | allowed_address_pairs |                                                                                                                |
  | binding:host_id       |                                                                                                                |
  | binding:profile       | {}                                                                                                             |
  | binding:vif_details   | {}                                                                                                             |
  | binding:vif_type      | unbound                                                                                                        |
  | binding:vnic_type     | normal                                                                                                         |
  | created_at            | 2016-07-12T08:40:47                                                                                            |
  | description           |                                                                                                                |
  | device_id             |                                                                                                                |
  | device_owner          |                                                                                                                |
  | dns_assignment        | {"hostname": "my-port", "ip_address": "198.51.100.12", "fqdn": "my-port.example.org."}                         |
  |                       | {"hostname": "my-port", "ip_address": "2001:db8:80d2:c4d3:f816:3eff:fe44:f8d", "fqdn": "my-port.example.org."} |
  | dns_name              | my-port                                                                                                        |
  | extra_dhcp_opts       |                                                                                                                |
  | fixed_ips             | {"subnet_id": "481cadf6-fa52-4739-80b2-331a3b90d7b6", "ip_address": "198.51.100.12"}                           |
  |                       | {"subnet_id": "60f56f75-ce94-498f-b4ad-0383db2796a8", "ip_address": "2001:db8:80d2:c4d3:f816:3eff:fe44:f8d"}   |
  | id                    | aa89b945-1806-4384-9771-25c44bf7f66d                                                                           |
  | mac_address           | fa:16:3e:44:0f:8d                                                                                              |
  | name                  | port2_net1                                                                                                     |
  | network_id            | d885e8ed-5e70-478f-a279-fd6c00bbb2d7                                                                           |

[Yahoo-eng-team] [Bug 1537062] Re: Fail to boot vm when set AggregateImagePropertiesIsolation filter and add custom metadata in the Host Aggregate

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/307496
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=85a307d3385ecfd641d7842c6450325410f2d3ba
Submitter: Jenkins
Branch: master

commit 85a307d3385ecfd641d7842c6450325410f2d3ba
Author: EdLeafe 
Date:   Mon Apr 18 21:36:09 2016 +

Don't raise error when filtering on custom metadata

Hosts can have custom metadata. There is no restriction on the key names
used in this metadata, so we should not be raising an exception when
checking for the existence of any metadata key.

Originally worked on: https://review.openstack.org/#/c/271401

Co-Authored-By: Xiaowei Qian 

Closes-Bug: #1537062

Change-Id: Ie5ff3c1847e9c4533822a77d443e4ce1fcf047fe
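The fix amounts to tolerating lookups of keys that the ImageMetaProps object does not model. A sketch of the distinction, using a hypothetical stand-in class rather than the real nova object:

```python
class FakeImageMetaProps(object):
    """Hypothetical stand-in for nova.objects.ImageMetaProps."""
    os_distro = "ubuntu"

props = FakeImageMetaProps()

# Direct attribute access on an aggregate key such as 'os' raises, which is
# what aborted scheduling with "ImageMetaProps object has no attribute 'os'":
try:
    props.os
    raised = False
except AttributeError:
    raised = True
assert raised

# A tolerant lookup treats unknown keys as simply unset:
assert getattr(props, "os", None) is None
assert getattr(props, "os_distro", None) == "ubuntu"
```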


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1537062

Title:
  Fail to boot vm when set AggregateImagePropertiesIsolation filter and
  add custom metadata in the Host Aggregate

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  
  An image with no custom metadata should not affect the
  AggregateImagePropertiesIsolation filter.

  Reproduce steps:

  (1) add Host Aggregate with custom metadata
  +----+-----------+-------------------+--------------+------------+
  | Id | Name      | Availability Zone | Hosts        | Metadata   |
  +----+-----------+-------------------+--------------+------------+
  | 1  | linux-agg | -                 | 'controller' | 'os=linux' |
  +----+-----------+-------------------+--------------+------------+

  (2) add AggregateImagePropertiesIsolation filter
  scheduler_default_filters = RetryFilter,AggregateImagePropertiesIsolation,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter

  (3) boot vm and error log:
  2016-01-22 21:00:10.834 ERROR oslo_messaging.rpc.dispatcher [req-1cded809-cfe6-4657-8e31-b494f1b3278d admin admin] Exception during message handling: ImageMetaProps object has no attribute 'os'
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 143, in _dispatch_and_reply
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 189, in _dispatch
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     return func(*args, **kwargs)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/manager.py", line 78, in select_destinations
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     dests = self.driver.select_destinations(ctxt, spec_obj)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 53, in select_destinations
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     selected_hosts = self._schedule(context, spec_obj)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 113, in _schedule
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     spec_obj, index=num)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/scheduler/host_manager.py", line 532, in get_filtered_hosts
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     hosts, spec_obj, index)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/filters.py", line 89, in get_filtered_objects
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     list_objs = list(objs)
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher   File "/opt/stack/nova/nova/filters.py", line 44, in filter_all
  2016-01-22 21:00:10.834 TRACE oslo_messaging.rpc.dispatcher     if self._filter_one(obj, spec_obj):
  2016-01-22 21:00:10.834 TRACE

[Yahoo-eng-team] [Bug 1599435] Re: help of numa_get_reserved_huge_pages is not accurate.

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/338261
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=81310c512ce3b0cfe0ddc73d7b6aaafffbabbc6a
Submitter: Jenkins
Branch: master

commit 81310c512ce3b0cfe0ddc73d7b6aaafffbabbc6a
Author: liu-lixiu 
Date:   Wed Jul 6 22:35:05 2016 +0800

Modify docstring of numa_get_reserved_huge_pages method

Corrected grammar errors in the docstring.

Change-Id: Id40e10973e2ec26ada8f3a3a1bb2d7ac63bccb89
Closes-Bug: #1599435


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1599435

Title:
  help of numa_get_reserved_huge_pages is not accurate.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  version:master

  problem:
  The help of numa_get_reserved_huge_pages is not accurate. It currently reads:
  raises: exceptionInvalidReservedMemoryPagesOption is option is not correctly set.
  It should read:
  raises: exception InvalidReservedMemoryPagesOption when reserved_huge_pages option is not correctly set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1599435/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591996] Re: Serial console output is not properly handled

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/302182
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e641e5c9b5e68e93f1d44c4898ae7e2943d5fe66
Submitter: Jenkins
Branch: master

commit e641e5c9b5e68e93f1d44c4898ae7e2943d5fe66
Author: Lucian Petrut 
Date:   Wed Apr 6 13:23:26 2016 +0300

Py3: fix serial console output

The compute API expects the serial console output to be a string,
attempting to use a regex to remove some characters. This will fail
as we are returning a byte array.

Also, since the API is expected to return the console output as a
string, the compute nodes may just return a string as well.

Closes-Bug: #1591996
Partially implements blueprint: nova-python3-newton

Change-Id: I5d3097f1d30f3b3568a5421e0d68aaf0797c850a


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1591996

Title:
  Serial console output is not properly handled

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The compute API expects the serial console output to be a string and
  attempts to use a regex to remove some characters.
  
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/api/openstack/compute/console_output.py#L70

  This will fail if the compute node is using Python 3, as we are passing a 
byte array.
  
https://github.com/openstack/nova/blob/6d2470ade25b3a58045e7f75afa2629e851ac049/nova/compute/manager.py#L4283-L4297
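The mismatch can be sketched in a few lines: a text-pattern regex cannot be applied to bytes under Python 3, so the console output has to be decoded first (the sample bytes and the utf-8/replace decoding policy below are illustrative assumptions, not the exact nova code):

```python
import re

raw = b"boot log \x07 with control chars"   # byte array from the compute node

# Applying a str pattern to bytes raises under Python 3:
try:
    re.sub(r"[\x00-\x08]", "", raw)
    failed = False
except TypeError:
    failed = True
assert failed

# Decode to str first; then the API-side character stripping works
# (the \x07 control character falls in the stripped range):
text = raw.decode("utf-8", errors="replace")
assert re.sub(r"[\x00-\x08]", "", text) == "boot log  with control chars"
```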

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1591996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604116] [NEW] overcommit ratios should not be able to be set to a negative value

2016-07-18 Thread vu tran
Public bug reported:

Currently the three overcommit ratios (ram_allocation_ratio,
cpu_allocation_ratio, and disk_allocation_ratio) can be set to negative
values.

Nova scheduler filters (e.g. CoreFilter) use these ratios to calculate
free_vcpus/free_ram_mb/usable_disk_mb (which become negative), so the
scheduler filters will eventually filter out any node that has a
negative overcommit ratio.

It makes more sense for these three ratios to reject negative values.
If any of them is negative, the nova-compute service should fail to
start.

Steps to reproduce on devstack:

* On the compute node, modify /etc/nova/nova.conf to set "cpu_allocation_ratio = -1.0"
* Restart nova-compute (n-cpu); the nova-compute service comes up and keeps running (we should expect nova-compute to fail to start)

** Affects: nova
 Importance: Undecided
 Assignee: vu tran (vu-tran)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => vu tran (vu-tran)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604116

Title:
  overcommit ratios should not be able to be set to a negative value

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently the three overcommit ratios (ram_allocation_ratio,
  cpu_allocation_ratio, and disk_allocation_ratio) can be set to negative
  values.

  Nova scheduler filters (e.g. CoreFilter) use these ratios to calculate
  free_vcpus/free_ram_mb/usable_disk_mb (which become negative), so the
  scheduler filters will eventually filter out any node that has a
  negative overcommit ratio.

  It makes more sense for these three ratios to reject negative values.
  If any of them is negative, the nova-compute service should fail to
  start.

  Steps to reproduce on devstack:

  * On the compute node, modify /etc/nova/nova.conf to set "cpu_allocation_ratio = -1.0"
  * Restart nova-compute (n-cpu); the nova-compute service comes up and keeps running (we should expect nova-compute to fail to start)
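The arithmetic behind the filtering is simple; the sketch below mirrors the CoreFilter-style capacity calculation described above (the function name and signature are illustrative, not the actual nova code):

```python
def free_vcpus(total_vcpus, used_vcpus, cpu_allocation_ratio):
    # CoreFilter-style capacity: limit = total * ratio, free = limit - used.
    return total_vcpus * cpu_allocation_ratio - used_vcpus

# A sane overcommit ratio leaves capacity:
assert free_vcpus(16, 4, 2.0) == 28.0

# A negative ratio makes every host look over capacity, so the
# scheduler rejects the node even when it is completely idle:
assert free_vcpus(16, 0, -1.0) < 0
```

This is why a negative ratio silently removes the node from scheduling instead of failing loudly at service startup.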

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1604116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604115] [NEW] test_cleanup_stale_devices functional test sporadic failures

2016-07-18 Thread Assaf Muller
Public bug reported:

19 hits in the last 7 days

build_status:"FAILURE" AND message:", in test_cleanup_stale_devices" AND
build_name:"gate-neutron-dsvm-functional"

Example TRACE failure:
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/console.html#_2016-07-18_16_42_45_219041

Example log from testrunner:
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_dhcp.TestDhcp.test_cleanup_stale_devices.txt.gz

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604115

Title:
  test_cleanup_stale_devices functional test sporadic failures

Status in neutron:
  Confirmed

Bug description:
  19 hits in the last 7 days

  build_status:"FAILURE" AND message:", in test_cleanup_stale_devices"
  AND build_name:"gate-neutron-dsvm-functional"

  Example TRACE failure:
  
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/console.html#_2016-07-18_16_42_45_219041

  Example log from testrunner:
  
http://logs.openstack.org/55/342955/3/check/gate-neutron-dsvm-functional/0cd557d/logs/dsvm-functional-logs/neutron.tests.functional.agent.linux.test_dhcp.TestDhcp.test_cleanup_stale_devices.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1604110] [NEW] when ngimages is set as default panel, page loops infinitely

2016-07-18 Thread Michael Xiong
Public bug reported:

Steps to reproduce the issue:
1. Comment out 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/enabled/_1020_project_overview_panel.py#L21

# DEFAULT_PANEL = 'overview'

This disables 'overview' as being the default panel of project
dashboard.

In
https://github.com/openstack/horizon/blob/master/openstack_dashboard/enabled/_1051_project_ng_images_panel.py

2. Set DISABLED = False
3. Add line: DEFAULT_PANEL = 'ngimages'

4. Load the project dashboard.
The ngimages panel will reload itself infinitely. I set a debug point in the 
images.module.js and saw that the loading of js modules would occur again and 
again, but the page itself wouldn't load.
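For reference, the two edits correspond to an enabled-file along these lines (key names follow Horizon's pluggable-panel convention; treat the exact values and module path as illustrative):

```python
# Hypothetical local override file, e.g.
# openstack_dashboard/local/enabled/_1051_project_ng_images_panel.py
PANEL = 'ngimages'
PANEL_DASHBOARD = 'project'
PANEL_GROUP = 'default'
DISABLED = False            # step 2: enable the ngimages panel
DEFAULT_PANEL = 'ngimages'  # step 3: make it the dashboard default
```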

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1604110

Title:
  when ngimages is set as default panel, page loops infinitely

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce the issue:
  1. Comment out 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/enabled/_1020_project_overview_panel.py#L21

  # DEFAULT_PANEL = 'overview'

  This disables 'overview' as being the default panel of project
  dashboard.

  In
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/enabled/_1051_project_ng_images_panel.py

  2. Set DISABLED = False
  3. Add line: DEFAULT_PANEL = 'ngimages'

  4. Load the project dashboard.
  The ngimages panel will reload itself infinitely. I set a debug point in the 
images.module.js and saw that the loading of js modules would occur again and 
again, but the page itself wouldn't load.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1604110/+subscriptions

-- 



[Yahoo-eng-team] [Bug 1593001] Re: Horizon Workflows should be one experience

2016-07-18 Thread Diana Whitten
** Changed in: horizon
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1593001

Title:
  Horizon Workflows should be one experience

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  The experiences of the legacy (Django) workflows and the modern
  (Angular-based) workflows are drastically different.  They should be
  aligned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1593001/+subscriptions



[Yahoo-eng-team] [Bug 1604104] [NEW] SRU cloud-init power_condition to trusty

2016-07-18 Thread Blake Rouse
Public bug reported:

MAAS uses the power_condition feature to prevent poweroff when enlisting
and commissioning with Ubuntu. This works on Xenial but to keep the code
path the same and make trusty perform as expected this feature needs to
be SRU'd into trusty.
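For context, the feature gates power-off on a condition check in cloud-init's power_state module; a hedged cloud-config sketch (the command shown is illustrative, not MAAS's actual check):

```yaml
#cloud-config
power_state:
  mode: poweroff
  message: Enlistment complete, powering off
  # If this command exits non-zero, the power off is skipped
  # (illustrative gate command, not MAAS's real one).
  condition: test ! -e /tmp/block-poweroff
```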

** Affects: cloud-init
 Importance: Undecided
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1604104

Title:
  SRU cloud-init power_condition to trusty

Status in cloud-init:
  Confirmed

Bug description:
  MAAS uses the power_condition feature to prevent poweroff when
  enlisting and commissioning with Ubuntu. This works on Xenial but to
  keep the code path the same and make trusty perform as expected this
  feature needs to be SRU'd into trusty.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1604104/+subscriptions



[Yahoo-eng-team] [Bug 1604064] Re: ovn ml2 mechanism driver tcp connectors

2016-07-18 Thread Ryan Moats
This may have neutron pieces that need to be fixed, but the defect as
written should also include the networking-ovn project.

Also, removed the ovn tag because that's not valid. Does neutron have a
pure ml2 tag now?


** Also affects: networking-ovn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604064

Title:
  ovn ml2 mechanism driver tcp connectors

Status in networking-ovn:
  New
Status in neutron:
  New

Bug description:
  Bug description:
  When a TCP connection from the OVN ml2 mechanism driver dies (in my scenario, 
this is due to a UCARP fail over) a new TCP connection does not get generated 
for port monitoring.

  Reproduction steps:
  1. Set up UCARP between 2 nodes
  2. Set OVN north database and south database on both nodes
  3. Point the ml2 driver to the UCARP address (north and south ports)
  4. Point the ovn-controllers to the UCARP address (south database port)
  5. Boot a VM
  6. View VM entries in the north database and south database OVN tables
  7. See that port status is UP in north database
  8. See that Neutron still has status of VM as down

  **Temporary solution is to reboot neutron-server, thus resetting the TCP 
connections
  **I have not verified the problem is TCP connections, but it's currently my 
best guess.


  Linux Version: Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1604064/+subscriptions



[Yahoo-eng-team] [Bug 1482701] Re: Federation: user's name in rules not respected

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/335617
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=2042c955c81929deb47bc8cc77082b085faaa47d
Submitter: Jenkins
Branch:master

commit 2042c955c81929deb47bc8cc77082b085faaa47d
Author: Roxana Gherle 
Date:   Wed Jun 29 11:21:13 2016 -0700

Fix the username value in federated tokens

Currently, in both unscoped and scoped federated tokens, the
username value in the token is equal to the userid and not to
the value of the username in the external identity provider.
This makes WebSSO login to show the userid of the logged-in
user in the Horizon dashboard, whereas before it was showing
the actual user name.

This patch fixes the value of the username in the federated
tokens, which will fix the WebSSO issue as well, since Horizon
looks at the username value and displays that as the logged-in user.

Closes-Bug: #1597101
Closes-Bug: #1482701
Change-Id: I33a0274641c4e6bc4e127f5206ba9bc7dbd8e5a8


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1482701

Title:
  Federation: user's name in rules not respected

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  For a mapping rule  (see local's user name and user id are different)

  [
  {
  "local": [
  {
  "group": {
  "id": "852d0dc079cf4709813583e92498e625"
  }
  },
  {
  "user": {
  "id": "marek",
  "name": "federated_user"
  }
  }
  ],
  "remote": [
  {
  "any_one_of": [
  "user1",
  "admin"
  ],
  "type": "openstack_user"
  }
  ]
  }
  ]

  I can authenticate via federated workflow but the token JSON response
  has (see id and name are identical):

  u'user': {u'OS-FEDERATION': {u'groups': [{u'id': 
u'852d0dc079cf4709813583e92498e625'}],
   u'identity_provider': {u'id': 
u'keystone-idp'},
   u'protocol': {u'id': u'saml2'}},
    u'domain': {u'id': u'Federated',
    u'name': u'Federated'},
    u'id': u'marek',
    u'name': u'marek'}}}

  This happens for both UUID and Fernet tokens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1482701/+subscriptions



[Yahoo-eng-team] [Bug 1597101] Re: WebSSO username shows as a UUID in the Horizon page

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/335617
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=2042c955c81929deb47bc8cc77082b085faaa47d
Submitter: Jenkins
Branch:master

commit 2042c955c81929deb47bc8cc77082b085faaa47d
Author: Roxana Gherle 
Date:   Wed Jun 29 11:21:13 2016 -0700

Fix the username value in federated tokens

Currently, in both unscoped and scoped federated tokens, the
username value in the token is equal to the userid and not to
the value of the username in the external identity provider.
This makes WebSSO login to show the userid of the logged-in
user in the Horizon dashboard, whereas before it was showing
the actual user name.

This patch fixes the value of the username in the federated
tokens, which will fix the WebSSO issue as well, since Horizon
looks at the username value and displays that as the logged-in user.

Closes-Bug: #1597101
Closes-Bug: #1482701
Change-Id: I33a0274641c4e6bc4e127f5206ba9bc7dbd8e5a8


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597101

Title:
  WebSSO username shows as a UUID in the Horizon page

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When you log into Horizon using Web Single Sign On with the saml2 or oidc 
federation protocols, the logged-in user shows as a UUID (the user's ID) in the 
Horizon page. This differs from before, when the specific username from the 
external identity provider was shown by the Horizon dashboard.
  This happens because both the unscoped and scoped federated tokens have both 
the user.id and user.name the ID of the user. The actual username does not show 
in the federated token.

  This change in the behavior seems to have happened after introducing
  the shadow users functionality: the token contained the username for
  both user.id and user.name in the pre-mitaka releases, but now both
  contain the UUID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1597101/+subscriptions



[Yahoo-eng-team] [Bug 1604078] [NEW] Hyper-V: planned vms are not cleaned up

2016-07-18 Thread Lucian Petrut
Public bug reported:

We create a planned vm during live migration when having passthrough
disks attached in order to properly configure the resources of the 'new'
instance.

The issue is that if the migration fails, this planned vm is not cleaned
up.

Although planned vms are destroyed at a second attempt to migrate the
instance, this issue had an impact on the Hyper-V CI as planned vms
persisted among CI runs and vms having the same name failed to spawn, as
there were file handles kept open by the VMMS service, preventing the
instance path from being cleaned up.

Trace:
http://paste.openstack.org/show/536149/

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1604078

Title:
  Hyper-V: planned vms are not cleaned up

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  We create a planned vm during live migration when having passthrough
  disks attached in order to properly configure the resources of the
  'new' instance.

  The issue is that if the migration fails, this planned vm is not
  cleaned up.

  Although planned vms are destroyed at a second attempt to migrate the
  instance, this issue had an impact on the Hyper-V CI as planned vms
  persisted among CI runs and vms having the same name failed to spawn,
  as there were file handles kept open by the VMMS service, preventing
  the instance path from being cleaned up.

  Trace:
  http://paste.openstack.org/show/536149/

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1604078/+subscriptions



[Yahoo-eng-team] [Bug 1604073] [NEW] neutron-lib's validate_values function is incomplete and might fail for some checks

2016-07-18 Thread Pablo
Public bug reported:

From comments on https://review.openstack.org/#/c/337237/, we tried to
ensure some validations are working properly, but when we rely on the
validate_values function, it should also handle the case where
'valid_values' does not provide a "__contains__" method.
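A standalone sketch of the defensive check being requested (hypothetical code following the validators' return-a-message-or-None convention, not the actual neutron-lib implementation):

```python
def validate_values(data, valid_values=None):
    """Return None when data is acceptable, an error message otherwise
    (hypothetical sketch, not the actual neutron-lib implementation)."""
    if valid_values is None:
        return None
    # Guard against 'valid_values' objects that do not support
    # membership tests (the case the bug report asks to handle).
    if not hasattr(valid_values, '__contains__'):
        return ("'valid_values' of type %s does not support membership "
                "tests" % type(valid_values).__name__)
    if data not in valid_values:
        return "%r is not in %r" % (data, valid_values)
    return None
```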

** Affects: neutron
 Importance: Undecided
 Assignee: Pablo (iranzo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Pablo (iranzo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604073

Title:
  neutron-lib's validate_values function is incomplete and might fail
  for some checks

Status in neutron:
  New

Bug description:
  From comments on https://review.openstack.org/#/c/337237/, we tried to
  ensure some validations are working properly, but when we rely on the
  validate_values function, it should also handle the case where
  'valid_values' does not provide a "__contains__" method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604073/+subscriptions



[Yahoo-eng-team] [Bug 1574703] Re: Stacks page isn't refreshed after stack deletion sometimes

2016-07-18 Thread venkatamahesh
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1574703

Title:
  Stacks page isn't refreshed after stack deletion sometimes

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Autotests detected http://logs.openstack.org/58/308458/12/check/gate-
  horizon-dsvm-integration/08f3893/screenshots/

  Steps:
  - Go to Orchestration -> Stacks
  - Launch stack
  - Delete stack

  Expected result:
  - Stack is deleted, table is empty

  Actual result:
  - horizon shows that stack is present, but in heat logs there is response 
that no stacks:
  
http://logs.openstack.org/58/308458/12/check/gate-horizon-dsvm-integration/08f3893/logs/screen-h-api.txt.gz#_2016-04-25_14_10_50_814

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1574703/+subscriptions



[Yahoo-eng-team] [Bug 1604064] [NEW] ovn ml2 mechanism driver tcp connectors

2016-07-18 Thread Daniel
Public bug reported:

Bug description:
When a TCP connection from the OVN ml2 mechanism driver dies (in my scenario, 
this is due to a UCARP fail over) a new TCP connection does not get generated 
for port monitoring.

Reproduction steps:
1. Set up UCARP between 2 nodes
2. Set OVN north database and south database on both nodes
3. Point the ml2 driver to the UCARP address (north and south ports)
4. Point the ovn-controllers to the UCARP address (south database port)
5. Boot a VM
6. View VM entries in the north database and south database OVN tables
7. See that port status is UP in north database
8. See that Neutron still has status of VM as down

**Temporary solution is to reboot neutron-server, thus resetting the TCP 
connections
**I have not verified the problem is TCP connections, but it's currently my 
best guess.


Linux Version: Ubuntu 14.04
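Generically, the missing behaviour amounts to a reconnect loop; a plain-socket sketch of the pattern (illustrative only, not the actual networking-ovn/ovsdb client code):

```python
import socket
import time

def connect_with_retry(host, port, retries=5, delay=0.2):
    """Re-dial the endpoint until it answers instead of keeping a dead
    socket forever (generic pattern, illustrative only)."""
    last_err = None
    for _ in range(retries):
        try:
            return socket.create_connection((host, port), timeout=5)
        except OSError as err:
            last_err = err
            time.sleep(delay)
    raise ConnectionError(
        "could not reach %s:%s after %d attempts: %s"
        % (host, port, retries, last_err))
```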

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ml2 ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604064

Title:
  ovn ml2 mechanism driver tcp connectors

Status in neutron:
  New

Bug description:
  Bug description:
  When a TCP connection from the OVN ml2 mechanism driver dies (in my scenario, 
this is due to a UCARP fail over) a new TCP connection does not get generated 
for port monitoring.

  Reproduction steps:
  1. Set up UCARP between 2 nodes
  2. Set OVN north database and south database on both nodes
  3. Point the ml2 driver to the UCARP address (north and south ports)
  4. Point the ovn-controllers to the UCARP address (south database port)
  5. Boot a VM
  6. View VM entries in the north database and south database OVN tables
  7. See that port status is UP in north database
  8. See that Neutron still has status of VM as down

  **Temporary solution is to reboot neutron-server, thus resetting the TCP 
connections
  **I have not verified the problem is TCP connections, but it's currently my 
best guess.


  Linux Version: Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604064/+subscriptions



[Yahoo-eng-team] [Bug 1604056] [NEW] UX: Sort Indicator does not fit in table cell

2016-07-18 Thread Eddie Ramirez
Public bug reported:

How to reproduce:

1. Go to Admin -> Metadata Definitions
2. Locate the "Protected" column (header cell)
3. Click the header cell to sort the column.
4. See how the Sort Indicator appears in the bottom of the cell

Actual result:
The sort indicator does not fit in the header cell, making the table header 
increase in size.

Expected result:
The sort indicator should be located to the right of the text, just as in the 
rest of the columns.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit ux

** Attachment added: "Screenshot"
   
https://bugs.launchpad.net/bugs/1604056/+attachment/4702924/+files/protected.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1604056

Title:
  UX: Sort Indicator does not fit in table cell

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  How to reproduce:

  1. Go to Admin -> Metadata Definitions
  2. Locate the "Protected" column (header cell)
  3. Click the header cell to sort the column.
  4. See how the Sort Indicator appears in the bottom of the cell

  Actual result:
  The sort indicator does not fit in the header cell, making the table header 
increase in size.

  Expected result:
  The sort indicator should be located to the right of the text, just as in 
the rest of the columns.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1604056/+subscriptions



[Yahoo-eng-team] [Bug 1466476] Re: Change of the default value of user_identity in the log format

2016-07-18 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466476

Title:
  Change of the default value of user_identity in the log format

Status in neutron:
  Fix Released

Bug description:
  A default value of user_identity in the log format is user_id and
  project_id[1].

  For example:
   user_name:admin
   user_id:765fcde1b35349a4b3b06c227df90d18
   project_name:admin
   project_id:0862ba8c3497455a8fdf40c49f0f264

  the following logs are part of "neutron net-list" result.
   2015-06-18 19:26:53.636 15006 INFO neutron.plugins.ml2.managers 
[req-7e8c308e-840f-4a0d-9e1e-b6a0682aa9c2 765fcde1b35349a4b3b06c227df90d18 
0862ba8c3497455a8fdf40c49f0f2644 - - -] Extended network dict for driver 
'port_security'
   2015-06-18 19:26:53.643 15006 INFO neutron.wsgi 
[req-7e8c308e-840f-4a0d-9e1e-b6a0682aa9c2 765fcde1b35349a4b3b06c227df90d18 
0862ba8c3497455a8fdf40c49f0f2644 - - -] 192.168.122.141 - - [18/Jun/2015 
19:26:53] "GET /v2.0/networks.json HTTP/1.1" 200 1394 0.309419

  In this case, the user must query keystone to map the user_id and
  project_id in this log to a user name and project name.

  Changing the user_id and project_id to the user_name and project_name
  makes it easy to tell who executed an API and when.
  Like the following:
   2015-06-18 19:28:25.002 15781 INFO neutron.plugins.ml2.managers 
[req-4e6f0393-a613-41ba-9064-2b88af3c2686 admin admin - - -] Extended network 
dict for driver 'port_security'
   2015-06-18 19:28:25.008 15781 INFO neutron.wsgi 
[req-4e6f0393-a613-41ba-9064-2b88af3c2686 admin admin - - -] 192.168.122.141 - 
- [18/Jun/2015 19:28:25] "GET /v2.0/networks.json HTTP/1.1" 200 1394 0.311092

  
  [1]
  If logging_context_format_string in neutron.conf is not set.
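For example, an operator can override the default today (illustrative neutron.conf fragment; the %(user_name)s and %(project_name)s keys assume an oslo.context version new enough to expose them to logging):

```ini
[DEFAULT]
# Log names instead of IDs in the request context portion of each line.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s
```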

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466476/+subscriptions



[Yahoo-eng-team] [Bug 1604022] [NEW] nova-coverage job fails with MismatchError

2016-07-18 Thread Matt Riedemann
*** This bug is a duplicate of bug 1603979 ***
https://bugs.launchpad.net/bugs/1603979

Public bug reported:

Seen here:

http://logs.openstack.org/3f/3f700b5a5a498ba08e77378d34f059c3fa6845d8/post/nova-coverage-db/2bdabf8/console.html

Also failing locally with this change to use constraints:
https://review.openstack.org/#/c/343297/

2016-07-17 23:08:46.572302 | FAIL: 
nova.tests.unit.test_context.ContextTestCase.test_convert_from_rc_to_dict
2016-07-17 23:08:46.572320 | tags: worker-7
2016-07-17 23:08:46.572350 | 
--
2016-07-17 23:08:46.572366 | Empty attachments:
2016-07-17 23:08:46.572382 |   pythonlogging:''
2016-07-17 23:08:46.572395 |   stderr
2016-07-17 23:08:46.572408 |   stdout
2016-07-17 23:08:46.572419 | 
2016-07-17 23:08:46.572440 | Traceback (most recent call last):
2016-07-17 23:08:46.572473 |   File "nova/tests/unit/test_context.py", line 
203, in test_convert_from_rc_to_dict
2016-07-17 23:08:46.572496 | self.assertEqual(expected_values, values2)
2016-07-17 23:08:46.572545 |   File 
"/home/jenkins/workspace/nova-coverage-db/.tox/cover/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
2016-07-17 23:08:46.572574 | self.assertThat(observed, matcher, message)
2016-07-17 23:08:46.572623 |   File 
"/home/jenkins/workspace/nova-coverage-db/.tox/cover/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
2016-07-17 23:08:46.572642 | raise mismatch_error
2016-07-17 23:08:46.572665 | testtools.matchers._impl.MismatchError: !=:
2016-07-17 23:08:46.572685 | reference = {'auth_token': None,
2016-07-17 23:08:46.572700 |  'domain': None,
2016-07-17 23:08:46.572719 |  'instance_lock_checked': False,
2016-07-17 23:08:46.572735 |  'is_admin': False,
2016-07-17 23:08:46.572753 |  'project_domain': None,
2016-07-17 23:08:46.572769 |  'project_id': 222,
2016-07-17 23:08:46.572786 |  'project_name': None,
2016-07-17 23:08:46.572802 |  'quota_class': None,
2016-07-17 23:08:46.572819 |  'read_deleted': 'no',
2016-07-17 23:08:46.572835 |  'read_only': False,
2016-07-17 23:08:46.572852 |  'remote_address': None,
2016-07-17 23:08:46.572878 |  'request_id': 
'req-679033b7-1755-4929-bf85-eb3bfaef7e0b',
2016-07-17 23:08:46.572895 |  'resource_uuid': None,
2016-07-17 23:08:46.572909 |  'roles': [],
2016-07-17 23:08:46.572926 |  'service_catalog': [],
2016-07-17 23:08:46.572944 |  'show_deleted': False,
2016-07-17 23:08:46.573285 |  'tenant': 222,
2016-07-17 23:08:46.573317 |  'timestamp': '2015-03-02T22:31:56.641629',
2016-07-17 23:08:46.57 |  'user': 111,
2016-07-17 23:08:46.573350 |  'user_domain': None,
2016-07-17 23:08:46.573365 |  'user_id': 111,
2016-07-17 23:08:46.573386 |  'user_identity': '111 222 - - -',
2016-07-17 23:08:46.573402 |  'user_name': None}
2016-07-17 23:08:46.573421 | actual= {'auth_token': None,
2016-07-17 23:08:46.573436 |  'domain': None,
2016-07-17 23:08:46.573456 |  'instance_lock_checked': False,
2016-07-17 23:08:46.573472 |  'is_admin': False,
2016-07-17 23:08:46.573490 |  'is_admin_project': True,
2016-07-17 23:08:46.573508 |  'project_domain': None,
2016-07-17 23:08:46.573542 |  'project_id': 222,
2016-07-17 23:08:46.573565 |  'project_name': None,
2016-07-17 23:08:46.573583 |  'quota_class': None,
2016-07-17 23:08:46.573600 |  'read_deleted': 'no',
2016-07-17 23:08:46.573616 |  'read_only': False,
2016-07-17 23:08:46.573634 |  'remote_address': None,
2016-07-17 23:08:46.573660 |  'request_id': 
'req-679033b7-1755-4929-bf85-eb3bfaef7e0b',
2016-07-17 23:08:46.573684 |  'resource_uuid': None,
2016-07-17 23:08:46.573700 |  'roles': [],
2016-07-17 23:08:46.573718 |  'service_catalog': [],
2016-07-17 23:08:46.573735 |  'show_deleted': False,
2016-07-17 23:08:46.573750 |  'tenant': 222,
2016-07-17 23:09:14.685049 |  'timestamp': '2015-03-02T22:31:5Version 1 is 
deprecated, use alternative version 2 instead.
2016-07-17 23:09:25.086731 | Version 1 is deprecated, use alternative version 2 
instead.
2016-07-17 23:09:27.836462 | Version 1 is deprecated, use alternative version 2 
instead.
2016-07-17 23:09:28.039136 | Version 1 is deprecated, use alternative version 2 
instead.
2016-07-17 23:09:35.827867 | Version 1 is deprecated, use alternative version 2 
instead.
2016-07-17 23:09:36.085779 | Version 1 is deprecated, use alternative version 2 
instead.
2016-07-17 23:10:24.895628 | Version 1 is deprecated, use alternative version 2 
instead.
2016-07-17 23:10:25.100091 | Version 1 is deprecated, use alternative version 2 
instead.
2016-07-17 23:11:23.391327 | 6.641629',
2016-07-17 23:11:23.391388 |  'user': 111,
2016-07-17 23:11:23.391409 |  'user_domain': None,
2016-07-17 23:11:23.391426 |  'user_id': 111,
2016-07-17 23:11:23.391447 |  'user_identity': u'111 222 - - -',
2016-07-17 23:11:23.391465 |  'user_name': None}
2016-07-17 23:11:23.391497 | 
==
2016-07-17 23:11:23.391532 | 

[Yahoo-eng-team] [Bug 1603861] Re: wrong check condition for revoke event

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/342034
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=9df02bfb551b81e99ee4cc81f11e3881cd4ed80a
Submitter: Jenkins
Branch:master

commit 9df02bfb551b81e99ee4cc81f11e3881cd4ed80a
Author: Dave Chen 
Date:   Thu Jul 14 16:06:18 2016 +0800

Fix the wrong check condition

Keystone has the code to prevent `None` value to be returned in the
revoke event, but there is wrong check condition that leads to
the `access_token_id` with None will be returned to end user.

Closes-Bug: #1603861
Change-Id: Ifc2908ffb6b8353d24a6416338d8fadb0e0b2a21


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603861

Title:
  wrong check condition for revoke event

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Keystone has code to prevent a `None` value from being returned when
  listing revoke events, but a wrong check condition leads to an
  access_token_id of None being returned to the end user.

  see code here.
  
https://github.com/openstack/keystone/blob/master/keystone/models/revoke_model.py#L114-L115
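This bug class is easy to reproduce with a tiny stand-in model (hypothetical attribute names, not the real keystone revoke_model code): the guard tests one attribute while emitting another.

```python
class RevokeEvent(object):
    """Minimal stand-in for illustration (hypothetical attribute names,
    not the real keystone revoke_model code)."""
    def __init__(self, consumer_id=None, access_token_id=None):
        self.consumer_id = consumer_id
        self.access_token_id = access_token_id

    def to_dict_buggy(self):
        event = {}
        # Bug pattern: the guard tests one attribute but emits another,
        # so access_token_id=None can leak into the result.
        if self.consumer_id is not None:
            event['access_token_id'] = self.access_token_id
        return event

    def to_dict_fixed(self):
        event = {}
        # Correct: test the attribute that is actually emitted.
        if self.access_token_id is not None:
            event['access_token_id'] = self.access_token_id
        return event
```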

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1603861/+subscriptions



[Yahoo-eng-team] [Bug 1601994] Re: instances always packed to NUMA node 0

2016-07-18 Thread Daniel Berrange
Nova fully fills each node before considering placing a guest on the
next node. This isn't a bug - it's expected behaviour, intended to
maximise the number of guests that can be packed onto each compute host.
If you want to suggest alternative strategies, please submit a blueprint
+ spec with the proposed design & rationale.
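The two strategies under discussion can be sketched as plain selection functions over per-node free capacity (illustrative only; Nova's real NUMA fitting logic is far more involved):

```python
def pick_node_pack(free_per_node):
    """Pack: fill the first node that still has capacity (the
    behaviour the bug report observes)."""
    for node, free in enumerate(free_per_node):
        if free > 0:
            return node
    return None

def pick_node_spread(free_per_node):
    """Spread: pick the emptiest node (one alternative strategy)."""
    return max(range(len(free_per_node)), key=lambda n: free_per_node[n])
```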

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1601994

Title:
  instances always packed to NUMA node 0

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  Instances are packed into node0, always. NUMA placement criteria is undefined 
when not using CPU pinning and hw:numa_nodes=1.


  Steps to reproduce
  ==
  Create a flavor w/ hw:numa_nodes=1 (hw:cpu_policy unset)

  Spawn multiple instances

  Check nodeset in the instance XML


  Expected result
  ===

  Use all NUMA nodes by applying some NUMA placement criteria: Spread,
  pack or random


  Actual result
  =
  Only node 0 is used. All others are unused.


  Environment
  ===

  Ubuntu Xenial 16.04, OpenStack Mitaka release, Libvirt 1.3.1

  Note: This issue has been found / tested on Ubuntu KVM on Power
  (ppc64le arch), however, it affects all architectures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1601994/+subscriptions



[Yahoo-eng-team] [Bug 1603979] [NEW] context tests failed because missing parameter "is_admin_project"

2016-07-18 Thread Tang Chen
Public bug reported:

Description
===
The following 3 tests failed:

1. nova.tests.unit.test_context.ContextTestCase.test_convert_from_dict_then_to_dict
Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/test_context.py", line 230, in test_convert_from_dict_then_to_dict
    self.assertEqual(values, values2)
  File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = {
 ..
 'is_admin': True,
 ..}
actual = {
 ..
 'is_admin': True,
 'is_admin_project': True,
 ..}

2. nova.tests.unit.test_context.ContextTestCase.test_convert_from_rc_to_dict
Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/test_context.py", line 203, in test_convert_from_rc_to_dict
    self.assertEqual(expected_values, values2)
  File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = {
 ..
 'is_admin': True,
 ..}
actual = {
 ..
 'is_admin': True,
 'is_admin_project': True,
 ..}

3. nova.tests.unit.test_context.ContextTestCase.test_to_dict_from_dict_no_log
Captured traceback:
~~~
Traceback (most recent call last):
  File "nova/tests/unit/test_context.py", line 144, in test_to_dict_from_dict_no_log
    self.assertEqual(0, len(warns), warns)
  File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: 0 != 1: ["Arguments dropped when creating context: {'is_admin_project': True}"]

Steps to reproduce
==
Just run the context tests:
tox -e py27 test_context

This is because we fail to pass the "is_admin_project" parameter to
__init__() of oslo.context's RequestContext when initializing a nova
RequestContext object.

In nova/context.py

@enginefacade.transaction_context_provider
class RequestContext(context.RequestContext):
    """Security context and request information.

    Represents the user taking a given action within the system.

    """

    def __init__(self, user_id=None, project_id=None,
                 is_admin=None, read_deleted="no",
                 roles=None, remote_address=None, timestamp=None,
                 request_id=None, auth_token=None, overwrite=True,
                 quota_class=None, user_name=None, project_name=None,
                 service_catalog=None, instance_lock_checked=False,
                 user_auth_plugin=None, **kwargs):
        ..
        super(RequestContext, self).__init__(
            ..
            is_admin=is_admin,
            ..)

But in oslo_context/context.py,

class RequestContext(object):

    ..

    def __init__(..
                 is_admin=False,
                 ..
                 is_admin_project=True):
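
The failure mode can be reproduced in isolation. The classes below are
toys standing in for oslo.context's RequestContext and nova's subclass,
not the real implementations: a subclass that does not forward the new
base-class keyword silently drops it.

```python
class BaseContext(object):
    """Stand-in for oslo_context.context.RequestContext."""
    def __init__(self, is_admin=False, is_admin_project=True):
        self.is_admin = is_admin
        self.is_admin_project = is_admin_project

class BrokenContext(BaseContext):
    """Stand-in for nova's RequestContext: is_admin_project arrives in
    kwargs but is never forwarded, so the base default wins."""
    def __init__(self, is_admin=False, **kwargs):
        super(BrokenContext, self).__init__(is_admin=is_admin)

class FixedContext(BaseContext):
    """Sketch of a fix: pop the new parameter out of kwargs and pass
    it through to the base class."""
    def __init__(self, is_admin=False, **kwargs):
        super(FixedContext, self).__init__(
            is_admin=is_admin,
            is_admin_project=kwargs.pop('is_admin_project', True))
```

With the broken variant, a to_dict()/from_dict() round trip through
**kwargs loses the value, which is exactly what the failing tests
observe.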

** Affects: nova
 Importance: Undecided
 Assignee: Tang Chen (tangchen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Tang Chen (tangchen)

** Description changed:

  Description
  ===
  The following 3 tests failed:
  1. 
nova.tests.unit.test_context.ContextTestCase.test_convert_from_dict_then_to_dict
  Captured traceback:
  ~~~
- Traceback (most recent call last):
-   File "nova/tests/unit/test_context.py", line 230, in 
test_convert_from_dict_then_to_dict
- self.assertEqual(values, values2)
-   File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 411, in assertEqual
- self.assertThat(observed, matcher, message)
-   File 
"/opt/stack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py", 
line 498, in assertThat
- raise mismatch_error
- testtools.matchers._impl.MismatchError: !=:
- reference = {
-  ..
-  'is_admin': True,
-  ..}
- actual= {
-  ..
-  'is_admin': True,
-  'is_admin_project': True,
-  ..}
+ Traceback (most recent call last):
+   File "nova/tests/unit/test_context.py", line 230, in 
test_convert_from_dict_then_to_dict
+ 

[Yahoo-eng-team] [Bug 1603954] [NEW] py27 gate is broken: "Arguments dropped when creating context: {'is_admin_project': True}"

2016-07-18 Thread Vasyl Saienko
Public bug reported:

Full log may be found: http://logs.openstack.org/69/343569/1/check/gate-
nova-python27-db/73b8865/console.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1603954

Title:
  py27 gate is broken: "Arguments dropped when creating context:
  {'is_admin_project': True}"

Status in OpenStack Compute (nova):
  New

Bug description:
  Full log may be found: http://logs.openstack.org/69/343569/1/check
  /gate-nova-python27-db/73b8865/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1603954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602436] Re: DVR: floating IP not reachable after live migration

2016-07-18 Thread Josias Montag
*** This bug is a duplicate of bug 1585165 ***
https://bugs.launchpad.net/bugs/1585165

Indeed, the underlying problem is bug #1585165, and my issue is solved by
applying the mentioned patch. For some reason I did not find that other bug
report previously.
Thanks anyways!

** This bug has been marked a duplicate of bug 1585165
   floating ip not reachable after vm migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1602436

Title:
  DVR: floating IP not reachable after live migration

Status in neutron:
  New

Bug description:
  In my network environment floating IPs are not reachable after live VM
  migration. I am using the Mitaka release with neutron DVR.

  1) spawn a VM
  2) assign a floating IP
  3) live migrate the VM
  4) ping the floating IP 

  
  Using tcpdump I identified the reason for this problem: There is no 
gratuitous ARP reply sent out by the new host after live migration. Thus the 
network switch sends the traffic to the old host.

  If I manually send the GARP on the new host using:

  root@compute-2:~# ip netns exec fip-
  a46e0978-6e87-43d4-85d3-1d7030cdaf49 arping -A -I fg-de83cc57-32
  FLOATING_IP

  the floating IP comes up again and everything works as expected.

  Version:
  Mitaka on Ubuntu Trusty
  Kernel 3.19
  Deployed using Fuel 9.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1602436/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603942] [NEW] delete instance who had dscp rule will be abnormal

2016-07-18 Thread QunyingRan
Public bug reported:

step:
1. create a QOS policy with a DSCP rule;
2. create a network with the above QOS policy and boot a VM;
3. delete the VM; abnormal information appears in the
neutron-openvswitch-agent service log

e-754e-467a-a584-d4878a472758']) removed
2016-07-18 06:51:56.910 ERROR neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-7ebfd48c-f845-47c3-a826-b964c37a5ad8 None None] Error while processing VIF ports
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 2035, in rpc_loop
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info, ovs_restarted)
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 147, in wrapper
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     return f(*args, **kwargs)
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1655, in process_network_ports
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_info['removed'])
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 147, in wrapper
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     return f(*args, **kwargs)
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1582, in treat_devices_removed
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     self.ext_manager.delete_port(self.context, {'port_id': device})
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/l2/extensions/manager.py", line 80, in delete_port
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     extension.obj.delete_port(context, data)
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/l2/extensions/qos.py", line 261, in delete_port
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     self._process_reset_port(port)
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/l2/extensions/qos.py", line 282, in _process_reset_port
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     self.qos_driver.delete(port)
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/l2/extensions/qos.py", line 98, in delete
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     self._handle_rule_delete(port, rule_type)
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/agent/l2/extensions/qos.py", line 113, in _handle_rule_delete
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     handler(port)
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py", line 112, in delete_dscp_marking
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent     port_name = port['vif_port'].port_name
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent KeyError: 'vif_port'
2016-07-18 06:51:56.910 TRACE neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
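
The KeyError at the bottom of the trace comes from indexing
port['vif_port'] after the VIF has already been removed. A minimal
self-contained illustration of the failure and a defensive fix (toy
function names, not neutron's real driver code):

```python
def delete_dscp_marking_buggy(port):
    # Indexing raises KeyError once the VM's VIF is gone from the dict.
    return port['vif_port']

def delete_dscp_marking_fixed(port):
    # .get() lets the handler skip cleanup when the VIF is already gone.
    vif_port = port.get('vif_port')
    if vif_port is None:
        return None  # nothing left to clean up on the bridge
    return vif_port
```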

** Affects: neutron
 Importance: Undecided
 Assignee: QunyingRan (ran-qunying)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => QunyingRan (ran-qunying)

** Description changed:

  step:
- 1. create a QOS policy with DSCPrule;
+ 1. create a QOS policy with DSCP rule;
  2. create a network with above QOS policy and boot a VM;
  3. delete the VM, found abnormal information in 
neutron-openvswitch-agent.service
  
  e-754e-467a-a584-d4878a472758']) removed
  2016-07-18 06:51:56.910 ERROR 

[Yahoo-eng-team] [Bug 1583419] Re: Make dict.keys() PY3 compatible

2016-07-18 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/332661
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=31ed226a67ff6b2320f605ce436658312cf0c701
Submitter: Jenkins
Branch:master

commit 31ed226a67ff6b2320f605ce436658312cf0c701
Author: Bin Zhou 
Date:   Wed Jun 22 16:35:12 2016 +0800

Refactor usage of dict.values()[0]

The dict.values()[0] will raise a TypeError in PY3, as dict.values()
doesn't return a list any more in PY3 but a view of list. This patch
is to fix this bug by refactoring the relevant code.

Change-Id: I4362a22322807eda8d795d68395445d53ab60ad4
Closes-Bug: #1583419


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583419

Title:
  Make dict.keys() PY3 compatible

Status in Cinder:
  Fix Released
Status in neutron:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-troveclient:
  Fix Released
Status in Rally:
  Fix Released
Status in tacker:
  Fix Released
Status in watcher:
  In Progress

Bug description:
  In PY3, dict.keys() returns a view object rather than a list, i.e.
  $ python3.4
  Python 3.4.3 (default, Mar 31 2016, 20:42:37) 
  >>> body={"11":"22"}
  >>> body[body.keys()[0]]
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  TypeError: 'dict_keys' object does not support indexing

  so for py3 compatible we should change it as follows:
  >>> body[list(body.keys())[0]]
  '22'
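
  For a single key, an alternative worth noting (an aside, not part of
  the merged fix) avoids materialising the whole key list on either
  Python version:

```python
body = {"11": "22"}
first_key = next(iter(body))   # fetch one key; works on PY2 and PY3
assert body[first_key] == "22"
```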

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1583419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603918] [NEW] The port's dns_name is cleared while the instance boot failed, because the port's dns_name is not equal to the instance's hostname

2016-07-18 Thread xiewj
Public bug reported:

In Mitaka:
If the value assigned to the dns_name attribute of a port is not equal to
the hostname that the Compute service will assign to the instance, the
instance boot fails; however, the port's dns_name is cleared as well.
Reproduction steps are as follows:
step 1: Create a port, specifying 'my-port' as its dns_name attribute
step 2: Boot an instance using the port, where the hostname assigned to the
instance is not equal to the port's dns_name
step 3: The boot fails
step 4: Show the port created in step 1 using the neutron port-show command;
its dns_name has been cleared.
[root@devstack218 devstack]# neutron port-update port2_net1 --dns-name my-port
Updated port: port2_net1
[root@devstack218 devstack]# neutron port-show port2_net1
+-----------------------+--------------------------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                                        |
+-----------------------+--------------------------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                                         |
| allowed_address_pairs |                                                                                                              |
| binding:host_id       |                                                                                                              |
| binding:profile       | {}                                                                                                           |
| binding:vif_details   | {}                                                                                                           |
| binding:vif_type      | unbound                                                                                                      |
| binding:vnic_type     | normal                                                                                                       |
| created_at            | 2016-07-12T08:40:47                                                                                          |
| description           |                                                                                                              |
| device_id             |                                                                                                              |
| device_owner          |                                                                                                              |
| dns_assignment        | {"hostname": "my-port", "ip_address": "198.51.100.12", "fqdn": "my-port.example.org."}                       |
|                       | {"hostname": "my-port", "ip_address": "2001:db8:80d2:c4d3:f816:3eff:fe44:f8d", "fqdn": "my-port.example.org."} |
| dns_name              | my-port                                                                                                      |
| extra_dhcp_opts       |                                                                                                              |
| fixed_ips             | {"subnet_id": "481cadf6-fa52-4739-80b2-331a3b90d7b6", "ip_address": "198.51.100.12"}                         |
|                       | {"subnet_id": "60f56f75-ce94-498f-b4ad-0383db2796a8", "ip_address": "2001:db8:80d2:c4d3:f816:3eff:fe44:f8d"} |
| id                    | aa89b945-1806-4384-9771-25c44bf7f66d                                                                         |
| mac_address           | fa:16:3e:44:0f:8d                                                                                            |
| name                  | port2_net1                                                                                                   |
| network_id            | d885e8ed-5e70-478f-a279-fd6c00bbb2d7                                                                         |
| port_security_enabled | True                                                                                                         |
| qos_policy_id         | 2e7b351c-5579-4d34-9617-f7e95acbb56b                                                                         |
| security_groups       | e5bd8e12-ea85-4801-acd2-a997df98053d                                                                         |
| status                | DOWN                                                                                                         |
| tenant_id             | d9cc08fe87ee49f08020baa95893e2ef                                                                             |
| updated_at            | 2016-07-18T08:43:52                                                                                          |

[Yahoo-eng-team] [Bug 1533572] Re: Failed to create vm on kvm while boot with multiple nics

2016-07-18 Thread Saurabh
*** This bug is a duplicate of bug 1522112 ***
https://bugs.launchpad.net/bugs/1522112

** This bug has been marked a duplicate of bug 1522112
   ports duplication in the VM XML when using heat and multiple networks

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1533572

Title:
  Failed to create vm on kvm while boot with multiple nics

Status in OpenStack Compute (nova):
  Expired

Bug description:
  On the devstack master branch, tried to boot a VM with 4 vNICs; it failed
  with the error "Unable to create tap device tap791b4be3-3c: Device or
  resource busy" 2 out of 20 times.

  
  ERROR trace:
  2016-01-13 21:41:04.550 ERROR nova.compute.manager [req-5bcd4a69-13e3-4ec4-9a24-f665e61759e9 ctx_rally_06cccf92bf84487293a6cf350e10721c_user_0 ctx_rally_d2f4c584-4152-4e5a-97ab-d2e9e019b22b_tenant_12] [instance: a89c8f38-618e-4307-8f42-b243c900a674] Instance failed to spawn
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674] Traceback (most recent call last):
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/opt/stack/nova/nova/compute/manager.py", line 2182, in _build_resources
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     yield resources
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/opt/stack/nova/nova/compute/manager.py", line 2029, in _build_and_run_instance
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     block_device_info=block_device_info)
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2517, in spawn
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     block_device_info=block_device_info)
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4631, in _create_domain_and_network
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     xml, pause=pause, power_on=power_on)
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4561, in _create_domain
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     guest.launch(pause=pause)
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 142, in launch
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     self._encoded_xml, errors='ignore')
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in __exit__
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     six.reraise(self.type_, self.value, self.tb)
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/opt/stack/nova/nova/virt/libvirt/guest.py", line 137, in launch
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     return self._domain.createWithFlags(flags)
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]     result = proxy_call(self._autowrap, f, *args, **kwargs)
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]   File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
  2016-01-13 21:41:04.550 TRACE nova.compute.manager [instance: a89c8f38-618e-4307-8f42-b243c900a674]

[Yahoo-eng-team] [Bug 1551747] Re: ubuntu-fan causes issues during network configuration

2016-07-18 Thread Launchpad Bug Tracker
This bug was fixed in the package ubuntu-fan - 0.9.1

---
ubuntu-fan (0.9.1) xenial; urgency=medium

  [ Andy Whitcroft ]
  * fanatic: fix legacy command line form syntax error (LP: #1584150)
  * fanctl/fanatic: add help commands/options with initial pointers
(LP: #1535054)

  [ Jay Vosburgh ]
  * fanatic: fix underlay with calculation (LP: #1584092)

  [ Andy Whitcroft ]
  * fanctl/fanatic: remove invalid web reference from manual pages.
(LP: #1582956)
  * fanatic: detect user specified underlay address without overlay
(LP: #1584692)
  * fanatic: switch from lxd-images to using cached lxc images. (LP: #1584775)
  * fanatic: test-host -- use the selected underlay width to calculate the 
remote addresses
(LP: #1584878)
  * fanctl: fix net start/stop exit codes. (LP: #1551747)
  * fanatic: install ping and nc when needed (LP: #1586176)
  * fanatic: switch docker testing to lts images (LP: #1586169)

 -- Andy Whitcroft   Mon, 04 Jul 2016 14:35:39 +0100

** Changed in: ubuntu-fan (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1551747

Title:
  ubuntu-fan causes issues during network configuration

Status in cloud-init:
  New
Status in Snappy:
  Confirmed
Status in ubuntu-fan package in Ubuntu:
  Fix Released
Status in ubuntu-fan source package in Xenial:
  Fix Released
Status in ubuntu-fan source package in Yakkety:
  Fix Released

Bug description:
  it seems that ubuntu-fan is causing issues with network configuration.

  On 16.04 daily image:

  root@localhost:~# snappy list
  NameDate   Version  Developer
  canonical-pi2   2016-02-02 3.0  canonical
  canonical-pi2-linux 2016-02-03 4.3.0-1006-3 canonical
  ubuntu-core 2016-02-22 16.04.0-10.armhf canonical

  I see this when I'm activating a wifi card on a raspberry pi 2.

  root@localhost:~# ifdown wlan0
  ifdown: interface wlan0 not configured
  root@localhost:~# ifup wlan0
  Internet Systems Consortium DHCP Client 4.3.3
  Copyright 2004-2015 Internet Systems Consortium.
  All rights reserved.
  For info, please visit https://www.isc.org/software/dhcp/

  Listening on LPF/wlan0/c4:e9:84:17:31:9b
  Sending on   LPF/wlan0/c4:e9:84:17:31:9b
  Sending on   Socket/fallback
  DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 3 (xid=0x81c0c95e)
  DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 5 (xid=0x81c0c95e)
  DHCPREQUEST of 192.168.0.170 on wlan0 to 255.255.255.255 port 67 
(xid=0x5ec9c081)
  DHCPOFFER of 192.168.0.170 from 192.168.0.251
  DHCPACK of 192.168.0.170 from 192.168.0.251
  RTNETLINK answers: File exists
  bound to 192.168.0.170 -- renewal in 17145 seconds.
  run-parts: /etc/network/if-up.d/ubuntu-fan exited with return code 1
  Failed to bring up wlan0.

  ===
  [Impact]

  Installing ubuntu-fan can trigger error messages when initialising
  with no fan configuration.

  [Test Case]

  As above.

  [Regression Potential]

  Low; suppresses erroneous error messages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1551747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603909] [NEW] Nova unable to delete dangling ports on timeout/failure; leads to multiple ports per VM instead of one

2016-07-18 Thread Fawad Khaliq
Public bug reported:

Description
===
When launching many instances (100 to 200) in parallel on a physical
small-to-medium scale cluster (20-30 compute nodes), Nova sees timeouts
from Neutron and fails to clean up the ports. This leads to Nova
instances having multiple ports instead of one.

A similar issue [1] was reported a long time ago, but it seems the fix
[2] for Nova was never merged.

[1] https://bugs.launchpad.net/neutron/+bug/1160442
[2] https://bugs.launchpad.net/neutron/+bug/1160442/comments/28
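
The missing behaviour can be sketched as a best-effort retry loop around
port deletion. This is an illustrative helper, not the actual fix
proposed in [2]; the function name, the injected delete_port callable
and the attempts parameter are invented for the example.

```python
def cleanup_ports(delete_port, port_ids, attempts=3):
    """Best-effort deletion of ports created for a failed instance.

    Retries transient failures so a single Neutron timeout does not
    leave dangling ports behind; returns the ids still not deleted.
    """
    remaining = list(port_ids)
    for _ in range(attempts):
        failed = []
        for pid in remaining:
            try:
                delete_port(pid)
            except Exception:
                failed.append(pid)  # keep it for the next round
        remaining = failed
        if not remaining:
            break
    return remaining
```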

Steps to reproduce
==

Launch 200+ VMs in parallel.

Expected result
===

All instances should have one port each

Actual result
=

Randomly some instances have multiple ports allocated.

Environment
===

3 Controllers, 20 computes
The 20 computes are not co-located, meaning there is some latency between
the controllers and the computes as well as among the computes.

KVM hypervisor

Nova versions:
openstack-nova-common-2015.1.2-18.2.el7ost.noarch
openstack-nova-console-2015.1.2-18.2.el7ost.noarch
openstack-nova-conductor-2015.1.2-18.2.el7ost.noarch
openstack-nova-compute-2015.1.2-18.2.el7ost.noarch
openstack-nova-novncproxy-2015.1.2-18.2.el7ost.noarch
python-nova-2015.1.2-18.2.el7ost.noarch
openstack-nova-api-2015.1.2-18.2.el7ost.noarch
openstack-nova-cert-2015.1.2-18.2.el7ost.noarch
openstack-nova-scheduler-2015.1.2-18.2.el7ost.noarch
python-novaclient-2.23.0-2.el7ost.noarch


Networking:
OpenStack Neutron with PLUMgrid plugin (similar results seen on non-PLUMgrid 
install).

Logs & Configs
==

nova.conf is attached

nova-compute logs
-

2016-07-18 07:41:24.972 36869 ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager Traceback (most recent call last):
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1808, in _allocate_network_async
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     dhcp_options=dhcp_options)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 577, in allocate_for_instance
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     self._delete_ports(neutron, instance, created_port_ids)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     six.reraise(self.type_, self.value, self.tb)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 569, in allocate_for_instance
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     security_group_ids, available_macs, dhcp_opts)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 304, in _create_port
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     port_id = port_client.create_port(port_req_body)['port']['id']
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in with_params
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     ret = self.function(instance, *args, **kwargs)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 544, in create_port
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     return self.post(self.ports_path, body=body)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 298, in post
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     headers=headers, params=params)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 200, in do_request
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     content_type=self.content_type())
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 306, in do_request
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     return self.request(url, method, **kwargs)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/neutronclient/client.py", line 294, in request
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     resp = super(SessionClient, self).request(*args, **kwargs)
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager   File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 95, in request
2016-07-18 07:41:24.972 36869 TRACE nova.compute.manager     return self.session.request(url, method, **kwargs)

[Yahoo-eng-team] [Bug 1603905] [NEW] V2 API: enable a user doesn't work

2016-07-18 Thread Dave Chen
Public bug reported:

Enable user
===
PUT /v2.0/users/{userId}/OS-KSADM/enabled

The above API doesn't work; there are two issues here.

1. The API unnecessarily needs a request body

curl -g -i -X PUT
http://10.239.159.68/identity_v2_admin/v2.0/users/acc163d0efa14fe5b84e1dcc62ff6404
/OS-KSADM/enabled  -H "Content-Type: application/json" -H "Accept:
application/json" -H "X-Auth-Token:

{"error": {"message": "set_user_enabled() takes exactly 4 arguments (3
given)", "code": 400, "title": "Bad Request"}}

2. If we pass a request body without 'enabled' property, it cannot
enable the user.

openstack user show acc163d0efa14fe5b84e1dcc62ff6404
++--+
| Field  | Value|
++--+
| default_project_id | e9b5b0575cad498f8fce9e39ef209411 |
| domain_id  | default  |
| enabled| False|
| id | acc163d0efa14fe5b84e1dcc62ff6404 |
| name   | test_user|
++--+

curl -g -i -X PUT
http://10.239.159.68/identity_v2_admin/v2.0/users/acc163d0efa14fe5b84e1dcc62ff6404
/OS-KSADM/enabled  -H "Content-Type: application/json" -H "Accept:
application/json" -H "X-Auth-Token: e2fde9a73eb743e298e3d10aabebe5e0" -d
'{"user": {"name": "test_user"}}'

{"user": {"username": "test_user", "name": "test_user", "extra": {},
"enabled": false, "id": "acc163d0efa14fe5b84e1dcc62ff6404", "tenantId":
"e9b5b0575cad498f8fce9e39ef209411"}}

Nothing is changed, the user is still disabled.

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  Enable user
  ===
  PUT /v2.0/users/{userId}/OS-KSADM/enabled
- 
  
  The above API doesn't work, there are two issue here.
  
  1. The API unnecessarily need a request body
  
  url -g -i -X PUT
  
http://10.239.159.68/identity_v2_admin/v2.0/users/acc163d0efa14fe5b84e1dcc62ff6404
  /OS-KSADM/enabled  -H "Content-Type: application/json" -H "Accept:
  application/json" -H "X-Auth-Token:
  
  {"error": {"message": "set_user_enabled() takes exactly 4 arguments (3
  given)", "code": 400, "title": "Bad Request"}}
  
- 
- 2. If we pass a request body without 'enabled' property, it could make the 
user enabled.
+ 2. If we pass a request body without 'enabled' property, it cannot
+ enable the user.
  
  openstack user show acc163d0efa14fe5b84e1dcc62ff6404
  ++--+
  | Field  | Value|
  ++--+
  | default_project_id | e9b5b0575cad498f8fce9e39ef209411 |
  | domain_id  | default  |
  | enabled| False|
  | id | acc163d0efa14fe5b84e1dcc62ff6404 |
  | name   | test_user|
  ++--+
  
- 
- curl -g -i -X PUT 
http://10.239.159.68/identity_v2_admin/v2.0/users/acc163d0efa14fe5b84e1dcc62ff6404/OS-KSADM/enabled
  -H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: 
+ curl -g -i -X PUT
+ 
http://10.239.159.68/identity_v2_admin/v2.0/users/acc163d0efa14fe5b84e1dcc62ff6404
+ /OS-KSADM/enabled  -H "Content-Type: application/json" -H "Accept:
+ application/json" -H "X-Auth-Token:
  
  {"user": {"username": "test_user", "name": "test_user", "extra": {},
  "enabled": false, "id": "acc163d0efa14fe5b84e1dcc62ff6404", "tenantId":
  "e9b5b0575cad498f8fce9e39ef209411"}}
  
- 
- Nothing changed, the user is still disabled.
+ Nothing is changed, the user is still disabled.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1603905

Title:
  V2 API: enable a user doesn't work

Status in OpenStack Identity (keystone):
  New

Bug description:
  Enable user
  ===
  PUT /v2.0/users/{userId}/OS-KSADM/enabled

  The above API doesn't work; there are two issues here.

  1. The API unnecessarily needs a request body

  curl -g -i -X PUT
  
http://10.239.159.68/identity_v2_admin/v2.0/users/acc163d0efa14fe5b84e1dcc62ff6404
  /OS-KSADM/enabled  -H "Content-Type: application/json" -H "Accept:
  application/json" -H "X-Auth-Token:

  {"error": {"message": "set_user_enabled() takes exactly 4 arguments (3
  given)", "code": 400, "title": "Bad Request"}}

  2. If we pass a request body without 'enabled' property, it cannot
  enable the user.

  openstack user show acc163d0efa14fe5b84e1dcc62ff6404
  ++--+
  | Field  | Value|
  ++--+
  | default_project_id | e9b5b0575cad498f8fce9e39ef209411 |
  |