[Yahoo-eng-team] [Bug 1615922] [NEW] pci device object doesn't set correctly during rolling upgrade

2016-08-22 Thread Hiroyuki Eguchi
Public bug reported:

I'm evaluating a rolling upgrade from Liberty to Mitaka in an SR-IOV
environment.

The following error occurred in the resource tracker when the controller node
is Mitaka and the compute node is Liberty.

Error updating resources for node overcloud-compute-0.localdomain: Cannot load 'parent_addr' in the base class
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 85, 
in _object_dispatch
return getattr(target, method)(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
223, in wrapper
return fn(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/objects/pci_device.py", line 251, 
in save
updates = self.obj_get_changes()
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
604, in obj_get_changes
changes[key] = getattr(self, key)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
67, in getter
self.obj_load_attr(name)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
580, in obj_load_attr
_("Cannot load '%s' in the base class") % attrname)
NotImplementedError: Cannot load 'parent_addr' in the base class


The cause of the error is that the parent_addr field, newly added in
Mitaka, is not set correctly.

We should account for the fact that the PCI device object nova-conductor
receives from nova-compute may not have a parent_addr attribute.
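
One hedged way to handle this, shown as a standalone sketch rather than the actual nova patch (BaseObject, PciDevice, and the field set below are simplified stand-ins for oslo.versionedobjects): a subclass can override obj_load_attr to default a field that older peers never set, instead of letting the base class raise the NotImplementedError seen in the traceback above.

```python
class BaseObject:
    """Minimal stand-in for oslo.versionedobjects' lazy-load behavior."""
    fields = ('address', 'parent_addr')

    def __init__(self, **kwargs):
        self._data = dict(kwargs)

    def obj_load_attr(self, attrname):
        # mirrors the error in the traceback above
        raise NotImplementedError(
            "Cannot load '%s' in the base class" % attrname)

    def __getattr__(self, name):
        # invoked only when normal attribute lookup fails
        if name.startswith('_'):
            raise AttributeError(name)
        if name in type(self).fields and name not in self._data:
            self.obj_load_attr(name)   # may fill in self._data[name]
        if name in self._data:
            return self._data[name]
        raise AttributeError(name)


class PciDevice(BaseObject):
    def obj_load_attr(self, attrname):
        if attrname == 'parent_addr':
            # objects sent by older (Liberty) computes predate this field
            self._data['parent_addr'] = None
        else:
            super().obj_load_attr(attrname)


# an object as deserialized from a Liberty compute: no parent_addr set
dev = PciDevice(address='0000:04:10.0')
print(dev.parent_addr)  # -> None instead of NotImplementedError
```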

** Affects: nova
 Importance: Undecided
 Assignee: Hiroyuki Eguchi (h-eguchi)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Hiroyuki Eguchi (h-eguchi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615922

Title:
  pci device object doesn't set correctly during rolling upgrade

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm evaluating a rolling upgrade from Liberty to Mitaka in an SR-IOV
  environment.

  The following error occurred in the resource tracker when the controller
  node is Mitaka and the compute node is Liberty.

  Error updating resources for node overcloud-compute-0.localdomain: Cannot load 'parent_addr' in the base class
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 85, 
in _object_dispatch
  return getattr(target, method)(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
223, in wrapper
  return fn(self, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/objects/pci_device.py", line 
251, in save
  updates = self.obj_get_changes()
File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
604, in obj_get_changes
  changes[key] = getattr(self, key)
File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
67, in getter
  self.obj_load_attr(name)
File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
580, in obj_load_attr
  _("Cannot load '%s' in the base class") % attrname)
  NotImplementedError: Cannot load 'parent_addr' in the base class

  
  The cause of the error is that the parent_addr field, newly added in
  Mitaka, is not set correctly.

  We should account for the fact that the PCI device object nova-conductor
  receives from nova-compute may not have a parent_addr attribute.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614561] Re: db.bw_usage_update can update multiple db records

2016-08-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/250807
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=51575f872218df16c4e43f242f1db3eab792a332
Submitter: Jenkins
Branch:master

commit 51575f872218df16c4e43f242f1db3eab792a332
Author: Pavel Kholkin 
Date:   Fri Nov 27 16:19:46 2015 +0300

removed db_exc.DBDuplicateEntry in bw_usage_update

The BandwidthUsage model has no UniqueConstraints.
The 'bw_usage_cache' table in the nova db has a single autoincrement
primary key, so the duplicate-entry problem is handled by the db itself
and db_exc.DBDuplicateEntry cannot be raised in Nova.

Ideally we should add a UniqueConstraint to prevent multiple bw usage
records existing for the same date range and UUID. With that fix we
should be able to remove the .first() call and use .one() instead.
The current code that uses .first() is not correct because there is
no order_by() applied on the SQL query and therefore the returned
"first record" is indeterminate.

This workaround fix removes the misleading note and exception and
adds order_by() to ensure that the same record is updated every time.

Co-Authored-By: Sergey Nikitin 

Closes-bug: #1614561

Change-Id: I408bc3a3e5623965a619d8c7241e4e77c8bf44f5
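
The indeterminacy the commit describes can be sketched with plain sqlite3 (a simplified stand-in table, not the real bw_usage_cache schema): without ORDER BY, which of two duplicate rows a "first row" query returns is up to the engine; ordering by the autoincrement primary key pins the row that gets updated.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE bw_usage_cache (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    uuid TEXT,
                    bw_in INTEGER)""")
# two records for the same UUID -- the situation the missing
# UniqueConstraint would normally prevent
conn.executemany("INSERT INTO bw_usage_cache (uuid, bw_in) VALUES (?, ?)",
                 [('vm-1', 10), ('vm-1', 20)])

# Without ORDER BY, "LIMIT 1" may return either duplicate; ORDER BY id
# makes the choice deterministic, so every update hits the same row.
(row_id,) = conn.execute(
    "SELECT id FROM bw_usage_cache WHERE uuid = ? ORDER BY id ASC LIMIT 1",
    ('vm-1',)).fetchone()
conn.execute("UPDATE bw_usage_cache SET bw_in = ? WHERE id = ?", (30, row_id))

print(conn.execute("SELECT bw_in FROM bw_usage_cache ORDER BY id").fetchall())
# -> [(30,), (20,)]
```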


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1614561

Title:
  db.bw_usage_update can update multiple db records

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The current code in the db.bw_usage_update() function uses .first(),
  which is not correct because no order_by() is applied to the SQL query,
  so the returned "first record" is indeterminate. We should remove the
  misleading note about a possible race, drop the exception, and add
  order_by() to ensure that the same record is updated every time.

  Ideally we should add a UniqueConstraint to the BandwidthUsage model to
  prevent multiple bw usage records existing for the same date range and
  UUID. With that fix we should be able to remove the .first() call and
  use .one() instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1614561/+subscriptions



[Yahoo-eng-team] [Bug 1615919] [NEW] BGP: DVR fip has next_hop to snat gateway after associate first time

2016-08-22 Thread LIU Yulong
Public bug reported:

ENV:
stable/mitaka

When a DVR floating IP is associated with a port, `floatingip_update_callback`
immediately starts `start_route_advertisements` to notify the DR agent of the
FIP BGP route.
But this BGP route is not right: its next_hop is set to the SNAT gateway IP
address.
Then, after `periodic_interval` seconds, the DR agent resyncs that DVR FIP
route with the correct next_hop, the FIP namespace fg-device IP address.

Reproduce:
1. create DVR router 1, and set its gateway
2. create a network/subnet, and connect it to DVR router 1
3. create VM 1
4. bind a floating IP to VM 1
5. in the DR agent log, you may see the following entries:

2016-08-23 13:08:26.301 13559 INFO bgpspeaker.api.base 
[req-829d21e2-98c3-49f3-9ba5-bd626aaf782e - - - - -] API method network.add 
called with args: {'prefix': u'172.16.10.68/32', 'next_hop': u'172.16.6.154'}
2016-08-23 13:08:26.302 13559 INFO neutron.services.bgp.driver.ryu.driver 
[req-829d21e2-98c3-49f3-9ba5-bd626aaf782e - - - - -] Route 
cidr=172.16.10.68/32, nexthop=172.16.6.154 is advertised for BGP Speaker 
running for local_as=2345.
2016-08-23 13:08:37.131 13559 INFO bgpspeaker.api.base 
[req-fa420676-2ddd-4b24-9c36-932c2c8b1bef - - - - -] API method network.del 
called with args: {'prefix': u'172.16.10.68/32'}
2016-08-23 13:08:37.131 13559 INFO neutron.services.bgp.driver.ryu.driver 
[req-fa420676-2ddd-4b24-9c36-932c2c8b1bef - - - - -] Route cidr=172.16.10.68/32 
is withdrawn from BGP Speaker running for local_as=2345.
2016-08-23 13:08:37.132 13559 INFO bgpspeaker.api.base 
[req-fa420676-2ddd-4b24-9c36-932c2c8b1bef - - - - -] API method network.add 
called with args: {'prefix': u'172.16.10.68/32', 'next_hop': u'172.16.10.66'}
2016-08-23 13:08:37.132 13559 INFO neutron.services.bgp.driver.ryu.driver 
[req-fa420676-2ddd-4b24-9c36-932c2c8b1bef - - - - -] Route 
cidr=172.16.10.68/32, nexthop=172.16.10.66 is advertised for BGP Speaker 
running for local_as=2345.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-bgp

** Tags added: l3-bgp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615919

Title:
  BGP: DVR fip has next_hop to snat gateway after associate first time

Status in neutron:
  New

Bug description:
  ENV:
  stable/mitaka

  When a DVR floating IP is associated with a port, `floatingip_update_callback`
  immediately starts `start_route_advertisements` to notify the DR agent of the
  FIP BGP route.
  But this BGP route is not right: its next_hop is set to the SNAT gateway IP
  address.
  Then, after `periodic_interval` seconds, the DR agent resyncs that DVR FIP
  route with the correct next_hop, the FIP namespace fg-device IP address.

[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread avnish
** No longer affects: openstack-ansible

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-ovn:
  Invalid
Status in octavia:
  In Progress
Status in python-muranoclient:
  In Progress
Status in tacker:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in vmware-nsx:
  In Progress
Status in zaqar:
  Fix Released
Status in Zaqar-ui:
  Fix Released

Bug description:
  OpenStack infra now supports upper constraints for the releasenotes, cover,
  and venv targets.
  tox.ini uses install_command for these targets, which can now be safely
  removed.
  Reference for the mail that details this support:
  http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html
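
The change pattern looks roughly like the fragment below (an illustrative sketch, not any project's actual tox.ini; the constraints URL shown is the one conventionally used at the time): delete the per-target install_command override so these targets inherit the default, constraint-aware one.

```ini
# Before: targets such as venv carried their own install_command to get
# upper constraints, e.g.:
#
#   [testenv:venv]
#   install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
#   commands = {posargs}
#
# After: infra applies upper constraints to all jobs, so the override
# is simply removed and the target falls back to the default.
[testenv:venv]
commands = {posargs}
```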

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions



[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358408
Committed: 
https://git.openstack.org/cgit/openstack/mistral/commit/?id=4ecca5717cc07d1f004be2f7c660336b6c32d043
Submitter: Jenkins
Branch:master

commit 4ecca5717cc07d1f004be2f7c660336b6c32d043
Author: lvdongbing 
Date:   Mon Aug 22 01:58:19 2016 -0400

Use upper constraints for all jobs in tox.ini

Openstack infra now supports upper constraints for
all jobs. Updated tox.ini to use upper constraints
for all jobs.

Change-Id: Id6080cf39ff5c50bc85b576247822040ccaae445
Closes-Bug: #1614361


** Changed in: mistral
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-ovn:
  Invalid
Status in octavia:
  In Progress
Status in python-muranoclient:
  In Progress
Status in tacker:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in vmware-nsx:
  In Progress
Status in zaqar:
  Fix Released
Status in Zaqar-ui:
  Fix Released

Bug description:
  OpenStack infra now supports upper constraints for the releasenotes, cover,
  and venv targets.
  tox.ini uses install_command for these targets, which can now be safely
  removed.
  Reference for the mail that details this support:
  http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions



[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread avnish
** No longer affects: neutron

** Changed in: networking-ovn
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in Mistral:
  In Progress
Status in Murano:
  Fix Released
Status in networking-ovn:
  Invalid
Status in octavia:
  In Progress
Status in openstack-ansible:
  In Progress
Status in python-muranoclient:
  In Progress
Status in tacker:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in vmware-nsx:
  In Progress
Status in zaqar:
  Fix Released
Status in Zaqar-ui:
  Fix Released

Bug description:
  OpenStack infra now supports upper constraints for the releasenotes, cover,
  and venv targets.
  tox.ini uses install_command for these targets, which can now be safely
  removed.
  Reference for the mail that details this support:
  http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions



[Yahoo-eng-team] [Bug 1558679] Re: Live block migration fails with TypeError exception in driver.py

2016-08-22 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1558679

Title:
  Live block migration fails with TypeError exception in driver.py

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hi,

  When I attempt to do a live block migration of my VM instance, I see the
  following exception in nova-compute.log, and the migration does not
  happen:

  2016-03-15 12:27:48.121 15330 ERROR oslo_messaging.rpc.dispatcher 
[req-e41b3f49-8bf8-4a4e-8511-1f8eea811dff b567c533c6a842908a3888a4ce80117e 
0a6eee33460e4c86ba591fd427cce163 - - -] Exception during message handling: 
string indices must be integers
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6845, in 
pre_live_migration
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher disk, 
migrate_data=migrate_data)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 461, in 
decorated_function
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher payload)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 369, in 
decorated_function
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 357, in 
decorated_function
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5272, in 
pre_live_migration
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher 
migrate_data)
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6003, in 
pre_live_migration
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher 
image_file = os.path.basename(info['path'])
  2016-03-15 12:27:48.121 15330 TRACE oslo_messaging.rpc.dispatcher TypeError: 
string indices must be integers

  The Nova/OpenStack version that I'm running is as follows (as shown in 'rpm 
-qa | grep nova'):
  python-nova-2015.1.2-18.1.el7ost.noarch
  python-novaclient-2.23.0-2.el7ost.noarch
  openstack-nova-common-2015.1.2-18.1.el7ost.noarch
  openstack-nova-compute-2015.1.2-18.1.el7ost.noarch

  The '/nova/virt/libvirt/driver.py' file running on my system is dated
  Mar 

[Yahoo-eng-team] [Bug 1614361] Fix proposed to trove (master)

2016-08-22 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/358943

** Changed in: zaqar
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in Mistral:
  In Progress
Status in Murano:
  Fix Released
Status in networking-ovn:
  New
Status in neutron:
  New
Status in octavia:
  In Progress
Status in openstack-ansible:
  In Progress
Status in python-muranoclient:
  In Progress
Status in tacker:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in vmware-nsx:
  In Progress
Status in zaqar:
  Fix Released
Status in Zaqar-ui:
  Fix Released

Bug description:
  OpenStack infra now supports upper constraints for the releasenotes, cover,
  and venv targets.
  tox.ini uses install_command for these targets, which can now be safely
  removed.
  Reference for the mail that details this support:
  http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions



[Yahoo-eng-team] [Bug 1615908] [NEW] dummy BDM record if reserve_block_device_name timeout

2016-08-22 Thread Felix Ma
Public bug reported:

When attaching a volume, nova-api initiates an RPC call to nova-compute
to run reserve_block_device_name:

    def _attach_volume(self, context, instance, volume_id, device,
                       disk_bus, device_type):
        """Attach an existing volume to an existing instance.

        This method is separated to make it possible for cells version
        to override it.
        """
        # NOTE(vish): This is done on the compute host because we want
        #             to avoid a race where two devices are requested at
        #             the same time. When db access is removed from
        #             compute, the bdm will be created here and we will
        #             have to make sure that they are assigned atomically.
        volume_bdm = self.compute_rpcapi.reserve_block_device_name(
            context, instance, device, volume_id, disk_bus=disk_bus,
            device_type=device_type)
        try:
            volume = self.volume_api.get(context, volume_id)
            self.volume_api.check_attach(context, volume, instance=instance)
            self.volume_api.reserve_volume(context, volume_id)
            self.compute_rpcapi.attach_volume(context, instance=instance,
                volume_id=volume_id, mountpoint=device, bdm=volume_bdm)
        except Exception:
            with excutils.save_and_reraise_exception():
                volume_bdm.destroy()

        return volume_bdm.device_name


If a timeout occurs, a dummy BDM record is left in the database. As a result,
you will see an attached volume when you run nova show, which is wrong.
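
A standalone sketch of the cleanup the report implies is missing (MessagingTimeout, the in-memory bdms list, and both functions below are stand-ins, not nova code): if the reserve RPC times out after the record was written, the caller should delete the dangling record instead of leaving it to show up in nova show.

```python
class MessagingTimeout(Exception):
    """Stand-in for oslo.messaging's RPC timeout exception."""

bdms = []  # stand-in for the block_device_mapping table

def reserve_block_device_name(volume_id, reply_lost=False):
    # compute writes the BDM row, then replies over RPC
    bdms.append({'volume_id': volume_id, 'device_name': '/dev/vdb'})
    if reply_lost:
        # the work was done, but the reply never reached nova-api
        raise MessagingTimeout('timed out waiting for a reply')
    return bdms[-1]

def attach_volume(volume_id):
    try:
        bdm = reserve_block_device_name(volume_id, reply_lost=True)
    except MessagingTimeout:
        # the missing cleanup: drop the dummy record on timeout
        bdms[:] = [b for b in bdms if b['volume_id'] != volume_id]
        raise
    return bdm['device_name']

try:
    attach_volume('vol-1')
except MessagingTimeout:
    pass
print(bdms)  # -> [] : no dangling record left behind
```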

The trace:
---
2016-08-03 10:29:29.929 4508 ERROR nova.api.openstack 
[req-9036ab02-c49e-408e-9914-1627175e9158 a3e789e51d6243e493483d12593757a6 
597854e23bfe46abb6178f786af12391 - - -] Caught error: Timed out waiting for a 
reply to message ID ddf15f1d53764aa090920c64852e4fba
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack Traceback (most recent 
call last):
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
req.get_response(self.application)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack application, 
catch_exc_info=False)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return resp(environ, 
start_response)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 634, in __call__
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return 
self._call_app(env, start_response)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 554, in _call_app
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return self._app(env, 
_fake_start_response)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return resp(environ, 
start_response)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return resp(environ, 
start_response)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/routes/middleware.py", line 131, in __call__
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return resp(environ, 
start_response)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2016-08-03 10:29:29.929 4508 TRACE nova.api.openstack   File 

[Yahoo-eng-team] [Bug 1615903] [NEW] free_disk_gb is not correctly, because swap disk size is not minus.

2016-08-22 Thread Charlotte Han
Public bug reported:

1. code in compute/resource_tracker.py

def _update_usage(self, usage, sign=1):
    mem_usage = usage['memory_mb']
    disk_usage = usage.get('root_gb', 0)

    overhead = self.driver.estimate_instance_overhead(usage)
    mem_usage += overhead['memory_mb']
    disk_usage += overhead.get('disk_gb', 0)

    self.compute_node.memory_mb_used += sign * mem_usage
    self.compute_node.local_gb_used += sign * disk_usage
    self.compute_node.local_gb_used += sign * usage.get('ephemeral_gb', 0)
    self.compute_node.vcpus_used += sign * usage.get('vcpus', 0)

    # free ram and disk may be negative, depending on policy:
    self.compute_node.free_ram_mb = (self.compute_node.memory_mb -
                                     self.compute_node.memory_mb_used)
    self.compute_node.free_disk_gb = (self.compute_node.local_gb -
                                      self.compute_node.local_gb_used)

    self.compute_node.running_vms = self.stats.num_instances

2. So I think self.compute_node.local_gb_used should also contain the swap
disk size; as it stands, free_disk_gb does not have the swap disk size
subtracted.
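
A hedged sketch of the accounting the report argues for (the function and its keys are simplified illustrations, not nova's actual resource tracker; nova flavors do track swap in MB): include swap in the per-instance disk usage so free_disk_gb shrinks accordingly.

```python
def local_gb_used(usage, overhead_gb=0):
    """Disk usage in GB for one instance, counting swap as the report suggests."""
    used = usage.get('root_gb', 0) + usage.get('ephemeral_gb', 0) + overhead_gb
    used += usage.get('swap', 0) / 1024.0   # flavor swap is given in MB
    return used

# flavor with a 10G root disk, 5G ephemeral disk, and 2048MB swap:
print(local_gb_used({'root_gb': 10, 'ephemeral_gb': 5, 'swap': 2048}))  # -> 17.0
```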

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615903

Title:
  free_disk_gb is not correctly, because swap disk size is not  minus.

Status in OpenStack Compute (nova):
  New

Bug description:
  1. code in compute/resource_tracker.py

  def _update_usage(self, usage, sign=1):
      mem_usage = usage['memory_mb']
      disk_usage = usage.get('root_gb', 0)

      overhead = self.driver.estimate_instance_overhead(usage)
      mem_usage += overhead['memory_mb']
      disk_usage += overhead.get('disk_gb', 0)

      self.compute_node.memory_mb_used += sign * mem_usage
      self.compute_node.local_gb_used += sign * disk_usage
      self.compute_node.local_gb_used += sign * usage.get('ephemeral_gb', 0)
      self.compute_node.vcpus_used += sign * usage.get('vcpus', 0)

      # free ram and disk may be negative, depending on policy:
      self.compute_node.free_ram_mb = (self.compute_node.memory_mb -
                                       self.compute_node.memory_mb_used)
      self.compute_node.free_disk_gb = (self.compute_node.local_gb -
                                        self.compute_node.local_gb_used)

      self.compute_node.running_vms = self.stats.num_instances

  2. So I think self.compute_node.local_gb_used should also contain the swap
  disk size; as it stands, free_disk_gb does not have the swap disk size
  subtracted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615903/+subscriptions



[Yahoo-eng-team] [Bug 1615899] [NEW] [api-ref]: "Show images" should be changed to "List images"

2016-08-22 Thread Ha Van Tu
Public bug reported:

Image Service API v2:
developer.openstack.org/api-ref/image/v2/index.html#show-images
I think "Show images" should be changed to "List images" to standardize the
API method names (list, show, create, update, delete).

** Affects: glance
 Importance: Undecided
 Assignee: Ha Van Tu (tuhv)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Ha Van Tu (tuhv)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1615899

Title:
  [api-ref]: "Show images" should be changed to "List images"

Status in Glance:
  New

Bug description:
  Image Service API v2: 
developer.openstack.org/api-ref/image/v2/index.html#show-images
  I think "show images" should be changed to "list images" to standardize API 
methods (list, show, create, update, delete)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1615899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615888] [NEW] integration test failed with non-en locale

2016-08-22 Thread Jim
Public bug reported:

When running the integration tests without an en locale, they fail. I
found that the current integration test cases, e.g.,
integration_tests/tests/test_login.py, hard-code the expected text
rather than loading it from the translation messages.

** Affects: horizon
 Importance: Undecided
 Assignee: Jim (jimlintw922)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Jim (jimlintw922)

** Changed in: horizon
   Status: New => In Progress

** Description changed:

  When running integration test without en locale, it failed. I found that
  current integration test cases, e.g.,
  integration_tests/tests/test_login.py, hard code the expected text
- rather than load the text from translation messages. Thanks to this
- situation, this patch launches web browser with en locale when running
- integration test.
+ rather than load the text from translation messages.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615888

Title:
  integration test failed with non-en locale

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When running the integration tests without an en locale, they fail. I
  found that the current integration test cases, e.g.,
  integration_tests/tests/test_login.py, hard-code the expected text
  rather than loading it from the translation messages.
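  One locale-independent alternative to hard-coding, sketched here with
  Python's stdlib gettext (the 'django' catalog domain, the paths, and the
  helper itself are assumptions, not Horizon's actual test code):

```python
import gettext

def expected_text(msgid, localedir=None, lang='en'):
    """Look up the translated form of `msgid` for `lang`.

    fallback=True returns the untranslated msgid when no catalog is
    found, so the lookup degrades gracefully on a bare test machine.
    """
    trans = gettext.translation('django', localedir=localedir,
                                languages=[lang], fallback=True)
    return trans.gettext(msgid)

# With no catalogs installed, the original English string comes back:
print(expected_text('Log In', localedir='/nonexistent', lang='de'))  # → Log In
```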

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615883] [NEW] The server_group_members quota cannot work when bulk boot instances

2016-08-22 Thread Zhenyu Zheng
Public bug reported:

The server_group_members quota does not work when booting instances in bulk,
because the instance_group object is read from the DB only once, in:
https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n1137
and is not updated while _provision_instances runs. When booting instances
in bulk, the member count will be incorrect unless the instance_group
members are refreshed:
https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n975

** Affects: nova
 Importance: Undecided
 Assignee: Zhenyu Zheng (zhengzhenyu)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Zhenyu Zheng (zhengzhenyu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615883

Title:
  The server_group_members quota cannot work when bulk boot instances

Status in OpenStack Compute (nova):
  New

Bug description:
  The server_group_members quota does not work when booting instances in
  bulk, because the instance_group object is read from the DB only once, in:
  https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n1137
  and is not updated while _provision_instances runs. When booting instances
  in bulk, the member count will be incorrect unless the instance_group
  members are refreshed:
  https://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n975
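  A simplified sketch of the stale-count behaviour (function names here are
  illustrative, not nova's actual code): the quota check compares against a
  member count captured once, so every instance in the batch passes.

```python
def provision_batch(members, quota, batch_size):
    """Boot `batch_size` instances against a group whose member count was
    read once up front; the check never sees the members added below."""
    stale_count = len(members)         # read once, like the bug report
    created = 0
    for _ in range(batch_size):
        if stale_count >= quota:       # stale: never re-counted
            break
        members.append('instance')     # the group actually grows
        created += 1
    return created

def provision_batch_refreshed(members, quota, batch_size):
    """Same loop, but re-counting members each iteration, as the report
    suggests; the quota is now enforced."""
    created = 0
    for _ in range(batch_size):
        if len(members) >= quota:      # refreshed count
            break
        members.append('instance')
        created += 1
    return created

print(provision_batch([], quota=2, batch_size=5))            # → 5, quota exceeded
print(provision_batch_refreshed([], quota=2, batch_size=5))  # → 2
```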

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615883/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358564
Committed: 
https://git.openstack.org/cgit/openstack/ironic-inspector/commit/?id=f3fd06fab532c999202ddb8637d7cce9de1bf0f3
Submitter: Jenkins
Branch: master

commit f3fd06fab532c999202ddb8637d7cce9de1bf0f3
Author: AvnishPal 
Date:   Mon Aug 22 16:52:53 2016 +0530

Use upper constraints for all jobs in tox.ini

Openstack infra now supports upper constraints for
all jobs. Updated tox.ini to use upper constraints
for all jobs.

Change-Id: Ibb49105fea5f119d181e7fd6f78ca6cf72ada33f
Closes-Bug: #1614361


** Changed in: ironic-inspector
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic Inspector:
  Fix Released
Status in Mistral:
  In Progress
Status in Murano:
  Fix Released
Status in networking-ovn:
  New
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in openstack-ansible:
  In Progress
Status in python-muranoclient:
  In Progress
Status in tacker:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in vmware-nsx:
  In Progress
Status in zaqar:
  In Progress
Status in Zaqar-ui:
  Fix Released

Bug description:
  OpenStack infra now supports upper constraints for the releasenotes,
  cover, and venv targets. tox.ini uses install_command for these targets,
  which can now be safely removed.
  Reference for the mail that details this support:
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html
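  As an illustration only (not the actual patch), a tox.ini following the
  common OpenStack pattern of that period: one constrained install_command
  in [testenv], which the releasenotes/cover/venv targets then inherit, so
  their per-target overrides can be dropped.

```ini
[testenv]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}

; releasenotes/cover/venv no longer need their own install_command;
; they inherit the constrained one above.
[testenv:venv]
commands = {posargs}
```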

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615572] Re: db vs migration mismatch in fwaas tables

2016-08-22 Thread YAMAMOTO Takashi
I still see mismatches on pgsql.

** Changed in: neutron
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615572

Title:
  db vs migration mismatch in fwaas tables

Status in networking-midonet:
  Invalid
Status in neutron:
  New

Bug description:
  AssertionError: Models and migration scripts aren't in sync:
  [ ( 'remove_index',
  Index('firewall_group_id', Column('firewall_group_id', 
VARCHAR(length=36), ForeignKey(u'firewall_groups_v2.id'), 
ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), unique=True)),
( 'remove_index',
  Index('port_id', Column('port_id', VARCHAR(length=36), 
ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), unique=True)),
( 'remove_fk',
  ForeignKeyConstraint(, None, name=u'firewall_group_port_associations_v2_ibfk_2', 
ondelete=u'CASCADE', table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', VARCHAR(length=36), 
ForeignKey(u'firewall_groups_v2.id'), ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), Column('port_id', 
VARCHAR(length=36), ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), schema=None))),
( 'remove_fk',
  ForeignKeyConstraint(, None, name=u'firewall_group_port_associations_v2_ibfk_1', 
ondelete=u'CASCADE', table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', VARCHAR(length=36), 
ForeignKey(u'firewall_groups_v2.id'), ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), Column('port_id', 
VARCHAR(length=36), ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), schema=None))),
( 'add_fk',
  ForeignKeyConstraint(, None, table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', String(length=36), 
ForeignKey('firewall_groups_v2.id'), 
table=, primary_key=True, nullable=False), 
Column('port_id', String(length=36), ForeignKey('ports.id'), 
table=, primary_key=True, nullable=False), 
schema=None))),
( 'add_fk',
  ForeignKeyConstraint(, None, table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', String(length=36), 
ForeignKey('firewall_groups_v2.id'), 
table=, primary_key=True, nullable=False), 
Column('port_id', String(length=36), ForeignKey('ports.id'), 
table=, primary_key=True, nullable=False), 
schema=None))),
[ ( 'modify_type',
None,
'firewall_groups_v2',
'project_id',
{ 'existing_nullable': True,
  'existing_server_default': False},
VARCHAR(length=36),
String(length=255))],
[ ( 'modify_type',
None,
'firewall_groups_v2',
'status',
{ 'existing_nullable': True,
  'existing_server_default': False},
VARCHAR(length=255),
String(length=16))],
( 'add_fk',
  ForeignKeyConstraint(, None, table=Table('firewall_groups_v2', MetaData(bind=None), 
Column('project_id', String(length=255), table=), 
Column('id', String(length=36), table=, primary_key=True, 
nullable=False, default=ColumnDefault( at 0x7fbfe03bc758>)), 
Column('name', String(length=255), table=), 
Column('description', String(length=1024), table=), 
Column('public', Boolean(), table=), 
Column('ingress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('egress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('admin_state_up', Boolean(), table=), 
Column('status', String(length=16), table=), schema=None))),
( 'add_fk',
  ForeignKeyConstraint(, None, table=Table('firewall_groups_v2', MetaData(bind=None), 
Column('project_id', String(length=255), table=), 
Column('id', String(length=36), table=, primary_key=True, 
nullable=False, default=ColumnDefault( at 0x7fbfe03bc758>)), 
Column('name', String(length=255), table=), 
Column('description', String(length=1024), table=), 
Column('public', Boolean(), table=), 
Column('ingress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('egress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('admin_state_up', Boolean(), table=), 
Column('status', String(length=16), table=), schema=None))),
[ ( 'modify_type',
None,
'firewall_policies_v2',
'project_id',
{ 'existing_nullable': True,
  'existing_server_default': False},
VARCHAR(length=36),
String(length=255))],
[ ( 'modify_nullable',
None,
'firewall_rules_v2',
'ip_version',
   

[Yahoo-eng-team] [Bug 1615577] Re: fwaas db migration failure with postgres

2016-08-22 Thread YAMAMOTO Takashi
This is not a duplicate of bug 1615572.
I don't think https://review.openstack.org/#/c/358728/ fixed this.


** This bug is no longer a duplicate of bug 1615572
   db vs migration mismatch in fwaas tables

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615577

Title:
  fwaas db migration failure with postgres

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  Traceback (most recent call last):
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/test_migrations.py",
 line 602, in test_models_sync
  self.db_sync(self.get_engine())
File "midonet/neutron/tests/unit/db/test_migrations.py", line 102, in 
db_sync
  migration.do_alembic_command(conf, 'upgrade', 'heads')
File 
"/opt/stack/networking-midonet/.tox/py27/src/neutron/neutron/db/migration/cli.py",
 line 108, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/command.py",
 line 174, in upgrade
  script.run_env()
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/script/base.py",
 line 407, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/util/pyfiles.py",
 line 93, in load_python_file
  module = load_module_py(module_id, path)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/util/compat.py",
 line 79, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/env.py",
 line 86, in 
  run_migrations_online()
File 
"/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/env.py",
 line 77, in run_migrations_online
  context.run_migrations()
File "", line 8, in run_migrations
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/environment.py",
 line 797, in run_migrations
  self.get_context().run_migrations(**kw)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/migration.py",
 line 312, in run_migrations
  step.migration_fn(**kw)
File 
"/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/versions/d6a12e637e28_neutron_fwaas_v2_0.py",
 line 61, in upgrade
  sa.Column('enabled', sa.Boolean))
File "", line 8, in create_table
File "", line 3, in create_table
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/ops.py",
 line 1098, in create_table
  return operations.invoke(op)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/base.py",
 line 318, in invoke
  return fn(self, operation)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/toimpl.py",
 line 101, in create_table
  operations.impl.create_table(table)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 193, in create_table
  _ddl_runner=self)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/event/attr.py",
 line 256, in __call__
  fn(*args, **kw)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py",
 line 546, in __call__
  return getattr(self.target, self.name)(*arg, **kw)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/sqltypes.py",
 line 1030, in _on_table_create
  t._on_table_create(target, bind, **kw)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/dialects/postgresql/base.py",
 line 1369, in _on_table_create
  self.create(bind=bind, checkfirst=checkfirst)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/dialects/postgresql/base.py",
 line 1317, in create
  bind.execute(CreateEnumType(self))
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
  return meth(self, multiparams, params)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py",
 line 68, in _execute_on_connection
  return 

[Yahoo-eng-team] [Bug 1615852] [NEW] Store names in Glance not in sync with the store names used in glance_store

2016-08-22 Thread Dharini Chandrasekar
Public bug reported:

In the file: 
https://github.com/openstack/glance/blob/master/glance/common/location_strategy/store_type.py#L57-L63,
 there is a mapping for stores and their corresponding schemes. 
The store names in this dictionary, however, are not in sync with the store
names actually accepted by the ``stores`` configuration option in glance_store.
The key 'vmware_datastore' needs to actually be 'vmware'
and 'filesystem' needs to be 'file'.

Reference: https://review.openstack.org/#/c/351436/10

** Affects: glance
 Importance: Undecided
 Assignee: Dharini Chandrasekar (dharini-chandrasekar)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Dharini Chandrasekar (dharini-chandrasekar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1615852

Title:
  Store names in Glance not in sync with the store names used in
  glance_store

Status in Glance:
  New

Bug description:
  In the file: 
https://github.com/openstack/glance/blob/master/glance/common/location_strategy/store_type.py#L57-L63,
 there is a mapping for stores and their corresponding schemes. 
  The store names in this dictionary, however, are not in sync with the
  store names actually accepted by the ``stores`` configuration option in
  glance_store. The key 'vmware_datastore' needs to actually be 'vmware'
  and 'filesystem' needs to be 'file'.

  Reference: https://review.openstack.org/#/c/351436/10

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1615852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1609071] Re: test_list_pagination_with_href_links fails intermittently

2016-08-22 Thread Armando Migliaccio
I got a more recent failure, that postdates the fix for bug 1607903

http://logs.openstack.org/18/337518/1/gate/gate-neutron-dsvm-
api/7dc1856/

** This bug is no longer a duplicate of bug 1607903
   test_list_pagination_with_marker failure in PortsSearchCriteriaTest

** Description changed:

  http://logs.openstack.org/08/347708/2/gate/gate-neutron-dsvm-
  api/d70465a/testr_results.html.gz
  
  Traceback (most recent call last):
-   File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_ports.py", line 
80, in test_list_pagination_with_href_links
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_ports.py", line 
80, in test_list_pagination_with_href_links
+ self._test_list_pagination_with_href_links()
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 485, 
in inner
+ return f(self, *args, **kwargs)
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 476, 
in inner
+ return f(self, *args, **kwargs)
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 663, 
in _test_list_pagination_with_href_links
+ self._test_list_pagination_iteratively(self._list_all_with_hrefs)
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 592, 
in _test_list_pagination_iteratively
+ len(expected_resources), sort_args
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 643, 
in _list_all_with_hrefs
+ self.assertNotIn('next', prev_links)
+   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
455, in assertNotIn
+ self.assertThat(haystack, matcher, message)
+   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
498, in assertThat
+ raise mismatch_error
+ testtools.matchers._impl.MismatchError: {u'previous': 
u'http://127.0.0.1:9696/v2.0/ports?limit=1_dir=asc_key=name=a963fff7-eaf1-4455-9597-e82528720797_reverse=True',
 u'next': 
u'http://127.0.0.1:9696/v2.0/ports?limit=1_dir=asc_key=name=a963fff7-eaf1-4455-9597-e82528720797'}
 matches Contains('next')
+ 
+ or
+ 
+ 
+ Traceback (most recent call last):
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/test_networks.py", 
line 124, in test_list_pagination_with_href_links
  self._test_list_pagination_with_href_links()
-   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 485, 
in inner
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 486, 
in inner
  return f(self, *args, **kwargs)
-   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 476, 
in inner
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 477, 
in inner
  return f(self, *args, **kwargs)
-   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 663, 
in _test_list_pagination_with_href_links
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 664, 
in _test_list_pagination_with_href_links
  self._test_list_pagination_iteratively(self._list_all_with_hrefs)
-   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 592, 
in _test_list_pagination_iteratively
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 593, 
in _test_list_pagination_iteratively
  len(expected_resources), sort_args
-   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 643, 
in _list_all_with_hrefs
- self.assertNotIn('next', prev_links)
-   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
455, in assertNotIn
- self.assertThat(haystack, matcher, message)
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 657, 
in _list_all_with_hrefs
+ self.assertSameOrder(resources, reversed(resources2))
+   File "/opt/stack/new/neutron/neutron/tests/tempest/api/base.py", line 516, 
in assertSameOrder
+ self.assertEqual(expected[self.field], res[self.field])
+   File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
411, in assertEqual
+ self.assertThat(observed, matcher, message)
File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
498, in assertThat
  raise mismatch_error
- testtools.matchers._impl.MismatchError: {u'previous': 
u'http://127.0.0.1:9696/v2.0/ports?limit=1_dir=asc_key=name=a963fff7-eaf1-4455-9597-e82528720797_reverse=True',
 u'next': 
u'http://127.0.0.1:9696/v2.0/ports?limit=1_dir=asc_key=name=a963fff7-eaf1-4455-9597-e82528720797'}
 matches Contains('next')
- 
- 
- 1 occurrence in gate queue, a few more in the check queue.
+ testtools.matchers._impl.MismatchError: u'123test' != u'abc1'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609071

Title:
  test_list_pagination_with_href_links fails intermittently

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/08/347708/2/gate/gate-neutron-dsvm-
  

[Yahoo-eng-team] [Bug 1611533] Re: ml2 transaction_guard broke out of tree plugins

2016-08-22 Thread Armando Migliaccio
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611533

Title:
  ml2 transaction_guard broke out of tree plugins

Status in DragonFlow:
  New
Status in networking-midonet:
  Fix Released
Status in networking-odl:
  New
Status in neutron:
  Fix Released

Bug description:
  A recent change [1] broke the l3 plugin for networking-midonet.

  [1] I9924600c57648f7eccaa5abb6979419d9547a2ff

  l3 plugins for networking-odl and dragonflow seem to have similar code
  and would be affected too.

  eg.
  
http://logs.openstack.org/87/199387/36/check/gate-tempest-dsvm-networking-midonet-ml2/ceb0331/logs/q-svc.txt.gz?level=TRACE
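  The guard behind the "Method cannot be called within a transaction" error
  can be sketched as a decorator like the following (simplified and with
  illustrative names; not neutron's exact implementation):

```python
import functools

def transaction_guard(f):
    """Refuse to call `f` while the request context already holds an
    active DB session -- the situation the log excerpt shows."""
    @functools.wraps(f)
    def inner(self, context, *args, **kwargs):
        if getattr(context.session, 'is_active', False):
            raise RuntimeError("Method cannot be called within a transaction.")
        return f(self, context, *args, **kwargs)
    return inner

# Minimal stand-ins for a request context inside an open transaction:
class FakeSession:
    is_active = True

class FakeContext:
    session = FakeSession()

class Plugin:
    @transaction_guard
    def delete_port(self, context, port_id, l3_port_check=True):
        return port_id

try:
    Plugin().delete_port(FakeContext(), 'p1', l3_port_check=False)
except RuntimeError as exc:
    print(exc)  # → Method cannot be called within a transaction.
```

  An out-of-tree plugin that wraps remove_router_interface in its own
  transaction then trips this guard when the in-tree code deletes the port.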

  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
[req-af588d02-2944-411f-aa22-eafca4fdabeb 
tempest-TestSecurityGroupsBasicOps-509565194 -] remove_router_interface failed: 
No details.
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 217, in _handle_action
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource ret_value = 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 48, in 
wrapper
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource return 
method(*args, **kwargs)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/networking-midonet/midonet/neutron/services/l3/l3_midonet.py", 
line 190, in remove_router_interface
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource context, 
router_id, interface_info)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 1756, in 
remove_router_interface
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource context, 
router_id, interface_info)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 924, in 
remove_router_interface
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource context, 
router_id, subnet_id, device_owner)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 901, in 
_remove_interface_by_subnet
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
l3_port_check=False)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 611, in inner
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource raise 
RuntimeError(_("Method cannot be called within a "
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource RuntimeError: 
Method cannot be called within a transaction.
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource 
[req-c9ae4bf8-2baf-4327-be58-bb3006b4d9c9 
tempest-TestSecurityGroupsBasicOps-2112515119 -] delete failed: No details.
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1615828] Re: test_floating_mangle_rules fails with mismatch error

2016-08-22 Thread Armando Migliaccio
OK, I see what's happening: this is an example of two competing changes
interleaving in the gate and thus causing a failure in the merge phase.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615828

Title:
  test_floating_mangle_rules fails with mismatch error

Status in neutron:
  Invalid

Bug description:
  http://logs.openstack.org/09/336609/1/gate/gate-neutron-python27
  -ubuntu-xenial/31d11e0/testr_results.html.gz

  ft23.24: 
neutron.tests.unit.agent.l3.test_dvr_local_router.TestDvrRouterOperations.test_floating_mangle_rules_StringException:
 Empty attachments:
stdout

  pythonlogging:'': {{{WARNING [stevedore.named] Could not load 
neutron.agent.linux.interface.NullDriver}}}
  stderr: {{{
  neutron/agent/l3/agent.py:375: DeprecationWarning: L3_AGENT_MODE: moved to 
neutron_lib.constants
connection, l3_constants.L3_AGENT_MODE)
  }}}

  Traceback (most recent call last):
File "neutron/tests/unit/agent/l3/test_dvr_local_router.py", line 251, in 
test_floating_mangle_rules
  self.assertEqual(expected, actual)
File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = [('floatingip', '-d 15.1.2.3 -i fake_router -j MARK --set-xmark 
fake_mark'),
   ('FORWARD', '-s 192.168.0.1 -j $float-snat')]
  actual= [('floatingip', '-d 15.1.2.3/32 -i fake_router -j MARK 
--set-xmark fake_mark'),
   ('FORWARD', '-s 192.168.0.1/32 -j $float-snat')]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1615828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611596] Re: recent routerports unique key change broke out-of-tree plugins

2016-08-22 Thread Armando Migliaccio
Fixed by: https://review.openstack.org/#/c/353263/

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: networking-midonet
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611596

Title:
  recent routerports unique key change broke out-of-tree plugins

Status in networking-midonet:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  A recent change [1] broke out-of-tree plugins which use surrounding
  transactions, e.g. networking-midonet.

  [1] I15be35689ec59ac02ed34abe5862fa4580c8587c

  eg. http://logs.openstack.org/40/353140/1/check/gate-networking-
  midonet-python35/4099ec1/testr_results.html.gz

  Traceback (most recent call last):
File "/tmp/openstack/neutron/neutron/api/v2/resource.py", line 79, in 
resource
  result = method(request=request, **args)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/oslo_db/api.py",
 line 151, in wrapper
  ectxt.value = e.inner_exc
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/six.py",
 line 686, in reraise
  raise value
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/oslo_db/api.py",
 line 139, in wrapper
  return f(*args, **kwargs)
File "/tmp/openstack/neutron/neutron/db/api.py", line 74, in wrapped
  traceback.format_exc())
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/six.py",
 line 686, in reraise
  raise value
File "/tmp/openstack/neutron/neutron/db/api.py", line 69, in wrapped
  return f(*args, **kwargs)
File "/tmp/openstack/neutron/neutron/api/v2/base.py", line 217, in 
_handle_action
  ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/midonet/neutron/plugin_v1.py",
 line 388, in add_router_interface
  context, router_id, interface_info)
File "/tmp/openstack/neutron/neutron/db/l3_db.py", line 1766, in 
add_router_interface
  context, router_id, interface_info)
File "/tmp/openstack/neutron/neutron/db/l3_db.py", line 807, in 
add_router_interface
  port_id=port['id']).one()
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/query.py",
 line 2718, in one
  ret = list(self)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/query.py",
 line 2761, in __iter__
  return self._execute_and_instances(context)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/query.py",
 line 2774, in _execute_and_instances
  close_with_result=True)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/query.py",
 line 2765, in _connection_from_session
  **kw)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 893, in connection
  execution_options=execution_options)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 898, in _connection_for_bind
  engine, execution_options)
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 313, in _connection_for_bind
  self._assert_active()
File 
"/home/jenkins/workspace/gate-networking-midonet-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py",
 line 218, in _assert_active
  "This Session's transaction has been rolled back "
  sqlalchemy.exc.InvalidRequestError: This Session's transaction has been 
rolled back by a nested rollback() call.  To begin a new transaction, issue 
Session.rollback() first.
  }}}

To manage notifications about 

[Yahoo-eng-team] [Bug 1615572] Re: db vs migration mismatch in fwaas tables

2016-08-22 Thread Armando Migliaccio
https://review.openstack.org/#/c/358728/ should have fixed the issues,
Please reopen if not.

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: networking-midonet
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615572

Title:
  db vs migration mismatch in fwaas tables

Status in networking-midonet:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  AssertionError: Models and migration scripts aren't in sync:
  [ ( 'remove_index',
  Index('firewall_group_id', Column('firewall_group_id', 
VARCHAR(length=36), ForeignKey(u'firewall_groups_v2.id'), 
ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), unique=True)),
( 'remove_index',
  Index('port_id', Column('port_id', VARCHAR(length=36), 
ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), unique=True)),
( 'remove_fk',
  ForeignKeyConstraint(, None, name=u'firewall_group_port_associations_v2_ibfk_2', 
ondelete=u'CASCADE', table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', VARCHAR(length=36), 
ForeignKey(u'firewall_groups_v2.id'), ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), Column('port_id', 
VARCHAR(length=36), ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), schema=None))),
( 'remove_fk',
  ForeignKeyConstraint(, None, name=u'firewall_group_port_associations_v2_ibfk_1', 
ondelete=u'CASCADE', table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', VARCHAR(length=36), 
ForeignKey(u'firewall_groups_v2.id'), ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), Column('port_id', 
VARCHAR(length=36), ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), schema=None))),
( 'add_fk',
  ForeignKeyConstraint(, None, table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', String(length=36), 
ForeignKey('firewall_groups_v2.id'), 
table=, primary_key=True, nullable=False), 
Column('port_id', String(length=36), ForeignKey('ports.id'), 
table=, primary_key=True, nullable=False), 
schema=None))),
( 'add_fk',
  ForeignKeyConstraint(, None, table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', String(length=36), 
ForeignKey('firewall_groups_v2.id'), 
table=, primary_key=True, nullable=False), 
Column('port_id', String(length=36), ForeignKey('ports.id'), 
table=, primary_key=True, nullable=False), 
schema=None))),
[ ( 'modify_type',
None,
'firewall_groups_v2',
'project_id',
{ 'existing_nullable': True,
  'existing_server_default': False},
VARCHAR(length=36),
String(length=255))],
[ ( 'modify_type',
None,
'firewall_groups_v2',
'status',
{ 'existing_nullable': True,
  'existing_server_default': False},
VARCHAR(length=255),
String(length=16))],
( 'add_fk',
  ForeignKeyConstraint(, None, table=Table('firewall_groups_v2', MetaData(bind=None), 
Column('project_id', String(length=255), table=), 
Column('id', String(length=36), table=, primary_key=True, 
nullable=False, default=ColumnDefault( at 0x7fbfe03bc758>)), 
Column('name', String(length=255), table=), 
Column('description', String(length=1024), table=), 
Column('public', Boolean(), table=), 
Column('ingress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('egress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('admin_state_up', Boolean(), table=), 
Column('status', String(length=16), table=), schema=None))),
( 'add_fk',
  ForeignKeyConstraint(, None, table=Table('firewall_groups_v2', MetaData(bind=None), 
Column('project_id', String(length=255), table=), 
Column('id', String(length=36), table=, primary_key=True, 
nullable=False, default=ColumnDefault( at 0x7fbfe03bc758>)), 
Column('name', String(length=255), table=), 
Column('description', String(length=1024), table=), 
Column('public', Boolean(), table=), 
Column('ingress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('egress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('admin_state_up', Boolean(), table=), 
Column('status', String(length=16), table=), schema=None))),
[ ( 'modify_type',
None,
'firewall_policies_v2',
'project_id',
{ 'existing_nullable': True,
  

[Yahoo-eng-team] [Bug 1608346] Re: test breakage due to config refactoring

2016-08-22 Thread Armando Migliaccio
Neutron stadium patches have merged.

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608346

Title:
  test breakage due to config refactoring

Status in neutron:
  Fix Released

Bug description:
  core_opts is referenced by at least the fwaas and vpnaas tests.
  They were broken by the recently merged code refactoring. [1]

  [1] Ib5fa294906549237630f87b9c848eebe0644088c

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1608346/+subscriptions



[Yahoo-eng-team] [Bug 1615828] [NEW] test_floating_mangle_rules fails with mismatch error

2016-08-22 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/09/336609/1/gate/gate-neutron-python27-ubuntu-
xenial/31d11e0/testr_results.html.gz

ft23.24: 
neutron.tests.unit.agent.l3.test_dvr_local_router.TestDvrRouterOperations.test_floating_mangle_rules_StringException:
 Empty attachments:
  stdout

pythonlogging:'': {{{WARNING [stevedore.named] Could not load 
neutron.agent.linux.interface.NullDriver}}}
stderr: {{{
neutron/agent/l3/agent.py:375: DeprecationWarning: L3_AGENT_MODE: moved to 
neutron_lib.constants
  connection, l3_constants.L3_AGENT_MODE)
}}}

Traceback (most recent call last):
  File "neutron/tests/unit/agent/l3/test_dvr_local_router.py", line 251, in 
test_floating_mangle_rules
self.assertEqual(expected, actual)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = [('floatingip', '-d 15.1.2.3 -i fake_router -j MARK --set-xmark 
fake_mark'),
 ('FORWARD', '-s 192.168.0.1 -j $float-snat')]
actual= [('floatingip', '-d 15.1.2.3/32 -i fake_router -j MARK --set-xmark 
fake_mark'),
 ('FORWARD', '-s 192.168.0.1/32 -j $float-snat')]
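The mismatch above is purely a formatting change: newer iptables renders bare
host addresses with an explicit /32 prefix. One way to make such a comparison
version-proof is to normalize both sides with the stdlib ipaddress module; the
sketch below is a hypothetical test-side helper, not neutron's actual fix.

```python
import ipaddress
import re

# Hypothetical helper: rewrite bare host addresses after '-d '/'-s ' in an
# iptables rule to explicit-prefix form (15.1.2.3 -> 15.1.2.3/32), so the
# expected/actual comparison is stable across iptables versions.
_ADDR_RE = re.compile(r'(?<=-d |-s )(\S+)')

def normalize_rule(rule):
    def repl(match):
        token = match.group(1)
        try:
            # ip_network() appends the maximal prefix to a bare host
            # address and leaves an existing prefix untouched.
            return str(ipaddress.ip_network(token, strict=False))
        except ValueError:
            return token  # not an address; leave as-is
    return _ADDR_RE.sub(repl, rule)
```

Normalizing both the reference and actual rule lists before asserting would
make the test pass on either iptables behavior.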

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure unittest

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615828

Title:
  test_floating_mangle_rules fails with mismatch error

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/09/336609/1/gate/gate-neutron-python27
  -ubuntu-xenial/31d11e0/testr_results.html.gz

  ft23.24: 
neutron.tests.unit.agent.l3.test_dvr_local_router.TestDvrRouterOperations.test_floating_mangle_rules_StringException:
 Empty attachments:
stdout

  pythonlogging:'': {{{WARNING [stevedore.named] Could not load 
neutron.agent.linux.interface.NullDriver}}}
  stderr: {{{
  neutron/agent/l3/agent.py:375: DeprecationWarning: L3_AGENT_MODE: moved to 
neutron_lib.constants
connection, l3_constants.L3_AGENT_MODE)
  }}}

  Traceback (most recent call last):
File "neutron/tests/unit/agent/l3/test_dvr_local_router.py", line 251, in 
test_floating_mangle_rules
  self.assertEqual(expected, actual)
File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 411, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = [('floatingip', '-d 15.1.2.3 -i fake_router -j MARK --set-xmark 
fake_mark'),
   ('FORWARD', '-s 192.168.0.1 -j $float-snat')]
  actual= [('floatingip', '-d 15.1.2.3/32 -i fake_router -j MARK 
--set-xmark fake_mark'),
   ('FORWARD', '-s 192.168.0.1/32 -j $float-snat')]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1615828/+subscriptions



[Yahoo-eng-team] [Bug 1615820] [NEW] test_undo_router_interface_change_on_csnat_error_revert_failure fails with integrity constraint violation

2016-08-22 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/53/358753/1/check/gate-neutron-python27
-ubuntu-xenial/d8f4b7e/testr_results.html.gz

WARNING [neutron.quota.resource_registry] subnetpool is already registered
 WARNING [neutron.quota.resource_registry] port is already registered
 WARNING [neutron.quota.resource_registry] subnet is already registered
 WARNING [neutron.quota.resource_registry] network is already registered
   ERROR [neutron.api.rpc.agentnotifiers.l3_rpc_agent_api] No plugin for L3 
routing registered. Cannot notify agents with the message routers_updated
 WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
 WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to schedule 
network 330127fd-a570-4e0d-b27c-e2970cc776b8: no agents available; will retry 
on subsequent port and subnet creation events.
 WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
 WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to schedule 
network 330127fd-a570-4e0d-b27c-e2970cc776b8: no agents available; will retry 
on subsequent port and subnet creation events.
   ERROR [neutron.api.rpc.agentnotifiers.l3_rpc_agent_api] No plugin for L3 
routing registered. Cannot notify agents with the message routers_updated
 WARNING [neutron.scheduler.dhcp_agent_scheduler] No more DHCP agents
 WARNING [neutron.api.rpc.agentnotifiers.dhcp_rpc_agent_api] Unable to schedule 
network 330127fd-a570-4e0d-b27c-e2970cc776b8: no agents available; will retry 
on subsequent port and subnet creation events.
   ERROR [neutron.callbacks.manager] Error during notification for 
neutron.db.l3_db._notify_routers_callback-8767758024633 port, after_delete
Traceback (most recent call last):
  File "neutron/callbacks/manager.py", line 148, in _notify_loop
callback(resource, event, trigger, **kwargs)
  File "neutron/db/l3_db.py", line 1842, in _notify_routers_callback
l3plugin.notify_routers_updated(context, router_ids)
AttributeError: 'NoneType' object has no attribute 'notify_routers_updated'
}}}

Traceback (most recent call last):
  File "neutron/tests/unit/db/test_l3_dvr_db.py", line 563, in 
test_undo_router_interface_change_on_csnat_error_revert_failure
self._test_undo_router_interface_change_on_csnat_error(True)
  File "neutron/tests/unit/db/test_l3_dvr_db.py", line 566, in 
_test_undo_router_interface_change_on_csnat_error
router, subnet_v4, subnet_v6 = self._setup_router_with_v4_and_v6()
  File "neutron/tests/unit/db/test_l3_dvr_db.py", line 551, in 
_setup_router_with_v4_and_v6
{'subnet_id': subnet_v6['subnet']['id']})
  File "neutron/db/l3_dvr_db.py", line 335, in add_router_interface
port['fixed_ips'][-1]['subnet_id'])
  File "neutron/db/l3_dvr_db.py", line 782, in _add_csnat_router_interface_port
{'port': port_data})
  File "neutron/plugins/common/utils.py", line 197, in create_port
return core_plugin.create_port(context, {'port': port_data})
  File "neutron/common/utils.py", line 618, in inner
return f(self, context, *args, **kwargs)
  File "neutron/plugins/ml2/plugin.py", line 1238, in create_port
result, mech_context = self._create_port_db(context, port)
  File "neutron/plugins/ml2/plugin.py", line 1206, in _create_port_db
port_db = self.create_port_db(context, port)
  File "neutron/db/db_base_plugin_v2.py", line 1153, in create_port_db
self.ipam.allocate_ips_for_port_and_store(context, port, port_id)
  File "neutron/db/ipam_pluggable_backend.py", line 191, in 
allocate_ips_for_port_and_store
revert_on_fail=False)
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "neutron/db/ipam_pluggable_backend.py", line 172, in 
allocate_ips_for_port_and_store
ips = self._allocate_ips_for_port(context, port_copy)
  File "neutron/db/ipam_pluggable_backend.py", line 203, in 
_allocate_ips_for_port
host=p.get(portbindings.HOST_ID))
  File "neutron/db/ipam_backend_mixin.py", line 578, in _ipam_get_subnets
return [self._make_subnet_dict(c, context=context) for c in query]
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2760, in __iter__
self.session._autoflush()
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1293, in _autoflush
self.flush()
  File 
"/home/jenkins/workspace/gate-neutron-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 2019, in flush
self._flush(objects)
  File 

[Yahoo-eng-team] [Bug 1615822] [NEW] desired azure cleanups

2016-08-22 Thread Scott Moser
Public bug reported:

When doing some recent cleanups of the azure datasource, we found a couple of 
desires:
a.) it'd be nice if cloud-init automatically used 'fabric' if no agent 
(waagent) was available.
b.) I didn't understand why we have the test 
test_exception_fetching_fabric_data_doesnt_propagate:
if we fail to fetch data, why do we just log the exception rather than 
raise an error?
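Desire (a) boils down to a fallback check before invoking the agent. The sketch
below is a hedged illustration, not cloud-init's actual code: the function name
and the injectable `which` lookup are hypothetical, and `'__builtin__'` is used
here simply as the "use the native fabric path" sentinel.

```python
import shutil

# Hypothetical sketch of desire (a): prefer the configured agent (waagent)
# when its binary exists, otherwise fall back to built-in fabric handling.
# The which() lookup is injectable so the choice is testable offline.
def choose_agent_command(configured_cmd, which=shutil.which):
    if configured_cmd and which(configured_cmd.split()[0]):
        return configured_cmd   # agent binary present: use it
    return '__builtin__'        # no agent found: use native fabric code
```

For example, `choose_agent_command('waagent', which=lambda n: None)` falls back
to `'__builtin__'` because the lookup reports no binary.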

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1615822

Title:
  desired azure cleanups

Status in cloud-init:
  New

Bug description:
  When doing some recent cleanups of the azure datasource, we found a couple 
of desires:
  a.) it'd be nice if cloud-init automatically used 'fabric' if no agent 
(waagent) was available.
  b.) I didn't understand why we have the test 
test_exception_fetching_fabric_data_doesnt_propagate:
  if we fail to fetch data, why do we just log the exception rather than 
raise an error?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1615822/+subscriptions



[Yahoo-eng-team] [Bug 1615715] Re: ip6tables-restore fails

2016-08-22 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615715

Title:
  ip6tables-restore fails

Status in neutron:
  Invalid

Bug description:
  2016-08-22 11:54:58.697 1 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-baa3335b-0013-42dd-856a-64a5c2557a01', 
'ip6tables-restore', '-n'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
  2016-08-22 11:54:58.970 1 ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: # Generated by iptables_manager

  Usage: ip6tables-restore [-b] [-c] [-v] [-t] [-h]
 [ --binary ]
 [ --counters ]
 [ --verbose ]
 [ --test ]
 [ --help ]
 [ --noflush ]
[ --modprobe=]

  It seems iptables-1.4.21-16.el7.x86_64 does not support the '-n' option
  used in the command above.
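A defensive agent could gate use of the flag on the installed iptables version.
The sketch below is a hedged illustration: the function names are hypothetical
and the (1, 6, 0) threshold is an assumption for illustration only, not a
verified minimum for '-n/--noflush' support.

```python
# Hedged sketch: decide whether to pass '-n' (--noflush) to
# ip6tables-restore based on a parsed version tuple.
def parse_version(output):
    # e.g. "ip6tables-restore v1.4.21" -> (1, 4, 21)
    token = output.strip().rsplit('v', 1)[-1]
    return tuple(int(part) for part in token.split('.'))

def supports_noflush(version, minimum=(1, 6, 0)):
    # Assumed threshold; tuple comparison is lexicographic.
    return version >= minimum

def restore_args(version):
    if supports_noflush(version):
        return ['ip6tables-restore', '-n']
    return ['ip6tables-restore']
```

With the version from the bug report, `restore_args((1, 4, 21))` would omit the
unsupported flag.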

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1615715/+subscriptions



[Yahoo-eng-team] [Bug 1577960] Re: [2.0b4] After commissioning, subnet lists 'observed' IP address for machines

2016-08-22 Thread Andres Rodriguez
** Changed in: maas
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1577960

Title:
  [2.0b4] After commissioning, subnet lists 'observed' IP address for
  machines

Status in cloud-init:
  Invalid
Status in MAAS:
  Invalid

Bug description:
  I commissioned a whole bunch of machines, and after it completed, MAAS
  showed 'Machines' with 'Observed' IP addresses, but the machines are now
  off.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1577960/+subscriptions



[Yahoo-eng-team] [Bug 1615788] Re: Microversions URL missing in Nova docs.

2016-08-22 Thread Andreas Jaeger
** Project changed: openstack-manuals => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615788

Title:
  Microversions URL missing in Nova docs.

Status in OpenStack Compute (nova):
  New

Bug description:
  http://docs.openstack.org/developer/nova/api_plugins.html

  The above page has a "microversions specific document" link which leads to
  "The requested URL /developer/nova/api_microversions.html was not
  found on this server"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615788/+subscriptions



[Yahoo-eng-team] [Bug 1613251] Re: HAproxy scenario tests cleanup fail with a StaleDataError (LBaaSv1 and LBaaSv2)

2016-08-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/351490
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=8614790d9eb0ee717b467919b09f08b44f682333
Submitter: Jenkins
Branch:master

commit 8614790d9eb0ee717b467919b09f08b44f682333
Author: Nir Magnezi 
Date:   Fri Aug 5 08:37:01 2016 +0300

Add retries upon db error for deleting vip_port

This is meant to handle cases where vip_port deletion fails with 
StaleDataError.
StaleDataError happens when you delete a loadbalancer and then try
to delete its vip port.

LBaaSv1:
The retries were added by splitting delete_vip() into two functions:
delete_vip() and _delete_vip_port(), and decorating the latter
with @db_api.retry_db_errors

LBaaSv2:
The retries were added by splitting delete_loadbalancer() into two functions:
delete_loadbalancer() and _delete_vip_port(), and decorating the latter
with @db_api.retry_db_errors

Closes-Bug: #1613251

Change-Id: Ibf295bdf21ac2a7debc26aec8b403103fa867691


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1613251

Title:
  HAproxy scenario tests cleanup fail with a StaleDataError (LBaaSv1 and
  LBaaSv2)

Status in neutron:
  Fix Released

Bug description:
  Example for that error:
  
http://logs.openstack.org/90/351490/4/check/gate-neutron-lbaasv2-dsvm-scenario-namespace-nv/8f1255a/logs/screen-q-svc.txt.gz#_2016-08-07_06_00_57_478

  This is easily reproduced locally on my devstack with 
neutron_lbaas.tests.tempest.v2.scenario.test_load_balancer_basic;
  moreover, even if we narrow the above-mentioned scenario to only creating a 
loadbalancer (with listener and pool) and then run the cleanup, the issue 
reproduces.

  This is blocking the gate-neutron-lbaasv2-dsvm-scenario-namespace-nv
  job from properly indicating whether or not scenario tests fail for the
  haproxy-in-namespace lbaas driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1613251/+subscriptions



[Yahoo-eng-team] [Bug 1615788] [NEW] Microversions URL missing in Nova docs.

2016-08-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

http://docs.openstack.org/developer/nova/api_plugins.html

The above page has a "microversions specific document" link which leads to
"The requested URL /developer/nova/api_microversions.html was not found
on this server"

** Affects: nova
 Importance: Undecided
 Status: New

-- 
Microversions URL missing in Nova docs.
https://bugs.launchpad.net/bugs/1615788
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1607751] Re: Schema for enabling users breaks keystoneclient and other projects

2016-08-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/348881
Committed: 
https://git.openstack.org/cgit/openstack/python-keystoneclient/commit/?id=2b3258dcd0581e3a1291bd0e9277698b5631dced
Submitter: Jenkins
Branch:master

commit 2b3258dcd0581e3a1291bd0e9277698b5631dced
Author: Boris Bobrov 
Date:   Fri Jul 29 16:17:44 2016 +0300

Do not send user ids as payload

User ids are already in the URL. Keystone doesn't consume ids in the
body.

Change-Id: Ie90ebd32fe584dd1b360dc75a828316b1a9aedde
Closes-Bug: 1607751


** Changed in: python-keystoneclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1607751

Title:
  Schema for enabling users breaks keystoneclient and other projects

Status in OpenStack Identity (keystone):
  Fix Released
Status in python-keystoneclient:
  Fix Released

Bug description:
  Patch https://review.openstack.org/#/c/344057/ introduced schema
  validation for enabling a user. In the schema, it forbids passing any
  parameters other than "enabled". This causes failures in at least rally:
  http://logs.openstack.org/88/348788/1/check/gate-rally-dsvm-keystone-
  v2api-rally/992e7ee/rally-
  
plot/results.html.gz#/KeystoneBasic.create_user_set_enabled_and_delete/failures

  It happens because keystoneclient passes "id" and "enabled":
  https://github.com/openstack/python-
  keystoneclient/blob/master/keystoneclient/v2_0/users.py#L60-L62 , so
  the change broke anybody who uses the method in keystoneclient.
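The fix amounts to keeping the user id in the URL and out of the request body.
A minimal sketch of that idea, with a hypothetical helper name and an assumed
v2.0 endpoint path:

```python
# Hedged sketch of the fix in review 348881: the user id belongs in the
# URL, so the body carries only the field actually being changed.
def build_update_enabled_request(user_id, enabled):
    url = '/v2.0/users/%s/OS-KSADM/enabled' % user_id   # id in the URL only
    body = {'user': {'enabled': bool(enabled)}}         # no 'id' in the payload
    return url, body
```

A body built this way cannot trip the schema validation, since "enabled" is
the only key sent.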

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1607751/+subscriptions



[Yahoo-eng-team] [Bug 1615763] [NEW] CommandError: An error occurred during rendering

2016-08-22 Thread Eric K
Public bug reported:

Repeatedly seeing the following error during devstack setup in tempest
tests in Congress (for example:
http://logs.openstack.org/57/356157/4/check/gate-congress-dsvm-
api/70474ca/logs/devstack-early.txt.gz [also attached]). Any idea
whether it's a horizon bug or devstack bug or user error on Congress
side? Thanks so much!

2016-08-22 04:09:07.389 | 1658 static files copied to 
'/opt/stack/new/horizon/static'.
2016-08-22 04:09:09.017 | Found 'compress' tags in:
2016-08-22 04:09:09.017 |   
/opt/stack/new/horizon/openstack_dashboard/templates/horizon/_conf.html
2016-08-22 04:09:09.017 |   
/opt/stack/new/horizon/openstack_dashboard/templates/horizon/_scripts.html
2016-08-22 04:09:09.017 |   
/opt/stack/new/congress/congress_dashboard/templates/admin/base.html
2016-08-22 04:09:09.017 |   
/opt/stack/new/horizon/openstack_dashboard/templates/_stylesheets.html
2016-08-22 04:09:09.017 |   
/opt/stack/new/congress/congress_dashboard/templates/admin/_scripts.html
2016-08-22 04:09:09.875 | Compressing... CommandError: An error occurred during 
rendering /opt/stack/new/congress/congress_dashboard/templates/admin/base.html: 
'horizon/lib/jquery-ui/ui/jquery-ui.css' could not be found in the 
COMPRESS_ROOT '/opt/stack/new/horizon/static' or with staticfiles.
2016-08-22 04:09:09.957 | exit_trap: cleaning up child processes
2016-08-22 04:09:09.957 | ./stack.sh: line 486: kill: (15777) - No such 
process+ unset GREP_OPTIONS

** Affects: congress
 Importance: Undecided
 Status: New

** Affects: devstack
 Importance: Undecided
 Status: New

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "devstack-early.txt.gz.txt"
   
https://bugs.launchpad.net/bugs/1615763/+attachment/4725920/+files/devstack-early.txt.gz.txt

** Also affects: devstack
   Importance: Undecided
   Status: New

** Also affects: congress
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615763

Title:
  CommandError: An error occurred during rendering

Status in congress:
  New
Status in devstack:
  New
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Repeatedly seeing the following error during devstack setup in tempest
  tests in Congress (for example:
  http://logs.openstack.org/57/356157/4/check/gate-congress-dsvm-
  api/70474ca/logs/devstack-early.txt.gz [also attached]). Any idea
  whether it's a horizon bug or devstack bug or user error on Congress
  side? Thanks so much!

  2016-08-22 04:09:07.389 | 1658 static files copied to 
'/opt/stack/new/horizon/static'.
  2016-08-22 04:09:09.017 | Found 'compress' tags in:
  2016-08-22 04:09:09.017 | 
/opt/stack/new/horizon/openstack_dashboard/templates/horizon/_conf.html
  2016-08-22 04:09:09.017 | 
/opt/stack/new/horizon/openstack_dashboard/templates/horizon/_scripts.html
  2016-08-22 04:09:09.017 | 
/opt/stack/new/congress/congress_dashboard/templates/admin/base.html
  2016-08-22 04:09:09.017 | 
/opt/stack/new/horizon/openstack_dashboard/templates/_stylesheets.html
  2016-08-22 04:09:09.017 | 
/opt/stack/new/congress/congress_dashboard/templates/admin/_scripts.html
  2016-08-22 04:09:09.875 | Compressing... CommandError: An error occurred 
during rendering 
/opt/stack/new/congress/congress_dashboard/templates/admin/base.html: 
'horizon/lib/jquery-ui/ui/jquery-ui.css' could not be found in the 
COMPRESS_ROOT '/opt/stack/new/horizon/static' or with staticfiles.
  2016-08-22 04:09:09.957 | exit_trap: cleaning up child processes
  2016-08-22 04:09:09.957 | ./stack.sh: line 486: kill: (15777) - No such process
  + unset GREP_OPTIONS

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1615763/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615763] [NEW] CommandError: An error occurred during rendering

2016-08-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Repeatedly seeing the following error during devstack setup in tempest
tests in Congress (for example:
http://logs.openstack.org/57/356157/4/check/gate-congress-dsvm-
api/70474ca/logs/devstack-early.txt.gz [also attached]). Any idea
whether it's a horizon bug or devstack bug or user error on Congress
side? Thanks so much!

2016-08-22 04:09:07.389 | 1658 static files copied to 
'/opt/stack/new/horizon/static'.
2016-08-22 04:09:09.017 | Found 'compress' tags in:
2016-08-22 04:09:09.017 |   
/opt/stack/new/horizon/openstack_dashboard/templates/horizon/_conf.html
2016-08-22 04:09:09.017 |   
/opt/stack/new/horizon/openstack_dashboard/templates/horizon/_scripts.html
2016-08-22 04:09:09.017 |   
/opt/stack/new/congress/congress_dashboard/templates/admin/base.html
2016-08-22 04:09:09.017 |   
/opt/stack/new/horizon/openstack_dashboard/templates/_stylesheets.html
2016-08-22 04:09:09.017 |   
/opt/stack/new/congress/congress_dashboard/templates/admin/_scripts.html
2016-08-22 04:09:09.875 | Compressing... CommandError: An error occurred during 
rendering /opt/stack/new/congress/congress_dashboard/templates/admin/base.html: 
'horizon/lib/jquery-ui/ui/jquery-ui.css' could not be found in the 
COMPRESS_ROOT '/opt/stack/new/horizon/static' or with staticfiles.
2016-08-22 04:09:09.957 | exit_trap: cleaning up child processes
2016-08-22 04:09:09.957 | ./stack.sh: line 486: kill: (15777) - No such process
+ unset GREP_OPTIONS

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
CommandError: An error occurred during rendering
https://bugs.launchpad.net/bugs/1615763
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).



[Yahoo-eng-team] [Bug 1614014] Re: Networking configuration options in Configuration Reference

2016-08-22 Thread Matt Kassawara
** Project changed: openstack-manuals => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614014

Title:
  Networking configuration options in Configuration Reference

Status in neutron:
  New

Bug description:
  Reference to deprecated option "segment_mtu" in Networking
  configuration options in Configuration Reference

  In link: http://docs.openstack.org/mitaka/config-
  reference/networking/networking_options_reference.html#agent

  For option: network_device_mtu = None 
  It is suggested: (Integer) DEPRECATED: MTU setting for device. This option 
will be removed in Newton. Please use the system-wide segment_mtu setting which 
the agents will take into account when wiring VIFs.

  As per this link: 
http://docs.openstack.org/mitaka/config-reference/tables/conf-changes/neutron.html
 
  "segment_mtu" itself is deprecated in Mitaka and instead "global_physnet_mtu" 
should be used.

  So, above description for "network_device_mtu" should be corrected to:

  (Integer) DEPRECATED: MTU setting for device. This option will be
  removed in Newton. Please use the system-wide global_physnet_mtu
  setting which the agents will take into account when wiring VIFs.

  ---
  Release: 0.9 on 2016-08-13 23:57
  SHA: 1c28dd8140b31a7249db96f88408a093249ad5bd
  Source: 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/config-reference/source/networking/networking_options_reference.rst
  URL: 
http://docs.openstack.org/mitaka/config-reference/networking/networking_options_reference.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614014/+subscriptions



[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358571
Committed: 
https://git.openstack.org/cgit/openstack/murano/commit/?id=5ff901ba1f2353fa783fb7318de2460a4fa722f0
Submitter: Jenkins
Branch: master

commit 5ff901ba1f2353fa783fb7318de2460a4fa722f0
Author: AvnishPal 
Date:   Mon Aug 22 17:02:29 2016 +0530

Use upper constraints for all jobs in tox.ini

Openstack infra now supports upper constraints for
all jobs. Updated tox.ini to use upper constraints
for all jobs

Change-Id: I06babce38bd90550f3d5d169e424209bc10ab11f
Closes-Bug: #1614361


** Changed in: murano
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic Inspector:
  In Progress
Status in Mistral:
  In Progress
Status in Murano:
  Fix Released
Status in networking-ovn:
  New
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in openstack-ansible:
  In Progress
Status in python-muranoclient:
  In Progress
Status in tacker:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in vmware-nsx:
  In Progress
Status in zaqar:
  In Progress
Status in Zaqar-ui:
  Fix Released

Bug description:
  Openstack infra now supports upper constraints for releasenotes, cover, venv 
targets.
  tox.ini uses install_command for these targets, which can now be safely 
removed.
  Reference for mail that details this support: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html
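
As a hedged illustration of the cleanup (the target name and command lines follow the common infra convention; they are not copied from the murano patch itself), the per-target override that can now be dropped looks like:

```ini
# Before: special targets such as venv carried their own install_command,
# bypassing upper constraints.
[testenv:venv]
install_command = pip install -U {opts} {packages}
commands = {posargs}

# After: the override is removed, so the target inherits the
# constraints-aware install_command from the base [testenv] section.
[testenv:venv]
commands = {posargs}
```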

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions



[Yahoo-eng-team] [Bug 1614014] [NEW] Networking configuration options in Configuration Reference

2016-08-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Reference to deprecated option "segment_mtu" in Networking configuration
options in Configuration Reference

In link: http://docs.openstack.org/mitaka/config-
reference/networking/networking_options_reference.html#agent

For option: network_device_mtu = None   
It is suggested: (Integer) DEPRECATED: MTU setting for device. This option will 
be removed in Newton. Please use the system-wide segment_mtu setting which the 
agents will take into account when wiring VIFs.

As per this link: 
http://docs.openstack.org/mitaka/config-reference/tables/conf-changes/neutron.html
 
"segment_mtu" itself is deprecated in Mitaka and instead "global_physnet_mtu" 
should be used.

So, above description for "network_device_mtu" should be corrected to:

(Integer) DEPRECATED: MTU setting for device. This option will be
removed in Newton. Please use the system-wide global_physnet_mtu setting
which the agents will take into account when wiring VIFs.
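
To make the corrected guidance concrete, here is a hedged neutron.conf sketch (the MTU value is illustrative, not taken from the docs):

```ini
[DEFAULT]
# Deprecated options, slated for removal in Newton:
# network_device_mtu = 1500
# segment_mtu = 1500

# Current system-wide option that the agents take into account
# when wiring VIFs:
global_physnet_mtu = 1500
```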

---
Release: 0.9 on 2016-08-13 23:57
SHA: 1c28dd8140b31a7249db96f88408a093249ad5bd
Source: 
http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/config-reference/source/networking/networking_options_reference.rst
URL: 
http://docs.openstack.org/mitaka/config-reference/networking/networking_options_reference.html

** Affects: neutron
 Importance: Undecided
 Assignee: Ashish Billore (ashish.billore)
 Status: New

-- 
Networking configuration options in Configuration Reference
https://bugs.launchpad.net/bugs/1614014
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread Richard Theis
** Also affects: networking-ovn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Ironic Inspector:
  In Progress
Status in Mistral:
  In Progress
Status in Murano:
  In Progress
Status in networking-ovn:
  New
Status in neutron:
  In Progress
Status in octavia:
  In Progress
Status in openstack-ansible:
  In Progress
Status in python-muranoclient:
  In Progress
Status in tacker:
  In Progress
Status in OpenStack DBaaS (Trove):
  In Progress
Status in vmware-nsx:
  In Progress
Status in zaqar:
  In Progress
Status in Zaqar-ui:
  Fix Released

Bug description:
  Openstack infra now supports upper constraints for releasenotes, cover, venv 
targets.
  tox.ini uses install_command for these targets, which can now be safely 
removed.
  Reference for mail that details this support: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions



[Yahoo-eng-team] [Bug 1615715] [NEW] ip6tables-restore fails

2016-08-22 Thread Serguei Bezverkhi
Public bug reported:

2016-08-22 11:54:58.697 1 DEBUG neutron.agent.linux.utils [-] Running command: 
['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-baa3335b-0013-42dd-856a-64a5c2557a01', 'ip6tables-restore', 
'-n'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
2016-08-22 11:54:58.970 1 ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: # Generated by iptables_manager

Usage: ip6tables-restore [-b] [-c] [-v] [-t] [-h]
   [ --binary ]
   [ --counters ]
   [ --verbose ]
   [ --test ]
   [ --help ]
   [ --noflush ]
   [ --modprobe=<command>]

It seems iptables-1.4.21-16.el7.x86_64 does not support the '-n' option
used in the command above.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615715

Title:
  ip6tables-restore fails

Status in neutron:
  New

Bug description:
  2016-08-22 11:54:58.697 1 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-baa3335b-0013-42dd-856a-64a5c2557a01', 
'ip6tables-restore', '-n'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
  2016-08-22 11:54:58.970 1 ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: # Generated by iptables_manager

  Usage: ip6tables-restore [-b] [-c] [-v] [-t] [-h]
 [ --binary ]
 [ --counters ]
 [ --verbose ]
 [ --test ]
 [ --help ]
 [ --noflush ]
 [ --modprobe=<command>]

  It seems iptables-1.4.21-16.el7.x86_64 does not support the '-n' option
  used in the command above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1615715/+subscriptions



[Yahoo-eng-team] [Bug 1615710] [NEW] test_server_multi_create_auto_allocate failing with "Failed to allocate the network(s), not rescheduling"

2016-08-22 Thread Armando Migliaccio
Public bug reported:

http://logs.openstack.org/93/356593/3/check/gate-tempest-dsvm-neutron-
dvr/5396726/logs/testr_results.html.gz

http://logs.openstack.org/78/355078/7/check/gate-tempest-dsvm-neutron-
linuxbridge/913f31a/logs/testr_results.html.gz

Traceback (most recent call last):
  File "tempest/api/compute/admin/test_auto_allocate_network.py", line 177, in 
test_server_multi_create_auto_allocate
min_count=3)
  File "tempest/common/compute.py", line 167, in create_test_server
% server['id'])
  File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
self.force_reraise()
  File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
six.reraise(self.type_, self.value, self.tb)
  File "tempest/common/compute.py", line 149, in create_test_server
clients.servers_client, server['id'], wait_until)
  File "tempest/common/waiters.py", line 75, in wait_for_server_status
server_id=server_id)
tempest.exceptions.BuildErrorException: Server 
600e0bed-d693-476f-9e72-5ef34d9e00d8 failed to build and is in ERROR status
Details: {u'created': u'2016-08-21T23:02:49Z', u'code': 500, u'message': 
u'Build of instance 600e0bed-d693-476f-9e72-5ef34d9e00d8 aborted: Failed to 
allocate the network(s), not rescheduling.'}

This test:

https://review.openstack.org/#/c/327191/

merged Aug 20.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: auto-allocated-topology

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Confirmed

** Tags added: auto-allocated-topology

** Description changed:

  http://logs.openstack.org/93/356593/3/check/gate-tempest-dsvm-neutron-
  dvr/5396726/logs/testr_results.html.gz
  
  http://logs.openstack.org/78/355078/7/check/gate-tempest-dsvm-neutron-
  linuxbridge/913f31a/logs/testr_results.html.gz
  
+ Traceback (most recent call last):
+   File "tempest/api/compute/admin/test_auto_allocate_network.py", line 177, 
in test_server_multi_create_auto_allocate
+ min_count=3)
+   File "tempest/common/compute.py", line 167, in create_test_server
+ % server['id'])
+   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
+ self.force_reraise()
+   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
+ six.reraise(self.type_, self.value, self.tb)
+   File "tempest/common/compute.py", line 149, in create_test_server
+ clients.servers_client, server['id'], wait_until)
+   File "tempest/common/waiters.py", line 75, in wait_for_server_status
+ server_id=server_id)
+ tempest.exceptions.BuildErrorException: Server 
600e0bed-d693-476f-9e72-5ef34d9e00d8 failed to build and is in ERROR status
+ Details: {u'created': u'2016-08-21T23:02:49Z', u'code': 500, u'message': 
u'Build of instance 600e0bed-d693-476f-9e72-5ef34d9e00d8 aborted: Failed to 
allocate the network(s), not rescheduling.'}
+ 
  This test:
  
  https://review.openstack.org/#/c/327191/
  
  merged Aug 20.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615710

Title:
  test_server_multi_create_auto_allocate failing with "Failed to
  allocate the network(s), not rescheduling"

Status in neutron:
  Confirmed

Bug description:
  http://logs.openstack.org/93/356593/3/check/gate-tempest-dsvm-neutron-
  dvr/5396726/logs/testr_results.html.gz

  http://logs.openstack.org/78/355078/7/check/gate-tempest-dsvm-neutron-
  linuxbridge/913f31a/logs/testr_results.html.gz

  Traceback (most recent call last):
File "tempest/api/compute/admin/test_auto_allocate_network.py", line 177, 
in test_server_multi_create_auto_allocate
  min_count=3)
File "tempest/common/compute.py", line 167, in create_test_server
  % server['id'])
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  self.force_reraise()
File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "tempest/common/compute.py", line 149, in create_test_server
  clients.servers_client, server['id'], wait_until)
File "tempest/common/waiters.py", line 75, in wait_for_server_status
  server_id=server_id)
  tempest.exceptions.BuildErrorException: Server 
600e0bed-d693-476f-9e72-5ef34d9e00d8 failed to build and is in ERROR status
  Details: {u'created': u'2016-08-21T23:02:49Z', u'code': 500, u'message': 
u'Build of instance 600e0bed-d693-476f-9e72-5ef34d9e00d8 aborted: Failed to 
allocate the network(s), not rescheduling.'}

  This test:

  

[Yahoo-eng-team] [Bug 1615698] [NEW] developing.rst needs to be updated for new rolling upgrade approach

2016-08-22 Thread Henry Nash
Public bug reported:

The existing developing.rst references the standard rolling upgrade
approach where contract can't remove anything until X+2 etc. This needs
to be updated for the new approach we have now merged.

** Affects: keystone
 Importance: Undecided
 Assignee: Henry Nash (henry-nash)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Henry Nash (henry-nash)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1615698

Title:
  developing.rst needs to be updated for new rolling upgrade approach

Status in OpenStack Identity (keystone):
  New

Bug description:
  The existing developing.rst references the standard rolling upgrade
  approach where contract can't remove anything until X+2 etc. This
  needs to be updated for the new approach we have now merged.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1615698/+subscriptions



[Yahoo-eng-team] [Bug 1615687] [NEW] Help text in Angular Images workflows is lacking

2016-08-22 Thread Rob Cresswell
Public bug reported:

The help text in the following angular workflows is very sparse:

- Create Image
- Edit Image
- Create Volume

These files can be found at
openstack_dashboard/static/app/core/images/steps/create-
image/*/*.help.html

We should drop the unnecessary 'Description' header and just use HTML
description lists to explain what each field does, in this format:

https://review.openstack.org/#/c/348969/5/openstack_dashboard/static/app/core/networks/actions
/subnet-details.help.html

This information can just be pulled from the API docs

Note, to enable the angular images panel at this point in time, you will
need to override the default ANGULAR_FEATURES setting. An example of
overriding this setting is here:
https://review.openstack.org/#/c/357041/ In this instance, you will want
to change the setting to True instead of False.
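
For illustration only, a local_settings.py override to flip the feature flag could look like the following; the 'images_panel' key name is an assumption drawn from the linked review, not a documented setting:

```python
# Hypothetical local_settings.py fragment enabling the angular images
# panel; check the linked review (357041) for the real key name and
# default value.
ANGULAR_FEATURES = {
    'images_panel': True,  # switch the images panel to the angular version
}
```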

You may also want to base this change on
https://review.openstack.org/#/c/356501/ as it affects the create volume
help file.

** Affects: horizon
 Importance: Wishlist
 Assignee: Bradley Jones (bradjones)
 Status: New


** Tags: angularjs

** Changed in: horizon
 Assignee: (unassigned) => Bradley Jones (bradjones)

** Changed in: horizon
   Importance: Undecided => Wishlist

** Description changed:

  The help text in the following angular workflows is very sparse:
  
  - Create Image
  - Edit Image
  - Create Volume
+ 
+ These files can be found at
+ openstack_dashboard/static/app/core/images/steps/create-
+ image/*/*.help.html
  
  We should drop the unnecessary 'Description' header and just use HTML
  description lists to explain what each field does, in this format:
  
  
https://review.openstack.org/#/c/348969/5/openstack_dashboard/static/app/core/networks/actions
  /subnet-details.help.html
  
  This information can just be pulled from the API docs
  
  Note, to enable the angular images panel at this point in time, you will
  need to override the default ANGULAR_FEATURES setting. An example of
  overriding this setting is here:
  https://review.openstack.org/#/c/357041/ In this instance, you will want
  to change the setting to True instead of False.

** Description changed:

  The help text in the following angular workflows is very sparse:
  
  - Create Image
  - Edit Image
  - Create Volume
  
  These files can be found at
  openstack_dashboard/static/app/core/images/steps/create-
  image/*/*.help.html
  
  We should drop the unnecessary 'Description' header and just use HTML
  description lists to explain what each field does, in this format:
  
  
https://review.openstack.org/#/c/348969/5/openstack_dashboard/static/app/core/networks/actions
  /subnet-details.help.html
  
  This information can just be pulled from the API docs
  
  Note, to enable the angular images panel at this point in time, you will
  need to override the default ANGULAR_FEATURES setting. An example of
  overriding this setting is here:
  https://review.openstack.org/#/c/357041/ In this instance, you will want
  to change the setting to True instead of False.
+ 
+ You may also want to base this change on
+ https://review.openstack.org/#/c/356501/ as it affects the create volume
+ help file.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615687

Title:
  Help text in Angular Images workflows is lacking

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The help text in the following angular workflows is very sparse:

  - Create Image
  - Edit Image
  - Create Volume

  These files can be found at
  openstack_dashboard/static/app/core/images/steps/create-
  image/*/*.help.html

  We should drop the unnecessary 'Description' header and just use HTML
  description lists to explain what each field does, in this format:

  
https://review.openstack.org/#/c/348969/5/openstack_dashboard/static/app/core/networks/actions
  /subnet-details.help.html

  This information can just be pulled from the API docs

  Note, to enable the angular images panel at this point in time, you
  will need to override the default ANGULAR_FEATURES setting. An example
  of overriding this setting is here:
  https://review.openstack.org/#/c/357041/ In this instance, you will
  want to change the setting to True instead of False.

  You may also want to base this change on
  https://review.openstack.org/#/c/356501/ as it affects the create
  volume help file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615687/+subscriptions



[Yahoo-eng-team] [Bug 1615690] [NEW] TypeError with native openflow agent if no datapath_id returned

2016-08-22 Thread Inessa Vasilevskaya
Public bug reported:

In case no datapath_id is returned (ex. deployment is broken in some way) the 
_get_dpid will raise TypeError while trying to process [].
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py#L49

log - http://paste.openstack.org/show/562232/
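
As a hedged sketch (the function and exception names are hypothetical, and the shape of the ovsdb return value is an assumption based on the traceback, not the actual ovs_bridge.py code), a guard like this would turn the opaque TypeError into an explicit failure:

```python
# Hypothetical defensive variant of _get_dpid: int([], 16) is the
# TypeError seen in the pasted log when ovsdb returns no datapath_id.

class BridgeDatapathUnavailable(Exception):
    """Raised when ovsdb returns no datapath_id for the bridge."""

def get_dpid(db_value):
    # db_get_val can yield an empty result on a broken deployment;
    # fail with a named exception instead of an indexing TypeError.
    if not db_value:
        raise BridgeDatapathUnavailable(
            "no datapath_id returned for bridge; is the deployment broken?")
    if isinstance(db_value, list):
        db_value = db_value[0]
    return int(db_value, 16)
```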

** Affects: neutron
 Importance: Undecided
 Assignee: Inessa Vasilevskaya (ivasilevskaya)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Inessa Vasilevskaya (ivasilevskaya)

** Description changed:

- In case no datapath_id is returned (ex. deployment is broken in some
- way) the _get_dpid will raise TypeError while trying to process [].
+ In case no datapath_id is returned (ex. deployment is broken in some way) the 
_get_dpid will raise TypeError while trying to process [].
+ 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py#L49
  
  log - http://paste.openstack.org/show/562232/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615690

Title:
  TypeError with native openflow agent if no datapath_id returned

Status in neutron:
  New

Bug description:
  In case no datapath_id is returned (ex. deployment is broken in some way) the 
_get_dpid will raise TypeError while trying to process [].
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py#L49

  log - http://paste.openstack.org/show/562232/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1615690/+subscriptions



[Yahoo-eng-team] [Bug 1511430] Re: live migration does not coordinate VM resume with network readiness

2016-08-22 Thread Gaudenz Steinlin
*** This bug is a duplicate of bug 1414559 ***
https://bugs.launchpad.net/bugs/1414559

** This bug has been marked a duplicate of bug 1414559
   OVS drops RARP packets by QEMU upon live-migration - VM temporarily 
disconnected

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1511430

Title:
  live migration does not coordinate VM resume with network readiness

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When migrating a VM from one host to another in combination with
  neutron,  VM can resume at destination host while network is not ready
  (race condition)

  QEMU has a mechanism to send a few RARPs once migration is done and
  before resuming.

  Nova needs to coordinate with Qemu and neutron (nova/neutron
  notification mechanism) to make sure VM is only resumed at destination
  host when networking has been properly wired, otherwise the RARPs are
  lost, and connectivity to the VM is disrupted until the VM sends any
  broadcast message.
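
  A minimal, hedged sketch of the missing coordination (all class and
  method names below are hypothetical; nova's real mechanism is the
  external-events notification API):

```python
# Toy model of pausing resume-at-destination until neutron signals
# that the port is wired, so QEMU's RARPs are not sent into an
# unwired port; names here are illustrative only.
import threading

class VifPluggedWaiter:
    def __init__(self):
        self._plugged = threading.Event()

    def on_network_vif_plugged(self, port_id):
        # Invoked when the neutron -> nova notification arrives.
        self._plugged.set()

    def wait_before_resume(self, timeout=300.0):
        # Resume the VM only after wiring is confirmed.
        if not self._plugged.wait(timeout):
            raise TimeoutError("port was never wired; refusing to resume")
```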

  log detail (merged from two hosts logs and tcpdumps)

  migration from host 29 to 30

  2015-10-29 10:54:27.592000 [VMLIFE30] 21476 INFO nova.compute.manager [-] 
[instance: a18a5824-4215-4e24-bcfc-cb9f89f6bcbd] VM Resumed (Lifecycle Event)
  2015-10-29 10:54:27.609000 [VMLIFE29] 29022 INFO nova.compute.manager [-] 
[instance: a18a5824-4215-4e24-bcfc-cb9f89f6bcbd] VM Paused (Lifecycle Event)
  2015-10-29 10:54:27.636000 [TAP30] tcpdump DEBUG 10:54:27.632047 
fa:16:3e:50:a3:46 > Broadcast, ethertype Reverse ARP (0x8035), length 60: 
Reverse Request who-is fa:16:3e:50:a3:46 tell fa:16:3e:50:a3:46, length 46
  2015-10-29 10:54:27.656000 [TAP29] tcpdump DEBUG tcpdump: pcap_loop: The 
interface went down

  2015-10-29 10:54:27.787000 [TAP30] tcpdump DEBUG 10:54:27.783353
  fa:16:3e:50:a3:46 > Broadcast, ethertype Reverse ARP (0x8035), length
  60: Reverse Request who-is fa:16:3e:50:a3:46 tell fa:16:3e:50:a3:46,
  length 46

  2015-10-29 10:54:27.818000 [FDB30] ovs-fdb DEBUG 62 0 fa:16:3e:50:a3:46 0  # switch associated to VLAN 0, should be "1", still not tagged, also not propagated to other hosts because vlan0 is invalid in the OVS implementation

  2015-10-29 10:54:28.037000 [TAP30] tcpdump DEBUG 10:54:28.033259
  fa:16:3e:50:a3:46 > Broadcast, ethertype Reverse ARP (0x8035), length
  60: Reverse Request who-is fa:16:3e:50:a3:46 tell fa:16:3e:50:a3:46,
  length 46

  2015-10-29 10:54:28.387000 [TAP30] tcpdump DEBUG 10:54:28.383211
  fa:16:3e:50:a3:46 > Broadcast, ethertype Reverse ARP (0x8035), length
  60: Reverse Request who-is fa:16:3e:50:a3:46 tell fa:16:3e:50:a3:46,
  length 46

  2015-10-29 10:54:28.969000 [VMLIFE29] 29022 INFO nova.compute.manager
  [-] [instance: a18a5824-4215-4e24-bcfc-cb9f89f6bcbd] VM Stopped
  (Lifecycle Event)

  2015-10-29 10:54:29.803000 [OVS30] 21310 DEBUG neutron.agent.linux.utils 
[req-a33468a6-f259-4324-a132-ab0dd025eeec None]
  Command: ['sudo', 'neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ovs-vsctl', '--timeout=10', 'set', 'Port', 
'qvo2e6d0f35-cb', 'tag=1']  # wiring is now ready, and after this 
neutron-openvswitch-agent will notify neutron-server which could notify nova 
about readiness...

  
  A reproduction ansible script is provided to show how it happens:

  https://github.com/mangelajo/oslogmerger/blob/master/contrib/debug-
  live-migration/debug-live-migration.yaml

  And complete merged output with oslogmerger can be found here:
  
https://raw.githubusercontent.com/mangelajo/oslogmerger/master/contrib/debug-live-migration/logs/mergedlogs-packets-ovs.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1511430/+subscriptions




[Yahoo-eng-team] [Bug 1608209] Re: manage.py not found

2016-08-22 Thread Rob Cresswell
There is a manage.py in upstream, so this is a distribution error.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1608209

Title:
  manage.py not found

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I've followed all instruction from OpenStack.org to setup a "All In One" 
Linux server with OpenStack.
  I've used Ubuntu 16.04.1 (supported) with minimal installation, then I've run 
stack.sh with success. Test.sh run with success too.
  Opening Horizon it fails with a very common error found a lot in literature. 
I've tried some fixes with no success.

  But the real strange thing is that THERE'S NO MANAGE.PY file at all to
  run...

  You have offline compression enabled but key 
"71f1bb91aa1db46c691a399635d662c7" is missing from offline manifest. You may 
need to run "python manage.py compress".
  1 {% load compress %}
  2 {% load themes %}
  3
  4
  5

  6 {% compress css %}
  7
  8 {% endcompress %}
  9
  10
  11
  12{% current_theme as current_theme %}
  13{% theme_dir as theme_dir %}
  14
  15{% comment %}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1608209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615676] [NEW] os-vif log messages corrupt and pollute test output

2016-08-22 Thread Jay Pipes
Public bug reported:

When running unit tests with tox, os-vif logging pollutes the output of
the testr runner and on multi-core machines, nearly always corrupts the
output stream like so:

http://paste.openstack.org/show/562213/

os-vif logging setup should be examined to ensure it is being processed
like all other modules that behave properly with testr.

** Affects: nova
 Importance: Wishlist
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615676

Title:
  os-vif log messages corrupt and pollute test output

Status in OpenStack Compute (nova):
  New

Bug description:
  When running unit tests with tox, os-vif logging pollutes the output
  of the testr runner and on multi-core machines, nearly always corrupts
  the output stream like so:

  http://paste.openstack.org/show/562213/

  os-vif logging setup should be examined to ensure it is being
  processed like all other modules that behave properly with testr.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615666] [NEW] Nova config misses [cinder] insecure option

2016-08-22 Thread Sayali Lunkad
Public bug reported:

Sample config generated does not contain insecure option needed for ssl.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615666

Title:
  Nova config misses [cinder] insecure option

Status in OpenStack Compute (nova):
  New

Bug description:
  Sample config generated does not contain insecure option needed for
  ssl.
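
  For reference, a sketch of what the missing section could look like once
  generated. The option names below follow the usual keystoneauth-style TLS
  options and are an assumption, not output from the actual config generator:

  ```ini
  [cinder]
  # Assumed option names (standard keystoneauth TLS options):
  # 'insecure' disables server certificate verification; prefer
  # pointing 'cafile' at a CA bundle instead where possible.
  insecure = false
  cafile = /etc/ssl/certs/ca-bundle.crt
  ```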

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615107] Re: Total Gibibytes chart title should be in bold

2016-08-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358103
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=ece7915bd4709adc0d1d759fc0683e82f7a42079
Submitter: Jenkins
Branch: master

commit ece7915bd4709adc0d1d759fc0683e82f7a42079
Author: Ying Zuo 
Date:   Fri Aug 19 13:11:23 2016 -0700

Remove a misplaced double quote

Change-Id: I941f57337c6fd908e22b4c1f03ac0052fe49e016
Closes-bug: #1615107


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615107

Title:
  Total Gibibytes chart title should be in bold

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps to reproduce:
  1. Go to project -> volumes panel
  2. Click create volume
  3. Note that the Total Gibibytes chart title should be in bold but it's not

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615632] [NEW] Horizon uses a table row class called 'status_unknown' when it should use 'table-warning'

2016-08-22 Thread Rob Cresswell
Public bug reported:

We're adding extra handling for a table row 'status_unknown' class; we
should just use Bootstrap's 'warning' class, and default to the Bootstrap
handling.

** Affects: horizon
 Importance: Low
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
   Importance: Undecided => Low

** Changed in: horizon
Milestone: None => next

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615632

Title:
  Horizon uses a table row class called 'status_unknown' when it should
  use 'table-warning'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We're adding extra handling for a table row 'status_unknown' class; we
  should just use Bootstrap's 'warning' class, and default to the
  Bootstrap handling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615632/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615371] Re: Restrict user private network cidr input

2016-08-22 Thread Rob Cresswell
Already documented in original patch, this is just an incorrect use of
DocImpact.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615371

Title:
  Restrict user private network cidr input

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  https://review.openstack.org/135877
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/horizon" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 134210195578bb568ec48228246fc7e399e80ba2
  Author: LIU-Yulong 
  Date:   Thu Nov 20 14:53:48 2014 +0800

  Restrict user private network cidr input
  
  If the user's private network has the same CIDR as the public
  network, there will be an error.  The router is unable to set
  the gateway properly when the private and public CIDR overlap.
  
  This patch add setting 'ALLOWED_PRIVATE_SUBNET_CIDR' to decide
  whether to restrict user private network cidr input. And admin
  dashboard network panel was not restricted.
  
  Example:
  ALLOWED_PRIVATE_SUBNET_CIDR = {'ipv4': ['192.168.0.0/16',
  '10.0.0.0/8'],
 'ipv6': ['fc00::/7',]}
  
  By default, leave the 'ipv4' and 'ipv6' with empty lists,
  then user subnet cidr input will not be restricted.
  
  DocImpact
  Implements blueprint: restrict-private-network-input
  Change-Id: I6b2ee58447d517c1c40344b8f4dd95968638da5b

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-08-22 Thread Hanxi Liu
** Also affects: panko
   Importance: Undecided
   Status: New

** Changed in: panko
 Assignee: (unassigned) => Hanxi Liu (hanxi-liu)

** No longer affects: panko

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in Freezer:
  In Progress
Status in gce-api:
  Fix Released
Status in Glance:
  In Progress
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in masakari:
  Fix Released
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in oslo.cache:
  Fix Released
Status in oslo.middleware:
  New
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in Rally:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Committed
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in tacker:
  Fix Released
Status in tempest:
  Fix Released
Status in tripleo:
  Fix Released
Status in trove-dashboard:
  Fix Released
Status in Vitrage:
  Fix Committed
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  LOG.warn is deprecated in Python 3 [1] . But it still used in a few
  places, non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
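
  The rename is easy to verify with the stdlib alone; `Logger.warn` is a
  deprecated alias that, on Python 3, emits a `DeprecationWarning` before
  delegating to `warning()` (plain `logging` here, no oslo.log involved):

  ```python
  import logging
  import warnings

  log = logging.getLogger("example")
  log.addHandler(logging.NullHandler())

  # Preferred spelling on both Python 2 and 3:
  log.warning("disk usage at %d%%", 91)

  # Deprecated alias: on Python 3 this emits a DeprecationWarning
  # before delegating to warning().
  with warnings.catch_warnings(record=True) as caught:
      warnings.simplefilter("always")
      log.warn("same message via the deprecated alias")

  deprecated = any(
      issubclass(w.category, DeprecationWarning) for w in caught)
  ```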

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615373] Re: Create Image when insufficient credentials

2016-08-22 Thread Rob Cresswell
The policy support for this has been in place since at least liberty
(https://github.com/openstack/horizon/blob/stable/liberty/openstack_dashboard/dashboards/project/images/images/tables.py#L125)
so this is just a configuration issue with TryStack.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615373

Title:
  Create Image when insufficient credentials

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The "Create image" button and the create form is accessible when a
  tenant's credentials don't allow image creation. And after submitting
  the form the error message "Unable to create new image" is too
  obscure.

  The example can be seen on TryStack (Liberty)
  https://x86.trystack.org/dashboard/project/images/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1590696] Re: neutron-lbaas: Devstack doesn't start agent properly

2016-08-22 Thread venkatamahesh
This is now corrected and working properly.


** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1590696

Title:
  neutron-lbaas: Devstack doesn't start agent properly

Status in neutron:
  Fix Released

Bug description:
  If I run devstack with

  enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
  ENABLED_SERVICES+=,q-lbaasv1

  then the service gets configured, but not started. Using

  ENABLED_SERVICES+=,q-lbaas

  instead works fine. The reason is that neutron-
  lbaas/devstack/plugin.sh does a

  run_process q-lbaas ...

  and within that function there is another check for "is_enabled
  q-lbaas" which is false at that point in the first case.
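
  Put together, the working variant from the report looks like this in
  `local.conf` (service name `q-lbaas`, plugin URL as given above):

  ```ini
  [[local|localrc]]
  enable_plugin neutron-lbaas https://github.com/openstack/neutron-lbaas.git
  ENABLED_SERVICES+=,q-lbaas
  ```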

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1590696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615613] [NEW] Live migration always fails when VNC/SPICE is listening at non-local, non-catch-all address

2016-08-22 Thread Paulo Matias
Public bug reported:

When VNC or SPICE is configured to listen only at a specific IP address
(e.g. on the management network in an OpenStack-Ansible deploy), the
check performed by check_can_live_migrate_source always fails, because
check_can_live_migrate_destination does not return the attributes needed
to retrieve listen_addrs using libvirt_migrate.graphics_listen_addrs.


Steps to reproduce:

 * Deploy OpenStack-Ansible from master.

 * Create an instance.

 * Try to live migrate the instance to another host.


Expected result: live migration should be carried out.

Actual result (nova-compute.log):

ERROR oslo_messaging.rpc.server MigrationError: Migration error: Your
libvirt version does not support the VIR_DOMAIN_XML_MIGRATABLE flag or
your destination node does not support retrieving listen addresses.  In
order for live migration to work properly, you must configure the
graphics (VNC and/or SPICE) listen addresses to be either the catch-all
address (0.0.0.0 or ::) or the local address (127.0.0.1 or ::1).


Environment:

* Multi-node OpenStack-Ansible deploy.

* Nova from git (commit 32b7526b3cf40f40c5430034f75444fc64ac0e04).

* Libvirt + KVM

** Affects: nova
 Importance: Undecided
 Assignee: Paulo Matias (paulo-matias)
 Status: In Progress


** Tags: live-migration

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Paulo Matias (paulo-matias)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615613

Title:
  Live migration always fails when VNC/SPICE is listening at non-local,
  non-catch-all address

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When VNC or SPICE is configured to listen only at a specific IP
  address (e.g. on the management network in an OpenStack-Ansible
  deploy), the check performed by check_can_live_migrate_source always
  fails, because check_can_live_migrate_destination does not return the
  attributes needed to retrieve listen_addrs using
  libvirt_migrate.graphics_listen_addrs.

  
  Steps to reproduce:

   * Deploy OpenStack-Ansible from master.

   * Create an instance.

   * Try to live migrate the instance to another host.

  
  Expected result: live migration should be carried out.

  Actual result (nova-compute.log):

  ERROR oslo_messaging.rpc.server MigrationError: Migration error: Your
  libvirt version does not support the VIR_DOMAIN_XML_MIGRATABLE flag or
  your destination node does not support retrieving listen addresses.
  In order for live migration to work properly, you must configure the
  graphics (VNC and/or SPICE) listen addresses to be either the catch-
  all address (0.0.0.0 or ::) or the local address (127.0.0.1 or ::1).

  
  Environment:

  * Multi-node OpenStack-Ansible deploy.

  * Nova from git (commit 32b7526b3cf40f40c5430034f75444fc64ac0e04).

  * Libvirt + KVM
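
  Until the fix lands, the error message itself describes the accepted
  workaround: bind the graphics consoles to a catch-all or local address.
  A minimal `nova.conf` sketch (section and option names are an assumption
  based on Mitaka-era nova; adjust to your deployment):

  ```ini
  [vnc]
  # Only catch-all (0.0.0.0 or ::) or local (127.0.0.1 or ::1) addresses
  # pass the source-side check when the destination cannot report its
  # listen addresses back to the source.
  vncserver_listen = 0.0.0.0
  ```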

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611533] Re: ml2 transaction_guard broke out of tree plugins

2016-08-22 Thread YAMAMOTO Takashi
** Also affects: dragonflow
   Importance: Undecided
   Status: New

** Also affects: networking-odl
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611533

Title:
  ml2 transaction_guard broke out of tree plugins

Status in DragonFlow:
  New
Status in networking-midonet:
  Fix Released
Status in networking-odl:
  New
Status in neutron:
  Fix Committed

Bug description:
  A recent change [1] broke the l3 plugin for networking-midonet.

  [1] I9924600c57648f7eccaa5abb6979419d9547a2ff

  l3 plugins for networking-odl and dragonflow seem to have similar code
  and would be affected too.

  eg.
  
http://logs.openstack.org/87/199387/36/check/gate-tempest-dsvm-networking-midonet-ml2/ceb0331/logs/q-svc.txt.gz?level=TRACE

  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
[req-af588d02-2944-411f-aa22-eafca4fdabeb 
tempest-TestSecurityGroupsBasicOps-509565194 -] remove_router_interface failed: 
No details.
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 217, in _handle_action
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource ret_value = 
getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 48, in 
wrapper
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource return 
method(*args, **kwargs)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/networking-midonet/midonet/neutron/services/l3/l3_midonet.py", 
line 190, in remove_router_interface
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource context, 
router_id, interface_info)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 1756, in 
remove_router_interface
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource context, 
router_id, interface_info)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 924, in 
remove_router_interface
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource context, 
router_id, subnet_id, device_owner)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/db/l3_db.py", line 901, in 
_remove_interface_by_subnet
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
l3_port_check=False)
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 611, in inner
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource raise 
RuntimeError(_("Method cannot be called within a "
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource RuntimeError: 
Method cannot be called within a transaction.
  2016-08-09 12:31:57.844 25876 ERROR neutron.api.v2.resource 
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource 
[req-c9ae4bf8-2baf-4327-be58-bb3006b4d9c9 
tempest-TestSecurityGroupsBasicOps-2112515119 -] delete failed: No details.
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-08-09 12:31:58.293 25876 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
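
  The RuntimeError at the bottom of the trace comes from neutron's
  transaction guard. Conceptually it is a decorator of this shape; the
  sketch below is a simplified stand-in with hypothetical `_Context` and
  `_Session` helpers, not neutron's actual implementation:

  ```python
  import functools

  def transaction_guard(f):
      # Simplified sketch: refuse to run while the request context's
      # DB session has an open transaction.
      @functools.wraps(f)
      def inner(self, context, *args, **kwargs):
          if getattr(context.session, "is_active", False):
              raise RuntimeError(
                  "Method cannot be called within a transaction.")
          return f(self, context, *args, **kwargs)
      return inner

  class _Session:
      def __init__(self, active):
          self.is_active = active

  class _Context:
      def __init__(self, active):
          self.session = _Session(active)

  class Plugin:
      @transaction_guard
      def delete_port(self, context, port_id):
          return "deleted %s" % port_id
  ```

  Out-of-tree l3 plugins hit the guard because they call methods like
  `delete_port` from inside an already-open transaction.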
  

[Yahoo-eng-team] [Bug 1615577] Re: fwaas db migration faliure with postgres

2016-08-22 Thread YAMAMOTO Takashi
https://review.openstack.org/358541

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: networking-midonet
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615577

Title:
  fwaas db migration faliure with postgres

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  Traceback (most recent call last):
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/test_migrations.py",
 line 602, in test_models_sync
  self.db_sync(self.get_engine())
File "midonet/neutron/tests/unit/db/test_migrations.py", line 102, in 
db_sync
  migration.do_alembic_command(conf, 'upgrade', 'heads')
File 
"/opt/stack/networking-midonet/.tox/py27/src/neutron/neutron/db/migration/cli.py",
 line 108, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/command.py",
 line 174, in upgrade
  script.run_env()
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/script/base.py",
 line 407, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/util/pyfiles.py",
 line 93, in load_python_file
  module = load_module_py(module_id, path)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/util/compat.py",
 line 79, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/env.py",
 line 86, in 
  run_migrations_online()
File 
"/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/env.py",
 line 77, in run_migrations_online
  context.run_migrations()
File "", line 8, in run_migrations
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/environment.py",
 line 797, in run_migrations
  self.get_context().run_migrations(**kw)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/runtime/migration.py",
 line 312, in run_migrations
  step.migration_fn(**kw)
File 
"/opt/stack/networking-midonet/.tox/py27/src/neutron-fwaas/neutron_fwaas/db/migration/alembic_migrations/versions/d6a12e637e28_neutron_fwaas_v2_0.py",
 line 61, in upgrade
  sa.Column('enabled', sa.Boolean))
File "", line 8, in create_table
File "", line 3, in create_table
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/ops.py",
 line 1098, in create_table
  return operations.invoke(op)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/base.py",
 line 318, in invoke
  return fn(self, operation)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/operations/toimpl.py",
 line 101, in create_table
  operations.impl.create_table(table)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/alembic/ddl/impl.py",
 line 193, in create_table
  _ddl_runner=self)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/event/attr.py",
 line 256, in __call__
  fn(*args, **kw)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py",
 line 546, in __call__
  return getattr(self.target, self.name)(*arg, **kw)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/sqltypes.py",
 line 1030, in _on_table_create
  t._on_table_create(target, bind, **kw)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/dialects/postgresql/base.py",
 line 1369, in _on_table_create
  self.create(bind=bind, checkfirst=checkfirst)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/dialects/postgresql/base.py",
 line 1317, in create
  bind.execute(CreateEnumType(self))
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
  return meth(self, multiparams, params)
File 
"/opt/stack/networking-midonet/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py",
 line 68, in _execute_on_connection
  return 

[Yahoo-eng-team] [Bug 1615582] [NEW] Error: Unable to create the server. Unexpected API Error.

2016-08-22 Thread Tytus
Public bug reported:

Description
===
After upgrading a fresh OpenStack Liberty installation to Mitaka on Trusty with
Juju, I am no longer able to create Nova instances.

Steps to reproduce
==
* Install OpenStack Liberty with Juju from the attached "bundle.yaml" file.
* Upgrade OpenStack Liberty to Mitaka by executing the following commands:

juju upgrade-charm keystone
juju upgrade-charm ceilometer
juju upgrade-charm ceilometer-agent
juju upgrade-charm ceph
juju upgrade-charm ceph-osd
juju upgrade-charm cinder
juju upgrade-charm glance
juju upgrade-charm nova-cloud-controller
juju upgrade-charm nova-compute
juju upgrade-charm neutron-api
juju upgrade-charm neutron-gateway
juju upgrade-charm openstack-dashboard
juju set-config ceph source="cloud:trusty-mitaka"
juju set-config ceph-osd source="cloud:trusty-mitaka"
juju set-config keystone openstack-origin="cloud:trusty-mitaka"
juju ssh keystone/0 sudo keystone-manage db_sync
juju set-config ceilometer openstack-origin="cloud:trusty-mitaka"
juju set-config ceilometer-agent openstack-origin="cloud:trusty-mitaka"
juju set-config cinder openstack-origin="cloud:trusty-mitaka"
juju ssh cinder/0 sudo cinder-manage db sync
juju set-config glance openstack-origin="cloud:trusty-mitaka"
juju ssh glance/0 sudo glance-manage db_sync
juju set-config nova-cloud-controller openstack-origin="cloud:trusty-mitaka"
juju ssh nova-cloud-controller/0 sudo nova-manage db sync
juju set-config nova-compute openstack-origin="cloud:trusty-mitaka"
juju set-config neutron-api openstack-origin="cloud:trusty-mitaka"
juju ssh neutron-api/0 sudo neutron-db-manage upgrade heads
juju set-config neutron-gateway openstack-origin="cloud:trusty-mitaka"
juju set-config openstack-dashboard openstack-origin="cloud:trusty-mitaka"
juju ssh ceph/0 sudo reboot
juju ssh ceph/1 sudo reboot
juju ssh ceph/2 sudo reboot
juju ssh ceph-osd/0 sudo service ceph restart
juju ssh ceph-osd/1 sudo service ceph restart
juju ssh ceph-osd/2 sudo service ceph restart
juju ssh keystone/0 sudo reboot
juju ssh keystone/1 sudo reboot
juju ssh keystone/2 sudo reboot
juju ssh ceilometer/0 sudo reboot
juju ssh ceilometer/1 sudo reboot
juju ssh ceilometer/2 sudo reboot
juju ssh ceilometer-agent/0 sudo service ceilometer-agent-compute restart
juju ssh ceilometer-agent/1 sudo service ceilometer-agent-compute restart
juju ssh ceilometer-agent/2 sudo service ceilometer-agent-compute restart
juju ssh cinder/0 sudo reboot
juju ssh cinder/1 sudo reboot
juju ssh cinder/2 sudo reboot
juju ssh glance/0 sudo reboot
juju ssh glance/1 sudo reboot
juju ssh glance/2 sudo reboot
juju ssh nova-cloud-controller/0 sudo reboot
juju ssh nova-cloud-controller/1 sudo reboot
juju ssh nova-cloud-controller/2 sudo reboot
juju ssh nova-compute/0 sudo service nova-compute restart
juju ssh nova-compute/1 sudo service nova-compute restart
juju ssh nova-compute/2 sudo service nova-compute restart
juju ssh neutron-api/0 sudo reboot
juju ssh neutron-api/1 sudo reboot
juju ssh neutron-api/2 sudo reboot
juju ssh neutron-gateway/0 sudo service neutron-dhcp-agent restart
juju ssh neutron-gateway/0 sudo service neutron-lbaas-agent restart
juju ssh neutron-gateway/0 sudo service neutron-metadata-agent restart
juju ssh neutron-gateway/0 sudo service neutron-metering-agent restart
juju ssh neutron-gateway/0 sudo service neutron-openvswitch-agent restart
juju ssh neutron-gateway/0 sudo service neutron-vpn-agent restart
juju ssh nova-compute/0 sudo service neutron-openvswitch-agent restart
juju ssh nova-compute/1 sudo service neutron-openvswitch-agent restart
juju ssh nova-compute/2 sudo service neutron-openvswitch-agent restart
juju ssh openstack-dashboard/0 sudo reboot
juju ssh openstack-dashboard/1 sudo reboot
juju ssh openstack-dashboard/2 sudo reboot

* Attempt to create Nova instance.

Expected result
===
Nova instance being created.

Actual result
=
* Nova instance not being created.
* The following error messages being displayed:
** from GUI:

   Error: Unable to create the server.

** from CLI:

   Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
(HTTP 500) (Request-ID: 
req-c035d518-50c0-4cab-913e-9a5263392a2a)

Environment
===
1. Exact version of OpenStack you are running.

ubuntu@tkurek-maas:~$ juju ssh nova-cloud-controller/0 sudo dpkg-query -l | 
grep nova
ii  nova-api-os-compute   2:13.1.0-0ubuntu1~cloud0
all OpenStack Compute - OpenStack Compute API frontend
ii  nova-cert 2:13.1.0-0ubuntu1~cloud0
all OpenStack Compute - certificate management
ii  nova-common   2:13.1.0-0ubuntu1~cloud0
all OpenStack Compute - common files
ii  nova-conductor2:13.1.0-0ubuntu1~cloud0
all OpenStack Compute - conductor 

[Yahoo-eng-team] [Bug 1501860] Re: OpenStack Services functions should have their names with first letter lowercased

2016-08-22 Thread Vinay Mahuli
** No longer affects: juniperopenstack/trunk

** No longer affects: juniperopenstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501860

Title:
  OpenStack Services functions should have their names with first letter
  lowercased

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  OpenStack Services functions are not constructor functions and therefore
  should not have names with the first letter capitalized; the names should
  start with a lowercase letter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615572] [NEW] db vs migration mismatch in fwaas tables

2016-08-22 Thread YAMAMOTO Takashi
Public bug reported:

AssertionError: Models and migration scripts aren't in sync:
[ ( 'remove_index',
Index('firewall_group_id', Column('firewall_group_id', 
VARCHAR(length=36), ForeignKey(u'firewall_groups_v2.id'), 
ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), unique=True)),
  ( 'remove_index',
Index('port_id', Column('port_id', VARCHAR(length=36), 
ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), unique=True)),
  ( 'remove_fk',
ForeignKeyConstraint(, None, name=u'firewall_group_port_associations_v2_ibfk_2', 
ondelete=u'CASCADE', table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', VARCHAR(length=36), 
ForeignKey(u'firewall_groups_v2.id'), ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), Column('port_id', 
VARCHAR(length=36), ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), schema=None))),
  ( 'remove_fk',
ForeignKeyConstraint(, None, name=u'firewall_group_port_associations_v2_ibfk_1', 
ondelete=u'CASCADE', table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', VARCHAR(length=36), 
ForeignKey(u'firewall_groups_v2.id'), ForeignKey(u'firewall_groups_v2.id'), 
table=, nullable=False), Column('port_id', 
VARCHAR(length=36), ForeignKey(u'ports.id'), ForeignKey(u'ports.id'), 
table=, nullable=False), schema=None))),
  ( 'add_fk',
ForeignKeyConstraint(, None, table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', String(length=36), 
ForeignKey('firewall_groups_v2.id'), 
table=, primary_key=True, nullable=False), 
Column('port_id', String(length=36), ForeignKey('ports.id'), 
table=, primary_key=True, nullable=False), 
schema=None))),
  ( 'add_fk',
ForeignKeyConstraint(, None, table=Table('firewall_group_port_associations_v2', 
MetaData(bind=None), Column('firewall_group_id', String(length=36), 
ForeignKey('firewall_groups_v2.id'), 
table=, primary_key=True, nullable=False), 
Column('port_id', String(length=36), ForeignKey('ports.id'), 
table=, primary_key=True, nullable=False), 
schema=None))),
  [ ( 'modify_type',
  None,
  'firewall_groups_v2',
  'project_id',
  { 'existing_nullable': True,
'existing_server_default': False},
  VARCHAR(length=36),
  String(length=255))],
  [ ( 'modify_type',
  None,
  'firewall_groups_v2',
  'status',
  { 'existing_nullable': True,
'existing_server_default': False},
  VARCHAR(length=255),
  String(length=16))],
  ( 'add_fk',
ForeignKeyConstraint(, None, table=Table('firewall_groups_v2', MetaData(bind=None), 
Column('project_id', String(length=255), table=), 
Column('id', String(length=36), table=, primary_key=True, 
nullable=False, default=ColumnDefault( at 0x7fbfe03bc758>)), 
Column('name', String(length=255), table=), 
Column('description', String(length=1024), table=), 
Column('public', Boolean(), table=), 
Column('ingress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('egress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('admin_state_up', Boolean(), table=), 
Column('status', String(length=16), table=), schema=None))),
  ( 'add_fk',
ForeignKeyConstraint(, None, table=Table('firewall_groups_v2', MetaData(bind=None), 
Column('project_id', String(length=255), table=), 
Column('id', String(length=36), table=, primary_key=True, 
nullable=False, default=ColumnDefault( at 0x7fbfe03bc758>)), 
Column('name', String(length=255), table=), 
Column('description', String(length=1024), table=), 
Column('public', Boolean(), table=), 
Column('ingress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('egress_firewall_policy_id', String(length=36), 
ForeignKey('firewall_policies_v2.id'), table=), 
Column('admin_state_up', Boolean(), table=), 
Column('status', String(length=16), table=), schema=None))),
  [ ( 'modify_type',
  None,
  'firewall_policies_v2',
  'project_id',
  { 'existing_nullable': True,
'existing_server_default': False},
  VARCHAR(length=36),
  String(length=255))],
  [ ( 'modify_nullable',
  None,
  'firewall_rules_v2',
  'ip_version',
  { 'existing_server_default': False,
'existing_type': INTEGER(display_width=11)},
  False,
  True)],
  [ ( 'modify_type',
  None,
  'firewall_rules_v2',
  'project_id',
  { 'existing_nullable': True,
'existing_server_default': False},
  VARCHAR(length=36),
  String(length=255))]]

** Affects: networking-midonet
 Importance: Undecided
 Status: New

** 

[Yahoo-eng-team] [Bug 1615573] [NEW] When a user shows an angular image panel, the request to get images is called twice

2016-08-22 Thread Kenji Ishii
Public bug reported:

Reproduce:
1. Log in to horizon.
2. Display Project -> Compute -> Images.

In this case, horizon requests the image list, but at the moment the request
is issued twice. As far as I confirmed, ResourceTableController is
instantiated twice; as a result, onResourceTypeNameChange is also called
twice.
From an end user's perspective the behavior is fine, but we should avoid the
unnecessary duplicate request.
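The usual way to suppress a duplicated fetch like this is to share one in-flight request per key, so the second identical call reuses the first instead of hitting the API again. A minimal sketch in Python (the Horizon code itself is AngularJS; all names here are illustrative, not the Horizon implementation):

```python
# Minimal request de-duplication sketch: the second caller for the same key
# reuses the stored result of the first instead of issuing a new fetch.
# Names are illustrative; this is not the Horizon implementation.
_inflight = {}

def fetch_once(key, fetch):
    """Run fetch() at most once per key while callers overlap."""
    if key not in _inflight:
        _inflight[key] = fetch()   # real async code would store a promise
    return _inflight[key]

calls = []

def list_images():
    calls.append(1)                # count how often the backend is hit
    return ['img-1', 'img-2']

a = fetch_once('images', list_images)
b = fetch_once('images', list_images)  # de-duplicated: no second backend call
```

The same idea carries over to the Angular controller: cache the promise, not just the resolved data, so overlapping requests collapse into one.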

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1615573

Title:
  When a user shows an angular image panel, the request to get images is
  called twice

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Reproduce:
  1. Log in to horizon.
  2. Display Project -> Compute -> Images.

  In this case, horizon requests the image list, but at the moment the request
  is issued twice. As far as I confirmed, ResourceTableController is
  instantiated twice; as a result, onResourceTypeNameChange is also called
  twice.
  From an end user's perspective the behavior is fine, but we should avoid the
  unnecessary duplicate request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1615573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615545] [NEW] The parameter all_tenants should not be used by non-admins

2016-08-22 Thread Zhenyu Zheng
Public bug reported:

In
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n1099
the parameter "all_tenants" is also treated as an available parameter for
non-admins; this search option should be restricted to admin users.
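The restriction the report asks for can be sketched as a filter that drops admin-only search options before the query runs. This is an illustrative simplification, not the actual nova servers.py code; the function and variable names are made up:

```python
# Hypothetical sketch of restricting the all_tenants search option to admins.
# Not the real nova code; names and structure are illustrative only.
def filter_search_opts(search_opts, is_admin):
    """Drop admin-only options from a non-admin's search options."""
    ADMIN_ONLY = {'all_tenants'}
    if is_admin:
        return dict(search_opts)
    return {k: v for k, v in search_opts.items() if k not in ADMIN_ONLY}

admin_opts = filter_search_opts({'all_tenants': True, 'name': 'vm1'}, True)
user_opts = filter_search_opts({'all_tenants': True, 'name': 'vm1'}, False)
```

(The bug was later marked Invalid, presumably because nova already silently ignores the option for non-admins rather than rejecting it.)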

** Affects: nova
 Importance: Undecided
 Status: Invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615545

Title:
  The parameter all_tenants should not be used by non-admins

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  In
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n1099
  the parameter "all_tenants" is also treated as an available parameter for
  non-admins; this search option should be restricted to admin users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1615545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1502028] Re: cannot attach a volume when using multiple ceph backends

2016-08-22 Thread Kevin Zhao
In Mitaka, when I use nova to attach a volume to an instance (with Ceph), I
get this error:
2016-08-22 03:30:04.879 DEBUG nova.virt.libvirt.guest
[req-af1bf2bd-5346-4b56-b68f-e22b44e32715 demo demo] attach device xml:
(disk device XML stripped by the archive; the volume serial was)
  aa3152f6-db7b-4893-8c5f-ff05de3ac36e

2016-08-22 03:30:04.946 ERROR nova.virt.libvirt.driver
[req-af1bf2bd-5346-4b56-b68f-e22b44e32715 demo demo] [instance:
bfa9db55-e55b-4aaa-9dab-7d2ebf85c009] Failed to attach volume at mountpoint:
/dev/sdb
2016-08-22 03:30:04.946 TRACE nova.virt.libvirt.driver [instance:
bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]
Traceback (most recent call last):
  File "/srv/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1225, in attach_volume
    guest.attach_device(conf, persistent=True, live=live)
  File "/srv/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 296, in attach_device
    self._domain.attachDeviceFlags(device_xml, flags=flags)
  File "/srv/nova/local/lib/python2.7/site-packages/eventlet/tpool.py", line 186, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/srv/nova/local/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/srv/nova/local/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
    six.reraise(c, e, tb)
  File "/srv/nova/local/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
    rv = meth(*args, **kwargs)
  File "/srv/nova/local/lib/python2.7/site-packages/libvirt.py", line 560, in attachDeviceFlags
    if ret == -1: raise libvirtError('virDomainAttachDeviceFlags() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command 'device_add':
Property 'scsi-hd.drive' can't find value 'drive-scsi0-0-0-1'

2016-08-22 03:30:04.949 ERROR nova.virt.block_device
[req-af1bf2bd-5346-4b56-b68f-e22b44e32715 demo demo] [instance:
bfa9db55-e55b-4aaa-9dab-7d2ebf85c009] Driver failed to attach volume
aa3152f6-db7b-4893-8c5f-ff05de3ac36e at /dev/sdb
2016-08-22 03:30:04.949 TRACE nova.virt.block_device [instance:
bfa9db55-e55b-4aaa-9dab-7d2ebf85c009]
Traceback (most recent call last):
  File "/srv/nova/local/lib/python2.7/site-packages/nova/virt/block_device.py", line 274, in attach
    device_type=self['device_type'], encryption=encryption)
  File "/srv/nova/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1236, in attach_volume

[Yahoo-eng-team] [Bug 1613542] Re: tempest.conf doesn't contain $project in [service_available] section

2016-08-22 Thread Hanxi Liu
** Also affects: gnocchi
   Importance: Undecided
   Status: New

** Changed in: gnocchi
 Assignee: (unassigned) => Hanxi Liu (hanxi-liu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1613542

Title:
  tempest.conf doesn't contain $project in [service_available] section

Status in Aodh:
  Fix Released
Status in Ceilometer:
  In Progress
Status in ec2-api:
  In Progress
Status in Gnocchi:
  New
Status in OpenStack Identity (keystone):
  In Progress
Status in Magnum:
  Fix Released

Bug description:
  When generating the tempest conf, the tempest plugins need to register their
config options.
  But for the [service_available] section, ceilometer (and the other mentioned
projects) doesn't register any value, so it is missing in the tempest sample
config.

  Steps to reproduce:

  $ tox -egenconfig
  $ source .tox/genconfig/bin/activate
  $ oslo-config-generator --config-file 
.tox/genconfig/lib/python2.7/site-packages/tempest/cmd/config-generator.tempest.conf
 --output-file tempest.conf.sample

  Now check the [service_available] section from tempest.conf.sample
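The symptom is simply an absent key in the generated sample. A stdlib-only illustration of the check on the [service_available] section (the sample content below is invented for illustration, not a real generated file):

```python
# Check a generated sample config for a project flag under
# [service_available]; the sample text here is invented for illustration.
import configparser
import io

sample = io.StringIO("""\
[service_available]
nova = True
glance = True
""")

conf = configparser.ConfigParser()
conf.read_file(sample)
registered = set(conf['service_available'])
ceilometer_missing = 'ceilometer' not in registered
```

The fix on the plugin side is to register a BoolOpt for the project in the service_available group, so oslo-config-generator can emit it.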

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1613542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread Hanxi Liu
** No longer affects: aodh

** No longer affects: gnocchi

** No longer affects: panko

** No longer affects: python-ceilometerclient

** No longer affects: python-gnocchiclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic Inspector:
  New
Status in Mistral:
  In Progress
Status in Murano:
  New
Status in neutron:
  New
Status in octavia:
  New
Status in openstack-ansible:
  New
Status in python-muranoclient:
  New
Status in tacker:
  New
Status in OpenStack DBaaS (Trove):
  New
Status in vmware-nsx:
  New
Status in zaqar:
  In Progress
Status in Zaqar-ui:
  Fix Released

Bug description:
  Openstack infra now supports upper constraints for releasenotes, cover, venv 
targets.
  tox.ini uses install_command for these targets, which can now be safely 
removed.
  Reference for mail that details this support: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread Hanxi Liu
** Also affects: python-ceilometerclient
   Importance: Undecided
   Status: New

** Changed in: python-ceilometerclient
 Assignee: (unassigned) => Hanxi Liu (hanxi-liu)

** Also affects: panko
   Importance: Undecided
   Status: New

** Changed in: panko
 Assignee: (unassigned) => Hanxi Liu (hanxi-liu)

** Also affects: gnocchi
   Importance: Undecided
   Status: New

** Changed in: gnocchi
 Assignee: (unassigned) => Hanxi Liu (hanxi-liu)

** Also affects: python-gnocchiclient
   Importance: Undecided
   Status: New

** Changed in: python-gnocchiclient
 Assignee: (unassigned) => Hanxi Liu (hanxi-liu)

** Also affects: aodh
   Importance: Undecided
   Status: New

** Changed in: aodh
 Assignee: (unassigned) => Hanxi Liu (hanxi-liu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Aodh:
  New
Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  New
Status in Gnocchi:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic Inspector:
  New
Status in Mistral:
  In Progress
Status in Murano:
  New
Status in neutron:
  New
Status in octavia:
  New
Status in openstack-ansible:
  New
Status in Panko:
  New
Status in python-ceilometerclient:
  New
Status in python-gnocchiclient:
  New
Status in python-muranoclient:
  New
Status in tacker:
  New
Status in OpenStack DBaaS (Trove):
  New
Status in vmware-nsx:
  New
Status in zaqar:
  In Progress
Status in Zaqar-ui:
  Fix Released

Bug description:
  Openstack infra now supports upper constraints for releasenotes, cover, venv 
targets.
  tox.ini uses install_command for these targets, which can now be safely 
removed.
  Reference for mail that details this support: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1614361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614092] Re: SRIOV - PF / VM that assign to PF does not get vlan tag

2016-08-22 Thread Moshe Levi
@Eran,

Please discard my previous comment.
It is not possible to configure the VLAN, because the whole PF is passed to
the guest.
The user has to create the VLAN interface themselves, or via cloud-init.
I am not aware of an easy solution for this.

** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Status: Won't Fix => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614092

Title:
  SRIOV - PF / VM that assign to PF  does not get vlan tag

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  During RFE testing of "Manage SR-IOV PFs as Neutron ports", I found that a
VM booted with a Neutron port of vnic_type direct-physical does not get access
to the DHCP server.
  The problem is that the PF / VM does not get tagged with the internal VLAN.
  Workaround:
  Enter the VM via the console and set up a VLAN interface.


  version RHOS 10 
  python-neutronclient-4.2.1-0.20160721230146.3b1c538.el7ost.noarch
  openstack-neutron-common-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  python-neutron-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  openstack-neutron-fwaas-9.0.0-0.20160720211704.c3e491c.el7ost.noarch
  openstack-neutron-metering-agent-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  openstack-neutron-openvswitch-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  puppet-neutron-9.1.0-0.20160725142451.4061b39.el7ost.noarch
  python-neutron-lib-0.2.1-0.20160726025313.405f896.el7ost.noarch
  openstack-neutron-ml2-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  openstack-neutron-9.0.0-0.20160726001729.6a23add.el7ost.noarch
  openstack-neutron-sriov-nic-agent-9.0.0-0.20160726001729.6a23add.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1614092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615502] [NEW] LBAAS - housekeeping serive does not cleanup stale amphora VMs

2016-08-22 Thread chandrasekaran natarajan
Public bug reported:

1.  Initially there were no spare VMs, since "spare_amphora_pool_size = 0":

    [house_keeping]
    # Pool size for the spare pool
    spare_amphora_pool_size = 0


stack@hlm:~/scratch/ansible/next/hos/ansible$ nova list --all
WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option 
will be removed in novaclient 3.3.0.
+--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+
| ID                                   | Name | Tenant ID                        | Status | Task State | Power State | Networks   |
+--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+
| 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1  | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.5 |
| 7d85921c-e7d9-4b70-9023-0478c66b7e7c | vm2  | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.6 |
+--------------------------------------+------+----------------------------------+--------+------------+-------------+------------+

2.  Change the spare pool size to 1 and restart Octavia-housekeeping
service. Spare Amphora VM gets created as below.


stack@hlm:~/scratch/ansible/next/hos/ansible$  nova list --all
WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option 
will be removed in novaclient 3.3.0.
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
| ID                                   | Name                                         | Tenant ID                        | Status | Task State | Power State | Networks                      |
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
| 6a1101cd-d9d3-4c8e-aa1d-0790f7f4ac8b | amphora-18f4d90f-fe6e-4085-851e-7571cba0c65a | a5e6e87d402847e7b4210e035a0fceec | ACTIVE | -          | Running     | OCTAVIA-MGMT-NET=100.74.25.13 |
| 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.5                    |
| 7d85921c-e7d9-4b70-9023-0478c66b7e7c | vm2                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.6                    |
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+

3.  Now change the spare pool size to 0 and restart Octavia-
housekeeping service. Spare Amphora VM does not get deleted.

stack@hlm:~/scratch/ansible/next/hos/ansible$  nova list --all
WARNING: Option "--all_tenants" is deprecated; use "--all-tenants"; this option 
will be removed in novaclient 3.3.0.
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
| ID                                   | Name                                         | Tenant ID                        | Status | Task State | Power State | Networks                      |
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
| 6a1101cd-d9d3-4c8e-aa1d-0790f7f4ac8b | amphora-18f4d90f-fe6e-4085-851e-7571cba0c65a | a5e6e87d402847e7b4210e035a0fceec | ACTIVE | -          | Running     | OCTAVIA-MGMT-NET=100.74.25.13 |
| 91eef324-0c51-4b91-8a54-e16abdb64e55 | vm1                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.5                    |
| 7d85921c-e7d9-4b70-9023-0478c66b7e7c | vm2                                          | d15f2abc106740499a453260ae6522f3 | ACTIVE | -          | Running     | n1=4.5.6.6                    |
+--------------------------------------+----------------------------------------------+----------------------------------+--------+------------+-------------+-------------------------------+
stack@hlm:~/scratch/ansible/next/hos/ansible$

4.  The requirement is not satisfied as per the config, so housekeeping
should take care of cleaning up the spare
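The expected behaviour (steps 2 and 3 above) amounts to a reconciliation loop that both tops up and trims the spare pool. An illustrative model of that loop, with made-up names; this is not Octavia's housekeeping code:

```python
# Illustrative spare-pool reconciliation: create amphorae when below the
# configured pool size and delete surplus ones when above it (the deletion
# half is what the report says is missing). Not the Octavia implementation.
def reconcile_spares(spares, pool_size, create, delete):
    spares = list(spares)
    while len(spares) < pool_size:   # pool too small: top it up
        spares.append(create())
    while len(spares) > pool_size:   # pool too large: trim the surplus
        delete(spares.pop())
    return spares

deleted = []
result = reconcile_spares(['amphora-18f4d90f'], 0,
                          create=lambda: 'new-amphora',
                          delete=deleted.append)
```

With spare_amphora_pool_size back at 0, a loop like this would remove the leftover spare instead of leaving it running.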

[Yahoo-eng-team] [Bug 1615498] [NEW] VMware: unable to launch an instance on a 'portgroup' provider network

2016-08-22 Thread Gary Kotton
Public bug reported:

The vmware_nsx NSX|V and DVS plugins enable an admin to create a provider
network that points to an existing portgroup.

One is unable to spin up an instance on these networks.

The trace is as follows:

2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] Traceback (most recent call last):
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2514, in 
_build_resources
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] yield resources
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2384, in 
_build_and_run_instance
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] block_device_info=block_device_info)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/driver.py", line 429, in 
spawn
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] admin_password, network_info, 
block_device_info)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 872, in 
spawn
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] metadata)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vmops.py", line 328, in 
build_virtual_machine
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] pci_devices)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 170, in 
get_vif_info
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] is_neutron, vif, pci_info))
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 138, in 
get_vif_dict
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] ref = get_network_ref(session, 
cluster, vif, is_neutron)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 127, in 
get_network_ref
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] network_ref = 
_get_neutron_network(session, network_id, cluster, vif)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/vmwareapi/vif.py", line 117, in 
_get_neutron_network
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] raise 
exception.NetworkNotFoundForBridge(bridge=network_id)
2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 
cd811ab1-9bef-4f07-b107-196276123a6a] NetworkNotFoundForBridge: Network could 
not be found for bridge 7bc687f6-aeff-4420-9663-03e75b7c28e3


This is due to the fact that the validation for the network selection is
done according to the network UUID (which is normally the name of the port
group). In the case of importing an existing port group, this actually
needs to be the name of the portgroup.

The NSX|V and DVS plugins enforce that the name of the network matches
the name of the existing port group.
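The mismatch can be illustrated with a lookup that falls back from the Neutron network UUID to the portgroup name for imported provider networks. This is a hypothetical sketch, not the nova vmwareapi code; the function and data names are made up:

```python
# Hypothetical lookup: plugin-created portgroups are named after the Neutron
# network UUID, while imported provider networks keep their original
# portgroup name, so a UUID-only lookup fails for them.
def get_network_ref(portgroups, network_id, network_name):
    """portgroups maps portgroup name -> opaque vSphere reference."""
    if network_id in portgroups:        # normal plugin-created portgroup
        return portgroups[network_id]
    if network_name in portgroups:      # imported pre-existing portgroup
        return portgroups[network_name]
    raise LookupError(
        'Network could not be found for bridge %s' % network_id)

pgs = {'existing-pg': 'vim-ref-1'}
ref = get_network_ref(pgs, '7bc687f6-aeff-4420-9663-03e75b7c28e3',
                      'existing-pg')
```

Without the name fallback, the first branch misses and the lookup raises, which matches the NetworkNotFoundForBridge trace above.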

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615498

Title:
  VMware: unable to launch an instance on a 'portgroup' provider network

Status in OpenStack Compute (nova):
  New

Bug description:
  The vmware_nsx NSX|V and DVS plugins enable an admin to create a
  provider network that points to an existing portgroup.

  One is unable to spin up an instance on these networks.

  The trace is as follows:

  2016-08-20 01:16:01.289 13629 TRACE nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread lvdongbing
** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Ceilometer:
  In Progress
Status in Cinder:
  New
Status in Designate:
  New
Status in Glance:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic Inspector:
  New
Status in Mistral:
  In Progress
Status in Murano:
  New
Status in neutron:
  New
Status in octavia:
  New
Status in openstack-ansible:
  New
Status in python-muranoclient:
  New
Status in tacker:
  New
Status in OpenStack DBaaS (Trove):
  New
Status in vmware-nsx:
  New
Status in zaqar:
  In Progress
Status in Zaqar-ui:
  Fix Released

Bug description:
  Openstack infra now supports upper constraints for releasenotes, cover, venv 
targets.
  tox.ini uses install_command for these targets, which can now be safely 
removed.
  Reference for mail that details this support: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1614361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611892] Re: Quickbooks Customer Support Phone Number 1-844-887-9236 (SUPPORT TEAM)

2016-08-22 Thread William Grant
** Project changed: nova => null-and-void

** Information type changed from Public to Private

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611892

Title:
  Quickbooks Customer Support Phone Number 1-844-887-9236 (SUPPORT TEAM)

Status in NULL Project:
  Invalid

Bug description:
  quickbooks customer support phone number 1-844-887-9236
  [spam line repeated]

To manage notifications about this bug go to:
https://bugs.launchpad.net/null-and-void/+bug/1611892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611888] Re: 1844-887-9236 Quickbooks Activation Support Phone Number

2016-08-22 Thread William Grant
** Project changed: nova => null-and-void

** Information type changed from Public to Private

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1611888

Title:
  1844-887-9236 Quickbooks Activation Support Phone Number

Status in NULL Project:
  Invalid

Bug description:
  1844-887-9236 Quickbooks Activation Support Phone Number
  [spam line repeated]

[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-08-22 Thread lvdongbing
** Also affects: mistral
   Importance: Undecided
   Status: New

** Changed in: mistral
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Cinder:
  New
Status in Designate:
  New
Status in Glance:
  New
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Ironic Inspector:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in neutron:
  New
Status in octavia:
  New
Status in openstack-ansible:
  New
Status in python-muranoclient:
  New
Status in tacker:
  New
Status in OpenStack DBaaS (Trove):
  New
Status in vmware-nsx:
  New
Status in zaqar:
  In Progress
Status in Zaqar-ui:
  Fix Released

Bug description:
  OpenStack infra now supports upper constraints for the releasenotes,
  cover, and venv targets. tox.ini overrides install_command for these
  targets; the override can now be safely removed.
  Mail detailing this support:
  http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1614361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp