[Yahoo-eng-team] [Bug 1624097] Re: Neutron LBaaS CLI quota show includes l7policy and doesn't include member

2016-09-15 Thread Reedip
You are right.
But this is a client-side issue.
We would be fixing it on the OpenStackClient side, as NeutronClient will be
deprecated soon.

** Project changed: neutron => python-openstackclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624097

Title:
  Neutron LBaaS CLI quota show includes l7policy and doesn't include
  member

Status in python-openstackclient:
  New

Bug description:
  When running devstack and executing "neutron quota-show" it lists an
  l7 policy quota, but does not show a member quota.  However, the help
  message for "neutron quota-update" includes a member quota, but not an
  l7 policy quota.  The show command should not have the l7 policy
  quota, but should have the member quota.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-openstackclient/+bug/1624097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570463] Re: RFE: keystone-manage CLI to allow using syslog & specific log files

2016-09-15 Thread Steve Martinelli
** No longer affects: keystone/newton

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1570463

Title:
  RFE: keystone-manage CLI to allow using syslog & specific log files

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  Currently, the keystone-manage CLI tool writes by default to
  $log_dir/$log_file, which in most cases is /var/log/keystone.log.

  Some actions (like fernet key generation) are dynamic, and having
  them in a separate log file would be a nice feature for operators.
  Supporting syslog would also be very helpful for production
  deployments.
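
  As a minimal illustration (standard library only, not keystone code) of
  what "a separate log file plus syslog support" could look like for a CLI
  tool; the file path and logger name below are assumptions:

  import logging
  import logging.handlers

  LOG = logging.getLogger('keystone-manage')
  LOG.setLevel(logging.INFO)
  # A dedicated log file for dynamic actions such as fernet key generation.
  LOG.addHandler(logging.FileHandler('/tmp/keystone-manage.log'))
  # Syslog over UDP to localhost (SysLogHandler's default target).
  LOG.addHandler(logging.handlers.SysLogHandler())
  LOG.info('fernet key rotation started')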

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1570463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621200] Re: password created_at does not honor timezones

2016-09-15 Thread Steve Martinelli
Because the created_at column was created in the Newton milestone 3
release, it is best to fix this time-sensitive attribute in the same
release. Let's create an rc2 milestone and backport the fix.

** Summary changed:

- MySQLOpportunisticIdentityDriverTestCase.test_change_password fails in UTC+N 
timezone
+ password created_at does not honor timezones

** Description changed:

+ This was initially discovered when running the unit tests for migration
+ 002 in a timezone that is UTC+3.
+ 
  Migration 002 sets the password created_at column to a TIMESTAMP type
  with a server_default=sql.func.now(). There are a couple problems
  that have been uncovered with this change:
  * We cannot guarantee that func.now() will generate a UTC timestamp.
  * For some older versions of MySQL, the TIMESTAMP column will
  automatically be updated when other columns are updated:
  https://dev.mysql.com/doc/refman/5.5/en/timestamp-initialization.html
  
  Steps to reproduce:
  1. dpkg-reconfigure tzdata and select there Europe/Moscow (UTC+3).
  2. Restart mysql
  3. Configure opportunistic tests with the following command in mysql:
  GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest' @'%' identified by 
'openstack_citest' WITH GRANT OPTION;
  4. Run 
keystone.tests.unit.identity.backends.test_sql.MySQLOpportunisticIdentityDriverTestCase.test_change_password
  
  Expected result: test pass
  
  Actual result:
  Traceback (most recent call last):
    File "keystone/tests/unit/identity/backends/test_base.py", line 255, in 
test_change_password
  self.driver.authenticate(user['id'], new_password)
    File "keystone/identity/backends/sql.py", line 65, in authenticate
  raise AssertionError(_('Invalid user / password'))
  AssertionError: Invalid user / password
+ 
+ Aside from the test issue, we should be saving all time related data in
+ DateTime format instead of TIMESTAMP.

** Also affects: keystone/newton
   Importance: Undecided
   Status: New

** Changed in: keystone/newton
   Status: New => In Progress

** Changed in: keystone/newton
 Assignee: (unassigned) => Ron De Rose (ronald-de-rose)

** Changed in: keystone/newton
   Importance: Undecided => High

** Changed in: keystone/newton
Milestone: None => newtone-rc2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1621200

Title:
  password created_at does not honor timezones

Status in OpenStack Identity (keystone):
  In Progress
Status in OpenStack Identity (keystone) newton series:
  In Progress

Bug description:
  This was initially discovered when running the unit tests for
  migration 002 in a timezone that is UTC+3.

  Migration 002 sets the password created_at column to a TIMESTAMP type
  with a server_default=sql.func.now(). There are a couple problems
  that have been uncovered with this change:
  * We cannot guarantee that func.now() will generate a UTC timestamp.
  * For some older versions of MySQL, the TIMESTAMP column will
  automatically be updated when other columns are updated:
  https://dev.mysql.com/doc/refman/5.5/en/timestamp-initialization.html

  Steps to reproduce:
  1. dpkg-reconfigure tzdata and select there Europe/Moscow (UTC+3).
  2. Restart mysql
  3. Configure opportunistic tests with the following command in mysql:
  GRANT ALL PRIVILEGES ON *.* TO 'openstack_citest' @'%' identified by 
'openstack_citest' WITH GRANT OPTION;
  4. Run 
keystone.tests.unit.identity.backends.test_sql.MySQLOpportunisticIdentityDriverTestCase.test_change_password

  Expected result: test pass

  Actual result:
  Traceback (most recent call last):
    File "keystone/tests/unit/identity/backends/test_base.py", line 255, in 
test_change_password
  self.driver.authenticate(user['id'], new_password)
    File "keystone/identity/backends/sql.py", line 65, in authenticate
  raise AssertionError(_('Invalid user / password'))
  AssertionError: Invalid user / password

  Aside from the test issue, we should be saving all time related data
  in DateTime format instead of TIMESTAMP.
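
  A hedged sketch of the direction suggested above, not the actual keystone
  migration (the table and column layout here are assumptions): store
  created_at as a DateTime whose UTC value is generated in Python, rather
  than a TIMESTAMP with a server-side default.

  import datetime

  import sqlalchemy as sa

  password = sa.Table(
      'password', sa.MetaData(),
      sa.Column('id', sa.Integer, primary_key=True),
      # Python-side UTC default; avoids relying on the server's func.now()
      # timezone and on MySQL TIMESTAMP auto-update behaviour.
      sa.Column('created_at', sa.DateTime(), nullable=False,
                default=lambda: datetime.datetime.utcnow()),
  )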

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1621200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510257] Re: Invalid credentials message not translated

2016-09-15 Thread Akihiro Motoki
The cause is that the corresponding message in d.o.a
(django_openstack_auth) was not translated.

** Changed in: horizon
 Assignee: Tony Dunbar (adunbar) => (unassigned)

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1510257

Title:
  Invalid credentials message not translated

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I'm running stable/liberty with a pseudo translation as described at
  http://docs.openstack.org/developer/horizon/contributing.html#running-
  the-pseudo-translation-tool.

  When logging in with an incorrect password, the error message "Invalid
  credentials." is not translated, screen shot attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1510257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624197] [NEW] i18n: Cannot control word order of message in Delete Dialog

2016-09-15 Thread Akihiro Motoki
Public bug reported:

When deleting a resource, a confirmation form is displayed. In the form, we
have a message 'You have selected "net1".', but "You have selected" and the
resource name are concatenated in the Django template. In some languages the
object is placed before the verb, yet translators cannot control the word
order.

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624197

Title:
  i18n: Cannot control word order of message in Delete Dialog

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When deleting a resource, a confirmation form is displayed. In the
  form, we have a message 'You have selected "net1".', but "You have
  selected" and the resource name are concatenated in the Django
  template. In some languages the object is placed before the verb, yet
  translators cannot control the word order.
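
  A minimal sketch of the usual fix, with the standard-library gettext
  standing in for Django's translation helpers (the function name is
  illustrative): put the resource name into a single translatable string
  with a named placeholder so translators can move it wherever their
  language needs it.

  import gettext

  _ = gettext.gettext  # stand-in for Django's translation function

  def confirm_message(name):
      # One translatable string with a named placeholder, instead of
      # concatenating _("You have selected") with the quoted name.
      return _('You have selected "%(name)s".') % {'name': name}

  print(confirm_message('net1'))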

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624119] Re: remove_deleted_instances in RT stacktraces with AttributeError

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371179
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=30a85c8bcfb9ba7a66364df7eb1dc3dfc61341ab
Submitter: Jenkins
Branch:master

commit 30a85c8bcfb9ba7a66364df7eb1dc3dfc61341ab
Author: Dan Smith 
Date:   Thu Sep 15 15:07:32 2016 -0700

Fix object assumption in remove_deleted_instances()

The RT tracks instance dicts for some reason. Fix that and make the
test poke the previous issue.

Change-Id: Iae3b2aa8e655f51f6fffd98dd02fa8e1cd9366c3
Closes-Bug: #1624119


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624119

Title:
  remove_deleted_instances in RT stacktraces with AttributeError

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/92/371092/1/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/62ba450/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-15_21_07_36_354

  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
[req-a2ce4588-a9b5-466d-9f06-62d78a23fd04 - -] Error updating resources for 
node ubuntu-xenial-osic-cloud1-4322012.
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6416, in 
update_available_resource_for_node
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
rt.update_available_resource(context)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 526, in 
update_available_resource
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager return f(*args, 
**kwargs)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 562, in 
_update_available_resource
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
self._update_usage_from_instances(context, instances)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 903, in 
_update_usage_from_instances
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
self.compute_node, self.tracked_instances.values())
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 37, in 
__run_method
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager return 
getattr(self.instance, __name)(*args, **kwargs)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 446, in 
remove_deleted_instances
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager for instance in 
instance_uuids}
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 446, in 
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager for instance in 
instance_uuids}
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager AttributeError: 
'dict' object has no attribute 'uuid'
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 

  Introduced with: https://review.openstack.org/#/c/369147/
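
  A toy reproduction of the failure mode, not nova code: the resource
  tracker handed over plain dicts while the report client's set
  comprehension expected objects with a .uuid attribute.

  class Instance(object):
      def __init__(self, uuid):
          self.uuid = uuid

  tracked_as_dicts = [{'uuid': 'abc'}]
  tracked_as_objects = [Instance('abc')]

  # {inst.uuid for inst in tracked_as_dicts}   # AttributeError: 'dict'
  #                                            # object has no attribute 'uuid'
  print({inst.uuid for inst in tracked_as_objects})  # works with real objects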

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623876] Re: nova is not setting the MTU provided by Neutron

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370681
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=77f546623bb6c0b2a4441940f1740abd45bd3352
Submitter: Jenkins
Branch:master

commit 77f546623bb6c0b2a4441940f1740abd45bd3352
Author: John Garbutt 
Date:   Thu Sep 15 11:29:45 2016 +0100

Override MTU for os_vif attachments

os-vif does not currently respect the neutron-provided MTU, it just uses
the configuration inside os-vif. This is a big regression from mitaka
and liberty.

Long term, this is something os-vif will be able to do by correctly
parsing and acting on the network info. For now we just set the mtu for
a second time once os-vif has done what it wants to do.

Closes-Bug: #1623876

Change-Id: Id4ca38fa1bb84f8cdb5edcd9ccb7acd8c8e9b60c
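
A rough, hypothetical sketch of what "set the mtu for a second time" can
amount to; nova's real code goes through its own privileged helpers, so the
subprocess call and function name below are illustrative only:

import subprocess

def override_mtu(devname, mtu):
    # Equivalent of: ip link set dev <devname> mtu <mtu>
    subprocess.check_call(['ip', 'link', 'set', 'dev', devname,
                           'mtu', str(mtu)])

# e.g. with the values visible in the log below (needs root to actually run):
# override_mtu('tap8dfdfd9b-da', 1450)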


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623876

Title:
  nova is not setting the MTU provided by Neutron

Status in OpenStack Compute (nova):
  Fix Released
Status in os-vif:
  In Progress

Bug description:
  Spotted in a gate grenade job. We can see the neutron MTU is 1450 but the
  MTU set calls in privsep use 1500.
  
http://logs.openstack.org/56/369956/3/gate/gate-grenade-dsvm-neutron-ubuntu-trusty/83daad8/logs/new/screen-n-cpu.txt.gz#_2016-09-15_01_16_57_512

  Relevant log snippet:

  2016-09-15 01:16:57.512 25573 DEBUG nova.network.os_vif_util 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Converting VIF {"profile": {}, 
"ovs_interfaceid": "8dfdfd9b-da9d-4215-abbd-4dffdc48494b", 
"preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": 
[{"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], 
"address": "10.1.0.9"}], "version": 4, "meta": {}, "dns": [], "routes": [], 
"cidr": "10.1.0.0/28", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "10.1.0.1"}}], "meta": {"injected": false, "tenant_id": 
"563ca55619b1402ebf0c792ec604a774", "mtu": 1450}, "id": 
"6e1f0d14-a238-4da9-a2d5-659a0f28479c", "label": 
"tempest-AttachInterfacesTestJSON-32449395-network"}, "devname": 
"tap8dfdfd9b-da", "vnic_type": "normal", "qbh_params": null, "meta": {}, 
"details": {"port_filter": true, "ovs_hybrid_plug": true}, "address": 
"fa:16:3e:38:52:12", "active": fal
 se, "type": "ovs", "id": "8dfdfd9b-da9d-4215-abbd-4dffdc48494b", "qbg_params": 
null} nova_to_osvif_vif /opt/stack/new/nova/nova/network/os_vif_util.py:362
  2016-09-15 01:16:57.513 25573 DEBUG nova.network.os_vif_util 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Converted object 
VIFBridge(active=False,address=fa:16:3e:38:52:12,bridge_name='qbr8dfdfd9b-da',has_traffic_filtering=True,id=8dfdfd9b-da9d-4215-abbd-4dffdc48494b,network=Network(6e1f0d14-a238-4da9-a2d5-659a0f28479c),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap8dfdfd9b-da')
 nova_to_osvif_vif /opt/stack/new/nova/nova/network/os_vif_util.py:374
  2016-09-15 01:16:57.514 25573 DEBUG os_vif 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Plugging vif 
VIFBridge(active=False,address=fa:16:3e:38:52:12,bridge_name='qbr8dfdfd9b-da',has_traffic_filtering=True,id=8dfdfd9b-da9d-4215-abbd-4dffdc48494b,network=Network(6e1f0d14-a238-4da9-a2d5-659a0f28479c),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap8dfdfd9b-da')
 plug /usr/local/lib/python2.7/dist-packages/os_vif/__init__.py:76
  2016-09-15 01:16:57.515 25573 DEBUG oslo.privsep.daemon [-] privsep: 
request[140021949493072]: (3, 'vif_plug_ovs.linux_net.ensure_bridge', 
(u'qbr8dfdfd9b-da',), {}) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2016-09-15 01:16:57.515 25573 DEBUG oslo.privsep.daemon [-] Running cmd 
(subprocess): brctl addbr qbr8dfdfd9b-da out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2016-09-15 01:16:57.517 25573 DEBUG neutronclient.v2_0.client 
[req-6754757c-066c-488a-bc16-6bd451c28cdc 
tempest-ServerActionsTestJSON-1197364963 
tempest-ServerActionsTestJSON-1197364963] GET call to neutron for 
http://127.0.0.1:9696/v2.0/subnets.json?id=2b899e3c-17dc-478a-bd39-91132bb057ab 
used request id req-e82dd927-a0f8-48ba-bbbc-56724a10a29d _append_request_id 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:127
  2016-09-15 01:16:57.521 25573 DEBUG oslo.privsep.daemon [-] CMD "brctl addbr 
qbr8dfdfd9b-da" returned: 0 in 0.005s out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
  2016-09-15 01:16:57.521 25573 DEBUG oslo.privsep.daemon [-] Running 

[Yahoo-eng-team] [Bug 1618987] Re: test_connection_from_diff_address_scope intermittent "Cannot find device" errors

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/371032
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a75ce6850f3954edafbb0c128750e39b57875743
Submitter: Jenkins
Branch:master

commit a75ce6850f3954edafbb0c128750e39b57875743
Author: Kevin Benton 
Date:   Thu Sep 15 01:33:00 2016 -0700

Retry setting mac address on new OVS port 10 times

We've seen several times in the gate an OVS add port call
succeed and then have the mac address set fail to find it
seconds later. The vswitch log frequently shows that it
returns milliseconds later. Until we get to the bottom of
it, we should just retry several times before giving up
and raising.

Closes-Bug: #1618987
Partial-Bug: #1623732
Change-Id: Ia73a9be047093c02f61e3e9ce13d98dccd49dfeb


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618987

Title:
  test_connection_from_diff_address_scope intermittent "Cannot find
  device" errors

Status in neutron:
  Fix Released

Bug description:
  Example TRACE:
  
http://logs.openstack.org/58/360858/4/check/gate-neutron-dsvm-functional/3fb0ba3/console.html#_2016-08-30_23_25_18_854125

  It looks like OVSDB adds an internal OVS port, then when we try to set
  the MTU, the linux network stack cannot find the device.

  I'm under the impression that https://review.openstack.org/#/c/344859/
  was supposed to solve this class of issues.
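
  A plain-Python sketch of the retry pattern described in the commit
  message above; the real patch lives in neutron's OVS agent code and may
  use different helpers, and the one-second delay is an assumption (the
  ten attempts come from the commit title).

  import time

  def set_mac_with_retry(set_mac, attempts=10, delay=1):
      # set_mac is any callable that raises RuntimeError while the port
      # cannot be found yet.
      for attempt in range(1, attempts + 1):
          try:
              return set_mac()
          except RuntimeError:
              if attempt == attempts:
                  raise              # give up and surface the failure
              time.sleep(delay)      # the port usually shows up moments later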

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623990] Re: ml2 "while True" assumes fresh data on transactions

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370920
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1279640da845d4f6bc6274e6706875610e179326
Submitter: Jenkins
Branch:master

commit 1279640da845d4f6bc6274e6706875610e179326
Author: Kevin Benton 
Date:   Wed Sep 14 22:35:02 2016 -0700

Expire DB objects in ML2 infinity loops

We must expire all DB objects in the session to ensure
that we are getting the latest object state from the DB
on each iteration of these 'while True:' loops to ensure
we don't continue forever on stale data.

Closes-Bug: #1623990
Change-Id: I2f4ea6cce5d83c13fd37650d28a7089c5aa9a4c0


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623990

Title:
  ml2 "while True" assumes fresh data on transactions

Status in neutron:
  Fix Released

Bug description:
  both delete_network and delete_subnet in ML2 use the following
  pattern:

  while True:
      with session.begin():
          record = get_thing_from_db()
          # ... logic to determine if out-of-transaction cleaning should be done
          if no_cleaning:
              context.session.delete(record)
              break

      do_cleaning_up()


  The problem here is that it assumes it will get fresh data on each
  iteration. However, due to the identity map pattern followed by
  sqlalchemy[1], new data will not be reflected in the 'record' var
  above on subsequent iterations if the primary key still exists in the
  DB.

  This can lead to infinite loops on delete_subnet if a concurrent
  delete request (or network update) bumps the network revision number.
  This is because the network that is in the session will always have
  the stale revision number even though the lookup methods are called on
  each iteration.

  
  This can be reproduced with the following steps:
  1. Place a pdb breakpoint here: 
https://github.com/openstack/neutron/blob/27928c0ddfb8d62843aa72ecf943d1f46ef30099/neutron/plugins/ml2/plugin.py#L1062
  2. create a network with two subnets (ensure the dhcp agent is running so it 
gets an IP on each)
  3. issue an API subnet delete call
  4. first time breakpoint hits, nothing is looked up yet, so just continue
  5. second time breakpoint hits, issue the following query in mysql: 'update 
standardattributes set revision_number=50;'. this will alter the revision 
number in the DB but the loop won't ever get the new one.
  6. continue
  7. continue
  8. continue
  9. do this as long as you want :)


  
  1. 
http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#is-the-session-a-cache
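
  A small, self-contained illustration of the identity-map behaviour and of
  the fix direction (expiring the session's objects on each iteration so
  re-queries see fresh data). This is not neutron code; the model, the
  in-memory SQLite database and expire_on_commit=False are assumptions made
  only to keep the demonstration short.

  import sqlalchemy as sa
  from sqlalchemy.ext.declarative import declarative_base
  from sqlalchemy.orm import sessionmaker

  Base = declarative_base()

  class Network(Base):
      __tablename__ = 'networks'
      id = sa.Column(sa.Integer, primary_key=True)
      revision = sa.Column(sa.Integer)

  engine = sa.create_engine('sqlite://')
  Base.metadata.create_all(engine)
  Session = sessionmaker(bind=engine, expire_on_commit=False)

  session = Session()
  session.add(Network(id=1, revision=1))
  session.commit()                      # 'revision' stays cached as 1

  # A concurrent update behind the session's back, like step 5 above.
  with engine.begin() as conn:
      conn.execute(sa.text("UPDATE networks SET revision = 50 WHERE id = 1"))

  # Re-querying returns the cached identity-map object: still the stale 1.
  print(session.query(Network).filter_by(id=1).one().revision)

  session.expire_all()                  # what the fix does on each iteration
  print(session.query(Network).filter_by(id=1).one().revision)   # now 50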

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612798] Re: Move db retry logic closer to where DB error occur

2016-09-15 Thread Armando Migliaccio
other patches to be reproposed if they burn.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1612798

Title:
  Move db retry logic closer to where DB error occur

Status in neutron:
  Fix Released

Bug description:
  This has caused weird failure modes where DB errors get masked by
  integrity violation errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1612798/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624170] [NEW] Iteration for metering is stopped because metering-agent tried to operate chain in namespace of deleted router

2016-09-15 Thread Kengo Hobo
Public bug reported:

The metering agent cannot recognize that a router's namespace has been deleted.
Thus, the metering agent sometimes fails to operate on the metering chain with
the following trace, because the router's namespace has already been deleted,
especially when synchronizing information with neutron-server.

I assume that we should simply catch the error and continue the
iteration.

trace in metering-agent.log
=
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent 
[req-4798393c-417b-4af6-bb16-15c7b8cb4b17 - - - - -] Driver 
neutron.services.meterin
g.drivers.iptables.iptables_driver.IptablesMeteringDriver:update_routers 
runtime error
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent Traceback (most recent call 
last):
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/services/metering/agents/metering_agent.py", line
 166, in _invoke_driver
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent return 
getattr(self.metering_driver, func_name)(context, meterings)
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 48, in wra
pper
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent return method(*args, 
**kwargs)
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_drive
r.py", line 109, in update_routers
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent 
self._process_disassociate_metering_label(rm.router)
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_drive
r.py", line 257, in _process_disassociate_metering_label
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent del 
rm.metering_labels[label_id]
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/services/metering/drivers/iptables/iptables_drive
r.py", line 58, in __exit__
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent self.im.apply()
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 437, in apply
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent return self._apply()
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 445, in 
_apply
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent first = 
self._apply_synchronized()
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/agent/linux/iptables_manager.py", line 480, in 
_apply_synchronized
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent save_output = 
self.execute(args, run_as_root=True)
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 138, in execute
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent raise RuntimeError(msg)
2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent RuntimeError: Exit code: 1; 
Stdin: ; Stdout: ; Stderr: Cannot open network namespace 
"qrouter-15e41fb9-44f7-4233-8ff9-12f74f73ade2": No such file or directory
==
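
A hypothetical, self-contained sketch of the catch-and-continue approach
suggested above; the function and router names are illustrative, not the
actual neutron metering driver code:

import logging

logging.basicConfig()
LOG = logging.getLogger(__name__)

def update_chains(router_id):
    # Stand-in for the iptables work that fails once the router's
    # namespace has been removed.
    raise RuntimeError('Cannot open network namespace "qrouter-%s"' % router_id)

def update_routers(router_ids):
    for router_id in router_ids:
        try:
            update_chains(router_id)
        except RuntimeError:
            # The namespace is gone; log it and keep iterating instead of
            # aborting the whole metering sync.
            LOG.warning("Skipping router %s: namespace already deleted",
                        router_id)
            continue

update_routers(["router-1", "router-2"])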

** Affects: neutron
 Importance: Undecided
 Assignee: Kengo Hobo (hobo-kengo)
 Status: New


** Tags: metering

** Tags added: metering

** Changed in: neutron
 Assignee: (unassigned) => Kengo Hobo (hobo-kengo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624170

Title:
  Iteration for metering is stopped because metering-agent tried to
  operate chain in namespace of deleted router

Status in neutron:
  New

Bug description:
  The metering agent cannot recognize that a router's namespace has been
  deleted. Thus, the metering agent sometimes fails to operate on the
  metering chain with the following trace, because the router's namespace
  has already been deleted, especially when synchronizing information
  with neutron-server.

  I assume that we should simply catch the error and continue the
  iteration.

  trace in metering-agent.log
  =
  2016-09-16 00:20:35.118 16578 ERROR 
neutron.services.metering.agents.metering_agent 

[Yahoo-eng-team] [Bug 1596411] Re: Cannot delete a subnet or a port from their detail page

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/336662
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=53bdad4cf7d6a2039c8de3bd4140282d7837
Submitter: Jenkins
Branch:master

commit 53bdad4cf7d6a2039c8de3bd4140282d7837
Author: Luis Daniel Castellanos 
Date:   Fri Jul 1 11:49:35 2016 -0500

Ability to delete network items from their details page

This patch fixes the bug that was preventing users from deleting a port
from the Admin>Networks>Port Details page.

It also fixes a bug preventing users from deleting a subnet from
the Admin>Networks>Port Details page.

The fix affects both Project>Networks & Admin>Networks.

Co-Authored-By: Ankur Gupta 

Change-Id: I408f190584b01e0aadd1af2d4a59438a53426e70
Closes-Bug: #1596411
Closes-Bug: #1596691


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1596411

Title:
  Cannot delete a subnet or a port from their detail page

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Example:

  admin/networks

  When looking at the network table, one is able to click the dropdown
  "Delete <network name>" and it will pop up a message asking whether the
  user wants to delete it.

  Upon confirmation it will delete.

  If you click the link and go into the details page for the
  <network/subnet/port> and attempt to delete, it will pop up a message,
  but omit the name/id.

  Upon confirmation, it will NOT delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1596411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596691] Re: Delete port confirmation is missing port name or ID

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/336662
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=53bdad4cf7d6a2039c8de3bd4140282d7837
Submitter: Jenkins
Branch:master

commit 53bdad4cf7d6a2039c8de3bd4140282d7837
Author: Luis Daniel Castellanos 
Date:   Fri Jul 1 11:49:35 2016 -0500

Ability to delete network items from their details page

This patch fixes the bug that was preventing users from deleting a port
from the Admin>Networks>Port Details page.

It also fixes a bug preventing users from deleting a subnet from
the Admin>Networks>Port Details page.

The fix affects both Project>Networks & Admin>Networks.

Co-Authored-By: Ankur Gupta 

Change-Id: I408f190584b01e0aadd1af2d4a59438a53426e70
Closes-Bug: #1596411
Closes-Bug: #1596691


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1596691

Title:
  Delete port confirmation is missing port name or ID

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Steps to reproduce:

  - This can be reproduced using the latest master code as of 6/27/16.

  - Go to Admin -> System -> Networks -> select a network -> Ports tab
  -> Click on a Port to see the Details -> Click "Delete Port" (from
  Details)

  The "Confirm Delete Port" text says:

  "You have selected: . Please confirm your selection. This action
  cannot be undone."

  So we are missing the port name or ID when running actions on Details
  screens.

  This applies to other resources as well, such as Subnets.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1596691/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617769] Re: Floating IPs cannot be associated in a HA router setup

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/361754
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=2ac3d510147b51c37e343099d4c477d2d7cdd7a3
Submitter: Jenkins
Branch:master

commit 2ac3d510147b51c37e343099d4c477d2d7cdd7a3
Author: Paulo Matias 
Date:   Sun Aug 28 12:33:34 2016 -0300

Add the new device owner flag for HA router interface

Changeset I89b247bdac3aee1e47ee8a1c9f1cb2c385019a51 introduced a new
device owner flag for HA router interfaces which was still not included
in Horizon's ROUTER_INTERFACE_OWNERS.

Change-Id: I2db4218e7351e0017a7a74114be6ac7af803476c
Related-Bug: #1554519
Closes-Bug: #1617769


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1617769

Title:
  Floating IPs cannot be associated in a HA router setup

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Regression introduced by https://github.com/openstack/neutron-
  lib/commit/0440878cc61737cf68a2f5780fc8af57845078fd

  When Neutron is deployed to implement an HA router setup, Horizon is
  not able to list targets for floating IP association. The user
  receives only a "No Ports Available" message and is thus unable to
  associate any floating IP through Horizon.

  This happens because get_reachable_subnets [1] always returns an empty
  set, an issue caused by the absence of the new device owner flag
  for HA router interfaces ("network:ha_router_replicated_interface") in
  ROUTER_INTERFACE_OWNERS [2].

  I will send a patch to gerrit in a few minutes.

  --

  Expected result: The user should be able to associate a floating IP to
  an instance using Horizon.

  Actual result: The user cannot associate any floating IP because "Port
  to be associated" only displays "No Ports Available".

  Steps to reproduce:
  * Deploy the environment
  * Create an external network, a project network, and a router between them
  * Launch an instance
  * Allocate a floating IP
  * Try to associate a floating IP to the instance

  Environment:
  * OpenStack-Ansible, master branch
  * Multi-node setup
  * Default neutron setup (ml2.lxb)

  --
  References

  [1]
  
https://github.com/openstack/horizon/blob/3bd785274636e9f90d11d28428f479e51403c6d1/openstack_dashboard/api/neutron.py#L449

  [2]
  
https://github.com/openstack/horizon/blob/3bd785274636e9f90d11d28428f479e51403c6d1/openstack_dashboard/api/neutron.py#L50
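
  A sketch of the kind of one-line addition the patch describes; the exact
  contents of the tuple in Horizon's openstack_dashboard/api/neutron.py may
  differ from what is listed here:

  ROUTER_INTERFACE_OWNERS = (
      'network:router_interface',
      'network:router_interface_distributed',
      'network:ha_router_replicated_interface',  # the new HA replicated flag
  )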

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1617769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624165] [NEW] neutron_lbaas.conf deprecated error for neutron_lbaas.conf

2016-09-15 Thread Banashankar
Public bug reported:

If we set fatal_deprecations = True in neutron.conf and neutron-lbaas is
enabled, the neutron server fails with the following error:

http://paste.openstack.org/show/577816/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624165

Title:
  neutron_lbaas.conf deprecated error for neutron_lbaas.conf

Status in neutron:
  New

Bug description:
  If we set fatal_deprecations = True in neutron.conf and neutron-lbaas is
  enabled, the neutron server fails with the following error:

  http://paste.openstack.org/show/577816/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624152] [NEW] cc_users_groups changes passwords and locks pre-existing accounts

2016-09-15 Thread Chris Brinker
Public bug reported:

The behavior of cc_users_groups leads to a pre-existing user account
being locked out (by default).

The fix: "return False" from add_user() when the account pre-exists.
Then add a conditional to create_user() to also skip password
manipulation if the account already existed.

if not self.add_user(name, **kwargs):
    LOG.info("User %s already exists, skipping password setting." % name)
    return False

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1624152

Title:
  cc_users_groups changes passwords and locks pre-existing accounts

Status in cloud-init:
  New

Bug description:
  The behavior of cc_users_groups leads to a pre-existing user account
  being locked out (by default).

  The fix: "return False" from add_user() when the account pre-exists.
  Then add a conditional to create_user() to also skip password
  manipulation if the account already existed.

  if not self.add_user(name, **kwargs):
      LOG.info("User %s already exists, skipping password setting." % name)
      return False

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1624152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568191] Re: Pre-defined VirtCPUTopology metadata should have the same name as flavor extra specs in nova

2016-09-15 Thread melanie witt
Marking as Invalid for Nova as Nikhil has commented on fixing it in
Glance.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1568191

Title:
  Pre-defined VirtCPUTopology metadata should have the same name as
  flavor extra specs in nova

Status in Glance:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  If I understand correctly, the concept of flavor extra specs has been
  replaced by metadata.

  On nova/virt/hardware.py, it's checking for some specific flavor extra
  specs in _get_cpu_topology_constraints().

  
https://github.com/openstack/nova/blob/f396826314b9f37eb57151f0dd8a8e3b7d8a8a5c/nova/virt/hardware.py

  These specific flavor extra specs are included in the pre-defined
  metadata json so the user can load them with command "glance-manage
  db_load_metadefs". However, the names are not exactly the same.

  
https://github.com/openstack/glance/blob/1c242032fbb26fed3a82691abb030583b4f8940b/etc/metadefs
  /compute-vcputopology.json

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1568191/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624145] [NEW] Octavia should ignore project_id on API create commands (except load_balancer)

2016-09-15 Thread Stephen Balukoff
Public bug reported:

Right now, the Octavia API allows the specification of the project_id on
the create commands for the following objects:

listener
health_monitor
member
pool

However, all of these objects should be inheriting their project_id from
the ancestor load_balancer object. Allowing the specification of
project_id when we create these objects could lead to a situation where
the descendant object's project_id is different from said object's
ancestor load_balancer project_id.

We don't want to break our API's backward compatibility for at least two
release cycles, so for now we should simply ignore this parameter if
specified (and get it from the load_balancer object in the database
directly), and insert TODO notes in the API code to remove the ability
to specify project_id after a certain openstack release.

We should also update the Octavia driver in neutron_lbaas to stop
specifying the project_id on descendant object creation.

This bug is related to https://bugs.launchpad.net/octavia/+bug/1624113

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: octavia
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624145

Title:
  Octavia should ignore project_id on API create commands (except
  load_balancer)

Status in neutron:
  New
Status in octavia:
  New

Bug description:
  Right now, the Octavia API allows the specification of the project_id
  on the create commands for the following objects:

  listener
  health_monitor
  member
  pool

  However, all of these objects should be inheriting their project_id
  from the ancestor load_balancer object. Allowing the specification of
  project_id when we create these objects could lead to a situation
  where the descendant object's project_id is different from said
  object's ancestor load_balancer project_id.

  We don't want to break our API's backward compatibility for at least
  two release cycles, so for now we should simply ignore this parameter
  if specified (and get it from the load_balancer object in the database
  directly), and insert TODO notes in the API code to remove the ability
  to specify project_id after a certain openstack release.

  We should also update the Octavia driver in neutron_lbaas to stop
  specifying the project_id on descendant object creation.

  This bug is related to https://bugs.launchpad.net/octavia/+bug/1624113
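
  A hypothetical sketch, not the actual Octavia API code, of the behaviour
  described above: drop any client-supplied project_id on a child object
  and inherit it from the parent load balancer instead (the function and
  field names are assumptions).

  def create_listener(request_body, load_balancer):
      body = dict(request_body)
      # TODO: stop accepting project_id at all once the deprecation window ends.
      body.pop('project_id', None)
      body['project_id'] = load_balancer['project_id']
      return body

  lb = {'id': 'lb-1', 'project_id': 'tenant-a'}
  print(create_listener({'name': 'listener-1', 'project_id': 'tenant-b'}, lb))
  # project_id in the result is 'tenant-a', inherited from the load balancer.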

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624145] [NEW] Octavia should ignore project_id on API create commands (except load_balancer)

2016-09-15 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Right now, the Octavia API allows the specification of the project_id on
the create commands for the following objects:

listener
health_monitor
member
pool

However, all of these objects should be inheriting their project_id from
the ancestor load_balancer object. Allowing the specification of
project_id when we create these objects could lead to a situation where
the descendant object's project_id is different from said object's
ancestor load_balancer project_id.

We don't want to break our API's backward compatibility for at least two
release cycles, so for now we should simply ignore this parameter if
specified (and get it from the load_balancer object in the database
directly), and insert TODO notes in the API code to remove the ability
to specify project_id after a certain openstack release.

We should also update the Octavia driver in neutron_lbaas to stop
specifying the project_id on descendant object creation.

This bug is related to https://bugs.launchpad.net/octavia/+bug/1624113

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Octavia should ignore project_id on API create commands (except load_balancer)
https://bugs.launchpad.net/bugs/1624145
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624119] [NEW] remove_deleted_instances in RT stacktraces with AttributeError

2016-09-15 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/92/371092/1/check/gate-tempest-dsvm-neutron-
full-ubuntu-
xenial/62ba450/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-15_21_07_36_354

2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
[req-a2ce4588-a9b5-466d-9f06-62d78a23fd04 - -] Error updating resources for 
node ubuntu-xenial-osic-cloud1-4322012.
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager Traceback (most recent 
call last):
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6416, in 
update_available_resource_for_node
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
rt.update_available_resource(context)
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 526, in 
update_available_resource
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager return f(*args, 
**kwargs)
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 562, in 
_update_available_resource
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
self._update_usage_from_instances(context, instances)
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 903, in 
_update_usage_from_instances
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager self.compute_node, 
self.tracked_instances.values())
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/__init__.py", line 37, in 
__run_method
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager return 
getattr(self.instance, __name)(*args, **kwargs)
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 446, in 
remove_deleted_instances
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager for instance in 
instance_uuids}
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/scheduler/client/report.py", line 446, in 
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager for instance in 
instance_uuids}
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager AttributeError: 'dict' 
object has no attribute 'uuid'
2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 

Introduced with: https://review.openstack.org/#/c/369147/

** Affects: nova
 Importance: High
 Assignee: Jay Pipes (jaypipes)
 Status: Confirmed


** Tags: compute placement

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624119

Title:
  remove_deleted_instances in RT stacktraces with AttributeError

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/92/371092/1/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/62ba450/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-09-15_21_07_36_354

  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
[req-a2ce4588-a9b5-466d-9f06-62d78a23fd04 - -] Error updating resources for 
node ubuntu-xenial-osic-cloud1-4322012.
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager Traceback (most 
recent call last):
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 6416, in 
update_available_resource_for_node
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
rt.update_available_resource(context)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 526, in 
update_available_resource
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager return f(*args, 
**kwargs)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 562, in 
_update_available_resource
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager 
self._update_usage_from_instances(context, instances)
  2016-09-15 21:07:36.354 15826 ERROR nova.compute.manager   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 903, in 

[Yahoo-eng-team] [Bug 1624103] Re: Neutron accepting multiple networks/subnets to be created with same name, network address

2016-09-15 Thread Darek Smigiel
It's not a bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624103

Title:
  Neutron accepting multiple networks/subnets to be created with same
  name, network address

Status in neutron:
  Invalid

Bug description:
  Description:
  
  I am able to create multiple networks/subnets with the same name and network 
address. This behaviour can result in confusion.

  Expected Behaviour:
  --
  Enforcing uniqueness in network/subnet names

  Environment:
  ---
  OpenStack Mitaka on Ubuntu 14.04 server

  Reproduction Steps:
  ---

  Steps from horizon:
  1. Create multiple networks/subnets with same name and same addresses

  Result:
  ---
  neutron net-list
  +--------------------------------------+--------+---------------------------------------------------+
  | id                                   | name   | subnets                                           |
  +--------------------------------------+--------+---------------------------------------------------+
  | e7d3c477-4c37-4871-af51-0567b3d4a03b | intnet | f9ef42c7-f190-4f6b-b7a3-eb85d55c8c1a 2.2.2.0/24   |
  | 182434dc-0c5b-4ba9-9dee-dfefbdf81d23 | intnet | 0d2e21a1-df8a-4d0a-a11d-9b0646a158e3 2.2.2.0/24   |
  | 258095ac-e04d-485e-9e83-956499208da9 | intnet | dfc87017-06dd-4449-a898-9a3c30eb1d81 2.2.2.0/24   |
  | 134aa0b6-dc03-4b82-9e17-150ec5aa5471 | extnet | f25d0a78-033c-4434-9b5d-5779dd72b8f4 10.10.0.0/16 |
  +--------------------------------------+--------+---------------------------------------------------+

  neutron subnet-list
  +--------------------------------------+--------+--------------+------------------------------------------------+
  | id                                   | name   | cidr         | allocation_pools                               |
  +--------------------------------------+--------+--------------+------------------------------------------------+
  | f9ef42c7-f190-4f6b-b7a3-eb85d55c8c1a | intsub | 2.2.2.0/24   | {"start": "2.2.2.2", "end": "2.2.2.254"}       |
  | 0d2e21a1-df8a-4d0a-a11d-9b0646a158e3 | intsub | 2.2.2.0/24   | {"start": "2.2.2.2", "end": "2.2.2.254"}       |
  | dfc87017-06dd-4449-a898-9a3c30eb1d81 |        | 2.2.2.0/24   | {"start": "2.2.2.2", "end": "2.2.2.254"}       |
  | f25d0a78-033c-4434-9b5d-5779dd72b8f4 | extsub | 10.10.0.0/16 | {"start": "10.10.70.30", "end": "10.10.70.40"} |
  +--------------------------------------+--------+--------------+------------------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624109] [NEW] keystone-manage fernet_setup fails silently

2016-09-15 Thread Christophe Balczunas
Public bug reported:

This from the Newton build openstack-
keystone-10.0.0-0.20160905112836.816d260.el7.centos.noarch

I created a /etc/keystone/fernet-keys  directory with 775 permissions
and tried to run keystone-manage fernet_setup:

[root@newton1 fernet-keys]# keystone-manage  fernet_setup 
usage: keystone-manage 
[bootstrap|credential_setup|db_sync|db_version|doctor|domain_config_upload|fernet_rotate|fernet_setup|mapping_populate|mapping_purge|mapping_engine|pki_setup|saml_idp_metadata|token_flush]
 fernet_setup
   [-h] --keystone-user KEYSTONE_USER --keystone-group KEYSTONE_GROUP
keystone-manage 
[bootstrap|credential_setup|db_sync|db_version|doctor|domain_config_upload|fernet_rotate|fernet_setup|mapping_populate|mapping_purge|mapping_engine|pki_setup|saml_idp_metadata|token_flush]
 fernet_setup: error: argument --keystone-user is required


Two issues: the first is that it is asking for --keystone-user and
--keystone-group switches, which are probably not meant to be required
for this command.

If I supply some value for these switches, the command executes but does
nothing (it does not generate startup keys in the directory). I am unable
to test out fernet tokens.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1624109

Title:
  keystone-manage fernet_setup fails silently

Status in OpenStack Identity (keystone):
  New

Bug description:
  This from the Newton build openstack-
  keystone-10.0.0-0.20160905112836.816d260.el7.centos.noarch

  I created a /etc/keystone/fernet-keys  directory with 775 permissions
  and tried to run keystone-manage fernet_setup:

  [root@newton1 fernet-keys]# keystone-manage  fernet_setup 
  usage: keystone-manage 
[bootstrap|credential_setup|db_sync|db_version|doctor|domain_config_upload|fernet_rotate|fernet_setup|mapping_populate|mapping_purge|mapping_engine|pki_setup|saml_idp_metadata|token_flush]
 fernet_setup
 [-h] --keystone-user KEYSTONE_USER --keystone-group KEYSTONE_GROUP
  keystone-manage 
[bootstrap|credential_setup|db_sync|db_version|doctor|domain_config_upload|fernet_rotate|fernet_setup|mapping_populate|mapping_purge|mapping_engine|pki_setup|saml_idp_metadata|token_flush]
 fernet_setup: error: argument --keystone-user is required

  
  Two issues: the first is that it is asking for --keystone-user and
  --keystone-group switches, which are probably not meant to be required
  for this command.

  If I supply some value for these switches, the command executes but
  does nothing (it does not generate startup keys in the directory). I am
  unable to test out fernet tokens.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1624109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624103] [NEW] Neutron accepting multiple networks/subnets to be created with same name, network address

2016-09-15 Thread kiran-vemuri
Public bug reported:

Description:

I am able to create multiple networks/subnets with the same name and network 
address. This behaviour can result in confusion.

Expected Behaviour:
--
Enforcing uniqueness in network/subnet names

Environment:
---
OpenStack Mitaka on Ubuntu 14.04 server

Reproduction Steps:
---

Steps from horizon:
1. Create multiple networks/subnets with same name and same addresses

Result:
---
neutron net-list
+--++---+
| id   | name   | subnets   
|
+--++---+
| e7d3c477-4c37-4871-af51-0567b3d4a03b | intnet | 
f9ef42c7-f190-4f6b-b7a3-eb85d55c8c1a 2.2.2.0/24   |
| 182434dc-0c5b-4ba9-9dee-dfefbdf81d23 | intnet | 
0d2e21a1-df8a-4d0a-a11d-9b0646a158e3 2.2.2.0/24   |
| 258095ac-e04d-485e-9e83-956499208da9 | intnet | 
dfc87017-06dd-4449-a898-9a3c30eb1d81 2.2.2.0/24   |
| 134aa0b6-dc03-4b82-9e17-150ec5aa5471 | extnet | 
f25d0a78-033c-4434-9b5d-5779dd72b8f4 10.10.0.0/16 |
+--++---+

neutron subnet-list
+--++--++
| id   | name   | cidr | 
allocation_pools   |
+--++--++
| f9ef42c7-f190-4f6b-b7a3-eb85d55c8c1a | intsub | 2.2.2.0/24   | {"start": 
"2.2.2.2", "end": "2.2.2.254"}   |
| 0d2e21a1-df8a-4d0a-a11d-9b0646a158e3 | intsub | 2.2.2.0/24   | {"start": 
"2.2.2.2", "end": "2.2.2.254"}   |
| dfc87017-06dd-4449-a898-9a3c30eb1d81 || 2.2.2.0/24   | {"start": 
"2.2.2.2", "end": "2.2.2.254"}   |
| f25d0a78-033c-4434-9b5d-5779dd72b8f4 | extsub | 10.10.0.0/16 | {"start": 
"10.10.70.30", "end": "10.10.70.40"} |
+--++--++
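
Since Neutron itself does not enforce name uniqueness, the usual workaround is a
client-side guard before creating. A minimal sketch, assuming python-neutronclient
and the usual OS_* environment variables; the helper name is made up for the
illustration:

    # Hypothetical client-side guard: refuse to create a network whose name
    # already exists. Assumes python-neutronclient and OS_* credentials.
    import os
    from neutronclient.v2_0 import client as neutron_client

    def create_network_if_unique(name):
        neutron = neutron_client.Client(
            username=os.environ['OS_USERNAME'],
            password=os.environ['OS_PASSWORD'],
            tenant_name=os.environ['OS_TENANT_NAME'],
            auth_url=os.environ['OS_AUTH_URL'])
        # 'name' is accepted as a query filter on the networks collection.
        existing = neutron.list_networks(name=name)['networks']
        if existing:
            raise ValueError('network %r already exists: %s'
                             % (name, [n['id'] for n in existing]))
        return neutron.create_network({'network': {'name': name}})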

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624103

Title:
  Neutron accepting multiple networks/subnets to be created with same
  name, network address

Status in neutron:
  New

Bug description:
  Description:
  
  I am able to create multiple networks/subnets with the same name and network 
address. This behaviour can result in confusion.

  Expected Behaviour:
  --
  Enforcing uniqueness in network/subnet names

  Environment:
  ---
  OpenStack Mitaka on Ubuntu 14.04 server

  Reproduction Steps:
  ---

  Steps from horizon:
  1. Create multiple networks/subnets with same name and same addresses

  Result:
  ---
  neutron net-list
  
+--++---+
  | id   | name   | subnets 
  |
  
+--++---+
  | e7d3c477-4c37-4871-af51-0567b3d4a03b | intnet | 
f9ef42c7-f190-4f6b-b7a3-eb85d55c8c1a 2.2.2.0/24   |
  | 182434dc-0c5b-4ba9-9dee-dfefbdf81d23 | intnet | 
0d2e21a1-df8a-4d0a-a11d-9b0646a158e3 2.2.2.0/24   |
  | 258095ac-e04d-485e-9e83-956499208da9 | intnet | 
dfc87017-06dd-4449-a898-9a3c30eb1d81 2.2.2.0/24   |
  | 134aa0b6-dc03-4b82-9e17-150ec5aa5471 | extnet | 
f25d0a78-033c-4434-9b5d-5779dd72b8f4 10.10.0.0/16 |
  
+--++---+

  neutron subnet-list
  
+--++--++
  | id   | name   | cidr | 
allocation_pools   |
  
+--++--++
  | f9ef42c7-f190-4f6b-b7a3-eb85d55c8c1a | intsub | 2.2.2.0/24   | {"start": 
"2.2.2.2", "end": "2.2.2.254"}   |
  | 0d2e21a1-df8a-4d0a-a11d-9b0646a158e3 | intsub | 2.2.2.0/24   | {"start": 
"2.2.2.2", "end": "2.2.2.254"}   |
  | dfc87017-06dd-4449-a898-9a3c30eb1d81 || 2.2.2.0/24   | {"start": 
"2.2.2.2", "end": "2.2.2.254"}   |
  | f25d0a78-033c-4434-9b5d-5779dd72b8f4 | extsub | 10.10.0.0/16 | {"start": 
"10.10.70.30", "end": "10.10.70.40"} |
  
+--++--++

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1517839] Re: Make CONF.set_override with parameter enforce_type=True by default

2016-09-15 Thread Julien Danjou
** No longer affects: panko

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517839

Title:
  Make CONF.set_override with parameter enforce_type=True by default

Status in Aodh:
  In Progress
Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in cloudkitty:
  Fix Released
Status in Designate:
  Fix Released
Status in Freezer:
  In Progress
Status in Glance:
  Invalid
Status in glance_store:
  In Progress
Status in Gnocchi:
  In Progress
Status in heat:
  Fix Released
Status in Ironic:
  Triaged
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kolla:
  New
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  New
Status in oslo.config:
  In Progress
Status in oslo.messaging:
  Fix Released
Status in Quark: Money Reinvented:
  New
Status in Rally:
  Fix Released
Status in senlin:
  New
Status in tacker:
  In Progress
Status in watcher:
  Fix Released

Bug description:
  1. Problems:
     oslo_config provides the method CONF.set_override[1]; developers usually
     use it to change a config option's value in tests, which is convenient.
     By default the parameter enforce_type=False, so it does not check the
     type or value of the override. If enforce_type=True is set, it will check
     the override's type and value. In production (runtime) code, oslo_config
     always checks a config option's value.
     In short, we test and run code in different ways, so there is a gap: a
     config option with a wrong type or an invalid value can pass tests when
     enforce_type=False in consuming projects. That means some invalid or
     wrong tests are in our code base.

     [1]
  https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L2173

  2. Proposal
     1) Fix violations when enforce_type=True in each project.

    2) Make method CONF.set_override with  enforce_type=True by default
  in oslo_config

   You can find more details and comments  in
  https://etherpad.openstack.org/p/enforce_type_true_by_default

  3. How to find violations in your projects.

     1. Run tox -e py27

     2. then modify oslo.config with enforce_type=True
    cd .tox/py27/lib64/python2.7/site-packages/oslo_config
    edit cfg.py with enforce_type=True

  -def set_override(self, name, override, group=None, enforce_type=False):
  +def set_override(self, name, override, group=None, enforce_type=True):

    3. Run tox -e py27 again, you will find violations.
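
  To see the kind of violation this uncovers, here is a minimal sketch (not from
  the bug report), assuming an oslo.config release of that era where
  set_override still accepts the enforce_type keyword:

    # An IntOpt overridden with a non-numeric string is accepted silently
    # with enforce_type=False but rejected with enforce_type=True.
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opt(cfg.IntOpt('workers', default=1))

    CONF.set_override('workers', 'ten', enforce_type=False)
    print(CONF.workers)   # 'ten' -- a wrong type slipped into the test

    try:
        CONF.set_override('workers', 'ten', enforce_type=True)
    except ValueError as exc:
        print('rejected: %s' % exc)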

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1517839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624097] [NEW] Neutron LBaaS CLI quota show includes l7policy and doesn't include member

2016-09-15 Thread Trevor Vardeman
Public bug reported:

When running devstack and executing "neutron quota-show" it lists an l7
policy quota, but does not show a member quota.  However, the help
message for "neutron quota-update" includes a member quota, but not an
l7 policy quota.  The show command should not have the l7 policy quota,
but should have the member quota.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624097

Title:
  Neutron LBaaS CLI quota show includes l7policy and doesn't include
  member

Status in neutron:
  New

Bug description:
  When running devstack and executing "neutron quota-show" it lists an
  l7 policy quota, but does not show a member quota.  However, the help
  message for "neutron quota-update" includes a member quota, but not an
  l7 policy quota.  The show command should not have the l7 policy
  quota, but should have the member quota.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623422] Re: delete_subnet update_port needs to catch SubnetNotFound

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/369956
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=33afa82517dfe45a0e40d0f3ba738bc50bf04648
Submitter: Jenkins
Branch:master

commit 33afa82517dfe45a0e40d0f3ba738bc50bf04648
Author: Kevin Benton 
Date:   Tue Sep 13 19:38:58 2016 -0700

Capture SubnetNotFound from update_port call

In delete_subnet, we update the remaining ports with fixed_ips
of the IPs remaining from the undeleted subnets. The issue with
this is that two concurrent subnet_delete calls can result in
trying to update the remaining port with a fixed IP from a
subnet already deleted. This results in a SubnetNotFound exception,
which aborts the whole deletion.

This patch captures these exceptions and 'continues' to restart
the iteration and re-query to get the latest fixed IPs of the
ports to automatically update.

Closes-Bug: #1623422
Change-Id: I4e299217eb2aabbe572f24e4328272873369da0f


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623422

Title:
  delete_subnet update_port needs to catch SubnetNotFound

Status in neutron:
  Fix Released

Bug description:
  The code that updates the ports on a subnet to remove the fixed IPs of
  the subnets being deleted needs to capture SubnetNotFound. A
  concurrent deletion of another subnet on the same network will result
  in the update_port call trying to set fixed IPs containing a subnet
  which no longer exists, which results in SubnetNotFound.

  This error was spotted in a Rally test:
  http://logs.openstack.org/11/369511/3/check/gate-rally-dsvm-neutron-
  rally/b188655/logs/screen-q-svc.txt.gz#_2016-09-14_07_20_06_111


  The relevant request ID is req-befec696-04be-4b2c-94b4-8abb6eb195e0,
  the paste for which is here: http://paste.openstack.org/show/576077/

  
  It tried to update the port, got a concurrent operation error (due to another 
proc updating the same port to remove another subnet), and on retry it got a 
resourcenotfound. PortNotFound is already captured in delete_subnet, and the 
network has to exist for the port to still exist, so the only remaining thing 
to be missing is the Subnet of an ID being requested.
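
  The merged change follows a catch-and-retry shape roughly like the sketch
  below; this is a simplified illustration, not the actual Neutron code. The two
  callables stand in for the plugin's DB helpers, and the import assumes
  neutron-lib's SubnetNotFound exception:

    from neutron_lib import exceptions as n_exc

    def drop_deleted_subnet_ips(get_ports, update_port_fixed_ips,
                                deleted_subnet_id):
        # get_ports() returns the ports still referencing the network;
        # update_port_fixed_ips(port_id, fixed_ips) persists the new list.
        while True:
            for port in get_ports():
                keep = [ip for ip in port['fixed_ips']
                        if ip['subnet_id'] != deleted_subnet_id]
                try:
                    update_port_fixed_ips(port['id'], keep)
                except n_exc.SubnetNotFound:
                    # A concurrent delete removed another subnet whose IP we
                    # were about to keep; re-query and start over instead of
                    # aborting the whole subnet deletion.
                    break
            else:
                return  # every remaining port was updated successfully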

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623422/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615020] Re: Test that expand, migrate, contract repos have the same number of steps

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370370
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=b52e0de37cee910ef190a03dda8a2851a877ea99
Submitter: Jenkins
Branch:master

commit b52e0de37cee910ef190a03dda8a2851a877ea99
Author: Dolph Mathews 
Date:   Wed Sep 14 18:43:11 2016 +

Test that rolling upgrade repos are in lockstep

Change-Id: I249e8580d0d5dc53b54619d41a145ef9f0166d19
Closes-Bug: 1615020


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1615020

Title:
  Test that expand, migrate, contract repos have the same number of
  steps

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  To ensure that each of the 3 new migration repositories contain the
  same number of steps, we should introduce a test to catch developers
  attempting to introduce a new step to any one of the new repos,
  without introducing no-op steps to all of them.

  Upon failure, the test should explain to developers that they need to
  add no-op migrations to the other repositories, and the motivation for
  doing so (e.g. making it easy for us to prevent deployers from
  breaking their databases).
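
  A test along these lines can be sketched as follows; the repository paths and
  file-name pattern are assumptions for the illustration and may not match
  keystone's actual layout:

    import glob
    import os
    import unittest

    REPO_ROOT = 'keystone/common/sql'
    PHASES = ('expand_repo', 'data_migration_repo', 'contract_repo')

    def _count_steps(phase):
        # Count numbered migration scripts, e.g. 001_foo.py, in each repo.
        pattern = os.path.join(REPO_ROOT, phase, 'versions', '[0-9]*_*.py')
        return len(glob.glob(pattern))

    class TestMigrationReposInLockstep(unittest.TestCase):
        def test_same_number_of_steps(self):
            counts = {phase: _count_steps(phase) for phase in PHASES}
            self.assertEqual(
                1, len(set(counts.values())),
                'Migration repos are out of lockstep: %s. Add no-op '
                'migrations to the repos that are behind.' % counts)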

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1615020/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623838] Re: Nova requires netaddr >= 0.7.12 which is not enough

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370846
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=bad2497b70605b66ada28960d9ccd9796a0867ee
Submitter: Jenkins
Branch:master

commit bad2497b70605b66ada28960d9ccd9796a0867ee
Author: Sean Dague 
Date:   Thu Sep 15 09:49:08 2016 -0400

Update minimum requirement for netaddr

Depends-On: Iffede36c4d9fb3b27d94c9497980504a70a435ba

Change-Id: I75770f3ed4fbf9f48fde210ece90662091bc0c23
Closes-Bug: #1623838


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623838

Title:
  Nova requires netaddr >= 0.7.12 which is not enough

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In this commit:
  
https://github.com/openstack/nova/commit/4647f418afb9ced223c089f9d49cd686eccae9e2

  nova starts using the modified_eui64() function which isn't available
  in netaddr 0.7.12. It is available in version 0.7.18, which is what
  the upper-constraints.txt has. I haven't investigated (yet) when the
  new method was introduced in netaddr, though in all reasonableness, I'd
  strongly suggest pushing for an upgrade of global-requirements.txt to
  0.7.18 (which is what we've been gating on for a long time).

  At the packaging level, it doesn't seem to be a big problem, as 0.7.18
  is what Xenial has.

  The other solution would be to remove the call of modified_eui64() in
  Nova, but this looks like a riskier option to me, so close to the
  release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611102] Re: EndpointPolicy backend implementation doesn't inherit its driver interface

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/352586
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=97585c15a62eb081cce5489bf4b91f0ee0334fcb
Submitter: Jenkins
Branch:master

commit 97585c15a62eb081cce5489bf4b91f0ee0334fcb
Author: Harini 
Date:   Tue Aug 9 01:40:18 2016 +0530

EndpointPolicy driver doesn't inherit interface

Added the driver interface 'base.EndpointPolicyDriverV8' as super class
of the sql driver implementation.

Removed unused methods from driver interface and added release notes.

Change-Id: I198dcbda7591e0dafb1da3a72e3f32b258c0e299
Closes-Bug: #1611102


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1611102

Title:
  EndpointPolicy backend implementation doesn't inherit its driver
  interface

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Every backend implemented in a sql.py file inherits the respective driver
  interface class. But the EndpointPolicy backend implementation defined in
  'keystone/endpoint_policy/backends/sql.py' is anomalous: it doesn't inherit
  its driver interface. So, fix it.
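
  Per the commit message, the fix is simply to make the SQL backend subclass
  the abstract driver interface, so a missing method is caught at instantiation
  time. A minimal sketch of that pattern, with illustrative names rather than
  keystone's exact classes:

    import abc

    class EndpointPolicyDriverBase(abc.ABC):
        # Stand-in for base.EndpointPolicyDriverV8.
        @abc.abstractmethod
        def create_policy_association(self, policy_id, endpoint_id=None,
                                      service_id=None, region_id=None):
            raise NotImplementedError()

    class EndpointPolicy(EndpointPolicyDriverBase):
        # The SQL backend now inherits the interface instead of plain object.
        def create_policy_association(self, policy_id, endpoint_id=None,
                                      service_id=None, region_id=None):
            pass  # the real implementation writes the association row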

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1611102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624079] [NEW] KeyError on "subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]"

2016-09-15 Thread Richard Theis
Public bug reported:

The networking-ovn gate-tempest-dsvm-networking-ovn job is seeing
KeyErrors on "subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]".  It
is unclear yet if this is contributing to the recent job failures.

LogStash Query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22subnet_dhcp_ip%20%3D%20subnet_to_interface_ip%5Bsubnet.id%5D%5C%22
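
The failing line indexes a dict built elsewhere, so one way to harden it is to
tolerate subnets that have no interface IP yet; a sketch only, not necessarily
the fix networking-ovn ended up with:

    def dhcp_ips_for(subnets, subnet_to_interface_ip):
        # Skip subnets with no DHCP/metadata interface IP instead of raising
        # KeyError for them.
        result = {}
        for subnet in subnets:
            subnet_dhcp_ip = subnet_to_interface_ip.get(subnet.id)
            if subnet_dhcp_ip is None:
                continue
            result[subnet.id] = subnet_dhcp_ip
        return result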

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624079

Title:
  KeyError on "subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]"

Status in neutron:
  New

Bug description:
  The networking-ovn gate-tempest-dsvm-networking-ovn job is seeing
  KeyErrors on "subnet_dhcp_ip = subnet_to_interface_ip[subnet.id]".  It
  is unclear yet if this is contributing to the recent job failures.

  LogStash Query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22subnet_dhcp_ip%20%3D%20subnet_to_interface_ip%5Bsubnet.id%5D%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624070] [NEW] domain context filtering is broken

2016-09-15 Thread David Lyle
Public bug reported:

testing on master, setting the domain context had no effect on the
content of the identity panels.

expected result: once the domain context is set, only items in that
domain are shown in the identity panels.

actual: all items still displayed regardless of context.

** Affects: horizon
 Importance: High
 Status: New


** Tags: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1624070

Title:
  domain context filtering is broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  testing on master, setting the domain context had no effect on the
  content of the identity panels.

  expected result: once the domain context is set, only items in that
  domain are shown in the identity panels.

  actual: all items still displayed regardless of context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1624070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544861] Re: LBaaS: connection limit does not work with HA Proxy

2016-09-15 Thread Michael Johnson
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: octavia
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544861

Title:
  LBaaS: connection limit does not work with HA Proxy

Status in neutron:
  Fix Released
Status in octavia:
  Fix Released

Bug description:
  connection limit does not work with HA Proxy.

  It is set in the frontend section, like:

  frontend 75a12b66-9d2a-4a68-962e-ec9db8c3e2fb
  option httplog
  capture cookie JSESSIONID len 56
  bind 192.168.10.20:80
  mode http
  default_backend fb8ba6e3-71a4-47dd-8a83-2978bafea6e7
  maxconn 5
  option forwardfor

  But the above configuration does not work.
  It should be set in the global section, like:

  global
  daemon
  user nobody
  group haproxy
  log /dev/log local0
  log /dev/log local1 notice
  stats socket 
/var/lib/neutron/lbaas/fb8ba6e3-71a4-47dd-8a83-2978bafea6e7/sock mode 0666 
level user
  maxconn 5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624065] [NEW] Multiple security groups with the same name are created

2016-09-15 Thread kiran-vemuri
Public bug reported:

Description:

I am able to create multiple security groups with the same name and same 
description. This behaviour can result in confusion.

Expected Behaviour:
--
Enforcing uniqueness in security group names 


Environment:
---
OpenStack Mitaka on Ubuntu 14.04 server

Reproduction Steps:
---

Steps from horizon:
1. Create multiple security groups with same name and same description

Steps from cli:
1. Run the command "nova secgroup-create test test" multiple times


Result: 
--
nova secgroup-list
+--+-++
| Id   | Name| Description|
+--+-++
| 7708f691-7107-43d3-87f4-1d3e672dbe8d | default | Default security group |
| 60d730cc-476b-4d0b-8fbe-f06f09a0b9cd | test| test   |
| 63481312-0f6c-4575-af37-3941e9864cfb | test| test   |
| 827a8642-6b14-47b7-970d-38b8136f62a8 | test| test   |
| 827c33b5-ee4b-43eb-867d-56b3c858664c | test| test   |
| 95607bc1-43a4-4105-9aad-f072ac330499 | test| test   |
+--+-++

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624065

Title:
  Multiple security groups with the same name are created

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:
  
  I am able to create multiple security groups with the same name and same 
description. This behaviour can result in confusion.

  Expected Behaviour:
  --
  Enforcing uniqueness in security group names 

  
  Environment:
  ---
  OpenStack Mitaka on Ubuntu 14.04 server

  Reproduction Steps:
  ---

  Steps from horizon:
  1. Create multiple security groups with same name and same description

  Steps from cli:
  1. Run the command "nova secgroup-create test test" multiple times

  
  Result: 
  --
  nova secgroup-list
  +--+-++
  | Id   | Name| Description|
  +--+-++
  | 7708f691-7107-43d3-87f4-1d3e672dbe8d | default | Default security group |
  | 60d730cc-476b-4d0b-8fbe-f06f09a0b9cd | test| test   |
  | 63481312-0f6c-4575-af37-3941e9864cfb | test| test   |
  | 827a8642-6b14-47b7-970d-38b8136f62a8 | test| test   |
  | 827c33b5-ee4b-43eb-867d-56b3c858664c | test| test   |
  | 95607bc1-43a4-4105-9aad-f072ac330499 | test| test   |
  +--+-++

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624064] [NEW] Bump up Glance API minor version to 2.4

2016-09-15 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/350809
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit a2b329c997b41632b29471b9ddacb7b19adfdb0d
Author: Nikhil Komawar 
Date:   Wed Aug 3 18:17:47 2016 -0400

Bump up Glance API minor version to 2.4

This is the minor version bump for Newton after some of the API
impacting changes occur.

APIImpact
UpgradeImpact
DocImpact

Depends-On: Ie463e2f30db94cde7716c83a94ec2fb0c0658c91

Change-Id: I5d1c4380682efa4c15ff0f294f269c800fe6762a

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: doc glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1624064

Title:
  Bump up Glance API minor version to 2.4

Status in Glance:
  New

Bug description:
  https://review.openstack.org/350809
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit a2b329c997b41632b29471b9ddacb7b19adfdb0d
  Author: Nikhil Komawar 
  Date:   Wed Aug 3 18:17:47 2016 -0400

  Bump up Glance API minor version to 2.4
  
  This is the minor version bump for Newton after some of the API
  impacting changes occur.
  
  APIImpact
  UpgradeImpact
  DocImpact
  
  Depends-On: Ie463e2f30db94cde7716c83a94ec2fb0c0658c91
  
  Change-Id: I5d1c4380682efa4c15ff0f294f269c800fe6762a

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1624064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624052] [NEW] Evacuation fails with VirtualInterfaceCreateException

2016-09-15 Thread Artom Lifshitz
Public bug reported:

Description:

With Neutron, evacuating an instance results in a
VirtualInterfaceCreateException and the evacuation fails.

Steps to reproduce:

1. Boot a VM with a Neutron NIC.
2. Cause the underlying host to be down, for example by stopping the compute 
service.
3. Evacuate the VM.
4. The evacuation will fail because the destination compute host waits for a 
network-vif-plugged event from Neutron, which it never receives because Nova 
sends the event to the source compute.

Environment: Neutron, KVM

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624052

Title:
  Evacuation fails with VirtualInterfaceCreateException

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:

  With Neutron, evacuating an instance results in a
  VirtualInterfaceCreateException and the evacuation fails.

  Steps to reproduce:

  1. Boot a VM with a Neutron NIC.
  2. Cause the underlying host to be down, for example by stopping the compute 
service.
  3. Evacuate the VM.
  4. The evacuation will fail because the destination compute host waits for a 
network-vif-plugged event from Neutron, which it never receives because Nova 
sends the event to the source compute.

  Environment: Neutron, KVM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465758] Re: [RFE] Add the ability to create lb vip and member with network_id

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/363302
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lbaas/commit/?id=4455759f4506a43d70f811a32ee60b13af6afd8d
Submitter: Jenkins
Branch:master

commit 4455759f4506a43d70f811a32ee60b13af6afd8d
Author: Cedric Shock 
Date:   Mon Aug 29 23:46:55 2016 +

Allow creating loadbalancer with network_id

Create loadbalancer accepts either a vip_subnet_id
or vip_network_id. If vip_network_id is provided the
vip port is created on that network using the default
neutron behavior. If neutron assigns multiple fixed ips,
an ipv4 addresses is chosen as the vip in preference to
ipv6 addresses.

-

Who would use the feature?
LBaaS users on a network with multiple subnets

Why use the feature?
Large deployments may have many subnets to allocate
vip addresses. Many of these subnets might have
no addresses remaining to allocate. Creating a
loadbalancer by network selects a subnet with an
available address.

What is the exact usage for the feature?

POST /lbaas/loadbalancers
Host: lbaas-service.cloudX.com:8651
Content-Type: application/json
Accept: application/json
X-Auth-Token:887665443383838

{
"loadbalancer": {
"name": "loadbalancer1",
"description": "simple lb",
"tenant_id": "b7c1a69e88bf4b21a8148f787aef2081",
"vip_network_id": "a3847aea-fa6d-45bc-9bce-03d4472d209d",
"admin_state_up": true
}
}

DocImpact: 2.0 API Create a loadbalancer attributes
APIImpact
Closes-Bug: #1465758
Change-Id: I31f10581369343fde7f928ff0aeb1024eb752dc4
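
The IPv4-preference rule from the commit message boils down to something like
the sketch below; this is an illustration over a port's fixed_ips list, not the
actual neutron-lbaas code:

    import netaddr

    def pick_vip_address(fixed_ips):
        # fixed_ips: list of dicts such as {'subnet_id': ..., 'ip_address': ...}
        for ip in fixed_ips:
            if netaddr.IPAddress(ip['ip_address']).version == 4:
                return ip
        return fixed_ips[0]  # no IPv4 address, fall back to the first one

    print(pick_vip_address([{'subnet_id': 's6', 'ip_address': '2001:db8::5'},
                            {'subnet_id': 's4', 'ip_address': '192.0.2.7'}]))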


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465758

Title:
  [RFE] Add the ability to create lb vip and member with network_id

Status in neutron:
  Fix Released

Bug description:
  For large deployments that use cells and provider networks, it may not
  make sense to only allow a user to specify the subnet in which to
  allocate a port because nova scheduling may not be able to allocate a
  port on the specified subnet.  Specifying a subnet would create a
  conflict with that, especially when it comes to capacity management.

  Allowing network_id to be specified, in addition to subnet_id, will
  give flexibility to deployers who only want to allow allocation by
  network_id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1465758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624048] [NEW] Allow creating loadbalancer with network_id

2016-09-15 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/363302
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.

commit 4455759f4506a43d70f811a32ee60b13af6afd8d
Author: Cedric Shock 
Date:   Mon Aug 29 23:46:55 2016 +

Allow creating loadbalancer with network_id

Create loadbalancer accepts either a vip_subnet_id
or vip_network_id. If vip_network_id is provided the
vip port is created on that network using the default
neutron behavior. If neutron assigns multiple fixed ips,
an ipv4 addresses is chosen as the vip in preference to
ipv6 addresses.

-

Who would use the feature?
LBaaS users on a network with multiple subnets

Why use the feature?
Large deployments may have many subnets to allocate
vip addresses. Many of these subnets might have
no addresses remaining to allocate. Creating a
loadbalancer by network selects a subnet with an
available address.

What is the exact usage for the feature?

POST /lbaas/loadbalancers
Host: lbaas-service.cloudX.com:8651
Content-Type: application/json
Accept: application/json
X-Auth-Token:887665443383838

{
"loadbalancer": {
"name": "loadbalancer1",
"description": "simple lb",
"tenant_id": "b7c1a69e88bf4b21a8148f787aef2081",
"vip_network_id": "a3847aea-fa6d-45bc-9bce-03d4472d209d",
"admin_state_up": true
}
}

DocImpact: 2.0 API Create a loadbalancer attributes
APIImpact
Closes-Bug: #1465758
Change-Id: I31f10581369343fde7f928ff0aeb1024eb752dc4

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron-lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624048

Title:
  Allow creating loadbalancer with network_id

Status in neutron:
  New

Bug description:
  https://review.openstack.org/363302
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 4455759f4506a43d70f811a32ee60b13af6afd8d
  Author: Cedric Shock 
  Date:   Mon Aug 29 23:46:55 2016 +

  Allow creating loadbalancer with network_id
  
  Create loadbalancer accepts either a vip_subnet_id
  or vip_network_id. If vip_network_id is provided the
  vip port is created on that network using the default
  neutron behavior. If neutron assigns multiple fixed ips,
  an ipv4 addresses is chosen as the vip in preference to
  ipv6 addresses.
  
  -
  
  Who would use the feature?
  LBaaS users on a network with multiple subnets
  
  Why use the feature?
  Large deployments may have many subnets to allocate
  vip addresses. Many of these subnets might have
  no addresses remaining to allocate. Creating a
  loadbalancer by network selects a subnet with an
  available address.
  
  What is the exact usage for the feature?
  
  POST /lbaas/loadbalancers
  Host: lbaas-service.cloudX.com:8651
  Content-Type: application/json
  Accept: application/json
  X-Auth-Token:887665443383838
  
  {
  "loadbalancer": {
  "name": "loadbalancer1",
  "description": "simple lb",
  "tenant_id": "b7c1a69e88bf4b21a8148f787aef2081",
  "vip_network_id": "a3847aea-fa6d-45bc-9bce-03d4472d209d",
  "admin_state_up": true
  }
  }
  
  DocImpact: 2.0 API Create a loadbalancer attributes
  APIImpact
  Closes-Bug: #1465758
  Change-Id: I31f10581369343fde7f928ff0aeb1024eb752dc4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622914] Re: agent traces about bridge-nf-call sysctl values missing

2016-09-15 Thread Assaf Muller
Added TripleO - br_filter kernel module should be loaded by installers.

** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622914

Title:
  agent traces about bridge-nf-call sysctl values missing

Status in devstack:
  New
Status in neutron:
  In Progress
Status in tripleo:
  New

Bug description:
  spotted in gate:

  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call 
last):
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 450, in daemon_loop
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent sync = 
self.process_network_devices(device_info)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in 
wrapper
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent return f(*args, **kwargs)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 200, in process_network_devices
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent device_info.get('updated'))
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 265, in 
setup_port_filters
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self.prepare_devices_filter(new_devices)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 130, in 
decorated_function
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent *args, **kwargs)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 138, in 
prepare_devices_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._apply_port_filter(device_ids)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 163, in 
_apply_port_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self.firewall.prepare_port_filter(device)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 170, in 
prepare_port_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._enable_netfilter_for_bridges()
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 114, in 
_enable_netfilter_for_bridges
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent run_as_root=True)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent raise RuntimeError(msg)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent RuntimeError: Exit code: 255; 
Stdin: ; Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent
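
  One way to avoid this trace is to check for the knobs before shelling out to
  sysctl and point at the missing br_netfilter module instead; a sketch only,
  not necessarily the patch that merged for this bug:

    import os
    import subprocess

    BRIDGE_NF_KNOBS = (
        'net.bridge.bridge-nf-call-arptables',
        'net.bridge.bridge-nf-call-iptables',
        'net.bridge.bridge-nf-call-ip6tables',
    )

    def enable_netfilter_for_bridges():
        for knob in BRIDGE_NF_KNOBS:
            path = '/proc/sys/' + knob.replace('.', '/')
            if not os.path.exists(path):
                # br_netfilter (or the bridge module) is not loaded yet.
                print('%s is missing; load the br_netfilter module' % knob)
                continue
            subprocess.check_call(['sysctl', '-w', '%s=1' % knob])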

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1622914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622914] Re: agent traces about bridge-nf-call sysctl values missing

2016-09-15 Thread Armando Migliaccio
** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: neutron
Milestone: newton-rc1 => ocata-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622914

Title:
  agent traces about bridge-nf-call sysctl values missing

Status in devstack:
  New
Status in neutron:
  In Progress

Bug description:
  spotted in gate:

  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call 
last):
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 450, in daemon_loop
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent sync = 
self.process_network_devices(device_info)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in 
wrapper
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent return f(*args, **kwargs)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 200, in process_network_devices
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent device_info.get('updated'))
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 265, in 
setup_port_filters
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self.prepare_devices_filter(new_devices)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 130, in 
decorated_function
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent *args, **kwargs)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 138, in 
prepare_devices_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._apply_port_filter(device_ids)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 163, in 
_apply_port_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self.firewall.prepare_port_filter(device)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 170, in 
prepare_port_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._enable_netfilter_for_bridges()
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 114, in 
_enable_netfilter_for_bridges
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent run_as_root=True)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent raise RuntimeError(msg)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent RuntimeError: Exit code: 255; 
Stdin: ; Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1622914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623908] Re: Nova documentation on creating flavors has depreceated examples

2016-09-15 Thread Sean Dague
The documentation actually exists in openstack-manuals, not in Nova.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** No longer affects: nova

** Changed in: openstack-manuals
   Status: New => Confirmed

** Changed in: openstack-manuals
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623908

Title:
  Nova documentation on creating flavors has depreceated examples

Status in openstack-manuals:
  Confirmed

Bug description:
  In the admin-guide/compute-flavors [1][2], there is a deprecated
  example (below "Is Public"):

  $ openstack flavor create --private p1.medium auto 512 40 4

  Pasting it in gives you an error:

  $ openstack flavor create --private p1.medium auto 512 40 4
  usage: openstack flavor create [-h] [-f {json,shell,table,value,yaml}]
 [-c COLUMN] [--max-width ]
 [--noindent] [--prefix PREFIX] [--id ]
 [--ram ] [--disk ]
 [--ephemeral ] [--swap ]
 [--vcpus ] [--rxtx-factor ]
 [--public | --private] [--property 

[Yahoo-eng-team] [Bug 1493653] Re: DVR: port with None binding:host_id can't be deleted

2016-09-15 Thread Brian Haley
I just tried these steps with the newton master branch and it worked, so I'm
going to close this as there has been no response in a while.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493653

Title:
  DVR: port with None binding:host_id can't be deleted

Status in neutron:
  Invalid

Bug description:
  On the Neutron master branch, in the use case below, a port can't be deleted:
  1. create a DVR router
  2. create a network, a subnet which disable dhcp
  3. create a port with device_owner=compute:None

  When we delete this port, we will get an error:
  root@compute:/var/log/neutron# neutron port-delete 
830d6db6-cd00-46ff-8f17-f32f363de1fd
  Agent with agent_type=L3 agent and host= could not be found

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623871] Re: Nova hugepage support does not include aarch64

2016-09-15 Thread Sylvain Bauza
Thanks for proposing to add aarch64 support to hugepages and NUMA
instances.

Considering the operator impact and the fact that we would need to verify
that the minimum libvirt and QEMU versions we ship support the above, I
think it is really important for this specific feature request to be handled
through the normal process, with people able to review your proposal.

In Nova, we follow a specific process for writing new specifications and
proposals that you can find more information on
http://docs.openstack.org/developer/nova/process.html#how-do-i-get-my-
code-merged

Basically, it requires you to write a blueprint and discuss it on IRC to
see whether your blueprint needs a formal specification document, called a
"spec" file.


** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623871

Title:
  Nova hugepage support does not include aarch64

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Although aarch64 supports spawning a VM with hugepages, in the Nova code the
  libvirt driver considers only x86_64 and i686. AARCH64 needs to be added for
  both NUMA and hugepage support. Due to this bug, a VM cannot be launched with
  hugepages using OpenStack on aarch64 servers.

  Steps to reproduce:
  On an openstack environment running on aarch64:
  1. Configure compute to use hugepages.
  2. Set mem_page_size="2048" for a flavor
  3. Launch a VM using the above flavor. 

  Expected result:
  VM should be launched with hugepages and the libvirt xml should have a
  hugepage-backed memory element, e.g.:

    <memoryBacking>
      <hugepages/>
    </memoryBacking>

  Actual result:
  VM is launched without hugepages.

  There are no error logs in nova-scheduler.
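
  The change being requested amounts to widening the architecture check that
  gates NUMA/hugepage support in the libvirt driver; a minimal sketch of the
  idea with plain strings, not Nova's exact constants or helper names:

    # Architectures for which NUMA topology / hugepages are considered
    # supported; aarch64 is the addition this report asks for.
    MEMPAGES_AND_NUMA_ARCHES = ('i686', 'x86_64', 'aarch64')

    def host_supports_mem_pages(host_arch):
        # host_arch as reported by the libvirt host capabilities.
        return host_arch.lower() in MEMPAGES_AND_NUMA_ARCHES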

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1624005] [NEW] The Update Flavor Metadata does not work properly

2016-09-15 Thread Béla Vancsics
Public bug reported:

The flavor metadata changes are not saved correctly.

Steps:
1) Create a new flavor - e.g.: in console: '$openstack flavor create new_flavor 
--id 99 --ram 512 --disk 2 --vcpus 4' or in dashboard
2) Update Metadata - e.g.: CIM Processor Allocation Setting -> Instruction Set 
Extension -> select: ARM:DSP and ARM:NEON
3) Save
4) Update Metadata - remove the CIM_PASD_InstructionSetExtensionName from 
Existing Metadata box
5) Save

Result:
The CIM_PASD_InstructionSetExtensionName metadata is still present in extra_specs 
(see: $nova flavor-show 99 or the dashboard)
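
As a workaround, the stale key can be removed directly; a small sketch using
python-novaclient, assuming the usual OS_* environment variables and the flavor
ID 99 from the steps above:

    import os
    from novaclient import client

    nova = client.Client('2', os.environ['OS_USERNAME'],
                         os.environ['OS_PASSWORD'],
                         os.environ['OS_TENANT_NAME'],
                         os.environ['OS_AUTH_URL'])

    flavor = nova.flavors.get('99')
    print(flavor.get_keys())    # the stale key is still listed
    flavor.unset_keys(['CIM_PASD_InstructionSetExtensionName'])
    print(flavor.get_keys())    # it should now be gone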

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- The Update Flavor Metadata does not work properly (in dashboard)
+ The Update Flavor Metadata does not work properly

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1624005

Title:
  The Update Flavor Metadata does not work properly

Status in OpenStack Compute (nova):
  New

Bug description:
  The flavor metadata changes are not saved correctly.

  Steps:
  1) Create a new flavor - e.g.: in console: '$openstack flavor create 
new_flavor --id 99 --ram 512 --disk 2 --vcpus 4' or in dashboard
  2) Update Metadata - e.g.: CIM Processor Allocation Setting -> Instruction 
Set Extension -> select: ARM:DSP and ARM:NEON
  3) Save
  4) Update Metadata - remove the CIM_PASD_InstructionSetExtensionName from 
Existing Metadata box
  5) Save

  Result:
  The CIM_PASD_InstructionSetExtensionName metadata is still present in extra_specs 
(see: $nova flavor-show 99 or the dashboard)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1624005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623990] [NEW] ml2 "while True" assumes fresh data on transactions

2016-09-15 Thread Kevin Benton
Public bug reported:

both delete_network and delete_subnet in ML2 use the following pattern:

while True:
with session.begin():
record = get_thing_from_db()
... Logic to determine if out of transaction cleaning should be done
if no_cleaning:
context.session.delete(record)

do_cleaning_up()


The problem here is that it assumes it will get fresh data on each
iteration. However, due to the identity map pattern followed by
sqlalchemy[1], new data will not be reflected in the 'record' var above
on subsequent iterations if the primary key still exists in the DB.

This can lead to infinite loops on delete_subnet if a concurrent delete
request (or network update) bumps the network revision number. This is
because the network that is in the session will always have the stale
revision number even though the lookup methods are called on each
iteration.


This can be reproduced with the following steps:
1. Place a pdb breakpoint here: 
https://github.com/openstack/neutron/blob/27928c0ddfb8d62843aa72ecf943d1f46ef30099/neutron/plugins/ml2/plugin.py#L1062
2. create a network with two subnets (ensure the dhcp agent is running so it 
gets an IP on each)
3. issue an API subnet delete call
4. first time breakpoint hits, nothing is looked up yet, so just continue
5. second time breakpoint hits, issue the following query in mysql: 'update 
standardattributes set revision_number=50;'. this will alter the revision 
number in the DB but the loop won't ever get the new one.
6. continue
7. continue
8. continue
9. do this as long as you want :)


1. http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#is-the-
session-a-cache
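
The SQLAlchemy behaviour is easy to demonstrate in isolation; a self-contained
sketch with a toy model on SQLite (an illustration, not Neutron code):

    import sqlalchemy as sa
    from sqlalchemy import orm
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Network(Base):
        __tablename__ = 'networks'
        id = sa.Column(sa.Integer, primary_key=True)
        revision = sa.Column(sa.Integer)

    engine = sa.create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = orm.sessionmaker(bind=engine)()

    session.add(Network(id=1, revision=1))
    session.commit()

    record = session.query(Network).filter_by(id=1).one()  # in identity map

    # Bump the revision at the SQL level, as a concurrent request would.
    session.execute(sa.text("UPDATE networks SET revision = 50 WHERE id = 1"))

    # Re-querying returns the same cached object with the stale value.
    again = session.query(Network).filter_by(id=1).one()
    print(again is record, again.revision)   # True 1

    # Only after expiring the session does a re-query reload fresh data.
    session.expire_all()
    print(session.query(Network).filter_by(id=1).one().revision)  # 50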

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623990

Title:
  ml2 "while True" assumes fresh data on transactions

Status in neutron:
  In Progress

Bug description:
  both delete_network and delete_subnet in ML2 use the following
  pattern:

  while True:
  with session.begin():
  record = get_thing_from_db()
  ... Logic to determine if out of transaction cleaning should be done
  if no_cleaning:
  context.session.delete(record)

  do_cleaning_up()


  The problem here is that it assumes it will get fresh data on each
  iteration. However, due to the identity map pattern followed by
  sqlalchemy[1], new data will not be reflected in the 'record' var
  above on subsequent iterations if the primary key still exists in the
  DB.

  This can lead to infinite loops on delete_subnet if a concurrent
  delete request (or network update) bumps the network revision number.
  This is because the network that is in the session will always have
  the stale revision number even though the lookup methods are called on
  each iteration.

  
  This can be reproduced with the following steps:
  1. Place a pdb breakpoint here: 
https://github.com/openstack/neutron/blob/27928c0ddfb8d62843aa72ecf943d1f46ef30099/neutron/plugins/ml2/plugin.py#L1062
  2. create a network with two subnets (ensure the dhcp agent is running so it 
gets an IP on each)
  3. issue an API subnet delete call
  4. first time breakpoint hits, nothing is looked up yet, so just continue
  5. second time breakpoint hits, issue the following query in mysql: 'update 
standardattributes set revision_number=50;'. this will alter the revision 
number in the DB but the loop won't ever get the new one.
  6. continue
  7. continue
  8. continue
  9. do this as long as you want :)


  
  1. 
http://docs.sqlalchemy.org/en/latest/orm/session_basics.html#is-the-session-a-cache

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623664] Re: Race between L3 agent and neutron-ns-cleanup

2016-09-15 Thread Hirofumi Ichihara
I agree with Assaf. Why do we need to run neutron-netns-cleanup alongside the
l3-agent? I don't know the Ubuntu package behaviour, though.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623664

Title:
  Race between L3 agent and neutron-ns-cleanup

Status in neutron:
  Invalid

Bug description:
  I suspect a race between the neutron L3 agent and the neutron-netns-
  cleanup script, which runs as a CRON job in Ubuntu. Here's a stack
  trace in the router delete code path:

  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager [-] Error during 
notification for neutron.agent.metadata.driver.before_router_removed router, 
before_delete
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Traceback (most 
recent call last):
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 141, in 
_notify_loop
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/metadata/driver.py", line 176, 
in before_router_removed
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager 
router.iptables_manager.apply()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 423, in apply
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 431, in _apply
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager return 
self._apply_synchronized()
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/iptables_manager.py", 
line 457, in _apply_synchronized
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager save_output 
= self.execute(args, run_as_root=True)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py", line 159, in 
execute
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager raise 
RuntimeError(m)
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager RuntimeError:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Command: 
['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8', 
'iptables-save']
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Exit code: 1
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdin:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stdout:
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager Stderr: Cannot 
open network namespace "qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8": No such 
file or directory
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
  2016-08-03 03:30:03.392 2595 ERROR neutron.callbacks.manager
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent [-] Error while 
deleting router 69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 344, in 
_safe_router_removed
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
self._router_removed(router_id)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 360, in 
_router_removed
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent self, router=ri)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/registry.py", line 44, in 
notify
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent 
_get_callback_manager().notify(resource, event, trigger, **kwargs)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent   File 
"/usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py", line 123, in 
notify
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent raise 
exceptions.CallbackFailure(errors=errors)
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent CallbackFailure: 
Callback neutron.agent.metadata.driver.before_router_removed failed with "
  2016-08-03 03:30:03.393 2595 ERROR neutron.agent.l3.agent Command: ['sudo', 
'/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 
'exec', 'qrouter-69ef3d5c-1ad1-42fb-8a1e-8d949837bbf8', 

[Yahoo-eng-team] [Bug 1618025] Re: MTU specified during neutron network create is not honored, always set to 1500

2016-09-15 Thread Esha Seth
I checked on the neutron IRC and MTU cannot be set via the API/GUI/CLI. MTU
is calculated by neutron from config options such as global_physnet_mtu,
but it cannot be set directly. There is a separate page on MTU in the newton
user guide: http://docs.openstack.org/draft/networking-guide/config-mtu.html
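
As a rough illustration of that calculation (this is not the neutron code; the
50-byte figure assumes a VXLAN tenant network over IPv4):

# Illustration only: how a 1500-byte global_physnet_mtu becomes a 1450-byte
# network MTU once the encapsulation overhead is subtracted.
GLOBAL_PHYSNET_MTU = 1500    # [DEFAULT] global_physnet_mtu
VXLAN_IPV4_OVERHEAD = 50     # VXLAN-over-IPv4 encapsulation overhead

def vxlan_network_mtu(physnet_mtu=GLOBAL_PHYSNET_MTU):
    return physnet_mtu - VXLAN_IPV4_OVERHEAD

print(vxlan_network_mtu())   # 1450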

Setting it via neutron net-update or the REST API is not possible; I checked
the parameters and they do not include MTU. The only CLI where I could see MTU
as a parameter is neutron ipsec-site-connection-update.

A GET can return the MTU in the network payload, assuming your plugin
supports the net-mtu API extension. POST (edit) will not work.

It was suggested that we could in theory allow requesting MTUs lower than the
plugin-calculated value; that would be a feature request, and only if there is
a valid use case for lowering your MTU.

I am cancelling this defect based on this discussion.

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: neutron
 Assignee: ramesram (ramesh.rayapureddi) => Esha Seth (eshaseth)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1618025

Title:
  MTU specified during neutron network create is not honored, always set
  to 1500

Status in neutron:
  Invalid

Bug description:
  When I try to create a new neutron network with MTU say 1000, it is
  getting overwritten with a value of 1500.

  Is MTU not settable per neutron network?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1618025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623953] [NEW] Updating firewall rule that is associated with a policy causes KeyError

2016-09-15 Thread Sridar Kandaswamy
Public bug reported:

Updating a firewall rule that is associated with a policy causes a KeyError.
Observed during testing.

** Affects: neutron
 Importance: Undecided
 Assignee: Sridar Kandaswamy (skandasw)
 Status: Incomplete


** Tags: fwaas

** Changed in: neutron
 Assignee: (unassigned) => Sridar Kandaswamy (skandasw)

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623953

Title:
  Updating firewall rule that is associated with a  policy causes
  KeyError

Status in neutron:
  Incomplete

Bug description:
  Updating a firewall rule that is associated with a policy causes a
  KeyError. Observed during testing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623958] [NEW] Metering-agent fails to restore iptables

2016-09-15 Thread Kengo Hobo
Public bug reported:

iptables-restore fails with 'Bad argument' when 'add_rule' is called with ''
as the rule, because IptablesManager tries to add the line below, which has an
invalid format.
===
-I neutron-meter-l-ba70a353-f3a 1 neutron-meter-l-ba70a353-f3a
===

This behavior started occurring after this change:
https://github.com/openstack/neutron/commit/5b7c71a327d735134fa0eeb4427d0e1bd1f7d1e5

The blank rule is stripped while the IptablesRule is being initialized:
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_manager.py#L124

Stripping the blank rule changes the result of
_generate_chain_diff_iptables_commands as follows:
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_manager.py#L767

before: '' is set as the rule variable.
after: the chain name (e.g. neutron-meter-l-e648a667-c21) is set as the rule variable.

The extra string produces a line in an invalid format, and iptables-restore
therefore fails.
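
A small illustration of the failure (this is not the neutron code, only the
shape of the generated line):

def build_insert_line(chain, index, rule_spec):
    # hypothetical helper mirroring the '-I <chain> <index> <rule>' layout of
    # the generated iptables-restore input
    return '-I %s %d %s' % (chain, index, rule_spec)

# A normal rule spec yields a valid line:
print(build_insert_line('neutron-meter-FORWARD', 1,
                        '-j neutron-meter-r-ba70a353-f3a'))
# With the stripped blank rule, the chain name ends up where the rule spec
# should be, which iptables-restore rejects with 'Bad argument':
print(build_insert_line('neutron-meter-l-ba70a353-f3a', 1,
                        'neutron-meter-l-ba70a353-f3a'))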

trace in metering-agent.log
==
2016-09-15 12:50:39.202 22676 ERROR neutron.agent.linux.utils 
[req-b3191be2-6daf-42bf-bc3a-e792481da48b da7194b4e98b4c8badc5912bbcd7aea4 
9384ef06bc9d4af08a384692c92761c
e - - -] Exit code: 2; Stdin: # Generated by iptables_manager
*filter
:neutron-meter-FORWARD - [0:0]
:neutron-meter-INPUT - [0:0]
:neutron-meter-OUTPUT - [0:0]
:neutron-meter-l-ba70a353-f3a - [0:0]
:neutron-meter-local - [0:0]
:neutron-meter-r-ba70a353-f3a - [0:0]
-I FORWARD 2 -j neutron-meter-FORWARD
-I INPUT 1 -j neutron-meter-INPUT
-I OUTPUT 2 -j neutron-meter-OUTPUT
-I neutron-filter-top 1 -j neutron-meter-local
-I neutron-meter-FORWARD 1 -j neutron-meter-r-ba70a353-f3a
-I neutron-meter-l-ba70a353-f3a 1 neutron-meter-l-ba70a353-f3a
COMMIT
# Completed by iptables_manager
# Generated by iptables_manager
*raw
:neutron-meter-OUTPUT - [0:0]
:neutron-meter-PREROUTING - [0:0]
-I OUTPUT 1 -j neutron-meter-OUTPUT
-I PREROUTING 1 -j neutron-meter-PREROUTING
COMMIT
# Completed by iptables_manager
; Stdout: ; Stderr: Bad argument `neutron-meter-l-ba70a353-f3a'
Error occurred at line: 14
Try `iptables-restore -h' or 'iptables-restore --help' for more information.

2016-09-15 12:50:39.204 22676 ERROR neutron.agent.linux.iptables_manager 
[req-b3191be2-6daf-42bf-bc3a-e792481da48b da7194b4e98b4c8badc5912bbcd7aea4 
9384ef06bc9d4af08a384692c92761ce - - -] IPTablesManager.apply failed to apply 
the following set of iptables rules:
  1. # Generated by iptables_manager
  2. *filter
  3. :neutron-meter-FORWARD - [0:0]
  4. :neutron-meter-INPUT - [0:0]
  5. :neutron-meter-OUTPUT - [0:0]
  6. :neutron-meter-l-ba70a353-f3a - [0:0]
  7. :neutron-meter-local - [0:0]
  8. :neutron-meter-r-ba70a353-f3a - [0:0]
  9. -I FORWARD 2 -j neutron-meter-FORWARD
 10. -I INPUT 1 -j neutron-meter-INPUT
 11. -I OUTPUT 2 -j neutron-meter-OUTPUT
 12. -I neutron-filter-top 1 -j neutron-meter-local
 13. -I neutron-meter-FORWARD 1 -j neutron-meter-r-ba70a353-f3a
 14. -I neutron-meter-l-ba70a353-f3a 1 neutron-meter-l-ba70a353-f3a
 15. COMMIT
 16. # Completed by iptables_manager
 17. # Generated by iptables_manager
 18. *raw
 19. :neutron-meter-OUTPUT - [0:0]
 20. :neutron-meter-PREROUTING - [0:0]
 21. -I OUTPUT 1 -j neutron-meter-OUTPUT
 22. -I PREROUTING 1 -j neutron-meter-PREROUTING
 23. COMMIT
 24. # Completed by iptables_manager
 25.
2016-09-15 12:50:39.204 22676 DEBUG oslo_concurrency.lockutils 
[req-b3191be2-6daf-42bf-bc3a-e792481da48b da7194b4e98b4c8badc5912bbcd7aea4 
9384ef06bc9d4af08a384692c92761ce - - -] Releasing semaphore 
"iptables-qrouter-98b59b73-4490-4834-a02c-ae2e1ea16d64" lock 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
2016-09-15 12:50:39.205 22676 ERROR 
neutron.services.metering.agents.metering_agent 
[req-b3191be2-6daf-42bf-bc3a-e792481da48b da7194b4e98b4c8badc5912bbcd7aea4 
9384ef06bc9d4af08a384692c92761ce - - -] Driver 
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver:add_metering_label
 runtime error
2016-09-15 12:50:39.205 22676 ERROR 
neutron.services.metering.agents.metering_agent Traceback (most recent call 
last):
2016-09-15 12:50:39.205 22676 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/opt/stack/neutron/neutron/services/metering/agents/metering_agent.py", line 
166, in _invoke_driver
2016-09-15 12:50:39.205 22676 ERROR 
neutron.services.metering.agents.metering_agent return 
getattr(self.metering_driver, func_name)(context, meterings)
2016-09-15 12:50:39.205 22676 ERROR 
neutron.services.metering.agents.metering_agent   File 
"/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py", line 48, in 
wrapper
2016-09-15 12:50:39.205 22676 ERROR 
neutron.services.metering.agents.metering_agent return method(*args, 
**kwargs)
2016-09-15 12:50:39.205 22676 ERROR 
neutron.services.metering.agents.metering_agent   File 

[Yahoo-eng-team] [Bug 1622917] Re: Failed to update router to ha mode when overlapping is disabled

2016-09-15 Thread Hirofumi Ichihara
Thanks. I reproduced.

However, this is not a bug, and it is not specific to the migration from
legacy to HA mode. L3 HA has always expected "allow_overlapping_ips = True"
[1], because an HA router needs the 169.254.192.0/18 network (a constant value
set by l3_ha_net_cidr), and "allow_overlapping_ips = False" does not allow
that network to be created for each project. If we wanted L3 HA to work with
"allow_overlapping_ips = False", we would have to implement auto-generated
CIDR networks instead of l3_ha_net_cidr. That is not impossible, but it is not
reasonable. It is better to just use "allow_overlapping_ips = True".

[1]: http://docs.openstack.org/liberty/networking-guide/scenario-l3ha-
ovs.html
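
For reference, these are the two settings involved (a sketch of the relevant
neutron.conf lines; l3_ha_net_cidr is shown with its default value):

[DEFAULT]
allow_overlapping_ips = True
l3_ha_net_cidr = 169.254.192.0/18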

** Changed in: neutron
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622917

Title:
  Failed to update router to ha mode when overlapping is disabled

Status in neutron:
  Opinion

Bug description:
  I tried to move users' routers from non-HA to HA mode. I ran neutron
  router-update $rid --ha true; it works, but only a few times. When I
  try to do it on the fourth or fifth router, it fails with this error:

  Invalid input for operation: Requested subnet with cidr:
  169.254.192.0/18 for network: 313e3e5e-79a8-42cd-bdf3-5d385682197a
  overlaps with another subnet.

  Unfortunately I did it in a loop with all routers, so all of remaining
  non-ha routers became unusable. Network node with l3 agent was unable
  to create virtual router giving error:

  2016-09-12 19:18:35.827 9357 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/dist-packages/neutron/scheduler/l3_agent_scheduler.py", 
line 293, in create_ha_port_and_bind
  2016-09-12 19:18:35.827 9357 ERROR oslo_service.periodic_task 
ha_network.network.id, tenant_id)
  2016-09-12 19:18:35.827 9357 ERROR oslo_service.periodic_task
  2016-09-12 19:18:35.827 9357 ERROR oslo_service.periodic_task AttributeError: 
'NoneType' object has no attribute 'network'

  Trying to reverse the operation with "neutron router-update $rid --ha false"
  also fails (with the same error in the neutron log), so after spending a few
  hours diagnosing the problem I found its source and a solution. I had to
  manually change the ha mode in the database (router_extra_attributes table)
  and the routers became stable and working again. The problem is that
  allow_overlapping_ips = False prevents neutron from creating the per-tenant
  HA networks, which all use the same 169.254.192.0/18 subnet.

  My OpenStack version is Liberty. If there is no solution yet, let me propose
  two options:
  - allow HA networks to be created regardless of the allow_overlapping_ips
    setting (but I think such an exception could be hard to develop)
  - do not change the ha mode of the router if creating the HA network failed
    (the create_ha_port_and_bind procedure needs an additional exception).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546152] Re: openstack adding a role to an openldap user failed

2016-09-15 Thread Steve Martinelli
*** This bug is a duplicate of bug 1526462 ***
https://bugs.launchpad.net/bugs/1526462

** This bug has been marked a duplicate of bug 1526462
   Need support for OpenDirectory in LDAP driver

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546152

Title:
  openstack adding a role to an openldap user failed

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When issuing "openstack role add  --domain  --user
--user-domain  member" command on a domain
  associated with OpenLDAP, the keystone logs report that the domain and
  the role member could not be found though the openstack role show
  member displays the member role and openstack domain show 
  displays the domain as active.

  OpenLDAP is running on a CentOS 7 host.
  Openstack keystone release is Liberty running on a CentOS 7 host.
  OpenLDAP version: OpenLDAP: slapd 2.4.39 (Sep 29 2015 13:31:12)
  openstack v: 1.7.2

  This "bug" could be probably related to the two other bugs I reported
  before: #1546040 and #1546136

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623908] [NEW] Nova documentation on creating flavors has depreceated examples

2016-09-15 Thread Gábor Antal
Public bug reported:

In the admin-guide/compute-flavors [1][2], there is a deprecated example
(below "Is Public"):

$ openstack flavor create --private p1.medium auto 512 40 4

Pasting it in gives you an error:

$ openstack flavor create --private p1.medium auto 512 40 4
usage: openstack flavor create [-h] [-f {json,shell,table,value,yaml}]
   [-c COLUMN] [--max-width ]
   [--noindent] [--prefix PREFIX] [--id ]
   [--ram ] [--disk ]
   [--ephemeral ] [--swap ]
   [--vcpus ] [--rxtx-factor ]
   [--public | --private] [--property 
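
For reference, a form of the command that uses the options listed in that
usage output would be something like the following (illustrative, not a quote
from the guide):

$ openstack flavor create --id auto --ram 512 --disk 40 --vcpus 4 --private p1.medium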

[Yahoo-eng-team] [Bug 1620279] Re: Allow metadata agent to make calls to more than one nova_metadata_ip

2016-09-15 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/370727

** Changed in: neutron
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1620279

Title:
  Allow metadata agent to make calls to more than one nova_metadata_ip

Status in neutron:
  In Progress

Bug description:
  Currently the metadata agent config has an option to set the IP address of
  the nova metadata service (nova_metadata_ip).
  There can be more than one nova-api service in a cluster, and in that case,
  if the configured nova metadata IP returns e.g. error 500, that error is
  passed back to the instance, even though all the other nova-api services may
  be working fine and a call to another Nova service would return proper
  metadata.

  So the proposal is to change the nova_metadata_ip string option into a list
  of IP addresses and to change the metadata agent so that it tries to call
  one of the configured Nova services. If the response from that Nova service
  is not 200, the agent tries the next Nova service. If the responses from all
  Nova services fail, it returns the lowest error code received from Nova (for
  example, if Nova-api-1 returned 500 and Nova-api-2 returned 404, the agent
  returns response 404 to the VM).
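
  A minimal sketch of the proposed fallback behaviour (illustrative only, not
  the metadata-agent code; the endpoint list and helper name are made up for
  the example):

import requests

NOVA_METADATA_ENDPOINTS = ['192.0.2.11', '192.0.2.12']   # example addresses

def fetch_metadata(path, port=8775, timeout=5):
    error_codes = []
    for ip in NOVA_METADATA_ENDPOINTS:
        resp = requests.get('http://%s:%d/%s' % (ip, port, path),
                            timeout=timeout)
        if resp.status_code == 200:
            return 200, resp.content
        error_codes.append(resp.status_code)
    # nobody answered with 200: report the lowest error code, as proposed
    # above (e.g. 404 rather than 500 when both occur)
    return min(error_codes), b''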

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1620279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621888] Re: placement-api http responses are not marked for translation

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/369035
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a4b5b0c859a6e194987652708b5473827a1cd604
Submitter: Jenkins
Branch:master

commit a4b5b0c859a6e194987652708b5473827a1cd604
Author: Chris Dent 
Date:   Mon Sep 12 20:00:08 2016 +

[placement] Mark HTTP error responses for translation

The HTTP responses that are errors and provided messages to the
client should be marked for translation. This change does that.

Change-Id: If22270768c2e6cdb810e0e08b3a4ab7a42bf828d
Closes-Bug: #1621888


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1621888

Title:
  placement-api http responses are not marked for translation

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The HTTP error responses from the placement-api should be marked for
  translation with _(), like here:

  
https://github.com/openstack/nova/blob/f441ee55c6f99a099fe1a68a6cfa486fd522554f/nova/api/openstack/placement/handlers/allocation.py#L101

  raise webob.exc.HTTPBadRequest(
  "Allocation for resource provider '%s' "
  "that does not exist." % resource_provider_uuid,
  json_formatter=util.json_error_formatter)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1621888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584204] Re: VersionsCallbackNotFound exception when using QoS

2016-09-15 Thread Richard Theis
** Changed in: networking-ovn
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584204

Title:
  VersionsCallbackNotFound exception when using QoS

Status in networking-ovn:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  VersionsCallbackNotFound exception occurred in neutron-server running
  networking-ovn when trying to enable QoS with the following commands:

  $ neutron qos-policy-create bw-limiter

  $ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000
  --max-burst-kbps 300

  Note:  This exception occurred when running core plugin or ML2 mech
  driver.

  
  2016-05-20 09:41:36.789 27596 DEBUG oslo_policy.policy 
[req-0fe76c74-76a6-43b3-8f5b-4d85a65aec7b admin -] Reloaded policy file: 
/etc/neutron/policy.json _load_policy_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:520
  2016-05-20 09:41:36.954 27596 INFO neutron.wsgi 
[req-0fe76c74-76a6-43b3-8f5b-4d85a65aec7b admin -] 192.168.56.10 - - 
[20/May/2016 09:41:36] "GET /v2.0/qos/policies.json?fields=id=bw-limiter 
HTTP/1.1" 200 260 0.368297
  2016-05-20 09:41:37.031 27596 DEBUG neutron.api.v2.base 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] Request body: 
{u'bandwidth_limit_rule': {u'max_kbps': u'3000', u'max_burst_kbps': u'300'}} 
prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:658
  2016-05-20 09:41:37.031 27596 DEBUG neutron.api.v2.base 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] Unknown quota resources 
['bandwidth_limit_rule']. _create /opt/stack/neutron/neutron/api/v2/base.py:460
  2016-05-20 09:41:37.056 27596 DEBUG neutron.api.rpc.handlers.resources_rpc 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] 
neutron.api.rpc.handlers.resources_rpc.ResourcesPushRpcApi method push called 
with arguments (, 
QosPolicy(description='',id=dbee9581-44a5-4889-bd06-9193eb08c10d,name='bw-limiter',rules=[QosRule(7317f86e-bacb-4c6c-9221-66e2f9d9309d)],shared=False,tenant_id=7c291c3d9d1a45dd89c8c80c7f5f12b0),
 'updated') {} wrapper 
/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py:47
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] create failed
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 412, in create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in 
__exit__
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 523, in _create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource obj = 
do_create(body)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 505, in do_create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
request.context, reservation.reservation_id)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in 
__exit__
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1623890] Re: duplicate packages in glare

2016-09-15 Thread Valerii Kovalchuk
*** This bug is a duplicate of bug 1623567 ***
https://bugs.launchpad.net/bugs/1623567

Bug in glance already exists and is being fixed
https://bugs.launchpad.net/glance/+bug/1623567

** This bug has been marked a duplicate of bug 1623567
   It is possible to import package twice via plugin with enabled glance 
artifact repository

** Also affects: glance
   Importance: Undecided
   Status: New

** This bug is no longer a duplicate of bug 1623567
   It is possible to import package twice via plugin with enabled glance 
artifact repository

** Changed in: murano
   Status: New => Invalid

** This bug has been marked a duplicate of bug 1623567
   It is possible to import package twice via plugin with enabled glance 
artifact repository

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1623890

Title:
  duplicate packages in glare

Status in Glance:
  New
Status in Murano:
  Invalid

Bug description:
  While uploading the same package to Murano+Glare I see duplicates. It
  should detect the same package and ask for an action (skip, upload, abort).

  python-muranoclient 0.11.0.dev6-1~u14.04+mos2
  python-murano 1:3.0.0~b3.dev8-1~u14.04+mos2
  glance-glare 2:12.0.0-3~u14.04+mos12

  Steps to reproduce:
  - Deploy MOS9.0 + Murano plugin
  - murano package-import 
  - murano package-import 
  - murano package-list

  Expected result:
  No duplicates

  Actual result:
  There will be two identical packages

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1623890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623876] [NEW] nova is not setting the MTU provided by Neutron

2016-09-15 Thread Kevin Benton
Public bug reported:

Spotted in a gate grenade job. We can see the neutron MTU is 1450, but the MTU
set calls in privsep use 1500.
http://logs.openstack.org/56/369956/3/gate/gate-grenade-dsvm-neutron-ubuntu-trusty/83daad8/logs/new/screen-n-cpu.txt.gz#_2016-09-15_01_16_57_512

Relevant log snippet:

2016-09-15 01:16:57.512 25573 DEBUG nova.network.os_vif_util 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Converting VIF {"profile": {}, 
"ovs_interfaceid": "8dfdfd9b-da9d-4215-abbd-4dffdc48494b", 
"preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": 
[{"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [], 
"address": "10.1.0.9"}], "version": 4, "meta": {}, "dns": [], "routes": [], 
"cidr": "10.1.0.0/28", "gateway": {"meta": {}, "version": 4, "type": "gateway", 
"address": "10.1.0.1"}}], "meta": {"injected": false, "tenant_id": 
"563ca55619b1402ebf0c792ec604a774", "mtu": 1450}, "id": 
"6e1f0d14-a238-4da9-a2d5-659a0f28479c", "label": 
"tempest-AttachInterfacesTestJSON-32449395-network"}, "devname": 
"tap8dfdfd9b-da", "vnic_type": "normal", "qbh_params": null, "meta": {}, 
"details": {"port_filter": true, "ovs_hybrid_plug": true}, "address": 
"fa:16:3e:38:52:12", "active": false
 , "type": "ovs", "id": "8dfdfd9b-da9d-4215-abbd-4dffdc48494b", "qbg_params": 
null} nova_to_osvif_vif /opt/stack/new/nova/nova/network/os_vif_util.py:362
2016-09-15 01:16:57.513 25573 DEBUG nova.network.os_vif_util 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Converted object 
VIFBridge(active=False,address=fa:16:3e:38:52:12,bridge_name='qbr8dfdfd9b-da',has_traffic_filtering=True,id=8dfdfd9b-da9d-4215-abbd-4dffdc48494b,network=Network(6e1f0d14-a238-4da9-a2d5-659a0f28479c),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap8dfdfd9b-da')
 nova_to_osvif_vif /opt/stack/new/nova/nova/network/os_vif_util.py:374
2016-09-15 01:16:57.514 25573 DEBUG os_vif 
[req-53929d93-d999-4035-8aae-f8d9fd1b2efb 
tempest-AttachInterfacesTestJSON-355908889 
tempest-AttachInterfacesTestJSON-355908889] Plugging vif 
VIFBridge(active=False,address=fa:16:3e:38:52:12,bridge_name='qbr8dfdfd9b-da',has_traffic_filtering=True,id=8dfdfd9b-da9d-4215-abbd-4dffdc48494b,network=Network(6e1f0d14-a238-4da9-a2d5-659a0f28479c),plugin='ovs',port_profile=VIFPortProfileBase,preserve_on_delete=False,vif_name='tap8dfdfd9b-da')
 plug /usr/local/lib/python2.7/dist-packages/os_vif/__init__.py:76
2016-09-15 01:16:57.515 25573 DEBUG oslo.privsep.daemon [-] privsep: 
request[140021949493072]: (3, 'vif_plug_ovs.linux_net.ensure_bridge', 
(u'qbr8dfdfd9b-da',), {}) out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.515 25573 DEBUG oslo.privsep.daemon [-] Running cmd 
(subprocess): brctl addbr qbr8dfdfd9b-da out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.517 25573 DEBUG neutronclient.v2_0.client 
[req-6754757c-066c-488a-bc16-6bd451c28cdc 
tempest-ServerActionsTestJSON-1197364963 
tempest-ServerActionsTestJSON-1197364963] GET call to neutron for 
http://127.0.0.1:9696/v2.0/subnets.json?id=2b899e3c-17dc-478a-bd39-91132bb057ab 
used request id req-e82dd927-a0f8-48ba-bbbc-56724a10a29d _append_request_id 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:127
2016-09-15 01:16:57.521 25573 DEBUG oslo.privsep.daemon [-] CMD "brctl addbr 
qbr8dfdfd9b-da" returned: 0 in 0.005s out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.521 25573 DEBUG oslo.privsep.daemon [-] Running cmd 
(subprocess): brctl setfd qbr8dfdfd9b-da 0 out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.524 25573 DEBUG oslo.privsep.daemon [-] CMD "brctl setfd 
qbr8dfdfd9b-da 0" returned: 0 in 0.004s out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.524 25573 DEBUG oslo.privsep.daemon [-] Running cmd 
(subprocess): brctl stp qbr8dfdfd9b-da off out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.528 25573 DEBUG oslo.privsep.daemon [-] CMD "brctl stp 
qbr8dfdfd9b-da off" returned: 0 in 0.003s out_of_band 
/usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.528 25573 DEBUG oslo.privsep.daemon [-] Running cmd 
(subprocess): tee /sys/class/net/qbr8dfdfd9b-da/bridge/multicast_snooping 
out_of_band /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.532 25573 DEBUG oslo.privsep.daemon [-] CMD "tee 
/sys/class/net/qbr8dfdfd9b-da/bridge/multicast_snooping" returned: 0 in 0.004s 
out_of_band /usr/local/lib/python2.7/dist-packages/oslo_privsep/daemon.py:194
2016-09-15 01:16:57.532 25573 DEBUG oslo.privsep.daemon [-] Running cmd 
(subprocess): 

[Yahoo-eng-team] [Bug 1623871] [NEW] Nova hugepage support does not include aarch64

2016-09-15 Thread Veena
Public bug reported:

Although aarch64 supports spawning a VM with hugepages, in the nova code the
libvirt driver considers only X86_64 and I686. AARCH64 needs to be added for
both NUMA and hugepage support. Due to this bug, a VM cannot be launched with
hugepages using OpenStack on aarch64 servers.

Steps to reproduce:
On an openstack environment running on aarch64:
1. Configure compute to use hugepages.
2. Set mem_page_size="2048" for a flavor
3. Launch a VM using the above flavor. 

Expected result:
VM should be launched with hugepages, and the libvirt XML should contain the
hugepages memory-backing element (the XML snippet itself was stripped in the
archived message).
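
A typical form of that element (a reconstruction from general libvirt usage,
not the reporter's exact XML) would be:

<memoryBacking>
  <hugepages>
    <page size="2048" unit="KiB"/>
  </hugepages>
</memoryBacking>
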
Actual result:
VM is launched without hugepages.

There are no error logs in nova-scheduler.

** Affects: nova
 Importance: Undecided
 Assignee: Veena (mveenasl)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Veena (mveenasl)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623871

Title:
  Nova hugepage support does not include aarch64

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Although aarch64 supports spawning a VM with hugepages, in the nova code
  the libvirt driver considers only X86_64 and I686. AARCH64 needs to be added
  for both NUMA and hugepage support. Due to this bug, a VM cannot be launched
  with hugepages using OpenStack on aarch64 servers.

  Steps to reproduce:
  On an openstack environment running on aarch64:
  1. Configure compute to use hugepages.
  2. Set mem_page_size="2048" for a flavor
  3. Launch a VM using the above flavor. 

  Expected result:
  VM should be launched with hugepages, and the libvirt XML should contain
  the hugepages memory-backing element (the XML snippet itself was stripped in
  the archived message).
  Actual result:
  VM is launched without hugepages.

  There are no error logs in nova-scheduler.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623871/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623813] Re: IPv6 network not shown in metadata

2016-09-15 Thread Dr. Jens Rosenboom
Gah, forgot there is a knob that needs turning in order to enable this.
Might be worth considering changing the default for this nowadays.

** Changed in: nova
   Status: New => Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623813

Title:
  IPv6 network not shown in metadata

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Steps to reproduce:

  - Set up devstack with default settings from master
  - Start an instance using e.g. the default cirros-0.3.4 image
  - The instance will receive both an IPv4 and IPv6 address, but this isn't 
shown in the metadata, either on the configdrive or via http:

  stack@jr-t5:~/devstack$ nova list
  
+--+---+++-++
  | ID   | Name  | Status | Task State | Power 
State | Networks   |
  
+--+---+++-++
  | dfc9021e-b9d6-4b47-8bcf-d59d6b886a67 | test1 | ACTIVE | -  | 
Running | private=10.1.0.6, fdc0:b675:211f:0:f816:3eff:fe03:5402 |
  
+--+---+++-++

  $ curl 169.254.169.254/openstack/latest/network_data.json;echo
  {"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
  59dc4d-d04b-4a18-aab4-57b763c100af"}]} 
  $ cat /mnt/openstack/latest/network_data.json;echo
  {"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
  59dc4d-d04b-4a18-aab4-57b763c100af"}]}

  Expected result: The presence of an IPv6 network should be shown in
  the metadata, in order to allow the instance to enable or disable IPv6
  processing accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374999] Re: iSCSI volume detach does not correctly remove the multipath device descriptors

2016-09-15 Thread James Page
** Changed in: cloud-archive
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1374999

Title:
  iSCSI volume detach does not correctly remove the multipath device
  descriptors

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive icehouse series:
  Fix Committed
Status in Ubuntu Cloud Archive juno series:
  Won't Fix
Status in Ubuntu Cloud Archive kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Fix Released

Bug description:
  [Impact]

  iSCSI volume detach does not correctly remove the multipath device
  descriptors.

  The multipath devices are left on the compute node, and multipath tools
  will occasionally send IOs to known multipath devices.

  [Test Case]

  tested environment:
  nova-compute on Ubuntu 14.04.1, iscsi_use_multipath=True and iSCSI volume 
backend is EMC VNX 5300.

   I created 3 cinder volumes and attached them to a nova instance. Then I 
detached them one by one. The first 2 volumes detached successfully. The 3rd 
volume also detached successfully but ended up with failed multipaths.
  Here is the terminal log for last volume detach.

  openstack@W1DEV103:~/devstack$ cinder list
  
+--++--+--+-+--+--+
  |
   ID
   | Status | Name | Size | Volume Type | Bootable |
   Attached to
   |
  
+--++--+--+-+--+--+
  | 56a63288-5cc0-4f5c-9197-cde731172dd8 | in-use | None | 1 |
   None
   | false | 5bd68785-4acf-43ab-ae13-11b1edc3a62e |
  
+--++--+--+-+--+--+
  openstack@W1CN103:/etc/iscsi$ date;sudo multipath -l
  Fri Sep 19 21:38:13 JST 2014
  360060160cf0036002d1475f6e73fe411 dm-2 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | |- 4:0:0:42 sdb 8:16 active undef running
  | |- 5:0:0:42 sdd 8:48 active undef running
  | |- 6:0:0:42 sdf 8:80 active undef running
  | `- 7:0:0:42 sdh 8:112 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
  |- 11:0:0:42 sdp 8:240 active undef running
  |- 8:0:0:42 sdj 8:144 active undef running
  |- 9:0:0:42 sdl 8:176 active undef running
  `- 10:0:0:42 sdn 8:208 active undef running
  openstack@W1CN103:/etc/iscsi$ date;sudo iscsiadm -m session
  Fri Sep 19 21:38:19 JST 2014
  tcp: [10] 172.23.58.228:3260,4 iqn.1992-04.com.emc:cx.fcn00133400150.a7
  tcp: [3] 172.23.58.238:3260,8 iqn.1992-04.com.emc:cx.fcn00133400150.b7
  tcp: [4] 172.23.58.235:3260,20 iqn.1992-04.com.emc:cx.fcn00133400150.b4
  tcp: [5] 172.23.58.236:3260,6 iqn.1992-04.com.emc:cx.fcn00133400150.b5
  tcp: [6] 172.23.58.237:3260,19 iqn.1992-04.com.emc:cx.fcn00133400150.b6
  tcp: [7] 172.23.58.225:3260,16 iqn.1992-04.com.emc:cx.fcn00133400150.a4
  tcp: [8] 172.23.58.226:3260,2 iqn.1992-04.com.emc:cx.fcn00133400150.a5
  tcp: [9] 172.23.58.227:3260,17 iqn.1992-04.com.emc:cx.fcn00133400150.a6

  openstack@W1DEV103:~/devstack$ nova volume-detach 
5bd68785-4acf-43ab-ae13-11b1edc3a62e
  56a63288-5cc0-4f5c-9197-cde731172dd8
  openstack@W1DEV103:~/devstack$
  openstack@W1DEV103:~/devstack$ cinder list
  
+--+---+--+--+-+--+--+
  |
   ID
   | Status | Name | Size | Volume Type | Bootable |
   Attached to
   |
  
+--+---+--+--+-+--+--+
  | 56a63288-5cc0-4f5c-9197-cde731172dd8 | detaching | None | 1 |
   None
   | false | 5bd68785-4acf-43ab-ae13-11b1edc3a62e|

  
+--+---+--+--+-+--+--+
  openstack@W1DEV103:~/devstack$
  openstack@W1DEV103:~/devstack$ cinder list
  
+--+---+--+--+-+--+-+
  |
   ID
   | Status | Name | Size | Volume Type | Bootable | Attached to |
  
+--+---+--+--+-+--+-+
  | 56a63288-5cc0-4f5c-9197-cde731172dd8 | available | None | 1 |
   None
   | false |
  
+--+---+--+--+-+--+-+
  |
  openstack@W1CN103:/etc/iscsi$ date;sudo multipath -l
  Fri Sep 19 21:39:23 JST 2014
  360060160cf0036002d1475f6e73fe411 dm-2 ,
  size=1.0G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
  |-+- 

[Yahoo-eng-team] [Bug 1411849] Re: No message is prompted when Gateway IP is set to IPv4 address and IP version as IPv6 in 'Create Network' window

2016-09-15 Thread srikanth
Message is getting displayed

** Attachment added: "message.png"
   
https://bugs.launchpad.net/horizon/+bug/1411849/+attachment/4741342/+files/message.png

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1411849

Title:
  No message is prompted when Gateway IP is set to  IPv4 address  and IP
  version as IPv6  in 'Create Network' window

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Steps to reproduce:

  1. Login to horizon dashboard.
  2. Navigate to Network Topology under Network and select 'Create Network' 
button.
  3. On the pop up window, provide network name and click next.
  4. In Subnet window, provide a Subnet Name, valid Network address , select IP 
Version as IPv6 , set the Gateway IP address with a valid IPv4 address and 
click Next

  Current output:
  No message is prompted to the user to correct the problem in the particular 
field and Next button simply highlights the Create Subnet checkbox.

  Expected output:
  Useful message should be displayed near the field prompting the user to 
correct the IP address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1411849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623849] [NEW] openvswitch native agent, ARP responder response has wrong Eth headers

2016-09-15 Thread Thomas Morin
Public bug reported:

The ovs-ofctl ARP responder implementation (install_arp_responder) sets
the correct src/dst MAC addresses in the Ethernet header:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_tun.py#L197

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/common/constants.py#L110

--> 'move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:%(mac)s,'

*However* the native Openflow/ryu install_arp_responder implementation
does not set these src/dst fields of the Ethernet header:

https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_tun.py#L223


The result is that the forged ARP response is incorrect with
arp_responder=True and of_interface=native:

09:59:47.162196 fa:16:3e:ea:2e:9a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 42: Request who-has 192.168.10.1 tell 192.168.10.5, length 28
09:59:47.162426 fa:16:3e:ea:2e:9a > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 42: Reply 192.168.10.1 is-at fa:16:5e:47:33:64, length 28

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623849

Title:
  openvswitch native agent, ARP responder response has wrong Eth headers

Status in neutron:
  New

Bug description:
  The ovs-ofctl ARP responder implementation (install_arp_responder)
  sets the correct src/dst MAC addresses in the Ethernet header:

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_tun.py#L197

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/common/constants.py#L110

  --> 'move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:%(mac)s,'

  *However* the native Openflow/ryu install_arp_responder implementation
  does not set these src/dst fields of the Ethernet header:

  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_tun.py#L223

  
  The result is that the forged ARP response is incorrect with
arp_responder=True and of_interface=native:

  09:59:47.162196 fa:16:3e:ea:2e:9a > ff:ff:ff:ff:ff:ff, ethertype ARP 
(0x0806), length 42: Request who-has 192.168.10.1 tell 192.168.10.5, length 28
  09:59:47.162426 fa:16:3e:ea:2e:9a > ff:ff:ff:ff:ff:ff, ethertype ARP 
(0x0806), length 42: Reply 192.168.10.1 is-at fa:16:5e:47:33:64, length 28

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623425] Re: DNSNameServerDbObjectTestCase.test_filtering_by_fields fails sometimes

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370037
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2676372f261da202f41ab8e2d175be27b9405261
Submitter: Jenkins
Branch:master

commit 2676372f261da202f41ab8e2d175be27b9405261
Author: Ihar Hrachyshka 
Date:   Sat Sep 10 06:07:58 2016 +

tests: don't override base object test class attributes

DNSNameServerDbObjectTestCase was overriding self.db_objs and
self.obj_fields if the attributes did not have unique order/address
fields generated by get_random_fields. But since
Id1ca4ce7b134d9729e68661cedb2f5556e58d6ff landed, we should have also
updated self.objs, otherwise test_filtering_by_fields will fail later
when it will try to find an object with attributes that were not used
when creating the object targeted by the filtering attempt.

Instead of adding the update for self.objs in the
DNSNameServerDbObjectTestCase test class, I went with an alternative
approach, getting rid of overriding logic completely. The rationale for
the path is that there is nothing wrong in duplicate address and order
field values (at least as per underlying model definition), and hence
our tests should be resilient against that kind of scenario.

So instead of comparing all fields for an object, just make sure that
the order monotonically goes up/down in the sorted result, and ignore
other fields to be strictly ordered.

Change-Id: Ic956072de5dab336f83b04bddfa9da967b2865b2
Closes-Bug: #1623425


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623425

Title:
  DNSNameServerDbObjectTestCase.test_filtering_by_fields fails sometimes

Status in neutron:
  Fix Released

Bug description:
  The test fails sometimes.

  
neutron.tests.unit.objects.test_subnet.DNSNameServerDbObjectTestCase.test_filtering_by_fields
  
-

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 
"/home/vagrant/git/neutron/neutron/tests/unit/objects/test_base.py", line 1215, 
in test_filtering_by_fields'
  b"'Filtering by %s failed.' % field)"
  b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 1182, in assertItemsEqual'
  b'return self.assertSequenceEqual(expected, actual, msg=msg)'
  b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 1014, in assertSequenceEqual'
  b'self.fail(msg)'
  b'  File 
"/home/vagrant/git/neutron/.tox/py34/lib/python3.4/site-packages/unittest2/case.py",
 line 690, in fail'
  b'raise self.failureException(msg)'
  b"AssertionError: Sequences differ: [{'subnet_id': 
'a8b63bc4-9799-4781-83c8-48e9491dcd5e', 'address': 'ioojfcuswf'}] != []"
  b''
  b'First sequence contains 1 additional elements.'
  b'First extra element 0:'
  b"{'subnet_id': 'a8b63bc4-9799-4781-83c8-48e9491dcd5e', 'address': 
'ioojfcuswf'}"
  b''
  b'+ []'
  b"- [{'address': 'ioojfcuswf',"
  b"-   'subnet_id': 'a8b63bc4-9799-4781-83c8-48e9491dcd5e'}] : Filtering 
by order failed."
  b''

  Reproducible with: ostestr  --regex
  
neutron.tests.unit.objects.test_subnet.DNSNameServerDbObjectTestCase.test_filtering_by_fields
  --until-failure

  Log example: http://logs.openstack.org/59/365659/10/check/gate-
  neutron-python34/afb20dd/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623838] [NEW] Nova requires netaddr >= 0.7.12 which is not enough

2016-09-15 Thread Thomas Goirand
Public bug reported:

In this commit:
https://github.com/openstack/nova/commit/4647f418afb9ced223c089f9d49cd686eccae9e2

nova starts using the modified_eui64() function which isn't available in
netaddr 0.7.12. It is available in version 0.7.18, which is what the
upper-constraints.txt has. I haven't investigated (yet) when the new
method was introduced in netaddr, though in all reasonableness, I'd
strongly suggest pushing for an upgrade of global-requirements.txt to
0.7.18 (which is what we've been gating on for a long time).

At the packaging level, it doesn't seem to be a big problem, as 0.7.18
is what Xenial has.

The other solution would be to remove the call to modified_eui64() in
Nova, but that looks like a riskier option to me, so close to the
release.

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: newton-backport-potential testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623838

Title:
  Nova requires netaddr >= 0.7.12 which is not enough

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  In this commit:
  
https://github.com/openstack/nova/commit/4647f418afb9ced223c089f9d49cd686eccae9e2

  nova starts using the modified_eui64() function which isn't available
  in netaddr 0.7.12. It is available in version 0.7.18, which is what
  the upper-constraints.txt has. I haven't investigated (yet) when the
  new method was introduced in netaddr, though in all reasonableness, I'd
  strongly suggest pushing for an upgrade of global-requirements.txt to
  0.7.18 (which is what we've been gating on for a long time).

  At the packaging level, it doesn't seem to be a big problem, as 0.7.18
  is what Xenial has.

  The other solution would be to remove the call to modified_eui64() in
  Nova, but that looks like a riskier option to me, so close to the
  release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614361] Re: tox.ini needs to be updated as openstack infra now supports upper constraints

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/358948
Committed: 
https://git.openstack.org/cgit/openstack/tacker/commit/?id=21e1a4d4c10899bf9d63b995475c6a6181b38cba
Submitter: Jenkins
Branch:master

commit 21e1a4d4c10899bf9d63b995475c6a6181b38cba
Author: AvnishPal 
Date:   Tue Aug 23 09:55:14 2016 +0530

Use upper constraints for all jobs in tox.ini

Openstack infra now supports upper constraints for
all jobs. Updated tox.ini to use upper constraints
for all jobs.

Change-Id: Ibd2c90826db6d07193e3f01c25cb49ee9994b404
Closes-Bug: #1614361


** Changed in: tacker
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1614361

Title:
  tox.ini needs to be updated as openstack infra now supports upper
  constraints

Status in Cinder:
  In Progress
Status in Designate:
  Fix Released
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic Inspector:
  Fix Released
Status in Mistral:
  Fix Released
Status in networking-ovn:
  Invalid
Status in octavia:
  Fix Released
Status in python-mistralclient:
  In Progress
Status in python-muranoclient:
  Fix Released
Status in tacker:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Invalid
Status in vmware-nsx:
  Fix Released
Status in zaqar:
  Fix Released
Status in Zaqar-ui:
  Fix Released

Bug description:
  Openstack infra now supports upper constraints for releasenotes, cover, venv 
targets.
  tox.ini uses install_command for these targets, which can now be safely 
removed.
  Reference for mail that details this support: 
http://lists.openstack.org/pipermail/openstack-dev/2016-August/101474.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1614361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623813] [NEW] IPv6 address not shown in metadata

2016-09-15 Thread Dr. Jens Rosenboom
Public bug reported:

Steps to reproduce:

- Set up devstack with default settings from master
- Start an instance using e.g. the default cirros-0.3.4 image
- The instance will receive both an IPv4 and IPv6 address, but this isn't shown 
in the metadata, either on the configdrive or via http:

stack@jr-t5:~/devstack$ nova list
+--+---+++-++
| ID   | Name  | Status | Task State | Power 
State | Networks   |
+--+---+++-++
| dfc9021e-b9d6-4b47-8bcf-d59d6b886a67 | test1 | ACTIVE | -  | Running  
   | private=10.1.0.6, fdc0:b675:211f:0:f816:3eff:fe03:5402 |
+--+---+++-++

$ curl 169.254.169.254/openstack/latest/network_data.json;echo
{"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
59dc4d-d04b-4a18-aab4-57b763c100af"}]} 
$ cat /mnt/openstack/latest/network_data.json;echo
{"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
59dc4d-d04b-4a18-aab4-57b763c100af"}]}

Expected result: The presence of an IPv6 network should be shown in the
metadata, in order to allow the instance to enable or disable IPv6
processing accordingly.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Description changed:

  Steps to reproduce:
  
  - Set up devstack with default settings from master
  - Start an instance using e.g. the default cirros-0.3.4 image
  - The instance will receive both an IPv4 and IPv6 address, but this isn't 
shown in the metadata, either on the configdrive or via http:
  
+ ```
  stack@jr-t5:~/devstack$ nova list
  
+--+---+++-++
  | ID   | Name  | Status | Task State | Power 
State | Networks   |
  
+--+---+++-++
  | dfc9021e-b9d6-4b47-8bcf-d59d6b886a67 | test1 | ACTIVE | -  | 
Running | private=10.1.0.6, fdc0:b675:211f:0:f816:3eff:fe03:5402 |
  
+--+---+++-++
  
  $ curl 169.254.169.254/openstack/latest/network_data.json;echo
  {"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
- 59dc4d-d04b-4a18-aab4-57b763c100af"}]} $ cat 
/mnt/openstack/latest/network_data.json;echo
+ 59dc4d-d04b-4a18-aab4-57b763c100af"}]} 
+ $ cat /mnt/openstack/latest/network_data.json;echo
  {"services": [], "networks": [{"network_id": 
"43e683d5-d3c8-4dd3-aab5-a279bbf6d049", "link": "tap5359dc4d-d0", "type": 
"ipv4_dhcp", "id": "network0"}], "links": [{"ethernet_mac_address": 
"fa:16:3e:03:54:02", "mtu": 1450, "type": "ovs", "id": "tap5359dc4d-d0", 
"vif_id": "53
  59dc4d-d04b-4a18-aab4-57b763c100af"}]}
+ ```
  
  Expected result: The presence of an IPv6 network should be shown in the
  metadata, in order to allow the instance to enable or disable IPv6
  processing accordingly.


[Yahoo-eng-team] [Bug 1623809] [NEW] Quota exceeded when spawning instances in server group

2016-09-15 Thread Slawek Kaplonski
Public bug reported:

There is a problem with quota_server_group_members.
Steps to reproduce:
1. The user spawns instances in a server group and provides the --min-count
and --max-count parameters.
2. Both min-count and max-count are below the instance quota (e.g.
min-count=2, max-count=5).
3. max-count is above quota_server_group_members (e.g. it was set to 3 for
the tenant).

In such a case nova will not spawn any instance and returns a "Quota
exceeded" error, but IMO it should spawn at least 2 instances (min-count)
or up to 3 (quota_server_group_members).
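
A minimal reproduction sketch with python-novaclient, assuming
quota_server_group_members=3 and an instance quota of at least 5 for the
tenant; credentials, image and flavor IDs are placeholders:

```
# Hedged reproduction sketch; all auth options and IDs below are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='demo', password='secret', project_name='demo',
                   user_domain_name='Default', project_domain_name='Default')
nova = client.Client('2', session=session.Session(auth=auth))

# Assumed keyword signature for server_groups.create().
group = nova.server_groups.create(name='repro-group',
                                  policies=['anti-affinity'])

# min-count and max-count are both within the instance quota, but max-count
# exceeds the server group member quota (3).  Observed: the whole request is
# rejected with "Quota exceeded"; expected: 2 or 3 instances are spawned.
nova.servers.create('repro', '<image-id>', '<flavor-id>',
                    min_count=2, max_count=5,
                    scheduler_hints={'group': group.id})
```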

** Affects: nova
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: quotas

** Changed in: nova
 Assignee: (unassigned) => Slawek Kaplonski (slaweq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623809

Title:
  Quota exceeded when spawning instances in server group

Status in OpenStack Compute (nova):
  New

Bug description:
  There is a problem with quota_server_group_members.
  Steps to reproduce:
  1. The user spawns instances in a server group and provides the --min-count
  and --max-count parameters.
  2. Both min-count and max-count are below the instance quota (e.g.
  min-count=2, max-count=5).
  3. max-count is above quota_server_group_members (e.g. it was set to 3 for
  the tenant).

  In such a case nova will not spawn any instance and returns a "Quota
  exceeded" error, but IMO it should spawn at least 2 instances (min-count)
  or up to 3 (quota_server_group_members).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1621605] Re: placement api doesn't have tests to confirm unicode entry

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/365688
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2e45b95a8f9e0c1b4c7c985f01b4928bd8818a07
Submitter: Jenkins
Branch:master

commit 2e45b95a8f9e0c1b4c7c985f01b4928bd8818a07
Author: Chris Dent 
Date:   Mon Sep 5 14:09:42 2016 +

[placement] Add some tests ensuring unicode resource provider info

Add some gabbi tests which demonstrate that it is possible to create
and query a resource provider that has a name outside the bounds of
ascii. The tests using a 4-byte wide utf-8 character are left as
xfails because not all installations of mysql will support it.

Also confirm that a unicode character (uri-encoded or not) in the
uuid part of a resource provider's path results in the expected 404
and does not explode.

Closes-Bug: #1621605
Change-Id: I3d906c3296aa28b595fcc23c448c1744972c319d


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1621605

Title:
  placement api doesn't have tests to confirm unicode entry

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  There are lots of tests of the placement API but none of them use
  unicode data. The only place where unicode is valid is in the resource
  provider name, so we should have some gabbi tests for that.
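
  Roughly what those gabbi tests exercise, sketched here with plain requests
  against an assumed placement endpoint (URL and token are placeholders, not
  from the original report):

```
# -*- coding: utf-8 -*-
# Hedged sketch of the behaviour under test; endpoint and token are assumed.
import uuid

import requests

PLACEMENT = 'http://controller/placement'
HEADERS = {'x-auth-token': '<admin-token>',
           'content-type': 'application/json',
           'accept': 'application/json'}

rp_uuid = str(uuid.uuid4())
name = u'shared-☃-storage'  # non-ascii, but not a 4-byte utf-8 character

resp = requests.post(PLACEMENT + '/resource_providers', headers=HEADERS,
                     json={'uuid': rp_uuid, 'name': name})
assert resp.status_code == 201, resp.text

resp = requests.get(PLACEMENT + '/resource_providers/' + rp_uuid,
                    headers=HEADERS)
assert resp.status_code == 200
assert resp.json()['name'] == name

# A unicode character in the uuid part of the path should give a plain 404.
resp = requests.get(PLACEMENT + u'/resource_providers/☃', headers=HEADERS)
assert resp.status_code == 404
```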

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1621605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623517] Re: A PUT or POST sent to placement API without a content-type header will result in a 500, should be a 400

2016-09-15 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/370154
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=281a78e0af3819e7de3ed84ddb83ec93ac0cc281
Submitter: Jenkins
Branch:master

commit 281a78e0af3819e7de3ed84ddb83ec93ac0cc281
Author: Chris Dent 
Date:   Wed Sep 14 15:18:30 2016 +0100

[placement] prevent a KeyError in webob.dec.wsgify

If a PUT, POST or PATCH is sent without a content-type header,
webob.dec.wsgify will raise a KeyError. Avoid this by checking for
the content-type header before reaching any wsgify calls. As noted
in the TODO within this is not the most elegant solution, but
prevents an inadvertent 500 and returns a reasonable 400.

Change-Id: I6e7dffb5dc5f0cdc78a57e8df3ae9952c55163ae
Closes-Bug: #1623517


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1623517

Title:
  A PUT or POST sent to placement API without a content-type header will
  result in a 500, should be a 400

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If, by some twist of fate, a user agent sends a PUT or POST request to
  the placement API without a content-type header, the service will have
  an uncaught KeyError raised in webob as it tries to parse the body of
  the request. Tests which thought they were covering this case were not:
  the webob.dec.wsgify decorator does some work before the code that the
  test exercises gets involved. So further tests and guards are required
  to avoid the 500.
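
  A minimal sketch of such a guard, assuming a bare webob-based WSGI app
  rather than the actual nova placement code:

```
# Hedged sketch: reject body-carrying requests that lack a content-type
# before any webob.dec.wsgify handler tries to parse the body.
import webob
import webob.dec
import webob.exc


@webob.dec.wsgify
def handler(req):
    # Stand-in for the real placement handlers.
    return webob.Response(json_body={'ok': True})


def application(environ, start_response):
    if (environ['REQUEST_METHOD'] in ('POST', 'PUT', 'PATCH')
            and not environ.get('CONTENT_TYPE')):
        err = webob.exc.HTTPBadRequest('content-type header required')
        return err(environ, start_response)   # 400 instead of a KeyError/500
    return handler(environ, start_response)
```

  With this in front, a PUT without a content-type gets a 400 instead of
  tripping a KeyError inside wsgify.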

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1623517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623799] [NEW] Serial console does not show up on horizon dashboard

2016-09-15 Thread Dao Cong Tien
Public bug reported:

Issue
=

The console tab in Horizon doesn't show the console of an instance.

Steps to reproduce
==

* Install nova-serialproxy and nova-consoleauth
* Enable "serial console" feature in "nova.conf"
  [vnc]
  enabled=False
  [serial_console]
  enabled=True
  base_url=ws://:6083/
  serialproxy_host = 
  proxyclient_address = 
* Launch an instance
* Open the "console" tab of that instance

Expected behavior
=

The serial console of the instance should show up and allow the user to
interact with it.

Actual behavior
===

* Blank screen (not black screen) without any other info.

Logs & Env
==

* No error/warning logs in Nova and Horizon
* The Nova CLI nova get-serial-console  worked correctly and returned a valid
websocket URL.

Version
===

* Used the latest devstack to install OpenStack with the default
configuration, except for adding the serial console settings.
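
One way to rule out the proxy side (and narrow the problem down to Horizon)
is to connect directly to the URL returned by nova get-serial-console, e.g.
with the third-party websocket-client package; the URL and subprotocol list
below are placeholders/assumptions:

```
# Hedged check that nova-serialproxy answers outside of Horizon.
import websocket  # pip install websocket-client

URL = 'ws://<serialproxy-host>:6083/?token=<token-from-nova>'

ws = websocket.create_connection(URL, subprotocols=['binary', 'base64'])
ws.send('\r\n')    # nudge the guest so it prints a login prompt
print(ws.recv())   # raw console output if the proxy/console side is healthy
ws.close()
```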

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1623799

Title:
  Serial console does not show up on horizon dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Issue
  =

  The console tab in Horizon doesn't show the console of an instance.

  Steps to reproduce
  ==

  * Install nova-serialproxy and nova-consoleauth
  * Enable "serial console" feature in "nova.conf"
[vnc]
enabled=False
[serial_console]
enabled=True
base_url=ws://:6083/
serialproxy_host = 
proxyclient_address = 
  * Launch an instance
  * Open the "console" tab of that instance

  Expected behavior
  =

  The serial console of the instance should show up and allow the user to
  interact with it.

  Actual behavior
  ===

  * Blank screen (not black screen) without any other info.

  Logs & Env
  ==

  * No error/warning logs in Nova and Horizon
  * The Nova CLI nova get-serial-console  worked correctly and returned a
valid websocket URL.

  Version
  ===

  * Used the latest devstack to install OpenStack with the default
  configuration, except for adding the serial console settings.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1623799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623800] [NEW] Can't add exact count of fixed ips to port (regression)

2016-09-15 Thread Andrey Pavlov
Public bug reported:

environment: latest devstack. services: nova, glance, keystone, cinder, 
neutron, neutron-vpnaas, ec2-api
non-admin project.

We have a scenario where we create a port and then add two fixed IPs to it.
Now neutron adds only one fixed IP to the port, although as of this Monday
everything still worked. It looks like neutron now adds count-1 of the newly
passed fixed_ips.
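
A hedged reproduction sketch with python-neutronclient that mirrors the PUT
shown in the logs below (credentials and network/subnet IDs are placeholders):

```
# Hedged reproduction sketch; all credentials and IDs are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://10.10.0.4:5000/v2.0')

port = neutron.create_port(
    {'port': {'network_id': '<network-id>', 'name': 'eni-repro'}})['port']

# Keep the existing fixed IP and request two more on the same subnet.
fixed_ips = port['fixed_ips'] + [{'subnet_id': '<subnet-id>'},
                                 {'subnet_id': '<subnet-id>'}]
updated = neutron.update_port(port['id'],
                              {'port': {'fixed_ips': fixed_ips}})['port']

# Expected: 3 fixed IPs on the port.  Observed (see the response below): 2,
# i.e. one fewer than requested.
print(len(updated['fixed_ips']))
```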


logs:

2016-09-15 09:13:47.568 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] REQ: curl -g -i --cacert 
"/opt/stack/data/ca-bundle.pem" -X GET 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json -H 
"User-Agent: python-neutronclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}87af409cdd3b0396fa6954bdc181fddac54d823d" 
_http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:206
2016-09-15 09:13:47.627 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] RESP: [200] Content-Type: 
application/json Content-Length: 735 X-Openstack-Request-Id: 
req-4fcc4b7c-09d3-40b6-9332-a3974e422630 Date: Thu, 15 Sep 2016 06:13:47 GMT 
Connection: keep-alive 
RESP BODY: {"port": {"status": "DOWN", "created_at": "2016-09-15T06:13:46", 
"project_id": "c44a90bf24c14dcbac693c9bb8ac1923", "description": "", 
"allowed_address_pairs": [], "admin_state_up": true, "network_id": 
"93e7bdae-bb7b-4e3e-b33d-e80a561014ea", "tenant_id": 
"c44a90bf24c14dcbac693c9bb8ac1923", "extra_dhcp_opts": [], "updated_at": 
"2016-09-15T06:13:46", "name": "eni-30152657", "device_owner": "", 
"revision_number": 5, "mac_address": "fa:16:3e:12:34:dd", 
"port_security_enabled": true, "binding:vnic_type": "normal", "fixed_ips": 
[{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", "ip_address": 
"10.7.0.12"}], "id": "0be539d4-ed3c-4bba-8a25-9cb1641335ab", "security_groups": 
["2c51d398-1bd1-4084-8063-41bfe57788a4"], "device_id": ""}}
 _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:231
2016-09-15 09:13:47.628 14578 DEBUG neutronclient.v2_0.client 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] GET call to neutron for 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json used 
request id req-4fcc4b7c-09d3-40b6-9332-a3974e422630 _append_request_id 
/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py:127


2016-09-15 09:13:47.628 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] REQ: curl -g -i --cacert 
"/opt/stack/data/ca-bundle.pem" -X PUT 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json -H 
"User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}87af409cdd3b0396fa6954bdc181fddac54d823d" -d '{"port": {"fixed_ips": 
[{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", "ip_address": 
"10.7.0.12"}, {"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e"}, 
{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e"}]}}' _http_log_request 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:206
2016-09-15 09:13:48.014 14578 DEBUG keystoneclient.session 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] RESP: [200] Content-Type: 
application/json Content-Length: 816 X-Openstack-Request-Id: 
req-0c86f7d1-ce47-4c9e-b842-1aa37c2ca024 Date: Thu, 15 Sep 2016 06:13:48 GMT 
Connection: keep-alive 
RESP BODY: {"port": {"status": "DOWN", "created_at": "2016-09-15T06:13:46", 
"project_id": "c44a90bf24c14dcbac693c9bb8ac1923", "description": "", 
"allowed_address_pairs": [], "admin_state_up": true, "network_id": 
"93e7bdae-bb7b-4e3e-b33d-e80a561014ea", "tenant_id": 
"c44a90bf24c14dcbac693c9bb8ac1923", "extra_dhcp_opts": [], "updated_at": 
"2016-09-15T06:13:47", "name": "eni-30152657", "device_owner": "", 
"revision_number": 6, "mac_address": "fa:16:3e:12:34:dd", 
"port_security_enabled": true, "binding:vnic_type": "normal", "fixed_ips": 
[{"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", "ip_address": 
"10.7.0.12"}, {"subnet_id": "783bf3fa-a077-4a27-9fae-05a7e99fe19e", 
"ip_address": "10.7.0.9"}], "id": "0be539d4-ed3c-4bba-8a25-9cb1641335ab", 
"security_groups": ["2c51d398-1bd1-4084-8063-41bfe57788a4"], "device_id": ""}}
 _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:231
2016-09-15 09:13:48.015 14578 DEBUG neutronclient.v2_0.client 
[req-41aa5bf9-b166-4eb8-b415-cdf039598667 ef0282da2233468c8e715dbfd4385c80 
c44a90bf24c14dcbac693c9bb8ac1923 - - -] PUT call to neutron for 
http://10.10.0.4:9696/v2.0/ports/0be539d4-ed3c-4bba-8a25-9cb1641335ab.json used 
request id req-0c86f7d1-ce47-4c9e-b842-1aa37c2ca024