[Yahoo-eng-team] [Bug 1557457] Re: [RFE] rate-limit external connectivity traffic.

2016-06-20 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1557457

Title:
  [RFE] rate-limit external connectivity traffic.

Status in neutron:
  Expired

Bug description:
  I want to develop a rate-limit feature in a contrail-openstack
  context.

  My core requirement is to control the total rate of all the VMs in
  one project or tenant accessing the public/internet network,
  including both inbound and outbound public/internet traffic. If VM1
  accesses VM2 in the same tenant in the same data center, that
  traffic is not limited.

  The scenario is as follows:

  There are two or more networks in the project of customer A. Let's
  say there are only two networks in the project for now: Net1 and
  Net2. There are VMs in both networks. The VMs access the
  public/internet via their FIPs, FIP1 and FIP2. I want to limit the
  total bandwidth of FIP1 and FIP2 to 10 Mbit/s bidirectionally.

  In the contrail-openstack solution, there is one simple software
  gateway (VGW) which provides the ability to access the
  public/internet for the VMs. I did my tc test in this context.

  All the traffic accessing the public/internet network goes via the
  NIC of the VGW nodes. So my core idea is to use the tc tool to limit
  traffic according to FIPs.

  My preliminary test shows this is feasible. When I have finished it,
  I will update the script here.
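  As a rough sketch of the idea (the interface name, FIP addresses and
  the use of Python to drive tc are illustrative; inbound limiting
  would additionally need an ingress policer or an ifb device, which is
  omitted here):

      # Shape the combined egress bandwidth of two FIPs to 10 Mbit/s on
      # the external NIC of the VGW node using an HTB qdisc.
      import subprocess

      DEV = "eth0"                             # external NIC (example)
      FIPS = ["203.0.113.10", "203.0.113.11"]  # FIP1 and FIP2 (examples)

      def tc(*args):
          subprocess.check_call(["tc"] + list(args))

      tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb")
      # One class shared by all FIP traffic: 10 Mbit/s in total.
      tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1",
         "htb", "rate", "10mbit", "ceil", "10mbit")
      for fip in FIPS:
          # Outbound packets sourced from a FIP go through class 1:1.
          tc("filter", "add", "dev", DEV, "protocol", "ip", "parent", "1:",
             "prio", "1", "u32", "match", "ip", "src", fip + "/32",
             "flowid", "1:1")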

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1557457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470560] Re: OpenStack Kilo: unable to launch instance

2016-06-20 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1470560

Title:
  OpenStack Kilo: unable to launch instance

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When I create a virtual machine via Horizon, I get the following
  errors.

  I am using OpenStack Kilo and trying to boot images (RAW format)
  which are in Ceph.

  (1) The server has either erred or is incapable of performing the
  requested operation. (HTTP 500) (Request-ID:
  req-6235c2ce-05d3-42ba-a725-c20d491be46d)

  (2) Unable to launch instance

  There are no error logs in nova.
  No other logs are generated anywhere; everything is clear (no errors).

  Kindly guide me on this.

  Thanks a lot

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1470560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570510] Re: Create Image: Redundant Error Message

2016-06-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/329668
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=06e1272c7d52eb7d0de4d49737a22c65df5d4864
Submitter: Jenkins
Branch: master

commit 06e1272c7d52eb7d0de4d49737a22c65df5d4864
Author: Val W 
Date:   Thu Jun 16 14:43:02 2016 +

Relocated error message to associated field and corrected grammar

Validation error was displaying at the top of the form making it
seem as though it was a redundant error. Added validation so that
error appears below the affected field. Fixed the grammar of the
original error message (“a image” -> “an image”).

Change-Id: I7250a44138a3559ddf39ff9d7bde1478ecb3a143
Closes-Bug: 1570510
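
A minimal Django sketch (not the actual Horizon patch) of the behavior
the fix relies on: an error raised in clean_<field>() renders next to
that field, while an error raised in clean() renders as a form-level
alert at the top of the form.

    from django import forms

    class CreateImageForm(forms.Form):
        name = forms.CharField(required=False)

        def clean_name(self):
            name = self.cleaned_data.get("name")
            if not name:
                # Attached to the 'name' field and rendered below it,
                # rather than as a generic alert at the top of the form.
                raise forms.ValidationError("An image requires a name.")
            return name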


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1570510

Title:
  Create Image: Redundant Error Message

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  admin/images: '+ Create Image' ... if you submit the form without
  filling out any required fields, the proper form errors show, but a
  general 'alert' form error also appears. This is redundant and not
  needed: https://i.imgur.com/NEOb77o.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1570510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592834] Re: hz-dynamic-table inline batch actions not aligned

2016-06-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/329994
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=30e37aaf2cd76689c9cfe5a6b9d2127e6e76d837
Submitter: Jenkins
Branch: master

commit 30e37aaf2cd76689c9cfe5a6b9d2127e6e76d837
Author: Matt Borland 
Date:   Wed Jun 15 08:55:08 2016 -0600

Fix hz-dynamic-table formatting for magic-search and actions

Using https://review.openstack.org/#/c/309561/ as a guide, this patch
cleans up formatting for actions and the magic-search bar.  It:
 * Pads the action buttons a little; they are not part of a button-group.
 * Does not display the dangling empty facet list if there are no facets.
 * Aligns the batch actions to the right when placed inline with search.
 * Gives more room to the search bar

Change-Id: Iced01bb792ee36699e653bf3c332341bb097e80f
Closes-Bug: 1592834
Closes-Bug: 1592835


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1592834

Title:
  hz-dynamic-table inline batch actions not aligned

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  If you use hz-dynamic-table's batch actions inline with the search
  bar, it doesn't format them particularly well, placing them generally
  in the space given, not pushing them to the right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1592834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592835] Re: hz-dynamic-table has empty div when no facets

2016-06-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/329994
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=30e37aaf2cd76689c9cfe5a6b9d2127e6e76d837
Submitter: Jenkins
Branch: master

commit 30e37aaf2cd76689c9cfe5a6b9d2127e6e76d837
Author: Matt Borland 
Date:   Wed Jun 15 08:55:08 2016 -0600

Fix hz-dynamic-table formatting for magic-search and actions

Using https://review.openstack.org/#/c/309561/ as a guide, this patch
cleans up formatting for actions and the magic-search bar.  It:
 * Pads the action buttons a little; they are not part of a button-group.
 * Does not display the dangling empty facet list if there are no facets.
 * Aligns the batch actions to the right when placed inline with search.
 * Gives more room to the search bar

Change-Id: Iced01bb792ee36699e653bf3c332341bb097e80f
Closes-Bug: 1592834
Closes-Bug: 1592835


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1592835

Title:
  hz-dynamic-table has empty div when no facets

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  hz-dynamic-table displays a weird empty div when there are no facets
  and the user has clicked in the search area.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1592835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594593] Re: API tests are broken with 'TypeError: create_tenant() takes exactly 1 argument (2 given)'

2016-06-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/331867
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a0feab2d8ae20d3dca0a5762d3be1e095f9c1769
Submitter: Jenkins
Branch: master

commit a0feab2d8ae20d3dca0a5762d3be1e095f9c1769
Author: Assaf Muller 
Date:   Mon Jun 20 18:17:22 2016 -0400

Change addCleanup create_tenant to delete_tenant, fix gate

Tempest patch with Change-Id of:
I3fe7b6b7f81a0b20888b2c70a717065e4b43674f

Changed the v2 Keystone tenant API create_tenant to keyword
arguments. This broke our API tests that used create_tenant
with a tenant_id... It looks like the addCleanup that was supposed
to delete the newly created tenant actually created a second
tenant. The existing create_tenant calls were unaffected
by the Tempest change as it is backwards compatible.

Change-Id: Ie82c16ebf8dde988d68a01fc8dfa073085af4728
Closes-Bug: #1594593


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594593

Title:
  API tests are broken with 'TypeError: create_tenant() takes exactly 1
  argument (2 given)'

Status in neutron:
  Fix Released

Bug description:
  Tempest patch with Change-Id of:
  I3fe7b6b7f81a0b20888b2c70a717065e4b43674f

  Changed the v2 Keystone tenant API create_tenant to keyword arguments.
  This broke our API tests that used create_tenant with a tenant_id... It
  looks like the addCleanup that was supposed to delete the newly created
  tenant actually created a second tenant. The existing create_tenant
  calls were unaffected by the Tempest change as it is backwards
  compatible.
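
  A minimal sketch of the failure mode and the fix (the client object
  is a stand-in, not the real Tempest client):

      import unittest
      from unittest import mock

      class TenantTest(unittest.TestCase):
          client = mock.MagicMock()  # stands in for the keystone v2 client

          def test_cleanup_deletes_tenant(self):
              tenant = self.client.create_tenant(name="demo")
              # Buggy: the cleanup re-ran *create_tenant*, so teardown made
              # a second tenant, and the positional id broke once
              # create_tenant became keyword-only:
              #   self.addCleanup(self.client.create_tenant, tenant["id"])
              # Fixed: actually delete the tenant on cleanup.
              self.addCleanup(self.client.delete_tenant, tenant["id"])

      if __name__ == "__main__":
          unittest.main()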

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594604] [NEW] Nova does not delete or update neutron port for failed VMs

2016-06-20 Thread Arvind Somya
Public bug reported:

Environment:
Stable/Liberty with neutron ML2 (Mechanism Driver is a custom asynchronous 
driver based off the Opendaylight V2 driver)
No agents or OVS bridges in use.
One controller and network node and two compute nodes.

When a VM fails to start on any compute node, nova removes the host
binding from the VM but doesn't send a port update to neutron notifying
it about the host binding change.

When the same error-state VM is deleted, nova doesn't send any events to
neutron. As a result, the driver thinks the port is still active, owned
by an existing VM, and bound to the host properly.

NOTE: This behavior is only seen with VMs in the ERROR state; ports for
VMs in the ACTIVE state are deleted properly.

$ nova show vm1
+--------------------------------------+--------------------------------------+
| Property                             | Value                                |
+--------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                 |
| OS-EXT-AZ:availability_zone          | nova                                 |
| OS-EXT-SRV-ATTR:host                 | -                                    |
| OS-EXT-SRV-ATTR:hostname             | vm1                                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                    |
| OS-EXT-SRV-ATTR:instance_name        | instance-077a                        |
| OS-EXT-SRV-ATTR:kernel_id            | 72d053dc-aae8-4559-a7c4-107c980bd674 |
| OS-EXT-SRV-ATTR:launch_index         | 0                                    |
| OS-EXT-SRV-ATTR:ramdisk_id           | 3d25fd54-ffbc-4370-b971-2196a7963c24 |
| OS-EXT-SRV-ATTR:reservation_id       | r-pujfofv9                           |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                             |
| OS-EXT-SRV-ATTR:user_data            | -                                    |
| OS-EXT-STS:power_state               | 0                                    |
| OS-EXT-STS:task_state                | -                                    |
| OS-EXT-STS:vm_state                  | error                                |
| OS-SRV-USG:launched_at               | -                                    |

[Yahoo-eng-team] [Bug 1594593] [NEW] API tests are broken with 'TypeError: create_tenant() takes exactly 1 argument (2 given)'

2016-06-20 Thread Assaf Muller
Public bug reported:

Tempest patch with Change-Id of:
I3fe7b6b7f81a0b20888b2c70a717065e4b43674f

Changed the v2 Keystone tenant API create_tenant to keyword arguments.
This broke our API tests that used create_tenant with a tenant_id... It
looks like the addCleanup that was supposed to delete the newly created
tenant actually created a second tenant. The existing create_tenant
calls were unaffected by the Tempest change as it is backwards
compatible.

** Affects: neutron
 Importance: Critical
 Assignee: Assaf Muller (amuller)
 Status: In Progress


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594593

Title:
  API tests are broken with 'TypeError: create_tenant() takes exactly 1
  argument (2 given)'

Status in neutron:
  In Progress

Bug description:
  Tempest patch with Change-Id of:
  I3fe7b6b7f81a0b20888b2c70a717065e4b43674f

  Changed the v2 Keystone tenant API create_tenant to keyword arguments.
  This broke our API tests that used create_tenant with a tenant_id... It
  looks like the addCleanup that was supposed to delete the newly created
  tenant actually created a second tenant. The existing create_tenant
  calls were unaffected by the Tempest change as it is backwards
  compatible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594592] [NEW] federated_user table fails functional test if db engine is MyISAM

2016-06-20 Thread Guang Yee
Public bug reported:

094_add_federated_user_table.py fails the functional test if the default
db engine for MySQL is MyISAM. We need to follow the established pattern
of adding the following

mysql_engine='InnoDB',
mysql_charset='utf8'

to the script during table creation.
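
A minimal sketch of the pattern (columns abbreviated; the real migration
has more):

    import sqlalchemy as sql

    def upgrade(migrate_engine):
        meta = sql.MetaData()
        meta.bind = migrate_engine
        federated_table = sql.Table(
            'federated_user', meta,
            sql.Column('id', sql.Integer, primary_key=True, nullable=False),
            sql.Column('user_id', sql.String(64), nullable=False),
            # Force the engine/charset instead of inheriting the server
            # default (which may be MyISAM, without foreign-key support):
            mysql_engine='InnoDB',
            mysql_charset='utf8')
        federated_table.create(migrate_engine, checkfirst=True)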

Here's an example of one of the failures.

keystone.tests.unit.test_sql_upgrade.MySQLOpportunisticUpgradeTestCase.test_migration_96_constraint_exists
--------------------------------------------------------------------------------------------------------

Captured traceback:
~~~
Traceback (most recent call last):
  File "keystone/tests/unit/test_sql_upgrade.py", line 1113, in test_migration_96_constraint_exists
    self.upgrade(95)
  File "keystone/tests/unit/test_sql_upgrade.py", line 262, in upgrade
    self._migrate(*args, **kwargs)
  File "keystone/tests/unit/test_sql_upgrade.py", line 276, in _migrate
    self.schema_.runchange(ver, change, changeset.step)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/schema.py", line 93, in runchange
    change.run(self.engine, step)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/migrate/versioning/script/py.py", line 148, in run
    script_func(engine)
  File "/home/gyee/projects/keystone/keystone/common/sql/migrate_repo/versions/094_add_federated_user_table.py", line 39, in upgrade
    federated_table.create(migrate_engine, checkfirst=True)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 725, in create
    checkfirst=checkfirst)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1856, in _run_visitor
    conn._run_visitor(visitorcallable, element, **kwargs)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1481, in _run_visitor
    **kwargs).traverse_single(element)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/visitors.py", line 121, in traverse_single
    return meth(obj, **kw)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 764, in visit_table
    include_foreign_key_constraints=include_foreign_key_constraints
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
    return meth(self, multiparams, params)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
    return connection._execute_ddl(self, multiparams, params)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 968, in _execute_ddl
    compiled
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
    context)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1337, in _handle_dbapi_exception
    util.raise_from_cause(newraise, exc_info)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 202, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
    context)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
    cursor.execute(statement, parameters)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/pymysql/cursors.py", line 161, in execute
    result = self._query(query)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/pymysql/cursors.py", line 317, in _query
    conn.query(q)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/pymysql/connections.py", line 835, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/pymysql/connections.py", line 1019, in _read_query_result
    result.read()
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/pymysql/connections.py", line 1302, in read
    first_packet = self.connection._read_packet()
  File "/home/gyee/projects/keystone/.tox/py27/local/lib/python2.7/site-packages/pymysql/connections.py", line 981, in _read_packet

[Yahoo-eng-team] [Bug 1588593] Re: If the Neutron IPAM driver is set, using the 'net-delete' command to delete a network created when ipam_driver was not set seems to cause an infinite loop.

2016-06-20 Thread Carl Baldwin
We never really provided an official migration.  Some vendors, like
InfoBlox, have an unofficial one in order to facilitate migrating to
their drivers.  The reason for this is that the internal driver doesn't
provide any advantage over the non-pluggable implementation; it is
effectively equivalent.

We are planning an unconditional migration in Newton so that the built-in
implementation will be removed entirely.  I would sit tight.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588593

Title:
  If the Neutron IPAM driver is set, using the 'net-delete' command to
  delete a network created when ipam_driver was not set seems to cause
  an infinite loop.

Status in neutron:
  Won't Fix

Bug description:
  In Mitaka,

  When ipam_driver was not set, I created a network with a subnet. Then
  I switched to the reference implementation of the Neutron IPAM driver
  by setting ipam_driver = 'internal', and used the 'net-delete' command
  to delete the network that had been created while ipam_driver was not
  set. The command seems to cause an infinite loop.

  
  1) Specifying ‘ipam_driver = ’ in the neutron.conf file, I created a network with a subnet:
  [root@localhost devstack]# neutron net-create net_vlan_01 --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 2
  Created a new network:
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | availability_zone_hints   |                                      |
  | availability_zones        |                                      |
  | created_at                | 2016-06-03T02:42:50                  |
  | description               |                                      |
  | id                        | 666f8a6a-e3e3-4183-84b3-a43c92b050f5 |
  | ipv4_address_scope        |                                      |
  | ipv6_address_scope        |                                      |
  | mtu                       | 1500                                 |
  | name                      | net_vlan_01                          |
  | port_security_enabled     | True                                 |
  | provider:network_type     | vlan                                 |
  | provider:physical_network | physnet1                             |
  | provider:segmentation_id  | 2                                    |
  | qos_policy_id             |                                      |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | tags                      |                                      |
  | tenant_id                 | 69fa49e368d340679ab3d05de3426bfa     |
  | updated_at                | 2016-06-03T02:42:50                  |
  +---------------------------+--------------------------------------+
  [root@localhost devstack]# neutron subnet-create net_vlan_01 --name subnet_vlan_01 101.1.1.0/24
  Created a new subnet:
  +-------------------+----------------------------------------------+
  | Field             | Value                                        |
  +-------------------+----------------------------------------------+
  | allocation_pools  | {"start": "101.1.1.2", "end": "101.1.1.254"} |
  | cidr              | 101.1.1.0/24                                 |
  | created_at        | 2016-06-03T02:42:56                          |
  | description       |                                              |
  | dns_nameservers   |                                              |
  | enable_dhcp       | True                                         |
  | gateway_ip        | 101.1.1.1                                    |
  | host_routes       |                                              |
  | id                | 1c60dbd7-ae1e-4d7c-a767-ec3106cc62ad         |
  | ip_version        | 4                                            |
  | ipv6_address_mode |                                              |
  | ipv6_ra_mode      |                                              |
  | name              | subnet_vlan_01                               |
  | network_id        | 666f8a6a-e3e3-4183-84b3-a43c92b050f5         |
  | subnetpool_id     |                                              |
  | tenant_id         | 69fa49e368d340679ab3d05de3426bfa             |
  | updated_at        | 2016-06-03T02:42:56                          |
  +-------------------+----------------------------------------------+
  [root@localhost devstack]# neutron net-list
  

[Yahoo-eng-team] [Bug 1594576] [NEW] cc_salt_minion behaves badly if apt-get update fails

2016-06-20 Thread Ross Vandegrift
Public bug reported:

I'm using cloud-init to set up some salt-minion config like so:

 #cloud-config
 salt_minion:
   conf:
     grains:
       my_grain_1: a
       my_grain_2: b

cc_salt_minion.py triggers apt-get update.  If apt-get update fails,
then cc_salt_minion does not update /etc/salt/minion, but nonetheless
continues.  My automation depends on finding these grains, and salt-
minion is already built in.

Please consider the attached patch, which will skip the install if a
flag is included.  Alternatively, cc_salt_minion could write out
/etc/salt/minion even if apt-get fails.
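
Roughly, the second option could look like this (a sketch with assumed
names, not the attached patch):

    import yaml

    from cloudinit import util

    def handle(name, cfg, cloud, log, _args):
        s_cfg = cfg.get('salt_minion')
        if s_cfg is None:
            return
        try:
            cloud.distro.install_packages(['salt-minion'])
        except Exception:
            # Don't abort: the config (and its grains) is still wanted.
            log.warning("salt-minion install failed; writing config anyway")
        if 'conf' in s_cfg:
            # Write /etc/salt/minion regardless of the install outcome.
            util.write_file('/etc/salt/minion', yaml.dump(s_cfg['conf']))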

Ross

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Patch added: "add knob to disable salt-minion installation"
   
https://bugs.launchpad.net/bugs/1594576/+attachment/4687510/+files/cc-salt-minion-skip-install.diff

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1594576

Title:
  cc_salt_minion behaves badly if apt-get update fails

Status in cloud-init:
  New

Bug description:
  I'm using cloud-init to set up some salt-minion config like so:

   #cloud-config
   salt_minion:
     conf:
       grains:
         my_grain_1: a
         my_grain_2: b

  cc_salt_minion.py triggers apt-get update.  If apt-get update fails,
  then cc_salt_minion does not update /etc/salt/minion, but nonetheless
  continues.  My automation depends on finding these grains, and salt-
  minion is already built in.

  Please consider the attached patch, which will skip the install if a
  flag is included.  Alternatively, cc_salt_minion could write out
  /etc/salt/minion even if apt-get fails.

  Ross

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1594576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594571] [NEW] Propose fix for test_update_port_with_multiple_ip_mac_address_pair

2016-06-20 Thread sunny
Public bug reported:

*High level description: Since the order in which we receive the allowed
address pairs doesn't matter, we should remove the order dependency in
the _update_port_with_address method. The proposed solution is to sort
the allowed address pairs and then perform the assert on them.

*Perceived severity: High
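
A minimal sketch of the proposed assertion (helper name is illustrative):

    def assert_address_pairs_equal(test, expected, actual):
        # Sort both sides so the comparison is order-independent.
        key = lambda p: (p['ip_address'], p.get('mac_address') or '')
        test.assertEqual(sorted(expected, key=key), sorted(actual, key=key))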

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: update-port-with-multiple-ip-mac-address-pair

** Description changed:

- *High level description: Since the order in which we receive the allowed 
address pair doesn't 
- matter , we should remove the order dependency in _update_port_with_address 
- method. The proposed solution to sort the allowed address pairs and 
- then perform assert on them.
+ *High level description: Since the order in which we receive the allowed
+ address pair doesn't matter , we should remove the order dependency in
+ _update_port_with_address method. The proposed solution to sort the
+ allowed address pairs and then perform assert on them.
  
  *Perceived severity: High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594571

Title:
  Propose fix for test_update_port_with_multiple_ip_mac_address_pair

Status in neutron:
  New

Bug description:
  *High level description: Since the order in which we receive the
  allowed address pairs doesn't matter, we should remove the order
  dependency in the _update_port_with_address method. The proposed
  solution is to sort the allowed address pairs and then perform the
  assert on them.

  *Perceived severity: High

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593295] Re: incorrect nova api documentation for revert resize

2016-06-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/330672
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ce01ef30e32feff289474047808220f61fbfff16
Submitter: Jenkins
Branch: master

commit ce01ef30e32feff289474047808220f61fbfff16
Author: Matthew Edmonds 
Date:   Thu Jun 16 12:14:42 2016 -0400

fix errors in revert resize api docs

The revert resize action's documentation appears to have been copied
from the documentation for confirm resize, and missed replacing the
word 'confirm' with 'revert' in one case. This fixes that.

It also gives incorrect information about the states involved through
the revert process. This also corrects that.

Change-Id: Ib2436da238a4a7b71454ecfee81ede4054b3018e
Closes-Bug: #1593295


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1593295

Title:
  incorrect nova api documentation for revert resize

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The API documentation for revert resize here:
  http://developer.openstack.org/api-ref/compute/?expanded=revert-resized-server-revertresize-action-detail#revert-resized-server-revertresize-action
  makes several statements which appear to have been incorrect copies of
  statements for the confirm resize API. E.g.:

  "You can only confirm the resized server where the status is
  VERIFY_RESIZE and the vm_status is RESIZED."

  and possibly also:

  "If the server status remains RESIZED, the request failed. Ensure you
  meet the preconditions and run the request again. If the request fails
  again, investigate the compute back end."

  since I hope you can revert a resize that never actually made it to
  status RESIZED, in which case there should be more to it than that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1593295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594546] [NEW] no need to write systemd.link files

2016-06-20 Thread Scott Moser
Public bug reported:

When fixing bug 1579130, we made cloud-init rename devices itself,
rather than relying on the systemd.link files to do that.

cloud-init was still writing .link files like:
 /etc/systemd/network/50-cloud-init-ens2.link

That leads to just a confusing situation as cloud-init will trump
any renaming systemd does in all cases.

We'd like to get to a place where cloud-init allows the user to later customize
the name of the devices in a supported manner, but for the moment, these files
only create confusion.

Related Bugs:
 * 1579130:  need to support renaming of devices in container and on first boot

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1594546

Title:
  no need to write systemd.link files

Status in cloud-init:
  New

Bug description:
  When fixing bug 1579130, we made cloud-init rename devices itself,
  rather than relying on the systemd.link files to do that.

  cloud-init was still writing .link files like:
   /etc/systemd/network/50-cloud-init-ens2.link

  That leads to just a confusing situation as cloud-init will trump
  any renaming systemd does in all cases.

  We'd like to get to a place where cloud-init allows the user to later 
customize
  the name of the devices in a supported manner, but for the moment, these files
  only create confusion.

  Related Bugs:
   * 1579130:  need to support renaming of devices in container and on first 
boot

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1594546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594529] [NEW] VM creation failure due to Nova hugepage assumptions

2016-06-20 Thread Paul Michali
Public bug reported:

Description:

In Liberty and Mitaka, Nova assumes that it has exclusive access to the
huge pages on the compute node. It keeps track of the total pages per
NUMA node on the compute node, and of the number of pages used (by Nova
VMs) on each NUMA node. This is done for each of the three supported
huge page sizes.

However, if other third party processes consume huge pages, there will
be a discrepancy between the actual pages available and what Nova thinks
is available. As a result, it is possible (based on the number of pages
and the VM size) for Nova to think it has enough pages, when there are
not enough pages. The create will fail with QEMU reporting insufficient
memory available, for example.


Steps to reproduce:

1. Compute with 32768 2MB pages available, giving 16384 per NUMA node with two 
nodes.
2. Third party process that consumes 256 pages per NUMA node.
3. Create 15 small flavor (2GB = 1024 pages) VMs.
4. Create another small flavor VM.

Expected Result:

That the 16th VM would be created, without an error, and using huge
pages on the second NUMA node (and allow more VMs as well).

Actual Result:

After step 3, Nova thinks there are 1024 pages available, but the
compute host shows only 768 pages available. The scheduler thinks there
is space for one more VM, it will pass the filter. The creation will
commence, as Nova thinks there is enough space on NUMA node 0. QEMU will
fail, indicating that there is not enough memory.

In addition, there are 16128 pages available on NUMA node 1, but Nova
will not attempt using them, as it thinks there is still memory
available on NUMA node 0.

In my case, I had multiple compute hosts and ended up with a "No hosts
available" error, as it fails on each host when trying NUMA node 0. If,
at step 4, one creates a medium flavor VM, it will succeed, as Nova will
not see enough pages on NUMA node 0, and will try NUMA node 1, which has
ample space.

Commentary: Nova checks total huge pages, but not available huge pages.
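
Worked numbers for the scenario above (in 2MB pages, NUMA node 0 only):

    total = 16384                 # pages Nova believes the node has
    third_party = 256             # pages consumed outside Nova's accounting
    used_by_nova = 15 * 1024      # fifteen 2GB "small" VMs

    nova_thinks_free = total - used_by_nova              # 1024: filter passes
    actually_free = total - third_party - used_by_nova   # 768: QEMU fails
    print(nova_thinks_free, actually_free)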

Note: A feature was added to master (for Newton) that has a config-based
mechanism to reserve huge pages for third-party applications, under bug
1543149. However, the Nova team indicated that this change cannot be
backported to Liberty.

Environment:

Liberty release (12.0.3), with LB, neutron networking, libvirt 1.2.17,
API QEMU 1.2.17, QEMU 2.3.0.

Config:

nova flavor-key m1.small set hw:numa_nodes=1
nova flavor-key m1.small set hw:mem_page_size=2048

network, subnet, and standard VM create commands.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1594529

Title:
  VM creation failure due to Nova hugepage assumptions

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:

  In Liberty and Mitaka, Nova assumes that it has exclusive access to
  the huge pages on the compute node. It keeps track of the total pages
  per NUMA node on the compute node, and of the number of pages used (by
  Nova VMs) on each NUMA node. This is done for each of the three
  supported huge page sizes.

  However, if other third party processes consume huge pages, there will
  be a discrepancy between the actual pages available and what Nova
  thinks is available. As a result, it is possible (based on the number
  of pages and the VM size) for Nova to think it has enough pages, when
  there are not enough pages. The create will fail with QEMU reporting
  insufficient memory available, for example.

  
  Steps to reproduce:

  1. Compute with 32768 2MB pages available, giving 16384 per NUMA node with 
two nodes.
  2. Third party process that consumes 256 pages per NUMA node.
  3. Create 15 small flavor (2GB = 1024 pages) VMs.
  4. Create another small flavor VM.

  Expected Result:

  That the 16th VM would be created, without an error, and using huge
  pages on the second NUMA node (and allow more VMs as well).

  Actual Result:

  After step 3, Nova thinks there are 1024 pages available, but the
  compute host shows only 768 pages available. The scheduler thinks
  there is space for one more VM, it will pass the filter. The creation
  will commence, as Nova thinks there is enough space on NUMA node 0.
  QEMU will fail, indicating that there is not enough memory.

  In addition, there are 16128 pages available on NUMA node 1, but Nova
  will not attempt using them, as it thinks there is still memory
  available on NUMA node 0.

  In my case, I had multiple compute hosts and ended up with a "No hosts
  available" error, as it fails on each host when trying NUMA node 0.
  If, at step 4, one creates a medium flavor VM, it will succeed, as
  Nova will not see enough pages on NUMA node 0, and will try NUMA node
  1, which has ample space.

  Commentary: Nova checks total huge pages, but not available huge
  pages.

  Note: A feature was added to master (for 

[Yahoo-eng-team] [Bug 1594484] [NEW] Remove admin role name 'admin' hardcode

2016-06-20 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/323953
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/horizon" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 0a8b2062dbde7f1a69a5f6cff52fb4f6a6effe61
Author: Paul Karikh 
Date:   Tue Sep 30 14:53:21 2014 +0400

Remove admin role name 'admin' hardcode

Because of hardcoding name as the 'admin' was impossible to
use administrative panel with a custom administrative role name.
This fix replaces hardcoding the name of the administrative role
with RBAC policy check.

DocImpact
Related commit: https://review.openstack.org/#/c/123745/
Change-Id: I05c8fc750c56f6f6bb49a435662e821eb0d6ba30
Closes-Bug: #1161144
(cherry picked from commit ce5fb26bf5f431f0cdaa6860a732338db868a8fb)

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: doc horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1594484

Title:
  Remove admin role name 'admin' hardcode

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  https://review.openstack.org/323953
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/horizon" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 0a8b2062dbde7f1a69a5f6cff52fb4f6a6effe61
  Author: Paul Karikh 
  Date:   Tue Sep 30 14:53:21 2014 +0400

  Remove admin role name 'admin' hardcode
  
  Because of hardcoding name as the 'admin' was impossible to
  use administrative panel with a custom administrative role name.
  This fix replaces hardcoding the name of the administrative role
  with RBAC policy check.
  
  DocImpact
  Related commit: https://review.openstack.org/#/c/123745/
  Change-Id: I05c8fc750c56f6f6bb49a435662e821eb0d6ba30
  Closes-Bug: #1161144
  (cherry picked from commit ce5fb26bf5f431f0cdaa6860a732338db868a8fb)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1594484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594482] [NEW] list_services API filtered by name can't find the service when using list_limit

2016-06-20 Thread Roxana Gherle
Public bug reported:

/services?name= API can't find the service when using list_limit 
configuration.
Before setting list_limit in keystone.conf the following API call behaves 
correctly: 

stack@mitaka2:/opt/stack/keystone$ curl -H "X-Auth-Token: $TOK" http://mitaka2.com:5000/v3/services?name=keystone | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   318  100   318    0     0     95      0  0:00:03  0:00:03 --:--:--    95
{
    "links": {
        "next": null,
        "previous": null,
        "self": "http://mitaka2.com/identity/v3/services?name=keystone"
    },
    "services": [
        {
            "enabled": true,
            "id": "f7ef63607b8542e0a7cb9a9b1b119c25",
            "links": {
                "self": "http://mitaka2.com/identity/v3/services/f7ef63607b8542e0a7cb9a9b1b119c25"
            },
            "name": "keystone",
            "type": "identity"
        }
    ]
}

After setting list_limit=3 in the Default section in keystone.conf, the
API can't find the service any more:

stack@mitaka2:/opt/stack/keystone$ curl -H "X-Auth-Token: $TOK" http://mitaka2.com:5000/v3/services?name=keystone | python -mjson.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   143  100   143    0     0     43      0  0:00:03  0:00:03 --:--:--    43
{
    "links": {
        "next": null,
        "previous": null,
        "self": "http://mitaka2.com/identity/v3/services?name=keystone"
    },
    "services": [],
    "truncated": true
}

It seems like the list is truncated before applying the name filter.
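
A toy illustration of that ordering (not the keystone code):

    services = [{"name": "nova"}, {"name": "glance"}, {"name": "cinder"},
                {"name": "keystone"}]
    list_limit = 3

    # Truncate first, then filter: the match is lost.
    print([s for s in services[:list_limit] if s["name"] == "keystone"])  # []

    # Filter first, then truncate: the service is found.
    print([s for s in services if s["name"] == "keystone"][:list_limit])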

** Affects: keystone
 Importance: Undecided
 Assignee: Roxana Gherle (roxana-gherle)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Roxana Gherle (roxana-gherle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1594482

Title:
  list_services API filtered by name can't find the service when using
  list_limit

Status in OpenStack Identity (keystone):
  New

Bug description:
  /services?name= API can't find the service when using list_limit 
configuration.
  Before setting list_limit in keystone.conf the following API call behaves 
correctly: 

  stack@mitaka2:/opt/stack/keystone$ curl -H "X-Auth-Token: $TOK" http://mitaka2.com:5000/v3/services?name=keystone | python -mjson.tool
    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
  100   318  100   318    0     0     95      0  0:00:03  0:00:03 --:--:--    95
  {
      "links": {
          "next": null,
          "previous": null,
          "self": "http://mitaka2.com/identity/v3/services?name=keystone"
      },
      "services": [
          {
              "enabled": true,
              "id": "f7ef63607b8542e0a7cb9a9b1b119c25",
              "links": {
                  "self": "http://mitaka2.com/identity/v3/services/f7ef63607b8542e0a7cb9a9b1b119c25"
              },
              "name": "keystone",
              "type": "identity"
          }
      ]
  }

  After setting list_limit=3 in the Default section in keystone.conf,
  the API can't find the service any more:

  stack@mitaka2:/opt/stack/keystone$ curl -H "X-Auth-Token: $TOK" http://mitaka2.com:5000/v3/services?name=keystone | python -mjson.tool
    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
  100   143  100   143    0     0     43      0  0:00:03  0:00:03 --:--:--    43
  {
      "links": {
          "next": null,
          "previous": null,
          "self": "http://mitaka2.com/identity/v3/services?name=keystone"
      },
      "services": [],
      "truncated": true
  }

  It seems like the list is truncated before applying the name filter.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1594482/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1592985] Re: Instance snapshots no longer filterable

2016-06-20 Thread Travis Tripp
See comments here on Patchset 7:
https://review.openstack.org/#/c/317741/

With a fresh devstack today (2016-06-20) I was able to successfully
create snapshots from nova.  I don't know if it was a change in nova /
glance, etc or if the steps I took made the difference.  As a note, I
used Ubuntu 14.04 server image and ensured that it was fully booted
before creating a snapshot. I also logged into the image and created a
file in tmp.  Doing this I was able to test a variety of scenarios for
the facet.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1592985

Title:
  Instance snapshots no longer filterable

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I need help in verifying that this is in fact a bug (confirm that
  behavior has changed) and if so, then we need to consider the
  ramifications.

  Nova has historically set image_type to snapshot when creating
  snapshots. This is a custom property (not a core glance property).
  This property is ONLY set if the image is of type snapshot.

  Horizon uses this property to filter for snapshots using the Glance
  client and we were also adding support for it in Searchlight [0]

  Using a devstack as of Jun 13th, 2016 I can not seem to get nova to
  create a snapshot where that image_type property is set.  I've tried
  this from CLI and horizon.

  Whenever I create the snapshot from the instance it comes through as
  an image initially. Sometimes it seems to stay and sometimes it
  immediately goes to the deleted state.

  [0] https://bugs.launchpad.net/searchlight/+bug/1569485
  [1] https://review.openstack.org/#/c/317741/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1592985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594466] [NEW] Additional params needed for openstack rc file

2016-06-20 Thread Matt Borland
Public bug reported:

There are some additional parameters needed for the openstack rc files,
which are generated for download at: Access & Security > Download
(appropriate) RC file.

There are two kinds of files generated right now: Keystone v2 and
Keystone v3. Which ones you see as options depends on your
installation/configuration.

Due to changes for Keystone v3, there are some new exports that should be 
honored:
OS_INTERFACE
OS_IDENTITY_API_VERSION
OS_AUTH_VERSION

They should be produced in both the v2 and v3 versions of the file for
compatibility purposes.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1594466

Title:
  Additional params needed for openstack rc file

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are some additional parameters needed for the openstack rc
  files, which are generated for download at: Access & Security >
  Download (appropriate) RC file.

  There are two kinds of files generated right now: Keystone v2 and
  Keystone v3. Which ones you see as options depends on your
  installation/configuration.

  Due to changes for Keystone v3, there are some new exports that should be 
honored:
  OS_INTERFACE
  OS_IDENTITY_API_VERSION
  OS_AUTH_VERSION

  They should be produced in both the v2 and v3 versions of the file for
  compatibility purposes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1594466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594439] [NEW] Bad initialization sequence in Neutron agents (and maybe somewhere else)

2016-06-20 Thread Yuriy Taraday
Public bug reported:

TL;DR: running threads that modify global state (which can be any RPC
calls, or any other library calls) in the background while forking
happens may cause weird hard-to-debug errors, so background jobs should
be initialized after forking (or, even better, in a separate process).

Currently at least the metadata and l3 agents start background threads
that periodically report state before forking and running the main loop
in child processes. Those threads can modify global state so that it is
inconsistent at the time of forking, which will lead to bad service
behavior. In an ideal world the main process shouldn't do anything
except manage child processes, to avoid any global state leaking into
child processes (open fds, locked locks, etc.).

This bug was uncovered during an investigation into a weird, seemingly
unrelated error in the grenade job in Ann's CR that was changing to the
new oslo.db EngineFacade [0]. The symptoms were: SSHTimeout while the
instance's cloud-init is busy trying to get metadata and failing. In the
q-meta logs we noticed that there were no INFO messages about incoming
HTTP requests, but there were some DEBUG ones, which means that requests
were coming to the agent but were not being responded to. Digging deeper
we noticed that in normal operation the metadata agent should:

- receive request;
- do RPC call to neutron-server;
- do HTTP request to Nova;
- send response.

There were no RPC "CALL" in logs, only CASTs from state reporting and
the very first one CALL for state reporting.

Since it's hard to reproduce gate jobs, especially multinode ones, we've
created another CR [1] that added tracing of every Python LOC to see
what really happens. You can find the long log with all the tracing at
[2] or in the attachment (logs don't live forever). It uncovered the
following chain of events:

- the main thread in the main process starts a background thread for state
reporting;
- some time later that thread starts reporting and wants to do the first
CALL (it does a CALL once and then it does CASTs);
- to get 'reply_q' (essentially, the connection shared for replies, IIUC),
it acquires a lock;
- since there are no connections available at that time (it's the first
time RPC is used), oslo_messaging starts connecting to RabbitMQ;
- the background thread yields execution to the main thread;
- the main thread forks a bunch of WSGI workers;
- in the WSGI workers, when requests come in, the handler tries to do a
CALL to neutron-server;
- to get 'reply_q' it tries to acquire the lock, but it has already been
"taken" by the background thread in the main process;
- it hangs forever, which can be seen in the Guru Meditation Report.
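
A minimal standalone sketch of the hazard (illustrative Python, not the
agent or oslo.messaging code):

    import os
    import threading
    import time

    lock = threading.Lock()

    def reporter():
        with lock:            # e.g. held while the first connection is set up
            time.sleep(60)

    threading.Thread(target=reporter, daemon=True).start()
    time.sleep(0.1)           # let the background thread grab the lock

    if os.fork() == 0:
        # The child inherits the *locked* lock but not the thread that
        # would release it, so this blocks forever.
        lock.acquire()
        print("never reached")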

There are several problems here (including eventlet having not fork-
aware locks), but the one that can be fixed to fix them all is to start
such background threads after all forking happens. I've published CR [3]
to verify that changing initialization order will fix this issue, and it
did.

Note that, from what I've been told, forks can still happen if a child
worker unexpectedly dies and the main process re-forks it. To properly
fix this issue we should not do anything in the main process that can
spoil global state in a way that influences child processes. It means
that we'll need to either run state reporting in a separate process or
have an isolated oslo_messaging environment (a context? I am not an
expert in oslo_messaging) for it.

[0] https://review.openstack.org/312393
[1] https://review.openstack.org/331485
[2] 
http://logs.openstack.org/85/331485/5/check/gate-grenade-dsvm-neutron-multinode/1c65056/logs/new/screen-q-meta.txt.gz
[3] https://review.openstack.org/331672

** Affects: neutron
 Importance: Undecided
 Assignee: Yuriy Taraday (yorik-sar)
 Status: In Progress

** Attachment added: "Logs with tracing"
   
https://bugs.launchpad.net/bugs/1594439/+attachment/4687361/+files/screen-q-meta.txt.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594439

Title:
  Bad initialization sequence in Neutron agents (and maybe somewhere
  else)

Status in neutron:
  In Progress

Bug description:
  TL;DR: running threads that modify global state (which can be any RPC
  calls, or any other library calls) in the background while forking
  happens may cause weird hard-to-debug errors, so background jobs should
  be initialized after forking (or, even better, in a separate process).

  Currently at least the metadata and l3 agents start background threads
  that periodically report state before forking and running the main loop
  in child processes. Those threads can modify global state so that it is
  inconsistent at the time of forking, which will lead to bad service
  behavior. In an ideal world the main process shouldn't do anything
  except manage child processes, to avoid any global state leaking into
  child processes (open fds, locked locks, etc.).

  This bug was uncovered during investigation into weird seemingly
  unrelated error in grenade job in Ann's CR that was changing 

[Yahoo-eng-team] [Bug 1539766] Re: trust redelegation allows trustee to create a trust (with impersonation set to true) from a redelegated trust (with impersonation set to false)

2016-06-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/330045
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=89d513595c0a2c828a36ec721ccfdfdd77e6bfb0
Submitter: Jenkins
Branch: master

commit 89d513595c0a2c828a36ec721ccfdfdd77e6bfb0
Author: Mikhail Nikolaenko 
Date:   Wed Jun 15 15:58:26 2016 +

Validate impersonation in trust redelegation

Forbids trustee to create a trust (with impersonation set to true) from
a redelegated trust (with impersonation set to false).

Change-Id: I53a593a2056c8e8fa0292a806c3b4b48c16ad7fd
Closes-Bug: #1539766


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1539766

Title:
  trust redelegation allows trustee to create a trust (with
  impersonation set to true) from a redelegated trust (with
  impersonation set to false)

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When creating a redelegated trust in keystone, if the original trust
  did not allow impersonation, the redelegated trust should not be
  allowed to create a new trust with impersonation set to true.
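
  A minimal sketch of the check this implies (names illustrative, not
  the keystone code):

      def validate_redelegated_impersonation(parent_trust, new_trust):
          # A redelegated trust must not escalate impersonation beyond
          # what the parent trust granted.
          if new_trust['impersonation'] and not parent_trust['impersonation']:
              raise ValueError(
                  "Impersonation not allowed by the parent trust")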

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1539766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1237711] Re: Creating instance on network with no subnet: no error message

2016-06-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/257296
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=feb0ad027fbe105a9f291507dad0f84fff0ae13d
Submitter: Jenkins
Branch: master

commit feb0ad027fbe105a9f291507dad0f84fff0ae13d
Author: Itxaka 
Date:   Mon Dec 14 12:55:57 2015 +0100

Exclude networks with no subnets angular

Nova doesnt allow to boot from a network which has
no subnet, so we should not show those networks on the
new instance launch angular panel.
On the python launch instance this was solved in patch

https://github.com/openstack/horizon/commit/1b6807baf385d6e5768f149fa8e4d07bc24ebff1

Change-Id: I8b94d45e95f8e22b579d04f6cec7345d947f8e12
Closes-Bug: #1237711
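
A minimal sketch of that exclusion (illustrative, not the actual patch):

    # Only offer networks that have at least one subnet in the
    # launch-instance panel.
    def networks_with_subnets(networks):
        return [net for net in networks if net.get('subnets')]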


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1237711

Title:
  Creating instance on network with no subnet: no error message

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When trying to launch an instance on a network without any subnet the
  creation fails. No error message is provided even though it is clear
  the issue is due to the lack of a subnet. No entry visible in the log
  for that instance.

  nova scheduler log:
  --
  l2013-10-09 15:14:35.249 INFO nova.scheduler.filter_scheduler 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Attempting to build 1 
instance(s) uuids: [u'0d2a3866-23b0-4f85-9689-f4b37877e950']
  2013-10-09 15:14:35.279 INFO nova.scheduler.filter_scheduler 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Choosing host 
WeighedHost [host: kraken-vc1-ubuntu1, weight: 252733.0] for instance 
0d2a3866-23b0-4f85-9689-f4b37877e950
  2013-10-09 15:14:38.028 INFO nova.scheduler.filter_scheduler 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] Attempting to build 1 
instance(s) uuids: [u'0d2a3866-23b0-4f85-9689-f4b37877e950']
  2013-10-09 15:14:38.030 ERROR nova.scheduler.filter_scheduler 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] [instance: 
0d2a3866-23b0-4f85-9689-f4b37877e950] Error from last host: kraken-vc1-ubuntu1 
(node domain-c21(kraken-vc1)): [u'Traceback (most recent call last):\n', u'  
File "/opt/stack/nova/nova/compute/manager.py", line 1039, in _build_instance\n 
   set_access_ip=set_access_ip)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 1412, in _spawn\n
LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 1409, in _spawn\n
block_device_info)\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 623, in spawn\n
admin_password, network_info, block_device_info)\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 243, in spawn\n
vif_infos = _get_vif_infos()\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 227, in _get_vif_infos\n   
 for vif 
 in network_info:\n', u'  File "/opt/stack/nova/nova/network/model.py", line 
375, in __iter__\nreturn self._sync_wrapper(fn, *args, **kwargs)\n', u'  
File "/opt/stack/nova/nova/network/model.py", line 366, in _sync_wrapper\n
self.wait()\n', u'  File "/opt/stack/nova/nova/network/model.py", line 398, in 
wait\nself[:] = self._gt.wait()\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in 
wait\nreturn self._exit_event.wait()\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 120, in wait\n 
   current.throw(*self._exc)\n', u'  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in 
main\nresult = function(*args, **kwargs)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 1230, in 
_allocate_network_async\ndhcp_options=dhcp_options)\n', u'  File 
"/opt/stack/nova/nova/network/api.py", line 49, in wrapper\nres = f(self, 
context, *args, **kwargs)\n', u'  File "/o
 pt/stack/nova/nova/network/neutronv2/api.py", line 315, in 
allocate_for_instance\nraise exception.SecurityGroupCannotBeApplied()\n', 
u'SecurityGroupCannotBeApplied: Network requires port_security_enabled and 
subnet associated in order to apply security groups.\n']
  2013-10-09 15:14:38.055 WARNING nova.scheduler.driver 
[req-df928b03-ae40-4d34-ae6c-00160f59dc3c admin admin] [instance: 
0d2a3866-23b0-4f85-9689-f4b37877e950] Setting instance to ERROR state

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1237711/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594405] Re: test_secure_client fails

2016-06-20 Thread Darek Smigiel
*** This bug is a duplicate of bug 1593647 ***
https://bugs.launchpad.net/bugs/1593647

** This bug has been marked a duplicate of bug 1593647
   TestDesignateClient.test_secure_client fails with AssertionError: Expected 
call: mock(verify='...')

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594405

Title:
  test_secure_client fails

Status in neutron:
  New

Bug description:
  This test is failing in ubuntu package builds.  It was added via:
  https://review.openstack.org/#/c/330817/

  ==
  Failed 1 tests - output below:
  ==

  
neutron.tests.unit.plugins.ml2.extensions.test_dns_integration.TestDesignateClient.test_secure_client
  
-

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"/��PKGBUILDDIR��/neutron/tests/unit/plugins/ml2/extensions/test_dns_integration.py",
 line 522, in test_secure_client
  driver.session.Session.assert_called_with(verify=self.TEST_CA_CERT)
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 925, in 
assert_called_with
  raise AssertionError('Expected call: %s\nNot called' % (expected,))
  AssertionError: Expected call: 
mock(verify='8e8f09ecf81e4e898a41a2153e7b6a0d')
  Not called

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594405] [NEW] test_secure_client fails

2016-06-20 Thread Corey Bryant
Public bug reported:

This test is failing in ubuntu package builds.  It was added via:
https://review.openstack.org/#/c/330817/

==
Failed 1 tests - output below:
==

neutron.tests.unit.plugins.ml2.extensions.test_dns_integration.TestDesignateClient.test_secure_client
-

Captured traceback:
~~~
Traceback (most recent call last):
  File 
"/��PKGBUILDDIR��/neutron/tests/unit/plugins/ml2/extensions/test_dns_integration.py",
 line 522, in test_secure_client
driver.session.Session.assert_called_with(verify=self.TEST_CA_CERT)
  File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 925, in 
assert_called_with
raise AssertionError('Expected call: %s\nNot called' % (expected,))
AssertionError: Expected call: 
mock(verify='8e8f09ecf81e4e898a41a2153e7b6a0d')
Not called

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594405

Title:
  test_secure_client fails

Status in neutron:
  New

Bug description:
  This test is failing in ubuntu package builds.  It was added via:
  https://review.openstack.org/#/c/330817/

  ==
  Failed 1 tests - output below:
  ==

  
neutron.tests.unit.plugins.ml2.extensions.test_dns_integration.TestDesignateClient.test_secure_client
  
-

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"/��PKGBUILDDIR��/neutron/tests/unit/plugins/ml2/extensions/test_dns_integration.py",
 line 522, in test_secure_client
  driver.session.Session.assert_called_with(verify=self.TEST_CA_CERT)
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 925, in 
assert_called_with
  raise AssertionError('Expected call: %s\nNot called' % (expected,))
  AssertionError: Expected call: 
mock(verify='8e8f09ecf81e4e898a41a2153e7b6a0d')
  Not called

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594385] [NEW] Nova does not report signature verification failure

2016-06-20 Thread Dane Fichter
Public bug reported:

When verifying Glance image signatures, Nova does not report a
meaningful error message to the end user. Instead, Nova gives the end
user a "No hosts available" message.

How to verify this behavior:
- Enable Nova's verify_glance_signatures configuration flag
- Upload an image to Glance with incorrect or missing signature metadata
- Attempt to boot an instance of this image via the Nova CLI

You should get an error message with the text "No valid host was found.
There are not enough hosts available". This does not describe the
failure and will lead the end user to think there is an issue with the
storage on the compute node.
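
For reference, a hedged example of enabling the verification flag named
above in nova.conf (the option group shown here is an assumption and may
vary by release):

    [glance]
    verify_glance_signatures = True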

** Affects: nova
 Importance: Undecided
 Assignee: Dane Fichter (dane-fichter)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Dane Fichter (dane-fichter)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1594385

Title:
  Nova does not report signature verification failure

Status in OpenStack Compute (nova):
  New

Bug description:
  When verifying Glance image signatures, Nova does not report a
  meaningful error message to the end user. Instead, Nova gives the end
  user a "No hosts available" message.

  How to verify this behavior:
  - Enable Nova's verify_glance_signatures configuration flag
  - Upload an image to Glance with incorrect or missing signature metadata
  - Attempt to boot an instance of this image via the Nova CLI

  You should get an error message with the text "No valid host was
  found. There are not enough hosts available". This does not describe
  the failure and will lead the end user to think there is an issue with
  the storage on the compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1594385/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594276] Re: Endpoint does not support RPC version 2.0. Attempted method: update_service_capabilities

2016-06-20 Thread John Davidge
This sounds like an issue with installed services running different
versions, rather than a bug. Have you tried asking for support on
https://ask.openstack.org/en/questions/ ? You may be able to get help
there.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594276

Title:
  Endpoint does not support RPC version 2.0. Attempted method:
  update_service_capabilities

Status in neutron:
  Invalid

Bug description:
  I am confronted with a strange problem... I have OpenStack Mitaka
  running on Ubuntu 16.04. I have an HA deployment with 2 controllers
  and 3 compute nodes. I am using Open vSwitch. Everything is working,
  but after some days I have a network issue on one compute node. I am
  unable to reproduce the problem. But when I try to ping a floating ip
  that is assigned to a VM on this compute node, I get no reply. I have
  already verified that all ping requests are received by the tap
  interface.

  I have an error in the neutron-openvswitch-agent.log:
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 2.0. 
Attempted method: update_service_capabilities
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 194, 
in _dispatch
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher raise 
UnsupportedVersion(version, method=method)
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 2.0. Attempted 
method: update_service_capabilities
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 

  I have restarted nova-compute and neutron-openvswitch-agent on the
  compute node, but the problem remains.

  Any ideas?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594377] [NEW] resize does not resize ephemeral disks

2016-06-20 Thread Matthew Booth
Public bug reported:

Nova resize does not resize ephemeral disks. I have tested this with the
default qcow2 backend, but I expect it to be true for all backends.

I have created 2 flavors:

| OS-FLV-DISABLED:disabled   | False  |
| OS-FLV-EXT-DATA:ephemeral  | 1  |
| disk   | 1  |
| extra_specs| {} |
| id | test-1 |
| name   | test-1 |
| os-flavor-access:is_public | True   |
| ram| 256|
| rxtx_factor| 1.0|
| swap   | 1  |
| vcpus  | 1  |

and:

| OS-FLV-DISABLED:disabled   | False  |
| OS-FLV-EXT-DATA:ephemeral  | 2  |
| disk   | 2  |
| extra_specs| {} |
| id | test-2 |
| name   | test-2 |
| os-flavor-access:is_public | True   |
| ram| 512|
| rxtx_factor| 1.0|
| swap   | 2  |
| vcpus  | 2  |

I boot an instance with flavor test-1 with:

$ nova boot --flavor test-1 --image cirros foo

It creates instance directory 3fab0565-2eb1-4fd9-933b-4e1d80b1b18c
containing (amongst non-disk files) disk, disk.eph0, disk.swap, and
disk.config. disk.config is not relevant here.

I check the sizes of each of these disks:

instances]$ for disk in disk disk.eph0 disk.swap; do qemu-img info
3fab0565-2eb1-4fd9-933b-4e1d80b1b18c/$disk; done

image: 3fab0565-2eb1-4fd9-933b-4e1d80b1b18c/disk
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 10M
cluster_size: 65536
backing file: 
/home/mbooth/data/nova/instances/_base/1ba6fbdbe52377ff7e075c3317a48205ac6c28c4
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 3fab0565-2eb1-4fd9-933b-4e1d80b1b18c/disk.eph0
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 324K
cluster_size: 65536
backing file: /home/mbooth/data/nova/instances/_base/ephemeral_1_40d1d2c
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 3fab0565-2eb1-4fd9-933b-4e1d80b1b18c/disk.swap
file format: qcow2
virtual size: 1.0M (1048576 bytes)
disk size: 196K
cluster_size: 65536
backing file: /home/mbooth/data/nova/instances/_base/swap_1
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

I resize foo with:

$ nova resize foo test-2 --poll

I check the sizes again:

instances]$ for disk in disk disk.eph0 disk.swap; do qemu-img info
3fab0565-2eb1-4fd9-933b-4e1d80b1b18c/$disk; done

image: 3fab0565-2eb1-4fd9-933b-4e1d80b1b18c/disk
file format: qcow2
virtual size: 2.0G (2147483648 bytes)
disk size: 26M
cluster_size: 65536
backing file: 
/home/mbooth/data/nova/instances/_base/1ba6fbdbe52377ff7e075c3317a48205ac6c28c4
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 3fab0565-2eb1-4fd9-933b-4e1d80b1b18c/disk.eph0
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 384K
cluster_size: 65536
backing file: /home/mbooth/data/nova/instances/_base/ephemeral_1_40d1d2c
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

image: 3fab0565-2eb1-4fd9-933b-4e1d80b1b18c/disk.swap
file format: qcow2
virtual size: 2.0M (2097152 bytes)
disk size: 196K
cluster_size: 65536
backing file: /home/mbooth/data/nova/instances/_base/swap_2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

Note that the root and swap disks have been resized, but the ephemeral
disk has not. This is caused by 2 bugs.

Firstly, there is some code in finish_migration in the libvirt driver
which purports to resize disks. This code is actually a no-op, because
disk resizing has already been done by _create_image, which called
cache() with the correct size, and therefore did the resizing. However,
as noted in a comment, the no-op code would not have covered our
ephemeral disk anyway, as it only loops over 'disk.local', which is the
legacy disk naming.

Secondly, _create_image does not iterate over ephemeral disks at all
when called by finish_migration, because finish_migration explicitly
passes block_device_info=None.
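
A minimal sketch of the missing step (illustrative names only, not the
driver code): once finish_migration iterates the ephemeral disks, each
one could be grown to the new flavor's size, e.g. via qemu-img:

    import subprocess

    def resize_ephemerals(instance_dir, eph_disks, new_size_gb):
        """Grow each ephemeral qcow2 image to the new flavor size."""
        for name in eph_disks:  # e.g. ['disk.eph0']
            path = '%s/%s' % (instance_dir, name)
            subprocess.check_call(
                ['qemu-img', 'resize', path, '%dG' % new_size_gb])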

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1594377

Title:
  resize does not resize ephemeral disks

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova resize does not resize ephemeral disks. I have tested this with
  the default qcow2 backend, but I expect it to be true for all
  backends.

  I have created 2 flavors:

  | OS-FLV-DISABLED:disabled   | 

[Yahoo-eng-team] [Bug 1594376] [NEW] Delete subnet fails with "ObjectDeletedError: Instance '' has been deleted, or its row is otherwise not present."

2016-06-20 Thread Matt Riedemann
Public bug reported:

Seen here:

http://logs.openstack.org/91/327191/15/check/gate-tempest-dsvm-neutron-
dvr/ed57c36/logs/screen-q-svc.txt.gz?level=TRACE#_2016-06-20_02_06_12_987

2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
[req-3870a7f4-ae6c-4710-b55f-7f2179c3d48a tempest-NetworksTestDHCPv6-2051920777 
-] delete failed
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 78, in resource
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 549, in delete
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in 
__exit__
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
self.force_reraise()
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 571, in _delete
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 600, in inner
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource return f(self, 
context, *args, **kwargs)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1058, in 
delete_subnet
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource LOG.debug("Port 
%s deleted concurrently", a.port_id)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 
237, in __get__
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource return 
self.impl.get(instance_state(instance), dict_)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 
578, in get
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource value = 
state._load_expired(state, passive)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/state.py", line 474, in 
_load_expired
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
self.manager.deferred_scalar_loader(self, toload)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 669, 
in load_scalar_attributes
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource raise 
orm_exc.ObjectDeletedError(state)
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource ObjectDeletedError: 
Instance '' has been deleted, or its row is 
otherwise not present.
2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 

Looks like it's mostly failure but not 100%, might depend on the job.

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ObjectDeletedError%3A%20Instance%20'%3CIPAllocation%5C%22%20AND%20message%3A%5C%22has%20been%20deleted%2C%20or%20its%20row%20is%20otherwise%20not%20present.%5C%22%20AND%20message%3A%5C%22delete_subnet%5C%22%20AND%20tags%3A%5C%22screen-q-svc.txt%5C%22%20AND%20voting%3A1=7d
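
A hedged sketch of the failure mode (names assumed, not the neutron fix):
reading a.port_id after the row is gone triggers a lazy load, which is
what raises ObjectDeletedError; guarding the attribute access would avoid
failing the whole delete:

    from sqlalchemy.orm import exc as orm_exc

    def log_concurrent_port_delete(allocation, log):
        try:
            log.debug("Port %s deleted concurrently", allocation.port_id)
        except orm_exc.ObjectDeletedError:
            # the IPAllocation row (and its port id) vanished under us
            log.debug("Port deleted concurrently")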

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594376

Title:
  Delete subnet fails with "ObjectDeletedError: Instance '' has been deleted, or its row is otherwise not
  present."

Status in neutron:
  New

Bug description:
  Seen here:

  http://logs.openstack.org/91/327191/15/check/gate-tempest-dsvm-
  neutron-
  dvr/ed57c36/logs/screen-q-svc.txt.gz?level=TRACE#_2016-06-20_02_06_12_987

  2016-06-20 

[Yahoo-eng-team] [Bug 1594371] [NEW] Docs for keystone recommend deprecated memcache backend

2016-06-20 Thread Dr. Jens Rosenboom
Public bug reported:

At http://docs.openstack.org/developer/keystone/configuration.html
#cache-configuration-section there is a recommendation to use

backend = keystone.cache.memcache_pool

however this seems to be deprecated in the code:

WARNING oslo_log.versionutils [-] Deprecated:
keystone.cache.memcache_pool backend is deprecated as of Mitaka in favor
of oslo_cache.memcache_pool backend and may be removed in N.
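
Per the deprecation message, the documentation should presumably point at
the oslo.cache backend instead; a hedged keystone.conf example:

    [cache]
    backend = oslo_cache.memcache_pool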

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1594371

Title:
  Docs for keystone recommend deprecated memcache backend

Status in OpenStack Identity (keystone):
  New

Bug description:
  At http://docs.openstack.org/developer/keystone/configuration.html
  #cache-configuration-section there is a recommendation to use

  backend = keystone.cache.memcache_pool

  however this seems to be deprecated in the code:

  WARNING oslo_log.versionutils [-] Deprecated:
  keystone.cache.memcache_pool backend is deprecated as of Mitaka in
  favor of oslo_cache.memcache_pool backend and may be removed in N.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1594371/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593647] Re: TestDesignateClient.test_secure_client fails with AssertionError: Expected call: mock(verify='...')

2016-06-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/331541
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=3703b31eea06d5055a7fc63cec7a24b2aaa1c563
Submitter: Jenkins
Branch:master

commit 3703b31eea06d5055a7fc63cec7a24b2aaa1c563
Author: Ihar Hrachyshka 
Date:   Mon Jun 20 09:38:13 2016 +0200

tests: clean up designate client session mock on test exit

This test was modifying the driver session method without making an
effort to restore the original value after test case completion. As a
result, when two consecutive tests that relied on the method ran in the
same test thread, one of them failed.

Changed the setup logic for the test class to use mock.patch(..).start()
instead.

Closes-Bug: #1593647
Change-Id: I08be90691b5417025c40c5a18308d820dc7a43d2
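
A minimal sketch of the mock.patch(..).start() pattern the commit message
describes (the class and patch target here are illustrative, not the
neutron test code):

    import unittest
    import mock

    class ExampleTest(unittest.TestCase):
        def setUp(self):
            super(ExampleTest, self).setUp()
            # the neutron test patches the designate driver's
            # session.Session; a stand-in target is patched here
            patcher = mock.patch('os.path.exists', return_value=True)
            self.exists_mock = patcher.start()
            # restore the original attribute on test exit, pass or fail
            self.addCleanup(patcher.stop)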


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593647

Title:
  TestDesignateClient.test_secure_client fails with AssertionError:
  Expected call: mock(verify='...')

Status in neutron:
  Fix Released

Bug description:
  The test sometimes fails in gate:

  http://logs.openstack.org/30/271830/8/check/gate-neutron-
  python27/4b77bb6/testr_results.html.gz

  ft507.2: 
neutron.tests.unit.plugins.ml2.extensions.test_dns_integration.TestDesignateClient.test_secure_client_StringException:
 Traceback (most recent call last):
    File "neutron/tests/unit/plugins/ml2/extensions/test_dns_integration.py", 
line 558, in test_secure_client
  driver.session.Session.assert_called_with(verify=self.TEST_CA_CERT)
    File 
"/home/jenkins/workspace/gate-neutron-python27/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 925, in assert_called_with
  raise AssertionError('Expected call: %s\nNot called' % (expected,))
  AssertionError: Expected call: mock(verify='d7302899d10b4e8381f2345b966fd299')
  Not called

  Logstash:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22in%20test_secure_client%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517820] Re: glance api in openstack-dashboard can't support v2 glance service

2016-06-20 Thread Timur Sufiev
Closing the bug as Invalid to avoid duplicate patches and waste of
efforts.

Sharat, please look at https://review.openstack.org/#/c/320039/
(implementation of the blueprint referenced above) - your feedback and
suggestions would be very welcome there.

** Changed in: horizon
   Status: In Progress => Invalid

** Changed in: horizon
 Assignee: Sharat Sharma (sharat-sharma) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1517820

Title:
  glance api in openstack-dashboard can't support v2 glance service

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Reproduce procedure:
  1. glance service is configured to only support v2:
  # Allow access to version 1 of glance api
  #enable_v1_api=True
  enable_v1_api=False

  # Allow access to version 2 of glance api
  #enable_v2_api=True

  2. Configure openstack-dashboard to use v2 API for image service:
  OPENSTACK_API_VERSIONS = {
  "image": 2
  }

  3. Access the image page, and got error "Error: Unable to retrieve images.":
  http://xxx/dashboard/project/images/

  4. dashboards still uses v1 client to communicate with glance service:
  httpd log:
  [Thu Nov 19 17:38:12.372453 2015] [:error] [pid 30350] Recoverable error: 
HTTPMultipleChoices (HTTP 300) Requested version of OpenStack Images API is not 
available.

  glance log:
  2015-11-19 17:46:36.173 29679 WARNING 
glance.api.middleware.version_negotiation [-] Unknown version. Returning 
version choices.

  5. From the openstack-dashboard code, although the function glanceclient
accepts a "version" parameter, the function image_list_detailed doesn't
provide it:

  @memoized
  def glanceclient(request, version='1'):   # <-- accepts the "version" parameter
      url = base.url_for(request, 'image')
      insecure = getattr(settings, 'OPENSTACK_SSL_NO_VERIFY', False)
      cacert = getattr(settings, 'OPENSTACK_SSL_CACERT', None)
      return glance_client.Client(version, url, token=request.user.token.id,
                                  insecure=insecure, cacert=cacert)

  def image_list_detailed(request, marker=None, sort_dir='desc',
                          sort_key='created_at', filters=None, paginate=False):
      images_iter = glanceclient(request).images.list(page_size=request_size,
                                                      limit=limit,
                                                      **kwargs)
      # ^-- doesn't pass the specified version

  Some other functions have a similar problem, such as image_get,
  image_delete, etc.
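
  A hedged sketch of the version passthrough being asked for here
  (illustrative only; the actual work happens in the blueprint review
  referenced above):

      from django.conf import settings

      def glance_api_version():
          # honor OPENSTACK_API_VERSIONS, falling back to v1
          versions = getattr(settings, 'OPENSTACK_API_VERSIONS', {})
          return str(versions.get('image', 1))

      # call sites would then use:
      #   glanceclient(request, version=glance_api_version())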

  Please advise,
  Thanks,
  Tony

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1517820/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488111] Re: Boot from volumes that fail in initialize_connection are not rescheduled

2016-06-20 Thread Samuel Matzek
** Also affects: mitaka (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: mitaka (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488111

Title:
  Boot from volumes that fail in initialize_connection are not
  rescheduled

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  Version: OpenStack Liberty

  Boot from volumes that fail in volume initialize_connection are not
  rescheduled.  Initialize connection failures can be very host-specific
  and in many cases the boot would succeed if the instance build was
  rescheduled to another host.

  The instance is not rescheduled because the initialize_connection is being 
called down this stack:
  nova.compute.manager _build_resources
  nova.compute.manager _prep_block_device
  nova.virt.block_device attach_block_devices
  nova.virt.block_device.DriverVolumeBlockDevice.attach

  When this fails an exception is thrown which lands in this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L1740
  and throws an InvalidBDM exception which is caught by this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2110

  this in turn throws a BuildAbortException which causes the instance to not be 
rescheduled by landing the flow in this block:
  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L2004

  To fix this we likely need a different exception thrown from
  nova.virt.block_device.DriverVolumeBlockDevice.attach when the failure
  is in initialize_connection, and then work back up the stack to ensure
  that when this different exception is thrown a BuildAbortException is
  not thrown, so the reschedule can happen.
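
  A minimal sketch of that approach (the exception name is an assumption,
  not the merged fix):

      class VolumeConnectionFailed(Exception):
          """Host-specific attach failure; safe to reschedule elsewhere."""

      def attach(volume, connector, initialize_connection):
          try:
              return initialize_connection(volume, connector)
          except Exception:
              # surface a distinct type so the compute manager can
              # reschedule instead of aborting the build
              raise VolumeConnectionFailed()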

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594320] Re: Can't create security group

2016-06-20 Thread Turbo Fredriksson
Apparently the problem was me. I had removed "auth_host" and
"auth_protocol" (and not set "auth_uri" or "identity_uri") which
apparently made it default to "https://127.0.0.1:35357".
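
For reference, a hedged example of the settings involved (host names
illustrative):

    [keystone_authtoken]
    auth_uri = http://controller:5000
    identity_uri = http://controller:35357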

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1594320

Title:
  Can't create security group

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  - s n i p -
  bladeA01b:~# openstack security group create --description "Allow incoming 
ICMP connections." icmp
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) 
(Request-ID: req-e6ea8936-e20a-4b47-854c-bae4d881fc89)
  - s n i p -

  - s n i p -
  ==> /var/log/nova/nova-api.log <==
  2016-06-20 11:40:36.204 16023 WARNING keystonemiddleware.auth_token [-] Using 
the in-process token cache is deprecated as of the 4.2.0 release and may be 
removed in the 5.0.0 release or the 'O' development cycle. The in-process cache 
causes inconsistent results and high memory usage. When the feature is removed 
the auth_token middleware will not cache tokens by default which may result in 
performance issues. It is recommended to use  memcache for the auth_token token 
cache by setting the memcached_servers option.
  2016-06-20 11:40:36.205 16023 WARNING oslo_config.cfg [-] Option 
"memcached_servers" from group "DEFAULT" is deprecated for removal.  Its value 
may be silently ignored in the future.
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver 
[req-e6ea8936-e20a-4b47-854c-bae4d881fc89 9a2b35cadef54b24a231c4f47f07b371 
db39ce688efb4a5bba1e0d3dd682cce6 - - -] Neutron Error creating security group 
icmp
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver Traceback (most recent call last):
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver   File 
"/usr/lib/python2.7/dist-packages/nova/network/security_group/neutron_driver.py",
 line 52, in create_security_group
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver body).get('security_group')
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 97, in 
with_params
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver ret = self.function(instance, 
*args, **kwargs)
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 853, in 
create_security_group
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver return 
self.post(self.security_groups_path, body=body)
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 363, in 
post
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver headers=headers, params=params)
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 298, in 
do_request
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver 
self._handle_fault_response(status_code, replybody, resp)
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 273, in 
_handle_fault_response
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver 
exception_handler_v20(status_code, error_body)
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 84, in 
exception_handler_v20
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver request_ids=request_ids)
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver InternalServerError: The server has 
either erred or is incapable of performing the requested operation.
  2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
  2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
  2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
  2016-06-20 11:40:37.960 16023 ERROR 
nova.network.security_group.neutron_driver Neutron server returns request_ids: 
['req-4bfbf32a-1bf0-4336-bd56-ddb80ca1098a']
  2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
  2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions 
[req-e6ea8936-e20a-4b47-854c-bae4d881fc89 

[Yahoo-eng-team] [Bug 1594320] [NEW] Can't create security group

2016-06-20 Thread Turbo Fredriksson
Public bug reported:

- s n i p -
bladeA01b:~# openstack security group create --description "Allow incoming ICMP 
connections." icmp
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 500) 
(Request-ID: req-e6ea8936-e20a-4b47-854c-bae4d881fc89)
- s n i p -

- s n i p -
==> /var/log/nova/nova-api.log <==
2016-06-20 11:40:36.204 16023 WARNING keystonemiddleware.auth_token [-] Using 
the in-process token cache is deprecated as of the 4.2.0 release and may be 
removed in the 5.0.0 release or the 'O' development cycle. The in-process cache 
causes inconsistent results and high memory usage. When the feature is removed 
the auth_token middleware will not cache tokens by default which may result in 
performance issues. It is recommended to use  memcache for the auth_token token 
cache by setting the memcached_servers option.
2016-06-20 11:40:36.205 16023 WARNING oslo_config.cfg [-] Option 
"memcached_servers" from group "DEFAULT" is deprecated for removal.  Its value 
may be silently ignored in the future.
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver 
[req-e6ea8936-e20a-4b47-854c-bae4d881fc89 9a2b35cadef54b24a231c4f47f07b371 
db39ce688efb4a5bba1e0d3dd682cce6 - - -] Neutron Error creating security group 
icmp
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver 
Traceback (most recent call last):
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
 File 
"/usr/lib/python2.7/dist-packages/nova/network/security_group/neutron_driver.py",
 line 52, in create_security_group
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
   body).get('security_group')
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
 File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 97, 
in with_params
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
   ret = self.function(instance, *args, **kwargs)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
 File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
853, in create_security_group
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
   return self.post(self.security_groups_path, body=body)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
 File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
363, in post
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
   headers=headers, params=params)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
 File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
298, in do_request
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
   self._handle_fault_response(status_code, replybody, resp)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
 File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 
273, in _handle_fault_response
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
   exception_handler_v20(status_code, error_body)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
 File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 84, 
in exception_handler_v20
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver  
   request_ids=request_ids)
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver 
InternalServerError: The server has either erred or is incapable of performing 
the requested operation.
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver 
Neutron server returns request_ids: ['req-4bfbf32a-1bf0-4336-bd56-ddb80ca1098a']
2016-06-20 11:40:37.960 16023 ERROR nova.network.security_group.neutron_driver
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions 
[req-e6ea8936-e20a-4b47-854c-bae4d881fc89 9a2b35cadef54b24a231c4f47f07b371 
db39ce688efb4a5bba1e0d3dd682cce6 - - -] Unexpected exception in API method
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-06-20 11:40:37.963 16023 ERROR nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1593652] Re: When created subnet for a network failed, it seems that error info returned by neutronclient didn't return network and subnet info

2016-06-20 Thread John Davidge
It seems to me that the problem is with how Horizon is reporting the
error. The python-neutronclient will not return a subnet_id as no subnet
is created. It could be changed to return the network_id, but seeing as
that is included in the subnet-create request I'm not sure it's
necessary.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593652

Title:
  When created subnet for a network failed, it seems that error info
  returned by neutronclient didn't return network and subnet info

Status in neutron:
  Invalid

Bug description:
  Created a subnet associated with subnetpool_1 for a network through the
  Horizon web UI; creating another subnet associated with subnetpool_2
  returned error info as follows:

  Error: Failed to create subnet "" for network "None": Subnets hosted
  on the same network must be allocated from the same subnet pool.
  Neutron server returns request_ids: ['req-
  cad13014-6fe6-4db4-99e6-750de19fbf85']

  " subnet "" " and " network "None" " appeared in the error info above
  is not friendly,it seems that neutronclient didn't return network and
  subnet info caused this.

  [root@localhost devstack]# neutron net-show net_xwj
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones| nova |
  | created_at| 2016-06-16T01:52:55  |
  | description   |  |
  | id| 49fbc0cf-1d0a-4fbf-a2fc-e21264752760 |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 1450 |
  | name  | net_xwj  |
  | port_security_enabled | True |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 1076 |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   | e4fadcb7-ac99-4c14-82e2-1195c3618779 |
  |   | 8868eaed-55c5-4707-b41e-8798cc8b9cc9 |
  | tags  |  |
  | tenant_id | d692c40b9bd74af38361d855bebb60ac |
  | updated_at| 2016-06-16T01:52:56  |
  +---+--+
  [root@localhost devstack]# neutron subnet-show  
e4fadcb7-ac99-4c14-82e2-1195c3618779
  +---+-+
  | Field | Value   |
  +---+-+
  | allocation_pools  | {"start": "110.1.0.2", "end": "110.1.0.14"} |
  | cidr  | 110.1.0.0/28|
  | created_at| 2016-06-17T03:18:27 |
  | description   | |
  | dns_nameservers   | |
  | enable_dhcp   | True|
  | gateway_ip| 110.1.0.1   |
  | host_routes   | |
  | id| e4fadcb7-ac99-4c14-82e2-1195c3618779|
  | ip_version| 4   |
  | ipv6_address_mode | |
  | ipv6_ra_mode  | |
  | name  | subnet_xwj_01   |
  | network_id| 49fbc0cf-1d0a-4fbf-a2fc-e21264752760|
  | subnetpool_id | 4547f41d-c4cd-4336-9138-642dd2187b5b|
  | tenant_id | d692c40b9bd74af38361d855bebb60ac|
  | updated_at| 2016-06-17T03:18:27 |
  +---+-+
  [root@localhost devstack]# neutron subnet-show 
8868eaed-55c5-4707-b41e-8798cc8b9cc9
  +---+--+
  | Field | Value|
  

[Yahoo-eng-team] [Bug 1593846] Re: Fix designate dns driver for SSL based endpoints

2016-06-20 Thread John Davidge
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593846

Title:
  Fix designate dns driver for SSL based endpoints

Status in neutron:
  New
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/330817
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit c705e2f9f6c7b4a9db4a80a764268e490ea41f01
  Author: imran malik 
  Date:   Wed Jun 8 02:45:32 2016 -0700

  Fix designate dns driver for SSL based endpoints
  
  Allow setting options in designate section to specify if want
  to skip SSL cert check. This makes it possible to work with HTTPS
  based endpoints, the default behavior of keystoneclient is to always
  set verify=True however in current code, one cannot either provide
  a valid CA cert or skip the verification.
  
  DocImpact: Introduce two additional options for `[designate]` section
  in neutron.conf
  CONF.designate.insecure to allow insecure connections over SSL.
  CONF.designate.ca_cert for a valid cert when connecting over SSL
  
  Change-Id: Ic371cc11d783618c38ee40a18206b0c2a197bb3e
  Closes-Bug: #1588067
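
A hedged neutron.conf example of the two options the DocImpact note
introduces (values illustrative):

    [designate]
    insecure = False
    ca_cert = /etc/ssl/certs/designate-ca.pem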

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594249] Re: Update of dashboard fails on Xenial

2016-06-20 Thread Matthias Runge
This is not a Horizon bug itself; this is a distribution issue.

** Changed in: horizon
   Status: New => Invalid

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1594249

Title:
  Update of dashboard fails on Xenial

Status in Ubuntu Cloud Archive:
  New

Bug description:
  I am currently trying to update the horizon dashboard on Ubuntu 16.04
  running OpenStack Mitaka using the new versions coming in from the
  package repository.

  aptitude update && aptitude safe-upgrade
  Get: 1 http://archive.ubuntu.com/ubuntu xenial-proposed InRelease [247 kB]
  Hit http://mirror2.hs-esslingen.de/mariadb/repo/10.1/ubuntu xenial InRelease
  Hit http://de.archive.ubuntu.com/ubuntu xenial InRelease
  Hit http://ppa.launchpad.net/vbernat/haproxy-1.6/ubuntu xenial InRelease
  Get: 2 http://de.archive.ubuntu.com/ubuntu xenial-updates InRelease [94.5 kB]
  Get: 3 http://de.archive.ubuntu.com/ubuntu xenial-backports InRelease [92.2 
kB]
  Get: 4 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
  Hit http://download.ceph.com/debian-jewel xenial InRelease
  Hit http://www.rabbitmq.com/debian testing InRelease
  Get: 5 http://de.archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages 
[213 kB]
  Get: 6 http://de.archive.ubuntu.com/ubuntu xenial-updates/main i386 Packages 
[209 kB]
  Get: 7 http://de.archive.ubuntu.com/ubuntu xenial-updates/universe amd64 
Packages [96.7 kB]
  Get: 8 http://de.archive.ubuntu.com/ubuntu xenial-updates/universe i386 
Packages [93.9 kB]
  Fetched 1,141 kB in 1s (1,011 kB/s)
  W: http://download.ceph.com/debian-jewel/dists/xenial/InRelease: Signature by 
key 08B73419AC32B4E966C1A330E84AC2C0460F3994 uses weak digest algorithm (SHA1)

  Resolving dependencies...
  The following packages will be upgraded:
base-files linux-firmware lshw openstack-dashboard 
openstack-dashboard-ubuntu-theme python-django-horizon python-glanceclient
python-oslo.concurrency
  8 packages upgraded, 0 newly installed, 0 to remove and 7 not upgraded.
  Need to get 41.8 MB of archives. After unpacking 539 kB will be used.
  Do you want to continue? [Y/n/?] Y
  Get: 1 http://archive.ubuntu.com/ubuntu xenial-proposed/main amd64 base-files 
amd64 9.4ubuntu4.1 [68.4 kB]
  Get: 2 http://archive.ubuntu.com/ubuntu xenial-proposed/main amd64 
openstack-dashboard-ubuntu-theme all 2:9.0.1-0ubuntu1 [79.5 kB]
  Get: 3 http://archive.ubuntu.com/ubuntu xenial-proposed/main amd64 
python-glanceclient all 1:2.0.0-2ubuntu0.16.04.1 [92.1 kB]
  Get: 4 http://archive.ubuntu.com/ubuntu xenial-proposed/main amd64 
python-oslo.concurrency all 3.7.1-0ubuntu1 [24.5 kB]
  Get: 5 http://archive.ubuntu.com/ubuntu xenial-proposed/main amd64 
openstack-dashboard all 2:9.0.1-0ubuntu1 [2,442 kB]
  Get: 6 http://archive.ubuntu.com/ubuntu xenial-proposed/main amd64 
python-django-horizon all 2:9.0.1-0ubuntu1 [6,272 kB]
  Get: 7 http://archive.ubuntu.com/ubuntu xenial-proposed/main amd64 lshw amd64 
02.17-1.1ubuntu3.2 [215 kB]
  Get: 8 http://archive.ubuntu.com/ubuntu xenial-proposed/main amd64 
linux-firmware all 1.157.1 [32.6 MB]
  Fetched 41.8 MB in 0s (42.3 MB/s)
  (Reading database ... 140753 files and directories currently installed.)
  Preparing to unpack .../base-files_9.4ubuntu4.1_amd64.deb ...
  Unpacking base-files (9.4ubuntu4.1) over (9.4ubuntu4) ...
  Processing triggers for plymouth-theme-ubuntu-text (0.9.2-3ubuntu13.1) ...
  update-initramfs: deferring update (trigger activated)
  Processing triggers for install-info (6.1.0.dfsg.1-5) ...
  Processing triggers for man-db (2.7.5-1) ...
  Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
  update-initramfs: Generating /boot/initrd.img-4.4.0-25-generic
  W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
  Setting up base-files (9.4ubuntu4.1) ...
  Installing new version of config file /etc/update-motd.d/10-help-text ...
  (Reading database ... 140753 files and directories currently installed.)
  Preparing to unpack 
.../openstack-dashboard-ubuntu-theme_2%3a9.0.1-0ubuntu1_all.deb ...
  Unpacking openstack-dashboard-ubuntu-theme (2:9.0.1-0ubuntu1) over 
(2:9.0.0-0ubuntu2.16.04.1) ...
  Preparing to unpack 
.../python-glanceclient_1%3a2.0.0-2ubuntu0.16.04.1_all.deb ...
  Unpacking python-glanceclient (1:2.0.0-2ubuntu0.16.04.1) over (1:2.0.0-2) ...
  Preparing to unpack .../python-oslo.concurrency_3.7.1-0ubuntu1_all.deb ...
  Unpacking python-oslo.concurrency (3.7.1-0ubuntu1) over (3.7.0-2) ...
  Preparing to unpack .../openstack-dashboard_2%3a9.0.1-0ubuntu1_all.deb ...
  Unpacking openstack-dashboard (2:9.0.1-0ubuntu1) over 
(2:9.0.0-0ubuntu2.16.04.1) ...
  Preparing to unpack .../python-django-horizon_2%3a9.0.1-0ubuntu1_all.deb ...
  Unpacking python-django-horizon (2:9.0.1-0ubuntu1) over 
(2:9.0.0-0ubuntu2.16.04.1) 

[Yahoo-eng-team] [Bug 1594284] [NEW] create user through API does not validate domain_id is properly written

2016-06-20 Thread Martin Schuppert
Public bug reported:

When creating a new user using the API (not the cli client or horizon) it
is possible to pass a domain id which does not match the spelling of the
domain id as created, e.g. default -> Default or DEfauLT.

In e.g. liberty using keystone v2, this results in keystone user list
actions failing.

Reproduce with:

1) get token
$ export OS_TOKEN=`curl -si   -H "Content-Type: application/json"   -d '{ 
"auth": { "identity": { "methods": ["password"], "password": { "user": { 
"name": "admin", "domain": { "id": "default" }, "password": "6e37dc4d28444c3a" 
}}}, "scope": { "project": { "name": "admin", "domain": { "id": "default" 
}' http://localhost:5000/v3/auth/tokens | awk '/X-Subject-Token/ {print 
$2}'`

2) create user
$ curl -s  -H "X-Auth-Token: $OS_TOKEN"  -H "Content-Type: application/json"  
-d '{"user": {"name": "newuser", "password": "changeme", "domain_id": 
"DEfauLT"}}'  http://localhost:5000/v3/users | python -mjson.tool
{
"user": {
"domain_id": "DEfauLT",
"enabled": true,
"id": "6553a3cd71794157bef20bc82c98e2b8",
"links": {
"self": 
"http://localhost:5000/v3/users/6553a3cd71794157bef20bc82c98e2b8;
},
"name": "newuser"
}
}

3) use keystone v2 and query users
# openstack user list
The request you have made requires authentication. (HTTP 401) (Request-ID: 
req-306fa0f5-6337-4206-ae91-27f382ca7166)

But getting token works as expected
# openstack token issue
++--+
| Field  | Value|
++--+
| expires| 2016-06-20T09:20:05Z |
| id | 4dd0f55bc2424c31a9c15d185c403dd5 |
| project_id | 211a8c1d7eaa4918a2bd5f2b6d7199ac |
| user_id| 6553a3cd71794157bef20bc82c98e2b8 |
++--+

On liberty:
MariaDB [keystone]> select * from user where name='newuser2'\G;
*** 1. row ***
id: 448f9bfc33dc443e9ec2d18cd16af9f7
  name: newuser2
 extra: {}
  password: 
$6$rounds=1$HNeascl/YNVeJbGU$R4TnvjIbBPKs0YaVyeT6GCyHDz7Y.UFW141xF6f0YyZVXFKjgrA3EryqXoj6PdeNUku0v0Y85K.4FrSKYnmmo0
   enabled: 1
--> domain_id: DEfauLT
default_project_id: NULL
1 row in set (0.00 sec)

Manual change of the domain_id in the DB is needed.

Remarks:
- creating a user using the cli client verifies the domain exists
- with Mitaka it is still possible to create a user with a mismatching
domain_id, but so far no issues identified (little testing)

[root@rdo-mitaka ~(keystone_admin_v3)]# openstack user show 
6553a3cd71794157bef20bc82c98e2b8
+---+--+
| Field | Value|
+---+--+
| domain_id | DEfauLT  |
| enabled   | True |
| id| 6553a3cd71794157bef20bc82c98e2b8 |
| name  | newuser  |
+---+--+

MariaDB [keystone]> select * from local_user where name='newuser'\G;
*** 1. row ***
   id: 11
  user_id: 6553a3cd71794157bef20bc82c98e2b8
domain_id: DEfauLT
 name: newuser
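
A minimal sketch of the missing server-side check (hypothetical names,
not the keystone fix): resolve the domain_id before accepting it, so a
case-mangled id is rejected:

    def validate_domain_id(domain_lookup, domain_id):
        """Raise if domain_id (exact, case-sensitive match) is unknown."""
        if domain_lookup(domain_id) is None:
            raise ValueError('No such domain: %s' % domain_id)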

** Affects: keystone
 Importance: Undecided
 Assignee: Martin Schuppert (mschuppert)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Martin Schuppert (mschuppert)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1594284

Title:
  create user through API does not validate domain_id is properly
  written

Status in OpenStack Identity (keystone):
  New

Bug description:
  When creating a new user using the API (not the cli client or horizon)
  it is possible to pass a domain id which does not match the spelling of
  the domain id as created, e.g. default -> Default or DEfauLT.

  In e.g. liberty using keystone v2, this results in keystone user list
  actions failing.

  Reproduce with:

  1) get token
  $ export OS_TOKEN=`curl -si   -H "Content-Type: application/json"   -d '{ 
"auth": { "identity": { "methods": ["password"], "password": { "user": { 
"name": "admin", "domain": { "id": "default" }, "password": "6e37dc4d28444c3a" 
}}}, "scope": { "project": { "name": "admin", "domain": { "id": "default" 
}' http://localhost:5000/v3/auth/tokens | awk '/X-Subject-Token/ {print 
$2}'`

  2) create user
  $ curl -s  -H "X-Auth-Token: $OS_TOKEN"  -H "Content-Type: application/json"  
-d '{"user": {"name": "newuser", "password": "changeme", "domain_id": 
"DEfauLT"}}'  http://localhost:5000/v3/users | python -mjson.tool
  {
  "user": {
  "domain_id": "DEfauLT",
  "enabled": true,
  "id": "6553a3cd71794157bef20bc82c98e2b8",
  "links": {
  "self": 

[Yahoo-eng-team] [Bug 1594276] [NEW] Endpoint does not support RPC version 2.0. Attempted method: update_service_capabilities

2016-06-20 Thread Jens Offenbach
Public bug reported:

I am confronted with a strange problem... I have OpenStack Mitaka
running on Ubuntu 16.04. I have an HA deployment with 2 controllers and
3 compute nodes. I am using Open vSwitch. Everything is working, but
after some days I have a network issue on one compute node. I am unable
to reproduce the problem. But when I try to ping a floating ip that is
assigned to a VM on this compute node, I get no reply. I have already
verified that all ping requests are received by the tap interface.

I have an error in the neutron-openvswitch-agent.log:
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher [-] Exception 
during message handling: Endpoint does not support RPC version 2.0. Attempted 
method: update_service_capabilities
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 194, 
in _dispatch
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher raise 
UnsupportedVersion(version, method=method)
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 2.0. Attempted 
method: update_service_capabilities
2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 

I have restarted nova-compute and neutron-openvswitch-agent on the
compute node, but the problem remains.

Any ideas?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594276

Title:
  Endpoint does not support RPC version 2.0. Attempted method:
  update_service_capabilities

Status in neutron:
  New

Bug description:
  I am confronted with a strange problem. I have OpenStack Mitaka
  running on Ubuntu 16.04 in an HA deployment with 2 controllers and
  3 compute nodes, using Open vSwitch. Everything works, but after
  some days a network issue appears on one compute node. I am unable
  to reproduce the problem deliberately. When I try to ping a floating
  IP that is assigned to a VM on this compute node, I get no reply,
  although I have verified that all ping requests are received by the
  tap interface.

  I have an error in the neutron-openvswitch-agent.log:
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher [-] 
Exception during message handling: Endpoint does not support RPC version 2.0. 
Attempted method: update_service_capabilities
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 194, 
in _dispatch
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher raise 
UnsupportedVersion(version, method=method)
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 
UnsupportedVersion: Endpoint does not support RPC version 2.0. Attempted 
method: update_service_capabilities
  2016-06-20 09:38:37.475 16138 ERROR oslo_messaging.rpc.dispatcher 

  I have restarted nova-compute and neutron-openvswitch-agent on the
  compute node, but the problem remains.

  Any ideas?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1594276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593652] Re: When creating a subnet for a network fails, the error info returned by neutronclient does not include network and subnet info

2016-06-20 Thread dongwenshuai
** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593652

Title:
  When creating a subnet for a network fails, the error info returned
  by neutronclient does not include network and subnet info

Status in neutron:
  New

Bug description:
  I created a subnet associated with subnetpool_1 for a network; then,
  through the Horizon web UI, creating another subnet associated with
  subnetpool_2 returned the following error:

  Error: Failed to create subnet "" for network "None": Subnets hosted
  on the same network must be allocated from the same subnet pool.
  Neutron server returns request_ids:
  ['req-cad13014-6fe6-4db4-99e6-750de19fbf85']

  " subnet "" " and " network "None" " appeared in the error info above
  is not friendly,it seems that neutronclient didn't return network and
  subnet info caused this.
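
  A hypothetical illustration (not Horizon's actual code) of how the
  blank name and "None" can end up in the message when no resource
  details accompany the error:

  subnet_name = ""  # the user left the subnet name blank
  network = None    # the network object was never resolved client-side
  detail = ("Subnets hosted on the same network must be allocated "
            "from the same subnet pool.")
  # Interpolating the missing values reproduces the unfriendly message.
  print('Failed to create subnet "%s" for network "%s": %s'
        % (subnet_name, network, detail))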

  [root@localhost devstack]# neutron net-show net_xwj
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones| nova |
  | created_at| 2016-06-16T01:52:55  |
  | description   |  |
  | id| 49fbc0cf-1d0a-4fbf-a2fc-e21264752760 |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 1450 |
  | name  | net_xwj  |
  | port_security_enabled | True |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 1076 |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   | e4fadcb7-ac99-4c14-82e2-1195c3618779 |
  |   | 8868eaed-55c5-4707-b41e-8798cc8b9cc9 |
  | tags  |  |
  | tenant_id | d692c40b9bd74af38361d855bebb60ac |
  | updated_at| 2016-06-16T01:52:56  |
  +---+--+
  [root@localhost devstack]# neutron subnet-show e4fadcb7-ac99-4c14-82e2-1195c3618779
  +---+-+
  | Field | Value   |
  +---+-+
  | allocation_pools  | {"start": "110.1.0.2", "end": "110.1.0.14"} |
  | cidr  | 110.1.0.0/28|
  | created_at| 2016-06-17T03:18:27 |
  | description   | |
  | dns_nameservers   | |
  | enable_dhcp   | True|
  | gateway_ip| 110.1.0.1   |
  | host_routes   | |
  | id| e4fadcb7-ac99-4c14-82e2-1195c3618779|
  | ip_version| 4   |
  | ipv6_address_mode | |
  | ipv6_ra_mode  | |
  | name  | subnet_xwj_01   |
  | network_id| 49fbc0cf-1d0a-4fbf-a2fc-e21264752760|
  | subnetpool_id | 4547f41d-c4cd-4336-9138-642dd2187b5b|
  | tenant_id | d692c40b9bd74af38361d855bebb60ac|
  | updated_at| 2016-06-17T03:18:27 |
  +---+-+
  [root@localhost devstack]# neutron subnet-show 8868eaed-55c5-4707-b41e-8798cc8b9cc9
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  | {"start": "110.1.0.18", "end": "110.1.0.30"} |
  | cidr  | 110.1.0.16/28|
  | created_at| 2016-06-17T03:20:54  |
  | description   |  

[Yahoo-eng-team] [Bug 1593652] [NEW] When creating a subnet for a network fails, the error info returned by neutronclient does not include network and subnet info

2016-06-20 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I created a subnet associated with subnetpool_1 for a network; then,
through the Horizon web UI, creating another subnet associated with
subnetpool_2 returned the following error:

Error: Failed to create subnet "" for network "None": Subnets hosted on
the same network must be allocated from the same subnet pool. Neutron
server returns request_ids: ['req-cad13014-6fe6-4db4-99e6-750de19fbf85']

" subnet "" " and " network "None" " appeared in the error info above is
not friendly,it seems that neutronclient didn't return network and
subnet info caused this.

[root@localhost devstack]# neutron net-show net_xwj
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| availability_zone_hints   |  |
| availability_zones| nova |
| created_at| 2016-06-16T01:52:55  |
| description   |  |
| id| 49fbc0cf-1d0a-4fbf-a2fc-e21264752760 |
| ipv4_address_scope|  |
| ipv6_address_scope|  |
| mtu   | 1450 |
| name  | net_xwj  |
| port_security_enabled | True |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 1076 |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   | e4fadcb7-ac99-4c14-82e2-1195c3618779 |
|   | 8868eaed-55c5-4707-b41e-8798cc8b9cc9 |
| tags  |  |
| tenant_id | d692c40b9bd74af38361d855bebb60ac |
| updated_at| 2016-06-16T01:52:56  |
+---+--+
[root@localhost devstack]# neutron subnet-show e4fadcb7-ac99-4c14-82e2-1195c3618779
+---+-+
| Field | Value   |
+---+-+
| allocation_pools  | {"start": "110.1.0.2", "end": "110.1.0.14"} |
| cidr  | 110.1.0.0/28|
| created_at| 2016-06-17T03:18:27 |
| description   | |
| dns_nameservers   | |
| enable_dhcp   | True|
| gateway_ip| 110.1.0.1   |
| host_routes   | |
| id| e4fadcb7-ac99-4c14-82e2-1195c3618779|
| ip_version| 4   |
| ipv6_address_mode | |
| ipv6_ra_mode  | |
| name  | subnet_xwj_01   |
| network_id| 49fbc0cf-1d0a-4fbf-a2fc-e21264752760|
| subnetpool_id | 4547f41d-c4cd-4336-9138-642dd2187b5b|
| tenant_id | d692c40b9bd74af38361d855bebb60ac|
| updated_at| 2016-06-17T03:18:27 |
+---+-+
[root@localhost devstack]# neutron subnet-show 8868eaed-55c5-4707-b41e-8798cc8b9cc9
+---+--+
| Field | Value|
+---+--+
| allocation_pools  | {"start": "110.1.0.18", "end": "110.1.0.30"} |
| cidr  | 110.1.0.16/28|
| created_at| 2016-06-17T03:20:54  |
| description   |  |
| dns_nameservers   |  |
| enable_dhcp   | True |
| gateway_ip| 110.1.0.17   |
| host_routes   |  |
| id| 8868eaed-55c5-4707-b41e-8798cc8b9cc9 |
| ip_version| 4|
| ipv6_address_mode |