[Yahoo-eng-team] [Bug 1619847] [NEW] A typo in Horizon and source code in developer.o.o doc

2016-09-02 Thread Ian Y. Choi
Public bug reported:

In
http://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/admin/floating_ips/views.py#n119
,

Line 119, "msg = _('Unknow resource type for detail API.')"
: "Unknow" is a typo; it should be changed to "Unknown".

I am not sure whether the source code in the following developer
documentation page automatically links to the original source code or
not. If not, please consider updating it as well.

In
http://docs.openstack.org/developer/horizon/_modules/openstack_dashboard/dashboards/admin/floating_ips/views.html
,

"msg = _('Unknow resource type for detail API.') "

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1619847

Title:
  A typo in Horizon and source code in developer.o.o doc

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In
  
http://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/admin/floating_ips/views.py#n119
  ,

  Line 119, "msg = _('Unknow resource type for detail API.')"
  : "Unknow" is a typo; it should be changed to "Unknown".

  I am not sure whether the source code in the following developer
  documentation page automatically links to the original source code or
  not. If not, please consider updating it as well.

  In
  
http://docs.openstack.org/developer/horizon/_modules/openstack_dashboard/dashboards/admin/floating_ips/views.html
  ,

  "msg = _('Unknow resource type for detail API.') "

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1619847/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588355] Re: ovs agent resets ovs every 5 seconds

2016-09-02 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588355

Title:
  ovs agent resets ovs every 5 seconds

Status in neutron:
  Expired

Bug description:
  In the file ovs_ofctl/br-int.py, the function check_canary_table() is defined
  to check the status of OVS. It does so by dumping the flows of table 23, the
  canary table.
  However, in my configuration table 23 does not have any flow; for some reason
  the function setup_canary_table() was not called at all. As a consequence,
  check_canary_table() always reports that OVS has just restarted, and the OVS
  neutron agent keeps resetting the flows in OVS, causing packets to be lost
  roughly every 5 seconds.
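  A rough sketch of the check being described (simplified, with illustrative
  helper names, not the actual neutron code):

      CANARY_TABLE = 23

      def check_canary_table(bridge):
          # setup_canary_table() is expected to have installed a flow in the
          # canary table; when the table turns out empty, the agent concludes
          # that OVS has restarted and resets all flows.
          flows = bridge.dump_flows_for_table(CANARY_TABLE)  # illustrative call
          return 'restarted' if not flows else 'normal'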
  I'm running Liberty release.
  Thanks,
  Cuong Nguyen

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619833] [NEW] api-ref for create server block_device_mapping_v2 is wrong type

2016-09-02 Thread melanie witt
Public bug reported:

The current api-ref for create server shows the
'block_device_mapping_v2' request parameter as:

"block_device_mapping_v2": { "boot_index": "0", "uuid": "ac408821-c95a-
448f-9292-73986c790911", "source_type": "image", "volume_size": "25",
"destination_type": "volume", "delete_on_termination": true }

but specifying it this way raises an error:

DEBUG [nova.api.openstack.wsgi] Returning 400 to user: Invalid input for
field/attribute block_device_mapping_v2. Value: {u'uuid':
u'76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', u'volume_size': 8192,
u'boot_index': 0, u'delete_on_termination': True, u'destination_type':
u'volume', u'source_type': u'image'}. {u'uuid': u'76fa36fc-c930-4bf3
-8c8a-ea2a2420deb6', u'volume_size': 8192, u'boot_index': 0,
u'delete_on_termination': True, u'destination_type': u'volume',
u'source_type': u'image'} is not of type 'array'

so it should be more like:

"block_device_mapping_v2": [{ "boot_index": "0", "uuid": "ac408821-c95a-
448f-9292-73986c790911", "source_type": "image", "volume_size": "25",
"destination_type": "volume", "delete_on_termination": true }]

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api-ref

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619833

Title:
  api-ref for create server block_device_mapping_v2 is wrong type

Status in OpenStack Compute (nova):
  New

Bug description:
  The current api-ref for create server shows the
  'block_device_mapping_v2' request parameter as:

  "block_device_mapping_v2": { "boot_index": "0", "uuid": "ac408821
  -c95a-448f-9292-73986c790911", "source_type": "image", "volume_size":
  "25", "destination_type": "volume", "delete_on_termination": true }

  but specifying it this way raises an error:

  DEBUG [nova.api.openstack.wsgi] Returning 400 to user: Invalid input
  for field/attribute block_device_mapping_v2. Value: {u'uuid':
  u'76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', u'volume_size': 8192,
  u'boot_index': 0, u'delete_on_termination': True, u'destination_type':
  u'volume', u'source_type': u'image'}. {u'uuid': u'76fa36fc-c930-4bf3
  -8c8a-ea2a2420deb6', u'volume_size': 8192, u'boot_index': 0,
  u'delete_on_termination': True, u'destination_type': u'volume',
  u'source_type': u'image'} is not of type 'array'

  so it should be more like:

  "block_device_mapping_v2": [{ "boot_index": "0", "uuid": "ac408821
  -c95a-448f-9292-73986c790911", "source_type": "image", "volume_size":
  "25", "destination_type": "volume", "delete_on_termination": true }]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619833/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619497] Re: test_get_allocated_net_topology_as_tenant fails with Conflict

2016-09-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/364622
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e2fdeefd37314d3e6c5910fcc7070de34561ec44
Submitter: Jenkins
Branch:master

commit e2fdeefd37314d3e6c5910fcc7070de34561ec44
Author: Armando Migliaccio 
Date:   Thu Sep 1 19:33:32 2016 -0700

Deal with unknown exceptions during auto allocation

Uncaught exceptions in the core or L3 plugin layers can cause the
auto_allocate plugin to fail unexpectedly. This patch ensures that
any unexpected error is properly handled by cleaning up half
provisioned deployment.

Related-bug: #1612798
Closes-bug: #1619497

Change-Id: I3eb9efd33363045f7b2ccd97fe4a48891f48b161


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619497

Title:
  test_get_allocated_net_topology_as_tenant fails with Conflict

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/36/304136/5/gate/gate-neutron-dsvm-
  api/042126d/logs/testr_results.html.gz

  Most likely a bug in the cleanup logic:

  http://logs.openstack.org/36/304136/5/gate/gate-neutron-dsvm-
  api/042126d/logs/screen-q-svc.txt.gz?#_2016-09-01_22_09_41_726

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1619497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619696] Re: "neutron-db-manage upgrade heads" fails with networksegments_ibfk_2

2016-09-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/365014
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9127f074f5a5f99eae09aba45a713041a55d91ae
Submitter: Jenkins
Branch:master

commit 9127f074f5a5f99eae09aba45a713041a55d91ae
Author: Jakub Libosvar 
Date:   Fri Sep 2 16:39:17 2016 +0200

db migration: Alter column before setting a FK on column

MySQL doesn't like foreign key columns to be modified,
this adjusts the script to add all constraints and make
modifications before setting up the foreign key relationship.

Change-Id: I494758120c8a87fe584c781b928f8b9d3bac5291
Closes-bug: 1619696
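A minimal alembic sketch of the ordering the commit describes (table and
column names here are hypothetical, not the actual migration):

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # modify the column while no foreign key references it yet ...
        op.alter_column('segmenthostmappings', 'segment_id',
                        existing_type=sa.String(36), nullable=False)
        # ... and only then set up the foreign key relationship
        op.create_foreign_key(None, 'segmenthostmappings', 'networksegments',
                              ['segment_id'], ['id'], ondelete='CASCADE')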


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619696

Title:
  "neutron-db-manage upgrade heads" fails with networksegments_ibfk_2

Status in neutron:
  Fix Released

Bug description:
  Since this commit: https://review.openstack.org/#/c/293305/

  Puppet OpenStack CI is failing to run db upgrades:

  2016-09-02 13:41:05.973470 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.runtime.migration] Running upgrade 3b935b28e7a0, 67daae611b6e -> 
b12a3ef66e62, add standardattr to qos policies
  2016-09-02 13:41:05.973831 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.runtime.migration] Running upgrade b12a3ef66e62, 89ab9a816d70 -> 
97c25b0d2353, Add Name and Description to the networksegments table
  2016-09-02 13:41:05.974141 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Running upgrade 
for neutron ...
  2016-09-02 13:41:05.974450 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most 
recent call last):
  2016-09-02 13:41:05.974762 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/bin/neutron-db-manage", line 10, in 
  2016-09-02 13:41:05.975062 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
sys.exit(main())
  2016-09-02 13:41:05.975360 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 686, in 
main
  2016-09-02 13:41:05.975647 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: return_val |= 
bool(CONF.command.func(config, CONF.command.name))
  2016-09-02 13:41:05.975959 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 207, in 
do_upgrade
  2016-09-02 13:41:05.976238 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: desc=branch, 
sql=CONF.command.sql)
  2016-09-02 13:41:05.976541 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 108, in 
do_alembic_command
  2016-09-02 13:41:05.976854 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
getattr(alembic_command, cmd)(config, *args, **kwargs)
  2016-09-02 13:41:05.977153 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
  2016-09-02 13:41:05.977420 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
script.run_env()
  2016-09-02 13:41:05.977711 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, in run_env
  2016-09-02 13:41:05.978016 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
util.load_python_file(self.dir, 'env.py')
  2016-09-02 13:41:05.978335 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in 
load_python_file
  2016-09-02 13:41:05.978614 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = 
load_module_py(module_id, path)
  2016-09-02 13:41:05.978932 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in 
load_module_py
  2016-09-02 13:41:05.979212 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = 
imp.load_source(module_id, path, fp)
  2016-09-02 13:41:05.979568 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 120, in 
  2016-09-02 13:41:05.979862 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
run_migrations_online()
  2016-09-02 13:41:05.980238 | Notice: 

[Yahoo-eng-team] [Bug 1570489] Re: Create Volume Type Extra Spec modal double fail oddity

2016-09-02 Thread Diana Whitten
** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1570489

Title:
  Create Volume Type Extra Spec modal double fail oddity

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  admin/volumes: 'Volume Types' -> 'Edit Extra Specs' -> '+ Create' ...
  When form pops up, don't fill in required values, click 'Create' ...
  form fails as expected with form errors, don't add anything and click
  create again ... Modal disappears and a confusing background growl is
  thrown up "There was an error submitting the form.  Please try again."
  as seen here: https://i.imgur.com/TSXjJkc.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1570489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617862] Re: NoFilterMatched when kill metadata proxy process in external_process.disable method

2016-09-02 Thread Matt Riedemann
** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1617862

Title:
  NoFilterMatched when kill metadata proxy process in
  external_process.disable method

Status in neutron:
  New

Bug description:
  l3_agent.log

  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
[req-b879a237-49a6-4ae0-ae5a-b52fff1be64e - - - - -] Error during 
L3NATAgentWithStateReport.periodic_sync_routers_task
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task Traceback 
(most recent call last):
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 220, in 
run_periodic_tasks
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task task(self, 
context)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 547, in 
periodic_sync_routers_task
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
self.fetch_and_sync_all_routers(context, ns_manager)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/namespace_manager.py", line 
90, in __exit__
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
self._cleanup(_ns_prefix, ns_id)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l3/namespace_manager.py", line 
140, in _cleanup
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
self.process_monitor, ns_id, self.agent_conf)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/metadata/driver.py", line 131, 
in destroy_monitored_metadata_proxy
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
pm.disable()
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", 
line 109, in disable
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
utils.execute(cmd, run_as_root=True)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 116, in 
execute
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task 
execute_rootwrap_daemon(cmd, process_input, addl_env))
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 102, in 
execute_rootwrap_daemon
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task return 
client.execute(cmd, process_input)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_rootwrap/client.py", line 128, in execute
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task res = 
proxy.run_one_command(cmd, stdin)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"", line 2, in run_one_command
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task   File 
"/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in _callmethod
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task raise 
convert_to_error(kind, result)
  2016-08-26 03:02:08.356 30726 ERROR oslo_service.periodic_task NoFilterMatched


  dhcp_agent.log

  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher 
[req-17ab00ea-166d-402f-8233-01e4c6fdc840 be5b0cbb38af4b18b2e4fd26bbe832d8 
aef8e3f16eb549d39bb6585c68b84442 - - -] Exception during message handling:
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 138, 
in _dispatch_and_reply
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher 
incoming.message))
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 183, 
in _dispatch
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 127, 
in _do_dispatch
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-08-25 16:27:15.597 27017 ERROR oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in 

[Yahoo-eng-team] [Bug 1618697] Re: os-brick 1.6.0 refactor was a major API change

2016-09-02 Thread Matt Riedemann
I think the nova part of this bug was fixed with
https://review.openstack.org/364454 - if there is more to do please be
specific.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1618697

Title:
  os-brick 1.6.0 refactor was a major API change

Status in Cinder:
  In Progress
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  New

Bug description:
  With the release of os-brick 1.6.0 the following review[1] was created
  to use it in upper-constraints.txt

  This review is failing the nova[2] and cinder[3] unit tests

  It's relatively simple to fix these problems to work with 1.6.0 but
  the code needs to work with both 1.5.0 *and* 1.6.0.  This is where we
  have problems.

  The connector objects moved from
  os_brick.initiator.connector.ISCSIConnector (1.5.0) to
  os_brick.initiator.connectors.ISCSIConnector (1.6.0) so any tests need
  shims in place to work with either name.  The shim could be removed
  once global-requirements is bumped to use 1.6.0 as the minimum but
  it's very late to be making that change as that'd cause a re-release
  of any libraries (glance_store) using os-brick.
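  A minimal sketch of such an import shim, assuming only the two module paths
  quoted above:

      try:
          # os-brick >= 1.6.0
          from os_brick.initiator.connectors import ISCSIConnector
      except ImportError:
          # os-brick < 1.6.0
          from os_brick.initiator.connector import ISCSIConnector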


  
  [1] https://review.openstack.org/#/c/360739/
  [2] 
http://logs.openstack.org/39/360739/2/check/gate-cross-nova-python27-db-ubuntu-xenial/bb19321/console.html#_2016-08-31_02_20_59_089114
  [3] 
http://logs.openstack.org/39/360739/2/check/gate-cross-cinder-python27-db-ubuntu-xenial/444b954/console.html#_2016-08-31_02_25_04_125200

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1618697/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618707] Re: add comment

2016-09-02 Thread Matt Riedemann
What is this?

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1618707

Title:
  add comment

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  add comment

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1618707/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618716] Re: XenAPI drive lacks of driver capabilities dict

2016-09-02 Thread Matt Riedemann
I'm not sure this is a bug, it just means that the xenapi driver gets
the same capabilities as are defined in the base ComputeDriver class.
Once it diverges from the base class defaults then it would need it's
own capabilities dict.

** Tags added: xenserver

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1618716

Title:
  XenAPI drive lacks of driver capabilities dict

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  With the nova upstream code and XenServer as the hypervisor, we found
  that XenAPIDriver does not declare its own capabilities; see:

  Driver:
  https://github.com/openstack/nova/blob/master/nova/virt/driver.py#L135
  XenAPIDriver:
  https://github.com/openstack/nova/blob/master/nova/virt/xenapi/driver.py#L67

  XenAPIDriver should also export its own capabilities
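  For illustration, declaring driver-specific capabilities would look roughly
  like the sketch below (keys and values are illustrative; the real defaults
  live in nova/virt/driver.py):

      from nova.virt import driver

      class XenAPIDriver(driver.ComputeDriver):
          capabilities = {
              "has_imagecache": False,
              "supports_recreate": False,
              "supports_migrate_to_same_host": False,
          }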

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1618716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619269] Re: nova docker setup error

2016-09-02 Thread Matt Riedemann
This isn't a nova bug; it's either a problem in your setup or a bug in
nova-docker, which is basically a defunct project, so your bug will
probably not get addressed unless you attempt to fix it yourself. I
believe the nova-docker repo is being abandoned in Newton.

** Also affects: nova-docker
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619269

Title:
  nova docker setup error

Status in OpenStack Compute (nova):
  Invalid
Status in nova-docker:
  New

Bug description:
  I am trying to set up devstack with nova-docker enabled in my local
  environment.

  Using Mitaka Version itself.

  But stack.sh always results in error.

  Error is as follows:

  2016-09-01 17:32:05.680 ERROR nova.virt.driver [-] Unable to load the 
virtualization driver
  2016-09-01 17:32:05.680 TRACE nova.virt.driver Traceback (most recent call 
last):
  2016-09-01 17:32:05.680 TRACE nova.virt.driver   File 
"/home/infics/stack/nova/nova/virt/driver.py", line 1622, in load_compute_driver
  2016-09-01 17:32:05.680 TRACE nova.virt.driver virtapi)
  2016-09-01 17:32:05.680 TRACE nova.virt.driver   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 64, in 
import_object_ns
  2016-09-01 17:32:05.680 TRACE nova.virt.driver cls = 
import_class(import_str)
  2016-09-01 17:32:05.680 TRACE nova.virt.driver   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in 
import_class
  2016-09-01 17:32:05.680 TRACE nova.virt.driver __import__(mod_str)
  2016-09-01 17:32:05.680 TRACE nova.virt.driver ImportError: No module named 
novadocker.virt.docker
  2016-09-01 17:32:05.680 TRACE nova.virt.driver 
  n-cpu failed to start

  Always it results in this error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619269/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619639] Re: Unable to launch Instance with Mitaka - Unexpected API Error

2016-09-02 Thread Matt Riedemann
This is an invalid setup: you have ec2 in the enabled_apis config
option, which isn't supported in Mitaka. If the docs say to do that,
then this is a docs bug and should be redirected.
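For reference, an enabled_apis setting that works on Mitaka looks roughly
like this (illustrative nova.conf snippet; ec2 must not be listed):

    [DEFAULT]
    enabled_apis = osapi_compute,metadata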

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619639

Title:
  Unable to launch Instance with Mitaka - Unexpected API Error

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi,

  Description
  ===

  I have installed the OpenStack Mitaka release manually and am getting the
  following error. Please find the nova-api error log attached.

  root@con:/home/con# neutron net-list
  
  +--------------------------------------+-------------+-------------------------------------------------------+
  | id                                   | name        | subnets                                               |
  +--------------------------------------+-------------+-------------------------------------------------------+
  | a5bf7834-7f5f-4c64-ad00-8c40cbc72227 | provider    | 229751f3-b983-4c09-a07b-9c16d0d35e59 192.168.57.0/24  |
  | 4bf2a174-8d35-4306-97f5-acf015637a12 | selfservice | 87085446-68b4-400a-9158-012b20aaeb71 172.16.1.0/24    |
  +--------------------------------------+-------------+-------------------------------------------------------+
  root@con:/home/con# openstack server create --flavor m1.tiny --image cirros 
--nic net-id=4bf2a174-8d35-4306-97f5-acf015637a12 --security-group default 
--key-name mykey selfservice-instance
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-fafcfa29-dcb9-4c33-a8ea-aee09963cf5f)
  root@con:/home/con# 
  root@con:/home/con# 
  root@con:/home/con# 

  Steps to reproduce

  Install Openstack Mitaka release from following guide

  http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-
  selfservice.html

  Expected result
  ===
  All verifications during installation of all components were successful.
  The instance should launch successfully.

  Actual result
  =
  Getting the above error.

  Environment
  ===
  Mitaka - Manual Installation. 

  Logs & Configs
  ==

  nova-api logs attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1599123] Re: Glance API doesn't work in IPv6 only environment

2016-09-02 Thread Nikhil Komawar
Have you tried this trick
https://github.com/openstack/glance/blob/11cfe49b8f88f68d83028b5920891bb16792da72/glance/cmd/__init__.py#L23-L49
?

I think that'd work for you, and we may have to update the documentation,
using this bug as an example!

** Changed in: glance
   Status: New => Opinion

** Changed in: glance
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1599123

Title:
  Glance API doesn't work in IPv6 only environment

Status in Glance:
  Opinion

Bug description:
  When using an IPv6-only deployment, including communication between
  glance-api and glance-registry, the glance client itself can run glance
  image-list without problems; however, nova image-list fails with error
  500. The same applies if the glance client is called using the v1 API:

  root@ubuntu1604-openstack:~# glance --os-image-api-version 2 image-list
  +--------------------------------------+---------+
  | ID                                   | Name    |
  +--------------------------------------+---------+
  | f32fd367-a451-4c09-b4fa-b2a60195aa38 | cirros  |
  | ad83b2f0-b99e-4340-b191-27f61de9955d | testimg |
  +--------------------------------------+---------+
  root@ubuntu1604-openstack:~# glance --os-image-api-version 1 image-list
  500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)
  root@ubuntu1604-openstack:~# nova image-list
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-279e9161-ec28-40af-a312-b2a5de829aef)

  Snippet from glance-registry.conf:
  bind_host = fd00:0:0:0::b00

  Snippet form glance-api.conf:
  registry_host = fd00:0:0:0::b00

  When the symbolic name of the node is used in glance-api.conf, everything
  works fine.

  The log says:
  2016-07-05 13:11:24.858 3742 DEBUG eventlet.wsgi.server [-] (3742) accepted 
('fd00:0:0:0::b00', 47916, 0, 0) server 
/usr/lib/python2.7/dist-packages/eventlet/wsgi.py:867
  2016-07-05 13:11:24.861 3742 DEBUG glance.api.middleware.version_negotiation 
[-] Determining version of request: GET /v1/images/detail Accept: */* 
process_request 
/usr/local/lib/python2.7/dist-packages/glance/api/middleware/version_negotiation.py:46
  2016-07-05 13:11:24.862 3742 DEBUG glance.api.middleware.version_negotiation 
[-] Using url versioning process_request 
/usr/local/lib/python2.7/dist-packages/glance/api/middleware/version_negotiation.py:58
  2016-07-05 13:11:24.864 3742 DEBUG glance.api.middleware.version_negotiation 
[-] Matched version: v1 process_request 
/usr/local/lib/python2.7/dist-packages/glance/api/middleware/version_negotiation.py:70
  2016-07-05 13:11:24.865 3742 DEBUG glance.api.middleware.version_negotiation 
[-] new path /v1/images/detail process_request 
/usr/local/lib/python2.7/dist-packages/glance/api/middleware/version_negotiation.py:71
  2016-07-05 13:11:25.131 3742 DEBUG glance.common.client 
[req-7c1cf0e3-40e3-4ae2-bd3d-4f2923510386 f2875bc77df6479ea0a55ab4471c2ec0 
4ef8aba5c14c4ff99fe71d9588fcbdfc - - -] Constructed URL: 
http://fd00:0:0:0::b00:9191/images/detail?sort_key=name_dir=asc=20 
_construct_url 
/usr/local/lib/python2.7/dist-packages/glance/common/client.py:398
  2016-07-05 13:11:25.137 3742 ERROR glance.registry.client.v1.client 
[req-7c1cf0e3-40e3-4ae2-bd3d-4f2923510386 f2875bc77df6479ea0a55ab4471c2ec0 
4ef8aba5c14c4ff99fe71d9588fcbdfc - - -] Registry client request GET 
/images/detail raised ClientConnectionError

  
  From which it is obvious that a wrong URL is constructed:
  http://fd00:0:0:0::b00:9191/images/...
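  A minimal sketch of the handling that seems to be missing (not the actual
  glance code): an IPv6 literal needs brackets before the port is appended,
  otherwise the last hextet is read as the port:

      import ipaddress

      def build_url(host, port, path):
          try:
              if ipaddress.ip_address(host).version == 6:
                  host = '[%s]' % host
          except ValueError:
              pass  # a hostname, leave it untouched
          return 'http://%s:%s%s' % (host, port, path)

      # build_url('fd00:0:0:0::b00', 9191, '/images/detail')
      # -> 'http://[fd00:0:0:0::b00]:9191/images/detail'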

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1599123/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611154] Re: Glance v2 gives 403 instead of 409 on Conflict

2016-09-02 Thread Nikhil Komawar
@Lyle: are you using the schemas to validate your request?

I see how a 409 is more convenient.

I'm on the fence for this -- my opinion is that 403 is the right call
from glance's perspective as you are trying to update the reserved
attribute and have some knowledge of the image (using the id).

Though, at the same time, for a `save` in your case it makes sense to first
check the id and then the other attributes. So, we will have to discuss
this on the review itself. If you'd like to propose a review on this, it
would be quite helpful.

** Changed in: glance
   Status: New => Opinion

** Changed in: glance
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1611154

Title:
  Glance v2 gives 403 instead of 409 on Conflict

Status in Glance:
  Opinion

Bug description:
  Background:

  I'm working on fixing some issues around the handling of Glance V2 in
  the Ruby fog-openstack gem: https://github.com/fog/fog-
  openstack/pull/170. One of these issues was the implementation of a
  `save` method that creates an object if it doesn't exist, otherwise it
  updates the object. Normally the presence of an ID causes the `update`
  method to be called, but Glance V2 allows an ID to be specified on
  `create`. To implement this `save` method, I'd like to always call
  `create`, then rescue and call `update` on a 409 Conflict. However,
  I'm seeing the following behavior.

  Bug:

  Attempt to POST a new image with an conflicting ID (ID already
  exists), but with a read-only attribute set, e.g. `self`.

  ```
  curl -v \
-H "Content-Type: application/json" \
-H "X-Auth-Token: MY_TOKEN" \
-X POST \
-d '{
  "id": "EXISTING_IMAGE_ID",
  "name": "my-image",
  "self": "/v2/foo"
}' \
https://OPENSTACK_HOSTNAME:9292/v2/images
  ```

  Expected to receive "409 Conflict" HTTP response, but was "403
  Forbidden", "Attribute 'self' is read-only.". Removing the `self` from
  the request makes things work as expected, but the lack of the 409
  response code makes it difficult to implement a "create or update"
  method as described above.
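  A rough sketch of that create-or-update flow over plain HTTP (Python only
  for illustration; the real code is the Ruby fog-openstack gem, and the
  update call is left as a hypothetical helper):

      import requests

      def save_image(endpoint, token, image):
          headers = {'X-Auth-Token': token}
          resp = requests.post(endpoint + '/v2/images', json=image,
                               headers=headers)
          if resp.status_code == 409:
              # the id already exists -> fall back to updating that image
              resp = update_image(endpoint, token, image)  # hypothetical
          resp.raise_for_status()
          return resp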

  Thanks!

  - Lyle

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1611154/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1618260] Re: Image signature base64 don't wrap lines

2016-09-02 Thread Nikhil Komawar
I doubt if we need doc change for this as the commit itself made the doc
change.

** Changed in: glance
   Status: New => Opinion

** Changed in: glance
   Importance: Undecided => Wishlist

** Changed in: glance
   Importance: Wishlist => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1618260

Title:
  Image signature base64 don't wrap lines

Status in Glance:
  Opinion

Bug description:
  https://review.openstack.org/360411
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 5663196c9c30f13a61a44ac7bfd625379d3165f4
  Author: Darren White 
  Date:   Tue Aug 23 14:56:31 2016 +0100

  Image signature base64 don't wrap lines
  
  In the image signature documentation, base64 should not use line
  wrapping (defaults to 76). Disable using -w 0. With line wrapping
  enabled Nova will fail to boot the image.
  
  Added a note to inform users to use -w 0 with Glance v1 but is
  optional for v2.
  
  DocImpact
  
  Closes-Bug: 1617258
  Change-Id: I3585c5cc90e6ea738ff7ecb5a5574cbb0e737511

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1618260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619771] [NEW] in placement api format of GET .../inventories does not match spec

2016-09-02 Thread Chris Dent
Public bug reported:

The correct format is described at http://specs.openstack.org/openstack
/nova-specs/specs/newton/approved/generic-resource-pools.html#get-
resource-providers-uuid-inventories

In that format the resource provider generation is its own top level
key.

In the code the generation is repeated per resource class, which means we
cannot retrieve the resource provider generation without first inspecting
an inventory.

We should fix this sooner rather than later so that we have a simpler
resource tracker.
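For illustration, the two shapes side by side (key names and values here are
only illustrative):

    # Spec: the resource provider generation is a single top-level key.
    spec_shape = {
        'resource_provider_generation': 1,
        'inventories': {
            'DISK_GB': {'total': 1024},
            'VCPU': {'total': 8},
        },
    }

    # Current code: the generation is repeated inside every resource class,
    # so it cannot be read without first opening an inventory entry.
    code_shape = {
        'inventories': {
            'DISK_GB': {'total': 1024, 'resource_provider_generation': 1},
            'VCPU': {'total': 8, 'resource_provider_generation': 1},
        },
    }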

** Affects: nova
 Importance: Undecided
 Assignee: Chris Dent (cdent)
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619771

Title:
  in placement api format of GET .../inventories does not match spec

Status in OpenStack Compute (nova):
  New

Bug description:
  The correct format is described at
  http://specs.openstack.org/openstack/nova-specs/specs/newton/approved
  /generic-resource-pools.html#get-resource-providers-uuid-inventories

  In that format the resource provider generation is its own top level
  key.

  In the code the generation is repeated per resource class, which means
  we cannot retrieve the resource provider generation without first
  inspecting an inventory.

  We should fix this sooner rather than later so that we have a simpler
  resource tracker.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619771/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619758] Re: Credential Encryption breaks deployments without Fernet

2016-09-02 Thread Emilien Macchi
I'm adding TripleO because we need to automate the upgrade process described in:
http://docs.openstack.org/releasenotes/keystone/unreleased.html#upgrade-notes

"Keystone now supports encrypted credentials at rest. In order to
upgrade successfully to Newton, deployers must encrypt all credentials
currently stored before contracting the database. Deployers must run
keystone-manage credential_setup in order to use the credential API
within Newton, or finish the upgrade from Mitaka to Newton. This will
result in a service outage for the credential API where credentials will
be read-only for the duration of the upgrade process. Once the database
is contracted credentials will be writeable again. Database contraction
phases only apply to rolling upgrades."

So I'm going to try to make it transparent in puppet-keystone, but TripleO
will certainly have to run the command in its upgrade scripts.

** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1619758

Title:
  Credential Encryption breaks deployments without Fernet

Status in OpenStack Identity (keystone):
  New
Status in tripleo:
  New

Bug description:
  A recent change to encrypt credentials broke RDO/TripleO deployments:


  2016-09-02 17:16:55.074 17619 ERROR keystone.common.fernet_utils 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] Either [fernet_tokens] 
key_repository does not exist or Keystone does not have sufficient permission 
to access it: /etc/keystone/credential-keys/
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] MultiFernet requires at 
least one Fernet instance
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi result = 
method(req, **params)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 164, in 
inner
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi return f(self, 
request, *args, **kwargs)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/controllers.py", line 69, 
in create_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ref = 
self.credential_api.create_credential(ref['id'], ref)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in 
wrapped
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 106, in 
create_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi credential_copy 
= self._encrypt_credential(credential)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 72, in 
_encrypt_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
json.dumps(credential['blob'])
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py",
 line 68, in encrypt
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi crypto, keys = 
get_multi_fernet_keys()
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py",
 line 49, in get_multi_fernet_keys
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi crypto = 
fernet.MultiFernet(fernet_keys)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 128, in 
__init__
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi "MultiFernet 
requires at least one Fernet instance"
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ValueError: 
MultiFernet requires at least one Fernet instance
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1619758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619758] [NEW] Credential Encryption breaks deployments without Fernet

2016-09-02 Thread Adam Young
Public bug reported:

A recent change to encrypt credentials broke RDO/TripleO deployments:


2016-09-02 17:16:55.074 17619 ERROR keystone.common.fernet_utils 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] Either [fernet_tokens] 
key_repository does not exist or Keystone does not have sufficient permission 
to access it: /etc/keystone/credential-keys/
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] MultiFernet requires at 
least one Fernet instance
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi Traceback (most recent 
call last):
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi result = 
method(req, **params)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 164, in 
inner
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi return f(self, 
request, *args, **kwargs)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/controllers.py", line 69, 
in create_credential
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ref = 
self.credential_api.create_credential(ref['id'], ref)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in 
wrapped
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 106, in 
create_credential
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi credential_copy = 
self._encrypt_credential(credential)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 72, in 
_encrypt_credential
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
json.dumps(credential['blob'])
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py",
 line 68, in encrypt
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi crypto, keys = 
get_multi_fernet_keys()
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py",
 line 49, in get_multi_fernet_keys
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi crypto = 
fernet.MultiFernet(fernet_keys)
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 128, in 
__init__
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi "MultiFernet 
requires at least one Fernet instance"
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ValueError: 
MultiFernet requires at least one Fernet instance
2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi

** Affects: keystone
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1619758

Title:
  Credential Encryption breaks deployments without Fernet

Status in OpenStack Identity (keystone):
  New
Status in tripleo:
  New

Bug description:
  A recent change to encrypt credentials broke RDO/TripleO deployments:


  2016-09-02 17:16:55.074 17619 ERROR keystone.common.fernet_utils 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] Either [fernet_tokens] 
key_repository does not exist or Keystone does not have sufficient permission 
to access it: /etc/keystone/credential-keys/
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] MultiFernet requires at 
least one Fernet instance
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi result = 
method(req, **params)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 164, in 
inner
  2016-09-02 17:16:55.074 17619 ERROR 

[Yahoo-eng-team] [Bug 1381961] Re: Keystone API GET 5000/v3 returns wrong endpoint URL in response body

2016-09-02 Thread Adam Young
Reported in a downstream distribution, which should have synced from this
code, as still being a bug. Please reconfirm.

** Changed in: keystone
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1381961

Title:
  Keystone API GET 5000/v3 returns wrong endpoint URL in response body

Status in OpenStack Identity (keystone):
  Confirmed

Bug description:
  When I was invoking a GET request to  public endpoint of Keystone, I found 
the admin endpoint URL in response body, I assume it should be the public 
endpoint URL:
  GET https://192.168.101.10:5000/v3

  {
"version": {
  "status": "stable",
  "updated": "2013-03-06T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v3+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v3+xml"
}
  ],
  "id": "v3.0",
  "links": [
{
  "href": "https://172.20.14.10:35357/v3/;,
  "rel": "self"
}
  ]
}
  }

  ===
  Btw, I can get the right URL for public endpoint in the response body of the 
versionless API call:
  GET https://192.168.101.10:5000

  {
"versions": {
  "values": [
{
  "status": "stable",
  "updated": "2013-03-06T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v3+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v3+xml"
}
  ],
  "id": "v3.0",
  "links": [
{
  "href": "https://192.168.101.10:5000/v3/;,
  "rel": "self"
}
  ]
},
{
  "status": "stable",
  "updated": "2014-04-17T00:00:00Z",
  "media-types": [
{
  "base": "application/json",
  "type": "application/vnd.openstack.identity-v2.0+json"
},
{
  "base": "application/xml",
  "type": "application/vnd.openstack.identity-v2.0+xml"
}
  ],
  "id": "v2.0",
  "links": [
{
  "href": "https://192.168.101.10:5000/v2.0/;,
  "rel": "self"
},
{
  "href": 
"http://docs.openstack.org/api/openstack-identity-service/2.0/content/;,
  "type": "text/html",
  "rel": "describedby"
},
{
  "href": 
"http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf;,
  "type": "application/pdf",
  "rel": "describedby"
}
  ]
}
  ]
}
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1381961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1614994] Re: keystonemiddleware 401 authentication string is not translated

2016-09-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/359675
Committed: 
https://git.openstack.org/cgit/openstack/keystonemiddleware/commit/?id=c43c8e4b7d4ea19267f6434bb45b3f0c0326c284
Submitter: Jenkins
Branch:master

commit c43c8e4b7d4ea19267f6434bb45b3f0c0326c284
Author: ubuntu 
Date:   Wed Aug 24 04:05:45 2016 -0400

Globalize authentication failure error

The authentication failure error during token
validation is currently not globalized. This
patch provides a fix for that.

Change-Id: If5ccdbfd2fc215e3d0013d45c8908344db20789e
Closes-Bug: 1614994
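The shape of the change, roughly (the exact message and import path in the
real patch may differ):

    from keystonemiddleware.i18n import _  # assumed i18n helper location

    # before: msg = 'Authorization failed for token'
    msg = _('Authorization failed for token')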


** Changed in: keystonemiddleware
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1614994

Title:
  keystonemiddleware 401 authentication string is not translated

Status in OpenStack Identity (keystone):
  Invalid
Status in keystonemiddleware:
  Fix Released

Bug description:
  The authentication failure message coming from keystonemiddleware auth
  token middleware at
  
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L573-L582
  is not translated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1614994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619723] [NEW] in placement api an allocation reporter sometimes needs to be able to report an allocation even if it violates capacity constraints

2016-09-02 Thread Chris Dent
Public bug reported:

If a compute node has been reconfigured such that its allocations are
above its available capacity, the resource tracker still needs to be
able to report existing allocations without failure so that it doesn't
get in a stuck state.

To that end, we will make it so that when allocations are sent via a PUT
and those allocations are already present in the data store, the service
will respond with success but neither write the database nor update the
resource provider generation. This allows the resource tracker to know
"yeah, you've got my data" and feel at peace with the state of the world.
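A rough sketch of that behaviour (hypothetical helper names, not the actual
placement handler):

    def put_allocations(resource_provider, new_allocations):
        existing = get_allocations(resource_provider)       # hypothetical
        if existing == new_allocations:
            # Already recorded: answer with success, but neither write the
            # database nor bump the resource provider generation.
            return 204
        check_capacity(resource_provider, new_allocations)  # may raise Conflict
        write_allocations(resource_provider, new_allocations)
        bump_generation(resource_provider)
        return 204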

** Affects: nova
 Importance: Undecided
 Assignee: Chris Dent (cdent)
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619723

Title:
  in placement api an allocation reporter sometimes needs to be able to
  report an allocation even if it violates capacity constraints

Status in OpenStack Compute (nova):
  New

Bug description:
  If a compute node has been reconfigured such that its allocations are
  above its available capacity, the resource tracker still needs to be
  able to report existing allocations without failure so that it doesn't
  get in a stuck state.

  To that end, we will make it so that when allocations are sent via a
  PUT and those allocations are already present in the data store, the
  service will respond with success but neither write the database nor
  update the resource provider generation. This allows the resource
  tracker to know "yeah, you've got my data" and feel at peace with the
  state of the world.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619722] [NEW] in placement api we must be able to update inventory to violate allocations

2016-09-02 Thread Chris Dent
Public bug reported:

If a compute node is reconfigured in a way that makes its inventory
change, those changes must be reflected in the placement service, even
if they violate the existing allocations; otherwise the node is left in
a difficult state.

This is safe because with this new inventory the node won't be scheduled
to: it doesn't have available capacity.

** Affects: nova
 Importance: Undecided
 Assignee: Chris Dent (cdent)
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619722

Title:
  in placement api we must be able to update inventory to violate
  allocations

Status in OpenStack Compute (nova):
  New

Bug description:
  If a compute node is reconfigured in a way that makes its inventory
  change, those changes must be reflected in the placement service, even
  if they violate the existing allocations; otherwise the node is left
  in a difficult state.

  This is safe because with this new inventory the node won't be
  scheduled to: it doesn't have available capacity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619524] Re: l3 agent uses same request ID for every request

2016-09-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/364634
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c7369483557e23782d2ec293d548bc96cd729934
Submitter: Jenkins
Branch:master

commit c7369483557e23782d2ec293d548bc96cd729934
Author: Kevin Benton 
Date:   Thu Sep 1 20:07:09 2016 -0700

Make L3 agent use different request-id for each request

Generate a new context object request-id for each reference
to self.context. This allows easier tracking of requests
in logs.

This is the L3 agent equivalent fix of
I1d6dc28ba4752d3f9f1020851af2960859aae520.

Related-Bug: #1618231
Closes-Bug: #1619524
Change-Id: I4a49f05ce0e7467084a1c27a64a0d4cf60a5f8cb
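
Roughly, the approach is to build a fresh context (and therefore a fresh
request-id) on every access instead of reusing one context object for the
agent's lifetime. A simplified sketch, not the actual agent code:

    import uuid

    class AgentContext(object):
        def __init__(self):
            self.request_id = 'req-%s' % uuid.uuid4()

    class L3AgentSketch(object):
        @property
        def context(self):
            # Every reference to self.context yields a new request-id,
            # so each RPC call can be tracked individually in the logs.
            return AgentContext()

    agent = L3AgentSketch()
    assert agent.context.request_id != agent.context.request_id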


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619524

Title:
  l3 agent uses same request ID for every request

Status in neutron:
  Fix Released

Bug description:
  The L3 agent uses the same context and subsequently the same request
  ID for every request it makes to the Neutron server. This makes
  searching for single actions based on request ID very ineffective.

  DHCP version of this bug is here:
  https://bugs.launchpad.net/neutron/+bug/1618231

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1619524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619696] [NEW] "neutron-db-manage upgrade heads" fails with networksegments_ibfk_2

2016-09-02 Thread Emilien Macchi
Public bug reported:

Since this commit: https://review.openstack.org/#/c/293305/

Puppet OpenStack CI is failing to run db upgrades:

2016-09-02 13:41:05.973470 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.runtime.migration] Running upgrade 3b935b28e7a0, 67daae611b6e -> 
b12a3ef66e62, add standardattr to qos policies
2016-09-02 13:41:05.973831 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.runtime.migration] Running upgrade b12a3ef66e62, 89ab9a816d70 -> 
97c25b0d2353, Add Name and Description to the networksegments table
2016-09-02 13:41:05.974141 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Running upgrade 
for neutron ...
2016-09-02 13:41:05.974450 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most 
recent call last):
2016-09-02 13:41:05.974762 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/bin/neutron-db-manage", line 10, in 
2016-09-02 13:41:05.975062 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
sys.exit(main())
2016-09-02 13:41:05.975360 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 686, in 
main
2016-09-02 13:41:05.975647 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: return_val |= 
bool(CONF.command.func(config, CONF.command.name))
2016-09-02 13:41:05.975959 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 207, in 
do_upgrade
2016-09-02 13:41:05.976238 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: desc=branch, 
sql=CONF.command.sql)
2016-09-02 13:41:05.976541 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 108, in 
do_alembic_command
2016-09-02 13:41:05.976854 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
getattr(alembic_command, cmd)(config, *args, **kwargs)
2016-09-02 13:41:05.977153 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
2016-09-02 13:41:05.977420 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
script.run_env()
2016-09-02 13:41:05.977711 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, in run_env
2016-09-02 13:41:05.978016 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
util.load_python_file(self.dir, 'env.py')
2016-09-02 13:41:05.978335 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in 
load_python_file
2016-09-02 13:41:05.978614 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = 
load_module_py(module_id, path)
2016-09-02 13:41:05.978932 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in 
load_module_py
2016-09-02 13:41:05.979212 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = 
imp.load_source(module_id, path, fp)
2016-09-02 13:41:05.979568 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 120, in 
2016-09-02 13:41:05.979862 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
run_migrations_online()
2016-09-02 13:41:05.980238 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 114, in run_migrations_online
2016-09-02 13:41:05.980519 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
context.run_migrations()
2016-09-02 13:41:05.980858 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"", line 8, in run_migrations
2016-09-02 13:41:05.981163 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", line 797, in 
run_migrations
2016-09-02 13:41:05.981445 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
self.get_context().run_migrations(**kw)
2016-09-02 13:41:05.981744 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 312, in 
run_migrations
2016-09-02 13:41:05.982034 | Notice: 

[Yahoo-eng-team] [Bug 1535557] Re: Multiple l3 agents are scheduled to host one newly created router if multiple interfaces are added at the same time

2016-09-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/364278
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b1ec8d523d4c45616dd71016f7e218b4b732c2ee
Submitter: Jenkins
Branch:master

commit b1ec8d523d4c45616dd71016f7e218b4b732c2ee
Author: John Schwarz 
Date:   Fri Aug 19 15:17:21 2016 +0100

Add binding_index to RouterL3AgentBinding

The patch proposes adding a new binding_index to the
RouterL3AgentBinding table, with an additional Unique Constraint that
enforces a single  per router. This goes a long
way into fixing 2 issues:

1. When scheduling a non-HA router, we only use binding_index=1. This
   means that only a single row containing that router_id can be
   committed into the database. This in fact prevents over-scheduling of
   non-HA routers. Note that for the HA router case, the binding_index
   is simply copied from the L3HARouterAgentPortBinding (since they are
   always created together they should always match).

2. This sets the ground-work for a refactor of the l3 scheduler - by
   using this binding and db-based limitation, we can schedule a router
   to agents using the RouterL3AgentBinding, while postponing the
   creation of L3HARouterAgentPortBinding objects for the agents until
   they ask for it (using sync_routers). This will be a major
   improvement over todays "everything can create
   L3HARouterAgentPortBinding" way of things).

Closes-Bug: #1535557
Change-Id: I3447ea5bcb7c57365c6f50efe12a1671e86588b3
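
An illustrative SQLAlchemy sketch of the constraint described above (column
and table names here are assumptions, not the exact Neutron schema):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class RouterL3AgentBindingSketch(Base):
        __tablename__ = 'routerl3agentbindings_sketch'
        router_id = sa.Column(sa.String(36), primary_key=True)
        l3_agent_id = sa.Column(sa.String(36), primary_key=True)
        binding_index = sa.Column(sa.Integer, nullable=False,
                                  server_default='1')
        __table_args__ = (
            # Only one row per (router_id, binding_index): a non-HA router
            # always uses binding_index=1, so it cannot be bound to two
            # agents even if two servers race on the insert.
            sa.UniqueConstraint('router_id', 'binding_index',
                                name='uniq_router_binding_index'),
        )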


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535557

Title:
  Multiple l3 agents are scheduled to host one newly created router if
  multiple interfaces are added at the same time

Status in neutron:
  Fix Released

Bug description:
  I have three all-in-one controller nodes deployed by DevStack with the
  latest codes. Neutron servers on these controllers are set behind
  Pacemaker and HAProxy to realize active/active HA. MariaDB Galera
  cluster is used as my database backend.

  In neutron.conf, I have made the following changes:
  router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler

  When we add interfaces of multiple subnets to a newly created router,
  we might end up with more than one l3 agents hosting this router. This
  bug is not easy to reproduce. You may need to repeat the following
  steps several times.

  How to reproduce:

  Prerequisite
  make the following changes in neutron.conf
  [DEFAULT]
  router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler

  Step 0: Confirm multiple l3 agents are running
  $ neutron agent-list --agent_type='L3 agent'
  my result is shown http://paste.openstack.org/show/483963/

  Step 1: Create two networks
  $ neutron net-create net-l3agent-test-1
  $ neutron net-create net-l3agent-test-2

  Step 2: Add one subnet to each of the two networks
  $ neutron subnet-create --name subnet-l3agent-test-1 net-l3agent-test-1 
192.168.11.0/24
  $ neutron subnet-create --name subnet-l3agent-test-2 net-l3agent-test-2 
192.168.12.0/24

  Step 3: Create a router
  $ neutron router-create router-l3agent-test

  Step 4: Add the two subnets as the router's interfaces immediately after 
creating the router at the same time
  On controller1:
  $ neutron router-interface-add router-l3agent-test subnet-l3agent-test-1
  On controller2:
  $ neutron router-interface-add router-l3agent-test subnet-l3agent-test-2

  Step 5: Check which l3 agent(s) is/are hosting the router
  $ neutron l3-agent-list-hosting-router router-l3agent-test
  my result is shown http://paste.openstack.org/show/483962/

  If you end up with only one l3 agent, please proceed as follows
  Step 6: Clear interfaces on the router
  $ neutron router-interface-delete router-l3agent-test subnet-l3agent-test-1
  $ neutron router-interface-delete router-l3agent-test subnet-l3agent-test-2

  Step 7: Delete the router
  $ neutron router-delete router-l3agent-test

  Go back to Step 3-5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1535557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619690] [NEW] request logging in placement api always logs success

2016-09-02 Thread Chris Dent
Public bug reported:

The request logging in the placement API will always log a status of
200, even when that's not the case, because it is getting the status from
the wrong place. A possible fix is to raise the logging up a level to
middleware, where it can access the response status more directly (after
exceptions).
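
A minimal WSGI middleware sketch (illustrative only) of why logging at that
level sees the real status: it wraps start_response, which is only called
once the handler and any exception translation have finished:

    import logging

    LOG = logging.getLogger(__name__)

    class RequestLogMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            captured = {}

            def _start_response(status, headers, exc_info=None):
                captured['status'] = status
                return start_response(status, headers, exc_info)

            result = self.app(environ, _start_response)
            # The status logged here is whatever the application actually
            # returned, not a hard-coded 200.
            LOG.info('%s %s -> %s', environ.get('REQUEST_METHOD'),
                     environ.get('PATH_INFO'), captured.get('status'))
            return result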

** Affects: nova
 Importance: Critical
 Assignee: Chris Dent (cdent)
 Status: New


** Tags: scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619690

Title:
  request logging in placement api always logs success

Status in OpenStack Compute (nova):
  New

Bug description:
  The request logging in the placement API will always log a status of
  200, even when that's not the case, because it is getting the status
  from the wrong place. A possible fix is to raise the logging up a level
  to middleware, where it can access the response status more directly
  (after exceptions).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1586268] Re: Unit test: self.assertNotEqual in unit.test_base.BaseTest.test_eq does not work in PY2

2016-09-02 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/342016
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=79f5efd4687224519cd22b98b4f37b2832b1aae5
Submitter: Jenkins
Branch:master

commit 79f5efd4687224519cd22b98b4f37b2832b1aae5
Author: Ji-Wei 
Date:   Thu Jul 14 16:58:43 2016 +0800

Class Credentials not define __ne__() built-in function

Class Credentials defines __eq__() built-in function, but does
not define __ne__() built-in function, so self.assertEqual works
but self.assertNotEqual does not work at all in this test case in
python2. This patch fixes it.

Change-Id: I2c0d9d6202d64de57700ceb7c15db8ed3ad7e8ff
Closes-Bug: #1586268
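
The usual fix is small: on Python 2, __ne__ is not derived from __eq__, so
it has to be defined explicitly. A simplified stand-in for the class (a
sketch, not the real tempest code):

    class Credentials(object):
        def __init__(self, **attrs):
            self._attrs = attrs

        def __eq__(self, other):
            return (isinstance(other, Credentials)
                    and self._attrs == other._attrs)

        def __ne__(self, other):
            # Python 2 will not infer this from __eq__; without it,
            # `a != b` falls back to identity comparison.
            return not self.__eq__(other)

    a = Credentials(username='joe')
    b = Credentials(username='joe')
    assert a == b and not (a != b)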


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1586268

Title:
  Unit test: self.assertNotEqual in  unit.test_base.BaseTest.test_eq
  does not work in PY2

Status in Ceilometer:
  Fix Released
Status in daisycloud-core:
  New
Status in Gnocchi:
  In Progress
Status in heat:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in Kosmos:
  New
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in octavia:
  New
Status in Panko:
  In Progress
Status in python-barbicanclient:
  New
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Released
Status in python-smaugclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-manilaclient:
  In Progress
Status in python-muranoclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in taskflow:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Version: master(20160527)

  In the case of cinderclient.tests.unit.test_base.BaseTest.test_eq,
  self.assertNotEqual does not work.
  Class base.Resource defines the __eq__() built-in function, but does not
  define __ne__(), so self.assertEqual works but self.assertNotEqual does not
  work at all in this test case.

  steps:
  1 Clone code of python-cinderclient from master.
  2 Modify the case of unit test: cinderclient/tests/unit/test_base.py
    line50--line62.
  def test_eq(self):
      # Two resources with same ID: never equal if their info is not equal
      r1 = base.Resource(None, {'id': 1, 'name': 'hi'})
      r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
      self.assertNotEqual(r1, r2)

      # Two resources with same ID: equal if their info is equal
      r1 = base.Resource(None, {'id': 1, 'name': 'hello'})
      r2 = base.Resource(None, {'id': 1, 'name': 'hello'})
      # self.assertEqual(r1, r2)
      self.assertNotEqual(r1, r2)

      # Two resources of different types: never equal
      r1 = base.Resource(None, {'id': 1})
      r2 = volumes.Volume(None, {'id': 1})
      self.assertNotEqual(r1, r2)

      # Two resources with no ID: equal if their info is equal
      r1 = base.Resource(None, {'name': 'joe', 'age': 12})
      r2 = base.Resource(None, {'name': 'joe', 'age': 12})
      # self.assertEqual(r1, r2)
      self.assertNotEqual(r1, r2)

     Modify self.assertEqual(r1, r2) to self.assertNotEqual(r1, r2).

  3 Run unit test, and return success.

  After that, I make a test:

  class Resource(object):
      def __init__(self, person):
          self.person = person

      def __eq__(self, other):
          return self.person == other.person

  r1 = Resource("test")
  r2 = Resource("test")
  r3 = Resource("test_r3")
  r4 = Resource("test_r4")

  print r1 != r2
  print r1 == r2
  print r3 != r4
  print r3 == r4

  The result is :
  True
  True
  True
  False

  Whether or not r1 is precisely the same as r2, self.assertNotEqual(r1,
  r2) returns true. So I think self.assertNotEqual doesn't work at all in
  Python 2 and should be modified.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1586268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597779] Re: Wily Deployements are failing in Maas

2016-09-02 Thread Scott Moser
Marked as fix-released.
As the bug states, it is fixed in xenial.
Wily is not supported any more, so it's wont-fix there.


** No longer affects: maas

** Changed in: cloud-init
   Status: New => Fix Released

** Changed in: cloud-init
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1597779

Title:
  Wily Deployements are failing in Maas

Status in cloud-init:
  Fix Released

Bug description:
  MAAS Version:  1.9.2 ,  1.9.3,  2.0 Beta 8

  Problem Description: MAAS Wily deployments have been failing. Even
  though end of life is upon us, this should still be addressed for those
  who are running regression testing across the distributions.

  This has failed for me on arm64/amd64, so it should be easily
  reproducible; I have rebuilt MAAS 2.0 recently and am still encountering
  this issue.

  
  Maas log:  

  Jun 30 13:39:23 maas-devel maas.node: [INFO] ms10-39-mcdivittb0: Status 
transition from READY to ALLOCATED
  Jun 30 13:39:23 maas-devel maas.node: [INFO] ms10-39-mcdivittb0: allocated to 
user ubuntu
  Jun 30 13:39:23 maas-devel maas.node: [INFO] ms10-39-mcdivittb0: Status 
transition from ALLOCATED to DEPLOYING
  Jun 30 13:39:23 maas-devel maas.power: [INFO] Changing power state (on) of 
node: ms10-39-mcdivittb0 (4y3h7p)
  Jun 30 13:39:35 maas-devel maas.power: [INFO] Changed power state (on) of 
node: ms10-39-mcdivittb0 (4y3h7p)
  Jun 30 13:42:02 maas-devel maas.node: [INFO] ms10-39-mcdivittb0: Status 
transition from DEPLOYING to FAILED_DEPLOYMENT
  Jun 30 13:42:02 maas-devel maas.node: [ERROR] ms10-39-mcdivittb0: Marking 
node failed: Installation failed (refer to the installation log for more 
information).

  
  At the time MAAS switches the deployment attempt to failed, we can see
  the following on the host console:

  [  OK  ] Started Initial cloud-init job (pre-networking).
   Starting Initial cloud-init job (metadata service crawler)...
  [   61.547288] cloud-init[998]: Cloud-init v. 0.7.7 running 'init' at Thu, 30 
Jun 2016 10:02:08 +. Up 61.35 seconds.
  [   61.547966] cloud-init[998]: ci-info: 
Net device 
info
  [   61.548500] cloud-init[998]: ci-info: 
+--+---+--+---+---+---+
  [   61.549001] cloud-init[998]: ci-info: |  Device  |   Up  |   
Address|  Mask | Scope | Hw-Address|
  [   61.549483] cloud-init[998]: ci-info: 
+--+---+--+---+---+---+
  [   61.550051] cloud-init[998]: ci-info: |  enp1s0  |  True |
10.229.65.139 |  255.255.0.0  |   .   | fc:15:b4:21:00:c2 |
  [   61.550562] cloud-init[998]: ci-info: |  enp1s0  |  True |  
fe80::fe15:b4ff:fe21:c2/64  |   .   |  link | fc:15:b4:21:00:c2 |
  [   61.551064] cloud-init[998]: ci-info: | enp1s0d1 | False |  .  
 |   .   |   .   | fc:15:b4:21:00:c3 |
  [   61.551582] cloud-init[998]: ci-info: |lo|  True |  
127.0.0.1   |   255.0.0.0   |   .   | . |
  [   61.552094] cloud-init[998]: ci-info: |lo|  True |   
::1/128|   .   |  host | . |
  [   61.552588] cloud-init[998]: ci-info: |  lxcbr0  |  True |   
10.0.3.1   | 255.255.255.0 |   .   | 96:55:5e:f3:e8:4d |
  [   61.553092] cloud-init[998]: ci-info: |  lxcbr0  |  True | 
fe80::9455:5eff:fef3:e84d/64 |   .   |  link | 96:55:5e:f3:e8:4d |
  [   61.553568] cloud-init[998]: ci-info: 
+--+---+--+---+---+---+
  [   61.553971] cloud-init[998]: ci-info: Route 
IPv4 info+
  [   61.554368] cloud-init[998]: ci-info: 
+---+-++---+---+---+
  [   61.554762] cloud-init[998]: ci-info: | Route | Destination |  Gateway   | 
   Genmask| Interface | Flags |
  [   61.555161] cloud-init[998]: ci-info: 
+---+-++---+---+---+
  [   61.61] cloud-init[998]: ci-info: |   0   |   0.0.0.0   | 10.229.0.1 | 
   0.0.0.0|   enp1s0  |   UG  |
  [   61.555956] cloud-init[998]: ci-info: |   1   |   10.0.3.0  |  0.0.0.0   | 
255.255.255.0 |   lxcbr0  |   U   |
  [   61.556349] cloud-init[998]: ci-info: |   2   |  10.229.0.0 |  0.0.0.0   | 
 255.255.0.0  |   enp1s0  |   U   |
  [   61.556741] cloud-init[998]: ci-info: 
+---+-++---+---+---+
  [   61.559020] cloud-init[998]: 2016-06-30 10:02:09,115 - 
url_helper.py[WARNING]: Setting oauth clockskew for 10.229.32.22 to 14849
  [   61.559436] cloud-init[998]: 2016-06-30 10:02:09,115 - 
handlers.py[WARNING]: failed 

[Yahoo-eng-team] [Bug 1589998] Re: cloud-init runs in single user mode

2016-09-02 Thread Scott Moser
Hi,

I've set this to 'wishlist' as I'm pretty sure it is fixed in 16.04,
with systemd units.  The issue is probably still present in upstart.

I'm open to patches for this, and if it affects 16.04, please take out
of fix-released and comment.

Thanks for filing the bug.

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Wishlist

** Changed in: cloud-init (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1589998

Title:
  cloud-init runs in single user mode

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  When I choose single user mode by editing the boot params and adding
  single, I would expect cloud-init and all the other cloud* services to
  see that fact and not run.

  I consider this a bug.

  Is there a work-around to make it not run?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1589998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414218] Re: Remove extraneous trace in linux/dhcp.py

2016-09-02 Thread Corey Bryant
** Also affects: cloud-archive/icehouse
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: Confirmed => Invalid

** Changed in: cloud-archive/icehouse
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414218

Title:
  Remove extraneous trace in linux/dhcp.py

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive icehouse series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  Fix Released

Bug description:
  [Impact]

  The debug tracepoint in Dnsmasq._output_hosts_file is extraneous and
  causes unnecessary performance overhead when creating lots (> 1000)
  ports at one time.

  The trace point is unnecessary since the data is being written to disk
  and the file can be examined in a worst case scenario. The added
  performance overhead is an order of magnitude in difference (~.5
  seconds versus ~.05 seconds at 1500 ports).

  [Test Case]

  1. Deploy OpenStack using neutron for networking
  2. Create 1500 ports
  3. Observe the performance degradation for each port creation.

  [Regression Potential]

  Minimal. This code has been running in stable/juno, stable/kilo, and
  above for awhile.

  [Other Questions]

  This is likely to occur in OpenStack deployments which have large
  networks deployed. The degradation is gradual, but the performance
  becomes unacceptable with large enough networks.
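
  An illustrative sketch of the change (not the actual Neutron code): build
  and write the hosts file without logging every entry, since the file on
  disk can be inspected directly when debugging is needed:

      def output_hosts_file(path, ports):
          buf = []
          for port in ports:
              # No per-port LOG.debug here: with well over a thousand ports
              # the logging alone dominated the runtime of this method.
              buf.append('%s,%s' % (port['mac'], port['ip']))
          with open(path, 'w') as f:
              f.write('\n'.join(buf))

      output_hosts_file('/tmp/dnsmasq_hosts_sketch', [
          {'mac': 'fa:16:3e:00:00:01', 'ip': '10.0.0.3'},
      ])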

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1414218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619639] [NEW] Unable to launch Instance with Mitaka - Unexpected API Error

2016-09-02 Thread Kuldeep Khandelwal
Public bug reported:

Hi,

Description
===

I have installed the OpenStack Mitaka release manually and am getting the
following error. Please find the nova-api error log attached.

root@con:/home/con# neutron net-list
+--+-+--+
| id   | name| subnets  
|
+--+-+--+
| a5bf7834-7f5f-4c64-ad00-8c40cbc72227 | provider| 
229751f3-b983-4c09-a07b-9c16d0d35e59 192.168.57.0/24 |
| 4bf2a174-8d35-4306-97f5-acf015637a12 | selfservice | 
87085446-68b4-400a-9158-012b20aaeb71 172.16.1.0/24   |
+--+-+--+
root@con:/home/con# openstack server create --flavor m1.tiny --image cirros 
--nic net-id=4bf2a174-8d35-4306-97f5-acf015637a12 --security-group default 
--key-name mykey selfservice-instance
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-fafcfa29-dcb9-4c33-a8ea-aee09963cf5f)
root@con:/home/con# 
root@con:/home/con# 
root@con:/home/con# 

Steps to reproduce

Install Openstack Mitaka release from following guide

http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-
selfservice.html

Expected result
===
All verifications during installation of all components were successful. The
instance should launch successfully.

Actual result
=
Getting above error. 

Environment
===
Mitaka - Manual Installation. 

Logs & Configs
==

nova-api logs attached.

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova-api.log"
   
https://bugs.launchpad.net/bugs/1619639/+attachment/4732877/+files/nova-api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619639

Title:
  Unable to launch Instance with Mitaka - Unexpected API Error

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  Description
  ===

  I have installed the OpenStack Mitaka release manually and am getting the
  following error. Please find the nova-api error log attached.

  root@con:/home/con# neutron net-list
  
+--+-+--+
  | id   | name| subnets
  |
  
+--+-+--+
  | a5bf7834-7f5f-4c64-ad00-8c40cbc72227 | provider| 
229751f3-b983-4c09-a07b-9c16d0d35e59 192.168.57.0/24 |
  | 4bf2a174-8d35-4306-97f5-acf015637a12 | selfservice | 
87085446-68b4-400a-9158-012b20aaeb71 172.16.1.0/24   |
  
+--+-+--+
  root@con:/home/con# openstack server create --flavor m1.tiny --image cirros 
--nic net-id=4bf2a174-8d35-4306-97f5-acf015637a12 --security-group default 
--key-name mykey selfservice-instance
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-fafcfa29-dcb9-4c33-a8ea-aee09963cf5f)
  root@con:/home/con# 
  root@con:/home/con# 
  root@con:/home/con# 

  Steps to reproduce

  Install Openstack Mitaka release from following guide

  http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-
  selfservice.html

  Expected result
  ===
  All verifications during installation of all components were successful.
  The instance should launch successfully.

  Actual result
  =
  Getting above error. 

  Environment
  ===
  Mitaka - Manual Installation. 

  Logs & Configs
  ==

  nova-api logs attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619637] [NEW] ovsdb native DbSetCommand doesn't convert values into strings

2016-09-02 Thread Jakub Libosvar
Public bug reported:

When a dictionary passed inside external_ids contains a list, the native
Python OVS library raises an ovsdb error ("expected string, got ..."), while
the vsctl interface converts the value to a string and stores it correctly in
the database.
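
A rough sketch of the kind of conversion that would make the native
interface behave like vsctl (names here are assumptions, not the actual
ovs-lib code):

    import six

    def stringify_external_ids(external_ids):
        converted = {}
        for key, value in external_ids.items():
            if isinstance(value, six.string_types):
                converted[key] = value
            else:
                # Lists, dicts, ints, ... become their string representation,
                # which is effectively what the vsctl-based interface does.
                converted[key] = str(value)
        return converted

    print(stringify_external_ids({'iface-id': 'port-1', 'tags': [1, 2, 3]}))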

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovs-lib

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619637

Title:
  ovsdb native DbSetCommand doesn't convert values into strings

Status in neutron:
  New

Bug description:
  When a dictionary passed inside external_ids contains a list, the native
  Python OVS library raises an ovsdb error ("expected string, got ..."),
  while the vsctl interface converts the value to a string and stores it
  correctly in the database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1619637/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619624] [NEW] Pagination is not present for Identity-->Projects .

2016-09-02 Thread Sunkara Ramya Sree
Public bug reported:

In Mitaka, there is no pagination for Identity-->Projects,
and the "Number of items per page" setting does not work on this page.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1619624

Title:
  Pagination is not present for Identity-->Projects .

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Mitaka, there is no pagination for Identity-->Projects,
  and the "Number of items per page" setting does not work on this page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1619624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619606] [NEW] snapshot_volume_backed races, could result in data corruption

2016-09-02 Thread Matthew Booth
Public bug reported:

snapshot_volume_backed() in compute.API does not set a task_state during
execution. However, in essence it does:

if vm_state == ACTIVE:
  quiesce()
snapshot()
if vm_state == ACTIVE:
  unquiesce()

There is no exclusion here, though, which means a user could do:

quiesce()
   quiesce()
snapshot()
   snapshot()

unquiesce()--snapshot() now running after unquiesce -> corruption
   unquiesce()

or:

suspend()
snapshot()
  NO QUIESCE (we're suspended)
  snapshot()
   resume()
  --snapshot() now running after resume -> corruption

Same goes for stop/start.

Note that snapshot_volume_backed() is a separate top-level entry point
from snapshot(). snapshot() does not suffer from this problem, because
it atomically sets the task state to IMAGE_SNAPSHOT_PENDING when
running, which prevents the user from performing a concurrent operation
on the instance. I suggest that snapshot_volume_backed() should do the
same.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619606

Title:
  snapshot_volume_backed races, could result in data corruption

Status in OpenStack Compute (nova):
  New

Bug description:
  snapshot_volume_backed() in compute.API does not set a task_state
  during execution. However, in essence it does:

  if vm_state == ACTIVE:
quiesce()
  snapshot()
  if vm_state == ACTIVE:
unquiesce()

  There is no exclusion here, though, which means a user could do:

  quiesce()
 quiesce()
  snapshot()
 snapshot()

  unquiesce()--snapshot() now running after unquiesce -> corruption
 unquiesce()

  or:

  suspend()
  snapshot()
NO QUIESCE (we're suspended)
snapshot()
 resume()
--snapshot() now running after resume -> corruption

  Same goes for stop/start.

  Note that snapshot_volume_backed() is a separate top-level entry point
  from snapshot(). snapshot() does not suffer from this problem, because
  it atomically sets the task state to IMAGE_SNAPSHOT_PENDING when
  running, which prevents the user from performing a concurrent
  operation on the instance. I suggest that snapshot_volume_backed()
  should do the same.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619602] [NEW] Hyper-V: vhd config drive images are not migrated

2016-09-02 Thread Lucian Petrut
Public bug reported:

During cold migration, vhd config drive images are not copied over, on
the wrong assumption that the instance is already configured and does
not need the config drive.

There is an explicit check at the following location:
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L75-L76

For this reason, migrating instances using vhd config drives will fail, as
there is a check ensuring that the config drive is present, if required:
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L153-L163

The Hyper-V driver should not skip moving the config drive image.
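
A rough sketch of the intended behaviour (file names and helpers here are
assumptions): when moving the instance files, any config drive image has to
travel with them instead of being skipped:

    import os
    import shutil

    def copy_config_drive(instance_dir, dest_dir):
        configdrive_path = os.path.join(instance_dir, 'configdrive.vhd')
        if os.path.exists(configdrive_path):
            # Do not skip this: the destination host later checks that a
            # required config drive is present and fails the migration
            # if it is missing.
            shutil.copy2(configdrive_path, dest_dir)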

** Affects: compute-hyperv
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: drivers hyper-v

** Project changed: os-win => nova

** Also affects: compute-hyperv
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619602

Title:
  Hyper-V: vhd config drive images are not migrated

Status in compute-hyperv:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  During cold migration, vhd config drive images are not copied over, on
  the wrong assumption that the instance is already configured and does
  not need the config drive.

  There is an explicit check at the following location:
  
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L75-L76

  For this reason, migrating instances using vhd config drives will fail, as
there is a check ensuring that the config drive is present, if required:
  
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L153-L163

  The Hyper-V driver should not skip moving the config drive image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/compute-hyperv/+bug/1619602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619602] [NEW] Hyper-V: vhd config drive images are not migrated

2016-09-02 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

During cold migration, vhd config drive images are not copied over, on
the wrong assumption that the instance is already configured and does
not need the config drive.

There is an explicit check at the following location:
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L75-L76

For this reason, migrating instances using vhd config drives will fail, as
there is a check ensuring that the config drive is present, if required:
https://github.com/openstack/nova/blob/8f35bb321d26bd7d296c57f4188ec12fcde897c3/nova/virt/hyperv/migrationops.py#L153-L163

The Hyper-V driver should not skip moving the config drive image.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: drivers hyper-v
-- 
Hyper-V: vhd config drive images are not migrated
https://bugs.launchpad.net/bugs/1619602
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545675] Re: Resizing a pinned VM results in inconsistent state

2016-09-02 Thread Stephen Finucane
** Changed in: nova/mitaka
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545675

Title:
  Resizing a pinned VM results in inconsistent state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Fix Released

Bug description:
  It appears that executing certain resize operations on a pinned
  instance results in inconsistencies in the "state machine" that Nova
  uses to track instances. This was identified using Tempest and
  manifests itself in failures in follow-up shelve/unshelve operations.

  ---

  # Steps

  Testing was conducted on host containing a single-node, Fedora
  23-based (4.3.5-300.fc23.x86_64) OpenStack instance (built with
  DevStack). The '12d224e' commit of Nova was used. The Tempest tests
  (commit 'e913b82') were run using modified flavors, as seen below:

  nova flavor-create m1.small_nfv 420 2048 0 2
  nova flavor-create m1.medium_nfv 840 4096 0 4
  nova flavor-key 420 set "hw:numa_nodes=2"
  nova flavor-key 840 set "hw:numa_nodes=2"
  nova flavor-key 420 set "hw:cpu_policy=dedicated"
  nova flavor-key 840 set "hw:cpu_policy=dedicated"

  cd $TEMPEST_DIR
  cp etc/tempest.conf etc/tempest.conf.orig
  sed -i "s/flavor_ref = .*/flavor_ref = 420/" etc/tempest.conf
  sed -i "s/flavor_ref_alt = .*/flavor_ref_alt = 840/" etc/tempest.conf

  Tests were run in the order given below.

  1. 
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
  2. 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_shelve_unshelve_server
  3. 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert
  4. 
tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
  5. 
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_shelve_unshelve_server

  Like so:

  ./run_tempest.sh --
  tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance

  # Expected Result

  The tests should pass.

  # Actual Result

  +---+--++
  | # | test id  | status |
  +---+--++
  | 1 | 1164e700-0af0-4a4c-8792-35909a88743c |   ok   |
  | 2 | 77eba8e0-036e-4635-944b-f7a8f3b78dc9 |   ok   |
  | 3 | c03aab19-adb1-44f5-917d-c419577e9e68 |   ok   |
  | 4 | 1164e700-0af0-4a4c-8792-35909a88743c |  FAIL  |
  | 5 | c03aab19-adb1-44f5-917d-c419577e9e68 |   ok*  |

  * this test reports as passing but is actually generating errors. Bad
  test! :)

  One test fails while the other "passes" but raises errors. The
  failures, where raised, are CPUPinningInvalid exceptions:

  CPUPinningInvalid: Cannot pin/unpin cpus [1] from the following
  pinned set [0, 25]

  **NOTE:** I also think there are issues with the non-reverted resize
  test, though I've yet to investigate this:

  *
  
tempest.scenario.test_server_advanced_ops.TestServerAdvancedOps.test_resize_server_confirm

  What's worse, this error "snowballs" on successive runs. Because of
  the nature of the failure (a failure to pin/unpin CPUs), we're left
  with a list of CPUs that Nova thinks are pinned but which are no
  longer actually used. This is reflected by the resource tracker.

  $ openstack server list

  $ cat /opt/stack/logs/screen/n-cpu.log | grep 'Total usable vcpus' | tail 
-1
  *snip* INFO nova.compute.resource_tracker [*snip*] Total usable vcpus: 
40, total allocated vcpus: 8

  The error messages for both are given below, along with examples of
  this "snowballing" CPU list:

  {0}
  tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
  [36.713046s] ... FAILED

   Setting instance vm_state to ERROR
   Traceback (most recent call last):
     File "/opt/stack/nova/nova/compute/manager.py", line 2474, in 
do_terminate_instance
   self._delete_instance(context, instance, bdms, quotas)
     File "/opt/stack/nova/nova/hooks.py", line 149, in inner
   rv = f(*args, **kwargs)
     File "/opt/stack/nova/nova/compute/manager.py", line 2437, in 
_delete_instance
   quotas.rollback()
     File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, 
in __exit__
   self.force_reraise()
     File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, 
in force_reraise
   six.reraise(self.type_, self.value, self.tb)
     File "/opt/stack/nova/nova/compute/manager.py", line 2432, in 
_delete_instance
   self._update_resource_tracker(context, instance)
     File "/opt/stack/nova/nova/compute/manager.py", line 751, in 
_update_resource_tracker
   rt.update_usage(context, instance)
     File 

[Yahoo-eng-team] [Bug 1609217] Re: DVR: dvr router should not exist in not-binded network node

2016-09-02 Thread LIU Yulong
** Changed in: neutron
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1609217

Title:
  DVR: dvr router should not exist in not-binded network node

Status in neutron:
  In Progress

Bug description:
  ENV:
  stable/mitaka
  hosts:
  compute1 (nova-compute, l3-agent (dvr), metadata-agent)
  compute2 (nova-compute, l3-agent (dvr), metadata-agent)
  network1 (l3-agent (dvr_snat), metadata-agent, dhcp-agent)
  network2 (l3-agent (dvr_snat), metadata-agent, dhcp-agent)

  How to reproduce? (scenario 1)
  set: dhcp_agents_per_network = 2

  1. create a DVR router:
  neutron router-create --ha False --distributed True test1

  2. Create a network & subnet with dhcp enabled.
  neutron net-create test1
  neutron subnet-create --enable-dhcp test1 --name test1 192.168.190.0/24

  3. Attach the router and subnet
  neutron router-interface-add test1 subnet=test1

  Then the router test1 will exist on both network1 and network2, but in
  the routerl3agentbindings DB table there is only one record binding the
  DVR router to one l3 agent.

  http://paste.openstack.org/show/547695/

  And for another scenario (scenario 2):
  change the network2 node deployment to run only metadata-agent and dhcp-agent.
  Both the qdhcp-namespace and the VM could still ping each other.
  So the qrouter-namespace on the not-bound network node is not used, and
  should not exist.

  Code:
  The function at the following location should not return the DVR router id
  in scenario 1.
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L263

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1609217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619161] Re: flavor-list need return the extra-specs information directly

2016-09-02 Thread Ukesh
With the current API design, we have to make multiple requests to get the
extra specs of all the flavors.

Extra-spec information is not returned by '/v2.1/{tenant_id}/flavors' or by
'/v2.1/{tenant_id}/flavors/detail', so to get the extra specs for every single
flavor we have to call '/v2.1/{tenant_id}/flavors/{flavor_id}/os-extra_specs'.
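
For illustration, this is roughly what a client has to do today (a sketch
that assumes an already configured keystoneauth session; Flavor.get_keys()
is the novaclient call that issues the per-flavor os-extra_specs GET):

    from novaclient import client as nova_client

    def list_flavors_with_specs(session):
        nova = nova_client.Client('2.1', session=session)
        flavors = nova.flavors.list()          # one GET for the flavor list
        result = {}
        for flavor in flavors:
            # one additional GET per flavor just for its extra specs
            result[flavor.name] = flavor.get_keys()
        return result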

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1619161

Title:
  flavor-list need return the extra-specs information directly

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  The command nova flavor-list --extra-specs can show extra-specs
  information, but using --debug I can see a lot of HTTP GET requests
  issued to fetch the extra_specs information of each flavor.

  As the number of flavors grows, more and more GET requests are made.
  This affects the performance of the query.

  I think that since the query returns a list of flavors, it should
  directly contain the extra_specs information.

  Steps to reproduce
  ==
  A chronological list of steps which will bring off the
  issue you noticed:
  * I performed the command:
$ nova --debug flavor-list --extra-specs

  Environment
  ===
  1. Exact version of OpenStack
   Mitaka

  
  Logs & Configs
  ==
  The debug info:
  DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/1/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
  DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/1/os-extra_specs HTTP/1.1" 200 66
  DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:08 GMT 
connection: keep-alive content-type: application/json content-length: 66 
x-compute-request-id: req-15182618-4b28-4c78-87ef-d51f8da309f3 
  RESP BODY: {"extra_specs": {}}

  DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/2/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
  DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/2/os-extra_specs HTTP/1.1" 200 19
  DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:09 GMT 
connection: keep-alive content-type: application/json content-length: 19 
x-compute-request-id: req-b519d74e-ed98-48e9-90be-838287f7e407 
  RESP BODY: {"extra_specs": {}}

  DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/3/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
  DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/3/os-extra_specs HTTP/1.1" 200 19
  DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:09 GMT 
connection: keep-alive content-type: application/json content-length: 19 
x-compute-request-id: req-ad796e53-e8be-4caa-b182-219a1f3e63ca 
  RESP BODY: {"extra_specs": {}}

  DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/97/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
  DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/97/os-extra_specs HTTP/1.1" 200 39
  DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:09 GMT 
connection: keep-alive content-type: application/json content-length: 39 
x-compute-request-id: req-4c8d466e-d013-4549-ae74-8ea4ca578061 
  RESP BODY: {"extra_specs": {"hw:numa_nodes": "1"}}

  DEBUG (session:195) REQ: curl -g -i -X GET 
http://10.43.239.62:8774/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/99/os-extra_specs
 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}ae80c1ee126b1a4464c13c843706cc6a5b1bf259"
  DEBUG (connectionpool:368) "GET 
/v2/ed952123e0cc4ced9e581a7710bc24d5/flavors/99/os-extra_specs HTTP/1.1" 200 39
  DEBUG (session:224) RESP: [200] date: Thu, 01 Sep 2016 06:27:09 GMT 
connection: keep-alive content-type: application/json content-length: 39 
x-compute-request-id: req-9663e309-b421-45dd-9d6a-43f5a5464eab 
  RESP BODY: {"extra_specs": {"hw:numa_nodes": "2"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1619161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1604662] Re: Bulk creation for security group returns 500 error.

2016-09-02 Thread Reedip
Python-neutronclient  and openstackclient also do not support bulk
creation of security groups.

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
 Assignee: (unassigned) => Reedip (reedip-banerjee)

** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

** Changed in: python-openstackclient
 Assignee: (unassigned) => Reedip (reedip-banerjee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604662

Title:
  Bulk creation for security group returns 500 error.

Status in neutron:
  Confirmed
Status in python-neutronclient:
  New
Status in python-openstackclient:
  New

Bug description:
  
  API request
  
  vagrant@ubuntu:~$ curl -i -X POST -H "X-Auth-Token: $TOKEN" 
http://192.168.122.139:9696/v2.0/security-groups -d 
'{"security_groups":[{"security_group":{"name":"hobo1"}}]}'
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json
  Content-Length: 150
  X-Openstack-Request-Id: req-48d5282e-f0b6-48b8-887c-7aa0c953ee88
  Date: Wed, 20 Jul 2016 03:54:06 GMT

  {"NeutronError": {"message": "Request Failed: internal server error
  while processing your request.", "type": "HTTPInternalServerError",
  "detail": ""}}

  trace in neutron server
  ===
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
[req-48d5282e-f0b6-48b8-887c-7aa0c953ee88 e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de -
   - -] create failed: No details.
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 401, in create
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 500, in _create
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource objs = 
do_create(body, bulk=True)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 496, in do_create
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
request.context, reservation.reservation_id)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 489, in do_create
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource return 
obj_creator(request.context, **kwargs)
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource TypeError: 
create_security_group_bulk() got an unexpected keyword argument 
'security_groups'
  2016-07-20 12:54:06.234 5351 ERROR neutron.api.v2.resource
  2016-07-20 12:54:06.241 5351 INFO neutron.wsgi 
[req-48d5282e-f0b6-48b8-887c-7aa0c953ee88 e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de - - -] 192.168.122.139 - - [20/Jul/2016 
12:54:06] "POST /v2.0/security-groups HTTP/1.1" 500 344 0.203171

To manage notifications about this bug go to: