[Yahoo-eng-team] [Bug 1542470] Re: Correctly set oslo.log configuration default_log_levels

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254253
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=e7a7e46c5c91693d386f390a187d62e3ab5767f2
Submitter: Jenkins
Branch: master

commit e7a7e46c5c91693d386f390a187d62e3ab5767f2
Author: Ronald Bradford 
Date:   Mon Dec 7 10:32:04 2015 -0500

Use oslo.log specified method to set log levels

Use the documented oslo.log [1] method for altering the
default_log_levels to be consistent with TC approved projects.

[1] http://docs.openstack.org/developer/oslo.log/usage.html

Closes-Bug: 1542470
Change-Id: I4999c218e963764bd7c35f813941450b11dc9aa1


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1542470

Title:
  Correctly set oslo.log configuration default_log_levels

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Keystone currently sets an oslo.log configuration variable directly in
  the CONF object rather than using the oslo.log-provided methods for
  overriding configuration options.

  
  # keystone/common/config.py
  CONF.set_default('default_log_levels', ...)

  This should be as documented in
  http://docs.openstack.org/developer/oslo.log/usage.html set via

  log.set_defaults(default_log_levels=...)
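
  The difference between the two calls can be modeled with a small
  stand-in (this is NOT oslo.log itself; the class and values below are
  illustrative only, and in real code the call is
  oslo_log.log.set_defaults(default_log_levels=[...])):

```python
# Stand-in model of why a library-provided set_defaults() is the right
# override point: the library owns its option defaults and exposes a
# documented setter, so consumers never mutate its CONF object directly.
class LogLibrary:
    """Minimal sketch of a library that owns its own config defaults."""

    def __init__(self):
        # illustrative defaults; keystone's actual list is elided above
        self.default_log_levels = ['amqp=WARN', 'boto=WARN']

    def set_defaults(self, default_log_levels=None):
        # the supported override point, analogous to
        # oslo_log.log.set_defaults()
        if default_log_levels is not None:
            self.default_log_levels = list(default_log_levels)


lib = LogLibrary()
lib.set_defaults(default_log_levels=['amqp=WARN', 'dogpile.core=INFO'])
```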

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542470/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540728] Re: fix hiding of domain and region fields on login screen

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275001
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=169fb08051dff0d4c1ded55747b1fb037cdce176
Submitter: Jenkins
Branch: master

commit 169fb08051dff0d4c1ded55747b1fb037cdce176
Author: daniel-a-nguyen 
Date:   Mon Feb 1 20:29:42 2016 -0800

Fixes reference to css for domain and region

Horizon fails to hide the domain and region fields when WebSSO is
enabled.  All fields should be hidden when the auth_type is not
'Keystone Credentials'.

Change-Id: I782dd3094a34c95127781f4040124a897bb46b1a
Closes-Bug: #1540728


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540728

Title:
  fix hiding of domain and region fields on login screen

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When Horizon is configured with WebSSO enabled, the domain and region
  fields should be hidden when the selected auth_type is NOT
  "Keystone Credentials".

  Steps to Reproduce
  --
  1. Edit the local_settings.py to have the following

  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

  WEBSSO_ENABLED = True
  WEBSSO_CHOICES = (
   ("credentials", "Keystone Credentials"),
   ("saml2", "ADFS Credentials"),
   )

  * In addition to Keystone V3 settings.

  2. Log in to Horizon
  3. Select 'Keystone Credentials' from the dropdown control.
  4. Observe that the domain field is present along with the name and password 
fields.
  5. Select 'ADFS Credentials'
  6. Observe that the domain field is present.

  Expected Behavior
  
  Selecting 'ADFS Credentials' or anything other than 'Keystone Credentials' 
should hide all the input fields including domain and region.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533663] Re: Volume migration works only via CLI

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/276720
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=6a8c054097f87fe2ae5c4ee1886b84d4e4c561fb
Submitter: Jenkins
Branch: master

commit 6a8c054097f87fe2ae5c4ee1886b84d4e4c561fb
Author: Itxaka 
Date:   Fri Feb 5 14:10:19 2016 +0100

api cinder volume_migrate wrong number of params

We were missing the lock_volume parameter, causing the call
to volume_migrate to always fail.

Change-Id: I8d2b5db9e2cb9551ac4bb47564b1f81c088d4ed3
Co-Authored-By: Dmitry Galkin
Closes-bug: #1533663


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1533663

Title:
  Volume migration works only via CLI

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Running OpenStack Stable/Liberty on CentOS 7.1.

  The volume migration between Cinder hosts does not work when invoked via
  Horizon (a pop-up shows: "Error: Failed to migrate volume."). The volume
  migration invoked via the command line, however, works fine.

  In /var/log/httpd/error_log the following messages can be found on
  every attempt to migrate a volume via Dashboard:

  [:error] [pid 4111] Recoverable error: migrate_volume() takes exactly
  5 arguments (4 given)

  The error presumably comes from /usr/lib/python2.7/site-
  packages/cinderclient/v2/volumes.py at line 578.

  The 'migrate_volume' method there expects all arguments as positional,
  and the 'lock_volume' argument is not provided when the migration is
  invoked from Horizon.

  Making 'lock_volume' a kwarg that defaults to False fixes the issue and
  does not break the original CLI behavior: when a volume migration is
  invoked via the CLI, lock_volume will be False unless the respective flag
  was explicitly given.
  With this change, volume migration invoked via Horizon works, but the
  volume cannot be 'locked' during migration. That functionality does not
  appear to be fully integrated into Horizon yet: there is no check box in
  the frontend, and I could not find blueprints proposing those changes.

  So, the attached patch is a simple workaround rather than a solution; it
  allows running volume migrations via Horizon, though with no volume
  locking.
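
  A hedged sketch of the signature change described above (names follow
  the bug report, not the actual cinderclient source): giving
  'lock_volume' a False default lets Horizon's four-argument call succeed
  while the CLI can still pass the flag explicitly.

```python
# Illustrative stand-in for the cinderclient volume manager; only the
# keyword-default pattern is the point here.
class VolumeManager:
    def migrate_volume(self, volume, host, force_host_copy,
                       lock_volume=False):
        # with a default, callers that omit lock_volume (Horizon) work,
        # and CLI callers that pass it explicitly are unaffected
        return {'volume': volume, 'host': host,
                'force_host_copy': force_host_copy,
                'lock_volume': lock_volume}


mgr = VolumeManager()
horizon_call = mgr.migrate_volume('vol-1', 'host-b', False)  # 4-arg call
cli_call = mgr.migrate_volume('vol-1', 'host-b', False, lock_volume=True)
```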

  
  My setup includes two Cinder storage nodes (LVM/iSCSI), where one is
  also a controller (i.e. it runs cinder-api and cinder-scheduler).

  The versions are as follows:

  openstack-dashboard  1:8.0.0-1.el7
  openstack-dashboard-theme  1:8.0.0-1.el7
  openstack-cinder  1:7.0.1-1.el7
  python-cinder  1:7.0.1-1.el7
  python-cinderclient  1.4.0-1.el7

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1533663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542587] [NEW] unit tests failed when change exit() to sys.exit()

2016-02-05 Thread Ren Qiaowei
Public bug reported:

When exit() is changed to sys.exit() in
keystone.cmd.cli.DomainConfigUpload.main(), the following unit tests
fail:

File "keystone/tests/unit/test_cli.py", line 357, in test_config_upload
File "keystone/tests/unit/test_cli.py", line 297, in test_no_overwrite_config
File "keystone/tests/unit/test_cli.py", line 323, in test_config_upload
File "keystone/tests/unit/test_cli.py", line 340, in test_config_upload

the log is as follows:

Captured traceback:
~~~
Traceback (most recent call last):
  File "keystone/tests/unit/test_cli.py", line 357, in test_config_upload
self.assertRaises(SystemExit, cli.DomainConfigUpload.main)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 434, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 445, in assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 495, in _matchHelper
mismatch = matcher.match(matchee)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
mismatch = self.exception_matcher.match(exc_info)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
mismatch = matcher.match(matchee)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 426, in match
reraise(*matchee)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
result = matchee()
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 982, in __call__
return self._callable_object(*self._args, **self._kwargs)
  File "keystone/cmd/cli.py", line 696, in main
sys.exit(status)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", 
line 1062, in __call__
return _mock_self._mock_call(*args, **kwargs)
  File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", 
line 1118, in _mock_call
raise effect
keystone.tests.unit.core.UnexpectedExit
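
A minimal reproduction of this failure mode (an assumption based on the
traceback above: the keystone tests patch sys.exit with a mock whose
side_effect raises a test-only UnexpectedExit). Calling the builtin
exit() bypasses the patch, while sys.exit() hits it, so
assertRaises(SystemExit) sees UnexpectedExit instead:

```python
import sys
from unittest import mock


class UnexpectedExit(Exception):
    """Stand-in for keystone.tests.unit.core.UnexpectedExit."""


def main():
    sys.exit(1)  # previously exit(1), which the mock did not intercept


# Patch sys.exit so calling it raises UnexpectedExit, as the tests do.
with mock.patch.object(sys, 'exit', side_effect=UnexpectedExit):
    try:
        main()
        outcome = 'no exception'
    except SystemExit:
        outcome = 'SystemExit'
    except UnexpectedExit:
        outcome = 'UnexpectedExit'  # what the failing tests actually see
```

The fix is either to keep exit() in main() or to update the tests so the
mocked sys.exit raises SystemExit.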

** Affects: keystone
 Importance: Undecided
 Assignee: Ren Qiaowei (qiaowei-ren)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Ren Qiaowei (qiaowei-ren)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1542587

Title:
  unit tests failed when change exit() to sys.exit()

Status in OpenStack Identity (keystone):
  New

Bug description:
  When exit() is changed to sys.exit() in
  keystone.cmd.cli.DomainConfigUpload.main(), the following unit tests
  fail:

  File "keystone/tests/unit/test_cli.py", line 357, in test_config_upload
  File "keystone/tests/unit/test_cli.py", line 297, in test_no_overwrite_config
  File "keystone/tests/unit/test_cli.py", line 323, in test_config_upload
  File "keystone/tests/unit/test_cli.py", line 340, in test_config_upload

  the log is as follows:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_cli.py", line 357, in test_config_upload
  self.assertRaises(SystemExit, cli.DomainConfigUpload.main)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 434, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 445, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 495, in _matchHelper
  mismatch = matcher.match(matchee)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
  mismatch = matcher.match(matchee)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 426, in match
  reraise(*matchee)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
  result = matchee()
File 
"/opt/stack/keystone/.tox/py

[Yahoo-eng-team] [Bug 1519210] Re: opt-out of certain notifications

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/253780
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=255685877ec54d1b9689b88cc5af8a5490d30c91
Submitter: Jenkins
Branch: master

commit 255685877ec54d1b9689b88cc5af8a5490d30c91
Author: Fernando Diaz 
Date:   Fri Dec 4 22:23:15 2015 -0600

Opt-out certain Keystone Notifications

This patch allows certain notifications for events in
Keystone to be opted out. Opting out may be the desired approach,
since most keystone deployers will likely want all audit traces
by default.

Change-Id: I86caf6e5f25cdd76121881813167c2144bf1d051
Closes-Bug: 1519210


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1519210

Title:
  opt-out of certain notifications

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Keystone currently supports a lot of event notifications; see
  http://docs.openstack.org/developer/keystone/event_notifications.html

  It would be nice if there was a configuration option to allow users to
  opt-out of notifications they didn't care about.

  This could be as simple as:

  [notifications]
  listen_group_create = True
  listen_group_delete = True
  listen_group_update = True
  ...
  listen_authenticate_success = True

  Or something more advanced.

  Either way, each would have to be set to True by default.
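
  An illustrative sketch only (not keystone's actual implementation, and
  the event-type strings are assumptions): filter events against a
  configured opt-out set before emitting a notification.

```python
# Operator-configured opt-out set; names are hypothetical examples.
OPT_OUT = {'identity.authenticate.success'}


def notify(event_type, payload, emitted):
    """Emit a notification unless the event type is opted out."""
    if event_type in OPT_OUT:
        return  # audit trace suppressed by configuration
    emitted.append((event_type, payload))


sent = []
notify('identity.user.created', {'id': 'u1'}, sent)
notify('identity.authenticate.success', {'id': 'u1'}, sent)  # dropped
```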

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1519210/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538518] Re: Avoid using `len(x)` to check if x is empty

2016-02-05 Thread Ren Qiaowei
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Ren Qiaowei (qiaowei-ren)

** Changed in: nova
   Status: New => In Progress

** Also affects: swift
   Importance: Undecided
   Status: New

** Changed in: swift
 Assignee: (unassigned) => Ren Qiaowei (qiaowei-ren)

** Changed in: swift
   Status: New => In Progress

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Ren Qiaowei (qiaowei-ren)

** Changed in: glance
   Status: New => In Progress

** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Ren Qiaowei (qiaowei-ren)

** Changed in: cinder
   Status: New => In Progress

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
 Assignee: (unassigned) => Ren Qiaowei (qiaowei-ren)

** Changed in: ceilometer
   Status: New => In Progress

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Ren Qiaowei (qiaowei-ren)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538518

Title:
  Avoid using `len(x)` to check if x is empty

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in Glance:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in Rally:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  `len()` is used to check whether a collection (e.g., a dict, list, or
  set) has items. Since collections also have a boolean representation,
  it is better to check truthiness directly.

  rally/common/utils.py
  rally/task/utils.py
  rally/task/validation.py
  tests/unit/doc/test_specs.py
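
  The guideline in one example: rely on a collection's truthiness instead
  of comparing len() to zero.

```python
items = []

# discouraged: explicit length comparison
if len(items) == 0:
    status_len = 'empty'

# preferred: empty collections are falsy
if not items:
    status_bool = 'empty'
```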

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1538518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449263] Re: The native OVSDB Connection class should allow users to pass in their own Idl instance

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449263

Title:
  The native OVSDB Connection class should allow users to pass in their
  own Idl instance

Status in neutron:
  Expired

Bug description:
  The OVS library now allows registering a notification hook by
  subclassing the Idl class and defining a notify() function. To be able
  to use this in Neutron (and networking-ovn), it must be possible for
  the Connection object to use a subclassed Idl. It currently is
  hardcoded to instantiate its own idl from ovs.db.idl.Idl.

  networking-ovn needs to pass in a subclassed Idl to be able to notify
  neutron when a port is successfully wired. Neutron could use this to
  avoid having to spawn ovsdb-client monitor when using the native OVSDB
  driver.
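
  A hypothetical sketch of the requested change (class and method names
  are illustrative stand-ins, not the real ovs or neutron API): Connection
  accepts an externally built Idl instead of hardcoding its construction,
  so callers can subclass and override notify() to receive wiring events.

```python
class Idl:
    """Stand-in for ovs.db.idl.Idl."""

    def notify(self, event, row):
        pass  # default: ignore table updates


class PortWiringIdl(Idl):
    """Subclass that records notifications, as networking-ovn would."""

    def __init__(self):
        self.events = []

    def notify(self, event, row):
        self.events.append((event, row))


class Connection:
    def __init__(self, idl=None):
        # previously the equivalent of: self.idl = Idl(...)  (hardcoded)
        self.idl = idl if idl is not None else Idl()


conn = Connection(idl=PortWiringIdl())
conn.idl.notify('create', 'port-1')  # subclass sees the event
```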

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468723] Re: Resource uuid is not logged when a resource is created

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468723

Title:
  Resource uuid is not logged when a resource is created

Status in neutron:
  Expired

Bug description:
  In the case of Nova, the instance uuid is output to the log when a new
  instance is created.
  The following logs are part of a "nova boot" result.
  2015-06-25 11:15:16.637 DEBUG nova.compute.api 
[req-5f2782c8-1bb7-4735-a714-e720833f4a6d admin admin] Going to run 1 
instances... from (pid=3336) _provision_instances 
/opt/stack/nova/nova/compute/api.py:929
  2015-06-25 11:15:16.756 DEBUG nova.compute.api 
[req-5f2782c8-1bb7-4735-a714-e720833f4a6d admin admin] [instance: 
e7dd5627-0b72-48e9-bcc1-0e49f267dcec] block_device_mapping 
BlockDeviceMappingList(objects=[BlockDeviceMapping(UNKNOWN)]) from (pid=3336) 
_create_block_device_mapping /opt/stack/nova/nova/compute/api.py:1200

  But in the case of Neutron and other components, the uuid is not output
  to the log when a new resource is created.
  If an error occurs for that resource, the user cannot determine the
  following:
   * When was the resource created?
   * Who created the resource?
  This is unhelpful, and it is inconsistent between projects.

  By adding a "resource" argument to the logging call, the new resource
  uuid is output to the log.

  Because the log output mechanism has already been implemented in
  oslo.log [1], this patch just adds the "resource" information to the
  log. In addition, existing processing is not affected unless "resource"
  is specified in logging_context_format_string.

  Following is a sample from "neutron net-create".

  * configuration file
    logging_context_format_string = %(asctime)s.%(msecs)03d %(color)s%(levelname)s %(name)s [%(request_id)s %(user_name)s %(project_id)s%(color)s] %(resource)s%(color)s%(message)s

  * source file
    rscdict = {'type': 'network', 'id': '8e6a748c-aa8a-440b-904a-a6e2f6dc00eb'}
    LOG.info("network create success", resource=rscdict)

  * log file
    INFO neutron.db.db_base_plugin_v2 [req-1667753a-e5f5-4417-bd8d-16f12bcc2e34 admin 0862ba8c3497455a8fdf40c49f0f2644] [network-8e6a748c-aa8a-440b-904a-a6e2f6dc00eb] network create success

  [1]
  https://review.openstack.org/#/c/144813/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468723/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426427] Re: Improve tunnel_sync server side rpc to handle race conditions

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426427

Title:
  Improve tunnel_sync server side rpc to handle race conditions

Status in neutron:
  Expired

Bug description:
  We  have a concern that we may have race conditions with the following
  code snippet:

  if host:
      host_endpoint = driver.obj.get_endpoint_by_host(host)
      ip_endpoint = driver.obj.get_endpoint_by_ip(tunnel_ip)

      if (ip_endpoint and ip_endpoint.host is None
              and host_endpoint is None):
          driver.obj.delete_endpoint(ip_endpoint.ip_address)
      elif (ip_endpoint and ip_endpoint.host != host):
          msg = (_("Tunnel IP %(ip)s in use with host %(host)s"),
                 {'ip': ip_endpoint.ip_address,
                  'host': ip_endpoint.host})
          raise exc.InvalidInput(error_message=msg)
      elif (host_endpoint and host_endpoint.ip_address != tunnel_ip):
          # Notify all other listening agents to delete stale tunnels
          self._notifier.tunnel_delete(rpc_context,
                                       host_endpoint.ip_address,
                                       tunnel_type)
          driver.obj.delete_endpoint(host_endpoint.ip_address)

  Consider two threads (A and B).

  Thread A covers the following use case:
  a host is passed from an agent and is not found in the DB, but the
  passed tunnel_ip is found; delete the endpoint from the DB and add the
  endpoint with (tunnel_ip, host). It is an upgrade case.

  Thread B covers the following use case:
  the passed host and tunnel_ip are not found in the DB; it is a new
  endpoint.

  Both threads will do the following in the end:

  tunnel = driver.obj.add_endpoint(tunnel_ip, host)
  tunnels = driver.obj.get_endpoints()
  entry = {'tunnels': tunnels}
  # Notify all other listening agents
  self._notifier.tunnel_update(rpc_context, tunnel.ip_address,
                               tunnel_type)
  # Return the list of tunnels IP's to the agent
  return entry

  
  Since Thread A first deletes the endpoint and then adds it, there is a
  chance during the race that Thread B does not see that endpoint in its
  get_endpoints call.

  One way to overcome this problem would be to introduce an
  update_endpoint method in the type drivers instead of doing
  delete_endpoint.
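
  An illustrative model of the suggested fix (a hypothetical helper, not
  the actual type-driver API): an idempotent update_endpoint upsert never
  leaves a window where the endpoint is absent, unlike delete-then-add.

```python
def update_endpoint(endpoints, tunnel_ip, host):
    """Upsert the (tunnel_ip -> host) mapping in a single step."""
    endpoints[tunnel_ip] = host


endpoints = {'10.0.0.5': None}  # stale pre-upgrade row with no host
update_endpoint(endpoints, '10.0.0.5', 'compute-1')
# At no point was '10.0.0.5' missing from the mapping, so a concurrent
# get_endpoints() always sees the tunnel.
```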

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426427/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477190] Re: dhcp._release_lease shouldn't stacktrace when a device isn't found since it could be legitimately gone

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477190

Title:
  dhcp._release_lease shouldn't stacktrace when a device isn't found
  since it could be legitimately gone

Status in neutron:
  Expired
Status in tempest:
  Invalid

Bug description:
  http://logs.openstack.org/80/200380/3/gate/gate-tempest-dsvm-neutron-
  full/2b49b0d/logs/screen-q-dhcp.txt.gz?level=TRACE#_2015-07-22_09_30_43_303

  This kind of stuff is really annoying when debugging failures in the
  gate. I think it's a known failure during cleanup of ports when deleting
  an instance, so we shouldn't stacktrace and log it as an error; just
  handle it in the code and move on.

  2015-07-22 09:30:43.303 ERROR neutron.agent.dhcp.agent 
[req-c7faddea-956c-4544-8ec0-1f6e5029cae1 tempest-NetworksTestDHCPv6-1951433064 
tempest-NetworksTestDHCPv6-996832218] Unable to reload_allocations dhcp for 
301ef35d-2482-4a13-a088-f456958e878a.
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/dhcp.py", line 432, in 
reload_allocations
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
self._release_unused_leases()
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/dhcp.py", line 671, in 
_release_unused_leases
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
self._release_lease(mac, ip, client_id)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/dhcp.py", line 415, in 
_release_lease
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
ip_wrapper.netns.execute(cmd, run_as_root=True)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 701, in execute
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
extra_ok_codes=extra_ok_codes, **kwargs)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent raise 
RuntimeError(m)
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent RuntimeError: 
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Command: ['ip', 
'netns', 'exec', u'qdhcp-301ef35d-2482-4a13-a088-f456958e878a', 'dhcp_release', 
'tapcb2fa9dc-f6', '2003::e6ef:eb57:47b4:91cf', 'fa:16:3e:18:e6:78']
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Exit code: 1
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Stdin: 
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Stdout: 
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent Stderr: cannot 
setup interface: No such device
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 
  2015-07-22 09:30:43.303 8230 ERROR neutron.agent.dhcp.agent 

  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RkZXJyOiBjYW5ub3Qgc2V0dXAgaW50ZXJmYWNlOiBObyBzdWNoIGRldmljZVwiIEFORCB0YWdzOlwic2NyZWVuLXEtZGhjcC50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQzNzU3NjE3ODQzNn0=
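
  A hedged sketch of the handling suggested above (not neutron's actual
  code; function names and the injected execute callable are
  illustrative): treat "No such device" from dhcp_release as a benign
  race, since the port may legitimately be gone by the time the lease is
  released.

```python
def release_lease(execute, device, ip, mac):
    """Release a DHCP lease, tolerating an already-removed device."""
    try:
        execute(['dhcp_release', device, ip, mac])
    except RuntimeError as exc:
        if 'No such device' in str(exc):
            return  # device already cleaned up; nothing to release
        raise  # anything else is still a real error


def fake_execute(cmd):
    # simulate the gate failure seen in the log above
    raise RuntimeError('Stderr: cannot setup interface: No such device')


# No traceback: the expected race is swallowed instead of logged as ERROR.
release_lease(fake_execute, 'tapcb2fa9dc-f6',
              '2003::e6ef:eb57:47b4:91cf', 'fa:16:3e:18:e6:78')
```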

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477190/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488747] Re: algorithm error in function _modify_rules of iptables_manager.py

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488747

Title:
  algorithm error in function _modify_rules of iptables_manager.py

Status in neutron:
  Expired

Bug description:
  Reproduce:
  1. Delete all rules on the compute node:
   iptables --flush
  2. Restart neutron-openvswitch-agent; it will try to rebuild all chains
  and rules. Then an error message can be found in the
  neutron-openvswitch-agent log file, like below:

  Stdout: 
  Stderr: iptables-restore: line 57 failed

  The attachment is the input file for iptables-restore. I found a
  'COMMIT' line in the middle of the 'filter' table, which is an illegal
  format.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489690] Re: neutron-openvswitch-agent leak sg iptables rules

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489690

Title:
  neutron-openvswitch-agent leak sg iptables rules

Status in neutron:
  Expired

Bug description:
  In the function 'treat_devices_added_or_updated', a port that does not
  exist on 'br-int' is added to 'skipped_devices' and returned to the
  parent function 'process_network_ports', which removes these ports from
  port_info['current'].

  If a port is updated due to a security group change, and the port is
  deleted just while inside 'treat_devices_added_or_updated', the port is
  added to 'skipped_devices' and then removed from port_info['current'].
  On the next 'scan_ports', the port is not in 'registered_ports', so it
  is not added to port_info['removed'], and its chains and rules will
  never be removed. These stale chains and rules stay in iptables until
  the ovs-agent or the compute node restarts.
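
  A minimal model of the bookkeeping described above (illustrative sets,
  not agent code): dropping a skipped port from 'current' means the next
  scan never classifies it as 'removed', so its iptables chains are never
  cleaned up.

```python
registered_ports = {'port-a'}
cur_ports = {'port-a'}

skipped_devices = {'port-a'}   # port vanished mid-update
cur_ports -= skipped_devices   # process_network_ports drops it

registered_ports = cur_ports   # baseline for the next scan_ports
next_cur_ports = set()         # the port is really gone now
removed = registered_ports - next_cur_ports
# removed is empty, so the port's security-group chains are never deleted
```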

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490917] Re: create_router regression for some of plugins

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490917

Title:
  create_router regression for some of plugins

Status in neutron:
  Expired

Bug description:
  Change I5a78d7f32e8ca912016978105221d5f34618af19 broke plugins that
  call create_router with a surrounding transaction,
  e.g. networking-midonet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492547] Re: ICMP codes can't be set in range [0, 255] when set firewall rules

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492547

Title:
  ICMP codes can't be set  in range [0,255] when set firewall rules

Status in neutron:
  Expired

Bug description:
  When setting firewall rules, I select the ICMP protocol and set a port/port
  range, but get the error "Source, destination ports are not allowed when
  protocol is set to ICMP."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492773] Re: there are a lot of warn log "No DHCP agents available, skipping rescheduling""

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492773

Title:
  there are a lot of warn  log "No DHCP agents available, skipping
  rescheduling""

Status in neutron:
  Expired

Bug description:
  When no agent is active, there are so many "No DHCP agents available,
  skipping rescheduling" warning logs that the log becomes difficult to read.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493424] Re: Linuxbridge agent pass the config as parameter

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493424

Title:
  Linuxbridge agent pass the config as parameter

Status in neutron:
  Expired

Bug description:
  Instead of using the global cfg.CONF, we should enable the linuxbridge agent
  to accept the config as a parameter, like the openvswitch agent [1]. This is
  very useful for testing the agent without having to override the global
  config.

  [1]: https://review.openstack.org/#/c/190638/
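
  The pattern being requested can be sketched as plain dependency injection
  (the class and attribute names below are illustrative, not the actual
  agent API; AgentConfig stands in for an oslo.config ConfigOpts object):

```python
class AgentConfig:
    """Stand-in for an oslo.config ConfigOpts object (illustrative only)."""
    def __init__(self, polling_interval=2):
        self.polling_interval = polling_interval

# Module-level global, analogous to cfg.CONF.
CONF = AgentConfig()

class LinuxBridgeAgent:
    def __init__(self, conf=None):
        # Accept an injected config; fall back to the global otherwise.
        self.conf = conf if conf is not None else CONF

# Production code keeps working unchanged...
agent = LinuxBridgeAgent()
assert agent.conf.polling_interval == 2

# ...while a test can inject its own config without touching global state.
test_agent = LinuxBridgeAgent(AgentConfig(polling_interval=0))
assert test_agent.conf.polling_interval == 0
```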

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500990] Re: dnsmasq responds with NACKs to requests from unknown hosts

2016-02-05 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500990

Title:
  dnsmasq responds with NACKs to requests from unknown hosts

Status in neutron:
  Expired

Bug description:
  When a request comes in from a host not managed by neutron, dnsmasq
  responds with a NACK. This causes a race condition: if the wrong DHCP
  server responds to the request, the request will not be honored. This can
  be inconvenient if you are sharing a subnet with other DHCP servers.

  Our team recently ran into this in our Ironic development environment,
  where we were stepping on each other's DHCP requests. A solution is to
  provide an option that ignores unknown hosts rather than NACKing them.

  The symptom of this was the repeated DISCOVER, OFFER, REQUEST, ACK cycle
  with no acceptance from the host. (Sorry for all the omissions; this may
  be overly cautious.)

  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205  
  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205  
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205  
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205  
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205 
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205   

  ...And so on

  I did a dhcpdump and saw NACKs coming from my two teammates' machines.

  Of course, running multiple DHCP servers on a subnet is not a standard or
  common case, but we needed it in our Ironic development environment and
  have found the fix to be useful.
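
  For reference, dnsmasq already supports this behaviour via its tagging
  mechanism; a config fragment along these lines (standard dnsmasq syntax,
  not neutron-generated configuration) makes it stay silent for hosts
  without a dhcp-host entry instead of NAKing them:

```
# Ignore DHCP requests from hosts that dnsmasq does not know about
# (i.e. hosts without a matching dhcp-host entry).
dhcp-ignore=tag:!known
```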

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496725] Re: Trace when display event tabs in heat section

2016-02-05 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1496725

Title:
  Trace when display event tabs in heat section

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  python-heatclient-0.6.0-1.el7.noarch
  python-django-appconf-0.6-1.el7.noarch
  python-django-1.8.3-1.el7.noarch
  python-django-compressor-1.4-3.el7.noarch
  python-django-horizon-2015.1.0-7.el7.noarch
  python-django-bash-completion-1.8.3-1.el7.noarch
  python-django-pyscss-1.0.5-2.el7.noarch
  python-django-openstack-auth-1.2.0-4.el7.noarch

  
  2015-09-17 07:31:13,120 10568 ERROR django.request Internal Server Error: 
/dashboard/project/stacks/stack/ff039c79-26cf-4a74-b34a-b059c678e795/
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
71, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
89, in dispatch
  return handler(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/tabs/views.py", line 72, in 
get
  return self.handle_tabbed_response(context["tab_group"], context)
File "/usr/lib/python2.7/site-packages/horizon/tabs/views.py", line 65, in 
handle_tabbed_response
  return http.HttpResponse(tab_group.selected.render())
File "/usr/lib/python2.7/site-packages/horizon/tabs/base.py", line 323, in 
render
  return render_to_string(self.get_template_name(self.request), context)
File "/usr/lib/python2.7/site-packages/django/template/loader.py", line 99, 
in render_to_string
  return template.render(context, request)
File "/usr/lib/python2.7/site-packages/django/template/backends/django.py", 
line 74, in render
  return self.template.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 209, 
in render
  return self._render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 201, 
in _render
  return self.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, 
in render_node
  return node.render(context)
File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 89, 
in render
  output = self.filter_expression.resolve(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 647, 
in resolve
  obj = self.var.resolve(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 787, 
in resolve
  value = self._resolve_lookup(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 847, 
in _resolve_lookup
  current = current()
File "/usr/lib/python2.7/site-packages/horizon/tables/base.py", line 1276, 
in render
  return table_template.render(context)
File "/usr/lib/python2.7/site-packages/django/template/backends/django.py", 
line 74, in render
  return self.template.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 209, 
in render
  return self._render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 201, 
in _render
  return self.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, 
in render_node
  return node.render(context)
File "/usr/lib/python2.7/site-packages/django/template/defaulttags.py", 
line 576, in render
  return self.nodelist.render(context)
File "/usr/lib/python2.7/site-packages/django/template/base.py", line 903, 
in render
  bit = self.render_node(node, context)
File "/usr/lib/python2.7/site-packages/django/template/debug.py", line 79, 
in render_node

[Yahoo-eng-team] [Bug 1539972] Re: Style: Default Theme: Responsive Menu shouldn't have Arrow

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274377
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a84e90422595324d6e5b250f5556b04bddfad87f
Submitter: Jenkins
Branch:master

commit a84e90422595324d6e5b250f5556b04bddfad87f
Author: Diana Whitten 
Date:   Sat Jan 30 12:19:37 2016 -0700

Default Theme:Responsive Menu shouldn't have Arrow

The responsive menu on the 'default' theme shouldn't have the little
'arrow' associated with it.

Closes-bug: #1539972

Change-Id: Icb4a500e9bd3bb1e11853391a5357876ffa61348


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1539972

Title:
  Style: Default Theme: Responsive Menu shouldn't have Arrow

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The responsive menu on the 'default' theme shouldn't have the little
  'arrow' associated with it.

  This looks odd: https://i.imgur.com/biCa7M9.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1539972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513782] Re: API response time degradation

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/269892
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=4398f14a9ab177c162d3267b1d2b0c7c50bb82a5
Submitter: Jenkins
Branch:master

commit 4398f14a9ab177c162d3267b1d2b0c7c50bb82a5
Author: Ihar Hrachyshka 
Date:   Tue Jan 19 23:10:25 2016 +0100

Postpone heavy policy check for ports to later

When a port is validated, we check for the user to be the owner of
corresponding network, among other things. Sadly, this check requires a
plugin call to fetch the network, which goes straight into the database.
Now, if there are multiple ports to validate with current policy, and
the user is not admin, we fetch the network for each port, f.e. making
list operation on ports to scale badly.

To avoid that, we should postpone OwnerCheck (tenant_id) based
validations that rely on foreign keys, tenant_id:%(network:...)s, to as
late as possible. It will make policy checks avoid hitting database in
some cases, like when a port is owned by current user.

Also, added some unit tests to avoid later regressions:

DbOperationBoundMixin now passes user context into API calls. It allows
us to trigger policy engine checks when executing listing operations.

Change-Id: I99e0c4280b06d8ebab0aa8adc497662c995133ad
Closes-Bug: #1513782
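
The idea in the fix can be illustrated with a toy short-circuit check
(hypothetical names; the real policy engine is far more involved): the
expensive network lookup only happens when the cheap port-ownership check
is inconclusive.

```python
DB_CALLS = 0

def fetch_network(network_id):
    # Stands in for the plugin call that goes straight to the database.
    global DB_CALLS
    DB_CALLS += 1
    return {"tenant_id": "net-owner"}

def port_owner_check(port, context_tenant_id):
    # Cheap check first: the requester owns the port itself.
    if port["tenant_id"] == context_tenant_id:
        return True
    # Heavy, DB-backed check only as a last resort.
    return fetch_network(port["network_id"])["tenant_id"] == context_tenant_id

# Listing 100 ports owned by the caller never touches the database.
ports = [{"tenant_id": "alice", "network_id": "net-%d" % i} for i in range(100)]
assert all(port_owner_check(p, "alice") for p in ports)
assert DB_CALLS == 0

# Only a foreign port triggers the expensive lookup.
assert not port_owner_check({"tenant_id": "bob", "network_id": "net-x"}, "alice")
assert DB_CALLS == 1
```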


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513782

Title:
  API response time degradation

Status in neutron:
  Fix Released

Bug description:
  We found response time degradation in list operations for network
  objects.

  Such degradation we found in our rally jobs.
  This job works in devstack with 'fake_virt' libvirt driver.
  One part of this job creates 200 servers with floating ip and lists them and 
their objects.

  I saw that on 18.08.2015 the response times were good [1], but by
  21.08.2015 they had become bad [2]. Right now the response times are still
  bad [3]...

  I suspect that this is a neutron problem because we have several tests
  that measure different aspects.
  For example, listing of regions and images takes the same time as in the past,
  but the degradation in listing addresses is roughly tenfold:
  was (for 200 servers): 0.719 seconds
  now (for 100 servers): 5.039 seconds
  subnets: 1.358 vs 5.606
  network_interfaces: 1.292 vs 10.298

  I also asked on the mailing list [4], but there was no conclusive
  answer...

  [1] 
http://logs.openstack.org/13/211613/6/experimental/ec2-api-rally-dsvm-fakevirt/fac263e/
  [2] 
http://logs.openstack.org/74/213074/7/experimental/ec2-api-rally-dsvm-fakevirt/91d0675/
  [3] 
http://logs.openstack.org/74/213074/7/experimental/ec2-api-rally-dsvm-fakevirt/91d0675/rally-plot/detailed.txt.gz
  [4] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/073577.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1528250] Re: Wrong string format in exception FlavorExtraSpecUpdateCreateFailed

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/244029
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c85cd57c5c156b479db75c9f14fd916f7b64298e
Submitter: Jenkins
Branch:master

commit c85cd57c5c156b479db75c9f14fd916f7b64298e
Author: Pavel Kholkin 
Date:   Wed Nov 11 12:56:44 2015 +0300

enginefacade: 'flavor'

Use enginefacade in 'flavor' section.

Implements: blueprint new-oslodb-enginefacade

Co-Authored-By: Sergey Nikitin 

Closes-Bug: #1528250

Change-Id: I2f57bb46d087948792e749905a0c060d009fbcf8


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1528250

Title:
  Wrong string format in exception FlavorExtraSpecUpdateCreateFailed

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  class FlavorExtraSpecUpdateCreateFailed(NovaException):
  msg_fmt = _("Flavor %(id)d extra spec cannot be updated or created "
  "after %(retries)d retries.")

  'id' here means 'flavorid' which is String (not Integer).
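
  A minimal reproduction of the formatting problem (the message text is
  copied from above; the id value is illustrative):

```python
msg_fmt = ("Flavor %(id)d extra spec cannot be updated or created "
           "after %(retries)d retries.")

failed = False
try:
    # flavorid is a string such as "m1.small", so %(id)d raises TypeError
    # when the message is built, masking the real error.
    msg_fmt % {"id": "m1.small", "retries": 10}
except TypeError:
    failed = True
assert failed

# Using %(id)s instead formats the string id correctly.
fixed_fmt = ("Flavor %(id)s extra spec cannot be updated or created "
             "after %(retries)d retries.")
fixed = fixed_fmt % {"id": "m1.small", "retries": 10}
assert "m1.small" in fixed
```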

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1528250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514360] Re: Deleting a rebuilding instance tries to set it to ERROR

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/243005
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9ae822d412885c33a6d2a801be40c5787e64306d
Submitter: Jenkins
Branch:master

commit 9ae822d412885c33a6d2a801be40c5787e64306d
Author: Hans Lindgren 
Date:   Mon Nov 9 09:53:19 2015 +0100

Gracefully handle a deleting instance during rebuild

When rebuilding an instance, if the instance is deleted while in the
middle of such operation, do not log the exception or try to set to
ERROR since this is an expected event that should be handled gracefully.

Change-Id: I9d78455dfd9537998833a5ff0ba5f39f241c3740
Closes-Bug: #1514360


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1514360

Title:
  Deleting a rebuilding instance tries to set it to ERROR

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  As can be seen in the logs [1], this happens quite frequently in tempest
  runs. Although this does not cause any errors, it fills the logs with
  stack traces and unnecessarily saves an instance fault in the db before
  the instance itself is deleted.

  [1]
  
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22Setting%20instance%20vm_state%20to%20ERROR%5C%22%20AND%20message:%5C%22Expected:%20%7B'task_state':%20%5Bu'rebuilding'%5D%7D.%20Actual:%20%7B'task_state':%20u'deleting'%7D%5C%22&from=86400s

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1514360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536998] Re: DHCP Port update notification not sent when an IPv6 auto_address subnet is created on an existing IPv4 network.

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/271223
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=488ef69cfd30af73c340ddd9b32ad554579e8a6d
Submitter: Jenkins
Branch:master

commit 488ef69cfd30af73c340ddd9b32ad554579e8a6d
Author: sridhargaddam 
Date:   Fri Jan 22 10:18:30 2016 +

Trigger dhcp port_update for new auto_address subnets

When an ipv6 auto_address subnet is added to an existing network with IPv4
subnet, the dhcp port in the network is simply updated with an IPv6 address.
However, neutron does not send any port_update notification for the dhcp
port. This causes inconsistencies (while configuring anti-spoofing rules)
for external controllers (like OpenDaylight) as the dhcp port info
in controller does not match with the dhcp port info in Neutron db.
This patch addressess this issue by having the plugin send a port_update
call when the dhcp port is updated with IPv6 auto address.

Closes-Bug: #1536998
Change-Id: I08d257d6d66f95c609a4429e45fde026b42f3fa1


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536998

Title:
  DHCP Port update notification not sent when an IPv6 auto_address
  subnet is created on an existing IPv4 network.

Status in neutron:
  Fix Released

Bug description:
  When an ipv6 auto_address subnet is added to an existing network with IPv4
  subnet, the dhcp port in the network is simply updated with an IPv6 address.
  However, neutron does not send any port_update notification for the dhcp
  port. This causes inconsistencies (while configuring anti-spoofing rules)
  for external controllers (like OpenDaylight) as the dhcp port info
  in controller does not match with the dhcp port info in Neutron db.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1536998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542507] [NEW] Pecan: quota list does not include tenant_id

2016-02-05 Thread Salvatore Orlando
Public bug reported:

The operation for listing quotas for all tenants omits a small but
important detail: the tenant_id field in the response items.

This is happening because the quota resource is now treated like any other
resource, and is therefore subject to policy checks, response formatting, etc.
Since the RESOURCE_ATTRIBUTE_MAP for this resource has no tenant_id attribute,
the attribute is removed as alien (unexpected data returned by the plugin).

We should really add that attribute to the RESOURCE_ATTRIBUTE_MAP.

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress


** Tags: pecan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542507

Title:
  Pecan: quota list does not include tenant_id

Status in neutron:
  In Progress

Bug description:
  The operation for listing quotas for all tenants omits a small but
  important detail: the tenant_id field in the response items.

  This is happening because the quota resource is now treated like any other
  resource, and is therefore subject to policy checks, response formatting,
  etc. Since the RESOURCE_ATTRIBUTE_MAP for this resource has no tenant_id
  attribute, the attribute is removed as alien (unexpected data returned by
  the plugin).

  We should really add that attribute to the RESOURCE_ATTRIBUTE_MAP.
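
  A toy sketch (illustrative names only, not the actual pecan code) of why
  the field disappears: attributes absent from the resource's attribute map
  are stripped from the response.

```python
RESOURCE_ATTRIBUTE_MAP = {
    # Note: no 'tenant_id' entry declared for the quota resource.
    "quota": {"network": {}, "port": {}},
}

def format_response(resource, data):
    allowed = RESOURCE_ATTRIBUTE_MAP[resource]
    # Anything the plugin returned that is not declared gets stripped.
    return {key: value for key, value in data.items() if key in allowed}

row = {"tenant_id": "t1", "network": 10, "port": 50}
stripped = format_response("quota", row)
assert "tenant_id" not in stripped          # the reported symptom

# Declaring the attribute in the map keeps it in the response.
RESOURCE_ATTRIBUTE_MAP["quota"]["tenant_id"] = {}
assert format_response("quota", row)["tenant_id"] == "t1"
```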

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527514] Re: cells:resize/migrate instance failed in cells

2016-02-05 Thread Andrew Laski
https://review.openstack.org/#/c/183199/ addressed this.

** Changed in: nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1527514

Title:
  cells:resize/migrate instance  failed in cells

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  1. version
  kilo 2015.1.0

  2. Relevant log files:

  2015-12-17 16:53:14.218 2695 ERROR nova.api.openstack 
[req-3cdc0eaf-673c-4d31-bf45-467d8d6ed77f f04e318acd7e4e5093c91e6dc74a28c3 
53adc6d6825b43378d6ab89fc38051da - - -] Caught error: Object action create 
failed because: cannot create a Migration object without a migration_type set
  ...
  ...
  2015-12-17 16:53:14.218 2695 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 2725, in 
_resize_cells_support
  2015-12-17 16:53:14.218 2695 TRACE nova.api.openstack mig.create()
  2015-12-17 16:53:14.218 2695 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/objects/base.py", line 208, in wrapper
  2015-12-17 16:53:14.218 2695 TRACE nova.api.openstack return fn(self, 
*args, **kwargs)
  2015-12-17 16:53:14.218 2695 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/objects/migration.py", line 105, in 
create
  2015-12-17 16:53:14.218 2695 TRACE nova.api.openstack reason="cannot 
create a Migration object without a "
  2015-12-17 16:53:14.218 2695 TRACE nova.api.openstack ObjectActionError: 
Object action create failed because: cannot create a Migration object without a 
migration_type set

  More detailed log:
  http://paste.openstack.org/show/482270/

  3. Reproduce steps:
  create a new instance from image in cells, then resize or migrate the 
instance.

  Expected result:
  resize success

  Actual result:
  resize failed (see the log in section 2)

  4. In the process of resize/migrate, _resize_cells_support is called in
  nova/compute/api.py. In this function, a migration object is created, but
  the object creation fails because of the missing migration_type, which
  causes the resize/migrate to fail.

  So this is a serious bug when building and using a cell environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1527514/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542491] [NEW] Scheduler update_aggregates race causes incorrect aggregate information

2016-02-05 Thread James Dennis
Public bug reported:

It appears that if nova-api receives simultaneous requests to add hosts to
a host aggregate, a race occurs that can leave nova-scheduler with
incorrect aggregate information in memory.

One observed effect is that nova-scheduler sometimes believes fewer hosts
are members of the aggregate than the nova database records, and will
filter out a host that should not be filtered.

Restarting nova-scheduler fixes the issue, as it reloads the aggregate
information on startup.

Nova package versions: 1:2015.1.2-0ubuntu2~cloud0

Reproduce steps:

Create a new os-aggregate and then populate an os-aggregate with
simultaneous API POSTs, note timestamps:

2016-02-04 20:17:08.538 13648 INFO nova.osapi_compute.wsgi.server 
[req-d07a006e-134a-46d8-9815-6becec5b185c 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.3 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates HTTP/1.1" status: 200 len: 
439 time: 0.1865470
2016-02-04 20:17:09.204 13648 INFO nova.osapi_compute.wsgi.server 
[req-a0402297-9337-46d6-96d2-066e230e45e1 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.2 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates/1/action HTTP/1.1" status: 
200 len: 506 time: 0.2995598
2016-02-04 20:17:09.243 13648 INFO nova.osapi_compute.wsgi.server 
[req-0f543525-c34e-418a-91a9-894d714ee95b 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.2 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates/1/action HTTP/1.1" status: 
200 len: 519 time: 0.3140590
2016-02-04 20:17:09.273 13649 INFO nova.osapi_compute.wsgi.server 
[req-2f8d80b0-726f-4126-a8ab-a2eae3f1a385 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.2 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates/1/action HTTP/1.1" status: 
200 len: 506 time: 0.3759601
2016-02-04 20:17:09.275 13649 INFO nova.osapi_compute.wsgi.server 
[req-80ab6c86-e521-4bf0-ab67-4de9d0eccdd3 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.1 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates/1/action HTTP/1.1" status: 
200 len: 506 time: 0.3433032

Schedule a VM

Expected Result:
nova-scheduler Availability Zone filter returns all members of the aggregate

Actual Result:
nova-scheduler believes there is only one hypervisor in the aggregate. The 
number will vary as it is a race:

2016-02-05 07:48:04.411 13600 DEBUG nova.filters 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Starting with 4 host(s) 
get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:70
2016-02-05 07:48:04.411 13600 DEBUG nova.filters 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Filter RetryFilter returned 4 host(s) 
get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:84
2016-02-05 07:48:04.412 13600 DEBUG 
nova.scheduler.filters.availability_zone_filter 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Availability Zone 'temp' requested. 
(oshv0, oshv0) ram:122691 disk:13404160 io_ops:0 instances:0 has AZs: nova 
host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/availability_zone_filter.py:62
2016-02-05 07:48:04.412 13600 DEBUG 
nova.scheduler.filters.availability_zone_filter 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Availability Zone 'temp' requested. 
(oshv2, oshv2) ram:122691 disk:13403136 io_ops:0 instances:0 has AZs: nova 
host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/availability_zone_filter.py:62
2016-02-05 07:48:04.413 13600 DEBUG 
nova.scheduler.filters.availability_zone_filter 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Availability Zone 'temp' requested. 
(oshv1, oshv1) ram:122691 disk:13404160 io_ops:0 instances:0 has AZs: nova 
host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/availability_zone_filter.py:62
2016-02-05 07:48:04.413 13600 DEBUG nova.filters 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Filter AvailabilityZoneFilter returned 
1 host(s) get_filtered_objects 
/usr/lib/python2.7/dist-packages/nova/filters.py:84

Nova API calls show the correct number of members.


I suspect that it is caused by the simultaneous processing or out-of-order 
receipt of update_aggregates RPC calls.
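
A toy illustration of the suspected lost-update behaviour (simplified;
real updates arrive as RPC messages, not function calls): if a stale
aggregate snapshot is applied after a newer one, the scheduler's in-memory
view shrinks.

```python
class SchedulerAggregateState:
    """Simplified in-memory view of one aggregate's hosts."""
    def __init__(self):
        self.hosts = set()

    def update_aggregate(self, payload_hosts):
        # Blind replace: whichever update is applied last wins, even if
        # its payload was generated earlier than another update's.
        self.hosts = set(payload_hosts)

state = SchedulerAggregateState()

# Two add-host notifications race; the stale snapshot is applied last.
newer_payload = {"oshv0", "oshv1", "oshv2"}   # reflects all additions
stale_payload = {"oshv0"}                     # taken before the later adds

state.update_aggregate(newer_payload)
state.update_aggregate(stale_payload)         # out-of-order application

# The scheduler now believes the aggregate has a single host, matching the
# "Filter AvailabilityZoneFilter returned 1 host(s)" symptom above.
assert state.hosts == {"oshv0"}
```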

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: ubuntu
 Importance: Undecided
 Status: New


** Tags: race-condition scheduler

** Tags added: race-condition scheduler

** Also affects: ubuntu
   Importance: Undeci

[Yahoo-eng-team] [Bug 1527553] Re: cells:live-migration instance with target host failed in cells

2016-02-05 Thread Andrew Laski
Nova has never had the ability to move instances between cells. This is an
intentional design decision and will not be addressed in cells version 1.

The new cells version 2 effort may address this at some point but it
will be a few cycles before the discussion on moving instances between
cells even begins.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1527553

Title:
  cells:live-migration  instance with target host failed in cells

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  1. version
  kilo 2015.1.0

  2. Relevant log files:

  the nova-conductor.log on cell(type:compute)

  2015-12-17 14:10:48.441 6355 WARNING nova.scheduler.utils 
[req-dfcb2a34-ef45-4162-8af6-28235621951c f04e318acd7e4e5093c91e6dc74a28c3 
53adc6d6825b43378d6ab89fc38051da - - -] Failed to compute_task_migrate_server: 
Compute service of api_cell!child_cell@chenling-kilo-2 is unavailable at this 
time.
  2015-12-17 14:10:48.441 6355 WARNING nova.scheduler.utils 
[req-dfcb2a34-ef45-4162-8af6-28235621951c f04e318acd7e4e5093c91e6dc74a28c3 
53adc6d6825b43378d6ab89fc38051da - - -] [instance: 
69f74657-ba6e-4939-a7c6-e9dc3811baa5] Setting instance to ACTIVE state

  
  3. Reproduce steps:

  Environment description

  i have a cell(type:api), named api_cell
  two cells(type:compute) named child_cell  and child_cell02.
  There are two computing nodes : chenling-kilo and chenling-kilo-2 mount on 
cell: child_cell
  There are one computing node : CL-SBCJ-5-3-4 mount on cell: child_cell02

  3.1
  create a new instance from image in cells, and this instance created success 
on compute nodes child_cell
  3.2
  then live-migration this instance target to child_cell02

  Expected result:
  live-migration success 

  Actual result:
  live-migration failed; see the log in section 2 above

  4.
  The log shows that :
  the host name on api_cell is "api_cell!child_cell@chenling-kilo-2"
  but on child_cell is "chenling-kilo-2"

  when we live-migrate an instance with a target host, 
  the hostname "api_cell!child_cell@chenling-kilo-2" is sent to child_cell's 
nova-conductor.
  nova-conductor's check_host_is_up finds that 
"api_cell!child_cell@chenling-kilo-2" does not exist and
  thinks "Compute service of api_cell!child_cell@chenling-kilo-2 is unavailable 
at this time",
  which leads to the live-migration failing
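The naming mismatch in section 4 can be sketched as follows: the API cell refers to hosts as "<cell!path>@<hostname>", while the child cell knows only the bare hostname. The separators match the log output above; this is an illustration, not nova's actual parsing code.

```python
# Split an API-cell host reference into its cell path and bare hostname.

def split_cell_host(name):
    """Return (cell_path, bare_hostname) for an API-cell host reference."""
    cell_path, sep, host = name.rpartition('@')
    if not sep:
        return None, name  # already a bare hostname
    return cell_path, host

print(split_cell_host('api_cell!child_cell@chenling-kilo-2'))
# → ('api_cell!child_cell', 'chenling-kilo-2')
print(split_cell_host('chenling-kilo-2'))
# → (None, 'chenling-kilo-2')
```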

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1527553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542475] [NEW] MTU concerns for the Open vSwitch agent

2016-02-05 Thread Matt Kassawara
Public bug reported:

I ran some experiments with the Open vSwitch (OVS) agent [1] to
determine the source of MTU problems and offer a potential solution. The
environment for these experiments contains the following items:

1) A physical (underlying) network supporting MTU of 1500 or 9000 bytes.
2) One controller node running the neutron server, OVS agent, L3 agent, DHCP 
agent, metadata agent, and OVS provider network bridge br-ex.
3) One compute node running the Open vSwitch agent.
4) A neutron provider/public network.
5) A neutron self-service/private network.
6) A neutron router between the provider and self-service networks.
7) The self-service network uses the VXLAN protocol with IPv4 endpoints which 
adds 50 bytes of overhead.
8) An instance on the self-service network with a floating IP address from an 
allocation pool on the provider network.
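The VXLAN overhead in item (7) translates into simple instance-MTU arithmetic; a quick sketch (the per-header breakdown in the comment is an assumption for illustration, only the 50-byte total comes from this report):

```python
# Instance MTU = physical network MTU minus overlay encapsulation overhead.

OVERLAY_OVERHEAD = {
    'vxlan': 50,  # outer Ethernet 14 + IPv4 20 + UDP 8 + VXLAN 8
    'flat': 0,    # no encapsulation
}

def instance_mtu(physical_mtu, overlay='vxlan'):
    """Largest payload an instance can send without tunnel fragmentation."""
    return physical_mtu - OVERLAY_OVERHEAD[overlay]

print(instance_mtu(1500))          # 1450
print(instance_mtu(9000))          # 8950
print(instance_mtu(1500, 'flat'))  # 1500
```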

Background:

1. Interfaces (or ports) on OVS bridges such as those for overlay
network tunnels appear to use an arbitrarily large MTU. Thus, OVS
bridges and tunnel interfaces somewhat inherit the MTU of physical
network interfaces. For example, if OVS uses the IP address of eth0 for
a tunnel overlay network endpoint and eth0 has a 1500 MTU, the tunnel
interface can only send packets with a payload of up to 1500 bytes
including overlay protocol overhead.

2. OVS creates interfaces (ports) in the host namespace and moves them
to the appropriate namespace(s) rather than creating veth pairs between
namespaces.

3. For Linux bridge devices such as those on the compute node that
implement security groups, Linux assumes a 1500 MTU and changes the MTU
to the lowest MTU of any port on the bridge. For example, a bridge
without ports has a 1500 MTU. If eth0 has a 9000 MTU and you add it as a
port on the bridge, the bridge changes to a 9000 MTU. If eth1 has a 1500
MTU and you add it as a port on the bridge, the bridge changes to a 1500
MTU.

4. Only devices that operate at layer-3 can participate in path MTU
discovery (PMTUD). Therefore, a change of MTU in a layer-2 device such
as a bridge or veth pair causes that device to discard packets larger
than the smallest MTU.
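Background item (3) can be expressed as a one-line rule; this is a simplified model of the described bridge behavior, not kernel code:

```python
# A Linux bridge starts at 1500 and tracks the lowest MTU of its ports.

def bridge_mtu(port_mtus, default=1500):
    """MTU a Linux bridge ends up with, per background item (3)."""
    return min(port_mtus) if port_mtus else default

print(bridge_mtu([]))            # 1500 (bridge without ports)
print(bridge_mtu([9000]))        # 9000 (eth0 at 9000 added)
print(bridge_mtu([9000, 1500]))  # 1500 (eth1 at 1500 also added)
```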

Observations:

1. For any physical network MTU, the port for the self-service network
router interface (qr) in the router namespace (qrouter) has a 1500 MTU.
Background item (2) prevents an MTU disparity at layer-2 between the
router namespace and OVS bridge br-int. If a packet from the provider
network to the instance has a payload larger than 1500 bytes, the router
can send an ICMP message to the source telling it to use a 1500 MTU.
However, the correct MTU for a private network using the VXLAN overlay
protocol should account for 50 bytes of overhead. Thus, OVS fragments
the packet over the tunnel and reassembles it on the compute node
containing the instance.

2. For a physical network MTU larger than 1500, the port for the
provider network router gateway (qg) in the router namespace (qrouter)
has a 1500 MTU. Background item (2) prevents an MTU disparity between the
router namespace and OVS provider network bridge br-ex. If a packet from
the provider network to the instance has a payload larger than 1500
bytes, the router can send an ICMP message to the source telling it to
use a 1500 MTU regardless of the private network overlay protocol. Thus,
the agent cannot realize a physical network MTU larger than 1500.

3. If a provider or private network uses DHCP, the port in the DHCP
namespace has a 1500 MTU for any physical network MTU.

4. The Linux bridge that implements security groups on the compute node
lacks any ports on physical network interfaces. Background item (3)
causes the bridge to assume a 1500 MTU. Nova actually manages this
bridge and creates a veth pair between it and the Open vSwitch bridge
br-int. Both ends of the veth pair have a 1500 MTU. Background item (1)
indicates that the OVS bridge br-int could have a larger MTU. Thus, OVS
discards packets inbound to instances with a payload larger than 1500
bytes.

5. Instances must use an MTU value that accounts for overlay protocol
overhead. Neutron currently offers a way to provide a correct value via
DHCP. However, considering observation item (4), providing a MTU value
larger than 1500 causes a disparity at layer-2 between the VM and tap
interface port on the Linux bridge that implements security groups on
the compute node. Thus, the bridge discards packets outbound from
instances with a payload larger than 1500 bytes.

6. The nova 'network_device_mtu' option controls the MTU of all devices
that it manages in observation items (4) and (5). For example, using a
value of 9000 causes the bridge, veth pair, and tap to have a 9000 MTU.
Combining this option with providing the correct value to instances via
DHCP essentially resolves MTU problems on compute nodes.

Potential solution:

1. The port for the self-service network router interface (qr) in the
router namespace (qrouter) must use the MTU of the physical network
accounting for any overlay p

[Yahoo-eng-team] [Bug 1542467] Re: LBAAS Intermittent gate failure TestHealthMonitorBasic.test_health_monitor_basic

2016-02-05 Thread Nate Johnston
Confirmed that I see a spike in failures for this test starting between
13:40 and 13:45 EST, but after 14:50 the rate seems to go back down to
normal levels.  Please recheck and see if the issue is still going on;
if so I will reclassify to 'Confirmed'.

http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:\%22AssertionError:%200%20%3D%3D%200%20:%20No%20IPv4%20addresses%20found%20in:%20[]\%22%20AND%20build_name
:\%22gate-tempest-dsvm-neutron-linuxbridge\%22

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542467

Title:
  LBAAS Intermittent gate failure
  TestHealthMonitorBasic.test_health_monitor_basic

Status in neutron:
  Opinion

Bug description:
  We are seeing intermittent gate failures at an increasing rate for the
  neutron-lbaas scenario test.

  Example gate run:
  
http://logs.openstack.org/50/259550/7/check/gate-neutron-lbaasv2-dsvm-scenario/d48b77d/

  
neutron_lbaas.tests.tempest.v2.scenario.test_healthmonitor_basic.TestHealthMonitorBasic.test_health_monitor_basic
  [456.618148s] ... FAILED

  File "neutron_lbaas/tests/tempest/v2/scenario/test_healthmonitor_basic.py", 
line 45, in test_health_monitor_basic
  File "neutron_lbaas/tests/tempest/v2/scenario/base.py", line 519, in 
_traffic_validation_after_stopping_server
  File "neutron_lbaas/tests/tempest/v2/scenario/base.py", line 509, in 
_send_requests
  socket.error: [Errno 104] Connection reset by peer

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542470] [NEW] Correctly set oslo.log configuration default_log_levels

2016-02-05 Thread Ronald Bradford
Public bug reported:

Keystone currently sets an oslo.log configuration variable directly in
the CONF object rather than using the oslo.log provided methods for
overriding configuration options.


# keystone/common/config.py
CONF.set_default('default_log_levels=...

This should be as documented in
http://docs.openstack.org/developer/oslo.log/usage.html set via

log.set_defaults(default_log_levels=..
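The documented pattern can be sketched as follows. FakeConf and set_defaults here imitate oslo.config's CONF object and oslo_log.log.set_defaults() so the sketch runs standalone; this is NOT oslo code, and the log-level list is illustrative:

```python
# "Let the owning library set its own defaults" -- the shape of the fix.

class FakeConf:
    """Tiny imitation of oslo.config's CONF object."""
    def __init__(self):
        self._defaults = {}
    def set_default(self, name, value):
        self._defaults[name] = value
    def get(self, name):
        return self._defaults.get(name)

CONF = FakeConf()

def set_defaults(default_log_levels=None):
    """Imitates oslo_log.log.set_defaults(): the documented entry point
    for overriding the library-owned option, instead of callers writing
    to CONF directly as keystone/common/config.py did."""
    if default_log_levels is not None:
        CONF.set_default('default_log_levels', default_log_levels)

# The documented pattern, expressed against the fake objects:
set_defaults(default_log_levels=['amqp=WARN', 'dogpile.core=WARN'])
print(CONF.get('default_log_levels'))  # ['amqp=WARN', 'dogpile.core=WARN']
```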

** Affects: keystone
 Importance: Low
 Assignee: Ronald Bradford (ronaldbradford)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Ronald Bradford (ronaldbradford)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1542470

Title:
  Correctly set oslo.log configuration default_log_levels

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Keystone currently sets an oslo.log configuration variable directly in
  the CONF object rather than using the oslo.log provided methods for
  overriding configuration options.

  
  # keystone/common/config.py
  CONF.set_default('default_log_levels=...

  This should be as documented in
  http://docs.openstack.org/developer/oslo.log/usage.html set via

  log.set_defaults(default_log_levels=..

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542470/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542468] [NEW] Need to include 'metadata' in Cinder's VolumeSnapshot object in Horizon API

2016-02-05 Thread Vivek Agrawal
Public bug reported:

The VolumeSnapshot object is defined in
'horizon/openstack_dashboard/api/cinder.py' and is accessed by
multiple views in Horizon. The object does not have metadata populated
for the snapshot.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1542468

Title:
  Need to include 'metadata' in Cinder's VolumeSnapshot object in
  Horizon API

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The VolumeSnapshot object is defined in
  'horizon/openstack_dashboard/api/cinder.py' and is accessed
  by multiple views in Horizon. The object does not have metadata
  populated for the snapshot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1542468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542467] [NEW] LBAAS Intermittent gate failure TestHealthMonitorBasic.test_health_monitor_basic

2016-02-05 Thread Michael Johnson
Public bug reported:

We are seeing intermittent gate failures at an increasing rate for the
neutron-lbaas scenario test.

Example gate run:
http://logs.openstack.org/50/259550/7/check/gate-neutron-lbaasv2-dsvm-scenario/d48b77d/

neutron_lbaas.tests.tempest.v2.scenario.test_healthmonitor_basic.TestHealthMonitorBasic.test_health_monitor_basic
[456.618148s] ... FAILED

File "neutron_lbaas/tests/tempest/v2/scenario/test_healthmonitor_basic.py", 
line 45, in test_health_monitor_basic
File "neutron_lbaas/tests/tempest/v2/scenario/base.py", line 519, in 
_traffic_validation_after_stopping_server
File "neutron_lbaas/tests/tempest/v2/scenario/base.py", line 509, in 
_send_requests
socket.error: [Errno 104] Connection reset by peer

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542467

Title:
  LBAAS Intermittent gate failure
  TestHealthMonitorBasic.test_health_monitor_basic

Status in neutron:
  New

Bug description:
  We are seeing intermittent gate failures at an increasing rate for the
  neutron-lbaas scenario test.

  Example gate run:
  
http://logs.openstack.org/50/259550/7/check/gate-neutron-lbaasv2-dsvm-scenario/d48b77d/

  
neutron_lbaas.tests.tempest.v2.scenario.test_healthmonitor_basic.TestHealthMonitorBasic.test_health_monitor_basic
  [456.618148s] ... FAILED

  File "neutron_lbaas/tests/tempest/v2/scenario/test_healthmonitor_basic.py", 
line 45, in test_health_monitor_basic
  File "neutron_lbaas/tests/tempest/v2/scenario/base.py", line 519, in 
_traffic_validation_after_stopping_server
  File "neutron_lbaas/tests/tempest/v2/scenario/base.py", line 509, in 
_send_requests
  socket.error: [Errno 104] Connection reset by peer

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542176] Re: 'dict' object has no attribute 'container_format'\n"]

2016-02-05 Thread Augustina Ragwitz
Thanks for taking the time to report this issue! Please double check your setup 
and try troubleshooting via support channels like #openstack on irc or mailing 
lists. If you determine this isn't a configuration issue, please reopen this 
bug and provide the following information:
- OS type and version
- Nova version
- Full stack trace (api logs)

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542176

Title:
  'dict' object has no attribute 'container_format'\n"]

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  While creating docker instance getting error below and the instance
  try to build on qemu.

  
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager 
[req-0a99ac05-6fe8-4164-b875-48be3f24dd98 8a4e98a2addf47b6af8897af05518c19 
4bbddf0552544090a0d90962b850d952 - - -] [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] Instance failed to spawn
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] Traceback (most recent call last):
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2155, in 
_build_resources
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] yield resources
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2009, in 
_build_and_run_instance
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] block_device_info=block_device_info)
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 
508, in spawn
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] image_name = 
self._get_image_name(context, instance, image_meta)
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]   File 
"/usr/local/lib/python2.7/dist-packages/novadocker/virt/docker/driver.py", line 
371, in _get_image_name
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] fmt = image.container_format
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75] AttributeError: 'dict' object has no 
attribute 'container_format'
  2016-02-05 12:05:12.764 13141 ERROR nova.compute.manager [instance: 
60dfa8e9-50f6-47b3-ba21-0e750888cd75]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542436] Re: Nova API error when getting image list

2016-02-05 Thread Augustina Ragwitz
Thanks for taking the time to file a bug! That message can be a little
misleading, unfortunately. Try troubleshooting your setup through
#openstack or the mailing list first. If you discover more information
that indicates this is a bug and not a configuration issue, then feel
free to update this issue.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542436

Title:
  Nova API error when getting image list

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I'm going through the openstack.org install for CentOS7

  I ran >: nova image-list

  and got an error asking me to report it.  I should have been given the
  list of images available.

  rpm -qa|grep nova

  openstack-nova-common-12.0.1-1.el7.noarch
  openstack-nova-api-12.0.1-1.el7.noarch
  python-novaclient-2.30.1-1.el7.noarch
  python-nova-12.0.1-1.el7.noarch
  openstack-nova-cert-12.0.1-1.el7.noarch
  openstack-nova-novncproxy-12.0.1-1.el7.noarch
  openstack-nova-scheduler-12.0.1-1.el7.noarch
  openstack-nova-conductor-12.0.1-1.el7.noarch
  openstack-nova-console-12.0.1-1.el7.noarch

  
  2016-02-05 10:07:32.990 2691 INFO nova.osapi_compute.wsgi.server 
[req-6c5b5c34-e597-4081-aac4-d34b9c0a5e30 b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] 127.0.0.1 
  "GET /v2/ HTTP/1.1" status: 200 len: 576 time: 2.1554089
  2016-02-05 10:07:33.416 2691 INFO nova.osapi_compute.wsgi.server 
[req-651601fb-9c92-4788-a76e-7703fad40637 b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] 127.0.0.1 
  "GET /v2/488fefb6bd5b4689809c8d37a7f91929/os-services HTTP/1.1" status: 200 
len: 1235 time: 0.2256131
  2016-02-05 10:08:04.905 2688 INFO nova.osapi_compute.wsgi.server 
[req-b617c112-d24f-4db9-92ec-4e33fa4f9513 b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] 127.0.0.1 
  "GET /v2/ HTTP/1.1" status: 200 len: 576 time: 1.6992910
  2016-02-05 10:08:32.078 2690 INFO nova.osapi_compute.wsgi.server 
[req-b330bd8b-0e83-4c22-b4c2-1c3a088f66df b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] 127.0.0.1 
  "GET /v2/ HTTP/1.1" status: 200 len: 576 time: 0.4813519
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions 
[req-1314c729-6e60-4e8b-8fec-522fcc5050db b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] Unexpected
   exception in API method
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/images.py", line 
145, in detail
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions 
**page_params)
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/api.py", line 68, in get_all
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions return 
session.detail(context, **kwargs)
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 284, in detail
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions for 
image in images:
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 254, in list
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions for 
image in paginate(params, return_request_id):
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 238, in 
paginate
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions images, 
resp = self._list(url, "images")
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 63, in _list
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions resp, 
body = self.client.get(url)
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280, in get
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions return 
self._request('GET', url, **kwargs)
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272, in 
_request
  2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions resp, 
body_iter

[Yahoo-eng-team] [Bug 1317302] Re: pki_setup shouldn't be required to check revocations

2016-02-05 Thread Brant Knudson
The revocation list is signed by the PKI certificates for some reason.
The revocation list is used for UUID tokens in addition to PKI tokens.

This fix is making it so that the revocation list is not signed by the
PKI certificates.

** Changed in: keystone
   Status: Won't Fix => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1317302

Title:
  pki_setup shouldn't be required to check revocations

Status in OpenStack Identity (keystone):
  In Progress
Status in keystonemiddleware:
  In Progress

Bug description:
  
  With the fix for bug 1312858 , auth_token can validate UUID tokens or hashed 
PKI tokens against the revocation list. But in order to use this in a setting 
where only UUID tokens are being used, the server still needs to have pki_setup 
run. We should be able to check UUID tokens against the revocation list even 
when pki_setup hasn't been done.

  The reason pki_setup has to be done is that the revocation list is
  signed using CMS. The auth_token middleware only accepts the signed
  format for the revocation list.

  The proposed solution is to change the auth_token middleware to also
  accept a revocation list that's not signed. If it's not signed, then
  the PKI certificates aren't required.

  The keystone server will be changed to allow configuring it such that
  the revocation list will be sent as an unencrypted JSON object that
  the auth_token middleware can now accept.
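The proposed middleware behavior amounts to "try plain JSON first, fall back to CMS only for signed payloads"; a minimal sketch, where verify_cms is a hypothetical placeholder and none of this is the real keystonemiddleware code:

```python
# Accept an unsigned JSON revocation list directly; require PKI
# certificates only when the payload is actually CMS-signed.

import json

def verify_cms(body):
    """Placeholder for CMS signature verification (needs pki_setup certs)."""
    raise NotImplementedError("CMS verification requires PKI certificates")

def parse_revocation_list(body):
    try:
        return json.loads(body)  # unsigned JSON: no pki_setup needed
    except ValueError:
        return verify_cms(body)  # signed payload: PKI path as before

print(parse_revocation_list('{"revoked": []}'))  # {'revoked': []}
```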

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1317302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542333] Re: class 'keystoneclient.exceptions.Unauthorized

2016-02-05 Thread Augustina Ragwitz
*** This bug is a duplicate of bug 1534273 ***
https://bugs.launchpad.net/bugs/1534273

** This bug has been marked a duplicate of bug 1534273
   Keystone configuration options for nova.conf missing from Redhat/CentOS 
install guide

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542333

Title:
  class 'keystoneclient.exceptions.Unauthorized

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  I'm unable to launch instances via CLI or Horizon. I can get the same errors 
in nova-api.log via CLI or Horizon as well. I'm running openstack liberty on 
CentOS Linux release 7.2.1511 (Core) on 5 node (controller, compute1, block1, 
object1 , object2). Everything go well except router didn't crate but then I 
just downgrade iproute on controller:
   https://bugs.launchpad.net/neutron/+bug/1528977

  I have use this guide: http://docs.openstack.org/liberty/install-
  guide-rdo/

  Now I can't create instance. I always get that error:

  "class 'keystoneclient.exceptions.Unauthorized"

  
   [root@controller ~]# nova boot --flavor m1.tiny --image cirros --nic 
net-id=652f6452-7a66-476a-809b-df34ea1f78d2 \
  >   --security-group default --key-name mykey public-instance
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-447714f1-8daf-48db-b853-a7f0c634f468)
  [root@controller ~]# 

  [root@controller ~]# rpm -qa | grep nova
  python-nova-12.0.1-1.el7.noarch
  openstack-nova-novncproxy-12.0.1-1.el7.noarch
  python-novaclient-2.30.1-1.el7.noarch
  openstack-nova-conductor-12.0.1-1.el7.noarch
  openstack-nova-console-12.0.1-1.el7.noarch
  openstack-nova-cert-12.0.1-1.el7.noarch
  openstack-nova-scheduler-12.0.1-1.el7.noarch
  openstack-nova-common-12.0.1-1.el7.noarch
  openstack-nova-api-12.0.1-1.el7.noarch
  [root@controller ~]# 

  
  There is more in attachment. 

  Expected result:

  Instance is launched without error.

  Actual result:

  Instance does not launch and the reported errors are observed in the
  nova-api.log file.

  thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542436] [NEW] Nova API error when getting image list

2016-02-05 Thread TGABTG
Public bug reported:

I'm going through the openstack.org install for CentOS7

I ran >: nova image-list

and got an error asking me to report it.  I should have been given the
list of images available.

rpm -qa|grep nova

openstack-nova-common-12.0.1-1.el7.noarch
openstack-nova-api-12.0.1-1.el7.noarch
python-novaclient-2.30.1-1.el7.noarch
python-nova-12.0.1-1.el7.noarch
openstack-nova-cert-12.0.1-1.el7.noarch
openstack-nova-novncproxy-12.0.1-1.el7.noarch
openstack-nova-scheduler-12.0.1-1.el7.noarch
openstack-nova-conductor-12.0.1-1.el7.noarch
openstack-nova-console-12.0.1-1.el7.noarch


2016-02-05 10:07:32.990 2691 INFO nova.osapi_compute.wsgi.server 
[req-6c5b5c34-e597-4081-aac4-d34b9c0a5e30 b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] 127.0.0.1 
"GET /v2/ HTTP/1.1" status: 200 len: 576 time: 2.1554089
2016-02-05 10:07:33.416 2691 INFO nova.osapi_compute.wsgi.server 
[req-651601fb-9c92-4788-a76e-7703fad40637 b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] 127.0.0.1 
"GET /v2/488fefb6bd5b4689809c8d37a7f91929/os-services HTTP/1.1" status: 200 
len: 1235 time: 0.2256131
2016-02-05 10:08:04.905 2688 INFO nova.osapi_compute.wsgi.server 
[req-b617c112-d24f-4db9-92ec-4e33fa4f9513 b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] 127.0.0.1 
"GET /v2/ HTTP/1.1" status: 200 len: 576 time: 1.6992910
2016-02-05 10:08:32.078 2690 INFO nova.osapi_compute.wsgi.server 
[req-b330bd8b-0e83-4c22-b4c2-1c3a088f66df b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] 127.0.0.1 
"GET /v2/ HTTP/1.1" status: 200 len: 576 time: 0.4813519
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions 
[req-1314c729-6e60-4e8b-8fec-522fcc5050db b5ab4370e06644d68c6ecd76ba93024c 
488fefb6bd5b4689809c8d37a7f91929 - - -] Unexpected
 exception in API method
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/images.py", line 
145, in detail
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions 
**page_params)
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/api.py", line 68, in get_all
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions return 
session.detail(context, **kwargs)
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/image/glance.py", line 284, in detail
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions for image 
in images:
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 254, in list
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions for image 
in paginate(params, return_request_id):
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 238, in 
paginate
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions images, 
resp = self._list(url, "images")
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 63, in _list
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions resp, body 
= self.client.get(url)
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280, in get
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions return 
self._request('GET', url, **kwargs)
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272, in 
_request
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions resp, 
body_iter = self._handle_response(resp)
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in 
_handle_response
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions raise 
exc.from_response(resp, resp.content)
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions 
HTTPInternalServerError: 500 Internal Server Error: The server has either erred 
or is incapable of performing the requested operation. (HTTP 500)
2016-02-05 10:08:35.991 2690 ERROR nova.api.openstack.extensions 
2016-02-05 10:08:35.999 2690 INFO nova.api.openstack.wsgi 
[req-1314c729-6e60-4e8b-8fec-522fcc5050db b5ab4370e06644d68c6ecd76b

[Yahoo-eng-team] [Bug 1542430] [NEW] [artifacts] Setting mutable=False for BLOBs lead to exception

2016-02-05 Thread Kirill Zaitsev
Public bug reported:

2016-02-05 20:42:01.198 48899 ERROR glance.common.artifacts.loader 
stored_object = definitions.BinaryObject(mutable=False)
2016-02-05 20:42:01.198 48899 ERROR glance.common.artifacts.loader   File 
"/Users/teferi/murano/glance/glance/common/artifacts/definitions.py", line 520, 
in __init__
2016-02-05 20:42:01.198 48899 ERROR glance.common.artifacts.loader 
mutable=False, **kwargs)
2016-02-05 20:42:01.198 48899 ERROR glance.common.artifacts.loader TypeError: 
__init__() got multiple values for keyword argument 'mutable'

It seems that definitions.BinaryObject should always be mutable=False, but
the error message is misleading. I suggest that this should be a
warning, not an error: mutable=False can be ignored with a warning, and
mutable=True can raise an error that explains that it is impossible
to set mutable for BLOBs.
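The TypeError above is the generic Python failure mode when a keyword is passed both explicitly and again through **kwargs. A minimal stand-in (illustrative class names, not Glance's actual code) reproduces it:

```python
# Minimal stand-in for the pattern in definitions.py: the subclass forces
# mutable=False while also forwarding the caller's kwargs unchanged.
class Base(object):
    def __init__(self, mutable=True, **kwargs):
        self.mutable = mutable


class BinaryObject(Base):
    def __init__(self, **kwargs):
        # If the caller also passed mutable=..., 'mutable' arrives twice here.
        super(BinaryObject, self).__init__(mutable=False, **kwargs)


msg = ''
try:
    BinaryObject(mutable=False)
except TypeError as exc:
    msg = str(exc)
print(msg)  # "... got multiple values for keyword argument 'mutable'"
```

A fix along the lines the reporter suggests would pop 'mutable' from kwargs first, warning (or erroring) depending on the value, instead of letting the duplicate keyword surface as this confusing TypeError.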

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1542430

Title:
  [artifacts] Setting mutable=False for BLOBs lead to exception

Status in Glance:
  New

Bug description:
  2016-02-05 20:42:01.198 48899 ERROR glance.common.artifacts.loader 
stored_object = definitions.BinaryObject(mutable=False)
  2016-02-05 20:42:01.198 48899 ERROR glance.common.artifacts.loader   File 
"/Users/teferi/murano/glance/glance/common/artifacts/definitions.py", line 520, 
in __init__
  2016-02-05 20:42:01.198 48899 ERROR glance.common.artifacts.loader 
mutable=False, **kwargs)
  2016-02-05 20:42:01.198 48899 ERROR glance.common.artifacts.loader TypeError: 
__init__() got multiple values for keyword argument 'mutable'

  Seems that definitions.BinaryObject should always be mutable=False,
  but the error message is misleading. I suggest that this should be a
  warning, not an error. mutable=False can be ignored with an error and
  mutable=True can lead to an Error, that explains, that it's impossible
  to set mutable for BLOB

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1542430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542421] [NEW] Split-network-plane-for-live-migration

2016-02-05 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/245005
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit af41accff9456748a3106bc1206cfc22d10a8cf4
Author: Kevin_Zheng 
Date:   Fri Nov 13 14:14:28 2015 +0800

Split-network-plane-for-live-migration

When we do live migration with QEMU/KVM driver,
we use hostname of target compute node as the
target of live migration. So the RPC call and live
migration traffic will be in same network plane.

This patch adds a new option live_migration_inbound_addr
in configuration file, set None as default value.
When pre_live_migration() executes on destination host, set
the option into pre_migration_data, if it's not None.
When driver.live_migration() executes on source host,
if this option is present in pre_migration_data, the ip/hostname
address is used instead of CONF.libvirt.live_migration_uri
as the uri for live migration, if it's None, then the
mechanism remains as it is now.

This patch (BP) focuses only on the QEMU/KVM driver,
the implementations for other drivers should be done
in a separate blueprint.

DocImpact:new config option "live_migration_inbound_addr" will be added.

Change-Id: I81c783886497a844fb4b38d0f2a3d6c18a99831c
Co-Authored-By: Rui Chen 
Implements: blueprint split-network-plane-for-live-migration
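The URI-selection logic described in the commit message can be sketched as follows (a simplified illustration with hypothetical function and key names, not nova's actual code):

```python
def select_live_migration_uri(pre_migration_data, uri_template, dest_hostname):
    """Pick the target address for live migration.

    If the destination host advertised a dedicated migration address via
    pre_migration_data, use it; otherwise fall back to the destination's
    hostname, which is the pre-patch behaviour.
    """
    target = pre_migration_data.get('live_migration_inbound_addr') or dest_hostname
    return uri_template % target


# With the new option set, migration traffic goes over the dedicated plane:
print(select_live_migration_uri({'live_migration_inbound_addr': '10.0.1.5'},
                                'qemu+tcp://%s/system', 'compute-2'))
# -> qemu+tcp://10.0.1.5/system

# Without it, the mechanism remains as it is now:
print(select_live_migration_uri({}, 'qemu+tcp://%s/system', 'compute-2'))
# -> qemu+tcp://compute-2/system
```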

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542421

Title:
  Split-network-plane-for-live-migration

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/245005
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit af41accff9456748a3106bc1206cfc22d10a8cf4
  Author: Kevin_Zheng 
  Date:   Fri Nov 13 14:14:28 2015 +0800

  Split-network-plane-for-live-migration
  
  When we do live migration with QEMU/KVM driver,
  we use hostname of target compute node as the
  target of live migration. So the RPC call and live
  migration traffic will be in same network plane.
  
  This patch adds a new option live_migration_inbound_addr
  in configuration file, set None as default value.
  When pre_live_migration() executes on destination host, set
  the option into pre_migration_data, if it's not None.
  When driver.live_migration() executes on source host,
  if this option is present in pre_migration_data, the ip/hostname
  address is used instead of CONF.libvirt.live_migration_uri
  as the uri for live migration, if it's None, then the
  mechanism remains as it is now.
  
  This patch (BP) focuses only on the QEMU/KVM driver,
  the implementations for other drivers should be done
  in a separate blueprint.
  
  DocImpact:new config option "live_migration_inbound_addr" will be added.
  
  Change-Id: I81c783886497a844fb4b38d0f2a3d6c18a99831c
  Co-Authored-By: Rui Chen 
  Implements: blueprint split-network-plane-for-live-migration

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542421/+subscriptions



[Yahoo-eng-team] [Bug 1541876] Re: Version 2.50.1 of Selenium breaks integration tests

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/276123
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=5a9413c28b37f9a15ed3730ebc64f131a830ee69
Submitter: Jenkins
Branch:master

commit 5a9413c28b37f9a15ed3730ebc64f131a830ee69
Author: Timur Sufiev 
Date:   Thu Feb 4 13:02:04 2016 +0300

Zoom out pages in i9n tests

Since Selenium 2.50.1 release it's no longer possible to click
elements which are outside of visible viewport. To keep both tests
working and error screenshots still visible, standard xvfb screensize
is set to 1920x1080 and page scale is set to 67%.

Also temporarily disable create_instance test since it fails on some
nodes due to missing network.

Related-Bug: #1542211
Closes-Bug: #1541876
Change-Id: Ie96606cf52860dd8bb3286971962a16cb3415daf


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1541876

Title:
  Version 2.50.1 of Selenium breaks integration tests

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Usual stacktrace is below, the issue happens consistently for the same
  tests/table/button combination, but does not always happen for every
  commit (seems to be some correlation to testing node environment,
  since nodes in NodePool may be built from different images).

  2016-02-04 02:30:27.503 | 2016-02-04 02:30:27.457 | Screenshot: 
{{{/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/integration_tests_screenshots/test_create_delete_user_2016.02.04-022512.png}}}
  2016-02-04 02:30:27.524 | 2016-02-04 02:30:27.492 | 
  2016-02-04 02:30:27.551 | 2016-02-04 02:30:27.519 | Traceback (most recent 
call last):
  2016-02-04 02:30:27.574 | 2016-02-04 02:30:27.550 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_user_create_delete.py",
 line 26, in test_create_delete_user
  2016-02-04 02:30:27.579 | 2016-02-04 02:30:27.557 | project='admin', 
role='admin')
  2016-02-04 02:30:27.594 | 2016-02-04 02:30:27.572 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/identity/userspage.py",
 line 52, in create_user
  2016-02-04 02:30:27.617 | 2016-02-04 02:30:27.593 | create_user_form = 
self.users_table.create_user()
  2016-02-04 02:30:27.631 | 2016-02-04 02:30:27.608 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/tables.py",
 line 162, in wrapper
  2016-02-04 02:30:27.672 | 2016-02-04 02:30:27.638 | return method(table, 
action_element)
  2016-02-04 02:30:27.695 | 2016-02-04 02:30:27.653 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/identity/userspage.py",
 line 25, in create_user
  2016-02-04 02:30:27.696 | 2016-02-04 02:30:27.656 | create_button.click()
  2016-02-04 02:30:27.696 | 2016-02-04 02:30:27.673 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py",
 line 75, in click
  2016-02-04 02:30:27.703 | 2016-02-04 02:30:27.676 | 
self._execute(Command.CLICK_ELEMENT)
  2016-02-04 02:30:27.715 | 2016-02-04 02:30:27.689 |   File 
"/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/webdriver.py",
 line 107, in _execute
  2016-02-04 02:30:27.734 | 2016-02-04 02:30:27.712 | params)
  2016-02-04 02:30:27.787 | 2016-02-04 02:30:27.722 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webelement.py",
 line 469, in _execute
  2016-02-04 02:30:27.794 | 2016-02-04 02:30:27.772 | return 
self._parent.execute(command, params)
  2016-02-04 02:30:27.803 | 2016-02-04 02:30:27.781 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py",
 line 201, in execute
  2016-02-04 02:30:27.813 | 2016-02-04 02:30:27.789 | 
self.error_handler.check_response(response)
  2016-02-04 02:30:27.816 | 2016-02-04 02:30:27.794 |   File 
"/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py",
 line 193, in check_response
  2016-02-04 02:30:27.820 | 2016-02-04 02:30:27.797 | raise 
exception_class(message, screen, stacktrace)
  2016-02-04 02:30:27.829 | 2016-02-04 02:30:27.804 | 
selenium.common.exceptions.WebDriverException: Message: Element is not 
clickable at point (944, 0.98333740234375). Other element would receive the 
click: 

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1541876/+subscriptions



[Yahoo-eng-team] [Bug 1542417] [NEW] ldap backend lacks support for user_description_attribute mapping

2016-02-05 Thread Rudolf Vriend
Public bug reported:

The LDAP backend supports mapping between LDAP and keystone user
attributes via the 'user__name' settings in the ldap driver
configuration.

The implementation is incomplete, since there is no support for
specifying a 'user_description_attribute' setting.

As long as the LDAP attribute name is 'description', one could specify a
1:1 'user_additional_attribute_mapping = description:description'
mapping as a workaround, which would yield the desired result.

In case a user's full name is stored in a different attribute (as with
many AD backends, where the user's full name is contained in the
'displayName' attribute), there is no way to specify this mapping, which
results in users having no description.
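The workaround described above, expressed as a keystone configuration fragment (illustrative; it only helps when the source LDAP attribute is literally named 'description'):

```ini
[ldap]
# 1:1 mapping workaround from the report: copies the LDAP 'description'
# attribute into the keystone user's description field.
user_additional_attribute_mapping = description:description
```

A dedicated 'user_description_attribute' option, analogous to the other user attribute mappings, would let operators point this at 'displayName' or any other attribute instead.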

** Affects: keystone
 Importance: Undecided
 Assignee: Rudolf Vriend (rudolf-vriend)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Rudolf Vriend (rudolf-vriend)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1542417

Title:
  ldap backend lacks support for user_description_attribute mapping

Status in OpenStack Identity (keystone):
  New

Bug description:
  The LDAP backend supports mapping between LDAP and keystone user
  attributes via the 'user__name' settings in the ldap driver
  configuration.

  The implementation is incomplete, since there is no support for
  specifying a 'user_description_attribute' setting.

  As long as the LDAP attribute name is 'description', one could specify
  a 1:1 'user_additional_attribute_mapping = description:description'
  mapping as a workaround, which would yield the desired result.

  In case a user's full name is stored in a different attribute (as with
  many AD backends, where the user's full name is contained in the
  'displayName' attribute), there is no way to specify this mapping,
  which results in users having no description.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542417/+subscriptions



[Yahoo-eng-team] [Bug 1542415] [NEW] Hyper-V live-migration rollback leaves destination node in inconsistent state

2016-02-05 Thread Timofey Durakov
Public bug reported:

Rollback of the destination node in case of live-migration failure depends
on the block-migrate flag that is passed by the operator. So it is possible
to leave the destination node in an inconsistent state by passing the
block-migrate flag over the API.

** Affects: nova
 Importance: Undecided
 Assignee: Timofey Durakov (tdurakov)
 Status: In Progress


** Tags: hyper-v live-migration

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542415

Title:
  Hyper-V live-migration rollback leaves destination node in
  inconsistent state

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Rollback of the destination node in case of live-migration failure
  depends on the block-migrate flag that is passed by the operator. So it
  is possible to leave the destination node in an inconsistent state by
  passing the block-migrate flag over the API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542415/+subscriptions



[Yahoo-eng-team] [Bug 1540259] Re: uselist should be True to DVRPortBinding orm.relationship

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274550
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=951dd5e015cbd102aed80a6808f623b4626e727d
Submitter: Jenkins
Branch:master

commit 951dd5e015cbd102aed80a6808f623b4626e727d
Author: lzklibj 
Date:   Mon Feb 1 16:39:54 2016 +0800

Fix port relationship for DVRPortBinding

uselist=False is used for one to one relationship, but for
DVRPortBinding, a router_interface_distributed port can have
multiple bindings. That causes SAWarning:
Multiple rows returned with uselist=False for lazily-loaded attribute
'Port.dvr_port_binding'.

"uselist=False" is misused in DVRPortBinding port relationship,
this patch will fix this.

Change-Id: I2b00d96aaa445e0977bc3d4957d35a28d44dd953
Closes-Bug: #1540259


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540259

Title:
  uselist should be True to DVRPortBinding orm.relationship

Status in neutron:
  Fix Released

Bug description:
  In a DVR scenario, after a router interface has been bound to multiple hosts, 
when we remove this interface from the router, an SQL warning is raised in the 
neutron server log:
SAWarning: Multiple rows returned with uselist=False for eagerly-loaded 
attribute 'Port.dvr_port_binding'

  it's caused by
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/models.py#L130,
  uselist is set to False. But indeed, table ml2_dvr_port_bindings
  stores all bindings for router_interface_distributed ports, and that
  kind of port can have multiple bindings. So it is not a one-to-one
  relationship; we should remove "uselist=False" from the
  DVRPortBinding port orm.relationship.
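A minimal, self-contained sketch of the relationship semantics at issue (toy tables, not neutron's actual models): with the default uselist=True, a one-to-many relationship returns all rows as a list instead of triggering the "Multiple rows returned with uselist=False" warning.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()


class Port(Base):
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    # uselist=True is the default for one-to-many: all bindings come back.
    dvr_port_bindings = relationship('DVRPortBinding')


class DVRPortBinding(Base):
    __tablename__ = 'ml2_dvr_port_bindings'
    id = Column(Integer, primary_key=True)
    port_id = Column(Integer, ForeignKey('ports.id'))
    host = Column(String(64))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Port(id=1))
session.add_all([DVRPortBinding(port_id=1, host='host-a'),
                 DVRPortBinding(port_id=1, host='host-b')])
session.commit()

# A distributed router interface port bound on two hosts yields two rows.
hosts = sorted(b.host for b in session.get(Port, 1).dvr_port_bindings)
print(hosts)  # ['host-a', 'host-b']
```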

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540259/+subscriptions



[Yahoo-eng-team] [Bug 1542385] [NEW] dnsmasq: cannot open log

2016-02-05 Thread Rossella Sblendido
Public bug reported:

it seems dnsmasq can't create the log file...

2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
[req-a29e6030-7524-4d33-8a1e-b7bb31809441 a19d99e7a3ac400db02142044a3190eb 
728a8a9d8
1824100bc01735e12273607 - - -] Unable to enable dhcp for 
b3da5a48-39f7-43a0-b6a5-a26e426fec08.
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 115, in 
call_driver
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 206, in 
enable
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
self.spawn_process()
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 414, in 
spawn_process
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
self._spawn_or_reload_process(reload_with_HUP=False)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", line 428, in 
_spawn_or_reload_process
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
pm.enable(reload_cfg=reload_with_HUP)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", 
line 92, in enable
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
run_as_root=self.run_as_root)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 861, in 
execute
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
log_fail_as_error=log_fail_as_error, **kwargs)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 159, in 
execute
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent raise 
RuntimeError(m)
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent RuntimeError:
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Command: ['sudo', 
'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qdhcp-b3da5a48-39f7-43a0-b6a5-a26e426fec08', 'dnsmasq', '--no-hosts', 
'--no-resolv', '--strict-order', '--except-interface=lo', 
'--pid-file=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/pid', 
'--dhcp-hostsfile=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/host',
 
'--addn-hosts=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/addn_hosts',
 
'--dhcp-optsfile=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/opts',
 
'--dhcp-leasefile=/var/lib/neutron/dhcp/b3da5a48-39f7-43a0-b6a5-a26e426fec08/leases',
 '--dhcp-match=set:ipxe,175', '--bind-interfaces', 
'--interface=tap58da37d6-b4', 
'--dhcp-range=set:tag0,192.168.123.0,static,86400s', 
'--dhcp-option-force=option:mtu,1358', '--dhcp-lease-max=256', '--conf-file=', 
'--server=192.168.219.1', '--domain=openstack.local', '--log-queries', 
'--log-dhcp', 
'--log-facility=/var/log/neutron/b3da5a48-39f7-43a0-b6a5-a26e426fec08/dhcp_dns_log']
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Exit code: 3
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Stdin:
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Stdout:
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Stderr:
2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent dnsmasq: cannot 
open log /var/log/neutron/b3da5a48-39f7-43a0-b6a5-a26e426fec08/dhcp_dns_log: No 
such file or directory
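dnsmasq exits with code 3 when the directory given to --log-facility does not exist. One possible guard, sketched here with an illustrative helper name (not neutron's actual code), is to create the per-network log directory before spawning the process:

```python
import os
import tempfile


def ensure_dhcp_log_path(network_id, log_root):
    # Hypothetical helper: create the per-network directory up front so that
    # dnsmasq can open its --log-facility file instead of exiting with code 3.
    log_dir = os.path.join(log_root, network_id)
    os.makedirs(log_dir, exist_ok=True)
    return os.path.join(log_dir, 'dhcp_dns_log')


# Demonstrate against a throwaway root instead of /var/log/neutron:
root = tempfile.mkdtemp()
path = ensure_dhcp_log_path('b3da5a48-39f7-43a0-b6a5-a26e426fec08', root)
print(os.path.isdir(os.path.dirname(path)))  # True
```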

** Affects: neutron
 Importance: Medium
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542385

Title:
  dnsmasq: cannot open log

Status in neutron:
  New

Bug description:
  it seems dnsmasq can't create the log file...

  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
[req-a29e6030-7524-4d33-8a1e-b7bb31809441 a19d99e7a3ac400db02142044a3190eb 
728a8a9d8
  1824100bc01735e12273607 - - -] Unable to enable dhcp for 
b3da5a48-39f7-43a0-b6a5-a26e426fec08.
  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 115, in 
call_driver
  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2016-02-04 16:52:42.164 10511 ERROR neutron.agent.dhcp.agent   File 
"/usr/lib

[Yahoo-eng-team] [Bug 1475717] Re: RFE: Security Rules should support VRRP protocol

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252155
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=592b548bb6720760efae4b10bec59e78a753f4d7
Submitter: Jenkins
Branch:master

commit 592b548bb6720760efae4b10bec59e78a753f4d7
Author: Li Ma 
Date:   Wed Dec 2 10:30:22 2015 +0800

Add popular IP protocols for security group

Adding the additional protocols listed below to
security groups brings convenience to operators
when configuring these protocols. In addition, it makes
the security group rules more readable.

The added protocols are: ah, dccp, egp, esp, gre,
ipv6-encap, ipv6-frag, ipv6-nonxt, ipv6-opts,
ipv6-route, ospf, pgm, rsvp, sctp, udplite, vrrp.

A related patch is submitted to neutron-lib project:
https://review.openstack.org/259037

DocImpact: You can specify protocol names rather than
protocol number in API and CLI commands. I'll update
the documentation when it is merged.

APIImpact

Change-Id: Iaef9b650449b4d9d362a59305c45e0aa3831507c
Closes-Bug: #1475717
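For reference, the protocol names above correspond to standard IANA-assigned IP protocol numbers. An illustrative name-to-number subset (not the patch's literal code) and the kind of resolution the DocImpact note describes:

```python
# IANA-assigned IP protocol numbers for the names the patch adds.
IP_PROTOCOL_MAP = {
    'ah': 51, 'dccp': 33, 'egp': 8, 'esp': 50, 'gre': 47,
    'ospf': 89, 'pgm': 113, 'rsvp': 46, 'sctp': 132,
    'udplite': 136, 'vrrp': 112,
}


def resolve_protocol(value):
    # Accept either a protocol name ('vrrp') or a number ('112' / 112),
    # as the updated API and CLI are described to do.
    if str(value).isdigit():
        return int(value)
    return IP_PROTOCOL_MAP[str(value).lower()]


print(resolve_protocol('vrrp'), resolve_protocol('132'))  # 112 132
```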


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475717

Title:
  RFE: Security Rules should support VRRP protocol

Status in neutron:
  Fix Released

Bug description:
  We are following http://blog.aaronorosen.com/implementing-high-
  availability-instances-with-neutron-using-vrrp/ to set up two "service
  vms" as an active-standby pair using VRRP for the Octavia project.
  While doing so we noticed that all VRRP packets were blocked and the
  protocol is not supported by the current security groups. Since that
  will gain more momentum with the NFV story we propose to add this
  additional protocol to security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475717/+subscriptions



[Yahoo-eng-team] [Bug 1542352] [NEW] Add popular IP protocols for security group

2016-02-05 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/252155
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 592b548bb6720760efae4b10bec59e78a753f4d7
Author: Li Ma 
Date:   Wed Dec 2 10:30:22 2015 +0800

Add popular IP protocols for security group

Adding the additional protocols listed below to
security groups brings convenience to operators
when configuring these protocols. In addition, it makes
the security group rules more readable.

The added protocols are: ah, dccp, egp, esp, gre,
ipv6-encap, ipv6-frag, ipv6-nonxt, ipv6-opts,
ipv6-route, ospf, pgm, rsvp, sctp, udplite, vrrp.

A related patch is submitted to neutron-lib project:
https://review.openstack.org/259037

DocImpact: You can specify protocol names rather than
protocol number in API and CLI commands. I'll update
the documentation when it is merged.

APIImpact

Change-Id: Iaef9b650449b4d9d362a59305c45e0aa3831507c
Closes-Bug: #1475717

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542352

Title:
  Add popular IP protocols for security group

Status in neutron:
  New

Bug description:
  https://review.openstack.org/252155
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 592b548bb6720760efae4b10bec59e78a753f4d7
  Author: Li Ma 
  Date:   Wed Dec 2 10:30:22 2015 +0800

  Add popular IP protocols for security group
  
  Adding the additional protocols listed below to
  security groups brings convenience to operators
  when configuring these protocols. In addition, it makes
  the security group rules more readable.
  
  The added protocols are: ah, dccp, egp, esp, gre,
  ipv6-encap, ipv6-frag, ipv6-nonxt, ipv6-opts,
  ipv6-route, ospf, pgm, rsvp, sctp, udplite, vrrp.
  
  A related patch is submitted to neutron-lib project:
  https://review.openstack.org/259037
  
  DocImpact: You can specify protocol names rather than
  protocol number in API and CLI commands. I'll update
  the documentation when it is merged.
  
  APIImpact
  
  Change-Id: Iaef9b650449b4d9d362a59305c45e0aa3831507c
  Closes-Bug: #1475717

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542352/+subscriptions



[Yahoo-eng-team] [Bug 1535327] Re: neutron.common.rpc has no unit test coverage

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/268334
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e4f2fb9df652354157c076bdf9b3eff464dcd3ba
Submitter: Jenkins
Branch:master

commit e4f2fb9df652354157c076bdf9b3eff464dcd3ba
Author: Ryan Rossiter 
Date:   Fri Jan 15 14:15:02 2016 +

Add tests for RPC methods/classes

The public methods and classes within the neutron.common.rpc module are
untested. This adds tests for all public-facing functions within the
module.

Closes-Bug: #1535327
Change-Id: I80227dd73e58f8b5dbde9cea01ceac22cc8b2e34


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535327

Title:
  neutron.common.rpc has no unit test coverage

Status in neutron:
  Fix Released

Bug description:
  The neutron.common.rpc module has no unit test coverage. The following
  change was made to neutron/common/rpc.py:

   68 def cleanup():
   69 global TRANSPORT, NOTIFIER
   70 assert TRANSPORT is not None
   71 assert NOTIFIER is not None
   72 #TRANSPORT.cleanup()
   73 #TRANSPORT = NOTIFIER = None

  No unit tests failed as a result of L72-73 being commented out,
  meaning no unit tests are covering some of the main functionality of
  this module. At a minimum, the public functions and classes should be
  unit tested.
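A test along the lines the report asks for could look like this sketch, using a module-level stand-in that mirrors the cleanup() shown above rather than importing neutron.common.rpc itself:

```python
import unittest.mock as mock

# Stand-in mirroring the module globals and cleanup() quoted in the report.
TRANSPORT = mock.Mock()
NOTIFIER = mock.Mock()


def cleanup():
    global TRANSPORT, NOTIFIER
    assert TRANSPORT is not None
    assert NOTIFIER is not None
    TRANSPORT.cleanup()
    TRANSPORT = NOTIFIER = None


# The assertions below fail if the last two lines of cleanup() are commented
# out -- exactly the coverage gap the bug describes.
transport = TRANSPORT
cleanup()
transport.cleanup.assert_called_once_with()
assert TRANSPORT is None and NOTIFIER is None
```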

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1535327/+subscriptions



[Yahoo-eng-team] [Bug 1443421] Re: After VM migration, tunnels not getting removed with L2Pop ON, when using multiple api_workers in controller

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/215467
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c5fa665de3173f3ad82cc3e7624b5968bc52c08d
Submitter: Jenkins
Branch:master

commit c5fa665de3173f3ad82cc3e7624b5968bc52c08d
Author: shihanzhang 
Date:   Fri Aug 21 09:51:59 2015 +0800

ML2: update port's status to DOWN if its binding info has changed

This fixes the problem that when two or more ports in a network
are migrated to a host that did not previously have any ports in
the same network, the new host is sometimes not told about the
IP/MAC addresses of all the other ports in the network. In other
words, initial L2population does not work, for the new host.

This is because the l2pop mechanism driver only sends catch-up
information to the host when it thinks it is dealing with the first
active port on that host; and currently, when multiple ports are
migrated to a new host, there is always more than one active port so
the condition above is never triggered.

The fix is for the ML2 plugin to set a port's status to DOWN when
its binding info changes.

This patch also fixes the bug when nova thinks it should not wait
for any events from neutron because all ports are already active.

Closes-bug: #1483601
Closes-bug: #1443421
Closes-Bug: #1522824
Related-Bug: #1450604

Change-Id: I342ad910360b21085316c25df2154854fd1001b2


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443421

Title:
  After VM migration, tunnels not getting removed with L2Pop ON, when
  using multiple api_workers in controller

Status in neutron:
  Fix Released
Status in openstack-ansible:
  Confirmed
Status in openstack-ansible trunk series:
  Confirmed

Bug description:
  Using multiple api_workers, after a "nova live-migration" command:
  a) tunnel flows and tunnel ports are always removed from the old host, but
  b) other hosts sometimes do not get the port-delete notification from the 
old host, so on those hosts the tunnel ports and flood flows (though not the 
unicast flow for the port) for the old host still remain.
  The root cause and fix are explained in comments 12 and 13.
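  The fix described in the commit message above can be sketched as follows
  (a hedged toy model; the function and field names are illustrative, not
  the actual ML2 plugin code):

```python
# Sketch of the ML2 fix: when a port's binding host changes, force its
# status to DOWN so the destination agent sees a fresh DOWN -> BUILD ->
# ACTIVE transition and l2pop can perform its full fdb sync for the first
# port that goes active on the new host.
def update_port_binding(port, new_host):
    if port["binding:host_id"] != new_host:
        port["binding:host_id"] = new_host
        port["status"] = "DOWN"  # was left ACTIVE before the fix
    return port

port = {"binding:host_id": "compute-a", "status": "ACTIVE"}
update_port_binding(port, "compute-b")
print(port["status"])  # DOWN
```

  With this behavior, a migrated port no longer arrives on the new host
  already ACTIVE, which is what previously suppressed the l2pop catch-up.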

  According to the bug reporter, this bug can also be reproduced as below.
  Setup: Neutron server HA (3 nodes).
  Hypervisor: ESX with OVSvApp
  L2 pop is enabled on the network node and disabled on the OVSvApp agent.

  Condition:
  Enable L2 pop on the OVS agent, with api_workers = 10 in the controller.

  On the network node, the VXLAN tunnel is created with ESX2, but the tunnel
  with ESX1 is not removed after migrating the VM from ESX1 to ESX2.

  Attaching the server and agent logs.

  stack@OSC-NS1:/opt/stack/logs/screen$ sudo ovs-vsctl show
  662d03fb-c784-498e-927c-410aa6788455
  Bridge br-ex
  Port phy-br-ex
  Interface phy-br-ex
  type: patch
  options: {peer=int-br-ex}
  Port "eth2"
  Interface "eth2"
  Port br-ex
  Interface br-ex
  type: internal
  Bridge br-tun
  Port patch-int
  Interface patch-int
  type: patch
  options: {peer=patch-tun}
  Port "vxlan-6447007a"
  Interface "vxlan-6447007a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.122"} 
This should have been deleted after MIGRATION.
  Port "vxlan-64470082"
  Interface "vxlan-64470082"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.130"}
  Port br-tun
  Interface br-tun
  type: internal
  Port "vxlan-6447002a"
  Interface "vxlan-6447002a"
  type: vxlan
  options: {df_default="true", in_key=flow, 
local_ip="100.71.0.41", out_key=flow, remote_ip="100.71.0.42"}
  Bridge "br-eth1"
  Port "br-eth1"
  Interface "br-eth1"
  type: internal
  Port "phy-br-eth1"
  Interface "phy-br-eth1"
  type: patch
  options: {peer="int-br-eth1"}
  Bridge br-int
  fail_mode: secure
  Port patch-tun
  Interface patch-tun
  type: patch
  options: {peer=patch-int}
  Port "int-br-eth1"
  Interface "int-br-eth1"
  type: patch
  options: {peer="phy-br-eth1"}
  Port br-int
  Interface br-int
  type: internal
  Port int-br-ex
  Interface int-br-ex
  type: patch
  options: {peer=phy-br-ex}

[Yahoo-eng-team] [Bug 1483601] Re: l2 population failed when bulk live migrate VMs

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/215467
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c5fa665de3173f3ad82cc3e7624b5968bc52c08d
Submitter: Jenkins
Branch:master

commit c5fa665de3173f3ad82cc3e7624b5968bc52c08d
Author: shihanzhang 
Date:   Fri Aug 21 09:51:59 2015 +0800

ML2: update port's status to DOWN if its binding info has changed

This fixes the problem that when two or more ports in a network
are migrated to a host that did not previously have any ports in
the same network, the new host is sometimes not told about the
IP/MAC addresses of all the other ports in the network. In other
words, initial L2population does not work, for the new host.

This is because the l2pop mechanism driver only sends catch-up
information to the host when it thinks it is dealing with the first
active port on that host; and currently, when multiple ports are
migrated to a new host, there is always more than one active port so
the condition above is never triggered.

The fix is for the ML2 plugin to set a port's status to DOWN when
its binding info changes.

This patch also fixes the bug when nova thinks it should not wait
for any events from neutron because all ports are already active.

Closes-bug: #1483601
Closes-bug: #1443421
Closes-Bug: #1522824
Related-Bug: #1450604

Change-Id: I342ad910360b21085316c25df2154854fd1001b2


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483601

Title:
  l2 population failed when bulk live migrate VMs

Status in neutron:
  Fix Released

Bug description:
  When we bulk live-migrate VMs, L2 population may (though not always) fail 
at the destination compute nodes: when nova migrates a VM, at the destination 
compute node it just updates the port's binding:host while the port's status 
stays active, whereas from neutron's perspective the port status should 
progress active -> build -> active.
  In the case below, L2 population will fail:
  1. nova successfully live-migrates VM A and VM B from compute A to compute B.
  2. port A and port B status are active, binding:host is compute B.
  3. the l2 agent scans these two ports, then handles them one by one.
  4. neutron-server first handles port A; its status becomes build (remember 
port B's status is still active), and it runs the check below in the l2 
population code, which fails:

  def _update_port_up(self, context):
  ..
      if agent_active_ports == 1 or (self.get_agent_uptime(agent) <
              cfg.CONF.l2pop.agent_boot_time):
          # First port activated on current agent in this network,
          # we have to provide it with the whole list of fdb entries
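  The failing condition quoted above can be modeled with a toy check (a
  deliberate simplification; the real driver also considers agent uptime,
  as the snippet shows):

```python
# Toy model of the l2pop "first active port" condition: with two
# simultaneously-migrated ports already ACTIVE on the new host,
# agent_active_ports is never 1, so the full fdb catch-up is skipped.
def sends_full_fdb(agent_active_ports):
    return agent_active_ports == 1

print(sends_full_fdb(2))  # False: bulk migration, catch-up skipped
print(sends_full_fdb(1))  # True: a single first active port triggers catch-up
```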

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483601/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522824] Re: DVR multinode job: test_shelve_instance failure due to SSHTimeout

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/215467
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c5fa665de3173f3ad82cc3e7624b5968bc52c08d
Submitter: Jenkins
Branch:master

commit c5fa665de3173f3ad82cc3e7624b5968bc52c08d
Author: shihanzhang 
Date:   Fri Aug 21 09:51:59 2015 +0800

ML2: update port's status to DOWN if its binding info has changed

This fixes the problem that when two or more ports in a network
are migrated to a host that did not previously have any ports in
the same network, the new host is sometimes not told about the
IP/MAC addresses of all the other ports in the network. In other
words, initial L2population does not work, for the new host.

This is because the l2pop mechanism driver only sends catch-up
information to the host when it thinks it is dealing with the first
active port on that host; and currently, when multiple ports are
migrated to a new host, there is always more than one active port so
the condition above is never triggered.

The fix is for the ML2 plugin to set a port's status to DOWN when
its binding info changes.

This patch also fixes the bug when nova thinks it should not wait
for any events from neutron because all ports are already active.

Closes-bug: #1483601
Closes-bug: #1443421
Closes-Bug: #1522824
Related-Bug: #1450604

Change-Id: I342ad910360b21085316c25df2154854fd1001b2


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1522824

Title:
  DVR multinode job: test_shelve_instance failure due to SSHTimeout

Status in neutron:
  Fix Released

Bug description:
  gate-tempest-dsvm-neutron-multinode-full fails from time to time due
  to
  tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance
  failure:

  Captured traceback:
  2015-12-04 01:17:12.569 | ~~~
  2015-12-04 01:17:12.569 | Traceback (most recent call last):
  2015-12-04 01:17:12.570 |   File "tempest/test.py", line 127, in wrapper
  2015-12-04 01:17:12.570 | return f(self, *func_args, **func_kwargs)
  2015-12-04 01:17:12.570 |   File 
"tempest/scenario/test_shelve_instance.py", line 101, in test_shelve_instance
  2015-12-04 01:17:12.570 | 
self._create_server_then_shelve_and_unshelve()
  2015-12-04 01:17:12.570 |   File 
"tempest/scenario/test_shelve_instance.py", line 93, in 
_create_server_then_shelve_and_unshelve
  2015-12-04 01:17:12.570 | private_key=keypair['private_key'])
  2015-12-04 01:17:12.570 |   File "tempest/scenario/manager.py", line 645, 
in get_timestamp
  2015-12-04 01:17:12.571 | private_key=private_key)
  2015-12-04 01:17:12.571 |   File "tempest/scenario/manager.py", line 383, 
in get_remote_client
  2015-12-04 01:17:12.571 | linux_client.validate_authentication()
  2015-12-04 01:17:12.571 |   File 
"tempest/common/utils/linux/remote_client.py", line 63, in 
validate_authentication
  2015-12-04 01:17:12.571 | self.ssh_client.test_connection_auth()
  2015-12-04 01:17:12.571 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py",
 line 167, in test_connection_auth
  2015-12-04 01:17:12.571 | connection = self._get_ssh_connection()
  2015-12-04 01:17:12.572 |   File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/common/ssh.py",
 line 87, in _get_ssh_connection
  2015-12-04 01:17:12.572 | password=self.password)
  2015-12-04 01:17:12.572 | tempest_lib.exceptions.SSHTimeout: Connection 
to the 172.24.5.209 via SSH timed out.
  2015-12-04 01:17:12.572 | User: cirros, Password: None

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1522824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542333] [NEW] class 'keystoneclient.exceptions.Unauthorized

2016-02-05 Thread jost
Public bug reported:

Hi,

I'm unable to launch instances via the CLI or Horizon; I get the same errors 
in nova-api.log either way. I'm running OpenStack Liberty on CentOS Linux 
release 7.2.1511 (Core) on 5 nodes (controller, compute1, block1, object1, 
object2). Everything went well except that the router didn't get created, but 
then I just downgraded iproute on the controller:
 https://bugs.launchpad.net/neutron/+bug/1528977

I have used this guide: http://docs.openstack.org/liberty/install-guide-rdo/

Now I can't create instance. I always get that error:

"class 'keystoneclient.exceptions.Unauthorized"


 [root@controller ~]# nova boot --flavor m1.tiny --image cirros --nic 
net-id=652f6452-7a66-476a-809b-df34ea1f78d2 \
>   --security-group default --key-name mykey public-instance
ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-447714f1-8daf-48db-b853-a7f0c634f468)
[root@controller ~]# 

[root@controller ~]# rpm -qa | grep nova
python-nova-12.0.1-1.el7.noarch
openstack-nova-novncproxy-12.0.1-1.el7.noarch
python-novaclient-2.30.1-1.el7.noarch
openstack-nova-conductor-12.0.1-1.el7.noarch
openstack-nova-console-12.0.1-1.el7.noarch
openstack-nova-cert-12.0.1-1.el7.noarch
openstack-nova-scheduler-12.0.1-1.el7.noarch
openstack-nova-common-12.0.1-1.el7.noarch
openstack-nova-api-12.0.1-1.el7.noarch
[root@controller ~]# 


There is more detail in the attachment. 

Expected result:

Instance is launched without error.

Actual result:

Instance does not launch and the reported errors are observed in the
nova-api.log file.

thanks

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "bug-nova-boot-instance.txt"
   
https://bugs.launchpad.net/bugs/1542333/+attachment/4564538/+files/bug-nova-boot-instance.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542333

Title:
  class 'keystoneclient.exceptions.Unauthorized

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,

  I'm unable to launch instances via the CLI or Horizon; I get the same errors 
in nova-api.log either way. I'm running OpenStack Liberty on CentOS Linux 
release 7.2.1511 (Core) on 5 nodes (controller, compute1, block1, object1, 
object2). Everything went well except that the router didn't get created, but 
then I just downgraded iproute on the controller:
    https://bugs.launchpad.net/neutron/+bug/1528977

  I have used this guide: http://docs.openstack.org/liberty/install-guide-rdo/

  Now I can't create instance. I always get that error:

  "class 'keystoneclient.exceptions.Unauthorized"

  
   [root@controller ~]# nova boot --flavor m1.tiny --image cirros --nic 
net-id=652f6452-7a66-476a-809b-df34ea1f78d2 \
  >   --security-group default --key-name mykey public-instance
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-447714f1-8daf-48db-b853-a7f0c634f468)
  [root@controller ~]# 

  [root@controller ~]# rpm -qa | grep nova
  python-nova-12.0.1-1.el7.noarch
  openstack-nova-novncproxy-12.0.1-1.el7.noarch
  python-novaclient-2.30.1-1.el7.noarch
  openstack-nova-conductor-12.0.1-1.el7.noarch
  openstack-nova-console-12.0.1-1.el7.noarch
  openstack-nova-cert-12.0.1-1.el7.noarch
  openstack-nova-scheduler-12.0.1-1.el7.noarch
  openstack-nova-common-12.0.1-1.el7.noarch
  openstack-nova-api-12.0.1-1.el7.noarch
  [root@controller ~]# 

  
  There is more detail in the attachment. 

  Expected result:

  Instance is launched without error.

  Actual result:

  Instance does not launch and the reported errors are observed in the
  nova-api.log file.

  thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540960] Re: DHCP: when upgrading from icehouse DHCP breaks

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275267
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=13a4268062b17fbb6451e53632297c8e06c45a51
Submitter: Jenkins
Branch:master

commit 13a4268062b17fbb6451e53632297c8e06c45a51
Author: Gary Kotton 
Date:   Tue Feb 2 07:26:48 2016 -0800

DHCP: fix regression with DNS nameservers

The commit 3686d035ded94eadab6a3268e4b0f0cca11a22f8 caused a
regression for setups that support a number of DNS servers that
are injected via the DHCP options 5 and 6.

If dnsmasq has a configured dns-server then it will ignore
the ones that are injected by the admin. That is what the commit
above did. This causes a number of problems. The main one is that
it requires the DHCP agent to have connectivity to the DNS server.

The original code was added in commit
2afff147c45f97a0809db40ad072332fb37ccd8d

Change-Id: Iae3e994533102a2b076cc2dc205cdd5caaee1206
Closes-bug: #1540960


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540960

Title:
  DHCP: when upgrading from icehouse DHCP breaks

Status in neutron:
  Fix Released

Bug description:
  When upgrading to J, K or any other version, DNS support breaks.
  That is, the DHCP agent has an option for setting DNS servers via 
dnsmasq_config_file=/etc/neutron/myconfig.ini,
  where myconfig will have entries such as the following:
  dhcp-option=5,2.4.5.6,2.4.5.7
  dhcp-option=6,2.4.5.7,2.4.5.6

  In the guest /etc/resolv.conf we see the DHCP address and not the
  configured IPs.

  The reason for this is
  
https://github.com/openstack/neutron/commit/3686d035ded94eadab6a3268e4b0f0cca11a22f8

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533663] Re: Volume migration works only via CLI

2016-02-05 Thread Itxaka Serrano
Seems that the cinder client uses some black magic and the volume is
automagically passed down somehow somewhere ¯\_(ツ)_/¯

** No longer affects: python-cinderclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1533663

Title:
  Volume migration works only via CLI

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Running OpenStack Stable/Liberty on CentOS 7.1.

  The volume migration between Cinder hosts does not work when invoked via 
Horizon (Pop-up shows: "Error: Failed to migrate volume.").
  The Volume migration invoked via command line however works fine.

  In /var/log/httpd/error_log the following messages can be found on
  every attempt to migrate a volume via Dashboard:

  [:error] [pid 4111] Recoverable error: migrate_volume() takes exactly
  5 arguments (4 given)

  The error presumably comes from /usr/lib/python2.7/site-
  packages/cinderclient/v2/volumes.py at line 578.

  The 'migrate_volume' method there expects all arguments as positional,
  and the 'lock_volume' argument is not provided when the migration is
  invoked from Horizon.

  Making 'lock_volume' a kwarg defaulting to False fixes the issue and does 
not break the original CLI behavior: when volume migration is invoked via the 
CLI, lock_volume will be False unless the respective flag was explicitly 
given.
  With this change, volume migration invoked via Horizon now works, but the 
volume cannot be 'locked' during migration. That functionality does not yet 
seem fully integrated in Horizon: there is no checkbox in the frontend yet, 
and I could not find blueprints proposing those changes.
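  The kwarg change described above can be sketched like this (a minimal
  stand-in class, not the actual cinderclient code):

```python
class VolumeManagerSketch:
    # Hedged sketch: 'lock_volume' defaults to False so a call that omits
    # it (as Horizon does) no longer raises
    # "migrate_volume() takes exactly 5 arguments (4 given)".
    def migrate_volume(self, volume, host, force_host_copy, lock_volume=False):
        return {"volume": volume, "host": host,
                "force_host_copy": force_host_copy,
                "lock_volume": lock_volume}

mgr = VolumeManagerSketch()
# Horizon-style call without lock_volume now succeeds:
result = mgr.migrate_volume("vol-1", "cinder-2", False)
print(result["lock_volume"])  # False
```

  A CLI caller that explicitly passes lock_volume=True keeps the original
  behavior, which is why the default does not break the command line.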

  So, attached patch is a simple workaround rather than a solution, it
  allows run volume migrations via Horizon, however with no volume
  locking.

  
  My setup includes 2 Cinder Storage nodes (LVM/iSCSI) , where one is also a 
controller (i.e. with cinder-api and cinder-scheduler).

  The versions are as follows:

  openstack-dashboard  1:8.0.0-1.el7
  openstack-dashboard-theme  1:8.0.0-1.el7
  openstack-cinder  1:7.0.1-1.el7
  python-cinder  1:7.0.1-1.el7
  python-cinderclient  1.4.0-1.el7

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1533663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542303] [NEW] When using realtime guests we should to avoid using QGA

2016-02-05 Thread sahid
Public bug reported:

When running realtime guests we should aim for very minimal hardware
support in the guest, and so disable support for the QEMU guest agent.

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542303

Title:
  When using realtime guests we should to avoid using QGA

Status in OpenStack Compute (nova):
  New

Bug description:
  When running realtime guests we should aim for very minimal hardware
  support in the guest, and so disable support for the QEMU guest agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533663] Re: Volume migration works only via CLI

2016-02-05 Thread Itxaka Serrano
Seems that the issue is a bit more complicated.

The v2 client migrate_volume method[0] calls the
manager.migrate_volume[1] with only 3 parameters: host, force_host_copy,
lock_volume while the manager.migrate_volume signature needs 4: volume,
host, force_host_copy, lock_volume.

Looks like the volume ID is not being passed down in the method.


[0]: 
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/volumes.py#L143-L145
[1]: 
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/volumes.py#L505-L517

** Also affects: python-cinderclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1533663

Title:
  Volume migration works only via CLI

Status in OpenStack Dashboard (Horizon):
  New
Status in python-cinderclient:
  New

Bug description:
  Running OpenStack Stable/Liberty on CentOS 7.1.

  The volume migration between Cinder hosts does not work when invoked via 
Horizon (Pop-up shows: "Error: Failed to migrate volume.").
  The Volume migration invoked via command line however works fine.

  In /var/log/httpd/error_log the following messages can be found on
  every attempt to migrate a volume via Dashboard:

  [:error] [pid 4111] Recoverable error: migrate_volume() takes exactly
  5 arguments (4 given)

  The error presumably comes from /usr/lib/python2.7/site-
  packages/cinderclient/v2/volumes.py at line 578.

  The 'migrate_volume' method there expects all arguments as positional,
  and the 'lock_volume' argument is not provided when the migration is
  invoked from Horizon.

  Making 'lock_volume' a kwarg defaulting to False fixes the issue and does 
not break the original CLI behavior: when volume migration is invoked via the 
CLI, lock_volume will be False unless the respective flag was explicitly 
given.
  With this change, volume migration invoked via Horizon now works, but the 
volume cannot be 'locked' during migration. That functionality does not yet 
seem fully integrated in Horizon: there is no checkbox in the frontend yet, 
and I could not find blueprints proposing those changes.

  So, attached patch is a simple workaround rather than a solution, it
  allows run volume migrations via Horizon, however with no volume
  locking.

  
  My setup includes 2 Cinder Storage nodes (LVM/iSCSI) , where one is also a 
controller (i.e. with cinder-api and cinder-scheduler).

  The versions are as follows:

  openstack-dashboard  1:8.0.0-1.el7
  openstack-dashboard-theme  1:8.0.0-1.el7
  openstack-cinder  1:7.0.1-1.el7
  python-cinder  1:7.0.1-1.el7
  python-cinderclient  1.4.0-1.el7

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1533663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542302] [NEW] We should to initialize request_spec to handle expected exception

2016-02-05 Thread sahid
Public bug reported:

In nova/conductor/manager.py, in the build_instances method,
populate_retry only does arithmetic, without interacting with third
parties, and can throw an *expected* exception, whereas
'build_request_spec()' cannot. So we should initialize request_spec as
soon as possible, since it is used in the case where that *expected*
exception is raised.
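The pattern being asked for can be sketched generically (all names here
are illustrative stand-ins, not the actual nova conductor code):

```python
class MaxRetriesExceeded(Exception):
    """Stand-in for the *expected* exception populate_retry can raise."""

def populate_retry(num_attempts, limit=3):
    # purely arithmetic check; may raise the expected exception
    if num_attempts >= limit:
        raise MaxRetriesExceeded()

def build_request_spec(num_attempts):
    return {"num_attempts": num_attempts}

def build_instances(num_attempts):
    request_spec = None  # initialize early so the handler can use it safely
    try:
        populate_retry(num_attempts)
        request_spec = build_request_spec(num_attempts)
    except MaxRetriesExceeded:
        # request_spec is defined even if build_request_spec never ran
        return ("failed", request_spec)
    return ("ok", request_spec)
```

Without the early initialization, the except branch would raise a
NameError whenever populate_retry failed before request_spec was built.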

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: New


** Tags: conductor

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542302

Title:
  We should to initialize request_spec to handle expected exception

Status in OpenStack Compute (nova):
  New

Bug description:
  In nova/conductor/manager.py, in the build_instances method,
  populate_retry only does arithmetic, without interacting with third
  parties, and can throw an *expected* exception, whereas
  'build_request_spec()' cannot. So we should initialize request_spec as
  soon as possible, since it is used in the case where that *expected*
  exception is raised.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542282] [NEW] Multiple service plugins , each with its own service_provider configuration fails to bring up Neutron Service

2016-02-05 Thread Vivekanandan Narasimhan
Public bug reported:

When we use multiple service plugins that each have their own
service_providers configured, the neutron-server service fails to come
up.

For example, when we try to load the LBaaS + BGPVPN + L2GatewayPlugin
services together into the neutron server, only the last
service_provider entry is retained by
neutron/services/provider_configuration, which causes the service
plugins with no matching provider to exit on load, thereby bringing
down the neutron service.

A workaround exists: 
put all the service_provider entries into a single service_providers section 
in neutron.conf, and then all the services are able to start up.
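The workaround can be illustrated with a merged section like the
following (the LBaaS driver path matches the example elsewhere in this
digest; the second entry is a placeholder to fill in with the driver
path for whatever other service is deployed):

```ini
# neutron.conf -- illustrative sketch of the workaround: every provider
# entry lives in one [service_providers] section
[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider = BGPVPN:<provider-name>:<driver.class.path>:default
```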

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542282

Title:
  Multiple service plugins , each with its own service_provider
  configuration fails to bring up Neutron Service

Status in neutron:
  New

Bug description:
  When we use multiple service plugins that each have their own
  service_providers configured, the neutron-server service fails to come
  up.

  For example, when we try to load the LBaaS + BGPVPN + L2GatewayPlugin
  services together into the neutron server, only the last
  service_provider entry is retained by
  neutron/services/provider_configuration, which causes the service
  plugins with no matching provider to exit on load, thereby bringing
  down the neutron service.

  A workaround exists: 
  put all the service_provider entries into a single service_providers 
section in neutron.conf, and then all the services are able to start up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542282/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542277] [NEW] Pecan: startup fails to associate plugins with some resources

2016-02-05 Thread Salvatore Orlando
Public bug reported:

In some cases, legit resources for supported extensions end up not
having a plugin associated.

This happens because the startup process explicitly checks the
supported_extension_aliases attribute, whereas it should instead do a
deeper check leveraging the helper method
get_supported_extension_aliases provided by the extension manager.

for instance the rbac extension will not work until this is fixed.
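The distinction can be sketched as follows (a hedged toy model; the
names are assumptions, not the actual neutron extension manager API):

```python
# Toy model of the check described above: the helper unions the plugin's
# own declared aliases with aliases contributed by loaded extensions, so
# inspecting only the bare attribute misses resources such as rbac.
class FakePlugin:
    supported_extension_aliases = ["router"]

def get_supported_extension_aliases(plugin, contributed_aliases):
    # stand-in for the extension manager helper: include aliases the
    # plugin did not declare itself
    return set(plugin.supported_extension_aliases) | set(contributed_aliases)

plugin = FakePlugin()
print("rbac" in plugin.supported_extension_aliases)                   # False
print("rbac" in get_supported_extension_aliases(plugin, ["rbac"]))    # True
```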

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: pecan

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542277

Title:
  Pecan: startup fails to associate plugins with some resources

Status in neutron:
  New

Bug description:
  In some cases, legit resources for supported extensions end up not
  having a plugin associated.

  This happens because the startup process explicitly checks the
  supported_extension_aliases attribute, whereas it should instead do a
  deeper check leveraging the helper method
  get_supported_extension_aliases provided by the extension manager.

  for instance the rbac extension will not work until this is fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542277/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542232] [NEW] nova nova.conf.sample

2016-02-05 Thread zwei
Public bug reported:

The nova-generated nova.conf.sample configuration file is incomplete,
e.g. the option #iscsi_use_multipath is not found in the [libvirt] section.

** Affects: nova
 Importance: Undecided
 Assignee: zwei (suifeng20)
 Status: In Progress

** Project changed: glance => nova

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => zwei (suifeng20)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1542232

Title:
  nova nova.conf.sample

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The nova-generated nova.conf.sample configuration file is incomplete,
  e.g. the option #iscsi_use_multipath is not found in the [libvirt] section.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1542232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542220] [NEW] Vyatta fwaas agent parses extra nova opts

2016-02-05 Thread bharath
Public bug reported:

The nova auth URL needs to be parsed from the config that is passed in.
Parsing of the extra options was handled in the Vyatta L3 agent, so the 
firewall agent needs to import extra_config from vyatta to get the additional 
options; in this case the nova extra options.

** Affects: neutron
 Importance: Undecided
 Assignee: bharath (bharath-7)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => bharath (bharath-7)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542220

Title:
  Vyatta fwaas agent parses extra nova opts

Status in neutron:
  In Progress

Bug description:
  The nova auth URL needs to be parsed from the config that is passed in.
  Parsing of the extra options was handled in the Vyatta L3 agent, so the 
firewall agent needs to import extra_config from vyatta to get the additional 
options; in this case the nova extra options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542220/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451669] Re: [network] Neutron advance services should support service_plugins and service_provider auto configuration

2016-02-05 Thread bharath
** Project changed: openstack-chef => neutron

** Changed in: neutron
Milestone: liberity-rc1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1451669

Title:
  [network] Neutron advance services should support service_plugins and
  service_provider auto configuration

Status in neutron:
  In Progress

Bug description:
  When we enable the Neutron advanced services (LBaaS, VPNaaS, FWaaS),
  we need to change the enablement section to true or 'True'. Besides this,
  we also need to configure the service_provider and service_plugin, or there
  will be problems when they are missing.

  Hard-coding the service_provider and service_plugin for the advanced
  services in the attributes causes problems too.

  The Neutron advanced services should configure the service_provider and
  service_plugin automatically when the enablement section is true or
  'True'.

  example:
  =
  when enabling LBaaS, we need to make the changes below in our environment
  (VPNaaS and FWaaS are the same):

  1, we should configure this: override enabled to True
  "lbaas": {
"enabled": "True",
  },
  2, Then add "neutron_vpnaas.services.vpn.plugin.VPNDriverPlugin" in 
service_plugins
  "service_plugins": [
"neutron.services.l3_router.l3_router_plugin.L3RouterPlugin",
"neutron_vpnaas.services.vpn.plugin.VPNDriverPlugin"
  ],
  3, Last add the service provider 
  "service_provider": [ 
"LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
  ],

  disadvantage: customers must always remember to configure the
  service_plugins and service_provider when enabling an advanced service,
  and to remove the configuration when disabling it; otherwise there will
  be errors in Neutron.

  To resolve the inconvenience, we make these changes:
  1, add default values for the plugin and provider in the attributes, so we
  can override them when they need to change.
  2, change the code of cookbook-openstack-network/recipe/default and
  cookbook-openstack-network/templates/default/neutron.conf.erb to override
  the service_plugins and service_provider automatically.

  With the above change, users will not have to configure the
  service_provider and service_plugins when they want to enable an advanced
  service; they just change the enabled section, and our code handles the
  override for them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1451669/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527089] Re: [ipam] Port ID is not present in port dict that is passed to AddressRequestFactory

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/259697
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b2d656aff46ac5a2aa9700fc9ff682ef82ecf8e9
Submitter: Jenkins
Branch:master

commit b2d656aff46ac5a2aa9700fc9ff682ef82ecf8e9
Author: Pavel Bondar 
Date:   Sat Dec 19 16:08:36 2015 +0300

Add generated port id to port dict

Port id generated in create_port was not populated back to port_dict.
As a result port id information was not available on
AddressRequestFactory level (used by 3rd party ipam providers).

This fix populates port_id into copied port dict (to prevent affecting
incoming port dict).
Added UT to make sure that port id is added to port dict which is passed
to AddressFactory.

Change-Id: I62e4d9e887488b9ceeafb90b044f95a22c1765b0
Closes-Bug: #1527089


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527089

Title:
  [ipam] Port ID is not present in port dict that is passed to
  AddressRequestFactory

Status in networking-infoblox:
  Triaged
Status in neutron:
  Fix Released

Bug description:
  The port dict is passed into AddressRequestFactory [1] to help make IPAM
  decisions based on various port info. The reference IPAM driver does not
  use this information, but it is helpful for creating custom
  AddressRequests in third-party IPAM drivers.

  We observed that the port dict passed into AddressRequestFactory does not
  contain the port id. Here is an example of the port_dict captured in
  AddressRequestFactory:

  {
  'status': 'DOWN',
  'dns_name': '', 
  'binding:host_id': u'vagrant-ubuntu-trusty-64.localdomain', 
  u'name': u'', 
  'allowed_address_pairs': , 
  u'admin_state_up': True, 
  u'network_id': u'9fc7128d-b4bd-4544-829c-96d19105eb5b', 
  u'tenant_id': u'bf0806763e32436bbdb8fd9b6ebfac93', 
  'extra_dhcp_opts': None, 
  'binding:vnic_type': 'normal', 
  'device_owner': 'network:dhcp', 
  'mac_address': 'fa:16:3e:63:de:76', 
  'binding:profile': , 
  'port_security_enabled': , 
  u'fixed_ips': [{u'subnet_id': u'7ca565f8-9cc4-4330-81c9-d3c671beb7b0'}], 
  'security_groups': , 
  u'device_id': 
u'dhcpd439385c-2745-50dd-91dd-8a252bf35915-9fc7128d-b4bd-4544-829c-96d19105eb5b'}

  Neither 'id' nor 'port_id' is present in this dict.
  This happens because of the way create_port() handles id generation [2].
  The port id is generated as a uuid (when it is not set in the incoming
  request, which is the typical case) but is not populated back into the
  original port_dict, and that original dict (without the port id) is passed
  down to the IPAM methods (allocate_ips_for_port_and_store).

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/ipam/requests.py#L253
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L1164
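The fix described above can be sketched like this (the helper name is illustrative): generate the port id, then populate it into a copy of the incoming port dict so the caller's dict stays untouched before handing it to the address-request factory.

```python
import copy
import uuid

def port_dict_for_ipam(port):
    """Return a copy of the port dict with a generated id populated."""
    port_copy = copy.deepcopy(port)
    if not port_copy.get('id'):
        port_copy['id'] = uuid.uuid4().hex
    return port_copy

# The IPAM layer sees the id; the original request dict is unaffected.
incoming = {'network_id': 'net-1', 'fixed_ips': [{'subnet_id': 'sub-1'}]}
for_ipam = port_dict_for_ipam(incoming)
```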

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-infoblox/+bug/1527089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451669] [NEW] [network] Neutron advance services should support service_plugins and service_provider auto configuration

2016-02-05 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

When we enable the Neutron advanced services (LBaaS, VPNaaS, FWaaS),
we need to change the enablement section to true or 'True'. Besides this, we
also need to configure the service_provider and service_plugin, or there will
be problems when they are missing.

Hard-coding the service_provider and service_plugin for the advanced
services in the attributes causes problems too.

The Neutron advanced services should configure the service_provider and
service_plugin automatically when the enablement section is true or 'True'.

example:
=
when enabling LBaaS, we need to make the changes below in our environment
(VPNaaS and FWaaS are the same):

1, we should configure this: override enabled to True
"lbaas": {
  "enabled": "True",
},
2, Then add "neutron_vpnaas.services.vpn.plugin.VPNDriverPlugin" in 
service_plugins
"service_plugins": [
  "neutron.services.l3_router.l3_router_plugin.L3RouterPlugin",
  "neutron_vpnaas.services.vpn.plugin.VPNDriverPlugin"
],
3, Last add the service provider 
"service_provider": [ 
"LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"
],

disadvantage: customers must always remember to configure the
service_plugins and service_provider when enabling an advanced service, and
to remove the configuration when disabling it; otherwise there will be
errors in Neutron.

To resolve the inconvenience, we make these changes:
1, add default values for the plugin and provider in the attributes, so we
can override them when they need to change.
2, change the code of cookbook-openstack-network/recipe/default and
cookbook-openstack-network/templates/default/neutron.conf.erb to override the
service_plugins and service_provider automatically.

With the above change, users will not have to configure the service_provider
and service_plugins when they want to enable an advanced service; they just
change the enabled section, and our code handles the override for them.

** Affects: neutron
 Importance: Medium
 Assignee: Song Li (lisong-cruise)
 Status: In Progress


** Tags: network
-- 
[network] Neutron advance services should support service_plugins and 
service_provider auto configuration
https://bugs.launchpad.net/bugs/1451669
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523638] Re: tempest fails with No IPv4 addresses found

2016-02-05 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/276519
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=96c67e22f9cba2ea0e7fb3ba2a63e4905e48c1a4
Submitter: Jenkins
Branch:master

commit 96c67e22f9cba2ea0e7fb3ba2a63e4905e48c1a4
Author: Kevin Benton 
Date:   Thu Feb 4 13:49:42 2016 -0800

Only ensure admin state on ports that exist

The linux bridge agent was calling ensure_port_admin state
unconditionally on ports in treat_devices_added_or_updated.
This would cause it to throw an error on interfaces that
didn't exist so it would restart the entire processing loop.

If another port was being updated in the same loop before this
one, that port would experience a port status life-cycle of
DOWN->BUILD->ACTIVE->BUILD->ACTIVE
   ^ <--- Exception in unrelated port causes cycle
  to start over again.

This causes the bug below because the first active transition will
cause Nova to boot the VM. At this point tempest tests expect the
ports that belong to the VM to be in the ACTIVE state so it filters
Neutron port list calls with "status=ACTIVE". Therefore tempest would
not get any ports back and assume there was some kind of error with
the port and bail.

This patch just makes sure the admin state call is skipped if the port
doesn't exist and it includes a basic unit test to prevent a regression.

Closes-Bug: #1523638
Change-Id: I5330c6111cbb20bf45aec9ade7e30d34e8dd16ca
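The guard the patch describes can be sketched as follows (the helper callables stand in for the agent's real ip-link checks): skip the admin-state call for interfaces that no longer exist instead of raising and restarting the whole processing loop.

```python
def treat_devices(devices, device_exists, ensure_admin_state):
    """Process devices, silently skipping ones that have vanished."""
    processed = []
    for dev in devices:
        if not device_exists(dev):
            continue  # a vanished port no longer aborts the loop
        ensure_admin_state(dev)
        processed.append(dev)
    return processed

# One missing interface no longer prevents the others from going ACTIVE:
touched = []
ok = treat_devices(
    ['tap1', 'tap-gone', 'tap2'],
    device_exists=lambda d: d != 'tap-gone',
    ensure_admin_state=touched.append,
)
```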


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523638

Title:
  tempest fails with  No IPv4 addresses found

Status in neutron:
  Fix Released
Status in tempest:
  In Progress

Bug description:
  http://logs.openstack.org/42/250542/7/check/gate-tempest-dsvm-neutron-
  linuxbridge/3a00f8b/logs/testr_results.html.gz

  Traceback (most recent call last):
File "tempest/test.py", line 113, in wrapper
  return f(self, *func_args, **func_kwargs)
File "tempest/scenario/test_network_basic_ops.py", line 550, in 
test_subnet_details
  self._setup_network_and_servers(dns_nameservers=[initial_dns_server])
File "tempest/scenario/test_network_basic_ops.py", line 123, in 
_setup_network_and_servers
  floating_ip = self.create_floating_ip(server)
File "tempest/scenario/manager.py", line 842, in create_floating_ip
  port_id, ip4 = self._get_server_port_id_and_ip4(thing)
File "tempest/scenario/manager.py", line 821, in _get_server_port_id_and_ip4
  "No IPv4 addresses found in: %s" % ports)
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/unittest2/case.py",
 line 845, in assertNotEqual
  raise self.failureException(msg)
  AssertionError: 0 == 0 : No IPv4 addresses found in: []

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542209] [NEW] stable/kilo is broken due to ContextualVersionConflict

2016-02-05 Thread Ken'ichi Ohmichi
*** This bug is a duplicate of bug 1541879 ***
https://bugs.launchpad.net/bugs/1541879

Public bug reported:

The gate of stable/kilo is broken now like the following:

http://logs.openstack.org/40/275540/2/check/gate-tempest-dsvm-full-
kilo/280bf6c/logs/devstacklog.txt.gz

2016-02-05 04:43:32.031 | + /usr/local/bin/keystone-manage db_sync
2016-02-05 04:43:32.397 | Traceback (most recent call last):
2016-02-05 04:43:32.397 |   File "/usr/local/bin/keystone-manage", line 4, in 

2016-02-05 04:43:32.397 | 
__import__('pkg_resources').require('keystone==2015.1.4.dev2')
2016-02-05 04:43:32.397 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3141, 
in 
2016-02-05 04:43:32.398 | @_call_aside
2016-02-05 04:43:32.398 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3127, 
in _call_aside
2016-02-05 04:43:32.398 | f(*args, **kwargs)
2016-02-05 04:43:32.398 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3154, 
in _initialize_master_working_set
2016-02-05 04:43:32.399 | working_set = WorkingSet._build_master()
2016-02-05 04:43:32.399 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 642, 
in _build_master
2016-02-05 04:43:32.399 | return cls._build_from_requirements(__requires__)
2016-02-05 04:43:32.399 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 655, 
in _build_from_requirements
2016-02-05 04:43:32.399 | dists = ws.resolve(reqs, Environment())
2016-02-05 04:43:32.399 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 833, 
in resolve
2016-02-05 04:43:32.399 | raise VersionConflict(dist, 
req).with_context(dependent_req)
2016-02-05 04:43:32.399 | pkg_resources.ContextualVersionConflict: (fixtures 
1.2.0 (/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('fixtures>=1.3.0'), set(['testtools']))

** Affects: keystone
 Importance: Undecided
 Status: New

** This bug has been marked a duplicate of bug 1541879
   Neutron devstack gate fails to install keystone due to fresh testtools

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1542209

Title:
  stable/kilo is broken due to ContextualVersionConflict

Status in OpenStack Identity (keystone):
  New

Bug description:
  The gate of stable/kilo is broken now like the following:

  http://logs.openstack.org/40/275540/2/check/gate-tempest-dsvm-full-
  kilo/280bf6c/logs/devstacklog.txt.gz

  2016-02-05 04:43:32.031 | + /usr/local/bin/keystone-manage db_sync
  2016-02-05 04:43:32.397 | Traceback (most recent call last):
  2016-02-05 04:43:32.397 |   File "/usr/local/bin/keystone-manage", line 4, in 

  2016-02-05 04:43:32.397 | 
__import__('pkg_resources').require('keystone==2015.1.4.dev2')
  2016-02-05 04:43:32.397 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3141, 
in 
  2016-02-05 04:43:32.398 | @_call_aside
  2016-02-05 04:43:32.398 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3127, 
in _call_aside
  2016-02-05 04:43:32.398 | f(*args, **kwargs)
  2016-02-05 04:43:32.398 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 3154, 
in _initialize_master_working_set
  2016-02-05 04:43:32.399 | working_set = WorkingSet._build_master()
  2016-02-05 04:43:32.399 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 642, 
in _build_master
  2016-02-05 04:43:32.399 | return 
cls._build_from_requirements(__requires__)
  2016-02-05 04:43:32.399 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 655, 
in _build_from_requirements
  2016-02-05 04:43:32.399 | dists = ws.resolve(reqs, Environment())
  2016-02-05 04:43:32.399 |   File 
"/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 833, 
in resolve
  2016-02-05 04:43:32.399 | raise VersionConflict(dist, 
req).with_context(dependent_req)
  2016-02-05 04:43:32.399 | pkg_resources.ContextualVersionConflict: (fixtures 
1.2.0 (/usr/local/lib/python2.7/dist-packages), 
Requirement.parse('fixtures>=1.3.0'), set(['testtools']))
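The conflict pkg_resources reports reduces to a version comparison: installed fixtures 1.2.0 does not satisfy testtools' requirement fixtures>=1.3.0 (the usual remedy on the affected node is upgrading fixtures, or capping testtools in the stable branch's requirements). A toy comparator, not pkg_resources itself, illustrates the check:

```python
def satisfies(installed, minimum):
    """Compare dotted versions numerically, as '>=' requirements do."""
    as_tuple = lambda v: tuple(int(part) for part in v.split('.'))
    return as_tuple(installed) >= as_tuple(minimum)

# Mirrors the traceback above: fixtures 1.2.0 vs. fixtures>=1.3.0.
conflict = not satisfies('1.2.0', '1.3.0')
```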

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542211] [NEW] Some Jenkins nodes fail create_instance integration test due to missing network

2016-02-05 Thread Timur Sufiev
Public bug reported:

Although devstack usually has a pre-created network, some Jenkins nodes
don't have one. We should keep this possibility in mind when creating an
instance in the integration test. As proof, below is an on-failure
screenshot that was taken several times in a row for commit
https://review.openstack.org/#/c/276123/

** Affects: horizon
 Importance: High
 Assignee: Timur Sufiev (tsufiev-x)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1542211

Title:
  Some Jenkins nodes fail create_instance integration test due to
  missing network

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Although devstack usually has a pre-created network, some Jenkins nodes
  don't have one. We should keep this possibility in mind when creating an
  instance in the integration test. As proof, below is an on-failure
  screenshot that was taken several times in a row for commit
  https://review.openstack.org/#/c/276123/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1542211/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp