[Yahoo-eng-team] [Bug 1687616] Re: KeyError 'options' while doing zero downtime upgrade from N to O

2017-10-30 Thread Lujin Luo
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1687616

Title:
  KeyError 'options' while doing zero downtime upgrade from N to O

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  I am trying to do a zero downtime upgrade from N release to O release
  following [1].

  I have 3 controller nodes running behind HAProxy. Every time I
  upgraded one of the keystone nodes and brought it back into the
  cluster, it would hit this error [2] whenever I tried to update a
  created user, for about 5 minutes. After I brought back all 3
  upgraded keystone nodes and waited 5 or more minutes, the error
  disappeared and everything worked fine.

  I am using the same conf file for both releases as shown in [3].

  [1]. https://docs.openstack.org/keystone/latest/admin/identity-upgrading.html
  [2]. http://paste.openstack.org/show/608557/
  [3]. http://paste.openstack.org/show/608558/
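
  The failure mode described above can be sketched as follows. This is a
  hypothetical illustration of upgrade-phase schema skew, not keystone's
  actual code; the function and field names are assumptions:

  ```python
  # Hypothetical illustration: Ocata-era code expects an 'options' key
  # that user records written by Newton nodes do not carry yet.
  def update_user_strict(user_ref, changes):
      # Raises KeyError('options') when the record predates the new schema.
      options = user_ref['options']
      options.update(changes.pop('options', {}))
      user_ref.update(changes)
      return user_ref

  def update_user_defensive(user_ref, changes):
      # Tolerates records written by the old release during the transition.
      options = user_ref.setdefault('options', {})
      options.update(changes.pop('options', {}))
      user_ref.update(changes)
      return user_ref

  old_record = {'id': 'abc', 'name': 'alice'}  # written by a Newton node
  try:
      update_user_strict(dict(old_record), {'enabled': True})
  except KeyError as e:
      print('strict path fails:', e)  # -> strict path fails: 'options'

  fixed = update_user_defensive(dict(old_record), {'enabled': True})
  print(fixed['options'])  # -> {}
  ```

  This matches the observed behaviour: the error only occurs while a mix
  of old and new nodes is serving requests, and clears once all nodes run
  the new release.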

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1687616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687616] Re: KeyError 'options' while doing zero downtime upgrade from N to O

2017-10-30 Thread Sam Morrison
I have just done the N -> O upgrade and have seen this error.

We have done the expand and migrate db syncs.

We have 3 newton keystones and when I added an ocata one I saw this
issue on the ocata one.

It's happening on a POST to /v3/auth/tokens and is affecting about 3%
of requests (we have around 10 requests per second on our keystone).

Happy to provide more information.

Currently I have rolled back, but I am thinking this might just be an
issue during the transition, so I could bite the bullet and do it
quickly.

** Changed in: keystone
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1687616

Title:
  KeyError 'options' while doing zero downtime upgrade from N to O

Status in OpenStack Identity (keystone):
  New

Bug description:
  I am trying to do a zero downtime upgrade from N release to O release
  following [1].

  I have 3 controller nodes running behind HAProxy. Every time I
  upgraded one of the keystone nodes and brought it back into the
  cluster, it would hit this error [2] whenever I tried to update a
  created user, for about 5 minutes. After I brought back all 3
  upgraded keystone nodes and waited 5 or more minutes, the error
  disappeared and everything worked fine.

  I am using the same conf file for both releases as shown in [3].

  [1]. https://docs.openstack.org/keystone/latest/admin/identity-upgrading.html
  [2]. http://paste.openstack.org/show/608557/
  [3]. http://paste.openstack.org/show/608558/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1687616/+subscriptions



[Yahoo-eng-team] [Bug 1711786] Re: ephemeral disk format not correct with os_type=linux metadata in image

2017-10-30 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1711786

Title:
  ephemeral disk format not correct with os_type=linux metadata in image

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Description
  ===========
  When booting a VM with an ephemeral disk, the default format is vfat,
  but when the image has the metadata os_type=linux, it should use ext4.
  In fact, it still uses the vfat format for the ephemeral disk.

  Steps to reproduce
  ==================
  1. Do not define virt_mkfs in nova.conf (this is the default).
  2. Boot a VM with an ephemeral disk first.
  3. Add the metadata os_type=linux to the image.
  4. Boot a VM with an ephemeral disk on the same compute node.

  Expected result
  ===============
  Ephemeral disk is formatted as ext4.

  Actual result
  =============
  Ephemeral disk is formatted as vfat.

  Environment
  ===========
  We tested on OpenStack Mitaka/Newton and master.
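
  The behaviour the reporter expects can be sketched as below. This is
  illustrative only; the mapping and helper are assumptions, not nova's
  actual implementation:

  ```python
  # Hedged sketch of the expected selection logic: the ephemeral
  # filesystem should follow the image's os_type metadata, falling back
  # to the configured default when nothing matches.
  DEFAULT_EPHEMERAL_FORMATS = {
      'linux': 'ext4',
      'windows': 'vfat',
  }

  def pick_ephemeral_format(image_metadata, fallback='vfat'):
      os_type = (image_metadata or {}).get('os_type')
      return DEFAULT_EPHEMERAL_FORMATS.get(os_type, fallback)

  print(pick_ephemeral_format({'os_type': 'linux'}))  # -> ext4 (expected)
  print(pick_ephemeral_format({}))                    # -> vfat (default)
  ```

  The bug is that, per steps 3-4 above, the image metadata appears to be
  ignored and the fallback is used even when os_type=linux is set.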

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1711786/+subscriptions



[Yahoo-eng-team] [Bug 1714192] Re: can not create instance when using vmware nova driver

2017-10-30 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714192

Title:
  can not create instance when using vmware nova driver

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hello,

  I am testing interoperation between vCenter 6.5U1 and OpenStack
  Newton. I got the Nova driver from here:
  https://github.com/openstack/nova/tree/stable/newton/nova/virt/vmwareapi
  I downloaded the vmdk file from here:
  http://partnerweb.vmware.com/programs/vmdkimage/cirros-0.3.2-i386-disk.vmdk

  I am getting the following error when I try to create an instance
  from the vmdk.

  
  2017-08-31 11:06:57.532 10238 DEBUG oslo_vmware.exceptions [-] Fault 
PlatformConfigFault not matched. get_fault_class 
/usr/local/lib/python2.7/dist-packages/oslo_vmware/exceptions.py:295
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall [-] in 
fixed duration looping call
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall Traceback 
(most recent call last):
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/common/loopingcall.py", 
line 75, in _inner
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall 
self.f(*self.args, **self.kw)
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall   File 
"/usr/local/lib/python2.7/dist-packages/oslo_vmware/api.py", line 452, in 
_poll_task
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall raise 
task_ex
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall 
VimFaultException: An error occurred during host configuration.
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall Faults: 
['PlatformConfigFault']
  2017-08-31 11:06:57.533 10238 ERROR oslo_vmware.common.loopingcall 
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager 
[req-883b383d-7af4-419e-b7df-70c39f50c178 admin alt_demo] [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] Instance failed to spawn
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] Traceback (most recent call last):
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2083, in _build_resources
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] yield resources
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1924, in _build_and_run_instance
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] block_device_info=block_device_info)
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 316, in spawn
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] admin_password, network_info, 
block_device_info)
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 739, in spawn
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] metadata)
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 306, in 
build_virtual_machine
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] config_spec, self._root_resource_pool)
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vm_util.py", line 1332, in create_vm
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] {'ostype': config_spec.guestId})
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0] self.force_reraise()
  2017-08-31 11:06:57.534 10238 ERROR nova.compute.manager [instance: 
7f7484b4-9438-4659-b2af-d08ce0d450f0]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 

[Yahoo-eng-team] [Bug 1728761] [NEW] [Pike support] Use volumev3 and cinderv3 in dashboard

2017-10-30 Thread Seb-Solon
Public bug reported:

Hi there,


Based on the release notes
https://docs.openstack.org/releasenotes/nova/pike.html [1],

"Nova is now configured to use the v3 version of the Cinder API. You
need to ensure that the v3 version of the Cinder API is available and
listed in the service catalog in order to use Nova with the default
configuration option."

By default Nova catalog_info parameter for cinder is
volumev3:cinderv3:publicURL [https://docs.openstack.org/ocata/config-
reference/compute/config-options.html]

From what I can see it is still not possible to have the dashboard
working without a volumev2 service_type endpoint created. Having only
volumev3:cinderv3 will result in an Unauthorized error in the Dashboard
(thrown by django), even if I update the OPENSTACK_API_VERSIONS dict in
the dashboard config.

After some digging and playing, it looks like the permissions are not
ready for v3.

For instance, this:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/volumes/panel.py#L24
points to volumev2. If I change that to v3, I manage to get the page
served, but with a bunch of errors.

Another piece of code that makes me think volumev3 is not really
supported:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/cinder.py#L61
Here we import some v2 client, and it seems like only v2 is tagged as
supported and/or preferred (the preferred_version variable).

Having v3 in the dashboard config will only lead to the following error:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/base.py#L96
This is logical because of the if (2 lines above) that prevents the use
of unsupported versions (see previous paragraph).

I believe the fix would be to update the permissions, but I am not very
familiar with the code base, so it may not be that easy.

My workaround for now is to register the volumev2 service even if the
URL points to 8776:/v3/, as [1] mentions "The base 3.0 version is
identical to v2".
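
The version gate the report describes can be mirrored in a small sketch.
This is hypothetical code illustrating the pattern, not Horizon's actual
base.py; the class and method names are assumptions:

```python
# Hypothetical mirror of the described version gate: the API wrapper
# keeps a table of supported versions with one marked preferred, and
# rejects any version the table does not list.
class APIVersionManager:
    def __init__(self, service_type, preferred_version):
        self.service_type = service_type
        self.preferred_version = preferred_version
        self.supported = {}

    def load_supported_version(self, version, client_factory):
        self.supported[version] = client_factory

    def get_client(self, requested_version):
        if requested_version not in self.supported:
            # Analogous to the error path hit when only v2 is registered
            # as supported but the deployment only exposes volumev3.
            raise ValueError(
                f"{self.service_type} API v{requested_version} is not supported")
        return self.supported[requested_version]()

manager = APIVersionManager('volume', preferred_version=2)
manager.load_supported_version(2, lambda: 'cinderclient-v2')
print(manager.get_client(2))  # works
# manager.get_client(3) would raise: only v2 was registered as supported.
```

This is why setting OPENSTACK_API_VERSIONS to v3 alone cannot help: the
request never gets past the supported-versions check.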


Hope this helps to debug / fix the issue,

Regards,

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: cinderv3 pike volumev3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1728761

Title:
  [Pike support] Use volumev3 and cinderv3 in dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi there,

  
  Based on the release notes
  https://docs.openstack.org/releasenotes/nova/pike.html [1],

  "Nova is now configured to use the v3 version of the Cinder API. You
  need to ensure that the v3 version of the Cinder API is available and
  listed in the service catalog in order to use Nova with the default
  configuration option."

  By default Nova catalog_info parameter for cinder is
  volumev3:cinderv3:publicURL [https://docs.openstack.org/ocata/config-
  reference/compute/config-options.html]

  From what I can see it is still not possible to have the dashboard
  working without a volumev2 service_type endpoint created. Having only
  volumev3:cinderv3 will result in an Unauthorized error in the Dashboard
  (thrown by django), even if I update the OPENSTACK_API_VERSIONS dict in
  the dashboard config.

  After some digging and playing, it looks like the permissions are not
  ready for v3.

  For instance, this:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/volumes/panel.py#L24
  points to volumev2. If I change that to v3, I manage to get the page
  served, but with a bunch of errors.

  Another piece of code that makes me think volumev3 is not really
  supported:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/cinder.py#L61
  Here we import some v2 client, and it seems like only v2 is tagged as
  supported and/or preferred (the preferred_version variable).

  Having v3 in the dashboard config will only lead to the following error:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/base.py#L96
  This is logical because of the if (2 lines above) that prevents the use
  of unsupported versions (see previous paragraph).

  I believe the fix would be to update the permissions, but I am not
  very familiar with the code base, so it may not be that easy.

  My workaround for now is to register the volumev2 service even if the
  URL points to 8776:/v3/, as [1] mentions "The base 3.0 version is
  identical to v2".


  Hope this helps to debug / fix the issue,

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1728761/+subscriptions



[Yahoo-eng-team] [Bug 1692128] Re: VPNaaS stuff should load inside of an l3 agent extension mechanism

2017-10-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/488247
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=99d2687b8313532512034e5b7d793fc16f9905a0
Submitter: Zuul
Branch: master

commit 99d2687b8313532512034e5b7d793fc16f9905a0
Author: Cao Xuan Hoang 
Date:   Fri Jul 28 08:10:16 2017 +0700

VPN as a Service (VPNaaS) Agent

This is the iteration of the VPNaaS Agent with some basic
functionality to enable integration of Plugin - Agent - Driver.

Co-Authored-By: Van Hung Pham 
Change-Id: I0b86c432e4b2210e5f2a73a7e3ba16d10467f0f2
Closes-Bug: 1692128


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1692128

Title:
  VPNaaS stuff should load inside of an l3 agent extension mechanism

Status in neutron:
  Fix Released

Bug description:
  It would be better if the VPNaaS agent stuff could be loaded just
  inside of the existing l3 agent rather than requiring operators to run
  a completely different binary with a subclass of the existing L3
  agent. That way operators can just make a config change to
  enable/disable vpnaas.
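
  With the extension mechanism in place, the operator-facing change would
  look roughly like the fragment below. This is an illustrative sketch of
  the l3 agent extension configuration; the exact option values for
  enabling VPNaaS may differ by release:

  ```ini
  # l3_agent.ini -- illustrative; 'extensions' is the l3 agent's
  # extension list, the 'vpnaas' value is an assumption here.
  [agent]
  extensions = vpnaas
  ```

  The point of the bug is exactly this: a config change on the existing
  l3 agent instead of running a separate vpn-agent binary.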

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1692128/+subscriptions



[Yahoo-eng-team] [Bug 1427014] Re: Images with same created timestamp breaks paging

2017-10-30 Thread Gary W. Smith
This panel has been re-written.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1427014

Title:
  Images with same created timestamp breaks paging

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Suppose there are several images which are created at the same
  timestamp; paging back and forth will mess up the order.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1427014/+subscriptions



[Yahoo-eng-team] [Bug 1482792] Re: Simplify the Instances page filter option. Search should work for any string like any other filter in the Openstack

2017-10-30 Thread Gary W. Smith
Based on the patch that was originally submitted, this bug is requesting
that all panels that are not angularized should have their searches
restricted to searching only by name.

** Changed in: horizon
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1482792

Title:
  Simplify the Instances page filter option. Search should work for any
  string like any other filter in the Openstack

Status in OpenStack Dashboard (Horizon):
  Opinion

Bug description:
  Simplify the Instances page filter option. 
  Search should work for any string like any other filter in the Openstack.

  In Project -> Instances, the filter options are not user-friendly
  compared to other pages (Volumes, Networks, etc.).
  E.g., to filter with image ID="" the user has to fetch the image ID
  first, then navigate back to the Instances page.

  It would be easier for the user if the search option were based on a
  string pattern, as in the Volumes and Networks pages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1482792/+subscriptions



[Yahoo-eng-team] [Bug 1480212] Re: Creating a user with keystone v3 gives warning: 'takes at most 1 positional argument'

2017-10-30 Thread Gary W. Smith
Cannot reproduce this with current horizon running keystone v3.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1480212

Title:
  Creating a user with keystone v3 gives warning: 'takes at most 1
  positional argument'

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in python-keystoneclient:
  Invalid

Bug description:
  Log warning: 'create takes at most 1 positional argument (2 given)'
  when I create[1] a user with V3.

  The reason is that we have not accounted for `self`.

  The reason is that we may need to pass related parameters via keyword
  argument.

  [1]: https://github.com/openstack/python-
  keystoneclient/blob/master/keystoneclient/v3/users.py#L53

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480212/+subscriptions



[Yahoo-eng-team] [Bug 1728732] [NEW] OpenStack nova service responds with an erroneous httpd redirect to a "GET, version_controller, show" request.

2017-10-30 Thread C Leavett-Brown
Public bug reported:

Description:

When a client, e.g. the OpenStack dashboard, makes a nova service request
of "http://controller:8774/v2.1" (show controller versions), it receives
a redirect to "http://controller:8774/v2.1/". This is erroneous for at
least the following two reasons:

1. If, for security reasons, you place the nova service behind an SSL
termination proxy, the redirect generated as follows:

  from : https://controller:proxy_port/v2.1

  to : http://controller:proxy_port/v2.1/

is invalid because the proxy_port requires encrypted traffic and the
replacement URL is using the wrong protocol (http). The request fails on
the client side with "Unable to establish connection to
http://controller:proxy_port/v2.1/: ('Connection aborted.',
BadStatusLine("''",))".

2. Even if we are not using a proxy server, the nova service is
effectively complaining about a missing trailing forward slash ("/"),
telling the client to reissue the same request but with the missing
character. This creates unnecessary network traffic (the redirect plus a
second request) and additional server load (two requests instead of
one). It should be noted that "http://controller:8774/v2.1" is the
endpoint specification recommended in the OpenStack nova installation
guides for the ocata and pike releases. This will result in unnecessary
traffic and load on many installations, which will go unnoticed because
the request eventually works.

Solution:

Replace the first ROUTE_LIST entry (and associated comments) in
nova.api.openstack.compute.routes, changing it from:

# NOTE: This is a redirection from '' to '/'. The request to the '/v2.1'
# or '/2.0' without the ending '/' will get a response with status code
# '302' returned.
('', '/'),

to:

# The following lines replace a redirect specification that caused
# additional network traffic and load. See bug #x.
('', {
    'GET': [version_controller, 'show']
}),
I've applied/tested a fix/workaround here: https://github.com/hep-
gc/nova/commit/b9c27bf29f7042cf637b58c87d6a9b2f3a9b78b6

To recreate:
1. Install Openstack (ocata/pike) as per 
https://docs.openstack.org/pike/install/
2. Monitor network traffic (tcpdump) on client.
3. Login to the dashboard, and view compute->project->overview

To see "Unable to establish connection to http://controller:proxy_port/v2.1/: 
('Connection aborted.', BadStatusLine("''",))" error:
4. Install HAProxy.
5. Serve the nova public endpoint via the SSL termination proxy server. Our 
HAProxy configuration for this is as follows:
  frontend nova_public
bind controller_fqdn:18774 ssl crt 
/etc/letsencrypt/live/controller_fqdn/web_crt_key.pem
reqadd X-Forwarded-Proto:\ https
default_backend nova_internal

  backend nova_internal
redirect scheme https code 301 if !{ ssl_fc }
server controller controller:8774 check
6. Redefine the nova public endpoint in the sql database:
  mysql -ukeystone -p
  connect keystone;
  update endpoint set url="https://otter.heprc.uvic.ca:18774/v2.1" where
  id="xxx";
7. Dashboard will display "Unable to retrieve usage data" red flag each time 
the project overview page is displayed, and the http error log will report the 
connection failure.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728732

Title:
  OpenStack nova service responds with an erroneous httpd redirect to a
  "GET,version_controller,show" request.

Status in OpenStack Compute (nova):
  New

Bug description:
  Description:

  When a client, e.g. the OpenStack dashboard, makes a nova service
  request of "http://controller:8774/v2.1" (show controller versions),
  it receives a redirect to "http://controller:8774/v2.1/". This is
  erroneous for at least the following two reasons:

  1. If, for security reasons, you place the nova service behind an SSL
  termination proxy, the redirect generated as follows:

    from : https://controller:proxy_port/v2.1

    to : http://controller:proxy_port/v2.1/

  is invalid because the proxy_port requires encrypted traffic and the
  replacement URL is using the wrong protocol (http). The request fails
  on the client side with "Unable to establish connection to
  http://controller:proxy_port/v2.1/: ('Connection aborted.',
  BadStatusLine("''",))".

  2. Even if we are not using a proxy server, the nova service is
  effectively complaining about a missing trailing forward slash ("/"),
  telling the client to reissue the same request but with the missing
  character. This creates unnecessary network traffic (the redirect plus
  a second request) and additional server load (two requests instead of
  one). It should be noted that "http://controller:8774/v2.1" is the
  endpoint specification recommended in the OpenStack nova installation
  guides for the ocata and pike

[Yahoo-eng-team] [Bug 1720191] Re: test_live_block_migration fails in gate-grenade-dsvm-neutron-multinode-live-migration-nv with "Shared storage live-migration requires either shared storage or boot-f

2017-10-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/508271
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3e3309672f399be1bc3558e260104a2db2595970
Submitter: Zuul
Branch: master

commit 3e3309672f399be1bc3558e260104a2db2595970
Author: Matt Riedemann 
Date:   Thu Sep 28 14:52:56 2017 -0400

Fix live migration grenade ceph setup

Grenade runs in singleconductor mode for queens
as of change:

  If4c82ca12fe7b8b1ca7cfd8181d24dbd8dad3baa

However, the nova configuration during ceph setup
was using NOVA_CPU_CONF which is /etc/nova/nova-cpu.conf,
which is not what we want when configuring nova.conf
for the compute service in singleconductor mode.

Devstack has similar logic for stuff like this, so
we just have to handle it here since we're in a special
snowflake.

The stable/queens systemd stuff is all removed too since
we run with systemd on the old pike side and can restart
services properly with systemd on the new queens side
during grenade live migration CI job runs.

Change-Id: Iccb8eb55a5cc2a3d08e7fd6e31c89b3b5f8d0c70
Closes-Bug: #1720191
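
The conf-file choice the commit describes can be paraphrased as below.
This is an illustrative sketch, not the actual grenade bash; the paths
and the singleconductor value follow the commit message:

```python
# Paraphrased sketch of the fix: in singleconductor mode the compute
# service reads the shared nova.conf, so that is the file the ceph
# setup must edit, not nova-cpu.conf.
def conf_to_edit(cellsv2_setup, nova_cpu_conf='/etc/nova/nova-cpu.conf'):
    if cellsv2_setup == 'singleconductor':
        return '/etc/nova/nova.conf'
    return nova_cpu_conf

print(conf_to_edit('singleconductor'))  # -> /etc/nova/nova.conf
```
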


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1720191

Title:
  test_live_block_migration fails in gate-grenade-dsvm-neutron-
  multinode-live-migration-nv with "Shared storage live-migration
  requires either shared storage or boot-from-volume with no local
  disks." since ~8/18/2017

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The gate-grenade-dsvm-neutron-multinode-live-migration-nv job has been
  failing at about 100% since August 18:

  
http://graphite.openstack.org/render/?from=-2160hours=500=now=800=ff=00=100=0=Failure%20Rate%20in%20Percent=Test%20failure%20rates%20over%20last%202160%20hours%20%2812%20hour%20rolling%20average%29=true&=lineWidth(color(alias(movingAverage(asPercent(transformNull(stats_counts.zuul.pipeline.check.job
  .gate-grenade-dsvm-neutron-multinode-live-migration-
  nv.FAILURE),transformNull(sum(stats_counts.zuul.pipeline.check.job
  .gate-grenade-dsvm-neutron-multinode-live-migration-
  nv.{SUCCESS,FAILURE}))),%2712hours%27),%20%27gate-grenade-dsvm-
  neutron-multinode-live-migration-nv%20%28check%29%27),%27ff%27),1)

  With this failure:

  http://logs.openstack.org/87/463987/20/check/gate-grenade-dsvm-
  neutron-multinode-live-migration-
  
nv/ae8875f/logs/subnode-2/screen-n-cpu.txt.gz?level=TRACE#_Sep_26_14_28_11_637958

  Sep 26 14:28:11.637958 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server [None 
req-b8c949a1-606a-48dc-88df-f848ac421d75 
tempest-LiveMigrationRemoteConsolesV26Test-1985366765 
tempest-LiveMigrationRemoteConsolesV26Test-1985366765] Exception during message 
handling: InvalidSharedStorage: ubuntu-xenial-2-node-rax-ord-11140716-924370 is 
not on shared storage: Shared storage live-migration requires either shared 
storage or boot-from-volume with no local disks.
  Sep 26 14:28:11.638227 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server Traceback (most recent 
call last):
  Sep 26 14:28:11.638476 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
160, in _process_incoming
  Sep 26 14:28:11.638699 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  Sep 26 14:28:11.638980 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
213, in dispatch
  Sep 26 14:28:11.639207 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  Sep 26 14:28:11.639413 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
183, in _do_dispatch
  Sep 26 14:28:11.639617 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server result = func(ctxt, 
**new_args)
  Sep 26 14:28:11.639823 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server   File 
"/opt/stack/old/nova/nova/exception_wrapper.py", line 76, in wrapped
  Sep 26 14:28:11.639981 ubuntu-xenial-2-node-rax-ord-11140716-924370 
nova-compute[32137]: ERROR oslo_messaging.rpc.server function_name, 
call_dict, binary)
  Sep 26 14:28:11.640148 

[Yahoo-eng-team] [Bug 1727180] Re: neutron.tests.tempest.api.admin.test_tag assumes some extensions

2017-10-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/514914
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=907d539df641e684ed29719c863e9665e25a0995
Submitter: Zuul
Branch:master

commit 907d539df641e684ed29719c863e9665e25a0995
Author: YAMAMOTO Takashi 
Date:   Wed Oct 25 14:30:05 2017 +0900

tempest: Sprinkle extension checks

Closes-Bug: #1727180
Change-Id: Ie8fe87bc4b0b36dcf8b7c042149fbe8658e385b1


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727180

Title:
  neutron.tests.tempest.api.admin.test_tag assumes some extensions

Status in neutron:
  Fix Released

Bug description:
  neutron.tests.tempest.api.admin.test_tag assumes some extensions, like trunk.
  It's failing on the networking-midonet gate.
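
  The kind of guard the fix sprinkles into the tests can be sketched as
  below. The names are illustrative, not tempest's exact helpers:

  ```python
  # Hedged sketch: skip a test when the cloud under test does not
  # advertise the extension the test needs.
  class SkipTest(Exception):
      """Raised to mark a test as skipped."""

  def require_extension(enabled_extensions, needed):
      if needed not in enabled_extensions:
          raise SkipTest(f"extension '{needed}' is not enabled")

  # e.g. a deployment (such as a networking-midonet gate) without 'trunk':
  enabled = {'router', 'security-group'}
  try:
      require_extension(enabled, 'trunk')
  except SkipTest as exc:
      print('skipped:', exc)
  ```
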

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1727180/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728722] [NEW] Resize test fails in conductor during migration/instance allocation swap: "Unable to replace resource claim on source host"

2017-10-30 Thread Matt Riedemann
Public bug reported:

Resize tests are intermittently failing in the gate:

http://logs.openstack.org/96/516396/1/check/legacy-tempest-dsvm-
py35/ecb9db4/logs/screen-n-super-
cond.txt.gz?level=TRACE#_Oct_30_18_01_18_003148

Oct 30 18:01:18.003148 ubuntu-xenial-inap-mtl01-586035 nova-
conductor[22452]: ERROR nova.conductor.tasks.migrate [None req-
2818e7b7-6881-4cfb-ae79-1816cb948748 tempest-
ListImageFiltersTestJSON-1403553182 tempest-
ListImageFiltersTestJSON-1403553182] [instance:
f5aec132-8a62-47a5-a967-8e5d18a9c6f8] Unable to replace resource claim
on source host ubuntu-xenial-inap-mtl01-586035 node ubuntu-xenial-
inap-mtl01-586035 for instance

The request in the placement logs starts here:

http://logs.openstack.org/96/516396/1/check/legacy-tempest-dsvm-
py35/ecb9db4/logs/screen-placement-api.txt.gz#_Oct_30_18_01_16_940644

Oct 30 18:01:17.993287 ubuntu-xenial-inap-mtl01-586035 
devstack@placement-api.service[15936]: DEBUG 
nova.api.openstack.placement.wsgi_wrapper [None 
req-7eec8dd2-f65c-43fa-b3df-cdf7a236aa03 service placement] Placement API 
returning an error response: Inventory changed while attempting to allocate: 
Another thread concurrently updated the data. Please retry your update 
{{(pid=15938) call_func 
/opt/stack/new/nova/nova/api/openstack/placement/wsgi_wrapper.py:31}}
Oct 30 18:01:17.994558 ubuntu-xenial-inap-mtl01-586035 
devstack@placement-api.service[15936]: INFO 
nova.api.openstack.placement.requestlog [None 
req-7eec8dd2-f65c-43fa-b3df-cdf7a236aa03 service placement] 198.72.124.85 "PUT 
/placement/allocations/52b215a6-0d60-4fcc-8389-2645ffb22562" status: 409 len: 
305 microversion: 1.8

The error from placement is a bit misleading. It's probably not that
inventory has changed, but that allocations have changed in the meantime
(this is a single-node environment), so capacity changed and the
conductor needs to retry, just like the scheduler does.
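A retry loop of the sort the conductor needs can be sketched as follows; `put_allocations`, the exception type, and the retry count are illustrative stand-ins, not nova's actual placement client API:

```python
import time


class ConflictError(Exception):
    """Stand-in for the HTTP 409 response the placement API returns."""


def swap_allocations(put_allocations, payload, retries=3, delay=0.5):
    # A 409 here usually means another thread updated the allocations
    # concurrently, not that inventory is gone, so retry a few times
    # before giving up -- just as the scheduler does.
    for attempt in range(retries):
        try:
            return put_allocations(payload)
        except ConflictError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```

With a callable that fails twice and then succeeds, the third attempt goes through.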

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: conductor placement resize

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728722

Title:
  Resize test fails in conductor during migration/instance allocation
  swap: "Unable to replace resource claim on source host"

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Resize tests are intermittently failing in the gate:

  http://logs.openstack.org/96/516396/1/check/legacy-tempest-dsvm-
  py35/ecb9db4/logs/screen-n-super-
  cond.txt.gz?level=TRACE#_Oct_30_18_01_18_003148

  Oct 30 18:01:18.003148 ubuntu-xenial-inap-mtl01-586035 nova-
  conductor[22452]: ERROR nova.conductor.tasks.migrate [None req-
  2818e7b7-6881-4cfb-ae79-1816cb948748 tempest-
  ListImageFiltersTestJSON-1403553182 tempest-
  ListImageFiltersTestJSON-1403553182] [instance:
  f5aec132-8a62-47a5-a967-8e5d18a9c6f8] Unable to replace resource claim
  on source host ubuntu-xenial-inap-mtl01-586035 node ubuntu-xenial-
  inap-mtl01-586035 for instance

  The request in the placement logs starts here:

  http://logs.openstack.org/96/516396/1/check/legacy-tempest-dsvm-
  py35/ecb9db4/logs/screen-placement-api.txt.gz#_Oct_30_18_01_16_940644

  Oct 30 18:01:17.993287 ubuntu-xenial-inap-mtl01-586035 
devstack@placement-api.service[15936]: DEBUG 
nova.api.openstack.placement.wsgi_wrapper [None 
req-7eec8dd2-f65c-43fa-b3df-cdf7a236aa03 service placement] Placement API 
returning an error response: Inventory changed while attempting to allocate: 
Another thread concurrently updated the data. Please retry your update 
{{(pid=15938) call_func 
/opt/stack/new/nova/nova/api/openstack/placement/wsgi_wrapper.py:31}}
  Oct 30 18:01:17.994558 ubuntu-xenial-inap-mtl01-586035 
devstack@placement-api.service[15936]: INFO 
nova.api.openstack.placement.requestlog [None 
req-7eec8dd2-f65c-43fa-b3df-cdf7a236aa03 service placement] 198.72.124.85 "PUT 
/placement/allocations/52b215a6-0d60-4fcc-8389-2645ffb22562" status: 409 len: 
305 microversion: 1.8

  The error from placement is a bit misleading. It's probably not that
  inventory has changed, but that allocations have changed in the meantime
  (this is a single-node environment), so capacity changed and the
  conductor needs to retry, just like the scheduler does.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1728722/+subscriptions



[Yahoo-eng-team] [Bug 1605098] Re: Nova usage not showing server real uptime

2017-10-30 Thread Matthew Edmonds
** Project changed: nova-powervm => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1605098

Title:
  Nova usage not showing server real uptime

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi All,

  I am trying to calculate OpenStack server "uptime", but nova usage
  only gives the server creation time, which can't be used for billing.
  Is there any way to do this?
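For what it's worth, real uptime can be approximated from the launched_at/terminated_at timestamps nova stores per instance rather than from created_at; this is an illustrative sketch, not an existing nova API:

```python
from datetime import datetime, timezone


def billable_seconds(launched_at, terminated_at=None, now=None):
    # Bill from launch, not creation: created_at also counts the time
    # spent building the instance before it ever ran.
    if launched_at is None:
        return 0.0  # never launched, nothing to bill
    end = terminated_at or now or datetime.now(timezone.utc)
    return max((end - launched_at).total_seconds(), 0.0)
```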

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1605098/+subscriptions



[Yahoo-eng-team] [Bug 1605098] [NEW] Nova usage not showing server real uptime

2017-10-30 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Hi All,

I am trying to calculate OpenStack server "uptime", but nova usage only
gives the server creation time, which can't be used for billing. Is
there any way to do this?

** Affects: nova
 Importance: Undecided
 Assignee: maestropandy (maestropandy)
 Status: New

-- 
Nova usage not showing server real uptime
https://bugs.launchpad.net/bugs/1605098
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1724177] Re: Sqlalchemy column in_() operator does not allow NULL element

2017-10-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/512908
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d104ec67c90a9b0a1eae6f40644424b9df4b4156
Submitter: Zuul
Branch:master

commit d104ec67c90a9b0a1eae6f40644424b9df4b4156
Author: Lujin 
Date:   Wed Oct 18 09:56:31 2017 +0900

Add NULL check before passing to in_() column operator

In some cases we may need to pass key=None filters to get_object()
and get_objects() methods, however we lack of NULL check before these
filters reach in_(), which will not return any matching queries in db
layer.

We need to do manual equals matches if NULL element exists in filters,
instead of pass them to in_() operator.

Change-Id: I7980b82e2627b7b097cae0a714d22e680cddd340
Closes-Bug: #1724177


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1724177

Title:
  Sqlalchemy column in_() operator does not allow NULL  element

Status in neutron:
  Fix Released

Bug description:
  I met this issue when I was integrating Floating IP OVO objects. There
  would be a case that we want to pass router_id=None and
  fixed_port_id=None into get_objects() method [1], which eventually
  leads to this method [2].

  In my case, when the key is "router_id" and the value is "[None]", the
  in_() clause in line 205 will not return any matching queries, because
  in_() does not support a None element.

  We need to add a check in [2] for when None is contained in the value.

  [1] https://review.openstack.org/#/c/396351/34..35/neutron/db/l3_db.py@1429
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/db/_model_query.py#L176
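The underlying SQL semantics are easy to demonstrate with the stdlib sqlite3 module: `IN` never matches NULL rows, so an explicit `IS NULL` has to be ORed in when None is among the filter values. The schema and helper here are illustrative, not neutron's actual SQLAlchemy layer:

```python
import sqlite3


def null_safe_in(column, values):
    """Build a WHERE fragment that also matches NULL when None is listed."""
    non_null = [v for v in values if v is not None]
    placeholders = ', '.join('?' for _ in non_null)
    clause = f"{column} IN ({placeholders})" if non_null else "0"
    if None in values:
        clause = f"({clause} OR {column} IS NULL)"
    return clause, non_null


conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE ports (id INTEGER, router_id TEXT)")
conn.executemany("INSERT INTO ports VALUES (?, ?)",
                 [(1, None), (2, 'r1'), (3, 'r2')])
clause, params = null_safe_in('router_id', [None, 'r1'])
rows = conn.execute(f"SELECT id FROM ports WHERE {clause}", params).fetchall()
# A plain IN (?) filter would return only port 2; the explicit
# IS NULL test brings back port 1 as well.
```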

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1724177/+subscriptions



[Yahoo-eng-team] [Bug 1727855] Re: conductor rebuild_instance does not properly handle image_ref if request_spec is not provided

2017-10-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/515530
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d2690d6b038e200efed05bf7773898a0a8bb01d7
Submitter: Zuul
Branch:master

commit d2690d6b038e200efed05bf7773898a0a8bb01d7
Author: Matt Riedemann 
Date:   Thu Oct 26 17:33:35 2017 -0400

Pass the correct image to build_request_spec in conductor.rebuild_instance

If we're calling build_request_spec in conductor.rebuild_instance,
it's because we are evacuating and the instance is so old it does
not have a request spec. We need the request_spec to pass to the
scheduler to pick a destination host for the evacuation.

For evacuate, nova-api does not pass any image reference parameters,
and even if it did, those are image IDs, not an image meta dict that
build_request_spec expects, so this code has just always been wrong.

This change fixes the problem by passing a primitive version of
the instance.image_meta which build_request_spec will then return
back to conductor and that gets used to build a RequestSpec object
from primitives.

It's important to use the correct image meta so that the scheduler
can properly filter hosts using things like the
AggregateImagePropertiesIsolation and ImagePropertiesFilter filters.

Change-Id: I0c8ce65016287de7be921c312493667a8c7f762e
Closes-Bug: #1727855


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1727855

Title:
  conductor rebuild_instance does not properly handle image_ref if
  request_spec is not provided

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  Maybe this doesn't actually matter for rebuild, but the image_ref used
  in this code:

  
https://github.com/openstack/nova/blob/d36dcd52c24c32418fd358d245688c86664025d5/nova/conductor/manager.py#L830

  Is a string image id, it's not a dict or ImageMeta object, it comes
  through the rebuild action API from the user.

  It's important, however, for how scheduler_utils.build_request_spec
  uses it here:

  
https://github.com/openstack/nova/blob/d36dcd52c24c32418fd358d245688c86664025d5/nova/scheduler/utils.py#L79

  Because I was trying to figure out if the image parameter to
  build_request_spec is an ImageMeta object, dict or string - since the
  code appears to assume it's a dict if not provided.

  Conductor will then call RequestSpec.from_primitives and the image is
  used here:

  
https://github.com/openstack/nova/blob/d36dcd52c24c32418fd358d245688c86664025d5/nova/objects/request_spec.py#L250

  And eventually ignored here since it's an unexpected type:

  
https://github.com/openstack/nova/blob/d36dcd52c24c32418fd358d245688c86664025d5/nova/objects/request_spec.py#L135

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1727855/+subscriptions



[Yahoo-eng-team] [Bug 1727941] Re: ML2 plug-in in neutron - Link to OpenStack Admin Guide does not make sense anymore

2017-10-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/515992
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=45609a196fd575aa95c7afeb37a47b4bca810800
Submitter: Zuul
Branch:master

commit 45609a196fd575aa95c7afeb37a47b4bca810800
Author: David Rabel 
Date:   Sat Oct 28 08:49:42 2017 +0200

Correct link in config-ml2.rst

Change-Id: If52835c2fcdb2391fff986dc1fbcc04da0815ff6
Closes-Bug: #1727941


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1727941

Title:
  ML2 plug-in in neutron - Link to OpenStack Admin Guide does not make
  sense anymore

Status in neutron:
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way:

  There is a link to the OpenStack Admin Guide for more information on
  provider networks and project networks. This is an old link (with a
  wrong anchor), which will eventually lead to the OpenStack Networking
  page that is linked in the same line.

  To make a long story short: I would like to delete that link, since it
  is useless.

  It is in paragraph "Project network types". The link to
  https://docs.openstack.org/admin-guide/networking-adv-features.html
  #provider-networks

  Yours
David

  ---
  Release: 11.0.2.dev40 on 2017-10-25 21:29
  SHA: 2cafbe9e06909d8e9710ff57da6b288f56322901
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/admin/config-ml2.rst
  URL: https://docs.openstack.org/neutron/pike/admin/config-ml2.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1727941/+subscriptions



[Yahoo-eng-team] [Bug 1728690] [NEW] member_role_id/name conf options reference v2

2017-10-30 Thread Matthew Edmonds
Public bug reported:

The keystone v2 API has been removed, yet we still define the
member_role_id and member_role_name conf options that say they are for
v2. It appears that they may be used in some v3 code. That should either
be modified so that these can be removed, or the help and docs for these
options should be updated to explain their usage with v3.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1728690

Title:
  member_role_id/name conf options reference v2

Status in OpenStack Identity (keystone):
  New

Bug description:
  The keystone v2 API has been removed, yet we still define the
  member_role_id and member_role_name conf options that say they are for
  v2. It appears that they may be used in some v3 code. That should
  either be modified so that these can be removed, or the help and docs
  for these options should be updated to explain their usage with v3.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1728690/+subscriptions



[Yahoo-eng-team] [Bug 1728689] [NEW] Misleading log message when location list is empty

2017-10-30 Thread Brian Rosmaita
Public bug reported:

When (a) show_multiple_locations=True or show_image_direct_url=True and
(b) an image has an empty location list, and (c) any call that returns
an image record is made, a message is logged saying: "There is not
available location for image ".  This message is misleading
because it sounds like there's a problem with the backing store, whereas
all it means is that the locations list for that image is empty.  It's a
minor problem, but it's come up in the glance channel recently [1,2], so
it is causing some operator unpleasantness.

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-10-06.log.html#t2017-10-06T13:19:04
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-10-11.log.html#t2017-10-11T18:59:31
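A less misleading message only needs to distinguish "locations list is empty" from "store unavailable". A minimal sketch of the distinction (not glance's actual code):

```python
def location_message(image_id, locations):
    # Say explicitly that the locations list is empty, so operators
    # are not sent off chasing a backing-store problem.
    if not locations:
        return f"Image {image_id} has no locations set (empty list)"
    return f"Image {image_id} has {len(locations)} location(s)"
```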

** Affects: glance
 Importance: Low
 Assignee: Brian Rosmaita (brian-rosmaita)
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1728689

Title:
  Misleading log message when location list is empty

Status in Glance:
  Triaged

Bug description:
  When (a) show_multiple_locations=True or show_image_direct_url=True
  and (b) an image has an empty location list, and (c) any call that
  returns an image record is made, a message is logged saying: "There is
  not available location for image ".  This message is
  misleading because it sounds like there's a problem with the backing
  store, whereas all it means is that the locations list for that image
  is empty.  It's a minor problem, but it's come up in the glance
  channel recently [1,2], so it is causing some operator unpleasantness.

  [1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-10-06.log.html#t2017-10-06T13:19:04
  [2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2017-10-11.log.html#t2017-10-11T18:59:31

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1728689/+subscriptions



[Yahoo-eng-team] [Bug 1720077] Re: Quality of Service (QoS) , document is critical error

2017-10-30 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/512538
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2f9c9013e550f8451bd9ba611e333519a417ca80
Submitter: Zuul
Branch:master

commit 2f9c9013e550f8451bd9ba611e333519a417ca80
Author: Rodolfo Alonso Hernandez 
Date:   Tue Oct 17 09:22:00 2017 +0100

Change QoS configuration manual

As [1] shows, the controller node hosts the Neutron server but
also agents like L3 and DHCP which require also OVS or LinuxBridge
agent to be running on it.
To enable QoS is required to enable the 'service_plugins' and
the 'extension_drivers', along with the agent section in the plugin
config if the agent is running on this host.

In the network node and the compute node only the agent
'extensions' configuration is needed to enable QoS
on the agent.

[1] https://docs.openstack.org/security-guide/networking/architecture.html

Closes-Bug: #1720077

Change-Id: I14128aabe0a9209c31a1bd4c76eed1182364ccdf
Co-Authored-By: Slawek Kaplonski 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1720077

Title:
  Quality of Service (QoS) , document is critical error

Status in neutron:
  Fix Released

Bug description:
  The configuration shown for the network node and compute node is wrong.

  -
  The controller node must be configured as follows:

  @controller node : /etc/neutron/neutron.conf
  service_plugins = router,qos

  @controller node : /etc/neutron/plugin.ini
  [ml2]
  extension_drivers = port_security,qos

  -
  then configure the network node:
  /etc/neutron/plugins/ml2/openvswitch_agent.ini
  [agent]
  extensions = qos

  -
  lastly, configure the compute node:

  /etc/neutron/plugins/ml2/openvswitch_agent.ini
  [agent]
  extensions = qos

  -

  Otherwise, you will always get errors such as the following:

  Using http://controller:9696/v2.0 as public network endpoint
  REQ: curl -g -i -X POST http://controller:9696/v2.0/qos/policies -H 
"User-Agent: openstacksdk/0.9.17 keystoneauth1/3.1.0 python-requests/2.18.4 
CPython/2.7.13" -H "Content-Type: application/json" -H "X-Auth-Token: 
{SHA1}b43a8743512064d5a7fa64c6eb255bdfa4720570" -d '{"policy": {"name": 
"bw-limiter"}}'
  http://controller:9696 "POST /v2.0/qos/policies HTTP/1.1" 404 103
  RESP: [404] Content-Length: 103 Content-Type: application/json 
X-Openstack-Request-Id: req-c3aac80f-9a97-4db7-a8f2-4e6a1ff907b3 Date: Thu, 21 
Sep 2017 06:57:45 GMT Connection: keep-alive
  RESP BODY: {"NeutronError": {"message": "The resource could not be found.", 
"type": "HTTPNotFound", "detail": ""}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1720077/+subscriptions



[Yahoo-eng-team] [Bug 1727358] Re: cloud-init is slow to complete init on minimized images

2017-10-30 Thread David Britton
Marking as wont-fix for cloud-init for now as that would be a workaround
for the base problem that is not desired right now.  Adding linux-kvm to
evaluate why this difference exists between linux-generic and linux-kvm

** Changed in: cloud-init
   Status: Incomplete => Won't Fix

** Also affects: linux-kvm (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1727358

Title:
  cloud-init is slow to complete init on minimized images

Status in cloud-init:
  Won't Fix
Status in cloud-init package in Ubuntu:
  Won't Fix
Status in linux-kvm package in Ubuntu:
  New
Status in python3.6 package in Ubuntu:
  Triaged

Bug description:
  http://paste.ubuntu.com/25816789/ for the full logs.

  cloud-init is very slow to complete its initialization steps.
  Specifically, the 'init' takes over 150 seconds.

  Cloud-init v. 17.1 running 'init-local' at Wed, 25 Oct 2017 13:22:07 +. 
Up 2.39 seconds.
  2017-10-25 13:22:07,157 - util.py[WARNING]: did not find either path 
/sys/class/dmi/id or dmidecode command
  Cloud-init v. 17.1 running 'init' at Wed, 25 Oct 2017 13:22:16 +. Up 
11.37 seconds.
  ci-info: Net device 
info+
  ci-info: 
++---+-+---+---+---+
  ci-info: | Device |   Up  | Address |  Mask | Scope | 
Hw-Address|
  ci-info: 
++---+-+---+---+---+
  ci-info: | ens3:  |  True | 192.168.100.161 | 255.255.255.0 |   .   | 
52:54:00:bb:ad:fb |
  ci-info: | ens3:  |  True |.|   .   |   d   | 
52:54:00:bb:ad:fb |
  ci-info: |  lo:   |  True |127.0.0.1|   255.0.0.0   |   .   | 
. |
  ci-info: |  lo:   |  True |.|   .   |   d   | 
. |
  ci-info: | sit0:  | False |.|   .   |   .   | 
. |
  ci-info: 
++---+-+---+---+---+
  ci-info: Route IPv4 
info
  ci-info: 
+---+---+---+-+---+---+
  ci-info: | Route |  Destination  |Gateway| Genmask | 
Interface | Flags |
  ci-info: 
+---+---+---+-+---+---+
  ci-info: |   0   |0.0.0.0| 192.168.100.1 | 0.0.0.0 |ens3  
 |   UG  |
  ci-info: |   1   | 192.168.100.0 |0.0.0.0|  255.255.255.0  |ens3  
 |   U   |
  ci-info: |   2   | 192.168.100.1 |0.0.0.0| 255.255.255.255 |ens3  
 |   UH  |
  ci-info: 
+---+---+---+-+---+---+
  2017-10-25 13:24:38,187 - util.py[WARNING]: Failed to resize filesystem 
(cmd=('resize2fs', '/dev/root'))
  2017-10-25 13:24:38,193 - util.py[WARNING]: Running module resizefs () failed
  Generating public/private rsa key pair.
  Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
  Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
  The key fingerprint is:
  SHA256:LKNlCqqOgPB8KBKGfPhFO5Rs6fDMnAvVet/W9i4vLxY root@cloudimg
  The key's randomart image is:
  +---[RSA 2048]+
  | |
  |. +  |
  |   . O . |
  |o . % +. |
  |++.o %=.S|
  |+=ooo=+o. . .E   |
  |* +.+.   . o o.  |
  |=. .  . .=.  |
  |+.  . B= |
  +[SHA256]-+
  Generating public/private dsa key pair.
  Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
  Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
  The key fingerprint is:
  SHA256:dNWNyBHqTUCl820/vL0dEhOVDFYJzqr1WeuqV1PAmjk root@cloudimg
  The key's randomart image is:
  +---[DSA 1024]+
  | .oo=X==o|
  |   =* *+.|
  |. = .B . |
  |   . o =E.. .|
  |S .oo+o..|
  |  o ..*+.|
  | .   +.=o|
  | .o *|
  |   .o..++|
  +[SHA256]-+
  Generating public/private ecdsa key pair.
  Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key.
  Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub.
  The key fingerprint is:
  SHA256:N3RTlPa7KU5ryq6kJAO8Tiq90ub4P1DGSofn6jFkM3k root@cloudimg
  The key's randomart image is:
  +---[ECDSA 256]---+
  | .o. |
  | .o  |
  |   o  . o. . |
  |  +.*. . .  .|
  | .*XE   S o .|
  | oo++. .   . |
  | oo= o . .   .  o|
  |o.Oo. + o . .o.o |
  |oB=+.. . .o++o.  |
  +[SHA256]-+
  Generating public/private ed25519 key pair.
  Your identification has been saved in /etc/ssh/ssh_host_ed25519_key.
  Your public key has been saved in 

[Yahoo-eng-team] [Bug 1728665] [NEW] Removing gateway ip for tenant network (DVR) causes traceback in neutron-openvswitch-agent

2017-10-30 Thread James Denton
Public bug reported:

Version: OpenStack Newton (OSA v14.2.11)
neutron-openvswitch-agent version 9.4.2.dev21

Issue:

Users complained that instances were unable to procure their IP via
DHCP. On the controllers, numerous ports were found in BUILD state.
Tracebacks similar to the following could be observed in the neutron-
openvswitch-agent logs across the (3) controllers.

2017-10-26 16:24:28.458 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 
e9c11103-9d10-4b27-b739-e428773d8fac updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'e57257d9-f915-4c60-ac30-76b0e2d36378', u'segmentation_id': 2123, 
u'device_owner': u'network:dhcp', u'physical_network': u'physnet1', 
u'mac_address': u'fa:16:3e:af:aa:f5', u'device': 
u'e9c11103-9d10-4b27-b739-e428773d8fac', u'port_security_enabled': False, 
u'port_id': u'e9c11103-9d10-4b27-b739-e428773d8fac', u'fixed_ips': 
[{u'subnet_id': u'b7196c99-0df6-4b0e-bbfa-e62da96dac86', u'ip_address': 
u'10.1.1.32'}], u'network_type': u'vlan'}
2017-10-26 16:24:28.458 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 48 as local vlan 
for net-id=e57257d9-f915-4c60-ac30-76b0e2d36378
2017-10-26 16:24:28.462 4403 INFO neutron.agent.l2.extensions.qos 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no 
information about the port e9c11103-9d10-4b27-b739-e428773d8fac that we were 
trying to reset
2017-10-26 16:24:28.462 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 
610c3924-5e94-4f95-b19b-75e43c5729ff updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'f09a8be9-a7c7-4f90-8cb3-d08b61095c25', u'segmentation_id': 5, 
u'device_owner': u'network:router_gateway', u'physical_network': u'physnet1', 
u'mac_address': u'fa:16:3e:bf:39:43', u'device': 
u'610c3924-5e94-4f95-b19b-75e43c5729ff', u'port_security_enabled': False, 
u'port_id': u'610c3924-5e94-4f95-b19b-75e43c5729ff', u'fixed_ips': 
[{u'subnet_id': u'3ce21ed4-bb6a-4e67-b222-a055df40af08', u'ip_address': 
u'96.116.48.132'}], u'network_type': u'vlan'}
2017-10-26 16:24:28.463 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 43 as local vlan 
for net-id=f09a8be9-a7c7-4f90-8cb3-d08b61095c25
2017-10-26 16:24:28.466 4403 INFO neutron.agent.l2.extensions.qos 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no 
information about the port 610c3924-5e94-4f95-b19b-75e43c5729ff that we were 
trying to reset
2017-10-26 16:24:28.467 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 
66db7e2d-bd92-48ea-85fa-5e20dfc5311c updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'fd67eae2-9db7-4f7c-a622-39be67090cb4', u'segmentation_id': 2170, 
u'device_owner': u'network:dhcp', u'physical_network': u'physnet1', 
u'mac_address': u'fa:16:3e:c9:24:8a', u'device': 
u'66db7e2d-bd92-48ea-85fa-5e20dfc5311c', u'port_security_enabled': False, 
u'port_id': u'66db7e2d-bd92-48ea-85fa-5e20dfc5311c', u'fixed_ips': 
[{u'subnet_id': u'47366a54-22ca-47a2-b7a0-987257fa83ea', u'ip_address': 
u'192.168.189.3'}], u'network_type': u'vlan'}
2017-10-26 16:24:28.467 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Assigning 54 as local vlan 
for net-id=fd67eae2-9db7-4f7c-a622-39be67090cb4
2017-10-26 16:24:28.470 4403 INFO neutron.agent.l2.extensions.qos 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] QoS extension did have no 
information about the port 66db7e2d-bd92-48ea-85fa-5e20dfc5311c that we were 
trying to reset
{...snip...}
2017-10-26 16:24:28.501 4403 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-00e34b5f-346a-4c33-a71b-822fde6e6f46 - - - - -] Port 
c53c48d4-77a8-4185-bc87-ff999bdfd4a1 updated. Details: {u'profile': {}, 
u'network_qos_policy_id': None, u'qos_policy_id': None, 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'06390e9c-6aa4-427a-91dc-5cf2c62be143', u'segmentation_id': 2003, 
u'device_owner': u'network:router_interface_distributed', u'physical_network': 
u'physnet1', u'mac_address': u'fa:16:3e:38:8b:f0', u'device': 
u'c53c48d4-77a8-4185-bc87-ff999bdfd4a1', u'port_security_enabled': False, 
u'port_id': u'c53c48d4-77a8-4185-bc87-ff999bdfd4a1', u'fixed_ips': 
[{u'subnet_id': 

[Yahoo-eng-team] [Bug 1726518] Re: Image not deleted after upload when exceeding image_size_cap

2017-10-30 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/516401

** Changed in: glance
   Status: Opinion => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1726518

Title:
  Image not deleted after upload when exceeding image_size_cap

Status in Glance:
  In Progress

Bug description:
  When specifying a maximum size cap in glance-api.conf using
  'image_size_cap' and then trying to upload an image that exceeds that
  cap, a warning is returned that the image is too large and it is not
  actually uploaded into the backend. However, the entry for the image
  stays in the glance database as 'queued' until it is manually deleted.

  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | bare |
  | created_at   | 2017-10-23T15:33:54Z |
  | disk_format  | qcow2|
  | id   | 83ffbe5a-667b-42ff-a742-fc88bf3132e3 |
  | locations| []   |
  | min_disk | 0|
  | min_ram  | 0|
  | name | cap_test |
  | owner| db03a56c279b4c9d83bc897a2221725a |
  | protected| False|
  | size | None |
  | status   | queued   |
  | tags | []   |
  | updated_at   | 2017-10-23T15:33:54Z |
  | virtual_size | None |
  | visibility   | private  |
  +--+--+
  413 Request Entity Too Large: Image exceeds the storage quota: The size of 
the data None will exceed the limit. None bytes remaining. (HTTP 413)

  
=

  +--+--+
  | Field| Value|
  +--+--+
  | checksum | None |
  | container_format | bare |
  | created_at   | 2017-10-23T15:33:54Z |
  | disk_format  | qcow2|
  | file | /v2/images/83ffbe5a-667b-42ff-a742-fc88bf3132e3/file |
  | id   | 83ffbe5a-667b-42ff-a742-fc88bf3132e3 |
  | min_disk | 0|
  | min_ram  | 0|
  | name | cap_test |
  | owner| db03a56c279b4c9d83bc897a2221725a |
  | properties   | locations='[]'   |
  | protected| False|
  | schema   | /v2/schemas/image|
  | size | None |
  | status   | queued   |
  | tags |  |
  | updated_at   | 2017-10-23T15:34:10Z |
  | virtual_size | None |
  | visibility   | private  |
  +--+--+

  This behaviour is undesirable. When a user attempts to upload an image
  that exceeds the size cap the entry should not be added to the glance
  image database, and should not appear in 'glance image-list'.
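
  The expected behaviour can be sketched outside Glance: on a size-cap
  failure the queued record should be rolled back. The registry class
  and function names below are hypothetical, for illustration only:

```python
class ImageTooLarge(Exception):
    """Raised when an upload exceeds image_size_cap."""


class FakeRegistry:
    """Stand-in for the image database (hypothetical, not Glance's API)."""
    def __init__(self):
        self.images = {}

    def create(self, image_id):
        self.images[image_id] = 'queued'

    def delete(self, image_id):
        self.images.pop(image_id, None)


def upload(registry, image_id, size, size_cap):
    """Sketch of the desired flow: a failed upload must not leave a
    'queued' record behind in the database."""
    registry.create(image_id)
    try:
        if size > size_cap:
            raise ImageTooLarge()
        registry.images[image_id] = 'active'
    except ImageTooLarge:
        # Desired fix: roll back the queued record so it never
        # lingers in `glance image-list`.
        registry.delete(image_id)
        raise
```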

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1726518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728152] Re: IPv4 and IPv6 Dual Stack Does Not work when instance is not assigned public IPv4 address

2017-10-30 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Summary changed:

- IPv4 and IPv6 Dual Stack Does Not work when instance is not assigned public 
IPv4 address
+ EC2 IPv4 and IPv6 Dual Stack Does Not work when instance is not assigned 
public IPv4 address

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Zesty)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Zesty)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Artful)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Bionic)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => High

** Changed in: cloud-init (Ubuntu Zesty)
   Importance: Undecided => High

** Changed in: cloud-init (Ubuntu Artful)
   Importance: Undecided => High

** Changed in: cloud-init (Ubuntu Bionic)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1728152

Title:
  EC2 IPv4 and IPv6 Dual Stack Does Not work when instance is not
  assigned public IPv4 address

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed
Status in cloud-init source package in Artful:
  Confirmed
Status in cloud-init source package in Bionic:
  Confirmed

Bug description:
  With the following cloud-init configuration:

  system_info:
    network:
      renderers: ['netplan', 'eni', 'sysconfig']

  network:
    version: 2
    ethernets:
      id0:
        match:
          name: e*
        dhcp4: true
        dhcp6: true

  with version  17.1-18-gd4f70470-0ubuntu1 on ami-36a8754c, it writes out the 
following network configuration:
  # This file is generated from information provided by
  # the datasource.  Changes to it will not persist across an instance.
  # To disable cloud-init's network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
      version: 2
      ethernets:
          ens3:
              dhcp6: true
              match:
                  macaddress: 02:14:13:66:8a:66
              set-name: ens3

  
  

  This instance is in a (default) VPC with a private IPv4 address and no
  public IPv4 addresses.
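
  For contrast, a dual-stack rendering of the kind the reporter
  expected would also enable DHCPv4 on the matched interface; a sketch
  under the assumption the same NIC is matched:

```yaml
network:
    version: 2
    ethernets:
        ens3:
            dhcp4: true   # missing from the rendered config above
            dhcp6: true
            match:
                macaddress: 02:14:13:66:8a:66
            set-name: ens3
```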

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1728152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1705084] Re: [RFE] Allow automatic sub-port configuration on the per-trunk basis in the trunk API

2017-10-30 Thread Miguel Lavalle
This RFE was reviewed in the latest drivers meeting. Team consensus was
that this functionality doesn't belong in Neutron:
http://eavesdrop.openstack.org/meetings/neutron_drivers/2017/neutron_drivers.2017-10-27-14.00.log.html#l-94

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705084

Title:
  [RFE] Allow automatic sub-port configuration on the per-trunk basis in
  the trunk API

Status in neutron:
  Won't Fix

Bug description:
  The nova blueprint
  https://review.openstack.org/#/c/471815/7/specs/queens/approved/expose-vlan-trunking.rst
  adds support for automatic sub-interface configuration in the guest
  instance by exposing trunk details in the config drive. However, some
  deployments require such auto configuration to be disabled. This RFE
  would allow a user to specify the intent to enable/disable
  sub-interface auto configuration in the trunk create API. It would
  also allow the deployer to specify the default setting for
  enabling/disabling sub-interface auto configuration, which could be
  overridden by the setting given in the trunk API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1705084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728642] [NEW] corrupted namespace blasted ovs bridge with thousands of dangling port

2017-10-30 Thread yong sheng gong
Public bug reported:

When the DHCP namespace is somehow corrupted, the OVS bridge gets
blasted with thousands of dangling ports created by the DHCP agent.

The corrupted namespace causes the following exception:

2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
[req-db1e4f25-2263-49e9-ba5b-308ea9ccfdec - - - - -] Unable to plug DHCP port 
for network 0c59667a-433a-4e97-9568-07ee6210c98b. Releasing port.
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp Traceback (most recent 
call last):
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 1407, in setup
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp self.plug(network, 
port, interface_name)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 1375, in plug
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
mtu=network.get('mtu'))
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/interface.py",
 line 268, in plug
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp bridge, namespace, 
prefix, mtu)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/interface.py",
 line 389, in plug_new
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
namespace_obj.add_device_to_namespace(ns_dev)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 232, in add_device_to_namespace
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
device.link.set_netns(self.namespace)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 516, in set_netns
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp self._as_root([], 
('set', self.name, 'netns', namespace))
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 364, in _as_root
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
use_root_namespace=use_root_namespace)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 100, in _as_root
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
log_fail_as_error=self.log_fail_as_error)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
 line 109, in _execute
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
log_fail_as_error=log_fail_as_error)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py", 
line 156, in execute
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp raise 
ProcessExecutionError(msg, returncode=returncode)
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp ProcessExecutionError: 
Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Invalid argument
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp
2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp
2017-10-30 14:12:35.479 6 ERROR neutron.agent.linux.utils 
[req-29d446ad-eed5-47a0-bfc7-496dad2d35f2 - - - - -] Exit code: 2; Stdin: ; 
Stdout: ; Stderr: RTNETLINK answers: Invalid argument
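
Until a fix lands, dangling ports can be located by checking whether
each port's backing device still exists on the host. A hedged sketch:
the two commands used (`ovs-vsctl list-ports` and `ip link show`) are
standard, the bridge name `br-int` is only an example, and results
should be verified before deleting anything:

```python
import subprocess


def find_dangling_ports(bridge, run=subprocess.run):
    """Return OVS ports on `bridge` whose backing network device no
    longer exists on the host. `run` is injectable for testing."""
    out = run(['ovs-vsctl', 'list-ports', bridge],
              capture_output=True, text=True, check=True)
    dangling = []
    for port in out.stdout.split():
        # `ip link show <dev>` exits non-zero when the device is gone.
        res = run(['ip', 'link', 'show', port],
                  capture_output=True, text=True)
        if res.returncode != 0:
            dangling.append(port)
    return dangling
```

The returned names could then be removed one by one with
`ovs-vsctl del-port <bridge> <port>` after manual review.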

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1728642

Title:
  corrupted namespace blasted ovs bridge with thousands of dangling port

Status in neutron:
  In Progress

Bug description:
  When the DHCP namespace is somehow corrupted, the OVS bridge gets
  blasted with thousands of dangling ports created by the DHCP agent.

  The corrupted namespace causes the following exception:

  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
[req-db1e4f25-2263-49e9-ba5b-308ea9ccfdec - - - - -] Unable to plug DHCP port 
for network 0c59667a-433a-4e97-9568-07ee6210c98b. Releasing port.
  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp Traceback (most 
recent call last):
  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/dhcp.py", 
line 1407, in setup
  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp 
self.plug(network, port, interface_name)
  2017-10-30 14:12:35.347 6 ERROR neutron.agent.linux.dhcp   File 

[Yahoo-eng-team] [Bug 1535918] Re: instance.host not updated on evacuation

2017-10-30 Thread James Page
nova (2:13.1.4-0ubuntu4.1~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 nova (2:13.1.4-0ubuntu4.1) xenial; urgency=medium
 .
   * d/nova.conf: Add connection strings to default config for sqlite. This
 enables daemons to start by default and fixes failing autopkgtests.
   * d/tests/nova-daemons: Update test to be resilient to timing failures.


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1535918

Title:
  instance.host not updated on evacuation

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released
Status in nova source package in Zesty:
  Fix Released
Status in nova source package in Artful:
  Fix Released

Bug description:
  [Impact]

  I created several VM instances and checked that they were all in
  ACTIVE state after creation.
  Right after checking them, I shut down nova-compute on their host (to
  test this case).
  Then I tried to evacuate them to the other host, but the evacuation
  failed and left them in ERROR state.
  I did some testing and analysis.
  I found that the two commits below are related. (Please refer to the
  [Others] section.)
  In this context, migration_context is a DB field used to pass
  information during migration or evacuation.

  For [1]: this gets host info from the migration_context; if the
  migration_context is abnormal or empty, the migration fails. With
  only this patch applied the migration_context is empty, so [2] is
  also needed. I changed the self.client.prepare part in rpcapi.py
  from the original patch, which was replaced in a newer version;
  because that change is tied to newer functionality, I kept mitaka's
  function call for this issue.

  For [2]: this moves the recreation check code into the former if
  condition, and it calls rebuild_claim to create the migration_context
  in the recreate case, not only the scheduled one. I adjusted test
  code that came up during the backport process and seemed to be
  needed; anyone wanting to backport or cherry-pick related code will
  find it already exists.
  Applying only one of the two patches did not fix this issue, as
  testing showed.

  [Test case]

  In below env,

  http://pastebin.ubuntu.com/25337153/

  Network configuration is important in this case; I tested a different
  configuration and could not reproduce the issue.
  Reproduction test script (based on juju):

  http://pastebin.ubuntu.com/25360805/

  [Regression Potential]

  Existing ACTIVE instances and newly created instances are not
  affected by this code, because these commits are only invoked during
  migration or evacuation. If a host has both ACTIVE instances and
  instances left in ERROR state by this issue, upgrading to this fix
  will not affect the existing instances. After upgrading and
  evacuating a problematic instance again, its ERROR state should
  return to ACTIVE. I tested this scenario in a simple environment,
  but the possibility of problems in complex, crowded environments
  still needs to be considered.

  [Others]

  For testing, I had to apply two patches; one comes from
  https://bugs.launchpad.net/nova/+bug/1686041

  Related Patches.
  [1] 
https://github.com/openstack/nova/commit/a5b920a197c70d2ae08a1e1335d979857f923b4f
  [2] 
https://github.com/openstack/nova/commit/0f2d87416eff1e96c0fbf0f4b08bf6b6b22246d5
 ( backported to newton from below original)
  - 
https://github.com/openstack/nova/commit/a2b0824aca5cb4a2ae579f625327c51ed0414d35
 (
  original)

  [Original description]

  I'm working on the nova-powervm driver for Mitaka and trying to add
  support for evacuation.

  The problem I'm hitting is that instance.host is not updated when the
  compute driver is called to spawn the instance on the destination
  host.  It is still set to the source host.  It's not until after the
  spawn completes that the compute manager updates instance.host to
  reflect the destination host.

  The nova-powervm driver uses instance events callback mechanism during
  plug VIF to determine when Neutron has finished provisioning the
  network.  The instance events code sends the event to instance.host
  and hence is sending the event to the source host (which is down).
  This causes the spawn to fail and also causes weirdness when the
  source host gets the events when it's powered back up.

  To temporarily work around the problem, I hacked in setting
  instance.host = CONF.host; instance.save() in the compute driver but
  that's not a good solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1535918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : 

[Yahoo-eng-team] [Bug 1728603] Re: Resize a boot-from-volume instance with NFS destroys instance

2017-10-30 Thread Matt Riedemann
The referenced change was backported to ocata so marking this as
affecting pike and ocata:

https://review.openstack.org/#/c/441037/

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728603

Title:
  Resize a boot-from-volume instance with NFS destroys instance

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  Turns out that the fix for
  https://bugs.launchpad.net/nova/+bug/1666831 accidentally broke boot-
  from-volume setups that use NFS. In particular, this line:

  
https://github.com/openstack/nova/blob/stable/ocata/nova/virt/libvirt/driver.py#L1149

  if os.path.exists(inst_base) and not root_disk.exists():
      try:
          shutil.rmtree(inst_base)
      except OSError as e:
          if e.errno != errno.ENOENT:
              raise

  This causes the instance base directory, which includes the
  instance's libvirt.xml file, to be deleted.

  The above needs to be changed to this in order to prevent BFV
  instances from being destroyed on resize...

   if os.path.exists(inst_base) and not root_disk.exists() and not
  compute_utils.is_volume_backed_instance(instance._context, instance):

  This bug was reported and the fix confirmed by Joris S'heeran

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1728603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1712851] Re: cloudinit can`t mount configdrive partition

2017-10-30 Thread Scott Moser
*** This bug is a duplicate of bug 1707222 ***
https://bugs.launchpad.net/bugs/1707222

Hi,
I'm pretty sure this is a duplicate of bug 1707222.

A fix for that issue should land in 16.04 and 17.04 shortly via the SRU
bug 1721808.

I'm going to mark this a duplicate of 1707222.  If you find out
otherwise, please explain and re-set the bug to "New".

Thanks!


** This bug has been marked a duplicate of bug 1707222
   usage of /tmp during boot is not safe due to systemd-tmpfiles-clean

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1712851

Title:
  cloudinit can`t mount configdrive partition

Status in cloud-init:
  New
Status in ironic-python-agent:
  Incomplete

Bug description:
  The image for Ironic was built this way:
  DIB_CLOUD_INIT_DATASOURCES="Ec2, ConfigDrive, OpenStack" disk-image-create -o 
baremetal-$DISTRO_NAME-$DIB_RELEASE $DISTRO_NAME baremetal bootloader -p 
linux-image-generic-lts-xenial

  config-drive partition is created by default ironic coreos images:
  root@ubuntu:~# lsblk  -f
  NAME   FSTYPE  LABEL   MOUNTPOINT
  sdaext4cloudimg-rootfs
  └─sda1 iso9660 config-2

  
  cloudinit.log:
  2017-08-24 18:48:50,130 - util.py[DEBUG]: Running command ['mount', '-o', 
'ro,sync', '/dev/sda1', '/tmp/tmpN_ixJ1'] with allowed return codes [0] 
(shell=False, capture=True)
  2017-08-24 18:48:50,203 - util.py[DEBUG]: Recursively deleting /tmp/tmpN_ixJ1
  2017-08-24 18:48:50,203 - cloud-init[DEBUG]: No local datasource found

  
  I tried the command by hand:
  root@ubuntu:~# mount -o ro,sync /dev/sda1 /mnt
  mount: /dev/sda1 already mounted or /mnt busy

  I modified the command by hand:
  root@ubuntu:~# mount -o loop,ro,sync /dev/sda1 /mnt ### mount successful
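
The manual workaround can be wrapped in a small fallback helper; a
sketch with an injectable command runner (the device and mount point
are examples from this report):

```python
import subprocess


def mount_configdrive(device, mountpoint, run=subprocess.run):
    """Try a plain read-only mount first; if the kernel rejects it,
    retry through a loop device, which is what worked in this report.
    Returns the mount options that succeeded."""
    for opts in ('ro,sync', 'loop,ro,sync'):
        res = run(['mount', '-o', opts, device, mountpoint])
        if res.returncode == 0:
            return opts
    raise RuntimeError('could not mount %s' % device)
```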

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1712851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728603] [NEW] Resize a boot-from-volume instance with NFS destroys instance

2017-10-30 Thread Jay Pipes
Public bug reported:

Turns out that the fix for https://bugs.launchpad.net/nova/+bug/1666831
accidentally broke boot-from-volume setups that use NFS. In particular,
this line:

https://github.com/openstack/nova/blob/stable/ocata/nova/virt/libvirt/driver.py#L1149

if os.path.exists(inst_base) and not root_disk.exists():
    try:
        shutil.rmtree(inst_base)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise

This causes the instance base directory, which includes the instance's
libvirt.xml file, to be deleted.

The above needs to be changed to this in order to prevent BFV instances
from being destroyed on resize...

 if os.path.exists(inst_base) and not root_disk.exists() and not
compute_utils.is_volume_backed_instance(instance._context, instance):

This bug was reported and the fix confirmed by Joris S'heeran
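
The effect of the proposed guard can be checked in isolation; a
minimal sketch with stand-in boolean arguments in place of nova's
`root_disk.exists()` and `compute_utils.is_volume_backed_instance()`
calls:

```python
import errno
import os
import shutil


def cleanup_inst_base(inst_base, root_disk_exists, is_volume_backed):
    """Sketch of the proposed fix: only remove the instance base
    directory when the root disk is gone AND the instance is not
    boot-from-volume (whose base dir still holds metadata such as
    the libvirt XML)."""
    if (os.path.exists(inst_base) and not root_disk_exists
            and not is_volume_backed):
        try:
            shutil.rmtree(inst_base)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise
```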

** Affects: nova
 Importance: High
 Status: Confirmed

** Affects: nova/ocata
 Importance: Undecided
 Status: New

** Affects: nova/pike
 Importance: Undecided
 Status: New


** Tags: boot-from-volume resize

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728603

Title:
  Resize a boot-from-volume instance with NFS destroys instance

Status in OpenStack Compute (nova):
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  Turns out that the fix for
  https://bugs.launchpad.net/nova/+bug/1666831 accidentally broke boot-
  from-volume setups that use NFS. In particular, this line:

  
https://github.com/openstack/nova/blob/stable/ocata/nova/virt/libvirt/driver.py#L1149

  if os.path.exists(inst_base) and not root_disk.exists():
      try:
          shutil.rmtree(inst_base)
      except OSError as e:
          if e.errno != errno.ENOENT:
              raise

  This causes the instance base directory, which includes the
  instance's libvirt.xml file, to be deleted.

  The above needs to be changed to this in order to prevent BFV
  instances from being destroyed on resize...

   if os.path.exists(inst_base) and not root_disk.exists() and not
  compute_utils.is_volume_backed_instance(instance._context, instance):

  This bug was reported and the fix confirmed by Joris S'heeran

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1728603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1617362] Re: Nova 'ComputeManager._destroy_evacuated_instances' method doesn't send notifications

2017-10-30 Thread Balazs Gibizer
Marking this as invalid based on #2 and based on the fact that nobody
answered the questions raised in #2.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1617362

Title:
  Nova 'ComputeManager._destroy_evacuated_instances' method doesn't send
  notifications

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The compute manager has a way to notify about instance usage:
notify_about_instance_usage.
  Currently _destroy_evacuated_instances does not send any notification
when the evacuated instances are destroyed. A notification should be
sent when this method completes and any instance is actually destroyed.

  https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L627

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1617362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728563] [NEW] Live snapshot cannot lock .delta (regression with apparmor)

2017-10-30 Thread Ondrej Vasko
Public bug reported:

When live snapshot is enabled in Nova via `[workarounds]
disable_libvirt_livesnapshot = False`, taking a snapshot via Horizon
or the CLI fails with the error `unable to execute QEMU command
'drive-mirror': Failed to lock byte 100`.

Non live snapshot is working fine both in Horizon and CLI.

I traced the cause of this error with the help of the old bug report
`https://bugs.launchpad.net/nova/+bug/1244694`, whose root cause was a
wrong AppArmor config. That report was resolved, but with live
snapshot I now have an issue where libvirt tries to lock a file in
`/var/lib/nova/instances/snapshots//.delta`, which fails
according to the AppArmor log below.

I hotfixed this issue by updating
`/etc/apparmor.d/libvirt/libvirt-${UUID}` for the specific instance's
UUID: I appended the line `/var/lib/nova/instances/snapshots/** k,`,
and live snapshot now works for that instance.

But this is only a hotfix: either the libvirt AppArmor template must
be updated to allow access to the snapshots subdirectories, or a
temporary rule must be created before a live snapshot is taken and
deleted after it is done.
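
The per-instance hotfix described in this report can be scripted
roughly as follows; a sketch that takes the rule and the profile path
from the report, inserts the rule inside the profile's closing brace,
and reloads it with `apparmor_parser -r`. The helper name and the
injectable runner are assumptions for illustration:

```python
import subprocess

# Lock rule reported to fix the 'Failed to lock byte' error.
RULE = '/var/lib/nova/instances/snapshots/** k,'


def allow_snapshot_lock(profile_path, run=subprocess.run):
    """Idempotently add RULE inside a per-instance libvirt AppArmor
    profile (before its closing brace) and reload the profile.
    profile_path is e.g. /etc/apparmor.d/libvirt/libvirt-<UUID>."""
    with open(profile_path) as f:
        text = f.read()
    if RULE not in text:
        # Insert the rule just before the profile's final '}'.
        head, sep, tail = text.rpartition('}')
        text = head + '  %s\n' % RULE + sep + tail
        with open(profile_path, 'w') as f:
            f.write(text)
    run(['apparmor_parser', '-r', profile_path], check=True)
```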

Nova log:

ERROR oslo_messaging.rpc.server [req-ae5933d6-a603-48ab-8d53-a4ebdc57ebdc 
82fb7a159550424098f2addf3c30461a 971a410f32a6446c95f73819bf4eaebc - default 
default] Exception during message handling: libvirtError:
internal error: unable to execute QEMU command 'drive-mirror': Failed to lock 
byte 100
ERROR oslo_messaging.rpc.server Traceback (most recent call last):
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 160, in _process_incoming
ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 213, in dispatch
ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, 
ctxt, args)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 183, in _do_dispatch
ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/nova/exception_wrapper.py",
 line 76, in wrapped
ERROR oslo_messaging.rpc.server function_name, call_dict, binary)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
ERROR oslo_messaging.rpc.server self.force_reraise()
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/nova/exception_wrapper.py",
 line 67, in wrapped
ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/nova/compute/manager.py",
 line 190, in decorated_function
ERROR oslo_messaging.rpc.server "Error: %s", e, instance=instance)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
ERROR oslo_messaging.rpc.server self.force_reraise()
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/nova/compute/manager.py",
 line 160, in decorated_function
ERROR oslo_messaging.rpc.server return function(self, context, *args, 
**kwargs)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/nova/compute/manager.py",
 line 218, in decorated_function
ERROR oslo_messaging.rpc.server kwargs['instance'], e, sys.exc_info())
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
ERROR oslo_messaging.rpc.server self.force_reraise()
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb)
ERROR oslo_messaging.rpc.server   File 
"/openstack/venvs/nova-16.0.1/lib/python2.7/site-packages/nova/compute/manager.py",
 line 206, in decorated_function
ERROR oslo_messaging.rpc.server return function(self, context, *args, 
**kwargs)
ERROR oslo_messaging.rpc.server   File 

[Yahoo-eng-team] [Bug 1593672] Re: Nova - Novncproxy - Console opens to an incorrect VM

2017-10-30 Thread Francesco Pantano
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1593672

Title:
  Nova - Novncproxy - Console opens to an incorrect VM

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  After deploying several virtual machine instances (some of them using
heat), some instances' consoles point to a different instance.
  We tried to shut off all the instances and start them again, but the
issue still exists.

  Steps to reproduce
  ==
  At the moment I don't have reproduction details; since we are using
the instances, I cannot delete all of them and re-create them...

  But as we remember the following happened:
  - Deployed 33 Win2k3 instances with "Launch instance and Count 33"
  - Deployed 33 Win2k8 instances with "Launch instance and count 33"
  - Deployed 33 Win2k12 instances with heat template -> We noticed the problem
  - Stack deleted with all the instances
  - Deployed 33 Win2k12 instances with "Launch instance and count 33" -> The 
problem is still there 

  root@openstackcompute2:~# dpkg -l | grep nova
  ii  nova-common  2:13.0.0-0ubuntu2~cloud0 
 all  OpenStack Compute - common files
  ii  nova-compute 2:13.0.0-0ubuntu2~cloud0 
 all  OpenStack Compute - compute node base
  ii  nova-compute-kvm 2:13.0.0-0ubuntu2~cloud0 
 all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt 2:13.0.0-0ubuntu2~cloud0 
 all  OpenStack Compute - compute node libvirt support
  ii  python-nova  2:13.0.0-0ubuntu2~cloud0 
 all  OpenStack Compute Python libraries
  ii  python-novaclient2:3.3.1-2~cloud0 
 all  client library for OpenStack Compute API - Python 2.7

  Steps:
  - Open the OpenStack dashboard
  - Click the instance name
  - Change to the Console tab
  - Open the console
  or
  - nova get-vnc-console Windows_2k12R2_perf-26

  Expected result
  ===
  We get a console to another instance (instead of the selected one)

  Environment
  ===
  Control node:
  root@openstack1:/etc/nova# dpkg -l | grep nova
  ii  nova-api           2:13.0.0-0ubuntu2  all  OpenStack Compute - API frontend
  ii  nova-cells         2:13.0.0-0ubuntu2  all  Openstack Compute - cells
  ii  nova-cert          2:13.0.0-0ubuntu2  all  OpenStack Compute - certificate management
  ii  nova-common        2:13.0.0-0ubuntu2  all  OpenStack Compute - common files
  ii  nova-conductor     2:13.0.0-0ubuntu2  all  OpenStack Compute - conductor service
  ii  nova-consoleauth   2:13.0.0-0ubuntu2  all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy    2:13.0.0-0ubuntu2  all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler     2:13.0.0-0ubuntu2  all  OpenStack Compute - virtual machine scheduler
  ii  python-nova        2:13.0.0-0ubuntu2  all  OpenStack Compute Python libraries
  ii  python-novaclient  2:3.3.1-2          all  client library for OpenStack Compute API - Python 2.7

  Compute node1:
  root@openstackcompute:~# dpkg -l | grep nova
  ii  nova-common   2:13.0.0-0ubuntu2~cloud0  all  OpenStack Compute - common files
  ii  nova-compute  2:13.0.0-0ubuntu2~cloud0  all

[Yahoo-eng-team] [Bug 1723429] Re: Mitaka Series Release Notes in Neutron Release Notes missing notes for versions 8.3 & 8.4

2017-10-30 Thread Yiorgos Stamoulis
setting to 'invalid' as issue no longer exists

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1723429

Title:
  Mitaka Series Release Notes in Neutron Release Notes missing notes for
  versions 8.3 & 8.4

Status in neutron:
  Invalid

Bug description:

  This bug tracker is for errors with the documentation; use the following
  as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: changes for versions 8.0.0, 8.1.0, 8.2.0 & 8.2.0-42 are shown, but changes for versions 8.3.0 & 8.4.0 are not, although they were shown a couple of days ago!
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below, including example input and output.

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 11.0.0.0rc2.dev354 on 2017-10-13 00:36
  SHA: 9a7c5a1ff667a0649c81b41ef56cc1fd8d1e947b
  Source: https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/mitaka.rst
  URL: https://docs.openstack.org/releasenotes/neutron/mitaka.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1723429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728557] [NEW] placement-api-ref: wrong parameter order

2017-10-30 Thread Takashi NATSUME
Public bug reported:

Nova API reference guideline(*1) says:

--
3. Order the fields as follows:
   1. header
   2. path
   3. query
   4. body
      1. top level object (i.e. server)
      2. required fields
      3. optional fields
      4. fields added in microversions (by the microversion they were added)
--

Optional fields should come after required fields,
but the request bodies of the following APIs do not follow the guideline.
These are placement API references rather than Nova API ones, but they should be fixed for consistency.

- PUT /resource_providers/{uuid}/inventories
- PUT /resource_providers/{uuid}/inventories/{resource_class}

*1: https://wiki.openstack.org/wiki/NovaAPIRef

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: In Progress


** Tags: doc docs placement

** Tags added: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1728557

Title:
  placement-api-ref: wrong parameter order

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Nova API reference guideline(*1) says:

  --
  3. Order the fields as follows:
     1. header
     2. path
     3. query
     4. body
        1. top level object (i.e. server)
        2. required fields
        3. optional fields
        4. fields added in microversions (by the microversion they were added)
  --

  Optional fields should come after required fields,
  but the request bodies of the following APIs do not follow the guideline.
  These are placement API references rather than Nova API ones, but they should be fixed for consistency.

  - PUT /resource_providers/{uuid}/inventories
  - PUT /resource_providers/{uuid}/inventories/{resource_class}

  *1: https://wiki.openstack.org/wiki/NovaAPIRef
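
  The required-before-optional rule can be illustrated with a small sketch
  (the parameter names below are illustrative, not copied from the actual
  api-ref files): a stable sort on the required flag produces exactly the
  ordering the guideline asks for, without disturbing the relative order
  within each group.

  ```python
  # Each body parameter is (name, required). "total" is required; the
  # others are optional, so they should be listed after it.
  params = [
      ("reserved", False),   # optional
      ("total", True),       # required
      ("min_unit", False),   # optional
  ]

  # Python's sort is stable: required entries (key False -> 0) move to the
  # front, and ties keep their original relative order.
  ordered = sorted(params, key=lambda p: not p[1])

  assert [name for name, _ in ordered] == ["total", "reserved", "min_unit"]
  ```
  
  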

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1728557/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1723476] Re: py27 job failing on testtools.matchers._impl.MismatchError

2017-10-30 Thread Balazs Gibizer
This bug is not visible on master or on any stable branch. The problem
reported here is the result of the changes made in the referenced patch,
so I'm marking this invalid.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1723476

Title:
   py27 job failing on testtools.matchers._impl.MismatchError

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Related patches:
  https://review.openstack.org/#/c/511835/
  https://review.openstack.org/#/c/511767/

  When I removed some unused methods in nova/objects/compute_node.py, I hit
  this error:
   Failed 2 tests - output below:
  2017-10-13 13:07:54.110687 | ==
  2017-10-13 13:07:54.110704 | 
  2017-10-13 13:07:54.110758 | nova.tests.unit.notifications.objects.test_notification.TestNotificationObjectVersions.test_versions
  2017-10-13 13:07:54.110820 | 
  2017-10-13 13:07:54.110838 | 
  2017-10-13 13:07:54.110850 | Captured traceback:
  2017-10-13 13:07:54.110861 | ~~~
  2017-10-13 13:07:54.110879 | Traceback (most recent call last):
  2017-10-13 13:07:54.110911 |   File "nova/tests/unit/notifications/objects/test_notification.py", line 420, in test_versions
  2017-10-13 13:07:54.110933 | 'Some notification objects have changed; please make '
  2017-10-13 13:07:54.110992 |   File "/home/jenkins/workspace/gate-nova-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 411, in assertEqual
  2017-10-13 13:07:54.111018 | self.assertThat(observed, matcher, message)
  2017-10-13 13:07:54.111062 |   File "/home/jenkins/workspace/gate-nova-python27-ubuntu-xenial/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py", line 498, in assertThat
  2017-10-13 13:07:54.111075 | raise mismatch_error
  2017-10-13 13:07:54.111093 | testtools.matchers._impl.MismatchError: !=:
  2017-10-13 13:07:54.17 | reference = {'ComputeNodeList': '1.17-52f3b0962b1c86b98590144463ebb192'}
  2017-10-13 13:07:54.41 | actual= {'ComputeNodeList': '1.17-badecf5a910a5e0df8fffb270f30c7da'}
  2017-10-13 13:07:54.78 | : Some notification objects have changed; please make sure the versions have been bumped, and then update their hashes here.

  
  py27 error:
  http://logs.openstack.org/35/511835/1/check/gate-nova-python27-ubuntu-xenial/477e1b9/console.html
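
  The failing test works by comparing a stored hash against a fingerprint
  derived from each object's schema. A much-simplified sketch of the idea
  (the real oslo.versionedobjects checker also folds in field types,
  remotable methods, and child object versions; the field names below are
  illustrative):

  ```python
  import hashlib

  def fingerprint(version, field_names):
      # Hash the sorted field names so any schema change alters the digest.
      digest = hashlib.md5(",".join(sorted(field_names)).encode("utf-8")).hexdigest()
      return "%s-%s" % (version, digest)

  # Removing a field changes the fingerprint without changing the version
  # string, which is exactly the mismatch this test guards against.
  before = fingerprint("1.17", ["id", "host", "free_disk_gb"])
  after = fingerprint("1.17", ["id", "host"])
  assert before != after
  assert before.startswith("1.17-") and after.startswith("1.17-")
  ```

  This is why the fix is either to restore the fields or to bump the object
  version and update the stored hash in the test.
  
  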

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1723476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1706563] Re: TestRPC.test_cleanup_notifier_null fails with timeout

2017-10-30 Thread Balazs Gibizer
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1706563

Title:
  TestRPC.test_cleanup_notifier_null fails with timeout

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen at least twice on two unrelated patches.
  http://logs.openstack.org/43/446243/44/gate/gate-nova-python35/33e584a/console.html#_2017-07-20_05_07_52_157927
  http://logs.openstack.org/77/453077/16/check/gate-nova-python35/b42eecb/console.html#_2017-07-25_16_02_33_600471

  Signature:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22nova.tests.unit.test_rpc.TestRPC.test_cleanup_notifier_null%20%5B%5D%20...%20inprogress%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1706563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728533] [NEW] Serial console should be resizable

2017-10-30 Thread Shu Muto
Public bug reported:

We cannot set the size (number of rows and columns) of the serial console.
The serial console should be resizable according to the space available to display it.

** Affects: horizon
 Importance: Undecided
 Assignee: Shu Muto (shu-mutou)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Shu Muto (shu-mutou)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1728533

Title:
  Serial console should be resizable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We cannot set the size (number of rows and columns) of the serial console.
  The serial console should be resizable according to the space available to display it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1728533/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728358] Re: Radware LBaaS v2 driver - support easy duplication

2017-10-30 Thread Akihiro Motoki
neutron-lbaas is now maintained by the octavia launchpad. Let's forward
this.

** Project changed: neutron => octavia

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1728358

Title:
  Radware LBaaS v2 driver - support easy duplication

Status in octavia:
  New

Bug description:
  This is actually an enhancement of the driver.

  Since no flavoring mechanism exists in neutron-lbaas and there is a need to
  use a driver with different configurations for different LBs, the Radware
  LBaaS v2 driver should support easy driver duplication.

  The aim is to give the user the ability to inherit from the original driver
  with a minimal amount of code, so that the new driver can be used with its
  own configuration, different from that of the original driver.
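
  A minimal sketch of what "easy duplication" could look like, assuming the
  duplicate is a subclass that only overrides its configuration section name
  (the class names, section names, and option key below are hypothetical,
  not the actual Radware driver code):

  ```python
  class OriginalDriver:
      # The section of the service configuration this driver reads from.
      config_section = "radware"

      def __init__(self, conf):
          # Each driver instance picks up settings from its own section.
          self.settings = conf[self.config_section]

  class DuplicatedDriver(OriginalDriver):
      # Minimal code: the subclass only points at a different section,
      # inheriting all behavior from the original driver.
      config_section = "radware_alt"

  conf = {
      "radware": {"vdirect_address": "10.0.0.1"},
      "radware_alt": {"vdirect_address": "10.0.0.2"},
  }
  assert DuplicatedDriver(conf).settings["vdirect_address"] == "10.0.0.2"
  ```

  With this shape, two LBs can be served by "different" drivers that differ
  only in configuration, which is the workaround for the missing flavoring
  mechanism.
  
  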

To manage notifications about this bug go to:
https://bugs.launchpad.net/octavia/+bug/1728358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp